
Graduate Texts in Physics

Alexandre Zagoskin

Quantum Theory
of Many-Body
Systems
Techniques and Applications
Second Edition
Graduate Texts in Physics

For further volumes:


http://www.springer.com/series/8431
Graduate Texts in Physics
Graduate Texts in Physics publishes core learning/teaching material for graduate- and
advanced-level undergraduate courses on topics of current and emerging fields within
physics, both pure and applied. These textbooks serve students at the MS- or PhD-level and
their instructors as comprehensive sources of principles, definitions, derivations, experi-
ments and applications (as relevant) for their mastery and teaching, respectively. Interna-
tional in scope and relevance, the textbooks correspond to course syllabi sufficiently to serve
as required reading. Their didactic style, comprehensiveness and coverage of fundamental
material also make them suitable as introductions or references for scientists entering, or
requiring timely knowledge of, a research field.

Series Editors

Professor Richard Needs


Cavendish Laboratory
JJ Thomson Avenue
Cambridge CB3 0HE
UK
rn11@cam.ac.uk

Professor William T. Rhodes


Department of Computer and Electrical Engineering and Computer Science
Imaging Science and Technology Center
Florida Atlantic University
777 Glades Road SE, Room 456
Boca Raton, FL 33431
USA
wrhodes@fau.edu

Professor Susan Scott


Department of Quantum Science
Australian National University
Canberra ACT 0200, Australia
susan.scott@anu.edu.au

Professor H. Eugene Stanley


Center for Polymer Studies Department of Physics
Boston University
590 Commonwealth Avenue, Room 204B
Boston, MA 02215
USA
hes@bu.edu

Professor Martin Stutzmann


Technische Universität München
Am Coulombwall
Garching 85747, Germany
stutz@wsi.tu-muenchen.de
Alexandre Zagoskin

Quantum Theory
of Many-Body Systems
Techniques and Applications

Second Edition

Alexandre Zagoskin
Department of Physics
Loughborough University
Leicestershire
UK

ISSN 1868-4513 ISSN 1868-4521 (electronic)


ISBN 978-3-319-07048-3 ISBN 978-3-319-07049-0 (eBook)
DOI 10.1007/978-3-319-07049-0
Springer Cham Heidelberg New York Dordrecht London

Library of Congress Control Number: 2014940325

© Springer International Publishing Switzerland 2014


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or
information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed. Exempted from this legal reservation are brief
excerpts in connection with reviews or scholarly analysis or material supplied specifically for the
purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the
work. Duplication of this publication or parts thereof is permitted only under the provisions of
the Copyright Law of the Publisher’s location, in its current version, and permission for use must
always be obtained from Springer. Permissions for use may be obtained through RightsLink at the
Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for
any errors or omissions that may be made. The publisher makes no warranty, express or implied, with
respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


To my parents
Preface to the Second Edition

Over the last 15 years, there have been considerable advances in condensed
matter physics: graphene, pnictide superconductors, and topological insulators,
to name just a few. The understanding, and to a large degree the very discovery,
of these new phenomena required the use of advanced theoretical tools.
Knowledge of the basic methods of quantum many-body theory thus becomes
more important than ever for every student in the field.
Some of the most challenging current problems stem from the spectacular
progress in quantum engineering and quantum computing, more specifically, in
developing solid-state based—mostly superconducting—quantum bits and qubit
arrays. During this short period, we went from the first experimental demon-
stration of coherent quantum tunnelling in single qubits (which are, after all, quite
macroscopic objects) to precise manipulation of the quantum states of several qubits,
their quantum entanglement over macroscopic distances and, recently, signatures
of quantum coherent behaviour in devices comprising hundreds of qubits. The
difficulty is that it is impossible to directly simulate such large, partially coherent,
essentially nonequilibrium quantum systems, due to the sheer volume of compu-
tation—which was the motivation behind quantum computing in the first place. It
would seem that one needs a quantum computer in order to make a quantum
computer! The hope is that appropriate generalizations of the methods of non-
equilibrium many-body theory would provide good enough approximations and
keep the research going until the time when (and if) the task can be handed to
quantum computers themselves.
Given the above considerations, I did not feel the need to change the scope or
the approach of the book. I have, though, added a new chapter, in order to
introduce bosonization and elements of conformal field theory. These are beautiful
and powerful ideas, especially useful when dealing with low-dimensional systems
with interactions, and belong to the essential condensed matter theory toolkit. I
have also corrected some typos—hopefully introducing fewer new ones in the
process.
In addition to those of my teachers and colleagues, whom I had the opportunity
to thank in the preface to the first edition, I would like to express my gratitude to
Profs. A. N. Omelyanchouk, F. V. Kusmartsev, Jeff Young, and Franco Nori, and
to all my colleagues at the University of British Columbia, D-Wave Systems Inc.,
RIKEN, and Loughborough University, with whom I had the pleasure and honour


to collaborate during this time. My special thanks to Dr. Uki Kabasawa, who
translated the first edition of this book into Japanese, and whose questions and
helpful remarks contributed to improving the book you hold.

Loughborough, UK Alexandre Zagoskin


Preface to the First Edition

This book grew out of lectures that I gave in the framework of a graduate course in
quantum theory of many-body systems at the Applied Physics Department of
Chalmers University of Technology and Göteborg University (Göteborg, Sweden)
in 1992–1995. Its purpose is to give a compact and self-contained account of basic
ideas and techniques of the theory from the ‘‘condensed matter’’ point of view. The
book is addressed to graduate students with knowledge of standard quantum
mechanics and statistical physics. (Hopefully, physicists working in other fields
may also find it useful.)
The approach is—quite traditionally—based on a quasiparticle description of
many-body systems and its mathematical apparatus—the method of Green’s
functions. In particular, I tried to bring together all the main versions of diagram
techniques for normal and superconducting systems, in and out of equilibrium (i.e.,
zero-temperature, Matsubara, Keldysh, and Nambu–Gor’kov formalisms) and
present them in just enough detail to enable the reader to follow the original papers
or more comprehensive monographs, or to apply the techniques to his own
problems. Many examples are drawn from mesoscopic physics—a rapidly
developing chapter of condensed matter theory and experiment, which deals with
macroscopic systems small enough to preserve quantum coherence throughout
their volume; this seems to me a natural ground to discuss quantum theory of
many-body systems.
The plan of the book is as follows.
In Chapter 1, after a semi-qualitative discussion of the quasiparticle concept,
Green’s function is introduced in the case of one-body quantum theory, using
Feynman path integrals. Then its relation to the S-operator is established, and the
general perturbation theory is developed based on operator formalism. Finally, the
second quantization method is introduced.
Chapter 2 contains the usual zero-temperature formalism, beginning with the
definition, properties, and physical meaning of Green’s function in the many-body
system, and then building up the diagram technique of the perturbation theory.
In Chapter 3, I present equilibrium Green’s functions at finite temperature, and
then the Matsubara formalism. Their applications are discussed in relation to linear
response theory. Then Keldysh technique is introduced as a means to handle
essentially nonequilibrium situations, illustrated by an example of quantum


conductivity of a point contact. This gives me an opportunity to discuss both
Landauer and tunneling Hamiltonian approaches to transport in mesoscopic
systems.
Finally, Chapter 4 is devoted to applications of the theory to the supercon-
ductors. Here the Nambu–Gor’kov technique is used to describe superconducting
phase transition, elementary excitations, and current-carrying state of a super-
conductor. Special attention is paid to the Andreev reflection and to transport in
mesoscopic superconductor–normal metal–superconductor (SNS) Josephson
junctions.
Each chapter is followed by a set of problems. Their solution will help the
reader to obtain a better feeling for how the formalism works.
I did not intend to provide a complete bibliography, which would be far beyond
the scope of this book. The original papers are cited when the results they contain
are either recent or not widely known in the context, and in a few cases where
interesting results would require too lengthy a derivation to be presented in full
detail (those sections are marked by a star*). For references on more traditional
material, I have referred the reader to existing monographs or reviews.
For a course in quantum many-body theory based on this book, I would suggest
the following tentative schedule¹:
Lecture 1 (Sect. 1.1); Lecture 2 (Sect. 1.2.1); Lecture 3 (Sect. 1.2.2, 1.2.3);
Lecture 4 (Sect. 1.3); Lecture 5 (Sect. 1.4); Lecture 6 (Sect. 2.1.1); Lecture 7 (Sect.
2.1.2); Lecture 8 (Sect. 2.1.3, 2.1.4); Lecture 9 (Sect. 2.2.1, 2.2.2); Lecture 10
(Sect. 2.2.3); Lectures 11–12 (Sect. 2.2.4); Lecture 13 (Sect. 3.1); Lecture 14
(Sect. 3.2); Lecture 15 (Sect. 3.3); Lecture 16 (Sect. 3.4); Lecture 17 (Sect. 3.5);
Lecture 18 (Sect. 3.6); Lecture 19 (Sect. 3.7); Lecture 20 (Sect. 4.1); Lecture 21
(Sect. 4.2); Lecture 22 (Sect. 4.3.1, 4.3.2); Lecture 23 (Sect. 4.3.3, 4.3.4); Lecture
24 (Sect. 4.4.1, 4.4.2); Lectures 25–26 (Sect. 4.4.3–4.4.5); Lecture 27 (Sect. 4.5.1);
Lecture 28 (Sect. 4.5.2–4.5.4); Lecture 29 (Sect. 4.6).

¹ Based on a “two hours” (90 min) lecture length.

Acknowledgments

I am deeply grateful to Professor R. Shekhter, collaboration with whom in
preparing and giving the course on the quantum theory of many-body systems
significantly influenced this book.
I wish to express my sincere thanks to the Institute for Low Temperature
Physics and Engineering (Kharkov, Ukraine) and Professor I. O. Kulik, who first
taught me what condensed matter theory is about; to the Applied Physics
Department of Chalmers University of Technology and Göteborg University
(Göteborg, Sweden) and Professor M. Jonson; and to the Physics and Astronomy
Department of the University of British Columbia (Vancouver, Canada) and
Professor I. Affleck, for support and encouragement.
I thank Drs. S. Gao, S. Rashkeev, P. Hessling, R. Gatt, and Y. Andersson for
many helpful comments and discussions.
Last, but not least, I am grateful to my wife Irina for her unwavering support
and for actually starting this project by the comment, ‘‘Well, if you are spending
this much time on preparing these handouts, you should rather be writing a book,’’
and to my daughter Ekaterina for her critical appreciation of the illustrations.

Vancouver, British Columbia Alexandre Zagoskin


Contents

1 Basic Concepts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 1


1.1 Introduction: Whys and Hows of Quantum
Many-Body Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Screening of Coulomb Potential in Metal. . . . . . . . . . . . 3
1.1.2 Time-Dependent Effects: Plasmons . . . . . . . . . . . . . . . . 6
1.2 Propagation Function in a One-Body Quantum Theory . . . . . . . 8
1.2.1 Propagator: Definition and Properties . . . . . . . . . . . . . . 8
1.2.2 Feynman’s Formulation of Quantum
Mechanics: Path (Functional) Integrals . . . . . . . . . . ... 13
1.2.3 Quantum Transport in Mesoscopic Rings:
Path Integral Description . . . . . . . . . . . . . . . . . . . . . . . 20
1.3 Perturbation Theory for the Propagator . . . . . . . . . . . . . . . . . . 24
1.3.1 General Formalism . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.3.2 An Example: Potential Scattering . . . . . . . . . . . . . . . . . 30
1.4 Second Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
1.4.1 Description of Large Collections of Identical
Particles: Fock’s Space . . . . . . . . . . . . . . . . . . . . . ... 33
1.4.2 Bosons. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 36
1.4.3 Number and Phase Operators and Their Uncertainty
Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
1.4.4 Fermions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
1.5 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

2 Green’s Functions at Zero Temperature . . . . . . . . . . . . . . . . .... 53


2.1 Green’s Function of The Many-Body System: Definition
and Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .... 53
2.1.1 Definition of Green’s Functions of the Many-Body
System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.1.2 Analytic Properties of Green’s Functions . . . . . . . . . . . . 62
2.1.3 Retarded and Advanced Green’s Functions . . . . . . . . . . 67
2.1.4 Green’s Function and Observables . . . . . . . . . . . . . . . . 70


2.2 Perturbation Theory: Feynman Diagrams . . . . . . . . . . . . ..... 71


2.2.1 Derivation of Feynman Rules. Wick’s
and Cancellation Theorems . . . . . . . . . . . . . . . . ..... 73
2.2.2 Operations with Diagrams. Self Energy. Dyson’s
Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..... 84
2.2.3 Renormalization of the Interaction. Polarization
Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..... 88
2.2.4 Many-Particle Green’s Functions. Bethe–Salpeter
Equations. Vertex Function . . . . . . . . . . . . . . . . ..... 91
2.3 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..... 99
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..... 100

3 More Green’s Functions, Equilibrium and Otherwise,


and Their Applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 101
3.1 Analytic Properties of Equilibrium Green’s Functions . . . . . ... 101
3.1.1 Statistical Operator (Density Matrix):
The Liouville Equation . . . . . . . . . . . . . . . . . . . . . ... 102
3.1.2 Definition and Analytic Properties
of Equilibrium Green’s Functions . . . . . . . . . . . . . . . . . 103
3.2 Matsubara Formalism. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
3.2.1 Bloch’s Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
3.2.2 Temperature (Matsubara) Green’s Function . . . . . . . . . . 111
3.2.3 Perturbation Series and Diagram Techniques
for the Temperature Green’s Function . . . . . . . . . . . . . . 114
3.3 Linear Response Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.3.1 Linear Response Theory: Kubo Formulas . . . . . . . . . . . 118
3.3.2 Fluctuation-Dissipation Theorem. . . . . . . . . . . . . . . . . . 122
3.4 Nonequilibrium Green’s Functions. . . . . . . . . . . . . . . . . . . . . . 126
3.4.1 Nonequilibrium Causal Green’s Function: Definition. . . . 126
3.4.2 Contour Ordering and Three More Nonequilibrium
Green’s Functions . . . . . . . . . . . . . . . . . . . . . . . . . ... 128
3.4.3 The Keldysh Formalism. . . . . . . . . . . . . . . . . . . . . ... 131
3.5 Quantum Kinetic Equation . . . . . . . . . . . . . . . . . . . . . . . . ... 133
3.5.1 Dyson’s Equations for Nonequilibrium Green’s
Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 134
3.5.2 The Quantum Kinetic Equation. . . . . . . . . . . . . . . . ... 135
3.6 Application: Electrical Conductivity of Quantum
Point Contacts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 137
3.6.1 Quantum Electrical Conductivity in the Elastic Limit ... 139
3.6.2 Elastic Resistance of a Point Contact: Sharvin
Resistance, the Landauer Formula,
and Conductance Quantization . . . . . . . . . . . . . . . . ... 142

3.6.3 The Electron–Phonon Collision Integral


in 3D Quantum Point Contact . . . . . . . . ........... 145
3.6.4 * Calculation of the Inelastic Component
of the Point Contact Current. . . . . . . . . . . . . . . . . . . . . 147
3.7 Method of Tunneling Hamiltonian . . . . . . . . . . . . . . . . . . . . . . 149
3.8 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155

4 Methods of the Many-Body Theory in Superconductivity . . . . . . . 157


4.1 Introduction: General Picture of the Superconducting State . . . . 157
4.2 Instability of the Normal State . . . . . . . . . . . . . . . . . . . . . . . . 168
4.3 Pairing (BCS) Hamiltonian . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
4.3.1 Derivation of the BCS Hamiltonian. . . . . . . . . . . . . . . . 172
4.3.2 Diagonalization of the BCS Hamiltonian:
The Bogoliubov Transformation—Bogoliubov-de
Gennes Equations . . . . . . . . . . . . . . . . . . . . . . . . . . .. 175
4.3.3 Bogolons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 177
4.3.4 Thermodynamic Potential of a Superconductor . . . . . . .. 179
4.4 Green’s Functions of a Superconductor: The Nambu-Gor’kov
Formalism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
4.4.1 Matrix Structure of the Theory . . . . . . . . . . . . . . . . . . . 181
4.4.2 Elements of the Strong Coupling Theory . . . . . . . . . . . . 182
4.4.3 Gorkov’s Equations for the Green’s Functions . . . . . . . . 185
4.4.4 Current-Carrying State of the Superconductor. . . . . . . . . 189
4.4.5 Destruction of Superconductivity by Current . . . . . . . . . 194
4.5 Andreev Reflection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
4.5.1 The Proximity Effect in a Normal Metal
in Contact with a Superconductor . . . . . . . . . . . . . . . .. 203
4.5.2 Andreev Levels and Josephson Effect in a Clean
SNS Junction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 204
4.5.3 Josephson Current in a Short Ballistic Junction:
Quantization of Critical Current in Quantum Point
Contact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 206
4.5.4 Josephson Current in a Long SNS Junction . . . . . . . . .. 209
4.5.5 * Transport in Superconducting Quantum Point
Contact: The Keldysh Formalism Approach . . . . . . . . .. 215
4.6 Tunneling of Single Electrons and Cooper Pairs
Through a Small Metallic Dot: Charge Quantization
Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 217
4.6.1 Coulomb Blockade of Single-Electron Tunneling . . . . .. 218
4.6.2 Superconducting Grain: When One Electron
Is Too Many . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 220
4.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 223
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 224

5 Many-Body Theory in One Dimension . . . . . . . . . . . . . . . . . . ... 227


5.1 Orthogonality Catastrophe and Related Effects . . . . . . . . . . ... 227
5.1.1 Dynamical Charge Screening by a System
of Fermions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
5.1.2 One-Dimensional Tight-Binding Model . . . . . . . . . . . . . 230
5.2 Tomonaga-Luttinger Model . . . . . . . . . . . . . . . . . . . . . . . . . . 235
5.2.1 Spinless Fermions in One Dimension . . . . . . . . . . . . . . 235
5.2.2 Bosonization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
5.2.3 Tomonaga-Luttinger Liquid: Interacting Fermions
in One Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
5.2.4 Spin-Charge Separation . . . . . . . . . . . . . . . . . . . . . . . . 249
5.2.5 Green’s Functions in Tomonaga-Luttinger Model . . . . . . 250
5.3 Conformal Field Theory and the Orthogonality Catastrophe . . . . 254
5.3.1 Conformal Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . 254
5.3.2 Conformal Dimensions, the Energy Spectrum
and the Anderson Exponent . . . . . . . . . . . . . . . . . . ... 256
5.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 260
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 261

Appendix A: Friedel Oscillations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263

Appendix B: Landauer Formalism for Hybrid
Normal-Superconducting Structures . . . . . . . . . . . . . . . . . . . . . . . . . 267

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Chapter 1
Basic Concepts

When asked to calculate the stability of a dinner table with four
legs, a theorist rather quickly produces the results for tables
with one leg and with an infinite number of legs. He spends the
rest of his life in futile attempts to solve the problem for a table
with an arbitrary number of legs.

A popular wisdom.
From the book “Physicists keep joking”

Abstract Basic ideas of quantum many-body theory. Qualitative picture of quasiparticles. Thomas-Fermi screening. Plasmons. Propagators and path integrals in a one-body quantum theory. Aharonov-Bohm effect. Perturbation theory for a propagator. Second quantization and field operators.

1.1 Introduction: Whys and Hows of Quantum Many-Body Theory

Technically speaking, physics deals only with one-body and many-body problems
(because the two-body problem reduces to the one-body case, while the three-body
problem does not, and is already insolvable (Fig. 1.1)). Still, what an average physi-
cist thinks of as “many” in this context is probably something of the order of
$10^{19}$–$10^{23}$, the number of particles in a cubic centimeter of a gas or a solid, respec-
tively. When you have this many particles on your hands, you need a many-body
theory. At these densities, the particles will spend enough time at several de Broglie
wavelengths from each other, and therefore we need a quantum many-body theory.
(A good thing too: What we really should not mess with is the classical chaos!)
The real reason why you want to deal with such a large collection of particles in
the first place, instead of quietly discussing a helium atom, is of course that $10^{23}$ is
much closer to infinity. The epigraph, or intuition, or both, tell us that the infinite


number of particles (or legs) is almost as easy to handle as one, and much, much
easier than, say, 3, 4, or 7.

Fig. 1.1 One, two, many …
The basic idea of the approach is that instead of following a large number of
strongly interacting real particles, we should try to get away with considering a rela-
tively small number of weakly interacting quasiparticles, or elementary excitations.
An elementary excitation is what its name implies: something that appears in the
system after it has suffered an external perturbation, and to which the reaction of the
system to this perturbation can be almost completely ascribed—like a ripple on the
surface of a pond, only in quantum theory those ripples will be quantized. In a crystal
lattice, such quantized ripples are phonons, sound quanta, which carry both energy
and quasimomentum, and only weakly interact with each other and, e.g., electrons.
Strike a solid, or heat it, and you excite (that is, generate) a whole bunch of phonons,
which will carry away the energy and momentum of your influence (Fig. 1.1).
Phonons form a rather dilute Bose gas, and therefore are much easier to deal with
than the actual particles—atoms or ions—that constitute the lattice. The phonons are
called quasiparticles not only because they don’t exist outside the lattice; they also
have finite lifetime, unlike the stable “proper” particles. A key point here is that the
quasiparticles must be stable enough: if they decay faster than they can be created,
the whole description loses sense.
Let us now consider a system of interacting electrons in a metal lattice (which
we will describe by the standard “jellium” model of uniformly distributed positive
charge, neutralizing the total charge of free electrons). Here we have real particles,
which interact through strong Coulomb forces, which moreover have an infinite
radius (because they decay only as $1/r^2$). For a given electron, we must thus take
into account influences of all the other electrons. Therefore, nothing actually depends
on the details of behavior of any of those electrons! We can safely replace their action

by some average field, depending on averaged electronic density n(r), thus arriving
at the mean field approximation (MFA). We immediately use it to calculate the
screening of Coulomb interaction, to see that not only particles, but interactions as
well, are changed in the many-body systems.

1.1.1 Screening of Coulomb Potential in Metal

Suppose we place an external charge Q in the system. It will create a potential φ(r),
which will change the initial uniform distribution of electronic density,

$$n = \frac{p_F^3}{3\pi^2\hbar^3}. \qquad (1.1)$$

Here $p_F = \sqrt{2m\mu}$ is the Fermi momentum, and we have used the well-known rela-
tion between $p_F$ and the density of the electron gas. Of course, if the electronic density
becomes coordinate dependent, so does the Fermi momentum: $p_F \to p_F(n(\mathbf{r}))$. In
equilibrium, the electrochemical potential of the electrons must be constant, that is,

$$\mu = \frac{p_F^2(n(\mathbf{r}))}{2m} + e\varphi(\mathbf{r}) = \mathrm{const}, \qquad (1.2)$$

and we easily find that

$$n(\mathbf{r}) = \frac{\left(2m(\mu - e\varphi(\mathbf{r}))\right)^{3/2}}{3\pi^2\hbar^3}. \qquad (1.3)$$
If there is no external potential, we return to the unperturbed case (1.1).
Now let us employ the electrostatics. The potential must satisfy Poisson's
equation,

$$\nabla^2\varphi(\mathbf{r}) = 4\pi\rho \equiv 4\pi e\,\delta n,$$

where ρ is the charge density induced on the neutral background by the probe charge,
and $\delta n(\mathbf{r}) = n(\mathbf{r}) - n$ is the change in electronic density. (The positive “jellium”
neutralized the negative charge of the electrons, remember? Besides, we assume that
it has unit dielectric permeability, ε = 1.) Therefore, we can write

$$\nabla^2\varphi(\mathbf{r}) = 4\pi e\,\frac{\left(2m(\mu - e\varphi(\mathbf{r}))\right)^{3/2} - (2m\mu)^{3/2}}{3\pi^2\hbar^3}. \qquad (1.4)$$

This is the Thomas–Fermi equation, first obtained in the theory of electron density
distribution in atoms.
Generally, this nonlinear equation can be solved only numerically. If, though, we
assume that eφ is much smaller than the Fermi energy, μ, and expand the right-hand
side of (1.4) in powers of φ to the lowest order—that is, taking

$$\delta n(\mathbf{r}) = -\frac{3}{2}\,\frac{e\varphi(\mathbf{r})\,n}{\mu}, \qquad (1.5)$$

we obtain a linear equation,

$$\nabla^2\varphi(\mathbf{r}) = \frac{1}{\lambda_{TF}^2}\,\varphi(\mathbf{r}). \qquad (1.6)$$

Here $\lambda_{TF}$ is the Thomas–Fermi screening length,

$$\lambda_{TF} = \frac{\mu^{1/2}}{\sqrt{6\pi}\,e\,n^{1/2}} = \frac{\pi^{1/6}\,\hbar}{2\cdot 3^{1/6}\,e\,m^{1/2}}\,n^{-1/6}. \qquad (1.7)$$

To find the physical meaning of $\lambda_{TF}$, let us solve (1.6) for φ(r), imposing the condi-
tion that at small distances φ(r) ≈ Q/r. This is reasonable, because close enough
to the probe charge—at $r \ll n^{-1/3}$—there will be, on average, no electrons around,
and we should observe the same potential as in vacuum. (Of course, the constant
potential of the “jellium” does not matter.) Therefore, we obtain the following result:

$$\varphi(r) = \frac{Q}{r}\,\exp[-r/\lambda_{TF}]. \qquad (1.8)$$

As you see, the Coulomb potential is modified, and it now decays exponentially
at a distance of order $\lambda_{TF}$ from the source—it was screened.¹
Physically, when a positive charge is brought into our system, the electrons will
be attracted to it, forming a negatively charged cloud. Formula (1.8) tells us that
the total charge of this cloud is equal and opposite to the external charge, so that
for the electrons at $r \gg \lambda_{TF}$ it is completely compensated. The size of the cloud is
determined by the interplay between the Coulomb attraction of the electrons to the charge
and their Coulomb repulsion from each other. The latter we took into account in our
approximate formula (1.5). In the case of a negative external charge, the electrons will
be repulsed, laying bare the positive charge of the “jellium” background, which again
will compensate for it. In either case, the charge is surrounded by a charged
cloud of equal magnitude and opposite sign. This holds, of course, for every single electron in
the system (Figs. 1.2, 1.3).
¹ The dependence (1.8) is often called the Yukawa potential, though in the context of screening of
the Coulomb potential in (classical) plasma it was derived by Debye and Hückel, with a different
screening length, $\lambda_D \sim \sqrt{k_B T/(ne^2)}$. (The difference is due to the use of the Boltzmann instead of
the Fermi distribution in a nondegenerate gas.)

Fig. 1.2 Forces acting on an electron and mean field approximation

Now we can refine the criterion for the applicability of our “averaging” approach.
Previously, we thought that since the Coulomb interaction reaches to infinity, the
number of electrons acting on any single electron is always very large—the same as
the number of electrons in the whole system—and their action could be replaced by
the action of some average charge density, ne. Now we see that the actual number of
electrons influencing any single electron is only of order $n\lambda_{TF}^3$. Then our considera-
tions are exact as long as

$$n\lambda_{TF}^3 \gg 1. \qquad (1.9)$$

Since

$$n^{1/3}\lambda_{TF} \propto \sqrt{\frac{\mu}{n^{1/3}\,e^2}} \propto \sqrt{\frac{p_F^2\,\hbar}{m\,p_F\,e^2}} = \sqrt{\frac{\hbar v_F}{e^2}},$$

the above criterion reduces to

$$\frac{\hbar v_F}{e^2} \gg 1. \qquad (1.10)$$

Recalling that $e^2/(\hbar c) \approx 1/137$, we find on the left-hand side $137\,v_F/c$, a ratio of
order one.
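To get a feeling for the numbers, here is a minimal numerical sketch (not part of the text) in SI units, assuming standard free-electron parameters for copper, $n \approx 8.5\times 10^{28}\,\mathrm{m}^{-3}$ and $E_F \approx 7$ eV; the SI rewriting of (1.7) is $\lambda_{TF}^2 = 2\varepsilon_0 E_F/(3ne^2)$, and the ratio in (1.10) is $v_F/(\alpha c)$:

```python
# Rough order-of-magnitude estimates (not from the book). Assumed free-electron
# values for copper: n ~ 8.5e28 m^-3, E_F ~ 7 eV. SI form of (1.7):
# lambda_TF^2 = 2*eps0*E_F / (3*n*e^2); the criterion (1.10) is v_F/(alpha*c) >> 1.
import numpy as np

e, eps0, hbar, me, c = 1.602e-19, 8.854e-12, 1.055e-34, 9.109e-31, 2.998e8
n, E_F = 8.5e28, 7.0 * 1.602e-19                 # density (m^-3), Fermi energy (J)

lam_TF = np.sqrt(2.0 * eps0 * E_F / (3.0 * n * e**2))
v_F = np.sqrt(2.0 * E_F / me)
alpha = e**2 / (4.0 * np.pi * eps0 * hbar * c)   # fine-structure constant, ~1/137

print(f"lambda_TF        ~ {lam_TF * 1e10:.2f} Angstrom")   # ~0.5 Angstrom
print(f"n * lambda_TF^3  ~ {n * lam_TF**3:.3f}")            # nowhere near >> 1
print(f"hbar v_F / e^2   ~ {v_F / (alpha * c):.2f}")        # of order one
```

With these rough numbers, $\lambda_{TF}$ is below the interatomic spacing and $n\lambda_{TF}^3$ is nowhere near large, which is exactly the "nasty surprise" discussed in the next paragraph.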
This is a nasty surprise, of the sort that is abundant in many-body theory. It
indicates that even if our approximation gives a qualitatively correct answer (which
it does), we must be ready to take into account the effects of deviations from the
“mean field” picture, and preferably in a systematic way. (A pleasant surprise, which
also occasionally can be encountered here, is that the mean field approximation often—
though not always—gives excellent results even outside its domain of applicability.)
One all-important qualitative conclusion that we can make based on our results is
that in a metal, an electron is surrounded by the screening cloud of other electrons. Any
force applied to it will have to accelerate the whole cloud. Therefore, the electron
will behave as if it had a larger effective mass, $m^*$, than in vacuum! (We have
not yet considered its interactions with the crystal lattice itself.) Instead of being a
point particle, it acquires a finite size, the size of the cloud. We can thus call this
complex entity “electron + cloud” a quasiparticle. As they say, an electron is dressed.
(Logically, the “lone” electron is called a bare particle.)

Fig. 1.3 Screening of external charge in metal

The quasielectrons are what we will see when probing the system. Pleasantly,
the problem of considering these quasielectrons, interacting through some sort of
short-range potential (even if it is not exactly the Yukawa potential we derived), is
much simpler than the initial problem dealing with electrons strongly interacting
with infinite-range Coulomb forces. We need only accurately find effective masses
and potentials. This “only” is actually the very subject of the many-body theory!

1.1.2 Time-Dependent Effects: Plasmons


To better understand possible underwater rocks, let us recollect that so far we have
considered only the static case, which is all right for a probe charge at rest, but
we implied that this picture should hold for one in motion. One can guess that if a
screening cloud is likely to follow a slow electron, it might lose the fast one: after
all, the cloud formation takes some time.
To this end, let us single out an electron (number zero) with coordinate $\mathbf{r}(t)$ and
write for it the classical equation of motion:

$$m\,\ddot{\mathbf{r}}(t) = e\mathbf{E}(\mathbf{r}(t)),$$

where the electric field E arises due to the local deviation of the electronic density from
its equilibrium value and satisfies the Maxwell equation

$$\nabla_{\mathbf{r}}\cdot\mathbf{E}(\mathbf{r}(t)) = 4\pi e\left[\sum_{i\neq 0}\delta(\mathbf{r}-\mathbf{r}_i(t)) - n\right].$$

Here we have used for the moment the exact electronic density at the point
$\mathbf{r}$, $\sum_{i\neq 0}\delta(\mathbf{r}_i - \mathbf{r})$ (the summation is taken over all other electrons). If we now write
$\mathbf{r}(t) = \mathbf{r}_0 + \delta\mathbf{r}(t)$ and take into account that the unperturbed density of electrons can
be written as $n = n(\mathbf{r}_0) = \sum_{i\neq 0}\delta(\mathbf{r}_i - \mathbf{r}_0)$, we can write a linearized equation at $\mathbf{r}_0$:

$$\nabla_{\mathbf{r}_0}\cdot\mathbf{E}(\mathbf{r}_0) = -4\pi e\,\delta\mathbf{r}(t)\cdot\nabla_{\mathbf{r}_0}\sum_{i\neq 0}\delta(\mathbf{r}_i-\mathbf{r}_0) = -4\pi e\,\nabla_{\mathbf{r}_0}\cdot\left[\delta\mathbf{r}(t)\sum_{i\neq 0}\delta(\mathbf{r}_i-\mathbf{r}_0)\right]. \qquad (1.11)$$

Since if $\delta\mathbf{r}(t) = 0$ there will be no field, we can write simply

$$\mathbf{E}(\mathbf{r}_0) = -4\pi e\,\delta\mathbf{r}(t)\,n.$$

Substituting this into the equation of motion, performing a Fourier transformation over
frequencies, and inserting the expression obtained for $\delta\mathbf{r}$ back into the Maxwell
equation, we find

$$\delta\mathbf{r}(\omega) = -\frac{e\mathbf{E}(\omega)}{m\omega^2}; \qquad (1.12)$$

$$\mathbf{E}(\omega) = \frac{4\pi e^2 n}{m\omega^2}\,\mathbf{E}(\omega). \qquad (1.13)$$

The result is consistent if the frequency

$$\omega = \omega_p \equiv \sqrt{\frac{4\pi e^2 n}{m}}, \qquad (1.14)$$

the plasma frequency. This is the frequency of small oscillations of a uniform elec-
tron gas. The period of plasma oscillations gives the characteristic time of any charge
redistribution in the metal.
Now we see that the screening cloud will be able to follow the electron only as
long as its velocity

$$v \ll \lambda_{TF}\,\omega_p. \qquad (1.15)$$

Otherwise, the surrounding electrons simply will not have time to react! (Figs. 1.4, 1.5).
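For the same assumed free-electron parameters as above, the short sketch below (again not from the text) evaluates the SI form of (1.14), $\omega_p = \sqrt{ne^2/\varepsilon_0 m}$, and the velocity scale in (1.15); with these formulas $\lambda_{TF}\,\omega_p = v_F/\sqrt{3}$, so the criterion essentially reads $v \ll v_F$:

```python
# Continuing the rough estimate (not from the book): plasma frequency in SI form,
# omega_p = sqrt(n e^2 / (eps0 m)), and the velocity scale entering (1.15).
import numpy as np

e, eps0, me = 1.602e-19, 8.854e-12, 9.109e-31
n, E_F = 8.5e28, 7.0 * 1.602e-19                 # same assumed copper values
lam_TF = np.sqrt(2.0 * eps0 * E_F / (3.0 * n * e**2))

omega_p = np.sqrt(n * e**2 / (eps0 * me))        # ~1.6e16 rad/s
v_F = np.sqrt(2.0 * E_F / me)

print(f"omega_p            ~ {omega_p:.2e} rad/s")
print(f"lambda_TF*omega_p  ~ {lam_TF * omega_p:.2e} m/s (= v_F/sqrt(3))")
print(f"v_F                ~ {v_F:.2e} m/s")
```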
The quanta of plasma oscillations are called plasmons. They can propagate across
the system and are created whenever the charge neutrality of the metal is disturbed.
This is yet another example of quasiparticles. The screening of Coulomb potential,
e.g., and dressing of bare electrons in the metal can be directly described in terms of
plasmons.
We have already mentioned phonons. Interactions of the electrons with the crystal
lattice can lead to what can be described as a phonon cloud around an electron,
forming a polaron. Since the characteristic phonon frequency (the Debye frequency,
$\omega_D$) is much lower than $\omega_p$ in metals, electrons there always leave the phonon cloud
behind. Nevertheless, such a cloud can be run into by another electron; as a result, an
effective electron–electron interaction arises, which can lead to such a spectacular
phenomenon as superconductivity.

Fig. 1.4 a Bare particle. b Interaction

Fig. 1.5 Particles in the field. a $v \ll \lambda\omega$; b $v \gg \lambda\omega$

In short, the many-body systems seem to yield to a quasiparticle approach. The
question is how to make it work.

1.2 Propagation Function in a One-Body Quantum Theory

1.2.1 Propagator: Definition and Properties

The introduction of quasiparticles would hardly be an improvement if for each new
problem we had to invent a completely new method.

Fortunately, a general mathematical apparatus, based on the famous Green's
functions and Feynman diagrams, takes care of all the details, and thus makes the
quasiparticle approach efficient. Actually, this apparatus works so excellently that
it is often applied to problems that can be more easily solved by other methods. And
as with any efficient tool, people tend to forget about its natural limitations. But in
itself, the approach is a rare example of mathematical beauty, physical insight, and
practical efficiency. It originates from quantum field theory, and is therefore designed
to deal with systems with an infinite number of degrees of freedom. This is exactly what
condensed matter theory needs. Moreover, usually in condensed matter prob-
lems we are not interested in relativistic covariance—which simplifies the necessary
apparatus and makes mastering it easier. On the other hand, the
enormous wealth of effects in solid-state physics provides a much wider field
for field theorists than the “standard” field theory, including many effects that either
were not, or simply cannot be, observed elsewhere (like 1 + 1 or 2 + 1 dimensional field theory).
We have advised you against using Green’s functions and Feynman diagrams
where simpler methods can be applied. Nevertheless, we now start from the case of
a single quantum particle. The reason is that in this way we will be able to derive
all the general expressions that actually do not depend on the number of particles
in the system, to see clearly the structure of the theory, and to recognize where the
many-particle properties really enter the picture.
It is well known that the probability of observing a single quantum particle at point
x at time t is determined by the square modulus of the wave function of this particle
at this place and time, $|\Psi(x,t)|^2$. In order to find $\Psi(x,t)$ we could, e.g., solve the
Schrödinger equation, given initial and boundary conditions. But many properties of
the solution can be obtained directly from general principles.
First, the superposition principle. Mathematically, this means that $\Psi(x,t)$ satisfies
a linear differential equation. Physically, the wave functions follow the Huygens
principle; i.e., each point of the wave front acts as a secondary emitter. Anyway, this
allows us to write for the wave function at some time t,

$$\Psi(x,t) = \int dx'\,K(x,t;x',t')\,\Psi(x',t'), \quad t > t'. \qquad (1.16)$$

The kernel $K(x,t;x',t')$ describes the propagation of the $\Psi$-wave from $(x',t')$ to
$(x,t)$ and is therefore called the propagation function, or propagator. It is funda-
mental for all of our theory. Note that due to the causality principle

$$K(x,t;x',t') = 0, \quad t < t', \qquad (1.17)$$

so that the future does not affect the past. (It would not be this easy if we had to deal
with relativistic covariance!)
Let us now suppose that the particle at the initial moment is strictly localized:
$\Psi(x',t') = \delta(x' - x_0)$. Then from (1.16)

$$\Psi(x,t) = K(x,t;x_0,t'). \qquad (1.18)$$


That is, more specifically, the propagator is the transition amplitude of the particle
between the points $(x',t')$ and $(x,t)$, and its square modulus gives the transition
probability.
In (1.16) we did not specify the moment $t'$, except that it must precede the
observation moment t. Then, for some $t'' > t'$ we obtain

$$\Psi(x,t) = \int dx''\,K(x,t;x'',t'')\,\Psi(x'',t'') = \int dx''\!\int dx'\,K(x,t;x'',t'')\,K(x'',t'';x',t')\,\Psi(x',t'). \qquad (1.19)$$

Since both expressions must be identical, for $t > t'' > t'$ we obtain

$$K(x,t;x',t') = \int dx''\,K(x,t;x'',t'')\,K(x'',t'';x',t'). \qquad (1.20)$$

This is the composition property of the propagator, and we will use it heavily later.
Of course, this is a reformulation of the Huygens principle from the wave point of
view. But what does it mean from the particle point of view? If we want to know
the probability amplitude for a particle starting at $(x_i, t_i)$ to reach the point $x_f$ at $t_f$, then at
any intermediate moment $t'$ we must take into account all conceivable positions the
particle can occupy in order to obtain a proper result. This situation is often illustrated
by the famous double-slit experiment (we do not know for certain through which
slit the particle passed). It is a close, though slightly different, situation. There
(in the double-slit experiment) we know the relevant region in space over which we
should integrate (the slits), but we do not know when the particle passes it. Here we
know the relevant time, but we have to integrate over all the available space. You
can ponder how these two situations complement each other (just think about
stationary wave propagation).
In principle, the above picture is almost identical to that of the Brownian motion of
a classical particle. The only difference is that there the reasoning is applied to the
probabilities themselves rather than to their complex amplitudes, and this, as you know,
changes a lot.
Returning to the properties of the propagator, we have decided that for negative
times it is strictly zero, while for positive times it certainly is not. This might imply
singular behavior at $t - t' = 0$. Indeed, for $t = t'$ we must get an identity:

$$\Psi(x,t) \equiv \int dx'\,K(x,t;x',t)\,\Psi(x',t),$$

so that

$$K(x,t;x',t) = \delta(x - x'). \qquad (1.21)$$

We have now obtained all the information about the properties of the propa-
gator that can be extracted from the most general principles of quantum mechanics.
To proceed, we need more specific data. One way is to use the Schrödinger equation.
The other is to formulate instead some statement from which the Schrödinger equa-
tion itself could be derived. We will take both ways, because if we are mostly
interested here in the inner workings of the formalism (and we are), it is wise to run it in both
directions.
If the wave function obeys the Schrödinger equation,

$$\left[i\hbar\frac{\partial}{\partial t} - \mathcal{H}(x,\partial_x,t)\right]\Psi(x,t) = 0, \qquad (1.22)$$

then it follows from (1.16) that for $t > t'$ the propagator satisfies the same equation:

$$\left[i\hbar\frac{\partial}{\partial t} - \mathcal{H}(x,\partial_x,t)\right]K(x,t;x',t') = 0. \qquad (1.23)$$

Besides, we have seen that $K(x,t;x',t'=t) = \delta(x-x')$ and $K(x,t;x',t'>t) = 0$. That is, we can write

$$K(x,t;x',t') \equiv \theta(t-t')\,K(x,t;x',t'), \qquad (1.24)$$

where $\theta(t-t')$ is the Heaviside step function. All these properties can be taken care
of by the following equation:

$$\left[i\hbar\frac{\partial}{\partial t} - \mathcal{H}(x,\partial_x,t)\right]K(x,t;x',t') = i\hbar\,\delta(x-x')\,\delta(t-t'). \qquad (1.25)$$

Indeed, for t > t ↑ this reduces to (1.23), while for t → t ↑ + 0 we can keep in the left-
hand side of (1.25) only the term with ∂θ(t − t ↑ )/∂t, which matches the right-hand
side.
From (1.25) we see that, up to a factor of $i\hbar$, the propagator is the Green's function
of the Schrödinger equation in the mathematical sense. (If $\hat{L}$ is a linear differential
operator, then the Green's function of the equation $\hat{L}\psi = 0$ is the solution to the equation
$\hat{L}G = -\delta(x - x')$.) Therefore, quantum-mechanical propagators are more often
called Green's functions, especially in the many-particle case; since our solution
vanishes for $t < t'$, it is called the retarded Green's function. We, though, will keep
calling the function $K(x,t;x',t')$ a propagator, to stress that we are still working
on the one-particle problem.
It is easy to see that for a free particle of mass m (described by the Hamiltonian
$\mathcal{H} = -(\hbar^2/2m)(\partial_x)^2$) the propagator depends only on the differences of its arguments,
and the solution to (1.25) is given by

$$K_0(x-x',t-t') = \left(\frac{m}{2\pi i\hbar(t-t')}\right)^{d/2}\exp\left[\frac{im(x-x')^2}{2\hbar(t-t')}\right]\theta(t-t'). \qquad (1.26)$$

Here d is the space dimensionality.



In the simplest case of one dimension (generalizations to $d > 1$ are straightfor-
ward), formula (1.26) immediately follows after we Fourier transform (1.25):

$$\left(\hbar\omega - \frac{(\hbar k)^2}{2m}\right)K_0(k,\omega) = i\hbar; \qquad K_0(k,\omega) = \frac{i}{\omega - \frac{\hbar k^2}{2m}}. \qquad (1.27)$$

Now we can find

$$K_0(x,t) = \int_{-\infty}^{\infty}\frac{dk}{2\pi}\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,e^{ikx - i\omega t}\,K_0(k,\omega).$$

The integral over $\omega$ is conveniently taken using complex analysis:

$$\oint_C\frac{d\omega}{2\pi}\,e^{-i\omega t}\,K_0(k,\omega) = \pm i\sum_{\omega_0}\mathrm{Res}\left[K_0(k,\omega)\,e^{-i\omega t}\right],$$

where the sum is taken over the residues at all the poles of $K_0(k,\omega)$ as a function
of the complex variable $\omega$, and the closed contour C consists of the real axis and
an infinitely remote half-circle (we assume that the integral converges). The sign
depends on whether the contour is traversed in the positive or negative direction.
Since the integrand contains the factor $e^{-i\omega t} = e^{-it\,\mathrm{Re}\,\omega + t\,\mathrm{Im}\,\omega}$, we must close the
contour in the upper half-plane of $\omega$ if $t < 0$ and in the lower half-plane if $t > 0$.
Then the factor $e^{t\,\mathrm{Im}\,\omega}$ ensures exponential decay of the integrand on the half-circle
and convergence of the integral (Jordan's lemma).
As a matter of fact, the only pole of $K_0(k,\omega)$, at $\omega = \hbar k^2/2m$, lies on the very
integration contour, and adding an infinitesimal imaginary part to $\omega$ displaces the
pole to either the positive or negative imaginary half-plane, which will dramatically
change the answer (Fig. 1.6).
If, for example, we write $\omega \to \omega + i0$, the pole will shift below the real axis.
Then for $t < 0$ the contour does not contain any singularity, and $K_0(x,t)$ will
be identically zero. This is exactly what we need: a retarded propagator! On the
other hand, for $t > 0$ the contour encloses the pole, yielding $\exp(-i\hbar k^2 t/2m)$. The
momentum integral is now straightforward: it has a Gaussian form,

$$K_0(x,t) = \theta(t)\int_{-\infty}^{\infty}\frac{dk}{2\pi}\,e^{ikx - i\frac{\hbar k^2 t}{2m}},$$

and directly yields (1.26).
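As a simple sanity check (not in the text; units $\hbar = m = 1$, with an arbitrary finite-difference grid chosen only for this illustration), one can verify numerically that the free propagator (1.26) indeed solves the free Schrödinger equation (1.25) for $t > t'$:

```python
# A minimal numerical check (not from the book): the free propagator (1.26) in
# d = 1 satisfies i*dK/dt = -(1/2)*d^2K/dx^2 for t > 0 (units hbar = m = 1).
import numpy as np

def K0(x, t):
    """Free propagator (1.26), one dimension, hbar = m = 1."""
    return np.sqrt(1.0 / (2j * np.pi * t)) * np.exp(1j * x**2 / (2.0 * t))

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
t, dt = 1.0, 1e-5

dK_dt = (K0(x, t + dt) - K0(x, t - dt)) / (2.0 * dt)             # central difference
d2K_dx2 = (K0(x + dx, t) - 2.0 * K0(x, t) + K0(x - dx, t)) / dx**2

residual = 1j * dK_dt + 0.5 * d2K_dx2
print(np.abs(residual).max())   # small (limited only by the finite-difference step)
```

The residual vanishes up to discretization error, confirming that (1.26) is the retarded solution of (1.25) for a free particle.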


Note that had we displaced the pole to the upper half-plane (with ω → ω − i0),
the result would be an advanced Green’s function, disappearing at all positive times.

Fig. 1.6 Integration contour in the complex frequency plane

We could leave the pole on the contour, with still another answer. This game of
infinitesimals reflects the fact that besides the differential Eq. (1.25), we need initial
conditions—e.g., that the solution is a retarded Green’s function.
The above result was almost too easy to obtain, and it leaves an unpleasant after-
taste of having cheated. Indeed, it was not worth the trouble to introduce the prop-
agator “from the most general principles,” only to resort finally to the Schrödinger
equation. Of course, in the one-particle case the propagator may seem to be simply a
mathematical tool to solve the wave equation, without important physics involved
(as in the many-body case). But as often happens in physics, mathematical reformu-
lation here also provides a tool for deeper understanding of fundamentals, which we
will see in the next section.

1.2.2 Feynman’s Formulation of Quantum Mechanics: Path


(Functional) Integrals

To start with, note the striking similarity between the formula (1.26) for the prop-
agator and the well-known formula for the probability distribution of the classical
Brownian particle. The latter quantity, $P(x,t|x',t')$, gives the conventional proba-
bility of finding the particle at x at some time t, if at some earlier time $t'$ it was at $x'$
(see, e.g., [3]):

$$P(x,t|x',t') = \left(4\pi D(t-t')\right)^{-d/2}\exp\left[-\frac{(x-x')^2}{4D(t-t')}\right]\theta(t-t'). \qquad (1.28)$$

The diffusion coefficient D in the quantum case is replaced by $i\hbar/2m$. From the
mathematical point of view, the similarity between (1.26) and (1.28) is due to the
fact that both $K_0$ and P are Green's functions of similar linear differential equations:
a free Schrödinger equation and a diffusion equation, $\partial_t f(x,t) = D(\partial_x)^2 f(x,t)$.
Differences in behavior of quantum and classical Brownian particles are due to the
presence of an imaginary unit in one of these equations. From the physical point
of view, though, this might indicate some deeper link between how we describe
classical and quantum motion. But a direct analogy with Brownian motion would
not work, since for a free classical particle, we should obtain deterministic, rather
than probabilistic, equations of motion.
To achieve this goal, we first recall the extremal action principle of classical
mechanics. It states that the particle's trajectory, or path, $x_{cl}(t)$, between the initial
and final points $x_i, t_i$ and $x_f, t_f$ should minimize the action

$$S[x_f t_f, x_i t_i] = \int_{t_i}^{t_f} dt\,L(x,\dot x,t), \qquad (1.29)$$

and this is the only admissible—real—path for a classical particle. The action is a
so-called functional of the trajectory, not a function, since it depends on the behavior
of $x(t)$ on the whole interval $[t_i, t_f]$.
Here $L(x,\dot x,t)$ is the Lagrange function of the system; in the simplest case

$$L(x,\dot x,t) = T(\dot x(t)) - V(x(t)), \qquad (1.30)$$

with T (ẋ(t)) and V (x(t)) being the kinetic and potential energy respectively.
As you know, the condition of extremum means that the action is not sensitive
to small deviations from the classical (extremal) path. More specifically, if we take,
instead of the real path $x_{cl}(t)$, a trial one, $x_{tr}(t) = x_{cl}(t) + \delta x(t)$, where $\delta x(t)$ is
small, then the change in the action integral (1.29) will be only of second order
in $\delta x(t)$:

$$\delta S = O(\delta x(t)^2). \qquad (1.31)$$

This condition is employed in the derivation of the Lagrange equations of analytical
mechanics, but this is not our concern for the moment. For the free particle the action
is obtained directly:

$$S_0[x_f t_f, x_i t_i] = \int_{t_i}^{t_f} dt\,\frac{m\dot x^2}{2} = \frac{m}{2}\,\frac{(x_f-x_i)^2}{(t_f-t_i)^2}\,(t_f-t_i) = \frac{m(x_f-x_i)^2}{2(t_f-t_i)}. \qquad (1.32)$$

This is—up to a factor of $i/\hbar$—the very expression we have seen in the exponent of
the free quantum-mechanical propagator!
The numerical factor here is very important. The role of $\hbar$ is more or less clear:
since only dimensionless quantities are allowed in the exponent, and $\hbar$ is the action
quantum, the ratio $S/\hbar$ must appear in the quantum case. The imaginary unit plays a
somewhat subtler role: it brings out the interference, which distinguishes propagation
of a quantum particle from its classical counterpart. But anyway, we see that the
quantum-mechanical propagation is somehow related to the classical action S, or
more specifically to $\exp[(i/\hbar)S]$.

Fig. 1.7 Slicing of classical trajectories
It was the idea, first suggested by Dirac and then implemented by Feynman,
that the propagation amplitude of a quantum particle between two points is given by
a coherent sum of terms $\exp[(i/\hbar)S[q,\dot q]]$ corresponding to all possible classical
trajectories $q(t)$ between these points. Instead of propagation amplitude here you can
read propagator, $K(x_f,t_f;x_i,t_i)$, since we have established that they are the same.
What does this give us in the classical limit, when by definition the action $S \gg \hbar$?
Then $\exp[(i/\hbar)S]$ will oscillate very quickly in response to any minute change in
$q(t)$. This means that the contributions to the transition amplitude from virtually all
trajectories cancel! The only exception will be the classical trajectory: by definition,
small deviations from it do not change the action, so that the contribution of this
trajectory will survive. Now you see why classical particles choose classical paths!
(Similar reasoning, long ago, helped to reconcile the wave theory of light with the
fact that light usually propagates along straight lines.)
In order to develop the fundamental idea that we have just described, we need
some way of counting the trajectories and summing up their contributions. Let us
divide the time interval $[t_i, t_f]$ into a large number $(N-1)$ of “slices”, each of
length $\Delta t = (t_f - t_i)/(N-1)$. The N partition moments are thus $t_1 \equiv t_i, t_2, \ldots,
t_N \equiv t_f$. Each classical trajectory is thus sliced into $(N-1)$ pieces (see Fig. 1.7):
$[x_1 \equiv x_i, x_2], [x_2, x_3], \ldots, [x_{N-1}, x_N \equiv x_f]$. Now we can use the composition
property of the propagator, Eq. (1.20), and obtain the expression

$$K(x_N t_N; x_1 t_1) = \int_{-\infty}^{\infty} dx_{N-1}\int_{-\infty}^{\infty} dx_{N-2}\cdots\int_{-\infty}^{\infty} dx_2\; K(x_N t_N; x_{N-1} t_{N-1})\,K(x_{N-1} t_{N-1}; x_{N-2} t_{N-2})\cdots K(x_2 t_2; x_1 t_1). \qquad (1.33)$$

Of course, we do not know the exact form of $K(xt;x't')$. But if $\Delta t \to 0$, then
the transition amplitude $K(x_{n+1}t_{n+1}; x_n t_n)$ must be proportional to

$$\exp\left\{\frac{i\Delta t}{\hbar}\left[\frac{m(x_{n+1}-x_n)^2}{2\Delta t^2} - \frac{V(x_{n+1})+V(x_n)}{2}\right]\right\}. \qquad (1.34)$$

What have we done here? We used for simplicity the Lagrangian of Eq. (1.30) (which
is, though, general enough). We chose $\Delta t$ so tiny that the kinetic energy term in the
action on this interval is much larger than $\hbar$, so that we can disregard all except the
classical trajectory between the points $x_n$, $x_{n+1}$. And finally, we approximated the
classical action on this trajectory by using in (1.34) the average value of the Lagrange
function.
The expression (1.34) lacks the normalization factor, since the dimensionality of
the propagator is inverse volume, $L^{-d}$. It can be restored from condition (1.21). If
we recall one of the limit representations of the delta function,

$$\delta(x) = \lim_{\alpha\to 0}\frac{1}{\sqrt{\alpha\pi i}}\,e^{ix^2/\alpha}, \qquad (1.35)$$

we see that the factor in question will be $(m/(2\pi i\hbar\Delta t))^{d/2}$. (This is the very factor
that we obtained for the free propagator from the Schrödinger equation, but here we
did not exploit this equation at all.)
Now substitute this form of the propagator (for infinitesimal $\Delta t$),

$$K(x\,\Delta t;\,x'\,0) = \left(\frac{m}{2\pi i\hbar\Delta t}\right)^{d/2} e^{\frac{i}{\hbar}\left[\frac{m(x-x')^2}{2\Delta t} - \frac{V(x)+V(x')}{2}\Delta t\right]}, \qquad (1.36)$$
into the composition equation, to obtain

$$K(x_N t_N; x_1 t_1) = \lim_{N\to\infty}\int_{-\infty}^{\infty}\! dx_{N-1}\int_{-\infty}^{\infty}\! dx_{N-2}\cdots\int_{-\infty}^{\infty}\! dx_2\;\prod_{n=2}^{N}\left(\frac{m}{2\pi i\hbar\Delta t}\right)^{d/2} e^{\frac{i}{\hbar}\Delta t\left[\frac{m(x_n-x_{n-1})^2}{2\Delta t^2} - \frac{V(x_n)+V(x_{n-1})}{2}\right]}. \qquad (1.37)$$

As you see, the exponent of this nontrivial construction contains $i/\hbar$ times the Rie-
mann sum for the integral giving the classical action along some path, $x(t)$. The
limit of the infinite number of consecutive integrations over the intermediate coordinates,
$x_j$, with the proper normalization factors, is called a continual, functional, or simply
path integral, and is denoted by $\int\mathcal{D}x$. Thus,

$$K(x,t;x',t') = \int_{x'(t')}^{x(t)}\mathcal{D}x\;e^{\frac{i}{\hbar}S[x(t),\dot x(t)]}. \qquad (1.38)$$

This can also be written in a more symmetric (and more general) form:

$$K(x,t;x',t') = \int_{x'(t')}^{x(t)}\mathcal{D}x\,\frac{\mathcal{D}p}{2\pi\hbar}\;e^{\frac{i}{\hbar}S[p(t),x(t),t]}, \qquad (1.39)$$

where

$$S[p(t),x(t),t] = \int_{t'}^{t} dt\,\left[p\dot x - H(p(t),x(t),t)\right]$$

is the action expressed through the canonical variables;

$$H(p,x,t) = \dot x\,\frac{\partial L}{\partial\dot x} - L(x,\dot x,t)$$

is the Hamiltonian function of the particle. Since the above expression explicitly con-
tains H, it proves more useful in applications of path integral methods to systems
with many degrees of freedom. But for our limited goals, it will be enough to demon-
strate the equivalence of (1.38) and (1.39), simultaneously explaining the meaning
of the symbol $\mathcal{D}p$ (before that, the expression (1.39) is, of course, null and void).
The simplest way to do that is to employ the Schrödinger equation for the propaga-
tor. Now we do not postulate, but derive it from (1.38), thus preserving the consistency
of speculation. It is clear that given the form of the propagator for infinitesimal times,
(1.36), the integral composition equation can be reduced to a differential one.
Using expression (1.36), we can write (for the one-dimensional case; generaliza-
tions are trivial)

$$K(x_N\,t_{N-1}+\Delta t;\,x_1 t_1) \approx \int_{-\infty}^{\infty} dx_{N-1}\left(\frac{m}{2\pi i\hbar\Delta t}\right)^{1/2} e^{\frac{i}{\hbar}\left[\frac{m(x_N-x_{N-1})^2}{2\Delta t} - \frac{V(x_N)+V(x_{N-1})}{2}\Delta t\right]}\,K(x_{N-1}\,t_{N-1};x_1 t_1)$$

on one hand, and

$$K(x_N\,t_{N-1}+\Delta t;\,x_1 t_1) \approx K(x_N\,t_{N-1};x_1 t_1) + \Delta t\,\frac{\partial}{\partial t_{N-1}}K(x_N\,t_{N-1};x_1 t_1)$$

on the other. Now expand the functions under the integral:

$$e^{\frac{i}{\hbar}\left[\frac{m(x_N-x_{N-1})^2}{2\Delta t} - \frac{V(x_N)+V(x_{N-1})}{2}\Delta t\right]} \approx e^{\frac{i}{\hbar}\frac{m(x_N-x_{N-1})^2}{2\Delta t}}\left(1 - \frac{i}{\hbar}V(x_N)\,\Delta t\right);$$

$$K(x_{N-1}\,t_{N-1};x_1 t_1) \approx K(x_N\,t_{N-1};x_1 t_1) - (x_N-x_{N-1})\frac{\partial}{\partial x_N}K(x_N\,t_{N-1};x_1 t_1) + \frac{(x_N-x_{N-1})^2}{2}\frac{\partial^2}{\partial x_N^2}K(x_N\,t_{N-1};x_1 t_1).$$
Integrating over $x_{N-1}$ (which is easy, since the integrals are of Gaussian type) and
keeping the leading terms in $\Delta t$, we obtain:

$$\Delta t\,\frac{\partial}{\partial t_{N-1}}K(x_N\,t_{N-1};x_1 t_1) = -\frac{i}{\hbar}V(x_N)\,\Delta t\,K(x_N\,t_{N-1};x_1 t_1) + \frac{i\hbar}{2m}\,\Delta t\,\frac{\partial^2}{\partial x_N^2}K(x_N\,t_{N-1};x_1 t_1) + o(\Delta t).$$

Dividing by $\Delta t$, and taking the limit $\Delta t \to 0$, we finally obtain Eq. (1.25) for the
propagator for $t > t'$, thus having demonstrated that the Schrödinger equation follows
from the Dirac–Feynman conjecture about the structure of the transition amplitude.
(Of course, the opposite is true as well.)
(Of course, the opposite is true as well.)
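
This short-time composition is easy to try out numerically. Below is a minimal Python sketch (an illustration added here, not part of the original argument): the free-particle factor of the kernel (1.36) is a Gaussian (Fresnel) convolution, which is applied in Fourier space, while the potential enters through the symmetric phase factors exp(−iVΔt/2ℏ). The harmonic potential, the grid, and the initial packet are arbitrary illustrative choices; units ℏ = m = 1.

import numpy as np

hbar = m = 1.0
omega = 1.0
L, npts = 40.0, 2048
x = np.linspace(-L / 2, L / 2, npts, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(npts, d=dx) * hbar      # momentum grid

V = 0.5 * m * omega**2 * x**2
dt = 0.001
nsteps = int(round((np.pi / 2) / dt))                   # a quarter period

# initial Gaussian packet displaced from the potential minimum
x0, sigma = 3.0, 1.0 / np.sqrt(2.0)
psi = np.exp(-(x - x0)**2 / (4 * sigma**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

half_v = np.exp(-1j * V * dt / (2 * hbar))              # exp(-i V dt / 2 hbar)
kinetic = np.exp(-1j * p**2 * dt / (2 * m * hbar))      # free kernel in p-space

for _ in range(nsteps):                                  # compose kernel (1.36) many times
    psi = half_v * psi
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi = half_v * psi

mean_x = np.sum(x * np.abs(psi)**2) * dx
print("<x> after a quarter period   :", mean_x)
print("classical value x0*cos(w t)  :", x0 * np.cos(omega * nsteps * dt))

The packet's center follows the classical trajectory, as it should for a quantum particle whose propagator is built from the classical action.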
Now let us return to the basic Eq. (1.16), which determines the action of the
propagator on the wave function. Using Dirac’s “bra” and “ket” notation, in which
the wave function Γ(x) is presented as a scalar product of two abstract vectors in
Hilbert space,
Γ(x) ≡ ⟨x|Γ⟩,   (1.40)

we can rewrite it as follows:



⟨x|Γ(t)⟩ = Σ_{x′} ⟨x|S(t, t′)|x′⟩ ⟨x′|Γ(t′)⟩ ≡ ⟨x|S(t, t′)|Γ(t′)⟩.   (1.41)

We have used the closure relation (completeness) of the quantum states of the particle
with definite coordinate (coordinate eigenstates), |x, that is,

Σ_{x′} |x′⟩⟨x′| = I,   (1.42)

where I is the unit operator.


We see that the propagator K(x t; x′ t′) for t > t′ is a matrix element of some
time-dependent operator S(t, t′) between the coordinate eigenstates, K(x t; x′ t′) =
⟨x|S(t, t′)|x′⟩. The equation for the propagator then can be written in a general form,
notwithstanding the basis (representation):


iℏ ∂/∂t S(t, t′) = H S(t, t′),   (1.43)

and its formal solution is found immediately:

S(t, t′) = e^{−(i/ℏ) H (t − t′)}.   (1.44)

Since ⟨x|x′⟩ = δ(x − x′) (orthonormality condition for eigenstates of coordinate),
the above solution indeed satisfies the initial condition for the propagator, K(x t;
x′ t − 0) = δ(x − x′).

What is the benefit? It is that now we are not limited to the coordinate repre-
sentation, and can easily work in, say, momentum space. This is what we actually
need to prove (1.39). Besides, we will need the closure relation for the momentum
eigenstates |p⟩:

Σ_{p′} |p′⟩⟨p′| = I.   (1.45)

Recall that in the coordinate (momentum) representation the coordinate and momen-
tum eigenstates look as follows:
Γ_x(x′) ≡ ⟨x′|x⟩ = δ(x′ − x);   Γ_p(x′) ≡ ⟨x′|p⟩ = e^{(i/ℏ) p x′},   (1.46)

and respectively

Γ̃_x(p′) ≡ ⟨p′|x⟩ = e^{−(i/ℏ) p′ x};   Γ̃_p(p′) ≡ ⟨p′|p⟩ = δ(p′ − p).   (1.47)

Now at last we can return to the path-integral calculation of the propagator. In


complete agreement with our previous treatment, we slice the time interval [t_f (= t_N); t_i (= t_1)]
into tiny bits Δt = (t_f − t_i)/(N − 1) and use the composition property:

⟨x_N|S(t_N, t_1)|x_1⟩ = lim_{N→∞} ⟨x_N|S(t_N, t_{N−1}) S(t_{N−1}, t_{N−2}) ··· S(t_2, t_1)|x_1⟩.   (1.48)
The Hamiltonian here is a function of coordinate and momentum operators,
H = H( p̂, x̂).
Now we can insert between each pair of adjacent operators in (1.48) the unit operators
Σ_x |x⟩⟨x| and Σ_p |p⟩⟨p|. Evidently

⟨x_m|e^{−(i/ℏ) H(p̂,x̂) Δt}|p_m⟩ ⟨p_m|x_{m−1}⟩ ≈ ⟨x_m|e^{−(i/ℏ) H(p_m,x_m) Δt}|p_m⟩ ⟨p_m|x_{m−1}⟩
    = e^{(i/ℏ) p_m x_m} e^{−(i/ℏ) H(p_m,x_m) Δt} e^{−(i/ℏ) p_m x_{m−1}}   (1.49)

(note that now instead of the (operator) Hamiltonian, we have obtained the classical
Hamiltonian function, depending on usual coordinates and momenta). Therefore,
Eq. (1.48) is reduced to

⟨x_N|S(t_N, t_1)|x_1⟩ = lim_{N→∞} ∫ ∏_{n=2}^{N−1} dx_n ∫ ∏_{n=2}^{N} (dp_n/2πℏ)

    × exp{ (i/ℏ) Σ_{n=2}^{N} Δt [ p_n (x_n − x_{n−1})/Δt − H(p_n, x_n) ] }.   (1.50)
We have restored the continuous case notation (i.e., Σ_x → ∫dx; Σ_p → ∫dp/(2πℏ)).
This is the very path integral in the phase space (that is, over coordinates
and momenta), the shorthand notation of which was given above by (1.39). Keep in

mind that here we did not include the normalization factors (m/(2πiℏΔt))^{1/2} in the definition
of Dx. Actually they will be given by integrations over momenta, and there is no
general convention whether such factors should be written explicitly or not.
As you see, the expression (1.50) contains (N − 1) momenta and N coordinates,
but there are N − 2 integrations over coordinates and N − 1 over momenta. As a
result, we have two “loose” coordinates, initial and final ones, as it should be for the
propagator in coordinate representation. But nothing prevents us from calculating a
different matrix element of S, say,  p f |S(t f , ti )| pi . Evidently, this should be the
propagator in momentum representation, giving the probability amplitude for the
particle to change its momentum from pi to p f . You can easily demonstrate that the
corresponding path integral can be written as (see Problem 1.1)

K(p, t; p′, t′) = ∫_{p′(t′)}^{p(t)} D(p/2πℏ) Dx e^{(i/ℏ) S^p[p(t), x(t), t]}.   (1.51)

Thus, path integrations generally do not commute!


The last thing we should do now is to demonstrate that (1.50) yields the initial
expression (1.38), i.e. the path integral in the configuration space. This will com-
plete our argument. To do this, let us take the Hamiltonian function in the form
H(p, x) = p²/2m + V(x). In (1.50) we can then easily integrate out the momenta,
since the corresponding integrals are Gaussian,

∫ (dp_n/2πℏ) e^{(i/ℏ)[ p_n (x_n − x_{n−1}) − (p_n²/2m) Δt ]} = (m/(2πiℏΔt))^{1/2} e^{(i/ℏ) m(x_n − x_{n−1})²/(2Δt)},   (1.52)

and we are back to the initial formula (1.38).
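
The Gaussian momentum integral (1.52) can be spot-checked numerically. The short Python sketch below (purely illustrative; ℏ = m = 1, and the step values are arbitrary) regularizes the conditionally convergent Fresnel integral with a small damping factor exp(−εp²) and compares it with the closed form:

import numpy as np

hbar = m = 1.0
dt_, dx_ = 0.3, 0.7            # arbitrary test values of the time and space steps
eps = 1e-4                     # small damping to make the Fresnel integral absolutely convergent

p = np.linspace(-300.0, 300.0, 800_001)
dp = p[1] - p[0]
integrand = np.exp(1j * (p * dx_ - p**2 * dt_ / (2 * m)) / hbar - eps * p**2)
lhs = np.sum(integrand) * dp / (2 * np.pi * hbar)

rhs = np.sqrt(m / (2j * np.pi * hbar * dt_)) * np.exp(1j * m * dx_**2 / (2 * hbar * dt_))
print("numerical integral :", lhs)
print("closed form (1.52) :", rhs)   # the two agree to a few decimal places
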

1.2.3 Quantum Transport in Mesoscopic Rings: Path Integral


Description

The path integral description as we have introduced it is nice and clear when we
deal with a single particle. Then it seems inapplicable to the problems of condensed
matter, with giant numbers of particles involved: we have to develop a more subtle
approach, equivalent to the technique of Green’s functions, etc.
Nevertheless, there exists a class of solid systems where the single particle
approach holds and gives sensible results, namely, the mesoscopic systems (see,
e.g., [5]). These are the systems of intermediate size, i.e., macroscopic but small
enough (≤10−4 cm). In these systems quantum interference is very important, since
at low enough temperatures (<1 K) the phase coherence length of quasiparticles
(“electrons”) exceeds the size of the system. This means that the electrons preserve
their “individuality” when passing through the system.

Since the wave function of the quantum particle depends on its energy as e^{−iEt/ℏ},
any inelastic interaction spoils the phase coherence. Then the condition

l_φ ≈ l_i > L   (1.53)

must hold. Here lφ is the phase coherence length, li is the inelastic scattering
length, L is the size of the system. The above condition can be satisfied in exper-
iment, due to the fact we have discussed above: that in the condensed matter we
can deal with weakly interacting quasiparticles instead of strongly interacting real
particles.
Because the inelastic scattering length of the quasielectron exceeds the size of the
mesoscopic system, we can regard it as a single particle in the external potential field
and apply to it the path integral formalism in the simplest possible version.

1.2.3.1 Aharonov–Bohm Effect in Normal Metal Rings



Imagine a metal ring threaded by a solenoid with a magnetic flux Φ = ∫ dS · B =
∮_C dx · A, where C is any contour encircling the solenoid. There is no magnetic field
in the bulk.
The conductivity between points A and B is related to the probability for an
electron to travel from A to B, given by a square modulus of amplitude
⟨Bt_B|At_A⟩ = ∫_A^B Dx e^{(i/ℏ) ∫_{t_A}^{t_B} dt L(x, ẋ, t)}.   (1.54)

(We have for the sake of brevity denoted the transition amplitude—propagator—
between the points x A , t A and x B , t B simply by Bt B |At A ; in the next section we
will see that this is not only a shorthand.)
The Lagrange function of the electron in the magnetic field is given by a Legendre
transformation:

L(x, ẋ) = P · ẋ − H (P, x); (1.55)


H(P, x) = (P − (e/c)A)²/(2m*) + V(x).   (1.56)

Here V(x) is a static random potential, A is the vector potential, and P = m*ẋ + (e/c)A


is the canonical momentum.
Performing the transformation (1.55), we find that the Lagrange function is related
to one without the magnetic field, L 0 , by
L(x, ẋ) = (e/c) A · ẋ + L₀(x, ẋ).   (1.57)

Fig. 1.8 hc/2e oscillations in a mesoscopic ring

Therefore, the transition amplitude is


⟨Bt_B|At_A⟩ = ∫_A^B Dx e^{(ie/ℏc) ∫_{t_A}^{t_B} dt A·ẋ} e^{(i/ℏ) ∫_{t_A}^{t_B} dt L₀(x, ẋ, t)}   (1.58)

    = ∫_A^B Dx e^{(ie/ℏc) ∫_{t_A}^{t_B} dt A·ẋ} e^{(i/ℏ) S₀[x, ẋ]}.

There exists a special class of trajectories that loop around the hole and have a self-
intersection (see Fig. 1.8). Each of them has a counterpart with an opposite direction
of motion around the hole. Each pair of thus conjugated trajectories has the same
value of the exp((i/ℏ) S₀[x, ẋ]) factor (since without the magnetic field the motion is
reversible), while the rest of the expression gives

(ie/ℏc) ∫_{t_A}^{t_B} dt A · ẋ = ± (ie/ℏc) ∮ A · dx = ± (ie/ℏc) ∫ rot A · dS = ± (ie/ℏc) Φ.   (1.59)

Then the following term in the transition amplitude arises:

⟨Bt_B|At_A⟩_↔ = ( e^{(ie/ℏc)Φ} + e^{−(ie/ℏc)Φ} ) ∫ Dx e^{(i/ℏ) S₀[x, ẋ]} ≡ 2 F_↔ cos( eΦ/ℏc ).   (1.60)

The transition probability is then

|⟨B|A⟩|² = |⟨B|A⟩_↔|² + |⟨B|A⟩_other|² + 2ℜ ⟨B|A⟩_↔ ⟨B|A⟩*_other.   (1.61)



Fig. 1.9 hc/e oscillations in a mesoscopic ring

The third term vanishes due to phase randomness; the second term does not contain
any pronounced Φ-dependence. But the first one² is periodic in Φ with a period equal
to the superconducting flux quantum, Φ₀ = hc/2e:

|⟨B|A⟩_↔|² = 2|F_↔|² [ 1 + cos( 2πΦ/Φ₀ ) ].   (1.62)

The doubling of the period is, of course, not due to the Cooper pairing and double
electric charge, but due to the simple fact that the transition amplitude contains the
difference between the contributions of particles that encircle the hole clockwise and
counterclockwise, thus doubling the path.
Another type of oscillation originates from a different class of trajectories (see
Fig. 1.9), that run from A to B on the different sides of the hole. Each pair of trajec-
tories from this class produces in the transition probability the term

2ℜ e^{(ie/ℏc)( ∫₁ A·dx − ∫₂ A·dx )} e^{(i/ℏ)( ∫₁ dt L₀(x,ẋ) − ∫₂ dt L₀(x,ẋ) )}   (1.63)

    = 2ℜ e^{(ie/ℏc) ∮ A·dx} e^{iχ₁₂} = 2 cos( 2πΦ/(2Φ₀) + χ₁₂ ).

These oscillations have a doubled period, 2Φ₀ = hc/e, but they include a random
phase, χ12 . Therefore, they are sensitive to the number of possible trajectories of
this class, and quickly vanish when it grows. For example, in the metal rings both
hc/e and hc/2e oscillations were observed, while in the metal cylinders only the

2 It is not so easy to calculate the prefactor F↔ ; but it is not difficult to show that it is small only as
a power of the parameter λ F /L.

latter exist, while the former are averaged to zero. (A cylindrical conductor can be
regarded as a huge number of rings stacked together.)
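
The interplay of the two types of oscillations is easy to mimic numerically. The toy Python sketch below is purely illustrative (the number of trajectory pairs and the random phases χ₁₂ are arbitrary choices): the hc/2e term of (1.62), which carries no random phase, survives the averaging, while the hc/e term of (1.63) is washed out.

import numpy as np

rng = np.random.default_rng(0)
flux = np.linspace(0.0, 4.0, 401)            # Phi in units of Phi_0 = hc/2e
n_pairs = 200                                 # pairs of paths on opposite sides of the hole
chi = rng.uniform(0.0, 2 * np.pi, n_pairs)    # random phases chi_12

# time-reversed loops: period Phi_0, no random phase (Eq. 1.62)
term_hc_2e = 1.0 + np.cos(2 * np.pi * flux)

# pairs of paths on different sides: period 2*Phi_0, random phase (Eq. 1.63)
term_hc_e = np.mean(np.cos(2 * np.pi * flux[:, None] / 2.0 + chi[None, :]), axis=1)

print("amplitude of hc/2e oscillation          :", term_hc_2e.max() - term_hc_2e.min())
print("amplitude of hc/e oscillation (averaged):", term_hc_e.max() - term_hc_e.min())
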

1.3 Perturbation Theory for the Propagator

1.3.1 General Formalism

Though we always can write a path-integral formula (1.38), an explicit expression for
the propagator in the general case cannot be found, neither directly, nor by solving
the Schrödinger equation (1.25). Apart from exactly solvable cases (which are as
beautiful as rare—and worse still, usually known for years), the only regular way to
deal with a problem is to use some sort of perturbation theory.
Fortunately, the propagator formalism is uniquely suited for the task.
We will work again in Dirac’s notation. Let us start with the Schrödinger represen-
tation, where, as you know from quantum mechanics, the operators of observables are
time independent (except possible explicit time dependence), while the state vectors
(wave functions) evolve according to the Schrödinger equation:


iℏ ∂/∂t |Γ(t)⟩_S = H |Γ(t)⟩_S.   (1.64)
If the Hamiltonian is time independent, the formal solution to this is given by

|Γ(t)⟩_S = e^{−(i/ℏ) H t} |Γ(0)⟩_S.   (1.65)

We have operated with the Hamiltonian as if it were a number; of course, the


operator exponent is meaningful only as a power series,
e^{−(i/ℏ) H t} = I − (i/ℏ) H t + (1/2)( −(i/ℏ) H t )² + ··· ,   (1.66)

where I is the unit operator, and the justification of (1.65) is in the fact that we can
rewrite the Schrödinger equation as

|Γ(t)⟩_S = |Γ(0)⟩_S − (i/ℏ) ∫₀ᵗ H |Γ(t′)⟩_S dt′   (1.67)

and then iterate it, which will give us the series for exp(−(i/ℏ) H t).
The operator

U(t) = e^{−(i/ℏ) H t}   (1.68)

is for an obvious reason called the evolution operator. Written in this form, it satisfies
the Schrödinger equation with time-independent Hamiltonian. What if H is time
dependent? For a usual number, the solution would be

e^{−(i/ℏ) ∫₀ᵗ dt′ H(t′)},

but here we are dealing with operators. There is no reason to believe that at different
moments of time, t₁ and t₂, H(t₁) and H(t₂) commute, and the above expression
will be invalid. But we can still iterate the Schrödinger equation,

U(t) = I + ( −i/ℏ ) ∫₀ᵗ dt′ H(t′) U(t′),   (1.69)

to yield

U(t) = I + ( −i/ℏ ) ∫₀ᵗ dt₁′ H(t₁′) + ( −i/ℏ )² ∫₀ᵗ dt₁′ ∫₀^{t₁′} dt₂′ H(t₁′) H(t₂′)

    + ( −i/ℏ )³ ∫₀ᵗ dt₁′ ∫₀^{t₁′} dt₂′ ∫₀^{t₂′} dt₃′ H(t₁′) H(t₂′) H(t₃′) + ···   (1.70)

In the above expression the operators are time-ordered (or chronologically


ordered), that is, the operator of larger time argument always stands to the left. We can
introduce the time-ordering operator T , whose action on any set of time-dependent
operators is exactly this:

T [A(t_A) B(t_B) C(t_C) ···] = { A(t_A) B(t_B) C(t_C) ···   if t_A > t_B > t_C ··· ;
                                 B(t_B) A(t_A) C(t_C) ···   if t_B > t_A > t_C ··· ;
                                 A(t_A) C(t_C) B(t_B) ···   if t_A > t_C > t_B ··· ;
                                 ...                                              (1.71)

This operator allows us to present the series (1.70) in an elegant form:

U(t) = T e^{ −(i/ℏ) ∫₀ᵗ dτ H(τ) }.   (1.72)
Indeed, let us expand the exponent and take the nth term,
(1/n!) T [ ∫₀ᵗ dτ₁ ∫₀ᵗ dτ₂ ··· ∫₀ᵗ dτ_n H(τ₁) H(τ₂) ··· H(τ_n) ].

The n-dimensional integral is taken over the region {0 ≤ τ1 ≤ t; 0 ≤ τ2 ≤


t; . . . ; 0 ≤ τn ≤ t}. We can take a part of this region, where, e.g., τ1 ≥ τ2 ≥ · · · ≥ τn .
The corresponding integral coincides with the nth term in the expansion (1.70), if
we forget about the 1/n! factor. But the integration variables are dummy, and can be
rearranged in exactly n! ways, giving the same result. (The time-ordering operator
will ensure that the operators are always in proper order.) Therefore, we can simply
multiply the result by n!, thus proving the validity of (1.72).
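
The role of the time ordering is easy to see numerically. In the Python sketch below (an illustration with an arbitrary 2×2 time-dependent Hamiltonian; ℏ = 1), the ordered product of short-time factors, i.e. the discrete version of (1.72), differs from the naive, unordered exponential of ∫H dt precisely because H(t₁) and H(t₂) do not commute:

import numpy as np
from scipy.linalg import expm

hbar = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):                        # H(t1) and H(t2) do not commute
    return sx + 2.0 * t * sz

t_final, nsteps = 2.0, 4000
dt = t_final / nsteps
times = (np.arange(nsteps) + 0.5) * dt

# time-ordered exponential: the latest-time factor stands to the left
U = np.eye(2, dtype=complex)
for t in times:
    U = expm(-1j * H(t) * dt / hbar) @ U

# naive (unordered) exponential of the time integral of H
H_int = sum(H(t) * dt for t in times)
U_naive = expm(-1j * H_int / hbar)

print("||U_ordered - U_naive|| =", np.linalg.norm(U - U_naive))   # of order one
print("unitarity check ||U U^dag - 1|| =", np.linalg.norm(U @ U.conj().T - np.eye(2)))
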
We can sum up the important properties of the evolution operator, proceeding
from its definition:

iℏ ∂/∂t U(t) = H(t) U(t);   (1.73)
U†(t) = U⁻¹(t);   (1.74)
U(0) = I.   (1.75)

The second line contains the all-important unitarity condition, which physically
means that probability is not getting lost when the quantum state evolves – if we start
with one particle, we will not end up with 1/4 (or 22/7). Indeed, the norm of the state
vector, related to the probability,
 
‖Γ(t)‖ ≡ ⟨Γ(t)|Γ(t)⟩ = ⟨Γ(0)|U†(t) U(t)|Γ(0)⟩ = ⟨Γ(0)|Γ(0)⟩ = const

is conserved.
Of course, there is nothing special in the moment t = 0, and we can follow the
evolution of the quantum state from any point: evidently, for any t, t′,

|Γ(t)⟩_S = U(t) U†(t′) |Γ(t′)⟩_S ≡ S(t, t′) |Γ(t′)⟩_S,   (1.76)

where the S-operator is defined by

S(t, t′) ≡ U(t) U†(t′).   (1.77)

Now we can, for example, express the wave function of the particle in coordinate
space at time t via its value at some previous time t ↑ , by

Γ(x, t) = ⟨x|Γ(t)⟩_S = ⟨x|S(t, t′)|Γ(t′)⟩_S
    = ∫ dx′ ⟨x|S(t, t′)|x′⟩ ⟨x′|Γ(t′)⟩_S
    = ∫ dx′ ⟨x|S(t, t′)|x′⟩ Γ(x′, t′).   (1.78)

Now we see that it is the very operator S, related to the propagator, that we have
previously introduced (see Eq. 1.41): for t > t ↑ ,

K(x, t; x′, t′) = ⟨x|S(t, t′)|x′⟩.

This operator (sometimes called the S-matrix) has the following properties:


iℏ ∂/∂t S(t, t′) = H S(t, t′);   (1.79)
S(t, t) = I;   (1.80)
S†(t, t′) = S⁻¹(t, t′) = S(t′, t);   (1.81)
S(t, t″) S(t″, t′) = S(t, t′);   (1.82)
for t > t′:  S(t, t′) = T e^{ −(i/ℏ) ∫_{t′}^{t} dτ H(τ) }   ( = e^{−(i/ℏ) H (t − t′)} if H ≠ H(t) ).   (1.83)

Equation (1.81) is the unitarity condition. Equation (1.82) follows directly from the
definition of S and the unitarity of the evolution operator, but it is the very composition
property that we introduced for the propagator in the beginning (see Eq. 1.20).
The last line follows from (1.72). This is an elegant formula, but not very practical:
the Hamiltonian of the system is “of order one,” and the expansion would converge
very slowly, or simply diverge! Fortunately, in most cases the Hamiltonian can be
split into two parts: the unperturbed, time-independent Hamiltonian (for which we
presumably know the solution) and a small, possibly time-dependent perturbation:

H(t) = H0 + W(t).

The goal is to present the solution for H as one for H0 plus corrections in powers of
the small perturbation. The latter series will hopefully be rapidly convergent.
Until now we have worked in the Schrödinger representation, i.e., the state vectors
were time dependent (governed by U(t)), while the operators were constant (if not
explicitly time dependent). The opposite picture is provided by the Heisenberg rep-
resentation. It can be arrived at by the canonical transformation, using the evolution
operators:

|Γ⟩_H = U†(t) |Γ(t)⟩_S ≡ |Γ(0)⟩_S;

A_H(t) = U†(t) A_S U(t).

Now, the operators evolve over time, while state vectors do not. The operators satisfy
the Heisenberg equations of motion:

iℏ (d/dt) A_H(t) = [A_H(t), H_H(t)] + iℏ ∂A_H(t)/∂t.   (1.84)

The above equation follows immediately from the definition of A H and properties of
the evolution operator. Here the partial derivative deals with explicit time dependence
of the operator (say, due to changing of external conditions); the Hamiltonian, if
time dependent, should be taken in the Heisenberg representation as well, H_H(t) =
U†(t) H(t) U(t).
For our goals it is more convenient to employ an intermediate, interaction rep-
resentation, first suggested by Dirac. In this representation both operators and state
vectors are time dependent, but the evolution of operators is governed by the unper-
turbed Hamiltonian (we will not use the index I to label operators and state vectors
in interaction representation, since this will be our working representation):

A(t) = e^{i H₀ t/ℏ} A_S e^{−i H₀ t/ℏ};   (1.85)

iℏ (d/dt) A(t) = [A(t), H₀] + iℏ ∂A(t)/∂t.   (1.86)
The last line is exactly the Heisenberg equation for an operator in an unperturbed
system. (Notice that since H0 is time independent, it is the same in the Schrödinger
and interaction representations.)
The state vectors in interaction representation undergo the corresponding canon-
ical transformation,

|Γ(t)⟩ = e^{i H₀ t/ℏ} |Γ(t)⟩_S,   (1.87)

and obey the equation



iℏ ∂/∂t |Γ(t)⟩ = W(t) |Γ(t)⟩.   (1.88)

(Here W(t) is also in the interaction representation, W(t) = e^{i H₀ t/ℏ} W e^{−i H₀ t/ℏ}, as


you can see when deriving this formula from the original Schrödinger equation. The
trick is that due to unitarity you can insert the operator U(t)U † (t) ≡ I wherever
it is needed.) The benefit of this representation is, therefore, that the state vectors
are affected only by the perturbation. Now we can almost literally repeat all the
calculations from the beginning of this section. For example, iterating (1.88), we
find the solution
|Γ(t)⟩ = T e^{ −(i/ℏ) ∫₀ᵗ dτ W(τ) } |Γ(0)⟩,   (1.89)
with the same time-ordering operator.


Now we can find the explicit expression for the S-operator in the interaction rep-
resentation. The easiest way is to introduce an auxiliary operator O(t, t′) ≡
exp(i H₀ t/ℏ) S_S(t, t′), which, as is easy to see, satisfies the same equation as the state
vector:

iℏ ∂/∂t O(t, t′) = W(t) O(t, t′).
Then, of course,
O(t, t′) = T e^{ −(i/ℏ) ∫_{t′}^{t} dτ W(τ) } O(t′, t′),
so that the S-operator itself can be written as follows (for t > t′):

S_S(t, t′) = e^{−i H₀ t/ℏ} T e^{ −(i/ℏ) ∫_{t′}^{t} dτ W(τ) } e^{i H₀ t′/ℏ}.   (1.90)

This is the so-called Dyson’s expansion for the S-operator in Schrödinger represen-
tation. Transforming the U-operators according to (1.85), we get that in interaction
representation the S-operator takes the simple form

S(t, t′) = e^{i H₀ t/ℏ} S_S(t, t′) e^{−i H₀ t′/ℏ} = T e^{ −(i/ℏ) ∫_{t′}^{t} dτ W(τ) }   (t > t′).   (1.91)

It looks as if it depends only on the perturbation! (Of course, the unperturbed Hamil-
tonian is hidden in W(τ ); but we presumably know how everything behaves without
perturbation.)
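
Formula (1.91) can be checked directly in a small matrix example. In the Python sketch below (illustrative 2×2 matrices H₀ and W, ℏ = 1, all values arbitrary), the exact evolution operator e^{−iHt/ℏ} is compared with e^{−iH₀t/ℏ} times the time-ordered exponential of W taken in the interaction representation (1.85):

import numpy as np
from scipy.linalg import expm

hbar = 1.0
H0 = np.diag([0.3, 1.1]).astype(complex)
W = 0.2 * np.array([[0.0, 1.0], [1.0, 0.5]], dtype=complex)

t_final, nsteps = 3.0, 5000
dt = t_final / nsteps

def W_int(t):                      # W in the interaction representation, Eq. (1.85)
    u0 = expm(1j * H0 * t / hbar)
    return u0 @ W @ u0.conj().T

S = np.eye(2, dtype=complex)       # time-ordered exponential, latest times to the left
for k in range(nsteps):
    t = (k + 0.5) * dt
    S = expm(-1j * W_int(t) * dt / hbar) @ S

lhs = expm(-1j * (H0 + W) * t_final / hbar)
rhs = expm(-1j * H0 * t_final / hbar) @ S
print("|| exact - (free x Dyson) || =", np.linalg.norm(lhs - rhs))   # small, limited by the time step
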
We have expressed the propagator as a matrix element of the S-operator. There is
another expression, sometimes useful in the many-body case. In order to arrive at it, let
us return to Heisenberg representation with its time-dependent operators. The coor-
dinate operator X H (t) will be time dependent as well. In Schrödinger representation
this operator had time-independent eigenstates, which constituted the complete basis
of the Hilbert space:

X_S |x⟩ = x |x⟩;   (1.92)

Σ_x |x⟩⟨x| = I.   (1.93)

(In coordinate representation they are simply delta functions, in momentum repre-
sentation, plane waves.) Let us now introduce the set of instantaneous eigenstates of
the coordinate operator in the Heisenberg representation, {|x t⟩}:

X_H(t) |x t⟩ = x |x t⟩.   (1.94)



Since X_H(t)|x t⟩ = U†(t) X_S U(t)|x t⟩, then U(t)|x t⟩ = |x⟩, and we see that the time
evolution of these states is governed by U†(t) instead of U(t), and they still constitute
a complete basis at any moment t:

|x t⟩ = U†(t) |x⟩;   (1.95)

Σ_x |x t⟩⟨x t| = Σ_x U†(t) |x⟩⟨x| U(t) = U†(t) U(t) = I.   (1.96)

Now we can rewrite the propagator in coordinate space simply as an overlap of two
states from this basis (cf. Eq. (1.54) of the previous section!):

K(x, t; x′, t′) = ⟨x|S(t, t′)|x′⟩ ≡ ⟨x|U(t) U†(t′)|x′⟩ = ⟨x t|x′ t′⟩.   (1.97)

From this expression straightforwardly follows an expression for the propagator in


the momentum space:

K(p, t; p′, t′) = ∫ dx ∫ dx′ ⟨p|x⟩ ⟨x t|x′ t′⟩ ⟨x′|p′⟩
    = ∫ dx ∫ dx′ ⟨p t|x t⟩ ⟨x t|x′ t′⟩ ⟨x′ t′|p′ t′⟩   (1.98)
    = ⟨p t|p′ t′⟩.

The states {|p t⟩} are, of course, instantaneous eigenstates of the momentum operator
in the Heisenberg representation, P_H(t).

1.3.2 An Example: Potential Scattering

The above formulae are very general: actually they are applicable to any quantum
system, notwithstanding the number of particles and type of interaction. This was
one reason why we went to such lengths to derive them: we will use them later
throughout this book.
Now let us apply them to our initial case of one, structureless, quantum particle.
Now we have a single option for the perturbation operator, a scalar external potential,
so that its coordinate matrix element is

⟨x|W(t)|x′⟩ = V(x, t) δ(x − x′).   (1.99)



Fig. 1.10 Feynman diagram for the potential scattering

Table 1.1 Feynman rules for a particle in the external potential field

Propagator:  K(x, t; x′, t′)
Free (unperturbed) propagator:  K₀(x, t; x′, t′) = (m/(2πiℏ(t − t′)))^{3/2} exp[ im(x − x′)²/(2ℏ(t − t′)) ] θ(t − t′)
External potential (in the interaction representation):  −iV(x, t)/ℏ

The integration over all intermediate coordinates and times is implied

Table 1.2 Feynman rules for a particle in the external potential field (momentum representation)

Propagator:  K(p, E; p′, E′)
Free (unperturbed) propagator:  K₀(p, E; p′, E′) = (2π)⁴ iℏ δ(p − p′) δ(E − E′) / (E − p²/2m + i0)
Fourier transform of the external potential:  −iV(p, E)/ℏ

The integration over all intermediate momenta and energies is implied, taking into account energy/
momentum conservation in every vertex

It is natural then to work in coordinate representation; taking a corresponding matrix


element of (1.90) we obtain the perturbation expansion for the propagator:

K(x, t; x′, t′) = K₀(x, t; x′, t′)
    + ∫ dx″ dt″ K₀(x, t; x″, t″) ( −i/ℏ ) V(x″, t″) K₀(x″, t″; x′, t′) + ··· .   (1.100)

This expression is presented graphically in Fig. 1.10, the elements of which are
explained in Table 1.1.
Of course, our discourse is not limited to the coordinate representation; as a matter
of fact, it is more often than not easier to use momentum representation. The Feynman
rules for the momentum representation can be found in Table 1.2 (see Problem 1.2).

The graph in Fig. 1.10 is the simplest example of a Feynman diagram. In this
case its use seems superfluous, because of the simple structure of the perturbation
involved. In the many-body case, though, the structure of the terms entering the
perturbation series is much more complicated, and the graphs provide great help
in comprehending their structure and making physically consistent approximations.
The graph under consideration, e.g., suggests to us a clear picture of a quantum
particle repeatedly scattered by an external potential, but propagating freely between
the scattering acts. It will be useful to look into how (and whether) this intuitive
picture fits into a path-integral description of the behavior of the quantum particle.
We shall see that this very result can indeed be easily derived directly from formula
(1.37) for the propagator, in a slightly changed form;

K(x_N t_N; x_1 t_1) = lim_{N→∞} ∫_{−∞}^{∞} dx_{N−1} ∫_{−∞}^{∞} dx_{N−2} ··· ∫_{−∞}^{∞} dx_2

    × ∏_{n=2}^{N} (m/(2πiℏΔt))^{d/2} e^{(i/ℏ) Σ_{n=2}^{N} Δt m(x_n − x_{n−1})²/(2Δt²)} e^{−(i/ℏ) Σ_{k=2}^{N} Δt V(x_k, t_k)}.

All we need to do is expand the exponents containing potential and rearrange this
expression as a power series over the external potential, V .
The zero-order term is, evidently, unperturbed propagator, K 0 (x N t N ; x1 t1 ). The
first-order term is

K₁(x_N t_N; x_1 t_1)
    = lim_{N→∞} ∫_{−∞}^{∞} dx_{N−1} ∫_{−∞}^{∞} dx_{N−2} ··· ∫_{−∞}^{∞} dx_2 ∏_{n=2}^{N} (m/(2πiℏΔt))^{d/2}

    × e^{(i/ℏ) Σ_{n=2}^{N} Δt m(x_n − x_{n−1})²/(2Δt²)} [ −(i/ℏ) Σ_{k=2}^{N} Δt V(x_k, t_k) ].

We can rewrite the last expression as


K₁(x_N t_N; x_1 t_1)
    = lim_{N→∞} { −(i/ℏ) Σ_{k=2}^{N} Δt ∫···∫ dx_{N−1} dx_{N−2} ··· dx_2
        ∏_{n=2}^{N} (m/(2πiℏΔt))^{d/2} e^{(i/ℏ) Σ_{n=2}^{N} Δt m(x_n − x_{n−1})²/(2Δt²)} V(x_k, t_k) }

    = lim_{N→∞} { −(i/ℏ) Σ_{k=2}^{N} Δt ∫_{−∞}^{∞} dx_k [ ∫···∫ dx_{k−1} dx_{k−2} ··· dx_2
        ∏_{n=2}^{k} (m/(2πiℏΔt))^{d/2} e^{(i/ℏ) Σ_{n=2}^{k} Δt m(x_n − x_{n−1})²/(2Δt²)} ] V(x_k, t_k)

        × [ ∫···∫ dx_{N−1} dx_{N−2} ··· dx_{k+1}
        ∏_{n=k+1}^{N} (m/(2πiℏΔt))^{d/2} e^{(i/ℏ) Σ_{n=k+1}^{N} Δt m(x_n − x_{n−1})²/(2Δt²)} ] }.

Now we see that

K₁(x_N t_N; x_1 t_1) = ∫_{t_1}^{t_N} dt ∫_{−∞}^{∞} dx K₀(x_N t_N; x t) ( −(i/ℏ) V(x, t) ) K₀(x t; x_1 t_1)

    ≡ ∫_{−∞}^{∞} dt ∫_{−∞}^{∞} dx K₀(x_N t_N; x t) ( −(i/ℏ) V(x, t) ) K₀(x t; x_1 t_1)

(the last transformation has taken into account that for t < t1 or t > t N the integrand
is identically zero). It is clear from our derivation that indeed, in the path integral
picture we can regard the effects of external potential as a result of multiple scatterings
of an otherwise free particle.
The next terms of the expansion can be derived in the same way. Factors 1/n! in
the expansion of the exponent will be canceled because we will have exactly n! ways
to relabel the points xk , tk where the scattering occurs.
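
The structure of the expansion (1.100) and of Fig. 1.10, i.e. a free propagator interrupted by repeated scatterings off the perturbation, is conveniently illustrated in a finite-dimensional toy model. In the Python sketch below (my own illustration: H₀ and V are small random Hermitian matrices standing in for the kinetic term and the potential, and the propagators are resolvents at a fixed energy), the geometric series of scatterings converges to the full propagator:

import numpy as np

rng = np.random.default_rng(1)
n = 6
H0 = np.diag(rng.uniform(0.0, 1.0, n))                 # "free" energies
V = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
V = 0.02 * (V + V.conj().T)                            # small Hermitian perturbation

E = 2.0 + 1e-6j                                        # energy away from the spectrum of H0
I = np.eye(n)
K0 = np.linalg.inv(E * I - H0)                         # free propagator
K_exact = np.linalg.inv(E * I - H0 - V)                # full propagator

# Born-like series: K = K0 + K0 V K0 + K0 V K0 V K0 + ...
K_series = K0.copy()
term = K0.copy()
for _ in range(25):
    term = term @ V @ K0
    K_series += term

print("|| series - exact || =", np.linalg.norm(K_series - K_exact))   # negligibly small
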

1.4 Second Quantization

1.4.1 Description of Large Collections of Identical Particles:


Fock’s Space

By now, we have successfully handled a single quantum particle. We have written


the propagator as a path integral, obtained the perturbation series in the presence of
external scattering potential, and explained the magnetoconductance oscillations in
mesoscopic systems. A clear overkill: it would suffice to employ the Schrödinger
equation and standard methods from the theory of partial differential equations. But
thus we have prepared ourselves to the formidable task of dealing with macroscopic
collectives of quantum particles.
To write explicitly the wave function for such a system, even in the simplest case
of a collective of N ≈ 1023 identical particles, is as impossible as to determine
the momentary position and velocity of each particle in its classical counterpart.

In classical statistical mechanics, though, the problem is avoided by introducing the


distribution functions, giving the probabilities of, e.g., finding a particle with velocity
v1 at point r1 , and another with v2 at r2 , and so on. In other words (and with other
normalization constant), the distribution functions give information on how many
particles occupy each cubicle of the phase space.
Of course, there is a whole hierarchy of them, including one, two,…N-particle
distributions functions, and together they contain exactly the same amount of infor-
mation as the record of velocities and positions of all particles in the system. The
enormous advantage is that we usually need only the few first functions of this hier-
archy. Hopefully, such an approach will pay off for the quantum particles as well.
The difference is that in quantum statistics we have to operate somehow with
the wave function of the system as a whole, Ψ(ξ₁, ξ₂, ..., ξᵢ, ..., ξⱼ, ..., ξ_N),
and have trouble in trying to "glue" characteristics to individual particles per se.
Moreover, identical quantum particles cannot be distinguished in principle. This
quantum-mechanical principle of indistinguishability of identical particles imposes
the following property on the wave function. If we exchange two particles, the wave
function can only acquire a phase factor (since its modulus, which determines the
observable effects of such interchange, should stay the same):

Ψ(ξ₁, ξ₂, ..., ξᵢ, ..., ξⱼ, ...) = e^{iχ} Ψ(ξ₁, ξ₂, ..., ξⱼ, ..., ξᵢ, ..., ξ_N).   (1.101)

After making the second permutation of the same particles, we have

Ψ(ξ₁, ξ₂, ..., ξᵢ, ..., ξⱼ, ...) = e^{2iχ} Ψ(ξ₁, ξ₂, ..., ξᵢ, ..., ξⱼ, ..., ξ_N),   (1.102)

so that e^{iχ} = ±1, and we are left with two choices:

Ψ(ξ₁, ξ₂, ..., ξⱼ, ..., ξᵢ, ...)
    = +Ψ(ξ₁, ξ₂, ..., ξᵢ, ..., ξⱼ, ...)   (Bose–Einstein statistics)
    = −Ψ(ξ₁, ξ₂, ..., ξᵢ, ..., ξⱼ, ...)   (Fermi–Dirac statistics)   (1.103)

The N-particle wave function Ψ(ξ₁, ξ₂, ...) can be expanded over a complete
set of functions, which are provided, e.g., by the eigenfunctions of the one-particle
Hamiltonian, H₁:

Ψ(ξ₁, ξ₂, ..., ξ_N) = Σ C_{p₁, p₂, ..., p_N} φ_{p₁}(ξ₁) φ_{p₂}(ξ₂) ··· φ_{p_N}(ξ_N);
H₁ φ_j(ξ) = ε_j φ_j(ξ).

Here ξ j denotes the coordinates and spin of the jth particle, and p j labels the one-
particle state. Very often (but not always) one chooses for φ_j(ξ) plane waves, φ_j(ξ) ∝
exp(i(p_j x_j − ε_j t)/ℏ). This is an excellent choice when dealing with a uniform infinite
system. Otherwise, it is more convenient to use a different complete set of one-particle
functions that would explicitly express the symmetry of the problem or nontrivial
boundary conditions.

The condition of (anti)symmetry (1.103) means that we can use only properly
symmetrized products of one-particle functions. For bosons we have thus

Ψ^B_{N₁,N₂,...}(ξ₁, ξ₂, ..., ξ_N) ≡ |N₁, N₂, ...⟩^{(B)}
    = √( N₁! N₂! ··· / N! ) Σ_P φ_{p₁}(ξ₁) φ_{p₂}(ξ₂) ··· φ_{p_N}(ξ_N).   (1.104)

Here the nonnegative number Ni shows how many times the ith one-particle
eigenfunction φi enters the product (N1 +N2 +· · · = N , the number of particles in the
system). It is called the occupation number of the ith one-particle state. Summation is
extended over all distinguishable permutations of indices { p1, p2, . . . , pN }. Notice
that since all the N j are nonnegative and add up to N , the sequence N0 , N1 , . . .
always contains a rightmost nonzero term, N jmax , followed by zeros to infinity.
For fermions we use Slater’s determinants

Ψ^F_{N₁,N₂,...}(ξ₁, ξ₂, ..., ξ_N) ≡ |N₁, N₂, ...⟩^{(F)}

    = (1/√N!) det
        | φ_{p₁}(ξ₁)    φ_{p₁}(ξ₂)    ···   φ_{p₁}(ξ_N)  |
        | φ_{p₂}(ξ₁)    φ_{p₂}(ξ₂)    ···   φ_{p₂}(ξ_N)  |   (1.105)
        |    ···            ···        ···       ···        |
        | φ_{p_N}(ξ₁)   φ_{p_N}(ξ₂)   ···   φ_{p_N}(ξ_N) |.

The properties of determinants guarantee the necessary antisymmetry of the wave


function. Indeed, a transmutation of two particles in this case corresponds to trans-
mutation of two columns in the determinant, which by definition changes its sign.
Then, if two columns are equivalent, the determinant equals zero. Physically this
means that two fermions cannot occupy the same quantum state (Pauli principle).
Given the basis of one-particle states, any N-particle wave function is completely
defined by the set of occupation numbers and can be written as |N₁, N₂, ...⟩^{(B,F)}.
The set of states |{N_j}⟩ will now provide us a basis for the Hilbert space of N-particle
states (for Bose or Fermi system).
Do we really need now a condition N j = N , which seems rather awkward?
After all, as often as not in a solid-state problem the system under consideration
can exchange particles with the exterior. On the other hand, (quasi)particles (like
phonons) can be created and annihilated, so that their total number will fluctuate.
Can’t we simply give the occupation numbers arbitrary values?
As a matter of fact, no. If we had no limitations on N j , the set |N j  would be
non-denumerably infinite; each element would be more like a dot on the real axis
line, than an integer, and a space spanned by such a set would possess unpleasant
mathematical properties and be very unlike a “N-is-very-big-but-finite-dimensional”
vector space, which we are accustomed to. Luckily, not all “infinite” states are bad,
only "actually infinite" ones, like |1, 1, 1, 1, 1, 1, ...⟩. On the other hand, if we keep
the condition Σ_j N_j = N but allow N to be arbitrarily large, the problem is solved.
The set of states |{N_j}⟩ satisfying that condition is called the [0]-set and is denumerably
infinite. Indeed, since for each state Σ_j N_j < ∞, there is some N_{j_max}, and the product


(Σ_j N_j)·j_max is a finite nonnegative integer, say M. Moreover, there is only a finite
number of states |{N_j}⟩ with the same value of M. Therefore we can count all states
in the [0]-set (first n 0 states for M = 0, then n 1 states for M = 1, and so on ad
infinitum). This exactly means that there are “as many” states in the [0]-set as integer
numbers; i.e., the set is denumerably infinite.
The Hilbert space spanned by a [0]-set is called Fock’s space, and it is in Fock’s
space that second-quantized operators act. The state vectors here, as we have said,
are defined by the corresponding set of the occupation numbers, and the second
quantized operators change these numbers. Thus, any operator can be represented
by some combination of basic creation/annihilation operators, which act as follows
(we will establish the proper factors a little later):
Annihilation operator:

c_j |..., N_j, ...⟩ ∝ |..., N_j − 1, ...⟩.   (1.106)

Creation operator:

c_j† |..., N_j, ...⟩ ∝ |..., N_j + 1, ...⟩.   (1.107)

Evidently, any element of the [0]-set can be obtained by the repeated action of creation
operators on the vacuum state (state with no particles) |0⟩ = |0, 0, 0, 0, ...⟩:

|N₁, N₂, ..., N_j, ...⟩ ∝ (c₁†)^{N₁} (c₂†)^{N₂} ··· (c_j†)^{N_j} ··· |0⟩.   (1.108)

The vacuum state is annihilated by any annihilation operator:

c|0 = 0. (1.109)

What is extremely important to keep in mind is that while in the representation of


second quantization we explicitly deal with occupation numbers only, our calcula-
tions make sense only as long as we can point out the correct (consistent with the
system’s properties) one-particle basic functions. This may also be stated as a prob-
lem of choice of the vacuum state, for starting from a wrong vacuum we never build
the proper Fock space. (It turns out that sentences like “I know that you don’t have
a dog, but I must know what sort of dog you don’t have” are quite reasonable when
you deal with a vacuum state of a many-body system.) We will see a good example
of this problem later, in superconductivity.

1.4.2 Bosons

Let us begin with a one-particle operator



F₁ = Σ_j f₁(ξ_j).   (1.110)

Here f 1 (ξ j ) is an operator, acting on a one-particle state of the system φ(ξ j ), and


summation is taken over all particles. An example is provided by the kinetic energy
operator,

K = Σ_j ( −ℏ²/2m ) ∇_j².   (1.111)

Let us take a matrix element of F₁ between two N-particle Bose states, ⟨Ψ′_B|F₁|Ψ_B⟩.
Since in order to make a number (matrix element) of an operator f₁(ξ_j) we need
two one-particle wave functions, and since any two such functions φᵢ(ξ), φⱼ(ξ) are
orthogonal for i ≠ j, there will be only two sorts of nonzero matrix elements of
the operator F₁: (a) diagonal, and (b) between the states |Ψ′_B⟩, |Ψ_B⟩, which differ
by one particle, which from some (initial) state φᵢ(ξ) was transferred to another
(final) state φ_f(ξ): |Ψ_B⟩ = |..., Nᵢ, ..., N_f − 1, ...⟩, and |Ψ′_B⟩ = |..., Nᵢ − 1, ..., N_f, ...⟩.
The diagonal matrix element is

⟨Ψ_B|F₁|Ψ_B⟩ = Σ_a ( N₁! ··· N_a! ··· / N! ) ∫ ··· ∫ dξ₁ dξ₂ ··· dξ_N
    × Σ_{p,p′} P_s[φ*_{p′₁}(ξ₁) φ*_{p′₂}(ξ₂) ··· φ*_{p′_N}(ξ_N)] f₁(ξ_a) P_s[φ_{p₁}(ξ₁) φ_{p₂}(ξ₂) ··· φ_{p_N}(ξ_N)].

Here Ps [. . .] denotes symmetrization of indices. Due to the fact that sets of in-
dices pi and pi↑ coincide, pi = pi↑ . Let us say that the particle that is affected
by the operator is in the state  pa . After we take the (diagonal) matrix element of
f 1 (ξa ),  pa | f 1 (ξa )| pa  = dξa φi∗ (ξa ) f 1 (ξa )φi (ξa ), and calculated the other inte-
grals (which are all equal to one due to orthonormality), we can symmetrically
rearrange the rest of the occupied states in (N − 1)! ways. This must be divided by
N1 !, N2 !, . . . , (Na − 1)!, . . ., because we cannot distinguish between N j ! possible
rearrangements of identical particles occupying the same, jth, state. (An equivalent
way to state this is that there are (N − 1)!/(N1 !N2 ! · · · (Na − 1)! · · · ) ways to choose
one-particle wave functions to be acted upon by f 1 (ξa ).) Therefore, we obtain the
expression

⟨Ψ_B|F₁|Ψ_B⟩
    = Σ_a Σ_{p_a} ( N₁! N₂! ··· N_a! ··· / N! ) ( (N − 1)! / (N₁! ··· (N_a − 1)! ···) ) ⟨p_a|f₁(ξ_a)|p_a⟩
    = Σ_a Σ_{p_a} ( N_a / N ) ⟨p_a|f₁(ξ_a)|p_a⟩
    = Σ_q N_q ⟨q|f₁|q⟩,   (1.112)

and we no longer need to show explicitly on the coordinates of what particle the
operator f 1 acts.

Now let us calculate the off-diagonal matrix elements of the operator. This time
on the left side there will be one extra function φ∗f (ξ), and on the right one extra
φᵢ(ξ), so that

⟨Ψ′_B|F₁|Ψ_B⟩ = Σ_a [ N₁! ··· (Nᵢ − 1)! ··· N_f! ··· / N! ]^{1/2}
    × [ N₁! ··· Nᵢ! ··· (N_f − 1)! ··· / N! ]^{1/2}
    × ∫ ··· ∫ dξ₁ dξ₂ ··· dξ_N Σ_{p,p′} P_s[φ*_{p′₁}(ξ₁) φ*_{p′₂}(ξ₂) ··· φ*_{p′_N}(ξ_N)]
    × f₁(ξ_a) P_s[φ_{p₁}(ξ₁) φ_{p₂}(ξ₂) ··· φ_{p_N}(ξ_N)].

These unmatched functions must then be integrated with the operator to yield
⟨f|f₁(ξ_a)|i⟩ = ∫ dξ_a φ*_f(ξ_a) f₁(ξ_a) φᵢ(ξ_a), while the rest can be rearranged in
(N − 1)!/(N₁! N₂! ··· (Nᵢ − 1)! ··· (N_f − 1)! ···) ways. The result is

⟨Ψ′_B|F₁|Ψ_B⟩ = Σ_a √(Nᵢ N_f) ( N₁! N₂! ··· (Nᵢ − 1)! ··· (N_f − 1)! ··· / N! )
    × ( (N − 1)! / (N₁! ··· (Nᵢ − 1)! ··· (N_f − 1)! ···) ) ⟨f|f₁(ξ_a)|i⟩

    = √(Nᵢ N_f) ⟨f|f₁|i⟩.   (1.113)

Now we are in a position to employ the creation/annihilation operators described


earlier. For the bosons they are often denoted by b† , b. We define them with the
following factors:
Bose annihilation operator:

b_j |..., N_j, ...⟩_B = √N_j |..., N_j − 1, ...⟩_B.   (1.114)

Thus it has a single nonzero matrix element, ⟨N_j − 1|b_j|N_j⟩ = √N_j. It equals its
complex conjugate, which according to the rules of quantum mechanics is given by
⟨N_j − 1|b_j|N_j⟩* = ⟨N_j|b_j†|N_j − 1⟩. This means that the creation and annihilation
operators as we defined them are indeed Hermitian conjugate (in the Bose case so
far), and
Bose creation operator:

b_j† |..., N_j, ...⟩_B = √(N_j + 1) |..., N_j + 1, ...⟩_B.   (1.115)

The combinations b_j b_j†, b_j† b_j are evidently diagonal: ⟨N_j|b_j b_j†|N_j⟩ = N_j + 1;
⟨N_j|b_j† b_j|N_j⟩ = N_j. The latter combination, for evident reasons, is called the

occupation number operator (or particle number operator):

N j ≡ b†j b j ; (1.116)
N j |N j  = N j |N j .

The commutator of the two operators is then [b_j, b_j†] = 1. It is straightforward to
check that generally,

[ b_j, b_k† ] = δ_{jk};   (1.117)
[ b_j†, b_k† ] = [ b_j, b_k ] = 0.

These are the Bose commutation relations; we could start from them, and the demand
that they are satisfied would completely determine the structure of the many-particle
Bose wave function.
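
These relations are easy to verify in a truncated occupation-number basis. The quick Python check below (my own illustration; the basis is cut off at an arbitrary n_max, which is what spoils the commutator in its last diagonal entry) builds the matrices of b and b† defined by (1.114)–(1.115):

import numpy as np

nmax = 8
b = np.diag(np.sqrt(np.arange(1, nmax)), k=1)    # b |N> = sqrt(N) |N-1>
bdag = b.T                                        # creation operator

N_op = bdag @ b                                   # occupation number operator (1.116)
print("diag of b^dag b  :", np.round(np.diag(N_op)).astype(int))     # 0, 1, ..., nmax-1

comm = b @ bdag - bdag @ b
print("diag of [b,b^dag]:", np.round(np.diag(comm), 6))
# all entries equal 1 except the last one, which is an artifact of truncating the basis
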
Returning to our one-particle operator, we see that it can be expressed via cre-
ation/annihilation operators as follows:

F₁ = Σ_{i,f} ⟨f|f₁|i⟩ b_f† b_i.   (1.118)

Indeed, the matrix elements of this operator between any two states from Fock's
space are the same as we have calculated above. Intuitively, the expression looks
evident: a particle is being "teleported" (annihilated in state |i⟩ and then created
in state |f⟩), that is, scattered. A less evident and very useful property of the above
expression is that the coefficients ⟨f|f₁|i⟩ are just matrix elements of a one-particle
operator f₁ between corresponding one-particle states. Not only have we expressed
the operator via b†, b, but we have also reduced F₁ = Σ_a f₁(ξ_a) to a constituent
operator f₁ in the process. This suggests that we rewrite Eq. (1.118) as

F₁ = ∫ dξ φ̂†(ξ) f₁ φ̂(ξ).   (1.119)

1.4.2.1 Bosonic Field Operators

In the above equation we have introduced the so-called field operators, φ̂† (ξ), φ̂(ξ),
which are by definition

φ̂(ξ) = Σ_p φ_p(ξ) b_p;

φ̂†(ξ) = Σ_p φ*_p(ξ) b_p†.   (1.120)

Evidently, these operators also act in Fock space.3 What do they create or annihilate?
The operator b_p†, e.g., creates a particle with a wave function φ_p(ξ′). The operator
φ̂†(ξ) thus creates a particle with a wave function

Σ_p φ*_p(ξ) φ_p(ξ′) = δ(ξ − ξ′)

(we have used the completeness of the basis of one-particle states). That is, the field
operator creates (or annihilates) a particle at a given point. An important density
operator,

ρ(ξ) = φ̂†(ξ) φ̂(ξ) = Σ_p |φ_p(ξ)|² b_p† b_p ≡ Σ_p |φ_p(ξ)|² N_p,   (1.121)

evidently gives the density of particles at a given point.


The commutation relations for bosonic field operators immediately follow from
the definition and Eq. (1.117):

[ φ(ξ, t), φ†(ξ′, t) ] = δ(ξ − ξ′),

[ φ(ξ, t), φ(ξ′, t) ] = [ φ†(ξ, t), φ†(ξ′, t) ] = 0.   (1.122)

Time dependence, of course, can enter the above equations either through the opera-
tors b† , b (Heisenberg representation) or through the basic functions φ p (Schrödinger
representation), or both (interaction representation). What is important is that definite
commutation relations exist only between field operators taken at the same moment
of time.
Looking at the definition, Eq. (1.120), one sees that field operators are built of the
annihilation/creation operators in the same way as an expansion of a single-particle
wave function in a generalized Fourier series over some complete set of functions
{φᵢ}_{i=0}^{∞}:

Γ(ξ) = Σᵢ φᵢ(ξ) Cᵢ;

φ̂(ξ) = Σᵢ φᵢ(ξ) bᵢ.

3 We will omit the hats over the field operators when it does not create confusion.

That is why the method is called second quantization: it looks as if we quantize the
quantum wave function one more time, transforming it into an operator! (See also
Problem 3 to this chapter.)

1.4.2.2 Operators of Observables in the Second Quantization Formalism

Using field operators, we can write any operator in the second-quantization form
almost without thinking. The recipe is that any n-particle operator

F_n = (1/n!) Σ_{j₁ ≠ j₂ ≠ ··· ≠ j_n} f_n(ξ_{j₁}, ξ_{j₂}, ...)

(where f_n(ξ_{j₁}, ξ_{j₂}, ...) acts on the coordinates of particles number j₁, j₂, ..., j_n)
in the formalism of second quantization is given by

F_n = (1/n!) ∫ dξ₁ dξ₂ ··· dξ_n φ̂†(ξ₁) φ̂†(ξ₂) ··· φ̂†(ξ_n)
    × f_n(ξ₁, ξ₂, ..., ξ_n) φ̂(ξ_n) φ̂(ξ_{n−1}) ··· φ̂(ξ₁).   (1.123)

This is exactly the sort of expression that we would obtain if we calculated the average
value of an operator f n using one-particle wave functions; here they are substituted
by field operators. The factorial simply takes into account that there are n! versions
of the above expression differing only by the arrangement of indices 1, 2, . . . , n.
(Being operators, the φ̂’s are sensitive to their order; for this particular recipe to be
correct, the ordering of the field operators should be as shown; the outermost couple
of operators should have the same argument, etc.)
This rule can be derived for arbitrary n, though three-and more-particle interac-
tions (collisions) are found rarely. For one-particle operators it is correct; it is enough
to look at our previous results. We therefore sketch here a proof for n = 2.
In this case
F₂ = (1/2) Σ_{a ≠ b} f₂(ξ_a, ξ_b).

(An example: scalar pair interaction, where f 2 (ξa , ξb ) is simply a scalar potential
energy U (|ξa − ξb |).) We will calculate its matrix elements in the same way as we
did it for a one-particle operator. Besides diagonal ones, there are only the following
nonzero matrix elements:

⟨N_f, Nᵢ − 1, Nⱼ − 1|F₂|N_f − 2, Nᵢ, Nⱼ⟩;

⟨N_f, N_g, Nᵢ − 1, Nⱼ − 1|F₂|N_f − 1, N_g − 1, Nᵢ, Nⱼ⟩;

⟨N_f, N_g, Nᵢ − 2|F₂|N_f − 1, N_g − 1, Nᵢ⟩;

⟨N_f, Nᵢ − 2|F₂|N_f − 2, Nᵢ⟩

(corresponding to all possible transitions of two particles); and, of course,

⟨N_f, Nᵢ − 1|F₂|N_f − 1, Nᵢ⟩

(affecting only one particle). Evidently, the operator F₂ must be of the form

Σ_{m,n,p,q} C_{m,n,p,q} b_m† b_n† b_p b_q,

and we have only to find the coefficients.


Let us calculate the matrix element ⟨N_f, N_g, Nᵢ − 1, Nⱼ − 1|F₂|N_f − 1, N_g − 1, Nᵢ, Nⱼ⟩,
corresponding to scattering of two particles from states |i⟩, |j⟩ into two
different states |f⟩, |g⟩.

⟨N_f, N_g, Nᵢ − 1, Nⱼ − 1|F₂|N_f − 1, N_g − 1, Nᵢ, Nⱼ⟩
    = (1/2) Σ_{a ≠ b} √( ··· N_f! ··· N_g! ··· (Nᵢ − 1)! ··· (Nⱼ − 1)! ··· / N! )
    × √( ··· (N_f − 1)! ··· (N_g − 1)! ··· Nᵢ! ··· Nⱼ! ··· / N! )
    × ∫ dξ₁ ··· dξ_N Σ_{p,p′} P_s[φ*_{p′₁}(ξ₁) ··· φ*_{p′_N}(ξ_N)] f₂(ξ_a, ξ_b) P_s[φ_{p₁}(ξ₁) ··· φ_{p_N}(ξ_N)].

In this expression there are unmatched states, φ*_f, φ*_g on the left and φᵢ, φⱼ on the
right, which will be integrated with the operator f₂ to yield:

∫ dξ_a dξ_b ( φ*_f(ξ_a) φ*_g(ξ_b) f₂(ξ_a, ξ_b) φᵢ(ξ_a) φⱼ(ξ_b)
    + φ*_g(ξ_a) φ*_f(ξ_b) f₂(ξ_a, ξ_b) φᵢ(ξ_a) φⱼ(ξ_b)
    + φ*_f(ξ_a) φ*_g(ξ_b) f₂(ξ_a, ξ_b) φⱼ(ξ_a) φᵢ(ξ_b)
    + φ*_g(ξ_a) φ*_f(ξ_b) f₂(ξ_a, ξ_b) φⱼ(ξ_a) φᵢ(ξ_b) )

(we have explicitly written all terms following from symmetrization, P_s). The sym-
metrization of the rest (N − 2) one-particle states produces the combinatorial factor
(N − 2)!/( ··· (N_f − 1)! ··· (N_g − 1)! ··· (Nᵢ − 1)! ··· (Nⱼ − 1)! ··· ), so that gathering
these results we obtain

⟨N_f, N_g, Nᵢ − 1, Nⱼ − 1|F₂|N_f − 1, N_g − 1, Nᵢ, Nⱼ⟩
    = (1/2) Σ_{a ≠ b} √( ··· N_f! ··· N_g! ··· (Nᵢ − 1)! ··· (Nⱼ − 1)! ··· / N! )
    × √( ··· (N_f − 1)! ··· (N_g − 1)! ··· Nᵢ! ··· Nⱼ! ··· / N! )
    × ( (N − 2)! / ( ··· (N_f − 1)! ··· (N_g − 1)! ··· (Nᵢ − 1)! ··· (Nⱼ − 1)! ··· ) )
    × ( ⟨fg|f₂|ij⟩ + ⟨gf|f₂|ij⟩ + ⟨fg|f₂|ji⟩ + ⟨gf|f₂|ji⟩ )

    = (1/2) Σ_{a ≠ b} ( 1/(N(N − 1)) ) √(N_f N_g Nᵢ Nⱼ) ( ⟨fg|f₂|ij⟩ + ⟨gf|f₂|ij⟩ + ⟨fg|f₂|ji⟩ + ⟨gf|f₂|ji⟩ )

    = (1/2) √(N_f N_g Nᵢ Nⱼ) ( ⟨fg|f₂|ij⟩ + ⟨gf|f₂|ij⟩ + ⟨fg|f₂|ji⟩ + ⟨gf|f₂|ji⟩ ).
If we take the same matrix element of Σ_{m,n,p,q} C_{m,n,p,q} b_m† b_n† b_p b_q, we obtain
for each b†b†bb the combination

⟨N_f, N_g, Nᵢ − 1, Nⱼ − 1|b_m† b_n† b_p b_q|N_f − 1, N_g − 1, Nᵢ, Nⱼ⟩
    = √(N_f N_g Nᵢ Nⱼ) ⟨N_f, N_g, Nᵢ − 1, Nⱼ − 1|N_f, N_g, Nᵢ − 1, Nⱼ − 1⟩
    × ( δ_{mf} δ_{ng} δ_{pi} δ_{qj} + δ_{mg} δ_{nf} δ_{pi} δ_{qj} + δ_{mf} δ_{ng} δ_{pj} δ_{qi} + δ_{mg} δ_{nf} δ_{pj} δ_{qi} ),

which means that C_{m,n,p,q} = (1/2)⟨mn|f₂|pq⟩. Now, this is the very coefficient that
we obtain if we write the expression for F₂ following the general recipe, and then
express the field operators through b†, b. You can check that this is the case for the rest
of the matrix elements as well.

1.4.3 Number and Phase Operators and Their


Uncertainty Relation

We have introduced the occupation number operator N . In the simplest case of a sin-
gle harmonic oscillator it would describe the number of quanta—that is, the amplitude
of the oscillations. This is a well-defined observable in the classical limit. Therefore,
it is reasonable to ask, what its conjugate operator is, if such can be constructed,
and what would be the corresponding classical variable. The canonical commutation
relation between this hypothetical Hermitian operator, ϕ̂, and N should be

[N , ϕ̂] = −i. (1.124)

(We do not write ℏ here for reasons that will be clear shortly.)

The Hamiltonian of the harmonic oscillator can be written as


H = ℏω₀ ( N + 1/2 ),   (1.125)

where N = b† b, and b† , b are Bose operators: [b, b† ] = 1. (This is a fairly


standard exercise in any textbook on quantum mechanics, e.g. [6].) Using (1.124) in
the Heisenberg equation of motion for ϕ̂, we find

iℏ dϕ̂(t)/dt = [ϕ̂, H] = iℏω₀.   (1.126)

Therefore, ϕ̂(t) = ω₀ t is the phase of the oscillation. We come to the conclusion,


that the phase and the number of quanta (i.e., the amplitude) look like conjugate
observables—something that sounds quite reasonable. An immediate important con-
sequence of this should be the number–phase uncertainty relation (but see later):

ΔN · Δϕ ≥ 1/2.   (1.127)
2
It means that you cannot measure phase and amplitude of an oscillator simultaneously
with an arbitrary degree of accuracy. (You may ask, where is the Planck constant in
here? Is it a classical uncertainty relation? Of course not, as you can check by direct
observation of a pendulum. The ℏ is hidden in ΔN. Indeed, the number of quanta of
an oscillator is its classical energy E divided by ℏω₀, so that ℏ is in the denominator
of the left-hand side of (1.127).)
More generally, we can attempt to introduce a Hermitian operator ϕ̂ demanding
that the creation/annihilation Bose operators could be presented as

b = e^{−iϕ̂} √N;   (1.128)
b† = √N e^{iϕ̂}.

We see that b†b = √N e^{iϕ̂} · e^{−iϕ̂} √N = N. On the other hand, bb† = e^{−iϕ̂} N e^{iϕ̂},
and this must equal b†b + 1. Therefore, we must have

[e−i ϕ̂ , N ] = e−i ϕ̂ , (1.129)

which is identically satisfied if

[N , ϕ̂] = −i, (1.130)

exactly the commutation relations we discussed. (This can be verified by expanding


the exponent and commuting it with N term by term.)
The definition (1.128) makes perfect sense in the classical limit. In this case the
number of quanta is huge (we are speaking of bosons here). The difference between
b† b and bb† is only one. Therefore, N becomes essentially a classical number, n,

and so does ϕ̂. The operators b†, b themselves become complex conjugate numbers,
their phase and amplitude given by √n and ϕ. (Recalling the definition of the second-
quantized wave function, Γ(x) = Σ_n (b_n φ_n(x) + b_n† φ*_n(x)), we see that it becomes
a classical field – like an electromagnetic field (many photons), or a sound wave
(many phonons).)
Unfortunately, the situation is more complicated than that. For example, it is clear
that for the eigenstates of N, N|n⟩ = n|n⟩, where ΔN = 0, Eq. (1.127) would
imply Δϕ = ∞. This is strange, since the phase is defined only modulo 2π, and its
maximum uncertainty should not exceed 2π.
A subtler analysis (see [1]) shows that the phase does not make a good quantum-
mechanical “observable,” which means that the operator ϕ̂ introduced earlier cannot
be Hermitian.
The sketch of a proof goes as follows. If ϕ̂ is Hermitian, then Û ≡ e−i ϕ̂ must be
unitary,

Û Û † = Û † Û = I.

Here Û† = e^{iϕ̂}. On the other hand, acting by (1.129) on |n⟩, we obtain

N Û|n⟩ = Û(N − I)|n⟩ = (n − 1) Û|n⟩.

This means that Û|n⟩ = |n − 1⟩. Note that Û must annihilate the vacuum state,
Û|0⟩ = 0. (Otherwise we could repeatedly act on Û|0⟩ by Û, producing states with
negative occupation numbers!)
In a similar way we find that Û†|n⟩ = |n + 1⟩.
Now, for any n = 0, 1, 2, ... we see that ⟨m|ÛÛ†|n⟩ = ⟨m|Û|n + 1⟩ = ⟨m|n⟩ =
δ_{mn}, and therefore, ÛÛ† = I. This holds for Û†Û as well, but only for positive n,
because ⟨0|Û†Û|0⟩ = 0. Therefore, Û†Û ≠ I, and ϕ̂ cannot be a well defined Her-
mitian operator. This is the result of the fact that the states with negative occupation
numbers do not exist.
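
This argument is transparent in matrix form. The small Python illustration below (my own sketch, with an arbitrary truncation of the number basis) represents Û as the one-sided shift Û|n⟩ = |n − 1⟩, Û|0⟩ = 0:

import numpy as np

nmax = 6
U = np.zeros((nmax, nmax))
for n in range(1, nmax):
    U[n - 1, n] = 1.0              # U |n> = |n-1>,  U |0> = 0

UUdag = U @ U.T
UdagU = U.T @ U
print("diag of U U^dag :", np.diag(UUdag))   # 1,...,1,0  (the last 0 is only a truncation artifact)
print("diag of U^dag U :", np.diag(UdagU))   # 0,1,...,1  (the 0 on |0> is the real obstruction)
print("<0| U^dag U |0> =", UdagU[0, 0])      # 0, so U^dag U is not the identity
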
On the bright side, it can be shown that the above approach provides a good
approximation if N ≫ 1 (which is, fortunately, what we are usually dealing with).
(The deviation of the occupation number from its average value N is confined to the
interval [−N, ∞); at large N, it is "almost" (−∞, ∞).) The relation (1.127) holds
for the states with small Δϕ.⁴
Keeping in mind these caveats, we can infer from (1.124) that in the basis of
eigenstates of ϕ̂ the action of operators ϕ̂ and N on a wave function Γ(ϕ) ≡ (ϕ|Γ)
is given by

ϕ̂Γ(ϕ) = ϕΓ(ϕ), (1.131)


N Γ(ϕ) = (1/i) ∂Γ(ϕ)/∂ϕ.   (1.132)

4For the eigenstates of N , as one should expect, the results are close to the uniform distribution of
ϕ in the interval [0, 2π).

The eigenstates of N in this basis are evidently

⟨ϕ|n⟩ ∝ e^{inϕ},   (1.133)

N ⟨ϕ|n⟩ = (1/i) ∂⟨ϕ|n⟩/∂ϕ = n ⟨ϕ|n⟩,   (1.134)

and in the basis of eigenstates of N,

N Γ(n) ≡ N ⟨n|Γ⟩ = n Γ(n),   (1.135)

we formally have

ϕ̂ Γ(n) = −(1/i) ∂Γ(n)/∂n.   (1.136)
These representations are related by a Fourier transform:


Γ(n) = ∫₀^{2π} (dϕ/2π) e^{inϕ} Γ(ϕ);   (1.137)

Γ(ϕ) = Σ_n e^{−inϕ} Γ(n).   (1.138)

We will see how useful this proves when dealing with transport in small super-
conductors. The phase there is the superconducting phase; the bosons are Cooper
pairs of electrons – fermions – which is certainly nontrivial.

1.4.4 Fermions

In the beginning, we state that the formal results of the above section hold for Fermi
statistics as well. We can introduce fermionic field operators in the same way as
bosonic ones, and use the same recipe to write down a second-quantized expression
for any n-particle operator Fn . The only difference (and a world of difference!) is that
now instead of Bose creation/annihilation operators we use the Fermi ones, a † , a.
Unlike the previous case, due to the Pauli exclusion principle, for any state of the
Fermi system and for any one-particle state p, a_p† a_p† |Ψ⟩_F = 0 and a_p a_p |Ψ⟩_F = 0;
i.e., (a_p†)² = (a_p)² = 0. (In formal mathematical language they are called nilpotent
operators.) In order to find their matrix elements, we again return to the case of a
one-particle operator F₁ and calculate its matrix elements between fermionic states
|Ψ_F⟩, which are now given by Slater determinants (1.105),

Ψ^F_{N₁,N₂,...}(ξ₁, ξ₂, ..., ξ_N) ≡ |N₁, N₂, ...⟩^{(F)}

    = (1/√N!) det
        | φ_{p₁}(ξ₁)    φ_{p₁}(ξ₂)    ···   φ_{p₁}(ξ_N)  |
        | φ_{p₂}(ξ₁)    φ_{p₂}(ξ₂)    ···   φ_{p₂}(ξ_N)  |
        |    ···            ···        ···       ···        |
        | φ_{p_N}(ξ₁)   φ_{p_N}(ξ₂)   ···   φ_{p_N}(ξ_N) |.

To fix the sign of the wave function, set p1 < p2 < · · · < p N .
For a one-particle operator the only nonzero off-diagonal matrix elements are of
the form ⟨1_f, 0ᵢ|F₁|0_f, 1ᵢ⟩ (here we explicitly use the fact that in the Fermi system
any occupation number N_p is either 0 or 1; indices i, f show what occupied or
empty state is affected by the operator). Using the definition of a determinant, we
can write

⟨1_f, 0ᵢ|F₁|0_f, 1ᵢ⟩ = Σ_a (1/N!) ∫ dξ₁ ··· dξ_N Σ_{P,P′} (−1)^P (−1)^{P′}
    × φ*_{p′₁}(ξ₁) ··· φ*_{p′_a}(ξ_a) ··· f₁(ξ_a) φ_{p₁}(ξ₁) ··· φ_{p_a}(ξ_a) ···,

where the sums are taken over all particles and all permutations of indices P[p₁, p₂,
..., p_N], P′[p′₁, p′₂, ..., p′_N], with (−1)^P, (−1)^{P′} being the parities of the corresponding
permutations.
There is an extra φᵢ(ξ) on the right-hand side and an extra φ*_f(ξ) on the left-
hand side, and they must be connected by the operator f₁(ξ_a). Therefore, in all terms
contributing to the matrix element, the permutations P and P′ differ only by a single
index, f , instead of i:

P : p1 p2 . . . i . . . p N ;
P ↑ : p1 p2 . . . f . . . p N .

These permutations have relative parity (−1) Q , where Q is the number of occupied
states between i and f (that is, with i < p < f , if i < f ). Indeed, the parity of
the permutation P is (−1) P , where P is the number of steps you need to achieve it
starting from the ordered set of indices pa < pb < · · · < i < · · · (we can transpose
two adjacent indices at each step). To obtain the permutation P ↑ , we replace i in the
initial set with the index of the final state, f . Now we first have to put index f in
place; if between i and f there is no occupied state, then it is already in place; if
not, we have to make exactly Q steps, to order the set of indices, after which they
can be rearranged in the same P steps. Therefore, the relative parity of P and P ↑
is (−1) Q .
This allows us to write for the matrix element

⟨1_f, 0ᵢ|F₁|0_f, 1ᵢ⟩ = ⟨f|f₁|i⟩ (−1)^Q,

while for the diagonal matrix element, of course,

⟨Ψ_F|F₁|Ψ_F⟩ = Σ_j ⟨j|f₁|j⟩ N_j.

Here we keep the same notation for the matrix elements of operator f 1 as in Bose
case.
Evidently, the creation/annihilation operators in the Fermi case, a†, a, will have
only one nonzero matrix element each. We define them to be

⟨0_j|a_j|1_j⟩ = ⟨1_j|a_j†|0_j⟩ = (−1)^{Σ_{s=1}^{j−1} N_s}.   (1.139)

First let us check whether the particle number operator N_j in the Fermi case is still
a_j† a_j. The answer is positive, since

⟨0_j|a_j† a_j|0_j⟩ = 0,   ⟨1_j|a_j† a_j|1_j⟩ = 1,

and N_j can take no other values.


The only nonzero matrix element of a "transmission" operator a_f† a_i is

⟨1_f, 0ᵢ|a_f† a_i|0_f, 1ᵢ⟩ = ⟨1_f, 0ᵢ|a_f†|0_f, 0ᵢ⟩ ⟨0_f, 0ᵢ|a_i|0_f, 1ᵢ⟩
    = (−1)^{Σ_{s=1}^{f−1} N′_s} (−1)^{Σ_{z=1}^{i−1} N_z} = (−1)^Q

(the prime means that in the left sum the occupation numbers are calculated after
a particle in the initial state was annihilated. E.g., if i < f, then the resulting
expression is (−1)^{Σ_{s=i}^{f−1} N′_s} = (−1)^{Σ_{s=i+1}^{f−1} N_s} ≡ (−1)^Q.) This confirms our guess
that a one-particle operator can indeed be written via a † , a operators in the same
way as in Bose case,


N
F1 =  f | f 1 |ia †f ai .
f,i=1

On the other hand, for the operator ai a †f we obtain

1 f , 0i |ai a †f |0 f , 1i  = 1 f , 0i |ai |1 f , 1i  1 f , 1i |a †f |0 f , 1i 
f −1 ↑ i−1
= (−1) s=1 Ns (−1) z=1 Nz

= (−1)(Q+1) .
1.4 Second Quantization 49

Table 1.3 Second quantization representation of operators


1 N
Fn = n! a1 =a2 =··· =an=1 f n (ξa1 , . . . , ξan ) n-particle operator

φ̂(ξ, t) = Nj=1 φ j (ξ, t)b j (t) Field operator (annihilation)

φ̂† (ξ, t) = Nj=1 φ∗j (ξ, t)b†j (t) Field operator (creation)
 
Fn = n! 1
dξ1 · · · dξn φ̂† (ξn , t) · · · φ̂† (ξ1 , t) f n (ξ1 , · · · , ξn )φ̂(ξ1 , t) · · · φ̂(ξn , t)

This gives us the Fermi(anti)commutation relations,

{ai , a †j } = δi j , {ai , a j } = {ai† , a †j } = 0. (1.140)

Here {, }, as usual, denotes an anticommutator, {A, B} ≡ AB + BA. In its turn,


this immediately yields Fermi (anti)commutation relations for the field operators,

{ψ(ξ, t), ψ † (ξ ↑ , t)} = δ(ξ − ξ ↑ )


{ψ(ξ, t), ψ(ξ ↑ , t)} = {ψ † (ξ, t), ψ † (ξ ↑ , t)} = 0. (1.141)

(Fermi field operators are built from a † , a and one-particle wave functions in the
same way as in the Bose case, and (1.141) follows from the completeness of those
functions’ set.)
As an exercise, you can check that the same recipe as in the Bose case holds, e.g.,
for two-particle operators. But we can now finally conclude our considerations on
how to express operators of observables in the second quantization representation
(Table 1.3)

1.5 Problems

• Problem 1

Derive the path integral expression for the single-particle propagator in momentum
space, starting from the definition:

↑ ↑ ↑ ↑ ↑ Dp i
K (p, t; p , t ) = p, t|p , t θ(t − t )  Dx e  S(p,x) ,
(2π)3

where |p, t is an eigenvector of the Heisenberg momentum operator p̂ H (t) ==


U † (t) p̂ U(t).
What form has the action S(p, x) in this case?
• Note that in classical mechanics the Lagrange function “is always uncertain to a
total time derivative of any function of the coordinates and time” ([4], pp. 2–5),
say d/dt(p(x, ẋ)x).
50 1 Basic Concepts

Show that the result coincides with the Fourier transform of the coordinate-space
propagator:

i i ↑ ↑
K (p, t; p↑ , t ↑ ) = d 3x d 3 x↑ e−  px K (x, t; x↑ , t ↑ )e  p x

• Problem 2
Derive the Feynman rules in momentum space for the scattering by the potential

d 3 pd E i (px−Et)
V (x, t) = e v(p, E).
(2π)4

• Problem 3
Suppose the interactions of particles in the system are described by a scalar pair
potential u,
1
W= u|xa − xb |,
2
a =b

and the Hamiltonian is thus

H = K(kin.energy) + W.

Write down the expression for K and W in the second quantized form. Write the
equations of motion for field operators (Bose and Fermi case) in Heisenberg and inter-
action representations. Compare them to one-particle Schrödinger equation. What is
the difference?

References

1. Carruthers, P., Nieto, M.M.: Phase and angle variables in quantum mechanics. Rev. Mod. Phys.
40, 411 (1967)
2. Feynman, R.P., Hibbs, A.R.: Quantum Mechanics and Path Integrals. McGraw-Hill, New York
(1965)
3. Gardiner, C.W.: Handbook of Stochastic Methods for Physics, Chemistry, and the Natural Sci-
ences. Springer, Berlin (1985)
4. Goldstein, H.: Classical Mechanics. Addison-Wesley, Reading (1980)
5. Imry, Y.: Physics of mesoscopic systems. In: Grinstein, G., Mazenko, G. (eds.) Directions in
Condensed Matter Physics: Memorial Volume in Honor of Shang-Keng Ma. World Scientific,
Singapore (1986)
6. Landau, L.D., Lifshitz, E.M.: Quantum mechanics, non-relativistic theory. A Course of theo-
retical physics, vol. III, pp. 64–65. Pergamon Press, New York (1989) (A concise and clear
explanation of the second quantization formalism)
7. Ryder, L.: Quantum field theory. Chapter 5, Cambridge University Press, New York (1996) (A
discussion of path integrals in quantum mechanics and field theory)
References 51

8. Washburn, S., Webb, R.A.: Quantum transport in small disordered samples from the diffusive to
the ballistic regime. Rep. Progr. Phys. 55, 1311 (1992) (A review of theoretical and experimental
results on mesoscopic transport)
9. Ziman, J.M.: Elements of advanced quantum theory, Cambridge University Press, Cambridge
(1969) (An excellent introduction into the very heart of the method of Green’s functions)
Chapter 2
Green’s Functions at Zero Temperature

Men say that the Bodhisat Himself drew it with grains of rice
upon dust, to teach His disciples the cause of things. Many ages
have crystallised it into a most wonderful convention crowded
with hundreds of little figures whose every line carries a
meaning.

Rudyard Kipling. “Kim.”

Abstract Green’s functions as a tool for probing the response of a many-body system
to an external perturbation. Similarity and difference from a one-particle propagator.
Statistical ensembles. Definition of Green’s functions at zero temperature. Analytical
properties of Green’s functions and their relation to quasiparticles. Perturbation the-
ory and diagram techniques for Green’s functions at zero temperature. "Dressing"
of particles and interactions: Polarization operator and self energy. Many-particle
Green’s functions.

2.1 Green’s Function of The Many-Body System: Definition


and Properties

2.1.1 Definition of Green’s Functions of the Many-Body System

When we discussed one-particle states in the second quantization representation,


from the formally mathematical point of view any complete set of functions depen-
dent on the coordinates (spin, etc.) of one particle would work. But from the point of
view of physics it is not so: the set must be chosen in a “physically reasonable way.”
That is, since we virtually never can solve our equations exactly and not always can
provide an “epsilon-delta” -style estimate of the approximations involved, the initial
setup must be as close as possible to the solution that we are striving to reach.

A. Zagoskin, Quantum Theory of Many-Body Systems, 53


Graduate Texts in Physics, DOI: 10.1007/978-3-319-07049-0_2,
© Springer International Publishing Switzerland 2014
54 2 Green’s Functions at Zero Temperature

For the one-particle state this means that it should be relatively stable, thus possess-
ing some measurable characteristics and giving us a palatable zero-order approxima-
tion: the one of independent quasiparticles. (We have discussed this at some length
in Chap. 1. Here we are going to elaborate that qualitative discussion and see how it
fits the general formalism of perturbation theory developed so far.)
We have seen that the field operator satisfies the “Schrödinger equation”

π
i ρ(ε, t) = (E(ε, t) + V(ε, t))ρ(ε, t)
πt 
+ dε √ ρ † (ε √ , t)U(ε √ , ε)ρ(ε √ , t)ρ(ε, t) + · · · (2.1)

Here E and V are operators of kinetic energy and external potential; U describes
instantaneous particle–particle interactions, and so on. Evidently, if the basic set of
one-particle functions is chosen correctly, the leading term in this equation will be
the one-particle one; i.e.,

π
i ρ(ε, t) → (E(ε, t) + V(ε, t))ρ(ε, t). (2.2)
πt
This is a mathematical demonstration of the fact that we have a system of weakly
interacting objects—“quasiparticles”—which can be approximately described by
(2.2). Since the deviations from this description are small, the lifetimes of these
objects are large enough to measure their characteristics in some way, and so they
are reasonably well defined. And as often as not they are drastically different from
the properties of free particles, already due to the presence of the V-term in the above
equation. Let us consider this point in more detail.
For example, when we investigate the properties of electrons in a metal, the rea-
sonable first approximation is to take into account the periodic potential of the crystal
lattice, neglecting for a while both electron—electron interactions and “freezing” the
ions at their equilibrium positions. Even in this crude approximation the properties
of these “quasielectrons” are very different from those of a QED electron, with its
mass of 9.109 ×10−28 g and electric charge of −4.803 × 10−10 esu. Its mass is
now, generally, anisotropic and may be significantly less or more; its charge may
become positive; its momentum is no longer conserved due to the celebrated Umk-
lapp processes; and there can exist several different species of electrons in our system!
(See Fig. 2.1.)
On the other hand, when we are interested in the properties of the crystal lattice
of the very metal, we quantize the motion of the ions, and come to the concept of
phonons. These are quasiparticles, if there are any, because outside the lattice phonons
simply don’t exist, while inside it they thrive. They even interact with electrons and
each other—through terms like the third one in (2.1).
Of course, our ultimate goal is to take into account these other terms as well. Then
we will have some other objects, governed by an equation like (2.2) without any
extra interaction—and they will be the actual quasiparticles in our system.
2.1 Green’s Function of The Many-Body System: Definition and Properties 55

Fig. 2.1 Quasiparticles in the Fermi system

There is no contradiction here. In Chap. 1 we have seen how interactions “dress”


a bare particle, making of it a quasiparticle; here we see it being made in two steps.
The first step (a right choice of the basic set of one-particle states) is made without
addressing any perturbation theory, usually based on symmetry considerations (or
common physical sense), as when translation symmetry of the crystal lattice forces
us to describe otherwise free electrons in terms of Bloch functions instead of simple
plane waves, and we use the concept of phonons as a more adequate description of
low-energy dynamics of the ions. Once chosen, this set of states (“basic quasipar-
ticles,” if you wish) plays the very same role as the states of free particles in the
absence of an external potential, and these two sets of states have a lot in common.
For example, they live infinitely long (because by definition they have definite energy,
Eλ j = E j λ j . If—as is the case in most books—we are dealing with a “liquid” of
interacting fermions on a uniform background, the most natural choice of “basic
quasiparticles” is real particles.
Therefore, later on we will call them simply “particles”, while reserving the term
“quasiparticles” (or “elementary excitations”) par excellence for the ones “dressed”
due to interactions with other particles. (The assumption that the description of
the Fermi liquid can be based on the picture of a weakly interacting gas of such
quasifermions was in the foundation of Landau’s phenomenological theory.)
56 2 Green’s Functions at Zero Temperature

Fig. 2.2 Fermi surface and Fermi sphere

Now we can turn to the building of such a theory in the many-body case. We will
begin with the case of single-component normal, uniform, and homogeneous Fermi
and Bose system (the above-mentioned textbook case) at zero temperature.
This is the simplest possible and practically important case, since it does not
involve superfluid (superconducting) condensate. (The discussion of the latter we
postpone until Chap. 4.) In the Fermi case it applies to nonsuperconducting metals
and semiconductors—if we forget for a while about the subtleties of band structures.
Alkali metals are especially good examples (Fig. 2.2).
In the Bose case the example seems purely academic (since bosons must undergo
Bose condensation at zero) until we recall that there is at least one practically impor-
tant system of bosons that don’t condense: phonons! (This is because the number of
phonons is not conserved, but this is not important when we use the grand potential
formalism.)
In Chap. 1 we introduced the one-particle propagator
   
K (x, t; x √ , t √ ) = x|S(t, t √ )|x √ = xt|x √ t √

as a transmission amplitude of a particle between points (x √ , t √ ) and (x, t). A


straightforward generalization
 of the former expression is a matrix element of the
N -particle S-operator |S(t, t √ )|√ . Unfortunately, it is useless, since it a involves
transmission amplitude involving N → 1023 particles. On the other hand, the one-
particle propagator in the latter form suggests that we could look at two states with
a single particle excited,
2.1 Green’s Function of The Many-Body System: Definition and Properties 57

|x, t∇state ≡ ρ † (x, t)|state∇;


|x √ , t √ ∇state ≡ ρ † (x √ , t √ )|state∇

(here the Heisenberg field operator ρ † (x, t) creates a particle at a given point), and
introduce Green’s function as their overlap:

Green’s function  ≈state|ρ(x, t)ρ † (x √ , t √ )|state∇.

Of course, we must average over the states of the many-body system on which our
field operators act, in order to get rid of all other nonmacroscopic variables except
the two coordinates and moments of time between which the quasiparticle travels.
Such an averaging of an operator A (both quantum and statistical) is achieved by
taking its trace with the statistical operator (density matrix) of the system, α̂,

≈A∇ = tr(α̂A). (2.3)

Now, the above formula indeed looks like a propagator, describing a process when
we add to our system of N identical fermions one extra particle, let it propagate
from (x √ , t √ ) to (x, t), and then take it away. It is a good probe of particle-particle
interactions in the system. The other option would be first to take away the particle,
look at how the resulting hole propagates, and then fill it, restoring the particle
(Fig. 2.3).
Then the one-particle causal Green’s function, describing both processes, can be
defined by the expression

G δδ√ (x, t; x√ , t √ )
 ⎡  ⎡
= −i ρδ (x, t)ρδ† √ (x√ , t √ ) (t − t √ ) ∓ i ρδ† √ (x√ , t √ )ρδ (x, t) (t √ − t)
 ⎡
≡ −i T ρδ (x, t)ρδ† √ (x√ , t √ ) . (2.4)

Here we write spin indices explicitly. They can take two values (e.g., up and down)
for fermions (and only one for phonons).1
Averaging defined by (2.3) is a linear operation. Using this property, we can apply
the time differentiation operator, π/πt, to T ρδ (x, t)ρδ† √ (x√ , t √ ), and average it to
obtain

1 It can be shown that no matter what the spin of real fermions, the (basic) quasiparticles will have
spin 1/2 (though there will be several types of quasiparticles); see [4], §1).
58 2 Green’s Functions at Zero Temperature

Fig. 2.3 Causal Green’s function: the physical sense

π
i G (x1 , t1 ; x2 , t2 ) (2.5)
πt1 δω
= E(x1 )G δω (x1 , t1 ; x2 , t2 ) + V(x1 , t1 )G δω (x1 , t1 ; x2 , t2 )
  ⎡
− i dx3 U (x3 , x1 ) T ρ∂† (x3 , t1 )ρ∂ (x3 , t1 )ρδ (x1 , t1 )ρω† (x2 , t2 ) + · · ·

+ θ(x1 − x2 )θ(t1 − t2 ). (2.6)

We see that the many-body Green’s function defined above is not a Green’s func-
tion in the mathematical sense. It is a solution to differential Eq. (2.6). This is not
a closed equation for Green’s function, since it contains averages of four and more
field operators (two-particle, three-particle, etc. Green’s functions). Thus (2.6) is only
the first equation in the quantum analogue to the well known infinite BBGKY chain
(Bogoliubov–Born–Green–Kirkwood–Yvon) in classical statistical mechanics. The
latter consists of interlinked equations of motion for classical n-particle distribution
functions (see e.g. in Chap. 3, [2]).
As in the classical case, breaking this chain leads to a nonlinear differential
equation for Green’s function, as distinct from the linear Eq. (1.25), which governs
the one-particle propagator K(x, t; x √ , t √ ).
The averaging procedure in equilibrium can be performed most conveniently
using either the canonical or grand canonical ensemble. Mathematically, the two
reflect different choices of independent variables: (T, V, N : temperature, volume,
2.1 Green’s Function of The Many-Body System: Definition and Properties 59

Fig. 2.4 Ensemble averaging. a Canonical ensemble; b grand canonical ensemble

number of particles) versus ( T, V, μ; temperature, volume, chemical potential).


The physical difference is that in the former case the system can exchange only
energy with its surroundings (thermostat), while in the latter the particles can leave
and enter the system as well (Fig. 2.4), the average number of particles being fixed
by the chemical potential, μ, as the temperature fixes the average kinetic energy of
the particles in both cases.
The statistical operators are of Gibbs form,

α̂C E = eω(F−H) , (2.7)

or

α̂GC E = eω(Γ−H ) . (2.8)

Here ω = 1/T (we put k B = 1 to simplify the notation); F = −(1/ω) × ln


tre−ω H is the free energy; H√ = H − μN , N is the particle number operator; and

the grand potential Γ = −(1/ω) ln tre −ω H = −P V . (The Lagrange term −μN
evidently commutes with the rest of the Hamiltonian.) We can therefore introduce
Heisenberg and interaction representations using H√ instead of H and come to the
same results, with the only difference being in corresponding eigenvalues of the
Hamiltonian.
In the thermodynamic limit both approaches are equivalent. The situation may
change when the size of the system under consideration becomes small enough, so
that the fluctuations of the particle number cannot be ignored, and the two ensembles
describe physically different systems (such as an isolated conducting grain vs. one
connected to massive conductors by leads). At present, we will not deal with such a
situation. Since the grand canonical ensemble is easier to work with, we will use it,
and—to simplify notation—will henceforth omit primes in H√ and put  = 1.
60 2 Green’s Functions at Zero Temperature

The Hamiltonian now has the complete set of eigenstates |n∇ with eigenvalues
E n√ = E n − μNn , where Nn is the number of particles in state |n∇. The average of
a time-ordered product of two field operators in Heisenberg representation can now
be written as
 ⎡
T ρ(t1 )ρ † (t2 )

= tr(eω(Γ−H) T ρ(t1 )ρ † (t2 ))



= ≈n|eω(Γ−H) |m∇≈m|T ρ(t1 )ρ † (t2 )|n∇≈m|m∇−1 ≈n|n∇−1
n,m

= eωΓ−ω(E n −μNn ) ≈n|T ρ(t1 )ρ † (t2 )|n∇≈n|n∇−1
n
⎣ ⎣
= e−ω(E n −μNn ) ≈n|T ρ(t1 )ρ † (t2 )|n∇≈n|n∇−1 / e−ω(E n −μNn ) . (2.9)
n n

(We have used the standard trick of inserting the complete set of states, and allow
for the possibility that they are not normalized to unity.)
At zero temperature (ω → ∼) we are left with

≈0|T ρδ (x1 , t1 )ρω† (x2 , t2 )|0∇


G δω (x1 , t1 ; x2 , t2 ) = −i . (2.10)
≈0|0∇

Here |0∇ is the exact ground state of the system in Heisenberg representation: it
is time independent and includes all interaction effects.
In a homogeneous and isotropic system in a stationary state, Green’s function can
depend only on differences of coordinates and times:

G δω (x1 , t1 ; x2 , t2 ) = G δω (x1 − x2 , t1 − t2 ). (2.11)

If, moreover, the system is not magnetically ordered and is not placed in and
external magnetic field, then spin dependence in (2.11) reduces to a unit matrix:

G δω = θδω G, (2.12)

where G = 21 tr G δω (otherwise there would be a special direction in space, the


axis of spin quantization).

2.1.1.1 Unperturbed Green’s Functions

It is straightforward now to calculate the unperturbed Green’s functions, starting


from the definition. For the fermions,

1 ⎡
G 0 (x, t) = T ρ(x, t)ρ † (0, 0) . (2.13)
i 0
2.1 Green’s Function of The Many-Body System: Definition and Properties 61

Expanding the field operators in the basis of plane waves, ρ(x, t) = ∝1


 V
ikx−i(ψk −μ)t , and taking into account that at zero temperature in equilibrium
a
k k e
the average ≈ak† ak√ ∇0 = θk,k√ n F (ψk ) = θk,k√ φ(μ − ψk ), we find that

1 ⎣
G 0 (x, t) = [φ(t)(1 − φ(μ − ψk )) − φ(−t)φ(μ − ψk )]eikx−i(ψk −μ)t . (2.14)
iV
k

The Fourier transform of this expression yields, finally,

1
G 0 (p, χ) =
χ − (ψp − μ) + i0sgn(ψp − μ)
1
= . (2.15)
χ − (ψp − μ) + i0sgnχ

The infinitesimal term in the denominator indicates in what half-plane of complex


frequency the corresponding integrals will converge, exactly
⎤ ∼ like what we had earlier
for the retarded propagator. (For example, the integral 0 dteiχt due to the first term
in (2.14) converges if ∗χ → 0 + .) The difference is that here we have the causal
Green’s function, which contains both φ(t) and φ(−t).
You are welcome to calculate the expression for the unperturbed phonon Green’s
function, defined as

D(x, t; x√ , t √ ) = −i≈0|T τ(x, t)τ(x√ , t √ )|0∇, (2.16)

where the phonon field operator is

1 ⎣ ⎦ χk 1/2 i(kx−χk t)
τ(x, t) = ∝ bk e + bk† e−i(kx−χk t) , (2.17)
V k 2

and b, b† are the usual Bose operators. (Since the phonon field is ultimately a
quantized sound wave, it should be Hermitian to yield in the classical limit a classical
observable, the medium displacement.) You will see that

χk2
D 0 (k, χ) = . (2.18)
χ 2 − χk2 + i0

The unperturbed Green’s function is the Green’s function in the mathematical


sense. For the fermions, e.g., it satisfies a linear equation,
 
π
i − E(x1 ) G 0δω (x1 , t1 ; x2 , t2 ) = θ(x1 − x2 )θ(t1 − t2 ). (2.19)
πt1

Symbolically this can be written as


62 2 Green’s Functions at Zero Temperature

(G 0 )−1 (1)G 0 (1, 2) = I(1, 2),

where the operator (G 0 )−1 (1) in coordinate space is (iπ/πt1 −E(∇x1 )), in momentum
space (χ − E(k)).
The derivation of the corresponding equation for D 0 (1, 2) is suggested as one of
the problems to this chapter.

2.1.2 Analytic Properties of Green’s Functions


When dealing with one-particle problems, we observed that some important proper-
ties of propagators could be obtained from general physical considerations, indepen-
dently of the details of the system. This can be done in the many-body case as well.
Specifically, we will derive the Källen-Lehmann representation for Green’s func-
tions in momentum space (as a function of (p, χ)), which determines the analytic
properties of Green’s function in complex χ plane and leads to physically significant
consequences.
Our only assumption here will be that our system is in a stationary and homoge-
neous state, or is space and time invariant. This means that (1) the full Hamiltonian
H is time independent and (2) the momentum operator P commutes with H (then
the total momentum is by definition conserved); here
⎣
P= d 3 xρδ† (x)(−i∇)ρδ (x). (2.20)
δ

It can be easily seen that for the simultaneous commutator,

[ρδ (x, t), P] = −i∇ρδ (x, t) (2.21)

for both Fermi and Bose field operators. (It is enough to substitute the definition of
P and use the canonical (anti)commutation relations.) Equation (2.21) is reminiscent
of the Heisenberg equations of motion for a field operator (1.84), and they together
imply

ρδ (x, t) = e−i(P x−Ht) ρδ ei(P x−Ht) ; (2.22)


ρδ ≡ ρδ (0, 0).

In a more formal language, the Hamiltonian and the operator of total momentum are
generators of temporal and spatial shifts respectively.
We can substitute the above expression into Green’s function, (2.4), and then
insert the unity operator constructed of the full set of common eigenstates of the two
commuting operators H, P,
2.1 Green’s Function of The Many-Body System: Definition and Properties 63

I= |s∇≈s|,
s

wherever it seems reasonable. The following calculations are tedious but straight-
forward.

≈0|T ρ(x, t)ρ † (x √ , t √ )|0∇ = φ(t − t √ ) ≈0|ρ(x, t)|s∇≈s|ρ † (x √ , t √ )|0∇
s


∓ φ(t − t) ≈0|ρ † (x √ , t √ )|s∇≈s|ρ(x, t)|0∇
s
⎣ √ √ √ √
= φ(t − t ) √
≈0|ei(Ht−P x) ρe−i(Ht−P x) |s∇≈s|ei(Ht −P x ) ρ † e−i(Ht −P x ) |0∇
s
⎣ √ √ √ √

∓ φ(t − t) ≈0|ei(Ht −P x ) ρ † e−i(Ht −P x )|s∇≈s| ei(Ht−P x) ρe−i(Ht−P x) |0∇
s

= φ(t − t √ ) ei(E 0 −μN0 )t ≈0|ρ|s∇e−i((E s −μNs )t−Ps x)
s
i((E s −μNs )t √ −P x √ ) √
× e ≈s|ρ † |0∇e−i(E 0 −μN0 )t
⎣ √ √ √
∓ φ(t √ − t) ei(E 0 −μN0 )t ≈0|ρ † |s∇e−i((E s −μNs )t −Ps x )
s
× ei((E s −μNs )t−Ps x) ≈s|ρ|0∇e−i(E 0 −μN0 )t .

The momentum of the state |0∇ is zero. The energy exponents here contain some
subtlety. Since the field operators create or annihilate particles one at a time, in the
first part of the expression, which contains ≈0|ρ|s∇≈s|ρ†|0∇ ≡ |≈0|ρ|s∇|2 states |s∇
must contain one particle more than the state |0∇, say Ns = N0 + 1 ≡ N + 1
(otherwise the annihilation operator has nothing to annihilate). On the other hand,
in the second half of the expression, with ≈0|ρ † |s∇≈s|ρ|0∇ ≡ |≈s|ρ|0∇|2 the states
|s∇ contain N − 1 particles. Since, generally, the eigenvalues of the Hamiltonian
depend on the number of particles both via the −μN term and directly, we have in
the exponents (showing explicitly the dependence of eigenenergies on the particle
number)

exp[i(E 0 (N ) − μN )t − i((E s (N + 1) − μ(N + 1))t − Ps x)]


× exp[i((E s (N + 1) − μ(N + 1))t √ − P x √ ) − i(E 0 (N ) − μN )t √ ]
= exp[i[(E 0 (N ) − E s (N + 1) + μ)(t − t √ ) + Ps (x − x √ )]];
exp[i(E 0 (N ) − μN )t √ − i((E s (N − 1) − μ(N − 1))t √ − Ps x √ )]
× exp[i((E s (N − 1) − μ(N − 1))t − P x) − i(E 0 (N ) − μN )t]
= exp[i[(E 0 (N ) − E s (N − 1) − μ)(t √ − t) + Ps (x √ − x)]].

As expected, everything depends on the differences of coordinates and times.


Denote the excitation energies by
64 2 Green’s Functions at Zero Temperature

ψs(+) = E s (N + 1) − E 0 (N ) > μ; (2.23)


ψs(−) = E 0 (N ) − E s (N − 1) < μ. (2.24)

Evidently, the former gives the energy change when a particle is added to the state
|s∇; the latter, when the particle is removed from the state |s∇. (You can check the
above inequalities, if you recall that at zero temperature for the ground-state energy
(a thermodynamic observable!) (π E 0 /π N ) = (π/π N ), ( is thermodynamical
potential) and that by definition (π/π N ) = μ.) For the phonons, μ = 0.
Now we can sweep all the details of the system under the carpet by introducing
   
1 ⎣
−1
As = ≈0|0∇ |≈0|ρδ |s∇|2 ; (2.25)
2 δ
   
1 ⎣
Bs = ≈0|0∇−1 |≈s|ρδ |0∇|2 . (2.26)
2 δ

(The operations in the brackets are reserved for the spin degrees of freedom, δ.)
Those are some functions of index s only. Therefore, we can easily take the Fourier
transform of Green’s function, using the above results, and finally get the Källén–
Lehmann’s representation:
 
⎣ As θ(p − Ps ) Bs θ(p + Ps )
G(p, χ) = (2ξ) 3
(+)
± (−)
. (2.27)
s χ − ψs + μ + i0 χ − ψs + μ − i0

In this expression, delta functions of momenta arise from the exponential factors
exp[iPs x]; they indicate the values of momenta, corresponding to single-particle
excitations (note that the second term in (2.27) clearly indicates the holes, with
(−)
momenta −Ps and energies ψs ). The frequency denominators contain the infin-
itesimal ±i0, due to the presence of theta functions of time in the initial expres-
sion, exactly as when we calculated the unperturbed Green’s functions. Of course,
our present result is consistent with expressions (2.15, 2.18). Mathematically, the
Källén–Lehmann representation tells us that Green’s function of a finite system is a
meromorphic function of the complex variable χ; all its singularities are simple poles.
(±)
Each pole corresponds to a definite excitation energy, ψs , and definite momentum
of the system, ±Ps . The poles are infinitesimally shifted into the upper half-plane of
χ when χ > 0, and into the lower one when χ < 0. Thus the causal Green’s function
is not analytic in either half-plane.
In the thermodynamic limit (N , V → ∼, N /V = const) it is more convenient
to use a different form of (2.27):

∼  
dχ √ ρ A (χ √ ) ρ B (χ √ )
G(p, χ) = + , (2.28)
ξ χ √ − χ − i0 χ √ − χ + i0
−∼
2.1 Green’s Function of The Many-Body System: Definition and Properties 65

where

ρ A (p, χ √ ) = −ξ(2ξ)3 As θ(p − Ps )θ(χ √ − ψs(+) + μ); (2.29)
s

ρ B (p, χ √ ) = ∓ξ(2ξ)3 Bs θ(p + Ps )θ(χ √ − ψs(−) + μ). (2.30)
s

(±)
Indeed, in this limit we can no longer resolve the individual levels ψs . The den-
sities ρ A,B (p, χ √ ) become continuous functions, zero at negative (resp. positive)
frequencies on the real axis; the latter becomes a branch cut in the complex χ plane.
The real and imaginary parts of Green’s function (for real frequencies) can be
easily obtained from (2.27), using the Weierstrass (or Sokhotsky—Weierstrass) for-
mula
1 1
= P ∓ iξθ(x). (2.31)
x ± i0 x

Here P means the principal value. This is to be understood as a generalized function;


that is, the above formula strictly speaking makes sense only under the integral, with
an integrable function F(x):
 
1 F(x)
d x F(x) =P ∓ iξ F(0).
dx
x ± i0 x
⎤ ⎤
−ψ
The principal value integral P d x F(x)/x is defined as limψ→0 d x F(x)/x+
⎤ 
ϕ d x F(x)/x . The other term arises from the integration over an infinitesimal semi-
circle around the pole at x = 0. This formula is easy to prove using the technique of
residues in the complex analysis.
Using this recipe, we find
 
⎣ As θ(p − Ps ) Bs θ(p + Ps )
↑G(p, χ) = (2ξ) 3
P
(+)
± (−)
; (2.32)
χ − ψs + μ χ − ψs + μ
s
 
− s As θ(p − Ps )θ(χ − ψs(+) + μ), χ > 0;
∗G(p, χ) = (2ξ) ξ
3
 (−) (2.33)
± s Bs θ(p + Ps )θ(χ − ψs + μ), χ < 0.

We thus obtain an important relation:

sgn∗G(p, χ) = −sgn χ for Fer mi systems;


sgn∗G(p, χ) = −1 for Bose systems. (2.34)

The difference reflects the fact that there is no Fermi surface in Bose systems (and
thus “particles” and “holes” are the same).
66 2 Green’s Functions at Zero Temperature

The asymptotic behavior of G(χ) as χ → ∼ is very simple:

G(χ) ∞ 1/χ. (2.35)

To prove it, note that in this limit we can neglect all other terms in the denominators
of (2.27), so that

1 ⎣
G(p, χ) ∞ (2ξ)3 (As θ(p − Ps ) ± Bs θ(p + Ps )).
χ s

The sum can be evaluated by performing an inverse Fourier transform and making
use of the canonical (anti)commutation relation between field operators in coordinate
space:

(2ξ)3 (As θ(p − Ps ) ± Bs θ(p + Ps ))
s
⎣ ⎦ √ √

= d 3 (x − x√ ) As ei(p−Ps )(x−x ) ± Bs ei(p+Ps )(x−x )
s
 
⎣ 1 ⎣ √
= d 3 (x − x√ ) |≈0|ρδ (0)|s∇|2 ei(p−Ps )(x−x )
s
2 δ

2 i(p+Ps )(x−x√ )
±|≈s|ρδ (0)|0∇| e
  
1 ⎣ √
= d 3 (x − x√ ) eip(x−x )
2 δ
× ≈0|ρδ (r, t)ρδ† (r√ , t) ± ρδ† (r√ , t)ρδ (r, t)|0∇
= 1.

Poles of Green’s Function and Quasiparticle Excitations

It is easy to see from the Källén–Lehmann representation that in the thermodynamic


limit only those poles of G(χ) survive as distinct poles (and do not merge into the
branch cut along the real axis) that correspond to the situation when all energy and
momentum of the excited system can be ascribed to one object, a quasiparticle, with
a definite dispersion law (i.e., energy-momentum correspondence). Otherwise, for
any given energy there is a whole set of corresponding momenta, and the pole will be
eliminated by the integration over them. Thus the quasiparticle dispersion law χ(p)
is defined by the equation
1
= 0. (2.36)
G(p, χ − μ)
2.1 Green’s Function of The Many-Body System: Definition and Properties 67

2.1.3 Retarded and Advanced Green’s Functions

We will define two more Green’s functions: retarded and advanced ones, G R and
G A.
R
G δω (x1 , t1 ; x2 , t2 )
 ⎡
= −i ρδ (x1 , t1 )ρω† (x2 , t2 ) ± ρω† (x2 , t2 )ρδ (x1 , t1 ) φ(t1 − t2 ); (2.37)
A
G δω (x1 , t1 ; x2 , t2 )
 ⎡
= +i ρδ (x1 , t1 )ρω† (x2 , t2 ) ± ρω† (x2 , t2 )ρδ (x1 , t1 ) φ(t2 − t1 ). (2.38)

The definition is chosen in such a way as to guarantee that (1) the retarded (advanced)
Green’s function is zero for all negative (positive) time differences t − t √ , and (2) at
t = t √ both have a (−iθ(x − x √ )) discontinuity, exactly as does the causal Green’s
function. The latter statement is easy to check by taking its time derivative and using
the canonical relation
π  ⎫
lim√ G R(A) (t − t √ ) = ∓i≈0| ρδ (x, t), ρω† (x√ , t √ ) |0∇ · (±θ(t − t √ ))
t→t πt ±

= −iθ(x − x ).
R(A)
Again, in the uniform and stationary case, G δω (x1 , t1 ; x2 , t2 ) = G R(A) (x1 −
x2 , t1 − t2 )θδω . Unperturbed retarded and advanced Green’s functions, for example,
can be easily found by direct calculation:
1
G 0R,A (p, χ) = ;
χ − (ψp − μ) ± i0
 
χk 1 1 χk2
D 0R,A
(k, χ) = − = 2 .
2 χ − χk ± i0 χ + χk ± i0 χ − χk2 + i0sgnχ
(2.39)

The Källén–Lehmann representation for G R,A can be obtained in the same way
as for the causal Green’s function. The result is as follows:
 
⎣ As θ(p − Ps ) Bs θ(p + Ps )
G (p, χ) = (2ξ)
R 3
(+)
± (−)
; (2.40)
s χ − ψs + μ + i0 χ − ψs + μ + i0
 
⎣ As θ(p − Ps ) Bs θ(p + Ps )
G (p, χ) = (2ξ)
A 3
(+)
± (−)
. (2.41)
s χ − ψs + μ − i0 χ − ψs + μ − i0

Taking real and imaginary parts of these expressions, we see that on the real axis
68 2 Green’s Functions at Zero Temperature


⎧ ↑G A (p, χ) = ↑G A (p, χ) = ↑G(p, χ);

⎪ ∗G R (p, χ) = ∗G(p, χ); χ > 0;
∗G A (p, χ) = ∗G(p, χ); χ < 0; (2.42)

⎧  ⎫∗

⎨ G R (p, χ) = G A (p, χ) .
δω ωδ

On the other hand, the retarded (advanced) Green’s function is clearly analytic in the
upper (lower) χ-half-plane. This means that they are analytic continuations of the
causal Green’s function from the rays χ > 0(χ < 0) respectively. Their asymptotic
behavior is, of course, the same as that of the causal Green’s function:

G R,A (χ) ∞ 1/χ, |χ| → ∼.

In the thermodynamic limit, as in (2.28), we can write

∼
dχ √ ρ R,A (p, χ √ )
G R,A
(p, χ) = , (2.43)
ξ χ √ − χ ± i0
−∼

with spectral density



ρ R,A (p, χ √ ) = − ξ(2ξ)3 (As θ(p − Ps )θ(χ √ − ψs(+) + μ) (2.44)
s
± Bs θ(p + Ps )θ(χ √ − ψs(−) + μ)).

Evidently,
ρ R (p, χ √ ) = −∗G R (p, χ √ ). (2.45)

From formula (2.44) it is clear that ρ R (p, χ √ ) is proportional to the probability density
of an elementary excitation with momentum p having energy χ. In the non-interacting
case, e.g., −∗G 0,R = ξθ(χ − (ψp − μ)), because in the absence of interactions,
quasiparticles would indeed coincide with “basic” particles, with dispersion law ψp .

2.1.3.1 Quasiparticle Excitations and Retarded and Advanced Green’s


Functions

We can now visualize the concept of a quasiparticle excitation and its relation to
the existence of isolated poles of G(p, χ). Suppose that there is such a pole at
χ = Γ − i,  > 0. (This corresponds to a particle added in the state p.) To see the
evolution of the excitation created by the operator ap† , calculate Green’s function in
the (p, t)-representation:
2.1 Green’s Function of The Many-Body System: Definition and Properties 69

Fig. 2.5 Contour integrations


in the complex frequency
plane. The poles of G R (χ) are
marked by crosses, those of
G A (χ) by circles

∼
dχ −iχt
G(p, t) = e G(p, χ).

−∼

For negative t the integration contour can be closed in the upper half-plane, and
since there are no singularities there, the integral is zero. For positive t the contour
closes in the lower half-plane and will contain the pole. We cannot, though, simply
calculate the residue, since the causal Green’s function is not analytic in the lower
half-plane. We must then replace it by its analytic continuations, G R,A . In order to
do this, we close the integration contour as shown in Fig. 2.5. For ↑χ < 0 we can
replace G(p, χ) by G A (p, χ), and for ↑χ > 0 by G R (p, χ). Now, the Cauchy
theorem of complex analysis tells us that the integral we ar e interested in can be
written as follows:
 
dχ −iχt A dχ −iχt R
G(p, t) = − e G (p, χ) − e G (p, χ).
2ξ 2ξ
√√ √√
C1√ +C1 C2√ +C2 +C

Watson’s lemma, together with the 1/χ asymptotics of Green’s functions, ensures
that the integrals over infinitely remote quarter circles C1√ and C2√ are zero. Therefore,
we are left with two terms: the contribution from the pole, and the integral along the
negative imaginary axis:

0
−iΓt −t dχ −iχt  A ⎫
G(p, t) = −i Z e e + e G (p, χ) − G R (p, χ) , (2.46)

−i∼

where Z is the residue of G R (χ) in the pole. The first term describes a free quasi-
particle with finite lifetime ∞ 1/ . The contribution from the integral is small, if
only Γt  1, t  1. (This means that the decay rate must be small enough,
  Γ.) Indeed, invoking the Källén-Lehmann representation, we see that
70 2 Green’s Functions at Zero Temperature
 0 dχ −iχt  A ⎫
e G (p, χ) − G R (p, χ)
−i∼ 2ξ
 0  
dχ −iχt A A
→ e −
−i∼ 2ξ χ − ψ p + μ − i χ − ψρ + μ + i
 0 −iχt
dχ e
= −2i A
−i∼ 2ξ (χ − ψ p + μ)2 +  2
− A e−iμt
→  Z e−iΓt e−t
ξt (μ − ψρ )2

if   (ψρ − μ) = Γ.

2.1.3.2 Kramers-Kronig Relations

From the Källén–Lehmann representation for the retarded and advanced Green’s
functions (2.40, 2.41) follows a beautiful (and important) relation between the real
and imaginary parts of those functions of real frequencies: the Kramers–Kronig
relation
∼
dχ √ ∗G R,A (p, χ √ )
↑G R,A (p, χ) = ±P . (2.47)
ξ χ√ − χ
−∼

(This can be established directly by taking the imaginary and real parts of (2.40, 2.41).
It can be shown that the reason why this relation holds is the causality, that is, the
property of advanced and retarded Green’s functions of time to be zero for t > (<)t √ .
The proof (which is quite straightforward) is like our calculation of the Fourier trans-
form of the propagator in Chap. 1, where we established for the first time that the poles
of K (χ) must be infinitesimally displaced from the real axis in order to provide for
the φ(t)-like behavior of K (t). Then, knowing that G (R,A) (χ) is an analytic function
⎤∼ √ ∗G R,A (p,χ √ )
in the corresponding half-plane, we can calculate the integral −∼ dχ ξ χ √ −χ
along the real axis, using the Cauchy theorem, and come to the above relations. In
mathematics this relation is known as the Plemelj theorem [6].

2.1.4 Green’s Function and Observables

The way of expressing observables (average values of quantum-mechanical opera-


tors) in terms of Green’s function (once the latter is known) directly follows from its
definition. (Since it contains only an average of two field operators, it is clear that
only one-particle operators can be treated this way.)
For example, the particle density in the system (meaning real or “basic” particles)
is by definition
2.1 Green’s Function of The Many-Body System: Definition and Properties 71
⎣ ⎣
n(r) = ≈ρδ† (r)ρδ (r)∇ ≡ ∓i G δδ (r, t − 0; r, t). (2.48)
δ δ

So, for a uniform system of spinful fermions we have

N
n= = −2i G(r = 0, t = −0) = 2∗G(r = 0, t = −0). (2.49)
V
The relation between density and Green’s function allows us, in turn, to express
the thermodynamic properties of the system at T = 0 through its Green’s function.
Indeed, the grand potential of the system satisfies (see, e.g., [4])

dΓ = −SdT − N dμ = −N dμ

at T = 0 (since the entropy S(0) = 0). This equation can thus be integrated (remem-
bering that Γ(μ = 0) = 0):


Γ=− dμN (μ),
0

where we can substitute the expression for N (μ) from (2.48) or (2.49) (where Green’s
function is explicitly μ-dependent; see Problem 2.1).
A slightly more sophisticated example is presented by current, which in agreement
with Sect. 1.4 is expressed through the field operators as

ie ⎣
j(r) = ≈(∇ρδ† (r))ρδ (r) − ρδ† (r)(∇ρδ (r))∇.
2m δ

We can express the current through Green’s function, using a “hair splitting” trick
that allows us to separate the differentiation over two coinciding coordinates:

ie ⎣
j(r) = lim (∇r√ − ∇r )≈ρδ† (r√ )ρδ (r)∇
2m δ r√ →r
ie ⎣
= lim lim (∇r√ − ∇r )(∓i G δδ (r√ , t; r, 0)). (2.50)
2m δ t→−0 r√ →r

2.2 Perturbation Theory: Feynman Diagrams


The beautiful formulae of complex analysis and the physical insight we have achieved
so far unfortunately do not provide more specific information about Green’s functions
of a realistic system—the one with interactions. On this level of generalization all
Fermi systems look the same, and all Bose systems look the same (which is, of course,
72 2 Green’s Functions at Zero Temperature

Fig. 2.6 Different ways of drawing Feynman diagrams

true, but not sufficient). On the other hand, we cannot look into the differences due
to interactions, because, e.g., we have no way of determining the matrix elements of
field operators in the Källén-Lehmann representation.
As always, we have to apply perturbation theory. The great achievement by Feyn-
man was to build the perturbation theory formalism, in which the whole pertur-
bation expansion, including its most cumbersome expressions, is reduced to a set
of sometimes spectacular and always physically understandable graphs—Feynman
diagrams.
As in medieval paintings, there is a strict set of rules for both drawing and reading
those images (determined by the Hamiltonian of the system). This makes diagram
techniques a highly symbolic form of art. (Of course, there are differences between
various schools and books, as in Fig. 2.6.)
When calculating the Green’s function of a system with interactions, we meet the
usual obstacle of not knowing the wave function (state) of the system over which the
average is to be taken. We don’t know the ground state, |0∇. We don’t know excited
states. Moreover, any approximation we are going to make will be virtually orthogo-
nal to the proper many-particle state. Luckily, the approximate matrix elements (like
Grren’s function) can be quite accurate. The seeming paradox is just a reflection of
the fact that while the wave function involves all N particle states, Green’s function
deals with only two one-particle states (initial and final). As Thouless has noted
[7], if the one-particle state is approximated with a small mistake δ, the projection
of the corresponding many-particle state on the exact state will be of the order of
(1 − δ) N ∞ e−N δ → 0 as N → ∼ for my finite δ. On the other hand, the
average of a one-particle operator (like Green’s function) will contain only a small
mistake δ !
2.2 Perturbation Theory: Feynman Diagrams 73

Two conclusions can be drawn. (1) Green’s functions provide a physically sensible
method of approaching the many-body problem. (Something we could guess after a
short glance cast at library shelves filled with numerous folios on the subject, to which
we dare add a book of our own.) (2) The results are to be expressed in terms of averages
over the unperturbed state of the system, rather than corrections to many-body wave
functions. (In usual quantum mechanics both would be equivalent—because it is a
one-body theory.)

2.2.1 Derivation of Feynman Rules. Wick’s and Cancellation


Theorems

In order to fulfill our program, let us turn to the interaction representation. We have
seen before that this is a natural way to deal with perturbation theory. To avoid unnec-
essary subscripts, here we denote the field operators in the interaction representation
by uppercase Greek letters. The connection between them and Heisenberg operators
is given by

ρ(x, t) = U † (t)ρ S (x)U(t) = U † (t)e−i H0 t Ψ(x, t)ei H0 t U(t).

Inserting this into the definition of Green’s function, we obtain


 ⎫−1
G δδ√ (x, t; x√ , t √ ) = −i ≈0|U † (t)e−i H0 t ei H0 t U(t)|0∇

× ≈0|U † (t)e−i H0 t Ψδ (x, t)S(I ) (t, t √ )Ψδ† √ (x√ , t √ )ei H0 t U(t √ )|0∇φ(t − t √ )


±≈0|U † (t √ )e−i H0 t Ψδ† √ (x√ , t √ )S(I ) (t √ , t)Ψδ (x, t)ei H0 t U(t)|0∇φ(t √ − t) .
(2.51)

Later on we suppress the (I) subscript in the S-matrix as well; we use it only in the
interaction representation anyway; that is,

⎤t
−i ,dt W (t)
S(t, t √ ) = T e t√ , t > t √.
√ √
Here |0∇ is a Heisenberg ground-state vector. Then ei H0 t U(t √ )|0∇ = ei H0 t |0(t √ )∇ S =
|0(t √ )∇ I , i.e., the ground-state vector in interaction representation, and

|0(t √ )∇ I = S(t √ , t √√ )|0(t √√ )∇ I (t √ > t √√ ) = S(t √ , −∼)|0(−∼)∇ I , (2.52)

because we can take the moment t √√ to minus infinity.


We can repeat this argumentation for ≈0|U † (t)e−i H0 t to get eventually
74 2 Green’s Functions at Zero Temperature


ei H0 t U(t √ )|0∇ = S(t √ , −∼)|0(−∼)∇ I ; (2.53)
−i H0 t
≈0|U (t)e

= ≈0(∼)| I S(∼, t). (2.54)

Now we introduce a very important adiabatic hypothesis. First, let us assume that
the perturbation was absent very long ago and was turned on in an infinitely slow
way, say W(t ≤ t1 ) = W exp(δ(t − t1 )), δ → 0+. It will be turned off in some
very distant future, say W(t ≤ t2 ) = W exp(−δ(t − t2 )), δ → 0+. Here [t1 , t2 ]
is the time interval within which we investigate our system (and we don’t care what
happens later or previously: aprés mois le déluge).
Of course, physically it is rather easy to turn on and off the external potential, while
we don’t have such a free hand when the perturbation is due to particle—particle
interaction. But nothing a priori prohibits such a property of the perturbation term
in the Hamiltonian, and finally we take δ = 0 anyway. (A mathematically inclined
reader may recognize that we are actually going to use so-called Abel regularization
of conditionally convergent integrals, which appear a little later.)
Now, since at minus infinity there is no perturbation, we can write instead of
|0(−∼)∇ I the unperturbed ground state vector |0 ∇ (which is time independent,
because iπ|0 (t → −∼)∇/πt = W exp(δ(t − t1 ))|0 (t → −∼)∇ → 0). It is
convenient to choose a normalized state, ≈0 |0 ∇ = 0.
Now, it seems natural to think that since we had an unperturbed ground state at
minus infinity, when there was no interaction, we should have the same state at plus
infinity, when there will be no interaction. This is not true, though: it is known from
usual quantum mechanics that the adiabatically slow perturbation can actually switch
the system to a different state with the same energy. Our good luck is that the ground
state of a quantum-mechanical system is always non-degenerate, and it is the ground
state averages that we deal with! Therefore, the only difference between the states
at minus and plus infinity may be a phase factor: |0(+∼)∇ I = (exp(i L))|0 ∇, and
this factor anyway cancels from the numerator and denominator of (2.51)).
Thus we have derived the key formula

≈0 |T S(∼, −∼)Ψδ (x, t)Ψδ† √ (x √ , t √ )|0 ∇


i G δδ√ (x, t; x √ , t √ ) = (2.55)
≈0 |S(∼, −∼)|0 ∇

(we rearranged the terms under the sign of time


⎤ ∼ ordering to gather all the parts of the
S-operator into S(∼, −∼) = T exp{−i −∼ dtW(t)}. This formula is the basis
for the perturbation theory: we have only to expand the exponent and obtain the
series like the one we obtained for the one-particle propagator, containing terms

≈0 |T W(t1 )W(t2 ) · · · W(tm )Ψδ (x, t)Ψδ† √ (x √ , t √ )|0 ∇.

The difference is that (1) we need the matrix elements not between the coordinate
(momentum) eigenstates, but between unperturbed ground state vectors of a many-
body system, and (2) now we have the denominator ≈0 |S(∼, −∼)|0 ∇.
2.2 Perturbation Theory: Feynman Diagrams 75

2.2.1.1 Wick’s Theorem

This is the theorem of quantum field theory, because it makes all the formalism
tick. After the exponent in the S-operator is expanded, we must calculate the matrix
elements of the sort ≈0 |T λ1 λ2 · · · λm |0 ∇. Here λ1 , λ2 , . . . , λm are (Fermi or
Bose) field operators in interaction representation, and it is Wick’s theorem that
allows us to do this.
For the sake of clarity, from now on we denote the set of variables (x, t, δ) by a
single number or capital letter. For example:

Ψδ (x, t)  Ψ X ;
Ψ∂1 (x1 , t1 )  Ψ1 ;
⎣  
d 3 x dt  d X;
δ
⎣  
3
d x1 dt1  d1.
∂1

Wick’s theorem states that


The time-ordered product of field operators in interaction representation equals
to the sum of their normal products with all possible contractions:

T λ1 λ2 · · · λm λm+1 λm+2 . . . λn =: λ1 λ2 · · · λm λm+1 λm+2 · · · λn :


⎩ !
+ : λ1 λ2 · · · λm λm+1 λm+2 · · · λn :
⎩ ! ⎩  !
+ : λ1 λ2 · · · λm λm+1 λm+2 · · · λn :
⎩  !⎩  !
+ · · · + : λ1 λ2 · · · λm λm+1 λm+2 · · · λn : .
!⎩ 
(2.56)

Now some definitions.


The normal ordering of field operators, : λ1 λ2 . . . λm :, means that all “destruc-
tion” operators stay to the right of the “construction” ones. We label “destruction”
those operators that give zero when acting on the unperturbed ground state (vacuum
state); and the “construction” operators are their conjugates. In the Fermi case, e.g.,
the vacuum is the filled Fermi sphere, and therefore “destruction” operators are anni-
hilation operators ap with p > p F , and creation operators ap† with p < p F . We can,
if we wish, explicitly split the Fermionic field operator in two corresponding parts,

(−) (+)
ρX = ρX + ρX
1 ⎣ i(px−ψ p t) 1 ⎣ i(px−ψ p t)
= ∝ e ap + ∝ e ap . (2.57)
V p> p F V ρ<ρ F
76 2 Green’s Functions at Zero Temperature

The bosonic (e.g., phonon) field is already presented in this form,

(−) (+)
τX = τX + τX
1 ⎣ ⎦ χk 1/2 1 ⎣ ⎦ χk 1/2 † −i(kx−χk t)
= ∝ bk ei(kx−χk t) + ∝ bk e . (2.58)
V k 2 V k 2

Since both time ordering and normal ordering are distributive, we can deal with
the (+) and (−) parts separately, and therefore all the intermediate manipulations
can be on the field operators ρ, τ themselves.
By definition, the normal product of any set of the field operators A, B, C, . . .
has zero ground-state average,

≈0 | : ABC · · · : |0 ∇ = 0. (2.59)


⎩ !
The contraction, or pairing, of two operators, λm λn , is the difference between
their time- and normally ordered products
⎩ !
λm λ n = T λm λn − : λ m λ n : . (2.60)

If both operators here are of the same sort (both creation or both annihilation), the
contraction is identically zero. Indeed, then the normal ordering does not affect their
product, and
⎩ !
λ1 λ2 = φ(t1 − t2 )λ1 λ2 ∓ φ(t2 − t1 )λ2 λ1 − λ1 λ2
= (φ(t1 − t2 ) + φ(t2 − t1 ))λ1 λ2 − λ1 λ2
= 0.

On the other hand, a contraction of conjugate field operators is a number: taking into
account that the operators are in interaction representation and their time dependence
is trivial, we see that, for example,
⎩ ! ⎣⎣⎦ ⎦
λ†1 λ2 = ei E k t1 λ†k e−i E q t2 λq
k q
⎣⎣ ⎦
= ei E k t1 e−i E q t2 φ(t1 − t2 )λ†k λq ∓ φ(t2 − t1 )λq λ†k − λ†k λq
k q
⎣⎣  ⎫
= ei E k t1 e−i E q t2 (φ(t1 − t2 ) + φ(t2 − t1 ))λ†k λq − λ†k λq + φ(t2 − t1 )θkq ,
k q

and all the operator terms cancel. This is an important fact, that the contraction of
Fermi/Bose field operators is a usual number, because then we can write
2.2 Perturbation Theory: Feynman Diagrams 77

⎩ ! ⎩ !
λ1 λ2 = ≈0 | λ1 λ2 |0 ∇
= ≈0 |T λ1 λ2 |0 ∇ − ≈0 | : λ1 λ2 : |0 ∇
= ≈0 |T λ1 λ2 |0 ∇ = i G 0 (12). (2.61)

Contraction of Fermi/Bose operators is actually an unperturbed Green’s function,


which we know how to find! Look how lucky we are: if the commutation relations for
our field operators contained an operator instead of a delta function (as they do for spin
operators), we would have no use for Wick’s theorem when calculating the averages of
many-operator products. This is why there is no really handy diagrammatic approach
to the corresponding problems.
But with bosons and fermions we can use the theorem to deal with the average
≈o |λ(1) · · · λ(N )|o ∇. Note that we can extract all the contractions from under
the symbol of normal ordering. (We have only to commute the paired operators
with their neighbors in order to bring them together, and then simply calculate the
contraction (a number) and take it outside. This operation will give us at most a factor
of (−1) P , the parity of the permutation of Fermi operators that we did on our way.)
Therefore only the contribution due to the sum of all possible fully contracted terms
survives (the other terms contain some normally ordered operators and thus have
zero vacuum average): evidently, only terms with an even number of operators can
be fully contracted. This sum is actually a sum of products of unperturbed Green’s
functions, corresponding to all possible ways of picking pairs of conjugate field
operators from the general crowd, taken with corresponding parity factors. Each of
these terms cm be presented by a distinctive Feynman diagram, of which the rules
for drawing, reading, and calculating we are going to establish.
But before doing so, let us check that Wick’s theorem is plausible. (The proof
involves some tiresome algebra and can be found, e.g., in [2].) Following [4], we
will make instead a simpler argument valid not for the operators themselves, but for
their matrix elements (a so-called weak statement), and only in the thermodynamic
limit. On the other hand, it is valid for averages over an arbitrary state of the system,
not only its ground state.
The argument goes as follows. Write an Another example: average in Fourier com-
ponents ≈X |λ1 λ2 · · · |X ∇ = Γ−1/2 k1 Γ−1/2 k2 · · · eik1 x1 −i E 1 t1 eik2 x2 −i E 2 t2 · · ·
≈X |ck1 ck2 · · · |X ∇. Γ is the volume of the system. In this expression there must
be an even number of operators: N /2 creation operators and N /2 annihilation oper-
ators, with the same values of k (otherwise the matrix element is zero). There can be
no more than N /2 different values of k. If indeed they are all different, we have, e.g.,
⎣⎣ √ √
Γ−N /2 · · · eik1 (x1 −x1 )−i E 1 (t1 −t1 ) · · · ≈X |ck+1 ck1 |X ∇
k1 k2

× ≈X |ck+2 ck2 |X ∇ · · · ≈X |ck+N /2 ck N /2 |X ∇(−1) P

 could here insert |X ∇≈X | instead of the complete expression for the unit operator,
(we
s |s∇≈s|, because the rest of it does not contribute anything.) The above expression
78 2 Green’s Functions at Zero Temperature

Fig. 2.7 The art of physics

is actually the fully contracted term of Wick’s theorem. In the thermodynamic limit,
Γ → ∼, it stays finite, because every power of Γ in the normalization term is com-
pensated by the summation over k (both are proportional to the number of particles
in the system).
On the other hand, there are other terms in the expression for ≈X |λ1 λ2 · · · |X ∇,
but they contain N /2 − 1, N /2 − 2, etc. different values of k, and thus independent
summations. Because of that, some powers of volume in the denominator will not be
canceled by summations, and all these terms vanish, which concludes our reasoning
(Fig. 2.7).
Now the path is straightforward. (1) Expand the time-ordered exponent in the
expression for Green’s function; (2) Take all averages over the ground state, using
Wick’s theorem, thus factoring all terms in products of unperturbed Green’s func-
tions (with appropriate integrations); (3) Represent these terms by graphs—Feynman
diagrams. After the correspondence between those graphs and the analytic terms in
the expansion series is established, it is much simpler to work with the diagrams,
which give much clearer understanding of the structure of the expressions involved.
The rules of drawing and reading Feynman diagrams in some detail, of course,
depend on the interaction. What is even worse, they depend on tastes and preferences
of the author whose book or chapter you are reading (Fig. 2.6); there are at least three
popular schools. Here we will take one of those approaches, where time flows from
right to left and so it is also from right to left that the lines symbolizing Green’s
2.2 Perturbation Theory: Feynman Diagrams 79

functions are drawn. This has at least the advantage, that the order of letters labeling
the diagram is the same as that in the analytic formulas, where later moments stand
to the left.
For illustration, we derive the rules for the simple case of scalar electron—electron
interaction, which by definition involves only one sort of particles (electrons) inter-
acting via instantaneous spin-independent potential:
 
1 ⎣⎣
W(t) = d 3 x1 d 3 x2 Ψδ† 1 (x1 , t)Ψδ† 2 (x2 , t)U (x1 − x2 )
2 δ δ
1 2

× Ψδ2 (x2 , t)Ψδ1 (x1 , t).

For convenience we introduce

U (1 − 2) ≡ U (x1 − x2 )θ(t1 − t2 );

and now we will integrate over space and time coordinates indiscriminately, and the
whole set (x, t, δ) (δ is spin index) is written down as X .
Expanding the exponent in the general expression for i G(X, X √ ) up to the first
order in interactions and substituting out scalar electron–electron interaction, we find

i G(X X √ ) (2.62)
⎤ ⎤
≈0 |Ψ X Ψ X† √ |0 ∇ + (− i ) 21 d1 d2U (1 − 2)≈0 |T Ψ1† Ψ2† Ψ2 Ψ1 Ψ X Ψ X† √ |0 ∇
→ ⎤ ⎤ .
1 + (− i ) 21 d1 d2U (1 − 2)≈0 |T Ψ1† Ψ2† Ψ2 Ψ1 |0 ∇

This was step one. In step two, the average of the six operators in the numerator
and that of the four operators in the denominator must be evaluated using Wick’s
theorem.
The expression ≈0 |T Ψ1† Ψ2† Ψ2 Ψ1 Ψ X Ψ X† √ |0 ∇ can be fully contracted in six
⎩  !⎩  !
different ways. For example, ≈0 |T Ψ1† Ψ2† Ψ2 Ψ1 Ψ X Ψ X† √ |0 ∇ = n 0 (1)n 0 (2)i G 0
!⎩ 
(X, X √ ). Here n 0 (1) = ≈Ψ1† Ψ1 ∇0 is simply unperturbed electronic density in the
system. As you see, in this term the “probe” particle (traveling from X √ to X ) is
decoupled—disconnected—from the rest of the system, that is, does not interact
with it.
Another example:
⎩  ! ⎩  !
≈0 |T Ψ1† Ψ2† Ψ2 Ψ1 Ψχ Ψχ†√ |0 ∇ = −i G 0 (1, 2)i G 0 (2, 1)i G 0 (X, X √ )
!⎩ 

also produces a disconnected term, with minus sign, because in order to put together
the field operators to form corresponding pairings we had to change the places of an
odd number of Fermi operators.
80 2 Green’s Functions at Zero Temperature

Fig. 2.8 Green’s function to first order in scalar e–e interaction

Still another choice gives us at last a connected term, where the probe particle
interacts with the system:
⎩  !⎩  !
≈0 |T Ψ1† Ψ2† Ψ2 Ψ1 Ψχ Ψχ√

|Ψ0 ∇ = i G 0 (X, 2)i G 0 (2, 1)i G 0 (1, X √ ).
!⎩ 

All this is rather boring, but if we mark all the elements of the above expression—
points where interaction occurs, unperturbed Green’s functions, etc.–as suggested
in Table 2.1, we can draw Green’s function to first order in interaction as shown
in Fig. 2.8. The resulting diagrams are like the ones we obtained earlier for the
one-particle propagator. Once again, the probe particle interacts with the particles
in the system, and propagates freely between those acts of scattering. But now the
“background” particles interact with each other, and this is expressed in the structure
of the graphs, which is now far richer.
You see that the first two terms in the numerator are indeed disconnected; they
literally fall apart. On the other hand, the remaining four terms are connected, and
they show that the probe particle is scattered by (that is, interacts with) the other particles
in the system.
By definition, the connected diagrams are the ones that do not contain parts dis-
connected from the external ends, that is, the coordinates of the “external” particle
(in our case, external ends are X, X √ ). Only the external ends of the diagram carry
significant coordinates (spins, etc.), the ones that actually appear as the arguments of
the exact Green’s function that we wish to calculate. All the rest are dummy labels,
because there will be integrations (summations) over them. Of course, it does not
matter how we denote a dummy variable, and all diagrams that differ only by
their dummy labels are the same.
Here we see that connected terms contain integrations over the dummy variables
1 and 2. Therefore, of four connected terms there are only two that are different, and
we can get rid of the factor one half before them.

Expanding the denominator, we find that

Within the same accuracy, we can factor the numerator (neglecting the higher-
order terms in the interaction), and see that the denominator actually cancels the
disconnected terms from the numerator!
This observation is actually a strict mathematical statement, and since the proof
is very simple and general, let us prove it.

2.2.1.2 Cancellation Theorem

All disconnected diagrams appearing in the perturbation series for the Green’s func-
tion exactly cancel from its numerator and denominator. Therefore Green’s function
is expressed as a sum over all connected diagrams.
There is no need to specify the interaction term W. Let us consider the νth-order term in
the numerator of Green's function:

Σ_{n=0}^{∞} Σ_{m=0}^{∞} δ_{m+n,ν} (1/ν!) (ν!/(m! n!)) (−i)^{m+n}
  ∫_{−∞}^{∞} dt₁ ⋯ dt_m ⟨0|T W(t₁) ⋯ W(t_m) Ψ(X) Ψ†(X′)|0⟩_connected
  × ∫_{−∞}^{∞} dt_{m+1} ⋯ dt_{m+n} ⟨0|T W(t_{m+1}) ⋯ W(t_{m+n})|0⟩.

In this expression each term is explicitly presented as a product of a connected part
(of mth order) and a disconnected part (of nth order), with m and n adding up to ν. We
included here the combinatorial factor ν!/(m! n!), which is the number of ways to distribute
the ν interaction operators W(t_i) between these two groups (connected and disconnected),
consisting of m and n operators respectively. (Since interaction terms contain an
even number of Fermi operators, no sign change occurs from such a redistribution.)
This factor combines with the 1/ν! (from the exponential series) and leaves us with
(1/m!)(1/n!).
Summation over ν from 0 to ∞ simply eliminates the δ-symbol, and we are left with a
product of two series; the second of these, due to the disconnected diagrams, is, after a
trivial relabeling of the integration variables,
82 2 Green’s Functions at Zero Temperature

Fig. 2.9 Example of topologically equivalent diagrams

Table 2.1 Feynman rules for scalar electron–electron interaction

iG(X X′) ≡ iG_{σσ′}(x, t; x′, t′) — Causal Green's function
iG⁰(X X′) ≡ iG⁰(x − x′, t − t′) δ_{σσ′} — Unperturbed causal Green's function
−iU(1 − 2) ≡ −iU(x₁ − x₂) δ(t₁ − t₂) — Interaction potential
n₀(1) ≡ ⟨0|Ψ†₁Ψ₁|0⟩ — Unperturbed electron density

The integration over all intermediate coordinates and times and the summation over dummy spin indices
are implied

Σ_{n=0}^{∞} (1/n!) ∫_{−∞}^{∞} dt₁ ⋯ dt_n (−i)ⁿ ⟨0|T W(t₁) ⋯ W(t_n)|0⟩
  = ⟨0|T exp(−i ∫_{−∞}^{∞} dt W(t))|0⟩,

that is, the denominator of the expression for Green’s function! This contribution
cancels, which proves the theorem.
We see as well that in any connected term of mth order there will be exactly
m! identical contributions due to rearrangements of t₁, …, t_m in ⟨0|T W(t₁) ⋯
W(t_m) Ψ(X)Ψ†(X′)|0⟩_connected. This cancels the 1/m! factors and allows us to deal
only with topologically different graphs. An example (Fig. 2.9): these two second-
order diagrams are the same diagram, because they differ only by the labels of the
interaction lines, 12 ↔ 34. Returning to our specific form of the interaction, we
see that in our case there is also a 2⁻ⁿ factor associated with each diagram, due to the one
half in the two-particle interaction term. This factor also cancels, this time because
we don't distinguish between the two ends of an interaction line: the diagram with the ends
of an interaction line interchanged is the same diagram. (As we said, only the labels of external ends matter.
The rest are just dummy integration variables!) Then we finally come to the general rules.

2.2.1.3 General Rules

1. Draw all topologically distinct connected Feynman diagrams.
2. Decode them according to Table 2.1.
3. Multiply every diagram by (−1)^F, where F is the number of closed loops of fermionic
lines with more than one vertex. ("Bubbles" (n₀) do not count here.)

The origin of rule 3 is self-evident. When a fermionic loop is formed, we have
to contract the Fermi operators like this: Ψ(1)Ψ†(2)Ψ(2)Ψ†(3) ⋯ Ψ(N)Ψ†(1). Since
in any Hamiltonian we have an arrangement Ψ†(1)Ψ(1), etc., this means that the
operator Ψ†(1) must have been dragged to the rightmost place through all the rest of
the operators, that is, through an odd number of Fermi operators: its own conjugate,
plus some number of Ψ†Ψ pairs. This yields the overall minus sign and explains
the rule. The bubbles don't count, because such a bubble corresponds to a Ψ†Ψ pair
already in place, and no rearrangement is necessary.
If we now perform a Fourier transformation to the momentum representation, we
will see that the same rules apply, but the decoding table is somewhat different (here
we denote by P the set (p, ω)); see Table 2.2. The energy and momentum conserva-
tion law in each vertex (which reduces the number of integrations per vertex by
one) has a simple origin. In the coordinate representation, an intermediate integration in
a vertex Y = (y, t_y) involves the expression ∫d⁴Y G⁰(.. − Y) G⁰(Y − ..) U(Y − ..).
(We take into account that the unperturbed Green's function and the interaction potential
are spatially uniform.) Rewriting this in Fourier components, we obtain
   
∫ d⁴K/(2π)⁴ ∫ d⁴K₁/(2π)⁴ ∫ d⁴K₂/(2π)⁴ ∫ d⁴Y e^{iK(..−Y) + iK₁(Y−..) + iK₂(Y−..)} G⁰(K) G⁰(K₁) U(K₂).

The integral over Y can be taken immediately; it is a simple exponential integral
yielding delta functions:

∫ d⁴Y e^{iY(−K + K₁ + K₂)} = ∫ dt_y e^{−it_y(−ω + ω₁ + ω₂)} ∫ d³y e^{iy(−k + k₁ + k₂)}
  = (2π)⁴ δ(−ω + ω₁ + ω₂) δ(−k + k₁ + k₂).

Thus the energy (frequency) and momentum (wave vector) are conserved in each
vertex. The physical reason for this is clear: each vertex of the diagram describes a
scattering process. The Hamiltonian of our problem (which describes such scattering)
is spatially uniform and time independent, which in agreement with general principles
yields momentum and energy conservation.
Besides scalar electron–electron interaction, another important interaction in
solid-state systems is electron–phonon interaction. We will not derive here the corre-
sponding Hamiltonian in terms of electron and phonon field operators: this is rather
84 2 Green’s Functions at Zero Temperature

Table 2.2 Feynman rules for scalar electron–electron interaction (momentum representation)

iG(P) ≡ iG_{σσ′}(p, ω) — Causal Green's function
iG⁰(P) ≡ iG⁰(p, ω) δ_{σσ′} — Unperturbed causal Green's function
−iU(Q) ≡ −iU(q) — Fourier transform of the interaction potential
n₀(μ) — Unperturbed electron density

The integration over all intermediate momenta and frequencies (d⁴P/(2π)⁴) and the summation over
dummy spin indices are implied, taking into account energy (frequency) and momentum conservation in
every vertex

a subject for a course in solid state physics. It is enough for us to note that the
electron–phonon interaction is described by terms in the Hamiltonian proportional
to Ψ†(X)Ψ(X)φ(X) (this expression is Hermitian, since the phonon field operator φ (as
we defined it) is real). It is clear then that only even-order terms in the electron–phonon
interaction enter the perturbation expansion, because otherwise there would be unpaired
phonon operators, giving a zero vacuum average. In the even-order terms, the phonon oper-
ators pair to form unperturbed phonon Green's functions (propagators) D⁰(k, ω).
The definition of the vertex and of the phonon propagator depends on convention;
we give here for your convenience the rules used in two basic monographs on the
subject. The following discussion will not actually depend on such details, but each
time you perform or follow specific calculations, it pays to check all the conventions
beforehand.

2.2.2 Operations with Diagrams. Self Energy. Dyson’s Equation


One of the reasons why Green's functions are so widely used is that the corresponding
diagrams have a very convenient property: the value of any Feynman diagram for
Green's function can be found as the composition of the expressions corresponding to
its parts, independently of the structure of the diagram as a whole.
This means that any part of the diagram (subdiagram) can be calculated separately
once and for all and then inserted into an arbitrary diagram containing such a part.
(This is not so, e.g., in the case of diagram expansion for the grand potential.)
What does this mean? Let us look at two different diagrams shown in Fig. 2.10,
of second and sixth order respectively. The expressions for them are easily written,
and we underline the terms that correspond to the marked parts of the diagrams:

Fig. 2.10 Two diagrams

Table 2.3 Feynman rules for electron–phonon interaction (momentum representation)

iD(K) ≡ iD(k, ω) — Exact phonon propagator

Convention of [1]:
iD⁰(K) ≡ iD⁰(k, ω) = i ω_k²/(ω² − ω_k² + i0) — Unperturbed phonon propagator
−ig — Electron–phonon coupling constant

Convention of [3]:
iD⁰(K) ≡ iD⁰(k, ω) = i 2ω_k/(ω² − ω_k² + i0) — Unperturbed phonon propagator
−i|M_k| — Electron–phonon matrix element


∫d1 d2 d3 d4 [iG⁰(X1)(−1) iG⁰(23) iG⁰(32) iG⁰(14) iG⁰(4X′) (−iU(12))(−iU(34))];

∫d1 d2 ⋯ d12 [iG⁰(X1)(−1) iG⁰(23) iG⁰(32)
  × iG⁰(54) iG⁰(46) iG⁰(65) (−1) iG⁰(97) iG⁰(78) iG⁰(89)
  × (−1) iG⁰(1, 10) iG⁰(10, 11) iG⁰(11, 12) iG⁰(12, X′)
  × (−iU(12))(−iU(34))(−iU(57))(−iU(6, 10))(−iU(8, 11))(−iU(9, 12))].

The final expression is simply constructed of elementary blocks like

This is very different from the diagrammatic series for the grand potential, Ω,
where a factor 1/n in each nth-order diagram prohibits such a partial summation of the
diagram series (Table 2.3).
The idea of this summation is simple and mathematically shaky. Suppose we
have a diagram, for example, . In the infinite series for Green’s
function there is an infinite subset of diagrams like
which include all possible corrections to the inner line. Due to
86 2 Green’s Functions at Zero Temperature

Fig. 2.11 Self energy diagrams: a self energy parts, b irreducible self energy parts, c proper self
energy

the fact that there is no explicit dependence of the expression on the order of the
diagram, we can forget about everything that lies beyond these interaction points and
concentrate on the inside of the graph. The corrections here should transform the thin
line (unperturbed Green’s function, G 0 ) into a solid line (exact Green’s function, G) in
the same way, as the whole series gives the exact Green’s function
We have partially summed the diagram series for Green’s function!
This is not yet a victory, though. First, the summation of this sort still gives
us an equation: a self-consistent equation for the exact Green’s function, usually
a nonlinear integral or integro-differential one. To solve it would be really tough!
Second, there is absolutely no guarantee that this equation is correct. Indeed, we
know from mathematics that only for a very restricted class of convergent series
(absolutely convergent) the sum is independent of the order of the terms. What we
have done here is to redistribute the terms of the perturbation series, about which
we even do not know (and usually cannot know) whether it converges at all! The
justification here comes from the results: if they are wrong, then something is wrong
in our way of partial summation (evidently, there are many, and each is approximate,
since some classes of diagrams are neglected). Or maybe something funny occurs
to the system, and this is already useful information. We will meet such a case later,
when discussing application of the theory to superconductivity. In most cases the
results are right if the partial summation is made taking into account the physics of
the problem. Usually we can show, with physical if not mathematical rigor, that a
certain class of diagrams is more important than the others, and therefore the result
of its summation reflects essential properties of the system.
To approach such partial summations systematically, let us make some definitions.
Any part of a diagram that is connected to the rest of it by only two particle lines
is called a self energy part (Fig. 2.11).

Fig. 2.12 Dyson’s equation

The irreducible, or proper, self energy part is the one that cannot be separated by
breaking one particle line, like the one in Fig. 2.11b.
Finally, the proper self energy, or self energy par excellence, or mass opera-
tor, is the sum of all possible irreducible self energy parts and is denoted
by Σ_{σσ′}(X, X′). The name is given for historical field-theoretical reasons, and its
meaning will become clear a little later.
It is convenient to include a (−i) factor into the definition (Fig. 2.11). Then the
series for Green’s function can be read and drawn as follows (Fig. 2.12):

iG = iG⁰ + iG⁰ Σ G⁰ + iG⁰ Σ G⁰ Σ G⁰ + ⋯ (2.63)

Here the terms in the infinite series are redistributed in such a way as to make
it a simple series (a geometric progression!) over the powers of self energy and
unperturbed Green’s function only. (Of course, all necessary integrations and matrix
multiplications with respect to spin indices are implied, so that this is an operator
series.)
Separating the i G 0 factor, we obtain the celebrated Dyson’s equation (see
Fig. 2.12), which is exactly of the self-consistent form we anticipated:

G(X, X′) = G⁰(X, X′)
  + ∫dX″ ∫dX‴ G⁰(X, X″) Σ(X″, X‴) G(X‴, X′). (2.64)

(Of course, we could take the iG⁰ from the other side, and get G(P) = G⁰(P) +
G(P)Σ(P)G⁰(P).) In a homogeneous, stationary, and nonmagnetic system (the
last condition means that G and Σ are diagonal with respect to spin indices)
we can make a Fourier transformation, reducing the above equation to G(P) =
G⁰(P) + G⁰(P)Σ(P)G(P). Then we see that

G(p, ω) = [(G⁰(p, ω))⁻¹ − Σ(p, ω)]⁻¹ = 1/(ω − ε(p) + μ − Σ(p, ω)). (2.65)

Symbolically this can be written as


88 2 Green’s Functions at Zero Temperature

Fig. 2.13 Interaction renormalization: a polarization insertions, b polarization operator

G = [i ∂/∂t − Ê − Σ̂]⁻¹. (2.66)

The latter equation holds even if G and  are nondiagonal (e.g., in the nonhomoge-
neous case), understanding [. . .]−1 as an inverse operator.
An important feature of (2.65) is that if we substitute there some finite-order
approximation for the self energy, the resulting approximation for G will be equiv-
alent to calculating an infinite subseries of the perturbation series, and this gives a
much better result than the simple-minded calculation of the initial series term by
term.
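To make the advantage of the resummation concrete, here is a minimal numerical sketch (not from the book; the constant self energy and all numbers are illustrative assumptions). It compares the Dyson-resummed Green's function (2.65) with a few-term truncation of the series (2.63) for a single level:

```python
import numpy as np

# Toy comparison (illustrative values only): Dyson-resummed G vs. the truncated series.
xi = 1.0                                # epsilon(p) - mu for one momentum
sigma = -0.2 - 0.05j                    # assumed constant self energy insertion
w = np.linspace(0.0, 2.0, 9) + 1e-3j    # frequencies slightly off the real axis

g0 = 1.0 / (w - xi)                     # unperturbed Green's function
g_dyson = 1.0 / (w - xi - sigma)        # Eq. (2.65) with this toy Sigma

g_truncated = np.zeros_like(g0)
term = g0.copy()
for _ in range(3):                      # G0 + G0*Sigma*G0 + G0*Sigma*G0*Sigma*G0
    g_truncated += term
    term = term * sigma * g0

# The truncation fails near w ~ xi, where |Sigma*G0| is not small,
# while the resummed expression (2.65) stays finite there.
print(np.max(np.abs(g_truncated - g_dyson)))
```

The same geometric-series structure is what makes any finite-order approximation for Σ, once inserted into (2.65), equivalent to summing an infinite subseries of diagrams.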
This is a natural consequence of a self-consistent approach. Another, less pleasant,
one is that any approximate self energy is to be checked, lest it violates the general
analytic properties of Green’s function (which follow from the general causality
principle and should not be toyed with). Returning to the simple case (2.65) and
recalling the Källén–Lehmann representation, we see that necessarily

Im Σ(p, ω) ≥ 0 for ω < 0;  Im Σ(p, ω) ≤ 0 for ω > 0. (2.67)

We see as well that Im Σ determines the inverse lifetime of the elementary excitation, while
Re Σ defines the change of the dispersion law due to the interaction. (In quantum field theory
this leads to a change of the particle mass, which is why Σ is also called the mass
operator.)

2.2.3 Renormalization of the Interaction. Polarization Operator

Following the same approach, we can consider the insertions into the interaction line
as well, like those shown in Fig. 2.13.
A polarization insertion is a part of the diagram that is connected to
the rest of it by only two interaction lines. An irreducible polarization insertion
is one that cannot be separated by breaking a single interaction line. Finally, the
polarization operator Π is the sum of all irreducible polarization insertions, and is
a direct analogue of the self energy.

Fig. 2.14 Equation for the polarization operator

Since there is a (−i) factor in the definition of the interaction line, it is convenient
to introduce an (i) factor into the polarization operator. The analogue to Dyson’s
equation is readily obtained and reads (see Fig. 2.14)

U_eff(P) = U(P) + U(P) Π(P) U_eff(P). (2.68)

Then we find the generalized dielectric function, κ(p, ω):

U_eff(p, ω) ≡ U(p, ω)/κ(p, ω) = U(p, ω)/[1 − U(p, ω) Π(p, ω)], (2.69)

which describes the effect of the polarization of the medium on particle–particle


interaction. A good example of such an effect is the following.

2.2.3.1 Screening of Coulomb Interaction

The Thomas–Fermi result concerning the screening of the Coulomb potential by the
charged Fermi gas can be reproduced if we use the random phase approximation
(RPA), which here means taking the lowest-order term in the polarization operator:


iΠ⁰(p, ω) = 2 ∫ d³q dω′/(2π)⁴ G⁰(p + q, ω + ω′) G⁰(q, ω′). (2.70)

The calculations give the following result for the static screening:
Re Π⁰(p, 0) = −(m p_F/2π²) [1 + ((p_F² − p²/4)/(p_F p)) ln|(p_F + p/2)/(p_F − p/2)|]; (2.71)
Im Π⁰(p, 0) = 0. (2.72)

For the long-range screening (p ≪ p_F),

Π⁰ → −2N(μ),
90 2 Green’s Functions at Zero Temperature

Fig. 2.15 Random phase approximation

Fig. 2.16 Ladder approximation

where N(μ) ≡ m p_F/(2π²) is the density of states at the Fermi surface. Thus the Fourier
transform of the interaction is

U_eff(q) → (4πe²/q²)/(1 + 2N(μ)·4πe²/q²) = 4πe²/(q² + 8πe²N(μ)). (2.73)

The quantity

q_TF² = 8πe²N(μ)

is the squared Thomas–Fermi wave vector, and the potential indeed takes the Yukawa
form:

U_eff(r) = (e²/r) exp(−q_TF r). (2.74)
Thus, the presence of other charged particles leads to screening of the initially long-range
Coulomb interaction and limits it to a finite Thomas–Fermi radius. How this hap-
pens is graphically clear from the simplest polarization diagram. The interaction cre-
ates a virtual electron–hole pair. (Virtual, of course, because the energy-momentum
relation for every internal line of a diagram is violated: we integrate over all ener-
gies and all momenta independently! For a real particle, E = p²/2m or something like
this.) The approximation we used included only independent events of such virtual
electron-hole creation: because the energy and momentum along the interaction line
are conserved, the quantum-mechanical phase of the electron–hole pair is immedi-
ately lost and does not affect the next virtual pair. This is the reason it is called RPA,
random phase approximation (Fig. 2.15). As we discussed in the very beginning of the
book, this kind of approach works well if there is a large number of particles within
the interaction radius: then indeed it is much more probable to interact with two
different particles consecutively than with the same one twice. In the opposite case,
when the density of particles is low, RPA naturally fails, while the ladder approxi-
mation is relevant: Here a virtual pair (quasiparticle–quasihole) interacts repeatedly
before disappearing (Fig. 2.16). This is again reasonable, because when density is
low, it is improbable to find some other quasiparticle close at hand to interact with.
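The static screening formulas lend themselves to a quick numerical illustration. The following sketch (not from the book; units with ħ = m = p_F = 1 and the chosen value of e² are assumptions made purely for illustration) evaluates the static polarization (2.71), the RPA-screened interaction (2.69), and its Thomas–Fermi limit (2.73):

```python
import numpy as np

pF, e2 = 1.0, 0.5
N_mu = pF / (2 * np.pi**2)               # m*pF/(2*pi^2) with m = 1

def re_pi0_static(p):
    """Static polarization operator of Eq. (2.71)."""
    x = p / (2 * pF)
    with np.errstate(divide='ignore', invalid='ignore'):
        log_term = np.where(np.isclose(x, 1.0), 0.0,
                            (1 - x**2) / (2 * x) * np.log(np.abs((1 + x) / (1 - x))))
    return -N_mu * (1 + log_term)

q = np.linspace(1e-3, 4 * pF, 400)
u_bare = 4 * np.pi * e2 / q**2
u_rpa = u_bare / (1 - u_bare * re_pi0_static(q))   # Eq. (2.69) with Pi -> Pi0
qTF2 = 8 * np.pi * e2 * N_mu                        # squared Thomas-Fermi wave vector
u_tf = 4 * np.pi * e2 / (q**2 + qTF2)               # Eq. (2.73)

print(u_rpa[0], u_tf[0])   # agree as q -> 0; they differ noticeably near q = 2*pF
```

As q → 0 the two screened forms coincide, while near q = 2p_F the logarithm in (2.71) makes them differ; this non-analyticity is the seed of the Friedel oscillations discussed below.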
Unfortunately, on our path from the random phase approximation, Eqs. (2.71), (2.72), to
the Thomas–Fermi screening, Eq. (2.74), we made one simplification too

many when we replaced the static polarization Π⁰(p, 0) with its value at p = 0. The
logarithmic term in (2.71) is non-analytic at p = 2p_F, and—as it turns out—it pro-
duces, instead of the exponential screening (2.74), a qualitatively different potential,
which far away from the charge behaves as

U_eff(r) ∼ (e²/r³) cos(2p_F r).

Not only does it not fall off exponentially, but it also demonstrates Friedel oscil-
lations. Both effects are due to the sharp step of the Fermi distribution function
at T = 0, which produced the non-analytical term in (2.71) in the first place (see
Appendix A). At finite temperature the step is smeared, and the above expression is
multiplied by an exponentially decaying factor, thus reverting to the Yukawa–type
screening with Friedel oscillations superimposed.

2.2.4 Many-Particle Green’s Functions. Bethe–Salpeter Equations.


Vertex Function

We have seen that Green’s functions give a convenient apparatus for a description of
many-body systems. So far we have used the one-particle Green's function, dealing
with a single quasiparticle excitation, though against the many-body background.
It does not apply, e.g., to the case of a bound state of two such excitations. Indeed,
in a Fermi system such a state would be a boson, while the one-particle Green's
function describes a fermion.
This problem can be easily solved. Nobody limits us to the consideration of averages
⟨ΨΨ†⟩ only. The "Schrödinger equation" for G(X, X′) included terms ⟨ΨΨΨ†Ψ†⟩.
Therefore it is natural to introduce n-particle Green's functions. (As usual, there is
no common convention here, so when reading a chapter be careful which definition
is actually used.)
The n-particle (or 2n-point) Green’s function (Fig. 2.17) is defined as follows:

G^(n)_{σ₁σ₂…σₙ, σ₁′σ₂′…σₙ′}(x₁t₁, x₂t₂, …, xₙtₙ; x₁′t₁′, x₂′t₂′, …, xₙ′tₙ′)
  ≡ G^(n)(12⋯n; 1′2′⋯n′)
  = (1/iⁿ) ⟨T Ψ(1)Ψ(2)⋯Ψ(n) Ψ†(n′)⋯Ψ†(2′)Ψ†(1′)⟩. (2.75)

The rules of drawing and decoding Feynman diagrams stay intact and can be
easily derived from the expansion of the S-operator in the average ≈T · · · ρ † ρ † ∇.
There is only one additional rule.
The diagram is multiplied by (−1) S , where S is the parity of the permutation of
the fermion lines’ ends (1√ 2√ · · · n √ ) ↔ (12 · · · n) (see Fig. 2.18).
92 2 Green’s Functions at Zero Temperature

Fig. 2.17 Many-particle Green's function (this convenient "stretched skin" graphic representation
is introduced in [5]; dots mark the outgoing ends)

Fig. 2.18 Sign rule for a many-particle Green's function: a (1, 2, 3) ⇔ (3′, 2′, 1′), sign +1;
b (1, 2, 3) ⇔ (2′, 3′, 1′), sign −1

The origin of this rule is easy to see applying Wick’s theorem to the lowest order
expression for the two-particle Green’s function:

G^(2)(12, 1′2′) ≡ (−i)² ⟨T Ψ₁Ψ₂Ψ†₂′Ψ†₁′⟩ (2.76)
  ≈ (−i)⟨T Ψ₁Ψ†₁′⟩₀ (−i)⟨T Ψ₂Ψ†₂′⟩₀ ∓ (−i)⟨T Ψ₁Ψ†₂′⟩₀ (−i)⟨T Ψ₂Ψ†₁′⟩₀
  = G⁰(11′)G⁰(22′) ∓ G⁰(12′)G⁰(21′).

The cancellation theorem removes only the diagrams with loose parts discon-
nected from the external ends. This means that not every diagram looking discon-
nected is actually disconnected! For example, the diagrams corresponding to (2.76)
(see the two diagrams in Fig. 2.19) are not disconnected and are not canceled. As
a matter of fact they provide a Hartree–Fock approximation for the two-particle
Green’s function (direct and exchange terms, as is evident from their structure).
The two-particle Green’s function is most widely used and therefore has its own
letter, K :

K(12; 1′2′) = −⟨T Ψ(1)Ψ(2)Ψ†(2′)Ψ†(1′)⟩. (2.77)

Its diagram expansion to the second order is given in Fig. 2.19.


The importance of the two-particle Green's function is that (1) it determines the
scattering amplitude of quasiparticles, that is, their interactions, and (2) its poles
define the dispersion law of two-particle excitations (e.g., bosonic excitations in a
normal Fermi system, say zero sound), as well as the appearance of bound states of two
quasiparticles—and therefore the superconducting transition point.
We can define the irreducible two-particle Green’s function by separating all
“seemingly disconnected” diagrams (Fig. 2.20). The first two of this set give the

Fig. 2.19 Second-order expansion of the two-particle Green’s function

Fig. 2.20 Generalized Hartree–Fock approximation for the two-particle Green’s function

self-consistent Hartree–Fock approximation for the two-particle Green’s function


(self-consistent, because it contains exact one-particle Green’s functions):

G(11′)G(22′) ∓ G(12′)G(21′) = G⁰(11′)G⁰(22′) ∓ G⁰(12′)G⁰(21′) + ⋯ . (2.78)
The rest is the irreducible two-particle Green's function and is expressed through
the vertex function Γ (Fig. 2.21):

K̃(12; 1′2′) = K(12; 1′2′) − [G(11′)G(22′) ∓ G(12′)G(21′)]
  = ∫d3 ∫d3′ ∫d4 ∫d4′ G(13)G(24) iΓ(33′; 44′) G(3′1′)G(4′2′). (2.79)

The poles of the two-particle Green’s function define two-particle excitations of


the system in the same fashion as the poles of the one-particle Green’s function
defined quasiparticles. Examples are such excitations in the Fermi systems as zero
sound and plasmons. Evidently, the relevant poles of the two-particle Green’s func-
94 2 Green’s Functions at Zero Temperature

Fig. 2.21 Irreducible part of the two-particle Green’s function and the vertex function

tion appear only in the vertex function: the “tails” are one-particle Green’s functions
and as such don’t bring anything new. Therefore, we concentrate on the vertex func-
tion.
The discourse is simpler in the momentum representation, if we are (as usual) dealing
with a stationary, spatially uniform system. Evidently, only three of the four sets of variables
are independent (because a uniform shift of coordinates or times should not
change anything). We choose the following set of independent combinations:

X₁ − X₁′,  X₂ − X₂′,  X₁′ − X₂′. (2.80)

Here and later on X = (x, t); P = (p, ω); and the "scalar product" is PX = p·x − ωt.


Then the two-particle Green’s function in momentum space is defined as
   
d X1 d X 1√ d X2 d X 2√ e−i(P1 X 1 +P2 X 2 −P1√ X 1√ −P2√ X 2√ ) K (X 1 , X 2 ; X 1√ , X 2√ )

= (2ξ)4 θ(P1 + P2 − P1√ − P2√ )K (P1 , P2 ; P1√ , P1 + P2 − P1√ ). (2.81)

The Fourier transformation for any function of these variables is then defined by

K(P₁, P₂; P₁′, P₁ + P₂ − P₁′)
  = ∫d(X₁ − X₁′) ∫d(X₂ − X₂′) ∫d(X₁′ − X₂′)
    e^{−iP₁(X₁ − X₁′) − iP₂(X₂ − X₂′) − i(P₁ − P₁′)(X₁′ − X₂′)} K(X₁, X₂; X₁′, X₂′);

K(X₁, X₂; X₁′, X₂′)
  = ∫dP₁/(2π)⁴ ∫dP₂/(2π)⁴ ∫dP₁′/(2π)⁴
    e^{iP₁(X₁ − X₁′) + iP₂(X₂ − X₂′) + i(P₁ − P₁′)(X₁′ − X₂′)} K(P₁, P₂; P₁′, P₁ + P₂ − P₁′).

Then Eq. (2.79) can be rewritten as

K̃(P₁, P₂; P₁′, P₁ + P₂ − P₁′) = G(P₁)G(P₂)
  × iΓ(P₁, P₂; P₁′, P₁ + P₂ − P₁′)
  × G(P₁′)G(P₁ + P₂ − P₁′). (2.82)

Now we are ready to derive (for the scalar electron–electron interaction, our stan-
dard guinea pig) an important general relation between the vertex function and the self
energy. That such a relation should exist is reasonable, since Γ and Σ have in
common, besides being uppercase Greek letters, that they both result from the summation of
all somehow irreducible diagrams. First, we present a very graphic proof, which will
then be supported by a more rigorous calculation (which, on the other hand, is only a
translation of graphs into equations).
We start from writing down the equation of motion for the one-particle Green’s
function, in position space. As we observed much earlier, such an equation will
contain the two-particle Green’s function:
 
(i ∂/∂t + ∇²_x/2m + μ) G_{σσ′}(X, X′) = δ_{σσ′} δ(X − X′)
  − i ∫d⁴Y U(X − Y) K_{σγ, σ′γ}(X, Y; X′, Y) (2.83)

(we have made use of the definition ⟨T Ψ†_γ(Y)Ψ_γ(Y)Ψ_σ(X)Ψ†_{σ′}(X′)⟩ ≡ K_{σγ, σ′γ}(X, Y; X′, Y),
with summation over the repeated spin index γ implied). Since G = G⁰ + G⁰ΣG, the relation in question is indeed here, and we
have only to extract it.
Graphically, it is simple: the equation can be symbolically written as [i G 0 ]−1 i G =
I − (−iU )(i 2 K ), that is,

Then we do a series of transformations:

The result is shown in Fig. 2.22. Notice that we used the sign convention specific to
(n ≥ 2)-particle Green's functions in order to determine the signs of the first
two terms on the right-hand side of Fig. 2.22: if "decoded" following the one-particle
rules, they would lack a (−1)-factor due to the exchange of the tails of the two-particle
diagram.
96 2 Green’s Functions at Zero Temperature

Fig. 2.22 Relation between the self energy and vertex function

In analytical form, this equation (sometimes called Dyson’s equation, but less
often than the Dyson equation we encountered earlier) reads

Σ(P) δ_{σβ} = U(0) n(μ) δ_{σβ} + i δ_{σβ} ∫ dP₁/(2π)⁴ U(P − P₁) G(P₁)
  + ∫ dP₁/(2π)⁴ ∫ dP₂/(2π)⁴ G(P₁)G(P₂)
  × Γ_{σγ, βγ}(P₁, P₂; P, P₁ + P₂ − P) G(P₁ + P₂ − P) U(P − P₁). (2.84)

Now let us derive it without graphs, or rather write down each step instead of
drawing it. Again, assume a uniform, stationary, and isotropic system. Then, in
momentum space, (2.83) looks like
[(G⁰(P))⁻¹ G(P) − 1] δ_{σσ′} = −i ∫ dP₁ dP₂/(2π)⁸ K_{σγ, σ′γ}(P₁, P₂; P, P₁ + P₂ − P) U(P − P₁).

(Here (G⁰(P))⁻¹ ≡ ω − p²/2m + μ is a function, not an operator, and simply equals
1/G⁰(P).) Now substitute into this equation the definition (2.79) and divide by G(P).
After this messy operation we obtain

[1/G⁰(P) − 1/G(P)] δ_{σσ′} = −i δ_{σσ′} U(0) ∫ dP₂/(2π)⁴ G(P₂)
  ± i δ_{σσ′} ∫ dP₁/(2π)⁴ U(P − P₁) G(P₁)
  + ∫ dP₁ dP₂/(2π)⁸ Γ_{γσ, γσ′}(P₁, P₂; P, P₁ + P₂ − P)
  × G(P₁)G(P₂)G(P₁ + P₂ − P) U(P − P₁).

Since by virtue of the Dyson equation 1/G⁰(P) − 1/G(P) = Σ(P), we even-
tually recover Eq. (2.84). See how much easier it was with the diagrams? By the way,
the graphs immediately show the physical sense of this relation. The first two terms in
(2.84) give the self-consistent Hartree–Fock approximation with initial (bare) poten-
tial: they take into account the interaction of the test particle with the medium, and
with itself (exchange term). The rest must contain the effects of renormalization of
the interaction, and indeed, the third graph can be understood as containing the renor-
malized interaction vertex (Fig. 2.23). As you see, it contains, in particular, all the
polarization insertions in the interaction line. This is the reason we had a bare poten-
tial line in Fig. 2.22 and Eq. (2.84): otherwise certain diagrams would be included

Fig. 2.23 Vertex function and renormalized interaction

Fig. 2.24 Particle–particle irreducible vertex function

Fig. 2.25 Particle-particle irreducible vertex function and two–particle Green’s function

twice. In all operations with diagrams we must pay special attention to avoiding
double counting.

2.2.4.1 The Bethe–Salpeter Equation

Earlier we introduced the irreducible self energy as a sum of all diagrams that cannot
be separated by severing one fermion line. Let us generalize this and introduce the
particle–particle irreducible vertex function, Γ̃^(PP), which includes all diagrams that
cannot be separated by severing two fermion lines running between the incoming and outgoing ends.
(In Fig. 2.24 diagram (a) is particle–particle irreducible, but diagram (b) is not.)
Then the diagram series for the particle–particle irreducible vertex part (or the
particle–particle irreducible two-particle Green's function, if we drop the external tails)
can be drawn as in Fig. 2.25.
For the vertex function we thus obtain the Bethe–Salpeter equation, which is
a direct analogue of the Dyson equation for the one-particle Green’s function2
(Fig. 2.26):

2Of course, this equation can as well be written for the two–particle Green’s function itself, instead
of the vertex function.
98 2 Green’s Functions at Zero Temperature

Fig. 2.26 Bethe–Salpeter equation (particle–particle channel)

Fig. 2.27 Particle–hole irreducible vertex function

Fig. 2.28 Bethe-Salpeter equation (particle–hole channel)

Γ(12; 1′2′) = Γ̃^(PP)(12; 1′2′)
  + i ∫d3 ∫d3′ ∫d4 ∫d4′ Γ̃^(PP)(12; 34) G(33′) G(44′) Γ(3′4′; 1′2′). (2.85)

Two-particle functions allow for more possibilities: there are more loose ends
in a diagram! Thus, we have a different, particle–hole irreducible vertex, Γ̃^(PH)
(Fig. 2.27, where diagram (b) is now (particle–hole) irreducible, while diagram (a)
is not). This yields another version of the Bethe–Salpeter equation (Fig. 2.28):

Γ(12; 1′2′) = Γ̃^(PH)(12; 1′2′)
  + i ∫d3 ∫d3′ ∫d4 ∫d4′ Γ̃^(PH)(42; 4′2′) G(43) G(3′4′) Γ(13; 1′3′). (2.86)

Of course, both versions are equivalent mathematically, but not physically. Since
there is little hope that either can be solved exactly, some approximations are in
order, and we should choose, as usual, the version that is better as a starting point.
The latter one, e.g., proves to be useful for investigation of the processes with small
momentum transfer between quasiparticles, but this is beyond the scope of this book.

2.3 Problems

• Problem 1

Starting from the expression for the grand potential, Ω = −PV,

Ω = ∫₀^μ dμ′ (2iV) lim_{t→−0} ∫ dp dω/(2π)⁴ e^{−iωt} G(p, ω),

find the pressure of the ideal Fermi gas at zero temperature

G(p, ω) = G⁰(p, ω) = 1/(ω − (ε_p − μ) + i0·sgn ω).

Compare to the classical expression P = nk B T and find the “effective pressure


temperature.” How is it related to the “effective energy temperature” TF = μ/k B ?
• Problem 2
Reduce the ladder approximation series for the two-particle Green’s function to an
integral equation:

• Problem 3
Calculate the lowest order diagram for the polarization operator:

and reproduce the results of Eqs. (2.71), (2.72).


• Problem 4
Starting from the definition (2.16), derive the equation of motion for the unperturbed
phonon propagator.
100 2 Green’s Functions at Zero Temperature

References

Book and Reviews

1. Abrikosov, A.A., Gorkov, L.P., Dzyaloshinski, I.E.: Methods of quantum field theory in statistical
physics. Ch. 2. Dover Publications, New York (1975) (An evergreen classic on the subject.)
2. Fetter, A.L., Walecka, J.D.: Quantum theory of many-particle systems. McGraw-Hill, San Fran-
cisco (1971)
3. Mahan, G.D.: Many-particle physics. Plenum Press, New York (1990) ([2] and [3] are high-level,
very detailed monographs: the standard references on the subject.)
4. Lifshitz, E.M., Pitaevskii, L.P.: Statistical physics pt. II. (Landau and Lifshitz Course of theo-
retical physics, v. IX.) Pergamon Press, New York (1980) (Ch. 2. A comprehensive, but very
compressed account of the zero-temperature Green’s functions techniques.)
5. Mattuck, R.: A guide to Feynman diagrams in the many-body problem. McGraw-Hill, New York
(1976) (Green’s functions techniques are presented in a very instructive and intuitive way.)
6. Nussenzvejg, H.M.: Causality and dispersion relations. Academic Press, New York (1972). (A
very good book for the mathematically inclined reader)
7. Thouless, D.J.: Quantum mechanics of many-body systems. Academic Press, New York (1972)
8. Ziman, J.M.: Elements of advanced quantum theory. Ch. 3, 4. Cambridge University Press,
Cambridge (1969)
Chapter 3
More Green’s Functions, Equilibrium
and Otherwise, and Their Applications

Such, such were the joys


When we all, girls and boys,
In our youth time were seen
On the Ecchoing Green.
William Blake
“Songs of Innocence”

Abstract Equilibrium Green’s functions at finite temperature. Formal analogy


between the equilibrium statistical operator and the evolution operator in imagi-
nary time. Temperature (Matsubara) Green’s functions and their relation to equilib-
rium Green’s functions. Diagram technique for the temperature Green’s functions
(Matsubara formalism). Linear response theory, Kubo formulas, and fluctuation-
dissipation theorem. Nonequilibrium Green’s functions. Keldysh formalism: time-
ordering along the Keldysh contour and diagram technique for matrix Green’s
functions. Quantum kinetic equation and its quasiclassical limit. Quantum transport:
Landauer formula and conductance quantization. Method of tunneling Hamiltonian.

3.1 Analytic Properties of Equilibrium Green’s Functions

The formalism we have developed so far is limited to zero temperature (i.e., to the
ground state) properties of many-body systems. As you remember, this is because
the ground state is always nondegenerate, so that we could pull off the trick with
adiabatic hypothesis: if you slowly turn interactions on, and then off, the worst that
can happen is some phase factor, which anyway cancels. This, in turn, allowed us to
build up the diagrammatic technique.
Physically, it is rather awkward to be confined to the case of T = 0. In principle,
the average in the definition of Green’s function could be taken over any quantum
state, or set of states, and we would be able at least to determine its analytic properties,
following the same steps as at T = 0. For example, we can define equilibrium Green’s


functions at finite temperature. Moreover, it turns out that diagram techniques exist
that can be used to actually calculate such Green’s functions. In this chapter, we will
discuss how and why this can be done.

3.1.1 Statistical Operator (Density Matrix): The Liouville Equation
If a quantum system is in some definite quantum state, |Ψ⟩, one says it is in a
pure state; otherwise (i.e., when the quantum state of the system is known only
statistically) it is in a mixed state and is described not by a single state vector, but
by the statistical operator, ρ̂:

ρ̂ = Σ_m |Ψ_m⟩ W_m ⟨Ψ_m|. (3.1)

Here W_m is the probability of finding the system in the quantum state |Ψ_m⟩; evidently,

Σ_m W_m = 1 (3.2)

(the states {|Ψ_m⟩}_{m=1}^{∞} are supposed to be normalized, but not necessarily orthogonal).
The idea is that the statistical operator allows one to find the average value of any
operator O in this mixed state via

⟨O⟩ ≡ Σ_m W_m ⟨Ψ_m|O|Ψ_m⟩ = tr(ρ̂ O). (3.3)

(In a mixed state we have to do averaging twice: first over each constituent quantum
state, and then over the set of these states with weights Wm ; the trace with the
statistical operator in the above formula takes care of both.) Equation (3.2) ensures
that the probabilities add up to one—that is, the unitarity.
If we choose some orthonormal basis, {|n√}, then the statistical operator can be
rewritten as follows:

π̂ = |n√πnn≈ →n ≈ |; (3.4)
n n≈

πnn ≈ = Wm →n|m √→m |n ≈ √. (3.5)
m

In this form the statistical operator (as the set of matrix elements {ρ_{n n′}}) is often called
the density matrix.¹
3.1 Analytic Properties of Equilibrium Green’s Functions 103

The diagonal elements of the density matrix, ρ_{nn} ≥ 0, give the probabilities of
finding the system in the state |n⟩, while the off-diagonal terms describe the quantum
correlations between different states.
Useful properties of the trace of the statistical operator directly follow from its
definition:

tr(ρ̂) = 1; (3.6)

tr(ρ̂²) ≤ [tr(ρ̂)]² (3.7)

(an equality is achieved if and only if the system is in a pure state). The former equality
ensures probability conservation and directly follows from (3.2): the trace of a matrix
(or an operator) is an invariant under unitary transformations of coordinates, and since
in one special basis it equals one (3.2), so will it under any choice of a basic set of
states.
The time evolution of the statistical operator can be determined if we write it in the
form of Eq. (3.1) and recall that |Ψ(t)⟩ = U(t)|Ψ(0)⟩:

ρ̂(t) = Σ_m |Ψ_m(t)⟩ W_m ⟨Ψ_m(t)| = U(t) ρ̂(0) U†(t). (3.8)

Therefore, the statistical operator satisfies the Liouville equation (called so because it
is a direct analogue of the classical Liouville equation for the distribution function):

i dρ̂(t)/dt = [H(t), ρ̂(t)]. (3.9)

Note that this is an equation in the Schrödinger, not the Heisenberg, representation. The
Hamiltonian is a Schrödinger operator, and its dependence on time (if any) can only
be explicit, e.g., due to an alternating external field.

3.1.2 Definition and Analytic Properties of Equilibrium Green's Functions

The general definitions of the causal, retarded, and advanced one-particle Green’s
functions,

¹ In the case of a single particle we can take as the basic set the coordinate eigenfunctions {|x⟩}, so that
⟨n|Ψ_m⟩ → ⟨x|Ψ_m⟩ = Ψ_m(x), and the result takes the familiar form

ρ(x, x′) = Σ_m W_m Ψ_m(x) Ψ*_m(x′).
104 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications
 
G_{αβ}(x₁, t₁; x₂, t₂) = −i tr(ρ̂ T ψ_α(x₁, t₁) ψ†_β(x₂, t₂)), (3.10)

G^R_{αβ}(x₁, t₁; x₂, t₂)
  = −i tr(ρ̂ [ψ_α(x₁, t₁) ψ†_β(x₂, t₂) ± ψ†_β(x₂, t₂) ψ_α(x₁, t₁)]) θ(t₁ − t₂), (3.11)

G^A_{αβ}(x₁, t₁; x₂, t₂)
  = +i tr(ρ̂ [ψ_α(x₁, t₁) ψ†_β(x₂, t₂) ± ψ†_β(x₂, t₂) ψ_α(x₁, t₁)]) θ(t₂ − t₁), (3.12)

are, of course, valid for the equilibrium state at finite temperature, when the statistical
operator has the standard Gibbs form,

ρ̂ = e^{−β(H − μN − Ω)} = Σ_s e^{−β(E_s − μN_s − Ω)} |s⟩⟨s| = Σ_s ρ_s |s⟩⟨s|, (3.13)

β ≡ 1/(k_B T).

We choose the basic set of common energy and particle number eigenstates, |s⟩, and
work, as usual, with the grand canonical ensemble:

H_GCE |s⟩ ≡ (H_CE − μN)|s⟩ = (E_s − μN_s)|s⟩.

Therefore, the statistical operator is diagonal, ρ_{ss′} ≡ δ_{ss′} ρ_s = e^{β(Ω − E_s + μN_s)}. The
normalization factor, e^{−βΩ} = tr e^{−β(H − μN)}, contains the grand potential, Ω.
In the isotropic uniform case, of course,

G_{αβ}(x₁, t₁; x₂, t₂) = δ_{αβ} G(x₁ − x₂, t₁ − t₂), etc.

Now we will quickly repeat, mutatis mutandis, the calculations we made when
discussing analytic properties of zero-temperature Green’s functions.

3.1.2.1 The Generalized Källén–Lehmann Representation

The generalized Källén–Lehmann representation is derived in the same way as at


zero temperature (Sect. 2.1.2), with the only difference that we must include matrix
3.1 Analytic Properties of Equilibrium Green’s Functions 105

elements of the field operators between all states of the system, ⟨s|ψ(X)|s′⟩, because
now all excited states enter with nonzero weight. As a result, we obtain for the causal
Green's function

G(p, ω) = (1/2)(2π)³ Σ_m Σ_n ρ_n A_mn δ(p − P_mn)
  × [1/(ω − ω_mn + i0) ± e^{−βω_mn}/(ω − ω_mn − i0)]; (3.14)

A_mn = Σ_α |⟨n|ψ_α|m⟩|²; (3.15)
ω_mn = E_m − μN_m − (E_n − μN_n). (3.16)

Separating the real and imaginary parts of (3.14) at real frequencies (using the
ubiquitous Weierstrass formula (2.31)), we find

Re G(p, ω) = (1/2)(2π)³ P Σ_m Σ_n ρ_n A_mn δ(p − P_mn) (1 ± e^{−βω_mn}) 1/(ω − ω_mn), (3.17)

Im G(p, ω) = −π (1/2)(2π)³ Σ_m Σ_n ρ_n A_mn δ(p − P_mn) (1 ∓ e^{−βω_mn}) δ(ω − ω_mn). (3.18)

On the other hand, for the retarded and advanced Green's functions we obtain by the
same method

G^R(p, ω) = (1/2)(2π)³ Σ_m Σ_n ρ_n A_mn δ(p − P_mn) (1 ± e^{−βω_mn})/(ω − ω_mn + i0); (3.19)

G^A(p, ω) = (1/2)(2π)³ Σ_m Σ_n ρ_n A_mn δ(p − P_mn) (1 ± e^{−βω_mn})/(ω − ω_mn − i0). (3.20)

In the thermodynamic limit (N, V → ∞, N/V = const) it is more convenient to
use the generalized Källén–Lehmann representation in the continuum form:

G^{R,A}(p, ω) = ∫_{−∞}^{∞} (dω′/π) ρ^{R,A}(p, ω′)/(ω′ − ω ∓ i0), (3.21)

where the weight function (spectral density) is


106 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

  
ρ^{R,A}(p, ω′) = −π (1/2)(2π)³ Σ_m Σ_n ρ_n A_mn (1 ± e^{−βω_mn})
  × δ(p − P_mn) δ(ω′ − ω_mn). (3.22)

After applying the Weierstrass formula once again, we see that

Re G^{R,A}(p, ω) = Re G(p, ω); (3.23)

Im G^{R,A}(p, ω) = ± Im G(p, ω) × { coth(βω/2)  (Fermi statistics),
                                     tanh(βω/2)  (Bose statistics). (3.24)

In the limit β → ∞ this, of course, reduces to (2.42).
Thus we have derived an important expression for the retarded/advanced
equilibrium Green's functions in terms of the causal Green's function at finite temperature
(for real frequencies):

G^{R,A}(p, ω) = Re G(p, ω) + { ±i coth(βω/2) Im G(p, ω)  (Fermi statistics),
                                ±i tanh(βω/2) Im G(p, ω)  (Bose statistics). (3.25)

Relation (3.25) allows us to find the G R,A (ω) if we know G(ω). Note that the
latter is not an analytic function, so that now the quasiparticle excitations are rather
defined by the poles of G R,A (ω) in the lower (upper) half-plane of complex frequency,
respectively.
This comes as no surprise, since we already know that these two Green’s functions
have direct physical meaning. They are involved, e.g., in calculations of the kinetic
properties of the system in linear response theory, which we will consider later. But
since there is no regular perturbation theory to calculate G R,A directly, we will use
an easy detour. There is a regular way to find the causal Green’s function (the so
called Matsubara formalism), after which retarded and advanced Green’s functions
can be directly obtained with the help of (3.25).
We still have a safeguard against mistakes that can be caused by inadequate
approximations, the Kramers–Kronig relations, which, of course, hold at any tem-
perature (as causality itself):
Re G^{R,A}(p, ω) = ±(1/π) P ∫_{−∞}^{∞} dω′ Im G^{R,A}(p, ω′)/(ω′ − ω),

as well as the asymptotic formula

G(ω), G^{R,A}(ω)|_{|ω|→∞} ∼ 1/ω;
3.1 Analytic Properties of Equilibrium Green’s Functions 107

Fig. 3.1 Spectral density of the retarded Green's function

the latter, as we remember, is a result of canonical commutation relations and


probability conservation.
It follows from the Kramers–Kronig relations and (3.21) that for real frequen-
cies the spectral density in the thermodynamic limit, ρ^R(p, ω), is

ρ^R(p, ω) = Im G^R(p, ω) ≡ −(1/2) Γ(p, ω). (3.26)
The latter function, Γ(p, ω), is also frequently called spectral density, which (hope-
fully) will not lead to any confusion.
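As a quick numerical sanity check of the Kramers–Kronig relation quoted above (a sketch not taken from the book; the single broadened pole, the grid, and the crude principal-value prescription are all assumptions), one can compare the reconstructed and exact real parts of a model retarded Green's function:

```python
import numpy as np

# Model retarded function G^R(w) = 1/(w - xi + i*gamma): rebuild Re G^R from Im G^R.
xi, gamma = 0.3, 0.05
w = np.linspace(-30.0, 30.0, 240001)
dw = w[1] - w[0]
im_gr = -gamma / ((w - xi)**2 + gamma**2)

def re_from_kramers_kronig(w0):
    integrand = im_gr / (w - w0)
    integrand[np.argmin(np.abs(w - w0))] = 0.0   # crude principal-value prescription
    return np.sum(integrand) * dw / np.pi

w0 = 0.5
print(re_from_kramers_kronig(w0))              # reconstructed Re G^R(w0)
print((w0 - xi) / ((w0 - xi)**2 + gamma**2))   # exact Re G^R(w0); the two nearly coincide
```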

3.1.2.2 The Sum Rule for the Retarded Green’s Function

Here is another very useful safeguard: the sum rule for the spectral density Γ (here
is the opportunity not to get confused!),

∫_{−∞}^{∞} (dω/2π) Γ(p, ω) = 1. (3.27)

Indeed,


Γ(p, ω) = (1/2)(2π)⁴ Σ_m Σ_n ρ_n A_mn (1 ± e^{−βω_mn}) δ(p − P_mn) δ(ω − ω_mn), (3.28)

and we can integrate over frequency and then roll the calculations back to canonical
commutation relations between field operators, in the same manner as we did when
we calculated the 1/ω-asymptotics of Green’s functions.
What is the physical meaning of this formula? Γ(p, ω) gives the probability that
a quasiparticle with energy ω has momentum p (or vice versa). We have already dis-
108 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

cussed that due to interactions there is always some momentum and energy exchange
between particles, broadening the δ(ε_p − μ − ω) peak of a noninteracting system
(Fig. 3.1). Since a quasiparticle must have some energy given momentum, the inte-
gral (with appropriate normalization) must yield unity. Which it does, as we have
seen.
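A one-line numerical check of (3.27) for a model spectral density (a sketch, not from the book; the Lorentzian form and the parameters are assumed purely for illustration):

```python
import numpy as np

# Lorentzian spectral density of width `width` centered at xi; this is the broadened
# quasiparticle peak of Fig. 3.1, corresponding to G^R = 1/(w - xi + i*width/2).
xi, width = 0.3, 0.1
w = np.linspace(-200.0, 200.0, 400001)
dw = w[1] - w[0]
spectral = width / ((w - xi)**2 + (width / 2)**2)

print(np.sum(spectral) * dw / (2 * np.pi))    # ~ 1, as Eq. (3.27) requires
```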

3.1.2.3 Unperturbed Green’s Functions

Unperturbed Green’s functions can be easily calculated directly from the definition.
Here it is easier, though, to calculate retarded and advanced Green’s functions first,
and then obtain the causal Green’s function from (3.25). If you perform this useful
exercise, you will find

G^{R,A(0)}(p, ω) = 1/(ω − ε_p + μ ± i0); (3.29)

G^(0)(p, ω) = P 1/(ω − ε_p + μ)
  − iπ δ(ω − ε_p + μ) × { tanh(βω/2)  (Fermi statistics),
                           coth(βω/2)  (Bose statistics), (3.30)

in agreement with general analytic properties.

3.1.2.4 Particle Density for Fermi/Bose Particles

The particle density in momentum space (per spin) is given by

n_p = (1/2) Σ_α ⟨c†_{p[α]} c_{p[α]}⟩. (3.31)

Here c†_{p[α]}, c_{p[α]} are the Fermi (Bose) creation/annihilation operators. Note also a useful
relation:

⟨c†_{p[α]} c_{p′[α′]}⟩ = (2π)³ δ(p − p′) [δ_{αα′}] n_p. (3.32)

Then we can write

(2π)³ δ(p − p′) [δ_{αα′}] n_p
  = (1/2) Σ_α Σ_m ρ_m ⟨m| ∫d³x e^{ipx} ψ†_{[α]}(x) ∫d³x′ e^{−ip′x′} ψ_{[α′]}(x′) |m⟩,
3.1 Analytic Properties of Equilibrium Green’s Functions 109

and

n_p = (1/2) Σ_m Σ_n ρ_m δ(p − P_mn) A_mn. (3.33)

Comparison of this expression with (3.28) immediately leads to the following beautiful
formula:

n_p = ∫ (dω/2π) Γ(p, ω) n_{F,B}(ω); (3.34)

n_F(ω) = 1/(e^{βω} + 1); (3.35)
n_B(ω) = 1/(e^{βω} − 1). (3.36)

It has an evident physical meaning: the statistical Fermi (Bose) distribution determines
the probability for the particle to have energy ω at given temperature, while the spec-
tral density Γ(p, ω) gives the probability that the particle with this energy has the
momentum p.
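Formula (3.34) is easy to test numerically. In the sketch below (not from the book; the Lorentzian spectral density and all parameters are illustrative assumptions), a narrow quasiparticle peak reproduces the free-particle occupation n_F(ξ_p):

```python
import numpy as np

beta, xi, width = 5.0, 0.1, 0.01    # inverse temperature, peak position and width (toy values)
w = np.linspace(-100.0, 100.0, 400001)
dw = w[1] - w[0]
spectral = width / ((w - xi)**2 + (width / 2)**2)   # Lorentzian, normalized to 2*pi
n_fermi = 0.5 * (1.0 - np.tanh(beta * w / 2.0))     # Fermi function, Eq. (3.35)

n_p = np.sum(spectral * n_fermi) * dw / (2 * np.pi)  # Eq. (3.34)
print(n_p)                                           # approaches n_F(xi) as width -> 0
print(0.5 * (1.0 - np.tanh(beta * xi / 2.0)))
```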

3.2 Matsubara Formalism

3.2.1 Bloch’s Equation

After learning a lot about the analytic properties of equilibrium Green's functions at
finite temperatures, we once again meet the nasty question: how do we actually calculate
the Green's function?
We cannot use directly the results of zero-temperature diagram technique. The
reason is that now we have to average over all excited states of the system, not
only its ground state. And while the latter is unique, the former are highly degenerate
(infinitely degenerate in the thermodynamic limit). Therefore our previous reasoning
employing the adiabatic hypothesis no longer works: the adiabatic turning on and
off of the interaction can leave the system at t = +∇ in any linear combination
of excited states, very different from the one present at t = −∇, depending on the
interaction, initial state, and exact way of turning this interaction on and off. This,
in its turn, means that we cannot separate the 1/→S√-term, and the entire scheme
fails. A clear indication of this fact is that the causal Green’s function is essentially
nonanalytic, and thus cannot be obtained by summation of a series.
There are different ways of dealing with this trouble. First, we could write down
an equation of motion for the Green’s function, like (2.83), then decouple the higher-
order Green’s function and find an approximation (checking that Kramers–Kronig
relations are satisfied, etc.). The setback here is that you don’t have a regular proce-
dure and must rely on a happy guess.
110 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

Second, we could calculate the Green's function directly from the general formula
(3.10) for the average of Heisenberg operators, ⟨n|ψψ†|n⟩ = ⟨n|S⁻¹ T(ψψ† S)|n⟩.
There is an ingenious way to actually succeed (the Keldysh formalism), and it has a
bonus of being naturally applicable to any nonequilibrium state of the system as well.
We will discuss it later. The setback of this method is that all the Green’s functions,
self energies etc. become 2 × 2 matrices, which does not make calculations easier. If
we do not exactly need to deal with an essentially nonequilibrium situation, we had
better opt for something handier.
Third, we can use the remarkable analogy between the evolution operator in
conventional time, U = e^{−iHt}, and the (non-normalized) equilibrium statistical
operator ρ̂ = e^{−βH}, β = 1/T. The idea of Matsubara was to use this analogy to
define some new, Matsubara, or temperature, Green's functions, closely related to
the conventional causal Green's functions in real time. It turns out that for temperature
Green's functions a simple and useful diagrammatic technique can be developed [1, 6].
If we introduce the variable τ, 0 < τ < β, we see that ρ̂ satisfies the Bloch
equation,

∂ρ̂(τ)/∂τ = −H ρ̂(τ), (3.37)

with the initial condition ρ̂(0) = 1. If we perform the transformation

t ↔ −iτ, (3.38)

this equation transforms into the Schrödinger equation for ρ̂(it) on the imaginary
interval 0 > t > −iβ:

i ∂ρ̂(it)/∂t = H ρ̂(it). (3.39)
The statistical operator is a generalization of the wave function, and it is not sur-
prising that it satisfies some sort of Schrödinger equation. What is mildly surprising
is that imaginary time enters the picture; but this is not totally exotic, because we meet
a vaguely similar situation with imaginary frequencies when the evolution towards
equilibrium is discussed (e.g., in the classical theory of a damped oscillator). Here
it is more convenient to rotate the time axis by π/2 in the complex time plane (see
Fig. 3.2).
This so-called Wick’s rotation transforms the Heisenberg operators into Matsub-
ara operators:

ψ(x, t) = e^{iHt} ψ(x) e^{−iHt}  →  ψ^M(x, τ) = e^{Hτ} ψ(x) e^{−Hτ}; (3.40)

ψ†(x, t)  →  ψ̄^M(x, τ) = e^{Hτ} ψ†(x) e^{−Hτ}. (3.41)

Let us stress that the conjugated Matsubara field operator is not the Hermitian con-
jugate of the Matsubara field operator:

Fig. 3.2 Wick rotation in the complex time plane


ψ̄^M(x, τ) ≠ (ψ^M(x, τ))† !

These operators satisfy the equations of motion, which are an “analytic continuation”
of Heisenberg equations (1.84) at imaginary times:

∂ψ^M(x, τ)/∂τ = [H, ψ^M(x, τ)]; (3.42)

∂ψ̄^M(x, τ)/∂τ = [H, ψ̄^M(x, τ)]. (3.43)

3.2.2 Temperature (Matsubara) Green’s Function

Now we can define the temperature Green's functions. First, we introduce the tem-
perature ordering operator, T_τ, which, as usual, orders the operators so that the larger
the argument τ, the further to the left the operator stands. The temperature Green's function
is then, in direct analogy to (3.10),

G_{αα′}(x, τ; x′, τ′) = −⟨T_τ ψ^M_α(x, τ) ψ̄^M_{α′}(x′, τ′)⟩
  = −tr(e^{−β(H − μN − Ω)} T_τ ψ^M_α(x, τ) ψ̄^M_{α′}(x′, τ′)). (3.44)
112 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

As usual, we consider the uniform state without magnetic ordering, so that the Green’s
function depends on x − x≈ , and spin dependence (if any) reduces to δρρ≈ .
Following the usual drill, we now explore the analytic properties of this function.
First we show that it depends only on the difference of its ψ -arguments:

G(x, τ; x′, τ′) = G(x, x′; τ − τ′). (3.45)

If, for example, τ > τ′, then

G(x, τ; x′, τ′) = −tr(e^{−β(H − Ω)} ψ^M(x, τ) ψ̄^M(x′, τ′))
  = −e^{βΩ} tr(e^{−βH} e^{Hτ} ψ(x) e^{−Hτ} e^{Hτ′} ψ̄(x′) e^{−Hτ′})
  = −e^{βΩ} tr(e^{−(β − τ + τ′)H} ψ(x) e^{−H(τ − τ′)} ψ̄(x′))

(we have used the cyclic invariance of the trace).


So, temperature Green’s functions depend on a variable ψ − ψ ≈ , which changes
from −ε to ε and can be considered as 2ε-periodic on the whole of the real axis ψ .
Therefore, it can be expanded in a Fourier series:

G(τ) = (1/β) Σ_{n=−∞}^{∞} G(ω_n) e^{−iω_n τ}, (3.46)

where the Matsubara frequencies are


ω_n = πn/β. (3.47)

Now we will show that depending on the statistics of particles involved, the series
contains either odd or even Matsubara frequencies,

ω_ν^F = (2ν + 1)π/β; (3.48)

ω_ν^B = 2νπ/β. (3.49)

To see this, let us take some ψ < 0 and calculate G(ψ ) and G(ψ + ε):

Fig. 3.3 Periodic and antiperiodic functions of imaginary time τ

 
G(τ) = ± tr(e^{β(Ω − H)} ψ̄ e^{Hτ} ψ e^{−Hτ})
  = ± e^{βΩ} tr(e^{−H(τ + β)} ψ̄ e^{Hτ} ψ);

G(τ + β) = −tr(e^{β(Ω − H)} e^{H(τ + β)} ψ e^{−H(τ + β)} ψ̄)
  = −e^{βΩ} tr(e^{Hτ} ψ e^{−H(τ + β)} ψ̄)
  = ∓ G(τ).

We have used here the cyclic invariance of the trace. The upper sign, as usual,
corresponds to the Fermi statistics (Fig. 3.3). Thus temperature Green’s functions
are periodic (for bosons) or antiperiodic (for fermions) with period β. This is exactly
what we obtain by keeping only even or odd Matsubara frequencies in the series (3.46),
because

e^{−iω_ν^F(τ + β)} = e^{−iω_ν^F τ} e^{−i(2ν + 1)π} = −e^{−iω_ν^F τ};

e^{−iω_ν^B(τ + β)} = e^{−iω_ν^B τ} e^{−i2νπ} = e^{−iω_ν^B τ}.

Finally, expanding G(x, ωn ) in a Fourier integral over momenta, we come to the


following form:
G(x, τ) = (1/β) Σ_{ν=−∞}^{∞} ∫ d³p/(2π)³ e^{i(p·x − ω_ν τ)} G(p, ω_ν), (3.50)

with the inverse transformation

G(p, ω_ν) = ∫₀^β dτ ∫ d³x e^{−i(p·x − ω_ν τ)} G(x, τ). (3.51)
114 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

3.2.2.1 The Generalized Källén–Lehmann Representation

Without writing out the details of these by now routine calculations (which you can do as
an exercise), we have

G(p, ω_ν) = (1/2)(2π)³ Σ_m Σ_n ρ_n A_mn δ(p − P_mn) (1 ± e^{−βω_mn})/(iω_ν − ω_mn). (3.52)

The coefficients A_mn here are the same ones that enter the formulas for the real-time
Green's functions in equilibrium, Eqs. (3.14), (3.19), (3.20). Comparing (3.52) with
these equations immediately leads to the relation between the temperature and real-time
Green's functions:

G(p, ω_ν) = G^R(p, iω_ν), ω_ν > 0; (3.53)

G(p, ω_ν) = G^A(p, iω_ν), ω_ν < 0. (3.54)

If we know the temperature Green's function, we can find the real-time ones by
analytic continuation from the imaginary frequencies iω_ν to the real axis! (A word of warning is in order here:
analytic continuation is simple as a concept, but may prove notoriously difficult in
analytic continuation to imaginary frequencies! (A word of warning is in order here:
analytic continuation is simple as a concept, but may prove notoriously difficult in
actual calculations. On the other hand, static properties of the system can be directly
obtained from temperature Green’s functions.)

3.2.3 Perturbation Series and Diagram Techniques for the Temperature Green's Function

Now, at last, we can return to drawing pictures. To begin with, we present the system’s
Hamiltonian in the standard form

H = H0 + H1

(now both terms must be time independent in Schrödinger representation; other-


wise, we will not be in equilibrium state: there will be no equilibrium state!). The
“Matsubara interaction representation” is then defined by

 M (x, ψ ) = eH0 ψ λ(x)e−H0 ψ , (3.55)

so the “Heisenberg” Matsubara field operator equals

λ M (x, ψ ) = eHψ e−H0 ψ Ψ M (x)eH0 ψ e−Hψ . (3.56)

Repeating essentially the same steps as before, (Sects. 1.3, 2.2.1), let us introduce
the imaginary-time S-matrix in “interaction representation”:
3.2 Matsubara Formalism 115

τ(ψ1 , ψ2 ) = eH0 ψ1 e−H(ψ1 −ψ2 ) e−H0 ψ2 . (3.57)

It satisfies self-evident conditions:

τ(ψ2 , ψ1 ) = τ −1 (ψ1 , ψ2 ); (3.58)


−1
τ(ψ1 , ψ3 )τ (ψ2 , ψ3 ) = τ(ψ1 , ψ2 ). (3.59)

We can also write down a differential equation for τ(ψ , ψ2 ):

φ
τ(ψ , ψ2 ) = −H1 (ψ )τ(ψ , ψ2 ),
φψ
where
H1 (ψ ) = eH0ψ H1 e−H0 ψ . (3.60)

Iterating it, we find the analogue to Dyson’s expansion for τ, which is (for ψ1 > ψ2 )
⎧ ψ ⎫
⎨ 1 ⎬
τ(ψ1 , ψ2 ) = Tψ exp − dψ H1 (ψ ) . (3.61)
⎩ ⎭
ψ2

You already know what to do next, but we will nevertheless explicitly derive
the basic expression for G. Using the “Matsubara interaction” representation for
Matsubara field operators, we find that the temperature Green’s function can be
expressed through the “interaction” field operators as follows (omitting the “M”
superscripts for brevity):

G(x1 , x2 ; ψ1 − ψ2 )

= −eε tr e−ε H eHψ1 e−H0 ψ1 (ψ1 )eH0 ψ1 e−Hψ1

× eHψ2 e−H0 ψ2 (ψ
¯ 2 )eH0 ψ2 e−Hψ2 α(ψ1 − ψ2 )

↑ tr e−ε H eHψ2 e−H0 ψ2 (ψ
¯ 2 )eH0 ψ2 e−Hψ2


Hψ1 −H0 ψ1 H0 ψ1 −Hψ1


×e e (ψ1 )e e α(ψ2 − ψ1 ) .

Take, for example, the first term in this equation. Using the definition of τ(ψ1 , ψ2 )
(3.57), we can rewrite it as
 
−eε tr e−ε H0 τ(ε, ψ1 )(ψ1 )τ(ψ1 , ψ2 )(ψ
¯ 2 )eH0 ψ2 τ(ψ2 , 0) α(ψ1 − ψ2 ).

Therefore, we can write


116 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications
 
Gρξ (x1 , x2 ; ψ1 − ψ2 ) = −eε tr e−ε H0 Tψ (τ(ε, 0)(ψ1 )(ψ
¯ 2 ))
¯ 2 ))√0
= −eε(−0 ) →Tψ (τ(ε, 0)(ψ1 )(ψ

(here →· · · √0 is the average over the unperturbed normalized statistical operator


eε(0 −H) ). This would be an exact counterpart of our zero-temperature formula
(2.55) if not for the eε(−0 ) -factor instead of the →S(∇, −∇)√0− denominator. But
after noticing that
 −1
eε(−0 ) = eε(0 −)
 −1
= eε0 tr e−ε H
 −1
= tr eε(0 −H0 ) τ(ε, 0)
= →τ(ε, 0)√−1
0 ,

we see that actually the formula is a direct analogue to the zero-temperature case
(where we have explicitly written spin indices):

¯ ρ2 (x2 , ψ2 )τ(ε, 0)√0


→Tψ ρ1 (x1 , ψ1 )
Gρ1 ,ρ2 (x1 , x2 ; ψ1 − ψ2 ) = − . (3.62)
→τ(ε, 0)√0

This formula provides the basis for the Matsubara diagram techniques. We again
expand the S-matrix τ(ε, 0) in series over the interaction H1 and express the terms
as averages over the unperturbed ground state. Wick’s and cancellation theorems
are still valid in this case, but we will not bother to rewrite the proof for imaginary
times. You can easily check that e.g., the “thermodynamic” proof of Wick’s theorem
holds after the substitution it ∞ ψ . Therefore we can present the terms as Feynman
diagrams; all disconnected diagrams cancel, and we are left with the usual connected
lot. The rules are given in Table 3.1.
The only difference is that in Fourier representation, instead of integrating over
dummy frequencies in the vertices from minus to plus infinity we will sum over
the discrete set of Matsubara frequencies. This is generally more troublesome than
integration (as all discrete mathematics goes), but there are many useful tricks. I will
give here the most basic one, which in many cases does the job.

3.2.3.1 Summation over Frequencies

If a function of a complex variable z satisfies f (z) ∼ |z|−(1+θ) , when |z| ∞ ∇, θ


being a positive infinitesimal, then the following identities hold:
Fermi frequencies
3.2 Matsubara Formalism 117

Table 3.1 Feynman rules for temperature Green’s function (scalar electron–electron interaction)
Coordinate space

X X'
−G(X X ) Temperature Green’s
≡ −Gαα (x, x ; τ − τ ) function
−G 0 (X X ) Unperturbed temperature
≡ −G 0 (x, x ; τ − τ )δαα Green’s function

−U (1 − 2) ≡ −U (x1 − x2 ) Interaction potential


×δ(τ1 − τ2 )δα1 α2

n 0 (x) Unperturbed electron density


The integration over all intermediate coordinates, and “times” ( 0β dτ )
and summation over dummy spin indices are implied
Momentum space
−G(p, ωv ) Temperature Green’s function
−G 0 (p, ωv ) Unperturbed temperature Green’s function
−U (p) Interaction potential

n 0 (μ) Unperturbed electron density


The integration over all intermediate momenta ( d3 p/(2π)3 )
1 ∞
and summation over discrete frequencies β v=−∞ and over
dummy spin indices are implied, taking into account energy (fre-
quency)/momentum conservation in every vertex

1 
∇   1 εz s
f iωvF = − tanh Res f (z). (3.63)
ε v=−∇ 2 s 2 Z =z s

Bose frequencies

1 
∇   1 εz s
f iωvB = − coth Res f (z). (3.64)
ε v=−∇ 2 s 2 z=z s

The origin of these formulae is clear if we recall that the function of complex
eεz −1
variable z, tanh εz2 = eεz +1 , has poles exactly at the points z v = i∂(2v + 1)/ε =
iωvF , and its residue at any of these points equals 2/ε.

The contour integral dzf (z) tanh εz 2 along the infinitely large circleof Fig. 3.4
vanishes (this is ensured by the condition that f (z) vanishes faster than 1/|z| at
infinity). On the other hand, by Cauchy theorem, the integral is proportional to the
sum of the residues of the integrand; the residues of tanh give the left-hand side of
(3.63), while the rest gives its right-hand side. The same considerations lead to (3.64)
if we use coth(εz/2) instead of tanh(εz/2). Since tanh(εz/2) = n B (z)/n F (z), we
118 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

Fig. 3.4 An integration trick

see that equilibrium distribution functions naturally enter the calculations from this
(seemingly) formal side.

3.3 Linear Response Theory

3.3.1 Linear Response Theory: Kubo Formulas

We have developed some approaches that allow us, we hope, to calculate equilib-
rium real-time Green’s functions, both at zero and at finite temperatures. We have
repeatedly referred to G R (t) as the function that naturally describes the reaction
of the system to external perturbation. This could seem a contradiction, since in
equilibrium—where only we can calculate Green’s functions using the methods
developed so far—there can be no external perturbation. Nevertheless, we still can
use equilibrium Green’s functions in order to find the linear reaction of the system
to a weak perturbation. This constitutes so-called linear response theory. The main
idea behind it is, well, that of linear response: we can neglect the higher powers of
the perturbation, as long as it is small enough. The two famous examples of this
approach are Hooke’s law (F = kx) and Ohm’s law (V = IR): the constants k, R
are taken in equilibrium, at zero strain or current, and we neglect the higher-order
corrections in x or I .
Suppose that the system is affected by a weak external perturbation (which is
generally time dependent, say an external electromagnetic field). Its Hamiltonian is
thus
H(t) = H0 + H1 (t)

(here H0 includes all interactions in the system except the external perturbation under
consideration). We are interested in some observable, represented by an operator A
3.3 Linear Response Theory 119

(say, the electric current). We measure its average value, →A√t , as a function of the
perturbation strength.
In accordance with our usual approach, we introduce the statistical operator in
interaction representation:

π̃(t) = ei H0t π̂(t)e−i H0 t , (3.65)

which satisfies the Liouville equation


 
˙ = H̃∇ (t), π̃(t) .
i π̃(t) (3.66)

Here H̃∇ (t) = ei H0 t H1 (t)e−i H0 t .


We will use the statistical operator (3.66) in order to calculate →A√t . Due to the
cyclic invariance of the trace it is given by
⎪   
→A√t = tr π̂(t)A = tr π̃(t)Ã(t) . (3.67)

The Liouville equation for π̃(t) rewritten in integral form gives

t  
π̃(t) = −i dt ≈ H̃1 (t ≈ ), π̃(t ≈ ) + π̃(−∇).
−∇

The idea of Kubo’s approach to linear response theory is as follows: Assume that at
t = −∇ the perturbation was absent (and later adiabatically switched on), so that
the system is in equilibrium:

π̃(−∇) = π̂0 ≡ exp [ε( − H0 )] .

Then the linear response of the system to the perturbation is given by the following
expression:

t    
π̃(t) ≡ π̃(t) − π̂0 = −i dt ≈ H̃1 (t ≈ ), π̂0 + O (H1 )2 . (3.68)
−∇

In other words, we use the first-order perturbation theory for the nonequilibrium sta-
tistical operator. Of course, we could go further and find the second, third, etc. orders
in perturbation.The unpleasant
 feature
 of such aseries is thatit is a
series in nth-order
≈ ≈≈ ≈≈≈ (n)
commutators, H̃1 (t ), H̃1 (t ), H̃1 (t ), . . . H̃1 (t ), π̂0 . . . , which cannot
be expressed in such convenient way as the higher-order time-ordered products of
field operators. But as the basis for linear theory it is all right.
120 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

The shift of the average value of an operator A in the first order in perturbation
(linear response) is given by
 
A(t) = tr π̃(t)Ã(t)
t   
= −i dt ≈ tr H̃1 (t ≈ ), π̂0 Ã(t)
−∇
t  
= −i dt ≈ Ã(t), H̃1 (t ≈ ) (3.69)
−∇

(here →· · · √ ≡ tr(π̂0 · · · )).


It is usually convenient to write the perturbation in the form

H1 (t) = − f (t)B, (3.70)

where the c-number function f (t) is the so-called generalized force, and B is some
operator defined for the system under consideration. This is usually described as the
perturbation being coupled to B through f (t). Examples are −Ŝ · H(t) (the spin
coupled to the external magnetic field), or − 1c ĵ · A(t) (the current coupled to the
vector potential). It is not always this easy to tell what we should consider as the
generalized force. A useful recipe is as follows. Since the only time dependent term
in (3.70) (as well as in the whole Hamiltonian H) is f (t), we can figure out the
proper expression for the generalized force if we write down the energy dissipation
of the external field in the system per unit time:

Q = Ė
= →Ḣ√
= q(t)→B√ (3.71)
∼ q(t) = − f˙(t). (3.72)

Now let us introduce the retarded Green’s function of two operators:

1   ⎪ 
 A(t)B(t ≈ ) ≤ R = A(t), B(t ≈ ) α t − t ≈ . (3.73)
i
This construction may seem a little strange, but it agrees with our earlier retarded
Green’s function: evidently, G R (t, t ≈ ) =  λ(t)λ † (t ≈ ) ≤ R . After all, we can always
rewrite the operators A(t), B(t ≈ ) in terms of the field operators, thus reducing this
retarded Green’s function to the ones we are accustomed to. This expression, though,
has more direct physical meaning: it defines the system’s response (already in terms of
the observable A(t) we are interested in) at time t to an external perturbation at earlier
moments of time, t ≈ < t (coupled to the operator B(t ≈ ). We can as well introduce
3.3 Linear Response Theory 121
⎪ 
the advanced Green’s function,  A(t)B(t ≈ ) ≤ A = − 1i [A(t), B(t ≈ )] α(t ≈ − t),
though it does not have a straight-forward physical sense.
Please notice that we no longer write tildes (∼) over the operators: they all
are assumed to be in interaction representation in relation to external perturbation,
A(t) = exp(iH0 t)A(0) exp(−iH0 t). But the only thing not included in H0 is an
external perturbation term (3.70). Therefore, the averages are to be calculated using
the perturbation series over interaction terms, and then the operator can be regarded
as taken in Heisenberg representation.
Now we can rewrite Eq. (3.69) in the form of the Kubo formula:

∇
A(t) = − dt ≈ f (t ≈ )  A(t)B(t ≈ ) ≤ R . (3.74)
−∇

This is a very transparent formula: the change in the value of the observable is
determined by the (first order of the) external force f (t ≈ ) applied at all earlier
moments of time, and the kernel of this integral operator is exactly the “AB”-Green’s
function. Generally it is a tensor, since the operators A, B don’t have to be scalars.
The equilibrium state of the system is, of course, time independent. Therefore,

 A(t)B(t ≈ ) ≤ R =  A(0)B(t ≈ − t) ≤ R =  A(t − t ≈ )B(0) ≤ R , (3.75)

and in Fourier components we find

A(ω) = − f (ω)  AB ≤ωR . (3.76)

The generalized susceptibility is defined by

A(ω)
χ(ω) = . (3.77)
f (ω)

Then
χ(ω) = −AB ≤ωR .

Examples are many: electric conductivity, ja (ω) = τab (ω)Eb (ω); magnetic suscep-
tibility, mg (ω) = χab (ω)Hb (ω); and so on. Here tensor indices a, b = x, y, z, and
Einstein’s summation rule is implied.
There are various ways of writing Kubo formulas. The above one seems quite
general. If you can write the perturbation as coupled to the same operator, the average
value of which you investigate, you will come to a more often met expression of the
Kubo formula. For example, if we are calculating the electrical conductivity, the
operator A is the current operator, ĵ. (I don’t want to bother with tensor indices here;
suppose we have a thin wire, with only one current component allowed.) On the
other hand, the external field is coupled to the system through − 1c ĵA. Since E(t) =
⎪ 
1 1c Ȧ(t), in Fourier components we rewrite the perturbation as − 1c ĵ − icω E(ω) =
122 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

i
ω ĵ E(ω). This is the correct coupling, since we want to find the system’s response
to the external electric field, not the
 vector potential.
 The Green’s function thus will

include equilibrium averages like ĵ(t) ĵ(t ) . I will not dwell on the detailed form
0
of this specific Green’s function. A general statement is, though, to be made here.
The linear response of the system to the external electric field—the nonequilibrium
current—turns out to be determined by the equilibrium correlators of the current
itself. The external field, in a sense, simply reveals these equilibrium fluctuations.
In the next subsection we will see that this is indeed the case, and we will give this
vague statement an exact form in the fluctuation-dissipation theorem.

3.3.2 Fluctuation-Dissipation Theorem

First we introduce some basic apparatus of the mathematical theory of fluctuations.


The autocorrelation function, or correlator, of an observable A is defined as
follows:
1  
K A (t, t ≈ ) = A(t), A(t ≈ ) . (3.78)
2
In this definition we take into account that the operator taken at different moments
of time may not commute with itself.
We will limit our consideration to the stationary case. This means that the average
values of all observables are time independent, and the autocorrelation function
depends only on the difference of its arguments:

1
K A (t) = →{A(t), A(0)}√ . (3.79)
2
Often it is simpler to deal with the autocovariation function,

1
K δ A (t) = →{δA(t), δA(0)}√
2
1
≡ →{A(t) − →A√, A(0) − →A√}√
2
= K A (t) − →A√2 . (3.80)

In this way we explicitly consider the fluctuations around the average value.
The Fourier transform of the correlator is called the spectral density of fluctuations:

∇
(A )ω =
2
dt eiωt K A (t). (3.81)
−∇

(It is often denoted by S(ω), but we will not use this notation here.)
3.3 Linear Response Theory 123

Fig. 3.5 Noise measurement


in a resistor

The Wiener–Khintchin theorem of the theory of random processes states that the
⎪ power of the fluctuations of A in the fre-
spectral density (3.81) gives the average
quency interval [ω, ω + ω) through A2 ω ω, and thus can be directly measured.
For example, if we are measuring the voltage fluctuations on a resistor (Fig. 3.5), this
is what the wattmeter shows.
For the equilibrium state average of the product of two operators A, B we have
the Kubo–Martin–Schwinger identity:
 
→A(t)B(0)√ = tr eε(−H) ei Ht A(0)e−i Ht B(0) = →B(0)A(t + iε)√. (3.82)

It is an evident result of the cyclic invariance of trace and a special form of the
equilibrium statistical operator. This allows us to rewrite the spectral density as

  ∇
1 
A2
= 1 + eεω dt eiωt →A(0)A(t)√. (3.83)
ω 2
−∇

In this expression enters the Fourier transform of the anticommutator, {A(t), A(0)}.
It is easy to find a like formula for the commutator:

∇   ∇
εω
([ A, A])ω ≡ dt e iωt
→[A(t), A(0)]√ = e −1 dt eiωt →A(0)A(t)√.
−∇ −∇
(3.84)
Then
1 εω
(A2 )ω = coth ([ A, A])ω . (3.85)
2 2
On the other hand, we can rewrite Eq. (3.84) in the form
124 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

Fig. 3.6 Fluctuation-dissipation


theorem in a curcuit: Johnson–
Nyquist noise

∇ ∇
−iωt
([ A, A])ω = − dte {[A(t), A(0)]} + dteiωt →[A(t), A(0)]√,
0 0

to find that
([ A, A])ω = −2  AA ≤ωR . (3.86)

We have proved the fluctuation-dissipation theorem (alias Callen–Welton for-


mula): the spectral density of fluctuations of an observable A in equilibrium is pro-
portional to the imaginary part of the generalized susceptibility of this system to a
weak external perturbation coupled to this very observable:
  εω εω
A2 = coth χ(ω) ≡ − coth  AA ≤ωR . (3.87)
ω 2 2
It is the imaginary part of susceptibility that determines the energy dissipation rate
in the system, hence the name of the theorem.

3.3.2.1 Current and Voltage Fluctuations in Linear Circuits: Nyquist’s


Theorem

Let us consider one of the most important applications of the theorem. Take the
current operator, J , in a simple electric circuit (Fig. 3.6). At frequencies low enough
(ω  c/L, where L is the size of the circuit) the current is the same throughout the
circuit and depends only on time:

→J (t)√ = J (t).

If there were an external emf in the circuit, W (t), the energy dissipation per unit
time would be

Q = JW = →J √W.

According to our previous considerations, the generalized force is then given by

f˙ = −W ; iω f (ω) = W (ω).
3.3 Linear Response Theory 125

Then we see that


J (ω) = χ(ω) f (ω),

and
J (ω) = W (ω)/Z (ω) = iω f (ω)/Z (ω),

where Z (ω) is the circuit impedance. Therefore,


χ(ω) =
Z (ω)

= ; (3.88)
R(ω) + iY (ω)
ω
χ (ω) = R(ω). (3.89)
|Z (ω)|2

Using the fluctuation-dissipation theorem, we immediately find for the current


fluctuations
  ω εω   εω
−1
J2 = R(ω) coth = ∗ Z (ω) ω coth . (3.90)
ω |Z (ω)| 2 2 2

(As it should be, the imaginary part of the susceptibility corresponds to the real part
of the impedance: the reactance).
The corresponding voltage fluctuations, W (ω) = Z (ω)J (ω), are given by
  εω
W2 = ∗Z (ω)ω coth . (3.91)
ω 2
In the classical limit (T ≤ ω) we obtain the famous Nyquist theorem:
 
J2 = 2TG(ω); (3.92)
 ω
W2 = 2TR(ω). (3.93)
ω

(G = 1/R is the circuit conductance.) It relates the equilibrium thermal noise in an


electrical circuit to its resistance. This beautiful relation was discovered experimen-
tally by Johnson and theoretically by Nyquist, hence the often-used term “Johnson–
Nyquist noise”.
126 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

3.4 Nonequilibrium Green’s Functions

3.4.1 Nonequilibrium Causal Green’s Function: Definition

In the general definition of the causal many-particle Green’s function (2.4),


 
G ρε (x1 , t1 ; x2 , t2 ) = −i T λρ (x1 , t1 )λε† (x2 , t2 )
 
≡ −itr ϕ̂T λρ (x1 , t1 )λε† (x2 , t2 ) , (3.94)

we imposed no specific limitations upon the quantum state of the system, i.e., on its
statistical operator ϕ̂. If later we had to deal with the system in its ground state (at
T = 0) or in equilibrium (at T = 0), it was because our elegant diagram technique
based on perturbation theory essentially used specific properties: the uniqueness of
the ground state (up to the phase factor) in the first case, and formal equivalence of the
equilibrium statistical operator and the (analytically continued) evolution operator
in the second.
Extremely powerful with regard to the thermodynamic properties, these
approaches are evidently unable to cope with the kinetic problems, which are very
important in condensed matter theory. For example, they cannot describe the response
of the system to time-dependent external perturbation, even at zero temperature
(since such a perturbation will lead to energy pumping into the system, which can
be transferred to some excited state). Linear response theory can answer this sort of
question, but only in the first order, while there is no convenient way to write the
higher-order terms in this method.
Nevertheless there exists a possibility to develop a diagram technique for the
nonequilibrium Green’s function, if we take into account several types of Green’s
functions simultaneously [7]. Then the Green’s function becomes a matrix. This is
the price we pay for universality, and the reason why this technique introduced by
Keldysh did not replace the other two in their respective fields.
We define the causal nonequilibrium Green’s function as follows:
 
G −−
ρε (x1 , t 1 ; x 2 , t 2 ) = −i |T λ ρ (x1 , t 1 )λ †
ε (x 2 , t 2 )| . (3.95)

Here |√ denotes an arbitrary quantum state (in Heisenberg representation) of the
system under consideration.2

2 Statistical averaging can be included at any stage of the calculations without any problem, since

tr(ϕ̂A) ≡ W →|A|√


is a linear operation.
3.4 Nonequilibrium Green’s Functions 127

This is essentially the same expression as the one we introduced in Eq. (2.10),
except that now |√ generally is not the ground state. What are the consequences of
this difference?
Let us try to repeat the steps that have led us to the generating formula (2.55) of the
perturbation theory for Green’s function. We express the Heisenberg field operators,
λ, through the interaction representation operators, :

λ(x, t) = U † (t)e−i H0 t (x, t)ei H0 t U(t).

Then note that the Heisenberg state |√ is related to the corresponding state vector
in the interaction representation, |(t)√ I , by

ei H0t U(t)|√ = |(t)√ I ≡ S(t, −∇)|0 √. (3.96)

Here the S-matrix (in the interaction representation given by Dyson’s expansion,
see Chap. 2) relates the actual state |(t)√ I to the presumably unperturbed one
|(−∇)√ I ≡ |0 √, i.e., to the state of the system of free Particles.
This allows us to rewrite Green’s function as3

G −− (x1 , t1 ; x2 , t2 )
= −i→(∇)|T S(∇, −∇)(x1 , t1 ) † (x2 , t2 )|0 √. (3.97)

The fundamental difference between this expression and the corresponding result
for the ground state average (2.55) is that the state at t = ∇, |(∇)√, is by no means
simply related to the unperturbed state at t = −∇, |0 √. The only way to bring it
back is to use a straightforward formula,

|(∇)√ = S(∇, −∇)|0 √. (3.98)

Then we obtain the basic formula for the nonequilibrium Green’s function,

G −− (x1 , t1 ; x2 , t2 )
= →0 |S † (∇, −∇)T S(∇, −∇)(x1 , t1 ) † (x2 , t2 )|0 √. (3.99)

Now we substitute in this expression the Dyson expansion for the S-matrix (1.90,
1.91),

3 We omit the spin indices for clarity.


128 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

Fig. 3.7 Time ordering along the Keldysh contour


⎧ ⎫
⎨ i ∇ ⎬
S(∇, −∇) = T exp − dψ W(ψ ) ; (3.100)
⎩  ⎭
−∇
⎧ ∇ ⎫
⎨i  ⎬
S † (∇, −∇) = T̃ exp dψ W(ψ ) . (3.101)
⎩ ⎭
−∇

The anti time ordering operator T̃ arranges the operators in the opposite order to that
of the T -operator.
It can be shown that the main features of theory are kept intact; namely, (1)
Wick’s theorem is still valid, so that we can express anything in terms of pairings in
the unperturbed state, and (2) the vacuum (disconnected) diagrams are canceled.

3.4.2 Contour Ordering and Three More Nonequilibrium Green’s


Functions

We see from (3.99) that the main difference between the present case and the zero
temperature technique is the presence of two time orderings in the same formula:

G −− (x1 , t1 ; x2 , t2 )
⎡∇ ⎡∇
−∇ dψ W (ψ ) −∇ dψ W (ψ )
i i
= (0 |T̃ e  T e−  (x1 , t1 ) † (x2 , t2 )|0 √. (3.102)

It appears due to the necessity to return back in time, to the initial unperturbed state,
before the interaction was turned on, since we don’t know what the state will be
like after it is finally turned off. Formally this can be presented as a single time
ordering along the contour running from −∇ to ∇ and back again (Fig. 3.7). (The
time ordering along a contour that returns to −∇ instead of running to ∇ was first
suggested by Schwinger.) The operators standing to the right of the T -operator in
(3.102) belong to the right-going (−), the other to the left-going (+) branch of the
contour. The + operators always stand to the left of the—ones.
Evidently, if we use Wick’s theorem, we obtain four types of pairings, namely
(± denotes the branch)
† †
< T −  † >, < T̃ + + >, < +  † >, < + − > . (3.103)
3.4 Nonequilibrium Green’s Functions 129

The first of these gives the causal Green’s function; the rest we have not met before.
The diagram technique in nonequilibrium thus includes four Green’s functions, which
we will define as follows4 :
 
G −− (1, 2) = −i T λ(1)λ † (2) ; (3.104)
 
G +− (1, 2) = −i λ(1)λ † (2) ; (3.105)
 
G −+ (1, 2) = ±i λ † (2)λ(1) ; (3.106)
 
G ++ (1, 2) = −i T̃ λ(1)λ † (2) . (3.107)

The (−+) function is directly proportional to the density of real particles in the
system.

3.4.2.1 Relations Between Different Nonequilibrium Green’s Functions


The Green’s functions defined above are not independent, since as follows from their
definition,

G −− (1, 2) + G ++ (1, 2) = G −+ (1, 2) + G +− (1, 2). (3.108)

Then, we have
⎪ ∝
G −− (1, 2) = − G ++ (2, 1) ;
⎪ ∝
G −+ (1, 2) = − G −+ (2, 1) ; (3.109)
⎪ ∝
G +− (1, 2) = − G +− (2, 1) . (3.110)

If we define the retarded and advanced Green’s functions as before, they can be
expressed as follows:

G R (1, 2) = G −− (1, 2) − G −+ (1, 2) = G +− (1, 2) − G ++ (1, 2); (3.111)


G A (1, 2) = G −− (1, 2) − G +− (1, 2) = G −+ (1, 2) − G ++ (1, 2). (3.112)

3.4.2.2 Nonequilibrium Green’s Function for the System of Noninteracting


Particles

The unperturbed Green’s functions satisfy the following linear differential equations:

4 The upper sign for the Fermi system.


130 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications
⎣ #
φ
i − E(x1 ) G −− −1 −−
0 (1, 2) ≡ (G0 ) (1)G 0 (1, 2)
φt1
= δ(1 − 2); (3.113)
−1 −+
(G0 ) (1)G 0 (1, 2) = 0; (3.114)
(G0 ) (1)G +−
−1
0 (1, 2) = 0; (3.115)
(G0 ) (1)G ++
−1
0 (1, 2) = −δ(1 − 2). (3.116)

Suppose that the system of noninteracting particles (ideal quantum gas) is in a


stationary uniform state. Then it can be characterized by the (nonequilibrium) distri-
bution function in the momentum space, n p . Then we easily get useful expressions
for all Green’s functions:
1 ⎪ 
G −−
0 (p, ω) = ± 2∂in p δ ω − (θp − μ) ; (3.117)
ω − (θp − μ) + i0
−+ ⎪ 
G 0 (p, ω) = ±2∂in p δ ω − (θp − μ) ; (3.118)
⎪ 
G +−
0 (p, ω) = −2∂i(1 ↑ n p )δ ω − (θp − μ) ; (3.119)
1 ⎪ 
G ++
0 (p, ω) = − ± 2∂in p δ ω − (θp − μ) ; (3.120)
ω − (θp − μ) − i0
1
G 0R (p, ω) = ; (3.121)
ω − (θp − μ) + i0
1
G 0A (p, ω) = ; (3.122)
ω − (θp − μ) − i0
⎪ 
G 0K (p, ω) = −2∂i(1 ↑ 2n p )δ ω − (θp − μ) . (3.123)

Here we have introduced one more linear combination of G ±± , the so-called Keldysh
Green’s function:

G K (1, 2) = G −+ (1, 2) + G +− (1, 2) = G −− (1, 2) + G ++ (1, 2). (3.124)

Note that the retarded and advanced Green’s functions don’t contain any infor-
mation on the state of the system, which is given solely by the Keldysh Green’s
function. Since they are given by linear combinations of G ±± , we can use them as
three independent functions, instead of four dependent G ±± . As we will see later,
in many cases the set (G R , G A , G K ) indeed is simpler to use than the initial set
(G −− , G −+ , G +− , G ++ ).
3.4 Nonequilibrium Green’s Functions 131

3.4.3 The Keldysh Formalism

The rules of the Keldysh diagram technique directly follow from the expansion of
S-matrices in Eq. (3.99).5 First consider the rules for the scalar interaction with an
external field W (x, t).
The only difference from the zero-temperature case will be that each electron
or interaction line bears at its ends ±-indices, which show to which branch of the
Keldysh contour the corresponding operator belongs; besides, the “+”-vertices are
multiplied by +i instead of −i, since they originate from the S † -operator. This can
be taken into account in an elegant way, if we introduce the matrix Green’s function
in the Keldysh space:
⎣ #
G −− (1, 2) G −+ (1, 2)
Ĝ(1, 2) = (3.125)
G +− (1, 2) G ++ (1, 2)

and the matrix of the external potential


⎣ #
−i W (1) 0
− i Ŵ (1) = = −iW (1)ψ̂3 , (3.126)
0 iW (1)

where ψ̂3 is one of the Pauli matrices.


This allows us to gather all the diagrams that differ only by the arrangement of
the ±-indices into a single one, which is understood as a single matrix expression,
for example that shown in Fig. 3.8 (the integration over intermediate coordinates and
times is implied):

i Ĝ 1 (1, 2) = i Ĝ 0 · i Ŵ · i Ĝ 0 (1, 2); (3.127)


⎣ −− #
G 1 (1, 2) G −+ 1 (1, 2)
i
G +− ++
1 (1, 2) G 1 (1, 2)
⎛ ⎞
i G −− −−
0 (−i W )i G 0 (1, 2) i G −− −+
0 (−i W )i G 0 (1, 2)
⎜ +i G −+ (+i W )i G +− (1, 2) + i G −+ ++ ⎟
⎜ 0 0 0 (+i W )i G 0 (1, 2) ⎟
=⎜ ⎜
⎟.
⎟ (3.128)
⎝ i G +− (−i W )i G −− (1, 2) i G 0 (−i W )i G 0 (1, 2) ⎠
+− −+
0 0
+i G ++ +−
0 (+i W )i G 0 (1, 2) + i G ++ ++
0 (+i W )i G 0 (1, 2)

The Feynman rules for some cases of interest are given in Table 3.2. Note that four
differential equations for the unperturbed Green’s functions (3.113)–(3.116) are now
gathered in one elegant expression:

(G0 )−1 (1)Ĝ 0 (1, 2) = ψ̂3 δ(1 − 2). (3.129)

5 We follow the notation of [10].


132 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

Fig. 3.8 First-order diagrams for the nonequilibrium Green’s function

Table 3.2 Feynman rules for the matrix Ĝ (after [10])

iĜ(12) Matrix electron Green’s function

iĜ0 (12) Unperturbed matrix Green’s function

−iŴ (1) ≡ −iW (1)τ̂3 External potential

−iU (1, 2) Scalar electron - electron interaction


τ̂3 Bare electron - electron vertex

iD̂(1, 2) Phonon propagator


φ(1)φ(2) φ(2)φ(1)
=
φ(1)φ(2) T̃ φ(1)φ(2)

iD̂0 (1, 2) Unperturbed phonon propagator

ig(τ̂3 )ik δij Bare electron–phonon vertex


The integration over all intermediate coordinates and times
and summation over dummy spin indices is implied

We can make use of the relations between the G ±± functions to obtain another
representation of the same formalism. If we perform the transformation

1
Ĝ ∞ Ḡ = (ψ̂0 − i ψ̂2 )ψ̂3 Ĝ(ψ̂0 − i ψ̂2 )† , (3.130)
2
we come to Green’s function in the following form (check this!):
⎣ #
GR GK
Ḡ = . (3.131)
0 GA

The equation of motion for this matrix is


3.4 Nonequilibrium Green’s Functions 133

Table 3.3 Feynman rules for the matrix Ḡ (after [10])

iḠ(12) Matrix electron Green’s function

−iW̄ (1) ≡ −iW (1)τ̂0 External potential

−iU (1, 2) Scalar electron - electron interaction

γijk Bare electron - electron vertex (absorption)

γ̃ijk Bare electron - electron vertex (emission)

iD̄(1, 2) Phonon progator

−igγijk Bare phonon vertex (absorption)

−igγ̃ijk Bare phonon vertex (emission)



γij1 = γ̃ij2 / 2
2 1

γij = γ̃ij =( τ1 )ij / 2
The integration over all intermediate coordinates and times
and summation over dummy spin indices is implied

(G0 )−1 (1)Ḡ 0 (1, 2) = δ(1 − 2). (3.132)

The Feynman rules for this representation can be obtained from the initial ones if we
use the transformation inverse to (3.130). They are given in Table 3.3.

3.5 Quantum Kinetic Equation

The Keldysh Green’s functions often contain more information than we need. Indeed,
as we have seen, of the three independent components of the matrix Ḡ, G R and
G A contain only the information about the dispersion relation in the system; all
information about the occupation of the states is contained in the component G K
(plus again the information about the dispersion relation). In many cases we are more
interested in the kinetics, i.e., in how the states are occupied, than in what exactly
these states are (after all, we can use some approximate relation θ(p) and forget about
them). Since the matrix theory is rather awkward, this puts a premium on some sort
of reduced description, which would let us get rid of nonessential information. In
134 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

this way we will derive a quantum kinetic equation for the quantum analogue of the
distribution function of classical statistical mechanics.
Defining the statistical distribution function in a quantum limit is a nontrivial prob-
lem. The uncertainty relations prohibit the use of the classical distribution function
itself, f (r, p, t). Instead, we can introduce the Wigner function,
 * ⎣ # ⎣ #+
−ip·ξ/ ξ ξ
f W
(r, p, t) ≡ d ξe
3
λ †
r − ,t λ r + ,t (3.133)
2 2

The Wigner function has many useful properties, but being positively determined
is not one of them: f W (r, p) can take negative values in some regions of the phase
space. This is the price we have to pay for our wish to have some definite rela-
tion between momentum and coordinate in quantum mechanics! Nevertheless, if we
average f W (r, p) over the scale of h d (d is the dimensionality of the system), this
“roughened” distribution function coincides with the classical distribution function
f (r, p) up to the terms of higher order in h:

d d pd d r W
f (r, p, t) = f (r, p, t) + o(h d ). (3.134)
hd
h

According to the principle of correspondence, this means that f W is indeed the


proper quantum analogue to the classical distribution function. A detailed discussion
of the Wigner distribution function and related formalism can be found in [2].
Note that f W (r, p, t) is obtained from the (−+)-component of the nonequilib-
rium Green’s function with coinciding temporal arguments by a specific Fourier
transformation. Therefore, we can derive the equation for f W (r, p, t) (quantum
kinetic equation) starting from the corresponding component of the matrix Dyson’s
equation (for G −+ or G K ). This can be done using gradient expansion. First,
we present any Green’s function G(r1 , r2 ) = →λ(r1 )λ † (r2 )√ as G(R, r), where
R = r1 +r2 ; r = r1 − r2 , and assume that the properties of the system “on a big
2

scale,” R, change slowly compared to the characteristic quantum length, ν B (e.g.,


Fermi wavelength, in the case of the Fermi system). The latter dominates the depen-
dence on the small scale, given by r. You see that Wigner functions are very well
suited for such an approach, since they by definition depend on R (“slow” variable)
and p, the latter being the conjugate of r (“fast” variable). What we are going to do
is to Taylor expand everything in gradients ↔R , ↔r and derive a simpler equation.
We will see how it works in a moment.

3.5.1 Dyson’s Equations for Nonequilibrium Green’s Functions

We start from the exact matrix Dyson’s equation,

ˆ Ĝ;
Ĝ = Ĝ 0 + Ĝ 0  (3.135)
(Ḡ = Ḡ 0 + Ḡ 0 +  ¯ Ḡ); (3.136)
3.5 Quantum Kinetic Equation 135

or (the conjugated equation)

ˆ Ĝ 0 ;
Ĝ = Ĝ 0 + Ĝ  (3.137)
¯ Ḡ 0 ).
(Ḡ = Ḡ 0 + Ḡ  (3.138)

Here we introduce the self energy matrix,


⎣ −− −+ #
ˆ  
= ; (3.139)
 +−  ++
⎣  R K #
¯ =  A . (3.140)
0 

Only three components of the former matrix are independent, since as is easy to
see, ⎪ 
 −− +  ++ = −  −+ +  +− . (3.141)

The independent combinations are given by

 K =  −− +  ++ ;
 R =  −− +  −+ ; (3.142)
 A =  −− +  +− .

Acting on Dyson’s equation from the left (or from the right on its conjugate) by
the operator G0−1 , we find the Keldysh equations, equivalent to the set of integro-
differential equations for the component Green’s functions:
 
G0−1 − ψ̂3 
ˆ · Ĝ(1, 2) = ψ̂3 δ(1 − 2); (3.143)
 
Ĝ(1, 2) · G0−1 − ˆ ψ̂3 = ψ̂3 δ(1 − 2), (3.144)

or in the “barred” representation,

(G0 −)¯ · Ḡ(1, 2) = δ(1 − 2); (3.145)


 
Ḡ(1, 2) · G0−1 −¯ = δ(1 − 2). (3.146)

3.5.2 The Quantum Kinetic Equation

Now we can obtain the quantum kinetic equation. We will use the “hat” representation
as more straightforward.
136 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

Set T = t1 +t
2 , ψ = t 1 − t2 ; R =
2 x1 +x2
2 ,ξ = x1 − x2 . Then Wigner’s function can
be written as follows:
∇
dω −+
f W (R, p, T ) = −i G (R, T ; p, ω); (3.147)
2∂
−∇
−+
G (R, T ; p, ω)
 ⎣ #
ξ ψ ξ ψ
= d 3 ξdψ eiωψ −ip·ξ G −+ R + , T + ; R − , T − . (3.148)
2 2 2 2

Taking the (−+)-component of Eqs. (3.143), (3.144), we find:


  
⎪ 
G0−1 (1)G −+
(1, 2) = d 4 3  −− (1, 3)G −+ (3, 2) +  −+ (1, 3)G ++ (3, 2)
(3.149)
 ∝ 
⎪ 
G0−1 (2)G −+ (1, 2) = − d 4 3 G −− (1, 3) −+ (3, 2) + G −+ (1, 3) ++ (3, 2) .

After subtracting the first line from the second and noticing that
 ∝   ⎣ #
φ i
G0−1 (2) − G0−1 (1) = −i − ↔R · ↔ξ , (3.150)
φT m

we can integrate the equation over dω


2∂ and see that the resulting equation takes the
standard form6 :
⎣ #
φ p
+ · ↔R f W (R, p, T ) = I(R, p, T ). (3.151)
φT m

This is the quantum kinetic equation. Its right-hand side in the quasiclassical
 limit
must yield the (quasli)classical collision integral, St f W (R, p, T ) . Generally, there
appears one more term, which is responsible for the renormalization of the energy
spectrum of the quasiparticles and can be merged with the dynamic left-hand side
of (3.151). This is consistent with the fact that the imaginary part of the self energy
(entering the right-hand side of (3.151)) determines the lifetimes of the quasiparticles

6 In the most general case, we could introduce the distribution function of all four conju-
gated variables, f (R, p, T, θ), which in the presence of the external potential obeys the equation
(Footnote 6 continued)

⎣ #
φ p φU (R, T ) φ
+ · ↔R − ↔R U (R, T ) · ↔p + f (R, p, T, θ)
φT m φT φθ
= I[ f (R, p, T, θ)].
3.5 Quantum Kinetic Equation 137

(in our case, through the collision integral governing the in- and outscattering rates),
while its real part changes their dispersion law (thus modifying the dynamical part
of the kinetic equation).
But in the quasiclassical limit the corrections to the dispersion law are negligible,
and only the collision integral survives. To show this, we take into account that then
we can write

d 4 3(1, 3)G(3, 2)
 ⎣ #
X1 + X3 X1 − X3 X1 + X3 X1 − X3
= d 3 4
+ , −
2 2 2 2
⎣ #
X3 + X2 X3 − X2 X3 + X2 X3 − X2
×G + , −
2 2 2 2

≥ d 4 3 (X + (X 1 − X 3 )/2, X − (X 1 − X 3 )/2)

× G (X + (X 3 − X 2 )/2, X − (X 3 − X 2 )/2) .

ˆ we find that the collision


Using the identities for different components of Ĝ and ,
integral takes the following form:

∇
  dω ⎪
St f w (R, p, T) = −  −+ (R, T ; p, ω)G +− (R, T ; p, ω) (3.152)
2∂
−∇

+  +− (R, T ; p, ω)G −+ (R, T ; p, ω) .

In the quasiclassical limit, due to the slow variation of the distribution function,
we can substitute into the previous expression the values of Green’s function for the
uniform stationary case [Eqs. (3.117)–(3.123)], changing there np to f W (R, p, T ):
   
St f w (R, p, T) = i −+ (R, T ; p, θp − μ) 1 − f W (R, p, T ) (3.153)
+ i +− (R, T ; p, θp − μ) f W (R, p, T ).

The specific form of the collision integral is determined by the interaction, which
enters the self energy functions.

3.6 Application: Electrical Conductivity of Quantum Point


Contacts

As an example of how the above formalism can be applied, we discuss here the quan-
tum conductivity of quantum point contacts (QPC). Point contacts are the contacts
between two conductors, whose dimension is much less than the inelastic scattering
138 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

Fig. 3.9 Quantum point contact

length of carriers, l j (Fig. 3.9). The size of the quantum point contact, moreover, is
of the order of the Fermi wavelength or smaller.
In the last 10 years, quantum point contacts were realized in highly mobile 2-
dimensional electron gas (2DEG) of semiconductor heterostructures by “squeezing”
it from under the gate electrodes by applied negative voltage. In this case, there are
two benefits: the size of the contact can be changed at will, and—because ν F in
2DEG is large, about 400⇔ A—they really are quantum point contacts. On the other
hand, three-dimensional metallic QPC of atomic size are now being created using a
variety of experimental techniques.
We begin with the three-dimensional case, following the analysis of [15]. The
model of such a contact is presented in Fig. 3.10: it is the round opening with radius
a in a thin planar dielectric barrier  separating two conducting half-spaces.
The matrix Green’s function in this system satisfies the Keldysh equation (3.143),
where the electron–phonon interaction enters the self energy operator. (We consider
electron–phonon interactions as the only interactions present.)
The geometry of the system imposes specific boundary conditions. Far from the
contact, the electron gas of either bank does not feel the presence of the contact at
all and is in equilibrium, so that the Wigner’s distribution function of the electron
(related to the (−+)-component of the Keldysh matrix Green’s function by (3.147))
must satisfy the boundary conditions
,
1, z > 0,
lim f W
(r, p) = n p,τ , τ = (3.154)
r ∞∇ 2, z < 0,

where
1
n p,τ = θp −μτ (3.155)
exp T +1

is the equilibrium distribution. The chemical potentials in the banks of the contact
are biased by the driving voltage
3.6 Application: Electrical Conductivity of Quantum Point Contacts 139

Fig. 3.10 Model of three-dimensional point contact

μ1 − μ2 = eV. (3.156)

The impenetrability of the dielectric barrier  for the electrons is taken into
account by setting

Ĝ(1, 2)|r1 ∈ = Ĝ(1, 2)|r2 ∈ = 0. (3.157)

The density of the electric current in the system is given by


⎣ # -
e φ φ -
j(r) = − G −+
(1, 2)-- ; (3.158)
m φr2 φr1 1=2

d3 p
j(r) = 2e v f W (r, p). (3.159)
(2∂)3

Here v = p/m.

3.6.1 Quantum Electrical Conductivity in the Elastic Limit

In the absence of collisions (l ∞ ∇) the motion of the electrons is nondissipative.


It can be described with a complete system of wave functions χkτ (r) of electrons
140 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

with momentum k, impinging against the point contact from the right (τ = 1) and
from the left (τ = 2). They are the solution of the Schrödinger equation

χkτ (r) = θk χkτ (r) (3.160)

with the boundary conditions


  ⎪ 
χkτ (r)|r ∞∇ = (−1)τ eikr − eikR r α −(−1)œ z , (3.161)
χkτ (r)|r∈Σ = 0, (3.162)

where we take k z > 0, and k R is the wave vector with the opposite to k z-component.7
In (r, ω)-representation the (unperturbed) retarded and advanced Green’s func-
tions can be expressed as

 χkτ (r1 )χ∝ (r2 )


G R(A) (r1 , r2 , ω) = kτ
. (3.163)
ω − θk ± i0

This formula is evidently a version of the Källén–Lehmann representation for unper-


turbed Green’s functions. It already has correct analytic properties, and can be
checked directly by substituting
. it in (2.19) and using the completeness of the set of
one-particle eigenstates, kτ χkτ (r1 )χ∝kτ (r2 ) = δ(r1 − r2 ).
Making use of the relations (3.117)–(3.123), we now build the solution of the
Keldysh matrix equation (3.143) with zero self energy (i.e., in the elastic limit) with
proper boundary conditions, in the following form:

2 
 d 3k
Ĝ 0 (1, 2) = i α(k z )n̂ kτ (t1 − t2 )e−iθk (t1 −t2 )
(2∂)3
τ=1
× χkτ (r1 )χ∝kτ (r2 ). (3.164)

The matrix n̂ kτ (t) has the form


⎣ #
n kτ − α(t) n kτ
n̂ kτ (t) = . (3.165)
n kτ − 1 n kτ − α(−t)

Substituting (3.164) into the relation (3.158) and carrying out the integration over
any surface enveloping the point contact from the right (z > 0) or the left (z < 0),
we obtain the following expression for the total current in the elastic limit:

7 We can neglect the effect of the electric field in our considerations due to the fact that the potential

drop in the vicinity of the contact is (a/r D ) times smaller than the total potential drop, V . Here r D
is the screening length, which is large enough in semiconductors to provide the condition r D ≤ a.
3.6 Application: Electrical Conductivity of Quantum Point Contacts 141
 
2em d 3k ⎪ 
I (V ) = α(k z )(n k2 − n k1 ) dS χ∝k2 ↔χk1 . (3.166)
 (2∂)3
S1

This expression can be rewritten in an alternative form, namely as



d 3k
I (V ) = α(k z )(n k2 − n k1 )Jk , (3.167)
(2∂)3
⎡ ∝
where Jk = 2em  α(k z ) S1 dS(χk2 ↔χk1 ) is the partial current carried through the
contact by the electron incident from infinity with the momentum k. This expression
is very useful in the calculation of the electrical conductance of a mesoscopic system
in the elastic limit.

3.6.1.1 Current in the Quasiclassical Limit

In the limit k F a ≤ 1 far from the contact, the wave function has the asymptotic form
(for example, for τ = 2)
⎣ #
iaeikr kz
χk2 (r) = − kz + J1 (qa), z > 0, (3.168)
2qr r

where q = |k|| − kr|| /r |, here k|| is the component of the vector parallel to the
dielectric barrier; J1 (x) is a Bessel function.
The contact current is thus given by

I (V ) = I1 (V ) − I2 (V ), (3.169)

(k Fτ a)−9/2
Iτ (V ) = Iτ(0) 1 − √ cos(4k Fτ a − ∂/4) . (3.170)
64 2∂

Here

|e|ma 2 μ2τ
Iτ(0) = (3.171)
4∂3
is the point contact current in the classical limit. In the linear response regime the latter
formula yields the expression for the classical point contact resistance (Sharvin’s
resistance) in the form
e2 SS F
R0−1 = , (3.172)
h3

where S = ∂a 2 , S F = 4∂ p 2F are the areas of the contact and Fermi surface respec-
tively. Equations (3.169), (3.170) show that in the limit k F a ≤ 1, only small correc-
tions to this resistance appear, oscillating with k F (that is, with driving voltage).
142 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

A two-dimensional version of the above model is an isolating line (or, more


exactly, two rays: say, −∇ < x < −d/2 and d/2 < x < ∇), separating two
conducting half-planes (y < 0, y > 0) [22]. The 2D analogue of (3.167) is, evidently,

d 2k
I (V ) = Jk α(k y )(n k2 − n k1 ). (3.173)
4∂ 2

By standard methods of Green’s functions for the classical wave equation, described,
e.g., in Chap. 7 of [8], it can be expressed through the values of the wave function in
the opening, χk (x, 0):

d/2 d/2
ek 2  χk (x, 0)χk (x, 0)∝
Jk = dx dx≈ J1 (k(x − x ≈ )). (3.174)
m∝ k(x ≈ − x)
−d/2 −d/2

Here J1 (z) is the Bessel function. In the quasiclassical limit we can take χk (x, 0) ≥
eik x x , and find
⎣ #
kF de2 sin 2k F d 1
I (V ) = G(d)V = +V − . (3.175)
∂ ∂ 2∂k F d 4
 
−1 2
We again found an analogue to Sharvin resistance R0,2D = ∂e k F∂d plus small
oscillating corrections. As we will see shortly, this is not a purely academic exercise.
Now a natural question arises: What is Sharvin resistance?

3.6.2 Elastic Resistance of a Point Contact: Sharvin Resistance,


the Landauer Formula, and Conductance Quantization
Let us return to the picture presented in Fig. 3.9. We apply a voltage across a point
contact. The carriers flow through it, and since the contact has finite size, there will
be finite current I at any finite voltage V , which means finite contact resistance,
Sharvin resistance R0 . Then the energy dissipation rate is I V = I 2 R. But the size
of the contact is less than li , inelastic scattering length, therefore the carriers cannot
lose energy in the contact!
This apparent paradox is resolved if we ask what happens at distances larger than
li from the contact. At infinity we have equilibrium, at differing chemical potentials
(Eq. 3.154) to the right and to the left of the contact. On the way to infinity from
the contact, which is somewhat farther than l j , electrons will relax to one of those
equilibrium distributions, due to various inelastic processes. Thus, dissipation will
occur far away from the contact; and its rate is nevertheless determined by the contact
resistance, R0 , which we calculated in apparent neglect of any inelastic scattering!
This would be strange indeed, but actually we did take inelastic processes into
account, when we postulated the boundary conditions at infinity (3.154). They may
3.6 Application: Electrical Conductivity of Quantum Point Contacts 143

seem self-evident, but they imply that all electrons impinging at the contact have
equilibrium distribution—are completely thermalized (and thus have no “memory” of
their previous history). This thermalization is vital for the theory and can be achieved
only if there is sufficiently strong inelastic scattering in the system. Its details are,
though, irrelevant, since as long as li exceeds the contact size, the dissipation rate is
determined by Sharvin resistance.
The point contact is thus a very fine example of the Landauer formalism, a power-
ful tool in transport theory of small systems. Landauer considered a one-dimensional
(1D) wire a scatterer, its quantum-mechanical transition and reflection amplitudes
being t and r respectively, |t|2 + |r |2 = 1, and asked a question:
What is its electrical resistance? To answer this question, he connected with this
wire two equilibrium electron reservoirs (that is, systems containing vast numbers
of electrons at equilibrium, and with effective energy and momentum relaxation
mechanisms) at differing chemical potentials, μ1 − μ2 = eV (Fig. 3.9). Assuming
that once leaving the wire for a reservoir, an electron never returns (or, more exactly,
immediately thermalizes)—which is an exact analogue to our boundary condition
(3.154)—it is easy to write down the current (since it is the same in all of the wire,
we can calculate it to the right of the scatterer):
  
dk
I (V ) = 2e v(k) n 1 (k)|t|2 − n 2 (k)(1 − |r |2 )
2∂

2e 1
= |t| 2
dθ v(θ)(n F (θ − μ − eV ) − n F (θ − μ))
2∂ v(θ)
2e2 2
≥ |t| V. (3.176)
h

Here v(θ) = φφθ


k is the velocity of an electron, and the factor of 2 comes from spins.
Had we several 1D wires in parallel, the currents would simply add up. Thus we come
to the Landauer formula for electrical conductance of a quantum wire (or contact):

2e2 
N⊥
1
G≡ = |ta |2 . (3.177)
R h
a=1

The quantum resistance unit h/(2e2 ) (which is, evidently, 137∂/3 × 10−10 s/cm, or
approximately 13 k in more convenient units) is the same as appears, e.g., in the
quantum Hall effect. The sum is taken over so-called quantum channels. It is proper
to say that 2e2 / h is a conductance of an ideal quantum channel.
How this is related to Sharvin resistance? We had previously

−1 e2 ∂a 2 4∂ p 2F
R0,3D =
h3
2e 2
= (2∂a/ν F )2 ;
h
144 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

−1 e2 k F d
R0,2D =
∂ ∂
2e2 2d
= .
h νF

We see that in both cases the conductance is really given by 2e2 / h times approx-
imately N⊥ ≥ (a/ν F )dim , where a is the size of the opening and dim its dimen-
sionality. (There is no additional scattering, so |t|2 = 1.) In this case the number of
channels tells us how many electrons at the Fermi surface (because they are carrying
current) can squeeze through the opening simultaneously, given the uncertainty rela-
tion which keeps them ≥ν F apart. But thus defined the number of channels is fuzzy:
the conductance, as we see, changes continuously with the size of the opening.
The situation changes dramatically if instead of an opening in an infinitely thin
wall, we consider a long channel that smoothly enters the reservoirs. This system
is more like a quantum wire of Landauer formula, and we should expect that as
we change the width of the wire, the number of modes changes by 1, so that the
conductance is quantized in units of 2e2 / h. This quantization was indeed observed
in 2D quantum point contacts [19, 20], which we have mentioned above. The beauty
of the thing is that in this system the size of the contact can be continually changed by
changing the gate voltage, and it turns out that the conductance changes by 2e2 / h-
steps. (Unfortunately, the accuracy of these steps is far inferior to those of quantum
Hall effect.)
To understand this, let us return to (3.167). The partial current Jk is the current
carried through the opening by a particle incident from infinity. In a smooth
(adiabatic) channel (with diameter d(z) slowly varying with longitudinal coordi-
nate az) we can approximately present such a wave function in factorized form,
χk,a (π, z) = κa (π; z)eikz , where now k is the longitudinal momentum, π is the
transverse coordinate, and a labels transverse eigenfunctions (which in the 2D case
can be, e.g., ≥ sin(∂aπ/d(z))). The channel is effectively presented as a set of 1D
“subbands”, playing the role of Landauer quantum wires. It is almost self-evident
that only if the energy of transverse motion in the narrowest pan of the channel,
≥2 a 2 /(2m ∝ dmin
2 ), is less than the Fermi energy, can the corresponding ath mode

participate in conductivity. Otherwise the particle will be reflected back to the initial
reservoir. Here indeed, the number of conducting modes is determined by how many
of them can squeeze through the narrowest part of the channel [14].
This picture should naturally hold in the three-dimensional case as well, with the
only complication that now there may occur multiple steps, n × 2e2 / h, due to acci-
dental degeneracy of different transverse modes [11]. Such conductance quantization
was observed in 3D metal point contacts—mechanically controllable break-junctions
[16] and scanning tunneling microscopy (STM) devices [18].
For further reading on the Landauer formalism and transport in point contacts and
other mesoscopic devices I refer you to [3, 4, 9], and references therein.
3.6 Application: Electrical Conductivity of Quantum Point Contacts 145

3.6.3 The Electron–Phonon Collision Integral in 3D Quantum


Point Contact

Now let us return to the interactions. As we have said above, generally the righthand
side of the quantum kinetic equation (3.151) yields not only the collision integral,
but also the renormalization effects. Therefore, we will call its right-hand side the
“collision integral”:

Iph (r, p) = d 3 ξe−ipξ I ph (r + ξ/2, t; r − ξ/2, t), (3.178)

where


Iph (1, 2) = − d 4 3  −+ (1, 3)G++ (3, 2) +  −− (1, 3)G−+ (3, 2)

+ G −− (1, 3) −+ (3, 2) + G −+ (1, 3) ++ (3, 2) . (3.179)

In the lowest order in the electron–phonon interaction the self energy components
are given by the expressions (see Table 3.2)

ˆ =
 (3.180)

 −+ (1, 3) = −i G −+ +−
0 (1, 3)D0 (3, 1);
 +− (1, 3) = −i G +− −+
0 (1, 3)D0 (3, 1);
−− −− −−
 (1, 3) = i G 0 (1, 3)D0 (3, 1);
 ++ (1, 3) = i G ++ ++
0 (1, 3)D0 (3, 1). (3.181)

(We can take g = 1 by redefining the phonon operators.)


It is more convenient to present Iph (1, 2) as
 
dω  −+
Iph (1, 2) = − d 3 r3  (r1 , r3 ; ω)G ++ (r3 , r2 ; ω)
2∂
+  −− (r1 , r3 ; ω)G −+ (r3 , r2 ; ω)
+ G −− (r1 , r3 ; ω) −+ (r3 , r2 ; ω)

+ G −+ (r1 , r3 ; ω) ++ (r3 , r2 ; ω) . (3.182)

The phonon field operator can be written as follows:


146 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications
 
κ(r, t) = q (r)e−iωq t bq + q∝ (r)eiωq t bq† . (3.183)
q

The phonon distribution function Nq (not necessarily equilibrium) is defined by


   
bq† bq ≡ Nq ; bq bq† ≡ Nq + 1. (3.184)

With the use of Eqs. (3.164), (3.183) we find the following expressions for the
self energy components (to spare space, we will later on denote the combination
(k, τ)(k z > 0) by K ):

 −− (r1 , r3 ; ω) = χ K (r1 )χ∝K (r3 )
qK
, ⎣ #
n K (Nq + 1) (1 − n K )Nq
× q (r3 )q∝ (r1 ) +
ω + ωq − θ K − i0 ω + ωq − θ K + i0
⎣ #/
n K Nq (1 − n K )(Nq + 1)
+ q∝ (r3 ) q (r1 ) + ;
ω − ωq − θ K − i0 ω − ωq − θ K + i0
(3.185)


 ++ (r1 , r3 ; ω) = − χ K (r1 )χ∝K (r3 )
qK
, ⎣ #
∝ n K (Nq + 1) (1 − n K )Nq
× q (r3 )q (r1 ) +
ω + ωq − θ K + i0 ω + ωq − θ K − i0
⎣ #/
n K Nq (1 − n K )(Nq + 1)
+ q∝ (r3 ) q (r1 ) + ;
ω − ωq − θ K + i0 ω − ωq − θ K − i0


 −+ (r1 , r3 ; ω) = −2∂i χ K (r1 )χ∝K (r3 )
qK

× q (r3 )q∝ (r1 )n K (Nq + 1)δ(ω + ωq − θ K )

+ q∝ (r3 )q (r1 )n K Nq δ(ω − ωq − θ K ) ; (3.186)


 +− (r1 , r3 ; ω) = −2∂i χ K (r1 )χ∝K (r3 )
qK

× q (r3 )q∝ (r1 )(n K − 1)Nq δ(ω + ωq − θ K )

+ q∝ (r3 )q (r1 )(n K − 1)(Nq + 1)δ(ω − ωq − θ K ) .
3.6 Application: Electrical Conductivity of Quantum Point Contacts 147

Substituting these expressions into (3.182) and (3.178), and gathering like terms,
we finally obtain the following expression for the electron–phonon collision integral:


q q
Iph (r, p) = −2 CK ≈ K S K ≈ K (r, p)
K ,K ≈ q

n K (1 − n K ≈ )(Nq + 1) − n K ≈ (1 − n K )Nq
× .
θ K − θ K ≈ − ωq − i0
(3.187)

Here the function 


d 3r3 χ∝K ≈ , (r3 )∝ (r3 )χ K (r3 )
q
CK ≈ K ≡ (3.188)

q
in the uniform case would yield the momentum conservation law, C K ≈ K ∝ δ(k −
k≈ − q). The function S K ≈ K (r, p) is defined as
q


d 3 ξe−ipξ χ K ≈ (r + ξ/2)χ∝K (r − ξ/2)
q
S K ≈ K (r, p) =
⎪ 
× q (r + ξ/2) − q (r − ξ/2) . (3.189)

It is clearly seen that Eq. (3.187) yields two different terms. One would contain
the energy-conserving delta functions δ(θ K − θ K ≈ − ωq ) multiplied by the usual
statistical in–out factors for the electronic scattering with emission or absorption of
phonons. The other (expressed through the main value integrals) is responsible for
the spectrum renormalization.

3.6.4  Calculation of the Inelastic Component of the Point Contact


Current

The kinetic equation (3.151) in the point contact,

v↔r f W (r, p) = Iph (r, p), (3.190)

must be supplemented by appropriate boundary conditions. They are conveniently


written after introducing the inverse Fourier transform of Wigner’s function,
 ⎣ #
d 3 p ip(r1 −r2 ) W r1 + r2
f W (r1 , r2 ) = e f ,p , (3.191)
(2∂)3 2

as follows:
148 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

f W (r1 , r2 )|r1 ,r2 ∞∇ = 0;


z 1 ,z 2 >0
f W
(r1 , r2 )|r1 ∈ = f W (r1 , r2 )|r2 ∈
= 0. (3.192)

Then the solution to the boundary value problem (3.190), (3.192) has the form
 
f W (r, p) = d 3 p≈ d 3 r ≈ gpp≈ (r, r≈ )Iph (r≈ , p≈ ), (3.193)

where the function gpp≈ (r, r≈ ) = −gp≈ p (r≈ , r) satisfies the linear equation

v↔r gpp≈ (r, r≈ ) = δ(r − r≈ )δ(p − p≈ ) (3.194)

with boundary conditions for its inverse Fourier transform analogous to (3.192).
This allows us to express the inelastic correction to the point contact current as
follows (we used Eq. (3.159)):
 
2e
Iph = 3
d p d 3 r Fp (r)Iph (r, p). (3.195)
(2∂)3

The function F is defined by


 
Fp (r) ≡ d S≈ d 3 p ≈ vz≈ gp≈ p (r≈ , r) (3.196)
0

(the integration is taken over the opening O) and is the solution of the following
boundary value problem:

v↔r Fp (r) = −vz δ(z);


F(r1 , r2 )|r1 ,r2 ∞∇ = 0;
z 1 ,z 2 >0
F(r1 , r2 )|r1 ∈ = F(r1 , r2 )|r2 ∈
= 0. (3.197)

It can be shown that this quantity can be written as [15]

Fp (r) = ρ−p (r) − α(z). (3.198)

The quantity ρp (r) in (3.198), defined as


  ⎣ # ⎣ #
≈ d 3k r≈ r≈
ρp (r) = d 3 r ≈ e−ipr χ k1 r + χ∝
k2 r − , (3.199)
(2∂)3 2 2
k z >0
3.6 Application: Electrical Conductivity of Quantum Point Contacts 149

is the quantum analogue of the classical probability for an electron moving from
infinity with momentum p to be located at the point r to the right of the contact.
The above results allow us to explain the experimentally observed nonlinear
current–voltage dependence, I (V ), in point contacts, the origins of the nonlinearity
being (1) electron–phonon interaction and (2) renormalization of the electron mass.
It turns out that the peaks of the function d 2 V /d I 2 (eV ) are situated at the maxima
of the phonon density of states. Qualitatively this is understandable, since on the one
hand, the distribution functions of the electrons injected through the point contact is
shifted by eV with respect to the surrounding electrons, and on the other hand, its
relaxation to the distribution function of the surroundings will be accompanied by
emission of phonons with energy ω = eV .
This effect [17, 21] is a basis for point contact spectroscopy, the method of restora-
tion of the phonon density of states from the nonlinear current–voltage characteristics
of a point contact (for a review see [5]). It was first developed in metals, where due
to both screening length and Fermi wavelength being very small, the quasiclassical
theory is already sufficient.

3.7 Method of Tunneling Hamiltonian


The tunneling Hamiltonian approximation (THA) is most often used in the theory
of the Josephson effect—superconducting current flow between two weakly coupled
superconductors (separated by a potential barrier), which we will address later in the
book. But it can be successfully employed in a variety of problems where electron
transfer between the conductors is realized through a “weak link,” be it a tunneling
barrier or a point contact, and it is natural to discuss this method here.
The idea of the method consists in presenting the total Hamiltonian of the system
as a sum:
H = H L + H R + HT . (3.200)

Here H L ,R are the Hamiltonians of isolated (i.e., unperturbed) (left/right) conductors,


while the tunneling term HT describes the effect of electron transitions between them:
 †
HL = θk,τ ck,τ ck,τ ; (3.201)
k,τ


HR = θq,τ dq,τ

dq,τ ; (3.202)
q,τ

 
† ∝ †
HT = Tkq ck,τ dq,τ + Tkq dq,τ ck,τ . (3.203)
k,q,τ
150 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

As you see, H L ,R are written as though the banks of the contact were infinite,
and as though quasiparticle states could be characterized by their momenta (which
in reality are not good quantum numbers).
The tunneling matrix element in the case of time reversal symmetry satisfies the
relation

Tkq = T−k,−q . (3.204)

In the case of planar barrier of amplitude U and thickness d in WKB approximation


we would obtain 1

Tkq ∝ k x qx e−  2mU d δ(k⊥ − q⊥ ),

but the detailed behavior of Tkq is not relevant. It will suffice to note that the compo-
nent of the momentum parallel to the interface is conserved, and that in many cases
the energy dependence of the tunneling matrix element can be neglected in a rather
wide interval around the Fermi energy.
If we apply to the left bank the voltage V , the Hamiltonian will acquire the form

H L (V ) = H L (0) − |e|V N L . (3.205)

The particle number operator in the left/right bank can be defined as


 †

NL = ck,τ ck,τ ; N R = †
dq,τ dq,τ , (3.206)
k,τ q,τ

and it commutes with the corresponding unperturbed Hamiltonian. The only term
changing particle numbers in each separate bank is, of course, HT .
The Heisenberg equation of motion for the electron annihilation operator in the
left bank for nonzero bias, c̃(t), is then

ic̃˙k,τ (t) = [c̃k,τ (t), H L (0)] − |e|V c̃k,τ (t). (3.207)

The tunneling current can be written, e.g., as


 
I (V, t) = −|e| Ṅ R (t) . (3.208)

Commuting N R with HT , we find that

2|e|   

I (V, t) = Tkq c̃kτ (t)dqτ (t) . (3.209)

kqτ

We have thus reduced the problem of calculating the current to one of calculating an
average of four field operators over a nonequilibrium quantum state—which, as we
know well, can be expressed as an infinite series in the perturbation, HT . Here we
will use the Keldysh formalism and consider the simplest case of a constant tunneling
3.7 Method of Tunneling Hamiltonian 151

Fig. 3.11 “Left–right” Green’s function in the Keldysh formalism for the tunneling contact

matrix element,
Tkq ≡ T,

in which case the whole matrix series can be summed explicitly.


Then we can write the current as
4|e|T   †  4|e|T  −+
I =− dq c̃k = ∗ Fkq (0). (3.210)
 
k,q k,q

We have introduced Keldysh Green’s functions, mixing the states in different banks
of the contact:
⎛ −− −+ ⎞
Fkq (t) Fkq (t)
F̂kq (t) = ⎝ ⎠;
+− ++
Fkq (t) Fkq (t)
−− 1 
Fkq (t) = T c̃k (t)dq† (0) ;
i
1  
++
Fkq (t) = T̃ c̃k (t)dq† (0) ; (3.211)
i
−+ 1 
Fkq (t) = − dq† (0)c̃k (t) ;
i
+− 1 
Fkq (t) = c̃k (t)dq† (0) .
i

The diagram series for F̂kq (ω) is shown in Fig. 3.11 (here solid and broken lines
correspond to the unperturbed Keldysh matrix in the left and right bank, Ĝl,r ).8
Due to T being momentum independent, we can easily integrate over internal
momenta. For example,

8All left-bank Green’s functions (labeled by k, k≈ , k≈≈ , . . .) depend on shifted frequency, ω − ω0 ,


where ω0 = |e|V /. This answers to the difference between c̃k and ck .
152 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

Here we have introduced 


ĝ(ω) = Ĝ(k, ω). (3.212)
k

The current itself is thus written as


∇
4|e|T dω  −+
I (V ) = ∗ ˆ
ĝl (ω − ω0 ) · (ω, ω0 ) · ĝr (ω) (3.213)
 2∂
−∇

where

(3.214)

and
ζ̂(ω, ω0 ) = ĝr (ω) · ψ̂3 · ĝl (ω − ω0 ) · ψ̂3 .

Calculations are easier in the “rotated” representation:


 −1 ⎣ #
1 1−1
ĝ ∞ ǧ = LĝL ; L = L
† †
=√ .
2 11

Now we are left only with independent components of the Keldysh matrix:
⎣ # ⎣ #
g−− g−+ 0 gA
ĝ = ∞ ǧ = . (3.215)
g+− g++ g R gK

The components of the ǧ-matrix are easily found:


 1  1 
g R,A (ω) = =P ↑ i∂ δ(ω − ξk );
ω − ξk ± i0 ω − ξk
k k k

  ∇
1 N (ξ) dξ
P =P dξ ≥ N (ω)P = 0.
ω − ξk ω−ξ ω−ξ
k −∇

Therefore,

g R,A (ω) ≥ ↑i∂ δ(ω − ξk ) ≡ ↑i∂ N (ω), (3.216)
k
3.7 Method of Tunneling Hamiltonian 153

while the Keldysh component is, by definition,



g K (ω) = G K (k, ω)
k
1
= (1 − 2n F (ω)) · 2∂δ(ω − ξk )
i
k
= −2∂i (1 − 2n F (ω)) N (ω). (3.217)

After performing the inverse rotation, we find the expression for the normal tun-
neling current in the first nonvanishing order:

∇
(1) 4|e|T dω 2T
I (V ) = ∗ ∂ Nl (ω − ω0 ) · ∂ Nr (ω) [n F (ω) − n F (ω − ω0 )]
 2∂ 
−∇
⎣ #⎣ #
e2 2∂T 2∂T
=V· · Nl (0) Nr (0) (3.218)
∂h  

in the linear response limit (V ∞ 0). The conductance is thus


⎣ #⎣ #
(1) 2∂T 2∂T
G = Nl (0) Nr (0) , (3.219)
 

and the effective barrier transparency in the sense of the Landauer formula is
⎣ #⎣ # ⎣ #2
2∂T 2∂T 2∂
Teff = Nl (0) Nr (0) ≥ N (0) T2 . (3.220)
  

The evaluation of  ˆ leads to the following correction: the integrand of the expression
for I (V ) (3.218) acquires the factor

- -−2 ⎣ #2 −2
- -
-1 − T g A (ω − ω0 )g A (ω)- ≥ 1 + ∂T Nl (0)Nr (0)
2
- r - ,
2 l 

which (for V ∞ 0) leads to

e2 Teff
I (V ) = V · · . (3.221)
∂ (1 + Teff /4)2

This result was first obtained by [13] using the Matsubara formalism. Note that the
actual small parameter of the problem is not T, but rather Teff /4.
A rather disturbing property of the above result is that it agrees with the Landauer
formula only in the lowest order; moreover, for large T conductance tends to zero as
transparency grows! Nevertheless, neither of the approaches is at fault. Indeed, the
154 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

tenet of the Landauer formalism is that once leaving the contact, the particle never
comes back (or immediately loses the phase memory, which is the same). On the
other hand, from the first look at the diagram series that we draw, it becomes clear
that the higher-order terms represent exactly these processes of multiple coherent
reentrances. Then Landauer transparency should be defined as

Teff
TLandauer = ; (3.222)
(1 + Teff /4)2

that is all: after the many happy returns to the contact, an electron finally goes to
infinity, and the Landauer approach becomes legitimate.
The question of why conductance goes to zero as T grows may seem irrelevant,
because the THA itself cannot possibly apply in this limit. We could, though, consider
a formally equivalent case of a one-dimensional tight-binding Hamiltonian:

−1 
 

H L = −t0 ci−1 ci + ci† ci−1 ; (3.223)
i=−∇
∇  
H R = −t0 di† di+1 + di+1

di ; (3.224)
i=1
 

HT = −T c−1 d1 + d1† c−1 . (3.225)

It is evident that if T = t0 , we have an ideal 1D chain, and there will be no reflection


at the “contact” between the sites −1 and 1. On the other hand, both T < t0 and
T > t0 would disrupt the chain and eventually break it in two [12], in agreement
with our result.

3.8 Problems

• Problem 1
Check whether the approximation for the retarded Green’s function

Z
G R (p, ω) =
ω − (θp − μ) − (p, ω)

satisfies
(1) the Kramers–Kronig relations;
(2) the sum rule;
(3) asymptotics at |ω| ∞ ∇, if the approximation for the self energy is given by:
(A) (p, ω) =  ≈ − i ≈≈ ;
3.8 Problems 155

(B) (p, ω) = Aω − i ≈≈ ;
(C) (p, ω) = Aω − i Bω 2
(where Z ,  ≈ ,  ≈≈ , A, B are positive constants).
• Problem 2
Write the analytical expression for the polarization operator at finite temperature
in the lowest order and evaluate it.
• Problem 3

Using the above result, find the large wavelength (q ∞ 0) screening of Coulomb
potential in the nondegenerate limit (eμε  1). Calculate the (Debye–Hückel)
screening length and compare it to the Thomas–Fermi screening length.

References

Books and Reviews

1. Abrikosov, A.A., Gorkov, L.P., Dzyaloshinski, I.E.: Methods of Quantum Field Theory in
Statistical Physics. Dover Publications, New York (1975) (Matsubara formalism)
2. Balescu, R.: Equilibrium and Nonequilibrium Statistical Mechanics. Wiley, New York (1975)
(Definition and properties of Wigner’s functions quantum distribution functions)
3. Datta, S.: Electronic Transport in Mesoscopic Systems. Cambridge University Press, Cam-
bridge (1995) (Very detailed and pedagogical presentation of transport theory in normal meso-
scopic systems)
4. Imry, Y.: Physics of mesoscopic systems. In: Grinstein, G., Mazenko, G. (eds.) Directions in
Condensed Matter Physics: Memorial Volume in Honor of Shang-Keng Ma. World Scientific,
Singapore (1986)
5. Jansen, A.G.M., van Gelder, A.P., Wyder, P.: Point-contact spectroscopy in metals. J. Phys. C
13, 6073 (1980)
6. Lifshitz, E.M., Pitaevskii, L.P.: Statistical Physics pt. II. (Landau and Lifshitz Course of The-
oretical Physics, v. IX.). Pergamon Press, Oxford (1980) (Matsubara formalism)
7. Lifshitz, E.M., Pitaevskii, L.P.: Physical Kinetics. (Landau and Lifshitz Course of Theoretical
Physics, v.X.). Pergamon Press, Oxford (1981) (Keldysh formalism)
8. Morse, P.M., Feschbach, H.: Methods of Theoretical Physics. McGraw-Hill, New York (1953)
9. Washburn, S.,Webb, R.A.: Quantum transport in small disordered samples from the diffusive to
the ballistic regime. Rep. Progr. Phys. 55, 1311 (1992) (A review of theoretical and experimental
results on mesoscopic transport)
10. Rammer, J., Smith, H.: Quantum field-theoretical methods in transport theory of metals. Rev.
Mod. Phys. 58, 323 (1986)
156 3 More Green’s Functions, Equilibrium and Otherwise, and Their Applications

Articles

11. Bogachek, E.N., Zagoskin, A.M., Kulik, I.O.: Sov. J. Low Temp. Phys. 16, 796 (1990) (An
emphasis is made on the Keldysh formalism and the method of quantum kinetic equations)
12. Cuevas, J.C., Martín-Rodero, A.: Phys. Rev. B 54, 7366 (1996)
13. Genenko, Yu.A., Ivanchenko, Yu.M.: Theor. Math. Physics 69, 1056 (1986)
14. Glazman, L.I., Lesovik, G.B., Khmelnitskii, D.E., Shekhter, R.I.: JETP Lett. 48, 239 (1988)
15. Itskovich, I.F., Shekhter, R.I.: Sov. J. Low Temp. Phys. 11, 202 (1985)
16. Krans, J.M., van Ruitenbeek, J.M., Fisun, V.V., Yanson, I.K., de Jongh, L.J.: Nature 375, 767
(1995)
17. Kulik, I.O., Omelyanchuk, A.N., Shekhter, R.I.: Sov. J. Low Temp. Phys. 3, 1543 (1977)
18. Pascual, J.I., et al.: Science 267, 1793 (1995)
19. van Wees, B.J., et al.: Phys. Rev. Lett. 60, 848 (1988)
20. Wharam, D.A., et al.: J. Phys. C 21, L209 (1988)
21. Yanson, I.K.: Sov. Phys. JETP 39, 506 (1974)
22. Zagoskin, A.M., Kulik, I.O.: Sov. J. Low Temp. Phys. 16, 533 (1990)
Chapter 4
Methods of the Many-Body Theory
in Superconductivity

There’s a fallacy somewhere: he murmured drowsily, as he


stretched his long legs upon the sofa. “I must think it over
again.” He closed his eyes, in order to concentrate his attention
more perfectly, and for the next hour or so his slow and regular
breathing bore witness to the careful deliberation with which he
was investigating this new and perplexing view of the subject.
Lewis Carroll
“A Tangled Tale”

Abstract Physical origins of superconductivity and peculiarity of the supercon-


ducting state. Cooper pairing. Instability of the normal state of a system of fermions.
BCS Hamiltonian. Elementary excitations in a superconductor. Nambu-Gor’kov for-
malism for matrix Green’s functions in a superconductor. Andreev reflection and
Josephson effect. Coulomb blockade.

4.1 Introduction: General Picture of the Superconducting


State
The discovery of superconductivity by Kamerlingh Onnes in 1911 was a real
challenge for contemporary physical theory. The theory of metals developed by Drude
on the basis of classical statistical physics, while giving a very good explanation of
their normal properties, was absolutely unable to deal with superconducting ones.
This was also true later for Sommerfeld’s and Bloch’s theories, based on quantum
mechanics. It took almost a half century of studies for the microscopic theory of
this phenomenon to appear. (One of the boldest sci-fi writers, R. Heinlein, in a novel
written in the mid-40s, predicted the creation of such a theory only by the middle of
the next millennium.)
We will not give a detailed account on the properties of superconductors. On the
one hand, they are well known. On the another, we are not giving a course on super-
conductivity, or even on the theory of superconductivity, but on the applications of
the many-body theory methods to condensed matter, including the superconducting

A. Zagoskin, Quantum Theory of Many-Body Systems, 157


Graduate Texts in Physics, DOI: 10.1007/978-3-319-07049-0_4,
© Springer International Publishing Switzerland 2014
158 4 Methods of the Many-Body Theory in Superconductivity

state. For our purposes, thus, it is enough to recollect two deciding consequences of
the experimental data.
The ground state of the superconductor is unusual
At sufficiently low temperature below the superconducting transition temperature
Tc there exists macroscopic phase coherence of the electrons throughout the sample.
You can imagine the superconductor as a single giant molecule in the sense that its
electrons are described rather by the wave function than by the density matrix (as in
any decent macroscopic system).
The elementary excitations in the superconductor
Elementary excitations separated from the ground state by a finite energy gap.
This means that the system opposes attempts to excite it, staying in its ground state
if the perturbation is not sufficiently strong.
These key properties can be reproduced by the theory, provided it accounts for
the following things:
(1) Degeneracy of the electron gas (exclusion principle): The existence of the Fermi
surface is essential.
(2) Attraction between electrons. This property seems incredible due to the inevitable
Coulomb repulsion between electrons. Nevertheless, in the previous chapter you
were shown the possibility of such an attraction due to the electron–phonon
interaction (EPI). The role of EPI was understood by Frölich even before the
appearance of BCS theory and is directly confirmed by an isotope effect.
(3) The characteristic interaction energy, i.e., the range of electrons involved in the
interaction (the order of the Debye energy, π D ), must be much less than the
Fermi energy:

π D √ E F .

If these demands are met, the normal zero-temperature ground state of the
metal-filled Fermi sphere-becomes unstable. The qualitatively different ground state
appears instead, with all the strange features that we observe.
This instability of the ground state, or of the vacuum (you remember this termi-
nology), means that we can no longer use the perturbation techniques starting from
the normal ground state. As in the search for the root of a function by iteration, we
must start not too far from the root we want to find, lest we get a wrong answer, or
no answer at all. Then we have no regular way to build the new vacuum; we need an
insight.
But analogies can help to bring the insight closer. The normal ground state has
no gaps. The superconducting one has. In the usual quantum mechanics we also find
the situation when the gap appears: when the bound state exists. For an electron in
the isolated atom, e.g., the gap between the level and the continuous spectrum is the
ionization potential.
Note that the bound state cannot be obtained by the perturbation theory from the
propagating one. Indeed, the bound state does not carry a current. So, no matter how
4.1 Introduction: General Picture of the Superconducting State 159

weak the attractive potential is, there is a finite, qualitative difference between these
states.
“No matter how weak” is a slightly inaccurate statement, for in the three-
dimensional (3D) case too weak an attractive potential cannot create the bound
state. But in the two- and one-dimensional cases the bound state can be created by
an arbitrarily weak attractive potential.
Here the dimensionality is physically significant. We guess that the attraction
between the electrons is fairly weak. (Experimental measurements of the gap confirm
this supposition.) Thus we must have an effectively low-dimensional situation to get
a bound state.
What takes place in the one-body case? Assuming that the attractive potential is
of the simplest form,

U (r) = −uρ(r), u > 0,

we have the Schrödinger equation

−→ 2  − uρ(r) = E.

Fourier-expanding the wave function,



(r) = k exp(ikr),
k

we find
 
(k 2 − E)k eikr = u(0)eikr ,
k k

or
u u 
k = (0) ∇ 2  ≡.
k2 −E k −E ≡ k
k

Summing over wave vectors k, we find the relation

1  1
= . (4.1)
u k −E
2
2k >0

In the propagating states we have E, k 2 > 0, in the bound ones E, k 2 < 0. First
of all, we see that for repulsive potential (u < 0) only the propagating states are
possible (the r.h.s. of 4.1 is strictly positive for negative E).
Then, for weak attraction (the case we are interested in) the l.h.s. of this equation
is a large positive number. For positive E it can be easily matched by one of the terms
in the r.h.s. sum. This demands only a slight shift of the energy levels relative to their
160 4 Methods of the Many-Body Theory in Superconductivity

Fig. 4.1 Formation of the


bound state

positions without a potential (see Fig. 4.1). But for negative E it can be matched
only by the whole sum, since none of its terms is any longer arbitrarily large. If the
attraction is infinitely weak, the excitation energy, |E|, of the possible bound state
tends to zero, and can be neglected in (4.1).
Thus, for the bound state to be created by an infinitesimal attractive potential,
the sum k 2 >0 k −2 must diverge. Here the dimensionality role becomes absolutely
clear, for  ≈
⎪ ≈
 4ε 0 (k dk) k 2 ∓ k|0 , (3D)
2 1
 1 ≈
= 2ε 0 (kdk) k12 ∓ ln k|≈ 0 , (2D) (4.2)
 2  ≈ dk 1

2
k 1 ≈
k
0 k 2 ∓ k 0 . (1D)

The divergence on the upper limit is an artifact due to our choice of ρ-like potential;
we should renormalize it, which means introducing a cutoff at some kmax ≈ 1/a,
where a is the scattering length, related to scattering cross-section λ = 4εa 2 . The
physically significant divergence, at small momenta (i.e., large distances), occurs
only in the 1D and 2D cases.
These academic speculations become practically important as soon as we recollect
that in the metal at low temperature we have effectively a 2D situation.
Indeed, a filled Fermi sphere does not allow the electrons to be scattered inside
it. On other hand, the energy transfer during the electron–electron collision due to
our attractive potential is of the order of its energy scale, π D √ E F . Thus the
electrons under consideration are confined to an effectively 2D layer around the
Fermi surface. It was Cooper who first understood this fact, and now we shall sketch
his considerations of the famous Cooper pairing.
Consider two electrons (the “pair”) above the filled Fermi sphere (at T = 0)
(Fig. 4.2a). We neglect all the electrons inside in any other relation. We then assume
the translation invariance of the whole system, and neglect all spin-dependent forces.
4.1 Introduction: General Picture of the Superconducting State 161

Fig. 4.2 a Cooper pair scattering, b Scattering phase volume

Fig. 4.3 Pair formation in the


superconductor

Then the momentum of the pair mass center, q, and its total spin, S, are the motion
constants. The pair orbital wave function is then

(r1 , r2 ) = αq (δ)eiqR , (4.3)

where δ = r1 − r2 , R = r+r 2 .
The problem is greatly simplified if the pair is at rest (q = 0). (It is easy to see
(Fig. 4.2b) that indeed in this case the scattering phase volume is largest.) In this case
the problem is spherically symmetric; α(δ) is an eigenfunction of the moment L.
For the singlet spin state of the pair (S = 0) the orbital wave function is symmetric
(Fig. 4.3).
We can write
 
(r1 , r2 ) = α(δ) = ak eikδ = ak eikr1 e−ikr2 , (4.4)
k>k F k>k F
162 4 Methods of the Many-Body Theory in Superconductivity

and see that the pair wave function is a superposition of the configurations with
occupied states (k, −k). The eigenstate of the pair with spin S = 0 can be found
from the Schrödinger equation:

(E − H0 ) = V , (4.5)

 2
where H0 = − 2m (→r1
2 + → 2 ), and V is an interaction potential. The energy is
r2
measured, as usual, from the Fermi level.
Substituting here (4.4) and taking matrix elements of both sides of the equation,
we obtain

(E − 2E k )ak = ∼k, −k|V |k≡ , −k≡ ∝ak≡ . (4.6)
k ≡ >k F

This equation can be solved if we take V in the factorized form,

∼k, −k|V |k≡ , −k≡ ∝ = ωwk∗ wk≡ . (4.7)

Then we obtain the same sort of equation as in our quantum-mechanical example:

1  |wk |2
− = . (4.8)
ω 2E k − E
k>k F

Now we see that in the case of attraction (ω < 0) the bound state appears if the
interaction is confined to the vicinity of the Fermi surface, i.e., if

1, 0 < E k < πc , (πc √ E F )


|wk | = 2
(4.9)
0. otherwise

Indeed, in this case



1 πc N (E ≡ ) N (0) 2πc + |E|
− = dE ≡ ≈ ln , (4.10)
ω 0 2E ≡ − E 2 E

and
2πc
|E| = exp(−2/|ω|N (0)). (4.11)
1 − exp(−2/|ω|N (0))

Of course, if the attraction is strong, there is a bound state with |E| ≈ πc |ω|N (0).
But in the weak coupling limit it still exists, and its energy is given by

|E| ≈ 2πc exp(−2/|ω|N (0)). (4.12)


4.1 Introduction: General Picture of the Superconducting State 163

Thus, even an infinitesimal electron–electron attraction near the Fermi surface


creates a bound state of two electrons. This is greatly due to the effectively 2 D
situation, which allowed us to write instead of the 3 D D.O.S., N (E) ∓ E 1/2 , the 2
D one, N (E) ≈ N (0) = const.
The binding energy is enormously sensitive to the interaction strength, |ω| (4.12)
and is an essentially nonanalytic function of |ω| when |ω| → 0 (all terms in its Taylor
expansion around this point are zeros). This indicates the instability of the normal
ground state and unavoidable failure of any perturbation scheme starting from it.
The above calculations have a serious flaw: they are essentially “few-body” and
deal rather with a single pair of real electrons injected into metal than with quasipar-
ticles. Thus, only the possibility to leaving the (k ↑, −k ∞) state due to interaction
is considered, while the inverse process is omitted (when another pair is scattered
from (k≡ ↑, −k≡ ∞) to (k ↑, −k ∞)). The quasiparticles below the Fermi surface
(quasiholes) also were not considered, leading to a wrong exponent in the expres-
sion for the binding energy (4.12): in BCS theory we have exp(−1/|ω|N (0)) instead
of exp(−2/|ω|N (0)) (an enormous difference, since exp(−1/|ω|N (0)) is already a
small number). Then, the quasiparticle approach allows us to explain how the pairs
(binding energy ≈ k B Tc ≈ 10−4 eV) survive the Coulomb electron–electron and
electron–phonon interactions, with a scale of some 1 eV per particle. (In the quasi-
particle picture all these interactions are included into background, renormalizing the
quasiparticle mass, etc.; what is left (if any) is just the effective weak (quasi)electron-
(quasi)electron attraction.)
The existence of many pairs means that we no longer can be sure whether a given
(k ↑, −k ∞)-state is occupied or empty; we have instead to introduce the probability
amplitudes that it is occupied, vk , or free, u k ;

|vk |2 + |u k |2 = 1. (4.13)

The fact that we use the amplitudes, not the probabilities, shows that this uncertainty
is not due to the usual scattering: we have a coherent state, preserving the quantum-
mechanical phase effects.
As usual, the price to be paid for the advantages of the many-body approach is
high. The calculations, though, could be greatly simplified if some sort of a mean
field approximation (MFA) can be applied. For this, the pair concentration must be
large enough. Let us make an estimate. The size of a Cooper pair may be defined as
 
|α(δ)|2 δ2 d 3 δ k |→k ak |
2
(δ) =2  = 
|α(δ)|2 d 3 δ |ak |2
≈ k
N (0)(∂θ/∂k)θ=0 0 (∂a/∂θ)2 dθ
2
≈ ≈ (4.14)
N (0) 0 a 2 dθ

(θ is the energy measured from the Fermi level).


Because of
164 4 Methods of the Many-Body Theory in Superconductivity

(E − 2E k )ak = ∼k ↑, −k ∞ |V |k≡ ↑, −k≡ ∞∝ak≡
k≡

= ωwk∗ wk≡ ak≡ = const
k≡

(if k is near the Fermi surface), we find that

const ∂a 2 const
a(θ) = ; = .
E − 2θ ∂θ (E − 2θ)2

Therefore,

4 v F
δ ≈ . (4.15)
3 E

If we take E = k B Tc , we get quite a macroscopic length, δ ≈ 10−4 cm. The


density of pairs at sufficiently low temperature T √ Tc must be, on the other hand,
of the order of the density of the electrons themselves, n e ≈ 1022 cm−3 . Thus in the
volume of a single pair there exist at least 1010 more pairs. This provides us with
perfect conditions for MFA application.
To use this advantage, note that the term in the Hamiltonian for the electron–
electron attraction described above is of the form
 † †
Hi = Vkk≡ ak↑ a−k∞ ak≡ ∞ a−k≡ ↑ .
kk≡

This term contains four operators and will lead to the four-legged vertex in the dia-
gram techniques. Along with the sharp momentum dependence of the matrix element,
this is a rather unpleasant thing to deal with. The MFA immediately allows us to get
rid of two of these operators, writing instead of them the average value (and then
obtaining for it the self-consistent equation). But the usual choice in accordance with
Wick’s theorem, say, a † a † aa → ∼a † a∝a † a, does not give rise to any superconduc-
tivity: this term is for usual electron scattering and has nothing to do with pairing
instability. Anyway, it could have been included in other terms of the Hamiltonian
or/and in the renormalization of quasiparticle characteristics.
To keep the pairs in our approximation, we must make a crazy step and write
 † †
Hi → Hi,MFA = Vkk≡ ak↑ a−k∞ ∼ak≡ ∞ a−k≡ ↑ ∝ + Hermitian conjugate
kk≡
 
= † †
k ak↑ a−k∞ + ∗k a−k∞ ak↑ . (4.16)
k

The quantity k is, naturally, called the pairing potential:


4.1 Introduction: General Picture of the Superconducting State 165

k = Vkk≡ ∼ak≡ ∞ a−k≡ ↑ ∝. (4.17)
k≡

The craziness of this step consists in the fact that we have introduced anomalous
averages of two creation or annihilation operators, which must be zero in any state
with a fixed number of particles! But there is a method (or at least self-consistency)
in this madness. The Hamiltonian Hi,MFA also violates the law of conservation of
the particles’ number, creating and annihilating them in pairs. Then an average like
∼aa∝ or ∼a † a † ∝ calculated using this Hamiltonian will not be zero, and the ends
are met. Moreover, as you certainly know, the nonzero anomalous averages are the
fundamental feature of the superconducting state. They describe the off-diagonal
long-range order (ODLRO) of this state (the concept introduced by Yang [27])—the
sort of symmetry that makes it qualitatively different from the normal state.
ODLRO is so called because the anomalous averages are related to off-diagonal
terms of the density matrix of the system (which, as you know, are responsible for the
quantum coherence phenomena). The long-range order can be understood as follows.
The pair-pair correlation function,

S↑∞ (r1 , r2 ) = ∼↑† (r1 )∞† (r1 )∞ (r2 )↑ (r2 )∝, (4.18)

describes the correlation between the pairs at r1 and at r2 . When the distance between
them grows, this function must factorize (because of the general principle of the
correlations’ extinction):

S↑∞ (r1 , r2 ) ∼ ∼↑† (r1 )∞† (r1 )∝ × ∼∞ (r2 )↑ (r2 )∝. (4.19)
|r1 −r2 |→≈

In the normal state it factorizes trivially, for anomalous averages are zero. But in the
superconducting state we get a nontrivial factorization

S↑∞ (r1 , r2 ) ∓ ∗ (r1 )(r2 ), (4.20)


|r1 −r2 |→≈

which means that there exists a long-range correlation between the electronic states
(macroscopic quantum coherence). The superconductor is in an ordered state. There-
fore, the pairing potential  is also called the order parameter.
But if these anomalous averages are so important, then the superconductivity
should not appear in a system with a fixed number of particles. That is, the super-
conducting ground state seems to be forbidden in a closed system.
The most trivial answer (and the correct one) is that there are no isolated objects
in this Universe; the electrons are, in principle, delocalized, etc., so that two electrons
more or less don’t play any role in practice.
Another answer, also true, is that, O.K., anomalous averages are important. But
let us think: what are the observable consequences of nonzero , ∗ ? We shall see
that the superconductor is described in terms of || and Arg.
166 4 Methods of the Many-Body Theory in Superconductivity

Fig. 4.4 Are the anomalous


averages real?

We notice, then, that for an isolated system anything (thermodynamic properties,


for example) is defined only by ||2 ∇ ∗  ∓ ∼a † a † aa∝, which is nonzero even
if the number of particles is fixed. And the phase of  in this situation is physically
senseless.
Of course, there exist situations when the phase is significant (the Josephson effect,
for instance). But then we have an essentially open system-where electrons can tunnel
this way and that way, so their number in either superconductor is fluctuating.
We see, then, that this “nonconservation” is rather an artifact related to a certain
way of describing the superconductivity, MFA Hamiltonian, than some mystical
physical phenomenon (see Fig. 4.4).1
Let us recall that in the second quantization method we also had violated all the
conservation laws, by introduction of the creation/annihilation operators. The only
difference is that then we restored justice immediately, in the very term (writing
something like a † a), and now we wait until the end of the calculations.
Now let us look at the problem from another point of view.
As you know, if we calculate the average magnetic moment of the magnetic system
in zero magnetic field, the result is zero even below the Curie point. To get a proper
result—finite spontaneous magnetization M—you have to perform a trick: turn on
an infinitesimal magnetic field h, find M, and then set h = 0.
Why? It is simple. The magnetic moment is a vector. Then the calculation of its
average value includes an averaging over directions. But without the magnetic field,
all directions of M are equally probable, and the average is zero! An arbitrarily small
field, though, orients M along itself, and it is no longer averaged to zero. This sort
of average was first introduced by Bogoliubov and is called quasiaverages.

1 Of course, if you deal with a very small system, when one or two extra electrons significantly

change the total energy, certain precautions must be taken; an example-the parity effect—will be
discussed later.
4.1 Introduction: General Picture of the Superconducting State 167

Fig. 4.5 Spontaneous symmetry breaking

This is an example of a very general and very important situation. The symmetry
of the Hamiltonian is higher, than the symmetry of the ground state: that is, the
symmetry is spontaneously broken.
The role played by the field h is just to reduce the symmetry of the Hamiltonian
to that of the ground state, through the term-Mh.
The fact that below the transition temperature an infinitesimal external field leads
to finite magnetization in our calculations means that the previous ground state, with
zero M, has lost stability and spontaneously acquired a finite magnetic moment.
(Therefore we speak of a spontaneous symmetry breaking; see Fig. 4.5.)
We encounter a like situation in a superconductor. If we introduce the pair sources
into the Hamiltonian, say f a † a † and f ∗ aa, calculate ∼a † a † ∝ and ∼aa∝, and then put
f = f ∗ = 0, above Tc we get zero. But below Tc we find a finite result even for zero
pair sources, and it is possible to show that all the averages of observables calculated
with the help of our “pair” Hamiltonian HMFA are the same as quasiaverages obtained
by more complicated procedure explained above. The normal state is then unstable,
but the real particle non-conservation is not necessary.
The principle “push the one who is falling” is never violated. Thus, any instability
will fully develop, and below Tc we never find the normal state. We find instead the
qualitatively different, superconducting one.

Conclusions

The normal ground state of the superconductor is unstable below the transition
temperature with regard to the creation of Cooper pairs of electrons (in the case
of arbitrarily weak electron–electron attraction near the Fermi surface).
The superconducting ground state is qualitatively different from the normal one
and possesses the special kind of symmetry that is revealed in the existence of nonzero
168 4 Methods of the Many-Body Theory in Superconductivity

Fig. 4.6 Ladder series for the vertex function

anomalous averages (related to the superconducting order parameter) and leads to


the macroscopic quantum coherence in the system.
The standard many-body perturbation theory starting from the normal state is
unable to describe the superconducting one; nevertheless, it allows us to find the
transition point, where the instability arises.
The apparent violation of the particle conservation law due to the existence of
nonzero anomalous averages does not lead to any physically observable conse-
quences in massive superconductors.

4.2 Instability of the Normal State

The poles of the two-particle Green’s function correspond to the bound states of two
quasiparticles (such as plasmons). This is me for the temperature Green’s functions
as well. We will show that if there exists an arbitrarily weak attraction between
the quasiparticles on the Fermi surface, the pole appears in the vertex function
(see Lifshits and Pitaevski [5]). As we know, this is equivalent to the pole in the
two-particle Green’s function itself.
Let us consider the temperature vertex function,

Γ (p1 ψ1 , p2 ψ2 ; p≡1 ψ1≡ , p≡2 ψ2≡ ), (4.21)

under the conditions

p≡1 + p≡2 = 0; |p≡1 | = p F ; ψ1≡ = ψ2≡ = 0. (4.22)

This means that the pairing occurs on the Fermi surface, with zero binding energy
(this must be so at the very transition point), and zero total momentum (this was
discussed earlier).
The pole arises due to the ladder diagrams of Fig. 4.6. (We do not have to consider
the diagrams with interchanged ends, for the pole arises in both series simultane-
ously.) The Bethe-Salpeter equation can be thus written as follows (Fig. 4.7):
4.2 Instability of the Normal State 169

Fig. 4.7 Bethe-Salpeter equation in the ladder approximation

Fig. 4.8 Parametrization of momenta vectors on the Fermi sphere

(p1 , −p1 ; p≡1 , −p≡1 )ρφχ ρτρ



1  d 3 p3
+ ρφξ G 0 (p3 ψs )ρτμ G 0 (−p3 , −ψs )
τ s=−≈ (2ε)3
×U (p3 − p1 )(p3 , −p3 ; p≡1 , −p≡1 )ρξχ ρμρ (4.23)
= −U (p≡1 − p1 )ρφχ ρτρ .

In the sum and integral, only p3 close to the Fermi surface and small Matsubara
frequencies ψ are important. Therefore, we can set |p3 | = p F , ψs = 0 in the
arguments of both  and U under the summation and integration.
Now all the vector arguments lie on the Fermi sphere, and , U depend each on
a single variable (the angle between the corresponding momenta). Then they can be
expanded in Legendre polynomials:


U (θ) = (2l + 1)u l Pl (cos θ); (4.24)
l=0
≈
(θ) = (2l + 1)χl Pl (cos θ). (4.25)
l=0

Then the equation takes the form


170 4 Methods of the Many-Body Theory in Superconductivity

(2l + 1)(−u l − χl )Pl (cos θ)
l
3
1 d p3 0
= G (p3 ψs )G 0 (−p3 , −ψs )
τ s (2ε)3

× (2l ≡ + 1)(2l ≡≡ + 1)u l ≡ χl ≡≡ Pl ≡ (cos θ≡ )Pl ≡≡ (cos θ≡≡ ) (4.26)
l≡ l ≡≡

(see Fig. 4.8).


Making use of the summation theorem for the spherical functions, and taking into
account that
1
G 0 (p3 ψs ) = = G 0 (−p3 , −ψs )∗
p32
iψs − 2m + μ

is direction independent, we finally obtain the following expression, first obtained


by Landau and Pitaevskii, for the lth angular component of the vertex function:

−u l
χl = . (4.27)
1 + ul Ψ

In this expression
3 2 d 3 p 1 
1 d p3 0 3 1
Ψ= G (p 3 s =
ψ ) . (4.28)
τ s (2ε)3 (2ε)3 τ s ψs2 + θ 2p

The sum quickly converges. Using the formula for summation over fermionic
Matsubara frequencies, we find that

τθ p
d 3 p tanh
Ψ= 2
. (4.29)
2(2ε)3 θ p

The high-momentum divergence is not a physical one, and as we have mentioned


earlier, it is to be cut off at some pmax ≈ 1/a. Thus we have obtained the condition
for the appearance of the pole in the vertex function:

τθ p
−u l tanh
d p3 2
= 1. (4.30)
2(2ε)3 θp

This very equation (with l = 0) is obtained from BCS theory if we put the order
parameter  = 0 (i.e., in the transition point). The critical temperature in the lth
channel is thus, up to a factor of order unity,

1
Tc(l) ≈ ϕmax exp{− }. (4.31)
N (0)|u l |
4.2 Instability of the Normal State 171

Here ϕmax is the energy cutoff parameter. We see that the bound state appears if
at least one of the coefficients u l in the potential’s angular expansion is negative
(l)
(attraction). The transition takes place at the largest of the Tc , and the nonanalytic
(l)
dependence of Tc on the interaction parameter is properly restored.
In conventional superconductors the electron–electron attraction appears already
in the s-wave channel (l = 0) due to the phonon-mediated electron–electron coupling
(see Sect. 4.4.2). But for l > 0 negative coefficients may arise from the bare repulsive
electron–electron interaction without such an intermediary. This so-called Kohn–
Luttinger pairing mechanism [21] has the same origin as Friedel oscillations of
screened electrostatic potential at large distances (see Appendix A). Namely, due
to the sharp edge of Fermi distribution the polarization operator, as a function of
momentum, is non-analytic when the momentum transfer is close to 2 p F . As a
result, even starting from a purely repulsive bare potential, all angular components
u l of which are positive, one obtains the screened potential, where at least some
components u leff are negative. Substituting such a u leff in Eq. (4.27) instead of u l , we
obtain the instability of the normal state and the superconducting transition. Note
that the existence of a sharp Fermi surface and the resulting effective reduction of
the dimensionality of the momentum space, which was critically important for the
formation of Cooper pairs, is also crucial here.
The Kohn–Luttinger mechanism offers a possibility to explain the superconduct-
ing pairing with l = 0 in systems, where the phonon-mediated coupling is too weak
(like in high-Tc cuprates). While the original version of the Kohn-Luttinger argu-
ment does not directly apply to the systems without spherical symmetry, its recent
extensions to lattice models provide interesting insight in the possible nature of super-
conducting state in the cuprates, the Fe- graphene (see Maiti and Chubukov [6]). In
the following, though, we will mainly concern ourselves with conventional, s-wave,
phonon-mediated superconductivity.
Finally, note also an important difference of Eq. (4.31) from the result we obtained
earlier: −1/(N (0)|u l |) in the exponent, instead of twice this value, as followed
from Cooper’s initial argument. This is because now we have properly taken into
account the many-body character of the problem. Unfortunately, we cannot proceed
any further: the fact of some transition due to electron–electron coupling, and the
temperature of this transition are the only things which can be obtained by the usual
technique, which starts from the normal ground state. In fact, no real bound state
(in terms of quasiparticles above the normal vacuum) appear, but rather instability
arises, which cannot be dealt with.
Therefore, we need the modified formalism, which will be built with the help of
the so-called pairing Hamiltonian.
172 4 Methods of the Many-Body Theory in Superconductivity

4.3 Pairing (BCS) Hamiltonian

4.3.1 Derivation of the BCS Hamiltonian

We start from the Hamiltonian with a four-fermion attractive term



H = H0 + Hint = H0 + g d 3 rψ↑† (r)ψ∞† (r)ψ∞ (r)ψ↑ (r), g<0 (4.32)

(an appropriate momentum cutoff confining the interaction into the narrow layer near
the Fermi surface is implied). From the naïve point of view we have followed earlier,
the MFA pairing Hamiltonian is obtained by selective averaging of operators in the
previous expression:
 ⎧ 
HMFA = H0 + g d 3 r ψ↑† (r)ψ∞† (r)∼ψ∞ (r)ψ↑ (r)∝ + ψ↑† (r)ψ∞† (r) ψ∞ (r)ψ↑ (r)

∇ H0 − d 3 r ψ↑† (r)ψ∞† (r)(r) + ∗ (r)ψ∞ (r)ψ↑ (r) ; (4.33)
(r) = |g|∼ψ∞ (r)ψ↑ (r)∝. (4.34)

The result is correct, but its foundation seems to be a bit shaky. In the following we
pursue two objectives: to show that the pairing Hamiltonian is more reliable than one
may guess, and how the corrections can be introduced in a regular way (following
Svidzinskii [8]). The equilibrium properties of the system can be derived from its
grand partition function,

 = tre−τ(H0 +Hint ) , (4.35)

which, as we know from the Matsubara formalism, can be written as


τ 
 = tr e−τ H0 Tν e− 0 dν Hint (ν ) , (4.36)

where Hint is taken in Matsubara interaction representation. Rewriting Eq. 4.36 in


more detail, we have
τ  3 
 = tr e−τ H0 Tν e+ 0 dν d rA(r,ν )A(r,ν ) , (4.37)

where

A(r, ν ) = |g|ψ ↑ (r, ν )ψ ∞ (r, ν ); (4.38)

A(r, ν ) = |g|ψ∞ (r, ν )ψ↑ (r, ν ). (4.39)
4.3 Pairing (BCS) Hamiltonian 173

Now we use one of the functional integration formulae, which is a direct generaliza-
tion of the standard formula for Gaussian integrals

≈ −x 2 +2Ax

A2 1 −x 2 +2Ax −≈ dxe
e =√ dxe = ≈ . (4.40)
ε −≈ −x 2
−≈ dxe

Let us write

A = P + iQ, A = P − iQ, (4.41)

where P, Q are Hermitian operators. Then reordering the operators under the sign
of the Tν -operator, we can obtain the expression
τ  τ  2 τ  2
d 3 rA(r,ν )A(r,ν ) d 3 rP (r,ν ) d 3 rQ (r,ν )
Tν e 0 dν
= Tν e 0 dν
e 0 dν
, (4.42)

and then, introducing two real auxiliary fields θ(r, ν ), ξ(r, ν ):


τ  τ  3 2
d 3 rA(r,ν )A(r,ν )
Dθ(r, ν )Dξ(r, ν ) e− 0 dν d r[θ (r,ν )+ξ (r,ν )]
2
Tν e 0 dν
=
τ  
×Tν e 0 dν d r2[P (r,ν )θ(r,ν )+Q(r,ν )ξ(r,ν )]
3


τ  ⎩−1
− 0 dν d 3 r[θ 2 (r,ν )+ξ 2 (r,ν )]
× Dθ(r, ν )Dξ(r, ν )e . (4.43)

Introducing then instead of θ(r, ν ), ξ(r, ν ) a complex field

κ(r, ν ) = θ(r, ν ) + iξ(r, ν ), (4.44)

noticing that

− [θ 2 + ξ 2 ] + 2[Pθ + Qξ] = −|κ|2 + Aκ ∗ + Aκ, (4.45)

and finally setting


1
κ(r, ν ) = √ (r, ν ), (4.46)
|g|

we can write the partition function as follows:


1
τ 
− d 3 r|(r,ν )|2
 = 0 D(r, ν )D∗ (r, ν )e |g | 0

 τ  3 ∗

× Tν e+ 0 dν d r[(r,ν )ψ↑ (r,ν )ψ∞ (r,ν )+(r,ν ) ψ∞ (r,ν )ψ↑ (r,ν )] (4.47)
0

τ  3 ⎩−1
1
− dν d r|(r,ν )| 2
× D(r, ν )D∗ (r, ν )e |g| 0 .
174 4 Methods of the Many-Body Theory in Superconductivity

The partition function contains a Gaussian average over the pair sources’ fields,
, ∗ , of the Bogoliubov functional


 τ ⎧
 B [, ∗ ] ∇ e−τ B [, ] = Tν e− 0 dν H B (ν ) , (4.48)
0

with the pairing Hamiltonian H B (in Matsubara representation). Then the approx-
imation of the pairing Hamiltonian corresponds to the main order in the expansion
of the functional integral over the pair sources’ fields in (4.48). The corresponding
values of these fields are given by the extremum condition:
τ 
1
− |g| dν d 3 r|(r,ν )|2 −τ B [,∗ ]
e 0 = max, (4.49)

that is2
ρΣ B [, ∗ ]
∗ (r, ν ) = −τ|g| . (4.50)
ρ(r, ν )

By our definition,  = 0  B = 0 e−τΣ B . Then

ρΣ B ρ ln(/0 ) ρ ln 
= −τ −1 = −τ −1 . (4.51)
ρ ρ ρ
Substituting here the partition function (4.48), we obtain

ρB 1 −τ H0
=− tr e Tν ψ ↑ (r, ν )ψ ∞ (r, ν )
ρ(r, ν ) τ
τ  
d 3 r[(r,ν ) ↑ (r,ν ) ∞ (r,ν )+(r,ν )∗ ψ∞ (r,ν )ψ∞ (r,ν )]
× e+ 0 dν

= −τ −1 ∼Tν ψ ↑ (r, ν )ψ ∞ (r, ν )∝, (4.52)

which finally yields the self-consistency relation

∗ (r, ν ) = |g|∼Tν ψ ↑ (r, ν )ψ ∞ (r, ν )∝. (4.53)

Its conjugate coincides with the relation (4.34) obtained from the “naïve” point of
view.

2 The functional, or variational, derivative of a functional F[ f ] of a function f (x) is denoted


by ρ F/ρ f (x). It describes the variation of the functional when f (x) is replaced by f (x)+ρ f (x), in
(Footnote 2 continued)
the first order in ρ f (x),
 
ρF
F[ f + ρ f ] − F[ f ] ∇ dx ρ f (x) + · · · .
ρ f (x)

Higher-order functional derivatives are introduced in the similar way.


4.3 Pairing (BCS) Hamiltonian 175

The advantage of our approach is that we not only have established the validity
of the “MFA” Hamiltonian in the superconducting case, but also found its limits and
way of introducing the necessary corrections: there may occur situation in which not
only the extremal value (4.53), but its vicinity as well is to be taken into account.3

4.3.2 Diagonalization of the BCS Hamiltonian: The Bogoliubov


Transformation—Bogoliubov-de Gennes Equations

The BCS Hamiltonian can be diagonalized by a canonical Bogoliubov transforma-


tion, which expresses the electron field operators in terms of new creation/annihilation
Fermi operators,

φq,∞↑ , φq,↑∞

(now we use a usual, time-dependent representation; q labels the eigenstates). The


direct transformation is given by
 
φq,↑ = d 3 r u q∗ (r)ψ↑ (r, t) − vq∗ (r)ψ∞† (r, t) ; (4.54)
 

φq,∞ = d 3 r u q (r)ψ∞† (r, t) + vq (r)ψ↑ (r, t) , (4.55)

while the inverse transformation is


 
ψ↑ (r, t) = u q (r)φq,↑ + vq∗ (r)φq,∞

; (4.56)
q
 
ψ∞† (r, t) = u q∗ (r)φq,∞

− vq (r)φq,↑ . (4.57)
q

As you see, the Bogoliubov transformation mixes electron and hole operators with
opposite spins—this is the only way to get rid of nondiagonal pairing terms! The phys-
ical significance of this is that the quasiparticles in the superconductor are rather like
centaurs: “part electrons, part holes.” For obvious reasons they are called bogolons.
The coefficients of the Bogoliubov transformation must satisfy the following
canonical relations,

3 You can see from the structure of the functional integral that the overall sign (or, more generally,
the initial phase) of the complex field  is of no importance. In different books you thus can find
the same equation with opposite signs of . This is a matter of convention.
176 4 Methods of the Many-Body Theory in Superconductivity
 
u q (r)u q∗ (r≡ ) + vq (r≡ )vq∗ (r) = ρ(r − r≡ ); (4.58)
q
 
u q (r)vq∗ (r≡ ) − u q (r≡ )vq∗ (r) = 0; (4.59)
q
 
d 3 r u q (r)u q∗ ≡ (r) + vq (r)vq∗≡ (r) = ρqq≡ ; (4.60)

 
d 3 r u q (r)vq ≡ (r) − u q ≡ (r)vq (r) = 0, (4.61)

in order to comply with Fermi statistics of both old and new creation/annihilation
operators. (To check this would be a really useful exercise, even in the simplest case
of the plane wave basis.)
In new operators the Hamiltonian takes the simple form
 ⎫ ⎬
† †
H B = U0 + E q φq,↑ φq,↑ + φq,∞ φq,∞ . (4.62)
q

The first term here is the ground state energy of the superconductor. The second
one is the quasiparticle term, which describes the elementary excitations above the
ground state.
The excitation energies along with the transformation coefficients are given by
the solution of the following system of Bogoliubov-de Gennes equations:
⎭  2 
1 1
→ − eA/c − μ + V (r) u q (r) + (r)vq (r) = E q u q (r);
2m i
⎭  2 
1 1
→ + eA/c − μ + V (r) vq (r) − ∗ (r)u q (r) = −E q vq (r),
2m i

ˆ θˆc (it is convenient to


or denoting the kinetic energy operator and its conjugate by θ,
measure energies from the Fermi level),
 2
1 1
θˆ = → − eA/c − μ,
2m i
 2
1 1
θˆc = → + eA/c − μ,
2m i

we can write the Bogoliubov-de Gennes equations in matrix form:


⎭    
θˆ + V (r) (r) uq uq
= Eq (4.63)
∗ (r) −θˆc − V (r) vq vq
4.3 Pairing (BCS) Hamiltonian 177

These equations can be easily derived if we write down the Heisenberg equations of
motion for the field operators, i ψ̇ = [ψ, H B ]; that is,
 
i ψ̇↑ (r, t) = θˆ + V (r) ψ↑ (r, t) − (r)ψ∞† (r, t);
  (4.64)
i ψ̇∞† (r, t) = − θˆc + V (r) ψ∞† (r, t) − ∗ (r)ψ↑ (r, t)

or in matrix form,
  ⎭  
∂ ψ↑ (r, t) θˆ + V (r) − (r) ψ↑ (r, t)
i = . (4.65)
∂l ψ∞† (r, t) −∗ (r) − θˆc − V (r) ψ∞† (r, t)

The change of sign before eA/c in the conjugate operator appears in the process
of integration by parts, which we must perform while calculating the commutator
[ψ † , H B ]. This change of sign of the electrical charge should be expected, since ψ †
is the hole annihilation operator.
The ψ-operators are not the eigenvectors of the Hamiltonian and have no def-
inite frequency. We can, though, express them through the eigenoperators of the
Hamiltonian, φ, φ† :

φq,↑ (t) = φq,↑ e−iE q t ; φq,∞


† †
(t) = φq,∞ eiE q t .

Gathering the terms, say, with φq,↑ , we get the Bogoliubov-de Gennes equations.
Here we will derive a useful general relation between the excitation energies, order
parameter, and coherence factors, which follows directly from the Bogoliubov-de
Gennes equations (4.63). Multiplying the first line of (4.63) from the left by u q∗ (r)
and the second by vq∗ (r), adding them, and integrating over the whole space, we find
(with the help of (4.60) and after an integration by parts)

Eq = d 3 r u q∗ (r)(θˆ + V (r))u q (r) − vq (r)(θˆ + V (r))vq∗ (r)

+2R((r)u q∗ (r)vq (r)) . (4.66)

4.3.3 Bogolons

The elementary excitations above the ground state of the superconductor, bogolons,
are created and annihilated by the φ, φ† -operators.
It follows from the relation (4.55) that this quasiparticle is a coherent combination
of an electron-like and hole-like excitations with opposite spins. The coefficients
u q , vq (they, or some their bilinear combinations-depending on the book you read -
are called coherence factors) give the probability amplitudes of these states in the
actual mixture and are defined by the Bogoliubov-de Gennes equations. Bogolons
178 4 Methods of the Many-Body Theory in Superconductivity

should not be mixed up with Cooper pairs, which are not excitations, but form the
ground state of the superconductor. Two bogolons appear when a Cooper pair is tom
apart (by thermal fluctuations, for example); of course, they keep some information
about the superconducting phase coherence, which is contained in the coefficients
u, v, as we will see later.
The electric charge of such a quasiparticle is no longer an integer multiple of e,
but rather equals

Q = |u q |2 · e + |vq |2 · (−e). (4.67)

Of course, the charge conservation law is here violated no more than the mass con-
servation law was by the fact that quasielectrons can have a mass different from m 0 .
The extra charge is taken or supplied by the condensate.
In the spatially uniform case and in the absence of external fields we can use
the momentum representation to simplify equations (4.63). In the absence of the
supercurrent, we can choose the order parameter  to be real:
  
θp − E p  up
= 0. (4.68)
 −θp − E p vp

The solvability condition gives the dispersion law of bogolons (see Fig. 4.9):

Ep = θp2 + 2 . (4.69)

The solutions are thus

1 θp 1 θp
|u p | = √ 1 + ; |vp | = √ 1 − . (4.70)
2 Ep 2 Ep

The self-consistency relation for the gap follows from the extremum condition ∗ =
|g|∼ψ † ψ † ∝ and reads in the general case as follows:
 E q (∗ )
∗ (r, T ) = |g| u q∗ (r)vq (r) tanh . (4.71)
q
2T

In the uniform case this is reduced to the famous BCS equation for the energy gap:

E p (∗ )
d 3 p tanh 2T
1 = |g| . (4.72)
(2ε)3 2 E p (∗ )

Finally, since in equilibrium bogolons satisfy the Fermi distribution, an equilib-


rium average of a single-electron operator O in the superconducting state can be
easily calculated. Using the Bogoliubov transformation, we find
4.3 Pairing (BCS) Hamiltonian 179

Fig. 4.9 Quasiparticle dispersion law: a normal metal (electrons and holes), b superconductor
(electron-like and hole-like bogolons)

  ⎧
∼O∝ = ψλ† Oψλ
λ=↑,∞

= (u q∗ φq↑

+ vq φq∞ )O(u q φq↑ + vq∗ φq∞

)
q

+(u q∗ φq∞

− vq φq↑ )O(u q φq∞ − vq∗ φq↑ †
)
 
= 2 d 3 r u q∗ Ou q n F (E q ) + vq Ovq∗ (1 − n F (E q )) . (4.73)
E q >0

Here we have assumed for simplicity that O is spin independent, but the
generalization is trivial.
For example, the equilibrium current density is given by

2e  ∗ e e 
j= R u v (p + A)u v n F (E v ) + vv (p + A)vv∗ (1 − n F (E v )) . (4.74)
m c c
E>0

4.3.4 Thermodynamic Potential of a Superconductor

Returning to the extremum condition (4.49) that we used earlier to derive the self-
consistency relation for the order parameter, we see that the role of the thermody-
namic potential of the superconductor is played by

1
[, ∗ ] = d 3 r|(r)|2 +  B [, ∗ ], (4.75)
|g|

where the fluctuation corrections are thus neglected. The Bogoliubov functional  B
is easily calculated from the diagonalized form of the BCS Hamiltonian, (4.62):
180 4 Methods of the Many-Body Theory in Superconductivity

1
 B [, ∗ ] = − ln tre−τ H B
τ
2
= U0 − ln(1 + e−τ Eq )
τ q
  
2 τ Eq
= U0 + Eq − ln 2 cosh .
q
τ q 2

In the latter formula it is advantageous to separate from U0 all terms dependent on


quasiparticle energies E q . To this end, we need an explicit expression for U0 . Since
U0 originates from all terms in the BCS Hamiltonian that are not normally ordered
after Bogoliubov transformation (that is, contain φφ† ), we can easily see that
 
U0 = 2 d 3 r vq (r)(θˆ + V (r))vq∗ (r) − R((r)u q∗ (r)vq (r)) .
q

From (4.66) we find



− d 3 r2R((r)u q∗ (r)vq (r))
⎫ ⎬
= −E q + d 3 r u ∗ (r)(θˆ + V (r))u q (r) − vq (r)(θˆ + V (r))v ∗ (r) ,
q q

and therefore
  
U0 = − Eq + d 3 r vq (r)(θˆ + V (r))vq∗ (r) + u q∗ (r)(θˆ + V (r))u q (r) .
q q
(4.76)

Finally, the (infinite) sums of excitation energies in  B exactly cancel, and we obtain
an important formula

1
[, ∗ ] = d 3 r|(r)|2 + d 3 r vq (r)(θˆ + V (r))vq∗ (r)
|g| q
 2  τ Eq

+ u q∗ (r)(θˆ + V (r))u q (r) − ln 2 cosh . (4.77)
τ q 2
4.4 Green’s Functions of a Superconductor: The Nambu-Gor’kov Formalism 181

4.4 Green’s Functions of a Superconductor:


The Nambu-Gor’kov Formalism

4.4.1 Matrix Structure of the Theory

We have already seen how the pairing Hamiltonian can be derived from the one with
two-particle point interactions, and how anomalous averages appear if accept such a
Hamiltonian. Here we will arrive at anomalous averages from a different direction.
Namely, we will develop a special Green’s function technique, where both normal
and anomalous averages appear in a natural way.
We have seen from the Bogoliubov transformation that the superconducting state
somehow mixes electrons and holes. It is then natural to introduce two-component
field operators (Nambu operators)
   
ψ↑ (r)
(r) = ,  † (r) = ψ↑† (r), ψ∞ (r) . (4.78)
ψ∞† (r)

In the momentum representation, Nambu operators are given by


   
ψp↑ (E) †
p (E) = † , p† (E) = ψp↑ (E), ψ−p∞ (−E) . (4.79)
ψ−p∞ (−E)

Using them, we can rewrite the pairing Hamiltonian as



HB = d 3 r † (r) · Ĥ(r) · (r), (4.80)

where the matrix Ĥ is


⎭ 
θˆ + V (r) − (r)
Ĥ(r) = . (4.81)
−∗ (r) − (θˆc + V (r))

Then we quite naturally can introduce a matrix (Gor’kov) Green’s function,

1 ⎧
Ĝjl = T  j (X )l† (X ≡ ) ; (4.82)
i 
⎡ ⎧
† #
≡ ) 1 T ψ (X )ψ (X ≡ )
⎛⎜
1
T ψ ↑ (X )ψ ↑ (X ↑ ∞
Ĝ = ⎣ 1  † ⎧i  ⎧⎝
i
† ≡ † ≡
i T ψ∞ (X )ψ↑ (X ) i T ψ∞ (X )ψ∞ (X )
1
 
G(X, X ≡ ) F(X, X ≡ )
∇ . (4.83)
F + (X, X ≡ ) −G(X ≡ , X )
182 4 Methods of the Many-Body Theory in Superconductivity

We see that the relevant terms arise from the pairings of Nambu operators of the
type ∼ † ∝, while the pairings ∼ †  † ∝, ∼∝ contain terms like ∼ψ↑ ψ↑ ∝ or ∼ψ↑† ψ∞ ∝
(which would correspond to triplet pairing or magnetic ordering respectively) and are
equal to zero in our case. (Of course, we could not rule them out a priori: here we use
our knowledge of the properties of the superconducting state, based on experimental
data, to narrow the field of search.) Then Wick’s theorem for the Nambu operators
looks the same as for the usual ones, and we can at once build the diagram technique.
It is done, of course, along the same lines as before, and we need not repeat all the
calculations.

4.4.2 Elements of the Strong Coupling Theory

We start from the unperturbed Hamiltonian without the pairing potentials:


⎭ 
θˆ 0
H0 = d r ·
3 †
· . (4.84)
0 −θˆc

This is an important point: neither the unperturbed Hamiltonian nor the unperturbed
Green’s function contains anomalous (i.e., off-diagonal) terms (see Table 4.1). Nev-
ertheless we will see that they naturally appear in Nambu-Gor’kov picture after
interactions are taken into account [12].
As before, we use the Pauli matrices as a basis in the space of 2 × 2 matrices.
Denoting them by
       
10 01 0 i 10
ν̂0 = ; ν̂1 = ; ν̂2 = ; ν̂3 = ,
01 10 −i 0 0 −1

we can write the perturbation terms (due to electron–phonon and electron–electron


interactions) as follows:
  ⎫ ⎬
HEPI = g j (p, p≡ )p† · ν̂3 · p≡ bq j + b−q

j ; (4.85)
p−p≡ =q j

HC = ∼p1 p2 |VC |p≡1 p≡2 ∝
p1 +p2 =p≡1 +p≡2
⎫ ⎬⎫ ⎬
† †
× p1 · ν̂3 · p2 p1 ≡ , ·ν̂3 · p2≡ . (4.86)

The Feynman rules are given in Table 4.1.


Now we can write the Dyson equation at once (Fig. 4.10):

−1 −1
Ĝ = Ĝ0 − Σ̂ ∇ E ν̂0 − θp ν̂3 − Σ̂. (4.87)
4.4 Green’s Functions of a Superconductor: The Nambu-Gor’kov Formalism 183

Table 4.1 Feynman rules for Nambu-Gor’kov Green’s function (Momentum space)

iĜ(p, E) Matrix Green’s function

0
iĜ (p, E) = (E τ̂0 − ξp τ̂3 )−1 Unperturbed matrix Green’s function

i D0 (q, ω)δjj Unperturbed phonon propagator

−i VC (q) Coulomb electron -


electron interaction

−iτ̂3 gj (p, p ) Bare electron-phonon vertex

τ̂3 Bare Coulomb vertex


The integration over all intermediate momenta and frequencies is im-
plied, taking into account energy/momentum conservation in every
vertex and the matrix structure of Nambu-Gor’kov Green’s function
and vertices

Now we can present the self energy in the form

Σ̂(p, E) = [1 − Z (p, E)]E ν̂0 + Z (p, E)(p, E)ν̂1 + ρϕ(p)ν̂3 . (4.88)

The function (p, E) here is not the “initial” pairing potential, which we set to zero.
In the absence of the magnetic field, in a stationary state, we can choose the phase
in such a way as to eliminate the ν̂2 -component. Then

Z (p, E)E ν̂0 + Z (p, E)(p, E)ν̂1 + (θp + ρϕp )ν̂3


Ĝ(p, E) = . (4.89)
Z 2 (p, E)E 2 − Z 2 (p, E)2 (p, E) − (θp + ρϕp )2

The off-diagonal contribution to Σ̂ should appear due to exchange terms, but it


is absent in any finite approximation (see Fig. 4.11a):
184 4 Methods of the Many-Body Theory in Superconductivity

Fig. 4.10 Matrix Dyson equation for Nambu-Gor’kov’s Green’s function

Fig. 4.11 a Lowest-order exchange terms in the self energy, b self-consistent approximation for
the exchange self energy


(1) d E ≡ dp≡
Σ̂ex (pE) =i ν̂3 Ĝ0 (p≡ E ≡ )ν̂3
(2ε)4
 ⎞
 ⎟
× |g j (pp≡ )|2 D 0j (p − p≡ , E − E ≡ ) + VC (p − p≡ ) ∓ ν̂0 .
 ⎠
j

In order to obtain this contribution, we must use a self-consistent exchange (Fock’s)


approximation, effectively summing up an infinite subsequence of diagrams
(Fig. 4.11b):

(≈) d E ≡ dp≡
Σ̂ex (pE) = i ν̂3 Ĝ(p≡ E ≡ )ν̂3
(2ε)4
 ⎞
 ⎟
× |g j (pp≡ )|2 D 0j (p − p≡ , E − E ≡ ) + VC (p − p≡ ) , (4.90)
 ⎠
j

because of the identity


ν̂3 ν̂1 ν̂3 = −ν̂1 .

In this way self energy and Green’s function acquire an off-diagonal ν̂1 -term:

Z (p, E)E ν̂0 − Z (p, E)(p, E)ν̂1 + (θp + ρϕp )ν̂3


ν̂3 Ĝ(pE)ν̂3 = . (4.91)
Z 2 (p, E)E 2 − Z 2 (p, E)2 (p, E) − (θp + ρϕp )2

Thus we arrive at a matrix equation:


4.4 Green’s Functions of a Superconductor: The Nambu-Gor’kov Formalism 185

[1 − Z (p, E)]E ν̂0 + Z (p, E)(p, E)ν̂1 + ρϕ(p)ν̂3



d E ≡ dp≡ Z (p≡ , E ≡ )E ≡ ν̂0 − Z (p≡ , E ≡ )(p≡ , E ≡ )ν̂1 + (θp≡ + ρϕp≡ )ν̂3
=i
(2ε)4 Z 2 (p≡ , E ≡ )(E ≡ )2 − Z 2 (p≡ , E ≡ )2 (p≡ , E ≡ ) − (θp≡ + ρϕp≡ )2
 ⎞
 ⎟
× |g j (pp≡ )|2 D 0j (p − p≡ , E − E ≡ ) + VC (p − p≡ ) . (4.92)
 ⎠
j

From this matrix relation the set of two nonlinear integral equations for Z ,  fol-
lows, the so-called Eliashberg equations. They are central to the theory of supercon-
ductors with strong coupling and contain the expression for the intensity of electron–
phonon interaction, traditionally written as
 d 2 p̂  d 2 p̂ ≡ 
SF v p S F v p≡ j |g j (pp≡ )|2 ρ(π − π j (p − p≡ ))
φ (π)F(π) ∇
2
 d 2 p̂ .
SF v p

In the weak interaction limit these equations reduce to the BCS theory. For exam-
ple, the transition temperature is given by the same formula:


1
Tc = 1.14π D exp − . (4.93)
ω − μ∗

Here μ∗ is the Coulomb pseudopotential, which would appear in the BCS theory
if Coulomb repulsion were explicitly taken into account.4 Usually μ∗ √ ω. The
electron–phonon coupling constant, ω, is here defined as
≈ φ2 (π)F(π)
ω=2 dπ . (4.94)
0 π

4.4.3 Gorkov’s Equations for the Green’s Functions

The matrix Green’s function contains only two independent components: normal
and anomalous Green’s functions. The set of equations for these functions (Gor’kov
equations) immediately follows from their definitions and the equation of motion for
the field operators (4.64):
⎫ ⎫ ⎬ ⎬

i ∂t − θˆ + V G(X, X ≡ ) + (r)F + (X, X ≡ ) = ρ(X − X ≡ );
⎫ ⎫ ⎬X ⎬ (4.95)

i ∂t + θˆc + V F + (X, X ≡ ) + ∗ (r)G(X, X ≡ ) = 0.
X

4 See Vonsovsky et al. [12] for more details and references.


186 4 Methods of the Many-Body Theory in Superconductivity

In the stationary homogeneous case these equations take the following form (in the
momentum representation):

(π − θp )G(p, π) + F + (p, π) = 1;
(4.96)
(π + θp )F + (p, π) + ∗ G(p, π) = 0.

At zero temperature
the solution to this set is given by

∗
F + (p, π) = − G(p, π);
π + θp
π + θp
G(p, π) = 2 .
π − (θp2 + ||2 )

The infinitesimal imaginary term in the denominator of G(p, π) is determined by


comparing it with the Källén-Lehmann representation:
2 2
u p v p
G(p, π) = + . (4.97)
π − E p + i0 π + E p − i0

Here u, v are the parameters of the Bogoliubov transformation.


The Gor’kov equations must be completed with the self-consistency relation
between ∗ and F + .
 ⎧
∗ (rt) = |g| ψ↑† (rt)ψ∞† (rt) = −i|g|F + (rt + , rt). (4.98)

It is easy to show that in the uniform case this equation coincides with the BCS
equation for the gap at T = 0.
At finite temperatures
we use the same methods as in Chap. 3. Due to the analyticity of the retarded
Green’s function in the upper half-plane, we just put π → π + i0 and obtain
2 2
u p v p
G (p, π) =
R
+ . (4.99)
π − E p + i0 π + E p + i0

The spectral density (Fig. 4.12) is given by


⎫ ⎬
(p, π) ∇ −2JG R (p, π) = 2ε u 2p ρ(π − E p ) + v 2p ρ(π + E p ) . (4.100)

The causal Green’s function is obtained from the retarded one with the use of the
relation between their real and imaginary parts in equilibrium:
4.4 Green’s Functions of a Superconductor: The Nambu-Gor’kov Formalism 187

Fig. 4.12 Spectral density of the retarded Green’s function in the superconductor. Note that the
quasiparticles (bogolons) have infinite lifetime in spite of interactions

* 2 2 +
u p v p
G(p, π) = P +
π − Ep π + Ep
E p ⎫ 2 2 ⎬
−iε tanh · u p ρ(π − E p ) − v p ρ(π + E p ) . (4.101)
2T
Ep
Since tanh 2T = 1 − 2n F (E p ), this can be represented as
⎫ ⎬
G(p, π)|T =0 = G(p, π)|T =0 + 2εin F (E p ) u 2p ρ(π − E p ) − v 2p ρ(π + E p ) ,
(4.102)
while the anomalous Green’s function is now

F + (p, π)|T =0 = F + (p, π)|T =0


∗ (T ) ⎫ ⎬
− 2εin F (E p ) u 2p ρ(π − E p ) − v 2p ρ(π + E p ) . (4.103)
π + θp

Again, this equation leads to the BCS equation for (T ) at a finite temperature.

4.4.3.1 Matsubara Functions for the Superconductor

All the modifications of Green’s functions techniques discussed earlier can be gener-
alized to Nambu-Gor’kov Green’s functions. For example, a Keldysh Green’s func-
tion becomes a 4 × 4 matrix (because G A,R,K are 2 × 2 Nambu matrices) (see
Rammer and Smith [7]). In equilibrium it is, though, more convenient to use a less
cumbersome Matsubara formalism.
The temperature anomalous Green’s functions are defined by
188 4 Methods of the Many-Body Theory in Superconductivity

F(r1 ν1 ; r2 ν2 ) = ∼Tν ↑ (r1 ν1 )∞ (r2 ν2 )∝;


(4.104)
F(r1 ν1 ; r2 ν2 ) = −∼Tν  ∞ (r1 ν1 ) ↑ (r2 ν2 )∝.

The order parameter is now expressed as

∗ (r) = |g|F(rν + ; rν ). (4.105)

Matsubara operators satisfy the equations of motion [cf. (4.64)]


 

− ∂ν ↑ (r, ν ) = θˆ + V (r) ↑ (r, ν ) − (r) ∞ (r, ν );
  (4.106)

− ∂ν  ∞ (r, ν ) = − θˆc + V (r)  ∞ (r, ν ) − ∗ (r)↑ (r, ν ).

The Gor’kov equations for the temperature Green’s functions are thus
⎫ ⎫ ⎬⎬

− ∂ν − θˆ + V G(rν , r≡ ν ≡ ) + (r)F(rν , r≡ ν ≡ ) = ρ(r − r≡ )ρ(ν − ν ≡ );
⎫ ⎫ r ⎬⎬

− ∂ν + θˆc + V F(rν , r≡ ν ≡ ) + ∗ (r)G(rν , r≡ ν ≡ ) = 0.
r
(4.107)
In the uniform case we Fourier transform this to get

(iπs − θ p )G(pπs ) + F(pπs ) = 1;


(4.108)
(iπs + θ p )F(p, πs ) + ∗ G(pπs ) = 0.

The solution is
iπs + θ p
G(p, πs ) = − ; (4.109)
πs2 + E 2p
∗
F(p, πs ) =
πs2 + E 2p
+
= F (p, iπs ). (4.110)

The equation for the order parameter now follows from



 d3 p
∗ = |g|F(r = 0, ν = 0+ ) ∇ |g|T F(p, πs ).
s=−≈
(2ε)3

Then

|g|T  d3 p
1= . (4.111)
(2ε)3 s πs2 + E 2p
4.4 Green’s Functions of a Superconductor: The Nambu-Gor’kov Formalism 189

This equation immediately yields the BCS equation for (T ). The sum is evaluated
with the help of the formula
 1 a
[(2s + 1)2 ε 2 + a 2 ]−1 = tanh .
s
2a 2

4.4.4 Current-Carrying State of the Superconductor


So far we have not considered the most remarkable feature of the superconducting
state: the supercurrent. We have suggested that it is carried by the superconducting
condenstate, i.e., by Cooper pairs, but all the same we have limited our analysis to
the case of the Cooper pair at rest. Therefore, we would seemingly have to repeat
everything from the beginning, this time allowing for the finite momentum of a
Cooper pair.
Fortunately, instead of following this lengthy route, we can modify the many-body
Gor’kov equations to describe a current-carrying state of the superconductor.
First, let us suppose the condensate moves as a whole with a uniform velocity
(superfluid velocity) vs . Then the Galileo transformation yields the obvious changes:


ap → ap−mvs ; ap† → ap−mv s
;
ψλ (r) → eimvs r ψλ (r); (4.112)
ψλ† (r) → e−imvs r ψλ† (r). (4.113)

Thus the field operators gain the phase factor

exp(imζ(r)),

where (in the absence of the external field)

vs = →ζ(r).

The Gor’kov functions accordingly acquire the phase factors

G(X, X ≡ ) → exp(im(ζ(r) − ζ(r≡ ))G̃(X, X ≡ ); (4.114)


+ ≡ ≡ + ≡
F (X, X ) → exp(−im(ζ(r) + ζ(r )) F̃ (X, X ); (4.115)
∗ (r) → exp(−2imζ(r))|(r)| ∇ exp(−2imζ(r))(r). (4.116)

In the case of spatially uniform superflow, ζ(r) = vs · r(+const), but once we have
related the superfluid velocity to the phase of the order parameter, we no longer
depend on this uniformity assumption and can consider arbitrary ζ(r) and vs (r).
Let us choose the transverse gauge → · A = 0. It simplifies the calculations,
since then the vector potential and the momentum operator commute (see, e.g.,
190 4 Methods of the Many-Body Theory in Superconductivity

Landau and Lifshitz [4]). Indeed, for arbitrary coordinate functions, f (r), g(r), we
find [p̂, f ]g = −i(→ f )g. Therefore, [p̂, A] = −i→ · A = 0 in this gauge.
Substituting (4.115) into Gor’kov’s equations, we then find that they obey the
equations
⎫ ⎬
i∂ − 2m (p̂ + mvs )
1 2 − μ G̃(X, X ≡ ) + (r) F̃ + (X, X ≡ ) = ρ(X − X ≡ );
⎫ ∂t ⎬ (4.117)

i ∂t + 2m (p̂ − mvs )
1 2 + μ F̃ + (X, X ≡ ) + (r)G̃(X, X ≡ ) = 0,

where the order parameter phase and the vector potential of the electromagnetic field
enter through the gauge-invariant combination, superfluid velocity
e
vs (r) = →ζ(r) − A(r); (4.118)
mc
e
→ × vs (r) = − B(r). (4.119)
mc
The supercurrent is a “thermodynamic current”: it flows in equilibrium, it is a
property of the ground state of the system. Therefore, it can be calculated from ther-
modynamics only, where the current density is expressed as a variational derivative
of any of thermodynamic potentials with respect to the vector potential:
   
1 ρE ρF
− j(r) = =
c ρA(r) S,V,N ρA(r) T,V,N
   
ρW ρ
= = (4.120)
ρA(r) S,P,N ρA(r) T,P,N
 
ρ
= .
ρA(r) T,V,μ

Therefore, we find

ρ
j(r) = −c
ρA(r)
i e  #, - ⎛ e2 # ⎛
= →ψλ† (r) ψλ (r) − ψλ† (r)(→ψλ (r)) − A(r) ψλ† (r)ψλ (r) .
2m λ
mc λ
(4.121)

This definition satisfies the charge conservation law:

→ · j(r) = 0. (4.122)

The current can be expressed through the normal Green’s function (compare to
Sect. 2.1.4):
4.4 Green’s Functions of a Superconductor: The Nambu-Gor’kov Formalism 191

ie
j(r) = lim (→r≡ − →r )G̃(r0, r≡ 0+ ) + 2evs (r)G̃(r0, r≡ 0+ )
m r≡ →r
≈
ie
= lim (→ r≡ − →r )T G̃(r, r≡ , πs )
m r≡ →r s=−≈


+2evs (r)T G̃(r, r≡ , πs ), (4.123)
s=−≈

which follows directly from its definition.

4.4.4.1 Gradient Expansion: Local Approximation

Here we will do basically the same thing as in Chap. 3, when we derived the quantum
kinetic equation: the gradient expansion. Again we assume that the field A(r), the
superfluid velocity, and the order parameter are slow functions of the coordinates,
and introduce the Wigner representation:

R = (r + r≡ )/2; ρ = r − r≡ ; (4.124)

f (r, r≡ ) → f (R, q) ∇ d 3 ρe−iqρ f (R + ρ/2, R − ρ/2); (4.125)
i i
p̂ = −i→ → q − →R ; r → R + →q . (4.126)
2 2
Then the Gor’kov’s equations read
.   2 /
i i
iπs − 1/2m q − →R + mvs R − →q + μ G(R, q, πs )
2 2
i
+(R − →q )F(R, q, πs ) = 1;
. 2 /
  2
i i
iπs + 1/2m q − →R − mvs R − →q − μ F(R, q, πs )
2 2
i
+(R − →q )G(R, q, πs ) = 0.
2
In zeroth order in gradients we have thus the set of algebraic equations

iπs − 1/2m(q + mvs (R))2 + μ G(R, q, πs ) + (R)F(R, q, πs ) = 1;

iπs + 1/2m(q − mvs (R))2 − μ F(R, q, πs ) + (R)G(R, q, πs ) = 0,

while for the current we have the expression


192 4 Methods of the Many-Body Theory in Superconductivity

2e  d 3q
j(R) = T (q + mvs (R))G(R, q, πs ). (4.127)
m s
(2ε)3

The solutions to the Gor’kov equations are thus


.
1 1 + θ˜q (R)/E q (R)
G(R, q, πs ) =
2 iπs − q · vs (R) − E q (R)
/
1 − θ˜q (R)/E q (R)
+ ; (4.128)
iπs − q · vs (R) + E q (R)

(R) 1
F(R, q, πs ) =
2E q (R) iπs − q · vs (R) + E q (R)

1
− . (4.129)
iπs − q · vs (R) − E q (R)

The only change brought to these formulae by the supercurrent is that instead of
iπs we have

iπs − qvs ,

and the chemical potential is shifted as well:

mv2s
μ → μ(R) = μ − , (4.130)
2
so that the kinetic energy term becomes

q2 mv2s
θ˜q (R) = − μ(R) = θq (R) + .
2m 2
Taking this into account, the relation between the kinetic energy and excitation
energy is the same as before, locally:

E q (R)2 = θ˜q (R)2 + (R)2 . (4.131)

The shift of iπs (or π, in the formalism of time-dependent Green’s functions)


means that the energy gap in the superconductor is diminished by the maximum
value of qvs , i.e., by qvs : the elementary excitation can now be created with energy
of only −qvs . But the question of whether and how the order parameter is changed
must be addressed separately.

4.4.4.2 Supercurrent: Explicit Expression

Using the formula for summation over odd (fermionic) Matsubara frequencies,
4.4 Green’s Functions of a Superconductor: The Nambu-Gor’kov Formalism 193


 1
T = n F (φ),
s=−≈
iπs − φ

we find that


T G̃(q, πs )
s=−≈
.* + * + /
1 θ̃q θ̃q
= 1+ n F (qvs + E q ) + 1 − n F (qvs − E q ) , (4.132)
2 Eq Eq

so that the supercurrent equals



e d 3q
j(R) = (q + mvs (R))
m (2ε)3
.* + * + /
θ˜q θ˜q
× 1+ n F (qvs + E q ) + 1 − n F (qvs − E q ) (4.133)
Eq Eq

The total density of electrons is evidently given by




d 3q
n(R) ∇ 2 T G̃(q, πs )
(2ε)3 s=−≈
.* +
d 3q θ˜q
= 1+ n F (qvs + E q )
(2ε)3 Eq
* + /
θ˜q
+ 1− n F (qvs − E q ) . (4.134)
Eq

Therefore, the term proportional to the superfluid velocity in (4.133) is simply

j2 = nevs .

For the supercurrent, we should rather expect that j = n s evs , where n s ≤ n,


because not all electrons can enter the condensate at finite temperatures and thus
participate in the supercurrent. This formula would serve as a definition of n s , the
density of“superconducting electrons”, whatever this may mean, if we could show
that the other term in (4.133) equals

j1 = −n n evs ∇ −jn ,

in order to eliminate the contribution of the “normal” electrons (n n is then the density
of the normal component, n n + n s = n).
Noting that
194 4 Methods of the Many-Body Theory in Superconductivity

n F (−x) = 1 − n F (x),

and
* +
θ˜q
d qq 1 ±
3
=0
Eq

(the latter because the energies are direction independent), we can write this term as
follows:
.* +
e d 3q θ˜q
j1 (R) = q 1+ n F (E q + qvs )
m (2ε)3 Eq
* + /
θ˜q
− 1− n F (E q − qvs )
Eq

2e d 3q
≈− qn F (E q − qvs ). (4.135)
m (2ε)3

But n F (E q − qvs ) is simply the distribution function of Fermi particles (in the event,
bogolons) moving with velocity vs . Thus the term j1 indeed is minus the current of
the elementary excitations moving with the velocity of the condensate:

j1 = −jn ∇ −n n evs .

Since the current carried by elementary excitations is a dissipative (i.e., normal)


current, this gives us the effective density of the “normal component” in the super-
conductor. Then we see that the net current in equilibrium really can be written as
the current of the superconducting component,

j(R) = js = nevs − n n evs ∇ n s evs (R), (4.136)

where

n n + n s = n. (4.137)

As we mentioned, this should be understood as a definition of the density of the


superconducting component.

4.4.5 Destruction of Superconductivity by Current

We noted that while the energy gap in the current-carrying state linearly drops with
the supercurrent, the behavior of the order parameter should be determined from the
4.4 Green’s Functions of a Superconductor: The Nambu-Gor’kov Formalism 195

self-consistency relation

 d 3q
(R) = |g|T F(R, q, πs ). (4.138)
s
(2ε)3

Substituting there the expression for the anomalous Green’s function (4.129), we
obtain the integral equation

d 3q 1
1 = |g| (1 − n F (E q + qvs ) − n F (E q − qvs )). (4.139)
(2ε)3 2E q

After integration over the angles we get


 
πD 1 + exp − E(θ)+ p F vs
dθ  T T
1 = |g|N (0) 1+ ln  . (4.140)
0 E(θ) p F vs 1 + exp − E(θ)− p F vs
T

At zero temperature, this unpleasant equation becomes tractable. First, we observe


that the second term in the brackets is zero, provided

p F vs < 0 , (4.141)

where 0 is the order parameter in the state with j = 0. Indeed, then both expo-
nents are zero. Therefore, the equation for the order parameter stays the same as
in the absence of the supercurrent. Equation (4.141) is the celebrated Landau crite-
rion, telling at what vs the energy gap (not the order parameter!) first goes to zero.
But, unlike the case of Bose superfluid, the superconducting state in three dimen-
sions is not immediately destroyed when vs reaches vs,Landau = 0 / p F : there still
exists gapless superconductivity up to somewhat higher vs,c , which can be seen from
(4.140) (see [8]). ⎨
Let the order parameter be  < 0 < p F vs , and introduce π = ( p F vs )2 − 2 .
The r.h.s. of (4.140) is then
* +
π dθ πD dθ
|g|N (0) + ⎨
0 p F vs
θ 2 + 2 π
 
πD
 2 dθ
= |g|N (0)  1 − + ⎨ ,
( p F vs )2 π θ 2 + 2

while the l.h.s. can be identically rewritten as


196 4 Methods of the Many-Body Theory in Superconductivity

Fig. 4.13 Order parameter (a) and energy gap (b) dependence on the superfluid velocity

πD dθ
|g|N (0)  .
0 θ + 20
2

The integrals can be taken explicitly, and we finally obtain


⎡ ⎜
2
1 + 1 −
2 ⎢ πD ⎥
2

1− + ln ⎢
⎣  ⎥

( p F vs ) 2
1 + 1 − ( p v )2
2
F s
⎭ 
p F vs 20
= ln + ln 1 + 1− .
0 π 2D

The superconductivity is thus destroyed by the current when  = 0; that is,


⎭ 
p F vs,c 20
1 = ln + ln 1 + 1 − 2 .
0 πD

Therefore, the critical velocity is


⎭ 
2
0 1−ln 1+ 1− 20
πD
vs,c = e ≈ vs,Landau e1−ln 2 , (4.142)
pF

since (20 /π 2D ) √ 1.
We have established the existence of the region of vs between vs,Landau and
vs,c ≈ 1.359vs,Landau , where the superconductivity exists in spite of the possibility of
the creation of elementary excitations with arbitrarily small energies (see Fig. 4.13).
4.4 Green’s Functions of a Superconductor: The Nambu-Gor’kov Formalism 197

Fig. 4.14 Andreev reflection of the quasiparticle from the N-S interface. At the point where E =
(x) the quasiparticle changes the branch (particle to hole, e.g.), thus changing the velocity to its
opposite with the minimal possible momentum change

(It is straightforward to check that the order parameter in the end point behaves as

(vs,c − vs ) ∓ vs,c − vs .) The reason why such a regime of gapless supercon-
ductivity can exist is that unlike the Bose superfluid, the elementary excitations in
the superconductor are fermions. Since the destruction of the supercurrent can occur
only after all of its momentum has been transferred to the quasiparticles, there must
be a certain number of them generated. For fermions, this demands a finite phase
space, with zero energy gap. But in three dimensions, after the Landau criterion is
met, the gap is zero only on an infinitesimally small portion of the Fermi sphere.
This portion, and the quasiparticle generation rate, grows as the superfluid velocity
grows, while the order parameter continuously drops until it becomes zero at vs,c .
The dimensionality is here crucial: if, e.g., formally consider the one-dimensional
case, it is evident from (4.139), that the order parameter will drop to zero immediately
at vs,Landau (Bagwell [14]). Small wonder: in this case there is no room for growth
of the phase space volume available to quasiparticle generation.

4.5 Andreev Reflection

Andreev reflection is a remarkable phenomenon that takes place at the boundary


between a superconductor and a normal conductor.
Let us ask a naïve question we have never asked before: How can the electric
current flow from a normal-metal lead to a superconductor? In the normal state, at
arbitrarily small applied voltage, current is carried by quasiparticles near the Fermi
surface, but how can the quasiparticles cross an NS interface? There is a gap in the
density of states of the superconductor. Electron-and hole-like bogolons can exist
only above this gap: E > , so that subgap quasielectrons (and quasiholes) simply
cannot penetrate the superconductor. Nevertheless, since current can flow through
such an interface at subgap voltages, then real electrons can pass the boundary.
Quasielectrons of the normal metal disappear in the process. What appears instead?
198 4 Methods of the Many-Body Theory in Superconductivity

To begin with, consider a planar NS interface. Let (x) be a slowly varying


function of the normal coordinate. In our system, the local value of the order para-
meter makes the only difference between S-and N -regions, and the quasiparticle
dispersion law changes spatially along the x-axis, as shown in Fig. 4.14. We have
(x = −≈) = 0 (normal state), and (x = ≈) =  (bulk superconductor).
The quasiparticles satisfy the Bogoliubov-de Gennes equations (4.143), where
now θˆc = θˆ = − 2m
2 2
→ − μ, and we can choose real :
⎭    
θˆ (r) u(r) u(r)
=E . (4.143)
(r) −θˆ v(r) v(r)

We seek a solution in the form

u q (r) = eik F n·r ξ(r);


vq (r) = eik F n·r ζ(r),

where ξ(r) and ζ(r) vary slowly in comparison to the exponential factors. Neglecting
second derivatives, we thus obtain

(iv F (n · →) + E)ξ(r) + (r)ζ(r) = 0;


(iv F (n · →) − E)ζ(r) + (r)ξ(r) = 0. (4.144)

When x → ≈, propagating solutions exist only if E > , since they depend on


coordinates as

eq(E)n·r ; v F q = ± E 2 − 2 . (4.145)

So much we already guessed: the quasiparticle with subgap energy inside the gap
cannot enter the superconductor, where there is no available place for it. Nor can it
change its momentum significantly, since the effective potential varies too slowly.
The only possible reflection process is thus to change the direction of its group
velocity, changing the branch of the excitation spectrum (see Fig. 4.14).
When x → −≈, we find that

ξ(r) = Aeikn·r ; (4.146)


ζ(r) = Be−ikn·r ; (4.147)
E
k= . (4.148)
v F

A and B are integration constants. Equation (4.146) describes an electron-like, and


(4.147) a hole-like, quasiparticle. If n x > 0, a quasiparticle comes from −≈ to the
boundary, and a quasihole is reflected, and vice versa.
4.5 Andreev Reflection 199

Fig. 4.15 Andreev reflection: the physical picture

Note that unlike the usual reflection, Andreev reflection changes all the veloc-
ity components of the incident quasiparticle (simultaneously transforming it into a
quasihole), and vice versa.
Due to this feature, Andreev reflection provides a mechanism for current flow
through the NS interface. A quasielectron with velocity v is transformed into a qua-
sihole with velocity −v which carries the same current in the same direction. In
terms of real electrons this means that an electron above the Fermi surface in the
normal conductor forms a Cooper pair with another electron (below the Fermi sur-
face) and leaves for the superconducting condensate (Fig. 4.15), thus transferring the
dissipative quasiparticle current on the N -side into the condensate supercurrent on
the S-side. The Andreev-reflected hole is the hole left by the second electron in the
Fermi sea. In the conjugated process, a Cooper pair pounces at an unsuspecting hole
wandering too close to the interface; one electron fills the hole, and the other moves
away into the normal region.
To investigate this picture in some detail, let us now consider the case of steplike
pairing potential (Fig. 4.16):

(x, y, z) = eiα θ(x). (4.149)

This is, of course, an approximation (though a very often used one). Since pairing
potential must be determined self-consistently from (4.71), in reality it sags a little
near to the SN interface (at distances about θ0 ). This is sometimes called the proximity
effect (later we will discuss another meaning of this term); anyway, it is irrelevant
for our problem, since it can bring only small corrections.
Since the system is homogeneous in the y, z-directions, we can separate a sin-
gle mode with dependence on y, z given by eik y y+ik z z . The Bogoliubov-de Gennes
equations for such a mode will have the same form as (4.143), with
200 4 Methods of the Many-Body Theory in Superconductivity

Fig. 4.16 Steplike pairing potential. Both Andreev and normal reflection are possible. (We will see
that in the gap, normal reflection is still suppressed)

2 d 2
θˆ → − ,
2m dx 2
k 2y + k z2
μ → μk y ,kz = μ − ,
2m
and

k F → k F,k y ,kz = k 2F − (k 2y + k z2 ).

(We will keep this in mind and omit trivial y, z-dependence and k y,z subscripts to
simplify notation.)
In the normal part of the system there exist electrons and holes, described by the
(non-normalized) vectors
 
1 ±ik + x
ψe± (x) = e ; (4.150)
0
 
0 ±ik − X
ψh± (x) = e , (4.151)
1

where as is easy to see from the Bogoliubov-de Gennes equations, the wave vectors
satisfy the dispersion law
E
k± (E) = k F 1 ± . (4.152)
μ

In the superconductor there exist electron-like and hole-like bogolons,


 
ueiα/2
e± (x) = e±iq+ x ; (4.153)
ve−iα/2
 
veiα/2
h± (x) = e±iq− X . (4.154)
ue−iα/2

Here the dispersion law and expressions for u, v also follow from the Bogoliubov-de
Gennes equations (plus the condition u 2 + v 2 = 1) after some exercise in elementary
4.5 Andreev Reflection 201

algebra, and look as follows:


1+
1 − 2 /E 2
u(E) = ; (4.155)
2

1 − 1 − 2 /E 2
v(E) = ; (4.156)
2

E 2 − 2
q ± (E) = k F 1 ± . (4.157)
μ

√ subgap excitations (E < ), u, v, and q acquire imaginary parts (we will take
For
−1 = +i). Physically possible subgap solutions must exponentially decay into
the bulk of the superconductor (in our case, at x → ≈, which allows only e+ and
h− ).
Now we can solve the stationary scattering problem for the Bogoliubov-de Gennes
equations. Let an electron impinge on the boundary from the left. Then the wave
function at x < 0 will be

ψe+ + ree ψe− + reh ψh+ ,

while at x > 0 it is

tee e+ + teh h− .

(We keep physical subgap states.)


Of the transmission/reflection amplitudes, we are especially interested in reh , the
amplitude of Andreev electron-hole reflection. Matching the wave functions and their
derivatives at x = 0 will, after some tedious but straightforward calculations, give
the following results:

1 q+ − k− iα/2
teh = e reh ; (4.158)
u q+ + k −
1 q− + k− iα/2
tee = e reh ; (4.159)
v q+ + k −
 
u q+ + k − v q+ − k −
ree = + eiα reh − 1, (4.160)
v q+ + q− u q+ + q−

while

2e−iα
reh = ⎫ ⎬ ⎫ ⎬. (4.161)
u q− −k− −k−
v q+ +q− 1 + qk++ + uv qq++ +q −
1− q−
k+
202 4 Methods of the Many-Body Theory in Superconductivity

This somewhat cumbersome expression greatly simplifies in the so-called Andreev


approximation, that is, in the lowest nonvanishing order in max(.E)
μ : quite reasonable
if we recall the orders of magnitude of these energies. Thus, we take k± ≈ q± ≈ k F ,
while
u 
k + − q− ≈ q+ − k − ≈ k F .
v 2μ

Therefore, the Andreev and normal reflection amplitudes are respectively



−iα E− E 2 − 2
reh ≈ e (4.162)

and

E− E 2 − 2
ree ≈ . (4.163)

Note that the Andreev electron-hole amplitude acquires the phase −α. We will
not repeat the calculations, but the hole-electron amplitude acquires phase +α:

2eiα
rhe = ⎫ ⎬  
u q− +k− q+ v q+ −k−
v q+ +q− 1 + k+ + u q+ +q− 1 q−
k+

iα E − E − 
2 2
≈e . (4.164)

We will see the importance of this in the next section.
In the Andreev approximation the Andreev reflection amplitude (4.162, 4.164)
can be written as
. E
≤iα e−iarccos  , E ≤ ;
reh(he) = e × E (4.165)
e−arccosh  , E > .

For subgap particles, we thus have total Andreev reflection |reh(he) (E)|2 = 1, in
complete agreement with our qualitative reasoning, and the sharp change in pairing
potential notwithstanding. The latter leads to small corrections o(/μ), leaving
place for finite, but small, normal reflection (evidently, for E < , |reh(he) (E)|2 +
|ree(hh) (E)|2 = 1), which can be usually neglected. It becomes significant if there
is a (normal) potential barrier at the NS interface, or if the Fermi vectors in normal
and superconducting regions differ (see Blonder et al. [16] for a detailed discussion).
Instead, we will consider the Andreev levels and Josephson effect in SNS junctions,
which is far more exciting.
4.5 Andreev Reflection 203

4.5.1 The Proximity Effect in a Normal Metal in Contact


with a Superconductor

We have mentioned the proximity effect: a suppression of order parameter in the


superconductor close to a boundary with normal conductor. Probably more often
proximity effect is called the effect of a superconductor inducing superconducting
correlations between electrons and holes in the normal conductor. More formally,
the anomalous Green’s function F(x, x ≡ ) is nonzero in the normal region, though it
decays as we move away from the boundary (in our case, as x, x ≡ → −≈). This does
not mean that there always appears finite pairing potential, alias order parameter, in
the normal part of the system: since  ∓ gF, it can be identically zero for a normal
conductor with g = 0 (the case we are considering). Such a conductor never becomes
superconducting by itself, but externally induced superconducting correlations in it
may survive.
To see this, we will not solve the Gorkov equations explicitly; it is enough to look
at the solutions of our scattering problem.
We have seen that inside the gap, the wave function of an Andreev reflected
electron (hole) is (in Andreev approximation)
* +
eiEx/v F x
eh (x; E) = ψe+ + reh ψh+ ≈ E
−iα−iarccos  iEx
− v eik F X ; (4.166)
e F
* E iEx
+
iα−iarccos  − v
ψe− + rhe ψh− e e−ik F x .
F
he (x; E) = ≈ (4.167)
eiEx/v F

Here we have expanded k± (E) ≈ k F ± E/v F .


The anomalous Green’s function at the point x can be expressed through products
like, e.g.,

E
[eh (x; E)]2 ([eh (x; E)]1 )∗ = ei E x/v F · elα+t arccos + t E x/v F ∓ e2i E x/v F

[cf. Eq. (3.163)]. The only coordinate dependence enters this expression via the
phase factor, 2Ex/(v F ), which has the sense of the relative phase shift of electron and
hole components of the wave function eh (x; E). If E = 0, then these components
keep constant relative phase α + arccos  E
(and thus superconducting coherence) all
the way to x = −≈, where no pairing interactions exist (g = 0)! At finite energy,
this coherence decays when the relative phase becomes of order unity, that is, at a
distance from the boundary ≈ l E = 2E vF
. At temperature T , for a thermal electron-
hole pair correlations decay at a distance ≈ l T = kBvTF . The latter length is usually
called the normal metal coherence length (in the clean case: we completely neglected
impurity scattering).
The strange coherence of electrons and holes in the absence of any pairing inter-
action can be understood if we return to the picture of Andreev reflection. A hole into
204 4 Methods of the Many-Body Theory in Superconductivity

Fig. 4.17 Andreev levels in an SNS junction

which the incident electron has been transformed has exactly the opposite velocity
(if E = 0), and will thus retrace exactly the same path all the way to minus infinity!
Little wonder that the correlations are conserved. (They will be conserved even in
the presence of nonmagnetic scatterers, but this is not a book on superconductivity
theory.) At E = 0 the paths of electron and hole will diverge, and l E is exactly
the measure of when they diverge irreparably: the proximity effect is essentially a
kinematic phenomenon.

4.5.2 Andreev Levels and Josephson Effect in a Clean SNS


Junction

Since the subgap particle in the normal region is reflected by the pairing potential
as a hole, and vice versa, in an SNS junction, when a normal region, of width L, is
sandwiched between two superconductors, there should appear quantized Andreev
levels [22]. This is illustrated in Fig. 4.17. In the normal pail of the system the solution
of the Bogoliubov-de Gennes equations will be

ψ(x; E) = aψe+ (x, E) + bψh+ (x, E) + cψe− (x, E) + dψh− (x, E).

The coefficients a, b, c, d are found easily, since

cψe− (0, E) = ree (E)aψe+ (0, E) + rhe (E)dψh− (0, E);


bψh+ (0, E) = reh (E)aψe+ (0, E) + rhh (E)dψh− (0, E);
aψe+ (−L , E) = ree (E)cψe− (−L , E) + rhe (E)bψh+ (−L , E);
dψh− (−L , E) = reh (E)cψe− (−L , E) + rhh (E)bψh+ (−L , E).

In the gap, the above system yields the discrete set of allowed energy levels of the
bound states, the Andreev levels, which must satisfy

E n±
−2arccos ±(α1 −α2 )+(k+ (E n± )−k− (E n± ))L = 2εn; n = 0, ±1, . . . . (4.168)

4.5 Andreev Reflection 205

Here there are two sets of levels, with ±(α1 − α2 ), depending on the direction
of motion of the electron. Both depend explicitly on the superconducting phase
difference between the superconductors, due to the phase acquired by the wave
function during Andreev reflection. Note that the phase itself is irrelevant, as it
should be: it is the difference that counts. You may have noticed that (4.168) could
be written immediately from the quasiclassical quantization condition in the pairing
potential well,
6
p(E)dq = 2εn,

if we take into account scattering phases ( ±α j + arccos E


) at the interfaces.
Unlike bound states in a usual rectangular well, Andreev levels carry electric
current. This fact, together with phase sensitivity of their positions, is responsible
for the Josephson effect in SNS junctions, which is a very remarkable phenomenon.
The transcendental equation (4.168) can be easily solved in two limiting cases:
L = 0 and L → ≈. (Actually, in the second case we must have L ↔ θ0 , but still
L √ l T ).
In the first case,

E n± 1
arccos = (±α − 2εn), (4.169)
 2
where α ∇ α1 − α2 ; therefore,

α1 − α2 α
E(α) =  cos ∇  cos . (4.170)
2 2
The contact contains a single, twice-degenerate level. (If we now recall that we
considered a single k y , k z mode, the degeneracy is 2× N ≥-fold, where N≥ ≈ A/ω2F
is the number of transverse modes in the contact of area A, in the spirit of the Landauer
formula.)
In the second case, for E √ , we can expand k+ (E) − k− (E) ≈ k F E/μ and
set arccos(E/) = ε/2, to yield

v F
E n± = [ε(2n + 1) ± α]. (4.171)
2L
In this case there are many Andreev levels in each mode. (Exactly how many we
cannot tell, because the above formula doesn’t work when E ± is close to the top of
the well.)
The knowledge of Andreev levels will allow us to calculate the Josephson current
in the SNS junction. We will do this, using different approaches in the cases L = 0
and L → ≈.
The most popular version of the Josephson effect is the one in SIS (superconductor-
insulator-superconductor) tunneling junctions. Josephson’s great achievement was
206 4 Methods of the Many-Body Theory in Superconductivity

the discovery of coherent tunneling of Cooper pairs-and thus supercurrent flow -


through the insulating barrier between superconductors, and its dependence on their
superconducting phase difference. As you know, the Josephson current is then given
by
I J (α) = Ic sin α. (4.172)

(We will not discuss it here: the topic was covered in great detail, e.g., in Barone
and Paterno [2]). But the Josephson effect is not limited to SIS junctions and sin α
dependence. It appears wherever supercurrent can flow due to coherent transport of
Cooper pairs through a weak link-a layer of normal conductor, for example, where
superconducting correlations are not supported dynamically. The specific mechanism
of the effect is, though, different, which leads-as we shall see shortly -to drastic
deviations from (4.172).

4.5.3 Josephson Current in a Short Ballistic Junction:


Quantization of Critical Current in Quantum Point Contact

First we must decide whether the limit L = 0 for an SNS contact corresponds to
a physically sensible situation. It would seem that this limit corresponds simply to
a bulk superconductor, and a stationary jump of the superconducting phase is as
impossible to realize as a finite voltage drop between two banks of such a contact in
the normal state. But we have already encountered the latter situation. in the point
contact, which is a weak link of exactly the sort we need for the Josephson effect (it
is often called an ScS junction, c for constriction). Therefore, the limit L = 0 can be
considered as an approximation to the case of a superconducting point contact.
The Josephson current can be calculated if we know a thermodynamic potential
of the system, Ψ (for example, G, F, , . . .), as a function of the phase difference
across the junction, α (this is a general formula, valid for any sort of Josephson
junction):

2e dΨ
I = . (4.173)
 d(α)

We have already noted that the supercurrent is a “thermodynamic current,” and its
density can be obtained by taking a variational derivative of any of the thermodynamic
potentials over the vector potential, A (4.120). Since the latter always enters through
the gauge invariant supercurrent velocity,
e
mvs (r) = m→ζ(r) − A(r),
c

2mζ ∇ α being the phase of the order parameter, then


4.5 Andreev Reflection 207

ρ 2e ρ
=− .
ρA(r) c ρ→α(r)

Therefore, the supercurrent density is given by

ρΨ
j(r) = 2e .
ρ→α(r)

The variational derivative in this expression is the coefficient that appears when we
write Ψ as

ρΨ
Ψ = d 3r · →α(r) + · · · .
ρ→α(r)

(We have dropped the term independent on →α(r).)


On the other hand, a thermodynamic potential of the Josephson junction can be
expanded in powers of α:

∂Ψ
Ψ= α + · · · .
∂α

In this system the phase changes abruptly by α, so that

→α(r) = αρ(x)ex .

Writing down the identity (A being the area of the contact)



1 1
Ψ= dAΨ = dydz dxρ(x)Ψ
A A

1 ∂Ψ
= dxdydz ex · αρ(x)ex + · · · ,
A ∂α

we notice that
ρΨ 1 ∂Ψ
= ex . (4.174)
ρ→α(r) A ∂α

Then the total Josephson current is indeed given by (4.173).


We use here the grand potential , because then we can immediately use the
results of Sect. 4.3.4, namely (4.77)
208 4 Methods of the Many-Body Theory in Superconductivity

1
[, ∗ ] = d 3 r|(r)|2 + d 3 r{vq (r)(θˆ + V (r))vq∗ (r)
|g| q

+ u q∗ (r)(θˆ +
V (r))u q (r)}
  
2 τ Eq
− ln 2 cosh .
τ q 2

In this formula, the first two lines are independent of α. Therefore, the Josephson
current can be written as follows:
2e  E p dE p
I =− tanh .
 p 2k B T dα

2e E dN c (E)
− · 2k B T dE ln(2 cosh )· .
 0 2k B T dα

The first term contains the contributions from the discrete Andreev levels in the gap,
while the second term accounts for the excited states in the continuum, with energies
exceeding ||, Nc (E) being the density of states in the continuum. The latter—since
we consider the case L = 0—is evidently the same as in a bulk superconductor, and
thus is α independent. Therefore only the discrete Andreev levels contribute to the
Josephson current:

2e  E p dE p
I =− tanh . (4.175)
 p 2k B T dα

Substituting here (4.170), we see that [1]


2N≥ e 0 α 0 cos α/2
I = · sin · tanh . (4.176)
 2 2 2k B T

At zero temperature this reduces to

ε0 G N α
I = sin , (4.177)
e 2

where G N = N≥ e2 /(ε) is the normal (Sharvin) conductance of the contact. The


latter formula was derived by Kulik and Omelyanchouk [23], for a classical super-
conducting point contact. Note the nonsinusoidal and moreover, discontinuous phase
dependence of the Josephson current (Fig. 4.18), as well as its proportionality to the
normal conductance of the system. We already know that in the case of normal
quantum point contact G N is quantized in the units of e2 /(ε). The result (4.176)
4.5 Andreev Reflection 209

Fig. 4.18 Phase dependence


of the Josephson current in a
superconducting point contact

tells us that in the superconducting quantum point contact the critical current is also
quantized, in the (nonuniversal) units of e0 τ.5
An important property of the short ScS contact, which made quantization possible,
was the N≥ -fold degeneracy of Andreev levels (4.170), which ensured that each
transverse mode in the contact gives the same contribution to the current-as it did in
the normal case. Unfortunately, for an ScS junction this holds only in the zero length
limit, as we will see in the next subsection.

4.5.4 Josephson Current in a Long SNS Junction

Let us now consider along ballistic SNS junction. Here the normal layer width is still
much smaller than both normal metal coherence length and elastic scattering length,
but exceeds the superconducting coherence length: l T , le ↔ L ↔ θ0 .
We begin with an insightful picture due to Bardeen and Johnson [15]. Suppose the
temperature is zero and there is a supercurrent through the junction, with superfluid
velocity vs in the x-direction. In a long channel, we can neglect the boundary effects
and simply relate vs to the phase difference:

α
mv s = . (4.178)
L
We have seen earlier, Eq. (4.136), that the supercurrent can be presented as the current
due to macroscopic flow of the condensate, minus the current of the elementary
excitations, carried with velocity vs :

5 The effect was recently observed on experiment [25].


210 4 Methods of the Many-Body Theory in Superconductivity

Fig. 4.19 Current-phase


relation in a long clean SNS
junction

 ek q,x
I = nev s − In,x = nev s − n F (E q± − kq,x vs ). (4.179)
mL
kq

The factor of 1/L appears because the u, v-components of the wave function must
be normalized to the width of the normal layer. The quasiparticle energies in this
expression are measured in the reference frame of the moving condensate. Therefore,
they correspond to vs = 0 and are simply the energies of the Andreev levels at
α = 0.
At zero temperature, the Fermi distribution function is identically zero for all “–”
levels (with kq,x < 0, when the electron moves to the left). Let us now increase α
(and vs ). At first, the contribution from the “+”-levels will be zero as well, and the
current grows linearly with the phase difference. When for the lowest Andreev level

E q+ − kq,x vs = 0,

then n F = 21 . It can be shown that this happens at α = ε, and the contribution


of this level exactly cancels the first term in (4.179), zeroing the supercurrent. For
an infinitesimally larger vs , n F = 1, and the current changes sign. Then the current
will grow linearly again, until at α = 3ε the same happens for the second lowest
level, ad infinitum. Of course, the picture holds for negative α, vs , only this time
“–”-levels are involved.
Thus the characteristic sawtooth dependence arises at T = 0 (Fig. 4.19). It was
first obtained by Ishii [19] with the use of a more sophisticated method.
The remarkable feature of this result is that the 2ε-periodic I (α) dependence
is again nonsinusoidal and discontinuous at odd multiples of ε, as in the case of a
short ScS junction.
We will neither reproduce here the details of the Bardeen-Johnson or Ishii cal-
culations, nor apply Eq. (4.173). Instead we derive I (α) at arbitrary temperature
by direct calculation of the density of Andreev states and the supercurrent. We will,
4.5 Andreev Reflection 211

though, use the insight that the low-lying Andreev states dominate the supercurrent
in a long SNS junction.
We use the basic formula (4.73) to calculate the current in the normal part of the
junction, and for the time being consider a single transverse mode, with fixed k y , k z .
Taking normalization into account, we see that the current is given by

e
I = dE(ν+ (E) − ν− (E))[n F (E)k+ (E) − (1 − n F (E))k− (E)]. (4.180)
Lm 0

Here in the brackets k± (E) is the momentum of the electron (hole) excitation. The
parentheses contain the densities of “±” -states, which take into account both bound
Andreev levels and the continuum. At E < || they we simply sets of ρ-functions
at energies E q± .
Since we expect that the main contribution in the current is from the lowest
Andreev levels, we substitute k F (that is, k F,x ) for of k± (E). Then a simple manipula-
tion with Fermi distribution functions allows us to extend the integration to (−≈, ≈):

ev F,x ≈
I = dE(ν+ (E) − ν− (E))[2n F (E) − 1]
L −≈

ev F,x τE
=− dE(ν+ (E) − ν− (E)) tanh , (4.181)
L −≈ 2

τ = 1/k B T .
Now we find the density of the excited states. Using the Weierstrass formula, we
write


ν± (E) = ρ(E − E k± )
−≈
≈ ≈
1  1 1  1
∇− ⇔ → − ⇔ , (4.182)
ε −≈ E − E k± + i0 ε −≈ E − E k± + i 
ν

where on the right-hand side we immediately recognize the retarded Green’s function
in Källén-Lehmann representation.
Here we introduced a finite lifetime, ν , due, e.g., to weak nonmagnetic impurity
scattering. In the limit ν → ≈ it reduces to the infinitesimal i0 term. It is convenient
to parameterize by

L L L
ν= ; ϕ= ∇ √ 1,
ϕv F,x v F,x T le

where le = v F,x ν is the transport scattering length in a given mode.


After substituting the low-energy approximation for Andreev levels, the densities
of states become
212 4 Methods of the Many-Body Theory in Superconductivity

≈   −2
ϕL LE 1 1
ν ± (E) = − (q + )ε ± α + iϕ .

εv F,x q=−≈ v F,x 2 2

The sum is easily taken using the Poisson summation formula, which transforms the
functional series to the series of Fourier transforms:

 ≈ ≈

f (n) = dxf (x) ρ(x − n)
n=−≈ −≈ n=−≈
≈ ≈ ≈

= dxf (x) e 2εipx
∇ f˜( p). (4.183)
−≈ p=−≈ p=−≈

It is very convenient, e.g. if the latter series converges faster.


The integration over x is executed using the methods of complex analysis, and
we obtain

≈
L 2i p( vLE − ε±α
2 )
ν± (E) = e−2| p|ϕ e Fx ;
εv F,x p=−≈


−2iL −2| p|ϕ+2ip vLE
ν+ (E) − ν− (E) = (−1) p e F,x sin pα. (4.184)
εv F,x
p=−≈, p =0

Substituting the last expression in (4.181) and taking the tabular integral
≈ 2ipLE
τE 2εi
dEe v F,x tanh =
−≈ 2 τ sinh 2εLp
τv F,x

and summing up the contributions from all transverse modes, we finally obtain the
expression for the Josephson current in a long clean SNS junction:

 ev F,x 2  L
−2 p l (k L sin pα
I (α) = (−1) p+1 e C F) . (4.185)
L ε l T (k F ) sinh pL
kF p=1 lT (k F )

v
Here l T (k F ) = 2εkF,x
BT
, and le (k F ) = v F,x ν .
This is a remarkable expression. At zero temperature, and in the ballistic limit,
all L/le , L/l T are zero. You can check that the series

2 sin pα
(−1) p+1
ε p
p=1

converges to the 2ε-periodic sawtooth of unit amplitude. Thus we indeed recover


the result of Ishii. Finite temperature and elastic scattering smoothes it over, and
4.5 Andreev Reflection 213

eventually, as is clear from (4.185), only the lowest harmonic survives, leaving us
the “standard” I ∓ sin α-dependence.
The total Josephson current is given by the sum of contributions of different
modes, k F . To calculate it in a classical SNS junction we should integrate over all
k F (with positive projection on the x-axis), which will give the critical current as
proportional to R N . In the case of a quantum junction, when a few allowed modes are
present, we see that the critical current is not quantized even at zero temperature: the
amplitudes ev F,x /L depend on the direction of k F ; that is, opening an extra mode
will bring a mode-dependent increase to the critical Josephson current.

4.5.4.1 SND Junction: Case of d-Wave Pairing Symmetry

A curious situation arises if the pairing symmetry in one bank of a long SNS junction
is different from that in the other. We have already discussed superconducting transi-
tion in case of arbitrary orbital symmetry of pairing. The only complication was that
the order parameter ∼ak a−k ∝ ∇ k̂ becomes dependent on the relative momentum
direction on the Fermi surface, k̂.
It is now firmly established that in superconducting cuprates the order parameter
has d-wave symmetry [3, 10, 11, 26]. This means that under an appropriate choice
of coordinate axes, k̂ ∓ k x2 − k 2y . Therefore, it is negative for some directions. If
we insist on writing the order parameter as ||eiα , we are now compelled to add ε
to the superconducting phase. As a result, the Andreev reflected electron may now
(depending on its direction of propagation) acquire an extra phase of ε.
What happens in a long SNS junction between conventional (s-wave) and d-
wave superconductors (a so-called SND junction)? There will now be two kinds of
Andreev levels (Fig. 4.20): zero-and ε-levels, so called because of additional phase
acquired when reflected from the d-wave bank. The Josephson current is thus a sum
of two contributions, each given by (4.185), with momentum summation limited
to appropriate states [28]. The magnitudes of these terms of course depend on the
orientation of the d-wave crystal. In the most symmetric case, when each zero-level
has its ε-counterpart, we would find that

 ≈
ev F,x 2 
zerolevels
−2 p le (kL )
I (α) = (−1) p+1 e F
L ε
kF p=1
L sin pα + sin p(α + ε)
× . (4.186)
l T (k F ) sinh pL l T (k F )

This is a ε-periodic sawtooth of α (Fig. 4.21): the Josephson effect period is thus
halved.6

6This phenomenon can occur in different types of Josephson junctions between superconductors
with different pairing symmetry; see Zagoskin [28] and references therein.
214 4 Methods of the Many-Body Theory in Superconductivity

Fig. 4.20 SND junction: a zero- and b ε-Andreev levels

Fig. 4.21 Current-phase


dependence in a symmetric
SND junction

From the relation I (α) = (2e/)dE/dα, we see that the equilibrium is


achieved not at α = 0, but at α = ±ε/2.7 The Josephson current I (±ε/2)
is indeed zero, because contributions from zero-and ε-levels cancel. But the current
component parallel to the SN boundary does not vanish. On the contrary, the contri-
butions to this component from zero- and ε-levels add up, thus creating spontaneous
currents in the normal layer [18].

7Generally, depending on relative contributions of zero- and ε-levels to the current, the equilibrium
phase can take any value from −ε to ε.
4.5 Andreev Reflection 215

4.5.5 Transport in Superconducting Quantum Point Contact:


The Keldysh Formalism Approach

We have already mentioned the possibility of generalizing the Keldysh formalism to


Nambu matrices. Since this makes Green’s functions 4 × 4 matrices (each Keldysh
component being a Nambu matrix), this is warranted only if other approaches fail
(in an essentially nonequilibrium situation) or if the problem allows some significant
simplifications. Examples can be found in Rammer and Smith [7].
Here we present some recent results by Cuevas et al. [17] for transport in super-
conducting point contact, obtained using a combination of Keldysh techniques and
the method of tunneling Hamiltonian. We have discussed this approach in application
to normal contact in Sect. 3.7.
The Hamiltonian is again presented as a sum (3.200)

H = H L + H R + HT , (4.187)

where the tunneling term is convenient to write as [cf. (3.203), (3.225)]



HT = (Teiα(t)/2 cλ† dλ + T∗ e−iα(t)/2 dλ† cλ ). (4.188)
λ

Here the phase

2eV
α(t) = α0 + t (4.189)

describes the time dependence of field operators due to voltage applied to one of the
banks [as in (3.207)]. We write it explicitly because in the superconducting junction,
(4.189) gives the observable nonstationary (ac) Josephson effect:. oscillations of
superconducting phase difference across a junction (and therefore Josephson current)
with Josephson frequency

2eV
πJ = , (4.190)

which is determined by the applied voltage. (Physically, this is simply the quantum
frequency associated with a Cooper pair gaining or losing energy 2eV in transfer
across a finite voltage drop V.8 )
The current through the contact is derived in the same way as (3.208), and the
analogue to (3.210) is given by the (11)-component of a Nambu matrix:

2e
I (t) = [TF+− ∗ +−
1 (t, t) − T F2 (t, t)]11 . (4.191)


8 In an SND junction of the previous section (4.186) the frequency of the nonstationary Josephson
effect doubles to 2π J = 4eV /, because the period of I (α) was halved.
216 4 Methods of the Many-Body Theory in Superconductivity

Here Nambu matrices


 
T0
T= , (4.192)
0 −T∗
* +
+− ≡ ∼d↑† (t ≡ )c ↑ (t)∝ ∼d∞ (t ≡ )c ↑ (t)∝
F1 (t , t) = , (4.193)
∼d↑† (t ≡ )c∞† (t)∝ ∼d∞† (t ≡ )c∞† (t)∝

and in F2+− (t ≡ , t) the c’s and d’s are transposed. Expression (4.191) contains both
superconducting (Josephson) current and normal (quasiparticle) current; the latter
can flow if there is a finite voltage drop across the contact.
The “hybrid” Green’s functions F+− are then calculated as a (+−)-component of
a matrix series over the Keldysh-Nambu matrix T̂ and unperturbed Green’s functions
in the banks (assuming them identical):
 
ε N (0) −π ± iζ 
g R,A
(π) = ⎨ ; (4.194)
2 − (π ± iζ)2  −π ± iζ

 
1
g+− (π) = 2εin F (π) − ⇔g R (π) . (4.195)
ε

Here ζ is the small energy dissipation rate in the banks due to inelastic scattering.
The calculations are of necessity much more involved than those of Sect. 3.7.
Therefore, we present here only the results.
In the linear response regime, V → 0, the quasiparticle current through the
quantum contact is characterized by phase-dependent conductance (τ = 1/k B T )
⎭ 2
2e2 ετ φ sin α τ E a (α)
G(α) = ⎨ sech . (4.196)
h 16ζ 1 − φ sin (α/2)
2 2

The Josephson current, in its turn, is given by the well-known expression

e2 φ sin α τ|E a (α)|


I (α) = tanh . (4.197)
2 E a (α) 2

(2ε N (0)T) 2
In the above formulae, φ = 1+(ε N (0)T)2
is nothing but the effective Landauer trans-
parency of the barrier, TLandauer , that we calculated in (3.222). Energies

E a (α) = ± 1 − φ sin2 (α/2) (4.198)

are energies of Andreev levels in the contact. If φ = 1, we are back to our previous
results (4.170) and (4.176) for an ideal short Josephson junction in a one-mode limit.
4.6 Tunneling of Single Electrons and Cooper Pairs 217

Fig. 4.22 Single-grain tunnel junction

4.6 Tunneling of Single Electrons and Cooper Pairs Through


a Small Metallic Dot: Charge Quantization Effects

In this, the final section of the book, we are going to make good on our promise
to discuss the case when a single electron can make a difference in a many-body
system.9
This can occur if the picture of independent quasiparticles is no longer valid,
and we need to take into account their correlations. Roughly, this happens when
the criterion of the mean-field approximation applicability, Eq. (1.10), fails. This
criterion,
v F
↔ 1, (4.199)
e2
shows explicitly that the faster the particles move, the better is the MFA description
-something we have discussed already. On the other hand, for correlations to become
important, the particles must be slowed down. One way to do this is to create “bumps”
in their way: tunneling barriers, for example, or point contacts, or whatever weak
links we can invent. Consider, e.g., the system shown in Fig. (4.22), a single-grain
tunnel junction. Here electrons can travel between two massive banks through a
small grain, separated from it by tunneling barriers. (The role of gate electrode will
become clear in a moment.) Though v F is not affected by the presence of the barriers,
the characteristic time electrons spend on the grain (and effective travel velocity) is,
which allows correlations to develop. Alternatively, we could rewrite the left-hand
side of (4.199) as ev2F/l/l , where l is some characteristic length. This is the ratio of
interlevel spacing in the grain due to spatial quantization (remember the similar result
for Andreev levels?) to a characteristic Coulomb energy. If the ratio is large, then
correlations that are due to Coulomb interactions are irrelevant, and MFA will work.
In the opposite limit we need a more refined approach.

9 In this section I follow an unpublished lecture by R. Shekhter.


218 4 Methods of the Many-Body Theory in Superconductivity

Fig. 4.23 Coulomb blockade


of a single-electron tunneling

4.6.1 Coulomb Blockade of Single-Electron Tunneling

First let us consider a normal system. Denote the capacitance of our grain by C
(for an isolated spherical grain of radius ρ in the medium with dielectric constant
κ, C = κρ). Then the electrostatic energy due to the extra electron on the grain is

e2
E= . (4.200)
2C

If ρ ∼ 103 ∈ A , then E ≈ 10 K. Therefore, at a low enough temperature the Coulomb


energy quantization due to charge discreteness is observable and leads to a Coulomb
blockade of single-electron tunneling10 : in the situation of Fig. 4.22, we should expect
that no current can flow between the banks until the voltage is high enough to offset
the electrostatic energy due to charging of the grain (Fig. 4.23).
But we did not yet take into account quantum mechanical properties of the elec-
trons. The energy of the system with n extra electrons on the grain is expressed
through C and mutual capacitances C1 , C2 , Cg as

n 2 e2 C1 C2 Cg e2
E(n) = + en( V1 + V2 + Vg ) ≈ const + [n − n ∗ (V1 , V2 , Vg )]2 .
2C C C C 2C
(4.201)

Here the parameter n ∗ is given by

1
n ∗ = − [C1 V1 + C2 V2 + Cg Vg ]. (4.202)
e

10 See, e.g., Tinkham [9], Chap. 7, Zagoskin [13], 2.4 and references therein.
4.6 Tunneling of Single Electrons and Cooper Pairs 219

Note that though there is no charge transfer between the gate electrode and the rest
of the system, the gate voltage Vg critically affects the electrostatic energy and the
transport through the grain.
Indeed, at a certain combination of parameters two the lowest-lying states with
n and n ± 1 electrons on the dot become degenerate (Fig. 4.24). This degeneracy
is equivalent to lifting the Coulomb blockade, deblocking the current. It appears as
periodic conductance dependence on Vg (single-electron oscillations) with period

e
ρVg ≈ . (4.203)
C
Note that in a normal system, degeneracy between states with n and n ± 2 extra
electrons on the grain is impossible: the system will always drop to a lower-lying
state with n ± 1 electrons (Fig. 4.24c).
Observation of the Coulomb blockade and single-electron oscillations is possible,
if we can resolve the levels differing by the charging energy E(n + 1) − E(n) ≈
e2
2C = Uc . On the other hand, the level width is


ρE ≈ , (4.204)
ν
where ν is the characteristic lifetime of the extra electron on the grain. If the con-
ductance of the system is G, this time will be of order the discharge time of an
RC-contour,

ν = C/G. (4.205)

Therefore, we have the observability criterion

e2 G
> , (4.206)
2C C
leading to the condition on the conductance of our system

e2 2e2
G< ≈ . (4.207)
2 h

The latter is the same quantum conductance unit (≈(13k)−1 in more conventional
units) that we have met before. As we remarked, the transport through the grain must
be hindered enough by the barriers in order to make correlation effects important.
The quantum resistance unit provides a quantitative measure of this hindrance.
220 4 Methods of the Many-Body Theory in Superconductivity

Fig. 4.24 a Electrostatic energy quantization. b Single-electron degeneracy of the ground state:
deblocking of the single-electron tunneling. c No double-electron degeneracy of the ground state

Fig. 4.25 Splitting of the Coulomb parabola due to parity effect: a 2e-degeneracy is possible if
 > Uc = e2 /2C. b No 2e-degeneracy otherwise

4.6.2 Superconducting Grain: When One Electron Is Too Many

If the grain becomes superconducting, there appear interesting new possibilities. As


we know, in the ground state of a superconductor all electrons are bound in Cooper
pairs (and therefore the ground state can contain only an even number of electrons).
Any odd electron will thus occupy an excited state, as a bogolon, and its minimum
energy, measured from the ground state energy, will be .
This is the parity effect in superconductivity. Of course, in a bulk superconductor
it is of no importance, but not so in our small system, where charging effects enter
the game.
Let us write the number of electrons on the grain as
4.6 Tunneling of Single Electrons and Cooper Pairs 221

n = 2n C + n q , (4.208)

where n C is the number of Cooper pairs, and n q = 0 or 1 is the number of unpaired


elecrons. The energy of the system is thus (Fig. 4.25)

E = Uc (2n C + n q − n ∗ )2 + n q . (4.209)

The situation depends significantly on whether  > Uc or  < Uc . In the former


case the 2e-degeneracy of the ground state becomes possible, and it occurs at odd
values of n ∗ (Vg ). The period of corresponding oscillations is now

2e
ρVg(SC) ≈ . (4.210)
C
Let us now make the entire system superconducting. Then the grain will play the
role of a weak link between the banks 1 and 2, allowing for a superconducting phase
difference and Josephson current to appear. The latter is carried by condensate, i.e.,
Cooper pairs. Therefore, one should expect that 2e-degeneracy of the ground state
will lead to enhancement of the critical current, which thus oscillates as a function of
Vg . Moreover, since at  < Uc 2e-degeneracy becomes impossible, the effect must
disappear, e.g., in a magnetic field strong enough to suppress  below the charging
energy.
A quantitative consideration must take into account that we are dealing with a
system of bosons—Cooper pairs—which can be described by a couple of comple-
mentary operators, number, and phase (like in Sect. 1.4.3). It is convenient to work
in the basis of phase eigenstates, where [see (1.132)]

1 ∂
n̂ C = .
i ∂α

Here α is the superconducting phase of the grain. The eigenstates of the operator
n̂ C are evidently
∼α|n∝ ∓ einα , (4.211)

since (1/i)∂einα /∂α = neinα . The phases of the massive superconducting banks,
α1,2 , are external parameters, and we can choose them to be

α0
α1,2 = ± .
2
The Josephson current through the grain is thus

2e ∂ E 0 (α0 )
I = , (4.212)
 ∂α0
222 4 Methods of the Many-Body Theory in Superconductivity

where E 0 is the lowest eigenstate of the Hamiltonian

∂ nq n∗ 2
H = 4Uc (−i + − ) + n q  − E J (cos(α1 − α) + cos(α2 − α)). (4.213)
∂α 2 2

In the above formula, the first term is the charging energy expressed through the
Cooper pair number operator; the second term is the odd electron contribution. The
third term is the so-called Josephson coupling energy between the banks and the
grain (we assume that the system is symmetric). By itself, a term like −E J cos(α)
would yield a usual Josephson current (2eE J /) sin(α) between the bank and the
grain [see Eq. (4.173)].
The Hamiltonian (4.213) is convenient to rewrite as


H = Uc (−i + n q − n ∗ )2 + n q  − 2 Ẽ J (α0 ) cos(α), (4.214)
∂α

where Ẽ J (α0 ) = E J cos(α0 /2) is the only parameter dependent on the supercon-
ducting phase difference α0 between the banks.
The coupling term mixes states with n and n ± 1 Cooper pairs on the grain:

≡ ≡
∼n| cos α|n ∝ ∓ dαe−inα cos αein α ∓ ρn,n ≡ ±1 . (4.215)

We can thus write the Hamiltonian in the basis of two states, e.g., |n∝ and |n + 1∝:
 
E(2n) − Ẽ J (α0 )
H= . (4.216)
− Ẽ J (α0 ) E(2n + 2)

Here E(2n), E(2n + 2) are corresponding eigenvalues of the charging part of the
Hamiltonian. The eigenvalues of matrix (4.216) are easily found, with the ground
state energy

1 
E 0 (α0 ) = (E(2n) + E(2n + 2)) − (E(2n) − E(2n + 2))2 + 4 Ẽ 2J (α0 )
2
1
= (E(2n) + E(2n + 2)) (4.217)
2
α0 
− (E(2n) − E(2n + 2))2 + 4E 2J cos2 ( ) .
2

The Josephson current is thus

2e (1/4)E J sin α0
I (α0 ) =  . (4.218)
h ( E(2n)−E(2n+2) )2 + cos2 ( α0 )
2E J 2
4.6 Tunneling of Single Electrons and Cooper Pairs 223

It depends on the gate voltage Vg through E(2n) − E(2n + 2), and it is clear that
near the values Vg = Vg,n where these two energies are degenerate, it peaks. Near
these values of Vg ,

2e (1/4)E J sin α0
I (Vg , α0 ) ≈  . (4.219)
 [ e(Vg −Vg,n ) ]2 + cos2 ( α0 )
EJ 2

We obtain a periodic in Vg enhancement of the Josephson current through a super-


conducting grain, with period 2e/C.
The effect exists, of course, only while 2e-degeneracy is possible. Therefore when
 becomes lower than the charging energy, the critical current sharply drops. It retains
dependence on Vg , but its period becomes e/C, which corresponds to e-degeneracy,
as in the normal case. This behavior was predicted by Matveev et al. [24], and it was
confirmed experimentally a year later (Joyez et al. [20]).

4.7 Problems

• Problem 1
Show that the following equation for the anomalous Green’s function at finite
temperature leads to the BCS equation for (T ):

F + (p, π)|T =0


∗ (T )
= F + (p, π)|T =0 − · 2εin F (E p )(|u p |2 ρ(π − E p ) − |v p |2 ρ(π + E p )).
π + θp

Use the self-consistency relation between  and F + and the expressions for F +
at zero from Sect. 4.4.3.
• Problem 2
Write the analytical expressions for the following Nambu diagram both in the
matrix form and in components.

• Problem 3
Prove that the expression for the superconducting current
224 4 Methods of the Many-Body Theory in Superconductivity

ie 
j(r) = ∼(→ψλ† (r))ψλ (r) − ψλ† (r)(→ψλ (r))∝
2m λ
e2 
− A(r) ∼ψλ† (r)ψλ (r)∝,
mc λ

satisfies the charge conservation law,

→ · j(r) = 0.

Use the Bogoliubov transformation, then Bogoliubov-de Gennes equations, and


finally, the equation for the order parameter (self-consistency relation).

References

Books and Reviews

1. Beenakker, C.W.J., van Houten, H.: Phys. Rev. Lett. 66, 3056 (1991)
2. Barone, A., Paternó, G.: Physics and Applications of the Josephson Effect. Wiley, New York
(1982)
3. Basov, D.N., Tinusk, T.: Electrodynamics of high-Tc superconductors. Rev. Mod. Phys. 77,
721 (2005) (Section VIII)
4. Landau, L.D., Lifshitz, E.M.: Quantum Mechanics, Non-relativistic Theory (Landau and Lif-
shitz Course of Theoretical Physics, v. III.). Pergamon Press, Oxford (1989)
5. Lifshitz, E.M., Pitaevskii, L.P.: Statistical Physics pt.II (Landau and Lifshitz Course of the-
oretical physics, v. IX.). Pergamon Press, Oxford (1980) (Ch. 5. Theory of Superconducting
Fermi Gas)
6. Maiti, S., Chubukov, A.V.: Superconductivity from repulsive interaction, . In: Proceedings
of the XVII Training Course in the Physics of Strongly Correlated Systems, Vietri sul Mare
(Salerno), Italy (2013)
7. Rammer, J., Smith, H.: Quantum field-theoretical methods in transport theory of metals. Rev.
Mod. Phys. 58, 323 (1986)
8. Svidzinskii A.V.: Spatially Inhomogeneous Problems in the Theory of Superconductivity.
Nauka, Moscow (1982) (in Russian) (An excellent book, using both functional integration
and Green’s functions formalism)
9. Tinkham M.: Introduction to Superconductivity, 2nd edn. McGraw Hill, New York (1996) (A
classical treatise on superconductivity. This second edition contains a special chapter (Ch.7)
devoted to effects in small (mesoscopic) Josephson junctions)
10. Tsuei, C.C., Kirtley, J.R.: Pairing symmetry in cuprate superconductors. Rev. Mod. Phys. 72,
969 (2000)
11. Van Harlingen, D.J.: Phase-sensitive tests of the symmetry of the pairing state in the high-
temperature superconductors-evidence for dx 2 −y 2 symmetry. Rev. Mod. Phys. 67, 515 (1995)
12. Vonsovsky, S.V., Izyumov Yu A., Kurmaev, E.Z.: Superconductivity of Transition Metals,
Their Alloys and Compounds. Springer, Berlin (1982) (Chapter 2 of this comprehensive book
provides both formulation of Nambu-Gor’kov formalism and detailed derivation and analysis
of Eliashberg equations. In Chapter 3 the formalism is generalized on case of magnetic metal)
13. Zagoskin, A.M.: Quantum Engineering: Theory and Design of Quantum Coherent Structures.
Cambridge University Press, Cambridge (2011). (Chapters 2 and 4 contain a review of theory
and experiment on superconducting quantum bits (qubits) and qubit arrays)
References 225

Articles

14. Bagwell, P.F.: Phys. Rev. B 49, 6481 (1994)


15. Bardeen, J., Johnson, J.L.: Phys. Rev. B 5, 72 (1972)
16. Blonder, G.E., Tinkham, M., Klapwijk, T.M.: Phys. Rev. B 25, 4515 (1982)
17. Cuevas, J.C., Martín-Rodero, A., Levy Yeyati A.: Phys. Rev. B 54, 7366 (1996)
18. Huck, A., van Otterlo, A., Sigrist, M.: Phys. Rev. B 56, 14163 (1997)
19. Ishii, G.: Prog. Theor. Phys. 44, 1525 (1970)
20. Joyez, P., Lafarge, P., Filipe, A., Esteve, D., Devoret, M.H.: Phys. Rev. Lett. 72, 2458 (1994)
21. Kohn, W., Luttinger, J.M.: Phys. Rev. Lett. 15, 524 (1965)
22. Kulik, I.O.: Sov. Phys. JETP 30, 944 (1970)
23. Kulik, I.O., Omelyanchuk, A.N.: Fiz. Nizk. Temp. 3, 945; 4, 296 (Sov. J. Low Temp. Phys. 3,
459; 4, 142) (1977)
24. Matveev, K.A., Gisselfält, M.. Glazman, L.I., Jonson, M., Shekhter, R.I.: Phys. Rev. Lett. 70,
2940 (1993)
25. Takayanagi, H., Akazaki, T., Nitta, J.: Surf. Sci. 361–362, 298 (1996)
26. Tsuei, C.C., et al.: Science 271, 329 (1996)
27. Yang, C.N.: Rev. Mod. Phys. 34, 694 (1962)
28. Zagoskin, A.M.: J. Phys.: Condens. Matter 9, L419 (1997)
Chapter 5
Many-Body Theory in One Dimension

Stay on the Path. Never step off!


Ray Bradbury. “A Sound of Thunder”

Abstract Orthogonality catastrophe and its treatment in a one-dimensional model.


Anderson orthogonality exponent. Tomonaga-Luttinger model for fermions in one
dimension. Bosonization: describing one-dimensional fermions in terms of bosons
and vice versa. Interacting one-dimensional fermions: Tomonaga-Luttinger liquid.
Spin-charge separation. Elements of conformal field theory. Conformal field theory
approach to the orthogonality catastrophe in one dimension. General relation between
the orthogonality exponent and the scattering phase.

5.1 Orthogonality Catastrophe and Related Effects

5.1.1 Dynamical Charge Screening by a System of Fermions

We have seen, that the existence of the Fermi surface gives rise to subtle and important
effects. One example is the Cooper pairing, where the instability of the normal ground
state in the presence of an arbitrarily small electron–electron attraction near the Fermi
surface was due to the effectively two-dimensional character of the problem. Another
is provided by the charge screening, where instead of an exponential Thomas–Fermi
screened potential one obtains Friedel oscillations with a much slower, power-law
potential drop. Mathematically, the latter followed from branch cuts—as opposed
to simple poles—in the complex frequency plane of the response function of the
electron systems (in this case, the polarization operator). It is natural to expect that
such non-analyticity may also produce a non-exponential time dependence in the
response of a system of fermions to a perturbation. This is actually the case, e.g., if
a point charge is suddenly introduced into a sea of electrons.

A. Zagoskin, Quantum Theory of Many-Body Systems, 227


Graduate Texts in Physics, DOI: 10.1007/978-3-319-07049-0_5,
© Springer International Publishing Switzerland 2014
228 5 Many-Body Theory in One Dimension

εF εF

Eh Eh +
Fig. 5.1 Schematics of X-ray spectroscopy of metals. A core hole potential is instantaneously
created or eliminated with, respectively, the absorption or emission of an X-ray photon

Such a situation arises in the X-ray spectroscopy of metals (see Fig. 5.1). An elec-
tron in a core level of an ion in the metal absorbs an X-ray photon and is (practically
instantaneously) ejected on top of the conduction band, leaving behind produces a
positively charged core hole in the middle of a sea of conduction electrons, which
rush to screen it. The filling in of the hole by a conduction electron, with the emission
of a photon, will later eliminate this charge. This process can be described (see, e.g.,
[3], Sect. 8.3.B) by the Hamiltonian
 
H = E h b† b + πk ck† ck + b† b Vkq ck† cq . (5.1)
k kq

Here b† , b are the core hole creation/annihilation operators, ck† , ck ditto for the band
electrons, E h is the core hole energy, πk the band dispersion law, and Vkq the matrix
elements of the core hole scattering potential. We consider here spinless fermions,
for the sake of simplicity.
At zero temperature the X-ray emission and absorption spectra obviously have the
coinciding threshold, ρ0 : ρab √ ρ0 √ ρem , where ρ0 = π F − E h (with the energies
measured from the bottom of the conduction band). The absorption and emission
spectra near the threshold have a power-law shape. For example, the absorption
intensity
A(ρ) → (ρ − ρ0 )ε . (5.2)

The Fermi-edge singularity (FES) exponent, ε, was calculated in several different


ways ([3], Sect. 8.3.C) and turned out to be
 2
λ(π F )
ε=1− , (5.3)
α

where λ(π f ) is the phase shift of an electron at the Fermi surface, scattered by the
core-hole potential.
In order to understand this, we start from the result of [11], who related the X-ray
absorption and emission rates to the Green’s function of the core hole,
5.1 Orthogonality Catastrophe and Related Effects 229

G h (t) = −i∇T b(t)b† (0)≡ = −iδ(t)∇0 |e  H0 t be−  (H1 +ρ0 )t b† |0 ≡


i i

= −iδ(t)e−iρ0 t ∇0 |e  H0 t e−  H1 t |0 ≡.


i i
(5.4)

Here |0 ≡ is the ground state of the system with no core hole; obviously, ∇0 |bb† |0 ≡
= 1; and we made use of the hole being created and annihilated instantaneously, so
that the Hamiltonian (5.1) is either

H0 = πk ck† ck , (5.5)
k

or  
ρ0 + H1 = ρ0 + πk ck† ck + Vkq ck† cq . (5.6)
k kq

Now, as usual, we insert in (5.4) the closure relation,



˜ m ≡∇
| ˜ m | = I, (5.7)
m
 ⎡
˜ m≡
where | is the full set of states of the electron system with the core
m=0,1,...
hole potential. The result is

∇0 |e  H0 t | ˜ m |e−  H1 t |0 ≡
˜ m ≡∇
i i
G h (t) = −iδ(t)e−iρ0 t
m
 ⎣ ⎣2
i ⎣ ˜ ⎣
= −iδ(t)e−iρ0 t e−  ( Ẽ m −E 0 )t ⎣∇ m |0 ≡⎣ . (5.8)
m

If the core hole potential produces a bound state, there are two possibilities: it will be
filled by one of the conduction electrons, or will remain empty (Fig. 5.2). The above
expression is easily modified to take both cases into account:
 ⎣ ⎣2
i ⎣ ˜s ⎣
G h (t) = −iδ(t)e−iρ0 t e−  ( Ẽ m −E 0 )t ⎣∇
s
m | 0 ≡⎣ . (5.9)
m s=e,f

Here the index s labels empty or filled bound state of the core hole potential. We will
see what difference it makes.
The overlap ∇ ˜ s |0 ≡ between the ground state wave functions of the N -electron
0
systems with and without the core hole potential tends to zero as the number of elec-
trons grows. We have quoted [14] on a similar situation in Sect. 2.2 when discussing
an approximate calculation of an N -body wave function in the limit N ≈ ∓: even
a small error in a one-particle wave function leads to the approximate N -particle
function being orthogonal to the exact one. This situation is called the orthogonality
catastrophe.
230 5 Many-Body Theory in One Dimension

Fig. 5.2 X-ray spectroscopy,


when the core hole potential
has a bound state εF εF

εB εB
Eh Eh
+ +

The wave function of N spinless fermions can be written as a Slater determinant


(1.105), |≡ = √1 det [(ωk (rm )], where ωk (rm ) is the kth one-particle wave function
N!
taken at the position of the mth particle. The overlap ∇ ˜ s |0 ≡ is thus
0
   ⎤ ⎦
1
˜ s0 |0 ≡ =
∇ dr1 dr2 · · · dr N det (ω̃s∼ (rm det (ωq (rm )
)
N! k
⎤ ⎦
∝ det ∇ω̃s∼
k |ωq ,
≡ (5.10)

that is, a determinant composed of the single-particle overlaps. Anderson [13] demon-
strated that actually
∇˜ s0 |0 ≡ → N −x , (5.11)

where the Anderson orthogonality exponent


 2
1 λ(π F )
x= (5.12)
2 α

also appears in the expression (5.3) for the FES exponent.

5.1.2 One-Dimensional Tight-Binding Model

Calculating overlaps of N -particle wave functions in (5.9) and (5.11) requires certain
approximations. For example, one can assume the band and the core-hole potential
to be spherically symmetric and consider only s-wave scattering. This effectively
reduces the problem to one dimension, with spinless (for simplicity) fermions con-
fined to a ray, r > 0, and a scattering potential placed at the origin. Let us go one
step farther and consider a 1D chain of finite length (l − 1), with nearest-neighbour
hopping, free boundary conditions and a potential at the first site, which represents
the core hole charge and can be switched on and off (Fig. 5.3); [12]:
5.1 Orthogonality Catastrophe and Related Effects 231

Fig. 5.3 1D tight-binding


chain
...
l-1
...
-V

l−2
 
H0 = −T ∂i† ∂i+1 + ∂i+1

∂i ; (5.13)
i=1
H1 = H0 − V ∂1† ∂1 . (5.14)

Here ∂i† , ∂i creates/annihilates an electron on the ith site, and the negative signs at
the tunneling amplitude T and the “hole potential” V are chosen for convenience.
The one-particle eigenstates of H0 and H1 are produced by acting with the creation
operators on the vacuum state, |0≡, and can be written as


l−1 
l−1
|k≡ = k j ∂ †j |0≡ ∝C sin [k( j − l)] ∂ †j |0≡. (5.15)
j=1 j=1

Here C is the normalization constant. The free boundary condition at the last ((l −
1)th) site is satisfied: the wave function is zero at the j = l. Since sin [k( j + 1 − l)]+
sin [k( j − 1 − l)] = 2 sin [k( j − l)] cos k, the Schrödinger equation H|k≡ = π(k)|k≡
is also automatically satisfied everywhere, except the first site, for any k, if only

π(k) = −2T cos k (5.16)

(the standard tight-binding dispersion law). The allowed values of k are determined
from the contribution to the Schrödinger equation from site j = 1:

−Tk2 − V k1 = −2T cos k k1 .

Substituting here k j → sin k( j − l), after an exercise in trigonometry we find the


implicit expression for the spectrum:

sin k(l − 1) T
= . (5.17)
sin kl V

Without the core hole, V = 0, this yields sin kl = 0 and the obvious set of (l − 1)
band states with kn = αn/l; n = 1, 2, . . . (l − 1). Small enough attractive potential
(V > 0) does not change this situation qualitatively. Nevertheless at some V a bound
state can be formed (Fig. 5.4). This happens when Eq. (5.17) acquires an imaginary
root, k = iθ, i.e.,
232 5 Many-Body Theory in One Dimension

2
V
εF εF

...

...
...
k

εB

Fig. 5.4 Band spectrum and the bound state in the tight-binding model

sinh θ(l − 1) T
= . (5.18)
sinh θl V

Then the one-particle wave function amplitude is  B → sinh θ(l − j). The wave
function should decay away from the origin. Therefore θ > 0, and there is only one
bound state in our model. Its energy

π B = −2T cosh θ. (5.19)

In the limit of an infinitely long chain we can introduce the scattering phase shift
for the band states via

k j ∗ sin [k j + λ(k)] , j 1. (5.20)

Comparing this to (5.15), we see, that

λ(k) + αn = −kl, l ≈ ∓. (5.21)

Therefore
sin[λ(k) + k] T
= , (5.22)
sin λ(k) V

and  
sin k
λ(k) = arctan . (5.23)
(T/V ) − cos k

In the same limit for the bound state we find

T T2 + V 2
e−θ = ; πB = − . (5.24)
V V
5.1 Orthogonality Catastrophe and Related Effects 233

The advantage of our tight-binding model is that it allows a simple expression for
the overlap (5.10). We will use the trigonometric identities


l−1  
1 sin k(l − 1) cos kl
sin k( j − l) =
2
(l − 1) − ,
2 sin k
j=1


l−1  
1 sinh θ(l − 1) cosh θl
sinh2 θ( j − l) = − (l − 1) − ,
2 sinh θ
j=1


l−1
1 sin k(l − 1) sin ql − sin q(l − 1) sin kl
sin k( j − l) sin q( j − 1) = .
2 cos q − cos k
j=1

From the two first equations the normalizations of one-particle states are easily found.
If substitute in the last one the wave vectors k and k̃, which satisfy Eq. (5.17) with
the potential U and Ũ , respectively, we obtain


l−1
(Ũ − U ) sin k(l − 1) sin k̃(l − 1)
sin k( j − l) sin k̃( j − 1) =
j=1
2T(cos k̃ − cos k)
(Ũ − U ) sin k(l − 1) sin k̃(l − 1)
∝ . (5.25)
π(k̃) − π(k)

The one-particle overlaps are thus given by

(Ũ − U )C(k̃)C(k)
˜ |k ≡ =
∇ , (5.26)

π(k) − π(k̃)

where C(k) = 2 sin k(l − 1) [(l − 1) − (sin k(l − 1) cos kl)/ sin k]−1/2 . For the
overlap with a bound state (if it exists) we just need to replace π(k̃) with π B Eq. (5.19)
and C(k̃) with

CB = 2 sinh θ(l − l) [(sinh θ(l − 1) cosh θl)/ sinh θ − (l − 1)]−1/2 .

In our problem, U = 0 (no core hole), Ũ = V , sin k(l − 1) = ± sin k and, in the
limit l ≈ ∓, sin k̃(l − 1) = (T/V ) sin k̃l = ±(T/V ) sin λ(k).
Equation (5.26) is exact. We now linearize the dispersion relation near the Fermi
surface:

π(k) = π(k F + Γk) ↑ π(k F ) + v F Γk = −2T cos k F + 2T sin k F · Γk. (5.27)

and find
234 5 Many-Body Theory in One Dimension

2V sin k F (T/V ) sin(λ(k̃)) sin(αλñ ) 1


˜ |k ≡ ↑ ±
∇ =± , (5.28)

2lT sin k F ([k − k̃ − λ(k̃)/l] α n − ñ + λñ

where k = αn/l, and λn = λ(αn/l)/α. Substituting this in (5.10), we obtain the


scalar product of N -particle wave functions with and without core hole potential
   

N
sin(λm ) 1
˜ 0 |0 ≡ = ±
∇ det , . (5.29)
α n − ñ + λñ
m=1

This is the same formula, which was obtained by Anderson [13] in the first-order
perturbation theory for s-wave scattering. Following his approach towards deriving
the orthogonality exponent (5.12), we transform it to a more convenient form using
the Cauchy formula,
   
1 m>n (am − an ) m>n (bm − bn )
det =  . (5.30)
am − bn m,n (am − bn )

˜ 0 |0 ≡|:
This allows to calculate the logarithm of |∇

  
  
sin αλn λn − λm
˜ 0 |0 ≡| ↑ −
ln |∇ ln ln 1 + −
n m<n
αλn n−m
   
λn λm
− ln 1 + − ln 1 − . (5.31)
n−m n−m

Using also the Euler’s product formula


∓ 
 
sin x x2
= 1− 2 2 , (5.32)
x α j
j=1

we can expand (5.31) in powers of λn (assuming λn /α ∞ 1) and check that the linear
terms cancel, while the quadratic terms yield the Anderson’s result (5.11, 5.12): up
to a factor,

1 λ(k F ) 2
˜ 0 |0 ≡| ↑ N − 2
|∇ α
.

This formula holds also if the core potential has a bound state, if this bound state is
filled. Otherwise the Anderson exponent is given by
 
1 λ(k F ) 2
xe = 1− . (5.33)
2 α
5.1 Orthogonality Catastrophe and Related Effects 235

Instead of following the original derivation of the latter [9, 10] we will use some
generalizations of our one-dimensional approach, which do not rely on such assump-
tions as smallness of the phase shift, provide us with some powerful theoretical tools
and, in the end, better illuminate the physical meaning of these results.

5.2 Tomonaga-Luttinger Model

5.2.1 Spinless Fermions in One Dimension

Let us return to the tight-binding Hamiltonian of Eq. (5.13), adding to the model the
particle–particle interaction on the neighbouring sites:

l−2
  l−2
  
H = −T ∂i† ∂i+1 + ∂i+1

∂i + g ∂i† ∂i − n i ∂i+1

∂i+1 − n i+1 .
i=1 i=1
(5.34)
Here n j ∝ ∇0|∂ †j ∂ j |0≡ is the ground state expectation value of the site occupation
number. Besides modeling various one-dimensional conductors with interactions
(Eggert 2009) and impurity scattering (with a given orbital momentum) in a 3D
electron gas, this Hamiltonian also describes the XXZ spin-1/2 chain. It turns out
that the latter’s Hamiltonian,

1  x x y y  1  z z
HS = J⊥ ψi ψi+1 + ψi ψi+1 + Jz ψi ψi+1 , (5.35)
4 4
i

where ψ x,y,z are the Pauli matrices, can be transformed to the form (5.34) with
T = J⊥ , g = Jz and n = 1/2, i.e., the Fermi level in the middle of the band (see, e.g.,
[4], 1.2). The fermionic operators create and remove kinks in the spin configuration.
Thus the model we are going to investigate has quite versatile applications.

Substituting the Fourier-transforms of the site operators (∂ j = C k eikx j ck ), we
find for the non-interacting part of the Hamiltonian:

k
max 
H0 = π(k)ck† ck = †
π(−k F + q)c−k c
F +q −k F +q
k=−kmax q


+ π(k F + q )ck†F +q  ck F +q  . (5.36)
q

Since linearizing the dispersion law near the Fermi surface (5.27) brought useful
simplifications earlier, we will do the same here:
236 5 Many-Body Theory in One Dimension
 
H0 ↑ †
(−v F )qc−k c
F +q −k F +q
+ v F q  ck†F +q  ck F +q  (5.37)
q q

(measuring energy from the Fermi level). We can denote

dq = ck F +q , dq† = ck†F +q ; sq = c−k F +q , sq† = c−k



F +q
(5.38)

and rewrite (5.37) as  


H0 = v F q dq† dq − sq† sq . (5.39)
q

The anticommutation relations between the new operators are, obviously, as follows:

{dk , dq† } = λkq ; {sk , sq† } = λkq ; {dk , sq† } = λk,q−2k F . (5.40)

This is just a different notation, as long as we keep k F +q > 0 for the “right-movers”,
and −k F + q < 0 for the “left-movers”. The crucial step is now to consider them
as two different kinds of fermions, and let q run from −∓ to ∓ both for sq and dq
(Fig. 5.5). This is, of course, an approximation. As long as we are interested in low-
energy excitations, it is reasonable enough: situations, when the anticommutators of
right- and left-movers would be nonzero, require the momentum transfer of 2k F .
The advantage of this approximation is that the Hamiltonian (5.34) becomes exactly
solvable [3].
The electron annihilation operator at the position x j is now written as

1  ik F x j iq x j 1  −ik F x j iq x j
∂j = √ e e dq + √ e e sq
L q L q
1  iq x j 1  iq x j
= eik F x j √ e dq + e−ik F x j √ e sq
L q L q
∝ eik F x j d(x j ) + e−ik F x j s(x j ), (5.41)

where L is the length of our chain. The anticommutation relations between the right-
or left-movers are
  
{d(x j ), d † (xm )} = L −1 eiq x j −iq xm {dq , dq† } = L −1 eiq(x j −xm )
q,q  q

∓ ∓

= L −1 e2αin(x j −xm )/L = λ(x j − xm − n  L). (5.42)
n=−∓ n  =−∓

Assuming periodic boundary conditions, the allowed wave numbers are

2αn
kn = , n = 0, ±1, ±2, . . .
L
5.2 Tomonaga-Luttinger Model 237

and we have used the identity



 ∓

2αn
ei L y
=L λ(y − n  L). (5.43)
n=−∓ n  =−∓

In the limit L ≈ ∓ the summation over the wave numbers is replaced by integration,
 
L
≈ dq.
q

On the right-hand side of Eq. (5.42) will remain only the term with n  = 0:

{d(x), d † (y)} = {s(x), s † (y)} = λ(x −y); {d(x), s † (y)} = {d(x), d(y)} = · · · = 0.
(5.44)
Then the fermion field operators ∂(x) will satisfy the standard anticommutation
relations,

{∂(x), ∂ † (y)} = λ(x − y), {∂(x), ∂(y)} = {∂ † (x), ∂ † (y)} = 0,

if
1 
∂(x) = √ eik F x d(x) + e−ik F x s(x) . (5.45)
2

Note that the interaction term of Hamiltonian (5.34) enters the expression

∂i† ∂i − n i ∝ : ∂i† ∂i :, (5.46)

the normally ordered product of Fermi operators, and its ground state expectation
value is zero. In the continuous limit then
1 † 
: ∂ † (x)∂(x) : = : d (x)d(x) + s † (x)s(x) :
2
1 
+ : d † (x)s(x)e−2ik F x + s † (x)d(x)e2ik F x : (5.47)
2
We would like to get rid of electron–electron interactions (quartic terms in (5.34)) and
deal with free particles. In the BCS theory this was made possible by the presence
of superconducting condensate, but at a price of having to solve a self-consistent
equation for the order parameter. Here a different approach is possible, due to the
strictly one-dimensional character of the problem (and the approximations we made
up to this point). The goal of the following excercise is to express the Hamiltonian
of 1D fermions in terms of bosonic fermion density operators,

φd (x) = d † (x)d(x), φs (x) = s † (x)s(x). (5.48)


238 5 Many-Body Theory in One Dimension

(a) (b)
vF |k| -vF k vF k
F F

k k

(a') (b')

k k

Fig. 5.5 Tomonaga (a) and Luttinger (b) models of a one-dimensional system of fermions. Note
that in the Tomonaga model the spectrum is bounded from below, and the total number of fermions
is finite. In the Luttinger model the spectrum is not bounded, the number of particles is infinite, and
we have two distinct “species” of fermions (right- and left-movers)

The interaction terms, which are quartic in terms of fermions, will become quadratic
in terms of these new operators, i.e., yielding the Hamiltonian for non-interacting
bosons. This is the idea of bosonization.

5.2.2 Bosonization

For the right-moving fermion density we find1


 
φd (x) = L −1 e−i(k F +k)x+i(k F +q)x dk† dq = L −1 ei Qx φdQ , (5.49)
k q Q

where †
 
φdQ = dk† dk+Q ∝ dk† −Q dk  ; φdQ = φd−Q , (5.50)
k k

1 See [3], Sect. 4.4; [4], Sect. 2.1; [2, 6] for more details and original references. Be aware of
significant differences in notation and conventions.
5.2 Tomonaga-Luttinger Model 239

|Q|

|Q|
Fig. 5.6 Action of the fermion density operators on the ground state of the Luttinger model

and therefore φd (x) is Hermitian.


In the ground state of our model all states of right-movers with k < 0 (i.e., below
the Fermi surface) are occupied, and all those with k > 0 are empty. Therefore the
action of φdQ on the ground state will give zero for all Q > 0, while for Q < 0 this
operator shifts the ground state by |Q| to the right (Fig. 5.6). Of course, the same
relations hold, mutatis mutandis, for φs .
The operators φdQ and φdQ  , Q, Q  = 0, do not commute. Indeed,
 †  † 
[φdQ , φdQ  ] = [dk dk+Q , dk† dk  +Q  ] = †
dk dk+Q+Q  − dk−Q  dk+Q .
k,k  k

Substituting here

dk† dk+Q+Q  = : dk† dk+Q+Q  : +∇0|dk† dk+Q+Q  |0≡;


† † †
dk−Q  dk+Q = : dk−Q  dk+Q : +∇0|dk−Q  dk+Q |0≡,

we see that, since the contributions from normal products cancel,


  QL
[φdQ , φdQ  ] = λ Q,−Q  ∇0|dk† dk |0≡ − ∇0|dk+Q

dk+Q |0≡ = λ Q,−Q  . (5.51)

k

In the same way, we obtain the relations for the left movers. In particular, for
Q, Q  = 0
Q
[φsQ , φsQ  ] = −λ Q,−Q  . (5.52)

For the position-dependent densities this yields
240 5 Many-Body Theory in One Dimension

 Q L i Qx+i Q  x  −≈
[φd (x), φd (x  )] = L −2 λ Q,−Q  e L≈∓

Q,Q 
i χ
− λ(x − x  ); (5.53)
2α χx

−≈ i χ
[φs (x), φs (x  )] L≈∓ λ(x − x  ). (5.54)
2α χx

We should include in the expansions (5.49) for φd,s (x) also the contributions from
φd,s
Q=0 . The case Q = 0 (so called zero mode) requires a special consideration. Indeed,
the expectation value of φd0 is infinite. But the normally ordered operator : φd0 :
  † 
: φd0 := : dk† dk := ck F +k ck F +k − n k F +k = Nd , (5.55)
k k

is the number operator for the right-moving particles in the given state relative to
the ground state. Same reasoning applies to the left-moving zero mode. Note, that
: φ0d,s : commute with each other and with all φkd,s with k = 0.
It is clear from (5.51, 5.52), that the Fourier components φkd,s can be expressed
through standard Bose operators bkd,s with [bk , bk† ] = λkk  : for k > 0
 
kL d kL d †
φdk = −i bk ; φ−k = i
d
(b ) ; (5.56)
2α 2α k
 
kL s † kL s
φsk = i (bk ) ; φs−k = −i b . (5.57)
2α 2α k

Again, it is straightforward to check that left and right bosons commute with each
other and with both right- and left-moving zero modes.

5.2.2.1 “Bosonic” Hamiltonian of Non-Interacting Fermions in One


Dimension

The non-interacting part of the Hamiltonian, (5.39) can be expressed in terms of


the Bose operators b, b† . The least cumbersome way of doing this is to find first its
commutator with the densities of right- and left movers. For example,

[φdQ , H0 ] = v F [dk† dk+Q , qdq† dq ]
k,q

= v F ((k + Q) − k) dk† dk+Q = v F QφdQ ,
k

which for positive Q’s means


5.2 Tomonaga-Luttinger Model 241

[bdQ , H0 ] = (v F Q)bdQ , (5.58)

and for negative Q’s (since φ−Q = (φ Q )† )

[(b|Q|
d †
) , H0 ] = −(v F |Q|)(b|Q|
d †
) . (5.59)

(You see that the linear dispersion law was here important). In the same way we can
show that (Q > 0)
[bsQ , H0 ] = (v F Q)bsQ (5.60)

and

[(bsQ )† , H0 ] = −(v F Q)(bsQ )† . (5.61)

Recalling, that for Bose operators [b, b† b] = b and [b† , b† b] = −b† , we see that the
Hamiltonian H0 can be written as
  ⎡
H0 = v F Q (bdQ )† bdQ + (bsQ )† bsQ + H0 . (5.62)
Q>0

The term H0 is the part of the expression (5.39), which commutes with all Bose
operators. Actually, it equals

αv F
H0 = (Nd (Nd + 1) + Ns (Ns + 1)) (5.63)
L
and is simply the energy of zero modes. Indeed, in the right-moving zero mode all the
states up to the state with momentum k F + kmax are occupied. The additional number
of right-moving particles is Nd = ∇Nd ≡ (which can have either sign, see Eq. (5.55)),
and the additional energy respective to the (infinite) energy of states filled up to k F is

 2α 
Nd
2α Nd (Nd + 1)
E 0,d = v F Q = v F n = v F
L L 2
Q n=1

for positive Nd ’s, and

2α 
0
2α Nd (Nd + 1)
E 0,d = −v F n = v F
L L 2
n=Nd +1

for negative Nd ’s (see Fig. 5.6). In the same way, for the left-moving zero mode

2α Ns (Ns + 1)
E 0,s = v F ,
L 2
242 5 Many-Body Theory in One Dimension

which is in full agreement with Eq. (5.63). The rest of the Hamiltonian (5.62)
describes the electron-hole excitations against the background of zero modes.

5.2.2.2 Bosonization Formulas

Not only the Hamiltonian of a system of free fermions in one dimension can be
expressed in terms of Bose operators. Single Fermi operators can be expressed in
terms of Bose operators as well. In order to do so, let us first introduce the “phase”
operators ωd,s (x), such that

Nd,s 1 χ d,s
φd,s (x) = + ω (x). (5.64)
L 2α χx
Then we can write

ωd,s (x) 1  1  ikx d,s ⎡


= e φk − e−ikx φ−k
d,s
e−kε/2 . (5.65)
2α L ik
k>0

This expression reminds the formula (2.17) for the phonon field, especially if sub-
stitute here the Bose operators from (5.56, 5.57):

2α  1  ikx d ⎡
ω (x) = −
d
√ e bk + e−ikx (bkd )† e−kε/2 ∝ τd (x) + τd† (x);
L k
k>0
(5.66)
  ⎡
2α  1
ωs (x) = √ eikx (bks )† + e−ikx bks e−kε/2 ∝ τs† (x) + τs (x). (5.67)
L k
k>0

The factor exp[−kε/2], where we will eventually put ε ≈ 0, enables the Abel regu-
larization (Sect. 2.2.1) in case the resulting expressions diverge. It is straightforward
to check that the only nonzero commutators of τ-operators are

2α  1 ik(x−y)−kε  1 ⎤ 2α (i(x−y)−ε) ⎦n

[τ (x), τ (y)] =
d d†
e = eL
L k n
k>0 n=1
⎤ ⎦  

(i(x−y)−ε) −≈ 2αi
= − ln 1 − e L L≈∓ − ln (y − x − iε) ,
L
(5.68)

and
5.2 Tomonaga-Luttinger Model 243

2α  1 −ik(x−y)−kε
[τs (x), τs† (y)] = e
L k
k>0
⎤ ⎦  
2α −≈ 2αi
= − ln 1 − e L (−i(x−y)−ε) L≈∓ − ln (x − y − iε) ,
L
(5.69)

Here we have used the expansion



 yn
ln(1 − y) = − . (5.70)
n
n=1

For such operators A, B, that [A, B] commutes with either of them, the Baker-
Hausdorff formula holds:
eA eB = eA+B e 2 [A,B] .
1
(5.71)

Then in the limit L ≈ ∓ for either right- or left-moving fields


⎤ ⎦
1 2αi
iτ† (x) iτ(x) i[τ† (x)+τ(x)] − 2 ln L (−iε)
e e = e e = (L/2αε)1/2 eiω(x) ; (5.72)
⎤ ⎦
1 2αi
iτ† (x) i[τ(x)+τ (x)] + 2 ln L (−iε)
= (L/2αε)−1/2 eiω(x) .

e eiτ(x) = e e (5.73)

We can also calculate the commutator of the fields ω(x). For example,

[ωd (x), ωd (y)] = [τd (x), τd† (y)] − [τd (y), τd† (x)]
2αi
= − ln(1 − e L (x−y+iε) )
2αi
2αi −≈ 1 − e− L (x−y) −≈
+ ln(1 − e L (y−x+iε) ) ε≈0 ln L≈∓
2αi
1−e L (x−y)

iαsgn(x − y), (5.74)

where sgn(x) = ±1 if x is positive (negative), and sgn(0) = 0. We have used here


the formula

1−e −i x ⎬ i(α − x), x > 0;
ln = 0, x = 0; (5.75)
1 − ei x ⎧
−i(α + x), x < 0.

Indeed, ln(1 − e−z ) − ln(1 − e z ) = ln(−e−z ) in the complex plane, but z = 0 is a


branch point. A branch cut along the negative real semi-axis imposes the choice of
−1 = exp[iα] for the upper and −1 = exp[−iα] for the lower complex half-plane.
It follows from Eqs. (5.44, 5.48), that the field operators and fermion density
operators satisfy the conditions
244 5 Many-Body Theory in One Dimension

[d(x), φd (y)] = d(x)λ(x − y); [s(x), φs (y)] = s(x)λ(x − y). (5.76)

Using the Baker-Hausdorff theorem (of which the formula (5.71) is a corollary),

 1
e−B AeB = [A, B]n
n!
n=0
1 1
∝ A + [A, B] + [[A, B], B] + · · · + [· · · [[A, B], B] · · · B] + · · · ,
2! n! ⎪ ⎨⎩ 
n times
(5.77)

we see that, since [ω, ω† ] is not an operator, the exponent of the “phase operator”
satisfies the conditions
d  d
[eiω (x) , φd (y)] = eiω (x) φd (y)e−iω (x) − φd (y) eiω (x)
d d

d (x)
= −i[φd (y), ωd (x)]eiω
eiω (x) χ
d
−i χ d
[ω (y), ωd (x)]eiω (x) =
d
= sgn(y − x)
2α χ y 2 χy
d (x)
= eiω λ(x − y), (5.78)

(see Eqs. (5.64, 5.74)), and the same holds for ωs (x), φs (y). Therefore eiω(x) could
be a candidate for representing the right- or left-moving field operator.
Now we arrive at the central point of the bosonization technique. With the help
of Baker-Hausdorff formula (5.71) written as

eA eB = eB eA e[A,B] , (5.79)

and the commutation relation (5.74) we see, that the exponentials of bosonic “phase”
operators, e±iω(x) , e±iω(y) , anticommute:
d (x) d (y) d (y) d (x) −≈
e[iω
d (x),iωd (y)] d (y) d (x)
eiω eiω = eiω eiω L≈∓ eiω eiω e±iα
d (y) d (x)
= −eiω eiω ; (5.80)
−iωd (x) −iωd (y) −≈ −iωd (y) −iωd (x) ±iα
e e L≈∓ e e e
−iωd (y) −iωd (x)
= −e e ;
5.2 Tomonaga-Luttinger Model 245
1 d 
d (x) 1 d −≈
, e−iω
d (y) d (x)−ωd (y))
e− 2 [ω (x),ω (y)] + e 2 [ω (x),ω (y)] L≈∓
d d
{eiω } = ei(ω
−iα iα
 0, x = y;
ei(ω (x)−ω (y)) e 2 sgn(x−y) + e 2 sgn(x−y) =
d d

2, x = y,
(5.81)

and the same for the left-movers. Therefore it is indeed possible to represent fermions
in one dimension in terms of bosonic operators: d(x) ∗ eiω (x) , s(x) ∗ eiω (x) , with
d s

the appropriate normalization factors.


A more painstaking approach [6], reminiscent of what we did when introducing
second quantization in §1.4, yields the bosonization formulas

1 1
Fd ei L Nd x eiω (x) = √ Fd ei L Nd x ei(τ ) (x) eiτ (x) ;
2α d 2α d † d
d(x) = √ (5.82)
2αε L
1 1
Fs ei L Ns x eiω (x) = √ Fs ei L Ns x ei(τ ) (x) eiτ (x) .
2α s 2α s † s
s(x) = √ (5.83)
2αε L

The appearance of zero mode number operators Nd,s in the exponents does not
contradict (5.78), since they commute with the “phase” operators. The operators Fξ
(so called Klein factors) satisfy the following (anti)commutation relations

{Fξ , Fγ† } = 2λξγ ; Fγ Fγ† = Fγ† Fγ = 1; (5.84)


{Fξ , Fγ } = {Fξ† , Fγ† } = 0, ξ = γ;
⎤ ⎦
Nξ , Fγ† = λξγ Fγ† ; Nξ , Fγ = −λξγ Fγ . (5.85)

They play a double role. First, they ensure that right- and left-moving fermions of
(5.82, 5.83) anticommute (the relations (5.80, 5.81) only provide for the anticom-
mutation inside each of the right- or left-moving groups). They also enforce the
anticommutation relations between the fermions belonging to some other distinct
species (e.g., with opposite spins). Second, the Klein factors change the number of
fermions in the system by one. This is important: as one can show [6], the reason
why bosonization works is that the Fock space of a one-dimensional Fermi sys-
tem can be split into subspaces, each with a fixed particle number; the excitations
in each of these subspaces are creating particle-hole pairs, and are thus bosonic.
The Klein factors serve as ladder operators, i.e., they allow the transitions between
such subspaces. (Note that the commutaton relations (5.85) between F, F † and the
number operators are the same as between creation/annihilation operators a, a † and
the number operator a † a.) The bosonic and fermionic descriptions of 1D systems
are thus exactly equivalent2 if the energy spectrum is not bound from below (as in
Luttinger, but not Tomonaga, model—see Fig. 5.5). This is not going to create any

2 One way of wrapping one’s head around this counterintuitive fact is to recall that the difference

between bosons and fermions comes from the wave function of the latter changing sign, when two
fermions change places. But in one dimension there is no way to make two particles change places
246 5 Many-Body Theory in One Dimension

problems, as long as we are only interested in low energy excitations (compared to


the Fermi energy). Remarkably, this equivalence and the bosonization rules hold for
an arbitrary dispersion relation (as long as the spectrum is not limited from below),
and not necessarily a linear one. The linearity, though, is essential for most of the
applications of bosonization techniques.

5.2.3 Tomonaga-Luttinger Liquid: Interacting Fermions in One


Dimension

The non-interacting Hamiltonian (5.62, 5.63) can be written as


 L ⎤ ⎦
H0 = αv F d x φd (x)2 + φs (x)2 (5.86)
0
! " 2   # ⎛
L 1 χωd (x) 1 χωs (x) 2 (Nd )2 + (Ns )2
= αv F dx + + .
0 2α χx 2α χx L
(5.87)

(We have used the bosonization formulas (5.64, 5.66, 5.67). The difference between
N 2 /L and N (N + 1)/L is negligible in the limit L ≈ ∓.) The fermion-fermion
interaction term, following from the tight-binding Hamiltonian (5.34), with help of
(5.47) and (5.48) becomes
 
g
H1 = d x : φd (x)φd (x + a) + φs (x)φs (x + a) + 2φd (x)φs (x)(1 − cos 2k F a)
4

+ (· · · )e−2ik F x + (· · · )e2ik F x + (· · · )e−4ik F x + (· · · )e4ik F x :, (5.88)

where a ≈ 0 is the lattice constant. The second line describes the backward scat-
tering processes with momentum transfer 2k F , which contain products like d † dd † s,
and the Umklapp processes (with momentum transfer 4k F and terms like d † d † ss).
These processes convert left-moving fermions into right-moving ones, and vice versa.
They can be produced by sharp enough interaction potentials. But such potentials
could also couple, within the right-moving (or left-moving) sector, the states just
above the Fermi surface with the unphysical states below the bottom of the band
(Fig. 5.5), which we have added in order to make bosonization possible, and with
the understanding that they will not be excited at low energies we are interested
in. It is possible to deal with such processes, but they requre special care. We will
therefore drop the second line in (5.88), and understand the “pointlike” interactions
in the first line as short-range, but smooth enough to justify such treatment (which is
usually the case). Since the factor (1 − cos 2k F a) is not quite controllable, and the
next-neighbour coupling in (5.34) is just an approximation anyway, we will simply

without passing through each other—i.e., occupying the same state simultaneously, which fermions
simply cannot do.
5.2 Tomonaga-Luttinger Model 247

introduce two different coupling constants g1 , g2 , and finally write the Hamiltonian
of the Tomonaga-Luttinger liquid as
 L ⎤ ⎦
HT L L = αv F d x φd (x)2 + φs (x)2 + g4 (φd (x)2 + φs (x)2 ) + 2g2 φd (x)φs (x)
0
⎜  L  
αv F 1
= (1 + g4 )2 − g22 dx φ+ (x)2 + g̃φ− (x)2 . (5.89)
2 0 g̃

Here φ± (x) = φd (x) ± φs (x), and


&
1 + g4 − g2
g̃ = (5.90)
1 + g4 + g2

is the Luttinger liquid parameter. Now using an appropriate Bogoliubov transfor-


mation we will diagonalize this Hamiltonian and reduce the system of interacting
fermions to the one of noninteracting bosons. In the same way, as we obtained
Eq. (5.87) from Eq. (5.86), we get
 ! " 2  2   #
L 1 Nd + Ns2 1 χωd 1 χωs 2
HT L L = A dx + g̃ + + +
0 g̃ L2 2α χx 2α χx
    ⎞
1 2Nd Ns 1 χωd 1 χωs
− g̃ + 2 .
g̃ L2 2α χx 2α χx

Here A = α2v F (1 + g4 )2 − g22 . Substituting the expansions (5.66, 5.67), we find
(always dropping the unphysical contribution of zero point oscillations)
    
1 A 1
HT L L + g̃ (Nd + Ns ) +
= 2 2
− g̃ · 2Nd Ns +
g̃ L g̃
     
A 1 1
k + g̃ (bk ) bk + (bk ) bk −
d † d s † s
− g̃ (bk ) (bk ) + bk bk
d † s † d s
α g̃ g̃
k>0
(5.91)

In the second line of (5.91) we can get rid of the off-diagonal terms by introducing
new Bose operators, related to the old ones via a Bogoliubov transformation (cf.
(4.54–4.57)):

Bk+ = bkd cosh δ − (bks )† sinh δ; Bk− = bks cosh δ − (bkd )† sinh δ;
bkd = Bk+ cosh δ + (Bk− )† sinh δ; bks = Bk− cosh δ + (Bk+ )† sinh δ. (5.92)

The chosen parametrization ensures the Bose commutation relations for B, B † -


operators. Substituting (5.92) in (5.91), we see that the off-diagonal terms disap-
248 5 Many-Body Theory in One Dimension

pear, if

1
g̃ − g̃
tanh 2δ = , (5.93)
1
g̃ + g̃

i.e., exp[−2δ] = g̃. The remaining term is simply 2αA k>0 k[(Bk+ )† Bk+ +(Bk− )† Bk− ].
The first line in (5.91) is diagonalized by introducing N± = (Nd ±Ns )/2. Finally
we find, that

HT L L = Hμ , (5.94)
μ=±

where
  
2αṽ F 1 μ μ
Hμ = + g̃ (Nμ )2 + ṽ F k(Bk )† Bk . (5.95)
L g̃
k>0

The Hamiltonian of interacting 1D fermions with linear dispersion law is indeed


reduced to the one of non-interacting 1D bosons, which allows an exact solution.
Of course, the physical variables are defined in terms of the initial Fermi field ∂(x),
expressed through the operators d(x), s(x), the “old” Bose-operators b, b† , and even-
tually through B ± , (B ± )† , with the coefficients dependent on the Luttinger liquid
parameter g̃. Besides that, and the change of the factors at new zero modes, the only
effect of interactions is the renormalization of the Fermi velocity:

ṽ F = v F (1 + g4 )2 − g22 . (5.96)

This is the “speed of sound” of the “acoustic phonons” (the second term in (5.95)),
which appear on top of zero modes.
As the final touch, we can introduce the new “phase” operators via (cf. (5.66,
5.67))

2α  1  ikx + ⎡
+
 (x) = − √ e Bk + e−ikx (Bk+ )† e−kε/2 ; (5.97)
L k
k>0

2α  1  ikx − † ⎡
− (x) = √ e (Bk ) + e−ikx Bk− e−kε/2 . (5.98)
L k
k>0

The Hamiltonian (5.95) can now be written as3

3 Starting from ± and N± , one can go backwards and introduce new Fermi operators. Such
refermionization is sometimes useful ([6], §10.C).
5.2 Tomonaga-Luttinger Model 249

 !      ⎛
L 1 Nμ 2 1 χμ 2
Hμ = αṽ F dx 2 + g̃ + . (5.99)
0 g̃ L 2α χx

5.2.4 Spin-Charge Separation

One more counterintuitive property of the Tomonaga-Luttinger model is the splitting


of spin and charge degrees of freedom. Including spin in the model is trivial: we just
add a spin index to the index distinguishing left- and right-movers. As mentioned
before, the Klein factors will ensure that all different species of fermions anticom-
mute, and the anticommutation relations within the same species are guaranteed by
Eqs. (5.80, 5.81).
The total right(left)-moving charge density is then given by

d(s)
φC (x) = φd(s)↑ (x) + φd(s)≤ (x), (5.100)

and the net spin density by

φd(s)
S (x) = φ
d(s)↑
(x) − φd(s)≤ (x). (5.101)

We can now introduce new Bose operators, related to the ones of Eqs. (5.56, 5.57)
for each spin species via

d(s)↑ d(s)≤ d(s)↑ d(s)≤


d(s) bk +b bk −b
bC,k = √ k ; bd(s)
S,k = √ k , (5.102)
2 2

and new “phases” and number operators

d(s) ωd(s)↑ (x) + ωd(s)≤ (x) d(s) ωd(s)↑ (x) − ωd(s)≤ (x)
ωC (x) = √ ; ω S (x) = √ ; (5.103)
2 2
d(s) Nd(s)↑ (x) + Nd(s)≤ (x) d(s) Nd(s)↑ (x) − Nd(s)≤ (x)
NC (x) = √ ; N S (x) = √ . (5.104)
2 2

Now the Hamiltonian (5.62, 5.63), trivially generalized to include contributions from
both spin projections, can be written similarly to (5.87), and it splits into charge and
spin parts, which commute with each other:
250 5 Many-Body Theory in One Dimension
 L ⎤ ⎦
H0 = αv F d x φd↑ (x)2 + φd≤ (x)2 + φs↑ (x)2 + φs≤ (x)2
0
 ⎤ L ⎦
= αv F d x φCd
(x)2 + φC
s
(x)2 + φdS (x)2 + φsS (x)2 (5.105)
0
⎫  2  
 ⎬ L 2
1 χωdμ (x) 1 χωsμ (x) 
= αv F dx  +
⎧ 0 2α χx 2α χx
μ=C,S

(Nμd )2 + (Nμs )2
+ .
L

The physical right- or left-moving spinful fermion fields and densities are expressed
in terms of new charge/spin operators through the generalizations of the bosonization
formulas (5.64, 5.82, 5.83):
d(s) d(s) d(s) d(s)
ωC (x)±ω S (x)
i √ d(s) 1 χ ωC (x) ± ω S (x)
d(s)↑/≤ (x) → e 2 ; φ↑/≤ (x) = √ .(5.106)
2α χx 2

Therefore the spin and charge degrees of freedom can be treated separately. This
becomes interesting in the presence of interactions, which would couple to either
spin or charge density and make their dynamics different (see, e.g., [5], Chap. 28;
Nagaosa 1998, Sect. 3.2).

5.2.5 Green’s Functions in Tomonaga-Luttinger Model

Let us return to free 1D fermions with a linear dispersion law. Their creation/
annihilation operators depend on position and time through exp[±(i x −iv F t)] (right-
movers) or exp[±(i x + iv F t)] (left-movers). We will consider the Green’s functions
in imaginary time, like in Sect. 3.2, only here we denote

ϕ = iv F t. (5.107)

Then the free left-moving (right-moving) Fermi operators depend, respectively, on


the complex variable z = ϕ + i x or its complex conjugate z ∼ = ϕ − i x.
As in Sect. 3.2.1, here we replace the free Heisenberg field operators with Mat-
subara operators, and omit the superscript “M”
5.2 Tomonaga-Luttinger Model 251

dk (t) = e−iv F kt dk (0) ≈ dk (ϕ ) = e−kϕ dk (0);


dk† (t) = eiv F kt dk† (0) ≈ d̄k (ϕ ) = ekϕ d̄k (0); (5.108)
sk (t) = e iv F kt
sk (0) ≈ sk (ϕ ) = e sk (0);

−iv F kt †
sk† (t) = e sk (0) ≈ s̄k (ϕ ) = e−kϕ s̄k (0).

Same relations hold for the right- and left-moving Matsubara Bose operators:

(ϕ ) = e↔kϕ bd(s) (0); b̄k (ϕ ) = e±kϕ b̄d(s) (0).


d(s) d(s)
bk (5.109)

Introducing the ϕ -ordered Green’s function for the right-movers,

Gd (ϕ − i x) = −∇Tϕ d(ϕ − i x)d̄(0)≡, (5.110)

we can immediately calculate it in equilibrium:

1 ∼ 1 ∼
Gd (ϕ − i x) = −δ(ϕ ) (1 − n k )e−kz + δ(−ϕ ) n k e−kz , (5.111)
L L
k k

where n k = n F (v F k) (for right-movers; for left-movers, of course,


n k = n F (−v F k)). At zero temperature this reduces to

1  −kz ∼ −kε 1  −kz ∼ +kε


Gd0 (ϕ − i x) ∝ Gd0 (z ∼ ) = −δ(ϕ ) e + δ(−ϕ ) e
L L
k>0 k<0

1  − 2α (z ∼ +ε) n 1  2α (z ∼ −kε) n
∓ ∓
= −δ(ϕ ) e L + δ(−ϕ ) eL (5.112)
L L
n=1 n=1
α
1 e L sgn(ϕ )(ϕ −i x+εsgn(ϕ )) −≈ 1 1
=− − · ∼ .
L 2 sinh αL (ϕ − i x + εsgn(ϕ ))
L≈∓
2α z + εsgn(ϕ )

Here ε ≈ 0 is the regularization parameter. In the same way, for the left-movers

−≈ 1 1
Gs0 (ϕ + i x) ∝ Gs0 (z) L≈∓ − · . (5.113)
2α z + εsgn(ϕ )

At a finite temperature 1/ξ we use the relation

1
1 − n F (E) = 1 − = n F (−E)
eξ E + 1

and write
252 5 Many-Body Theory in One Dimension

1/ξ 1  e−k(ϕ −i x)−ksgn(ϕ )ε −≈


Gd (ϕ − i x) = −sgn(ϕ ) L≈∓
L
k
e−ξ kv F sgn(ϕ ) + 1
 ∓ dk e−k(ϕ +sgn(ϕ )ε) eikx 1 α/(ξv F )
−sgn(ϕ ) =− · , (5.114)
2α e −ξ kv F sgn(ϕ ) +1 2α sin α(ϕ −i x+εsgn(ϕ ))
−∓
ξ v F

which in the limit ξ ≈ ∓ coincides with (5.112) (see Problem 2). For the left-
movers, of course,

1/ξ −≈ 1 α/(ξv F )
Gs (ϕ + i x) L≈∓ − · . (5.115)
2α sin α(ϕ +i x+εsgn(ϕ ))
ξ v F

Now let us compute the equilibrium boson Green’s functions. For the right-movers

Dd (ϕ − i x) = − ∇Tϕ ωd (ϕ − i x)ωd (0)≡ (5.116)


2α  1 ⎤ −k(ϕ −i x+εsgn(ϕ ))
= − δ(ϕ ) e (n B (v F k) + 1)
L k
k>0

+ek(ϕ −i x+εsgn(ϕ )) n B (v F k)
2α  1 ⎤ −k(ϕ −i x+εsgn(ϕ ))
− δ(−ϕ ) e n B (v F k)
L k
k>0

+ek(ϕ −i x+εsgn(ϕ )) (n B (v F k) + 1) .

At zero temperature n B = 0, and (see (5.70))

2α  1 −k sgn(ϕ )(ϕ −i x+εsgn(ϕ ))


Dd0 (ϕ − i x) = − e
L k
k>0

1 −(2α/L) sgn(ϕ )(ϕ −i x+εsgn(ϕ )) n


∓
−≈
=− e L≈∓
n
n=1
 

ln (sgn(ϕ )(ϕ − i x) + ε) . (5.117)
L

At a finite temperature in the limit L ≈ ∓ the evaluation of (5.116) yields after


rather more work, than for fermions see [6], H.2.b, for details
  
1/ξ 2ξv F α
Dd (ϕ − i x) = ln sin sgn(ϕ )(ϕ − i x) + ε . (5.118)
L ξv F

The Green’s functions for the left-movers are obtained by replacing ϕ − i x by ϕ + i x.


Comparing (5.112) with (5.117) and (5.114) with (5.118) we see, that in the limit
L≈∓
5.2 Tomonaga-Luttinger Model 253

1 ∼
Gd (z ∼ ) ≈ − sgn(ϕ ) e−Dd (z ) . (5.119)
L
This intuitively agrees with the bosonization formulas (5.82, 5.83). This intuition is
fully justified. In order to demonstrate this we will need the following relation for
the averages of exponents of a Bose operator B:

∇eλB ≡ = e 2 λ ∇B ≡ .
1 2 2
(5.120)

It holds identically for any linear combination of bosons, B = q>0 (μq bq† + μ̃q bq ),
if the average is taken over the thermal state of the free boson Hamiltonian (5.62)
(see [6], C.10, Theorem 4). In the thermodynamical limit it holds for an arbitrary
state and Hamiltonian, as long as ∇B 2n+1 ≡ = 0. Indeed, then

 ∓

λ2n λ2n (2n)! 2 n
∇eλB ≡ = ∇B ≡ = e 2 λ ∇B ≡ .
1 2 2
∇B 2n ≡ =
(2n)! (2n)! 2n n!
n=0 n=0

Here we have used the weak version of the Wick’s theorem (Sect. 2.2.1) to reduce
each macroscopic average ∇B 2n ≡ to the sum of (2n)!/(2n n!) identical fully contracted
terms ∇B 2 ≡n . Next, from (5.120) and the Baker-Hausdorff formula we find

∇eλ1 B1 eλ2 B2 ≡ = e 2 (λ1 ∇B1 ≡+λ2 ∇B2 ≡)+λ1 λ2 ∇B1 B2 ≡ .


1 2 2 2 2
(5.121)

Then, substituting (5.82), we find

1
δ(ϕ )∇Fd e− L Nd (ϕ −i x) eiω (x,ϕ ) e−iω (0,0) Fd† ≡
2α d d
Gd (ϕ − i x) = −
2αε
1
δ(−ϕ )∇e−iω (0,0) Fd† Fd e− L Nd (ϕ −i x) eiω (x,ϕ ) ≡
d 2α d
+
2αε

(the factor exp[− 2αL Nd (ϕ −i x)] reflects the dependence of the Klein factor on (imag-
inary) time, following from the commutation relations (5.85) and the Hamiltonian
(5.62), (5.63)). Finally, using (5.84), dropping the Nd /L ≈ 0 terms in the exponents,
applying Eq. (5.121) and using D(0) = ln(2αε/L), we see, that indeed

1
sgn(ϕ )e∇Tϕ ω(x,ϕ )ω(0,0)≡− 2 (∇ω(0,0)ω(0,0)≡+∇ω(x,ϕ )ω(x,ϕ )≡)
1
Gd (ϕ − i x) ≈ −
2αε
1 1
=− sgn(ϕ )e−Dd (ϕ −i x)+Dd (0) = − sgn(ϕ )e−Dd (ϕ −i x) .
2αε L
For the left-movers, of course,

1
Gs (z) ≈ − sgn(ϕ ) e−Ds (z) . (5.122)
L
254 5 Many-Body Theory in One Dimension

5.2.5.1 Tomonaga-Luttinger Versus Fermi Liquid


Taking the Fourier transform of the fermion Green’s function with respect to the
position, we obtain the single particle momentum distribution function (see (5.45)):

n(k) = d xe−ikx ∇∂ † (x, 0)∂(0, 0)≡
 
i
= − lim d x e−i(k−k F )x G d (x, t) + e−i(k+k F )x G s (x, t) , (5.123)
2 t≈−0

where G d,s are real time causal Green’s functions (cf. Eq. (2.48)). Using the analytic
continuations of zero-temperature thermal Green’s functions (5.112, 5.113) (which in
this case amounts to the replacement of ϕ with iv F t) and taking the contour integrals,
we find that the contributions of right- and left-movers are n d (k) = (1/2)δ(k F − k)
and n s (k) = (1/2)δ(k F + k), as one would expect. In the presence of interactions
the situation drastically changes. Expressing the physical Fermi operators in terms
of the Bogoliubov-transformed Bose operators (5.92), and n(k) in terms of their
Green’s functions, one discovers that in the presence of an infinitesimally small
interaction n(k) = 1/2: not only the step at |k| = k F disappears, but the distribution
becomes altogether momentum-independent ([15]; [3], Sect. 4.4.E). This is in a sharp
contrast to the behaviour of the Fermi liquid, where the step in n(k) at |k| = k F
becomes less than unity, but still survives in the presence of interactions (Lifshits
and Pitaevskii 1980, Sect. 10). Like in the case of the instability of the normal state of a
superconductor considered in Chap. 4, the dependence of the Green’s functions on the
interaction strength is non-analytic and could not be reproduced by the perturbation
theory.

5.3 Conformal Field Theory and the Orthogonality Catastrophe

5.3.1 Conformal Symmetry

A remarkable property of the Tomonaga-Luttinger model is that the field operators


and Green’s functions of left-movers depend only on the complex variable z = ϕ +i x,
and those of right-movers - only on its complex conjugate z ∼ , while the Hamiltonian
splits into a sum of z- and z ∼ -dependent parts.4 This provides significantly more
than just the opportunity to treat the two sectors separately. Analytic functions of
complex variable realize a special kind of symmetry of the complex plane - local
conformal invariance.5 Specifically, a conformal mapping is a one-to-one correspon-
dence between the domains D and D  in the complex plain such, that in the vicinity

4 z- (resp. z ∼ -) dependent quantities are called holomorphic (antiholomorphic), or analytic (antian-


alytic).
5 Global conformal invariance exists in higher dimensions as well.
5.3 Conformal Field Theory and the Orthogonality Catastrophe 255

of any point in D it is an orthogonal, orientation-preserving transformation. In other


words, it preserves the angles between intersecting curves, transforms infinitesimal
circles into infinitesimal circles, and maintains the clockwise direction on them. Only
the overall scale may be locally changed.
One of the basic theorems of complex analysis states that a function f (z) realizes
a conformal mapping of the domain D if and only if it is single-valued and analytic
in D, and its derivative d f (z)/dz = 0 everywhere in D. Thus any analytic function
f (z) (or f (z ∼ )) can be used for a mapping between some D and D  , determined by
the properties of the specific function. (In this context z and z ∼ should be treated
as independent complex variables.) This is what makes the 2-dimensional case (one
spatial dimension, plus (imaginary) time) so special and rich in possibilities. The
conformal field theory, which investigates these possibilities and applies them to a
broad range of physical problems, is a very large subject and well beyond the scope
of this book.6 We limit our acquaintance with it to what is necessary to derive the
formulas (5.12, 5.33) for the Anderson orthogonality exponent.
In a conformally invariant theory there exist so called primary fields, that is, such
operators Oh,h ∼ (z, z ∼ ), that under the conformal mapping z ≈ w(z), z ∼ ≈ w ∼ (z ∼ )
transform as
 −h  −h ∼
∼ ∼ dw dw ∼
O h,h ∼ (z, z ) ≈ O h,h ∼ (w, w ) = Oh,h ∼ (z, z ∼ ). (5.124)
dz dz ∼

In particular, their correlation functions in the infinite complex plane will have the
form
 2h  2h ∼
1 1
∇Oh,h ∼ (z 1 , z 1∼ )Oh,h
† ∼
∼ (z 2 , z 2 )≡ = . (5.125)
z1 − z2 z 1∼ − z 2∼

The real integers h and h ∼ are called conformal dimensions of the field. As is clear
from Eqs. (5.112), (5.113) and the bosonization formulas, for the system of free
bosons (or fermions) in 1+1 dimensions these operators are left- and right-movers,
d ∼
s(z) → eiω (z) and d(z ∼ ) → eiω (z ) , with the conformal dimensions (1/2,0) and
s

(0,1/2) respectively. Using (eq:Last-100) and an appropriate analytic function, which


maps the complex plane to a domain D, one can obtain from (5.125) the correlation
function in this domain.7

6 An introduction into it can be found in [5], Chap. 24, 25; [4], Sect. 2.2, and a massive exposition
in [1].
7 The conformal field theory with boundaries was developed by Cardy [8].
256 5 Many-Body Theory in One Dimension

ir ir
iL A

τ τ
A τ1 B τ2 A A τ1 B τ2 A

Fig. 5.7 Conformal field theory for a 1D system of infinite (left) and finite (right) length. The time
axis is chosen to coincide with the real axis. Boundary condition changing operators act at ϕ1 and ϕ2

5.3.2 Conformal Dimensions, the Energy Spectrum and the


Anderson Exponent

We have previously reduced the problem of the orthogonality catastrophe to the


behaviour of a system of free one-dimensional fermions on a ray, r √ 0, in the
presence of a scattering potential at the boundary. In order to apply the methods of
conformal field theory to this problem, it is necessary to consider such boundary
condition changing operators [7, 8]. In the following we put  = 1, v F = 1.
Let us choose the coordinates in the complex plane z = ϕ + ir , and consider the
upper half-plane, r √ 0 (Fig. 5.7). The real axis then represents the position of the
boundary, where the scattering potential is located. It produces the scattering phase,
relating the in- and outgoing waves via

∂out (0) = e2iλ(k F ) ∂in (0). (5.126)

Change of the scattering phase, which reflects the creation and filling in of the core
hole, is produced by the operator O, which changes the boundary condition from
“A” (no core hole, zero phase shift) to “B” (core hole, phase shift λ). Assuming that
O is a primary field with the conformal dimension x, its zero-temperature Green’s
function will be
1
∇A|O(ϕ1 )O† (ϕ2 )|A≡ = . (5.127)
(ϕ1 − ϕ2 )2x

Here |A≡ is the ground state of the system with the boundary condition “A” at the
origin (≥z ∝ r = 0). The analytic function

z(w) = Leαw/L ∝ Leα(u+iv)/L (5.128)

maps the upper half-plane into the strip 0 ⇔ v ⇔ L, which corresponds to a system
of a finite length L, with the positive real ray mapped on the lower, and the negative
real ray - on the upper boundary of the strip. Let’s take ϕ1 , ϕ2 > 0. Then the boundary
condition at r = L will be always “A”. The Green’s function (5.127) is transformed
according to (5.124):
5.3 Conformal Field Theory and the Orthogonality Catastrophe 257

 x  x  2x
dz dz 1
∇A A|O(u 1 )O† u 2 )|A A≡ =
dw(u 1 ) dw(u 2 ) z(u 1 ) − z(u 2 )
 2x α 2x
α/2L −≈ αx(u 2 −u 1 )
= (u 2 −u 1 ) L e− L ,
sinh[(α/2L)(u 1 − u 2 )] L
(5.129)

where |A A≡ is the ground state of the system of length L with the boundary conditions
“A” (i.e., with zero phaseshifts) at both ends. On the other hand, by directly inserting
the closure relation I = |n≡∇n| in the expression ∇A A|O(u 1 )O† (u 2 )|A A≡, we find

|∇A A|O|AB, n≡|2 e−(E AB −E A A )(u 2 −u 1 ) .
n 0
∇A A|O(u 1 )O† (u 2 )|A A≡ = (5.130)

Here n labels all energy eigenstates in the system of length L with the boundary
conditions “B” at r = 0 and “A” at r = L.
In the limit (u 2 − u 1 ) L the leading exponent in (5.130) should coincide with
(5.129). This exponent corresponds to the lowest-energy state of the system with the
phase shift, which is usually the ground state energy of the system with the boundary
conditions “A” and “B”, i.e.
α 2x αx(u 2 −u 1 )
e− ∗ |∇A A|O|AB, 0≡|2 e−(E AB −E A A )(u 2 −u 1 ) .
0 0
L (5.131)
L
Therefore the conformal dimension of the boundary condition change operator can
be obtained from the shift of the ground state energy due to change in the boundary
conditions (Affleck and Ludwig [7]):
L 0
x= (E − E 0A A ). (5.132)
α AB
The matrix element ∇A A|O|AB, 0≡ is the overlap between the ground states of the
system with and without the scattering potential at the origin. Thus, the power x in
(5.131) is the Anderson orthogonality exponent of Eq. (5.11), and the relation (5.132)
provides a convenient way of computing it directly from the energy spectrum of the
system.

5.3.2.1 Ground State Energy in the Presence of Scattering Potential

In order to find the ground state energy shift due to scattering potential, let us recall
Eq. (5.20) for the asymptotic form of the wave function:
⎤ ⎦
k̃n (x j ) ∗ sin k̃n x j + λ(k̃n ) , j 1. (5.133)
258 5 Many-Body Theory in One Dimension

Here k̃n is the wave vector shifted from its value kn in the absence of scattering
potential, x j = ja, and a is the lattice constant. The ground state energy is then


N
E0 = π(k̃n ) (5.134)
n=1

(recall that in Sect. 5.1.2 we were imposing free boundary conditions on the 1D chain;
therefore kn = αn/L , n > 0). Taking in (5.133) x j = L and demanding, in order to
satify the boundary condition, that

k̃n L + λ(k̃n ) = kn L ,

we obtain

λ(k̃n ) λ(kn ) λ(kn )λ  (kn )


k̃n = kn − ↑ kn − + ∝ K (kn ), (5.135)
L L L2

where λ  (k) = dλ/dk.


The energy (5.134) can be evaluated using the Euler-MacLaurin formula:


N    N
1 1
F n− = d x F(x) − (F  (N ) − F  (0)) + O(F  ). (5.136)
2 0 24
n=1

Setting  
1
F n− = π(K (kn )), (5.137)
2

we get F(N ) = π(K (k F )), and


 "    #
N α n + 21 αv F
E0 = dn π K − , (5.138)
0 L 24L

where now v F = π (k F ) (while still  = 1), and we have kept only terms O(1/L).
Changing the integration variable to ν = α(n + 1/2)/L, using (5.135) and again
neglecting corrections of order 1/L 2 , we find
 kF dν αv F
E0 = L π [K (ν)] −
0 α 24L
  
kF dν π (ν)λ(ν) π (ν)λ 2 (ν) π (ν)λ  (ν)λ(ν) αv F
=L π(ν) − + + − .
0 α L 2L 2 L 2 24L
(5.139)
5.3 Conformal Field Theory and the Orthogonality Catastrophe 259

Integrating by parts and using π (0) = 0, we eventually obtain the desired result:
  "   #
kF dν 1 π(k F ) 1 λ(k F ) 2
αv F 1 1
E0 = L π(ν) − dπλ(π) + − + O( 2 ).
0 α απ(0) 2 α L 24 L
(5.140)
It can be directly checked using the tight-binding model of Sec. 5.1.2 with free
boundary conditions. There π(k) = −2T cos k, v F = 2T sin k F , and the ground state
energy is given exactly by a geometric series,

T sin k F l αv F 1
E0 = T − = vF + T − + O( 2 ). (5.141)
sin(α/2l) α 24l l

If add to the ground state n extra electrons directly above the Fermi level, the
energy of the system becomes

n  
λ(k F ) α(m − 1/2)
En = E0 + π kF − +
L L
m=1
 
αv F 
n
1 λ(k F ) 1
= E 0 + nπ(k F ) + m− − + O( 2 ). (5.142)
L 2 α L
m=1

Then, instead of (5.140), we find [12]


 
kF
dν 1 π(k F )
En = L π(ν) − dπλ(π) + nπ(k F )
0 α α π(0)
"   #
αv F 1 λ(k F ) 2 1 1
+ n− − + O( 2 ). (5.143)
L 2 α 24 L

5.3.2.2 Anderson Orthogonality Exponent

Returning to Eq. (5.132) for the Anderson exponent, we substitute there the O(1/L)-
correction to the ground state energy from (5.140) and immediately find, that we have
successfully rederived Eq. (5.12):
 2
1 λ(k F )
x= .
2 α

Moreover, we are now equipped to find out what happens, if the core hole potential
creates a bound state. Then the Green’s function (5.130) becomes the sum of two
terms, corresponding to the bound state being empty or filled [12]:
260 5 Many-Body Theory in One Dimension

|∇A A|O|AB, n, e≡|2 e−(E AB,e −E A A )(u 2 −u 1 )
n 0
∇A A|O(u 1 )O† (u 2 )|A A≡ =

|∇A A|O|AB, n, f ≡|2 e−(E AB, f −E A A )(u 2 −u 1 ) .
n 0
+
(5.144)

They give rise to two peaks in the absorption rate, separated by the bound state energy
|π B |. The processes of core hole creation with or without filling the bound state can
be in the long time limit considered independently, as due to separate operators. In
case of filled bound state the result will be the same as Eq. (5.12): the wave function
of the bound state is exponentially decaying and cannot influence the O(1/L)-terms
in the ground state energy. Therefore
 2
1 λ(k F )
xf = . (5.145)
2 α

If the bound state is empty, then the corresponding operator not only changes the
boundary condition, but also creates an extra electron above the Fermi level. Using
(5.143) with n = 1, we finally reproduce, using quite a different approach, the result
of [9, 10]:
 
1 λ(k F ) 2
xe = 1− . (5.146)
2 α

We have reached the goal in a circuitous way, but learned some useful techniques
and did not need the assumption λ ∞ α!

5.4 Problems

• Problem 1

Verify that Eq. (5.87) is equivalent to Eqs. (5.62, 5.63). When integrating, make use
of φ(x + L) = φ(x).

• Problem 2
Obtain the explicit expression (5.114) for the thermal Green’s function of free fermi-
ons in the Tomonaga-Luttinger model. Taking the integral over k, consider it as a
complex variable and use the method of contour integration. Take into account that
the integrand has infinitely many simple poles on the imaginary axis, and close the
integration contour in either upper or lower half-plane of complex wave vector k,
depending on the sign of x.

• Problem 3

Using conformal mapping, obtain the finite-temperature Green’s function of free


one-dimensional fermions (5.115) from the zero-temperature one (5.113).
References 261

References

Books and Reviews

1. Di Francesco, P., Mathieu, P., Sénéchal, D.: Conformal Field Theory, Springer, GTCP, New
York (1997) (A fundamental textbook on conformal field theory in 1+1 and higher dimensions.)
2. Eggert, S.: One-dimensional quantum wires: A pedestrian approach to bosonization. In: Kuk
Y. et al. (eds.) Theoretical Survey of One Dimensional Wire Systems, Sowha Publishing, Seoul
(2007); arXiv:0708.0003 (chapter 2)
3. Mahan, G.D.: Many-Particle Physics, 2nd edn. Plenum Press, New York (1990)
4. Nagaosa, N.: Quantum Field Theory in Strongly Correlated Electronic Systems. Springer, TMP,
Berlin-Heidelberg (2010)
5. Tsvelik, A.M.: Quantum Field Theory in Condensed Matter Physics. Cambridge University
Press (1995)
6. von Delft, J., Schoeller, H.: Bosonization for Beginners - Refermionization for Experts, Ann.
Phys. 4, 225 (1998) (A very detailed tutorial, where bosonization is introduced constructively
and different approaches are compared.)

Articles

7. Affleck, I., Ludwig, A.W.W.: J. Phys. A: Math. Gen. 27, 5375 (1993)
8. Cardy, J.L.: Nucl. Phys. B 324, 581 (1989)
9. Combescot, M., Nozières, P.: J. Physique 32, 913 (1971)
10. Hopfield, J.J.: Comment. Solid State Phys. 11, 40 (1969)
11. Nozières, P., De Dominicis, C.T.: Phys. Rev. 178, 178 (1969)
12. Zagoskin, A.M., Affleck, I.: J. Phys. A: Math. Gen. 30, 5743 (1997)
13. Anderson, P.W., Phys. Rev. Lett. 18, 1049 (1967)
14. Thouless, D.J.: Quantum mechanics of many-body systems. Academic Press, New York (1972)
15. Mattis, D.C., Lieb, E.H., J. Math. Phys. N.Y. 6, 304 (1965)
Appendix A
Friedel Oscillations

In the static limit the polarization operator, which describes screening of the Coulomb
potential by the Fermi gas, is given by Eqs. (2.71, 2.72):
  
mp F p 2F − p 2 /4  p F + p/2 
0 ( p) = − 2 1+ ln  .
2π pF p p F − p/2 

This formula was obtained in the random phase approximation at zero temperature.
As mentioned before, the second term in the parentheses is non-analytic at p = 2 p F .
Indeed, though
 
p 2F − p 2 /4 p F + p/2
lim ln → lim g( p) = 0, (A.1)
p√2 p F pF p p F − p/2 p√2 p F

all the derivatives of g( p) diverge at this point. The screened Coulomb potential in
the coordinate representation is then

d3 p eipr 4πe2 / p 2
Ueff (r ) =
(2π)3 1 − (4πe2 / p 2 )0 ( p)
  1
e2 ∇ ei pr cos ρ
= dp p 2 d(cos ρ) 2
π 0 −1 p + (1/2)qT2 F (1 + g( p))

2e2 ∇ p sin( pr )
= dp 2
πr 0 p + (1/2)qT2 F (1 + g( p))
 ∇
e2 pei pr
= ≡ dp 2 . (A.2)
πr −∇ p + (1/2)qT2 F (1 + g( p))

Here we took into account that g( p) and p sin( pr ) are even functions of p and
extended the integration to all of the real axis. Now we can, as usual, close the
integration contour in the upper halfplane of the complex variable p and reduce

A. Zagoskin, Quantum Theory of Many-Body Systems, 263


Graduate Texts in Physics, DOI: 10.1007/978-3-319-07049-0,
© Springer International Publishing Switzerland 2014
264 Appendix A: Friedel Oscillations

ip0
iqTF p
-2pF -iqTF 2pF
-ip0

Fig. A.1 Analytic structure of the integrand in (A.2) after regularization, and the initial (a) and
deformed (b) integration contours

the integral to the contributions from the singularities of the integrand. Replacing
g( p) in this expression with its limiting value g(0) = 1 we would indeed obtain the
exponential screening ≈exp(−qT F r ) due to the simple poles at p = ±iqT F .
Taking into account the actual expression for g( p) drastically changes the out-
come. It is straightforward to see that the poles will be slightly shifted along the
imaginary axis,
±iqT F
p = ±i p0 ∓ 
2 .
qT F
1 − 12 p F
1

But the main difference comes from the logarithm having singularities - branch points
p = ±2 p F .
Following [3], let us regularize the logarithm:
 
 p F + p/2  1 | p F + p/2|2 ( p + 2 p F )2 + ε2

ln   = ln = lim ln .
p F − p/2  2 | p F − p/2|2 ε√0 ( p − 2 p F )2 + ε2

This way we shift the singularities to the points p = ±2 p F ± iε, away from the
real axis (see Fig. A.1). The branch cuts of the logarithm in the upper halfplane are
chosen along the rays ±2 p F + iε + is, where 0 ≤ s < ∇. Since

1 1
ln(z 2 + ε2 ) = [ln(z − iε) + ln(z + iε)] , z = p ± 2 p F ,
2 2
it is clear that going full circle around a branch point one adds to the logarithmic
term an extra ±πi.
Now we can deform the integration contour to run mostly along the infinitely large
cemicircle in the upper complex half-plane. Its contribution to the integral will be
suppressed due to the exponential factor exp(i pr ) √ 0. The surviving terms come
Appendix A: Friedel Oscillations 265

from the integrations along the branch cuts and around the pole at i p0 (Fig. A.1).
The latter, together with the prefactor in (A.2), will yield the exponentially screened
potential,
e2
≈ e − p0 r ,
r
but as we shall see, at r √ ∇ it becomes irrelevant compared to the slower-decaying
contributions from the former.
Consider the integral around the left branch cut. Taking ε √ 0, we find (the
prime indicates that we first integrate along the left, and then along the right bank of
the cut):
 
e2 −2 p F  −2 p F +i∇ ∼ pei pr
I−2 p F = ≡ + dp 2
πr −2 p F +i∇ −2 p F p + (qT2 F /2)(1 + g( p))
  
e2 −2 p F +i∇ pei pr
= ≡ dp .
πr −2 p F p 2 + (qT2 F /2)(1 + g( p))

In the last expression [...] is the difference between the right and left banks of
the branch cut, which is solely due to the multivaluedness of the logarithm. Writing
now p = −2 p F + is and replacing the slowly varying terms under the integral with
their values on the real axis (which is justified in the limit r √ ∇ by the exponent,
exp(i pr ) = exp(−2i p F r ) exp(−sr )), rapidly decaying away from the real axis, we
can write
  ∇  
e2 (1/2)qT2 F −2i p F r i −u 1 ∇ 2 −u
I−2 p F ∓ ≡ e du u e + 3 du u e
r (4 p 2F + (1/2)qT2 F )2 r2 0 r 0
e2 (1/2)qT2 F
∓ cos(2 p F r ). (A.3)
r (4 p F + (1/2)qT2 F )2
3 2

We have kept the slowest-decaying term. The integral over the branch cut at p = 2 p F
yields the same expression, and therefore, as promised, at large distances the screened
potential behaves as
e2
Ueff (r ) ≈ 3 cos(2 p F r ). (A.4)
r
The non-analytic behaviour of the polarization operator at p = ±2 p f is due to
the sharp edges of the Fermi distribution; the momentum transmission ±2 p F clearly
corresponds to the transitions between the opposite points of the Fermi surface.
Therefore one expects that when the Fermi step is smeared by finite temperature or
interactions (like in a superconductor), an exponential screening should be restored.
This is indeed the case (see [3], p.180). At a finite temperature T in a normal system,
266 Appendix A: Friedel Oscillations

or at zero temperature in a superconductor with the superconducting gap , Eq. (A.4)


acquires an exponentially decaying factor, respectively

exp[−2πr (mk B T /2 p F )] or exp[−r p F (/ε F )].

You should not be too surprized realizing that the exponents, up to a factor of order
unity, are simply the ratios of the distance r to the normal metal (resp. superconduct-
ing) coherence length,
v F 2v F
and ,
kB T π

which provide the natural length scales for these systems.


Appendix B
Landauer Formalism for Hybrid
Normal-Superconducting Structures

B.1 The Landauer–Lambert formula

An important generalization of Landauer’s formula (3.176) was made by Lambert


[7]. He considered a situation in which besides elastic scatterers, the system contains
a superconducting “island” (Fig. B.1). We address here the simplest, single-channel,
case. That is, in the spirit of Landauer’s approach, the equilibrium electronic reser-
voirs are connected to the “scattering part” of the system by perfect one-dimensional
leads A, A∼ . The former can be considered as a “black box” containing, along with
normal conductors and scatterers, a superconductor in some way connected to the
rest of the system.
Sweeping all the details of this inner structure under the rug, we can describe it
by a 4 × 4 matrix P̂ that gives us probabilities for a quasiparticle injected in a lead
from the respective reservoir to be scattered to this or the other lead:
⎛ ⎞
Ree Reh Tee∼ Teh∼
⎜ Rhe Rhh The∼ Thh∼ ⎟
P̂ = ⎜
⎝ Te∼ e
⎟. (B.1)
Te∼ h Re∼ e∼ Re∼ h ∼ ⎠
Th∼ e Th∼ h Rh ∼ e∼ Rh ∼ h ∼

The matrix has a rich structure (a purely normal system would be described simply
by transmission, T , and reflection, R, coefficients, related by the unitarity condition
T + R = 1), because in the presence of a superconductor, quasiparticles can switch
between particle and hole branches of the spectrum due to Andreev reflections (as
in Fig. 4.16). For example, Ree (Reh ) is the probability for an electron in the left
lead to be reflected as an electron (hole) to the same lead, while Te∼ e (Th ∼ e ) gives the
probability of its normal (Andreev) transmission to the other lead, etc. (see Fig. B.3).
In other words, we have added to the system an off-diagonal scattering potential.
The probability flux conservation (unitarity) requires that the elements in any row or
column of P̂ add up to unity:

A. Zagoskin, Quantum Theory of Many-Body Systems, 267


Graduate Texts in Physics, DOI: 10.1007/978-3-319-07049-0,
© Springer International Publishing Switzerland 2014
268 Appendix B: Landauer Formalism for Hybrid Normal-Superconducting Structures

Fig. B.1 Landauer conductance in a normal-superconducting system

Fig. B.2 Measuring quasiparticle energies from zero or from μ0 : two equivalent pictures. Occupied
states are shown by solid lines for quasielectrons, dotted lines for quasiholes

 
Pij = 1; Pij = 1. (B.2)
j i

Let us denote the chemical potential of the superconductor by μ0 . If we apply a


small bias eV = μ − μ∼ between the normal reservoirs, evidently μ > μ0 > μ∼ . In
the presence of a superconductor, it is expedient to use the “folded” dispersion law
(Fig. B.1), which we used when considering Andreev reflections in Sect. 4.5, and to
measure the quasiparticle energies from μ0 . Then we see that at zero temperature
the left reservoir injects into the system quasielectrons with energies in the interval
[0, μ − μ0 ]; the right reservoir injects quasiholes, with energies within [0, μ0 − μ∼ ]
(Fig. B.2).
To calculate the two-probe conductance of the system, we now find the current
in, e.g., lead A and divide it by (μ − μ∼ ):

I
G= .
V
Appendix B: Landauer Formalism for Hybrid Normal-Superconducting Structures 269

Fig. B.3 Matrix P̂ and the


physical sense of its elements

The current is
2
= ev F · [(μ − μ0 )(1 − Ree + Rhe ) + (μ0 − μ∼ )(Thh∼ − Teh∼ )]. (B.3)
hv F

Here 2/(hv F ) is the one-dimensional density of states per velocity direction, and the
meaning of the terms in brackets is self-evident.
Now we must somehow get rid of μ0 . This can be done if we impose the condition
that there be no net electric current in or out of the superconductor. This will be so, e.g.,
if the superconductor is finite: otherwise it would accumulate electric charge until
its field stops the further charge transfer. This gives us an extra equation necessary
to exclude μ0 from the answer.
The total current from the left reservoir flowing into the system is carried by
quasielectrons and equals

2e 4e
λi = (μ − μ0 )(1 − Ree + Rhe − Te∼ e + Th∼ e ) = (μ − μ0 )(Rhe + Th∼ e ) (B.4)
h h
(we have used 1 = Ree + Rhe + Te∼ e + Th∼ e ). The current from the right reservoir is
carried by quasiholes (thus the minus sign):

4e
λi ∼ = − (μ0 − μ∼ )(Rh ∼ e∼ + Teh∼ ). (B.5)
h

From the “no-charging” condition λi + λi ∼ = 0 we find

Re∼ h∼ + Teh∼
μ − μ0 = (μ − μ∼ ) , (B.6)
Re∼ h ∼ + Teh∼ + Rhe + Th∼ e
Rhe + Th∼ e
μ − μ0 = (μ − μ∼ ) ,
Re∼ h ∼ + Teh∼ + Rhe + Th∼ e

and
270 Appendix B: Landauer Formalism for Hybrid Normal-Superconducting Structures

Ie
G=
μ − μ∼
2e2 (Re∼ h∼ + Teh∼ )(1 − Ree + Rhe ) + (Rhe + Th∼ e )(Thh∼ − Teh∼ )
=
h Re∼ h∼ + Teh∼ + Rhe + Th∼ e
2e (Re∼ h∼ + Thh∼ )(Rhe + Th∼ e ) + (Re∼ h∼ + Teh∼ )(Te∼ e + Rhe )
2
= . (B.7)
h Re∼ h∼ + Teh∼ + Rhe + Th∼ e

If there is particle–hole symmetry (Rhh = Ree = R N , Reh = Rhe = R A , The∼ =


Teh∼ = T A∼ etc.), N , A denoting normal and Andreev processes, then the above
formula reduces to

2e2 (R ∼A + T A∼ )(R A + TN ) + (R A + T A )(R ∼A + TN∼ )


G= . (B.8)
h R ∼A + T A∼ + R A + T A

Finally, if the system is spatially symmetric (that is, the difference between primed
and nonprimed coefficients disappears), we see that simply

2e2
G= (TN + R A ). (B.9)
h
This is an intuitively clear result: In addition to the normal transmission channel,
2e2
h TN , which we had in the normal case, another conductivity channel opens due to
Andreev reflections.
This is only one of Landauer-type formulas that describe conductivity of normal-
superconducting mesoscopic systems. We could, e.g., calculate the four-probe con-
ductance,
Ie
G̃ = ,
μ A − μ A∼

where μ A , μ A∼ are the chemical potentials in the leads. Evidently, μ > μ A ∝ μ0 ∝


μ A∼ > μ∼ , and therefore G̃ > G. For example, if there are no scatterers in the system,
then μ A = μ∼A , and G̃ becomes infinite. On the other hand, G = 2e2 / h stays finite,
being an inverse of what is an analogue to Sharvin resistance of a clean point contact.
The chemical potentials of the wires are determined by the charge densities
brought there by the currents, that is (we must take into account both directions
of velocity, hence the factor of 2):

2 2
2× (μ A − μ0 ) = [(μ − μ0 )(1 − Rhe + Ree ) + (μ0 − μ∼ )(Teh∼ − Thh∼ )],
hv F hv F
2 2
2× (μ A∼ − μ0 ) = [(μ0 − μ A∼ )(−1 − Rh∼ h∼ + Re∼ h∼ ) (B.10)
hv F hv F
+ (μ − μ0 )(Te∼ e − Th∼ e )].
Appendix B: Landauer Formalism for Hybrid Normal-Superconducting Structures 271

Finding from here μ A − μ A∼ and eliminating μ0 as before, we obtain the expression

2e2 (Re∼ h∼ + Teh∼ )(Rhe + Te∼ e ) + (Rhe + Th∼ e )(Re∼ h∼ + Thh∼ )


G̃ = , (B.11)
h (Re∼ h∼ + Teh∼ )(Ree + Th∼ e ) + (Rhe + Th∼ e )(Rh∼ h∼ + Teh∼ )

which in the case of particle–hole and spatial symmetry simplifies to

2e2 R A + TN
G̃ = . (B.12)
h R N + TA

By the way, if there is no superconductor in the system, then R A = 0, T A =


0, TN + R N = 1, and
2e2 TN 2e2 TN
G̃ = = . (B.13)
h RN h 1 − TN

This is the original Landauer formula for the four-point conductance; due to (1− TN )
in the denominator, it indeed diverges in the limit of ideal transparency of the barrier,
TN √ 1.
Many more theoretical and experimental results in this very dynamic field are
discussed in [1, 2, 4].

B.2 Giant Conductance Oscillations in Ballistic Andreev


Interferometers

As an example, we will consider an Andreev interferometer, that is, a mesoscopic


device, where the “black box” of Fig. B.1 contains two or more separate NS inter-
faces with different superconducting phases. Since as we know, Andreev reflection
coefficients are phase sensitive, the resulting conductance between normal reservoirs
may depend on the (controllable) superconducting phase difference between these
interfaces.
A simple version of this device is shown in Fig. B.4: a ballistic Andreev inter-
ferometer. It is essentially a clean SNS junction, its normal part being a wire AD
with N∗ transverse modes, to which normal electronic reservoirs are only weakly
linked in points B and C. Those points are the only places where normal scattering
takes place: The quasiparticles fly through the wire ballistically, and reflections at
the NS interfaces are purely of Andreev type. To further simplify the situation, we
assume that normal scattering does not mix different transverse modes. Thus the
wire reduces to a stack of independent one-dimensional
 “wires,” each with its own
||
effective longitudinal Fermi velocity v F,α = v 2F − (v ∗ 2 ∗
F,v ) (v F,v being determined
by the transverse quantization conditions).
The longitudinal motion of the electrons in a vth transverse mode is quantized,
giving rise to a set of Andreev levels (see 4.175)
272 Appendix B: Landauer Formalism for Hybrid Normal-Superconducting Structures

Fig. B.4 Landauer conductance of a ballistic Andreev interferometer. Below Andreev levels in the
vth and v ∗ th transverse modes (see text)

||
±
v F,v
E v,n = ((2n + 1)π ↑ δ) (B.14)
2L

controlled by the superconducting phase difference δ = δ − δ∼ . The normal


conductance of the system is due to the current-carrying states at the Fermi energy
(that is, the levels with E ∓ 0, the Fermi level being our reference point). Therefore,
if the Andreev level coincides with the Fermi level, E = 0, we should expect a
resonant peak in the conductance of the order of maximal quantum conductance
2e2 / h.
Because longitudinal velocities in different modes differ, typically Andreev lev-
els are nondegenerate (see Fig. B.4). But it is clear from (B.14) that the condition
± = 0,
E v,n
δ = ±(2n + 1)π, (B.15)

||
is independent on v F,α ! When it is satisfied, not one, but N∗ Andreev levels (one
in each of N∗ transverse modes) are simultaneously aligned with the Fermi energy,
thus producing a giant conductance peak with amplitude N∗ 2e2 / h. The width of the
peak is of the order of the single-electron transparency of the barriers at B, C.
There is one more interesting detail. The very Andreev levels that are responsible
for the normal conductance are responsible for the Josephson current that flows
between the superconductors when δ ∞= 2πn. When the Andreev level coincides
Appendix B: Landauer Formalism for Hybrid Normal-Superconducting Structures 273

with the Fermi level, the Josephson current abruptly changes sign, while normal
conductance peaks. One could therefore infer that there is a relation between I J and
G in this system such as G(δ) ∓ −ω I J (δ)/ωδ.
The quantitative consideration of the problem [6] is based on the Landauer-
Lambert formalism of the previous paragraph. (The presence of the Josephson current
is consistent with our assumption that there is no net current flow in or out of the
superconducting part of the system.) We assume that the electron–hole symmetry
holds and that the normal part of the system is spatially symmetric. Then the con-
ductance can be expressed as
 ∇  
2e2 ωn F (∂)
G= 2 d∂(TN (∂) + R A (∂)) − + θ, (B.16)
h 0 ω∂

where TN (∂)(R A (∂)) is the probability for normal transmission (Andreev reflection)
of an electron incident from the left normal reservoir with energy ∂, n F (∂) is the
Fermi distribution function, and the energy ∂ is measured from the Fermi level.
(This is an evident generalization of our formula (B.9) to finite temperatures.) The
additional term θ in (B.16) reflects the fact that (B.9) requires spatial symmetry of
the system, which would also include δ = δ∼ . In the presence of finite δ we should
use formula (B.8) instead, but due to the fact that this correction term is a rapidly
oscillating function of the electron momentum (θ ≈ exp 2ik F L), we can neglect it
if we are not interested in investigating the fine structure of conductance peaks.
The scattering coefficients in (B.16) can be found by solving the Bogoliubov-de
Gennes equations in the normal part of the system. To do this, we must somehow
describe scattering of electrons and holes at the junctions B and C. This is conve-
niently done by introducing (after [5]) identical, real scattering matrices
⎛ √ ⎞
−ε/2 1 − ε/2 √ε
S = ⎝ 1√− ε/2 √
−ε/2 ε ⎠. (B.17)
ε ε −1 + ε

Here ε  1 parametrizes the weak coupling of the system to normal reservoirs.


For example, a quasiparticle reflected from the left superconductor has probability
|1 − ε/2|2 ∓ 1 to pass through B√unhindered, while the probability of it being
diverted to the normal reservoir is | ε|2 and of being reflected back only | − ε/2|2 ,
while a quasiparticle incident from the left normal reservoir is reflected back with
probability | − 1 + ε|2 ∓ 1.
Neglecting all quickly oscillating terms and all terms of order higher than ε2 , we
finally obtain



1 2
TN (∂) ∓ R A (∂) ∓ . (B.18)
1 + 2ε2 + cos(δ + ψ 2L
∂)
ψ=±1 v ||F
274 Appendix B: Landauer Formalism for Hybrid Normal-Superconducting Structures

Fig. B.5 Phase dependence of the normal conductance (a) and Josephson current (b) in a ballistic
Andreev interferometer at zero temperature and ε = 0.1

Resonance is achieved at energies of Andreev levels (B.14), in agreement with our


qualitative reasoning.
At zero temperature the resonant conductance depends only on TN (0), R A (0).
Since the contribution to the resonant conductance of each transverse mode is exactly
the same, the total resonant conductance of the system (within accuracy of ε2 ) is
(Fig. B.5a)
2e2 2ε2
G(δ) = N∗ . (B.19)
h 1 + 2ε2 + cos δ

We have described the method of calculation of the Josephson current in this system
in Sect. 4.5.4. The only difference is that Andreev levels are broadened not due to
impurity scattering, but due to ε-proportional “leakage” into the normal reservoirs,
and we take T = 0. As a result,
|| ∇
2ev F  (−1)n+1 e−2|n|ε sinnδ
I J(ε) (δ) = N∗ , (B.20)
πL n
n=1

  N∗ 
where v F = N∗−1 α=1 v F,v (Fig. B.5b).
Comparing (B.20) and (B.19), we see that the normal conductance and Josephson
current in this system are indeed related by
 (ε)

eL dI J 2e2
G(δ) = ε − || + N∗ (B.21)
v dδ F
h

(within the accuracy of ε2 , that is, neglecting the details of the conductance peak
structure on a finer scale).
Appendix B: Landauer Formalism for Hybrid Normal-Superconducting Structures 275

References

Books and reviews

1. Beenakker, C.W.J.: Quantum transport in semiconductor-superconductor microjunctions. In:


Akkermans, E. Montanbaux, G. Pichard, J.L. (eds.). Mesoscopic Quantum Physics North-
Holland, Amsterdam (1994)
2. Beenakker, C.W.J.: Random-matrix theory of quantum transport. Rev. Mod. Phys. 69, 731 (1997)
3. Fetter, A.L., Walecka, J.D.: Quantum Theory of Many-Particle Systems. McGraw-Hill, San
Francisco (1971)
4. Lambert, C.J., Raimondi, R.: Phase-coherent transport in hybrid superconducting nanostruc-
tures. J. Phys. Cond. Matter 10, 901 (1998)

Articles

5. Büttiker, M., Imry, Y.: Phys. Rev. A 30, 1982 (1984)


6. Kadigrobov, A., Zagoskin, A., Shekhter, R.I., Jonson, M.: Phys. Rev. B 52, R8662 (1995)
7. Lambert, C.J.: J. Phys.: Cond. Matter 3, 6579 (1991)
Index

A C
Abel regularization, 74, 242 Callen–Welton formula, 124
Action, 14ff. Cancellation theorem, 81
Adiabatic hypothesis, 74, 101, 109 Canonical ensemble, 58, 59
Aharonov-Bohm effect, 21 Cauchy
Anderson orthogonality exponent, 230, 255 formula, 234
Andreev theorem, 69
levels in an SNS junction, 204ff. Charging energy, 219, 222
and Josephson current, 205, 208 Chronological ordering operator, see Time
reflection, 197ff. ordering operator
Annihilation operator, 36, 38, 48 Closure relation, 19, 229, 257
Anomalous average, 165
Coherence factors, 177
Autocorrelation function, 122
Coherence length in normal metal, 203, 266
Autocovariation function, 122
Collision integral, 136, 145
Commutation (anticommutation) relations
between phase and number operators, 45
B Bose, 39
Baker-Hausdorff Fermi, 49
formula, 243 Completeness, see Closure relation
theorem, 244 Conductance, 125, 143, 153
Bardeen–Cooper–Schrieffer (BCS) Hamil- quantization, 142ff.
tonian, 172ff. Conformal
Bare particle, 5, 7
dimensions, 255
BBGKY chain, 58
field theory, 255
Bethe–Salpeter equation, 91ff., 168
symmetry, 254
Bloch equation, 109, 110
Bogoliubov functional, 174 Continual integral, see Path integral
Bogoliubov-de Gennes equations, 176ff. Contraction, 76
transformation, 175ff. Cooper
Bogolon, 177ff. pair, 163ff.
Bose-Einstein statistics, 34 pairing, 160
Bosonization, 238ff. Coulomb
bosonization formulas, 242, 245 blockade, 218ff.
Bound state formation, 160, 229 potential screening, 3, 89, 263
Boundary condition changing operators, 256 Creation operator, 36, 48

A. Zagoskin, Quantum Theory of Many-Body Systems, 277


Graduate Texts in Physics, DOI: 10.1007/978-3-319-07049-0,
© Springer International Publishing Switzerland 2014
278 Index

D G
Debye Gaussian integral, 12, 18, 20
frequency, 7, 158 Gaussian integrals, 173
Hückel screening length, 4, 155 Generalized force, 120
Density matrix, see Statistical operator Generalized susceptibility, 121
Density operators, 238, 239 Gibbs, 59, 104
Dirac Gor’kov
“bra” and “ket” notation, 18 equations, 185ff.
interaction representation, 29 Green’s function, 181
Dispersion law, 66, 68, 92, 137, 178, 200, Gradient expansion, 134, 191
228, 231 Grand canonical ensemble, 58, 59
Dressed particle, 5, 55 Grand potential, 59, 71, 85, 99, 103
Dyson equation of a superconductor, 179
for Green’s function, 134, 182 and Josephson curent, 207
for the vertex part, 95 Green’s function, 11, 53ff.
Dyson expansion, 29, 115, 119 n-particle, 91
advanced, 12, 67ff., 103ff., 121, 129
analytic properties, 62ff., 103ff., 185
and observables, 70
E causal, 57, 103ff., 186
Electron–phonon nonequilibrium, 126ff.
collision integral, 145ff. of two operators, 120
coupling constant, 185 retarded, 11, 67ff., 103ff., 186
Elementary excitation, see Quasiparticle temperature (Matsubara), 111ff., 187
Eliashberg equations, 185
Euler-MacLaurin formula, 258
Evolution operator, 25ff., 110 H
Hamiltonian function, 17
Hartree–Fock approximation, 92
F Heaviside step function, 11
Fermi Heisenberg
anticommutation relations, 49 equations of motion, 27, 44, 62, 111, 150,
177
Dirac statistics, 34
representation, 28, 29, 59, 60
distribution function, 91, 210, 211
Hilbert space, 18, 29, 35
edge singularity (FES), 228
Holomorphic (antiholomorphic), 254
exponent, 228
Hooke’s law, 118
field operators, 49
Huygens principle, 10
momentum, 3
surface, 56, 141, 144, 160ff., 236
Feynman I
diagrams, 31, 32, 71ff., 117, 132, 133, Impedance, 125
182, 183 Indistinguishability principle, 34
path integrals, 14ff. Instantaneous eigenstates, 30
rules, 31, 71ff., 117, 132, 133, 182, 183ff. Interaction representation, 40, 59, 73, 114,
Field operators, 39, 46, 49 121
equations of motion, 27, 50, 66, 177, 188,
237, 244
Fluctuation-dissipation theorem, 122ff. J
Flux quantum, 23 Jellium model, 2
Fock space, 36, 40, 245 Johnson–Nyquist noise, 122, 125
Friedel oscillations, 91, 171, 263 Josephson
Functional, 14, 174 coupling energy, 222
Functional integrals, see Path integrals current, 205ff.
Index 279

effect, 149, 166, 204ff. N


frequen cy, 215 Nambu
Gor’kov formalism, 181ff.
operators, 181
K Normal ordering, 75
Källén–Lehmann representation, 62ff., 88, Number
104, 114ff., 140, 186, 211 operator, 39, 43
Keldysh phase uncertainty relation, 43ff.
contour, 128, 131 Nyquist theorem, 125
equation, 134
formalism, 109, 131ff., 151, 215
Green’s function, 130ff., 151 O
Klein factors, 245 Occupation number, 38
Kohn–Luttinger pairing mechanism, 171 Off-diagonal long-range order (ODLRO),
Kramers–Kronig relation, 70, 106 165
Kubo formula, 118, 121 Ohm’s law, 118
Kubo–Martin–Schwinger identity, 123 Order parameter, 165, 178, 179, 192,
195, 213

L P
Ladder approximation Pairing
for the polarization operator, 89 in superconducting state, 160, 164
for the two-particle Green’s function, 99, Hamiltonian, see BCS Hamiltonian
168 potential, 164, 165, 182, 183
Ladder operators, 245 of field operators, see Contraction
Lagrange function, 14 Parity
Landau, 55, 170 effect, 166, 220
criterion, 195 of a permutation, 47
Landauer formula, 142ff., 153, 271 Partial current, 141, 144
Left-movers, 236, 238 Partial summation, 85, 86
Linear response theory, 118ff. Partial summation of Feynman diagrams,
Liouville equation, 103 85, 86
Luttinger liquid, see Tomonaga-Luttinger Path (functional) integrals, 16, 20, 32, 174,
liquid 175
parameter, 247, 248 Pauli principle, 35, 46
Luttinger model, see Tomonaga-Luttinger Perturbation expansion
model in many-body theory, 71, 84, 114
in one-body theory, 24, 31
Phase
M coherence, 21, 158, 178
Mass operator, 87 coherence length, 20
Matsubara number uncertainty relation, 44ff.
conjugate, 110 Phonon, 2, 54, 56, 76, 242, 248
field operator, 110 Green’s function (propagator), 61, 85,
formalism, 106, 109ff., 187 132, 134, 183
frequencies, 112, 170, 192 Plasma frequency, 7
Green’s function, see Green’s function Plasmon, 7
summation frequencies, 116 Plemelj theorem, 70
Mean field approximation (MFA), 3, 5, 163, Poisson summation formula, 212
217 Polarization
Meromorphic function, 64 insertion, 88
Mesoscopic system, 20, 141, 270 operator, 88–90, 171, 227, 263, 265
Mixed state, 102 Primary field, 255
280 Index

Propagation function (propagator), 8ff. Superfluid velocity, 189


composition property, 10
path integral expression, 17
Proximity effect, 203 T
Pure state, 102 Temperature Green’s function, see Green’s
function
Temperature ordering operator, 111
Q Thomas–Fermi
Quantum coherence, 165, 168 equation, 3
Quantum kinetic equation, 133ff. screening length, 4, 89
Quantum point contact (QPC), 137, 143, 206 Time ordering operator, 25, 128
Quasiparticle, 2, 53–55, 66–68, 92, 106, 136, in imaginary time, see Temperature
163, 177, 187, 197, 198, 216, 267 ordering operator
lifetime, 54, 69, 88, 136, 187 Tomonaga-Luttinger
liquid, 246ff.
model, 235ff.
R Transition amplitude, 10, 15, 16
Random phase approximation, 89, 90 Tunneling Hamiltonian, 149ff.
Refermionization, 249 Two-particle excitations, 92
Renormalization
of energy spectrum, 136, 147
of interactions, 88 U
Right-movers, 236, 239 Umklapp process, 246

S V
S-operator (S-matrix), 26, 73, 114, 127 Vacuum, 36, 45, 158
Schrödinger Vector potential, 22, 189ff.
equation, 11, 18, 24, 54, 110 Vertex function, 91–94
representation, 24, 40
Screening, 3, 4, 6, 89, 90, 227, 263
Second quantization, 33ff. W
representation of operators, 41, 49 Watson’s lemma, 12, 69
Self energy, 84 Weierstrass formaula, 65
matrix, 134 Wick’s
part, 86 rotation, 110
proper, 86, 143, 183 theorem, 75ff.
Sharvin resistance, 141ff., 208, 270 Wiener–Khintchin theorem, 123
Slater determinant, 35, 46, 230 Wigner function, 134
SND junction, 213, 214
SNS junction, 204ff., 271
Spectral density Y
of fluctuations, 122 Yukawa potential, 4, 90
of Green’s function, 68, 105–107, 186
Spontaneous symmetry breaking, 167
Statistical operator, 57, 58, 102ff. Z
Sum rule, 107 Zero mode, 240

You might also like