Statistical Physics
Release 1.38
Lei Ma
Contents

1 Acknowledgement
2 Introduction
3 Vocabulary
  3.1 Vocabulary and Program
    3.1.1 Vocabulary
    3.1.2 Green Function
    3.1.3 Program
4 Equilibrium System
  4.1 Equilibrium Statistical Mechanics
    4.1.1 Summary 1
    4.1.2 Week 1
    4.1.3 Week 2
    4.1.4 Week 3
    4.1.5 Week 4
    4.1.6 Week 5
    4.1.7 Week 6
5 Nonequilibrium System
  5.1 Stochastic And Non-Equilibrium Statistical Mechanics
    5.1.1 Week 7
    5.1.2 Week 8
    5.1.3 Week 9
    5.1.4 Week 10
    5.1.5 Week 11
    5.1.6 Week 12
    5.1.7 Effect of Defects
    5.1.8 Quantum Master Equation
    5.1.9 Brownian Motion
Index
CHAPTER ONE: ACKNOWLEDGEMENT
I have finished the whole set of notes for my statistical mechanics class, taught by V. M. Kenkre in Spring 2014. Professor Kenkre's lectures are as fantastic as an excellent thriller, without which I could never have finished notes like this. I am very grateful to him for this adventure into modern statistical mechanics. The words I can think of to describe these lectures are the words used to praise the great Chinese novel, Dream of the Red Chamber, a line too hard to translate. It basically means that a subplot permeates thousands of pages before people realize its importance. Professor Kenkre's lectures have exactly such power: a tiny hint can develop into an important idea as the lectures go on.
I am also very grateful to the TA of this course, Anastasia, who helped me a lot with my homework and lecture notes.
CHAPTER TWO: INTRODUCTION
Statistical physics is a central topic of physics. It has taught us great lessons about nature and it is definitely going to teach us more. Some ideas (Verlinde's scenario, for example) even place thermodynamics and statistical physics at the foundation of all theories, which leads to the claim that everything is emergent.
Basically, statistical mechanics is the mechanics of large bodies.
• Mechanics is Newton's plan of kinematics.
• Large means a lot of DoFs. The DoFs usually add up to $10^{23}$, which is the order of Avogadro's number.
• Bodies, of course, are the subjects or systems we are dealing with.
One question worth thinking about is how we end up with probabilities.
We wouldn't need probability theory if we could carry out Newton's plan exactly. Note that the first thing we drop to get over the obstacles is the initial condition, because it is impossible to write down the initial condition of every particle in a large system. The price is that we have to use probability to describe the system. Later on we find that some of the dynamics also has to be dropped to make the problem calculable, which gives us other sources of probability.
It's kind of disappointing that stat mech is so vague. I feel unsafe.
CHAPTER THREE: VOCABULARY

3.1 Vocabulary and Program

3.1.1 Vocabulary
Gaussian Integral

$$\int_{-\infty}^{\infty} e^{-a x^{2}}\,\mathrm{d}x = \sqrt{\frac{\pi}{a}}$$
Behavior of Functions

0. Boltzmann factor: the most important and weirdest function in stat mech
1. $\tanh(x)$
2. $1 - e^{-x}$
3. $\cosh(1/x) - 1/x$
4. $1/(1 + 1/x)$
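To get a feeling for these shapes, here is a minimal matplotlib sketch (my own addition, in the same pylab spirit as the magnetization notebook later in these notes):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.05, 5, 500)    # avoid x = 0, where 1/x blows up
plt.plot(x, np.tanh(x), label='tanh(x)')
plt.plot(x, 1 - np.exp(-x), label='1 - exp(-x)')
plt.plot(x, np.cosh(1/x) - 1/x, label='cosh(1/x) - 1/x')
plt.plot(x, 1/(1 + 1/x), label='1/(1 + 1/x)')
plt.ylim(0, 2)                   # clip the divergence of cosh(1/x) near 0
plt.legend()
plt.show()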
Fourier Transform
Laplace Transform
5. $L[e^{at}] = \frac{1}{s - a}$.
A very nice property of the Laplace transform is

$$L_s[e^{-at} f(t)] = \int_0^{\infty} e^{-st}\, e^{-at} f(t)\,\mathrm{d}t = \int_0^{\infty} e^{-(s+a)t} f(t)\,\mathrm{d}t = L_{s+a}[f(t)]$$
and

$$L[J_0(2Ft)] = \frac{1}{\sqrt{\epsilon^2 + (2F)^2}},$$

where $I_0(2Ft)$ is the modified Bessel function of the first kind and $J_0(2Ft)$ is its companion.

Using the shift property above, we can find

$$L[I_0(2Ft)\,e^{-2Ft}] = \frac{1}{\sqrt{(\epsilon + 2F)^2 - (2F)^2}}.$$
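A quick numerical check of this transform pair (my own sketch; scipy's ive conveniently computes the exponentially scaled Bessel function $I_0(z)e^{-z}$):

import numpy as np
from scipy.integrate import quad
from scipy.special import ive

F, eps = 1.0, 0.5
# left side: numerically integrate e^{-eps t} I_0(2Ft) e^{-2Ft}
lhs, _ = quad(lambda t: np.exp(-eps * t) * ive(0, 2 * F * t), 0, np.inf)
rhs = 1 / np.sqrt((eps + 2 * F)**2 - (2 * F)**2)
print(lhs, rhs)   # the two numbers agree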
[Figure: plots of $1 - e^{-\alpha x}$, $\tanh(x)$, and $\cosh(1/x) - 1/x$.]
Legendre Transform

The geometrical or physical meaning of the Legendre transformation in thermodynamics can be illustrated by the following graph.

For example, we know that entropy $S$ is actually a function of temperature $T$. For simplicity, we assume that they are monotonically related, as in the graph above. When we talk about the quantity $T\,\mathrm{d}S$ we actually mean the area shaded with blue grid lines, while the area shaded with orange lines means $S\,\mathrm{d}T$.

Let's think about the change in internal energy, where only the thermal part is considered here, that is,

$$\mathrm{d}U = T\,\mathrm{d}S.$$

So the internal energy change is equal to the area shaded with blue lines. Now think about a three-dimensional graph with a third axis for internal energy, which I can't show here. Notice that the line of internal energy lies in the plane which is perpendicular to the $T,S$ plane and contains the black line in the graph above. The change of internal energy for an increase $\mathrm{d}S$ is the amount the internal energy line goes up.

Now we perform the transform that removes the internal energy from $\mathrm{d}(TS)$, which finally gives us (minus) the Helmholtz free energy,

$$\mathrm{d}(TS - U) = S\,\mathrm{d}T, \qquad \text{i.e.,}\qquad \mathrm{d}A = -S\,\mathrm{d}T.$$

It's obvious that after this Legendre transform, the new area is the part shaded with orange lines.

Now the key point is that $S(T)$ is a function of $T$. So if we know the blue area we can find the orange area, which means that the two functions $A(T)$ and $U(S)$ carry exactly the same information. Choosing one of them for a specific calculation is a kind of freedom and won't change the final results.
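As a concrete sketch of this duality (a toy example of my own, not from the lecture), take $U(S) = aS^2$ with a constant $a > 0$:

$$U(S) = aS^{2} \;\Rightarrow\; T = \frac{\mathrm{d}U}{\mathrm{d}S} = 2aS \;\Rightarrow\; A(T) = U - TS = -\frac{T^{2}}{4a}, \qquad \frac{\mathrm{d}A}{\mathrm{d}T} = -\frac{T}{2a} = -S,$$

so $A(T)$ indeed reproduces $\mathrm{d}A = -S\,\mathrm{d}T$ and contains exactly the same information as $U(S)$.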
Thermodynamics

Thermodynamics is the description of large systems, and is mostly about the following key points (A Modern Course in Statistical Physics by L. E. Reichl):
1. Thermodynamic variables: extensive, intensive, or neither;
2. Equations of state;
3. Four fundamental laws of thermodynamics;
4. Thermodynamic potentials;
5. Phase transitions;
6. Response;
7. Stability.
Anyway, thermodynamics is a kind of theory that deals with black boxes. We manipulate whatever variables we like and look at the changes. Then we summarize and get a set of laws.
Laws

Zeroth Law: A first feeling about temperature

Two bodies, each in thermodynamic equilibrium with a third system, are in thermodynamic equilibrium with each other.

This gives us the idea that there is a universal quantity which depends only on the state of the system, no matter what the systems are made of.
Laws

First Law: Conservation of energy

Energy can be transferred or transformed, but cannot be destroyed.

In math,

$$\mathrm{d}U = W + Q,$$

where $W$ is the work done on the system and $Q$ is the heat given to the system. A better way to write this is to make up a one-form $\Omega$,

$$\Omega \equiv \mathrm{d}U - W - Q = 0.$$

Using Legendre transformations, we know that this one-form has many different forms.
Laws

Second Law: Entropy change; heat flow direction; efficiency of heat engines

There are three different versions of the second law. Instead of statements, I would like to use two inequalities to demonstrate this law:

$$\eta = \frac{\Delta W}{\Delta Q} \le 1$$

$$\mathrm{d}S \ge 0$$

Combining the second law with the first law, for reversible processes $Q = T\,\mathrm{d}S$ and (for an ideal gas) $W = -p\,\mathrm{d}V$, so

$$\Omega \equiv \mathrm{d}U - T\,\mathrm{d}S + p\,\mathrm{d}V = 0.$$

Take the exterior derivative of the whole one-form, and notice that $\mathrm{d}U$ is exact,

$$-\left.\frac{\partial T}{\partial V}\right|_{S}\mathrm{d}V\wedge\mathrm{d}S + \left.\frac{\partial p}{\partial S}\right|_{V}\mathrm{d}S\wedge\mathrm{d}V = 0.$$

Cleaning up this equation gives one of the Maxwell relations, $\left.\frac{\partial T}{\partial V}\right|_{S} = -\left.\frac{\partial p}{\partial S}\right|_{V}$. Using Legendre transformations we can find all the Maxwell relations.
Laws

Third Law: Absolute zero; not an extrapolation; quantum view

The difference in entropy between states connected by a reversible process goes to zero in the limit $T \to 0\,\mathrm{K}$.

Due to this asymptotic behavior, one cannot get to absolute zero in a finite process.
Thermodynamic Potentials
1. Internal Energy
2. Enthalpy
3. Helmholtz Free Energy
4. Gibbs Free Energy
5. Grand Potential
The relations between them? All the potentials are Legendre transformations of each other. To sum up, let's draw a Gliffy diagram.
(The Gliffy source file is here. Feel free to download it and create your own version.)

This graph needs some illustration.

1. Legendre transformation: $ST - U(S)$ transforms a function $U(S)$ of the variable $S$ into another function $H(T)$. However, in thermodynamics using the opposite sign can be more convenient. In other words, $U(S)$ and $-H(T)$ are dual to each other.

2. Starting from this graph, we can find the differentials of the thermodynamic potentials. Next, take the partial derivatives of each thermodynamic potential with respect to its own variables. By comparing the partial derivatives with the definitions, we find expressions for the variables. Finally, different expressions for the same variable are equal, which gives the Maxwell relations.

3. As we have seen in 2, all the thermodynamic quantities can be obtained by taking derivatives of the thermodynamic potentials.

Hint: Question: Mathematically we can construct a sixth potential, namely the one that should appear at the bottom right of the graph. Why don't people talk about it?

We can surely define a new potential called $\mathrm{Null}(T, X, \{\mu_i\})$. However, the value of this function is zero, so its derivative is also zero. This is the Gibbs-Duhem equation.

The answer I want to hear is that this is something like $\mathrm{dd}f = 0$ where $f$ is exact.
The Entropy

When talking about entropy, we need to understand the properties of cycles. The most important one is that

$$\sum_{i=1}^{n}\frac{Q_i}{T_i} \le 0,$$

where the equality holds only if the cycle is reversible. In the same sense, for infinitesimal processes the equation becomes (for reversible cycles)

$$\oint\frac{\mathrm{d}Q}{T} = 0.$$

This is an elegant result. It is intuitive that we can build a correspondence between one path connecting two states and any other such path, since together they form a cycle. That being said, the integral

$$\int_A^B\frac{\mathrm{d}Q}{T}$$

is independent of the path on the state plane. We immediately define $\int_A^B\frac{\mathrm{d}Q}{T}$ as a new quantity, because we really like invariant quantities in physics, i.e.,

$$S(B) - S(A) = \int_A^B\frac{\mathrm{d}Q}{T},$$

which we call the entropy (difference). It is very important to realize that entropy is a quantity that depends only on the initial and final states and is independent of the path. Many significant results can be derived using only the fact that entropy is a function of state.

1. Adiabatic processes on the state plane never cross each other. Adiabatic lines are isentropic lines, since $\mathrm{d}S = \mathrm{d}Q/T$ and $\mathrm{d}Q = 0$ gives $\mathrm{d}S = 0$. The idea is that at a crossing point of adiabatic lines we would get a branch for the entropy, which means two entropies for one state.

2. No more than one crossing point of two isothermal lines is possible. To prove this we need to show that entropy is a monotonic function of $V$.

3. If isentropic lines could cross each other, we could extract heat from a single source at one temperature and transform it all into work: construct a cycle from an isothermal line intersecting two crossing isentropic lines. This is impossible, since entropy is a quantity of state.

4. Without entropy as a quantity of state, we could move heat from a low-temperature source to a high-temperature source without causing any other change.
3.1.2 Green Function

Consider a linear differential equation $L[y] = f(x)$ on $[a, b]$ with homogeneous boundary conditions

$$B_1[y] = B_2[y] = 0.$$

The solution is

$$y(x) = \int_a^b G(x|\xi)\,f(\xi)\,\mathrm{d}\xi,$$

where the Green function satisfies

$$L[G(x|\xi)] = \delta(x - \xi), \qquad B_1[G] = B_2[G] = 0.$$

The first derivative of the Green function, $G'(x|\xi)$, has a jump condition at $x = \xi$: the height of the jump discontinuity is 1.
Examples

Take the second-order equation $y'' = f(x)$ with $y(0) = y(1) = 0$. We know the two solutions of the homogeneous equation, $y = 1$ and $y = x$; only the second can satisfy the boundary condition at $x = 0$. So the Green function should have these properties,

$$G(x|\xi) = \begin{cases} c_1 + c_2 x & x < \xi \\ d_1 + d_2 x & x > \xi. \end{cases}$$

The boundary conditions reduce this to $G = c\,x$ for $x < \xi$ and $G = d\,(x - 1)$ for $x > \xi$. Continuity at $x = \xi$ gives

$$c\,\xi = d\,(\xi - 1),$$

and the unit jump of $G'$ gives

$$\frac{\mathrm{d}}{\mathrm{d}x}\big[d(x - 1)\big] - \frac{\mathrm{d}}{\mathrm{d}x}\big[c\,x\big] = d - c = 1.$$

Solving these gives $c = \xi - 1$ and $d = \xi$.
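A quick numerical check of the resulting Green function (my own sketch, with the test source $f(x) = \sin\pi x$, for which the exact solution of $y'' = f$, $y(0) = y(1) = 0$, is $y = -\sin(\pi x)/\pi^2$):

import numpy as np

def G(x, xi):
    # Green function of y'' = f on [0, 1] with y(0) = y(1) = 0
    return np.where(xi > x, (xi - 1) * x, xi * (x - 1))

f = lambda xi: np.sin(np.pi * xi)
xi = np.linspace(0, 1, 20001)
dxi = xi[1] - xi[0]

x = 0.3
y = np.sum(G(x, xi) * f(xi)) * dxi        # y(x) = int_0^1 G(x|xi) f(xi) dxi
print(y, -np.sin(np.pi * x) / np.pi**2)   # both approximately -0.0820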
3.1.3 Program
Most problems in stat mech have similar procedures. This page is for the programs of solving problems.
CHAPTER FOUR: EQUILIBRIUM SYSTEM

4.1 Equilibrium Statistical Mechanics
4.1.1 Summary 1
Here is the framework of lectures for the first six weeks. This part is a review of equilibrium statistical mechanics.
Review of Thermodynamics
Figure 4.1: The relationship between the different thermodynamic potentials. There are three different couplings and five different potentials. For more details, read the vocabulary.
The two approaches utilize the most probable distribution and the ensemble, respectively. However, they have some things in common:
1. Phase space
2. Liouville equation
Figure 4.2: A UML modeling of the two theories. Refer to Important box in week 6
Boltzmann Statistics
1. Two postulates: one is about the occurrence of states in phase space; the other is about which state the equilibrium system will be in.
1. Ensembles
2. Density of states; Liouville equation; Von Neumann equation
3. Equilibrium
4. Three ensembles
5. Observables
Oscillators
To do!
Heat Capacity
The Gibbs mixing paradox is important for the advent of quantum statistical mechanics.
Mean field theory is the idea of treating the interaction between particles as an interaction between the particles and a mean field.
The Van der Waals gas model can be derived using the Mayer expansion and the Lennard-Jones potential.
4.1.2 Week 1
Phase Space
Why is statistics important? Remember we are dealing with an Avogadro's number of DoFs. Suppose we tried to calculate the dynamics of this system by calculating the dynamics of each particle. To store one snapshot of the system, with each DoF taking only 8 bits, we would need $10^{23}$ bytes, that is, about $10^{14}$ GB. It is not even possible to store a single snapshot of the system. So it is time to change our view of these kinds of systems.
Figure 4.3: Newton’s plan of mechanics. Mechanics was in the center of all physics.
As we have already mentioned initial conditions, we need to explain how to describe a state. A vector in phase space gives us a state. Time evolution is the motion of points in phase space. Finally, we can do whatever is needed to extract observables, for example just use projections of points in phase space.
The problem is, when it comes to stat mech, it's not possible to treat all these DoFs one by one. We need a new concept.
Boltzmann Factor

$$\text{probability of a point in phase space} = \exp\left(-\frac{E}{k_B T}\right)$$

The Boltzmann factor gives us the (unnormalized) probability of the system being in a phase-space state with energy $E$.
Note: Why does the Boltzmann factor appear so often in equilibrium statistical mechanics? Equilibrium of the system means that when we add an infinitesimal amount of energy to the whole thing, system plus reservoir, a characteristic quantity $C(E) = C_S C_R$ won't change. That is, the system and the reservoir have the same rate of change of the (log of the) characteristic quantity when energy is exchanged. Since $\mathrm{d}E_S = -\mathrm{d}E_R$ in an equilibrium state,

$$\frac{\partial\ln C_S}{\partial E_S} = \frac{\partial\ln C_R}{\partial E_R}.$$

Both sides should be a constant, which we call $\beta$. Finally we have something like

$$\frac{\partial\ln C_S}{\partial E_S} = \beta.$$
Magnetization

Use an IPython notebook to display this result. The original notebook can be downloaded from here. (Just put the link into nbviewer and everyone can view it online.)

%pylab inline
from pylab import *

x = linspace(0, 10, 100)
y = tanh(x)
figure()
plot(x, y, 'r')
xlabel('External Magnetic Field')
ylabel('M')
title('Tanh theory')
show()
Heat Capacity
Another category of problems is temperature related, for example the study of the average energy as the temperature changes.
For the paramagnetic example, the energy of the system is
Obviously, no phase transition occurs. But if we introduce self-interactions between the dipoles and go to higher dimensions, it's possible to find phase transitions.
Importance of Dimensions
4.1.3 Week 2
Behaviors of functions

2. $1 - e^{-x}$
3. $\cosh(1/x) - 1/x$
4. $1/(1 + 1/x)$
Now we can plot out $\frac{1}{VP}$, and it shows a behavior just like $1/(1 + 1/x)$.
Note: To see the true nature of graphs, make quantities dimensionless. This is also true for theoretical derivations.
Dimensionless equations can reveal more.
Note: The nth derivative of this function is always 0 at x=0, for all finite n. Then how does it rise? The only thing I
can say is that we are actually dealing with infinite n.
Professor Kenkre: sleeping lion
Specific Heat

$$C = \frac{\mathrm{d}}{\mathrm{d}T}\langle E\rangle$$

Partition Function
Energy

$$E = \frac{1}{Z}\int\!\!\int e^{-\beta p^2/(2m)}\,e^{-\beta\frac{1}{2}qx^2}\,H\,\mathrm{d}p\,\mathrm{d}x = \cdots = k_B T$$

This is the power of the partition function. To continue the SHO example, we find that the specific heat is

$$C = k_B.$$
Note: This result has nothing to do with the details of the SHO: no matter what mass the oscillators have, no matter what potential constant $q$ they have, no matter what kind of initial state they have. All the characteristic quantities of the SHO are irrelevant. Why? Mathematically, it's because we have Gaussian integrals here. But what is the physics behind this? Basically, this classical limit is a high-temperature limit.
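As a quick check of this independence (my own sympy sketch, not part of the lecture), the Gaussian integrals give $\langle E\rangle = 1/\beta = k_B T$ with no trace of $m$ or $q$:

import sympy as sp

p, x, m, q, beta = sp.symbols('p x m q beta', positive=True)
H = p**2 / (2*m) + q * x**2 / 2      # classical SHO Hamiltonian
w = sp.exp(-beta * H)                # Boltzmann factor

Z = sp.integrate(w, (p, -sp.oo, sp.oo), (x, -sp.oo, sp.oo))
E = sp.integrate(H * w, (p, -sp.oo, sp.oo), (x, -sp.oo, sp.oo)) / Z
print(sp.simplify(E))   # 1/beta, i.e. k_B T, independent of m and q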
For the quantum SHO the energy levels are $E_n = (n + \frac{1}{2})\hbar\omega$, where $\omega = \sqrt{k/m}$.
Partition function

$$Z = \sum_n e^{-\beta E_n} = \frac{e^{-\frac{1}{2}\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}} = \frac{1}{2\sinh(\beta\hbar\omega/2)}$$
Energy

$$\langle E\rangle = -\frac{\partial}{\partial\beta}\ln Z = \hbar\omega\left(\frac{1}{2} + \frac{1}{\exp(\beta\hbar\omega) - 1}\right)$$
Specific heat

$$C = \frac{\partial}{\partial T}\langle E\rangle = k_B(\beta\hbar\omega)^2\,\frac{\exp(\beta\hbar\omega)}{(\exp(\beta\hbar\omega) - 1)^2}$$
Though we have infinite energy levels, the specific heat won’t blow up because the probability of a high energy level
state is very small.
Density of States

In another way,

$$A = -k_B T\ln Z.$$

As long as we find the partition function, all thermodynamic quantities are solved, including the entropy,

$$S = -k_B\left(\beta\frac{\partial}{\partial\beta}\ln Z - \ln Z\right).$$
Specific heat is very different in 1D, 2D, and 3D, even for similar Hamiltonians. In the case of dipole systems, this is because in 1D there are discrete energy levels, while in 2D and 3D the energy levels fill up the gap. But what's the difference between 2D and 3D? Obviously the degeneracies are different.

Generally,

$$Z = \int g(E)\,e^{-\beta E}\,\mathrm{d}E$$
Calculation of DoS

How do we calculate the DoS? For a given $E$, we have $k_x^2 + k_y^2 = \text{Constant}\cdot E$. The number of states between $E$ and $E + \mathrm{d}E$ is just the area of this shell divided by the area of each grid box,

$$N = \frac{\mathrm{d}V_k}{\mathrm{d}V_0} = \frac{2\pi k\,\mathrm{d}k}{\frac{2\pi}{L_x}\frac{2\pi}{L_y}} \equiv g(E)\,\mathrm{d}E.$$

The DoS $g(E)$ is

$$g(E) = \frac{2\pi k}{\frac{\mathrm{d}E}{\mathrm{d}k}}\,\frac{L_x L_y}{(2\pi)^2}.$$
Examples of DoS

1. 2D

$$\frac{\mathrm{d}E}{\mathrm{d}k} = \frac{\mathrm{d}}{\mathrm{d}k}\left(\frac{\hbar^2 k^2}{2m}\right) = \frac{\hbar^2 k}{m}$$

Thus we have

$$g(E) = \frac{2\pi k}{\hbar^2 k/m}\,\frac{L^2}{(2\pi)^2} = \left(\frac{m}{2\pi\hbar^2}\right)L^2.$$
2. 3D

$$E = \frac{\hbar^2 k^2}{2m}, \qquad \frac{\mathrm{d}E}{\mathrm{d}k} = \frac{\hbar^2 k}{m}.$$

DoS:

$$g(E) = \frac{m}{\hbar^2}\,\frac{L^3}{2\pi^2}\,k.$$

This is $k$ dependent.
3. 1D

$$g(E) = \frac{1}{k}\,\frac{mL}{2\pi\hbar^2}$$
Note: These results are very different. For a 1D system, the higher the energy, the smaller the DoS. The 2D DoS doesn't depend on energy. The 3D DoS is proportional to the square root of the energy.

DoS is very important in quantum systems because quantization can produce strange DoS. In classical systems without quantization, the DoS is always some kind of constant.
It's obvious that for an N-particle system without interactions between the particles, $Z_N = (Z_1)^N$. Free energy:

Note: This quantity is neither intensive nor extensive! If we combine two exactly identical systems, we won't get twice the free energy. This is called the Gibbs mixing paradox.
4.1.4 Week 3
This is a physical idea of how we get the quantum partition function from classical mechanics.

Classically, the partition function is

$$Z = \int\mathrm{d}^3x\int\mathrm{d}^3p\;e^{-\beta p^2/2m} = V\left(\sqrt{\frac{2m\pi}{\beta}}\right)^3.$$

We can see from this that the thermal wavelength is $\sqrt{\beta/(2m\pi)}$ classically. In quantum mechanics, the partition function is a summation,

$$Z = \sum_i e^{-\beta E_i}.$$
If we are going to write this as an integration, it would be something like

$$Z = \int\mathrm{d}^3x\int\mathrm{d}^3p\;e^{-\beta p^2/2m},$$

which is problematic because it has a different dimension from the summation definition. So we need to divide by some quantity with the dimension of $[p\cdot x]^3$, and it has to be $h^3$. So the integration form of the partition function is

$$Z = \frac{1}{h^3}\int\mathrm{d}^3x\int\mathrm{d}^3p\;e^{-\beta p^2/2m}.$$
Warning: Here we used the phase space of $(q_i, p_i)$, which is not a good choice for quantum mechanics. So this might be a problem. One should check books for a more rigorous method.
The average of an observable is

$$\langle O\rangle = \mathrm{Tr}(\hat O\rho), \qquad \rho = \frac{e^{-\beta H}}{\mathrm{Tr}(e^{-\beta H})}.$$
$$A = -k_B T N(\ln V - 3\ln\lambda)$$

Suppose we have two systems, one with $N_1$ and $V_1$, the other with $N_2$ and $V_2$. Now we mix them. Our physical intuition says that the free energy of the new system should be $A = A_1 + A_2$. However, from the free energy equation we get

$$A = \cdots - k_B T\ln\!\left(V_1^{N_1}V_2^{N_2}\right).$$
That is, the free energy becomes neither intensive nor extensive in our derivation.

The fairly simple way to make it extensive is to divide $V$ by $N$. Then a new term appears in the free energy, namely $N\ln N$. Recall that in the Stirling approximation, $\ln N! = N\ln N - N$. So in a large system we can create a kind of free energy definition which makes it extensive.

Note: We can't just pull results out of statistical mechanics and apply them to a small system composed of several particles. In stat mech we use a lot of approximations, like the Stirling approximation, which are only valid when the particle number is huge.
With the corrected counting the free energy reads

$$A = -k_B T\big(N\ln V - 3N\ln\lambda - \ln N!\big),$$

which is to say

$$Z_N = \frac{Z_1^N}{N!}.$$

This definition "solves" the Gibbs mixing paradox. The physics of this modification requires QM.
Interacting Particles

Statistical mechanics starts from a given energy spectrum. With the energy spectrum solved, we can do statistics.

For an interacting system, we need to solve the energy spectrum first and then calculate the partition function. Usually we have coupled equations for interacting systems, for example a coupled HO system with N oscillators. An image of the two-HO example is given here.

$$m\ddot x_q + k' x_q = 0$$

In the Einstein model, every particle is identical and has the same frequency. However, this theory shows a result with Boltzmann-factor behavior: we get a sleeping slope at very low temperature, where experiments show that this is wrong.
So the Debye theory is composed of two steps:
1. calculate the energy spectrum of the N coupled particles by finding a decoupling transformation;
2. evaluate the heat capacity integral.
Once we find the energy spectrum, we know the dispersion relation, which is different from Einstein's model,

$$g(\omega) = \frac{V\omega^2}{2\pi^2 c^3}$$

for a 3D lattice.
So the average energy is

$$E = \frac{3V}{2\pi^2 c^3\hbar^3}(k_B T)^4\int_0^{x(\omega_m)}\frac{x^3}{e^x - 1}\,\mathrm{d}x.$$

The heat capacity is

$$C = 9Nk_B\left(\frac{T}{\Theta_D}\right)^3\int_0^{x(\omega_m)}\frac{x^4 e^x}{(e^x - 1)^2}\,\mathrm{d}x.$$
Note: What's amazing about the Debye theory is that the low-temperature behavior is independent of the cutoff frequency. At low temperature, $x(\omega_D)$ becomes infinite and the integral runs from 0 to infinity, so we do not need to know the cutoff temperature to find the low-temperature result, and it agrees well with experiments.
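A minimal numerical sketch of this integral (my own addition; temperatures in units of $\Theta_D$, heat capacity in units of $Nk_B$):

import numpy as np
from scipy.integrate import quad

def debye_C(T, theta_D=1.0):
    # C/(N k_B) = 9 (T/Theta_D)^3 * int_0^{Theta_D/T} x^4 e^x / (e^x - 1)^2 dx
    integrand = lambda x: x**4 * np.exp(x) / (np.exp(x) - 1.0)**2
    val, _ = quad(integrand, 0.0, theta_D / T)
    return 9.0 * (T / theta_D)**3 * val

for T in (0.02, 0.04, 1.0, 10.0):
    print(T, debye_C(T))
# doubling T from 0.02 to 0.04 multiplies C by ~8: the T^3 law;
# at high T, C -> 3 (Dulong-Petit), independent of the cutoff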
Important: We started from an Einstein theory and reached a non-sleeping model. What happened when we integrated over the DoS in the Debye model? Work this out in detail.

This is because our density of states $g(\omega)\propto\omega^2$ tells us that we have more states at a given energy as $\omega$ increases. That is to say, the system needs more energy to increase its temperature; equivalently, the heat capacity curve becomes steeper.
Important: Why is the number of modes important in the Debye model? The number of degrees of freedom is finite in a system. If we don't cut off the frequency, we would have infinitely many degrees of freedom, because we have made the approximation that the dispersion relation is a straight line $\omega = ck$. That would certainly lead to infinite heat capacity and infinite total energy.
The Debye model is simple yet classic. Generally we cannot find the right transformation that decouples the particles. For example, we have the Ising model,

$$H = \sum_i\mu\sigma_i B - \sum_{i,j}J_{ij}\sigma_i\sigma_j.$$

Hint: The reason we can decouple the simple coupled HO system is that the coupling constants are the same and each HO is identical. In that case the system is just a homogeneous chain, so the normal modes are sin or cos waves depending on the boundary conditions. If the system is inhomogeneous, there is no way we can use simple plane waves on the chain as normal modes.
4.1.5 Week 4
Phase Transitions
a link here.
Phase transitions are a property of infinite systems.
Note: Why is this an approximation? Because the actual magnetic field, for example, is often not the average of all the magnetic dipoles.

Mean field theory often fails at the critical point; that is, mean field theory is not precise enough for phase transitions in some low-dimensional systems. This gives us a very interesting thought.
Important: Why does mean field theory work? From the viewpoint of mathematics, the potential can always be expanded around some value, which is exactly the mean value of the field. For a Hamiltonian

$$H = -\sum_{\langle i,j\rangle}J_{ij}\sigma_i\sigma_j - \mu\sum_i h_i\sigma_i,$$

where $\bar\sigma = \sum_i\sigma_i/N$ is the average spin configuration.

This looks just like taking the 0th order of the spin-configuration expansion. We can also include the second order, which means adding the interaction of just two spins.
Note: Susceptibility is a parameter that shows how much an extensive parameter changes when an intensive parameter increases. The magnetic susceptibility is

$$\chi(T) = \frac{\mathrm{d}M(T)}{\mathrm{d}B}.$$
Important: What makes the phase transition in such a system? A finite system has no phase transitions, because finitely many continuous functions can only add up to a continuous function. A phase transition happens when the correlation length becomes infinite. So this is all about correlations.

Important: Why is mean field theory an approximation? Because the actual spin is more or less different from the average spin configuration. Fluctuations of the spins make the difference.
Gas Models

The ideal gas is the simplest. The Van der Waals model considers corrections to the pressure and the volume.

We can write the Hamiltonian as

$$H = \sum_i\frac{\vec p_i^{\,2}}{2m} + \sum_{\langle i,j\rangle}\varphi(r),$$

in which $\varphi(r)$ is the average of the potential, and all particle interactions have the same value.

Onnes used a series to write the equation of state,

$$P = \frac{nRT}{V}\left[1 + B(T)\frac{n}{V} + C(T)\left(\frac{n}{V}\right)^2 + \cdots\right].$$
Ensemble

A standard procedure for solving mechanics problems, stated by Prof. Kenkre (which I don't really accept), is:

Initial condition / Description of states -> Time evolution -> Extraction of observables

States

$$\partial_t\rho + \nabla\cdot(\rho\vec u) = 0$$

This conservation law becomes even simpler if we drop the term with $\nabla\cdot\vec u = 0$ for incompressibility. Or more generally,

$$\partial_t\rho + \nabla\cdot\vec j = 0.$$

Divergence here means

$$\nabla\cdot = \sum_i\left(\frac{\partial}{\partial q_i} + \frac{\partial}{\partial p_i}\right).$$
$$\dot q_i = \frac{\partial H}{\partial p_i}, \qquad \dot p_i = -\frac{\partial H}{\partial q_i}.$$

Then

$$\left(\frac{\partial}{\partial q_i}\dot q_i + \frac{\partial}{\partial p_i}\dot p_i\right)\rho.$$

Finally, the convective time derivative is zero because $\rho$ does not change with time in a comoving frame, like a perfect fluid:

$$\frac{\mathrm{d}}{\mathrm{d}t}\rho \equiv \frac{\partial}{\partial t}\rho + \sum_i\left[\dot q_i\frac{\partial}{\partial q_i}\rho + \dot p_i\frac{\partial}{\partial p_i}\rho\right] = 0.$$
Time evolution

$$\partial_t\rho = \{H,\rho\}$$

That is to say, the time evolution is solved if we can find the Poisson bracket of the Hamiltonian with the probability density.
1. Liouville theorem;
2. Normalizable;

Hint: What about a system with constant probability for each state all over phase space? This is not normalizable. Such a system cannot really pick out a value. It seems that the probability of being on states with a constant energy is zero. So no such system really exists, I guess?

Like this? Someone has a 50% probability each of stopping on one of the two Sandia Peaks for a picnic. Can we do an average for such a system? (Example by Professor Kenkre.)
Extraction of observables
where $i = 1, 2, \ldots, 3N$.
4.1.6 Week 5
The problem in statistical mechanics is that we can only gain partial information about the system, so only partial results are obtained. However, we know that in equilibrium all possible states appear with equal probability. Recall that states means points in phase space. We use hydrodynamics to represent the phase-space distribution, but this cannot determine whether the phase-space points are changing or not. The only thing this density shows us is that the apparent density doesn't change. So it seems that we don't need to know the exact position of a phase-space point. We can rely on average values.

The system is like a black box. Even if we know all the quantities, we still have no idea about the exact state of the system.
Ensemble

Gibbs' idea of an ensemble is to create a lot of copies of the system with the same thermodynamic quantities.

The question is where to put them: at different times or in different places?

We can create a huge number of copies of the system and imagine that they are at different places. Then we have all the possible states of the system.

We can also wait an infinitely long time and all possible states will occur, at least for some systems, which are called ergodic. Ergodic means the system can visit all possible states many times during a long time. This is a hypothesis rather than a theorem.

The problem is, not all systems are ergodic. For such systems, of course, we can only do the ensemble average.
Note: Cons
1. It is not possible to prove that the ensemble average is the same as the time average. In fact, some systems don't obey this rule.
2. It is not possible to visit all states on the constant-energy surface in finite time.
3. Even complicated systems can exhibit almost exactly periodic behavior; one example of this is the FPU experiment.
4. Even if the system is ergodic, how can we make sure each state occurs with the same probability?

Here is an example of a non-ergodic system: a box with absolutely smooth walls, with balls colliding with the walls perpendicularly. Such a system stays on some discrete points with the same values of the momentum components.

Another image from Wikipedia:

Note: Pros
1. The Poincaré recurrence theorem proves that at least some systems will come back to a state very close to the initial state after a long but finite time.
2. Systems are often chaotic, so it's not really possible to have pictures like the first one in Cons.
$$\langle O\rangle = \mathrm{Tr}\,\rho O$$

All we care about is the left-hand side. So as long as $\rho$ is not changed, we can stir the system as crazily as we like and keep the observables the same.

Hint: Only one trace in phase space is real. How can we use an ensemble to calculate the real observables?

Actually, what we calculate is not the real observable but the ensemble average. Since we are dealing with equilibrium, we want the time average, because for an equilibrium system the time average is the desired result. (Fluctuations? Yes, but later.) As we discussed previously, for ergodic systems the ensemble average is the same as the time average.
Equilibrium

$$\frac{\partial}{\partial t}\rho = 0,$$

or equivalently,

$$\{H,\rho\} = 0.$$

One such solution is

$$\rho\propto e^{-\beta H}.$$
Ensembles, Systems

$$\rho(p, q; 0) = \delta\big(H(p, q; 0) - E\big)$$

That is, the system stays on the energy shell in phase space. Also, for an equilibrium system,

$$H(p, q; t) = E.$$

Hint: Is it true that the ensemble average is equal to the actual value of the system? Not for all classical systems. (But for ALL quantum systems? Not sure.)

Important: What about the state of the system moving with changing speed on the shell? Then how can we say the system is ergodic and use the ensemble average as the time average?

$$S = k_B\ln\Omega$$
Canonical Ensemble

For a system weakly interacting with a heat bath, the total energy is

$$E_T = E_S + E_R + E_{S,R},$$

where the interaction energy $E_{S,R}$ is very small compared with $E_S$ and $E_R$. So we can drop this interaction energy term,

$$E_T = E_S + E_R.$$

A simple and intuitive derivation of the probability density uses the theory of independent events.

1. $\rho_T\,\mathrm{d}\Omega_T$: probability of states in the phase-space volume $\mathrm{d}\Omega_T$;
2. $\rho_S\,\mathrm{d}\Omega_S$: probability of states in the phase-space volume $\mathrm{d}\Omega_S$;

Since there is no particle exchange between the two systems, the overall phase-space volume is the system phase-space volume multiplied by the reservoir phase-space volume. Obviously we can get the relation between the three probability densities,

$$\rho_T = \rho_R\,\rho_S, \qquad \ln\rho_T = \ln\rho_R + \ln\rho_S.$$

Key: $\rho$ is a function of the energy $E$, AND both $\ln\rho$ and the energy are extensive (additive). The only possible form of $\ln\rho$ is linear in $E$.

Finally we reach the destination,

$$\ln\rho = -\alpha - \beta E,$$

i.e.,

$$\rho = e^{-\alpha}e^{-\beta E}.$$

Warning: This is not a rigorous derivation. Read R. K. Su's book for a more detailed and rigorous derivation.
Systems with changing particle number are described by the grand canonical ensemble.

Identical Particles

If a system consists of N identical particles, with $n_i^\xi$ particles in single-particle state $i$ for a system state $\xi$, the energy of the system in state $\xi$ is

$$E_\xi = \sum_i\epsilon_i n_i^\xi.$$
4.1.7 Week 6

The three ensembles are equivalent when the particle number N is really large, $N\to\infty$.

The reasons are:
1. When N becomes really large, the interaction between system and reservoir becomes negligible. The relative variance of the Gaussian distribution is proportional to $1/\sqrt{N}$.
2. $\mathrm{d}E_S + \mathrm{d}E_R = \mathrm{d}E$ and we know $\mathrm{d}E = 0$, so when the energy of the system increases, that of the reservoir drops. Professor Kenkre has a way to prove that the energy of the system is peaked at some value; however, I didn't get it.
Quite different from Gibbs' ensemble theory, Boltzmann's theory is about the most probable distribution.

1. Classical distinguishable particles: $a_l = w_l\,e^{-\alpha-\beta e_l}$;
2. Bosons: $a_l = w_l\,\dfrac{1}{e^{\alpha+\beta e_l} - 1}$;
3. Fermions: $a_l = w_l\,\dfrac{1}{e^{\alpha+\beta e_l} + 1}$.
This image tells us that the three lines converge when the factor $\alpha + \beta e_l$ becomes large. Also, Fermions have fewer micro states than classical particles because of the Pauli exclusion principle. (A sketch reproducing these curves follows.)
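Since the original figure did not survive the export, here is a minimal matplotlib sketch of the three occupation functions (my own addition):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.3, 4, 200)                 # x = alpha + beta * e_l
plt.plot(x, np.exp(-x), label='M.B.')        # a_l/w_l = exp(-x)
plt.plot(x, 1/(np.exp(x) - 1), label='B.E.') # Bose-Einstein
plt.plot(x, 1/(np.exp(x) + 1), label='F.D.') # Fermi-Dirac
plt.xlabel('alpha + beta*e_l')
plt.ylabel('a_l / w_l')
plt.legend()
plt.show()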
$\alpha + \beta e_l$ being large can have several different physical meanings:
1. low temperature;
2. high energy;
3. large chemical coupling coefficient $\alpha$.

We have several equivalent conditions for the three distributions to be the same:

$$\alpha + \beta e_l \gg 1 \;\Leftrightarrow\; \alpha\gg 1 \;\Leftrightarrow\; 1/\exp(\alpha+\beta e_l)\ll 1 \;\Leftrightarrow\; a_l/w_l\ll 1,$$

where the last statement is quite interesting: $a_l/w_l\ll 1$ means we have many more states than particles, and the quantum effects become very small.
Warning: One should be careful: even when the above conditions are satisfied, the number of micro states for classical particles is very different from that for quantum particles.

Recall that the thermal wavelength $\lambda_t$ is a useful way of analyzing quantum effects. At high temperature, the thermal wavelength becomes small and the system is more classical.
Hint:

1. Massive particles: $\lambda_t = \frac{h}{p} = \frac{h}{\sqrt{2mK}} = \frac{h}{\sqrt{2\pi m k T}}$
2. Massless particles: $\lambda_t = \frac{ch}{2\pi^{1/3}kT}$

However, at high temperature the three micro-state numbers are going to be very different. This is because the thermal wavelength considers the movement of the particles, and high temperature means large momentum, thus classical. The number of micro states comes from a discussion of the occupation of states.
Important: What's the difference between the ensemble probability density and the most probable distribution? What makes the +1 or -1 in the occupation number?

The most probable distribution is the method used in Boltzmann's theory, while the ensemble probability density is used in ensemble theory. In ensemble theory all copies (states) in a canonical ensemble appear with probability density $\exp(-\beta E)$, and all information about the type of particles is in the Hamiltonian.

Different from ensemble theory, Boltzmann's theory deals with the number of micro states, which is affected by the type of particles. Suppose we have N particles in a system occupying energy levels $e_l$, with $a_l$ particles on level $l$. Note that we have a degeneracy $w_l$ on each energy level.
Finally we have

$$\Omega_{M.B.} = \frac{N!}{\prod_l a_l!}\prod_l w_l^{a_l},$$

$$\Omega_{B.E.} = \prod_l\frac{(a_l + w_l - 1)!}{a_l!\,(w_l - 1)!}$$

is the number of micro states for a Boson system with distribution $\{a_l\}$, and

$$\Omega_{F.D.} = \prod_l C_{w_l}^{a_l} = \prod_l\frac{w_l!}{a_l!\,(w_l - a_l)!}$$

is the number of micro states for a Fermion system with distribution $\{a_l\}$. We get this because we just need to pick out $a_l$ states for the $a_l$ particles on each energy level.
DoS and the partition function have already been discussed in previous notes.

Suppose we know only the M.B. distribution; applying it to harmonic oscillators we can find that

$$\langle H\rangle = (\bar n + 1/2)\hbar\omega,$$

where $\bar n$ is given by

$$\bar n = \frac{1}{e^{\beta\hbar\omega} - 1}.$$

Hint: Note that this is possible because the energy differences between adjacent levels are the same for all pairs of adjacent levels. Adding one new imagined particle with energy $\hbar\omega$ is equivalent to exciting the oscillator to a higher level. So we can treat the imagined particles as Bosons.
CHAPTER FIVE: NONEQUILIBRIUM SYSTEM

5.1 Stochastic And Non-Equilibrium Statistical Mechanics
5.1.1 Week 7
Two Spaces
5.1.2 Week 8

Instead of writing the Poisson bracket as a bracket, we can define a Poisson bracket operator:

$$\hat{\mathcal{H}}_N = \sum_{j=1}^{N}\left(\frac{\partial H_N}{\partial\vec q_j}\frac{\partial}{\partial\vec p_j} - \frac{\partial H_N}{\partial\vec p_j}\frac{\partial}{\partial\vec q_j}\right),$$

$$i\frac{\partial\rho_N}{\partial t} = \hat L_N\rho_N.$$
BBGKY Hierarchy

Now we think about the problems we are going to solve. In statistical mechanics, the most ideal method is to solve the Liouville equation directly, leaving only the initial condition missing for the problem. However, solving the Liouville equation is so hard for complicated problems that we end up thinking about dropping some of the dynamics. This is no big deal, as we already dropped the initial condition and made our solution probabilistic.

Now the question is what to drop. For non-interacting systems, the solution comes directly from the Liouville equation. It's interactions that make our life so hard (yet make us alive). So we would like to make approximations on the interactions.

For non-interacting systems, Γ space can actually be reduced to μ space, which is spanned by the degrees of freedom of only one particle. Here we need to address the fact that we are dealing with identical particles.

Actually we are not trying to reduce to μ space exactly. We just want to make the dimension as small as possible. So we want to talk about some reduced quantities.
First of all is the probability density of s particles. For one particle, it's

$$\rho_1(\vec X_1, t) := \int\cdots\int\mathrm{d}\vec X_2\cdots\mathrm{d}\vec X_N\,\rho_N(\vec X_1,\cdots,\vec X_N, t).$$

Obviously, $s = N$ gives us

$$F_N = V^N\rho_N.$$
We can write down the Hamiltonian of the system for any two-body spherically symmetric interaction,

$$H_N = \sum_{i=1}^{N}\frac{\vec p_i^{\,2}}{2m} + \sum_{i<j}^{N(N-1)/2}\varphi(|\vec q_i - \vec q_j|),$$

$$\frac{\partial\rho_N}{\partial t} = -\hat L_N\rho_N.$$
Now we have

$$\hat{\mathcal{H}}_N = \sum_{i=1}^{N}\frac{\vec p_i}{m}\frac{\partial}{\partial\vec q_i} - \sum_{i<j}\hat\Theta_{ij},$$

where

$$\hat\Theta_{ij} := \frac{\partial\varphi_{ij}}{\partial\vec q_i}\frac{\partial}{\partial\vec p_i} + \frac{\partial\varphi_{ij}}{\partial\vec q_j}\frac{\partial}{\partial\vec p_j}.$$
Next, write down the explicit Liouville equation for this problem and integrate over $\{\vec X_{s+1},\cdots,\vec X_N\}$. Making approximations (large N, etc.), we finally have a hierarchy,

$$\frac{\partial F_s}{\partial t} + \hat{\mathcal{H}}_s F_s = \frac{1}{v}\sum_{i=1}^{s}\int\mathrm{d}\vec X_{s+1}\,\hat\Theta_{i,s+1}\,F_{s+1}(\vec X_1,\cdots,\vec X_{s+1}, t),$$

where $v = V/N$.

This shows exactly why stat mech is hard. The smaller s is, the easier the solution. BUT we see that to find $s = 1$ we need $s = 2$, and the hierarchy never cuts off. What do we do? We cut it manually.
Why Irreversible

The reason a system is irreversible is that we lose information. In other words, the correlation in time gets shorter, as any system is coupled to the reservoir. Any system transfers information to the reservoir, and the information just runs away deep into the reservoir. With information loss the system cannot be reversible. More quantum mechanically, the system loses information through entanglement (mostly).

The classical idea of irreversibility goes through the H theorem. Boltzmann defines a quantity

$$H = \int\!\!\int\rho(\vec r,\vec v,t)\ln\rho(\vec r,\vec v,t)\,\mathrm{d}\tau\,\mathrm{d}\omega.$$
As we said previously, the ideal situation is to solve the Liouville equation directly and exactly. However, that is generally not possible. So we turn to some mesoscopic methods for help.

We start from the microscopic equations, work on them, then truncate at some point, meaning approximation. Then we use the approximated result to calculate the macroscopic quantities.

An example of this method: divide a box of gas into two parts. Then we talk about only two states, a LEFT state and a RIGHT state, instead of the phase-space states.
and

$$\frac{\mathrm{d}P_R}{\mathrm{d}t} = T_{RL}P_L - T_{LR}P_R.$$

The first equation means that the rate of change of the probability that a particle is in the LEFT state is the rate from RIGHT to LEFT times the probability that the particle is in the RIGHT state, minus the rate from LEFT to RIGHT times the probability that the particle is in the LEFT state. This is simply a linear model of gaining and losing.

It is interesting that we can describe the system in such an easy way. Will it work? We'll see.

More generally, we have

$$\frac{\mathrm{d}}{\mathrm{d}t}P_\xi = \sum_\mu\left(T_{\xi\mu}P_\mu - T_{\mu\xi}P_\xi\right).$$
The idea is to find an equation for the one-particle probability density $f_j(\vec r,\vec v_j,t)$ by considering the number of particles entering this state and leaving it due to collisions. Since we can find all contributions to $f_j$ by applying the scattering theory of classical particles, this equation can be written down explicitly; it turns out to be an integro-differential equation.

The number of particles in a volume $\mathrm{d}\vec r\,\mathrm{d}\vec v$ at position $\vec r$ with velocity $\vec v_j$ is $f_j\,\mathrm{d}\vec r\,\mathrm{d}\vec v$. Considering the situation after a short time $\mathrm{d}t$, we can write down the change in particle number due to collisions, and finally we get the Boltzmann equation,

$$\frac{\partial f_j}{\partial t} + \vec v_j\cdot\nabla_{\vec r}f_j + \frac{\vec X_j}{m_j}\cdot\nabla_{\vec v_j}f_j = 2\pi\sum_i\int\!\!\int\left(f_i'f_j' - f_i f_j\right)g_{ij}\,b\,\mathrm{d}b\,\mathrm{d}\vec v_i,$$

where $\vec X$ is the external force on the particle, primes denote quantities after the collision, and $b$ is the impact parameter.

In the derivation, the most important part is to identify the number of particles entering and leaving this state due to collisions.

From the Boltzmann equation we can derive Enskog's equation, and then simplify to the continuity equation by picking a conserved quantity as $\psi_i$ in Enskog's equation.

The continuity equation is always true for such a conserved system, so this result is very conceivable.
H Theorem

The H theorem says that the quantity $H$ cannot increase. The requirement, of course, is a classical, particle-number-conserving system.

First define

$$H(t) = \int\!\!\int f(\vec r,\vec v,t)\ln f(\vec r,\vec v,t)\,\mathrm{d}\vec r\,\mathrm{d}\vec v.$$

Then

$$\frac{\mathrm{d}H}{\mathrm{d}t}\le 0,$$

with equality when

$$f'f_1' = f f_1.$$

H Theorem Discussion
Footnotes
5.1.3 Week 9

Master Equation

One way of starting somewhere in between, instead of starting from the microscopic equations to get macroscopic quantities, is to coarse-grain the system and use the Master Equation,

$$\frac{\mathrm{d}P_\xi}{\mathrm{d}t} = \sum_\mu\left(T_{\xi\mu}P_\mu - T_{\mu\xi}P_\xi\right),$$

which means that the rate of change of $P_\xi(t)$ is determined by the gain and loss of probability.

One way of deriving the master equation is to start from the Chapman-Kolmogorov equation,

$$P_\xi(t) = \sum_\mu Q_{\xi\mu}P_\mu(t - \tau).$$

This equation describes a discrete random-walk process, a.k.a. a Markov process. In other words, the information about the motion is lost with every step. In this case, all information has been lost after a time interval $\tau$.

The form of this equation reminds us of the time derivative,

$$\partial_t P_\xi(t) = \lim_{\tau\to 0}\frac{P_\xi(t) - P_\xi(t-\tau)}{\tau}.$$
Important: It's very important to see this result clearly. Here we write this identity by regarding that the system must jump out of the state $\xi$, because the summation doesn't include the case $\mu = \xi$.

in which we used

$$P_\xi(t-\tau) = \sum_\mu Q_{\mu\xi}\,P_\xi(t-\tau).$$

Then it seems clear that we can just divide each side by $\tau$ and take the limit $\tau\to 0$,

$$\lim_{\tau\to 0}\frac{P_\xi(t) - P_\xi(t-\tau)}{\tau} = \lim_{\tau\to 0}\frac{\sum_\mu Q_{\xi\mu}P_\mu(t-\tau) - \sum_\mu Q_{\mu\xi}P_\xi(t-\tau)}{\tau}.$$

Watch out: the right-hand side usually goes to infinity. One way out is to assume that

$$Q_{\mu\xi} = R_{\mu\xi}\,\tau + O(\tau^n).$$
Warning: By saying a system obeys the Chapman-Kolmogorov equation, we admit that the system loses information after a time interval $\tau$. Now we take the limit $\tau\to 0$, which means the system has no memory of the past at all! How do we explain this? Or can we assume that $P(t-\tau)\propto\tau$? [2]
Note: The derivation of the master equation can be made more rigorous. This note is a rephrase of Reichl's chapter 6B; also refer to Irwin Oppenheim and Kurt E. Shuler's paper [3].

To do this we need to use conditional probability: the probability density that variable Y has value $y_1$ at time $t_1$ and $y_2$ at time $t_2$ is the probability density that Y has value $y_1$ at time $t_1$, times the probability density that it has value $y_2$ at time $t_2$ given that it had value $y_1$ at time $t_1$.

Assuming that the probability density at $t_n$ only depends on that at $t_{n-1}$, we have

$$P_{n-1|1}(y_1,t_1;\cdots;y_{n-1},t_{n-1}|y_n,t_n) = P_{1|1}(y_{n-1},t_{n-1}|y_n,t_n).$$

We integrate over $y_1$,

$$P_1(y_2,t_2) = \int P_1(y_1,t_1)\,P_{1|1}(y_1,t_1|y_2,t_2)\,\mathrm{d}y_1.$$

As we can write $t_2 = t_1 + \tau$,

$$P_1(y_2,t_1+\tau) = \int P_1(y_1,t_1)\,P_{1|1}(y_1,t_1|y_2,t_1+\tau)\,\mathrm{d}y_1.$$

[3] Irwin Oppenheim and Kurt E. Shuler, "Master Equations and Markov Processes", Phys. Rev. 138, B1007 (1965).
The next step is the messy one. Expanding the right-hand side using a Taylor series, which one can find in Reichl's book [1], we get the expression for the time derivative,

$$\partial_t P_1(y_2,t) = \int\mathrm{d}y_1\,\big(W(y_1,y_2)P_1(y_1,t) - W(y_2,y_1)P_1(y_2,t)\big).$$

Important: Now we see that the Markov process is the hypothesis we need to get the master equation. DO NOT ever confuse the master equation itself with a Markov process. There are things left unclear in the derivation.

[2] Read Irwin Oppenheim and Kurt E. Shuler's paper for more details.

$$\frac{P_2(y_1,t_1;y_3,t_3)}{P_1(y_1,t_1)} = P_{1|1}(y_1,t_1|y_3,t_3)$$
The master equation is

$$\partial_t P_\xi(t) = \sum_\mu\left(R_{\xi\mu}P_\mu(t) - R_{\mu\xi}P_\xi(t)\right).$$

For a symmetric two-state system,

$$\partial_t P_1 = R(P_2 - P_1), \qquad \partial_t P_2 = R(P_1 - P_2).$$

Define

$$P_+ = P_1 + P_2, \qquad P_- = P_1 - P_2.$$

Then

$$\partial_t P_+ = 0, \qquad \partial_t P_- = -2RP_-.$$
This result proves that whatever state the system was in initially, it will reach equilibrium eventually. The term $e^{-2Rt}$ is a decaying process, or in other words a relaxation process.
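A tiny numerical sketch of this relaxation (my own addition), written directly in the $P_\pm$ combinations:

import numpy as np

R = 1.0
P1_0, P2_0 = 1.0, 0.0                         # start entirely in state 1
t = np.linspace(0, 3, 7)

P_plus = (P1_0 + P2_0) * np.ones_like(t)      # conserved total probability
P_minus = (P1_0 - P2_0) * np.exp(-2 * R * t)  # relaxing difference
P1 = (P_plus + P_minus) / 2
P2 = (P_plus - P_minus) / 2
print(P1)   # -> 0.5 at large t: equilibrium, independent of the initial state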
More generally, the solution is

$$P(t) = P(0)\,e^{-At},$$

which is very similar to the solution of the Liouville equation; the difference there is the $i$ in the exponential (think about decay versus rotation). Here A is a matrix with

$$A_{\xi\mu} = -R_{\xi\mu}\ (\xi\neq\mu), \qquad A_{\xi\xi} = \sum_\mu R_{\mu\xi}.$$

We will see similar exponentially decaying or growing behaviors in degenerate systems; the difference is the equilibrium point,

$$\partial_t P_1 = R_{12}P_2 - R_{21}P_1.$$
Footnotes
5.1.4 Week 10

To figure out the elements of the new matrix A, we need to understand how to combine the RHS of the master equation:

$$\partial_t\begin{pmatrix}P_1\\P_2\\\vdots\\P_N\end{pmatrix} = \begin{pmatrix}0 & R_{12} & \cdots & R_{1N}\\ R_{21} & 0 & \cdots & R_{2N}\\ \vdots & \vdots & \ddots & \vdots\\ R_{N1} & R_{N2} & \cdots & 0\end{pmatrix}\begin{pmatrix}P_1\\P_2\\\vdots\\P_N\end{pmatrix} - \sum_n R_{nm}\begin{pmatrix}1 & 0 & \cdots & 0\\ 0 & 1 & \cdots & 0\\ \vdots & \vdots & \ddots & 0\\ 0 & 0 & \cdots & 1\end{pmatrix}\begin{pmatrix}P_1\\P_2\\\vdots\\P_N\end{pmatrix}$$
Since $A_{mm} = \sum_n R_{nm}$ is just a real number, we can combine the two matrices on the RHS, which gives

$$\partial_t\begin{pmatrix}P_1\\P_2\\\vdots\\P_N\end{pmatrix} + \begin{pmatrix}A_{11} & -R_{12} & \cdots & -R_{1N}\\ -R_{21} & A_{22} & \cdots & -R_{2N}\\ \vdots & \vdots & \ddots & \vdots\\ -R_{N1} & -R_{N2} & \cdots & A_{NN}\end{pmatrix}\begin{pmatrix}P_1\\P_2\\\vdots\\P_N\end{pmatrix} = 0.$$
The A matrix is defined in this way so that the master equation becomes

$$\partial_t\mathbf{P} + \mathbf{A}\mathbf{P} = 0.$$

To solve such a matrix equation, we need to diagonalize A, so that we have a simple solution

$$\mathbf{P}_{\mathrm{diag}}(t) = e^{-\mathbf{A}_{\mathrm{diag}}t}\,\mathbf{P}_{\mathrm{diag}}(0),$$

in which we define $\mathbf{A}_{\mathrm{diag}} = \mathbf{S}^{-1}\mathbf{A}\mathbf{S}$ and $\mathbf{P}_{\mathrm{diag}} = \mathbf{S}^{-1}\mathbf{P}$. For simplicity, we won't write down the indices from now on.
Warning: Is there a mechanism that ensures A is diagonalizable? If A is defective, none of this can be done. Do all physical systems have a diagonalizable A?
Hint: Notice that the form of the master equation after this transform is similar to the dynamics of quantum mechanics,

$$i\hbar\frac{\partial}{\partial t}\left|\psi\right\rangle = \hat H\left|\psi\right\rangle.$$
We consider the case where the coarse-grained system has translational symmetry; then we know that the value of an element of the A matrix only depends on $l := n - m$. In other words,

$$\partial_t P_m + \sum_n A_{mn}P_n = 0.$$

For a translationally symmetric system, we can use the discrete Fourier transform to find the normal modes. Define the kth mode as

$$P^k = \sum_m P_m\,e^{ikm}.$$
With the definition of the kth mode above, the master equation can be written as

$$\partial_t P^k + \sum_n\sum_m e^{ik(m-n)}A_{m-n}\,e^{ikn}P_n = 0,$$

which "accidentally" diagonalizes the matrix A. So define the kth mode of A as $A^k = \sum_{l=m-n}e^{ik(m-n)}A_{m-n}$; then the equation is reduced to

$$\partial_t P^k + A^k P^k = 0.$$

Note: The summation over n and m is equivalent to the summation over n and m-n.
To find the final solution, inverse Fourier transform the kth modes,

$$P_m(t) = \frac{1}{N}\sum_k P^k(t)\,e^{-ikm}.$$

Periodicity requires

$$e^{ikm} = e^{ik(m+N)},$$

which leads to

$$k = \frac{2\pi}{N}\,n.$$
The discrete transform becomes an integral if we are dealing with continuous systems, which is achieved by the following transformation,

$$\frac{1}{N}\sum_k \to \frac{1}{2\pi}\int\mathrm{d}k.$$

We write down this identity-like transformation because the discrete result has the form $\frac{1}{N}\sum_k$.
$$\partial_t P_m = F(P_{m+1} + P_{m-1}) - 2FP_m.$$

Note: There is no need to use the definition of the A matrix in this simple case.

Combining terms,

$$\partial_t P^k = -4F\sin^2\!\frac{k}{2}\,P^k.$$

The last thing is to find the values of $k$. Applying the Born-von Karman boundary condition, we find that $k$ is quantized,

$$k = \frac{2\pi}{N}n, \qquad n = 0, 1, 2, \cdots, N-1.$$
Matrix Form

$$\partial_t P_m = F(P_{m+1} + P_{m-1}) - 2FP_m.$$

Hint: An easy way to get the matrix is to write down the R matrix, which has 0 diagonal elements, then construct the A matrix by adding a minus sign to all elements and putting the sum of each column's original elements on the diagonal of the corresponding line. Pay attention to the signs.
Another property worth mentioning is the additivity of these matrices: a more complicated system can be decomposed into several simpler systems.
The technique used to solve this problem is to diagonalize the 6x6 matrix, because we can then just write down the solutions. The way to do this is exactly what we did in the previous part, that is, defining new quantities. Then we have, in matrix form,
$$\partial_t\begin{pmatrix}P^{k_1}\\P^{k_2}\\P^{k_3}\\P^{k_4}\\P^{k_5}\\P^{k_6}\end{pmatrix} + 4F\begin{pmatrix}\sin^2\frac{k_1}{2} & 0 & 0 & 0 & 0 & 0\\ 0 & \sin^2\frac{k_2}{2} & 0 & 0 & 0 & 0\\ 0 & 0 & \sin^2\frac{k_3}{2} & 0 & 0 & 0\\ 0 & 0 & 0 & \sin^2\frac{k_4}{2} & 0 & 0\\ 0 & 0 & 0 & 0 & \sin^2\frac{k_5}{2} & 0\\ 0 & 0 & 0 & 0 & 0 & \sin^2\frac{k_6}{2}\end{pmatrix}\begin{pmatrix}P^{k_1}\\P^{k_2}\\P^{k_3}\\P^{k_4}\\P^{k_5}\\P^{k_6}\end{pmatrix} = 0$$
By writing it in this form, we immediately see why diagonalizing is the thing we are eager to do. The solutions are just

$$P^k(t) = P^k(0)\,e^{-4F\sin^2(k/2)\,t}.$$

Hint: Recall that the elements of the diagonalized $\mathbf{A}_{\mathrm{diag}}$ matrix are just the eigenvalues of the A matrix with corresponding eigenvectors. So, in other words, the way to solve this kind of discrete master equation is to solve the eigenproblem of the A matrix, find the eigenmodes, and finally inverse transform back to $P_m(t)$. (A small numerical check follows.)
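Here is the promised numerical check (my own sketch): for an N-site ring, the eigenvalues of the A matrix are exactly $4F\sin^2(k/2)$ with $k = 2\pi n/N$.

import numpy as np

N, F = 6, 1.0
# R matrix of a ring: hop to nearest neighbors with rate F
R = F * (np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1))
# A = -R with column sums of R on the diagonal
A = -R + np.diag(R.sum(axis=0))

eig = np.sort(np.linalg.eigvalsh(A))
k = 2 * np.pi * np.arange(N) / N
analytic = np.sort(4 * F * np.sin(k / 2)**2)
print(np.allclose(eig, analytic))   # True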
We have exactly the same equations as for the finite chain. The difference lies in the boundary condition, where $N\to\infty$ and

$$\frac{1}{N}\sum_k \to \frac{1}{2\pi}\int\mathrm{d}k.$$

Hint: A way to check this result is to check that the total probability is unity,

$$\frac{1}{N}\sum_{k=1}^{N}1 = 1 \;\Leftrightarrow\; \frac{1}{2\pi}\int_{-\pi}^{\pi}1\,\mathrm{d}k = 1.$$
Since the argument is effectively imaginary, this is also called the Bessel function of imaginary argument. Check out more properties of this function in the vocabulary part.
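A quick sanity check of the resulting propagator $\Pi_m(t) = I_m(2Ft)e^{-2Ft}$ (my own sketch; scipy's ive computes exactly this exponentially scaled Bessel function): the probabilities sum to one and the spread is diffusive.

import numpy as np
from scipy.special import ive      # ive(m, z) = I_m(z) * exp(-|z|)

F, t = 1.0, 2.0
m = np.arange(-60, 61)
P = ive(m, 2 * F * t)              # P_m(t) = I_m(2Ft) e^{-2Ft}
print(P.sum())                     # ~1.0: probability is conserved
print((m**2 * P).sum(), 2 * F * t) # <m^2> = 2Ft, diffusive spreading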
2D Lattice

A 2D lattice image is shown below (image credit: Jim Belk, public domain).

Note that we have translational symmetry in both the x and y directions. So the solution is simply the product of the solutions for the two directions.
Continuum Limit

$$\begin{aligned}\partial_t P_m &= F(P_{m-1} + P_{m+1}) - 2FP_m\\ &= F\big(P_{m+1} - P_m - (P_m - P_{m-1})\big)\\ &= F\epsilon^2\,\frac{(P_{m+1} - P_m)/\epsilon - (P_m - P_{m-1})/\epsilon}{\epsilon}\end{aligned}$$

We can identify the form of the derivatives, but the RHS becomes zero in the limit $\epsilon\to 0$ if $F$ is a constant. It does make sense that $F$ actually increases when the distance between two sites becomes smaller, and the way to reconcile the zero problem is to assume that

$$F = \frac{D}{\epsilon^2}.$$

$$\frac{\partial P(x,t)}{\partial t} = D\frac{\partial^2 P(x,t)}{\partial x^2}$$
5.1.5 Week 11

Irreversibility

Any equation's solution can be expanded using a Fourier series, including the decay behavior we have seen in previous lectures. However, the problem is that for a finite system the expansion is not really complete. The series composed of these finitely many components might not decay, and recurrence could happen. But the time needed for recurrence is so long that we never see it.
Master Equation

Differentiating the kth mode twice and setting $k = 0$, we have

$$\langle m^2\rangle := \sum_m m^2 P_m = -\left.\frac{\partial^2 P^k}{\partial k^2}\right|_{k=0}.$$

Note: So we don't need to calculate $P_m(t)$, which would take a lot of time.

is always true.
Continuum Limit

$$\frac{\partial}{\partial t}P = D\frac{\partial^2 P}{\partial x^2}$$

Hint:
• The propagator of this equation is Gaussian.
• The propagator of the discrete master equation is a decay, $I_m(2Ft)e^{-2Ft}$.

Warning: In principle, we can take some limit of the master equation propagator and make it the diffusion propagator.

$$\langle x^2\rangle = 2Dt,$$

where

$$\langle x^2\rangle := \int_{-\infty}^{+\infty}P(x,t)\,x^2\,\mathrm{d}x, \qquad \langle m^2\rangle := \sum_{-\infty}^{+\infty}P_m(t)\,m^2.$$
Consider a system in a heat bath; we can expand the system's potential to second order, and the system becomes an HO or HOs. The Fermi golden rule tells us that an HO can only make nearest-level transitions.

Note: Transitions

$$\frac{\mathrm{d}}{\mathrm{d}t}P_m = (R_{m,m+1}P_{m+1} + R_{m,m-1}P_{m-1}) - (R_{m+1,m}P_m + R_{m-1,m}P_m)$$

$$\frac{\mathrm{d}}{\mathrm{d}t}P_m = F(P_{m+1} + P_{m-1} - 2P_m),$$

with solution

$$P_m(t) = \sum_n\Pi_{m-n}(t)\,P_n(0).$$
Continuum Limit

$$\frac{\partial}{\partial t}P(x,t) = D\frac{\partial^2}{\partial x^2}P(x,t), \qquad \Pi(x,t) = \frac{1}{\sqrt{4\pi Dt}}\,e^{-x^2/(4Dt)}.$$

Fourier transform:

$$\frac{\partial P_F(k,t)}{\partial t} = -Dk^2 P_F(k,t).$$

Solution:

$$P_F(k,t) = P_F(k,0)\,e^{-Dk^2t}.$$

Inverse:

$$P(x,t) = \int\Pi(x-x',t)\,P(x',0)\,\mathrm{d}x'.$$
5.1.6 Week 12

Propagator

To solve the master equation, we usually find the propagator $\Pi(x-x',t)$. For the simple discrete master equation, the propagator has the form $I_m(2Ft)e^{-2Ft}$.

For the continuous master equation, which is the diffusion equation, given the initial distribution it evolves according to

$$\frac{\partial}{\partial t}P(x,t) = \zeta\frac{\partial}{\partial x}P(x,t) + D\frac{\partial^2}{\partial x^2}P(x,t).$$

So we can write down the transformed equation right away (for the $\zeta = 0$ case),

$$\frac{\partial}{\partial t}P^k = -Dk^2P^k.$$
Hint: There might be a singularity in the propagator. One example is the logarithmic singularity in 2D.
$$\frac{\partial}{\partial t}P(x,t) = \zeta\frac{\partial}{\partial x}P(x,t) + D\frac{\partial^2}{\partial x^2}P(x,t)$$

We can Fourier transform and then complete the square to solve this kind of problem.

Hint: This formalism is very much like a gauge transformation. We define a new derivative

$$\frac{\partial}{\partial x}\to\frac{\partial}{\partial x} + \Gamma(x),$$

so that

$$\frac{\partial}{\partial t}P(x,t) = D\frac{\partial^2}{\partial x^2}P(x,t)$$

becomes

$$\frac{\partial}{\partial t}P(x,t) = D\left(\frac{\partial^2}{\partial x^2}P(x,t) + 2\Gamma\frac{\partial}{\partial x}P(x,t) + P\left(2\Gamma^2 + \frac{\partial}{\partial x}\Gamma\right)\right).$$

Now we define $\zeta := 2\Gamma$, and let $2\Gamma^2 + \frac{\partial}{\partial x}\Gamma = 0$. [4] The diffusion equation under this kind of transformation becomes the one we need.

Does this imply that the diffusion equation and gauge transformations are related? We might find some symmetry using this method.
Smoluchowski Equation

If we have a potential with a minimum, then the motion of the particles will be attracted to this minimum point. With the force in mind, we can write down the master equation, which is the $\zeta\neq 0$ case,

$$\frac{\partial}{\partial t}P(x,t) = \frac{\partial}{\partial x}\left(\frac{\partial U(x)}{\partial x}P(x,t)\right) + D\frac{\partial^2}{\partial x^2}P(x,t).$$

For a harmonic potential,

$$\frac{\partial}{\partial t}P(x,t) = \gamma\frac{\partial}{\partial x}\big(xP(x,t)\big) + D\frac{\partial^2}{\partial x^2}P(x,t).$$

We can also use the Fourier transform to solve this problem. However, we will only get

$$\frac{\partial}{\partial t}P^k = \cdots\frac{\partial}{\partial k}P^k + \cdots k^2P^k,$$

where $\mathcal{T}(t) = \frac{1 - e^{-2\gamma t}}{2\gamma}$.
Defects

Figure 5.6: The redefined time parameter in the solution of the Smoluchowski equation example.

A chain might have a defect at site r which captures the walker at a rate C. We would have a master equation of this kind,

$$\frac{\mathrm{d}}{\mathrm{d}t}P_m = F(P_{m+1} + P_{m-1} - 2P_m) - C\,\delta_{m,r}P_m.$$
Consider an equation

$$\frac{\mathrm{d}}{\mathrm{d}t}y + \alpha y = f(t).$$

2. $\alpha$ can be time dependent; instead of the exponential term, we have $e^{-\int_{t'}^{t}\alpha(s)\,\mathrm{d}s}$ as the Green function.

Hint: Suppose we have a first-order inhomogeneous differential equation with a homogeneous initial condition.

Hint: As a comparison to the Green function method (which is not very helpful for first-order ODEs), the general solution of the first-order differential equation

$$\dot y + \alpha(t)y = f(t)$$

is

$$y(t) = \frac{\int\mu(t)f(t)\,\mathrm{d}t + C}{\mu(t)},$$

where $\mu(t) := e^{\int\alpha(t')\,\mathrm{d}t'}$.

Hint: The Green function method for second-order inhomogeneous equations is in the vocabulary.
Recall that $L[e^{at}] = \frac{1}{s - a}$.
We haven't solved it yet: the RHS still depends on the probability. Notice that if we choose $m = r$, we get a closed equation for $\tilde P_r$.

Warning: We have to invert the transform numerically anyway, so why bother with these transforms?
Survival Probability

$$Q(t) = \sum_m P_m(t)$$

In our example,

$$\tilde Q(\epsilon) = \sum_m\tilde P_m = \frac{1}{\epsilon}\left(1 - \frac{\tilde\eta_r}{1/C + \tilde\Pi_0}\right),$$

$$\frac{\mathrm{d}}{\mathrm{d}t}Q \to \epsilon\tilde Q - 1.$$
A brief review. The equation:

$$\frac{\mathrm{d}}{\mathrm{d}t}P_m = (\text{terms without the defect}) - C\,\delta_{m,r}P_m.$$

Inserting this result back into the solution, we can write down the final result,

$$\tilde P_m(\epsilon) = \tilde\eta_m(\epsilon) - C\,\tilde\Pi_{m-r}(\epsilon)\,\tilde\eta_r(\epsilon)\,\frac{1}{1 + C\,\tilde\Pi_0(\epsilon)}.$$
Looking through a table of Laplace transforms, we see that this is the transform of the time derivative of the survival probability,

$$\frac{\mathrm{d}}{\mathrm{d}t}Q(t) = -\int_0^t\mathrm{d}t'\,\mathcal{M}(t-t')\,\eta(t'),$$

in which the memory kernel has the Laplace transform

$$\tilde{\mathcal{M}}(\epsilon) = \frac{1}{1/C + \tilde\Pi_0}.$$
1. Motion limit,

$$\frac{1}{C}\ll\tilde\Pi_0.$$

The meaning of this is that the propagator decreases with time so fast that it becomes very small. In this limit, the survival probability is dominated by the motion, not by the capture.

2. Capture limit,

$$\frac{1}{C}\gg\tilde\Pi_0.$$
and

$$L[J_0(2Ft)] = \frac{1}{\sqrt{\epsilon^2 + (2F)^2}},$$

where $I_0(2Ft)$ is the modified Bessel function of the first kind and $J_0(2Ft)$ is its companion.

Using the shift property, we can find

$$L[I_0(2Ft)\,e^{-2Ft}] = \frac{1}{\sqrt{(\epsilon + 2F)^2 - (2F)^2}}.$$
Photosynthesis
The absorbed photon energy performs a random walk in the chloroplast until it hits a reaction center. Besides that, the energy can be emitted again (as a photon) after some time. So the process can be described by the following master equation:

\frac{d}{dt}P_m = (\text{terms without reaction and emission}) - C\delta_{m,r}P_m - \frac{P_m}{\tau}
Figure 5.7: Chloroplast ultrastructure (caption from Wikipedia): 1. outer membrane 2. intermembrane space 3. inner membrane (1+2+3: envelope) 4. stroma (aqueous fluid) 5. thylakoid lumen (inside of thylakoid) 6. thylakoid membrane 7. granum (stack of thylakoids) 8. thylakoid (lamella) 9. starch 10. ribosome 11. plastidial DNA 12. plastoglobule (drop of lipids)
Without trapping, the survival probability decays purely by emission, so

Q(t) = Q(0)e^{-t/\tau}.

Doing the integration,

\frac{1}{\tau}\int_0^\infty Q(t)\,dt = 1.
The problem becomes the derivation of the two survival probabilities. However, we don't need the inverse Laplace transform, because

\int_0^\infty Q(t)\,dt = L_{\epsilon=0}[Q(t)].
Let's denote the quantities of the problem with emission by a prime. Notice that if we define a new quantity

\bar{P}_m = e^{t/\tau}P'_m,

then plugging it into the master equation brings us back to the case without emission, so we have the solution immediately:

\bar{P}_m = P_m, \qquad Q' = e^{-t/\tau}Q.
This result simplifies our calculation considerably, because we don't need to redo the survival-probability calculation for the case with both trapping and emission. All we do is take the Laplace transform of Q(t) and evaluate it at different \epsilon.
In detail,

\tilde{Q}(\epsilon) = \frac{1}{\epsilon}\left(1 - \frac{\tilde{\eta}_r}{1/C + 1/\sqrt{\epsilon(\epsilon + 4F)}}\right).
So

\frac{1}{\tau}\int_0^\infty dt\,Q(t) = \frac{1}{\tau}L_{\epsilon=0}[Q(t)] = \frac{1}{\tau}\tilde{Q}(\epsilon = 0),

\frac{1}{\tau}\int_0^\infty dt\,Q'(t) = \frac{1}{\tau}L_{\epsilon=1/\tau}[Q(t)] = \frac{1}{\tau}\tilde{Q}(\epsilon = 1/\tau).
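Here is a sketch of this bookkeeping in code (my own; it assumes the walker starts on the trap site, so that \tilde{\eta}_r equals the defect-free self-propagator \tilde{\Pi}_0(\epsilon) = 1/\sqrt{\epsilon(\epsilon + 4F)})::

    import numpy as np

    def Q_tilde(eps, F, C):
        """Laplace transform of the trap-only survival probability Q(t)."""
        pi0 = 1.0 / np.sqrt(eps * (eps + 4 * F))
        eta_r = pi0            # assumption: walker starts on the trap site
        return (1.0 - eta_r / (1.0 / C + pi0)) / eps

    F, C, tau = 1.0, 0.5, 2.0
    # Emission yield: (1/tau) int e^{-t/tau} Q(t) dt = Q~(1/tau) / tau.
    emitted = Q_tilde(1.0 / tau, F, C) / tau
    print("fraction emitted :", emitted)
    print("fraction captured:", 1.0 - emitted)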
Multiple Defects
We can solve for finitely many defects in principle. For example, the two-defect case gives the following master equation:

\frac{d}{dt}P_m = \cdots - C_r\delta_{m,r}P_m - C_s\delta_{m,s}P_m.
By using the two special cases m = r and m = s, we get two coupled equations for \tilde{P}_r and \tilde{P}_s:

\tilde{P}_r = \tilde{\eta}_r - C_r\tilde{\Pi}_0\tilde{P}_r - C_s\tilde{\Pi}_{r-s}\tilde{P}_s

\tilde{P}_s = \tilde{\eta}_s - C_r\tilde{\Pi}_{s-r}\tilde{P}_r - C_s\tilde{\Pi}_0\tilde{P}_s.
However, the problem gets very complicated as the number of defects becomes very large.
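Still, for a fixed Laplace variable \epsilon the two coupled equations are just a 2x2 linear system. A minimal sketch (my own; it assumes the 1D-chain propagator \tilde{\Pi}_m(\epsilon) = u^{|m|}/\sqrt{\epsilon(\epsilon + 4F)} with u = (\epsilon + 2F - \sqrt{\epsilon(\epsilon + 4F)})/(2F), and a walker starting at site 0)::

    import numpy as np

    def pi_tilde(m, eps, F):
        """Assumed Laplace transform of the 1D-chain propagator Pi_m(t)."""
        root = np.sqrt(eps * (eps + 4 * F))
        u = (eps + 2 * F - root) / (2 * F)
        return u ** abs(m) / root

    eps, F, Cr, Cs, r, s = 0.3, 1.0, 0.8, 0.4, 2, -3
    eta_r, eta_s = pi_tilde(r, eps, F), pi_tilde(s, eps, F)

    # (1 + Cr Pi_0) P_r~ + Cs Pi_{r-s} P_s~ = eta_r~, and symmetrically for s.
    A = np.array([[1 + Cr * pi_tilde(0, eps, F), Cs * pi_tilde(r - s, eps, F)],
                  [Cr * pi_tilde(s - r, eps, F), 1 + Cs * pi_tilde(0, eps, F)]])
    P_r, P_s = np.linalg.solve(A, np.array([eta_r, eta_s]))
    print(P_r, P_s)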
Quantum Master Equation

Quantum mechanical observables are averages over all density matrix elements,

\langle O \rangle = \sum_{m,n} O_{nm}\rho_{mn}.
For a diagonal density matrix, this averaging becomes ordinary probability averaging. However, even if we start with a diagonal density matrix, the averaging won't remain classical as time goes on: off-diagonal elements are generated out of the diagonal ones.
In that sense, it’s not even possible to use the classical master equation to solve most quantum problems. We need the
quantum master equation.
The first principle of quantum mechanics is the von Neumann equation,

i\hbar\frac{d}{dt}\hat{\rho} = [\hat{H}, \hat{\rho}] = \hat{L}\hat{\rho}.
Then the question is, as a first idea, how to derive an equation for the probabilities.
Pauli’s Mistake
Pauli derived the first quantum master equation, which is not quite right.

The solution for a quantum system is

\hat{\rho}(t) = e^{-i\hat{L}(t - t_0)}\hat{\rho}(t_0).
In the Heisenberg picture,

\hat{\rho}(t + \tau) = e^{-i\tau\hat{H}}\hat{\rho}(t)e^{i\tau\hat{H}}.
Taking the diagonal element, the left-hand side is the probability P_m(t + \tau). The right-hand side becomes

\mathrm{RHS} = \sum_{n,l}\langle m|e^{-i\tau\hat{H}}|n\rangle\langle n|\hat{\rho}(t)|l\rangle\langle l|e^{i\tau\hat{H}}|m\rangle.
Here is where Pauli's idea comes in. He assumed that the system is dirty enough to exhibit repeated recurrence of a diagonal density matrix. Then he used the diagonal density matrix to calculate the probability,

P_m(t + \tau) = \sum_n \langle m|e^{-i\tau\hat{H}}|n\rangle\langle n|\hat{\rho}(t)|n\rangle\langle n|e^{i\tau\hat{H}}|m\rangle = \sum_n P_n\left|\langle m|e^{-i\tau\hat{H}}|n\rangle\right|^2.
The term \left|\langle m|e^{-i\tau\hat{H}}|n\rangle\right|^2 on the RHS is the probability for state n to end up in state m after a short time \tau. We'll define this as Q_{mn}(\tau).
So, in short, the probability is

P_m(t + \tau) = \sum_n Q_{mn}(\tau)P_n(t).
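A tiny illustration of Pauli's prescription (my own construction, using a random real symmetric Hamiltonian and \hbar = 1): the matrix Q_{mn}(\tau) = |\langle m|e^{-i\tau\hat{H}}|n\rangle|^2 is doubly stochastic, so the prescription conserves probability, and Q(\tau) = Q(-\tau)::

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    N, tau = 5, 0.3
    A = rng.standard_normal((N, N))
    H = (A + A.T) / 2                    # random real symmetric Hamiltonian

    U = expm(-1j * tau * H)              # short-time evolution operator (hbar = 1)
    Q = np.abs(U)**2                     # Q[m, n] = |<m| exp(-i tau H) |n>|^2
    print(Q.sum(axis=0))                 # each column sums to 1 (unitarity)
    print(Q.sum(axis=1))                 # each row sums to 1, too
    print(np.allclose(Q, np.abs(expm(1j * tau * H))**2))  # Q(tau) = Q(-tau)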
Important: However, the Pauli assumption is basically the Fermi golden rule, which requires an infinite amount of time. This is obviously not valid for a master-equation system. Moreover, Q_{mn}(\tau) = Q_{mn}(-\tau) (for a real Hamiltonian), so the same prescription run backward gives

P_m(t + \tau) = P_m(t - \tau):

the dynamics would be time-reversal symmetric, with no irreversibility.
van Hove
Important: van Hove made great progress by bringing up the following questions.
1. What systems can be described by master equations?
2. What’s the time scale for quantum master equation to be valid?
3. How to derive a quantum master equation?
van Hove's idea was that quantum master equations can describe systems satisfying a diagonal-singularity condition. Then he said the time scale should be long enough and the perturbation weak enough, in the combined limit

\lambda^2 t \approx \text{constant}.
Warning: This looks weird to me because I cannot see why this is a good approximation.
P_m = \langle m|\hat{\rho}|m\rangle = \sum_{n,l}\langle m|e^{-i\hat{H}t/\hbar}|n\rangle\langle n|\hat{\rho}(0)|l\rangle\langle l|e^{i\hat{H}t/\hbar}|m\rangle,

which for a diagonal \rho(0) reduces to

P_m = \sum_n \left|\langle m|e^{-i(\hat{H}_0 + \lambda\hat{W})t/\hbar}|n\rangle\right|^2 \rho_{nn}(0).
van Hove applied the random phase condition only to the initial condition: \rho_{nl}(0) is diagonal at t = 0. Then one expands the evolution in the whole Dyson series, selectively sets some terms to zero using the assumptions, and derives a master equation.
Split the density matrix into diagonal and off-diagonal parts,

\hat{\rho} = \hat{\rho}_d + \hat{\rho}_{od}.

Apply \hat{D} (the projector onto the diagonal part) and 1 - \hat{D} to the von Neumann equation:

\partial_t\hat{\rho}_d = -i\hat{D}\hat{L}\hat{\rho}

\partial_t\hat{\rho}_{od} = -i(1 - \hat{D})\hat{L}\hat{\rho},

that is,

\partial_t\hat{\rho}_d = -i\hat{D}\hat{L}\hat{\rho}_d - i\hat{D}\hat{L}\hat{\rho}_{od}

\partial_t\hat{\rho}_{od} = -i(1 - \hat{D})\hat{L}\hat{\rho}_d - i(1 - \hat{D})\hat{L}\hat{\rho}_{od}.
Recall that the solution to

\dot{y} + \alpha y = f

is

y = e^{-\alpha t}y(0) + \int_0^t dt'\,e^{-\alpha(t - t')}f(t').

We solve the \hat{\rho}_{od} equation formally in the same way and insert the result into the \hat{\rho}_d equation.
What happened to the term containing \hat{\rho}_{od}(0)? It disappears when we apply the initial random phase condition.
When that happens we get our closed master equation for \hat{\rho}_d, which is an equation for the probabilities. Though we need to set \rho_{od}(0) = 0 to have a closed master equation, that doesn't mean the initial condition has to be localized on only one state.
“We can always use phasers.”
—V. M. Kenkre
Suppose we have a system with five possible states. The off-diagonal elements don't exist initially if the system starts in only one state. The density matrix will contain off-diagonal elements if we start with a superposition of two states. However, we can always choose a combination of the states as the basis, so that the initial density matrix becomes diagonal.
We derived some sort of quantum master equation using the projection method. Here we will simplify it. Let's stare at the result for a minute:
\partial_t\hat{\rho}_d = -i\hat{D}\hat{L}\hat{\rho}_d - \int_0^t dt'\,\hat{D}\hat{L}e^{-i(1-\hat{D})\hat{L}(t-t')}(1-\hat{D})\hat{L}\hat{\rho}_d(t') - i\hat{D}\hat{L}e^{-i(1-\hat{D})\hat{L}t}\hat{\rho}_{od}(0).
By definition, \rho_d = \hat{D}\hat{\rho}. So what is \hat{D}\hat{L}\hat{\rho}_d?

\hat{D}\hat{L}\hat{\rho}_d = \hat{D}\hat{L}\hat{D}\hat{\rho}
\hat{D}\hat{L}\hat{D}\hat{\rho} = \hat{D}\left[\hat{\rho}_d\hat{H} - \hat{H}\hat{\rho}_d\right], \quad \text{with } \hat{\rho}_d = \mathrm{diag}(\rho_{11}, \rho_{22}, \rho_{33}, \ldots).
We can easily see that the diagonal elements of the two terms in the bracket are equal, (\hat{\rho}_d\hat{H})_{mm} = \rho_{mm}H_{mm} = (\hat{H}\hat{\rho}_d)_{mm}, so all the diagonal elements cancel. Now when the \hat{D} outside the bracket is applied, the whole term is zero. We are lucky: the term -i\hat{D}\hat{L}\hat{\rho}_d is eliminated.
We do perturbation theory most of the time. Consider the case where the Hamiltonian of the system is \hat{H} = \hat{H}_0 + \lambda\hat{W}. We can split the Liouville operator into two parts, \hat{L} = \hat{L}_0 + \lambda\hat{L}_W.
Our master equation becomes

\partial_t\hat{\rho}_d = -\int_0^t dt'\,\hat{D}(\hat{L}_0 + \lambda\hat{L}_W)e^{-i(1-\hat{D})(\hat{L}_0 + \lambda\hat{L}_W)(t-t')}(1-\hat{D})(\hat{L}_0 + \lambda\hat{L}_W)\hat{\rho}_d

= -\int_0^t dt'\,\hat{D}(\hat{L}_0 + \lambda\hat{L}_W)e^{-i(1-\hat{D})(\hat{L}_0 + \lambda\hat{L}_W)(t-t')}(\hat{L}_0 + \lambda\hat{L}_W)\hat{\rho}_d

= -\int_0^t dt'\,\mathcal{K}(t - t')\hat{\rho}_d,
in which we used -i\hat{D}\hat{L}e^{-i(1-\hat{D})\hat{L}t}\hat{\rho}_{od}(0) = 0 (initial condition) and \hat{D}\hat{L}\hat{\rho}_d = 0 (just proved).
We have the definition

\mathcal{K}(t - t') = \hat{D}(\hat{L}_0 + \lambda\hat{L}_W)e^{-i(1-\hat{D})(\hat{L}_0 + \lambda\hat{L}_W)(t-t')}(\hat{L}_0 + \lambda\hat{L}_W).
I dropped several terms, even ones at first order in \lambda. This is justified because the interaction term can be very different from the zeroth order. With many terms gone, we can now look at the numbers, i.e., the density matrix elements sandwiched between states:
\langle m|\partial_t\rho_d|m\rangle = -\lambda^2\langle m|\int_0^t dt'\,\hat{L}_W e^{-i(t-t')\hat{L}_0}\hat{L}_W\rho_d(t')|m\rangle.
Defining \Omega_{mn}(t-t') = \Omega_{nm}(t-t') = 2\lambda^2|W_{mn}|^2\cos((t-t')(\epsilon_m - \epsilon_n)), we can write the master equation in a really simple form,

\partial_t P_m = \int_0^t dt'\sum_n\left(\Omega_{mn}(t-t')P_n - \Omega_{nm}(t-t')P_m\right).
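To see what this generalized master equation does, here is a direct time-stepping sketch for two sites (my own; \hbar = 1 and all parameter values are arbitrary). The occupation oscillates without damping, and its time average can be compared with my own estimate from the Laplace-transformed equation::

    import numpy as np

    # GME for two sites: dP1/dt = int_0^t Omega(t-t') (P2(t') - P1(t')) dt',
    # with memory Omega(t) = 2 lam^2 |W|^2 cos(t (e1 - e2)) and P2 = 1 - P1.
    lam, W, e1, e2 = 0.2, 1.0, 1.0, 1.3
    Omega = lambda t: 2 * lam**2 * abs(W)**2 * np.cos((e1 - e2) * t)

    dt, n = 0.01, 8000
    P1 = np.empty(n)
    P1[0] = 1.0                          # walker starts on site 1
    for i in range(1, n):
        t = (i - 1) * dt
        ts = np.arange(i) * dt           # past times t' = 0 ... t
        # Rectangle-rule memory integral; P2 - P1 = 1 - 2 P1.
        dP1 = np.sum(Omega(t - ts) * (1.0 - 2.0 * P1[:i])) * dt
        P1[i] = P1[i - 1] + dt * dP1

    # My estimate from the Laplace transform of the GME (rough comparison).
    print("time-averaged P1:", P1.mean())
    print("estimate        :",
          0.5 + 0.5 * (e1 - e2)**2 / ((e1 - e2)**2 + 4 * lam**2 * abs(W)**2))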
In the Markovian approximation we replace \Omega_{mn}(t) \to \delta(t)\int_0^t d\tau\,\Omega_{mn}(\tau). Taking the Laplace transform on both sides,

\tilde{\Omega}_{mn}(\epsilon) = \int_0^\infty dt\,e^{-\epsilon t}\delta(t)\int_0^t d\tau\,\Omega_{mn}(\tau)

= \frac{1}{-\epsilon}\int_0^\infty \delta(t)\int_0^t d\tau\,\Omega_{mn}(\tau)\,de^{-\epsilon t}

= \frac{1}{\epsilon}\int_0^\infty e^{-\epsilon t}\,d\!\left(\delta(t)\int_0^t d\tau\,\Omega_{mn}(\tau)\right)

= \frac{1}{\epsilon}\int_0^\infty e^{-\epsilon t}\int_0^t d\tau\,\Omega_{mn}(\tau)\,d(\delta(t)) + \frac{1}{\epsilon}\int_0^\infty e^{-\epsilon t}\delta(t)\,d\!\left(\int_0^t d\tau\,\Omega_{mn}(\tau)\right)

= \frac{1}{\epsilon}\int_0^\infty e^{-\epsilon t}\delta(t)\Omega_{mn}(t)\,dt

= \frac{1}{\epsilon}\Omega_{mn}(0)
I'll put all the \hbar's back into the equations in this subsection.
I read about the Markovian idea on quantiki. Here is my derivation of Fermi's golden rule from the quantum master equation using this approach.

First of all, we can use the interaction picture; the master equation can be rewritten in it.
In the interaction picture, W^I_{mn}(t) = e^{i\epsilon_m t/\hbar}W_{mn}e^{-i\epsilon_n t/\hbar}, and the master equation becomes

\partial_t P_m = -\frac{\lambda^2}{\hbar^2}\sum_n\left[\int_0^t dt'\,W^I_{mn}(t)W^I_{nm}(t')\,(P_m - P_n) - \int_0^t dt'\,W^I_{mn}(t)W^I_{nm}(t')\,(P_n - P_m)\right].
Markovian means there is no dependence on the past; in other words, the two-point correlation in time is non-zero only when the two times are equal, \mathrm{Corr}(t_1, t_2) = 0 for all t_1 \neq t_2. In our master equation case,
\int_0^t dt'\,W^I_{mn}(t)W^I_{nm}(t')\left(P_m(t') - P_n(t')\right)

= \int_0^t dt'\,W^I_{mn}(t - t')W^I_{nm}(0)\left(P_m(t) - P_n(t)\right)

= \left(P_m(t) - P_n(t)\right)\lim_{t\to\infty}\int_0^t dt'\,W^I_{mn}(t - t')W^I_{nm}(0).
The remaining time integral of the oscillating phase produces a delta function in the long-time limit, using

\lim_{\epsilon\to 0}\frac{\sin(x/\epsilon)}{\pi x} = \delta(x).
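A quick check of this nascent delta function (my own): integrating \sin(x/\epsilon)/(\pi x) against a smooth test function should approach the test function's value at x = 0::

    import numpy as np

    test = lambda x: np.exp(-(x - 0.2)**2)      # arbitrary smooth test function
    for eps in (1e-1, 1e-2, 1e-3):
        x = np.linspace(-40, 40, 1_600_001)
        dx = x[1] - x[0]
        # sin(x/eps)/(pi x) written via np.sinc, which is regular at x = 0.
        delta_eps = np.sinc(x / (np.pi * eps)) / (np.pi * eps)
        print(eps, np.sum(delta_eps * test(x)) * dx, "->", test(0.0))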
Comparing this result with the classical master equation, we can read off the transition rate,

\Omega_{mn} = \frac{2\pi\lambda^2|W_{mn}|^2}{\hbar}\delta(\epsilon_m - \epsilon_n),

which is Fermi's golden rule.
Brownian Motion

The Langevin equation

\frac{d}{dt}v + \gamma v = R(t)

has the solution

v(t) = v(0)e^{-\gamma t} + \int_0^t dt'\,e^{-\gamma(t - t')}R(t').

Knowing that R(t) is random, or stochastic, we immediately realize that v(t) is stochastic too. The thing to do is to find out its statistics.
This quantity squared is

v(t)^2 = v(0)^2 e^{-2\gamma t} + \int_0^t\!\int_0^t dt'\,dt''\,e^{-\gamma(t - t')}e^{-\gamma(t - t'')}R(t')R(t'') + \text{cross terms}.
Hint: Note that the average here is an ensemble average. Define the probability density P(v,t) for the velocity at time t. Recall that in velocity space the Liouville (continuity) equation is

\frac{\partial}{\partial t}P(v,t) + \frac{\partial}{\partial v}j = 0.
It's very important to realize that this current density is the current density in velocity space. Applying \frac{d}{dt}v = -\gamma v + R(t),

\frac{\partial}{\partial t}P(v,t) + \frac{\partial}{\partial v}\left((-\gamma v + R(t))P(v,t)\right) = 0.
Warning: Now I am curious what the ENSEMBLE average is. It's not this probability density, because there is time dependence.
By using the definition of the random force, the ensemble average gives very simple results:

\langle v(t\to\infty)\rangle = 0, \qquad \langle v(t\to\infty)^2\rangle = \frac{C}{2\gamma}.
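These two limits are easy to reproduce with a direct Langevin simulation (my own sketch; it assumes the noise normalization \langle R(t)R(t')\rangle = C\delta(t - t') and uses the Euler-Maruyama discretization)::

    import numpy as np

    # dv/dt = -gamma v + R(t): check <v(inf)> = 0 and <v(inf)^2> = C/(2 gamma).
    rng = np.random.default_rng(2)
    gamma, C, dt, n_steps, n_samples = 1.0, 2.0, 1e-3, 10_000, 50_000

    v = np.zeros(n_samples)
    for _ in range(n_steps):
        # Euler-Maruyama: white noise integrates to sqrt(C dt) * Gaussian.
        v += -gamma * v * dt + np.sqrt(C * dt) * rng.standard_normal(n_samples)

    print("<v>  :", v.mean())            # ~ 0
    print("<v^2>:", v.var(), "vs C/(2 gamma) =", C / (2 * gamma))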
Wiener Process

\frac{d}{dt}x = R(t).
Ornstein-Uhlenbeck Process

\frac{d}{dt}x + \gamma x = R(t).

Suppose we have \langle v(0)\rangle = 0; then \langle x^2\rangle becomes constant at long times.
A stochastic damped harmonic oscillator obeys

m\frac{d^2}{dt^2}x + \alpha\frac{d}{dt}x + \beta x = R(t).

Dividing by m,

\frac{d^2}{dt^2}x + \frac{\alpha}{m}\frac{d}{dt}x + \frac{\beta}{m}x = \frac{1}{m}R(t).

We see immediately that this reduces to an Ornstein-Uhlenbeck equation if the mass is very small.
Index

B
BBGKY, 59
Boltzmann Factor, 22

C
Canonical Ensemble, 49
Chapman-Kolmogorov equation, 65

D
Defect, 85

E
Ensemble, 44
Entropy, 15

G
Gamma Space, 59
Gaussian Integral, 7
Grand Canonical Ensemble, 51
Green Function, 16

H
H Theorem, 61

I
Identical Particles, 52
Irreversibility, 78

L
Laws of Thermodynamics, 13
Liouville Operator, 59

M
Master Equation, 65, 70
Mechanics, 22
Microcanonical Ensemble, 47
Modified Bessel Function, 75
mu Space, 59

N
Non-Equilibrium, 59
nonequilibrium, 59

P
Phase Space, 22
Probability, 22
Propagator, 81

S
Special Functions, 7
Statistical Mechanics, 22

T
Thermodynamics Potentials, 14