Statistical Mechanics
Alejandro L. Garcia
San Jose State University
These lecture notes should supplement your notes, especially for those of you (like myself) who are poor
note takers. But I don’t recommend reading these notes while I lecture; when you go to see a play you
don’t take the script to read along. Continue taking notes in class but relax and don’t worry about
catching every little detail. WARNING: THESE NOTES ARE IN DRAFT FORM AND
PROBABLY HAVE NUMEROUS TYPOS; USE AT YOUR OWN RISK.
The main text for the course is R. K. Pathria and Paul D. Beale, Statistical Mechanics, 3rd Ed.,
Pergamon, Oxford (2011). Other texts that will sometimes be mentioned and that you may want to
consult include: K. Huang, Statistical Mechanics, 2nd Ed., Wiley, New York (1987); F. Reif, Funda-
mentals of Statistical and Thermal Physics, McGraw-Hill, New York (1965); L.D. Landau and E.M.
Lifshitz, Statistical Physics, Addison-Wesley, Reading Mass. (1969).
Here is the general plan for the course: First review some basic thermodynamics and applications;
this is not covered in your textbook but any undergraduate thermodynamics book should be suitable.
Given the entropy of a system, all other thermodynamic quantities may be found. The theory of
statistical ensembles (Chapters 1−4 in Pathria) will tell us how to find the entropy given the microscopic
dynamics. The theory of ensembles connects mechanics and thermodynamics.
The calculations involved in ensemble theory can be quite hard. We start with the easier non-interacting systems, such as paramagnets, the classical ideal gas, the Fermion gas, etc. (Chapters 6−8) and work up to interacting systems, such as a condensing vapor and ferromagnets (Chapters 12−13).
Most of statistical mechanics is concerned with calculating statistical averages but we briefly con-
sider the variation of microscopic and macroscopic quantities about their mean values as predicted by
fluctuation theory (Chapter 15). Finally we’ll touch on my own specialty, computer simulations in
statistical mechanics in Chapter 16.
Chapter 1
Thermodynamics
In this course we will only consider systems at thermodynamic equilibrium, which is characterized by
an absence of time appearing in any of our discussions. This also means that we do not consider fluxes,
such as heat flux, to be occurring other than to move us from one equilibrium state to another. That is,
equilibrium thermodynamics tells us nothing about the thermal conductivity of a system (or any other
transport coefficient, such as viscosity). All we can know is what the equilibrium state is like once we
reach it, not how quickly we arrive there or anything about the non-equilibrium states before we get
there.
A system at a steady state (a state that is not changing in time) is not necessarily at thermodynamic
equilibrium since there may be a flux imposed by boundary conditions. An example would be a crystal
with the top and bottom surfaces held at different temperatures; in this case there is a constant heat flux
and the problem is in the domain of solid mechanics. If instead of a crystal we have a fluid then it’s a
hydrodynamics problem. These other branches of physics extend the concepts developed in equilibrium
thermodynamics (e.g., temperature, pressure) but they are not what we’ll consider in this course.
Equation of State
Lecture 1
Typically a system at thermodynamic equilibrium is homogeneous since gradients usually produce fluxes
(e.g., gradients of temperature produce a heat flux). One situation in which we may have a gradient
while at equilibrium is a gas in a constant gravitational field; the density decreases with height but,
strictly speaking, the system is still at thermodynamic equilibrium. This is a subtle point and since
most equilibrium systems are homogeneous we'll restrict our attention to physical systems entirely
described by a set of global thermodynamic parameters. Commonly used parameters include:
• pressure, P , and volume, V
• tension, τ and length, L
• magnetic field, H and magnetization, M
These parameters appear in pairs, the first being a generalized force and the second a generalized
displacement, with the product of the pair having the dimensions of energy. All these parameters are
well-defined by mechanics and are readily measured mechanically.
An additional thermodynamic parameter, temperature, T , is not a mechanical parameter; we defer
its definition for now. Rather than trying to be general, let’s say that our thermodynamic parameters
are P , V , and T , for example, if our system was a simple gas in a container.
An isolated system (no energy or matter can enter or leave the system) will, in time, attain thermo-
dynamic equilibrium. At thermodynamic equilibrium, the parameters do not change with time. Though
thermometers have been in use since the time of Galileo (late 1500’s) many concepts regarding temper-
ature were not well understood until much later. In the mid 1700’s Joseph Black used thermometers to
establish that two substances in thermal contact have the same temperature at thermodynamic equilibrium. This is
counterintuitive — if metal and wood are at thermal equilibrium the metal still “feels colder” than the
wood. This misleading observation is due to the fact that our sense of touch uses conduction of heat
and thus is a poor thermometer.
The equation of state is the functional relation between P , V , and T at equilibrium (see Fig. 1.1).
An example of an equation of state is the ideal gas law for a dilute gas,
P V = N kT
where N is the number of molecules in the gas and k = 1.38 × 10^−23 J/K is Boltzmann's constant.
Boltzmann’s constant does not have any deep meaning. Since the temperature scale was developed long
before the relation between heat and energy was understood the value of k simply allows us to retain
the old Kelvin and Celsius scales. We'll see the theoretical justification for the ideal gas law in the next chapter.
You should not get the impression that the equation of state completely specifies all the thermody-
namic properties of a system. As an example, both argon and nitrogen molecules have about the same
mass and the two gases obey the ideal gas law yet their molar heat capacities are very different (can
you guess why Ar and N2 are different?).
It follows from the equation of state that given V and T , we know P at equilibrium (or in general,
given any two we know the third).
The work done by the system in going from state A to state B along a given path is
∆WAB = ∫_A^B P dV
The mental picture for this process is that of a piston lifting a weight as the system's volume expands.
Notice that if the path between A and B is changed, ∆WAB is different; for this reason dW = P dV is
not an exact differential.
By conservation of energy,
UB − UA = ∆QAB − ∆WAB
where ∆QAB is the non-mechanical energy change in going from A to B. We call ∆QAB the heat added
to the system in going from A to B. If A and B are infinitesimally close,
dU = dQ − dW = dQ − P dV (∗)
where dU is an exact differential though dQ and dW are not. This is the first law of thermodynamics,
which simply states that total energy is conserved.
The results so far are very general. If our mechanical variables are different, everything carries over.
For example, for a stretched wire the mechanical variables are tension τ and length L instead of P and
V . We simply replace P dV with −τ dL in equation (∗). Note that the minus sign comes from the fact
that the wire does work when it contracts (imagine the wire contracting and lifting a weight).
Example: For n moles of an ideal gas held at constant temperature T0 find the work done in going
from volume VA to VB .
Solution: The number of molecules in n moles is N = nNA where NA = 6.022 × 10^23 is Avogadro's
number (not to be confused with Avocado's number which is the number of molecules in a guaca-mole).
The ideal gas equation of state is P = N kT0 /V , so the work done along the isothermal path is
∆WAB = ∫_A^B P dV = N kT0 ∫_A^B dV /V = nRT0 ln(VB /VA )
Note that on a different path we would have to know how temperature varied with volume, i.e., be
given T (V ) along the path.
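As a quick numerical sanity check (my addition, not part of the original notes), the following Python sketch compares this analytic result against a direct numerical integration of P dV along the isotherm; the values of n, T0, VA, and VB are arbitrary choices for illustration.

```python
import numpy as np

R = 8.314           # gas constant, J/(mol K)
n, T0 = 1.0, 300.0  # moles and fixed temperature (arbitrary values)
VA, VB = 1.0, 2.0   # initial and final volumes, m^3

# Numerically integrate W = int P dV with P = nRT0/V along the isotherm
V = np.linspace(VA, VB, 100001)
W_numeric = np.trapz(n * R * T0 / V, V)

# Analytic result from the example above
W_analytic = n * R * T0 * np.log(VB / VA)
print(W_numeric, W_analytic)  # both ~1728.8 J
```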
Heat capacity
We define heat capacity, C, as
dQ = CdT
Specifically, the heat capacity at constant volume, CV , is
CV = (∂Q/∂T )V
The heat capacity gives the amount of heat energy required to change the temperature of an object
while holding certain mechanical variables fixed. We also define the heat capacity per unit mass, which
is often called the specific heat, and the heat capacity per particle (e.g., cV = CV /N ).
For liquids and solids, since their expansion at constant pressure is often negligible, one finds CP ≈
CV ; this is certainly not the case for gases.
For any function f (x, y) of two variables, the differential df is
df = (∂f /∂x)y dx + (∂f /∂y)x dy (Math identity)
where the subscripts on the parentheses remind us that a variable is held fixed. Applying this identity
to internal energy,
dU (T, V ) = (∂U/∂T )V dT + (∂U/∂V )T dV (∗∗)
If we hold volume fixed (i.e., set dV = 0) in this expression and in (∗) then we obtain the result
dQ = (∂U/∂T )V dT (constant V )
Comparing this with dQ = C dT identifies CV = (∂U/∂T )V . For an ideal gas the internal energy depends on temperature alone, so
dU = CV dT (Ideal gas)
If CV is constant then,
U (T ) = CV T + constant (Ideal gas)
If we say U (T = 0) = 0 then for an ideal gas U (T ) = CV T . Later we derive these results using statistical
mechanics.
Figure 1.4: Illustration of an ideal gas cooling in an adiabatic expansion under pressure.
Note: When a gas expands in a piston, it does work and thus if the system is insulated, the gas
cools (see Fig. 1.4). In this case ∆QAB = 0 as before but ∆WAB > 0 so UB < UA and thus TB < TA .
Example: For n moles of an ideal gas held at constant temperature T0 find the heat added in going
from volume VA to VB .
Solution: Since U = U (T ) for an ideal gas, if T is fixed then so is the internal energy. The first law
says,
UB − UA = ∆QAB − ∆WAB
since UB = UA , then ∆QAB = ∆WAB , so the heat added equals the work done for an isothermal path
(see the previous example).
Entropy
Lecture 2
The first law of thermodynamics may be written as:
dU = dQ − dW
where dW is the work done by the system and dQ is the heat added to the system. The second law
introduces a new thermodynamic variable, the entropy, S, defined for a reversible path by dS = dQ/T .
Unfortunately, the entropy is not as easily measured as other thermodynamic variables. We can write:
dS = dQ/T    or    SB − SA = ∫_A^B (1/T ) dQ
The differential dS is exact (like dU and dV ) so if we select a path between A and B such that
T = T0 is fixed, then:
SB − SA = (1/T0 ) ∫_A^B dQ = ∆QAB /T0
By experimentally measuring ∆QAB , we obtain S.
But what if we cannot get from A to B along an isothermal path? Along an adiabatic path (no heat
added or removed) dQ = 0 so dS = 0. In general, we can build a path between any two points A and
B by combining isothermal and adiabatic paths and thus calculate SB − SA .
The entropy is important in that given S(U, V ) for a physical system, all other thermodynamic
properties may be computed. We derive and use this result when we develop statistical mechanics.
The third law of thermodynamics states that S → 0 as T → 0; this fixes the constant of integration
so the absolute entropy may be computed by integrating dS = dQ/T from T = 0 along any convenient
path. There are many deep meanings to the third law and I encourage you to plunge in and read about
them on your own.
Carnot Engine: A Carnot engine runs on a reversible cycle composed of two isotherms, at temperatures
T+ and T− , and two adiabats; the cycle is a rectangle abcd in the T −S diagram. On leg ab an amount
of heat Q+ = T+ (Sb − Sa ) is added to the system; this is the area under this leg in the T −S diagram.
On the leg cd an amount of heat Q− = T− (Sb − Sa ) is removed from the system. Since Q = Q+ − Q− = W then
η = (Q+ − Q− )/Q+ = 1 − Q− /Q+ = 1 − T− /T+ (Carnot Engine)
The efficiency of a Carnot engine is given by this simple relation involving the ratio of temperatures.
Using the fact that a Carnot engine is reversible you can prove that no engine can be more efficient
than a Carnot engine.
The Carnot engine has important theoretical significance. Using the fact that Q− /Q+ = T− /T+ for
a reversible path, Clausius established that dQ/T was an exact differential, which led to the definition
of entropy. Lord Kelvin established the absolute temperature scale using the fact that the efficiency of
a Carnot engine only depends on the temperatures of the reservoirs and not on the properties of the
materials.
Note that since (∂S/∂U )V = 1/T > 0 the entropy increases with increasing energy, keeping volume
fixed. But let’s formulate a more fundamental result, namely the second law of thermodynamics.
Consider an isolated system (no heat or work in or out) that we arbitrarily divide into two parts
(see Fig. 1.7). Call the total volume V = V1 + V2 . Similarly, the total internal energy is U = U1 + U2 .
These thermodynamic variables are said to be extensive. At thermal equilibrium T1 = T2 , at mechanical
equilibrium P1 = P2 ; temperature and pressure are intensive parameters.
Each subsystem has an entropy that depends on temperature and pressure; entropy is an extensive
variable like energy. For example, given S(U, V ) then S(cU, cV ) = cS(U, V ) where c is a constant;
notice that if instead we had S(U, P ) then S(cU, cP ) ̸= cS(U, P ) since P is an intensive variable.
Another extensive variable is N , the number of molecules in the system, so if we allow it to vary then
we’d write S(cU, cV, cN ) = cS(U, V, N ).
The second law states that the total entropy S1 (T1 , P1 ) + S2 (T2 , P2 ) is maximum if and only if
T1 = T2 and P1 = P2 , that is, at equilibrium.
To see this, note that dU1 = −dU2 and dV1 = −dV2 for the isolated system, so
dS = dS1 + dS2 = (1/T1 − 1/T2 ) dU1 + (P1 /T1 − P2 /T2 ) dV1
Total entropy is extremum if and only if dS = 0, yet dS = 0 if and only if T1 = T2 and P1 = P2 . With
a little extra work, we may show that this extremum is a maximum.
We have two forms for the second law. Verbal form: the total entropy S(U, V ) of an isolated system
is maximum at equilibrium. Mathematical form: dU = T dS − P dV . One form can be derived from the other.
Thermodynamic Manipulations
We often want to manipulate partial derivatives in thermodynamics calculations. Use the partial deriva-
tive rules:
(∂x/∂y)w (∂y/∂z)w = (∂x/∂z)w (D1)
(∂x/∂y)z = 1/(∂y/∂x)z (D2)
(∂x/∂y)z (∂y/∂z)x (∂z/∂x)y = −1 (D3)
A useful example, which many thermodynamics texts derive, is
CP − CV = T V α²/KT
where α ≡ (1/V )(∂V /∂T )P is the coefficient of thermal expansion and KT ≡ −(1/V )(∂V /∂P )T is the
isothermal compressibility.
Maxwell Relations
Lecture 3
Consider our expression for the second law:
dU = T dS − P dV
Since dU is an exact differential, we can use the mathematical identity for the differential of a function
of two variables,
dU (S, V ) = (∂U/∂S)V dS + (∂U/∂V )S dV
In formulating the Second Law we already saw that,
T = (∂U/∂S)V and P = −(∂U/∂V )S
which gives the formal definitions of temperature and pressure for a thermodynamic system.
Using these results along with the partial derivative identity (D4),
∂²U/∂S∂V = (∂/∂V )S (∂U/∂S)V = (∂/∂S)V (∂U/∂V )S
so
(∂/∂V )S (T ) = (∂/∂S)V (−P )
or
(∂T /∂V )S = −(∂P/∂S)V (M1)
This identity is called a Maxwell relation; it is a relation between T, V, P, and S arising from the fact
that dU is an exact differential.
There are three other Maxwell relations:
(∂T /∂P )S = (∂V /∂S)P (M2)
(∂S/∂V )T = (∂P/∂T )V (M3)
(∂S/∂P )T = −(∂V /∂T )P (M4)
These other Maxwell relations come from the three other basic thermodynamic potentials, that we’ll
discuss in a moment.
Example: Express the rate of change of temperature when volume changes adiabatically in terms
of α, KT , and CV .
Solution: The quantity we want to find is:
(∂T /∂V )S = −(∂P/∂S)V (using M1)
= −(∂P/∂T )V / (∂S/∂T )V (using D1)
= −[−(∂P/∂V )T (∂V /∂T )P ] / [(1/T )(∂Q/∂T )V ] (using D2 and D3)
= −[(1/(V KT ))(αV )] / [CV /T ] (defn. of α, KT , and CV )
= −αT /(KT CV )
Thermodynamic Potentials
From the second law, we know that for an isolated system (U and V fixed) the entropy S is maximum
at equilibrium. We now establish a related result: for a system with fixed S and V , the internal energy
is minimum at equilibrium.
Consider the closed system, arbitrarily divided into two parts, a and b; the total energy is U = Ua +Ub
(see Fig. 1.8).
Consider the diagram shown in Fig. 1.9 illustrating the possible values of energy for the two
subsystems. The three diagonal lines indicate three values of total energy, U1 < U2 < U3 . In each case,
Figure 1.9: Energy graph with entropy contours (S2 > S1 ) and equilibrium points (marked by ×).
at equilibrium the total energy is divided between the two subsystems a and b. At equilibrium, for a
total system energy of U1 , the total system entropy is S1 (and similarly for S2 and S3 ).
As U increases, S must increase since:
(∂S/∂U )V = 1/T > 0
so S1 < S2 < S3 .
Now follow the contour of constant entropy S1 . From the diagram, if the system has entropy S1 ,
(and V is fixed) then equilibrium occurs at the minimum energy U1 .
To summarize: For fixed U and V thermodynamic equilibrium implies maximum S. For fixed S
and V thermodynamic equilibrium implies minimum U .
The internal energy is called a thermodynamic potential because it is minimum at equilibrium (for
fixed S and V ).
There are 3 other thermodynamic potentials:
H = U + PV dH = T dS + V dP (Enthalpy)
A = U − TS dA = −SdT − P dV (Helmholtz free energy)
G = U + PV − TS dG = −SdT + V dP (Gibbs free energy)
For fixed S and P , thermodynamic equilibrium implies minimum H. For fixed T and V , thermodynamic
equilibrium implies minimum A. For fixed T and P , thermodynamic equilibrium implies minimum G.
The Gibbs free energy is important to experimentalists since they often do their work at constant
temperature and pressure.
Example: Derive the Maxwell relation obtained from the enthalpy, that is, from dH = T dS + V dP .
Solution: Since dH is an exact differential, we can use the mathematical identity for the differential
of a function of two variables,
dH(S, P ) = (∂H/∂S)P dS + (∂H/∂P )S dP
Comparing with dH = T dS + V dP gives T = (∂H/∂S)P and V = (∂H/∂P )S . Equality of the mixed
second derivatives of H then yields
(∂T /∂P )S = (∂V /∂S)P
which is the Maxwell relation (M2).
Gibbs-Duhem Equation
One last general result that should be mentioned is Gibbs-Duhem equation,
SdT − V dP = 0 (Gibbs-Duhem)
We arrive at this result using the Euler theorem for extensive functions, which states that if f is an
extensive function, that is, if f (cx1 , . . . , cxn ) = cf (x1 , . . . , xn ) then
f (x1 , . . . , xn ) = Σ_{i=1}^{n} xi (∂f /∂xi )
Applying this to the extensive function U (S, V ) gives U = T S − P V . Taking the differential,
dU = T dS + SdT − P dV − V dP = T dS − P dV + {SdT − V dP }
so by our result for the second law the term in the curly braces is zero.
But wait a minute! In the discussion of thermodynamic potentials we said that the Gibbs free energy
was,
G = U + PV − TS dG = −SdT + V dP
Does the Gibbs-Duhem equation imply that dG = 0? No, because we've been neglecting the dependence
of the energy on the number of particles; instead of writing U (S, V ) it's more complete to write U (S, V, N ).
The second law is then generalized to,
dU = T dS − P dV + µdN
where µ is the chemical potential.† Notice that this defines chemical potential as,
µ = (∂U/∂N )S,V
That is, µ is the change in the internal energy when we add one particle to the system while keeping
entropy and volume fixed. We won’t need to work with chemical potential very much until later in the
course so let’s wait until then to discuss it further.
Getting back to the Gibbs-Duhem equation, its full form is
SdT − V dP + N dµ = 0 (Gibbs-Duhem)
Then dG = N dµ + µdN = d(µN ), so G = µN ; that is, the chemical potential is the Gibbs free energy
per particle.
† Note that the chemical potential has nothing to do with chemical reactions.
Figure 1.10: Phase diagram in P − V plane for Van der Waals gas.
A classic model of a non-ideal gas is the Van der Waals equation of state,
(P + an²/V ²)(V − nb) = nRT
where n is the number of moles and the positive constants a and b depend on the material. To understand
the origin of these constants, let’s consider them separately.
Take a = 0, then:
P = nRT /(V − bn)
Notice that P → ∞ if V → bn. There is a minimum volume to which we can compress the gas; for
one mole, this volume is b so b/NA is the volume of a single molecule. This term simulates the strong
repulsion between atoms when they are brought close together (Coulomb repulsion of electron shells).
To understand the other constant, a, write the equation of state as,
P = nRT /(V − nb) − an²/V ²
The larger the value of a, the smaller the pressure. This term represents the weak, attractive force
between atoms in the gas. This binding force reduces the pressure required to contain the gas in
a volume V . The smaller the volume, the closer the atoms get and the stronger this binding force
becomes. However, if we further decrease V , the repulsive term, (V − nb), kicks in and keeps the gas
from collapsing.
Fig. 1.10 shows the isotherms on the P − V diagram. The inflection point is called the critical point.
Using the fact that (∂P/∂V )T = (∂²P/∂V ²)T = 0 at the critical point you can work out the critical
values,
Tc = 8a/(27bR)    Pc = a/(27b²)    Vc = 3bn    Pc Vc /(nRTc ) = 3/8
For T < Tc the equation of state has an S-shape.
The region where the slope (∂P/∂V )T is positive is thermodynamically impossible; this region is
called the spinodal region. The isothermal compressibility
KT ≡ −(1/V )(∂V /∂P )T
of a substance must be positive or else second law is violated. To understand why, consider Fig. 1.11;
weight is added to the piston and if KT < 0 then the piston rises. Heat energy is spontaneously removed
from the reservoir and converted into work, a violation of the second law of thermodynamics.
To locate the thermodynamically stable states we use the Gibbs free energy,
dG = −SdT + V dP + µdN
We will fix T and N so this reduces to dG = V dP . Say that starting from PA , we move along the
isotherm of temperature T0 < Tc so
G(P, T0 ) = ∫_{PA}^{P} V dP + C
Figure 1.15: Isotherm in V − P plane for Van der Waals fluid. Compare with Fig. 1.14
Figure 1.16: Gibbs free energy versus pressure for the isotherm shown in Fig. 1.15.
Figure 1.17: Maxwell construction for coexistence isotherm of Van der Waals gas.
Figure 1.18: Superheated liquid occurs when the system deviates off the coexistence line onto the
metastable branch.
Chapter 2
Ensemble Theory I
A microscopic state of a classical system of N particles is specified by the 3N coordinates q = (q1 , . . . , q3N )
and the 3N momenta p = (p1 , . . . , p3N ). The system is not frozen in this state; particles move and
interact and thus the q's and p's change with time. The Hamiltonian may be used to compute the dynamics.
At any instant in time, the values (q, p) may be viewed as a point in 6N dimensional space (hard
to draw). In time, this point moves around in this phase space, the trajectory completely describes the
evolution of the system (see Fig. 2.1).
It is neither possible nor desirable to compute this trajectory for real systems (in which N ≈ 10^23 ).
The trajectory cannot wander arbitrarily through phase space. For example, if our system is isolated
(so U and V are fixed) then only points that satisfy these constraints can be reached.
In Fig. 2.2 suppose that only points within the shaded region are permitted by the constraints on
the system.
Figure 2.2: Point in phase space constrained to remain within restricted region.
An isolated system at equilibrium is equally likely to be found in any accessible state, that
is, all points in the allowed region of phase space are equally probable. This is called the
ergodic hypothesis.
This is a big assumption. It is very difficult to prove that a system will even visit every allowed
point much less prove that all are equally probable. In the absence of a mathematical proof, we use
the postulate of equal probabilities as a working hypothesis (that happens to be well supported by
experiments and numerical simulations).
One technical problem: a continuous space has an infinity of points, so "total number of states" is an
ill-defined concept.
Two equally valid solutions to this technical difficulty:
1) Continuous Probability Distribution
Define P(q, p) dqdp to be the probability that the system is in a state within the infinitesimal volume
between (q, p) and (q + dq, p + dp) (see the Fig. 2.3).
Our equal probability hypothesis gives:
P(q, p) dqdp = dqdp / ∫_A dqdp
where the integral is over the allowed region in phase space (given the constraints that energy, U , and
volume, V , are fixed).
Since all points in the allowed region are equally probable, the probability of the system being in a
given sector of phase space is just the fractional volume of phase space occupied by that sector.
2) Coarse Graining
Suppose we partition the real estate in phase space into finite lots of volume h^3N , as shown in
Fig. 2.4.
The function P(q, p) is the probability that the system is in lot (q, p). The equal probability hy-
pothesis now becomes:
1
P(p, q) =
Number of lots inside A
1
=
Γ
While this coarse graining of phase space into parcels might seem ad hoc, it is a useful construction.
Specifically, it is closer to what we find in quantum mechanics—there is a discrete set of allowed states.
We define the entropy in terms of these probabilities as
S = −k Σ P(q, p) ln P(q, p)
where the sum runs over the Γ lots in phase space.
Probability theory tells us this expression for S gives the uncertainty (or disorder) in a system. You
can prove that it possesses all the desired features for entropy, for example, it is maximum when all
states are equally probable thus S is maximum at equilibrium.
Example: Suppose that a system has only 2 states, a and b. Show that the entropy is maximum
when Pa = Pb = 1/2.
Solution: Our definition for entropy is:
S = −k Σ P ln P = −k(Pa ln Pa + Pb ln Pb )
Since the probabilities sum to one, Pb = 1 − Pa , so
S = −k(Pa ln Pa + (1 − Pa ) ln(1 − Pa ))
Setting dS/dPa = −k [ln Pa − ln(1 − Pa )] = 0 gives Pa = 1 − Pa , so the entropy is maximum when
Pa = Pb = 1/2.
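As a numerical illustration (my addition, not from the notes), this short Python sketch evaluates S(Pa) over a grid and confirms the maximum sits at Pa = 1/2 with value k ln 2:

```python
import numpy as np

k = 1.0  # work in units where Boltzmann's constant k = 1
Pa = np.linspace(1e-6, 1 - 1e-6, 100001)
S = -k * (Pa * np.log(Pa) + (1 - Pa) * np.log(1 - Pa))

print(Pa[np.argmax(S)])    # ~0.5, the equal-probability point
print(S.max(), np.log(2))  # both ~0.6931 = k ln 2
```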
For an isolated system, the equal probability hypothesis gives the probability P(q, p) = 1/Γ so:
S = −k Σ P(q, p) ln P(q, p)
= −k Σ_{i=1}^{Γ} (1/Γ) ln(1/Γ)
= +k (1/Γ) ln(Γ) Σ_{i=1}^{Γ} 1
= k (1/Γ) ln(Γ) (Γ)
= k ln Γ
Aside from the constant k, the entropy is the logarithm of the number of accessible states.
By the way, remember that the only reason that Boltzmann’s constant appears in the definition of
entropy is that historically it was decided to define the Kelvin scale based on the Celsius scale instead
of defining temperature in Joules. Since T S has the units of energy this alternative makes entropy
dimensionless. But then Boltzmann has the equation above for entropy carved on his tombstone so
maybe we’ll keep the traditional form.†
† The equation is written as S = k log W on Boltzmann’s tombstone.
The microcanonical recipe is: determine the number of accessible states, Γ(U, V ), given the constraints;
compute the entropy, S = k ln Γ; obtain the temperature and pressure from 1/T = (∂S/∂U )V and
P/T = (∂S/∂V )U . Finally, use conventional thermodynamics to find any other desired quantities (e.g.,
heat capacity, compressibility).
Notice that temperature now has a mechanical definition since
1/T = (∂S/∂U )V = (∂/∂U )V (k ln Γ(U, V ))
so
1/T = (k/Γ)(∂Γ/∂U )V
or
T = (1/k) Γ(U, V )/(∂Γ/∂U )V
The temperature is inversely proportional to the rate at which the number of states increases as internal
energy increases. Of course it is difficult to find a "number of allowed states" meter in the lab. It is simpler to
calibrate thermometers that use properties, such as electrical conductivity, that vary with temperature
and are easy to measure mechanically.
For most physical systems, the number of accessible states increases as energy increases so (∂Γ/∂U ) >
0. There are exceptional systems for which the energy that can be added to the system has some max-
imum saturation value. For these systems, one can have (∂Γ/∂U ) < 0 giving negative temperatures.
Note that negative temperatures occur not when a system has very little energy but rather when it has
so much energy that it is running out of available states. You’ll do an example of such a system as a
homework problem.
Classical Ideal Gas: Consider N particles of a monatomic ideal gas in a box of volume V = Lx Ly Lz .
The energy is entirely kinetic,
E = (m/2) Σ_{i=1}^{N} (v_xi² + v_yi² + v_zi²) = (1/2m) Σ_{j=1}^{3N} p_j²
For just one particle (with position q1 , q2 , and q3 in the x,y, and z directions) we can sketch the allowed
region of coordinate phase space (see Fig. 2.5).
The particle can be anywhere inside this rectangle of volume Lx Ly Lz = V .
For just one particle (with momenta p1 , p2 , and p3 ) we can sketch the accessible area in momentum
phase space (see Fig. 2.6).
Let's count states by saying all points in phase space for which E = U are allowed states. These
allowed states are all points on the surface of the sphere in momentum space with radius R = √(2mU ).
The surface area of a 3D sphere is 4πR².
If we have N particles instead of just one, the box in coordinate phase space is a 3N dimensional box
of volume V^N and the sphere in momentum phase space is a 3N dimensional sphere of radius
R = √(2mU ); the number of states is Γ(U, V ) = BN V^N (2mU )^{3N/2} where BN collects factors that
depend only on N .
We now follow the steps in our recipe; first get the entropy,
S = k ln Γ(U, V )
= kN ln(V (2mU )^{3/2}) + k ln BN
Combinatorics
Counting can be tricky; here's a quick review. The number of ways of making M selections from a
population of N individuals is:
Distinguishable selections: N PM = N !/(N − M )! (Permutations)
Indistinguishable selections: N CM = N !/((N − M )! M !) = C(N, M ) (Combinations)
Example: Consider a class with N = 10 persons (labelled A, B, ..., J). In a beauty contest, find
the number of ways to award 1st, 2nd, and 3rd prize.
Solution: Since the selections are distinguishable (1st prize is different from 2nd prize) making the
list of possible selections we have,
ABC
ACB
ABD
..
.
The number of permutations is
10!/(10 − 3)! = 10!/7! = 10 · 9 · 8 = 720
Example (cont.): If in a class of 10 students three students flunk the class, find the number of ways
of selecting them.
Solution: In this case the selections are indistinguishable so the list of possible selections is
ABC
ABD
..
.
The number of combinations is 10!/(7! 3!) = 120.
Finally, the number of ways of putting M indistinguishable objects into N distinguishable boxes is
C(N + M − 1, M ) = C(N + M − 1, N − 1) = (N + M − 1)!/((N − 1)! M !)
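These three counting formulas are easy to check with Python's standard library (a small sketch I've added; math.perm and math.comb require Python 3.8+). The last line also counts the M = 3, N = 4 oscillator states used in the example that follows.

```python
import math

# Distinguishable selections (permutations): 3 ranked prizes among 10 students
print(math.perm(10, 3))  # 720

# Indistinguishable selections (combinations): choose 3 of 10 to flunk
print(math.comb(10, 3))  # 120

# M indistinguishable objects in N distinguishable boxes: C(N+M-1, M)
N, M = 4, 3
print(math.comb(N + M - 1, M))  # 20
```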
Example: Consider a system of N distinguishable quantum harmonic oscillators; oscillator i has energy
ϵi (ni ) = h̄ω(ni + 1/2) with quantum number ni = 0, 1, 2, . . .. Define M = n1 + n2 + . . . + nN ,
the total number of quanta of energy in the system. Figure 2.7 shows some states for M = 3, N = 4.
The total energy in the system is
U = Σ_{i=1}^{N} ϵi (ni ) = h̄ω Σ_{i=1}^{N} (ni + 1/2) = h̄ω (M + N/2)
Figure 2.7: Two possible quantum harmonic oscillator system states for M = 3, N = 4.
Now we need to count the number of possible states given M and N . This is equivalent to the
number of ways that M indistinguishable objects may be placed into N distinguishable boxes, the
number of states is
Γ(U ) = C(N + M − 1, M ) = (M + N − 1)!/((N − 1)! M !)
where M = U/h̄ω − N/2.
The entropy in the system is
S(U ) = k ln Γ ≈ k [(M + N ) ln (M + N ) − M ln M − N ln N ]
where we’ve used the approximation ln(x!) ≈ x ln(x) − x for x ≫ 1. We can now proceed to compute
other thermodynamic quantities. For example,
1/T = dS/dU = (dS/dM )(dM/dU ) = (1/h̄ω)(k ln(M + N ) − k ln M )
which gives us the following relation between M and T ,
M = N/(e^{h̄ω/kT} − 1)
so
U = h̄ωN/(e^{h̄ω/kT} − 1) + (1/2) h̄ωN
Notice that M → 0, U → (1/2)h̄ωN as T → 0 and M → ∞, U → ∞ as T → ∞. This is the Einstein model
for a solid. Though it gives the right qualitative behavior and is better than a classical description of
a solid this model is superseded by the Debye model.
Example: Find the heat capacity for a system of N quantum mechanical harmonic oscillators.
Solution: Since the system cannot do mechanical work, CP = CV = C. In other words, the internal
energy is only a function of temperature so,
C = dU/dT = (dU/dM )(dM/dT )
= (h̄ω) [−N/(e^{h̄ω/kT} − 1)²] [e^{h̄ω/kT}] [−h̄ω/kT ²]
= kN (h̄ω/kT )² e^{h̄ω/kT}/(e^{h̄ω/kT} − 1)²
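Here is a brief Python sketch (my addition) that evaluates this heat capacity as a function of temperature, in units where k = 1 and θ ≡ h̄ω/k = 1; it shows the limits C → 0 as T → 0 and C → N k at high temperature.

```python
import numpy as np

def einstein_heat_capacity(T, N=1.0, theta=1.0):
    # C = N k (theta/T)^2 e^(theta/T) / (e^(theta/T) - 1)^2, with k = 1
    x = theta / np.asarray(T, dtype=float)
    return N * x**2 * np.exp(x) / np.expm1(x)**2

for T in [0.1, 0.5, 1.0, 5.0, 50.0]:
    print(T, einstein_heat_capacity(T))
# C vanishes exponentially as T -> 0 and approaches N k = 1 as T -> infinity
```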
Quantum Ideal Gas: Consider a single particle of mass m in a cubical box of side L. From quantum
mechanics, its energy levels are
ϵ(nx , ny , nz ) = (h²/8mL²)(nx² + ny² + nz²)
where nx , ny , nz = 1, 2, . . . are the quantum numbers.
Now suppose we have N identical, but distinguishable, particles; particle i has energy ϵi (nx,(i) , ny,(i) , nz,(i) ).
The particles do not interact (i.e., ideal gas) so the total energy is
E = Σ_{i=1}^{N} ϵi = (h²/8mL²) Σ_{i=1}^{N} [n_{x,(i)}² + n_{y,(i)}² + n_{z,(i)}²]
or
E = (h²/8mL²) Σ_{r=1}^{3N} n_{(r)}²
where the index r runs over all 3N quantum numbers.
Now we have to count the number of possible combinations of quantum numbers that give an energy
E = U . An easier job is to count the number of combinations that have a total energy E ≤ U ; we’ll
call Σ the total number of states such that E ≤ U .
Our constraint may be written as
8mL²U/h² ≥ Σ_{r=1}^{3N} n_{(r)}²
Figure 2.8 illustrates this constraint for just two quantum numbers. The number of combinations for
n’s that satisfies the constraint is approximately the number of unit squares inside the circular wedge
with radius R. But this is nothing more than the volume of the wedge. Notice that we have a wedge
instead of a sphere since n(r) > 0 so we only consider the wedge in the first quadrant.
The volume of a 3N -dimensional sphere of radius R is
V_{3N}(R) = π^{3N/2} R^{3N}/(3N/2)!
Following the recipe (Σ ∝ V^N U^{3N/2}, so S = k ln Σ = kN ln V + (3N/2) k ln U + constants) gives
1/T = (∂S/∂U )V = (3/2)kN/U , so U = (3/2) kN T , and
P/T = (∂S/∂V )U = kN/V
so P V = N kT .
You can compare this result with our previous expression for S(U, V ) obtained from classical mechanics;
aside from a few unimportant constants, they are the same. From the above expression, you can derive
the ideal gas law, P V = N kT , and the energy-temperature relation U = (3/2)N kT as we did in the
classical case.
Finally, there is an interesting result that connects Σ, the number of states with energy E ≤ U , and
Γ, the number of states with energy E = U . For a d-dimensional sphere of radius R, the fraction of the
volume which lies within the outer shell of thickness ∆R is
1 − (1 − ∆R/R)^d
If d is very large then virtually all points in the sphere are infinitesimally close to the surface. For
example, if d = 1000, then 90% of the volume is inside the outer shell of thickness ∆R ≈ R/400.
Imagine if d ≈ 10^23 , how dramatic this effect becomes. For this reason, as N → ∞, the two ways to
count states become equivalent since Σ ≈ Γ. In most physical systems, the number of states in phase
space increases very rapidly with energy (e.g., as U^N ).
Figure 2.9: Two alternative expansions of a gas in an insulated container leading to the same final state.
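The shell-volume fraction 1 − (1 − ∆R/R)^d above is trivial to evaluate (a quick sketch I've added); it makes the d-dependence vivid:

```python
# Fraction of a d-dimensional sphere's volume within an outer shell
# of thickness dR: fraction = 1 - (1 - dR/R)^d
ratio = 1 / 400  # dR/R, as in the d = 1000 example above
for d in [3, 1000, 10**6]:
    print(d, 1 - (1 - ratio)**d)
# d = 3: ~0.7%; d = 1000: ~92%; d = 10^6: essentially all of the volume
```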
Now consider the change in entropy when a gas expands from an initial volume Vi to a final volume Vf ,
holding U (and thus T ) fixed. There are two ways to do this, as shown in Fig. 2.9.
For the reversible process, we can compute the change in entropy from the heat added as
Sf − Si = ∫_i^f dQ/T = (1/T ) ∫_i^f dQ = ∆Qif /T
Figure 2.10: Mixing of two different gases (e.g., helium and neon).
The internal energy U is constant (since T is constant) so ∆Qif = ∆Wif . Back in the first lecture we
worked out that for the isothermal expansion of a gas
∆Wif = N kT ln(Vf /Vi )
so thermodynamics predicts
Sf − Si = kN ln(Vf /Vi )
Using our statistical mechanics formula for entropy, we have
S(U, Vf , N ) − S(U, Vi , N ) = (kN ln Vf + (3/2) kN ln u + N s0 ) − (kN ln Vi + (3/2) kN ln u + N s0 )
= kN ln(Vf /Vi )
If particles are indistinguishable, the number of states in phase space, Γ or Σ, decreases. The
reason is that exchanging particle i and j quantum numbers is not counted as a separate state. For N
indistinguishable particles, the number of states, Σ, is reduced by a factor of 1/N ! so
Σnew = (1/N !) Σold
and
Snew = Sold − k ln N ! ≈ kN ln(V /N ) + (3/2) kN ln u + N C
where C is a constant. You can see that S(U, V, N )new is an extensive function so this resolves Gibbs'
paradox. This corrected expression for the entropy of a monatomic ideal gas is called the Sackur-Tetrode
equation. Our previous results (before the 1/N ! correction) are unaffected because the term added to
the entropy is independent of U and V .
If instead of one type of particle we have M distinguishable species with particle numbers N1 , . . . , NM
then the Gibbs correction is
Σnew = Σold /(N1 ! · · · NM !)
so
Snew = Sold − k ln N1 ! − . . . − k ln NM !
which you can check makes S(U, V, N1 , . . . , NM )new an extensive function.
Finally, note that the combinatorial argument which gave us the 1/N ! factor assumes only one
particle per quantum state. This is a good assumption except at low temperatures where particle
occupancy of states has to be treated more carefully. We return to do the low temperature scenario
when we consider Fermi and Bose quantum ideal gases.
Chapter 3
Ensemble Theory II
The entropy may be written as
S = −k Σi Pi ln Pi
where the sum is over all allowed states, and Pi is the probability of each state.
In the microcanonical ensemble, only states that had energy Ei = U were allowed and all allowed
states were equally probable, thus
Pi = 1/Γ (microcanonical ensemble)
where Γ was the total number of allowed states. We found that sometimes it was more convenient to
find Σ, the number of states with energy Ei ≤ U . We also saw that Γ ≈ Σ, which comes from the fact
that nearly all of the states with energy Ei ≤ U have energy close to U (i.e., the number of states
increases astronomically fast with increasing energy).
In the canonical ensemble, we let all states be allowed states but demand that the average energy
equal U . The average energy is
⟨E⟩ = Σi Ei Pi
We now allow states with energy Ei > U . However, our constraint on ⟨E⟩ will assign them low
probability.
We want to determine Pi such that S is maximum. There are two constraints on Pi ,
Σi Pi = 1 ; Σi Ei Pi = U
The first is the demand that probabilities must sum to unity; the second is our condition that ⟨E⟩ = U .
To maximize S with these constraints, we use the method of Lagrange multipliers. Introduce the
function
W = S − α′ Σi Pi − β′ Σi Ei Pi
where α′ and β′ are Lagrange multipliers.
Setting ∂W/∂Pi = 0 for each Pi gives Pi = e^{−α} e^{−βEi}, where α and β are rescaled versions of the
Lagrange multipliers. The normalization condition then requires
e^{−α} Σi e^{−βEi} = 1
Defining the canonical partition function QN ≡ Σi e^{−βEi}, this reads e^{−α} QN = 1 or
α = ln QN
This gives us α in terms of β.
The probabilities are thus
Pi = e^{−βEi} / Σi e^{−βEi} = e^{−βEi}/QN
Finally,
S = kβU + k ln QN
Our link to thermodynamics is still shaky since we haven't nailed down β in terms of something
familiar. Let's hammer using
(∂S/∂U )V = 1/T
then
1/T = (∂/∂U )V [kβU + k ln QN ] = kβ
so
β = 1/kT ; T = 1/kβ
which is the link we wanted to find.
Our entropy is
S = U/T + k ln QN
or
kT ln QN = T S − U = −A
so
QN = e^{−A/kT}
where A is the Helmholtz free energy.
We have all the results we need. Here is the recipe for using the canonical ensemble:
1. Determine all the possible states that a system can be in given the constraints that volume, V ,
and particle number, N , are fixed.
2. Determine the energy, Ei , of each state for the system.
3. Compute the partition function by evaluating the sum
QN = Σi e^{−βEi}
where β = 1/kT .
4. Given QN compute other quantities of interest, for example,
U = −(∂/∂β) ln QN
S = U/T + k ln QN
A = −kT ln QN
and
P = −(∂A/∂V )T = kT (∂/∂V )T ln QN
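To make the recipe concrete, here is a sketch (my addition, using a hypothetical toy system that is not in the notes) applying steps 1−4 to N distinguishable two-level particles with levels 0 and ϵ, for which QN = (1 + e^{−βϵ})^N:

```python
import sympy as sp

beta, eps, N, k = sp.symbols('beta epsilon N k', positive=True)

# Steps 1-3: each particle has levels 0 and epsilon, and the particles are
# independent and distinguishable, so Q_N = (Q_1)^N = (1 + e^{-beta eps})^N
lnQN = N * sp.log(1 + sp.exp(-beta * eps))

# Step 4: U = -d(ln Q_N)/d(beta) and S = k*beta*U + k*ln(Q_N)
U = sp.simplify(-sp.diff(lnQN, beta))
S = sp.simplify(k * beta * U + k * lnQN)
print(U)                         # N*epsilon/(exp(beta*epsilon) + 1), or equivalent
print(sp.limit(U, beta, sp.oo))  # 0: everything in the ground state at T = 0
```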
Before going to some more examples, I should discuss the physical meaning of the canonical ensemble.
We already saw that the microcanonical ensemble represents an isolated system (fixed U, V, and N )
such as illustrated in Fig. 3.1. The canonical ensemble is similar except that the system’s energy is not
strictly fixed to be U , rather on average the energy is U . This turns out to be equivalent to a system
at fixed temperature T (see Fig. 3.2).
Important Point: The thermodynamic properties derived from the two ensembles are identical. The
only reason we use two ensembles is for computational convenience; for different problems one is easier
to work with than the other. However, all thermodynamic quantities (entropy, equation of state, heat
capacities, etc.) are identical (except for small differences that go to zero as N → ∞).
Example: Consider again the system of N distinguishable quantum harmonic oscillators; oscillator j
has energy ϵj = h̄ω(nj + 1/2), where nj = 0, 1, . . . is the quantum number specifying the energy level
of the oscillator.
The total energy of the system is
E = ϵ1 + ϵ2 + . . . + ϵN
The canonical partition function is
QN = Σ_{states} e^{−βE}
= Σ_{n1=0}^{∞} Σ_{n2=0}^{∞} · · · Σ_{nN=0}^{∞} e^{−β(ϵ1 + ϵ2 + . . . + ϵN )}
= (Σ_{n1=0}^{∞} e^{−βϵ1}) (Σ_{n2=0}^{∞} e^{−βϵ2}) · · · (Σ_{nN=0}^{∞} e^{−βϵN})
= (Q1 )^N
Before doing this simple sum, two points: 1) Since the individual oscillators may be treated as
independent elements and the energy written as the sum of the individual energies, QN = (Q1 )N . This
is a common (and nice) feature; 2) In this problem the oscillators are taken to be distinguishable. For
indistinguishable elements, we must include a Gibbs correction to the sum over states, in which case
QN = (1/N !)(Q1 )^N .
To finish the example, we use the identity
Σ_{n=0}^{∞} a^n = 1/(1 − a), |a| < 1
so
Q1 = e^{−βh̄ω/2}/(1 − e^{−βh̄ω}) = 1/(2 sinh(βh̄ω/2))
Using
U = −(∂/∂β) ln QN = −(∂/∂β)(N ln Q1 )
then
U = (1/2) N h̄ω coth(βh̄ω/2)
You can check that this result matches our microcanonical result for U (recall β = 1/kT ).
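As a consistency check (my addition), sympy confirms that the canonical result equals the microcanonical expression U = h̄ωN/(e^{h̄ω/kT} − 1) + h̄ωN/2 from Chapter 2:

```python
import sympy as sp

beta, hw, N = sp.symbols('beta hw N', positive=True)  # hw stands for hbar*omega

U_canonical = N * (hw / 2) * sp.coth(beta * hw / 2)
U_micro = N * hw / (sp.exp(beta * hw) - 1) + N * hw / 2

# The difference simplifies to zero: the two ensembles agree
print(sp.simplify((U_canonical - U_micro).rewrite(sp.exp)))  # 0
```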
Example: Next consider the monatomic ideal gas of N indistinguishable particles in a cubical box of
side L, treated in the canonical ensemble. The single particle energy levels are
ϵ(nx , ny , nz ) = (h²/8mL²)(nx² + ny² + nz²)
Since we have an ideal gas, the interactions between the particles are negligible. The partition function
is thus
QN = (1/N !) Q1^N
where Q1 is the partition function for a single particle. Notice we have a factor of 1/N ! out in front
because the particles are indistinguishable.
The single particle partition function is
Q1 = Σ_{nx=1}^{∞} Σ_{ny=1}^{∞} Σ_{nz=1}^{∞} e^{−βϵ(nx ,ny ,nz )}
or
Q1 = (Σ_{nx=1}^{∞} exp(−πλ²nx²/4L²)) (same for ny ) (same for nz )
where
λ = h/√(2πmkT )
is the thermal wavelength (the de Broglie wavelength of a particle with energy ≈ kT ).
In the classical limit the de Broglie wavelength of the particles is much smaller than the distance
between them (λ ≪ (V /N )^{1/3} = L/N^{1/3}), so we may replace the sums with integrals.
Σ_{nx=1}^{∞} exp(−πλ²nx²/4L²) ≈ ∫_0^∞ dx exp(−πλ²x²/4L²) = L/λ
so
Q1 ≈ (L/λ)(L/λ)(L/λ) = V /λ³
The N particle partition function is
QN = (1/N !) Q1^N = (1/N !) (V /λ³)^N
The pressure is
P = kT (∂/∂V )T ln QN = kT (∂/∂V )T ln V^N = N kT /V
confirming all of our previous results.
Equipartition Theorem: Suppose that either: a1) the total energy splits additively as
E(q, p) = ϵi (pi ) + E′(q, p′)
where the prime indicates that pi is missing, and a2) the function ϵi is quadratic in pi , so
ϵi (pi ) = b pi²
where b is a constant.
OR
b1) The total energy splits additively as
E(q, p) = ϵi (qi ) + E′(q′, p)
where the prime indicates that qi is missing, and b2) the function ϵi is quadratic in qi , as
ϵi (qi ) = b qi²
where b is a constant.
Case a) is common since often the energy of a system is of the form E = Σj pj²/2m + U (q1 , . . .), that
is, quadratic in each momentum. Under these conditions the equipartition theorem states that
⟨ϵi ⟩ = (1/2)kT . To show this, note that the other factors in the canonical average cancel, leaving
⟨ϵi ⟩ = −(∂/∂β) ln (∫ e^{−βϵi (pi )} dpi )
since (1/f (x)) (d/dx) f (x) = (d/dx) ln f (x).
Using a2),
⟨ϵi ⟩ = −(∂/∂β) ln (∫ e^{−βb pi²} dpi )
= −(∂/∂β) ln (β^{−1/2} ∫ e^{−by²} dy) (use y ≡ β^{1/2} pi )
= −(∂/∂β) [−(1/2) ln β + ln ∫ e^{−by²} dy] (no β dependence in 2nd term)
= (1/2)(1/β) = (1/2) kT
Thus the average value of energy for component i is (1/2)kT . The derivation for case b) is identical except
we interchange p ⇐⇒ q.
Whenever the energy of a particle has a component that goes as p² or q² we call this a degree of
freedom. From the equipartition theorem, each degree of freedom has an average energy of (1/2)kT . A
monatomic gas atom has three degrees of freedom (d.o.f.) so the energy in one mole of the gas is
U = (3/2)NA kT = (3/2)RT . At low T , a diatomic gas has five degrees of freedom (three translational d.o.f.
for the center of mass motion plus two rotational kinetic energy d.o.f.) in the absence of vibrational
states. At high temperatures, the molecule may be approximated as a pair of masses coupled by a
spring giving seven degrees of freedom (three translational d.o.f. for each atom in the molecule plus
one potential d.o.f. from the spring). Thus a diatomic gas has a molar specific heat of cV = (5/2)R at low
temperatures and cV = (7/2)R at high temperatures.
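Equipartition is also easy to verify by sampling (my addition): for ϵ = p²/2m the canonical distribution ∝ e^{−βp²/2m} is a Gaussian with variance mkT, so the sampled average of p²/2m should be kT/2.

```python
import numpy as np

rng = np.random.default_rng(0)
kT, m = 1.0, 1.0  # illustrative values, in units where k = 1

# Sample momenta from the canonical distribution exp(-p^2/2mkT),
# a Gaussian with standard deviation sqrt(m kT)
p = rng.normal(0.0, np.sqrt(m * kT), size=1_000_000)
print(np.mean(p**2 / (2 * m)))  # ~0.5 = (1/2) kT per quadratic d.o.f.
```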
Chemical Potential
Lecture 11
So far we have implicitly assumed that the number of particles, N , in a system was fixed. We now
consider the more general case in which N is allowed to vary.
As we have seen, the entropy, S(U, V, N ), and internal energy, U (S, V, N ), are functions of N .
Specifically we have
dU = (∂U/∂S)V,N dS + (∂U/∂V )S,N dV + (∂U/∂N )S,V dN
or
dU = T dS − P dV + µdN
where the chemical potential is defined as
µ = (∂U/∂N )S,V
that is, µ is the change in the internal energy with N , given that S and V are fixed. Notice that µ has
the dimensions of energy per particle; in fact it is the Gibbs free energy per particle.
An alternative expression for chemical potential is
µ = −T (∂S/∂N )U,V
Notice that even an inert monatomic gas (such as neon) has a chemical potential, i.e., µ has no more
to do with chemistry than, say, temperature.
There are some additional Maxwell relations involving chemical potential, for example,
(∂µ/∂V )T,N = −(∂P/∂N )T,V
We can check that the chemical potential of a monatomic ideal gas satisfies this relation since we may
write it as,
µ = −kT ln V + f (T, N )
using the fact that for a monatomic ideal gas U = (3/2)N kT . Thus we have for the l.h.s. of the Maxwell
relation,
(∂µ/∂V )T,N = (∂/∂V )T,N (−kT ln V + f (T, N )) = −kT /V
and for the r.h.s.,
−(∂P/∂N )T,V = −(∂/∂N )T,V (N kT /V ) = −kT /V
The other Maxwell relations are easily found on-line.
Notice that the chemical potential for a monatomic ideal gas is negative since increasing the number
of particles will increase the number of accessible states (and thus increase S) unless the energy U
simultaneously decreases. From this observation and the definition
µ = (∂U/∂N )S,V
we see that the chemical potential is the amount of energy that must be removed when one particle is
added in order to keep S fixed.
Example: Consider a system of distinguishable particles with energy levels e = 0, ϵ, 2ϵ, . . .. For a
system with N = 2 particles and energy U = 2ϵ, find the entropy and the chemical potential.
Solution: For N = 2 and U = 2ϵ you can count that there are Γ = 3 accessible states, specifically,
(2ϵ, 0) (0, 2ϵ) (ϵ, ϵ)
So the entropy is S = k ln Γ = k ln 3. If we add one particle keeping the energy fixed then the number
of accessible states increases to 6, specifically,
(2ϵ, 0, 0) (0, 2ϵ, 0) (0, 0, 2ϵ)
(ϵ, ϵ, 0) (ϵ, 0, ϵ) (0, ϵ, ϵ)
To find the chemical potential, we need to find how much energy we must remove to bring the entropy
back down to its original value. You can check that with N = 3 and U = ϵ the number of accessible
states returns to Γ = 3, specifically,
(ϵ, 0, 0) (0, ϵ, 0) (0, 0, ϵ)
Thus
µ = (∆U/∆N )S,V = −ϵ/1 = −ϵ
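This state counting is easy to automate (a sketch I've added); brute-force enumeration over single-particle levels 0, ϵ, 2ϵ reproduces the three values of Γ used above.

```python
from itertools import product

def count_states(N, U, levels):
    # Count configurations of N distinguishable particles, each in one
    # of the given levels, whose energies sum to U
    return sum(1 for c in product(levels, repeat=N) if sum(c) == U)

eps = 1  # measure energy in units of epsilon
levels = (0, eps, 2 * eps)
print(count_states(2, 2 * eps, levels))  # 3: Gamma for N = 2, U = 2 epsilon
print(count_states(3, 2 * eps, levels))  # 6: after adding one particle
print(count_states(3, eps, levels))      # 3: Gamma returns to 3 when U = epsilon
```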
Figure 3.4: Some possible states for the grand canonical ensemble.
Maximizing the entropy subject to the three constraints Σ Pi = 1, Σ Ei Pi = U , and Σ Ni Pi = N gives
the probabilities
Pi = e^{−βEi + βµNi} / Σi e^{−βEi + βµNi}
Since the constraint on N is similar to the constraint on E it should not be surprising that their terms
in the above expression are similar. The denominator in the above expression for Pi is the grand
canonical partition function, Q. It is useful to write it as
Q = Σi e^{−βEi + βµNi}
= Σ_{N=0}^{∞} e^{βµN} Σ_j e^{−βEj} (sum j is over states with Ni = N )
= Σ_{N=0}^{∞} e^{βµN} QN
and
P V /kT = ln Q
Example: Given the canonical partition function for a highly relativistic ideal gas of N indistin-
guishable particles to be,
QN = (1/N !) [8πV (hcβ)^{−3}]^N
Find the grand canonical partition function, Q, and use it to obtain the equation of state, P (T, V, N ).
Solution: Right off the bat I should mention that since we have QN we could find all the ther-
modynamic quantities directly using it; this exercise is simply to demonstrate the manipulation of
Q.
The grand canonical partition function may be written in terms of QN as,
Q = Σ_{N=0}^{∞} z^N QN = Σ_{N=0}^{∞} (1/N !) [8πV (hcβ)^{−3} z]^N
Using
e^x = Σ_{N=0}^{∞} x^N /N !
gives
Q = exp(8πV (hcβ)^{−3} z)
so
P V /kT = 8πV (hcβ)^{−3} z
To replace z with N in this expression we evaluate,
N = Σi Ni Pi = z (∂/∂z)β ln Q = 8πV (hcβ)^{−3} z
Comparing the last two results gives P V /kT = N , that is, the highly relativistic ideal gas also obeys
the ideal gas law P V = N kT .
Chapter 4
Sample Exercises for First Midterm
Two identities that are often useful in these exercises: (∂S/∂T )V = CV /T and (∂S/∂V )T = (∂P/∂T )V .
Problem: Consider the classical ideal gas (with N indistinguishable particles in a volume V ) in the
highly relativistic limit where the energy of a particle may be approximated as, ϵ = c(|px | + |py | + |pz |),
where pi are the components of the momentum of the particle and c is the speed of light.
(a) Using the classical micro-canonical ensemble (fixed U , V , N ), find the number of states Σ that
have energy E ≤ U . Note that the volume of a simplex in d dimensions (a pyramid whose edges
are the lines connecting the origin and the points x1 = L, x2 = L, . . ., xd = L) is L^d /d!. (see
http://en.wikipedia.org/wiki/Simplex)
(b) From the result in part (a), find U (V, T ). Note: If you could not do part (a) then take Σ = A V^B U^C ,
where A, B, and C are constants.
(c) From the result in part (a), find the equation of state.
(d) Find γ = CP /CV , the ratio of the heat capacities.
Solution: (a) Each position integral is independent and gives a factor of V . The momentum integrals
equal the volume of a simplex with sides L = U/c and dimension d = 3N multiplied by the number of
quadrants in d dimensions, which is 2^d . Thus
Σ = (1/(N ! h^{3N})) V^N 2^{3N} U^{3N}/((3N )! c^{3N}) = V^N U^{3N}/(N ! (3N )! (hc/2)^{3N})
(b) The entropy is
S = k ln Σ = kN ln V + 3kN ln U + f (N )
where f = −k ln(N ! (3N )! (hc/2)^{3N}). To obtain the energy in terms of temperature we may use
1/T = (∂S/∂U )V = 3kN/U
so U = 3kN T .
(c) The equation of state may be found using
P/T = (∂S/∂V )U = kN/V
which gives the ideal gas law.
(d) The heat capacity at constant volume is
CV = (∂U/∂T )V = 3N k
In general
CP = CV + T V α²/KT
where α is the coefficient of thermal expansion and KT is the isothermal compressibility. Since our
equation of state is the ideal gas law, α = 1/T and KT = 1/P , which gives CP = CV + N k = 4N k,
thus γ = CP /CV = 4/3. Note that for the non-relativistic ideal gas γ = 5/3.
Problem: Consider the following simple model for rubber. A chain consisting of N independent,
distinguishable links, each of length a, is attached to a weight of mass M . Each link points either
up or down; the individual links have negligible mass and kinetic energy. The energy for any given
configuration may be written as E(L) = M g[N a − L] where the length L = a(n1 + n2 + . . . + nN ), with
ni = −1 and +1 for links pointing up and down. Find the canonical partition function, QN , and use it
to find the average length ⟨L⟩ = L(E = U ) as a function of temperature. Hint: Unlike most materials,
rubber expands when cooled; you can check that your answer confirms this observation.
Solution: The canonical partition function is
QN = Σ_{n1=±1} · · · Σ_{nN=±1} e^{−βE(L)}
= e^{−βM gN a} (Σ_{n=±1} e^{βM gan})^N
= e^{−βM gN a} (e^{−βM ga} + e^{βM ga})^N
Using
U = −(∂/∂β) ln QN
= (∂/∂β) [βM gN a − N ln(e^{−βM ga} + e^{βM ga})]
= M gN a − M gN a (e^{βM ga} − e^{−βM ga})/(e^{βM ga} + e^{−βM ga})
Since L(E) = N a − E/M g,
⟨L⟩ = N a − N a + N a (e^{βM ga} − e^{−βM ga})/(e^{βM ga} + e^{−βM ga})
= N a (e^{βM ga} − e^{−βM ga})/(e^{βM ga} + e^{−βM ga})
= N a tanh(βM ga) = N a tanh(M ga/kT )
Since tanh(x) increases as x increases the length increases as temperature decreases. A state of zero
entropy occurs when all the links point in the downward direction, which gives the maximum length.
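A quick table of ⟨L⟩(T) makes the inverted thermal behavior explicit; here is a minimal Python sketch (my addition, with arbitrary illustrative parameters):

```python
import numpy as np

N, a, M, g, k = 100, 1.0, 1.0, 1.0, 1.0  # arbitrary illustrative values

def mean_length(T):
    # <L> = N a tanh(M g a / k T) from the solution above
    return N * a * np.tanh(M * g * a / (k * T))

for T in [0.2, 1.0, 5.0, 25.0]:
    print(T, mean_length(T))
# <L> decreases as T increases: the chain contracts when heated, i.e.,
# rubber expands when cooled
```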
Problem: A classical ideal gas of N indistinguishable particles is confined in a very tall box of dimensions
Lx × Ly × Lz in a uniform gravitational field, so the energy of a particle is E(r, p) = (px² + py² + pz²)/2m + mgz.
(a) Find the canonical partition function. (b) Find the internal energy and heat capacity. (c) Could the
result in (b) be predicted from the equipartition theorem?
Solution: (a) The gas is ideal so the canonical partition function may be formulated as
QN = (1/N !) (Q1 )^N
where the classical, single particle partition function is
Q1 = (1/h³) ∫_0^{Lx} dx ∫_0^{Ly} dy ∫_0^{Lz} dz ∫_{−∞}^{∞} dpx ∫_{−∞}^{∞} dpy ∫_{−∞}^{∞} dpz e^{−βE(r,p)}
The momentum integrals give a factor of 1/λ³ and the z integral is ∫_0^{Lz} dz e^{−βmgz} = (1 − e^{−βmgLz})/βmg.
Since the box is tall, βmgLz ≫ 1, so the exponential term is negligible and the integral is approximately
just 1/βmg = kT /mg ≡ LT . Collecting these pieces gives
QN = (1/N !) (Lx Ly LT /λ³)^N
(b) Using
U = −(∂/∂β) ln QN = −N (∂/∂β) ln LT + 3N (∂/∂β) ln λ = N kT + (3/2) N kT = (5/2) N kT
Finally, the heat capacity is
CV = (∂U/∂T )V = (5/2) N k
and cV = (5/2) k.
(c) The result in part (b) could not be predicted from the equipartition theorem, as presented in
the notes and in class, because the gravitational potential energy is not quadratic in position.
However, Pathria derives a more general form of the equipartition theorem (eqn. (3.7.4)) which
states that,
⟨z (∂E/∂z)⟩ = kT
In our case this gives,
⟨mgz⟩ = kT
which tells us that the average potential energy for a particle is kT . Since the average kinetic energy
for a particle is (3/2)kT we do arrive at the result obtained in part (b). Since either yes or no is a correct
answer, credit is given only when your answer is justified.
Lecture: Midterm
Chapter 5
Quantum Ideal Gases
Classical Limit
Lecture 12
We already used the canonical ensemble to study the monatomic ideal gas using quantum mechanics
(i.e., non-interacting particles in a box). However, we had to use the approximation,
λ ≪ (V /N )^{1/3} = L/N^{1/3}
Recall that
λ = h/√(2πmkT )
is the de Broglie wavelength of a particle at temperature T so the above inequality indicates that λ is
much smaller than the distance between particles.
This approximation puts us into the classical regime and gives us the classical result for the canonical
partition function
QN = (1/N !) (V /λ³)^N (Classical ideal gas)
The grand canonical partition function is
Q = Σ_{N=0}^{∞} z^N QN = Σ_{N=0}^{∞} (1/N !) (zV /λ³)^N = exp(zV /λ³)
We now return to the quantum ideal gas but without making the approximation that λ is small.
Usually when the distance between particles is comparable to their deBroglie wavelength we may no
longer neglect the interparticle forces (i.e., the gas is no longer ideal). However, there are some important
exceptions:
• Conduction electrons in a solid (λ is large because m is small).
• Photon gas (i.e., blackbody radiation).
• Liquid Helium (interatomic forces are weak).
• Ultracold systems (λ is large because T is extremely small; T ≈ 10^−7 K).
In these cases the system of particles can be approximated as a quantum ideal gas, that is, the
interparticle forces can be neglected yet the gas cannot be treated classically.
Quantum mechanics seems to divide particles into two categories. Particles with integer spin can
have more than one particle occupying the same quantum state; these types of particles are called bosons.
Examples of such particles (and their spin) are: Alpha (0), Pion (0), Photon (1), Deuteron (1), Gluon
(1), W (1), Z 0 (1). Particles with 1/2 integer spin obey an exclusion principle (Pauli exclusion). Each
quantum state can have at most one particle; these types of particles are called fermions. Examples of
such particles (and their spin) are: Electron (1/2), Muon (1/2), Proton (1/2), Neutron (1/2), Quarks
(1/2), ³He (1/2), Ω (3/2), ⁵⁵Fe (3/2). Why is this the way nature works? Are there other possibilities
(i.e. anyons)? Who knows? Until other types of particles are discovered, we’ll stick with fermions and
bosons.
The grand canonical partition function is Q = Σ_{N=0}^{∞} z^N QN , where z = e^{βµ} is the fugacity.
While we use this definition of Q we won't explicitly compute QN .
Start with the definition of the canonical partition function,
∑
states
QN = e−βEj
j
∑
1∗ ∑
1∗ ∑
1∗
= ... exp (−β [n1 ϵ1 + n2 ϵ2 + . . . + nM ϵM ])
n1 =0 nz =0 nM =0
where ni is the number of particles in energy state i. We have i = 1, . . . , M where M is the number of
energy states (which might be infinite). The asterisks on the sums are to remind us of the constraint,
n1 + n2 + . . . + nM = N with nj = 0 or 1
that is, there are N particles distributed among the M energy states with at most one particle per
energy state.
By the way, don’t confuse system states, which are all the possible configurations of the system,
with energy states. The energy states are similar to energy levels however remember that energy states
don’t necessarily have unique energy values (i.e., ϵi = ϵj is allowed, in fact it’s common). If we picture
the classical phase space for a single particle then each energy level is a parcel in that phase space.
We can formulate our sum over system states in terms of just energy states of a single particle because
we’re still doing ideal gases where the energy of a particle only depends on its own state and not the
state of the other particles (although with quantum ideal gases, the energy states that are available to
a particle are affected by the number of particles in those energy states).
Our next step is to write QN as
QN = Σ_{n1=0}^{1} e^{−βn1 ϵ1} (Σ*_{n2=0}^{1} · · · Σ*_{nM=0}^{1} exp(−β [n2 ϵ2 + . . . + nM ϵM ]))
where the inner, starred sums carry the constraint n2 + . . . + nM = N − n1 . Expanding the n1 sum,
QN = Q′N + Q′N−1 e^{−βϵ1}
where the prime means "remove energy state 1 from the sums."
The grand partition function is
Q = Σ_{N=0}^{∞} z^N QN = Σ_{N=0}^{∞} z^N (Q′N + Q′N−1 e^{−βϵ1})
= Σ_{N=0}^{∞} z^N Q′N + e^{−βϵ1} z Σ_{N=1}^{∞} z^{N−1} Q′N−1
= (1 + z e^{−βϵ1}) Σ_{N=0}^{∞} z^N Q′N
or
Q = (1 + z e^{−βϵ1}) Q′
We can play the same game and pull out energy state 2 and 3 and so on. Ultimately we arrive at
Q = (1 + ze^{−βϵ1})(1 + ze^{−βϵ2}) · · · (1 + ze^{−βϵM}) = Π_{i=1}^{M} (1 + ze^{−βϵi})
Note that for bosons the sum goes from 0 to N instead of from 0 to 1. Again the ∗ means that the
sums are restricted by the condition n1 + n2 + . . . + nM = N .
As before, we explicitly expand the n1 summation to get
QN = Q′N + e^{−βϵ1} Q′N−1 + e^{−2βϵ1} Q′N−2 + . . .
Next we would apply the same procedure to strip out energy state 2 then 3 and so forth. Ultimately,
we have
Q = (1 − ze^{−βϵ1})^{−1} (1 − ze^{−βϵ2})^{−1} · · · (1 − ze^{−βϵM})^{−1} = Π_{i=1}^{M} (1 − ze^{−βϵi})^{−1}
which is our expression for the grand partition function for bosons.
Both cases may be written compactly as
Q = Π_{i=1}^{M} (1 + σze^{−βϵi})^σ
where
σ = +1 for fermions, σ = −1 for bosons
and M is the number of energy states (which can be infinite). From this
ln Q = σ Σ_{i=1}^{M} ln(1 + σze^{−βϵi})
The average number of particles is
N = z (∂/∂z) ln Q = Σ_{i=1}^{M} ni
so
ni = 1/(z^{−1} e^{βϵi} + σ)
for fermions and bosons.
Some points: 1) Sometimes ni is written as ⟨ni ⟩; 2) For fermions 0 ≤ ni ≤ 1 while for bosons
0 ≤ ni ≤ N ; 3) In both cases, z is fixed by the above relation for N ; 4) In the classical (high T ) limit
z ≪ 1 so ni ≈ exp(−βϵi ), which gives the Maxwell-Boltzmann distribution.
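A short numerical comparison (my addition) shows how close the three distributions are in the classical regime z ≪ 1:

```python
import numpy as np

beta, z = 1.0, 0.1  # inverse temperature and fugacity (illustrative values)
eps = np.linspace(0.0, 5.0, 6)  # a few representative energy states

n_fermi = 1.0 / (np.exp(beta * eps) / z + 1.0)  # sigma = +1
n_bose  = 1.0 / (np.exp(beta * eps) / z - 1.0)  # sigma = -1 (needs z e^{-beta eps} < 1)
n_class = z * np.exp(-beta * eps)               # Maxwell-Boltzmann limit

# For z << 1 the three occupation numbers nearly coincide
print(n_fermi)
print(n_bose)
print(n_class)
```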
The energy is
U = −(∂/∂β)z ln Q = Σ_{i=1}^{M} ϵi /(z^{−1} e^{βϵi} + σ) = Σ_{i=1}^{M} ni ϵi
Many of our calculations for Fermi and Bose ideal gases will involve either the above expression for N
or the expression for U .
To summarize, the grand partition function for a quantum ideal gas is
Q = Π_{i=1}^{M} (1 + σze^{−βϵi})^σ
where
σ = +1 for fermions, σ = −1 for bosons
We also have
P V /kT = ln Q = σ Σ_{i=1}^{M} ln(1 + σze^{−βϵi})
and
N = z (∂/∂z)β ln Q = Σ_{i=1}^{M} ze^{−βϵi}/(1 + σze^{−βϵi})
and
U = −(∂/∂β)z ln Q = −(∂/∂β)z (P V /kT )
We could write other thermodynamic variables in terms of ln Q but the above are the most useful.
While the above is complete, the sums are clumsy to work with. For this reason we will convert
from sums over energy states (taking M → ∞) to integrals over phase space,
Σ_{i=1}^{M} → (gd /h³) ∫ d³r ∫ d³p
= (gd /h³) (V ) ∫_0^{2π} dϕ ∫_0^{π} dθ sin θ ∫_0^{∞} dp p²
= (4πgd /h³) V ∫_0^{∞} dp p²
where gd is the degeneracy factor (number of momentum states that have the same energy). Since our
only integration is over momentum, we rewrite the energy of a state using ϵ = p2 /2m (which is legit as
long as our particles do not move at relativistic speeds).
We are not reverting back to classical mechanics even though the notation is similar. Physical systems have many particles so there are a large number of states; the only approximation that we are making here is to assume that systems have so many states that we can replace the sums with integrals. Our previous classical limit, by contrast, was a high temperature approximation.
In terms of these integrals,

P/kT = σ g_d (4π/h³) ∫₀^{∞} dp p² ln(1 + σ z e^{−βp²/2m})

and

N/V = g_d (4π/h³) ∫₀^{∞} dp p² (z e^{−βp²/2m})/(1 + σ z e^{−βp²/2m})
In our previous classical approximation, taking z ≪ 1 and using ln(1 + x) ≈ x, we would write these expressions as

P/kT ≈ σ g_d (4π/h³) ∫₀^{∞} dp p² σ z e^{−βp²/2m} = g_d z/λ³   (classical approximation)

since σ² = 1,
which agrees with our earlier result, ln Q = P V /kT = zV /λ3 , for nondegenerate (gd = 1) energy states.
Similarly, the classical approximation gives

N/V ≈ g_d (4π/h³) ∫₀^{∞} dp p² z e^{−βp²/2m} = g_d z/λ³   (classical approximation)
which using the previous equation gives P V = N kT .
We introduce the Fermi integrals

f_{5/2}(z) = (4/√π) ∫₀^{∞} dx x² ln(1 + z e^{−x²}) = Σ_{ℓ=1}^{∞} (−1)^{ℓ+1} z^ℓ/ℓ^{5/2}

f_{3/2}(z) = z (∂/∂z) f_{5/2}(z) = Σ_{ℓ=1}^{∞} (−1)^{ℓ+1} z^ℓ/ℓ^{3/2}

so that for fermions

P/kT = (g_d/λ³) f_{5/2}(z)
N/V = (g_d/λ³) f_{3/2}(z)

The Bose integrals are defined in the same way but without the alternating sign,

g_{5/2}(z) = Σ_{ℓ=1}^{∞} z^ℓ/ℓ^{5/2},   g_{3/2}(z) = z (∂/∂z) g_{5/2}(z) = Σ_{ℓ=1}^{∞} z^ℓ/ℓ^{3/2}

so for bosons,

P/kT = (g_d/λ³) g_{5/2}(z)
N/V = (g_d/λ³) g_{3/2}(z)
Careful: don’t confuse gd with the Bose integrals; sorry but this is the standard notation.
If you compare the above with equations (7.1.5) and (7.1.6) in Pathria (pg. 180) you will find a
discrepancy. We should actually write
P/kT = (1/λ³) g_{5/2}(z) − (1/V) ln(1 − z)

N/V = (1/λ³) g_{3/2}(z) + (1/V) z/(1 − z)
The last term in each expression is a correction required for very low temperature problems. Since the
origin of these corrections is intellectually interesting we return to this question later and consider it in
some depth.
We may also write the internal energy in terms of the above integrals. Using

U = −(∂/∂β) ln Q |_z = −(∂/∂β)(PV/kT) |_z

we find

U = (3/2) P V   (both Bose and Fermi)
a result which is also true for the classical monatomic ideal gas.
Fermi Energy
We now consider a Fermi gas in some detail. Using the grand canonical ensemble, we already determined
that
N/V = (g_d/λ³) f_{3/2}(z)

where λ = h/√(2πmkT) is the thermal wavelength and z = e^{βµ} is the fugacity. You should view this expression as an equation to obtain z, or equivalently the chemical potential µ, given T, V, and N.
The Fermi function f_{3/2}(z) may be written as

f_{3/2}(z) = (4/√π) ∫₀^{∞} dx x²/(z^{−1} e^{x²} + 1) = Σ_{ℓ=1}^{∞} (−1)^{ℓ+1} z^ℓ/ℓ^{3/2}

Figure 5.1 shows a sketch of this function. We will be interested in the approximations

f_{3/2}(z) ≈ z   (z ≪ 1)
f_{3/2}(z) ≈ (4/(3√π)) (ln z)^{3/2}   (z ≫ 1)
Figure 5.1: Graph of f3/2 (z) indicating low and high z regions.
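As a quick check (a minimal sketch, not from the notes; SciPy is assumed), we can evaluate f_{3/2}(z) from the integral form and compare it with the two limits quoted above:

    # Sketch: f_{3/2}(z) from the integral, compared with its z << 1 and
    # z >> 1 approximations.  The integrand is written as
    # x^2 * z e^{-x^2} / (1 + z e^{-x^2}) to avoid overflow for large z.
    import numpy as np
    from scipy.integrate import quad

    def f32(z):
        integrand = lambda x: x**2 * z * np.exp(-x**2) / (1.0 + z * np.exp(-x**2))
        val, _ = quad(integrand, 0.0, np.inf)
        return 4.0 / np.sqrt(np.pi) * val

    print(f32(0.1), 0.1)                 # small z: f_{3/2}(z) ~ z
    zbig = 1.0e4
    print(f32(zbig), 4.0 / (3.0 * np.sqrt(np.pi)) * np.log(zbig)**1.5)  # large z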
From Nλ³/g_dV = f_{3/2}(z) we see that in the classical limit (λ → 0) we have f_{3/2}(z) ≪ 1 so z ≪ 1. In the opposite case, λ³ ≫ V/N, we have f_{3/2}(z) ≫ 1 so z ≫ 1. Let's consider these separately.
Classical limit When the thermal de Broglie wavelength λ is small, then f_{3/2}(z) ≈ z so

z = Nλ³/(g_d V)

The average occupation number for state i becomes

⟨n_i⟩ = 1/(z^{−1} e^{βϵ_i} + 1) ≈ z e^{−βϵ_i} = (Nλ³/g_d V) e^{−βϵ_i}
which is the classical Maxwell-Boltzmann distribution. In the classical limit f5/2 (z) ≈ z so we also
recover the classical ideal gas equation of state P V = N kT .
Strong Quantum Limit Now we consider the opposite limit, where the thermal de Broglie wavelength λ is much larger than the particle separation (V/N)^{1/3}. In this limit z ≫ 1 and

Nλ³/(g_d V) ≈ (4/(3√π)) (ln z)^{3/2}

or

z ≈ e^{βµ₀}

where

µ₀ ≡ (ħ²/2m) (6π²N/g_d V)^{2/3}
When λ → ∞ (or T → 0) then z → eβµ0 so µ0 is the chemical potential at T = 0. A common notation
is ϵF ≡ µ0 where ϵF is called the Fermi energy. The energy level at the Fermi energy is called the Fermi
level.
As T → 0, the average occupation number is

⟨n_i⟩ = g_d/(z^{−1} e^{βϵ_i} + 1) = g_d/(e^{−βϵ_F} e^{βϵ_i} + 1) = g_d/(e^{β(ϵ_i − ϵ_F)} + 1)

Yet as T → 0, β = 1/kT → ∞ so

lim_{T→0} e^{β(ϵ_i − ϵ_F)} = ∞ for ϵ_i > ϵ_F and 0 for ϵ_i < ϵ_F

and thus

lim_{T→0} ⟨n_i⟩ = 0 for ϵ_i > ϵ_F and g_d for ϵ_i < ϵ_F
In other words, as T → 0, all states below the Fermi level are filled and all states above the Fermi level
are empty. The particles filling the energy levels below ϵF are called the “Fermi sea” (a better term
would be the “Fermi Seventh Street Garage During the First Week of Classes”).
From the Fermi energy we can define a Fermi temperature by ϵ_F = kT_F, or T_F = ϵ_F/k. If T ≫ T_F then the system is in the classical regime. However, for most Fermi gases of interest T ≪ T_F, so they are in the strong quantum limit.
It is easy to understand why the chemical potential at T = 0 equals the Fermi energy. At T = 0, every fermion is in its lowest possible energy level, stacked, due to Fermi exclusion, up to the Fermi level. The entropy is zero at T = 0; there is only one possible state for the system. Now add a single particle; this particle has to be placed on the stack at the next available energy level. This means that the particle will have infinitesimally more energy than the Fermi energy. But the entropy remains zero and the energy of the system increases by ϵ_F, thus the chemical potential is

µ = (∂U/∂N)_{S,V} = ϵ_F   (T = 0)
It turns out that µ ≈ 0 for T ≈ TF and the chemical potential is negative when T ≫ TF , which was
already pointed out for the classical ideal gas.
The chemical potential at T = 0 is µ0 = ϵF ; to obtain µ for T > 0, we need to keep the next order
in the expansion for f3/2 (z). Specifically,
f_{3/2}(z) = (4/(3√π)) (ln z)^{3/2} [1 + (π²/8)(ln z)^{−2} + ...]
           ≈ (4/(3√π)) (βµ)^{3/2} [1 + (π²/8)(βµ)^{−2}]   (z ≫ 1)
Using this new approximation we get

Nλ³/(g_d V) = (4/(3√π)) (βµ)^{3/2} [1 + (π²/8)(βµ)^{−2}]

or

(βµ)^{3/2} = (3√π/4)(Nλ³/g_d V) [1 + (π²/8)(βµ)^{−2}]^{−1}

or

µ = (1/β) [(3√π/4)(Nλ³/g_d V)]^{2/3} [1 + (π²/8)(βµ)^{−2}]^{−2/3}
  = ϵ_F [1 + (π²/8)(βµ)^{−2}]^{−2/3}
There is only one hitch: we have µ in the correction term on the r.h.s. However, since that term is the first order correction, we may set µ = µ₀ = ϵ_F on the r.h.s. and get

µ ≈ ϵ_F [1 + (π²/8)(βϵ_F)^{−2}]^{−2/3}
  ≈ ϵ_F [1 − (π²/12)(βϵ_F)^{−2}]
  = ϵ_F [1 − (π²/12)(kT/ϵ_F)²]
  = ϵ_F [1 − (π²/12)(T/T_F)²]
which gives µ for T > 0. Since it is common that T ≪ TF , we rarely need a second order correction.
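As a check on this result (a minimal sketch, assuming units where ϵ_F = k = 1 and nondegenerate states, g_d = 1), we can numerically invert the density relation for a Fermi gas and compare with the expansion above:

    # Sketch: solve Int_0^inf sqrt(e) de / (exp((e - mu)/T) + 1) = 2/3
    # (its T = 0 value with eps_F = 1) for mu, and compare with the
    # expansion mu = 1 - (pi^2/12) T^2.
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    def density_integral(mu, T):
        # Fermi-Dirac density integral; clip the exponent to avoid overflow
        integrand = lambda e: np.sqrt(e) / (np.exp(np.clip((e - mu) / T, -700.0, 700.0)) + 1.0)
        val, _ = quad(integrand, 0.0, np.inf)
        return val

    target = 2.0 / 3.0   # T = 0 value of the integral with eps_F = 1

    for T in [0.05, 0.1, 0.2]:
        mu = brentq(lambda m: density_integral(m, T) - target, -5.0, 2.0)
        mu_expansion = 1.0 - (np.pi**2 / 12.0) * T**2
        print(f"T/T_F = {T:4.2f}:  mu = {mu:.4f}   expansion = {mu_expansion:.4f}")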
At low (but non-zero) temperatures the average occupation number is

⟨n_i⟩ = 1/(e^{β(ϵ_i − µ)} + 1)
which is sketched in Fig. 5.2.
From the above, we can obtain the internal energy,

U = (3/5) N ϵ_F [1 + (5π²/12)(T/T_F)²]

Notice that at T = 0 the average energy per particle is (3/5)ϵ_F, which is the average ground state energy. The heat capacity is

C_V = (∂U/∂T)_V = (π²/2) N k (T/T_F)

and the specific heat per mole (set N = N_A and use R = N_A k) is

c_V = (π²/2) R (T/T_F)
This result resolves an old mystery that tortured Boltzmann. Recall the law of Dulong and Petit, which says that due to molecular vibrations the specific heat per mole of a solid is 3R. Yet it is strange that this result applies for both insulators and metals. After all, metals have conduction electrons, which should contribute 3 extra degrees of freedom (due to their translational kinetic energy). For metals we would thus expect the molar heat capacity to be

c_V = 3R + (3/2)R

where the first term is the contribution from the lattice and the second is the contribution from the conduction electrons. But this result is not correct because conduction electrons do not behave as a classical ideal gas but rather as a Fermi ideal gas. The correct expression for the molar heat capacity of a metal is

c_V = 3R + (π²/2) R (T/T_F)
For copper T_F ≈ 50,000 K, so it is not surprising that the conduction electrons have a negligible effect on the heat capacity.
An alternative way to view this phenomenon is to consider the other definition of heat capacity,

C_V = T (∂S/∂T)_V

As temperature increases, only a few particles rise above their ground states. Most particles remain in the ground state so long as T ≪ T_F. For this reason, the number of accessible states Γ (and consequently the entropy S) increases very slowly. Since (∂S/∂T)_V is small, so is C_V.
Finally, the pressure in a Fermi gas is

P = 2U/3V = (2/5)(Nϵ_F/V)[1 + (5π²/12)(T/T_F)²]
Notice that even when T = 0, the pressure is nonzero. Again this is due to the ground state energy of
the particles. In the next section we consider an example in which this pressure supports a star and
prevents its gravitational collapse.
White Dwarf Stars

A white dwarf star is so dense that its electrons form a degenerate Fermi gas, and since the electrons can be relativistic we use the full energy ϵ = √((pc)² + (mc²)²). Fortunately, we can assume that all the electrons are in their ground state (since T ≪ T_F).
The total internal energy is then

U = g_d Σ_i^{states} ϵ_i n̄_i
As usual, we replace the sum over states with an integral over states by integrating over position and
momentum,
U = (2/h³) ∫_V d⃗r ∫_{p<p_F} d⃗p √((pc)² + (mc²)²)
  = (2/h³)(V)(4π) ∫₀^{p_F} dp p² √((pc)² + (mc²)²)
or

U = (m⁴c⁵/π²ħ³) V f(x_F)

where x_F ≡ p_F/mc and

f(x) ≡ ∫₀^x dy y² √(1 + y²) = (1/8)[√(x² + 1)(2x³ + x) − sinh^{−1}(x)]
Note that f(x) is not a particularly interesting looking function; it basically looks like x^a where a is a small integer (around 2 to 5 depending on whether x → 0 or x → ∞).
The reason we wanted to get U is that we can get the pressure using
P = −(∂U/∂V)_S

Note: Entropy is fixed because all electrons are in the ground state. Putting in our expression for U,

P = −(m⁴c⁵/π²ħ³)[f(x_F) + V (∂f/∂x_F)(∂x_F/∂V)_S]
  = (m⁴c⁵/π²ħ³)[(1/3) x_F³ √(1 + x_F²) − f(x_F)]
This gives us pressure in terms of x_F, but we would prefer to know it in terms of M, the star's mass, and R, the star's radius. Yet it is simple to write x_F in terms of M and R since

x_F = p_F/mc = (ħ/mc)(3π²N/V)^{1/3} = M̄^{1/3}/R̄

where

R̄ = R/(ħ/mc)

is a dimensionless radius and

M̄ = (9π/8)(M/m_p)

is a dimensionless mass.
We've used the fact that the star's mass, M, is related to the number of electrons, N, by

M ≈ 2 m_p N

where m_p is the mass of a proton. That is, for each electron there is approximately one proton and one neutron (which have about the same mass). All miscellaneous constants are stuffed into M̄ and R̄ to keep things tidy.
Our expression for the pressure remains complicated. However, in the two limits it takes the approximate forms

P ≈ (4/5) K x_F⁵   (x_F ≪ 1)

and

P ≈ K (x_F⁴ − x_F²)   (x_F ≫ 1)

where

K = (mc²/12π²)(mc/ħ)³

We will use these expressions in a moment.
The gravitational force of the star must balance the pressure of the Fermi gas in order to contain
the gas. To do this right, the calculation requires solving some partial differential equations. Yet we
can use a simple approximation for the pressure exerted by gravity.
(Force) = −(∂/∂R)(gravitational potential energy)

P × (surface area) = −(∂/∂R)(−GM²/R)

P (4πR²) = GM²/R²

so

P = (1/4π) GM²/R⁴
Comparing this with our previous expression for pressure allows us to solve for radius in terms of
mass. We find,
1) In the low density limit (x_F ≪ 1),

R̄ ∝ M̄^{−1/3}
Notice that as mass increases, radius decreases.
2) In the high density limit (x_F ≫ 1), the pressure balance gives R̄ ∝ M̄^{1/3}[1 − (M/M₀)^{2/3}]^{1/2}, where the constant M₀ turns out to be about 1.44 times the mass of the sun.
The graph of radius versus mass is sketched in Fig. 5.3.
Notice that the theory predicts that if the mass M > M₀, the Fermi gas pressure cannot hold up the star; this is called the Chandrasekhar limit.∗ If a star's mass exceeds M₀ it does not necessarily collapse into a black hole; internuclear forces can hold up the star, in which case it becomes a neutron star.
Photon Gas

For a gas of photons the number of particles is not conserved (µ = 0), so the grand partition function is an unrestricted sum over the occupation numbers,

Q = Σ_{states} e^{−βE(n₁,...,n_M)}

where n_i is the number of photons in energy level i and E is the energy for the state (n₁, ..., n_M). Notice that the sums are unrestricted since particle number is unconstrained.
Call ϵi the energy of a photon in state i and we may write
Q = Σ_{n₁=0}^{∞} ··· Σ_{n_M=0}^{∞} e^{−β[n₁ϵ₁ + n₂ϵ₂ + ... + n_Mϵ_M]}
  = (Σ_{n₁=0}^{∞} e^{−βn₁ϵ₁})(Σ_{n₂=0}^{∞} e^{−βn₂ϵ₂}) ··· (Σ_{n_M=0}^{∞} e^{−βn_Mϵ_M})
  = (1/(1 − e^{−βϵ₁}))(1/(1 − e^{−βϵ₂})) ··· (1/(1 − e^{−βϵ_M}))
  = Π_{i=1}^{M} 1/(1 − e^{−βϵ_i})
so

ln Q = − Σ_{i=1}^{M} ln(1 − e^{−βϵ_i})
Notice how easy those unrestricted sums were. If this looks familiar it's because it's the same as what we found for ln Q for bosons but with z = 1 (since µ = 0). For photons there is no physical upper bound on the number of states so we may take M = ∞.
As usual, to get much farther we need to convert the sum over states into integrals over phase space,

Σ_{states} → Σ_{spins} (1/h³) ∫d⃗r ∫d⃗p = (2)(V)(1/h³) ∫d⃗p = (8πV/h³) ∫₀^{∞} dp p²
since there are two spin states (corresponding to left and right polarized).
One usually prefers characterizing the state of a photon by its frequency rather than its momentum, so using p = ϵ/c = ħω/c,

(8πV/h³) ∫₀^{∞} dp p² → (V/π²c³) ∫₀^{∞} dω ω²
and

ln Q = −(V/π²c³) ∫₀^{∞} dω ω² ln(1 − e^{−βħω}) = (π²V/45c³)(kT/ħ)³
The pressure may be found using PV = kT ln Q,

P = (π²/45c³ħ³)(kT)⁴ = (4σ/3c) T⁴

where σ is the Stefan-Boltzmann constant.
so PV = (1/3)U.
For our final result, go back a moment to our integral expression for ln Q,

ln Q = −(V/π²c³) ∫₀^{∞} dω ω² ln(1 − e^{−βħω})

Using U = −(∂/∂β) ln Q |_V,

U = (V/π²c³) ∫₀^{∞} dω ω² (ħω e^{−βħω})/(1 − e^{−βħω})

or

U/V = ∫₀^{∞} dω u(ω, T)

where

u(ω, T) = (ħ/π²c³) ω³/(e^{βħω} − 1)
is the energy per unit volume per unit frequency, i.e., the Planck radiation law (see Fig. 7.7 in Pathria).
By the correspondence principle, in the limit ħ → 0 we have e^{βħω} ≃ 1 + βħω, and we recover the classical Rayleigh-Jeans law

u(ω, T) = (kT/π²c³) ω²
The ultraviolet catastrophe (u → ∞ as ω → ∞) in the classical result led to the birth of quantum
mechanics.
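A minimal numerical sketch (not from the notes; units with ħ = k = c = 1 are assumed) comparing the Planck law with its Rayleigh-Jeans limit, which agree at low frequency and diverge at high frequency:

    # Sketch: Planck spectral density u(omega, T) vs. the Rayleigh-Jeans
    # limit, in units where hbar = k = c = 1.
    import numpy as np

    T = 1.0
    omega = np.array([0.01, 0.1, 1.0, 5.0])
    u_planck = omega**3 / (np.pi**2 * (np.exp(omega / T) - 1.0))
    u_rj     = T * omega**2 / np.pi**2

    for w, up, ur in zip(omega, u_planck, u_rj):
        print(f"omega = {w:5.2f}:  Planck {up:.5f}   Rayleigh-Jeans {ur:.5f}")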
Phonons

For phonons (quantized sound waves in a solid) the counting of states is the same as for photons, so again

ln Q = − Σ_{i=1}^{M} ln(1 − e^{−βϵ_i}) = − Σ_{i=1}^{M} ln(1 − e^{−βħω_i})
Yet there are some differences. First, the number of states (i.e., modes), M, is not infinite but rather is limited to 3N, where N is the number of atoms in the solid. This limitation may be understood from the fact that the smallest wavelength of the sound modes is bounded by the finite separation between atoms.
As before, we want to replace the sum over states with an integral, and, as with photons,
Σ_{i=1}^{M} → (3/2)(V/π²c³) ∫₀^{ω_max} dω ω²
(1) The polarization vector for phonons has 3 directions (2 transverse + 1 longitudinal) instead of 2 (just transverse) as with photons. This produces the factor 3/2.
(2) The speed c is the sound speed instead of the speed of light.
(3) The finite number of modes introduces a maximum frequency ω_max. This ω_max is found using the condition

(3/2)(V/π²c³) ∫₀^{ω_max} dω ω² = 3N

or

ω_max = c (6π²N/V)^{1/3}

for the maximum frequency.
I should mention that we have made an approximation (first introduced by Debye) that the distribution of modes is the same as for photons. In reality, we should write

Σ_{i=1}^{M} → ∫₀^{ω_max} f(ω) dω

where f(ω) is the true density of modes of the solid.
This last integral is tabulated (see Debye function). If we define a Debye temperature T₀ = ħω_max/k then we have the approximations

U = 3NkT × { 1 − (3/8)(T₀/T) + ...   for T ≫ T₀
             (π⁴/5)(T/T₀)³ + ...     for T ≪ T₀ }
Typically T₀ ≈ 200 K, so at room temperature U ≈ 3NkT and C_V = (∂U/∂T)_V ≈ 3Nk = 3Rn, where n is the number of moles. This last expression is the familiar law of Dulong and Petit. For low temperatures, U ∝ T⁴ so C_V ∝ T³, as seen experimentally.
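The full temperature dependence of the heat capacity is easy to evaluate numerically; a sketch (assuming SciPy and the standard Debye form C_V/3Nk = 3(T/T₀)³ ∫₀^{T₀/T} x⁴eˣ/(eˣ − 1)² dx, which recovers Dulong-Petit at high T and the T³ law at low T):

    # Sketch: Debye heat capacity per mode, C_V/(3Nk).
    import numpy as np
    from scipy.integrate import quad

    def cv_debye(T, T0):
        integrand = lambda x: x**4 * np.exp(x) / (np.exp(x) - 1.0)**2
        val, _ = quad(integrand, 0.0, T0 / T)
        return 3.0 * (T / T0)**3 * val

    T0 = 200.0   # a typical Debye temperature, in K
    for T in [10.0, 50.0, 300.0, 1000.0]:
        print(f"T = {T:6.1f} K:  C_V/3Nk = {cv_debye(T, T0):.4f}")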
Bose-Einstein Condensation

For the ideal Bose gas we found the grand partition function

Q = Π_i^{states} (1 − z e^{−βϵ_i})^{−1}

so

PV/kT = ln Q = − Σ_i^{states} ln(1 − z e^{−βϵ_i})

and

N = z (∂/∂z) ln Q = Σ_i^{states} (z e^{−βϵ_i})/(1 − z e^{−βϵ_i})
Normally, we replace the sums over states with integrals over phase space,

Σ_i^{states} → (4π/h³) V ∫₀^{∞} dp p²
An aside: For bosons, 0 ≤ z < 1. You can prove this by considering the average occupation number

n̄_i = 1/(z^{−1} e^{βϵ_i} − 1) = z/(e^{βϵ_i} − z)

For the ground state, i = 1 with ϵ₁ = 0, we have

n̄₁ = z/(1 − z)

Since 0 ≤ n̄₁ < ∞, the fugacity must be between 0 and 1. (Note: for fermions 0 ≤ z < ∞.)
Naively, this relation between N and z seems to say that the number of particles drops as the gas is cooled, yet

". . . not even the magic of quantum mechanics can make particles disappear with cooling or reappear when warmed."
The key to resolving this paradox is to realize that at low temperatures, a significant fraction of the
particles will occupy the ground state. We explicitly separate the first term from the sum over states,
and replace the remaining sum with an integral, as before,
N = Σ_{i=1}^{∞} 1/(z^{−1} e^{βϵ_i} − 1)
  = 1/(z^{−1} − 1) + Σ_{i=2}^{∞} 1/(z^{−1} e^{βϵ_i} − 1)
  = z/(1 − z) + (V/λ³) g_{3/2}(z)
  = N_gr(z) + N_ex(z)
where N_gr, N_ex are the number of particles in the ground state and in excited states, respectively. As we saw, N_ex → 0 as T → 0, yet the first term, N_gr = z/(1 − z), can be arbitrarily large as z → 1. As we saw, the maximum value of g_{3/2}(z) is ζ(3/2) (≃ 2.612), so the maximum number of particles outside the ground state is

max{N_ex(z)} = ζ(3/2) V/λ³ = N_ex(z = 1)
Recall that λ ∝ T^{−1/2}, so N_ex(1) ∝ T^{3/2}; we define a critical temperature, T_c, by the condition

N_ex(1) = N

or

ζ(3/2) V/λ(T_c)³ = N

In other words, we ask: at what minimum temperature could we still fit all the particles into non-ground states? Solving the above for this temperature,

kT_c = (h²/2πm) (N/(V ζ(3/2)))^{2/3}
By the way, the value of kTc is approximately the same as the Fermi energy so quantum effects appear
in the same energy range for fermions and bosons.
The graph of N_gr and N_ex is shown in Fig. 5.5. Specifically,

N_gr = { N[1 − (T/T_c)^{3/2}]   for T < T_c
         0                      for T > T_c }
Notice that below T = Tc , particles start collecting in the ground state; this phenomenon is called
Bose-Einstein condensation.
To obtain the other thermodynamic properties, we use the standard expressions but treat the ranges
T < Tc and T > Tc separately:
• For T < T_c: The value of z in this case is very close to 1. For example, if the ground state contains a mere 10⁶ particles then from

N_gr = z/(1 − z)

we have that z ≈ 0.999999. Thus for T < T_c we may set z = 1 everywhere except in the above expression for N_gr.

Figure 5.5: The number of particles in the ground state and in excited states as a function of temperature.
• For T > T_c: We can take N_gr = 0, N_ex = N so

N = (V/λ³) g_{3/2}(z)
We must solve this expression to find z; probably need to do this numerically.
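A minimal sketch of that numerical solution (SciPy assumed; the series for g_{3/2} is truncated at a finite number of terms, so the sample value of Nλ³/V must stay below the truncated sum's maximum of about 2.57):

    # Sketch: for T > Tc, solve g_{3/2}(z) = N lambda^3 / V for z by
    # bracketed root finding on the (truncated) series.
    import numpy as np
    from scipy.optimize import brentq

    def g32(z, terms=2000):
        l = np.arange(1, terms + 1)
        return np.sum(z**l / l**1.5)

    rhs = 1.5   # example value of N lambda^3 / V (must be < zeta(3/2) = 2.612...)
    z = brentq(lambda z: g32(z) - rhs, 1e-12, 1.0 - 1e-9)
    print(f"z = {z:.6f},  g_3/2(z) = {g32(z):.6f}")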
Some of the other thermodynamic results we obtain from the partition function include
U = { (3/2) kT (V/λ³) g_{5/2}(z)   for T > T_c
      (3/2) kT (V/λ³) g_{5/2}(1)   for T < T_c }
Graphing the specific heat, we get a curve with a noticeable cusp; see Fig. 5.6 or Pathria and Beale's Fig. 7.11. Quantum effects in a Fermi gas of electrons are significant, even at room temperature, because the mass of an electron is small, giving it a large de Broglie wavelength. We don't have a similar situation for bosons. For a He⁴ atom (spin 0) at liquid helium densities, the critical temperature at which Bose-Einstein condensation would occur is T_c ≈ 3.1 K. Experimentally, at T = 2.18 K, He⁴ does have a phase transition into a superfluid phase. The specific heat near the transition is reminiscent of the ideal Bose gas result. Unfortunately, a liquid is not well approximated as an ideal gas; the interaction between atoms is not negligible. Thus our B-E condensation is only a first step toward understanding superfluid helium.
Bose-Einstein condensation is observed in the laboratory† and significant experimental work has been done in the field of quantum ideal gases in the past decade. Finally, white dwarfs are filled with helium nuclei and are so dense that the critical temperature is quite high (T_c ≈ 2.5 × 10⁶ K). Maybe we have B-E condensation in their cores? There is a white dwarf nearby (8 light years away, near Sirius); we should go check this out.
† M.H. Anderson, J.R. Ensher, M.R. Matthews, C.E. Wieman, and E.A. Cornell, Science 269 198 (1995).
Figure 5.6: Heat capacity of an ideal boson gas as a function of temperature; notice cusp at T = Tc .
Chapter 6

Non-Ideal Systems – Ising Model

Recall that for an ideal system the energy of a system state is a sum of single-particle energies,

E_i = ϵ₁(j′) + ϵ₂(j″) + ... + ϵ_N(j′···′)

and the energy of each particle is independent of the state of the other particles. For distinguishable particles, the partition function simplifies,

Q_N = (Σ_j^{states} e^{−βϵ₁(j)}) × ... × (Σ_j^{states} e^{−βϵ_N(j)}) = (Q₁)^N
where Q1 is the partition function for a single particle. Sometimes counting states is a little more
difficult (e.g., Fermi and Bose ideal gases) but still doable within one lecture.
In this chapter we consider the much more difficult problem of non-ideal systems for which the
energy does not decompose as a simple sum of single particle energies because of interactions between
the particles.
To give you a picture to have in mind, consider the following two examples:
Classical Non-ideal Gas In this system the energy of a single particle has both a kinetic and potential
energy contribution,
ϵ_i = p_i²/2m + Σ_{k≠i}^{N} U(|q_i − q_k|)
where U is the potential energy due to the force between a pair of particles, which is here taken to only
be a function of their relative separation. The total energy for a given state is
E(q₁, ..., q_N, p₁, ..., p_N) = Σ_{i=1}^{N} e_i(q₁, ..., q_N, p_i)

but

E(q₁, ..., q_N, p₁, ..., p_N) ≠ Σ_{i=1}^{N} e_i(q_i, p_i)
because the potential energy for particle j is a function of the positions of all the other particles.
Quantum Ising Spin System Consider a lattice of particles with spin 12 for which the state of particle
i is si = +1 (“up”) or −1 (“down”).∗ Two typical configurations could be pictured as
+ + − + + + − +
− + − − + + + +
+ + − + + − + +
+ − + − + + + +
In the left configuration the spins appear randomly oriented while on the right they are aligned to
mostly point up. The state of the system is given by the set of quantum numbers (s1 , . . . , sN ).
The spins interact in this system and we write the total energy for a given state as
E(s₁, ..., s_N) = (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} U_{ij}(s_i, s_j) + Σ_{i=1}^{N} U_i^H(s_i)
where Uij is the potential energy due to the interaction between spins i and j; UiH is the potential
energy of spin i in the presence of an external magnetic field H.
The Ising model simplifies this system even further to write the energy as
E(s₁, ..., s_N) = −(ϵ/2) Σ_{i=1}^{N} Σ_{j∈N_i} s_i s_j − H Σ_{i=1}^{N} s_i
where the sum “j ∈ Ni ” is restricted to the spins that are “neighbors” of spin i; each spin has γ
neighbors.† The interaction energy is ϵ so if ϵ > 0 then the lower energy state is when spins are aligned
(si and sj both +1 or both −1). The potential energy due to the external field is also lower when H
and si have the same sign (i.e., when the spin points in the same direction as the field).
The Ising model is a simplified representation of a ferromagnet. You can check that the energy
cannot be written as the sum of independent terms, that is,
E(s₁, ..., s_N) ≠ Σ_{i=1}^{N} e(s_i)

because of the nonlinear interaction between the spins. The Ising model is a non-ideal system due to the
coupling of a spin and its neighbors. Despite its simplicity it is a very rich theoretical model since it is
one of the simplest non-ideal systems with a direct relation to a physical material.
∗ NOTATION: Pathria uses σi instead of si
† NOTATION: Pathria uses J instead of ϵ and µB instead of H
The canonical partition function is

Q_N = Σ_i^{states} e^{−βE_i} = Σ_{s₁=−1}^{+1} Σ_{s₂=−1}^{+1} ... Σ_{s_N=−1}^{+1} exp(−βE(s₁, s₂, ..., s_N))

Each sum has only two terms, but for even a modest system of N = 1000 spins there are 1000 sums, so the total number of terms to evaluate is 2^{1000} ≈ 10^{300}.
Modern supercomputers can evaluate about 10^{15} operations per second (1 Petaflop). Even if each term required only one operation, this small system would require 10^{285} s ≈ 3 × 10^{277} years to evaluate Q_N for a single value of β!
Phenomenological Approach This approach can be generously termed “intelligent guessing” or cyn-
ically termed “fudging to fit the answer.” The paradigm for this approach is the formulation of the van
der Waals model for dense gases and liquids. Starting from the ideal gas, van der Waals proposed two
corrections to account for the short-ranged repulsion force and the long-ranged attraction force. For
many phenomena (e.g., behaviour near the gas-liquid critical point) the model is in good agreement
with experimental data, indicating that it contains some elements of truth.
The phenomenological approach produces thermodynamic models that are only partially derived
from statistical mechanics. This is less satisfying to the pure theorist but the models can be very handy
for general applications. And sometimes the model can be derived from fundamental principles, as with the Kac-Uhlenbeck-Hemmer analysis of the van der Waals model, though this usually takes decades (90 years for the van der Waals model).
Perturbation Approach If a system is close to ideal then the interactions can be added as a correction
and solved by a perturbation expansion. For a non-dilute gas we can work out the virial coefficients,
Bi , to formulate an equation of state of the form
PV/NkT = B₁(T) + B₂(T)(N/V) + B₃(T)(N/V)² + ...
At low densities only the first few terms contribute. The calculation of these virial coefficients is rather tedious and composes the bulk of Chapter 9 in Pathria. Second quantization, a specialized perturbation methodology for quantum gases, is described in Chapter 10 of Pathria. We will not cover either of these chapters.
Mean Field Approach In this approach we approximate the system to be a sum of noninteracting
particles by replacing the interactions with a fake external field. The idea is that, on average, a particle
feels a “mean field” of neighbors. But don’t we need to know the distribution for the system to find this
mean field? Yes, the calculation is circular—assume we know the answer, compute the answer, then
use the answer to get the answer. This is best understood by applying the method, which is what is
done in the next section.
Mean Field Theory for the Ising Model

Recall the Ising model energy,

E(s₁, ..., s_N) = −(ϵ/2) Σ_{i=1}^{N} Σ_{j∈N_i} s_i s_j − H Σ_{i=1}^{N} s_i

where ϵ > 0 is the interaction energy of the spins, γ is the number of neighboring sites that interact with each spin, and H is the component of the external magnetic field parallel to the spins.
Before formulating the mean field solution for the Ising model, let’s discuss some general features
we expect to find in the model. To start, take the external field H = 0 so we only have the interaction
among the spins. The energy is lower when the spins are aligned so if we consider the two configurations:
+ + − + + + − +
− + − − + + + +
+ + − + + − + +
+ − + − + + + +
the configuration on the right has a lower energy than the one on the left. In the canonical ensemble the probability of a state goes as exp(−E/kT), so the state on the right is more probable.
Does this mean that we expect to find the spins aligned? Not necessarily. Again, considering the
two configurations above, the one on the left has 9 up spins and 7 down spins while the configuration on
the right has 14 up spins and 2 down spins. The number of combinations of M = 9 up spins chosen from
N = 16 sites is N CM = N !/M !(N − M )! = 11440; the number of combinations for M = 14 up spins is
120. Though the configuration on the right is more probable there are many more configurations like
the one on the left.
At low temperatures the “energy advantage” will highly favor configurations with aligned spins.
At high temperatures the “entropy advantage” will favor configurations with random spins because
there are so many of them. Experimentally, ferromagnets are observed to undergo a phase transition
at the Curie temperature, Tc . Below the Curie temperature the ferromagnet can sustain a non-zero
spontaneous magnetization but above Tc the material transitions from ferromagnetic to paramagnetic
behavior. For iron Tc = 1043 K, making it a ferromagnet at room temperature. Finally, when H ̸= 0
the spins are more likely to align with the field than against it but the general behavior discussed above
is qualitatively unchanged.
Now for the mean field approximation in the Ising model. In general, the energy of a single spin, at
site i, may be written as
e_i = −ϵ s_i Σ_{j∈N_i} s_j − H s_i
Figure 6.1 illustrates an individual spin in a two-dimensional lattice with the neighbors being the four
spins immediately adjacent to site i (so γ = 4). The energy for this single spin may be written as
ei (si , sSouth , sWest , sEast , sNorth ), with the compass directions pointing to the four neighbors.
The mean field approximation replaces the specific values of the neighboring spins with an average spin, ⟨s⟩,‡ so the energy of spin i is

e_i = −(H + γϵ⟨s⟩) s_i = −H* s_i

Figure 6.1: Schematic of the Ising system focusing on spin i in the original system and in the mean field approximation.

where

H* = H + γϵ⟨s⟩

is the "effective mean field" felt by spin i. The energy of site i is thus e_i = ∓H* for the two states s_i = ±1. Of course we don't know the average spin ⟨s⟩ yet, but the strategy will be to formulate an expression for ⟨s⟩ in terms of ⟨s⟩.
For a single lattice site the partition function is

Q₁ = Σ_{s_i=−1}^{+1} e^{−βe_i(s_i)} = e^{−βH*} + e^{+βH*} = 2 cosh(βH*)

The average spin is

⟨s_i⟩ = Σ_{s_i=−1}^{+1} s_i P(s_i) = (+1)P(+1) + (−1)P(−1)
      = (1/Q₁) e^{βH*} − (1/Q₁) e^{−βH*}
      = (2/Q₁) sinh(βH*)
      = sinh(βH*)/cosh(βH*) = tanh(βH*)
Since there is nothing special about site i we write ⟨s_i⟩ = ⟨s⟩. Recall that H* = H + γϵ⟨s⟩, so

⟨s⟩ = tanh(β(H + γϵ⟨s⟩))

For H = 0 this is the transcendental equation

⟨s⟩ = tanh((T_c/T)⟨s⟩)

where T_c = γϵ/k. These functions are sketched in Pathria Fig. 12.6; notice that at high temperatures (T > T_c) the only solution is ⟨s⟩ = 0.
At low temperatures (T ≤ T_c) there are three solutions, ⟨s⟩ = 0, +s̄(T), and −s̄(T). Being a transcendental equation there is no simple analytic form for s̄(T), but using tanh(x) = x − x³/3 + ... you can show that

s̄(T) ≈ √(3(1 − T/T_c))

for T ≈ T_c. For T ≪ T_c we find

s̄(T) ≈ 1 − 2e^{−2T_c/T}
The function s̄ is sketched in Pathria Fig. 12.7, where it is shown to be in qualitative agreement
with experimental data. Unfortunately the mean field approximation is too crude to give quantitative
agreement.
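That circular calculation is easy to carry out numerically; a minimal sketch (units with T_c = 1 assumed), solving ⟨s⟩ = tanh(T_c⟨s⟩/T) at H = 0 by fixed-point iteration and comparing with the near-T_c formula (which is only meaningful for T close to T_c):

    # Sketch: mean field self-consistency <s> = tanh(Tc <s> / T), H = 0.
    import numpy as np

    def magnetization(T, Tc=1.0, s0=0.9, iters=2000):
        s = s0
        for _ in range(iters):        # fixed-point iteration
            s = np.tanh(Tc * s / T)
        return s

    for T in [0.5, 0.9, 0.99, 1.1]:
        s = magnetization(T)
        near_tc = np.sqrt(3.0 * (1.0 - T)) if T < 1.0 else 0.0
        print(f"T/Tc = {T:4.2f}:  <s> = {s:.4f}   sqrt(3(1-T/Tc)) = {near_tc:.4f}")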
Bragg-Williams Approximation

Picture a configuration of the lattice where a "+" is an up spin state (s = +1) and "−" is a down spin state (s = −1). Here we take each site to have γ = 4 neighbors and label each pair of sites as "++" or "−−" if the pair are both up or both down; pairs that are unaligned are labeled as "+−". Define
N₊ = Number of up spins
N₋ = Number of down spins
N₊₊, N₋₋, N₊₋ = Number of pairs of each type

Since the total number of spins is N and the total number of pairs is (1/2)γN,

N₊ + N₋ = N
N₊₊ + N₋₋ + N₊₋ = (1/2)γN
Notice that the average value of the spin state may be written as

⟨s⟩ = (1/N) Σ_{i=1}^{N} s_i = (N₊ − N₋)/N

so

N₊ = (N/2)(1 + ⟨s⟩)
N₋ = (N/2)(1 − ⟨s⟩)
For ⟨s⟩ = 0, N+ = N− = N/2.
The total energy in the system may be written as
E = −(1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} s_i s_j ϵ_{i,j} − H Σ_{i=1}^{N} s_i
  = −ϵ(N₊₊ + N₋₋ − N₊₋) − H(N₊ − N₋)
since the interaction coefficient ϵi,j = ϵ if spins i and j are neighbors and is zero otherwise.
So far we've made no approximations (and not gone very far). We now introduce the Bragg-Williams approximation by assuming that

P₊₊ = (P₊)²

where P₊₊ is the probability that a pair is "++" and P₊ is the probability that a spin is up. This is similar to the situation in which we flip a coin twice; the probability of getting two heads is the square of the probability of getting a single head. Note that this is true even if it's an unfair coin (i.e., chance of heads different from 50%).
Since

P₊ = N₊/N
P₊₊ = N₊₊/((1/2)γN)

the Bragg-Williams assumption determines N₊₊ (and similarly N₋₋ and N₊₋) from N₊ alone.
The number of configurations with N₊ up spins is Γ = N!/(N₊!N₋!), so the entropy is

S = k ln Γ = k ln N! − k ln N₊! − k ln N₋!

Using ln M! ≈ M ln M − M gives

S = kN ln N − kN₊ ln N₊ − kN₋ ln N₋

Writing both S and the energy U in terms of ⟨s⟩ and using 1/T = (∂S/∂U) = (dS/d⟨s⟩)/(dU/d⟨s⟩) gives

1/T = [−(kN/2) ln((1 + ⟨s⟩)/(1 − ⟨s⟩))] / [−Nϵγ⟨s⟩ − NH] = (k/(2ϵγ⟨s⟩ + 2H)) ln((1 + ⟨s⟩)/(1 − ⟨s⟩))
which is exactly the same transcendental equation we obtained from mean field theory (invert it using tanh^{−1}(x) = (1/2)ln((1 + x)/(1 − x))). Naturally the results obtained from mean field theory (e.g., critical temperature T_c = γϵ/k) are the same for the Bragg-Williams approximation.
Finally, the reason that the mean field approximation and the Bragg-Williams approximation are equivalent is that both represent the "long range order" in the Ising model but neglect the "short range order." In the real physical system, if a spin is up then it is more probable that a neighboring spin will also be up; both approximations neglect this short-range correlation. If many spins are up then ⟨s⟩ will be positive, so in the mean field approximation the value of s_i will tend to be positive since spin i feels this global field. In this sense the long-range correlation is included. For the Bragg-Williams approximation, the number of "++" pairs (N₊₊) depends only on the total number of up spins (N₊) and not on how they are distributed, which is equivalent to assuming the up spins are randomly distributed.
Any improvement of mean field theory will have to include short-range order. The simplest correc-
tion is called the Bethe approximation (section 12.6 in Pathria). Advanced methods are presented in
Chapters 13 and 14. The full story of the Ising model (and its variants) could easily occupy us for sev-
eral months. Instead of continuing its study we’ll move on to the general theory of critical phenomena.
But before leaving the Ising model, let’s consider one last important case which happens to be exactly
solvable.
One-dimensional Ising Model

For a chain of N spins with nearest-neighbor interactions and no external field, the energy is E = −ϵ Σ_{i=1}^{N−1} s_i s_{i+1}, so

Q_N = Σ_{s₁=−1}^{+1} ... Σ_{s_N=−1}^{+1} Π_{i=1}^{N−1} exp(βϵ s_i s_{i+1})

Since s_i s_{i+1} = ±1, the identities

e^x = cosh(x) + sinh(x),   cosh(−x) = cosh(x),   sinh(−x) = −sinh(x)

let us write exp(βϵ s_i s_{i+1}) = cosh(βϵ)(1 + s_i s_{i+1} y), where y = tanh(βϵ).
The partition function is now

Q_N = Σ_{s₁=−1}^{+1} ... Σ_{s_N=−1}^{+1} Π_{i=1}^{N−1} (1 + s_i s_{i+1} y) cosh(βϵ)
    = cosh^{N−1}(βϵ) Σ_{s₁=−1}^{+1} ... Σ_{s_N=−1}^{+1} Π_{i=1}^{N−1} (1 + s_i s_{i+1} y)
The product is

(1 + s₁s₂y)(1 + s₂s₃y) ... (1 + s_{N−1}s_N y)

Doing the first sum, Σ_{s₁=−1}^{+1}, the terms linear in s₁ cancel, and the product becomes

2[(1 + s₂s₃y) ... (1 + s_{N−1}s_N y)]

In fact, each time we do a sum we eliminate a term in the product and pick up a factor of two.
The partition function for the one-dimensional Ising chain is just

Q_N = 2^N cosh^{N−1}(βϵ)

Compare this with the partition function for non-interacting spins with energy levels of 0 and ϵ,

Q_N = (2 cosh(βϵ))^N
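Since the result is so simple it is easy to verify by brute force; a minimal sketch (not from the notes; N, β, and ϵ are arbitrary test values) enumerating all 2^N configurations of a small open chain:

    # Sketch: check Q_N = 2^N cosh^{N-1}(beta*eps) for the open 1D Ising
    # chain by direct enumeration of all spin configurations.
    import itertools
    import numpy as np

    N, beta, eps = 8, 0.7, 1.3
    Q_brute = 0.0
    for s in itertools.product([-1, 1], repeat=N):
        E = -eps * sum(s[i] * s[i + 1] for i in range(N - 1))
        Q_brute += np.exp(-beta * E)

    Q_exact = 2**N * np.cosh(beta * eps)**(N - 1)
    print(Q_brute, Q_exact)   # the two agree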
Chapter 7

Critical Phenomena
Figure 7.1: Graph of normalized temperature (T /Tc ) versus normalized density (ρ/ρc ) at the va-
por/liquid coexistence boundary for various substances. From E. A. Guggenheim, J. Chem. Phys.
13, 253 (1945).
so β = 1/2 in mean field theory. Experiments on ferromagnets find that for iron β = 0.37 ± 0.01 and
for nickel β = 0.358 ± 0.003. Obviously mean field theory is not completely wrong but it is also not the
final word.
Next, let’s do a similar analysis for the van der Waals equation near the liquid-gas critical point.
Recall that the equation

(P + an²/V²)(V − bn) = nRT

may be written as

(P/P_c + 3V_c²/V²)(3V/V_c − 1) = 8T/T_c

After some rearranging we find that this may be written as

(2 − η)F = 8τ(η + 1) + 3η³   (*)

where

τ = (T − T_c)/T_c

is the dimensionless temperature. We define the order parameter

η = (ρ − ρ_c)/ρ_c = (V_c − V)/V
where ρ = mN/V is the mass density. The order parameter is the variable that bifurcates at the
critical point; below Tc it has a high value on the liquid branch and a low value on the gas branch of
the coexistence diagram. For the Ising model the corresponding order parameter is η = ⟨s⟩.
Figure 7.2: In the absence of an applied field (F = 0) the order parameter goes as η ∝ (Tc − T )β as it
bifurcates below the critical temperature.
The variable F is the dimensionless applied external field, specifically

F = (P₀(T) − P_c)/P_c,  where P₀(T) = P(T, V = V_c)

is the pressure for which the order parameter vanishes.
For the Ising model we simply have F = H. In summary, the analogous variables are:

⟨s⟩ ⇔ (ρ − ρ_c)/ρ_c = η   (Order Parameter)
H ⇔ (P − P_c)/P_c = F   (External Forcing)
with τ = (T − Tc )/Tc . See Figure 7.2 for a graphical illustration of the definition of the exponent β and
Figure 7.3 for a 3D graph of the equation of state η(τ, F ) in the vicinity of a critical point.
To find the critical exponent β for van der Waals we need to approach the critical point along the "phase boundary," a line that gives η = 0 as we pass the critical point. Using (*) and setting η = 0 we find F = 4τ; holding this value of F, from (*) we get

(2 − η)(4τ) = 8τ(η + 1) + 3η³

or

η³ = −4τη

so

η = 0 or ±2√(−τ)
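A quick numerical check of this pitchfork (a sketch, using a bracketed root finder on the positive branch; SciPy assumed):

    # Sketch: solve (2 - eta)(4 tau) = 8 tau (eta + 1) + 3 eta^3 for small
    # negative tau and check eta -> 2 sqrt(-tau).
    import numpy as np
    from scipy.optimize import brentq

    for tau in [-1e-2, -1e-3, -1e-4]:
        g = lambda eta: (2.0 - eta) * 4.0 * tau - 8.0 * tau * (eta + 1.0) - 3.0 * eta**3
        eta = brentq(g, 1e-12, 1.0)    # positive (liquid) branch
        print(f"tau = {tau:8.1e}:  eta = {eta:.5f},  2 sqrt(-tau) = {2*np.sqrt(-tau):.5f}")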
Figure 7.3: Graph of the equation of state η(τ, F ) in the vicinity of a critical point (only the positive η
surface is shown).
Above the critical temperature (τ > 0) the only solution is η = 0. But below T_c we have a pitchfork bifurcation and the critical exponent is β = 1/2, just as we found with the Ising model. In fact, it turns out that the van der Waals equation of state may be shown to arise from a mean field approximation of the intermolecular attraction.

Critical Exponents

In general, near the critical point the order parameter goes as

η ∝ |τ|^β
Figure 7.4: Graphical illustration for the critical exponents γ and δ on equation of state curves.
where η is the order parameter (e.g., η = ⟨s⟩ for the Ising model) and τ = (T − Tc )/Tc ; Landau theory
and the mean field approximation give β = 1/2. While β is probably the most interesting critical
exponent, there are three others that are studied.
The critical exponent α is defined by the temperature dependence of the heat capacity for F = 0 near T_c,

C_F ∝ |τ|^{−α}
In reality there's an α and an α′ depending on whether we are above or below the critical temperature, but experimentally α ≅ α′.
The critical exponent γ gives the temperature dependence of the susceptibility (or compressibility) for F = 0 near T_c,

(∂η/∂F)_T ∝ |τ|^{−γ}
Again, there's a γ and a γ′ depending on whether we are above or below the critical temperature, but experimentally γ ≅ γ′.
The fourth and final critical exponent, δ, is defined by

η ∝ F^{1/δ}   at T = T_c

which is the power law dependence of the order parameter on the external field near the critical point.
See Figure 7.4 for a graphical illustration for the critical exponents γ and δ on equation of state curves.
Now let’s generalize Landau theory to include an external applied field and compute these new
critical exponents. We now write the thermodynamic potential, say the Gibbs free energy, as
G(η, τ, F) = C₀ + (1/2)C₂τη² + (1/4)C₄η⁴ − C*Fη
where the C’s are positive constants.∗ Notice that the external field energetically favors the case where
F and η have the same sign. The order parameter at equilibrium is obtained by finding the minimum
of G so, as before, we evaluate
( )
∂G
= C2 τ η + C4 η 3 − C∗ F = 0 (∗)
∂η τ,F
We could solve this cubic but we don’t need the explicit solution for the purpose of finding the critical
exponents. The general shape of G is still either a single well or double well potential depending on
whether we are above or below the critical temperature; for F > 0 the largest positive root of this cubic
is the equilibrium state since it is the deeper well.
To evaluate the critical exponent γ we use (*) to evaluate

(∂F/∂η)_τ = (∂/∂η)[(C₂/C*)τη + (C₄/C*)η³] = (C₂/C*)τ + 3(C₄/C*)η²

For T > T_c and F → 0, we know η → 0 (this is just the disordered state). From our earlier analysis we also know that for T < T_c the order parameter goes as η ∝ |τ|^{1/2} near the critical point. In summary, near the critical point,

(∂η/∂F)_τ = [(C₂/C*)τ]^{−1}   for T > T_c
(∂η/∂F)_τ = [(C₂/C*)τ + 3(C₄/C*)η²]^{−1}   for T < T_c
From the second law of thermodynamics we know that the isothermal susceptibility (or compressibility) must be positive, and so we find that both above and below the critical point

(∂η/∂F)_τ ∝ 1/|τ|

so γ = 1 in Landau theory.
Using (*) again but setting T = T_c (τ = 0) immediately gives

η = (C*F/C₄)^{1/3}

so δ = 3 in Landau theory.
Finally, for the critical exponent α we need the heat capacity at constant field. Since dG = f(η)dF − S dT (e.g., for van der Waals dG = V dP − S dT), the entropy is S = −(∂G/∂T)_F and C_F = T(∂S/∂T)_F ∝ (∂²G/∂τ²)_F. Using our expression for G we get

(∂²G/∂τ²)_F = 2C₂η(∂η/∂τ)_F + C₂τ(∂η/∂τ)²_F + C₂τη(∂²η/∂τ²)_F + 3C₄η²(∂η/∂τ)²_F + C₄η³(∂²η/∂τ²)_F − C*F(∂²η/∂τ²)_F
∗ Again we invoke Occam’s razor to justify this simple form for G.
Since the critical exponent is defined at zero field (F = 0), the last term drops out. Above the critical temperature η is constant (and equal to zero) so all the terms in the expression above are zero. Below T_c we know that

η ∝ |τ|^{1/2},  so  (∂η/∂τ)_F ∝ |τ|^{−1/2} ∝ 1/η  and  (∂²η/∂τ²)_F ∝ |τ|^{−3/2} ∝ 1/η³

so each term in (∂²G/∂τ²)_F is constant and thus C_F is a constant. Given the definition C_F ∝ |τ|^{−α}, we have α = 0 for Landau theory.
In summary, Landau theory (and mean field theory) predict that the critical exponents are

α = 0,  β = 1/2,  γ = 1,  δ = 3

Pathria and Beale list experimental data for a variety of physical systems (see Table 12.1) and find the values to be

α = 0.0 − 0.2,  β = 0.30 − 0.36,  γ = 1.0 − 1.4,  δ = 4.0 − 5.0
On the one hand the experimental values are comparable to the theoretical predictions and similar for
vastly different systems (magnets, gas-liquid mixtures, binary alloys, etc.). On the other hand, there
is some disagreement with theory and the values are similar among different systems but not entirely
universal.
Scaling Hypothesis

Recall that a homogeneous function satisfies

f(ax) = a^r f(x)

The scaling hypothesis asserts that near the critical point the free energy is a generalized homogeneous function of τ′ = a^p τ and F′ = a^q F, that is, G(a^p τ, a^q F) = a G(τ, F) for some scaling powers p and q. Near the critical point the order parameter is proportional to the derivative of G with respect to the external field, that is,

η ∝ (∂G/∂F)_τ

Using this observation we have

η(τ′, F′) a^q = a η(τ, F)

or

η(a^p τ, a^q F) = a^{1−q} η(τ, F)
To grasp the meaning of this result, let's consider a numerical example. It turns out that in Landau theory p = 1/2 and q = 3/4. To get round numbers we'll pick a = 16, so the result above gives

η(16^{1/2} τ, 16^{3/4} F) = 16^{1/4} η(τ, F)

or

η(4τ, 8F) = 2η(τ, F)
That is, if we raise the temperature τ by a factor of 4 and also raise the applied field F by a factor of
8 then according to the scaling hypothesis the order parameter will double.
Now let's directly link the scaling coefficients to the critical exponents. Suppose we take a = (−τ)^{−1/p} and set F = 0, so

η(−1, 0) = (−τ)^{−(1−q)/p} η(τ, 0)

The left hand side is a constant, so η(τ, 0) ∝ (−τ)^{(1−q)/p}; that is, β = (1 − q)/p. Similar manipulations give the other critical exponents in terms of p and q. These give two relations among the critical exponents so, for example, given β and γ we could find α and δ. A simple form for the two relations is

α + 2β + γ = 2
α + β(1 + δ) = 2
Recalling that the critical exponents in Landau theory are α = 0, β = 1/2, γ = 1, and δ = 3, we see that they agree with this result from the scaling hypothesis. In fact these scaling hypothesis relations appear to be exact. For example, the Ising model in two dimensions can be solved exactly via a rather long calculation (by Onsager) and one finds that

α = 0,  β = 1/8,  γ = 7/4,  δ = 15

which satisfy the two scaling hypothesis relations and give coefficients of p = 1/2 and q = 15/16.
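Both sets of exponents can be checked against the two relations in a couple of lines (a trivial sketch):

    # Sketch: verify alpha + 2 beta + gamma = 2 and alpha + beta(1 + delta) = 2
    # for the Landau and exact 2D Ising exponents.
    for name, (alpha, beta, gamma, delta) in {
        "Landau":   (0.0, 0.5, 1.0, 3.0),
        "2D Ising": (0.0, 1/8, 7/4, 15.0),
    }.items():
        print(name, alpha + 2*beta + gamma, alpha + beta*(1 + delta))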
Finally, the universality of the scaling hypothesis result has its origin in the long-ranged, self-similar
correlations that arise near a critical point, which is our last topic in the theory of critical phenomena.
Recall that the isothermal compressibility of a fluid diverges near the critical point,

K_T ∝ 1/|T − T_c|^γ

where this critical exponent is γ ≈ 1.2 for water. At the critical point the compressibility diverges, so it takes no energy to expand or contract an element of fluid; the density undergoes wild fluctuations which are macroscopically visible in a laboratory volume. Because these fluctuations turn the vapor murky the effect is known as critical opalescence; see www.youtube.com/watch?v=OgfxOl0eoJ0 for example.
Analogously, the isothermal susceptibility of a ferromagnet near the critical point goes as
χ_T ∝ (∂⟨s⟩/∂H) ∝ 1/|T − T_c|^γ
where γ ≈ 1.3 for iron. From simulations of the Ising model we see that near the critical point we find
“islands” of all sizes, from single outcroppings to huge continents of aligned spins (with corresponding
“lakes” of the opposite alignment); see Figure 7.5. These structures at all sizes are analogous to the
density variations that create critical opalescence.
Correlations
To investigate these fluctuations near the critical point it makes sense to define a local order parameter, η(⃗r), such as the local density at position ⃗r for liquid-vapor condensation. We define the correlation function as

g(⃗r, ⃗r′) = ⟨η(⃗r)η(⃗r′)⟩ − ⟨η(⃗r)⟩⟨η(⃗r′)⟩
† Assuming that the two locations or the two times were macroscopically distinct.
Figure 7.5: Simulation results for the Ising model at various temperatures above and below the critical
point (with Tc = 1). Systems have 512 by 512 sites with periodic boundary conditions; black/white
dots indicate up/down spins.
If η(⃗r) and η(⃗r′) are statistically independent then g = 0. We can interpret this average as a time average, that is,

⟨f(η(⃗r))⟩ = lim_{t→∞} (1/t) ∫₀^t f(η(⃗r, t′)) dt′
Since the order parameter is a mechanical variable (such as number of particles or number of up spins)
there’s no difficulty in defining the instantaneous value, η(⃗r, t).‡
We still consider our equilibrium system to be statistically homogeneous (though not instantaneously homogeneous), so we have ⟨η(⃗r)⟩ = ⟨η(⃗r′)⟩ and g(⃗r, ⃗r′) = g(|⃗r − ⃗r′|) = g(∆r).
This correlation function is easy to measure in computer simulations and can be probed indirectly by neutron scattering or light scattering in the laboratory. In both simulations and experiments one finds that both just above and just below the critical point,

g(∆r) ∝ (1/∆r) e^{−∆r/ξ}

for 3D systems. More precisely, we should write

g(∆r) ∝ e^{−∆r/ξ}/(∆r)^{d−2+η̃}

where d is the dimensionality and η̃ is yet another critical exponent,§ but one finds that η̃ ≈ 0 so we won't worry about it.
The correlation length, ξ, is not constant; in fact it increases and diverges as we approach the critical point. Specifically,

ξ ∝ 1/|T − T_c|^ν   (T ≈ T_c)
The critical exponent, ν, is measured to be ν ≈ 0.5 − 0.8 (see Table 12.1 in Pathria and Beale). At
T = Tc , ξ → ∞, e−∆r/ξ → 1 so the correlation function g(r) ∝ 1/r in 3D systems, so it is very
long-ranged.
Ginzburg-Landau Theory
Let’s extend Landau theory to deal with spatial fluctuations of the order parameter. We now define
the local free energy density as,
G(⃗r) = C₀ + (1/2)C₂τη(⃗r)² + (1/4)C₄η(⃗r)⁴ − C*F(⃗r)η(⃗r) + (1/2)C∇(∇η)²
This should be familiar by now except for the last term, which depends on the spatial gradient of the order parameter. This term accounts for the fact that it is energetically favorable for the system to be uniform (i.e., to have zero gradients); it is constructed as the simplest possible form that gives this behavior.
At equilibrium the total free energy,

G_Σ = ∫ d⃗r G(⃗r)

is a minimum. We look for this minimum by expanding about the uniform state,

η(⃗r) = η₀ + η₁(⃗r)
F(⃗r) = F₀ + F₁(⃗r)

where η₀ = ⟨η(⃗r)⟩ and F₀ = ⟨F(⃗r)⟩ are constants. Keeping only zero-th order terms we have (with the primed constants being the rescaled versions of the C's above)

C₂′τη₀ + C₄′η₀³ = F₀

‡ Defining instantaneous values for non-mechanical variables, such as temperature, is a more delicate issue.
§ Pathria and Beale write this exponent as just η; see equation (12.12.4).
which is our previous result for Landau theory (i.e., η₀ is zero for T > T_c and has a pitchfork bifurcation for T ≤ T_c). At first order in perturbation theory,

C₂′τη₁(⃗r) + 3C₄′η₀²η₁(⃗r) − C∇′∇²η₁(⃗r) = F₁(⃗r)

or

(∇² − ξ^{−2}) η₁(⃗r) = −(1/C∇′) F₁(⃗r)

which is the inhomogeneous Helmholtz equation with

ξ = √( C∇′/(C₂′τ + 3C₄′η₀²) )

Notice that for T > T_c we have η₀ = 0 while for T < T_c we have η₀² ∝ |τ|, so both above and below the critical point we find

ξ ∝ 1/|τ|^{1/2}

In a moment we'll see that this ξ is indeed the correlation length, so Ginzburg-Landau theory predicts the critical exponent ν = 1/2.
To finish the derivation, let's pin the value of the order parameter at location ⃗r′ by setting F₁(⃗r) = δ(⃗r − ⃗r′). The solution to this Green's function problem for the Helmholtz equation is (in 3D)

η̂₁(⃗r) ∝ (1/|⃗r − ⃗r′|) e^{−|⃗r−⃗r′|/ξ}   (fixed η(⃗r′))
This means that in the absence of an applied field the correlation function is

g(⃗r, ⃗r′) ∝ η̂₁(⃗r) ∝ (1/|⃗r − ⃗r′|) e^{−|⃗r−⃗r′|/ξ}

where the last step is justified by the fact that η̂₁(⃗r) gives the value of the order parameter for fixed η(⃗r′).
Chapter 8

Classical Non-Ideal Gases
We now continue our study of systems with interacting particles (i.e., non-ideal) but in a new direction.
Up to now we’ve considered mainly the universal behavior that we find near the critical point. We now
turn to the more mundane problem of the specific behavior of specific systems, such as dense gases. The
advantage of studying the universal behavior was that since it was universal we could pick the simplest
systems, such as the Ising model, to study the behavior near the critical point. The disadvantage is
that we’re restricted to that small area of phase space.
In this chapter we’ll consider some basic approximations for treating dense gases, essentially finding
corrections to the ideal gas equation of state. We’ll also see how some of the ideas may be extended
to simple liquids. For the temperatures and pressures of interest the molecular interactions are well approximated by classical potentials, so we'll restrict our attention to the classical canonical ensemble. Pathria and Beale consider more advanced techniques, including some quantum mechanical calculations, in their Chapter 10.
For a classical gas the partition function separates into kinetic and potential energy parts. The kinetic part gives the ideal gas result,

Q_N^{ideal} = (V^N/N!h^{3N}) (∫ d⃗p exp{−β|⃗p|²/2m})^N = V^N/(N!λ^{3N})

where λ = h/√(2πmkT) is the thermal wavelength. Using this result, we may write the partition function for the non-ideal system as

Q_N = Q_N^{ideal} (1/V^N) ∫ d⃗r₁ ... ∫ d⃗r_N exp{−βV(⃗r₁, ..., ⃗r_N)} = Q_N^{ideal} I_N/V^N
Unfortunately the potential energy integral, I_N, has a much more complex form, in general, than the kinetic energy, so we'll have to make some approximations or limit ourselves to some simplified intermolecular potentials.
Hard Sphere Gas

First consider hard spheres, for which the probability of a point in phase space, P(⃗r₁, ..., ⃗r_N), is zero if any particles have positions that overlap; all other points in phase space are equally probable. This might seem like it would be difficult to work with, but it turns out to be simple if we assume that the gas is not too dense.
Recall that we want to evaluate

I_N = ∫ d⃗r₁ ... ∫ d⃗r_N exp{−βV(⃗r₁, ..., ⃗r_N)}

We can do the integral for the position of the first particle easily since it may be anywhere within the volume, so ∫d⃗r₁ = V. The second particle may be anywhere within the volume except at positions where it would overlap with the first particle. Since the particles have volume v₀ we have ∫d⃗r₂ = V − v₀. For the next particle we will assume that the system is not so dense that we have to worry about geometry corrections that depend on the positions of the first two particles; in brief, we assume that the available volume for the third particle is ∫d⃗r₃ = V − 2v₀. Continuing in this way we find

I_N = (V)(V − v₀)(V − 2v₀) ... (V − (N−1)v₀) = Π_{i=0}^{N−1} (V − iv₀)
so

ln I_N = Σ_{i=0}^{N−1} ln(V − iv₀)

The configurational contribution to the pressure is then

P = kT (∂/∂V) ln I_N = kT Σ_{i=0}^{N−1} 1/(V − iv₀) ≈ (kT/V) Σ_{i=0}^{N−1} (1 + iv₀/V)
  = (NkT/V) + kT (v₀/V²) N(N−1)/2
  ≈ (NkT/V)(1 + bN/V) ≈ NkT/(V − bN)

where b = v₀/2. Notice that we get the van der Waals equation of state with only repulsion (i.e., van der Waals attraction coefficient a = 0).
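A minimal sketch (parameter values are arbitrary) comparing the exact sum over excluded volumes with the van der Waals repulsion form at a moderate density:

    # Sketch: compare kT * Sum_i 1/(V - i v0) with N kT/(V - bN), b = v0/2.
    import numpy as np

    kT, N, v0 = 1.0, 1000, 1.0e-3
    V = 10.0                       # so N*v0/V = 0.1 (fairly dilute)
    i = np.arange(N)
    P_sum = kT * np.sum(1.0 / (V - i * v0))
    P_vdw = N * kT / (V - 0.5 * v0 * N)
    print(P_sum, P_vdw)            # nearly equal at this density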
Now assume the potential energy is a sum of pair interactions,

V(⃗r₁, ..., ⃗r_N) = Σ_{i=1}^{N} Σ_{j=i+1}^{N} υ(|⃗r_i − ⃗r_j|) = (1/2) Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} υ(|⃗r_i − ⃗r_j|)
This is a good approximation for simple molecules; far more complicated intermolecular models exist
but they’re mostly used in computer simulations.
Theory and measurement indicate that the potential υ(r) has two parts: a weak attraction due to dipole-dipole interactions and a strong but shorter-ranged repulsion due to Fermi exclusion of the electron clouds. From E&M we know that the dipole-dipole potential goes as r^{−6}, while the repulsion term is so short-ranged that its precise form is not too important; for convenience it's usually taken to be r^{−12}. We'll write this model potential, known as the Lennard-Jones potential, as

υ_LJ(r) = 4ϵ[(σ/r)^{12} − (σ/r)^6]
where the parameter σ indicates the range of the potential and ϵ indicates the strength. See Figure 8.1
for a graph of the potential; note that the minimum of the potential well is at rmin = 21/6 σ ≈ 1.12σ
and υLJ (rmin ) = −ϵ.
CHAPTER 8. CLASSICAL NON-IDEAL GASES 105
The Lennard-Jones potential is commonly used in molecular dynamics simulations in which the motion of a large collection of molecules is computed by calculating the intermolecular forces. In these simulations one typically uses a truncated form of the potential, which we may write as

υ_LJt(r) = { υ_LJ(r) − υ_LJ(r_c)   for r ≤ r_c
             0                    for r > r_c }
Since I can’t find the real name for it we’ll call this the Garcia model.
Finally, the repulsive part of the potential is sometimes approximated as exponential, so

υ_B(r) = 4ϵ[e^{−r/σ} − (σ/r)^6]

which is known as the Buckingham potential.
Mean Field (Homogeneous) Approximation In the spirit of mean field theory we write the total potential energy as

U_pot = (1/2) N ⟨υ⟩_T

where we define ⟨υ⟩_T as the average potential energy of a given (tagged) particle due to the presence of all the other particles; the factor of 1/2 avoids double counting the pair interactions. We can write this average as

⟨υ⟩_T = ∫ d⃗r υ(r) ρ_T(⃗r)

where ρ_T(⃗r) is the number density of particles located at a position ⃗r relative to the tagged particle.∗ In the homogeneous approximation we take ρ_T(⃗r) = ρ = N/V, that is, a constant equal to the total number density. For intermolecular potentials with a strong repulsion, such as a hard core repulsion, we improve this approximation by changing it to

ρ_T(⃗r) = { 0     for |⃗r| < r_min
            N/V   for |⃗r| ≥ r_min }

where r_min is the minimum separation between particles due to the strong repulsion. The homogeneous approximation is reasonable for dilute systems; later we'll discuss corrections for more realistic forms of ρ_T(⃗r).
As an example of the homogeneous approximation, let's evaluate ⟨υ⟩_T for the Garcia model (hard core plus dipole attraction; that is, υ_G(r) = ∞ for r < r_min and υ_G(r) = −ϵ(r_min/r)^6 for r ≥ r_min). The integrals are easy to do:

⟨υ⟩_T = ∫₀^{2π} dϕ ∫₀^{π} sin θ dθ ∫_{r_min}^{∞} r² dr υ_G(r) (N/V)
      = −4πϵ r_min^6 (N/V) ∫_{r_min}^{∞} dr r^{−4}
      = −(4/3)πϵ r_min³ (N/V)

From this, the total potential energy is

U_pot = (1/2) N ⟨υ⟩_T = −aN²/V

where a = (2/3)πϵ r_min³. Notice that the factor a increases if ϵ increases (strength of the potential increases) or r_min increases (range of the potential increases).
To relate this result to the pressure we use
( ) ( ) ( )
∂A ∂U ∂S
P =− =− +T (∗)
∂V T,N ∂V T,N ∂V T,N
For the second term in (*), that is the entropy contribution, we can use our previous result for the hard
core model equation of state,
N kT
Prepulsion =
V − bN
and putting these two pieces together gives the full van der Waals EoS.
∗ For convenience we can simply take the origin to be at the center of the tagged particle.
Figure 8.2: (Left) Illustration of hard sphere particles at higher densities; notice that there is ordering
of the positions even in the absence of an attraction force. (Right) Pair correlation function in a dense
Lennard-Jones fluid.
At higher densities the structure around the tagged particle matters; it is described by the pair correlation function g(r), so that ρ_T(⃗r) = (N/V)g(r). A simple approximation is

g(r) ≈ e^{−βυ(r)}

which is a good approximation for moderate densities. Notice that for the hard core potential this is the same as the homogeneous approximation, but for more complicated potentials it includes more structure. Using it we have

U = (3/2)NkT + (1/2)(N²/V) ∫ d⃗r υ(r) e^{−βυ(r)}

from which we can formulate the equation of state.
Because this approach is only valid for moderate densities we’ll express our equation of state in the
form of a virial expansion, that is, an expansion in ρ = N/V ,
P = ρkT (1 + B2 (T )ρ + B3 (T )ρ2 + . . .)
and find an expression for the first expansion coefficient, B2 (T ), in terms of the potential υ(r).
We'll be using the Helmholtz free energy again, and since we want to find the difference from the ideal gas law we'll write

∆U = U − U_ideal = U − (3/2)NkT   and   ∆A = A − A_ideal

To find ∆A we use

U = A + TS = A − T(∂A/∂T)_{V,N} = (∂(βA)/∂β)_{V,N}

so

∆U = (∂(β∆A)/∂β)_{V,N}
Integrating this relation over β, using our expression for ∆U, gives

β∆A = −(1/2)(N²/V) ∫ d⃗r [e^{−βυ(r)} + C]

where C is a constant of integration; we know that C = −1 since ∆A = 0 if υ(r) = 0 (ideal gas limit).
We can now get the deviation from the ideal gas pressure, ∆P , from
( ) ∫ [ ]
∂ 1 N2 −βυ(r)
∆P = − ∆A = − d⃗
r e − 1
∂V T,N 2 V2
so ( ∫ [ ])
1
P = Pideal + ∆P = ρkT 1 − ρ d⃗r e−βυ(r) − 1
2
Comparing with the virial expansion we see that
∫ [ ] ∫
1 −βυ(r) 1
B2 (T ) = − d⃗r e −1 =− d⃗rf (r)
2 2
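As an application, B₂(T) for the Lennard-Jones potential can be computed numerically; a sketch in reduced units (σ = ϵ = k = 1 assumed, with a finite upper cutoff on the radial integral):

    # Sketch: B2(T) = -2 pi Integral r^2 (exp(-v(r)/T) - 1) dr for the
    # Lennard-Jones potential in reduced units.
    import numpy as np
    from scipy.integrate import quad

    def v_lj(r):
        return 4.0 * (r**-12 - r**-6)

    def B2(T):
        integrand = lambda r: r**2 * (np.exp(-v_lj(r) / T) - 1.0)
        # integrand -> -r^2 at small r (hard wall), -> 0 quickly at large r
        val, _ = quad(integrand, 1e-6, 50.0, limit=200)
        return -2.0 * np.pi * val

    for T in [0.8, 1.3, 3.0, 10.0]:
        print(f"T = {T:5.2f}:  B2 = {B2(T):+.4f}")

Note that B₂ is negative at low temperature (attraction dominates) and positive at high temperature (repulsion dominates).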
Chapter 9

Fluctuations and Stochastic Processes

For quantities such as energy and volume, which have a purely mechanical significance as
well as a thermodynamic one, the concept of fluctuations is self-explanatory, but it needs
more precise treatment for quantities such as entropy and temperature, whose definition
necessarily involves considering the body over finite intervals of time. For example, let
S(U, V ) be the equilibrium entropy of the body as a function of its (mean) energy and
(mean) volume. By the fluctuation of entropy we mean the change in the function S(U, V ),
formally regarded as a function of the exact (fluctuating) values of the energy and volume.∗
Consider an isolated system at equilibrium, divided into a small part (system 1) and a large part (system 2). Call S*_Σ the entropy at equilibrium, that is,

S*_Σ = S₁(Ū₁, V̄₁) + S₂(Ū₂, V̄₂) = S̄₁ + S̄₂

where the bars indicate equilibrium (mean) values.

∗ Statistical Physics, Section 112

For other values of the energies and volumes the system has a lower entropy, that is,

S_Σ = S₁(U₁, V₁) + S₂(U₂, V₂) ≤ S*_Σ

Using S = k ln Γ, the relative probability of such a fluctuation is

P(S_Σ) ∝ Γ_Σ/Γ*_Σ = e^{S_Σ/k}/e^{S*_Σ/k} = e^{(S_Σ − S*_Σ)/k} = e^{∆S_Σ/k}
We'll be OK without having to find the normalization, so don't worry that the above is "goes as" rather than "equals."
Now we want to analyze ∆S_Σ in terms of the states of parts 1 and 2 of the system; specifically, ∆S_Σ = ∆S₁ + ∆S₂. Since the fluctuations are small we can use the differential version of the first and second laws, dU₂ = T₂dS₂ − P₂dV₂, or

∆S₂ = (1/T₂)(∆U₂ + P₂∆V₂)
There’s a similar expression for ∆S1 but we won’t need it.
We can now write
$$ \Delta S_\Sigma = \Delta S_1 + \frac{1}{T_2}\left(\Delta U_2 + P_2\Delta V_2\right) $$
and make the following two observations: 1) Since U = U1 + U2 is fixed, ∆U2 = −∆U1 and similarly
∆V2 = −∆V1 . 2) Since system 2 is very large T2 ≈ T ∗ and P2 ≈ P ∗ (i.e., fluctuations only have a
significant effect in the smaller part). From these observations,
$$ \Delta S_\Sigma = \Delta S_1 - \frac{1}{T^*}\Delta U_1 - \frac{P^*}{T^*}\Delta V_1 \qquad (*) $$
We now have $\Delta S_\Sigma$ in terms of fluctuations in system 1; the next step is to replace $\Delta U_1$ with an expression
involving only temperature, pressure, etc.
We now write $U_1(S_1, V_1)$ and Taylor expand it about the equilibrium state,
$$ U_1(S_1,V_1) = U_1(\bar{S}_1,\bar{V}_1) + \overline{\left(\frac{\partial U_1}{\partial S_1}\right)_V}\Delta S_1 + \overline{\left(\frac{\partial U_1}{\partial V_1}\right)_S}\Delta V_1 $$
$$ \qquad + \frac{1}{2}\left[\overline{\left(\frac{\partial^2 U_1}{\partial S_1^2}\right)_V}(\Delta S_1)^2 + 2\,\overline{\left(\frac{\partial^2 U_1}{\partial S_1\partial V_1}\right)}(\Delta S_1)(\Delta V_1) + \overline{\left(\frac{\partial^2 U_1}{\partial V_1^2}\right)_S}(\Delta V_1)^2\right] + \dots $$
where the overbar indicates that the quantity is evaluated at equilibrium. For example,
$$ \overline{\left(\frac{\partial U_1}{\partial S_1}\right)_V} = T^* \qquad\qquad \overline{\left(\frac{\partial U_1}{\partial V_1}\right)_S} = -P^* $$
To deal with the quadratic term in the square brackets, use the Taylor expansion
$$ T_1 = \left(\frac{\partial U_1}{\partial S_1}\right)_V = \overline{\left(\frac{\partial U_1}{\partial S_1}\right)_V} + \overline{\left(\frac{\partial^2 U_1}{\partial S_1^2}\right)_V}\Delta S_1 + \overline{\left(\frac{\partial^2 U_1}{\partial V_1\partial S_1}\right)}\Delta V_1 + \dots $$
We see that $\mathcal{P}(\Delta T, \Delta V)$ factors into a pair of Gaussian distributions‡ of the form,
$$ \mathcal{P}(x) = A\,e^{-x^2/2\sigma^2} \qquad\text{(Gaussian distribution)} $$
where $A$ is the normalization constant and $\sigma$ is the standard deviation. In this form for the Gaussian the mean is $\bar{x} = 0$ and the variance is $\overline{(x-\bar{x})^2} = \overline{x^2} = \sigma^2$, so
$$ \overline{\Delta T^2} = \frac{kT^2}{C_V} \qquad\qquad \overline{\Delta V^2} = kTK_TV $$
where $K_T$ is the isothermal compressibility.
Finally, since the distribution factors into a pair of distributions for $\Delta T$ and $\Delta V$, these variables are independent (i.e., uncorrelated) so $\overline{\Delta T\,\Delta V} = 0$. By the way, this result for the variances in temperature and volume is unchanged if we allow $N$ to vary as well. Pathria and Beale discuss the variances of other variables, such as $\overline{\Delta S^2}$, $\overline{\Delta P^2}$, etc.
Example: (a) Use the result for $\overline{\Delta V^2}$ to obtain $\overline{\Delta N^2}$ for a fixed volume $V$. (b) Evaluate $\overline{\Delta N^2}$ for a dilute gas.
Solution: (a) The derivation assumed constant $N$, allowing $V$ to vary. The fluctuation in the volume per particle is,
$$ \langle(\Delta(V/N))^2\rangle = \frac{\langle\Delta V^2\rangle}{N^2} = \frac{kTK_TV}{N^2} $$
This fluctuation is the same whether $N$ is fixed and $V$ varies or vice versa. When $V$ is constant then $\Delta(V/N) = V\Delta(1/N) \approx -(V/N^2)\Delta N$. Using this we have,
$$ \langle(\Delta(V/N))^2\rangle = \frac{V^2}{N^4}\langle\Delta N^2\rangle $$
so
$$ \langle\Delta N^2\rangle = \frac{N^4}{V^2}\left(\frac{kTK_TV}{N^2}\right) = \frac{kTK_TN^2}{V} $$
(b) For an ideal gas $K_T = 1/P$ so
$$ \langle\Delta N^2\rangle = \frac{kTN^2}{PV} = N $$
Note that we arrive at the same result by assuming that N is Poisson distributed (see “Poisson distri-
bution” article in Wikipedia).
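This is easy to check numerically. In the sketch below (not from the notes; all values arbitrary) each ideal gas particle independently lands in a subvolume with probability equal to the volume fraction, so the count is binomial and, for a small fraction, essentially Poisson with $\langle\Delta N^2\rangle = \langle N\rangle$.

```python
import numpy as np

rng = np.random.default_rng(0)
Ntotal, trials, frac = 100_000, 5000, 0.01   # particles, samples, volume fraction

# Each particle is in the subvolume with probability frac, independent of the
# others, so the count N is binomial; for frac << 1 it is essentially Poisson.
counts = rng.binomial(Ntotal, frac, size=trials)
print("mean     <N>    =", counts.mean())    # about 1000
print("variance <dN^2> =", counts.var())     # also about 1000
```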
Let's finish by putting some values into these expressions to get a sense of the magnitude of the fluctuations. For a Gaussian distribution the probability that a random value is $3\sigma$ above the mean is about 0.1% and the probability that it is $9\sigma$ above the mean is about $10^{-19}$. The heat capacity of one milliliter§ of water at room temperature is about 4 Joules per Kelvin so for temperature fluctuations,
$$ \sigma_T = \sqrt{\overline{\Delta T^2}} = \sqrt{\frac{kT^2}{C_V}} \approx 5\times10^{-10}\ \text{Kelvin} $$
‡ Also known as the normal distribution.
§ One milliliter equals one cubic centimeter equals $10^{-6}$ m$^3$.
Even if you've been watching that glass of water since the start of the Universe (about $10^{18}$ seconds) you're not going to see a corner of it spontaneously start to boil (or even get noticeably warm). This result isn't surprising since, in general, the ratio of a fluctuation to its mean value goes as $1/\sqrt{N}$, so for macroscopic volumes where $N$ is on the order of Avogadro's number these deviations are negligible.
On the other hand, at microscopic scales they are significant. Consider a yoctoliter¶ of water at room temperature; for this volume the standard deviation of temperature fluctuations is,
$$ \sigma_T \approx \left(5\times10^{-10}\ \text{Kelvin}\right)\times\sqrt{\frac{10^{-6}}{10^{-27}}} \approx 15\ \text{Kelvin} $$
This microscopic volume holds only a few dozen water molecules so the fluctuations in temperature are noticeable.
For a Brownian particle in equilibrium with the surrounding fluid, equipartition gives
$$ \langle\tfrac{1}{2}mv^2\rangle = \tfrac{3}{2}kT $$
At time $t = n$ the particle has taken $n$ steps, with $n_+$ steps to the right and $n_- = n - n_+$ steps to the left. The probability of taking $n_+$ steps to the right is
$$ P_n(n_+) = \left(\frac{1}{2}\right)^n\frac{n!}{n_+!(n-n_+)!} $$
The problem is similar to finding the probability of getting exactly n+ “heads” when flipping a fair coin
n times. Note that we could do the more general case where the probability of taking a step to the
right is p+ and to the left is p− = (1 − p+ ) for which we’d have,
$$ P_n(n_+) = \frac{n!}{n_+!(n-n_+)!}\,p_+^{n_+}(1-p_+)^{n-n_+} $$
But for simplicity we’ll stay with the symmetric case where p+ = 1/2.
The position of the particle is $m = n_+ - n_- = 2n_+ - n$ so the probability of the particle being at position $m$ is
$$ P_n(m) = \left(\frac{1}{2}\right)^n\frac{n!}{\left(\frac{n+m}{2}\right)!\left(\frac{n-m}{2}\right)!} $$
We can now compute a few average quantities, starting with the average position,
$$ \langle m\rangle = \sum_{m=-n}^{n} mP_n(m) $$
For this calculation it will actually be easier to work with $n_+$ instead of $m$ and convert using $\langle m\rangle = 2\langle n_+\rangle - n$.
Remember that since taking an average is a linear operation (either summing or integrating over possible
values) we have, in general,
$$ \langle af(x) + bg(x)\rangle = a\langle f(x)\rangle + b\langle g(x)\rangle $$
where the average is over the probability P(x).
To find $\langle n_+\rangle$ we use the binomial expansion,
$$ (p+q)^n = \sum_{n_+=0}^{n}\frac{n!}{n_+!(n-n_+)!}p^{n_+}q^{n-n_+} = q^n + npq^{n-1} + \dots + np^{n-1}q + p^n $$
We now use a common trick in manipulating these types of series and differentiate both sides,
$$ \frac{d}{dp}(p+q)^n = \frac{d}{dp}\sum_{n_+=0}^{n}\frac{n!}{n_+!(n-n_+)!}p^{n_+}q^{n-n_+} $$
so
$$ n(p+q)^{n-1} = \sum_{n_+=0}^{n} n_+\frac{n!}{n_+!(n-n_+)!}p^{n_+-1}q^{n-n_+} $$
Setting $p = q = 1/2$,
$$ n = \sum_{n_+=0}^{n} n_+\frac{n!}{n_+!(n-n_+)!}\left(\frac{1}{2}\right)^{n-1} = 2\sum_{n_+=0}^{n} n_+P_n(n_+) $$
so
$$ \langle n_+\rangle = \sum_{n_+=0}^{n} n_+P_n(n_+) = \frac{n}{2} $$
which is not surprising since, on average, we take $n/2$ steps to the right and $n/2$ steps to the left. From this result we directly find that $\langle m\rangle = 0$, which is also no surprise. The more interesting result comes from repeating the differentiation trick, which after a little algebra gives $\langle n_+^2\rangle = n(n+1)/4$ so
$$ \langle m^2\rangle = 4\langle n_+^2\rangle - 4n\langle n_+\rangle + n^2 = n $$
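These two moments are easy to confirm by direct simulation; here is a minimal sketch (my own, not part of the notes) with arbitrary values for $n$ and the number of walkers.

```python
import numpy as np

rng = np.random.default_rng(1)
n, walkers = 1000, 50_000                        # steps per walk, number of walks

steps = rng.choice([-1, 1], size=(walkers, n))   # each step is +1 or -1
m = steps.sum(axis=1)                            # final position of each walker
print("<m>   =", m.mean())                       # should be near 0
print("<m^2> =", (m**2).mean())                  # should be near n = 1000
```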
Simple Diffusion
To make a closer connection between the random walk model and the diffusive, random motion of a Brownian particle we'll formulate the probability distribution $P_n(m)$ in the limit where $n$ is large. Naturally, the first step is to use Stirling's approximation for the factorials,
$$ \ln n! \approx n\ln n - n + \frac{1}{2}\ln 2\pi n $$
Note that we're using one more term than usual. Applying this to our earlier result for $P_n(m)$ gives
$$ \ln P_n(m) \approx -\frac{m^2}{2n} + \frac{1}{2}\ln(2/\pi n) $$
or
$$ P_n(m) \propto \sqrt{\frac{1}{n}}\,e^{-m^2/2n} $$
n
Not surprisingly, after all those approximations we have to multiply by the appropriate constant to
normalize the distribution.†† Doing so yields,
$$ P_n(m) = \sqrt{\frac{1}{2\pi n}}\,e^{-m^2/2n} $$
†† We normalize by requiring that the integral of Pn (m) from m = −∞ to ∞ equals one.
which is a Gaussian distribution with mean $\langle m\rangle = 0$ and variance $\langle(m-\langle m\rangle)^2\rangle = \langle m^2\rangle = n$. Passing from discrete to continuous space ($m \to x$) and from discrete steps to continuous time ($n \to t$) we obtain,
$$ \mathcal{P}(x,t) = \sqrt{\frac{1}{4\pi Dt}}\,e^{-x^2/4Dt} \qquad (*) $$
where D is the diffusion coefficient. This coefficient is implicit in the assumption in the random walk
model that the steps are unit length and occur at unit times. The mean and variance of this distribution
are,
$$ \langle x\rangle = \int_{-\infty}^{\infty}dx\;x\,\mathcal{P}(x,t) = 0 $$
$$ \langle(x-\langle x\rangle)^2\rangle = \int_{-\infty}^{\infty}dx\;(x-\langle x\rangle)^2\,\mathcal{P}(x,t) = 2Dt $$
so the r.m.s. displacement is $\sqrt{\langle(x-\langle x\rangle)^2\rangle} = \sqrt{\langle x^2\rangle} = \sqrt{2Dt}$.
You may already be familiar with the fact that this Gaussian function (*) is the solution of the diffusion equation,
$$ \frac{\partial}{\partial t}\mathcal{P}(x,t) = D\frac{\partial^2}{\partial x^2}\mathcal{P}(x,t) $$
for the initial condition,
$$ \mathcal{P}(x,0) = \delta(x) $$
and no boundary conditions (i.e., $x$ extends from $-\infty$ to $\infty$). From inspection of (*), the amplitude of $\mathcal{P}(x,t)$ at $x = 0$ diverges as $t \to 0$ while the standard deviation goes to zero, which verifies that the Gaussian satisfies the Dirac delta function initial condition.
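This claim is also easy to check numerically. The sketch below (my own illustration, not from the notes) marches the diffusion equation forward with a simple finite difference scheme, starting from the Gaussian (*) at time $t_0$, and compares the result against (*) at a later time; the grid and time step are arbitrary but chosen to satisfy the stability condition $D\Delta t/\Delta x^2 < 1/2$.

```python
import numpy as np

D, dx, dt = 1.0, 0.1, 0.004        # note D dt / dx^2 = 0.4 < 1/2 for stability
x = np.arange(-20.0, 20.0, dx)
t0, t1 = 1.0, 2.0

def gaussian(x, t):
    """The analytic solution (*) of the diffusion equation."""
    return np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

# Forward-Euler time stepping with a centered second difference for the
# Laplacian; the end points stay fixed at their (essentially zero) values.
P = gaussian(x, t0)
for _ in range(int(round((t1 - t0) / dt))):
    P[1:-1] += D * dt / dx**2 * (P[2:] - 2.0 * P[1:-1] + P[:-2])

print("max |numerical - analytic| =", np.abs(P - gaussian(x, t1)).max())
```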
All of the analysis may easily be extended to three-dimensional systems by assuming that the motion in each of the $x$, $y$, and $z$ directions is independent. From this,
$$ \mathcal{P}(\vec{r},t) = \mathcal{P}(x,t)\,\mathcal{P}(y,t)\,\mathcal{P}(z,t) = \left(\sqrt{\frac{1}{4\pi Dt}}\right)^3 e^{-(x^2+y^2+z^2)/4Dt} = (4\pi Dt)^{-3/2}e^{-r^2/4Dt} $$
and
$$ \frac{\partial}{\partial t}\mathcal{P}(\vec{r},t) = D\nabla^2\mathcal{P}(\vec{r},t) $$
is the corresponding three-dimensional diffusion equation.
One last comment: Notice that in this section we, for the first time in this course, introduced the
element of time. Equilibrium statistical mechanics is time-less, not in the conventional meaning of the
term but literally; at equilibrium it is meaningless to speak about the past or the future. With the
study of Brownian motion we are entering the realm of non-equilibrium statistical mechanics.
A Brownian particle suspended in a fluid is subject to fluctuations. From Newton's second law of motion we know that the motion of the Brownian particle (mass $M$) is given by,
$$ M\frac{d\vec{v}}{dt} = \vec{F} = \vec{F}_{\rm drag}(\vec{v}) + \vec{F}_{\rm random}(t) $$
For large objects only the drag force is significant but for Brownian particles we must also consider the
random force due to fluctuations. As before, we assume the Brownian particle is neutrally buoyant,
that is, the buoyant force exactly balances gravity; the main simplification is that we won’t have to
consider the falling or rising motion due to these forces (although that’s certainly possible to include in
this framework).
For a spherical Brownian particle of radius $a$ the drag force is given by Stokes' law,
$$ \vec{F}_{\rm drag} = -6\pi\eta a\,\vec{v} = -\frac{1}{B}\vec{v} $$
where $\eta$ is the viscosity of the fluid suspending the Brownian particle. The mobility, $B = 1/(6\pi\eta a)$, is defined as the terminal velocity for a particle that is pulled by an external force of unit magnitude. Note that the smaller the particle or the lower the viscosity, the higher the mobility.
The random force is, on average, zero but we cannot neglect it since it is the source of the Brownian
motion. The random force has a non-zero variance, but instead of tackling that question directly we’ll
take an easier, if indirect route. By the way, Pathria and Beale make the distinction between a time-
average (indicated by overbar, f ) and ensemble-average (indicated by angle braces, ⟨f ⟩). I won’t bother
with this subtlety and will use the latter for both; use the context to decide which is more appropriate.
To start, because $\langle\vec{F}_{\rm random}\rangle = 0$,
$$ M\frac{d}{dt}\langle\vec{v}\rangle = \langle\vec{F}_{\rm drag}\rangle = -\frac{1}{B}\langle\vec{v}\rangle $$
This ODE is trivial to solve,
$$ \langle\vec{v}(t)\rangle = \vec{v}(0)\,e^{-t/MB} = \vec{v}(0)\,e^{-t/\tau} $$
where τ = M B is the relaxation time for the drag force. Returning to our original equation, we may
now write it as,
$$ \frac{d}{dt}\vec{v} = -\frac{1}{\tau}\vec{v} + \frac{1}{M}\vec{F}_{\rm random}(t) $$
Of course it's rather meaningless for us to try to solve this for $\vec{v}(t)$ due to the random force; for an ensemble of Brownian particles each one will have its own trajectory and velocity. But it is meaningful to compute variances for position and velocity, which is what we will now do.
First, apply $\vec{r}\,\cdot$ to both sides of the equation above,
$$ \vec{r}\cdot\frac{d\vec{v}}{dt} = -\frac{1}{\tau}\,\vec{r}\cdot\vec{v} + \frac{1}{M}\,\vec{r}\cdot\vec{F}_{\rm random}(t) $$
or
$$ \frac{d}{dt}(\vec{r}\cdot\vec{v}) - \frac{d\vec{r}}{dt}\cdot\vec{v} = -\frac{1}{\tau}\,\vec{r}\cdot\vec{v} + \frac{1}{M}\,\vec{r}\cdot\vec{F}_{\rm random}(t) $$
But since
$$ \vec{r}\cdot\vec{v} = \vec{r}\cdot\frac{d\vec{r}}{dt} = \frac{1}{2}\frac{d}{dt}(\vec{r}\cdot\vec{r}) = \frac{1}{2}\frac{d}{dt}r^2 $$
we have
$$ \frac{1}{2}\frac{d^2}{dt^2}r^2 - v^2 = -\frac{1}{2\tau}\frac{d}{dt}r^2 + \frac{1}{M}\,\vec{r}\cdot\vec{F}_{\rm random}(t) $$
Taking the ensemble average of this expression,
$$ \frac{1}{2}\frac{d^2}{dt^2}\langle r^2\rangle - \langle v^2\rangle = -\frac{1}{2\tau}\frac{d}{dt}\langle r^2\rangle + \frac{1}{M}\langle\vec{r}\cdot\vec{F}_{\rm random}(t)\rangle $$
Recall that our random walk description of Brownian motion gave $\langle r^2\rangle \propto t$ and this is the result we're working towards. But we're not simply duplicating the previous results because ultimately we also hope to find how this variance depends on physical quantities, such as the fluid viscosity and the particle's mass and radius.
The next two steps of the analysis are: 1) Assume that the random force is uncorrelated with the particle's position so $\langle\vec{r}\cdot\vec{F}_{\rm random}(t)\rangle = 0$. 2) Use the equipartition theorem,
$$ \langle\tfrac{1}{2}Mv^2\rangle = \tfrac{3}{2}kT $$
so $\langle v^2\rangle = 3kT/M$. With these two steps,
$$ \frac{1}{2}\frac{d^2}{dt^2}\langle r^2\rangle + \frac{1}{2\tau}\frac{d}{dt}\langle r^2\rangle = \frac{3kT}{M} $$
You can check that the solution to this ODE may be written as,
$$ \langle r^2\rangle = \frac{6kT\tau^2}{M}\left(\frac{t}{\tau} - c_1\left(1 - e^{-t/\tau}\right)\right) + c_2 $$
where the constants $c_1$ and $c_2$ are set by the initial conditions (for example, if initially $\langle r^2\rangle = d\langle r^2\rangle/dt = 0$ then $c_1 = 1$ and $c_2 = 0$). For times longer than the initial transient, that is, for $t \gg \tau$, the result is,
$$ \langle r^2\rangle = \left(\frac{6kT\tau}{M}\right)t = \left(\frac{kT}{\pi\eta a}\right)t $$
Interestingly, this was one of the first experimental methods for measuring Boltzmann’s constant, k,
and subsequently for computing Avogadro’s number.
This derivation gives the correct results in the long time limit but it’s not entirely rigorous because
we assumed that ⟨v 2 ⟩ = 3kT /M , which is correct as t → ∞ but depending on the initial conditions for
the Brownian particle this may not be accurate (e.g., if the particle is initially at rest). Let’s back up
to the original ODE,
$$ \frac{d}{dt}\vec{v}(t) = -\frac{1}{\tau}\vec{v} + \frac{1}{M}\vec{F}_{\rm random}(t) $$
The general solution of this inhomogeneous ODE is,
$$ \vec{v}(t) = \vec{v}(0)\,e^{-t/\tau} + \frac{1}{M}e^{-t/\tau}\int_0^t e^{t'/\tau}\,\vec{F}_{\rm random}(t')\,dt' $$
Squaring and averaging (the random force is uncorrelated with the initial velocity) gives
$$ \langle v^2(t)\rangle = v^2(0)\,e^{-2t/\tau} + \frac{C\tau}{2}\left(1 - e^{-2t/\tau}\right) $$
where
$$ C = \frac{1}{M^2}\int_{-\infty}^{\infty}\langle\vec{F}_{\rm random}(t')\cdot\vec{F}_{\rm random}(0)\rangle\,dt' $$
is the integral of the time-correlation of the random force. Fortunately, we don't have to evaluate this integral because we can, instead, use the equipartition theorem to fix the value of $C$. Specifically, since $\langle v^2(t)\rangle \to 3kT/M$ as $t \to \infty$ it must be the case that $C = 6kT/M\tau$.
Pathria and Beale, in a round-about fashion, also argue that the random force may be treated as delta-correlated in time. Notice that with this approximation the random acceleration, $\vec{A} = \vec{F}_{\rm random}/M$, is correlated with the fluctuating velocity as,
$$ \langle\vec{v}(t)\cdot\vec{A}(0)\rangle = \frac{C}{2}\,\delta(t) $$
since $\langle\vec{A}(t)\cdot\vec{A}(0)\rangle = C\,\delta(t)$.
I'll just quote the final result for the variance of the position,
$$ \langle r^2(t)\rangle = \frac{6kT\tau}{M}t + v^2(0)\,\tau^2\left(1-e^{-t/\tau}\right)^2 - \frac{3kT}{M}\tau^2\left(1-e^{-t/\tau}\right)\left(3-e^{-t/\tau}\right) $$
For long times ($t \gg \tau$) only the first term remains, which is the same as our earlier result. See Figures 15.3 and 15.4 in Pathria and Beale for graphs of $\langle v^2(t)\rangle$ and $\langle r^2(t)\rangle$ as functions of time for various initial conditions.
One last thing about this Langevin theory analysis: Since we know that the time-correlation function of the random force satisfies,
$$ \frac{6kTM}{\tau} = \int_{-\infty}^{\infty}\langle\vec{F}_{\rm random}(t')\cdot\vec{F}_{\rm random}(t'')\rangle\,dt' $$
we can use this to model the random force term. For example, we can use the white noise model, for which the random force varies so rapidly that it immediately de-correlates, that is,
$$ \langle\vec{F}_{\rm random}(t')\cdot\vec{F}_{\rm random}(t'')\rangle = \frac{6kTM}{\tau}\,\delta(t'-t'') $$
This formulation is useful for both theoretical analysis and numerical stochastic modeling.
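To illustrate that last point, here is a minimal sketch (mine, not from the notes) of a stochastic integration of the Langevin equation using the white noise model. Each timestep the velocity receives a Gaussian kick whose variance per component is $(C/3)\,dt$, with $C = 6kT/M\tau$ fixed by equipartition as above; the units and parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
kT, M, tau = 1.0, 1.0, 1.0                 # arbitrary units
C = 6.0 * kT / (M * tau)                   # noise strength fixed by equipartition
dt, nsteps, nparts = 0.01, 2000, 10_000

# Euler-Maruyama step: dv = -(v/tau) dt + dW, where each component of dW is
# Gaussian with zero mean and variance (C/3) dt so that <A(t).A(0)> = C delta(t).
v = np.zeros((nparts, 3))                  # an ensemble of particles, all at rest
for _ in range(nsteps):
    v += -v * dt / tau + rng.normal(0.0, np.sqrt(C * dt / 3.0), size=v.shape)

print("<v^2> =", (v**2).sum(axis=1).mean())   # should approach 3 kT / M = 3.0
```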
Chapter 10
Computer Simulations
In the micro-canonical ensemble the mean value of a quantity $X$ is evaluated as
$$ \langle X\rangle = \frac{1}{M}\sum_{i=1}^{M}X_i $$
since $P_i = 1/M$ (i.e., all states are equally probable). In the canonical ensemble we evaluate the mean value as,
$$ \langle X\rangle = \sum_{i=1}^{M}X_iP_i = \frac{\sum_{i=1}^{M}X_ie^{-\beta E_i}}{\sum_{i=1}^{M}e^{-\beta E_i}} = \frac{1}{Q_N}\sum_{i=1}^{M}X_ie^{-\beta E_i} $$
where $\beta = 1/kT$. As mentioned before, the direct evaluation of these summations is not practical due to the huge numbers of states, even in small systems. For example, for a three-dimensional Ising lattice with 10 by 10 by 10 sites the number of states is $M = 2^{1000} \approx 10^{300}$ configurations. The next generation of supercomputers will be able to perform $10^{18}$ operations per second (an exaFLOP) so we're still 282 orders of magnitude short of the necessary computational horsepower for the brute-force calculation.
The next option is not to evaluate the sums for all $M$ states but instead to select a sub-sample of $M'$ states with $M' \ll M$; in the micro-canonical ensemble this would be just,
$$ \langle X\rangle = \sum_{i=1}^{M}X_iP_i = \frac{1}{M}\sum_{i=1}^{M}X_i \approx \frac{1}{M'}\sum_{i'=1}^{M'}X_{i'} $$
This is analogous to doing a sample poll of a population instead of a full census. Of course the set of
states M ′ that are selected must be an unbiased sample otherwise our estimate of ⟨X⟩ will be skewed.
For the micro-canonical ensemble this approach is not practical because it's too difficult to find states that satisfy the constraint on total energy. The canonical ensemble is much more liberal in allowing us to pick states but then each state is weighted with probability $P_i \propto e^{-\beta E_i}$. Unfortunately, if we pick states at random we find that $P_i \approx 0$ for nearly all of them. Nearly any random sample will have a large error because, given that $M$ is enormous, it's unlikely that a random subset of $M'$ states will contain any of the important states for which $P_i$ is significantly different from zero. This is analogous to doing a random poll of people in the US asking them what it was like to walk on the Moon.
The alternative is to not choose states entirely at random but rather to pick mostly the important ones; this is called importance sampling. Instead of choosing $M'$ states at random we will choose $M^*$ states, with $\Pi_i$ the probability of choosing state $i$ for this sample; in the canonical ensemble we would evaluate,
$$ \langle X\rangle = \sum_{i=1}^{M}X_iP_i = \sum_{i=1}^{M}\frac{X_i}{\Pi_i}\,\Pi_iP_i \approx \frac{1}{M^*}\sum_{i^*=1}^{M^*}\frac{X_{i^*}}{\Pi_{i^*}}P_{i^*} $$
where the states $i^*$ are selected with probability $\Pi_{i^*}$. Notice that if states are chosen uniformly at random then $\Pi_{i^*} = 1/M$, which is what we had before.
Ideally, we’d like to choose the most important states so we’d like to use Πi ≈ Pi . That means that
we need to find an efficient way of choosing states with the specified probability Πi . The key to finding
these important states in an efficient manner is not to draw them independently but to pick the states
close to each other in phase space; typically if a state has high probability then nearby states are also
high probability states. To continue our analogy, if you want to poll persons who’ve walked on the
moon you start with Neil Armstrong’s address book.
To locate states near each other in phase space we'll use the transition probability, $W_{i\to j}$, which is the probability that starting from state $i$ we pick state $j$ as our next sample state. Moving between states in this fashion, if we call $\hat\Pi_i(t)$ the probability that we're at state $i$ at time $t$ then this probability obeys a Master equation,
$$ \frac{d}{dt}\hat\Pi_i(t) = \sum_{j=1}^{M}\left[-W_{i\to j}\hat\Pi_i + W_{j\to i}\hat\Pi_j\right] $$
The first term inside the sum is the rate at which we leave state i, which is equal to the probability
that we’re at i times the rate at which we jump to any other state j. The second term is the rate at
which we jump into i from any other state, which equals Wj→i times the probability that we’re in state
j (i.e., Π̂j (t)). Of course for the total rate of change both the “loss” and “gain” terms are summed over
all states j, whether they are the destination state (in the first term) or the source state (in the second
term).∗
Assuming that this process has a steady state ($d\hat\Pi_i/dt = 0$) we have that as $t \to \infty$, $\hat\Pi_i \to \Pi_i$ with
$$ \sum_{j=1}^{M}\left[-W_{i\to j}\Pi_i + W_{j\to i}\Pi_j\right] = 0 $$
To get this we'll impose the condition of detailed balance on our transition probability, that is,
$$ \Pi_iW_{i\to j} = \Pi_jW_{j\to i} \qquad (*) $$
This ensures that if we take a large number of steps the probability of landing on state $i$ is $\Pi_i$.
∗ This type of formulation also appears in several statistical mechanical theories, such as the Boltzmann equation for
dilute gases.
Since we want $\Pi_i = P_i$, detailed balance requires $W_{i\to j}/W_{j\to i} = P_j/P_i = e^{-\beta(E_j - E_i)}$. Note an important feature: Because we only need the ratio $P_j/P_i$ we don't need $P_i$ explicitly; this is key because computing $Q_N$ for the normalization of $P_i$ is just as difficult as computing $\langle X\rangle$.
A simple way to construct $W_{i\to j}$ such that it satisfies (*) is to use,
$$ W_{i\to j} = \begin{cases} e^{-\beta(E_j - E_i)} & E_i < E_j \\ 1 & E_i \ge E_j \end{cases} $$
That is, if the energy of state $j$ is higher than that of state $i$ by an amount $\Delta E = E_j - E_i$ then make the transition to state $j$ with probability $e^{-\beta\Delta E}$; if state $j$ has lower energy ($\Delta E < 0$) then automatically take the step. Since this transition rate gives $\Pi_i = P_i$ we finally have our estimate,
$$ \langle X\rangle \approx \frac{1}{M^*}\sum_{i^*=1}^{M^*}\frac{X_{i^*}}{\Pi_{i^*}}P_{i^*} = \frac{1}{M^*}\sum_{i^*=1}^{M^*}X_{i^*} $$
where the M ∗ states are generated by stepping from one to the next. This procedure was first proposed
by Metropolis et al.† in 1953 but was independently discovered by a number of other computational
physicists and chemists at about the same time. It’s now known as the Metropolis Monte Carlo method
or sometimes as just the Monte Carlo method.‡
Let’s see how the Metropolis Monte Carlo method might be used in practice. For the Ising model
we could take a lattice of N spin sites and ask what is the value of ⟨N+ ⟩, the average number of “up”
sites at a given temperature T . § Starting from an arbitrary initial configuration we take a step to the
next configuration by choosing a spin site at random and then making a random choice as to whether
or not to flip that spin. If flipping the selected spin site causes it to have the same or more aligned
neighbors (i.e., if ∆E ≤ 0) then we automatically make that transition. On the other hand if the
transition is energetically unfavorable then we generate a random number, R, between zero and one;
if R ≤ exp(−β∆E) then the spin is flipped otherwise the system remains unchanged. Whatever the
outcome we now have our new state and a sample value for $N_+$. We repeat this process, accumulating $M^*$ samples, until we reach the desired accuracy for $\langle N_+\rangle$; the details on how to estimate the accuracy of this sample average are discussed by Pathria and Beale in Section 16.1.
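Here is a minimal sketch of this procedure for a two-dimensional Ising lattice with nearest-neighbor coupling $J$ and periodic boundaries (my own illustration, not code from the notes or from Pathria and Beale; the lattice size, temperature, and step counts are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(3)
L, J, kT = 20, 1.0, 2.5                   # lattice size, coupling, temperature
spins = rng.choice([-1, 1], size=(L, L))  # arbitrary initial configuration
nplus = []

for step in range(200_000):
    i, j = rng.integers(L, size=2)        # pick a random spin site
    # Energy change if spin (i,j) flips: dE = 2 J s_ij (sum of 4 neighbors),
    # with periodic boundary conditions.
    nbrs = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
            spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    dE = 2.0 * J * spins[i, j] * nbrs
    if dE <= 0.0 or rng.random() <= np.exp(-dE / kT):
        spins[i, j] = -spins[i, j]        # accept the flip
    nplus.append((spins == 1).sum())      # sample value of N+

print("<N+> =", np.mean(nplus[100_000:]))  # discard the equilibration samples
```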
To find the equation of state for a dense gas the procedure is similar. We select random values for the positions of the particles in a volume $V$; as we saw before we only need to estimate the pair-correlation function to get the non-ideal contribution to the equation of state. We pick a random particle, change its position by a random increment $\Delta\vec{r}$ and compute $\Delta E$, the change in total energy, based on the intermolecular potential energy $\mathcal{V}(\vec{r}_1,\dots,\vec{r}_N)$. Typically $|\Delta\vec{r}|$ is taken as a fraction of a molecular diameter to minimize the chance of particles overlapping (i.e., the strong repulsion term can make $\Delta E$ very large, which means almost certain rejection of the move). Once the move is accepted or rejected according to the Metropolis Monte Carlo procedure the quantities of interest are measured and the procedure repeats. The original Metropolis Monte Carlo calculations were of this type since the equations of state for materials at extreme conditions were of particular interest to the scientists at Los Alamos in the early 1950's.
† J. Chem. Phys. 21, 1087 (1953)
‡ The latter term can be confusing given the wide variety of stochastic algorithms that are generally called Monte Carlo methods.
§ Note that $\langle s\rangle = 2\langle N_+\rangle - N$.
One last note: It’s important to keep in mind that the transitions between states in the Metropolis
Monte Carlo (MMC) method have nothing to do with the actual physical dynamics of an interacting
system. That is, if we took an initial condition in phase space, say the positions and velocities of
particles, and computed the real dynamics (say, from the Lagrangian) this would trace out a trajectory in
phase space. At equilibrium the states visited by that trajectory have the same equilibrium distribution
as the states visited by the MMC method. But these two ways of traveling through phase space have
completely different trajectories. Another way to say this is that with MMC the only quantity of
physical interest is the set of sample values; the order in which we get those samples has no physical
meaning.
Molecular Dynamics
The interactions between particles in a gas or liquid often may be accurately modeled using classical
mechanics. Assume the particles interact by a pairwise force that depends only on the relative separation
where Fij is the force on particle i due to particle j; the positions of the particles are ri and rj ,
respectively. The explicit form for F may either be approximated from experimental data or computed
theoretically using quantum mechanics.
Once we fix the interparticle force, the dynamics is given by the equation of motion
$$ \frac{d^2}{dt^2}\vec{r}_i = \frac{1}{m}\sum_{j=1,\,j\ne i}^{N}\vec{F}_{ij} $$
where $m$ is the mass of a particle. From the initial conditions, in principle, the future state can be computed by numerically integrating this system of ODEs. This numerical approach is called molecular dynamics
and it has been very successful in computing microscopic properties of fluids; Pathria and Beale give a
short introduction in Sections 16.3 and 16.4.
Though simple in concept the simulations can be computationally expensive unless sophisticated
numerical techniques are used. For example, the calculation of Fij for all pairs is a computation that
requires O(N 2 ) operations; when dealing with millions of particles this makes the calculation intractable.
But since the intermolecular force is typically short-ranged it really only needs to be computed for
particles closer than a specified radius of influence. The sophisticated approach is to maintain neighbor
lists so that the computation of Fij can be reduced to O(N ) or O(N ln N ).
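As a concrete sketch of how such a simulation might be organized, here is a bare-bones velocity Verlet integrator with a truncated Lennard-Jones force in reduced units (my own illustration, not code from the notes); it uses the naive $O(N^2)$ force loop discussed above rather than neighbor lists.

```python
import numpy as np

BOX, RC = 10.0, 2.5        # box length and force cutoff (reduced units)

def forces(r):
    """Truncated Lennard-Jones forces; naive O(N^2) loop over pairs."""
    F = np.zeros_like(r)
    for i in range(len(r) - 1):
        dr = r[i] - r[i+1:]                  # displacements to particles j > i
        dr -= BOX * np.rint(dr / BOX)        # minimum image (periodic box)
        r2 = (dr**2).sum(axis=1)
        near = r2 < RC * RC                  # short-ranged force: skip far pairs
        inv6 = 1.0 / r2[near]**3
        fmag = 24.0 * (2.0 * inv6**2 - inv6) / r2[near]
        fij = fmag[:, None] * dr[near]       # force on i due to each close j
        F[i] += fij.sum(axis=0)
        F[i+1:][near] -= fij                 # Newton's third law
    return F

def velocity_verlet(r, v, F, dt=0.005, m=1.0):
    """Advance positions and velocities by one time step dt."""
    v += 0.5 * dt * F / m
    r += dt * v
    r %= BOX                                 # re-wrap into the periodic box
    F = forces(r)
    v += 0.5 * dt * F / m
    return r, v, F
```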
Kinetic Theory
Let’s turn our attention now to ideal gases. Although an ideal gas is simple at equilibrium, when the
gas is out of equilibrium, say flowing due to a pressure gradient, it has all the rich, interesting behavior
found in hydrodynamics. Let’s consider how to model the microscopic dynamics of a gas so as to
formulate efficient numerical methods for simulating it.
Consider the following model for a monatomic gas: a system of volume V contains N particles.
These particles interact, but since the gas is dilute, the interactions are always two-body collisions. The
criterion for a gas to be dilute is that the distance between the particles is large compared to d, the
effective diameter of the particles. This effective diameter may be measured, for example, by scattering
experiments. Our criterion for a gas to be considered dilute may be written as
$$ d \ll \sqrt[3]{V/N} $$
An alternative view of this criterion is to say that a gas is dilute if the volume occupied by the particles
is a small fraction of the total volume.
In Boltzmann’s time there was no hope of evaluating the equations of motion numerically, and
even today molecular dynamics is limited to relatively small systems. To understand the scale of the
problem, consider that in a dilute gas at standard temperature and pressure, the number of particles in
a cubic centimeter, Loschmidt's number, is $2.687\times10^{19}$. A molecular dynamics simulation of a dilute
gas containing a million particles represents a volume of 0.037 cubic microns. Even on a supercomputer,
an hour of computer time will evolve the system for only a few nanoseconds of physical time.
Maxwell-Boltzmann Distribution
Instead of being overwhelmed by the huge numbers, we can use them to our advantage. The basic idea
of statistical mechanics is to abandon any attempt to predict the instantaneous state of a single particle.
Instead, we obtain probabilities and compute average quantities, for example, the average speed of a
particle. The large numbers of particles now work in our favor because even in a very small volume we
are averaging over a very large sample.
For a dilute gas we usually take the gas to be ideal; that is, we assume that a particle’s energy is all
kinetic energy,
$$ E(\vec{r},\vec{v}) = \frac{1}{2}m|\vec{v}|^2 $$
In a dilute gas this is a good approximation, since the interparticle forces are short-ranged. The
probability that a particle in this system is at a position between r and r + dr with a velocity between
v and v + dv is
$$ P(\vec{r},\vec{v})\,d\vec{r}\,d\vec{v} = A\exp\left(-E(\vec{r},\vec{v})/kT\right)\,d\vec{r}\,d\vec{v} $$
The constant A is a normalization that is fixed by the condition that the integral of the probability
over all possible states must equal unity. The differential elements, drdv, on each side of the equation
serve to remind us that P (r, v) is a probability density.
Since a particle's energy is independent of $\vec{r}$, the probability density may be written as
$$ P(\vec{r},\vec{v})\,d\vec{r}\,d\vec{v} = \left[P_r(\vec{r})\,d\vec{r}\right]\left[P_v(\vec{v})\,d\vec{v}\right] = \left[\frac{1}{V}\,d\vec{r}\right]\left[P_v(\vec{v})\,d\vec{v}\right] $$
The particle is equally likely to be anywhere inside the volume $V$. For example, suppose that we demark a subregion $\alpha$ inside our system. The probability that the particle is inside $\alpha$ is
$$ \int_\alpha P_r(\vec{r})\,d\vec{r} = \frac{1}{V}\int_\alpha d\vec{r} = \frac{V_\alpha}{V} $$
where $V_\alpha$ is the volume of subregion $\alpha$.
We may further simplify our expression for the probability by making use of the isotropy of the distribution. In spherical coordinates, the probability that a particle has a velocity between $\vec{v}$ and $\vec{v} + d\vec{v}$ is
$$ P_v(\vec{v})\,d\vec{v} = P_v(v,\theta,\phi)\,v^2\sin\theta\,dv\,d\theta\,d\phi = A\left(e^{-\frac{1}{2}mv^2/kT}v^2\,dv\right)\left(\sin\theta\,d\theta\right)\left(d\phi\right) $$
Figure 10.1: Maxwell-Boltzmann distribution of particle speed for nitrogen at T = 273 K. The dashed
line marks the average speed, ⟨v⟩.
Since the distribution of velocities is isotropic, the angular parts can be integrated to give
$$ P_v(v)\,dv = 4\pi\left(\frac{m}{2\pi kT}\right)^{3/2}v^2e^{-\frac{1}{2}mv^2/kT}\,dv $$
where $P_v(v)\,dv$ is the probability that a particle's speed is between $v$ and $v + dv$. Notice that we finally fixed the normalization constant $A$ by imposing the condition that $\int_0^\infty P_v(v)\,dv = 1$. This velocity distribution is known as the Maxwell-Boltzmann distribution (see Figure 10.1).
Using the Maxwell-Boltzmann distribution it is not difficult to compute various average quantities. For example, the average particle speed is
$$ \langle v\rangle = \int_0^\infty vP_v(v)\,dv = \frac{2\sqrt{2}}{\sqrt{\pi}}\sqrt{\frac{kT}{m}} $$
Similarly, maximizing $P_v(v)$ gives the most probable speed,
$$ v_{\rm mp} = \sqrt{2}\sqrt{\frac{kT}{m}} $$
Notice that $v_{\rm mp} < \langle v\rangle < \sqrt{\langle v^2\rangle}$, but they are all of comparable magnitude and approximately equal to the speed of sound
$$ v_s = \sqrt{\gamma}\sqrt{\frac{kT}{m}} $$
where $\gamma = c_p/c_v$ is the ratio of specific heats. In a monatomic gas, $\gamma = 5/3$.
There remains one problem: We don't really have just one moving particle; all particles move. This issue is resolved by identifying $\langle v_r\rangle$ as the average relative speed between particles,
$$ \langle v_r\rangle = \langle|\vec{v}_1 - \vec{v}_2|\rangle = \int\!\!\int|\vec{v}_1 - \vec{v}_2|\,P_v(\vec{v}_1)P_v(\vec{v}_2)\,d\vec{v}_1\,d\vec{v}_2 = \frac{4}{\sqrt{\pi}}\sqrt{\frac{kT}{m}} $$
where $P_v(\vec{v})$ is given by the Maxwell-Boltzmann distribution. From the collision frequency, $f = \pi d^2\langle v_r\rangle N/V$, the mean free path is obtained as $\lambda = \langle v\rangle/f$. Using the expressions above for the average particle speed and the average relative speed,
$$ \lambda = \frac{V}{\sqrt{2}\,N\pi d^2} $$
Notice that the mean free path depends only on density and particle diameter; it is independent of temperature.
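Putting in numbers is a quick sanity check on these formulas; the short sketch below (mine, not from the notes) evaluates them for nitrogen at standard conditions, taking $d \approx 3.7\times10^{-10}$ m as an assumed effective diameter.

```python
import numpy as np

k = 1.38e-23      # Boltzmann constant (J/K)
T = 273.0         # temperature (K)
m = 4.65e-26      # mass of an N2 molecule (kg)
d = 3.7e-10       # assumed effective diameter of N2 (m)
n = 2.687e25      # Loschmidt's number per cubic meter (1/m^3)

v_avg = np.sqrt(8.0 * k * T / (np.pi * m))        # <v> = sqrt(8kT/(pi m))
v_rel = np.sqrt(2.0) * v_avg                      # <v_r> = sqrt(2) <v>
lam   = 1.0 / (np.sqrt(2.0) * n * np.pi * d**2)   # mean free path

print(f"<v>    = {v_avg:.0f} m/s")    # roughly 450 m/s
print(f"<v_r>  = {v_rel:.0f} m/s")
print(f"lambda = {lam:.2e} m")        # a few tens of nanometers
```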
Using kinetic theory, we can design a numerical simulation of a dilute gas. Instead of solving the deterministic equations of motion, we will build a stochastic model; this is the direct simulation Monte Carlo (DSMC) approach. The probability arguments discussed in this section give us the framework for the numerical scheme. Here we study homogeneous problems, but formulate the DSMC algorithm for inhomogeneous systems, in anticipation of the next section. At each time step the particles first move; then some are selected to collide. The rules for this random selection process are obtained from kinetic theory. After the velocities of all colliding particles have been reset, the process is repeated for the next time step.
Collisions
Intuitively, we would want to select only particles that were near each other as collision partners. In
other words, particles on opposite sides of the system should not be allowed to interact. To implement
this condition, the particles are sorted into spatial cells and only particles in the same cell are allowed
to collide. We could invent more complicated schemes, but this one works well, as long as the dimension
of a cell is no larger than a mean free path.
In each cell, a set of representative collisions is processed at each time step. All pairs of particles in a cell are considered to be candidate collision partners, regardless of their positions within the cell. In the hard-sphere model, the collision probability for a pair of particles, $i$ and $j$, is proportional to their relative speed,
$$ P_{\rm coll}[i,j] = \frac{|\vec{v}_i - \vec{v}_j|}{\sum_{m=1}^{N_c}\sum_{n=1}^{m-1}|\vec{v}_m - \vec{v}_n|} $$
where $N_c$ is the number of particles in the cell. Notice that the denominator serves to normalize this discrete probability distribution.
It would be computationally expensive to use the above expression directly, because of the double sum in the denominator. Instead, the following acceptance-rejection scheme is used to select collision pairs:
1. A pair of candidate particles, $i$ and $j$, is chosen at random from the particles in the cell.
2. Their relative speed, $v_r = |\vec{v}_i - \vec{v}_j|$, is computed.
3. The pair is accepted as collision partners if $v_r > v_r^{\rm max}\Re$, where $v_r^{\rm max}$ is the maximum relative speed in the cell and $\Re$ is a uniform deviate in $[0, 1)$.
4. If the pair is accepted, the collision is processed and the velocities of the particles are reset.
This acceptance-rejection procedure exactly selects collision pairs according to the desired collision probability. The method is also exact if we overestimate the value of $v_r^{\rm max}$, although it is less efficient in the sense that more candidates are rejected. On the whole, it is computationally cheaper to make an intelligent guess that overestimates $v_r^{\rm max}$ rather than recompute it at each time step.
After the collision pair is chosen, their postcollision velocities, $\vec{v}_i^{\,*}$ and $\vec{v}_j^{\,*}$, need to be evaluated. Conservation of linear momentum tells us that the center of mass velocity remains unchanged by the collision,
$$ \vec{v}_{\rm cm} = \frac{1}{2}\left(\vec{v}_i + \vec{v}_j\right) = \frac{1}{2}\left(\vec{v}_i^{\,*} + \vec{v}_j^{\,*}\right) = \vec{v}_{\rm cm}^{\,*} $$
From conservation of energy, the magnitude of the relative velocity is also unchanged by the collision,
$$ v_r = |\vec{v}_i - \vec{v}_j| = |\vec{v}_i^{\,*} - \vec{v}_j^{\,*}| = v_r^* $$
These equations give us four constraints for the six unknowns in $\vec{v}_i^{\,*}$ and $\vec{v}_j^{\,*}$.
The two remaining unknowns are fixed by the angles, $\theta$ and $\phi$, for the relative velocity,
$$ \vec{v}_r^{\,*} = v_r\left[(\sin\theta\cos\phi)\,\hat{x} + (\sin\theta\sin\phi)\,\hat{y} + (\cos\theta)\,\hat{z}\right] $$
For the hard-sphere model, these angles are uniformly distributed over the unit sphere. The azimuthal angle is uniformly distributed between 0 and $2\pi$, so it is selected as $\phi = 2\pi\Re_1$. The $\theta$ angle is distributed according to the probability density,
$$ P_\theta(\theta)\,d\theta = \tfrac{1}{2}\sin\theta\,d\theta $$
Using the change of variable $q = \cos\theta$, we have $P_q(q)\,dq = \tfrac{1}{2}\,dq$, so $q$ is uniformly distributed in the interval $[-1, 1]$. We don't really need to find $\theta$; instead we compute
$$ q = 2\Re_2 - 1 \qquad \cos\theta = q \qquad \sin\theta = \sqrt{1 - q^2} $$
to use for computing the components of the relative velocity. The post-collision velocities are set as
$$ \vec{v}_i^{\,*} = \vec{v}_{\rm cm}^{\,*} + \tfrac{1}{2}\vec{v}_r^{\,*} \qquad\qquad \vec{v}_j^{\,*} = \vec{v}_{\rm cm}^{\,*} - \tfrac{1}{2}\vec{v}_r^{\,*} $$
and we go on to select the next collision pair.
Finally we ask, "How many total collisions should take place in a cell during a time step?" From the collision frequency, the total number of collisions in a cell during a time $\tau$ is
$$ M_{\rm coll} = \frac{N_c(N_c-1)N_{\rm ef}\,\pi d^2\langle v_r\rangle\tau}{2V_c} $$
where $V_c$ is the volume of the cell. Each collision between simulation particles represents $N_{\rm ef}$ collisions among molecules in the physical system. However, we don't really want to compute $\langle v_r\rangle$, since that involves doing a sum over all $\frac{1}{2}N_c(N_c-1)$ pairs of particles in the cell.
Recall that collision candidates go through an acceptance-rejection procedure. The ratio of total accepted to total candidates is
$$ \frac{M_{\rm coll}}{M_{\rm cand}} = \frac{\langle v_r\rangle}{v_r^{\rm max}} $$
since the probability of accepting a pair is proportional to their relative speed. Using the expressions above,
$$ M_{\rm cand} = \frac{N_c(N_c-1)N_{\rm ef}\,\pi d^2v_r^{\rm max}\tau}{2V_c} $$
which tells us how many candidates we should select over a time step $\tau$. Notice that if we set $v_r^{\rm max}$ too high, we still process the same number of collisions on average, but the program is inefficient because many candidates are rejected.
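Here is a sketch of how the collision step for a single cell might look, following the recipe above (hard spheres; the function and its arguments are my own choices, not code from the notes). It keeps a running maximum of the relative speed as its estimate of $v_r^{\rm max}$.

```python
import numpy as np

rng = np.random.default_rng(4)

def collide_cell(v, vrmax, mcand):
    """Process mcand candidate collisions among the velocities v (an Nc x 3
    array for one cell); returns the updated maximum relative speed."""
    Nc = len(v)
    for _ in range(int(mcand)):
        i, j = rng.choice(Nc, size=2, replace=False)  # random candidate pair
        vr = np.linalg.norm(v[i] - v[j])
        vrmax = max(vrmax, vr)                        # keep the estimate current
        if vr > vrmax * rng.random():                 # acceptance-rejection
            vcm = 0.5 * (v[i] + v[j])                 # center of mass velocity
            q = 2.0 * rng.random() - 1.0              # cos(theta), uniform in [-1,1]
            s = np.sqrt(1.0 - q * q)                  # sin(theta)
            phi = 2.0 * np.pi * rng.random()
            vrel = vr * np.array([s * np.cos(phi), s * np.sin(phi), q])
            v[i] = vcm + 0.5 * vrel                   # post-collision velocities
            v[j] = vcm - 0.5 * vrel
    return vrmax
```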
Figure 10.3 shows the results from a DSMC simulation of the relaxation of a gas toward equilibrium from an initial state in which all the particles have the same speed (but random velocity directions). We see that after 10 steps (and 2720 collisions) for a system of 3000 particles the distribution has already significantly relaxed toward equilibrium, despite the extremely improbable initial condition and despite the fact that each particle has, on average, been in fewer than two collisions. Figure 10.4 shows the distribution after 50 steps (and 14,555 collisions). This latter histogram shows that the system has almost completely relaxed to equilibrium in about a nanosecond.
Figure 10.3: Speed distribution as obtained from dsmceq for N = 3000 particles. After 10 time steps,
there have been 2720 collisions.
The approach to equilibrium can also be tracked using Boltzmann's H-function, computed here from the speed histogram, where $N_h(v)$ is the number of particles in a histogram bin of width $\Delta v$. This H-function was introduced by Boltzmann and it is proportional to the negative of the entropy of the system. Since the system is initially out of equilibrium, H decreases with time until the system equilibrates, as shown in Figure 10.5. For a full description of the DSMC method, including programming examples, see my textbook Numerical Methods for Physics. Makes for great summer reading at the beach!
Figure 10.4: Speed distribution as obtained from dsmceq for N = 3000 particles. After 50 time steps,
there have been 14,555 collisions.
Figure 10.5: Boltzmann H-function versus the number of collisions per particle, as obtained from dsmceq.