Lecture 2
1. Stirling’s approximation tells us that

\log N! = \sum_{n=1}^{N} \log n \approx \int_1^N dn \, \log n \approx N \log N - N + 1 \,. (1)
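As a quick numerical sanity check, the snippet below compares this crude estimate, and the version carrying the \frac{1}{2} \log 2\pi N correction, against the exact \log N! computed with `math.lgamma`:

```python
import math

# Compare the crude estimate log N! ~ N log N - N + 1 with the exact
# value and with the improved estimate N log N - N + (1/2) log(2 pi N).
for N in (10, 100, 1000):
    exact = math.lgamma(N + 1)  # log N!, computed exactly
    crude = N * math.log(N) - N + 1
    improved = N * math.log(N) - N + 0.5 * math.log(2 * math.pi * N)
    print(N, exact - crude, exact - improved)
```

The error of the crude estimate grows like \frac{1}{2} \log 2\pi N, while the improved estimate is accurate to O(1/N).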
(The first correction after order N is actually \frac{1}{2} \log 2\pi N, so there is an extra subleading divergence for large N that our crude estimate fails to capture.) In the last lecture, we calculated
the number of microstates corresponding to an ideal gas of N identical and indistinguishable
spin-0 particles of mass m in a fixed cubic volume V at a total energy E. While this is a
somewhat messy computation in the microcanonical ensemble, we persevered and managed
to deduce that

\Omega(E) = \frac{V^N}{N!} \left( \frac{m}{2\pi\hbar^2} \right)^{3N/2} \frac{E^{\frac{3N}{2} - 1}}{\Gamma\!\left(\frac{3N}{2}\right)} \,. (2)
The entropy is proportional to the logarithm of the total number of states. Therefore,
S = k_B \log \Omega(E) = k_B \log \left[ \frac{V^N}{N!} \left( \frac{m}{2\pi\hbar^2} \right)^{3N/2} \frac{E^{\frac{3N}{2} - 1}}{\Gamma\!\left(\frac{3N}{2}\right)} \right]
  = N k_B \left[ \log \frac{V}{N} + \frac{3}{2} \log \frac{mE}{3N\pi\hbar^2} + \frac{5}{2} + O\!\left( \frac{\log N}{N} \right) \right] \,. (3)
This is known as the Sackur–Tetrode equation. It is valid in the limit where V and N are large. In particular, since N \gg 1, we use the simplifications E^{\frac{3N}{2} - 1} \approx E^{\frac{3N}{2}} and \Gamma(\frac{3N}{2}) \approx (\frac{3N}{2})!, and apply Stirling's approximation (1) to the factorials.
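As a check of these simplifications, we can compare the Sackur–Tetrode expression with \log \Omega(E) computed directly from (2). The sketch below works in units \hbar = m = k_B = 1 (an assumption made for convenience); the per-particle discrepancy should vanish like \log N / N:

```python
import math

# Exact log Omega(E) from eq. (2) versus the Sackur-Tetrode form (3),
# in units hbar = m = k_B = 1.
def log_omega(N, V, E):
    return (N * math.log(V) - math.lgamma(N + 1)
            - 1.5 * N * math.log(2 * math.pi)
            + (1.5 * N - 1) * math.log(E)
            - math.lgamma(1.5 * N))

def sackur_tetrode(N, V, E):
    return N * (math.log(V / N) + 1.5 * math.log(E / (3 * N * math.pi)) + 2.5)

N, V, E = 100_000, 1.0e5, 2.0e5
exact = log_omega(N, V, E)
approx = sackur_tetrode(N, V, E)
print((exact - approx) / N)  # per-particle error, of order log N / N
```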
We write the first law of thermodynamics as

dS = \frac{1}{T} \, dE + \frac{p}{T} \, dV - \frac{\mu}{T} \, dN \,. (4)
Since the entropy is a state function S(E, V, N ), from taking partial derivatives of S we
determine the temperature and pressure:
\frac{1}{T} = \left( \frac{\partial S}{\partial E} \right)_{V,N} = \frac{3}{2} \frac{N k_B}{E} \implies E = \frac{3}{2} N k_B T \,, (5)

\frac{p}{T} = \left( \frac{\partial S}{\partial V} \right)_{E,N} = \frac{N k_B}{V} \implies pV = N k_B T \,. (6)
These are the equipartition theorem and ideal gas law, respectively. They are the
equations of state for the ideal gas.
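These derivatives can be checked numerically by finite-differencing S/k_B = \log \Omega from (2). A sketch, again in units \hbar = m = k_B = 1:

```python
import math

# S/k_B = log Omega(E) from eq. (2); finite differences give 1/T and p/T,
# reproducing the equations of state (5) and (6).
def S(E, V, N):
    return (N * math.log(V) - math.lgamma(N + 1)
            - 1.5 * N * math.log(2 * math.pi)
            + (1.5 * N - 1) * math.log(E)
            - math.lgamma(1.5 * N))

N, V, E = 1000, 50.0, 3000.0
h = 1e-4
inv_T = (S(E + h, V, N) - S(E - h, V, N)) / (2 * h)      # eq. (5): 1/T
p_over_T = (S(E, V + h, N) - S(E, V - h, N)) / (2 * h)   # eq. (6): p/T
T = 1.0 / inv_T
p = T * p_over_T
print(E, 1.5 * N * T)   # equipartition, up to 1/N corrections
print(p * V, N * T)     # ideal gas law
```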
By modifying the assumptions, for example, by letting the gas molecules occupy a finite
volume and interact via intermolecular forces, the equation of state is altered. This can yield
improvements to the ideal gas law:
\left( p + \frac{aN^2}{V^2} \right) (V - b) = N k_B T \quad \text{(van der Waals)} \,, (7)

p (V - b) = N k_B T \exp\!\left( -\frac{a}{N k_B T V} \right) \quad \text{(Dieterici)} \,, (8)
where a and b are suitably chosen dimensionful parameters. We also sometimes write
\frac{p}{k_B T} = \rho + \sum_{n=2}^{\infty} B_n(T) \, \rho^n \,, \qquad \rho = \frac{N}{V} \,. (9)
Statistical Physics Vishnu Jejjala
The virial expansion expresses the pressure of a gas at a temperature T as a power series in the number density of particles.
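For instance, a truncated virial series can be evaluated directly; the coefficients B_2 and B_3 below are hypothetical illustrative values, not ones derived in these notes:

```python
# Truncated virial expansion of eq. (9): p/(k_B T) = rho + sum_n B_n rho^n.
def virial_pressure(rho, T, coeffs, kB=1.0):
    """coeffs maps n -> B_n(T) for n >= 2."""
    return kB * T * (rho + sum(B * rho ** n for n, B in coeffs.items()))

rho, T = 0.01, 2.0
p_ideal = 1.0 * T * rho
p_corr = virial_pressure(rho, T, {2: -0.5, 3: 0.1})  # hypothetical B_2, B_3
print(p_ideal, p_corr)  # an attractive B_2 < 0 lowers the pressure
```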
2. This is an alternate derivation of the entropy starting from the equations of state, which we
can think of as phenomenological descriptions of the ideal gas. We know that
dE = T \, dS - p \, dV + \mu \, dN \,. (10)
As pV ∝ T , we also obtain:
S(T, p) - S_0(T_0, p_0) = N k_B \log \left[ \left( \frac{T}{T_0} \right)^{5/2} \frac{p_0}{p} \right] \,. (13)
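One can check (13) against the Sackur–Tetrode result (3) by using the equations of state E = \frac{3}{2} N k_B T and V = N k_B T / p to trade (E, V) for (T, p). A sketch in units \hbar = m = k_B = 1:

```python
import math

# Sackur-Tetrode entropy (3) rewritten in terms of (T, p) via the
# equations of state; the difference reproduces eq. (13).
def S(T, p, N):
    E = 1.5 * N * T       # equipartition, eq. (5)
    V = N * T / p         # ideal gas law, eq. (6)
    return N * (math.log(V / N) + 1.5 * math.log(E / (3 * N * math.pi)) + 2.5)

N = 100
T0, p0 = 1.0, 1.0
T, p = 3.0, 0.5
lhs = S(T, p, N) - S(T0, p0, N)
rhs = N * math.log((T / T0) ** 2.5 * p0 / p)
print(lhs, rhs)  # the two agree
```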
3. Suppose that ĤI is an interaction Hamiltonian and |a⟩ and |b⟩ are quantum states with the
same energy. The transition probability for going from state |a⟩ to state |b⟩ is, by Fermi’s
golden rule,
P_{ab} = \frac{2\pi}{\hbar} \, \rho \, |\langle b| \hat{H}_I |a\rangle|^2 \,, (14)
where \rho is the density of states at the energy E. Because of the modulus square, this is the same as the transition probability for going from state |b⟩ to state |a⟩: P_{ab} = P_{ba}.
To determine how often these transitions occur, we define R_{ab} = \omega_a P_{ab}, where \omega_a is the probability for being in the state |a⟩ in the first place, and \sum_a \omega_a = 1. We then have

\frac{d\omega_a}{dt} = \sum_b \omega_b P_{ba} - \sum_b \omega_a P_{ab} = \sum_b (R_{ba} - R_{ab}) \,. (15)
In the center of the previous expression, the first sum is the increase in probability of being
in the state |a⟩ due to transitions from other states |b⟩ into |a⟩ while the second sum is the
decrease in the probability of being in the state |a⟩ due to transitions to other states |b⟩. At
equilibrium ω̇a = 0 for all states |a⟩ since macroscopic observables no longer change. Thus,
for all |a⟩,

0 = \sum_b (\omega_b - \omega_a) P_{ab} \,. (16)
This means that, if P_{ab} is non-degenerate, \omega_a = \omega_b for all |a⟩ and |b⟩. This is the principle of equal a priori probabilities. Hence, at equilibrium R_{ab} = R_{ba}, and the forward and backward rates of transition are the same. We have obtained the principle of detailed balance.
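To see this relaxation happen, one can integrate the master equation (15) for a small system with a symmetric transition matrix. This toy simulation (an Euler discretization with arbitrarily chosen rates) relaxes to the uniform distribution:

```python
import random

# Evolve d(omega_a)/dt = sum_b (omega_b P_ba - omega_a P_ab), eq. (15),
# with symmetric P_ab = P_ba; equilibrium is omega_a equal for all a.
random.seed(0)
n = 4
P = [[0.0] * n for _ in range(n)]
for a in range(n):
    for b in range(a + 1, n):
        P[a][b] = P[b][a] = random.uniform(0.5, 1.5)  # arbitrary rates

w = [0.7, 0.1, 0.1, 0.1]  # arbitrary initial probabilities, summing to 1
dt = 0.01
for _ in range(5000):
    dw = [sum(w[b] * P[b][a] - w[a] * P[a][b] for b in range(n))
          for a in range(n)]
    w = [w[a] + dt * dw[a] for a in range(n)]

print(w)  # all four probabilities approach 1/4
```

Total probability is conserved at every step because each gain term in one equation appears as a loss term in another.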
Suppose that P_{ac} \ll P_{ab} for all a and b \neq c. This means that the likelihood of transition from any state |a⟩ to the state |c⟩ is exceedingly small. Since P_{ca} = P_{ac}, if we find ourselves in the state |c⟩, we are also extremely unlikely to transition out of |c⟩ to some other state. The system spends a lot of time in |c⟩: once there, it is hard to get out. Indeed, as much time is spent in |c⟩ as in |a⟩. This is another statement of the principle of equal a priori probabilities.
4. As an important aside, the Legendre transform of a function f with respect to the variable
x is defined as
g = f(x, y_1, \ldots, y_n) - p(x, y_1, \ldots, y_n) \cdot x \,, \qquad p := \left( \frac{\partial f}{\partial x} \right)_{y_i} \,. (17)
Similarly,

dg = \left( \frac{\partial g}{\partial p} \right)_{y_i} dp + \sum_{i=1}^{n} \left( \frac{\partial g}{\partial y_i} \right)_{p, \, y_{j \neq i}} dy_i \,. (19)
As an illustrative example, consider f(x) = x^2; p = f'(x) = 2x, which means that x(p) = \frac{p}{2}, and

g(p) = f(x) - p \cdot x = \left( \frac{p}{2} \right)^2 - p \cdot \frac{p}{2} = -\frac{p^2}{4} \,. (22)
Taking the differential,

dg = -\frac{p}{2} \, dp = -x(p) \, dp \,. (23)
Now, let’s take the Legendre transform of g(p) with respect to the variable p. Then, q = g'(p) = -\frac{1}{2} p. We therefore have p(q) = -2q. The Legendre transform is

h(q) = g(p) - q \cdot p = -\frac{1}{4}(-2q)^2 - q \cdot (-2q) = q^2 \,. (24)
Thus, taking the Legendre transform twice gives us the function we started out with just as
we claimed.
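The worked example can be transcribed directly into code; the functions below hard-code the inversions x(p) = p/2 and p(q) = -2q from the text:

```python
# Double Legendre transform of f(x) = x^2 in the convention g = f - p x.
def f(x):
    return x * x

def g(p):
    x = p / 2.0           # invert p = f'(x) = 2x
    return f(x) - p * x   # equals -p^2 / 4, eq. (22)

def h(q):
    p = -2.0 * q          # invert q = g'(p) = -p/2
    return g(p) - q * p   # equals q^2, eq. (24)

for x in (-2.0, 0.5, 3.0):
    print(x, f(x), h(x))  # h(x) == f(x): the double transform returns f
```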
5. Consider the Hamiltonian H. It is the sum of the kinetic energy T and the potential energy
V . Generally, we have
H = T + V = \sum_i \frac{p_i^2}{2m} + V(q_i) \,, (25)
where (q_i, p_i) are generalized coordinates and momenta, collectively called phase space. The
index i runs over the degrees of freedom of the system. We may regard the Hamiltonian
as (up to a sign) the Legendre transform of the Lagrangian, where we swap the generalized
velocities q̇i for the generalized momenta pi :
L = T - V \,, \qquad p_i = \frac{\partial L}{\partial \dot{q}_i} \,, \qquad H = \sum_i p_i \dot{q}_i - L \,. (26)
Whereas the Euler–Lagrange equations, which are derived from a variational principle, are
second order in time, there are twice as many Hamilton equations, but they are first order:
\frac{d}{dt} \frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0 \quad \Longleftrightarrow \quad \dot{q}_i = \frac{\partial H}{\partial p_i} \,, \quad \dot{p}_i = -\frac{\partial H}{\partial q_i} \,. (27)
We observe that
\frac{\partial^2 H}{\partial q_i \, \partial p_i} = \frac{\partial \dot{q}_i}{\partial q_i} = -\frac{\partial \dot{p}_i}{\partial p_i} \,. (28)
Consider the volume element in phase space at a time t:
d\Gamma = \prod_i dq_i \, dp_i \,. (29)

Under evolution through a time step \delta t, the coordinates map as q_i \to q_i + \dot{q}_i \, \delta t and p_i \to p_i + \dot{p}_i \, \delta t, and the Jacobian of this map is 1 + \left( \frac{\partial \dot{q}_i}{\partial q_i} + \frac{\partial \dot{p}_i}{\partial p_i} \right) \delta t + O(\delta t^2) = 1 + O(\delta t^2), since the O(\delta t) terms cancel by (28). Thus, the differential volume element in phase space is
independent of time. This is a requirement of Liouville’s theorem, the statement that the
volume of a region of phase space remains constant under time evolution via Hamilton’s
equations. The shape of the region can change as we map out dΓ over a foliation in time.
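For a concrete check, take a harmonic oscillator, H = p^2/2m + kq^2/2 (an illustrative choice, not from the notes). The Jacobian determinant of the map (q, p) \to (q + \dot{q}\,\delta t, \, p + \dot{p}\,\delta t) deviates from unity only at O(\delta t^2), exactly as the cancellation in (28) guarantees:

```python
# For H = p^2/(2m) + k q^2/2: qdot = p/m, pdot = -k q, so the map
# (q, p) -> (q + qdot*dt, p + pdot*dt) has Jacobian matrix
# [[1, dt/m], [-k*dt, 1]] with determinant 1 + k*dt^2/m = 1 + O(dt^2).
m, k = 1.0, 2.0

def jacobian_det(dt):
    return 1.0 - (dt / m) * (-k * dt)

for dt in (0.1, 0.01, 0.001):
    print(dt, jacobian_det(dt) - 1.0)  # shrinks 100x per factor of 10 in dt
```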
6. Consider the time average of an observable A over a long interval T:

\bar{A} = \frac{1}{T} \int_0^T dt \, A(q_i, p_i) = \int d\Gamma \, A(q_i, p_i) \, \rho(q_i, p_i) \,, (34)
where ρ(qi , pi ) is the ergodic distribution. The idea is that the average of a parameter over
time yields the same result as the average of the parameter over a statistical ensemble of
states weighted by the probabilities of occupation. A slightly more precise formulation is the
ergodic theorem, due to von Neumann and Birkhoff, which asserts that (i) if we consider
the time average of a quantity A along a trajectory initially located at some point in phase
space, then in the limit as T → ∞, the time average converges to a limit; and (ii) this limit
is the weighted average of A over the accessible phase space. Thus, when the system evolves,
after a long time, it forgets about the initial conditions. The trajectory from any initial point
fills all of the accessible phase space. These ideas are not universally applicable, however. For
example, integrable systems, in which the motion in phase space is confined to an invariant
torus, do not obey the ergodic hypothesis.
7. When two systems are placed in contact, we saw last time that the entropy acquires its
maximum value when the system equilibrates to constant temperature. How much does
the entropy change? Suppose that initially, the energies of the subsystems are E_1^0 and E_2^0,
respectively. The systems are allowed to exchange energy, but not particles. The total energy,
which is conserved, is E = E1 + E2 . The total number of states is
\Omega(E) = \sum_{E_1} \Omega_1(E_1) \cdot \Omega_2(E - E_1) = \Omega_1(E_1^0) \cdot \Omega_2(E_2^0) + \sum_{E_1 \neq E_1^0} \Omega_1(E_1) \cdot \Omega_2(E - E_1) \,. (35)
If T_1 > T_2, the entropy increases. According to the principle of maximum entropy, the system
seeks an equilibrium condition in which the number of allowed configurations is maximized.
As the entropy of the combined system increases when energy is transferred from hot to cold,
this determines the direction of heat flow.
From (37), the entropy after \Delta E of energy is transferred is S_0 + \Delta S. When \Delta E is transferred, the probability that energy flows from the colder to the hotter subsystem rather than the reverse is given by the ratio
P = \frac{\# \text{ of states with energy flow from } 2 \to 1}{(\# \text{ of states with energy flow from } 2 \to 1) + (\# \text{ of states with energy flow from } 1 \to 2)}
  = \frac{(\Omega_1 \cdot \Omega_2)_0 \, e^{-\Delta S/k_B}}{(\Omega_1 \cdot \Omega_2)_0 \, e^{-\Delta S/k_B} + (\Omega_1 \cdot \Omega_2)_0 \, e^{+\Delta S/k_B}} \approx 10^{-732} \,. (39)
With these odds, we shouldn’t place a bet on the room becoming spontaneously air condi-
tioned.
8. Using the previous example, if two rooms at temperatures T_1 and T_2 are the same size, we expect to achieve a final temperature T = 273.2 K. This raises the temperature of the second room by 0.05 K. Consider the energy dE that must be transferred in order to accomplish the
change in temperature dT . The ratio of these quantities is the heat capacity:
C_X = \left( \frac{\partial E}{\partial T} \right)_X \,. (40)
Extensive properties are homogeneous functions of degree one with respect to the extensive
variables. Euler’s homogeneous function theorem tells us that
F(\{a_i\}, \{S_j\}) = \sum_j S_j \left( \frac{\partial F}{\partial S_j} \right)_{S_{k \neq j}} \,. (42)
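A finite-difference check of Euler's theorem for a degree-one homogeneous function (the particular F below is an arbitrary illustrative choice):

```python
import math

# F(x, y) = sqrt(x*y) + 3*y is homogeneous of degree one:
# F(t*x, t*y) = t * F(x, y).  Euler's theorem: F = x F_x + y F_y.
def F(x, y):
    return math.sqrt(x * y) + 3.0 * y

x, y, h = 2.0, 5.0, 1e-6
Fx = (F(x + h, y) - F(x - h, y)) / (2 * h)  # central difference for dF/dx
Fy = (F(x, y + h) - F(x, y - h)) / (2 * h)  # central difference for dF/dy
print(F(x, y), x * Fx + y * Fy)  # the two agree
```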