
Statistical Physics Vishnu Jejjala

Lecture 2
1. Stirling's approximation tells us that
$$\log N! = \sum_{n=1}^{N} \log n \approx \int_1^N dn\, \log n \approx N \log N - N + 1 \,. \qquad (1)$$
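As a quick numerical sanity check, we can compare this estimate against the exact value of log N! (a sketch in Python; `math.lgamma(N + 1)` gives log N! directly):

```python
from math import lgamma, log, pi

N = 100
exact = lgamma(N + 1)                  # log N!, computed exactly
crude = N * log(N) - N + 1             # the estimate in (1)
# Adding the next term of Stirling's series, (1/2) log(2 pi N):
refined = N * log(N) - N + 0.5 * log(2 * pi * N)

print(exact, crude, refined)
```

For N = 100 the crude estimate is off by about 2, while including the ½ log 2πN term brings the error below 10⁻³.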

(The first correction after order N is actually ½ log 2πN, so there is an extra subleading divergence for large N that our crude estimate fails to capture.) In the last lecture, we calculated
the number of microstates corresponding to an ideal gas of N identical and indistinguishable
spin-0 particles of mass m in a fixed cubic volume V at a total energy E. While this is a
somewhat messy computation in the microcanonical ensemble, we persevered and managed
to deduce that
$$\Omega(E) = \frac{V^N}{N!} \left(\frac{m}{2\pi\hbar^2}\right)^{3N/2} \frac{E^{\frac{3N}{2}-1}}{\Gamma\!\left(\frac{3N}{2}\right)} \,. \qquad (2)$$
The entropy is proportional to the logarithm of the total number of states. Therefore,
$$S = k_B \log \Omega(E) = k_B \log\left[\frac{V^N}{N!} \left(\frac{m}{2\pi\hbar^2}\right)^{3N/2} \frac{E^{\frac{3N}{2}-1}}{\Gamma\!\left(\frac{3N}{2}\right)}\right]$$
$$= N k_B \left[\log\frac{V}{N} + \frac{3}{2}\log\frac{mE}{3N\pi\hbar^2} + \frac{5}{2} + O\!\left(\frac{\log N}{N}\right)\right] . \qquad (3)$$
This is known as the Sackur–Tetrode equation. It is valid in the limit where V and N are large. In particular, since N ≫ 1, we use the simplifications E^(3N/2 − 1) ≈ E^(3N/2) and Γ(3N/2) ≈ (3N/2)!.
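These simplifications are easy to test numerically (a sketch in Python; the choice of units with m/(2πℏ²) = 1 and the sample values of N, V, E are arbitrary, for illustration only). The entropy per particle from the exact log Ω in (2) should approach the Sackur–Tetrode form:

```python
from math import lgamma, log

N = 10**5      # particle number (large)
V = 10.0 * N   # volume, chosen so V/N = 10
E = 5.0 * N    # energy, chosen so E/N = 5

# log Omega from (2), in units where m/(2 pi hbar^2) = 1:
log_omega = (N * log(V) - lgamma(N + 1)
             + (1.5 * N - 1) * log(E) - lgamma(1.5 * N))

# Sackur-Tetrode per-particle entropy S/(N k_B) from (3);
# in these units mE/(3 N pi hbar^2) = 2E/(3N):
s_exact = log_omega / N
s_st = log(V / N) + 1.5 * log(2 * E / (3 * N)) + 2.5

print(s_exact, s_st)
```

The two values agree to about one part in 10⁴, consistent with the O(log N / N) error in (3).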
We write the first law of thermodynamics as
$$dS = \frac{1}{T}\, dE + \frac{p}{T}\, dV - \frac{\mu}{T}\, dN \,. \qquad (4)$$
Since the entropy is a state function S(E, V, N ), from taking partial derivatives of S we
determine the temperature and pressure:

$$\frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{V,N} = \frac{3}{2}\, \frac{N k_B}{E} \;\Longrightarrow\; E = \frac{3}{2}\, N k_B T \,, \qquad (5)$$
$$\frac{p}{T} = \left(\frac{\partial S}{\partial V}\right)_{E,N} = \frac{N k_B}{V} \;\Longrightarrow\; pV = N k_B T \,. \qquad (6)$$

These are the equipartition theorem and ideal gas law, respectively. They are the
equations of state for the ideal gas.
By modifying the assumptions, for example, by letting the gas molecules occupy a finite
volume and interact via intermolecular forces, the equation of state is altered. This can yield
improvements to the ideal gas law:
$$\left(p + \frac{aN^2}{V^2}\right)(V - b) = N k_B T \,, \quad \text{(van der Waals)} \,, \qquad (7)$$
$$p\,(V - b) = N k_B T \exp\left(-\frac{a}{N k_B T V}\right) \,, \quad \text{(Dieterici)} \,, \qquad (8)$$
where a and b are suitably chosen dimensionful parameters. We also sometimes write
$$\frac{p}{k_B T} = \rho + \sum_{n=2}^{\infty} B_n(T)\, \rho^n \,, \qquad \rho = \frac{N}{V} \,. \qquad (9)$$


The Virial expansion expresses the pressure of a gas at a temperature T as a power series in
the number density of particles.
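To see the connection concretely, the van der Waals pressure can be expanded in powers of ρ. Writing b = N b₁ so that the excluded volume is extensive (a convention we adopt here; the values of a and b₁ below are arbitrary illustrative numbers in reduced units with k_B T = 1), the second virial coefficient comes out as B₂(T) = b₁ − a/(k_B T). A minimal Python sketch checks the truncation at low density:

```python
# Sketch: the virial series truncated at n = 2 reproduces the
# van der Waals pressure at low density, up to O(rho^3) terms.
a, b1 = 0.5, 0.1    # arbitrary illustrative parameters
kBT = 1.0           # reduced units

def p_vdw(rho):
    # van der Waals, eq. (7), divided by k_B T, with b = N * b1
    return rho / (1.0 - b1 * rho) - (a / kBT) * rho**2

def p_virial(rho):
    # virial expansion (9) truncated at n = 2
    B2 = b1 - a / kBT
    return rho + B2 * rho**2

rho = 0.01
diff = abs(p_vdw(rho) - p_virial(rho))   # leftover is O(rho^3)
print(diff)
```

At ρ = 0.01 the two pressures agree to roughly b₁²ρ³ ≈ 10⁻⁸, as expected from the truncation.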

2. This is an alternate derivation of the entropy starting from the equations of state, which we
can think of as phenomenological descriptions of the ideal gas. We know that

dE = T dS − p dV + µ dN . (10)

Let us work with fixed particle number so that dN = 0. Using pV = N k_B T and E = (3/2) N k_B T, we have
$$dS = \frac{1}{T}\, dE + \frac{p}{T}\, dV = \frac{3}{2}\, N k_B\, \frac{dT}{T} + N k_B\, \frac{dV}{V} \,. \qquad (11)$$
This can then be integrated:
$$S(T, V) - S_0(T_0, V_0) = \frac{3}{2}\, N k_B \log\frac{T}{T_0} + N k_B \log\frac{V}{V_0} = N k_B \log\left[\left(\frac{T}{T_0}\right)^{3/2} \frac{V}{V_0}\right] . \qquad (12)$$

As pV ∝ T, we also obtain:
$$S(T, p) - S_0(T_0, p_0) = N k_B \log\left[\left(\frac{T}{T_0}\right)^{5/2} \frac{p_0}{p}\right] . \qquad (13)$$

Here, S0 is a reference quantity.
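A quick consistency check (a Python sketch, with N k_B set to 1 and arbitrary reference and final states): since p ∝ T/V, equations (12) and (13) must give the same entropy difference for the same pair of states.

```python
from math import log

NkB = 1.0                      # set N k_B = 1 for convenience
T0, V0 = 300.0, 1.0            # reference state (arbitrary)
T,  V  = 450.0, 2.0            # final state (arbitrary)

# Eq. (12): S(T, V) - S0
dS_TV = NkB * log((T / T0)**1.5 * (V / V0))

# Eq. (13): S(T, p) - S0, using p = N k_B T / V, so p0/p = (T0/T)*(V/V0)
p0_over_p = (T0 / T) * (V / V0)
dS_Tp = NkB * log((T / T0)**2.5 * p0_over_p)

print(dS_TV, dS_Tp)
```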

3. Suppose that ĤI is an interaction Hamiltonian and |a⟩ and |b⟩ are quantum states with the
same energy. The transition probability for going from state |a⟩ to state |b⟩ is, by Fermi’s
golden rule,

$$P_{ab} = \rho\, |\langle b|\hat{H}_I|a\rangle|^2 \,, \qquad (14)$$

where ρ is the density of states at the energy E. Because of the modulus square, this is
the same as the transition probability for going from state |b⟩ to state |a⟩: Pab = Pba .
To determine how often these transitions occur, we define Rab = ωa Pab, where ωa is the probability for being in the state |a⟩ in the first place, and Σa ωa = 1. We then have
$$\frac{d\omega_a}{dt} = \sum_b \omega_b P_{ba} - \sum_b \omega_a P_{ab} = \sum_b \left(R_{ba} - R_{ab}\right) . \qquad (15)$$

In the center of the previous expression, the first sum is the increase in probability of being
in the state |a⟩ due to transitions from other states |b⟩ into |a⟩ while the second sum is the
decrease in the probability of being in the state |a⟩ due to transitions to other states |b⟩. At
equilibrium ω̇a = 0 for all states |a⟩ since macroscopic observables no longer change. Thus,
for all |a⟩,
$$0 = \sum_b (\omega_b - \omega_a)\, P_{ab} \,. \qquad (16)$$
This means, if Pab is non-degenerate, ωa = ωb for all |a⟩ and |b⟩. This is the principle of equal
a priori probabilities. Hence, at equilibrium Rab = Rba , and the forward and backward rates
of transition are the same. We have obtained the principle of detailed balance.
Suppose that Pac ≪ Pab for all a and b ̸= c. This means that the likelihood of transition from
any state |a⟩ to the state |c⟩ is exceedingly small. If we find ourselves in the state |c⟩, we are
also extremely unlikely to transition out of |c⟩ to some other state. The system spends a lot
of time in |c⟩. Once there, it is hard to get out. Indeed, as much time is spent in |c⟩ as in
|a⟩. This is another statement of the principle of equal a priori probabilities.
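The relaxation to equal a priori probabilities can be seen directly by integrating the master equation (15) for a small toy system (a Python sketch; the symmetric matrix P below is an arbitrary illustrative choice):

```python
# Toy master equation: d(omega_a)/dt = sum_b P[a][b] * (omega_b - omega_a),
# with symmetric transition probabilities P_ab = P_ba.
P = [[0.0, 1.0, 0.5],
     [1.0, 0.0, 0.7],
     [0.5, 0.7, 0.0]]          # arbitrary symmetric rates
omega = [0.9, 0.1, 0.0]        # start far from equilibrium

dt = 0.01
for _ in range(5000):          # forward-Euler evolution to t = 50
    domega = [sum(P[a][b] * (omega[b] - omega[a]) for b in range(3))
              for a in range(3)]
    omega = [omega[a] + dt * domega[a] for a in range(3)]

print(omega)
```

Total probability is conserved at every step (the gain and loss terms cancel pairwise because P is symmetric), and ω relaxes to the uniform distribution [1/3, 1/3, 1/3].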


4. As an important aside, the Legendre transform of a function f with respect to the variable
x is defined as

$$g = f(x, y_1, \ldots, y_n) - p(x, y_1, \ldots, y_n)\cdot x \,, \qquad p := \left(\frac{\partial f}{\partial x}\right)_{y_i} . \qquad (17)$$

The Legendre transform g is a function of the variables p, y1, . . . , yn. If f is a function of
a single variable x that we plot, then geometrically, the Legendre transform describes the
positions where the vertical axis is intersected by tangents to f . For the Legendre transform
to be unique, the slope f ′ (x) must be a strictly monotonic function. This is a requirement
in order for f ′ (x) to be invertible. If f ′ (x) is strictly monotonic, its inverse function is also
strictly monotonic. If we do the Legendre transform twice, we recover the original function.
The Legendre transform of f (x, y1 , . . . , yn ) with respect to x is the function g(p, y1 , . . . , yn ).
We know that
$$df = \left(\frac{\partial f}{\partial x}\right)_{y_i} dx + \sum_{i=1}^{n} \left(\frac{\partial f}{\partial y_i}\right)_{x,\, y_{j\neq i}} dy_i \,. \qquad (18)$$

Similarly,
$$dg = \left(\frac{\partial g}{\partial p}\right)_{y_i} dp + \sum_{i=1}^{n} \left(\frac{\partial g}{\partial y_i}\right)_{p,\, y_{j\neq i}} dy_i \,. \qquad (19)$$

From the definition of the Legendre transform,
$$dg = df - x\, dp - p\, dx = -x\, dp + \sum_{i=1}^{n} \left(\frac{\partial f}{\partial y_i}\right)_{x,\, y_{j\neq i}} dy_i \,. \qquad (20)$$

We see immediately from comparing (19) and (20) that
$$x = -\left(\frac{\partial g}{\partial p}\right)_{y_i} \,, \qquad \left(\frac{\partial g}{\partial y_i}\right)_{p,\, y_{j\neq i}} = \left(\frac{\partial f}{\partial y_i}\right)_{x,\, y_{j\neq i}} \,. \qquad (21)$$

As an illustrative example, consider f(x) = x²; p = f′(x) = 2x, which means that x(p) = p/2, and
$$g(p) = f(x) - p\cdot x = \left(\frac{p}{2}\right)^{2} - p\cdot\frac{p}{2} = -\frac{p^2}{4} \,. \qquad (22)$$
Taking the differential,
$$dg = -\frac{p}{2}\, dp = -x(p)\, dp \,. \qquad (23)$$
Now, let's take the Legendre transform of g(p) with respect to the variable p. Then, q = g′(p) = −p/2. We therefore have p(q) = −2q. The Legendre transform is
$$h(q) = g(p) - q\cdot p = -\frac{1}{4}(-2q)^2 - q\cdot(-2q) = q^2 \,. \qquad (24)$$
Thus, taking the Legendre transform twice gives us the function we started out with just as
we claimed.
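This worked example is easy to verify numerically (a Python sketch following the text's sign convention g = f − p·x, with the inverses of f′ and g′ coded by hand):

```python
# f(x) = x^2, with p = f'(x) = 2x, so x(p) = p/2.
f = lambda x: x**2
x_of_p = lambda p: p / 2.0
g = lambda p: f(x_of_p(p)) - p * x_of_p(p)     # Legendre transform of f

# g(p) = -p^2/4, with q = g'(p) = -p/2, so p(q) = -2q.
p_of_q = lambda q: -2.0 * q
h = lambda q: g(p_of_q(q)) - q * p_of_q(q)     # Legendre transform of g

print(g(3.0), h(1.5))   # g(3) = -9/4; h(1.5) recovers f(1.5) = 2.25
```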

5. Consider the Hamiltonian H. It is the sum of the kinetic energy T and the potential energy
V. Generally, we have
$$H = T + V = \sum_i \frac{p_i^2}{2m} + V(q_i) \,, \qquad (25)$$


where (qi , pi ) are generalized coordinates and momenta, collectively called phase space. The
index i runs over the degrees of freedom of the system. We may regard the Hamiltonian
as (up to a sign) the Legendre transform of the Lagrangian, where we swap the generalized
velocities q̇i for the generalized momenta pi :
$$L = T - V \,, \qquad p_i = \frac{\partial L}{\partial \dot{q}_i} \,, \qquad H = \sum_i p_i \dot{q}_i - L \,. \qquad (26)$$

Whereas the Euler–Lagrange equations, which are derived from a variational principle, are
second order in time, there are twice as many Hamilton equations, but they are first order:
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0 \quad\Longleftrightarrow\quad \dot{q}_i = \frac{\partial H}{\partial p_i} \,, \quad \dot{p}_i = -\frac{\partial H}{\partial q_i} \,. \qquad (27)$$
We observe that
$$\frac{\partial^2 H}{\partial q_i\, \partial p_i} = \frac{\partial \dot{q}_i}{\partial q_i} = -\frac{\partial \dot{p}_i}{\partial p_i} \,. \qquad (28)$$
Consider the volume element in phase space at a time t:
$$d\Gamma = \prod_i dq_i\, dp_i \,. \qquad (29)$$

At an infinitesimal later time t + δt, we have


$$q_i \;\to\; q_i' = q_i + \dot{q}_i\, \delta t = q_i + \frac{\partial H}{\partial p_i}\, \delta t \,, \qquad (30)$$
$$p_i \;\to\; p_i' = p_i + \dot{p}_i\, \delta t = p_i - \frac{\partial H}{\partial q_i}\, \delta t \,. \qquad (31)$$
We can think of this as a change of coordinates. The volume element in the phase space
defined by (qi′ , p′i ) is related to dΓ by a Jacobian factor associated to the new coordinates.
That is to say,
$$d\Gamma' = \prod_i dq_i'\, dp_i' = (\det J)\, d\Gamma \,, \qquad \det J = \det\!\begin{pmatrix} \delta_{ij} + \frac{\partial^2 H}{\partial p_i \partial q_j}\, \delta t & \frac{\partial^2 H}{\partial p_i \partial p_j}\, \delta t \\ -\frac{\partial^2 H}{\partial q_i \partial q_j}\, \delta t & \delta_{ij} - \frac{\partial^2 H}{\partial q_i \partial p_j}\, \delta t \end{pmatrix} . \qquad (32)$$

Let us consider infinitesimal δt. We know that det(I + ϵM ) = 1 + ϵ tr M + O(ϵ2 ), so


$$\det J = 1 + \sum_i \left(\frac{\partial^2 H}{\partial p_i\, \partial q_i} - \frac{\partial^2 H}{\partial q_i\, \partial p_i}\right) \delta t + O(\delta t^2) = 1 + O(\delta t^2) \,, \qquad (33)$$

since O(δt) terms cancel by (28). Thus, the differential volume element in phase space is
independent of time. This is a requirement of Liouville’s theorem, the statement that the
volume of a region of phase space remains constant under time evolution via Hamilton’s
equations. The shape of the region can change as we map out dΓ over a foliation in time.

6. Relatedly, for T sufficiently large, Boltzmann proposed the ergodic hypothesis:

$$\bar{A} = \frac{1}{T} \int_0^T dt\, A(q_i, p_i) = \int d\Gamma\, A(q_i, p_i)\, \rho(q_i, p_i) \,, \qquad (34)$$

where ρ(qi , pi ) is the ergodic distribution. The idea is that the average of a parameter over
time yields the same result as the average of the parameter over a statistical ensemble of


states weighted by the probabilities of occupation. A slightly more precise formulation is the
ergodic theorem, due to von Neumann and Birkhoff, which asserts that (i) if we consider
the time average of a quantity A along a trajectory initially located at some point in phase
space, then in the limit as T → ∞, the time average converges to a limit; and (ii) this limit
is the weighted average of A over the accessible phase space. Thus, when the system evolves,
after a long time, it forgets about the initial conditions. The trajectory from any initial point
fills all of the accessible phase space. These ideas are not universally applicable, however. For
example, integrable systems, in which the motion in phase space is confined to an invariant
torus, do not obey the ergodic hypothesis.
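As a minimal illustration (a Python sketch with a single harmonic oscillator, H = (p² + q²)/2, whose one-dimensional energy shell is fully explored by the trajectory): the time average of q² along the trajectory matches the average of q² over the energy shell.

```python
from math import cos, pi

# Harmonic oscillator H = (p^2 + q^2)/2: trajectory q(t) = A cos t.
A = 1.0
dt, T = 0.001, 1000.0
n = int(T / dt)

# Time average of q^2 along the trajectory, (1/T) * integral dt q(t)^2:
time_avg = sum(cos(k * dt)**2 for k in range(n)) * A**2 / n

# Ensemble average over the energy shell q^2 + p^2 = A^2,
# uniform in the angle parametrizing the shell:
m = 1000
shell_avg = sum(cos(2 * pi * k / m)**2 for k in range(m)) * A**2 / m

print(time_avg, shell_avg)   # both close to A^2/2
```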

7. When two systems are placed in contact, we saw last time that the entropy acquires its
maximum value when the system equilibrates to constant temperature. How much does
the entropy change? Suppose that initially, the energies of the subsystems are E_1^0 and E_2^0, respectively. The systems are allowed to exchange energy, but not particles. The total energy,
which is conserved, is E = E1 + E2 . The total number of states is
$$\begin{aligned} \Omega(E) &= \sum_{E_1} \Omega_1(E_1)\cdot \Omega_2(E - E_1) \\ &= \Omega_1(E_1^0)\cdot \Omega_2(E_2^0) + \sum_{E_1 \neq E_1^0} \Omega_1(E_1)\cdot \Omega_2(E - E_1) \\ &\geq \Omega_1(E_1^0)\cdot \Omega_2(E_2^0) = (\Omega_1 \cdot \Omega_2)_0 \,. \end{aligned} \qquad (35)$$

Thus, taking the logarithm,
$$S(E) \geq S_1(E_1^0) + S_2(E_2^0) \,. \qquad (36)$$
The entropy of a combined system is greater than the entropy from before unless the two
systems were initially at the same temperature, in which case the inequality is saturated.
This is the second law of thermodynamics.
Which way does the energy flow? Suppose ∆E1 = −∆E and ∆E2 = +∆E, meaning that
energy flows from the first system to the second. Then,
 
$$\Delta S = \frac{\partial S_1}{\partial E_1}\, \Delta E_1 + \frac{\partial S_2}{\partial E_2}\, \Delta E_2 = \left(\frac{1}{T_2} - \frac{1}{T_1}\right) \Delta E \,. \qquad (37)$$

If T1 > T2 , the entropy increases. According to the principle of maximum entropy, the system
seeks an equilibrium condition in which the number of allowed configurations is maximized.
As the entropy of the combined system increases when energy is transferred from hot to cold,
this determines the direction of heat flow.
From (37), the entropy after ∆E of energy is transferred is S0 + ∆S, which implies that

$$\Omega' = e^{(S_0 + \Delta S)/k_B} = (\Omega_1 \cdot \Omega_2)_0\, e^{\Delta S/k_B} \,. \qquad (38)$$

Suppose T₁ = 293.25 K, T₂ = 293.15 K, and ΔE = 10⁻¹⁴ J. (Recall that 273.15 K = 0°C; ΔE is about the energy of a gamma ray photon or the rest mass energy of an electron.) Then e^(ΔS/k_B) ≈ 10³⁶⁶. Transferring a minute amount of energy from a slightly hotter system to a slightly colder one increases the number of available microstates by a factor of 10³⁶⁶. Conversely, transferring a minute amount of energy from a slightly colder system to a slightly hotter one decreases the number of available microstates by a factor of 10⁻³⁶⁶. Given that


ΔE is transferred, the probability that the latter process occurs in favor of the former is given by the ratio
$$P = \frac{\#\ \text{of states with energy flow from}\ 2 \to 1}{(\#\ \text{of states with energy flow from}\ 2 \to 1) + (\#\ \text{of states with energy flow from}\ 1 \to 2)} = \frac{(\Omega_1 \cdot \Omega_2)_0\, e^{-\Delta S/k_B}}{(\Omega_1 \cdot \Omega_2)_0\, e^{-\Delta S/k_B} + (\Omega_1 \cdot \Omega_2)_0\, e^{\Delta S/k_B}} \approx 10^{-732} \,. \qquad (39)$$

With these odds, we shouldn't place a bet on the room becoming spontaneously air-conditioned.
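The arithmetic behind these estimates is easy to reproduce (a Python sketch using the CODATA value of k_B):

```python
from math import log

kB = 1.380649e-23        # Boltzmann constant, J/K
T1, T2 = 293.25, 293.15  # temperatures of the two systems, K
dE = 1e-14               # energy transferred, J

# Eq. (37): entropy change when dE flows from system 1 (hot) to 2 (cold)
dS = (1.0 / T2 - 1.0 / T1) * dE

exp_log10 = (dS / kB) / log(10.0)    # log10 of the factor e^{dS/kB}
prob_log10 = -2.0 * exp_log10        # log10 of the probability in (39)

print(exp_log10, prob_log10)   # about 366 and -732
```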

8. Using the previous example, if two rooms at temperatures T₁ and T₂ are the same size, we expect to achieve a final temperature T = 293.2 K. This raises the temperature of the second
room by 0.05 K. Consider the energy dE that must be transferred in order to accomplish the
change in temperature dT . The ratio of these quantities is the heat capacity:

$$C_X = \left(\frac{\partial E}{\partial T}\right)_X \,. \qquad (40)$$

The quantity X can be volume or pressure or any number of thermodynamic quantities; it is held constant when we take the partial derivative of the energy. The energy is extensive, meaning it is additive for non-interacting, independent systems. By contrast, intensive
properties are independent of the system size. Energy, entropy, volume, and particle number
are extensive quantities while temperature, pressure, chemical potential, and densities are
intensive. A measure of the heat capacity independent of the size of the system is the specific
heat: cX = CX /mass. We divide one extensive quantity by another.
Suppose the parameters {ai } are intensive and {Sj } are extensive. The function F ({ai }, {Sj })
is extensive if and only if, for all λ,
$$F(\{a_i\}, \{\lambda S_j\}) = \lambda\, F(\{a_i\}, \{S_j\}) \,. \qquad (41)$$

Extensive properties are homogeneous functions of degree one with respect to the extensive
variables. Euler’s homogeneous function theorem tells us that

$$F(\{a_i\}, \{S_j\}) = \sum_j S_j \left(\frac{\partial F}{\partial S_j}\right)_{S_{k\neq j}} . \qquad (42)$$

Thus, the first law (10) tells us that

$$E(\{T, p, \mu\}, \{S, V, N\}) = T S - p V + \mu N \,. \qquad (43)$$
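Euler's theorem (42) can be sanity-checked numerically (a Python sketch with an arbitrary degree-one homogeneous test function F(S, V) = S²/V, standing in for a thermodynamic potential):

```python
# F(S, V) = S^2 / V is homogeneous of degree one in (S, V):
# F(lam*S, lam*V) = lam * F(S, V).
def F(S, V):
    return S**2 / V

def partial(f, args, i, eps=1e-6):
    # central-difference partial derivative with respect to argument i
    up = list(args); up[i] += eps
    dn = list(args); dn[i] -= eps
    return (f(*up) - f(*dn)) / (2 * eps)

S, V = 2.0, 3.0
lhs = F(S, V)
rhs = S * partial(F, (S, V), 0) + V * partial(F, (S, V), 1)  # eq. (42)
print(lhs, rhs)
```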
