
Fundamentals of Statistical Mechanics


453.320 Statistical Physics, S.M. Tan, The University of Auckland

Chapter 2 Fundamentals of Statistical Mechanics


2.1 Microstates and Macrostates

In classical mechanics, a microstate of a system is a complete description of what every particle in the system is doing. At a given instant of time, this involves specifying the position and momentum of every particle. When we observe a macroscopic system, we do not see this level of detail. Instead, we group together macroscopically indistinguishable microstates into macrostates. These are the states that we deal with in classical thermodynamics, and as in the previous chapter, they are specified in terms of a (relatively small) set of state variables, such as the energy, volume and particle number for an isolated, homogeneous material. Note that macrostates are defined in terms of macroscopic indistinguishability, which is subjective to the extent that it depends on the accuracy to which we make the measurements that tell the states apart. It turns out that a key operation in statistical mechanics is counting the number of microstates which are accessible to the system while being consistent with a given macrostate. This is called the statistical weight of the macrostate, and is denoted by Ω. In classical mechanics, this counting process is not well-defined, since the dynamical variables of the particles in a system can assume a continuum of values and there are no discrete microstates. By using the quantum mechanical description of a system, however, we can give a well-defined prescription for finding its thermodynamical properties, so long as the volume is finite. Later in the course, we shall treat an ideal gas, but since the quantum mechanics required to describe this is somewhat complicated, we start with a system which is much simpler to analyse from a quantum mechanical viewpoint.

2.1.1 Some essential quantum mechanics

In your previous courses in quantum mechanics, you have probably considered only a single particle. This is not a macroscopic system, and so one of the things we shall need to learn is how to treat systems of many particles in quantum mechanics, since a microstate is a state of all the particles which make up the system. Let us start, however, by considering the familiar example of a single particle confined in a one-dimensional box extending from x = 0 to x = L with infinitely high walls. As we know, the particle is described by a complex-valued wave function ψ(x, t) which satisfies Schrödinger's equation. This wave function is such that |ψ(x, t)|² tells us the probability density of finding the particle at x at time t. Out of all of these wave functions, there are some special ones called the stationary states. These have the property that |ψ(x, t)|² is independent of time. They form a discrete set, which we may label as ψ_n(x, t), where n is the quantum number. Stationary states turn out to be states of definite energy (i.e., if we look at a collection of quantum systems which are each in the nth stationary state and measure the energy of each, we always get the same definite answer E_n) and are thus also called energy eigenstates. The wave function of the nth stationary state can be written as

    ψ_n(x, t) = φ_n(x) exp(−iE_n t/ℏ)    (2.1)

where for the particle in the box of length L, the spatial part of the wave function is

    φ_n(x) = √(2/L) sin(nπx/L)  for 0 ≤ x ≤ L,  and  φ_n(x) = 0  otherwise    (2.2)

where n ∈ {1, 2, 3, ...} and the energy eigenvalues are

    E_n = n²π²ℏ²/(2mL²)    (2.3)

An arbitrary wave function of the system can be written as a linear combination of the ψ_n(x, t). In statistical mechanics, we identify microstates with the stationary states of the system. Thus, we see that a single particle in a one-dimensional box with infinitely high walls already has an infinite number of discrete microstates.


Even this apparently simple problem already has an infinite number of microstates, and is already more complicated than we would like. Fortunately, in quantum mechanics there are simpler (but perhaps less familiar) systems which have a finite number of discrete energy eigenstates. The simplest non-trivial single-particle system has only two energy eigenstates, and is (not surprisingly) called a two-level system. A physical example of a two-level system is the magnetic moment of an electron (or of any spin-half particle) in an applied magnetic field. Suppose that we have a magnetic field B oriented along the z direction. If we place an electron within this field, it acts like an elementary magnet with a magnetic moment μ. Classically, one would think that it is possible to point this magnetic moment in any direction, so that it has an energy of −μ·B = −μB cos θ in the field. Quantum mechanically, we find that this is not the case. For electrons (and other spin-half particles), measuring the energy always yields one of only two possible values, namely +μB or −μB, which are the energy eigenvalues of the two energy eigenstates of the system. It is as though the elementary magnet could only point in one of two directions, which we can call spin up, where μ is aligned parallel to B, with energy −μB, or spin down, where μ is anti-parallel to B, with energy +μB. A system of a single elementary magnet of spin-half in a magnetic field has only two microstates.

2.2 A Simple Soluble System

Let us now consider a system of N elementary magnets of spin half in an external magnetic field. We suppose the magnets to be fixed in place, e.g., in a crystal lattice, so that they can be distinguished from each other by their location. We shall further suppose that the magnets do not interact with each other but only with the applied external magnetic field. Because of the assumption of no interaction, an energy eigenstate of the N-particle system is one for which each of the N elementary magnets is in an energy eigenstate. This shows that the N-particle system has 2^N energy eigenstates, each of which counts as a microstate of the N-particle system. For example, for the case of N = 3, the table shows the eight energy eigenstates and the energy eigenvalue associated with each eigenstate.

    Magnet 1   Magnet 2   Magnet 3   Energy eigenvalue
    up         up         up         -3μB
    up         up         down       -μB
    up         down       up         -μB
    up         down       down       +μB
    down       up         up         -μB
    down       up         down       +μB
    down       down       up         +μB
    down       down       down       +3μB

What are the macrostates of this system? The obvious state variable for this system is the energy. If we use this to distinguish the macrostates, and if we assume that the resolution of our equipment is fine enough that we can distinguish the different energies, we see that there are four macrostates, which have energies −3μB, −μB, +μB and +3μB respectively. Looking at this table, we see that although there is only one microstate corresponding to each of the macrostates of energy −3μB and +3μB, there are three microstates corresponding to each of the macrostates of energy −μB and energy +μB. We call the number of microstates corresponding to a single macrostate the statistical weight of the macrostate, and denote it by the symbol Ω. For example, in this system we have Ω(E = μB) = 3. We can straightforwardly generalize these considerations to the case of arbitrary N. The energy eigenvalue corresponding to a particular energy eigenstate depends on the excess of up spins over down spins. If there are n up spins and N − n down spins, the energy eigenvalue is

    E(n) = −nμB + (N − n)μB = (N − 2n)μB    (2.4)
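This counting can be reproduced directly. The following is a minimal sketch (not part of the original notes) that enumerates the 2^N microstates for N = 3 and tallies the statistical weight of each energy macrostate; energies are in units of μB.

    from itertools import product
    from collections import Counter

    N = 3
    # Each microstate is an N-tuple of +1 (up spin) or -1 (down spin).
    # An up spin contributes -muB and a down spin +muB, so E/(muB) = -sum(s).
    weights = Counter(-sum(state) for state in product((+1, -1), repeat=N))
    for energy in sorted(weights):
        print(f"E = {energy:+d} muB  ->  Omega = {weights[energy]}")
    # Prints Omega = 1, 3, 3, 1 for E/muB = -3, -1, +1, +3, matching the table.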


We may thus use n as a label for our macrostates. The number of microstates which correspond to a macrostate with n up spins is given by the number of ways of choosing n objects from among N, the binomial coefficient C(N, n). Thus the statistical weight of the macrostate n is

    Ω(n) = C(N, n) = N!/(n!(N − n)!)    (2.5)

Let us suppose that our system of N elementary magnets in a magnetic field B is completely isolated from the environment so that its energy is fixed. In this situation, the value of n (and hence the macrostate) is determined by the energy,

    n = (1/2)(N − E/(μB))    (2.6)

but of course, the microstate is not known. What we do know, however, is that the system is going to be in one of the Ω(n) microstates that are consistent with the macroscopic information. The fundamental assumption of statistical mechanics is that it is equally likely that the system is in any of the accessible microstates. If we label the microstates by r and denote the energy of the rth eigenstate by E_r, we are assuming that

    Pr(system is in microstate r) = 1/Ω(n)  if E_r = (N − 2n)μB,  and 0 otherwise    (2.7)

We also claim that the quantity which corresponds to the entropy of the macrostate n is

    S(n) = k log Ω(n)    (2.8)

where k is Boltzmann's constant and log denotes the natural logarithm. Notice that we are assuming here that we can measure the energy accurately enough that it is possible to distinguish the energy levels. If the applied magnetic field is so weak (or the thermal isolation is so poor) that we cannot distinguish the macrostates by their energy, all microstates become accessible and the total statistical weight is 2^N. Recall that in classical thermodynamics, once we can write S as a function of E, V and N, we can derive all the properties of the system. Here, the energy depends on n and the volume does not affect the system. For the system of N elementary magnets we see that

    S(n) = k log [N!/(n!(N − n)!)]    (2.9)

This may be simplified with the help of Stirling's approximation, which states that for large N,

    N! ≈ √(2π) N^(N+1/2) exp(−N + 1/(12N) + ...)    (2.10)

and so

    log N! ≈ (1/2) log(2π) + (N + 1/2) log N − N + 1/(12N) + ...    (2.11)

Let us look at the relative importance of these terms for N = 10^23. Evaluating each of the above terms gives

    log(10^23!) ≈ 0.9 + 5.3×10^24 + 26 − 10^23 + 8×10^−25 + ...    (2.12)

It is apparent that to a very good approximation,

    log N! ≈ N log N − N    (2.13)

Thus

    S(n) = k(N log N − N − n log n + n − (N − n) log(N − n) + (N − n))
         = k(N log N − n log n − (N − n) log(N − n))    (2.14)
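The quality of the truncation in (2.13) is easy to check numerically. A small sketch (not part of the original notes; the values of N are illustrative), using math.lgamma(N + 1) for the exact log N!:

    import math

    for N in (10, 100, 10**4, 10**6):
        exact = math.lgamma(N + 1)         # log N!
        approx = N * math.log(N) - N       # truncated Stirling form (2.13)
        print(f"N = {N:>7}: log N! = {exact:.6g}, N log N - N = {approx:.6g}, "
              f"relative error = {(exact - approx) / exact:.2e}")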

Let us consider graphs of Ω(n) and S(n), where n can range from zero to N. Since we are going to be assuming that N is large, we can consider n as being essentially a continuous variable, and replace summations over n by integrations. The function Ω(n) is an (unnormalized) binomial distribution with its peak at n = N/2, at which Ω(N/2) = C(N, N/2). The sum over all n gives the total number of microstates, which is 2^N. The graph of S(n) also has a peak at n = N/2, at which its maximum value is k log C(N, N/2). Using Stirling's approximation, the peak is over-estimated as kN log 2, which happens to be equal to the area under the true graph.


2.2.1 Gaussian Approximation to Ω(n)

If we consider a binomial probability distribution in which we have N trials and a probability of success p on each trial, the probability that there are n successes is given by

    Pr(n) = C(N, n) p^n (1 − p)^(N−n)    (2.15)

The mean of this distribution is Np and the standard deviation is √(Np(1 − p)). If we consider the case in which p is not so close to 0 or 1 that the mean lies within a few standard deviations of the endpoints 0 or N, it is possible to approximate Pr(n) by a Gaussian probability density with the same mean and standard deviation. Recall that

    Pr(x) = 1/(√(2π) σ) exp[−(1/2)((x − μ)/σ)²]    (2.16)

is the general expression for a Gaussian with mean μ and standard deviation σ. For large N, we may neglect the discrete nature of the variable, and so

    Pr(n) ≈ 1/√(2πNp(1 − p)) exp[−(n − Np)²/(2Np(1 − p))]    (2.17)

An important special case of this result occurs for p = 1/2, whereupon

    Pr(n) ≈ √(2/(πN)) exp[−(2/N)(n − N/2)²]    (2.18)

or

    Ω(n) ≈ 2^N √(2/(πN)) exp[−(2/N)(n − N/2)²]    (2.19)

This has its peak at n = N/2. We can alternatively use the true value at the peak by writing

    Ω(n) ≈ C(N, N/2) exp[−(2/N)(n − N/2)²]    (2.20)

This is very sharply peaked about the maximum. The width of the peak is σ = √N/2. For our system of N spins,

    Ω(n) ≈ C(N, N/2) exp[−(2/N)(n − N/2)²]    (2.21)

    S(n) ≈ k log C(N, N/2) − (2k/N)(n − N/2)²    (2.22)

In Figure 2.1 we show graphs of the entropy for a system of N = 1000 spins, using expression (2.14) for the solid line and expression (2.22) for the dashed line. The energy of the spin system is E(n) = (N − 2n)μB. As a function of the energy, we find

    Ω(E) ≈ C(N, N/2) exp[−E²/(2Nμ²B²)]    (2.23)

which has width σ = μB√N, and the entropy is

    S(E) ≈ k[N log 2 + (1/2) log(2/(πN)) − E²/(2Nμ²B²)]    (2.24)

Notice that the first two terms in the brackets are independent of the energy E. This is a quadratic approximation to the true expression for the entropy found previously. Recall that in classical thermodynamics, knowing S as a function of E, V and N allows one to determine all quantities of interest. For the spin system, we are assuming that the volume does not affect S, and so the above gives a complete thermodynamic characterization of the system. We shall return to calculating some of these properties after establishing the definition of temperature in the context of statistical mechanics.
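The comparison shown in Figure 2.1 can be reproduced numerically. A minimal sketch (not part of the original notes; it computes S/k, i.e. units of k) comparing the exact entropy (2.9) with the Gaussian form (2.22) for N = 1000:

    import math

    N = 1000

    def S_exact(n):      # S/k = log[N!/(n!(N-n)!)], via log-gamma
        return math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)

    S_peak = S_exact(N // 2)

    def S_gauss(n):      # S/k from the Gaussian approximation (2.22)
        return S_peak - (2.0 / N) * (n - N / 2) ** 2

    for n in (500, 600, 700, 850, 950):
        print(f"n = {n}: exact = {S_exact(n):7.2f}, Gaussian = {S_gauss(n):7.2f}")
    # The two agree near the peak n = N/2 and separate in the tails, as in Figure 2.1.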


Figure 2.1 Entropy S(n) of a system of N = 1000 spins (solid line) together with the Gaussian approximation (dashed line). The axes are the scaled entropy S(n)/k against the number of parallel spins n.

2.3 Thermal Equilibrium of two Systems

Suppose we have two systems, the first with energy E_1^(0), volume V_1^(0) and with N_1 particles, and the second with energy E_2^(0), volume V_2^(0) and with N_2 particles. At some time, these systems are placed in thermal contact, so that they can exchange energy with each other via heat flow. Given that they are isolated from the rest of the universe, we want to find out how the composite system behaves, and what the equilibrium state of the composite system is. Let us write Ω_1(E_1) for the statistical weight of the first system when its energy is E_1 and Ω_2(E_2) for the statistical weight of the second system when its energy is E_2. The entropies are then given by S_i(E_i) = k log Ω_i(E_i). While the systems are separated, the microstates (energy eigenstates) of the composite system are found by supposing that each system is individually in an energy eigenstate, and adding the energies of the components. If the energy of the first system is E_1 and that of the second is E_2, then for each microstate of the first system, the second can be in any one of Ω_2(E_2) microstates. The number of microstates of the composite system associated with this distribution of energy is

    Ω_tot(E_1, E_2) = Ω_1(E_1) Ω_2(E_2)

In this problem, the volumes and particle numbers of the individual systems are fixed. However, after thermal contact is established, the energies E_1 and E_2 can in principle have any values which satisfy

    E_1 + E_2 = E_tot = E_1^(0) + E_2^(0)    (2.25)

since the composite system is isolated. In Figure 2.2, this means that the systems must lie somewhere on the constraint line shown. By the fundamental assumption, all microstates along this line are equally likely. However, this does not mean that any combination of energies along the line is equally likely, since the microstates are unevenly distributed on the E_1-E_2 plane, and in particular along the constraint line. The statistical weight Ω_1(E_1) Ω_2(E_2) tells us the density of microstates on the plane. For a macroscopic system, this weighting function is very highly peaked, so that given a random microstate along the constraint line, it is overwhelmingly probable that the energy distribution between the systems for this state is very close to the most probable values of E_1 and E_2 shown in the figure. It is straightforward to compute the most probable energies for this problem. We need to maximize Ω_1(E_1) Ω_2(E_2) subject to the constraint E_1 + E_2 = E_tot. This requires

    0 = (d/dE_1)[Ω_1(E_1) Ω_2(E_tot − E_1)] = Ω_2(E_tot − E_1) Ω_1′(E_1) − Ω_1(E_1) Ω_2′(E_tot − E_1)    (2.26)


[Figure: contours of Ω_1(E_1) Ω_2(E_2) in the E_1-E_2 plane, with the constraint line E_1 + E_2 = const, the most probable E_1 and E_2 marked, and a section along the constraint line.]
Figure 2.2 Statistical weights as a function of energy for two systems brought into thermal contact. The most probable state is where the statistical weight is a maximum along the energy constraint line.

Dividing by Ω_1(E_1) Ω_2(E_tot − E_1) and identifying E_2 = E_tot − E_1, we see that

    0 = Ω_1′(E_1)/Ω_1(E_1) − Ω_2′(E_2)/Ω_2(E_2)    (2.27)

or

    (∂/∂E_1) log Ω_1(E_1) = (∂/∂E_2) log Ω_2(E_2)    (2.28)

Since the entropy is defined as S = k log Ω, the condition for thermal equilibrium can also be written as

    ∂S_1/∂E_1 = ∂S_2/∂E_2    (2.29)

or, recalling that

    ∂S/∂E = 1/T    (2.30)

we see that T_1 = T_2 at thermal equilibrium. Note that in statistical mechanics, we regard (2.30) as the definition of temperature. The similarities and differences between this statistical mechanics argument and the classical thermodynamics argument for this result should be noted. In statistical mechanics, the energies of the two systems are each not absolutely precisely defined when thermal equilibrium is reached, since energy can still flow between the systems. Despite this, for a macroscopic system, the fluctuations away from the most probable values are usually extremely small indeed, and so for practical purposes, we can assign an energy to each system. In classical thermodynamics, the tendency for a system to reach a state of maximum entropy is postulated as a law of thermodynamics. In the statistical mechanics context, this is seen simply as a probabilistic result following from the assumption of equal probabilities for the accessible microstates, together with the way the microstates are distributed, as given by the statistical weights of the various macrostates. Let us now compute the direction of heat flow as the composite system moves towards the most probable (maximum entropy) state. The change of entropy ΔS associated with an amount of energy ΔE being transferred from system 1 to system 2 is

    ΔS = ΔS_1 + ΔS_2 = (∂S_1/∂E_1)(−ΔE) + (∂S_2/∂E_2)(ΔE) = (1/T_2 − 1/T_1) ΔE
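The sharp peak along the constraint line can be seen directly in a small example. A sketch (not part of the original notes; the system sizes N1, N2 and the total number of up spins are illustrative) for two spin systems that exchange energy by exchanging up spins:

    from math import comb

    N1, N2 = 200, 400     # sizes of the two spin systems
    n_tot = 180           # total up spins, fixed because E_tot is fixed (E = (N - 2n) muB)

    # Composite statistical weight when system 1 holds n1 of the up spins.
    weights = {n1: comb(N1, n1) * comb(N2, n_tot - n1)
               for n1 in range(max(0, n_tot - N2), min(N1, n_tot) + 1)}
    total = sum(weights.values())
    n1_star = max(weights, key=weights.get)
    near_peak = sum(w for n1, w in weights.items() if abs(n1 - n1_star) <= 10)
    print("most probable n1:", n1_star)              # near n_tot * N1/(N1 + N2) = 60
    print("probability within 10 of the peak:", near_peak / total)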


Figure 2.3 Number of parallel spins n as a function of temperature for the system of N spins in a magnetic field, for high and low values of the field B; at high temperature n approaches N/2.
In the process of approaching equilibrium, ΔS > 0. In order for ΔE to be positive, we need 1/T_2 > 1/T_1, i.e. T_2 < T_1. Thus our definition of temperature is consistent with the idea that heat flows spontaneously from hot objects to cold objects.

Example: Temperature of the spin system. In the system of N elementary spin-half magnets of magnetic moment μ in a magnetic field B, find the temperature of the system if n of the spins are up (i.e., aligned with the magnetic field). We have from (2.14) that

    ∂S/∂n = k log[(N − n)/n]    (2.31)

Since the energy is related to n via E(n) = (N − 2n)μB, we see that

    1/T = ∂S/∂E = (k/(2μB)) log[n/(N − n)] = (k/(2μB)) ln[(NμB − E)/(NμB + E)]    (2.32)

Similarly we can write n as a function of T, yielding

    n = N/(1 + exp(−2μB/(kT))) = N/(1 + exp(−2x))    (2.33)

where x = μB/(kT). This graph is shown in Figure 2.3. At low temperatures x → ∞ and n → N, whereas at high temperatures x → 0 and n → N/2. This is as we would expect intuitively. At low temperatures, the elementary magnets align parallel with the external field in order to minimize the energy, while at high temperatures, the elementary magnets point randomly up or down, giving an average of half pointing upwards. When the magnetic field is high, the spins tend to be aligned even at higher temperatures, since the energy cost of flipping to an antiparallel state is large.

Example: Adiabatic magnetic cooling. For a reversible (quasi-static) adiabatic change, we know from classical thermodynamics that the entropy remains constant. Let us suppose that we prepare a system of spins in a magnetic field at a given temperature, thermally isolate the system, and then slowly reduce the magnetic field. From the expression (2.14), it is apparent that the entropy depends only on the number of up spins n. This means that during the reversible adiabatic process, the number of up-pointing spins does not change. Since we know that

    1/T = (k/(2μB)) log[n/(N − n)]    (2.34)


it is apparent that T ∝ B if n is fixed. Thus during the reduction of the magnetic field from B_i to B_f, say, the temperature will also be reduced from T_i to T_f, where

    T_f = (B_f/B_i) T_i    (2.35)
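Equations (2.33) and (2.35) are easy to exercise numerically. A sketch (not part of the original notes) in SI units; the moment is illustratively taken to be one Bohr magneton and the field and temperature values are arbitrary:

    import math

    k  = 1.380649e-23     # Boltzmann's constant, J/K
    mu = 9.274e-24        # magnetic moment, J/T (one Bohr magneton, illustrative)

    def up_fraction(B, T):          # n/N from equation (2.33)
        return 1.0 / (1.0 + math.exp(-2.0 * mu * B / (k * T)))

    Bi, Ti = 1.0, 1.0               # initial field (T) and temperature (K)
    Bf = 0.01                       # field after the adiabatic reduction
    Tf = Ti * Bf / Bi               # equation (2.35)
    print(f"initial up fraction: {up_fraction(Bi, Ti):.4f}")
    print(f"final temperature  : {Tf * 1e3:.0f} mK")
    print(f"final up fraction  : {up_fraction(Bf, Tf):.4f}")  # unchanged: S depends only on n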

2.4 Thermal and Mechanical Equilibrium of two Systems

We can extend the above considerations to a situation where two systems are placed in thermal and mechanical contact, so that they can not only exchange heat energy but are also able to change their volumes, subject to the constraints

    E_1 + E_2 = E_tot = E_1^(0) + E_2^(0)    (2.36)

    V_1 + V_2 = V_tot = V_1^(0) + V_2^(0)    (2.37)

If the entropy functions of the two systems are S_1(E_1, V_1, N_1) and S_2(E_2, V_2, N_2), the microstates of the composite system can explore any states consistent with the above constraints. However, the overwhelming majority of these will lie in the most probable macrostate, which is the one which maximizes

    S = S_1(E_1, V_1, N_1) + S_2(E_2, V_2, N_2)    (2.38)

subject to the constraints. In order to find the condition(s) for this maximum,

    0 = ∂S/∂E_1 = ∂S_1/∂E_1 + (∂S_2/∂E_2)(∂E_2/∂E_1) = ∂S_1/∂E_1 − ∂S_2/∂E_2    (2.39)

which is the usual condition for the equality of the temperatures. We also need to satisfy

    0 = ∂S/∂V_1 = ∂S_1/∂V_1 − ∂S_2/∂V_2    (2.40)

From our previous discussion of the fundamental thermodynamic relation, we know that

    ∂S/∂V = P/T    (2.41)

and so we obtain the result that for systems in thermal and mechanical equilibrium, the temperatures are equal and the pressures are equal. We shall give a statistical mechanics argument for (2.41) later.

2.5 Equilibrium of a System in Thermal Contact with a Heat Bath

Instead of considering an isolated system, we now consider a system in thermal contact with a heat bath of temperature T. The heat bath is supposed to be so large that its temperature does not change when heat flows into or out of it. In order to make our previous considerations apply, we consider the system and heat bath together as a composite system which is isolated from the rest of the universe. In this situation, although the total energy of system and heat bath is a constant, the energy of the system alone can fluctuate due to heat exchange between it and the bath. Thus it is no longer the case that the microstates of the system can be classified as being accessible (and occurring with equal probabilities) if they have the correct energy and inaccessible if they have the wrong energies. Instead, there is a certain probability that the system is in a particular microstate, and this is what we want to compute.


2.5.1 Probability of each system microstate

Let us label the system microstates by the index r. Each system microstate is an energy eigenstate of the system, and let us suppose that the energy eigenvalues are ordered such that

    E_1 ≤ E_2 ≤ E_3 ≤ ... ≤ E_r ≤ ...    (2.42)

What is the probability that the system is in the microstate r? It is simply the fraction of all microstates of the composite system which have this property. All that we know about the heat bath when we are told that the system is in state r is that the energy of the heat bath is

    E_B = E_tot − E_r    (2.43)

If Ω_B(E_B) is the statistical weight of the bath when its energy is E_B, then the probability p_r that the system is in microstate r is

    p_r = (No. of microstates of the composite system in which the system is in microstate r) / (Total number of microstates of the composite system)    (2.44)

        = Ω_B(E_tot − E_r) / Σ_r′ Ω_B(E_tot − E_r′)    (2.45)

where the sum in the denominator is over all the system microstates. This simply acts as a normalizing constant. In order to evaluate the numerator, we express Ω_B in terms of the entropy of the bath S_B and carry out a Taylor expansion about E_tot:

    Ω_B(E_tot − E_r) = exp[S_B(E_tot − E_r)/k] ≈ exp{(1/k)[S_B(E_tot) − E_r (∂S_B/∂E)|E_tot]}    (2.46)

By definition of the temperature of the bath,

    ∂S_B/∂E = 1/T    (2.47)

and so

    p_r ∝ Ω_B(E_tot − E_r) ∝ exp[−E_r/(kT)] = exp(−βE_r)    (2.48)

where β = 1/(kT). The constant of proportionality may be found by normalization, since we must have Σ_r p_r = 1. We write

    p_r = exp(−βE_r)/Z    (2.49)

where

    Z = Σ_r exp(−βE_r)    (2.50)

is called the partition function. The relationship (2.49) for the probability that the system is in a given microstate when it is in thermal contact with a heat bath at temperature T is known as the Boltzmann distribution.

Note: In the above derivation, it would have been unhelpful to try to expand Ω_B (instead of S_B) as a Taylor series in E. This is because Ω_B changes very rapidly with E, and taking only the linear term in the expansion, as we did for the entropy S_B, would not have given a good approximation. More precisely, it is found that for a macroscopic system of N particles, the dependence of Ω on E is of the form Ω = cE^α, where α is of the order of N. If we try to carry out a Taylor series expansion on Ω, we have

    Ω(E − dE) ≈ Ω(E) − Ω′(E) dE + (1/2) Ω″(E) (dE)² − ...
              = cE^α − cαE^(α−1) dE + (1/2) cα(α − 1) E^(α−2) (dE)² − ...    (2.51)

In order for the quadratic term to be much less than the linear term, we require dE ≪ 2E/(α − 1), which is of the order of the energy of a single molecule. The energy fluctuations we need to consider due to heat exchange between system and bath are much larger than this. On the other hand, when we expand S ≈ kα log E, we have

    S(E − dE) ≈ S(E) − S′(E) dE + (1/2) S″(E) (dE)² − ... = kα log E − kα (dE/E) − (kα/2)(dE/E)² − ...    (2.52)

The condition for the quadratic term to be much less than the linear term is now

    dE ≪ 2E    (2.53)

Since we need to apply this to the bath, which is so large that the energy exchanged with the system is very small compared to the total energy of the bath, this condition is easily fulfilled. Once we know the probabilities for each of the system microstates, we can (in principle) work out all the properties of the system, since we can find the probability distribution of any function of the system microstates.
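As a concrete illustration of (2.49) and (2.50), the following minimal sketch (not part of the original notes; the energy levels are arbitrary) normalizes Boltzmann factors into probabilities:

    import math

    def boltzmann_probabilities(energies, kT):
        # Shifting all energies by the minimum leaves p_r unchanged (the shift
        # cancels between numerator and denominator) but avoids overflow.
        E0 = min(energies)
        factors = [math.exp(-(E - E0) / kT) for E in energies]
        Z = sum(factors)
        return [f / Z for f in factors]

    levels = [0.0, 1.0, 1.0, 2.5]        # hypothetical energy levels, arbitrary units
    for E, p in zip(levels, boltzmann_probabilities(levels, kT=1.0)):
        print(f"E_r = {E:.1f}: p_r = {p:.4f}")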

2.5.2 Alternative forms for probabilities and the partition function

2.5.2.1 Grouping Microstates by Energy

In the above, we considered each microstate of the system separately, and so all the summations are indexed by the individual microstates. As we know, the microstates are often degenerate in energy, and we may have, for instance,

    E_1 < E_2 = E_3 = E_4 < E_5 = E_6 < E_7 ...

We may choose to focus on the distinct energies rather than on the individual microstates, and label the degeneracy of the microstates with energy E_r by g(E_r). (This is the same as the statistical weight of a macrostate, which we denoted by Ω, if we classify macrostates according to their energy and have an energy resolution which is fine enough to distinguish the energies of the various microstates.) In this case the partition function may be written

    Z = Σ_(E_r) g(E_r) exp(−βE_r)    (2.54)

where the sum is over only the distinct energies. Similarly, the probability that the system has a certain energy is

    p(E_r) = g(E_r) exp(−βE_r)/Z    (2.55)

If the energy levels are closely spaced, we may often approximate the discrete levels by a continuum and define a density of states f(E) such that

    f(E) ΔE = No. of system energy eigenstates with energy between E and E + ΔE    (2.56)

We then have

    Z = ∫ dE f(E) exp(−βE)    (2.57)


and

    p(E) ΔE = f(E) exp(−βE) ΔE / Z    (2.58)

where p(E) is the probability density of the system energy, so that p(E) ΔE is the probability that the system energy lies between E and E + ΔE.

2.5.2.2 Alternative Groupings of Microstates

The key idea in the definition of the partition function is that Z is a sum over system microstates r. We can choose to group these microstates however we like. For example, it may be more appropriate to group the microstates so that each macrostate corresponds to some (narrow range of) momentum, labelled by p_s. We can then write Z as

    Z = Σ_(p_s) Ω(p_s) exp[−βE(p_s)]    (2.59)

where the sum is over the macrostate labels and the statistical weight Ω(p_s) counts the number of microstates associated with the macrostate labelled by p_s. Note that the Boltzmann factor always involves the energy. The probability that the system is in the macrostate labelled by p_s is then

    Pr(p_s) = Ω(p_s) exp[−βE(p_s)]/Z    (2.60)

Again, if the labels are in some sense closely spaced, we can often approximate the sum by an integral over a density of states,

    Σ_(p_s) Ω(p_s) → ∫ dp f(p)    (2.61)

where f(p) Δp is the number of states in the range p to p + Δp.
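For the N-spin system, grouping by energy as in (2.54) means weighting each distinct energy by the binomial degeneracy. A sketch (not part of the original notes; N and x are illustrative) verifying that the grouped sum reproduces the brute-force sum over microstates:

    import math
    from itertools import product

    N, x = 10, 0.7                       # x = beta*mu*B

    # Brute force over all 2^N microstates; s_i = 1 marks an up spin and
    # E_r/(muB) = N - 2n, so exp(-beta E_r) = exp(-(N - 2n) x).
    Z_direct = sum(math.exp(-(N - 2 * sum(s)) * x)
                   for s in product((0, 1), repeat=N))

    # Grouped sum (2.54) over the N + 1 distinct energies.
    Z_grouped = sum(math.comb(N, n) * math.exp(-(N - 2 * n) * x)
                    for n in range(N + 1))

    print(Z_direct, Z_grouped, (2 * math.cosh(x)) ** N)   # all three agree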

2.5.3 Mean energy, heat capacity and energy fluctuations

First, let us consider the mean (internal) energy while the system is in contact with the bath. This is

    Ē = Σ_r p_r E_r = Σ_r E_r exp(−βE_r) / Z    (2.62)

Remarkably, we can write this entirely in terms of Z. From the definition, differentiation with respect to β yields

    ∂Z/∂β = −Σ_r E_r exp(−βE_r)    (2.63)

and so

    −(1/Z) ∂Z/∂β = Σ_r E_r exp(−βE_r) / Z    (2.64)

Thus,

    Ē = −(1/Z) ∂Z/∂β = −∂(log Z)/∂β    (2.65)

From this, we can find the heat capacity at constant volume, which is

    C_V = (∂Ē/∂T)_V = −kβ² (∂Ē/∂β)_V = kβ² (∂²(log Z)/∂β²)_V    (2.66)

Since we actually know the probability distribution of the microstates, we can also find the variance of the energy when the system is in thermal contact with a heat bath:

    (ΔE)² = ⟨E²⟩ − Ē²    (2.67)


We have already obtained an expression for Ē. Following an analogous procedure for ⟨E²⟩,

    ⟨E²⟩ = Σ_r p_r E_r² = Σ_r E_r² exp(−βE_r) / Z    (2.68)

and we notice that

    ∂²Z/∂β² = Σ_r E_r² exp(−βE_r)    (2.69)

so that

    ⟨E²⟩ = (1/Z) ∂²Z/∂β²    (2.70)

Using the result (2.65), we find that

    (ΔE)² = ⟨E²⟩ − Ē² = (1/Z) ∂²Z/∂β² − [(1/Z) ∂Z/∂β]²    (2.71)

           = (∂/∂β)[(1/Z) ∂Z/∂β] = ∂²(log Z)/∂β²    (2.72)

By comparing this with (2.66), we obtain the interesting result that the size of the energy fluctuations is related to the heat capacity at constant volume. In fact,

    ΔE = [C_V/(kβ²)]^(1/2)    (2.73)

What does this tell us? For one thing, we can look at the relative size of the fluctuations compared to the mean:

    ΔE/Ē = [C_V/(kβ²)]^(1/2) / Ē    (2.74)

Now consider how this scales with the number of molecules in the system. The temperature (and hence β) is an intensive quantity and does not change with the size of the system. On the other hand, both C_V and Ē are extensive quantities which scale with N. Thus ΔE/Ē scales as N^(−1/2), and for large N the fluctuations become negligible. For such macroscopic systems, we can imagine the internal energy to be essentially well-defined when the system is in thermal contact with a heat bath, even though, of course, it is only absolutely precisely determined for an isolated system. There is a caveat to the above conclusion, since we implicitly assumed in the argument that C_V was finite, so that the fractional fluctuations would be reduced by the factor N^(−1/2). There are important cases where this is not true. For example, if we consider ice and water in contact with each other at the melting point, we know that as we add heat to this system, the ice melts but the temperature remains constant. This corresponds to an infinite heat capacity, and according to the equation, we expect the energy fluctuations to be very large in this case, even though the system is in contact with a heat bath at constant temperature. This is indeed the case, since at such a phase change, fixing the temperature does not specify how much of the system is in each of the phases, and the internal energy depends critically on the fraction in the two phases. (Recall that the internal energy of water exceeds that of ice at the same temperature by the latent heat.)
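Relation (2.72) can be checked numerically by differentiating log Z with finite differences. A sketch (not part of the original notes) for a single two-level spin with E = ±μB, in units where k = 1 and μB = 1:

    import math

    def logZ(beta, levels):
        return math.log(sum(math.exp(-beta * E) for E in levels))

    levels = [-1.0, +1.0]        # E = -muB and +muB with muB = 1
    beta, h = 0.8, 1e-5

    # (Delta E)^2 = d^2(log Z)/d beta^2, equation (2.72), by central difference
    varE = (logZ(beta + h, levels) - 2 * logZ(beta, levels)
            + logZ(beta - h, levels)) / h**2
    print("variance :", varE)
    print("analytic :", 1 / math.cosh(beta) ** 2)     # sech^2(beta) for this system
    print("C_V = k beta^2 (Delta E)^2 :", beta**2 * varE)   # equation (2.73) rearranged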

2.5.4 Entropy

For the case of an isolated system, microstates of the system were either accessible or inaccessible, and the entropy was given by (Boltzmann's constant multiplied by) the logarithm of the number of accessible microstates. When the system is in contact with a heat bath, different microstates occur with different probabilities, and we need to generalize the definition of the entropy to accommodate this. In the isolated system, in which we know that the total energy is E, the probabilities of the various microstates may be written as

    p_r = 1/Ω(E)  if E_r = E,  and  p_r = 0  if E_r ≠ E    (2.75)

The expression for the entropy is

    S = k log Ω(E) = −k log p_r    (2.76)

where p_r is the probability of any of the accessible microstates. In Figure 2.4(a), we show schematically how the microstates can be classified into accessible and inaccessible when the system is thermally isolated.

[Figure: (a) thermally isolated system: microstates divide into accessible ones, each with p = 1/Ω(E), and inaccessible ones with p = 0; (b) system in contact with a heat bath: microstates range smoothly from more probable to less probable.]

Figure 2.4 Schematic diagram showing that for a thermally isolated system (a), there is a clean division between accessible and inaccessible microstates, whereas for a system at a fixed temperature (b), microstates occur with different probabilities. We wish to generalize the concept of entropy to allow the latter situation.

In Figure 2.4(b), on the other hand, which represents the system in thermal contact with a heat bath, there is a smooth gradation from probable microstates to improbable ones, as given by the Boltzmann distribution. If we think of each microstate as making a contribution −k log p_r to the entropy of the system, then when the probabilities of the microstates are no longer equal, we can take the expectation value of this contribution over all possible microstates. This suggests that a plausible generalization for the entropy is

    S = ⟨−k log p_r⟩ = −k Σ_r p_r log p_r    (2.77)

where the sum is over the microstates. This is in fact correct, and is called the Gibbs expression for the entropy for an arbitrary probability distribution for the microstates. It is possible to derive this formula more formally, but we shall not do so in this course. For the system in thermal contact with a bath, we can substitute the Boltzmann distribution (2.49) for p_r into the expression (2.77) to obtain

    S = −k Σ_r (e^(−βE_r)/Z) log(e^(−βE_r)/Z) = k Σ_r (e^(−βE_r)/Z)(βE_r + log Z) = kβĒ + k log Z    (2.78)

Writing β = 1/(kT), we see that

    S = Ē/T + k log Z    (2.79)

which may be rearranged as

    −kT log Z = Ē − TS    (2.80)

Comparing this with the expression for the free energy F in classical thermodynamics, we see that

    F = −kT log Z    (2.81)

This connection allows us to calculate all thermodynamic variables starting from the partition function, once this is expressed as a function of T, V and N.
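The chain Z → F → S can be exercised directly. A sketch (not part of the original notes) for the single two-level spin in units where k = 1 and μB = 1, checking (2.79)-(2.81) against the Gibbs sum (2.77):

    import math

    T = 1.5
    beta = 1.0 / T
    levels = [-1.0, +1.0]                    # E = -muB and +muB

    Z = sum(math.exp(-beta * E) for E in levels)
    p = [math.exp(-beta * E) / Z for E in levels]
    E_mean = sum(pi * Ei for pi, Ei in zip(p, levels))

    F = -T * math.log(Z)                     # free energy, equation (2.81)
    S_thermo = (E_mean - F) / T              # from F = E - TS, equation (2.80)
    S_gibbs = -sum(pi * math.log(pi) for pi in p)   # Gibbs entropy (2.77)
    print(S_thermo, S_gibbs)                 # the two values agree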


2.6 Derivation of the Fundamental Thermodynamic Relation

The fundamental relation relates the change in the internal energy of a system when the system undergoes an infinitesimal process which changes the entropy and the volume. In this section we derive this relationship using statistical mechanics and show the microscopic origin of each of the terms. Consider a fluid with a fixed number of particles and suppose that a process occurs which changes its temperature and volume from (β, V) to (β + dβ, V + dV), where, as usual, β = 1/(kT). In statistical mechanics, we identify the microstates of the system as the energy eigenstates, which we label by the indices r. We have seen that when the system is in thermal equilibrium with a heat bath, the probability p_r for being in state r is given by the Boltzmann distribution, and the average energy is

    Ē = Σ_r p_r E_r    (2.82)

where E_r is the energy of the rth eigenstate. During the change from (β, V) to (β + dβ, V + dV), both p_r and E_r will change, and the average energy changes by

    dĒ = Σ_r E_r dp_r + Σ_r p_r dE_r    (2.83)

Let us consider how the entropy changes. Using the Gibbs expression,

    dS = d(−k Σ_r p_r log p_r) = −k Σ_r (dp_r + log p_r dp_r) = −k Σ_r dp_r − k Σ_r log p_r dp_r    (2.84)

Since the probabilities of the various microstates always add up to one, the sum of the changes Σ_r dp_r = 0 for any process. We can use the Boltzmann distribution to find p_r. This is

    p_r = exp(−βE_r)/Z    (2.85)

and so

    log p_r = −βE_r − log Z    (2.86)

Substituting this into (2.84) and using Σ_r dp_r = 0, we see that

    dS = kβ Σ_r E_r dp_r = (1/T) Σ_r E_r dp_r    (2.87)

and so the first term on the right-hand side of (2.83) is just T dS.

It remains to calculate Σ_r p_r dE_r, which involves the change in the energy eigenvalues. From quantum mechanics, the energy eigenvalues depend on the external parameters of the system, such as the volume, not on the temperature. If during the process the system starts in eigenstate r and remains in this eigenstate, the change in volume produces a change of energy

    dE_r = (∂E_r/∂V) dV = −P_r dV    (2.88)

where P_r is the pressure of the system in state r. The assumption that a system initially in an eigenstate remains in that state as the process takes place is the microscopic version of the process being quasistatic. Taking the average over the probability to be in each of the eigenstates, we see that

    Σ_r p_r dE_r = −Σ_r p_r P_r dV = −P dV    (2.89)


where P is the average pressure. Substituting (2.87) and (2.89) into (2.83) leads to the fundamental relation

    dĒ = T dS − P dV    (2.90)

For a reversible change, we see that

    Σ_r p_r dE_r = −P dV = δW    (2.91)

so work changes the energy levels, not the probabilities of being in the levels. Also

    Σ_r E_r dp_r = T dS = δQ    (2.92)

so heat changes the probability of being in the various levels, without affecting the levels themselves.

2.7 Spin System in Thermal Contact with Heat Bath

We illustrate the theory for the system of N spin-half particles considered earlier, but we now suppose them to be in thermal contact with a heat bath at temperature T. If the spin-half particles are each localized at fixed positions within a crystal lattice, this is a simple model for a paramagnetic solid (see Chapter 3 of Mandl). The important features of the model are that the particles are distinguishable by their positions and that each is supposed to interact directly with the applied magnetic field and not with each other.

2.7.1 System consisting of a single spin

Let us first consider a single particle with dipole moment μ. This can be in one of two energy levels, denoted + for spin up (parallel with the B field) or − for spin down (antiparallel with the B field). The energies are

    E_+ = −μB  and  E_− = +μB    (2.93)

The partition function for one particle involves a sum over its energy eigenstates,

    Z_1 = exp(−βE_+) + exp(−βE_−) = 2 cosh x    (2.94)

where x = βμB = μB/(kT). We find from the partition function that the probability for the particle to be in each of the two states is

    p_± = exp(−βE_±)/Z = exp(±x)/(2 cosh x)    (2.95)

The mean dipole moment is

    ⟨μ⟩_1 = μ p_+ + (−μ) p_− = μ tanh x    (2.96)

The mean energy is

    Ē_1 = E_+ p_+ + E_− p_− = −μB tanh x    (2.97)

Notice that we could also have worked this out using

    Ē_1 = −(∂/∂β)(log Z_1) = −[(∂/∂x) log(2 cosh x)](∂x/∂β) = −μB tanh x    (2.98)

In Figure 2.5, we plot the probability of being in each of the two states as a function of x, which is proportional to the ratio of the magnetic field to the temperature.
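The curves in Figure 2.5 come directly from (2.95) and (2.96). A minimal sketch (not part of the original notes):

    import math

    def p_up(x):   return math.exp(+x) / (2 * math.cosh(x))    # equation (2.95)
    def p_down(x): return math.exp(-x) / (2 * math.cosh(x))

    for x in (0.0, 0.5, 1.0, 2.0, 4.0):
        m = p_up(x) - p_down(x)       # mean moment in units of mu, equation (2.96)
        print(f"x = {x:3.1f}: p+ = {p_up(x):.3f}, p- = {p_down(x):.3f}, "
              f"<mu>/mu = {m:.3f} (tanh x = {math.tanh(x):.3f})")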



Figure 2.5 Probability of a spin being aligned parallel (p_+) or antiparallel (p_−) with a magnetic field, as a function of x = μB/(kT). Small x corresponds to the disordered regime and large x to the ordered regime.

2.7.2 System consisting of N spins

If we consider N spins which are distinguishable and independent, the partition function is simply given by the Nth power of Z_1, i.e.,

    Z_N = (Z_1)^N = 2^N cosh^N x    (2.99)

Let us consider in more detail why this should be so. With N independent spins, there are 2^N energy levels, each of which is labelled by a string of N symbols such as r = (+, +, −, −, +, ..., +), where each symbol represents the state of one of the N distinguishable spins. The energy of the state r is

    E_r = (−μB) n + (μB)(N − n) = (N − 2n)μB    (2.100)

where n is the number of up spins (+ signs) in r. The degeneracy of the energy E_r is the number of ways of getting n spins pointing upwards, which is

    g(E_r) = C(N, n)    (2.101)

The partition function is thus given by

    Z_N = Σ_(E_r) g(E_r) exp(−βE_r) = Σ_(n=0)^N C(N, n) exp[−β(N − 2n)μB] = e^(−Nx) Σ_(n=0)^N C(N, n) e^(2nx)    (2.102)

By the binomial theorem, the sum is simply [1 + exp(2x)]^N, and so we recover the result that

    Z_N = e^(−Nx) [1 + e^(2x)]^N = (e^x + e^(−x))^N = 2^N cosh^N x

We can find various properties of interest, either from the N-particle partition function or from the single-particle results together with statistical independence.
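The factorization (2.99) can be verified by brute force for small N. A sketch (not part of the original notes; N and x are illustrative):

    import math
    from itertools import product

    N, x = 8, 0.6                    # x = mu*B/(kT)

    # Sum exp(-beta E_r) over all 2^N spin strings; an up spin (s = +1)
    # contributes -muB to E_r, so -beta E_r = x * sum(s).
    Z_brute = sum(math.exp(x * sum(s)) for s in product((+1, -1), repeat=N))

    Z_factored = (2 * math.cosh(x)) ** N    # equation (2.99)
    print(Z_brute, Z_factored)              # agree to rounding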


The probability for the N particles to be in a particular state r = (+, +, −, −, +, ..., +) is the product of the probabilities for the individual particles to be in these states, i.e.,

    p_r = p_+ p_+ p_− p_− p_+ ... p_+ = exp(+nx) exp[−(N − n)x] / (2 cosh x)^N    (2.103)

        = exp[−(N − 2n)x] / (2 cosh x)^N    (2.104)

where n is the number of up spins in the state r. The mean energy is

    Ē_N = −(∂/∂β)(log Z_N) = −NμB tanh x    (2.105)

This result is simply N Ē_1, as might be expected due to the assumption of independence. The mean dipole moment is

    ⟨μ⟩_N = N ⟨μ⟩_1 = Nμ tanh x    (2.106)

The mean number of up spins when the temperature is T may be readily found from the mean energy, since Ē = (N − 2n̄)μB:

    n̄ = N/2 − Ē_N/(2μB) = (N/2)(1 + tanh x) = N/(1 + exp(−2x)) = N/(1 + exp(−2μB/(kT)))    (2.107)

Comparing this equation with (2.33), we see that the mean number of up spins when the temperature is maintained at T by an external heat bath is consistent with the temperature of an isolated system when the number of upward-pointing spins is fixed at n. The magnetic susceptibility of a material is the ratio of the net magnetization of the system (i.e., the magnetic moment per unit volume) to the applied magnetic field. It measures how well the internal magnetic dipoles in a material are aligned by the presence of an external field. Intuitively, we expect that at high temperatures the dipoles in the material will be randomized, leading to a small magnetization for a given field, whereas at low temperatures the magnetization will be higher, since more alignment will occur. If n̄ is the mean number of up spins, the net magnetic moment is (2n̄ − N)μ (since there are on average also N − n̄ spins pointing down) and the magnetization I is (2n̄ − N)μ/V. The susceptibility is thus

    χ = [(2n̄ − N)μ/V] / (B/μ_0)    (2.108)

Note that in the denominator we have H = B/μ_0, which is the quantity properly called the magnetic field strength. (B is more properly called the magnetic induction, but we shall often lapse into calling it the field strength.) From (2.107) we find that

    χ = (Nμμ_0/(VB)) tanh(μB/(kT))    (2.109)

In the limit of weak field and high temperature, the argument of the hyperbolic tangent is small and we can use tanh x ≈ x for small x to conclude that

    χ = Nμ²μ_0/(VkT)    (2.110)

At high temperatures, it is more difficult to align the magnetic moments against the tendency towards randomization, and the susceptibility falls. This inverse relationship is known as Curie's law, and this effect is used to measure temperatures as low as 0.01 K (which is nevertheless high in this context,


since μB ≪ kT) using paramagnetic salts such as cerium magnesium nitrate, Ce2Mg3(NO3)12·24H2O. At low temperatures, the argument of the hyperbolic tangent is large and the magnetization saturates as essentially all the dipoles align with the field. This expression (and its generalization to include elementary magnets of spin greater than one half) gives results which are in excellent agreement with experiment (see Mandl). The heat capacity of the system (at fixed volume) is

    C = (∂Ē/∂T)_V = kβ² (∂²(log Z)/∂β²)_V = Nk x² (sech x)²    (2.111)

This is shown as the solid line in Figure 2.6, and the mean energy is shown as the dashed line. Notice that the heat capacity has quite a large peak over a relatively narrow range of temperatures. Over this range of temperatures, the system goes essentially from an ordered state to a disordered state. This feature of the specific heat of a collection of two-state systems is called the Schottky anomaly.
[Figure: heat capacity C/(Nk) (solid) and mean energy Ē/(NμB) (dashed) plotted against kT/(μB).]

Figure 2.6 Heat capacity (solid line) and mean energy (dashed line) of the system of N spins in a magnetic field.

The entropy of the spin system when in thermal equilibrium with a heat bath is given by

    S = −kβ² (∂/∂β)(log Z / β) = Nk (log 2 + log cosh x − x tanh x)    (2.112)

Note that at high temperatures, x → 0 and S → Nk log 2 = k log 2^N, so all 2^N states are effectively accessible, whereas at low temperatures, x → ∞ and S → 0, indicating that only one state is effectively accessible.
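Both curves are quick to tabulate. A sketch (not part of the original notes) evaluating (2.111) and (2.112), showing the Schottky peak and the two limits of the entropy:

    import math

    def C_per_Nk(x):     # C/(Nk) = x^2 sech^2 x, equation (2.111)
        return (x / math.cosh(x)) ** 2

    def S_per_Nk(x):     # S/(Nk) = log 2 + log cosh x - x tanh x, equation (2.112)
        return math.log(2) + math.log(math.cosh(x)) - x * math.tanh(x)

    for kT in (0.1, 0.5, 0.8, 1.0, 2.0, 10.0):   # kT in units of muB, so x = 1/kT
        x = 1.0 / kT
        print(f"kT/(muB) = {kT:5.1f}: C/(Nk) = {C_per_Nk(x):.3f}, S/(Nk) = {S_per_Nk(x):.4f}")
    # C/(Nk) peaks near kT/(muB) ~ 0.8 (the Schottky anomaly);
    # S/(Nk) -> log 2 ~ 0.693 at high T and -> 0 at low T.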
