

Homework 1

Statistical Mechanics (I)


Due 2024 March 7

Problem 1. Some Gaussian integrals

1. Evaluate the following simple generalization of the Gaussian integral we discussed in the lecture:

$$\int_{-\infty}^{\infty} e^{-ax^2 + bx}\,dx = \;?\qquad(1)$$
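Whichever closed form you derive can be checked numerically. The sketch below (not part of the assignment) compares `scipy.integrate.quad` against the complete-the-square result $\sqrt{\pi/a}\,e^{b^2/(4a)}$; the values $a = 1.5$, $b = 0.7$ are arbitrary illustrations.

```python
import numpy as np
from scipy.integrate import quad

a, b = 1.5, 0.7  # illustrative values; any a > 0 works

# Direct numerical integration of e^{-a x^2 + b x} over the real line
numeric, _ = quad(lambda x: np.exp(-a * x**2 + b * x), -np.inf, np.inf)

# Completing the square: -a x^2 + b x = -a (x - b/2a)^2 + b^2/(4a)
closed_form = np.sqrt(np.pi / a) * np.exp(b**2 / (4 * a))

print(numeric, closed_form)  # the two agree to quadrature precision
```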

2. An N-dimensional Gaussian integral is defined as

$$\int_{-\infty}^{\infty} dx_1 \int_{-\infty}^{\infty} dx_2 \cdots \int_{-\infty}^{\infty} dx_N\,\exp\!\left[-\frac{1}{2}\sum_{i,j=1}^{N} x_i A_{ij} x_j + \sum_{i=1}^{N} b_i x_i\right]\qquad(2)$$

Derive the final expression of this integral in terms of the matrix $A$ and the vector $b$.
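A quick two-dimensional check of whatever matrix expression you obtain is possible with `scipy.integrate.dblquad`. The result being tested, $(2\pi)^{N/2}\det(A)^{-1/2}\exp(\frac{1}{2}b^{T}A^{-1}b)$ for symmetric positive-definite $A$, is the standard answer; the particular $A$ and $b$ below are made-up test values.

```python
import numpy as np
from scipy.integrate import dblquad

# Illustrative symmetric positive-definite matrix and source vector
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([0.3, -0.2])

def integrand(y, x):
    # exp(-1/2 v^T A v + b.v) for v = (x, y)
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v + b @ v)

# The Gaussian decays fast, so finite bounds +-10 suffice
numeric, _ = dblquad(integrand, -10, 10, lambda x: -10, lambda x: 10)

closed_form = (2 * np.pi) ** (len(b) / 2) / np.sqrt(np.linalg.det(A)) \
              * np.exp(0.5 * b @ np.linalg.inv(A) @ b)
print(numeric, closed_form)
```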

3. Fourier transform of a Gaussian:

$$G(x, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ikx}\, e^{-Dk^2 t}\, dk\qquad(3)$$

Evaluate this integral by completing the square in the exponent and shifting the complex integration contour accordingly. One should find the result $G(x, t) = \frac{1}{\sqrt{4\pi D t}}\, e^{-x^2/(4Dt)}$.
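The stated result can also be verified numerically: the imaginary part of $e^{ikx}e^{-Dk^2 t}$ is odd in $k$ and integrates to zero, so only the cosine part survives. The values of $D$, $t$, $x$ below are illustrative.

```python
import numpy as np
from scipy.integrate import quad

D, t, x = 0.8, 0.5, 1.2  # illustrative diffusion constant, time, position

# Only the even (cosine) part of e^{ikx} contributes to the k-integral
numeric = quad(lambda k: np.cos(k * x) * np.exp(-D * k**2 * t),
               -np.inf, np.inf)[0] / (2 * np.pi)

closed_form = np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
print(numeric, closed_form)
```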

Problem 2. Experiment for central limit theorem


Before we prove the central limit theorem, let's try a simple numerical experiment. Consider a random variable $X$ with probability distribution function $P(X)$. In general, we require $P(X)$ to have a well-defined mean and variance. To construct the simplest nontrivial experiment, let's take $P(X)$ to be a box distribution ranging from $-1$ to $1$. That is, $P(X) = \frac{1}{2}$ if $X \in [-1, 1]$, and $P(X) = 0$ otherwise. This distribution obviously has a well-defined mean and variance (why? convince yourself this is true). Now, let's calculate the following quantities and plot each distribution function together with the Gaussian of the same mean and variance.

1. Distribution function of the sum of two random variables: $S_2 = X_1 + X_2$, with $X_1$ and $X_2$ drawn from $P(X)$. Find $P(S_2)$, evaluate the variance of $S_2$, and plot $P(S_2)$ and the Gaussian with the same mean and variance on the same graph.

2. Distribution function of the sum of three random variables: $S_3 = X_1 + X_2 + X_3$, with $X_1$, $X_2$ and $X_3$ drawn from $P(X)$. Find $P(S_3)$, evaluate the variance of $S_3$, and plot $P(S_3)$ and the Gaussian with the same mean and variance on the same graph.

Please notice how fast the sum $S_N$ approaches a Gaussian distribution.
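A minimal sampling version of this experiment can be sketched as follows; it draws sums of box-distributed variables and checks the mean and variance against the exact values (the box distribution on $[-1,1]$ has mean $0$ and variance $\frac{1}{3}$, so $\mathrm{Var}(S_n) = \frac{n}{3}$). Sample counts and the random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000  # illustrative sample size

for n in (2, 3):
    # Each row is one realization of S_n = X_1 + ... + X_n
    s = rng.uniform(-1, 1, size=(n_samples, n)).sum(axis=1)
    # Box distribution on [-1, 1]: mean 0, variance 1/3, so Var(S_n) = n/3
    print(n, s.mean(), s.var(), n / 3)

# For the plots, histogram s with density=True and overlay the Gaussian
# np.exp(-x**2 / (2 * n / 3)) / np.sqrt(2 * np.pi * n / 3) on the same axes.
```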

Problem 3. (Universal behavior of the sum of random variables.)


Central limit theorem: Consider independent random variables $\{x_j\}$ with a single-variable distribution function $P_1(x_j)$. We want to understand the distribution function of the sum of these random variables; that is, we want to derive the distribution function of $X \equiv \sum_{j=1}^{N} x_j$. We are interested in the case where $N$ is large. We can formally write

$$P_N(X) = \int_{-\infty}^{\infty} dx_1 \cdots \int_{-\infty}^{\infty} dx_N\, P_1(x_1) P_1(x_2) \cdots P_1(x_N)\,\delta(x_1 + x_2 + \cdots + x_N - X).\qquad(4)$$

Here $\delta(x)$ is the Dirac delta function. The Fourier transform of the distribution function is

$$\tilde{P}_N(k) = \int_{-\infty}^{\infty} dX\, e^{-ikX}\int_{-\infty}^{\infty} dx_1 \cdots \int_{-\infty}^{\infty} dx_N\, P_1(x_1) P_1(x_2) \cdots P_1(x_N)\,\delta(x_1 + x_2 + \cdots + x_N - X) = \left[\tilde{P}_1(k)\right]^N,\qquad(5)$$

where

$$\tilde{P}_1(k) = \int_{-\infty}^{\infty} dx\, P_1(x)\, e^{-ikx} = \langle e^{-ikx}\rangle.\qquad(6)$$

We can consider $\tilde{P}_1(k)$ as the Fourier transform of $P_1(x)$, or we can consider it as the average of $e^{-ikx}$ with respect to the distribution function $P_1(x)$. In the following, we will take the second interpretation. Expanding the exponential, we have

$$\langle e^{-ikx}\rangle = 1 - ik\langle x\rangle + \frac{(-ik)^2}{2!}\langle x^2\rangle + \frac{(-ik)^3}{3!}\langle x^3\rangle + \dots.\qquad(7)$$

We would like to approximate $\tilde{P}_1(k)$ as $\tilde{P}_1(k) \approx e^{f(k)}$. This expression is useful because we can then express $P_N(X) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \left[\tilde{P}_1(k)\right]^N e^{ikX}\, dk = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{N f(k)}\, e^{ikX}\, dk$. We know how to perform this type of integral at least to quadratic order in $k$. (With a simple change of variable, the integral to quadratic order in $k$ is identical to Problem 1.) So our next goal is to find $f(k)$ in the expression $\tilde{P}_1(k) \approx e^{f(k)}$.
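As a concrete anchor for these formulas, the box distribution of Problem 2 has the closed-form characteristic function $\langle e^{-ikx}\rangle = \int_{-1}^{1}\frac{1}{2}e^{-ikx}dx = \frac{\sin k}{k}$, and its logarithm starts at quadratic order in $k$ since $\langle x\rangle = 0$. The snippet below checks this numerically at one small $k$ (the value $k = 0.05$ is arbitrary); the residual is $O(k^4)$.

```python
import numpy as np

# Box distribution on [-1, 1]: <x> = 0, <x^2> = 1/3,
# with characteristic function <e^{-ikx}> = sin(k)/k.
k = 0.05  # small illustrative wavenumber
char = np.sin(k) / k

# ln P1(k) should start at quadratic order: -<x^2> k^2 / 2 = -k^2/6,
# with corrections of order k^4
print(np.log(char), -k**2 / 6)
```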
2 3
1. Use
h the Taylor expansion of ln(1 + x) = x −ix2 + x3 + O(x4 ), we can express ln Pe1 (k) =
2 3
ln 1 − ik⟨x⟩ + (−ik)
2!
⟨x2 ⟩ + (−ik)
3!
⟨x3 ⟩ + . . . in powers of k. Then we can exponentiate
this expression to get f (k). Find the expression of f (k, k 2 , k 3 ) to the third order in k.
(Hint: your result will looks like f (k) ≈ C1 k + C2 k 2 + C3 k 3 + . . . , find C1 , C2 , C3 as a
function of ⟨x⟩, ⟨x2 ⟩, ⟨x3 ⟩.

2. Then we can formally express $P_N(X)$ as

$$P_N(X) \approx \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{N(C_1 k + C_2 k^2)}\, e^{ikX}\left[1 + C_3 k^3 + \dots\right] dk\qquad(8)$$

by expanding the exponential in the $C_3 k^3$ term. We can use the following trick to express this integral in another form. Inside the bracket $[1 + C_3 k^3 + \dots]$, we can replace $(ik) \leftrightarrow \frac{\partial}{\partial X}$ simply because

$$\frac{\partial^m}{\partial X^m}\left[e^{N(C_1 k + C_2 k^2)}\, e^{ikX}\right] = \left[e^{N(C_1 k + C_2 k^2)}\, e^{ikX}\right](ik)^m.\qquad(9)$$

Therefore, we have

$$P_N(X) \approx \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{N(C_1 k + C_2 k^2)}\, e^{ikX}\left[1 + C_3 k^3 + \dots\right] dk = \left[1 + iC_3\frac{\partial^3}{\partial X^3} + \dots\right]\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{N(C_1 k + C_2 k^2)}\, e^{ikX}\, dk\qquad(10)$$

As mentioned before, the last integral is a Gaussian integral that we know how to perform. Complete the square and derive the result for $P_N(X)$ to leading order (ignoring the $iC_3\frac{\partial^3}{\partial X^3}$ term and beyond). (Furthermore, you can ask how the higher-order corrections scale as a function of $N$, but this is left as a challenge beyond this homework problem.)

3. After answering the above questions, we have basically derived the central limit theorem: the distribution function for the sum of random variables drawn from a distribution function $P_1(x)$ is a Gaussian. That is a very general result. What conditions does $P_1(x)$ need to satisfy for the central limit theorem to apply? Does it work when $P_1(x)$ is a Gaussian distribution? Does it work when $P_1(x)$ is a Cauchy distribution?
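The contrast in the last question can be seen numerically. Sums of box-distributed variables have variance growing linearly in $N$, as the CLT requires; sums of Cauchy variables do not concentrate at all, since the sample mean of $N$ standard Cauchy variables is itself standard Cauchy (its variance does not exist). The sizes and seed below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_samples = 500, 10_000  # illustrative chain of sums and sample count

# Box-distributed steps: finite variance 1/3 per step, so Var(S_N) = N/3
s_box = rng.uniform(-1, 1, size=(n_samples, N)).sum(axis=1)
print(s_box.var() / N)  # approaches 1/3 -- CLT applies

# Cauchy steps: the sample mean of N standard Cauchy variables is again
# standard Cauchy, so averaging never concentrates -- CLT fails
s_cauchy = rng.standard_cauchy(size=(n_samples, N)).mean(axis=1)
print(np.median(np.abs(s_cauchy)))  # stays near 1 (the Cauchy half-width)
```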

Problem 4. Freely Jointed Chain (Gaussian model)


Polymers are large (high-molar-mass) molecules composed of a large number of monomers bonded together to form a chain. In reality, the monomers are bonded covalently; that is, the potential energy between two monomers is a complex function of the bonding angle between nearby monomers. Let's make some assumptions to simplify our problem, which lead to the freely jointed chain model. (We might oversimplify the problem, but let's just try it first to get a feeling for what kind of problem we are facing. Then we can ask how to put back more ingredients to make our analysis more realistic.)

• We assume the joints between monomers can rotate freely, i.e. the potential energy is independent of the bonding angle, $\phi$, between two nearby monomers.

• We assume the monomers can overlap with each other in space and have no interactions with each other.

Figure 1: Schematic picture of a single-chain polymer molecule formed by monomers, represented by the grey ovals.

The model with the above two assumptions is the freely jointed chain model. It is an idealization of polymers analogous to the ideal gas model for gases.

A key property we are interested in is the size of the polymer. Usually the polymer coils up, and the way it coils up is a competition between entropy and energetics. Therefore, the size of the polymer is not simply the number of monomers, $N$, times the size of a monomer, $a$. Instead, we need a statistical description to characterize the size of the polymer. Usually, we use the root-mean-square end-to-end distance, $\langle r^2\rangle^{1/2}$, as a measure of the size of the polymer. Here, $r$ is the distance from one end of the polymer to the other in three-dimensional space. The average is taken over all possible ways the polymer coils. One configuration is simple: if all the monomers are aligned in one direction, the end-to-end distance is $r_{\text{straight}} = Na$. However, this is just one of the possible values of $r$; once the polymer coils, $\langle r^2\rangle^{1/2} < Na$.

1. It is interesting to observe that the freely jointed model in one dimension is exactly the simple one-dimensional random walk we discussed during the lecture. The step size is just the size of a monomer. The one-dimensional constraint restricts $\phi = 0$ or $\pi$. Let's start in this simple limit and use $x$ to represent the end-to-end distance in the one-dimensional case. Derive the probability distribution $P_{1D}(x, N)$ when the number of monomers $N$ is large.

2. For the three-dimensional case, we assume the orientation is completely random and the vector of monomer $i$ and the vector of monomer $i+1$ are uncorrelated, i.e. $\langle \vec{d}_i \cdot \vec{d}_j\rangle_{i\neq j} = 0$. Therefore, we expect $N$ to be distributed evenly: $N_x = N_y = N_z = \frac{N}{3}$. Here, $N_x$ is a rough count of the monomers belonging to the $x$ direction: we can simply consider the projection of each monomer onto the $x$, $y$, $z$ directions, and if the projection onto the $x$ direction has the largest size, we say it is an $x$ monomer that should be counted in $N_x$. Derive $\langle r^2\rangle^{1/2}$ and express it using $N$ and $a$. This is a simple estimate of the size of the polymer. (Sometimes you will see people use the radius of gyration $r_g$ to estimate the size of the polymer; $r_g$ is the average distance between the monomers and the center of mass. It turns out the length scale $\langle r_g^2\rangle^{1/2}$ is proportional to $\langle r^2\rangle^{1/2}$. We will not discuss that calculation here, but just mention the fact that the root mean square of the end-to-end distance captures the essential information about the size of the polymer.)
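Your analytic answer can be compared against a direct simulation of the freely jointed chain: draw $N$ independent random unit vectors of length $a$, sum them, and average $r^2$ over many chains. For uncorrelated unit steps one expects $\langle r^2\rangle = N a^2$, i.e. $\langle r^2\rangle^{1/2} = \sqrt{N}\,a$. The chain length, monomer size, and sample count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, a, n_chains = 200, 1.0, 5_000  # illustrative chain parameters

# Uniformly random unit vectors in 3D: normalize isotropic Gaussian samples
steps = rng.normal(size=(n_chains, N, 3))
steps *= a / np.linalg.norm(steps, axis=2, keepdims=True)

r = steps.sum(axis=1)                 # end-to-end vector of each chain
r2_mean = (r**2).sum(axis=1).mean()   # <r^2> averaged over chains
print(np.sqrt(r2_mean), np.sqrt(N) * a)  # ideal-chain scaling sqrt(N) a
```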
