SM HW1
1. Evaluate the simple generalization of the Gaussian integral we discussed in the lecture:
$$\int_{-\infty}^{\infty} e^{-ax^2 + bx}\, dx = \,? \qquad (1)$$
Derive the final expression of this integral using the matrix A and the vector b. Then evaluate this integral by completing the square in the exponent and shifting the complex integration contour correspondingly. One should find the result
$$G(x,t) = \frac{1}{\sqrt{4\pi D t}}\, e^{-x^2/(4Dt)}.$$
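As a quick numerical sanity check of the completed-square result in eq. (1), the sketch below (assuming real a > 0 and the standard closed form $\sqrt{\pi/a}\,e^{b^2/(4a)}$) compares direct numerical integration against the formula:

```python
import numpy as np

def gaussian_integral_numeric(a, b, span=50.0, n=200001):
    """Evaluate the integral of exp(-a x^2 + b x) over x by a fine Riemann sum.

    The integrand is negligible outside [-span, span] for the parameters used here.
    """
    x = np.linspace(-span, span, n)
    dx = x[1] - x[0]
    return np.sum(np.exp(-a * x**2 + b * x)) * dx

def gaussian_integral_exact(a, b):
    """Closed form obtained by completing the square: sqrt(pi/a) * exp(b^2 / (4a))."""
    return np.sqrt(np.pi / a) * np.exp(b**2 / (4 * a))

a, b = 0.7, 1.3   # arbitrary illustrative values
numeric = gaussian_integral_numeric(a, b)
exact = gaussian_integral_exact(a, b)
print(numeric, exact)   # the two agree to many digits
```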
2. Distribution function of the sum of three random variables: S3 = X1 + X2 + X3, with X1, X2, and X3 drawn independently from P(X). Find P(S3), evaluate the variance of S3, and plot P(S3) and the Gaussian with the same mean and variance on the same graph.
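A minimal numerical sketch of this problem is shown below. Since P(X) is specified elsewhere in the assignment, a uniform distribution on [0, 1] is used here purely as an illustrative stand-in; for independent variables the variance of the sum is the sum of the variances:

```python
import numpy as np

# P(X) is whatever distribution the problem set specifies; a uniform
# distribution on [0, 1] is used here only as an illustrative stand-in.
rng = np.random.default_rng(42)
samples = rng.uniform(0.0, 1.0, size=(100_000, 3))
s3 = samples.sum(axis=1)          # realizations of S3 = X1 + X2 + X3

# Independence implies Var(S3) = 3 Var(X); for uniform[0,1], Var(X) = 1/12.
print(s3.mean())   # ~ 3 * 1/2  = 1.5
print(s3.var())    # ~ 3 * 1/12 = 0.25
```

A histogram of `s3` (e.g. with `matplotlib.pyplot.hist(s3, bins=100, density=True)`) overlaid with the Gaussian of the same mean and variance gives the requested plot.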
Here δ(x) is the Dirac delta function. The Fourier transform of the distribution function is
$$\tilde{P}_N(k) = \int_{-\infty}^{\infty} dX\, e^{-ikX} \int_{-\infty}^{\infty} dx_1 \cdots \int_{-\infty}^{\infty} dx_N\, P_1(x_1) P_1(x_2) \ldots P_1(x_N)\, \delta(x_1 + x_2 + \cdots + x_N - X) = \left[\tilde{P}_1(k)\right]^N, \qquad (5)$$
where
$$\tilde{P}_1(k) = \int_{-\infty}^{\infty} dx\, P_1(x)\, e^{-ikx} = \langle e^{-ikx} \rangle. \qquad (6)$$
We can consider $\tilde{P}_1(k)$ as the Fourier transform of P1(x), or we can consider it as the average of $e^{-ikx}$ with respect to the distribution function P1(x). In the following, we adopt the second interpretation. Expanding the exponential, we have
$$\langle e^{-ikx} \rangle = 1 - ik\langle x \rangle + \frac{(-ik)^2}{2!}\langle x^2 \rangle + \frac{(-ik)^3}{3!}\langle x^3 \rangle + \ldots. \qquad (7)$$
We would like to approximate $\tilde{P}_1(k)$ as $\tilde{P}_1(k) \approx e^{f(k)}$. This expression is useful because we can express
$$P_N(X) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \left[\tilde{P}_1(k)\right]^N e^{ikX}\, dk = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{N f(k)}\, e^{ikX}\, dk.$$
We know how to perform this type of integral at least to quadratic order in k. (With a simple change of variable, the integral to quadratic order in k is identical to problem 1.) So our next goal is to find f(k) in the expression $\tilde{P}_1(k) \approx e^{f(k)}$.
1. Using the Taylor expansion $\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} + O(x^4)$, we can express
$$\ln \tilde{P}_1(k) = \ln\left[1 - ik\langle x \rangle + \frac{(-ik)^2}{2!}\langle x^2 \rangle + \frac{(-ik)^3}{3!}\langle x^3 \rangle + \ldots\right]$$
in powers of k. Since $\tilde{P}_1(k) \approx e^{f(k)}$, this logarithm is precisely f(k). Find the expression of f(k) to third order in k. (Hint: your result will look like $f(k) \approx C_1 k + C_2 k^2 + C_3 k^3 + \ldots$; find C1, C2, C3 as functions of ⟨x⟩, ⟨x²⟩, ⟨x³⟩.)
2. Then, we can formally express PN(X) as
$$P_N(X) \approx \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{N(C_1 k + C_2 k^2)}\, e^{ikX}\left[1 + N C_3 k^3 + \ldots\right] dk \qquad (8)$$
by expanding the exponential containing the C3 k³ term. We can use the following trick to express this integral in another form: in the $\left[1 + N C_3 k^3 + \ldots\right]$ factor, we can replace $(ik) \leftrightarrow \frac{\partial}{\partial X}$ simply because
$$\frac{\partial^m}{\partial X^m}\left[e^{N(C_1 k + C_2 k^2)}\, e^{ikX}\right] = \left[e^{N(C_1 k + C_2 k^2)}\, e^{ikX}\right](ik)^m. \qquad (9)$$
Therefore, we have
$$P_N(X) \approx \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{N(C_1 k + C_2 k^2)}\, e^{ikX}\left[1 + N C_3 k^3 + \ldots\right] dk = \left[1 + i N C_3 \frac{\partial^3}{\partial X^3} + \ldots\right] \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{N(C_1 k + C_2 k^2)}\, e^{ikX}\, dk. \qquad (10)$$
As mentioned before, the last integral is a Gaussian integral that we know how to perform. Complete the square and derive the result for PN(X) to leading order (ignoring the $i N C_3 \frac{\partial^3}{\partial X^3}$ term and beyond). (Furthermore, you can ask how the higher-order correction scales as a function of N, but this is left as a challenge beyond this homework problem.)
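The machinery of eqs. (5)-(10) can also be demonstrated numerically. The sketch below (an illustration, not part of the assignment) takes P1(x) to be the exponential distribution, whose transform $\tilde{P}_1(k) = 1/(1+ik)$ is known in closed form, numerically inverts $[\tilde{P}_1(k)]^N$, and compares the result with the Gaussian of mean N and variance N predicted at leading order:

```python
import numpy as np

N = 50                                  # number of summed variables
k = np.linspace(-10.0, 10.0, 8001)      # [P1~(k)]^N decays fast, so a modest window suffices
X = np.linspace(20.0, 90.0, 141)

# P1~(k) = <exp(-ikx)> for the exponential distribution P1(x) = exp(-x), x > 0
P1t = 1.0 / (1.0 + 1j * k)

# P_N(X) = (1/2pi) * integral of [P1~(k)]^N exp(ikX) dk, by a simple Riemann sum
dk = k[1] - k[0]
PN = np.real(np.sum(P1t[None, :]**N * np.exp(1j * k[None, :] * X[:, None]),
                    axis=1)) * dk / (2 * np.pi)

# Leading-order prediction: Gaussian with the same mean (N) and variance (N)
gauss = np.exp(-(X - N)**2 / (2 * N)) / np.sqrt(2 * np.pi * N)
print(np.max(np.abs(PN - gauss)))   # small compared with the peak height ~0.056
```

The residual difference is the skewness correction carried by the C3 term, which is exactly the higher-order piece dropped at leading order.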
3. After answering the above questions, we have basically derived the central limit theorem: the distribution function for the sum of random variables drawn from a distribution P1(x) approaches a Gaussian. That is a very general result. What conditions does P1(x) need to satisfy for the central limit theorem to apply? Does it work when P1(x) is a Gaussian distribution? Does it work when P1(x) is a Cauchy distribution?
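A quick numerical experiment (a sketch, not part of the assignment) illustrates why the Cauchy case is special: for a distribution with finite variance the spread of the sample mean shrinks like $1/\sqrt{N}$, but the mean of N Cauchy variables is again Cauchy-distributed, so averaging does not help at all:

```python
import numpy as np

rng = np.random.default_rng(0)
trials, N = 2000, 1000

def iqr_of_means(draw, n):
    """Interquartile range of the sample mean over many independent trials."""
    means = draw(size=(trials, n)).mean(axis=1)
    q25, q75 = np.percentile(means, [25, 75])
    return q75 - q25

# Finite-variance case (uniform): the spread shrinks roughly by sqrt(N)
u1, uN = iqr_of_means(rng.uniform, 1), iqr_of_means(rng.uniform, N)
# Cauchy case: the mean of N standard Cauchy variables is again standard Cauchy
c1, cN = iqr_of_means(rng.standard_cauchy, 1), iqr_of_means(rng.standard_cauchy, N)

print(uN / u1)   # much less than 1
print(cN / c1)   # close to 1: averaging does not narrow the distribution
```

The interquartile range is used instead of the standard deviation because the Cauchy distribution has no finite variance in the first place.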
• We assume the joints between the monomers can rotate freely, i.e. the potential energy is independent of the bond angle, ϕ, between two neighboring monomers.
• We assume the monomers can overlap with each other in space and have no interaction with each other.
Figure 1: Schematic picture of a single-chain polymer molecule formed by the monomers, represented by the grey ovals.
The model with the above two assumptions is the freely jointed chain model. It is an idealization of polymers analogous to the ideal gas model for gases.
A key property that we are interested in is the size of the polymer. Usually the polymer coils up, and the way it coils up is a competition between entropy and energetics. Therefore, the size of the polymer is not simply the number of monomers, N, times the size of a monomer, a. Instead, we need a statistical description to characterize the size of the polymer. Usually, we use the root-mean-square end-to-end distance, ⟨r²⟩^{1/2}, as a measure of the size of the polymer. Here, r is the distance from one end of the polymer to the other in three-dimensional space, and the average is taken over all possible ways the polymer coils. One configuration is simple: if all the monomers are aligned in one direction, the end-to-end distance is r_straight = Na. However, this is just one of the possible values of r; once the polymer coils, ⟨r²⟩^{1/2} < Na.
2. For the three-dimensional case, we assume the orientation of each monomer is completely random and the bond vectors of different monomers are uncorrelated, i.e. ⟨d_i · d_j⟩_{i≠j} = 0. Therefore, we expect the monomers to be distributed evenly among the three directions: N_x = N_y = N_z = N/3. Here, N_x is a rough count of the monomers oriented along the x direction: we consider the projection of each monomer onto the x, y, and z directions, and if the projection onto the x direction is the largest, we say it is an x monomer and count it in N_x. Derive ⟨r²⟩^{1/2} and express it using N and a. This is a simple estimate of the size of the polymer. (Sometimes you will see people use the radius of gyration r_g to estimate the size of the polymer; r_g is the average distance between the monomers and the center of mass. It turns out that the length scale ⟨r_g²⟩ is proportional to ⟨r²⟩. We will not discuss that calculation here, but just mention that the root-mean-square end-to-end distance captures the essential information about the size of the polymer.)
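The freely jointed chain is also easy to simulate directly, which gives a check on the derived scaling. The sketch below (with illustrative parameters N = 100, a = 1) draws isotropic bond vectors and estimates ⟨r²⟩^{1/2} over many chains:

```python
import numpy as np

def random_unit_vectors(rng, shape):
    """Uniformly distributed directions on the unit sphere (isotropic bonds)."""
    v = rng.normal(size=shape + (3,))
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

rng = np.random.default_rng(1)
N, a, chains = 100, 1.0, 5000           # illustrative parameters

d = a * random_unit_vectors(rng, (chains, N))    # bond vectors d_i with |d_i| = a
r = d.sum(axis=1)                                # end-to-end vector of each chain
rms = np.sqrt(np.mean(np.sum(r**2, axis=-1)))    # <r^2>^(1/2) averaged over chains

print(rms, np.sqrt(N) * a)   # the two agree within a few percent
```

Since ⟨d_i · d_j⟩ vanishes for i ≠ j, only the N diagonal terms |d_i|² = a² survive in ⟨r²⟩, which is exactly what the simulation reproduces.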