
An introduction to Monte Carlo methods

J.-C. Walter
arXiv:1404.0209v1 [cond-mat.stat-mech] 1 Apr 2014

Laboratoire Charles Coulomb UMR 5221 & CNRS, Université Montpellier 2, 34095 Montpellier, France

G. T. Barkema
Institute for Theoretical Physics, Utrecht University, The Netherlands
Instituut-Lorentz, Universiteit Leiden, P.O. Box 9506, 2300 RA Leiden, The Netherlands

Abstract
Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations from which thermodynamical quantities can be computed, without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model is a lattice spin system with nearest-neighbor interactions that is well suited to illustrate different examples of Monte Carlo simulations. It displays a second-order phase transition between a disordered (high-temperature) phase and an ordered (low-temperature) phase, leading to different strategies of simulation. The Metropolis algorithm and the Glauber dynamics are efficient at high temperature. Close to the critical temperature, where the spins display long-range correlations, cluster algorithms are more efficient. We introduce the rejection-free (or continuous-time) algorithm and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins, namely the worm algorithm. We conclude with an important discussion of dynamical effects such as thermalization and correlation time.
Keywords: Monte Carlo simulations, Ising model, algorithms

Email addresses: jean-charles.walter@univ-montp2.fr (J.-C. Walter), g.t.barkema@uu.nl (G. T. Barkema)

Preprint submitted to Physica A April 2, 2014


1. Introduction
Most models in statistical physics are not solvable analytically, and there-
fore an alternative way is needed to determine thermodynamical quantities.
Numerical simulations help in this task, but introduce another challenge: it
is not possible, in most cases, to enumerate all the possible configurations of
a system; one therefore has to create a set of configurations that are represen-
tative for the entire ensemble. In this section, we will illustrate the discussion with the Ising model. This is a renowned model because of its simplicity and
success in the description of critical phenomena [1]. The degrees of freedom
are spins Si = ±1 placed at the vertices i of a lattice. This lattice will be square or cubic for simplicity, with edge size L and dimension D. Thus, the system contains N = L^D spins. The Hamiltonian of the Ising model is:
H = −J Σ_⟨ij⟩ Si Sj ,    (1)

where the summation runs over all pairs of nearest-neighbor spins ⟨ij⟩ of the
lattice and J is the strength of the interaction. The statistical properties of
the system are obtained from the partition function:
Z = Σ_C e^{−βE(C)} ,    (2)

where the summation runs over all the configurations C. The energy of a con-
figuration is denoted by E(C). Here, β ≡ 1/(kB T ) is the inverse temperature
(temperature T and Boltzmann constant kB ). The Ising model displays a
second-order phase transition at the temperature Tc, characterised by a high-temperature phase with zero average magnetization (disordered phase) and a low-temperature phase with a non-zero average magnetization (or-
dered phase). The system is exactly solvable in one and two dimensions.
For D ≥ 4, the critical properties are easily obtained by the renormalization
group. In three dimensions no exact solution is available. Even a 3D cubic
lattice of very modest size 10 × 10 × 10 generates 2^1000 ≈ 10^301 configurations
in the partition function. If we want to obtain e.g. critical exponents with
a sufficient accuracy, we need sizes that are at least an order of magnitude
larger. An exact enumeration is a hopeless effort. Monte Carlo simulations
are one of the possible ways to perform a sampling of configurations. This
sampling is made out of a set of configurations of the phase space that con-
tribute the most to the averages, without the need of generating every single configuration. This is referred to as importance sampling. In this sampling
of the phase space, it is important to choose the appropriate Monte Carlo
scheme to reduce the computational time. In that respect, the Ising model
is interesting because the different regimes in temperature have led to the development of new algorithms that tremendously reduce the computational time, specifically close to the critical temperature.
We will start these notes by introducing two important principles of
Monte Carlo simulations: detailed balance and ergodicity. Then we will re-
view different examples of Monte Carlo methods applied to the Ising model:
local and cluster algorithms, the rejection-free (or continuous-time) algorithm, and another kind of Monte Carlo simulation based on an alternative representation of the spin system, namely the so-called worm algorithm. We continue by discussing dynamical quantities, such as the thermalization and correlation times.

2. Principles of MC simulations: Ergodicity & detailed balance condition
The basic idea of most Monte Carlo simulations is to iteratively propose a small random change in a configuration Ci, resulting in the trial configuration C^t_{i+1} (the index "t" stands for trial). Next, the trial configuration is either accepted, i.e. Ci+1 = C^t_{i+1}, or rejected, i.e. Ci+1 = Ci. The resulting set of configurations for i = 1 . . . M is known as a Markov chain in the phase space
of the system. We define PA (t) as the probability to find the system in the
configuration A at the time t and W (A → B) the transition rate from the
state A to the state B. This Markov process can be described by the master
equation:

dPA(t)/dt = Σ_{B≠A} [PB(t) W(B → A) − PA(t) W(A → B)] ,    (3)

with the conditions W(A → B) ≥ 0 and Σ_B W(A → B) = 1 for all A and B.
The transition probability W (A → B) can be further decomposed into a trial
proposition probability T (A → B) and an acceptance probability A(A → B)
so that W (A → B) = T (A → B) · A(A → B). A proposed change in the
configuration is usually referred to as a Monte Carlo move. Conventionally,
the time scale in Monte Carlo simulations is chosen such that each degree of
freedom of the system is proposed to change once per unit time, statistically.

The first constraint on this Markov chain is called ergodicity: starting
from any configuration C0 with nonzero Boltzmann weight, any other config-
uration with nonzero Boltzmann weight should be reachable through a finite
number of Monte Carlo moves. This constraint is necessary for a proper sam-
pling of phase space, as otherwise the Markov chain will be unable to access
a part of phase space with a nonzero contribution to the partition sum.
Apart from a very small number of peculiar algorithms, a second con-
straint is known as the condition of detailed balance. For every pair of states
A and B, the probability to move from A to B, as well as the probability for
the reverse move, are related via:
PA · T (A → B) · A(A → B) = PB · T (B → A) · A(B → A) . (4)
The meaning of this condition can be seen in Eq. (3): a stationary probability (i.e. dPA/dt = 0) is reached if each individual term in the summation on the right-hand side cancels. This prevents the Markov chain from being trapped in a limit cycle [2]. This is a strong, but not necessary, condition. We mention that generalizations of the Monte Carlo process that do not satisfy detailed balance exist. The combination of ergodicity and detailed balance ensures a correct algorithm, i.e., given a long enough time, the desired probability distribution is sampled.
The key question in Monte Carlo algorithms is which small changes one
should propose, and what acceptance probabilities one should choose. The
trial proposition and acceptance probabilities have to be well chosen so that
the probability of sampling of a configuration A (after thermalization) is
equal to the Boltzmann weight:
PA = e^{−βEA} / Z ,    (5)
in which EA is the energy of configuration A. Knowledge of the partition function Z is not necessary, because the transition probabilities are constructed from ratios of probabilities. The detailed balance condition (4), using (5), can be rewritten as:
[T(B → A) · A(B → A)] / [T(A → B) · A(A → B)] = PA/PB = e^{−β(EA − EB)} .    (6)

3. Local MC algorithms: Metropolis & Glauber


One often-used approach to realize detailed balance is to propose ran-
domly a small change in state A, resulting in another state B, in such a way that the reverse process (starting in B and then proposing a small change
that results in A) is equally likely. More formally, a process in which the
condition T (A → B) = T (B → A) holds for all pairs of states {A, B}.
For example, in an Ising model containing N spins, this corresponds to choosing one of the spins on the lattice at random, so that T(A → B) = T(B → A) = 1/N. Detailed balance allows for a common
scale factor in the acceptance probabilities for the forward and reverse Monte
Carlo moves, but being probabilities, they cannot exceed 1. Simulations are
then fastest if the bigger of the two acceptance probabilities is equal to 1, i.e.
either A(A → B) or A(B → A) is equal to 1. These conditions (including
detailed balance) are realized by the so-called Metropolis algorithm, in which
the acceptance probability is given by:

Amet(A → B) = Min[1, PB/PA] = Min[1, exp(−β(EB − EA))] .    (7)

Thus, a proposed move that does not raise the total energy is always ac-
cepted, but a proposed move which results in higher energy is accepted with
a probability that decreases exponentially with the energy difference. For the sake of illustration, let us describe what a simulation of the Ising model looks like:
1. Initialize all spins (either random or all up)
2. Perform N random trial moves (N = L^D):
(a) randomly select a site
(b) compute the energy difference ∆E = EB − EA that the trial move (here a spin flip) would induce
(c) generate a random number rn uniformly distributed in [0, 1]
(d) if ∆E ≤ 0 or if rn < exp(−β∆E): flip the spin
3. Perform sampling of some observables
Step 2 corresponds to one unit time step of the Monte Carlo simulation.
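For concreteness, a minimal Python sketch of this loop for the 2D Ising model may look as follows (J = 1, periodic boundary conditions; the lattice size, temperature and number of sweeps are arbitrary choices for the illustration):

    import numpy as np

    def metropolis_sweep(spins, beta, rng):
        # One Monte Carlo unit of time: N = L*L single spin-flip attempts.
        L = spins.shape[0]
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)        # (a) pick a random site
            # (b) energy change of flipping spin (i, j), with J = 1 and
            # periodic boundary conditions
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                  spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nn
            # (c)+(d) Metropolis acceptance, Eq. (7)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] = -spins[i, j]

    rng = np.random.default_rng(1)
    L, beta = 32, 0.3                                   # example values
    spins = rng.choice(np.array([-1, 1]), size=(L, L))  # 1. random start
    for _ in range(1000):                               # 2. MC time steps
        metropolis_sweep(spins, beta, rng)
    magnetization = spins.mean()                        # 3. sample an observable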
An alternative to the Metropolis algorithm is the Glauber dynamics [3]. The trial probability is the same as for Metropolis, i.e. T(A → B) = T(B → A) = 1/N. However, the acceptance probability is now:

Agla(A → B) = e^{−β(EB−EA)} / (1 + e^{−β(EB−EA)}) ,    (8)
which also satisfies the detailed balance condition Eq.(6).
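As a quick numerical sanity check (a sketch, not part of the original text), one can verify that both acceptance rules satisfy the detailed balance condition (6) for arbitrary values of β and of the energy difference:

    import numpy as np

    def a_metropolis(dE, beta):        # Eq. (7), with dE = EB - EA
        return min(1.0, np.exp(-beta * dE))

    def a_glauber(dE, beta):           # Eq. (8)
        return np.exp(-beta * dE) / (1.0 + np.exp(-beta * dE))

    beta, dE = 0.4, 4.0                # arbitrary example values
    for acc in (a_metropolis, a_glauber):
        # Eq. (6) with T(A -> B) = T(B -> A):
        # A(A -> B)/A(B -> A) = exp(-beta * (EB - EA))
        assert np.isclose(acc(dE, beta) / acc(-dE, beta), np.exp(-beta * dE))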

Figure 1: Snapshots of the 2D Ising model defined in Eq. (1) at three different temperatures: from left to right, T ≫ Tc, T ≈ Tc and T ≪ Tc, where Tc is the critical temperature. White and black dots denote spins up and down. The system size is 200 × 200. In
the picture at Tc (middle), we observe large clusters of correlated spins: these are crit-
ical fluctuations that slow down Monte Carlo simulations when local algorithms such as
Metropolis or Glauber are used. This critical slowing down is reduced by non-local (or
cluster) algorithms like the Wolff algorithm [5].

4. Cluster algorithms: the example of the Wolff algorithm


Many models encounter phase transitions at some critical temperature.
The paradigmatic example of a second-order phase transition is the Ising model defined in Eq. (1). In the vicinity of the critical temperature, the spins
display critical fluctuations. As shown in Figure 1 (middle), large aligned
spin domains appear. This phenomenon is associated with (i) the divergence of the correlation length ξ of the connected spin-spin correlation function C(|i − j|) = ⟨Si · Sj⟩ − ⟨Si⟩², and (ii) the divergence of the correlation time of the autocorrelation function C(|t − t′|) = ⟨Si(t) · Si(t′)⟩ − ⟨Si(t)⟩². Moreover,
the correlation time increases with the size of the system like τ ∼ L^{zc}, where zc is the critical dynamical exponent. For the 2D Ising model simulated with the Metropolis algorithm, zc = 2.1665(12) [4]. This phenomenon of
critical slowing down reflects the difficulty of changing the magnetization of a correlated spin cluster. Take again the example of a 2D spin system, where one
spin has four nearest neighbors. If this spin is surrounded by aligned spins,
its contribution to the energy is EA = −4J and after the reversal of this spin,
this becomes EB = 4J. Right at Tc ≈ 2.269 J/kB, the acceptance probability of the Metropolis algorithm is low: A(A → B) = e^{−8βcJ} ≈ 0.0294. Thus,
most of the flipping attempts are rejected. Making matters worse, even if
such a spin with aligned neighbours is flipped, the next time it is selected, it will surely flip back. Only spin flips at the edge of a cluster have a significant
effect over a longer time; but their fraction becomes vanishingly small when
the critical temperature is approached and the cluster size diverges.
One remedy is to develop a non-local algorithm that flips a whole cluster
of spins at once. Such an algorithm has been designed for the Ising model
by Wolff [5], following the idea of Swendsen and Wang [6] for more general
spin systems. A sketch of this procedure is shown in Figure 2.

Figure 2: Sketch of one iteration of the non-local algorithm introduced by Wolff [5]
between two spin configurations A and B. The white and black dots stand for spins of
opposite signs. The spins within the loop (dashed line) belong to the same cluster. The
steps to form the cluster are: (i) choose randomly a seed spin (ii) add aligned spins with
the probability Padd (see text) (iii) add iteratively aligned neighbors of newly added spins
with the probability Padd (iv) flip all the spins in the cluster at once when the cluster is
completed. This is an efficient algorithm for the Ising model at criticality.

The procedure consists of first choosing a random initial site (seed site).
Then, we add each neighboring spin, provided it is aligned, with the prob-
ability Padd . If it is not aligned, it cannot belong to the cluster. This step
is iteratively repeated with each neighbor added to the cluster. When no
neighbor can be added to the cluster anymore, all the spins in the clus-
ter flip at once. The probability to form a certain cluster of spins in state
A before the Wolff move is the same as that in state B after the Wolff
move, except for the aligned spins that have not been added to the cluster
at the boundaries. The probability not to add a given aligned spin is 1 − Padd. If m and n stand for the numbers of aligned spins not added to the cluster in A and B respectively, then T(A → B)/T(B → A) = (1 − Padd)^{m−n}, and the detailed balance condition (6) can be rewritten as:

[T(A → B) · A(A → B)] / [T(B → A) · A(B → A)] = (1 − Padd)^{m−n} [A(A → B)/A(B → A)] = e^{−β(EB−EA)} .    (9)

Noticing that EA − EB = 2J(n − m), it follows that:

A(A → B)/A(B → A) = [(1 − Padd) e^{2βJ}]^{n−m} .    (10)

Therefore, choosing Padd = 1 − e^{−2βJ}, the acceptance probabilities simplify to A(A → B) = A(B → A) = 1. For this reason the spins can be automatically flipped once the cluster is formed. In the vicinity of the critical point, the Wolff algorithm significantly reduces the autocorrelation time and the critical dynamical exponent compared to a local algorithm (such as Metropolis or Glauber).
We notice that a time τ̃W measured in units of Wolff iterations involves only a subset of spins, corresponding to the average size ⟨p⟩ of a cluster. On the other hand, a time τM measured in units of Metropolis iterations involves all spins of the lattice, i.e. N = L^D spins. To compare the efficiency of both algorithms fairly, it is therefore necessary to define a rescaled Wolff autocorrelation time τW = τ̃W ⟨p⟩/L^D. Moreover, it is possible to show that χ = β⟨p⟩ [2]. Noticing that τ̃W ∼ L^{z̃cW}, it follows (remembering ξ ∼ L) that τW ∼ ξ^{zcW} ∼ L^{z̃cW + γ/ν − D}, leading to the definition of the dynamical critical exponent zcW = z̃cW + γ/ν − D. In 2D, for example, the dynamical exponent is remarkably zcW ≈ 0 for Wolff (see e.g. [7, 8] and references therein), whereas zcM = 2.1665(12) [4] for Metropolis.
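A minimal Python sketch of one Wolff iteration for the 2D Ising model may look as follows (breadth-first cluster growth with Padd = 1 − e^{−2βJ}; the function and variable names are illustrative):

    import numpy as np
    from collections import deque

    def wolff_update(spins, beta, rng, J=1.0):
        # Grow one cluster from a random seed and flip it as a whole.
        L = spins.shape[0]
        p_add = 1.0 - np.exp(-2.0 * beta * J)
        seed = (int(rng.integers(L)), int(rng.integers(L)))  # (i) seed spin
        sign = spins[seed]
        cluster = {seed}
        frontier = deque([seed])
        while frontier:                          # (iii) grow iteratively
            i, j = frontier.popleft()
            for nb in (((i + 1) % L, j), ((i - 1) % L, j),
                       (i, (j + 1) % L), (i, (j - 1) % L)):
                # (ii) add aligned neighbors with probability p_add
                if (spins[nb] == sign and nb not in cluster
                        and rng.random() < p_add):
                    cluster.add(nb)
                    frontier.append(nb)
        for site in cluster:                     # (iv) flip the cluster
            spins[site] = -spins[site]
        return len(cluster)

The returned cluster size can be accumulated to estimate ⟨p⟩, and hence χ = β⟨p⟩ as mentioned above.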

5. Continuous-time or rejection-free algorithm


As we have seen in the previous subsection, with a local algorithm (like
e.g. Metropolis) a spin flip of the Ising model at criticality has a high proba-
bility to be rejected, and this holds even more in the ferromagnetic phase. A
significant amount of the computational time will therefore be spent without
making the system evolve. An alternative approach has been proposed by Gillespie [9] in the context of chemical reactions, and by Bortz, Kalos and Lebowitz in the context of spin systems [10].
Briefly, this algorithm lists all possible Monte Carlo moves that can be
performed in the system in its current configuration. One of these moves is chosen randomly according to its probability, and the system is forced to move into this state. The time elapsed during such a move can be estimated rigorously. This time changes from one configuration to another and cannot be set to unity as in the Metropolis algorithm: it takes a continuous value. This is why this algorithm is sometimes called the continuous-time algorithm. On the one hand, this algorithm has to maintain a list of all possible moves, which requires a relatively heavy administrative task; on the other hand, the new configuration is always accepted, which saves a lot of time when the probability of rejection would otherwise be high. It is also sometimes called the rejection-free algorithm. The efficiency of this algorithm is highest for T ≤ Tc. In detail, one iteration of the continuous-time algorithm looks like:
1. List all possible moves from the current configuration. Each move n has an associated probability Pn.
2. Calculate the integrated probability that a move occurs: Q = Σ_n Pn.
3. Generate a random number rn1 uniformly distributed in [0, Q]. This selects move n with probability Pn/Q.
4. Estimate the time elapsed during the move: ∆t = −Q^{−1} ln(1 − rn2), where rn2 is a random number uniformly distributed in [0, 1].¹
Implementation of this algorithm becomes easier if the probabilities Pn
can only take a small number of values. In that case, lists can be made of
all moves with the same probability Pn . The selection process is then first to
select one of the lists, with the appropriate probability, after which randomly
one move is selected from that list. This is the case e.g. in Ising simulations on a square (2D) or cubic (3D) lattice, where the probability Pn is limited to the values 1, e^{−4βJ}, e^{−8βJ} and e^{−12βJ} (the latter occurring only in 3D).
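The following Python sketch illustrates one rejection-free move for the 2D Ising model. For simplicity it recomputes all flip probabilities at each step; an efficient implementation would instead maintain the lists of moves grouped by the few possible values of Pn described above:

    import numpy as np

    def rejection_free_step(spins, beta, rng, J=1.0):
        # One continuous-time move: always flips a spin, returns the
        # elapsed time Delta t.
        L = spins.shape[0]
        nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
              np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * J * spins * nn                 # flip cost of every spin
        P = np.minimum(1.0, np.exp(-beta * dE))   # Metropolis rates P_n
        Q = P.sum()                               # integrated probability
        k = rng.choice(L * L, p=(P / Q).ravel())  # select move, prob P_n/Q
        spins[np.unravel_index(k, (L, L))] *= -1  # forced move
        dt = -np.log(1.0 - rng.random()) / Q      # Delta t = -ln(1-rn2)/Q
        return dt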

6. The worm algorithm


We present here another example of a local algorithm, the so-called worm
algorithm introduced by Prokof’ev, Svistunov and Tupitsyn [11, 12]. The dif-
ference with the algorithms presented above is an alternative representation
of the system, in terms of graphs instead of spins. The Markov chain is there-
fore performed along graph configurations rather than spin configurations,

¹The probability that no move has occurred after a time ∆t decays exponentially: P(∆t) = exp(−Q∆t).

but always with Metropolis acceptance rates. The principle is based on the
high temperature expansion of the partition function. Suppose that we want
to sample the magnetic susceptibility of the Ising model. We can access it via
the correlation function using the (discrete) fluctuation-dissipation theorem:
χ = (β/N) Σ_{i,j} G(i − j) ,    (11)

where G(i − j) = ⟨Si · Sj⟩ − ⟨Si⟩² is the connected correlation function between sites i and j. In the high-temperature phase, the average value of the spin vanishes and G(i − j) = ⟨Si Sj⟩. The first step is to write the correlation function G(i − j) of the Ising model in the following form:
G(i − j) = (1/Z) Σ_{S} Si · Sj e^{βJ Σ_⟨k,l⟩ Sk · Sl} ,    (12)

= (1/Z) cosh(βJ)^{DN} Σ_{S} Si · Sj Π_⟨k,l⟩ (1 + Sk · Sl tanh(βJ)) .    (13)

The terms that contribute to the sum in (13) contain an even number of spins at any given site; terms involving an odd number of spins at some site contribute zero. Each term can be associated with a path determined by the sites involved in it. A contribution to the sum is made of an (open) path joining sites i and j, and of closed loops. The sum over the configurations can be replaced by a sum over such graphs. Figure 3 sketches such a contribution for a given couple of source sites i and j. The im-
portance sampling is no longer made over spin configurations but over graphs
that are generated as follows. One of the two sources, say i, is mobile. At every step, it moves randomly to a neighboring site n. Any nearest-neighbor
site can be chosen with the trial probability T (A → B) = 1/2D, where D is
the dimension of the (hypercubic) lattice. If no link is present between the
two sites, then a link is created with the acceptance probability:
A(A → B) = Min(1, tanh βJ) . (14)
If a link is already present, it is erased with the acceptance probability:
A(A → B) = Min(1, 1/ tanh βJ) . (15)
Since 0 < tanh(x) < 1 for all x > 0, the probability (15) is equal to unity and the link is always erased. These probabilities are obtained considering

Figure 3: Illustration of a move with the worm algorithm. Thick lines stand for one example of a graph contributing to the correlation function: one path joining the sites i and j (the sources) and possibly closed loops. These two graphs differ by one iteration of the worm algorithm. According to (13), the graphs on the left and on the right have respective equilibrium probabilities PA ∝ tanh^5(βJ) and PB ∝ tanh^6(βJ) (we neglect loops that are not relevant for this purpose). From the detailed balance condition (4), the Metropolis acceptance rates are A(A → B) = Min(1, tanh βJ) and A(B → A) = Min(1, 1/tanh βJ). In both cases T(A → B) = T(B → A) = 1/2D, where D is the dimension of the (hypercubic) lattice.

the Metropolis acceptance rate Eq.(7) and the expression of the correlation
function (13). The procedure is illustrated in Figure 3.
The open paths in the two graphs are made of 5 and 6 lattice spacings, respectively (we neglect the loop that does not contribute in this example). According to (13), the graphs on the left and on the right have equilibrium probabilities PA ∝ tanh^5(βJ) and PB ∝ tanh^6(βJ), respectively. The transition probabilities (7) are thus W(A → B) = 1/2D × Min(1, tanh βJ) and W(B → A) = 1/2D × Min(1, 1/tanh βJ), in agreement with (14) and (15).
If the two sources meet, they can be moved together to another random site with a freely chosen transition probability. When the two sources move together, they leave a closed loop behind, which justifies the simultaneous presence of an open path and closed loops in Figure 3. These loops may disappear if
the head of the worm meets them. Compared to the Swendsen-Wang algo-
rithm, the worm algorithm has a dynamical exponent slightly higher in 2D
but significantly lower in 3D [13]. The efficiency of this algorithm can be im-
proved with the use of a continuous-time implementation [14]. The formalism of the worm algorithm is suitable for high temperature. In the critical region,
the number of graphs that contribute to the correlation function increases
exponentially. In order to check the convergence of the algorithm, we can
compare it with the Wolff algorithm for the 5D Ising model with different
lattice sizes in Figure 4. The two algorithms give results in good agreement,
except in the critical region, where the convergence of the worm algorithm becomes slower as the lattice size L increases.
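To make the elementary move concrete, here is a Python sketch of one worm-head update on a 2D lattice (the bond bookkeeping is illustrative, and the handling of the second source and of loop removal is omitted for brevity):

    import numpy as np

    def worm_move(bonds, head, beta, rng, J=1.0):
        # One move of the mobile source ("head"). bonds[0, i, j]
        # (resp. bonds[1, i, j]) = 1 if the link leaving site (i, j) in
        # the +x (resp. +y) direction is occupied; periodic b.c.
        L = bonds.shape[1]
        i, j = head
        d = rng.integers(4)                    # T(A -> B) = 1/2D, D = 2
        if d == 0:
            axis, site, new = 0, (i, j), ((i + 1) % L, j)
        elif d == 1:
            axis, site, new = 0, ((i - 1) % L, j), ((i - 1) % L, j)
        elif d == 2:
            axis, site, new = 1, (i, j), (i, (j + 1) % L)
        else:
            axis, site, new = 1, (i, (j - 1) % L), (i, (j - 1) % L)
        occupied = bonds[axis, site[0], site[1]] == 1
        # Eq. (15): an existing link is erased with probability 1;
        # Eq. (14): a new link is created with probability tanh(beta*J).
        if occupied or rng.random() < np.tanh(beta * J):
            bonds[axis, site[0], site[1]] ^= 1  # toggle the link
            return new                          # the head moves
        return head                             # the move is rejected

In a complete implementation, a histogram of the separation between the two sources gives access to G(i − j), and hence to χ via Eq. (11).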


Figure 4: Comparison of the magnetic susceptibility χ obtained with the worm algorithm
and the Wolff algorithm for the Ising model in 5D with different lattice sizes. The results
are in excellent agreement for both algorithms in the high-temperature phase. As we move closer to Tc, i.e. in the critical regime (dashed-line ellipse), the convergence of the worm algorithm becomes slower as the lattice size increases (keeping all other parameters fixed), because the number of contributing graphs increases exponentially.

7. Dynamical aspects: thermalization & correlation time


In order to perform a sampling of thermodynamical quantities at a given temperature, one first has to thermalize the system. Usually, it is possible
to set up the system either at infinite temperature (all spins random) or in
the ground state (all spins up or down). Let us start from an initial random
configuration. If the thermalization takes place above the critical tempera-
ture, then the relaxation is exponential. As we come closer to the critical point, the equilibrium correlation length becomes larger and the relaxation
becomes much slower and eventually algebraic right at Tc . An example of
such a process for the 2D Ising model is given in Fig. 5. Initially (t = 0), the
system is prepared at infinite temperature, all the spins are random. Then
the Glauber dynamics is applied at the critical temperature. We see the
nucleation and the evolution of correlated spin domains in time (t =0, 10,
100, 1000 from left to right). It is possible to show that in such a quench,
the correlation length grows with time like [15]:

ξ(t) ∼ t^{1/zc} ,    (16)

where zc is the critical dynamical exponent (zc = 2.1665(12) [4] for the
2D Ising model). The time needed to complete thermalization at criticality is therefore τth ∼ L^{zc}. In case of a subcritical quench, the system has to
choose between two ferromagnetic states of opposite magnetization. Again,
the relaxation is slow because there is nucleation and growth of domains of opposite magnetization. We define the typical size of a domain at time t as Ld(t). The thermalization process involves the growth (coarsening) of these domains, until eventually one domain spans the whole system. Only then is equilibrium reached (in the low-temperature phase, the expectation of the absolute value of the magnetization is nonzero). The motion of the domain walls is mostly diffusive, Ld(t) ∼ t^{1/z}, with a dynamical exponent z = 2 [15].
The walls have to cover a distance ∼ L, so that the thermalization time scales as τth ∼ L². This time again diverges with system size. Starting from
an ordered state does not help for the critical quench (but it does help to
start in the ground state to thermalize the system at T < Tc ).
Once the system is thermalized, one has to be aware of another dynamical
effect: the correlation time. This is the time needed between two measurements to obtain statistically uncorrelated configurations. In the high-temperature
phase, the correlation time is equal to the thermalization time, up to some
factor close to unity. This is not surprising, as proper thermalization requires
the configuration to become uncorrelated from the initial state. In practice, in all Monte Carlo simulations, one has to estimate τ at the sampling temperature in order to treat the error bars properly.
In the low temperature phase, after thermalization, the magnetization
is either positive or negative, and stays like that over prolonged periods of
time. So-called magnetization reversals do occur now and then, but the
characteristic time between those increases exponentially with system size.

Figure 5: Snapshots of the evolution of the 2D Ising model of size 200 × 200 after a quench
from a disordered state until equilibration at the critical temperature Tc with the Glauber
dynamics. We see the nucleation and the growth of correlated domains. From left to
right, t =0, 10, 100, 1000 (expressed in Monte Carlo unit time after the quench). The
thermalization is completed when the correlation length reaches its static value ξ ∼ L.
Using Eq. (16), the thermalization time at criticality behaves like τth ∼ L^{zc} and therefore
diverges with system size.

Because of the strict symmetry between the parts of phase space with positive
and negative magnetization, in practice one is not so much interested in
the time of magnetization reversals, but rather in the correlation time τ
within the up- or down-phase; and this time is some temperature-dependent
constant, irrespective of the system size provided it is significantly larger
than the correlation length.
Let us now consider the two-time spin-spin correlation functions in the framework of dynamical scaling [16]. We will use for this purpose a continuous space, so that the spin Si on site i is now denoted by Sr, where r is the position vector. Upon a dilatation with a scale factor b, the equilibrium correlation C(r, t, |T − Tc|) = ⟨S0(0) · Sr(t)⟩ is assumed to satisfy the homogeneity relation:

C(r, t, |T − Tc|) = b^{−2xσ} C(r/b, t/b^{zc}, |T − Tc| b^{1/ν}) ,    (17)

where xσ is the scaling dimension of the magnetization density, with 2xσ = η for two-dimensional systems, and zc is again the critical dynamical exponent. The
motivation for the last two arguments of the scaling function in equation (17)
comes from the behavior of the correlation length either with time, ξ ∼ t^{1/zc}, or with temperature, ξ ∼ |T − Tc|^{−1/ν}. Setting b = t^{1/zc} in equation (17), we
obtain:
C(r, t) = t^{−η/zc} C(r/t^{1/zc}, |T − Tc| t^{1/(νzc)}) .    (18)
The algebraic prefactor corresponds to the critical behavior while the scaling

14
function includes all corrections to it. The characteristic time:

τ ∼ ξ^{zc} ∼ |T − Tc|^{−νzc} ,    (19)

appears as the relaxation time of the system. Here we are interested only
in autocorrelation functions for which r = 0. Moreover, we expect an ex-
ponential decay of the scaling function C(t/τ ) in the paramagnetic phase.
Therefore, the autocorrelation function can generally be written at equilib-
rium as:
C(t, T) ∼ e^{−t/τ} / t^{η/zc} .    (20)
The spin-spin autocorrelation function C(t, T ) versus time is plotted in
Fig.6 (left) for the 2D Ising model of size N = 50 × 50. The different
curves correspond to different inverse temperatures β = 0.35 to 0.39 (the critical inverse temperature is βc = ln(1 + √2)/2 ≈ 0.441). We observe an
increase of the autocorrelation time as the temperature comes closer to Tc .
The autocorrelation time can be obtained from a fit of the curve in the main
graph, assuming Eq.(20). The result is plotted versus |T − Tc | in the inset.
The numerics tend to the behavior of Eq.(19) as T goes to Tc .
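In practice, τ can be estimated from a recorded time series of an observable; a minimal Python sketch using the integrated autocorrelation time (one common convention, not necessarily the fitting procedure used here):

    import numpy as np

    def autocorrelation(x):
        # Unbiased estimate of the normalized autocorrelation C(t)/C(0).
        x = np.asarray(x, dtype=float) - np.mean(x)
        n = len(x)
        c = np.correlate(x, x, mode="full")[n - 1:]  # sum_i x_i x_(i+t)
        c /= np.arange(n, 0, -1)                     # terms at each lag
        return c / c[0]

    def integrated_correlation_time(c):
        # tau = 1/2 + sum_(t>=1) C(t)/C(0), truncated at the first zero
        # crossing to limit the statistical noise of large lags.
        zeros = np.flatnonzero(c <= 0)
        cut = zeros[0] if zeros.size else len(c)
        return 0.5 + np.sum(c[1:cut])

Samples separated by at least about 2τ can then be treated as statistically independent when computing error bars.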
Some other aspects of critical dynamics are interesting to study, for in-
stance, the time evolution of the equilibrium mean-square displacement of
the magnetization. It is defined as:

h(t) = ⟨(M(t) − M(0))²⟩ .    (21)

At small time differences (t < 1), the dynamics consists of sparsely distributed proposed spin flips, each of which has a nonzero acceptance probability. Since these spin flips are uncorrelated and their number scales as L^D t, in the short-time regime (t ≲ 1) h(t) behaves diffusively:

h(t) ∼ L^D t .    (22)

The magnetization at long times t > τ ∼ L^{zc} is no longer correlated, i.e. ⟨M(t) · M(0)⟩ ≈ 0. Moreover, the expectation value of the squared magnetization is directly related to the magnetic susceptibility via χ ≡ (β/N)⟨M²⟩, where N = L^D (for T ≥ Tc, ⟨M⟩ ≈ 0). It diverges at the critical temperature with system size as ∼ L^{γ/ν}, implying:

h(t) = ⟨M(t)²⟩ + ⟨M(0)²⟩ − 2⟨M(t) · M(0)⟩ ≈ 2⟨M²⟩ ∼ L^{D+γ/ν}    (t > τ) .    (23)

Therefore h(t) has to grow from h(t ≈ 1) ∼ L^D to h(t ∼ L^{zc}) ∼ L^{D+γ/ν}. Assuming a power-law behavior, it follows that h(t) ∼ t^{γ/(νzc)}. Therefore, in this regime, we can assume the following form for h(t):

h(t) ∼ L^{D+γ/ν} F(t/L^{zc}) ,    (24)

where F is a scaling function with the limit F(x) = constant for x ≫ 1 and F(x) ∼ x^{γ/(νzc)} at intermediate times. We measured the function h(t) as defined in Eq. (21) in simulations of the two-dimensional Ising model at the critical temperature for various system sizes. The scaling function F is plotted in Fig. 6 (right), using the exponents γ = 1.75, ν = 1 and zc ≈ 2.17. With increasing system size, the data become increasingly consistent
Figure 6: (left) Spin-spin autocorrelation function C(t) of the 2D Ising model (N = 50 × 50) at different β = J/(kB T) = 0.35, 0.36, 0.37, 0.38 and 0.39 (βc ≈ 0.44). The correlation time τ increases as T approaches Tc. Assuming Eq. (20), τ is extracted and plotted in the inset vs. |T − Tc|. (right) Scaling function h(t)/L^{2+γ/ν} of the mean-square deviation of the magnetization versus t/L^{zc} for the 2D Ising model at Tc for different lattice sizes. At intermediate times, it displays an anomalous diffusion behavior compatible with h(t) ∼ t^{γ/(νzc)} = t^{0.81}. Inset: the autocorrelation CM(t) = ⟨|M(0)| · |M(t)|⟩ is compatible with a stretched exponential CM(t) ∼ exp[−(t/τ)^{γ/(νzc)}], explained by the behavior of h(t).

with a simple power-law behavior for intermediate times, between the early-time behavior (Eq. (22)) and the time of saturation ∼ L^{zc}. This power-law behavior corresponds to an instance of anomalous diffusion, i.e. a mean-square deviation growing as a power law with an exponent ≠ 1, compatible with:
h(t) ∼ t^{γ/(νzc)} ∼ t^{0.81} .    (25)

Assuming Eq.(25), the magnetization autocorrelation function for inter-
mediate times (i.e. between times of order unity and the correlation time,
thus spanning many decades) is compatible with the first terms of the Taylor
expansion of a stretched exponential:

CM(t) = ⟨|M(t)| · |M(0)|⟩ ∼ exp[−(t/τ)^{γ/(νzc)}] .    (26)

This conjecture compares well with the numerics for the correlation function in Figure 6 (right, in the inset). This shows that the dynamical critical exponent zc already appears at relatively small times, 1 ≪ t ≪ L^{zc}.
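For completeness, a sketch of how h(t) of Eq. (21) can be estimated from an equilibrium record of the magnetization (a time-origin-averaged estimator; the measurement used by the authors may differ in detail):

    import numpy as np

    def mean_square_displacement(m, max_lag):
        # h(t) = <(M(t0 + t) - M(t0))^2>, averaged over time origins t0,
        # from a record m of M sampled once per Monte Carlo time step.
        m = np.asarray(m, dtype=float)
        return np.array([np.mean((m[lag:] - m[:-lag]) ** 2)
                         for lag in range(1, max_lag + 1)])

With a record taken at T = Tc after thermalization, h should display the regimes of Eqs. (22), (25) and (23) as the lag increases.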

8. Conclusion
In these lecture notes, we provide an introduction to Monte Carlo simulations, which are a way to produce a set of representative configurations of a statistical system. We start with the basic principles: ergodicity and detailed balance. In the next parts, we present several Monte Carlo algorithms. To illustrate how they work, we use the example of the Ising model. This model is defined by scalar spins on a lattice that interact via nearest-neighbor interactions. It is the paradigmatic system for second-order phase transitions: a critical temperature separates a disordered phase at high temperature from an ordered phase at low temperature. These different regimes call for different Monte Carlo strategies. In the disordered phase, local algorithms such as Metropolis or Glauber are efficient. In the critical region, the appearance of long-range correlations sets a computational challenge. It has been solved by the use of cluster algorithms such as the Wolff algorithm, which flips a whole cluster of correlated spins. Below the critical temperature, where the probability of a spin flip is low, the continuous-time algorithm saves computational time: it forces the system into a new configuration, with a jump in time determined by the transition probabilities. Finally, we describe an interesting algorithm based on an alternative representation of the model in terms of graphs instead of spins. We end with important considerations on the dynamics: thermalization and correlation time.

Acknowledgements
We thank Christophe Chatelain for his careful reading of the manuscript
and the various collaborations that have largely inspired these notes. We also thank Raoul Schram for stimulating discussions and the reading of the
manuscript. J-CW is supported by the Laboratory of Excellence Initiative
(Labex) NUMEV, OD by the Scientific Council of the University of Mont-
pellier 2. This work is part of the D-ITP consortium, a program of the
Netherlands Organisation for Scientific Research (NWO) that is funded by
the Dutch Ministry of Education, Culture and Science (OCW).

References

[1] Binney J J, Dowrick N J, Fisher A J & Newman M E J The Theory of Critical Phenomena (Clarendon Press, Oxford, 1995).

[2] Newman M E J & Barkema G T (1999) Monte Carlo Methods in Statistical Physics (Oxford University Press).

[3] Glauber R J (1963) Time-Dependent Statistics of the Ising Model J. Math. Phys. 4, 294.

[4] Nightingale M P & Blöte H W J (1996) Dynamic Exponent of the Two-Dimensional Ising Model and Monte Carlo Computation of the Subdominant Eigenvalue of the Stochastic Matrix Phys. Rev. Lett. 76, 4548.

[5] Wolff U (1989) Collective Monte Carlo Updating for Spin Systems Phys. Rev. Lett. 62, 361.

[6] Swendsen R H & Wang J S (1987) Nonuniversal critical dynamics in Monte Carlo simulations Phys. Rev. Lett. 58, 86-88.

[7] Gündüç S, Dilaver M, Aydın M & Gündüç Y (2005) A study of dynamic finite-size scaling behavior of the scaling functions - calculation of the dynamic critical index of the Wolff algorithm Comp. Phys. Comm. 166, 1.

[8] Du J, Zheng B & Wang J-S (2006) Dynamic critical exponents for Swendsen-Wang and Wolff algorithms obtained by a nonequilibrium relaxation method J. Stat. Mech.: Theor. Exp., P05004.

[9] Gillespie D T (1976) A general method for numerically simulating the stochastic time evolution of coupled chemical reactions J. Comp. Phys. 22, 403-434; (1977) Exact stochastic simulation of coupled chemical reactions J. Phys. Chem. 81, 2340-2361.

[10] Bortz A B, Kalos M H & Lebowitz J L (1975) A new algorithm for Monte Carlo simulation of Ising spin systems J. Comp. Phys. 17, 10-18.

[11] Prokof'ev N V, Svistunov B V & Tupitsyn I S (1998) Worm algorithm in quantum Monte Carlo simulations Phys. Lett. A 238, 253-257.

[12] Prokof'ev N & Svistunov B (2001) Worm algorithms for classical statistical models Phys. Rev. Lett. 87, 160601.

[13] Deng Y, Garoni T M & Sokal A D (2007) Dynamic Critical Behavior of the Worm Algorithm for the Ising Model Phys. Rev. Lett. 99, 110601.

[14] Berche B, Chatelain C, Dhall C, Kenna R, Low R & Walter J-C (2008) Extended scaling in high dimensions J. Stat. Mech. P11010.

[15] Bray A J (1994) Theory of phase-ordering kinetics Adv. Phys. 43, 357.

[16] Hohenberg P C & Halperin B I (1977) Theory of dynamic critical phenomena Rev. Mod. Phys. 49, 435.
