Monte Carlo Introduction
J.-C. Walter
G. T. Barkema
Institute for Theoretical Physics, Utrecht University, The Netherlands
Instituut-Lorentz, Universiteit Leiden, P.O. Box 9506, 2300 RA Leiden, The Netherlands
Abstract
Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations, so as to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model, a lattice spin system with nearest-neighbor interactions, is appropriate to illustrate different examples of Monte Carlo simulations. It displays a second-order phase transition between a disordered (high temperature) phase and an ordered (low temperature) phase, leading to different strategies of simulation. The Metropolis algorithm and the Glauber dynamics are efficient at high temperature. Close to the critical temperature, where the spins display long-range correlations, cluster algorithms are more efficient. We introduce the rejection-free (or continuous time) algorithm, and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins, sampled by the worm algorithm. We conclude with an important discussion of dynamical effects such as thermalization and correlation time.
Keywords: Monte Carlo simulations, Ising model, algorithms
The Ising model is defined by spins S_i = ±1 on the sites of a lattice. The energy of a configuration C of spins is:

E(C) = -J \sum_{\langle ij \rangle} S_i S_j ,   (1)

where the summation runs over all pairs of nearest-neighbor spins ⟨ij⟩ of the lattice and J is the strength of the interaction. The statistical properties of the system are obtained from the partition function:
Z = \sum_C e^{-\beta E(C)} ,   (2)
where the summation runs over all the configurations C. The energy of a con-
figuration is denoted by E(C). Here, β ≡ 1/(kB T ) is the inverse temperature
(temperature T and Boltzmann constant kB ). The Ising model displays a
second-order phase transition at the temperature Tc , characterised by a high
temperature phase with an average magnetization zero (disordered phase)
and a low temperature phase with a non-zero average magnetization (or-
dered phase). The system is exactly solvable in one and two dimensions.
For D ≥ 4, the critical properties are easily obtained by the renormalization
group. In three dimensions no exact solution is available. Even a 3D cubic
lattice of very modest size 10 × 10 × 10 generates 2^{1000} ≈ 10^{301} configurations
in the partition function. If we want to obtain e.g. critical exponents with
a sufficient accuracy, we need sizes that are at least an order of magnitude
larger. An exact enumeration is a hopeless effort. Monte Carlo simulations
are one of the possible ways to perform a sampling of configurations. This
sampling is made out of a set of configurations of the phase space that con-
tribute the most to the averages, without the need of generating every single
configuration. This is referred to as importance sampling. In this sampling
of the phase space, it is important to choose the appropriate Monte Carlo
scheme to reduce the computational time. In that respect, the Ising model is interesting because the different temperature regimes have led to the development of new algorithms that tremendously reduce the computational time, specifically close to the critical temperature.
We will start these notes by introducing two important principles of
Monte Carlo simulations: detailed balance and ergodicity. Then we will re-
view different examples of Monte Carlo methods applied to the Ising model:
local and cluster algorithms, the rejection free (or continuous time) algo-
rithm, and another kind of Monte Carlo simulations based on an alternative
representation of the spin system, namely the so-called worm algorithm. We continue by discussing dynamical quantities, such as the thermalization and correlation times.
Monte Carlo simulations generate a Markov chain of configurations. The probability P_A(t) of finding the system in a state A at time t evolves according to the master equation:

\frac{dP_A(t)}{dt} = \sum_{B \neq A} \left[ P_B(t) \, W(B \to A) - P_A(t) \, W(A \to B) \right] ,   (3)
with the conditions W(A → B) ≥ 0 for all pairs A, B, and \sum_B W(A \to B) = 1 for all A.
The transition probability W (A → B) can be further decomposed into a trial
proposition probability T (A → B) and an acceptance probability A(A → B)
so that W (A → B) = T (A → B) · A(A → B). A proposed change in the
configuration is usually referred to as a Monte Carlo move. Conventionally,
the time scale in Monte Carlo simulations is chosen such that each degree of
freedom of the system is proposed to change once per unit time, statistically.
The first constraint on this Markov chain is called ergodicity: starting
from any configuration C0 with nonzero Boltzmann weight, any other config-
uration with nonzero Boltzmann weight should be reachable through a finite
number of Monte Carlo moves. This constraint is necessary for a proper sam-
pling of phase space, as otherwise the Markov chain will be unable to access
a part of phase space with a nonzero contribution to the partition sum.
Apart from a very small number of peculiar algorithms, a second con-
straint is known as the condition of detailed balance. For every pair of states
A and B, the probability to move from A to B, as well as the probability for
the reverse move, are related via:
P_A \, T(A \to B) \, A(A \to B) = P_B \, T(B \to A) \, A(B \to A) .   (4)
The meaning of this condition can be seen in Eq. (3): a stationary probabil-
ity (i.e. dPA /dt = 0) is reached if each individual term in the summation on
the right-hand side cancels. This prevents the Markov chain from being trapped in a limit cycle [2]. Detailed balance is a strong, but not necessary, condition; generalizations of the Monte Carlo process that do not satisfy it exist. The combination of ergodicity and detailed balance assures a correct algorithm, i.e., given a long enough time, the desired probability distribution is sampled.
The key question in Monte Carlo algorithms is which small changes one
should propose, and what acceptance probabilities one should choose. The
trial proposition and acceptance probabilities have to be well chosen so that
the probability of sampling of a configuration A (after thermalization) is
equal to the Boltzmann weight:
P_A = \frac{e^{-\beta E_A}}{Z} ,   (5)
in which EA is the energy of configuration A. The knowledge of the parti-
tion function Z is not necessary because the transition probabilities are con-
structed with the ratio of probabilities. The detailed balance condition (4),
using (5), can be rewritten as:
\frac{T(B \to A) \, A(B \to A)}{T(A \to B) \, A(A \to B)} = \frac{P_A}{P_B} = e^{-\beta(E_A - E_B)} .   (6)
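As a quick numerical sanity check (a sketch of ours, using the Metropolis acceptance rule introduced below), one can verify that Eq. (6) balances the stationary weights for any pair of energies:

```python
import numpy as np

def satisfies_detailed_balance(E_A, E_B, beta):
    """Check P_A * W(A->B) == P_B * W(B->A) for symmetric trial probabilities
    and the Metropolis acceptance A = min(1, exp(-beta * dE))."""
    W_AB = min(1.0, np.exp(-beta * (E_B - E_A)))    # acceptance A -> B
    W_BA = min(1.0, np.exp(-beta * (E_A - E_B)))    # acceptance B -> A
    # unnormalized Boltzmann weights: Z cancels on both sides
    return np.isclose(np.exp(-beta * E_A) * W_AB, np.exp(-beta * E_B) * W_BA)
```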
A common choice is a symmetric trial probability, such that the reverse process (starting in B and then proposing a small change that results in A) is equally likely; more formally, a process in which the condition T(A → B) = T(B → A) holds for all pairs of states {A, B}. For example, in an Ising model containing N spins, this corresponds to choosing one of the spins on the lattice at random, so that T(A → B) = T(B → A) = 1/N. Detailed balance allows for a common
scale factor in the acceptance probabilities for the forward and reverse Monte
Carlo moves, but being probabilities, they cannot exceed 1. Simulations are
then fastest if the larger of the two acceptance probabilities is equal to 1, i.e. either A(A → B) or A(B → A) is equal to 1. These conditions (including detailed balance) are realized by the so-called Metropolis algorithm, in which the acceptance probability is given by:

A(A \to B) = \mathrm{Min}\left(1, \, e^{-\beta(E_B - E_A)}\right) .   (7)

Thus, a proposed move that does not raise the total energy is always accepted, but a proposed move that results in a higher energy is accepted with a probability that decreases exponentially with the energy difference.
difference. For the sake of illustration, let us describe how a simulation of
the Ising model looks like:
1. Initialize all spins (either random or all up)
2. Perform N random trial moves (N = L^D):
(a) randomly select a site
(b) compute the energy difference ∆E = EB − EA that the trial move (here a spin flip) would induce
(c) generate a random number rn uniformly distributed in [0, 1]
(d) if ∆E ≤ 0 or if rn < exp(−β∆E): flip the spin
3. Perform sampling of some observables
Step 2 corresponds to one unit time step of the Monte Carlo simulation.
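To make the procedure concrete, here is a minimal sketch of one such unit time step for the 2D Ising model (our illustration in Python, not code from the original notes; the function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng()

def metropolis_sweep(spins, beta, J=1.0):
    """One unit of Monte Carlo time: N = L*L random single-spin-flip trials."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)        # (a) randomly select a site
        # (b) energy cost of flipping spin (i, j), with periodic boundaries
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nn
        # (c)-(d) Metropolis acceptance test, Eq. (7)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins
```

A run would initialize, e.g., spins = rng.choice(np.array([-1, 1]), size=(L, L)), call metropolis_sweep once per unit of time, and sample observables between sweeps.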
An alternative to the Metropolis algorithm is the Glauber dynamics [3]. The
trial probability is the same as for Metropolis, i.e. T(A → B) = T(B → A) = 1/N. However, the acceptance probability is now:
A_{\mathrm{gla}}(A \to B) = \frac{e^{-\beta(E_B - E_A)}}{1 + e^{-\beta(E_B - E_A)}} ,   (8)
which also satisfies the detailed balance condition Eq.(6).
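In code, only the acceptance test of the sketch above changes; a minimal version of Eq. (8):

```python
import numpy as np

def glauber_accept(dE, beta):
    """Glauber acceptance probability of Eq. (8):
    exp(-beta*dE) / (1 + exp(-beta*dE)), rewritten as 1 / (1 + exp(beta*dE))."""
    return 1.0 / (1.0 + np.exp(beta * dE))
```

The move is then accepted when a uniform random number in [0, 1] falls below this value.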
Figure 1: Snapshots of the 2D Ising model defined in Eq. (1) at three different temperatures: from left to right, T ≪ Tc, T ≈ Tc and T ≫ Tc, where Tc is the critical temperature. White and black dots denote spins up and down. The system size is 200 × 200. In
the picture at Tc (middle), we observe large clusters of correlated spins: these are crit-
ical fluctuations that slow down Monte Carlo simulations when local algorithms such as
Metropolis or Glauber are used. This critical slowing down is reduced by non-local (or
cluster) algorithms like the Wolff algorithm [5].
Close to the critical temperature, local algorithms become inefficient: a spin flipped in the bulk of a large cluster of aligned spins will surely flip back. Only spin flips at the edge of a cluster have a significant effect over a longer time; but their fraction becomes vanishingly small when the critical temperature is approached and the cluster size diverges.
One remedy is to develop a non-local algorithm that flips a whole cluster
of spins at once. Such an algorithm has been designed for the Ising model
by Wolff [5], following the idea of Swendsen and Wang [6] for more general
spin systems. A sketch of this procedure is shown in Figure 2.
Figure 2: Sketch of one iteration of the non-local algorithm introduced by Wolff [5]
between two spin configurations A and B. The white and black dots stand for spins of
opposite signs. The spins within the loop (dashed line) belong to the same cluster. The
steps to form the cluster are: (i) randomly choose a seed spin; (ii) add its aligned neighbors with the probability Padd (see text); (iii) iteratively add the aligned neighbors of newly added spins with the probability Padd; (iv) when the cluster is complete, flip all the spins in the cluster at once. This is an efficient algorithm for the Ising model at criticality.
The procedure consists of first choosing a random initial site (seed site).
Then, we add each neighboring spin, provided it is aligned, with the prob-
ability Padd . If it is not aligned, it cannot belong to the cluster. This step
is iteratively repeated with each neighbor added to the cluster. When no
neighbor can be added to the cluster anymore, all the spins in the clus-
ter flip at once. The probability to form a certain cluster of spins in state
A before the Wolff move is the same as that in state B after the Wolff
move, except for the aligned spins that have not been added to the cluster
at the boundaries. The probability to not add an aligned spin is 1 − Padd .
If m and n stand for the numbers of aligned spins that were not added to the cluster in states A and B, respectively, then T(A → B)/T(B → A) = (1 − Padd)^{m−n}, and the detailed balance condition (6) can be rewritten as:

\frac{T(A \to B) \, A(A \to B)}{T(B \to A) \, A(B \to A)} = (1 - P_{add})^{m-n} \, \frac{A(A \to B)}{A(B \to A)} = e^{-\beta(E_B - E_A)} .   (9)

Since only the bonds at the edge of the cluster change under the flip, the energy difference is EB − EA = 2J(m − n), so that:

\frac{A(A \to B)}{A(B \to A)} = \left[ (1 - P_{add}) \, e^{2\beta J} \right]^{n-m} .   (10)

With the choice Padd = 1 − e^{−2βJ}, this ratio equals unity, and the cluster flip can always be accepted: A(A → B) = A(B → A) = 1.
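To make the procedure concrete, here is a minimal sketch of one Wolff update in Python (our illustration, not code from the notes; the set-based cluster bookkeeping is an implementation choice):

```python
import numpy as np

rng = np.random.default_rng()

def wolff_move(spins, beta, J=1.0):
    """One Wolff update: grow a cluster with p_add = 1 - exp(-2*beta*J)
    and flip it as a whole (steps (i)-(iv) of Figure 2)."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta * J)
    seed = (int(rng.integers(L)), int(rng.integers(L)))   # (i) random seed spin
    sign = spins[seed]
    cluster, frontier = {seed}, [seed]
    while frontier:                                       # (ii)-(iii) grow iteratively
        i, j = frontier.pop()
        for nb in ((i+1) % L, j), ((i-1) % L, j), (i, (j+1) % L), (i, (j-1) % L):
            if nb not in cluster and spins[nb] == sign and rng.random() < p_add:
                cluster.add(nb)
                frontier.append(nb)
    for site in cluster:                                  # (iv) flip the whole cluster
        spins[site] = -sign
    return len(cluster)
```

Near Tc the clusters grown this way are large, so a single move decorrelates the configuration far more effectively than local spin flips.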
In this class of algorithms, one of the possible moves is chosen randomly according to its probability, and the system is forced to move into this state. The time step of evolution during such a move can be estimated rigorously. This time changes from one configuration to another and cannot be set to unity as in the Metropolis algorithm: it takes a continuous value. This is why this algorithm is sometimes called the continuous time algorithm. On the one hand, this algorithm has to maintain a list of all possible moves, which requires a relatively heavy administrative task; on the other hand, the new configuration is always accepted, which saves a lot of time when the probability of rejection would otherwise be high. It is also sometimes called the rejection free algorithm. The efficiency of this algorithm is maximal for T ≤ Tc. In detail, one iteration of the continuous time algorithm looks like this (a code sketch follows the list):
1. List all possible moves from the current configuration. Each of the n possible moves has an associated probability P_i.
2. Calculate the integrated probability that a move occurs, Q = \sum_{i=1}^{n} P_i.
3. Choose one of the moves with probability P_i/Q and carry it out.
4. Increment the time by ∆t drawn from the exponential distribution P(∆t) = exp(−Q∆t).
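A generic sketch of steps 2-4 (our illustration; maintaining and updating the list of rates efficiently, e.g. by grouping the spins of the Ising model into classes with equal local field, is the administrative task mentioned above):

```python
import numpy as np

rng = np.random.default_rng()

def rejection_free_step(rates):
    """Choose the next move and the elapsed time in a rejection-free scheme.
    `rates` holds the probabilities P_i of all currently possible moves."""
    rates = np.asarray(rates, dtype=float)
    Q = rates.sum()                           # step 2: integrated probability
    i = rng.choice(rates.size, p=rates / Q)   # step 3: move i with probability P_i/Q
    dt = rng.exponential(1.0 / Q)             # step 4: waiting time, P(dt) = exp(-Q dt)
    return i, dt
```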
The worm algorithm [12] samples an alternative representation of the Ising model, but always with Metropolis acceptance rates. The principle is based on the
high temperature expansion of the partition function. Suppose that we want
to sample the magnetic susceptibility of the Ising model. We can access it via
the correlation function using the (discrete) fluctuation-dissipation theorem:
\chi = \frac{\beta}{N} \sum_{i,j} G(i - j) ,   (11)
where G(i − j) = ⟨S_i S_j⟩ − ⟨S_i⟩^2 is the connected correlation function between sites i and j. In the high temperature phase, the average value of the spin vanishes and G(i − j) = ⟨S_i S_j⟩. The first step is to write the correlation
function G(i − j) of the Ising model in the following form:
G(i - j) = \frac{1}{Z} \sum_{\{S\}} S_i S_j \, e^{\beta J \sum_{\langle k,l \rangle} S_k S_l} ,   (12)

= \frac{1}{Z} \cosh(\beta J)^{DN} \sum_{\{S\}} S_i S_j \prod_{\langle k,l \rangle} \left( 1 + S_k S_l \tanh(\beta J) \right) .   (13)
The terms that contribute to the sum in (13) are those in which every site carries an even number of spin factors; terms with an odd number of spins at some site vanish upon summation over the configurations. Each term can be associated
with a path determined by the sites involved in it. A contribution to the sum is made of an (open) path joining sites i and j, plus closed loops. The sum over the configurations can be replaced by a sum over such graphs. Figure 3
sketches such a contribution for a given pair of source sites i and j. The importance sampling is no longer made over spin configurations but over graphs that are generated as follows. One of the two sources, say i, is mobile. At every step, it moves randomly to a neighboring site. Any nearest-neighbor site can be chosen with the trial probability T(A → B) = 1/(2D), where D is
the dimension of the (hypercubic) lattice. If no link is present between the
two sites, then a link is created with the acceptance probability:
A(A → B) = Min(1, tanh βJ) . (14)
If a link is already present, it is erased with the acceptance probability:
A(A → B) = Min(1, 1/ tanh βJ) . (15)
Since 0 < tanh x < 1 for all x > 0, the probability (15) is equal to unity and the link is always erased. These probabilities are obtained considering
Figure 3: Illustration of a move with the worm algorithm. Thick lines stand for one example of a graph contributing to the correlation function: a path joining the sites i and j (the sources), plus possibly closed loops. These two graphs differ by one iteration of the worm algorithm. According to (13), the graphs on the left and on the right have respective equilibrium probabilities PA ∝ tanh^5(βJ) and PB ∝ tanh^6(βJ) (we neglect loops that are not relevant for this purpose). From the detailed balance condition (4), the Metropolis acceptance rates are A(A → B) = Min(1, tanh βJ) and A(B → A) = Min(1, 1/tanh βJ). In both cases T(A → B) = T(B → A) = 1/(2D), where D is the dimension of the (hypercubic) lattice.
the Metropolis acceptance rate Eq.(7) and the expression of the correlation
function (13). The procedure is illustrated in Figure 3.
The open paths in the two graphs are made of 5 and 6 lattice spacings, respectively (we neglect the loop that does not contribute in this example). According to (13), the graphs on the left and on the right have equilibrium probabilities PA ∝ tanh^5(βJ) and PB ∝ tanh^6(βJ), respectively. The transition probability built from Eq. (7) is thus W(A → B) = 1/(2D) × Min(1, tanh βJ) and W(B → A) = 1/(2D) × Min(1, 1/tanh βJ), in agreement with (14) and (15).
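To make the move concrete, here is a hedged sketch of one trial update of the worm's mobile source on a 2D periodic lattice (our illustration; representing the graph as a set of edges is an implementation choice):

```python
import numpy as np

rng = np.random.default_rng()

def worm_move(head, bonds, L, beta, J=1.0):
    """Move the mobile source to a random neighbor, toggling the link on the
    traversed edge with the acceptance probabilities (14) and (15)."""
    i, j = head
    di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]  # T = 1/(2D)
    nxt = ((i + di) % L, (j + dj) % L)
    edge = frozenset((head, nxt))
    if edge not in bonds:
        if rng.random() < np.tanh(beta * J):   # create the link, Eq. (14)
            bonds.add(edge)
            return nxt
        return head                            # move rejected, the head stays
    bonds.remove(edge)                         # erase the link, Eq. (15): always accepted
    return nxt
```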
If the two sources meet, they can move together to another random site with a freely chosen transition probability. When the two sources move together, they leave a closed loop behind, which explains the simultaneous presence of an open path and closed loops in Figure 3. These loops may disappear if
the head of the worm meets them. Compared to the Swendsen-Wang algo-
rithm, the worm algorithm has a dynamical exponent slightly higher in 2D
but significantly lower in 3D [13]. The efficiency of this algorithm can be im-
proved with the use of a continuous time implementation [14]. The formalism
of the worm algorithm is suitable for high temperature. In the critical region, the number of graphs that contribute to the correlation function increases exponentially. In order to check the convergence of the algorithm, we compare it with the Wolff algorithm for the 5D Ising model with different lattice sizes in Figure 4. The two algorithms give results in good agreement, except in the critical region, where the convergence of the worm algorithm becomes slower as the lattice size L increases.
Figure 4: Comparison of the magnetic susceptibility χ obtained with the worm algorithm and the Wolff algorithm for the Ising model in 5D with different lattice sizes (L = 10 and L = 14). The results of the two algorithms are in excellent agreement in the high temperature phase. Closer to Tc, i.e. in the critical regime (dashed-line ellipse in the figure), the convergence of the worm algorithm becomes slower as the lattice size increases (keeping all other parameters fixed), because the number of contributing graphs increases exponentially.
As the temperature approaches the critical point, the equilibrium correlation length becomes larger and the relaxation becomes much slower, eventually algebraic right at Tc. An example of such a process for the 2D Ising model is given in Fig. 5. Initially (t = 0), the system is prepared at infinite temperature: all the spins are random. Then the Glauber dynamics is applied at the critical temperature. We see the nucleation and the evolution of correlated spin domains in time (t = 0, 10, 100, 1000 from left to right). It is possible to show that in such a quench, the correlation length grows with time like [15]:

\xi(t) \sim t^{1/z_c} ,   (16)
where zc is the critical dynamical exponent (zc = 2.1665(12) [4] for the
2D Ising model). The time needed to complete thermalization at criticality
is therefore τth ∼ L^zc. In case of a subcritical quench, the system has to
choose between two ferromagnetic states of opposite magnetization. Again, the relaxation is slow because there is nucleation and growth of domains of opposite magnetization. We define the typical size of a domain at time t as Ld(t). The thermalization process involves the growth (coarsening) of these domains, until eventually one domain spans the whole system. Only then is equilibrium reached (in the low temperature phase, the expectation of the absolute value of the magnetization is nonzero). The motion of the domain
walls is mostly diffusive, Ld(t) ∼ t^{1/z}, with a dynamical exponent z = 2 [15].
The walls have to cover a distance ∼ L, so that the thermalization time scales as τth ∼ L^2. This time again diverges with system size. Starting from
an ordered state does not help for the critical quench (but it does help to
start in the ground state to thermalize the system at T < Tc ).
Once the system is thermalized, one has to be aware of another dynamical
effect: the correlation time. This is the time needed to perform sampling
between statistically uncorrelated configurations. In the high temperature
phase, the correlation time is equal to the thermalization time, up to some
factor close to unity. This is not surprising, as proper thermalization requires
the configuration to become uncorrelated from the initial state. Practically,
in all Monte Carlo simulations, one has to estimate τ at the sampling temperature in order to treat the error bars properly.
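As a minimal sketch (our addition; the self-consistent window cutoff is a common heuristic, not a prescription from these notes), the integrated correlation time can be estimated from a measured time series as follows:

```python
import numpy as np

def autocorrelation(x):
    """Normalized autocorrelation function C(t) of a time series x."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    c = np.correlate(x, x, mode='full')[x.size - 1:]   # lags t = 0, 1, ...
    return c / c[0]

def integrated_tau(x, window=5.0):
    """Integrated correlation time tau = 1/2 + sum_t C(t), summed for t < window*tau."""
    C = autocorrelation(x)
    tau = 0.5
    for t in range(1, C.size):
        if t > window * tau:       # stop once the noisy tail would dominate
            break
        tau += C[t]
    return tau
```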
In the low temperature phase, after thermalization, the magnetization
is either positive or negative, and stays like that over prolonged periods of
time. So-called magnetization reversals do occur now and then, but the
characteristic time between those increases exponentially with system size.
Figure 5: Snapshots of the evolution of the 2D Ising model of size 200 × 200 after a quench from a disordered state until equilibration at the critical temperature Tc with the Glauber dynamics. We see the nucleation and the growth of correlated domains. From left to right, t = 0, 10, 100, 1000 (expressed in Monte Carlo unit time after the quench). The thermalization is completed when the correlation length reaches its static value ξ ∼ L. Using Eq. (16), the thermalization time at criticality behaves like τth ∼ L^zc and therefore diverges with system size.
Because of the strict symmetry between the parts of phase space with positive
and negative magnetization, in practice one is not so much interested in
the time of magnetization reversals, but rather in the correlation time τ
within the up- or down-phase; and this time is some temperature-dependent
constant, irrespective of the system size provided it is significantly larger
than the correlation length.
Let us consider now the two-time spin-spin correlation functions in the
framework of dynamical scaling [16]. We will use for this purpose a continuous space, so that the spin Si on the site i is now denoted by S_{\vec r}, where \vec r is the position vector. Upon a dilatation with a scale factor b, the equilibrium correlation C(\vec r, t, |T − Tc|) = ⟨S_0(0) · S_{\vec r}(t)⟩ is assumed to satisfy the homogeneity relation:
C(\vec r, t, |T - T_c|) = b^{-\eta} \, C(\vec r / b, \; t / b^{z_c}, \; b^{1/\nu} |T - T_c|) ,   (17)

where η is the exponent of the static critical correlations; the scaling function includes all corrections to it. The characteristic time:
\tau \sim \xi^{z_c} \sim |T - T_c|^{-\nu z_c} ,   (19)
appears as the relaxation time of the system. Here we are interested only
in autocorrelation functions for which r = 0. Moreover, we expect an ex-
ponential decay of the scaling function C(t/τ ) in the paramagnetic phase.
Therefore, the autocorrelation function can generally be written at equilib-
rium as:
C(t, T) \sim \frac{e^{-t/\tau}}{t^{\eta/z_c}} .   (20)
The spin-spin autocorrelation function C(t, T) versus time is plotted in Fig. 6 (left) for the 2D Ising model of size N = 50 × 50. The different curves correspond to different inverse temperatures β = 0.35 to 0.39 (the critical inverse temperature is βc = (1/2) ln(1 + √2) ≈ 0.441). We observe an
increase of the autocorrelation time as the temperature comes closer to Tc .
The autocorrelation time can be obtained from a fit of the curve in the main
graph, assuming Eq.(20). The result is plotted versus |T − Tc | in the inset.
The numerics tend to the behavior of Eq.(19) as T goes to Tc .
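Such a fit can be sketched with scipy, assuming the form of Eq. (20) with the 2D Ising values η = 1/4 and zc ≈ 2.17 (a sketch, not necessarily the procedure used for the figure):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_tau(t, C, eta=0.25, zc=2.17):
    """Extract tau by fitting C(t) = a * exp(-t/tau) / t**(eta/zc), Eq. (20)."""
    model = lambda t, a, tau: a * np.exp(-t / tau) / t**(eta / zc)
    (a, tau), _ = curve_fit(model, t, C, p0=(1.0, 10.0))
    return tau
```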
Some other aspects of critical dynamics are interesting to study, for in-
stance, the time evolution of the equilibrium mean-square displacement of
the magnetization. It is defined as:

h(t) = \left\langle \left[ M(t_0 + t) - M(t_0) \right]^2 \right\rangle ,   (21)

where M is the total magnetization and the average is taken over the reference time t_0 in equilibrium.
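A direct estimator of h(t) from an equilibrium series of the total magnetization, sampled once per unit of Monte Carlo time, might look as follows (a sketch under these assumptions):

```python
import numpy as np

def msd_magnetization(M, t_max):
    """h(t) = <[M(t0 + t) - M(t0)]^2>, averaged over the reference time t0."""
    M = np.asarray(M, dtype=float)
    return np.array([np.mean((M[t:] - M[:M.size - t]) ** 2)
                     for t in range(1, t_max + 1)])
```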
At small time differences (t < 1), the dynamics consists of sparsely dis-
tributed proposed spin flips, each of which has a nonzero acceptance proba-
bility. Since these spin flips are uncorrelated and their number scales as L^D t, in the short-time regime (t ≲ 1), h(t) behaves diffusively:

h(t) \sim L^D \, t .   (22)
Therefore h(t) has to grow from h(t ≈ 1) ∼ L^D to h(t ∼ L^zc) ∼ L^{D+γ/ν}. Assuming a power-law behavior, it follows that:

h(t) \sim t^{\gamma/(\nu z_c)} .   (23)

Therefore, in this regime, we can assume the following form for h(t):

h(t) \sim L^{D+\gamma/\nu} \, \mathcal{F}(t/L^{z_c}) ,   (24)

where F is a scaling function with the limit F(x) = constant when x ≫ 1 and F(x) ∼ x^{γ/(νzc)} at intermediate times. We measured the function h(t)
as defined in Eq. (21) in simulations of the two-dimensional Ising model at the critical temperature for various system sizes. The scaling function F is plotted in Fig. 6 (right), using the exponents γ = 1.75, ν = 1 and zc ≈ 2.17. With increasing system size, the data become increasingly consistent
Figure 6: (left) Spin-spin autocorrelation function C(t) of the 2D Ising model (N = 50 × 50) at different β = J/(kB T) = 0.35, 0.36, 0.37, 0.38 and 0.39 (βc ≈ 0.44). The correlation time τ increases as T approaches Tc. Assuming Eq. (20), τ is extracted from a fit and plotted in the inset vs. |T − Tc|. (right) Scaling function h(t)/L^{2+γ/ν} of the mean-square deviation of the magnetization versus t/L^zc for the 2D Ising model at Tc for different lattice sizes (L = 100, 400, 600). At intermediate times, it displays an anomalous diffusion behavior compatible with h(t) ∼ t^{γ/(νzc)} = t^{0.81}. Inset: the autocorrelation CM(t) = ⟨|M(0)| · |M(t)|⟩ (L = 50, 100, 200) is compatible with a stretched exponential CM(t) ∼ exp[−(t/τ)^{γ/(νzc)}], explained by the behavior of h(t).
with a simple power-law behavior for intermediate times, between the early-time behavior (Eq. (22)) and the time of saturation ∼ L^zc. This power-law behavior corresponds to an instance of anomalous diffusion, i.e. a mean-square deviation growing as a power law with an exponent ≠ 1, compatible with:

h(t) \sim t^{\gamma/(\nu z_c)} \sim t^{0.81} .   (25)
Assuming Eq. (25), the magnetization autocorrelation function for intermediate times (i.e. between times of order unity and the correlation time, thus spanning many decades) is compatible with the first terms of the Taylor expansion of a stretched exponential:

C_M(t) \sim \exp\left[ -(t/\tau)^{\gamma/(\nu z_c)} \right] .   (26)

This conjecture compares well with the numerics for the correlation function in Figure 6 (right, in the inset). This shows that the dynamical critical exponent zc appears already at relatively small times 1 ≪ t ≪ L^zc.
8. Conclusion
In these lecture notes, we provide an introduction to Monte Carlo simu-
lations that are a way to produce a set of representative configurations of a
statistical system. We start with the basic principles: ergodicity and detailed
balance. In the next parts, we present several Monte Carlo algorithms. To il-
lustrate their functioning, we use the example of the Ising model. This model
is defined by scalar spins on a lattice that interact via nearest-neighbor inter-
actions. This is the paradigmatic system for second order phase transitions: a critical temperature separates a disordered phase at high temperature from an ordered phase at low temperature. These different regimes call for different Monte Carlo strategies. In the disordered phase, local algorithms such as Metropolis or Glauber are efficient. In the critical region, the appearance of long range correlations sets a computational challenge; it has been solved by the use of cluster algorithms such as the Wolff algorithm, which flips a whole cluster of correlated spins. Below the critical temperature, when the probability of a spin flip is low, it saves computational time to use the continuous time algorithm, which forces the system into a new configuration, with a jump in time drawn according to the transition probability. Finally, we describe an interesting algorithm based on an alternative
representation of the model in terms of graphs instead of spins. We end with
important considerations on the dynamics: thermalization and correlation
time.
Acknowledgements
We thank Christophe Chatelain for his careful reading of the manuscript
and the various collaborations that have largely inspired these notes. We
also thank Raoul Schram for stimulating discussions and the reading of the
manuscript. J-CW is supported by the Laboratory of Excellence Initiative
(Labex) NUMEV, OD by the Scientific Council of the University of Mont-
pellier 2. This work is part of the D-ITP consortium, a program of the
Netherlands Organisation for Scientific Research (NWO) that is funded by
the Dutch Ministry of Education, Culture and Science (OCW).
References
[1] Binney J J, Dowrick N J, Fisher A J & Newman M E The Theory of Critical Phenomena (Clarendon Press, Oxford, 1995).
[5] Wolff U (1989) Collective Monte Carlo Updating for Spin Systems Phys.
Rev. Lett. 62, 361.
[8] Du J, Zheng B & Wang J-S (2006) Dynamic critical exponents for
Swendsen-Wang and Wolff algorithms obtained by a nonequilibrium re-
laxation method J. Stat. Mech.: Theor. Exp., P05004.
[10] Bortz A B, Kalos M H & Lebowitz J L (1975) A new algorithm for Monte Carlo simulation of Ising spin systems J. Comp. Phys. 17, 10.
[12] Prokof’ev N & Svistunov B (2001) Worm algorithms for classical statis-
tical models Phys. Rev. Lett. 87, 160601.
[14] Berche B, Chatelain C, Dhall C, Kenna R, Low R & Walter J-C (2008)
Extended scaling in high dimensions J. Stat. Mech. P11010.
[15] Bray A J (1994) Theory of phase-ordering kinetics Adv. Phys. 43, 357.