The Ising Model: Indrek Mandre
Department of Physics
Indrek Mandre
(YAFM 081797)
Tallinn 2008
Substance        Formula   Force
--------------------------------------
Diamagnetic
  Water          H2O       -22
  Copper         Cu        -2.6
  Diamond        C         -16
  Graphite       C         -110
Paramagnetic
  Sodium         Na        20
  Nickel sulfate NiSO4     830
  Liquid oxygen  O2        7500 (90 K)
Ferromagnetic
  Iron           Fe        400000
  Magnetite      Fe3O4     120000
2 Paramagnetism and Diamagnetism1
An external uniform magnetic field applied on a loop of current will generate
torque
τ = m×B
where m = ISn̂ is the magnetic dipole moment of the loop of current. This torque
will try to align the dipole vector along the magnetic field. Because the magnetic
field is uniform the net force on the loop will be 0:
F = I ∮ (dl × B) = I (∮ dl) × B = 0.
In case of a nonuniform magnetic field there may be a net force on the current loop.
For an infinitesimal loop, with dipole moment m, in a field B, the force is
F = ∇ (m · B) .
In the Bohr model of an atom, electrons revolve around the nucleus. This circular
motion can be viewed as a small current loop and approximated as a steady current.
If the electron moves at speed v at radius R, then the period of the motion is
T = 2πR/v and the current is
I = e/T = ev/(2πR).
From this we get the orbital dipole moment of an atom:
m = −(1/2) evR n̂.
Like any other magnetic dipole, this one is subject to a torque (m × B) when the
atom is placed in a magnetic field. The orbits of electrons don’t tend to tilt very
much though and the contribution from this to the paramagnetism is small.
There is another more significant effect on the orbital motion: the electron
speeds up or slows down in an external magnetic field. This is due to the Lorentz
force −e (v × B). Assuming the magnetic field is perpendicular to the plane of the
orbit, the change in speed is
∆v = eRB/(2M).
This change in speed happens when the magnetic field is turned on or off. A change
in orbital speed means a change in the magnetic dipole moment:
∆m = −(e²R²/(4M)) B.
The change is in the negative direction of B. Now ordinarily all the electrons in
the matter are randomly oriented, and the orbital dipole moments cancel out. But
1 Based on [Griffiths].
Figure 1: Ferromagnetic domains [Griffiths].
in the presence of a magnetic field, each atom picks up a little “extra” dipole
moment, and these increments are all anti-parallel to the field. This will result in a net
magnetic field generated by the magnetic dipoles that opposes the applied mag-
netic field. This is the mechanism responsible for diamagnetism. It is a universal
phenomenon and affects all atoms. However, it is typically much weaker than para-
magnetism and is observed mainly in atoms with even numbers of electrons where
paramagnetism is usually absent.
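To get a sense of the magnitudes involved, the two formulas above can be evaluated numerically. The sketch below uses rough hydrogen-like values (a Bohr-radius orbit in a 1 T field); these numbers are illustrative assumptions, not taken from the text. The induced moment comes out several orders of magnitude below the Bohr magneton, which is why diamagnetism is such a weak effect.

```cpp
// Illustrative constants (assumed, not from the text): hydrogen-like orbit.
const double E_CHARGE = 1.602e-19;  // electron charge [C]
const double E_MASS   = 9.109e-31;  // electron mass [kg]
const double R_BOHR   = 5.29e-11;   // orbit radius [m] (Bohr radius)
const double MU_BOHR  = 9.274e-24;  // Bohr magneton [A m^2]

// Change in orbital speed when a field B is switched on: dv = eRB/(2M).
double delta_v(double B) { return E_CHARGE * R_BOHR * B / (2 * E_MASS); }

// Magnitude of the induced orbital dipole moment: |dm| = e^2 R^2 B / (4M).
double delta_m(double B)
{
    return E_CHARGE * E_CHARGE * R_BOHR * R_BOHR * B / (4 * E_MASS);
}
```

For B = 1 T the speed change is only a few metres per second and the induced moment is roughly six orders of magnitude smaller than a Bohr magneton.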
3 Ferromagnetism2
In ferromagnetism, the magnetic field generated by matter is caused by the spins of
unpaired electrons. Each of those spins “likes” to point in the same direction as its
neighbors. The reason for that is essentially quantum mechanical. The correlation
is so strong that it virtually aligns 100% of the unpaired electron spins.
The alignment usually occurs in relatively small patches, called domains (typi-
cal volume of about 10−3 mm3 ). Each domain contains billions of dipoles, all lined
up. For iron each atom in a domain has a moment of around 2.2 Bohr magnetons
[Schwarz]. But the domains themselves are randomly oriented, and because of this
random orientation the net magnetic field generated is usually zero; that is why an
ordinary piece of iron is not a permanent magnet. A sample of domains can be seen
in figure 1.
When placing a piece of iron into a strong magnetic field, the torque m × B
tends to align the dipoles parallel to the field. Since they like to stay parallel to
their neighbors, most of the dipoles will resist the torque. However, at the boundary
2 Based on [Griffiths].
between two domains, there are competing neighbors, and the torque will throw its
weight on the side of the domain most nearly parallel to the field; this domain will
win over some converts, at the expense of the less favorably oriented one. The
net effect of the magnetic field, then, is to move the domain boundaries. Domains
parallel to the field grow, and the others shrink. If the field is strong enough, one
domain takes over entirely, and the iron is said to be “saturated”.
When the field is turned off, there will be some return to randomly oriented
domains, but it is far from complete - there remains a preponderance of domains in
the original direction. This is the hysteresis.
One thing that can destroy the uniform alignment of the spins is random thermal
motion. Above a precise temperature called the Curie point3, the ferromagnetic
behavior abruptly changes into paramagnetic behavior. This abrupt change is known
as a second-order phase transition4.
5 Markov Chain
We can look at a system evolving from one state into another as a chain of states:
x0 → x1 → ... → xn. We can state the probability of moving from one state into
another (xn−1 → xn) as P(xn | xn−1, xn−2, ..., x0) — that is, the probability may depend
on all the previous states of the system.
3 Named after Pierre Curie (1859-1906), and refers to a characteristic property of a ferromagnetic
or piezoelectric material. The Curie point of iron is 768 ◦C.
4 Second-order phase transitions are continuous in the first derivative of the free energy but exhibit
a discontinuity in the second derivative.
The Markov process or Markov chain is a system where the probability of a
state moving to a specific next state (xn−1 → xn ) only depends on the previous state
xn−1 :
P(xn | xn−1, ..., x0) = P(xn | xn−1).
The probability of a specific state change from x0 to xn is the product of the
individual transition probabilities, P(x0 → x1 → ... → xn) = P(x1|x0) P(x2|x1) ... P(xn|xn−1).
The probability of a system moving from one state into another in n steps is
denoted as p_ij^(n).
A chain is irreducible if, and only if, every other state can be reached from
every state. If the chain is reducible, the sequence of states will fall into classes
with no transitions from one class into another.
An example of a system with just four states with transition probabilities arranged
into a stochastic matrix:

    ⎛ 1/2  1/4   0   1/4 ⎞
    ⎜  0   1/3  2/3   0  ⎟
    ⎜  0    1    0    0  ⎟
    ⎝ 1/2   0   1/2   0  ⎠
The probability of going from state 1 to 1 is 1/2, from state 1 to 2 it is 1/4, from
state 1 to 3 it is 0, and so on. This matrix is not irreducible: one cannot go from state
2 to state 1 or 4. Once the system has reached state 2 or 3 it is trapped.
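The trapping can be checked numerically: repeatedly multiplying an initial distribution concentrated on state 1 by the matrix drives all the probability mass into the closed class {2, 3}. A minimal sketch (the helper names are arbitrary):

```cpp
// One step of the chain: p_new = p * P (row vector times stochastic matrix).
void chain_step(const double P[4][4], double p[4])
{
    double q[4] = {0, 0, 0, 0};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            q[j] += p[i] * P[i][j];
    for (int j = 0; j < 4; j++) p[j] = q[j];
}

// Iterate the example matrix n times on a given initial distribution p.
void iterate_example(double p[4], int n)
{
    const double P[4][4] = {
        { 0.5, 0.25,  0.0,   0.25 },
        { 0.0, 1.0/3, 2.0/3, 0.0  },
        { 0.0, 1.0,   0.0,   0.0  },
        { 0.5, 0.0,   0.5,   0.0  },
    };
    for (int k = 0; k < n; k++) chain_step(P, p);
}
```

In the limit the distribution settles at (0, 3/5, 2/5, 0): states 1 and 4 are never revisited, while states 2 and 3 share all the mass.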
A state xi has period t > 1 if p_ii^(n) = 0 unless n = zt for an integer z, and t is the
largest integer with this property. A state is aperiodic if no such t > 1 exists.
Let f_ij^(n) denote the probability that in a process starting from xi the first entry
to xj occurs at the n-th step. Further, let

    f_ij^(0) = 0,
    f_ij = ∑_{n=1}^{∞} f_ij^(n),
    µ_i = ∑_{n=1}^{∞} n f_ii^(n).
Then f_ij is the probability that, starting from xi, the system will ever pass through
xj. In the case that f_ii = 1 the state xi is called persistent, and µ_i is termed the mean
recurrence time.
A state xi is called ergodic if it is aperiodic and persistent with a finite mean
recurrence time. A Markov chain with only ergodic elements is called ergodic.
Ergodicity of a system basically means that any state is accessible from any other
state. More strongly expressed, any state must be accessible from any other state
in a finite number of transitions.
An irreducible aperiodic Markov chain possesses an invariant distribution {uk} if,
and only if, it is ergodic. In this case uk > 0 for all k, and the absolute probabilities
tend to uk irrespective of the initial distribution.
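The connection between the invariant distribution and the mean recurrence time (the standard result µ_k = 1/u_k for ergodic chains) can be illustrated on a small two-state chain; the transition probabilities below are arbitrary example values, not from the text:

```cpp
// Power iteration on a small ergodic two-state chain. The invariant
// distribution u satisfies u = uP; the mean recurrence time of state k
// is then 1/u_k.
void invariant_2state(double u[2], int iters)
{
    const double P[2][2] = { { 0.9, 0.1 },
                             { 0.5, 0.5 } };
    u[0] = 1.0; u[1] = 0.0;  // any initial distribution converges
    for (int n = 0; n < iters; n++) {
        double q0 = u[0] * P[0][0] + u[1] * P[1][0];
        double q1 = u[0] * P[0][1] + u[1] * P[1][1];
        u[0] = q0; u[1] = q1;
    }
}
```

The iteration converges to u = (5/6, 1/6) regardless of the starting distribution, so the mean recurrence time of the second state is 1/u_1 = 6 steps.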
the search where P(x) is large. As the states are visited with probability proportional
to the distribution function, calculating the property we are interested in
reduces to
⟨A⟩ = (1/n) ∑_{k=1}^{n} A(xk)
where n is the number of steps taken.
The Metropolis algorithm is based on two ideas. Firstly, the search is not done
randomly but rather through an ergodic Markov chain, so that in theory every
possible state x could be visited. Using a Markov chain means the system is stepped
through a series of states that are in close proximity.
The second idea is to pick a transition function W (x → x′) that satisfies the
detailed balance equation

    P(x) W (x → x′) = P(x′) W (x′ → x),

which is a sufficient but not necessary condition for the Markov chain to converge
ultimately to the desired distribution.
As the initial state may be far off from the sought equilibrium where P(x)
is large, the simulation may have to go through a number of steps before
measurements can be taken. This equilibration is an essential part of the
algorithm.
To apply the Metropolis algorithm a transition function must be found that
satisfies the detailed balance equation. How this is done for the Ising model is
shown in the next section.
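These two ideas can be illustrated outside the Ising model with a toy target distribution. The sketch below (arbitrary unnormalized weights 1..4 and uniform proposals, all assumptions for illustration) uses the Metropolis acceptance rule min(1, P(x′)/P(x)), which satisfies detailed balance and never needs the normalization of P:

```cpp
#include <random>

// Metropolis sampling of a discrete distribution with unnormalized
// weights w[0..3]. Proposal: pick any state uniformly at random.
// Acceptance: min(1, w[next]/w[cur]) -- this satisfies detailed balance.
void metropolis_counts(long counts[4], long steps, unsigned seed)
{
    const double w[4] = { 1, 2, 3, 4 };  // unnormalized target weights
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> propose(0, 3);
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    for (int i = 0; i < 4; i++) counts[i] = 0;
    int cur = 0;
    for (long n = 0; n < steps; n++) {
        int next = propose(rng);
        if (unif(rng) < w[next] / w[cur])  // accept with prob min(1, ratio)
            cur = next;
        counts[cur]++;
    }
}
```

After many steps the empirical visit frequencies approach 0.1, 0.2, 0.3, 0.4 — the weights divided by their (never explicitly computed) sum.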
Figure 2: An example 2-dimensional Ising model spin configuration.
means that the spin lattice is closed and forms a torus. If the lattice is of sufficient
size the periodic boundary itself will usually not cause any problems.
Let a specific spin configuration be known as state x = (s1, s2, ..., sN). The
Hamiltonian (energy) of the system in state x is then:
H(x) = −J ∑_{⟨i,j⟩} si sj − µH ∑_i si
where µ denotes the magnetic moment of a spin, H is the external magnetic field
and J is the exchange coupling energy between neighboring cells. The sum ∑_{⟨i,j⟩} si sj
means that exchange energy is counted only between the neighbors of each spin.
If neighboring spins point in the same direction, then the energy contributed
is −J (assuming J is positive), but if they are anti-parallel, then +J. The system
generally wants to go to the lowest energy possible, and so parallel spins are favored.
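For concreteness, the Hamiltonian can be evaluated directly on a small one-dimensional ring, where each spin has two neighbors and each bond is counted once; J, µH and the spin values used below are arbitrary illustrative choices, not the parameters of the simulation described later:

```cpp
// Energy H(x) = -J * sum_<i,j> s_i s_j - muH * sum_i s_i on a 1D periodic
// ring of N spins (each +1 or -1); every bond is counted exactly once.
double ising_energy_ring(const int *s, int N, double J, double muH)
{
    double H = 0.0;
    for (int i = 0; i < N; i++) {
        H -= J * s[i] * s[(i + 1) % N];  // bond i -- i+1 (periodic)
        H -= muH * s[i];                 // external field term
    }
    return H;
}
```

With four spins all up, J = 1 and µH = 0.5, the four bonds contribute −4 and the field term −2, giving H = −6; flipping one spin turns two bonds from −J to +J and also reverses that spin's field energy.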
The probability for the system being in a specific state x represented by the
Hamiltonian H (x) is proportional to
P(x) ∝ e^{−H(x)/(kB T)}
where kB is the Boltzmann constant and T is the temperature. This means that the
system follows the Boltzmann energy distribution or in other words is a canonical
ensemble. Temperature thus has a disordering effect on the system, causing it to
deviate from the lowest possible energy.
Suppose we can change the system from state x to x′ by toggling the value of
spin si, so that x′ = (s1, s2, ..., −si, ..., sN). To apply the Metropolis algorithm we
need to develop a transition function. We can use a function where W (x → x′) = 1
whenever the energy decreases going from state x to x′, and otherwise

    W (x → x′) = e^{−∆H/(kB T)},

where ∆H = H(x′) − H(x) is the change in the energy of the system going from x to x′. It is easy to
verify that this satisfies the detailed balance equation for the Metropolis algorithm
and there is also no need to know the exact distribution function P(x) as common
terms are canceled out.
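A useful consequence of the Hamiltonian's form is that ∆H can be computed locally: flipping si only changes the terms that contain si, giving ∆H = 2J si ∑_nbr sj + 2µH si. The sketch below checks this shortcut against a full recomputation on a one-dimensional ring (arbitrary test values, for illustration only):

```cpp
// Full energy of a 1D periodic ring: H = -J sum_i s_i s_{i+1} - muH sum_i s_i.
double ring_energy(const int *s, int N, double J, double muH)
{
    double H = 0.0;
    for (int i = 0; i < N; i++)
        H += -J * s[i] * s[(i + 1) % N] - muH * s[i];
    return H;
}

// Energy change of flipping spin i: only the terms containing s_i change
// sign, so dH = 2*J*s_i*(sum of neighbours) + 2*muH*s_i.
double ring_delta_h(const int *s, int N, int i, double J, double muH)
{
    int nbr = s[(i + 1) % N] + s[(i + N - 1) % N];
    return 2.0 * J * s[i] * nbr + 2.0 * muH * s[i];
}
```

This local update is what makes each Metropolis step O(1) instead of O(N), and the same idea is used (via the precomputed probability map) in the implementation in appendix A.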
We can now describe a detailed Metropolis algorithm for the Ising model
[Landau, GiordNakan, Heermann]:
2. Initialize all the spins in the system x = (s1, s2, ..., sN). We can take all spins
up or completely random.
(a) For a given sweep, loop through all the spins si in sequence:
    i. Generate a trial configuration x′ by reversing the given spin’s
       direction: x′ = (s1, s2, ..., −si, ..., sN).
    ii. Calculate the energy H(x′).
    iii. If H(x′) ≤ H(x), accept the new configuration and set x = x′.
    iv. If H(x′) > H(x), accept with relative probability P = e^{−∆H/(kB T)}:
A. Choose a uniform random number R ∈ [0, 1].
        B. Set x = x′ if P ≥ R (accept), and keep x if P < R (reject).
(b) At the start perform a number of sweeps to reach the equilibrium.
(c) Once the equilibrium is reached record the new energy, magnetization,
and any other quantities of interest after each sweep.
The property we are interested in here is the magnetization. It can vary from −1 to
+1 and is simply the average of all the spins:

    m = (1/N) ∑_{k=1}^{N} sk
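The listed steps can be condensed into a minimal working sketch. It uses a one-dimensional ring instead of the three-dimensional lattice of the actual implementation, std::mt19937 instead of drand48, and sets J = kB = 1 with no external field; all of these are simplifying assumptions made here for illustration:

```cpp
#include <random>
#include <vector>
#include <cmath>
#include <cstddef>

// One Metropolis sweep over a 1D periodic ring at temperature T
// (J = kB = 1, no external field). Spins are +1/-1.
void sweep(std::vector<int> &s, double T, std::mt19937 &rng)
{
    const int N = (int)s.size();
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    for (int i = 0; i < N; i++) {
        int nbr = s[(i + 1) % N] + s[(i + N - 1) % N];
        double dH = 2.0 * s[i] * nbr;  // local energy change of a flip
        if (dH <= 0 || unif(rng) < std::exp(-dH / T))
            s[i] = -s[i];              // accept the trial flip
    }
}

// Magnetization m = (1/N) sum_k s_k.
double magnetization(const std::vector<int> &s)
{
    double sum = 0;
    for (std::size_t k = 0; k < s.size(); k++) sum += s[k];
    return sum / s.size();
}
```

At low temperature an aligned start survives many sweeps almost intact (flips cost e^{−∆H/T} ≈ 0), while at high temperature the magnetization quickly decays toward zero.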
8 Implementation
The Ising model was implemented in the C++ language on a Linux system using
GCC. The implementation is a three-dimensional Ising model with a periodic
boundary, that is, each spin has six neighbors to interact with. The following
constants and parameters were used in the simulations:
• Spin values {−1.0, 1.0}.
At the start of the simulation all the spins are set up or down depending on the initial
magnetic field. A number of steps are then taken to reach equilibrium, after which
sampling steps are taken during which different system properties are calculated.
At each step all the spins are visited in sequence and toggled based on the
Metropolis algorithm.
For schematics and visualizations the following tools were used: gnuplot, PovRay
and xfig. The source code can be found in appendix A. Additionally, a web site
with the code archive including a Makefile and visualization scripts has been set
up at http://www.mare.ee/indrek/ising/.
9 Results
The dependence of the simulation time on the lattice size LN can be seen in
figure 3. As expected, it scales as roughly LN³.
Figure 3: Simulation time [s] versus lattice size LN.
The main interest of this simulation is how the magnetization depends on temperature,
and where and how the second-order phase transition from ferromagnetism into
paramagnetism happens. This can be seen in figure 4 for different lattice sizes.
Figure 4: Absolute magnetization |m| versus temperature T for lattice sizes LN = 10, 20, 30.
The hysteresis curve is also interesting to look at. In this simulation the mag-
netic field at the start was 0, then increased to +5, then decreased to −5 and then
back to +5. The corresponding magnetization graphs can be seen in figure 5. At
temperature T = 0 the external magnetic field at the given level had no effect on
the magnetization direction. At temperature T = 2.0 we can see a clear hysteresis
curve. It seems the Ising model does allow for some hysteresis effect.
Figure 5: Magnetization hysteresis curves m = m(H) at T = 0.0, 2.0, 4.5 and 6.0.
Finally an attempt was made to visualize the 3D lattice itself by cutting a quarter
out of the lattice and visualizing the remainder using PovRay. The resulting
image is shown in figure 6.
Figure 6: 30×30×30 spin lattice with m = 0.58 at T = 4.3.
References
[Purcell] Edward M. Purcell, “Electricity and Magnetism”, second edition, 1985.
A C++ Code
// Copyright (C) 2008 Indrek Mandre <indrek@mare.ee>

#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <ctime>

#define JBOUND  0.5  // exchange bond energy
#define KB      1.0  // the Boltzmann constant
#define TAU     1.0  // time scale factor
#define RNDSEED 1    // random seed
class Grid
{
  struct Cell
  {
    char v;  // spin value, -1 or +1
    char nc; // neighbouring spins sum
  };
  int LN;
  double muH;
  Cell *_GRID;
  double TRMAP[14]; // transition probability map
  int ebs;          // energy bonds sum
  int sbalance;     // up/down spins sum
  double temp;      // temperature
public:
  Grid(int LN = 16, double temp = 0, double muH = 0)
    : LN(LN), _GRID(0) { setup(LN, temp, muH); }
  ~Grid() { delete[] _GRID; }

  // set up the simulation at temperature t
  void setup(int LN = 16, double t = 0, double muH = 0)
  {
    this->LN = LN;
    this->muH = muH;
    delete[] _GRID;
    _GRID = new Cell[LN*LN*LN];
    temp = t;
    sbalance = 0;
    ebs = 0;
    // set all spins to -1 or +1 depending on the field direction
    for ( int i = 0; i < LN; i++ )
      for ( int j = 0; j < LN; j++ )
        for ( int k = 0; k < LN; k++ )
          fget(i, j, k).v = muH < 0 ? -1 : 1;
    // calculate contributions from neighbours
    for ( int i = 0; i < LN; i++ )
      for ( int j = 0; j < LN; j++ )
        for ( int k = 0; k < LN; k++ )
          fget(i, j, k).nc =
            get(i - 1, j, k).v + get(i + 1, j, k).v +
            get(i, j - 1, k).v + get(i, j + 1, k).v +
            get(i, j, k - 1).v + get(i, j, k + 1).v;
    // calculate the energy bond sum
    for ( int i = 0; i < LN; i++ )
      for ( int j = 0; j < LN; j++ )
        for ( int k = 0; k < LN; k++ )
        {
          Cell &c = fget(i, j, k);
          ebs += c.v * c.nc;
          sbalance += c.v;
        }
    reprob();
  }
      // from the negative spin -1
      dH = -4 * (-JBOUND * ncv) - 2 * muH;
      prob = exp(-dH / (KB * temp)) / TAU;
      if ( prob > 1.0 / TAU ) prob = 1.0 / TAU;
      TRMAP[i + 1] = prob;
    }
  }
  // change the muH parameter
  void set_muH(double muH)
  {
    this->muH = muH;
    reprob();
  }
  inline void toggle(int plane, int row, int column)
  {
    Cell &c = get(plane, row, column);
    c.v *= -1;
    get(plane - 1, row, column).nc += 2 * c.v;
    get(plane + 1, row, column).nc += 2 * c.v;
    get(plane, row - 1, column).nc += 2 * c.v;
    get(plane, row + 1, column).nc += 2 * c.v;
    get(plane, row, column - 1).nc += 2 * c.v;
    get(plane, row, column + 1).nc += 2 * c.v;
    sbalance += 2 * c.v;
    ebs += 4 * c.v * c.nc;
  }
  // move simulation on by one simulation step
  void step()
  {
    for ( int i = 0; i < LN; i++ )
      for ( int j = 0; j < LN; j++ )
        for ( int k = 0; k < LN; k++ )
        {
          Cell &c = fget(i, j, k);
          if ( drand48() < TRMAP[c.v * c.nc + 6 +
                                 (c.v == -1 ? 1 : 0)] )
            toggle(i, j, k);
        }
  }
  // calculate the energy contribution from the given cell
  double energy(int plane, int row, int column)
  {
    Cell &c = get(plane, row, column);
    return -JBOUND * c.v * c.nc - muH * c.v;
  }

  inline int spin(int plane, int row, int column)
  {
    return get(plane, row, column).v;
  }

  double magnetization()
  {
    return (double)sbalance / (LN * LN * LN);
  }

  struct sample_data
  {
    double magnetization;
    double abs_magnetization;
  };
    for ( size_t i = 0; i < sampc; i++ )
    {
      step();
      m += magnetization();
      abs_m += fabs(magnetization());
    }
    out.magnetization = m / sampc;
    out.abs_magnetization = abs_m / sampc;
  }
};
static void compare_grid(int n)
{
  printf("Compare grid %d\n", n);
  char fn[128];
  sprintf(fn, "cgrid%d.txt", n);
  FILE *fp = fopen(fn, "w+");
  Grid *grid = new Grid();
  for ( double t = 3; t < 6.01;
        t += (fabs(t - 4.5) < 0.5) ? 0.025 : 0.05 )
  {
    Grid::sample_data data;
    grid->setup(n, t);
    grid->sample(data, 1000, 1000);
    fprintf(fp, "%f %f\n", t, data.abs_magnetization);
  }
  delete grid;
  fclose(fp);
}
static void compare_hm(double temp)
{
  printf("M=M(H) / TEMP = %.1f\n", temp);
  char fn[128];
  sprintf(fn, "chm%.1f.txt", temp);
  FILE *fp = fopen(fn, "w+");
  double h = 0;
  Grid *grid = new Grid(20, temp, h);
  Grid::sample_data data;
  grid->sample(data, 1000, 0);
  for ( ; h <= 5.01; h += 0.25 )
  {
    grid->set_muH(h);
    grid->sample(data, 200, 200);
    fprintf(fp, "%f %f\n", h, data.magnetization);
  }
  for ( ; h >= -5.01; h -= 0.25 )
  {
    grid->set_muH(h);
    grid->sample(data, 200, 200);
    fprintf(fp, "%f %f\n", h, data.magnetization);
  }
  for ( ; h <= 5.01; h += 0.25 )
  {
    grid->set_muH(h);
    grid->sample(data, 200, 200);
    fprintf(fp, "%f %f\n", h, data.magnetization);
  }
  delete grid;
  fclose(fp);
}
static void running_time(int n1, int n2)
{
  printf("Running time...\n");
  FILE *fp = fopen("rtime.txt", "w+");
  Grid *grid = new Grid();
  for ( int i = n1; i <= n2; i++ )
  {
    printf("running_time: %d\n", i);
    Grid::sample_data data;
    grid->setup(i, 0);
    clock_t t1, t2;
    t1 = clock();
    grid->sample(data, 100000, 0);
    t2 = clock();
    fprintf(fp, "%d %f\n", i, (double)(t2 - t1) / 1000000.0);
  }
  delete grid;
  fclose(fp);
}
static void slice(double temp)
{
  printf("Running slice...\n");
  FILE *fp = fopen("slice.inc", "w+");
  Grid *grid = new Grid(30, temp);
  Grid::sample_data data;
  grid->sample(data, 2000, 1000);
  printf("Magnetization: %f\n", data.magnetization);
  for ( int i = 0; i < 30; i++ )
    for ( int j = 0; j < 30; j++ )
      for ( int k = 0; k < 30; k++ )
      {
        if ( k <= 15 || j <= 15 )
          fprintf(fp, "sphere {<%d,%d,%d>,0.7 texture {%s}}\n",
                  i, j, k, grid->spin(i, j, k) < 0 ? "SD" : "SU");
      }
  delete grid;
  fclose(fp);
}
int main()
{
  srand48(RNDSEED);
  slice(4.3);
  compare_hm(0.0);
  compare_hm(2.0);
  compare_hm(4.5);
  compare_hm(6.0);
  running_time(5, 20);
  compare_grid(10);
  compare_grid(20);
  compare_grid(30);
  return 0;
}