arXiv:1506.02567
Badis Ydri
Department of Physics, Faculty of Sciences, BM Annaba University,
Annaba, Algeria.
March 16, 2016
Abstract
This book is divided into two parts. In the first part we give an elementary introduc-
tion to computational physics consisting of 21 simulations which originated from a formal
course of lectures and laboratory simulations delivered since 2010 to physics students at
Annaba University. The second part is much more advanced and deals with the problem
of how to set up working Monte Carlo simulations of matrix field theories which involve fi-
nite dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy
spaces and matrix geometry. The study of matrix field theory in its own right has also
become very important to the proper understanding of all noncommutative, fuzzy and
matrix phenomena. The second part, which consists of 9 simulations, was delivered infor-
mally to doctoral students who are working on various problems in matrix field theory.
Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of the first part is added at the end of the book.
Contents
Introductory Remarks 8
Introducing Computational Physics . . . . . . . . . . . . . . . . . . . . . . . . . 8
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Codes and Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Matrix Field Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Appendices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
5 Chaotic Pendulum 52
5.1 Equation of Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.2 Numerical Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.2.1 Euler-Cromer Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 54
5.2.2 Runge-Kutta Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 55
5.3 Elements of Chaos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.3.1 Butterfly Effect: Sensitivity to Initial Conditions . . . . . . . . . . 56
5.3.2 Poincare Section and Attractors . . . . . . . . . . . . . . . . . . . 57
5.3.3 Period-Doubling Bifurcations . . . . . . . . . . . . . . . . . . . . . 57
5.3.4 Feigenbaum Ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
5.3.5 Spontaneous Symmetry Breaking . . . . . . . . . . . . . . . . . . . 58
5.4 Simulation 8: The Butterfly Effect . . . . . . . . . . . . . . . . . . . . . . 59
5.5 Simulation 9: Poincare Sections . . . . . . . . . . . . . . . . . . . . . . . . 59
5.6 Simulation 10: Period Doubling . . . . . . . . . . . . . . . . . . . . . . . . 61
5.7 Simulation 11: Bifurcation Diagrams . . . . . . . . . . . . . . . . . . . . . 61
CP and MFT, B.Ydri 4
6 Molecular Dynamics 64
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
6.2 The Lennard-Jones Potential . . . . . . . . . . . . . . . . . . . . . . . . . 64
6.3 Units, Boundary Conditions and Verlet Algorithm . . . . . . . . . . . . . 66
6.4 Some Physical Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 68
6.4.1 Dilute Gas and Maxwell Distribution . . . . . . . . . . . . . . . . . 68
6.4.2 The Melting Transition . . . . . . . . . . . . . . . . . . . . . . . . 69
6.5 Simulation 12: Maxwell Distribution . . . . . . . . . . . . . . . . . . . . . 69
6.6 Simulation 13: Melting Transition . . . . . . . . . . . . . . . . . . . . . . 70
9 Codes 237
9.1 metropolis-ym.f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
9.2 hybrid-ym.f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
9.3 hybrid-scalar-fuzzy.f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
9.4 phi-four-on-lattice.f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
9.5 metropolis-scalar-multitrace.f . . . . . . . . . . . . . . . . . . . . . . . . . . 268
9.6 remez.f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
9.7 conjugate-gradient.f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
9.8 hybrid-supersymmetric-ym.f . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
9.9 u-one-on-the-lattice.f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
developing and writing all our codes in a high-level compiled language without calling any external libraries. As our programming language we will use Fortran 77 under the Linux operating system. We adopt exclusively the Ubuntu distribution of Linux. We will use the Fortran compilers f77 and gfortran. As an editor we will use mostly Emacs and sometimes Gedit and Nano, while for graphics we will use mostly Gnuplot.
References
The main references which we have followed in developing the first part of this book
include the following items:
1. N. J. Giordano and H. Nakanishi, Computational Physics (2nd edition), Pearson/Prentice Hall (2006).
2. H. Gould, J. Tobochnik and W. Christian, An Introduction to Computer Simulation Methods: Applications to Physical Systems (3rd edition), Addison-Wesley (2006).
3. R. H. Landau, M. J. Paez and C. C. Bordeianu, Computational Physics: Problem Solving with Computers (2nd edition), John Wiley and Sons (2007).
4. R. Fitzpatrick, Introduction to Computational Physics, http://farside.ph.utexas.edu/teaching/329/329.html.
5. K. Anagnostopoulos, Computational Physics: A Practical Introduction to Computational Physics and Scientific Computing, Lulu.com (2014).
6. J. M. Thijssen, Computational Physics, Cambridge University Press (1999).
7. M. Hjorth-Jensen, Computational Physics, CreateSpace Publishing (2015).
8. P. L. DeVries, A First Course in Computational Physics (2nd edition), Jones and Bartlett Publishers (2010).
even include matrix regularizations of supersymmetry, string theory and M-theory. These matrix regularizations necessarily employ finite dimensional matrix algebras, so that the problems become amenable and accessible to Monte Carlo methods.
The matrix regulator should be contrasted with the well-established lattice regulator, with advantages and disadvantages which are discussed in their proper places in the literature. However, we note that only 5 of the 7 simulations considered in this part of the book use the matrix regulator, whereas the other 2, closely related, simulations use the usual lattice regulator. This part also contains a special chapter on the Remez and conjugate gradient algorithms, which are required for the simulation of dynamical fermions.
The study of matrix field theory in its own right, and not merely as a regulator, has
also become very important to the proper understanding of all noncommutative, fuzzy
and matrix phenomena. Naturally, therefore, the mathematical, physical and numerical
aspects, required for the proper study of matrix field theory, which are found in this part
of the book are quite advanced by comparison with what is found in the first part of the
book.
The set of references for each topic consists mainly of research articles and is included
at the end of each chapter. Sample numerical calculations are also included as a section
or several sections in each chapter. Some of these solutions are quite detailed whereas
others are brief. The relevant Fortran codes for this part of the book are collected in the
last chapter for convenience and completeness. These codes are, of course, provided as is
and no warranty should be assumed.
Appendices
We attach two appendices, relevant to the first part of this book, at the end. In the first appendix we discuss the floating point representation of numbers, machine precision, and roundoff and systematic errors. In the second appendix we give an executive summary of the simulations of Part I translated into Arabic.
Acknowledgments
Firstly, I would like to thank both the former head and the current head of the physics department, professor M.Benchihab and professor A.Chibani, for their critical help in formally launching the computational physics course at BM Annaba University during the academic year 2009-2010, and thus making the whole experience possible. This three-semester course, based on the first part of this book, has since become a fixture of the physics curriculum at both the Licence (Bachelor) and Master levels. Secondly, I should also thank doctor A.Bouchareb and doctor R.Chemam, who have helped in a crucial way with the actual teaching of the course, especially the laboratory simulations, since the beginning. Lastly, I would like to thank my doctoral students and doctor A.Bouchareb for their patience and contributions during the development of the second part of this book in the weekly informal meetings we have organized for this purpose.
Part I
Introduction to Computational
Physics
Chapter 1
Euler Algorithm
$y(x_0) = y_0$. (1.2)

We solve for the function $y = y(x)$ in the unit $x$-interval starting from $x_0$. We make the $x$-interval discretization

$x_n = x_0 + n\Delta x \, , \quad n = 0, 1, \ldots$ (1.3)
The Euler algorithm is one of the oldest known numerical recipes. It consists in replacing the function $y(x)$ in the interval $[x_n, x_{n+1}]$ by the straight line connecting the points $(x_n, y_n)$ and $(x_{n+1}, y_{n+1})$. This comes from the definition of the derivative at the point $x = x_n$ given by

$\frac{y_{n+1} - y_n}{x_{n+1} - x_n} = f(x_n, y_n)$. (1.4)
This means that we replace the above first order differential equation by the finite difference equation

$y_{n+1} \simeq y_n + \Delta x \, f(x_n, y_n)$. (1.5)

This is only an approximation. The truncation error is given by the next term in the Taylor expansion of the function $y(x)$, which is given by

$y_{n+1} \simeq y_n + \Delta x \, f(x_n, y_n) + \frac{1}{2}\Delta x^2 \frac{df(x,y)}{dx}\Big|_{x=x_n} + \ldots$ (1.6)
The error then reads

$\frac{1}{2}(\Delta x)^2 \frac{df(x,y)}{dx}\Big|_{x=x_n}$. (1.7)
The error per step is therefore proportional to $(\Delta x)^2$. In a unit interval we will perform $N = 1/\Delta x$ steps. The total systematic error is therefore proportional to

$N(\Delta x)^2 = \frac{1}{N}$. (1.8)
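This $1/N$ scaling is easy to verify empirically. The book's codes are all written in Fortran 77; the short Python sketch below is my own illustration, using the test equation $dy/dx = y$, $y(0) = 1$ (not taken from the text), whose exact value at $x = 1$ is $e$. Doubling $N$ roughly halves the total error.

```python
import math

def euler_solve(f, y0, n):
    """Integrate dy/dx = f(x, y) over the unit interval with n Euler steps."""
    dx = 1.0 / n
    x, y = 0.0, y0
    for _ in range(n):
        y += dx * f(x, y)
        x += dx
    return y

# dy/dx = y with y(0) = 1 has the exact value y(1) = e.
err = {n: abs(euler_solve(lambda x, y: y, 1.0, n) - math.e) for n in (100, 200, 400)}
for n in (100, 200):
    print(n, err[n] / err[2 * n])   # ratios close to 2: the total error goes like 1/N
```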
In other words the probability of decay per unit time, given by $(dN(t)/N(t))/dt$, is a constant which we denote $-1/\tau$. The minus sign is due to the fact that $dN(t)$ is negative, since the number of particles decreases with time. We write

$\frac{dN(t)}{dt} = -\frac{N(t)}{\tau}$. (1.10)

The solution of this first order differential equation is given by a simple exponential function, viz

$N(t) = N_0 e^{-t/\tau}$. (1.11)

The number $N_0$ is the number of particles at time $t = 0$. The time $\tau$ is called the mean lifetime. It is the average time for decay. For uranium 235 the mean lifetime is around $10^9$ years.
The goal now is to obtain an approximate numerical solution to the problem of ra-
dioactivity using the Euler algorithm. In this particular case we can compare to an exact
solution given by the exponential decay law (1.11). We start evidently from the Taylors
expansion
$N(t + \Delta t) = N(t) + \Delta t \frac{dN}{dt} + \frac{1}{2}(\Delta t)^2 \frac{d^2N}{dt^2} + \ldots$ (1.12)

We get in the limit $\Delta t \rightarrow 0$

$\frac{dN}{dt} = \lim_{\Delta t \to 0} \frac{N(t + \Delta t) - N(t)}{\Delta t}$. (1.13)
We take $\Delta t$ small but non-zero. In this case we obtain the approximation

$\frac{dN}{dt} \simeq \frac{N(t + \Delta t) - N(t)}{\Delta t}$. (1.14)

Equivalently

$N(t + \Delta t) \simeq N(t) + \Delta t \frac{dN}{dt}$. (1.15)

By using (1.10) we get

$N(t + \Delta t) \simeq N(t) - \Delta t \frac{N(t)}{\tau}$. (1.16)
We will start from the number of particles at time $t = 0$, given by $N(0) = N_0$, which is known. We substitute $t = 0$ in (1.16) to obtain $N(\Delta t) \equiv N(1)$ as a function of $N(0)$. Next the value $N(\Delta t)$ can be used in equation (1.16) to get $N(2\Delta t) \equiv N(2)$, etc. We are thus led to the time discretization

$t \equiv t(i) = i\Delta t \, , \quad i = 0, \ldots, N.$ (1.17)

In other words

$N(t) \equiv N(i).$ (1.18)

The integer $N$ determines the total time interval $T = N\Delta t$. The numerical solution (1.16) can be rewritten as

$N(i+1) = N(i) - \Delta t \frac{N(i)}{\tau} \, , \quad i = 0, \ldots, N.$ (1.19)
This is the Euler algorithm for radioactive decay. For convenience we shift the integer $i$ so that the above equation takes the form

$N(i) = N(i-1) - \Delta t \frac{N(i-1)}{\tau} \, , \quad i = 1, \ldots, N+1.$ (1.20)

We introduce $\hat N(i) = N(i-1)$, i.e. $\hat N(1) = N(0) = N_0$. We get

$\hat N(i+1) = \hat N(i) - \Delta t \frac{\hat N(i)}{\tau} \, , \quad i = 1, \ldots, N+1.$ (1.21)

The corresponding times are

$\hat t(i+1) = i\Delta t \, , \quad i = 1, \ldots, N+1.$ (1.22)
program radioactivity
c here is the code
return
end

We have chosen the name radioactivity for our program. The c in the second line indicates that the sentence "here is the code" is only a comment and not a part of the code.
After the program statement come the declaration statements. We state the variables and their types which are used in the program. In Fortran we have the integer type for integer variables and the double precision type for real variables. In the case of (1.21) the variables $\hat N(i)$, $\hat t(i)$, $\tau$, $\Delta t$ and $N_0$ are real numbers, while the variables $i$ and $N$ are integers.
An array A of dimension K is an ordered list of K variables of a given type, called the elements of the array and denoted A(1), A(2), ..., A(K). In our above example $\hat N(i)$ and $\hat t(i)$ are real arrays of dimension $N+1$. We declare that $\hat N(i)$ and $\hat t(i)$ are real for all $i = 1, \ldots, N+1$ by writing $\hat N(1:N+1)$ and $\hat t(1:N+1)$.
Since an array is declared at the beginning of the program it must have a fixed size. In other words the upper limit must be a constant and not a variable. In Fortran a constant is declared with a parameter statement. In our above case the upper limit is $N+1$ and hence $N$ must be declared in a parameter statement.
In the Fortran code we choose to use the notation $A = \hat N$, $A0 = N_0$, $time = \hat t$, $Delta = \Delta t$ and $tau = \tau$. By putting all declarations together we get the following preliminary lines of code
of code
program radioactivity
integer i,N
parameter (N=100)
doubleprecision A(1:N+1),A0,time(1:N+1),Delta,tau
return
end
The input of the computation in our case is obviously given by the parameters $N_0$, $\tau$, $\Delta t$ and $N$.
For the radioactivity problem the main part of the code consists of equations (1.21) and (1.22). We start with the known quantities $\hat N(1) = N_0$ at $\hat t(1) = 0$ and generate, via the successive use of (1.21) and (1.22), $\hat N(i)$ and $\hat t(i)$ for all $i > 1$. This will be coded using a do loop. It begins with a do statement and ends with an enddo statement. We may also indicate a step size.
The output of the computation can be saved to a file using a write statement inside the do loop. In our case the output is the number of particles $\hat N(i)$ and the time $\hat t(i)$. The write statement reads explicitly

write(10,*) $\hat t(i)$, $\hat N(i)$
program radioactivity
integer i,N
parameter (N=100)
doubleprecision A(1:N+1),A0,time(1:N+1),Delta,tau
parameter (A0=1000.0d0,Delta=0.01d0,tau=1.0d0)
A(1)=A0
time(1)=0.0d0
c the loop stops at i=N so that A(i+1) stays within the declared bounds 1:N+1
do i=1,N,1
A(i+1)=A(i)-Delta*A(i)/tau
time(i+1)=i*Delta
write(10,*) time(i+1),A(i+1)
enddo
return
end
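As a cross-check of the program above (which, like all the book's codes, is written in Fortran 77), here is the same iteration transcribed into Python by me, with the same parameters, compared against the exact decay law (1.11):

```python
import math

# Same parameters as the Fortran program above.
N0, Delta, tau, N = 1000.0, 0.01, 1.0, 100

A = [N0]        # A[i-1] plays the role of the Fortran array element A(i)
time = [0.0]
for i in range(1, N + 1):
    A.append(A[-1] - Delta * A[-1] / tau)
    time.append(i * Delta)

exact = N0 * math.exp(-time[-1] / tau)
print(time[-1], A[-1], exact)   # Euler estimate vs exact law at t = 1
```

The Euler estimate differs from the exact value by a few particles out of a thousand, consistent with the error being of order $\Delta t$.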
the athlete will avoid the use of an explicit formula for $F$. Multiplying the above equation by $v$ we obtain

$\frac{dE}{dt} = P$. (1.24)

$E$ is the kinetic energy and $P$ is the power, viz

$E = \frac{1}{2}mv^2 \, , \quad P = Fv.$ (1.25)
Experimentally we find that the output of well-trained athletes is around $P = 400$ watts over periods of 1 h. The above equation can also be rewritten as

$\frac{dv^2}{dt} = \frac{2P}{m}$. (1.26)

For $P$ constant we get the solution

$v^2 = \frac{2P}{m}t + v_0^2$. (1.27)
We remark the unphysical effect that $v \rightarrow \infty$ as $t \rightarrow \infty$. This is due to the absence of the effect of friction, and in particular air resistance.

The most important form of friction is air resistance. The force due to air resistance (the drag force) is

$F_{\rm drag} = -B_1 v - B_2 v^2$. (1.28)

At small velocities the first term dominates, whereas at large velocities it is the second term that dominates. For very small velocities the dependence on $v$ given by $F_{\rm drag} = -B_1 v$ is known as Stokes' law. For reasonable velocities the drag force is dominated by the second term, i.e. it is given for most objects by

$F_{\rm drag} = -B_2 v^2$. (1.29)
$B_2 = C\rho A$. (1.32)
Taking into account the force due to air resistance we find that Newton's law becomes

$m\frac{dv}{dt} = F + F_{\rm drag}$. (1.34)

Equivalently

$\frac{dv}{dt} = \frac{P}{mv} - \frac{C\rho A v^2}{m}$. (1.35)
It is not obvious that this equation can be solved exactly in any easy way. The Euler algorithm gives the approximate solution

$v(i+1) = v(i) + \Delta t \frac{dv}{dt}(i)$. (1.36)

In other words

$v(i+1) = v(i) + \Delta t \left( \frac{P}{m v(i)} - \frac{C\rho A v^2(i)}{m} \right) \, , \quad i = 0, \ldots, N.$ (1.37)

This can also be put in the form (with $\hat v(i) = v(i-1)$)

$\hat v(i+1) = \hat v(i) + \Delta t \left( \frac{P}{m \hat v(i)} - \frac{C\rho A \hat v^2(i)}{m} \right) \, , \quad i = 1, \ldots, N+1.$ (1.38)
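The behaviour of the iteration (1.37) can be sketched in a few lines of Python (the book's own codes are in Fortran 77, and all parameter values below are illustrative choices of mine, not taken from the text). The speed no longer diverges: it settles at the terminal value where the drag power $C\rho A v^3$ balances the input power $P$.

```python
# Parameter values here are illustrative choices of mine, not from the book.
P, m = 400.0, 70.0           # power (watts) and mass (kg)
C, rho, A = 0.5, 1.2, 0.33   # drag coefficient, air density (kg/m^3), area (m^2)
dt, steps = 0.1, 2000        # 200 seconds of motion

v = 4.0                      # nonzero initial speed: the equation is singular at v = 0
for _ in range(steps):
    v += dt * (P / (m * v) - C * rho * A * v ** 2 / m)

# The speed settles where the drag power C*rho*A*v^3 balances the input power P.
v_terminal = (P / (C * rho * A)) ** (1.0 / 3.0)
print(v, v_terminal)
```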
$m\frac{d\vec v}{dt} = \vec F + \vec F_{\rm drag} = m\vec g - B_2 v^2 \frac{\vec v}{v} = m\vec g - B_2 v \vec v$. (1.40)

The goal is to determine the position of the projectile and hence one must solve the two equations

$\frac{d\vec x}{dt} = \vec v$. (1.41)

$m\frac{d\vec v}{dt} = m\vec g - B_2 v \vec v$. (1.42)
In components (the horizontal axis is $x$ and the vertical axis is $y$) we have 4 equations of motion given by

$\frac{dx}{dt} = v_x$. (1.43)

$m\frac{dv_x}{dt} = -B_2 v v_x$. (1.44)

$\frac{dy}{dt} = v_y$. (1.45)

$m\frac{dv_y}{dt} = -mg - B_2 v v_y$. (1.46)

We recall the constraint

$v = \sqrt{v_x^2 + v_y^2}$. (1.47)
The numerical approach we will employ in order to solve the 4 equations of motion (1.43)-(1.46) together with (1.47) consists in using the Euler algorithm. This yields the approximate solution given by the equations

$x(i+1) = x(i) + \Delta t \, v_x(i)$. (1.48)

$v_x(i+1) = v_x(i) - \Delta t \frac{B_2 v(i) v_x(i)}{m}$. (1.49)

$y(i+1) = y(i) + \Delta t \, v_y(i)$. (1.50)

$v_y(i+1) = v_y(i) - \Delta t \, g - \Delta t \frac{B_2 v(i) v_y(i)}{m}$. (1.51)

The constraint is

$v(i) = \sqrt{v_x(i)^2 + v_y(i)^2}$. (1.52)

In the above equations the index $i$ is such that $i = 0, \ldots, N$. The initial position and velocity are given, i.e. $x(0)$, $y(0)$, $v_x(0)$ and $v_y(0)$ are known.
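The scheme (1.48)-(1.52) can be sketched as follows (a Python transcription of mine, not one of the book's Fortran codes; $B_2/m$, $g$ and the initial conditions are those used in the projectile exercise of this chapter, while the time step is my own choice):

```python
import math

# B2/m, g and the initial conditions follow the projectile exercise;
# the time step is my own choice.
B2_over_m = 0.00004          # 1/m
g = 9.8                      # m/s^2
dt = 0.01                    # s
v0, theta = 700.0, math.radians(30.0)

x, y = 0.0, 0.0
vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)

while y >= 0.0:              # integrate until the projectile returns to the ground
    v = math.sqrt(vx ** 2 + vy ** 2)
    x += dt * vx
    y += dt * vy
    vx -= dt * B2_over_m * v * vx
    vy -= dt * (g + B2_over_m * v * vy)

print(x)                     # the range with air resistance
```

The computed range is well below the drag-free value $v_0^2 \sin 2\theta / g$, as expected.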
$0 = -mg\cos\theta + T$. (1.54)

The constants $\theta_0$ and $\phi$ depend on the initial displacement and velocity of the pendulum. The frequency is independent of the mass $m$ and the amplitude of the motion, and depends only on the length $l$ of the string.
$\frac{d\omega}{dt} = -\frac{g}{l}\theta$. (1.60)

We use the definition of a derivative of a function, viz

$\frac{df}{dt} = \frac{f(t + \Delta t) - f(t)}{\Delta t} \, , \quad \Delta t \rightarrow 0.$ (1.61)
In other words, we use the time discretization $t \equiv t(i) = i\Delta t$. The integer $N$ determines the total time interval $T = N\Delta t$. The above numerical solution can be rewritten as

$\omega(i+1) = \omega(i) - \frac{g}{l}\theta(i)\Delta t$
$\theta(i+1) = \theta(i) + \omega(i)\Delta t$. (1.65)

We shift the integer $i$ such that it takes values in the range $[1, N+1]$. We obtain

$\omega(i) = \omega(i-1) - \frac{g}{l}\theta(i-1)\Delta t$
$\theta(i) = \theta(i-1) + \omega(i-1)\Delta t$. (1.66)
By using the values of $\theta$ and $\omega$ at time $i$ we calculate the corresponding values at time $i+1$. The initial angle and angular velocity $\hat\theta(1) = \theta(0)$ and $\hat\omega(1) = \omega(0)$ are known. This process will be repeated until the functions $\theta$ and $\omega$ are determined for all times.

$\hat\omega(i+1) = \hat\omega(i) - \frac{g}{l}\hat\theta(i)\Delta t$
$\hat\theta(i+1) = \hat\theta(i) + \hat\omega(i+1)\Delta t$. (1.68)
The error can be computed as follows. From these two equations we get

$\hat\theta(i+1) = \hat\theta(i) + \hat\omega(i)\Delta t - \frac{g}{l}\hat\theta(i)(\Delta t)^2 = \hat\theta(i) + \frac{d\hat\theta}{dt}\Big|_i \Delta t + \frac{d^2\hat\theta}{dt^2}\Big|_i (\Delta t)^2$. (1.69)

In other words the error per step is still of the order of $\Delta t^2$. However the Euler-Cromer algorithm does better than the Euler algorithm with periodic motion. Indeed at each step $i$ the energy conservation condition reads

$E_{i+1} = E_i + \frac{g}{2l}\Big(\omega_i^2 - \frac{g}{l}\theta_i^2\Big)\Delta t^2$. (1.70)
The energy of the simple pendulum is of course given by

$E_i = \frac{1}{2}\omega_i^2 + \frac{g}{2l}\theta_i^2$. (1.71)
The error at each step is still proportional to $\Delta t^2$, as in the Euler algorithm. However the coefficient is precisely equal to the difference between the values of the kinetic energy and the potential energy at the step $i$. Thus the accumulated error, which is obtained by summing over all steps, vanishes, since the average kinetic energy is equal to the average potential energy. In the Euler algorithm the coefficient is actually equal to the sum of the kinetic and potential energies, and as a consequence no cancellation can occur.
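This cancellation is striking when seen numerically. The Python sketch below (my own illustration; the book's codes are Fortran 77, and the values of $g/l$, $\Delta t$ and the initial conditions are assumptions of mine) runs both iterations from the same initial state and compares the relative energy drift:

```python
# The parameter values are illustrative; both methods start from the same state.
g_over_l = 9.8               # g/l in 1/s^2
dt, steps = 0.01, 10000      # 100 seconds of motion

def energy(theta, omega):
    return 0.5 * omega ** 2 + 0.5 * g_over_l * theta ** 2

theta_e = theta_c = 0.1
omega_e = omega_c = 0.0
E0 = energy(theta_e, omega_e)

for _ in range(steps):
    # Euler: both updates use the old values
    theta_e, omega_e = (theta_e + dt * omega_e,
                        omega_e - dt * g_over_l * theta_e)
    # Euler-Cromer: the angle update uses the *updated* angular velocity
    omega_c -= dt * g_over_l * theta_c
    theta_c += dt * omega_c

drift_euler = abs(energy(theta_e, omega_e) - E0) / E0
drift_cromer = abs(energy(theta_c, omega_c) - E0) / E0
print(drift_euler, drift_cromer)   # Euler blows up, Euler-Cromer stays bounded
```

The Euler energy grows by orders of magnitude, while the Euler-Cromer energy merely oscillates within a few percent of its initial value.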
$\theta(t_i - \Delta t) = \theta(t_i) - \frac{d\theta}{dt}\Big|_{t_i}\Delta t + \frac{1}{2}(\Delta t)^2\frac{d^2\theta}{dt^2}\Big|_{t_i} - \frac{1}{6}(\Delta t)^3\frac{d^3\theta}{dt^3}\Big|_{t_i} + \ldots$ (1.73)

Adding these expressions we get

$\theta(t_i + \Delta t) = 2\theta(t_i) - \theta(t_i - \Delta t) + (\Delta t)^2\frac{d^2\theta}{dt^2}\Big|_{t_i} + O(\Delta t^4)$. (1.74)

We write this as

$\theta_{i+1} = 2\theta_i - \theta_{i-1} - \frac{g}{l}(\Delta t)^2\theta_i$. (1.75)
This is the Verlet algorithm for the harmonic oscillator. First we remark that the error is proportional to $\Delta t^4$, which is smaller than the errors in the Euler, Euler-Cromer (and even the second-order Runge-Kutta) methods, so this method is much more accurate. Secondly, in this method we do not need to calculate the angular velocity $\omega = d\theta/dt$. Thirdly, this method is not self-starting. In other words, given the initial conditions $\theta_1$ and $\omega_1$ we need also to know $\theta_2$ for the algorithm to start. We can for example determine $\theta_2$ using the Euler method, viz $\theta_2 = \theta_1 + \Delta t \, \omega_1$.
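A minimal sketch of the Verlet iteration (1.75) with the Euler start-up step, in Python (the book works in Fortran 77; the values of $g/l$, $\Delta t$ and the initial conditions below are assumptions of mine), compared against the exact harmonic solution $\theta(t) = \theta_1 \cos(\omega_0 t)$ with $\omega_0 = \sqrt{g/l}$:

```python
import math

g_over_l = 9.8                   # illustrative value of g/l
w = math.sqrt(g_over_l)
dt, steps = 0.01, 1000           # 10 seconds of motion

theta1, omega1 = 0.1, 0.0
theta_prev = theta1              # theta_1
theta = theta1 + dt * omega1     # theta_2 from the Euler start-up step
for i in range(2, steps + 1):
    theta, theta_prev = (2 * theta - theta_prev
                         - g_over_l * dt ** 2 * theta), theta

t = steps * dt
exact = theta1 * math.cos(w * t)
print(theta, exact)
```

After a thousand steps the Verlet angle still tracks the exact solution closely, and the amplitude shows no secular growth.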
1.5 Exercises
Exercise 1: We give the differential equations

$\frac{dx}{dt} = v$. (1.76)

$\frac{dv}{dt} = a - bv$. (1.77)

Write down the exact solutions.

Write down the numerical solutions of these differential equations using the Euler and Verlet methods and determine the corresponding errors.
$\frac{dv}{dt} = \frac{P}{mv} - \frac{C\rho A v^2}{m}.$

The numerical approximation of this first order differential equation which we will consider in this problem is based on the Euler algorithm.

(1) Calculate the speed $v$ as a function of time in the case of zero air resistance and then in the case of non-vanishing air resistance. What do you observe? We will take $P = 200$ and $C = 0.5$. We also give the values

(2) What do you observe if we change the drag coefficient and/or the power? What do you observe if we decrease the time step?
$v_x(i+1) = v_x(i) - \Delta t \frac{B_2 v(i) v_x(i)}{m}.$

$v_y(i+1) = v_y(i) - \Delta t \, g - \Delta t \frac{B_2 v(i) v_y(i)}{m}.$

$v(i+1) = \sqrt{v_x^2(i+1) + v_y^2(i+1)}.$
(1) Write a Fortran code which implements the above Euler algorithm.
$\frac{B_2}{m} = 0.00004 \, {\rm m}^{-1} \, , \quad g = 9.8 \, {\rm m/s}^2.$

$v(1) = 700 \, {\rm m/s} \, , \quad \theta = 30 \, {\rm degrees}.$

$v_x(1) = v(1)\cos\theta \, , \quad v_y(1) = v(1)\sin\theta.$

$N = 10^5 \, , \quad \Delta t = 0.01 \, {\rm s}.$
Calculate the trajectory of the projectile with and without air resistance. What do you observe?

(3) We can determine numerically the range of the projectile by means of the conditional instruction if. This can be done by adding inside the do loop the following condition

Determine the range of the projectile with and without air resistance.

(4) In the case where air resistance is absent we know that the range is maximal when the initial angle is 45 degrees. Verify this fact numerically by considering several angles. More precisely, add a do loop over the initial angle in order to be able to study the range as a function of the initial angle.

(5) In the case where air resistance is non-zero, calculate the angle for which the range is maximal.
$N = 10000 \, , \quad \Delta t = 0.05 \, {\rm s}.$

$\theta_1 = 0.1 \, {\rm radian} \, , \quad \omega_1 = 0.$
By using the conditional instruction if we can limit the total time of motion to be equal to, say, 5 periods as follows

(3) Compare the value of the energy calculated with the Euler method and the value of the energy calculated with the Euler-Cromer method. What do you observe and what do you conclude?
(4) Repeat the computation using the Verlet algorithm. Remark that this method cannot self-start from the initial values $\theta_1$ and $\omega_1$ only. We must also provide the angle $\theta_2$, which can be calculated using for example Euler, viz

$\theta_2 = \theta_1 + \omega_1 \Delta t.$

We also remark that the Verlet algorithm does not require the calculation of the angular velocity. However, in order to calculate the energy we need to evaluate the angular velocity, which can be obtained from the expression

$\omega_i = \frac{\theta_{i+1} - \theta_{i-1}}{2\Delta t}.$
Chapter 2
$F = \int_a^b f(x) \, dx.$ (2.1)

In general this can not be done analytically. However this integral is straightforward to do numerically. The starting point is the Riemann definition of the integral $F$ as the area under the curve of the function $f(x)$ from $x = a$ to $x = b$. This is obtained as follows. We discretize the $x$-interval so that we end up with $N$ equal small intervals of length $\Delta x$, viz

$x_n = x_0 + n\Delta x \, , \quad \Delta x = \frac{b-a}{N}.$ (2.2)
Clearly $x_0 = a$ and $x_N = b$. The Riemann definition is then given by the following limit

$F = \lim_{\substack{\Delta x \to 0, \; N \to \infty \\ b-a \; {\rm fixed}}} \Delta x \sum_{n=0}^{N-1} f(x_n).$ (2.3)

The first approximation which can be made is to drop the limit. We get the so-called rectangular approximation given by

$F_N = \Delta x \sum_{n=0}^{N-1} f(x_n).$ (2.4)
In other words we evaluate the function $f(x)$ at $N+1$ points in the interval $[a, b]$, then we sum the values $f(x_n)$ with some corresponding weights $w_n$. For example in the rectangular approximation (2.4) the values $f(x_n)$ are summed with equal weights $w_n = \Delta x$, $n = 0, \ldots, N-1$, and $w_N = 0$. It is also clear that the estimation $F_N$ of the integral $F$ becomes exact only in the large $N$ limit.
We remark that the weights here are given by $w_0 = \Delta x/2$, $w_n = \Delta x$, $n = 1, \ldots, N-1$, and $w_N = \Delta x/2$.

$f(x) = \alpha x^2 + \beta x + \gamma.$ (2.8)
Equivalently

$\alpha = \frac{f(1) + f(-1)}{2} - f(0) \, , \quad \beta = \frac{f(1) - f(-1)}{2} \, , \quad \gamma = f(0).$ (2.12)

Thus

$\int_{-1}^{1} dx \, (\alpha x^2 + \beta x + \gamma) = \frac{f(-1)}{3} + \frac{4f(0)}{3} + \frac{f(1)}{3}.$ (2.13)
In other words we can express the integral of the function $f(x) = \alpha x^2 + \beta x + \gamma$ over the interval $[-1, 1]$ in terms of the values of this function $f(x)$ at $x = -1, 0, 1$. Similarly we can express the integral of $f(x)$ over the adjacent subintervals $[x_{n-1}, x_n]$ and $[x_n, x_{n+1}]$ in terms of the values of $f(x)$ at $x = x_{n+1}, x_n, x_{n-1}$, viz

$\int_{x_{n-1}}^{x_{n+1}} dx \, f(x) = \int_{x_{n-1}}^{x_{n+1}} dx \, (\alpha x^2 + \beta x + \gamma) = \Delta x \left( \frac{f(x_{n-1})}{3} + \frac{4f(x_n)}{3} + \frac{f(x_{n+1})}{3} \right).$ (2.14)
By adding the contributions from each pair of adjacent subintervals we get the full integral

$S_N = \Delta x \sum_{p=0}^{(N-2)/2} \left( \frac{f(x_{2p})}{3} + \frac{4f(x_{2p+1})}{3} + \frac{f(x_{2p+2})}{3} \right).$ (2.15)

$T_N = \frac{\Delta x}{2} \left( f(x_0) + 2\sum_{n=1}^{N-1} f(x_n) + f(x_N) \right).$ (2.17)
Let us also recall that $N\Delta x = b - a$ is the length of the total interval, which is always kept fixed. Thus by doubling the number of subintervals we halve the width, viz $\Delta\tilde x = \Delta x/2$, and

$4T_{2N} = \frac{\Delta x}{2} \left( 2f(\tilde x_0) + 4\sum_{n=1}^{2N-1} f(\tilde x_n) + 2f(\tilde x_{2N}) \right)$
$= \frac{\Delta x}{2} \left( 2f(\tilde x_0) + 4\sum_{n=1}^{N-1} f(\tilde x_{2n}) + 4\sum_{n=0}^{N-1} f(\tilde x_{2n+1}) + 2f(\tilde x_{2N}) \right)$
$= \frac{\Delta x}{2} \left( 2f(x_0) + 4\sum_{n=1}^{N-1} f(x_n) + 4\sum_{n=0}^{N-1} f(\tilde x_{2n+1}) + 2f(x_N) \right).$ (2.18)

In the above we have used the identification $\tilde x_{2n} = x_n$, $n = 0, 1, \ldots, N-1, N$. Thus

$4T_{2N} - T_N = \frac{\Delta x}{2} \left( f(x_0) + 2\sum_{n=1}^{N-1} f(x_n) + 4\sum_{n=0}^{N-1} f(\tilde x_{2n+1}) + f(x_N) \right) = 3S_N.$ (2.19)
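The three estimates and the relation (2.19) are easy to check numerically. The Python sketch below is my own illustration (the book's codes are in Fortran 77), applied to the test integral $\int_0^\pi \sin x \, dx = 2$, which is my choice and not taken from the text:

```python
import math

def rectangular(f, a, b, n):
    dx = (b - a) / n
    return dx * sum(f(a + i * dx) for i in range(n))

def trapezoid(f, a, b, n):
    dx = (b - a) / n
    return dx / 2 * (f(a) + 2 * sum(f(a + i * dx) for i in range(1, n)) + f(b))

def simpson(f, a, b, n):    # n must be even
    dx = (b - a) / n
    return dx / 3 * (f(a) + f(b)
                     + 4 * sum(f(a + i * dx) for i in range(1, n, 2))
                     + 2 * sum(f(a + i * dx) for i in range(2, n, 2)))

f, a, b, n = math.sin, 0.0, math.pi, 50   # exact value of the integral is 2
print(rectangular(f, a, b, n), trapezoid(f, a, b, n), simpson(f, a, b, 2 * n))
# the relation (2.19): 4 T_{2N} - T_N equals 3 times the Simpson sum
print(4 * trapezoid(f, a, b, 2 * n) - trapezoid(f, a, b, n),
      3 * simpson(f, a, b, 2 * n))
```

The last two printed numbers agree to machine precision, since (2.19) is an exact algebraic identity between the sums.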
2.4 Errors

The error estimates for numerical integration are computed as follows. We start with the Taylor expansion

$f(x) = f(x_n) + (x - x_n)f^{(1)}(x_n) + \frac{1}{2!}(x - x_n)^2 f^{(2)}(x_n) + \ldots$ (2.20)

Thus

$\int_{x_n}^{x_{n+1}} dx \, f(x) = f(x_n)\Delta x + \frac{1}{2!}f^{(1)}(x_n)(\Delta x)^2 + \frac{1}{3!}f^{(2)}(x_n)(\Delta x)^3 + \ldots$ (2.21)
This is of order 1/N 2 . But we have N subintervals. Thus the total error is of order 1/N .
The error in the interval $[x_n, x_{n+1}]$ in the trapezoidal approximation is

$\int_{x_n}^{x_{n+1}} dx \, f(x) - \frac{1}{2}\big(f(x_n) + f(x_{n+1})\big)\Delta x = \int_{x_n}^{x_{n+1}} dx \, f(x) - \frac{1}{2}\Big(2f(x_n) + \Delta x f^{(1)}(x_n) + \frac{1}{2!}(\Delta x)^2 f^{(2)}(x_n) + \ldots\Big)\Delta x$
$= \Big(\frac{1}{3!} - \frac{1}{2}\frac{1}{2!}\Big) f^{(2)}(x_n)(\Delta x)^3 + \ldots$ (2.23)
This is of order 1/N 3 and thus the total error is of order 1/N 2 .
In order to compute the error in the interval $[x_{n-1}, x_{n+1}]$ in the parabolic approximation we compute

$\int_{x_{n-1}}^{x_n} dx \, f(x) + \int_{x_n}^{x_{n+1}} dx \, f(x) = 2f(x_n)\Delta x + \frac{2}{3!}(\Delta x)^3 f^{(2)}(x_n) + \frac{2}{5!}(\Delta x)^5 f^{(4)}(x_n) + \ldots$ (2.24)

Also we compute

$\frac{\Delta x}{3}\big(f(x_{n+1}) + f(x_{n-1}) + 4f(x_n)\big) = 2f(x_n)\Delta x + \frac{2}{3!}(\Delta x)^3 f^{(2)}(x_n) + \frac{2}{3 \cdot 4!}(\Delta x)^5 f^{(4)}(x_n) + \ldots$ (2.25)

Hence the error in the interval $[x_{n-1}, x_{n+1}]$ in the parabolic approximation is

$\int_{x_{n-1}}^{x_{n+1}} dx \, f(x) - \frac{\Delta x}{3}\big(f(x_{n+1}) + f(x_{n-1}) + 4f(x_n)\big) = \Big(\frac{2}{5!} - \frac{2}{3 \cdot 4!}\Big)(\Delta x)^5 f^{(4)}(x_n) + \ldots$ (2.26)
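The predicted orders, total errors of order $1/N$, $1/N^2$ and $1/N^4$ for the rectangular, trapezoid and parabolic (Simpson) rules respectively, can be seen by doubling $N$ and watching the error shrink by factors of about 2, 4 and 16. A Python sketch of mine (not one of the book's Fortran codes; the test integral $\int_0^1 e^x dx = e - 1$ is my choice):

```python
import math

def rect(f, a, b, n):
    dx = (b - a) / n
    return dx * sum(f(a + i * dx) for i in range(n))

def trap(f, a, b, n):
    dx = (b - a) / n
    return dx / 2 * (f(a) + 2 * sum(f(a + i * dx) for i in range(1, n)) + f(b))

def simp(f, a, b, n):
    dx = (b - a) / n
    return dx / 3 * (f(a) + f(b)
                     + 4 * sum(f(a + i * dx) for i in range(1, n, 2))
                     + 2 * sum(f(a + i * dx) for i in range(2, n, 2)))

f, a, b, exact = math.exp, 0.0, 1.0, math.e - 1.0
for rule in (rect, trap, simp):
    ratio = abs(rule(f, a, b, 64) - exact) / abs(rule(f, a, b, 128) - exact)
    print(rule.__name__, ratio)   # roughly 2, 4 and 16 respectively
```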
Calculate the value of this integral using the rectangular approximation. Compare with the exact result.

Hint: You can code the function using either a subroutine or a function.

(2) Calculate the numerical error as a function of $N$. Compare with the theory.

(3) Repeat the computation using the trapezoid method and Simpson's rule.

(4) Take now the integrals

$I = \int_0^{\pi/2} \cos x \, dx \, , \quad I = \int_1^e \frac{1}{x} dx \, , \quad I = \lim_{\lambda \to 0} \int_{-1}^{+1} \frac{1}{x^2 + \lambda^2} dx.$
Chapter 3
$f(x) = 0.$ (3.1)

The bisection algorithm works as follows. We start with two values of $x$, say $x_+$ and $x_-$, such that

$f(x_-) < 0 \, , \quad f(x_+) > 0.$ (3.2)

In other words the function changes sign in the interval between $x_-$ and $x_+$, and thus there must exist a root between $x_-$ and $x_+$. If the function changes from positive to negative as we increase $x$, we conclude that $x_+ < x_-$. We bisect the interval $[x_+, x_-]$ at

$x = \frac{x_+ + x_-}{2}.$ (3.3)

If $f(x)f(x_+) > 0$ then $x_+$ will be changed to the point $x$, otherwise $x_-$ will be changed to the point $x$. We continue this process until the change in $x$ becomes insignificant or until the error becomes smaller than some tolerance. The relative error is defined by

${\rm error} = \Big|\frac{x_+ - x_-}{x}\Big|.$ (3.4)

Clearly the absolute error $e = x_i - x_f$ is halved at each iteration, and thus the rate of convergence of the bisection rule is linear. This is slow.
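The algorithm above can be sketched as follows (a Python illustration of mine, not one of the book's Fortran codes; the relative-error stopping rule assumes the root is away from zero, and the test function $x^2 - 2$ is my choice):

```python
def bisection(f, x_minus, x_plus, tol=1e-10):
    """Assumes f(x_minus) < 0 < f(x_plus); stops on the relative error (3.4)."""
    while True:
        x = 0.5 * (x_minus + x_plus)
        if f(x) * f(x_plus) > 0:
            x_plus = x           # f(x) has the sign of f(x_plus): move x_plus
        else:
            x_minus = x          # otherwise move x_minus
        if abs((x_plus - x_minus) / x) < tol:
            return 0.5 * (x_minus + x_plus)

root = bisection(lambda x: x * x - 2.0, 0.0, 2.0)
print(root)                      # converges to sqrt(2)
```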
$f(x) = f(x_0) + \Delta x \frac{df}{dx}\Big|_{x=x_0}.$ (3.6)

The correction $\Delta x$ is determined by finding the intersection point of this linear approximation of $f(x)$ with the $x$ axis. Thus

$f(x_0) + \Delta x \frac{df}{dx}\Big|_{x=x_0} = 0 \implies \Delta x = -\frac{f(x_0)}{(df/dx)|_{x=x_0}}.$ (3.7)

$\frac{df}{dx}\Big|_{x=x_0} = \frac{f(x_0 + \delta x) - f(x_0)}{\delta x}.$ (3.8)

In summary this method works by drawing the tangent to the function $f(x)$ at the old guess $x_0$ and then using the intercept with the $x$ axis as the new, hopefully better, guess $x$. The process is repeated until the change in $x$ becomes insignificant.
Next we compute the rate of convergence of the Newton-Raphson algorithm. Starting from $x_i$ the next guess is $x_{i+1}$, given by

$x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}.$ (3.9)

$\epsilon_{i+1} = \epsilon_i + \frac{f(x_i)}{f'(x_i)}.$ (3.10)

$f(x) = 0 = f(x_i) + (x - x_i)f'(x_i) + \frac{(x - x_i)^2}{2!}f''(x_i) + \ldots$ (3.11)

In other words

$f(x_i) = -\epsilon_i f'(x_i) - \frac{\epsilon_i^2}{2!}f''(x_i) + \ldots$ (3.12)

Therefore the error is given by

$\epsilon_{i+1} = -\frac{\epsilon_i^2}{2}\frac{f''(x_i)}{f'(x_i)}.$ (3.13)
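The quadratic convergence (3.13), the error is essentially squared at each step, is visible after only a handful of iterations. A Python sketch of mine (the book works in Fortran 77; the test equation $x^2 - 2 = 0$ with initial guess $x_0 = 1$ is my choice):

```python
import math

def newton(f, fprime, x0, steps):
    x, errors = x0, []
    for _ in range(steps):
        x -= f(x) / fprime(x)
        errors.append(abs(x - math.sqrt(2.0)))
    return x, errors

x, errs = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0, 5)
print(x)
print(errs)    # each error is roughly the square of the previous one
```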
Then the Newton-Raphson step is accepted else we take instead a bisection step.
We find

$a_2 = \frac{(x_1 - x)(x_3 - x_1)}{(x_2 - x)(x_2 - x_3)} \, , \quad a_3 = -\frac{(x_1 - x)(x_2 - x_1)}{(x_3 - x)(x_2 - x_3)}.$ (3.22)

Thus

$1 + a_2 + a_3 = \frac{(x_3 - x_1)(x_2 - x_1)}{(x_2 - x)(x_3 - x)}.$ (3.23)

Therefore we get
A polynomial which goes through the $n$ points $(x_i, f_i = f(x_i))$ was given by Lagrange. This is given by

$\lambda_i(x) = \prod_{j(\neq i)=1}^{n} \frac{x - x_j}{x_i - x_j}.$ (3.27)

We remark

$\lambda_i(x_j) = \delta_{ij}.$ (3.28)

$\sum_{i=1}^{n} \lambda_i(x) = 1.$ (3.29)

The Lagrange polynomial can be used to fit the entire table with $n$ equal to the number of points in the table. But it is preferable to use the Lagrange polynomial to fit only a small region of the table with a small value of $n$. In other words, use several polynomials to cover the whole table; the fit considered here is local and not global.
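The two properties (3.28) and (3.29) are easy to verify numerically. A Python sketch of mine (the book's codes are in Fortran 77; the sample points and the cubic test function below are my own choices):

```python
def lagrange_basis(xs, i, x):
    out = 1.0
    for j, xj in enumerate(xs):
        if j != i:
            out *= (x - xj) / (xs[i] - xj)
    return out

def lagrange_interpolate(xs, fs, x):
    return sum(fs[i] * lagrange_basis(xs, i, x) for i in range(len(xs)))

xs = [0.0, 1.0, 2.0, 3.0]
fs = [xi ** 3 for xi in xs]      # a cubic through 4 points is reproduced exactly
print(lagrange_interpolate(xs, fs, 1.5))                   # -> 3.375 = 1.5**3
print(sum(lagrange_basis(xs, i, 2.7) for i in range(4)))   # the basis sums to 1
```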
$p(x) = a_j(x - x_j)^3 + b_j(x - x_j)^2 + c_j(x - x_j) + d_j.$ (3.30)

We assume that

$p(x_j) = p_j.$ (3.31)

In other words the $p_j$ for all $j = 1, 2, \ldots, n-1$ are known. From the above equation we conclude that

$d_j = p_j.$ (3.32)
We compute

$p'(x) = 3a_j(x - x_j)^2 + 2b_j(x - x_j) + c_j.$ (3.33)

$p''(x) = 6a_j(x - x_j) + 2b_j.$ (3.34)

Thus we get, by substituting $x = x_j$ into $p''(x)$, the result

$b_j = \frac{p''_j}{2}.$ (3.35)

By substituting $x = x_{j+1}$ into $p''(x)$ we get the result

$a_j = \frac{p''_{j+1} - p''_j}{6h_j}.$ (3.36)

By substituting $x = x_{j+1}$ into $p(x)$ we get

$h_{n-1}(p''_{n-1} + 2p''_n) = -\frac{6(p_n - p_{n-1})}{h_{n-1}} + 6p'_n.$ (3.46)
The $n$ equations (3.44), (3.45) and (3.46) correspond to a tridiagonal linear system. In general $p'_1$ and $p'_n$ are not known. In this case we may use the natural spline, in which the second derivative vanishes at the end points and hence

$\frac{p_2 - p_1}{h_1} - p'_1 = \frac{p_n - p_{n-1}}{h_{n-1}} - p'_n = 0.$ (3.47)
$\tan \alpha a = \frac{\beta}{\alpha}.$

$\alpha = \sqrt{\frac{2mE}{\hbar^2}} \, , \quad \beta = \sqrt{\frac{2m(V - E)}{\hbar^2}}.$

In the case of the infinite potential well we find the solutions

$E_n = \frac{(n + \frac{1}{2})^2 \pi^2 \hbar^2}{2ma^2} \, , \quad n = 0, 1, \ldots$

We choose (dropping units)

$\hbar = 1 \, , \quad a = 1 \, , \quad 2m = 1.$
In order to find numerically the energies $E_n$ we will use the Newton-Raphson algorithm, which allows us to find the roots of the equation $f(x) = 0$ as follows. From an initial guess $x_0$, the first approximation $x_1$ to the solution is determined from the intersection of the tangent to the function $f(x)$ at $x_0$ with the $x$-axis. This is given by

$x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}.$

Next by using $x_1$ we repeat the same step in order to find the second approximation $x_2$ to the solution. In general the approximation $x_{i+1}$ to the desired solution in terms of the approximation $x_i$ is given by the equation

$x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}.$
(1) For $V = 10$, determine the solutions using the graphical method. Consider the two functions

$f(\alpha) = \tan \alpha a \, , \quad g(\alpha) = \frac{\beta}{\alpha} = \sqrt{\frac{V}{\alpha^2} - 1}.$

(2) Find, using the method of Newton-Raphson, the two solutions with a tolerance equal to $10^{-8}$. For the first solution we take the initial guess $\alpha = \pi/a$, and for the second solution we take the initial guess $\alpha = 2\pi/a$.

(3) Repeat for $V = 20$.

(4) Find the 4 solutions for $V = 100$. Use the graphical method to determine the initial guess each time.
(5) Repeat the above questions using the bisection method.
Chapter 4
$M_e \frac{d^2\vec r}{dt^2} = -\frac{G M_e M_s}{r^3} \vec r = -\frac{G M_e M_s}{r^3} (x\vec i + y\vec j).$ (4.1)

We get the two equations

$\frac{d^2x}{dt^2} = -\frac{G M_s}{r^3} x.$ (4.2)

$\frac{d^2y}{dt^2} = -\frac{G M_s}{r^3} y.$ (4.3)
We replace these two second-order differential equations by the four first-order differential equations

dx/dt = v_x. (4.4)

dv_x/dt = −GM_s x/r³. (4.5)
CP and MFT, B.Ydri 40
dy/dt = v_y. (4.6)

dv_y/dt = −GM_s y/r³. (4.7)

We recall

r = √(x² + y²). (4.8)

M_e v²/r = GM_s M_e/r². (4.9)
Equivalently

GM_s = v²r. (4.10)

The radius is r = 1 AU. The velocity of the earth is v = 2πr/yr = 2π AU/yr. Hence

GM_s = 4π² AU³/yr².
For the numerical simulations it is important to determine the correct initial conditions.
The orbit of Mercury is known to be an ellipse with eccentricity e = 0.206 and radius (semi-major axis) a = 0.39 AU with the Sun at one of the foci. The distance between the Sun and the center is ea. The first initial condition is x_0 = r_1, y_0 = 0 where r_1 is the maximum distance from Mercury to the Sun, i.e. r_1 = (1 + e)a = 0.47 AU. The second initial condition is the velocity (0, v_1) which can be computed using conservation of energy and angular momentum. For example by comparing with the point (0, b) on the orbit, where b is the semi-minor axis, i.e. b = a√(1 − e²), the velocity (v_2, 0) there can be
obtained in terms of (0, v_1) from conservation of angular momentum as follows

r_1 v_1 = b v_2 ⟹ v_2 = r_1 v_1/b. (4.12)

Next conservation of energy yields

−GM_s M_m/r_1 + (1/2)M_m v_1² = −GM_s M_m/r_2 + (1/2)M_m v_2². (4.13)

In the above, r_2 = √(e²a² + b²) is the distance between the Sun and Mercury when at the point (0, b). By substituting the value of v_2 we get an equation for v_1. This is given by

v_1 = √((GM_s/a)(1 − e)/(1 + e)) = 8.2 AU/yr. (4.14)
M_e d²r⃗/dt² = −(GM_s M_e/r²) r̂. (4.15)

We use r̂ = r⃗/r to derive dr̂/dt = θ̇ θ̂ and dθ̂/dt = −θ̇ r̂, and hence

d²r⃗/dt² = (r̈ − rθ̇²) r̂ + (rθ̈ + 2ṙθ̇) θ̂.

Newton's second law decomposes into the two equations

rθ̈ + 2ṙθ̇ = 0. (4.16)

r̈ − rθ̇² = −GM_s/r². (4.17)
Let us recall that the angular momentum per unit mass is defined by l⃗ = r⃗ × dr⃗/dt = r²θ̇ r̂ × θ̂. Thus l = r²θ̇. Equation (4.16) is precisely the requirement that angular momentum is conserved. Indeed we compute

dl/dt = r(rθ̈ + 2ṙθ̇) = 0. (4.18)

Now we remark that the area swept by the vector r⃗ in a time interval dt is dA = (r·rdθ)/2 where dθ is the angle traveled by r⃗ during dt. Clearly

dA/dt = l/2. (4.19)

In other words the planet sweeps equal areas in equal times since l is conserved. This is Kepler's second law.
The second equation (4.17) becomes now

r̈ = l²/r³ − GM_s/r². (4.20)

By multiplying this equation with ṙ we obtain

dE/dt = 0 , E = (1/2)ṙ² + l²/(2r²) − GM_s/r. (4.21)

This is precisely the statement of conservation of energy. E is the energy per unit mass. Solving for dt in terms of dr we obtain

dt = dr/√(2(E − l²/(2r²) + GM_s/r)). (4.22)
dθ = l dr/(r²√(2(E − l²/(2r²) + GM_s/r))). (4.23)
By inverting this equation we get the equation of an ellipse with eccentricity e since E < 0, viz

1/r = C(1 + e cos(θ − θ_0)). (4.26)

This is Kepler's first law. The angle at which r is maximum is θ − θ_0 = π. This distance is precisely (1 + e)a where a is the semi-major axis of the ellipse since ea is the distance between the Sun which is at one of the two foci and the center of the ellipse. Hence we obtain the relation

(1 − e²)a = 1/C = l²/(GM_s). (4.27)
From equation (4.19) we can derive Kepler's third law. By integrating both sides of the equation over a single period T and then taking the square we get

A² = (1/4) l²T². (4.28)

A is the area of the ellipse, i.e. A = πab where the semi-minor axis b is related to the semi-major axis a by b = a√(1 − e²). Hence

π²a⁴(1 − e²) = (1/4) l²T². (4.29)

By using equation (4.27) we get the desired formula

T²/a³ = 4π²/(GM_s). (4.30)
The total time interval is T = NΔt. We define x(t) = x(i), v_x(t) = v_x(i), y(t) = y(i), v_y(t) = v_y(i). Equations (4.4), (4.5), (4.6), (4.7) and (4.8) become (with i = 0, ..., N)

x(i + 1) = x(i) + v_x(i)Δt. (4.31)

v_x(i + 1) = v_x(i) − GM_s x(i)Δt/(r(i))³. (4.32)

y(i + 1) = y(i) + v_y(i)Δt. (4.33)

v_y(i + 1) = v_y(i) − GM_s y(i)Δt/(r(i))³. (4.34)

r(i) = √(x(i)² + y(i)²). (4.36)
This is the Euler algorithm. It can also be rewritten with x̄(i) = x(i − 1), ȳ(i) = y(i − 1), v̄_x(i) = v_x(i − 1), v̄_y(i) = v_y(i − 1), r̄(i) = r(i − 1) and i = 1, ..., N + 1 as

v̄_x(i + 1) = v̄_x(i) − GM_s x̄(i)Δt/(r̄(i))³. (4.37)

x̄(i + 1) = x̄(i) + v̄_x(i)Δt. (4.38)

v̄_y(i + 1) = v̄_y(i) − GM_s ȳ(i)Δt/(r̄(i))³. (4.39)

ȳ(i + 1) = ȳ(i) + v̄_y(i)Δt. (4.40)

r̄(i) = √(x̄(i)² + ȳ(i)²). (4.41)
In the Euler-Cromer algorithm the freshly updated velocities are used in the position updates, viz

v̄_x(i + 1) = v̄_x(i) − GM_s x̄(i)Δt/(r̄(i))³. (4.42)

x̄(i + 1) = x̄(i) + v̄_x(i + 1)Δt. (4.43)

v̄_y(i + 1) = v̄_y(i) − GM_s ȳ(i)Δt/(r̄(i))³. (4.44)

ȳ(i + 1) = ȳ(i) + v̄_y(i + 1)Δt. (4.45)
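As an illustration (in Python rather than the book's Fortran), here is a sketch of the Euler-Cromer scheme for a circular orbit, in units where GM_s = 4π² AU³/yr²; the step size and initial data below are test values only.

```python
from math import pi, sqrt

GMs = 4.0 * pi ** 2  # G times the solar mass in AU^3/yr^2

def euler_cromer_orbit(x, y, vx, vy, dt, nsteps):
    """Euler-Cromer scheme: the velocities are updated first and the
    freshly updated velocities are used in the position updates."""
    for _ in range(nsteps):
        r3 = (x * x + y * y) ** 1.5
        vx -= GMs * x * dt / r3
        vy -= GMs * y * dt / r3
        x += vx * dt
        y += vy * dt
    return x, y, vx, vy

# Circular Earth-like orbit: r = 1 AU, v = 2*pi AU/yr, evolved for one year.
x, y, vx, vy = euler_cromer_orbit(1.0, 0.0, 0.0, 2.0 * pi, dt=0.0001, nsteps=10000)
```

Because the scheme uses the updated velocity, the radius stays close to 1 AU over a full orbit, in contrast with the plain Euler scheme whose energy drifts steadily.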
The slope f(x_n, y_n) of this line is exactly given by the slope of the function y = y(x) at the beginning of the interval [x_n, x_{n+1}].
Given the value y_n at x_n we evaluate the value y_{n+1} at x_{n+1} using the method of Runge-Kutta as follows. First the middle of the interval [x_n, x_{n+1}], which is at the value x_n + Δx/2, corresponds to the y-value y_{n+1/2} calculated using Euler's method, viz y_{n+1/2} = y_n + k_1/2 where

k_1 = Δx f(x_n, y_n). (4.48)

The slope at this middle point is then used to advance the solution across the whole interval, viz

k_2 = Δx f(x_n + Δx/2, y_n + k_1/2). (4.49)

y_{n+1} = y_n + k_2. (4.50)
k_1 = Δx f(x_n, y_n)

k_2 = Δx f(x_n + Δx/2, y_n + k_1/2)

y_{n+1} = y_n + k_2. (4.51)
The error in this method is proportional to Δx³. This can be shown as follows. We have

y(x + Δx) = y(x) + Δx dy/dx + (1/2)(Δx)² d²y/dx² + ...
          = y(x) + Δx f(x, y) + (1/2)(Δx)² df(x, y)/dx + ...
          = y(x) + Δx [f(x, y) + (Δx/2) ∂f/∂x + (Δx/2) f(x, y) ∂f/∂y] + ...
          = y(x) + Δx f(x + Δx/2, y + (Δx/2) f(x, y)) + O(Δx³)
          = y(x) + Δx f(x + Δx/2, y + k_1/2) + O(Δx³)
          = y(x) + k_2 + O(Δx³). (4.52)
Let us finally note that the above Runge-Kutta method is strictly speaking the second-
order Runge-Kutta method. The first-order Runge-Kutta method is the Euler algorithm.
The higher-order Runge-Kutta methods will not be discussed here.
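A generic second-order Runge-Kutta driver for dy/dx = f(x, y) can be sketched as follows (a Python illustration; the test equation dy/dx = y is an arbitrary choice with known solution e^x):

```python
def rk2_solve(f, x0, y0, x_end, h):
    """Second-order (midpoint) Runge-Kutta:
    k1 = h f(x_n, y_n), k2 = h f(x_n + h/2, y_n + k1/2), y_{n+1} = y_n + k2."""
    x, y = x0, y0
    for _ in range(int(round((x_end - x0) / h))):
        k1 = h * f(x, y)
        k2 = h * f(x + 0.5 * h, y + 0.5 * k1)
        y += k2
        x += h
    return y

# dy/dx = y with y(0) = 1 has the exact solution y(x) = exp(x).
y1 = rk2_solve(lambda x, y: y, 0.0, 1.0, 1.0, h=0.001)
```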
dθ/dt = ω

dω/dt = −(g/l) θ. (4.53)

Euler's equations read

θ_{n+1} = θ_n + Δt ω_n

ω_{n+1} = ω_n − (g/l) θ_n Δt. (4.54)

First we consider the function θ = θ(t). The middle point is (t_n + Δt/2, θ_n + k_1/2) where k_1 = Δt ω_n. For the function ω = ω(t) the middle point is (t_n + Δt/2, ω_n + k_3/2) where k_3 = −(g/l)Δt θ_n. Therefore we have

k_1 = Δt ω_n

k_3 = −(g/l)Δt θ_n. (4.55)
l
The slope of the function θ(t) at its middle point is

k_2/Δt = ω_n + k_3/2. (4.56)

The slope of the function ω(t) at its middle point is

k_4/Δt = −(g/l)(θ_n + k_1/2). (4.57)

The Runge-Kutta solution is then given by

θ_{n+1} = θ_n + k_2

ω_{n+1} = ω_n + k_4. (4.58)
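In Python the scheme (4.55)-(4.58) reads as follows for a full integration (a sketch; the value of g/l, the step size and the initial angle in the test call are illustrative choices):

```python
def rk2_pendulum(theta, omega, g_over_l, dt, nsteps):
    """Second-order Runge-Kutta for theta'' = -(g/l) theta using the
    slopes k1..k4 of equations (4.55)-(4.57)."""
    for _ in range(nsteps):
        k1 = dt * omega
        k3 = -g_over_l * theta * dt
        k2 = dt * (omega + 0.5 * k3)
        k4 = -g_over_l * (theta + 0.5 * k1) * dt
        theta += k2
        omega += k4
    return theta, omega

# Small oscillation with g/l = 1: exact solution theta(t) = theta_0 cos(t).
theta, omega = rk2_pendulum(0.1, 0.0, 1.0, dt=0.001, nsteps=1000)  # t = 1
```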
dx/dt = v_x. (4.59)

dv_x/dt = −GM_s x/r³. (4.60)

dy/dt = v_y. (4.61)

dv_y/dt = −GM_s y/r³. (4.62)
First we consider the function x = x(t). The middle point is (t_n + Δt/2, x_n + k_1/2) where k_1 = Δt v_{x,n}. For the function v_x = v_x(t) the middle point is (t_n + Δt/2, v_{x,n} + k_3/2) where k_3 = −(GM_s/r_n³)Δt x_n. Therefore we have

k_1 = Δt v_{x,n}

k_3 = −(GM_s/r_n³)Δt x_n. (4.63)

The slope of the function x(t) at its middle point is

k_2/Δt = v_{x,n} + k_3/2. (4.64)

The slope of the function v_x(t) at the middle point is

k_4/Δt = −(GM_s/R_n³)(x_n + k_1/2). (4.65)
Next we consider the function y = y(t). The middle point is (t_n + Δt/2, y_n + k'_1/2) where k'_1 = Δt v_{y,n}. For the function v_y = v_y(t) the middle point is (t_n + Δt/2, v_{y,n} + k'_3/2) where k'_3 = −(GM_s/r_n³)Δt y_n. Therefore we have

k'_1 = Δt v_{y,n}

k'_3 = −(GM_s/r_n³)Δt y_n. (4.66)

The slopes k'_2 and k'_4 are defined in the same way as k_2 and k_4 in (4.64) and (4.65), with x replaced by y. The solution is then given by

x_{n+1} = x_n + k_2

v_{x,n+1} = v_{x,n} + k_4

y_{n+1} = y_n + k'_2

v_{y,n+1} = v_{y,n} + k'_4. (4.70)
axis when Mercury is at the perihelion is found to change linearly with time. We get the following rates of precession

α = 0.0008 , dθ_p/dt = 8.414 ± 0.019
α = 0.001 , dθ_p/dt = 10.585 ± 0.018
α = 0.002 , dθ_p/dt = 21.658 ± 0.019
α = 0.004 , dθ_p/dt = 45.369 ± 0.017. (4.72)

Thus

dθ_p/dt = a α , a = 11209.2 ± 147.2 degrees/(yr·AU²). (4.73)

By extrapolating to the value provided by general relativity, viz α = 1.1 × 10⁻⁸ AU², we get

dθ_p/dt = 44.4 ± 0.6 arcsec/century. (4.74)
4.5 Exercises
Exercise 1: Using the Runge-Kutta method solve the following differential equations

d²r/dt² = l²/r³ − GM/r². (4.75)

d²z/dt² = −g. (4.76)

dN/dt = aN − bN². (4.77)
Exercise 2: The Lorenz model is a chaotic system given by three coupled first-order differential equations

dx/dt = σ(y − x)

dy/dt = −xz + rx − y

dz/dt = xy − bz. (4.78)

This system is a simplified version of the system of Navier-Stokes equations of fluid mechanics which are relevant to the Rayleigh-Bénard problem. Write down the numerical solution of these equations according to the Runge-Kutta method.
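A Runge-Kutta sketch for Exercise 2 follows (a Python illustration; the parameter values in the test call are my own choice — for r < 1 the origin is a stable fixed point, which gives an easy numerical check, while the classic chaotic choice is σ = 10, r = 28, b = 8/3):

```python
def lorenz_rk2(x, y, z, sigma, r, b, dt, nsteps):
    """Integrate the Lorenz system with the midpoint (second-order)
    Runge-Kutta method, advancing the three coupled equations together."""
    def f(x, y, z):
        return sigma * (y - x), -x * z + r * x - y, x * y - b * z
    for _ in range(nsteps):
        fx, fy, fz = f(x, y, z)
        xm = x + 0.5 * dt * fx        # midpoint state
        ym = y + 0.5 * dt * fy
        zm = z + 0.5 * dt * fz
        fx, fy, fz = f(xm, ym, zm)    # slope at the midpoint
        x, y, z = x + dt * fx, y + dt * fy, z + dt * fz
    return x, y, z

# With r = 0.5 < 1 every orbit decays to the fixed point at the origin.
x, y, z = lorenz_rk2(1.0, 1.0, 1.0, sigma=10.0, r=0.5, b=8.0 / 3.0,
                     dt=0.01, nsteps=5000)
```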
(1) Write a Fortran code in which we implement the Runge-Kutta algorithm for the problem of solving the equations of motion of the solar system.
(2) Compute the trajectory, the velocity and the energy as functions of time. What do you observe for the energy.
(3) According to Kepler's first law the orbit of any planet is an ellipse with the Sun at
one of the two foci. In the following we will only consider planets which are known
to have circular orbits to a great accuracy. These planets are Venus, Earth, Mars,
Jupiter and Saturn. The radii in astronomical units are given by
Part II
(1) According to Kepler's third law the square of the period of a planet is directly proportional to the cube of the semi-major axis of its orbit. For circular orbits the proportionality factor is equal to 1 exactly. Verify this fact for the planets mentioned above. We can measure the period of a planet by monitoring when the planet returns to its farthest point from the sun.
(2) By changing the initial velocity appropriately we can obtain an elliptical orbit. Verify this.
(3) The fundamental laws governing the motion of the solar system are Newton's law of universal attraction and Newton's second law of motion. Newton's law of universal attraction states that the force between the Sun and a planet is inversely proportional to the square of the distance between them and it is directed from the planet to the Sun. We will assume in the following that this force is inversely proportional to a different power of the distance. Modify the code accordingly and calculate the new orbits for powers between 1 and 3. What do you observe and what do you conclude.
F = (GM_s M_m/r²)(1 + α/r²) , α = 1.1 × 10⁻⁸ AU².
(1) Include the above force in the code. The initial position and velocity of Mercury are

x_0 = (1 + e)a , y_0 = 0.

v_{x0} = 0 , v_{y0} = √((GM_s/a)(1 − e)/(1 + e)).
Thus initially Mercury is at its farthest point from the Sun since a is the semi-major axis of Mercury (a = 0.39 AU) and e is its eccentricity (e = 0.206) and hence ea is the distance between the Sun and the center of the ellipse. The semi-minor axis is defined by b = a√(1 − e²). The initial velocity was calculated by applying the principles of conservation of angular momentum and conservation of energy between the above initial point and the point (0, b).
(2) The amount of precession of the perihelion of Mercury is very small because α is very small. In fact it can not be measured directly in any numerical simulation with a limited amount of time. Therefore we will choose a larger value of α, for example α = 0.0008 AU². We also work with N = 20000, dt = 0.0001. Compute the orbit for these values. Compute the angle made between the vector position of Mercury and the horizontal axis as a function of time. Compute also the distance between Mercury and the sun and its derivative with respect to time given by

dr/dt = (x v_x + y v_y)/r.

This derivative will vanish each time Mercury reaches its farthest point from the sun or its closest point from the sun (the perihelion). Plot the angle θ_p made between the vector position of Mercury at its farthest point and the horizontal axis as a function of time. What do you observe. Determine the slope dθ_p/dt which is precisely the amount of precession of the perihelion of Mercury for the above value of α.
(3) Repeat the above question for other values of α, say α = 0.001, 0.002, 0.004. Each time compute dθ_p/dt. Plot dθ_p/dt as a function of α. Determine the slope. Deduce the amount of precession of the perihelion of Mercury for the value α = 1.1 × 10⁻⁸ AU².
Chapter 5
Chaotic Pendulum
write the above second order differential equation as two first order differential equations, namely

dθ/dt = ω

dω/dt = −(1/Q) ω − sin θ + F_D cos ω_D t. (5.6)
This system of differential equations does not admit a simple analytic solution. The linear approximation corresponds to small amplitude oscillations, viz sin θ ≃ θ. We find

a = (1 − ω_D²)/((1 − ω_D²)² + ω_D²/Q²) , b = (ω_D/Q)/((1 − ω_D²)² + ω_D²/Q²). (5.12)

θ = θ_∞ + θ_t. (5.13)

Here θ_∞ = F_D(a cos ω_D t + b sin ω_D t) is the steady-state solution while the transient piece θ_t is given by

θ_t = [(θ(0) − F_D(1 − ω_D²)/((1 − ω_D²)² + ω_D²/Q²)) cos t + (θ̇(0) + θ(0)/2Q + F_D(1 − 3ω_D²)/(2Q((1 − ω_D²)² + ω_D²/Q²))) sin t] e^{−t/2Q}. (5.14)
The last two terms depend on the initial conditions and will vanish exponentially at very large times t → ∞, i.e. they are transients. The asymptotic motion is given by θ_∞. Thus for t → ∞ we get

θ² + θ̇²/ω_D² = F_D²(a² + b²) = F_D²/((1 − ω_D²)² + ω_D²/Q²). (5.17)
In other words the orbit of the system in phase space is an ellipse. The motion is periodic
with period equal to the period of the driving force. This ellipse is also called a periodic
attractor because regardless of the initial conditions the trajectory of the system will tend
at large times to this ellipse.
Let us also remark that the maximum angular displacement is θ_{F_D} = F_D/√((1 − ω_D²)² + ω_D²/Q²). The function θ_{F_D} = θ_{F_D}(ω_D) exhibits resonant behavior as the driving frequency approaches the natural frequency, which is equivalent to the limit ω_D → 1. In this limit θ_{F_D} = QF_D. The width of the resonant window is proportional to 1/Q, so for Q → ∞ we observe that θ_{F_D} → ∞ when ω_D → 1 while for Q → 0 we observe that θ_{F_D} → 0 when ω_D → 1.
In general the time-asymptotic response of any linear system to a periodic drive is periodic with the same period as the driving force. Furthermore when the driving frequency approaches one of the natural frequencies the response will exhibit resonant behavior.
The basic ingredient in deriving the above results is the linearity of the dynamical
system. As we will see shortly periodic motion is not the only possible time-asymptotic
response of a dynamical system to a periodic driving force.
For example

θ(1) = 0

ω(1) = 0

t(1) = 0. (5.21)

k_1 = Δt ω(i)

k_3 = Δt [−(1/Q) ω(i) − sin θ(i) + F(i)]

k_2 = Δt [ω(i) + k_3/2]

k_4 = Δt [−(1/Q)(ω(i) + k_3/2) − sin(θ(i) + k_1/2) + F(i + 1/2)] (5.25)

θ(i + 1) = θ(i) + k_2

ω(i + 1) = ω(i) + k_4

t(i + 1) = Δt · i. (5.26)

F(i + 1/2) ≡ F(t(i) + Δt/2) = F_D cos ω_D(t(i) + Δt/2). (5.29)

F(i + 1/2) ≡ F(t(i) + Δt/2) = F_D sin ω_D(t(i) + Δt/2). (5.30)
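Equations (5.25)-(5.29) translate directly into code. A Python sketch of the cosine-driven version follows (the book's implementation is in Fortran; the parameter values in the test call are illustrative, chosen so that the result can be checked against the small-angle limit):

```python
from math import sin, cos

def pendulum_rk2(theta, omega, Q, F_D, omega_D, dt, nsteps):
    """Second-order Runge-Kutta for the chaotic pendulum with
    drive F(t) = F_D cos(omega_D t), following equations (5.25)."""
    t = 0.0
    for _ in range(nsteps):
        F_i = F_D * cos(omega_D * t)
        F_mid = F_D * cos(omega_D * (t + 0.5 * dt))
        k1 = dt * omega
        k3 = dt * (-omega / Q - sin(theta) + F_i)
        k2 = dt * (omega + 0.5 * k3)
        k4 = dt * (-(omega + 0.5 * k3) / Q - sin(theta + 0.5 * k1) + F_mid)
        theta += k2
        omega += k4
        t += dt
    return theta, omega

# Check: undriven, essentially undamped, small angle -> theta ~ 0.01 cos(t).
theta, omega = pendulum_rk2(0.01, 0.0, Q=1.0e9, F_D=0.0, omega_D=2.0 / 3.0,
                            dt=0.001, nsteps=1000)
```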
θ = θ_∞ + θ_t. (5.31)

The motion in the phase space is periodic with period equal to the period of the driving force. The orbit in phase space is precisely an ellipse of the form

θ² + θ̇²/ω_D² = F_D²(a² + b²). (5.34)
Let us consider a perturbation of the initial conditions. We can imagine that we have two pendulums A and B with slightly different initial conditions. Then the difference between the two trajectories is

δθ = θ_A − θ_B.

This goes to zero at large times. If we plot ln δθ as a function of time we find a straight line with a negative slope. The time-asymptotic motion is not sensitive to initial conditions. It converges at large times to θ_∞ no matter what the initial conditions are. The curve θ̇_∞ = θ̇_∞(θ_∞) is called a (periodic) attractor. This is because any perturbed trajectory will decay exponentially in time to the attractor.
In order to see chaotic behavior we can for example increase Q keeping everything else fixed. We observe that the slope λ of the line ln δθ = λt starts to decrease in absolute value until at some value of Q it becomes positive. At this value the variation between the two pendulums increases exponentially with time. This is the chaotic regime. The value λ = 0 marks the onset of chaos. The coefficient λ is called the Lyapunov exponent.
The chaotic pendulum is a deterministic system (since it obeys ordinary differential
equations) but it is not predictable in the sense that given two identical pendulums their
motions will diverge from each other in the chaotic regime if there is the slightest error
in determining their initial conditions. This high sensitivity to initial conditions is known
as the butterfly effect and could be taken as the definition of chaos itself.
However we should stress here that the motion of the chaotic pendulum is not random.
This can be seen by inspecting Poincare sections.
ω_D t = φ + 2πn. (5.36)

The angle φ is called the Poincare phase and n is an integer. For period-1 motion the Poincare section consists of one single point. For period-N motion the Poincare section consists of N points.
Thus in the linear regime if we plot (θ, θ̇) for ω_D t = 2πn we get a single point since the motion is periodic with period equal to that of the driving force. The single point we get as a Poincare section is also an attractor since all pendulums with almost the same initial conditions will converge onto it.
In the chaotic regime the Poincare section is an attractor known as a strange attractor. It is a complicated curve which could have fractal structure and all pendulums with almost the same initial conditions will converge onto it.
obtained in some sense subharmonics with periods equal to the period of the driving force times 2^N. This is very characteristic of chaos. In fact chaotic behavior corresponds to the limit N → ∞. In other words chaos is period-∞ (bounded) motion, which could be taken as another definition of chaos.
(1) Write a code which implements the Euler-Cromer algorithm for the chaotic pendulum. The angle must always be taken between −π and π, which can be maintained as follows

if (θ_i .lt. −π) θ_i = θ_i + 2π
if (θ_i .gt. +π) θ_i = θ_i − 2π.
does not depend on the initial conditions thus confirming that the motion is not random
and which may have a fractal structure. As a consequence this curve is called a strange
attractor.
(1) We consider two identical chaotic pendulums A and B with slightly different initial conditions. For example we take

δθ_i = θ_i^A − θ_i^B.

What do you observe. Are the two motions identical. What happens for large times. Is the motion of the pendulum predictable. For the second value of F_D use
N = 10000 , dt = 0.01s.
What is the orbit in the phase space for small times and what does it represent.
What is the orbit for large times. Compare between the two pendulums A and B.
Does the orbit for large times depend on the initial conditions.
(3) A Poincare section is obtained numerically by plotting the points (θ, θ̇) of the orbit at the times at which the function sin ω_D t vanishes. These are the times at which this function changes sign. This is implemented as follows
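The book does this in Fortran inside the integration loop; the sign-change test itself can be sketched in Python as follows (function and parameter names are illustrative):

```python
from math import sin

def poincare_indices(omega_D, dt, nsteps):
    """Return the time-step indices at which sin(omega_D * t) changes
    sign, i.e. where (theta, omega) should be recorded for the
    Poincare section."""
    indices = []
    prev = sin(0.0)
    for i in range(1, nsteps + 1):
        cur = sin(omega_D * i * dt)
        if prev * cur < 0.0:
            indices.append(i)
        prev = cur
    return indices

idx = poincare_indices(omega_D=2.0, dt=0.001, nsteps=10000)
```

For ω_D = 2 the zeros of sin(2t) on (0, 10] lie at t = nπ/2, so six crossings are detected, the first one just past t = π/2.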
Verify that the Poincare section in the linear regime is given by a single point in the phase space. Take and use F_D = 0.5 radian/s², N = 10⁴–10⁷, dt = 0.001s. Verify that the Poincare section in the chaotic regime is also an attractor. Take and use F_D = 1.2 radian/s², N = 10⁵, dt = 0.04s. Compare between the Poincare sections of the pendulums A and B. What do you observe and what do you conclude.
Part II As we have seen in the previous simulation period doubling can also be described by a bifurcation diagram. This phenomenon is also an example of spontaneous symmetry breaking. In this case the symmetry is t → t + T_D. Clearly only orbits with period T_D are symmetric under this transformation.
Let Q_N be the value of Q at which the Nth bifurcation occurs. In other words this is the value at which the orbit goes from being a period-(N − 1) motion to a period-N motion. The Feigenbaum ratio is defined by

F_N = (Q_{N−1} − Q_{N−2})/(Q_N − Q_{N−1}).

As we approach the chaotic regime, i.e. as N → ∞, the ratio F_N converges rapidly to the constant value F = 4.669. This is a general result which holds for many chaotic systems. Any dynamical system which can exhibit a transition to chaos via an infinite series of period-doubling bifurcations is characterized by a Feigenbaum ratio which approaches 4.669 as N → ∞.
(1) Calculate the orbit and Poincare section for Q = 1.36s. What is the period of the motion. Is the orbit symmetric under t → t + T_D. Is the orbit symmetric under θ → −θ.
(2) Plot the bifurcation diagram θ̇ = θ̇(Q) for two different sets of initial conditions for values of Q in the interval [1.3, 1.36]. What is the value of Q at which the period gets doubled. What is the value of Q at which the symmetry t → t + T_D is spontaneously broken.
(3) In this question we use the initial conditions
Calculate the orbit and Poincare section and plot the bifurcation diagram θ̇ = θ̇(Q) for values of Q in the interval [1.34, 1.38]. Determine from the bifurcation diagram the values Q_N for N = 1, 2, 3, 4, 5. Calculate the Feigenbaum ratio. Calculate the accumulation point Q_∞ at which the transition to chaos occurs.
Chapter 6
Molecular Dynamics
6.1 Introduction
In the molecular dynamics approach we attempt to understand the behavior of a
classical many-particle system by simulating the trajectory of each particle in the system.
In practice this can be applied to systems containing 10⁹ particles at most. The molecular
dynamics approach is complementary to the more powerful Monte Carlo method. The
Monte Carlo method deals with systems that are in thermal equilibrium with a heat bath.
The molecular dynamics approach on the other hand is useful in studying how fast in real
time a system moves from one microscopic state to another.
We consider a box containing a collection of atoms or molecules. We will use Newton's second law to calculate the positions and velocities of all the molecules as functions of
time. Some of the questions we can answer with the molecular dynamics approach are:
The melting transition.
The rate of equilibration.
The rate of diffusion.
As stated above molecular dynamics allows us to understand classical systems. A classical
treatment can be justified as follows. We consider the case of liquid argon as an example.
The energy required to excite an argon atom is of the order of 10 eV while the typical kinetic energy of the center of mass of an argon atom is 0.1 eV. Thus a collision between two argon atoms will not change the electron configuration of either atom. Hence for
all practical purposes we can ignore the internal structure of argon atoms. Furthermore
the wavelength of an argon atom, which is of the order of 10⁻⁷ Å, is much smaller than the spacing between argon atoms, typically of the order of 1 Å, which again justifies a classical
treatment.
dv_{i,x}/dt = a_{x,i} , dx_i/dt = v_{i,x}. (6.1)

dv_{i,y}/dt = a_{y,i} , dy_i/dt = v_{i,y}. (6.2)
Each argon atom experiences a force from all other argon atoms. In order to calculate this force we need to determine the interaction potential. We assume that the interaction potential between any pair of argon atoms depends only on the distance between them. Let r_{ij} and u(r_{ij}) be the distance and the interaction potential between atoms i and j. The total potential is then given by

U = Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} u(r_{ij}). (6.3)
The precise form of u can be calculated from first principles, i.e. from quantum mechanics.
However this calculation is very complicated and in most circumstances a phenomenolog-
ical form of u will be sufficient.
For large separations r_{ij} the potential u(r_{ij}) must be weakly attractive, given by the van der Waals force which arises from the electrostatic interaction between the electric dipole moments of the two argon atoms. In other words u(r_{ij}) for large r_{ij} is attractive due to the mutual polarization of the two atoms. The van der Waals potential can be computed from quantum mechanics where it is shown that it varies as 1/r_{ij}⁶. For small separations r_{ij} the potential u(r_{ij}) must become strongly repulsive due to the overlap of the electron clouds of the two argon atoms. This repulsion, known also as core repulsion, is a consequence of the Pauli exclusion principle. It is a common practice to choose the repulsive part of the potential u to be proportional to 1/r_{ij}¹². The total potential takes the form
u(r) = 4ε[(σ/r)¹² − (σ/r)⁶]. (6.4)

This is the Lennard-Jones potential. The parameter σ is of dimension length while ε is of dimension energy. We observe that at r = σ the potential is 0 identically while for r > 2.5σ the potential approaches zero rapidly. The minimum of the potential occurs at r = 2^{1/6}σ. The depth of the potential at the minimum is ε.
The force of atom k on atom i is

f⃗_{k,i} = −∇⃗_{k,i} u(r_{k,i}) = (24ε/r_{ki})[2(σ/r_{ki})¹² − (σ/r_{ki})⁶] r̂_{ki}. (6.5)

The acceleration of the ith atom is given by

a_{x,i} = (1/m) Σ_{k≠i} f_{k,i} cos θ_{k,i} = (1/m) Σ_{k≠i} f_{k,i} (x_i − x_k)/r_{ki}
        = (24ε/m) Σ_{k≠i} [2(σ/r_{ki})¹² − (σ/r_{ki})⁶] (x_i − x_k)/r_{ki}². (6.6)
a_{y,i} = (1/m) Σ_{k≠i} f_{k,i} sin θ_{k,i} = (1/m) Σ_{k≠i} f_{k,i} (y_i − y_k)/r_{ki}
        = (24ε/m) Σ_{k≠i} [2(σ/r_{ki})¹² − (σ/r_{ki})⁶] (y_i − y_k)/r_{ki}². (6.7)
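The potential (6.4) and the radial force magnitude entering (6.5)-(6.7) are simple to code. A Python sketch in reduced units (σ = ε = 1 by default; the book's codes are in Fortran):

```python
def lj_potential(r, sigma=1.0, eps=1.0):
    """Lennard-Jones potential u(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def lj_force(r, sigma=1.0, eps=1.0):
    """Radial force magnitude f(r) = (24*eps/r)*(2*(sigma/r)**12 - (sigma/r)**6);
    positive values are repulsive."""
    s6 = (sigma / r) ** 6
    return (24.0 * eps / r) * (2.0 * s6 * s6 - s6)

r_min = 2.0 ** (1.0 / 6.0)  # location of the potential minimum
```

At r = σ the potential vanishes, and at r = 2^{1/6}σ the force vanishes while u = −ε, exactly as stated above.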
σ = ε = m = 1. (6.8)

Thus

τ = σ√(m/ε) = 2.17 × 10⁻¹² s. (6.10)

Hence a molecular dynamics simulation which runs for 2000 steps with a reduced time step Δt = 0.01 corresponds to a total reduced time 2000 × 0.01 = 20 which is equivalent to a real time 20 σ(m/ε)^{1/2} = 4.34 × 10⁻¹¹ s.
Periodic Boundary Conditions The total number of atoms in a real physical system is huge, of the order of 10²³. If the system is placed in a box the fraction of atoms of the system near the walls of the box is negligible compared to the total number of atoms. In typical simulations the total number of atoms is only of the order of 10³–10⁵ and in this case the fraction of atoms near the walls is considerable and their effect can not be neglected.
In order to reduce edge effects we use periodic boundary conditions. In other words
the box is effectively a torus and there are no edges. Let Lx and Ly be the lengths of the
box in the x and y directions respectively. If an atom crosses the walls of the box in a
particular direction we add or subtract the length of the box in that direction as follows
if (x > L_x) then x = x − L_x
if (x < 0) then x = x + L_x. (6.11)

if (y > L_y) then y = y − L_y
if (y < 0) then y = y + L_y. (6.12)
The maximum separation in the x direction between any two particles is only Lx /2 whereas
the maximum separation in the y direction between any two particles is only Ly /2. This
can be implemented as follows
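A minimal Python sketch of this minimum image convention (the book's codes do the same in Fortran with if-statements of the same shape):

```python
def minimum_image(dx, L):
    """Map a separation dx into [-L/2, L/2] for a periodic box of
    length L, so that the nearest periodic image is used."""
    if dx > 0.5 * L:
        dx -= L
    elif dx < -0.5 * L:
        dx += L
    return dx

sep = minimum_image(9.0, 10.0)  # the nearest image is at separation -1
```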
Verlet Algorithm The numerical algorithm we will use is the Verlet algorithm. Let us consider the forward and backward Taylor expansions of a function f given by

f(t_n + Δt) = f(t_n) + Δt f'(t_n) + (1/2)(Δt)² f''(t_n) + (1/6)(Δt)³ f'''(t_n) + ... (6.15)

f(t_n − Δt) = f(t_n) − Δt f'(t_n) + (1/2)(Δt)² f''(t_n) − (1/6)(Δt)³ f'''(t_n) + ... (6.16)

Adding these expressions we get

f(t_n + Δt) = 2f(t_n) − f(t_n − Δt) + (Δt)² f''(t_n) + O(Δt⁴). (6.17)
We remark that the error is proportional to Δt⁴, which is less than the errors in the Euler, Euler-Cromer and second-order Runge-Kutta methods, so this method is more accurate.
We have therefore for the ith atom

a_{x,i,n} = (1/m) Σ_{k≠i} f_{k,i,n} (x_{i,n} − x_{k,n})/r_{ki,n}. (6.21)

a_{y,i,n} = (1/m) Σ_{k≠i} f_{k,i,n} (y_{i,n} − y_{k,n})/r_{ki,n}. (6.22)
In the Verlet method it is not necessary to calculate the components dx_{i,n}/dt and dy_{i,n}/dt of the velocity. However since the velocity will be needed for other purposes we will also compute it using the equations

v_{x,i,n} = (x_{i,n+1} − x_{i,n−1})/(2Δt). (6.24)

v_{y,i,n} = (y_{i,n+1} − y_{i,n−1})/(2Δt). (6.25)

Let us remark that the Verlet method is not self-starting. In other words given the initial conditions x_{i,1}, y_{i,1}, v_{x,i,1} and v_{y,i,1} we need also to know x_{i,2}, y_{i,2}, v_{x,i,2} and v_{y,i,2} for the algorithm to start, which can be determined using the Euler method.
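The scheme, including the Euler bootstrap for the first step, can be sketched in Python on the one-dimensional oscillator x'' = −x (an illustrative stand-in for the Lennard-Jones force, not the book's molecular dynamics code):

```python
def verlet_oscillator(x0, v0, dt, nsteps):
    """Verlet recursion x_{n+1} = 2 x_n - x_{n-1} + a_n (dt)^2 for the
    oscillator x'' = -x. The method is not self-starting, so the second
    starting value x_1 is supplied by an Euler step."""
    x_prev = x0
    x = x0 + v0 * dt  # Euler step for the first point
    for _ in range(nsteps - 1):
        x_new = 2.0 * x - x_prev + (-x) * dt * dt
        x_prev, x = x, x_new
    return x

# x(0) = 1, v(0) = 0 evolves as cos(t); integrate up to t = 1.
x = verlet_oscillator(1.0, 0.0, dt=0.001, nsteps=1000)
```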
Another way of measuring the temperature T of a dilute gas is through a study of the distribution of atom velocities. A classical gas in thermal equilibrium obeys the Maxwell distribution. The speed and velocity distributions in two dimensions are given respectively by

P(v) = C (v/(k_B T)) e^{−mv²/(2k_B T)}. (6.28)

P(v_x) = C_x (1/√(k_B T)) e^{−mv_x²/(2k_B T)} , P(v_y) = C_y (1/√(k_B T)) e^{−mv_y²/(2k_B T)}. (6.29)

Recall that the probability per unit v of finding an atom with speed v is equal to P(v) whereas the probability per unit v_{x,y} of finding an atom with velocity v_{x,y} is equal to P(v_{x,y}). The constants C and C_{x,y} are determined from the normalization conditions. There are peaks in the distributions P(v) and P(v_{x,y}). Clearly the temperature is related to the location of the peak which occurs in P(v). This is given by

k_B T = m v_peak². (6.30)
a = L/√N.

Clearly there are N cells of area a × a. We choose L and N such that a > 2σ. For simplicity we will use reduced units σ = ε = m = 1. In order to reduce edge effects we use periodic boundary conditions. In other words the box is effectively a torus and there are no edges. Thus the maximum separation in the x direction between any two particles is only L/2 and similarly the maximum separation in the y direction between any two particles is only L/2.
The initial positions of the atoms are fixed as follows. The atom k = √N(i − 1) + j will be placed at the center of the cell with corners (i, j), (i + 1, j), (i, j + 1) and (i + 1, j + 1). Next we perturb these initial positions in a random way by adding random numbers in the interval [−a/4, +a/4] to the x and y coordinates of the atoms. The initial velocities can be chosen in random directions with a speed equal to v_0 for all atoms.
(1) Write a molecular dynamics code along the above lines. Take L = 15, N = 25, Δt = 0.02, Time = 500 and v_0 = 1. As a first test verify that the total energy is conserved. Plot the trajectories of the atoms. What do you observe.
(2) As a second test we propose to measure the temperature by observing how the gas approaches equilibrium. Use the equipartition theorem

k_B T = (m/2N) Σ_{i=1}^{N} (v_{i,x}² + v_{i,y}²).

Plot T as a function of time. Take Time = 1000–1500. What is the temperature of the gas at equilibrium.
(3) Compute the speed distribution of the argon atoms by constructing an appropriate histogram as follows. We take the value Time = 2000. We consider the speeds of all particles at all times. There are Time × N values of the speed in this sample. Construct the histogram for this sample by 1) finding the maximum and minimum, 2) dividing the interval into bins, 3) determining the number of times a given value of the speed falls in a bin and 4) properly normalizing the distribution. Compare with the Maxwell distribution

P_Maxwell(v) = C (v/(k_B T)) e^{−mv²/(2k_B T)}.

Deduce the temperature from the peak of the distribution given by k_B T = m v_peak². Compare with the value of the temperature obtained from the equipartition theorem. What happens if we increase the initial speed.
(1) Show that with these conditions you obtain a crystalline solid with a triangular
lattice.
(2) In order to observe melting we must heat up the system. This can be achieved by
increasing the kinetic energy of the atoms by hand. A convenient way of doing this
is to rescale the current and previous positions of the atoms periodically (say every
1000 steps) as follows
hh = int(n/1000)
if (hh*1000.eq.n) then
x(i, n) = x(i, n + 1) − R*(x(i, n + 1) − x(i, n))
y(i, n) = y(i, n + 1) − R*(y(i, n + 1) − y(i, n))
endif.
This procedure will rescale the velocity by the amount R. We choose R = 1.5. Verify that we indeed reach the melting transition by means of this method. What happens to the energy and the temperature.
Chapter 7
Let us take the following example: a = 4, c = 1 and M = 9 with seed r_1 = 3. We get a sequence of length 9 given by
3, 4, 8, 6, 7, 2, 0, 1, 5. (7.2)
After the last number 5 we get 3 and therefore the sequence will repeat. In this case the
period is M = 9.
It is clear that we need to choose the parameters a, c and M and the seed r_1 with care so that we get the longest sequence of pseudo random numbers. The maximum possible period depends on the size of the computer word. A 32-bit machine may use M = 2³¹ ≃ 2 × 10⁹. The numbers generated by (7.1) are random only in the sense that they are evenly distributed over their range. Equation (7.1) is related to the logistic map which is known to exhibit chaotic behaviour. Although chaos is deterministic it looks random. In the same way although equation (7.1) is deterministic the numbers generated by it look random. This is the reason why they are called pseudo random numbers.
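The generator (7.1) and the example above can be reproduced in a few lines of Python:

```python
def lcg_sequence(a, c, M, seed, n):
    """First n numbers of the linear congruential generator
    r_{i+1} = (a * r_i + c) mod M."""
    seq = [seed]
    r = seed
    for _ in range(n - 1):
        r = (a * r + c) % M
        seq.append(r)
    return seq

# The example of the text: a = 4, c = 1, M = 9, seed r_1 = 3.
seq = lcg_sequence(a=4, c=1, M=9, seed=3, n=9)
```

After the last number the seed reappears, confirming that the period of this example is M = 9.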
This is a test of uniformity as well as of randomness. To be more precise if ⟨x_i^k⟩ is equal to 1/(k + 1) then we can infer that the distribution is uniform, whereas if the deviation varies as 1/√N then we can infer that the distribution is random.
A direct test of uniformity is to divide the unit interval into K equal subintervals (bins) and place each random number in one of these bins. For a uniform distribution we must obtain N/K numbers in each bin where N is the number of generated random numbers.
Chi-Square Statistic: In the above test there will be statistical fluctuations about the ideal value N/K for each bin. The question is whether or not these fluctuations are consistent with the laws of statistics. The answer is based on the so-called chi-square statistic defined by

χ_m² = Σ_{i=1}^{K} (N_i − n_ideal)²/n_ideal. (7.7)

In the above definition N_i is the number of random numbers which fall into bin i and n_ideal is the expected number of random numbers in each bin.
The probability of finding any particular value $\chi^2$ which is less than $\chi_m^2$ is found to
be proportional to the incomplete gamma function $\gamma(\nu/2,\chi_m^2/2)$, where $\nu$ is the number
of degrees of freedom given by $\nu=K-1$. We have
$$P(\chi^2\leq \chi_m^2)=\frac{\gamma(\nu/2,\chi_m^2/2)}{\Gamma(\nu/2)}\equiv P(\nu/2,\chi_m^2/2). \qquad (7.8)$$
The most likely value of $\chi_m^2$, for some fixed number of degrees of freedom $\nu$, corresponds
to the value $P(\nu/2,\chi_m^2/2)=0.5$. In other words in half of the measurements (bin tests),
for some fixed number of degrees of freedom $\nu$, the chi-square statistic predicts that we
must find a value of $\chi_m^2$ smaller than the maximum.
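The chi-square statistic itself is a one-line computation. A minimal Python sketch (the bin counts here are assumed inputs; in the lab one would fill them from a generator):

```python
# Sketch of the chi-square statistic of the bin test described above.
def chi_square(counts, n_ideal):
    """Return sum over bins of (N_i - n_ideal)^2 / n_ideal."""
    return sum((n - n_ideal) ** 2 / n_ideal for n in counts)

# A perfectly uniform filling gives chi^2 = 0; statistically one expects
# chi^2 of the order of the number of degrees of freedom nu = K - 1.
print(chi_square([100, 100, 100, 100, 100], 100.0))   # -> 0.0
print(chi_square([90, 110, 100, 100, 100], 100.0))    # -> 2.0
```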
Again if $x_i$ and $x_{i+j}$ are independent random numbers which are distributed with the
joint probability distribution $P(x_i,x_{i+j})$ then
$$<x_ix_{i+j}>\simeq\int_0^1dx\int_0^1dy\ xy\,P(x,y). \qquad (7.11)$$
We have clearly assumed that $N$ is large. For a uniform distribution, viz $P(x,y)=1$, we
get
$$<x_ix_{i+j}>\simeq\frac{1}{4}. \qquad (7.12)$$
For a random distribution the deviation from this result is of order $1/\sqrt{N}$. Hence in the
case that the random numbers are not correlated we have
$$C(j)=0. \qquad (7.13)$$
$$x_N=\sum_{i=1}^Ns_i. \qquad (7.14)$$
The walker for p = q = 1/2 can be generated by flipping a coin N times. The position is
increased by a for heads and decreased by a for tails.
Averaging over many walks each consisting of $N$ steps we get
$$<x_N>=\sum_{i=1}^N<s_i>=N<s>. \qquad (7.15)$$
In the above we have used the fact that the average over every step is the same, given by
$$<s_i>=<s>=pa-qa=(p-q)a.$$
For $p=q=1/2$ we get $<x_N>=0$. A better measure of the walk is given by
$$x_N^2=\bigg(\sum_{i=1}^Ns_i\bigg)^2. \qquad (7.17)$$
$$\Delta x^2=<(x_N-<x_N>)^2>=<x_N^2>-<x_N>^2. \qquad (7.18)$$
We compute
$$\Delta x^2=\sum_{i=1}^N\sum_{j=1}^N<(s_i-<s>)(s_j-<s>)>=\sum_{i\neq j=1}^N<(s_i-<s>)(s_j-<s>)>+\sum_{i=1}^N<(s_i-<s>)^2>. \qquad (7.19)$$
In the first term, since $i\neq j$, we have $<(s_i-<s>)(s_j-<s>)>=<(s_i-<s>)><(s_j-<s>)>$. But $<(s_i-<s>)>=0$. Thus
$$\Delta x^2=\sum_{i=1}^N<(s_i-<s>)^2>=N(<s_i^2>-<s>^2)=N(a^2-(p-q)^2a^2)=4Npqa^2. \qquad (7.20)$$
The main point is that since $N$ is proportional to time we have $<x_N^2>\propto t$. This is an
example of diffusive behaviour.
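The result $<x_N^2>=4Npqa^2$ is easy to check numerically. A minimal Python sketch (coin-flipping with a seeded generator; the function name is illustrative) for the symmetric walk with $p=q=1/2$ and $a=1$, where the prediction is simply $<x_N^2>=N$:

```python
import random

# Sketch: estimate <x_N^2> for the symmetric random walk (p = q = 1/2, a = 1);
# theory predicts <x_N^2> = 4 N p q a^2 = N.
def walk_variance(n_steps, n_walkers, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        # each step is +1 (heads) or -1 (tails) with equal probability
        x = sum(1 if rng.random() < 0.5 else -1 for _ in range(n_steps))
        total += x * x
    return total / n_walkers

print(walk_variance(100, 20000))   # close to 100
```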
Let $\tau$ be the time between steps and $a$ the lattice spacing. Then $t=N\tau$ and $x=ia$. Also
we define $P(x,t)=P(i,N)/a$. We get
$$P(x,t)=\frac{1}{2}\bigg(P(x+a,t-\tau)+P(x-a,t-\tau)\bigg). \qquad (7.23)$$
In the limit $a\longrightarrow 0$, $\tau\longrightarrow 0$ with the ratio $D=a^2/2\tau$ kept fixed we obtain the equation
$$\frac{\partial P(x,t)}{\partial t}=D\frac{\partial^2P(x,t)}{\partial x^2}. \qquad (7.25)$$
This is the diffusion equation. The generalization to 3 dimensions is
$$\frac{\partial P(x,y,z,t)}{\partial t}=D\nabla^2P(x,y,z,t). \qquad (7.26)$$
A particular solution of (7.25) is given by
$$P(x,t)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{x^2}{2\sigma^2}}\ ,\ \sigma=\sqrt{2Dt}. \qquad (7.27)$$
In other words the spatial distribution of the diffusing molecules is always a gaussian with
half-width $\sigma$ increasing with time as $\sqrt{t}$.
The average of any function $f$ of $x$ is given by
$$<f(x,t)>=\int f(x)P(x,t)dx. \qquad (7.28)$$
Let us multiply both sides of (7.25) by $f(x)$ and then integrate over $x$, viz
$$\int f(x)\frac{\partial P(x,t)}{\partial t}dx=D\int f(x)\frac{\partial^2P(x,t)}{\partial x^2}dx. \qquad (7.29)$$
Clearly
$$\int f(x)\frac{\partial P(x,t)}{\partial t}dx=\frac{d}{dt}\int f(x)P(x,t)dx=\frac{d}{dt}<f(x)>. \qquad (7.30)$$
Thus
$$\frac{d}{dt}<f(x)>=D\int f(x)\frac{\partial^2P(x,t)}{\partial x^2}dx=D\,f(x)\frac{\partial P(x,t)}{\partial x}\bigg|_{x=-\infty}^{x=+\infty}-D\int\frac{\partial f(x)}{\partial x}\frac{\partial P(x,t)}{\partial x}dx. \qquad (7.31)$$
Since $P$ and its derivative vanish at $x=\pm\infty$ we obtain
$$\frac{d}{dt}<f(x)>=-D\int\frac{\partial f(x)}{\partial x}\frac{\partial P(x,t)}{\partial x}dx. \qquad (7.32)$$
Hence, choosing $f=x$ and integrating by parts once more gives $d<x>/dt=0$, whereas choosing $f=x^2$ gives $d<x^2>/dt=2D$ and thus $<x^2>=2Dt$.
This is the diffusive behaviour we have observed in the random walk problem.
For $c>0$ the linear congruential generators are called mixed. They are denoted by
LCG$(a,c,M)$. The random numbers generated with LCG$(a,c,M)$ are in the range $[0,M-1]$.
For $c=0$ the linear congruential generators are called multiplicative. They are denoted
by MLCG$(a,M)$. The random numbers generated with MLCG$(a,M)$ are in the range
$[1,M-1]$.
In the case that $a$ is a primitive root modulo $M$ and $M$ is a prime the period of the
generator is $M-1$. That a number $a$ is a primitive root modulo $M$ means that for any integer
$n$ such that ${\rm gcd}(n,M)=1$ there exists a $k$ such that $a^k=n$ mod $M$.
An example of an MLCG is RAN0 due to Park and Miller which is used extensively on
IBM computers. In this case $a=7^5=16807$ and $M=2^{31}-1=2147483647$.
This generator can not be implemented directly in a high level language because of integer
overflow. Indeed the product of $a$ and $M-1$ exceeds the maximum value for a 32-bit integer.
Assembly language implementation using a 64-bit product register is straightforward
but not portable.
A better solution is given by Schrage's algorithm. This algorithm allows the multiplication
of two 32-bit integers without using any intermediate numbers which are larger
than 32 bits. To see how this works explicitly we factor $M$ as
$$M=aq+r. \qquad (7.40)$$
$$r=M\ {\rm mod}\ a\ ,\ q=[\frac{M}{a}]. \qquad (7.41)$$
In the above equation $[\ ]$ denotes the integer part. Remark that
$$r=M\ {\rm mod}\ a=M-[\frac{M}{a}]a. \qquad (7.42)$$
Thus by definition $r<a$. We will also demand that $r<q$ and hence
$$\frac{r}{qa}<<1. \qquad (7.43)$$
We have also
$$X_{i+1}=aX_i\ {\rm mod}\ M=aX_i-[\frac{aX_i}{M}]M=aX_i-[\frac{aX_i}{aq+r}]M. \qquad (7.44)$$
We compute
$$\frac{aX_i}{aq+r}=\frac{X_i}{q+\frac{r}{a}}=\frac{X_i}{q}\frac{1}{1+\frac{r}{qa}}=\frac{X_i}{q}\bigg(1-\frac{r}{qa}+...\bigg)=\frac{X_i}{q}-\frac{X_ir}{aq^2}+.... \qquad (7.45)$$
Clearly
$$\frac{X_i}{aq}=\frac{X_i}{M-r}\simeq\frac{X_i}{M}<1. \qquad (7.46)$$
Hence
$$[\frac{aX_i}{M}]=[\frac{X_i}{q}], \qquad (7.47)$$
if neglecting $\epsilon=rX_i/(aq^2)$ does not affect the integer part of $aX_i/M$, and
$$[\frac{aX_i}{M}]=[\frac{X_i}{q}]-1, \qquad (7.48)$$
if it does. In the first case we have
$$X_{i+1}=aX_i-[\frac{aX_i}{M}](aq+r)=a\bigg(X_i-[\frac{aX_i}{M}]q\bigg)-[\frac{aX_i}{M}]r \qquad (7.49)$$
$$=a\bigg(X_i-[\frac{X_i}{q}]q\bigg)-[\frac{X_i}{q}]r \qquad (7.50)$$
$$=a(X_i\ {\rm mod}\ q)-[\frac{X_i}{q}]r, \qquad (7.51)$$
if
$$a(X_i\ {\rm mod}\ q)-[\frac{X_i}{q}]r\geq 0. \qquad (7.52)$$
Also
$$X_{i+1}=aX_i-[\frac{aX_i}{M}](aq+r)=a\bigg(X_i-[\frac{aX_i}{M}]q\bigg)-[\frac{aX_i}{M}]r \qquad (7.53)$$
$$=a\bigg(X_i-[\frac{X_i}{q}]q+q\bigg)-[\frac{X_i}{q}]r+r \qquad (7.54)$$
$$=a(X_i\ {\rm mod}\ q)-[\frac{X_i}{q}]r+M, \qquad (7.55)$$
if
$$a(X_i\ {\rm mod}\ q)-[\frac{X_i}{q}]r<0. \qquad (7.56)$$
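The derivation above translates directly into code. A minimal Python sketch of Schrage's trick for the Park-Miller generator (Python integers do not overflow, so the trick is not needed here; the point is that every intermediate number stays below $2^{31}$, exactly as required on a 32-bit machine):

```python
# Schrage's algorithm for X_{i+1} = a X_i mod M with a = 16807, M = 2^31 - 1,
# computed without intermediate products larger than 32 bits.
A, M = 16807, 2147483647
Q, R = M // A, M % A              # M = A*Q + R, with R < Q (Q = 127773, R = 2836)

def ran0_step(x):
    """One step: a(X mod q) - [X/q] r, plus M if the result is negative."""
    x = A * (x % Q) - R * (x // Q)
    return x if x >= 0 else x + M

# Park-Miller check value: starting from seed 1, the 10000th iterate
# must equal 1043618065.
x = 1
for _ in range(10000):
    x = ran0_step(x)
print(x)   # -> 1043618065
```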
The generator RAN0 contains serial correlations. For example $D$-dimensional vectors
$(x_1,...,x_D)$, $(x_{D+1},...,x_{2D})$,... which are obtained by successive calls of RAN0 will lie on
a small number of parallel $(D-1)$-dimensional hyperplanes. Roughly there will be
$M^{1/D}$ such hyperplanes. In particular successive points $(x_i,x_{i+1})$, when binned into a
2-dimensional plane for $i=1,...,N$, will result in a distribution which fails the $\chi^2$ test
for $N\sim 10^7$, which is much less than the period $M-1$.
The generator RAN1 is devised so that the correlations found in RAN0 are removed using the
Bays-Durham algorithm. The Bays-Durham algorithm shuffles the sequence to remove
low-order serial correlations. In other words it changes the order of the numbers so that
the sequence is not dependent on order and a given number is not correlated with previous
numbers. More precisely the $j$th random number is output not on the $j$th call but on a
randomized later call, which is on average the $(j+32)$th call.
The generator RAN2 is an improvement over RAN1 and RAN0 due to L'Ecuyer. It uses two
sequences with different periods so as to obtain a new sequence with a larger period
equal to the least common multiple of the two periods. In this algorithm we add the two
sequences modulo the modulus $M$ of one of them. In order to avoid overflow we subtract
rather than add and if the result is negative we add $M-1$ so as to wrap around into the
interval $[0,M-1]$. L'Ecuyer uses the two sequences
$$X_{i+1}=40014\,X_i\ {\rm mod}\ 2147483563\ ,\ Y_{i+1}=40692\,Y_i\ {\rm mod}\ 2147483399.$$
The period is $2.3\times 10^{18}$. Let us also point out that RAN2 uses the Bays-Durham algorithm
in order to implement an additional shuffle.
We conclude this section by discussing another generator based on the linear congruential
method, namely the famous random number generator RAND. The period of this generator
is $2^{32}$ and lattice structure is present for higher dimensions $D\geq 6$.
(3) Let $N$ be the number of generated random numbers. Compute the correlation functions defined by
$$sum_1(k)=\frac{1}{N-k}\sum_{i=1}^{N-k}x_ix_{i+k}.$$
Part II We take N random numbers in the interval [0, 1] which we divide into K bins
of length = 1/K. Let Ni be the number of random numbers which fall in the ith bin.
For a uniform sequence of random numbers the number of random numbers in each bin
is nideal = N/K.
(1) Verify this result for the generator rand found in the standard Fortran library with
seed given by seed = 32768. We take K = 10 and N = 1000. Plot Ni as a function
of the position xi of the ith bin.
(2) The number of degrees of freedom is $\nu=K-1$. The most probable value of the
chi-square statistic $\chi^2$ is $\nu$. Verify this result for a total number of bin tests equal to
$L=1000$ and $K=11$. Each time calculate the number of times $L_i$ in the $L=1000$
bin tests we get a specific value of $\chi^2$. Plot $L_i$ as a function of $\chi^2$. What do you
observe?
call srand(seed)
rand()
(1) Compute the positions xi of three different random walkers as functions of the step
number i. We take i = 1, 100. Plot the three trajectories.
(2) We consider now the motion of $K=500$ random walkers. Compute the averages
$$<x_N>=\frac{1}{K}\sum_{i=1}^Kx_N^{(i)}\ ,\ <x_N^2>=\frac{1}{K}\sum_{i=1}^K(x_N^{(i)})^2.$$
In the above equations $x_N^{(i)}$ is the position of the $i$th random walker after $N$ steps.
Study the behavior of these averages as a function of $N$. Compare with the theoretical predictions.
for a collection of L = 500 two dimensional random walkers. We consider the values
N = 10, ..., 1000.
Chapter 8
$$F=\int_a^bf(x)dx. \qquad (8.1)$$
We discretize the $x$-interval so that we end up with $N$ equal small intervals of length $\Delta x$,
viz
$$x_n=x_0+n\Delta x\ ,\ \Delta x=\frac{b-a}{N}. \qquad (8.2)$$
Clearly $x_0=a$ and $x_N=b$. Riemann's definition of the integral is given by the following
limit
$$F=\lim\ \Delta x\sum_{n=0}^{N-1}f(x_n)\ ,\ \Delta x\longrightarrow 0\ ,\ N\longrightarrow\infty\ ,\ b-a={\rm fixed}. \qquad (8.3)$$
The first approximation which can be made is to simply drop the limit. We get the
so-called rectangular approximation given by
$$F_N=\Delta x\sum_{n=0}^{N-1}f(x_n). \qquad (8.4)$$
The error can be computed as follows. We start with the Taylor expansion
$$f(x)=f(x_n)+(x-x_n)f^{(1)}(x_n)+\frac{1}{2!}(x-x_n)^2f^{(2)}(x_n)+.... \qquad (8.5)$$
Thus
$$\int_{x_n}^{x_{n+1}}dx\ f(x)=f(x_n)\Delta x+\frac{1}{2!}f^{(1)}(x_n)(\Delta x)^2+\frac{1}{3!}f^{(2)}(x_n)(\Delta x)^3+.... \qquad (8.6)$$
The error in each subinterval is therefore of order $1/N^2$. But we have $N$ subintervals. Thus the total error is of order $1/N$.
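The $1/N$ behaviour of the rectangular approximation is easy to observe directly. A minimal Python sketch (the choice $f(x)=x^2$ on $[0,1]$, with $F=1/3$, is purely illustrative):

```python
# Sketch of the rectangular approximation F_N = dx * sum_n f(x_n) and of
# its 1/N total error, for the illustrative integral of x^2 on [0, 1].
def rectangular(f, a, b, N):
    dx = (b - a) / N
    return dx * sum(f(a + n * dx) for n in range(N))

exact = 1.0 / 3.0
err1 = abs(exact - rectangular(lambda x: x * x, 0.0, 1.0, 1000))
err2 = abs(exact - rectangular(lambda x: x * x, 0.0, 1.0, 2000))
print(err1 / err2)   # doubling N roughly halves the error
```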
$R$ is the domain of integration. In order to give the midpoint approximation of this integral
we imagine a rectangle of sides $x_b-x_a$ and $y_b-y_a$ which encloses the region $R$ and we
divide it into squares of length $h$. The points in the $x$/$y$ directions are
$$x_i=x_a+(i-\frac{1}{2})h\ ,\ i=1,...,n_x. \qquad (8.9)$$
$$y_i=y_a+(i-\frac{1}{2})h\ ,\ i=1,...,n_y. \qquad (8.10)$$
The numbers of points in the $x$/$y$ directions are
$$n_x=\frac{x_b-x_a}{h}\ ,\ n_y=\frac{y_b-y_a}{h}. \qquad (8.11)$$
The number of cells is therefore
$$n=n_xn_y=\frac{(x_b-x_a)(y_b-y_a)}{h^2}. \qquad (8.12)$$
The integral is then approximated by
$$F=h^2\sum_{i=1}^{n_x}\sum_{j=1}^{n_y}f(x_i,y_j)H(x_i,y_j). \qquad (8.13)$$
Let us say that we have $n_x$ intervals $[x_i,x_{i+1}]$ with $x_i=x_a+ih$, $i=0,...,n_x-1$, and
$x_0=x_a$. The term $hf(\frac{x_i+x_{i+1}}{2})$ is associated with the interval $[x_i,x_{i+1}]$. It is clear
that we can write this approximation as
$$F=h\sum_{i=0}^{n_x-1}f\bigg(\frac{x_i+x_{i+1}}{2}\bigg)\ ,\ x_i=x_a+ih. \qquad (8.18)$$
The total error is therefore $1/n_x^2$ as opposed to the $1/n_x$ of the rectangular approximation.
Let us do this in two dimensions. We write the error as
$$\int_{x_i}^{x_{i+1}}\int_{y_j}^{y_{j+1}}f(x,y)\ dx\ dy-f\bigg(\frac{x_i+x_{i+1}}{2},\frac{y_j+y_{j+1}}{2}\bigg)\Delta x\Delta y. \qquad (8.20)$$
We use the Taylor expansion
$$f(x,y)=f(x_i,y_j)+f_x^{'}(x_i,y_j)(x-x_i)+f_y^{'}(x_i,y_j)(y-y_j)+\frac{1}{2}f_x^{''}(x_i,y_j)(x-x_i)^2+\frac{1}{2}f_y^{''}(x_i,y_j)(y-y_j)^2+f_{xy}^{''}(x_i,y_j)(x-x_i)(y-y_j)+.... \qquad (8.21)$$
We find
$$\int_{x_i}^{x_{i+1}}\int_{y_j}^{y_{j+1}}f(x,y)\ dx\ dy-f\bigg(\frac{x_i+x_{i+1}}{2},\frac{y_j+y_{j+1}}{2}\bigg)\Delta x\Delta y=\frac{1}{24}f_x^{''}(x_i,y_j)(\Delta x)^3\Delta y+\frac{1}{24}f_y^{''}(x_i,y_j)\Delta x(\Delta y)^3+.... \qquad (8.22)$$
$$V_d=\int_{-R}^{+R}dx_d\int_{x_1^2+...+x_{d-1}^2\leq R^2-x_d^2}dx_1...dx_{d-1}=\int_{-R}^{+R}dx_d\int_0^{\sqrt{R^2-x_d^2}}r^{d-2}dr\int d\Omega_{d-2}=\frac{V_{d-1}}{R^{d-1}}\int_{-R}^{+R}dx_d\ (R^2-x_d^2)^{\frac{d-1}{2}}. \qquad (8.26)$$
At each dimension d we are thus required to compute only the remaining integral over
xd using, for instance, the midpoint approximation while the volume Vd1 is determined
in the previous recursion step. The starting point of the recursion process, for example
the volume in d = 2, can be determined also using the midpoint approximation. As we
will see in the lab problems this numerical calculation is very demanding with significant
errors compared with the Monte Carlo method.
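The recursion can be sketched compactly. A minimal Python sketch (the function name and the choice $V_1=2R$ as starting point are illustrative; the one-dimensional integral at each step uses the midpoint approximation):

```python
import math

# Sketch of the recursion V_d = (V_{d-1}/R^{d-1}) * Int_{-R}^{+R} (R^2 - x^2)^{(d-1)/2} dx
# evaluated with the midpoint approximation, starting from V_1 = 2R.
def ball_volume(d, R=1.0, N=20000):
    if d == 1:
        return 2.0 * R
    h = 2.0 * R / N
    integral = sum(
        (R * R - x * x) ** ((d - 1) / 2.0) * h
        for x in (-R + (i + 0.5) * h for i in range(N))
    )
    return ball_volume(d - 1, R, N) / R ** (d - 1) * integral

print(ball_volume(2))   # ~ pi
print(ball_volume(3))   # ~ 4 pi / 3
```

The results can be compared with the exact volume $V_d=\pi^{d/2}R^d/\Gamma(d/2+1)$.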
A Monte Carlo method is any procedure which uses (pseudo) random numbers to compute
or estimate the above integral. In the following we will describe two very simple Monte
Carlo methods based on simple sampling which give an approximate value for this integral.
As we progress we will be able to give more sophisticated Monte Carlo methods. First we
start with the sampling (hit or miss) method then we go on to the sample mean method.
$<f>$ is the average value of the function $f(x)$ in the range $a\leq x\leq b$. The sample mean
method estimates the average $<f>$ as follows:
We choose $n$ random points $x_i$ from the interval $[a,b]$ which are distributed uniformly.
We compute the values of the function $f(x)$ at these points.
We take their average. In other words
$$F=(b-a)\frac{1}{n}\sum_{i=1}^nf(x_i). \qquad (8.30)$$
This is formally the same as the rectangular approximation. The only difference is that
here the points xi are chosen randomly from the interval [a, b] whereas the points in the
rectangular approximation are chosen with equal spacing. For lower dimensional integrals
the rectangular approximation is more accurate whereas for higher dimensional integrals
the sample mean method becomes more accurate.
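The sample mean method is a few lines of code. A minimal Python sketch (seeded generator; the test integral $\int_0^1x^2dx=1/3$ is illustrative):

```python
import random

# Sketch of the sample mean method F = (b - a) * (1/n) * sum_i f(x_i),
# with the x_i uniformly distributed in [a, b].
def sample_mean(f, a, b, n, seed=0):
    rng = random.Random(seed)
    return (b - a) * sum(f(a + (b - a) * rng.random()) for _ in range(n)) / n

estimate = sample_mean(lambda x: x * x, 0.0, 1.0, 100000)
print(estimate)   # ~ 1/3, with a statistical error of order 1/sqrt(n)
```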
$$F=A\,\frac{1}{n}\sum_{i=1}^nf(x_i,y_i)H(x_i,y_i). \qquad (8.32)$$
The points $x_i$ are random and uniformly distributed in the interval $[x_a,x_b]$ whereas the
points $y_i$ are random and uniformly distributed in the interval $[y_a,y_b]$. $A$ is the area of
the rectangle, i.e. $A=(x_b-x_a)(y_b-y_a)$. The Heaviside function is defined by
V is the volume of the parallelepiped which encloses the three dimensional region R.
$$y_1=<x_i>=\frac{1}{n}\sum_{i=1}^nx_ip(x_i). \qquad (8.35)$$
We repeat this measurement $N$ times thus obtaining $N$ averages $y_1$, $y_2$,...,$y_N$. The mean
$z$ of the averages $y_i$ is
$$z=\frac{1}{N}\sum_{i=1}^Ny_i. \qquad (8.36)$$
The question we want to answer is: what is the probability distribution function of $z$?
Clearly the probability of obtaining a particular value $z$ is the product of the probabilities
of obtaining the individual averages $y_i$ (which are assumed to be independent) with
the constraint that the average of the $y_i$ is $z$.
Let $p(y)$ be the probability distribution function of the average $y$ and let $P(z)$ be the
probability distribution of the average $z$ of the averages. We can then write $P(z)$ as
$$P(z)=\int dy_1...\int dy_N\ p(y_1)...p(y_N)\ \delta\bigg(z-\frac{y_1+...+y_N}{N}\bigg). \qquad (8.37)$$
The delta function expresses the constraint that $z$ is the average of the $y_i$. The delta function
can be written as
$$\delta\bigg(z-\frac{y_1+...+y_N}{N}\bigg)=\frac{1}{2\pi}\int dq\ e^{iq(z-\frac{y_1+...+y_N}{N})}. \qquad (8.38)$$
Let $\mu$ be the actual average of the $y_i$, i.e.
$$\mu=<y_i>=\int dy\ p(y)\,y. \qquad (8.39)$$
We write
$$P(z)=\frac{1}{2\pi}\int dq\ e^{iq(z-\mu)}\int dy_1\ p(y_1)e^{-\frac{iq}{N}(y_1-\mu)}...\int dy_N\ p(y_N)e^{-\frac{iq}{N}(y_N-\mu)}=\frac{1}{2\pi}\int dq\ e^{iq(z-\mu)}\bigg(\int dy\ p(y)e^{-\frac{iq}{N}(y-\mu)}\bigg)^N. \qquad (8.40)$$
But
$$\int dy\ p(y)e^{-\frac{iq}{N}(y-\mu)}=\int dy\ p(y)\bigg(1-\frac{iq}{N}(y-\mu)-\frac{q^2(\mu-y)^2}{2N^2}+...\bigg)=1-\frac{q^2\sigma^2}{2N^2}+.... \qquad (8.41)$$
We have used
$$\int dy\ p(y)(\mu-y)^2=<y^2>-<y>^2=\sigma^2. \qquad (8.42)$$
Hence
$$P(z)=\frac{1}{2\pi}\int dq\ e^{iq(z-\mu)}e^{-\frac{q^2\sigma^2}{2N}}=\frac{1}{2\pi}e^{-\frac{N(z-\mu)^2}{2\sigma^2}}\int dq\ e^{-\frac{\sigma^2}{2N}(q-\frac{iN}{\sigma^2}(z-\mu))^2}=\frac{1}{\sqrt{2\pi}\sigma_N}e^{-\frac{(z-\mu)^2}{2\sigma_N^2}}. \qquad (8.43)$$
$$\sigma_N=\frac{\sigma}{\sqrt{N}}. \qquad (8.44)$$
This is the normal distribution. Clearly the result does not depend on the original probability
distribution function $p(y)$.
The average $z$ of $N$ random numbers $y_i$ corresponding to a probability distribution
function $p(y)$ is distributed according to the normal probability distribution function with
average equal to the average value of $p(y)$ and variance equal to the variance of $p(y)$
divided by $N$.
$$\Delta=F-F_N. \qquad (8.47)$$
However in general we do not know the exact result F . The best we can do is to calculate
the probability that the approximate result FN is within a certain range centered around
the exact result F .
The starting point is the central limit theorem. This states that the average z of N
random numbers y corresponding to a probability distribution function p(y) is distributed
according to the normal probability distribution function. Here the variable y is (we
assume for simplicity that $b-a=1$)
$$y=\frac{1}{N}\sum_{i=1}^Nf(x_i). \qquad (8.48)$$
According to the central limit theorem the mean $z$ is distributed according to the normal
probability distribution function with average equal to the average value $<y>$ of $y$ and
variance equal to the variance of $y$ divided by $M$, viz
$$P(z)=\frac{1}{\sqrt{2\pi\sigma_M^2}}\exp\bigg(-\frac{(z-<y>)^2}{2\sigma_M^2}\bigg). \qquad (8.51)$$
$\sigma_M$ is the standard deviation of the mean, given by the square root of the variance
$$\sigma_M^2=\frac{1}{M-1}\sum_{\alpha=1}^M(y_{\alpha}-<y>)^2. \qquad (8.52)$$
The use of $M-1$ instead of $M$ is known as Bessel's correction. The reason for this
correction is the fact that the computation of the mean $<y>$ reduces the number of
independent data points $y_{\alpha}$ by one. For very large $M$ we can replace $\sigma_M$ with $\tilde{\sigma}_M$ defined
by
$$\sigma_M^2\simeq\tilde{\sigma}_M^2=\frac{1}{M}\sum_{\alpha=1}^M(y_{\alpha}-<y>)^2=<y^2>-<y>^2. \qquad (8.53)$$
The standard deviation of the sample (one single measurement with $N$ data points) is
given by the square root of the variance
$$\sigma^2=\frac{1}{N-1}\sum_{i=1}^N(f(x_i)-<f>)^2. \qquad (8.54)$$
$$<f>=\frac{1}{N}\sum_{i=1}^Nf(x_i)\ ,\ <f^2>=\frac{1}{N}\sum_{i=1}^Nf(x_i)^2. \qquad (8.56)$$
The standard deviation of the mean $\sigma_M$ is given in terms of the standard deviation of the
sample $\sigma$ by the equation
$$\sigma_M=\frac{\sigma}{\sqrt{N}}. \qquad (8.57)$$
The proof goes as follows. We generalize equations (8.54) and (8.56) to the case of $M$
measurements each with $N$ samples. The total number of samples is $MN$. We have
$$\sigma^2=\frac{1}{NM}\sum_{\alpha=1}^M\sum_{i=1}^N(f(x_{i,\alpha})-<f>)^2=<f^2>-<f>^2. \qquad (8.58)$$
$$<f>=\frac{1}{NM}\sum_{\alpha=1}^M\sum_{i=1}^Nf(x_{i,\alpha})\ ,\ <f^2>=\frac{1}{NM}\sum_{\alpha=1}^M\sum_{i=1}^Nf(x_{i,\alpha})^2. \qquad (8.59)$$
The standard deviation of the mean $\sigma_M$ is given by
$$\sigma_M^2=\frac{1}{M}\sum_{\alpha=1}^M(y_{\alpha}-<y>)^2=\frac{1}{M}\sum_{\alpha=1}^M\bigg(\frac{1}{N}\sum_{i=1}^Nf(x_{i,\alpha})-<f>\bigg)^2=\frac{1}{N^2M}\sum_{\alpha=1}^M\sum_{i=1}^N\sum_{j=1}^N\bigg(f(x_{i,\alpha})-<f>\bigg)\bigg(f(x_{j,\alpha})-<f>\bigg). \qquad (8.60)$$
In the above we have used the fact that $<y>=<f>$. For every set $\alpha$ the sum over $i$ and
$j$ splits into two pieces. The first is the sum over the diagonal elements with $i=j$ and
the second is the sum over the off diagonal elements with $i\neq j$. Clearly $f(x_{i,\alpha})-<f>$
and $f(x_{j,\alpha})-<f>$ are on average equally positive and negative, and hence for large
numbers $M$ and $N$ the off diagonal terms will cancel and we end up with
$$\sigma_M^2=\frac{1}{N^2M}\sum_{\alpha=1}^M\sum_{i=1}^N\bigg(f(x_{i,\alpha})-<f>\bigg)^2=\frac{\sigma^2}{N}. \qquad (8.61)$$
The standard deviation of the mean $\sigma_M$ can therefore be interpreted as the probable error
in the original $N$ measurements, since if we make $M$ sets of measurements each with $N$
samples the standard deviation of the mean $\sigma_M$ will estimate how much an average over
$N$ measurements will deviate from the exact mean.
This means in particular that the original measurement $F_N$ of the integral $F$ has a
68 per cent chance of being within one standard deviation $\sigma_M$ of the true mean, a 95
per cent chance of being within $2\sigma_M$ and a 99.7 per cent chance of being within $3\sigma_M$. In
general the proportion of data values within $M$ standard deviations of the true mean is
defined by the error function
defined by the error function
Z <y>+M Z
1 (z < y >)2 2 2
q exp 2 dz = exp x2 dx = erf( ).
<y>M 2
2M 2M 0 2
(8.62)
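The relation $\sigma_M=\sigma/\sqrt{N}$ is easy to check numerically. A minimal Python sketch (seeded generator; uniform samples, for which $\sigma^2=1/12$, are an illustrative choice):

```python
import random
import statistics

# Sketch: the standard deviation of the mean of N samples behaves as
# sigma/sqrt(N); for uniform deviates sigma = sqrt(1/12) ~ 0.2887.
def std_of_means(N, M, seed=0):
    rng = random.Random(seed)
    means = [sum(rng.random() for _ in range(N)) / N for _ in range(M)]
    return statistics.pstdev(means)

print(std_of_means(100, 5000))   # ~ 0.2887 / sqrt(100) ~ 0.0289
```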
The question is how to choose between two discrete events 1 and 2, occurring with probabilities
$p_1$ and $p_2=1-p_1$ respectively, with the correct probabilities using only a uniform probability distribution. The answer
is as follows. Let $r$ be a uniform random number between 0 and 1. We choose the event
1 if $r<p_1$, else we choose the event 2.
Let us now consider three discrete events 1, 2 and 3 with probabilities p1 , p2 and p3
respectively such that p1 + p2 + p3 = 1. Again we choose a random number r between 0
and 1. If r < p1 then we choose event 1, if p1 < r < p1 + p2 we choose event 2 else we
choose event 3.
We consider now $n$ discrete events with probabilities $p_i$ such that $\sum_{i=1}^np_i=1$. Again
we choose a random number $r$ between 0 and 1. We choose the event $i$ if the random
number $r$ satisfies the inequality
$$\sum_{j=1}^{i-1}p_j\leq r<\sum_{j=1}^ip_j. \qquad (8.63)$$
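The discrete rule (8.63) amounts to locating $r$ inside the cumulative sums of the $p_i$. A minimal Python sketch (the probabilities used are an illustrative assumption):

```python
import random

# Sketch: choose among n discrete events with probabilities p_i by locating
# a uniform deviate r inside the cumulative sums of the p_i.
def choose_event(p, r):
    cumulative = 0.0
    for i, pi in enumerate(p):
        cumulative += pi
        if r < cumulative:
            return i
    return len(p) - 1      # guard against roundoff when r is very close to 1

rng = random.Random(1)
p = [0.2, 0.5, 0.3]
counts = [0, 0, 0]
for _ in range(100000):
    counts[choose_event(p, rng.random())] += 1
print([c / 100000 for c in counts])   # frequencies close to [0.2, 0.5, 0.3]
```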
In the continuum limit we replace the probability $p_i$ with $p(x)dx$, which is the probability
that the event $x$ is found between $x$ and $x+dx$. The condition $\sum_{i=1}^np_i=1$ becomes
$$\int_{-\infty}^{+\infty}p(x)dx=1. \qquad (8.64)$$
The inequality (8.63) becomes
$$\int_{-\infty}^xp(x^{'})dx^{'}=r.$$
Thus $r$ is equal to the cumulative probability distribution $P(x)$, i.e. the probability of
choosing a value less than or equal to $x$. This equation leads to the inverse transform
method which allows us to generate a nonuniform probability distribution $p(x)$ from a
uniform probability distribution $r$. Clearly we must be able to 1) perform the integral
analytically to find $P(x)$, then 2) invert the relation $P(x)=r$ for $x$.
As a first example we consider the exponential distribution
$$p(x)=\frac{1}{\lambda}e^{-\frac{x}{\lambda}}\ ,\ 0\leq x\leq\infty. \qquad (8.66)$$
We find the cumulative distribution
$$P(x)=1-e^{-\frac{x}{\lambda}}=r. \qquad (8.67)$$
Hence
$$x=-\lambda\ln(1-r).$$
Thus given the uniform random numbers $r$ we can compute directly, using the above
formula, the random numbers $x$ which are distributed according to the exponential distribution
$p(x)=\frac{1}{\lambda}e^{-x/\lambda}$.
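The inverse transform for this distribution is one line of code. A minimal Python sketch (seeded generator; the value $\lambda=2$ is an illustrative assumption):

```python
import math
import random

# Sketch of the inverse transform x = -lam * ln(1 - r) for the
# exponential distribution p(x) = (1/lam) exp(-x/lam).
def exponential_deviate(lam, rng):
    return -lam * math.log(1.0 - rng.random())

rng = random.Random(2)
lam = 2.0
samples = [exponential_deviate(lam, rng) for _ in range(100000)]
print(sum(samples) / len(samples))   # sample mean close to lam = 2.0
```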
The next example is the Gaussian distribution in two dimensions
$$p(x,y)=\frac{1}{2\pi\sigma^2}e^{-\frac{x^2+y^2}{2\sigma^2}}. \qquad (8.69)$$
$$r^2=-2\sigma^2\ln v\ ,\ \phi=2\pi w. \qquad (8.72)$$
The random numbers v and w are clearly uniformly distributed between 0 and 1. The
random numbers x (or y) are distributed according to the Gaussian distribution in one
dimension. This method is known as the Box-Muller method.
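The Box-Muller step itself is short. A minimal Python sketch (seeded generator; using $1-v$ in the logarithm is a small safeguard, since the generator may return exactly 0):

```python
import math
import random

# Sketch of the Box-Muller method: uniform (v, w) -> Gaussian (x, y)
# with standard deviation sigma.
def box_muller(sigma, rng):
    r = math.sqrt(-2.0 * sigma * sigma * math.log(1.0 - rng.random()))
    phi = 2.0 * math.pi * rng.random()
    return r * math.cos(phi), r * math.sin(phi)

rng = random.Random(3)
xs = [box_muller(1.0, rng)[0] for _ in range(100000)]
mean = sum(xs) / len(xs)
var = sum(x * x for x in xs) / len(xs) - mean * mean
print(mean, var)   # close to 0 and 1 respectively
```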
(1) Write a program that computes the three dimensional integral using the midpoint
approximation. We take the stepsize $h=2R/N$, the radius $R=1$ and the number
of steps in each direction to be $N=N_x=N_y=2^p$ where $p=1,...,15$.
(2) Show that the error goes as $1/N$. Plot the logarithm of the absolute value of the
absolute error versus the logarithm of $N$.
(3) Try out the two dimensional integral. Work in the positive quadrant and again take
the stepsize $h=R/N$ where $R=1$ and $N=2^p$, $p=1,...,15$. We know that generically
the theoretical error goes at least as $1/N^2$. What do you actually find? Why do you
find a discrepancy?
Hint: the second derivative of the integrand is singular at $x=R$, which changes the
dependence from $1/N^2$ to $1/N^{1.5}$.
Part II In order to compute numerically the volume of the ball in any dimension d we
use the recursion formula
$$V_d=\frac{V_{d-1}}{R^{d-1}}\int_{-R}^{+R}dx_d\ (R^2-x_d^2)^{\frac{d-1}{2}}.$$
(1) Find the volumes in d = 4, 5, 6, 7, 8, 9, 10, 11 dimensions. Compare with the exact
result given above.
Part III
(1) Use the Monte Carlo sampling (hit or miss) method to find the integrals in d = 2, 3, 4
and d = 10 dimensions. Is the Monte Carlo method easier to apply than the midpoint
approximation?
(2) Use the Monte Carlo sample mean value method to find the integrals in d = 2, 3, 4
and $d=10$ dimensions. For every $d$ we perform $M$ measurements each with $N$
samples. We consider $M=1,10,100,150$ and $N=2^p$, $p=10,...,19$. Verify that the
exact error in this case goes like $1/\sqrt{N}$.
Hint: Compare the exact error, which is known in this case, with the standard deviation
of the mean $\sigma_M$ and with $\sigma/\sqrt{N}$ where $\sigma$ is the standard deviation of the
sample, i.e. of a single measurement. These three quantities must be identical.
Part IV
(1) The value of $\pi$ can be given by the integral
$$\pi=\int_{x^2+y^2\leq R^2}dx\ dy\ ,\ R=1.$$
Use the Monte Carlo sampling (hit or miss) method to give an approximate value of
$\pi$.
(2) The above integral can also be put in the form
$$\pi=2\int_{-1}^{+1}dx\sqrt{1-x^2}.$$
Use the Monte Carlo sample mean value method to give another approximate value
of $\pi$.
$$x=r\cos\phi\ ,\ r^2=-2\sigma^2\ln v\ ,\ \phi=2\pi w.$$
The $v$ and $w$ are uniform random numbers in the interval $[0,1]$.
(2) Draw a histogram of the random numbers obtained in the previous question. The
steps are as follows:
a- Determine the range of the points x.
b- We divide the interval into $u$ bins. The length of each bin is $h={\rm interval}/u$. We
take for example $u=100$.
c- We determine the location of every point x among the bins. We increase the
counter of the corresponding bin by a unit.
d- We plot the fraction of points as a function of $x$. The fraction of points is equal
to the number of random numbers in a given bin divided by $hN$, where $N$ is the
total number of random numbers. We take $N=10000$.
(3) Draw the data on a logarithmic scale, i.e plot log(fraction) versus x2 . Find the fit
and compare with theory.
Part II
(1) Apply the acceptance-rejection method to the above problem.
(2) Apply the Fernandez-Criado algorithm to the above problem. The procedure is as
follows
a- Start with $N$ points $x_i$ such that $x_i=\sigma$.
b- Choose at random a pair $(x_i,x_j)$ from the sequence and make the following
change
$$x_i\longrightarrow\frac{x_i+x_j}{\sqrt{2}}\ ,\ x_j\longrightarrow\frac{-x_i+x_j}{\sqrt{2}}.$$
c- Repeat step b- until we reach equilibrium. For example try it $M$ times where
$M=10,100,...$.
Chapter 9
The sum is over all the microstates of the system with a fixed $N$ and $V$. The Helmholtz
free energy $F$ of a system is given by
$$F=-k_BT\ln Z. \qquad (9.3)$$
In equilibrium the free energy is minimum. All other thermodynamical quantities can be
given by various derivatives of $F$. For example the internal energy $U$ of the system is given
by $U=-\partial\ln Z/\partial\beta$. For classical systems the partition function reads
$$Z=\int\prod_{i=1}^N\frac{d^3p_id^3q_i}{h^3}e^{-\beta H(\vec{p}_i,\vec{q}_i)}. \qquad (9.6)$$
For quantum dynamical field systems (in Euclidean spacetimes) which are of fundamental
importance to elementary particles and their interactions the partition function is given by
the so-called path integral which is essentially of the same form as the previous equation
with the replacement of the Hamiltonian $H(\vec{p}_i,\vec{q}_i)$ by the action $S[\phi]$, where $\phi$ stands for
the field variables, and the replacement of the measure $\prod_{i=1}^N(d^3p_id^3q_i)/h^3$ by the relevant
(infinite dimensional) measure $D\phi$ on the space of field configurations. We obtain therefore
$$Z=\int D\phi\ e^{-S[\phi]}. \qquad (9.7)$$
Similarly to what happens in statistical mechanics where all observables can be derived
from the partition function the observables of a quantum field theory can all be derived
from the path integral. The fundamental problem therefore is how to calculate the par-
tition function or the path integral for a given physical system. Normally an analytic
solution will be ideal. However finding such a solution is seldom possible and as a conse-
quence only the numerical approach remains available to us. The partition function and
the path integral are essentially given by multidimensional integrals and thus one should
seek numerical approaches to the problem of integration.
the integrand is largest. Importance sampling uses also in a crucial way nonuniform
probability distributions.
Let us again consider the one dimensional integral
Z b
F = dx f (x). (9.8)
a
The probability distribution p(x) is chosen such that the function f (x)/p(x) is slowly
varying which reduces the corresponding standard deviation.
$$E_I\{s_i\}=-\sum_{<ij>}\epsilon_{ij}s_is_j-H\sum_{i=1}^Ns_i. \qquad (9.12)$$
The parameter $H$ is the external magnetic field. The symbol $<ij>$ stands for nearest
neighbor spins. The sum over $<ij>$ extends over $N\gamma/2$ terms where $\gamma$ is the number of
nearest neighbors. In 2, 3, 4 dimensions $\gamma=4,6,8$. The parameter $\epsilon_{ij}$ is the interaction
energy between the spins $i$ and $j$. For isotropic interactions $\epsilon_{ij}=\epsilon$. For $\epsilon>0$ we obtain
ferromagnetism while for $\epsilon<0$ we obtain antiferromagnetism. We consider only $\epsilon>0$.
The energy becomes with these simplifications given by
$$E_I\{s_i\}=-\epsilon\sum_{<ij>}s_is_j-H\sum_{i=1}^Ns_i. \qquad (9.13)$$
Generally given any physical quantity $A$ its expectation value $<A>$ can be computed
using a similar expression, viz
$$<A>=\frac{\sum_sA_se^{-\beta E_s}}{\sum_se^{-\beta E_s}}. \qquad (9.16)$$
The number $A_s$ is the value of $A$ in the microstate $s$. In general the number of microstates
${\cal N}$ is very large. In any Monte Carlo simulation we can only generate a very small number
$n$ of the total number ${\cal N}$ of the microstates. In other words $<E>$ and $<A>$ will be
approximated with
$$<E>\ \simeq\ <E>_n=\frac{\sum_{s=1}^nE_se^{-\beta E_s}}{\sum_{s=1}^ne^{-\beta E_s}}. \qquad (9.17)$$
$$<A>\ \simeq\ <A>_n=\frac{\sum_{s=1}^nA_se^{-\beta E_s}}{\sum_{s=1}^ne^{-\beta E_s}}. \qquad (9.18)$$
The calculation of $<E>_n$ and $<A>_n$ proceeds therefore by 1) choosing at random
a microstate $s$, 2) computing $E_s$, $A_s$ and $e^{-\beta E_s}$, then 3) evaluating the contribution of
this microstate to the expectation values $<E>_n$ and $<A>_n$. This general Monte
Carlo procedure is however highly inefficient since the microstate $s$ is very improbable
and therefore its contribution to the expectation values is negligible. We need to use
importance sampling. To this end we introduce a probability distribution $p_s$ and rewrite
the expectation value $<A>$ as
$$<A>=\frac{\sum_sp_s\frac{A_s}{p_s}e^{-\beta E_s}}{\sum_sp_s\frac{1}{p_s}e^{-\beta E_s}}. \qquad (9.19)$$
Now we generate the microstates $s$ with probabilities $p_s$ and approximate $<A>$ with
$<A>_n$ given by
$$<A>_n=\frac{\sum_{s=1}^n\frac{A_s}{p_s}e^{-\beta E_s}}{\sum_{s=1}^n\frac{1}{p_s}e^{-\beta E_s}}. \qquad (9.20)$$
The Metropolis algorithm in the case of spin systems such as the Ising model can be
summarized as follows:
(1) Choose an initial microstate.
(2) Choose a spin at random and flip it.
(3) Compute $\Delta E=E_{\rm trial}-E_{\rm old}$. This is the change in the energy of the system due to
the trial flip.
(4) Check if $\Delta E\leq 0$. In this case the trial microstate is accepted.
(5) Check if $\Delta E>0$. In this case compute the ratio of probabilities $w=e^{-\beta\Delta E}$.
(6) Choose a uniform random number $r$ in the interval $[0,1]$.
(7) Verify if $r\leq w$. In this case the trial microstate is accepted, otherwise it is rejected.
(8) Repeat steps 2) through 7) until all spins of the system are tested. This sweep counts
as one unit of Monte Carlo time.
(9) Repeat steps 2) through 8) a sufficient number of times until thermalization, i.e.
equilibrium, is reached.
(10) Compute the physical quantities of interest in n thermalized microstates. This can
be done periodically in order to reduce correlation between the data points.
(11) Compute averages.
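The steps above can be sketched compactly. A minimal Python sketch of one Metropolis sweep for the 2d Ising model (the course codes are in Fortran; here $J=1$, $H=0$, periodic boundaries, and the spins are visited sequentially rather than at random, a standard variant; all names are illustrative):

```python
import math
import random

# Minimal sketch of one Metropolis sweep for the 2d Ising model
# (J = 1, H = 0, periodic boundary conditions).
def sweep(spin, L, beta, rng):
    for i in range(L):
        for j in range(L):
            # sum of the four nearest neighbors, with periodic wrap-around
            nn = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2.0 * spin[i][j] * nn
            if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                spin[i][j] = -spin[i][j]

L, beta = 8, 1.0                     # T = 1, well below T_c ~ 2.27
spin = [[1] * L for _ in range(L)]   # start from the ordered microstate
rng = random.Random(4)
for _ in range(200):
    sweep(spin, L, beta, rng)
m = abs(sum(sum(row) for row in spin)) / L ** 2
print(m)   # close to 1: the system stays in the ferromagnetic phase
```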
The proof that this algorithm leads indeed to a sequence of states which are distributed
according to the Boltzmann distribution goes as follows.
It is clear that steps 2) through 7) correspond to a transition probability between
the microstates $\{s_i\}$ and $\{s_j\}$ given by
$$W(i\longrightarrow j)={\rm min}(1,e^{-\beta\Delta E})\ ,\ \Delta E=E_j-E_i. \qquad (9.23)$$
This transition probability satisfies the detailed balance condition
$$W(i\longrightarrow j)\ e^{-\beta E_i}=W(j\longrightarrow i)\ e^{-\beta E_j}.$$
Any other probability function $W$ which satisfies this condition will generate a sequence of
states which are distributed according to the Boltzmann distribution. This can be shown
by summing over the index $j$ in the above equation and using $\sum_jW(i\longrightarrow j)=1$. We
get
$$e^{-\beta E_i}=\sum_jW(j\longrightarrow i)\ e^{-\beta E_j}. \qquad (9.25)$$
The system is assumed to be in equilibrium with a heat bath with temperature $T$. Thermal
equilibrium of the Ising model is described by the canonical ensemble. The probability of
finding the Ising model in a configuration $\{s_1,...,s_{N^2}\}$ is given by the Boltzmann distribution
$$P\{s\}=\frac{e^{-\beta E\{s\}}}{Z}. \qquad (9.28)$$
The magnetization $M$ in a configuration $\{s_1,...,s_{N^2}\}$ is the order parameter of the system.
It is defined by
$$M=\sum_is_i. \qquad (9.30)$$
Its thermal average is $<M>=\sum_i<s_i>$. In the above $<s_i>=<s>$ since all spins are equivalent. We have
$$<M>=\frac{1}{\beta}\frac{\partial\log Z}{\partial H}=-\frac{\partial F}{\partial H}. \qquad (9.32)$$
In order to compute $<M>$ we need to compute $Z$. In this section we use the mean field
approximation. First we rewrite the energy $E\{s\}$ in the form
$$E\{s\}=-\sum_{<ij>}(Js_j)s_i-H\sum_is_i=-\sum_iH_{\rm eff}^is_i-H\sum_is_i. \qquad (9.33)$$
The effective magnetic field $H_{\rm eff}^i$ is given by
$$H_{\rm eff}^i=J\sum_{j(i)}s_{j(i)}. \qquad (9.34)$$
The index $j(i)$ runs over the four nearest neighbors of the spin $i$. In the mean field
approximation we replace the spins $s_{j(i)}$ by their thermal average $<s>$. We obtain
$$H_{\rm eff}^i=J\gamma<s>\ ,\ \gamma=4. \qquad (9.35)$$
In other words
$$E\{s\}=-(H+J\gamma<s>)\sum_is_i=-H_{\rm eff}\sum_is_i. \qquad (9.36)$$
With this simplification the partition function factorizes and one obtains the self-consistency condition $<s>=\tanh\beta(H+\gamma J<s>)$. Thus for zero magnetic field we get the constraint
$$<s>=\tanh(\beta\gamma J<s>).$$
Clearly $<s>=0$ is always a solution. This is the high temperature paramagnetic phase.
For small temperature we have also a solution $<s>\neq 0$. This is the ferromagnetic phase.
There must exist a critical temperature $T_c$ which separates the two phases. We expect
$<s>$ to approach $<s>=0$ as $T$ goes to $T_c$ from below. In other words near $T_c$ we can
treat $<s>$ as small and as a consequence we can use the expansion $\tanh x=x-\frac{1}{3}x^3$.
We obtain
$$<s>=\beta\gamma J<s>-\frac{1}{3}(\beta\gamma J<s>)^3. \qquad (9.42)$$
Equivalently
$$<s>\bigg(<s>^2-\frac{3T^2}{T_c^3}(T_c-T)\bigg)=0\ ,\ T_c=\frac{\gamma J}{k_B}. \qquad (9.43)$$
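The self-consistency condition can also be solved numerically by fixed-point iteration. A minimal Python sketch (illustrative units in which the constraint reads $m=\tanh(T_cm/T)$; the function name is an assumption):

```python
import math

# Sketch: solve the mean field constraint <s> = tanh(beta*gamma*J*<s>),
# i.e. m = tanh(T_c m / T), by fixed-point iteration.
def mean_field_m(T, Tc=2.0, iterations=500):
    m = 1.0
    for _ in range(iterations):
        m = math.tanh(Tc * m / T)
    return m

print(mean_field_m(1.0))   # T < T_c: nonzero magnetization, ~ 0.9575
print(mean_field_m(3.0))   # T > T_c: the iteration collapses to m = 0
```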
$$F=-k_BTN\ln\bigg(2\cosh\beta\gamma J<s>\bigg). \qquad (9.46)$$
We see that for T < Tc the ferromagnetic solution has a lower free energy than the
paramagnetic solution < s >= 0. The phase T < Tc is indeed ferromagnetic. The
transition at T = Tc is second order. The free energy is continuous at T = Tc , i.e. there is
no latent heat, while the specific heat is logarithmically divergent. The mean field theory
yields the correct value $\alpha=0$ for the critical exponent although it does not reproduce the
logarithmic divergence. The susceptibility diverges at $T=T_c$ with critical exponent $\gamma=1$.
These latter statements can be seen as follows.
The specific heat is given by
$$C_v=-k_B\frac{\partial}{\partial T}\bigg(T^2\frac{\partial}{\partial T}(\beta F)\bigg)=-2k_BT\frac{\partial}{\partial T}(\beta F)-k_BT^2\frac{\partial^2}{\partial T^2}(\beta F). \qquad (9.47)$$
Next we use the expression $\beta F=-N\ln(e^x+e^{-x})$ where $x=\beta\gamma J<s>$. We find
$$\frac{C_v}{N}=2k_BT\tanh x\frac{\partial x}{\partial T}+k_BT^2\tanh x\frac{\partial^2x}{\partial T^2}+k_BT^2\frac{1}{\cosh^2x}\bigg(\frac{\partial x}{\partial T}\bigg)^2. \qquad (9.48)$$
We compute
$$x=\sqrt{\frac{3k_B}{\gamma J}}(T_c-T)^{\frac{1}{2}}\;,\;\;\frac{\partial x}{\partial T}=-\frac{1}{2}\sqrt{\frac{3k_B}{\gamma J}}(T_c-T)^{-\frac{1}{2}}\;,\;\;\frac{\partial^2x}{\partial T^2}=-\frac{1}{4}\sqrt{\frac{3k_B}{\gamma J}}(T_c-T)^{-\frac{3}{2}}. \qquad (9.49)$$
It is not difficult to show that the divergent terms cancel and as a consequence
$$\frac{C_v}{N}\sim(T_c-T)^{-\alpha}\;,\;\;\alpha=0. \qquad (9.50)$$
The susceptibility is given by
$$\chi=\frac{\partial}{\partial H}<M>. \qquad (9.51)$$
To compute the behavior of $\chi$ near $T=T_c$ we consider the equation
$$<s>=\tanh\beta(\gamma J<s>+H). \qquad (9.52)$$
For a small magnetic field we can still assume that $\beta(\gamma J<s>+H)$ is small near $T=T_c$, and as a consequence we can expand the above equation as
$$<s>=\beta(\gamma J<s>+H)-\frac{1}{3}\beta^3(\gamma J<s>+H)^3. \qquad (9.53)$$
Taking the derivative with respect to $H$ of both sides of this equation we obtain
$$\chi^{'}=\beta(\gamma J\chi^{'}+1)-\beta^3(\gamma J\chi^{'}+1)(\gamma J<s>+H)^2. \qquad (9.54)$$
$$\chi^{'}=\frac{\partial}{\partial H}<s>. \qquad (9.55)$$
Setting the magnetic field to zero we get
$$\chi^{'}=\beta(\gamma J\chi^{'}+1)-\beta^3(\gamma J\chi^{'}+1)(\gamma J<s>)^2. \qquad (9.56)$$
In other words
$$\chi^{'}\Big(1-\beta\gamma J+\beta^3\gamma J(\gamma J<s>)^2\Big)=\beta-\beta^3(\gamma J<s>)^2. \qquad (9.57)$$
By using $<s>^2=3(\beta\gamma J-1)/(\beta\gamma J)^3$ this becomes near $T_c$
$$\frac{2(T_c-T)}{T}\chi^{'}=\frac{1}{k_BT}\Big(1-\beta^2(\gamma J<s>)^2\Big). \qquad (9.58)$$
Hence
$$\chi^{'}=\frac{1}{2k_B}(T_c-T)^{-\gamma}\;,\;\;\gamma=1. \qquad (9.59)$$
We impose periodic boundary conditions in order to reduce edge and boundary effects. This can be done as follows. We consider an $(n+1)\times(n+1)$ matrix where the $(n+1)$th row is identified with the first row and the $(n+1)$th column is identified with the first column. The square lattice is therefore a torus. The toroidal boundary conditions read explicitly
$$\phi(n+1,j)=\phi(1,j)\;,\;\;\phi(i,n+1)=\phi(i,1).$$
The variation of the energy due to the flipping of the spin $(i,j)$ is an essential ingredient in the Metropolis algorithm. This variation is explicitly given by
$$\Delta E=2J\phi(i,j)\Big(\phi(i+1,j)+\phi(i-1,j)+\phi(i,j+1)+\phi(i,j-1)\Big)+2H\phi(i,j). \qquad (9.61)$$
A subroutine which generates pseudo random numbers. We prefer to work with well established subroutines such as RAN2 or RANLUX.
A subroutine which implements the Metropolis algorithm for the Ising model. Its main part will read (with some change of notation such as J = exch)
do i=1,L
   ip(i)=i+1
   im(i)=i-1
enddo
ip(L)=1
im(1)=L
c     one Metropolis sweep over the lattice
do i=1,L
   do j=1,L
      deltaE=2.0d0*exch*phi(i,j)*(phi(ip(i),j)+phi(im(i),j)+phi(i,ip(j))+phi(i,im(j)))
      deltaE=deltaE + 2.0d0*H*phi(i,j)
      if (deltaE.ge.0.0d0)then
         probability=dexp(-beta*deltaE)
         call ranlux(rvec,len)
         r=rvec(1)
         if (r.le.probability)then
            phi(i,j)=-phi(i,j)
         endif
      else
         phi(i,j)=-phi(i,j)
      endif
   enddo
enddo
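For comparison, the same sweep can be written compactly in Python. This is an illustrative sketch, not the book's code; the function name and the cold-start usage below are ours:

```python
import math
import random

def metropolis_sweep(phi, beta, J=1.0, H=0.0):
    """One Metropolis sweep over an L x L Ising lattice (phi[i][j] = +/-1)
    with periodic boundary conditions; mirrors the Fortran loop above:
    flip when deltaE <= 0, otherwise with probability exp(-beta*deltaE)."""
    L = len(phi)
    for i in range(L):
        for j in range(L):
            nn = (phi[(i + 1) % L][j] + phi[(i - 1) % L][j]
                  + phi[i][(j + 1) % L] + phi[i][(j - 1) % L])
            deltaE = 2.0 * J * phi[i][j] * nn + 2.0 * H * phi[i][j]
            if deltaE <= 0.0 or random.random() < math.exp(-beta * deltaE):
                phi[i][j] = -phi[i][j]
    return phi
```

At very low temperature a cold (all spins up) start stays magnetized, since every flip costs $\Delta E=8J$ and is accepted with probability $e^{-8\beta J}$.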
We compute the energy < E > and the magnetization < M > of the Ising model in
a separate subroutine.
We compute the errors using for example the Jackknife method in a separate subroutine.
We fix the parameters of the model such as $L$, $J$, $\beta=1/T$ and $H$.
We choose an initial configuration. We consider both cold and hot starts which are given respectively by $\phi(i,j)=+1$ for all $(i,j)$, and $\phi(i,j)=\pm1$ at random with equal probabilities.
We run the Metropolis algorithm for a given thermalization time and study the
history of the energy and the magnetization for different values of the temperature.
We add a Monte Carlo evolution with a reasonably large number of steps and compute the averages of $E$ and $M$.
We compute the specific heat and the susceptibility of the system.
Specific Heat: The critical exponent associated with the specific heat is given by $\alpha=0$. However the specific heat diverges logarithmically at $T=T_c$. This translates into the fact that the peak grows with $n$ logarithmically, namely
$$\frac{C_v}{n^2}\sim\log n. \qquad (9.64)$$
Magnetization: The magnetization near but below the critical temperature in the two-dimensional Ising model scales as
$$\frac{<M>}{n^2}\sim(T_c-T)^{\beta}\;,\;\;\beta=1/8. \qquad (9.65)$$
Critical Temperature: From the behavior of the above observables we can measure the critical temperature, which marks the point where the second order ferromagnetic phase transition occurs, to be given approximately by
$$k_BT_c=\frac{2J}{\ln(\sqrt{2}+1)}. \qquad (9.67)$$
Note that near-neighbor lattice sites which are a distance x away in a given direction
from a given index i are given by
do x=1,nn
if (i+x .le. n) then
ipn(i,x)=i+x
else
ipn(i,x)=(i+x)-n
endif
if ((i-x).ge.1)then
imn(i,x)=i-x
else
imn(i,x)=i-x+n
endif
enddo
For simplicity we consider only odd lattices, viz $n=2nn+1$. Clearly, because of the toroidal boundary conditions, the possible values of the distance $x$ are $x=1,2,...,nn$.
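The same toroidal neighbor tables can be built and cross-checked in a few lines of Python (an illustrative sketch under the Fortran conventions above; the function name is ours):

```python
def neighbor_tables(n):
    """Toroidal neighbor tables for an odd ring of n = 2*nn + 1 sites:
    ipn[i][x] (imn[i][x]) is the site a distance x = 1..nn to the right
    (left) of site i, with 1-based indices as in the Fortran above."""
    nn = (n - 1) // 2
    ipn = {i: {x: i + x if i + x <= n else i + x - n for x in range(1, nn + 1)}
           for i in range(1, n + 1)}
    imn = {i: {x: i - x if i - x >= 1 else i - x + n for x in range(1, nn + 1)}
           for i in range(1, n + 1)}
    return ipn, imn
```

For example with $n=5$ (so $nn=2$) the right neighbor of site 5 at distance 1 wraps around to site 1.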
First Order Transition and Hysteresis: We can also consider the effect of a magnetic field $H$ on the physics of the Ising model. We observe a first order phase transition at $H=0$, or $H$ near $0$, and a phenomenon of hysteresis. We observe the following:
For $T<T_c$ we can observe a first order phase transition. Indeed we observe a discontinuity in the energy and the magnetization which happens at a non-zero value of $H$ due to hysteresis. The jumps in the energy and the magnetization are a typical signal for a first order phase transition.
For $T>T_c$ the magnetization becomes a smooth function of $H$ near $H=0$ which means that above $T_c$ there is no distinction between the ferromagnetic states with $M>0$ and $M<0$.
We recompute the magnetization as a function of $H$ for a range of $H$ back and forth. We observe the following:
A hysteresis loop.
The hysteresis window shrinks with increasing temperature or accumulating
more Monte Carlo time.
The hysteresis effect is independent of the size of the lattice.
The phenomenon of hysteresis indicates that the behaviour of the system depends on its initial state and history. Equivalently we say that the system is trapped in a metastable state.
(2) Write a subroutine that implements the Metropolis algorithm for this system. You
will need for this the variation of the energy due to flipping the spin (i, j).
(3) We choose $L=10$, $H=0$, $J=1$, $\beta=1/T$. We consider both a cold start and a hot start.
Run the Metropolis algorithm for a thermalization time $T_{\rm TH}=2^6$ and study the history of the energy and the magnetization for different values of the temperature. The energy and magnetization should approach the values $E=0$ and $M=0$ when $T\longrightarrow\infty$ and the values $E=-2JN$ and $M=\pm1$ when $T\longrightarrow 0$.
(4) Add a Monte Carlo evolution with $T_{\rm TM}=2^{10}$ and compute the averages of $E$ and $M$.
(5) Compute the specific heat and the susceptibility of the system. These are defined by
$$C_v=\frac{\partial}{\partial T}<E>=\beta^2\big(<E^2>-<E>^2\big)\;,\;\;\chi=\frac{\partial}{\partial H}<M>=\beta\big(<M^2>-<M>^2\big).$$
(6) Determine the critical point. Compare with the theoretical exact result
$$k_BT_c=\frac{2J}{\ln(\sqrt{2}+1)}.$$
Part II Add to the code a separate subroutine which implements the Jackknife method
for any set of data points. Compute the errors in the energy, magnetization, specific heat
and susceptibility of the Ising model using the Jackknife method.
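The jackknife procedure itself is short; a sketch in Python (illustrative only; the book's implementation is a Fortran subroutine, and the names here are ours):

```python
def jackknife(data, estimator):
    """Jackknife error for an arbitrary estimator: recompute it on the
    T leave-one-out samples and measure the spread of the results."""
    T = len(data)
    full = estimator(data)
    partial = [estimator(data[:k] + data[k + 1:]) for k in range(T)]
    mean_p = sum(partial) / T
    var = (T - 1) / T * sum((p - mean_p) ** 2 for p in partial)
    return full, var ** 0.5
```

For the simple mean, the jackknife error reproduces the usual standard error $s/\sqrt{T}$; its real use is for secondary quantities such as the specific heat and the susceptibility, where naive error propagation fails.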
Part II The magnetization near but below the critical temperature in the 2D Ising model scales as
$$\frac{<M>}{L^2}\sim(T_c-T)^{\beta}\;,\;\;\beta=\frac{1}{8}.$$
We propose to study the magnetization near $T_c$ in order to determine the value of $\beta$ numerically. Towards this end we plot $|<M>|$ versus $T_c-T$ where $T$ is taken in the range
$$T=T_c-10^{-4}\,{\rm step}\;,\;\;{\rm step}=0,...,5000.$$
We take large lattices, say $L=30-50$, with $T_{\rm TH}=T_{\rm MC}=2^{10}$.
Part III The susceptibility near the critical temperature in the 2D Ising model scales as
$$\frac{\chi}{L^2}\sim|T-T_c|^{-\gamma}\;,\;\;\gamma=\frac{7}{4}.$$
Determine $\gamma$ numerically. Use $T_{\rm TH}=2^{10}$, $T_{\rm MC}=2^{13}$, $L=50$ with the two ranges
$$T=T_c\pm5\times10^{-4}\,{\rm step}\;,\;\;{\rm step}=0,...,100.$$
In the above question we take $L=20$. We also consider the parameters $T_{\rm TH}=2^{10}$, $T_{\rm MC}=2^{15}$ and the temperatures
a- For T < Tc say T = 0.5 and 1.5 determine the first order transition point from
the discontinuity in the energy and the magnetization. The transition should
happen at a non-zero value of H due to hysteresis. The jump in the energy
is associated with a non-zero latent heat. The jumps in the energy and the
magnetization are the typical signal for a first order phase transition.
b- For $T>T_c$, say $T=3$ and $5$, the magnetization becomes a smooth function of $H$ near $H=0$ which means that above $T_c$ there is no distinction between the ferromagnetic states with $M>0$ and $M<0$.
(2) We recompute the magnetization as a function of $H$ for a range of $H$ from $-5$ to $5$ and back. You should observe a hysteresis loop.
a- Verify that the hysteresis window shrinks with increasing temperature or accumulating more Monte Carlo time.
b- Verify what happens if we increase the size of the lattice.
The phenomenon of hysteresis indicates that the behaviour of the system depends on its initial state and history, or equivalently that the system is trapped in metastable states.
Part II
$$S=-\frac{1}{2g^2}\int d^4x\,{\rm Tr}F_{\mu\nu}F^{\mu\nu}+\frac{\theta}{16\pi^2}\int d^4x\,{\rm Tr}F_{\mu\nu}\tilde F^{\mu\nu}. \qquad (1.1)$$
We recall the definitions
$$F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}-i[A_{\mu},A_{\nu}]. \qquad (1.3)$$
$$\tilde F^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta}. \qquad (1.4)$$
The path integral of interest is
$$Z=\int{\cal D}A_{\mu}\,\exp(iS). \qquad (1.5)$$
After Euclidean rotation this becomes
$$Z_E=\int{\cal D}A_{\mu}\,\exp(-S_E). \qquad (1.6)$$
$$S_E=\frac{1}{2g^2}\int(d^4x)_E\,{\rm Tr}(F_{\mu\nu}^2)_E-\frac{i\theta}{16\pi^2}\int(d^4x)_E\,{\rm Tr}(F_{\mu\nu}\tilde F_{\mu\nu})_E. \qquad (1.7)$$
We remark that the theta term is imaginary. In the following we will drop the subscript $E$ for simplicity. Let us consider first the $\theta=0$ (trivial) sector. The pure Yang-Mills action is defined by
$$S_{\rm YM}=\frac{1}{2g^2}\int d^4x\,{\rm Tr}F_{\mu\nu}^2. \qquad (1.8)$$
The path integral is of the form
$$\int{\cal D}A_{\mu}\,\exp\Big(-\frac{1}{2g^2}\int d^4x\,{\rm Tr}F_{\mu\nu}^2\Big). \qquad (1.9)$$
First we find the equations of motion. We have
$$\delta S_{\rm YM}=\frac{1}{g^2}\int d^4x\,{\rm Tr}F_{\mu\nu}\delta F_{\mu\nu}=\frac{2}{g^2}\int d^4x\,{\rm Tr}F_{\mu\nu}D_{\mu}\delta A_{\nu}$$
$$=-\frac{2}{g^2}\int d^4x\,{\rm Tr}D_{\mu}F_{\mu\nu}.\delta A_{\nu}+\frac{2}{g^2}\int d^4x\,{\rm Tr}D_{\mu}(F_{\mu\nu}\delta A_{\nu})$$
$$=-\frac{2}{g^2}\int d^4x\,{\rm Tr}D_{\mu}F_{\mu\nu}.\delta A_{\nu}+\frac{2}{g^2}\int d^4x\,\partial_{\mu}{\rm Tr}(F_{\mu\nu}\delta A_{\nu}). \qquad (1.10)$$
The equations of motion for variations of the gauge field which vanish at infinity are therefore given by
$$D_{\mu}F_{\mu\nu}=0. \qquad (1.11)$$
Equivalently
$$\partial_{\mu}F_{\mu\nu}-i[A_{\mu},F_{\mu\nu}]=0. \qquad (1.12)$$
We can reduce to zero dimension by assuming that the configurations $A_{\mu}^a$ are constant configurations, i.e. are $x-$independent. We employ the notation $A_{\mu}^a=X_{\mu}^a$. We obtain immediately the action and the equations of motion
$$S_{\rm YM}=-\frac{V_{R^4}}{2g^2}{\rm Tr}[X_{\mu},X_{\nu}]^2. \qquad (1.13)$$
$$[X_{\mu},[X_{\mu},X_{\nu}]]=0. \qquad (1.14)$$
$$D_{\mu}\tilde F_{\mu\nu}=\partial_{\mu}\tilde F_{\mu\nu}-i[A_{\mu},\tilde F_{\mu\nu}]=-\frac{i}{2}\epsilon_{\mu\nu\alpha\beta}\big[A_{\mu},[A_{\alpha},A_{\beta}]\big]=0, \qquad (1.17)$$
where the last step follows from the Jacobi identity. Thus
$$\delta{\cal L}_{\theta}=\frac{1}{8\pi^2}{\rm Tr}D_{\mu}(\tilde F_{\mu\nu}\delta A_{\nu})=\frac{1}{8\pi^2}\Big(\partial_{\mu}{\rm Tr}(\tilde F_{\mu\nu}\delta A_{\nu})-i{\rm Tr}[A_{\mu},\tilde F_{\mu\nu}\delta A_{\nu}]\Big)=\partial_{\mu}\delta K_{\mu}. \qquad (1.18)$$
$$\delta K_{\mu}=\frac{1}{8\pi^2}{\rm Tr}\tilde F_{\mu\nu}\delta A_{\nu}. \qquad (1.19)$$
This shows explicitly that the theta term will not contribute to the equations of motion for variations of the gauge field which vanish at infinity.
In order to find the current $K_{\mu}$ itself we adopt the method of [1]. We consider a one-parameter family of gauge fields $A_{\mu}(x,\tau)=\tau A_{\mu}(x)$ with $0\leq\tau\leq1$. By using the above result we have immediately
$$\frac{\partial K_{\mu}}{\partial\tau}=\frac{1}{8\pi^2}{\rm Tr}\tilde F_{\mu\nu}(x,\tau)\frac{\partial A_{\nu}}{\partial\tau}=\frac{1}{8\pi^2}\epsilon_{\mu\nu\alpha\beta}{\rm Tr}\Big(\tau\partial_{\alpha}A_{\beta}-\tau\partial_{\beta}A_{\alpha}-i\tau^2[A_{\alpha},A_{\beta}]\Big).A_{\nu}(x). \qquad (1.20)$$
By integrating both sides with respect to $\tau$ between $\tau=0$ and $\tau=1$ and setting $K_{\mu}(x,1)=K_{\mu}(x)$ and $K_{\mu}(x,0)=0$ we get
$$K_{\mu}=\frac{1}{8\pi^2}\epsilon_{\mu\nu\alpha\beta}{\rm Tr}\Big(\frac{1}{2}\partial_{\alpha}A_{\beta}-\frac{1}{2}\partial_{\beta}A_{\alpha}-\frac{i}{3}[A_{\alpha},A_{\beta}]\Big).A_{\nu}(x). \qquad (1.21)$$
The theta term is proportional to an integer $k$ (known variously as the Pontryagin class, the winding number, the instanton number and the topological charge) defined by
$$k=\int d^4x\,{\cal L}_{\theta}=\int d^4x\,\partial_{\mu}K_{\mu}. \qquad (1.22)$$
By Stokes' theorem this is a surface integral over the boundary
$$\partial R^4=S^3_{\infty}. \qquad (1.23)$$
Then
$$k=\oint_{\partial R^4=S^3_{\infty}}d^3\sigma_{\mu}K_{\mu}=\frac{1}{16\pi^2}\oint_{\partial R^4=S^3_{\infty}}d^3\sigma_{\mu}\,\epsilon_{\mu\nu\alpha\beta}{\rm Tr}\Big(F_{\nu\alpha}A_{\beta}+\frac{2}{3}iA_{\nu}A_{\alpha}A_{\beta}\Big). \qquad (1.24)$$
A Yang-Mills instanton is a solution of the equations of motion which has finite action. In order to have a finite action the field strength $F_{\mu\nu}$ must approach $0$ at infinity at least as $1/x^2$, viz$^1$
$$F_{\mu\nu}(x)=o(1/x^2)\;,\;\;x\longrightarrow\infty. \qquad (1.26)$$
We can immediately deduce that the gauge field must approach a pure gauge at infinity, viz
$$A_{\mu}\longrightarrow A_{\mu}^I=i\partial_{\mu}g^{-1}.g\;,\;\;x\longrightarrow\infty,$$
which satisfies (from the above asymptotic behavior) the equation $\partial_{\mu}g^{-1}=-iA_{\mu}^Ig^{-1}$, or equivalently
$$\frac{d}{ds}g^{-1}(x(s),x_0)=-i\frac{dx_{\mu}}{ds}A_{\mu}^I(x(s))\,g^{-1}(x(s),x_0). \qquad (1.28)$$
The solution is given by the path-ordered Wilson line
$$g^{-1}(x,x_0)=P\exp\Big(-i\int_0^1ds\,\frac{dy_{\mu}}{ds}A_{\mu}^I(y(s))\Big). \qquad (1.29)$$
$^1$The requirement of finite action can be neatly satisfied if we compactify $R^4$ by adding one point at $\infty$ to obtain the four-sphere $S^4$.
The group element $g^{-1}$ is therefore a map from the sphere at infinity into the gauge group, viz
$$g^{-1}:S^3_{\infty}\longrightarrow G. \qquad (1.30)$$
For $G=SU(2)$ we have
$$g^{-1}:S^3_{\infty}\longrightarrow SU(2)=S^3. \qquad (1.31)$$
These maps are characterized precisely by the integer $k$ introduced above. This number measures how many times the second $S^3$ (group) is wrapped (covered) by the first sphere $S^3_{\infty}$ (space). In fact this is the underlying reason why $k$ must be quantized. In other words $k$ is an element of the third homotopy group $\pi_3(S^3)$, viz$^2$
$$\pi_3(S^3)=Z.$$
We can obviously use any spin $j$ representation of $SU(2)$ provided it fits inside the $N\times N$ matrices of $SU(N)$. The case $N=2j+1$ is equivalent to choosing the generators of $SU(2)$ in the spin $j$ representation as the first $3$ generators of $SU(N)$, and hence $A_{\mu}^{SU(N)a}$, $a=1,2,3$, are given by the $SU(2)$ instanton configurations whereas the other components $A_{\mu}^{SU(N)a}$, $a=4,...,N^2-1$, are identically zero. The explicit constructions of all these instanton solutions will not be given here.
The story of instanton calculus is beautiful but long and complicated and we can only
here refer the reader to the vast literature on the subject. See for example the pedagogical
lectures [2].
We go back to the main issue for us which is the zero dimensional reduction of the Chern-Simons term. By using the fact that on $S^3_{\infty}$ we have $F_{\mu\nu}=0$ we can rewrite (1.24) as
$$k=\frac{i}{24\pi^2}\oint_{\partial R^4=S^3_{\infty}}d^3\sigma_{\mu}\,\epsilon_{\mu\nu\alpha\beta}{\rm Tr}A_{\nu}A_{\alpha}A_{\beta}. \qquad (1.34)$$
$^2$In general $\pi_n(S^n)=Z$. It is obvious that $\pi_1(S^1)=\pi_2(S^2)=Z$.
As before we can reduce to zero dimension by assuming that the configurations $X_a$ are constant. We obtain immediately
$$S_{\rm CS}=\frac{iV_{S^3}}{24\pi^2}\epsilon_{abc}{\rm Tr}X_aX_bX_c. \qquad (1.41)$$
By putting (1.13) and (1.41) together we obtain the matrix action
$$S_E=-\frac{V_{R^4}}{2g^2}{\rm Tr}[X_{\mu},X_{\nu}]^2+\frac{iV_{S^3}}{24\pi^2}\epsilon_{abc}{\rm Tr}X_aX_bX_c. \qquad (1.42)$$
We choose to perform the scaling
$$X_{\mu}\longrightarrow\Big(\frac{Ng^2}{2V_{R^4}}\Big)^{1/4}X_{\mu}. \qquad (1.43)$$
The action becomes
$$S_E=-\frac{N}{4}{\rm Tr}[X_{\mu},X_{\nu}]^2+\frac{2iN\alpha}{3}\epsilon_{abc}{\rm Tr}X_aX_bX_c. \qquad (1.44)$$
The new coupling constant is given by
$$\alpha=\frac{1}{16\pi^2}\frac{V_{S^3}}{N}\Big(\frac{Ng^2}{2V_{R^4}}\Big)^{3/4}. \qquad (1.45)$$
The meaning of the measure is obvious since the $X_{\mu}$ are $N\times N$ matrices. The corresponding probability distribution for the matrix configurations $X_{\mu}$ is given by
$$P(X)=\frac{1}{Z}\exp(-S_{\rm YM}[X]). \qquad (1.48)$$
We want to sample this probability distribution in Monte Carlo using the Metropolis algorithm. Towards this end, we need to compute the variation of the action under the following arbitrary change
$$X_{\mu}\longrightarrow X_{\mu}^{'}=X_{\mu}+\delta X_{\mu}, \qquad (1.49)$$
where only the elements $(i,j)$ and $(j,i)$ of the matrix $X_{\mu}$, for a fixed $\mu$, are shifted, viz $(\delta X_{\mu})_{kl}=d\,\delta_{ki}\delta_{lj}+d^{*}\delta_{kj}\delta_{li}$ for some complex number $d$. We write
$$\Delta S_{\rm YM}=\Delta S_1+\Delta S_2. \qquad (1.51)$$
The second piece comes from $S_2=-\frac{N}{2}\sum_{\nu\neq\mu}{\rm Tr}[X_{\mu},X_{\nu}]^2$ and is given by
$$\Delta S_2=-\frac{N}{2}\sum_{\nu\neq\mu}d\,[X_{\nu},[X_{\mu},X_{\nu}]]_{ji}-\frac{N}{2}\sum_{\nu\neq\mu}d^{*}[X_{\nu},[X_{\mu},X_{\nu}]]_{ij}$$
$$-N\sum_{\nu\neq\mu}\Big(d^2(X_{\nu})_{ji}(X_{\nu})_{ji}+(d^{*})^2(X_{\nu})_{ij}(X_{\nu})_{ij}+2dd^{*}(X_{\nu})_{ii}(X_{\nu})_{jj}-dd^{*}\big((X_{\nu}^2)_{ii}+(X_{\nu}^2)_{jj}\big)$$
$$-\frac{1}{2}\big(d^2+(d^{*})^2\big)\big((X_{\nu}^2)_{ii}+(X_{\nu}^2)_{jj}\big)\delta_{ij}\Big). \qquad (1.53)$$
It is not difficult to show that this probability distribution satisfies detailed balance, and
as a consequence, this algorithm is exact, i.e. free from systematic errors.
We start with $z=1$ and we compute the error $e(1)$, then we go to $z=2$ and compute the error $e(2)$. The true error is the largest value. Then we go to $z=3$, compute $e(3)$, compare it with the previous error and again retain the largest value, and so on until we reach $z=T-1$.
$$\bar f=\frac{1}{T}\sum_{i=1}^{T}f_i. \qquad (1.58)$$
The standard deviation (the variance) is given by
$$\sigma^2=\frac{1}{T}\sum_{i=1}^{T}(f_i-\bar f)^2. \qquad (1.59)$$
The above theoretical estimate of the error is valid provided the thermalized configurations $\phi_1,\phi_2,....,\phi_T$ are statistically uncorrelated, i.e. independent. In real simulations, this is
certainly not the case. In general, two consecutive configurations will be dependent, and
the average number of configurations which separate two really uncorrelated configurations
is called the auto-correlation time. The correct estimation of the error must depend on
the auto-correlation time.
We define the auto-correlation function $\Gamma_j$ and the normalized auto-correlation function $\rho_j$ for the observable $f$ by
$$\Gamma_j=\frac{1}{T-j}\sum_{i=1}^{T-j}(f_i-<f>)(f_{i+j}-<f>). \qquad (1.60)$$
$$\rho_j=\frac{\Gamma_j}{\Gamma_0}. \qquad (1.61)$$
The auto-correlation function $\Gamma_j$, for large $j$, can not be precisely determined, and hence, one must truncate the sum over $j$ in $\tau_{\rm int}$ at some cut-off $M$, in order to not increase the error $\delta\tau_{\rm int}$ in $\tau_{\rm int}$ by simply summing up noise. The integrated auto-correlation time $\tau_{\rm int}$ should then be defined by
$$\tau_{\rm int}=\frac{1}{2}+\sum_{j=1}^{M}\rho_j. \qquad (1.64)$$
The value $M$ is chosen as the first integer between $1$ and $T$ such that
$$M\geq 4\tau_{\rm int}+1. \qquad (1.65)$$
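The truncated sum with its self-consistent window can be coded in a few lines. A minimal Python sketch (illustrative, with our own names; the demo series is a synthetic AR(1) chain, not simulation data):

```python
import random

def tau_int(f, c=4.0):
    """Integrated auto-correlation time of the series f, truncated at the
    first window M with M >= c*tau_int + 1, as described in the text."""
    T = len(f)
    mean = sum(f) / T
    def gamma(j):
        return sum((f[i] - mean) * (f[i + j] - mean)
                   for i in range(T - j)) / (T - j)
    g0 = gamma(0)
    tau = 0.5
    for M in range(1, T):
        tau += gamma(M) / g0
        if M >= c * tau + 1:
            break
    return tau

# Synthetic correlated series x_{t+1} = 0.7*x_t + noise, whose exact
# integrated auto-correlation time is 0.5*(1+0.7)/(1-0.7) ~ 2.83.
random.seed(1)
x, series = 0.0, []
for _ in range(20000):
    x = 0.7 * x + random.gauss(0.0, 1.0)
    series.append(x)
tau = tau_int(series)
```

The estimate should land close to the exact value; the window cut-off keeps the late, noisy $\rho_j$ out of the sum.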
In general two among the three parameters of the molecular dynamics (the time step
dt, the number of iterations n and the time interval T = ndt) should be optimized in such
a way that the acceptance rate is fixed, for example, between 70 and 90 per cent. We fix
$n$ and optimize $dt$ along the lines discussed in previous chapters. We make, for every $N$, a reasonable guess for the value of the number of iterations $n$, based on trial and error, and then work with that value throughout. For example, for $N$ between $N=4$ and $N=8$, we found the value $n=10$ to be sufficiently reasonable.
$$\frac{<S_{\rm YM}>}{N^2-1}=\frac{d}{4}. \qquad (1.67)$$
This identity follows from the invariance of the path integral under the translations $X_{\mu}\longrightarrow X_{\mu}+\delta X_{\mu}$.
Figure 1.1: Monte Carlo histories of the action, and the average $<S>/(N^2-1)$ as a function of the dimension $d$.
Bibliography
[1] A. M. Polyakov, Gauge Fields and Strings, Contemp. Concepts Phys. 3, 1 (1987).
[2] S. Vandoren and P. van Nieuwenhuizen, arXiv:0802.1862 [hep-th].
[3] S. Schaefer, Simulations with the Hybrid Monte Carlo Algorithm: implementation and data analysis.
Chapter 2
$$Z_{\rm YM}=\int\prod_{\mu=1}^{d}dX_{\mu}\,\exp(-S_{\rm YM}[X]). \qquad (2.3)$$
Firstly, we will think of the gauge configurations $X_{\mu}$ as evolving in some fictitious time-like parameter $t$, viz
$$X_{\mu}\longrightarrow X_{\mu}(t). \qquad (2.4)$$
The above path integral is then equivalent to the Hamiltonian dynamical system
$$Z_{\rm YM}=\int\prod_{\mu=1}^{d}dP_{\mu}\prod_{\mu=1}^{d}dX_{\mu}\,\exp\Big(-\frac{1}{2}\sum_{\mu}{\rm Tr}P_{\mu}^2-S_{\rm YM}[X]\Big). \qquad (2.5)$$
$$\frac{\partial S_{\rm YM}}{\partial(X_{\mu})_{ij}}=-N\sum_{\nu=1}^{d}[X_{\nu},[X_{\mu},X_{\nu}]]_{ji}+\frac{\partial V}{\partial(X_{\mu})_{ij}}=-(\dot P_{\mu})_{ij}. \qquad (2.9)$$
We will define
$$(V_{\mu})_{ij}(t)=\frac{\partial S_{\rm YM}}{\partial(X_{\mu})_{ij}(t)}=-N\sum_{\nu=1}^{d}[X_{\nu},[X_{\mu},X_{\nu}]]_{ji}+\frac{\partial V}{\partial(X_{\mu})_{ij}}$$
$$=-N\Big(2X_{\nu}X_{\mu}X_{\nu}-X_{\mu}X_{\nu}^2-X_{\nu}^2X_{\mu}\Big)_{ji}+m^2(X_{\mu})_{ji}+2iN\alpha[X_2,X_3]_{ji}\delta_{\mu1}+2iN\alpha[X_3,X_1]_{ji}\delta_{\mu2}+2iN\alpha[X_1,X_2]_{ji}\delta_{\mu3}. \qquad (2.10)$$
$$(P_{\mu})_{ij}(t+\delta t)=(P_{\mu})_{ij}(t)+\delta t\,(\dot P_{\mu})_{ij}(t)+\frac{\delta t^2}{2}(\ddot P_{\mu})_{ij}(t)+... \qquad (2.12)$$
We calculate that
$$(\ddot P_{\mu})_{ij}=-\sum_{kl,\nu}\frac{\partial^2S_{\rm YM}}{\partial(X_{\nu})_{kl}\partial(X_{\mu})_{ij}}(\dot X_{\nu})_{kl}$$
$$=N\sum_{\nu=1}^{d}\Big([P_{\nu}^T,[X_{\mu},X_{\nu}]]+[X_{\nu},[P_{\mu}^T,X_{\nu}]]+[X_{\nu},[X_{\mu},P_{\nu}^T]]\Big)_{ji}-\sum_{kl,\nu}\frac{\partial^2V}{\partial(X_{\nu})_{kl}\partial(X_{\mu})_{ij}}(\dot X_{\nu})_{kl}. \qquad (2.14)$$
$$(X_{\mu})_{ij}(t+\delta t)=(X_{\mu})_{ij}(t)+\delta t\,(P_{\mu})_{ji}(t)-\frac{\delta t^2}{2}\frac{\partial S_{\rm YM}}{\partial(X_{\mu})_{ji}(t)}+... \qquad (2.16)$$
$$(P_{\mu})_{ij}(t+\delta t)=(P_{\mu})_{ij}(t)-\frac{\delta t}{2}\frac{\partial S_{\rm YM}}{\partial(X_{\mu})_{ij}(t)}-\frac{\delta t}{2}\frac{\partial S_{\rm YM}}{\partial(X_{\mu})_{ij}(t+\delta t)}+... \qquad (2.17)$$
$$(X_{\mu})_{ij}(t+\delta t)=(X_{\mu})_{ij}(t)+\delta t\,(P_{\mu})_{ji}(t+\frac{\delta t}{2}). \qquad (2.19)$$
$$(P_{\mu})_{ij}(t+\delta t)=(P_{\mu})_{ij}(t+\frac{\delta t}{2})-\frac{\delta t}{2}\frac{\partial S_{\rm YM}}{\partial(X_{\mu})_{ij}(t+\delta t)}. \qquad (2.20)$$
The momentum at half-integer times $n=0,...,\nu-1$ will be denoted by $(P_{\mu})_{ij}(n+1/2)$. The above equations take then the form
$$(P_{\mu})_{ij}(n+\frac{1}{2})=(P_{\mu})_{ij}(n)-\frac{\delta t}{2}(V_{\mu})_{ij}(n). \qquad (2.21)$$
$$(X_{\mu})_{ij}(n+1)=(X_{\mu})_{ij}(n)+\delta t\,(P_{\mu})_{ji}(n+\frac{1}{2}). \qquad (2.22)$$
$$(P_{\mu})_{ij}(n+1)=(P_{\mu})_{ij}(n+\frac{1}{2})-\frac{\delta t}{2}(V_{\mu})_{ij}(n+1). \qquad (2.23)$$
This algorithm applied to the solution of the equations of motion is essentially the molecular dynamics method.
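The leap-frog scheme (2.21)-(2.23) is exactly time-reversible, which is what the hybrid Monte Carlo accept/reject step below relies on. A minimal Python sketch on a scalar toy problem (the force here stands in for $(V_{\mu})_{ij}$; all names are ours):

```python
def leapfrog(x, p, dt, n, grad):
    """Leap-frog integrator: half-step in p, alternating full steps in x
    and p, closing half-step in p, as in (2.21)-(2.23)."""
    p -= 0.5 * dt * grad(x)
    for _ in range(n - 1):
        x += dt * p
        p -= dt * grad(x)
    x += dt * p
    p -= 0.5 * dt * grad(x)
    return x, p

grad = lambda x: x ** 3          # toy force, V(x) = x^4/4
x0, p0 = 1.0, 0.3
x1, p1 = leapfrog(x0, p0, 0.01, 100, grad)
xb, pb = leapfrog(x1, -p1, 0.01, 100, grad)   # time reversal: flip the momentum
# (xb, -pb) reproduces (x0, p0) up to round-off
```

Reversing the momentum and integrating again retraces the trajectory exactly, while the energy is conserved only up to an $O(\delta t^2)$ discretization error.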
In other words detailed balance holds along a classical trajectory. The leap-frog method used to solve the above differential equations maintains only the last two properties. The violation of the first property introduces systematic errors and as a consequence detailed balance is violated. It is a well established fact that introducing a Metropolis accept/reject step at the end of each classical trajectory will eliminate the systematic error completely. The algorithm becomes therefore exact and it is known, together with the initial generation of the $P$s according to the Gaussian distribution, as the hybrid Monte Carlo algorithm. The hybrid algorithm is the hybrid Monte Carlo algorithm in which the Metropolis accept/reject step is omitted.
The difference between the hybrid algorithm and the ordinary molecular dynamics algorithm is that in the hybrid algorithm we refresh the momenta $(P_{\mu})_{ij}(t)$ at the beginning of each molecular dynamics trajectory in such a way that they are chosen from a Gaussian ensemble. In this way we avoid the ergodicity problem.
The hybrid Monte Carlo algorithm can be summarized as follows:
1) Choose an initial configuration $X_{\mu}=X_{\mu}(0)$.
2) Choose $P_{\mu}=P_{\mu}(0)$ according to the Gaussian probability distribution $\exp(-\frac{1}{2}{\rm Tr}P_{\mu}^2)$.
3) Find the configuration $(X_{\mu}^{'},P_{\mu}^{'})$ by solving the above differential equations of motion, i.e. $(X_{\mu}^{'},P_{\mu}^{'})=(X_{\mu}(T),P_{\mu}(T))$.
4) Accept the configuration $(X_{\mu}^{'},P_{\mu}^{'})$ with the probability ${\rm min}(1,e^{-\Delta H[X,P]})$ where $\Delta H$ is the change in the Hamiltonian.
5) Go back to step 2 and repeat.
Steps 2-4 constitute one sweep or one unit of hybrid Monte Carlo time. The Metropolis accept/reject step guarantees detailed balance of this algorithm and the absence of the systematic errors which are caused by the non-invariance of the Hamiltonian under the discretization.
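The five steps above can be sketched end to end on a toy action. This is an illustrative Python version (not the book's Fortran code); the action $S[x]=x^2/2$ stands in for the matrix action, so the exact answer $<x^2>=1$ can be checked:

```python
import math
import random

def hmc_sweep(x, dt=0.1, n=10):
    """One hybrid Monte Carlo sweep (steps 2-4 above) for S[x] = x^2/2,
    so that dS/dx = x."""
    p = random.gauss(0.0, 1.0)                  # step 2: Gaussian momentum refresh
    H_old = 0.5 * p * p + 0.5 * x * x
    y, q = x, p                                 # step 3: leap-frog trajectory
    q -= 0.5 * dt * y
    for _ in range(n - 1):
        y += dt * q
        q -= dt * y
    y += dt * q
    q -= 0.5 * dt * y
    H_new = 0.5 * q * q + 0.5 * y * y
    if random.random() < math.exp(min(0.0, H_old - H_new)):
        return y                                # step 4: accept
    return x                                    # reject: keep the old configuration

random.seed(2)
x, samples = 0.0, []
for sweep in range(20000):
    x = hmc_sweep(x)                            # step 5: repeat
    if sweep > 1000:                            # discard thermalization
        samples.append(x * x)
```

The sample average of $x^2$ should reproduce the exact value $1$ within statistical errors.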
where $a=1/2$ for diagonal and $a=1$ for off-diagonal elements. By squaring and including the normalization we have
$$\int dxdy\,\frac{a}{\pi}e^{-a(x^2+y^2)}=\int_0^1dt_1\int_0^1dt_2. \qquad (2.27)$$
$$t_1=\frac{\phi}{2\pi}\;,\;\;t_2=1-e^{-ar^2}. \qquad (2.28)$$
We generate therefore two uniform random numbers $t_1$ and $t_2$ and write down for the diagonal elements $(P_{\mu})_{ii}$ the equations
$$\phi=2\pi t_1\;,\;\;r=\sqrt{-2\ln(1-t_2)}\;,\;\;(P_{\mu})_{ii}=r\cos\phi. \qquad (2.29)$$
For the off-diagonal elements we write similarly
$$\phi=2\pi t_1\;,\;\;r=\sqrt{-\ln(1-t_2)}\;,\;\;(P_{\mu})_{ij}=r\cos\phi+ir\sin\phi\;,\;\;(P_{\mu})_{ji}=(P_{\mu})_{ij}^{*}. \qquad (2.30)$$
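The Box-Muller formulas (2.29)-(2.30) fill a Hermitian momentum matrix distributed as $\exp(-\frac{1}{2}{\rm Tr}P^2)$, for which $<{\rm Tr}P^2>=N^2$. A Python sketch (illustrative; the function name is ours):

```python
import math
import random

def gaussian_hermitian(N):
    """Hermitian N x N momentum matrix drawn from exp(-Tr P^2/2) via
    Box-Muller: real diagonal entries with variance 1, complex
    off-diagonal entries with variance 1/2 per real component."""
    P = [[0j] * N for _ in range(N)]
    for i in range(N):
        t1, t2 = random.random(), random.random()
        phi = 2.0 * math.pi * t1
        r = math.sqrt(-2.0 * math.log(1.0 - t2))
        P[i][i] = complex(r * math.cos(phi), 0.0)
        for j in range(i + 1, N):
            t1, t2 = random.random(), random.random()
            phi = 2.0 * math.pi * t1
            r = math.sqrt(-math.log(1.0 - t2))
            P[i][j] = complex(r * math.cos(phi), r * math.sin(phi))
            P[j][i] = P[i][j].conjugate()
    return P
```

Averaging ${\rm Tr}P^2$ over many draws gives a direct check of the normalization: $N$ from the diagonal plus $N(N-1)$ from the off-diagonal pairs, i.e. $N^2$ in total.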
Test 4: On general grounds we must also have the Schwinger-Dyson identity (exact result) given by
$$4<{\rm YM}>+3<{\rm CS}>+2m^2<{\rm HO}>=d(N^2-1). \qquad (2.35)$$
$${\rm YM}=-\frac{N}{4}\sum_{\mu,\nu=1}^{d}{\rm Tr}[X_{\mu},X_{\nu}]^2. \qquad (2.36)$$
$${\rm CS}=\frac{2iN\alpha}{3}\epsilon_{abc}{\rm Tr}X_aX_bX_c. \qquad (2.37)$$
$${\rm HO}=\frac{1}{2}{\rm Tr}X_{\mu}^2. \qquad (2.38)$$
Test 5: We compute $<S_{\rm YM}>$ and $C_v=<S_{\rm YM}^2>-<S_{\rm YM}>^2$ for $m=0$. There must be an emergent geometry phase transition in $\tilde{\alpha}$ for $d=3$ and $d=4$.
Test 6: We compute the eigenvalues distributions of the $X$s in $d=3$ and $d=4$ for $\tilde{\alpha}=1$ and $m=0$.
Test 7: The Polyakov line is defined by
$$P(k)=\frac{1}{N}{\rm Tr}e^{ikX_1}. \qquad (2.39)$$
We compute $<P(k)>$ as a function of $k$ for $m=\alpha=0$.
The order parameter in this problem is given by the observable radius defined by
$$r=\frac{\tilde{\alpha}^2c_2}{\rm radius}\;,\;\;c_2=\frac{N^2-1}{4}. \qquad (2.42)$$
A more powerful set of order parameters is given by the eigenvalues distributions of the matrices $X_3$, $i[X_1,X_2]$, and $\sum_aX_a^2$. Other useful observables are
$$S_3={\rm YM}+{\rm CS}\;,\;\;{\rm YM}=-\frac{N}{4}{\rm Tr}[X_{\mu},X_{\nu}]^2\;,\;\;{\rm CS}=\frac{2iN\tilde{\alpha}}{3}\epsilon_{abc}{\rm Tr}X_aX_bX_c. \qquad (2.43)$$
The specific heat is
$$C_v=<S_3^2>-<S_3>^2.$$
For this so-called ARS model it is important that we remove the trace part of the matrices $X_a$ after each molecular dynamics step because this mode can never be thermalized. In other words, we should consider in this case the path integral (partition function) given by
$$Z=\int dX_a\,\exp(-S_3)\,\delta({\rm Tr}X_a). \qquad (2.46)$$
The corresponding hybrid Monte Carlo code is included in the last chapter. We skip here any further technical details and report only a few physical results.
The ARS model is characterized by two phases: the fuzzy sphere phase and the Yang-Mills phase. Some of the fundamental results are:
1. The Fuzzy Sphere Phase:
This appears for large values of $\tilde{\alpha}$. It corresponds to the class of solutions of the equations of motion given by
$$X_a=\phi L_a. \qquad (2.48)$$
$$[L_a,L_b]=i\epsilon_{abc}L_c\;,\;\;c_2=\sum_aL_a^2=l(l+1).{\bf 1}_N=\frac{N^2-1}{4}.{\bf 1}_N. \qquad (2.49)$$
In this phase $\phi=\tilde{\alpha}$ and as a consequence
$$S_3=-\frac{N^2c_2}{6}\tilde{\alpha}^4\;,\;\;{\rm YM}=\frac{N^2c_2}{2}\tilde{\alpha}^4\;,\;\;{\rm CS}=-\frac{2N^2c_2}{3}\tilde{\alpha}^4\;,\;\;{\rm radius}=\tilde{\alpha}^2c_2. \qquad (2.50)$$
The eigenvalues of $D_3=X_3/\tilde{\alpha}$ and $i[D_1,D_2]=i[X_1,X_2]/\tilde{\alpha}^2$ are given by
$$\lambda_i=-\frac{N-1}{2},...,+\frac{N-1}{2}. \qquad (2.51)$$
The spectrum of $i[D_1,D_2]$ is a better measurement of the geometry since all fluctuations around $L_3$ are more suppressed. Some illustrative data for $\tilde{\alpha}=3$ and $N=4$ is shown on figure (2.1).
2. The Yang-Mills (Matrix) Phase:
This appears for small values of $\tilde{\alpha}$. It corresponds to the class of solutions of the equations of motion given by
$$[X_a,X_b]=0. \qquad (2.52)$$
Figure 2.1: eigenvalues distribution for $\tilde{\alpha}=3$ and $N=4$ (histogram).
Figure 2.2: eigenvalues distribution (histogram).
Figure 2.3: eigenvalues distribution (histogram).
Figure 2.4: $<S>/N^2$, the radius $r$, the specific heat $C_v$ and ${\rm Tr}X_a^2/N$ as functions of the coupling for $m^2=0$ and various $N$ ($N=10-48$), compared with theory.
Bibliography
[1] I. Montvay and G. Munster, Quantum fields on a lattice, Cambridge, UK: Univ.
Pr. (1994), 491 p, Cambridge monographs on mathematical physics.
[2] H. J. Rothe, Lattice gauge theories: An Introduction, World Sci. Lect. Notes Phys.
74, 1 (2005).
[3] J. Ambjorn, K. N. Anagnostopoulos, W. Bietenholz, T. Hotta and J. Nishimura,
Large N dynamics of dimensionally reduced 4D SU(N) super Yang-Mills theory,
JHEP 0007, 013 (2000) [arXiv:hep-th/0003208].
[4] J. Ambjorn, K. N. Anagnostopoulos, W. Bietenholz, T. Hotta and J. Nishimura,
Monte Carlo studies of the IIB matrix model at large N, JHEP 0007, 011 (2000)
[arXiv:hep-th/0005147].
[5] K. N. Anagnostopoulos, T. Azuma, K. Nagao and J. Nishimura, Impact of su-
persymmetry on the nonperturbative dynamics of fuzzy spheres, JHEP 0509, 046
(2005) [arXiv:hep-th/0506062].
[6] V. G. Filev and D. O'Connor, On the Phase Structure of Commuting Matrix Models, arXiv:1402.2476 [hep-th].
[7] R. Delgadillo-Blando, D. O'Connor and B. Ydri, Geometry in transition: A model of emergent geometry, Phys. Rev. Lett. 100, 201601 (2008) [arXiv:0712.3011 [hep-th]].
Chapter 3
$$\tilde b=\frac{b}{aN^{3/2}}\;,\;\;\tilde c=\frac{c}{a^2N^2}. \qquad (3.2)$$
The path integral we wish to sample in Monte Carlo simulation is
$$Z=\int d\Phi\,\exp(-S[\Phi]). \qquad (3.3)$$
As before, we will first think of the configurations $\Phi$ as evolving in some fictitious time-like parameter $t$, viz
$$\Phi\longrightarrow\Phi(t). \qquad (3.4)$$
The above path integral is then equivalent to the Hamiltonian dynamical system
$$Z=\int dPd\Phi\,\exp\Big(-\frac{1}{2}{\rm Tr}P^2-S[\Phi]\Big). \qquad (3.5)$$
In other words, we have introduced a Hermitian $N\times N$ matrix $P$ which is conjugate to $\Phi$. The Hamiltonian is clearly given by
$$H=\frac{1}{2}{\rm Tr}P^2+S[\Phi]. \qquad (3.6)$$
$$P_{ij}(t+\frac{\delta t}{2})=P_{ij}(t)-\frac{\delta t}{2}V_{ij}(t). \qquad (3.9)$$
$$\Phi_{ij}(t+\delta t)=\Phi_{ij}(t)+\delta t\,P_{ji}(t+\frac{\delta t}{2}). \qquad (3.10)$$
$$P_{ij}(t+\delta t)=P_{ij}(t+\frac{\delta t}{2})-\frac{\delta t}{2}V_{ij}(t+\delta t). \qquad (3.11)$$
Let us recall that $t=n\delta t$, $n=0,1,2,...,\nu$, where the point $n=0$ corresponds to the initial configuration $\Phi_{ij}(0)$ whereas $n=\nu$ corresponds to the final configuration $\Phi_{ij}(T)$ where $T=\nu\delta t$.
where $\Delta H$ is the corresponding change in the Hamiltonian when we go from $(\Phi(0),P(0))$ to $(\Phi(T),P(T))$.
4) Repeat.
3.4 Optimization
3.4.1 Partial Optimization
We start with a general comment which is not necessarily a part of the optimization process. The scalar field $\Phi$ is a hermitian matrix, i.e. the diagonal elements are real, while the off diagonal elements are complex conjugates of each other. We find it crucial to implement, explicitly in the code, the reality of the diagonal elements by subtracting from $\Phi_{ii}$ the imaginary part (error) which in each molecular dynamics iteration is small but can accumulate. The implementation of the other condition is straightforward.
In actual simulations we can fix $\nu$, for example we take $\nu=20$, and adjust the step size $\delta t$, in some interval $[\delta t_{\rm min},\delta t_{\rm max}]$, in such a way that the acceptance rate $p_a$ is held fixed between some target acceptance rates, say $p_a^{\rm low}=70$ and $p_a^{\rm high}=90$ per cent. If the acceptance rate becomes larger than the target acceptance rate $p_a^{\rm high}$, then we increase the step size $\delta t$ by a factor ${\rm inc}=1.2$ if the outcome is within the interval $[\delta t_{\rm min},\delta t_{\rm max}]$. Similarly, if the acceptance rate becomes smaller than the target acceptance rate $p_a^{\rm low}$, we decrease the step size by a factor ${\rm dec}=0.8$ if the outcome is within the interval $[\delta t_{\rm min},\delta t_{\rm max}]$. The adjustment of $\delta t$ can be done at each Monte Carlo step, but it can also be performed only every $L$ steps. We take $L=1$. A sample pseudo code is attached below. A sample of the results is shown in figure (3.1).
pa=(Accept)/(Reject+Accept)
cou=mod(tmc,L)
if (cou.eq.0)then
if (pa.ge.target_pa_high) then
dtnew=dt*inc
if (dtnew.le.dt_max)then
dt=dtnew
else
dt=dt_max
endif
endif
if (pa.le.target_pa_low) then
dtnew=dt*dec
if (dtnew.ge.dt_min)then
dt=dtnew
else
dt=dt_min
endif
endif
endif
Figure 3.1: The acceptance rate $p_a$, the step size $dt$ and the action during thermalization and measurement, for $L=1$ and $L=10$, with $a=0$, $c=1.0$, $b=-5.3$, $\nu=20$, $p_a=0.7-0.9$, $dt=10^{-4}-1$, $N=10$ and $T_{\rm th}=2^{12}$.
$$\Phi_0=0. \qquad (3.13)$$
$$\Phi=\sqrt{-\frac{b}{2c}}\,U\gamma U^{+}\;,\;\;\gamma^2={\bf 1}_N\;,\;\;UU^{+}=U^{+}U={\bf 1}_N. \qquad (3.14)$$
We compute $V[\Phi_0]=0$ and $V[\Phi]=-Nb^2/4c$. The first configuration corresponds to the disordered phase characterized by $<\Phi>=0$. The second solution makes sense only for $b<0$, and it corresponds to the ordered phase characterized by $<\Phi>\neq0$. As mentioned above, there is a non-perturbative transition between the two phases which occurs quantum mechanically, not at $b=0$, but at $b=b_*=-2\sqrt{Nc}$, which is known as the one-cut to two-cut transition. The idempotent $\gamma$ can always be chosen such that $\gamma=\gamma_k={\rm diag}({\bf 1}_k,-{\bf 1}_{N-k})$. The orbit of $\gamma_k$ is the Grassmannian manifold $U(N)/(U(k)\times U(N-k))$ which is $d_k$ dimensional where $d_k=2kN-2k^2$. It is not difficult to show that this dimension is maximum at $k=N/2$, assuming that $N$ is even, and hence from the entropy argument, the most important two-cut solution is the so-called stripe configuration given by $\gamma={\rm diag}({\bf 1}_{N/2},-{\bf 1}_{N/2})$.
In this real quartic matrix model, we have therefore three possible phases characterized by the following order parameters:
$$P_T=\frac{1}{N}{\rm Tr}\Phi^2. \qquad (3.21)$$
$$P_0=\frac{1}{N^2}({\rm Tr}\Phi)^2. \qquad (3.22)$$
We will also compute the eigenvalues of the matrix $\Phi$ by calling the library LAPACK and then construct appropriate histograms using known techniques.
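The book's codes call LAPACK from Fortran; the same step can be sketched in Python, where `numpy.linalg.eigvalsh` wraps LAPACK's Hermitian eigensolver (the random matrix below is only a stand-in for the simulated $\Phi$, and the names are ours):

```python
import numpy as np

def eigenvalue_histogram(phi, bins=20):
    """Eigenvalues of the Hermitian matrix phi (via LAPACK, wrapped by
    numpy.linalg.eigvalsh), binned into a histogram."""
    lam = np.linalg.eigvalsh(phi)          # sorted, real eigenvalues
    counts, edges = np.histogram(lam, bins=bins)
    return lam, counts, edges

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50)) + 1j * rng.normal(size=(50, 50))
phi = (A + A.conj().T) / 2                 # random Hermitian test matrix
lam, counts, edges = eigenvalue_histogram(phi)
```

In the simulation one would accumulate such histograms over many thermalized configurations; the split of the distribution at $\lambda=0$ is the signal used below.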
Ising: The Ising transition appears for small values of $\tilde c$ and is the easiest one to observe in Monte Carlo simulations. We choose, for $N=8$, the Monte Carlo times $T_{\rm th}=2^{11}$, $T_{\rm mc}=2^{11}$ and $T_{\rm co}=2^0$, i.e. we do not take auto-correlations into account, for simplicity. The data for $\tilde c=0.1,0.2$ is shown on figure (3.2). The transition, marked by the peak of the susceptibility, occurs, for $\tilde c=0.1,0.2,0.3$ and $0.4$, at $\tilde b=-0.5,-0.9,-1.4$ and $-1.75$ respectively. The corresponding linear fit which goes through the origin is given by
$$\tilde c=-0.22\,\tilde b_*. \qquad (3.23)$$
Matrix: The disorder-to-non-uniform-order phase transition appears for large values of $\tilde c$ and is quite difficult to observe in Monte Carlo simulations due to the fact that configurations which have slightly different numbers of pluses and minuses compete strongly, for finite $N$, with the physically relevant stripe configuration with equal numbers of pluses and minuses. In principle then we should run the simulation until a symmetric eigenvalues distribution is reached, which can be very difficult to achieve in practice. We choose, for $N=8$, the Monte Carlo times $T_{\rm th}=2^{11}$, $T_{\rm mc}=2^{12}$ and $T_{\rm co}=2^4$. The data for the specific heat for $\tilde c=1-4$ is shown on figure (3.3). We also plot the data for the pure quartic matrix model for $\tilde c=1$ for comparison. The transition for smaller values of $\tilde c$ is marked, as before, by the peak in the specific heat. However, this method becomes unreliable for larger values of $\tilde c$ since the peak disappears. Fortunately, the transition is always marked by the point where the eigenvalues distribution splits at $\lambda=0$. The corresponding eigenvalues distributions are shown on (3.4). We include symmetric and slightly non-symmetric distributions since both were taken into account in the data of the specific heat. The non-symmetric distributions cause typically large fluctuations of the magnetization and peaks in the susceptibility which are very undesirable finite size effects. But, on the other hand, as we increase the value of $|\tilde b|$ we are approaching the non-symmetric uniform phase and thus the appearance of these non-symmetric distributions is very natural. This makes the determination of the transition point very hard from the behavior of these observables.
We have determined instead the transition point by simulating, for a given $\tilde c$, the pure matrix model with $a=0$, in which we know that the transition occurs at $\tilde b=-2\sqrt{\tilde c}$, and then searching in the full model with $a=1$ for the value of $\tilde b$ with an eigenvalues distribution similar to the eigenvalues distribution found for $a=0$ and $\tilde b=-2\sqrt{\tilde c}$. This exercise is repeated for $\tilde c=4,3,2$ and $1$ and we found the transition points given respectively by $\tilde b=-5,-4.5,-4$, and $-2.75$. See graphs on figure (3.5). The corresponding linear fit is given by
$$\tilde b_*=-0.56\,\tilde c-2.57. \qquad (3.26)$$
However, this is not really what we observe using our code here. The uniform-to-non-uniform phase transition is only observed for small values of $\tilde c$, from the uniform phase to the non-uniform phase as we increase $|\tilde b|$. The transition for these small values of $\tilde c$, such as $\tilde c=0.1,0.2,0.3,0.4$, corresponds to a second peak in the susceptibility and the specific heat. It corresponds to a transition from a one-cut eigenvalues distribution symmetric around $0$ to a one-cut eigenvalues distribution symmetric around a non-zero value. The eigenvalues distributions for $\tilde c=0.3$ are shown on the first two graphs of figure (3.7). In this case we have found it much easier to determine the transition points from the behavior of the magnetization and the powers. In particular, we have determined the transition point from the broad maximum of the magnetization which corresponds to the discontinuity of the power in the zero modes. The magnetization and the powers, for $\tilde c=0.1,0.2,0.3,0.4$, are shown on figure (3.8). The transition points were found to be $\tilde b=-1.5,-1.7,-2$ and $-2.1$ respectively.
The uniform phase becomes narrower as we approach the value $\tilde c=0.5$. The specific heat and the susceptibility have a peak around $\tilde b=-2.25$ which is consistent with the Ising transition, but the powers and the magnetization show the behavior of the disorder-to-non-uniform-order transition. The eigenvalues distribution is also consistent with the disorder-to-non-uniform-order transition. See the last graph of figure (3.7). The value $\tilde c=0.5$ is roughly the location of the triple point.
The phase diagram is shown on figure (3.6).
CP and MFT, B.Ydri 149
Figure 3.2: The powers P_T and P_0, the magnetization m and the susceptibility as functions of b for N = 8 and c = 0.1, 0.2.
Figure 3.3: The specific heat Cv/N^2, the powers P_T and P_0, the magnetization and the susceptibility for N = 8 in the full model a = 1 with c = 1, 2, 3, 4, compared with the pure quartic matrix model a = 0, c = 1.
Figure 3.4: The eigenvalues distributions for N = 8 and c = 4 for various values of b across the transition.
Figure 3.5: The eigenvalues distributions for N = 8 in the full model a = 1 compared with the pure matrix model a = 0 and the theoretical prediction, for c = 1, 2, 3, 4 at the corresponding transition points b = -2.0, -2.8, -3.4, -4.0.
Figure 3.6: The phase diagram in the c versus -b plane for N = 8, showing the disordered, uniform and non-uniform phases, the Monte Carlo data with linear fits for the disorder-to-non-uniform, disorder-to-uniform and uniform-to-non-uniform transition lines, the a = 0 theory, the multitrace approximation and the triple point.
Figure 3.7: The eigenvalues distributions for N = 8 with c = 0.3 (first two graphs) and c = 0.5 (last graph) for various values of b.
Figure 3.8: The powers P_T and P_0 and the magnetization m for N = 8 and c = 0.2, 0.3, 0.4.
Bibliography
[1] F. Garcia Flores, X. Martin and D. O'Connor, Simulation of a scalar field on a fuzzy
sphere, Int. J. Mod. Phys. A 24, 3917 (2009) [arXiv:0903.1986 [hep-lat]].
[2] F. Garcia Flores, D. O'Connor and X. Martin, Simulating the scalar field on the
fuzzy sphere, PoS LAT 2005, 262 (2006) [hep-lat/0601012].
[3] X. Martin, A matrix phase for the phi**4 scalar field on the fuzzy sphere, JHEP
0404, 077 (2004) [hep-th/0402230].
[4] M. Panero, Numerical simulations of a non-commutative theory: The Scalar model
on the fuzzy sphere, JHEP 0705, 082 (2007) [hep-th/0608202].
[5] J. Ambjorn, K. N. Anagnostopoulos, W. Bietenholz, T. Hotta and J. Nishimura,
Large N dynamics of dimensionally reduced 4D SU(N) super Yang-Mills theory,
JHEP 0007, 013 (2000) [arXiv:hep-th/0003208].
[6] J. Ambjorn, K. N. Anagnostopoulos, W. Bietenholz, T. Hotta and J. Nishimura,
Monte Carlo studies of the IIB matrix model at large N, JHEP 0007, 011 (2000)
[arXiv:hep-th/0005147].
[7] S. S. Gubser and S. L. Sondhi, Phase structure of noncommutative scalar field
theories, Nucl. Phys. B 605, 395 (2001) [hep-th/0006119].
[8] J. Ambjorn and S. Catterall, Stripes from (noncommutative) stars, Phys. Lett. B
549, 253 (2002) [hep-lat/0209106].
[9] B. Ydri, A Multitrace Approach to Noncommutative Φ^4_2, arXiv:1410.4881 [hep-th].
Chapter 4
References for this chapter include the elegant quantum field theory textbook [1] and
the original articles [2-4].
The mass parameter m^2 is replaced by the so-called hopping parameter κ and the coupling
constant λ is replaced by the coupling constant g where
m^2 a^2 = (1 - 2g)/κ - 2d , λ a^{d-4} = g/κ^2. (4.3)
The fields φ^i_n and Φ^i_n are related by
φ^i_n = sqrt(2κ/a^{d-2}) Φ^i_n. (4.4)
The partition function is given by
Z = ∫ ∏_{n,i} dφ^i_n e^{-S[φ]}
= ∫ dμ(Φ) e^{2κ Σ_n Σ_μ Φ^i_n Φ^i_{n+μ}}. (4.5)
The measure is
dμ(Φ) = ∏_n d^N Φ_n e^{-Φ_n^2 - g(Φ_n^2 - 1)^2}
≡ ∏_n dμ(Φ_n). (4.6)
This is a generalized Ising model. Indeed, in the limit g → ∞ the dominant configurations
are such that Φ_1^2 + ... + Φ_N^2 = 1, i.e. points on the sphere S^{N-1}. Hence
∫ dμ(Φ_n) f(Φ_n) / ∫ dμ(Φ_n) = ∫ dΩ_{N-1} f(Φ_n) / ∫ dΩ_{N-1} , g → ∞. (4.7)
For N = 1 we obtain
∫ dμ(Φ_n) f(Φ_n) / ∫ dμ(Φ_n) = (1/2)(f(+1) + f(-1)) , g → ∞. (4.8)
Thus the limit g → ∞ of the O(1) model is precisely the Ising model in d dimensions. The
limit g → ∞ of the O(3) model corresponds to the Heisenberg model in d dimensions.
The O(N) models on the lattice are thus intimately related to spin models.
There are two phases in this model. A disordered (paramagnetic) phase characterized
by ⟨φ^i_n⟩ = 0 and an ordered (ferromagnetic) phase characterized by ⟨φ^i_n⟩ = v^i ≠ 0.
This can be seen in various ways. The easiest way is to look for the minima of the classical
potential
V[φ] = ∫ d^d x ( (1/2) m^2 φ^i φ^i + (λ/4)(φ^i φ^i)^2 ). (4.9)
The equation of motion reads
[ m^2 + (λ/2) φ^j φ^j ] φ^i = 0. (4.10)
For m^2 > 0 there is a unique solution φ^i = 0 whereas for m^2 < 0 there is a second solution
given by φ^j φ^j = -2m^2/λ.
A more precise calculation is as follows. Let us compute the expectation value ⟨Φ^i_n⟩
on the lattice, which is defined by
⟨Φ^i_n⟩ = ∫ dμ(Φ) Φ^i_n e^{2κ Σ_n Σ_μ Φ^i_n Φ^i_{n+μ}} / ∫ dμ(Φ) e^{2κ Σ_n Σ_μ Φ^i_n Φ^i_{n+μ}}
= ∫ dμ(Φ) Φ^i_n e^{κ Σ_n Σ_μ Φ^i_n (Φ^i_{n+μ} + Φ^i_{n-μ})} / ∫ dμ(Φ) e^{κ Σ_n Σ_μ Φ^i_n (Φ^i_{n+μ} + Φ^i_{n-μ})}. (4.11)
Now we approximate the spins Φ^i at the 2d nearest neighbors of each spin Φ^i_n by the
average v^i = ⟨Φ^i_n⟩, viz
Σ_μ (Φ^i_{n+μ} + Φ^i_{n-μ}) / (2d) = v^i. (4.12)
This is a crude form of the mean field approximation. Equation (4.11) becomes
v^i = ∫ dμ(Φ) Φ^i_n e^{4dκ Σ_n Φ^i_n v^i} / ∫ dμ(Φ) e^{4dκ Σ_n Φ^i_n v^i}
= ∫ dμ(Φ_n) Φ^i_n e^{4dκ Φ^i_n v^i} / ∫ dμ(Φ_n) e^{4dκ Φ^i_n v^i}. (4.13)
The extra factor of 2 in the exponents comes from the fact that the coupling between any
two nearest neighbor spins on the lattice occurs twice. We write the above equation as
v^i = (∂/∂J^i) ln Z[J] |_{J^i = 4dκ v^i}. (4.14)
Z[J] = ∫ dμ(Φ_n) e^{Φ^i_n J^i}
= ∫ d^N Φ^i_n e^{-Φ^i_n Φ^i_n - g(Φ^i_n Φ^i_n - 1)^2 + Φ^i_n J^i}. (4.15)
In other words
v^i = 2dκ v^i ⟹ κ_c = 1/(2d). (4.17)
∫ d^N Φ^i_n δ(Φ^i_n Φ^i_n - 1) Φ^i_n Φ^j_n = (δ^{ij}/N) ∫ d^N Φ^i_n δ(Φ^i_n Φ^i_n - 1) Φ^k_n Φ^k_n = δ^{ij} Z[0]/N. (4.20)
Hence
Z[J] = Z[0] ( 1 + J^i J^i/(2N) + ... ). (4.21)
Thus
v^i = J^i/N = 4dκ v^i/N ⟹ κ_c = N/(4d). (4.22)
N = 1 , g → ∞. (4.23)
We compute then
Z[J] = N ∫ dΦ_n δ(Φ_n^2 - 1) e^{Φ_n J}
= Z[0] cosh J. (4.24)
Thus
v = tanh J |_{J = 4dκv} = tanh(4dκv). (4.25)
A graphical sketch of the solutions of this equation shows that for κ < κ_c there is only
one intersection point at v = 0, whereas for κ > κ_c there are two intersection points away
from zero, i.e. v ≠ 0. Clearly for κ near κ_c the solution v is near 0 and thus we can
expand the above equation as
v = 4dκ v - (1/3)(4dκ)^3 v^3 + .... (4.26)
The solution is
(1/3)(4dκ)^3 v^2 = (κ - κ_c)/κ_c. (4.27)
Thus only for κ > κ_c is there a non-zero solution.
In summary we have the two phases.
The critical line κ_c = κ_c(g) interpolates in the κ-g plane between the two lines given by
κ_c = N/(4d) , g → ∞. (4.30)
κ_c = 1/(2d) , g → 0. (4.31)
For d = 4 the critical value at g = 0 is κ_c = 1/8 for all N. This critical value can be
derived in a different way as follows. We know that the renormalized mass at one-loop
order in the continuum φ^4 theory with O(N) symmetry is given by the equation
m_R^2 = m^2 + λ(N + 2) I(m^2, Λ)
= m^2 + ((N+2)λ/(16π^2)) Λ^2 + ((N+2)λ/(16π^2)) m^2 ln(m^2/Λ^2) + ((N+2)λ/(16π^2)) m^2 C + finite terms.
(4.32)
a^2 m_R^2 = a^2 m^2 + (N+2)λ/(16π^2) + ((N+2)λ/(16π^2)) a^2 m^2 ln(a^2 m^2) + ((N+2)λ/(16π^2)) a^2 m^2 C + a^2 × finite terms.
(4.33)
The lattice spacing a is formally identified with the inverse cutoff 1/Λ, viz
a = 1/Λ. (4.34)
Thus we obtain in the continuum limit a → 0 the result
a^2 m^2 → -(N+2)λ/(16π^2) - ((N+2)λ/(16π^2)) a^2 m^2 ln(a^2 m^2) - ((N+2)λ/(16π^2)) a^2 m^2 C - a^2 × finite terms.
(4.35)
The leap frog, or Stormer-Verlet, algorithm, which maintains the symmetry under time
reversal and the conservation of the phase space volume of the above Hamilton equations,
is then given by the equations (with V^i_n the derivative of the potential with respect to Φ^i_n)
P^i_n(t + δt/2) = P^i_n(t) - (δt/2) V^i_n(t). (4.43)
Φ^i_n(t + δt) = Φ^i_n(t) + δt P^i_n(t + δt/2). (4.44)
P^i_n(t + δt) = P^i_n(t + δt/2) - (δt/2) V^i_n(t + δt). (4.45)
We recall that t = n δt, n = 0, 1, 2, ..., ν, where the point n = 0 corresponds to the
initial configuration Φ^i_n(0) whereas n = ν corresponds to the final configuration Φ^i_n(T)
where T = ν δt. This algorithm does not conserve the Hamiltonian due to the systematic
error associated with the discretization, which goes as O(δt^2), but as can be shown the
addition of a Metropolis accept-reject step will nevertheless lead to an exact algorithm.
The hybrid Monte Carlo algorithm in this case can be summarized as follows:
1) Choose P(0) such that P(0) is distributed according to the Gaussian probability
distribution exp(-(1/2) Σ_n P^i_n P^i_n). In particular we choose P^i_n such that
P^i_n = sqrt(-2 ln(1 - x_1)) cos 2π(1 - x_2), (4.46)
where x_1 and x_2 are two random numbers uniformly distributed in the interval [0, 1].
This step is crucial if we want to avoid ergodicity problems.
2) Find the configuration (Φ(T), P(T)) by solving the above differential equations of
motion.
3) Accept the configuration (Φ(T), P(T)) with a probability min(1, e^{-ΔH}),
where ΔH is the corresponding change in the Hamiltonian when we go from (Φ(0), P(0))
to (Φ(T), P(T)).
4) Repeat.
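The four steps above can be sketched for a single-site toy model; the potential V(φ) = φ^2/2 + φ^4/4 below is an assumption standing in for the full lattice action, and the Gaussian momenta are drawn with the Box-Muller formula of equation (4.46):

```python
import math
import random

def V(phi):                 # toy potential replacing the lattice action
    return 0.5 * phi ** 2 + 0.25 * phi ** 4

def dV(phi):
    return phi + phi ** 3

def gaussian(rng):
    # Box-Muller, equation (4.46): sqrt(-2 ln(1-x1)) cos(2 pi (1-x2))
    x1, x2 = rng.random(), rng.random()
    return math.sqrt(-2.0 * math.log(1.0 - x1)) * math.cos(2.0 * math.pi * (1.0 - x2))

def hmc_step(phi, rng, dt=0.1, nsteps=20):
    p = gaussian(rng)                        # step 1: Gaussian momentum
    h_old = 0.5 * p * p + V(phi)
    phi_new, p_new = phi, p
    p_new -= 0.5 * dt * dV(phi_new)          # leap frog, eqs (4.43)-(4.45)
    for i in range(nsteps):
        phi_new += dt * p_new
        kick = dt if i < nsteps - 1 else 0.5 * dt
        p_new -= kick * dV(phi_new)
    h_new = 0.5 * p_new * p_new + V(phi_new)
    if rng.random() < math.exp(min(0.0, h_old - h_new)):  # step 3: accept-reject
        return phi_new
    return phi                               # step 4: repeat from the old phi

rng = random.Random(1)
phi, samples = 0.0, []
for it in range(20000):
    phi = hmc_step(phi, rng)
    if it >= 2000 and it % 5 == 0:
        samples.append(phi)
print(sum(samples) / len(samples))   # ~ 0 by the phi -> -phi symmetry
```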
S[φ] = ∫ d^2 x ( (1/2)(∂_μ φ)^2 + (1/2) μ_0^2 φ^2 + (λ/4) φ^4 ). (4.48)
S[Φ] = -2κ Σ_n Σ_μ Φ_n Φ_{n+μ} + Σ_n ( Φ_n^2 + g(Φ_n^2 - 1)^2 ). (4.49)
μ_0^2 = m^2. (4.50)
μ_{0l}^2 ≡ μ_0^2 a^2 = (1 - 2g)/κ - 4 , λ_l ≡ λ a^2 = g/κ^2. (4.51)
In the simulations we will start by fixing the lattice quartic coupling λ_l and the lattice
mass parameter μ_{0l}^2, which then allows us to fix κ and g as
κ = ( sqrt(8λ_l + (μ_{0l}^2 + 4)^2) - (μ_{0l}^2 + 4) ) / (4λ_l). (4.52)
g = κ^2 λ_l. (4.53)
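Equations (4.52) and (4.53) simply invert the relations (4.51); the sketch below verifies the round trip numerically (a minimal stand-alone check, not the book's code):

```python
import math

def kappa_g(mu2_l, lam_l):
    """Solve 2*lam_l*k**2 + (mu2_l+4)*k - 1 = 0 for the positive root,
    which is equation (4.52), then g = k**2 * lam_l, equation (4.53)."""
    s = mu2_l + 4.0
    kappa = (math.sqrt(8.0 * lam_l + s * s) - s) / (4.0 * lam_l)
    return kappa, kappa * kappa * lam_l

kappa, g = kappa_g(-1.25, 1.0)
# round trip: recover the lattice parameters from (kappa, g) via (4.51)
mu2_back = (1.0 - 2.0 * g) / kappa - 4.0
lam_back = g / kappa ** 2
print(kappa, g, mu2_back, lam_back)
```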
The phase diagram will be drawn originally in the μ_{0l}^2 versus λ_l plane. This is the lattice phase
diagram. This should be extrapolated to the infinite volume limit L = Na → ∞.
The Euclidean quantum field theory phase diagram should be drawn in terms of the
renormalized parameters and is obtained from the lattice phase diagram by taking the limit
a → 0. In two dimensions the φ^4 theory requires only mass renormalization while the
quartic coupling constant is finite. Indeed, the bare mass μ_0^2 diverges logarithmically when
we remove the cutoff, i.e. in the limit Λ = 1/a → ∞, while λ is independent of
a. As a consequence, the lattice parameters will go to zero in the continuum limit a → 0.
We know that mass renormalization is due to the tadpole diagram, which is the only
divergent Feynman diagram in the theory, and takes the form of a simple reparametrization
given by
μ_0^2 = μ^2 - δμ^2, (4.54)
where μ^2 is the renormalized mass parameter and δμ^2 is the counter term which is fixed
via an appropriate renormalization condition. The ultraviolet divergence ln Λ of μ_0^2
is contained in δμ^2 while the renormalization condition will split the finite part of μ_0^2
between μ^2 and δμ^2. The choice of the renormalization condition can be quite arbitrary. A
convenient choice, suitable for Monte Carlo measurements and which distinguishes between
the two phases of the theory, is given by the usual normal ordering prescription [2].
(4.56)
This should certainly work in the symmetric phase where μ^2 > 0. We can also write this
as
δμ^2 = 3λ A_{μ^2}. (4.59)
with p = 2πm/L where m = -N/2 + 1, -N/2 + 2, ..., N/2 for N even. This means that
the zero of the x-space is located at the edge of the box while the zero of the p-space
is located in the middle of the box. We have therefore the normalization conditions
Σ_x exp(i(p - p')x) = δ_{p,p'} and Σ_p exp(i(x - x')p) = δ_{x,x'} where, for example, Σ_p ≡
Σ_m /L^2. In the infinite volume limit defined by L = Na → ∞ with a fixed we have
Σ_p → ∫_{-π/a}^{π/a} d^2 p/(2π)^2. It is not difficult to show that on the lattice the propagator
1/(p^2 + μ^2) becomes a^2/(4 Σ_μ sin^2(a p_μ/2) + μ_l^2) [1]. Thus on a finite volume lattice with
periodic boundary conditions the Feynman diagram A_{μ^2} takes the form
A_{μ^2} = Σ_{p_1,p_2} a^2 / ( 4 sin^2(a p_1/2) + 4 sin^2(a p_2/2) + μ_l^2 )
= (1/N^2) Σ_{m_1=1}^{N} Σ_{m_2=1}^{N} 1 / ( 4 sin^2(π m_1/N) + 4 sin^2(π m_2/N) + μ_l^2 ). (4.62)
In the last line we have shifted the integers m_1 and m_2 by N/2. Hence on a finite volume
lattice with periodic boundary conditions equation (4.54), together with equation (4.59),
becomes
Given the critical value of μ_{0l}^2 for every value of λ_l we need then to determine the corre-
sponding critical value of μ_l^2. This can be done numerically using the Newton-Raphson
algorithm. The continuum limit a → 0 is then given by extrapolating the results to
the origin, i.e. taking λ_l = a^2 λ → 0, μ_l^2 = a^2 μ^2 → 0, in order to determine the critical
value
f_c = lim_{λ_l, μ_l^2 → 0} λ_l / μ_{lc}^2. (4.64)
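The tadpole sum (4.62) and the Newton-Raphson search can be sketched as follows. The condition F = 0 below uses the schematic form μ_l^2 - 3 λ_l A(μ_l^2) - μ_{0l}^2 = 0, an assumption modeled on equations (4.54) and (4.59); the actual renormalization condition is equation (4.63) of the text.

```python
import math

def A(mu2_l, N=64):
    """The lattice Feynman diagram of equation (4.62)."""
    total = 0.0
    for m1 in range(1, N + 1):
        for m2 in range(1, N + 1):
            total += 1.0 / (4.0 * math.sin(math.pi * m1 / N) ** 2
                            + 4.0 * math.sin(math.pi * m2 / N) ** 2 + mu2_l)
    return total / N ** 2

def critical_mu2(mu02_l, lam_l, N=64, x=0.1):
    """Newton-Raphson with a finite-difference derivative F'."""
    for _ in range(100):
        F = x - 3.0 * lam_l * A(x, N) - mu02_l
        eps = 1e-6 * max(abs(x), 1.0)
        Fp = ((x + eps - 3.0 * lam_l * A(x + eps, N)) -
              (x - 3.0 * lam_l * A(x, N))) / eps
        x_next = x - F / Fp
        if abs(x_next - x) < 1e-12:
            return x_next
        x = x_next
    return x

mu2 = critical_mu2(-1.25, 1.0)
print(mu2)
```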
<S>. (4.65)
M = (1/N^2) ⟨m⟩ , m = | Σ_n Φ_n |. (4.67)
U = 1 - ⟨m^4⟩ / (3 ⟨m^2⟩^2). (4.69)
We note the use of the absolute value in the definition of the magnetization since the
usual definition M = ⟨Σ_n Φ_n⟩/N^2 is automatically zero on the lattice because of the
symmetry Φ → -Φ. The specific heat diverges at the critical point logarithmically as
the lattice size is sent to infinity. The susceptibility also shows a peak at the critical point
whereas the Binder cumulant exhibits a fixed point for all values of N.
We run simulations with Tth + Tmc × Tco steps, with Tth = 2^13 thermalization steps
and Tmc = 2^14 measurement steps. Every two successive measurements are separated by
Tco = 2^3 steps to reduce auto-correlations. We use ran2 as our random number generator
and the Jackknife method to estimate error bars. The hybrid Monte Carlo code used in
these simulations can be found in the last chapter.
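The Jackknife error estimate mentioned above can be sketched as follows (the standard delete-one jackknife; the book's code may use a binned variant):

```python
def jackknife(data, estimator=lambda xs: sum(xs) / len(xs)):
    """Delete-one jackknife error estimate for an estimator of the data."""
    n = len(data)
    full = estimator(data)
    partial = [estimator(data[:i] + data[i + 1:]) for i in range(n)]
    mean_partial = sum(partial) / n
    var = (n - 1) / n * sum((p - mean_partial) ** 2 for p in partial)
    return full, var ** 0.5

value, err = jackknife([1.0, 2.0, 3.0, 4.0])
print(value, err)
```

For the sample mean this reproduces the usual standard error s/sqrt(n), which is a quick sanity check of the implementation.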
We have considered lattices with N = 16, 32 and 49 and values of the quartic coupling
given by λ_l = 1, 0.7, 0.5, 0.25. Some results are shown on figure (4.1). The critical value
μ_{0l}^2 for each value of λ_l is found from averaging the values at which the peaks in the specific
heat and the susceptibility occur. The results are shown on the second column of table
(4.1). The final step is to take the continuum limit a → 0 in order to find the critical value
μ_l^2 by solving the renormalization condition (4.63) using the Newton-Raphson method.
This is an iterative method based on a single iteration given by μ_l^2' = μ_l^2 - F/F'. The
corresponding results are shown on the third column of table (4.1). The critical line is
shown on figure (4.2) with a linear fit going through the origin.
This should be compared with the much more precise result λ_l = 10.8 μ_l^2 published in [3].
The above result is sufficient for our purposes here.
λ_l     μ_{0l}^2        μ_l^2
1.0     -1.25 ± 0.05    1.00 × 10^{-1}
0.7     -0.95 ± 0.05    6.89 × 10^{-2}
0.5     -0.70 ± 0.00    5.52 × 10^{-2}
0.25    -0.40 ± 0.00    2.53 × 10^{-2}
Table 4.1:
Figure 4.1: The specific heat (λ_l = 0.7), the susceptibility (λ_l = 0.5), the Binder cumulant (λ_l = 0.25) and the magnetization (λ_l = 1.0) as functions of μ_{0l}^2 for N = 16, 32, 49.
Figure 4.2: The critical line in the λ_l versus μ_l^2 plane together with a linear fit through the origin.
Bibliography
ρ(λ) = (1/Nπ) (2Cλ^2 + B + Cδ^2) sqrt(δ^2 - λ^2)
= (1/πg̃) ( (1/2)λ^2 - 1 + r̄^2 ) sqrt(4r̄^2 - λ^2). (5.3)
This is a single cut solution with the cut defined by
-2r̄ ≤ λ ≤ 2r̄. (5.4)
r̄ = δ/2. (5.5)
δ^2 = (1/3C) ( -B + sqrt(B^2 + 12NC) ) , i.e. r̄^2 = (1/3) ( 1 + sqrt(1 + 3g̃) ). (5.6)
ρ(λ) = (2C|λ|/Nπ) sqrt( (λ^2 - δ_1^2)(δ_2^2 - λ^2) )
= (|λ|/2πg̃) sqrt( (λ^2 - r_-^2)(r_+^2 - λ^2) ). (5.7)
Here there are two cuts defined by
r_- ≤ |λ| ≤ r_+. (5.8)
r_- = δ_1 , r_+ = δ_2. (5.9)
r_±^2 = (1/2C) ( -B ± sqrt(B^2 - 4NC) ) = 2 ( 1 ± sqrt(1 - g̃) ). (5.10)
A third order transition between the above two phases occurs at the critical point
g̃_c = 1 ⟺ B_c^2 = 4NC ⟺ B_c = -2 sqrt(NC). (5.11)
There is a third phase in this model: the so-called Ising or uniform ordered phase which,
despite the fact that it is not stable, plays an important role in generalizations of this
model, such as the one discussed in the next section, towards noncommutative φ^4.
(Γ_3)_{lm} = l δ_{lm} , (Γ)_{lm} = sqrt( (m - 1)(1 - (m - 1)/(N + 1)) ) δ_{l,m-1} , (E)_{lm} = (l - 1/2) δ_{lm}. (5.17)
The relationship between the parameters a and r^2 is given by
r^2 = 2aN. (5.18)
Z = ∫ dM exp(-S[M])
= ∫ dΛ Δ^2(Λ) exp( -Tr(bΛ^2 + cΛ^4) ) ∫ dU exp( -r^2 K[UΛU^{-1}] ). (5.19)
The second line involves the diagonalization of the matrix M (more on this below). The
calculation of the integral over U ∈ U(N) is a very long calculation done in [2,3]. The end
result is a multi-trace effective potential given by (assuming the symmetry M → -M)
S_eff = Σ_i (b λ_i^2 + c λ_i^4) - (1/2) Σ_{i≠j} ln(λ_i - λ_j)^2
+ (r^2/8) v_{2,1} Σ_{i≠j} (λ_i - λ_j)^2 + (r^4/48) v_{4,1} Σ_{i≠j} (λ_i - λ_j)^4 - (r^4/24N^2) v_{2,2} ( Σ_{i≠j} (λ_i - λ_j)^2 )^2 + ... .
(5.20)
V = ( b + (aN^2/2) v_{2,1} ) Tr M^2 + ( c + (a^2 N^3/6) v_{4,1} ) Tr M^4 - (2a^2 N^2/3) η (Tr M^2)^2. (5.21)
This can also be solved exactly as shown in [2]. The strength of the multi-trace term is
given by
η = v_{2,2} - (3/4) v_{4,1}. (5.22)
CP and MFT, B.Ydri 173
The coefficients v_{2,1}, v_{4,1} and v_{2,2} are given by the following two competing calculations
of [2] and [3], given respectively by
v_{2,1} = 1 , v_{4,1} = 0 , v_{2,2} = 1/8. (5.23)
v_{2,1} = 1 , v_{4,1} = 3/2 , v_{2,2} = 0. (5.24)
This discrepancy is discussed in [2].
V = Tr ( B M^2 + C M^4 ) + D ( Tr M^2 )^2. (5.25)
We may include the odd terms found in [2] without any real extra effort. We will not do
this here for simplicity, but we will include them for completeness in the attached code.
The partition function (path integral) is given by
Z = ∫ dM exp(-V). (5.26)
The relationship between the two sets of parameters {a, b, c} and {B, C, D} is given by
B̃ ≡ B/N^{3/2} = b + (ā/2) v_{2,1} , C̃ ≡ C/N^2 = c + (ā^2/6) v_{4,1} , D = -(2ā^2 N/3) η. (5.28)
Only two of these three parameters are independent. For consistency of the large N
limit, we must choose ā to be any fixed number. We then choose for simplicity ā = 1 or
equivalently D = -2Nη/3.^2
M = U Λ U^{-1}. (5.29)
We compute
δM = U ( δΛ + [U^{-1} δU, Λ] ) U^{-1}. (5.30)
^2 The authors of [1] chose instead a = 1.
CP and MFT, B.Ydri 174
We count N^2 real degrees of freedom as there should be. The measure is therefore given
by
dM = ∏_i dλ_i ∏_{i≠j} dV_{ij} dV*_{ij} sqrt(det(metric))
= ∏_i dλ_i ∏_{i≠j} dV_{ij} dV*_{ij} sqrt( ∏_{i≠j} (λ_i - λ_j)^2 ). (5.32)
We write this as
dM = Δ^2(λ) ∏_i dλ_i dU. (5.33)
The dU is the usual Haar measure over the group SU(N), which is normalized such that
∫ dU = 1, whereas the Jacobian Δ^2(λ) is precisely the so-called Vandermonde determinant
defined by
Δ^2(λ) = ∏_{i>j} (λ_i - λ_j)^2. (5.34)
We will use the Metropolis algorithm to study this model. Under the change λ_i → λ_i + h
of the eigenvalue λ_i the above effective potential changes as V_eff → V_eff + ΔV_{i,h}, where
ΔS_2 = h^2 + 2hλ_i. (5.38)
ΔS_Vand = -2 Σ_{j≠i} ln | 1 + h/(λ_i - λ_j) |. (5.40)
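A single-eigenvalue Metropolis update can be sketched as below. The model-dependent part of the variation is illustrated with the pure potential V = Σ_i (b λ_i^2 + c λ_i^4) plus the Vandermonde contribution (5.40); this is an assumption standing in for the full multitrace ΔV_{i,h}.

```python
import math
import random

def delta_S(lam, i, h, b, c):
    """Change of the action under lam[i] -> lam[i] + h (pure quartic
    potential plus the Vandermonde term of equation (5.40))."""
    x = lam[i]
    dV = b * ((x + h) ** 2 - x ** 2) + c * ((x + h) ** 4 - x ** 4)
    dVand = 0.0
    for j, y in enumerate(lam):
        if j != i:
            dVand -= 2.0 * math.log(abs(1.0 + h / (x - y)))
    return dV + dVand

def metropolis_sweep(lam, b, c, step, rng):
    accepted = 0
    for i in range(len(lam)):
        h = step * (2.0 * rng.random() - 1.0)
        if rng.random() < math.exp(min(0.0, -delta_S(lam, i, h, b, c))):
            lam[i] += h
            accepted += 1
    return accepted / len(lam)

rng = random.Random(7)
lam = [0.1 * (i - 4.5) for i in range(10)]   # distinct starting eigenvalues
for sweep in range(200):
    metropolis_sweep(lam, b=1.0, c=1.0, step=0.3, rng=rng)
print(sorted(lam))
```

The logarithmic Vandermonde repulsion keeps the eigenvalues distinct, which is why no explicit collision check is needed in the sweep.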
Cv/N^2 = 1/4 + (2r̄^4)/27 + (r̄/27)(2r̄^2 - 3) sqrt(r̄^2 + 3) , r̄ > 1. (5.42)
This behavior is also confirmed in Monte Carlo simulation as shown for c = 4 and N = 8
and N = 10 on figure (5.2).
The above one-cut-to-two-cut transition persists largely unchanged in the quartic mul-
titrace matrix model (5.21). On the other hand, and similarly to the above pure quartic
matrix model, the Ising phase is not stable in this case and as a consequence the transition
between non-uniform order and uniform-order is not observed in Monte Carlo simulations.
The situation is drastically different if odd multitrace terms are included.
Figure 5.1: The eigenvalues distributions of the quartic matrix model for N = 10 and g̃ = 3, 2, 1, 0.5 compared with the theoretical predictions.
Figure 5.2: The specific heat Cv/N^2 as a function of b for c = 4 and N = 8, 10.
1 = . (5.45)
S_1 = S[λ_1] = S_0. (5.46)
Equivalently
S_4 λ^4 + S_2 λ^2 - S_0 = 0. (5.47)
The diagonal element Q_{ii} comes with a factor a = 1 while the off diagonal elements come
with a factor a = 2. Thus we choose
Q_{ii} = (1/2) sqrt(N/g) z_{ii} |_{a=1} + (M^2)_{ii} , Q_{ij} = (1/2) sqrt(N/g) (x_{ij} + i y_{ij})/sqrt(2) |_{a=1} + (M^2)_{ij}. (5.56)
l_i = sqrt(N/g) ( 1 - sqrt(g/N) Q_{ii} ) , h_i = (1/2) sqrt(N/g) Σ_{j≠i} ( Q_{ij} M_{ji} + Q_{ji} M_{ij} ). (5.58)
Thus the diagonal elements M_{ii} are Gaussian numbers which come with factors a = l_i.
Thus we choose
M_{ii} = ( x_{ii}/sqrt(l_i) ) |_{a=1} + h_i/(2 l_i). (5.59)
Finally, the part of the path integral which depends on the off diagonal element M_{ij} is
given by
∫ ∏_{i≠j} dM_{ij} dM*_{ij} exp( Σ_{i≠j} ( -l_{ij} M_{ij} M*_{ij} + h_{ij} M*_{ij} + h*_{ij} M_{ij} ) ) =
∫ ∏_{i≠j} dM_{ij} dM*_{ij} exp( -Σ_{i≠j} l_{ij} | M_{ij} - h_{ij}/l_{ij} |^2 + ... ). (5.60)
l_{ij} = sqrt(N/g) ( 1 - (1/2) sqrt(g/N) (Q_{ii} + Q_{jj}) ) , h_{ij} = (1/4) sqrt(N/g) ( Σ_{k≠i} Q_{ik} M_{kj} + Σ_{k≠j} Q_{kj} M_{ik} ). (5.61)
Hence the off diagonal elements M_{ij} are Gaussian numbers which come with factors a = l_{ij}.
Thus we choose
M_{ij} = ( (x_{ij} + i y_{ij})/sqrt(l_{ij}) ) |_{a=1} + h_{ij}/l_{ij}. (5.62)
This algorithm can also be applied quite effectively to simple Yang-Mills matrix models
as done for example in [6, 7].
Bibliography
[1] F. Garcia Flores, X. Martin and D. O'Connor, Simulation of a scalar field on a fuzzy
sphere, Int. J. Mod. Phys. A 24, 3917 (2009) [arXiv:0903.1986 [hep-lat]].
[2] B. Ydri, A Multitrace Approach to Noncommutative Φ^4_2, arXiv:1410.4881 [hep-th].
[3] D. O'Connor and C. Saemann, Fuzzy Scalar Field Theory as a Multitrace Matrix
Model, JHEP 0708, 066 (2007) [arXiv:0706.2493 [hep-th]].
[4] N. Kawahara, J. Nishimura and A. Yamaguchi, Monte Carlo approach to nonpertur-
bative strings - Demonstration in noncritical string theory, JHEP 0706, 076 (2007)
[hep-th/0703209].
[5] M. Panero, Numerical simulations of a non-commutative theory: The Scalar model
on the fuzzy sphere, JHEP 0705, 082 (2007) [hep-th/0608202].
[6] T. Hotta, J. Nishimura and A. Tsuchiya, Dynamical aspects of large N reduced
models, Nucl. Phys. B 545, 543 (1999) [hep-th/9811220].
[7] T. Azuma, S. Bal, K. Nagao and J. Nishimura, Nonperturbative studies of fuzzy
spheres in a matrix model with the Chern-Simons term, JHEP 0405, 005 (2004)
[hep-th/0401038].
Chapter 6
iff the error function e(x) takes its maximum absolute value at at least n + 2 points on
the unit interval, which may include the end points, and furthermore the sign of the error
alternates between the successive extrema.
We can go from the function f(x) defined in the interval [-1, +1] to a function f(y)
defined in a generic interval [a, b] by considering the transformation x → y given by
x = ( y - (1/2)(b + a) ) / ( (1/2)(b - a) ). (6.4)
A simple proof of this theorem can be found in [4]. This goes as follows:
Chebyshev's criterion is necessary: If the error has fewer than n + 2
alternating extrema then the approximation can be improved. Let p(x) be
a polynomial for which the error e(x) = p(x) - f(x) has fewer than n + 2 alternating
extrema. The next largest extremum of the error, corresponding to a local extremum,
is therefore smaller by some non-zero gap δ. Between any two successive alternating
extrema the error obviously will pass through zero at some point z_i. If we assume that we
have d + 1 alternating extrema, then we will have d zeros z_i. We can trivially construct
the polynomial
u(x) = A ∏_i (x - z_i). (6.5)
We choose A such that the sign of u(x) is opposite to the sign of e(x) at the alternating
extrema and its magnitude ε' is less than δ, viz
u(x_i) e(x_i) < 0 , ε' = max_{0≤x≤1} |u(x)| < δ. (6.6)
We consider now the polynomial p'(x) = p(x) + u(x) with corresponding error func-
tion e'(x) = e(x) + u(x). The first condition u(x_i) e(x_i) < 0 leads directly to the
conclusion that |e'(x)| is less than |e(x)| in the domain of the alternating
extrema, whereas it is the condition ε' < δ that leads to the conclusion that |e'(x)|
is less than |e(x)| in the domain of the next largest extremum. Thus |e'(x)| < |e(x)|
throughout and hence p'(x) is a better polynomial approximation.
Chebyshev's criterion is sufficient: If the error is extremal at exactly
n + 2 alternating points then the approximation is optimal. Let us assume
that there is another polynomial p'(x) which provides a better approximation. This
means that the uniform norm ||e'|| = max_{0≤x≤1} |e'(x)| = max_{0≤x≤1} |p'(x) - f(x)| is
less than ||e|| = max_{0≤x≤1} |e(x)| = max_{0≤x≤1} |p(x) - f(x)|. Equivalently we must
have at the n + 2 extrema of e(x_i) the inequalities
|e'(x_i)| < |e(x_i)|. (6.7)
From the results T_{n±1} = cos nθ cos θ ∓ sin nθ sin θ, where x = cos θ, we deduce the recursion relation
T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x).
These polynomials are orthogonal in the interval [-1, 1] with a weight 1/(1 - x^2)^{1/2}, viz
∫_{-1}^{+1} dx/sqrt(1 - x^2) T_i(x) T_j(x) = (π/2) δ_{ij}. (6.13)
∫_{-1}^{+1} dx/sqrt(1 - x^2) T_0(x) T_0(x) = π. (6.14)
The zeros of the polynomial T_n(x) are given by
T_n(cos θ) = 0 ⟺ cos nθ = 0 ⟺ nθ = (2k - 1)π/2 ⟺ x = cos( (2k - 1)π/(2n) ) , k = 1, 2, ..., n. (6.15)
Since the angle θ is in the interval between 0 and π, there are therefore n zeros.
The derivative of T_n is given by
d/dx T_n = -sin(n cos^{-1} x) · n · (d/dx) cos^{-1} x
= n sin(n cos^{-1} x) / sqrt(1 - x^2). (6.16)
Σ_{k=1}^{m} T_0(x_k) T_0(x_k) = m. (6.19)
In the above two equations i, j < m and x_k, k = 1, ..., m, are the m zeros of the Chebyshev
polynomial T_m(x).
Since T_n(x) has n + 1 extrema which alternate in value between -1 and +1 for -1 ≤
x ≤ 1, and since the leading coefficient of T_n(x) is 2^{n-1}, the polynomial p_n(x) = x^n -
2^{1-n} T_n(x) is the best polynomial approximation of degree n - 1, with uniform weight,
to the function x^n over the interval [-1, 1]. This is because by construction the error
e_n(x) = p_n(x) - x^n = -2^{1-n} T_n(x) satisfies Chebyshev's criterion. The magnitude of the
error is just ||e_n|| = 2^{1-n} = 2 e^{-n ln 2}, i.e. the error decreases exponentially with n.
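This statement is easy to check numerically: the sketch below evaluates the error -2^{1-n} T_n(x) on a fine grid, using the recursion relation for T_n, and verifies that its maximum magnitude is 2^{1-n}:

```python
def T(n, x):
    """Chebyshev polynomial by the recursion T_{n+1} = 2x T_n - T_{n-1}."""
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t1

n = 5
# The grid includes x = -1 and x = +1, where |T_n| = 1 is attained.
err = max(abs(2.0 ** (1 - n) * T(n, -1.0 + 2.0 * k / 10000)) for k in range(10001))
print(err)   # 2**(1-n) = 0.0625
```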
Chebyshev approximation: Let f(x) be an arbitrary function in the interval [-1, +1].
The Chebyshev approximation of this function can be constructed as follows. Let N be
some large degree and x_k, k = 1, ..., N, be the zeros of the Chebyshev polynomial T_N(x).
The function f(x) can be approximated by the polynomial of order N defined by
f_N(x) = Σ_{k=1}^{N} c_k T_{k-1}(x) - (1/2) c_1. (6.20)
This approximation is exact for x equal to all of the N zeros of T_N(x). Indeed, we can
show
Σ_{k=1}^{N} T_{l-1}(x_k) f_N(x_k) = Σ_{j=1}^{N} c_j Σ_{k=1}^{N} T_{l-1}(x_k) T_{j-1}(x_k) - (1/2) c_1 Σ_{k=1}^{N} T_{l-1}(x_k)
= (N/2) c_l , l = 1, ..., N. (6.22)
In other words,
c_l = (2/N) Σ_{k=1}^{N} f(x_k) T_{l-1}(x_k). (6.23)
For very large N, the polynomial f_N becomes very close to the function f. The polynomial
f_N can be "gracefully truncated", to use the words of [5], to a lower degree m << N
by considering
f_m(x) = Σ_{k=1}^{m} c_k T_{k-1}(x) - (1/2) c_1. (6.24)
The error for rapidly decreasing c_k, which is given by the difference between f_N and f_m, is
dominated by c_{m+1} T_m which has m + 1 equal extrema distributed smoothly and uniformly
in the interval [-1, +1]. Since the T's are bounded between -1 and +1 the total error
is bounded by the sum of the neglected c_k, k = m + 1, ..., N. The Chebyshev approximation f_m(x) is
very close to the minimax polynomial which has the smallest maximum deviation from
the function f(x). Although the calculation of the Chebyshev polynomial f_m(x) is very
easy, finding the actual minimax polynomial is very difficult in practice.
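The construction of the coefficients and the truncation can be sketched as follows, using the closed formula c_k = (2/N) Σ_j f(x_j) T_{k-1}(x_j) over the zeros x_j of T_N, which follows from equation (6.22), and the identity T_{k-1}(cos θ) = cos((k-1)θ):

```python
import math

def cheb_coeffs(f, N):
    """Chebyshev coefficients c_1..c_N evaluated at the N zeros of T_N."""
    xs = [math.cos(math.pi * (2 * j - 1) / (2 * N)) for j in range(1, N + 1)]
    return [2.0 / N * sum(f(x) * math.cos((k - 1) * math.acos(x)) for x in xs)
            for k in range(1, N + 1)]

def cheb_eval(cs, x, m):
    """Gracefully truncated sum f_m(x) = sum_{k=1}^m c_k T_{k-1}(x) - c_1/2."""
    total = -0.5 * cs[0]
    for k in range(1, m + 1):
        total += cs[k - 1] * math.cos((k - 1) * math.acos(x))
    return total

cs = cheb_coeffs(math.sin, 30)
err = max(abs(cheb_eval(cs, -1 + 2 * k / 1000, 9) - math.sin(-1 + 2 * k / 1000))
          for k in range(1001))
print(err)
```

Because the Chebyshev coefficients of sin decay very rapidly, truncating at m = 9 already gives a tiny uniform error over [-1, 1].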
sin x = x - x^3/6 + x^5/120. (6.25)
The domain of definition of sin x can be taken to be the interval [-π, π]. By making
the replacement x → πx we convert the domain of definition [-π, π] into the domain
[-1, 1], viz
sin πx = πx - π^3 x^3/6 + π^5 x^5/120. (6.26)
The error in the above quintic approximation is estimated by the first neglected term
evaluated at the end points x = ±1, viz
π^7 x^7/7! |_{x=±1} = 0.6. (6.27)
The error in the 7th degree polynomial approximation can be found in the same way. We
get in this case π^9 x^9/9! |_{x=±1} = 0.08.
The monomials x^k can be given in terms of Chebyshev polynomials by the formulas
x^k = (1/2^{k-1}) [ T_k(x) + (k!/(1!(k-1)!)) T_{k-2}(x) + (k!/(2!(k-2)!)) T_{k-4}(x) + ... + (k!/(((k-1)/2)!((k+1)/2)!)) T_1(x) ] , k odd. (6.28)
x^k = (1/2^{k-1}) [ T_k(x) + (k!/(1!(k-1)!)) T_{k-2}(x) + (k!/(2!(k-2)!)) T_{k-4}(x) + ... + (1/2)(k!/((k/2)!(k/2)!)) T_0(x) ] , k even. (6.29)
For example
x = T_1(x). (6.30)
x^3 = (1/4) [ T_3(x) + 3 T_1(x) ]. (6.31)
x^5 = (1/16) [ T_5(x) + 5 T_3(x) + 10 T_1(x) ]. (6.32)
By substitution we get the result
sin πx = πx - π^3 x^3/6 + π^5 x^5/120
= (π/192)(192 - 24π^2 + π^4) T_1 - (π^3/384)(16 - π^2) T_3 + (π^5/1920) T_5. (6.33)
Since |T_n| ≤ 1, the last term is of the order of 0.16. This is smaller than the error
found in the quintic approximation above. By truncating this term we obtain a cubic
approximation of the sine function given by
sin πx = (π/192)(192 - 24π^2 + π^4) T_1 - (π^3/384)(16 - π^2) T_3. (6.34)
By substituting the Chebyshev polynomials by their expressions in terms of the x^k, and
then changing back to the interval [-π, +π], we obtain the cubic polynomial
sin x = (383/384) x - (5/32) x^3. (6.35)
By construction this cubic approximation is better than the above considered quintic
approximation.
iff the error function e(x) takes its maximum absolute value at at least n + d + 2 points
on the unit interval, which may include the end points, and furthermore the sign of the
error alternates between the successive extrema.
A simple proof of this theorem can be found in [4]. As can be shown, rational
approximations are far superior to polynomial ones since, for some functions and
some intervals, we can achieve substantially higher accuracy with the same number of
coefficients. However, it should also be appreciated that constructing the rational approx-
imation is much more difficult than constructing the polynomial one.
We will further explain this very important theorem following the discussion of [5].
The rational function r_{n,d} is the ratio of two polynomials p_n and q_d of degrees n and d
respectively, viz
r_{n,d}(x) = p_n(x)/q_d(x). (6.37)
The polynomials p_n and q_d can be written as
p_n(x) = p_0 + p_1 x + ... + p_n x^n , q_d(x) = 1 + q_1 x + ... + q_d x^d. (6.38)
We will assume that r_{n,d} is non degenerate, i.e. it has no common polynomial factors in
numerator and denominator. The error function e(x) is the deviation of r_{n,d} from f(x)
with a maximum absolute value e, viz
e(x) = r_{n,d}(x) - f(x) , e = max_{0≤x≤1} |e(x)|. (6.39)
Equation (6.37) can be rewritten as
n d
0 + 1 x + ... + n x = (f (x) + e(x)) 1 + 1 x + ... + d x . (6.40)
There are n + d + 1 unknowns a_i and b_i, plus one more unknown, the error function e(x). We
can choose the rational approximation r_{n,d}(x) to be exactly equal to the function f(x) at
n + d + 1 points x_i in the interval [−1, 1], viz

f(x_i) = r_{n,d}(x_i) , e(x_i) = 0. (6.41)
As a consequence the n + d + 1 unknowns a_i and b_i will be given by the n + d + 1 linear
equations

a_0 + a_1 x_i + ... + a_n x_i^n = f(x_i) (1 + b_1 x_i + ... + b_d x_i^d). (6.42)
If we choose the x_i to be the extrema of the error function e(x), then the values e(x_i) will
be exactly ±e, where e is the maximal value of |e(x)|. We get then n + d + 2 (not n + d + 1)
equations for the unknowns a_i, b_i and e given by

a_0 + a_1 x_i + ... + a_n x_i^n = (f(x_i) ± e) (1 + b_1 x_i + ... + b_d x_i^d). (6.44)
The ± signs are due to the fact that successive extrema alternate between −e and +e.
Although this is not exactly a linear system, since e enters non-linearly, it can still
be solved using, for example, methods such as Newton-Raphson.
M v = 0. (6.46)
Several drawbacks of this algorithm are noted in [4,5]. Among these, we mention here the
slow rate of convergence and the necessity of multiple precision arithmetic.
Zolotarev's Theorem: The cases of rational approximations of the sign function, the
square root and the inverse square root are known analytically, in the sense that the coef-
ficients of the optimal and unique Chebyshev rational approximations are known exactly.
This result is due to Zolotarev.
The Numerical Recipes algorithm: A much simpler, but sloppier, approxi-
mation, which is claimed in [5] to be within a fraction of a least significant bit of the
minimax one, and in which we try to bring the error not to zero as in the minimax case
but to some consistent value, can be constructed as follows:
In the case that the number of points x_i is larger than n + d + 1 we can use the singular
value decomposition method to solve this system. The solution will provide our
starting rational approximation r_{n,d}(x). We compute e(x_i) and e.
We solve for a_i and b_j the linear system

a_0 + a_1 x_i + ... + a_n x_i^n = (f(x_i) ± e) (1 + b_1 x_i + ... + b_d x_i^d). (6.49)

The sign ± is chosen to be the sign of the observed error function e(x_i) at each point x_i.
We repeat the second step several times.
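The linear step (6.42) can be sketched as follows for the simplest non-trivial case n = d = 1; the choice f(x) = e^x on [0, 1] and the three interpolation points are illustrative assumptions, not taken from the text:

```python
# Fix n = d = 1 and require r_{1,1}(x) = (a0 + a1*x)/(1 + b1*x) to agree with
# f(x) = exp(x) at n + d + 1 = 3 chosen points.  Written as a linear system in
# the unknowns (a0, a1, b1):  a0 + a1*x_i - f(x_i)*b1*x_i = f(x_i).
import math

def solve_linear(A, y):
    """Solve A u = y by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        u[r] = (M[r][n] - sum(M[r][c] * u[c] for c in range(r + 1, n))) / M[r][r]
    return u

f = math.exp
xs = [0.0, 0.5, 1.0]                       # the n + d + 1 interpolation points
A = [[1.0, x, -f(x) * x] for x in xs]
y = [f(x) for x in xs]
a0, a1, b1 = solve_linear(A, y)

def r11(x):
    return (a0 + a1 * x) / (1.0 + b1 * x)

# r_{1,1} reproduces f exactly at the chosen points and stays close in between.
max_err = max(abs(r11(x) - f(x)) for x in [i / 100.0 for i in range(101)])
```

In the full algorithm this linear solve is then iterated, with the points moved to the observed error extrema and the ±e terms of (6.49) added on the right-hand side.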
A x = v. (6.50)

We will find the solution by means of the conjugate gradient method, which is an iterative
algorithm suited for large sparse matrices A.
Principles of the method: The above problem is equivalent to finding the minimum
x of the function φ(x) defined by

φ(x) = (1/2) x^T A x − x^T v. (6.51)

The gradient of φ is given by

∇φ(x) = A x − v. (6.52)
This vanishes at the minimum. If not zero, it gives precisely the direction of steepest
ascent of the surface φ. The residual of the above set of equations is defined by

r = −∇φ(x) = v − A x. (6.53)
We will denote the n linearly independent vectors in the vector space to which x belongs
by p^(i), i = 1, ..., n. They form a basis in this vector space. The vector x can be expanded
as

x = Σ_{i=1}^{n} s_i p^(i) = P s. (6.54)

P is the n × n matrix of the linearly independent vectors p^(i), i.e. P_{ij} = p_i^(j), and s is the
vector of the coefficients s_i. Typically, we will start from a reference vector x_0. Thus we
write
p^(i) A p^(j) = 0 , i ≠ j. (6.56)

P^T A P = D. (6.57)

d_i = p^(i) A p^(i). (6.58)

P^T ∇φ = P^T A P s − P^T r_0
= D s − P^T r_0. (6.60)

The solution to ∇φ = 0 is then

D s − P^T r_0 = 0 ⟹ s_i = p^(i) r_0 / (p^(i) A p^(i)). (6.61)
The solution s_i found by globally minimizing φ also locally minimizes φ along the direc-
tion p^(i). Thus, starting from a vector x_0, we obtain the solution

x_1 = x_0 + s_1 p^(1) , s_1 = p^(1) r_0 / (p^(1) A p^(1)) , r_0 = v − A x_0. (6.62)
This is the local minimum of φ along a line from x_0 in the direction p^(1). Indeed, we can
check that

p^(1) ∇φ = 0 ⟹ s_1 = p^(1) r_0 / (p^(1) A p^(1)). (6.63)

The vector r_0 is the first residual at the point x_0, given by

−∇φ|_{x_0} = r_0. (6.64)

x_2 = x_1 + s_2 p^(2) , s_2 = p^(2) r_1 / (p^(2) A p^(2)) , r_1 = v − A x_1. (6.65)

This is the local minimum of φ along a line from x_1 in the direction p^(2). The vector r_1
is the new residual at the point x_1, viz

−∇φ|_{x_1} = r_1. (6.66)

x_{i+1} = x_i + s_{i+1} p^(i+1) , s_{i+1} = p^(i+1) r_i / (p^(i+1) A p^(i+1)) , r_i = v − A x_i. (6.67)

This is the local minimum of φ along a line from x_i in the direction p^(i+1). The vector r_i
is the residual at the point x_i, viz

−∇φ|_{x_i} = r_i. (6.68)
The residual vectors provide the directions of steepest descent of the function φ at each
iteration step. Thus, if we know the conjugate vectors p^(i), we can compute the coefficients
s_i and write down the solution x. Typically, a good approximation of the true minimum
of φ is obtained after only a small subset of the conjugate vectors has been visited.
Choosing the conjugate vectors: The next step is to choose a set of conjugate
vectors. An obvious candidate is the set of eigenvectors of the symmetric matrix A.
However, in practice this choice is made as follows. Given that we have reached the
iteration step i, i.e. we have reached the vector x_i which minimizes φ in the direction p^(i),
the search direction p^(i+1) will naturally be chosen in the direction of steepest descent of
the function φ at the point x_i, which, since A is positive definite, is given by the direction
of the residual r_i, but it must also be conjugate to the previous search direction p^(i). We
start then from the ansatz

p^(i+1) = r_i + β p^(i). (6.69)

p^(i) A p^(i+1) = 0. (6.70)
∇φ|_{x_j} ∇φ|_{x_i} = −r_j ∇φ|_{x_i}
= −(p^(j+1) − β p^(j)) ∇φ|_{x_i}
= 0. (6.73)
The first search direction can be chosen arbitrarily. We can for example choose p^(1) =
r_0 = −∇φ|_{x_0}. The next search direction p^(2) is by construction A-conjugate to p^(1).
At the third iteration step we obtain p^(3), which is A-conjugate to p^(2). The remaining
question is whether p^(3) is A-conjugate to p^(1) or not. In general, we would like to show
that the search direction p^(i) generated at the ith iteration step, which is A-conjugate to
p^(i−1), is also A-conjugate to all previously generated search directions p^(j), j < i − 1.
Thus we need to show that

p^(j) A p^(i) = 0 , j < i − 1. (6.74)
We compute

p^(j) A p^(i) = p^(j) A (r_{i−1} + β p^(i−1))
= p^(j) A r_{i−1} + β p^(j) A p^(i−1)
= (1/s_j) (x_j − x_{j−1}) A r_{i−1} + β p^(j) A p^(i−1)
= −(1/s_j) (r_j − r_{j−1}) r_{i−1} + β p^(j) A p^(i−1)
= β p^(j) A p^(i−1)
= 0. (6.75)
Summary: Let us now summarize the main ingredients of the above algorithm. We
have the following steps:
1) We choose a reference vector x_0. We calculate the initial residual r_0 = v − A x_0.
2) We choose the first search direction as p^(1) = r_0.
3) The first iteration towards the solution is

x_1 = x_0 + s_1 p^(1) , s_1 = p^(1) r_0 / (p^(1) A p^(1)). (6.76)

4) The subsequent search directions and step sizes are computed via

p^(i+1) = r_i + β p^(i) , β = −p^(i) A r_i / (p^(i) A p^(i)). (6.78)

s_{i+1} = p^(i+1) r_i / (p^(i+1) A p^(i+1)). (6.79)

By using equations (6.77) and (6.80) we can show that equation (6.77) can be re-
placed by the equation

r_i = r_{i−1} − s_i A p^(i). (6.81)

5) The above procedure continues as long as |r| ≥ ε, where ε is some tolerance; otherwise
we stop.
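The steps above can be condensed into a short routine. The sketch below uses the equivalent Fletcher-Reeves form of the coefficients, with the step length computed from r_n·r_n/(p_n·A p_n); the small symmetric positive definite test matrix is an illustrative assumption:

```python
# Conjugate gradient for A x = v with A symmetric positive definite:
# x_{n+1} = x_n + alpha_n p_n, r_{n+1} = r_n - alpha_n A p_n,
# p_{n+1} = r_{n+1} + beta_{n+1} p_n.
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, v, tol=1e-12, max_iter=100):
    n = len(v)
    x = [0.0] * n                  # reference vector x_0 = 0
    r = v[:]                       # initial residual r_0 = v - A x_0
    p = r[:]                       # first search direction p^(1) = r_0
    rr = dot(r, r)
    for _ in range(max_iter):
        if rr < tol * tol:
            break
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)    # step length along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        beta = rr_new / rr         # makes the new direction A-conjugate
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 3.0, 1.0, 0.0],
     [0.0, 1.0, 2.0, 1.0],
     [0.0, 0.0, 1.0, 2.0]]
v = [1.0, 2.0, 3.0, 4.0]
x = conjugate_gradient(A, v)
residual = max(abs(vi - axi) for vi, axi in zip(v, matvec(A, x)))
```

In exact arithmetic the method terminates after at most n iterations, one for each conjugate direction.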
r_{i+1} = r_i − α_i A p_i. (6.84)
p_{i+1} = r_{i+1} + β_{i+1} p_i , β_{i+1} = r_{i+1} r_{i+1} / (r_i r_i). (6.85)
We start iterating from

r_2 = r_0 − α_0 A r_0 − α_1 A(r_0 − α_0 A r_0) − α_1 β_1 A r_0 ∈ span{r_0, A r_0, A² r_0}. (6.89)

Thus, in general, r_n lies in the span of {r_0, A r_0, ..., A^n r_0}. Also

x_n = x_0 + Σ_{i=0}^{n−1} α_i p_i. (6.95)

Thus

x_n − x_0 = Q_{n−1}(A) r_0 ∈ span{r_0, A r_0, A² r_0, ..., A^{n−1} r_0}. (6.96)

The Q_{n−1}(A) is a polynomial of exact degree n − 1. Hence both the conjugate gradient
directions p_n and the solutions x_n − x_0 belong to various Krylov subspaces.
The conjugate gradient method is an example belonging to a large class of Krylov
subspace methods. It is due to Hestenes and Stiefel [8] and it is the method of choice for
solving linear systems that are symmetric positive definite or Hermitian positive definite.
We conclude this section by the following two definitions.
The residuals r_n of the above so-called Krylov space solver will satisfy

(A + σ) x^σ = v. (6.101)

r^σ_{i+1} = r^σ_i − α^σ_i (A + σ) p^σ_i. (6.103)

p^σ_{i+1} = r^σ_{i+1} + β^σ_{i+1} p^σ_i , β^σ_{i+1} = r^σ_{i+1} r^σ_{i+1} / (r^σ_i r^σ_i). (6.104)
There is clearly a loop over σ which could be very expensive in practice. Fortunately we
can solve, by following [7], the above multi-mass linear system using only a single set of
vector-matrix operations as follows. First we note that

r^σ_{i+1} = r^σ_i − α^σ_i (A + σ) p^σ_i = P^σ_{i+1}(A + σ) r_0 ∈ K_{i+2}(A + σ, r_0). (6.106)

As discussed before, the polynomials P_{i+1} are orthogonal in A + σ. This follows from the
fact that r_{i+1} ⊥ r_i, and as a consequence

P_{i+1}(A + σ) r_0 ⊥ K_{i+1}(A + σ, r_0). (6.107)
By comparing the r_{i+1} terms and also using the above two results, we find after some
calculation

ζ^σ_{n+1} = ζ^σ_n ζ^σ_{n−1} α_{n−1} / [ α_n β_n (ζ^σ_{n−1} − ζ^σ_n) + ζ^σ_{n−1} α_{n−1} (1 + σ α_n) ]. (6.116)
Let us conclude by summarizing the main ingredients of this algorithm. These are:
1. We start from

β_0 = β^σ_0 = 0 , ζ^σ_0 = ζ^σ_{−1} = 1 , α_0 = α_{−1} = 1. (6.118)
2. We iterate the unshifted (σ = 0) problem with the usual conjugate gradient recursions:

α_n = r_n r_n / (p_n A p_n) , x_{n+1} = x_n + α_n p_n. (6.119)

r_{n+1} = r_n − α_n A p_n. (6.120)

β_{n+1} = r_{n+1} r_{n+1} / (r_n r_n) , p_{n+1} = r_{n+1} + β_{n+1} p_n. (6.121)
3. We generate solutions of the sigma problems by the relations (we start from n = 0):

ζ^σ_{n+1} = ζ^σ_n ζ^σ_{n−1} α_{n−1} / [ α_n β_n (ζ^σ_{n−1} − ζ^σ_n) + ζ^σ_{n−1} α_{n−1} (1 + σ α_n) ]. (6.122)

α^σ_n = α_n ζ^σ_{n+1} / ζ^σ_n. (6.123)

x^σ_{n+1} = x^σ_n + α^σ_n p^σ_n. (6.124)

r^σ_{n+1} = ζ^σ_{n+1} r_{n+1}. (6.125)

β^σ_{n+1} = β_{n+1} (ζ^σ_{n+1} α^σ_n) / (ζ^σ_n α_n). (6.126)

p^σ_{n+1} = r^σ_{n+1} + β^σ_{n+1} p^σ_n. (6.127)
Remark how the shifted residuals are generated directly from the residuals of the no-sigma
problem.
4. The above procedure continues as long as |r| ≥ ε, where ε is some tolerance; otherwise
we stop.
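A minimal sketch of these recursions is given below, in the convention α_n = (r_n·r_n)/(p_n·A p_n), r_{n+1} = r_n − α_n A p_n (the signs inside the ζ recursion depend on this convention); the small matrix and the shifts σ are illustrative assumptions:

```python
# Multi-mass conjugate gradient: solve (A + sigma) x^sigma = v for several
# shifts sigma at the cost of a single Krylov sequence.  The shifted residual
# is r^sigma_n = zeta^sigma_n r_n, with zeta obeying a two-term recursion.
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def multimass_cg(A, v, sigmas, n_iter=30):
    n = len(v)
    x, r, p = [0.0] * n, v[:], v[:]
    xs = {s: [0.0] * n for s in sigmas}     # shifted solutions
    ps = {s: v[:] for s in sigmas}          # shifted search directions
    zeta = {s: 1.0 for s in sigmas}         # zeta_n
    zeta_old = {s: 1.0 for s in sigmas}     # zeta_{n-1}
    alpha_old, beta = 1.0, 0.0
    rr = dot(r, r)
    for _ in range(n_iter):
        if rr < 1e-24:
            break
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)
        for s in sigmas:
            z_new = (zeta[s] * zeta_old[s] * alpha_old /
                     (alpha * beta * (zeta_old[s] - zeta[s])
                      + zeta_old[s] * alpha_old * (1.0 + s * alpha)))
            alpha_s = alpha * z_new / zeta[s]
            xs[s] = [xi + alpha_s * pi for xi, pi in zip(xs[s], ps[s])]
            zeta_old[s], zeta[s] = zeta[s], z_new
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        beta_new = rr_new / rr
        for s in sigmas:
            beta_s = beta_new * (zeta[s] / zeta_old[s]) ** 2
            ps[s] = [zeta[s] * ri + beta_s * pi for ri, pi in zip(r, ps[s])]
        p = [ri + beta_new * pi for ri, pi in zip(r, p)]
        alpha_old, beta, rr = alpha, beta_new, rr_new
    return xs

A = [[3.0, 1.0, 0.0], [1.0, 2.0, 0.5], [0.0, 0.5, 1.0]]
v = [1.0, 0.0, 2.0]
sigmas = [0.5, 1.0]
xs = multimass_cg(A, v, sigmas)
res = {}
for s in sigmas:
    Ax = matvec(A, xs[s])
    res[s] = max(abs(v[i] - Ax[i] - s * xs[s][i]) for i in range(3))
```

Note that only the unshifted matrix A is ever applied to a vector; the shifted systems cost just a few extra scalar recursions and vector updates per iteration.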
We finally note that in the case of a hermitian matrix, i.e. A^+ = A, we must replace
in the above formulas the transpose by hermitian conjugation. For example, we replace
p^T_n A p_n by p^+_n A p_n. The rest remains unchanged.
Bibliography
Z_YM = ∫ Π_{μ=1}^{4} dX_μ ∫ dθ dθ̄ exp( Tr θ̄ ( i[X_4, ..] + σ_a [X_a, ..] + ρ ) θ ) exp(−S_BYM[X]). (7.1)

S_BYM = −(N/4) Σ_{μ,ν=1}^{4} Tr [X_μ, X_ν]². (7.2)
The overall coupling α of the bosonic action will be set to one, and we may add to the
bosonic Yang-Mills action a Chern-Simons term and a harmonic oscillator term with
parameters λ and m² respectively. The spinors θ and θ̄ are two independent complex
two-component Weyl spinors. They contain the same number of degrees of freedom as the
four-component real Majorana spinors in four dimensions. The scalar curvature or fermion
mass parameter is given by ρ. The above theory is only supersymmetric for a restricted
set of values of the parameters α, λ, m² and ρ. See [11] and references therein for a
discussion of this matter.
The determinant of this Dirac operator is positive definite since the eigenvalues come in
complex conjugate pairs [1]. In d = 6 and d = 10 the determinant is, however, complex
valued which presents a serious obstacle to numerical evaluation. In these three cases,
i.e. for d = 4, 6, 10, the supersymmetric path integral is well behaved. In d = 3 the
supersymmetric path integral is ill defined and only the bosonic quenched approximation
makes sense. The source of the divergence lies in the so-called flat directions, i.e. the set
of commuting matrices. See [10] and references therein.
It is possible to rewrite the Dirac action in the following form (with X_34 = X_3 + iX_4
and X_± = X_1 ± iX_2)

Tr θ̄ D θ = Tr [ θ̄_1 (X_34 + ρ) θ_1 + θ̄_1 X_− θ_2 + θ̄_2 X_+ θ_1 + θ̄_2 (X^+_34 + ρ) θ_2 ]
− Tr [ X_34 θ_1 θ̄_1 + X_− θ_2 θ̄_1 + X_+ θ_1 θ̄_2 + X^+_34 θ_2 θ̄_2 ]. (7.4)

Tr θ̄ D θ = θ̄_1 M_11 θ_1 + θ̄_1 M_12 θ_2 + θ̄_2 M_21 θ_1 + θ̄_2 M_22 θ_2. (7.7)
(M_22)_AB = Tr T^A (X^+_34 + ρ) T^B − Tr X^+_34 T^B T^A. (7.11)

We remark that

Tr (T^A)^+ T^B = δ_{i_A i_B} δ_{j_A j_B} = δ_{AB} , Tr T^A T^B = δ_{j_A i_B} δ_{j_B i_A}. (7.13)

Tr θ̄ D θ = θ̄ M θ. (7.15)
Next, we observe that the trace parts of the matrices X_a drop from the partition function.
Thus the measure should read ∫ dX_a δ(Tr X_a) instead of simply ∫ dX_a. Similarly, we
observe that if we write θ = θ_0 + c·1 (and similarly θ̄ = θ̄_0 + c̄·1), then the trace part c
will decouple from the rest since

Tr θ̄ ( i[X_4, ..] + σ_a [X_a, ..] + ρ ) θ = Tr θ̄_0 ( i[X_4, ..] + σ_a [X_a, ..] + ρ ) θ_0 + ρ N c̄ c. (7.16)

Hence, the constant fermion modes can also be integrated out from the partition func-
tion, and thus we should consider the measure ∫ dθ dθ̄ (Tr θ)(Tr θ̄) instead of ∫ dθ dθ̄.
These facts should be taken into account in the numerical study. We are thus led to
consider the partition function

Z_YM = ∫ Π_{μ=1}^{4} dX_μ δ(Tr X_μ) det D exp( −S_BYM[X] ). (7.17)
∫ dθ dθ̄ ( Σ_{A=1}^{N²} (θ)_A δ_{i_A j_A} ) ( Σ_{A=1}^{N²} (θ̄)_A δ_{i_A j_A} ) exp( θ̄ M θ )
= ∫ dθ' dθ̄' exp( θ̄' M' θ' ). (7.18)

The vectors θ' and θ̄' are (N² − 1)-dimensional. The matrix M' is 2(N² − 1) × 2(N² − 1)
dimensional, and it is given by

M'_{A'B'} = M_{A'B'} − M_{N²B'} δ_{i_{A'}j_{A'}} − M_{A'N²} δ_{i_{B'}j_{B'}} + M_{N²N²} δ_{i_{A'}j_{A'}} δ_{i_{B'}j_{B'}}. (7.19)
We remark that

M_{N²N²} = ρ. (7.20)
S_YM[X] = S_BYM[X] + V[X] , V = −ln det M'. (7.23)
We will need

∂S_BYM/∂(X_μ)_{ij} = −N Σ_{ν=1}^{4} [X_ν, [X_μ, X_ν]]_{ji}
= −N ( 2 X_ν X_μ X_ν − X_μ X_ν² − X_ν² X_μ )_{ji}. (7.24)
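Since the Yang-Mills force is a purely algebraic identity, it can be checked against a finite difference on small random real matrices (an illustrative sketch; the matrix size and seed are assumptions):

```python
# Check of the force for S_BYM = -(N/4) sum_{mu,nu} Tr [X_mu, X_nu]^2:
# dS/d(X_mu)_ij = -N sum_nu [X_nu, [X_mu, X_nu]]_ji (an algebraic identity,
# valid entry by entry for arbitrary real matrices).
import random

random.seed(0)
DIM, N = 4, 3                       # number of matrices and their size

def mat(n):
    return [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

def mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def comm(a, b):
    ab, ba = mul(a, b), mul(b, a)
    n = len(a)
    return [[ab[i][j] - ba[i][j] for j in range(n)] for i in range(n)]

def trace(a):
    return sum(a[i][i] for i in range(len(a)))

def action(X):
    s = 0.0
    for mu in range(DIM):
        for nu in range(DIM):
            c = comm(X[mu], X[nu])
            s += trace(mul(c, c))
    return -N / 4.0 * s

X = [mat(N) for _ in range(DIM)]
mu, i, j = 1, 0, 2                  # an arbitrary matrix entry to test
force = -N * sum(comm(X[nu], comm(X[mu], X[nu]))[j][i] for nu in range(DIM))

eps = 1e-6                          # central finite difference
X[mu][i][j] += eps
s_plus = action(X)
X[mu][i][j] -= 2 * eps
s_minus = action(X)
X[mu][i][j] += eps
fd = (s_plus - s_minus) / (2 * eps)
```

This kind of force check is a standard first debugging step before running any molecular dynamics trajectory.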
The determinant is real and positive definite since the eigenvalues are paired up. Thus, we
can introduce the positive definite operator Δ by

Δ = (M')^+ M'. (7.25)

(P_μ)_{ij}(n + 1/2) = (P_μ)_{ij}(n) − (δt/2) [ ∂S_BYM/∂(X_μ)_{ij} (n) + (V'_μ)_{ij}(n) ]. (7.27)

(X_μ)_{ij}(n + 1) = (X_μ)_{ij}(n) + δt (P_μ)_{ji}(n + 1/2). (7.28)

(P_μ)_{ij}(n + 1) = (P_μ)_{ij}(n + 1/2) − (δt/2) [ ∂S_BYM/∂(X_μ)_{ij} (n + 1) + (V'_μ)_{ij}(n + 1) ]. (7.29)
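The structure of the leap-frog integrator (momentum half-step, coordinate full step, momentum half-step) and its two key properties, reversibility and approximate energy conservation, can be illustrated on a single degree of freedom with action S(x) = x²/2 (a toy assumption, not the matrix model itself):

```python
# Leap-frog for one degree of freedom with H = p^2/2 + S(x), S(x) = x^2/2.
# The scheme is time-reversible and conserves H up to O(dt^2).
def grad_s(x):
    return x                        # dS/dx for S = x^2/2

def leapfrog(x, p, dt, n_steps):
    p -= 0.5 * dt * grad_s(x)       # initial momentum half-step
    for step in range(n_steps):
        x += dt * p                 # full coordinate step
        if step < n_steps - 1:
            p -= dt * grad_s(x)     # interior momentum steps
    p -= 0.5 * dt * grad_s(x)       # final momentum half-step
    return x, p

def hamiltonian(x, p):
    return 0.5 * p * p + 0.5 * x * x

x0, p0 = 1.0, 0.3
h0 = hamiltonian(x0, p0)
x1, p1 = leapfrog(x0, p0, dt=0.01, n_steps=100)
dh = abs(hamiltonian(x1, p1) - h0)
# running the trajectory backwards recovers the starting point exactly
xb, pb = leapfrog(x1, p1, dt=-0.01, n_steps=100)
```

Both properties are essential for the exactness of the hybrid Monte Carlo accept/reject step.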
The effect of the determinant is encoded in the matrix

(V'_μ)_{ij} = ∂V/∂(X_μ)_{ij}
= −(1/2) Tr Δ^{−1} ∂Δ/∂(X_μ)_{ij}. (7.30)
From (7.23) and (7.30) we see that we must compute the inverse and the determinant of
the Dirac operator at each hybrid Monte Carlo step. However, the Dirac operator is an
N' × N' matrix where N' = 2N² − 2. This is proportional to the number of degrees of
freedom. Since the computation of the determinant requires O(N'³) operations at best,
through Gaussian elimination, we see that the computational effort of the above algorithm
will be O(N⁶). Recall that the computational effort of the bosonic theory is O(N³).¹
¹ Compare also with field theory, in which the number of degrees of freedom is proportional to the volume: the
computational effort of the bosonic theory is O(V) while that of the full theory, which includes a determinant,
is O(V²).
det D = det M' = (det Δ)^{1/2}
= ∫ dφ^+ dφ exp( −φ^+ Δ^{−1/2} φ ). (7.31)

These are precisely the pseudo-fermions. They are complex-valued, instead of Grassmann-
valued, degrees of freedom (that is why they are pseudo-fermions), with a positive
definite Laplacian, and thus they can be sampled in Monte Carlo simulations in the usual
way.
Furthermore, we will use the so-called rational approximation, which is why the re-
sulting hybrid Monte Carlo is termed rational, which allows us to write

(det Δ)^{1/2} = ∫ dφ^+ dφ exp( −φ^+ r²(Δ) φ ). (7.33)

x^{−1/4} ≃ r(x) = a_0 + Σ_{α=1}^{M} a_α / (x + b_α). (7.34)

The parameters a_0, a_α, b_α and M are real positive numbers which can be optimized for
any strictly positive range, such as ε ≤ x ≤ 1. This point was discussed at great length
previously.
Thus the pseudo-fermions are given by a heatbath, viz

φ = r^{−1}(Δ) η, (7.35)

where η is sampled from a Gaussian distribution.
By using a different rational approximation r(x) ≃ x^{−1/2}, in order to avoid double
inversion (see below), we rewrite the original path integral in the form

Z_YM = ∫ Π_{μ=1}^{4} dX_μ ∫ dφ^+ dφ δ(Tr X_μ) exp( −S_BYM[X] ) exp( −φ^+ r(Δ) φ ). (7.37)
V = φ^+ r(Δ) φ
= a_0 φ^+ φ + Σ_{α=1}^{M} a_α φ^+ (Δ + b_α)^{−1} φ
= a_0 φ^+ φ + Σ_{α=1}^{M} a_α φ^+ G_α φ. (7.40)

G_α = (Δ + b_α)^{−1}. (7.42)
(φ̇)_A = ∂H/∂(Q_φ)_A = (Q_φ)_A. (7.45)
(φ)_A(n + 1) = (φ)_A(n) + δt (Q_φ)_A(n + 1/2). (7.48)

(Q_φ)_A(n + 1) = (Q_φ)_A(n + 1/2) − (δt/2) (W)_A(n + 1). (7.49)
The first set of equations of motion associated with the matrices X_μ are given by

(Ṗ_μ)_{ij} = −∂H/∂(X_μ)_{ij}
= −∂S_BYM/∂(X_μ)_{ij} − ∂V/∂(X_μ)_{ij}
= −∂S_BYM/∂(X_μ)_{ij} + Σ_{α=1}^{M} a_α φ^+ G_α ( ∂Δ/∂(X_μ)_{ij} ) G_α φ. (7.50)

The effect of the determinant is now encoded in the matrix (the force)

(V'_μ)_{ij} = −Σ_{α=1}^{M} a_α φ^+ G_α ( ∂Δ/∂(X_μ)_{ij} ) G_α φ. (7.51)

The second set of equations associated with the matrices X_μ are given by

(Ẋ_μ)_{ij} = ∂H/∂(P_μ)_{ij}
= (P_μ)_{ji}. (7.52)

The leap-frog algorithm for this part of the problem is given by the equations (7.27),
(7.28) and (7.29) with the appropriate re-interpretation of the meaning of (V'_μ)_{ij}.
z' = (M')^+ y' ⟺ (z')_{A'} = (M')^*_{B'A'} (y')_{B'}. (7.55)
Multiplication by M': By using (7.19) we have

(y')_{A'} = M'_{A'B'} (x')_{B'}
= M_{A'B'} (x')_{B'} − M_{N²B'} δ_{i_{A'}j_{A'}} (x')_{B'} − M_{A'N²} δ_{i_{B'}j_{B'}} (x')_{B'} + M_{N²N²} δ_{i_{A'}j_{A'}} δ_{i_{B'}j_{B'}} (x')_{B'}. (7.56)

Recall that the primed indices run from 1 to N² − 1 while unprimed indices run from 1
to N². We introduce then

(y)_A = M_{AB} (x)_B
= M_{AB'} (x)_{B'} + M_{AN²} (x)_{N²}. (7.57)

We define

(x)_{B'} = (x')_{B'} , (x)_{N²} = −(x')_{B'} δ_{i_{B'}j_{B'}}. (7.58)

Thus

(y)_A = M_{AB'} (x')_{B'} − M_{AN²} (x')_{B'} δ_{i_{B'}j_{B'}}. (7.59)
Thus

(x)_A = Tr x̂ T^A = (x̂)_{j_A i_A} , (y)_A = Tr ŷ T^A = (ŷ)_{j_A i_A}. (7.63)

And

(x̄)_A = Tr x̂ (T^A)^+ = (x̂)_{i_A j_A} , (ȳ)_A = Tr ŷ (T^A)^+ = (ŷ)_{i_A j_A}. (7.64)
We verify that

M_{AB} (x)_B = Tr T^A (D x̂). (7.65)

By comparing with

(y)_A = Tr T^A ŷ^T, (7.66)
we get

ŷ^T = D x̂. (7.67)

Thus ŷ^T = D x̂ is equivalent to

(ŷ_1)_{ij} = (D_1 x̂)_{ji} = [X_34, x̂_1]_{ji} + [X_−, x̂_2]_{ji} + ρ (x̂_1)_{ji}. (7.69)

(ŷ_2)_{ij} = (D_2 x̂)_{ji} = [X^+_34, x̂_2]_{ji} + [X_+, x̂_1]_{ji} + ρ (x̂_2)_{ji}. (7.70)
(ȳ)_A M_{AB} (x)_B = Tr ŷ (D x̂). (7.71)
Multiplication by (M')^+: As before, the calculation of

(z')_{A'} = (M')^*_{B'A'} (y')_{B'} (7.72)

(z)_A = M^*_{BA} (y)_B , (7.73)

(z')_{A'} = (z)_{A'} − (z)_{N²} δ_{i_{A'}j_{A'}}. (7.75)

M^*_{BA} (y)_B = Tr T^A (D^+ ŷ). (7.76)

Hence

ẑ^T = D^+ ŷ. (7.78)

Equivalently

(ẑ_1)_{ij} = (D^+_1 ŷ)_{ji} = [X^+_34, ŷ_1]_{ji} − [X_+, ŷ_2]_{ji} + ρ (ŷ_1)_{ji}. (7.79)

(ẑ_2)_{ij} = (D^+_2 ŷ)_{ji} = [X_34, ŷ_2]_{ji} − [X_−, ŷ_1]_{ji} + ρ (ŷ_2)_{ji}. (7.80)
(V'_μ)_{ij} = −Σ_{α=1}^{M} a_α φ^+ G_α ( ∂Δ/∂(X_μ)_{ij} ) G_α φ
= −Σ_{α=1}^{M} a_α φ^+ G_α ( ∂(M')^+/∂(X_μ)_{ij} ) F_α − Σ_{α=1}^{M} a_α F^+_α ( ∂M'/∂(X_μ)_{ij} ) G_α φ. (7.81)

The vectors F_α and F^+_α are defined by

F_α = M' G_α φ , F^+_α = (G_α φ)^+ (M')^+. (7.82)
X_μ = Σ_{A=1}^{N²−1} X^A_μ T^A. (7.83)
Equivalently
Hence we have

V'_A ≡ (V'_μ)_{i_A j_A}
= −Σ_{α=1}^{M} a_α φ^+ G_α ( ∂(M')^+/∂X_A ) F_α − Σ_{α=1}^{M} a_α F^+_α ( ∂M'/∂X_A ) G_α φ
= −Σ_{α=1}^{M} a_α T̄^A_α − Σ_{α=1}^{M} a_α T^A_α. (7.86)
The definition of T^A_α is obviously given by

T^A_α = F^+_α ( ∂M'/∂X_A ) G_α φ. (7.87)

For simplicity we may denote the derivatives with respect to X_A and X̄_A by ∂ and ∂̄
respectively. As before we introduce the vectors in the full Hilbert space:

(G̃_α φ)_{B'} = (G_α φ)_{B'} , (G̃_α φ)_{N²} = −(G_α φ)_{B'} δ_{i_{B'}j_{B'}}. (7.88)
(F̃_α)_{B'} = (F_α)_{B'} , (F̃_α)_{N²} = −(F_α)_{B'} δ_{i_{B'}j_{B'}}. (7.89)

(F_α)_{A'} ( ∂M'/∂X_A )_{A'B'} (G_α φ)_{B'} = (F̃_α)_A ( ∂M/∂X_A )_{AB} (G̃_α φ)_B. (7.91)

Thus

T^A_α = F̃^+_α ( ∂M/∂X_A ) G̃_α φ. (7.92)
Explicitly we have

T^A_α = (F̃_α)^+_C ( ∂M_{CD}/∂X_A ) (G̃_α φ)_D. (7.93)

We use the result

∂M_{CD}/∂X_A = Tr ( ∂M/∂X_A ) [T^D, T^C], (7.94)

where

M_11 = X_34 , M_12 = X_− , M_21 = X_+ , M_22 = X^+_34. (7.95)
We also introduce the matrices F_α and G̃_α given by

F_α = Σ_{A=1}^{N²} (F̃_α)_A T^A , G̃_α = Σ_{A=1}^{N²} (G̃_α φ)_A T^A. (7.96)
T^A_2 = i [G̃_1, F_2]_{j_A i_A} + i [G̃_2, F_1]_{j_A i_A} , T̄^A_2 = −i [G̃_1, F_2]_{i_A j_A} − i [G̃_2, F_1]_{i_A j_A}. (7.101)

T^A_3 = [G̃_1, F_1]_{j_A i_A} + [G̃_2, F_2]_{j_A i_A} , T̄^A_3 = [G̃_1, F_1]_{i_A j_A} + [G̃_2, F_2]_{i_A j_A}. (7.102)

T^A_4 = i [G̃_1, F_1]_{j_A i_A} − i [G̃_2, F_2]_{j_A i_A} , T̄^A_4 = −i [G̃_1, F_1]_{i_A j_A} + i [G̃_2, F_2]_{i_A j_A}. (7.103)
6. Other Essential Ingredients: The two other essential ingredients of this algorithm
are:
(a) Conjugate Gradient: This plays a fundamental role in this algorithm. The
multi-mass Krylov space solver employed here is based on the fundamental equa-
tions (6.117)-(6.128). This allows us to compute the G_α φ for all α given by equa-
tion (7.42) at once. The multiplication by Δ is done in two steps: first we
multiply by M', then we multiply by (M')^+. This is done explicitly by reducing
(7.54) to (7.69)+(7.70) and reducing (7.55) to (7.79)+(7.80). Here, we obviously
need to convert between a given traceless vector and its associated matrix and
vice versa. The relevant equations are (7.58), (7.60) and (7.64).
(b) Remez Algorithm: This is discussed at length in the previous chapter. We
only need to re-iterate here that the real coefficients c, d, for the rational ap-
proximation of x^{−1/4}, and a and b, for the rational approximation of x^{−1/2}, as
well as the integer M, are obtained using the Remez algorithm of [9]. The integer
M is supposed to be determined separately for each function by requiring some
level of accuracy, whereas the range over which the functions are approximated
by their rational approximations should be determined on a trial and error basis
by inspecting the spectrum of the Dirac operator.
⟨4 YM⟩ + ⟨3 CS⟩ + ⟨2 m² HO⟩ + ⟨COND⟩ = (d + 2)(N² − 1). (7.107)

⟨4 YM⟩ + ⟨3 CS⟩ + ⟨2 m² HO⟩ = (d + 2)(N² − 1). (7.108)
Figure 7.1: Monte Carlo histories of the bosonic quantities S_B, K_B (left panels) and of the fermionic quantities K_F, S_F (right panels) against molecular dynamics time, for three values (10, 1, 0) of the mass parameter.
Figure 7.2: Monte Carlo histories of H, S_B, H_F and K_B against molecular dynamics time.
Figure 7.3: Thermalized observables (the Hamiltonian H, exp(−ΔH) and the actions S, S_B, S_F) against Monte Carlo time, for vanishing Chern-Simons, harmonic oscillator and fermion mass parameters, with N = 4 and N = 8.
Figure 7.4: Expectation values of the identity, of exp(−ΔH) and of the normalized actions ⟨YM⟩/(N²−1), ⟨CS⟩/(N²−1), ⟨HO⟩/N and ⟨SF⟩/(N²−1), for m² = 0 and N = 4.
In this chapter we will follow the excellent pedagogical textbook [1], especially on
practical details regarding the implementation of the Metropolis and other algorithms for
lattice gauge theories. The classic textbooks [2-5] were also very useful.
The γ^μ are the famous 4 × 4 Dirac gamma matrices which appear in any theory containing
a spin-1/2 field. They satisfy {γ^μ, γ^ν} = 2η^{μν} where η^{μν} = diag(1, −1, −1, −1). The
electromagnetic field is given by the U(1) gauge vector field A_μ with field strength F_{μν} =
∂_μ A_ν − ∂_ν A_μ, while the fermion (electron) field is given by the spinor field ψ with mass M.
The spinor ψ is a 4-component field and ψ̄ = ψ^+ γ^0. The interaction term, proportional
to the electric charge e, is given by the last term e ψ̄ γ^μ ψ A_μ. The Euler-Lagrange classical
equations of motion derived from the above action are precisely the Maxwell equations
∂_μ F^{μν} = j^ν with j^ν = e ψ̄ γ^ν ψ, and the Dirac equation (iγ^μ ∂_μ − M − e γ^μ A_μ)ψ = 0. The
above theory is also invariant under the following U(1) gauge transformations
Before we can study this theory numerically using the Monte Carlo method we need to:
1. Rotate to Euclidean signature in order to convert the theory into a statistical field
theory.
2. Regularize the UV behavior of the theory by putting it on a lattice.
As a consequence we obtain an ordinary statistical system accessible to ordinary sampling
techniques such as the Metropolis algorithm.
We start by discussing the above action a little further. The free fermion action in
Minkowski spacetime is given by

S_F = ∫ d⁴x ψ̄(x) (iγ^μ ∂_μ − M) ψ(x). (8.4)

This action is invariant under the global U(1) transformation ψ(x) → G ψ(x) and
ψ̄(x) → ψ̄(x) G^{−1}, where G = exp(iΛ). The symmetry U(1) can be made local, i.e.

G = G(x) = exp(iΛ(x)), (8.6)

provided we also transform the covariant derivative and the gauge field as follows

D_μ → G D_μ G^{−1} ⟺ A_μ → G(x) A_μ G^{−1}(x) − (i/e) G(x) ∂_μ G^{−1}(x). (8.7)
Since A_μ and G(x) = exp(iΛ(x)) commute, the transformation law of the gauge field
reduces to A_μ → A_μ + ∂_μΛ/e. The dynamics of the gauge field A_μ is given by the
Maxwell action

S_G = −(1/4) ∫ d⁴x F_{μν} F^{μν} , F_{μν} = ∂_μ A_ν − ∂_ν A_μ. (8.8)

This action is also invariant under the local U(1) gauge symmetry A_μ → A_μ + ∂_μΛ/e.
The total action is then

S_QED = −(1/4) ∫ d⁴x F_{μν} F^{μν} + ∫ d⁴x ψ̄(x) (iγ^μ D_μ − M) ψ(x). (8.9)
This has the symmetry ψ → e^{iα}ψ and, when M = 0, the symmetry ψ → e^{iαγ_5}ψ. The
associated conserved currents are known to be given by J^μ = ψ̄ γ^μ ψ and J^μ_5 = ψ̄ γ^μ γ_5 ψ,
where γ_5 = γ_1 γ_2 γ_3 γ_4. It is also a known result that in the quantum theory one can-
not maintain the conservation of both of these currents simultaneously in the presence of
gauge fields.
A regularization which maintains exact chiral invariance of the above action can be
achieved by replacing Euclidean four-dimensional spacetime by a four-dimensional
hypercubic lattice of N⁴ sites. Every point on the lattice is specified by 4 integers which
we denote collectively by n = (n₁, n₂, n₃, n₄), where n₄ denotes Euclidean time. Clearly
each component of the 4-vector n is an integer in the range −N/2 ≤ n_μ ≤ N/2 with N
even. The lattice is assumed to be periodic. Thus x = an where a is the lattice spacing
and L = aN is the linear size of the lattice. Now to each site x = an we associate a spinor
variable ψ(n) = ψ(x), and the derivative ∂_μψ(x) is replaced by

∂_μ ψ(x) → Δ_μ ψ(n) = (1/2a) [ ψ(n + μ̂) − ψ(n − μ̂) ]. (8.14)

The vector μ̂ is the unit vector in the μ direction. With this prescription the action
(8.13) becomes (with M̂ = aM and ψ̂ = a^{3/2} ψ)

S_F = Σ_n Σ_m Σ_α Σ_β ψ̄̂_α(n) K_{αβ}(n, m) ψ̂_β(m)

K_{αβ}(n, m) = (1/2) Σ_μ (γ_μ)_{αβ} ( δ_{m,n+μ̂} − δ_{m,n−μ̂} ) + M̂ δ_{αβ} δ_{m,n}. (8.15)
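As a quick check of this discretization, one can act with a one-dimensional toy version of the free naive fermion matrix (an illustrative reduction, not from the text) on a plane wave: the eigenvalue is i sin p + M̂, and since sin(π − p) = sin p the kinetic term vanishes at the edge of the Brillouin zone as well as at p = 0, which is the famous doubling problem:

```python
# 1D toy version of the naive lattice fermion matrix
# K(n, m) = (1/2)(delta_{m,n+1} - delta_{m,n-1}) + Mhat delta_{m,n}
# on a periodic lattice; plane waves diagonalize it exactly.
import cmath, math

Nlat, Mhat = 16, 0.1

def K_apply(psi):
    out = []
    for n in range(Nlat):
        out.append(0.5 * (psi[(n + 1) % Nlat] - psi[(n - 1) % Nlat])
                   + Mhat * psi[n])
    return out

p = 2.0 * math.pi * 3 / Nlat                   # an allowed lattice momentum
wave = [cmath.exp(1j * p * n) for n in range(Nlat)]
Kw = K_apply(wave)
eig = Kw[0] / wave[0]                          # should be i sin p + Mhat
expected = 1j * math.sin(p) + Mhat
max_dev = max(abs(Kw[n] - expected * wave[n]) for n in range(Nlat))
```

The doubled mode at p near π is the reason for the Wilson term (r ∓ γ_μ) introduced just below.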
U(1) Lattice Gauge Fields: The free fermion action on the lattice is therefore given
by

S_F = M̂ Σ_n ψ̄(n) ψ(n)
+ (1/2) Σ_n Σ_μ [ ψ̄(n) γ_μ ψ(n + μ̂) − ψ̄(n + μ̂) γ_μ ψ(n) ]. (8.16)
This bilinear can be made gauge covariant by inserting the Schwinger line integral

U(x, y) = e^{ie ∫_x^y dz_μ A_μ(z)}, (8.19)

which transforms as

We conclude that in order to get local U(1) gauge invariance we replace the second and
third bilinear fermionic terms in the above action as follows

ψ̄(n) (r − γ_μ) ψ(n + μ̂) → ψ̄(n) (r − γ_μ) U_{n,n+μ̂} ψ(n + μ̂)

ψ̄(n + μ̂) (r + γ_μ) ψ(n) → ψ̄(n + μ̂) (r + γ_μ) U_{n+μ̂,n} ψ(n). (8.23)

The U(1) element U_{n,n+μ̂} lives on the lattice link connecting the two points n and n + μ̂.
This link variable is therefore a directed quantity given explicitly by

U_{n,n+μ̂} = e^{iφ_μ(n)} ≡ U_μ(n) , U_{n+μ̂,n} = U^+_{n,n+μ̂} = e^{−iφ_μ(n)} = U^+_μ(n). (8.25)

The second equality is much clearer in the continuum formulation, but on the lattice it
is needed for the reality of the action. The phase φ_μ(n) belongs to the compact interval
[0, 2π]. Alternatively we can work with A_μ(n) defined through
Let us now consider the product of link variables around the smallest possible closed loop
on the lattice, i.e. a plaquette. For a plaquette in the plane we have
Z = ∫ DU Dψ̄ Dψ e^{−S_G[U] − S_F[U,ψ̄,ψ]}. (8.31)

DU = Π_n Π_{μ=1}^{4} dU_μ(n) , Dψ = Π_n dψ(n) , Dψ̄ = Π_n dψ̄(n). (8.32)

Where

∫ Dψ̄ Dψ e^{−Σ_n Σ_m ψ̄(n) D(U)_{n,m} ψ(m)} = det D(U)_{n,m}. (8.37)

Z = ∫ DU det D(U)_{n,m} e^{−S_G[U]}. (8.38)

At this stage we will make the approximation that we can set the determinant equal to 1,
i.e. the QED partition function will be approximated by

Z = ∫ DU e^{−S_G[U]}. (8.39)
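Gauge invariance of the plaquette, on which the whole construction rests, can be verified in a few lines; the sketch below uses a small two-dimensional periodic lattice of U(1) link angles (an illustrative assumption, not the four-dimensional code of the text):

```python
# The plaquette angle built from link angles theta[mu][x][y] is invariant
# under the lattice gauge transformation
# theta_mu(n) -> theta_mu(n) + Lam(n) - Lam(n + mu_hat).
import math, random

random.seed(1)
L = 4
theta = [[[random.uniform(0, 2 * math.pi) for _ in range(L)]
          for _ in range(L)] for _ in range(2)]
Lam = [[random.uniform(0, 2 * math.pi) for _ in range(L)] for _ in range(L)]

def plaq_angle(th, x, y):
    # angle of U_1(n) U_2(n+1hat) U_1^+(n+2hat) U_2^+(n)
    return (th[0][x][y] + th[1][(x + 1) % L][y]
            - th[0][x][(y + 1) % L] - th[1][x][y])

def gauge_transform(th):
    new = [[[0.0] * L for _ in range(L)] for _ in range(2)]
    for x in range(L):
        for y in range(L):
            new[0][x][y] = th[0][x][y] + Lam[x][y] - Lam[(x + 1) % L][y]
            new[1][x][y] = th[1][x][y] + Lam[x][y] - Lam[x][(y + 1) % L]
    return new

theta2 = gauge_transform(theta)
max_diff = max(abs(math.cos(plaq_angle(theta, x, y))
                   - math.cos(plaq_angle(theta2, x, y)))
               for x in range(L) for y in range(L))
```

The gauge phases Λ cancel pairwise around any closed loop, so every Wilson loop, not just the plaquette, is gauge invariant.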
We will also measure the expectation value of the so-called Wilson loop, which has a length
I in one of the spatial directions (say 1) and a width J in the temporal direction 4. This
rectangular loop C is defined by

By using the fact that under U_μ(n) → U^*_μ(n) the partition function is invariant while
the Wilson loop changes its orientation, i.e. W_C[U] → W_C[U]^+, we obtain

The loop C is now a rectangular contour with spatial length R = Ia and timelike length
T = Ja. This represents the probability amplitude for the process of creating an infinitely
heavy, i.e. static, quark-antiquark¹ pair at time t = 0, separated by a distance
R, then allowing the pair to evolve in time and eventually annihilate after a long time
T.
¹ For U(1) we should really speak of an electron-positron pair.
In other words, we also take the average over the lattice, which is necessary in order to
reduce noise in the measurement of the Creutz ratio (see below).
The above Wilson loop is the order parameter of the pure U(1) gauge theory. For
large time T we expect the behavior

where V(R) is the static quark-antiquark potential. For strong coupling (small β) we can
show that the potential is linear, viz

V(R) = σ R. (8.50)

The constant σ is called the string tension, from the fact that the force between the quark
and the antiquark can be modeled by the force in a string attached to the quark and the
antiquark. For a linear potential the Wilson loop follows an area law W[R, T] = exp(−σ A)
with A = a² I J. This behavior is typical of a confining phase, which occurs at high
temperature.
For small coupling (large β, low temperature) the lattice U(1) gauge field becomes
weakly coupled, and as a consequence we expect the Coulomb potential to dominate the
static quark-antiquark potential, viz

V(R) = −Z/R. (8.51)

Hence for large R the quark and antiquark become effectively free, and their energy is
simply the sum of their self-energies. The Wilson loop in this case follows a perimeter law,
W[R, T] = exp(−2λT) for large T, with λ the self-energy.
In summary, for a rectangular R × T Wilson loop with perimeter P = 2(R + T) and
area A = RT, we expect the behavior

The perimeter piece actually dominates for any fixed-size loop. To measure the string
tension we must therefore eliminate the perimeter behavior, which can be achieved using
the so-called Creutz ratio defined by

χ(I, J) = −ln [ W[I, J] W[I − 1, J − 1] / ( W[I, J − 1] W[I − 1, J] ) ]. (8.55)
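The advertised cancellation is easy to verify on synthetic data: if the Wilson loops obey an exact area-plus-perimeter law (an illustrative assumption), the Creutz ratio returns the string tension and nothing else:

```python
# For W[I, J] = exp(-sigma*I*J - rho*(I+J) - c) (area + perimeter + constant),
# the combination chi(I, J) = -ln(W[I,J] W[I-1,J-1] / (W[I,J-1] W[I-1,J]))
# cancels rho and c exactly and returns sigma.
import math

sigma, rho, c = 0.3, 0.7, 1.1       # illustrative values

def W(I, J):
    return math.exp(-sigma * I * J - rho * (I + J) - c)

def creutz(I, J):
    return -math.log(W(I, J) * W(I - 1, J - 1)
                     / (W(I, J - 1) * W(I - 1, J)))

chi = creutz(3, 3)
```

On real Monte Carlo data the cancellation is only approximate, which is why averaging over the lattice, as mentioned above, is needed to tame the noise.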
Clearly all the planes are equivalent, and thus we should have

∂ln Z/∂β = −6 Σ_n ⟨ [1 − Re U_{14}(n)] ⟩
= −6 Σ_n ⟨ [1 − Re U_1(n) U_4(n + 1̂) U^+_4(n) U^+_1(n + 4̂)] ⟩. (8.59)

Remark that there are N³ N_T lattice sites. Each site corresponds to 4 plaquettes in every
plane, and thus it corresponds to 4 × 6 plaquettes in all. Each plaquette in a plane
corresponds to 4 sites, and thus to avoid overcounting we must divide by 4. In summary,
we have 4 × 6 × N³ N_T / 4 = 6 N³ N_T plaquettes in total. Six is therefore the ratio of the
number of plaquettes to the number of sites.
We have then

−(1/(6N³N_T)) ∂ln Z/∂β = 1 − (1/(N³N_T)) Σ_n ⟨ Re U_1(n) U_4(n + 1̂) U^+_4(n) U^+_1(n + 4̂) ⟩. (8.60)

We can now observe that all lattice sites n are equivalent under the expectation value,
namely

−(1/(6N³N_T)) ∂ln Z/∂β = 1 − ⟨ Re U_1(n) U_4(n + 1̂) U^+_4(n) U^+_1(n + 4̂) ⟩. (8.61)

This is the average action per plaquette (the internal energy), denoted by

P = −(1/(6N³N_T)) ∂ln Z/∂β = 1 − W[1, 1]. (8.62)
Z = ∫ DU e^{−S_G[U]}. (8.64)

DU = Π_n Π_{μ=1}^{4} dU_μ(n). (8.65)

Hence

DU = Π_n Π_{μ=1}^{4} dφ_μ(n). (8.68)
We will use the Metropolis algorithm to solve this problem. This goes as follows. Starting
from a given gauge field configuration, we choose a lattice point n and a direction μ,
and change the link variable there, which is U_μ(n), to U'_μ(n). This link is shared by 6
plaquettes. The corresponding variation of the action is

ΔS_G[U_μ(n)] = S_G[U'] − S_G[U]. (8.69)

The gauge field configurations U and U' differ only by the value of the link variable
U_μ(n). We need to isolate the contribution of U_μ(n) to the action S_G. Note the fact that
U^+_{μν} = U_{νμ}. We write

S_G[U] = β Σ_n Σ_{μ<ν} [ 1 − (1/2) ( U_{μν}(n) + U^+_{μν}(n) ) ]. (8.70)
n
In the plane, the link variable U (n) appears twice corresponding to the two lattice
points n and n . For every there are three relevant planes. The six relevant terms
are therefore given by
XX X
U (n) )U+ (n + )U+ (n)
U (n)U (n +
2 2
n < 6=
+ +
+ U (n)U (n )U (n )U (n + ) + ...(8.72)
The A_μ(n) is the sum over the six so-called staples, which are the products of the other three link variables which together with U_μ(n) make up the six plaquettes sharing U_μ(n). Explicitly we have

A_\mu(n) = \sum_{\nu\neq\mu}\Big[U_\nu(n+\hat{\mu})U^+_\mu(n+\hat{\nu})U^+_\nu(n) + U^+_\nu(n+\hat{\mu}-\hat{\nu})U^+_\mu(n-\hat{\nu})U_\nu(n-\hat{\nu})\Big].   (8.74)
We compute then

\Delta S_G[U_\mu(n)] = S_G[U'] - S_G[U] = -\beta\,{\rm Re}\big[(U'_\mu(n) - U_\mu(n))A_\mu(n)\big].   (8.76)
Having computed the variation ΔS_G[U_μ(n)], we next inspect its sign. If this variation is negative then the proposed change U_μ(n) → U'_μ(n) will be accepted (classical mechanics). If the variation is positive, we compute the Boltzmann probability

\exp(-\Delta S_G[U_\mu(n)]) = \exp\big(\beta\,{\rm Re}\big[(U'_\mu(n) - U_\mu(n))A_\mu(n)\big]\big).   (8.77)

The proposed change U_μ(n) → U'_μ(n) will then be accepted with this probability (quantum mechanics). In practice we pick a uniform random number r between 0 and 1 and compare it with exp(−ΔS_G[U_μ(n)]): if r < exp(−ΔS_G[U_μ(n)]) we accept the change, otherwise we reject it.
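As an illustrative sketch of this accept-reject step (in Python rather than the Fortran of the attached code), treating a single U(1) link as a complex phase with its staple sum A_μ(n) assumed precomputed; all names here are hypothetical:

```python
import cmath, math, random

def metropolis_link(U, A, beta, eps=0.5, rng=random):
    """One Metropolis update of a single U(1) link U with staple sum A.
    Proposes U' = X U with X = exp(i*theta) near the identity, computes
    Delta S = -beta * Re[(U' - U) * A] as in (8.76)-(8.77), and accepts
    with probability min(1, exp(-Delta S))."""
    theta = rng.uniform(-eps, eps)      # symmetric set: contains X and X**-1
    U_new = cmath.exp(1j * theta) * U
    delta_S = -beta * ((U_new - U) * A).real
    if delta_S <= 0.0 or rng.random() < math.exp(-delta_S):
        return U_new, True              # accepted
    return U, False                     # rejected

random.seed(0)
U, A = 1.0 + 0.0j, 6.0 + 0.0j           # cold link, maximally ordered staples
for _ in range(100):
    U, _ = metropolis_link(U, A, beta=2.0)
```

Since the proposal only multiplies the link by a phase, |U| = 1 is preserved exactly in this sketch; in the Fortran code rounding errors make reunitarization useful.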
We go through the above steps for every link in the lattice, which constitutes one Monte Carlo step. Typically equilibration (thermalization) is reached after a large number of Monte Carlo steps, at which point we can start taking measurements based on the formula (8.66) written as

<O> = \frac{1}{L}\sum_{i=1}^{L} O_i, \quad O_i = O(U_i).   (8.78)
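The average (8.78) and its statistical error are computed in the attached codes with a jackknife procedure; a minimal single-elimination Python sketch (mirroring the zbin = 1 case of the Fortran subroutine jackknife_binning) reads:

```python
import math

def jackknife(data):
    """Single-elimination jackknife: returns the average of the samples
    and the jackknife estimate of the error on that average."""
    L = len(data)
    mean = sum(data) / L
    # i-th bin average: the mean of all samples with sample i removed
    bins = [(sum(data) - x) / (L - 1) for x in data]
    var = sum((y - mean) ** 2 for y in bins) * (L - 1) / L
    return mean, math.sqrt(var)

mean, err = jackknife([1.0, 2.0, 3.0, 4.0])
print(mean, err)  # the average and its jackknife error
```

For uncorrelated data this reproduces the usual standard error of the mean; binning with larger bin sizes, as in the Fortran code, accounts for autocorrelations.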
The proposed link variable U'_μ(n) = X U_μ(n) is obtained by multiplying U_μ(n) with an element X of the gauge group (here U(1)) near the identity. In order to maintain a symmetric selection probability, X should be drawn randomly from a set of U(1) elements which also contains X^{-1}. For the U(1) gauge group we have X = exp(iθ) where θ ∈ [0, 2π]. In principle the acceptance rate can be maintained around at least 0.5 by tuning appropriately the range of the angle θ. Reunitarization of U'_μ(n) may also be applied to reduce rounding errors.
The final technical remark concerns boundary conditions. In order to reduce edge effects we usually adopt periodic boundary conditions, i.e. U_\mu(n + N\hat{i}) = U_\mu(n) in the spatial directions and U_\mu(n + N_T\hat{4}) = U_\mu(n) in the time direction. This means in particular that the lattice is actually a four dimensional torus. In the actual code this is implemented by replacing i+1 and i-1 by the lookup tables ip(i) and im(i) in the spatial directions, and by ipT(i) and imT(i) in the time direction, which are defined by
do i=1,N
ip(i)=i+1
im(i)=i-1
enddo
ip(N)=1
im(1)=N
do i=1,NT
ipT(i)=i+1
imT(i)=i-1
enddo
ipT(NT)=1
imT(1)=NT
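Equivalently, in a language with a modulo operation the same lookup tables can be built directly; a Python sketch (0-indexed, whereas the Fortran arrays are 1-indexed):

```python
N = 8  # hypothetical spatial lattice size

# ip[i] and im[i] are the periodic neighbours of site i: the index
# wraps around at the edges, so the lattice closes into a torus
ip = [(i + 1) % N for i in range(N)]
im = [(i - 1) % N for i in range(N)]

print(ip[N - 1], im[0])  # wraps around: 0 and N-1
```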
A code written along the above lines is attached in the last chapter.
3. The simplest order parameter is the action per plaquette P, defined in equation (8.62), which is shown on figure (8.2). We observe good agreement between the high-temperature and low-temperature expansions of P on the one hand and the corresponding observed behavior in the strong coupling and weak coupling regions respectively on the other hand. We note that the high-temperature and low-temperature
[Figure: the specific heat Cv/N^4 as a function of beta for N = 3, 4, 10, 12.]

[Figure 8.2: the action per plaquette P as a function of beta for N = 8, 10, together with the strong coupling and weak coupling expansions.]

[Figure: the action per site, action/N^4, as a function of beta for N = 4, 12.]
[Figure: the Wilson loops W[I,J] (1x1, 2x2 and 3x3) as functions of beta for N = 8 and N = 10.]
Figure 8.4: The Wilson loop as a function of the inverse coupling strength β.
[Figure: the Creutz ratios (2x2, 3x3, 2x3 and 3x2) for N = 10, 12 together with the strong coupling expansion, as functions of beta.]
Figure 8.5: String tension from the Creutz ratio as a function of β on a 12^4 lattice.
Codes
File: /home/ydri/Desktop/TP_QFT/codes/metropolis-ym.f Page 1 of 6
program my_metropolis_ym
implicit none
integer dim,dimm,N,ther,mc,Tther,Tmc
integer lambda,i,j,idum
parameter (dimm=10,N=8)
parameter (Tther=2**11,Tmc=2**11)
double complex X(dimm,N,N)
double precision xx,y,Accept,Reject,inn,interval,pa
double precision act(Tmc),actio,average_act,error_act
double precision t_1, t_2
real x0
call cpu_time(t_1)
do dim=2,dimm
if(dim.le.dimm)then
idum=-148175
x0=0.0
idum=idum-2*int(secnds(x0))
c.......inititialization of X................................
inn=1.0d0
do lambda=1,dimm
if (lambda.le.dim)then
do i=1,N
do j=i,N
if (j.ne.i) then
xx=interval(idum,inn)
y=interval(idum,inn)
X(lambda,i,j)=cmplx(xx,y)
X(lambda,j,i)=cmplx(xx,-y)
else
xx=interval(idum,inn)
X(lambda,i,j)=xx
endif
enddo
enddo
else
do i=1,N
do j=i,N
if (j.ne.i) then
xx=0.0d0
y=0.0d0
X(lambda,i,j)=cmplx(xx,y)
X(lambda,j,i)=cmplx(xx,-y)
else
xx=0.0d0
X(lambda,i,j)=xx
endif
enddo
enddo
endif
enddo
c.... accepts including flips, rejects and the acceptance rate pa...
Reject=0.0d0
Accept=0.0d0
pa=0.0d0
c.............thermalization.......................................
do ther=1,Tther
call metropolis(dim,dimm,N,X,Reject,Accept,inn,idum)
call adjust_inn(pa,inn,Reject,Accept)
call action(dim,dimm,N,X,actio)
write(*,*)ther,actio,pa
write(10+dim,*)ther,actio,pa
enddo
do mc=1,Tmc
call metropolis(dim,dimm,N,X,Reject,Accept,inn,idum)
call adjust_inn(pa,inn,Reject,Accept)
call action(dim,dimm,N,X,actio)
act(mc)=actio
write(*,*)mc,act(mc),pa
write(21+dim,*)mc,act(mc),pa
enddo
c.............measurements.........................................
call jackknife_binning(Tmc,act,average_act,error_act)
write(*,*)dim,average_act,error_act
write(32,*)dim,average_act,error_act
endif
enddo
c.........cpu time............................................
call cpu_time(t_2)
write(*,*)"cpu_time", t_2-t_1
      stop
end
c...............action......................................
subroutine action(dim,dimm,N,X,actio)
implicit none
integer dim,dimm,N,mu,nu,i,j,k,l
double complex X(dimm,N,N)
double precision actio,action0
actio=0.0d0
do mu =1,dimm
do nu=mu+1,dimm
action0=0.0d0
do i=1,N
do j=1,N
do k=1,N
do l=1,N
action0=action0+X(mu,i,j)*X(nu,j,k)*X(mu,k,l)*X(nu,l,i)
& -X(mu,i,j)*X(mu,j,k)*X(nu,k,l)*X(nu,l,i)
enddo
enddo
enddo
enddo
action0=-N*action0
actio=actio+action0
enddo
enddo
return
end
c..............metropolis algorithm..........................
subroutine metropolis(dim,dimm,N,X,Reject,Accept,inn,idum)
implicit none
integer dim,dimm,N,i,j,lambda,idum
double precision Reject,Accept,inn,interval,deltaS,ran2,z1,p1,xx,y
double complex X(dimm,N,N),dc,dcbar
do lambda=1,dim
c..............diagonal..........................
do i=1,N
xx=interval(idum,inn)
y=interval(idum,inn)
dc=cmplx(xx,0)
dcbar=cmplx(xx,-0)
call variationYM(dim,dimm,N,lambda,i,i,dc,dcbar,X,deltaS)
if ( deltaS .gt. 0.0d0 ) then
z1=ran2(idum)
p1=dexp(-deltaS)
if ( z1 .lt. p1 ) then
X(lambda,i,i)=X(lambda,i,i)+dc+dcbar
Accept=Accept+1.0d0
else
Reject=Reject+1.0d0
endif
else
X(lambda,i,i)=X(lambda,i,i)+dc+dcbar
Accept=Accept+1.0d0
endif
enddo
c............off diagonal..........................
do i=1,N
do j=i+1,N
xx=interval(idum,inn)
y=interval(idum,inn)
dc=cmplx(xx,y)
dcbar=cmplx(xx,-y)
call variationYM(dim,dimm,N,lambda,i,j,dc,dcbar,X,deltaS)
if ( deltaS .gt. 0.0d0 ) then
z1=ran2(idum)
p1=dexp(-deltaS)
if ( z1 .lt. p1 ) then
X(lambda,i,j)=X(lambda,i,j)+dc
Accept=Accept+1.0d0
else
Reject=Reject+1.0d0
endif
else
X(lambda,i,j)=X(lambda,i,j)+dc
Accept=Accept+1.0d0
endif
X(lambda,j,i)=dconjg(X(lambda,i,j))
enddo
enddo
enddo
return
end
subroutine variationYM(dim,dimm,N,lambda,i,j,dc,dcbar,X,deltaS)
implicit none
integer dim,dimm,N,i,j,lambda,sigma,k,l,p,q
double complex delta0,delta1,del2,del3,delta2
double precision delta11,delta22,deltaS
double complex X(dimm,N,N),dc,dcbar
delta0=0.0d0
do sigma=1,dim
if (sigma.ne.lambda)then
do k=1,N
delta0=delta0-X(sigma,i,k)*X(sigma,k,i)
& -X(sigma,j,k)*X(sigma,k,j)
enddo
endif
enddo
delta1=0.0d0
delta1=delta1+dc*dcbar*delta0
if (i.eq.j) then
delta1=delta1+0.5d0*(dc*dc+dcbar*dcbar)*delta0
endif
do sigma=1,dim
if (sigma.ne.lambda)then
delta1=delta1+dc*dc*X(sigma,j,i)*X(sigma,j,i)
& +dcbar*dcbar*X(sigma,i,j)*X(sigma,i,j)
& +2.0d0*dc*dcbar*X(sigma,i,i)*X(sigma,j,j)
endif
enddo
delta1=-N*delta1
delta11=real(delta1)
del2=0.0d0
del3=0.0d0
do sigma=1,dim
do k=1,N
do l=1,N
del2=del2+2.0d0*X(sigma,i,k)*X(lambda,k,l)*X(sigma,l,j)
& -1.0d0*X(sigma,i,k)*X(sigma,k,l)*X(lambda,l,j)
& -1.0d0*X(lambda,i,k)*X(sigma,k,l)*X(sigma,l,j)
del3=del3+2.0d0*X(sigma,j,k)*X(lambda,k,l)*X(sigma,l,i)
& -1.0d0*X(sigma,j,k)*X(sigma,k,l)*X(lambda,l,i)
& -1.0d0*X(lambda,j,k)*X(sigma,k,l)*X(sigma,l,i)
enddo
enddo
enddo
delta2=0.0d0
delta2=-N*dcbar*del2-N*dc*del3
delta22=real(delta2)
deltaS=delta11+delta22
return
end
subroutine jackknife_binning(TMC,f,average,error)
implicit none
integer i,j,TMC,zbin,nbin
double precision xm
double precision f(1:TMC),sumf,y(1:TMC)
double precision sig0,sig,error,average
c.....TMC is the number of data points. sig0 is the standard deviation. sumf is
c.....the sum of all the data points f_i whereas xm is the average of f.........
sig0=0.0d0
sumf=0.0d0
do i=1,TMC
sumf=sumf+f(i)
enddo
xm=sumf/TMC
c.....zbin is the number of elements we remove each time from the set of TMC
c.....data points. The minimum number we can remove is 1 whereas the maximum
c.....number we can remove is TMC-1. Each time we remove zbin elements we end
c.....up with nbin sets (or bins)...............................................
c do zbin=1,TMC-1
zbin=1
nbin=int(TMC/zbin)
sig=0.0d0
do i=1,nbin,1
c.....y(i) is the average of the elements in the ith bin. This bin contains
c.....TMC-zbin data points after we had removed zbin elements. For zbin=1 we
c.....have nbin=TMC; in this case there are TMC bins and y_i=sum_{j.ne.i}
c.....x_j/(TMC-1). For zbin=2 we have nbin=TMC/2; in this case there are TMC/2
c.....bins and y_i=sum_j x_j/(TMC-2)-x_{2i}/(TMC-2)-x_{2i-1}/(TMC-2)............
y(i)=sumf
do j=1,zbin
y(i)=y(i)-f((i-1)*zbin+j )
enddo
y(i)= y(i)/(TMC-zbin)
c..........the standard deviation computed for the ith bin..............
sig=sig+((nbin-1.0d0)/nbin)*(y(i)-xm)*(y(i)-xm)
enddo
c.... the standard deviation computed for the set of all bins with fixed zbin.....
sig=sig
c..................the error....................................
sig=dsqrt(sig)
c.....we compare the result with the error obtained for the previous zbin; if
c.....it is larger, then this is the new value of the error.....................
if (sig0 .lt. sig) sig0=sig
c enddo
c.... the final value of the error..............................................................
error=sig0
average=xm
return
end
function ran2(idum)
implicit none
integer idum,IM1,IM2,IMM1,IA1,IA2,IQ1,IQ2,IR1,IR2,NTAB,NDIV
real AM,EPS,RNMX
double precision ran2
parameter (IM1=2147483563,IM2=2147483399,AM=1./IM1,IMM1=IM1-1,
& IA1=40014,IA2=40692,IQ1=53668,IQ2=52774,IR1=12211,
& IR2=3791,NTAB=32,NDIV=1+IMM1/NTAB,EPS=1.2E-7,RNMX=1.-EPS)
integer idum2,j,k,iv(NTAB),iy
SAVE iv,iy,idum2
DATA idum2/123456789/,iv/NTAB*0/,iy/0/
if (idum.le.0) then
idum=max(-idum,1)
idum2=idum
do j=NTAB+8,1,-1
k=idum/IQ1
idum=IA1*(idum-k*IQ1)-k*IR1
if (idum.lt.0) idum=idum+IM1
if (j.le.NTAB) iv(j)=idum
enddo
iy=iv(1)
endif
k=idum/IQ1
idum=IA1*(idum-k*IQ1)-k*IR1
if (idum.lt.0) idum=idum+IM1
k=idum2/IQ2
idum2=IA2*(idum2-k*IQ2)-k*IR2
if (idum2.lt.0) idum2=idum2+IM2
j=1+iy/NDIV
iy=iv(j)-idum2
iv(j)=idum
if (iy.lt.1) iy=iy+IMM1
ran2=min(AM*iy,RNMX)
return
end
c.............interval..................................
function interval(idum,inn)
implicit none
double precision interval,inn,ran2
integer idum
interval=ran2(idum)
interval=interval+interval-1.0d0
interval=interval*inn
return
end
c.........adjusting interval..............................
      subroutine adjust_inn(pa,inn,Reject,Accept)
      implicit none
      double precision inn,pa,Reject,Accept
c.....the acceptance rate; the interval inn itself is kept fixed here...........
      if ((Reject+Accept).gt.0.0d0) pa=Accept/(Reject+Accept)
      return
      end
File: /home/ydri/Desktop/TP_QFT/codes/hybrid-ym.f Page 1 of 7
program my_hybrid_ym
implicit none
integer d,N,i,j,k,lambda,idum,tt,time,timeT,tther,Tth
parameter (d=4,N=4)
parameter (Tth=2**10)
double precision gamma,mm,alpha,inn,dt,interval
double complex X(d,N,N),P(d,N,N)
double precision actio,ham,kin,variationH
double precision Reject,Accept,pa
double precision varH(Tth),varH_average,varH_error
double precision ac(Tth),ac_average,ac_error
real x0
idum=-148175
x0=0.0
c.....seed should be set to a large odd integer according to the manual.
c.....secnds(x) gives the number of seconds-x elapsed since midnight.
c.....2*int(secnds(x0)) is always even so the seed is always odd................
idum=idum-2*int(secnds(x0))
c call hot(N,d,idum,inn,X,P)
c call cold(N,d,X)
c time=1
c dt=0.01d0
c timeT=100
c do tt=1,timeT
c call molecular_dynamics(N,d,dt,time,gamma,mm,alpha,X,P)
c call action(d,N,X,P,alpha,mm,gamma,actio,ham,kin)
c write(9,*)tt,actio,ham
c write(*,*)tt,actio,ham
c enddo
time=100
dt=0.01d0
c..................parameters..............
mm=0.0d0
alpha=0.0d0
do k=0,20
gamma=2.1d0-k*0.1d0
inn=1.0d0
call hot(N,d,idum,inn,X,P)
call cold(N,d,X)
Reject=0.0d0
Accept=0.0d0
pa=0.0d0
c..............thermalization................
do tther=1,Tth
call metropolis(N,d,gamma,mm,alpha,dt,time,X,P,Reject,Accept
& ,variationH)
enddo
do tther=1,Tth
call metropolis(N,d,gamma,mm,alpha,dt,time,X,P,Reject,Accept
& ,variationH)
pa=(Accept)/(Reject+Accept)
call action(d,N,X,P,alpha,mm,gamma,actio,ham,kin)
ac(tther)=actio
varH(tther)=dexp(-variationH)
write(10,*)tther,actio,ham,kin,variationH,pa
write(*,*)tther,actio,ham,kin,variationH,pa
enddo
c..............measurements................
call jackknife_binning(Tth,varH,varH_average,varH_error)
write(*,*)gamma,alpha,mm,varH_average,varH_error
write(11,*)gamma,alpha,mm,varH_average,varH_error
call jackknife_binning(Tth,ac,ac_average,ac_error)
write(*,*)gamma,alpha,mm,ac_average,ac_error
write(12,*)gamma,alpha,mm,ac_average,ac_error
enddo
      stop
end
c.................metropolis algorithm................
subroutine metropolis(N,d,gamma,mm,alpha,dt,time,X,P,Reject,Accept
& ,variationH)
implicit none
integer N,d,i,j,mu,nu,k,l,idum,time
double precision gamma,mm,alpha,inn,dt,ran2,Reject,Accept
double complex var(d,N,N),X(d,N,N),X0(d,N,N),P(d,N,N),P0(d,N,N)
double precision variations,variationH,probabilityS,probabilityH,r
      double precision actio,ham,kin
c.....idum is local to this subroutine and must carry a valid seed..............
      save idum
      data idum/-148175/
c........Gaussian initialization.....
call gaussian(d,N,P)
X0=X
P0=P
call action(d,N,X,P,alpha,mm,gamma,actio,ham,kin)
variationS=actio
variationH=ham
call molecular_dynamics(N,d,dt,time,gamma,mm,alpha,X,P)
call action(d,N,X,P,alpha,mm,gamma,actio,ham,kin)
variationS=actio-variationS
variationH=ham-variationH
if(variationH.lt.0.0d0)then
accept=accept+1.0d0
else
probabilityH=dexp(-variationH)
r=ran2(idum)
if (r.lt.probabilityH)then
accept=accept+1.0d0
else
X=X0
P=P0
Reject=Reject+1.0d0
endif
endif
return
end
subroutine action(d,N,X,P,alpha,mm,gamma,actio,ham,kin)
implicit none
integer d,N,mu,nu,i,j,k,l
double complex X(d,N,N),P(d,N,N),ii,CS,action0,ham0,action1,
& actio0,action2,ham1
double precision actio,ham,kin
double precision mm,gamma,alpha
ii=cmplx(0,1)
actio0=cmplx(0,0)
do mu =1,d
do nu=mu+1,d
action0=cmplx(0,0)
do i=1,N
do j=1,N
do k=1,N
do l=1,N
action0=action0+X(mu,i,j)*X(nu,j,k)*X(mu,k,l)*X(nu,l,i)
& -X(mu,i,j)*X(mu,j,k)*X(nu,k,l)*X(nu,l,i)
enddo
enddo
enddo
enddo
actio0=actio0+action0
enddo
enddo
actio=real(actio0)
actio=-N*gamma*actio
ham1=cmplx(0,0)
action2=cmplx(0,0)
do mu =1,d
ham0=cmplx(0,0)
action1=cmplx(0,0)
do i=1,N
do j=1,N
ham0=ham0+P(mu,i,j)*P(mu,j,i)
action1=action1+X(mu,i,j)*X(mu,j,i)
enddo
enddo
action2=action2+action1
ham1=ham1+ham0
enddo
ham=0.5d0*real(ham1)
kin=ham
actio=actio+0.5d0*mm*real(action2)
CS=0.0d0
do i=1,N
do j=1,N
do k=1,N
CS=CS+ii*X(1,i,j)*X(2,j,k)*X(3,k,i)
& -ii*X(1,i,j)*X(3,j,k)*X(2,k,i)
enddo
enddo
enddo
actio=actio+2.0d0*alpha*N*real(CS)
ham=ham+actio
return
end
c.......the force.............
subroutine variation(N,d,gamma,mm,alpha,X,var)
implicit none
integer N,d,i,j,mu,nu,k,l
double precision gamma,mm,alpha
double complex var(d,N,N),X(d,N,N),ii
ii=dcmplx(0,1)
do mu=1,d
do i=1,N
do j=i,N
var(mu,i,j)=cmplx(0,0)
do nu=1,d
do k=1,N
do l=1,N
var(mu,i,j)=var(mu,i,j)+2.0d0*X(nu,j,k)*X(mu,k,l)*X(nu,l,i)
& -X(nu,j,k)*X(nu,k,l)*X(mu,l,i)
& -X(mu,j,k)*X(nu,k,l)*X(nu,l,i)
enddo
enddo
enddo
var(mu,i,j)=-N*gamma*var(mu,i,j)+mm*X(mu,j,i)
if(mu.eq.1)then
do k=1,N
var(mu,i,j)=var(mu,i,j)+2.0d0*ii*alpha*N*X(2,j,k)*X(3,k,i)
& -2.0d0*ii*alpha*N*X(3,j,k)*X(2,k,i)
enddo
endif
if(mu.eq.2)then
do k=1,N
var(mu,i,j)=var(mu,i,j)+2.0d0*ii*alpha*N*X(3,j,k)*X(1,k,i)
& -2.0d0*ii*alpha*N*X(1,j,k)*X(3,k,i)
enddo
endif
if(mu.eq.3)then
do k=1,N
var(mu,i,j)=var(mu,i,j)+2.0d0*ii*alpha*N*X(1,j,k)*X(2,k,i)
& -2.0d0*ii*alpha*N*X(2,j,k)*X(1,k,i)
enddo
endif
var(mu,j,i)=conjg(var(mu,i,j))
enddo
enddo
enddo
return
end
c.............leap frog..............
subroutine molecular_dynamics(N,d,dt,time,gamma,mm,alpha,X,P)
implicit none
integer N,d,i,j,mu,nn,time
double precision dt,gamma,mm,alpha
double complex X(d,N,N),P(d,N,N),var(d,N,N)
do nn=1,time
call variation(N,d,gamma,mm,alpha,X,var)
do mu=1,d
do i=1,N
do j=i,N
P(mu,i,j)=P(mu,i,j)-0.5d0*dt*var(mu,i,j)
X(mu,i,j)=X(mu,i,j)+dt*conjg(P(mu,i,j))
X(mu,j,i)=conjg(X(mu,i,j))
enddo
enddo
enddo
call variation(N,d,gamma,mm,alpha,X,var)
do mu=1,d
do i=1,N
do j=i,N
P(mu,i,j)=P(mu,i,j)-0.5d0*dt*var(mu,i,j)
P(mu,j,i)=conjg(P(mu,i,j))
enddo
enddo
enddo
enddo
return
end
subroutine gaussian(d,N,P)
implicit none
integer d,N,mu,i,j,idum
double precision pi,phi,r,ran2
      double complex ii,P(d,N,N)
c.....idum is local to this subroutine and must carry a valid seed..............
      save idum
      data idum/-148175/
pi=dacos(-1.0d0)
ii=cmplx(0,1)
do mu=1,d
do i=1,N
phi=2.0d0*pi*ran2(idum)
r=dsqrt(-2.0d0*dlog(1.0d0-ran2(idum)))
P(mu,i,i)=r*dcos(phi)
enddo
do i=1,N
do j=i+1,N
phi=2.0d0*pi*ran2(idum)
r=dsqrt(-1.0d0*dlog(1.0d0-ran2(idum)))
P(mu,i,j)=r*dcos(phi)+ii*r*dsin(phi)
P(mu,j,i)=conjg(P(mu,i,j))
enddo
enddo
enddo
return
end
subroutine jackknife_binning(TMC,f,average,error)
implicit none
integer i,j,TMC,zbin,nbin
double precision xm
double precision f(1:TMC),sumf,y(1:TMC)
double precision sig0,sig,error,average
sig0=0.0d0
sumf=0.0d0
do i=1,TMC
sumf=sumf+f(i)
enddo
xm=sumf/TMC
c do zbin=1,TMC-1
zbin=1
nbin=int(TMC/zbin)
sig=0.0d0
do i=1,nbin,1
y(i)=sumf
do j=1,zbin
y(i)=y(i)-f((i-1)*zbin+j )
enddo
y(i)= y(i)/(TMC-zbin)
sig=sig+((nbin-1.0d0)/nbin)*(y(i)-xm)*(y(i)-xm)
enddo
sig=sig
sig=dsqrt(sig)
if (sig0 .lt. sig) sig0=sig
c enddo
error=sig0
average=xm
return
end
function ran2(idum)
implicit none
integer idum,IM1,IM2,IMM1,IA1,IA2,IQ1,IQ2,IR1,IR2,NTAB,NDIV
real AM,EPS,RNMX
double precision ran2
parameter (IM1=2147483563,IM2=2147483399,AM=1./IM1,IMM1=IM1-1,
& IA1=40014,IA2=40692,IQ1=53668,IQ2=52774,IR1=12211,
& IR2=3791,NTAB=32,NDIV=1+IMM1/NTAB,EPS=1.2E-7,RNMX=1.-EPS)
integer idum2,j,k,iv(NTAB),iy
SAVE iv,iy,idum2
DATA idum2/123456789/,iv/NTAB*0/,iy/0/
if (idum.le.0) then
idum=max(-idum,1)
idum2=idum
do j=NTAB+8,1,-1
k=idum/IQ1
idum=IA1*(idum-k*IQ1)-k*IR1
if (idum.lt.0) idum=idum+IM1
if (j.le.NTAB) iv(j)=idum
enddo
iy=iv(1)
endif
k=idum/IQ1
idum=IA1*(idum-k*IQ1)-k*IR1
if (idum.lt.0) idum=idum+IM1
k=idum2/IQ2
idum2=IA2*(idum2-k*IQ2)-k*IR2
if (idum2.lt.0) idum2=idum2+IM2
j=1+iy/NDIV
iy=iv(j)-idum2
iv(j)=idum
if (iy.lt.1) iy=iy+IMM1
ran2=min(AM*iy,RNMX)
return
end
c........hot start...................
subroutine hot(N,d,idum,inn,X,P)
implicit none
integer lambda,i,j,N,d,idum
double complex X(d,N,N),P(d,N,N)
double precision xx,y,inn,interval
do lambda=1,d
do i=1,N
do j=i,N
if (j.ne.i) then
xx=interval(idum,inn)
y=interval(idum,inn)
X(lambda,i,j)=cmplx(xx,y)
X(lambda,j,i)=cmplx(xx,-y)
xx=interval(idum,inn)
y=interval(idum,inn)
P(lambda,i,j)=cmplx(xx,y)
P(lambda,j,i)=cmplx(xx,-y)
else
xx=interval(idum,inn)
X(lambda,i,j)=xx
xx=interval(idum,inn)
P(lambda,i,j)=xx
endif
enddo
enddo
enddo
return
end
c.............interval..............
function interval(idum,inn)
implicit none
double precision interval,inn,ran2
integer idum
interval=ran2(idum)
interval=interval+interval-1.0d0
interval=interval*inn
return
end
c......cold start.....................
subroutine cold(N,d,X)
implicit none
integer lambda,i,j,N,d
double complex X(d,N,N)
do lambda=1,d
do i=1,N
do j=1,N
X(lambda,i,j)=cmplx(0,0)
enddo
enddo
enddo
return
end
File: /home/ydri/Desktop/TP_QFT/codes/hybrid-scalar-fuzzy.f Page 1 of 10
program my_hybrid_scalar_fuzzy
implicit none
integer N,i,j,k,idum,tt,time,tther,Tth,cou,ttco,Tco,Tmc,nn
parameter (N=6)
parameter (Tth=2**10,Tmc=2**10,Tco=2**0)
double precision a,b,c,at,bt,ct
double complex phi(N,N),P(N,N),phi0(N,N)
double precision actio,ham,kin,quad,quar,mag,variationH,ev(1:N)
double precision Reject,Accept,pa,inn,dt,interval,xx,y,t_1,t_2
double precision varH(Tmc),varH_average,varH_error
double precision acti(Tmc),acti_average,acti_error
double precision Cv(Tmc),Cv_average,Cv_error
double precision ma(Tmc),ma_average,ma_error
double precision chi(Tmc),chi_average,chi_error
double precision p0(Tmc),p0_average,p0_error
double precision pt(Tmc),pt_average,pt_error
double precision kinet(Tmc),k_average,k_error
double precision ide_average,ide_error
double precision qu(Tmc),qu_average,qu_error
double precision target_pa_high,target_pa_low,dt_max,dt_min,inc
& ,dec
real x0
call cpu_time(t_1)
idum=-148175
x0=0.0
idum=idum-2*int(secnds(x0))
c.............parameters..................
at=dsqrt(1.0d0*N)!1.0d0
a=at/dsqrt(1.0d0*N)
ct=1.0d0
c=N*N*ct
do k=0,0
bt=-5.0d0+k*0.1d0
b=N*dsqrt(1.0d0*N)*bt
inn=1.0d0
call hot(N,idum,inn,phi,P)
time=10
dt=0.01d0
Reject=0.0d0
Accept=0.0d0
pa=0.0d0
target_pa_high=0.90d0
target_pa_low=0.70d0
dt_max=1.0d0
dt_min=0.0001d0
inc=1.2d0
dec=0.8d0
nn=1
c............thermalization................................
do tther=1,Tth
call metropolis(N,a,b,c,dt,time,phi,P,Reject,Accept
& ,variationH,idum)
call action(N,phi,P,a,b,c,kin,quad,quar,actio,ham,mag)
cou=tther
call adjust_inn(cou,pa,dt,time,Reject,Accept,
& nn,target_pa_high,target_pa_low,dt_max,dt_min,inc,dec)
write(*,*)tther,pa,dt,actio
enddo
do tther=1,Tmc
do ttco=1,Tco
call metropolis(N,a,b,c,dt,time,phi,P,Reject,Accept
& ,variationH,idum)
enddo
call action(N,phi,P,a,b,c,kin,quad,quar,actio,ham,mag)
acti(tther)=actio
ma(tther)=mag
p0(tther)=mag*mag/N**2
pt(tther)=quad/N
kinet(tther)=kin
qu(tther)=quar
varH(tther)=dexp(-variationH)
cou=tther
call adjust_inn(cou,pa,dt,time,Reject,Accept,
& nn,target_pa_high,target_pa_low,dt_max,dt_min,inc,dec)
write(*,*)tther,pa,dt,actio
phi0=phi
call eigenvalues(N,phi0,ev)
write(62,*)tther,ev
enddo
c..............measurements...................................................
c....................energy........................................................
call jackknife_binning(Tmc,acti,acti_average,acti_error)
write(*,*)"action",a,bt,ct,acti_average,acti_error
write(10,*)a,bt,ct,acti_average,acti_error
c.........specific heat Cv=<(S_i-<S>)^2>............................
do tther=1,Tmc
Cv(tther)=0.0d0
Cv(tther)=Cv(tther)+acti(tther)
Cv(tther)=Cv(tther)-acti_average
Cv(tther)=Cv(tther)*Cv(tther)
enddo
call jackknife_binning(Tmc,Cv,Cv_average,Cv_error)
write(*,*)"specific heat",a,bt,ct,Cv_average,Cv_error
write(20,*)a,bt,ct,Cv_average,Cv_error
c..............magnetization.................................................
call jackknife_binning(Tmc,ma,ma_average,ma_error)
write(*,*)"magnetization",a,bt,ct,ma_average,ma_error
write(30,*)a,bt,ct,ma_average,ma_error
c..............susceptibility...........................................................
do tther=1,Tmc
chi(tther)=0.0d0
chi(tther)=chi(tther)+ma(tther)
chi(tther)=chi(tther)-ma_average
chi(tther)=chi(tther)*chi(tther)
enddo
call jackknife_binning(Tmc,chi,chi_average,chi_error)
write(*,*)"susceptibility", a,bt,ct,chi_average,chi_error
write(40,*)a,bt,ct,chi_average,chi_error
c.............power in the zero mode.............................................
call jackknife_binning(Tmc,p0,p0_average,p0_error)
write(*,*)"zero power", a,bt,ct,p0_average,p0_error
write(50,*)a,bt,ct,p0_average,p0_error
c.............total power=quadratic term/N.........................................
call jackknife_binning(Tmc,pt,pt_average,pt_error)
write(*,*)"total power=quadrtic/N",a,bt,ct,pt_average,pt_error
write(60,*)a,bt,ct,pt_average,pt_error
c..............kinetic term.........................................................
call jackknife_binning(Tmc,kinet,k_average,k_error)
write(*,*)"kinetic",a,bt,ct,k_average,k_error
write(70,*)a,bt,ct,k_average,k_error
c..............quartic term....
call jackknife_binning(Tmc,qu,qu_average,qu_error)
write(*,*)"quartic", a,bt,ct,qu_average,qu_error
write(80,*)a,bt,ct,qu_average,qu_error
c..............schwinger-dyson identity.....................................
ide_average=2.0d0*a*k_average+2.0d0*b*N*pt_average
& +4.0d0*c*qu_average
ide_average=ide_average/(N*N)
ide_error=2.0d0*a*k_error+2.0d0*b*N*pt_error
& +4.0d0*c*qu_error
ide_error=ide_error/(N*N)
write(*,*)"ide", a,bt,ct,ide_average,ide_error
write(81,*)a,bt,ct,ide_average,ide_error
c...............variation of hamiltonian.................................
call jackknife_binning(Tmc,varH,varH_average,varH_error)
write(*,*)"exp(-\Delta H)",a,bt,ct,varH_average,varH_error
write(11,*)a,bt,ct,varH_average,varH_error
enddo
c.......................cpu time.............................................
call cpu_time(t_2)
write(*,*)"cpu_time=", t_2-t_1
      stop
end
c.....................metropolis algorithm...........................
subroutine metropolis(N,a,b,c,dt,time,phi,P,Reject,Accept
& ,variationH,idum)
implicit none
integer N,i,j,mu,nu,k,l,idum,time
double precision a,b,c,inn,dt,ran2,Reject,Accept
double complex var(N,N),phi(N,N),phi0(N,N),P(N,N),P0(N,N)
double precision variations,variationH,probabilityS,probabilityH,r
      double precision actio,ham,kin,quad,quar,mag
c.....refresh the momenta and save the current configuration....................
      call gaussian(idum,N,P)
      phi0=phi
      P0=P
      call action(N,phi,P,a,b,c,kin,quad,quar,actio,ham,mag)
      variationS=actio
      variationH=ham
      call molecular_dynamics(N,dt,time,a,b,c,phi,P)
call action(N,phi,P,a,b,c,kin,quad,quar,actio,ham,mag)
variationS=actio-variationS
variationH=ham-variationH
c...........metropolis accept-reject step.................
if(variationH.lt.0.0d0)then
accept=accept+1.0d0
else
probabilityH=dexp(-variationH)
r=ran2(idum)
if (r.lt.probabilityH)then
accept=accept+1.0d0
else
phi=phi0
P=P0
Reject=Reject+1.0d0
endif
endif
return
end
c....................eigenvalues............................
subroutine eigenvalues(N,phi0,ev)
implicit none
integer N,inf
double complex cw(1:2*N-1)
double precision rw(1:3*N-2)
double complex phi0(1:N,1:N)
double precision ev(1:N)
c.....diagonalize the hermitian matrix phi0 with LAPACK's zheev.................
      call zheev('N','U',N,phi0,N,ev,cw,2*N-1,rw,inf)
      return
end
subroutine action(N,phi,P,a,b,c,kin,quad,quar,actio,ham,mag)
implicit none
integer N,mu,i,j,k,l
double complex phi(N,N),P(N,N)
double precision a,b,c
double precision kin,quad,quar,actio,ham,mag
double complex kine,quadr,quart,ham0
double complex Lplus(1:N,1:N),Lminus(1:N,1:N),Lz(1:N,1:N)
      double complex X(1:3,1:N,1:N)
c.....................kinetic term..........................
      call SU2(N,X,Lplus,Lminus)
      kine=cmplx(0,0)
      do i=1,N
      do j=1,N
      do k=1,N
      do l=1,N
      kine=kine+X(1,i,j)*phi(j,k)*X(1,k,l)*phi(l,i)
     &         +X(2,i,j)*phi(j,k)*X(2,k,l)*phi(l,i)
     &         +X(3,i,j)*phi(j,k)*X(3,k,l)*phi(l,i)
      enddo
      enddo
      enddo
      enddo
      kin=-2.0d0*real(kine)
c.....................quadratic term........................
      quadr=cmplx(0,0)
      do i=1,N
      do j=1,N
      quadr=quadr+phi(i,j)*phi(j,i)
      enddo
      enddo
      kin=kin+0.5d0*(N*N-1.0d0)*real(quadr)
quad=real(quadr)
c.....................quartic term..........................
quart=cmplx(0,0)
do i=1,N
do j=1,N
do k=1,N
do l=1,N
quart=quart+phi(i,j)*phi(j,k)*phi(k,l)*phi(l,i)
enddo
enddo
enddo
enddo
quar=real(quart)
c....................action...........................
actio=a*kin+b*quad+c*quar
c..................Hamiltonian...............................
ham0=cmplx(0,0)
do i=1,N
do j=1,N
ham0=ham0+P(i,j)*P(j,i)
enddo
enddo
ham=0.5d0*real(ham0)
ham=ham+actio
c.......................magnetization.............................
mag=0.0d0
do i=1,N
mag=mag+phi(i,i)
enddo
mag=dabs(mag)
return
end
c.................the force.............................................
subroutine variation(N,a,b,c,phi,var)
implicit none
integer N,i,j,k,l,nu
      double precision a,b,c
      double complex var(N,N),var1(N,N),phi(N,N)
      double complex Lplus(1:N,1:N),Lminus(1:N,1:N),Lz(1:N,1:N)
      double complex X(1:3,1:N,1:N)
call SU2(N,X,Lplus,Lminus)
do i=1,N
do j=i,N
var(i,j)=cmplx(0,0)
do k=1,N
do l=1,N
var(i,j)=var(i,j)+X(1,j,k)*phi(k,l)*X(1,l,i)
& +X(2,j,k)*phi(k,l)*X(2,l,i)
& +X(3,j,k)*phi(k,l)*X(3,l,i)
enddo
enddo
var1(i,j)=cmplx(0,0)
do k=1,N
do l=1,N
var1(i,j)=var1(i,j)+phi(j,k)*phi(k,l)*phi(l,i)
enddo
enddo
var(i,j)=-4.0d0*a*var(i,j)+(N*N-1.0d0)*a*phi(j,i)
& +2.0d0*b*phi(j,i)+4.0d0*c*var1(i,j)
var(j,i)=conjg(var(i,j))
enddo
enddo
return
end
c..........SU(2) generators....................
subroutine SU2(N,L,Lplus,Lminus)
implicit none
integer i,j,N
double complex Lplus(1:N,1:N),Lminus(1:N,1:N),Lz(1:N,1:N)
double complex L(1:3,1:N,1:N)
double complex ii
ii=cmplx(0,1)
do i=1,N
do j=1,N
if( ( i + 1 ) .eq. j )then
Lplus(i,j) =dsqrt( ( N - i )*i*1.0d0 )
else
Lplus(i,j)=0.0d0
endif
if( ( i - 1 ) .eq. j )then
Lminus(i,j)=dsqrt( ( N - j )*j*1.0d0 )
else
Lminus(i,j)=0.0d0
endif
if( i.eq.j)then
Lz(i,j) = ( N + 1 - i - i )/2.0d0
else
Lz(i,j) = 0.0d0
endif
L(1,i,j)=0.50d0*(Lplus(i,j)+Lminus(i,j))
L(2,i,j)=-0.50d0*ii*(Lplus(i,j)-Lminus(i,j))
L(3,i,j)=Lz(i,j)
enddo
enddo
return
end
c..............leap frog......................................
subroutine molecular_dynamics(N,dt,time,a,b,c,phi,P)
implicit none
integer N,i,j,nn,time
double precision dt,a,b,c
double complex phi(N,N),P(N,N),var(N,N),ii
ii=cmplx(0,1)
do nn=1,time
call variation(N,a,b,c,phi,var)
do i=1,N
do j=i,N
if (j.ne.i)then
P(i,j)=P(i,j)-0.5d0*dt*var(i,j)
phi(i,j)=phi(i,j)+dt*conjg(P(i,j))
phi(j,i)=conjg(phi(i,j))
else
P(i,i)=P(i,i)-0.5d0*dt*var(i,i)
phi(i,i)=phi(i,i)+dt*conjg(P(i,i))
      phi(i,i)=phi(i,i)-ii*aimag(phi(i,i))
endif
enddo
enddo
c...........last step of leap frog.......................................
call variation(N,a,b,c,phi,var)
do i=1,N
do j=i,N
if(j.ne.i)then
P(i,j)=P(i,j)-0.5d0*dt*var(i,j)
P(j,i)=conjg(P(i,j))
else
P(i,i)=P(i,i)-0.5d0*dt*var(i,i)
P(i,i)=P(i,i)-ii*aimag(P(i,i))
endif
enddo
enddo
enddo
return
end
subroutine gaussian(idum,N,P)
implicit none
integer N,mu,i,j,idum
double precision pi,phi,r,ran2
double complex ii,P(N,N)
pi=dacos(-1.0d0)
ii=cmplx(0,1)
do i=1,N
phi=2.0d0*pi*ran2(idum)
r=dsqrt(-2.0d0*dlog(1.0d0-ran2(idum)))
P(i,i)=r*dcos(phi)
enddo
do i=1,N
do j=i+1,N
phi=2.0d0*pi*ran2(idum)
r=dsqrt(-1.0d0*dlog(1.0d0-ran2(idum)))
P(i,j)=r*dcos(phi)+ii*r*dsin(phi)
P(j,i)=conjg(P(i,j))
enddo
enddo
return
end
subroutine jackknife_binning(TMC,f,average,error)
implicit none
integer i,j,TMC,zbin,nbin
double precision xm
double precision f(1:TMC),sumf,y(1:TMC)
double precision sig0,sig,error,average
sig0=0.0d0
sumf=0.0d0
do i=1,TMC
sumf=sumf+f(i)
enddo
xm=sumf/TMC
c do zbin=1,TMC-1
zbin=1
nbin=int(TMC/zbin)
sig=0.0d0
do i=1,nbin,1
y(i)=sumf
do j=1,zbin
y(i)=y(i)-f((i-1)*zbin+j )
enddo
y(i)= y(i)/(TMC-zbin)
sig=sig+((nbin-1.0d0)/nbin)*(y(i)-xm)*(y(i)-xm)
enddo
sig=sig
sig=dsqrt(sig)
if (sig0 .lt. sig) sig0=sig
c enddo
error=sig0
average=xm
return
end
function ran2(idum)
implicit none
integer idum,IM1,IM2,IMM1,IA1,IA2,IQ1,IQ2,IR1,IR2,NTAB,NDIV
real AM,EPS,RNMX
double precision ran2
parameter (IM1=2147483563,IM2=2147483399,AM=1./IM1,IMM1=IM1-1,
& IA1=40014,IA2=40692,IQ1=53668,IQ2=52774,IR1=12211,
& IR2=3791,NTAB=32,NDIV=1+IMM1/NTAB,EPS=1.2E-7,RNMX=1.-EPS)
integer idum2,j,k,iv(NTAB),iy
SAVE iv,iy,idum2
DATA idum2/123456789/,iv/NTAB*0/,iy/0/
if (idum.le.0) then
idum=max(-idum,1)
idum2=idum
do j=NTAB+8,1,-1
k=idum/IQ1
idum=IA1*(idum-k*IQ1)-k*IR1
if (idum.lt.0) idum=idum+IM1
if (j.le.NTAB) iv(j)=idum
enddo
iy=iv(1)
endif
k=idum/IQ1
idum=IA1*(idum-k*IQ1)-k*IR1
if (idum.lt.0) idum=idum+IM1
k=idum2/IQ2
idum2=IA2*(idum2-k*IQ2)-k*IR2
if (idum2.lt.0) idum2=idum2+IM2
j=1+iy/NDIV
iy=iv(j)-idum2
iv(j)=idum
if (iy.lt.1) iy=iy+IMM1
ran2=min(AM*iy,RNMX)
return
end
c........hot start...................
subroutine hot(N,idum,inn,phi,P)
implicit none
integer lambda,i,j,N,d,idum
double complex phi(N,N),P(N,N)
double precision xx,y,inn,interval
do i=1,N
do j=i,N
if (j.ne.i) then
xx=interval(idum,inn)
y=interval(idum,inn)
phi(i,j)=cmplx(xx,y)
phi(j,i)=cmplx(xx,-y)
xx=interval(idum,inn)
y=interval(idum,inn)
P(i,j)=cmplx(xx,y)
P(j,i)=cmplx(xx,-y)
else
xx=interval(idum,inn)
phi(i,j)=xx
xx=interval(idum,inn)
P(i,j)=xx
endif
enddo
enddo
return
end
c.............interval..............
function interval(idum,inn)
implicit none
double precision interval,inn,ran2
integer idum
interval=ran2(idum)
interval=interval+interval-1.0d0
interval=interval*inn
return
end
c......cold start.....................
subroutine cold(N,phi)
implicit none
integer lambda,i,j,N
double complex phi(N,N)
do i=1,N
do j=1,N
phi(i,j)=cmplx(0,0)
enddo
enddo
return
end
c.........adjusting interval..................
subroutine adjust_inn(cou,pa,dt,time,Rejec,Accept,
& nn,target_pa_high,target_pa_low,dt_max,dt_min,inc,dec)
implicit none
double precision dt,pa,Rejec,Accept
integer time,cou,cou1
integer nn
double precision target_pa_high,target_pa_low,dt_max,dt_min,inc,
& dec,rho1,rho2,dtnew
if (mod(cou,nn).eq.0) then
pa=Accept/(Rejec+Accept)
if (pa.ge.target_pa_high) then
dtnew=dt*inc
if (dtnew.le.dt_max)then
dt=dtnew
else
dt=dt_max
endif
endif
if (pa.le.target_pa_low) then
dtnew=dt*dec
if (dtnew.ge.dt_min)then
dt=dtnew
else
dt=dt_min
endif
endif
endif
return
end
File: /home/ydri/Desktop/TP_QFT/codes/phi-four-on-lattice.f Page 1 of 7
program my_phi_four_on_lattice
implicit none
integer N,idum,time,cou,nn,kk,ith,imc,ico,Tth,Tmc,Tco
parameter (N=16)
parameter (Tth=2**13,Tmc=2**14,Tco=2**3)
double precision dt,kappa,g,phi(N,N),P(N,N),lambda_l,mu0_sq_l
double precision mass,linear,kinetic,potential,act,Ham,variationH,
& quartic
double precision target_pa_high,target_pa_low,dt_max,dt_min,inc
& ,dec,inn,pa,accept,reject
real x0
double precision ac(Tmc),ac_average,ac_error,cv(Tmc),cv_average,
& cv_error,lin(Tmc),lin_average,lin_error,susc(Tmc),susc_average,
& susc_error,ac2(Tmc),ac2_av,ac2_er,ac4(Tmc),ac4_av,ac4_er,binder,
& binder_e
idum=-148175
x0=0.0
idum=idum-2*int(secnds(x0))
c.............parameters..................
lambda_l=0.5d0
do kk=0,15
mu0_sq_l=-1.5d0+kk*0.1d0
kappa=dsqrt(8.0d0*lambda_l+(4.0d0+mu0_sq_l)*(4.0d0+mu0_sq_l))
kappa=kappa/(4.0d0*lambda_l)
kappa=kappa-(4.0d0+mu0_sq_l)/(4.0d0*lambda_l)
g=kappa*kappa*lambda_l
inn=1.0d0
call hot(N,idum,inn,phi,P)
time=10
dt=0.01d0
Reject=0.0d0
Accept=0.0d0
pa=0.0d0
target_pa_high=0.90d0
target_pa_low=0.70d0
dt_max=1.0d0
dt_min=0.0001d0
inc=1.2d0
dec=0.8d0
nn=1
c...............thermalization......
do ith=1,Tth
call metropolis(time,dt,N,kappa,g,idum,accept,reject,
& variationH,P,phi)
call adjust_inn(cou,pa,dt,time,Reject,Accept,
& nn,target_pa_high,target_pa_low,dt_max,dt_min,inc,dec)
call action(N,kappa,g,P,phi,mass,linear,kinetic,potential,
& act,Ham,quartic)
write(9+kk,*) ith,act,Ham,variationH,pa,dt
enddo
do imc=1,Tmc
do ico=1,Tco
call metropolis(time,dt,N,kappa,g,idum,accept,reject,
& variationH,P,phi)
call adjust_inn(cou,pa,dt,time,Reject,Accept,
& nn,target_pa_high,target_pa_low,dt_max,dt_min,inc,dec)
enddo
call action(N,kappa,g,P,phi,mass,linear,kinetic,potential,
& act,Ham,quartic)
ac(imc)=act
lin(imc)=dabs(linear)
ac2(imc)=linear*linear
ac4(imc)=linear*linear*linear*linear
write(9+kk,*) imc+Tth,act,Ham,variationH,pa,dt
enddo
c....................observables........................
c.................action..................................
call jackknife_binning(Tmc,ac,ac_average,ac_error)
write(50,*)mu0_sq_l,lambda_l,kappa,g,ac_average,ac_error
c.................specific heat..................................
do imc=1,Tmc
cv(imc)=ac(imc)-ac_average
cv(imc)=cv(imc)**(2.0d0)
enddo
call jackknife_binning(Tmc,cv,cv_average,cv_error)
write(60,*)mu0_sq_l,lambda_l,kappa,g,cv_average,cv_error
c...............magnetization....................................
call jackknife_binning(Tmc,lin,lin_average,lin_error)
write(70,*)mu0_sq_l,lambda_l,kappa,g,lin_average,lin_error
c...............susceptibility...............................
do imc=1,Tmc
susc(imc)=lin(imc)-lin_average
susc(imc)=susc(imc)**(2.0d0)
enddo
call jackknife_binning(Tmc,susc,susc_average,susc_error)
write(80,*)mu0_sq_l,lambda_l,kappa,g,susc_average,susc_error
c...............Binder cumulant...........................
call jackknife_binning(Tmc,ac2,ac2_av,ac2_er)
write(81,*)mu0_sq_l,lambda_l,kappa,g,ac2_av,ac2_er
call jackknife_binning(Tmc,ac4,ac4_av,ac4_er)
write(82,*)mu0_sq_l,lambda_l,kappa,g,ac4_av,ac4_er
binder=1.0d0-ac4_av/(3.0d0*ac2_av*ac2_av)
binder_e=-ac4_er/(3.0d0*ac2_av*ac2_av)
& +2.0d0*ac4_av*ac2_er/(3.0d0*ac2_av*ac2_av*ac2_av)
write(90,*)mu0_sq_l,lambda_l,kappa,g,binder,binder_e
enddo
return
end
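The parameter map coded above converts the lattice couplings (mu0_sq_l, lambda_l) into the hopping-parameter pair (kappa, g). A quick Python check (an illustrative sketch, not part of the Fortran sources) confirms that the quadratic formula inverts the standard relation mu0^2 = (1 - 2g)/kappa - 4 with g = kappa^2 * lambda:

```python
from math import sqrt

def hopping_parameters(mu0_sq, lam):
    """Positive root of 2*lam*kappa**2 + kappa*(4 + mu0_sq) = 1,
    i.e. the inversion of mu0^2 = (1 - 2*g)/kappa - 4 with g = kappa**2*lam."""
    kappa = (sqrt(8.0 * lam + (4.0 + mu0_sq) ** 2) - (4.0 + mu0_sq)) / (4.0 * lam)
    g = kappa * kappa * lam
    return kappa, g
```

The assertion below runs over the same grid of mu0_sq values as the main program.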
subroutine metropolis(time,dt,N,kappa,g,idum,accept,reject,
& variationH,P,phi)
implicit none
integer time,N,idum
double precision dt,kappa,g,accept,reject,P(N,N),phi(N,N),
& variationH,P0(N,N),phi0(N,N),r,ran2,probability
double precision mass,linear,kinetic,potential,act,Ham,quartic
call gaussian(N,idum,P)
P0=P
phi0=phi
call action(N,kappa,g,P,phi,mass,linear,kinetic,potential,act,Ham,
& quartic)
variationH=-Ham
call leap_frog(time,dt,N,kappa,g,P,phi)
call action(N,kappa,g,P,phi,mass,linear,kinetic,potential,act,Ham,
& quartic)
variationH=variationH+Ham
if (variationH.lt.0.0d0)then
accept=accept+1.0d0
else
probability=dexp(-variationH)
r=ran2(idum)
if (r.lt.probability)then
accept=accept+1.0d0
else
P=P0
phi=phi0
reject=reject+1.0d0
endif
endif
return
end
subroutine gaussian(N,idum,P)
implicit none
integer N,i,j,idum
double precision P(N,N),ph,r,pi,ran2
pi=dacos(-1.0d0)
do i=1,N
do j=1,N
r=dsqrt(-2.0d0*dlog(1.0d0-ran2(idum)))
ph=2.0d0*pi*ran2(idum)
P(i,j)=r*dcos(ph)
enddo
enddo
return
end
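The subroutine above draws unit-variance momenta by the Box-Muller method, keeping only the cosine partner of each pair. A Python sketch (illustrative; the standard library's Mersenne Twister stands in for `ran2`):

```python
import math
import random

def gaussian_momenta(n, rng=random):
    """Fill an n x n array with independent N(0,1) deviates via Box-Muller,
    keeping only the cosine partner of each generated pair."""
    p = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # 1 - u lies in (0, 1], so the logarithm is always defined
            r = math.sqrt(-2.0 * math.log(1.0 - rng.random()))
            ph = 2.0 * math.pi * rng.random()
            p[i][j] = r * math.cos(ph)
    return p
```

Discarding the sine partner costs one uniform deviate per sample but keeps the loop body stateless, which is the choice made in the Fortran listing as well.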
subroutine leap_frog(time,dt,N,kappa,g,P,phi)
implicit none
integer time,N,nn,i,j
double precision kappa,g,phi(N,N),P(N,N),force(N,N),dt
do nn=1,time
call scalar_force(N,phi,kappa,g,force)
do i=1,N
do j=1,N
P(i,j)=P(i,j)-0.5d0*dt*force(i,j)
phi(i,j)=phi(i,j)+dt*P(i,j)
enddo
enddo
call scalar_force(N,phi,kappa,g,force)
do i=1,N
do j=1,N
P(i,j)=P(i,j)-0.5d0*dt*force(i,j)
enddo
enddo
enddo
return
end
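The integrator above is the standard leapfrog: a half-step momentum kick, a full position step, then a second half-kick, repeated `time` times. A minimal single-degree-of-freedom sketch in Python (illustrative; the quadratic force below is a stand-in for `scalar_force`), exhibiting the near-conservation of the Hamiltonian and the exact time-reversibility that the Metropolis accept-reject step relies on:

```python
def leapfrog(x, p, force, dt, steps):
    """Leapfrog integration of dx/dt = p, dp/dt = -force(x)."""
    for _ in range(steps):
        p -= 0.5 * dt * force(x)   # half kick
        x += dt * p                # full drift
        p -= 0.5 * dt * force(x)   # half kick
    return x, p
```

For the harmonic oscillator H = p^2/2 + x^2/2 the energy error stays bounded at order dt^2, and integrating back with the momentum negated recovers the starting point to machine precision.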
subroutine scalar_force(N,phi,kappa,g,force)
implicit none
integer N,i,j,ip(N),im(N)
double precision phi(N,N),kappa,g,force(N,N)
double precision force1,force2,force3
call ipp(N,ip)
call imm(N,im)
do i=1,N
do j=1,N
force1=phi(ip(i),j)+phi(im(i),j)+phi(i,ip(j))+phi(i,im(j))
force1=-2.0d0*kappa*force1
force2=2.0d0*phi(i,j)
force3=phi(i,j)*(phi(i,j)*phi(i,j)-1.0d0)
force3=4.0d0*g*force3
force(i,j)=force1+force2+force3
enddo
enddo
return
end
subroutine action(N,kappa,g,P,phi,mass,linear,kinetic,potential,
& act,Ham,quartic)
implicit none
integer N,i,j,ip(N)
double precision kappa,g
double precision phi(N,N),P(N,N),act,potential,mass,kinetic,
& kineticH,ham,linear,quartic
call ipp(N,ip)
kinetic=0.0d0
mass=0.0d0
kineticH=0.0d0
potential=0.0d0
linear=0.0d0
quartic=0.0d0
do i=1,N
do j=1,N
linear=linear+phi(i,j)
quartic=quartic+phi(i,j)*phi(i,j)*phi(i,j)*phi(i,j)
kinetic=kinetic+phi(i,j)*(phi(ip(i),j)+phi(i,ip(j)))
mass=mass+phi(i,j)*phi(i,j)
potential=potential
& +(phi(i,j)*phi(i,j)-1.0d0)*(phi(i,j)*phi(i,j)-1.0d0)
kineticH=kineticH+P(i,j)*P(i,j)
enddo
enddo
kinetic=-2.0d0*kappa*kinetic
potential=g*potential
act=kinetic+mass+potential
kineticH=0.5d0*kineticH
ham=kineticH+act
return
end
subroutine ipp(N,ip)
implicit none
integer ip(N),i,N
do i=1,N-1
ip(i)=i+1
enddo
ip(N)=1
return
end
subroutine imm(N,im)
implicit none
integer im(N),i,N
do i=2,N
im(i)=i-1
enddo
im(1)=N
return
end
subroutine jackknife_binning(TMC,f,average,error)
implicit none
integer i,j,TMC,zbin,nbin
double precision xm
double precision f(1:TMC),sumf,y(1:TMC)
double precision sig0,sig,error,average
sig0=0.0d0
sumf=0.0d0
do i=1,TMC
sumf=sumf+f(i)
enddo
xm=sumf/TMC
c do zbin=1,TMC-1
zbin=1
nbin=int(TMC/zbin)
sig=0.0d0
do i=1,nbin,1
y(i)=sumf
do j=1,zbin
y(i)=y(i)-f((i-1)*zbin+j )
enddo
y(i)= y(i)/(TMC-zbin)
sig=sig+((nbin-1.0d0)/nbin)*(y(i)-xm)*(y(i)-xm)
enddo
sig=sig
sig=dsqrt(sig)
if (sig0 .lt. sig) sig0=sig
c enddo
error=sig0
average=xm
return
end
function ran2(idum)
implicit none
integer idum,IM1,IM2,IMM1,IA1,IA2,IQ1,IQ2,IR1,IR2,NTAB,NDIV
real AM,EPS,RNMX
double precision ran2
parameter (IM1=2147483563,IM2=2147483399,AM=1./IM1,IMM1=IM1-1,
& IA1=40014,IA2=40692,IQ1=53668,IQ2=52774,IR1=12211,
& IR2=3791,NTAB=32,NDIV=1+IMM1/NTAB,EPS=1.2E-7,RNMX=1.-EPS)
integer idum2,j,k,iv(NTAB),iy
SAVE iv,iy,idum2
DATA idum2/123456789/,iv/NTAB*0/,iy/0/
if (idum.le.0) then
idum=max(-idum,1)
idum2=idum
do j=NTAB+8,1,-1
k=idum/IQ1
idum=IA1*(idum-k*IQ1)-k*IR1
if (idum.lt.0) idum=idum+IM1
if (j.le.NTAB) iv(j)=idum
enddo
iy=iv(1)
endif
k=idum/IQ1
idum=IA1*(idum-k*IQ1)-k*IR1
if (idum.lt.0) idum=idum+IM1
k=idum2/IQ2
idum2=IA2*(idum2-k*IQ2)-k*IR2
if (idum2.lt.0) idum2=idum2+IM2
j=1+iy/NDIV
iy=iv(j)-idum2
iv(j)=idum
if (iy.lt.1) iy=iy+IMM1
ran2=min(AM*iy,RNMX)
return
end
c........hot start...................
subroutine hot(N,idum,inn,phi,P)
implicit none
integer lambda,i,j,N,idum
double precision phi(N,N),P(N,N)
double precision inn,interval
do i=1,N
do j=1,N
phi(i,j)=interval(idum,inn)
P(i,j)=interval(idum,inn)
enddo
enddo
return
end
c.........adjusting interval..................
subroutine adjust_inn(cou,pa,dt,time,Reject,Accept,
& nn,target_pa_high,target_pa_low,dt_max,dt_min,inc,dec)
implicit none
double precision dt,pa,Reject,Accept
integer time,cou,cou1
integer nn
double precision target_pa_high,target_pa_low,dt_max,dt_min,inc,
& dec,rho1,rho2,dtnew
if (mod(cou,nn).eq.0) then
pa=Accept/(Reject+Accept)
if (pa.ge.target_pa_high) then
dtnew=dt*inc
if (dtnew.le.dt_max)then
dt=dtnew
else
dt=dt_max
endif
endif
if (pa.le.target_pa_low) then
dtnew=dt*dec
if (dtnew.ge.dt_min)then
dt=dtnew
else
dt=dt_min
endif
endif
endif
return
end
c.............interval..............
function interval(idum,inn)
implicit none
double precision interval,inn,ran2
integer idum
interval=ran2(idum)
interval=interval+interval-1.0d0
interval=interval*inn
return
end
File: /home/ydri/Desktop/TP_QFT/codmetropolis-scalar-multitrace.f Page 1 of 7
program my_metropolis_scalar_multitrace
implicit none
integer N,i,k,idum,ither,Tther,imont,ico,tmo,Tmont,Tco,counter,
& Pow1,Pow2,Pow3
parameter (N=10)
parameter (pow1=20,pow2=20,pow3=5)
parameter (Tther=2**pow1,Tmont=2**pow2,Tco=2**pow3)
double precision a,b,c,d,g,at,bt,ct,eta,v22,v41,v21,ap,bp,cp,dp,e,
& ep,fp
double precision ran2,inn,interval,accept,reject,pa,t_1,t_2
double precision lambda(N)
double precision actio,actio0,sum1,sum2,sum4,sumv,actio1,actio2,
& actio4
double precision ac(Tmont),ac_average,ac_error
double precision id,ide(Tmont),ide_average,ide_error
double precision cv(Tmont),cv_average,cv_error
double precision va(Tmont),va_average,va_error
double precision p0(Tmont),p0_average,p0_error
double precision pt(Tmont),pt_average,pt_error
double precision p4(Tmont),p4_average,p4_error
double precision su(Tmont),su_average,su_error
double precision sus(Tmont),sus_average,sus_error
real x0
call cpu_time(t_1)
idum=-148175
x0=0.0
idum=idum-2*int(secnds(x0))
c.......coupling constants: the even couplings are restored from the
c.......commented values; the odd couplings ap,bp,cp,dp,ep,fp are set
c.......to zero for the even multitrace model, and bt,ct are the
c.......rescaled couplings (assumed scaling bt=b/N**1.5, ct=c/N**2)....
g=1.0d0
b=-N/g
c=N
c=c/(4.0d0*g)
d=0.0d0
ap=0.0d0
bp=0.0d0
cp=0.0d0
dp=0.0d0
ep=0.0d0
fp=0.0d0
bt=b/(N*dsqrt(1.0d0*N))
ct=c/(1.0d0*N*N)
do k=0,0
inn=1.0d0
do i=1,N
lambda(i)=interval(idum,inn)
enddo
Reject=0.0d0
Accept=0.0d0
pa=0.0d0
c.........thermalization.........................................................
do ither=1,Tther
call standard_metropolis(N,ap,b,bp,c,cp,d,dp,ep,fp,lambda,
& accept,reject,idum,inn,pa)
call action(N,ap,b,bp,c,cp,d,dp,ep,fp,lambda,actio,actio0,
& sum1,sum2,sum4,sumv,id,actio1,actio2,actio4)
write(*,*)ither,actio0,actio,dabs(sum1),sum2,sum4,id,pa,inn
write(7,*)ither,actio0,actio,dabs(sum1),sum2,sum4,sumv,id
& ,pa,inn
enddo
c.........monte carlo loop......................................
counter=0
do imont=1,Tmont
do ico=1,Tco
call standard_metropolis(N,ap,b,bp,c,cp,d,dp,ep,fp,lambda
& ,accept,reject,idum,inn,pa)
enddo
call action(N,ap,b,bp,c,cp,d,dp,ep,fp,lambda,actio,actio0,
& sum1,sum2,sum4,sumv,id,actio1,actio2,actio4)
c if ((id.ge.0.8d0).and.(id.le.1.2d0))then
counter=counter+1
ac(counter)=actio0+actio1
ide(counter)=id
va(counter)=sumv
su(counter)=dabs(sum1)
p0(counter)=sum1*sum1/(N*N)
pt(counter)=sum2/N
p4(counter)=sum4
write(*,*)imont,counter,sum2,sum4,id
write(8,*)imont,counter,sum2,sum4,id
c....................eigenvalues........................
write(150+k,*)counter,lambda
c endif
enddo
c...............measurements............
Tmo=counter
c................action and vandermonde...................
call jackknife_binning(Tmo,ac,ac_average,ac_error)
write(10,*)bt,ct,d,ac_average,ac_error
call jackknife_binning(Tmo,va,va_average,va_error)
write(11,*)bt,ct,d,va_average,va_error
c..................identity.................
call jackknife_binning(Tmo,ide,ide_average,ide_error)
write(12,*)bt,ct,d,ide_average,ide_error
write(*,*)bt,ct,d,ide_average,ide_error, "identity"
c............power in zero modes, total power and quartic term.............
call jackknife_binning(Tmo,p0,p0_average,p0_error)
write(13,*)bt,ct,d,p0_average,p0_error
call jackknife_binning(Tmo,pt,pt_average,pt_error)
write(14,*)bt,ct,d,pt_average,pt_error
write(*,*)bt,ct,d,pt_average,pt_error, "total power"
call jackknife_binning(Tmo,p4,p4_average,p4_error)
write(15,*)bt,ct,d,p4_average,p4_error
c.......magnetization and susceptibility..............
call jackknife_binning(Tmo,su,su_average,su_error)
write(16,*)bt,ct,d,su_average,su_error
do i=1,Tmo
sus(i)= (su(i)-su_average)*(su(i)-su_average)
enddo
call jackknife_binning(Tmo,sus,sus_average,sus_error)
write(17,*)bt,ct,d,sus_average,sus_error
c..................specific heat....................
do i=1,Tmo
cv(i)=(ac(i)-ac_average)**2
enddo
call jackknife_binning(Tmo,cv,cv_average,cv_error)
write(20,*)bt,ct,d,cv_average,cv_error
enddo
return
end
c.............metropolis algorithm...........................
subroutine standard_metropolis(N,ap,b,bp,c,cp,d,dp,ep,fp,lambda
& ,accept,reject,idum,inn,pa)
implicit none
integer N,i,idum
double precision lambda(N),var,pro,r,b,c,d,accept,reject,ran2,
& h,inn,interval,pa,ap,bp,cp,dp,ep,fp
do i=1,N
c...........variation of the action....................
h=interval(idum,inn)
call variation(N,ap,b,bp,c,cp,d,dp,ep,fp,i,h,lambda,Var)
c............metropolis accept-reject step..........................
if(var.gt.0.0d0)then
pro=dexp(-var)
r=ran2(idum)
if (r.lt.pro) then
lambda(i)=lambda(i)+h
accept=accept+1.0d0
else
reject=reject+1.0d0
endif
else
lambda(i)=lambda(i)+h
accept=accept+1.0d0
endif
enddo
c............adjusting the interval inn................
call adjust_inn(pa,inn,Reject,Accept)
return
end
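The accept-reject step above can be stated generically: propose lambda_i -> lambda_i + h with h uniform in [-inn, inn], compute the change Var of the action, and accept with probability min(1, exp(-Var)). A hedged Python sketch (illustrative; `delta_action` below is a stand-in for the `variation` subroutine):

```python
import math
import random

def metropolis_sweep(lam, delta_action, inn, rng=random):
    """One Metropolis sweep over the eigenvalues lam; returns accepted count."""
    accepted = 0
    for i in range(len(lam)):
        h = inn * (2.0 * rng.random() - 1.0)   # uniform proposal in [-inn, inn]
        var = delta_action(lam, i, h)
        # accept downhill moves outright; uphill moves with probability exp(-var)
        if var <= 0.0 or rng.random() < math.exp(-var):
            lam[i] += h
            accepted += 1
    return accepted
```

Short-circuiting on `var <= 0.0` mirrors the Fortran branch order and avoids evaluating the exponential for downhill moves.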
subroutine variation(N,ap,b,bp,c,cp,d,dp,ep,fp,i,h,lambda,Var)
implicit none
integer N,i,k
double precision lambda(N),var,b,c,d,h,ap,bp,cp,dp,ep,fp
double precision dsum2,dsum4,sum2,dvand,dd,dvande
double precision sum1,sum3,var1,var2,var3,var4
dsum2=h*h+2.0d0*h*lambda(i)
dsum4=6.0d0*h*h*lambda(i)*lambda(i)
& +4.0d0*h*lambda(i)*lambda(i)*lambda(i)+4.0d0*h*h*h*lambda(i)
& +h*h*h*h
sum3=0.0d0
sum2=0.0d0
sum1=0.0d0
do k=1,N
sum3=sum3+lambda(k)*lambda(k)*lambda(k)
sum2=sum2+lambda(k)*lambda(k)
sum1=sum1+lambda(k)
enddo
dvand=0.0d0
do k=i+1,N
dd=1.0d0
dd=dd+h/(lambda(i)-lambda(k))
dd=dabs(dd)
dvand=dvand+dlog(dd)
enddo
dvand=-dvand
dvande=0.0d0
do k=1,i-1
dd=1.0d0
dd=dd+h/(lambda(i)-lambda(k))
dd=dabs(dd)
dvande=dvande+dlog(dd)
enddo
dvande=-dvande
dvand=dvand+dvande
dvand=2.0d0*dvand
var=b*dsum2+c*dsum4+2.0d0*d*dsum2*sum2+d*dsum2*dsum2+dvand
var1=h*h+2.0d0*h*sum1
var4=var1*var1+2.0d0*sum1*sum1*var1
var1=bp*var1
var4=dp*var4
var2=h*sum2+(sum1+h)*dsum2
var2=ap*var2
var3=3.0d0*h*lambda(i)*lambda(i)+3.0d0*h*h*lambda(i)+h*h*h
var3=var3*(sum1+h)
var3=var3+h*sum3
var3=cp*var3
var=var+var1+var2+var3+var4
return
end
c..............action.......................................
subroutine action(N,ap,b,bp,c,cp,d,dp,ep,fp,lambda,actio,actio0,
& sum1,sum2,sum4,sumv,id,actio1,actio2,actio4)
implicit none
integer N,i,j
double precision lambda(N),b,c,d,actio,actio0,sum1,sum2,sum4,sumv,
& id
double precision sum3,actio1,ap,bp,cp,dp,id1,ep,fp,actio2,actio4
c.............monomial terms............
sum1=0.0d0
sum2=0.0d0
sum3=0.0d0
sum4=0.0d0
do i=1,N
sum1=sum1+lambda(i)
sum2=sum2+lambda(i)*lambda(i)
sum3=sum3+lambda(i)*lambda(i)*lambda(i)
sum4=sum4+lambda(i)*lambda(i)*lambda(i)*lambda(i)
enddo
c.......the multitrace model without odd terms..........
actio0=d*sum2*sum2+b*sum2+c*sum4
actio=actio0
c............odd multitrace terms
actio1=bp*sum1*sum1+cp*sum1*sum3+dp*sum1*sum1*sum1*sum1
& +ap*sum2*sum1*sum1
c...........the multitrace model with odd terms........
actio=actio+actio1
c........adding the vandermonde potential..............
sumv=0.0d0
do i=1,N
do j=1,N
if (i.ne.j)then
sumv=sumv+dlog(dabs(lambda(i)-lambda(j)))
endif
enddo
enddo
sumv=-sumv
actio=actio+sumv
c..........the quadratic and quartic corrections explicitly....
actio2=fp*sum2+bp*sum1*sum1
actio4=ep*sum4+d*sum2*sum2+cp*sum1*sum3+dp*sum1*sum1*sum1*sum1
& +ap*sum2*sum1*sum1
c...........the schwinger-dyson identity.................
id=4.0d0*d*sum2*sum2+2.0d0*b*sum2+4.0d0*c*sum4
id1=2.0d0*bp*sum1*sum1+4.0d0*(cp*sum1*sum3+dp*sum1*sum1*sum1*sum1
& +ap*sum2*sum1*sum1)
id=id+id1
id=id/(N*N)
return
end
subroutine jackknife_binning(TMC,f,average,error)
implicit none
integer i,j,TMC,zbin,nbin
double precision xm
double precision f(1:TMC),sumf,y(1:TMC)
double precision sig0,sig,error,average
sig0=0.0d0
sumf=0.0d0
do i=1,TMC
sumf=sumf+f(i)
enddo
xm=sumf/TMC
c do zbin=1,TMC-1
zbin=1
nbin=int(TMC/zbin)
sig=0.0d0
do i=1,nbin,1
y(i)=sumf
do j=1,zbin
y(i)=y(i)-f((i-1)*zbin+j )
enddo
y(i)= y(i)/(TMC-zbin)
sig=sig+((nbin-1.0d0)/nbin)*(y(i)-xm)*(y(i)-xm)
enddo
sig=sig
sig=dsqrt(sig)
if (sig0 .lt. sig) sig0=sig
c enddo
error=sig0
average=xm
return
end
function ran2(idum)
implicit none
integer idum,IM1,IM2,IMM1,IA1,IA2,IQ1,IQ2,IR1,IR2,NTAB,NDIV
real AM,EPS,RNMX
double precision ran2
parameter (IM1=2147483563,IM2=2147483399,AM=1./IM1,IMM1=IM1-1,
& IA1=40014,IA2=40692,IQ1=53668,IQ2=52774,IR1=12211,
& IR2=3791,NTAB=32,NDIV=1+IMM1/NTAB,EPS=1.2E-7,RNMX=1.-EPS)
integer idum2,j,k,iv(NTAB),iy
SAVE iv,iy,idum2
DATA idum2/123456789/,iv/NTAB*0/,iy/0/
if (idum.le.0) then
idum=max(-idum,1)
idum2=idum
do j=NTAB+8,1,-1
k=idum/IQ1
idum=IA1*(idum-k*IQ1)-k*IR1
if (idum.lt.0) idum=idum+IM1
if (j.le.NTAB) iv(j)=idum
enddo
iy=iv(1)
endif
k=idum/IQ1
idum=IA1*(idum-k*IQ1)-k*IR1
if (idum.lt.0) idum=idum+IM1
k=idum2/IQ2
idum2=IA2*(idum2-k*IQ2)-k*IR2
if (idum2.lt.0) idum2=idum2+IM2
j=1+iy/NDIV
iy=iv(j)-idum2
iv(j)=idum
if (iy.lt.1) iy=iy+IMM1
ran2=min(AM*iy,RNMX)
return
end
c.........adjusting the interval inn in such a way that the
c.........acceptance rate pa is fixed at 30 per cent..................
subroutine adjust_inn(pa,inn,Reject,Accept)
implicit none
double precision inn,pa,Reject,Accept
pa=(Accept)/(Reject+Accept)
if (pa.ge.0.30) inn=inn*1.20d0
if (pa.le.0.25) inn=inn*0.80d0
return
end
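The adjustment rule above keeps the acceptance rate near 30 per cent by rescaling the proposal half-width: a high acceptance rate means the moves are too timid, so the interval is widened; a low rate means they are too bold, so it is shrunk. The band between 25 and 30 per cent leaves the interval alone. In Python (an illustrative sketch of the same rule):

```python
def adjust_inn(inn, accept, reject):
    """Rescale the proposal half-width inn from the running acceptance rate."""
    pa = accept / (accept + reject)
    if pa >= 0.30:
        inn *= 1.20   # too many accepts: widen the proposal
    if pa <= 0.25:
        inn *= 0.80   # too many rejects: shrink the proposal
    return inn, pa
```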
c.............the interval....................................
function interval(idum,inn)
implicit none
double precision interval,inn,ran2
integer idum
interval=ran2(idum)
interval=interval+interval-1.0d0
interval=interval*inn
return
end
File: /home/ydri/Desktop/TP_QFT/codes/remez.f Page 1 of 2
program my_remez
implicit none
integer y,z,n,d,precision,i,counter,j,n0
parameter(n0=100)
double precision lambda_low, lambda_high,e,tolerance
double precision a0,a(n0),b(n0),c0,c(n0),dd(n0),coefficient(n0)
parameter (tolerance=0.0001d0)
character*100 degree, com
character*50 h1
LOGICAL THERE
c........we choose the function to approximate, the range over which
c........the rational approximation is to be calculated, and the
c........precision used....
y=1
z=2
lambda_low=0.0004d0
lambda_high=1.0d0
precision=40
print*, "Approximating the functions x^{y/z} and x^{-y/z}:"
& , "y=",y,"z=",z
print*, "Approximation bounds:", lambda_low,lambda_high
print*, "Precision of arithmetic:", precision
write(*,*)"..................."
counter=0
i=5
14 i=i+1
counter=counter+1
print*, "ITERATION:",counter
write(degree,'("", I1 )')i
read(degree,'(i5)')n
read(degree,'(i5)')d
write(*,*)"degrees of approximation", n,d
11 inquire(file='error1.dat', exist=THERE)
if ( THERE ) then
write(*,*) "file exists!"
else
go to 11
end if
c......we read the uniform norm and test whether or not it is smaller
c......than some tolerance; if it is not, we go back and repeat with
c......increased degrees of approximation, viz n=n+1 and d=d+1.............
open(unit=50+i,file='error1.dat',status='old')
read(50+i,555) e
write(*,*)"uniform norm", e
write(*,*)"..................."
555 format(1F20.10)
close(50+i)
if (e.gt.tolerance) go to 14
return
end
File: /home/ydri/Desktop/TP_QFT/codes/conjugate-gradient.f Page 1 of 3
program my_conjugate_gradient
implicit none
integer N,M,i,j,counter,sig
parameter (N=3,M=2)
double precision A(N,N),v(N),sigma(M)
double precision x(N),r(N),p(N),q(N),product,product1,product2,
& residue,tolerance
double precision alpha,beta,alpha_previous,beta_previous,xii,xii0,
& beta_sigma(M),alpha_sigma(M),xi(M),xi_previous(M)
double precision x_sigma(N,M),p_sigma(N,M),r_sigma(N,M)
parameter(tolerance=10.0d-100)
c............example input...........................
call input(N,M,A,v,sigma)
c..............initialization.................................................................
do i=1,N
x(i)=0.0d0
r(i)=v(i)
p(i)=0.0d0
do sig=1,M
x_sigma(i,sig)=0.0d0
p_sigma(i,sig)=0.0d0
enddo
enddo
alpha=0.0d0
beta=1.0d0
do sig=1,M
xi_previous(sig)=1.0d0
xi(sig)=1.0d0
alpha_sigma(sig)=0.0d0
beta_sigma(sig)=1.0d0
enddo
c.............starting iteration.........
counter=0
13 do i=1,N
p(i)=r(i)+alpha*p(i)
do sig=1,M
p_sigma(i,sig)=xi(sig)*r(i)
& +alpha_sigma(sig)*p_sigma(i,sig)
enddo
enddo
product=0.0d0
product1=0.0d0
c.......the only matrix-vector multiplication in the problem..........
do i=1,N
q(i)=0.0d0
do j=1,N
q(i)=q(i)+A(i,j)*p(j)
enddo
product=product+p(i)*q(i)
product1=product1+r(i)*r(i)
enddo
beta_previous=beta
beta=-product1/product
product2=0.0d0
do i=1,N
x(i)=x(i)-beta*p(i)
r(i)=r(i)+beta*q(i)
product2=product2+r(i)*r(i)
enddo
alpha_previous=alpha
alpha=product2/product1
do sig=1,M
c......the xi coefficients..........
xii0=alpha_previous*beta*(xi_previous(sig)-xi(sig))
& +xi_previous(sig)*beta_previous*(1.0d0-sigma(sig)*beta)
xii=xi(sig)*xi_previous(sig)*beta_previous/xii0
xi_previous(sig)=xi(sig)
xi(sig)=xii
c........the beta coefficients......
beta_sigma(sig)=beta*xi(sig)/xi_previous(sig)
c.........the solutions and residues...........
do i=1,N
x_sigma(i,sig)=x_sigma(i,sig)-beta_sigma(sig)*p_sigma(i,sig)
r_sigma(i,sig)=xi(sig)*r(i)
enddo
c.......the alpha coefficients.......
alpha_sigma(sig)=alpha
alpha_sigma(sig)= alpha_sigma(sig)*xi(sig)*beta_sigma(sig)
alpha_sigma(sig)=alpha_sigma(sig)/(xi_previous(sig)*beta)
enddo
counter=counter+1
residue=0.0d0
do i=1,N
residue=residue+r(i)*r(i)
enddo
residue=dsqrt(residue)
if(residue.ge.tolerance) go to 13
c........verification 1: if we set sigma=0 then xi must be equal to 1
c........whereas the other pairs must be equal.........
write(*,*)"verification 1"
write(*,*)counter,xi(1),xi_previous(1)
write(*,*)counter,beta,beta_sigma(1)
write(*,*)counter,alpha,alpha_sigma(1)
c............verification 2.....
write(*,*)"verification 2"
do i=1,N
q(i)=0.0d0
do j=1,N
q(i)=q(i)+A(i,j)*x(j)
enddo
enddo
write(*,*)"v",v
write(*,*)"q",q
c............verification 3.....
write(*,*)"verification 3"
sig=1
do i=1,N
q(i)=sigma(sig)*x_sigma(i,sig)
do j=1,N
q(i)=q(i)+A(i,j)*x_sigma(j,sig)
enddo
enddo
write(*,*)"v",v
write(*,*)"q",q
return
end
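The program above implements the multi-shift conjugate gradient, which solves (A + sigma_k) x = v for all shifts sigma_k at the cost of a single Krylov sequence. The recursion at its core is ordinary CG; a minimal Python sketch of the unshifted solver (illustrative, for a small symmetric positive definite system; note the Fortran listing carries the opposite sign convention for its beta):

```python
def conjugate_gradient(A, v, tol=1e-12, max_iter=100):
    """Solve A x = v for symmetric positive definite A by plain CG."""
    n = len(v)
    x = [0.0] * n
    r = list(v)                # residual r = v - A x, with x = 0
    p = list(v)                # initial search direction
    rr = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        # the only matrix-vector multiplication per iteration
        q = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        beta = rr / sum(p[i] * q[i] for i in range(n))   # step length
        for i in range(n):
            x[i] += beta * p[i]
            r[i] -= beta * q[i]
        rr_new = sum(ri * ri for ri in r)
        if rr_new ** 0.5 < tol:
            break
        alpha = rr_new / rr    # direction-update coefficient
        rr = rr_new
        for i in range(n):
            p[i] = r[i] + alpha * p[i]
    return x
```

In exact arithmetic CG terminates in at most n iterations; the shifted variant reuses the same q = A p products for every sigma_k, which is what the xi, beta_sigma and alpha_sigma recursions in the Fortran code accomplish.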
c................input.........................................
subroutine input(N,M,A,v,sigma)
implicit none
integer N,M
double precision A(N,N),v(N),sigma(M)
a(1,1)=1.0d0
a(1,2)=2.0d0
a(1,3)=0.0d0
a(2,1)=2.0d0
a(2,2)=2.0d0
a(2,3)=0.0d0
a(3,1)=0.0d0
a(3,2)=0.0d0
a(3,3)=3.0d0
v(1)=1.0d0
v(2)=0.0d0
v(3)=10.0d0
sigma(1)=1.0d0
sigma(2)=2.0d0
return
end
File: /home/ydri/Desktop/TP_QFT/codes/hybrid-supersymmetric-ym.f Page 1 of 18
program my_hybrid_susy_ym
implicit none
integer dim,N,M,M0,i,j,k,sp,A1,idum,time,timeT,tmc0,TMC,TTH,idum0,
& cou,nn
parameter (dim=4,N=8,M0=5,M=6)
parameter (timeT=2**14,TTH=2**11,TMC=2**13)
double precision gamma,mass,alpha,zeta,alphat
double precision a0,a(M),b(M),c0,c(M0),d(M0),coefficient(2*M+1)
& ,epsilon
double complex X(dim,N,N),P(dim,N,N),phi(2,N*N-1),Q(2,N*N-1),
& xx(2,N*N-1)
double complex G(M,2,N*N-1),W(2,N*N-1),W0(2,N*N-1),xi(2,N*N-1)
double precision inn,dt,interval, Rejec,Accept,pa
double precision ham,action,actionB,actionF,kinB,kinF,
& variationH,YM,CS,HO,hamB,hamF
real x0,t_1,t_2
double complex var(dim,N,N),varF(dim,N,N)
double precision varH0,varH(TMC),varH_average,varH_error
double precision h(TMC),h_average,h_error
double precision ac(TMC),ac_average,ac_error
double precision ac_B(TMC),acB_average,acB_error
double precision ac_F(TMC),acF_average,acF_error
double precision ym0(TMC),ym_average,ym_error
double precision cs0(TMC),cs_average,cs_error
double precision ho0(TMC),ho_average,ho_error
double precision identity_av,identity_er
double precision target_pa_high,target_pa_low,dt_max,dt_min,inc,
& dec
call cpu_time(t_1)
open(10, action='WRITE')
close(10)
open(11, action='WRITE')
close(11)
open(12, action='WRITE')
close(12)
open(13, action='WRITE')
close(13)
open(14, action='WRITE')
close(14)
open(15, action='WRITE')
close(15)
open(16, action='WRITE')
close(16)
open(17, action='WRITE')
close(17)
open(18, action='WRITE')
close(18)
open(unit=60,file='approx_x**-0.5_dat',status='old')
do j=1,2*M+1
read(60,*)coefficient(j)
enddo
a0=coefficient(1)
c write(*,*)"a0=",a0
do i=2,M+1
a(i-1)=coefficient(i)
b(i-1)=coefficient(i+M)
c write(*,*)"i-1=",i-1,"a(i-1)=", a(i-1),"b(i-1)=",b(i-1)
enddo
c.....shifting the no-sigma problem of the conjugate gradient to the
c.....smallest mass, which is presumably the least convergent mass...
epsilon=b(1)
if (epsilon.gt.d(1))then
epsilon=d(1)
endif
do i=1,M
b(i)=b(i)-epsilon
enddo
do i=1,M0
d(i)=d(i)-epsilon
enddo
idum=-148175
x0=0
idum=idum-2*int(secnds(x0))
c.............parameters...............................................................................
zeta=0.0d0
mass=0.0d0
gamma=1.0d0
do k=0,0
alphat=0.0d0-k*0.25d0
alpha=alphat/dsqrt(1.0d0*N)
c.............initialization of X..............................................................
inn=1.0d0
call hot(N,dim,idum,inn,X)
c call cold(N,dim,idum,X)
c call gaussian(idum,dim,N,P)
c call gaussian_plus(idum,N,Q)
c call gaussian_plus(idum,N,xi)
c...............here we use the coefficients c and d not the coefficients a and b..............
c call conjugate_gradient(dim,N,M0,zeta,X,c0,c,d,xi,G,phi,W,
c & epsilon)
c.............molecular dynamics parameters: dt should be optimized in
c.............such a way that the acceptance rate pa is fixed in
c.............[0.7,0.9] and dt is fixed in [0.0001,1]....
time=10
dt=0.001d0
Rejec=0.0d0
Accept=0.0d0
target_pa_high=0.90d0
target_pa_low=0.70d0
dt_max=1.0d0
dt_min=0.0001d0
inc=1.2d0
dec=0.8d0
nn=1
c time=1
c dt=0.001d0
c do tmc0=1,timeT
c call molecular_dynamics(N,dim,M,dt,time,gamma,mass,alpha,
c & zeta,a0,a,b,X,P,phi,Q,var,varF,epsilon)
c call sub_action(dim,N,M,a0,a,b,X,P,phi,Q,alpha,mass,gamma,zeta,
c & ham,action,actionB,actionF,kinB,kinF,YM,CS,HO,epsilon)
c hamB=kinB+actionB
c hamF=kinF+actionF
c write(*,*)tmc0,ham,kinB,actionB,hamB,kinF,actionF,hamF
c write(7,*)tmc0,ham,kinB,actionB,hamB,kinF,actionF,hamF
c enddo
c.................thermalization..............................
do tmc0=1,TTH
call metropolis(N,dim,M,M0,gamma,mass,alpha,zeta,dt,time,X,
& P,phi,Q,a0,a,b,c0,c,d,Rejec,Accept,var,varF,variationH,
& epsilon,idum)
cou=tmc0
call adjust_inn(cou,pa,dt,time,Rejec,Accept,
& nn,target_pa_high,target_pa_low,dt_max,dt_min,inc,dec)
call sub_action(dim,N,M,a0,a,b,X,P,phi,Q,alpha,mass,gamma,
& zeta,ham,action,actionB,actionF,kinB,kinF,YM,CS,HO,
& epsilon)
varH0=dexp(-variationH)
write(*,*)tmc0,ham,action,actionB,kinB,actionF,kinF,
& variationH,varH0,pa
write(8,*)tmc0,ham,action,actionB,kinB,actionF,kinF,
& variationH,varH0,pa
enddo
do tmc0=1,TMC
call metropolis(N,dim,M,M0,gamma,mass,alpha,zeta,dt,time,X,
& P,phi,Q,a0,a,b,c0,c,d,Rejec,Accept,var,varF,variationH,
& epsilon,idum)
cou=tmc0
call adjust_inn(cou,pa,dt,time,Rejec,Accept,
& nn,target_pa_high,target_pa_low,dt_max,dt_min,inc,dec)
call sub_action(dim,N,M,a0,a,b,X,P,phi,Q,alpha,mass,gamma,
& zeta,ham,action,actionB,actionF,kinB,kinF,YM,CS,HO,
& epsilon)
ym0(tmc0)=YM
cs0(tmc0)=CS
ho0(tmc0)=HO
ac_B(tmc0)=actionB
ac_F(tmc0)=actionF
ac(tmc0)=action
h(tmc0)=ham
varH(tmc0)=dexp(-variationH)
write(*,*)tmc0,ham,action,actionB,kinB,actionF,kinF,
& variationH, varH(tmc0),pa
write(9,*)tmc0,ham,action,actionB,kinB,actionF,kinF,
& variationH,varH(tmc0),pa
enddo
c.....................measurements......................................
c..................the Hamiltonian........................................
call jackknife_binning(TMC,h,h_average,h_error)
c write(*,*)alpha,gamma,mass,zeta,h_average,h_error
open(10, status='OLD', action='WRITE', position='APPEND')
write(10,*)alpha,gamma,mass,zeta,h_average,h_error
close(10)
c..................we must have <e^(-variationH)>=1.................................
call jackknife_binning(TMC,varH,varH_average,varH_error)
c write(*,*)alpha,gamma,mass,zeta,varH_average,varH_error
open(11, status='OLD', action='WRITE', position='APPEND')
write(11,*)alpha,gamma,mass,zeta,varH_average,varH_error
close(11)
c...............the total action..................
call jackknife_binning(TMC,ac,ac_average,ac_error)
c write(*,*)alpha,gamma,mass,zeta,ac_average,ac_error
open(12, status='OLD', action='WRITE', position='APPEND')
write(12,*)alpha,gamma,mass,zeta,ac_average,ac_error
close(12)
c..................the bosonic and pseudo-fermion actions and the
c..................yang-mills, chern-simons and harmonic oscillator
c..................terms ....
call jackknife_binning(TMC,ac_B,acB_average,acB_error)
c write(*,*)alpha,gamma,mass,zeta,acB_average,acB_error
open(13, status='OLD', action='WRITE', position='APPEND')
write(13,*)alpha,gamma,mass,zeta,acB_average,acB_error
close(13)
call jackknife_binning(TMC,ym0,ym_average,ym_error)
c write(*,*)alpha,gamma,mass,zeta,ym_average,ym_error
open(14, status='OLD', action='WRITE', position='APPEND')
write(14,*)alpha,gamma,mass,zeta,ym_average,ym_error
close(14)
call jackknife_binning(TMC,cs0,cs_average,cs_error)
c write(*,*)alpha,gamma,mass,zeta,cs_average,cs_error
open(15, status='OLD', action='WRITE', position='APPEND')
write(15,*)alpha,gamma,mass,zeta,cs_average,cs_error
close(15)
call jackknife_binning(TMC,ho0,ho_average,ho_error)
c write(*,*)alpha,gamma,mass,zeta,ho_average,ho_error
open(16, status='OLD', action='WRITE', position='APPEND')
write(16,*)alpha,gamma,mass,zeta,ho_average,ho_error
close(16)
call jackknife_binning(TMC,ac_F,acF_average,acF_error)
c write(*,*)alpha,gamma,mass,zeta,acF_average,acF_error
open(17, status='OLD', action='WRITE', position='APPEND')
write(17,*)alpha,gamma,mass,zeta,acF_average,acF_error
close(17)
c............for the flat space supersymmetric model for which xi=0
c............the Schwinger-Dyson identity
c............<4*gamma*YM+3*alpha*CS+2*mass*HO>=6(N^2-1) must hold...
identity_av=4.0d0*gamma*ym_average+3.0d0*alpha*cs_average
& +2.0d0*mass*ho_average
identity_av=identity_av/(6.0d0*(N*N-1.0d0))
identity_av=identity_av-1.0d0
identity_er=4.0d0*gamma*ym_error+3.0d0*alpha*cs_error
& +2.0d0*mass*ho_error
identity_er=identity_er/(6.0d0*(N*N-1.0d0))
c write(*,*)alpha,gamma,mass,zeta,identity_av,identity_er
open(18, status='OLD', action='WRITE', position='APPEND')
write(18,*)alpha,gamma,mass,zeta,identity_av,identity_er
close(18)
enddo
c...............cpu time........................................................
call cpu_time(t_2)
write(*,*)"cpu_time=", t_2-t_1
return
end
subroutine metropolis(N,dim,M,M0,gamma,mass,alpha,zeta,dt,time,X,P
& ,phi,Q,a0,a,b,c0,c,d,Rejec,Accept,var,varF,variationH,epsilon
& ,idum)
implicit none
integer N,dim,M,M0,i,j,mu,nu,k,l,idum,time,A1,sp
double precision gamma,mass,alpha,zeta
double precision inn,dt,ran2,Rejec,Accept
double precision a0,a(M),b(M),c0,c(M0),d(M0),epsilon
double complex X(dim,N,N),X0(dim,N,N),P(dim,N,N),
& P0(dim,N,N),phi(2,N*N-1),phi0(2,N*N-1),Q(2,N*N-1),Q0(2,N*N-1),
& xi(2,N*N-1),G(M,2,N*N-1),W(2,N*N-1),W0(2,N*N-1)
double complex var(dim,N,N),varF(dim,N,N)
double precision variations,variationH,probabilityS,probabilityH,r
double precision ham,action,actionB,actionF,kinB,kinF,YM,CS,HO,
& hamB
c............Gaussian initialization..............................
call gaussian(idum,dim,N,P)
call gaussian_plus(idum,N,Q)
call gaussian_plus(idum,N,xi)
phi=xi
call conjugate_gradient(dim,N,M,zeta,X,c0,c,d,phi,G,W0,W,
& epsilon)
phi=W0
X0=X
P0=P
phi0=phi
Q0=Q
c................evaluation of the initial value of hamiltonian and action..............
call sub_action(dim,N,M,a0,a,b,X,P,phi,Q,alpha,mass,gamma,zeta,
& ham,action,actionB,actionF,kinB,kinF,YM,CS,HO,epsilon)
hamB=actionB+kinB
variationS=action
variationH=ham
call molecular_dynamics(N,dim,M,dt,time,gamma,mass,alpha,zeta
& ,a0,a,b,X,P,phi,Q,var,varF,epsilon)
c...........evaluation of the final value of hamiltonian and action and the differences................
call sub_action(dim,N,M,a0,a,b,X,P,phi,Q,alpha,mass,gamma,zeta,
& ham,action,actionB,actionF,kinB,kinF,YM,CS,HO,epsilon)
hamB=actionB+kinB
variationS=action-variationS
variationH=ham-variationH
if(variationH.lt.0.0d0)then
accept=accept+1.0d0
else
probabilityH=dexp(-variationH)
r=ran2(idum)
if (r.lt.probabilityH)then
accept=accept+1.0d0
else
X=X0
P=P0
phi=phi0
Q=Q0
Rejec=Rejec+1.0d0
endif
endif
return
end
subroutine molecular_dynamics(N,dim,M,dt,time,gamma,mass,alpha,
& zeta,a0,a,b,X,P,phi,Q,var,varF,epsilon)
implicit none
integer N,dim,M,i,j,mu,nn,time,A1,A1b,sp
double precision dt,gamma,mass,alpha,zeta,a0,a(M),b(M),epsilon,
& alp
double complex X(dim,N,N),phi(2,N*N-1),P(dim,N,N),Q(2,N*N-1),
& xx(2,N*N-1),var(dim,N,N),varF(dim,N,N),G(M,2,N*N-1),
& W(2,N*N-1),W0(2,N*N-1)
alp=1.0d0
do nn=1,time
call conjugate_gradient(dim,N,M,zeta,X,a0,a,b,phi,G,W0,W,
& epsilon)
call boson_force(N,dim,gamma,mass,alpha,X,var)
call fermion_force(N,dim,M,zeta,a0,a,b,X,G,varF)
do i=1,N
do j=i,N
do mu=1,dim
P(mu,i,j)=P(mu,i,j)-0.5d0*alp*dt*var(mu,i,j)
& -0.5d0*alp*dt*varF(mu,i,j)
X(mu,i,j)=X(mu,i,j)+alp*dt*conjg(P(mu,i,j))
X(mu,j,i)=conjg(X(mu,i,j))
enddo
enddo
enddo
do A1=1,N*N-1
do sp=1,2
Q(sp,A1)=Q(sp,A1)-0.5d0*alp*dt*W(sp,A1)
phi(sp,A1)=phi(sp,A1)+alp*dt*conjg(Q(sp,A1))
enddo
enddo
c....................last step of the leap frog......
call conjugate_gradient(dim,N,M,zeta,X,a0,a,b,phi,G,W0,W,
& epsilon)
call boson_force(N,dim,gamma,mass,alpha,X,var)
call fermion_force(N,dim,M,zeta,a0,a,b,X,G,varF)
do i=1,N
do j=i,N
do mu=1,dim
P(mu,i,j)=P(mu,i,j)-0.5d0*alp*dt*var(mu,i,j)
& -0.5d0*alp*dt*varF(mu,i,j)
P(mu,j,i)=conjg(P(mu,i,j))
enddo
enddo
enddo
do A1=1,N*N-1
do sp=1,2
Q(sp,A1)=Q(sp,A1)-0.5d0*alp*dt*W(sp,A1)
enddo
enddo
enddo
return
end
subroutine conjugate_gradient(dim,N,M,zeta,X,a0,a,b,phi,G,W0,W,
& epsilon)
implicit none
integer dim,N,M,M0,i,j,counter,A1,sig,sp
double precision zeta,a0,a(M),b(M),tol,tol0,residue,residue0,
& epsilon
double complex X(dim,N,N)
double complex xx(2,N*N-1),phi(2,N*N-1),r(2,N*N-1),p(2,N*N-1),
& q(2,N*N-1),o(2,N*N-1),xx1(2,N*N-1),q_previous(2,N*N-1)
double complex x_traceless_vec(2,N*N-1),y_traceless_vec(2,N*N-1),
& z_traceless_vec(2,N*N-1)
double complex G(M,2,N*N-1),p_sigma(M,2,N*N-1),W(2,N*N-1),
& W0(2,N*N-1), G0(M,2,N*N-1)
double precision rho,rho_previous,rho_sigma(M),beta,beta_previous,
& beta_sigma(M),xii0,xii,xi(M),xi_previous(M)
double precision product,product1,product2
parameter(tol=10.0d-5,tol0=10.0d-3)
c.........initialization.................
do A1=1,N*N-1
do sp=1,2
xx(sp,A1)=cmplx(0,0)
r(sp,A1)=phi(sp,A1)
do sig=1,M
G(sig,sp,A1)=cmplx(0,0)
enddo
q(sp,A1)=cmplx(0,0)
enddo
enddo
rho=0.0d0
beta=1.0d0
do sig=1,M
xi_previous(sig)=1.0d0
xi(sig)=1.0d0
rho_sigma(sig)=0.0d0
beta_sigma(sig)=1.0d0
enddo
counter=0
13 do A1=1,N*N-1
do sp=1,2
p(sp,A1)=r(sp,A1)+rho*p(sp,A1)
do sig=1,M
p_sigma(sig,sp,A1)=xi(sig)*r(sp,A1)
& +rho_sigma(sig)*p_sigma(sig,sp,A1)
enddo
enddo
enddo
c enddo
call multiplication(dim,N,M,zeta,X,p,y_traceless_vec)
o=y_traceless_vec
c write(*,*)"o",o
call multiplication_plus(dim,N,M,zeta,X,o,z_traceless_vec)
q_previous=q
q=z_traceless_vec
q=q+epsilon*p
c write(*,*)"q",q
c.................calculating the beta coefficient......
product=0.0d0
product1=0.0d0
do A1=1,N*N-1
do sp=1,2
product=product+conjg(p(sp,A1))*q(sp,A1)
product1=product1+conjg(r(sp,A1))*r(sp,A1)
enddo
enddo
beta_previous=beta
beta=-product1/product
c...............calculating the solution xx, its residue and the rho coefficient.....
product2=0.0d0
do A1=1,N*N-1
do sp=1,2
xx(sp,A1)=xx(sp,A1)-beta*p(sp,A1)
r(sp,A1)=r(sp,A1)+beta*q(sp,A1)
product2=product2+conjg(r(sp,A1))*r(sp,A1)
enddo
enddo
rho_previous=rho
rho=product2/product1
do sig=1,M
c.........the xi coefficients..................
xii0=rho_previous*beta*(xi_previous(sig)-xi(sig))+
& xi_previous(sig)*beta_previous*(1.0d0-b(sig)*beta)
xii=xi(sig)*xi_previous(sig)*beta_previous/xii0
xi_previous(sig)=xi(sig)
xi(sig)=xii
c.........the beta coefficients............................
beta_sigma(sig)=beta*xi(sig)/xi_previous(sig)
c.........the solutions......................
do A1=1,N*N-1
do sp=1,2
G(sig,sp,A1)=G(sig,sp,A1)-beta_sigma(sig)*p_sigma(sig,sp,A1)
enddo
enddo
c........the alpha coefficients:alpha=rho..
rho_sigma(sig)=rho
rho_sigma(sig)=rho_sigma(sig)*xi(sig)*beta_sigma(sig)
rho_sigma(sig)=rho_sigma(sig)/(xi_previous(sig)*beta)
enddo
residue=0.0d0
do A1=1,N*N-1
do sp=1,2
residue=residue+conjg(r(sp,A1))*r(sp,A1)
enddo
enddo
residue=dsqrt(residue)
counter=counter+1
if(residue.ge.tol) go to 13
c write(*,*)counter,residue
do A1=1,N*N-1
do sp=1,2
W0(sp,A1)=cmplx(0,0)
do sig=1,M
W0(sp,A1)=W0(sp,A1)+a(sig)*G(sig,sp,A1)
enddo
W0(sp,A1)=W0(sp,A1)+a0*phi(sp,A1)
W(sp,A1)=conjg(W0(sp,A1))
enddo
enddo
c......verification of Delta.xx=phi....................
c write(*,*)"phi",phi
c write(*,*)"......................"
c call multiplication(dim,N,M,zeta,X,xx,y_traceless_vec)
c o=y_traceless_vec
c write(*,*)"o",o
c call multiplication_plus(dim,N,M,zeta,X,o,z_traceless_vec)
c q=z_traceless_vec
c...............we must have q=phi since Delta.xx=phi....
c write(*,*)"q",q
c write(*,*)"..............................."
c......verification of (Delta+b(sigma)).G_sigma=phi....................
c sig=1
c call reverse_identification(N,M,sig,G,x_traceless_vec)
c xx1=x_traceless_vec
c call multiplication(dim,N,M,zeta,X,xx1,y_traceless_vec)
c o=y_traceless_vec
c write(*,*)"o",o
c call multiplication_plus(dim,N,M,zeta,X,o,z_traceless_vec)
c q=z_traceless_vec+b(sig)*xx1
c...............we must have q=phi ....
c write(*,*)"q",q
c write(*,*)phi(1,1),q(1,1)
c write(*,*)".........................."
return
end
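The subroutine above is a multi-shift solver: it layers the shifted solutions G_sigma of (M^+M + b_sigma) on top of one underlying conjugate-gradient iteration. As a reference point, here is a minimal single-system CG sketch in Python (up to the sign conventions used above for beta); the 2x2 matrix and right-hand side are made-up toy data, not anything from the simulation.

```python
# Minimal conjugate gradient for a small symmetric positive-definite
# system A x = b, pure Python.  Illustrates the alpha/beta (here
# alpha/rho) recursion that the multi-shift subroutine generalizes.

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0
    p = r[:]
    rho = dot(r, r)
    for _ in range(max_iter):
        q = matvec(A, p)
        alpha = rho / dot(p, q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        rho_new = dot(r, r)
        if rho_new ** 0.5 < tol:
            break
        p = [ri + (rho_new / rho) * pi for ri, pi in zip(r, p)]
        rho = rho_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]      # toy symmetric positive-definite matrix
b = [1.0, 2.0]
x = cg(A, b)
```

For an n x n system CG terminates (in exact arithmetic) in at most n iterations, which is why the residue-based stopping test in the Fortran routine is safe.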
subroutine sub_action(dim,N,M,a0,a,b,X,P,phi,Q,alpha,mass,gamma,
& zeta,ham,action,actionB,actionF,kinB,kinF,YM,CS,HO,epsilon)
implicit none
integer dim,N,M,mu,nu,i,j,k,l,A1,sp
double complex X(dim,N,N),P(dim,N,N),phi(2,N*N-1),Q(2,N*N-1),
&W(2,N*N-1),W0(2,N*N-1),G(M,2,N*N-1)
double complex ii,action0,action1,action2,ham0,ym0,cs0,ho0,
& kin0,kin1
double precision action,actionB,actionF,ham,kinB,kinF,YM,CS,HO,
&a0,a(M),b(M),epsilon
double precision mass,gamma,alpha,zeta
ii=cmplx(0,1)
c................yang-mills action........................
ym0=cmplx(0,0)
do mu =1,dim
do nu=mu+1,dim
action0=cmplx(0,0)
do i=1,N
do j=1,N
do k=1,N
do l=1,N
action0=action0+X(mu,i,j)*X(nu,j,k)*X(mu,k,l)*X(nu,l,i)
& -X(mu,i,j)*X(mu,j,k)*X(nu,k,l)*X(nu,l,i)
enddo
enddo
enddo
enddo
ym0=ym0+action0
enddo
enddo
action=real(ym0)
YM=-N*action
action=-N*gamma*action
kin0=cmplx(0,0)
ho0=cmplx(0,0)
do mu =1,dim
ham0=cmplx(0,0)
action1=cmplx(0,0)
do i=1,N
do j=1,N
ham0=ham0+P(mu,i,j)*P(mu,j,i)
action1=action1+X(mu,i,j)*X(mu,j,i)
enddo
enddo
kin0=kin0+ham0
ho0=ho0+action1
enddo
kinB=0.5d0*real(kin0)
ham=kinB
HO=0.5d0*real(ho0)
action=action+0.5d0*mass*real(ho0)
cs0=cmplx(0,0)
do i=1,N
do j=1,N
do k=1,N
cs0=cs0+ii*X(1,i,j)*X(2,j,k)*X(3,k,i)
& -ii*X(1,i,j)*X(3,j,k)*X(2,k,i)
enddo
enddo
enddo
CS=2.0d0*N*real(cs0)
action=action+2.0d0*alpha*N*real(cs0)
ham=ham+action
actionB=action
c...............fermion contribution.....
call conjugate_gradient(dim,N,M,zeta,X,a0,a,b,phi,G,W0,W,
& epsilon)
action2=cmplx(0,0)
kin1=cmplx(0,0)
do A1=1,N*N-1
do sp=1,2
action2=action2+W(sp,A1)*phi(sp,A1)
kin1=kin1+conjg(Q(sp,A1))*Q(sp,A1)
enddo
enddo
actionF=real(action2)
kinF=real(kin1)
action=actionB+actionF
ham=ham+kinF+actionF
return
end
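The quartic loop in sub_action accumulates, for each pair (mu,nu), the combination Tr(X_mu X_nu X_mu X_nu) - Tr(X_mu^2 X_nu^2), which by cyclicity of the trace equals one half of Tr[X_mu,X_nu]^2. A quick Python check of that trace identity on hand-picked Hermitian 2x2 matrices (toy values, unrelated to the simulation):

```python
# Check: Tr(ABAB) - Tr(A^2 B^2) = (1/2) Tr([A,B]^2) for Hermitian A, B.
# This is the identity behind the Yang-Mills piece of sub_action.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

A = [[1.0, 2.0 + 1.0j], [2.0 - 1.0j, -1.0]]   # Hermitian
B = [[0.5, 1.0j], [-1.0j, 2.0]]               # Hermitian

lhs = (trace(matmul(matmul(A, B), matmul(A, B)))
       - trace(matmul(matmul(A, A), matmul(B, B))))
C = [[sum(A[i][k] * B[k][j] - B[i][k] * A[k][j] for k in range(2))
      for j in range(2)] for i in range(2)]    # commutator [A, B]
rhs = 0.5 * trace(matmul(C, C))
```

Since [A,B] is anti-Hermitian for Hermitian A and B, Tr[A,B]^2 is real and non-positive, which is why the Fortran code can safely take real(ym0).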
subroutine boson_force(N,dim,gamma,mass,alpha,X,var)
implicit none
integer N,dim,i,j,mu,nu,k,l
double precision gamma,mass,alpha
double complex var(dim,N,N),X(dim,N,N),ii
ii=cmplx(0,1)
do mu=1,dim
do i=1,N
do j=i,N
var(mu,i,j)=cmplx(0,0)
do nu=1,dim
do k=1,N
do l=1,N
var(mu,i,j)=var(mu,i,j)+2.0d0*X(nu,j,k)*X(mu,k,l)*X(nu,l,i)
& -X(nu,j,k)*X(nu,k,l)*X(mu,l,i)
& -X(mu,j,k)*X(nu,k,l)*X(nu,l,i)
enddo
enddo
enddo
var(mu,i,j)=-N*gamma*var(mu,i,j)+mass*X(mu,j,i)
if(mu.eq.1)then
do k=1,N
var(mu,i,j)=var(mu,i,j)+2.0d0*ii*alpha*N*X(2,j,k)*X(3,k,i)
& -2.0d0*ii*alpha*N*X(3,j,k)*X(2,k,i)
enddo
endif
if(mu.eq.2)then
do k=1,N
var(mu,i,j)=var(mu,i,j)+2.0d0*ii*alpha*N*X(3,j,k)*X(1,k,i)
& -2.0d0*ii*alpha*N*X(1,j,k)*X(3,k,i)
enddo
endif
if(mu.eq.3)then
do k=1,N
var(mu,i,j)=var(mu,i,j)+2.0d0*ii*alpha*N*X(1,j,k)*X(2,k,i)
& -2.0d0*ii*alpha*N*X(2,j,k)*X(1,k,i)
enddo
endif
var(mu,j,i)=conjg(var(mu,i,j))
enddo
enddo
enddo
return
end
subroutine fermion_force(N,dim,M,zeta,a0,a,b,X,G,varF)
implicit none
integer N,M,dim,sig,i,j,k
double complex X(dim,N,N),phi(2,N*N-1)
double precision a0,a(M),b(M),zeta
double complex T(dim),S(dim),varF(dim,N,N),ii
double complex G(M,2,N*N-1),G_vec(2,N*N),Gm(2,N,N),F_vec(2,N*N)
& ,Fm(2,N,N),W(2,N*N-1),W0(2,N*N-1)
double complex x_traceless_vec(2,N*N-1),y_traceless_vec(2,N*N-1)
ii=cmplx(0,1)
do i=1,N
do j=i,N
varF(1,i,j)=cmplx(0,0)
varF(2,i,j)=cmplx(0,0)
varF(3,i,j)=cmplx(0,0)
varF(4,i,j)=cmplx(0,0)
do sig=1,M
call reverse_identification(N,M,sig,G,x_traceless_vec)
call conversion(N,x_traceless_vec,G_vec,Gm)
call multiplication(dim,N,M,zeta,X,x_traceless_vec,
& y_traceless_vec)
call conversion(N,y_traceless_vec,F_vec,Fm)
T(1)=cmplx(0,0)
T(2)=cmplx(0,0)
T(3)=cmplx(0,0)
T(4)=cmplx(0,0)
S(1)=cmplx(0,0)
S(2)=cmplx(0,0)
S(3)=cmplx(0,0)
S(4)=cmplx(0,0)
do k=1,N
T(1)=T(1)+Gm(1,j,k)*conjg(Fm(2,k,i))-conjg(Fm(2,j,k))*Gm(1,k,i)
& +Gm(2,j,k)*conjg(Fm(1,k,i))-conjg(Fm(1,j,k))*Gm(2,k,i)
S(1)=S(1)+Gm(1,i,k)*conjg(Fm(2,k,j))-conjg(Fm(2,i,k))*Gm(1,k,j)
& +Gm(2,i,k)*conjg(Fm(1,k,j))-conjg(Fm(1,i,k))*Gm(2,k,j)
T(2)=T(2)-Gm(1,j,k)*conjg(Fm(2,k,i))+conjg(Fm(2,j,k))*Gm(1,k,i)
& +Gm(2,j,k)*conjg(Fm(1,k,i))-conjg(Fm(1,j,k))*Gm(2,k,i)
S(2)=S(2)-Gm(1,i,k)*conjg(Fm(2,k,j))+conjg(Fm(2,i,k))*Gm(1,k,j)
& +Gm(2,i,k)*conjg(Fm(1,k,j))-conjg(Fm(1,i,k))*Gm(2,k,j)
T(3)=T(3)+Gm(1,j,k)*conjg(Fm(1,k,i))-conjg(Fm(1,j,k))*Gm(1,k,i)
& -Gm(2,j,k)*conjg(Fm(2,k,i))+conjg(Fm(2,j,k))*Gm(2,k,i)
S(3)=S(3)+Gm(1,i,k)*conjg(Fm(1,k,j))-conjg(Fm(1,i,k))*Gm(1,k,j)
& -Gm(2,i,k)*conjg(Fm(2,k,j))+conjg(Fm(2,i,k))*Gm(2,k,j)
T(4)=T(4)+Gm(1,j,k)*conjg(Fm(1,k,i))-conjg(Fm(1,j,k))*Gm(1,k,i)
& +Gm(2,j,k)*conjg(Fm(2,k,i))-conjg(Fm(2,j,k))*Gm(2,k,i)
S(4)=S(4)+Gm(1,i,k)*conjg(Fm(1,k,j))-conjg(Fm(1,i,k))*Gm(1,k,j)
& +Gm(2,i,k)*conjg(Fm(2,k,j))-conjg(Fm(2,i,k))*Gm(2,k,j)
enddo
T(2)=ii*T(2)
S(2)=ii*S(2)
T(4)=ii*T(4)
S(4)=ii*S(4)
varF(1,i,j)=varF(1,i,j)-a(sig)*(T(1)+conjg(S(1)))
varF(2,i,j)=varF(2,i,j)-a(sig)*(T(2)+conjg(S(2)))
varF(3,i,j)=varF(3,i,j)-a(sig)*(T(3)+conjg(S(3)))
varF(4,i,j)=varF(4,i,j)-a(sig)*(T(4)+conjg(S(4)))
enddo
varF(1,j,i)=conjg(varF(1,i,j))
varF(2,j,i)=conjg(varF(2,i,j))
varF(3,j,i)=conjg(varF(3,i,j))
varF(4,j,i)=conjg(varF(4,i,j))
enddo
enddo
return
end
c.............multiplication by M....
subroutine multiplication(dim,N,M,zeta,X,x_traceless_vec
& ,y_traceless_vec)
implicit none
integer i,j,k,dim,N,M
double precision zeta
double complex y_mat(2,N,N),y_vec(2,N*N),y_traceless_vec(2,N*N-1),
& x_mat(2,N,N),x_vec(2,N*N),x_traceless_vec(2,N*N-1)
double complex ii,X(dim,N,N)
ii=cmplx(0,1)
call conversion(N,x_traceless_vec,x_vec,x_mat)
do j=1,N
do i=1,N
y_mat(1,j,i)=zeta*x_mat(1,i,j)
y_mat(2,j,i)=zeta*x_mat(2,i,j)
do k=1,N
y_mat(1,j,i)=y_mat(1,j,i)
& +X(3,i,k)*x_mat(1,k,j)-x_mat(1,i,k)*X(3,k,j)
& +ii*X(4,i,k)*x_mat(1,k,j)-ii*x_mat(1,i,k)*X(4,k,j)
& +X(1,i,k)*x_mat(2,k,j)-x_mat(2,i,k)*X(1,k,j)
& -ii*X(2,i,k)*x_mat(2,k,j)+ii*x_mat(2,i,k)*X(2,k,j)
y_mat(2,j,i)=y_mat(2,j,i)
& -X(3,i,k)*x_mat(2,k,j)+x_mat(2,i,k)*X(3,k,j)
& +ii*X(4,i,k)*x_mat(2,k,j)-ii*x_mat(2,i,k)*X(4,k,j)
& +X(1,i,k)*x_mat(1,k,j)-x_mat(1,i,k)*X(1,k,j)
& +ii*X(2,i,k)*x_mat(1,k,j)-ii*x_mat(1,i,k)*X(2,k,j)
enddo
enddo
enddo
call reverse_conversion(N,y_mat,y_vec,y_traceless_vec)
return
end
c.............multiplication by M^+....
subroutine multiplication_plus(dim,N,M,zeta,X,y_traceless_vec
& ,z_traceless_vec)
implicit none
integer i,j,k,dim,N,M
double precision zeta
double complex z_mat(2,N,N),z_vec(2,N*N),z_traceless_vec(2,N*N-1),
& y_mat(2,N,N),y_vec(2,N*N),y_traceless_vec(2,N*N-1)
double complex ii,X(dim,N,N)
ii=cmplx(0,1)
call conversion(N,y_traceless_vec,y_vec,y_mat)
do j=1,N
do i=1,N
z_mat(1,j,i)=zeta*y_mat(1,i,j)
z_mat(2,j,i)=zeta*y_mat(2,i,j)
do k=1,N
z_mat(1,j,i)=z_mat(1,j,i)
& -X(3,k,i)*y_mat(1,k,j)+y_mat(1,i,k)*X(3,j,k)
& +ii*X(4,k,i)*y_mat(1,k,j)-ii*y_mat(1,i,k)*X(4,j,k)
& -X(1,k,i)*y_mat(2,k,j)+y_mat(2,i,k)*X(1,j,k)
& +ii*X(2,k,i)*y_mat(2,k,j)-ii*y_mat(2,i,k)*X(2,j,k)
z_mat(2,j,i)=z_mat(2,j,i)
& +X(3,k,i)*y_mat(2,k,j)-y_mat(2,i,k)*X(3,j,k)
& +ii*X(4,k,i)*y_mat(2,k,j)-ii*y_mat(2,i,k)*X(4,j,k)
& -X(1,k,i)*y_mat(1,k,j)+y_mat(1,i,k)*X(1,j,k)
& -ii*X(2,k,i)*y_mat(1,k,j)+ii*y_mat(1,i,k)*X(2,j,k)
enddo
enddo
enddo
call reverse_conversion(N,z_mat,z_vec,z_traceless_vec)
return
end
subroutine conversion(N,x_traceless_vec,x_vec,x_mat)
implicit none
integer N,i,j,A1,sp
double complex x_mat(2,N,N),x_vec(2,N*N),x_traceless_vec(2,N*N-1)
double complex xx
do sp=1,2
xx=0.0d0
do i=1,N
do j=1,N
A1=N*(i-1)+j
if (A1.lt.N*N) then
x_vec(sp,A1)=x_traceless_vec(sp,A1)
if (i.eq.j) then
xx=xx-x_traceless_vec(sp,A1)
endif
endif
x_mat(sp,i,j)=x_vec(sp,A1)
enddo
enddo
x_vec(sp,N*N)=xx
x_mat(sp,N,N)=x_vec(sp,N*N)
enddo
return
end
subroutine reverse_conversion(N,x_mat,x_vec,x_traceless_vec)
implicit none
integer N,i,j,A1,sp
double complex x_mat(2,N,N),x_vec(2,N*N),x_traceless_vec(2,N*N-1)
do sp=1,2
x_vec(sp,N*N)=x_mat(sp,N,N)
do i=1,N
do j=1,N
A1=N*(i-1)+j
if (A1.lt.N*N) then
x_vec(sp,A1)=x_mat(sp,i,j)
if (i.eq.j)then
x_traceless_vec(sp,A1)=x_vec(sp,A1)-x_vec(sp,N*N)
else
x_traceless_vec(sp,A1)=x_vec(sp,A1)
endif
endif
enddo
enddo
enddo
return
end
subroutine gaussian(idum,dim,N,P)
implicit none
integer dim,N,mu,i,j,idum
double precision pi,phi,r,ran2
double complex ii,P(dim,N,N)
pi=dacos(-1.0d0)
ii=cmplx(0,1)
do mu=1,dim
c.............diagonal.........
do i=1,N
phi=2.0d0*pi*ran2(idum)
r=dsqrt(-2.0d0*dlog(1.0d0-ran2(idum)))
P(mu,i,i)=r*dcos(phi)
enddo
c.......off diagonal............
do i=1,N
do j=i+1,N
phi=2.0d0*pi*ran2(idum)
r=dsqrt(-1.0d0*dlog(1.0d0-ran2(idum)))
P(mu,i,j)=r*dcos(phi)+ii*r*dsin(phi)
P(mu,j,i)=conjg(P(mu,i,j))
enddo
enddo
enddo
return
end
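The gaussian subroutine draws the Hermitian momenta with the Box-Muller transform: a uniform angle and a radius sqrt(-2 ln u) give a standard normal deviate on the diagonal, while the off-diagonal entries use sqrt(-ln u) so that real and imaginary parts each carry variance 1/2. A toy Python sketch of the transform itself (made-up seed, not simulation code):

```python
# Box-Muller: two uniform deviates -> one standard normal deviate,
# x = sqrt(-2 ln u1) * cos(2 pi u2), as in the gaussian subroutine.
import math
import random

def box_muller(rng):
    u1 = 1.0 - rng.random()       # avoid log(0), mirroring 1 - ran2(idum)
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2)

rng = random.Random(0)
samples = [box_muller(rng) for _ in range(100000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

With many samples the empirical mean and variance approach 0 and 1, the check one would run when validating a new momentum generator.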
subroutine gaussian_plus(idum,N,Q)
implicit none
integer N,A1,sp,idum
double precision pi,phi,r,ran2
double complex Q(2,N*N-1),ii
pi=dacos(-1.0d0)
ii=cmplx(0,1)
do A1=1,N*N-1
do sp=1,2
phi=2.0d0*pi*ran2(idum)
r=dsqrt(-1.0d0*dlog(1.0d0-ran2(idum)))
Q(sp,A1)=r*dcos(phi)+ii*r*dsin(phi)
enddo
enddo
return
end
c.........hot start.................
subroutine hot(N,dim,idum,inn,X)
integer mu,i,j,N,dim,idum
double complex X(dim,N,N)
double precision xx,y,inn,ran2
do mu=1,dim
do i=1,N
do j=i,N
if (j.ne.i) then
xx=(2.0d0*ran2(idum)-1.0d0)*inn
y=(2.0d0*ran2(idum)-1.0d0)*inn
X(mu,i,j)=cmplx(xx,y)
X(mu,j,i)=cmplx(xx,-y)
else
xx=(2.0d0*ran2(idum)-1.0d0)*inn
X(mu,i,j)=xx
endif
enddo
enddo
enddo
return
end
c...........cold start......................
subroutine cold(N,dim,idum,X)
integer mu,i,j,N,dim,idum
double complex X(dim,N,N)
do mu=1,dim
do i=1,N
do j=1,N
X(mu,i,j)=cmplx(0,0)
enddo
enddo
enddo
return
end
subroutine jackknife_binning(TMC,f,average,error)
integer i,j,TMC,zbin,nbin
double precision xm
double precision f(1:TMC),sumf,y(1:TMC)
double precision sig0,sig,error,average
sig0=0.0d0
sumf=0.0d0
do i=1,TMC
sumf=sumf+f(i)
enddo
xm=sumf/TMC
zbin=1
nbin=int(TMC/zbin)
sig=0.0d0
do i=1,nbin,1
y(i)=sumf
do j=1,zbin
y(i)=y(i)-f((i-1)*zbin+j )
enddo
y(i)= y(i)/(TMC-zbin)
sig=sig+((nbin-1.0d0)/nbin)*(y(i)-xm)*(y(i)-xm)
enddo
sig=dsqrt(sig)
if (sig0 .lt. sig) sig0=sig
error=sig0
average=xm
return
end
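jackknife_binning is the error estimator used for every observable in these programs: with bin size 1, the i-th jackknife average omits sample i, and the quoted error is sqrt((n-1)/n * sum_i (y_i - mean)^2). The same estimator in Python, on made-up data (for independent samples it reduces to the usual standard error of the mean):

```python
# Bin-size-one jackknife, mirroring the jackknife_binning subroutine.

def jackknife(f):
    n = len(f)
    total = sum(f)
    mean = total / n
    y = [(total - fi) / (n - 1) for fi in f]          # leave-one-out averages
    var = sum((yi - mean) ** 2 for yi in y) * (n - 1) / n
    return mean, var ** 0.5

data = [1.0, 2.0, 3.0, 4.0]                           # toy samples
avg, err = jackknife(data)
```

For this toy data the jackknife error coincides with s/sqrt(n) = sqrt(5/12), as it must for uncorrelated samples; correlated Monte Carlo data would require a bin size larger than 1.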
function ran2(idum)
implicit none
integer idum,IM1,IM2,IMM1,IA1,IA2,IQ1,IQ2,IR1,IR2,NTAB,NDIV
real AM,EPS,RNMX
double precision ran2
parameter (IM1=2147483563,IM2=2147483399,AM=1./IM1,IMM1=IM1-1,
& IA1=40014,IA2=40692,IQ1=53668,IQ2=52774,IR1=12211,
& IR2=3791,NTAB=32,NDIV=1+IMM1/NTAB,EPS=1.2E-7,RNMX=1.-EPS)
integer idum2,j,k,iv(NTAB),iy
SAVE iv,iy,idum2
DATA idum2/123456789/,iv/NTAB*0/,iy/0/
if (idum.le.0) then
idum=max(-idum,1)
idum2=idum
do j=NTAB+8,1,-1
k=idum/IQ1
idum=IA1*(idum-k*IQ1)-k*IR1
if (idum.lt.0) idum=idum+IM1
if (j.le.NTAB) iv(j)=idum
enddo
iy=iv(1)
endif
k=idum/IQ1
idum=IA1*(idum-k*IQ1)-k*IR1
if (idum.lt.0) idum=idum+IM1
k=idum2/IQ2
idum2=IA2*(idum2-k*IQ2)-k*IR2
if (idum2.lt.0) idum2=idum2+IM2
j=1+iy/NDIV
iy=iv(j)-idum2
iv(j)=idum
if (iy.lt.1) iy=iy+IMM1
ran2=min(AM*iy,RNMX)
return
end
subroutine identification(N,M,sig,x_traceless_vec,G)
implicit none
integer N,M,sig,sp,A1
double complex G(M,2,N*N-1),x_traceless_vec(2,N*N-1)
do sp=1,2
do A1=1,N*N-1
G(sig,sp,A1)=x_traceless_vec(sp,A1)
enddo
enddo
return
end
subroutine reverse_identification(N,M,sig,G,x_traceless_vec)
implicit none
integer N,M,sig,sp,A1
double complex G(M,2,N*N-1),x_traceless_vec(2,N*N-1)
do sp=1,2
do A1=1,N*N-1
x_traceless_vec(sp,A1)=G(sig,sp,A1)
enddo
enddo
return
end
c.........adjusting interval..................
subroutine adjust_inn(cou,pa,dt,time,Rejec,Accept,
& nn,target_pa_high,target_pa_low,dt_max,dt_min,inc,dec)
implicit none
double precision dt,pa,Rejec,Accept
integer time,cou,cou1
integer nn
double precision target_pa_high,target_pa_low,dt_max,dt_min,inc,
& dec,rho1,rho2,dtnew
if (pa.ge.target_pa_high) then
dtnew=dt*inc
if (dtnew.le.dt_max)then
dt=dtnew
else
dt=dt_max
endif
endif
if (pa.le.target_pa_low) then
dtnew=dt*dec
if (dtnew.ge.dt_min)then
dt=dtnew
else
dt=dt_min
endif
endif
return
end
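The adaptation rule in adjust_inn is simple: grow the step dt when the acceptance rate is above the target window, shrink it when below, and clamp to [dt_min, dt_max]. The same rule as a small Python function (argument names mirror the subroutine; the numbers in the usage below are made-up):

```python
# Step-size adaptation as in adjust_inn: multiplicative grow/shrink
# of dt driven by the acceptance rate pa, clamped to [dt_min, dt_max].

def adjust_dt(dt, pa, target_high, target_low, dt_max, dt_min, inc, dec):
    if pa >= target_high:
        dt = min(dt * inc, dt_max)   # accepting too often: take bigger steps
    if pa <= target_low:
        dt = max(dt * dec, dt_min)   # rejecting too often: take smaller steps
    return dt

# e.g. adjust_dt(0.1, 0.95, 0.90, 0.60, 1.0, 0.01, 1.2, 0.8) grows dt to 0.12
```

Keeping pa inside a fixed window this way is the standard heuristic for tuning a hybrid Monte Carlo step size during thermalization.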
File: /home/ydri/Desktop/TP_QFT/codes/u-one-on-the-lattice.f Page 1 of 11
program my_u_one_on_the_lattice
implicit none
integer dim,N,NT,i,j,k,l,mu,idum,tther,tmont,nther,nmont,counter,T
integer tcor,ncor,betai,p,q
double precision accept,reject,flip
parameter (dim=4,N=4,NT=4,nther=2**(14),nmont=2**(14),ncor=2**4)
parameter (T=100*(nther+nmont*ncor))
double precision beta,ran2,variation,epsilon
& ,epsilon0,pi,acceptance,avera,erro,tau,deltau
double complex U(dim,N,N,N,NT),ii,X,XX(0:T)
double precision W11,W22,W33,W12,W13,W23,W21,W31,W32
double precision acti(1:nmont),acti_mean,acti_error,
& action
double precision acti_pp(1:nmont),acti_pp_mean,acti_pp_error,
& action_pp
double precision cv(1:nmont),cv_mean,cv_error
double precision plaq1(1:nmont),plaq1_mean,plaq1_error
double precision plaq2(1:nmont),plaq2_mean,plaq2_error
double precision plaq3(1:nmont),plaq3_mean,plaq3_error
double precision plaq4(1:nmont),plaq4_mean,plaq4_error
double precision plaq5(1:nmont),plaq5_mean,plaq5_error
double precision plaq6(1:nmont),plaq6_mean,plaq6_error
double precision plaq7(1:nmont),plaq7_mean,plaq7_error
double precision plaq8(1:nmont),plaq8_mean,plaq8_error
double precision plaq9(1:nmont),plaq9_mean,plaq9_error
double precision tension1,error_tension1,tension2,error_tension2,
& tension3,error_tension3,tension4,error_tension4
idum=-148175
call seed(idum)
counter=0
accept=0
reject=0
flip=0
ii=cmplx(0,1)
pi=dacos(-1.0d0)
epsilon=pi
do betai=1,1
beta=1.9d0-betai*0.1d0
do mu=1,dim
do i=1,N
do j=1,N
do k=1,N
do l=1,NT
c.........ordered start for the coulomb phase, disordered start for the confinement phase..
if(beta.ge.1.0d0)then
epsilon0=0.0d0
else
epsilon0=2.0d0*ran2(idum)-1.0d0
epsilon0=epsilon*epsilon0
endif
U(mu,i,j,k,l)=dcos(epsilon0)+ii*dsin(epsilon0)
enddo
enddo
enddo
enddo
enddo
c................thermalization.............
do tther=1,nther
call metropolis(U,beta,dim,N,NT,accept,reject,flip,acceptance,
& epsilon,counter,XX,T)
enddo
do tmont=1,nmont
do tcor=1,ncor
call metropolis(U,beta,dim,N,NT,accept,reject,flip,acceptance,
& epsilon,counter,XX,T)
enddo
call actio(U,dim,N,NT,beta,action,action_pp)
acti(tmont)=action
acti_pp(tmont)=action_pp
plaq1(tmont)=0.0d0
plaq2(tmont)=0.0d0
plaq3(tmont)=0.0d0
plaq4(tmont)=0.0d0
plaq5(tmont)=0.0d0
plaq6(tmont)=0.0d0
plaq7(tmont)=0.0d0
plaq8(tmont)=0.0d0
plaq9(tmont)=0.0d0
do i=1,N
do j=1,N
do k=1,N
do l=1,NT
p=1
q=4
call Wilson_Loop(U,dim,N,NT,i,j,k,l,p,q,
& W11,W22,W33,W12,W13,W23,W21,W31,W32)
plaq1(tmont)=plaq1(tmont)+W11
plaq2(tmont)=plaq2(tmont)+W22
plaq3(tmont)=plaq3(tmont)+W33
plaq4(tmont)=plaq4(tmont)+W12
plaq5(tmont)=plaq5(tmont)+W13
plaq6(tmont)=plaq6(tmont)+W23
plaq7(tmont)=plaq7(tmont)+W21
plaq8(tmont)=plaq8(tmont)+W31
plaq9(tmont)=plaq9(tmont)+W32
enddo
enddo
enddo
enddo
plaq1(tmont)=plaq1(tmont)/(N**3*NT)
plaq2(tmont)=plaq2(tmont)/(N**3*NT)
plaq3(tmont)=plaq3(tmont)/(N**3*NT)
plaq4(tmont)=plaq4(tmont)/(N**3*NT)
plaq5(tmont)=plaq5(tmont)/(N**3*NT)
plaq6(tmont)=plaq6(tmont)/(N**3*NT)
plaq7(tmont)=plaq7(tmont)/(N**3*NT)
plaq8(tmont)=plaq8(tmont)/(N**3*NT)
plaq9(tmont)=plaq9(tmont)/(N**3*NT)
enddo
c......................measurements........................
c......................action...............
call jackknife_binning(nmont,acti,acti_mean,acti_error)
write(11,*)beta,acti_mean,acti_error
c write(*,*)beta,acti_mean,acti_error
c.......................specific heat.............
do tmont=1,nmont
cv(tmont)=(acti(tmont)-acti_mean)**2
enddo
call jackknife_binning(nmont,cv,cv_mean,cv_error)
write(13,*)beta,cv_mean,cv_error
c write(*,*)beta,cv_mean,cv_error
c................Wilson loops................
call jackknife_binning(nmont,plaq1,plaq1_mean,plaq1_error)
write(15,*)beta,plaq1_mean,plaq1_error
c write(*,*)beta,plaq1_mean,plaq1_error
call jackknife_binning(nmont,plaq2,plaq2_mean,plaq2_error)
write(16,*)beta,plaq2_mean,plaq2_error
c write(*,*)beta,plaq2_mean,plaq2_error
call jackknife_binning(nmont,plaq3,plaq3_mean,plaq3_error)
write(17,*)beta,plaq3_mean,plaq3_error
c write(*,*)beta,plaq3_mean,plaq3_error
call jackknife_binning(nmont,plaq4,plaq4_mean,plaq4_error)
write(18,*)beta,plaq4_mean,plaq4_error
c write(*,*)beta,plaq4_mean,plaq4_error
call jackknife_binning(nmont,plaq5,plaq5_mean,plaq5_error)
write(19,*)beta,plaq5_mean,plaq5_error
c write(*,*)beta,plaq5_mean,plaq5_error
call jackknife_binning(nmont,plaq6,plaq6_mean,plaq6_error)
write(20,*)beta,plaq6_mean,plaq6_error
c write(*,*)beta,plaq6_mean,plaq6_error
call jackknife_binning(nmont,plaq7,plaq7_mean,plaq7_error)
write(23,*)beta,plaq7_mean,plaq7_error
c write(*,*)beta,plaq7_mean,plaq7_error
call jackknife_binning(nmont,plaq8,plaq8_mean,plaq8_error)
write(24,*)beta,plaq8_mean,plaq8_error
c write(*,*)beta,plaq8_mean,plaq8_error
call jackknife_binning(nmont,plaq9,plaq9_mean,plaq9_error)
write(25,*)beta,plaq9_mean,plaq9_error
c write(*,*)beta,plaq9_mean,plaq9_error
c................string tensions from Creutz ratios.....................
c     chi(p,q)=W(p,q)*W(p-1,q-1)/(W(p,q-1)*W(p-1,q)), tension=-ln(chi),
c     for (p,q)=(2,2),(3,3),(2,3),(3,2)...
tension1=plaq2_mean*plaq1_mean/(plaq4_mean*plaq7_mean)
tension2=plaq3_mean*plaq2_mean/(plaq6_mean*plaq9_mean)
tension3=plaq6_mean*plaq4_mean/(plaq2_mean*plaq5_mean)
tension4=plaq9_mean*plaq7_mean/(plaq8_mean*plaq2_mean)
tension1=dabs(tension1)
tension2=dabs(tension2)
tension3=dabs(tension3)
tension4=dabs(tension4)
tension1=-dlog(tension1)
tension2=-dlog(tension2)
tension3=-dlog(tension3)
tension4=-dlog(tension4)
error_tension1=plaq2_error/plaq2_mean+plaq1_error/plaq1_mean
& -plaq4_error/plaq4_mean-plaq7_error/plaq7_mean
error_tension1=dabs(error_tension1)
error_tension2=plaq3_error/plaq3_mean+plaq2_error/plaq2_mean
& -plaq6_error/plaq6_mean -plaq9_error/plaq9_mean
error_tension2=dabs(error_tension2)
error_tension3=plaq6_error/plaq6_mean+plaq4_error/plaq4_mean
& -plaq2_error/plaq2_mean-plaq5_error/plaq5_mean
error_tension3=dabs(error_tension3)
error_tension4=plaq9_error/plaq9_mean+plaq7_error/plaq7_mean
& -plaq8_error/plaq8_mean-plaq2_error/plaq2_mean
error_tension4=dabs(error_tension4)
write(22,*)beta,tension1,error_tension1,tension2,error_tension2,
& tension3,error_tension3,tension4,error_tension4
c write(*,*)beta,tension1,error_tension1,tension2,error_tension2,
c & tension3,error_tension3,tension4,error_tension4
enddo
stop
end
c...............metropolis algorithm.................
subroutine metropolis(U,beta,dim,N,NT,accept,reject,flip,
& acceptance,epsilon,counter,XX,T)
implicit none
integer dim,N,NT,nu,mu,i,j,k,l,idum,counter,counter0,nn,T
double precision accept,reject,flip,nn0
double precision epsilon,epsilon0,beta,variation,proba,r,ran2,pi,
& modulus,acceptance
double complex U(dim,N,N,N,NT),X,ii,XX(0:T)
c.......idum is local to this subroutine, so it must be initialized
c       (and saved) here for ran2 to work; we use the same seed as
c       in the main program........
save idum
data idum/-148175/
pi=dacos(-1.0d0)
ii=cmplx(0,1)
epsilon0=2.0d0*ran2(idum)-1.0d0
epsilon0=epsilon*epsilon0
XX(counter)=dcos(epsilon0)+ii*dsin(epsilon0)
XX(counter+1)=dcos(epsilon0)-ii*dsin(epsilon0)
counter0=counter+1
counter=counter+2
do mu=1,dim
do i=1,N
do j=1,N
do k=1,N
do l=1,NT
nn0=counter0*ran2(idum)
nn=nint(nn0)
X=XX(nn)
call variatio(U,X,beta,dim,N,NT,mu,i,j,k,l,variation)
if(variation.gt.0)then
proba=dexp(-variation)
r=ran2(idum)
if(proba.gt.r)then
U(mu,i,j,k,l)=X*U(mu,i,j,k,l)
accept=accept+1
else
reject=reject+1
endif
else
U(mu,i,j,k,l)=X*U(mu,i,j,k,l)
flip=flip+1
endif
modulus=U(mu,i,j,k,l)*conjg(U(mu,i,j,k,l))
modulus=dsqrt(modulus)
U(mu,i,j,k,l)=U(mu,i,j,k,l)/modulus
enddo
enddo
enddo
enddo
enddo
c.......for the range of N and NT considered the acceptance rate is already
c       sufficiently high, so we can simply disable the adjust subroutine....
c       we observed that the acceptance rate decreases as we increase N and NT......
call adjust(epsilon,flip,accept,reject,acceptance)
c write(*,*)flip,accept,reject,acceptance
return
end
c...........adjusting...........................
subroutine adjust(epsilon,flip,accept,reject,acceptance)
implicit none
double precision epsilon,acceptance
double precision flip,accept,reject,ran2
integer idum
acceptance=(flip+accept)/(flip+accept+reject)
if (acceptance.ge.0.5d0) then
epsilon=epsilon*1.2d0
endif
if(acceptance.le.0.45d0) then
epsilon=epsilon*0.8d0
endif
return
end
c........................variation.....................
subroutine variatio(U,X,beta,dim,N,NT,mu,i,j,k,l,variation)
implicit none
integer dim,N,NT,nu,mu,i,j,k,l,idum
double precision epsilon,epsilon0,beta,variation,ran2,pi
double complex U(dim,N,N,N,NT),staple,ii,X
call stapl(U,dim,N,NT,mu,i,j,k,l,staple)
variation=-0.5d0*beta*((X-1.0d0)*U(mu,i,j,k,l)*staple
& + conjg((X-1.0d0)*U(mu,i,j,k,l)*staple))
return
end
c.................staple....................................
subroutine stapl(U,dim,N,NT,mu,i,j,k,l,staple)
implicit none
integer dim,N,NT,nu,mu,i,j,k,l,i0,ip(N),im(N),ipT(NT),imT(NT),
& ipn(1:N,1:N),ipnT(1:N,1:N)
double precision beta
double complex U(dim,N,N,N,NT),staple
call index_array(N,NT,ip,im,ipT,imT,ipn,ipnT)
if(mu.eq.1)then
staple=U(2,ip(i),j,k,l)*conjg(U(mu,i,ip(j),k,l))*
& conjg(U(2,i,j,k,l))
& +conjg(U(2,ip(i),im(j),k,l))*conjg(U(mu,i,im(j),k,l))
& *U(2,i,im(j),k,l)
& +U(3,ip(i),j,k,l)*conjg(U(mu,i,j,ip(k),l))*conjg(U(3,i,j,k,l))
& +conjg(U(3,ip(i),j,im(k),l))*conjg(U(mu,i,j,im(k),l))
& *U(3,i,j,im(k),l)
& +U(4,ip(i),j,k,l)*conjg(U(mu,i,j,k,ipT(l)))*conjg(U(4,i,j,k,l))
& +conjg(U(4,ip(i),j,k,imT(l)))*conjg(U(mu,i,j,k,imT(l)))
& *U(4,i,j,k,imT(l))
endif
if(mu.eq.2)then
staple=U(1,i,ip(j),k,l)*conjg(U(mu,ip(i),j,k,l))*
& conjg(U(1,i,j,k,l))
& +conjg(U(1,im(i),ip(j),k,l))*conjg(U(mu,im(i),j,k,l))
& *U(1,im(i),j,k,l)
& +U(3,i,ip(j),k,l)*conjg(U(mu,i,j,ip(k),l))*conjg(U(3,i,j,k,l))
& +conjg(U(3,i,ip(j),im(k),l))*conjg(U(mu,i,j,im(k),l))
& *U(3,i,j,im(k),l)
& +U(4,i,ip(j),k,l)*conjg(U(mu,i,j,k,ipT(l)))*conjg(U(4,i,j,k,l))
& +conjg(U(4,i,ip(j),k,imT(l)))*conjg(U(mu,i,j,k,imT(l)))
& *U(4,i,j,k,imT(l))
endif
if(mu.eq.3)then
staple=U(1,i,j,ip(k),l)*conjg(U(mu,ip(i),j,k,l))
& *conjg(U(1,i,j,k,l))
& +conjg(U(1,im(i),j,ip(k),l))*conjg(U(mu,im(i),j,k,l))
& *U(1,im(i),j,k,l)
& +U(2,i,j,ip(k),l)*conjg(U(mu,i,ip(j),k,l))*conjg(U(2,i,j,k,l))
& +conjg(U(2,i,im(j),ip(k),l))*conjg(U(mu,i,im(j),k,l))
& *U(2,i,im(j),k,l)
& +U(4,i,j,ip(k),l)*conjg(U(mu,i,j,k,ipT(l)))*conjg(U(4,i,j,k,l))
& +conjg(U(4,i,j,ip(k),imT(l)))*conjg(U(mu,i,j,k,imT(l)))
& *U(4,i,j,k,imT(l))
endif
if(mu.eq.4)then
staple=U(1,i,j,k,ipT(l))*conjg(U(mu,ip(i),j,k,l))
& *conjg(U(1,i,j,k,l))
& +conjg(U(1,im(i),j,k,ipT(l)))*conjg(U(mu,im(i),j,k,l))
& *U(1,im(i),j,k,l)
& +U(2,i,j,k,ipT(l))*conjg(U(mu,i,ip(j),k,l))*conjg(U(2,i,j,k,l))
& +conjg(U(2,i,im(j),k,ipT(l)))*conjg(U(mu,i,im(j),k,l))
& *U(2,i,im(j),k,l)
& +U(3,i,j,k,ipT(l))*conjg(U(mu,i,j,ip(k),l))*conjg(U(3,i,j,k,l))
& +conjg(U(3,i,j,im(k),ipT(l)))*conjg(U(mu,i,j,im(k),l))
& *U(3,i,j,im(k),l)
endif
return
end
c...............wilson loops...............................
subroutine Wilson_Loop(U,dim,N,NT,i,j,k,l,p,q,
& W11,W22,W33,W12,W13,W23,W21,W31,W32)
implicit none
integer dim,N,NT,i,j,k,l,p,q,i0,j0,ipn(1:N,1:N),ipnT(1:N,1:N),
& ip(1:N),im(1:N),ipT(1:N),imT(1:N)
double complex U(dim,N,N,N,NT),W1,W2,W3,W4
double precision W11,W22,W33,W12,W13,W23,W21,W31,W32
call index_array(N,NT,ip,im,ipT,imT,ipn,ipnT)
if ((p.eq.1).and.(q.eq.4))then
W1=U(p,i,j,k,l)
W4=U(q,i,j,k,l)
c W3=U(q,i+1,j,k,l)
W3=U(q,ipn(i,1),j,k,l)
c W2=U(p,i,j,k,l+1)
W2=U(p,i,j,k,ipnT(l,1))
W11=0.5d0*(W1*W3*conjg(W2)*conjg(W4)+
& conjg(W1)*conjg(W3)*W2*W4)
c W1=U(p,i,j,k,l)*U(p,i+1,j,k,l)
W1=U(p,i,j,k,l)*U(p,ipn(i,1),j,k,l)
c W4=U(q,i,j,k,l)*U(q,i,j,k,l+1)
W4=U(q,i,j,k,l)*U(q,i,j,k,ipnT(l,1))
c W3=U(q,i+2,j,k,l)*U(q,i+2,j,k,l+1)
W3=U(q,ipn(i,2),j,k,l)*U(q,ipn(i,2),j,k,ipnT(l,1))
c W2=U(p,i,j,k,l+2)*U(p,i+1,j,k,l+2)
W2=U(p,i,j,k,ipnT(l,2))*U(p,ipn(i,1),j,k,ipnT(l,2))
W22=0.5d0*(W1*W3*conjg(W2)*conjg(W4)+
& conjg(W1)*conjg(W3)*W2*W4)
c W1=U(p,i,j,k,l)*U(p,i+1,j,k,l)*U(p,i+2,j,k,l)
W1=U(p,i,j,k,l)*U(p,ipn(i,1),j,k,l)*U(p,ipn(i,2),j,k,l)
c W4=U(q,i,j,k,l)*U(q,i,j,k,l+1)*U(q,i,j,k,l+2)
W4=U(q,i,j,k,l)*U(q,i,j,k,ipnT(l,1))*U(q,i,j,k,ipnT(l,2))
c W3=U(q,i+3,j,k,l)*U(q,i+3,j,k,l+1)*U(q,i+3,j,k,l+2)
W3=U(q,ipn(i,3),j,k,l)*U(q,ipn(i,3),j,k,ipnT(l,1))*
& U(q,ipn(i,3),j,k,ipnT(l,2))
c W2=U(p,i,j,k,l+3)*U(p,i+1,j,k,l+3)*U(p,i+2,j,k,l+3)
W2=U(p,i,j,k,ipnT(l,3))*U(p,ipn(i,1),j,k,ipnT(l,3))*
& U(p,ipn(i,2),j,k,ipnT(l,3))
W33=0.5d0*(W1*W3*conjg(W2)*conjg(W4)+
& conjg(W1)*conjg(W3)*W2*W4)
W1=U(p,i,j,k,l)
c W4=U(q,i,j,k,l)*U(q,i,j,k,l+1)
W4=U(q,i,j,k,l)*U(q,i,j,k,ipnT(l,1))
c W3=U(q,i+1,j,k,l)*U(q,i+1,j,k,l+1)
W3=U(q,ipn(i,1),j,k,l)*U(q,ipn(i,1),j,k,ipnT(l,1))
c W2=U(p,i,j,k,l+2)
W2=U(p,i,j,k,ipnT(l,2))
W12=0.5d0*(W1*W3*conjg(W2)*conjg(W4)+
& conjg(W1)*conjg(W3)*W2*W4)
c W1=U(p,i,j,k,l)*U(p,i+1,j,k,l)
W1=U(p,i,j,k,l)*U(p,ipn(i,1),j,k,l)
W4=U(q,i,j,k,l)
c W3=U(q,i+2,j,k,l)
W3=U(q,ipn(i,2),j,k,l)
c W2=U(p,i,j,k,l+1)*U(p,i+1,j,k,l+1)
W2=U(p,i,j,k,ipnT(l,1))*U(p,ipn(i,1),j,k,ipnT(l,1))
W21=0.5d0*(W1*W3*conjg(W2)*conjg(W4)+
& conjg(W1)*conjg(W3)*W2*W4)
W1=U(p,i,j,k,l)
c W4=U(q,i,j,k,l)*U(q,i,j,k,l+1)*U(q,i,j,k,l+2)
W4=U(q,i,j,k,l)*U(q,i,j,k,ipnT(l,1))*U(q,i,j,k,ipnT(l,2))
c W3=U(q,i+1,j,k,l)*U(q,i+1,j,k,l+1)*U(q,i+1,j,k,l+2)
W3=U(q,ipn(i,1),j,k,l)*U(q,ipn(i,1),j,k,ipnT(l,1))*
& U(q,ipn(i,1),j,k,ipnT(l,2))
c W2=U(p,i,j,k,l+3)
W2=U(p,i,j,k,ipnT(l,3))
W13=0.5d0*(W1*W3*conjg(W2)*conjg(W4)+
& conjg(W1)*conjg(W3)*W2*W4)
c W1=U(p,i,j,k,l)*U(p,i+1,j,k,l)*U(p,i+2,j,k,l)
W1=U(p,i,j,k,l)*U(p,ipn(i,1),j,k,l)*U(p,ipn(i,2),j,k,l)
W4=U(q,i,j,k,l)
c W3=U(q,i+3,j,k,l)
W3=U(q,ipn(i,3),j,k,l)
c W2=U(p,i,j,k,l+1)*U(p,i+1,j,k,l+1)*U(p,i+2,j,k,l+1)
W2=U(p,i,j,k,ipnT(l,1))*U(p,ipn(i,1),j,k,ipnT(l,1))*
& U(p,ipn(i,2),j,k,ipnT(l,1))
W31=0.5d0*(W1*W3*conjg(W2)*conjg(W4)+
& conjg(W1)*conjg(W3)*W2*W4)
c W1=U(p,i,j,k,l)*U(p,i+1,j,k,l)
W1=U(p,i,j,k,l)*U(p,ipn(i,1),j,k,l)
c W4=U(q,i,j,k,l)*U(q,i,j,k,l+1)*U(q,i,j,k,l+2)
W4=U(q,i,j,k,l)*U(q,i,j,k,ipnT(l,1))*U(q,i,j,k,ipnT(l,2))
c W3=U(q,i+2,j,k,l)*U(q,i+2,j,k,l+1)*U(q,i+2,j,k,l+2)
W3=U(q,ipn(i,2),j,k,l)*U(q,ipn(i,2),j,k,ipnT(l,1))*
& U(q,ipn(i,2),j,k,ipnT(l,2))
c W2=U(p,i,j,k,l+3)*U(p,i+1,j,k,l+3)
W2=U(p,i,j,k,ipnT(l,3))*U(p,ipn(i,1),j,k,ipnT(l,3))
W23=0.5d0*(W1*W3*conjg(W2)*conjg(W4)+
& conjg(W1)*conjg(W3)*W2*W4)
c W1=U(p,i,j,k,l)*U(p,i+1,j,k,l)*U(p,i+2,j,k,l)
W1=U(p,i,j,k,l)*U(p,ipn(i,1),j,k,l)*U(p,ipn(i,2),j,k,l)
c W4=U(q,i,j,k,l)*U(q,i,j,k,l+1)
W4=U(q,i,j,k,l)*U(q,i,j,k,ipnT(l,1))
c W3=U(q,i+3,j,k,l)*U(q,i+3,j,k,l+1)
W3=U(q,ipn(i,3),j,k,l)*U(q,ipn(i,3),j,k,ipnT(l,1))
c W2=U(p,i,j,k,l+2)*U(p,i+1,j,k,l+2)*U(p,i+2,j,k,l+2)
W2=U(p,i,j,k,ipnT(l,2))*U(p,ipn(i,1),j,k,ipnT(l,2))*
& U(p,ipn(i,2),j,k,ipnT(l,2))
W32=0.5d0*(W1*W3*conjg(W2)*conjg(W4)+
& conjg(W1)*conjg(W3)*W2*W4)
endif
return
end
c..........................indexing.............................
subroutine index_array(N,NT,ip,im,ipT,imT,ipn,ipnT)
implicit none
integer N,NT,i0,j0,ip(1:N),im(1:N),ipT(1:N),imT(1:N),
& ipn(1:N,1:N),ipnT(1:N,1:N)
do i0=1,N
ip(i0)=i0+1
im(i0)=i0-1
enddo
ip(N)=1
im(1)=N
do i0=1,NT
ipT(i0)=i0+1
imT(i0)=i0-1
enddo
ipT(NT)=1
imT(1)=NT
do i0=1,N
do j0=1,N
if (i0+j0 .le. N) then
ipn(i0,j0)=i0+j0
else
ipn(i0,j0)=(i0+j0)-N
endif
enddo
enddo
do i0=1,NT
do j0=1,NT
if (i0+j0 .le. NT) then
ipnT(i0,j0)=i0+j0
else
ipnT(i0,j0)=(i0+j0)-NT
endif
enddo
enddo
return
end
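The tables built by index_array encode periodic shifts on rings of N (space) and NT (time) sites. As an illustration only (this Python sketch is not part of the book's Fortran code), the same logic in closed form, keeping the 1-based indexing of the Fortran arrays:

```python
def ipn(i, n, N):
    """Forward shift by n steps on a periodic ring of N sites (1-based),
    reproducing ipn(i0,j0) of subroutine index_array."""
    return (i + n - 1) % N + 1

def imn(i, n, N):
    """Backward shift by n steps on the same ring (1-based)."""
    return (i - n - 1) % N + 1

# The one-step tables of the Fortran code are the n = 1 special case:
# ip(i) = ipn(i, 1, N) and im(i) = imn(i, 1, N).
```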
c.....................action...............................
subroutine actio(U,dim,N,NT,beta,action,action_pp)
implicit none
integer dim,N,NT,i,j,k,l,ip(N),im(N),ipT(NT),imT(NT)
double precision beta
double precision action12,action13,action14,action23,action24,
& action34,action,action_pp
double complex U(dim,N,N,N,NT)
do i=1,N
ip(i)=i+1
im(i)=i-1
enddo
ip(N)=1
im(1)=N
do i=1,NT
ipT(i)=i+1
imT(i)=i-1
enddo
ipT(NT)=1
imT(1)=NT
i=1
j=1
k=1
l=1
c....................action per plaquette....
action_pp=U(1,i,j,k,l)*U(2,ip(i),j,k,l)
& *conjg(U(1,i,ip(j),k,l))*conjg(U(2,i,j,k,l))
& +U(2,i,j,k,l)*U(1,i,ip(j),k,l)
& *conjg(U(2,ip(i),j,k,l))*conjg(U(1,i,j,k,l))
action_pp=0.5d0*action_pp
action_pp=1.0d0-action_pp
c..................action..........
action12=0.0d0
action13=0.0d0
action14=0.0d0
action23=0.0d0
action24=0.0d0
action34=0.0d0
do i=1,N
do j=1,N
do k=1,N
do l=1,NT
action12=action12+U(1,i,j,k,l)*U(2,ip(i),j,k,l)
& *conjg(U(1,i,ip(j),k,l))*conjg(U(2,i,j,k,l))
& +U(2,i,j,k,l)*U(1,i,ip(j),k,l)
& *conjg(U(2,ip(i),j,k,l))*conjg(U(1,i,j,k,l))
action13=action13+U(1,i,j,k,l)*U(3,ip(i),j,k,l)
& *conjg(U(1,i,j,ip(k),l))*conjg(U(3,i,j,k,l))
& +U(3,i,j,k,l)*U(1,i,j,ip(k),l)
& *conjg(U(3,ip(i),j,k,l))*conjg(U(1,i,j,k,l))
action14=action14+U(1,i,j,k,l)*U(4,ip(i),j,k,l)
& *conjg(U(1,i,j,k,ipT(l)))*conjg(U(4,i,j,k,l))
& +U(4,i,j,k,l)*U(1,i,j,k,ipT(l))
& *conjg(U(4,ip(i),j,k,l))*conjg(U(1,i,j,k,l))
action23=action23+U(2,i,j,k,l)*U(3,i,ip(j),k,l)
& *conjg(U(2,i,j,ip(k),l))*conjg(U(3,i,j,k,l))
& +U(3,i,j,k,l)*U(2,i,j,ip(k),l)
& *conjg(U(3,i,ip(j),k,l))*conjg(U(2,i,j,k,l))
action24=action24+U(2,i,j,k,l)*U(4,i,ip(j),k,l)
& *conjg(U(2,i,j,k,ipT(l)))*conjg(U(4,i,j,k,l))
& +U(4,i,j,k,l)*U(2,i,j,k,ipT(l))
& *conjg(U(4,i,ip(j),k,l))*conjg(U(2,i,j,k,l))
action34=action34+U(3,i,j,k,l)*U(4,i,j,ip(k),l)
& *conjg(U(3,i,j,k,ipT(l)))*conjg(U(4,i,j,k,l))
& +U(4,i,j,k,l)*U(3,i,j,k,ipT(l))
& *conjg(U(4,i,j,ip(k),l))*conjg(U(3,i,j,k,l))
enddo
enddo
enddo
enddo
action=action12+action13+action14+action23+action24+action34
action=-0.5d0*beta*action
action=action!+6.0d0*beta*N*N*N*NT
return
end
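Subroutine actio accumulates, for every site and every plane mu < nu, the plaquette plus its conjugate, so that action = -beta * sum over sites and planes of cos(theta_P) for unit-modulus links U = exp(i*theta). A pure-Python restatement of that sum, offered only as an illustrative sketch (the dict-of-phases storage is an assumption of this example, not the book's layout):

```python
from math import cos

def wilson_action(theta, beta, N, NT, dim=4):
    """U(1) Wilson action S = -beta * sum_x sum_{mu<nu} cos(theta_P(x))
    on a periodic N*N*N*NT lattice; theta is a dict mapping
    (mu, i, j, k, l) to the link phase in direction mu at that site."""
    dims = (N, N, N, NT)

    def shift(x, mu):
        # site x moved one step forward in direction mu (periodic)
        y = list(x)
        y[mu] = (y[mu] + 1) % dims[mu]
        return tuple(y)

    S = 0.0
    for i in range(N):
        for j in range(N):
            for k in range(N):
                for l in range(NT):
                    x = (i, j, k, l)
                    for mu in range(dim):
                        for nu in range(mu + 1, dim):
                            # plaquette phase in the (mu, nu) plane
                            plaq = (theta[(mu,) + x]
                                    + theta[(nu,) + shift(x, mu)]
                                    - theta[(mu,) + shift(x, nu)]
                                    - theta[(nu,) + x])
                            S -= beta * cos(plaq)
    return S
```

On a cold configuration (all phases zero) every plaquette contributes cos 0 = 1, so S = -beta * 6 * N^3 * NT, which matches the additive constant commented out at the end of actio.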
c...........................jackknife.........................................
subroutine jackknife_binning(TMC,f,average,error)
implicit none
integer i,j,TMC,zbin,nbin
doubleprecision xm
doubleprecision f(1:TMC),sumf,y(1:TMC)
doubleprecision sig0,sig,error,average
sig0=0.0d0
sumf=0.0d0
do i=1,TMC
sumf=sumf+f(i)
enddo
xm=sumf/TMC
zbin=1
nbin=int(TMC/zbin)
sig=0.0d0
do i=1,nbin,1
y(i)=sumf
do j=1,zbin
y(i)=y(i)-f((i-1)*zbin+j )
enddo
y(i)= y(i)/(TMC-zbin)
sig=sig+((nbin-1.0d0)/nbin)*(y(i)-xm)*(y(i)-xm)
enddo
sig=dsqrt(sig)
if (sig0 .lt. sig) sig0=sig
error=sig0
average=xm
return
end
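Subroutine jackknife_binning removes one bin of size zbin (fixed here to 1) at a time, forms the average of what remains, and estimates the error of the mean from the spread of these bin-deleted averages. The same estimator as a compact Python sketch, for illustration:

```python
def jackknife(f, zbin=1):
    """Jackknife estimate of the mean of the series f and its error,
    deleting one bin of zbin consecutive measurements at a time."""
    tmc = len(f)
    nbin = tmc // zbin
    total = sum(f)
    mean = total / tmc
    var = 0.0
    for i in range(nbin):
        # average of the series with the i-th bin deleted
        y = (total - sum(f[i * zbin:(i + 1) * zbin])) / (tmc - zbin)
        var += ((nbin - 1.0) / nbin) * (y - mean) ** 2
    return mean, var ** 0.5
```

For correlated Monte Carlo data one would increase zbin until the error estimate plateaus.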
c...............seed...................
subroutine seed(idum)
integer idum1,idum, n
real x
x=0.0
idum=idum-2*int(secnds(x))
return
end
function ran2(idum)
implicit none
integer idum,IM1,IM2,IMM1,IA1,IA2,IQ1,IQ2,IR1,IR2,NTAB,NDIV
real AM,EPS,RNMX
doubleprecision ran2
parameter (IM1=2147483563,IM2=2147483399,AM=1./IM1,IMM1=IM1-1,
& IA1=40014,IA2=40692,IQ1=53668,IQ2=52774,IR1=12211,
& IR2=3791,NTAB=32,NDIV=1+IMM1/NTAB,EPS=1.2E-7,RNMX=1.-EPS)
integer idum2,j,k,iv(NTAB),iy
SAVE iv,iy,idum2
DATA idum2/123456789/,iv/NTAB*0/,iy/0/
if (idum.le.0) then
idum=max(-idum,1)
idum2=idum
do j=NTAB+8,1,-1
k=idum/IQ1
idum=IA1*(idum-k*IQ1)-k*IR1
if (idum.lt.0) idum=idum+IM1
if (j.le.NTAB) iv(j)=idum
enddo
iy=iv(1)
endif
k=idum/IQ1
idum=IA1*(idum-k*IQ1)-k*IR1
if (idum.lt.0) idum=idum+IM1
k=idum2/IQ2
idum2=IA2*(idum2-k*IQ2)-k*IR2
if (idum2.lt.0) idum2=idum2+IM2
j=1+iy/NDIV
iy=iv(j)-idum2
iv(j)=idum
if (iy.lt.1) iy=iy+IMM1
ran2=min(AM*iy,RNMX)
return
end
Appendix A
Floating Point Representation: Any real number x can be put in the following
binary form

x = (-1)^s × 1.f × 2^(e-127), 1 ≤ e ≤ 254.

These are normal numbers. The terminology floating point is now clear. The binary
point can be moved (floated) to any position in the bitstring by choosing the appropriate
exponent.
The smallest normalized number is 2^-126. The subnormal numbers are represented by

x = (-1)^s × 0.f × 2^-126.

These are not normalized numbers. In fact the space between 0 and the smallest positive
normalized number is filled by the subnormal numbers. In single precision s is the sign bit,
e the 8-bit biased exponent and f the 23-bit significand fraction.
Explicitly
CP and MFT, B.Ydri 310
Field:        s     e       f
Bit Position: 31    30-23   22-0
Because only a finite number of bits is used the set of machine numbers (the numbers
that the computer can store exactly or approximately) is much smaller than the set of
real numbers. There is a maximum and a minimum. Exceeding the maximum we get the
error condition known as overflow. Falling below the minimum we get the error condition
known as underflow.
The largest number corresponds to the normal floating number with s = 0, e = 254 and
1.f = 1.111...1 (with 23 1s after the binary point). We compute 1.f = 1 + 0.5 + 0.25 +
0.125 + ... ≃ 2. Hence x_max (normal float) ≃ 2 × 2^127 ≃ 3.4 × 10^38. The smallest
number corresponds to the subnormal floating number with s = 0 and 0.f = 0.00...1 = 2^-23.
Hence x_min (subnormal float) = 2^-23 × 2^-126 = 2^-149 ≃ 1.4 × 10^-45. We get for single
precision floats the range

1.4 × 10^-45 ≤ |x| ≤ 3.4 × 10^38.
We remark that
The double precision floating point numbers (doubles) occupy 64 bits. The first bit is for
the sign, 11 bits for the exponent and 52 bits for the significand. They are stored as two
32-bit words. Explicitly
Field:        s     e       f       f
Bit Position: 63    62-52   51-32   31-0
In this case the bias is bias = 1023. They correspond approximately to 16 decimal places
of precision. They are in the range

4.9 × 10^-324 ≤ |x| ≤ 1.8 × 10^308.
The above description corresponds to the IEEE 754 standard adopted in 1987 by the
Institute of Electrical and Electronics Engineers (IEEE) and American National Standards
Institute (ANSI).
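The bit fields described above can be read off a packed single-precision number directly; a small Python aside (not from the book) using the standard struct module:

```python
import struct

def float_fields(x):
    """Return the (s, e, f) bit fields of the IEEE 754 single-precision
    encoding of x: sign (bit 31), biased exponent (bits 30-23),
    significand fraction (bits 22-0)."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF
```

For example 1.0 is stored with e = 127 (the bias) and f = 0.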
Machine Precision and Roundoff Errors: The gap ε between the number 1
and the next largest representable number is called the machine precision. For single
precision we get ε = 2^-23. For double precision we get ε = 2^-52.
Alternatively the machine precision ε_m is the largest positive number which, if added
to the number stored as 1, will not change this stored 1, viz

1_c + ε_m = 1_c.                                                  (A.10)

Clearly ε_m < ε. The number x_c is the computer representation of the number x. The
relative error ε_x in x_c is therefore such that

|ε_x| = |(x_c - x)/x| ≤ ε_m.                                      (A.11)
All single precision numbers contain an error in their 6th decimal place and all double
precision numbers contain an error in their 15th decimal place.
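The defining property 1_c + ε_m = 1_c can be probed empirically. A short Python sketch (an aside; Python floats are IEEE doubles, so the loop stops at 2^-52):

```python
def machine_epsilon():
    """Halve a candidate until adding half of it to 1.0 no longer
    changes the stored 1.0; the last surviving value is the gap
    between 1 and the next representable number."""
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    return eps
```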
An operation on the computer will therefore only approximate the analytic answer
since numbers are stored approximately. For example the difference a = b - c is on the
computer a_c = b_c - c_c. Writing b_c = b(1 + ε_b) and c_c = c(1 + ε_c) we compute

a_c/a = 1 + ε_b (b/a) - ε_c (c/a).                                (A.12)

In particular the subtraction of two very large, nearly equal numbers b and c may lead to
a very large error in the answer a_c. Indeed, for b ≃ c we get the error

ε_a ≃ (b/a)(ε_b - ε_c).                                           (A.13)

In other words the large number b/a can magnify the error considerably. This is called
subtractive cancellation.
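Subtractive cancellation is easy to exhibit. In the Python illustration below (an aside, not from the book) the exact value of (1 + x) - 1 is x, so any discrepancy is pure roundoff; for x near the machine precision the relative error is of order 10%:

```python
def cancellation_error(x):
    """Relative error of the naive difference (1 + x) - 1 against
    the exact answer x, for x > 0."""
    naive = (1.0 + x) - 1.0
    return abs(naive - x) / x
```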
Let us next consider the multiplication of two numbers b and c to produce a
number a, viz a = b × c. This operation is represented on the computer by a_c = b_c × c_c.
We get the error

ε_a = ε_b + ε_c.                                                  (A.14)
Let us now consider an operation involving a large number N of steps. The question we
want to ask is how the roundoff error accumulates.
The main observation is that roundoff errors grow slowly and randomly with N. They
diverge as N gets very large. By assuming that the roundoff errors in the individual steps
of the operation are not correlated we can view the accumulation of error as a random
walk problem with step size equal to the machine precision ε_m. We know from the study
of the random walk problem in statistical mechanics that the total roundoff error will be
proportional to √N, namely

ε_ro = √N ε_m.                                                    (A.15)
This is the most conservative estimation of the roundoff errors. The roundoff errors are
analogous to the uncertainty in the measurement of a physical quantity.
Systematic (Algorithmic) Errors: This type of error arises from the use of
approximate numerical solutions. In general the algorithmic (systematic) error is inversely
proportional to some power of the number of steps N, i.e.

ε_sys = α/N^β.                                                    (A.16)

The total error is obtained by adding the roundoff error, viz

ε_tot = ε_sys + ε_ro = α/N^β + √N ε_m.                            (A.17)
There is a competition between the two types of errors. For small N it is the systematic
error which dominates while for large N the roundoff error dominates. This is very
interesting because it means that by trying to decrease the systematic error (by increasing
N ) we will increase the roundoff error. The best algorithm is the algorithm which gives
an acceptable approximation in a small number of steps so that there will be no time for
roundoff errors to grow large.
As an example let us consider the case α = 1 and β = 2. The total error is

ε_tot = 1/N^2 + √N ε_m.                                           (A.18)

This error is minimum when

dε_tot/dN = 0.                                                    (A.19)
For a single precision calculation (ε_m = 10^-7) we get N = 1099. Hence ε_tot = 4 × 10^-6.
Most of the error is roundoff. In order to decrease the roundoff error, and hence the total
error in this example, we need to decrease the number of steps. Furthermore, in order for
the systematic error not to increase when we decrease the number of steps, we must find
another algorithm which converges faster with N. For an algorithm with α = 2 and β = 4
the total error is

ε_tot = 2/N^4 + √N ε_m.                                           (A.20)

This error is now minimum at N = 67, for which ε_tot = 9 × 10^-7. We have only 1/16 as
many steps, with an error smaller by a factor of 4.
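The two minima quoted in this example can be reproduced by brute force; a Python sketch (an illustration, assuming single precision ε_m = 10^-7):

```python
def total_error(N, alpha, beta, eps_m=1.0e-7):
    """eps_tot = alpha / N**beta + sqrt(N) * eps_m, as in (A.17)."""
    return alpha / N ** beta + N ** 0.5 * eps_m

def best_N(alpha, beta, nmax=10000):
    """Integer N minimizing the total error, by exhaustive search."""
    return min(range(1, nmax), key=lambda n: total_error(n, alpha, beta))
```

best_N(1, 2) lands near N ≈ 1099 and best_N(2, 4) near N ≈ 67, reproducing the totals of roughly 4 × 10^-6 and 9 × 10^-7 quoted above.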
Appendix B
[The body of this appendix is an executive summary, in Arabic, of the 21 simulations of
Part I (pages 308-344 of the original book). The Arabic text did not survive this
extraction and is garbled beyond recovery. Its recoverable structure is a table of
contents over the 21 simulations followed by the summaries and problem sets themselves,
which mirror the first part: the Euler algorithms, the solar system and the precession of
the perihelion of Mercury (the observed 43 arcseconds per century), the chaotic pendulum
(period-1 motion, bifurcations and fractals), linear congruential pseudo-random number
generators, the Maxwell distribution, and Monte Carlo integration.]