Python For Physicist by HA
May 4, 2021
1 Introduction
Python is a free, open-source, easy-to-use software tool that offers a significant alternative to
proprietary packages such as Matlab and Mathematica. This book covers everything the working
scientist needs to know to start using Python 3 effectively to study many interesting mathematical
and physical phenomena. I assume you have prior knowledge of Python coding. In these notes I
will discuss several topics in different ways, and I will explain each topic in as much detail as my
knowledge allows. I hope these notes will be helpful to you, whether you are a student or you are
working on statistical models or data analysis.
The Python programming language is useful for all kinds of scientific and engineering tasks. You
can use it to analyze and plot data. You can also use it to numerically solve science and engineer-
ing problems that are difficult or even impossible to solve analytically. While we want to marshal
Python's powers to address scientific problems, you should know that Python is a general-purpose
computer language that is widely used to address all kinds of computing tasks, from web applica-
tions to processing financial data on Wall Street and various scripting tasks for computer system
management. Over the past decade it has been increasingly used by scientists and engineers for
numerical computations, graphics, and as a “wrapper” for numerical software originally written in
other languages, like Fortran and C.
Python has many powerful but unfamiliar facets, and these need more explanation than the familiar
ones. In particular, if you encounter in this text a reference to the "beginner" or the "unwary", it
signifies a point which is not made clear in the documentation, and has caught out this author at
least once.
By “Physicist”, I mean anyone who uses quantitative models either to obtain conclusions by pro-
cessing precollected experimental data or to model potentially observable results from a more
abstract theory, and who asks “what if?”. What if I analyze the data in a different way? What
if I change the model? Thus the term also includes economists, engineers, mathematicians among
others, as well as the usual concept of scientists. Given the volume of potential data or the com-
plexity (non-linearity) of many theoretical models, the use of computers to answer these questions
is fast becoming mandatory.
The purpose of this intentionally short book is to show how easy it is for the working scientist to
implement and test non-trivial mathematical algorithms using Python. We have quite deliberately
preferred brevity and simplicity to encyclopaedic coverage in order to get the inquisitive reader
up and running as soon as possible. We aim to leave the reader with a well-founded framework
to handle many basic, and not so basic, tasks. Obviously, most readers will need to dig further
into techniques for their particular research needs. But after reading this book, they should have
a sound basis for this.
No prior knowledge of programming is needed to read this book. We start with some very simple
examples to get started with programming and then move on to introduce fundamental program-
ming concepts such as loops, functions, if-tests, lists, and classes. These generic concepts are
supplemented by more specific and practical tools for scientific programming, primarily plotting
and array based computations. The book’s overall purpose is to introduce the reader to program-
ming and, in particular, to demonstrate how programming can be an extremely useful and powerful
tool in many branches of the natural sciences.
Contents
1. Python for Mathematical Concepts
a. Fibonacci sequence
b. Prime numbers
c. Use of Numpy
d. Creation of a matrix using the numpy.matrix() function
e. Addition of Matrix in Python
i) Traditional method
ii) using ‘+’ operator
f. Subtraction of matrix using Python
g. Transpose of a Python Matrix
h. Exponent of a Python Matrix
2. Graphical plotting and histogram
a. Making plots
b. Working With Pyplot: Plotting Routines
c. Histograms
3. Mechanical Oscillator
a. Mechanical oscillator using Python
b. Damped Harmonic oscillator
4. Application of Python for electrical circuit
a. DC circuit using Python
b. Kirchhoff current law using Python
5. Reading data from files
a. Read and write from/to text file
b. Applications
6. Python numerical integration
a. Simpson 1/3 rule
b. Trapezoidal rule
c. Riemann integral
7. Root finding in Python
a. Bisection method using Python
b. Newton-Raphson method using Python
c. Secant method using Python
8. Special function using Python
a. Bessel function
b. Gamma function
c. Airy function
d. Legendre polynomials
e. Laguerre polynomials
f. Hermite polynomials
9. Solving one dimensional differential equations
a. Solve differential equations using Python
b. Solving systems of nonlinear equations using Python
10. Fourier Series using Python
a. Fourier series analysis for a sawtooth wave function
b. Fourier series analysis for a square wave function
c. Fourier series analysis for a Triangular wave function
d. Fourier series analysis for an arbitrary wave function
11. Discrete (Fast) Fourier Transform
a. Continuous Fourier Transformation
b. Discrete Fourier Transformation (FFT)
12. Logic GATE
a. NOT GATE
b. AND GATE
c. OR GATE
d. XOR,XNOR, NAND GATE
13. Python for Electromagnetism
a. Visualizing a vector field with Matplotlib
b. Electric field and potential due to a charged particle
i. Electric field of a charged particle
ii. Electric potential of a charged particle
iii. Electric field lines and potential for four charges on square
iv. Maxwell’s plot
c. Electrostatic potential of an electric dipole
d. Magnetism using Python
i. Magnetic field of a straight wire
ii. Magnetic field produced by a dipole
e. Charged Particle Trajectories in Electric and Magnetic Fields
14. Basic Operations on Quantum Objects using Python
a. Particle in a box problem with application
b. Harmonic oscillator in quantum mechanics
c. Creating and inspecting quantum objects (matrix representation)
d. States and operators using Python
i. Density matrices
ii. Pauli Sigma matrix properties
iii. Harmonic oscillator operators
iv. Quantum annihilation and creation operators
v. Quantum commutation relation using Python
e. Cat vs coherent states in a Kerr resonator, and the role of measurement
15. WKB approximation using Python
16. Monte Carlo Study of Ferro-magnetism using an Ising Model
a. Monte Carlo methods using Python
b. Random number generation using Python
i. Creating Random Numbers
ii. Testing for randomness
iii. Random Walks and the Markov process
iv. The Metropolis algorithm
c. Ferromagnetism using the Ising Model
d. Ising Model using Python
i. Road map of 1d Ising model
ii. Critical dynamics in a 1-D Ising model
iii. Exercise on the 2D Ising model
2 Chapter 1: Python for Mathematical Concepts
A Fibonacci sequence is the integer sequence 0, 1, 1, 2, 3, 5, 8, …. The first two terms are 0 and 1.
All other terms are obtained by adding the preceding two terms; that is, the nth term is the sum
of the (n-1)th and (n-2)th terms.
Fibonacci numbers are named after the Italian mathematician Leonardo of Pisa, later known as
Fibonacci. In his 1202 book Liber Abaci, Fibonacci introduced the sequence to Western European
mathematics, although the sequence had been described earlier in Indian mathematics, as early
as 200 BC in work by Pingala on enumerating possible patterns of Sanskrit poetry formed from
syllables of two lengths.
Fibonacci numbers appear unexpectedly often in mathematics, so much so that there is an entire
journal dedicated to their study, the Fibonacci Quarterly. Applications of Fibonacci numbers
include computer algorithms such as the Fibonacci search technique and the Fibonacci heap data
structure, and graphs called Fibonacci cubes used for interconnecting parallel and distributed
systems.
[1]: # Program to display the Fibonacci sequence up to n-th term
from IPython.display import display, Image
nterms = int(input("How many terms? "))
a, b = 0, 1
for _ in range(nterms):
    print(a, end=" ")
    a, b = b, a + b
[2]: # Python program to display the Fibonacci sequence using a recursive function
def recur_fibo(n):
    if n <= 1:
        return n
    else:
        return recur_fibo(n-1) + recur_fibo(n-2)

nterms = 7
print("Fibonacci sequence:")
for i in range(nterms):
    print(recur_fibo(i), end=" ")
Fibonacci sequence:
0 1 1 2 3 5 8
In this program, we store the number of terms to be displayed in nterms. A recursive function
recur_fibo() is used to calculate the nth term of the sequence. We use a for loop to iterate and
calculate each term recursively.
[3]: def fib(n):
    """Return the nth Fibonacci number."""
    a, b = 0, 1
    for i in range(n):
        a, b = b, a + b
    return a

if __name__ == "__main__":
    for i in range(7):
        print(fib(i), end=" ")
0 1 1 2 3 5 8
Given a positive integer N, the task is to write a Python program to check whether the number is
prime. Definition: a prime number is a natural number greater than 1 that has no positive divisors
other than 1 and itself. The first few prime numbers are {2, 3, 5, 7, 11, …}.
There are infinitely many primes, as demonstrated by Euclid around 300 BC. No known simple
formula separates prime numbers from composite numbers. However, the distribution of primes
within the natural numbers in the large can be statistically modelled. The first result in that
direction is the prime number theorem, proven at the end of the 19th century, which says that the
probability of a randomly chosen number being prime is inversely proportional to its number of
digits, that is, to its logarithm.
[4]: # Python program to check if a
# given number is prime or not
num = 11
# If given number is greater than 1
if num > 1:
    for i in range(2, int(num/2)+1):
        if (num % i) == 0:
            print(num, "is not a prime number")
            break
    else:
        print(num, "is a prime number")
else:
    print(num, "is not a prime number")
11 is a prime number
[5]: # Python program to display all the prime numbers within an interval
lower = 0
upper = 20
for num in range(max(lower, 2), upper + 1):
    if all(num % i != 0 for i in range(2, int(num**0.5) + 1)):
        print(num, end=" ")
This section is an introduction to numpy. While the core is pretty stable, extensions near the edges
are an ongoing process. The definitive documentation is a recent "user guide", Numpy Community
(2013b), at 103 pages, and the "reference manual", Numpy Community (2013a), at 1409 pages.
Earlier, more discursive accounts, including a wealth of examples, can be found in Langtangen
(2008) and/or Langtangen (2009). Before we start, we must import the numpy module. The
preferred approach is to preface the code with:
[6]: import numpy as np
Perhaps the most useful constructor is np.linspace, which builds an equally spaced array of floats.
The function np.logspace is similar, but the numbers are equally spaced on a logarithmic scale.
Somewhat closer to Python's range function is np.arange, which returns an array rather than a
list.
[7]: a=np.linspace(0,1,5)
c=np.linspace(1,3,5)
a+=c
[8]: a
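Evaluating a now gives array([1.  , 1.75, 2.5 , 3.25, 4.  ]), since the addition was performed
element-wise.
The next output shows two matrices built with the numpy.matrix() function; the cell that created
them did not survive, so here is a minimal sketch consistent with the printed result:
import numpy as np
A = np.matrix([[5, 10], [15, 20]])
B = np.matrix([[5, 10], [15, 20]])
print('MatrixA:\n', A)
print('\nMatrixB:\n', B)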
MatrixA:
[[ 5 10]
[15 20]]
MatrixB:
[[ 5 10]
[15 20]]
For better understanding, we can add two matrices in two ways. First I will discuss the traditional
way of adding two matrices using for loops, and then I will add the same two matrices using the
'+' operator.
A = np.matrix([[5, 7], [6, 11]])
B = np.matrix([[9, 12], [22, 10]])
result = np.zeros(A.shape)
for x in range(A.shape[1]):
    for y in range(B.shape[0]):
        result[x, y] = A[x, y] + B[x, y]
print('Matrix A :\n', A)
print('\nMatrix B :\n', B)
print('\nResult :\n', result)
Matrix A :
[[ 5 7]
[ 6 11]]
Matrix B :
[[ 9 12]
[22 10]]
Result :
[[14. 19.]
[28. 21.]]
The next method provides better efficiency, in the spirit of the Python language, by decreasing
the number of lines of code needed to compute the addition of two matrices.
[11]: import numpy as np
A = np.matrix([[5, 7], [6, 11]])
B = np.matrix([[9, 12], [22, 10]])
print('Matrix A :\n', A)
print('\nMatrix B :\n', B)
result = A + B
print('\nResult :\n', result)
Matrix A :
[[ 5 7]
[ 6 11]]
Matrix B :
[[ 9 12]
[22 10]]
Result :
[[14 19]
[28 21]]
2.5 Matrix multiplication in Python
There are two kinds of matrix multiplication: scalar multiplication and matrix (dot-product)
multiplication. In scalar multiplication, a scalar/constant value multiplies each element of the
matrix.
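The cell that produced the next output was lost; a minimal sketch consistent with the printed
result (the input matrix and the scalar 11 are my assumptions):
import numpy as np
A = np.matrix([[1, 2], [3, 4]])
print('Matrix A:\n', 11 * A)   # scalar product: every element is multiplied by 11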
Matrix A:
[[11 22]
[33 44]]
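The dot-product cell is likewise missing; a sketch consistent with the output that follows:
A = np.matrix([[5, 7], [6, 11]])
B = np.matrix([[9, 12], [22, 10]])
print('Matrix A :\n', A)
print('\nMatrix B :\n', B)
print('\nDot Product of Matrix A and Matrix B:\n', A.dot(B))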
Matrix A :
[[ 5 7]
[ 6 11]]
Matrix B :
[[ 9 12]
[22 10]]
Dot Product of Matrix A and Matrix B:
[[199 130]
[296 182]]
Transposing a matrix means flipping it over its diagonal; i.e., it exchanges the rows and the columns
of the input matrix. The rows become the columns and vice-versa.
[15]: import numpy
A = numpy.array([numpy.arange(10,15), numpy.arange(15,20)])
print("Original Matrix A:\n")
print(A)
print('\nDimensions of the original Matrix A: ', A.shape)
print("\nTranspose of Matrix A:\n")
result = A.T
print(result)
print('\nDimensions of the Matrix A after performing the Transpose Operation: ', result.shape)
Original Matrix A:
[[10 11 12 13 14]
[15 16 17 18 19]]
Transpose of Matrix A:
[[10 15]
[11 16]
[12 17]
[13 18]
[14 19]]
The exponent of a matrix here is calculated element-wise, i.e. the exponent of every element is
calculated by raising that element to the power of an input scalar/constant value.
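The cell that produced the output below was lost; a minimal sketch consistent with it:
import numpy as np
A = np.array([[0, 1], [2, 3]])
print("Original Matrix A:\n")
print(A)
print(A ** 2)   # element-wise exponent with scalar 2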
Original Matrix A:
[[0 1]
[2 3]]
[[0 1]
[4 9]]
2.9 Problem :
You are given a system of linear equations as follows, and need to find the values of w,x,y,z:
w + 3x − 5y + 2z = 0
4x − 2y + z = 6
2w − x + 3y − z = 5
w + x + y + z = 10
2.10 Solution:
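The solving cell was lost; a minimal sketch using np.linalg.solve that reproduces the printed
result below:
import numpy as np
A = np.array([[1, 3, -5, 2],
              [0, 4, -2, 1],
              [2, -1, 3, -1],
              [1, 1, 1, 1]])
b = np.array([[0], [6], [5], [10]])
print(np.linalg.solve(A, b))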
[[1.]
[2.]
[3.]
[4.]]
2.11 Problem
Perform matrix multiplication, compute the matrix inverse, and take the transpose of two matrices
in Python. You can choose your own matrices. Present your answer in a readable form.
2.12 Solution
import numpy as np
matB = np.array([[5, 6], [7, 8]])   # example matrix (assumed; most of this solution was lost)
# print the second matrix
print('\nThe second matrix is :\n', matB)
3 Chapter 2: Graphical plotting and histogram
At first sight, it will seem that there are quite some components to consider when you start plotting
with this Python data visualization library. You'll probably agree with me that it's confusing and
sometimes even discouraging to see the amount of code that is necessary for some plots, not knowing
where to start yourself and which components you should use.
Luckily, this library is very flexible and has a lot of handy, built-in defaults that will help you
out tremendously. As such, you don’t need much to get started: you need to make the necessary
imports, prepare some data, and you can start plotting with the help of the plot() function! When
you’re ready, don’t forget to show your plot using the show() function.
import matplotlib.pyplot as plt
plt.plot([1, 2, 3, 4], [1, 4, 9, 16], label="data")   # example data (assumed)
# Add a legend
plt.legend()
First off, you’ll already know Matplotlib by now. When you talk about “Matplotlib”, you talk
about the whole Python data visualization package. This should not come to you as a big surprise
:)
Secondly, pyplot is a module in the matplotlib package. That’s why you often see matplotlib.pyplot
in code. The module provides an interface that allows you to implicitly and automatically create
figures and axes to achieve the desired plot.
This is especially handy when you want to quickly plot something without instantiating any Figures
or Axes, as you saw in the example in the first section of this tutorial. You see, you haven’t explicitly
specified these components, yet you manage to output a plot that you have even customized! The
defaults are initialized and any customizations that you do, will be done with the current Figure
and Axes in mind.
Lastly, pylab is another module, but it gets installed alongside the matplotlib package. It bulk
imports pyplot and the numpy library and was generally recommended when you were working
with arrays, doing mathematics interactively and wanted access to plotting features.
You might still see this popping up in older tutorials and examples of matplotlib, but its use
is no longer recommended, especially not when you’re using the IPython kernel in your Jupyter
notebook. You can read more about this here.
As a solution, you can best use the %matplotlib magic in combination with the right backend, such
as inline, qt, etc. Most of the time you will want to use inline, as this makes sure that the
plots are embedded inside the notebook. Read more about that in DataCamp's Definitive Guide
to Jupyter Notebook.
[20]: import matplotlib.pyplot as plt
plt.figure(figsize=(4, 3))
plt.plot([1, 2, 3, 4], [10, 20, 25, 30], color='lightblue', linewidth=3)
plt.scatter([0.3, 3.8, 1.2, 2.5], [11, 25, 9, 26], color='darkred', marker='^')
#plt.set_xlim(0.5, 4.5)
plt.show()
You see that the add_subplot() function in itself also poses you with a challenge, because you see
add_subplot(111) in the above code chunk.
What does 111 mean?
Well, 111 is equal to 1,1,1, which means that you actually give three arguments to add_subplot().
The three arguments designate the number of rows (1), the number of columns (1) and the plot
number (1). So you actually make one subplot.
Note that you can really go bananas with this function, especially when you're just starting out
with this library and you keep forgetting what the three numbers stand for.
Consider the following commands and try to envision what the plot will look like and how many
Axes your Figure will have: ax = fig.add_subplot(2,2,1). A short sketch is shown below.
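A minimal sketch of the rows-columns-index convention (the data values are assumptions):
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(4, 3))
ax1 = fig.add_subplot(2, 2, 1)   # 2 rows, 2 columns, first plot
ax2 = fig.add_subplot(2, 2, 4)   # same grid, fourth plot
ax1.plot([1, 2, 3], [1, 4, 9])
ax2.plot([1, 2, 3], [9, 4, 1])
plt.show()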
Now that all is set for you to start plotting your data, it’s time to take a closer look at some plotting
routines. You’ll often come across functions like plot() and scatter(), which either draw points with
lines or markers connecting them, or draw unconnected points, which are scaled or colored.
But, as you have already seen in the example of the first section, you shouldn’t forget to pass the
data that you want these functions to use!
These functions are only the bare basics. You will need some other functions to make sure your
plots look awesome:
[21]: import numpy as np
import matplotlib.pyplot as plt
plt.figure(figsize=(4, 3))
# example curve (assumed; the rest of the original cell was lost)
x = np.linspace(0, 2*np.pi, 100)
plt.plot(x, np.sin(x), 'b-', label=r"$\sin(x)$")
plt.xlabel("$x$"); plt.ylabel("$y$")
plt.title("Sine wave"); plt.legend(); plt.show()
This is an example of a standard publishable plot: you can label the axes, give the plot a title,
and add a legend.
3.1 Histogram:
Now I will show you how to create 1D and 2D histograms and how to save them as publishable
figures.
[22]: import numpy as np
import matplotlib.pyplot as plt
plt.figure(figsize=(4, 3))
# sample data (assumed; the rest of the original cell was lost)
data = np.random.normal(0, 1, 1000)
plt.hist(data, bins=30)
plt.xlabel("value"); plt.ylabel("counts"); plt.show()
Now we will look at the possible settings for 2D histograms, as sketched below:
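A minimal 2D-histogram sketch (sample data assumed):
import numpy as np
import matplotlib.pyplot as plt
x = np.random.randn(10000)
y = np.random.randn(10000)
plt.hist2d(x, y, bins=50, cmap='viridis')   # counts on a 50x50 grid
plt.colorbar(label='counts')
plt.xlabel("$x$"); plt.ylabel("$y$")
plt.show()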
3.2 Problem 1 :
b)
$$f_2(x) = [x^2 e^{-x} \sin(x^2)]^2$$
Make sure your figure has a legend, a range, a title, and axis labels, and is publishable.
3.3 Solution 1:
4 Chapter 3: Mechanical Oscillator
The case of the one-dimensional mechanical oscillator leads to the following equation:
$$m\ddot{x} + \mu\dot{x} + kx = m\ddot{x}_d$$
Where:
• x is the position,
• ẋ and ẍ are respectively the speed and acceleration,
• m is the mass,
• µ the damping coefficient,
• k the stiffness,
• and ẍd the driving acceleration which is null if the oscillator is free.
In reduced form this becomes $\ddot{x} + 2\zeta\omega_0\dot{x} + \omega_0^2 x = \ddot{x}_d$, where:
• $\omega_0$ is the undamped pulsation (natural angular frequency),
• $\zeta$ is the damping ratio,
• $\ddot{x}_d = a_d \sin(\omega_d t)$ is the imposed acceleration.
In the case of the mechanical oscillator:
$$\omega_0 = \sqrt{\frac{k}{m}}, \qquad \zeta = \frac{\mu}{2\sqrt{mk}}$$
from math import sin, fabs
import matplotlib.pyplot as plt

def simulate_oscillations(driving_force_amplitude, driving_frequency,
                          drag_coefficient, k, m, dt, option):   # This does all the work.
    x = 0            # Initial displacement
    t, v = 0, 0      # Initial time and speed
    displacement = []   # These are lists that will hold the data used
    time = []           # to produce a pretty graph at the end
    amplitude_list = []
    time1 = []
    flag = 0
    x_old = 0
    while flag == 0:   # This loop keeps going until we are satisfied that the
                       # amplitude has reached a stable, maximum value
        # Euler step for m*dv/dt = F_drive - k*x - drag*v (the original update
        # lines were lost; this reconstruction is an assumption)
        a = (driving_force_amplitude*sin(driving_frequency*t)
             - k*x - drag_coefficient*v)/m
        v = v + a*dt
        x_new = x + v*dt
        t = t + dt
        displacement.append(x_new)
        time.append(t)
        if x_old < x and x_new < x:   # a local maximum: record it as an amplitude
            amplitude_list.append(x)
            time1.append(t - dt)   # Records the time at which the mass reached the final amplitude
        x_old, x = x, x_new
        l = len(amplitude_list)
        if l > 3:   # Wait until we have 3 amplitudes to compare
            if fabs(amplitude_list[l-1] - amplitude_list[l-2]) < 1e-5 and \
               fabs(amplitude_list[l-2] - amplitude_list[l-3]) < 1e-5:
                # The amplitude is stable: take it as the steady-state amplitude
                # of the system. This line can be removed for long runs.
                flag = 1   # Breaks out of the loop
    if option == 'show_graph':
        plt.plot(time1, amplitude_list)
        plt.plot(time, displacement)
        plt.suptitle('A forced, damped oscillator.')
        plt.xlabel('time')
        plt.ylabel('displacement')
        plt.grid('on')
        plt.show()
    return amplitude_list[l-1]
def run_the_experiment(a, b, c):
    step_size = (b - a)/c
    run = 0
    f_list = []
    results = []
    while run <= c:
        # (driving_force_amplitude, driving_frequency, drag_coefficient, k, m, dt, option)
        results.append(simulate_oscillations(3, a, 0.4, 20, 0.5, 0.001, 'no_graph'))
        f_list.append(a)
        a = a + step_size
        run = run + 1
    plt.plot(f_list, results)
    plt.suptitle('Frequency response of damped oscillator')
    plt.xlabel('Driving Frequency')
    plt.ylabel('Amplitude')
    plt.grid('on')
    plt.draw()
    plt.show()
5 Chapter 4: Application of Python for electrical circuit
In this part I will explain how to solve electrical circuits using the sympy symbolic math module.
The following circuit includes one voltage source, one current source and two resistors.
The objective is to obtain the output voltage Vo as a function of the components.
The circuit will be solved using the nodal method.
First we need to locate the circuit nodes, assign one as ground and assign a number for the rest
of them. As the circuit has three nodes and one is ground, we have two nodes left: 1 and 2.
We will first generate a set of sympy symbols. There will be:
• one symbol for each component: Vs, R1, R2, Is,
• one current symbol for each power supply: iVs,
• one symbol for each measurement we want to obtain: Vo,
• one symbol for each node voltage that is not ground: V1, V2.
[26]: # Import the sympy module
import sympy
# Create the circuit symbols
Vs,iVs,R1,R2,Is,Vo, V1, V2 = sympy.symbols('Vs,iVs,R1,R2,Is,Vo,V1,V2')
Then we can define the current equations on each node except ground. Each current equation
adds all the currents flowing into the node from each component. All the equations we add are
supposed to equal zero.
[27]: # Create an empty list of equations
equations = []
# Nodal equations
equations.append(iVs-(V1-V2)/R1) # Node 1
equations.append(Is-(V2-V1)/R1-V2/R2) # Node 2
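The output shown further below also includes the source equation Eq(Vs, V1), the measurement
equation Eq(Vo, V2), and a list of unknowns; the cell defining them was lost, so here is a minimal
sketch (assumed from the printed output):
# Voltage source and measurement equations
equations.append(sympy.Eq(Vs, V1))   # Vs fixes the voltage of node 1
equations.append(sympy.Eq(Vo, V2))   # Vo measures the voltage of node 2
# Unknowns to solve for
unknowns = [V1, V2, iVs, Vo]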
We can see the equations and unknowns before solving the circuit. To ease reusing the code, we
will define a showCircuit function that shows equations and unknowns:
[30]: # Define the function
def showCircuit():
    print('Equations')
    for eq in equations:
        print('  ', eq)
    print()
    print('Unknowns:', unknowns)
    print()

showCircuit()
Equations
iVs - (V1 - V2)/R1
Is - V2/R2 - (-V1 + V2)/R1
Eq(Vs, V1)
Eq(Vo, V2)
solution = sympy.solve(equations, unknowns)
print('Solutions')
for sol in solution:
    print('  ', sol, '=', solution[sol])
Solutions
V1 = Vs
V2 = R2*(Is*R1 + Vs)/(R1 + R2)
iVs = (-Is*R2 + Vs)/(R1 + R2)
Vo = R2*(Is*R1 + Vs)/(R1 + R2)
[[2.33333333]
[2.61111111]
[1.22222222]]
NumPy's linalg module (built on LAPACK) can also provide eigenvalues and eigenvectors of
matrices, using linalg.eig(). It should be noted that the size of the matrix it can handle is
limited by the memory available on your computer.
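A short example of linalg.eig() (the matrix is chosen for illustration):
import numpy as np
A = np.array([[2., -1.], [-1., 2.]])
vals, vecs = np.linalg.eig(A)
print(vals)   # eigenvalues (3 and 1 for this matrix)
print(vecs)   # eigenvectors, one per column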
6 Chapter 5: Reading data from files
When we perform data analysis on experimental data, we usually store the required information
in text files. Python can read data from text files quite well. I will demonstrate how to read text
files using the loadtxt() function.
In this chapter I will introduce you to the task of text analysis in Python. You will learn how
to read an entire corpus into Python, clean it and how to perform certain data analyses on those
texts. We will also briefly introduce you to using Python’s plotting library matplotlib, with which
you can visualize your data.
Before we delve into the main subject of this chapter, text analysis, we will first write a couple
of utility functions that build upon the things you learnt in the previous chapter. Often we don’t
work with a single text file stored on our computer, but with multiple text files or entire corpora.
We would like to have a way to load a corpus into Python.
Remember how to read files? Each time we had to open a file, read the contents and then close the
file. Since this is a series of steps we will often need to do, we can write a single function that does
all that for us. We write a small utility function read_file(filename) that reads the specified
file and simply returns all contents as a single string.
[34]: def read_file(filename):
    "Read the contents of FILENAME and return as a string."
    infile = open(filename)   # Windows users should use codecs.open after importing codecs
    contents = infile.read()
    infile.close()
    return contents
A typical line of the data file looks like:
15.0 0.175 0.036
The loadtxt() function takes one required argument: the file name (you can also give the file name
with its actual path on your computer). There are a number of optional arguments; one we're going
to use here is unpack, which tells loadtxt() that the file contains columns of data that should
be returned in separate arrays. In this case, we have told Python to call those arrays frequency,
mic1, and mic2. The loadtxt() function is very handy and reasonably intelligent.
import numpy as np
import matplotlib.pyplot as plt
#data = np.loadtxt("./weight_height_1.txt")
frequency, mic1, mic2 = np.loadtxt("data.txt", unpack=True)
plt.plot(frequency, mic1, 'r--', frequency, mic2, 'b--')
plt.xlabel("Frequency (Hz)")
plt.ylabel("Amplitude")
plt.legend(["Mic 1","Mic 2"])
plt.show()
plt.figure(figsize=(4, 3))
plt.plot(frequency,mic1/mic2,'g^')
plt.xlabel("Frequency (Hz)")
plt.ylabel("mic1/mic2")
plt.legend(["ratio"])
plt.show()
Here is an example of how you can search for text files in a directory on your computer and list
their names.
[37]: from os import listdir
def list_textfiles(directory):
    "Return a list of filenames ending in '.txt' in DIRECTORY."
    textfiles = []
    for filename in listdir(directory):
        if filename.endswith(".txt"):
            textfiles.append(directory + "/" + filename)
    return textfiles
The function listdir takes as argument the name of a directory and lists all filenames in that
directory. I iterate over this list and append each filename that ends with the extension .txt to
a new list of text files. Using the list_textfiles function, the following code reads all text
files in the directory /home/haradhan and outputs the length (in characters) of each, as sketched
below.
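A minimal sketch of that loop (the directory is the author's example path):
for filepath in list_textfiles("/home/haradhan"):
    text = read_file(filepath)
    print(filepath, len(text))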
# stock ticker symbol
import pandas as pd   # assumed; the rest of the original cell was truncated
url = 'https://apmonitor.com/che263/uploads/Main/goog.csv'
data = pd.read_csv(url)
print(data['Close'][0:5])
print('min:', data['Close'].min())   # (the range of rows used for min/max is assumed)
print('max:', data['Close'].max())
0 929.080017
1 932.070007
2 935.090027
3 925.109985
4 920.289978
Name: Close, dtype: float64
min: 915.0
max: 978.8900150000001
6.1 Data files:
In computational physics, the inputs and outputs of any experimental result are large sets of data.
Rather than re-enter these large sets each time we run the program, we load and save the data in
the form of text files.
When working with files, we start by opening the file. The open function tells the operating
system which file we will be working on and what we want to do with it.
FileHandle = open("FileName", mode)
FileName should be a string describing the location and name of the file. The mode can be one
of these:
1) "r": read-only mode; you can read the file but not change it.
2) "w": write mode; this creates the file if it doesn't exist. If the file already exists, opening it
with "w" re-writes it and destroys the current contents.
3) "a": append mode; this allows you to write onto the end of a previously existing file without
destroying what already exists.
6.2 Problem 2:
For the model used in introductory physics courses, a projectile thrown vertically at some initial
velocity $v_i$ has position $y(t) = y_i + v_i t - \frac{1}{2} g t^2$, where $g = 9.8\ \mathrm{m/s^2}$. Write a Python program
that creates two lists, one containing time data (50 data points over 5 seconds) and the other
containing the corresponding vertical position data for this projectile. The program should ask the
user for the initial height $y_i$ and initial velocity $v_i$, and should print a nicely formatted table of
the list values after it has calculated them.
[40]: import numpy as np
import matplotlib.pyplot as plt
t = np.linspace(0., 5., 50)   # 50 time points over 5 seconds, matching the problem
yi = int(input("What is the initial height? "))
vi = int(input("What is the initial velocity? "))
g = 9.8
yt = yi + vi*t - 0.5*g*t**2
file1 = open("Problem2.txt", "w")
file1.write("   t      y(t)\n")
for ti, yv in zip(t, yt):
    file1.write("%6.2f %10.3f\n" % (ti, yv))
file1.close()   # to change file access modes
plt.figure(figsize=(4, 3))
plt.plot(t, yt, 'rs')
plt.xlabel("t")
plt.ylabel("y(t)")
plt.title("trajectory of projectile")
plt.show()
7 Chapter 6: Python numerical integration
7.1 Simpson 1/3 rule:
Consider two consecutive subintervals, $[x_{i-1}, x_i]$ and $[x_i, x_{i+1}]$. Simpson's rule approximates the
area under $f(x)$ over these two subintervals by fitting a quadratic polynomial through the points
$(x_{i-1}, f(x_{i-1}))$, $(x_i, f(x_i))$, and $(x_{i+1}, f(x_{i+1}))$, which is a unique polynomial, and then integrating
the quadratic exactly. The following shows this integral approximation for an arbitrary function.
The Simpson 1/3 formula:
$$\int_a^b f(x)\,dx \approx \frac{h}{3}\Big[f(x_0) + 4\sum_{i=1,\,i\,\mathrm{odd}}^{n-1} f(x_i) + 2\sum_{i=2,\,i\,\mathrm{even}}^{n-2} f(x_i) + f(x_n)\Big]$$
7.2 Problem 3:
Use Simpson's rule to approximate $\int_0^\pi \sin(x)\,dx$ with 11 evenly spaced grid points over the whole
interval. Compare this value to the exact value of 2.
7.3 Solution 3:
import numpy as np
a = 0
b = np.pi
n = 11
h = (b - a) / (n - 1)
x = np.linspace(a, b, n)
f = np.sin(x)
I_simp = (h/3) * (f[0] + 4*sum(f[1:n-1:2]) + 2*sum(f[2:n-2:2]) + f[n-1])
err_simp = 2 - I_simp
print(I_simp)
print(err_simp)
7.4 Trapezoidal rule:
The Trapezoid rule computes the area of a trapezoid with corners at $(x_i, 0)$, $(x_{i+1}, 0)$, $(x_i, f(x_i))$,
and $(x_{i+1}, f(x_{i+1}))$, which is $h\,\frac{f(x_i)+f(x_{i+1})}{2}$. Thus, the Trapezoid rule approximates integrals
according to the expression:
$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{n-1} h\,\frac{f(x_i)+f(x_{i+1})}{2}$$
7.5 Problem 4:
Use the Trapezoidal rule to approximate $\int_0^\pi \sin(x)\,dx$ with 11 evenly spaced grid points over the
whole interval. Compare this value to the exact value of 2. Also calculate the difference between
the Simpson 1/3 rule and the Trapezoidal rule calculation.
7.6 Solution 4:
import numpy as np
a = 0
b = np.pi
n = 11
h = (b - a) / (n - 1)
x = np.linspace(a, b, n)
f = np.sin(x)
I_trap = (h/2)*(f[0] + 2*sum(f[1:n-1]) + f[n-1])
err_trap = 2 - I_trap
print(I_trap)
print(err_trap)
7.7 Riemann integral:
The simplest method for approximating integrals is summing the area of rectangles that are defined
for each subinterval. The width of the rectangle is $x_{i+1} - x_i = h$, and the height is defined by a
function value $f(x)$ for some $x$ in the subinterval. An obvious choice for the height is the function
value at the left endpoint, $x_i$, or the right endpoint, $x_{i+1}$, because these values can be used even
if the function itself is not known. This method gives the Riemann integral approximation, which
is:
$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{n-1} h f(x_i)$$
or
$$\int_a^b f(x)\,dx \approx \sum_{i=1}^{n} h f(x_i)$$
7.8 Problem 5:
Use the Riemann integral to approximate $\int_0^\pi \sin(x)\,dx$ with 11 evenly spaced grid points over
the whole interval. Compare this value to the exact value of 2.
[43]: import numpy as np
a = 0
b = np.pi
n = 11
h = (b - a) / (n - 1)
x = np.linspace(a, b, n)
f = np.sin(x)
I_riemannL = h * sum(f[:n-1])
err_riemannL = 2 - I_riemannL
I_riemannR = h * sum(f[1:n])
err_riemannR = 2 - I_riemannR
I_mid = h * sum(np.sin((x[:n-1] + x[1:])/2))
err_mid = 2 - I_mid
print(I_riemannL, err_riemannL)
print(I_riemannR, err_riemannR)
print(I_mid, err_mid)
8 Chapter 7: Root finding in Python
The simplest root finding algorithm is the bisection method. The algorithm applies to any contin-
uous function f(x) on an interval [a, b] where the value of the function changes sign from a to b.
The idea is simple: divide the interval in two; a solution must exist within one subinterval; select
the subinterval where the sign of f(x) changes and repeat.
The bisection method uses the intermediate value theorem iteratively to find roots. Let f(x) be
a continuous function, and a and b be real scalar values such that a < b. Assume, without loss of
generality, that f(a) > 0 and f(b) < 0. Then by the intermediate value theorem, there must be a
root on the open interval (a, b). Now let m = (a + b)/2, the midpoint between a and b. If f(m) = 0
or is close enough, then m is a root. If f(m) > 0, then m is an improvement on the left bound, a,
and there is guaranteed to be a root on the open interval (m, b). If f(m) < 0, then m is an
improvement on the right bound, b, and there is guaranteed to be a root on the open interval (a, m).
The process of updating a and b can be repeated until the error is acceptably low.
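The solution below calls a function my_bisection; the cell defining it did not survive, so here is a
minimal recursive sketch consistent with how it is used (tolerance on |f(m)|):
import numpy as np

def my_bisection(f, a, b, tol):
    # Approximates a root m of f, bounded by a and b, to within |f(m)| < tol
    m = (a + b) / 2
    if np.abs(f(m)) < tol:
        return m
    elif np.sign(f(a)) == np.sign(f(m)):
        return my_bisection(f, m, b, tol)   # the root lies in [m, b]
    else:
        return my_bisection(f, a, m, tol)   # the root lies in [a, m]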
8.3 Solution 6.a:
$\sqrt{2}$ can be computed as the root of the function $f(x) = x^2 - 2$. Starting at a = 0 and b = 2, use
my_bisection to approximate $\sqrt{2}$ to a tolerance of |f(x)| < 0.1 and |f(x)| < 0.01. Verify that the
results are close to a root by plugging the root back into the function. Plot the function in the
range (0, 2).
import matplotlib.pyplot as plt
f = lambda x: x**2 - 2
x = np.linspace(0, 2, 100)
fx = f(x)
plt.plot(x, fx, 'r--')
plt.xlabel("x")
plt.ylabel("f(x)")
plt.title("Bisection plot")
plt.show()
[46]: f = lambda x: x**2 - 2
# tolerance values here are 0.1 and 0.01, with a=0, b=2
r1 = my_bisection(f, 0, 2, 0.1)
print("r1 =", r1)
r01 = my_bisection(f, 0, 2, 0.01)
print("r01 =", r01)
print("f(r1) =", f(r1))
print("f(r01) =", f(r01))
r1 = 1.4375
r01 = 1.4140625
f(r1) = 0.06640625
f(r01) = -0.00042724609375
Let f(x) be a smooth and continuous function and $x_r$ be an unknown root of f(x). Now assume that
$x_0$ is a guess for $x_r$. Unless $x_0$ is a very lucky guess, $f(x_0)$ will not be a root. Given this scenario,
we want to find an $x_1$ that is an improvement on $x_0$ (i.e., closer to $x_r$ than $x_0$). If we assume that
$x_0$ is "close enough" to $x_r$, then we can improve upon it by taking the linear approximation of f(x)
around $x_0$, which is a line, and finding the intersection of this line with the x-axis. Written out, the
linear approximation of f(x) around $x_0$ is $f(x) \approx f(x_0) + f'(x_0)(x - x_0)$. Using this approximation,
we find $x_1$ such that $f(x_1) = 0$,
which, when solved for $x_1$, gives:
$$x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$$
$\sqrt{2}$ can be computed as the root of the function $f(x) = x^2 - 2$. Using $x_0 = 1.4$ as a starting point,
use the previous equation to estimate $\sqrt{2}$. Compare this approximation with the value computed
by Python's sqrt function.
[47]: import numpy as np
f = lambda x: x**2 - 2
f_prime = lambda x: 2*x
newton_raphson = 1.4 - (f(1.4))/(f_prime(1.4))
print("newton_raphson =", newton_raphson)
print("sqrt(2) =", np.sqrt(2))
newton_raphson = 1.4142857142857144
sqrt(2) = 1.4142135623730951
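The second pair of printed values below comes from iterating the same update until convergence;
the defining cell was lost, so here is a minimal recursive sketch (assumed; starting point and
tolerance chosen to be consistent with the printed estimate):
def my_newton(f, df, x0, tol):
    # Newton-Raphson iteration until |f(x)| < tol
    if abs(f(x0)) < tol:
        return x0
    return my_newton(f, df, x0 - f(x0)/df(x0), tol)

estimate = my_newton(f, f_prime, 1.5, 1e-6)
print("estimate =", estimate)
print("sqrt(2) =", np.sqrt(2))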
estimate = 1.4142135623746899
sqrt(2) = 1.4142135623730951
Python has existing root-finding functions for us to use to make things easy. The function we will
use to find roots is fsolve from scipy.optimize.
Compute the root of the function $f(x) = x^3 - 100x^2 - x + 100$ using fsolve.
[49]: from scipy.optimize import fsolve
f = lambda x: x**3-100*x**2-x+100
print(fsolve(f, [2, 80]))
[ 1. 100.]
The secant method is a modification of the Newton-Raphson method which has the advantage of
not needing the derivative of the function.
Start with two guesses a and b; these should be near the desired solution, as with Newton's method,
but they don't have to bracket the solution like they do with the bisection method. Use the values
of f(a) and f(b) to approximate the slope of the curve, instead of using the function f'(x) to find
the slope exactly.
The algorithm for the secant method:
1. Start
2. Define the function as f(x)
3. Input initial guesses (x0 and x1), tolerable error (e) and maximum iterations (N)
4. Initialize the iteration counter i = 1
5. If f(x0) = f(x1) then print "Mathematical Error" and go to (11); otherwise go to (6)
6. Calculate $x_2 = x_1 - (x_1 - x_0)\,\frac{f(x_1)}{f(x_1) - f(x_0)}$
7. Increment the iteration counter: i = i + 1
8. If i >= N then print "Not Convergent" and go to (11); otherwise go to (9)
9. If |f(x2)| > e then set x0 = x1, x1 = x2 and go to (5); otherwise go to (10)
10. Print the root as x2
11. Stop
[50]: def secant(f, a, b, N):
    if f(a)*f(b) >= 0:
        print("Secant method fails.")
        return None
    a_n = a
    b_n = b
    for n in range(1, N+1):
        m_n = a_n - f(a_n)*(b_n - a_n)/(f(b_n) - f(a_n))
        f_m_n = f(m_n)
        if f(a_n)*f_m_n < 0:
            a_n = a_n
            b_n = m_n
        elif f(b_n)*f_m_n < 0:
            a_n = m_n
            b_n = b_n
        elif f_m_n == 0:
            print("Found exact solution.")
            return m_n
        else:
            print("Secant method fails.")
            return None
    return a_n - f(a_n)*(b_n - a_n)/(f(b_n) - f(a_n))
8.10 Problem 8:
Find the real root of the polynomial $p(x) = x^3 - x^2 - 1$ using the secant method.
8.11 Solution 8:
[1.46557123 1.46557123]
Since the polynomial changes sign in the interval [1,2] , we can apply the secant method with this
as the starting interval:
[52]: p = lambda x: x**3 - x**2 - 1
approx = secant(p, 1, 2, 20)
print("real root by secant method = ", approx)
8.11.1 To do by yourself:
The Fermi-Dirac distribution describes the probability of finding a quantum particle with half-
integer spin (1/2, 3/2, …) in energy state E:
$$f_{FD} = \frac{1}{e^{(E-\mu)/kT} + 1}$$
The µ in the Fermi-Dirac distribution is called the Fermi energy, and in this case we want to adjust
µ so that the probability of finding the particle somewhere is exactly one:
$$\int_{E_{min}}^{E_{max}} f_{FD}\,dE = 1$$
Imagine a room-temperature quantum system where, for some reason, the energy E is constrained
to be between 0 and 2 eV. What is µ in this case? At room temperature, $kT \approx \frac{1}{40}$ eV. Feel
free to use any of the above integration or root finding methods.
8.11.2 Note:
One can use SciPy, which stands for "Scientific Python"; it is a package which provides numerous
scientific tools. For example, one can use: from scipy import integrate
[54]: # using scipy.integrate.quad
from scipy import integrate
y = lambda x: x**2
dy = lambda x: 2*x
# integrate y = x**2 from 0 to 3
I1 = integrate.quad(y, 0, 3)
print(I1)
(9.000000000000002, 9.992007221626411e-14)
9 Chapter 8: Special function using Python
SciPy provides a plethora of special functions, including Bessel functions (and routines for finding
their zeros, derivatives, and integrals), error functions, the gamma function, Legendre, Laguerre,
and Hermite polynomials (and other polynomial functions), Mathieu functions, many statistical
functions, and a number of other functions. Most are contained in the scipy.special library, and
each has its own special arguments and syntax, depending on the vagaries of the particular function.
We demonstrate a number of them in the code below that produces a plot of the different functions
called. For more information, you should consult the SciPy web site on the scipy.special library.
9.1 Bessel function:
Bessel functions, first defined by the mathematician Daniel Bernoulli and then generalized by
Friedrich Bessel, are canonical solutions y(x) of Bessel’s differential equation:
$$x^2 \frac{d^2 y(x)}{dx^2} + x\frac{dy(x)}{dx} + (x^2 - \alpha^2)\,y(x) = 0$$
Bessel functions arise, for example, as solutions to the radial Schrödinger equation (in spherical
and cylindrical coordinates). Bessel functions of the first kind, denoted as Jα(x), are solutions of
Bessel's differential equation.
For integer or positive α, Bessel functions of the first kind are finite at the origin (x = 0); while for
negative non-integer α, Bessel functions of the first kind diverge as x approaches zero. It is possible
to define the function by its series expansion around x = 0, which can be found by applying the
Frobenius method to Bessel’s equation:
$$J_\alpha(x) = \sum_{m=0}^{\infty} \frac{(-1)^m}{m!\,\Gamma(m+\alpha+1)} \left(\frac{x}{2}\right)^{2m+\alpha}$$
import numpy as np
import matplotlib.pyplot as plt
from scipy import special
x = np.linspace(0, 20, 256)
plt.figure()
plt.plot(x, special.jn(0, x), linewidth=1, label=r"$J_0$")
plt.plot(x, special.jn(1, x), linewidth=1, label=r"$J_1$")
plt.plot(x, special.jn(2, x), linewidth=1, label=r"$J_2$")
plt.plot(x, special.jn(3, x), linewidth=1, label=r"$J_3$")
plt.legend(); plt.show()
9.2 2d plot of Bessel function:
def f(x, y):
    r = np.sqrt(x**2 + y**2)
    return special.j0(r)

xp = np.linspace(-20, 20, 500)
yp = np.linspace(-20, 20, 500)
X, Y = np.meshgrid(xp, yp)
Z = f(X, Y)
plt.imshow(Z, origin='lower', extent=(-20, 20, -20, 20))
plt.xlabel("$x$", fontsize=18)
plt.ylabel("$y$", fontsize=18)
plt.title("$J_0(\sqrt{x^2 + y^2})$", fontsize=18);
9.3 Gamma function:
In mathematics, the gamma function Γ is one commonly used extension of the factorial function to
complex numbers. The gamma function is defined for all complex numbers except the non-positive
integers. For any positive integer n,
Γ(n) = (n − 1)!
Derived by Daniel Bernoulli, for complex numbers with a positive real part, the gamma function
is defined via a convergent improper integral:
$$\Gamma(z) = \int_0^\infty x^{z-1} e^{-x}\,dx$$
The gamma function then is defined as the analytic continuation of this integral function to a
meromorphic function that is holomorphic in the whole complex plane except zero and the negative
integers, where the function has simple poles.
[70]: fig = plt.figure(1, figsize=(10,10))
x = np.linspace(-3.5, 6., 3601)
g = special.gamma(x)
g = np.ma.masked_outside(g, -100, 400)
fig = plt.figure(1, figsize=(10,12))
ax2 = fig.add_subplot(322)
ax2.plot(x,g)
ax2.set_xlim(-3.5, 6)
ax2.set_ylim(-20, 100)
ax2.text(0.5, 0.95,"Gamma", ha="center", va="top",transform = ax2.transAxes);
plt.xlabel("$x$",fontsize=18)
plt.ylabel("$\Gamma(x)$",fontsize=18);
In mathematics, the error function (also called the Gauss error function), often denoted by erf, is
a complex function of a complex variable defined as:
$$\operatorname{erf} z = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\,dt$$
This integral is a special (non-elementary) sigmoid function that occurs often in probability, statis-
tics, and partial differential equations. In many of these applications, the function argument is a
real number. If the function argument is real, then the function value is also real.
In statistics, for non-negative values of x, the error function has the following interpretation: for
a random variable Y that is normally distributed with mean 0 and variance 1/2, erf x is the
probability that Y falls in the range (-x,x)
[84]: fig = plt.figure(1, figsize=(10,10))
z = np.linspace(-3, 3, 500)
ef = special.erf(z)
fig = plt.figure(1, figsize=(10,12))
ax3 = fig.add_subplot(322)
ax3.plot(z, ef)
ax3.set_ylim(-1, 1)
ax3.text(0.5, 0.95, "Error", ha="center", va="top", transform=ax3.transAxes);
plt.xlabel("$z$", fontsize=18)
plt.ylabel("$erf z$", fontsize=18);
In the physical sciences, the Airy function (or Airy function of the first kind) Ai(x) is a special
function named after the British astronomer George Biddell Airy (1801–1892). The function Ai(x)
and the related function Bi(x), are linearly independent solutions to the differential equation:
$$\frac{d^2 y(x)}{dx^2} - x\,y(x) = 0$$
known as the Airy equation or the Stokes equation. This is the simplest second-order linear dif-
ferential equation with a turning point (a point where the character of the solutions changes from
oscillatory to exponential).
The Airy function is the solution to time-independent Schrödinger equation for a particle confined
within a triangular potential well and for a particle in a one-dimensional constant force field. For
the same reason, it also serves to provide uniform semiclassical approximations near a turning point
in the WKB approximation, when the potential may be locally approximated by a linear function
of position. The triangular potential well solution is directly relevant for the understanding of
electrons trapped in semiconductor heterojunctions.
[93]: #fig = plt.figure(1, figsize=(10,10))
x = np.linspace(-15, 4, 256)
ai, aip, bi, bip = special.airy(x)
#fig = plt.figure(1, figsize=(10,12))
plt.figure()
plt.plot(x, ai, linewidth=1, label=r"$Ai$")
plt.plot(x, bi, linewidth=1, label=r"$Bi$")
plt.ylim(-0.5, 1.0)   # keep the diverging Bi in view (limits assumed)
plt.legend(); plt.show()
9.6 Legendre polynomials:
In physical science and mathematics, Legendre polynomials (named after Adrien-Marie Legendre,
who discovered them in 1782) are a system of complete and orthogonal polynomials, with a vast
number of mathematical properties, and numerous applications. They can be defined in many
ways, and the various definitions highlight different aspects as well as suggest generalizations and
connections to different mathematical structures and physical and numerical applications.
Legendre's differential equation:
$$\frac{d}{dx}\Big[(1-x^2)\,\frac{dP_n(x)}{dx}\Big] + n(n+1)P_n(x) = 0$$
x = np.linspace(-1, 1, 256)
plt.figure()
plt.plot(x, np.polyval(special.legendre(0), x), linewidth=1, label=r"$P_0(x)$")
plt.plot(x, np.polyval(special.legendre(1), x), linewidth=1, label=r"$P_1(x)$")
plt.plot(x, np.polyval(special.legendre(2), x), linewidth=1, label=r"$P_2(x)$")
plt.plot(x, np.polyval(special.legendre(3), x), linewidth=1, label=r"$P_3(x)$")
plt.legend(); plt.show()
9.7 Laguerre polynomials:
In mathematics, the Laguerre polynomials, named after Edmond Laguerre (1834–1886), are
solutions of Laguerre's equation:
$$x\,\frac{d^2y(x)}{dx^2} + (1-x)\,\frac{dy(x)}{dx} + ny(x) = 0$$
Laguerre polynomials can be defined by the Rodrigues formula,
$$L_n(x) = \frac{e^x}{n!}\,\frac{d^n}{dx^n}\left(e^{-x}x^n\right)$$
x = np.linspace(-5, 8, 256)
plt.figure()
plt.plot(x, np.polyval(special.laguerre(0), x), linewidth=1, label=r"$L_0(x)$")
plt.plot(x, np.polyval(special.laguerre(1), x), linewidth=1, label=r"$L_1(x)$")
plt.plot(x, np.polyval(special.laguerre(2), x), linewidth=1, label=r"$L_2(x)$")
plt.plot(x, np.polyval(special.laguerre(3), x), linewidth=1, label=r"$L_3(x)$")
# Set limits for axes
plt.xlim([-5, 8])
plt.ylim([-5, 10])
plt.legend(); plt.show()
Hermite polynomials were defined by Pierre-Simon Laplace in 1810 though in scarcely recognizable
form, and studied in detail by Pafnuty Chebyshev in 1859. Chebyshev’s work was overlooked, and
they were named later after Charles Hermite, who wrote on the polynomials in 1864, describing
them as new. They were consequently not new, although Hermite was the first to define the
multidimensional polynomials in his later 1865 publications.
Hermite polynomials are solutions of Hermite differential equation:
$$\frac{d^2y(x)}{dx^2} - 2x\,\frac{dy(x)}{dx} + 2\lambda y(x) = 0$$
where the Rodrigues formula for the Hermite polynomials is:
$$H_n(x) = (-1)^n e^{x^2}\,\frac{d^n e^{-x^2}}{dx^n}$$
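The plotting code below calls a helper hermite(x, n); its defining cell did not survive, so here is
a minimal sketch using scipy's physicists' Hermite polynomials (an assumption; the original may
have used a recursive definition):
import numpy
import matplotlib.pyplot as plt
from scipy import special

def hermite(x, n):
    # Evaluate the physicists' Hermite polynomial H_n at the points x
    return numpy.polyval(special.hermite(n), x)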
# Discretized space
dx = 0.05
x_lim = 12
x = numpy.arange(-x_lim, x_lim, dx)

plt.figure()
plt.plot(x, hermite(x, 0), linewidth=2, label=r"$H_0$")
plt.plot(x, hermite(x, 1), linewidth=2, label=r"$H_1$")
plt.plot(x, hermite(x, 2), linewidth=2, label=r"$H_2$")
plt.plot(x, hermite(x, 3), linewidth=2, label=r"$H_3$")
plt.plot(x, hermite(x, 4), linewidth=2, label=r"$H_4$")
plt.legend(); plt.show()
We will use all of these special functions when we discuss quantum mechanics with Python. They
are very useful for the harmonic oscillator problem and the hydrogen atom problem.
10 Chapter 9: Solving one dimensional differential equations
Let's start this topic by solving a problem using the well known Brent method. Then we will solve
ODEs using odeint (the ODE integrator). A typical problem is to solve a second or higher order
ODE for a given set of initial conditions. Here we illustrate using odeint to solve the equation for
a driven damped pendulum.
10.1 Problem:
Find the solutions of $\tan(x) - \sqrt{(8/x)^2 - 1} = 0$ using the Brent method.
10.2 Solution:
import numpy as np
from scipy import optimize
import matplotlib.pyplot as plt

def tdl(x):
    y = 8./x
    return np.tan(x) - np.sqrt(y*y - 1.0)

# "true" roots, found with brentq on intervals that bracket a sign change
rx1 = optimize.brentq(tdl, 0.5, 0.49*np.pi)
rx2 = optimize.brentq(tdl, 0.51*np.pi, 1.49*np.pi)
rx3 = optimize.brentq(tdl, 1.51*np.pi, 2.49*np.pi)
rx = np.array([rx1, rx2, rx3])
ry = np.zeros(3)
# "false" roots at the tangent asymptotes (assumed; the defining lines were lost)
rxf = 0.5*np.pi*np.array([1., 3., 5.])
print("\nTrue roots:")
print("\n".join("f({0:0.5f}) = {1:0.2e}".format(x, tdl(x)) for x in rx))
print("\nfalse roots:")
print("\n".join("f({0:0.5f}) = {1:0.2e}".format(x, tdl(x)) for x in rxf))

x = np.linspace(0.7, 8, 128)
y = tdl(x)
ymask = np.ma.masked_where(np.abs(y) > 20., y)
plt.figure(figsize=(4, 3))
plt.plot(x, ymask)
plt.axhline(color='black')
plt.axvline(x=np.pi/2., color="gray", linestyle="--", zorder=-1)
plt.axvline(x=3.*np.pi/2., color="gray", linestyle="--", zorder=-1)
plt.axvline(x=5.*np.pi/2., color="gray", linestyle="--", zorder=-1)
plt.xlabel(r"$x$")
plt.ylabel(r"$\tan x - \sqrt{(8/x)^2-1}$")
plt.ylim(-8, 8)
plt.plot(rx, ry, 'og', ms=5, label="true roots")
plt.plot(rxf, ry, 'xr', ms=5, label="false roots")
plt.legend(numpoints=1, fontsize="small", loc="upper right",
           bbox_to_anchor=(0.92, 0.97))
plt.tight_layout()
plt.show()
True roots:
f(1.39547) = -6.39e-14
f(4.16483) = -7.95e-14
f(6.83067) = -1.22e-15
false roots:
f(1.57080) = -1.61e+12
f(4.71239) = -1.56e+12
f(7.85398) = 1.17e+12
An example of using odeint is the following differential equation, with parameter k = 0.3 and
initial condition y0 = 5:
$$\frac{dy(t)}{dt} + ky(t) = 0$$
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

# function that returns dy/dt (reconstructed from the equation above)
def func(y, t):
    k = 0.3
    return -k * y

# initial condition
y0 = 5
# time points
t = np.linspace(0, 20)
# solve ODE
y = odeint(func, y0, t)
# plot results
plt.plot(t, y)
plt.xlabel('time')
plt.ylabel('y(t)')
plt.show()
# function that returns dy/dt, with k passed as a parameter
def model(y, t, k):
    return -k * y

# initial condition
y0 = 5
# time points
t = np.linspace(0, 20)
# solve ODEs
k = 0.1
y1 = odeint(model, y0, t, args=(k,))
k = 0.2
y2 = odeint(model, y0, t, args=(k,))
k = 0.5
y3 = odeint(model, y0, t, args=(k,))
# plot results
plt.plot(t, y1, 'r-', linewidth=2, label='k=0.1')
plt.plot(t, y2, 'b--', linewidth=2, label='k=0.2')
plt.plot(t, y3, 'g:', linewidth=2, label='k=0.5')
plt.xlabel('time')
plt.ylabel('y(t)')
plt.legend()
plt.show()
10.4 Problem:
Find a numerical solution to the following differential equations with the associated initial condi-
tions. Expand the requested time horizon until the solution reaches a steady state. Show a plot of
the states (x(t) and/or y(t)). Report the final value of each state as t→∞.
1.
$$\frac{dy(t)}{dt} + ky(t) - 1 = 0, \qquad y(0) = 0$$
# function that returns dy/dt
def model(y, t):
    dydt = -y + 1.0
    return dydt

# initial condition
y0 = 0
# time points
t = np.linspace(0, 5)
# solve ODE
y = odeint(model, y0, t)
# plot results
plt.plot(t, y)
plt.xlabel('time')
plt.ylabel('y(t)')
plt.show()
2.
$$5\,\frac{dy(t)}{dt} + y(t) - u(t) = 0, \qquad y(0) = 1$$
u steps from 0 to 2 at t = 10
[99]: import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
plt.figure(figsize=(4, 3))

# function that returns dy/dt (reconstructed from the problem statement)
def model(y, t):
    u = 2.0 if t >= 10.0 else 0.0   # step input
    return (-y + u) / 5.0

# initial condition
y0 = 1
# time points
t = np.linspace(0, 40, 1000)
# solve ODE
y = odeint(model, y0, t)
# plot results
plt.plot(t, y, 'r-', label='Output (y(t))')
plt.plot([0, 10, 10, 40], [0, 0, 2, 2], 'b-', label='Input (u(t))')
plt.ylabel('values')
plt.xlabel('time')
plt.legend(loc='best')
plt.show()
3. Solve for x(t) and y(t) and show that the solutions are equivalent.
$$\frac{dx(t)}{dt} = 3e^{-t}, \qquad \frac{dy(t)}{dt} = 3 - y(t)$$
$$x(0) = 0, \qquad y(0) = 0$$
# function that returns dz/dt for z = [x, y] (reconstructed from the equations)
def model(z, t):
    dxdt = 3.0 * np.exp(-t)
    dydt = -z[1] + 3
    return [dxdt, dydt]

# initial condition
z0 = [0, 0]
# time points
t = np.linspace(0, 5)
# solve ODE
z = odeint(model, z0, t)
# plot results
plt.plot(t, z[:,0], 'b-', label=r'$\frac{dx}{dt}=3 \; \exp(-t)$')
plt.plot(t, z[:,1], 'r--', label=r'$\frac{dy}{dt}=-y+3$')
plt.ylabel('response')
plt.xlabel('time')
plt.legend(loc='best')
plt.show()
4.
$$2\,\frac{dx(t)}{dt} = -x(t) + u(t), \qquad 5\,\frac{dy(t)}{dt} = -y(t) + x(t)$$
$$u = 2S(t-5), \qquad x(0) = 0, \qquad y(0) = 0$$
where S(t−5) is a step function that changes from zero to one at t=5. When it is multiplied by
two, it changes from zero to two at that same time, t=5.
[101]: import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
plt.figure(figsize=(4, 3))

# function that returns dz/dt (reconstructed from the equations)
def model(z, t, u):
    dxdt = (-z[0] + u) / 2.0
    dydt = (-z[1] + z[0]) / 5.0
    return [dxdt, dydt]

# initial condition
z0 = [0, 0]
# number of time points
n = 401
# time points
t = np.linspace(0, 40, n)
# step input
u = np.zeros(n)
# change to 2.0 at time = 5.0
u[51:] = 2.0
# store solution
x = np.empty_like(t)
y = np.empty_like(t)
# record initial conditions
x[0] = z0[0]
y[0] = z0[1]
# solve ODE
for i in range(1, n):
    # span for next time step
    tspan = [t[i-1], t[i]]
    # solve for next step
    z = odeint(model, z0, tspan, args=(u[i],))
    # store solution for plotting
    x[i] = z[1][0]
    y[i] = z[1][1]
    # next initial condition
    z0 = z[1]
# plot results
plt.plot(t, u, 'g:', label='u(t)')
plt.plot(t, x, 'b-', label='x(t)')
plt.plot(t, y, 'r--', label='y(t)')
plt.ylabel('values')
plt.xlabel('time')
plt.legend(loc='best')
plt.show()
10.5 Solving systems of nonlinear equations:
Solving systems of nonlinear equations is not for the faint of heart. It is a difficult problem that
lacks any general purpose solutions. Nevertheless, SciPy provides quite an assortment of numerical
solvers for nonlinear systems of equations. However, because of the complexity and subtleties of
this class of problems, we do not discuss their use here.
A typical problem is to solve a second or higher order ODE for a given set of initial conditions.
Here we illustrate using odeint to solve the equation for a driven damped pendulum. The equation
of motion for the angle θ that the pendulum makes with the vertical is given by:
$$\frac{d^2\theta}{dt^2} = -\frac{1}{Q}\,\frac{d\theta}{dt} + \sin\theta + d\cos\Omega t$$
where t is time, Q is the quality factor, d is the forcing amplitude, and Ω is the driving frequency
of the forcing. Reduced variables have been used such that the natural (angular) frequency of
oscillation is 1. The ODE is nonlinear owing to the sin θ term. Of course, it's precisely because
there are no general methods for solving nonlinear ODEs that one employs numerical techniques,
so it seems appropriate that we illustrate the method with a nonlinear ODE.
We can rewrite our second order ODE as two coupled first order ODEs:
$$\frac{d\theta(t)}{dt} = \omega$$
$$\frac{d\omega(t)}{dt} = -\frac{\omega}{Q} + \sin\theta(t) + d\cos\Omega t$$
[102]: import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
def f(y, t, params):
    theta, omega = y   # unpack current values of y
    Q, d, Omega = params
    derivs = [omega, -omega/Q + np.sin(theta) + d*np.cos(Omega*t)]
    return derivs
Q = 2.0 # quality factor (inverse damping)
d = 1.5
Omega = 0.65
theta0 = 0.0
omega0 = 0.0
params = [Q, d, Omega]
y0 = [theta0, omega0]
tStop = 200.
tInc = 0.05
t = np.arange(0., tStop, tInc)
psoln = odeint(f, y0, t, args=(params,))
fig = plt.figure(1, figsize=(8,8))
ax1 = fig.add_subplot(311)
ax1.plot(t, psoln[:,0])
ax1.set_xlabel("time")
ax1.set_ylabel("theta")
ax2 = fig.add_subplot(312)
ax2.plot(t, psoln[:,1])
ax2.set_xlabel("time")
ax2.set_ylabel("omega")
ax3 = fig.add_subplot(313)
twopi = 2.0*np.pi
ax3.plot(psoln[:,0]%twopi, psoln[:,1], ".", ms=1)
ax3.set_xlabel("theta")
ax3.set_ylabel("omega")
ax3.set_xlim(0., twopi)
plt.tight_layout()
plt.show()
The plots above reveal that, for the particular set of input parameters chosen (Q = 2.0, d = 1.5,
and Omega = 0.65), the pendulum trajectories are chaotic. Weaker forcing (smaller d) leads to
what is perhaps the more familiar behavior of sinusoidal oscillations with a fixed frequency which,
at long times, is equal to the driving frequency.
11 Chapter 10: Fourier Series using Python
We know that there are many ways by which any complicated function may be expressed as a
power series. This is not the only way in which a function may be expressed as a series, but there
is a method of expressing a periodic function as an infinite sum of sine and cosine functions. This
representation is known as a Fourier series. The computation and study of Fourier series is known
as harmonic analysis and is useful as a way to break up an arbitrary periodic function into a set of
simple harmonic terms that can be plugged in, solved individually, and then recombined to obtain
the solution to the original problem, or an approximation to it, to whatever accuracy is desired.
Unlike Taylor series, a Fourier series can describe functions that are not everywhere continuous
and/or differentiable. There are other advantages of using trigonometric terms: they are easy to
differentiate and integrate, and each term contains only one characteristic frequency. Analysis by
Fourier series becomes important because this method is used to represent the response of a system
to a periodic input, and the response depends on the frequency content of the input.
So the Fourier series of the function f(x′) over the periodic interval (0, L) is written as:
$$f(x') = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n \cos\!\left(\frac{2\pi n x'}{L}\right) + b_n \sin\!\left(\frac{2\pi n x'}{L}\right)\right]$$
So the Fourier series of the function f(x′) over the periodic interval (−L, L) is written as:
$$f(x') = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n \cos\!\left(\frac{\pi n x'}{L}\right) + b_n \sin\!\left(\frac{\pi n x'}{L}\right)\right]$$
So the Fourier series of the function f(x′) over the periodic interval (−π, π) is written as:
$$f(x') = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n \cos(nx') + b_n \sin(nx')\right]$$
with coefficients
$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x')\cos(nx')\,dx', \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x')\sin(nx')\,dx'$$
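a) Fourier series analysis for a sawtooth wave function
The plotting commands below assume a sawtooth signal and its partial Fourier sum have already
been built; a minimal sketch of those missing steps, following the same pattern as the triangular-
wave example later in this chapter (the sawtooth definition itself is an assumption):
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import simps

L = 1.0                             # half-period of the signal
samples, terms = 1001, 100
x = np.linspace(-L, L, samples, endpoint=False)
y = x                               # sawtooth wave over one period
# Fourier coefficients over (-L, L)
a0 = 1./L*simps(y, x)
an = lambda n: 1./L*simps(y*np.cos(1.*np.pi*n*x/L), x)
bn = lambda n: 1./L*simps(y*np.sin(1.*np.pi*n*x/L), x)
# Partial series sum
s = a0/2. + sum([an(k)*np.cos(1.*np.pi*k*x/L) + bn(k)*np.sin(1.*np.pi*k*x/L)
                 for k in range(1, terms+1)])
The same pattern, changing only the definition of y, produces the square and triangular wave
examples that follow.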
plt.plot(x, s, label="Fourier series")
plt.plot(x, y, label="Original sawtooth wave")
plt.xlabel("$x$")
plt.ylabel("$y=f(x)$")
plt.legend(loc='best', prop={'size':5})
plt.title("Sawtooth wave signal analysis by Fourier series")
#plt.savefig("fs_sawtooth.png")
plt.show()
11.1 b) Fourier series analysis for a square wave function
# Generating wave and series sum as before, now with a square wave, e.g. (assumed):
# y = np.where(np.sin(np.pi*x/L) >= 0, 1.0, -1.0)
# Plotting
plt.plot(x, s, label="Fourier series")
plt.plot(x, y, label="Original square wave")
plt.xlabel("$x$")
plt.ylabel("$y=f(x)$")
plt.legend(loc='best', prop={'size':10})
plt.title("Square wave signal analysis by Fourier series")
#plt.savefig("fs_square.png")
plt.show()
# Generating wave (these definitions are assumed; the original lines were lost)
L = 1.0
samples, terms = 1001, 100
x = np.linspace(0, L, samples, endpoint=False)
y = np.where(x <= L/2, x, L - x)   # triangular wave of period L
# Fourier Coefficients
a0 = 2./L*simps(y, x)
an = lambda n: 2.0/L*simps(y*np.cos(2.*np.pi*n*x/L), x)
bn = lambda n: 2.0/L*simps(y*np.sin(2.*np.pi*n*x/L), x)
# Series sum
s = a0/2. + sum([an(k)*np.cos(2.*np.pi*k*x/L) + bn(k)*np.sin(2.*np.pi*k*x/L)
                 for k in range(1, terms+1)])
# Plotting
plt.plot(x, s, label="Fourier series")
plt.plot(x, y, label="Original Triangular wave")
plt.xlabel("$x$")
plt.ylabel("$y=f(x)$")
plt.legend(loc='best', prop={'size':10})
plt.title("Triangular wave signal analysis by Fourier series")
plt.savefig("fs_triangular.png")
plt.show()
# Periodic half-interval and frequency (assumed; the defining lines were lost)
L = 1.
freq = 1
samples = 1001
terms = 300
# Generating wave
x = np.linspace(-L, L, samples, endpoint=False)
F = lambda x: np.array([u**2 if -L <= u < 0 else 1 if 0 < u < 0.5 else 0 for u in x])
#F = lambda x: abs(np.sin(2*np.pi*x))
f = lambda x: F(freq*x % (2*L) - L)
# Fourier Coefficients
a0 = 1./L*simps(f(x), x)
an = lambda n: 1.0/L*simps(f(x)*np.cos(1.*np.pi*n*x/L), x)
bn = lambda n: 1.0/L*simps(f(x)*np.sin(1.*np.pi*n*x/L), x)
# Series sum
xp = x
s = a0/2. + sum([an(k)*np.cos(1.*np.pi*k*xp/L) + bn(k)*np.sin(1.*np.pi*k*xp/L)
                 for k in range(1, terms+1)])
# Plotting
plt.plot(xp, s, label="Fourier series")
plt.plot(xp, f(xp), label="Original wave")
plt.legend(loc='best', prop={'size':10})
#plt.savefig("arb_ud.png")
plt.show()
12 Chapter 11: Discrete (Fast) Fourier Transform
The SciPy library has a number of routines for performing discrete Fourier transforms. Before
delving into them, we provide a brief review of Fourier transforms and discrete Fourier transforms.
12.1 Continuous Fourier Transformation:
$$G(f) = \int_{-\infty}^{\infty} g(t)\,e^{-2\pi i f t}\,dt$$
where f is the Fourier transform variable; if t is time, then f is frequency. The inverse transform
is given by:
$$g(t) = \int_{-\infty}^{\infty} G(f)\,e^{2\pi i f t}\,df$$
The conventional Fourier transform is defined for continuous functions, or at least for functions
that are dense and thus have an infinite number of data points. When we are doing numerical
analysis, however, we work with discrete data sets, that is, data sets defined for a finite number of
points. The discrete Fourier transform (DFT) is defined for a function gn consisting of a set of N
discrete data points. Those N data points must be defined at equally spaced times tn = n∆t, where
∆t is the time between successive data points and n runs from 0 to N−1. The discrete Fourier
transform (DFT) of gn is defined as:
$$G_l = \sum_{n=0}^{N-1} g_n\,e^{-i(2\pi/N)ln}$$
The inverse discrete Fourier transform is defined as:
$$g_n = \frac{1}{N}\sum_{l=0}^{N-1} G_l\,e^{i(2\pi/N)ln}$$
The DFT is usually implemented on computers using the well-known Fast Fourier Transform (FFT)
algorithm, generally credited to Cooley and Tukey, who developed it at AT&T Bell Laboratories
during the 1960s. But their algorithm is essentially one of many independent rediscoveries of the
basic algorithm, dating back to Gauss, who described it as early as 1805.
[106]: import numpy as np
from scipy import fftpack
import matplotlib.pyplot as plt
width = 2.0
freq = 0.5
t = np.linspace(-10, 10, 101) # linearly space time array
g = np.exp(-np.abs(t)/width)*np.sin(2.0*np.pi*freq*t)
dt = t[1]-t[0]
G = fftpack.fft(g)
f = fftpack.fftfreq(g.size, d=dt)
f = fftpack.fftshift(f)
G = fftpack.fftshift(G)
fig = plt.figure(1, figsize=(8,6), frameon=False)
ax1 = fig.add_subplot(211)
ax1.plot(t, g)
ax1.set_xlabel("t")
ax1.set_ylabel("g(t)")
ax2 = fig.add_subplot(212)
ax2.plot(f, np.real(G), color="dodgerblue", label="real part")
ax2.plot(f, np.imag(G), color="coral", label="imaginary part")
ax2.legend()
ax2.set_xlabel("f")
ax2.set_ylabel("G(f)")
plt.show()
12.3 Problem:
12.4 Solution:
[107]: x = np.linspace(0,5,100)
y = np.sin(2*np.pi*x)
fig = plt.figure(1, figsize=(8,6), frameon=False)
ax1 = fig.add_subplot(211)
ax1.plot(x, y)
ax1.set_xlabel("x")
ax1.set_ylabel("y")
## fourier transform
f = np.fft.fft(y)
## sample frequencies
freq = np.fft.fftfreq(len(y), d=x[1]-x[0])
ax2 = fig.add_subplot(212)
ax2.plot(freq, abs(f)**2)
ax2.set_xlabel("f")
ax2.set_ylabel("G(f)")
plt.show()
[108]: # app.py
import matplotlib.pyplot as plt
import numpy as np
import scipy.fftpack
# Sample signal (assumed; the cell's signal-generation lines are not shown in this extract)
N = 600; T = 1.0/800.0   # number of sample points and sample spacing
x = np.linspace(0.0, N*T, N, endpoint=False)
y = np.sin(50.0*2.0*np.pi*x) + 0.5*np.sin(80.0*2.0*np.pi*x)
yf = scipy.fftpack.fft(y)
xf = np.linspace(0.0, 1.0/(2.0*T), N//2)
fig, ax = plt.subplots()
ax.plot(xf, 2.0/N * np.abs(yf[:N//2]))
plt.show()
[109]: # app.py
t = np.arange(256)
sp = np.fft.fft(np.sin(t))
freq = np.fft.fftfreq(t.shape[-1])
plt.plot(freq, sp.real, freq, sp.imag)
plt.show()
[110]: Image(filename="pasted-image-0-3.png",width=800)
[110]:
13.1 NOT GATE:
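The cells defining the gates are not reproduced in this extract; the print calls below assume functions like the following (a minimal sketch — the simple int/bool implementations are an assumption, but the names NOT, AND, OR, NAND and XOR match the calls):

def NOT(a):
    return int(not a)
def AND(a, b):
    return int(a and b)
def OR(a, b):
    return int(a or b)
def NAND(a, b):
    return NOT(AND(a, b))
def XOR(a, b):
    return int(a != b)
print("Output of NOT GATE:")
print("A = 0 | Y =", NOT(0))
print("A = 1 | Y =", NOT(1))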
print("A = 0, B = 1 | Y =", AND(0, 1))
print("A = 1, B = 0 | Y =", AND(1, 0))
print("A = 1, B = 1 | Y =", AND(1, 1))
13.3 OR GATE:
print("Output of OR GATE:")
Output of OR GATE:
A = 0, B = 0 | Y = 0
A = 0, B = 1 | Y = 1
A = 1, B = 0 | Y = 1
A = 1, B = 1 | Y = 1
print("A = 0, B = 1 | Y =", NAND(0, 1))
print("A = 1, B = 0 | Y =", NAND(1, 0))
print("A = 1, B = 1 | Y =", NAND(1, 1))
print("A = 1, B = 0 | Y =", XOR(1, 0))
print("A = 1, B = 1 | Y =", XOR(1, 1))
13.7 Problem:
13.8 Solution:
13.9 Homework:
You can try to simulate all of the above logic gates from scratch.
13.9.1 Example 1: NOR GATE
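The original example is on a page rendered as an image; a NOR gate built from the basic OR and NOT functions sketched above might look like this:

def NOR(a, b):
    return NOT(OR(a, b))   # NOR is simply OR followed by NOT
print("Output of NOR GATE:")
print("A = 0, B = 0 | Y =", NOR(0, 0))
print("A = 0, B = 1 | Y =", NOR(0, 1))
print("A = 1, B = 0 | Y =", NOR(1, 0))
print("A = 1, B = 1 | Y =", NOR(1, 1))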
You can now try to construct the remaining gates from these basic ones.
I will now try to play with Python in the electrodynamics domain. This is a very interesting domain to play in. We will try to explain a few interesting problems in electrodynamics. There are many topics in electrodynamics we could discuss using Python, but we will try to keep our discussion as simple as possible.
# Grid of x, y points
nx, ny = 64, 64
x = np.linspace(-2, 2, nx)
y = np.linspace(-2, 2, ny)
X, Y = np.meshgrid(x, y)
# Point-charge field and a pair of charges (assumed; the defining cell is not shown here)
def E(q, r0, x, y):
    '''Electric field vector E=(Ex, Ey) due to charge q at r0.'''
    den = np.hypot(x - r0[0], y - r0[1])**3
    return q*(x - r0[0])/den, q*(y - r0[1])/den
charges = [(1, (-1, 0)), (-1, (1, 0))]
Ex, Ey = np.zeros((ny, nx)), np.zeros((ny, nx))
for charge in charges:
    ex, ey = E(*charge, x=X, y=Y)
    Ex += ex
    Ey += ey
fig = plt.figure()
ax = fig.add_subplot(111)
ax.streamplot(x, y, Ex, Ey, density=2)   # field lines (assumed; the plotting call is not shown)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.set_xlim(-2,2)
ax.set_ylim(-2,2)
ax.set_aspect('equal')
plt.axis('off');
plt.show()
14.2 Electric field and potential due to a charge particle:
Let’s first define a charged particle class that allows us to compute the field and the potential.
class ChargedParticle:
    def __init__(self, pos, charge):
        self.pos = np.asarray(pos)
        self.charge = charge
    def compute_field(self, x, y):
        # Field components on the grid (sketch; the original method body is not shown)
        X, Y = np.meshgrid(x, y)
        dx, dy = X - self.pos[0], Y - self.pos[1]
        r3 = np.hypot(dx, dy)**3
        return self.charge * np.stack((dx/r3, dy/r3), axis=-1)
x = np.linspace(-5, 5, 100)
y = np.linspace(-4, 4, 80)
q1 = ChargedParticle((0, 0), -1)   # example charges (assumed; the defining lines are not shown)
q2 = ChargedParticle((0, 0), 1)
field1 = q1.compute_field(x, y)
field2 = q2.compute_field(x, y)
fig, (ax1, ax2) = plt.subplots(nrows=2, figsize=(5, 10))
ax1.streamplot(x, y, u=field1[:, :, 0], v=field1[:, :, 1])
ax1.set_title("particle with negative charge");
ax1.axis('equal')
plt.axis('off')
ax2.streamplot(x, y, u=field2[:, :, 0], v=field2[:, :, 1])
ax2.set_title("particle with positive charge");
ax2.axis('equal');
plt.axis('off');
14.2.2 Electric potential for a charged particle:
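The body of the potential method is not reproduced in this extract; compute_potential, used a little further below, plausibly mirrors compute_field. A minimal sketch, with the Coulomb prefactor absorbed into the charge:

def compute_potential(self, x, y):
    # scalar potential q/r evaluated on the grid (sketch; 1/(4*pi*eps0) absorbed)
    X, Y = np.meshgrid(x, y)
    r = np.hypot(X - self.pos[0], Y - self.pos[1])
    return self.charge / r

ChargedParticle.compute_potential = compute_potential   # attach to the class defined above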
We can now compute the whole field by summing over the individual electric fields.
[123]: def compute_resulting_field(particles, x, y):
    fields = [p.compute_field(x, y) for p in particles]
    total_field = np.zeros_like(fields[0])
    for field in fields:
        total_field += field
    return total_field
plt.xlim(x.min(), x.max())
plt.ylim(y.min(), y.max());
plt.axis('off');
We can also compute the whole potential by summing over the individual potentials.
[125]: def compute_resulting_potential(particles, x, y):
    potentials = [p.compute_potential(x, y) for p in particles]
    total_potential = np.zeros_like(potentials[0])
    for pot in potentials:
        total_potential += pot
    return total_potential
14.2.3 Four charges on a square:
# Four alternating charges on the corners of a square (assumed; the defining lines are not shown)
particles = [ChargedParticle((-2, -2), 1), ChargedParticle((2, -2), -1),
             ChargedParticle((2, 2), 1), ChargedParticle((-2, 2), -1)]
total_field = compute_resulting_field(particles, x, y)
total_potential = compute_resulting_potential(particles, x, y)
lw = np.linalg.norm(total_field, axis=2)
lw /= lw.max()
fig, ax = plt.subplots()
ax.streamplot(x, y, total_field[:, :, 0], total_field[:, :, 1], linewidth=10*lw, density=2)
ax.set_xlim(x.min(), x.max())
ax.set_ylim(y.min(), y.max())
fig, ax = plt.subplots()
mappable = ax.pcolormesh(x, y, total_potential, shading='auto')
plt.colorbar(mappable);
plt.axis('off');
<ipython-input-126-f2aea8e838b7>:16: MatplotlibDeprecationWarning:
shading='flat' when X and Y have the same dimensions as C is deprecated since
94
3.3. Either specify the corners of the quadrilaterals with X and Y, or pass
shading='auto', 'nearest' or 'gouraud', or set rcParams['pcolor.shading']. This
will become an error two minor releases later.
mappable = ax.pcolormesh(x, y, total_potential)
[127]: q1 = ChargedParticle((1, 0), -4)
q2 = ChargedParticle((-1, 0), 1)
total_field = compute_resulting_field([q1, q2], x, y)
lw = np.linalg.norm(total_field, axis=2)
lw /= lw.max()
plt.streamplot(x, y, total_field[:, :, 0], total_field[:, :, 1], linewidth=20*lw, density=2)
plt.xlim(x.min(), x.max())
plt.ylim(y.min(), y.max());
plt.axis('off');
plt.xlim(x.min(), x.max())
plt.ylim(y.min(), y.max())
plt.axis('equal')
plt.axis('off');
14.3 Electrostatic potential of an electric dipole:
The following code produces a plot of the electrostatic potential of an electric dipole $\vec{p} = (qd, 0, 0)$ in the (x, y) plane for $q = 1.602\times10^{-19}$ C, $d = 1$ pm, using the point dipole approximation.
# Grid and point-dipole potential (assumed; the defining lines are not shown in this extract)
q, d = 1.602e-19, 1e-12                  # charge (C) and separation (m)
k, p = 1/(4*np.pi*8.854e-12), q*d        # Coulomb constant and dipole moment
X, Y = np.meshgrid(np.linspace(-5e-12, 5e-12, 300), np.linspace(-5e-12, 5e-12, 300))
Phi = k * p * X / np.hypot(X, Y)**3      # point-dipole approximation
fig = plt.figure()
ax = fig.add_subplot(111)
# Draw contours at values of Phi given by levels
levels = np.array([10**pw for pw in np.linspace(0,5,20)])
levels = sorted(list(-levels) + list(levels))
# Monochrome plot of potential
ax.contour(X, Y, Phi, levels=levels, colors='blue', linewidths=1)
plt.show()
[131]: from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(-6,6,6)
y = np.linspace(-6,6,6)
z = np.linspace(-6,6,6)
x,y,z = np.meshgrid(x,y,z)
# 3d figure
fig = plt.figure()
ax = fig.gca(projection='3d')
def B(x,y):
i = 0.5
mu = 1.26 * 10**(-6)
mag = (mu/(2*np.pi))*(i/np.sqrt((x)**2+(y)**2))
by = mag * (np.cos(np.arctan2(y,x)))
bx = mag * (-np.sin(np.arctan2(y,x)))
bz = z*0
return bx,by,bz
def cylinder(r):
    phi = np.linspace(-2*np.pi,2*np.pi,100)
    x = r*np.cos(phi)
    y = r*np.sin(phi)
    return x,y
bx, by, bz = B(x, y)
ax.quiver(x, y, z, bx, by, bz, length=1.5, normalize=True)  # field arrows (assumed; call not shown)
cx, cy = cylinder(0.2)   # wire cross-section of radius 0.2 (assumed; this line was missing)
for i in np.linspace(-5,5,1000):
    ax.plot(cx,cy,i,label='Cylinder',color='r')
plt.xlabel('x')
plt.ylabel('y')
plt.axis('off');
plt.show()
14.5 Problem:
14.6 Solution:
x = np.linspace(-10,10,100)
y = np.linspace(-10,10,100)
z = np.linspace(-10,10,100)
def B(x,y,z):
i = 0.5 #Amps in the wire
mu = 1.26 * 10**(-6)
return (mu/(2*np.pi))*(i/np.sqrt((x)**2+(y)**2+(z)**2))
def r(x,y,z):
return np.sqrt(x*x+y*y+z*z)
plt.plot(r(x,y,z), B(x,y,z))
plt.xlabel(r"$r$")
plt.ylabel(r"$B(r)$")
plt.show()
where

$$\mu_0 = \frac{\pi}{2.5\times 10^{6}}\ \frac{\mathrm{N}}{\mathrm{A}^2}, \qquad \frac{\mu_0}{4\pi} = \frac{\pi}{4\pi\cdot 2.5\times 10^{6}} = 10^{-7}\ \frac{\mathrm{H}}{\mathrm{m}}$$
def B(r, r0=np.zeros(2), I=1.0):
    # 2D field of an infinite straight wire at r0 (sketch; the original function body is not shown)
    mu0 = 4e-7*np.pi
    R = np.subtract(np.transpose(r), r0).T
    norm2 = np.linalg.norm(R, axis=0)**2
    B = mu0*I/(2*np.pi) * np.array([-R[1], R[0]]) / norm2   # B ~ z-hat x R / |R|^2
    return B
X = np.linspace(-1, 1)
Y = np.linspace(-1, 1)
r = np.array(np.meshgrid(X, Y))   # grid points as a (2, N, N) array (assumed layout)
Bx, By = B(r)
plt.figure(figsize=(8, 8))
plt.streamplot(X, Y, Bx, By)
plt.margins(0, 0)
14.7 Charged Particle Trajectories in Electric and Magnetic Fields:
First we will discuss motion of charge particle in a constant magnetic field. The equation of motion
for a charged particle in a magnetic field is as follows:
$$\frac{d\vec{v}}{dt} = \frac{q}{m}\left(\vec{v}\times\vec{B}\right)$$
We choose to put the particle in a field that is written:
$$\vec{B} = B\,\hat{x}$$
We thus expect the particle to rotate in the (y,z) plane while moving along the x axis. Let’s check
the integration results. We expect a circle in the (y,z) plane.
[134]: import numpy as np
from scipy.integrate import ode
%matplotlib inline
import matplotlib.pyplot as plt
def newton(t, Y, q, m, B):
    # Y = (x, y, z, u, v, w); this unpacking line was missing from the extract
    x, y, z, u, v, w = Y
    alpha = q / m * B
    return np.array([u, v, w, 0, alpha * w, -alpha * v])
r = ode(newton).set_integrator('dopri5')
# Initial conditions
t0 = 0
x0 = np.array([0, 0, 0])
v0 = np.array([1, 1, 0])
initial_conditions = np.concatenate((x0, v0))
r.set_initial_value(initial_conditions, t0).set_f_params(1.0, 1.0, 1.0)  # q, m, B (assumed values)
positions = []
t1 = 50
dt = 0.05
while r.successful() and r.t < t1:
r.integrate(r.t+dt)
positions.append(r.y[:3]) # keeping only position, not velocity
positions = np.array(positions)
[11]: import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot3D(positions[:, 0], positions[:, 1], positions[:, 2])
15 Chapter 14: Basic Operations on Quantum Objects
The particle-in-a-box problem's usefulness in our context is that it illustrates several quantum mechanical features. The potential energy at the barrier is set to infinity (i.e. the particle cannot escape) and the potential energy inside the box is set to 0. Under these conditions, classical mechanics predicts that the particle has an equal probability of being in any part of the box and that its kinetic energy is allowed to have any value. Taking this into consideration, we get different equations for the particle's energy at the barrier and inside the box.
[135]: Image(filename="CNX_UPhysics_40_04_box.jpg",width=700)
[135]:
If we solve the Schrödinger equation in the range (0, L):

$$\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2} + (E - 0)\,\psi(x) = 0$$
Then the energy of the nth state is:

$$E_n = \frac{n^2 h^2}{8 m L^2}$$
and the wavefunction of the nth state is:

$$\psi_n = \sqrt{\frac{2}{L}}\,\sin\left(\frac{n\pi x}{L}\right)$$
Let's write the code so that it takes n and L as input and plots the wavefunction and the corresponding probability density for a free particle of mass m inside the box (0, L).
# Inputs and the box eigenfunction (assumed; defined in an earlier cell)
n, L = 2, 10.0
def psi(x, n, L):
    return np.sqrt(2.0/L)*np.sin(n*np.pi*x/L)
x = np.linspace(0, L, 900)
fig, ax = plt.subplots()
lim1=np.sqrt(2.0/L) # Maximum value of the wavefunction
ax.axis([0.0,L,-1.1*lim1,1.1*lim1]) # Defining the limits to be plot in the graph
str1=r"$n = "+str(n)+r"$"
ax.plot(x, psi(x,n,L), linestyle='--', label=str1, color="orange", linewidth=2.8) # Plotting the wavefunction
ax.plot(x, psi(x,n,L)*psi(x,n,L), label=str1, linewidth=2.8)
ax.legend(loc=2);
ax.set_xlabel(r'$L$')
ax.set_ylabel(r'$|\psi_n|^2(x)$')
plt.title('Probability Density')
plt.legend(bbox_to_anchor=(1.1, 1), loc=2, borderaxespad=0.0)
# Show the plots on the screen once the code reaches this point
plt.show()
15.2 Problem:
Write a Python script for the particle-in-a-box problem to show the changes in the wavefunction and probability density for a given state n in boxes of different length L. Also, the length of the box should not be larger than 20 Å.
15.3 Solution:
# Reading the input box sizes from the user, and making sure the values are not larger than 20 A
L = 100.0
while(L>20.0):
    L1 = float(input("Enter the value of L for the first box (in Angstroms and not larger then 20 A) = "))
    L2 = float(input("Enter the value of L for the second box (in Angstroms and not larger then 20) = "))
    L = max(L1,L2)
    if(L>20.0):
        print ("The sizes of the boxes cannot be larger than 20 A. Please enter the values again.\n")
plt.rcParams.update({'font.size': 18, 'font.family': 'STIXGeneral', 'mathtext.fontset': 'stix'})
fig, ax = plt.subplots(figsize=(12,6))
ax.spines['right'].set_color('none')
ax.xaxis.tick_bottom()
ax.spines['left'].set_color('none')
ax.axes.get_yaxis().set_visible(False)
ax.spines['top'].set_color('none')
val = 1.1*max(L1,L2)
X1 = np.linspace(0.0, L1, 900,endpoint=True)
X2 = np.linspace(0.0, L2, 900,endpoint=True)
ax.axis([-0.5*val,1.5*val,-np.sqrt(2.0/L),3*np.sqrt(2.0/L)])
ax.set_xlabel(r'$X$ (Angstroms)')
strA="$\psi_n$"
strB="$|\psi_n|^2$"
ax.text(-0.12*val, 0.0, strA, rotation='vertical', fontsize=30, color="black")
ax.text(-0.12*val, np.sqrt(4.0/L), strB, rotation='vertical', fontsize=30, color="black")
ax.margins(0.00)
ax.legend(loc=9)
str2="$V = +\infty$"
ax.text(-0.3*val, 0.5*np.sqrt(2.0/L), str2, rotation='vertical', fontsize=40, color="black")
# Show the plots on the screen once the code reaches this point
plt.show()
Enter the value of L for the first box (in Angstroms and not larger then 20 A)
= 10
Enter the value of L for the second box (in Angstroms and not larger then 20) =
15
15.4 Problem:
Another very interesting problem is to check how the energy levels En for an electron change as a function of the size of the box.
(Hint: you can calculate the quantized energies of an electron, considering the electron inside two different boxes of lengths L1 and L2.)
15.5 Solution:
import scipy.constants as sc
me = sc.m_e
def En(n, L, m):
    # Energy (in eV) of level n in a box of length L Angstroms (assumed; defined in an earlier cell)
    return n**2*sc.h**2/(8.0*m*(L*1e-10)**2)/sc.e
L1 = float(input("Enter the value for L for the first box (in Angstroms) = "))
nmax1 = int(input("Enter the number of levels you want to plot for the first box = "))
L2 = float(input("Enter the value for L for the second box (in Angstroms) = "))
nmax2 = int(input("Enter the number of levels you want to plot for the second box = "))
fig, ax = plt.subplots(figsize=(8,12))
ax.spines['right'].set_color('none')
ax.yaxis.tick_left()
ax.spines['bottom'].set_color('none')
ax.axes.get_xaxis().set_visible(False)
ax.spines['top'].set_color('none')
val = 1.1*max(En(nmax1,L1,me),En(nmax2,L2,me))
val2= 1.1*max(L1,L2)
ax.axis([0.0,10.0,0.0,val])
ax.set_ylabel(r'$E_n$ (eV)')
for n in range(1,nmax1+1):
    str1="$n = "+str(n)+r"$, $E_{"+str(n)+r"} = %.3f$ eV"%(En(n,L1,me))
    ax.text(0.6, En(n,L1,me)+0.01*val, str1, fontsize=16, color="blue")
    ax.hlines(En(n,L1,me), 0.0, 4.5, linewidth=1.8, linestyle='--', color="blue")
for n in range(1,nmax2+1):
    str1="$n = "+str(n)+r"$, $E_{"+str(n)+r"} = %.3f$ eV"%(En(n,L2,me))
    ax.text(6.2, En(n,L2,me)+0.01*val, str1, fontsize=16, color="magenta")
    ax.hlines(En(n,L2,me), 5.5, 10.0, linewidth=1.8, linestyle='--', color="magenta")
plt.show()
Enter the value for L for the first box (in Angstroms) = 10
Enter the number of levels you want to plot for the first box = 3
Enter the value for L for the second box (in Angstroms) = 15
Enter the number of levels you want to plot for the second box = 3
Isn't it very cool! Now you can play with it more. You can also see from the energy eigenvalue formula that En is a function of m. Let's do another interesting problem.
15.6 Problem:
Show how the energy levels En change as a function of the mass of the particle.
15.7 Solution:
L = float(input("Enter the value for L for both boxes (in Angstroms) = "))
m1 = float(input("Enter mass of first particle (in units of the electron mass) = "))
nmax1 = int(input("Enter n_{max1} = "))
m2 = float(input("Enter mass of second particle (in units of the electron mass) = "))
nmax2 = int(input("Enter n_{max2} = "))
fig, ax = plt.subplots(figsize=(8,12))
ax.spines['right'].set_color('none')
ax.yaxis.tick_left()
ax.spines['bottom'].set_color('none')
ax.axes.get_xaxis().set_visible(False)
ax.spines['top'].set_color('none')
val = 1.1*max(En(nmax1,L,m1*me),En(nmax2,L,m2*me))
val2= 1.1*max(m1,m2)
ax.axis([0.0,10.0,0.0,val])
ax.set_ylabel(r'$E_n$ (eV)')
for n in range(1,nmax1+1):
    str1="$n = "+str(n)+r"$, $E_{"+str(n)+r"} = %.3f$ eV"%(En(n,L,m1*me))
    ax.text(0.6, En(n,L,m1*me)+0.01*val, str1, fontsize=16, color="blue")
    ax.hlines(En(n,L,m1*me), 0.0, 4.5, linewidth=1.8, linestyle='--', color="blue")
for n in range(1,nmax2+1):
    str1="$n = "+str(n)+r"$, $E_{"+str(n)+r"} = %.3f$ eV"%(En(n,L,m2*me))
    ax.text(6.2, En(n,L,m2*me)+0.01*val, str1, fontsize=16, color="green")
    ax.hlines(En(n,L,m2*me), 5.5, 10.0, linewidth=1.8, linestyle='--', color="green")
# Show the plots on the screen once the code reaches this point
plt.show()
15.8 Combined presentation of Energy Levels, Wavefunctions and Probability
Densities:
We can combine the information from the wavefunctions, probability density, and energies into a
single plot that compares the wavefunctions and the probability densities for different states, each
one represented at its energy value. These plots are made using the electron mass.
[140]: import matplotlib.pyplot as plt
import numpy as np
# Plot setup (assumed; the cell's opening lines are not shown in this extract)
nmax = 3                      # number of levels to draw
L = 10.0                      # box length in Angstroms
amp = 0.4                     # amplitude used to draw psi on the energy axis (eV)
Etop = 1.3*En(nmax,L,me)
Emax = En(nmax,L,me)
X3 = np.linspace(0.0, L, 900,endpoint=True)
fig, ax = plt.subplots(figsize=(12,9))
for n in range(1,nmax+1):
    ax.hlines(En(n,L,me), 0.0, L, linewidth=1.8, linestyle='--', color="black")
    str1="$n = "+str(n)+r"$, $E_{"+str(n)+r"} = %.3f$ eV"%(En(n,L,me))
    ax.text(1.03*L, En(n,L,me), str1, fontsize=16, color="black")
    ax.plot(X3,En(n,L,me)+amp*np.sqrt(L/2.0)*psi(X3,n,L), color="red", label="", linewidth=2.8)
ax.margins(0.00)
ax.vlines(0.0, 0.0, Etop, linewidth=4.8, color="blue")
ax.vlines(L, 0.0, Etop, linewidth=4.8, color="blue")
ax.hlines(0.0, 0.0, L, linewidth=4.8, color="blue")
plt.title('Wavefunctions', fontsize=30)
plt.legend(bbox_to_anchor=(0.8, 1), loc=2, borderaxespad=0.)
str2="$V = +\infty$"
ax.text(-0.15*L, 0.6*Emax, str2, rotation='vertical', fontsize=40, color="black")
# A second figure for the probability densities (assumed; the original used a fresh fig, ax here)
fig, ax = plt.subplots(figsize=(12,9))
ax.xaxis.tick_bottom()
ax.spines['left'].set_color('none')
ax.axes.get_yaxis().set_visible(False)
ax.spines['top'].set_color('none')
ax.axis([-0.5*L,1.5*L,0.0,Etop])
ax.set_xlabel(r'$X$ (Angstroms)')
for n in range(1,nmax+1):
    ax.hlines(En(n,L,me), 0.0, L, linewidth=1.8, linestyle='--', color="black")
    str1="$n = "+str(n)+r"$, $E_{"+str(n)+r"} = %.3f$ eV"%(En(n,L,me))
    ax.text(1.03*L, En(n,L,me), str1, fontsize=16, color="black")
    ax.plot(X3,En(n,L,me)+ amp*(np.sqrt(L/2.0)*psi(X3,n,L))**2, color="red", label="", linewidth=2.8)
ax.margins(0.00)
ax.vlines(0.0, 0.0, Etop, linewidth=4.8, color="blue")
ax.vlines(L, 0.0, Etop, linewidth=4.8, color="blue")
ax.hlines(0.0, 0.0, L, linewidth=4.8, color="blue")
plt.title('Probability Density', fontsize=30)
plt.legend(bbox_to_anchor=(0.8, 1), loc=2, borderaxespad=0.)
str2="$V = +\infty$"
ax.text(-0.15*L, 0.6*Emax, str2, rotation='vertical', fontsize=40, color="black")
# Show the plots on the screen once the code reaches this point
plt.show()
15.9 Particle in 2D box:
If we consider a particle of mass m inside a 2D box, the potential separates into a part depending only on the variable x and one depending only on the variable y, so the solution to the 2D Schrödinger equation will be a wavefunction that is the product of the 1D solutions in the x and y directions, with independent quantum numbers n and m:

$$\psi_{n,m}(x,y) = \psi_n(x)\,\psi_m(y)$$
or

$$\psi_{n,m}(x,y) = \frac{2}{\sqrt{L_x L_y}}\,\sin\left(\frac{n\pi x}{L_x}\right)\sin\left(\frac{m\pi y}{L_y}\right)$$
n, m = 2, 3   # quantum numbers (assumed; the defining lines are not shown)
def psi2D(x,y): return 2.0*np.sin(n*np.pi*x)*np.sin(m*np.pi*y)   # Lx = Ly = 1
X, Y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
fig, axes = plt.subplots()
axes.contourf(X, Y, psi2D(X, Y)**2)   # probability density (plotting call assumed)
axes.set_ylabel(r'$y/L_y$')
axes.set_xlabel(r'$x/L_x$')
plt.show()
<Figure size 720x216 with 0 Axes>
The energy will be given by the sum of the 1D energies:

$$E_{n,m} = E_n + E_m = \frac{h^2}{8m}\left(\frac{n^2}{L_x^2} + \frac{m^2}{L_y^2}\right)$$
L1 = float(input("Can we count DEGENERATE states?\nEnter the value for Lx (in␣
,→Angstroms) = "))
fig, ax = plt.subplots(figsize=(nmax1*2+2,nmax1*3))
ax.spines['right'].set_color('none')
ax.yaxis.tick_left()
ax.spines['bottom'].set_color('none')
ax.axes.get_xaxis().set_visible(False)
ax.spines['top'].set_color('none')
val = 1.1*(En2D(nmax1,mmax2,L1,L2))
val2= 1.1*max(L1,L2)
ax.axis([0.0,3*nmax1,0.0,val])
ax.set_ylabel(r'$E_n$ (eV)')
for n in range(1,nmax1+1):
for m in range(1, mmax2+1):
str1="$"+str(n)+r","+str(m)+r"$"
str2=" $E = %.3f$ eV"%(En2D(n,m,L1,L2))
ax.text(n*2-1.8, En2D(n,m,L1,L2)+ 0.005*val, str1, fontsize=20,␣
,→color="blue")
This study is also very important for atomic and molecular physics.
The equation of motion of quantum mechanics for a particle is given by the Schrödinger equation,

$$i\hbar\frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \Psi}{\partial x^2} + V\Psi$$
Here, Ψ is the wave function of the particle, which is a function of both time, t, and position,
x, moving in a potential field described by V . Solution of the Schrödinger equation typically
uses the method of separating the time- from the space-dependent part of the equation. The
spatial portion is the so-called stationary (or time-independent) Schrödinger equation, an eigenvalue
equation which, in the coordinate representation, takes the form of a linear differential equation.
The solutions of this equation are wave functions ψ(r), which assign a complex number to every
point r. More specifically, they describe those states of the physical system for which the probability
|ψ(r)|2 does not change with time. To obtain a numerical solution of the Schrödinger equation, one
can either approximately discretize the linear differential equation and put it in matrix form, or
expand ψ(r) in terms of a complete set of wave functions ψn (r) and consider only a finite number
of them. In both cases, the stationary Schrödinger equation leads to an eigenvalue equation of a
finite matrix.
Consider a one-dimensional problem where a mass m moves in the quadratic potential V(x) = mω²x²/2. Here, x is the spatial coordinate and ω is the angular frequency of the harmonic oscillator. The Hamiltonian for such a system is:
$$H = \frac{1}{2}\left(\frac{p^2}{m} + m\omega^2 x^2\right)$$

or, in operator form,

$$\hat{H} = \frac{1}{2m}\left(-\hbar^2\frac{\partial^2}{\partial x^2} + m^2\omega^2 x^2\right)$$

$$H\,|\psi_n\rangle = E_n\,|\psi_n\rangle,$$
import math
import numpy
from scipy.special import eval_hermite
m, w, hbar = 1.0, 1.0, 1.0   # natural units (assumed; defined in an earlier cell)
#Discretized space
dx = 0.05
x_lim = 12
x = numpy.arange(-x_lim,x_lim,dx)
def stationary_state(x,n):
    xi = numpy.sqrt(m*w/hbar)*x
    prefactor = 1./math.sqrt(2.**n * math.factorial(n)) * (m*w/(numpy.pi*hbar))**(0.25)
    # the return line was cut off in the extract; the standard eigenfunction is assumed
    return prefactor * numpy.exp(-xi**2/2.) * eval_hermite(n, xi)
plt.figure()
plt.plot(x, stationary_state(x,0), linewidth=1,label=r"$\psi_0(x)$")
plt.plot(x, stationary_state(x,1), linewidth=1,label=r"$\psi_1(x)$")
plt.plot(x, stationary_state(x,2), linewidth=1,label=r"$\psi_2(x)$")
plt.legend()
plt.show()
15.11 Comparing Classical vs. Quantum Harmonic Results:
Compare the behavior of a quantum harmonic oscillator to a classical harmonic oscillator. Connect
what happens as you increase the quantum number to the transition from quantum to classical
behavior. There it is shown that for a classical harmonic oscillator with energy E, the classical
probability of finding the particle at x is given by:
$$P_{\mathrm{classical}}(x) = \frac{1}{\pi\sqrt{x_{max}^2 - x^2}}, \qquad x_{max} = \sqrt{\frac{2E}{m\omega^2}}$$
where xmax is the classical turning point, and Pclassical (x) is understood to be zero for |x| > |xmax |.
x_lim = 12
x = numpy.arange(-x_lim,x_lim,dx)
def classical_P(x,n):
    E = hbar*w*(n+0.5)
    x_max = numpy.sqrt(2*E/(m*w**2))
    classical_prob = numpy.zeros(x.shape[0])
    x_inside = abs(x) < (x_max - 0.025)
    classical_prob[x_inside] = 1./numpy.pi/numpy.sqrt(x_max**2-x[x_inside]*x[x_inside])
    return classical_prob
plt.figure(figsize=(10, 8))
plt.subplot(3,2,1)
plt.plot(x, numpy.conjugate(stationary_state(x,0))*stationary_state(x,0), label="n=0")
plt.plot(x, classical_P(x,0))
plt.legend()
plt.subplot(3,2,2)
plt.plot(x, numpy.conjugate(stationary_state(x,3))*stationary_state(x,3), label="n=3")
plt.plot(x, classical_P(x,3))
plt.legend()
plt.subplot(3,2,3)
plt.plot(x, numpy.conjugate(stationary_state(x,8))*stationary_state(x,8), label="n=8")
plt.plot(x, classical_P(x,8))
plt.legend()
plt.subplot(3,2,4)
plt.plot(x, numpy.conjugate(stationary_state(x,15))*stationary_state(x,15), label="n=15")
plt.plot(x, classical_P(x,15))
plt.legend()
plt.subplot(3,2,5)
plt.plot(x, numpy.conjugate(stationary_state(x,25))*stationary_state(x,25), label="n=25")
plt.plot(x, classical_P(x,25))
plt.legend()
plt.subplot(3,2,6)
plt.plot(x, numpy.conjugate(stationary_state(x,40))*stationary_state(x,40), label="n=40")
plt.plot(x, classical_P(x,40))
plt.legend()
plt.show()
Visualising the spherical harmonics is a little tricky because they are complex and defined in terms
of angular co-ordinates, (θ,ϕ).
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from scipy.special import sph_harm
m, l = 2, 3
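The plotting part of this cell is on a page rendered as an image. A sketch of one common way to visualise the real part of a spherical harmonic as a 3D surface (the grid sizes, colormap and the choice to plot Re Y are assumptions):

import matplotlib.pyplot as plt
import numpy as np
from scipy.special import sph_harm
from matplotlib import cm

m, l = 2, 3
theta = np.linspace(0, np.pi, 100)      # polar angle
phi = np.linspace(0, 2*np.pi, 100)      # azimuthal angle
theta, phi = np.meshgrid(theta, phi)
# Note: scipy's sph_harm takes (m, l, azimuthal, polar)
Y = sph_harm(m, l, phi, theta)
r = np.abs(Y.real)                       # surface radius from the real part
x = r * np.sin(theta) * np.cos(phi)
y = r * np.sin(theta) * np.sin(phi)
z = r * np.cos(theta)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(x, y, z, facecolors=cm.seismic(0.5 + 0.5*Y.real/np.abs(Y.real).max()))
ax.set_title(r'$\mathrm{Re}[Y_{3}^{2}(\theta,\varphi)]$')
plt.show()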
16 WKB Approximation using Python
There is an approximate solution to the one dimensional Helmholtz equation that can be used to de-
scribe propagation through any slowly varying index profile. This is called the WKB approximation
and the approximate solution is given by:
$$\phi(x) = \frac{1}{\sqrt{n(x)}}\, e^{\pm i k_0 \int^x n(x')\,dx'}$$
with the sign being determined by whether the wave is travelling to the right (positive) or left
(negative). In this case I choose rightwards propagation.
Here I choose a refractive index profile that gradually increases from one constant to another, over
a length scale of approximately two wavelengths.
[243]: k0=1.0 # Free space wavevector
lm=2.0*np.pi/k0 # Free space wavelength
a=1.0*lm # Length scale of profile
h=3.0 # Difference between index on the far left and the far right
def n(x):
return 1.0+0.5*h*(1.0+np.tanh(x/a))
[244]: Xv=np.linspace(-5*lm,5*lm,2000)
nv=n(Xv)
plt.plot(Xv,nv,lw=2)
plt.xlim(-5*lm,5*lm)
plt.ylim(0.0,1.1*h+1.0)
plt.xticks([-5*lm,-2.5*lm,0,2.5*lm,5*lm],["$-5$","$-2.5$","$0$","$2.5$","$5$"])
plt.xlabel("$x/\\lambda$",fontsize=18)
plt.ylabel("$n(x)$",fontsize=18)
plt.title("Refractive index",y=1.05);
16.2 The WKB approximation:
def phi_wkb(x):
pf=1.0/np.sqrt(n(x))
return pf*np.exp(1j*k0*ph(x))
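phi_wkb relies on the accumulated phase ph(x) = ∫ n dx′ and on the array phv used in the next cell, both defined in cells not shown here. A sketch using a cumulative trapezoidal integral over the grid Xv (the interpolation approach is an assumption):

from scipy.integrate import cumtrapz
phase = cumtrapz(nv, Xv, initial=0.0)   # integral of n(x') along the grid
def ph(x):
    return np.interp(x, Xv, phase)       # accumulated phase at x (assumed definition)
phv = phi_wkb(Xv)                        # WKB field sampled on the grid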
[247]: plt.plot(Xv,np.real(phv))
plt.plot(Xv,np.imag(phv))
plt.plot(Xv,np.abs(phv),lw=2)
plt.xlim(-5*lm,5*lm)
plt.ylim(-1.1,1.1)
plt.xticks([-5*lm,-2.5*lm,0,2.5*lm,5*lm],["$-5$","$-2.5$","$0$","$2.5$","$5$"])
plt.xlabel("$x/\\lambda$",fontsize=18)
plt.ylabel("$\phi(x)$",fontsize=18)
plt.title("WKB approximation",y=1.05);
import scipy.integrate as od   # (assumed alias; the import is not shown in this extract)
def dphi(y, x):
    # Helmholtz equation phi'' = -k0^2 n(x)^2 phi written as four real first-order ODEs,
    # with y = [Re(phi), Im(phi), Re(phi'), Im(phi')] (sketch; original definition not shown)
    k2 = (k0*n(x))**2
    return [y[2], y[3], -k2*y[0], -k2*y[1]]
phnv_all=od.odeint(dphi,[1.0,0.0,0.0,n(5*lm)*k0],Xv[::-1])
phnv=np.array([i[0]+1j*i[1] for i in phnv_all])
phnv=phnv[::-1]/phnv[len(phnv)-1]
plt.plot(Xv,np.real(phnv))
plt.plot(Xv,np.imag(phnv))
plt.plot(Xv,np.abs(phnv),lw=2)
plt.xlim(-5*lm,5*lm)
plt.ylim(-1.1,1.1)
plt.xticks([-5*lm,-2.5*lm,0,2.5*lm,5*lm],["$-5$","$-2.5$","$0$","$2.5$","$5$"])
plt.xlabel("$x/\\lambda$",fontsize=18)
plt.ylabel("$\phi(x)$",fontsize=18)
plt.title("Numerical solution",y=1.05);
To compare the two solutions I take the absolute value of the difference between the numerical result and the WKB solution.
[250]: plt.plot(Xv,np.abs(phnv-phv))
plt.xlim(-5*lm,5*lm)
plt.xticks([-5*lm,-2.5*lm,0,2.5*lm,5*lm],["$-5$","$-2.5$","$0$","$2.5$","$5$"])
plt.xlabel("$x/\\lambda$",fontsize=18)
plt.ylabel("$|\phi_{\\rm num}(x)-\phi_{\\rm WKB}|$",fontsize=18)
plt.title("Comparison between WKB and numerical solution",y=1.05);
17 Matrix representation of quantum mechanics:
In this chapter we will discuss QuTiP, a Python package for calculations and numerical simulations of quantum systems.
It includes facilities for representing and doing calculations with quantum objects such as state vectors (wavefunctions), bras/kets/density matrices, quantum operators of single and composite systems, and superoperators (useful for defining master equations).
It also includes solvers for the time evolution of quantum systems, according to: the Schrödinger equation, the von Neumann equation, master equations, the Floquet formalism, Monte Carlo quantum trajectories, and experimental implementations of the stochastic Schrödinger/master equations.
At the heart of the QuTiP package is the Qobj class, which is used for representing quantum objects such as states and operators.
The Qobj class contains all the information required to describe a quantum system, such as its
matrix representation, composite structure and dimensionality.
q
[148]:
Quantum object: dims = [[2], [1]], shape = (2, 1), type = ket
Qobj data =
[[1.0]
 [0.0]]
Is q Hermitian?: False
[153]: # the sigma-z Pauli operator
sz = Qobj([[1,0], [0,-1]])
sz
[153]:
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = True
Qobj data =
[[ 1.0  0.0]
 [ 0.0 -1.0]]
Qubit Hamiltonian =
[154]:
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = True
Qobj data =
[[ 1.0    -0.100j]
 [ 0.100j -1.0   ]]
[156]: 0.0
Eigen energies:
[-1.00498756 1.00498756]
Normally we do not need to create Qobj instances from scratch, using the constructor and passing the matrix representation as an argument. Instead we can use functions in QuTiP that generate common states and operators for us. Here are some examples of built-in state functions:
17.2.1 State vectors:
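The cell that generated the ket printed below is not shown in this extract. QuTiP's built-in constructors produce such states; the exact call is an assumption, although coherent(5, 1.0) reproduces these amplitudes:

from qutip import basis, coherent
basis(5, 0)         # 5-level system prepared in its ground state
coherent(5, 1.0)    # truncated coherent state |alpha = 1>; matches the output below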
Quantum object: dims = [[5], [1]], shape = (5, 1), type = ket
Qobj data =
[[0.60655682]
[0.60628133]
[0.4303874 ]
[0.24104351]
[0.14552147]]
17.2.3 Operators:
σx operator:
[164]: # Pauli sigma x
sigmax()
[164]:
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = True
Qobj data =
[[0.0 1.0]
 [1.0 0.0]]
σy operator:
σz operator:
[166]: # Pauli sigma z
sigmaz()
[166]:
Quantum object: dims = [[2], [2]], shape = (2, 2), type = oper, isherm = True
Qobj data =
[[ 1.0  0.0]
 [ 0.0 -1.0]]
[168]:
Quantum object: dims = [[4], [4]], shape = (4, 4), type = oper, isherm = False
Qobj data =
[[0.0    0.0    0.0    0.0]
 [1.0    0.0    0.0    0.0]
 [0.0    1.414  0.0    0.0]
 [0.0    0.0    1.732  0.0]]
[169]: # the position operator is easily constructed from the annihilation operator
a = destroy(4)
x = a + a.dag()
x
[169]:
Quantum object: dims = [[4], [4]], shape = (4, 4), type = oper, isherm = True
Qobj data =
[[0.0    1.0    0.0    0.0  ]
 [1.0    0.0    1.414  0.0  ]
 [0.0    1.414  0.0    1.732]
 [0.0    0.0    1.732  0.0  ]]
[171]: a = destroy(4)
commutator(a, a.dag())
[171]:
Quantum object: dims = [[4], [4]], shape = (4, 4), type = oper, isherm = True
Qobj data =
[[1.0  0.0  0.0   0.0]
 [0.0  1.0  0.0   0.0]
 [0.0  0.0  1.0   0.0]
 [0.0  0.0  0.0  -3.0]]
Let's check the well-known commutator relation between x and p. See the chapter on the matrix representation of operators in Zettili or any other quantum mechanics book, where

$$x = \frac{a + a^\dagger}{\sqrt{2}}, \qquad p = -j\,\frac{a - a^\dagger}{\sqrt{2}}$$
[172]: x = (a + a.dag())/np.sqrt(2)
p = -1j * (a - a.dag())/np.sqrt(2)
commutator(x, p)
[172]:
Quantum object: dims = [[4], [4]], shape = (4, 4), type = oper, isherm = False
Qobj data =
[[1.0j  0.0   0.0   0.0  ]
 [0.0   1.0j  0.0   0.0  ]
 [0.0   0.0   1.0j  0.0  ]
 [0.0   0.0   0.0  -3.0j ]]
17.4 Cat vs coherent states in a Kerr resonator, and the role of measurement:
We show how the same system can produce extremely different results according to the way an observer collects the emitted field of a resonator.
[175]: import matplotlib.pyplot as plt
import numpy as np
from qutip import *
from IPython.display import display, Math, Latex
17.4.1 The two-photon Kerr Resonator:
[176]: Image(filename="PhysRevA.94.033841.png",width=650)
[176]:
Let us consider a single nonlinear Kerr resonator subject to a parametric two-photon driving. In a
frame rotating at the pump frequency, the Hamiltonian reads:
$$H = \frac{U}{2}\, a^\dagger a^\dagger a a + \frac{G}{2}\left(a^\dagger a^\dagger + a a\right)$$
where U is the Kerr photon-photon interaction strength, G is the two-photon driving amplitude, a† (a) is the bosonic creation (annihilation) operator, and γ and η are, respectively, the one- and two-photon dissipation rates.
This model can be solved exactly for its steady state. The corresponding density matrix ρss is well approximated by the statistical mixture of two orthogonal states:

$$\rho_{ss} \simeq p^+\,|C^+_\alpha\rangle\langle C^+_\alpha| + p^-\,|C^-_\alpha\rangle\langle C^-_\alpha|$$

where $|C^\pm_\alpha\rangle \propto |\alpha\rangle \pm |-\alpha\rangle$ are photonic Schrödinger cat states whose complex amplitude α is determined by the system parameters. The state $|C^+_\alpha\rangle$ is called the even cat, since it can be written as a superposition of solely even Fock states, while $|C^-_\alpha\rangle$ is the odd cat. In the previous equation, the coefficients p± can be interpreted as the probabilities of the system being found in the corresponding cat state.
Below, we demonstrate this feature by diagonalising the steady-state density matrix, and by plotting
the photon-number probability for the two most probable states.
[177]: font_size=20
label_size=30
title_font=35
a=destroy(20)
U=1
G=4
gamma=1
eta=1
H=U*a.dag()*a.dag()*a*a + G*(a*a + a.dag()*a.dag())
c_ops=[np.sqrt(gamma)*a,np.sqrt(eta)*a*a]
parity=1.j*np.pi*a.dag()*a
parity=parity.expm()
rho_ss=steadystate(H, c_ops)
vals, vecs = rho_ss.eigenstates(sort='high')   # (assumed; the diagonalisation line is not shown)
plt.figure(figsize=(8, 6))
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=font_size)
plt.semilogy(range(1,7),vals[0:6], 'rx')
plt.xlabel('Eigenvalue', fontsize=label_size)
plt.ylabel('Probability', fontsize=label_size)
plt.title('Distribution of the eigenvalues',fontsize=title_font)
plt.show()
[180]: state_zero=vecs[0].full()
state_one=vecs[1].full()
plt.figure(figsize=(8, 6))
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=font_size)
plt.plot(range(0,20), [abs(i)**2 for i in state_zero[0:20]], 'rx', label='First␣
,→state')
plt.legend()
plt.xlabel('Eigenvalue', fontsize=label_size)
plt.ylabel('Probability', fontsize=label_size)
plt.show()
[182]: xvec=np.linspace(-4,4, 500)
W_even=wigner(vecs[0], xvec, xvec, g=2)
W_odd=wigner(vecs[1], xvec, xvec, g=2)
font_size=20
label_size=30
title_font=35
W_even=np.around(W_even, decimals=2)
plt.figure(figsize=(10, 8))
plt.contourf(xvec,xvec, W_even, cmap='RdBu', levels=np.linspace(-1, 1, 20))
plt.colorbar()
plt.xlabel(r"Re$(\alpha)$", fontsize=label_size)
plt.ylabel(r"Im$(\alpha)$", fontsize=label_size)
plt.title("First state: even cat-like", fontsize=title_font)
plt.show()
[183]: W_odd=np.around(W_odd, decimals=2)
plt.figure(figsize=(10, 8))
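The rest of this cell fell on a page rendered as an image; it presumably mirrors the even-cat plot above. A sketch:

plt.contourf(xvec, xvec, W_odd, cmap='RdBu', levels=np.linspace(-1, 1, 20))
plt.colorbar()
plt.xlabel(r"Re$(\alpha)$", fontsize=label_size)
plt.ylabel(r"Im$(\alpha)$", fontsize=label_size)
plt.title("Second state: odd cat-like", fontsize=title_font)
plt.show()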
[31]: tlist=np.linspace(0,8000,800)
sol_hom=ssesolve(H, fock(20,0), tlist, c_ops, [a.dag()*a, (a+a.dag())/2, -1.j*(a-a.dag())/2, parity], ntraj=1, nsubsteps=9500, store_measurement=False, method='homodyne')
17.4.2 Homodyne:
In such realisations, a nonlinear element is already part of the system and is the key ingredient to realise two-photon processes.
[33]: plt.figure(figsize=(18, 8))
plt.subplot(311)
plt.plot(tlist, sol_hom.expect[0])
plt.ylabel(r'$\langle \hat{a}^\dagger \hat{a} \rangle$', fontsize=label_size)
plt.xlim([0,2500])
plt.subplot(312)
plt.plot(tlist, sol_hom.expect[3])
plt.ylabel(r'$\langle \hat{P} \rangle$', fontsize=label_size)
plt.xlim([0,2500])
plt.subplot(313)
plt.plot(tlist, sol_hom.expect[1], label=r'$\langle \hat{x} \rangle$')
plt.plot(tlist, sol_hom.expect[2], label=r'$\langle \hat{p} \rangle$')
plt.xlabel(r'$\gamma t$', fontsize=label_size)
#plt.ylabel(r'$\langle \hat{p} \rangle$', fontsize=label_size)
plt.xlim([0,2500])
plt.ylim([-3,3])
plt.legend()
plt.show()
The goal of this chapter is to create a statistical model simulating the evolution of magnetism as a function of material temperature.
Since the emergence of magnetism is attributed to the contribution of a great many small atomic magnetic dipoles, a statistical method is to be utilised:
- Monte Carlo methods
- Random number generation
- Ferromagnetism
- Ising Model
The subject of this project will be statistical in nature, and hence a basic understanding of Monte Carlo methods and random number algorithms will be necessary.
Numerical computations which utilise random numbers are called Monte Carlo methods after the
famous casino. The obvious applications of such methods are in stochastic physics: e.g., statistical
thermodynamics. However, there are other, less obvious, applications including the evaluation of
multi-dimensional integrals.
This method was popularised by physicists such as Stanislaw Ulam, Enrico Fermi, John von Neumann, and Nicholas Metropolis, among others. A famous early use was by Enrico Fermi, who in the 1930s used a random method to calculate the properties of the recently discovered neutron. Of course, these early simulations were greatly restricted by the limited computational power available at that time.
Uses of Monte Carlo methods require large amounts of random numbers, and it was their use that
spurred the development of pseudorandom number generators, which were far quicker to use than
the tables of random numbers which had been previously used for statistical sampling.
No numerical algorithm can generate a truly random sequence of numbers. However, there exist algorithms which generate repeating sequences of Nmax (say) integers which are, to a fairly good approximation, randomly distributed in the range 0 to Nmax − 1. Here, Nmax is a (hopefully) large integer. This type of sequence is termed pseudo-random.
The most well-known algorithm for generating pseudo-random sequences of integers is the so-called linear congruential method. The formula linking the nth and (n + 1)th integers in the sequence is

$$X_{n+1} = (A\, X_n + C) \bmod N_{max}$$

where A, C, and Nmax are positive integer constants. The first number in the sequence, the so-called "seed" value, is selected by the user.
As an example, calculate a list of numbers using A = 7, C = 2, and Nmax = 10.
[185]: # Generate pseudo-random numbers
I = 1
A = 7; C=2; M=10;
for i in range(8):
    In = (A * I + C) % M
    print(In, end =" ")
    I = In
9 5 7 1 9 5 7 1
A typical sequence of numbers generated by this formula is
X = 9, 5, 7, 1, 9, 5, 7, ... (2)
Evidently, the above choice of values for A, C, and Nmax is not a particularly good one, since the
sequence repeats after only four iterations. However, if A, C, and Nmax are properly chosen then
the sequence is of maximal length (i.e., of length Nmax ), and approximately randomly distributed
in the range 0 to Nmax − 1.
As a general rule, before implementing a random-number generator in your programs, you should check its range and that it produces numbers that "appear" random. This can be attempted either using a graphical display of your random numbers or, more robustly, by performing a mathematical analysis.
With the visual method, since your brain is quite refined at recognising patterns, it can identify if there is one in your random numbers. For instance, separate your random numbers into pairs (x, y) = (Xi, Xi+1) and analyse visually using a plot(x,y).
A, C, Nmax = 106, 1283, 6075   # example LCG constants (assumed; any valid choice works here)
Xn = 1
x = []
y = []
for i in range(1000):          # the loop header was lost in extraction (count assumed)
    N = (1.0*A*Xn + C) % Nmax
    if i % 2 == 0:
        x.append(N/Nmax)
    else:
        y.append(N/Nmax)
    Xn = N
#print X/Nmax
plt.plot(x,y,'.');
Another visual method is to plot using a histogram. If we observe a flat line at 1.0, subject to random fluctuations, we can confirm there is no bias in the random distribution.
Look up the format of the hist function and plot:
- a normalized probability of your list of random numbers from 0-1
- in 100 bins
- a red line to show the probability = 1
The more bins/samples we take, the smaller the fluctuations about the average value. A sketch is shown below.
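A minimal sketch of such a histogram check (sample count is an arbitrary choice):

import numpy as np
import matplotlib.pyplot as plt
samples = np.random.random(100000)
plt.hist(samples, bins=100, density=True)   # normalized probability in 100 bins
plt.axhline(1.0, color='red')               # the ideal flat distribution
plt.xlabel('$x$')
plt.ylabel('$P(x)$')
plt.show()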
If your list is truly random you should observe that every value of x is (roughly) equally as likely
to be chosen as every other value.
Now import the function 'random' via NumPy, which produces uniformly distributed values:
As before, when you want to reproduce a particular set of random numbers, you can set the “seed”
value of the generator to a particular value. Every time you set the seed to this value, it returns
the generator to its initial state and will always give you the same sequence of numbers.
The default seed value in python is None. The python kernel interprets this to mean that we want
it to try to initialise itself using a pseudo-random number from the computer’s random number
cache or it will use the current computer clock time to set the seed.
Let’s see what setting and resetting the seed does:
[207]: import numpy as np
#Using the random function
print("One set of rendom numbers:\n")
# print 5 uniformly distributed numbers between 0 and 1
print( np.random.random(5) )
print("\nAnother set of random numbers:\n")
# now print another 5 - should be different
print( np.random.random(5) )
[0.21807959 0.44707934 0.74699429 0.01619885 0.28491486]
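The cell that actually sets and resets the seed is not reproduced in this extract; a minimal sketch of what it demonstrates:

np.random.seed(42)
print(np.random.random(3))   # a reproducible draw
np.random.seed(42)           # reset the seed to the same value...
print(np.random.random(3))   # ...and exactly the same three numbers come back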
A classic visualisation of random behaviour is the random walk. Consider a completely drunk person who walks along a street and, being drunk, has no sense of direction. So this drunkard may move forwards with equal probability that he moves backwards. A short simulation of this is sketched below.
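A minimal sketch of such a walk (the step count is an arbitrary choice):

import numpy as np
import matplotlib.pyplot as plt
steps = np.random.choice([-1, 1], size=1000)   # forward or backward with equal probability
plt.plot(np.cumsum(steps))                     # position after each step
plt.xlabel('Step')
plt.ylabel('Displacement')
plt.show()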
A Markov process is a random walk with a selected probability for making a move. The new
move is independent of the previous history of the system. The Markov chain is used repeatedly
in Monte Carlo simulations in order to generate new random states.
In the context of a physical system containing many atoms, molecules etc, the different energy
states are practically infinite. Hence, statistical approaches utilise algorithms to sample this large
state-space and calculate average measurements such as energy and magnetisation. With Monte
Carlo methods, we can explore and sample this state space using a random walk. The role of the
Markov chain is to sample those states that make the most significant contributions.
The reason for choosing a Markov process is that when it is run for a long enough time starting with
a random state, we will eventually reach the most likely state of the system. In thermodynamics,
this means that after a certain number of Markov steps we reach an equilibrium distribution.
This mimics the way a real system reaches its most likely state at a given temperature of the surroundings.
To reach this distribution, the Markov process needs to obey two important conditions, those of ergodicity and detailed balance. These conditions then impose constraints on our algorithms for accepting or rejecting new random states.
The Metropolis algorithm discussed next abides by both these constraints. The Metropolis algorithm is widely used in Monte Carlo simulations, and the understanding of it rests on the interpretation of random walks and Markov processes.
18.4 The Metropolis algorithm
In order to follow the predictions of a given statistical probability function such as a Boltzmann distribution, the samples of state-space need to be chosen accordingly. Instead of sampling a lot of states and then weighting them by their Boltzmann probability factors, it makes more sense to choose states based on their Boltzmann probability and to then weight them equally. This is known as the Metropolis algorithm, which has a characteristic cycle (sketched in code below the list):
1. A trial configuration is made by randomly choosing one state.
2. The energy difference, ∆E, of adopting this trial state relative to the present state is calculated.
3. If this reduces the total energy of the system, i.e. if ∆E ≤ 0, then the trial state is energetically favorable and thus accepted.
4. Otherwise, it will only be accepted if exp(−∆E/kBT) > η for a random number 0 ≤ η ≤ 1.
Each cycle accepts or rejects a potential state and repeats, testing many other states in a Markov process. The total number of cycles is typically the number of atoms or bodies in the system. Obviously, the system must be allowed to reach thermal equilibrium before sampling the Boltzmann distribution in this way.
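As promised above, the accept/reject step of this cycle condenses into a few lines (a sketch; kB is absorbed into the temperature):

import numpy as np
def metropolis_step(delta_E, T):
    """Accept or reject a trial move (minimal sketch)."""
    if delta_E <= 0:
        return True                                    # energetically favourable: always accept
    return np.random.rand() < np.exp(-delta_E / T)     # otherwise accept with Boltzmann probability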
A ferromagnetic material is one that produces a magnetic field of its own, even without the presence of an external magnetic field. A ferromagnet can be any material that forms a permanent magnet by itself, the magnitude of which is not reduced in the presence of any other magnetic fields.
A paramagnet is a material in which, with the presence of an external magnetic field, interatomic
induced magnetic fields are formed, and therefore a magnetic field through the material is produced.
However, once the external field is removed, the induced magnetic fields between atoms are lost
and therefore the material can only have an induced magnetic field.
Ferromagnets contain finite-size domains in which the spins of all the atoms point in the same
direction. When an external magnetic field is applied to these materials, the different domains
align and the materials become “magnetised.” Yet as the temperature is raised, the total magnetism
decreases, and at the Curie temperature the system goes through a phase transition beyond which
all magnetisation vanishes.
The Ising model can explain the thermal behaviour of ferromagnets.
The Ising model is the simplest model of a ferromagnet. The basic phenomenology of the Ising
model is simple: there is a certain temperature Tc below which the system will spontaneously
magnetise. This is what we will study with Monte Carlo.
[213]: Image(filename="ising.png",width=450)
[213]:
18.6 Ising Model
The model consists of an array of particles, each with a spin value of either +1 or -1, corresponding to an up or down spin configuration respectively. Inside the lattice, spins interact with their 'neighbours', and each particle on the lattice has an associated interaction energy. The value of this interaction energy depends on whether neighbouring particles have parallel or anti-parallel spins. The interaction energy between two parallel spins is –J, and for anti-parallel spins, +J, where J is an energy coupling constant that is dependent on the material being simulated:
- J > 0 corresponds to a ferromagnetic state, in which spins tend to align with each other in order to minimize the energy.
- J < 0: spins prefer to be antiparallel, and for a simple lattice that leads to a chessboard-like alignment, a feature of the anti-ferromagnetic state.
- J = 0: the spin alignment is arbitrary.
By summing the interaction energies of every particle on the lattice, the total energy, E, of the
configuration can be obtained, and is given by equation:
$$E = -J \sum_{\langle i,j\rangle} S_i S_j \qquad (3)$$
Where < i, j > represents nearest neighbours and Si = +1 for an up spin and -1 for a down spin
on site i. For a give spin, it’s local energy is calculated by summing over all the energies of each
spin of it’s neigbours as given by:
Ei = −J × Si × (Sj1 + Sj2 + Sj3 + Sj4 ) (4)
The change in energy of the system is dictated by the interaction of a dipole with its neighbours, so that flipping Si to Si′ changes the energy by:

$$\Delta E = E_i' - E_i = 2J\sum_j S_i S_j \qquad (5)$$
The two-dimensional square lattice Ising model is very difficult to solve analytically; the first such description was achieved by Lars Onsager in 1944, who solved for the dependence:

$$\frac{k_B T_c}{J} = \frac{2}{\ln(1+\sqrt{2})} \approx 2.269$$
His solution, although elegant, is rather complicated. We’re going to use the Monte Carlo method
to see the effects that his solution describes.
We expect there is some temperature at which this phase transition happens - where the systems
goes from being a Ferromagent to a Paramagnet. This temperature was solved for exactly by Lars
Onsager in 1944
The program starts at a certain given temperature and calculates whether the considered spin flips
or not for a certain number of iterations. For each step we first performed l iterations to reach
thermal equilibrium and then performed another l/2 iterations to determine the physical quantities
Energy per site, Magnetization per site, Magnetic Susceptibility, Specific Heat, Correlation Function
and the Correlation Length.
18.6.1 Road map of Ising model:
We will consider a square two-dimensional lattice with periodic boundary conditions. Here, the
first spin in a row ‘sees’ the last spin in the row and vice versa. The same applies for spins at the
top and bottom of a column.
We will define individual functions for all the components of our model, such as:
- creating a 2D grid of lattice spins
- randomly choosing a spin
- flipping the spin
- calculating nearest neighbour values
- calculating the energy and magnetisation of the lattice
- the Metropolis algorithm
The Ising model will then simply start at some temperature, T, evolve to equilibrium and then evolve further to a steady state.
[214]: %matplotlib inline
import numpy as np
from numpy.random import rand
import matplotlib.pyplot as plt
def calcEnergy(config):
    '''Energy of a given configuration'''
    energy = 0
    for i in range(len(config)):
        for j in range(len(config)):
            S = config[i,j]
            nb = config[(i+1)%N, j] + config[i,(j+1)%N] + config[(i-1)%N, j] + config[i,(j-1)%N]
            energy += -nb*S
    return energy/4.
def calcMag(config):
    '''Magnetization of a given configuration'''
    mag = np.sum(config)
    return mag
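The main loop in the next cell relies on several names (nt, N, mcSteps, T, the E/M/C/X arrays, n1, n2, initialstate and mcmove) whose defining cells are not reproduced here. A minimal sketch consistent with the calls below; all values are assumptions:

import numpy as np
from numpy.random import rand

nt = 32                        # number of temperature points
N = 8                          # lattice size (N x N)
mcSteps = 800                  # Monte Carlo sweeps per temperature
T = np.linspace(1.2, 3.8, nt)  # temperatures spanning the transition
E, M, C, X = np.zeros(nt), np.zeros(nt), np.zeros(nt), np.zeros(nt)
n1 = 1.0/(mcSteps*N*N)         # normalisation for single averages
n2 = 1.0/(mcSteps*mcSteps*N*N) # normalisation for squared averages

def initialstate(N):
    '''Random spin configuration of +1/-1.'''
    return 2*np.random.randint(2, size=(N, N)) - 1

def mcmove(config, beta):
    '''One Metropolis sweep over the lattice.'''
    for _ in range(N*N):
        a, b = np.random.randint(0, N), np.random.randint(0, N)
        s = config[a, b]
        nb = config[(a+1)%N, b] + config[a, (b+1)%N] + \
             config[(a-1)%N, b] + config[a, (b-1)%N]
        cost = 2*s*nb
        if cost < 0 or rand() < np.exp(-cost*beta):
            config[a, b] = -s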
[231]: #----------------------------------------------------------------------
# MAIN PART OF THE CODE
#----------------------------------------------------------------------
for tt in range(nt):
    E1 = M1 = E2 = M2 = 0
    config = initialstate(N)
    iT=1.0/T[tt]; iT2=iT*iT;
    for i in range(mcSteps):
        mcmove(config, iT)
        Ene = calcEnergy(config)   # calculate the energy
        Mag = calcMag(config)      # calculate the magnetisation
        E1 = E1 + Ene
        M1 = M1 + Mag
        M2 = M2 + Mag*Mag
        E2 = E2 + Ene*Ene
    E[tt] = n1*E1
    M[tt] = n1*M1
    C[tt] = (n1*E2 - n2*E1*E1)*iT2
    X[tt] = (n1*M2 - n2*M1*M1)*iT
[234]: f = plt.figure(figsize=(18, 10)); # plot the calculated values
sp = f.add_subplot(2, 2, 1 );
plt.scatter(T, E, s=100, color='IndianRed')
plt.xlabel("Temperature (T)", fontsize=20);
plt.ylabel("Energy ", fontsize=20); plt.axis('tight');
sp = f.add_subplot(2, 2, 2 );
plt.scatter(T, abs(M), s=50, marker='o', color='RoyalBlue')
plt.xlabel("Temperature (T)", fontsize=20);
plt.ylabel("Magnetization ", fontsize=20); plt.axis('tight');
sp = f.add_subplot(2, 2, 3 );
plt.scatter(T, C, s=50, marker='o', color='IndianRed')
plt.xlabel("Temperature (T)", fontsize=20);
plt.ylabel("Specific Heat ", fontsize=20); plt.axis('tight');
sp = f.add_subplot(2, 2, 4 );
plt.scatter(T, X, s=50, marker='o', color='RoyalBlue')
plt.xlabel("Temperature (T)", fontsize=20);
plt.ylabel("Susceptibility", fontsize=20); plt.axis('tight');
import numpy as np
from numpy.random import rand
import matplotlib.pyplot as plt
class Ising():
    ''' Simulating the Ising model '''
    ## monte carlo moves
    def mcmove(self, config, N, beta):
        ''' This is to execute the monte carlo moves using
        Metropolis algorithm such that detailed
        balance condition is satisfied'''
        for i in range(N):
            for j in range(N):
                a = np.random.randint(0, N)
                b = np.random.randint(0, N)
                s = config[a, b]
                nb = config[(a+1)%N,b] + config[a,(b+1)%N] + config[(a-1)%N,b] + config[a,(b-1)%N]
                cost = 2*s*nb
                if cost < 0:
                    s *= -1
                elif rand() < np.exp(-cost*beta):
                    s *= -1
                config[a, b] = s
        return config
    def simulate(self):
        ''' This module simulates the Ising model'''
        N, temp = 64, .4 # Initialise the lattice
        config = 2*np.random.randint(2, size=(N,N))-1
        f = plt.figure(figsize=(15, 15), dpi=80);
        self.configPlot(f, config, 0, N, 1);
        msrmnt = 1001
        for i in range(msrmnt):
            self.mcmove(config, N, 1.0/temp)
            if i == 1: self.configPlot(f, config, i, N, 2);
            if i == 4: self.configPlot(f, config, i, N, 3);
            if i == 32: self.configPlot(f, config, i, N, 4);
            if i == 100: self.configPlot(f, config, i, N, 5);
            if i == 1000: self.configPlot(f, config, i, N, 6);
    def configPlot(self, f, config, i, N, n_):
        ''' Plot one lattice snapshot (the def line was lost in extraction; signature assumed from the calls above) '''
        X, Y = np.meshgrid(range(N), range(N))
        sp = f.add_subplot(3, 3, n_ )
        plt.setp(sp.get_yticklabels(), visible=False)
        plt.setp(sp.get_xticklabels(), visible=False)
        plt.pcolormesh(X, Y, config, cmap=plt.cm.RdBu, shading='auto');
        plt.title('Time=%d'%i); plt.axis('tight')
        plt.show()
[236]: rm = Ising()
[237]: rm.simulate()
<ipython-input-235-6dd4ce2880d5>:53: MatplotlibDeprecationWarning:
shading='flat' when X and Y have the same dimensions as C is deprecated since
3.3. Either specify the corners of the quadrilaterals with X and Y, or pass
shading='auto', 'nearest' or 'gouraud', or set rcParams['pcolor.shading']. This
will become an error two minor releases later.
plt.pcolormesh(X, Y, config, cmap=plt.cm.RdBu);
#Using Probability Distribution given
def get_probability(delta_energy, Temperature):
    return np.exp(-delta_energy / Temperature)
def get_energy(spins):
    energy=0
    for i in range(len(spins)):
        energy=energy+interaction*spins[i-1]*spins[i]
    energy= energy-field*sum(spins)
    return energy
def delta_energy(spins,random_spin):
    #If you do flip one random spin, the change in energy is:
    #(By using a reduced formula that only involves the spin
    # and its neighbours)
    if random_spin==L-1:
        PBC=0
    else:
        PBC=random_spin+1
    old = -interaction*(spins[random_spin-1]*spins[random_spin] + spins[random_spin]*spins[PBC]) - field*spins[random_spin]
    new = interaction*(spins[random_spin-1]*spins[random_spin] + spins[random_spin]*spins[PBC]) + field*spins[random_spin]
    return new-old
def metropolis(L, MC_samples, Temperature, interaction, field):
    # (the function header and sampling loop were lost in extraction; reconstructed sketch)
    # initializing
    # Spin Configuration
    spins = np.random.choice([-1,1],L)
    Beta = Temperature**(-1)
    data, magnetization, energy = [], [], []
    for _ in range(MC_samples):
        random_spin=np.random.randint(0,L,size=(1))
        #Computing the change in energy of this spin flip
        delta=delta_energy(spins,random_spin)
        #Metropolis accept-rejection:
        if delta<0:
            #Accept the move if its energy change is negative
            spins[random_spin]=-spins[random_spin]
        else:
            #If it's positive, we compute the acceptance probability
            probability=get_probability(delta,Temperature)
            if np.random.rand()<=probability:
                #Accept the move
                spins[random_spin]=-spins[random_spin]
        data.append(list(spins))
        magnetization.append(sum(spins))
        energy.append(get_energy(spins))
    return data,magnetization,energy
def record_state_statistics(data,n=4):
    # Count how often each n-spin sub-state occurs (reconstructed sketch; original body was garbled)
    sub_sample = [tuple(d[:n]) for d in data]
    state_nums = {s: sub_sample.count(s) for s in set(sub_sample)}
    return state_nums
# setting up problem
L = 200 # size of system
MC_samples = 1000 # number of samples
Temperature = 1 # "temperature" parameter
interaction = 1 # Strength of interaction between nearest neighbours
field = 0 # external field
# running MCMC
data = metropolis(L = L, MC_samples = MC_samples, Temperature = Temperature,
                  interaction = interaction, field = field)
results = record_state_statistics(data[0],n=4) # I was also interested in the probability
# Plotting
plt.figure(figsize=(15,10))
plt.subplot(2,1,1)
plt.imshow(np.transpose(data[0]))
plt.xticks([])
plt.yticks([])
plt.axis('tight')
plt.ylabel('Space',fontdict={'size':20})
plt.title('Critical dynamics in a 1-D Ising model',fontdict={'size':20})
plt.subplot(2,1,2)
plt.plot(data[2],'r')
plt.xlim((0,MC_samples))
plt.xticks([])
plt.yticks([])
plt.ylabel('Energy',fontdict={'size':20})
plt.xlabel('Time',fontdict={'size':20});
18.7 Exercise 1: Setting up a 2D grid
We are to simulate a simplified 2D surface consisting of magnetic dipoles using the Ising approach.
The Ising model represents a regular grid of points where each point has two possible states, spin up
or spin down. States like to have the same spin as their immediate neighbors so when a spin-down
state is surrounded by more spin-up states it will switch to spin-up and vice versa. Also, due to
random fluctuations, points might switch spins, even if this switch is not favourable.
So to begin, we will define a function to create an initial N × M grid with magnetic spin values of
±1.
For this exercise create 3 functions:
- normallattice: create an N × M lattice with uniform spin values
- randomlattice: create an N × M lattice with random spin values
- plotlattice: plot an image of the lattice with a colour code for the spin
During your later investigations, you can use these functions to test whether an initial random or uniform state is significant. One possible shape for these helpers is sketched below.
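A sketch of the three helpers, assuming the lattice is a NumPy array (the function names come from the list above; the bodies are one possible implementation):

import numpy as np
import matplotlib.pyplot as plt

def normallattice(N, M, spin=1):
    return np.full((N, M), spin)                   # uniform lattice, all spins equal

def randomlattice(N, M):
    return np.random.choice([-1, 1], size=(N, M))  # random +/-1 spins

def plotlattice(lattice):
    plt.imshow(lattice, cmap='RdBu')               # colour-coded spins
    plt.show()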
Implement a function to randomly select one of the particles in the lattice and return its coordinates
(i, j). Next create a function to flip the spin of the particle pointed by the (i,j) indices and return
the new lattice state.
18.9 Exercise 3: Nearest Neighbour algorithm
A key element of this model is calculating the combined spin state of the 4 nearest neighbours
around a given lattice point (i, j).
Write a function to return the combined spin state, respecting periodic boundary conditions (a sketch follows below). Perform some tests with a simple 5x5 lattice. Once you have convinced yourself that this works correctly, add the next component to your model.
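One possible shape of such a nearest-neighbour function, again assuming a NumPy array lattice and using modulo indexing for the periodic boundaries:

def neighbour_sum(lattice, i, j):
    N, M = lattice.shape
    # periodic boundary conditions: indices wrap around the lattice edges
    return (lattice[(i+1) % N, j] + lattice[(i-1) % N, j]
            + lattice[i, (j+1) % M] + lattice[i, (j-1) % M])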
The local energy is defined as the total interaction energy between the selected particle and its
immediate neighbours.
(a) Write a function to calculate this.
(b) Also write a function to calculate the total energy of the lattice.
Perform some tests with a simple 5x5 lattice, for example:
- Compare the energy of the lattice for different configurations of spins.
- What is the total energy of the system when all spins point up, down, or randomly?
Once you have convinced yourself that this works correctly, add the next component to your model.
$$M = \sum_i S_i \qquad (6)$$
18.12 Exercise 6: Implement the Metropolis Algorithm
At this point in your the code you should have all the nesscessary functions properly implemented
and the thermodynamic simulation of the system can take place.
1. Set up the system in an initial configuration;
2. Choose one of the particles at random using a Markov Monte Carlo approach
3. Calculate the energy change ∆E of the system if the spin of the chosen particle is flipped
4. If ∆E is negative, then select to flip the spin and go to step 7, otherwise ….
5. Generate a random number r such that 0 < r < 1
6. If this number is less than the probability of ∆E i.e. r < exp(−∆E/kB T ), then flip the spin.
7. Choose another spin of the lattice at random and repeat steps 2 to 6 a chosen number of times (NMCS)
Note, the Metropolis algorithm only contains ∆E/kBT, where kB is the Boltzmann constant. Therefore, by defining T′ = kBT/J, the values of J and kB are not required and you can work with T′ as a dimensionless parameter independent of the material chosen. Hence the expression in step (6) reduces to r < exp(−∆E/T′), with ∆E an integer with values between -4 and 4.
It is usual to reject the first NMCS/2 configurations in each Monte Carlo run in order to first establish thermalisation, and to consider only one configuration every NS to avoid correlations. For your final production run choose NMCS = 100000 or a larger number, while you should use a smaller value of NMCS for debugging. When using lattices of different size, comparable quality of results can be obtained using the same value of NMCS/NS, where NS is the number of spins. This ratio is referred to as the 'number of Monte Carlo configurations per spin' and indicates that an equal number of random choices is taken for each spin in the system.
This is the key section of your project where you get to perform statistical measurements of the 2D
system. These measurements are magnetisation, magnetic susceptibility, energy and specific heat.
If you perform many simulations at different temperatures, you should be in a position to observe
phase transitions and measure the transition or Curie temperature, kTc .
For a given temperature, we will wish to calculate the average magnetisation.
The average magnetisation per spin of the lattice is given by:
$$\langle m \rangle = \frac{1}{N_C}\sum_{i}^{N_S} m_i \qquad (8)$$
where, NC is the number of configurations included in the statistical averaging, and mi is the value
of the magnetisation for a given configuration.
Investigate the magnetisation over a range of temperature where the ferromagnetic to paramagnetic
phase transition occurs. It is best to start at a low temperature and work upwards. Analyse the
magnetization as a function of temperature and visualise the results. Also consider providing representative lattice images below, at, and above the transition temperature, kTc (or even a nice animated gif of the full temperature scan!).
Identify and discuss phase transitions in the evolution of the system.
Consider benchmarking your numerical model for magnetisation. Can you find analytic solutions
from research literature to compare with your numerical model predictions?
The magnetic susceptibility is another useful material parameter. This tells us how much the
magnetisation changes by increasing the temperature. From the results of Exercise 8 it should
be possible to calculate the magnetic susceptibility as a function of temperature. The magnetic
susceptibility is calculated using:
$$\chi = \frac{1}{T}\left[\langle m^2\rangle - \langle m\rangle^2\right] \qquad (9)$$
Interpret and discuss your results.
According to the fluctuation-dissipation theorem in statistical physics, the specific heat per spin of the lattice at temperature T is given by:

$$C = \frac{1}{T^2}\left[\langle E^2\rangle - \langle E\rangle^2\right] \qquad (10)$$

where E is the energy of the lattice. (The form of equation (10) matches the expression used in the main code above.) The thermal averages ⟨E⟩ and ⟨E²⟩ can again be calculated by the Monte Carlo method using:

$$\langle E\rangle = \frac{1}{N_C}\sum E \qquad (11)$$
Using this method, investigate the specific heat of the spin system in the vicinity of the phase
transition.
18.14 Exercise 8: Calculate statistical errors and estimate finite size effects
Estimating the statistical errors is very important when performing Monte Carlo simulations, i.e. a single simulation may produce a fluke result! Also, the finite size of the 2D space can affect measurements such as TC.
Repeat exercises 8 and 9 by varying the size of the lattice to 32x32, 64x64 etc. to estimate the finite size effects of the 2D grid.
Run each lattice simulation a number of times in order to estimate the statistical errors of the measurements. Save your data in text files for future analysis.