Physics - Quantum Mechanics For Undergraduates
Contents

1 WAVE FUNCTION
 1.1 Probability Theory
  1.1.1 Mean, Average, Expectation Value
  1.1.2 Average of a Function
  1.1.3 Mean, Median, Mode
  1.1.4 Standard Deviation and Uncertainty
  1.1.5 Probability Density
 1.2 Postulates of Quantum Mechanics
 1.3 Conservation of Probability (Continuity Equation)
  1.3.1 Conservation of Charge
  1.3.2 Conservation of Probability
 1.4 Interpretation of the Wave Function
 1.5 Expectation Value in Quantum Mechanics
 1.6 Operators
 1.7 Commutation Relations
 1.8 Problems
 1.9 Answers

2 DIFFERENTIAL EQUATIONS
 2.1 Ordinary Differential Equations
  2.1.1 Second Order, Homogeneous, Linear, Ordinary Differential Equations with Constant Coefficients
  2.1.2 Inhomogeneous Equation
 2.2 Partial Differential Equations
 2.3 Properties of Separable Solutions
  2.3.1 General Solutions
  2.3.2 Stationary States
  2.3.3 Definite Total Energy

I 1-DIMENSIONAL PROBLEMS

5 Bound States
 5.1 Boundary Conditions
 5.2 Finite 1-dimensional Well
  5.2.1 Regions I and III With Real Wave Number
  5.2.2 Region II
  5.2.3 Matching Boundary Conditions
  5.2.4 Energy Levels
  5.2.5 Strong and Weak Potentials
 5.3 Power Series Solution of ODEs
  5.3.1 Use of Recurrence Relation
 5.4 Harmonic Oscillator

19 SUPERCONDUCTIVITY
Chapter 1
WAVE FUNCTION
where N (j) = number of times the value j occurs. The reason we go from
0 to ∞ is because many of the N (j) are zero. Example N (3) = 0. No one
scored 3.
We can also write (1.4) as

j̄ = 10 × (3/15) + 9 × (2/15) + 8 × (4/15) + 7 × (5/15) + 5 × (1/15)   (1.6)

where for example 3/15 is the probability that a random student gets a grade of 10. Defining the probability as

P(j) ≡ N(j)/N   (1.7)
we have

⟨j⟩ ≡ j̄ = Σ_{j=0}^{∞} j P(j)   (1.8)
Any of the formulas (1.2), (1.5) or (1.8) will serve equally well for calculating
the mean or average. However in quantum mechanics we will prefer using
the last one (1.8) in terms of probability.
Note that when talking about probabilities, they must all add up to 1 (3/15 + 2/15 + 4/15 + 5/15 + 1/15 = 1). That is

Σ_{j=0}^{∞} P(j) = 1   (1.9)
If, for example, we measure the energy of an electron several times, then the j in (1.1) represents each energy measurement. (do Problem 1.1)
In quantum mechanics we use the word expectation value. It means
nothing more than the word average or mean. That is you have to make
a series of measurements to get it. Unfortunately, as Griffiths points out
[p.7, 15, Griffiths 1995] the name expectation value makes you think that
it is the value you expect after making only one measurement (i.e. most
probable value). This is not correct. Expectation value is the average of
single measurements made on a set of identically prepared systems. This is
how it is used in quantum mechanics.
Note that in general the average of the square is not the square of the average.
⟨j²⟩ ≠ ⟨j⟩²   (1.11)
The mode is simply the most frequently occurring data point. In our
grade example the mode is 7 because this occurs 5 times. (Sometimes data
will have points occurring with the same frequency. If this happens with 2 data points and they are widely separated we have what we call a bimodal distribution.)
For a normal distribution the mean, median and mode will occur at the
same point, whereas for a skewed distribution they will occur at different
points.
(see Figure 1.2)
The words expectation value and uncertainty are used loosely in the literature of quantum mechanics. It's much better (more precise) to use the words average and standard deviation instead of expectation value and uncertainty. Also it's much better (more precise) to use the symbol σ rather than Δ, otherwise we get confused with (1.13). (Nevertheless many quantum mechanics books use expectation value, uncertainty and Δ.)
The average squared distance or variance is simple to define. It is

σ² ≡ ⟨(Δj)²⟩ = (1/N) Σ_all (Δj)²
            = (1/N) Σ_all (j − ⟨j⟩)²
            = Σ_{j=0}^{∞} (j − ⟨j⟩)² P(j)   (1.15)
Note: Some books use 1/(N−1) instead of 1/N in (1.15). But if 1/(N−1) is used then equation (1.16) won't work out unless 1/(N−1) is used in the mean as well. For large samples 1/(N−1) ≈ 1/N. The use of 1/(N−1) comes from a data set where only N − 1 data points are independent. (E.g. people walking through 4 colored doors.) Suppose there are 10 people and 4 doors colored red, green, blue and white. If 2 people walk through the red door and 3 people through green and 1 person through blue then we deduce that 4 people must have walked through the white door. If we are making measurements of people then this last data set is not a valid independent measurement. However in quantum mechanics all of our measurements are independent and so we use 1/N.
However this way of calculating the variance can be a pain in the neck
especially for large samples. Let’s find a simpler formula which will give us
the answer more quickly. Expand (1.15) as
σ² = Σ (j² − 2j⟨j⟩ + ⟨j⟩²) P(j)
   = Σ j² P(j) − 2⟨j⟩ Σ j P(j) + ⟨j⟩² Σ P(j)

where we take ⟨j⟩ and ⟨j⟩² outside the sum because they are just numbers (⟨j⟩ = 8.0 and ⟨j⟩² = 64.0 in the above example) which have already been summed over. Now Σ j P(j) = ⟨j⟩ and Σ P(j) = 1. Thus

σ² = ⟨j²⟩ − 2⟨j⟩² + ⟨j⟩²

giving

σ² = ⟨j²⟩ − ⟨j⟩²   (1.16)
⟨j²⟩ = (1/15) [(100 × 3) + (81 × 2) + (64 × 4) + (49 × 5) + (25 × 1)] = 65.87

⟨j⟩² = 8² = 64

σ² = ⟨j²⟩ − ⟨j⟩² = 65.87 − 64 = 1.87
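As a quick numerical check of this grade example, here is a small sketch (Python with NumPy is assumed; the grades and frequencies are those quoted above, N = 15) that evaluates (1.8) and (1.16):

# Illustrative check of the grade example: N(10)=3, N(9)=2, N(8)=4, N(7)=5, N(5)=1
import numpy as np

grades = np.array([10, 9, 8, 7, 5])
counts = np.array([3, 2, 4, 5, 1])
N = counts.sum()                     # 15 students
P = counts / N                       # probabilities P(j) = N(j)/N

j_mean  = np.sum(grades * P)         # <j>   via (1.8)
j2_mean = np.sum(grades**2 * P)      # <j^2>
variance = j2_mean - j_mean**2       # sigma^2 via (1.16)

print(j_mean, j2_mean, variance)     # 8.0, 65.87, 1.87 (to two decimals)

The same numbers come out as in the worked example, which is the point of the shortcut formula (1.16).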
This equation defines the probability density ρ(x). The quantity ρ(x)dx is thus the probability that a given value lies between x and x + dx. This is just like the ordinary density ρ of water. The total mass of water is M = ∫ ρ dV, where ρ dV is the mass of water in the small volume dV.
Our old discrete formulas get replaced with new continuous formulas, as follows:

Σ_{j=0}^{∞} P(j) = 1   →   ∫_{−∞}^{∞} ρ(x) dx = 1   (1.18)

⟨j⟩ = Σ_{j=0}^{∞} j P(j)   →   ⟨x⟩ = ∫_{−∞}^{∞} x ρ(x) dx   (1.19)

⟨f(j)⟩ = Σ_{j=0}^{∞} f(j) P(j)   →   ⟨f(x)⟩ = ∫_{−∞}^{∞} f(x) ρ(x) dx   (1.20)

σ² ≡ ⟨(Δj)²⟩ = Σ_{j=0}^{∞} (j − ⟨j⟩)² P(j)   →   σ² ≡ ⟨(Δx)²⟩ = ∫_{−∞}^{∞} (x − ⟨x⟩)² ρ(x) dx
            = ⟨j²⟩ − ⟨j⟩²                                          = ⟨x²⟩ − ⟨x⟩²   (1.21)
In discrete notation j is the measurement, but in continuous notation the
measured variable is x. (do Problem 1.3)
In classical mechanics the state of a particle in 1 dimension is specified by x(t) and p(t), or equivalently the phase-space point Γ(x, p). In 3-dimensions this is ~x(t) and ~p(t), or Γ(x, y, px, py) or Γ(r, θ, pr, pθ). In quantum mechanics we shall see that the uncertainty principle does not allow us to specify x and p simultaneously. Thus in quantum mechanics our good coordinates will be things like E, L², Lz, etc. rather than x, p. Thus Ψ will be written as Ψ(E, L², Lz, ···) rather than Ψ(x, p). (E is the energy and L is the angular momentum.) Furthermore all information regarding the system resides in Ψ. We will see later that the expectation value of any physical observable is ⟨Q⟩ = ∫ Ψ* Q̂ Ψ dx. Thus the wave function will always give the values of any other physical observable that we require.
At this stage we don’t know what Ψ means but we will specify its meaning
in a later postulate.
−(ħ²/2m) ∂²Ψ/∂x² + UΨ = iħ ∂Ψ/∂t   (1.22)

where U ≡ U(x). Again this is simple enough. The equation governing the behavior of the wave function is the Schrödinger equation. (Here we have written it for a single particle of mass m in 1-dimension.)
Contrast this to classical mechanics where the time development of the momentum is given by F = dp/dt and the time development of position is given by F = mẍ. Or in the Lagrangian formulation the time development of the generalized coordinates is given by the second order differential equations known as the Euler-Lagrange equations. In the Hamiltonian formulation the time development of the generalized coordinates qᵢ(t) and generalized momenta pᵢ(t) are given by the first order differential Hamilton's equations, ṗᵢ = −∂H/∂qᵢ and q̇ᵢ = ∂H/∂pᵢ.
Let’s move on to the next postulate.
This postulate states that the wave function is actually related to a proba-
bility density
ρ ≡ |Ψ|2 = Ψ∗ Ψ (1.23)
where Ψ∗ is the complex conjugate of Ψ. Postulate 3 is where we find
out what the wave function really means. The basic postulate in quantum
mechanics is that the wave function Ψ(x, t) is related to the probability for
finding a particle at position x. The actual probability for this is, in 1-
dimension,
P = ∫_{−A}^{A} |Ψ|² dx   (1.24)

P is the probability for finding the particle somewhere between −A and A. This means that |Ψ|² dx is the probability of finding the particle between x and x + dx, which is why |Ψ|² is called the probability density and not simply the probability. All of the above discussion is part of Postulate 3. The "discovery" that
the symbol Ψ in the Schrödinger equation represents a probability density
took many years and arose only after much work by many physicists.
Usually Ψ will be normalized so that the total probability for finding the
particle somewhere in the universe will be 1, i.e. in 1-dimension
∫_{−∞}^{∞} |Ψ|² dx = 1   (1.26)

or in 3-dimensions

∫_{−∞}^{∞} |Ψ|² d³x = 1   (1.27)
∂²y/∂x² = (1/v²) ∂²y/∂t²   (1.28)

Here y = y(x, t) represents the height of the wave at position x and time t and v is the speed of the wave [Chow 1995, Fowles 1986, Feynman 1964 I]. From (1.22) the free particle (i.e. U = 0) Schrödinger equation is

∂²Ψ/∂x² = −i (2m/ħ) ∂Ψ/∂t   (1.29)
which sort of looks a bit like the wave equation. Thus particles will be rep-
resented by wave functions and we already know that a wave is not localized
in space but spread out. So too is a particle’s wave property spread out over
some distance and so we cannot say exactly where the particle is, but only
the probability of finding it somewhere.
Footnote: The wave properties of particles are discussed in all books on
modern physics [Tipler 1992, Beiser 1987, Serway 1990].
∇·j + ∂ρ/∂t = 0   (1.34)

which is the continuity equation. In 1-dimension this is

∂j/∂x + ∂ρ/∂t = 0   (1.35)

The continuity equation is a local conservation law. The conservation law in integral form is obtained by integrating over volume. Thus

∫ ∇·j dτ = ∮ j·da   (1.36)
The step d/dt ∫ ρ dτ = ∫ (∂ρ/∂t) dτ in (1.37) requires some explanation. In general ρ can be a function of position r and time t, i.e. ρ = ρ(r, t). However the integral ∫ ρ(r, t) dτ ≡ ∫ ρ(r, t) d³r will depend only on time, as the r coordinates will be integrated over. Because the whole integral depends only on time then d/dt is appropriate outside the integral. However because ρ = ρ(r, t) we should have ∂ρ/∂t inside the integral.
Thus the integral form of the local conservation law is

∮ j·da + dQ/dt = 0   (1.39)

Thus a change in the charge Q within a volume is accompanied by a flow of current across the boundary surface da. Actually j = i/area, so that ∮ j·da truly is a current

i ≡ ∮ j·da   (1.40)
Q ≡ Q_universe

or

Q_f = Q_i   (1.45)
Our above discussion is meant to be general. The conservation laws can
apply to electromagnetism, fluid dynamics, quantum mechanics or any other
physical theory. One simply has to make the appropriate identification of j,
ρ and Q.
Finally, we refer the reader to the beautiful discussion by Feynman [Feyn-
man 1964 I] (pg. 27-1) where he discusses the fact that it is relativity, and
the requirement that signals cannot be transmitted faster than light, that
forces us to consider local conservation laws.
Our discussion of charge conservation should make our discussion of prob-
ability conservation much clearer. Just as conservation of charge is implied
by the Maxwell equations, so too does the Schrödinger equation imply con-
servation of probability in the form of a local conservation law (the continuity
equation).
∂ρ/∂t = ∂/∂t (Ψ*Ψ) = Ψ* ∂Ψ/∂t + (∂Ψ*/∂t) Ψ

and according to the Schrödinger equation in 1-dimension

∂Ψ/∂t = (1/iħ) [−(ħ²/2m) ∂²Ψ/∂x² + UΨ] = (iħ/2m) ∂²Ψ/∂x² − (i/ħ) UΨ

∂Ψ*/∂t = −(iħ/2m) ∂²Ψ*/∂x² + (i/ħ) UΨ*

(assuming U* = U) we can write

∂ρ/∂t = (iħ/2m) (Ψ* ∂²Ψ/∂x² − Ψ ∂²Ψ*/∂x²)
      = ∂/∂x [ (iħ/2m) (Ψ* ∂Ψ/∂x − (∂Ψ*/∂x) Ψ) ]
Well that doesn’t look much like the continuity equation. But it does if we
define a probability current
j ≡ (iħ/2m) (Ψ ∂Ψ*/∂x − Ψ* ∂Ψ/∂x)   (1.46)

for then we have

∂ρ/∂t + ∂j/∂x = 0   (1.47)
which is the continuity equation in 1-dimension and represents our local law
for conservation of probability.
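As a quick illustration (not part of the text), the continuity equation can be checked symbolically for an assumed free-particle plane wave Ψ = A e^{i(kx − ωt)} with ω = ħk²/(2m), for which ρ is constant and j = ħk A²/m; the sketch below uses SymPy:

# Symbolic check of (1.46)-(1.47) for an assumed plane wave Psi = A*exp(i(kx - w t)),
# with w = hbar*k**2/(2m) so that Psi solves the free Schroedinger equation.
import sympy as sp

x, t, hbar, m, k, A = sp.symbols('x t hbar m k A', real=True, positive=True)
w = hbar*k**2/(2*m)
Psi = A*sp.exp(sp.I*(k*x - w*t))

rho = sp.conjugate(Psi)*Psi
j = sp.I*hbar/(2*m)*(Psi*sp.diff(sp.conjugate(Psi), x)
                     - sp.conjugate(Psi)*sp.diff(Psi, x))

print(sp.simplify(rho))                                # A**2 (constant density)
print(sp.simplify(j))                                  # hbar*k*A**2/m (constant current)
print(sp.simplify(sp.diff(rho, t) + sp.diff(j, x)))    # 0, i.e. (1.47) is satisfied

For this simple state both ρ and j are constant, so the continuity equation is satisfied trivially; for a general solution of the Schrödinger equation it holds by the derivation above.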
Now let's get the global law for conservation of probability. In 1-dimension we integrate the continuity equation (1.47) over ∫_{−∞}^{∞} dx to get

∫_{−∞}^{∞} (∂ρ/∂t) dx = −∫_{−∞}^{∞} (∂j/∂x) dx

= d/dt ∫_{−∞}^{∞} ρ dx = −[ (iħ/2m) (Ψ ∂Ψ*/∂x − Ψ* ∂Ψ/∂x) ]_{−∞}^{∞}
In analogy with our discussion about the current located at the boundary
of the universe, here we are concerned about the value of the wave function
Ψ(∞) at the boundary of a 1-dimensional universe (e.g. the straight line).
Ψ must go to zero at the boundary, i.e.
Ψ(∞) = 0
Thus

d/dt ∫_{−∞}^{∞} |Ψ|² dx = 0   (1.48)

which is our global conservation law for probability. It is entirely consistent with our normalization condition (1.26). Writing the total probability P = ∫ ρ dx = ∫ |Ψ|² dx we have

dP/dt = 0   (1.49)
analogous to global conservation of charge. The global conservation of prob-
ability law, (1.48) or (1.49), says that once the wave function is normalized,
say according to (1.26) then it always stays normalized. This is good. We
don’t want the normalization to change with time.
1.6 Operators
In quantum mechanics, physical quantities are no longer represented by ordi-
nary functions but instead are represented by operators. Recall the definition
of total energy E
T +U =E (1.51)
where U is the potential energy, and T is the kinetic energy
T = ½ mv² = p²/(2m)   (1.52)

where p is the momentum, v is the speed and m is the mass. If we multiply (16.1) by a wave function

(T + U)Ψ = EΨ   (1.53)

T → −(ħ²/2m) ∂²/∂x²   (1.54)

and

E → iħ ∂/∂t   (1.55)

The replacement (16.4) is the same as the replacement

p → −iħ ∂/∂x   (1.56)
(Actually p → +iħ ∂/∂x would have worked too. We will see in Problem 1.4 why we use the minus sign.) Let's use a little hat (∧) to denote operators and write the energy operator

Ê = iħ ∂/∂t   (1.57)

and the momentum operator

p̂ = −iħ ∂/∂x   (1.58)
What would the position operator or potential energy operator be? Well in the Schrödinger equation (1.1) U just sits there by itself with no differential operator. For a harmonic oscillator, for example, U = ½kx² and we just plug ½kx² into the Schrödinger equation as

[ −(ħ²/2m) ∂²/∂x² + ½kx² ] ψ(x, t) = iħ ∂ψ(x, t)/∂t   (1.59)

x̂ = x   (1.61)

That is, the position operator is just the position itself.
OK, we know how to write the expectation value of position. It's given in (1.50), but how about the expectation value of momentum? I can't just write ⟨p⟩ = ∫_{−∞}^{∞} p|Ψ|² dx = ∫_{−∞}^{∞} pΨ*Ψ dx because I don't know what to put in for p under the integral. But wait! We just saw that in quantum mechanics p is an operator given in (1.58). The proper way to write the expectation value is

⟨p̂⟩ = ∫_{−∞}^{∞} Ψ* (−iħ ∂/∂x) Ψ dx   (1.62)
    = −iħ ∫_{−∞}^{∞} Ψ* (∂Ψ/∂x) dx   (1.63)
⟨Q̂⟩ ≡ ∫ Ψ* Q̂ Ψ dx   (1.64)

which is a generalization of (1.19) or (1.50). This would give

⟨x⟩ = ∫ Ψ* x Ψ dx
Example 1.6.1 Write down the velocity operator v̂, and its expectation value.

Solution We have p̂ = −iħ ∂/∂x and p = mv. Thus we define the velocity operator

v̂ ≡ p̂/m

Thus

v̂ = −(iħ/m) ∂/∂x
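As an illustration (not part of the text), ⟨x̂⟩ and ⟨p̂⟩ can be evaluated numerically for an assumed normalized Gaussian wave packet ψ(x) = (2πσ²)^{−1/4} e^{ik₀x − x²/(4σ²)}, for which one expects ⟨x̂⟩ = 0 and ⟨p̂⟩ = ħk₀. The sketch below uses NumPy with ħ = 1 and illustrative values of σ and k₀:

# Numerical evaluation of <x> and <p> for an assumed Gaussian packet (hbar = 1).
import numpy as np

hbar, sigma, k0 = 1.0, 1.0, 2.5          # illustrative parameters
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

psi = (2*np.pi*sigma**2)**(-0.25) * np.exp(1j*k0*x - x**2/(4*sigma**2))

norm   = np.trapz(np.abs(psi)**2, x)                       # should be ~1
x_mean = np.trapz(np.conj(psi)*x*psi, x).real              # <x>  ~ 0
dpsi   = np.gradient(psi, dx)                              # d(psi)/dx
p_mean = np.trapz(np.conj(psi)*(-1j*hbar)*dpsi, x).real    # <p>  ~ hbar*k0 = 2.5

print(norm, x_mean, p_mean)

The momentum expectation value is obtained by letting the operator −iħ ∂/∂x act on Ψ inside the integral, exactly as in (1.62).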
1.7 Commutation Relations
[A, B] ≡ AB − BA
(1.70)
Where AB ≡ A ◦ B which reads A “operation” B. Thus for integers under
addition we would have [3, 2] ≡ (3 + 2) − (2 + 3) = 5 − 5 = 0 or for integers
under multiplication we would have [3, 2] = (3 × 2) − (2 × 3) = 6 − 6 = 0.
Thus if two mathematical objects, A and B, commute then their commutator
[A, B] = 0. The reason why we introduce the commutator is if two objects
do not commute then the commutator tells us what is "left over". Note a property of the commutator is [A, B] = −[B, A]. Classically [x, p] = xp − px = 0 because x and p are just algebraic quantities, not operators. To work out quantum mechanical commutators we need to operate on Ψ. Thus

[x̂, p̂]Ψ = (x̂p̂ − p̂x̂)Ψ = x(−iħ ∂Ψ/∂x) − (−iħ) ∂/∂x (xΨ) = −iħ x ∂Ψ/∂x + iħΨ + iħ x ∂Ψ/∂x = iħΨ

giving

[x̂, p̂] = iħ   (1.75)
The commutator is a very fundamental quantity in quantum mechanics. In section 1.6 we "derived" the Schrödinger equation by saying (T̂ + Û)Ψ = ÊΨ where T̂ ≡ p̂²/(2m) and then introducing p̂ ≡ −iħ ∂/∂x, Ê ≡ iħ ∂/∂t and x̂ ≡ x. The essence of quantum mechanics is these operators. Because they are operators they satisfy (1.75).

An alternative way of introducing quantum mechanics is to change the classical commutation relation [x, p]_classical = 0 to [x̂, p̂] = iħ, which can only be satisfied by x̂ = x and p̂ = −iħ ∂/∂x.
Thus to “derive” quantum mechanics we either postulate operator defi-
nitions (from which commutation relations follow) or postulate commutation
relations (from which operator definitions follow). Many advanced formula-
tions of quantum mechanics start with the commutation relations and then
later derive the operator definitions.
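A one-line symbolic check of (1.75), acting on an arbitrary test function f(x), can be done with SymPy (a sketch, not part of the text; the operator definitions x̂ = x and p̂ = −iħ ∂/∂x are the ones above):

# Check [x, p] f = i*hbar*f for the operators x -> x, p -> -i*hbar*d/dx.
import sympy as sp

x, hbar = sp.symbols('x hbar', real=True, positive=True)
f = sp.Function('f')(x)

p = lambda g: -sp.I*hbar*sp.diff(g, x)      # momentum operator acting on g(x)
commutator = x*p(f) - p(x*f)                # (x p - p x) f

print(sp.simplify(commutator))              # I*hbar*f(x)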
1.8 Problems
1.1 Suppose 10 students go out and measure the length of a fence and the
following values (in meters) are obtained: 3.6, 3.7, 3.5, 3.7, 3.6, 3.7, 3.4, 3.5,
3.7, 3.3. A) Pick a random student. What is the probability that she made
a measurement of 3.6 m? B) Calculate the expectation value (i.e. average
or mean) using each formula of (1.2), (1.5), (1.8).
1.2 Using the example of problem 1.1, calculate the variance using both
equations (1.15) and (1.16).
1.9 Answers
1.2 0.0181
1.3 Griffiths Problem 1.6. A) A = √(λ/π)   B) ⟨x⟩ = a, ⟨x²⟩ = a² + 1/(2λ), σ = 1/√(2λ)
Chapter 2
DIFFERENTIAL EQUATIONS
Hopefully every student studying quantum mechanics will have already taken
a course in differential equations. However if you have not, don’t worry. The
present chapter presents the very bare bones of knowledge that you will need
for your introduction to quantum mechanics.
where now a1 and a2 are ordinary constants. The general solution to equa-
tion (2.2) is impossible to write down. You have to tell me what a1 (x), a2 (x)
and k(x) are before I can solve it. But for (2.4) I can write down the general
solution without having to be told the value of a1 and a2 . I can write the
general solution in terms of a1 and a2 and you just stick them in when you
decide what they should be.
Now I am going to just tell you the answer for the solution to (2.4).
Theorists and mathematicians hate that! They would prefer to derive the
general solution and that’s what you do in a differential equations course.
But it's not all spoon-feeding. You can always (and always should) check your answer by substituting back into the differential equation to see if it works.
That is, I will tell you the general answer y(x). You work out y 0 and y 00 and
substitute into y 00 + a1 y 0 + a2 y. If it equals 0 then you know your answer is
right. (It may not be unique but let’s forget that for the moment.)
First of all try the solution
y(x) = r erx (2.5)
Well y 0 (x) = r2 erx and y 00 (x) = r3 erx and upon substitution into (2.4) we
get
r2 + a1 r + a2 = 0 (2.6)
This is called the Auxiliary Equation. Now this shows that y(x) = r e^{rx} will not be a solution for any old value of r. But if we pick r carefully, so that it satisfies the Auxiliary equation, then y(x) = r e^{rx} is a solution. Now the auxiliary equation is just a quadratic equation for r whose solution is
r = [−a₁ ± √(a₁² − 4a₂)] / 2   (2.7)

Now if a₁² − 4a₂ > 0 then r will have 2 distinct real solutions r₁ = [−a₁ + √(a₁² − 4a₂)]/2 and r₂ = [−a₁ − √(a₁² − 4a₂)]/2. If a₁² − 4a₂ = 0, then r will have 1 real solution r₁ = r₂ = r = −a₁/2. If a₁² − 4a₂ < 0 then r will have 2 complex solutions r₁ ≡ α + iβ and r₂ = α − iβ, where α = −a₁/2 and iβ = √(a₁² − 4a₂)/2. We often just write this as r = α ± iβ. We are now in a position to write down the general solution to (2.4). Let's fancy it up and call it a theorem.
Notes to Theorem 1
i) If r1 = r2 = r is a single root then r must be real (see discussion above).
ii) If r₁ and r₂ are distinct and complex then they must be of the form r₁ = α + iβ and r₂ = α − iβ (see discussion above). In this case the solution is

y(x) = e^{αx}(A e^{iβx} + B e^{−iβx}) = e^{αx}(C cos βx + D sin βx) = e^{αx} F cos(βx + δ) = e^{αx} G sin(βx + γ)

which are alternative ways of writing the solution. (do Problem 2.1)
Generally speaking the exponentials are convenient for travelling waves
(free particle) and the sines and cosines are good for standing waves
(bound particle).
r² + ω² = 0
r = ±iω

x(t) = B e^{iωt} + C e^{−iωt}
     = D cos ωt + E sin ωt
     = F cos(ωt + δ)
     = G sin(ωt + γ)
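A small numerical sketch of the recipe above (NumPy only; the repeated-root form quoted in the comment is the standard one, stated here as an assumption since the theorem text is abridged): solve the auxiliary equation r² + a₁r + a₂ = 0 and classify the roots. For the oscillator example above, a₁ = 0 and a₂ = ω², giving r = ±iω:

# Solve the auxiliary equation r**2 + a1*r + a2 = 0 and classify the general solution.
import numpy as np

def classify(a1, a2):
    r1, r2 = np.roots([1.0, a1, a2])
    disc = a1**2 - 4*a2
    if disc > 0:
        kind = "two distinct real roots: y = A e^{r1 x} + B e^{r2 x}"
    elif disc == 0:
        kind = "repeated real root: y = (A + B x) e^{r x}"
    else:
        kind = "complex roots alpha +/- i beta: y = e^{alpha x}(C cos(beta x) + D sin(beta x))"
    return r1, r2, kind

omega = 3.0                       # illustrative value
print(classify(0.0, omega**2))    # roots +/- 3i, complex case (simple harmonic motion)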
y = yP + yH

Table 2.1

if f(x) =      try yP =
b e^{αx}       B e^{αx}
This Table gives you a summary of how to find yP . For example if the
differential equation is
y'' + 3y' + 2y = 3 sin 2x

then the particular solution will be of the form yP = B sin 2x + C cos 2x (the cosine piece is needed here because the y' term mixes sines and cosines). In other words, you essentially copy the functional form of the inhomogeneous term, with new constants, and you have the particular solution.
That’s why I like to call it the method of Copying the Inhomogeneous Term.
There is one caveat however. If a function the same as the inhomogeneous
term appears in the homogeneous solution then the method won’t work. Try
multiplying by x (or some higher power) to get yP .
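As a check (a sketch using SymPy, not part of the text), dsolve applied to y'' + 3y' + 2y = 3 sin 2x returns the homogeneous part C₁e^{−2x} + C₂e^{−x} plus a particular part containing both sin 2x and cos 2x, as discussed above:

# Verify the particular solution of y'' + 3y' + 2y = 3*sin(2x) with SymPy.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 2) + 3*y(x).diff(x) + 2*y(x), 3*sp.sin(2*x))
sol = sp.dsolve(ode, y(x))
print(sol)
# e.g. y(x) = C1*exp(-2*x) + C2*exp(-x) - 9*cos(2*x)/20 - 3*sin(2*x)/20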
We look for a separable solution, writing Ψ(x, t) as a product of two functions ψ(x) and f(t), which are each functions of a single variable. That is

Ψ(x, t) = ψ(x) f(t)   (2.9)
Now all you have to do is substitute this ansatz back into the original PDE and everything will fall out. We calculate ∂Ψ/∂x = (dψ/dx) f(t) and ∂²Ψ/∂x² = (d²ψ/dx²) f(t) and ∂Ψ/∂t = ψ(x) df/dt to give

−(ħ²/2m) (d²ψ/dx²) f(t) + U(x) ψ(x) f(t) = iħ ψ(x) df/dt

Divide the whole thing through by ψ(x)f(t) and

−(ħ²/2m) (1/ψ) d²ψ/dx² + U(x) = iħ (1/f) df/dt
But now notice that the left hand side is a function of x only and the right
hand side is a function of t only. The only way two different functions of two
different variables can always be equal is if they are both constant. Let’s call
the constant E and make both sides equal to it. Thus
[ −(ħ²/2m) d²/dx² + U ] ψ = Eψ   (2.10)

and

(1/f) df/dt = E/(iħ) = −(i/ħ) E   (2.11)
and these are just two ordinary differential equations, which we know how to solve! Equation (2.10) is only a function of x and is called the time-independent Schrödinger equation (for 1 particle in 1 dimension). We shall spend a whole chapter on solving it for different potential energy functions U(x). That is, we will get different solutions ψ(x) depending on what function U(x) we put in. The second equation (2.11) can be solved right away because it doesn't have any unknown functions in it. It's just an easy first order ODE which has the solution

f(t) = C e^{−(i/ħ)Et}   (2.12)
(do Problem 2.4)

Thus the separable solution to the time dependent Schrödinger equation (1.1) is

Ψ(x, t) = ψ(x) e^{−(i/ħ)Et}   (2.13)

(We will absorb normalization constants into ψ(x).)
What is the constant E? Obviously it's the total energy because the kinetic energy operator is T = −(ħ²/2m) d²/dx² and (2.10) is just

(T + U)ψ = Eψ

Compare to (16.3).
where cn are the expansion coefficients. We will see specifically what these
are below.
For such a solution the probability density |Ψ(x, t)|² = ψ*(x)ψ(x) is constant in time because the time dependence of the wave function cancels out. This also happens for any expectation value
⟨Q⟩ = ∫ Ψ*(x, t) Q Ψ(x, t) dx = ∫ ψ*(x) Q ψ(x) dx   (2.16)

because Q does not contain any time dependence. Remember all dynamical variables can be written as functions of x or p only, Q = Q(x̂, p̂) = Q(x, −iħ d/dx).
With equation (2.13) our normalization condition (1.26) can alternatively be written

∫_{−∞}^{∞} ψ*(x) ψ(x) dx = 1   (2.17)
⟨H²⟩ = E²   (2.22)

so that

σ²_H = ⟨H²⟩ − ⟨H⟩² = 0   (2.23)

The uncertainty in the total energy is zero! Remember that the expectation value is the average of a set of measurements on an ensemble of identically prepared systems. However if the uncertainty is zero then every measurement will be identical. Every measurement will give the same value E.
2.3.5 Nodes
A node is a point where the function ψ(x) becomes zero (we are not including
the end points here). As n increases the number of nodes in the wave function
increases. In general the wave function ψn (x), corresponding to En , will have
n − 1 nodes.
Thus ψ1 has 0 nodes, ψ2 has 1 node, ψ3 has 2 nodes. Thus the number of nodes in the wave function tells us which energy level En corresponds to it.
The Kronecker delta is an object which has the effect of killing sums. Thus

Σ_j C_j δ_{ij} = C_i   (2.28)
(Prove this.)
The inner product of two arbitrary vectors (2.24) can be alternatively written

A·B = Σ_i Σ_j A_i B_j ê_i · ê_j = Σ_i Σ_j A_i B_j δ_{ij} = Σ_i A_i B_i

Thus

A·B = Σ_i A_i B_i   (2.29)

which could have served equally well as our definition of inner product. The components or expansion coefficients A_i can now be calculated by taking the inner product between the vector A and the basis vector ê_i. Thus

ê_i · A = ê_i · Σ_j A_j ê_j = Σ_j A_j ê_i · ê_j = Σ_j A_j δ_{ij} = A_i

giving

A_i = ê_i · A   (2.30)
Now let’s look at functions. It turns out that the separable solutions
form a function space which is a complex infinite-dimensional vector space,
often called a Hilbert space [Byron 1969]. For an ordinary vector A the
components Ai are labelled by the discrete index i. For a function f the
components f (x) are labelled by the continuous index x. The discrete index
i tells you the number of dimensions, but x is a continuous variable and so
the function space is infinite dimensional.
The inner product of two functions g and f is defined by [Byron 1969,
Pg. 214]
⟨g | f⟩ ≡ ∫_a^b g*(x) f(x) dx   (2.31)
where we now use the symbol hg | f i instead of A·B to denote inner product.
This definition of inner product is exactly analogous to (2.29), except that
we have the complex component g ∗ (x).
The word orthonormal means orthogonal and normalized. An orthonor-
mal (ON) set of functions obeys

∫ ψ*_m(x) ψ_n(x) dx = δ_{mn}   (2.32)
Completeness
All separable solutions ψ_n(x) form an orthonormal set and the vast majority are also complete. A set of functions {ψ_n(x)} is complete if any other function can be expanded in terms of them via

f(x) = Σ_n c_n ψ_n(x)   (2.35)
Now this looks exactly like the way we expand vectors in terms of basis
vectors in (2.25), where the components Ai correspond to the expansion
coefficients cn . For this reason the functions ψn (x) are called basis functions
and they are exactly analogous to basis vectors.
Equation (2.30) tells us how to find components A_i, and similarly we would like to know how to calculate expansion coefficients c_n. Looking at (2.30) you can guess the answer to be

c_n = ⟨ψ_n | f⟩ = ∫ ψ*_n(x) f(x) dx   (2.36)

which is just the inner product of the basis functions with the function.
(do Problem 2.5)
One last thing I want to mention is the so-called completeness relation or
closure relation (Pg. 69 [Ohanian 1990]). But first I need to briefly introduce
the Dirac delta function, defined via

∫_{−∞}^{∞} f(x) δ(x − a) dx ≡ f(a)   (2.37)
Just as the Kronecker delta kills sums, the Dirac delta function is defined to kill integrals. (We shall discuss the Dirac delta in
more detail later on.) Now recall that a complete set of basis functions
allows us to expand any other function in terms of them as in (2.35), where
the coefficients cn are given in (2.36). Thus
f(x) = Σ_n c_n ψ_n(x)
     = Σ_n [ ∫ ψ*_n(x′) f(x′) dx′ ] ψ_n(x)
     = ∫ f(x′) [ Σ_n ψ*_n(x′) ψ_n(x) ] dx′   (2.38)

and for the left hand side to equal the right hand side we must have

Σ_n ψ*_n(x′) ψ_n(x) = δ(x′ − x)   (2.39)
which is called the completeness relation or the closure relation. One can
see that if a set of basis functions {ψn (x)} satisfies the completeness relation
then any arbitrary function f (x) can be expanded in terms of them.
c_n(t) ≡ c_n(0) e^{−(i/ħ)E_n t}   (2.40)

or

Ψ(x, 0) = Σ_n c_n(0) ψ_n(x) = Σ_n c_n ψ_n(x)   (2.42)
and therefore we still expand our function Ψ in terms of the complete set of
basis functions ψn (x).
2.4 Problems
2.2 Refer to Example 2.1.1. Determine the constants for the other 3 forms
of the solution using the boundary condition (i.e. determine B, C, F , δ, G,
γ from boundary conditions). Show that all solutions give x(t) = A sin ωt.
2.3 Check that the solution given in Example 2.1.2 really does satisfy the differential equation. That is, substitute x(t) = E cos(ωt + δ) + [A/(ω² − α²)] cos αt and check that ẍ + ω²x = A cos αt is satisfied.
2.5 Answers
2.1
C = A + B, D = i(A − B)   for C cos kx + D sin kx
A = (F/2) e^{iα}, B = (F/2) e^{−iα}   for F cos(kx + α)
A = (G/2i) e^{iβ}, B = −(G/2i) e^{−iβ}   for G sin(kx + β)

2.2
B = A/(2i), C = −A/(2i)
δ = π/2, F = −A
γ = π, G = A
Chapter 3
INFINITE 1-DIMENSIONAL BOX
One of the main things we are interested in calculating with quantum me-
chanics is the spectrum of the Hydrogen atom. This atom consists of two
particles (proton and electron) and it moves about in three dimensions. The
Schrödinger equation (2.10) is a one particle equation in one dimension. The particle has mass m and moves in the x direction. We will eventually
write down and solve the two body, three dimensional Schrödinger equation
but it’s much more complicated to deal with than the one particle, one di-
mensional equation. Even though the one particle, one dimensional equation
is not very realistic, nevertheless it is very worthwhile to study for the fol-
lowing reasons. i) It is easier to solve and therefore we can get some practice
with solutions before attacking more difficult problems. ii) It contains many
physical phenomena (such as energy levels and tunnelling) that are found
in the two particle, three dimensional problem and in the real world. It
is much better to learn about these phenomena from a simple equation to
begin with.
In this chapter we are going to study the single particle in an infinite
1-dimensional box. This is one of the simplest examples to study in quantum
mechanics and it lets us illustrate many of the unusual features of quantum
theory via a simple example.
Another important reason for studying the infinite box at this stage is
that we will then be able to understand the postulates of quantum mechanics
much more clearly.
Imagine putting a marble in an old shoe box and then wobbling the box
around so that the marble moves faster and faster. Keep the box on the
floor and wobble it around. Then the marble will have zero potential energy
U and its kinetic energy T will increase depending on how fast the marble
moves. The total energy E = T + U will just be E = T because U = 0. In
principle the marble can have any value of E. It just depends on the speed
of the marble.
We are going to study this problem quantum mechanically. We will use
an idealized box and let its walls be infinitely high. We will also restrict it
to one dimension. (Even though this is an idealization, you could build such
a box. Just make your box very high and very thin.) There is a picture of
our infinite 1-dimensional (1-d) box in Figure 3.1.
What we want to do is to calculate the energy of a marble placed in the
box and see how it compares to our classical answer (which was that E can
be anything).
Even though Figure 3.1 looks like a simple drawing of a 1-d box it can also
be interpreted as a potential energy diagram with a vertical axis representing
U (x) and the horizontal axis being x. Now inside the box U = 0 because
the marble is just rattling around on the floor of the box. Because the box
is infinitely high then the marble can never get out. An equivalent way of
saying this is to say that U (x ≤ 0) = ∞ and U (x ≥ a) = ∞ for a box of
width a.
−(ħ²/2m) ψ'' = Eψ

where ψ'' ≡ d²ψ/dx². Let's condense all of these annoying constants into one via

k ≡ √(2mE)/ħ   (3.1)

giving

ψ'' + k²ψ = 0   (3.2)
r2 + k2 = 0 (3.3)
(We could have picked any of the other solutions, e.g. ψ(x) = E cos(kx + δ).
These other solutions are explored in problem 3.2.) Thus ψ(x) is a sinusoidal
function! Let’s try to figure out the unknown constants. We would assume
it works just like the classical case (see Example 2.1.1) where the constants
are determined from the boundary conditions.
What are the boundary conditions? Now we have to think quantum
mechanically, not classically. In Figure 3.1 I have written ψ = 0 in the
regions outside the infinite box. This is because if the walls are truly infinite
the marble can never get out of the box. (It can’t even tunnel out. See later.)
Thus the probability of finding the marble outside the box is zero. This is our
quantum mechanical boundary condition. Mathematically we write ψ(x ≤ 0) = 0 and ψ(x ≥ a) = 0.
Now the marble is inside the box and so the probability of finding the marble
inside the box is not zero.
OK, so we know the marble won’t be outside the box, but it will be inside.
But what happens right at the edge of the wall? Is the marble allowed to be
there or not? ψ(x) is supposed to be a well behaved mathematical function.
That is, it is not allowed to have infinities, spikes and jumps (discontinuities).
In fancy language we say that the wave function and its derivatives must be
continuous. Thus the wave function outside the box must equal the wave
function inside the box at the boundary. Therefore the wave function inside
the box must be zero at the wall.
ψ(x = 0) = 0 = C (3.9)
which means
sin ka = 0 (3.12)
(We don’t pick D = 0, otherwise we end up with nothing. Actually nothing
is a solution, but it’s not very interesting.) This implies that ka = 0, ±π,
±2π · · ·. The ka = 0 solution doesn’t interest us because again we get
nothing (ψ = 0). Also the negative solutions for ka are no different from
the positive solutions because sin(−ka) = − sin ka and the minus sign can
be absorbed into the normalization. The distinct solutions are

k = nπ/a      n = 1, 2, 3 ...   (3.13)
Now from (3.1) this gives

E_n = ħ²k²/(2m) = n² π²ħ²/(2ma²)   (3.14)

Thus the boundary condition tells us the energy! (not the amplitude). And see how interesting this energy is. It depends on the integer n. Thus the energy cannot be anything (as in the classical case) but comes in discrete steps according to the value of n. We say that the energy is quantized. Because there are only certain values of E allowed, depending on the value of n, I have added a subscript to E in (3.14) as E_n. Thus E_1 = π²ħ²/(2ma²), E_2 = 4 π²ħ²/(2ma²) = 4E_1, E_3 = 9E_1, E_4 = 16E_1 etc. Thus

E_n = n² E_1   (3.15)
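To get a feel for the numbers, here is a short sketch (illustrative, not from the text: an electron of mass m ≈ 9.11×10⁻³¹ kg in a box of assumed width a = 1 nm, SI units) that evaluates E_n = n²π²ħ²/(2ma²):

# Energy levels E_n = n^2 * pi^2 * hbar^2 / (2 m a^2) for an electron in a 1 nm box
# (illustrative parameters, SI units).
import numpy as np

hbar = 1.054571817e-34      # J s
m    = 9.1093837015e-31     # kg (electron mass)
a    = 1.0e-9               # m  (assumed box width)
eV   = 1.602176634e-19      # J per eV

E1 = np.pi**2 * hbar**2 / (2*m*a**2)
for n in range(1, 5):
    print(n, n**2 * E1 / eV, "eV")   # roughly 0.38, 1.5, 3.4, 6.0 eV

The n² spacing of (3.15) is visible directly in the printed values.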
3.2 Wave Function
Example 3.1 Check that ψ2(x) = √(2/a) sin(2πx/a) is normalized.

Solution We need to check that ∫_0^a ψ2* ψ2 dx = 1

∫_0^a ψ2*(x) ψ2(x) dx = (2/a) ∫_0^a sin²(2πx/a) dx = (2/a)(a/2) = 1
Example 3.2 Check that ψ2(x) and ψ1(x) are mutually orthogonal.

Solution We need to check that ∫_0^a ψ2*(x) ψ1(x) dx = 0

Define I ≡ ∫_0^a ψ2*(x) ψ1(x) dx = (2/a) ∫_0^a sin(2πx/a) sin(πx/a) dx
         = (4/a) ∫_0^a sin²(πx/a) cos(πx/a) dx      using sin 2θ = 2 sin θ cos θ

Let u = sin(πx/a) ⇒ du = (π/a) cos(πx/a) dx.

Thus I = (4/a)(a/π) ∫_0^0 u² du = 0

because u = 0 for x = 0 and u = 0 for x = a.
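The two checks in Examples 3.1 and 3.2 can also be done numerically (a sketch using SciPy, with an arbitrary box width a = 1; the wave functions ψ_n = √(2/a) sin(nπx/a) are the ones used above):

# Numerical check that the infinite-well wave functions are orthonormal.
from scipy.integrate import quad
import numpy as np

a = 1.0
psi = lambda n, x: np.sqrt(2/a) * np.sin(n*np.pi*x/a)

for n in (1, 2):
    for m in (1, 2):
        val, _ = quad(lambda x: psi(n, x)*psi(m, x), 0, a)
        print(n, m, round(val, 10))   # 1 for n == m, 0 otherwise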
Figure 3.4: Energy Levels and Wave Functions for Infinite 1-dimensional
Box
3.3 Problems
3.2 For the infinite 1-dimensional box, show that the same energy levels and wave functions as obtained in (3.14) and (3.19) also arise if the other solution y(x) = A e^{(α+iβ)x} + B e^{(α−iβ)x} is used from Theorem 1.
3.4 Answers
Chapter 4
POSTULATES OF QUANTUM MECHANICS
Solution

⟨p̂⟩ = −iħ ∫_{−∞}^{∞} Ψ* (∂Ψ/∂x) dx

therefore ⟨p̂⟩* = iħ ∫_{−∞}^{∞} Ψ (∂Ψ*/∂x) dx
            = iħ ∫_{−∞}^{∞} Ψ dΨ*
            = iħ { [ΨΨ*]_{−∞}^{∞} − ∫_{−∞}^{∞} Ψ* dΨ }
B̂φ = φ²

or

B̂φ = φ + 7

or

B̂φ = (dφ/dx)²

or

B̂φ = bφ   (4.1)
where b is a number. It’s this last operator equation which is the special one
and it’s just like Ĥψ = Eψ. In fact this last operator equation is so special
that it’s given a name. It’s called an eigenvalue equation. An eigenvalue
equation is one in which an operator B̂ acts on a function φ and gives back
simply the original function multiplied by a number b. In such a case the
function φ is called an eigenfunction and the number b is called an eigenvalue.
The eigenvalues and eigenfunctions need not be unique. There might be lots of them, in which case we would write

B̂φ_i = b_i φ_i   (4.2)
4.2 Postulate 4
Having discussed Hermitian operators and eigenvalue equations, we are ready
to formulate our next postulate.
We have also seen that this postulate makes sense. With the energy
eigenfunctions ψn (x) we are able to expand the general eigenfunctions as
Ψ(x, t) = Σ_n c_n Ψ_n(x, t) = Σ_n c_n ψ_n(x) e^{−(i/ħ)E_n t}   (4.3)

Ψ(x, t₀) = Σ_n c′_n ψ_n(x)   (4.4)

where

c′_n ≡ c_n e^{−(i/ħ)E_n t₀}
with the time dependence absorbed into the constant. Equation (4.4) is
very important. Let us think specifically about the infinite 1-dimensional
box. The wave function for the whole box is Ψ(x, to ), which is a linear
combination of all individual solutions ψn (x). Equation (4.4) could easily
be verified. Take each solution ψn (x), and add them all up and you will still
have a solution! Thus the most general state of the infinite 1-dimensional
box is Ψ(x).
Again let us illustrate this with the 1-dimensional box. The box is in the general state Ψ = Σ_n c_n ψ_n(x). If you make a measurement then the probability of finding the system in state ψ_n(x) is |c_n|².
The probability is the integral of the probability density. Thus the normalization is

1 = ∫_{−∞}^{∞} |Ψ|² dx = Σ_n Σ_m c*_n c_m ∫ ψ*_n(x) ψ_m(x) dx e^{(i/ħ)(E_m − E_n)t}
  = Σ_n Σ_m c*_n c_m δ_{nm} e^{(i/ħ)(E_m − E_n)t}
  = Σ_n c*_n c_n = Σ_n |c_n|²
immediate subsequent measurements will still give ψ3 and ψ10 . If the wave
functions had not collapsed then we might have got ψ5 and ψ13 on subsequent
measurements. That is we would keep measuring a different energy for the
system which is crazy.
5. (Expansion Postulate) {φ_i} form a CON set, such that any wave function can be written ψ = Σ_i c_i φ_i.
Solution

⟨B̂⟩ ≡ ∫ Ψ*(x) B̂ Ψ(x) dx   (at fixed time)
    = ∫ Σ_i c*_i φ*_i B̂ Σ_j c_j φ_j dx      using the Expansion postulate
    = Σ_i Σ_j c*_i c_j ∫ φ*_i B̂ φ_j dx
    = Σ_i Σ_j c*_i c_j ∫ φ*_i b_j φ_j dx
    = Σ_i Σ_j c*_i c_j b_j ∫ φ*_i φ_j dx
    = Σ_i Σ_j c*_i c_j b_j δ_{ij} = Σ_i c*_i c_i b_i
    = Σ_i |c_i|² b_i   (4.5)
using our average formula from (1.8). Thus the expectation value
is the average of the eigenvalues.
The interpretation of (4.5) is that if the state of a system is ψ then
|ci |2 is the probability of a measurement yielding the eigenvalue
bi .
Thus in a measurement it is the eigenvalues that are actually
measured (with a certain probability).
Solution ψ = φ_i

⟨B̂⟩ = ∫ ψ* B̂ ψ dx = ∫ φ*_i b_i φ_i dx = b_i ∫ φ*_i φ_i dx = b_i
4.7 Problems
4.1 Show that the expectation value of the Hamiltonian or Energy operator is real (i.e. show that ⟨Ê⟩ is real, where Ê = iħ ∂/∂t).
4.8 Answers
A)  A = √(30/a⁵)

B)  ⟨x⟩ = a/2
    ⟨p⟩ = 0
    ⟨H⟩ = 5ħ²/(ma²)

c1 = 0.99928
c2 = 0
c3 = 0.03701
Part I
1-DIMENSIONAL PROBLEMS
Chapter 5
Bound States
ψ'' + (2m/ħ²)(E − U)ψ = 0   (5.1)
In this chapter we will only consider bound states, E < 0. In Regions I and
III (x < −a and x > a), we have U = 0 so that the Schrödinger equation
(5.1) is
ψ'' + (2mE/ħ²) ψ = 0   (5.6)

but despite the similarity to (3.1) and (3.2) it is not the same equation because here E is negative. Define

κ̄² ≡ 2mE/ħ²   (5.7)

and thus

κ̄ ≡ √(2mE)/ħ   (5.8)

but remember now κ̄ is complex because E is negative, whereas in (3.1) k was real. Proceeding, write

ψ'' + κ̄²ψ = 0   (5.9)
However the C cos κ̄x + D sin κ̄x solution doesn't make any sense because κ̄ is complex. Actually (5.8) can alternately be written

κ̄ = i √(2m|E|)/ħ ≡ iκ′   (5.13)

r = ±κ′   (5.14)

yielding

ψ_I(x) = A e^{κ′x} + B e^{−κ′x}   (5.15)

which is equivalent to (5.12) because r = ±iκ̄ is actually real. We can avoid the confusion above if we just agree to always choose κ or k or whatever to be real at the outset.
ψ'' − (−2mE/ħ²) ψ = 0   (5.16)

Because E is negative, −E will be positive. Define

κ² ≡ −2mE/ħ²   (5.17)

or

κ ≡ √(−2mE/ħ²)   (5.18)

ψ'' − κ²ψ = 0   (5.19)
5.2.2 Region II
In the region −a < x < a we have U = −U0, so that (5.1) becomes

ψ'' + (2m/ħ²)(E + U0)ψ = 0   (5.27)

Now even though E < 0 we will never have E < −U0, so that E + U0 will remain positive. Thus we can define

k² ≡ (2m/ħ²)(E + U0)   (5.28)

safe in the knowledge that

k ≡ √(2m(E + U0)/ħ²)   (5.29)

is real. We use the same symbol for k here as in (3.1) because the Auxiliary equation and solutions are the same as before in Section 3.1. Equations (3.3)–(3.5) will be the same as here except with k defined as in (5.29). Thus
and

ψ′_I(−a) = ψ′_II(−a)   (5.34)

yields

and

ψ′_II(a) = ψ′_III(a)   (5.43)

yields

A e^{−κa} [(κ cos ka − k sin ka) cos ka − (k cos ka + κ sin ka) sin ka] = −Bκ e^{−κa}   (5.44)
where we note that ψ_II(x) is an even function. The other substitution κ = −k cot ka gives

ψ_I(x) = A e^{κx}

ψ_II(x) = (A/k) e^{−κa} (κ cos ka − k sin ka) sin kx   (5.58)

ψ_III(x) = −A e^{−κx}   (5.59)

where now ψ_II(x) is an odd function. Also note that ψ_III(x) in (5.59) is simply the negative of ψ_III(x) given in (5.57).
κ² + k² = 2mU0/ħ²   (5.60)

Thus κ and k were related from the very beginning! There was really only one undetermined constant, either κ or k.

In the case of the infinite square well the value of k was determined to be k = nπ/a from the boundary conditions. This gave us energy quantization via E_n = k²ħ²/(2m).

For the finite well the boundary conditions gave us (5.50) and (5.51). Thus k (or κ) is determined! Thus the energy is already determined!

Look at it this way. Consider the even parity solution (5.50) and (5.60). They are two equations in two unknowns κ and k. Therefore both κ and k are determined, and consequently the energy E is calculated from (5.17) and (5.28).
The trouble is though that the two simultaneous equations (5.50) and
(5.60) for κ and k cannot be solved analytically. We have to solve them
numerically or graphically. This is done as follows. Equation (5.60) is the
equation for a circle as shown in Fig. 5.3. The other equation (5.50) (for
even parity) relating κ and k is shown in Fig. 5.4. The solutions of the
two simultaneous equations (5.50) and (5.60) are the points where the two
Figures 5.3 and 5.4 overlap. This is shown in Figure 5.5 where there are
4 points of intersection which therefore corresponds to 4 quantized energy
levels, which are drawn on the potential energy diagram in Fig. 5.6. Notice that the radius of the circle in Fig. 5.3 is √(2mU0)/ħ. Thus the radius depends
on the strength of the potential U0 . Looking at Fig. 5.5 then if the potential
U0 is very weak (small radius) then there might be zero bound states (zero
intersections), whereas if the potential well is very deep (large U0 , thus large
radius) then there may be many bound states (many points of intersection).
Thus the number of bound states depends on the depth of the potential well,
or equivalently, on the strength of the potential.
Similar considerations also hold for the odd parity solution (5.51).
E_n = n²π²ħ²/(2ma²) − U0   (5.63)
which is identical to the solution for the infinite square well. (For the infinite
well we had U = 0 inside the well, whereas for the finite well we had U = −U0
inside the well.)
Notice too that the wave functions become the same as for the infinite
well.
Let us now consider the case of weak potentials or shallow wells.
Given that the number of energy levels depends on the number of in-
tersection points, as shown in Fig. 5.5, which depends on the radius of the
circle or the strength of the potential, we might expect that for a very weak
potential there might be zero bound states. This is not the case however.
For a weak potential, there is always at least one bound state. Again refer to
Fig. 5.5. No matter how small the circle is, there will always be at least one
intersection point or one bound state.
(do Problem 5.3)
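A numerical sketch of the graphical solution described above (not part of the text; illustrative units with 2m/ħ² = 1, an assumed well half-width a = 1 and depth U0 = 20, and taking the even-parity condition from (5.50) in its standard form κ = k tan ka): find the intersections of κ = k tan ka with the circle κ² + k² = 2mU0/ħ².

# Even-parity bound states of the finite well: solve k*tan(k*a) = kappa with
# kappa = sqrt(R**2 - k**2), R**2 = 2*m*U0/hbar**2 (here 2m/hbar^2 = 1, a = 1, U0 = 20).
import numpy as np
from scipy.optimize import brentq

a, U0 = 1.0, 20.0
R = np.sqrt(U0)

def f(k):
    return k*np.tan(k*a) - np.sqrt(R**2 - k**2)

# scan for sign changes, skipping the tan singularities at k*a = pi/2, 3*pi/2, ...
ks = np.linspace(1e-6, R - 1e-6, 20000)
roots = []
for k1, k2 in zip(ks[:-1], ks[1:]):
    if f(k1)*f(k2) < 0 and abs(np.cos(k1*a)) > 1e-3 and abs(np.cos(k2*a)) > 1e-3:
        roots.append(brentq(f, k1, k2))

for k in roots:
    E = k**2 - U0        # since k^2 = E + U0 in these units
    print(round(k, 4), round(E, 4))

With these illustrative numbers two even-parity intersections appear; shrinking U0 shrinks the circle and removes intersections one by one, but (as argued above) at least one even-parity intersection always survives.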
5.3 Power Series Solution of ODEs

The series ψ(x) = Σ_{m=0}^{∞} a_m x^m and its second derivative are substituted back into the differential equation (5.64) to give

Σ_{m=2}^{∞} a_m [m(m−1)x^{m−2} + k²x^m] + k²(a_0 + a_1 x) = 0   (5.71)

We now simply write out all the terms in the sum explicitly and then equate like coefficients of x^m to 0. This yields

2·1 a_2 + k²a_0 = 0
3·2 a_3 + k²a_1 = 0
4·3 a_4 + k²a_2 = 0
5·4 a_5 + k²a_3 = 0
6·5 a_6 + k²a_4 = 0      etc.   (5.72)
a_2 = −(k²/(2·1)) a_0 = −(k²/2!) a_0
a_3 = −(k²/(3·2)) a_1 = −(k²/3!) a_1
a_4 = −(k²/(4·3)) a_2 = +(k⁴/4!) a_0
a_5 = −(k²/(5·4)) a_3 = +(k⁴/5!) a_1
a_6 = −(k²/(6·5)) a_4 = −(k⁶/6!) a_0
a_7 = −(k²/(7·6)) a_5 = −(k⁶/7!) a_1      etc.   (5.73)
These are substituted back into the solution (5.68) to yield

ψ(x) = a_0 + a_1 x + a_2 x² + a_3 x³ + a_4 x⁴ + ···
     = a_0 + a_1 x − (k²/2!) a_0 x² − (k²/3!) a_1 x³ + (k⁴/4!) a_0 x⁴ + (k⁴/5!) a_1 x⁵ + ···
     = a_0 [1 − (kx)²/2! + (kx)⁴/4! − (kx)⁶/6! + ···]
       + (a_1/k) [kx − (kx)³/3! + (kx)⁵/5! − (kx)⁷/7! + ···]   (5.74)
     = a_0 cos kx + (a_1/k) sin kx   (5.75)

which we recognize as our familiar solution (5.65) with A ≡ a_1/k and B ≡ a_0!
which is exactly the same as (5.70). Substituting both this and (5.68) back
into the Schrödinger equation (5.64) gives
Σ_{m=0}^{∞} [(m+2)(m+1) a_{m+2} + k² a_m] x^m = 0   (5.77)

instead of (5.71). The advantage of using the new sum in (5.76) is that it is very straightforward to equate like coefficients of x^m to 0 in equation (5.77). This yields

a_{m+2} = −k²/[(m+2)(m+1)] a_m   (5.78)

which is called a recurrence relation, a compact formula for each of the expressions in equations (5.73).
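A quick sketch (illustrative values k = 1, a_0 = 1, a_1 = 2, not from the text) that builds the series from the recurrence relation (5.78) and compares it with a_0 cos kx + (a_1/k) sin kx:

# Build psi(x) = sum a_m x^m from the recurrence a_{m+2} = -k^2 a_m /((m+2)(m+1))
# and compare with a0*cos(kx) + (a1/k)*sin(kx).
import numpy as np

k, a0, a1 = 1.0, 1.0, 2.0        # illustrative values
nterms = 40

a = np.zeros(nterms)
a[0], a[1] = a0, a1
for m in range(nterms - 2):
    a[m + 2] = -k**2 * a[m] / ((m + 2)*(m + 1))

x = np.linspace(-3, 3, 7)
series = sum(a[m]*x**m for m in range(nterms))
exact  = a0*np.cos(k*x) + (a1/k)*np.sin(k*x)

print(np.max(np.abs(series - exact)))   # ~1e-15: the series reproduces (5.75)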
5.4 Harmonic Oscillator

U(x) = ½ mω²x²   (5.79)

In terms of the dimensionless variables y ≡ √(mω/ħ) x and ε ≡ 2E/(ħω), the Schrödinger equation becomes

d²ψ/dy² + (ε − y²)ψ = 0   (5.82)
We could use the power series method directly on this equation, but it will get
very complicated. It’s easier to write the wave function as another function
multiplied by an asymptotic function.
By asymptotic we mean the region where x or y goes to infinity. For y
very large the Schrödinger equation becomes
d²ψ/dy² − y²ψ = 0   (5.83)

ψ(y) ≡ h(y) e^{−y²/2}   (5.84)

(See Griffiths footnote 14, pg. 38, and see Boas.) Substituting this into the Schrödinger equation (5.82) we obtain a differential equation for the function h(y) as

h'' − 2yh' + (ε − 1)h = 0   (5.85)

where h' ≡ dh/dy and h'' ≡ d²h/dy². We will solve this differential equation with the power series method, writing h(y) = Σ_{m=0}^{∞} a_m y^m so that

h''(y) = Σ_{m=2}^{∞} m(m−1) a_m y^{m−2} = Σ_{m+2=2}^{∞} (m+2)(m+2−1) a_{m+2} y^{m+2−2}
       = Σ_{m=0}^{∞} (m+2)(m+1) a_{m+2} y^m   (5.88)
Just as our power series solution for the infinite square well gave us two
separate series for aodd and aeven so too does the harmonic oscillator via
(5.90). Obviously all of the aeven coefficients are written in terms of a0 and
the aodd coefficients are written in terms of a1 .
But now we run into a problem. For very large m the recurrence relation is

a_{m+2} ≈ (2/m) a_m   (5.91)

Thus the ratio of successive terms for the power series h(y) = Σ_{m=0}^{∞} a_m y^m for large m will be

(a_{m+2} y^{m+2}) / (a_m y^m) = [(2m + 1 − ε)/((m+2)(m+1))] y² ≈ 2y²/m   (5.92)
Now recall the series

e^x = Σ_{m=0}^{∞} x^m/m! = 1 + x + x²/2! + x³/3! + ···   (5.93)

gives

e^{x²} = Σ_{m=0}^{∞} x^{2m}/m! = 1 + x² + x⁴/2! + x⁶/3! + x⁸/4! + ··· = Σ_{m=0,2,4,...}^{∞} x^m/(m/2)!   (5.94)
which blows up for y → ∞. That's our problem. The wave function is not normalizable.
This problem can only be solved if the series terminates. That is, at some
value of m, the coefficient am and all the ones above it are zero. Then the
2
series will not behave like ey at large y (i.e. large x). Thus for some value
m ≡ n, we must have
am+2 ≡ an+2 = 0 (5.98)
If this is the case then the recurrence relation (5.90) tells us that all higher
coefficients (eg. an+4 , an+6 etc.) will also be zero.
Now we have no idea as to the value of n. All we know is that it must be
an integer like m. Our argument above will work for n = 0, n = 1, n = 2,
n = 3 etc. That is, we will get finite normalizable wave functions for any
integer value of n, and all of these wave functions will be different for each
value of n because the series will terminate at different n values. (We can
see quantization creeping in!)
There is another piece to this argument. We have noted that there are
actually two independent power series for aeven and aodd . The above argu-
ment only works for one of them. For example, if the even series terminates,
it says nothing about the odd series, and vice versa. Thus if the even series
terminates, then all of the coefficients of the odd series must be zero and
vice versa. (We also expect this physically. For the infinite square well, aeven
corresponded to cos kx and aodd corresponded to sin kx. See equation (5.64).
We know from the properties of separable solutions that the eigenfunctions
will alternate in parity. Thus it makes sense that a particular value of n will correspond to either a_even or a_odd but not both.)
The requirement (5.98) together with (5.90) gives
2m + 1 − ε ≡ 2n + 1 − ε = 0   (5.99)

yielding

ε ≡ 2E/(ħω) = 2n + 1   (5.100)

or

E_n = (n + ½)ħω      n = 0, 1, 2, ...   (5.101)
which is our quantization of energy formula for the harmonic oscillator. Just
as with the infinite and finite square wells, the energy quantization is a result
h_n(y) = Σ_{m=0}^{n} a_m y^m   (5.102)

and

ψ_n(y) = h_n(y) e^{−y²/2} = ( Σ_{m=0}^{n} a_m y^m ) e^{−y²/2}   (5.103)

a^{(n)}_{m+2} = [2(m − n)/((m+2)(m+1))] a_m   (5.104)

where we have substituted (5.100) into (5.90). Thus the recurrence relation is different for different values of n. For n = 0, we have
h_0(y) = a_0   (5.105)

and

ψ_0(y) = a_0 e^{−y²/2}.   (5.106)

For n = 1 we have a_0 = 0 (even series is all zero) and

h_1(y) = a_1 y   (5.107)

and

ψ_1(y) = a_1 y e^{−y²/2}.   (5.108)

For n = 2, we have all a_odd = 0 (odd series is all zero) and

h_2(y) = a_0 + a_2 y²   (5.109)

and

ψ_4(y) = a_0 (1 − 4y² + (4/3) y⁴) e^{−y²/2}.   (5.123)
The functions h_n(y) are related to the famous Hermite polynomials H_n(y) [Spiegel, 1968, pg. 151], the first few of which are defined as

H_0(y) = 1
H_1(y) = 2y
H_2(y) = 4y² − 2
H_3(y) = 8y³ − 12y
H_4(y) = 16y⁴ − 48y² + 12   (5.124)
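A short check (a sketch, not from the text, using SciPy's eval_hermite for the standard physicists' Hermite polynomials) that the truncated series h_n(y) built from the recurrence (5.104) is proportional to H_n(y):

# Build h_n(y) from the recurrence a_{m+2} = 2(m-n) a_m / ((m+2)(m+1)) and check
# that it is proportional to the Hermite polynomial H_n(y).
import numpy as np
from scipy.special import eval_hermite

def h(n, y):
    a = np.zeros(n + 1)
    a[n % 2] = 1.0                     # start the even or odd series with 1
    for m in range(n % 2, n - 1, 2):
        a[m + 2] = 2*(m - n)*a[m] / ((m + 2)*(m + 1))
    return sum(a[m]*y**m for m in range(n + 1))

y = np.linspace(-2, 2, 9)
for n in range(5):
    hn = h(n, y)
    Hn = eval_hermite(n, y)
    # proportionality check: h_n(y) * H_n(1) should equal H_n(y) * h_n(1) for all y
    print(n, np.max(np.abs(hn*eval_hermite(n, 1.0) - Hn*h(n, 1.0))))   # ~0

The proportionality constant is absorbed into the overall normalization discussed next.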
We should also put in a normalization factor. Thus the wave functions should be written as

ψ_n(y) = C_n h_n(y) e^{−y²/2}   (5.135)

where C_n is a normalization. Using a different normalization A_n we can write ψ in terms of the Hermite polynomials

ψ_n(y) = A_n H_n(y) e^{−y²/2}   (5.136)

Normalization requires
1 = ∫_{−∞}^{∞} ψ*_n(x) ψ_n(x) dx = √(ħ/(mω)) ∫_{−∞}^{∞} ψ*_n(y) ψ_n(y) dy
  = √(ħ/(mω)) A_n² ∫_{−∞}^{∞} e^{−y²} H_n(y)² dy
  = √(ħ/(mω)) A_n² 2ⁿ n! √π   (5.137)

where we have used y ≡ √(mω/ħ) x and dx = √(ħ/(mω)) dy and (5.134). This gives A_n = (mω/(πħ))^{1/4} (1/√(2ⁿ n!)), so that the normalized Harmonic Oscillator wave functions are finally

ψ_n(x) = (mω/(πħ))^{1/4} (1/√(2ⁿ n!)) H_n(y) e^{−y²/2}   (5.138)

where y ≡ √(mω/ħ) x. This result together with the energy formula E_n = (n + ½)ħω completes our solution to the 1-dimensional harmonic oscillator
problem. The first few eigenfunctions and probabilities are plotted in Fig.
5.9. Note that for the odd solutions, the probability of the particle in the
center of the well is zero. The particle prefers to be on either side. (See also
Fig. 2.5, pg. 42 of [Griffiths, 1995].)
(do Problems 5.4–5.9)
5.5 Algebraic Solution for Harmonic Oscillator

−(ħ²/2m) d²ψ/dx² + ½mω²x²ψ = Eψ   (5.139)

Defining the Schrödinger equation in terms of the Hamiltonian operator

Ĥψ = Eψ   (5.140)

then

Ĥ = p²/(2m) + ½mω²x²   (5.141)
for the harmonic oscillator Hamiltonian. (We are going to be lazy and just write p instead of p̂ and x instead of x̂.) Let us define two new operators

â ≡ (1/√(2mħω)) (mωx̂ + ip̂)   (5.142)

and

↠≡ (1/√(2mħω)) (mωx̂ − ip̂)   (5.143)

or in our lazy notation

a, a† ≡ (1/√(2mħω)) (mωx ± ip)   (5.144)

(Read a† as "a dagger".) Everyone [Goswami, 19xx, pg. 143; Liboff, 1992, pg. 191; Ohanian, 1990, pg. 151; Gasiorowicz, 1996, pg. 131] uses this definition except Griffiths [1995, pg. 33]. Also everyone uses the symbols a and a†, but Griffiths uses a+ and a−. Notice that
a† = a∗ (5.145)
A† ≡ Ã∗ (5.146)
where à is the transpose of A. Thus if A ≡ [[A11, A12], [A21, A22]] then à = [[A11, A21], [A12, A22]] and A† = [[A*11, A*21], [A*12, A*22]]. A matrix is Hermitian if
A† = A (5.147)
and it can be shown (see later) that Hermitian matrices have real eigenvalues.
Our operator a is a 1-dimensional matrix so obviously we just have a† = a∗ .
We shall pursue all of these topics in much more detail later. Let's return to our operators in (5.144). The operators in (5.144) can be inverted to give

x = √(ħ/(2mω)) (a + a†)   (5.148)

and

p = i √(mħω/2) (a† − a)   (5.149)

(do Problem 5.10) Now the operators a and a† do not commute. In fact, using [x, p] = iħ, it follows that

[a, a†] = 1   (5.150)
(do Problem 5.11) The Hamiltonian can now be written in three different ways as

H = ½(aa† + a†a)ħω
  = (aa† − ½)ħω
  = (a†a + ½)ħω   (5.151)

Thus an alternative way to write the Schrödinger equation for the harmonic oscillator is

(aa† − ½)ħω ψ = Eψ   (5.152)

or

(a†a + ½)ħω ψ = Eψ   (5.153)

which are entirely equivalent ways of writing the Schrödinger equation compared to (5.139). A word of caution!
and similarly for (5.153). The reason is because a and a† do not commute.
The correct statement is
which is quite different to (5.153). Note that (5.155) is not the Schrödinger
equation. The Schrödinger equation is (5.152) or (5.153). To turn (5.155)
into a Schrödinger equation we add h̄ωa† ψ to give
which is

(a†a + ½)ħω a†ψ = (E + ħω) a†ψ   (5.156)

Define ψ′ ≡ a†ψ and E′ ≡ E + ħω and we have

or

(a†a + ½)ħω a†a†ψ = (E + 2ħω) a†a†ψ   (5.160)
Thus the operators a† and a when applied to the wave function ψ either
raise or lower the energy by an amount h̄ω. For this reason a† is called the
raising operator or creation operator and a is called the lowering operator
or destruction operator or annihilation operator.
So by repeatedly applying the destruction operator a to ψ we keep low-
ering the energy. But this cannot go on forever! For the harmonic oscillator
with the minimum of potential U = 0 at x = 0 we can never have E < 0.
The harmonic oscillator must have a minimum energy E0 , with correspond-
ing wave function ψ0 such that
aψ0 = 0 (5.161)
giving

E_0 = ½ħω   (5.163)

because a†a ħω ψ_0 = 0 according to (5.161).
where

ψ_n = (A_n/A_0) a†ⁿ ψ_0   (5.165)

with A_n and A_0 being normalization constants. (We have written ψ_n = (A_n/A_0) a†ⁿψ_0 instead of ψ_n = A_n a†ⁿψ_0 because the latter expression gives ψ_0 = A_0ψ_0 for n = 0, which is no good.) Also

E_n = (n + ½)ħω   (5.166)
which is the same result for the energy that we obtained with the power
series method.
Let's now obtain the wave functions ψ_n, which can be obtained from ψ_n = (A_n/A_0) a†ⁿψ_0 once we know ψ_0. Using (5.161)

aψ_0 = (1/√(2mħω)) (mωx + ip) ψ_0 = 0   (5.167)

dψ_0/dx = −(mω/ħ) x ψ_0   (5.168)

which is a first order ODE with solution

ψ_0 = A_0 e^{−(mω/2ħ)x²} = A_0 e^{−y²/2} = A_0 H_0(y) e^{−y²/2}   (5.169)

with y ≡ √(mω/ħ) x, which is the same solution that we found for the power series method. (Recall H_0(y) = 1.) To obtain ψ_n is now straightforward.
We have

ψ_n = (A_n/A_0) a†ⁿ ψ_0 = A_n [ (1/√(2mħω)) (mωx − ħ d/dx) ]ⁿ e^{−(mω/2ħ)x²}   (5.170)

However let's first write a† a little more simply. Using y ≡ √(mω/ħ) x we have

a, a† = (1/√2) (y ± d/dy)   (5.171)

(Exercise: Prove this.) Thus (5.170) is

ψ_n = (A_n/A_0) a†ⁿ ψ_0 = A_n (1/√(2ⁿ)) (y − d/dy)ⁿ e^{−y²/2}   (5.172)
ψ_0 = A_0 H_0(y) e^{−y²/2}   (5.174)

ψ_1 = A_1 (1/√2) H_1(y) e^{−y²/2}   (5.175)

ψ_2 = A_2 (1/2) H_2(y) e^{−y²/2}   (5.176)

ψ_3 = A_3 (1/(2√2)) H_3(y) e^{−y²/2}   (5.177)

ψ_4 = A_4 (1/4) H_4(y) e^{−y²/2}   (5.178)
giving

A_n/A_0 = 1/√(n!)   (5.181)

Thus we finally have, from ψ_n = (A_n/A_0) a†ⁿ ψ_0,

ψ_n = (1/√(n!)) a†ⁿ ψ_0   (5.182)

and

ψ_n = (mω/(πħ))^{1/4} (1/√(2ⁿ n!)) H_n(y) e^{−y²/2}   (5.183)
which is identical to the result we obtained with the power series method.
Hψ_n = E_n ψ_n   (5.184)

with H given by any of the three expressions in (5.151). Choosing the third expression we have (a†a + ½)ħω ψ_n = (n + ½)ħω ψ_n, or

a†a ψ_n = n ψ_n   (5.185)

Also the second expression gives (aa† − ½)ħωψ_n = (n + ½)ħωψ_n to give

N ψ_n = n ψ_n   (5.188)

which is the Schrödinger equation! Thus

H = (n + ½)ħω   (5.189)
Now some useful results are (do Problem 5.16 and 5.17)
∫_{−∞}^{∞} (aψ_n)* aψ_n dx = ∫ ψ*_n a†a ψ_n dx   (5.190)

and

∫ (a†ψ_n)* a†ψ_n dx = ∫ ψ*_n aa† ψ_n dx   (5.191)
Before we had ψn = (An/A0) a†n ψ0 and we had to go to a lot of trouble to find that An/A0 = 1/√(n!). We can get this result here more quickly.

Example 5.5.5 Show that a†ψn = √(n + 1) ψn+1.
Solution Writing
ψn = (An/A0) a†n ψ0
and
ψn+1 = (An+1/A0) a†(n+1) ψ0 = (An+1/An) a†ψn
we have
1 = ∫ ψ*n+1 ψn+1 dx
  = (An+1/An)² ∫ (a†ψn)* (a†ψn) dx
  = (An+1/An)² ∫ ψn* aa† ψn dx
  = (An+1/An)² ∫ ψn* (n + 1) ψn dx
  = (An+1/An)² (n + 1)
giving An+1/An = 1/√(n + 1), so that ψn+1 = (1/√(n + 1)) a†ψn, i.e.
a†ψn = √(n + 1) ψn+1        (5.192)
Actually, instead of ψn+1 = (An+1/An) a†ψn we would just have written ψn+1 = C a†ψn, because we know from Theorem 5.1 that this ψn+1 satisfies the Schrödinger equation. As above we would then find C = 1/√(n + 1). Similarly one finds
aψn = √n ψn−1        (5.193)
which is consistent with aψ0 = 0.
Thus we have
ψn = (An/A0) √(n!) ψn
and we must therefore have (An/A0) √(n!) = 1, or
An/A0 = 1/√(n!)
giving
ψn = (1/√(n!)) a†n ψ0
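As a purely illustrative aside (not part of the text), the ladder-operator algebra of this section is easy to check numerically by representing a and a† as matrices in a truncated number basis. The basis size N and the units h̄ = ω = 1 below are our own choices.

```python
# Minimal numerical sketch of the ladder-operator algebra (hbar = omega = 1).
import numpy as np

N = 6                                   # keep only |0>, ..., |N-1>
n = np.arange(N)

a = np.diag(np.sqrt(n[1:]), k=1)        # annihilation: a|n> = sqrt(n)|n-1>
adag = a.conj().T                       # creation operator a^dagger

H = adag @ a + 0.5 * np.eye(N)          # H = (a^dagger a + 1/2) hbar omega
print(np.diag(H))                       # energies (n + 1/2), as in (5.166)

ket2 = np.zeros(N); ket2[2] = 1.0       # the state psi_2
print(adag @ ket2)                      # sqrt(3) appears in the |3> slot, as in (5.192)
```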
SCATTERING STATES
In the previous chapter we studied bound state problems and in this chapter
we shall study un-bound or scattering problems. For bound state problems,
the most important things to know were the wave functions and discrete en-
ergy levels En . However for scattering problems the energy E is not discrete
and can be anything. We are particularly interested in the wave functions
which we shall use to determine the transmission and reflection coefficients
T and R.
where
E ≡ h̄ω ≡ h̄²k²/(2m)        (6.2)
In the case of the infinite well we imposed boundary conditions and found
that ψ(x) = D sin nπ a x and also that the energy was quantized. However for
the free particle we have no such boundary conditions and thus the energy
of the free particle can be anything.
We have seen that the free particle wave A eikx + B e−ikx travels to the
Left or Right. To specify a wave travelling to the Left, we set A = 0, giving
ψL = B e−ikx (6.8)
ψR = A eikx (6.9)
ψ = C eikx (6.10)
What are we to make of our infinite square well bound³ state solution ´
ψ = D sin kn x? Well write it as ψ = D sin kn x = D eikn x − e−ikn x
2i
and we see that the bound state solution is a superposition of Left and Right
travelling waves that constructively interfere to produce a standing wave. It’s
just like the way we get standing waves on a string. The travelling waves
reflect from the boundaries to produce the standing wave.
Let’s return to our discussion of the free particle. There are two diffi-
culties with the above analysis. First consider a classical free particle where
U = 0 so that E = 12 mv 2 + 0 giving
s
2E
vclassical = = 2vp (6.11)
m
Thus the Left and Right waves are not normalizable! But actually this is
not really a problem because it’s Ψ(x, t) not ψ(x) which is supposed to be
normalizable.
Recall that for the bound discrete states En we had
Ψ(x, t) = Σ_{n=1}^{∞} cn ψn(x) e^{−iEn t/h̄}        (6.13)
and each ψn (x) corresponded to a definite energy, and it turned out that
each ψn (x) was normalizable. Ψ(x, t) does not contain a definite energy
but a whole bunch of them. For the free particle, the fact that ψ(x) is not
normalizable means that “there is no such thing as a free particle with a
definite energy.” [Griffiths, 1995, pg. 45].
For the free particle ψ(x) cannot be normalized but Ψ(x, t) can be. Thus
the normalized free particle wave function contains a whole bunch of energies,
not just a single energy. Also these energies are continuous, so instead of
E = En = n²π²h̄²/(2ma²) for the infinite well, let's write
E = Ek = h̄²k²/(2m)        (6.14)
where k now represents a continuous index (rather than En ). Thus instead
of (6.13) we now have
Ψ(x, t) = (1/√(2π)) ∫_{−∞}^{∞} dk φ(k) ψk(x) e^{−iEk t/h̄}
        = (1/√(2π)) ∫_{−∞}^{∞} dk φ(k) e^{i(kx − ωt)}        (6.15)
where the integral ∫_{−∞}^{∞} dk replaces Σn and k is allowed to be both positive and negative to include Left and Right waves. φ(k) replaces cn. The factor 1/√(2π) is an arbitrary factor included for convenience. The definition (6.15) could be made without it.
Ψ(x, t) in equation (6.15) is called a wave packet because it is a collection
of waves all with different energies and speeds. Each of the separate waves
travels at its own particular speed given by the phase velocity. A good
picture of a wave packet is Figure 2.6 of Griffiths [1995, pg. 47].
Recall how we obtained the coefficients cn in (6.13). We wrote
Ψ(x, 0) = Σn cn ψn(x)        (6.16)
and from the ON property of the basis set {ψn(x)} we obtained
cn = ⟨ψn | Ψ(x, 0)⟩ ≡ ∫_{−∞}^{∞} ψn*(x) Ψ(x, 0) dx        (6.17)
then
F(k) ≡ (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{−ikx} dx        (6.19)
f(x) and F(k) are called a Fourier transform pair. (If (6.18) did not have the 1/√(2π) in front then F(k) would have to have 1/(2π) in front.) Thus we can get the "expansion coefficient" φ(k). Write
ψ(x) ≡ Ψ(x, 0) = (1/√(2π)) ∫_{−∞}^{∞} dk φ(k) ψk(x) = (1/√(2π)) ∫_{−∞}^{∞} dk φ(k) e^{ikx}        (6.20)
and Plancherel’s theorem tells us that the “expansion coefficients” or Fourier
transform is
φ(k) = (1/√(2π)) ∫_{−∞}^{∞} dx Ψ(x, 0) e^{−ikx} = (1/√(2π)) ∫_{−∞}^{∞} dx ψ(x) e^{−ikx}        (6.21)
below) that the speed of the whole wave packet, called the group velocity, is
given by
dω
vgroup = (6.22)
dk
The dispersion relation is the formula that relates ω and k. From
E = h̄ω = h̄²k²/(2m), we see that the dispersion relation for the plane wave is
ω(k) = h̄k²/(2m)        (6.23)
giving
vgroup = dω/dk = h̄k/m = (h̄/m)(√(2mE)/h̄) = √(2E/m) = vclassical        (6.24)
Evidently then it is the group velocity of the wave packet that corresponds
to the classical particle velocity.
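This is easy to verify numerically. The following sketch (our own, with h̄ = m = 1) differentiates the dispersion relation (6.23) and confirms that vgroup = h̄k/m = vclassical, while the phase velocity ω/k is half as big.

```python
# Numerical check of group vs. phase velocity (units: hbar = m = 1).
import numpy as np

hbar, m = 1.0, 1.0
k = np.linspace(0.1, 5.0, 500)
omega = hbar * k**2 / (2 * m)           # dispersion relation (6.23)

v_group = np.gradient(omega, k)         # numerical d(omega)/dk
v_phase = omega / k

print(np.allclose(v_group[1:-1], (hbar * k / m)[1:-1]))   # True: v_group = hbar k / m
print(np.allclose(v_phase, 0.5 * hbar * k / m))            # True: v_phase = v_group / 2
```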
Solution
ψ = ψ1 + ψ2
  = 2A cos{(1/2)[(2ω + dω)t − (2k + dk)x]} cos[(1/2)(dω t − dk x)]
  ≈ 2A cos(ωt − kx) cos(dω t/2 − dk x/2)
Now cos(ωt − kx) has a tiny wavelength compared to cos(dω t/2 − dk x/2), and thus cos(dω t/2 − dk x/2) represents a slowly varying envelope modulating the rapidly oscillating plane wave cos(ωt − kx). The envelope (and hence the wave packet) moves with the group speed dω/dk, whereas the phase speed is ω/k. See Figure 3.4 of Beiser [1987, pg. 95].
for 1-dimension. When actually calculating j it saves time to use the alternative expression
j = (h̄/m) Im(ψ* ∂ψ/∂x)        (6.29)
where Im stands for "Imaginary Part". For example Im(a + ib) = b. (do Problem 6.1)
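As a small illustration (ours, with h̄ = m = 1 and arbitrarily chosen A and k), evaluating (6.29) for the plane wave ψ = A e^{ikx} gives the expected result j = (h̄k/m)|A|².

```python
# Probability current (6.29) for a plane wave, evaluated numerically (hbar = m = 1).
import numpy as np

hbar, m, k, A = 1.0, 1.0, 2.0, 0.7
x = np.linspace(-10, 10, 4001)
psi = A * np.exp(1j * k * x)

dpsi_dx = np.gradient(psi, x)
j = (hbar / m) * np.imag(np.conj(psi) * dpsi_dx)

print(j[2000], hbar * k / m * A**2)     # both approximately 0.98
```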
∂ρ/∂t + ∇·j = 0        (6.30)
or
∂ρ/∂t + ∂j/∂x = 0        (6.31)
for 1-dimension, where ρ is the probability density and the probability is P = ∫ρ dx in 1-dimension. Integrating (6.31) over dx gives
j = −∂P/∂t        (6.32)
Now in scattering the incident, reflected and transmitted waves will always
be of the form
ψ = Aeikx (6.33)
(where k can be either negative or positive.) This is because the incident,
reflected and transmitted waves will always be outside the range of the
potential where U = 0,√ so that the Schrödinger equation will always be
ψ 00 + k 2 ψ = 0 with k = 2mE
h̄ for incident, reflected and transmitted waves.
This Schrödinger equation always has solution Aeikx + Be−ikx . However the
incident piece, the reflected piece and the transmitted piece will always ei-
ther be travelling to the Left or Right and so will either be Aeikx or Be−ikx
but not both. This is accomplished in (6.33) by letting k be negative or
positive. Using (6.33), the plane wave probability for incident, reflected and
transmitted waves will be
Z Z
∗
P = ψ ψdx = |A| 2
dx = |A|2 x (6.34)
Note that, unlike Chapter 1, we are not integrating over the whole universe
and so j will not vanish. Thus from (6.32) we have
|j| = |A|² v        (6.35)
where v = dx/dt is the speed of the wave. Thus the reflection and transmission coefficients become
R = (vR/vi) |AR/Ai|² = |AR/Ai|²        (6.36)
and
T = (vT/vi) |AT/Ai|² = |AT/Ai|² √(ET/Ei)        (6.37)
using (6.11). Note that the reflected and incident wave will always be in
the same region (or same medium) and so vR = vi always. The above
formulas are exactly analogous to those used in classical electrodynamics
[Griffiths, 1989] and obviously (6.37) with the factor vvTi is just the refractive
index.
¯ ¯
If the transmitted wave has the same speed as the incident wave then
¯ ¯2
T = ¯ AATi ¯ .
or we can write ψ(x) = C cos kx+D sin kx, however as mentioned before, the
complex exponential solution enables us to specify the boundary condition
(Left or Right) much more easily. The incident wave travels to the Right and
the reflected wave to the Left. Thus in (6.40) we have made the identification
ψi = Aeikx (6.41)
and
ψR = Be−ikx (6.42)
In region II we have
ψ′′ + (2m(E − U0)/h̄²) ψ = 0        (6.43)
or
ψ′′ − (2m(U0 − E)/h̄²) ψ = 0        (6.44)
Now for our first problem we are considering E < U0 and thus 2m(E − U0 )
is negative, but 2m(U0 − E) is positive and thus we use the second equation
(6.44) which is
ψ 00 − κ2 ψ = 0 (6.45)
with p
2m(U0 − E)
κ≡ (6.46)
h̄
which is real because E < U0. The solution that remains finite as x → ∞ is ψII(x) = D e^{−κx}. Matching the wave functions at the boundary,
ψI(x = 0) = ψII(x = 0)
gives
A + B = D
and matching the derivatives,
ψI′(x = 0) = ψII′(x = 0)
gives
Aik − Bik = −κD
Thus the wave functions become
µ ¶ µ ¶
D κ ikx D κ −ikx
ψI (x) = 1+i e + 1−i e
2 k 2 k
≡ ψi + ψR (6.49)
and µ ¶ Ã !
h̄ ∂ψi h̄ D κ2
ji = Im ψi∗ = k 1+ 2
m ∂x m2 k
giving
R = |jR/ji| = 1
which tells us that the wave is totally reflected. From equation (6.27) we expect T = 0, that is no transmitted wave. (Exercise: Show that you get the same result by evaluating |B/A|² directly.)
Solution
jT = (h̄/m) Im(ψT* ∂ψT/∂x) = (h̄/m) D² Im[e^{−κx}(−κ)e^{−κx}] = 0
as expected.
Solution
ET/Ei = (E − U0)/E
but E < U0, thus √(ET/Ei) is complex, which would give a complex T, which can't happen. Thus T = 0. (See Griffiths Problem 2.33.)
Even though the transmission coefficient is zero, the wave function still
penetrates into the barrier a short distance. This is seen by plotting the
wave function. (See Table 6-2 of Eisberg and Resnick [1976, pg. 243].)
(do Problem 6.2)
We assume that the incident particle comes in from the Left. Thus in region
III the wave will only be travelling to the right. Thus
ψI(x) = A e^{ikx} + B e^{−ikx}        (6.53)
and
ψIII(x) = C e^{ikx}        (6.54)
In region II we can write
ψ′′ + (2m(E − U0)/h̄²) ψ = 0        (6.55)
or
ψ′′ − (2m(U0 − E)/h̄²) ψ = 0        (6.56)
but we will use the second equation because U0 > E. Thus
ψ 00 − κ2 ψ = 0 (6.57)
with p
2m(U0 − E)
κ≡ (6.58)
h̄
which is real. Thus
ψII (x) = Deκx + Ee−κx (6.59)
The boundary conditions are ψI(−a) = ψII(−a), giving
A e^{−ika} + B e^{ika} = D e^{−κa} + E e^{κa}        (6.61)
and
ψI′(−a) = ψII′(−a)        (6.62)
giving
ikA e^{−ika} − ikB e^{ika} = κD e^{−κa} − κE e^{κa}        (6.63)
and
ψII(a) = ψIII(a)        (6.64)
giving
D e^{κa} + E e^{−κa} = C e^{ika}        (6.65)
and
0 0
ψII (a) = ψIII (a) (6.66)
giving
κDeκa − κEe−κa = ikCeika (6.67)
Equations (6.61), (6.63), (6.65) and (6.67) are the coupled equations we
need to solve for the constants A, B, C, D, E. Let’s plan a little strategy
before jumping in. We want to calculate the transmission coefficient, which
is
T = |C/A|²        (6.68)
Thus we only need C in terms of A (or vice versa) and hopefully the other constants will cancel. After all this algebra is done we get
T^{−1} = 1 + [U0²/(4E(U0 − E))] sinh²[(2a/h̄) √(2m(U0 − E))]        (6.69)
(do Problem 6.3). The important point here is that even with E < U0 we
have T 6= 0. Classically the particle would bounce off the barrier and never
undergo transmission. Yet quantum mechanically transmission occurs! This
phenomenon is called tunnelling. It is again instructive to plot the wave
functions from equations (6.53) and (6.54). This is done in Table 6-2 of
Eisberg and Resnick [1974, pg. 243]. We can see that the wave function
penetrates all the way through the barrier.
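A quick numerical illustration of (6.69) shows that T is small but definitely nonzero for E < U0. The particular barrier parameters below (an electron and a 1 eV barrier of total width 2a = 1 nm) are our own choice, not the text's.

```python
# Hedged numerical illustration of the tunnelling formula (6.69).
import numpy as np

hbar = 1.054571817e-34        # J s
m_e  = 9.1093837015e-31       # kg
eV   = 1.602176634e-19        # J

U0 = 1.0 * eV                 # barrier height
a  = 0.5e-9                   # half-width, so the barrier width is 2a = 1 nm

def T(E):
    """Transmission coefficient from (6.69) for 0 < E < U0."""
    kappa_2a = (2 * a / hbar) * np.sqrt(2 * m_e * (U0 - E))
    Tinv = 1 + (U0**2 / (4 * E * (U0 - E))) * np.sinh(kappa_2a)**2
    return 1.0 / Tinv

for E_eV in (0.25, 0.5, 0.75):
    print(E_eV, T(E_eV * eV))          # small but nonzero: tunnelling
```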
such a localized wave packet moved to the left or right, it would be a pretty
good wave packet description of the concept of a classical particle. Such a
wave packet is represented by
Ψ(x, 0) = A e^{−ax²} ≡ (1/√(2π)) ∫_{−∞}^{∞} dk φ(k) ψk(x) = (1/√(2π)) ∫_{−∞}^{∞} dk φ(k) e^{ikx}        (6.70)
and is called a Gaussian wave packet. Such a Gaussian wave packet is widely
used in quantum mechanics as a way to think about localized particles.
You can think of (6.70) as specifying some sort of initial conditions. Since
Ψ(x, 0) and φ(k) are arbitrary we are free to choose them to correspond to a
particular physical system that we want to study. That’s what we are doing
with our Gaussian wave packet. We are choosing to study a localized wave.
After all, there aren’t any other boundary conditions we can use on the free
particle solutions to the Schrödinger equation.
By the way, in (6.70) the height of the wave packet is specified by A and the width of the packet is proportional to 1/√a. (Exercise: check this by plotting various Gaussian wave packets with different A and a.)
Now previously we found that the plane wave solutions ψk (x) = eikx are
not normalizable, but we claimed this didn’t matter as long as Ψ(x, t) is
normalizable. Let’s check this for our Gaussian wave packet. To do this you
can use the famous Gaussian integral
∫_{−∞}^{∞} dx e^{−ax² + bx} = √(π/a) e^{b²/4a}        (6.71)
Of course we don't need the whole quadratic ax² + bx + c because it's obvious, from (6.71), that
∫_{−∞}^{∞} dx e^{−ax² + bx + c} = √(π/a) e^{(b²/4a) + c}
Solution
ψ(x) ≡ Ψ(x, 0) = A e^{−a(x − x0)²}
1 = ∫_{−∞}^{∞} ψ*(x) ψ(x) dx
  = A² ∫_{−∞}^{∞} e^{−2a(x − x0)²} dx
  = A² e^{−2ax0²} ∫_{−∞}^{∞} e^{−2ax² + 4ax0 x} dx
  = A² e^{−2ax0²} √(π/2a) e^{16a²x0²/8a}
  = A² √(π/2a)
Thus
A = (2a/π)^{1/4}
and
ψ(x) ≡ Ψ(x, 0) = (2a/π)^{1/4} e^{−a(x − x0)²}        (6.72)
is the normalized Gaussian wave packet centered at x = x0 .
This example shows that even though plane waves are not normalizable,
the superposed wave packet is normalizable.
Thus
φ(k) = (1/(2πa)^{1/4}) e^{−k²/4a − ikx0}        (6.73)
Note that (6.73) is also a Gaussian (ignore x0 or set it to zero). Thus the
Fourier transform of a Gaussian is a Gaussian.
with Ek ≡ h̄ω = h̄²k²/(2m). This should tell us how our Gaussian wave packet behaves in time.
Example 6.6 Calculate Ψ(x, t) and |Ψ(x, t)|2 for the Gaussian
wave packet.
finally giving
Ψ(x, t) = (2a/π)^{1/4} √(m/(m + i2ah̄t)) exp[−ma(x − x0)²/(m + i2ah̄t)]        (6.76)
This yields
|Ψ(x, t)|² = √(2a/π) √(m²/(m² + 4a²h̄²t²)) exp[−2m²a(x − x0)²/(m² + 4a²h̄²t²)]        (6.77)
for the probability density.
These are wonderful results! They tell us how the Gaussian wave packet
changes with time. We can now Rreally see the utility of calculating φ(k),
plugging it into (6.75), doing the dk integral and finally getting out some
real answers!
The probability density (6.77) is very interesting. As time increases,
the amplitude decreases and the width of the wave packet increases. The
Gaussian wave packet “spreads out” over time. This is illustrated in Fig.
6.4. The wave packet dissipates. Initially the packet is well localized, and we
“know” where the particle is, but after a long time the packet is so spread
out that we don’t know where the particle is anymore.
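The spreading is easy to see numerically. The sketch below (ours, with h̄ = m = a = 1 and x0 = 0, all arbitrary choices) evaluates (6.77) at a few times and shows that the normalization stays fixed at 1 while the width σx grows.

```python
# Spreading of the Gaussian wave packet, from (6.77) (hbar = m = a = 1, x0 = 0).
import numpy as np

hbar, m, a, x0 = 1.0, 1.0, 1.0, 0.0
x = np.linspace(-10, 10, 2001)

def prob_density(x, t):
    denom = m**2 + 4 * a**2 * hbar**2 * t**2
    return np.sqrt(2 * a / np.pi) * np.sqrt(m**2 / denom) \
           * np.exp(-2 * m**2 * a * (x - x0)**2 / denom)

for t in (0.0, 1.0, 3.0):
    rho = prob_density(x, t)
    norm = np.trapz(rho, x)                   # stays approximately 1
    width = np.sqrt(np.trapz(x**2 * rho, x))  # sigma_x grows with t
    print(t, round(norm, 3), round(width, 3))
```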
propagating to the Right. We might have hoped that tacking on the time
dependence e−iωt in (6.75) would have given us a moving particle, but no
luck! What the time dependence did tell us was that the wave packet dissi-
pates, but our solutions (6.76) and (6.77) still won’t budge. They are still
clamped down at x = x0 .
We are back to our original problem discussed at the beginning of Section
6.5. The Schrödinger equation does not give us a particle and it also does
not give us a moving particle. We have to put both things in by hand, or
“specify initial conditions.”
Actually this is easy to do. The particle is fixed at x = x0 , so let’s just
make x0 move! A good way to do this is with
p0
x0 = t (6.78)
m
Thus our moving Gaussian wave is
e^{−a(x − p0 t/m)²}
and now we have a moving wave. Just make the substitution (6.78) in all of the above formulas.
and
σp = h̄ √a
For t = 0 we see that
σx(t = 0) = 1/(2√a)
but σx gets larger as t increases. This corresponds to the spreading of the wave packet as time increases and we are more and more uncertain of the position of the wave packet as time goes by (even though ⟨x⟩ = x0).
Combining these results gives
σx σp ≥ h̄/2
which is the famous Uncertainty Principle. Here we have shown how it comes
about for the Gaussian wave packet. Later we shall prove it is general.
Chapter 7
FEW-BODY BOUND
STATE PROBLEM
x ≡ x1 − x2        (7.1)
X ≡ (m1 x1 + m2 x2)/(m1 + m2)        (7.2)
M ≡ m1 + m2        (7.5)
Newton's law for each of the two particles reads ΣF1 = m1 ẍ1 and
ΣF2 = m2 ẍ2        (7.8)
Assuming that the particles interact through the potential U then these
equations become
−∂U/∂x1 = m1 ẍ1        (7.9)
and
−∂U/∂x2 = m2 ẍ2        (7.10)
Now with U ≡ U(x1 − x2) = U(x) we have
∂U/∂x1 = (∂U/∂x)(∂x/∂x1) = ∂U/∂x        (7.11)
and
∂U/∂x2 = (∂U/∂x)(∂x/∂x2) = −∂U/∂x        (7.12)
giving
−dU/dx = m1 ẍ1        (7.13)
and
+dU/dx = m2 ẍ2        (7.14)
Equations (7.9) and (7.10) or (7.13) and (7.14) are a set of coupled equations.
They must be solved simultaneously to obtain a solution. But notice the
following. With our new coordinates X and x we get
Ẍ = (1/M)(m1 ẍ1 + m2 ẍ2) = (1/M)(−dU/dx + dU/dx) = 0        (7.15)
giving
M Ẍ = 0        (7.16)
and
ẍ = ẍ1 − ẍ2 = −(1/m1) dU/dx − (1/m2) dU/dx = −((m1 + m2)/(m1 m2)) dU/dx        (7.17)
Defining the reduced mass
μ ≡ m1 m2/(m1 + m2)        (7.18)
we get
−dU/dx = μ ẍ        (7.19)
Thus equations (7.16) and (7.19) are uncoupled equations which we can solve
separately. Thus we say that we have solved the 2-body problem! Notice
also that these equations are equivalent 1-body equations for a “particle” of
mass M and acceleration Ẍ and another separate “particle” of mass µ and
acceleration ẍ moving in the potential U (x). We have also shown that the
“particle” of mass M is the center of mass “particle” and it moves with zero
acceleration.
Lagrangian Method
For a Lagrangian L(qi , q̇i ) where qi are the generalized coordinates, the equa-
tions of motion are µ ¶
d ∂L ∂L
− =0 (7.20)
dt ∂ q̇i ∂qi
for each coordinate qi . For our 2-body problem in 1-dimension we identify
q1 ≡ x1 and q2 ≡ x2 . The 2-body Lagrangian is
1 1
L(x1 , x2 , ẋ1 , ẋ2 ) = m1 ẋ1 2 + m2 ẋ2 2 − U (x1 − x2 ) (7.21)
2 2
Lagrange’s equations are
µ ¶
d ∂L ∂L
− =0 (7.22)
dt ∂ ẋ1 ∂x1
and µ ¶
d ∂L ∂L
− =0 (7.23)
dt ∂ ẋ2 ∂x2
which yield the coupled equations (7.9) and (7.10). (Exercise: Prove this.)
The trick with Lagrange’s equations is to pick different generalized coor-
dinates. Instead of choosing q1 = x1 and q2 = x2 we instead make the choice
q1 ≡ X and q2 ≡ x. Using equations (7.1) and (7.2) in (7.21) we obtain
L(x, X, ẋ, Ẋ) = (1/2) M Ẋ² + (1/2) μ ẋ² − U(x)        (7.24)
(Exercise: Show this.) Lagrange's equations are
d/dt (∂L/∂Ẋ) − ∂L/∂X = 0
and
d/dt (∂L/∂ẋ) − ∂L/∂x = 0
which yield (7.16) and (7.19) directly. (Exercise: Show this.)
To summarize, the trick with the Lagrangian method is to choose X and
x as the generalized coordinates instead of x1 and x2 .
Hψ = Eψ (7.25)
p21
H= + U (x1 ) (7.26)
2m1
p21 p2
H= + 2 + U (x1 − x2 ) (7.28)
2m1 2m2
which is our 2-body Schrödinger equation. The problem now is not that
we have 2 coupled equations, as in the classical case, but rather instead
of the ordinary differential equation (for the variable x1 ) that we had for
the 1-body Schrödinger equation (7.27), we are now stuck with a partial
differential equation (7.29) for the variables x1 and x2 .
(1/m1) ∂²/∂x1² + (1/m2) ∂²/∂x2² = (1/μ) ∂²/∂x² + (1/M) ∂²/∂X²        (7.35)
where the "cross terms" ∂²/∂x∂X have cancelled out. Thus the Schrödinger equation (7.29) becomes
[ −(h̄²/2M) ∂²/∂X² − (h̄²/2μ) ∂²/∂x² + U(x) ] ψ(X, x) = E ψ(X, x)        (7.36)
which is still a partial differential equation but now U depends on only one
variable and we are now able to successfully apply the technique of separation
of variables. Thus we define
ψ(X, x) ≡ W(X) w(x)        (7.37)
and
E = E1 + E2 ≡ EX + Ex        (7.38)
which is a sum of center-of-mass and relative energy. Upon substitution of (7.37) and (7.38) into (7.36) we obtain
−(h̄²/2M) d²W(X)/dX² = EX W(X)        (7.39)
and
−(h̄²/2μ) d²w(x)/dx² + U(x) w(x) = Ex w(x)        (7.40)
which are now two uncoupled ordinary differential equations.
Just as in the classical case, where we found two equivalent 1-body equa-
tions (7.16) and (7.19), so too have we found in quantum mechanics two
equivalent 1-body Schrödinger equations.
One is for the center of mass “particle” of mass M and one is for the
reduced mass “particle” of mass µ. In the classical case, the center of mass
equation was identical to a free particle equation (7.16) for mass M , so too
in the quantum case we have found the center of mass Schrödinger equation
is for a free particle of mass M . In the classical case the solution to (7.16)
was
Ẍ = 0 (7.41)
and in the quantum case the solution to (7.39) is
W(X) = A e^{iKX} + B e^{−iKX}        (7.42)
which is the free particle plane wave solution where
K ≡ √(2M EX) / h̄        (7.43)
In the classical case, to get the relative acceleration equation we just make
the replacement x1 → x and m1 → µ to get the 2-body relative equation
(7.19). We see that this identical replacement is also made for the quantum
equation (7.40).
We can now go ahead and solve the 2-body quantum equation (7.40)
using exactly the same techniques that we have used previously. In fact we
just copy all our previous answers but making the replacement m → µ and
treating the variable x as x ≡ x1 − x2 .
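As a trivial numerical aside (the constants are standard values, not from the text), the reduced mass (7.18) for the electron–proton system is very nearly the electron mass, which is why treating the proton as fixed works so well for hydrogen.

```python
# Reduced mass mu = m1 m2 / (m1 + m2), equation (7.18), for electron + proton.
m_e = 9.1093837015e-31   # kg
m_p = 1.67262192369e-27  # kg

mu = m_e * m_p / (m_e + m_p)
print(mu / m_e)          # about 0.99946: mu is almost exactly the electron mass
```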
3-DIMENSIONAL
PROBLEMS
Chapter 8
3-DIMENSIONAL
SCHRÖDINGER
EQUATION
−(1/Φ) d²Φ/dφ² = m²        (8.14)
In the first line m is a fixed value, but in the second line we allow m to
take on both positive and negative values. The third line arises because we
will put all of the normalization into the Θ(θ) solution.
The periodic boundary condition is Φ(φ + 2π) = Φ(φ), giving
e^{im(φ + 2π)} = e^{imφ}        (8.17)
and thus
ei2πm = 1 (8.18)
implying
m = 0, ±1, ±2 . . . (8.19)
m is called the azimuthal quantum number. Equation (8.19) effectively means
0
that the azimuthal angle is quantized! (Write Φ = eimφ ≡ eiφ and thus φ0 is
quantized.)
The other angular equation becomes
sin θ (d/dθ)(sin θ dΘ/dθ) + [l(l + 1) sin²θ − m²] Θ = 0        (8.20)
whose solutions are the associated Legendre functions, with
l = 0, 1, 2, · · ·        (8.23)
A few points to note. Firstly, we could have solved (8.20) directly by the
power series method. Using a cut-off would have quantized l for us, just like
the cut-off quantized E for the harmonic oscillator. Secondly, for m = 0, the
Legendre Associated Differential Equation becomes the Legendre Differential
Equation with the Legendre Polynomials Pl (x) as solutions, whereas above
we have the Legendre function of the first kind Plm (x). Thirdly we have
already noted that l is required to be integer as given in (8.23). Fourthly,
the properties of the Legendre function also requires
m = −l, · · · + l
(8.24)
For example, if l = 0 then m = 0. If l = 1, then m = −1, 0, +1. If l = 2,
then m = −2, −1, 0, +1, +2.
Thus we have found that the solutions of the angular equations are char-
acterized by two discrete quantum numbers m and l. Thus we write (8.11)
as
Ylm (θ, φ) = Aeimφ Plm (cos θ) (8.25)
where A is some normalization. If we normalize the angular solutions to
unity then the overall normalization condition will require the spatial part
to be normalized to unity also. Let’s do this. The result is [Griffith, 1995]
Ylm(θ, φ) = ε √[ ((2l + 1)/4π) ((l − |m|)!/(l + |m|)!) ] e^{imφ} Plm(cos θ)        (8.26)
where ε ≡ (−1)^m for m ≥ 0 and ε = 1 for m ≤ 0. These are normalized as in
∫_0^{2π} dφ ∫_0^π dθ sin θ Y*lm(θ, φ) Yl′m′(θ, φ) = δll′ δmm′        (8.27)
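The orthonormality (8.27) can be spot-checked numerically. The sketch below is ours and uses SciPy, whose sph_harm takes the azimuthal angle before the polar angle, opposite to the (θ, φ) order used here.

```python
# Numerical spot-check of the orthonormality relation (8.27).
import numpy as np
from scipy.special import sph_harm

theta = np.linspace(0, np.pi, 400)       # polar angle
phi   = np.linspace(0, 2 * np.pi, 800)   # azimuthal angle
TH, PH = np.meshgrid(theta, phi, indexing="ij")

def Y(l, m):
    return sph_harm(m, l, PH, TH)        # SciPy order: (m, l, azimuth, polar)

def overlap(l1, m1, l2, m2):
    integrand = np.conj(Y(l1, m1)) * Y(l2, m2) * np.sin(TH)
    return np.trapz(np.trapz(integrand, phi, axis=1), theta)

print(abs(overlap(2, 1, 2, 1)))   # approximately 1  (normalized)
print(abs(overlap(2, 1, 3, 1)))   # approximately 0  (orthogonal in l)
print(abs(overlap(2, 1, 2, 0)))   # approximately 0  (orthogonal in m)
```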
−(h̄²/2μ) d²u/dr² + [ U(r) + l(l + 1)h̄²/(2μr²) ] u(r) = E u(r)        (8.29)
This is often called the reduced Schrödinger equation for the reduced wave
function u(r). (R(r) is the radial wave function.) For l = 0 this is the same
form as the 1-dimensional Schrödinger equation! For l 6= 0 it is still the same
form as the 1-dimensional equation with an effective potential
Ueffective(r) ≡ U(r) + l(l + 1)h̄²/(2μr²)        (8.30)
Note that precisely the same type of thing occurs in the classical case.
The normalization of the wave function is now the volume integral
∫ dτ |Ψ(r, t)|² = 1        (8.31)
where the volume integral ∫dτ is to be performed over the whole universe. In spherical coordinates we have
∫_0^∞ r² dr ∫_0^π sin θ dθ ∫_0^{2π} dφ |Ψ(r, θ, φ)|² = 1        (8.32)
(8.34)
Remember, it is actually the u(r) that we solve for in the radial Schrödinger
equation (8.29). If the reduced wave functions u(r) are normalized according
to (8.34), then everything else is normalized.
x² y′′ + x y′ + (x² − ν²) y = 0        (8.39)
where y = y(x), y′ ≡ dy/dx and ν ≥ 0. The solutions are [Spiegel, 1968, pg. 137, equation 24.14]
y(x) = A Jν (x) + B Nν (x) (8.40)
where Jν (x) are Bessel functions of order ν and Nν (x) are Neumann func-
tions of order ν. (The Neumann functions are often called Weber functions
and given the symbol Yν (x). See footnote [Spiegel, 1968, pg. 136]. However
Yν (x) is lousy notation for us, because we use Y for spherical harmonics.)
If the dependent variable is
η ≡ kr (8.41)
d2 y dy
r2 +r + (k 2 r2 − ν 2 )y = 0
dr2 dr
(8.42)
where y = y(kr) = y(η). (Exercise: show this)
Now the radial Schrödinger equation (8.37) does not look like the Bessel
equations (8.39) or (8.41). Introduce a new function [Arfken, 1985, pg. 623]
defined as
w(η) ≡ √η R(r)        (8.43)
i.e.
w(kr) = √(kr) R(r)        (8.44)
Then the radial Schrödinger equation (8.37) becomes
r² d²w/dr² + r dw/dr + [ k²r² − (l + 1/2)² ] w = 0        (8.45)
which is the BDE. (Exercise: show this.)
Notice that the order of the Bessel and Neumann functions is
ν = l + 1/2        (8.46)
These Bessel and Neumann functions of half integer order are tabulated in
[Spiegel, 1968, pg. 138]
Now these Bessel and Neumann functions of half integer order are usually
re-defined as Spherical Bessel functions and Spherical Neumann functions of
order l as follows [Arfken, 1985, pg. 623]
jl(x) ≡ √(π/2x) J_{l+1/2}(x)        (8.49)
nl(x) ≡ √(π/2x) N_{l+1/2}(x)        (8.50)
giving
w(η) = A′ √(2η/π) jl(η) + B′ √(2η/π) nl(η)        (8.51)
Thus, using R(r) = w(η)/√η, the radial wave function is a combination of jl and nl. The spherical Bessel and Neumann functions can be generated from
jl(x) = (−x)^l (1/x d/dx)^l (sin x / x)
and
nl(x) = −(−x)^l (1/x d/dx)^l (cos x / x)        (8.55)
giving, for example,
j0(x) = sin x / x        (8.56)
and
n0(x) = −cos x / x        (8.57)
Thus jl (x) can be thought of as a “generalized” sine function and nl (x) as a
“generalized” cosine function. Now if the solution of a differential equation
can be written as
y = A′ sin kr + B′ cos kr        (8.58)
which is useful in bound state problems, it can also be written in terms of the exponentials e^{ikr} and e^{−ikr}, which are useful in scattering problems, because
e^{ikr} = cos kr + i sin kr
and
e^{−ikr} = cos kr − i sin kr        (8.61)
Similarly the Hankel functions (of first and second kind) are defined as
[Spiegel, 1968, pg. 138]
and
Hν(2) (x) ≡ Jν (x) − i Nν (x) (8.63)
and are also solutions of the BDE, i.e. (8.40) can also be written
(8.64)
In 3-d bound state problems (for arbitrary l) the Bessel and Neumann func-
tions are useful, whereas in scattering the Hankel functions are more useful.
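As a quick check (ours, using SciPy), the library spherical Bessel and Neumann functions reproduce the closed forms (8.56) and (8.57).

```python
# Check that scipy's spherical Bessel/Neumann functions match (8.56) and (8.57).
import numpy as np
from scipy.special import spherical_jn, spherical_yn

x = np.linspace(0.1, 10, 200)
print(np.allclose(spherical_jn(0, x), np.sin(x) / x))    # True: j0(x) = sin x / x
print(np.allclose(spherical_yn(0, x), -np.cos(x) / x))   # True: n0(x) = -cos x / x
```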
Footnote: Given that J and N (or j and n) are the generalized sine and
cosine, then Hν is actually like ie−ikr = sin kr + i cos kr and Hν is like
(1) (2)
and
(2)
hl (x) ≡ jl (x) − i nl (x) (8.66)
Thus, for example,
eix
o = −i
h(1) (8.67)
x
and
e−ix
h(2)
o (x) = i (8.68)
x
Thus the solutions (8.52) and (8.53) to the Schrödinger equation can also
be written
(8.69)
or
(8.70)
Chapter 9
HYDROGEN-LIKE ATOMS
The hydrogen atom consists of one electron in orbit around one proton with
the electron being held in place via the electric Coulomb force. Hydrogen-
like atoms are any atom that has one electron. For example, in hydrogen-like
carbon we have one electron in orbit around a nucleus consisting of 6 protons
and 6 neutrons. The Coulomb potential for a hydrogen-like atom is
U(r) = −(1/4πε0) (Ze²/r)        (9.1)
in MKS units. Here Z represents the charge of the central nucleus.
We shall develop the theory below for the 1-body problem of an electron
of mass me interacting via a fixed potential. (We will do this so that we can
pull out well known constants like the Bohr radius which involves me ). If
one wishes to consider the 2-body problem one simply replaces me with µ
in all the formulas below.
Many books [Griffiths, 1995] solve the Hydrogen atom problem with the
power series method just as we did with the harmonic oscillator. They find
that the solutions are the famous Associated Laguerre Polynomials. We shall
instead follow the latter approach described above. We will closely inspect
the Schrödinger equation for the Hydrogen atom problem and observe that it
is nothing more than Laguerre’s Associated Differential Equation (ADE). We
then immediately conclude that the solutions are the Laguerre Associated
Polynomials. (Of course all good students will do an exercise and prove, by
power series method, that the Laguerre associated polynomials are, in fact,
solutions to Laguerre’s ADE.)
Inserting the Coulomb potential (9.1) into the reduced Schrödinger equa-
tion (8.29) gives
" #
d2 u 2me 1 Ze2 l(l + 1)
+ −k 2
+ − u=0 (9.2)
dr2 h̄2 4π²0 r r2
where we have used me instead of µ and me is the mass of the electron. Also
we have defined
k ≡ +√(−2me E) / h̄        (9.3)
because the bound state energies E of the Coulomb potential are negative
(E < 0). Now define a new variable
ρ ≡ 2kr (9.4)
giving (9.2) as
d²u/dρ² + [ −1/4 + λ/ρ − l(l + 1)/ρ² ] u = 0        (9.5)
with
λ ≡ me Ze² / (4πε0 h̄² k)        (9.6)
As with the harmonic oscillator we “peel off” the asymptotic solutions.
For ρ → ∞, equation (9.5) is approximately
d²u/dρ² − (1/4) u = 0        (9.7)
with solution
u(ρ) = A e^{ρ/2} + B e^{−ρ/2}        (9.8)
but we must have A = 0 to keep u finite, so that
u(ρ → ∞) = B e^{−ρ/2}        (9.9)
For ρ → 0, equation (9.5) is approximately
d²u/dρ² − (l(l + 1)/ρ²) u = 0        (9.10)
with solution
u(ρ) = Cρ^{l+1} + Dρ^{−l}        (9.11)
which can be checked by substitution. This blows up for ρ → 0 and thus we must have D = 0. Thus
u(ρ → 0) = Cρ^{l+1}        (9.12)
Thus we now define a new reduced wave function v(ρ) via
u(ρ) ≡ ρ^{l+1} e^{−ρ/2} v(ρ)        (9.13)
with the asymptotic behavior now separated out. Substituting this into (9.5), and after much algebra, we obtain (do Problem 9.1)
ρ v′′ + [2(l + 1) − ρ] v′ + (λ − l − 1) v = 0        (9.14)
with v′ ≡ dv/dρ and v′′ ≡ d²v/dρ². Laguerre's ADE is [Spiegel, 1968, pg. 155]
x y′′ + (m + 1 − x) y′ + (n − m) y = 0        (9.15)
with y′ ≡ dy/dx, and the solutions are the associated Laguerre polynomials Ln^m(x), which are listed in [Spiegel, 1968, pg. 155-156] together with many
useful properties, all of which can be verified similar to the homework done
for the Hermite polynomials in Chapter 5. Note that n and m are integers.
We see that the Schrödinger equation (9.14) is actually the Laguerre ADE and the solutions are
v(ρ) = L^{2l+1}_{λ+l}(ρ)        (9.16)
Thus our complete solution, from (9.13) and (9.16), is
u(ρ) = A ρ^{l+1} e^{−ρ/2} L^{2l+1}_{λ+l}(ρ)        (9.17)
where we have inserted a normalization factor A. Note that this looks different in many other quantum mechanics books [Griffiths, 1995] because of differing conventions for the associated Laguerre polynomials.
λ = n = 1, 2, 3 . . .
(9.18)
which gives our energy quantization condition. Combining (9.18), (9.6) and
(9.3) gives
à !2
me Ze2 1 Z2 E1
En = − 2 2
≡ − 2 ER ≡ 2
2h̄ 4π²0 n n n
(9.19)
where the Rydberg energy is defined as
me e4
ER ≡ = 13.6 eV (9.20)
2(4π²0 h̄)2
The energy levels are plotted in Fig. 9.1. Notice that the spacing decreases
as n increases.
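A two-line sketch (ours) of the spectrum (9.19) makes the shrinking level spacing explicit.

```python
# Hydrogen-like energy levels E_n = -Z^2 E_R / n^2, equation (9.19).
E_R = 13.6   # Rydberg energy in eV, equation (9.20)
Z = 1        # hydrogen

for n in range(1, 6):
    print(n, round(-Z**2 * E_R / n**2, 3))
# 1 -13.6   2 -3.4   3 -1.511   4 -0.85   5 -0.544  (spacing shrinks with n)
```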
We now wish to normalize the reduced wave function (9.17), using (8.34).
This is done in Problem 9.2 with the result that
A = √[ Z (n − l − 1)! / (a n² [(n + l)!]³) ]        (9.21)
where the Bohr radius is defined as
a ≡ 4πε0 h̄² / (me e²) = 0.529 × 10⁻¹⁰ m ≈ 1 Å        (9.22)
which is a characteristic size for the hydrogen atom. Thus k becomes
k = Z/(na)        (9.23)
Using R(r) = u(r)/r and ρ = 2kr, the final wave function is
ψnlm(r, θ, φ) = R(r) Ylm(θ, φ)        (9.24)
or
ψnlm(r, θ, φ) = √[ (2Z/na)³ (n − l − 1)! / (2n[(n + l)!]³) ] e^{−Zr/na} (2Zr/na)^l L^{2l+1}_{n+l}(2Zr/na) Ylm(θ, φ)        (9.25)
which are the complete wave functions for Hydrogen like atoms.
9.2 Degeneracy
Let us summarize our quantum numbers. For the hydrogen like atom we
have, from (9.19)
E1
En = 2 (9.26)
n
where n is the principal quantum number such that (see (9.18))
n = 1, 2, 3 · · · (9.27)
l = 0, 1, 2, · · · (9.28)
and from (8.19) and (8.24) that the magnetic quantum number is
ml = 0, ±1, ±2 · · · ± l (9.29)
However, looking at the wave function in (9.25) we see that in order to avoid
(undefined) negative factorials we must also have, for hydrogen-like atoms,
the condition
l = 0, 1, 2, · · · n − 1 (9.30)
for n = 1, ψ100
for n = 2, ψ200 , ψ210 , ψ21−1 , ψ211
for n = 3, ψ300 , ψ310 , ψ31−1 , ψ311
ψ320 , ψ32−1 , ψ321 , ψ32−2 , ψ322
etc. However the energy En in (9.26) depends only on the principal quantum
number n. Thus the energy level E1 has only 1 wave function, but E2 has
4 wave functions associated with it and E3 has 9 wave functions. E2 is said
to be 4-fold degenerate and E3 is said to be 9-fold degenerate.
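The n²-fold degeneracy is easy to confirm by brute-force enumeration of the allowed (n, l, ml) combinations; the following small sketch is ours.

```python
# Count the (n, l, m_l) states allowed by (9.27)-(9.30): the level E_n is
# n^2-fold degenerate (ignoring spin).
def states(n):
    return [(n, l, m) for l in range(n) for m in range(-l, l + 1)]

for n in (1, 2, 3):
    print(n, len(states(n)))   # 1, 4, 9
```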
We shall see later that this degeneracy can be lifted by such things as
external magnetic fields (Zeeman effect) or external electric fields (Stark
effect). In the Zeeman effect we will see that the formula for energy explicitly
involves both n and ml .
Chapter 10
ANGULAR MOMENTUM
[Li , Lj ] = ih̄²ijk Lk
(10.5)
and
[L2 , Li ] = 0
(10.6)
where Li is any of Lx , Ly , Lz and L2 ≡ L2x + L2y + L2z . The Levi-Civita
symbol is defined as
εijk = +1 if ijk is an even permutation of 1, 2, 3;  −1 if ijk is an odd permutation of 1, 2, 3;  0 if ijk is not a permutation of 1, 2, 3        (10.7)
(For example ²123 = +1, ²231 = +1, ²321 = −1, ²221 = 0.) These commu-
tation relations are proved in the problems. (do Problems 10.1 and 10.2)
Notice that in (10.5) a sum over k is implied. In other words
X
[Li , Lj ] = ih̄ ²ijk Lk (10.8)
k
and " #
µ ¶
1 ∂ ∂ 1 ∂2
L = −h̄
2 2
sin θ + (10.15)
sin θ ∂θ ∂θ sin2 θ ∂φ2
The spherical harmonics satisfy
L² Ylm(θ, φ) = l(l + 1)h̄² Ylm(θ, φ)
and
Lz Ylm(θ, φ) = mh̄ Ylm(θ, φ)        (10.19)
which are often written in Dirac notation as
L² |lm⟩ = l(l + 1)h̄² |lm⟩        (10.20)
and
Lz |lm⟩ = mh̄ |lm⟩        (10.21)
Thus we have found that the operators L2 and Lz have simultaneous eigen-
functions |lmi = Ylm such that l = 0, 1, 2, 3 . . . and m = −l . . . + l.
Commutator Theorem Two non-degenerate operators have simultaneous eigen-
function iff the operators commute.
Thus the fact that L2 and Lz have simultaneous eigenfunctions is an
instance of the commutator theorem. The question is, since say L2 and Lx
also commute, then won’t they also have simultaneous eigenfunctions? The
answer is no because Lx , Ly , and Lz do not commute among themselves
and thus will never have simultaneous eigenfunctions. Thus L2 will have
simultaneous eigenfunctions with only one of Lx , Ly and Lz .
Equation (10.20) tells us that the magnitude of the angular momentum
is
L = √(l(l + 1)) h̄        l = 0, 1, 2, . . .        (10.22)
For example, for l = 1 we get L = √2 h̄ (which does have the correct units for angular momentum). Because of the angular momentum quantum number
angular momentum). Because of the angular momentum quantum number
we see that angular momentum is quantized. But this is fine. We have seen
that energy is quantized and we now find that angular momentum is also
quantized. But here’s the crazy thing. The projection, Lz , of the angular
momentum is also quantized via (10.21) with the magnitude of projection
being
Lz = mh̄ m = −l, . . . + l
(10.23)
You see even if the magnitude L is quantized, we would expect classically
that its projection on the z axis could be anything, whether or not L is
quantized. The fact that Lz is also quantized means that the vector L can
only point in certain directions! This is shown in Fig. 10.1. This is truly
amazing! Thus, quite rightly, the quantization of Lz is referred to as space
quantization.
The angular momentum raising and lowering operators have the property that
L± Ylm(θ, φ) = h̄ √(l(l + 1) − m(m ± 1)) Yl,m±1(θ, φ)        (10.24)
or
L± |lm⟩ = h̄ √(l(l + 1) − m(m ± 1)) |l, m ± 1⟩        (10.25)
which raise or lower the value of m. (do Problem 10.4)
h̄
∆Lx ∆Ly ≥ |hLz i| (10.27)
2
where θ is the angle between the dipole moment µ and the external magnetic
field B. Define the z direction to be in the direction of B. Thus µ cos θ = µz
and
U = −µz B (10.29)
Thus we need to calculate µz now. The dipole moment for the current loop
of Fig. 10.2 is
µ = iAn̂ (10.30)
where i is the current, A is the area of the loop and n̂ is the vector normal
to the plane of the loop. The magnitude is
−e 2
µ = iA = πr (10.31)
T
where −e is the electron charge, T is the period and r is the radius of the
loop. The angular momentum is
2πr2
L = mvr = m (10.32)
T
−e
and combining with (10.31) gives µ = 2m L or
µ ¶ µ ¶
e −e
µ=− L ≡ γL ≡ g L (10.33)
2m 2m
where the gyromagnetic ratio γ is defined as the ratio of µ to L and we have
also introduced the so-called g-factor. For the above example we have
−e
γ= (10.34)
2m
and
g=1 (10.35)
(Be careful because some authors call g the gyromagnetic ratio!) Thus
µ ¶ µ ¶
e e
µz = − Lz = − ml h̄ ≡ −µB ml = −g µB ml (10.36)
2m 2m
where the Bohr magneton is defined as
eh̄
µB ≡ . (10.37)
2m
Thus the interaction potential energy is
eh̄
U = ml µB B = ml B
2m
(10.38)
where we don’t need to worry about the − sign because ml takes on both +
and − signs via ml = −l, . . .+l. Now there are 2l+1 possible different values
for ml . Thus a spectral line of given l will be split into 2l + 1 separate lines
when placed in an external magnetic field. (In the absence of a field they
will not be split and will all have the same energy.) The spacing between
each of the split lines will be µB B. See Fig. 10.3.
Notice that the splitting of the spectral lines will be bigger for bigger
magnetic fields. This is great because if we notice the Zeeman effect in
the spectra of stars we can easily figure out the magnetic fields. In fact by
observing the spectra of sunspots, people were able to find the strength of
magnetic fields in the region of sunspots!
10.4 Spin
The Stern-Gerlach experiment performed in 1925 [Tipler, 1992] showed that
the electron itself also carries angular momentum which has only 2 possible
orientations. As nicely explained in [Griffiths, 1995] this angular momentum
is intrinsic to the electron and does not arise from orbit effects. The half
integral values of spin that we discovered above in the algebraic method are
obviously suitable for the electron. The Stern-Gerlach experiment implies a
spin s = 12 for the electron with 2 projections ms = + 12 and ms = − 12 .
The theory of spin angular momentum is essentially a copy of the theory
of orbital angular momentum. Thus we have (see (10.5) and (10.6))
[Si, Sj] = ih̄ εijk Sk        (10.39)
and
[S², Si] = 0 = [S², S]        (10.40)
Similarly from (10.20) and (10.21) we have
S² |sm⟩ = s(s + 1)h̄² |sm⟩        (10.41)
and
Sz |sm⟩ = mh̄ |sm⟩        (10.42)
(Note that before our m meant ml and here our m means ms.) Finally, from (10.25),
S± |sm⟩ = h̄ √(s(s + 1) − m(m ± 1)) |s, m ± 1⟩        (10.43)
where
S± ≡ Sx ± iSy        (10.44)
Now in the theory of orbital angular momentum with the spherical har-
monics, which were solutions to the Schrödinger equation, we only had
l = 0, 1, 2, . . .. But we saw from the algebraic method that half integer values
can also arise (but there won’t be solutions to the Schrödinger equation). In
the theory of spin we use all values, that is
1 3
s = 0, , 1, , . . . (10.45)
2 2
and we still have
m = −s, . . . + s. (10.46)
10.4.1 Spin 1/2
All of the quarks and leptons (such as the electron and neutrino) as well as the neutron and proton carry an intrinsic spin of 1/2. Thus we shall study this now in some detail.
Now in our orbital angular momentum theory the eigenfunctions |lm⟩ were just the spherical harmonic functions and the operators L², Lz were just angular differential operators. It turns out that for spin 1/2 it is not possible to find functions and differential operators satisfying the algebra specified in equations (10.39)–(10.46), but it is possible to find matrix representations. The eigenfunctions are the two-component column vectors
|1/2, 1/2⟩ ≡ χ+ ≡ (1, 0)ᵀ        (10.47)
for spin "up" and
|1/2, −1/2⟩ ≡ χ− ≡ (0, 1)ᵀ        (10.48)
for spin "down". Now we can work out the operators. This is done in [Griffiths, 1995, pg. 156]. I will just write down the answer which you can check by substituting.
Given the eigenstates (10.47) and (10.48), the operators in (10.41) and (10.42) must be
Sx = (h̄/2) ( 0  1 ; 1  0 )        (10.49)
Sy = (h̄/2) ( 0  −i ; i  0 )        (10.50)
Sz = (h̄/2) ( 1  0 ; 0  −1 )        (10.51)
(the semicolon separates the rows of each 2 × 2 matrix). These are commonly written as S = (h̄/2) σ in terms of the Pauli spin matrices
σx ≡ ( 0  1 ; 1  0 )        (10.56)
σy ≡ ( 0  −i ; i  0 )        (10.57)
σz ≡ ( 1  0 ; 0  −1 )        (10.58)
In problems 10.5–10.8, it is shown very clearly that the spin 12 operators
Si acting on |smi obey exactly the same algebra as the operators Li acting
on |lmi = Ylm . Similar operators and states can also be found for all of the
half integer values of spin angular momentum. (We only considered spin 12 ,
but arbitrary l).
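As a concrete check (ours, with h̄ = 1), the matrices (10.49)–(10.51) really do satisfy the angular momentum algebra and give S² = s(s + 1)h̄² with s = 1/2.

```python
# Check the spin-1/2 matrices against (10.39) and (10.41), with hbar = 1.
import numpy as np

hbar = 1.0
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

comm = Sx @ Sy - Sy @ Sx
print(np.allclose(comm, 1j * hbar * Sz))                    # True: [Sx, Sy] = i hbar Sz

S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
print(np.allclose(S2, 0.5 * 1.5 * hbar**2 * np.eye(2)))     # True: s(s+1) = 3/4
```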
µz ∝ Sz (10.60)
interaction with the magnetic field produced by the orbiting nucleus and the
interaction energy again is
except that now µz comes from (10.60) and the magnetic field from (10.59).
In the Zeeman effect we used Lz = ml h̄ in (10.31) and so the interaction
energy (10.38) was directly proportional to ml . Thus if l = 1 the splitting
was 3-fold (ml = 0), or if l = 2 the splitting was 5-fold (ml = 0, ±1, ±2),
etc.
Similarly here we have
Sz = ms h̄
but for the electron ms has only two values (ms = ± 12 ) and so there will
only be double splitting of the spectral lines, rather than the 3-fold, 5-fold,
etc. splitting observed in the Zeeman effect. The actual magnitude of the
splitting is calculated in equations (6.65) and (6.66) of [Griffiths, 1995]. The
point of our discussion was simply to show that spin leads to a double split-
ting of spectral lines.
A more physical understanding of this double splitting can be seen from
Fig. 10.4 and 10.5.
The spin-orbit effect occurs for all states except for S states (l = 0). This
can be seen as follows. The magnetic field due to the proton is proportional
to the electron angular momentum L as
B∝L
or, more correctly, the angular momentum of the proton from the electron’s
point of view. The dipole moment of the electron is proportional to its spin
µ∝S
U ∝S·L
or
U ∝ S · L = Sz L = ms l(l + 1)h̄
which is zero for l = 0 (S states). As shown in Fig. 10.5, the spin-orbit
interaction splits the P state but not the S state.
|j1 − j2| ≤ j ≤ j1 + j2        (10.63)
with j jumping in integer steps. Here j1 and j2 are the magnitudes of the individual angular momenta. Also
m = m1 + m2        (10.64)
because jz = j1z + j2z. From our previous example with j1 = 1 and j2 = 1/2 we have |1 − 1/2| ≤ j ≤ 1 + 1/2, giving 1/2 ≤ j ≤ 3/2, implying j = 1/2 or 3/2.
The symbol j can be either orbital angular momentum l or spin angular
momentum s
Example 10.1 Two electrons have spin 1/2. What is the total spin of the two electron system?
Solution
S1 = 1/2,  S2 = 1/2
|1/2 − 1/2| ≤ S ≤ 1/2 + 1/2
0 ≤ S ≤ 1
S jumps in integer steps, therefore S = 0 or 1.
Solution
m1 = +1/2, −1/2    m2 = +1/2, −1/2
m = m1 + m2 = 1/2 + 1/2 = 1
or m = 1/2 − 1/2 = 0
or m = −1/2 + 1/2 = 0
or m = −1/2 − 1/2 = −1
Evidently the m = 0, ±1 belong to S = 1 and the other m = 0 belongs to S = 0. S = 1 is called the triplet combination and S = 0 is called the singlet combination.
and
|1/2, −1/2⟩ ≡ ↓        (10.66)
The |1 1⟩ state, having m = 1, can only contain ↑↑. Thus
|1 1⟩ = ↑↑        (10.67)
and similarly
|1 −1⟩ = ↓↓        (10.68)
Note that both of these states are symmetric under interchange of particles 1 and 2. The |0 0⟩ and |1 0⟩ states must contain an equal admixture of ↑ and ↓ in order to get m = 0. Thus
|0 0⟩ = A(↑↓ ± ↓↑)        (10.69)
and
|1 0⟩ = A′(↑↓ ± ↓↑)        (10.70)
Now assuming each |1/2, 1/2⟩ and |1/2, −1/2⟩ is separately normalized, i.e. ⟨1/2, 1/2 | 1/2, 1/2⟩ = 1, then we must have A = A′ = 1/√2. Thus the only difference between |0 0⟩ and |1 0⟩ can be in the ± sign. Now for the + sign the wave function will be symmetric under interchange of particles 1 and 2, while the − sign gives antisymmetry. Thus the + sign naturally belongs to the |1 0⟩ state. Therefore our final composite wave functions are, for the S = 1 symmetric triplet,
|1 1⟩ = ↑↑        (10.71)
|1 0⟩ = (1/√2)(↑↓ + ↓↑)        (10.72)
|1 −1⟩ = ↓↓        (10.73)
and for the S = 0 antisymmetric singlet,
|0 0⟩ = (1/√2)(↑↓ − ↓↑)        (10.74)
and
Lz = ml h̄ (10.76)
Also the magnitude of S is
q
S= s(s + 1) h̄ (10.77)
and
Sz = ms h̄ (10.78)
We now define the total angular momentum J to be the sum of orbital and
spin angular momenta as
J≡L+S (10.79)
Before we had L, Lz , S and Sz in terms of the quantum numbers l, ml , s,
ms . Let us define new quantum numbers j and mj such that the magnitude
of J is q
J ≡ j(j + 1) h̄
and
Jz ≡ mj h̄
From (10.63) and (10.64) we obviously have
j =l±s
and
m j = ml ± m s
Ji ≡ Li + Si
X
J ≡ Ji
i
LS coupling holds for most atoms and for weak magnetic fields. jj coupling
holds for heavy atoms and strong magnetic fields. jj coupling also holds for
most nuclei. [Beiser, pg. 264-267, 1987; Ziock, pg. 139, 1969]
The physical reasons as to why LS coupling holds versus jj coupling can
be found in these two references. The basic idea is that internal electric forces
are responsible for coupling the individual Li of each electron into a single
vector L. Strong magnetic fields can destroy this cooperative effect and
then all the spins act individually. Normally a single electron “cooperates”
P P
with all other electrons giving L = Li and S = Si . However in a
i i
strong magnetic field (internal or external), all the electrons start marching
to the orders of the strong field and begin to ignore each other. Then we get
Li = Li + Si .
Solution
S1 = 1/2,  S2 = 1/2
l1 = 1 (P state),  l2 = 2 (D state)
LS Coupling:
L = L1 + L2:  |L1 − L2| ≤ L ≤ L1 + L2, i.e. |1 − 2| ≤ L ≤ 1 + 2  ⇒  L = 1, 2, 3
S = S1 + S2:  |S1 − S2| ≤ S ≤ S1 + S2, i.e. |1/2 − 1/2| ≤ S ≤ 1/2 + 1/2  ⇒  S = 0, 1
J = L + S:  |L − S| ≤ J ≤ L + S, giving the terms 2S+1LJ:

S    L    J          2S+1LJ
0    1    1          1P1
0    2    2          1D2
0    3    3          1F3
1    1    0, 1, 2    3P0  3P1  3P2
1    2    1, 2, 3    3D1  3D2  3D3
1    3    2, 3, 4    3F2  3F3  3F4
jj Coupling:
J1 = L1 + S1:  |L1 − S1| ≤ J1 ≤ L1 + S1, i.e. |1 − 1/2| ≤ J1 ≤ 1 + 1/2  ⇒  J1 = 1/2, 3/2
J2 = L2 + S2:  |L2 − S2| ≤ J2 ≤ L2 + S2, i.e. |2 − 1/2| ≤ J2 ≤ 2 + 1/2, so 3/2 ≤ J2 ≤ 5/2  ⇒  J2 = 3/2, 5/2
J = J1 + J2:  |J1 − J2| ≤ J ≤ J1 + J2:

J1     J2     J
1/2    3/2    1, 2
1/2    5/2    2, 3
3/2    3/2    0, 1, 2, 3
3/2    5/2    1, 2, 3, 4
SHELL MODELS
Clearly atoms and nuclei are very complicated many body problems. The
basic idea of any shell model is to replace this difficult many body problem
by an effective 1-body problem.
In the case of atoms, instead of focusing on the very complicated behav-
iors of all the electrons we instead follow the behavior of 1 electron only.
The shell model approximation is to regard this single electron as moving
in an overall effective potential which results from all of the other electrons
and the nucleus. This is often called the mean field approximation, an it is
often a surprisingly good method. The same idea holds in the nuclear shell
model where a single nucleon’s behavior is determined from the mean field
of all the other nucleons.
1s
2s 2p
3s 3p 3d
4s 4p 4d 4f
this is shown in Fig. 11.3. Thus the shell model picture of Fig. 11.2 is not
correct for the outermost electrons.
The way that the outer shells are filled is shown in Fig. 11.5. Thus for 60Nd the outer shell is not (4s)2(4p)6(4d)10(4f)14 but rather (4s)2(4p)6(4d)10(5s)2(5p)6(6s)2(4f)4.
Fig. 11.5 neatly explains the Periodic Table. All students should fill all
the shells in Fig. 11.5 and watch how the periodic table arises. (Exercise: do
this). Fig. 11.5 is often represented in tabular form as shown in Table 11.1.
Fig. 11.5 and Table 11.1 are very misleading. They show you correctly
how the outer shells fill, but Fig. 11.5 does not represent the energy level
diagram of any atom. The outer electrons are arranged according to Table
11.1 and Fig. 11.5 but the inner electrons of a particular atom are arranged
according to Fig. 11.2. The clearest representation of this is shown in Fig.
11.6 which correctly shows the outer electrons and the inner electrons.
Why then do people use misleading figures like Fig. 11.5 and Table 11.1?
The chemical properties of atoms only depend on the outer electrons. The
inner electrons are essentially irrelevant, and so who cares how they are
arranged? It doesn’t really matter.
In order to avoid confusion it is highly recommended that Fig. 11.6 be
studied carefully.
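The filling order of the outer shells encoded in Fig. 11.5 is the familiar n + l (Madelung) rule: subshells fill in order of increasing n + l, with ties broken by smaller n. The following sketch (ours) generates that order.

```python
# Generate the usual subshell filling order (n + l rule, ties broken by n).
letters = "spdfg"

subshells = [(n, l) for n in range(1, 8) for l in range(n)]
subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))

order = [f"{n}{letters[l]}" for n, l in subshells]
print(" ".join(order[:19]))
# 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s 4f 5d 6p 7s 5f 6d 7p
```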
11.1.4 Spectra
We have seen that the outer shells are filled according to Fig. 11.5, but
that the inner shells are more properly represented in Fig. 11.2. (And both
figures are combined in Fig. 11.6). What figure are we to use for explaining
the spectrum of an atom? Well the spectrum is always due to excitations of
the outer electrons and so obviously we use Fig. 11.5. (This supports what
we said earlier. Neither the chemical properties or the spectra care about
the inner electrons. It’s the outer electrons, hence Fig. 11.5, that determine
all the interesting behavior.) We summarize this in Table 11.2.
The ground state configuration of Hydrogen is (1s)1 . For Sodium it is
(1s)2 (2s)2 (2p)6 (3s)1 . The spectrum of Hydrogen is determined by transitions
of the (1s)1 electron to the higher states such as (2s)1 or (2p)1 or (3s)1 or
(3p)1 or (3d)1 , etc. The spectrum of Sodium is determined by transitions
of the (3s)1 electron to states like (3p)1 or (4s)1 etc. (Actually energetic
transitions also occur by promoting say one of the (2p)6 electrons to say
(3s)1 or higher).
When electronic transitions occur, a photon carries off the excess energy
Solution
DIRAC NOTATION
where |Ai ≡ A and |êi i ≡ |ii ≡ êi . We are just using different symbols.
Another way to write this is
X
hA| ≡ Ai hi| (12.3)
i
The inner product (often called the scalar product) of two vectors is
XX
A·B = Ai Bj êi · êj
i j
XX
≡ Ai Bj gij (12.4)
i j
A · B ≡ hA | Bi (12.8)
(12.10)
giving X
hi | Ai = Aj hi | ji = Ai (12.11)
j
and because we must have left hand side = right hand side we have the
identity X
êi êi · = 1 (12.13)
i
Similarly X X X
|Ai = Ai | ii = hi | Ai |ii = |iihi | Ai
i i
yielding
X
|ii hi| = 1
i
(12.14)
which is called the completeness or closure relation because of its similarity to
(??). The usefulness of (12.14) is that it can always be sandwiched between
things because it is unity. For example
X X
hA | Bi = hA | iihi | Bi = Ai Bi (12.15)
i i
|z|2 ≡ z ∗ z = x2 + y 2 (12.20)
which looks like Pythagoras’ theorem. This will help us define an inner
product for complex vectors.
Let’s consider a two-dimensional complex vector
which is the wrong sign. Thus instead of (12.6) as the scalar product, we
define the scalar product for a complex vector space as
A · B ≡ ⟨A | B⟩ ≡ Σi Ai* Bi        (12.23)
In 2-dimensions this gives A · B = ⟨A | B⟩ = A1* B1 + A2* B2, which results in
|z|² = z · z = ⟨z | z⟩ = Σi zi* zi = x² + y²        (12.24)
(12.27)
and
X
hA| ≡ A∗i hi|
i
(12.28)
With ordinary vector notation we have to define the scalar product in (12.23).
But with Dirac notation (12.27) and (12.28) imply the scalar product
P
hA | Bi = A∗i Bi . Thus in defining a complex vector space from scratch one
i P P
can either start with hA | Bi ≡ A∗i Bi or one can start with |Ai ≡ Ai | ii
P i i
and hA| ≡ Ai hi|. We prefer the latter.
i
Notice that
hA | Bi = hB | Ai∗
(12.30)
(do Problem 12.1) Actually we could start with this as a definition of our
complex vector space. (do Problem 12.2). Also note that
hA | xBi = x hA | Bi
(12.31)
and
hxA | Bi = x∗ hA | Bi
(12.32)
because then
à !
B1
A · B = hA | Bi = (A1 A2 ) = A1 B1 + A2 B2 (12.35)
B2
Thus again we see the advantage of Dirac notation. Ordinary vector notation
A provides us with no way to distinguish (12.33)
à and!(12.34). Notice above
A1
that (A1 A2 ) = hA| is the transpose of |Ai = .
A2
For complex vector spaces we keep (12.33). For the inner product A · B we want our answer to be A · B ≡ ⟨A | B⟩ = A1* B1 + A2* B2. This will work if
⟨A| ≡ (A1*  A2*) ≡ |A⟩†        (12.36)
which is the transpose conjugate of the column vector |A⟩ with components A1 and A2, often called the Hermitian conjugate. For a general matrix C the Hermitian conjugate C† is defined as
(C†)ij ≡ Cji*        (12.37)
(interchange rows and columns and take the complex conjugate of every-
thing.) A matrix H is called Hermitian if
H† = H (12.38)
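As a concrete example (ours), the Hermitian conjugate (12.37) and the Hermiticity test (12.38) look as follows with NumPy arrays.

```python
# Hermitian conjugate C^dagger = (C^T)^*, equation (12.37), and test (12.38).
import numpy as np

C = np.array([[1 + 2j, 3j],
              [4, 5 - 1j]])
C_dag = C.conj().T                   # interchange rows/columns, then conjugate
print(C_dag)

H = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])
print(np.allclose(H, H.conj().T))    # True: H is Hermitian
```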
12.1.4 One-Forms
Note
à ! in our matrix representation of real vectors we found that |Ai =
that
A1
and hA| = (A1 A2 ) are rather different objects. For complex vectors
A2
P P
|Ai = Ai | ii and hA| = Ai hi| are also different objects. We seem to
i i
have come across two different types of vector.
In general |Ai is called a vector and hA| is called a covector. Other names
are |Ai is a contravariant vector and hA| is a covariant vector or one-form.
This dual nature of vector spaces is a very general mathematical property.
|Ai is a space and hA| is the dual space.
By the way, in Dirac notation hA| is called a bra and |Ai a ket. Thus
hA | Bi is a braket.
X Z ∞
|ψi = ψi | ii = dx ψ(x) | xi
i −∞
(12.40)
for infinite vectors the components depend on the basis. Equation (12.40)
can be written
Z ∞ Z ∞
|ψi = dx ψ(x) | xi = dp ψ(p) | pi (12.41)
−∞ −∞
X Z ∞
hψ| = ψi∗ hi| = dx ψ ∗ (x)hx|
i −∞
(12.42)
The generalization of hi | ji = δij is
hx | x0 i = δ(x − x0 )
(12.43)
where δ(x − x0 ) is the Dirac delta function, defined as
Z ∞
f (x0 )δ(x − x0 )dx0 ≡ f (x) (12.44)
−∞
Z
|ψ| ≡ hψ | ψi =
2
dx ψ ∗ (x)ψ(x)
(12.46)
The components are obtained in the usual way. Using (12.40) gives
Z ∞
hx | ψi = hx| dx0 ψ(x0 ) | x0 i
−∞
Z
= dx0 ψ(x0 ) hx | x0 i
= ψ(x)
hx | ψi = ψ(x)
(12.47)
and
hψ | xi = ψ ∗ (x)
(12.48)
Therefore the completeness or closure relation is
Z
dx |xi hx| = 1
(12.49)
analogous to (12.14). (do Problem 12.3 and 12.4)
Everyone should now go back and review Section 2.3.6.
Defining
hei | Â | ej i ≡ hi | A | ji = Aij
(12.52)
gives X
x0i = Aij xj (12.53)
j
hx | ψ 0 i ≡ ψ 0 (x) = hx | Âψi
Z
= hx | Â dx0 ψ(x0 ) | x0 i
Z
= dx0 hx | Â | x0 i ψ(x0 ) (12.55)
x0i = hi | x0 i = hi | Âxi
X
= hi | Â | ji hj | xi
j
X
= Aij xj
j
ψ 0 (x) = hx | ψ 0 i = hx | Â | ψi
Z
= dx0 hx | Â | x0 i hx0 | ψi
Z
= dx0 hx | Â | x0 i ψ(x0 )
or
hi | A | ji† = hj | A | ii∗ (12.60)
This implies that
(AB)† = B † A† (12.61)
(do Problem 12.5) In equation (12.36) we have the general result for a vector
hψ| = |ψi†
(12.62)
and
hψ|† = |ψi
(12.63)
Defining
A | ψi ≡ |Aψi (12.64)
we also have
hA† φ | ψi = hφ | A | ψi = hφ | Aψi
(12.65)
and
hAφ | ψi = hφ | A† | ψi = hφ | A† ψi
(12.66)
Solution
hφ | A | ψi = |φi† A | ψi
= |φi† A†† | ψi
= (A† | φi)† | ψi
using (AB)† = B † A†
thus
H = H†
H | ψi = λ | ψi
hψ | H | ψi = λhψ | ψi
= hψ | H † | ψi
= hHψ | ψi
= λ∗ hψ | ψi
Thus
λ = λ∗
H | ψi = λ | ψi
and H | φi = µ | φi
Thus
hφ | H | ψi = λ hφ | ψi
= hφ | H † | ψi
= hHφ | ψi
= µ∗ hφ | ψi
hφ | ψi = 0
(Note: A set of vectors is said to span the space if every other vector
can be written as a linear combination of this set. For infinite vectors
this means they form a complete set.)
1) implies that Hermitian operators correspond to Observables.
3) implies that the eigenvectors of Hermitian operators (our observ-
ables) form basis vectors.
Solution
hAi ≡ hψ | A | ψi
Z
= dx dx0 hψ | xi hx | A | x0 i hx0 | ψi
Z
= dx dx0 ψ ∗ (x) hx | A | x0 i ψ(x0 )
where A takes the wave function |φi to |ψi. This hψ | A | φi is nothing more
than a matrix element. Thus we see that the expectation value hAi = hψ |
A | ψi is just a diagonal matrix element.
Solution
[x, p] = ih̄
... C = h̄
1 h̄
⇒ σ x σp ≥ |hh̄i =
2 2
h̄
... σx σp ≥
2
At this point all students should read Section 3.4.3 of [Griffiths, 1995] dealing
with the energy-time uncertainty principle. Note especially the physical
interpretation on Pg. 115.
Chapter 13
TIME-INDEPENDENT
PERTURBATION
THEORY, HYDROGEN
ATOM, POSITRONIUM,
STRUCTURE OF
HADRONS
We have been able to solve the Schrödinger equation exactly for a variety
of potentials such as the infinite square well, the finite square well, the
harmonic oscillator and the Coulomb potentials. However there are many
cases in nature where it is not possible to solve the Schrödinger equation
exactly such as the Hydrogen atom in an external magnetic field.
We are going to develop some approximation techniques for solving the
Schrödinger equation in special situations. A very important case occurs
when the total potential is a sum of an exactly solvable potential plus a
weak, or small, potential. In technical language this weak potential is called
a perturbation. The mathematical techniques of perturbation theory are
among the most widely used in physics. The great successes of the quantum
field theory of electromagnetism, called Quantum Electrodynamics, hinged
in great part upon the perturbation analysis involved in the so-called Feyn-
man diagrams. Such a perturbative framework was possible due to the weak-
ness of the electromagnetic interactions. The same can be said of the weak
interactions known as Quantum Flavordynamics. In general however the
strong interactions between quarks (Quantum Chromodynamics) cannot be
analyzed within a perturbative framework (except at very large momentum
transfers) and this has held up progress in the theory of strong interac-
tions. For example, it is known that quarks are permanently confined within
hadrons, yet the confinement mechanism is still not understood theoretically.
Consider the time-independent Schrödinger equation
H|n⟩ = En|n⟩     (13.1)
which we wish to solve for the energies En and eigenkets |n⟩. Suppose the
Hamiltonian H consists of a piece H0 which is exactly solvable and a small
perturbation λV. (We write λV instead of V because we are going to use λ
as an expansion parameter.) That is
H = H0 + λV     (13.2)
H0|n0⟩ = En0|n0⟩     (13.3)
Note that the subscript ‘0’ denotes the unperturbed solution. It has nothing
to do with the ground state when n = 0.
So we know the solution to (13.3) but we want the solution to (13.1).
Let’s expand the state ket and energy in a power series
|n⟩ = Σ_{i=0}^{∞} λ^i |n_i⟩ = |n0⟩ + λ|n1⟩ + λ²|n2⟩ + · · ·     (13.4)
and
En = Σ_{i=0}^{∞} λ^i E_{ni} = En0 + λEn1 + λ²En2 + · · ·     (13.5)
where, for example, λ|n1⟩ and λEn1 are the first order corrections to the
exact unperturbed solutions for the state ket |n0⟩ and energy En0.
Now simply substitute the expansions (13.4) and (13.5) into the Schrödinger
equation (13.1) giving
(H0 + λV) Σ_i λ^i |n_i⟩ = Σ_j λ^j E_{nj} Σ_i λ^i |n_i⟩     (13.6)
or
Σ_i λ^i H0|n_i⟩ + Σ_i λ^{i+1} V|n_i⟩ = Σ_{ij} λ^{i+j} E_{nj}|n_i⟩     (13.7)
Equating coefficients of λ⁰ (the i = j = 0 terms) gives H0|n0⟩ = En0|n0⟩ (13.8), or
(H0 − En0)|n0⟩ = 0     (13.9)
which is just the unperturbed Schrödinger equation (13.3).
Now equate coefficients of λ1 . This gives i = 1 in the first term of (13.7).
The second term is i + 1 = 1 giving i = 0 and the third term is i + j = 1
which can happen in two ways. Either i = 0 and j = 1 or i = 1 and j = 0.
Thus
H0|n1⟩ + V|n0⟩ = En0|n1⟩ + En1|n0⟩     (13.10)
or
(H0 − En0)|n1⟩ = (En1 − V)|n0⟩     (13.11)
Equating coefficients of λ2 gives i = 2 in the first term of (13.7) and
i + 1 = 2 giving i = 1 in the second term. The third term i + j = 2 can
happen in three ways. Either i = 0 and j = 2 or i = 1 and j = 1 or i = 2
and j = 0. Thus
(H0 − En0)|n2⟩ = (En1 − V)|n1⟩ + En2|n0⟩     (13.15)
This result is true for both degenerate and non-degenerate perturbation
theory.
Multiplying the zero order equation on the left with the bra ⟨n0| gives
En0 = ⟨n0|H0|n0⟩
as expected, and is the result that we already would have calculated from
our exact unperturbed Schrödinger equation. That is En0 is already known.
What interests us is the first order correction En1 to the unperturbed energy
En0. To extract En1 we multiply the first order equation (13.11) on the left
again with the bra ⟨n0| giving
⟨n0|(H0 − En0)|n1⟩ = En1⟨n0|n0⟩ − ⟨n0|V|n0⟩     (13.19)
Now
⟨n0|H0 = ⟨n0|H0† = En0⟨n0|     (13.20)
because H0 is Hermitian. Thus the left side of (13.19) is 0. (Note that |n0 i
and |n1 i need not be orthogonal and thus hn0 | n1 i is not 0 in general.) Thus
(13.19) gives
En1 = ⟨n0|V|n0⟩     (13.21)
which Griffiths [Griffiths, 1995] calls the most important result in quantum
mechanics! It gives the first order correction to the energy in terms of the
unperturbed state kets |n0 i which we already know.
Recall the complete expression for the energy in (13.5), which to first
order becomes
En = En0 + λEn1
   = ⟨n0|H0|n0⟩ + ⟨n0|λV|n0⟩     (13.22)
where λV is the complete perturbation. If you like you can now drop the
expansion parameter λ and just write the perturbation as V in which case
we have
En1 = ⟨n0|V|n0⟩     (13.23)
or
En = En0 + En1
   = ⟨n0|H0|n0⟩ + ⟨n0|V|n0⟩     (13.24)
Example 14.1 Evaluate the second order expression for the en-
ergy.
Solution Multiplying the λ² equation on the left by ⟨n0| and using ⟨n0|H0 = En0⟨n0| gives
En2 = ⟨n0|V|n1⟩ − En1⟨n0|n1⟩     (13.29)
Wave Functions
We are assuming that we already know the unperturbed kets |n0 i allowing
us to evaluate En1 = hn0 | V | n0 i but to get the higher corrections Enk we
need |nk−1 i. We shall now discuss how to evaluate these.
Recall the first order equation (13.11). The |n0 i form a CON set and
therefore |n1 i can be expanded in terms of them as
|n1⟩ = Σ_{m≠n} cm|m0⟩ + cn|n0⟩     (13.30)
where the term (H0 − En0)|n0⟩ = 0, which is why we wrote the expansion
(13.30) with m ≠ n. Substituting (13.30) into (13.11) gives (13.31); operating on (13.31) with ⟨k0| gives
Σ_{m≠n} cm (⟨k0|H0|m0⟩ − En0⟨k0|m0⟩) = En1⟨k0|n0⟩ − ⟨k0|V|n0⟩     (13.32)
For k ≠ n we have ⟨k0|n0⟩ = 0 and ⟨k0|m0⟩ = δkm and ⟨k0|H0|m0⟩ =
Em0⟨k0|m0⟩ = Em0 δkm giving
ck (Ek0 − En0) = −⟨k0|V|n0⟩     (13.33)
or
ck = ⟨k0|V|n0⟩ / (En0 − Ek0)     (k ≠ n)     (13.34)
which are our expansion coefficients for k ≠ n.
Thus cn = 0.
Thus we could have left off the second term in (13.30) which
would not have appeared in (13.31). (However we got zero there
anyway.)
|n1⟩ = Σ_{m≠n} [⟨m0|V|n0⟩ / (En0 − Em0)] |m0⟩     (13.35)
for the first order wave function. Therefore the complete expression for the
second order energy in (13.28) is
En2 = Σ_{m≠n} |⟨m0|V|n0⟩|² / (En0 − Em0)     (13.36)
where we used ⟨m0|V|n0⟩ = ⟨n0|V|m0⟩*. Equations (13.35) and (13.36)
are our final expressions for the corrections to the wave function and energy
given completely in terms of unperturbed quantities.
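As a quick numerical check (this sketch is not from the text; the unperturbed energies and the perturbation below are arbitrary made-up numbers), one can verify (13.23) and (13.36) on a small matrix Hamiltonian H = H0 + V with H0 diagonal and non-degenerate:

import numpy as np

np.random.seed(0)
E0 = np.array([1.0, 2.0, 3.5, 5.0])          # unperturbed energies En0 (assumed)
H0 = np.diag(E0)
V = 0.01 * np.random.randn(4, 4)
V = (V + V.T) / 2                            # make the perturbation Hermitian

exact = np.linalg.eigvalsh(H0 + V)           # exact eigenvalues, ascending

for n in range(4):
    E1 = V[n, n]                                             # first order, (13.23)
    E2 = sum(V[m, n]**2 / (E0[n] - E0[m])                    # second order, (13.36)
             for m in range(4) if m != n)
    print(f"n={n}: perturbative {E0[n] + E1 + E2:.6f}   exact {exact[n]:.6f}")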
As a practical matter, perturbation theory usually gives very accurate
answers for energies, but poor answers for wave functions [Griffiths, 1996].
In equations (13.35) and (13.36) we see that the answers will blow up if
En0 = Em0 . That is if there is a degeneracy. Thus we now consider how to
work out perturbation theory for the degenerate case.
Suppose two states are degenerate,
H0|m0⟩ = E|m0⟩     (13.37)
and
H0|k0⟩ = E|k0⟩     (13.38)
then which states are we supposed to use in evaluating En1? Do we use
En1 = ⟨m0|V|m0⟩ or ⟨k0|V|k0⟩ ?
Actually linear combinations are equally valid eigenstates (see below),
i.e.
H0 (α|m0⟩ + β|k0⟩) = E (α|m0⟩ + β|k0⟩)     (13.39)
with the same energy E. Thus should we use the linear combination
|n0⟩ ≡ α|m0⟩ + β|k0⟩     (13.40)
as the state in En1 ? If the answer is yes, then what values do we pick for
α and β ? Thus even for the first order energy En1 we must re-consider
perturbation theory for degenerate states.
Degenerate perturbation theory is not just some esoteric technique. Degenerate
perturbations produce some of the most dramatic features of spectral lines. If
a single line represents a degenerate state, then the imposition of an external
field can cause the degenerate level to split into two or more levels. Thus
the subject of degenerate perturbation theory is very important. Actually
non-degenerate theory is almost useless because very few states are actually
non-degenerate.
H0|m0⟩ = Em0|m0⟩     (13.41)
and
H0|k0⟩ = Ek0|k0⟩     (13.42)
with
⟨k0|m0⟩ = 0     (13.43)
and where the degeneracy is specified via
Em0 = Ek0 ≡ E0     (13.44)
H0 (α|m0⟩ + β|k0⟩) = E0 (α|m0⟩ + β|k0⟩)     (13.45)
with
H0|n0⟩ = E0|n0⟩     (13.46)
|n0⟩ ≡ α|m0⟩ + β|k0⟩     (13.47)
where (13.46) is just the zero order equation (13.15).
We want to calculate the first order correction to the energy. We use
(13.11). In non-degenerate theory we multiplied (13.11) by hn0 |. Here we
shall multiply separately by ⟨m0| and ⟨k0|. Multiplying (13.11) by ⟨m0| gives
α Vmm + β Vmk = α En1
where
Vmk ≡ ⟨m0|V|k0⟩     (13.50)
Multiplying (13.11) by ⟨k0| gives
α Vkm + β Vkk = β En1
and solving this pair of simultaneous equations for En1 yields
En1± = (1/2) [ (Vmm + Vkk) ± √( (Vmm − Vkk)² + 4|Vkm|² ) ]     (13.56)
which shows that a two-fold degeneracy is "lifted" by a perturbation. Note
that all of the matrix elements Vkm are known in principle because we know
|m0⟩ and |k0⟩ and Vkm ≡ ⟨k0|V|m0⟩.
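As a small numerical illustration (not from the text; the matrix elements are arbitrary made-up numbers), the two first order shifts in (13.56) are simply the eigenvalues of the 2×2 matrix of V restricted to the degenerate subspace:

import numpy as np

Vmm, Vkk = 0.10, 0.30
Vmk = 0.20 + 0.05j                 # Vkm = Vmk* since V is Hermitian
Vsub = np.array([[Vmm, Vmk],
                 [np.conj(Vmk), Vkk]])

eigs = np.linalg.eigvalsh(Vsub)    # exact eigenvalues of the subspace matrix
formula = 0.5 * (Vmm + Vkk + np.array([-1, 1]) *
                 np.sqrt((Vmm - Vkk)**2 + 4 * abs(Vmk)**2))   # (13.56)
print(eigs, formula)               # the two agree: the degeneracy is lifted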
We don’t have time to develop the full relativistic theory and instead we
shall just use the simplest possible generalization of the Schrödinger equation
that we can think of.
Recall the kinetic energy for the (non-relativistic) Schrödinger equation
is
p2
T = (13.59)
2m
where p = −iħ d/dx in 1-dimension and p = −iħ∇ in 3-dimensions. In the
relativistic case recall that
relativistic case recall that
E =T +m (13.60)
and
E 2 = p2 + m2 (13.61)
giving
T = √(p² + m²) − m     (13.62)
where we have left off the factors of c (or equivalently used units where c ≡ 1).
Thus the simplest relativistic generalization of the Schrödinger equation that
we can think of is simply to use (13.62), instead of (13.59), as the kinetic
energy. The resulting equation is called the Relativistic Schrödinger equa-
tion or Thompson equation. (The reason this is the simplest generalization
is because more complicated relativistic equations also include relativistic
effects in the potential energy V . We shall not study these here.)
The trouble with (13.62) is that as an operator it’s very weird. Consider
the Taylor expansion
T = m √(1 + (p/m)²) − m
  ≈ p²/(2m) − p⁴/(8m³) + · · ·     (13.63)
The first term is just the non-relativistic result and the higher terms are
relativistic corrections, but it is an infinite series in the operator p = −iħ d/dx.
How are we ever going to solve the differential equation?! Well that’s for
me to worry about. [J. W. Norbury, K. Maung Maung and D. E. Kahana,
Physical Review A, vol. 50, pg. 2075, 3609 (1994)]. In our work now we
will only consider the non-relativistic term p²/(2m) and the first order relativistic
correction defined as
V ≡ −p⁴/(8m³)     (13.64)
so that the first order correction to the energy, using equation (13.21) or
(13.23), is
En1 = −(1/8m³) ⟨n0|p⁴|n0⟩ ≡ −(1/8m³) ⟨p⁴⟩     (13.65)
In position space this becomes
⟨n0|p⁴|n0⟩ = ∫ dr dr′ ⟨n0|r⟩⟨r|p⁴|r′⟩⟨r′|n0⟩     (13.66)
where
⟨n0|r⟩ ≡ ψn*(r)     and     ⟨r|p⁴|r′⟩ = (−ħ²∇²)² δ(r − r′)
giving
En1 = −(1/8m³) ∫ dr ψn*(r) (−ħ²∇²)² ψn(r) ≡ −(1/8m³) ⟨p⁴⟩     (13.67)
Now
p²|n0⟩ = 2m(E − U)|n0⟩     (13.70)
implying
⟨n0|p²† = ⟨n0|2m(E − U)†     (13.71)
and p² = p²† and U = U† giving
⟨n0|p² = ⟨n0|2m(E − U)     (13.72)
so that
⟨p⁴⟩ = 4m² ⟨n0|(E − U)²|n0⟩     (13.73)
or
En1 = −(1/2m) ⟨(E − U)²⟩
    = −(1/2m) (E² − 2E⟨U⟩ + ⟨U²⟩)     (13.74)
Thus to get En1 we only have to calculate ⟨U⟩ and ⟨U²⟩. Now U = −(1/4πε0)(Ze²/r)
(from equation (9.1)) and so what we need are ⟨1/r⟩ and ⟨1/r²⟩. The results are
⟨1/r⟩ = Z/(n²a)     (13.75)
where n is the principal quantum number and a is the Bohr radius and
[Griffiths, 1995, pg. 238]
⟨1/r²⟩ = 1/[(l + ½) n³ a²]     (13.76)
for Z = 1.
(do Problem 14.1). Thus we obtain
En1 = −(En²/2m) [ 4n/(l + ½) − 3 ]     (13.77)
(do Problems 14.2 and 14.3)
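As a rough numerical sketch (this is not from the text; it simply plugs the standard hydrogen numbers into (13.77), restoring the factor of c² that we have been setting to 1, i.e. m → mc²), the size of the relativistic correction for the n = 2 levels is:

# Restoring factors of c: m -> mc^2 = 0.511 MeV for the electron.
mc2 = 0.511e6                     # electron rest energy in eV
for n, l in [(2, 0), (2, 1)]:
    En = -13.6 / n**2             # unperturbed Bohr energy in eV
    En1 = -(En**2 / (2 * mc2)) * (4 * n / (l + 0.5) - 3)   # (13.77)
    print(f"n={n}, l={l}: En1 = {En1:.2e} eV")
# The shifts are of order 1e-4 eV, i.e. roughly 1e-5 of the Bohr energies,
# and they differ for l = 0 and l = 1: the perturbation lifts the l degeneracy.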
But wait a minute! We used ⟨r|n0⟩ = ψnlm(r, θ, φ). According to the
procedure for degenerate states weren't we supposed to find an operator A
that commutes with V = −p⁴/(8m³) and then find simultaneous eigenstates of A
and H0 and use those eigenstates in ⟨n0|V|n0⟩? Yes! And we did! V = −p⁴/(8m³)
commutes with L² and Lz. The states Ylm are simultaneous eigenstates of
L², Lz and H0. Thus the states ψnlm = Rnl(r)Ylm(θ, φ) already are the
correct eigenstates to use.
But for a two-fold degeneracy, say, aren't we supposed to have two
different expressions ⟨m0|V|m0⟩ and ⟨k0|V|k0⟩ and similarly for higher-fold
degeneracies? Yes! And we did! The states Ylm(θ, φ) are all different for
different values of m. Suppose n = 2; then we can have l = 0 and l = 1,
which are two degenerate states |l = 0⟩ and |l = 1⟩. Our answer (13.77) is
different for different values of l. In other words for two-fold degeneracy
(n = 2, l = 0, 1) we had |m0⟩ = |l = 0⟩ and |k0⟩ = |l = 1⟩. We just wrote the
general state as |l⟩ or Ylm, but it's really a collection of states |m0⟩ and |k0⟩.
[NNN: work out ⟨1/r²⟩ for Z ≠ 1. Is (13.77) also valid for Z ≠ 1?]
In the center of momentum frame
p1 = −p2 ≡ p (13.81)
and then
p1²/(2m1) + p2²/(2m2) = p²/(2µ)     (13.82)
Now the obvious generalization of (13.78) is
T = T1 + T2
  = √(p1² + m1²) − m1 + √(p2² + m2²) − m2     (13.83)
or
T ≈ ( p1²/(2m1) + p2²/(2m2) ) − ( p1⁴/(8m1³) + p2⁴/(8m2³) ) + · · ·     (13.84)
which again reduces to the correct non-relativistic expressions (13.78) or
(13.79) or (13.82). When (13.83) is used as the kinetic energy operator, the
V = −µ · B = (e²/4πε0) (1/(m²c²r³)) S · L     (13.89)
and including the Thomas precession factor of ½ this becomes
V = (e²/8πε0) (1/(m²c²r³)) S · L     (13.90)
which is our final expression for the spin-orbit interaction. (Actually if you
had been naive and left out the Dirac correction for µ and the Thomas
precession, you would still have got the right answer, because they cancel
out!)
Now we want to put this expression into our quantum mechanical formula
(13.21) for the energy shift. But how are we going to calculate ⟨n0|(1/r³) S · L|n0⟩?
Let’s first review a few things about angular momentum.
The full wave function was written in equation (8.6) as ψ(r) ≡ ψ(r, θ, φ) ≡
R(r)Y (θ, φ) or ψnlm (r, θ, φ) = Rnl (r)Ylm (θ, φ). However we also need to in-
clude spin which does not arise naturally in the Schrödinger equation (it is
a relativistic effect). The spin wave function χ(s) must be tacked on as
or
[L · S, L] ≠ 0
and
[L · S, S] ≠ 0
However
[L · S, J] = [L · S, L²] = [L · S, S²] = [L · S, J²] = 0
where J ≡ L + S, so that
J² = (L + S) · (L + S) = L² + S² + 2L · S     (13.102)
Efs1 = (En²/2m) [ 3 − 4n/(j + ½) ]     (13.107)
This is known as the fine-structure formula. (do Problem 14.6) Note the
presence of the famous fine structure constant
α ≡ e²/(4πε0 ħc) ≈ 1/137     (13.108)
Chapter 14
VARIATIONAL PRINCIPLE, HELIUM ATOM, MOLECULES
14.3 Molecules
Chapter 15
WKB APPROXIMATION,
NUCLEAR ALPHA DECAY
−(ħ²/2m) d²ψ/dx² + U0 ψ = Eψ     (15.1)
ψ′′ + [2m(E − U0)/ħ²] ψ = 0     for E > U0     (15.2)
or
ψ′′ − [2m(U0 − E)/ħ²] ψ = 0     for E < U0     (15.3)
The region E > U is sometimes called the "classical" region [Griffiths, 1995,
pg. 275] because E < U is not allowed classically. Defining
k ≡ √(2m(E − U0)) / ħ     (15.4)
and
κ ≡ √(2m(U0 − E)) / ħ     (15.5)
which are both real for E > U0 and E < U0 respectively. Thus the
Schrödinger equation is
ψ 00 + k 2 ψ = 0 (15.6)
or
ψ 00 − κ2 ψ = 0 (15.7)
with solutions
ψ(x) = A eikx + B e−ikx (15.8)
or
ψ(x) = C e^{κx} + D e^{−κx}     (15.9)
(Of course (15.8) can also be written E cos kx + F sin kx, etc.) These wave
functions (15.8) and (15.9) are only the correct solutions for a constant po-
tential U = U0 . If U = U (x) they are not correct! However if U is slowly
varying then over small distances it does approximate a constant potential.
The answers for a slowly varying potential are
ψ(x) ≈ (A/√k) e^{i∫k dx} + (B/√k) e^{−i∫k dx}     (15.10)
and
ψ(x) ≈ (C/√κ) e^{∫κ dx} + (D/√κ) e^{−∫κ dx}     (15.11)
where now k ≡ k(x) and κ = κ(x) are both functions of x because U =
U (x). I call these the generalized wave functions because they are obviously
generalizations of (15.8) and (15.9) which we had for the constant potential
U = U0 . The generalized wave functions have variable amplitude and variable
wavelength whereas these are constant for (15.8) and (15.9). The variable
wavelength comes from the k(x) and κ(x) terms in the exponential. Recall p = ħk = h/λ.
Let’s now see how to derive the generalized wave functions. In general
any (complex) wave function can be expressed in terms of an amplitude and
phase via
ψ(x) ≡ A(x)eiφ(x) (15.12)
This assumption is not an approximation, but rather, is exact. Let’s find
A(x) and φ(x) by substituting into the Schrödinger equation. For definite-
ness, use the first of the two Schrödinger equations, namely (15.6). Substi-
tution yields
A′′ + 2iA′φ′ + iAφ′′ − Aφ′² + Ak² = 0     (15.13)
and equating the Real and Imaginary parts gives
A′′ − Aφ′² + Ak² = 0     (15.14)
and
Aφ′′ + 2A′φ′ = 0     (15.15)
which are exact equations relating the amplitude and phase of the general
wave function (15.12). The second equation can be solved in general by
writing it as
(A2 φ0 )0 = 0 (15.16)
yielding
A²φ′ = C²,  or  A = C/√(φ′)     (15.17)
where C is a constant. We don't use A = ±C/√(φ′), because A, being an
amplitude, will give the same results for all observables whether we use +
or −. The first equation (15.14) cannot be solved in general. (If it could
be solved exactly we would not need an approximation.) In the WKB approximation
we assume the amplitude varies slowly, so that A′′ ≈ 0 and (15.14) becomes φ′² ≈ k².
This makes good sense. If the potential is slowly varying over x then the
strength or amplitude of the wave function will only vary slowly.
Thus we can solve (15.14) in general as
φ′ = ±k     (15.19)
giving
A = C/√k     (15.20)
and the general solution of the first order ODE is
φ(x) = ±∫ k(x) dx     (15.21)
Thus the generalized wave function (15.12) is written, in the WKB approx-
imation, as
ψ(x) ≈ (C/√k) e^{±i∫k(x)dx}     (15.22)
The general solution is a linear combination
ψ(x) = (A/√k) e^{i∫k(x)dx} + (B/√k) e^{−i∫k(x)dx}
in agreement with (15.10). Equation (15.11) can be solved in a similar
manner. (do Problem 15.1)
Solution (All students should first go back and review the infi-
nite square well potential.)
ψ(0) = ψ(a) = 0
Solution We have
k(x) ≡ √(2m[E − U(x)]) / ħ
For U(x) = 0 = constant we have
k(x) = constant = √(2mE)/ħ
which upon substitution into (15.23) gives
En = n² π²ħ²/(2ma²)
in agreement with our previous result for the infinite square well
in (3.14).
The reason why we get the exact result is because A(x) does
not change as a function of x, and so the WKB approximation
A00 ≈ 0 is actually exact.
with
κ(x) ≡ √(2m[U(x) − E]) / ħ     (15.29)
From our previous experience we know that F is always bigger than D, so
that the dominant effect inside the well is one of exponential decay. Thus
let’s set
D≈0 (15.30)
which will be accurate for high or wide barriers, which is equivalent to the
tunnelling probability being small. Thus
ψII(x) ≈ (F/√κ) e^{−∫κ dx}     (15.31)
Now the tunnelling probability is T ≡ |C/A|², but instead of getting C and A from
(15.27) and (15.25) let's use the WKB wave function (15.31). Because we
must have ψI(−a) = ψII(−a) and ψII(a) = ψIII(a) then
|C/A| = e^{−∫_{−a}^{a} κ dx}     (15.32)
giving
T ≈ e^{−2γ}     with     γ ≡ ∫_{−a}^{a} κ dx     (15.33)
Consider the nucleus 238 U, which is known to undergo alpha decay. Prior
to alpha decay, Gamow considered the alpha particle to be rattling around
inside the 238 U nucleus, or to be rattling around inside the potential well
of Fig. 15.2D. In this picture notice that the Coulomb potential represents
a barrier that the alpha particle must climb over, or tunnel through, in
order to escape. This is perhaps a little counter-intuitive as one would
think that the charge on the alpha particle would help rather than hinder
its escape, but such is not the case. (It’s perhaps easier to think of Fig.
15.2D in terms of scattering, whereby an incident alpha particle encounters
a repulsive barrier, but if it has sufficient energy it overcomes the barrier and
gets captured by the nuclear force.) Also we will be considering zero orbital
angular momentum l = 0. For l 6= 0 there is also an angular momentum
barrier in addition to the Coulomb barrier.
We are going to use the theory we developed in the previous section for
the finite potential barrier. You may object that Fig. 15.1 looks nothing
like Fig. 15.2D, but in fact they are similar if one considers that we want to
calculate the tunnelling probability for the alpha particle to go from being
trapped at r = r1 to escaping to r = r2 where r2 can be as large as you like.
(Also we developed our previous theory for 1-dimension and here we are
applying it to 3-dimensions, but that’s OK because the radial Schrödinger
equation is an effective 1-dimension equation in the variable r.)
The Coulomb potential between an alpha particle of charge +2e and a
nucleus of charge +Ze is
U(r) = (1/4πε0)(2Ze²/r) ≡ C/r     (15.34)
where
C ≡ 2Ze²/(4πε0)     (15.35)
Thus with κ = √(2m[U(r) − E])/ħ, equation (15.33) becomes
γ = (1/ħ) ∫_{r1}^{r2} √(2m(C/r − E)) dr     (15.36)
particle has the energy E before it escapes. See Fig. 15.2D. Thus
E = (1/4πε0)(2Ze²/r2)     (15.37)
We obtain
γ ≈ K1 (Z/√E) − K2 √(Z r1)     (15.38)
with
K1 = (e²/4πε0)(π√(2m)/ħ) = 1.980 MeV^{1/2}     (15.39)
and
K2 = (e²/4πε0)^{1/2} (4√m/ħ) = 1.485 fm^{−1/2}     (15.40)
The above 3 equations are derived in [Griffiths, 1995]. Make certain you can
do the derivation yourself.
We want to know the lifetime τ or half-life τ1/2 of the decaying nucleus.
They are related via
τ = τ1/2 / ln 2 = τ1/2 / 0.693     (15.41)
In the Gamow model of the alpha particle rattling around before emission
then the time between collisions with the wall, or potential barrier, would be
2r1/v where v is the speed of the particle. Thus the lifetime would be this
collision time divided by the escape probability e^{−2γ}, or
τ = (2r1/v) e^{2γ}     (15.42)
where v can be estimated from the energy.
(do Problems 15.2–15.5)
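As a rough order-of-magnitude sketch (this is not from the text; the alpha energy and nuclear radius below are assumed round inputs, and the Gamow exponential makes the answer extremely sensitive to them), equations (15.38)–(15.42) can be evaluated for 238U:

import numpy as np

K1, K2 = 1.980, 1.485          # MeV^(1/2) and fm^(-1/2), from (15.39)-(15.40)
Z  = 90                        # charge of the daughter nucleus (234Th)
E  = 4.2                       # assumed alpha kinetic energy in MeV
r1 = 1.07 * 234**(1/3)         # assumed nuclear radius in fm

gamma = K1 * Z / np.sqrt(E) - K2 * np.sqrt(Z * r1)       # (15.38)

m_alpha = 3727.0               # alpha rest energy in MeV
v = np.sqrt(2 * E / m_alpha) * 3.0e23                    # speed in fm/s
tau = (2 * r1 / v) * np.exp(2 * gamma)                   # (15.42), seconds
print("gamma =", gamma, "   tau ~", tau, "s  ~", tau / 3.15e7, "yr")
# Only an order-of-magnitude estimate: because of the exponential, modest
# changes in E or r1 shift tau by many orders of magnitude, which is exactly
# why measured alpha lifetimes span such an enormous range.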
Chapter 16
TIME-DEPENDENT
PERTURBATION
THEORY, LASERS
In general we have
U = U (r, t) (16.3)
That is the potential energy can be a function of both position and time.
If the potential is not a function of time U 6= U (t) then we were able to
use separation of variable to solve the Schrödinger equation (16.1) or (16.2).
This was discussed extensively in Chapter 2. We found
Ψ(x, t) = ψ(x) e^{−iEt/ħ}     for U ≠ U(t)     (16.4)
This very simple time dependence meant that |Ψ|2 ≡ Ψ∗ Ψ was constant
in time. That is Ψ was a stationary state. All expectation values were
also stationary. All of our studies up to now have been concerned with time-
independent potentials U 6= U (t). Thus we spent a lot of time calculating the
energy levels of square wells, the harmonic oscillator, the Coulomb potential
and even perturbed states, but there was never the possibility that a system
could jump from one energy level to another because of time-independence.
If an atom was in a certain energy level it was stuck there forever.
The spectra of atoms are due to time-dependent transitions between states.
If we are to understand how light is emitted and absorbed by atoms we bet-
ter start thinking about non-stationary states or time-dependent Hamilto-
nians. Thus we are back to the very difficult problem of solving the partial
differential Schrödinger equation, (16.1) or (16.2), for a general potential
U = U (r, t). This time separation of variables won’t work, and the partial
differential Schrödinger equation cannot be solved for general U (r, t).
Thus we can either specialize to certain forms of U (r, t) or we can develop
an approximation scheme known as time-dependent perturbation theory.
H = H0 + V(t)     (16.5)
H0 = p²/(2m) + U(r)     (16.6)
for which we assume we know the full solution to the time-independent
Schrödinger equation
H0|n⟩ = En|n⟩     (16.7)
analogous to (13.3). The problem we want to solve is
H|Ψ(t)⟩ = iħ (∂/∂t)|Ψ(t)⟩     (16.8)
where
⟨r|Ψ(t)⟩ ≡ Ψ(r, t)     (16.9)
Recall the following results from Section 2.3. We can always write the
general solution Ψ(r, t) in terms of a complete set
Ψ(r, t) = Σ_n cn Ψn(r, t)     (16.10)
For the case U ≠ U(t) this was a linear combination of separable solutions
Ψ(r, t) = Σ_n cn Ψn(r, t) = Σ_n cn ψn(r) e^{−iωn t}     (16.11)
or
|Ψ(t)⟩ = Σ_n cn |n(t)⟩ = Σ_n cn |n⟩ e^{−iωn t}     (16.12)
where
|n(t)⟩ ≡ |n⟩ e^{−iωn t}     (16.13)
and
ωn ≡ En/ħ     (16.14)
We also wrote (16.11) using
c′n(t) ≡ cn e^{−iωn t}     (16.15)
as
Ψ(r, t) = Σ_n cn Ψn(r, t) = Σ_n c′n(t) ψn(r)     (16.16)
or
|Ψ(t)⟩ = Σ_n cn |n(t)⟩ = Σ_n c′n(t) |n⟩     (16.17)
both of which look more like an expansion in terms of the complete set
{ψn(r)} or {|n⟩}.
For the time-dependent case U = U(t) we write our complete set expansion as
Ψ(r, t) ≡ Σ_n cn(t) ψn(r) e^{−iωn t}     (16.18)
or
|Ψ(t)⟩ = Σ_n cn(t) |n⟩ e^{−iωn t}     (16.19)
       = ca(t)|a⟩ e^{−iωa t} + cb(t)|b⟩ e^{−iωb t}     (16.20)
(for a 2-level system)
which contains both cn(t) and e^{−iωn t} whereas above for U ≠ U(t) we only had
one or the other. Here cn(t) contains an arbitrary time dependence whereas
c′n(t) above only had the oscillatory time dependence c′n(t) ≡ cn e^{−iωn t}.
Equations (16.18) or (16.19) are general expansions for any wave func-
tions valid for arbitrary U (r, t). If we know cn (t) we know Ψ(t). Thus let’s
substitute (16.18) or (16.19) into the Schrödinger equation and get an equiv-
alent equation for cn (t), which we will solve for cn (t), put back into (16.18)
and then have Ψ(t).
Substituting (16.19) into (16.8) gives
Σ_n H0 cn(t) e^{−iωn t}|n⟩ + V(t) Σ_n cn(t) e^{−iωn t}|n⟩     (16.21)
   = iħ Σ_n ċn(t) e^{−iωn t}|n⟩ + Σ_n cn(t) En e^{−iωn t}|n⟩     (16.22)
Now multiply on the left by ⟨m| e^{iωm t}; the orthonormality ⟨m|n⟩ = δmn kills
the sums on the right hand side, and ⟨m|H0|n⟩ = Em δmn. Thus the first term
on the left cancels the second term on the right to give
ċm(t) = (−i/ħ) Σ_n Vmn(t) e^{iωmn t} cn(t)     (16.24)
where
Vmn(t) ≡ ⟨m|V(t)|n⟩     (16.25)
and
ωmn ≡ ωm − ωn     (16.26)
Let’s condense things a bit. Define
Thus
ċm(t) = (−i/ħ) vmn(t) cn(t)     (16.29)
where we have used the Einstein summation convention for the repeated
index n, i.e. vmn(t)cn(t) ≡ Σ_n vmn(t)cn(t).
Equation (16.29) is the fundamental equation of time-dependent pertur-
bation theory, and it is completely equivalent to the Schrödinger equation.
All we have to do is solve (16.29) for cm (t) and then we have the complete
wave function Ψ(r, t). Thus I call equation (16.29) the equivalent Schrödinger
equation.
So far we have made no approximations!
Actually (16.29) consists of a whole collection of simultaneous or cou-
pled equations. For example, for a 2-level system [Griffiths, 1995], equation
(16.29) becomes
ċa(t) = (−i/ħ)(vaa ca + vab cb)
ċb(t) = (−i/ħ)(vba ca + vbb cb)     (16.30)
or in matrix form
( ċa )            ( vaa  vab ) ( ca )
(    )  =  (−i/ħ) (          ) (    )     (16.31)
( ċb )            ( vba  vbb ) ( cb )
More generally,
( ċ1 )            ( v11  v12  v13  · · · ) ( c1 )
( ċ2 )  =  (−i/ħ) ( v21  v22  v23  · · · ) ( c2 )     (16.32)
( ċ3 )            ( v31  v32  v33  · · · ) ( c3 )
(  ⋮ )            (  ⋮    ⋮    ⋮   · · · ) (  ⋮ )
again recalling that, for example v11 = V11 (t) and v21 = V21 (t)eiω21 t . Equa-
tion (16.32) is entirely the same as (16.29) and involves no approximations.
Thus (16.32) is also entirely equivalent to the Schrödinger equation.
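As a minimal numerical sketch (this is not from the text; hbar is set to 1 and the level spacing and coupling are arbitrary assumed numbers), the two-level system (16.30) can simply be integrated, showing the back-and-forth "flopping" of the populations under a constant coupling:

import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.0
w_ba = 1.0            # assumed level spacing omega_b - omega_a
V    = 0.2            # assumed constant coupling V_ab = V_ba

def rhs(t, c):
    ca, cb = c[0] + 1j*c[1], c[2] + 1j*c[3]
    vab = V * np.exp(-1j * w_ba * t)       # v_ab = V_ab exp(i w_ab t)
    vba = V * np.exp(+1j * w_ba * t)
    dca = (-1j/hbar) * vab * cb
    dcb = (-1j/hbar) * vba * ca
    return [dca.real, dca.imag, dcb.real, dcb.imag]

sol = solve_ivp(rhs, [0, 50], [1, 0, 0, 0], max_step=0.01)   # start in |a>
Pb = sol.y[2]**2 + sol.y[3]**2
print("max P(a->b) ~", Pb.max())    # the population oscillates back and forth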
The idea of iteration is as follows. For the cn (t1 ) appearing on the right
hand side just write it as a replica of (16.34), namely
cn(t1) = δni + (−i/ħ) ∫_{t0}^{t1} dt2 vnk(t2) ck(t2)     (16.35)
where a sum Σ_k is implied for the repeated index k. Substitute the replica back
into (16.34) to give
cm(t) = δmi + (−i/ħ) ∫_{t0}^{t} dt1 vmn(t1) [ δni + (−i/ħ) ∫_{t0}^{t1} dt2 vnk(t2) ck(t2) ]
      = δmi + (−i/ħ) ∫_{t0}^{t} dt1 vmi(t1)
        + (−i/ħ)² ∫_{t0}^{t} dt1 ∫_{t0}^{t1} dt2 vmn(t1) vnk(t2) ck(t2)     (16.36)
where we have used Σ_n vmn(t1) δni = vmi(t1). Do the same
again. Make a replica of (16.34) but now for ck (t2 ) and substitute. Contin-
uing this procedure the final result is
cm(t) = 1 + (−i/ħ) ∫_{t0}^{t} dt1 vmi(t1)
        + (−i/ħ)² ∫_{t0}^{t} dt1 ∫_{t0}^{t1} dt2 vmn(t1) vni(t2)
        + (−i/ħ)³ ∫_{t0}^{t} dt1 ∫_{t0}^{t1} dt2 ∫_{t0}^{t2} dt3 vmn(t1) vnk(t2) vki(t3)
        + · · ·     (16.37)
or
cm(t) = 1 + Σ_{j=1}^{∞} (−i/ħ)^j ∫_{t0}^{t} dt1 ∫_{t0}^{t1} dt2 · · · ∫_{t0}^{t_{j−1}} dt_j  vmn(t1) vnk(t2) · · · vki(t_j)     (16.38)
This is called the Dyson equation for cm (t). (Actually what is usually called
the Dyson equation is for a thing called the time evolution operator, but it
looks the same as our equations.) We shall often only use the result to first
order which is
cm(t) = δmi + (−i/ħ) ∫_{t0}^{t} dt1 vmi(t1)     (16.39)
called the Strong Incompletely Coupled Approximation (SICA).
(Of course there are other things inside the integrals, but I have left them
out for now.) Vkn can be taken outside of the second integral because it is
constant over the time interval t0 → t, whereas it’s obviously not constant
over the entire time interval −∞ → +∞. Thus overall we still do have a
time dependent interaction. See [Schiff, 1955, 2nd ed., pg. 197; Griffiths,
1995, problems 9.28 9.14, see part c, pg. 320].
cm(t) = Vmi (1 − e^{iωmi t}) / (ħωmi)     (16.42)
giving
Pi→m(t) ≡ |cm(t)|² = (2/ħ²) |Vmi|² (1 − cos ωmi t) / ωmi²     (16.43)
which is the transition probability for a constant perturbation. Now Pi→m(t)
represents the transition probability between the two states |i⟩ and |m⟩. For
a two-state system the total probability P(t) is just Pi→m(t) itself, and the
transition rate w ≡ dP/dt is
w = (2/ħ²) |Vmi|² sin(ωmi t)/ωmi     (16.46)
which is the transition rate for a two-level system under a constant pertur-
bation. The amazing thing is that it is a function of time which oscillates
sinusoidally. (See discussion in Griffiths, Pg. 304 of “flopping” frequency.
His discussion is for a harmonic perturbation, which we discuss next, but
the discussion is also relevant here.)
(do Problem 16.4)
Now Pi→m (t) represents the transition probability between two states |ii
and |mi. But |ii can undergo a transition to a whole set of possible final
states |ki. Thus the total probability P (t) is the sum of these,
P(t) = Σ_k Pk(t)     (16.47)
where
Pk(t) ≡ Pi→k(t) ≡ |ck(t)|²     (16.48)
If the energies are closely spaced or continuous, we replace Σ_k by an integral
P(t) = ∫_0^∞ ρ(Ek) dEk Pk(t)     (16.49)
where ρ(Ek ) is called the density of states. It’s just the number of energy
levels per energy interval dEk . Thus ρ(Ek )dEk is just the number of levels
in interval dEk . Often we will assume a constant density of states ρ which
just then comes outside the integral.
Note that formulas (16.47) and (16.49) involved addition of probabilities.
See Griffiths, Pg. 310 footnote. From (16.43) we get, assuming constant ρ,
and with dEk = h̄dωk
P(t) = (2/ħ) |Vki|² ρ ∫_0^∞ dωk (1 − cos ωki t)/ωki²     (16.50)
We want to change variables using dωk = dωki . However it is important to
realize that
∫_0^∞ dωk = ∫_{−∞}^{∞} dωki     (16.51)
because ωk varies from 0 to ∞ but so also does ωi. Thus when ωk = 0 and
ωi = ∞ then ωki = −∞, and when ωk = ∞ and ωi = 0 then ωki = +∞. Thus
P(t) = (2/ħ) |Vki|² ρ ∫_{−∞}^{∞} dωki (1 − cos ωki t)/ωki²     (16.52)
Using [Spiegel, 1968, Pg. 96]
∫_0^∞ (1 − cos px)/x² dx = πp/2     (16.53)
which, due to the integrand being even, implies ∫_{−∞}^{∞} (1 − cos px)/x² dx = πp, giving
P(t) = (2π/ħ) |Vki|² ρ t     (16.54)
w = (2π/ħ) |Vki|² ρ(Ek)     (16.55)
which is the transition rate for a multi-level system under a constant per-
turbation. This is the famous Fermi Golden Rule Number 2 for a constant
perturbation.
Footnote: Sometimes you might see an alternative derivation as follows.
[NASA TP-2363, Pg. 4]
w ≡ lim_{t→∞} dP(t)/dt
  = lim_{t→∞} (2/ħ) |Vki|² ρ ∫_0^∞ dωk sin(ωki t)/ωki
Using the representation of the delta function [Merzbacher, 2nd ed., 1970, pg. 84]
δ(x) = (1/π) lim_{θ→∞} sin(xθ)/x
giving
w = (2/ħ) |Vki|² ρ ∫_{−∞}^{∞} dωki πδ(ωki) = (2π/ħ) |Vki|² ρ
It represents a constant rate of transition [Merzbacher, 2nd ed., 1970, pg.
479, equation (18.107)].
Consider now a harmonic perturbation
V(t) = V cos ωt     (16.56)
which, upon substitution into the first order amplitude (16.39), yields
cm(t) = δmi + (−i/ħ) Vmi ∫_{t0}^{t} dt1 [(e^{iωt1} + e^{−iωt1})/2] e^{iωmi t1}
      = δmi + (−i/ħ) Vmi [ (e^{i(ωmi+ω)t} − e^{i(ωmi+ω)t0}) / (2i(ωmi+ω))
                         + (e^{i(ωmi−ω)t} − e^{i(ωmi−ω)t0}) / (2i(ωmi−ω)) ]     (16.57)
which is identical to (16.42) if ω ≡ 0. When we work out |cm (t)|2 it’s a big
mess. When ω ≈ ωmi the second term dominates so let’s just consider
cm(t) ≈ Vmi (1 − e^{i(ωmi−ω)t}) / (2ħ(ωmi − ω))     (16.59)
giving
Pi→m(t) ≡ |cm(t)|² ≈ (|Vmi|²/ħ²) sin²[(ωmi−ω)t/2] / (ωmi − ω)²     (16.60)
(do Problem 16.1)
This is the transition probability for a two-state system under a harmonic
perturbation. Again using (16.44) and (16.45) gives
w = (1/2ħ²) |Vmi|² sin[(ωmi − ω)t] / (ωmi − ω)     (16.61)
which is the transition rate for a two-level system under a harmonic pertur-
bation, which like (16.46) also oscillates in time.
For a multi-level system, using the same constant density ρ as before, we
have from (16.60)
P(t) = (|Vki|²/ħ) ρ ∫_0^∞ dωk sin²[(ωki − ω)t/2] / (ωki − ω)²     (16.62)
and changing the variable to x ≡ (ωki − ω)/2, so that dωk = 2 dx and
∫_0^∞ dωk → ∫_{−∞}^{∞} dx     (16.63)
as before, giving
P(t) = (1/ħ)(1/2) |Vki|² ρ ∫_{−∞}^{∞} dx sin²(xt)/x²     (16.64)
Using [Spiegel, 1968, pg. 96]
∫_0^∞ sin²(px)/x² dx = πp/2     (16.65)
which, due to the integrand being even, implies ∫_{−∞}^{∞} sin²(px)/x² dx = πp, giving
P(t) = (π/2ħ) |Vki|² ρ t     (16.66)
from which we use w ≡ dP/dt to obtain
w = (π/2ħ) |Vki|² ρ(Ek)     (16.67)
which is the transition rate for a multi-level system under a harmonic per-
turbation, or the Fermi Golden Rule Number 2 for a harmonic perturbation.
Footnote: Sometimes you might again see an alternative derivation as
follows [NASA TP-2363, pg. 4]
w ≡ lim_{t→∞} dP(t)/dt
  = lim_{t→∞} (1/2ħ) |Vki|² ρ ∫_0^∞ dωk sin[(ωki − ω)t] / (ωki − ω)
and using
lim_{t→∞} sin[(ωki − ω)t] / (ωki − ω) = πδ(ωki − ω) = πδ(ωk − ωi − ω)
giving
w = (1/2ħ) |Vki|² ρ ∫_{−∞}^{∞} d(ωki − ω) πδ(ωki − ω) = (π/2ħ) |Vki|² ρ
and which again represents a constant rate of transition. (If you want
(16.56) to look the same as (16.55) just rewrite (16.56) as V (t) = 2V cos ωt.)
where we have assumed the field E points in the k̂ direction, and [Griffiths,
equation 9.32]
V (t) = −eE0 z cos ωt (16.69)
Compare this to (16.56). Thus
Vmi = −P E0     (16.70)
where
P ≡ e⟨m|z|i⟩     (16.71)
and
w = (1/2ħ²) |P|² E0² sin[(ωmi − ω)t] / (ωmi − ω)     (16.72)
Here we have in mind that an atomic electron is excited by a single incident
photon polarized in the k̂ direction.
All students should read pages 307-309 of [Griffiths, 1995] for an excellent
discussion of absorption and stimulated emission and spontaneous emission.
These topics are very important in the study of lasers. Note especially that
the rate of absorption is the same as the rate of stimulated emission (just
swap indices).
w = (π/2ħ) |Vki|² ρ(E)     (16.74)
which is the transition rate for a radiation bath initial state under a harmonic
perturbation and is again the Fermi Golden Rule Number 2, with the density
of initial states in the radiation bath described by ρ(E). (do Problem 16.2)
In the above expression for photons we again have Vki = −PE0 as in
(16.70). However the initial radiation bath is not really characterized by an
electric field E0 but rather by the energy density (this time energy per unit
volume)
u = (1/2) ε0 E0²     (16.75)
to give
|Vki|² = |P|² (2u/ε0)     (16.76)
giving
w = (π/ε0ħ) |P|² u ρ(E)     (16.77)
(Note: in comparing this with equation (9.43) of Griffiths, realize that ρ(ω0)
used by Griffiths is actually the energy per unit frequency per unit volume.
This is related to my ρ(E) via ρ(ω0)/ħ = uρ(E).)
Using the electric dipole moment, analogous to (16.71) as
P ≡ e⟨m|r|i⟩     (16.78)
and averaging over all polarizations [Griffiths, pg. 311] gives a factor of 1/3, i.e.
w = (π/3ε0ħ) |P|² u ρ(E)     (16.79)
(The averaging works like this: consider r² = x² + y² + z², but now suppose
x² = y² = z², giving r² = 3z² and thus z² = (1/3)r².)
The stimulated emission coefficient is
B = (π/3ε0ħ²) |P|²     (16.80)
and the spontaneous emission rate is
A = (ω³/3πε0ħc³) |P|²     (16.81)
Students should study the derivation of these formulas in [Griffiths, 1995,
pg. 311-312].
The lifetime of an excited state is just [Griffiths, 1995, pg. 313]
τ = 1/A     (16.82)
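As a rough numerical sketch (this is not from the text), (16.81) and (16.82) can be evaluated for the hydrogen 2p → 1s transition. The dipole matrix element used below, ⟨100|z|210⟩ = (128√2/243) a0 ≈ 0.745 a0, is an assumed input, not derived here:

import numpy as np

eps0, hbar, c, e = 8.854e-12, 1.055e-34, 2.998e8, 1.602e-19
a0 = 5.292e-11                          # Bohr radius, m

omega = 10.2 * e / hbar                 # 10.2 eV photon energy -> rad/s
P = e * (128 * np.sqrt(2) / 243) * a0   # assumed dipole matrix element, C m

A = omega**3 * P**2 / (3 * np.pi * eps0 * hbar * c**3)    # (16.81)
print("A ~", A, "s^-1    tau ~", 1 / A, "s")   # ~6e8 s^-1, tau ~ 1.6 ns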
∆m = ±1, 0     (16.84)
and
∆l = ±1     (16.85)
Transitions will not occur unless these are satisfied. They are very clearly
derived on pages 315-318 of [Griffiths, 1995]. All students should fully un-
derstand these derivations and include them in their booklet write-ups.
16.8 Lasers
Chapter 17
SCATTERING, NUCLEAR
REACTIONS
N = Lσ (17.1)
Thus to measure a cross section the experimentalist just divides the count
rate by the beam luminosity.
When measuring a total cross section σ, the experimentalist doesn’t mea-
sure either the energy or the angle of the outgoing particle. She just mea-
sures the total number of counts. However one might be interested in how
many particles are emitted as a function of angle. This is the angular dis-
tribution or angular differential cross section dσ/dΩ. Or one instead might be
interested in the number of particles as a function of energy. This is called
the spectral distribution or energy differential cross section dσ/dE. One might
be interested in both variables or the doubly differential cross section d²σ/dE dΩ.
∇2 ψ + k 2 ψ = 0
where
k ≡ √(2mE)/ħ
In 1-dimension (the z direction) the Schrödinger equation is
d²ψ/dz² + k²ψ = 0
which has solution
ψ = A e^{ikz}
for an incident beam in the z direction.
Let’s discuss the outgoing spherical wave in more detail. When the ex-
perimentalist detects an outgoing scattered particle, the detector is in what
we call the asymptotic region, far from the scattering center,
where we have used R(r) ≡ u(r)/r. In the asymptotic region this is
ψ(r→∞, θ, φ) = A [ e^{ikz} + Σ_{lm} clm (−i)^{l+1} (e^{ikr}/kr) Ylm(θ, φ) ]     (17.8)
or
ψ(r→∞, θ, φ) ≡ A [ e^{ikz} + f(θ, φ) e^{ikr}/r ]     (17.9)
where we have defined the scattering amplitude as
f(θ, φ) ≡ (1/k) Σ_{lm} clm (−i)^{l+1} Ylm(θ, φ)     (17.10)
Equation (17.9) is very intuitive. It says that the scattering process consists
ikr
of an incident plane wave eikz and an outgoing scattered spherical wave e r .
The advantage of writing it in terms of the scattering amplitude is because
it is directly related to the angular differential cross section [Griffiths, 1995]
dσ/dΩ = |f(θ, φ)|²     (17.11)
Thus our formula for the angular distribution is
dσ/dΩ = (1/k²) Σ_{lm} Σ_{l′m′} i^{l−l′} clm* cl′m′ Ylm* Yl′m′     (17.12)
but
∫ Ylm* Yl′m′ dΩ = δll′ δmm′     (17.14)
giving
σ = (1/k²) Σ_{lm} |clm|²     (17.15)
But all these equations are overkill! I don’t know of any potential which is
not azimuthally symmetric. If azimuthal symmetry holds then our answers
won’t depend on φ or m. We could leave our equations as they stand and
simply grind out our answers and always discover independence of φ and m
in our answers. But it’s better to simply things and build in the symmetry
from the start. Azimuthal symmetry is simply achieved by setting m = 0.
From our earlier expression for the spherical harmonics we had
Ylm(θ, φ) = ε √[ (2l+1)/(4π) · (l−|m|)!/(l+|m|)! ] e^{imφ} Plm(cos θ)     (17.16)
where Plm (cos θ) are the Associated Legendre functions, which are related to
the Legendre polynomials Pl (cos θ) by
Pl0(cos θ) = Pl(cos θ)     (17.17)
Thus
Yl0(θ, φ) = √[(2l+1)/(4π)] Pl(cos θ)     (17.18)
and
ψ(r→∞, θ) = A [ e^{ikz} + Σ_l √((2l+1)/4π) cl (−i)^{l+1} (e^{ikr}/kr) Pl(cos θ) ]     (17.20)
so that
f(θ) = (1/k) Σ_l √((2l+1)/4π) cl (−i)^{l+1} Pl(cos θ)     (17.21)
which is called the partial wave expansion of the scattering amplitude. Ob-
viously the angular distribution is now written as
dσ/dΩ = |f(θ)|²     (17.22)
and
σ = (1/k²) Σ_l |cl|²     (17.23)
17.2.1 Calculation of cl
(This section closely follows [Griffiths, 1995])
How then do we calculate cl ? Answer: by matching boundary conditions
(as usual). We match the wave function in the exterior, asymptotic region
to the wave function in the interior region near the potential.
ψI(r, θ) = 0
to give
Σ_l [ i^l (2l + 1) jl(ka) + cl √((2l+1)/4π) hl^(1)(ka) ] Pl(cos θ) = 0
σ ≈ 4πa²
where in the third line we used sin(A + B) = sin A cos B + cos A sin B. Here
δ is the phase shift and simply serves as an arbitrary constant instead of B.
Now for arbitrary l we wrote equation (8.70) as
ul(r) = Ar [ cos δl jl(kr) − sin δl nl(kr) ]
where we put a minus sign in front of sin δl because nl(kr) → −cos kr
[Schiff, 1955, pg. 86 (3rd ed.); Arfken, 1985, pg. 627 (3rd ed.)]. Using the
asymptotic expressions (17.5) and (17.6) we get
ul(r → ∞) → (Ar/kr) [ cos δl sin(kr − lπ/2) + sin δl cos(kr − lπ/2) ]
           = (A/k) sin(kr − lπ/2 + δl)     (17.29)
The relation between the scattering amplitude and phase shift is
f(θ) = (1/2ik) Σ_{l=0}^{∞} (2l + 1)(e^{2iδl} − 1) Pl(cos θ)
     = (1/k) Σ_{l=0}^{∞} (2l + 1) e^{iδl} sin δl Pl(cos θ)     (17.30)
(do Problem 17.3) and the cross section is easily calculated as
σ = (4π/k²) Σ_l (2l + 1) sin² δl     (17.31)
(do Problem 17.4) From this result one can prove the famous Optical The-
orem
σ = (4π/k) Im f(0)     (17.32)
where Im f (0) denotes the imaginary part of the forward scattering ampli-
tude f (θ = 0). (do Problem 17.5)
(The example of hard sphere scattering is worked out, using phase shift,
in [“An Introduction to Quantum Physics” by G. Sposito, 1970, QC174.1.S68])
(do Problem 17.6)
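As a minimal numerical sketch of the phase shift method (this is not from the text; it uses the standard hard-sphere condition tan δl = jl(ka)/nl(ka), in which the interior wave function vanishes at r = a), the cross section (17.31) approaches 4πa² at low energy:

import numpy as np
from scipy.special import spherical_jn, spherical_yn

a = 1.0
for ka in [0.1, 0.5, 1.0]:
    k = ka / a
    sigma = 0.0
    for l in range(30):
        delta = np.arctan2(spherical_jn(l, ka), spherical_yn(l, ka))
        sigma += (4 * np.pi / k**2) * (2 * l + 1) * np.sin(delta)**2   # (17.31)
    print(f"ka={ka}:  sigma={sigma:.3f}    4*pi*a^2={4*np.pi*a**2:.3f}")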
(E − H0)|ψ⟩ = U|ψ⟩     (17.34)
or
(∇² + k²)ψ = (2m/ħ²) U ψ     (17.35)
with
k ≡ √(2mE)/ħ
The Schrödinger equation can be solved with Green function techniques.
Writing
(∇² + k²) G(r) ≡ δ³(r)     (17.36)
the general solution of the Schrödinger equation is
ψ(r) = φ(r) + (2m/ħ²) ∫ d³r′ G(r − r′) U(r′) ψ(r′)     (17.37)
where G(r − r0 ) is called a Green’s function, and φ(r) is the solution to the
free Schrödinger equation (with U = 0).
In abstract notation this may be written
or just
ψ = φ + UGψ     (17.39)
Equations (17.37) or (17.39) are just the integral form of the Schrödinger
equation and is often called the Lippman-Schwinger equation. It may be
iterated as
ψ = φ + U Gφ + U GU Gφ + U GU GU Gφ + · · · (17.40)
which is called the Born series. The first Born approximation is just
ψ ≈ φ + U Gφ (17.41)
Equation (17.40) is reminiscent of the series representing Feynman diagrams
and G is often called the propagator. (Read [Griffiths, 1995, pg. 372])
We solve (17.36) for G, plug into (17.37), and we have a general (itera-
tive!) solution for the Schrödinger equation. Problem 17.7 verifies that
G(r) = −e^{ikr}/(4πr)     (17.42)
(do Problem 17.7). Thus
ψ(r) = φ(r) − (m/2πħ²) ∫ d³r′ [e^{ik|r−r′|}/|r − r′|] U(r′) ψ(r′)     (17.43)
The first term φ(r) = Ae^{ikz}
represents the incident beam of particles and the second term in (17.43) must
be f(θ) e^{ikr}/r, allowing us to extract the scattering amplitude and calculate the
cross section.
The coordinate system is shown in Fig. 17.2. The coordinate r0 gives the
location of the potential U(r′) or scattering region. The coordinate r is the
location of the detector, and r − r′ is the displacement between the "target"
and detector. In a typical scattering experiment we have r ≫ r′. To see this
put the origin inside the scattering region. Then r′ just gives the region of
the scattering center, and we put our detector far away so that r ≫ r′. In
that case
|r − r′| = √(|r − r′|²) = √(r² − 2r·r′ + r′²)
         = r √(1 − 2r·r′/r² + (r′/r)²) ≈ r √(1 − 2r·r′/r²)
         ≈ r − êr · r′     (17.45)
Choose
k ≡ kêr (17.46)
giving
k|r − r′| ≈ kr − k · r′     (17.47)
Thus [Griffiths, 1994, pg. 367]
e^{ik|r−r′|}/|r − r′| ≈ (e^{ikr}/r) e^{−ik·r′}     (17.48)
giving
ψ(r) = A e^{ikz} − (m/2πħ²) (e^{ikr}/r) ∫ d³r′ e^{−ik·r′} U(r′) ψ(r′)     (17.49)
and thus
f(θ, φ) = −(m/2πħ²A) ∫ d³r′ e^{−ik·r′} U(r′) ψ(r′)     (17.50)
which is an exact expression for the scattering amplitude.
giving (17.51) as
f(θ) ≈ −(m/2πħ²) 2π ∫ r′² dr′ sin θ′ dθ′ e^{iqr′ cos θ′} U(r′)     (17.55)
Using [Griffiths, 1995, equation 11.49, pg. 364]
∫_0^π sin θ dθ e^{isr cos θ} = 2 sin(sr)/(sr)     (17.56)
gives
f(θ) ≈ −(2m/ħ²q) ∫_0^∞ r dr sin(qr) U(r)     (17.57)
and
q = 2k sin(θ/2)     (17.58)
Solution
f(θ) ≈ −(2mC/ħ²q) ∫_0^∞ dr sin(qr) e^{−µr}
and
∫_0^∞ dr sin(qr) e^{−µr} = q/(µ² + q²)
giving
f(θ) = −(2mC/ħ²) 1/(µ² + q²)     (17.60)
and
dσ/dΩ = (4m²C²/ħ⁴) 1/(µ² + q²)²     (17.61)
2
q1 q 2 1
f (θ) ≈ (17.63)
16π²0 E sin2 θ/2
or
264 CHAPTER 17. SCATTERING, NUCLEAR REACTIONS
µ ¶2
dσ q1 q2 1
=
dΩ 16π²0 E 2 sin4 θ/2
(17.64)
which is the famous Rutherford scattering formula with the char-
acteristic 1/sin⁴(θ/2) dependence. The same result occurs in classical
mechanics. All students should know how to derive this result
classically.
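As a minimal numerical check of the Born result (this is not from the text; it works in units where 2m/ħ² = 1 and uses arbitrary assumed values of C, µ and q for the screened Coulomb potential U = C e^{−µr}/r):

import numpy as np
from scipy.integrate import quad

C, mu, q = 1.0, 0.5, 1.3

integrand = lambda r: r * np.sin(q * r) * C * np.exp(-mu * r) / r
numeric = -(1.0 / q) * quad(integrand, 0, np.inf)[0]   # (17.57) with 2m/hbar^2 = 1
analytic = -C / (mu**2 + q**2)                         # (17.60) in the same units
print(numeric, analytic)                               # the two agree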
18.1 Solids
Chapter 19
SUPERCONDUCTIVITY
Chapter 20
ELEMENTARY
PARTICLES
Chapter 21
chapter 1 problems
21.1 Problems
1.1 Suppose 10 students go out and measure the length of a fence and the
following values (in meters) are obtained: 3.6, 3.7, 3.5, 3.7, 3.6, 3.7, 3.4, 3.5,
3.7, 3.3. A) Pick a random student. What is the probability that she made
a measurement of 3.6 m? B) Calculate the expectation value (i.e. average
or mean) using each formula of (1.2), (1.5), (1.8).
1.2 Using the example of problem 1.1, calculate the variance using both
equations (1.15) and (1.16).
21.2 Answers
1.2 0.0181
1.3 Griffiths Problem 1.6. A) A = √(λ/π)  B) ⟨x⟩ = a, ⟨x²⟩ = a² + 1/(2λ),
σ = 1/√(2λ)
21.3 Solutions
1.1
Let N (x) be the number of times a measurement of x is made.
Thus
N (3.7) = 4
N (3.6) = 2
N (3.5) = 2
N (3.4) = 1
N (3.3) = 1
total number of measurements N = 10
A) Probability of 3.6 is 2/10 = 1/5 = 0.2
B)
x̄ = (1/N) Σ x
  = (1/10)(3.7 + 3.7 + 3.7 + 3.7 + 3.6 + 3.6 + 3.5 + 3.5 + 3.4 + 3.3)
  = 3.57
⟨x⟩ ≡ x̄ = (1/N) Σ_x x N(x)
  = (1/10)[(3.7 × 4) + (3.6 × 2) + (3.5 × 2) + (3.4 × 1) + (3.3 × 1)]
  = 3.57
finally
⟨x⟩ = x̄ = Σ_x x P(x)
  = (3.7 × 4/10) + (3.6 × 2/10) + (3.5 × 2/10) + (3.4 × 1/10) + (3.3 × 1/10)
  = 3.57
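A quick numerical check (not part of the original solution) that the three equivalent formulas (1.2), (1.5) and (1.8) all give the same mean:

data = [3.6, 3.7, 3.5, 3.7, 3.6, 3.7, 3.4, 3.5, 3.7, 3.3]

mean1 = sum(data) / len(data)                                   # (1.2)
counts = {x: data.count(x) for x in set(data)}
mean2 = sum(x * n for x, n in counts.items()) / len(data)       # (1.5)
mean3 = sum(x * (n / len(data)) for x, n in counts.items())     # (1.8)
print(mean1, mean2, mean3)    # all give 3.57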
1.2
We found the average ⟨x⟩ = 3.57. (I will round it off later.) The
squared distances are
A)
ρ(x) = A e^{−λ(x−a)²},     ∫_{−∞}^{∞} ρ(x) dx = 1
1 = A ∫_{−∞}^{∞} e^{−λ(x−a)²} dx          let u = x − a, du/dx = 1, du = dx
  = A ∫_{−∞}^{∞} e^{−λu²} du               e^{−λu²} is an even function
  = 2A ∫_0^∞ e^{−λu²} du
  = 2A (1/2)√(π/λ)                          integral found Pg. 98 of [Spiegel 1968]
  = A √(π/λ)
therefore A = √(λ/π)
B)
⟨x⟩ = ∫_{−∞}^{∞} x ρ(x) dx = A ∫_{−∞}^{∞} x e^{−λ(x−a)²} dx
    = A ∫_{−∞}^{∞} (u + a) e^{−λu²} du
    = A [ ∫_{−∞}^{∞} u e^{−λu²} du + a ∫_{−∞}^{∞} e^{−λu²} du ]
    = A [ 0 + a √(π/λ) ] = a
since the first integrand is an odd function and the second integral is √(π/λ) from above.
⟨x²⟩ = ∫_{−∞}^{∞} x² ρ(x) dx = A ∫_{−∞}^{∞} x² e^{−λ(x−a)²} dx
     = A ∫_{−∞}^{∞} (u + a)² e^{−λu²} du
     = A [ ∫_{−∞}^{∞} u² e^{−λu²} du + 2a ∫_{−∞}^{∞} u e^{−λu²} du + a² ∫_{−∞}^{∞} e^{−λu²} du ]
     ≡ A [ I + J + K ]
Now
∫_0^∞ x² e^{−ax²} dx = Γ(3/2)/(2a^{3/2}) = (√π/2)/(2a^{3/2}) = √π/(4a^{3/2})
(See Pg. 101 of [Spiegel 1968].)  The integrand of I is an even function,
therefore I = ∫_{−∞}^{∞} u² e^{−λu²} du = 2 ∫_0^∞ u² e^{−λu²} du = 2 √π/(4λ^{3/2}) = √π/(2λ^{3/2})
while J = 0 (odd integrand) and K = a²√(π/λ). Thus
⟨x²⟩ = A [ √π/(2λ^{3/2}) + 0 + a²√(π/λ) ]
     = √(λ/π) [ (1/2λ)√(π/λ) + a²√(π/λ) ] = 1/(2λ) + a²
C) Thus σ² = ⟨x²⟩ − ⟨x⟩² = 1/(2λ) + a² − a² = 1/(2λ), giving σ = 1/√(2λ).
It makes sense that ⟨x⟩ = a, since the distribution is symmetric about x = a.
d⟨x⟩/dt = (d/dt) ∫ Ψ* x Ψ dx
        = ∫ (∂/∂t)(Ψ* x Ψ) dx
        = ∫ [ (∂Ψ*/∂t) x Ψ + Ψ* x (∂Ψ/∂t) ] dx
and according to the Schrödinger equation
∂Ψ/∂t = (1/iħ)[ −(ħ²/2m)(∂²Ψ/∂x²) + UΨ ] = (iħ/2m)(∂²Ψ/∂x²) − (i/ħ)UΨ
∂Ψ*/∂t = −(iħ/2m)(∂²Ψ*/∂x²) + (i/ħ)UΨ*     (assuming U = U*)
therefore
d⟨x⟩/dt = (iħ/2m) ∫ [ −(∂²Ψ*/∂x²) x Ψ + Ψ* x (∂²Ψ/∂x²) ] dx
        = −(iħ/2m) ∫ x (∂/∂x)[ (∂Ψ*/∂x)Ψ − Ψ*(∂Ψ/∂x) ] dx
        = −(iħ/2m) { [ x( (∂Ψ*/∂x)Ψ − Ψ*(∂Ψ/∂x) ) ]_{−∞}^{∞} − ∫ [ (∂Ψ*/∂x)Ψ − Ψ*(∂Ψ/∂x) ] dx }
from integration by parts
where
I1 ≡ ∫_{−∞}^{∞} (∂²Ψ*/∂x²)(∂Ψ/∂x) dx     and     I2 ≡ ∫_{−∞}^{∞} Ψ* (∂³Ψ/∂x³) dx
∴ we need to show I1 = I2; then we will have d⟨p⟩/dt = −⟨∂U/∂x⟩.
Recall the integration by parts formula ∫_a^b f dg = [fg]_a^b − ∫_a^b g df.
I2 ≡ ∫_{−∞}^{∞} Ψ* (∂³Ψ/∂x³) dx = ∫_{−∞}^{∞} Ψ* (∂/∂x)(∂²Ψ/∂x²) dx
   = ∫_{−∞}^{∞} Ψ* ∂(∂²Ψ/∂x²)
   = [ Ψ* ∂²Ψ/∂x² ]_{−∞}^{∞} − ∫_{−∞}^{∞} (∂²Ψ/∂x²) ∂Ψ*          using integration by parts
   (the boundary term = 0 because Ψ*(∞) = Ψ*(−∞) = 0)
   = −∫_{−∞}^{∞} (∂²Ψ/∂x²)(∂Ψ*/∂x) dx
I1 ≡ ∫_{−∞}^{∞} (∂²Ψ*/∂x²)(∂Ψ/∂x) dx = ∫_{−∞}^{∞} (∂Ψ/∂x)(∂/∂x)(∂Ψ*/∂x) dx
   = ∫_{−∞}^{∞} (∂Ψ/∂x) ∂(∂Ψ*/∂x)
   = [ (∂Ψ/∂x)(∂Ψ*/∂x) ]_{−∞}^{∞} − ∫_{−∞}^{∞} (∂Ψ*/∂x) ∂(∂Ψ/∂x)
   (the boundary term = 0 because we must also have ∂Ψ/∂x(∞) = ∂Ψ/∂x(−∞) = 0)
   = −∫_{−∞}^{∞} (∂Ψ*/∂x)(∂²Ψ/∂x²) dx
∴ I1 = I2     QED
Chapter 22
chapter 2 problems
22.1 Problems
2.2 Refer to Example 2.1.1. Determine the constants for the other 3 forms
of the solution using the boundary condition (i.e. determine B, C, F , δ, G,
γ from boundary conditions). Show that all solutions give x(t) = A sin ωt.
2.3 Check that the solution given in Example 2.1.2 really does satisfy the
A
differential equation. That is substitute x(t) = E cos(ωt + δ) + ω2 −α 2 cos αt
22.2 Answers
2.1
C = A + B, D = i(A − B) for C cos kx + D sin kx
A = (F/2)e^{iα}, B = (F/2)e^{−iα} for F cos(kx + α)
A = (G/2i)e^{iβ}, B = −(G/2i)e^{−iβ} for G sin(kx + β)
2.2
B = A/(2i)
C = −A/(2i)
δ = π/2,  F = −A
γ = π,  G = −A
22.3 Solutions
iii)
x(t) = G sin(ωt + γ)
x(0) = 0 = G sin γ ⇒ γ = π
therefore x(t) = G sin(ωt + π) = −G sin ωt
x(T/4) = A = −G sin(π/2) = −G
therefore γ = π, G = −A
therefore x(t) = −A sin(ωt + π) = A sin ωt
2.3
x(t) = E cos(ωt + δ) + [A/(ω² − α²)] cos αt
ẋ = −Eω sin(ωt + δ) − [Aα/(ω² − α²)] sin αt
ẍ = −Eω² cos(ωt + δ) − [Aα²/(ω² − α²)] cos αt
left hand side:
ẍ + ω²x = −Eω² cos(ωt + δ) − [Aα²/(ω² − α²)] cos αt
          + Eω² cos(ωt + δ) + [Aω²/(ω² − α²)] cos αt
        = (ω² − α²) [A/(ω² − α²)] cos αt
        = A cos αt
        = right hand side
QED
2.4
(1/f) df/dt = −(i/ħ) E
∫ (1/f)(df/dt) dt = −(i/ħ) ∫ E dt + C′     C′ = constant
ln f = −(i/ħ) Et + C′
f(t) = e^{−(i/ħ)Et + C′} = e^{C′} e^{−(i/ħ)Et} ≡ C e^{−(i/ħ)Et}
2.5
⟨ψn|f⟩ ≡ ∫ ψn*(x) f(x) dx
       = ∫ ψn*(x) Σ_m cm ψm(x) dx
       = Σ_m cm ∫ ψn*(x) ψm(x) dx
       = Σ_m cm δmn     from (2.32)
       = cn
QED
Chapter 23
chapter 3 problems
23.1 Problems
3.2 For the infinite 1-dimensional box, show that the same energy levels
and wave functions as obtained in (3.14) and (3.19) also arise if the other
solution y(x) = Ae(α+iβ)x + Be(α−iβ)x is used from Theorem 1.
23.2 Answers
23.3 Solutions
3.1
E1 = π²ħ²/(2ma²) = me c²/100
Now the rest mass of the electron is me c² = 0.511 MeV and ħc = 197 MeV fm, where
a fermi is fm ≡ 10⁻¹⁵ m. (1 fm ≈ size of a proton.)
π²(ħc)²/(2 me c² a²) = me c²/100
Therefore
a² = 100 π²(ħc)² / [2(me c²)²]
   = 100 × π² × (197 MeV fm)² / (2 × (0.511 MeV)²)
   = 73,343,292 fm²
a = 8,564 fm
a ≈ 10,000 fm = 10⁻¹¹ m (approximately)
giving
En = ħ²kn²/(2m) = n² π²ħ²/(2ma²)     with kn = nπ/a
The wave function normalization follows (3.17) and (3.18). From the above
we have
Chapter 24
chapter 4 problems
24.1 Problems
4.1 Show that the expectation value of the Hamiltonian or Energy operator
is real (i.e. show that ⟨Ê⟩ is real where Ê = iħ ∂/∂t).
24.2 Answers
A) A = √(30/a⁵)
B) ⟨x⟩ = a/2
⟨p⟩ = 0
⟨H⟩ = 5ħ²/(ma²)
c1 = 0.99928
c2 = 0
c3 = 0.03701
24.3 Solutions
4.1 In Section 2.3.3 we saw that the expectation value of the Hamiltonian
was just the total energy, i.e. hHi = E.
Instead of using Ĥ = −(ħ²/2m) d²/dx² + U let's instead note that ĤΨ = iħ ∂Ψ/∂t
and use Ĥ = iħ ∂/∂t. Thus
⟨Ĥ⟩ = iħ ∫_{−∞}^{∞} Ψ* (∂Ψ/∂t) dx
⟨Ĥ⟩* = −iħ ∫_{−∞}^{∞} Ψ (∂Ψ*/∂t) dx
From the Schrödinger equation
−(ħ²/2m)(∂²Ψ/∂x²) + UΨ = iħ ∂Ψ/∂t
we have
⟨Ĥ⟩ = −(ħ²/2m) ∫_{−∞}^{∞} Ψ* (∂²Ψ/∂x²) dx + ∫_{−∞}^{∞} Ψ* U Ψ dx
and
−(ħ²/2m)(∂²Ψ*/∂x²) + UΨ* = −iħ ∂Ψ*/∂t
assuming U = U*, so that
⟨Ĥ⟩* = −(ħ²/2m) ∫_{−∞}^{∞} Ψ (∂²Ψ*/∂x²) dx + ∫_{−∞}^{∞} Ψ U Ψ* dx
⇒ ⟨Ĥ⟩ − ⟨Ĥ⟩* = −(ħ²/2m) { ∫_{−∞}^{∞} Ψ* (∂²Ψ/∂x²) dx − ∫_{−∞}^{∞} Ψ (∂²Ψ*/∂x²) dx }
Integrating by parts twice (with the boundary terms vanishing) shows the two
integrals are equal, so
⟨Ĥ⟩ − ⟨Ĥ⟩* = 0
⟨Ĥ⟩ = ⟨Ĥ⟩*
4.2
P̂φ = pφ,     P̂ ≡ −iħ ∂/∂x
−iħ ∂φ/∂x = pφ
∫ (1/φ)(∂φ/∂x) dx = ∫ [p/(−iħ)] dx
ln φ = (i/ħ) px
φ = φ0 e^{(i/ħ)px}
Bibliography
[15] E.J. Purcell and D. Varberg, Calculus and Analytic Geometry, 5th ed.
(Prentice-Hall, Englewood Cliffs, New Jersey, 1987).
[16] G.B. Thomas and R.L. Finney, Calculus and Analytic Geometry, 7th
ed. (Addison-Wesley, Reading, Massachusetts, 1988).
[17] F.W. Byron and R.W. Fuller, Mathematics of Classical and Quantum
Physics (Addison-Wesley, Reading, Massachusetts, 1969).