Nuclear Physics
6.1 Time-dependent Schrödinger equation
6.1.1 Solutions to the Schrödinger equation
6.1.2 Unitary Evolution
6.2 Evolution of wave-packets
6.3 Evolution of operators and expectation values
6.3.1 Heisenberg Equation
6.3.2 Ehrenfest's theorem
6.4 Fermi's Golden Rule
Until now we have used quantum mechanics to predict properties of atoms and nuclei. Since we were interested mostly
in the equilibrium states of nuclei and in their energies, we only needed to look at a time-independent description
of quantum-mechanical systems. To describe dynamical processes, such as radioactive decay, scattering and nuclear
reactions, we need to study how quantum-mechanical systems evolve in time.
6.1 Time-dependent Schrödinger equation

When we first introduced quantum mechanics, we saw that the fourth postulate of QM states that:

The evolution of a closed system is unitary (reversible). The evolution is given by the time-dependent Schrödinger equation

iℏ ∂|ψ⟩/∂t = H|ψ⟩

where H is the Hamiltonian of the system (the energy operator) and ℏ is the reduced Planck constant
(ℏ = h/2π, with h the Planck constant, allowing conversion from energy to frequency units).
We will focus mainly on the Schrödinger equation to describe the evolution of a quantum-mechanical system. The
statement that the evolution of a closed quantum system is unitary is however more general. It means that the state
of a system at a later time t is given by |ψ(t)⟩ = U(t)|ψ(0)⟩, where U(t) is a unitary operator. An operator is unitary
if its adjoint U† (obtained by taking the transpose and the complex conjugate of the operator, U† = (U*)ᵀ) is equal
to its inverse: U† = U⁻¹, or U†U = 𝟙.
Note that the expression |ψ(t)⟩ = U(t)|ψ(0)⟩ is an integral equation relating the state at time zero with the state at
time t. For example, classically we could write that x(t) = x(0) + vt (where v is the speed, for constant speed). We
can as well write a differential equation that provides the same information: the Schrödinger equation. Classically,
for example (in the example above), the equivalent differential equation would be dx/dt = v (more generally we would
have Newton's equation linking the acceleration to the force). In QM we have a differential equation that controls the
evolution of closed systems. This is the Schrödinger equation:
iℏ ∂ψ(x,t)/∂t = H ψ(x,t)
where H is the system's Hamiltonian. The solution to this partial differential equation gives the wavefunction ψ(x,t)
at any later time, when ψ(x,0) is known.
6.1.1 Solutions to the Schrödinger equation
We first try to find a solution in the case where the Hamiltonian H = p²/(2m) + V(x,t) is such that the potential V(x,t)
is time independent (we can then write V(x)). In this case we can use separation of variables to look for solutions.
That is, we look for solutions that are a product of a function of position only and a function of time only:

ψ(x,t) = φ(x) f(t)

The derivatives in the Schrödinger equation then factor:

∂ψ(x,t)/∂x = dφ(x)/dx f(t)   and   ∂²ψ(x,t)/∂x² = d²φ(x)/dx² f(t)

and, substituting,

iℏ φ(x) df(t)/dt = −(ℏ²/2m) d²φ(x)/dx² f(t) + V(x) φ(x) f(t)

Dividing both sides by φ(x) f(t):

iℏ (df(t)/dt) 1/f(t) = −(ℏ²/2m) (d²φ(x)/dx²) 1/φ(x) + V(x)

Now the LHS is a function of time only, while the RHS is a function of position only. For the equation to hold, both
sides have then to be equal to a constant (the separation constant):

iℏ (df(t)/dt) 1/f(t) = E,    −(ℏ²/2m) (d²φ(x)/dx²) 1/φ(x) + V(x) = E

The first equation is easily solved:

f(t) = f(0) e^{−iEt/ℏ}

while the second is

−(ℏ²/2m) (d²φ(x)/dx²) 1/φ(x) + V(x) = E
that we have already seen as the time-independent Schrödinger equation. We have extensively studied the solutions
of this last equation, as they are the eigenfunctions of the energy-eigenvalue problem, giving the stationary (equilibrium)
states of quantum systems. Note that for these stationary solutions φ(x) we can still find the corresponding
total wavefunction, given as stated above by ψ(x,t) = φ(x)f(t), which does describe also the time evolution of the
system:
ψ(x,t) = φ(x) e^{−iEt/ℏ}
Does this mean that the states that up to now we called stationary are instead evolving in time?
The answer is yes, but with a caveat. Although the states themselves evolve as stated above, any measurable quantity
(such as the probability density |ψ(x,t)|² or the expectation value of an observable, ⟨A⟩ = ∫ ψ*(x,t) A[ψ(x,t)] dx) is still
time-independent. (Check it!)
Thus we were correct in calling these states stationary and neglecting in practice their time evolution when studying
the properties of the systems they describe.
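The "(Check it!)" above is easy to verify numerically. Below is a minimal sketch (in units where ℏ = m = 1, using an infinite square well of width L = 1; the well, the grid and the quantum number are illustrative choices, not from the text):

```python
import numpy as np

# Infinite square well of width L (hbar = m = 1); illustrative parameters.
L = 1.0
x = np.linspace(0.0, L, 500)
n = 2                                                # quantum number (arbitrary choice)

phi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)   # energy eigenfunction
E = (n * np.pi) ** 2 / (2.0 * L**2)                  # eigen-energy

def psi(t):
    """Full stationary state psi(x,t) = phi(x) exp(-i E t / hbar)."""
    return phi * np.exp(-1j * E * t)

# The probability density is the same at t = 0 and at any later time:
p0 = np.abs(psi(0.0)) ** 2
p1 = np.abs(psi(3.7)) ** 2
print(np.allclose(p0, p1))   # True: |psi(x,t)|^2 is time-independent
```

The phase factor e^{−iEt/ℏ} has unit modulus, so it cancels in |ψ|²; the same cancellation makes every expectation value in a stationary state time-independent.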
Notice that the wavefunction built from one energy eigenfunction, ψ(x,t) = φ(x)f(t), is only a particular solution
of the Schrödinger equation; many others are possible. These will be complicated functions of space and time,
whose shape will depend on the particular form of the potential V(x). How can we describe these general solutions?
We know that in general we can write a basis given by the eigenfunctions of the Hamiltonian. These are the functions
{φ_k(x)} (as defined above by the time-independent Schrödinger equation). The eigenstates of the Hamiltonian do not
evolve. However we can write any wavefunction as
ψ(x,t) = Σ_k c_k(t) φ_k(x)
This just corresponds to expressing the wavefunction in the basis given by the energy eigenfunctions. As usual, the
coefficients c_k(t) can be obtained at any instant in time by taking the inner product: c_k(t) = ⟨φ_k|ψ(x,t)⟩.
What is the evolution of such a function? Substituting in the Schrödinger equation we have

iℏ ∂(Σ_k c_k(t) φ_k(x))/∂t = Σ_k c_k(t) H φ_k(x)
that becomes

iℏ Σ_k (∂c_k(t)/∂t) φ_k(x) = Σ_k c_k(t) E_k φ_k(x)

For each eigenfunction φ_k we then have an equation for the corresponding coefficient:

dc_k/dt = −(i/ℏ) E_k c_k(t)   →   c_k(t) = c_k(0) e^{−iE_k t/ℏ}
Obs. We can define the eigen-frequencies ℏω_k = E_k from the eigen-energies. Thus we see that the wavefunction is a
superposition of waves φ_k propagating in time each with a different frequency ω_k.
The behavior of quantum systems, even particles, thus often is similar to the propagation of waves. One example
is the diffraction pattern for electrons (and even heavier objects) when scattering from a slit. We saw an example in
the electron diffraction video at the beginning of the class.
Obs. What is the probability of measuring a certain energy E_k at a time t? It is given by the coefficient of the φ_k
eigenfunction: |c_k(t)|² = |c_k(0) e^{−iE_k t/ℏ}|² = |c_k(0)|². This means that the probability for the given energy is constant,
it does not change in time. Energy is then a so-called constant of the motion. This is true only for the energy eigenvalues,
not for other observables.
Example: Consider instead the probability of finding the system at a certain position, p(x) = |ψ(x,t)|². This of course
changes in time. For example, let ψ(x,0) = c1(0)φ1(x) + c2(0)φ2(x), with |c1(0)|² + |c2(0)|² = |c1|² + |c2|² = 1 (and
φ1, φ2 normalized energy eigenfunctions). Then at a later time we have ψ(x,t) = c1(0)e^{−iω1 t}φ1(x) + c2(0)e^{−iω2 t}φ2(x), and

p(x,t) = |c1(0)e^{−iω1 t}φ1(x) + c2(0)e^{−iω2 t}φ2(x)|²
= |c1(0)|²|φ1(x)|² + |c2(0)|²|φ2(x)|² + c1*c2 φ1*φ2 e^{−i(ω2−ω1)t} + c1c2* φ1φ2* e^{i(ω2−ω1)t}
= |c1|²|φ1(x)|² + |c2|²|φ2(x)|² + 2 Re[ c1*c2 φ1*φ2 e^{−i(ω2−ω1)t} ]

The last term describes a wave interference between different components of the initial wavefunction.
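The interference term can be made concrete with a short numerical sketch (ℏ = m = 1, infinite-well eigenfunctions; all parameter values are illustrative choices, not from the text):

```python
import numpy as np

# Two-level superposition in an infinite square well of width L (hbar = m = 1).
L = 1.0
x = np.linspace(0.0, L, 400)
phi1 = np.sqrt(2 / L) * np.sin(1 * np.pi * x / L)
phi2 = np.sqrt(2 / L) * np.sin(2 * np.pi * x / L)
w1, w2 = (np.pi**2) / 2, (2 * np.pi)**2 / 2   # omega_n = E_n/hbar = (n pi)^2/2
c1 = c2 = 1 / np.sqrt(2)                      # illustrative coefficients

def density(t):
    psi = c1 * np.exp(-1j * w1 * t) * phi1 + c2 * np.exp(-1j * w2 * t) * phi2
    return np.abs(psi) ** 2

def density_formula(t):
    # closed form derived in the text
    return (abs(c1)**2 * phi1**2 + abs(c2)**2 * phi2**2
            + 2 * np.real(np.conj(c1) * c2 * phi1 * phi2
                          * np.exp(-1j * (w2 - w1) * t)))

t = 0.123
print(np.allclose(density(t), density_formula(t)))   # True
# The density oscillates, returning to itself after one beat period:
T = 2 * np.pi / (w2 - w1)
print(np.allclose(density(0.0), density(T)))         # True
```

Only the relative phase (ω2 − ω1)t enters the density, so p(x,t) oscillates at the Bohr frequency of the two levels.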
Obs.: The expressions found above for the time-dependent wavefunction are only valid if the potential is itself
time-independent. If this is not the case, the solutions are even more difficult to obtain.
6.1.2 Unitary Evolution

We saw that the evolution of a closed system can also be written in terms of a unitary operator, ψ(x,t) = U(t)ψ(x,0). Inserting this expression into the Schrödinger equation gives

iℏ ∂[U(t) ψ(x,0)]/∂t = H U(t) ψ(x,0)   →   iℏ ∂U(t)/∂t = H U(t)
where in the second step we used the fact that, since the equation holds for any wavefunction ψ(x,0), it must hold for the
operators themselves. If the Hamiltonian is time independent, the equation iℏ ∂U(t)/∂t = H U(t) can be solved easily, obtaining:

U(t) = e^{−iHt/ℏ}
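The propagator U(t) = e^{−iHt/ℏ} can be checked numerically with a matrix exponential. A sketch in which an arbitrary random Hermitian matrix stands in for H (SciPy's `expm` computes the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
rng = np.random.default_rng(0)

# An arbitrary Hermitian "Hamiltonian" (illustrative 4x4 matrix):
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                 # Hermitian by construction

t = 0.8
U = expm(-1j * H * t / hbar)             # U(t) = exp(-i H t / hbar)

# Unitarity: U^dagger U = identity, so the evolution preserves the norm.
print(np.allclose(U.conj().T @ U, np.eye(4)))   # True

psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)
psi_t = U @ psi0
print(np.isclose(np.linalg.norm(psi_t), 1.0))   # True: norm preserved
```

Unitarity here is exactly the statement that total probability is conserved under closed-system evolution.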
6.2 Evolution of wave-packets

Now we want to study the case where the eigenfunctions form a continuous basis, {φ_k} → {φ(k)}. More
precisely, we want to describe how a free particle evolves in time. We already found the eigenfunctions of the free
particle Hamiltonian (H = p²/2m): they were given by the momentum eigenfunctions e^{ikx} and describe more properly
a traveling wave. A particle localized in space instead can be described by a wavepacket ψ(x,0) initially well localized
in x-space (for example, a Gaussian wavepacket).
How does this wave-function evolve in time? First, following Section 2.2.1, we express the wavefunction in terms of
momentum (and energy) eigenfunctions:

ψ(x,0) = (1/√(2π)) ∫ ψ̄(k) e^{ikx} dk

We saw that this is equivalent to the Fourier transform of ψ̄(k); then ψ(x,0) and ψ̄(k) are a Fourier pair (they can be
obtained from each other via a Fourier transform). Each momentum eigenfunction evolves in time with the phase factor e^{−iω(k)t}, so at a later time

ψ(x,t) = (1/√(2π)) ∫ ψ̄(k) e^{iφ(k)} dk,   where   φ(k) = kx − ω(k)t.
Now, if ψ̄(k) is strongly peaked around k = k0, it is a reasonable approximation to Taylor expand φ(k) about k0.
We can then approximate ψ̄(k) by ψ̄(k) ≈ e^{−(k−k0)²/(4(Δk)²)} and, keeping terms up to second order in k − k0, we obtain

ψ(x,t) ∝ ∫ e^{−(k−k0)²/(4(Δk)²)} exp{ i[ φ0 + φ0′ (k − k0) + (1/2) φ0″ (k − k0)² ] } dk,
where

φ0 = φ(k0) = k0 x − ω0 t,
φ0′ = dφ(k0)/dk = x − vg t,
φ0″ = d²φ(k0)/dk² = −α t,

with

ω0 = ω(k0),   vg = dω(k0)/dk,   α = d²ω(k0)/dk².

As usual, the variances of the initial wavefunction and of its Fourier transform are related: Δk = 1/(2Δx), where Δx
is the initial width of the wave-packet and Δk the spread in the momentum. Changing the variable of integration to
y = (k − k0)/(2Δk), we get
ψ(x,t) ∝ e^{i(k0 x − ω0 t)} ∫ e^{iβ1 y − (1+iβ2) y²} dy,

where

β1 = 2Δk (x − x0 − vg t),
β2 = 2(Δk)² α t.

Completing the square in the exponent reduces this to a standard Gaussian integral:

ψ(x,t) ∝ (1 + iβ2)^{−1/2} e^{i(k0 x − ω0 t)} e^{−β1²/[4(1+iβ2)]} ∫ e^{−z²} dz.
Evaluating the integral, we finally obtain

ψ(x,t) ∝ [ e^{i(k0 x − ω0 t)} / √(1 + 2i(Δk)² α t) ] e^{−(x − x0 − vg t)²/(4σ(t)²)},

where

σ²(t) = (Δx)² + α² t² / (4(Δx)²).
Note that even if we made an approximation earlier by Taylor expanding the phase factor φ(k) about k = k0, the
above wave-function is still identical to our original wave-function at t = 0.
The probability density of our particle as a function of time is written

|ψ(x,t)|² ∝ σ(t)^{−1} exp[ −(x − x0 − vg t)² / (2σ(t)²) ].
Hence, the probability distribution is a Gaussian, of characteristic width σ(t) (increasing in time), which peaks at
x = x0 + vg t. Now, the most likely position of our particle obviously coincides with the peak of the distribution
function. Thus, the particle's most likely position is given by

x = x0 + vg t.

It can be seen that the particle effectively moves at the uniform velocity

vg = dω/dk,
which is known as the group-velocity. In other words, a plane-wave travels at the phase-velocity, vp = ω/k, whereas
a wave-packet travels at the group-velocity, vg = dω/dk. From the dispersion relation for particle waves (E = ℏω, p = ℏk)
the group velocity is

vg = d(ℏω)/d(ℏk) = dE/dp = p/m,
which is identical to the classical particle velocity. Hence, the dispersion relation turns out to be consistent with
classical physics, after all, as soon as we realize that particles must be identified with wave-packets rather than
plane-waves.
Note that the width of our wave-packet grows as time progresses: the characteristic time for a wave-packet of original
width Δx to double in spatial extent is

t2 ≈ m(Δx)²/ℏ.

So, if an electron is originally localized in a region of atomic scale (i.e., Δx ∼ 10⁻¹⁰ m) then the doubling time is
only about 10⁻¹⁶ s. Clearly, particle wave-packets (for freely moving particles) spread very rapidly.
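These numbers are easy to reproduce. A small sketch using SI constants (the free-particle value α = ℏ/m follows from ω = ℏk²/2m; the atomic-scale width is the one quoted in the text):

```python
import numpy as np

hbar = 1.054571817e-34    # J s
m_e  = 9.1093837015e-31   # kg

# Doubling-time estimate t2 ~ m (dx)^2 / hbar for an electron localized
# on an atomic length scale:
dx = 1e-10                # m
t2 = m_e * dx**2 / hbar
print(f"t2 ~ {t2:.1e} s")         # on the order of 1e-16 s

# Width from sigma^2(t) = dx^2 + alpha^2 t^2 / (4 dx^2), alpha = hbar/m:
alpha = hbar / m_e

def sigma(t):
    return np.sqrt(dx**2 + alpha**2 * t**2 / (4 * dx**2))

# After ten such characteristic times the packet is already much wider:
print(sigma(10 * t2) / dx)        # ~ 5
```

Note that t2 is an order-of-magnitude estimate: plugging t = t2 into σ(t) gives a modest growth factor √(5/4), and the packet has actually doubled a few characteristic times later.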
The rate of spreading of a wave-packet is ultimately governed by the second derivative of ω(k) with respect to k,
∂²ω/∂k². This is why the relationship between ω and k is generally known as a dispersion relation: it governs how
wave-packets disperse as they propagate.
If we consider light-waves, then ω is a linear function of k and the second derivative of ω with respect to k is zero.
This implies that there is no dispersion of wave-packets: they propagate without changing shape. This is
of course true for any other wave for which ω(k) ∝ k. Another property of linear dispersion relations is that the
phase-velocity, vp = ω/k, and the group-velocity, vg = dω/dk, are identical. Thus a light pulse propagates at the
same speed as a plane light-wave; both propagate through a vacuum at the characteristic speed c = 3 × 10⁸ m/s.
Of course, the dispersion relation for particle waves is not linear in k (for example, for free particles it is quadratic).
Hence, particle plane-waves and particle wave-packets propagate at different velocities, and particle wave-packets
gradually spread as time progresses.
6.3 Evolution of operators and expectation values

We now want to study how the expectation value of an observable A, ⟨A⟩ = ∫ d³x ψ*(x,t) A[ψ(x,t)], evolves in time.
Taking the time derivative, we use the Schrödinger equation and its complex conjugate,

∂ψ(x,t)/∂t = −(i/ℏ) H ψ(x,t),   ∂ψ*(x,t)/∂t = (i/ℏ) (Hψ(x,t))*,

and the fact that (Hψ(x,t))* = ψ(x,t)* H† = ψ(x,t)* H (since the Hamiltonian is Hermitian, H† = H). With this, we have
d⟨A⟩/dt = (i/ℏ) ∫ d³x ψ*(x,t) HA ψ(x,t) − (i/ℏ) ∫ d³x ψ*(x,t) AH ψ(x,t) + ∫ d³x ψ*(x,t) (∂A/∂t) ψ(x,t)
       = (i/ℏ) ∫ d³x ψ*(x,t) [HA − AH] ψ(x,t) + ∫ d³x ψ*(x,t) (∂A/∂t) ψ(x,t)
We now rewrite HA − AH = [H, A] as a commutator and the integrals as expectation values:

d⟨A⟩/dt = (i/ℏ) ⟨[H, A]⟩ + ⟨∂A/∂t⟩
Obs. Notice that if the observable itself is time independent, then the equation reduces to d⟨A⟩/dt = (i/ℏ)⟨[H, A]⟩. Then, if
the observable A commutes with the Hamiltonian, we have no evolution at all of the expectation value. An observable
that commutes with the Hamiltonian is a constant of the motion. For example, we see again why energy is a constant
of the motion (as seen before).
6.3.1 Heisenberg Equation

Notice that since we can take the expectation value with respect to any wavefunction, the equation above must hold
also for the operators themselves. Then we have the Heisenberg equation:
dA/dt = (i/ℏ)[H, A] + ∂A/∂t
This is an equivalent formulation of the system's evolution (equivalent to the Schrödinger equation).
Obs. Notice that if the operator A is time independent and it commutes with the Hamiltonian H, then the operator
is a constant of the motion. An important example is the angular momentum for a central potential (one that
only depends on the distance, V(r)). We have seen when solving the 3D time-independent equation that [H, L²] = 0 and [H, Lz] = 0.
Thus the angular momentum is a constant of the motion.
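The relation d⟨A⟩/dt = (i/ℏ)⟨[H, A]⟩ (for time-independent A) can be spot-checked numerically. A sketch in which arbitrary random 5×5 Hermitian matrices stand in for H and A:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
rng = np.random.default_rng(1)

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

H = random_hermitian(5)            # stand-in Hamiltonian
A = random_hermitian(5)            # stand-in (time-independent) observable
psi0 = rng.normal(size=5) + 1j * rng.normal(size=5)
psi0 /= np.linalg.norm(psi0)

def expect_A(t):
    psi = expm(-1j * H * t / hbar) @ psi0
    return np.real(psi.conj() @ A @ psi)

# Finite-difference d<A>/dt at t = 0 versus (i/hbar)<[H, A]>:
eps = 1e-6
lhs = (expect_A(eps) - expect_A(-eps)) / (2 * eps)
rhs = np.real(1j / hbar * (psi0.conj() @ (H @ A - A @ H) @ psi0))
print(np.isclose(lhs, rhs, atol=1e-5))   # True
```

If A is replaced by H itself the commutator vanishes and ⟨H⟩ is constant, which is the energy-conservation statement above.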
6.3.2 Ehrenfest's theorem

Applying the evolution equation for expectation values to the position operator x (with H = p²/2m + V(x)) gives
d⟨x⟩/dt = (i/ℏ)⟨[H, x]⟩ = ⟨p⟩/m. Notice that this is the same equation that links the classical position with momentum (remember p/m = v, the velocity).
Now we turn to the equation for the momentum:

d⟨p⟩/dt = (i/ℏ)⟨[H, p]⟩ = (i/ℏ)⟨[p²/2m + V(x), p]⟩
Here of course [p²/2m, p] = 0, so we only need to calculate [V(x), p]. We substitute the explicit expression for the
momentum, p = −iℏ ∂/∂x, acting on a test function f(x):
[V(x), p]f(x) = V(x) (−iℏ ∂f(x)/∂x) − (−iℏ) ∂(V(x)f(x))/∂x
            = −iℏ V(x) ∂f(x)/∂x + iℏ (∂V(x)/∂x) f(x) + iℏ V(x) ∂f(x)/∂x = iℏ (∂V(x)/∂x) f(x)
Then,

d⟨p⟩/dt = −⟨∂V(x)/∂x⟩
Obs. Notice that in these two equations ℏ has been canceled out. Also, the equations involve only real variables (as in
classical mechanics).
Obs. Usually, the derivative of a potential function is a force, so we can write −∂V(x)/∂x = F(x).
If we could approximate ⟨F(x)⟩ ≈ F(⟨x⟩), then the two equations would read:

d⟨x⟩/dt = ⟨p⟩/m
d⟨p⟩/dt = F(⟨x⟩)
These are two equations in the expectation values only. Then we could just make the substitutions ⟨p⟩ → p and
⟨x⟩ → x (i.e. identify the expectation values of the QM operators with the corresponding classical variables). We obtain
in this way the usual classical equations of motion. This is Ehrenfest's theorem.
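For a harmonic oscillator the force is linear, so F(⟨x⟩) = ⟨F(x)⟩ holds exactly and ⟨x⟩(t) follows the classical trajectory. A sketch in a truncated Fock basis (ℏ = m = ω = 1; the coherent-state amplitude and truncation size are illustrative choices):

```python
import math
import numpy as np
from scipy.linalg import expm

N = 40                                         # basis truncation (illustrative)
n = np.arange(N)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator
H = np.diag(n + 0.5)                           # H = a^dag a + 1/2 (hbar = omega = 1)
xop = (a + a.T) / np.sqrt(2)                   # position operator x = (a + a^dag)/sqrt(2)

# Coherent state |alpha>; truncation error is negligible for alpha = 1, N = 40.
alpha = 1.0
c = np.array([alpha**k / math.sqrt(math.factorial(k)) for k in range(N)])
c /= np.linalg.norm(c)

def mean_x(t):
    psi = expm(-1j * H * t) @ c
    return np.real(psi.conj() @ xop @ psi)

# Classical trajectory with x(0) = <x>(0) = sqrt(2)*alpha and p(0) = 0:
ts = np.linspace(0.0, 6.0, 7)
quantum = np.array([mean_x(t) for t in ts])
classical = np.sqrt(2) * alpha * np.cos(ts)
print(np.allclose(quantum, classical, atol=1e-6))   # True
```

For an anharmonic potential the same comparison would show ⟨x⟩(t) drifting away from the classical solution once the packet spreads, which is exactly the F(⟨x⟩) ≠ ⟨F(x)⟩ situation discussed next.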
Fig. 40: Localized (left) and spread-out (right) wavefunction. In the plot the absolute value squared of the wavefunction is shown
in blue (corresponding to the position probability density) for a system approaching the classical limit (left) or showing more
quantum behavior (right). The force acting on the system is shown in black (same in the two plots). The shaded areas indicate the
region over which |ψ(x)|² is non-negligible, thus giving an idea of the region over which the force is averaged. The wavefunctions
give the same average position ⟨x⟩. However, while for the left one F(⟨x⟩) ≈ ⟨F(x)⟩, for the right wavefunction F(⟨x⟩) ≠ ⟨F(x)⟩.
When is the approximation above valid? We want ⟨∂V(x)/∂x⟩ ≈ ∂V(⟨x⟩)/∂⟨x⟩. This means that the wavefunction is localized
enough such that the width of the position probability distribution is small compared to the typical length scale
over which the potential varies. When this condition is satisfied, the expectation values of quantum-mechanical
observables will follow a classical trajectory.
Assume for example ψ(x) is an eigenstate of the position operator, ψ(x) = δ(x − x̄). Then ⟨x⟩ = ∫ dx x δ(x − x̄)² = x̄
and

⟨∂V(x)/∂x⟩ = ∫ (∂V(x)/∂x) δ(x − ⟨x⟩) dx = ∂V(⟨x⟩)/∂⟨x⟩
If instead the wavefunction is a packet centered around ⟨x⟩ but with a finite width Δx (i.e. a Gaussian function) we
no longer have an equality but only an approximation, valid if the wavefunction is localized: Δx ≪ L = |(1/V) ∂V(x)/∂x|⁻¹.
6.4 Fermi's Golden Rule

If the system is in its equilibrium state, we expect it to be stationary, thus the wavefunction will be one of the
eigenfunctions of the Hamiltonian. For example, if we consider an atom or a nucleus, we usually expect to find it in
its ground state (the state with the lowest energy). We consider this to be the initial state of the system:

ψ(x,0) = u_i(x)
(where i stands for initial). Now we assume that a perturbation is applied to the system. For example, we could
have a laser illuminating the atom, or a neutron scattering with the nucleus. This perturbation introduces an extra
potential V in the system's Hamiltonian (a priori V can be a function of both position and time, V(x,t), but we will
consider the simpler case of a time-independent potential V(x)). Now the Hamiltonian reads:

H = H0 + V(x)
What we should do is to find the eigenvalues {E_h^v} and eigenfunctions {v_h(x)} of this new Hamiltonian and express
u_i(x) in this new basis to see how it evolves:

u_i(x) = Σ_h d_h(0) v_h(x)   →   ψ(x,t) = Σ_h d_h(0) e^{−iE_h^v t/ℏ} v_h(x).
Most of the time however, the new Hamiltonian is a complex one, and we cannot calculate its eigenvalues and
eigenfunctions. Then we follow another strategy.
Consider the examples above (atom+laser or nucleus+neutron): What we want to calculate is the probability of
making a transition from an atom/nucleus energy level to another energy level, as induced by the interaction. Since
H0 is the original Hamiltonian describing the system, it makes sense to always describe the state in terms of its
energy levels (i.e. in terms of its eigenfunctions). Then, we guess a solution for the state of the form:
ψ(x,t) = Σ_k c_k(t) e^{−iω_k t} u_k(x)
This is very similar to the expression for ψ(x,t) above, except that now the coefficients c_k are time dependent. The
time-dependency derives from the fact that we added an extra potential interaction to the Hamiltonian.
Let us now insert this guess into the Schrödinger equation, iℏ ∂ψ/∂t = (H0 + V)ψ:

iℏ Σ_k [ ċ_k(t) e^{−iω_k t} u_k(x) − iω_k c_k(t) e^{−iω_k t} u_k(x) ] = Σ_k c_k(t) e^{−iω_k t} ( H0 u_k(x) + V[u_k(x)] )
(where ċ is the time derivative). Using the eigenvalue equation H0 u_k(x) = ℏω_k u_k(x) to simplify the RHS we find
Σ_k [ iℏ ċ_k(t) e^{−iω_k t} u_k(x) + ℏω_k c_k(t) e^{−iω_k t} u_k(x) ] = Σ_k [ c_k(t) e^{−iω_k t} ℏω_k u_k(x) + c_k(t) e^{−iω_k t} V[u_k(x)] ]
Now let us take the inner product of each side with u_h*(x) (the terms in ℏω_k cancel):

Σ_k iℏ ċ_k(t) e^{−iω_k t} ∫ u_h*(x) u_k(x) dx = Σ_k c_k(t) e^{−iω_k t} ∫ u_h*(x) V[u_k(x)] dx

In the LHS we find that ∫ u_h*(x) u_k(x) dx = 0 for h ≠ k and it is 1 for h = k (the eigenfunctions are orthonormal).
Then in the sum over k the only term that survives is the one with k = h:

Σ_k iℏ ċ_k(t) e^{−iω_k t} ∫ u_h*(x) u_k(x) dx = iℏ ċ_h(t) e^{−iω_h t}
On the RHS we do not have any simplification. To shorten the notation, however, we call V_hk the integral

V_hk = ∫ u_h*(x) V[u_k(x)] dx

Equating the two sides and rearranging, we obtain

ċ_h(t) = −(i/ℏ) Σ_k c_k(t) e^{i(ω_h − ω_k)t} V_hk
This is a differential equation for the coefficients c_h(t). We can express the same relation using an integral equation:

c_h(t) = −(i/ℏ) Σ_k ∫₀ᵗ c_k(t′) e^{i(ω_h − ω_k)t′} V_hk dt′ + c_h(0)
We now make an important approximation. We said at the beginning that the potential V is a perturbation, thus
we assume that its effects are small (or the changes happen slowly). Then we can approximate c_k(t′) in the integral
with its value at time 0, c_k(t′) ≈ c_k(0):

c_h(t) = −(i/ℏ) Σ_k c_k(0) ∫₀ᵗ e^{i(ω_h − ω_k)t′} V_hk dt′ + c_h(0)
[Notice: for a better approximation, an iterative procedure can be used which replaces c_k(t′) with its first-order
solution, and so on.]
Now let's go back to the initial scenario, in which we assumed that the system was initially at rest, in a stationary
state ψ(x,0) = u_i(x). Then c_i(0) = 1 and c_k(0) = 0 for all k ≠ i, so that only the k = i term survives in the sum:

c_h(t) = −(i/ℏ) ∫₀ᵗ e^{i(ω_h − ω_i)t′} V_hi dt′
or, by calling Δω_h = ω_h − ω_i,

c_h(t) = −(i/ℏ) V_hi ∫₀ᵗ e^{iΔω_h t′} dt′ = V_hi (1 − e^{iΔω_h t}) / (ℏ Δω_h)
What we are really interested in is the probability of making a transition from the initial state u_i(x) to another
state u_h(x): P(i → h) = |c_h(t)|². This transition is caused by the extra potential V, but we assume that both initial
and final states are eigenfunctions of the original Hamiltonian H0 (notice however that the final state will be a
superposition of all possible states to which the system can transition).
We obtain

P(i → h) = |c_h(t)|² = (4|V_hi|² / (ℏ² Δω_h²)) sin²(Δω_h t / 2)
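This first-order result can be checked against direct numerical quadrature of the coefficient integral above. A sketch with illustrative values for V_hi, Δω_h and t (units where ℏ = 1; none of these numbers come from the text):

```python
import numpy as np

hbar = 1.0
V_hi = 0.05        # illustrative matrix element
dw   = 2.0         # detuning Delta omega_h = omega_h - omega_i
t    = 3.0

# Analytic first-order transition probability:
P_analytic = 4 * abs(V_hi)**2 / (hbar**2 * dw**2) * np.sin(dw * t / 2)**2

# Trapezoidal quadrature of c_h(t) = -(i/hbar) V_hi * int_0^t e^{i dw t'} dt':
tp = np.linspace(0.0, t, 20001)
f = np.exp(1j * dw * tp)
integral = np.sum((f[1:] + f[:-1]) / 2) * (tp[1] - tp[0])
c_h = -1j / hbar * V_hi * integral

print(np.isclose(abs(c_h)**2, P_analytic, rtol=1e-5))   # True
```

Repeating this for a range of detunings dw traces out the sinc-squared profile discussed next, which narrows around dw = 0 as t grows.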
The function sin(z)/z is called a sinc function (see figure 41); here it appears as sin(Δω_h t/2)/(Δω_h/2). In the limit t → ∞ (i.e. assuming we are
describing the state of the system after the new potential has had a long time to change the state of the quantum
system) this function becomes very narrow, until we can approximate it with a delta function. The exact
limit of the function gives us:
P(i → h) = (2π |V_hi|² t / ℏ²) δ(Δω_h)
We can then find the transition rate from i → h as the probability of transition per unit time, W_ih = dP(i → h)/dt:

W_ih = (2π/ℏ²) |V_hi|² δ(Δω_h)
This is the so-called Fermi's Golden Rule, describing the transition rate between states.
Obs.: This transition rate describes the transition from u_i to a single level u_h with a given energy E_h = ℏω_h. In many
cases the final state is an unbound state, which, as we saw, can take on a continuum of possible energies.
Then, instead of the point-like delta function, we consider the transition to a set of states with energies in a small
interval E → E + dE. The transition rate is then proportional to the number of states that can be found with this
energy. The number of states is given by dn = ρ(E)dE, where ρ(E) is called the density of states (we will see how to
calculate this in a later lecture). Then, Fermi's Golden Rule is more generally expressed as:

W_ih = (2π/ℏ) |V_hi|² ρ(E_h)|_{E_h = E_i}
[Note: before making the substitution δ(Δω) → ρ(E) we need to write δ(Δω_h) = ℏ δ(ℏΔω_h) = ℏ δ(E_h − E_i) →
ℏ ρ(E_h)|_{E_h = E_i}. This is why in the final formulation of the Golden Rule we only have a factor ℏ and not its square.]
MIT OpenCourseWare
http://ocw.mit.edu
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.