Answer Table For The Multiple-Choice Questions: Quantum Mechanics
b) Hermitian matrix.
c) Real matrix.
e) Some kind of matrix.
Redaction: Jeffery, 2001jan01
0 = (1 − λ)² − 1 = −2λ + λ²
SUGGESTED ANSWER:
a) It would be pedantic to enumerate the properties. The subset of all vectors (x, y, 0) is clearly a
vector space of dimension 2. The dimensionality is clear since one only needs two basis vectors
to construct any member: say (1, 0, 0) and (0, 1, 0).
b) Nyet: the sum of two members of the subset, e.g.,
where |α⟩ and |β⟩ are any vectors in the space and |γ⟩ is also in the space. Note we have not defined
what vector addition consists of. That definition goes beyond the general requirements.
ii) Vector addition is commutative:
|α⟩ + |β⟩ = |β⟩ + |α⟩ .
|α⟩ + |−α⟩ = |0⟩ .
a|α⟩ = |β⟩ .
(ab)|α⟩ = a(b|α⟩) .
x) One has
0|α⟩ = |0⟩ .
SUGGESTED ANSWER:
a) By inspection the set has all the requisite properties, and so does constitute a vector space.
The set {x^ℓ} with ℓ an integer in the range [0, n − 1] is an obvious basis set for the vector
space since the elements are all linearly independent and any polynomial of degree less than
n can be constructed by linear combination from them. From the number of basis vectors we
conclude that the dimension of the space is n.
Note the choice of basis is obvious and convenient, but not unique. For example, say we define
a₊ = x¹ + x⁰
and
a₋ = x¹ − x⁰ .
We can construct the two lowest degree basis vectors x⁰ and x¹ from a₊ and a₋:
x¹ = (a₊ + a₋)/2
and
x⁰ = (a₊ − a₋)/2 .
Therefore we can replace x0 and x1 in the basis by a+ and a− to create a new basis. Note also
that a+ and a− are in fact independent. Say we tried to set
a− = Ca+ + linear combination of other basis vectors ,
where C is some constant to be determined. The other basis vectors have no x0 or x1 in them,
and so can contribute nothing. Solving for C we find that C must equal both 1 and −1. This
is impossible: a+ and a− are thus proven to be linearly independent.
b) By inspection the set has all the requisite properties, and so does constitute a vector space.
The set {x^ℓ} with ℓ an even integer in the range [0, n − 2] if n is even and [0, n − 1] if n is odd
is an obvious basis set for the vector space since the elements are all linearly independent and
any even polynomial of degree less than n can be constructed by linear combination from them.
The dimension of the space is (n − 2)/2 + 1 = n/2 if n is even and (n − 1)/2 + 1 = (n + 1)/2 if
n is odd. The fact that there is an x⁰ vector in our convenient basis clues us in that there must
be those "+1"s in the expressions for the dimensions, if it wasn't obvious from the foregoing.
c) No. The polynomial that is the sum of two elements has leading coefficient 2a, and thus this
polynomial is not in the specified subset.
d) By inspection the set has all the requisite properties, and so does constitute a vector space.
The set {(x − g)^ℓ} with ℓ an integer in the range [1, n − 1] is a natural basis set for the vector
space since the elements are all linearly independent and any polynomial in the subset of order
less than n can be constructed by linear combination from them. Maybe a demonstration is
needed. Say P (x) is a general element in the subset, then
P(x) = Σ_{ℓ=0}^{n−1} aℓ x^ℓ = Σ_{ℓ=0}^{n−1} aℓ [(x − g) + g]^ℓ = Σ_{ℓ=0}^{n−1} bℓ (x − g)^ℓ = Σ_{ℓ=1}^{n−1} bℓ (x − g)^ℓ ,
where we have done a rearrangement of terms using the binomial theorem implicitly and
where we have recognized that b0 = 0 since by hypothesis P (x) is in the subset. Since P (x)
is a general member of the subset, {(x − g)^ℓ} is a basis for the subset. The dimension of the
space is obviously n − 1: there is no place for the non-existent (x − g)⁰ unit vector.
e) No. The polynomial that is the sum of two elements when evaluated at g equals 2h, and thus this
polynomial is not in the specified subset.
Redaction: Jeffery, 2001jan01
SUGGESTED ANSWER:
Say |ψi is a general vector of the vector space for which {|φi i} is the basis. Assume that we
have two different expansions
|ψ⟩ = Σᵢ cᵢ|φᵢ⟩ = Σᵢ dᵢ|φᵢ⟩ ,
then
0 = Σᵢ (cᵢ − dᵢ)|φᵢ⟩ .
But since the basis vectors are all independent by the hypothesis that they are basis vectors, no term
i can be completely eliminated by any combination of the other terms. The only possibility is that
ci − di = 0 for all i or ci = di for all i. Thus, our two different expansions must, in fact, be the
same. An expansion is unique.
If one would like a more concrete demonstration, assume that cⱼ − dⱼ ≠ 0 for some j. Then
we can divide the second sum above through by cⱼ − dⱼ, move |φⱼ⟩ to the other side, and multiply
through by −1 to get
|φⱼ⟩ = − Σ_{i, but i≠j} [(cᵢ − dᵢ)/(cⱼ − dⱼ)] |φᵢ⟩ .
But now we find |φj i is a linear combination of the other “basis” vectors. This cannot be if our
vectors are independent. We conclude again that ci = di for all i and the expansion is unique.
Redaction: Jeffery, 2001jan01
|α′₁⟩ = |α₁⟩ / ‖α₁‖ ,
The second step is to create a new second vector that is orthogonal to the new first vector using the old
second vector and the new first vector:
Note we have subtracted the projection of |α₂⟩ on |α′₁⟩ from |α₂⟩ and normalized.
Input the vectors into the procedure in the reverse of their nominal order: why might a marker
insist on this? Note setting kets equal to columns is a lousy notation, but you-all know what I
mean. The bras, of course, should be “equated” to the row vectors. HINT: Make sure you use the
normalized new vectors in the construction procedure.
SUGGESTED ANSWER:
a) Behold:
|α′ᵢ⟩ = ( |αᵢ⟩ − Σ_{j=1}^{i−1} |α′ⱼ⟩⟨α′ⱼ|αᵢ⟩ ) / ‖ |αᵢ⟩ − Σ_{j=1}^{i−1} |α′ⱼ⟩⟨α′ⱼ|αᵢ⟩ ‖ ,
where for i = 1 the summation upper limit is 0 and the whole summation is interpreted as just a
zero.
Just to demonstrate concretely that |α′ᵢ⟩ is indeed orthogonal to all |α′ₖ⟩ with k < i (i.e.,
all the already constructed new basis vectors), we evaluate ⟨α′ₖ|α′ᵢ⟩:
⟨α′ₖ|α′ᵢ⟩ = [ ⟨α′ₖ|αᵢ⟩ − Σ_{j=1}^{i−1} ⟨α′ₖ|α′ⱼ⟩⟨α′ⱼ|αᵢ⟩ ] / ‖ |αᵢ⟩ − Σ_{j=1}^{i−1} |α′ⱼ⟩⟨α′ⱼ|αᵢ⟩ ‖
= [ ⟨α′ₖ|αᵢ⟩ − Σ_{j=1}^{i−1} δₖⱼ⟨α′ⱼ|αᵢ⟩ ] / ‖ |αᵢ⟩ − Σ_{j=1}^{i−1} |α′ⱼ⟩⟨α′ⱼ|αᵢ⟩ ‖
= [ ⟨α′ₖ|αᵢ⟩ − ⟨α′ₖ|αᵢ⟩ ] / ‖ |αᵢ⟩ − Σ_{j=1}^{i−1} |α′ⱼ⟩⟨α′ⱼ|αᵢ⟩ ‖
= 0 ,
where we have used the fact that the new vectors for k < i are orthogonal. Pretty clearly
|α′i i is normalized: it’s explicitly so actually. So |α′i i is orthonormalized with respect to all the
previously constructed new basis vectors.
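As a numerical aside (not part of the original solution), the loop above is easy to check by machine. Below is a minimal NumPy sketch of the Gram-Schmidt procedure; the function name and tolerance are my own choices, and null results are dropped as discussed in part (e) below.

  import numpy as np

  def gram_schmidt(vectors, tol=1e-12):
      """Orthonormalize complex vectors in input order (Gram-Schmidt).
      New vectors that come out null (below tol) are dropped."""
      basis = []
      for v in vectors:
          w = np.asarray(v, dtype=complex)
          for e in basis:
              w = w - e * np.vdot(e, w)   # subtract |e><e|v>; vdot conjugates e
          norm = np.linalg.norm(w)
          if norm > tol:                  # keep only non-null new vectors
              basis.append(w / norm)
      return basis

Feeding the function n linearly independent vectors returns n orthonormal vectors; dependent inputs simply produce fewer outputs, matching the part (e) discussion.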
b) Quite obviously if the set is orthonormal, none of the set can be expanded in terms of a
combination of the others. Since they can’t, they are by definition all linearly independent.
To be absolutely concrete, assume contrafactually that one of a set of orthonormal vectors
can be expanded in the others:
|αᵢ⟩ = Σ_{j≠i} cⱼ|αⱼ⟩ .
Taking the inner product of both sides with |αᵢ⟩ gives
⟨αᵢ|αᵢ⟩ = 0
by orthogonality. But the vectors were non-null by hypothesis, and so our assumption of
dependence was wrong.
c) No. The first vector you input to the procedure yields a normalized version of itself. In general
all the other vectors aren't orthogonal to that vector: therefore their counterparts in the new
set are not in general just normalized versions of themselves. But you can start the procedure
with any of the original vectors you like. Thus you can preserve any one original vector you
wish in normalized form, but not in general the others. Thus every starting input vector can
in general lead to a different orthonormal set. I think it’s intuitively suggested that if you have
n linearly independent original vectors that up to n! orthonormalized bases can be generated.
But the proof is beyond me at this time of night. Hm: it seems obviously true though.
d) There are infinitely many choices of vector direction in a vector space of dimension n ≥ 2.
Any such vector can be used as the starting vector in a Gram-Schmidt procedure. The other
n − 1 linearly independent original vectors can be obtained from almost any old basis: the n − 1
vectors must be linearly independent of your starting vector of course.
e) Consider n vectors in an original set {|αi i} and carry out the Gram-Schmidt procedure to the
end. Along the way m of the new vectors were null; you just ignored these and the old vectors
they were meant to replace. Null vectors can’t be normalized or form part of a basis. At the
end of the procedure you have a new set of n − m non-null, orthonormal vectors {|α′i i}. Since
they are orthogonal, they are all linearly independent. Thus the set {|α′i i} is an orthonormal
basis for an n − m dimensional space.
In constructing the n − m new vectors you, in fact, used only n − m of the original vectors:
the vectors you used were the n − m ones that did not give null new vectors.
You can reconstruct the entire original set from the n − m new vectors. First, every |αi i
that gave a null vector in the procedure is clearly a linear combination of the newly constructed
basis:
|αᵢ⟩ = Σ_{j=1}^{i−1} |α′ⱼ⟩⟨α′ⱼ|αᵢ⟩ .
Second, every |αi i that gave a non-null vector is also a linear combination of the newly
constructed basis:
|αᵢ⟩ = ‖ |αᵢ⟩ − Σ_{j=1}^{i−1} |α′ⱼ⟩⟨α′ⱼ|αᵢ⟩ ‖ |α′ᵢ⟩ + Σ_{j=1}^{i−1} |α′ⱼ⟩⟨α′ⱼ|αᵢ⟩ .
Thus, the original set {|αi i} is entirely contained in the n − m space spanned by the new basis.
Since the n − m new vectors were constructed from n − m old vectors and the n − m
new vectors can be used to construct any of the n vectors in the original set, the n − m old
vectors used to construct the new n − m vectors can be used to construct m vectors of the
original set that gave nulls in the procedure. Thus there were in the original set only n − m
independent vectors and the original set spanned only an n − m dimensional space. Of course,
one doesn’t mean that the n − m old vectors used in the Gram-Schmidt procedure were “the
independent” ones and the m neglected old vectors were “the non-independent” ones. One
means some set of m vectors of the n old vectors could be constructed from the others and
eliminated beforehand from the Gram-Schmidt procedure. Which set of m old vectors gets
eliminated could be chosen in various ways in general, depending on the exact nature of the n old vectors. In the
Gram-Schmidt procedure which m old vectors get neglected depends in general on the order
one chooses to input vectors into the procedure.
Now to answer the original question. If the original set of vectors is not independent then
one will get nulls in the Gram-Schmidt procedure. If one doesn’t get null vectors, then one
constructs independent vectors out of non-independent vectors which is a logical inconsistency.
The number of non-null vectors the procedure yields is exactly the number of independent
vectors in the original set or (meaning the same thing) is the dimension of the space spanned
by the original set. The Gram-Schmidt procedure offers a systematic way to test for the
linear independence of a set of vectors. Every null new vector means one more of the original
vectors was not linearly independent.
f) From the part (e) answer, it follows that if all the original vectors were linearly independent,
then no null vectors are generated in the Gram-Schmidt procedure. Thus the procedure
generates n new vectors given n old linearly independent vectors. The n new vectors are clearly
linearly independent since they are orthogonal: see the part (b) answer. Thus if the original
set consisted of n linearly independent vectors, the new set must too.
g) I think inputting the vectors in the reverse of the nominal order gives the simplest result. I
insist the students do so, so that everyone should get the same orthonormalized set if all goes
well. Recall from the part (c) answer that the orthonormalized set does depend on the order
the original vectors are used in the procedure. Behold:

|α′₃⟩ = (0, 1, 0)ᵀ ,

|α′₂⟩ = [ (i, 3, 1)ᵀ − (0, 1, 0)ᵀ ((0, 1, 0) · (i, 3, 1)ᵀ) ] / ‖…‖
      = (1/√2) (i, 0, 1)ᵀ ,

|α′₁⟩ = [ (1 + i, 1, i)ᵀ − (0, 1, 0)ᵀ ((0, 1, 0) · (1 + i, 1, i)ᵀ) − (1/2)(i, 0, 1)ᵀ ((−i, 0, 1) · (1 + i, 1, i)ᵀ) ] / ‖…‖
      = [ (1 + i, 1, i)ᵀ − (0, 1, 0)ᵀ − (1/2)(i, 0, 1)ᵀ (−i + 1 + i) ] / ‖…‖
      = (1 + i/2, 0, i − 1/2)ᵀ / ‖…‖
      = √(2/5) (1 + i/2, 0, i − 1/2)ᵀ .
unmotivated expression and do a number of unmotivated steps to arrive at a result that you could never
have guessed from the way you were going about getting it. Well, sans too many absurd steps, let
us see if we can prove the Schwarz inequality
|⟨α|β⟩|² ≤ ⟨α|α⟩⟨β|β⟩
for general vectors |α⟩ and |β⟩. Note the equality only holds in two cases. First, when |β⟩ = a|α⟩, where
a is some complex constant. Second, when either or both of |α⟩ and |β⟩ are null vectors: in this case,
one has zero equals zero.
NOTE: A few facts to remember about general vectors and inner products. Say |α⟩ and |β⟩ are general
vectors. By the definition of the inner product, we have that ⟨α|β⟩ = ⟨β|α⟩*. This implies that ⟨α|α⟩ is
pure real. If c is a general complex number, then the inner product of |α⟩ and c|β⟩ is ⟨α|c|β⟩ = c⟨α|β⟩.
Next we note that another inner-product property is that ⟨α|α⟩ ≥ 0 and the equality only holds if
|α⟩ is the null vector. The norm of |α⟩ is ‖α‖ = √⟨α|α⟩ and |α⟩ can be normalized if it is not null: i.e.,
for |α⟩ not null, the normalized version is |α̂⟩ = |α⟩/‖α‖.
a) In doing the proof of the Schwarz inequality, it is convenient to have the result that the bra
corresponding to c|β⟩ (where |β⟩ is a general vector and c is a general complex number) is ⟨β|c*.
Prove this correspondence. HINT: Consider general vector |α⟩ and the inner product
⟨α|c|β⟩ .
b) For |α⟩ not null, divide the Schwarz inequality through by ⟨α|α⟩. The vector component of |β⟩ along |α̂⟩ is
|β_∥⟩ = |α̂⟩⟨α̂|β⟩ .
Evaluate ⟨β_∥|β_∥⟩. Now what is the Schwarz inequality telling us?
c) The vector component of |β⟩ that is orthogonal to |α̂⟩ (and therefore to |β_∥⟩) is
|β_⊥⟩ = |β⟩ − |β_∥⟩ = |β⟩ − |α̂⟩⟨α̂|β⟩ .
Prove this and then prove the Schwarz inequality itself (for |α⟩ not null) by evaluating ⟨β|β⟩ expanded
in components. What if |α⟩ is a null vector?
SUGGESTED ANSWER:
a) Behold:
⟨α|c|β⟩ = c⟨α|β⟩ = c⟨β|α⟩* = (c*⟨β|α⟩)* = ⟨β|c*|α⟩* .
Since |α⟩, |β⟩, and c are general, we find that the bra corresponding to general c|β⟩ is ⟨β|c*.
b) Well, doing the suggested division we get
|⟨α̂|β⟩|² ≤ ⟨β|β⟩ .
Now we find
⟨β_∥|β_∥⟩ = ⟨β|α̂⟩⟨α̂|α̂⟩⟨α̂|β⟩ = ⟨β|α̂⟩⟨α̂|β⟩ = |⟨α̂|β⟩|² ,
where we have used the part (a) result and the fact that ⟨α̂|α̂⟩ = 1. We see that the left-hand
side of the inequality is the magnitude squared of |β_∥⟩ and the right-hand side is the
magnitude squared of |β⟩. Now it is clear. The Schwarz inequality just means that the magnitude
of the component of a vector along some axis is less than or equal to the magnitude of the
vector itself.
The equality holds if |β⟩ is aligned with the axis defined by |α̂⟩. In this case,
|β⟩ = a|α̂⟩ ,
and so |β_∥⟩ = |α̂⟩⟨α̂|β⟩ = a|α̂⟩ = |β⟩. This is just what the question preamble preambled.
There are contexts in which the Schwarz inequality is used for some purpose not clearly
connected to the meaning found above. Most importantly in quantum mechanics, the Schwarz
inequality is used in its ordinary form in proving the generalized uncertainty principle (Gr-110).
c) Behold:
⟨α̂|β_⊥⟩ = ⟨α̂|β⟩ − ⟨α̂|β_∥⟩ = ⟨α̂|β⟩ − ⟨α̂|α̂⟩⟨α̂|β⟩ = ⟨α̂|β⟩ − ⟨α̂|β⟩ = 0 ,
where we have used the normalization of |α̂⟩. Since |β_∥⟩ = |α̂⟩⟨α̂|β⟩ (i.e., |β_∥⟩ is parallel to
|α̂⟩), we have that |β_⊥⟩ is orthogonal to |β_∥⟩ too.
Now
|β⟩ = |β_∥⟩ + |β_⊥⟩ ,
and so
⟨β|β⟩ = ⟨β_∥|β_∥⟩ + ⟨β_⊥|β_⊥⟩ ,
where we have used the orthogonality of |β_⊥⟩ and |β_∥⟩. By an inner-product property, we know
that
⟨β_∥|β_∥⟩ ≥ 0 and ⟨β_⊥|β_⊥⟩ ≥ 0 .
Thus, we have
⟨β_∥|β_∥⟩ = |⟨α̂|β⟩|² ≤ ⟨β|β⟩ .
The last expression is equivalent to the Schwarz inequality for |α⟩ not null, as we saw in the
part (b) answer. So we have proven the Schwarz inequality for the case that |α⟩ is not a null
vector.
If |α⟩ is a null vector, then both sides of the Schwarz inequality are zero and the inequality
holds in its equality case. This is just what the question preamble preambled.
I have to say that WA-513–514’s proof of the Schwarz inequality seems unnecessarily
obscure since it hides the meaning of Schwarz inequality. It also deters people from finding the
above proof which seems the straightforward one to me and is based on the meaning of the
Schwarz inequality.
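The decomposition proof also lends itself to a quick numerical sanity check. A small NumPy sketch (random vectors, my own construction, not part of the original solution):

  import numpy as np

  rng = np.random.default_rng(0)
  alpha = rng.normal(size=3) + 1j * rng.normal(size=3)
  beta = rng.normal(size=3) + 1j * rng.normal(size=3)

  ahat = alpha / np.linalg.norm(alpha)
  beta_par = ahat * np.vdot(ahat, beta)    # |beta_par> = |ahat><ahat|beta>
  beta_perp = beta - beta_par

  assert abs(np.vdot(ahat, beta_perp)) < 1e-12          # orthogonality
  lhs = abs(np.vdot(alpha, beta)) ** 2                  # |<alpha|beta>|^2
  rhs = np.vdot(alpha, alpha).real * np.vdot(beta, beta).real
  assert lhs <= rhs + 1e-12                             # Schwarz inequality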
Redaction: Jeffery, 2001jan01
cos θ_gen = |⟨α|β⟩| / √(⟨α|α⟩⟨β|β⟩) ,
SUGGESTED ANSWER:
a) Well, no, the definition is not completely consistent. Say a⃗ and b⃗ are ordinary vectors with
magnitudes a and b, respectively. If you put them in the generalized angle formula, you get
cos θ_gen = |a⃗ · b⃗| / (ab) .
But the ordinary vector dot product formula is
cos θ = a⃗ · b⃗ / (ab) .
Thus
cos θ_gen = |cos θ| = { cos θ , for cos θ ≥ 0; −cos θ = cos(π − θ) , for cos θ ≤ 0. }
Thus for θ ∈ [0, π/2], we get θgen = θ and for θ ∈ [π/2, π], θgen = π − θ.
b) Well,
⟨α|α⟩ = 4 , ⟨β|β⟩ = 25 ,
and
⟨α|β⟩ = (1 − i, 1, −i) · (4 − i, 0, 2 − 2i)ᵀ = 3 − 5i + 0 − 2i − 2 = 1 − 7i .
Thus cos θ_gen = |1 − 7i| / √(4 × 25) = √50/10 = 1/√2, and so θ_gen = π/4.
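For what it is worth, the same numbers fall out of a short NumPy check (the kets read off as column vectors from the bra shown above):

  import numpy as np

  alpha = np.array([1 + 1j, 1, 1j])
  beta = np.array([4 - 1j, 0, 2 - 2j])

  ip = np.vdot(alpha, beta)                 # <alpha|beta> = 1 - 7i
  cos_gen = abs(ip) / np.sqrt(np.vdot(alpha, alpha).real * np.vdot(beta, beta).real)
  print(ip, cos_gen)                        # (1-7j) 0.7071..., i.e., 45 degrees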
SUGGESTED ANSWER:
Behold:
‖(|α⟩ + |β⟩)‖² = ⟨α|α⟩ + ⟨α|β⟩ + ⟨β|α⟩ + ⟨β|β⟩
= ⟨α|α⟩ + ⟨β|β⟩ + 2Re(⟨α|β⟩)
≤ ⟨α|α⟩ + ⟨β|β⟩ + 2|⟨α|β⟩|
≤ ⟨α|α⟩ + ⟨β|β⟩ + 2‖α‖‖β‖ = (‖α‖ + ‖β‖)² ,
where we have used the Schwarz inequality to get the 4th line. The triangle inequality follows at
once:
‖(|α⟩ + |β⟩)‖ ≤ ‖α‖ + ‖β‖ .
Interpreting the triangle inequality in terms of ordinary vectors, it means that the sum of the magnitudes
of the two vectors is always greater than or equal to the magnitude of the vector sum. The equality
only holds if the vectors are parallel and not antiparallel. The Schwarz inequality interpreted in
terms of ordinary vectors is that the magnitude of a component of a vector along an axis is always
less than or equal to the magnitude of the vector. The equality only holds when the vector is aligned
with the axis. One can see the situation by dividing the Schwarz inequality by one vector magnitude
in order to create a unit vector for an axis: e.g.,
|⟨α̂|β⟩|² ≤ ⟨β|β⟩ .
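A brute-force numerical check of the triangle inequality over many random complex vectors (my own sketch):

  import numpy as np

  rng = np.random.default_rng(1)
  for _ in range(1000):
      a = rng.normal(size=4) + 1j * rng.normal(size=4)
      b = rng.normal(size=4) + 1j * rng.normal(size=4)
      assert np.linalg.norm(a + b) <= np.linalg.norm(a) + np.linalg.norm(b) + 1e-12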
SUGGESTED ANSWER:
a) Behold:
[(AB)ᵀ]ᵢⱼ = (AB)ⱼᵢ = AⱼₖBₖᵢ = BₖᵢAⱼₖ = (Bᵀ)ᵢₖ(Aᵀ)ₖⱼ = (BᵀAᵀ)ᵢⱼ ,
where we have used the Einstein rule of summation on repeated indices. Since the matrix
element is general this completes the proof.
b) Recall that to complex conjugate a matrix is to change its elements to their complex conjugates.
Note that
[(AB)ᵢⱼ]* = (AᵢₖBₖⱼ)* = Aᵢₖ*Bₖⱼ* ,
and so (AB)* = A*B*. Thus, complex conjugating the part (a) identity gives
[(AB)ᵀ]* = (Bᵀ)*(Aᵀ)*
or
(AB)† = B†A† ,
which completes the proof. Note that the effect of the complex conjugation and transpose
operations is independent of order.
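Both identities check out numerically on random complex matrices; a minimal sketch:

  import numpy as np

  rng = np.random.default_rng(2)
  A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
  B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

  dag = lambda M: M.conj().T                        # Hermitian conjugate
  assert np.allclose((A @ B).T, B.T @ A.T)          # part (a)
  assert np.allclose(dag(A @ B), dag(B) @ dag(A))   # part (b)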
c) Note
(AB)⁻¹(AB) = 1 .
Multiplying both sides from the right by B⁻¹ and then by A⁻¹ gives
(AB)⁻¹ = B⁻¹A⁻¹ .
e) Behold:
AB = BA = B † A† = (AB)† ,
and so AB is Hermitian given that A and B are commuting Hermitian operators.
Is the converse of the above result true? Say we are given that AB is Hermitian. Then
we know
AB = (AB)† = B † A† ,
but I don’t think that we can make any further very general statements about A and B. The
converse probably doesn’t hold, but to be sure one would have to find a counterexample. Well
how about the trivial counterexample where B = A−1 . We know that AA−1 = 1, where I,
the unit matrix, is trivially Hermitian. But any nonsingular matrix A has an inverse, and they
arn’t all Hermitian. For example
A = (0, 2; 1, 0)
is not Hermitian and has an inverse:
(0, 2; 1, 0)(0, 1; 1/2, 0) = (1, 0; 0, 1) .
Their sum
(1, 1; 1, 1)
is its own Hermitian conjugate, but this Hermitian conjugate is not the inverse:
(1, 1; 1, 1)(1, 1; 1, 1) = (2, 2; 2, 2) .
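The counterexample is quick to verify numerically (a sketch):

  import numpy as np

  A = np.array([[0, 2], [1, 0]])
  Ainv = np.array([[0, 1], [0.5, 0]])

  assert np.allclose(A @ Ainv, np.eye(2))   # A A^{-1} = 1, which is trivially Hermitian
  assert not np.allclose(A, A.conj().T)     # yet A itself is not Hermitian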
14. There are 4 simple operations that can be done to a matrix: inverting (−1), complex conjugating (∗),
transposing (T), and Hermitian conjugating (†). Prove that all these operations mutually commute. Do
this systematically: there are
C(4, 2) = 4!/[2!(4 − 2)!] = 6
combinations of 2 operations. We assume the matrices have inverses for the proofs involving them.
SUGGESTED ANSWER: We will do the proofs in nearly wordless sequences for brevity and
clarity.
i) For −1 and ∗:
1 = AA⁻¹ ,
1 = (AA⁻¹)* ,
1 = A*(A⁻¹)* ,
(A*)⁻¹ = (A⁻¹)* .
ii) For −1 and T:
1 = AA⁻¹ ,
1 = (AA⁻¹)ᵀ ,
1 = (A⁻¹)ᵀAᵀ ,
(Aᵀ)⁻¹ = (A⁻¹)ᵀ .
iii) For −1 and †:
1 = AA⁻¹ ,
1 = (AA⁻¹)† ,
1 = (A⁻¹)†A† ,
(A†)⁻¹ = (A⁻¹)† .
Note we haven't used the commuting property of
transposing and complex conjugating in this proof, but we have supposed that Hermitian
conjugation applies transposition first and then complex conjugation. Since we prove
transposition and complex conjugation commute in part (iv) just below, there is no concern.
iv) For ∗ and T:
(A*)ᵀ = (Aᵀ)*
by inspection. But if you really need a tedious proof, then behold:
[(Aᵀ)*]ᵢⱼ = [(Aᵀ)ᵢⱼ]* = (Aⱼᵢ)* = (A*)ⱼᵢ = [(A*)ᵀ]ᵢⱼ .
Note
(A*)ᵀ = (Aᵀ)* = A† .
v) For ∗ and †:
(A*)† = [(A*)ᵀ]* = (A†)* ,
where we have used the fact that transposing and complex conjugating commute, and thus
that Hermitian conjugation does not care which order they are applied in.
vi) For T and †:
(Aᵀ)† = [(Aᵀ)*]ᵀ = (A†)ᵀ ,
where we have used the fact that transposing and complex conjugating commute, and thus
that Hermitian conjugation does not care which order they are applied in.
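All six commutation claims can be spot-checked on a random invertible complex matrix; a minimal sketch:

  import numpy as np

  rng = np.random.default_rng(3)
  M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

  inv, conj, tr = np.linalg.inv, np.conj, np.transpose
  dag = lambda X: conj(tr(X))
  pairs = [(inv, conj), (inv, tr), (inv, dag), (conj, tr), (conj, dag), (tr, dag)]
  for f, g in pairs:                        # the 6 pairs counted in the question
      assert np.allclose(f(g(M)), g(f(M)))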
Redaction: Jeffery, 2001jan01
exists: i.e., is convergent to a finite value. In other words, that f(x) and g(x) are square-integrable is
sufficient for the inner product's existence.
a) Prove the statement for the case where f (x) and g(x) are real functions. HINT: In doing this it
helps to define a function
h(x) = { f(x) where |f(x)| ≥ |g(x)| (which we call the f region);
         g(x) where |f(x)| < |g(x)| (which we call the g region), }
b) Now expand
⟨f|g⟩ = ∫_{−∞}^{∞} f*g dx
in terms of the new real and imaginary parts and reduce the problem to the part (a) problem.
c) Now for the easy part. Prove the converse of the statement is false. HINT: Find some trivial
counterexample.
d) Now another easy part. Say you have a vector space of functions {fi } with inner product defined
by
∫_{−∞}^{∞} fⱼ*fₖ dx .
Prove the following two statements are equivalent: 1) the inner product property holds; 2) the
functions are square-integrable.
SUGGESTED ANSWER:
a) Define
h(x) = { f(x) where |f(x)| ≥ |g(x)| (which we call the f region);
         g(x) where |f(x)| < |g(x)| (which we call the g region). }
For the first case, we have that
h² = f² = |f|² ≥ |f||g| ,
and for the second that
h² = g² = |g|² ≥ |f||g|
likewise. Thus in all cases
h² ≥ |f||g| .
Clearly now,
∫_{−∞}^{∞} h² dx = ∫_{f region} f² dx + ∫_{g region} g² dx ≤ ∫_{−∞}^{∞} f² dx + ∫_{−∞}^{∞} g² dx ,
so ∫h² dx exists. Since h² ≥ |f||g| ≥ ±fg, it follows that
∫_{−∞}^{∞} h² dx ≥ ∫_{−∞}^{∞} fg dx ≥ −∫_{−∞}^{∞} h² dx ,
and so the inner product integral is convergent.
b) Now write f = f_Re + i f_Im and g = g_Re + i g_Im. Then
∫_{−∞}^{∞} f*f dx = ∫_{−∞}^{∞} f_Re² dx + ∫_{−∞}^{∞} f_Im² dx
and
∫_{−∞}^{∞} g*g dx = ∫_{−∞}^{∞} g_Re² dx + ∫_{−∞}^{∞} g_Im² dx ,
so the real and imaginary parts are all square-integrable. Next,
∫_{−∞}^{∞} f*g dx = ∫_{−∞}^{∞} (f_Re g_Re + f_Im g_Im) dx + i ∫_{−∞}^{∞} (f_Re g_Im − f_Im g_Re) dx .
All the inner product integrals on the right-hand side of this equation just involve pure real
functions that are all square-integrable, and so they all exist by the part (a) answer. Thus the
inner product
∫_{−∞}^{∞} f*g dx
exists: QED.
c) Let f(x) = x and g(x) = 0. Clearly,
∫_{−∞}^{∞} f*g dx = 0
exists, but
∫_{−∞}^{∞} f*f dx = 2∫₀^{∞} x² dx
does not. This counterexample proves the converse is false too or even also.
Still another goody is when f(x) = 1/x and g(x) = x e^{−x²}. Clearly, after using a table of
integrals (Hod-313),
∫_{−∞}^{∞} f*g dx = ∫_{−∞}^{∞} e^{−x²} dx = √π
exists, but
∫_{−∞}^{∞} f*f dx = ∫_{−∞}^{∞} x⁻² dx
does not.
d) Say we have the vector space of functions {fi (x)} for which statement 1 holds: i.e., the inner
product property holds: i.e.,
∫_{−∞}^{∞} fⱼ*fₖ dx
exists for all functions in the vector space. The inner product also exists for j = k, and so the
functions are square-integrable: statement 2 follows from statement 1. If statement 2 holds
(i.e., the functions are all square-integrable), then it follows from the part (b) answer proof
that the inner product property holds: viz., statement 1 follows from statement 2. Thus the
two statements are equivalent.
⟨g|Q†|f⟩* = ⟨f|Q|g⟩
is the defining relation for the Hermitian conjugate Q† of operator Q. You will have to write
the matrix element ⟨f|Q|g⟩ in the position representation and use integration by parts to find the
conditions.
SUGGESTED ANSWER:
a) Behold:
Qf = (−d²/dx² + x²)f = −(d/dx)(−xf) + x²f = −x²f + f + x²f = f .
dx dx
Thus f (x) is an eigenfunction and the eigenvalue is 1. By the by, Q is a dimensionless version
of the simple harmonic oscillator Hamiltonian (see, e.g., Gr-37).
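The worked derivative step (df/dx = −xf) pins the eigenfunction down as f(x) = e^{−x²/2} up to normalization; assuming that, a one-line SymPy check:

  import sympy as sp

  x = sp.symbols('x', real=True)
  f = sp.exp(-x**2 / 2)                     # assumed eigenfunction
  Qf = -sp.diff(f, x, 2) + x**2 * f         # Q = -d^2/dx^2 + x^2
  assert sp.simplify(Qf - f) == 0           # eigenvalue 1, as claimed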
b) Behold:
⟨g|Q†|f⟩* = ⟨f|Q|g⟩
= ∫ₐᵇ f*Qg dx
= ∫ₐᵇ ( −f* d²g/dx² + f* x² g ) dx
= [ −f* dg/dx + g df*/dx ]ₐᵇ + ∫ₐᵇ ( −g d²f*/dx² + g x² f* ) dx
= [ −f* dg/dx + g df*/dx ]ₐᵇ + ( ∫ₐᵇ g*Qf dx )*
= [ −f* dg/dx + g df*/dx ]ₐᵇ + ⟨g|Q|f⟩* ,
where we require f and g to be general square-integrable functions over the range [a, b], we have
used the general definition of the Hermitian conjugate, and we have used integration by parts
twice. Clearly Q is Hermitian if the boundary condition term vanishes since then Q† = Q.
For instance, if the boundaries were at minus and plus infinity, then the boundary condition
term would have to vanish since the functions are square-integrable and so must be zero at
minus and plus infinity. Or if periodic conditions were imposed on the functions, the boundary
condition term would have to vanish again.
Redaction: Jeffery, 2001jan01
a) Show explicitly that any linear combination of two functions in the Hilbert space L2 (a, b) is also in
L2 (a, b). (By explicitly, I mean don’t just refer to the definition of a vector space which, of course
requires the sum of any two vectors to be a vector.)
b) For what values of real number s is f(x) = |x|^s in L²(−a, a)?
c) Show that f (x) = e−|x| is in L2 = L2 (−∞, ∞). Find the wavenumber space representation of f (x):
recall the wavenumber “orthonormal” basis states in the position representation are
⟨x|k⟩ = e^{ikx}/√(2π) .
SUGGESTED ANSWER:
a) Given f and g in L2 (a, b),
⟨f + g|f + g⟩ = ∫ₐᵇ (f + g)*(f + g) dx
= ∫ₐᵇ |f|² dx + ∫ₐᵇ |g|² dx + ∫ₐᵇ f*g dx + ∫ₐᵇ g*f dx
= ⟨f|f⟩ + ⟨g|g⟩ + ⟨f|g⟩ + ⟨g|f⟩ .
The inner products hf |f i and hg|gi exist by hypothesis. And their existence by theorem (see
Gr-95–96) verifies that ⟨f|g⟩ and ⟨g|f⟩ exist. Thus ⟨f + g|f + g⟩ exists and f + g is in the
Hilbert space.
b) Behold:
⟨f|f⟩ = 2∫₀ᵃ x^{2s} dx = 2 × { [x^{2s+1}/(2s + 1)]₀ᵃ for s ≠ −1/2; [ln(x)]₀ᵃ for s = −1/2. }
Clearly the integral only exists for s > −1/2. Its value when it exists is 2a^{2s+1}/(2s + 1).
c) Behold:
⟨f|f⟩ = 2∫₀^{∞} e^{−2x} dx = 1 .
18. Some general operator and vector identities should be proven. Recall the definition of the Hermitian
conjugate of general operator Q is given by
⟨α|Q|β⟩ = ⟨β|Q†|α⟩* .
c) Prove that
(AB)† = B†A† .
The result is, of course, consistent with matrix representations of these operators. But there
are representations in which the operators are not matrices: e.g., the momentum operator in the
position representation is the differentiating operator
p = (ħ/i) ∂/∂x .
Our proof holds for such operators too since we’ve done the proof in the general operator-vector
formalism.
d) Generalize the proof in part (c) for an operator product of any number.
e) Prove that (A + B)† = A† + B † .
f) Prove that c[A, B] is a Hermitian operator for Hermitian A and B only when c is pure imaginary
constant.
SUGGESTED ANSWER:
a) Behold:
⟨α|Q|β⟩ = ⟨β|Q†|α⟩*
by the definition of the Hermitian conjugate. And that completes the proof.
b) Let |α⟩, |β⟩, and c be general. Just using the vector rules, we have
⟨α|c†|β⟩ = ⟨β|c|α⟩* = c*⟨β|α⟩* = c*⟨α|β⟩ = ⟨α|c*|β⟩ .
Thus c† = c*.
c) For general |α⟩ and |β⟩,
⟨α|(AB)†|β⟩ = ⟨β|AB|α⟩* = ⟨β|A|γ⟩* = ⟨γ|A†|β⟩ = ⟨α|B†A†|β⟩ ,
where |γ⟩ = B|α⟩, and where we have used the definition of the Hermitian conjugate operator and the
fact that the bra corresponding to Q|γ⟩ for any operator Q and ket |γ⟩ is ⟨γ|Q†. Since |α⟩ and |β⟩ are
general, it follows that
(AB)† = B†A† .
d) Behold:
(ABCD . . .)† = (BCD . . .)† A† = (CD . . .)† B † A† = . . . D† C † B † A† .
e) Behold:
⟨α|(A + B)†|β⟩ = ⟨β|(A + B)|α⟩* = ⟨β|A|α⟩* + ⟨β|B|α⟩* = ⟨α|A†|β⟩ + ⟨α|B†|β⟩ ,
and given that |αi and |βi are general, it follows that
(A + B)† = A† + B † .
where |α⟩, |β⟩, and |γ⟩ are general vectors of the vector space and b and c are general complex scalars.
There are some immediate corollaries of the properties. First, if ⟨α|β⟩ is pure real, then
⟨β|α⟩ = ⟨α|β⟩ .
Second, if ⟨α|β⟩ is pure imaginary, then
⟨β|α⟩ = −⟨α|β⟩ .
Third, if
|δ⟩ = b|β⟩ + c|γ⟩ ,
then
⟨δ|α⟩* = ⟨α|δ⟩ = b⟨α|β⟩ + c⟨α|γ⟩ ,
which implies
⟨δ|α⟩ = b*⟨β|α⟩ + c*⟨γ|α⟩ .
This last result makes
(⟨β|b* + ⟨γ|c*)|α⟩ = b*⟨β|α⟩ + c*⟨γ|α⟩
a meaningful expression. The 3rd rule for an inner product vector space and the last corollary together mean
that the distribution of inner product multiplication over addition happens in the normal way one is
used to.
Dirac had the happy idea of defining dual space vectors with the notation hα| for the dual vector
of |αi: hα| being called the bra vector or bra corresponding to |αi, the ket vector or ket: “bra” and
“ket” coming from “bracket.” Mathematically, the bra hα| is a linear function of the vectors. It has the
property of acting on a general vector |βi and yielding a complex scalar: the scalar being exactly the
inner product hα|βi.
One immediate consequence of the bra definition can be drawn. Let |α⟩, |β⟩, and a be general and
let
|α′⟩ = a|α⟩ .
Then
⟨α′|β⟩ = ⟨β|α′⟩* = a*⟨β|α⟩* = a*⟨α|β⟩ ,
which implies that the bra corresponding to |α′⟩ is given by
⟨α′| = a*⟨α| .
The use of bra vectors is perhaps unnecessary, but they do allow some operations and properties
of inner product vector spaces to be written compactly and intelligibly. Let’s consider a few nice uses.
a) The projection operator or projector on to unit vector |ei is defined by
Pop = |eihe| .
This operator has the property of changing a vector into a new vector that is |ei times a scalar. It
is perfectly reasonable to call this new vector the component of the original vector in the direction
of |ei: this definition of component agrees with our 3-dimensional Euclidean definition of a vector
component, and so is a sensible generalization of that the 3-dimensional Euclidean definition. This
generalized component would also be the contribution of a basis of which |ei is a member to the
expansion of the original vector: again the usage of the word component is entirely reasonable. In
symbols,
P_op|α⟩ = |e⟩⟨e|α⟩ = a|e⟩ ,
where a = ⟨e|α⟩.
Show that P_op² = P_op, and then that P_opⁿ = P_op, where n is any integer greater than or equal
to 1. HINTS: Write out the operators explicitly and remember |e⟩ is a unit vector.
b) Say we have
P_op|α⟩ = a|α⟩ ,
where P_op = |e⟩⟨e| is the projection operator on unit vector |e⟩ and |α⟩ is an unknown non-null vector.
Solve for the TWO solutions for a. Then solve for the |αi vectors corresponding to these solutions.
HINTS: Act on both sides of the equation with he| to find an equation for one a value. This
equation won’t yield the 2nd a value—and that’s the hint for finding the 2nd a value. Substitute
the a values back into the original equation to determine the corresponding |αi vectors. Note one
a value has a vast degeneracy in general: i.e., many vectors satisfy the original equation with that
a value.
c) The Hermitian conjugate of an operator Q is written Q† . The definition of Q† is given by the
expression
⟨β|Q†|α⟩ = ⟨α|Q|β⟩* ,
where |α⟩ and |β⟩ are general vectors. Prove that the bra corresponding to ket Q|β⟩ must be ⟨β|Q† for
general |αi. HINTS: Let |β ′ i = Q|βi and substitute this for Q|βi in the defining equation of the
Hermitian conjugate operator. Note operators are not matrices (although they can be represented
as matrices in particular bases), and so you are not free to use purely matrix concepts: in particular
the concepts of transpose and complex conjugation of operators are not generally meaningful.
d) Consider the operator
Q = |φ⟩⟨ψ| ,
where |φ⟩ and |ψ⟩ are general vectors. Solve for Q†. Under what condition is
Q† = Q ?
When an operator equals its Hermitian conjugate, the operator is called Hermitian just as in the
case of matrices.
e) Say {|eᵢ⟩} is an orthonormal basis. Show that
|eᵢ⟩⟨eᵢ| = 1 ,
where we have used Einstein summation and 1 is the unit operator. HINT: Expand a general
vector |αi in the basis.
SUGGESTED ANSWER:
a) Behold:
P_op² = |e⟩⟨e|e⟩⟨e| = |e⟩⟨e| = P_op ,
where we have used the normalization of |e⟩. Now
P_opⁿ = |e⟩(⟨e|e⟩)^{n−1}⟨e| = |e⟩⟨e| = P_op ,
where we have used the normalization of |e⟩. I would actually accept that P_opⁿ = P_op is true
by inspection given that P_op² = P_op is true.
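Numerically, idempotency is immediate for any unit vector (sketch):

  import numpy as np

  rng = np.random.default_rng(4)
  e = rng.normal(size=3) + 1j * rng.normal(size=3)
  e /= np.linalg.norm(e)                    # |e> must be a unit vector

  P = np.outer(e, e.conj())                 # P_op = |e><e| as a matrix
  assert np.allclose(P @ P, P)              # P^2 = P
  assert np.allclose(np.linalg.matrix_power(P, 7), P)   # P^n = P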
b) Well,
⟨e|P_op|α⟩ = a⟨e|α⟩
leads to
⟨e|α⟩ = a⟨e|α⟩ .
If ⟨e|α⟩ ≠ 0, then a = 1. If ⟨e|α⟩ = 0, then a = 0. For a = 1, the original equation requires
|e⟩⟨e|α⟩ = |α⟩
or
|α⟩ = |e⟩⟨e|α⟩ .
Thus |α⟩ points in the |e⟩ direction. If a = 0, then
⟨e|α⟩ = 0 .
The a = 0 eigenvectors are those orthogonal to |e⟩: e.g., normalized,
(|β⟩ − |e⟩⟨e|β⟩) / ‖(|β⟩ − |e⟩⟨e|β⟩)‖ ,
where |β⟩ is a general non-null vector (not parallel to |e⟩). This set of eigenvectors is actually complete. General
vector |β⟩ can be expanded so:
|β⟩ = |e⟩⟨e|β⟩ + (|β⟩ − |e⟩⟨e|β⟩) .
Act on this vector with ⟨e| and one gets ⟨e|β⟩. Act on it with any ⟨e_⊥| orthogonal to |e⟩ and
one gets ⟨e_⊥|β⟩.
c) Well,
⟨β|Q†|α⟩ = ⟨α|β′⟩* = ⟨β′|α⟩ .
Since |α⟩ is general, we must interpret
⟨β′| = ⟨β|Q† .
d) Well
⟨β|Q†|α⟩ = ⟨α|Q|β⟩* = ⟨α|φ⟩*⟨ψ|β⟩* = ⟨β|ψ⟩⟨φ|α⟩ ,
and so
Q† = |ψ⟩⟨φ| .
If |ψ⟩ = |φ⟩, then Q† = |φ⟩⟨φ| = Q, and Q would be a Hermitian operator.
e) Since the basis is a basis,
|α⟩ = aᵢ|eᵢ⟩ ,
where the coefficients are given by
aᵢ = ⟨eᵢ|α⟩ .
Thus
|α⟩ = |eᵢ⟩⟨eᵢ|α⟩ ,
and so we must identify
|eᵢ⟩⟨eᵢ| = 1 .
This may seem trivial, but it is a great notational convenience. One can insert the unit operator
1 anywhere one likes in expressions, and so one can insert |eᵢ⟩⟨eᵢ| in those places too. This
makes expanding in various bases notationally very simple and obvious. The operator |eᵢ⟩⟨eᵢ|
can be considered the projection operator onto a basis.
Redaction: Jeffery, 2001jan01
(AB)† = B†A† .
Now the Hermitian conjugates of scalars λ and ⟨χ|ψ⟩ are λ* and ⟨ψ|χ⟩, respectively. For proof,
consider scalar c and general vectors |α⟩ and |β⟩:
⟨α|c†|β⟩ = ⟨β|c|α⟩* = c*⟨α|β⟩ = ⟨α|c*|β⟩ ,
and so c† = c*. The Hermitian conjugate of operator |φ⟩⟨ψ| is |ψ⟩⟨φ|. For a proof, consider the
general vectors |α⟩ and |β⟩:
⟨α|(|φ⟩⟨ψ|)†|β⟩ = ⟨β|φ⟩*⟨ψ|α⟩* = ⟨α|ψ⟩⟨φ|β⟩ ,
and that completes the proof. Now put all the above results together and answer (e) follows.
Remember scalars can be commuted freely.
Wrong Answers:
a) This is just the same thing all over again.
Redaction: Jeffery, 2001jan01
a) Given that Q is Hermitian, i.e.,
Q† = Q ,
prove that the expectation value
⟨γ|Q|γ⟩
is pure real.
b) If the expectation value
⟨γ|Q|γ⟩
is always pure real for general |γ⟩, prove that Q is Hermitian. The statement to be proven is the converse
of the statement in part (a). HINT: First show that
⟨γ|Q†|γ⟩ = ⟨γ|Q|γ⟩ .
Then let |α⟩ and |β⟩ be general vectors and construct a vector |ξ⟩ = |α⟩ + c|β⟩, where c is a general
complex scalar. Note that the bra corresponding to c|β⟩ is c*⟨β|. Expand both sides of
⟨ξ|Q|ξ⟩ = ⟨ξ|Q†|ξ⟩
and then keep simplifying both sides making use of the first thing proven and the definition of a Hermitian
conjugate. It may be useful to note that
(A + B)† = A† + B† ,
where A and B are general operators. You should be able to construct an expression where choosing
c = 1 and then c = i requires Q = Q†.
c) What simple statement follows from the proofs in parts (a) and (b)?
SUGGESTED ANSWER:
a) Well,
⟨γ|Q|γ⟩ = ⟨γ|Q†|γ⟩*
by the general definition of Hermitian conjugation. But since Q† = Q, we have
⟨γ|Q|γ⟩ = ⟨γ|Q|γ⟩* .
Since the expectation value is equal to its own complex conjugate, it is pure real: i.e., ⟨γ|Q|γ⟩ is
pure real.
b) First note for general |γ⟩ that
⟨γ|Q†|γ⟩ = ⟨γ|Q|γ⟩* = ⟨γ|Q|γ⟩ ,
where we have used the definition of the Hermitian conjugate and then the fact that by
assumption ⟨γ|Q|γ⟩ is pure real, which implies that ⟨γ|Q†|γ⟩ is pure real too.
For general vectors |α⟩ and |β⟩, we define
|ξ⟩ = |α⟩ + c|β⟩ ,
where c is a general complex number. The point of this definition is to exploit our statement
above for the general vector |γ⟩ (which will hold for |α⟩, |β⟩, and |ξ⟩) and then the generality
of |α⟩ and |β⟩ to get a statement like
⟨α|Q|β⟩ = ⟨α|Q†|β⟩ ,
which verifies Q† = Q, but we haven't got this statement yet. Why do we need the general
complex constant c? Well, if you went through the steps without it, you would reach a point
where you needed it, and then you'd go back and put it in.
Starting from
⟨ξ|Q|ξ⟩ = ⟨ξ|Q†|ξ⟩ ,
we proceed as follows:
⟨α|Q|α⟩ + c⟨α|Q|β⟩ + c*⟨β|Q|α⟩ + |c|²⟨β|Q|β⟩ = ⟨α|Q†|α⟩ + c⟨α|Q†|β⟩ + c*⟨β|Q†|α⟩ + |c|²⟨β|Q†|β⟩ .
The diagonal terms cancel in pairs by the first thing proven, leaving
c⟨α|(Q − Q†)|β⟩ + c*⟨β|(Q − Q†)|α⟩ = 0 . (1)
Equation (1) must be true for all choices of c. If c = 1, then the real parts of equation (1)
cancel identically, but the imaginary parts give
Im[⟨α|(Q − Q†)|β⟩] = 0 .
If c = i, then the imaginary parts cancel identically (note i² − (−i)(−i) = i² − i² = 0), and the real parts give
Re[⟨α|(Q − Q†)|β⟩] = 0 .
Thus, we obtain
⟨α|(Q − Q†)|β⟩ = 0
or
⟨α|Q|β⟩ = ⟨α|Q†|β⟩ = ⟨β|Q|α⟩* .
Remember that |α⟩ and |β⟩ are general, and thus
Q† = Q .
I think that the above proof may be the simplest proof available.
Just to satisfy paranoia, we might prove some of the results we used in the above proof.
First, that c*⟨β| is the bra corresponding to the ket c|β⟩, where c is a general complex number
and |β⟩ is a general vector. Consider general vector |α⟩ and note that for |δ⟩ = c|β⟩,
⟨δ|α⟩ = ⟨α|δ⟩* = (c⟨α|β⟩)* = c*⟨β|α⟩ ,
and so ⟨δ| = c*⟨β|.
Second, that (Q†)† = Q. For general |α⟩ and |β⟩,
⟨α|(Q†)†|β⟩ = ⟨β|Q†|α⟩* = ⟨α|Q|β⟩ ,
and so
(Q†)† = Q
QED.
Third, that (A + B)† = A† + B†. Consider general vectors |α⟩ and |β⟩. The proof is
⟨α|(A + B)†|β⟩ = ⟨β|(A + B)|α⟩* = ⟨β|A|α⟩* + ⟨β|B|α⟩* = ⟨α|A†|β⟩ + ⟨α|B†|β⟩ ,
and thus
⟨α|(A + B)†|β⟩ = ⟨α|(A† + B†)|β⟩ ,
and thus again
(A + B)† = A† + B†
QED.
c) The above two proofs show that the statements of parts (a) and (b) are equivalent: each implies
the other. We could phrase the situation with the sentence “if and only if Q is Hermitian is
hγ|Q|γi pure real for general |γi.” Or we could say “it is necessary and sufficient that Q be
Hermitian for hγ|Q|γi to be pure real for general |γi.” These ways of saying things always
seem asymmetric and obscure to me. I prefer to say that the statements of parts (a) and (b)
are equivalent.
Redaction: Jeffery, 2001jan01
U Qâ = λU â ,
and with â′ = U â and Q′ = U QU⁻¹ (inserting 1 = U⁻¹U before â), this becomes
Q′â′ = λâ′ .
Prove that the transformation matrix U that gives the diagonalized matrix Q′ just consists of rows
that are the Hermitian conjugates of the eigenvectors. Then find the diagonalized matrix itself and
its eigenvalue.
e) Compare the determinant det|Q|, trace Tr(Q), and eigenvalues of Q to those of Q′ .
f) The matrix U that we considered in part (d) is actually unitary. This means that
U † = U −1 .
Satisfy yourself that this is true. Unitary transformations have the useful property that inner
products are invariant under them. If the inner product has a physical meaning and in particular
the magnitude of a vector has a physical meaning, unitary transformations can be physically relevant.
In quantum mechanics, the inner product of a normalized state vector with itself is 1, and this should
be maintained by all physical transformations, and so such transformations must be unitary. Prove
that
⟨a′|b′⟩ = ⟨a|b⟩ ,
where
|a′⟩ = U|a⟩ and |b′⟩ = U|b⟩
and U is unitary.
SUGGESTED ANSWER:
a) Yes, by inspection. But to be explicit,
Q† = (Q*)ᵀ = (1, 1+i; 1−i, 0)ᵀ = (1, 1−i; 1+i, 0) = Q ,
(1 − λ)(−λ) − 2 = λ² − λ − 2 = (λ − 2)(λ + 1) = 0 .
The first component of the eigenvector equation reads
a₁ + (1 − i)a₂ = λa₁ ,
giving unnormalized eigenvectors
a⃗(λ) = (1, (λ − 1)/(1 − i))ᵀ = (1, (λ − 1)(1 + i)/2)ᵀ .
The normalized eigenvectors are
â(λ = 2) = √(2/3) (1, (1 + i)/2)ᵀ and â(λ = −1) = (1/√3) (1, −(1 + i))ᵀ .
The global phase factor eiπ/4 can be dropped because it is physically irrelevant. Physical
eigenvectors are unique only to within a global phase factor. Caveat markor.
d) Following the question lead-in, we define
U = ( √(2/3), (1 − i)/√6 ; 1/√3, −(1 − i)/√3 ) .
Because the eigenvectors are orthonormal, the inverse matrix U⁻¹ is constructed using the
eigenvectors for columns. Thus, we obtain
U⁻¹ = ( √(2/3), 1/√3 ; (1 + i)/√6, −(1 + i)/√3 ) .
Obviously,
U U⁻¹ = 1 ,
where 1 is the unit matrix. Given the eigenproblem and the similarity transformation
Q′ = U QU⁻¹ ,
we have
Q′ᵢⱼ = Uᵢₖ Qₖₗ (U⁻¹)ₗⱼ = Uᵢₖ λⱼ (U⁻¹)ₖⱼ = λⱼ δᵢⱼ = λᵢ δᵢⱼ ,
where we have used the eigenproblem property itself, the fact that U U⁻¹ = 1, and Einstein
summation (i.e., there is an implicit sum on repeated indices). That
Qₖₗ (U⁻¹)ₗⱼ = λⱼ (U⁻¹)ₖⱼ
holds since the columns of U⁻¹ are the eigenvectors: i.e., QU⁻¹ = V, where the columns of V are
the columns of U⁻¹ times the eigenvalue of the eigenvector that makes up that column.
The proof is complete since the U matrix constructed as required yields a diagonal matrix
whose elements are the eigenvalues.
So Q′ is the required diagonalized matrix. In our case,
Q′ = ( 2, 0 ; 0, −1 ) .
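NumPy reproduces the whole diagonalization. Note that numpy.linalg.eigh orders the eigenvalues ascending, so the diagonal comes out (−1, 2) rather than (2, −1); a sketch:

  import numpy as np

  Q = np.array([[1, 1 - 1j], [1 + 1j, 0]])
  vals, vecs = np.linalg.eigh(Q)            # Hermitian eigensolver
  U = vecs.conj().T                         # rows = Hermitian conjugates of eigenvectors

  Qp = U @ Q @ np.linalg.inv(U)
  assert np.allclose(Qp, np.diag(vals))                 # diagonalized
  assert np.allclose(np.sort(vals), [-1, 2])            # eigenvalues
  assert np.allclose(U.conj().T, np.linalg.inv(U))      # U is unitary, as in part (f)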
e) Well,
det|Q| = −2 = det|Q′| , Tr(Q) = 1 = Tr(Q′) ,
and the eigenvalues of Q (i.e., 2 and −1) are clearly the eigenvalues of Q′.
The above results are special cases of more general results. We can prove this.
Say S is a general transformation acting on general matrix A. Then
det|SAS⁻¹| = det|S| det|A| det|S⁻¹| = det|A| ,
where we have used well known determinant properties: see, e.g., Mih-657. We see that the
determinant is generally invariant under a similarity transformation.
Now note that
Tr(ABC . . . Z) = AᵢⱼBⱼₖCₖₗ . . . Zₙᵢ = BⱼₖCₖₗ . . . ZₙᵢAᵢⱼ = Tr(BC . . . ZA) ,
so the trace is invariant under cyclic permutations of a matrix product, and in particular
Tr(SAS⁻¹) = Tr(AS⁻¹S) = Tr(A): the trace too is invariant under a similarity transformation.
For the unitary diagonalization H^d = U HU†, the element (H^d)ᵢⱼ = λⱼδᵢⱼ, where λₗ is the
eigenvalue corresponding to the eigenvector in the ℓth column of U†. Thus we find that H^d is
diagonal with the diagonal elements being the eigenvalues of H: these elements are the eigenvalues
of H^d by inspection.
Are eigenvalues invariant under general similarity transformations? Yes: det(SAS⁻¹ − λ1) =
det|S| det(A − λ1) det|S⁻¹| = det(A − λ1), so the characteristic polynomial, and thus the
eigenvalues, are invariant.
f) By inspection, the matrix U we found in part (d) is unitary.
Well, for general vectors |a⟩ and |b⟩,
⟨a|U†|b⟩ = ⟨b|U|a⟩*
by the definition of the Hermitian conjugate. Therefore in general the bra corresponding to ket
U|b⟩ is ⟨b|U†.
Now for general vectors |a⟩ and |b⟩, we can write
⟨a′|b′⟩ = ⟨a|U†U|b⟩ = ⟨a|U⁻¹U|b⟩ = ⟨a|b⟩ .
34. Consider the observable Q and the general NORMALIZED vector |Ψ⟩. By quantum mechanics
postulate, the expectation value of Qⁿ, where n ≥ 0 is some integer, for |Ψ⟩ is
⟨Qⁿ⟩ = ⟨Ψ|Qⁿ|Ψ⟩ .
a) Assume Q has a discrete spectrum of eigenvalues qᵢ and orthonormal eigenvectors |qᵢ⟩. It follows
from the general probabilistic interpretation postulate of quantum mechanics that the expectation
value of Qⁿ for |Ψ⟩ is given by
⟨Qⁿ⟩ = Σᵢ qᵢⁿ |⟨qᵢ|Ψ⟩|² .
Show that this expression for ⟨Qⁿ⟩ also follows from the one in the preamble. What is Σᵢ |⟨qᵢ|Ψ⟩|²
equal to?
b) Assume Q has a continuous spectrum of eigenvalues q and Dirac-orthonormal eigenvectors |q⟩.
(Dirac-orthonormal means that ⟨q′|q⟩ = δ(q′ − q), where δ(q′ − q) is the Dirac delta function. The
term Dirac-orthonormal is all my own invention: it needed to be.) It follows from the general
probabilistic interpretation postulate of quantum mechanics that the expectation value of Qⁿ for |Ψ⟩
is given by
⟨Qⁿ⟩ = ∫ dq qⁿ |⟨q|Ψ⟩|² .
Show that this expression for ⟨Qⁿ⟩ also follows from the one in the preamble. What is ∫ dq |⟨q|Ψ⟩|²
equal to?
SUGGESTED ANSWER:
a) Since the eigenvectors |qᵢ⟩ constitute a complete orthonormal set,
|Ψ⟩ = Σᵢ |qᵢ⟩⟨qᵢ|Ψ⟩ .
Thus
⟨Qⁿ⟩ = ⟨Ψ|Qⁿ|Ψ⟩ = Σᵢⱼ ⟨Ψ|qᵢ⟩⟨qᵢ|Qⁿ|qⱼ⟩⟨qⱼ|Ψ⟩ = Σᵢⱼ qⱼⁿδᵢⱼ⟨Ψ|qᵢ⟩⟨qⱼ|Ψ⟩ = Σᵢ qᵢⁿ|⟨qᵢ|Ψ⟩|² ,
as required. By normalization,
Σᵢ |⟨qᵢ|Ψ⟩|² = ⟨Ψ|Ψ⟩ = 1 .
c) [A, BC] = ABC − BCA = ABC − BAC + BAC − BCA = [A, B]C + B[A, C].
d) Behold:
[A, [B, C]] + [B, [C, A]] + [C, [A, B]] = (ABC − ACB − BCA + CBA)
+ (BCA − BAC − CAB + ACB)
+ (CAB − CBA − ABC + BAC)
=0.
e) Behold:
(c[A, B])† = c∗ (B † A† − A† B † ) = c∗ [B † , A† ] .
f) Using the parts (e) and (a) answers for A and B Hermitian and c pure imaginary,
(c[A, B])† = c*[B†, A†] = −c[B, A] = c[A, B] .
In this special case, c[A, B] is itself Hermitian. If c is pure real, then c[A, B] is anti-
Hermitian (see Gr-83).
Redaction: Jeffery, 2001jan01
SUGGESTED ANSWER:
a) Any operator function can be expanded into an operator series. Thus
F(B) = Σ_{n=0}^{∞} fₙBⁿ , (1)
where the fₙ are constants. Recall also that the commutator is linear in both arguments:
[Σᵢ gᵢGᵢ, Σⱼ hⱼHⱼ] = Σᵢⱼ gᵢhⱼ[Gᵢ, Hⱼ] , (2)
where the Gᵢ and Hⱼ are arbitrary operators and the gᵢ and hⱼ arbitrary constants. Therefore,
[A, F(B)] = Σ_{n=0}^{∞} fₙ[A, Bⁿ] = Σ_{n=0}^{∞} fₙ[A, B]nB^{n−1} = [A, B]F′(B) (3)
follows if
[A, Bⁿ] = [A, B]nB^{n−1} . (4)
Thus we only need to prove the last equation, which we will do by induction.
Step 1: For n = 0, equation (4) is true by inspection. For n = 1, it is again true by inspection.
For n = 2 (which is the first non-trivial case),
[A, B²] = [A, B]B + B[A, B] = [A, B]B + [A, B]B = [A, B]2B , (5)
where we have used the condition [B, [A, B]] = 0. Equation (5) satisfies equation (4) as
required.
Step 2: Assume equation (4) holds for n − 1.
Step 3: For n, we find
[A, Bⁿ] = [A, BB^{n−1}] = [A, B]B^{n−1} + B[A, B^{n−1}] = [A, B]B^{n−1} + B[A, B](n − 1)B^{n−2} = [A, B]nB^{n−1} ,
where we have used Step 2 and the condition [B, [A, B]] = 0.
We have now completed the proof of equation (4), and thus the proof of equation (3), which is
the given identity.
b) Behold (acting on a general function f):
[x, p]f = x(ħ/i)(df/dx) − (ħ/i)(d/dx)(xf) = (ħ/i)(x df/dx − f − x df/dx) = −(ħ/i)f = iħf ,
and so [x, p] = iħ.
c) From the part (b) answer, [p, [x, p]] = [p, iħ] = 0. Thus from the part (a) answer it follows
that
[x, pⁿ] = [x, p]np^{n−1} = iħnp^{n−1} .
d) From the part (b) answer, [x, [p, x]] = [x, −iħ] = 0. Thus from the part (a) answer it follows
that
[p, xⁿ] = [p, x]nx^{n−1} = −iħnx^{n−1} .
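A SymPy spot check of [x, pⁿ] = iħnp^{n−1} acting on an arbitrary test function (a sketch; ħ kept symbolic):

  import sympy as sp

  x = sp.symbols('x', real=True)
  hbar = sp.symbols('hbar', positive=True)
  f = sp.Function('f')(x)

  def p(g, n=1):                            # apply p = (hbar/i) d/dx n times
      for _ in range(n):
          g = (hbar / sp.I) * sp.diff(g, x)
      return g

  n = 3
  lhs = x * p(f, n) - p(x * f, n)           # [x, p^n] f
  rhs = sp.I * hbar * n * p(f, n - 1)       # i hbar n p^{n-1} f
  assert sp.simplify(lhs - rhs) == 0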
This internal system is 2-dimensional in the abstract vector sense of dimension: i.e., it can be described
completely by an orthonormal basis consisting of the 2 vectors we have just given. When we measure
this system we force it into one or other of these states: i.e., we make the fundamental perturbation.
But the system can exist in a general state of course:
|Ψ(t)⟩ = c₊(t)|+⟩ + c₋(t)|−⟩ = (c₊(t), c₋(t))ᵀ .
a) Given that |Ψ(t)⟩ is NORMALIZED, what equation must the coefficients c₊(t) and c₋(t) satisfy?
b) For reasons only known to Mother Nature, the states we can measure (eigenvectors of whatever
operator they may be) |+⟩ and |−⟩ are NOT eigenstates of the Hamiltonian that governs the time
evolution of the internal system. Let the Hamiltonian's eigenstates (i.e., the stationary states) be |+′⟩
and |−′⟩: i.e.,
H|+′⟩ = E₊|+′⟩ and H|−′⟩ = E₋|−′⟩ ,
where E₊ and E₋ are the eigen-energies. Verify that the general state |Ψ(t)⟩ expanded in these
energy eigenstates,
|Ψ(t)⟩ = c₊e^{−iE₊t/ħ}|+′⟩ + c₋e^{−iE₋t/ħ}|−′⟩ ,
satisfies the general vector form of the Schrödinger equation:
iħ ∂|Ψ(t)⟩/∂t = H|Ψ(t)⟩ .
e) Given at t = 0 that
|Ψ(0)⟩ = (a, b)ᵀ ,
show that
|Ψ(t)⟩ = (1/√2)(a + b)e^{−i(f+g)t/ħ}|+′⟩ + (1/√2)(a − b)e^{−i(f−g)t/ħ}|−′⟩
and then show that
|Ψ(t)⟩ = e^{−ift/ħ} [ a (cos(gt/ħ), −i sin(gt/ħ))ᵀ + b (−i sin(gt/ħ), cos(gt/ħ))ᵀ ] .
HINT: Recall the time-zero coefficients of expansion in basis {|φᵢ⟩} are given by ⟨φᵢ|Ψ(0)⟩.
f) For the state found in the part (e) question, what is the probability at any time t of measuring
(i.e., forcing by the fundamental perturbation) the system into state
|+⟩ = (1, 0)ᵀ ?
SUGGESTED ANSWER:
a) Behold:
1 = ⟨Ψ(t)|Ψ(t)⟩ = (c₊(t)*, c₋(t)*) · (c₊(t), c₋(t))ᵀ = |c₊(t)|² + |c₋(t)|² .
Thus c₊(t) and c₋(t) satisfy
1 = |c₊(t)|² + |c₋(t)|² .
b) Behold:
iħ ∂|Ψ(t)⟩/∂t = E₊c₊e^{−iE₊t/ħ}|+′⟩ + E₋c₋e^{−iE₋t/ħ}|−′⟩
= c₊e^{−iE₊t/ħ}H|+′⟩ + c₋e^{−iE₋t/ħ}H|−′⟩
= H|Ψ(t)⟩ .
f u± + g v± = (f ± g)u± ,
leading to
v± = ±u± .
Thus the eigenvectors are given by
|±′⟩ = (1/√2) (1, ±1)ᵀ .
e) Behold:
|Ψ(t)⟩ = (1/√2)(a + b)e^{−i(f+g)t/ħ}|+′⟩ + (1/√2)(a − b)e^{−i(f−g)t/ħ}|−′⟩
= (1/2)(a + b)e^{−i(f+g)t/ħ}(1, 1)ᵀ + (1/2)(a − b)e^{−i(f−g)t/ħ}(1, −1)ᵀ
= (1/2)a e^{−ift/ħ}(e^{−igt/ħ} + e^{igt/ħ}, e^{−igt/ħ} − e^{igt/ħ})ᵀ + (1/2)b e^{−ift/ħ}(e^{−igt/ħ} − e^{igt/ħ}, e^{−igt/ħ} + e^{igt/ħ})ᵀ
= e^{−ift/ħ} [ a (cos(gt/ħ), −i sin(gt/ħ))ᵀ + b (−i sin(gt/ħ), cos(gt/ħ))ᵀ ] .
f) Behold:
⟨+|Ψ(t)⟩ = (1, 0) · e^{−ift/ħ} [ a (cos(gt/ħ), −i sin(gt/ħ))ᵀ + b (−i sin(gt/ħ), cos(gt/ħ))ᵀ ]
= e^{−ift/ħ} [ a cos(gt/ħ) − ib sin(gt/ħ) ] .
The probability is then
|⟨+|Ψ(t)⟩|² = |a|² cos²(gt/ħ) + |b|² sin²(gt/ħ) + i(ab* − a*b) cos(gt/ħ) sin(gt/ħ) .
If ab∗ were pure real (which would be most easily arranged if a and b separately were pure
real), the cross term would vanish.
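Assuming H = (f, g; g, f) as implied by the eigenvalue work above, the closed form can be checked against direct spectral evolution (a sketch; the numbers are arbitrary):

  import numpy as np

  f, g = 2.0, 0.7
  H = np.array([[f, g], [g, f]])
  a, b = 0.6, 0.8j                          # normalized: |a|^2 + |b|^2 = 1
  psi0 = np.array([a, b])
  hbar = 1.0

  vals, vecs = np.linalg.eigh(H)
  for t in np.linspace(0.0, 5.0, 6):
      psi_t = vecs @ (np.exp(-1j * vals * t / hbar) * (vecs.conj().T @ psi0))
      analytic = np.exp(-1j * f * t / hbar) * np.array(
          [a * np.cos(g * t / hbar) - 1j * b * np.sin(g * t / hbar),
           -1j * a * np.sin(g * t / hbar) + b * np.cos(g * t / hbar)])
      assert np.allclose(psi_t, analytic)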
g) If a = 1 and b = 0, the probability of measuring the system in state |+⟩ is cos²(gt/ħ).

Leibniz's formula: dⁿ(fg)/dxⁿ = Σ_{k=0}^{n} C(n, k) (dᵏf/dxᵏ)(d^{n−k}g/dx^{n−k})
Normalized Gaussian: P = [1/(σ√(2π))] exp[−(x − ⟨x⟩)²/(2σ²)]
3 Schrödinger's Equation
HΨ(x, t) = [p²/(2m) + V(x)]Ψ(x, t) = iħ ∂Ψ(x, t)/∂t
Hψ(x) = [p²/(2m) + V(x)]ψ(x) = Eψ(x)
HΨ(r⃗, t) = [p²/(2m) + V(r⃗)]Ψ(r⃗, t) = iħ ∂Ψ(r⃗, t)/∂t        H|Ψ⟩ = iħ ∂|Ψ⟩/∂t
Hψ(r⃗) = [p²/(2m) + V(r⃗)]ψ(r⃗) = Eψ(r⃗)        H|ψ⟩ = E|ψ⟩
4 Some Operators
p = (ħ/i) ∂/∂x        p² = −ħ² ∂²/∂x²
H = p²/(2m) + V(x) = −(ħ²/2m) ∂²/∂x² + V(x)
p = (ħ/i)∇        p² = −ħ²∇²
H = p²/(2m) + V(r⃗) = −(ħ²/2m)∇² + V(r⃗)
∇ = x̂ ∂/∂x + ŷ ∂/∂y + ẑ ∂/∂z = r̂ ∂/∂r + θ̂ (1/r) ∂/∂θ + φ̂ [1/(r sin θ)] ∂/∂φ
∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z² = (1/r²) ∂/∂r (r² ∂/∂r) + [1/(r² sin θ)] ∂/∂θ (sin θ ∂/∂θ) + [1/(r² sin² θ)] ∂²/∂φ²
δᵢⱼ = { 1, i = j; 0, otherwise }        εᵢⱼₖ = { 1, ijk cyclic; −1, ijk anticyclic; 0, if two indices the same }
Ehrenfest's Theorem: d⟨r⃗⟩/dt = ⟨p⃗⟩/m        and        d⟨p⃗⟩/dt = −⟨∇V(r⃗)⟩
|Ψ(t)⟩ = Σⱼ cⱼ(0) e^{−iEⱼt/ħ} |φⱼ⟩
V(x) = (1/2)mω²x²        [−(ħ²/2m) ∂²/∂x² + (1/2)mω²x²]ψ = Eψ
β = √(mω/ħ)        ψₙ(x) = [β^{1/2}/(π^{1/4}√(2ⁿ n!))] Hₙ(βx) e^{−β²x²/2}        Eₙ = (n + 1/2)ħω
p = ħk        E_kinetic = E_T = ħ²k²/(2m)
|Ψ(p, t)|² dp = |Ψ(k, t)|² dk        Ψ(p, t) = Ψ(k, t)/√ħ
x_op = x        p_op = (ħ/i) ∂/∂x        Q(x, (ħ/i) ∂/∂x, t)        position representation
x_op = −(ħ/i) ∂/∂p        p_op = p        Q(−(ħ/i) ∂/∂p, p, t)        momentum representation
δ(x) = ∫_{−∞}^{∞} [e^{ipx/ħ}/(2πħ)] dp        δ(x) = ∫_{−∞}^{∞} [e^{ikx}/(2π)] dk
Ψ(x, t) = ∫_{−∞}^{∞} [e^{ipx/ħ}/(2πħ)^{1/2}] Ψ(p, t) dp        Ψ(x, t) = ∫_{−∞}^{∞} [e^{ikx}/(2π)^{1/2}] Ψ(k, t) dk
Ψ(p, t) = ∫_{−∞}^{∞} [e^{−ipx/ħ}/(2πħ)^{1/2}] Ψ(x, t) dx        Ψ(k, t) = ∫_{−∞}^{∞} [e^{−ikx}/(2π)^{1/2}] Ψ(x, t) dx
Ψ(r⃗, t) = ∫_{all space} [e^{ip⃗·r⃗/ħ}/(2πħ)^{3/2}] Ψ(p⃗, t) d³p        Ψ(r⃗, t) = ∫_{all space} [e^{ik⃗·r⃗}/(2π)^{3/2}] Ψ(k⃗, t) d³k
Ψ(p⃗, t) = ∫_{all space} [e^{−ip⃗·r⃗/ħ}/(2πħ)^{3/2}] Ψ(r⃗, t) d³r        Ψ(k⃗, t) = ∫_{all space} [e^{−ik⃗·r⃗}/(2π)^{3/2}] Ψ(r⃗, t) d³r
9 Commutator Formulae
[A, BC] = [A, B]C + B[A, C]        [Σᵢ aᵢAᵢ, Σⱼ bⱼBⱼ] = Σᵢⱼ aᵢbⱼ[Aᵢ, Bⱼ]
[x, p] = iħ        [x, f(p)] = iħf′(p)        [p, g(x)] = −iħg′(x)
σₓσ_p = ΔxΔp ≥ ħ/2        σ_Q σ_R = ΔQΔR ≥ (1/2)|⟨i[Q, R]⟩|
σ_H Δt_scale time = ΔE Δt_scale time ≥ ħ/2
Ψ(x, t) = ⟨x|Ψ(t)⟩        P(dx) = |Ψ(x, t)|² dx        cᵢ(t) = ⟨φᵢ|Ψ(t)⟩        P(i) = |cᵢ(t)|²
12 Spherical Harmonics
Y₀,₀ = (1/4π)^{1/2}        Y₁,₀ = (3/4π)^{1/2} cos(θ)        Y₁,±₁ = ∓(3/8π)^{1/2} sin(θ)e^{±iφ}
L²Y_{ℓm} = ℓ(ℓ + 1)ħ²Y_{ℓm}        L_z Y_{ℓm} = mħY_{ℓm}        |m| ≤ ℓ        m = −ℓ, −ℓ + 1, . . . , ℓ − 1, ℓ
ℓ: 0 1 2 3 4 5 6 . . .
   s p d f g h i . . .
13 Hydrogenic Atom
a_Z = (a₀/Z)(mₑ/m_reduced)        a₀ = ħ/(mₑcα) = λ_C/(2πα)        α = e²/(ħc)
R₁₀ = 2a_Z^{−3/2} e^{−r/a_Z}        R₂₀ = (1/√2) a_Z^{−3/2} [1 − r/(2a_Z)] e^{−r/(2a_Z)}
R₂₁ = (1/√24) a_Z^{−3/2} (r/a_Z) e^{−r/(2a_Z)}
R_{nℓ} = −{ [2/(na_Z)]³ (n − ℓ − 1)!/(2n[(n + ℓ)!]³) }^{1/2} e^{−ρ/2} ρ^ℓ L^{2ℓ+1}_{n+ℓ}(ρ)        ρ = 2r/(na_Z)
L_q(x) = eˣ (d/dx)^q (e^{−x}x^q)        Rodrigues's formula for the Laguerre polynomials
L^j_q(x) = (d/dx)^j L_q(x)        Associated Laguerre polynomials
⟨r⟩_{nℓm} = (a_Z/2)[3n² − ℓ(ℓ + 1)]
[Jᵢ, Jⱼ] = iħεᵢⱼₖJₖ (Einstein summation on k)        [J², J⃗] = 0
J²|jm⟩ = j(j + 1)ħ²|jm⟩        J_z|jm⟩ = mħ|jm⟩
J± = Jₓ ± iJ_y        J±|jm⟩ = ħ√(j(j + 1) − m(m ± 1)) |jm ± 1⟩
Jₓ = (1/2)(J₊ + J₋)        J_y = (1/2i)(J₊ − J₋)        J±†J± = J∓J± = J² − J_z(J_z ± ħ)
[J_{fi}, J_{gj}] = δ_{fg} iħεᵢⱼₖJₖ        J⃗ = J⃗₁ + J⃗₂        J² = J₁² + J₂² + J₁₊J₂₋ + J₁₋J₂₊ + 2J₁zJ₂z
J± = J₁± + J₂±        |j₁j₂jm⟩ = Σ_{m₁m₂, m=m₁+m₂} |j₁j₂m₁m₂⟩⟨j₁j₂m₁m₂|j₁j₂jm⟩
Σ_{j=|j₁−j₂|}^{j₁+j₂} (2j + 1) = (2j₁ + 1)(2j₂ + 1)
Sₓ = (ħ/2)(0, 1; 1, 0)        S_y = (ħ/2)(0, −i; i, 0)        S_z = (ħ/2)(1, 0; 0, −1)
|±⟩ₓ = (1/√2)(|+⟩ ± |−⟩)        |±⟩_y = (1/√2)(|+⟩ ± i|−⟩)        |±⟩_z = |±⟩
|+ +⟩ = |1, +⟩|2, +⟩        |+ −⟩ = (1/√2)(|1, +⟩|2, −⟩ ± |1, −⟩|2, +⟩)        |− −⟩ = |1, −⟩|2, −⟩
σₓ = (0, 1; 1, 0)        σ_y = (0, −i; i, 0)        σ_z = (1, 0; 0, −1)
(A⃗ · σ⃗)(B⃗ · σ⃗) = A⃗ · B⃗ + i(A⃗ × B⃗) · σ⃗
d(S⃗ · n̂)/dα = −(i/ħ)[S⃗ · α̂, S⃗ · n̂]        S⃗ · n̂ = e^{−iS⃗·α⃗/ħ} S⃗ · n̂₀ e^{iS⃗·α⃗/ħ}        |n̂±⟩ = e^{−iS⃗·α⃗/ħ}|ẑ±⟩
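The (A⃗ · σ⃗)(B⃗ · σ⃗) identity checks out numerically (a sketch):

  import numpy as np

  sx = np.array([[0, 1], [1, 0]], dtype=complex)
  sy = np.array([[0, -1j], [1j, 0]])
  sz = np.array([[1, 0], [0, -1]], dtype=complex)
  sigma = np.array([sx, sy, sz])

  rng = np.random.default_rng(5)
  A, B = rng.normal(size=3), rng.normal(size=3)

  dot = lambda v: np.einsum('i,ijk->jk', v, sigma)      # v . sigma as a 2x2 matrix
  lhs = dot(A) @ dot(B)
  rhs = np.dot(A, B) * np.eye(2) + 1j * dot(np.cross(A, B))
  assert np.allclose(lhs, rhs)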
μ_Bohr = eħ/(2m) = 0.927400915(23) × 10⁻²⁴ J/T = 5.7883817555(79) × 10⁻⁵ eV/T
g = 2(1 + α/(2π) + . . .) = 2.0023193043622(15)
μ⃗_orbital = −μ_Bohr L⃗/ħ        μ⃗_spin = −gμ_Bohr S⃗/ħ        μ⃗_total = μ⃗_orbital + μ⃗_spin = −μ_Bohr(L⃗ + gS⃗)/ħ
H_μ = −μ⃗ · B⃗        H_μ = μ_Bohr B_z (L_z + gS_z)/ħ
H = H⁽⁰⁾ + λH⁽¹⁾        |ψ⟩ = N(λ) Σ_{k=0}^{∞} λᵏ|ψₙ⁽ᵏ⁾⟩
H⁽¹⁾|ψₙ^{(m−1)}⟩(1 − δ_{m,0}) + H⁽⁰⁾|ψₙ^{(m)}⟩ = Σ_{ℓ=0}^{m} E^{(m−ℓ)}|ψₙ^{(ℓ)}⟩        |ψₙ^{(ℓ>0)}⟩ = Σ_{m=0, m≠n}^{∞} a_{nm}|ψ_m⁽⁰⁾⟩
|ψₙ^{1st}⟩ = |ψₙ⁽⁰⁾⟩ + λ Σ_{all k, k≠n} [⟨ψₖ⁽⁰⁾|H⁽¹⁾|ψₙ⁽⁰⁾⟩/(Eₙ⁽⁰⁾ − Eₖ⁽⁰⁾)] |ψₖ⁽⁰⁾⟩
Eₙ^{1st} = Eₙ⁽⁰⁾ + λ⟨ψₙ⁽⁰⁾|H⁽¹⁾|ψₙ⁽⁰⁾⟩
Eₙ^{2nd} = Eₙ⁽⁰⁾ + λ⟨ψₙ⁽⁰⁾|H⁽¹⁾|ψₙ⁽⁰⁾⟩ + λ² Σ_{all k, k≠n} |⟨ψₖ⁽⁰⁾|H⁽¹⁾|ψₙ⁽⁰⁾⟩|²/(Eₙ⁽⁰⁾ − Eₖ⁽⁰⁾)
E(φ) = ⟨φ|H|φ⟩/⟨φ|φ⟩        δE(φ) = 0
π = ∫_{−∞}^{∞} [sin²(x)/x²] dx
Γ_{0→n} = (2π/ħ)|⟨n|H_perturbation|0⟩|²δ(Eₙ − E₀)
E⃗_op = −(1/c) ∂A⃗_op/∂t        B⃗_op = ∇ × A⃗_op
19 Box Quantization
kL = 2πn, n = 0, ±1, ±2, . . .        k = 2πn/L        Δk_cell = 2π/L        Δk³_cell = (2π)³/V
dN_states = g [k² dk dΩ]/[(2π)³/V]
20 Identical Particles
|a, b⟩ = (1/√2)(|1, a; 2, b⟩ ± |1, b; 2, a⟩)
ψ(r⃗₁, r⃗₂) = (1/√2)(ψₐ(r⃗₁)ψ_b(r⃗₂) ± ψ_b(r⃗₁)ψₐ(r⃗₂))
21 Second Quantization
{aᵢ, aⱼ†} = δᵢⱼ        {aᵢ, aⱼ} = 0        {aᵢ†, aⱼ†} = 0        |N₁, . . . , Nₙ⟩ = (aₙ†)^{Nₙ} . . . (a₁†)^{N₁}|0⟩
Ψ_s(r⃗)† = Σ_{p⃗} [e^{−ip⃗·r⃗}/√V] a†_{p⃗s}        Ψ_s(r⃗) = Σ_{p⃗} [e^{ip⃗·r⃗}/√V] a_{p⃗s}
[Ψ_s(r⃗), Ψ_{s′}(r⃗′)]∓ = 0        [Ψ_s(r⃗)†, Ψ_{s′}(r⃗′)†]∓ = 0        [Ψ_s(r⃗), Ψ_{s′}(r⃗′)†]∓ = δ(r⃗ − r⃗′)δ_{ss′}
|r⃗₁s₁, . . . , r⃗ₙsₙ⟩ = (1/√n!) Ψ_{sₙ}(r⃗ₙ)† . . . Ψ_{s₁}(r⃗₁)†|0⟩
Ψ_s(r⃗)†|r⃗₁s₁, . . . , r⃗ₙsₙ⟩ = √(n + 1) |r⃗₁s₁, . . . , r⃗ₙsₙ, r⃗s⟩
|Φ⟩ = ∫ dr⃗₁ . . . dr⃗ₙ Φ(r⃗₁, . . . , r⃗ₙ)|r⃗₁s₁, . . . , r⃗ₙsₙ⟩
1ₙ = Σ_{s₁...sₙ} ∫ dr⃗₁ . . . dr⃗ₙ |r⃗₁s₁, . . . , r⃗ₙsₙ⟩⟨r⃗₁s₁, . . . , r⃗ₙsₙ|        1 = |0⟩⟨0| + Σ_{n=1}^{∞} 1ₙ
N = Σ_{p⃗s} a†_{p⃗s}a_{p⃗s}        T = Σ_{p⃗s} [p²/(2m)] a†_{p⃗s}a_{p⃗s}
ρ_s(r⃗) = Ψ_s(r⃗)†Ψ_s(r⃗)        N = Σ_s ∫ dr⃗ ρ_s(r⃗)        T = (1/2m) Σ_s ∫ dr⃗ ∇Ψ_s(r⃗)† · ∇Ψ_s(r⃗)
j⃗_s(r⃗) = [1/(2im)][Ψ_s(r⃗)†∇Ψ_s(r⃗) − Ψ_s(r⃗)∇Ψ_s(r⃗)†]
v_2nd = (1/2) Σ_{ss′} ∫ dr⃗ dr⃗′ v(r⃗ − r⃗′)Ψ_s(r⃗)†Ψ_{s′}(r⃗′)†Ψ_{s′}(r⃗′)Ψ_s(r⃗)
v_2nd = [1/(2V)] Σ_{p⃗p⃗′q⃗q⃗′ss′} v_{p⃗−p⃗′} δ_{p⃗+q⃗,p⃗′+q⃗′} a†_{p⃗s}a†_{q⃗s′}a_{q⃗′s′}a_{p⃗′s}        v_{p⃗−p⃗′} = ∫ dr⃗ e^{−i(p⃗−p⃗′)·r⃗} v(r⃗)
22 Klein-Gordon Equation
E = √(p²c² + m²c⁴)        (1/c²)(iħ ∂/∂t)² Ψ(r⃗, t) = [((ħ/i)∇)² + m²c²] Ψ(r⃗, t)
[(1/c²) ∂²/∂t² − ∇² + (mc/ħ)²] Ψ(r⃗, t) = 0
ρ = [iħ/(2mc²)](Ψ* ∂Ψ/∂t − Ψ ∂Ψ*/∂t)        j⃗ = [ħ/(2im)](Ψ*∇Ψ − Ψ∇Ψ*)
(1/c²)(iħ ∂/∂t − eΦ)² Ψ(r⃗, t) = [((ħ/i)∇ − (e/c)A⃗)² + m²c²] Ψ(r⃗, t)
Ψ₊(p⃗, E) = e^{i(p⃗·r⃗−Et)/ħ}        Ψ₋(p⃗, E) = e^{−i(p⃗·r⃗−Et)/ħ}