CH 8 Sol
♦ 8.1.13. Let a be complex. Prove that u(t) = c ea t is the (complex) solution to our scalar ordi-
nary differential equation (8.1). Describe the asymptotic behavior of the solution as t → ∞,
and the stability properties of the zero equilibrium solution.
Solution: The solution is still valid as a complex solution. If Re a > 0, then u(t) → ∞ as t →
∞, and the origin is an unstable equilibrium. If Re a = 0, then u(t) remains bounded as t → ∞,
and the origin is a stable equilibrium. If Re a < 0, then u(t) → 0 as t → ∞, and the origin is an
asymptotically stable equilibrium.
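The three regimes are easy to confirm numerically. The sketch below uses NumPy, and the particular values of a are arbitrary sample choices, not taken from the text:

```python
import numpy as np

def magnitude(a, t, c=1.0):
    """|u(t)| for u(t) = c*exp(a*t); equals |c|*exp(Re(a)*t)."""
    return abs(c * np.exp(a * t))

# Re a < 0: decay; Re a = 0: bounded; Re a > 0: growth.
decay   = magnitude(-0.5 + 2j, 50.0)
bounded = magnitude(0.0 + 2j, 50.0)
growth  = magnitude(0.5 + 2j, 50.0)
```

The imaginary part of a only makes the solution spiral; the modulus is governed entirely by Re a.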
(h) Eigenvalues: 2, 0; eigenvectors: ( 1, −1, 0 )^T, ( 1, −3, 1 )^T.
(i) −1 simple eigenvalue, eigenvector ( 1, −1, 1 )^T; 2 double eigenvalue, eigenvectors ( 2/3, 0, 1 )^T, ( −2/3, 1, 0 )^T.
(j) Two double eigenvalues: −1, eigenvectors ( 1, −1, 0, 0 )^T, ( 0, 0, −3, 1 )^T, and 7, eigenvectors ( 1, 1, 0, 0 )^T, ( 0, 0, 1, 1 )^T.
(k) Eigenvalues: 1, 2, 3, 4; eigenvectors: ( 0, 0, 0, 1 )^T, ( 0, 0, 1, 1 )^T, ( 0, 1, 1, 0 )^T, ( 1, 1, 0, 0 )^T.
8.2.2. (a) Find the eigenvalues of the rotation matrix Rθ = [ cos θ, − sin θ; sin θ, cos θ ]. For what values
of θ are the eigenvalues real? (b) Explain why your answer gives an immediate solution to
Exercise 1.5.7c.
Solution: (a) The eigenvalues are cos θ ± i sin θ = e^{± i θ}, with eigenvectors ( 1, ∓ i )^T. They are real
only for θ = 0 and π. (b) Because Rθ − a I has an inverse if and only if a is not an eigenvalue.
8.2.3. Answer Exercise 8.2.2a for the reflection matrix Fθ = [ cos θ, sin θ; sin θ, − cos θ ].
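The eigenvalue formula for Rθ can be checked numerically; this sketch assumes NumPy, and the test angle is an arbitrary choice:

```python
import numpy as np

# Eigenvalues of a rotation matrix should be exp(+i*theta), exp(-i*theta).
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
evals = np.sort_complex(np.linalg.eigvals(R))
expected = np.sort_complex(np.array([np.exp(1j * theta), np.exp(-1j * theta)]))
```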
8.2.4. Write down (a) a 2 × 2 matrix that has 0 as one of its eigenvalues and ( 1, 2 ) T as a cor-
responding eigenvector; (b) a 3 × 3 matrix that has ( 1, 2, 3 )T as an eigenvector for the
eigenvalue −1.
Solution: (a) O, and (b) − I , are trivial examples.
8.2.5. (a) Write out the characteristic equation for the matrix [ 0, 1, 0; 0, 0, 1; α, β, γ ].
(b) Show that, given any 3 numbers a, b, and c, there is a 3 × 3 matrix with characteristic
equation − λ³ + a λ² + b λ + c = 0.
Solution:
(a) The characteristic equation is − λ³ + γ λ² + β λ + α = 0.
(b) Use the matrix in part (a) with γ = a, β = b, α = c.
8.2.6. Find the eigenvalues and eigenvectors of the cross product matrix A = [ 0, c, −b; −c, 0, a; b, −a, 0 ].
Solution: The eigenvalues are 0, ± i √(a² + b² + c²). If a = b = c = 0, then A = O and all vectors are eigenvectors.
8.2.13. (a) Compute the eigenvalues and corresponding eigenvectors of A = [ 1, 4, 4; 3, −1, 0; 0, 2, 3 ].
(b) Compute the trace of A and check that it equals the sum of the eigenvalues. (c) Find
the determinant of A, and check that it is equal to the product of the eigenvalues.
Solution: (a) Eigenvalues: −3, 1, 5; eigenvectors: ( 2, −3, 1 )^T, ( −2/3, −1, 1 )^T, ( 2, 1, 1 )^T.
(b) tr A = 3 = −3 + 1 + 5. (c) det A = −15 = (−3) · 1 · 5.
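A quick numerical cross-check of the eigenvalue, trace, and determinant claims in Exercise 8.2.13 (assuming NumPy is available):

```python
import numpy as np

A = np.array([[1.0,  4.0, 4.0],
              [3.0, -1.0, 0.0],
              [0.0,  2.0, 3.0]])
evals = np.sort(np.linalg.eigvals(A).real)  # eigenvalues are real here
tr, det = np.trace(A), np.linalg.det(A)
```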
8.2.14. Verify the trace and determinant formulae (8.24–25) for the matrices in Exercise 8.2.1.
Solution:
(a) tr A = 2 = 3 + (−1); det A = −3 = 3 · (−1).
(b) tr A = 5/6 = 1/2 + 1/3; det A = 1/6 = (1/2) · (1/3).
(c) tr A = 4 = 2 + 2; det A = 4 = 2 · 2.
(d) tr A = 2 = (1 + i √2) + (1 − i √2); det A = 3 = (1 + i √2) · (1 − i √2).
(e) tr A = 8 = 4 + 3 + 1; det A = 12 = 4 · 3 · 1.
(f) tr A = 1 = 1 + √6 + (−√6); det A = −6 = 1 · √6 · (−√6).
(g) tr A = 2 = 0 + (1 + i) + (1 − i); det A = 0 = 0 · (1 + i) · (1 − i).
(h) tr A = 4 = 2 + 2 + 0; det A = 0 = 2 · 2 · 0.
(i) tr A = 3 = (−1) + 2 + 2; det A = −4 = (−1) · 2 · 2.
(j) tr A = 12 = (−1) + (−1) + 7 + 7; det A = 49 = (−1) · (−1) · 7 · 7.
♥ 8.2.34. (a) Prove that every eigenvalue of a matrix A is also an eigenvalue of its transpose A T .
(b) Do they have the same eigenvectors? (c) Prove that if v is an eigenvector of A with
eigenvalue λ and w is an eigenvector of A^T with a different eigenvalue µ ≠ λ, then v and w
are orthogonal vectors with respect to the dot product.
(d) Illustrate this result when (i) A = [ 0, −1; 2, 3 ], (ii) A = [ 5, −4, 2; 5, −4, 1; −2, 2, −3 ].
Solution:
(a) det(A^T − λ I ) = det(A − λ I )^T = det(A − λ I ), and hence A and A^T have the same
characteristic polynomial.
(b) No. See the examples.
(c) λ v · w = (A v)^T w = v^T A^T w = µ v · w, so if µ ≠ λ, then v · w = 0 and the vectors are
orthogonal.
(d) (i) The eigenvalues are 1, 2; the eigenvectors of A are v1 = ( −1, 1 )^T, v2 = ( −1/2, 1 )^T;
the eigenvectors of A^T are w1 = ( 2, 1 )^T, w2 = ( 1, 1 )^T, and v1, w2 are orthogonal, as
are v2, w1. (ii) The eigenvalues are 1, −1, −2; the eigenvectors of A are v1 = ( 1, 1, 0 )^T,
v2 = ( 1, 2, 1 )^T, v3 = ( 0, 1, 2 )^T; the eigenvectors of A^T are w1 = ( 3, −2, 1 )^T,
w2 = ( 2, −2, 1 )^T, w3 = ( 1, −1, 1 )^T.
Note that vi is orthogonal to wj whenever i ≠ j.
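The orthogonality in part (c) can be verified numerically on matrix (ii); the sketch below assumes NumPy:

```python
import numpy as np

A = np.array([[ 5.0, -4.0,  2.0],
              [ 5.0, -4.0,  1.0],
              [-2.0,  2.0, -3.0]])
v = np.array([1.0,  1.0, 0.0])   # eigenvector of A   for lambda = 1
w = np.array([2.0, -2.0, 1.0])   # eigenvector of A^T for mu = -1
```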
8.2.35. (a) Prove that every real 3 × 3 matrix has at least one real eigenvalue. (b) Find a real
(b) The j-th entry of the eigenvalue equation Mn vk = λk vk reads
sin( (j − 1) k π/(n+1) ) + sin( (j + 1) k π/(n+1) ) = 2 cos( k π/(n+1) ) sin( j k π/(n+1) ),
which is a standard trigonometric identity: sin α + sin β = 2 cos( (α − β)/2 ) sin( (α + β)/2 ). These
are all the eigenvalues because an n × n matrix has at most n eigenvalues.
♦ 8.2.47. Let a, b be fixed scalars. Determine the eigenvalues and eigenvectors of the n × n tridi-
agonal matrix with all diagonal entries equal to a and all sub- and super-diagonal entries
equal to b. Hint: See Exercises 8.2.18, 46.
Solution: We have A = a I + b Mn , so by Exercises 8.2.18 and 8.2.46 it has the same eigenvectors
as Mn , while its corresponding eigenvalues are a + b λk = a + 2 b cos( k π/(n+1) ) for k = 1, . . . , n.
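This eigenvalue formula is easy to test numerically; the values a = 4, b = −1, n = 6 below are arbitrary sample choices, and NumPy is assumed:

```python
import numpy as np

a, b, n = 4.0, -1.0, 6
# Tridiagonal matrix: a on the diagonal, b on the sub- and super-diagonals.
A = a * np.eye(n) + b * (np.eye(n, k=1) + np.eye(n, k=-1))
computed = np.sort(np.linalg.eigvalsh(A))
k = np.arange(1, n + 1)
formula = np.sort(a + 2 * b * np.cos(k * np.pi / (n + 1)))
```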
♥ 8.2.48. Find a formula for the eigenvalues of the tricirculant n × n matrix Z n that has 1’s on
the sub- and super-diagonals as well as its (1, n) and (n, 1) entries, while all other entries
are 0. Hint: Use Exercise 8.2.46 as a guide.
Solution:
λk = 2 cos( 2 k π/n ), vk = ( cos( 2 k π/n ), cos( 4 k π/n ), cos( 6 k π/n ), . . . , cos( 2 (n − 1) k π/n ), 1 )^T,
for k = 1, . . . , n.
8.2.49. Let A be an n × n matrix with eigenvalues λ1 , . . . , λk , and B an m × m matrix with
(h) Eigenvalue 0 has eigenvector ( 0, 0, 0, 1 )^T. Eigenvalue −1 has ( 0, 0, 1, 0 )^T. Eigenvalue 1 has ( 1, 3, 1, 1 )^T, ( 1, 1, 0, 0 )^T. Complete.
(i) Eigenvalue 0 has eigenspace basis ( 1, 0, 1, 0 )^T, ( 2, −1, 0, 1 )^T. Eigenvalue 2 has ( −1, 1, −5, 1 )^T. Not complete.
8.3.3. Which of the following matrices admit eigenvector bases of R^n ? For those that do, exhibit
such a basis. If not, what is the dimension of the subspace of R^n spanned by the eigenvectors?
(a) [ 1, 3; 3, 1 ], (b) [ 1, 3; −3, 1 ], (c) [ 1, 3; 0, 1 ], (d) [ 1, −2, 0; 0, −1, 0; 4, −4, −1 ],
(e) [ 1, −2, 0; 0, −1, 0; 0, −4, −1 ], (f) [ 2, 0, 0; 1, −1, 1; 2, 1, −1 ], (g) [ 0, 0, −1; 0, 1, 0; 1, 0, 0 ],
(h) [ 0, 0, −1, 1; 0, −1, 0, 1; 1, 0, −1, 0; 1, 0, 1, 0 ].
Solution:
(a) Eigenvalues: −2, 4; the eigenvectors ( −1, 1 )^T, ( 1, 1 )^T form a basis for R².
(b) Eigenvalues: 1 − 3 i , 1 + 3 i ; the eigenvectors ( i , 1 )^T, ( − i , 1 )^T are not real, so the dimension
is 0.
(c) Eigenvalue: 1; there is only one eigenvector, v1 = ( 1, 0 )^T, spanning a one-dimensional subspace
of R².
(d) The eigenvalue 1 has eigenvector ( 1, 0, 2 )^T, while the eigenvalue −1 has eigenvectors ( 1, 1, 0 )^T,
( 0, 0, −1 )^T. The eigenvectors form a basis for R³.
(e) The eigenvalue 1 has eigenvector ( 1, 0, 0 )^T, while the eigenvalue −1 has eigenvector ( 0, 0, 1 )^T.
The eigenvectors span a two-dimensional subspace of R³.
The real eigenvectors span a two-dimensional subspace of R 4 .
8.3.4. Answer Exercise 8.3.3 with R n replaced by C n .
Solution: Cases (a,b,d,f,g,h) all have eigenvector bases of C n .
8.3.5. (a) Give an example of a 3 × 3 matrix that only has 1 as an eigenvalue, and has only one
linearly independent eigenvector. (b) Give an example that has two linearly independent
eigenvectors.
Solution: (a) [ 1, 1, 0; 0, 1, 1; 0, 0, 1 ], (b) [ 1, 0, 0; 0, 1, 1; 0, 0, 1 ].
8.3.6. True or false: (a) Every diagonal matrix is complete. (b) Every upper triangular ma-
trix is complete.
Solution: (a) True. The standard basis vectors are eigenvectors. (b) False. The Jordan matrix
[ 1, 1; 0, 1 ] is incomplete, since e1 is the only eigenvector.
8.3.7. Prove that if A is a complete matrix, so is c A+d I , where c, d are any scalars. Hint: Use
Exercise 8.2.18.
Solution: According to Exercise 8.2.18, every eigenvector of A is an eigenvector of c A + d I with
eigenvalue c λ + d, and hence if A has a basis of eigenvectors, so does c A + d I .
8.3.8. (a) Prove that if A is complete, so is A2 . (b) Give an example of an incomplete matrix
A such that A2 is complete.
Solution: (a) Every eigenvector of A is an eigenvector of A² with eigenvalue λ², and hence if A
has a basis of eigenvectors, so does A². (b) A = [ 0, 1; 0, 0 ], with A² = O.
♦ 8.3.9. Suppose v1 , . . . , vn forms an eigenvector basis for the complete matrix A, with λ1 , . . . , λn
the corresponding eigenvalues. Prove that every eigenvalue of A is one of the λ 1 , . . . , λn .
Solution: Suppose A v = λ v. Write v = c1 v1 + · · · + cn vn . Then A v = c1 λ1 v1 + · · · + cn λn vn
and hence, by linear independence, λi ci = λ ci . Thus, either λ = λi or ci = 0. Q.E.D.
8.3.10. (a) Prove that if λ is an eigenvalue of A, then λⁿ is an eigenvalue of Aⁿ. (b) State
and prove a converse if A is complete. Hint: Use Exercise 8.3.9. (The completeness hypothesis
is not essential, but this is harder, relying on the Jordan canonical form.)
Solution: (a) If A v = λ v, then, by induction, Aⁿ v = λⁿ v, and hence v is an eigenvector of Aⁿ
with eigenvalue λⁿ. (b) Conversely, if A is complete and Aⁿ has eigenvalue µ, then there is a
(complex) n-th root λ = µ^{1/n} that is an eigenvalue of A. Indeed, the eigenvector basis of A is an
8.3.15. Diagonalize the following matrices: (a) [ 3, −9; 2, −6 ], (b) [ 5, −4; 2, −1 ], (c) [ −4, −2; 5, 2 ],
(f) S = [ −1/5 − 3/5 i , −1/5 + 3/5 i , −1; −1, −1, 0; 1, 1, 1 ], D = diag( 2 + 3 i , 2 − 3 i , −2 ).
(g) S = [ 1, 0, −1; 0, −1, 0; 0, 1, 1 ], D = diag( 2, 2, −3 ).
(h) S = [ −4, 3, 1, 0; −3, 2, 0, 1; 0, 6, 0, 0; 12, 0, 0, 0 ], D = diag( −2, −1, 1, 2 ).
(i) S = [ 0, −1, 0, 1; −1, 0, 1, 0; 0, 1, 0, 1; 1, 0, 1, 0 ], D = diag( −1, −1, 1, 1 ).
(j) S = [ −1, 1, (3/2) i , −(3/2) i ; 1, −3, −1/2 − 2 i , −1/2 + 2 i ; 0, 0, 1 + i , 1 − i ; 0, 0, 1, 1 ],
D = diag( 1, −1, i , − i ).
8.3.16. Diagonalize the Fibonacci matrix F = [ 1, 1; 1, 0 ].
Solution:
[ 1, 1; 1, 0 ] = [ (1 + √5)/2 , (1 − √5)/2 ; 1, 1 ] · diag( (1 + √5)/2 , (1 − √5)/2 ) · [ 1/√5 , −(1 − √5)/(2 √5) ; −1/√5 , (1 + √5)/(2 √5) ].
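The diagonalization yields Binet's formula for the Fibonacci numbers, which can be checked against powers of F; this sketch assumes NumPy:

```python
import numpy as np

F = np.array([[1, 1], [1, 0]])
phi = (1 + np.sqrt(5)) / 2   # eigenvalues of F
psi = (1 - np.sqrt(5)) / 2

def binet(n):
    """n-th Fibonacci number via the eigenvalues of F."""
    return round((phi**n - psi**n) / np.sqrt(5))

F10 = np.linalg.matrix_power(F, 10)  # F^10 contains F_11, F_10, F_9
```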
8.3.17. Diagonalize the matrix [ 0, −1; 1, 0 ] of rotation through 90°. How would you interpret
the result?
Solution: S = [ i , − i ; 1, 1 ], D = diag( i , − i ). A rotation does not stretch any real vectors, but
8.4.1. Find the eigenvalues and an orthonormal eigenvector basis for the following symmetric
matrices: (a) [ 2, 6; 6, −7 ], (b) [ 5, −2; −2, 5 ], (c) [ 2, −1; −1, 5 ],
(d) [ 1, 0, 4; 0, 1, 3; 4, 3, 1 ], (e) [ 6, −4, 1; −4, 6, −1; 1, −1, 11 ].
Solution:
(a) eigenvalues: 5, −10; eigenvectors: (1/√5) ( 2, 1 )^T, (1/√5) ( 1, −2 )^T.
(b) eigenvalues: 7, 3; eigenvectors: (1/√2) ( −1, 1 )^T, (1/√2) ( 1, 1 )^T.
(c) eigenvalues: (7 + √13)/2 , (7 − √13)/2 ;
eigenvectors: (2/√(26 − 6√13)) ( (3 − √13)/2 , 1 )^T, (2/√(26 + 6√13)) ( (3 + √13)/2 , 1 )^T.
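For the 3 × 3 case (d), an orthonormal eigenvector basis can be produced and checked with NumPy's symmetric eigensolver (the NumPy usage is an assumption, not part of the text):

```python
import numpy as np

A = np.array([[1.0, 0.0, 4.0],
              [0.0, 1.0, 3.0],
              [4.0, 3.0, 1.0]])
evals, Q = np.linalg.eigh(A)  # columns of Q form an orthonormal eigenbasis
```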
8.4.2. Determine whether the following symmetric matrices are positive definite by computing
their eigenvalues. Validate your conclusions by using the methods from Chapter 4.
(a) [ 2, −2; −2, 3 ], (b) [ −2, 3; 3, 6 ], (c) [ 1, −1, 0; −1, 2, −1; 0, −1, 1 ], (d) [ 4, −1, −2; −1, 4, −1; −2, −1, 4 ].
Solution:
(a) eigenvalues 5/2 ± (1/2)√17; positive definite.
(b) eigenvalues −3, 7; not positive definite.
(c) eigenvalues 0, 1, 3; positive semi-definite.
(d) eigenvalues 6, 3 ± √3; positive definite.
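The conclusion for case (d) can be validated numerically, cross-checking the eigenvalue test against a Cholesky factorization in the spirit of Chapter 4 (the NumPy usage is an assumption):

```python
import numpy as np

K = np.array([[ 4.0, -1.0, -2.0],
              [-1.0,  4.0, -1.0],
              [-2.0, -1.0,  4.0]])
evals = np.linalg.eigvalsh(K)
L = np.linalg.cholesky(K)  # succeeds only when K is positive definite
```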
8.4.3. Prove that a symmetric matrix is negative definite if and only if all its eigenvalues are
negative.
Solution: Use the fact that K = − N is positive definite and so has all positive eigenvalues. The
eigenvalues of N = − K are − λj where λj are the eigenvalues of K. Alternatively, mimic the
proof in the book for the positive definite case.
8.4.4. How many orthonormal eigenvector bases does a symmetric n × n matrix have?
Solution: If all eigenvalues are distinct, there are 2n different bases, governed by the choice of
sign in the unit eigenvectors ± uk . If the eigenvalues are repeated, there are infinitely many,
since any orthonormal basis of each eigenspace will contribute to an orthonormal eigenvector
basis of the matrix.
8.4.5. Let A = [ a, b; c, d ]. (a) Write down necessary and sufficient conditions on the entries
a, b, c, d that ensure that A has only real eigenvalues. (b) Verify that all symmetric 2 × 2
matrices satisfy your conditions.
Solution:
(a) The characteristic equation p(λ) = λ2 − (a + d)λ + (a d − b c) = 0 has real roots if and
only if its discriminant is non-negative: 0 ≤ (a + d)2 − 4 (a d − b c) = (a − d)2 + 4 b c, which
is the necessary and sufficient condition for real eigenvalues.
(b) If A is symmetric, then b = c and so the discriminant is (a − d)2 + 4 b2 ≥ 0.
♦ 8.4.6. Let A^T = − A be a real, skew-symmetric n × n matrix. (a) Prove that the only possible
real eigenvalue of A is λ = 0. (b) More generally, prove that all eigenvalues λ of A are
purely imaginary, i.e., Re λ = 0. (c) Explain why 0 is an eigenvalue of A whenever n is
odd. (d) Explain why, if n = 3, the eigenvalues of A ≠ O are 0, i ω, − i ω for some real ω.
(e) Verify these facts for the particular matrices
(i) [ 0, −2; 2, 0 ], (ii) [ 0, 3, 0; −3, 0, −4; 0, 4, 0 ], (iii) [ 0, 1, −1; −1, 0, −1; 1, 1, 0 ],
(iv) [ 0, 0, 2, 0; 0, 0, 0, −3; −2, 0, 0, 0; 0, 3, 0, 0 ].
Solution:
(d) For A = [ 0, c, −b; −c, 0, a; b, −a, 0 ], the eigenvalues are 0, ± i √(a² + b² + c²), and so they are
all zero if and only if A = O.
(e) The eigenvalues are: (i) ± 2 i , (ii) 0, ± 5 i , (iii) 0, ± √3 i , (iv) ± 2 i , ± 3 i .
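Fact (b) and the case (ii) eigenvalues can be confirmed numerically (assuming NumPy):

```python
import numpy as np

# Skew-symmetric matrix (ii): eigenvalues should be 0, +5i, -5i.
A = np.array([[ 0.0, 3.0,  0.0],
              [-3.0, 0.0, -4.0],
              [ 0.0, 4.0,  0.0]])
evals = np.linalg.eigvals(A)
```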
♥ 8.4.7. (a) Prove that every eigenvalue of a Hermitian matrix A, so A† = A as in Exercise
3.6.49, is real. (b) Show that the eigenvectors corresponding to distinct eigenvalues are
orthogonal under the Hermitian dot product on C n . (c) Find the eigenvalues and eigen-
vectors of the following Hermitian matrices, and verify orthogonality:
(i) [ 2, i ; − i , −2 ], (ii) [ 3, 2 − i ; 2 + i , −1 ], (iii) [ 0, i , 0; − i , 0, i ; 0, − i , 0 ].
Solution:
(a) Let A v = λ v. Using the Hermitian dot product and A† = A,
λ ‖ v ‖² = (A v) · v = v · (A v) = λ̄ ‖ v ‖²,
and hence λ = λ̄, which implies that the eigenvalue λ is real.
(b) Let A v = λ v, A w = µ w. Then
λ v · w = (A v) · w = v · (A w) = µ̄ v · w = µ v · w,
since µ is real. Thus, if λ ≠ µ, then v · w = 0.
(c) (i) eigenvalues ± √5; eigenvectors: ( (2 − √5) i , 1 )^T, ( (2 + √5) i , 1 )^T.
(ii) eigenvalues 4, −2; eigenvectors: ( 2 − i , 1 )^T, ( −2 + i , 5 )^T.
(iii) eigenvalues 0, ± √2; eigenvectors: ( 1, 0, 1 )^T, ( −1, i √2 , 1 )^T, ( −1, − i √2 , 1 )^T.
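Parts (a) and (b) can be illustrated numerically on matrix (ii); numpy.linalg.eigh accepts complex Hermitian matrices (NumPy itself is an assumption here):

```python
import numpy as np

A = np.array([[3.0,     2 - 1j],
              [2 + 1j, -1.0  ]])
evals, V = np.linalg.eigh(A)   # real eigenvalues, orthonormal eigenvectors
v, w = V[:, 0], V[:, 1]
```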
♥ 8.4.8. Let M > 0 be a fixed positive definite n × n matrix. A nonzero vector v 6= 0 is called a
generalized eigenvector of the n × n matrix K if
K v = λ M v, v ≠ 0, (8.31)
where the scalar λ is the corresponding generalized eigenvalue. (a) Prove that λ is a gen-
eralized eigenvalue of the matrix K if and only if it is an ordinary eigenvalue of the ma-
trix M −1 K. How are the eigenvectors related? (b) Now suppose K is a symmetric ma-
trix. Prove that its generalized eigenvalues are all real. Hint: First explain why this does
not follow from part (a). Instead mimic the proof of part (a) of Theorem 8.20, using the
weighted Hermitian inner product h v , w i = v T M w in place of the dot product. (c) Show
that if K > 0, then its generalized eigenvalues are all positive: λ > 0. (d) Prove that the
eigenvectors corresponding to different generalized eigenvalues are orthogonal under the
weighted inner product h v , w i = v T M w. (e) Show that, if the matrix pair K, M has n
distinct generalized eigenvalues, then the eigenvectors form an orthogonal basis for R^n.
Solution: (b) If K v = λ M v with v ≠ 0, then
λ ‖ v ‖² = λ v^T M v = (λ M v)^T v = (K v)^T v = v^T (K v) = λ̄ v^T M v = λ̄ ‖ v ‖²,
and hence λ is real. (c) If K > 0, then λ ⟨ v , v ⟩ = v^T (λ M v) = v^T K v > 0, and so, by
positive definiteness of M , λ > 0. (d) If K v = λ M v, K w = µ M w, with λ, µ and v, w real, then
λ ⟨ v , w ⟩ = (λ M v)^T w = (K v)^T w = v^T (K w) = µ v^T M w = µ ⟨ v , w ⟩,
and so if λ ≠ µ, then ⟨ v , w ⟩ = 0, proving orthogonality. (e) Part (d) proves that the
eigenvectors are orthogonal with respect to the inner product induced by M , and so
the result follows immediately from Theorem 5.5.
8.4.9. Compute the generalized eigenvalues and eigenvectors, as in (8.31), for the following
matrix pairs. Verify orthogonality of the eigenvectors under the appropriate inner product.
(a) K = [ 3, −1; −1, 2 ], M = [ 2, 0; 0, 3 ]; (b) K = [ 3, 1; 1, 1 ], M = [ 2, 0; 0, 1 ];
(c) K = [ 2, −1; −1, 4 ], M = [ 2, −1; −1, 1 ]; (d) K = [ 6, −8, 3; −8, 24, −6; 3, −6, 99 ], M = [ 1, 0, 0; 0, 4, 0; 0, 0, 9 ];
(e) K = [ 1, 2, 0; 2, 8, 2; 0, 2, 1 ], M = [ 1, 1, 0; 1, 3, 1; 0, 1, 1 ]; (f) K = [ 5, 3, −5; 3, 3, −1; −5, −1, 9 ], M = [ 3, 2, −3; 2, 2, −1; −3, −1, 5 ].
Solution:
(a) eigenvalues: 5/3, 1/2; eigenvectors: ( −3, 1 )^T, ( 1, 2 )^T;
(b) eigenvalues: 2, 1/2; eigenvectors: ( 1, 1 )^T, ( −1/2, 1 )^T;
(c) eigenvalues: 7, 1; eigenvectors: ( 1/2, 1 )^T, ( 1, 0 )^T;
(d) eigenvalues: 12, 9, 2; eigenvectors: ( 6, −3, 4 )^T, ( −6, 3, 2 )^T, ( 2, 1, 0 )^T;
(e) eigenvalues: 3, 1, 0; eigenvectors: ( 1, −2, 1 )^T, ( −1, 0, 1 )^T, ( 1, −1/2, 1 )^T;
(f) 2 is a double eigenvalue with eigenvector basis ( 1, 0, 1 )^T, ( −1, 1, 0 )^T, while 1 is a simple
eigenvalue with eigenvector ( 2, −2, 1 )^T. For orthogonality you need to select an M-orthogonal
basis of the two-dimensional eigenspace, say by using Gram–Schmidt.
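Case (d) can be cross-checked via part (a) of Exercise 8.4.8, computing ordinary eigenvalues of M⁻¹K (a sketch assuming NumPy):

```python
import numpy as np

K = np.array([[ 6.0, -8.0,  3.0],
              [-8.0, 24.0, -6.0],
              [ 3.0, -6.0, 99.0]])
M = np.diag([1.0, 4.0, 9.0])
# Generalized eigenvalues of K v = lambda M v = eigenvalues of M^{-1} K.
evals = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
v = np.array([2.0, 1.0, 0.0])  # claimed eigenvector for lambda = 2
```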
♦ 8.4.10. Let L = L∗ : R n → R n be a self-adjoint linear transformation with respect to the inner
product h · , · i. Prove that all its eigenvalues are real and the eigenvectors are orthogonal.
Hint: Mimic the proof of Theorem 8.20 replacing the dot product by the given inner product.
C = c0 I + c1 S + c2 S² + · · · + c_{n−1} S^{n−1},
(iv) Eigenvalues 0, 2, 4, 2; eigenvectors ( 1, 1, 1, 1 )^T, ( 1, i , −1, − i )^T, ( 1, −1, 1, −1 )^T, ( 1, − i , −1, i )^T.
(c) The eigenvalues are (i) 6, 3, 3; (ii) 6, 4, 4, 2; (iii) 6, (7 + √5)/2 , (7 + √5)/2 , (7 − √5)/2 , (7 − √5)/2 ; (iv) in
the n × n case, they are 4 + 2 cos( 2 k π/n ) for k = 0, . . . , n − 1. The eigenvalues are real and
positive because the matrices are symmetric, positive definite.
(d) Cases (i, ii) in (d) and all matrices in part (e) are invertible. In general, an n × n circulant
matrix is invertible if and only if none of the roots of the polynomial c0 + c1 x + · · · +
c_{n−1} x^{n−1} = 0 is an nth root of unity: x ≠ e^{2 k π i /n}.
8.4.14. Write out the spectral factorization of the matrices listed in Exercise 8.4.1.
Solution:
(a) [ 2, 6; 6, −7 ] = Q Λ Q^T with Q = [ 2/√5 , 1/√5 ; 1/√5 , −2/√5 ], Λ = diag( 5, −10 ).
(b) [ 5, −2; −2, 5 ] = Q Λ Q^T with Q = [ −1/√2 , 1/√2 ; 1/√2 , 1/√2 ], Λ = diag( 7, 3 ).
(c) [ 2, −1; −1, 5 ] = Q Λ Q^T with Q = [ (3 − √13)/√(26 − 6√13) , (3 + √13)/√(26 + 6√13) ; 2/√(26 − 6√13) , 2/√(26 + 6√13) ], Λ = diag( (7 + √13)/2 , (7 − √13)/2 ).
(d) [ 1, 0, 4; 0, 1, 3; 4, 3, 1 ] = Q Λ Q^T with Q = [ 4/(5√2) , −3/5 , −4/(5√2) ; 3/(5√2) , 4/5 , −3/(5√2) ; 1/√2 , 0 , 1/√2 ], Λ = diag( 6, 1, −4 ).
(e) [ 6, −4, 1; −4, 6, −1; 1, −1, 11 ] = Q Λ Q^T with Q = [ 1/√6 , −1/√3 , 1/√2 ; −1/√6 , 1/√3 , 1/√2 ; 2/√6 , 1/√3 , 0 ], Λ = diag( 12, 9, 2 ).
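The factorization in (e) can be reassembled and verified numerically (assuming NumPy):

```python
import numpy as np

s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
Q = np.array([[ 1/s6, -1/s3, 1/s2],
              [-1/s6,  1/s3, 1/s2],
              [ 2/s6,  1/s3, 0.0 ]])
Lam = np.diag([12.0, 9.0, 2.0])
A = np.array([[ 6.0, -4.0,  1.0],
              [-4.0,  6.0, -1.0],
              [ 1.0, -1.0, 11.0]])
```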
8.4.15. Construct a symmetric matrix with the following eigenvectors and eigenvalues, or explain
why none exists: (a) λ1 = 1, v1 = ( 3/5, 4/5 )^T, λ2 = 3, v2 = ( −4/5, 3/5 )^T,
(b) λ1 = −2, v1 = ( 1, −1 )^T, λ2 = 1, v2 = ( 1, 1 )^T, (c) λ1 = 3, v1 = ( 2, −1 )^T,
(b) det A = λ1 λ2 λ3 = 0.
(c) Only positive semi-definite, not positive definite, since it has a zero eigenvalue.
(d) u1 = ( 1/√2 , 1/√2 , 0 )^T, u2 = ( −1/√6 , 1/√6 , 2/√6 )^T, u3 = ( 1/√3 , −1/√3 , 1/√3 )^T;
(e) [ 2, 1, −1; 1, 2, 1; −1, 1, 2 ] = Q Λ Q^T with Q = [ 1/√2 , −1/√6 , 1/√3 ; 1/√2 , 1/√6 , −1/√3 ; 0 , 2/√6 , 1/√3 ], Λ = diag( 3, 3, 0 );
(f) ( 1, 0, 0 )^T = (1/√2) u1 − (1/√6) u2 + (1/√3) u3.
8.4.18. True or false: A matrix with a real orthonormal eigenvector basis is symmetric.
Solution: True. If Q is the orthogonal matrix formed by the eigenvector basis, then A Q = Q Λ
where Λ is the diagonal eigenvalue matrix. Thus, A = Q Λ Q−1 = Q Λ QT , which is symmetric.
(ii) An ellipse with semi-axes √2 and √(2/3), and principal axes ( −1, 1 )^T, ( 1, 1 )^T.
(iii) An ellipse with semi-axes 1/√(2 + √2) and 1/√(2 − √2), and principal axes ( 1 + √2 , 1 )^T, ( 1 − √2 , 1 )^T.
♦ 8.4.24. Let K be a positive definite 3×3 matrix. (a) Prove that the quadratic equation x T K x = 1
defines an ellipsoid in R 3 . What are its principal axes and semi-axes? (b) Describe the
surface defined by the quadratic equation 11 x2 − 8 x y + 20 y 2 − 10 x z + 8 y z + 11 z 2 = 1.
Solution: (a) Same method as in Exercise 8.4.23. Its principal axes are the eigenvectors of K,
and the semi-axes are the reciprocals of the square roots of the eigenvalues. (b) An ellipsoid with
principal axes ( 1, 0, 1 )^T, ( −1, −1, 1 )^T, ( −1, 2, 1 )^T and semi-axes 1/√6 , 1/√12 , 1/√24 .
8.4.25. Prove that A = A^T has a repeated eigenvalue if and only if it commutes, A J = J A,
with a nonzero skew-symmetric matrix: J^T = − J ≠ O. Hint: First prove this when A is a
diagonal matrix.
Solution: If Λ = diag (λ1 , . . . , λn ), then the (i, j) entry of Λ M is λi mij , whereas the (i, j) entry
of M Λ is λj mij . These are equal if and only if either mij = 0 or λi = λj . Thus, Λ M = M Λ
with M having one or more nonzero off-diagonal entries, which includes the case of nonzero
skew-symmetric matrices, if and only if Λ has one or more repeated diagonal entries. Next,
suppose A = Q Λ Q^T is symmetric with diagonal form Λ. If A J = J A, then Λ M = M Λ
where M = Q^T J Q is also nonzero and skew-symmetric, and hence A has repeated eigenvalues.
Conversely, if λi = λj , choose M such that mij = 1 = − mji , and then A commutes with
J = Q M Q^T . Q.E.D.
♦ 8.4.26. (a) Prove that every positive definite matrix K has a unique positive definite square
root, i.e., a matrix B > 0 satisfying B 2 = K. (b) Find the positive definite square roots of
the following matrices: (i) [ 2, 1; 1, 2 ], (ii) [ 3, −1; −1, 1 ], (iii) [ 2, 0, 0; 0, 5, 0; 0, 0, 9 ],
(iv) [ 6, −4, 1; −4, 6, −1; 1, −1, 11 ].
Solution:
(a) Set B = Q √Λ Q^T , where √Λ is the diagonal matrix with the square roots of the eigenvalues
of K = Q Λ Q^T along its diagonal. Uniqueness follows from the fact that the eigenvectors
and eigenvalues are uniquely determined. (Permuting them does not change the final
form of B.)
8.4.33. Find the minimum and maximum values of the quadratic form 5 x2 + 4 x y + 5 y 2 where
x, y are subject to the constraint x2 + y 2 = 1.
Solution: Maximum: 7; minimum: 3.
8.4.34. Find the minimum and maximum values of the quadratic form 2 x2 + x y + 2 x z + 2 y 2 +
2 z 2 where x, y, z are required to satisfy x2 + y 2 + z 2 = 1.
Solution:
(c) 12 = max{ 6 x² − 8 x y + 2 x z + 6 y² − 2 y z + 11 z² | x² + y² + z² = 1, x − y + 2 z = 0 };
(d) 3 + √3 = max{ 4 x² − 2 x y − 4 x z + 4 y² − 2 y z + 4 z² | x² + y² + z² = 1, x − z = 0 },
8.4.37. What are the minimum and maximum values of the following rational functions:
(a) (3 x² − 2 y²)/(x² + y²), (b) (x² − 3 x y + y²)/(x² + y²), (c) (3 x² + x y + 5 y²)/(x² + y²),
(d) (2 x² + x y + 3 x z + 2 y² + 2 z²)/(x² + y² + z²).
Solution:
(a) Maximum: 3; minimum: −2.
(b) Maximum: 5/2; minimum: −1/2.
(c) Maximum: (8 + √5)/2 = 5.11803; minimum: (8 − √5)/2 = 2.88197.
(d) Maximum: (4 + √10)/2 = 3.58114; minimum: (4 − √10)/2 = 0.41886.
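The extremes in (c) are the eigenvalues of the symmetric coefficient matrix K = [ 3, 1/2; 1/2, 5 ]; a numerical check (assuming NumPy):

```python
import numpy as np

K = np.array([[3.0, 0.5],
              [0.5, 5.0]])
evals = np.linalg.eigvalsh(K)
lo, hi = evals.min(), evals.max()
```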
8.4.38. Find the minimum and maximum values of q(x) = 2 x1 x2 + 2 x2 x3 + · · · + 2 x_{n−1} xn for
‖ x ‖² = 1. Hint: See Exercise 8.2.46.
Solution: Maximum: 2 cos( π/(n+1) ); minimum: − 2 cos( π/(n+1) ).
8.4.39. Suppose K > 0. What is the maximum value of q(x) = x^T K x when x is constrained
to a sphere of radius ‖ x ‖ = r?
Solution: Maximum: r 2 λ1 ; minimum r 2 λn , where λ1 , λn are, respectively, the maximum and
minimum eigenvalues of K.
8.4.40. Let K > 0. Prove the product formula
max{ x^T K x | ‖ x ‖ = 1 } · min{ x^T K^{−1} x | ‖ x ‖ = 1 } = 1.
Solution: Note that v^T K v / ‖ v ‖² = u^T K u, where u = v / ‖ v ‖ is a unit vector. Moreover, if v is
orthogonal to an eigenvector vi , so is u. Therefore,
max{ v^T K v / ‖ v ‖² | v ≠ 0, v · v1 = · · · = v · v_{j−1} = 0 }
= max{ u^T K u | ‖ u ‖ = 1, u · v1 = · · · = u · v_{j−1} = 0 } = λj
by Theorem 8.30.
♥ 8.4.44. (a) Let K, M be positive definite n × n matrices and λ1 ≥ · · · ≥ λn their generalized
eigenvalues, as in Exercise 8.4.9. Prove that the largest generalized eigenvalue can be
characterized by the maximum principle λ1 = max{ x^T K x | x^T M x = 1 }. Hint: Use Exercise
8.4.26. (b) Prove the alternative maximum principle λ1 = max{ x^T K x / x^T M x | x ≠ 0 }.
(c) How would you characterize the smallest generalized eigenvalue? (d) An intermediate
generalized eigenvalue?
Solution:
(a) Let R = √M be the positive definite square root of M , and set K̃ = R^{−1} K R^{−1}. Then
x^T K x = y^T K̃ y and x^T M x = y^T y = ‖ y ‖², where y = R x. Thus,
max{ x^T K x | x^T M x = 1 } = max{ y^T K̃ y | ‖ y ‖² = 1 } = λ̃1 ,
the largest eigenvalue of K̃. But K̃ y = λ y implies K x = λ M x, and so the eigenvalues of
K̃ and the generalized eigenvalues of the pair K, M coincide. Q.E.D.
(b) Write y = x / √(x^T M x), so that y^T M y = 1. Then, by part (a),
max{ x^T K x / x^T M x | x ≠ 0 } = max{ y^T K y | y^T M y = 1 } = λ1 .
8.5.1. Find the singular values of the following matrices: (a) [ 1, 1; 0, 2 ], (b) [ 0, 1; −1, 0 ],
(c) [ 1, −2; −3, 6 ], (d) [ 2, 0, 0; 0, 3, 0 ], (e) [ 2, 1, 0, −1; 0, −1, 1, 1 ], (f) [ 1, −1, 0; −1, 2, −1; 0, −1, 1 ].
Solution: (a) √(3 + √5), √(3 − √5); (b) 1, 1; (c) 5 √2; (d) 3, 2; (e) √7, √2; (f) 3, 1.
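A few of these singular values can be spot-checked numerically (assuming NumPy):

```python
import numpy as np

svals_a = np.linalg.svd(np.array([[1.0, 1.0], [0.0, 2.0]]), compute_uv=False)
svals_c = np.linalg.svd(np.array([[1.0, -2.0], [-3.0, 6.0]]), compute_uv=False)
svals_f = np.linalg.svd(np.array([[ 1.0, -1.0,  0.0],
                                  [-1.0,  2.0, -1.0],
                                  [ 0.0, -1.0,  1.0]]), compute_uv=False)
```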
8.5.2. Write out the singular value decomposition (8.40) of the matrices in Exercise 8.5.1.
Solution:
(a) [ 1, 1; 0, 2 ] = U Σ V^T with U = [ (−1+√5)/√(10−2√5) , (−1−√5)/√(10+2√5) ; 2/√(10−2√5) , 2/√(10+2√5) ], Σ = diag( √(3+√5) , √(3−√5) ), V^T = [ (−2+√5)/√(10−4√5) , 1/√(10−4√5) ; (−2−√5)/√(10+4√5) , 1/√(10+4√5) ].
(b) [ 0, 1; −1, 0 ] = [ 1, 0; 0, −1 ] [ 1, 0; 0, 1 ] [ 0, 1; 1, 0 ].
(c) [ 1, −2; −3, 6 ] = ( −1/√10 , 3/√10 )^T ( 5 √2 ) ( −1/√5 , 2/√5 ).
(d) [ 2, 0, 0; 0, 3, 0 ] = [ 0, 1; 1, 0 ] [ 3, 0, 0; 0, 2, 0 ] [ 0, 1, 0; 1, 0, 0; 0, 0, 1 ].
(e) [ 2, 1, 0, −1; 0, −1, 1, 1 ] = U Σ V^T with U = [ −2/√5 , 1/√5 ; 1/√5 , 2/√5 ], Σ = diag( √7 , √2 ), V^T = [ −4/√35 , −3/√35 , 1/√35 , 3/√35 ; 2/√10 , −1/√10 , 2/√10 , 1/√10 ].
(f) [ 1, −1, 0; −1, 2, −1; 0, −1, 1 ] = U Σ V^T with U = [ 1/√6 , −1/√2 ; −2/√6 , 0 ; 1/√6 , 1/√2 ], Σ = diag( 3, 1 ), V^T = [ 1/√6 , −2/√6 , 1/√6 ; −1/√2 , 0 , 1/√2 ].
8.5.18. True or false: The singular values of A² are the squares σi² of the singular values of A.
8.5.19. True or false: If B = S^{−1} A S are similar matrices, then A and B have the same singular
values.
Solution: False. This is only true if S is an orthogonal matrix.
♦ 8.5.20. A complex matrix A is called normal if it commutes with its Hermitian transpose
A† = A̅^T, so A† A = A A†. (a) Show that a real matrix is normal if it com-
mutes with its transpose: AT A = A AT . (b) Show that every real symmetric matrix is
normal. (c) Find a normal matrix which is real but not symmetric. (d) Show that the
eigenvectors of a normal matrix form an orthogonal basis of C n under the Hermitian dot
product. (e) Show that the converse is true: a matrix has an orthogonal eigenvector basis
of C n if and only if it is normal. (f ) Prove that if A is normal, the singular values of An
are σin where σi are the singular values of A. Show that this result is not true if A is not
normal.
(b) The semi-axes are the eigenvalues: 12, 9, 2; the principal axes are the eigenvectors:
( 1, −1, 2 )^T, ( −1, 1, 1 )^T, ( 1, 1, 0 )^T.
(c) Since the unit sphere has volume (4/3) π, the volume of E is (4/3) π det A = 288 π.
♦ 8.5.24. Optimization Principles for Singular Values: Let A be any nonzero m × n matrix.
Prove that (a) σ1 = max{ ‖ A u ‖ | ‖ u ‖ = 1 }. (b) Is the minimum the smallest singular
value? (c) Can you design an optimization principle for the intermediate singular values?
Solution:
(a) ‖ A u ‖² = (A u)^T A u = u^T K u, where K = A^T A. According to Theorem 8.28,
max{ ‖ A u ‖² = u^T K u | ‖ u ‖ = 1 } is the largest eigenvalue λ1 of K = A^T A, hence
the maximum value of ‖ A u ‖ is √λ1 = σ1 . Q.E.D.
(b) This is true if rank A = n by the same reasoning, but false if ker A ≠ {0}, since then the
minimum is 0, but, according to our definition, singular values are always nonzero.
(c) The k-th singular value σk is obtained by maximizing ‖ A u ‖ over all unit vectors which
are orthogonal to the first k − 1 singular vectors.
♦ 8.5.25. Let A be a square matrix. Prove that its maximal eigenvalue is no larger than its maximal
singular value: max | λi | ≤ max σi . Hint: Use Exercise 8.5.24.
Solution: Let λ1 be an eigenvalue of maximal modulus, and let u1 be a corresponding unit
eigenvector. By Exercise 8.5.24, σ1 ≥ ‖ A u1 ‖ = | λ1 |.
8.5.26. Let A be a nonsingular square matrix. Prove that
κ(A) = max{ ‖ A u ‖ | ‖ u ‖ = 1 } / min{ ‖ A u ‖ | ‖ u ‖ = 1 }.
Solution: By Exercise 8.5.24, the numerator is the largest singular value, while the denominator
is the smallest, and so the ratio is the condition number.
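This ratio of extreme singular values is what numpy.linalg.cond computes by default; a quick check on an arbitrary sample matrix (the values are my own, not from the text):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
svals = np.linalg.svd(A, compute_uv=False)
kappa = svals.max() / svals.min()
```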
8.5.27. Find the pseudoinverse of the following matrices: (a) [ 1, −1; −3, 3 ], (b) [ 1, −2; 2, 1 ],
(c) [ 2, 0; 0, −1; 0, 0 ], (d) [ 0, 0, 1; 0, −1, 0; 0, 0, 0 ], (e) [ 1, −1, 1; −2, 2, −2 ],
(f) [ 1, 3; 2, 6; 3, 9 ], (g) [ 1, 2, 0; 0, 1, 1; 1, 1, −1 ].
Solution: (a) [ 1/20, −3/20; −1/20, 3/20 ], (b) [ 1/5, 2/5; −2/5, 1/5 ], (c) [ 1/2, 0, 0; 0, −1, 0 ],
(d) [ 0, 0, 0; 0, −1, 0; 1, 0, 0 ],
♥ 8.5.29. Prove that the pseudoinverse satisfies the following identities: (a) (A + )+ = A,
(b) A A+ A = A, (c) A+ A A+ = A+ , (d) (A A+ )T = A A+ , (e) (A+ A)T = A+ A.
Solution: We repeatedly use the fact that the columns of P, Q are orthonormal, and so
P^T P = I , Q^T Q = I .
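The identities of Exercise 8.5.29 can also be verified numerically, here on the rank-one matrix (f) from Exercise 8.5.27 (assuming NumPy and its numpy.linalg.pinv):

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [2.0, 6.0],
              [3.0, 9.0]])
Ap = np.linalg.pinv(A)  # Moore-Penrose pseudoinverse
```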
8.6.1. For each of the following Jordan matrices, identify the Jordan blocks. Write down the
eigenvalues, the eigenvectors, and the Jordan basis. Clearly identify the Jordan chains.
8.6.3. Write down all possible 3×3 Jordan matrices that have eigenvalues 2 and 5 (and no others).
Solution:
[ 2, 0, 0; 0, 2, 0; 0, 0, 5 ], [ 2, 0, 0; 0, 5, 0; 0, 0, 2 ], [ 5, 0, 0; 0, 2, 0; 0, 0, 2 ],
[ 2, 0, 0; 0, 5, 0; 0, 0, 5 ], [ 5, 0, 0; 0, 2, 0; 0, 0, 5 ], [ 5, 0, 0; 0, 5, 0; 0, 0, 2 ],
[ 2, 1, 0; 0, 2, 0; 0, 0, 5 ], [ 5, 0, 0; 0, 2, 1; 0, 0, 2 ], [ 2, 0, 0; 0, 5, 1; 0, 0, 5 ], [ 5, 1, 0; 0, 5, 0; 0, 0, 2 ].
(f) Eigenvalue: 2. Jordan basis: v1 = ( −2, −1, 1, 0 )^T, v2 = ( 1, 0, 1/2, 0 )^T, v3 = ( −1, 1/2, 0, 0 )^T,
v4 = ( 0, 0, 0, −1/2 )^T.
Jordan canonical form: [ 2, 0, 0, 0; 0, 2, 1, 0; 0, 0, 2, 1; 0, 0, 0, 2 ].
8.6.5. Write down a formula for the inverse of a Jordan block matrix. Hint: Try some small
examples first to help figure out the pattern.
Solution:
J_{λ,n}^{−1} = [ λ^{−1}, −λ^{−2}, λ^{−3}, −λ^{−4}, . . . , −(−λ)^{−n};
0, λ^{−1}, −λ^{−2}, λ^{−3}, . . . , −(−λ)^{−(n−1)};
0, 0, λ^{−1}, −λ^{−2}, . . . , −(−λ)^{−(n−2)};
0, 0, 0, λ^{−1}, . . . , −(−λ)^{−(n−3)};
. . .
0, 0, 0, 0, . . . , λ^{−1} ].
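The pattern can be verified numerically for a sample block, here with λ = 2 and n = 5 (arbitrary choices, assuming NumPy):

```python
import numpy as np

lam, n = 2.0, 5
J = lam * np.eye(n) + np.eye(n, k=1)  # Jordan block J = lam*I + N
# Claimed inverse: entry (i, j) is (-1)^(j-i) * lam^(-(j-i)-1) for j >= i.
Jinv = np.array([[(-1.0)**(j - i) * lam**(-(j - i) - 1) if j >= i else 0.0
                  for j in range(n)] for i in range(n)])
```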
8.6.9. True or false: If A has Jordan canonical form J, then A² has Jordan canonical form J².
†
See Exercise 1.2.33 for the basics of matrix polynomials.