MATH 235 Assignment 6
Hence a basis for $E_1$ is $\left\{ \begin{bmatrix} 3 \\ 1 \end{bmatrix} \right\} = \{\vec{v}_1\}$. For $\lambda_2 = -3$ we get
$$A + 3I \sim \begin{bmatrix} 1 & 1/3 \\ 0 & 0 \end{bmatrix}$$
Hence a basis for $E_2$ is $\left\{ \begin{bmatrix} 1 \\ -3 \end{bmatrix} \right\} = \{\vec{v}_2\}$. Note that $\vec{v}_1 \cdot \vec{v}_2 = 3(1) + 1(-3) = 3 - 3 = 0$, so $\vec{v}_1$ and $\vec{v}_2$ are orthogonal. We normalize the vectors to get the orthonormal basis
$$\left\{ \begin{bmatrix} 3/\sqrt{10} \\ 1/\sqrt{10} \end{bmatrix}, \begin{bmatrix} 1/\sqrt{10} \\ -3/\sqrt{10} \end{bmatrix} \right\}$$
Hence we get
$$P = \begin{bmatrix} 3/\sqrt{10} & 1/\sqrt{10} \\ 1/\sqrt{10} & -3/\sqrt{10} \end{bmatrix} \quad \text{and} \quad D = P^T A P = \begin{bmatrix} 7 & 0 \\ 0 & -3 \end{bmatrix}$$
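The matrix $A$ of part (a) is not restated in this excerpt, but it is determined by the $P$ and $D$ above. A minimal NumPy sketch, assuming the $P$ and $D$ computed in part (a), recovers $A$ and checks the diagonalization:

```python
import numpy as np

# Part 1(a): the orthogonal matrix P and diagonal matrix D from above.
s = 1 / np.sqrt(10)
P = np.array([[3*s,  s],
              [  s, -3*s]])
D = np.diag([7.0, -3.0])

# P should be orthogonal: P^T P = I.
assert np.allclose(P.T @ P, np.eye(2))

# Recover the symmetric matrix being diagonalized: A = P D P^T.
A = P @ D @ P.T
assert np.allclose(A, A.T)           # A is symmetric
assert np.allclose(P.T @ A @ P, D)   # and P^T A P = D, as claimed
```

Here $A$ works out to $\begin{bmatrix} 6 & 3 \\ 3 & -2 \end{bmatrix}$, consistent with the eigenpairs $(7, \vec{v}_1)$ and $(-3, \vec{v}_2)$ found above.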
(b) $A = \begin{bmatrix} 6 & 4 & 2 \\ 4 & 12 & 4 \\ 2 & 4 & 6 \end{bmatrix}$

The characteristic polynomial of $A$ is
$$C(\lambda) = \det \begin{bmatrix} 6-\lambda & 4 & 2 \\ 4 & 12-\lambda & 4 \\ 2 & 4 & 6-\lambda \end{bmatrix}$$
$$= (6-\lambda)\big((12-\lambda)(6-\lambda) - (4)(4)\big) - 4\big(4(6-\lambda) - (4)(2)\big) + (2)\big(4(4) - (12-\lambda)(2)\big)$$
$$= -\lambda^3 + 24\lambda^2 - 144\lambda + 256$$
$$= (16-\lambda)(\lambda-4)^2$$
Thus the eigenvalues are $\lambda_1 = 16$ with algebraic multiplicity 1 and $\lambda_2 = 4$ with algebraic multiplicity 2. For $\lambda_1 = 16$ we get
$$A - 16I \sim \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{bmatrix}$$
Hence a basis for $E_1$ is $\left\{ \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} \right\} = \{\vec{v}_1\}$. For $\lambda_2 = 4$ we get
$$A - 4I \sim \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$
Hence a basis for $E_2$ is $\left\{ \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix} \right\} = \{\vec{v}_2, \vec{v}_3\}$. Observe that both $\vec{v}_2$ and $\vec{v}_3$ are orthogonal to $\vec{v}_1$ but not to each other. Thus we apply the Gram-Schmidt procedure to $\{\vec{v}_2, \vec{v}_3\}$ to get an orthogonal basis for $E_2$.
$$\vec{w}_2 = \vec{v}_2 = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$$
$$\vec{w}_3 = \vec{v}_3 - \frac{\langle \vec{v}_3, \vec{w}_2 \rangle}{\|\vec{w}_2\|^2}\, \vec{w}_2 = \begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix} - \frac{2}{2} \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix}$$
Thus $\{\vec{w}_2, \vec{w}_3\}$ is an orthogonal basis for $E_2$, and so $\{\vec{v}_1, \vec{w}_2, \vec{w}_3\}$ is an orthogonal basis for $\mathbb{R}^3$ of eigenvectors of $A$. We normalize the vectors to get the basis
$$\left\{ \begin{bmatrix} 1/\sqrt{6} \\ 2/\sqrt{6} \\ 1/\sqrt{6} \end{bmatrix}, \begin{bmatrix} -1/\sqrt{2} \\ 0 \\ 1/\sqrt{2} \end{bmatrix}, \begin{bmatrix} -1/\sqrt{3} \\ 1/\sqrt{3} \\ -1/\sqrt{3} \end{bmatrix} \right\}$$
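The Gram-Schmidt step and the resulting diagonalization for part (b) can be sanity-checked numerically; a minimal sketch, using the vector labels above:

```python
import numpy as np

# Part 1(b): the matrix and the eigenspace bases found above.
A = np.array([[6., 4., 2.],
              [4., 12., 4.],
              [2., 4., 6.]])
v1 = np.array([1., 2., 1.])    # eigenvalue 16
v2 = np.array([-1., 0., 1.])   # eigenvalue 4
v3 = np.array([-2., 1., 0.])   # eigenvalue 4

# Gram-Schmidt on {v2, v3}: keep w2 = v2, subtract the projection of v3 onto w2.
w2 = v2
w3 = v3 - (v3 @ w2) / (w2 @ w2) * w2
assert np.isclose(w2 @ w3, 0)         # now orthogonal
assert np.allclose(A @ w3, 4 * w3)    # and still an eigenvector for 4

# Assemble P from the normalized eigenvectors; P^T A P should be diag(16, 4, 4).
P = np.column_stack([v / np.linalg.norm(v) for v in (v1, w2, w3)])
assert np.allclose(P.T @ P, np.eye(3))
assert np.allclose(P.T @ A @ P, np.diag([16., 4., 4.]))
```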
Hence we get
$$P = \begin{bmatrix} 1/\sqrt{6} & -1/\sqrt{2} & -1/\sqrt{3} \\ 2/\sqrt{6} & 0 & 1/\sqrt{3} \\ 1/\sqrt{6} & 1/\sqrt{2} & -1/\sqrt{3} \end{bmatrix} \quad \text{and} \quad D = P^T A P = \begin{bmatrix} 16 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{bmatrix}$$

(c) $A = \begin{bmatrix} 1 & 2 & -1 \\ 2 & 2 & 2 \\ -1 & 2 & 1 \end{bmatrix}$

The characteristic polynomial of $A$ is
$$C(\lambda) = \det \begin{bmatrix} 1-\lambda & 2 & -1 \\ 2 & 2-\lambda & 2 \\ -1 & 2 & 1-\lambda \end{bmatrix} = -\lambda^3 + 4\lambda^2 + 4\lambda - 16 = -(\lambda-4)(\lambda-2)(\lambda+2)$$
Thus the eigenvalues are $\lambda_1 = 4$, $\lambda_2 = 2$ and $\lambda_3 = -2$, each with algebraic multiplicity 1.
For $\lambda_1 = 4$ we get
$$A - 4I \sim \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{bmatrix}$$
Hence a basis for $E_1$ is $\left\{ \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} \right\} = \{\vec{v}_1\}$. For $\lambda_2 = 2$ we get
$$A - 2I \sim \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$
Hence a basis for $E_2$ is $\left\{ \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \right\} = \{\vec{v}_2\}$. For $\lambda_3 = -2$ we get
$$A + 2I \sim \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}$$
Hence a basis for $E_3$ is $\left\{ \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix} \right\} = \{\vec{v}_3\}$. Because these are eigenvectors for distinct eigenvalues of a symmetric matrix, we know that $\{\vec{v}_1, \vec{v}_2, \vec{v}_3\}$ is an orthogonal basis for $\mathbb{R}^3$ of eigenvectors of $A$. We normalize the vectors to get the basis
$$\left\{ \begin{bmatrix} 1/\sqrt{6} \\ 2/\sqrt{6} \\ 1/\sqrt{6} \end{bmatrix}, \begin{bmatrix} -1/\sqrt{2} \\ 0 \\ 1/\sqrt{2} \end{bmatrix}, \begin{bmatrix} 1/\sqrt{3} \\ -1/\sqrt{3} \\ 1/\sqrt{3} \end{bmatrix} \right\}$$
Hence we get
$$P = \begin{bmatrix} 1/\sqrt{6} & -1/\sqrt{2} & 1/\sqrt{3} \\ 2/\sqrt{6} & 0 & -1/\sqrt{3} \\ 1/\sqrt{6} & 1/\sqrt{2} & 1/\sqrt{3} \end{bmatrix} \quad \text{and} \quad D = P^T A P = \begin{bmatrix} 4 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -2 \end{bmatrix}$$
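Part (c) can likewise be verified numerically; a sketch, assuming the signs of the entries of $A$ and of the eigenvectors are as written above:

```python
import numpy as np

# Part 1(c): A with eigenpairs (4, v1), (2, v2), (-2, v3) as computed above.
A = np.array([[ 1., 2., -1.],
              [ 2., 2.,  2.],
              [-1., 2.,  1.]])
v1 = np.array([ 1.,  2., 1.])
v2 = np.array([-1.,  0., 1.])
v3 = np.array([ 1., -1., 1.])

# Each vector is an eigenvector for the claimed eigenvalue.
for lam, v in [(4, v1), (2, v2), (-2, v3)]:
    assert np.allclose(A @ v, lam * v)

# Assemble P from the normalized eigenvectors and check P^T A P = D.
P = np.column_stack([v / np.linalg.norm(v) for v in (v1, v2, v3)])
D = np.diag([4., 2., -2.])
assert np.allclose(P.T @ P, np.eye(3))
assert np.allclose(P.T @ A @ P, D)
```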
2. Let $A$ be a symmetric matrix. Prove that $A^2 = I$ if and only if all eigenvalues of $A$ are $\pm 1$.

Suppose $A^2 = I$. Then if $(\lambda, \vec{v})$ is an eigenpair of $A$ we have
$$\vec{v} = I\vec{v} = A^2\vec{v} = A(A\vec{v}) = A(\lambda\vec{v}) = \lambda A\vec{v} = \lambda^2 \vec{v}$$
so $(1 - \lambda^2)\vec{v} = \vec{0}$, and since $\vec{v} \neq \vec{0}$ by assumption (as an eigenvector) we must have $0 = 1 - \lambda^2 = (1+\lambda)(1-\lambda)$, which has roots $\lambda = \pm 1$ as required.

On the other hand, suppose all eigenvalues of $A$ are $\pm 1$. Since $A$ is symmetric, find an orthogonal matrix $P$ such that $P^T A P = D := \operatorname{diag}(1, \ldots, 1, -1, \ldots, -1)$, where the eigenvalues of $A$ have been arranged so that all the $1$s come first and the $-1$s come after. Then we have $D^2 = \operatorname{diag}(1^2, \ldots, 1^2, (-1)^2, \ldots, (-1)^2) = I$, so
$$I = D^2 = (P^T A P)^2 = P^T A P P^T A P = P^T A^2 P$$
since $P P^T = I$ by orthogonality of $P$. Multiplying on the left by $P$ and on the right by $P^T$ gives
$$A^2 = P P^T A^2 P P^T = P I P^T = P P^T = I$$
as required.
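As a concrete illustration (a hypothetical example, not part of the assignment): a Householder reflection $H = I - 2\vec{u}\vec{u}^T / \|\vec{u}\|^2$ is symmetric with eigenvalues $\pm 1$, so the result predicts $H^2 = I$:

```python
import numpy as np

# Householder reflection: symmetric, orthogonal, with eigenvalues -1 (once) and +1 (twice).
u = np.array([1., 2., 2.])
H = np.eye(3) - 2 * np.outer(u, u) / (u @ u)

assert np.allclose(H, H.T)                                        # symmetric
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), [-1., 1., 1.]) # eigenvalues are +-1
assert np.allclose(H @ H, np.eye(3))                              # hence H^2 = I
```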
3. Prove that every $n \times n$ matrix $A$ with real eigenvalues is orthogonally similar to a lower triangular matrix $T$.

Since the determinant is preserved by taking the transpose, and $(A - \lambda I)^T = A^T - \lambda I$ because transpose is linear and $I$ is symmetric, we have
$$\det(A^T - \lambda I) = \det((A - \lambda I)^T) = \det(A - \lambda I)$$
so $A$ and $A^T$ have the same characteristic polynomial and thus the same eigenvalues. Therefore $A^T$ has real eigenvalues, and so by the triangularization theorem there exists an orthogonal matrix $P$ and an upper triangular matrix $T$ such that $P^T A^T P = T$. Taking transposes in this equation gives $P^T A P = T^T$, and since $T$ is upper triangular, $T^T$ is lower triangular. Therefore we have shown that $A$ is orthogonally similar to a lower triangular matrix, as required.
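A small numeric sketch of this argument (the matrix and eigenvector here are hypothetical choices): triangularize $A^T$ using an orthogonal $P$ whose first column is an eigenvector of $A^T$, then transpose to lower-triangularize $A$:

```python
import numpy as np

# Hypothetical 2x2 example with real eigenvalues 5 and 2.
A = np.array([[4., 1.],
              [2., 3.]])

# An eigenvector of A^T for eigenvalue 5 is (2, 1); extend it to an orthogonal P.
P = np.array([[2., -1.],
              [1.,  2.]]) / np.sqrt(5)
assert np.allclose(P.T @ P, np.eye(2))

upper = P.T @ A.T @ P   # triangularization of A^T (upper triangular)
lower = P.T @ A @ P     # its transpose: the same P lower-triangularizes A
assert np.allclose(upper, np.triu(upper))
assert np.allclose(lower, np.tril(lower))
assert np.allclose(lower, upper.T)
```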
4. Prove or disprove: if $A \in M_{n \times n}(\mathbb{R})$ is a diagonalizable matrix such that eigenvectors corresponding to different eigenvalues are orthogonal, then $A$ is symmetric.

Enumerate the eigenvalues of $A$ as $\{\lambda_1, \ldots, \lambda_k\}$. Find a basis $B_i$ for each eigenspace $E_i$, then apply the Gram-Schmidt procedure and normalize to get an orthonormal basis $C_i$ for each $E_i$. By hypothesis, vectors from different $C_i$ are orthogonal, so $C = C_1 \cup \cdots \cup C_k$ is an orthonormal set of vectors containing as many elements as $B_1 \cup \cdots \cup B_k$, which was a basis for $\mathbb{R}^n$ since $A$ was assumed to be diagonalizable. Moreover, $C$ consists entirely of eigenvectors of $A$. Therefore if $C = \{\vec{v}_1, \ldots, \vec{v}_n\}$ then the matrix $P = \begin{bmatrix} \vec{v}_1 & \cdots & \vec{v}_n \end{bmatrix}$ is an orthogonal matrix such that $P^T A P$ is diagonal. Then, by a theorem in the textbook, since $A$ is orthogonally diagonalizable it is symmetric, as required.
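The final step (orthogonally diagonalizable implies symmetric) can be illustrated numerically; a sketch with a hypothetical random example:

```python
import numpy as np

# If an orthogonal P diagonalizes A, i.e. A = P D P^T, then A is symmetric.
rng = np.random.default_rng(0)
P, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # a random orthogonal matrix via QR
D = np.diag([3., 3., -1., 5.])                    # real eigenvalues, repeats allowed

A = P @ D @ P.T                                   # orthogonally diagonalizable by construction
assert np.allclose(P.T @ A @ P, D)
assert np.allclose(A, A.T)                        # and indeed symmetric
```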
5. (a) Prove that if $A$ is a $3 \times 3$ upper triangular orthogonal matrix, then $A$ is diagonal.

Write $A = \begin{bmatrix} a & b & c \\ 0 & d & e \\ 0 & 0 & f \end{bmatrix}$. Then since $A$ is orthogonal we have $A^T A = I$ and therefore
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = A^T A = \begin{bmatrix} a & 0 & 0 \\ b & d & 0 \\ c & e & f \end{bmatrix} \begin{bmatrix} a & b & c \\ 0 & d & e \\ 0 & 0 & f \end{bmatrix} = \begin{bmatrix} a^2 & ab & ac \\ ab & b^2 + d^2 & bc + de \\ ac & bc + de & c^2 + e^2 + f^2 \end{bmatrix}$$
Comparing the top left entries we get $a^2 = 1$, so in particular $a \neq 0$. Comparing the top middle entries gives $ab = 0$, and since $a \neq 0$ we conclude that $b = 0$. Similarly, comparing the top right entries gives $ac = 0$, so $c = 0$. Finally, comparing the middle entries and using $b = 0$ gives $d^2 = 1$, so $d \neq 0$; therefore, comparing the right-hand middle entries and using $b = c = 0$, we get $de = 0$ and we can finally conclude that $e = 0$ as well. In conclusion, $b = c = e = 0$, so $A$ is indeed diagonal.
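One way to see part (a) in action (a hypothetical numeric check, not part of the solution): if $M$ is orthogonal and $M = QR$ is its QR factorization, then $R = Q^T M$ is both upper triangular and orthogonal, so by part (a) it must be diagonal with $\pm 1$ entries:

```python
import numpy as np

# A genuine 3x3 orthogonal matrix: a z-axis rotation composed with a permutation.
t = 0.7
Rz = np.array([[np.cos(t), -np.sin(t), 0.],
               [np.sin(t),  np.cos(t), 0.],
               [0.,         0.,        1.]])
Perm = np.array([[0., 1., 0.],
                 [0., 0., 1.],
                 [1., 0., 0.]])
M = Rz @ Perm
assert np.allclose(M.T @ M, np.eye(3))

# R = Q^T M is upper triangular AND orthogonal, hence diagonal with +-1 entries by part (a).
Q, R = np.linalg.qr(M)
assert np.allclose(R, np.triu(R))
assert np.allclose(np.abs(R), np.eye(3))
```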
(b) Prove that if $B$ is a $3 \times 3$ orthogonal matrix with all real eigenvalues, then $B$ is orthogonally diagonalizable.

By the triangularization theorem, since $B$ has all real eigenvalues, there exists an orthogonal matrix $P$ and an upper triangular matrix $T$ such that $P^T B P = T$. Taking transposes in this equation gives $P^T B^T P = T^T$. Therefore
$$T^T T = (P^T B^T P)(P^T B P) = P^T B^T B P = P^T P = I$$
since both $B$ and $P$ are orthogonal. Therefore $T$ is a $3 \times 3$ upper triangular orthogonal matrix, and so by part (a) $T$ is in fact diagonal, so we have used $P$ to orthogonally diagonalize $B$ as required.