Linear Algebra With Applications
W. Keith Nicholson
December, 2011
TABLE OF CONTENTS
CHAPTER 1 SYSTEMS OF LINEAR EQUATIONS
1.1 Solutions and Elementary Operations
1.2 Gaussian Elimination
1.3 Homogeneous Equations
1.4 An Application to Network Flow
1.5 An Application to Electrical Networks
1.6 An Application to Chemical Reactions
Supplementary Exercises for Chapter 1
CHAPTER 8 ORTHOGONALITY
8.1 Orthogonal Complements and Projections
8.2 Orthogonal Diagonalization
8.3 Positive Definite Matrices
8.4 QR-Factorization
8.5 Computing Eigenvalues
8.6 Complex Matrices
8.7 An Application to Linear Codes over Finite Fields
8.8 An Application to Quadratic Forms
8.9 An Application to Constrained Optimization
8.10 An Application to Statistical Principal Component Analysis
APPENDIX
A Complex Numbers
B Proofs
C Mathematical Induction
Exercises 1.1
9(b)
Hence x = 3, y = 2.
1
4
3 4
(d)
5
4
10(b)
1
0
1
3
1
3
32
1
3
37
y + z =
= 1
3x + 2y + z =
1
9
10
9
73
4
1
13
. Hence x = 1 , y =
9
10
9 ,
z=
1
3
1
3
7
3 .
17
13
11(b)
12
16
solution.
. The last equation is 0x + 0y = 36, which has no
5
36
; this simplifies to x = 5, y = 1.
17 As in the Hint, multiplying by (x^2 + 2)(2x − 1) gives x^2 − x + 3 = (ax + b)(2x − 1) + c(x^2 + 2).
Equating coefficients of powers of x gives the equations 2a + c = 1, −a + 2b = −1, −b + 2c = 3.
Solving this linear system we find a = −1/9, b = −5/9, c = 11/9.
19 If John gets $x per hour and Joe gets $y per hour, the two situations give 2x + 3y = 24.6 and
3x + 2y = 23.9. Solving gives x = $4.50 and y = $5.20.
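A quick numerical check of this small system (not part of the original solution; it only confirms the arithmetic):

```python
import numpy as np

# Coefficient matrix and right-hand side for
#   2x + 3y = 24.60
#   3x + 2y = 23.90
A = np.array([[2.0, 3.0],
              [3.0, 2.0]])
b = np.array([24.60, 23.90])

x, y = np.linalg.solve(A, b)
print(x, y)   # 4.5 5.2, i.e. $4.50 and $5.20 per hour
```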
Exercises 1.2
Gaussian Elimination
2(b)
5
4
2
1
3
0
1
1
11
42
13
13
11
0
1
0
0
0
0
0
0
11
13
11
0
1
3(b) The matrix is already in reduced row-echelon form. The nonleading variables are parameters;
x2 = r, x4 = s and x6 = t.
The first equation is x1 − 2x2 + 2x4 + x6 = 1, whence x1 = 1 + 2r − 2s − t.
The second equation is x3 + 5x4 − 3x6 = 1, whence x3 = 1 − 5s + 3t.
The third equation is x5 + 6x6 = 1, whence x5 = 1 − 6t.
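As a check, these three equations can be handed to a computer algebra system and the parametric solution it returns matches the one read off by hand. A sketch using SymPy, with the coefficient rows taken from the equations as reconstructed above:

```python
from sympy import Matrix, linsolve, symbols

x1, x2, x3, x4, x5, x6 = symbols('x1:7')

# Augmented matrix of the reduced system
#   x1 - 2x2      + 2x4       + x6 = 1
#              x3 + 5x4      - 3x6 = 1
#                        x5  + 6x6 = 1
aug = Matrix([[1, -2, 0, 2, 0,  1, 1],
              [0,  0, 1, 5, 0, -3, 1],
              [0,  0, 0, 0, 1,  6, 1]])

sol = linsolve(aug, (x1, x2, x3, x4, x5, x6))
print(sol)   # x2, x4, x6 free; x1 = 1 + 2*x2 - 2*x4 - x6, etc.
```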
row-echelon form.
1
4(b)
The
The
The
The
3
2
1
3
3
17
37
. Hence x = 17 , y = 37 .
(d)
Note that the variables
order.
in the second equation
are in the wrong
2
2
3 1 2
1 31
3
1
3
is a parameter; then x =
0
2
3
0
+ 13 t
1
3 (t + 2).
2
(f) Again the order of the variables is reversed in the second equation.
2
2 3 5
. There is no solution as the second equation is 0x + 0y = 7.
0
5(b)
2
3
3
3
5
2
1
0
9
14
4
11
22
17
34
4
3
15
11
14
21
1
2
5
14
17
0
1 2 1 2
1 2 1 2
1 2 1
2
0 1 1 3
0 1 1 3 .
(d) 2 5 3 1
0 0
0 7
1 4 3 3
0 2 2
1
There is no solution as the third equation is 0x + 0y + 0z = 7.
(f)
(h)
12
17
10
4
10
2
7
9
1
10
15
3
17
. Hence x = 7, y = 9, z = 1.
10
0
0
. Hence z = t, x = 4, y = 3 + 2t.
6(b) Label the rows of the augmented matrix as R1 , R2 and R3 , and begin the gaussian algorithm
on the augmented matrix keeping track of the row operations:
35
R1
R2
R3
R2
32
R2 R1 .
R3 R1
At this point observe that R3 − R1 = −4(R2 − R1), that is R3 = 5R1 − 4R2. This means
that equation 3 is 5 times equation 1 minus 4 times equation 2, as is readily verified. (The
solution is x1 = t 11, x2 = 2t + 8 and x3 = t.)
7(b)
1
1
x3 = 0.
(d)
0
3
14
14
14
2 ab
8(d)
14
14
14
8
4
0
25b
2ab ,
5+a
2ab
25b
2ab
5+a
2ab
5+a
y = 2ab
.
1
. Hence there is no solution if a = 5. If a = 5,
0 0 5+a
2
1
1
5
and the matrix is
. Then y = t, x = 1 + 25 t.
5+a
Case 1 If ab = 2, it continues
then b =
Hence x4 = t; x1 = 1, x2 = 1 t, x3 = 1 + t.
1
b
1
1
b
1
8(b)
.
a
. Hence x4 = t; x1 = 0, x2 = t,
1
2
b
2
1
2
b
2
a
2
ab
2
1
2
b
2
2a
2 ab
Case 1 If a = 2 it continues:
The unique solution: x =
9(b)
b
c
1
2
2ab
2a .
12 t = 12 (1 t).
1
2
a 2c
b 2a + 4c
3
1
b1
2a
2ab
2a
2(1 b)
, so y = t, x =
b
2
b
2
2ab
2a
a 2c
0
1
2
1
2
1
2
y=
1
2
b1
2a ,
a 2c
1
b 2a + 5c
3a b 6c
b 2a + 4c
Hence, for any values of a, b and c there is a unique solution x = −2a + b + 5c, y = 3a − b − 6c,
and z = −2a + b + 4c.
(d)
ac
ab
ab
1 + abc
1 0 ab 0
, so z = t, x = abt, y = bt.
Case 2 If abc = 1, the matrix is
b
0
0 1
0 0
0
0
Note: It is impossible that there is no solution here: x = y = z = 0 always works.
(f)
a2
1
1
a2
1
1
2(a 1)
2(a 1)
1
0
a1
Case 3 If a ≠ 1 and a ≠ 0, there is a unique solution:
a1
1
0
1
0
Hence x = 1 a1 , y = 0, z = a1 .
a1
1
a
a1
10(b)
1
2
12
3
2
; rank is 1.
(f)
11(b)
(d)
2
3
5
3
1
(f)
1
3
8
17
1
a2
1a
6a
2a
If a = 2 we get
If a = 0, a = 2, we get
12(b) False. A =
(d) False. A =
11
22
; rank = 3.
If a = 0 we get
1
4
1
2
; rank is 1.
a2
a2
4 2a2
2a
a2
a2
2a
a2
1
0
17
a2
a2
2a
4 a2
1
0
0
11
; rank is 2.
; rank = 2.
; rank = 2.
a2
2+a
; rank = 3.
(h) True. A has 3 rows so there can be at most 3 leading 1s. Hence the rank of A is at most 3.
b+c
c+a
a+b
b+c
ba
ab
ca
ac
c a is nonzero (by hypothesis) so that row provides the second leading 1 (its row becomes
b+c
1
0
b+c+a
16(b) Substituting the coordinates of the three points in the equation gives
1 + 1 + a + b + c = 0, that is a + b + c = −2
25 + 9 + 5a − 3b + c = 0, that is 5a − 3b + c = −34
9 + 9 − 3a − 3b + c = 0, that is 3a + 3b − c = 18
1
2
1
2
34
18
5
6
24
24
2
6
3a + 3b c = 18
1
2
a=
1
7
10 a + 10 b
2
10 c
c=
5
1
10 a + 10 b
6
10 c.
Hence
6a + 2b + 2c = 0
a 3b + 2c = 0
6
1
5
5
8 t,
1
0
0
16
5a + b 4c = 0
14
16
14
7
= 8 t, c =
0
0
0
1
0
0
3
1
0
87
58
78
Exercises 1.3
Homogeneous Equations
1(b) False. A =
(d) False. A =
(f) False. A =
(h) False. A =
2(b)
a2
a+3
(d)
1
a
1a
1+a
a+1
a=1:
a = 1 :
; x = t, y = t, z = 0.
; x = t, y = 0, z = t.
1
0
1
0
4
0
1
2 1 1 1 0
1 2 0 2 3 0
1 2 2 0 1 0 0 0 1 1 2 0
1 2 3 1 3 0
0 0 0 0 0 0
x=
2r 2s 3t
= r
s 2t
s
t
2
1
0
0
0
+ s
2
0
1
1
0
10
4
8
2s + 3t
s
t
0
2
1
0
0
2
0
1
= s
+t
x=
]T
+ t
3
0
1
0
x+y = 1
6(b) The system 2x + 2y = 2 has nontrivial solutions with fewer variables than equations.
x y = 1
7(b) There are n − r = 6 − 1 = 5 parameters by Theorem 2 Section 1.2.
(d) The row-echelon form has four rows and, as it has a row of zeros, has at most 3 leading
1s. Hence rank A = r = 1, 2 or 3 (r ≠ 0 because A has nonzero entries). Thus there are
n − r = 6 − r = 5, 4 or 3 parameters.
9(b) Insisting that the graph of ax + by + cz + d = 0 (the plane) contains the three points leads
to three linear equations in the four variables a, b, c and d. There is a nontrivial solution by
Theorem 1.
10
11. Since the system is consistent there are n − r parameters by Theorem 2 Section 1.2. The
system has nontrivial solutions if and only if there is at least one parameter, that is if and
only if n > r.
Exercises 1.4
1(b) There are five flow equations, one for each junction:
f1 f2
f1
= 25
+ f3
+ f5
f2
= 50
+ f4
+ f7 = 60
f3 + f4
25
50
60
75
40
50
25
35
75
40
85
60
1
0
75
1
0
40
0
= 75
f5 + f6 f7 = 40
1
25
25
60
75
+ f6
40
85
60
35
40
40
f3 = 75 + f4 + f6
f5 = 40 f6 + f7 .
Exercises 1.5
11
I1
I2
I3
Right junction
I1
I2
I3
Top circuit
5I1 + 10I2
Lower circuit
1
5
0
10
10
10
5
15 ,
0
0
Hence I1 =
3
5
I2 =
1
15
10
0
4
5.
4
5
10
and I3 =
10I2 + 5I3 = 10
1
0
0
1
3
2
2
15
3
5
4
5
I1 I5 I6 = 0
Top junction
I2 I4 + I6 = 0
Middle junction
I2 + I3 I5 = 0
10
10
10
1
0
1
1
Right circuit
10I3 + 10I4 = 10
Lower circuit
10I3 + 10I5 = 20
0
0
10
10
10
20
10
1
2
1
0
0
2
1
3
10I5 10I6 = 10
2
1
1
3
2
2
1
0
0
1
1
12
1
2
3
2
3
2
1
2
Exercises 1.6
. Hence I1 = 2, I2 = 1, I3 = 1 , I4 = 3 , I5 = 3 , I6 = 1 .
2
2
2
2
2. Suppose x NH3 + y CuO → z N2 + w Cu + v H2O where x, y, z, w and v are positive integers.
Equating the number of each type of atom on each side gives
N : x = 2z
Cu : y = w
H : 3x = 2v
O:y=v
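These balance equations form a homogeneous linear system whose solution space is one-dimensional; a computer check confirms the smallest positive integer solution. A sketch using SymPy (the column order x, y, z, w, v is an assumption of this snippet):

```python
from sympy import Matrix

# Columns correspond to x, y, z, w, v in  x NH3 + y CuO -> z N2 + w Cu + v H2O.
# Rows are the atom balances  N: x - 2z = 0,   Cu: y - w = 0,
#                             H: 3x - 2v = 0,  O: y - v = 0.
M = Matrix([[1, 0, -2,  0,  0],
            [0, 1,  0, -1,  0],
            [3, 0,  0,  0, -2],
            [0, 1,  0,  0, -1]])

basis = M.nullspace()[0]
basis = basis / min(abs(e) for e in basis if e != 0)   # scale to small integers
print(list(basis))   # [2, 3, 1, 3, 3]  ->  2 NH3 + 3 CuO -> N2 + 3 Cu + 3 H2O
```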
Supplementary Exercises
Chapter 1
1(b) No. If the corresponding planes are parallel and distinct, there is no solution. Otherwise they
either coincide or have a whole common line of solutions.
2(b)
14
10
10
10
6
10
4
10
0
0
3(b)
If a = 1 the matrix is
1a
5 3a
4 a2
If a = 2 the matrix is
z = t.
a1
2
0
0
1
0
0
0
1
(16
10
1a
5 3a
4 a2
3(2 a)
6s 6t) and
16
10
1
10
6
10
1
10
4 a2
, so there is no solution.
, so x = 2 2t, y = t,
13
1a
5 3a
4 a2
3(2 a)
a2
5a+8
3(a1)
a2
3(a1)
a+2
3
3a5
a1
a2 4
a1
a+2
3
Hence x =
85a
3(a1) ,
y=
2
a1
3a5
a1
a2
3(a1) ,
z=
a+4
a1
a2 4
a1
a+2
3
a+2
3 .
4. If R1 and R2 denote the two rows, then the following indicate how they can be interchanged
using row operations of the other two types:
[R1; R2] → [R1 + R2; R2] → [R1 + R2; −R1] → [R2; −R1] → [R2; R1].
Note that only one row operation of Type II was used, a multiplication by −1.
3 a + 2c = 0
2c = 3
that is
3b c 6 = 1
3b
= 9
3a + 2b
3a 2 + 2b = 5
= 7
equations
c has
unique solution:
8.
7
20
3
9
20
15
. Hence a = 1, b = 2, c = 1.
14
.
15
+
2
1
3
10
7
1
9 30 + 7
14
=
=
3 10 7
20
(d) [3 −1 2] − 2[9 3 4] + [3 11 −6] = [3 −1 2] − [18 6 8] + [3 11 −6]
= [3 − 18 + 3, −1 − 6 + 11, 2 − 8 − 6] = [−12 4 −12]
(f)
3(b) 5C 5
T
3
2
2
1
0
0+2
15
10
1 + 0
=3
5
0
(h) 3
2
1
1
0
1
2
6
3
3
0
2
4
2
6
6]
4
1
1
6
16
6(b) Given 4X + 3Y = A and 5X + 4Y = B, subtract the first equation from the second to get
X + Y = B − A. Now subtract 3 times this equation from the first equation:
X = A − 3(B − A) = 4A − 3B. Then X + Y = B − A gives Y = (B − A) − X = (B − A) − (4A − 3B) = 4B − 5A.
Note that this also follows from the Gaussian Algorithm (with matrix constants):
[ 4  3 | A ;  5  4 | B ] → [ 4  3 | A ;  1  1 | B − A ] → [ 1  1 | B − A ;  4  3 | A ]
→ [ 1  1 | B − A ;  0  −1 | 5A − 4B ] → [ 1  0 | 4A − 3B ;  0  1 | 4B − 5A ].
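A numerical spot-check of the formulas X = 4A − 3B and Y = 4B − 5A, using random matrices that are not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2))

X = 4 * A - 3 * B
Y = 4 * B - 5 * A

print(np.allclose(4 * X + 3 * Y, A))   # True
print(np.allclose(5 * X + 4 * Y, B))   # True
```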
=p
+q
+r
+s
p+q+r
q+s
r+s
= a
+ s = b
r + s = c
p
= d
11(b)
(A + A) + A = A + 0 (associative law
0 + A = A + 0 (definition of A)
13(b) If A = diag(a1, a2, . . . , an) and B = diag(b1, b2, . . . , bn), then
A − B = diag(a1 − b1, a2 − b2, . . . , an − bn), so A − B is also diagonal.
=
, so A = 13
3
T
1
0
= 2A 5
(d) 4A 9
=
5
1 0
1 2
9
9
5 5
4
14
Hence 2A =
=
.
9 0
0 10
9 10
4
14
2
7
=
.
Finally A = 21
9
1
(2AT )T
10
1
0
1
2
Exercises 2.2
+ 2x4 = 1
x1 + 2x2 + 2x3 +
= 2
x2 + 2x3 5x4 = 0
18
2(b) x1
+ x2
1
2
3
2
4
By Theorem 4:
1
2
AX =
0
(d) By Definition 1:
AX =
AX =
5(b)
1
1
1
x3
x1
x2
x3
7
9
= x1
x1
x2
x3
x4
1
0
x2
x3
x4
+ x2
= x1
Observe that
2
2
0
(d)
2
7
4x2 + 5x3
x1 + 2x2 + 3x3
4x2 + 5x3
+ x3
4
6
4
x
y
z
1
3
2
3
0
0
2
1
+ x4
5
0
0 x1 + 2 x2 + 1 x3 + 5 x4
2x2 + x3 + 5x4
x1 + 2x2 + 3x3
2 + t
2 3t
t
2
2
0
+ t
(8) x1 + 7 x2 + (3) x3 + 0 x4
1
1
12
3 x1 + (4) x2 + 1 x3 + 6 x4
+ x2
+ x3
2x2 + x3 + 5x4
=
0
0 x1 + (4) x2 + 5 x3
1 x1 + 2 x2 + 3 x3
x1
+ x4
x2
By Theorem 4:
x1
+ x3
3(b) By Definition 1:
1
2
3
AX =
0
3
1
1
3
1
is a solution
19
so
x1
x2
x3
x4
Here
9
2
0
. Hence x1 = 3 t, x2 = 4t 9, x3 = t 2, x4 = t,
3t
9 + 4t
2 + t
t
9
2
0
+ t
4
1
1
4
1
1
homogeneous equations.
6 To say that x0 and x1 are solutions to the homogeneous system Ax = 0 of linear equations
means simply that Ax0 = 0 and Ax1 = 0. If sx0 + tx1 is any linear combination of x0 and
x1 , we compute:
A(sx0 + tx1 ) = A(sx0 ) + A(tx1 ) = s(Ax0 ) + t(Ax1 ) = s0 + t0 = 0
using Theorem 2. This shows that sx0 + tx1 is also a solution to Ax = 0.
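The same computation is easy to see numerically; here A is a made-up rank-1 matrix (not from the exercise) with two obvious solutions of Ax = 0:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6]])          # any matrix with a nontrivial null space will do
x0 = np.array([-2, 1, 0])          # A @ x0 = 0
x1 = np.array([-3, 0, 1])          # A @ x1 = 0

s, t = 2.5, -4.0                   # arbitrary scalars
print(A @ (s * x0 + t * x1))       # [0. 0.] -- still a solution
```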
8(b) The reduction of the augmented matrix is
2
6
11
11
7
3
10(b) False.
2
1
0
0
3
0
1
0
0
+ s
2
1
0
0
0
3
0
+ t
so X =
5
0
2
0
1
3 + 2s 5t
s
1 + 2t
0
t
combination of
(h) False. If A =
1
2
1
1
and
1
1
1
2
1
1
, there is a solution
1
2
1
for B =
1
2
0
0
. But there is no
20
solution for B =
. Indeed, if
1
0
x
y
z
1
0
then x − y + z = 1 and −x + y − z = 0. This is impossible.
11(b) If [x y]^T is reflected in the line y = x the result is [y x]^T; see the diagram for Example 12,
Section 2.4. In other words, T[x y]^T = [y x]^T = [0 1; 1 0][x y]^T. So T has matrix [0 1; 1 0].
(d) If [x y z]^T is reflected in the y-z plane, the reflection keeps y and z the same and negates x.
Hence T[x y z]^T = [−x y z]^T, so the matrix is diag(−1, 1, 1).
Hence we have
A(x + y) = (x1 + y1 )a1 + (x2 + y2 )a2 + + (xn + yn )an
Definition 1
Theorem 1 2.1
= (x1 a1 + x2 a2 + + xn an ) + (y1 a1 + y2 a2 + + yn an )
Theorem 1 2.1
= Ax + Ay.
Exercises 2.3
1(b)
1
2
1
0
2
4
Definition 1
Matrix Multiplication
2
212
4+04
39+0
6+0+0
17+4
2+0+8
1
0
6
6
2
10
21
3]
(d) [1 3
1 3]
(f) [1
(h)
(j)
2
8
= [3 6 + 0 0 + 3 18] = [3 15]
a
b
c
= [2 1 24] = [23]
65
10 10
3 + 3
5 + 6
aa + 0 + 0
0+0+0
0+0+0
0+0+0
0 + bb + 0
0+0+0
0+0+0
0+0+0
0+
0 + cc
aa
bb
cc
2
6
, AC =
BA =
,B =
, CB =
1
CA =
2a + a1
2b + b1
a + 2a1
b + 2b1
5(b) A(BC) =
6(b) If A =
1
0
1
1
then A
2b + b1 = 2
a + 2a1 = 1
b + 2b1 = 4
2a + a1 = 7
=
10
16
1
14
17
1
0
a
c
2
3
1
1
b
A becomes
d
0
, as required.
2
0
0
0
= (AB)C.
whence b = 0
1
0
1
0
1
1
1 0
1
0
1
1
,
,
8(b) (i)
,
,
(ii)
0 1
0 1
0 1
0 1
0 0
0 0
22
12(b) Write A =
where P
,X=
, and Q =
. Then P X +
0 Q
0
1
0
0
0
1
2 1
2 1
P 2 P X + XQ
P2
0
XQ =
+
= 0, so A2 =
=
. Then A4 =
2
0
0
0
0
0
Q
0
Q2
P2
0
P2
0
P4
0
P6
0
6
4
2
=
, A = A A =
, . . . ; in general we claim
2
2
4
6
Q
that
A2k =
P 2k
Q2k
for k = 1, 2, . . .
(*)
2k
=A A =
P 2k
Q2k
P2
Q2
P 2(k+1)
Q2(k+1)
for m = 1, 2, . . .
(**)
m+1
=P P =
1
0
m
1
1
0
(m + 1)
1
A2k =
Finally
A2k+1
13(b)
(d) I
A2k
(2k + 1)
A=
I
0
[X
X
I
P 2k
0 I
2
0
1
0
I]T = I
P 2k
1
1
1
I 2 + X0
IX + XI
0X + I 2
0I + I0
X T
I
2k
1
0
P 2k+1
P 2k X
for
= I2k
= IX T + X T I = Ok
k 1.
23
0 X
(f)
0 X
0
0 X
0
0 X
0 X
0
0 X
0
0 X
I
X2
0 X
2m
X2
X2
X2
Xm
=
for m 1. It is true if m = 1 and,
I 0
0 Xm
if it holds for some m, we have
2(m+1)
2m
2
0 X
0 X
0 X
Xm
0
X 0
X m+1
0
=
.
=
I 0
I 0
I 0
0 Xm
0 X
0
X m+1
2m+1
2m
m
m+1
0 X
0 X
0 X
X
0
0 X
0 X
=
m
I 0
I 0
I 0
0 X
I 0
Xm
0
22(b) Let A =
24
26. We have A =
, and hence A3 =
of length 3 from v1 to v4 because the (4,1)-entry of A3 is 3. Similarly, the fact that the
(3,2)-entry of A3 is 0 means that there are no paths of length 3 from v2 to v3 .
1 0
27(b) False. If A =
= J then AJ = A, but J = I.
0
(d) True. Since A is symmetric, we have AT = A. Hence Theorem 2 2.1 gives (I + A)T =
I T + AT = I + A. In other words, I + A is symmetric.
(f) False. If A = [0 1; 0 0] then A ≠ 0 but A^2 = 0.
(h) True. We are assuming that A commutes with A + B, that is A(A + B) = (A + B)A.
Multiplying out each side, this becomes A2 + AB = A2 + BA. Subtracting A2 from each side
gives AB = BA; that is A commutes with B.
2 4
1
2
(j) False. Let A =
and B =
. Then AB = 0 is the zero matrix so both
1
28(b) If A = [a_ij] the sum of the entries in row i is Σ_{j=1}^n a_ij = 1. Similarly for B = [b_ij]. If
AB = C = [c_ij] then c_ij is the dot product of row i of A with column j of B, that is
c_ij = Σ_{k=1}^n a_ik b_kj. Hence the sum of the entries in row i of C is
Σ_{j=1}^n c_ij = Σ_{j=1}^n Σ_{k=1}^n a_ik b_kj = Σ_{k=1}^n a_ik (Σ_{j=1}^n b_kj) = Σ_{k=1}^n a_ik · 1 = 1.
Easier Proof: Let X be the n1 column matrix with every entry equal to 1. Then the entries
of AX are the row sums of A, so these all equal 1 if and only if AX = X. But if also BX = X
then (AB)X = A(BX) = AX = X, as required.
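The "easier proof" is also easy to see numerically; here A and B are random matrices whose rows each sum to 1 (an illustration, not part of the solution):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 4)); A /= A.sum(axis=1, keepdims=True)   # row sums 1
B = rng.random((4, 4)); B /= B.sum(axis=1, keepdims=True)   # row sums 1

X = np.ones(4)
print(np.allclose(A @ X, X), np.allclose(B @ X, X))   # True True
print((A @ B).sum(axis=1))                            # every row sum of AB is 1.0
```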
30(b) If A = [a_ij] then the trace of A is the sum of the entries on the main diagonal, that is
tr A = a_11 + a_22 + · · · + a_nn. Now the matrix kA is obtained by multiplying every entry of A
by k, that is kA = [k a_ij]. Hence
tr(kA) = k a_11 + k a_22 + · · · + k a_nn = k(a_11 + a_22 + · · · + a_nn) = k tr A.
(e) If A = [a_ij] the transpose A^T is obtained by replacing each entry a_ij by the entry a_ji directly
across the main diagonal. Hence, write A^T = [a'_ij] where a'_ij = a_ji for all i and j. Let b_i
denote the (i, i)-entry of AA^T. Then b_i is the dot product of row i of A and column i of A^T,
that is b_i = Σ_{k=1}^n a_ik a'_ki = Σ_{k=1}^n a_ik a_ik = Σ_{k=1}^n a_ik^2. Hence we obtain
tr(AA^T) = Σ_{i=1}^n b_i = Σ_{i=1}^n Σ_{k=1}^n a_ik^2.
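A one-line numerical check of tr(AA^T) = Σ a_ik^2 for a random matrix (illustration only):

```python
import numpy as np

A = np.random.default_rng(2).normal(size=(3, 5))
print(np.isclose(np.trace(A @ A.T), np.sum(A**2)))   # True
```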
25
a(AB) = a[Abj ]
= [a(Abj )]
Scalar Multiplication
= [A(abj )]
Theorem 2 2.2
Definition 1
= A(aB).
Similarly,
Definition 1
a(AB) = a[Abj ]
= [a(Abj )]
Scalar Multiplication
Theorem 22.2
= [(aA)bj )]
= (aA)B.
Definition 1
Exercises 2.4
Matrix Inverses
2 In each case we need row operations that carry A to I; these same operations carry I to A^{-1}.
In short, [A I] → [I A^{-1}]. This is called the matrix inversion algorithm.
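The algorithm can be carried out by a computer algebra system by row reducing [A I]. A sketch with SymPy, using the 2 × 2 matrix [4 1; 3 2] that the fragments of part (b) appear to show (an assumption of this snippet):

```python
from sympy import Matrix, eye

A = Matrix([[4, 1],
            [3, 2]])               # example invertible matrix

aug = A.row_join(eye(2))           # form [A | I]
R, _ = aug.rref()                  # row reduce to [I | A^(-1)]
A_inv = R[:, 2:]

print(A_inv)                       # Matrix([[2/5, -1/5], [-3/5, 4/5]])
print(A * A_inv)                   # the identity matrix
```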
(b)
We begin
row 2 from row
by subtracting
1.
4 1 1 0
1 1 1 1
1 1
1
1
3 2 0 1
0
3 2 0 1
5 3 4
2
1 1
1
1
1 0
15
5
1
5
2
3
1
4
26
(d)
1
5
(f)
1
3
11
1
0
1
10
Hence A1 =
10
14
. So A1 =
0
1
(j)
2
5
1 1
2
4
54
34
2
4
2
4
24
1
0
0
1
Hence A1 =
2
2
34
. Hence A1 =
5
4
14
1
10
2
10
9
10
(h)
We begin by subtracting
row 2from twice row 1:
3
4
10
2
10
14
10
1
10
2
10
1
10
2
1
14
2
4
14
0 2
2
10
1
2
10
2
5 2 5 .
3 2 1
5
1
4
4
1
2
2
2
27
(l)
Hence
A1
30
15
210
30
105
7
15
5
1
0
35
210
105
35
7
1
4 3
. Left multiply AX = B by A1 to get
algorithm or Example 4) A1 = 51
1 2
1
X =A
AX = A
Hence x = 53 and y = 25 .
(d) Here A =
By the algorithm, A1 =
1
5
14
10
15
1
5
1
1 =
0
, x =
B=
y
z
9
4
10
4
15
1
5
0
1
1
5
3
2
x = A1 (Ax) = A1 b =
Hence x =
23
5 ,
23
5
8
5
, b =
14
1
5
1
5
y = 85 , and z = 25
5 = 5.
14
10
1
0
1
15
1
0
1
5
23
8
25
B = A1 P =
0
1
28
1
10
=
1
2
1
A = (A ) =
(d) We have (I
2AT )1
T T
Thus
=I
This gives AT =
1
2
1
1
1
2
(f) Given
Now
1
A
=
1
(h) Given
(A1
2I)T
= 2
A
Hence
A1
= 2I 2
1 1
A = (A
= 2
1
2
A=
T
1
10
1
=
1
1
1
2
1
2
.
T
1
1
2
1
2
1
1
1
1
2
2I = 2
, so
A=
A = (A ) =
1
1
10
T T
1
5
. Finally
(I 2AT ) =
2AT
1
T
=
2
1
2
1
2
= 2
0
1
2
0
2
1
1
1
1
1
3
1
T
= 2
1
1
1
2
=2
0
1
by the algorithm.
1
1
1
2
. Finally
29
x
3
4
7
1
5
4
and B =
x
= A
=
y
4
7
=
25
9(b) False. A =
and B =
is not.
(h) True. Since A2 is invertible, let (A2 )B = I. Thus A(AB) = I, so AB is the inverse of A by
Theorem 5.
10(b) We are given C^{-1} = A, so C = (C^{-1})^{-1} = A^{-1}. Hence C^T = (A^{-1})^T. This also has the form
C^T = (A^T)^{-1} by Theorem 4. Hence (C^T)^{-1} = A^T.
11(b) If a solution x to Ax = b exists, it can be found by left multiplication by C : CAx = Cb,
Ix = Cb, x = Cb.
3
3
(i) x = Cb =
here but x =
is not a solution. So no solution exists.
0
0
2
(ii) x = Cb =
in this case and this is indeed a solution.
1
1
so B 4 = (B 2 )2 =
1
0
1
0
0
1
0
1
0
Thus B B 3 = I = B 3 B, so B 1 = B 3 = B 2 B =
15(b) B 2 =
c2
ues of c.
c2
c
1
c
0
1
. Hence
0
1
= I.
1
=
0
1
c2
3
c2
c2
18(b) Suppose column j of A consists of zeros. Then Ay = 0 where y is the column with 1 in
position j and zeros elsewhere. If A^{-1} exists, left multiply by A^{-1} to get A^{-1}Ay = A^{-1}0,
that is Iy = 0; a contradiction. So A^{-1} does not exist.
(d) If each column of A sums to 0, then xA = 0 where x is the row of 1s. If A^{-1} exists, right
multiply by A^{-1} to get xAA^{-1} = 0A^{-1}, that is xI = 0, x = 0, a contradiction. So A^{-1} does
not exist.
19(bii) Write A =
1
0
24(b) The condition can be written as A(A^3 + 2A^2 − I) = 4I, whence A[(1/4)(A^3 + 2A^2 − I)] = I.
By Corollary 1 of Theorem 5, A is invertible and A^{-1} = (1/4)(A^3 + 2A^2 − I). Alternatively, this
follows directly by verifying that also [(1/4)(A^3 + 2A^2 − I)]A = I.
25(b) If Bx = 0 then (AB)x = 0, so x = 0 because AB is invertible. Hence B is invertible by
Theorem 5. But then A = (AB)B^{-1} is invertible by Theorem 4 because both AB and B^{-1}
are invertible.
2
1
1
1
1
26(b) As in Example 11, B Y A = (1) [1 3]
= [13 8], so
5
3
1
1
0
3 1
3 1 0
5 2 0 =
0
5 2
=
(1)1
1 3 1
13 8
2
1 0
5
3 0 .
13
8 1
(d) As in Example 11,
1
1
1
1
A1 XB 1
1
1
2
1
2
14
16
1
1
1
1
2
14
16
1
1
9
1
1
2
2
1
1
1
BB
16
, so
14
= I2n
31
= (I + A + A2 + + An1 ) A A2 A3 An
= I An
=I
1
3
32
= In 4XX T + 4XIm X T
= In .
39(b) If P^2 = P then I − 2P is self-inverse because
(I − 2P)(I − 2P) = I − 2P − 2P + 4P^2 = I.
Conversely, if I − 2P is self-inverse then
I = (I − 2P)^2 = I − 4P + 4P^2.
Hence 4P = 4P^2; so P = P^2.
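A numerical illustration with an idempotent matrix (the particular P below is an example, not from the text):

```python
import numpy as np

v = np.array([[3.0], [4.0]])
P = v @ v.T / (v.T @ v)            # projection onto span{v}, so P @ P = P

I = np.eye(2)
print(np.allclose(P @ P, P))                    # True: P is idempotent
print(np.allclose((I - 2*P) @ (I - 2*P), I))    # True: I - 2P is self-inverse
```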
41(b) If A and B are any invertible matrices (of the same size), we compute:
A^{-1}(A + B)B^{-1} = A^{-1}AB^{-1} + A^{-1}BB^{-1} = B^{-1} + A^{-1} = A^{-1} + B^{-1}.
Hence A^{-1} + B^{-1} is invertible by Theorem 4 because each of A^{-1}, A + B, and B^{-1} is invertible.
Furthermore
(A^{-1} + B^{-1})^{-1} = [A^{-1}(A + B)B^{-1}]^{-1} = (B^{-1})^{-1}(A + B)^{-1}(A^{-1})^{-1} = B(A + B)^{-1}A
gives the desired formula.
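A quick numerical check of the identity (A^{-1} + B^{-1})^{-1} = B(A + B)^{-1}A with random invertible matrices (illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3)) + 3 * np.eye(3)   # shifted so A, B, A+B are (almost surely) invertible
B = rng.normal(size=(3, 3)) + 3 * np.eye(3)

lhs = np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B))
rhs = B @ np.linalg.inv(A + B) @ A
print(np.allclose(lhs, rhs))   # True
```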
Exercises 2.5
Elementary Matrices
1
5
1
0
0
0
33
,
,
.
and
. In each case
6(b)
12
1
0 2 6
5 1
0 1 3
52
2
0 1
1 0
7 12
1
1
0
7
1
2
so U A = R =
where U = 2
1
0 1 3 52
0 1 3
2
U is the product of the elementary matrices used at each stage:
12
U =
1
5
where E1 =
where E2 =
= E3 E2 E1 A where E3 =
= E1 A
= E2 E1 A
7(b)
2
1
1
1
0
1
Then E1 =
E1
and E2 =
1
0
12
. This matrix
=A
1
2
1
1
E2
, so U = E2 E1 .
. So U =
is a
34
8(b)
1
0
1
1
0
1
= E1 A
= A
= E2 E1 A
= E3 E2 E1 A
= E4 E3 E2 E1 A
where E1 =
where E2 =
where E3 =
where E4 =
Thus E4 E3 E2 E1 A = I so
A = (E4 E3 E2 E1 )1
= E11 E22 E31 E41
0 1
1 0
=
1
3
0
1
2 1 0 1
0 1 2
3
2 1 0 1
1 2
1 0 1
2
. Hence, U A = R = I2 in this case so U = A1 .
so U =
0 1
2
3
2 3
Thus, r = rank A = 2 and, taking V = I2 , U AV = U A = I2 .
(d) [A | I] =
R=
1
0
1
1
2
3
2
3
4
0
. Hence, U A = R where U =
2
3
2
and
35
[RT | I] =
Hence, (U AV )T = (RV )T = V T RT =
, so U AV =
so V T =
U 1 A] = [U 1 U
U 1 A] = U 1 [U
A].
(*)
2.3).
Exercises 2.6
Matrix Transformations
1(b) Write A = [3 2 −1]^T, B = [2 0 5]^T and X = [5 6 −13]^T. We are
asked to find T(X). Since T is linear it is enough (by Theorem 1) to express X as a linear
combination of A and B. If we set X = rA + sB, equating entries gives the equations 3r + 2s = 5,
2r = 6 and −r + 5s = −13. The (unique) solution is r = 3, s = −2, so X = 3A − 2B. Since
T is linear we have
3
1
11
T (X) = 3T (A) 2T (B) = 3
2
=
.
5
11
36
2(b) LetA =
1
1
1
1
,B =
and X =
1
2
1
2
4
express X as a linear combination of A and B, and use the assumption that T is linear. If
we write X = rA + sB, equate entries, and solve the linear equations, we find that r = 2 and
s = 3. Hence X = 2A 3B so, since T is linear,
3(b) In R2 , we have e1 =
1
0
and e2 =
0
1
0
1
x
y
x
y
x
y
1
1
0
x
y
1
0
0
1
for all
x
y
in R2 , so in this case
4(b) Let e1 =
R3
0
0
, e2 =
0
1
0
and e3 =
0
0
1
5(b) Since y1 and y2 are both in the image of T, we have y1 = T (x1 ) for some x1 in Rn , and
y2 = T (x2 ) for some x2 in Rn . Since T is linear, we have
T (ax1 + bx2 ) = aT (x1 ) + bT (x2 ) = ay1 + by2 .
This shows that ay1 + by2 = T (ax1 + bx2 ) is also in the image of T.
37
7(b) It turns out that T2 fails for T : R2 R2 .T2 requires that T (ax) = aT (x) for all x in R2
0
and all scalars a. But if a = 2 and x =
then
1
T 2
0
1
=T
0
2
0
4
, while 2T
0
1
=2
0
1
0
2
Note that T1 also fails for this transformation T, as you can verify.
x
x+y
1
1
x
x
1
1
8(b) We are given T
= 2
= 2
for all
, so T is the matrix
y
x + y
1 1
y
y
1
1
1
1
2
2
transformation induced by the matrix A = 12
=
. By Theorem 4 we
1
1
1
(d) Here T
x
y
1
10
8x + 6y
6x 8y
1
10
6
1
10
6
8
x
y
6
8
for all
x
y
we have T
y
0
y
0
=T
y
z
because
y
0
+T
0
z
the x-z-plane through the angle from the x axis to the z axis. By Theorem 4 the effect of
T on the x-z-plane is given by
x
cos sin
x
x cos z sin
=
z
Hence T
x
0
z
sin
x cos z sin
0
x sin + z cos
x
y
z
= T
cos
x sin + z cos
, and so
0
y
0
+T
x cos z sin
y
x sin + z cos
cos
sin
sin
0
cos
x
0
z
0
y
0
cos
+
0
sin
x cos z sin
0
x sin + z cos
sin
0
cos
x
y
z
38
1
0
0
1
(d) Let Q0 denote reflection in the x axis, and let R 2 denote rotation through 2 . Then Q0 has
1
0
0 1
matrix A =
, and R 2 has matrix B =
. Then Q0 followed by R 2 is the
0 1
1
0
0 1
by Theorem 3. This is the
transformation R 2 Q0 , and this has matrix BA =
1
13(b) Since R has matrix A, we have R(x) = Ax for all x in Rn . By the definition of T we have
T (x) = a R(x) = a(Ax) = (aA)x
for all x in Rn . This shows that the matrix of T is aA.
14(b) We use Axiom T2: T(−x) = T[(−1)x] = (−1)T(x) = −T(x).
17(b) The matrix of T is B, so T(x) = Bx for all x in R^n. Suppose B^2 = I. Then
T^2(x) = T[T(x)] = B[Bx] = B^2x = Ix = x = 1_{R^n}(x) for all x in R^n.
Hence T^2 = 1_{R^n} since they have the same effect on every column x.
Conversely, if T^2 = 1_{R^n} then
B^2x = B(Bx) = T(T(x)) = T^2(x) = 1_{R^n}(x) = x = Ix for all x in R^n.
This implies that B^2 = I by Theorem 5 Section 2.2.
1
,
,
0
1
1
0
and
0
1
1
0
respectively. We use Theorem 3 repeatedly: If S has matrix A and T has matrix B then S T
has matrix AB.
0 1
1
0
0 1
=
, which is the matrix of R 2 .
(b) The matrix of Q1 Q0 is
1
0
1
1
0
0
1
1
0
19(b) We have Pm [Qm (x)] = Pm (x) for all x in R2 because Qm (x) lies on the line y = mx. This
means Pm Qm = Pm .
39
= (x1 + x2 + + xn ) + (y1 + y2 + + yn )
= T (x) + T (y),
T (ax)
x1
x2
..
.
xn
= x1 + + xn = [1 1 1]
x1
x2
..
.
xn
so we see immediately that T is the matrix transformation induced by [1 1 1]. Note that
this also shows that T is linear, and so avoids the tedious verification above.
22(b) Suppose that T : Rn R is linear. Let e1 , e2 , , en be the standard basis of Rn , and write
T (ej ) = wj for each j = 1, 2, , n. Note that each wj is in R. As T is linear, Theorem 2
asserts
T has matrix A = [T (e1 ) T (e2 ) T (en )] = [w1 w2 wn ]. Hence, given
that
x1
x=
x2
..
.
xn
in Rn , we have
T (x) = Ax = [w1 w2 wn ]
x1
x2
.
..
xn
= w1 x1 + w2 x2 + + wn xn = w x = Tw (x) for
40
Definition of S T
= S[a T (x)]
T is linear
= a [S[T (x)]]
S is linear
= a [(S T )(x)].
Exercises 2.7
LU-Factorization
1(b)
(d)
(f)
3
1
1
3
1
1
2
1
3
Definition of S T
23
12
= U.
0
0
1
0
0
1
1
0
0
The elementary
are
matrices corresponding
P1 =
and P2 =
= U. Hence
, so take P = P2 P1 =
= U.
41
PA =
0
0
1
0
0
0
= U.
(d)
The reduction to row-echelon
form requires
two
row interchanges:
10
The elementary
matrices
corresponding
(in order)
to the interchanges
are
P1 =
and P2 =
We apply
A:
the LU-algorithmto P
PA =
10
so P = P2 P1 =
= U.
3(b) Write L =
tem LY = B is
2y1
, U =
y1 + 3y2
1
1
0
1
2
2
1
0
, X =
x1
x2
x3
x4
, Y =
y1
y2
y3
. The sys-
y1 + 2y2 + y3 = 1
tion: y1 = 1, y2 = 13 (1 y1 ) = 0, y3 = 1 + y1 2y2 = 0. The system U X = Y
42
0 = 0
x2 = x4 = t, x1 = 1 + x4 x2 = 1 + 2t.
8 2t
, x =
1
0
6t
1 t
t
, t arbitrary
.
R2
R2
R1
R1
R1
ab
0
by Theorem 4 2.2, and A1 B1 is upper triangular by induction.
AB =
Xb + A1 Y A1 B1
Hence AB is upper triangular.
9(b) Let A = LU = L1U1 be two such factorizations. Then UU1^{-1} = L^{-1}L1; write this matrix as
D = UU1^{-1} = L^{-1}L1. Then D is lower triangular (apply Lemma 1 to D = L^{-1}L1), and D is
also upper triangular (consider UU1^{-1}). Hence D is diagonal, and so D = I because L^{-1} and
L1 are unit triangular. Since A = LU, this completes the proof.
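SciPy computes such a factorization directly; note that its routine also returns a permutation matrix, so it produces A = P L U rather than the plain LU of this section. A sketch, with a matrix made up for illustration:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 4.0, 2.0],
              [1.0, 1.0, 2.0],
              [-1.0, 0.0, 2.0]])

P, L, U = lu(A)                     # L unit lower triangular, U upper triangular
print(np.allclose(A, P @ L @ U))    # True
```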
Exercises 2.8
1(b) I E =
.5
.1
.1
.4
.5
.2
.7
.1
3
3
1
3
0
(d) E =
.5
.2
.3
.1
.2
.2
.1
.1
0
.2
.1
.1
.1
.2
.4
10
2
9
11
3
43
0
0
7
3
13
14
28
13
46
94
23
47
14
28
13
47
23
14
23
17
23
47
23
. The
I E =
1
0
1
1
so we get
1
1
0
entries.
.4
.8
.7
.2
then I E =
, because (I E)1 = 54
1a
c
1d
. We have det(I E) = (1 a)(1 d) bc =
1 (a + d) + (ad bc)
= 1 tr E + det E. If det(I E) = 0 then Example 4 2.3 gives
(I E)1 =
1
det(IE)
1d
c
1a
9(b) If P =
(d) If p =
2
1
3
2
2
Exercises 2.9
1(b) Not regular. Every power of P has the (1, 2)- and (3, 2)-entries zero.
1
1
1 2
2t
2
2(b) I P =
so
(I
P
)s
=
0
has
solutions
s
=
. The entries
12
1
0
0
t
2
1
1
3
is the steady state vector. Given s0 =
, we
of s sum to 1 if t = 3 , so s =
1
3
44
1
2
1
2
, s2 = P s1
.6
(d) I P =
s0 =
.4
s=
.2
t
1
.1
.5
.3
.7
.4
.2
0
0
3
4
1
4
, s3 = P s2 =
11
11
11
11
1
3
, s1 = P s0 =
0
0
.4
.2
.4
(f) I P =
s=
5t
7t
8t
s0 =
1
0
0
.9
.3
.6
.3
.3
.6
.9
.9
.6
, s2 = P s1 =
1
0
0
24
21
24
21
, s1 = P s0 =
.1
.3
.6
1
20
.34
.1
.1
.8
.3
.2
.1
.6
1
0
so (I P )s = 0 has solution
.28
.42
.30
58
87
0
.350
.312
.338
1
3
1
3
1
3
. Given
. Hence it is in
, so (I P )s = 0 has solution
.7
, s3 = P s2 =
.28
, s2 = P s1 =
.38
5
8
3
8
, s3 = P s1 =
.244
.306
.450
5
20
7
20
8
20
. Given
. Hence it is in
upper, middle and lower classes respectively and, for example, the last column asserts that,
for children of lower class people, 10%become upper class,
30% become middle
.3
.1
.2
.1
.2
.1
.1
.3
.4
2t
t
10
, so s =
10
1
4
1
2
1
4
1
2
0
solution. Eventually, upper, middle and lower classes will comprise 25%, 50% and 25% of this
society respectively.
6. Let States
1and 2 be late and on time respectively. Then the transition matrix in
1
1
3
2
P = 2 1 . Here column 1 describes what happens if he was late one day: the two entries
3
sum to 1 and the top entry is twice the bottom entry by the information we are given. Column
3
2 is determined similarly. Now if Monday is the initial state, we are given that s0 = 41 .
4
3
7
8
16
Hence s1 = P s0 = v 5 and s2 = P s1 =
. Hence the probabilities that he is late
9
8
7
16
and
9
16
16
respectively.
45
1
1
1
0
P =
1
3
1
3
1
2
1
3
2
5
1
5
1
5
1,
1
s0 =
0
0
0
0
, s1 = P s0 =
0
1
3
1
2
2
5
, s2 = P s1 =
1
3
1
3
1
4
1
4
2
4
0
3
10
7
30
1
15
, s3 = P s2 =
7
75
23
120
69
200
53
300
29
150
1
1
1
21
(I P ) =
31
31
0
31
3
5
15
51
41
1p
p
q
1q
1
p+q
q
p
1pq
p+q
1
p+q
12
2
5
4
2
14
1
2
1
so the steady state is s = 16
5
16
23
25
2
0
7
75 .
of his time).
(1 p)q + qp
pq + (1 q)p
1
p+q
q
p
(b) If m = 1
1
p+q
p
p
q
q
=
=
1
p+q
1
p+q
=P
In general, write X =
and Y =
P Y = (1 p q)Y . Hence if P m =
1
p+q X
q + p p2 pq
p p_p2 + pq
(p + q)(1 p)
(p + q)p
1
p+q X
1
p+q X
p + q pq q 2
(p + q)q
(p + q)(1 q)
. Then P X = X and
p
q
(1pq)m
+ p+q Y
P m+1 = P P m =
q q + pq + q 2
(1pq)m
1
PY
p+q P X +
p+q
(1pq)m
(1 p q)Y
p+q
(1pq)m+1
Y.
p+q
1
p+q
q
p
46
Now 0 < p < 1 and 0 < q < 1 imply 0 < p + q < 2, so that −1 < (p + q − 1) < 1. Multiplying
through by −1 gives 1 > (1 − p − q) > −1, so (1 − p − q)^m converges to zero as m increases.
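This convergence is easy to see numerically: powers of the column-stochastic matrix P approach a matrix whose columns are both the steady state (1/(p + q))[q p]^T. Illustrative values of p and q:

```python
import numpy as np

p, q = 0.3, 0.2
P = np.array([[1 - p, q],
              [p, 1 - q]])          # columns sum to 1

steady = np.array([q, p]) / (p + q)
Pm = np.linalg.matrix_power(P, 60)
print(Pm)                           # both columns are approximately [0.4, 0.6]
print(steady)                       # [0.4 0.6]
```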
Supplementary Exercises
Chapter 2
47
48
(d) Subtract row 2 from row 1:
a+1
a1
a1
= (a 1) a = 1
(f) Subtract 2 times row 2 from row 1, then expand along row 2:
2 0 3 0 4 13
4 13
1 2 5 = 1 2
= 39
5 =
3
0
0 3 0 0 3
0
(h) Expand along row 1:
(j) Expand along row 1:
0 a b
a 0 c = a a
b
b c 0
c
0
+ b
= a
= a(0) = 0
= a(bc) + b(ac) = 2abc
(n) Subtract multiplies of row 4 from rows 1 and 2, then expand along column 1:
4 1 3 1 0 9 7 5
9 7 5
3 1 0 2 0 5 3 1
0 1 2 2 = 0 1 2 2 = 5 3 1
1 2 2
1 2 1 1 1 2 1 1
Again, subtract multiplies of row 3 from rows
4 1 3 1
0 25 13
3 1 0 2
25 13
0 1 2 2 = 0 13 9 = 13 9
1 2 2
1 2 1 1
(p) Keep expanding along row 1:
1
13
5
9
= (9 + 65) = 56
49
5(b)
(d)
2
5
= a
1
11
21
16
46
= a b
=
2
11
=
2
17
106
= ab(cd) = abcd.
17
= 106
a
b
c
6(b) Subtract row 1 from row2: a + b 2b c + b
2
2
2
a b c
= b b b
2 2 2
= (17) = 17
2
= 0 by Theorem 2(4).
7(b) Take 2 and 3 out of rows 1 and 2, then subtract row 3 from row 2, then take 2 out of row 2:
2a
2p + x
3x
a
2q + y 2r + z = 6 2p + x
x
3y
3z
a b c
= 6 2p 2q 2r = 12
x y z
2b
2c
2q + y
2r + z
= 12
3b + 3q + 3y
3c + 3r + 3z
2q + y
2r + z
2y + b
2z + c
Now subtract row 1 from rows 2 and 3, and then add row
2 plus twice row 3 to row 1, to get
a+p+x b+q+y c+r+z
3x
3y
3z
= 3 pa qb rc
qb
rc
= 3 p a
xp yq zr
xp
yq
zr
Next take 3 out of row 1, and
then add row
3 to row2, to get
x
x y z
y
z
= 9 p a q b r c = 9 a b c
p
p q r
q
r
Now use row interchanges and common row factors to get
50
= 9
a
p
x
= 9
det R = 1.
(f) False. A =
(h) False. If A =
1
0
is R =
1
T
A
and B =
0
0
, det A = 1 = det AT .
(d)
= det
1
3
2
1
= 5(7) = 35.
14(b) Follow the Hint, take out the common factor in row 1, subtract multiples of column 1 from
columns 2 and 3, and expand along row 1:
1
x 1 3
x2 x2 x2
1
1
1
1
x 1
1
x1 =
2
1
x 1 = (x 2) 2
det 2
3 x + 2 2
3 x + 2 2 3 x + 2 2
1
0
0
3 x 3
= (x 2) 2
3
x 3 = (x 2)
x+5
1
3 x + 5
1
= (x 2)[x2 2x + 12] = (x 2)(x2 + 2x 12).
15(b) If we expand along column 2, the coefficient of z is
2
1
1
3
= (6 + 1) = 7. So c = 7.
16(b) Compute det A by adding multiples of row 1 to rows 2 and 3, and then expanding along
column 1:
51
1
x
x
2
x
x
x2 2
x2 + x
x2 x x2 3
2
(x + x)(x2 x) = (x4
3
= (x2 2)(x2 3)
Hence det A = 0 means x2 =
3
2
x2 2
x2 x
x2 + x
x2 3
= x x3 y y 3
x1
y1
x2
y2
21 Let x = . , y = .
,
and
A
=
where x + y is in
c
x
+
y
c
1
n
..
..
xn
yn
column j. Expanding det A along column j we obtain
T (x + y) = det A = ni=1 (xi + yi )cij (A)
= ni=1 xi cij (A) + ni=1 yi cij (A)
= T (x) + T (y).
where the determinant at the second step is expanded along column 1. Similarly, T (ax) =
aT (x) for any scalar a.
24. Suppose A is n × n. B can be found from A by interchanging the following pairs of columns:
1 and n, 2 and n − 1, . . . . There are two cases according as n is even or odd:
Case 1. n = 2k. Then we interchange columns 1 and n, 2 and n − 1, . . . , k and k + 1, k
interchanges in all. Thus det B = (−1)^k det A in this case.
Case 2. n = 2k + 1. Then column k + 1 is not moved, and we interchange columns 1 and n,
2 and n − 1, . . . , k and k + 2, again k interchanges in all. Thus det B = (−1)^k det A here too.
Remark: Observe that, in each case, k and n(n − 1)/2 are both even or both odd, so (−1)^k =
(−1)^{n(n−1)/2}. Hence, if A is n × n, we have det B = (−1)^{n(n−1)/2} det A.
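A numerical check of det B = (−1)^{n(n−1)/2} det A when B reverses the columns of A (random matrix, illustration only):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.normal(size=(n, n))
B = A[:, ::-1]                      # columns of A in reverse order

sign = (-1) ** (n * (n - 1) // 2)
print(np.isclose(np.linalg.det(B), sign * np.linalg.det(A)))   # True
```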
52
Exercises 3.2
1
1
1
1
1
1
0
1
2
1
2
0
3
0
1
0
1
3
0
1
2
1
2
0
3
0
1
0
1
3
1
1
1
1
1
1
1
9
1
2
2
19
2
1
9
2
1
2
1
2
1
2
2
2
19
2
1
9
1
2
91
1
2
2
1
2
1
2
2
1
9
2
2
1
91
2
1
9
1
2
1
2
2
2
1
2
1
9
3
6
6
1
3
3M
(d) In computing the cofactor matrix, we use the fact that det
matrix M. Thus the cofactor matrix is
1
2
2
1
9
1
3
1
2
2
1
2
2
2
1
. Note that
the cofactor matrix is symmetric here. Note also that the adjugate actually equals the original
matrix in this case.
2(b) We compute the determinant by first adding column 3 to column 2:
0 c c 0 0 c
1 2 1 = 1 1 1 = (c) 1 1 = (c)(c) = c2 .
c 0
c c c c 0 c
(d) Begin by subtracting row 1 from row 3, and then subtracting column 1 from column 3:
4 c 3 4 c 3 4 c 1
c 2 c = c 2 c = c 2 0 = 1 c 1 = 2.
2 0
5 c 4 1 0 1 1 0 0
This is nonzero for all values of c, so the matrix is invertible for all c.
(1 + c)(1 c)
1
1+c
c
53
This is zero if and only if c = −1 (the roots of c^2 − c + 1 are not real). Hence the matrix is
invertible if and only if c ≠ −1.
det(A^{-1}B^{-1}AB) = det A^{-1} det B^{-1} det A det B = (1/(det A det B)) det A det B = 1.
The reason is that A^{-1}B^{-1}AB may not equal A^{-1}AB^{-1}B because B^{-1}A need not equal
AB^{-1}.
6(b) Since C is 3 × 3, the same is true for C^{-1}, so det(2C^{-1}) = 2^3 det C^{-1} = 8/det C. Now we
compute det C by taking 2 and 3 out of columns 2 and 3, and subtracting column 3 from column
2:
2p a + u 3u
p a + u u
p a u
det C = 2q b + v 3v = 6 q b + v v = 6 q b v
2r c + w 3w
r c + w w
r c w
Now take 1 from column 2, interchange columns 1 and
p a u
a p u
a b c
q
b
v
b
q
v
det C = 6
= 6
= 6 p q r
r c w
c r w
u v w
Finally det(2C^{-1}) = 8/det C = 8/18 = 4/9.
7(b) Begin by subtracting row 2 from row 3, and then expand along column 2:
2b
a+1
4d
2
2(c 1)
2b
4d
= 2
2c
2b
4d
2c
= 4
9
4
1 1
3
4
2 1
5
11
5
11 ,
y=
3
2
3
2
1
4
1
9
21
11
21
11 .
2d
2c
= 8
54
14
15
11
= 79
9(b) A1 =
A1
5
4
1
det A adj
A=
1
det A
1
5
1
5
12
79
= 37
79
=2.
79
= 51, the (2, 3) entry of A1 is
4
51
4
51 .
10(b) If A^2 = I then det A^2 = det I = 1, that is (det A)^2 = 1. Hence det A = 1 or det A = −1.
(d) If PA = P, P invertible, then det PA = det P, that is det P det A = det P. Since det P ≠ 0
(as P is invertible), this gives det A = 1.
(f) If A = −A^T, A is n × n, then A^T is also n × n so, using Theorem 3 Section 3.1 and Theorem 3,
det A = det(−A^T) = (−1)^n det A^T = (−1)^n det A.
If n is even this is det A = det A and so gives no information about det A. But if n is odd it
reads det A = −det A, so det A = 0 in this case.
15. Write d = det A, and let C denote the cofactor matrix of A. Here
A^T = A^{-1} = (1/d) adj A = (1/d) C^T.
Take transposes to get A = (1/d) C, whence C = dA.
19(b) Write A =
[Cij ] =
c
c
2
1
c
c
c
c
c
2
c
c
c
1
1
c
1
c
0
c
0
1
1
c
c
c
c
1
1
c
0
c
0
1
2
c
c
c
c
2
c2
c
c2
c
55
1
det A adj A
Hence A1 =
(d) Write A =
[Cij ] =
2
c
c
2
1
det A adj
Hence A1 =
(f) Write A =
4
c
3
c
1
1
c
c
4
3
4
A=
3
c
c
5
4
5
4
c
[Cij ] =
c
1
c
1
Hence A1 =
1
c
1
c3 +1
1
c
1
1
c
0
1
0
1
c
c2
c2
1
c
for any c = 0.
c
2
1
2
8 c2
c
c2
8 c2
1
c
10
c2
c
c
c2 6
8
c2
c2 10
c
c2
c2 + 1
c2
c
c
1
det A adj A
1c
2
c
[Cij ]T =
1
2
is real).The cofactor
matrix
is
1
1
3
4
c
c
1
c2
c 4
c
c
5
4
[Cij ]T =
1
c2
1
c
1
c
1
1
c
0
1
0
1
c
1
[C ]T
c3 +1 ij
c 1
c+1
c2
1
1
c
1
c
1
1
c3 +1
c1
c2
c
c1
(c2 + 1)
c+1
(c2 + 1)
c
c2
(1 + c)
c+1
(c + 1)
1
c2
c2
, where c = 1.
20(b) True. Write d = det A, so that d A^{-1} = adj A. Since adj A = A^{-1} by hypothesis, this gives
d A^{-1} = A^{-1}, that is (d − 1)A^{-1} = 0. It follows that d = 1 because A^{-1} ≠ 0 (see Example 7
Section 2.1).
(d) True. Since AB = AC we get A(B C) = 0. As A is invertible, this means B = C. More
precisely, left multiply by A1 to get A1 A(B C) = A1 0 = 0; that is I(B C) = 0; that
is B C = 0, so B = C.
(f) False. If A =
56
(h) False. If A =
(j) False. If A =
(l) False. If A =
1
then adj A =
0
0
1
1
1
0
1
1
= A.
= p(0) = 5
r0 +
r1
r2
= p(1) = 3
p(0)
p(1)
r0 +
r1
r2
r3
r0
r1
r2
r3
= p(1) =
5
3 ,
p(2) = 3
r2 = 21 , r3 = 76 , so p(x) = 1 53 x + 12 x2 + 76 x3 .
r1
r2
= p(0) =
r3
= p(1) =
1.49
r0 + 2r1 + 4r2 +
8r3
= p(2) =
0.42
b
Y
in block form. Then
A1 =
Z C
1 0
ab
+
XZ
aY
+
XC
= AA1 =
.
0 I
BZ
BC
57
b Y
is
because A is) and BZ = 0 gives Z = 0 because B is invertible. Hence A1 =
0 C
upper triangular.
1
d
= det(A1 ) = det
= 21. Hence d =
1
21
1
21 .
By Theorem
A1 (adj A1 ) = d1 I.
Take inverses to get (adj A1 )1 A = dI. But dI = (adj A)A by the adjugate formula for A.
Hence
(adj A1 )1 A = (adj A)A.
1
Since A is invertible, we get adj A1
= adj A, and the result follows by taking inverses
again.
(d) The adjugate formula gives
AB adj (AB) = det AB I = det A det B I.
On the other hand
AB adj B adj A = A[(det B)I]adj A
= A adj A (det B)I
= (det A)I (det B)I
= det A det B I.
Thus AB adj (AB) = AB adj B adj A, and the result follows because AB is invertible.
Exercises 3.3
1(b) cA(x) = det(xI − A) = det [ x − 2  −4 ; −1  x + 1 ] = x^2 − x − 6 = (x − 3)(x + 2); hence the
eigenvalues are λ1 = 3 and λ2 = −2. Take these values for x in the matrix xI − A:
λ1 = 3: [ 1  −4 ; −1  4 ] → [ 1  −4 ; 0  0 ], so x1 = [ 4 ; 1 ].
λ2 = −2: [ −4  −4 ; −1  −1 ] → [ 1  1 ; 0  0 ], so x2 = [ 1 ; −1 ].
So P = [x1 x2] = [ 4  1 ; 1  −1 ] has P^{-1}AP = [ 3  0 ; 0  −2 ].
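With the matrix as reconstructed above (A = [2 4; 1 −1], an inference from the fragments), NumPy reproduces the eigenvalues 3 and −2 and eigenvectors proportional to x1 and x2:

```python
import numpy as np

A = np.array([[2.0, 4.0],
              [1.0, -1.0]])

evals, evecs = np.linalg.eig(A)
print(evals)                        # 3 and -2 (order may vary)
print(evecs)                        # columns proportional to [4, 1] and [1, -1]

P = np.array([[4.0, 1.0],
              [1.0, -1.0]])
print(np.linalg.inv(P) @ A @ P)     # diag(3, -2)
```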
58
x + 4
4
0
s 3t
, x =
; x1 =
s
t
1
0
, x2 =
3
0
1
2
1
2
2 = 3 :
0
1
0
0
; x2 =
1
0
1
; x1 =
1
0
1
Since n = 3 and there are only two basic eigenvectors, A is not diagonalizable.
2(b) As in Exercise 1, we find 1 = 2 and 2 = 1; with corresponding eigenvectors x1 =
1
2 1
2
0
1
, so P =
satisfies P AP = D =
. Next compute
and x2 =
2
b=
Hence b1 =
7
3
b1
b2
P01 v
so, as 1 is dominant,
1
3
2
1
k
xk
= b1 1 x1
1
2
7 k
32
3
1
2
1
1
3
7
5
2
1
59
P =
2
3
Hence vk
= 32 3k
. Now P 1 =
1
0
1
1
6
1
0
1
, x2 =
1
1
3
, so P01 v0 =
and x3 =
1
6
9
2
1
1
2
3
, and
and hence b1 = 3 .
2
9(b) A =
6(2 1)
92 8
Example 10.
60
x
I A ] = rn det I A .
r
r
As cA (x) = det[xI A], this shows that crA (x) = rn cA xr .
crA (x) = det[xI rA] = det[r
x
cA (x) = det
1
4
14
x
= x2 14 x
3
4
= (x 1)(x + 34 )
1
1
formula, the roots are 10
[3 69], so the dominant eigenvalue is 10
[3 + 69] 1.13 > 1, so
the population diverges.
61
2
5
So the population stabilizes if = 15 . In fact it is easy to see that the population becomes
extinct (1 < 1) if and only if < 15 , and the population diverges (1 > 1) if and only if
> 15 .
Exercises 3.4
b=
xk
xk+1
xk =
b1
b2
4 k
31
4
3
P01 v0
1
1
1
k
3 (2)
1
k
3 (2)
1
3
2
2
1
2
1
k
3 [4 (2) ]
1
2
xk+1
2xk xk+1
1
4
3
13
xk
xk+1
= Avk .
. Hence
. Thus
k
k
1
1 4 2
(2)
3
for large k.
4
1
k
Here 2 is the dominant eigenvalue, so xk = 13 (2)k [ (2)
k 1] 3 (2) if k is large.
xk+1
b=
Now
xk
xk+1
b1
b2
P01
v=
4 k
52
6xk xk+1
1
2
1
5
1
k
5 (3)
1
3
xk
xk+1
= Avk . Diagonalizing
. Hence
1
1
4
5
1
5
2(b) Let vk =
and D =
xk
xk+1
xk+2
1
0
0
. Then A =
. Then
2
b1
b2
b3
= P01 v0 =
13
1
3
21
0
1
2
12
1
3
1
6
1
0
1
1
2
0
1
2
62
vk = 12 (1)1k
1
1
1
+ (0)(2)k
1
2
4
+ 1 1k
2
1
1
1
where 1 = 21 (1 +
b1
b2
the method and use the initial conditions to determine the values of b1 and b2 directly. More
precisely, x0 = 1 means 1
= b1 + b2 whilex1 = 2 means 2 = b1 1 + b2 2 . These equations
. It follows that
have unique solution b1 = 253
and b2 = 253
5
5
xk =
2 5
k
k
(3 + 5) 1+2 5 + (3 + 5) 12 5
for each k 0.
7. In a stack of k + 2 chips, if the last chip is gold then (to avoid having two gold chips together)
the second last chip must be either red or blue. This can happen in 2xk ways. But there are
xk+1 ways that the last chip is red (or blue) so there are 2xk+1
ways these possibilities can
0 1
occur. Hence xk+2 = 2xk + 2xk+1 . The matrix is A =
with eigenvalues 1 = 1 + 3
2 2
1
1
and 2 = 1 3 and corresponding eigenvectors x1 =
and x2 =
. Given the
2
1
2 3
2 3
2 3
2 3
2 + 3
2+
63
b1 k1
1
1
+ b2 k2
1
2
2 3
(2 + 3)(1 + 3)k + (2 + 3)(1 3)k .
yk +yk+1
2
= 21 yk + 12yk+1 .
1
The eigenvalues are 1 = 1 and 2 = 12 , with corresponding eigenvectors x1 =
and
1
2
x2 =
. Given that k = 0 for the year 1990, we have the initial conditions y0 = 10 and
9. Let yk be the yield for year k. Then the yield for year k + 2 is yk+2 =
y1 = 12. Thus
Since
b1
b2
P01 v0
Vk =
1
3
k
34
3 (1)
then
yk =
For large k, yk
11(b) We have A =
write x =
34
3
k
34
3 (1)
1
1
2
3
4
3
12
1 k
2
k
+ 23 (2) 12 =
10
34
3
1
3
34
2
1 k
2 .
0
1
c
, we have
Ax =
2
a + b
+ c2
2
3
= x
5
5
11
12(b) We have
p = 6 from (a), so yk = xk + 6 satisfies yk+2 = yk+1 + 6yk with
y0 = y1 = 6 . Here
0 1
1 1
with eigenvalues 3 and 2, and diagonalizing matrix P =
. This gives
A=
6 1
3
2
k+1 (2)k+1 , so x = 11 3k+1 (2)k+1 5 .
yk = 11
k
30 3
30
6
64
(b) If rk is any solution of (*) then rk+2 = ark+1 + brk + c(k). Define qk = rk pk for each k.
Then it suffices to show that qk is a solution of (**). But
qk+2 = rk+2 pk+2 = (ark+1 + brk + c(k)) (apk+1 + bpk + c(k)) = aqk+1 + bqk
which is what we wanted.
Exercises 3.5
so cA (x) =
x+1
= (x 4)(x + 2).
; an eigenvector is X1 =
.
1 1
0 0
1
1 5
1 5
5
2 = 2 :
; an eigenvector is X2 =
.
1 5
0 0
1
4
0
1
5
Thus P 1 AP =
where P =
. The general solution is
0
1 x
f = c1 X1 e
2 x
+ c2 X2 e
= c1
1
1
4x
+ c2
5
1
e2x .
Hence, f1 (x) = c1 e4x + 5c2 e2x , f2 (x) = c1 e4x c2 e2x . The boundary condition is f1 (0) = 1,
f2 (0) = 1; that is
1
1
5
= f (0) = c1
+ c2
.
1
(d) Now A =
cA (x) =
x2
x1
x2
x2
2
x 1
x+1
x2
1 = 1 :
2 = 2 :
2
3
2
2
2
0
1
2
8
7
10
7
1
2
0
x4
0
; X1 =
; X2 =
x+1
x2
8
10
7
1
2
1
65
3 = 4 :
2
2
3
Thus P 1 AP =
2
0
3
where P =
8
10
7
f = c1 X1 ex + c2 X 2 e2x + c3 X3 e4x = c1
That is
8
10
7
1
0
; X3 =
1
0
1
ex + c2
1
2
1
e2x + c3
1
0
1
e4x .
c2
+ c3 =
10c1
2c2
2c1
c2
+ c3 = 1.
Exercises 3.6
66
Rp
R
p+1
..
Rp+1
.
..
.
Rq1
qp
Rq1
Rq
interchanges
Rq
Rp
Hence 2(q p) 1 interchanges are used in all.
Supplementary Exercises
Rq
Rp+1
..
. .
qp1
interchanges q1
Rp
Chapter 3
n
j=1
n
j=1
67
68
1(b)
(d)
1
1
2
1
0
2
(f)
3
=
=
1
1
2
(1)2 + 02 + 22 =
= |3| 12 + 12 + 22 = 3 6
2
1
2
2
1
2
Since 8u is a unit vector, we want u = 1; that is 1 = |t| (2)2 + (1)2 + 22 = 3t, which
gives t =
1
3.
Hence u =
4(b) Write u =
2
1
2
1
3
2
1
2
and v =
difference: u v =
1
1
2
0
1
=
4
0
2
3
2
0
=
1
2
2
2.
=
12 + (2)2 + (2)2 = 3.
6(b) In the diagram, let E and F be the midpoints of sides BC and AC respectively. Then
1
F E = F C + CE = 2 AC + 2 CB = 2 (AC + CB) = 12 AB
7 Two nonzero vectors are parallel if and only if one is a scalar multiple of the other.
(b) Yes, they are parallel: u = (3)v.
(d) Yes, they are parallel: v = (4)u.
69
(d) RO = (p + q) because OR = p + q.
9(b) P Q =
1
1
6
2
1
0
1
1
5
, so
P Q =
(1)2 + (1)2 + 52 =
P Q = 0.
(f) P Q =
2 3.
1
1
4
3
1
6
Let v =
1
3
x
y
z
2
2
= 2
and p =
1
1
3
0
1
. Hence
P Q = |2|
(i) If P Q = v then q p = v, so q = p + v =
5
1
2
(ii) If P Q = v then q p = v, so q = p v =
. Thus Q = Q(5, 1, 2)
1
4
x = 5w + u 6v =
12(b) We have au + bv + cw =
au + bv + cw = x =
1
3
0
a
a
2a
25
0
b
2b
24
1
0
c
0
c
0
6
a +
a+b
= 3
2a + 2b c = 0.
The solution is a = 5, b = 8, c = 6.
2a + 2b c
+ c = 1
b
a+c
gives equations
a
(1)2 + 12 + (1)2 =
27 = 3 3.
26
4
19
. Hence setting
70
13(b) Suppose
for a, b, c:
5
6
1
3a + 4b + c
= au+bv +cw =
a + c
b+c
3a + 4b + c =
+ c =
b + c = 1
This system has no solution, so no such a, b, c exist.
y
z
, p1 =
and p2 =
1
2
0
be the vectors of P ,
p p2 = P2 P = p2 + 14 (P2 P1 ) = p2 + 14 (p1 p2 ) = 41 p1 + 34 p2 .
Since p1 and p2 are known, this gives
p=
Hence P = P
5
1
5
4, 4, 2
1
4
+ 3
4
1
2
0
1
4
5
2
Then
q =
OQ denote
the vectors of the points P and
0
1
1
1
1
q p = P Q = 4 and p = 3 , so q = (q p) + p = 4 + 3 = 7 .
3
18(b) We have u2 = 20, so the given equation is 3u + 7v = 20(2x + v). Solving for x gives
26
40x = 3u 13v =
12
13
26
20
13
14
20(b) Let S denote the fourth point. We have RS = P Q, so
OS = OR + RS = OR + P Q = 1 +
0
Hence x =
4
4
2
1
3
2
1
40
20
13
14
71
22(b) One direction vector is d = QP =
2
1
5
p = p0 + td =
3
1
4
. Let p0 =
+ t
2
1
5
3
1
4
when p =
y
z
is the vector of an arbitrary point on the line. Equating coefficients gives the parametric
equations of the line
x = 3 + 2t
y = 1 t
z = 4 + 5t.
(d) Now p0 =
p=
x
y
z
1
1
x=1+t
, the scalar equations are y = 1 + t .
1
1
1
+ t
1
1
1
. Taking
z =1+t
1
0
1
parallel to this one, d will do as direction vector. We are given the vector p0 =
p = p0 + td =
2
1
1
+ t
x = 2t
y = 1
z = 1 + t.
1
0
1
1
1
of a
72
y
z
4t
3
1 2t
y = 2 + 2t
z = 1 + 3t
x = 2s
y =1+s
z=3
(d) If
y
z
Eliminating
x
y
z
x
y
z
x
y
z
gives
2
12
4
1
5
+ t
+ s
+ t
1
0
1
1
0
2
7
12
+ s
0
2
3
4+t=2
1 = 7 2s
5 + t = 12 + 3s.
This has a (unique) solution t = 2, s = 3 so the lines do intersect. The point of intersection
has vector
1
5
+t
0
1
1
5
0
1
1
3
73
(equivalently
29. Let a =
1
2
2
7
12
+ s
and b =
0
2
3
2
0
1
2
7
12
0
2
3
2
1
3
).
1
1
1
is a
direction vector for the line through A and B, so the vector c of C is given by c = a + td for
some t. Then
AC = c a = td = |t| d and BC = c b = (t 1)d = |t 1| d .
Hence AC = 2 BC means |t| = 2 |t 1| , so t2 = 4(t 1)2 , whence 0 = 3t2 8t + 4 =
5
3
1
0
or c =
3
1
3
4
3
31(b) If there are 2n points, then Pk and Pn+k are opposite ends of a diameter of the circle for each
2EF = 2(EA + AF ) = 2EA + 2AF = DA + F C = CB + F C = F B.
Hence EF = (1/2) FB so F is in the line segment EB, 1/3 of the way from E to B. Hence F is the
trisection point of both AC and EB.
Exercises 4.2
1(b) u · v = u · u = 1^2 + 2^2 + (−1)^2 = 6
(d) u · v = 3·6 + (−1)(−7) + 5(−5) = 18 + 7 − 25 = 0
(f) v = 0 so u · v = a·0 + b·0 + c·0 = 0
2(b) cos θ = (u · v)/(‖u‖ ‖v‖) = (−18 − 2 + 0)/(√10 √40) = −20/20 = −1. Hence θ = π.
(d) cos θ = (u · v)/(‖u‖ ‖v‖) = (6 + 6 − 3)/(√6 · 3√6) = 1/2. Hence θ = π/3.
(f) cos θ = (u · v)/(‖u‖ ‖v‖) = (0 − 21 − 4)/(√25 √100) = −1/2. Hence θ = 2π/3.
3(b) With u and v as given, the requirement is (u · v)/(‖u‖ ‖v‖) = cos(π/3) = 1/2, that is
(4 − x)/(√6 √(x^2 + 5)) = 1/2. Hence 6(x^2 + 5) = 4(4 − x)^2, whence x^2 + 16x − 17 = 0. The
roots are x = −17 and x = 1.
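The angle computations in 2(b)–(f) all follow one pattern; a small helper (with made-up vectors, not the ones from the exercise) shows it:

```python
import numpy as np

def angle(u, v):
    """Angle between u and v in radians, via cos(theta) = u.v / (|u| |v|)."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

u = np.array([1.0, 2.0, -1.0])      # example vectors
v = np.array([3.0, -1.0, 0.0])
print(angle(u, v))                  # about 1.44 radians
```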
74
+ z
= 0
1
1
2
1
2
0
+ t
0
3
1
2
3
2
6(b) P Q = 2
= 9 + 4 + 16 = 29
4
2
2
2
QR = 7
= 4 + 49 + 4 = 57
2
2
2
5
= 25 + 25 + 36 = 86.
5
P
R
=
6
2
2
Hence P R = P Q + QR . Note that this implies that the triangle is right angled,
that P R is the hypotenuse, and hence that the angle at Q is a right angle. Of course, we can
8(b) We have AB =
Hence =
2
1
1
and
AC =
1
2
1
2+21
AB AC
= 12 .
cos =
=
6 6
AB AC
or 60 . Next BA =
2
1
1
and
BC =
1
1
BA BC
21+2
cos =
= 12 .
=
6
6
BA BC
75
. Since
add to , the angle at C is
the
triangle
angles in any
1
1
= 3 .
CA CB
1 + 2 + 2
= 21 .
cos =
=
6
6
CA CB
uv
v
v2
122+1
16+1+1
uv
(d) proj,v (u) = v
2v =
5
21
2
1
4
1
21
53
26
20
u u1 =
3
2
1
orthogonal to v.
4
2
1 .
1
11
18
= 1
2
6+1+0
4+1+16
4
2
5
21
2
1
. Then u2 = u u1 =
1
4
3
1
0
27
53
uv
v
v2
6
4
1
12(b) Write p0 =
uv
v
v2
1 =
1
1882
36+16+4
, d =
= 1881
36+16+1
=
3
1
4
1
53
27
53
6
4
1
26
1
1
3
and write u =
P0 P = p p0 =
01+16
9+1+16
3
1
4
45
1
from P to the line is QP = u u1 = 26 41
=
44
its vector. Then
71 15 34
Hence Q = Q( 26
, 26 , 26 ).
. Then u2 is given by u2 =
q = p0 + u1 =
, p =
=
1
26
15
26
1
4
3
1
4
1
4
. Write
15
26
1
26
71
15
34
76
,i
,j
,i
,j
,k
13(b) u v = det
(d) u v = det
,k
= 0i 0j + 0k =
14(b) A normal is n = AB AC =
1
1
15
8
= 0.
= 4i 15j + 8k =
= det
8
17
,i
,j
,k
17
23
32
11
2
1
1
to this one, n will serve as normal. The point P(3, 0, −1) lies on our plane, so the equation is
2(x − 3) − (y − 0) + (z − (−1)) = 0, that is 2x − y + z = 5.
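The normal-vector computation n = AB × AC is a one-liner with a cross product. In the sketch below, A is the point P(3, 0, −1) from the solution, while B and C are hypothetical stand-ins since the exercise's other points are not shown above:

```python
import numpy as np

A = np.array([3.0, 0.0, -1.0])      # point in the plane (from the solution above)
B = np.array([4.0, 1.0, 0.0])       # hypothetical second point
C = np.array([5.0, 0.0, 1.0])       # hypothetical third point

n = np.cross(B - A, C - A)          # a normal to the plane through A, B, C
d = np.dot(n, A)
print(n, d)                         # the plane is n . [x, y, z] = d
```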
1
(f) The plane contains P (2, 1, 0) and P0 (3, 1, 2), so the vector u = P P0 = 2 is parallel to
n = u d = det
2
3
2
1
0
n = d1 d2 = det
1
1
3
and d2 =
2
7
3
x
a
z
2
1
1
that is, 2x + 3y + 2z = 7.
3
1
0
that is 2x + 7y + 3z = 1.
+t
1
3
by construction; it contains
the other line because it contains P (0, 2, 5) and is parallel to d2 . This implies that the lines
intersect (both are in the same plane). In fact the point of intersection is P (4, 0, 3) [t = 1 on
the first line and t = 2 on the second line].
77
(j) The set of all points R(x, y, z) equidistant from both P(0, 1, −1) and Q(2, −1, −3) is deter-
mined as follows: The condition is ‖PR‖ = ‖QR‖, that is ‖PR‖^2 = ‖QR‖^2, that is
x^2 + (y − 1)^2 + (z + 1)^2 = (x − 2)^2 + (y + 1)^2 + (z + 3)^2.
2
1
0
to the given plane will serve as direction vector for the line. Since the
d = d1 d2 = det
1
1
2
1
1
1
x
y
z
and d2 =
1
2
3
1
3
+ t
2
1
0
, so
Hence d is a direction vector for the line we seek. As P (1, 1, 1) is on the line, the vector
equation is
y
z
+ t
1
1
(f) Each point on the given line has the form Q(2 + t, 1 + t, t) for some t. So P Q =
1
1
1
1+t
t
t2
1
of the given line). This condition
4 is(1 + t) + t + (t 2) = 0, that is
t = 3 . Hence the line
3
1
3
5
3
7
4 1
3, 3, 3
x
y
z
1
1
2
+t
16(b) Choose a point P0 in the plane, say P0 (0, 6, 0), and write u = P0 P =
. As the line we
4
1
5
3
5
1
. [Note that
. Now write
78
n=
2
1
1
un
n=
n2
2
6
1
1
Since p0 = 6
and q are the vectors of P0 and Q, we get
0
Hence Q = Q
q = p0 + (8u 8u1 ) =
7
2 2
3, 3, 3
0
6
0
2
n = PQ PR =
2
4
3
1
3
3
5
1
= det
1
3
2
2
10
6
8
that is 5x 3y 4z = 0.
2
1
1
5
5
14
23
5
2
2
7
y
z
2 + 3t
7 5t
2t
2
7
0
= +t
5
2
79
equation
the plane gives 1(1 + 2t) 4(2 + 5t) 3(3 t) = 6,
x
whence t =
8
19 .
Thus
y
z
3
19
78
19
65
19
so P
3
0
2
3
78 65
19 , 19 , 19
is any point, the plane 3(x x0 ) = 0(y y0 ) + 2(z z0 ) = 0 is perpendicular to the line. This
can be written 3x + 2z = 3x0 + 2z0 , so 3x + 2z = d, d arbitrary.
a
1
1
is parallel to these planes so the normal n = b is
c
ba
. The plane
vector d =
2
1
(where a and b are not both zero as n = 0). Thus the equation is
a(x 3) + b(y 0) + (a 2b)(z 2) = 0,
a
b
a 2b
that is ax + by + (a 2b)z = 5a 4b
where a and b are not both zero. As a check, observe that the plane contains every point
P (3 + t, 2t, 2 t) on the line.
23(b) Choose P1 (3, 0, 2) on the first line. The distance in question is the distance
from P1 to
the
4
3
ud
d
d2
10
10 [3
1 0]T = [3 1 0]T .
then the required distance is u u1 = [1 3 0]T = 10.
80
1
1
1
3
1
0
normal to the plane. Given P1 (1, 1, 0) and P2 (2, 1, 3) on the lines, let u = P1 P2 =
Compute
u1 = projn (u) =
7
14
1
3
1
2
1
3
2
1
0
3
vectors d1 =
1
1
1
and d2 =
3
1
0
3s
37 13
1
24(d) Analogous to (b). The distance is 66 , and the points are A( 19
3 , 2, 3 ) and B = B 6 , 6 , 0 .
26(b) Position the cube with and vertex at the origin and sides along the
axes. Assume
positive
a
each side has length a and consider the diagonal with direction d =
a
0
a
0
and
a
a
a
a
28. Position the solid with one vertex at the origin and
x,
oflengths
a, b,
c, along
the positive
sides,
a
a
b
b
c
, and
The possible dot products are (a2 + b2 + c2 ), (a2 b2 + c2 ), (a2 + b2 c2 ), and one of
these is zero if and only if the sum of two of a2 , b2 and c2 equals the third.
34(b) The sum of the squares of the lengths of the diagonals equals the sum of the squares of the
lengths of the four sides.
38(b) The angle between u and u + 8v + w
8 is given by
cos =
u (u + v + w)
uu+uv+uw
u2 + 0 + 0
u
=
=
=
.
u u + v + w
u u + v + w
u u + v + w
u + v + w
v
u + v + w
and
cos =
w
.
u + v + w
81
x0 x1
39(b) If P1 (x, y) is on the line then ax + by + c = 0. Hence u = P1 P0 =
so the distance is
y0 y1
u n |u n|
|a(x0 x) + b(y0 y)|
|ax0 + by0 + c|
projn u =
=
.
2 n = n =
2
2
a +b
a2 + b2
n
Exercises 4.3
5
1
1
5
5
. We have u v =
5
k 1 2
1
= 5 3. Hence the unit vectors parallel to uv are
5 3
1
2
5
5
33
1
1
1
Area of triangle = 2 AB AC = 2 1
2
= 2 0 = 0.
1
0
2
1
1
5(b) We have u v =
Theorem 5.
4
5
1
6(b) The line through P0 perpendicular to the plane has direction vector n, and so has vector
equation p = p0 + tn where p = [x y z]^T. If P(x, y, z) also lies in the plane, then n · p =
ax + by + cz = d. Using p = p0 + tn we find
d = n · p = n · p0 + t(n · n) = n · p0 + t ‖n‖^2.
Hence t = (d − n · p0)/‖n‖^2, so p = p0 + ((d − n · p0)/‖n‖^2) n. Finally, the distance from P0 to the plane
is
‖P P0‖ = ‖p − p0‖ = |d − n · p0| / ‖n‖.
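The distance formula just derived, |d − n · p0|/‖n‖, translates directly into a short function (the specific numbers in the example are made up):

```python
import numpy as np

def dist_point_to_plane(p0, n, d):
    """Distance from the point p0 to the plane n . [x, y, z] = d."""
    return abs(d - np.dot(n, p0)) / np.linalg.norm(n)

n = np.array([2.0, -1.0, 1.0])      # plane 2x - y + z = 5 (example values)
print(dist_point_to_plane(np.array([0.0, 0.0, 0.0]), n, 5.0))   # 5/sqrt(6), about 2.041
```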
82
10. The points A, B and C are all on one line if and only if the parallelogram they determine has
area zero. Since this area is ‖AB × AC‖, this happens if and only if AB × AC = 0.
12. If u and v are perpendicular, Theorem 4 shows that ‖u × v‖ = ‖u‖ ‖v‖. Moreover, if w is
perpendicular to both u and v, it is parallel to u × v, so |w · (u × v)| = ‖w‖ ‖u × v‖ because
the angle between them is either 0 or π. Finally, the rectangular parallelepiped has volume
|w · (u × v)| = ‖w‖ ‖u × v‖ = ‖w‖ ‖u‖ ‖v‖
using Theorem 5.
15(b) If u =
y , v = q and w = m then, by the row version of Exercise 19 3.1, we
z
r
n
get
l+p
y m+q
k z n+r
i x l
i x p
+ det j y m = u v + u w.
= det
j
y
q
k z n
k z r
u (v + w) = det
j
v1
w1
u1
16(b) Let v =
v2 , w = w2 and u = u2 . Compute
u3
w3
v3
v [(u v) + (v w) + (w u)] = v (u v) + v (v w) + v (w u)
v1 w1 u1
= 0 + 0 + det
v2 w2 u2
v3 w3 u3
by Theorem 1. Similarly
w1 u1 v1
83
22. If v1 and v2 are vectors of points in the planes (so v1 · n = d1 and v2 · n = d2), the distance
is the length of the projection of v2 − v1 along n; that is
‖proj_n(v2 − v1)‖ = |(v2 − v1) · n| / ‖n‖ = |d2 − d1| / ‖n‖.
Exercises 4.4
Linear Operators on R3
1(b) By inspection, A =
1
2
1
1
of projection on y = x.
3
(d) By inspection, A = 51
of reflection in y = 2x.
1
(f) By inspection, A = 2
3.
1
1+m2
m2
preceding
Theorem
2). Hence
the projections on the lines y = x and y = x have matrices
1
1
1
1
1
and 12
, respectively, so the first followed by the second has matrix (note
2
1
the order)
1
2
1
2
1
4
= 0.
17
3(b) By Theorem 3:
(d) By Theorem 3:
(f) By Theorem 2:
1
21
1
30
1
25
20
22
8
4
28
20
10
12
12
16
20
10
20
1
7
26
0
1
1
21
=
1
25
11
1
15
93
0
124
32
1
35
84
(h) By Theorem 2:
1
11
2
5
0
3
1
=
=
1
11
28
49
18
3
2
and sin
1
2
is
3
2
1
2
= 12 , the matrix is
1
0
3
3
2
1
3
1
2
12
1
6
6. Denote the rotation by RL, . Here the rotation takes place about the y axis, so RL, (8j) = 8j.
In the x-z
plane the effect of RL, is to rotate counterclockwise through
, and
this
has
cos
matrix
RL,
sin
sin
0
1
cos
sin
cos
cos
sin
cos
sin
sin
cos
1
0
sin
0
cos
cos
sin
and
. Finally, the
Exercises 4.5
1(b) Translate to the origin, rotate and then translate back. As in Example 1, we compute
2
1
1
2
0
1
0
1
2
1
2+2
3 2 + 4
2
2
2
2
2
2
0
0
0
0
7 2+2
3 2+4
3 2+2
5 2+4
2+2
2+4
5 2 + 2
9 2+4
, so we translate by w,
8 then reflect in y = 2x, and then trans
3 4
1
late back by w.
8 The line y = 2x has matrix 5
. Thus the matrix (for homogeneous
1
85
Hence for w
8 =
1
4
1
1
5
we get 1
5
Supplementary Exercises
2
5
1
4
1
1
5
9
18
5
1
5
4
2
5
9
18
5, 5
Chapter 4
4. Let p and w be the velocities of the airplane and the wind. Then ‖p‖ = 100 knots and
‖w‖ = 75 knots, and the resulting actual velocity of the airplane is v = w + p. Since w and p
are orthogonal, Pythagoras' theorem gives ‖v‖^2 = ‖w‖^2 + ‖p‖^2 = 75^2 + 100^2 = 25^2(3^2 + 4^2) =
25^2 · 5^2. Hence ‖v‖ = 25 · 5 = 125 knots. The angle θ satisfies cos θ = ‖w‖/‖v‖ = 75/125 = 0.6,
so θ = 0.93 radians (about 53°).
6. Let v = [x y]^T denote the velocity of the boat in the water. If c is the current velocity then
c = [0 −5]^T because it flows south at 5 knots. We want to choose v so that the resulting actual
velocity w = v + c points due east, so y − 5 = 0, that is y = 5. Since the boat's speed in the water
is 13 knots, 13^2 = x^2 + 25 gives x^2 = 144, x = ±12. But x > 0 as w heads east, so x = 12. Thus he
steers v = [12 5]^T, and the resulting actual speed is ‖w‖ = x = 12 knots.
86
(d) No.
(f) No.
2
0
0
is in U but 2
0
1
0
0
1
0
2
0
0
0
0
1
is in U but (1)
0
1
0
so Theorem 1 applies.
4
0
0
is not in U.
0
1
0
is not in U.
2(b) No. If x = ay + bz equating first and third components gives 1 = 2a + b, 15 = 3b; whence
a = 3, b = 5. This does not satisfy the second component which requires that 2 = a b.
(d) Yes. x = 3y + 4z.
3(b) No. Write these vectors as a1 , a2 , a3 and a4 , and let A = [a1 a2 a3 a4 ] be the matrix with
these vectors as columns. Then det A = 0, so A is not invertible. By Theorem 5 2.4, this
means that the system Ax = b has no solution for some column b. But this says that b is not
a linear combination of the ai by Definition 1 2.2. That is, the ai do not span R4 .
1
0
0
0
10. Since a_i x_i is in span{x_i} for each i, Theorem 1 shows that span{a_i x_i} ⊆ span{x_i}. Since
x_i = a_i^{-1}(a_i x_i) is in span{a_i x_i}, we get span{x_i} ⊆ span{a_i x_i}, again by Theorem 1.
12. We have U = span{x1, . . . , xk} so, if y is in U, write y = t1x1 + · · · + tkxk where the ti are in
R. Then Ay = t1Ax1 + · · · + tkAxk = t1·0 + · · · + tk·0 = 0.
15(b) x = (x + y) − y is in U because x + y and −y = (−1)y are both in U and U is a subspace.
16(b) True. If we take r = 1 we see that x= 1x is in U.
(d) True. We have span {y, z} span {x, y, z} by Theorem 1 because both y and z are in
span {x, y, z} . In other words, U span {x, y, z} .
For the other inclusion, it is clear that y and z are both in U = span {y, z} , and we are
given that x is in U. Hence span {x, y, z} U by Theorem 1.
' (
1
2
(f) False. Every vector in span
,
has second component zero.
0
87
20. If U is a subspace then S2 and S3 certainly hold. Conversely, suppose that S2 and S3 hold.
It is here that we need the condition that U is nonempty. Because we can then choose some
x in U, and so 0= 0x is in U by S3. So U is a subspace.
22(b) First, 0 is in U + W because 0 = 0 + 0 (and 0 is in both U and W). Now suppose that p and
q are both in U + W, say p = x1 + y1 and q = x2 + y2 where x1 and x2 are in U, and y1 and
y2 are in W. Hence
p + q = (x1 + y1) + (x2 + y2) = (x1 + x2) + (y1 + y2)
so p + q is in U + W because x1 + x2 is in U (both x1 and x2 are in U), and y1 + y2 is in
W. Similarly
ap = a(x1 + y1) = ax1 + ay1
is in U + W because ax1 is in U and ay1 is in W. Hence U + W is a subspace.
Exercises 5.2
1(b) Yes. The matrix with these vectors as columns has determinant −2 ≠ 0, so Theorem 3
applies.
1(d) No. (1, 1, 0, 0) − (1, 0, 1, 0) + (0, 0, 1, 1) − (0, 1, 0, 1) = (0, 0, 0, 0) is a nontrivial linear combination that vanishes.
2(b) Yes. If a(x + y) + b(y + z) + c(z + x) = 0 then (a + c)x + (a + b)y + (b + c)z = 0. Since we
are assuming that {x, y, z} is independent, this means a + c = 0, a + b = 0, b + c = 0. The
only solution is a = b = c = 0.
(d) No. (x + y) − (y + z) + (z + w) − (w + x) = 0 is a nontrivial linear combination that vanishes.
3(b) Write x1 = (2, 1, 0, −1), x2 = (−1, 1, 1, 1), x3 = (2, 7, 4, 1), and write U = span{x1, x2, x3}.
Observe that x3 = 3x1 + 4x2, so U = span{x1, x2}. This is a basis because {x1, x2} is independent,
so the dimension is 2.
(d) Write x1 = (2, 0, 3, 1), x2 = (1, 2, 1, 0), x3 = (2, 8, 5, 3), x4 = (1, 2, 2, 1) and write
U = span{x1 , x2 , x3 , x4 }. Then x3 = 3x1 + 4x2 and x4 = x1 + x2 so the space is span{x1 , x2 }.
As this is independent, it is a basis so the dimension is 2.
4(b) (a + b, a − b, b, a) = a(1, 1, 0, 1) + b(1, −1, 1, 0), so U = span{(1, 1, 0, 1), (1, −1, 1, 0)}. This set is independent, so it is a basis and dim U = 2.
(d) (a + b, b + c, a, b + c) = a(1, 0, 1, 0) + b(1, 1, 0, 1) + c(0, 1, 0, 1). Hence U = span{(1, 0, 1, 0), (1, 1, 0, 1), (0, 1, 0, 1)}. This is a basis so dim U = 3.
(f) If a + b = c + d then a = b + c + d. Hence U = {(b + c + d, b, c, d) | b, c, d in R} so
U = span {(1, 1, 0, 0), (1, 0, 1, 0), (1, 0, 0, 1)} . This is a basis so dim U = 3.
5(b) Let a(x + w) + b(y + w) + c(z + w) + dw = 0, that is ax + by + cz + (a + b + c + d)w = 0. As
{x, y, z, w} is independent, this implies that a = 0, b = 0, c = 0 and a + b + c + d = 0. Hence
d = 0 too, proving that {x + w, y + w, z + w, w} is independent. It is a basis by Theorem 7
because dim R4 = 4.
88
6(b) Yes. They are independent (the matrix with them as columns has determinant 2) and so
are a basis of R3 by Theorem 7 (since dim R3 = 3).
(d) Yes. They are independent (the matrix with them as columns has determinant 6) and so
are a basis of R3 by Theorem 7 (since dim R3 = 3).
(f) No. The determinant of the matrix with these vectors as its columns is zero, so they are not
independent (by Theorem 3). Hence they are not a basis of R4 because dim R4 = 4.
7(b) True. If sy + tz = 0 then 0x + sy + tz = 0, so s = t = 0 by the independence of {x, y, z}.
(d) False. If x ≠ 0 let k = 2, x1 = x and x2 = −x. Then each xi ≠ 0 but {x1, x2} is not independent.
(f) False. If y = −x and z = 0 then 1x + 1y + 1z = 0, but {x, y, z} is certainly not independent.
(h) True. The xi are not independent so, by definition, some nontrivial linear combination vanishes.
10. If rx2 + sx3 + tx5 = 0 then 0x1 + rx2 + sx3 + 0x4 + tx5 + 0x6 = 0. Since the larger set is
independent, this implies r = s = t = 0.
12. If t1x1 + t2(x1 + x2) + ··· + tk(x1 + x2 + ··· + xk) = 0 then, collecting terms in x1, x2, ...,
(t1 + t2 + ··· + tk)x1 + (t2 + ··· + tk)x2 + ··· + (t_{k−1} + t_k)x_{k−1} + t_k x_k = 0.
Since {x1, x2, ..., xk} is independent we get
t1 + t2 + ··· + tk = 0
     t2 + ··· + tk = 0
            ⋮
       t_{k−1} + t_k = 0
                t_k = 0.
The solution (from the bottom up) is t_k = 0, t_{k−1} = 0, ..., t2 = 0, t1 = 0.
T
2
16(b) We show that AT is invertible.
Suppose A x = 0, x in R . By Theorem 5 2.4, we must
s
show that x= 0. If x=
then AT x = 0 gives as + ct = 0, bs + dt = 0. But then
t
17(b) Note first that each V⁻¹xi is in null(AV) because (AV)(V⁻¹xi) = Axi = 0. If t1V⁻¹x1 + ··· + tkV⁻¹xk = 0 then V⁻¹(t1x1 + ··· + tkxk) = 0, so t1x1 + ··· + tkxk = 0 (by multiplication by V). Thus t1 = ··· = tk = 0 because {x1, ..., xk} is independent. So {V⁻¹x1, ..., V⁻¹xk} is independent. To see that it spans null(AV), let y be in null(AV), so that AVy = 0. Then Vy is in null A, so Vy = s1x1 + ··· + skxk because {x1, ..., xk} spans null A. Hence y = s1V⁻¹x1 + ··· + skV⁻¹xk, as required.
89
20. We have {0} ⊆ U ⊆ W where dim{0} = 0 and dim W = 1. Hence dim U is an integer between 0 and 1 (by Theorem 8), so dim U = 0 or dim U = 1. If dim U = 0 then U = {0} by Theorem 8 (because {0} ⊆ U and both spaces have dimension 0); if dim U = 1 then U = W, again by Theorem 8 (because U ⊆ W and both spaces have dimension 1).
Exercises 5.3   Orthogonality
1(b) e1 · e2 = 1 + 0 − 1 = 0, e1 · e3 = 2 + 0 − 2 = 0, e2 · e3 = 2 − 4 + 2 = 0; similarly, for the second set, e1 · e2 = 1 − 1 + 0 = 0, e1 · e3 = 1 + 1 − 2 = 0, and e2 · e3 = 1 − 1 + 0 = 0. Hence {e1, e2, e3} is orthogonal and hence is a basis of R³. If x = (a, b, c), Theorem 6 gives
x = (x·e1/‖e1‖²)e1 + (x·e2/‖e2‖²)e2 + (x·e3/‖e3‖²)e3 = ((a + b + c)/3)e1 + ((a − b)/2)e2 + ((a + b − 2c)/6)e3.
4(b) If e1 = (2, −1, 0, 3) and e2 = (2, 1, −2, −1) then {e1, e2} is orthogonal because e1 · e2 = 4 − 1 + 0 − 3 = 0. Hence {e1, e2} is an orthogonal basis of the space U it spans. If x = (14, 1, −8, 5) is in U, Theorem 6 gives
x = (x·e1/‖e1‖²)e1 + (x·e2/‖e2‖²)e2 = (42/14)e1 + (40/10)e2 = 3e1 + 4e2.
We check that these are indeed equal. [We shall see in Section 8.1 that, in any case, x − [(x·e1/‖e1‖²)e1 + (x·e2/‖e2‖²)e2] is orthogonal to every vector in U.]
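The expansion above is easy to confirm numerically. The following sketch (a check only, using the vectors e1, e2 and x as reconstructed in 4(b)) verifies the orthogonality and the coefficients 3 and 4 with numpy.

    import numpy as np

    e1 = np.array([2, -1, 0, 3], dtype=float)
    e2 = np.array([2, 1, -2, -1], dtype=float)
    x = np.array([14, 1, -8, 5], dtype=float)

    assert np.isclose(e1 @ e2, 0)              # {e1, e2} is orthogonal
    c1 = (x @ e1) / (e1 @ e1)                  # 42/14 = 3
    c2 = (x @ e2) / (e2 @ e2)                  # 40/10 = 4
    print(c1, c2)                              # 3.0 4.0
    assert np.allclose(c1 * e1 + c2 * e2, x)   # x = 3e1 + 4e2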
5(b) The condition that (a, b, c, d) is orthogonal to each of the other three vectors gives the following equations for a, b, c and d:
a − c + d = 0
2a + b + c − d = 0
a − 3b + c = 0.
Solving:
[1  0 −1  1 | 0]       [1 0 −1   1 | 0]
[2  1  1 −1 | 0]  →    [0 1  3  −3 | 0]
[1 −3  1  0 | 0]       [0 0 11 −10 | 0].
Hence d = t is a parameter, c = (10/11)t, b = (3/11)t and a = −(1/11)t; taking t = 11 gives the solution (a, b, c, d) = (−1, 3, 10, 11).
90
(*)
Rn .
Ax2
x2
0.
91
Exercises 5.4
1(b)
2
2
4
Rank of a Matrix
2
0
1
0
1
2
21
0
0
*
)
Hence, rank A = 2 and [1 12 12 ]T , [0 0 1]T is a basis of row A. Thus {[2 1 1]T , [0 0 1]T }
is also a basis of row A. Since the leading 1s are in columns 1 and 3, columns 1 and 3 of A
are a basis of col A.
1
2
1
3
1 2 1 3
1 2 1 3
(d)
)
*
Hence, rank A = 2 and [1 2 1 3]T , [0 0 0 1]T is a basis of row A. Since the leading 1s
are in columns 1 and 4, columns 1 and 4 of A are a basis of col A.
2(b) Apply the gaussian algorithm to the matrix with these vectors as rows:
1
1
0
0
25
12
32
Hence,
6
8
10
12
1
5
6
0
1
0
0
12
0
0
1
6
4
8
36
is a basis of U.
6
1
24
0
6
1
1
0
92
7(b) The null space of A is the set of columns X such that AX = 0. Applying gaussian elimination
to the augmented matrix gives:
3
5
5
2
0 0
1 0
2
2
1 0
1 0 2 2
1 0
0 5 1 4 3 0
1 1 1 2 2 0
0 1 1 4 3 0
2 0 4 4 2 0
0 0
0
0
0 0
1 0
2
2
1 0
1 0 0 6 5 0
0 1 1 4 3 0
0 1 0 0
0 0
0 0
4
16
12 0
0 0 1
4
3 0
0 0
0
0
0 0
0 0 0
0 0
0
6s + 5t
4s
3t
Hence, the set of solutions is null A =
| s, t in R = span B where
6
5
0 0
1 0
93
Exercises 5.5
1(b) det A = 5, det B = 1 (so A and B are not similar). However, tr A = 2 = tr B, and rank
A = 2 = rank B (both are invertible).
(d) tr A = 5, tr B = 4 (so A and B are not similar). However, det A = 7 = det B, so
rank A = 2 = rank B (both are invertible).
(f) tr A = 5 = tr B; det A = 0 = det B; however rank A = 2, rank B = 1 (so A and B are not
similar).
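The invariants used in 1(b)-(f) are easy to compare numerically. The sketch below uses placeholder matrices A and B (not those of the exercise): if trace, determinant or rank differ, the matrices cannot be similar; if they all agree, the test is inconclusive.

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])   # hypothetical
    B = np.array([[1.0, 0.0], [0.0, 2.0]])   # hypothetical

    def invariants(M):
        return np.trace(M), np.linalg.det(M), np.linalg.matrix_rank(M)

    print(invariants(A), invariants(B))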
3(b) We have A ∼ B, say B = P⁻¹AP. Hence B⁻¹ = (P⁻¹AP)⁻¹ = P⁻¹A⁻¹(P⁻¹)⁻¹ = P⁻¹A⁻¹P, so A⁻¹ ∼ B⁻¹ because P is invertible.
x3
0
6
= (x + 3)(x2 5x 24) = (x + 3)2 (x 8). So the eigenvalues
x+3
0
4(b) cA (x) = 0
5
0
x2
are 1 = 3,
eigenvectors:
2 = 8. To findthe associated
1 = 3:
2 = 8:
Since
P =
11
1
0
1
0
1
1
6
0
x4
(d) cA (x) = 0
2
E1 =
6
0
1
0
6
0
5
65
0
0
; basic eigenvectors
; basic eigenvector
x2
3
0
2
x1
0
1
6
0
5
1
0
will satisfy P 1 AP =
0
3
0
0
= (x 4)2 (x + 1). For = 4,
2
3
1
0
94
xc
a
b
xa
xb
xs
xs
xs
xa
xb
= (x s) x2 (a b)2 (a c)(b c)
= (x s)(x2 k).
xs
a
b
0
x + (a b)
bc
0
ac
x (a b)
20(b) To compute cA(x) = det(xI − A), add x times column 2 to column 1, and then expand along row 1:

         | x    −1    0   ···   0        0            |
         | 0     x   −1   ···   0        0            |
cA(x) =  | ⋮               ⋱                     ⋮    |
         | 0     0    0   ···   x       −1            |
         | r0    r1   r2  ···  r_{k−2}   x + r_{k−1}  |

         | 0          −1    0   ···   0        0            |
         | x²          x   −1   ···   0        0            |
      =  | ⋮                     ⋱                     ⋮    |
         | 0           0    0   ···   x       −1            |
         | r0 + r1x    r1   r2  ···  r_{k−2}   x + r_{k−1}  |.

Now expand along row 1 (its only nonzero entry is the −1 in position (1, 2)) to get

         | x²         −1   ···   0        0            |
cA(x) =  | ⋮               ⋱                     ⋮    |
         | 0           0   ···   x       −1            |
         | r0 + r1x    r2  ···  r_{k−2}   x + r_{k−1}  |.

This matrix has the same form as xI − A (with x² in the corner in place of x, and r0 + r1x in place of r0), so repeat this procedure. It leads to the given expression for det(xI − A).
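The conclusion can be checked numerically: for the companion matrix whose last row is (−r0, −r1, ..., −r_{k−1}) with 1s on the superdiagonal, the characteristic polynomial is x^k + r_{k−1}x^{k−1} + ··· + r1x + r0. A small numpy sketch with arbitrary test coefficients:

    import numpy as np

    r = [2.0, -1.0, 3.0, 0.5]            # r0, r1, r2, r3 (hypothetical test values)
    k = len(r)
    A = np.zeros((k, k))
    A[:-1, 1:] = np.eye(k - 1)           # superdiagonal of 1s
    A[-1, :] = [-ri for ri in r]         # last row: -r0, -r1, ..., -r_{k-1}

    # numpy lists coefficients from highest degree down: x^4 + 0.5 x^3 + 3 x^2 - x + 2
    print(np.poly(A))                    # approximately [1, 0.5, 3, -1, 2]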
Exercises 5.6
1(b) Here A =
, B =
1
0
8
, X =
96
(AT A)1 =
1
144
120
x
y
z
. Hence, AT A =
120
216
288
516
168
216
288
1
36
24
30
54
26
20
12
12
30
54
72
129
42
72
12
12
12
1
36
24
30
30
54
72
129
42
54
72
44
15
29
1
36
60
138
285
1
12
20
46
95
Of course this can be found more efficiently using gaussian elimination on the normal equations
for Z.
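As the remark says, the least squares vector can be found by solving the normal equations (AᵀA)z = Aᵀb. A minimal numpy sketch, using a placeholder design matrix and data vector (not those of the exercise):

    import numpy as np

    A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # hypothetical
    b = np.array([1.0, 2.0, 2.0, 4.0])                              # hypothetical

    z_normal = np.linalg.solve(A.T @ A, A.T @ b)      # gaussian elimination on the normal equations
    z_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # library least squares, for comparison
    print(z_normal, z_lstsq)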
4
1 2
1 4
3
4
21
1 1 1 1
1 1 1 1
TY =
=
=
,
M
2(b) Here M T M =
10
42
21
133
A = (M T M )1 M T Y =
1
91
133
21
21
4
.
64
13
10
42
1
. =
91
6
13 x.
4
(d) Analogous to (b). The best fitting line is y = 10
17
10 x.
448
42
1
. =
13
64
6
96
3(b) Now M T M =
MT Y =
16
16
1
0
2
3
16
.
=
.
=
29
29
83
29
83
353
. .
16
70
We use (MM T )1 to solve the normal equations even though it is more efficient to solve them
by gaussian elimination.
3348
642
426
540
.127
6
1
1
642
571
187 16 =
102 = .024 .
A = (M T M)1 (M T Y ) =
4248
4248
426
187
91
Hence the best fitting quadratic has equation y = .127 .024x + .194x2 .
MT M =
14
36
34
36
98
90
34
90
85
, M =
1
5
10
, and (M T M )1 =
1
92
02
20
12
21
22
22
32
23
230
92
34
92
36
36
76
Z = (M T M)1 M T Y =
5(b) Here Y =
1
5
9
MT M =
(1)2
02
22
32
14
14
98
10
10
2
17
46
18
1
2
46 [23x + 33x
, M =
1
46
115
sin 2
sin(0)
sin()
3
sin
46
18
38
2
14
2
1
20 [18 + 21x
103
1
2
1
0
1
40
111
and (M T M)1 =
Z = (M T M)1 M T Y =
41
1
46
1
46
14
12
49
+ 28 sin
. Hence
24
7
35
31
2
203
2
19
2
x
2 ].
35
10
10
49
1
20
17
46
18
33
30
21
28
. Hence,
23
18
115
+ 30(2)x ].
24
.194
822
70
46
18
38
97
M M=
M Y =
1
98
Hence A = (M T M)1 M T Y =
98
14
14
95
80
56
=
231
919
14
14
98
231
919
1
98
9772
477
99.71
4.87
to two
MT M =
Hence
MT Y =
A = (M T M)1 (M T Y ) =
1
4
76
95
80
56
84
84
98
20
24
=
20
24
6
231
423
919
14
14
36
14
36
98
231
423
919
1
4
404
6
18
101
32
92
A=
50
18
10
40
20
16
35
14
10
40
12
12
30
16
14
B=
28
30
21
23
23
98
AT A =
(AT A)1 =
1
50160
195
80
62
195
7825
3150
2390
80
3150
1320
1008
2390
1008
796
62
1
50160
1035720
16032
16032
10080
16032
10080
45300
10080
416
632
632
45300
1035720
2600
800
16032
416
632
800
45300
2180
10080
632
2600
2180
800
2180
45300
800
2180
3950
3950
125
4925
2042
1568
5.19
0.34
0.51
0.71
n
=
(a20 2a0 yi + yi2 )
i=1
= na20 2
yi a0 +
yi2 .
r0
1
..
.
1
+ r1
ex1
..
.
ex2
0
..
.
0
1
..
.
ex1
..
.
exn
are independent. If
Exercises 5.7
99
2. Let X = [x1 x2 ··· x10] = [12 16 ··· 13 14] denote the number of years of education. Then x̄ = (1/10)Σxi = 15.3 and s²x = (1/(n−1))Σ(xi − x̄)² = 9.12 (so sx = 3.02).
Let Y = [y1 y2 ··· y10] = [31 48 ··· 35 35] denote the number of dollars (in thousands) of yearly income. Then ȳ = (1/10)Σyi = 40.3 and s²y = (1/(n−1))Σ(yi − ȳ)² = 114.23 (so sy = 10.69). The correlation is
r = (X · Y − 10 x̄ ȳ)/(9 sx sy) = 0.599.
Hence
z̄ = (1/n)Σ(a + bxi) = (1/n)(na + bΣxi) = a + b(1/n)Σxi = a + b x̄,
and
s²z = (1/(n−1))Σ(zi − z̄)² = (1/(n−1))Σ[(a + bxi) − (a + b x̄)]² = (1/(n−1))Σ b²(xi − x̄)² = b² s²x.
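The computations in Exercise 2 follow a fixed recipe (sample means, sample variances, then r). Since the full data lists are not reproduced here, the following sketch packages the recipe as a function rather than re-entering the data:

    import numpy as np

    def correlation(X, Y):
        # r = (X.Y - n*xbar*ybar) / ((n-1)*sx*sy), with sample standard deviations
        X, Y = np.asarray(X, float), np.asarray(Y, float)
        n = len(X)
        xbar, ybar = X.mean(), Y.mean()
        sx = np.sqrt(((X - xbar) ** 2).sum() / (n - 1))
        sy = np.sqrt(((Y - ybar) ** 2).sum() / (n - 1))
        return (X @ Y - n * xbar * ybar) / ((n - 1) * sx * sy)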
1
1
] and x2 = [
is not independent.
1
1
1
0
0
1
0
0
0
0
1(n) False. [ ], [
], [ ], [ ] is not independent.
0
0
0
1
100
.
101
1(b) No: S5 fails: 1(x, y, z) = (1x, 0, 1z) = (x, 0, z) ≠ (x, y, z) whenever y ≠ 0. Note that the other nine axioms do hold.
(d) No: S4 and S5 fail. S5 fails because 1(x, y, z) = (2x, 2y, 2z) ≠ (x, y, z) in general; and S4 fails because a[b(x, y, z)] = a(2bx, 2by, 2bz) = (4abx, 4aby, 4abz) ≠ (2abx, 2aby, 2abz) = ab(x, y, z) in general. Note that the other eight axioms hold.
2(b) No: A1 fails for example (x3 + x + 1) + (x3 + x + 1) = 2x + 2 is not in the set.
(d) No: A1 and S1 both fail. For example x + x2 and 2x are not in the set. Hence none of the
other axioms make sense.
a b
x y
(f) Yes. First verify A1 and S1. Suppose A =
and B =
are in V, so a+c = b+d
c d
z w
a+x b+y
and x + z = y + w. Then A + B =
is in V because
c+z
d+w
(a + x) + (c + z) = (a + c) + (x + z) = (b + d) + (y + w) = (b + y) + (d + w).
ra rb
Also rA =
is in V for all r in R because ra + rc = r(a + c) = r(b + d) = rb + rd.
rc
rd
A2, A3,
S2,S3, S4, S5. These hold for matrices in general.
0 0
A4.
is in V and so serves as the zero of V .
0 0
a b
A5. Given A =
with a + c = b + d, then A =
c
a
c
is also in V because
a c = (a + c) = (b + d) = b d. So A is the negative of A in V .
(h) Yes. The vector space axioms are the basic laws of arithmetic.
(j) No. S4 and S5 fail. For S4, a(b(x, y)) = a(bx, by) = (abx, aby), and this need not equal
ab(x, y) = (abx, aby); as to S5, 1(x, y) = (x, y) = (x, y) if y = 0.
Note that the other axioms do hold here:
A1, A2, A3, A4 and A5 hold because they hold in R2 .
S1 is clear; S2 and S3 hold because they hold in R2 .
(l) No. S3 fails: Given f : R R and a, b in R, we have
[(a + b)f](x) = f ((a + b)x) = f (ax + bx)
(af + bf)(x) = (af)(x) + (bf )(x) = f(ax) + f(bx).
These need not be equal: for example, if f is the function defined by f (x) = x2 ;
Then f(ax + bx) = (ax + bx)2 need not equal (ax)2 + (bx)2 = f(ax) + f(bx).
102
103
It is worth noting that these equations can also be solved by gaussian elimination using u
and v as the constants.
a 0
0 b
c
c
0 0
6(b) au + bv + cw = 0 becomes
+
+
=
.
0
b + c = 0,
b + c = 0,
a c = 0.
by S3
= (a1 v + a2 v + + an v) + an+1 v
by induction
= a1 v + a2 v + + an v + an+1 v
Hence it holds for n + 1, and the induction is complete.
15(c) Since a ≠ 0, a⁻¹ exists in R. Hence av = aw gives a⁻¹(av) = a⁻¹(aw); that is 1v = 1w, that is v = w.
Alternatively: av = aw gives av − aw = 0, so a(v − w) = 0. As a ≠ 0, it follows that v − w = 0 by Theorem 3, that is v = w.
104
Exercises 6.2
1(b) Yes. U is a subset of P3 because xg(x) has degree one more than the degree of g(x). Clearly 0 = x · 0 is in U. Given u = xg(x) and v = xh(x) in U (where g(x) and h(x) are in P2) we have
u + v = x(g(x) + h(x)) is in U because g(x) + h(x) is in P2
ku = x(kg(x)) is in U for all k in R because kg(x) is in P2
ku =
ka
kb
kc
kd
b + b1
d + d1
is in U. If u =
and u1 =
a1
b1
c1
d1
are in U then
is in U because (a + a1 ) + (b + b1 ) = (a + b) + (a1 + b1 )
= (c + d) + (c1 + d1)
= (c + c1 ) + (d + d1 ).
3(b) No. U is not closed under addition. For example if f and g are defined by f(x) = x + 1 and
g(x) = x2 + 1, then f and g are in U but f + g is not in U because (f + g)(0) = f(0) + g(0) =
1 + 1 = 2.
(d) No. U is not closed under scalar multiplication. For example, if f is defined by f (x) = x,
then f is in U but (1)f is not in U (for example [(1)f ]( 21 ) = 12 so is not in U).
105
(f) Yes. 0 is in U because 0(x + y) = 0 = 0 + 0 = 0(x) + 0(y) for all x and y in [0, 1]. If f and g
are in U then, for all k in R:
(f + g)(x + y) = f(x + y) + g(x + y)
= (f (x) + f (y)) + (g(x) + g(y))
= (f (x) + g(x)) + (f (y) + g(y))
= (f + g)(x) + (f + g)(y)
5(b) Suppose X =
x1
..
.
xn
= 0, say xk = 0. Given Y =
y1
..
.
yn
column x1
k Y and the other columns zero. Then Y = AX by matrix multiplication, so Y is
in U. Since Y was an arbitrary column in Rn , this shows that U = Rm .
6(b) We want r, s and t such that 2x2 3x + 1 = r(x + 1) + s(x2 + x) + t(x2 + 2). Equating
coefficients of x2 , x and 1 gives s + t = 2, r + s = 3, r + 2t = 1. The unique solution is
r = 3, s = 0, t = 2.
(d) As in (b), x = 23 (x + 1) + 13 (x2 + x) 31 (x2 + 2).
7(b) If v = su + tw then x = s(x2 + 1) + t(x + 2). Equating coefficients gives 0 = s, 1 = t and
0 = s + 2t. Since there is no solution to these equations, v does not lie in span{u, w} .
1 4
1 1
2 1
(d) If v = su + tw, then
=s
+t
. Equating corresponding entries
5
x2 = 21 [(1 + 2x2 ) 1] is in U.
)
*
Since P2 = span 1, x, x2 , this shows that P2 U. Clearly U P2 , so U = P2 .
106
11(b) The vectors u − v = 1u + (−1)v, u + w, and w are all in span{u, v, w}, so span{u − v, u + w, w} ⊆ span{u, v, w} by Theorem 2. The other inclusion also follows from Theorem 2 because
u = (u + w) − w
v = −(u − v) + (u + w) − w
w = w
show that u, v and w are all in span{u − v, u + w, w}.
14. No. For example (1, 1, 0) is not even in span{(1, 2, 0), (1, 1, 1)} . Indeed (1, 1, 0) = s(1, 2, 0) +
t(1, 1, 1) requires that s + t = 1, 2s + t = 1, t = 0, and this has no solution.
18. Write W = span{u, v2, ..., vn}. Since u is in V we have W ⊆ V. But the fact that a1 ≠ 0 means
v1 = (1/a1)u − (a2/a1)v2 − ··· − (an/a1)vn
so v1 is in W. Since v2, ..., vn are all in W, this shows that V = span{v1, v2, ..., vn} ⊆ W. Hence V = W.
Exercises 6.3
(f) Dependent:
5 · 1/(x² + x − 6) + 1 · 1/(x² − 5x + 6) − 6 · 1/(x² − 9) = 0.
+ s + 3t = 0.
107
gives
2 1 0 0
1 1
x 0 1 0 2 1
1 1 3 0
x 0
3
0
1
0
0
0
1 3x
0
0
0
1 + 3x
0
0
0
2 1 0
2
1 0
x
1
det x 0 1 = det x 0 1 = det
= (1 + 3x).
1
Spanning: Write U = span{(1, 1, 1), (1, 1, 1), (1, 1, 1)} . Then (1, 0, 0) = 12 [(1, 1, 1) +
(1, 1, 1)] is in U; similarly (0, 1, 0) and (0, 0, 1) are in U. As R3 = span{(1, 0, 0), (0, 1, 0), (0, 0, 1)} ,
we have R3 U.Clearly U R3 , so we have R3 = U.
, that is
xy
zw
x
z
'
,
(
x+z
y+w
B is a basis of U, so dim U = 2.
108
'
(
1
1
0
1
x y
(d) Write U = A | A
=
A . If A =
then A is in U if and only
1 0
1 1
z w
x y
1
1
0
1
x y
xy x
z
w
if
=
; that is
=
.
z
zw
zx
wy
'
(
1 0
0
1
Thus U = span B where B =
,
. But B is independent because s
1 1
1 0
0
1
0 0
t
=
implies s = t = 0. Hence B is a basis of U, so dim U = 2.
1
8(b) If X =
+
x+z
y+w
the condition AX = X is
=
and this holds if and
0
0
z w
x y
1 0
0 1
only if z = w = 0. Hence X =
=x
+y
. So U = span B where
0 0
0 0
0 0
'
(
1 0
0 1
B=
,
. As B is independent, it is a basis of U, so dim U = 2.
0
a
q
r
| a, b, p, q, r, s, m in R = span B where
b
p
s
V =
0
1
0
0
0 0
1
0 0
0 0 0
B = 0 0 0 , 0 0 0 , 1 0 0 , 0 0 0 ,
0 1 0
1 0 0
1 0 0
1 1 1
0 0
1
0 0
0
0
0
0
0 1 0 , 0 0 0 , 0 0 1 .
109
for every p(x) in P3 . This is not the case, so no such basis of P3 can exist. [Indeed, no such
spanning set of P3 can exist.]
'
(
1 0
1 1
1 0
0 1
(d) No. B =
,
,
,
is a basis of invertible matrices.
0 1
0 1
1 1
1 1
1 0
1 1
1 0
0 1
0 0
Independent: r
+s
+t
+u
=
gives
0
r + s + t = 0,s + u = 0,
t + u =0, r+ s + t+ u = 0. The only solution is r = s = t = u = 0.
0 1
1 1
1 0
Spanning:
=
is in span B
0 0
0 1
0 1
0 0
1 0
1 0
=
is in span B
1 0 1 1 0 1
0 0
0 1
0 1
0 0
=
is in span B
0 1
1 1
0 0
1 0
1 0
1 0
0 0
=
is in span B
0 0
0 1
0 1
'
(
0 1
0 0
0 0
1 0
Hence M22 = span
,
,
,
span B. Clearly span B
0
M22 .
Hence
{u , v }
s
t
0
0
implies
s
t
0
0
110
29(b) If U is not invertible, let Ux = 0 where x ≠ 0 in Rⁿ (Theorem 5, 2.3). We claim that no set {A1U, A2U, ...} can span Mmn (let alone be a basis). For if it did, we could write any matrix B in Mmn as a linear combination
B = a1A1U + a2A2U + ···
Then Bx = a1A1Ux + a2A2Ux + ··· = 0 + 0 + ··· = 0, a contradiction. In fact, if entry k of x is nonzero, then Bx ≠ 0 where all entries of B are zero except column k, which consists of 1s.
33(b) Suppose U ∩ W = {0}. If su + tw = 0 with u and w nonzero in U and W, then su = −tw is in U ∩ W = {0}. Hence su = 0 = tw, so s = 0 = t (as u ≠ 0 and w ≠ 0). Thus {u, w} is independent. Conversely, assume that the condition holds. If v ≠ 0 lies in U ∩ W, then {v, v} is independent by the hypothesis, a contradiction because 1v + (−1)v = 0.
36(b) If p(x) = a0 + a1x + ··· + anxⁿ is in On, then p(−x) = −p(x), so
a0 − a1x + a2x² − a3x³ + a4x⁴ − ··· = −a0 − a1x − a2x² − a3x³ − a4x⁴ − ···.
Hence a0 = a2 = a4 = ··· = 0 and p(x) = a1x + a3x³ + a5x⁵ + ···. Thus On = span{x, x³, x⁵, ...} is spanned by the odd powers of x in Pn. The set B = {x, x³, x⁵, ...} is independent (because {1, x, x², x³, x⁴, ...} is independent), so it is a basis of On. If n is even, B = {x, x³, x⁵, ..., x^{n−1}} has n/2 members, so dim On = n/2. If n is odd, B = {x, x³, x⁵, ..., xⁿ} has (n+1)/2 members, so dim On = (n+1)/2.
Exercises 6.4
1(b) B = {(1, 0, 0), (0, 1, 0), (0, 1, 1)} is independent as r(1, 0, 0) + s(0, 1, 0) + t(0, 1, 1) = (0, 0, 0)
implies r = 0, s + t = 0, t = 0, whence r = s = t = 0. Hence B is a basis by Theorem 3
because dim R3 = 3.
)
*
(d) B = 1, x, x2 x + 1 is independent because r1 + sx + t(x2 x 1) = 0 implies r t = 0,
s t = 0, and t = 0; whence r = s = t = 0. Hence B is a basis by Theorem 3 because
dim P2 = 3.
2(b) As dim P2 = 3, any independent set of three vectors in P2 is a basis by Theorem 3. But
(x² + 3) − 2(x + 2) − (x² − 2x − 1) = 0,
so {x² + 3, x + 2, x² − 2x − 1} is dependent. However, any other subset of three vectors from {x² + 3, x + 2, x² − 2x − 1, x² + x} is independent (verify).
111
is in span B
is in span B
and, of course, (0, 1, 0, 0) and (0, 0, 1, 0) are in span B. Hence B is a basis of R4 by Theorem
3 because dim R4 = 4.
)
*
(d) B = 1, x2 + x, x2 + 1, x3 spans P3 because x2 = (x2 + 1) 1 and x = (x2 + x) x2 are in
span B (together with 1 and x3 ). So B is a basis of P3 by Theorem 3 because dim P3 = 4.
4(b) Let z = a + bi; a, b in R. Then b ≠ 0 as z is not real and a ≠ 0 as z is not pure imaginary. Since dim C = 2, it suffices (by Theorem 3) to show that {z, z̄} is independent. If rz + sz̄ = 0 then 0 = r(a + bi) + s(a − bi) = (r + s)a + (r − s)bi. Hence (r + s)a = 0 = (r − s)b, so (because a ≠ 0 ≠ b) r + s = 0 = r − s. Thus r = s = 0.
5(b) The four polynomials in S have distinct degrees. Use Example 4 6.3.
6(b) {4, 4x, 4x², 4x³} is such a basis. There is no basis of P3 consisting of polynomials having the property that their coefficients sum to zero. For if there were, then every polynomial in P3 would have this property (since sums and scalar multiples of such polynomials have the same property).
7(b) Not a basis because (2u + v + 3w) − (3u + v − w) + (u − 4w) = 0.
(d) Not a basis because 2u − (u + w) − (u − w) + 0(v + w) = 0.
8(b) Yes, four vectors can span R3 say any basis together with any other vector.
No, four vectors in R3 cannot be independent by the fundamental theorem (Theorem 2 6.3)
because R3 is spanned by 3 vectors (dim R3 = 3).
10. We have det A = 0 if and only if A is not invertible. This holds if and only if the rows of
A are dependent by Theorem 3 5.2. This in turn holds if and only if some row is a linear
combination of the rest by the dependent lemma (Lemma 3).
11(b) No. Take X = {(0, 1), (1, 0)} and D = {(0, 1), (1, 0), (1, 1)}. Then D is dependent, but its
subset X is independent.
(d) Yes. This is follows from Exercise 15 6.3 (solution above).
15. Let {u1 , ..., um }, m k, be a basis of U so dim U = m. If v U then W = U by Theorem
2 6.2, so certainly dim W = dim U. On the other hand, if v
/ U then {u1 , ..., um , v} is
independent by the independent lemma (Lemma 1). Since W = span{u1 , ..., um , v}, again
by Theorem 2 6.2, it is a basis of W and so dim W = 1 + dim U.
18(b) The two-dimensional subspaces of R3 are the planes through the origin, and the one-dimensional
subspaces are the lines through the origin. Hence part (a) asserts that if U and W are distinct
planes through the origin, then U W is a line through the origin.
23(b) Let vn denote the sequence with 1 in the nth coordinate and zeros elsewhere. Thus v0 =
(1, 0, 0, . . .), v1 = (0, 1, 0, . . .) etc. Then a0 v0 + a1 v1 + + an vn = (a0 , a1 , . . . , an , 0, 0, . . .)
so a0 v0 + a1 v1 + + an vn = 0 implies a0 = a1 = = an = 0. Thus {v0 , v1 , . . . , vn }
is an independent set of n + 1 vectors. Since n is arbitrary, dim V cannot be finite by the
fundamental theorem.
112
Exercises 6.5
An Application to Polynomials
2(b) f^(0)(x) = f(x) = x³ + x + 1, so f^(1)(x) = 3x² + 1, f^(2)(x) = 6x, f^(3)(x) = 6. Hence Taylor's theorem gives
f(x) = f(1) + f^(1)(1)(x − 1) + (f^(2)(1)/2!)(x − 1)² + (f^(3)(1)/3!)(x − 1)³
     = 3 + 4(x − 1) + 3(x − 1)² + (x − 1)³.
(d) f^(0)(x) = f(x) = x³ − 3x² + 3x, f^(1)(x) = 3x² − 6x + 3, f^(2)(x) = 6x − 6, f^(3)(x) = 6. Hence Taylor's theorem gives
f(x) = f^(0)(1) + f^(1)(1)(x − 1) + (f^(2)(1)/2!)(x − 1)² + (f^(3)(1)/3!)(x − 1)³
     = 1 + 0(x − 1) + (0/2!)(x − 1)² + 1(x − 1)³
     = 1 + (x − 1)³.
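As a quick check, expanding the two Taylor forms about x = 1 recovers the original polynomials; a sympy sketch:

    import sympy as sp

    x = sp.symbols('x')
    f_b = 3 + 4*(x - 1) + 3*(x - 1)**2 + (x - 1)**3
    f_d = 1 + (x - 1)**3
    print(sp.expand(f_b))   # x**3 + x + 1
    print(sp.expand(f_d))   # x**3 - 3*x**2 + 3*x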
6(b) The three polynomials are x² − 3x + 2 = (x − 1)(x − 2), x² − 4x + 3 = (x − 1)(x − 3) and x² − 5x + 6 = (x − 2)(x − 3), so use a0 = 3, a1 = 2, a2 = 1 in Theorem 2.
7(b) The Lagrange polynomials for a0 = 1, a1 = 2, a2 = 3 are
δ0(x) = (x − 2)(x − 3)/[(1 − 2)(1 − 3)] = ½(x − 2)(x − 3)
δ1(x) = (x − 1)(x − 3)/[(2 − 1)(2 − 3)] = −(x − 1)(x − 3)
δ2(x) = (x − 1)(x − 2)/[(3 − 1)(3 − 2)] = ½(x − 1)(x − 2).
Given f(x) = x² + x + 1:
f(x) = f(1)δ0(x) + f(2)δ1(x) + f(3)δ2(x)
     = (3/2)(x − 2)(x − 3) − 7(x − 1)(x − 3) + (13/2)(x − 1)(x − 2).
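A sympy check that the Lagrange combination in 7(b) really reproduces f(x) = x² + x + 1:

    import sympy as sp

    x = sp.symbols('x')
    d0 = (x - 2)*(x - 3) / ((1 - 2)*(1 - 3))
    d1 = (x - 1)*(x - 3) / ((2 - 1)*(2 - 3))
    d2 = (x - 1)*(x - 2) / ((3 - 1)*(3 - 2))
    f = lambda t: t**2 + t + 1
    print(sp.expand(f(1)*d0 + f(2)*d1 + f(3)*d2))   # x**2 + x + 1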
10(b) If r(x − a)² + s(x − a)(x − b) + t(x − b)² = 0, then taking x = a gives t(a − b)² = 0, so t = 0 because a ≠ b; and taking x = b gives r(b − a)² = 0, so r = 0. Thus we are left with s(x − a)(x − b) = 0. If x is any number except a, b, this implies s = 0. Thus B = {(x − a)², (x − a)(x − b), (x − b)²} is independent in P2; since dim P2 = 3, B is a basis.
113
11(b) Here Un = {f(x) in Pn | f(a) = 0 = f(b)}. Let {p1(x), ..., p_{n−1}(x)} be a basis of P_{n−2}; it suffices to show that
B = {(x − a)(x − b)p1(x), ..., (x − a)(x − b)p_{n−1}(x)}
is a basis of Un. Clearly B ⊆ Un.
Independent: Let s1(x − a)(x − b)p1(x) + ··· + s_{n−1}(x − a)(x − b)p_{n−1}(x) = 0. Then (x − a)(x − b)[s1p1(x) + ··· + s_{n−1}p_{n−1}(x)] = 0, so (by the hint) s1p1(x) + ··· + s_{n−1}p_{n−1}(x) = 0. Thus s1 = s2 = ··· = s_{n−1} = 0.
Spanning: Given f(x) in Un, we have f(a) = 0, so f(x) = (x − a)g(x) for some polynomial g(x) in P_{n−1} by the factor theorem. But 0 = f(b) = (b − a)g(b), so (as b ≠ a) g(b) = 0. Then g(x) = (x − b)h(x) with h(x) = r1p1(x) + ··· + r_{n−1}p_{n−1}(x), ri in R, whence
f(x) = (x − a)g(x) = (x − a)(x − b)h(x) = (x − a)(x − b)[r1p1(x) + ··· + r_{n−1}p_{n−1}(x)]
     = r1(x − a)(x − b)p1(x) + ··· + r_{n−1}(x − a)(x − b)p_{n−1}(x).
Exercises 6.6
1(b) By Theorem 1, f(x) = ce^{−x} for some constant c. We have 1 = f(1) = ce^{−1}, so c = e. Thus f(x) = e^{1−x}.
(d) The characteristic polynomial is x² + x − 6 = (x − 2)(x + 3). Hence f(x) = ce^{2x} + de^{−3x} for some c, d. We have 0 = f(0) = c + d and 1 = f(1) = ce² + de^{−3}. Hence d = −c and c = 1/(e² − e^{−3}), so f(x) = (e^{2x} − e^{−3x})/(e² − e^{−3}).
(f) The characteristic polynomial is x² − 4x + 4 = (x − 2)². Hence f(x) = ce^{2x} + dxe^{2x} = (c + dx)e^{2x} for some c, d. We have 2 = f(0) = c and 0 = f(1) = (c + d)e². Thus c = 2 and d = −2, so f(x) = 2(1 − x)e^{2x}.
(h) The characteristic polynomial is x² − a² = (x − a)(x + a), so (as a ≠ −a) f(x) = ce^{ax} + de^{−ax} for some c, d. We have 1 = f(0) = c + d and 0 = f(1) = ce^{a} + de^{−a}. Thus d = 1 − c and c = 1/(1 − e^{2a}), whence
f(x) = ce^{ax} + (1 − c)e^{−ax} = (e^{ax} − e^{a(2−x)})/(1 − e^{2a}).
4(b) If f(x) = g(x) + 2 then f′ + f = 2 becomes g′ + g = 0, whence g(x) = ce^{−x} for some c. Thus f(x) = ce^{−x} + 2 for some constant c.
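These solutions are easy to verify symbolically. For instance, for 1(d) (whose characteristic polynomial indicates the equation f″ + f′ − 6f = 0 with f(0) = 0 and f(1) = 1), a sympy sketch:

    import sympy as sp

    x = sp.symbols('x')
    f = (sp.exp(2*x) - sp.exp(-3*x)) / (sp.exp(2) - sp.exp(-3))
    print(sp.simplify(sp.diff(f, x, 2) + sp.diff(f, x) - 6*f))   # 0
    print(f.subs(x, 0), sp.simplify(f.subs(x, 1)))               # 0, 1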
114
is a particular solution.
Hence, f (x) = x
3
3
g(x) = h(x) f(x) = h(x) + x3 . Then
g + g 6g = (h + h 6h) (f + f 6f ) = 0.
So, to find g, the characteristic polynomial is x2 + x 6 = (x 2)(x + 3). Hence we have
g(x) = ce3x + de2x , where c and d are constants, so
h(x) = ce3x + de2x
x3
3 .
6(b) The general solution is m(t) = 10(4/5)^{t/3}. Hence 10(4/5)^{t/3} = 5 gives t = 3 ln(1/2)/ln(4/5) = 9.32 hours.
7(b) If m = m(t) is the mass at time t, then the rate m′(t) of decay is proportional to m(t), that is, m′(t) = km(t) for some k. Thus m′ − km = 0, so m = ce^{kt} for some constant c. Since m(0) = 10, we obtain c = 10, whence m(t) = 10e^{kt}. Also, 8 = m(3) = 10e^{3k}, so e^{3k} = 4/5, e^{k} = (4/5)^{1/3}, and m(t) = 10(e^{k})^{t} = 10(4/5)^{t/3}.
9. In Example 4, we found that the period of oscillation is 2π/√k. Hence 2π/√k = 30, so we obtain k = (π/15)² = 0.044.
Supplementary Exercises   Chapter 6
2(b) Suppose {Ax1 , . . . , Axn } is a basis of Rn . To show that A is invertible, we show that Y A = 0
implies Y = 0. (This shows AT is invertible by Theorem 5 2.3, so A is invertible). So assume
that Y A = 0. Let c1 , . . . , cm denote the columns of Im , so Im = [C1 , C2 , . . . , Cm ] . Then
Y = Y Im = Y [c1 c2 . . . cm ] = [Y c1 Y c2 . . . Y cm ] , so it suffices to show that
Y cj = 0 for each j. But cj is in Rn so our hypothesis shows that cj = r1 Av1 + + rn Avn
for some rj in R. Hence,
cj = A(r1 v1 + + rn vn )
so Y cj = Y A(r1 v1 + + rn vn ) = 0, as required.
4. Assume that A is m × n. If x is in null A, then Ax = 0 so (AᵀA)x = Aᵀ0 = 0. Thus x is in null AᵀA, so null A ⊆ null AᵀA. Conversely, let x be in null AᵀA; that is, AᵀAx = 0. Write Ax = y = [y1 ··· ym]ᵀ. Then ‖y‖² = yᵀy = (Ax)ᵀ(Ax) = xᵀ(AᵀAx) = xᵀ0 = 0, so y = Ax = 0; that is, x is in null A. Hence null AᵀA ⊆ null A, and so null AᵀA = null A.
115
= T (X) + T (Y ) and
(h) Here Z is fixed in Rⁿ and T(X) = X · Z for all X in Rⁿ. We use Theorem 1, Section 5.3:
T(X + Y) = (X + Y) · Z = X · Z + Y · Z = T(X) + T(Y)
T(rX) = (rX) · Z = r(X · Z) = rT(X).
2(b) Let A =
...
...
0
..
.
0
..
.
0
..
.
...
..
.
0
..
.
...
, B =
...
...
0
..
.
0
..
.
...
..
.
0
..
.
...
0
..
.
0
, then A + B =
...
...
0
..
.
0
..
.
0
..
.
...
..
.
0
..
.
...
(d) Here T(v) = v + u, T(w) = w + u, and T(v + w) = v + w + u. Thus if T(v + w) = T(v) + T(w) then v + w + u = (v + u) + (w + u), so u = 2u, u = 0. This is contrary to assumption.
Alternatively, T(0) = 0 + u = u ≠ 0, so T cannot be linear by Theorem 1.
3(b) Because T is linear, T (3v1 + 2v2 ) = 3T (v1 ) + 2T (v2 ) = 3(2) + 2(3) = 0.
116
1
7
1
1
and
=r
1
1
1
, it suffices to express
+s
1
1
1
7
as a linear
(f) We know T(1), T(x + 2) and T(x² + x), so we express 2 − x + 3x² as a linear combination of these vectors:
2 − x + 3x² = r · 1 + s(x + 2) + t(x² + x).
Equating coefficients gives 2 = r + 2s, −1 = s + t and 3 = t. The solution is r = 10, s = −4 and t = 3, so
T(2 − x + 3x²) = T[r · 1 + s(x + 2) + t(x² + x)]
              = rT(1) + sT(x + 2) + tT(x² + x)
              = 5r + s + 0
              = 46.
In fact, we can find the action of T on any vector a + bx + cx² in the same way. Observe that
a + bx + cx² = (a − 2b + 2c) · 1 + (b − c)(x + 2) + c(x² + x)
for any a, b and c, so
T(a + bx + cx²) = (a − 2b + 2c)T(1) + (b − c)T(x + 2) + cT(x² + x)
               = (a − 2b + 2c) · 5 + (b − c) · 1 + c · 0
               = 5a − 9b + 9c.
117
This works in general. Observe that (x, y) = ((x − y)/3)(2, −1) + ((x + 2y)/3)(1, 1) for any x and y, so, since T is linear,
T(x, y) = ((x − y)/3)T(2, −1) + ((x + 2y)/3)T(1, 1)
is a linear combination
a
=r
+s
(
+t
+u
= (a c + b) 3 + b (1) + (c b) 0 + d 0
= 3a + 2b 3c.
118
18. Assume that {v, T(v)} is independent. Then T(v) ≠ v (or else 1v + (−1)T(v) = 0) and similarly T(v) ≠ −v.
Conversely, assume that T(v) ≠ v and T(v) ≠ −v. To verify that {v, T(v)} is independent, let rv + sT(v) = 0; we must show that r = s = 0. If s ≠ 0, then T(v) = av where a = −r/s. Hence v = T[T(v)] = T(av) = aT(v) = a²v. Since v ≠ 0, this gives a² = 1, so a = ±1, contrary to hypothesis. So s = 0, whence rv = 0 and r = 0.
21(b) Suppose that T : Pn → R is a linear transformation such that T(xᵏ) = T(x)ᵏ holds for all k ≥ 0 (where x⁰ = 1). Write T(x) = a. We have T(xᵏ) = T(x)ᵏ = aᵏ = Ea(xᵏ) for each k by assumption. This gives T = Ea by Theorem 2 because {1, x, x², ..., xⁿ} is a basis of Pn.
Exercises 7.2
2 1 1 3 0
1 0
3
1 0
1 0
3
1 0
1 0 3 1 0 0 1 7 1 0 0 1 7 1 0 .
1 1 4 2 0
0 1 7 1 0
0 0
0
0 0
Hence ker TA =
3s t
7s t
s
t
| s, t in R = span
3
7
1
0
1
1
0
)
*
im TA = AX | X in R4
2 1 1 3
s
1 0
3
1
=
|
r,
s,
t,
u
in R
t
1 1 4 2
y
2
1
1
3
= r 1 + s 0 + t 3 + u 1 | r, s, t, u in R .
7
0
Thus dim(im TA ) = rank A = 2. However, we want a basis of col A, and we obtain this by
writing the columns of A as rows and carrying the resulting matrix (it is AT ) to row-echelon
form:
119
0
1
0
1
3
1
0
0
Hence, ker TA =
t
2t
t
| t in R = span
1
2
1
0
0
dim (ker TA ) = 1. As in (b), im TA = col A and we find a basis by doing gaussian elimination
on AT :
1
0
1
1
1
3
0
1
1
2
, so rank TA = dim(im TA ) = 2.
120
Hence,
hand,
= a + d. Hence
( '
a
ker T =
|a+d=0 =
c d
c
'
1
0
0 1
0
= span
,
,
a
'
,
im T =
So {1} is a basis of im T.
(
| a, b, c in R
a
(
0
.
b
'
a+d |
in M22
= R.
ker T = {X | XA = 0} =
'
0 1
= span
,
0
Thus,
'
,
(
Thus,
,
(
(
. Writing X =
(
'
(
| y, w in R
im T = {XA | X in M22 } =
'
'
(
'
| x, z in R = span
,
(
is a basis of im T .
= {v | P (v) = 0} {v | Q(v) = 0}
= ker P ker Q.
121
4
0
1
1
1 0
1
1
1 0
1 0
3
1
2 1 3 0
0 3 1 0 0 1 3 0 .
0 3 1 0
0 3 1 0
0 0 0 0
0 0
0 0
3
0
4 0
0 3 1 0
Hence, ker T = {(4t, t, 3t) | t in R} = span{(4, 1, 3)} . Hence,{(1, 0, 0), (0, 1, 0), (4, 1, 3)}
is one basis of R3 containing a basis of ker T . Thus
{T (1, 0, 0), T (0, 1, 0)} = {(1, 2, 0, 3), (1, 1, 3, 0)}
is a basis of im T by Theorem 5.
6(b) Yes. dim(im T ) = dim V dim(ker T ) = 52 = 3. As dim W = 3 and im T is a 3-dimensional
subspace, im T = W. Thus, T is onto.
(d) No. If ker T = V then T (v) = 0 for all v in V, so T = 0 is the zero transformation. But
W need not be the zero space. For example, T : R2 R2 defined by T (x, y) = (0, 0) for all
(x, y) in R2 .
(f) No. Let T : R² → R² be defined by T(x, y) = (y, 0) for all (x, y) in R². Then ker T = {(x, 0) | x in R} = im T.
(h) Yes. We always have dim(im T) ≤ dim W (because im T is a subspace of W). Since dim(ker T) ≤ dim W also holds in this case:
dim V = dim(ker T) + dim(im T) ≤ dim W + dim W = 2 dim W.
Hence dim W ≥ ½ dim V.
(j) No. T : R2 R2 given by T (x, y) = (x, 0) is not one-to-one (because ker T = {(0, y) | y R}
is not 0).
(l) No. T : R2 R2 given by T (x, y) = (x, 0) is not onto.
(n) No. Define T : R2 R2 by T (x, y) = (x, 0), and let v1 = (1, 0) and v2 = (0, 1). Then {v1 , v2 }
spans R2 , but {T (v1 ), T (v2 )} = {v1 , 0} does not span R2 .
7(b) Given w in W, we must show that it is a linear combination of T (v1 ), . . . , T (vn ). As T is onto,
w = T (v) for some v in V. Since V = span{v1 , . . . , vn } we can write v = r1 v1 + + rn vn
where each ri is in R. Hence
w = T (v) = T (r1 v1 + + rn vn ) = r1 T (v1 ) + + rn T (vn ).
8(b) If T is onto, let v be any vector in V . Then v = T (r1 , . . . , rn ) for some (r1 , . . . , rn ) in Rn ;
that is v = r1 v1 + + rn vn is in span{v1 , . . . , vn } . Thus V = span{v1 , . . . , vn } . Conversely,
if V = span{v1 , . . . , vn }, let v be any vector in V. Then v is in span{v1 , . . . , vn } so r1 , . . . , rn
exist in R such that
v = r1 v1 + + rn vn = T (r1 , . . . , rn ).
Thus T is onto.
122
10. The trace map T : Mnn → R is linear (Example 2, Section 7.1) and it is onto (for example, r = tr[diag(r, 0, ..., 0)] = T[diag(r, 0, ..., 0)] for any r in R). Hence the dimension theorem gives dim(ker T) = dim Mnn − dim(im T) = n² − dim(R) = n² − 1.
12. Define TA : Rⁿ → Rᵐ and TB : Rⁿ → Rᵏ by TA(x) = Ax and TB(x) = Bx for all x in Rⁿ. Then the given condition means ker TA ⊆ ker TB, so dim(ker TA) ≤ dim(ker TB). Hence
rank A = dim(im TA) = n − dim(ker TA) ≥ n − dim(ker TB) = dim(im TB) = rank B.
15(b) Write B = {x − 1, x² − 1, ..., xⁿ − 1}. Then B ⊆ ker T because T(xᵏ − 1) = 1 − 1 = 0 for all k. Hence span B ⊆ ker T. Moreover, the polynomials in B are independent (they have distinct degrees), so dim(span B) = n. Hence, by Theorem 2 6.4, it suffices to show that dim(ker T) = n. But T : Pn → R is onto, so the dimension theorem gives dim(ker T) = dim(Pn) − dim(R) = (n + 1) − 1 = n, as required.
20. If we can find an onto linear transformation T : Mnn Mnn with ker T = U and im T = V,
then we are done by the dimension theorem. The condition ker T = U suggests that we define
T by T (A) = A AT for all A in Mnn . By Example 3, T is linear, ker T = U, and im T = V.
This is what we wanted.
22. Fix a column y ≠ 0 in Rⁿ, and define T : Mmn → Rᵐ by T(A) = Ay for all A in Mmn. This is linear and ker T = U, so the dimension theorem gives
mn = dim(Mmn) = dim(ker T) + dim(im T) = dim U + dim(im T).
Hence it suffices to show that dim(im T) = m, equivalently (since im T ⊆ Rᵐ) that T is onto. So let x be a column in Rᵐ; we must find a matrix A in Mmn such that Ay = x. Write A in terms of its columns as A = [C1 C2 ··· Cn] and write y = [y1 y2 ··· yn]ᵀ. Then the requirement Ay = x becomes
x = [C1 C2 ··· Cn][y1 y2 ··· yn]ᵀ = y1C1 + y2C2 + ··· + ynCn.   (*)
if i > m.
123
Exercises 7.3
5(b) T 2 (x, y) = T [T (x, y)] = T (x + y, 0) = [x + y + 0, 0] = (x + y, 0) = T (x, y). This holds for all
(x, y), whence T 2 = T .
a b
a b
a+c b+d
1 a+c b+d
1
2
=T 2
(d) T
=T T
= 2T
c
1
4
a+c
(a + c) + (a + c)
(b + d) + (b + d)
(a + c) + (a + c)
(b + d) + (b + d)
b+d
1
2
a+c
a+c
b+d
a+c
b+d
b+d
=T
, so T 2 = T.
124
implies a + 2c = 0 = 3c a and
3z x
3w y
x + 2z = a
x + 3z = c
y + 2w = b
y + 3w = d.
The solution is x = 15 (3a 2c), z = 15 (a + c), y = 15 (3b 2d), w = 51 (b + d). Hence
3a 2c 3b 2d
a b
1
1
.
T
=5
c
T 1
a+c
b+d
(*)
. This matrix
is invertible which easily implies that T is one-to-one (and onto), and if S : M22 M22 is
1 by Theorem 5.
defined by S(X) = A1 X then
ST = 1M22 and T S = 1M22 . Hence S = T
Note that A1 =
1
5
3
1
2
1
T 6 (x, y, z, w) = T 3 T 3 (x, y, z, w) = T 3 [x, y, z, w] = (x, y, z, w) = 1R4 (x, y, z, w).
T 1 (x, y, z, w) = T 2 T 3 (x, y, z, w) = T 2 (x, y, z, w) = (y x, x, z, w).
125
so ST = 1Mnn
so T S = 1Mnn .
10(b) Given linear transformations T : V → W and S : W → U with T and S both onto, we are to show that ST : V → U is onto. Given u in U, we have u = S(w) for some w in W because S is onto; then w = T(v) for some v in V because T is onto. Hence
ST(v) = S[T(v)] = S(w) = u.
This shows that ST is onto.
12(b) If u lies in im RT, write u = RT(v), v in V. Thus u = R[T(v)] where T(v) is in W, so u is in im R.
13(b) Given T : V → U and S : U → W with ST onto, let w be a vector in W. Then w = ST(v) for some v in V because ST is onto, whence w = S[T(v)] where T(v) is in U. This shows that S is onto.
Now the dimension theorem applied to S gives
dim U = dim(ker S) + dim(im S) = dim(ker S) + dim W
because im S = W (S is onto). As dim(ker S) 0, this gives dim U dim W.
14. If T 2 = 1V then T T = 1V so T is invertible and T 1 = T by the definition of the inverse of a
transformation. Conversely, if T 1 = T then T 2 = T T 1 = 1V .
16. Theorem 5, Section 7.2 shows that {T (e1 ), T (e2 ), . . . , T (er )} is a basis of im T. Write
126
24(b) TS[x0, x1, ...) = T[0, x0, x1, ...) = [x0, x1, ...), so TS = 1V. Hence TS is both onto and one-to-one, so T is onto and S is one-to-one by Exercise 13. But [1, 0, 0, ...) is in ker T while [1, 0, 0, ...) is not in im S.
26(b) If p(x) is in ker T, then p(x) = xp (x). If we write p(x) = a0 + a1 x + + an xn , this becomes
a0 + a1 x + + an1 xn1 + an xn = a1 x 2a2 x2 nan xn .
Equating coefficients gives a0 = 0, a1 = a1 , a2 = 2a2 , . . . , an = nan . Hence we have,
a0 = a1 = = an = 0, so p(x) = 0. Thus ker T = {0}, so T is one-to-one. As T : Pn Pn
and dim Pn is finite, this implies that T is also onto, and so is an isomorphism.
27(b) If T S = 1W then, given w in W, T [S(w)] = w, so T is onto. Conversely, if T is onto, choose a
basis {e1 , . . . , er , er+1 , . . . , en } of V such that {er+1 , . . . , en } is a basis of ker T. By Theorem 5,
7.2, {T (e1 ), . . . , T (en )} is a basis of im T = W (as T is onto). Hence, a linear transformation
S : W V exists such that S[T (ei )] = ei for i = 1, 2, . . . , r. We claim that T S = 1W , and
we show this by verifying that these transformations agree on the basis {T (e1 ), . . . , T (er )} of
W. Indeed
T S[T (ei )] = T {S[T (ei )]} = T (ei ) = 1W [T (ei )]
for i = 1, 2, . . . , n.
28(b) If T = SR, then every vector T (v) in im T has the form T (v) = S[R(v)], whence im T im
S. Since R is invertible, S = T R1 implies im S im T, so im S = im T.
Conversely, assume that im S = im T. The dimension theorem gives
127
R(gi ) = ei for i = 1, 2, . . . , r
R(fj ) = ej
for j = r + 1, . . . , n.
for j = r + 1, . . . , n.
Hence SR = T .
29. As in the hint, let {e1 , e2 , . . . , er , . . . , en } be a basis of V where {er+1 , . . . , en } is a basis of
ker T. Then {T (e1 ), . . . , T (er )} is linearly independent by Theorem 5, 7.2, so extend it to a
basis {T (e1 ), . . . , T (er ), wr+1 , . . . , wn } of V. Then define S : V V by
S[T (ei )] = ei for 1 i r
S(wj ) = ej
for r < j n.
Exercises 7.4
Exercises 7.5
128
15
20 ,
b=
8
20 ,
3
c = 20
, so
xn =
1
20 (15
+ 2n+3 + (3)n+1 ) n 0.
As 1 is a double root of p(x), [1n ) = [1) and [n1n ) = [n) are solutions to the recurrence by
Theorem 3. Similarly, [(2)n ) is a solution, so {[1), [n), [(2)n )} is a basis for the space of
solutions by Theorem 4. The required sequence has the form
[xn ) = a[1) + b[n) + c[(2)n )
for constants a, b, c. Thus, xn = a + bn + c(2)n for n 0, so taking n = 0, 1, 2, we get
a +
a +
c = x0 =
b 2c = x1 = 1
a + 2b + 4c = x2 =
The solution is a = 59 , b = 69 , c = 49 , so
xn = 19 5 6n + (2)n+2
1.
n 0.
)
*
Hence, [1n ) = [1), [n1n ) = [n) and [n2 1n ) = [n2 ) are solutions and so [1), [n), [n2 ) is a basis
for the space of solutions. Thus
xn = a 1 + bn + cn2 ,
a, b, c constants. As x0 = 1, x1 = 1, x2 = 1, we obtain
a
= x0 =
a +
b +
c = x1 = 1
a + 2b + 4c = x2 =
The solution is a = 1, b = 4, c = 2, so
xn = 1 4n + 2n2
n 0.
1.
129
4(b) The recurrence xn+4 = xn+2 +2xn+3 has r0 = 0 as there is no term xn . If we write yn = xn+2 ,
the recurrence becomes
yn+2 = yn + 2yn+1 .
Now the associated polynomial is x2 2x + 1 = (x 1)2 so basis sequences for the solution
space for yn are [1n ) = [1, 1, 1, 1, . . .) and [n1n ) = [0, 1, 2, 3, . . .). As yn = xn+2 , corresponding
basis sequences for xn are [0, 0, 1, 1, 1, 1, . . .) and [0, 0, 0, 1, 2, 3, . . .). Also, [1, 0, 0, 0, 0, 0, . . .) and
[0, 1, 0, 0, 0, 0, . . .) are solutions for xn , so these four sequences form a basis for the solution
space for xn .
7. The sequence has length 2 and associated polynomial x2 + 1. The roots are nonreal: 1 = i
and 2 = i. Hence, by Remark 2,
[in + (i)n ) = [2, 0, 2, 0, 2, 0, 2, 0, . . .) and [i(in (i)n )) = [0, 2, 0, 2, 0, 2, 0, 2, . . .)
are solutions. They are independent as is easily verified, so they are a basis for the space of
solutions.
130
.
131
Chapter 8 Orthogonality
Exercises 8.1
1(b) Write x1 = (2, 1) and x2 = (1, 2). The Gram-Schmidt algorithm gives
e1 = x1 = (2, 1)
e2 = x2 − (x2·e1/‖e1‖²)e1 = (1, 2) − (4/5)(2, 1) = (1/5){(5, 10) − (8, 4)} = (3/5)(−1, 2).
In hand calculations, {(2, 1), (−1, 2)} may be a more convenient orthogonal basis.
(d) If x1 = (0, 1, 1), x2 = (1, 1, 1), x3 = (1, −2, 2) then
e1 = x1 = (0, 1, 1)
e2 = x2 − (x2·e1/‖e1‖²)e1 = (1, 1, 1) − (2/2)(0, 1, 1) = (1, 0, 0)
e3 = x3 − (x3·e1/‖e1‖²)e1 − (x3·e2/‖e2‖²)e2 = (1, −2, 2) − (0/2)(0, 1, 1) − (1/1)(1, 0, 0) = (0, −2, 2).
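The Gram-Schmidt computations above follow one pattern: subtract from each new vector its projections on the vectors already produced. A short numpy sketch, applied to the vectors of 1(d):

    import numpy as np

    def gram_schmidt(vectors):
        es = []
        for x in vectors:
            e = np.array(x, dtype=float)
            for f in es:
                e -= (e @ f) / (f @ f) * f     # subtract the projection on f
            es.append(e)
        return es

    print(gram_schmidt([(0, 1, 1), (1, 1, 1), (1, -2, 2)]))
    # [array([0., 1., 1.]), array([1., 0., 0.]), array([ 0., -2.,  2.])]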
2(b) Write e1 = (3, −1, 2) and e2 = (2, 0, −3). Then {e1, e2} is orthogonal and so is an orthogonal basis of U = span{e1, e2}. Now x = (2, 1, 6), so take
x1 = projU(x) = (x·e1/‖e1‖²)e1 + (x·e2/‖e2‖²)e2 = (17/14)(3, −1, 2) − (14/13)(2, 0, −3) = (1/182)(271, −221, 1030).
Then x2 = x − x1 = (1/182)(93, 403, 62); x2 is orthogonal to every vector in U (while x1 is in U).
(d) If e1 = (1, 1, 1, 1), e2 = (1, 1, −1, −1), e3 = (1, −1, 1, −1) and x = (2, 0, 1, 6), then {e1, e2, e3} is orthogonal, so take
x1 = projU(x) = (x·e1/‖e1‖²)e1 + (x·e2/‖e2‖²)e2 + (x·e3/‖e3‖²)e3 = (9/4)(1, 1, 1, 1) − (5/4)(1, 1, −1, −1) − (3/4)(1, −1, 1, −1) = (1/4)(1, 7, 11, 17).
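A numerical check of 2(b), using the vectors as reconstructed above; it also confirms that x2 = x − x1 is orthogonal to U:

    import numpy as np

    e1 = np.array([3.0, -1.0, 2.0])
    e2 = np.array([2.0, 0.0, -3.0])
    x = np.array([2.0, 1.0, 6.0])

    x1 = (x @ e1) / (e1 @ e1) * e1 + (x @ e2) / (e2 @ e2) * e2
    x2 = x - x1
    print(182 * x1)           # [ 271. -221. 1030.]
    print(182 * x2)           # [  93.  403.   62.]
    assert np.isclose(x2 @ e1, 0) and np.isclose(x2 @ e2, 0)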
Here e1 = (1, 1, −2, 0) and e2 = (1, 1, 1, 1) are orthogonal and x = (a, b, c, d), so
x1 = projU(x) = ((a + b − 2c)/6)(1, 1, −2, 0) + ((a + b + c + d)/4)(1, 1, 1, 1)
   = (1/12)(5a + 5b − c + 3d, 5a + 5b − c + 3d, −a − b + 11c + 3d, 3a + 3b + 3c + 3d),
x2 = x − x1 = (1/12)(7a − 5b + c − 3d, −5a + 7b + c − 3d, a + b + c − 3d, −3a − 3b − 3c + 9d).
132
projU (x) =
15
(c) projU (x) = 14
(1, 0, 2, 3) +
3
70 (4, 7, 1, 2)
3
10 (3, 1, 7, 11).
3
10 (3, 1, 7, 11).
4(b) U = span{(1, 1, 0), (1, 0, 1)} but this basis is not orthogonal. By Gram-Schmidt:
e1 = (1, 1, 0)
e2 = (1, 0, 1) − [((1, 0, 1)·(1, 1, 0))/‖(1, 1, 0)‖²](1, 1, 0) = ½(1, −1, 2).
So we use U = span{(1, 1, 0), (1, −1, 2)}. Then the vector x1 in U closest to x = (2, −1, 0) is
x1 = projU(x) = ((2 − 1 + 0)/2)(1, 1, 0) + ((2 + 1 + 0)/6)(1, −1, 2) = (1, 0, 1).
(d) The given basis of U is not orthogonal. The Gram-Schmidt algorithm gives
e1 = (1, −1, 0, 1)
e2 = (1, 1, 0, 0) − (0/3)e1 = (1, 1, 0, 0)
e3 = (1, 1, 0, 1) − (1/3)(1, −1, 0, 1) − (2/2)(1, 1, 0, 0) = (1/3)(−1, 1, 0, 2).
Given x = (2, 0, 3, 1), we get (using e3 = (−1, 1, 0, 2) for convenience)
projU(x) = (3/3)(1, −1, 0, 1) + (2/2)(1, 1, 0, 0) + (0/6)(−1, 1, 0, 2) = (2, 0, 0, 1).
1 1
2
1
1 1
2
1
1 0 1 1
5(b) Here A =
10. Let {f1, ..., fm} be an orthonormal basis of U. If x is in U then, since ‖fi‖ = 1 for each i,
x = (x·f1)f1 + ··· + (x·fm)fm = projU(x)
by the expansion theorem (applied to the space U).
y1T
.
..
T
ym
each i; if and only if yi x = 0 for each i; if and only if xis in (U ) = U = U. This shows
that U = {x in Rn | Ax = 0}.
133
Eᵀ = [Aᵀ(AAᵀ)⁻¹A]ᵀ = Aᵀ[(AAᵀ)⁻¹]ᵀ(Aᵀ)ᵀ = Aᵀ[(AAᵀ)ᵀ]⁻¹A = Aᵀ(AAᵀ)⁻¹A = E,
and
E² = Aᵀ(AAᵀ)⁻¹(AAᵀ)(AAᵀ)⁻¹A = Aᵀ(AAᵀ)⁻¹A = E.
Thus E² = E = Eᵀ.
Exercises 8.2
1(b) Since
32
+ 42
Orthogonal Diagonalization
=
52 ,
a2 + b2 = 0, so
3
5
4
5
4
5
3
5
6, 3, 2 respectively, so
2
1
1
is orthogonal.
(h) Each row has length
1
a2 +b2
6
1
6
1
3
1
2
1
3
1
2
1
6
4 + 36 + 9 = 49 = 7. Hence
2 6
is orthogonal.
7
3
7
67
7
2
7
3
7
6
7
2
7
1
7
1
5
3
4
4
3
is orthogonal.
is orthogonal.
3
6
1
0
' (
1
; E0 (A) = span
.
0
1
'
(
1
; E2 (A) = span
.
1
Note that these eigenvectors are orthogonal (as Theorem 4 asserts). Normalizing them gives
an orthogonal matrix
1
1
1 1
1
2
2
P =
= 2
.
1
1
Then P 1 = P T and P T AP =
134
x3
0
7
= (x 5)(x2 6x 40) = (x 5)(x + 4)(x 10). Hence the
x5
0
(d) cA (x) = 0
7
0
x3
eigenvalues
are 1 = 5, 2 =
10, 3 =4.
2
0 7
1 0 0
0
0
0
0
1 = 5 :
0 0 1 ; E5 (A) = span 1 .
7 0
2
0 0 0
0
7
0 7
1 0 1
1
2 = 10 :
0
5
0
0 1 0
; E10 (A) = span 0 .
7 0
7
0 0
0
1
7
0
7
1 0 1
1
3 = 4 : 0 9 0 0 1 0 ; E4 (A) = span 0 .
Note that the three eigenvectors are pairwise orthogonal (as Theorem 4 asserts). Normalizing
them gives an orthogonal matrix
P =
1
2
1
2
1
2
1
2
Then P 1 = P T and P T AP =
(f) cA (x) =
x5
2
= (x 9)
2 = 9.
1 = 0 :
5
2
4
x8
x5
x8
x1
2 = 9 :
10
1
2
x9
0
9x
0
0
x9
= 2
= 2
x
8
2
x
8
4
4
2
x5
4
2
x1
= (x 9)(x2 9x) = x(x 9)2 . The eigenvalues are 1 = 0,
2
E0 (A) = span 1 .
18
18
1
2
0
0
4
1
; E9 (A) = span
12
0
1
0
1
1
2
0
12
0
However, these are not orthogonal and the Gram-Schmidt algorithm replaces
Z2 =
4
1
P T AP =
. Hence P =
0
2
3
1
3
2
3
0
1
3 2
4
3 2
1
3 2
3 2
2 2
2 2
3
0
4
1
1
2
0
with
135
1
3
2
1
and
1
2
2
also satisfies QT AQ =
(h) To evaluate
cA (x), we begin adding
rows
2, 3 and 4 to row 1.
x 3 5
1
1
x8 x8 x8 x8
5 x 3 1
1
1
= 5 x 3 1
cA (x) =
1
1
1
x
3
5
1
x
3
5
1
1
1
5
x3
1
5
x3
=
x8
x2
5
1
= (x 8)
1 = 0 :
3 = 8 :
6
6
4
x
x2
1
1
5
5
1
5
5
1
1
1
1
1
0
1
1
5
x4
x2
x2 4
6
(x 8) = 0
x
0
6
4 x 2
x+2
= x(x 8)(x2 4x 32) = x(x + 4)(x 8)2 .
x2
1 1 3 5
3 5
1
1
1
0 8 8 16
1
8 8 0 0
1 1 3 5
5
0 16 24 40
0
0
8 8
3
0
0
1
1
1
1 0 0 1
0 1 0 1
1
; E0 (A) = span
.
0 0 1 1
x2
2 = 4 :
x2
= (x 8)
x4
x+2
= x(x 8)
0
0
12
48
12
12
5
5
1
0
36
36
E4 (A) = span
24
24
0
1
1
1
1
24
12
5
24
24
0
0
0
136
E8 (A) = span
1
0
0
2
12
12
1
2
Hence, P =
6. cA (x) =
12
0
1
1
1
2
1
2
1
2
12
1
2
2
1
= x
1
2
1
1
1
1
+a
a
c
0
0
0
x
gives P T AP =
0
0
= x(x2 c2 ) a2 x = x(x2 k2 ).
0
c
0
a
0
c
1 = 0 : a 0 c 0 = 0 ; E0 (A) = span 0 .
a
0
a
0
c
0
0
a
k a 0
a
2 = k : a k c k = 0 ; Ek (A) = span k .
c
0
c
0 c k
0
a
k a
0
a
3 = k : a k c k = 0 ; Ek (A) = span k ,
c
0
c
0
c k
These
are
eigenvalues
orthogonal and have length,
k, 2k, 2k respectively. Hence, P =
c 2
1
2k
a 2
a
k
c
k
c
is orthogonal and P T AP =
y1
y2
= y = PTx =
1
5
x1 + 2x2
2x1 + x2
; so y1 =
1 (x1
5
+ 2x2 ) and y2 =
1 (2x1
5
+ x2 ).
137
((*))
But if Ej denotes column j of the identity matrix, then writing A = [aij ] we have
eTi Aej = aij
Since (*) shows that AT and A have the same (i, j)-entry for each i and j. In other words,
AT = A.
Note that the same argument shows that if A and B are matrices with the property that
= xT Ay for all columns x and y, then B = A.
xT By
18(b) If P =
cos
sin
sin
cos
and Q =
cos
sin
sin
cos
det P = 1 and det Q = 1. (We note that every 2 2 orthogonal matrix has the form of P
or Q for some .)
(d) Since P is orthogonal, P T = P 1 . Hence
P T (I P ) = P T P T P = P T I = (I P T ) = (I P )T .
Since P is n n, taking determinants gives
det P T det(I P ) = (1)n det[(I P )T ] = (1)n det(I P ).
Hence, if I P is invertible, then det(I P ) = 0 so this gives det P T = (1)n ; that is
det P = (1)n , contrary to assumption.
21. By the definition of matrix multiplication, the [i, j]-entry of AAT is ri rj . This is zero if
i = j, and equals ri 2 if i = j. Hence, AAT = D = diag(r1 2 , r2 2 , . . . , rn 2 ). Since D
is invertible (ri 2 = 0 for each i), it follows that A is invertible and, since row i of AT is
[a1i a2i . . . aji . . . ani ]
A1 = AT D1 =
..
.
a1i
..
.
...
...
...
aji
.
rj 2
..
.
aji
..
.
...
...
...
..
.
ani
..
.
1
r1 2
0
..
.
...
1
r2 2
...
..
.
...
..
.
...
1
rn 2
23(b) Observe first that I A and I +A commute, whence I A and (I +A)1 commute. Moreover,
138
1
(I + A)1 = (I + A)T
= (I T + AT )1 = (I A)1 . Hence,
Exercises 8.3
1(b)
(d)
2
1
1
1
20
and A =
U T U.
1
1
2
. Then A = U T U where U =
20
6
5
15
4
20
6
5
5
12
2
0
1
2
. Hence, U =
2
5
0
0
2
2
1
5
6
30
2
0
2 5
10
30
5
2 15
1
30
60 5
0
12 5
6 30
10. Since A is symmetric, the principal axis theorem asserts that an orthogonal matrix P exists
such that P TAP = D = diag(1 , 2 , . . . , n ) where the
of A. Since
i are
the eigenvalues
each i > 0, i is real and positive, so define B = diag 1 , 2 , . . . , n . Then B 2 = D.
As A = P DP T , take C = P BP T . Then
C 2 = P BP T P BP T = P B 2 P T = P DP T = A.
Finally, C issymmetric because B is symmetric C T = P T T B T P T = P BP T = C and C has
eigenvalues i > 0 (C is similar to B). Hence C is positive definite.
12(b) Suppose that A is positive definite so A = U0T U0 where U0 is upper triangular with positive
diagonal entries d1 , d2 , . . . , dn . Put D0 = diag(d1 , d2 , . . . , dn ) . Then L = U0T D01 is lower
triangular with 1s on the diagonal, U = D01 U0 is upper triangular with 1s on the diagonal,
and A = LD02 U. Take D = D02 .
Conversely, if A = LDU as in (a), then AT = U T DLT . Hence, AT = A implies that U T DLT =
T = L and LT = U by (a). Hence, A = U T DU. If D = diag(d , d , . . . , d ), let
LDU, so U
1 2
n
D1 = diag d1 , d2 , . . . , dn . Then D = D12 so A = U T D12 U = (D1 U)T (D1 U). Hence, A
is positive definite.
15 5
10 30
5 15
139
Exercises 8.4
QR-Factorization
2
1
f1 = c1 =
and c2 =
2
1
c2 f1
f2 = c2
f1 =
f1 2
1
1
1
1
3
5
2
1
15
2
5
q1 =
Hence Q = [q1
1
5
L=
q2 ] =
preceding Theorem 1:
2
1
f1
0
1
2
1
5
1
5
2
1
1
2
c2 q1
f2
5
0
3
5
1
5
1
5
Then A = QR.
(d) The columns of A are c1 = [1 1 0 1]T , c2 = [1 0 1
Apply the Gram-Schmidt algorithm
f1 = c1 = [1 1 0 1]T
c2 f1
f2 = c2
f1 = [1 0 1
f1 2
c3 f1
c3 f2
f3 = c3
f2
2 f1
f1
f2 2
= [0 1 1 0]T
=
2
3
1
3
[1
[0 1 1 1]T .
1]T 30 F1 = [1 0 1
1 0 1]T 31 [1 0 1
Normalize
1
f1 =
f1
1
Q2 =
f2 =
f2
1
Q3 =
f3 =
f3
Q1 =
1 0 1]T
1
3
[1
1
3
[1 0 1
1
3
[0 1 1 1]T .
1]T
1]T
1]T
140
Hence Q = [q1
q3 ] =
q2
1
3
R=
f1
c2 q1
c3 q1
f3
f2
c3 q2
Then A = QR.
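For comparison with the hand computation, numpy provides a QR factorization directly. The matrix below is a placeholder (not the A of the exercise), and numpy may return Q and R with some columns and rows negated relative to a hand calculation; it only guarantees A = QR with Q having orthonormal columns and R upper triangular.

    import numpy as np

    A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])  # hypothetical
    Q, R = np.linalg.qr(A)
    print(np.allclose(Q @ R, A))             # True
    print(np.allclose(Q.T @ Q, np.eye(2)))   # True: columns of Q are orthonormal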
3
0
0
1
3
3
0
3
1
3
2
3
1
3
1
1
2
Exercises 8.5
Computing Eigenvalues
x 5 2
= (x + 1)(x 4), so 1 = 1, 2 = 4.
1(b) A =
. Then cA (x) =
3 2
3
x+2
6 2
3 1
1
If 1 = 1 :
: eigenvector =
3
1
3
0 0
1 2
1 2
2
If 2 = 4 :
; dominant eigenvector =
.
3
6
0 0
1
1
Starting with x0 =
, the power method gives x1 = Ax0 , x2 = Ax1 , . . . :
x1 =
7
5
x2 =
25
11
x3 =
103
53
x4 =
409
203
3 1
= x2 3x 1, so the eigenvalues are 1 = 1 (3 + 13),
(d) A =
; cA (x) =
2
1
x
1 0
1
2 = 2 (3 13). Thus the dominant eigenvalue is 1 = 12 (3 + 13). Since 1 2 = 1 and
1 + 2 = 3, we get
1 3
so a dominant eigenvector is
1
1
1
0
. We start with x0 =
13
4
x3 =
43
13
1
1
x4 =
142
43
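The iteration used in 1(b) and 1(d) is the power method: repeatedly multiply by A and normalize, and the iterates approach a dominant eigenvector, while the Rayleigh quotient estimates the dominant eigenvalue. A sketch with a placeholder 2 × 2 matrix (not the exercise's A):

    import numpy as np

    A = np.array([[5.0, 2.0], [3.0, -2.0]])   # hypothetical matrix
    x = np.array([1.0, 1.0])
    for _ in range(10):
        x = A @ x
        x = x / np.linalg.norm(x)             # normalize to avoid overflow
    print(x, (x @ A @ x) / (x @ x))           # eigenvector estimate, eigenvalue estimate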
141
r1 = 3.29,
r2 = 3.30270,
1
1
3.302776
1
r3 = 3.30278.
3 1
= x2 3x 3; 1 = 1 3 + 13 = 3.302776 and
2(b) A =
; cA (x) =
2
1
A1 =
= Q1 R1 where Q1 = 10
, R1 = 10
1
A2 = R1 Q1 =
1
10
A3 = R2 Q2 =
1
109
33
360
33
= Q2 R2 where Q2 =
1
1090
3.302752
0.009174
0.009174
0.302752
33
33
3
1
, R2 =
1
1090
109
10
Exercises 8.6
1(b)
Complex Matrices
|1 i|2 + |1 + i|2 + 12 + (1)2 =
(1 + 1) + (1 + 1) + 1 + 1 =
(d) 4 + |i|2 + |1 + i|2 + |1 i|2 + |2i|2 =
4 + 1 + (1 + 1) + (1 + 1) + 4 =
13
2(b) Not orthogonal: (i, i, 2 + i), (i, i, 2 i) = i(i) + (i)(i) + (2 + i)(2 + i) = 3 + 4i
(d) Orthogonal: 4 + 4i, 2 + i, 2i), (1 + i, 2, 3 2i) = (4 + 4i)(1 i) + (2 + i)2 + (2i)(3 + 2i)
= (8i) + (4 + 2i) + (4 + 6i) = 0.
3(b) Not a subspace. For example, i(0, 0, 1) = (0, 0, i) is not in U.
142
=
= A. Thus A is hermitian and so is
(d) A =
, A = (A) =
i
AAH
A2
normal. But,
=
= 2I so A is not unitary.
1
1+i
1
1i
(f) A =
. Here A = AT so AH = A =
= A (thus A is not hermitian).
1+i
i
1i
i
3
2 2i
3
2 + 2i
H
H
Next, AA =
= I so A is not unitary. Finally, A A =
=
2+i
2 2i
1
2|z|
so
AH
1
2|z|
. Thus A = AH
if and
if z
= z; that is Ais hermitian if and only if z is real. We have AAH =
only
2
2 |z|
0
1
= I, and similarly, AH A = I. Thus it is unitary (and hence normal).
2
2|z|2
2 |z|
3i
x4
3 + i
= x2 5x 6 = (x + 1)(x 6).
2
5
3 + i
3+i 2
; an eigenvector is x1 =
.
3 i
1
0
0
3+i
2
3 + i
2 3 + i
3i
; an eigenvector is x2 =
.
0
0
2
3 i
5
2
3i
As x1 and x2 are orthogonal and x1 = x2 = 14, U = 114
is unitary and
3+i
2
1 0
U H AU =
.
8(b) A =
, cA (x) =
3+i
1
Eigenvectors for 1 = 1 :
Eigenvectors for 2 = 6 :
3 i
x1
143
x 2 1 i
= x2 5x + 4 = (x 1)(x 4).
(d) A =
; cA (x) =
1i
3
1 + i
x3
1
1 i
1 1+i
1+i
Eigenvectors for 1 = 1 :
; an eigenvector is x1 =
.
1 + i
2
0
0
1
2
1 i
1 + i 1
1
Eigenvectors for 2 = 4 :
; an eigenvector is x2 =
.
1 + i
1
0
0
1i
1+i
1
is unitary and
Since x1 and x2 are orthogonal and x1 = x2 = 3, U = 13
1
1i
1 0
U H AU =
.
1+i
(f) A =
1+i
1i
cA (x) =
x1
0
x1
1 + i
0
1 i
x2
Eigenvectors for 1 = 1 :
If 2 = 0 :
1
0
x3 =
0
1
1i
0
0
1 i
1 + i
Eigenvectors for 3 = 3 :
= (x 1)(x2 3x) = (x 1)x(x 3).
1 i
1 + i
1 + i
1 i
1+i
; an eigenvector is x2 =
1 + i
is orthogonal and U AU =
; an eigenvector is x1 =
1+i
1
3
0
0
; an eigenvector is
1+i
1i
10(b) (1) If z = (z1 , z2 , . . . , zn ) then z2 = |z1 |2 + |z2 |2 + + |zn |2 . Thus z = 0 if and only if
|z1 | = = |zn | = 0, if and only if z = (0, 0, . . . , 0).
144
18(b) If V =
1
i
i
0
V T,
but iV =
i
1
1
0
is
a + b
c + d
Exercises 8.7
1(b) The elements with inverses are 1, 3, 7, 9: 1 and 9 are self-inverse; 3 and 7 are inverses of each other. As for the rest, 2·5 = 4·5 = 6·5 = 8·5 = 0 in Z10, so 2, 4, 5, 6 and 8 do not have inverses in Z10.
(d) The powers of 2 computed in Z10 are 2, 4, 8, 16 = 6, 32 = 2, ..., so the sequence repeats: 2, 4, 8, 6, 2, 4, 8, 6, ....
2(b) If 2a = 0 in Z10 then 2a = 10k for some integer k. Thus a = 5k, so a = 0 or a = 5 in Z10. Conversely, it is clear that 2a = 0 in Z10 if a = 0 or a = 5.
145
3(b) We want a number a in Z19 such that 11a = 1. We could try all 19 elements of Z19; the one that works is a = 7. However, the euclidean algorithm is a systematic method for finding a. As in Example 2, first divide 19 by 11 to get
19 = 1·11 + 8.
Then divide 11 by 8 to get
11 = 1·8 + 3.
Now divide 8 by 3 to get
8 = 2·3 + 2.
Finally divide 3 by 2 to get
3 = 1·2 + 1.
The process stops here since a remainder of 1 has been reached. Now eliminate remainders from the bottom up:
1 = 3 − 1·2 = 3 − (8 − 2·3) = 3·3 − 8
  = 3(11 − 1·8) − 8 = 3·11 − 4·8
  = 3·11 − 4(19 − 1·11) = 7·11 − 4·19.
Hence 1 = 7·11 − 4·19 = 7·11 in Z19 because 19 = 0 in Z19.
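Eliminating remainders from the bottom up is the extended euclidean algorithm; coded once, it produces the inverse in Zn whenever the gcd is 1. A short Python sketch:

    def extended_gcd(a, b):
        # returns (g, s, t) with g = gcd(a, b) = s*a + t*b
        if b == 0:
            return a, 1, 0
        g, s, t = extended_gcd(b, a % b)
        return g, t, s - (a // b) * t

    g, s, t = extended_gcd(11, 19)
    print(g, s, t)          # 1 7 -4   (so 7*11 - 4*19 = 1, and 11^{-1} = 7 in Z_19)
    print((7 * 11) % 19)    # 1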
1
1
6(b) Working in Z7 , we have det
A =15
24 = 1 + 4 = 5 = 0, so A exists. Since 5 = 3 in Z7 ,
3
6
3 1
2 3
A1 = 3
=3
=
.
4
7(b) Gaussian elimination works over any field F in the same way that we have been using it over
R. In this case we have F = Z7 , and we reduce the augmented matrix of the system as follows.
We have 5 3 = 1 in Z7 so the first step in the reduction is to multiply row 1 by 5 in Z7 :
3 1 4 3
1 5 6 1
1 5 6 1
1 5 6 1
1 0 5 3
.
4
Hence x and y are the leading variables, and the non-leading variable z is assigned as a
parameter, say z = t. Then, exactly as in the real case, we obtain x = 3 + 2t, y = 1 + 4t,
z = t where t is arbitrary in Z7 .
9(b) If the inverse is a + bt then 1 = (1 + t)(a + bt) = (a − b) + (a + b)t. This certainly holds if a − b = 1 and a + b = 0. Adding gives 2a = 1, that is −a = 1 in Z3, that is a = −1 = 2. Hence a + b = 0 gives b = −a = 1, so a + bt = 2 + t; that is, (1 + t)⁻¹ = 2 + t. Of course, it is easily checked directly that (1 + t)(2 + t) = 1.
10(b) The minimum weight of C is 5, so it detects 4 errors and corrects 2 errors by Theorem 5.
11(b) The linear (5, 2)-code {00000, 01110, 10011, 11101} has minimum weight 3 so it corrects 1
error by Theorem 5.
12(b) The code is {0000000000, 1001111000, 0101100110, 0011010111,
146
13(b) C = {00000, 10110, 01101, 11011} is a linear (5, 2)-code of minimal weight 3, so it corrects
single errors.
u
14(b) G = 1 u where u is any nonzero vector in the code. H =
.
In1
Exercises 8.8
1(b) A =
(d) A =
1
1
(1
2
+ 1)
2
1
(2
2
1
1
(4
2
1
(5
2
1)
+ 2)
1
1
(2
2
+ 0)
1
(1 + 5)
2
1
(0 2)
2
+ 4)
1)
1
3
x 1 2
= x2 2x 3 = (x + 1)(x 3)
2(b) q =
where A =
. cA (x) =
2 1
2
x1
2
2
1 1
1
1 = 3 :
; so an eigenvector is X1 =
.
2
2
0
0
1
2 2
1 1
1
2 = 1 :
; so an eigenvector is X2 =
.
2 2
1
0 0
1
1
3
0
is orthogonal and P T AP =
. As in Theorem 1, take
Hence, P = 12
1 1
0 1
1
1
x1
x1 + x2
= 12
. Then
Y = P T X = 12
X T AX
x2
y1 =
x1 x2
1 (x1
2
+ x2 ) and y2 =
1 (x1
2
x2 ).
Finally, q = 3y12 y22 , the index of q is 1 (the number of positive eigenvalues) and the rank of
q is 2 (the number of nonzero eigenvalues).
(d) q = X T AX where A =
1 = 9 :
2
4
2 = 9 :
16
4
8
8
8
1
x7
cA (x) = 4
4
x7
= 4
0
4
8
8
10
8
4
8
10
0
0
x1
x9
x1
x+7
0
0
2
0
0
x7
4
4
= 4
x1
8
0
x + 9 x 9
= (x 9)2 (x + 9)
; orthogonal eigenvectors
1
9
9
1
0
2
2
1
; eigenvector
2
1
2
1
2
2
. These
147
and P T AP =
1
3
. Thus
Y = PTX =
1
3
so
x1
x2
x3
1
3
1
2
2x1 + 2x2 x3
2x1 x2 + 2x3
x1 + 2x2 + 2x3
is orthogonal
y1 = 31 [2x1 + 2x2 x3 ]
y2 = 31 [2x1 x2 + 2x3 ]
y3 = 31 [x1 + 2x2 + 2x3 ]
will give q = 9y12 + 9y22 9y32 . The index of q is 2 and the rank of q is 3.
(f) q = X T AX where A =
1 = 9 :
2 = 0 :
5
2
4
2
8
2
2
5
cA (x) =
=
x5
4
x9
0
0
x5
0
4
x1
x8
x9
0
= 2
x
8
4
2
= x(x 9)2 .
x8
x + 9
2
x5
18
18
1
1
0
2
2
1
and
orthogonal and P T AP =
. If
Y = PTX =
then
1
3
2
2
y1 = 13 (2x1 + 2x2 + x3 )
y2 = 13 (x1 + 2x2 2x3 )
y3 = 13 (2x1 + x2 + 2x3 )
1
2
2
x1
x2
x3
1
3
; an eigenvector is
1
2
is
148
(h) q = X T AX where A =
1 = 2 :
2 = 1;
3 = 1 :
2
1
0
1
1
1
1
0
cA (x) =
=
x1
1
2
Hence,
x1
1
0
1
x1
0
1
1
is orthogonal and P T AP =
1
2
1
3
1
3
x1
P =
0
1
2
Y = PTX =
then
1
2
2
0
0
;an eigenvector is
1
6
y2 =
1 (x1 + x2
3
1 (x1 + x3 )
2
y3 =
1 (x1
6
y1 =
; an eigenvector is
. If
3
1
1
0
1
1
1
1
; an eigenvector is
3
0
x1
x2
x3
1
2
1
1
2
1
+ x3 )
+ 2x2 x3 )
; an eigenvector is
.
2
1
6
2
6
1
6
1
6
x1 0 x1
= 1
x
1
0
1 x 1
= (x 1)(x 2)(x + 1)
t3
2
2
t
= (t4)(t+1)
149
4
2
1
5
Hence, P =
x1 =
1 (2xy)
5
1
2
1
1
; an eigenvector is
.
2
4
0
gives P T AP =
. If Y = P T X =
1 (x+2y).
5
and y1 =
1 = 6 :
2 = 1 :
1
2
Hence, P =
1 (x + 2y),
5
2
1
2
4
1
5
t5
; an eigenvector is
; an eigenvector is
gives P T AP =
2 1
1 (2x y)
5
y1 =
x1
y1
, then
x
(d) q = 2x2 + 4xy + 5y 2 = X T AX where X =
,A=
y
t 2 2
= (t 1)(t 6).
In this case cA (t) =
1
2
2
1
.
.
. If Y = P T X =
x1
y1
, then x1 =
x
x1
are related to X =
by X = AX1
4. After the rotation, the new variables X1 =
y
y1
cos sin
where A =
(this is equation (**) preceding Theorem 2, or see Theorem 4
sin
cos
2.6). Thus x = x1 cos y1 sin and y = x1 sin + y1 cos . If these are substituted in the
equation ax2 + bxy + cy 2 = d, the coefficient of x1 y1 is
2a sin cos + b(cos2 sin2 ) + 2c sin cos = b cos 2 (a c) sin 2.
This is zero if is chosen so that
cos 2 =
b2
ac
+ (a c)2
and
2 ac 2
b +(ac)
2
t1
2
2
t3
0
2
0
t3
t1
2
0
sin 2 =
+
x1
x2
x3
b2
2b 2
b (ac)
, A =
2
t3
t3
2
0
t3
2
b
.
+ (a c)2
= 1.
Next, consider q = X^T AX together with the linear form BX, where X = [x1; x2; x3], A = [[1, 2, −2],[2, 3, 0],[−2, 0, 3]] and B = [5, 0, −6]. Here cA(t) = (t − 3)(t − 5)(t + 1).
λ1 = 3: an eigenvector is [0, 1, 1]^T.
λ2 = 5: an eigenvector is [1, 1, −1]^T.
λ3 = −1: an eigenvector is [2, −1, 1]^T.
Hence P = [(1/√2)[0, 1, 1]^T  (1/√3)[1, 1, −1]^T  (1/√6)[2, −1, 1]^T] satisfies P^T AP = diag(3, 5, −1). If Y = P^T X = [y1; y2; y3], then
y1 = (1/√2)(x2 + x3)
y2 = (1/√3)(x1 + x2 − x3)
y3 = (1/√6)(2x1 − x2 + x3),
so that X^T AX = 3y1^2 + 5y2^2 − y3^2. As BP = (1/√6)[−6√3, 11√2, 4] = [−3√2, 11√3/3, 2√6/3], this is the coefficient row of Y in the linear part, since BX = (BP)Y.
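A numerical check of this last calculation (a MATLAB sketch assuming A = [1 2 -2; 2 3 0; -2 0 3] and B = [5 0 -6] as reconstructed above; eig may order and sign its eigenvector columns differently from the hand-chosen P, so the entries of B*P can appear in a different order and with different signs):
>> A = [1 2 -2; 2 3 0; -2 0 3];
>> B = [5 0 -6];
>> [P, D] = eig(A)        % eigenvalues -1, 3, 5 in some order
>> B * P                  % the coefficient row of Y in the transformed linear part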
9(b) We have A = U T U where U is upper triangular with positive diagonal entries. Hence
q(x) = x^T U^T Ux = (Ux)^T(Ux) = ||Ux||^2.
So take y = Ux as the new column of variables.
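A minimal MATLAB sketch of this change of variables, using an illustrative positive definite matrix of my own choosing (chol returns the upper triangular U with U'U = A):
>> A = [2 1; 1 2];          % an illustrative positive definite matrix
>> U = chol(A);             % upper triangular with U'*U = A
>> x = [1; -1];
>> x' * A * x               % q(x)
>> y = U * x; norm(y)^2     % equals q(x), since q(x) = ||Ux||^2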
Exercises 8.9
Exercises 8.10
Exercises 9.1
1(b), (d) In each case CB(v) is found by expressing v as a linear combination of the vectors in B and recording the coefficients; the entries of CB(v) are the resulting linear expressions in a, b, c.
2(b) MDB(T) = [CD[T(1)]  CD[T(x)]  CD[T(x^2)]] = [[2, 1, 3],[−1, 0, −2]]. Comparing columns gives
CD[T(1)] = [2, −1]^T,  CD[T(x)] = [1, 0]^T,  CD[T(x^2)] = [3, −2]^T.
Hence
T(1) = 2(1, 1) − (0, 1) = (2, 1)
T(x) = 1(1, 1) + 0(0, 1) = (1, 1)
T(x^2) = 3(1, 1) − 2(0, 1) = (3, 1).
Thus
T(a + bx + cx^2) = aT(1) + bT(x) + cT(x^2)
= a(2, 1) + b(1, 1) + c(3, 1)
= (2a + b + 3c, a + b + c).
'
CD T
'
1
= CD
0
3(b) MDB (T ) =
(d) MDB (T ) =
0
0
0
(
0
(
'
(
'
(
'
(
0 1
0 0
0 0
CD T
CD
CD T
0 0
1 0
0 1
(
(
(
'
'
'
0 0
0 1
0 0
CD
CD
CD
1
CD [T (1)] CD [T (x)] CD [T (x2 )] = CD (1) CD (x + 1) CD (x2 + 2x + 1)
4(b) MDB (T ) = [CD [T (1, 1)] CD [T (1, 0)]] = [CD (1, 5, 4, 1) CD (2, 3, 0, 1)] =
ab
b
ab
. Hence,
2a b
3a + 2b
4b
a
CD [T (1)] CD [T (x)] CD [T (x2 )]
(d) MDB (T ) =
. Hence
1
2
1
1
a
b
c
1
2
a+bc
a+b+c
= CD
We have v =
so CB (v) =
a
b
c
d
1
=a
CD
CD
CD
+d
+b
+c
0
0
a
b
c
d
a
b+c
b+c
d
(
+ (b + c)
+d
b+c
b+c
1 1 0
0
MED (S)MDB (T ) =
= MEB (ST ).
(d) Here T : R^3 → P2 and S : P2 → R^2, with bases B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, D = {1, x, x^2}, E = {(1, 0), (0, 1)}.
MDB(T) = [CD[T(1, 0, 0)]  CD[T(0, 1, 0)]  CD[T(0, 0, 1)]] = [CD(1 − x)  CD(−1 + x^2)  CD(x)] = [[1, −1, 0],[−1, 0, 1],[0, 1, 0]]
MED(S) = [CE[S(1)]  CE[S(x)]  CE[S(x^2)]] = [[1, −1, 0],[0, 0, 1]].
The action of ST is ST(a, b, c) = S[(a − b) + (c − a)x + bx^2] = (2a − b − c, b). Hence,
MEB(ST) = [CE[ST(1, 0, 0)]  CE[ST(0, 1, 0)]  CE[ST(0, 0, 1)]] = [CE(2, 0)  CE(−1, 1)  CE(−1, 0)] = [[2, −1, −1],[0, 1, 0]].
Finally,
MED(S)MDB(T) = [[1, −1, 0],[0, 0, 1]] [[1, −1, 0],[−1, 0, 1],[0, 1, 0]] = [[2, −1, −1],[0, 1, 0]] = MEB(ST).
CD [T (0, 0, 1)]]
MBD (T 1 ) = CB T 1 (1, 0, 0)
CB T 1 (0, 1, 0)
CB T 1 (0, 0, 1)
= CB 12 , 12 , 12
CB 21 , 12 , 12
CB 21 , 12 , 12
1
2
1
1
(d) MDB (T ) = CD [T (1)] CD [T (x)] C T (x2 )
Hence,
MBD (T 1 ) = CB T 1 (1, 0, 0)
CB T 1 (0, 1, 0)
CB T 1 (0, 0, 1)
= CB (1) CB (1 + x) CB (x + x2 )
1
1
1
1
(
'
CD T
(
'
CD T
(
'
CD T
= =
(
This is invertible and the matrix inversion algorithm (and Theorem 4) gives
If v = (a, b, c, d) then
CB T 1 (v) = MDB (T 1 )CD (v) =
1
0
1
1
a
b
c
d
ab
bc
c
(a, b, c, d) = T
=
(v) = (a b)
ab bc
.
c
+ (b c)
+c
+d
C2
...
CD [T (e2 )]
...
CD [T (en )]]
Cn ] = In .
16(b) Define T : Pn Rn+1 by T [p(x)] = (p(a0 ), p(a1 ), . . . , p(an )), where a0 , . . . , an are fixed
distinct real numbers. If B = {1, x, . . . , xn } and D Rn+1 is the standard basis,
MDB(T) = [CD[T(1)]  CD[T(x)]  CD[T(x^2)]  . . .  CD[T(x^n)]]
is the (n + 1) × (n + 1) matrix whose row corresponding to a_i is [1, a_i, a_i^2, . . . , a_i^n] (a Vandermonde-type matrix built from a0, a1, . . . , an).
Since the ai are distinct, this matrix has nonzero determinant by Theorem 7, 3.2. Hence, T
is an isomorphism by Theorem 4.
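The nonzero determinant of this Vandermonde-type matrix is easy to illustrate numerically (a MATLAB sketch with sample distinct values a0, . . . , a3 of my own choosing; MATLAB's vander lists the powers in descending order, which only permutes the columns and so does not affect invertibility):
>> a = [0 1 2 3];           % distinct real numbers a0, ..., a3 (sample values)
>> V = vander(a)            % each row is [a_i^3, a_i^2, a_i, 1]
>> det(V)                   % nonzero because the a_i are distinct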
S,T
20(d) Assume that V W U. Recall that the sum S + T : W U of two operators is defined
by (S + T ) (w) = S(w) + T (w) for all w in W. Hence, for v in V :
[(S + T ) R] (v) = (S + T )[R(v)]
= S[R(v)] + T [R(v)]
= (SR)(v) + (T R)(v)
= (SR + T R)(v).
Since this holds for all v in V, it shows that (S + T )R = SR + T R.
21(b) If P and Q are subspaces of a vector space W, recall that P + Q = {p + q | p in P, q in Q} is
a subspace of W (Exercise 25, 6.4). Now let w be any vector in im(S + T ) . Then
w = (S + T )(v) = S(v) + T (v) for some v in V, whence w is in im S+ im T. Thus,
im(S + T ) im S + im T.
22(b) If T is in X10 , then T (v) = 0 for all v in X1 . As X X1 , this implies that T (v) = 0 for all v
in X; that is T is in X 0 . Hence, X10 X 0 .
24(b) We have R : V L(R, V ) defined by R(v) = Sv . Here Sv : R V is defined by Sv (r) = rv.
R is a linear transformation: The requirements that R(v + w) = R(v) + R(w) and
R(av) = aR(v) translate to Sv+w = Sv + Sw and Sav = aSv . If r is arbitrary in R:
Sv+w (r) = r(v + w) = rv + rw = Sv (r) + Sw (r) = (Sv + Sw )(r)
Sav (r) = r(av) = a(rv) = a [Sv (r)] = (aSv )(r).
Hence, Sv+w = Sv + Sw and Sav = aSv so R is linear.
R is one-to-one: If R(v) = 0 then Sv = 0 is the zero transformation R V. Hence we have
0 = Sv (r) = rv for all r; taking r = 1 gives v = 0. Thus ker R = 0.
R is onto: Given T in L(R, V ), we must find v in V such that T = R(v); that is T = Sv .
Now T : R V is a linear transformation and we take v = T (1). Then, for r in R:
Sv (r) = rv = rT (1) = T (r 1) = T (r).
Hence, Sv = T as required.
25(b) Given the linear transformation T : R V and an ordered basis B = {b1 , b2 , . . . , bn }
of V , write T (1) = a1 b1 + a2 b2 + + an bn where the ai are in R. We must show that
T = a1 S1 + a2 S2 + +an Sn where Si (r) = rbi for all r in R. We have
(a1 S1 + a2 S2 + + an Sn )(r) = a1 S1 (r) + a2 S2 (r) + + an Sn (r)
= a1 (rb1 ) + a2 (rb2 ) + + an (rbn )
= rT (1)
= T (r)
for all r in R. Hence a1 S1 + a2 S2 + + an Sn = T.
Exercises 9.2
1(b) PDB
= CD (x) CD (1 + x) CD (x2 ) =
32
1
1
1
2
1
2
x = 23 2 + 1(x + 3) + 0(x2 1)
because
1
2
Given v = 1 + x + x2 , we have
CB (v) =
0
1
1
and CD (v) =
21
1
1
1
2
0
1
1
1
2
= CB (1 + x + x2 ) CB (1 x) CB (1 + x2 ) =
PDB = CD (1) CD (x) CD (x2 ) =
1=
x=
x2 =
1
3
1
3
1
3
1
3
2
2
because
= CD (v)
1
0
(1 + x + x2 ) + (1 x) (1 + x2 )
(1 + x + x2 ) 2(1 x) (1 + x2 )
(1 + x + x2 ) + (1 x) + 2(1 + x2 ) .
1
0
1
PED = CE (1 + x + x2 ) CE (1 x) CE (1 + x2 ) = 1 1 0
PEB
= CE (1) CE (x) CE (x2 ) =
)
*
where we note the order of the vectors in E = x2 , x, 1 . Finally, matrix multiplication
verifies that PED PDB = PEB .
5(b) Let B = {(1, 2, 1), (2, 3, 0), (1, 0, 2)} be the basis formed by the transposes of the columns
of A. Since D is the standard basis:
= A.
because
= CB0 (1 x2 ) CB0 (1 + x) CB0 (2x + x2 ) =
MB0 (T ) = CB0 [T (1)] CB0 [T (x)] CB0 T (x2 )
= CB0 (1 + x2 ) CB0 (1 + x) CB0 (x + x2 )
Finally
MB (T ) = CB T (1 x2 )
CB [T (1 + x)] CB T (2x + x2 )
= CB (1 x) CB (2 + x + x2 ) CB (2 + 3x + x2 )
CB
2
5
9(b) Choose a basis of R2 , say B = {(1, 0), (0, 1)} , and compute
MB(T) = [CB[T(1, 0)]  CB[T(0, 1)]]. Hence,
cT(x) = c_{MB(T)}(x) = det[[x − 3, −5],[−2, x − 3]] = x^2 − 6x − 1.
Note that the calculation is easy because B is the standard basis, but any basis could be used.
)
*
(d) Use the basis B = {1, x, x^2} of P2 and compute
MB(T) = [CB[T(1)]  CB[T(x)]  CB[T(x^2)]] = [CB(1 + x − 2x^2)  CB(1 − 2x + x^2)  CB(−2 + x)].
Hence,
cT(x) = c_{MB(T)}(x) = det[[x − 1, −1, 2],[−1, x + 2, −1],[2, −1, x]] = x^3 + x^2 − 8x − 3.
(f) Use B =
0 1
0 0
,
,
,
0
1
x
(
'
(
'
(
1 0
0 1
MB (T ) = CB T
CB T
0 0
0 0
1 0
0 1
1
= CB
CB
CB
1 0
0 1
1
1
0
1
0
1
0
x1
x+2
x + 3
x2
and compute
'
0
CB T
1
0
0
CB
0
0
0
(
1
'
CB T
(
x1
0
1
0
x1
x+1
x+1
x+1
x+1
0
1
0
x1
x+1
= x4 .
12. Assume that A and B are both n n and that null A = null B. Define TA : Rn Rn by
TA (x) = Ax for all x in Rn ; similarly for TB . Then let T = TA and S = TB . Then ker S =
null B and ker T = null A so, by Exercise 28, 7.3 there is an isomorphism R : Rn Rn
such that T = RS. If B0 is the standard basis of Rn , we have
A = MB0 (T ) = MB0 (RS) = MB0 (R)MB0 (S) = U B
where U = MB0 (R). This is what we wanted because U is invertible by Theorem 4, 9.1.
Conversely, assume that A = UB with U invertible. If x is in null A then Ax = 0, so UBx = 0, whence Bx = 0 (because U is invertible); that is, x is in null B. In other words, null A ⊆ null B. But B = U^(-1)A, so null B ⊆ null A by the same argument. Hence null A = null B.
16(b) We verify first that S is linear. Showing S(w + v) = S(w) + S(v) means showing that
MB (Tw+v ) = MB (Tw ) + MB (Tv ). If B = {b1 , b2 } then column j of MB (Tw+v ) is
CB [Tw+v (bj )] = CB [(w + v)bj ] = CB (wbj + vbj ) = CB (wbj ) + CB (vbj )
because CB is linear. This is column j of MB (Tw ) + MB (Tv ), which shows that S(w + v) =
S(w) + S(v). A similar argument shows that MB (Taw ) = a MB (Tw ), so S(aw) = a S(w), and
hence that S is linear.
To see that S is one-to-one, let S(w) = 0; by Theorem 2 7.2 we must show that w = 0. We
have MB (Tw ) = S(w) = 0 so, comparing jth columns, we see that CB [Tw (bj )] = CB [wbj ] = 0
for j = 1, 2. As CB is an isomorphism, this means that wbj = 0 for each j. But B is
a basis of C and 1 is in C, so there exist r and s in R such that 1 = rb1 + sb2 . Hence
w = w 1 = rwb1 + swb2 = 0, as required.
Finally, to show that S(wv) = S(w)S(v) we first show that Tw Tv = Twv . Indeed, given z
in C, we have
(Tw Tv )(z) = Tw (Tv (z)) = w(vz) = (wv)z = Twv (z).
Since this holds for all z in C, it shows that Tw Tv = Twv . But then Theorem 1 shows that
S(wv) = MB (Tw Tv ) = MB (Tw )MB (Tv ) = S(w)S(v).
This is what we wanted.
Exercises 9.3
2(b) Let v T (U), say v = T (u) where u U. Then T (v) = T [T (u)] T (U) because T (u) U.
This shows that T (U) is T -invariant.
3(b) Given v in S(U), we must show that T (v) is also in S(U). We have v = S(u) for some u in
U. As ST = T S, we compute:
T (v) = T [S(u)] = (T S)(u) = (ST )(u) = S[T (u)].
As T (u) is in U (because U is T -invariant), this shows that T (v) = S[T (u)] is in S(U).
6. Suppose that a subspace U of V is T -invariant for every linear operator T : V V ; we must
show that either U = 0 or U = V. Assume that U = 0; we must show that U = V. Choose
u = 0 in U, and (by Theorem 1, 6.4) extend {u} to a basis {u, e2 , . . . , en } of V . Now let v be
any vector in V. Then (by Theorem 3, 7.1) there is a linear transformation T : V V such
that T (u) = v and T (ei ) = 0 for each i. Then v = T (u) lies in U because U is T -invariant.
As v was an arbitrary vector in V , it follows that V = U.
[Remark: The only place we used the hypothesis that V is finite dimensional is in extending
{u} to a basis of V. In fact, this is true for any vector space, even of infinite dimension.]
)
*
8(b) We have U = span 1 2x2 , x + x2 . To show that U is T -invariant, it suffices (by Example
3) to show that T (1 2x2 ) and T (x + x2 ) both lie in U. We have
T(x + x^2) = −1 + 2x^2 = −(1 − 2x^2), which lies in U.
So both T(1 − 2x^2) and T(x + x^2) lie in U, and hence U is T-invariant. To get a block triangular matrix for
T extend the basis 1 2x2 , x + x2 of U to a basis B of V in any way at all, say
)
*
B = 1 2x2 , x + x2 , x2 .
Then, using (*), we have
MB (T ) = CB T (1 2x2 )
CB T (x + x2 )
CB T (x2 ) =
3
3
0
where the last column is because T (x2 ) = 1 + x + 2x2 = (1 2x2 ) + (x + x2 ) + 3(x2 ). Finally,
x 3 1 1
x3 1
= (x 3)(x2 3x + 3).
cT (x) = 3 x 1 = (x 3)
3
x
0
0 x3
These calculations hold for any matrix E; but if E 2 = E we get M22 = U W. First
U W = {0} because X in U W implies X = XE because X is in U and XE = 0 because
X is in W, so X = XE = 0. To prove that U + W = M22 let X be any matrix in M22 . Then:
XE is in U
X XE is in W
because (XE)E = XE 2 = XE
because (X XE)E = XE XE 2 = XE XE = 0.
20. Let B1 be a basis of U and extend it (using Theorem 1, Section 6.4) to a basis B of V. Writing T1 for the restriction of T to U, Theorem 1 gives the block form
MB(T) = [[MB1(T1), Y],[0, Z]].
Hence xI − MB(T) = [[xI − MB1(T1), −Y],[0, xI − Z]], so
cT(x) = det[xI − MB(T)] = det[xI − MB1(T1)] · det[xI − Z] = cT1(x) · det[xI − Z].
22(b) We have T : P3 P3 given by T [p(x)] = p(x) for all p(x) in P3 . We leave it to the reader
to verify that T is a linear operator. We have
T^2[p(x)] = T{T[p(x)]} = T[p(−x)] = p(−(−x)) = p(x) = 1_{P3}[p(x)].
Hence, T^2 = 1_{P3}. As in Example 10, let
U1 = {p(x) | T[p(x)] = p(x)} = {p(x) | p(−x) = p(x)}
U2 = {p(x) | T[p(x)] = −p(x)} = {p(x) | p(−x) = −p(x)}.
These are the subspaces of even and odd polynomials in P3 , respectively, and have bases
B1 = {1, x2 } and B2 = {x, x3 }. Hence, use the ordered basis B = {1, x2 ; x, x3 } of P3 . Then
MB(T) = [CB[T(1)]  CB[T(x^2)]  CB[T(x)]  CB[T(x^3)]] = [CB(1)  CB(x^2)  CB(−x)  CB(−x^3)], so in block form
MB(T) = [[MB1(T), 0],[0, MB2(T)]] = [[I2, 0],[0, −I2]].
22(d) Here T^2(a, b, c) = T[T(a, b, c)] = (a, b, c) for all (a, b, c) in R^3, so T^2 = 1_{R^3}.
Note that T (1, 1, 0) = (1, 1, 0), while T (1, 0, 0) = (1, 0, 0) and T (0, 1, 2) = (0, 1, 2).
Hence, in the notation of Theorem 10, let B1 = {(1, 1, 0)} and B2 = {(1, 0, 0), (0, 1, 2)}.
These are bases of U1 = R(1, 1, 0) and U2 = R(1, 0, 0) + R(0, 1, 2), respectively.
So if we
take B = {(1, 1, 0); (1, 0, 0), (0, 1, 2)} then MB1 (T ) = [1] and MB2 (T ) =
1
0
0
MB1 (T )
0
MB (T ) =
= 0 1 0 .
0
MB2 (T )
1
0
. Hence
23(b) Given v, T [v T (v)] = T (v) T 2 (v) = T (v) T (v) = 0, so v T (v) lies in ker T . Hence
v = (v T (v)) + T (v) is in ker T + im T for all v, that is V = ker T + im T . If v lies
in ker T im T, write v = T (w), w in V . Then 0 = T (v) = T 2 (w) = T (w) = v, so
ker T im T = 0.
25(b) We first verify that T 2 = T. Given (a, b, c) in R3 , we have
T 2 (a, b, c) = T (a + 2b, 0, 4b + c) = (a + 2b, 0, 4b + c) = T (a, b, c)
Hence T 2 = T . As in the preceding exercise, write
U1 = {v | T (v) = v}
and
U2 = {v | T (v) = 0} = ker(T ).
MB2 (T )
01
f(v)(z f(z)z) = 0
for all v. Since f = 0, f(v) = 0 for some v, so this holds if and only if
z = f(z)z.
As z = 0, this holds if and only if f(z) = 1.
30(b) Let be an eigenvalue of T . If A is in E (T ) then T (A) = A; that is U A = A. If we write
A = [p1 p2 . . . pn ] in terms of its columns p1 , p2 , . . . , pn , then U A = A becomes
U [p1
[Up1
p2
Up2
...
...
pn ] = [p1
Upn ] = [p1
p2 . . .
p2
pn ]
...
pn ] .
Comparing columns gives Upi = pi for each i; that is pi is in E (U) for each i. Conversely,
if p1 , p2 , . . . , pn are all in E (U) then Upi = pi for each i, so T (A) = U A = A as above.
Thus A is in E (T ).
Exercises 10.1
2. If , denotes the inner product on V, then u1 , u2 is a real number for all u1 and u2 in
U. Moreover, the axioms P 1 P 5 hold for the space U because they hold for V and U is a
subset of V. So , is an inner product for the vector space U.
.
.
3(b) f2 = cos2 x dx = 12 [1 + cos(2x)] dx = 12 x + 12 sin(2x) = . Hence f = 1 f is a
unit vector.
1 1
3
1
1
2
T
(d) v = v, v = v
v = 3 1 v
= 17.
1
2
1
2
1
3
1
1
Hence v v = 17
is a unit vector in this space.
1
(d) ||f − g||^2 = ∫(1 − cos x)^2 dx = ∫[3/2 − 2 cos x + (1/2) cos 2x] dx, because cos^2 x = (1/2)[1 + cos 2x].
Hence ||f − g||^2 = [(3/2)x − 2 sin x + (1/4) sin 2x] evaluated from −π to π = (3/2)[π − (−π)] = 3π, so
d(f, g) = √(3π).
= g, f .
⟨v, v⟩ = v^T Av = [v1 v2] [[5, −3],[−3, 2]] [v1; v2] = 5v1^2 − 6v1v2 + 2v2^2
= 5(v1 − (3/5)v2)^2 − (9/5)v2^2 + 2v2^2
= 5(v1 − (3/5)v2)^2 + (1/5)v2^2
= (1/5)(5v1 − 3v2)^2 + (1/5)v2^2.
Thus, ⟨v, v⟩ ≥ 0 for all v; and ⟨v, v⟩ = 0 if and only if 5v1 − 3v2 = 0 = v2, that is if and only
if v1 = v2 = 0 (v = 0). So P5 holds.
v1
(d) As in (b), consider v =
.
v2
v, v = [v1
=
=
=
=
v2 ]
v1
v2
(3v1 + 4v2 )2 + 2v22 .
+ 6v22
Thus, v, v 0 for all v; and (v, v) = 0 if and only if 3v1 + 4v2 = 0 = v2 ; that is if and only
if v = 0. Hence P 5 holds. The other axioms hold because A is symmetric.
and a22
a11
a12
a21
a22
, then aij is the coefficient of vi wj in v, w. Here a11 = 1, a12 = 1 = a21 ,
1
1
= 2. Thus, A =
. Note that a12 = a21 , so A is symmetric.
(d) As in (b): A =
2
0
14. As in the hint, write x, y = xT Ay. Since A is symmetric, this satisfies axioms P 1, P 2, P 3
and P 4 for an inner product on Rn (and only P 2 requires that A be symmetric). Then it
follows that
0 = x + y, x + y = x, x + x, y + y, x + y, y = 2x, y for all x, y in Rn .
Hence x, y = 0 for all x and y in Rn . But if ej denotes column j of In , then
ei , ej = eTi Aej is the (i, j)-entry of A. It follows that A = 0.
P3
P2
(3) By (1): v, 0 = v, 0 + 0 = v, 0 + v, 0. Hence v, 0 = 0. Now 0, v = 0 by P2.
(4) If v = 0 then v, v = 0, 0 = 0 by (3). If v, v = 0 then it is impossible that v = 0 by
P 5, so v = 0.
26(b) Here
)
*
W = w | w in R3 and v w = 0
= {(x, y, z) | x y + 2z = 0}
= {(s, s + 2t, t) | s, t in R}
= span B
where B = {(1, 1, 0), (0, 2, 1)} . Then B is the desired basis because B is independent
[In fact, if s(1, 1, 0) + t(0, 2, 1) = (s, s + 2t, t) = (0, 0, 0) then s = t = 0].
28. Write u = v w; we show that u = 0. We are given that
u, vi = v w, vi = v, vi w, vi = 0
Thus, u = 0, so u = 0.
29(b) If u = (cos , sin ) in R2 (with the dot product), then u = 1. If v = (x, y) the Schwarz
inequality (Theorem 4 10.1) gives
⟨u, v⟩^2 ≤ ||u||^2 ||v||^2 = 1 · ||v||^2 = ||v||^2.
This is what we wanted.
Exercises 10.2
e1 , e2 = [1 1 1]
f1 , f3 = [1 1 1]
f2 , f3 = [1 0 1]
1
1
0
1
and f3 =
= [3 1 3]
= [3 1 3]
6
1
, f2 =
6
1
1
0
1
1
6
1
= [1 0 1]
1
6
1
=0
=0
1
6
1
= 0.
v, f1
v, f2
v, f3
f3
2 f1 +
2 f2 +
f1
f2
f3 2
3a + b + 3c
ca
3a 6b + 3c
=
e1 +
e2 +
e3
7
2
42
1
= 14
{(6a + 2b + 6c)e1 + (7c 7a)e2 + (a 2b + c)e3 ].
v=
/
because
,
a
c
b
0
f1 , f2 = 1 + 0 + 0 1 = 0
f1 , f3 = 0 + 0 + 0 + 0 = 0
f1 , f4 = 0 + 0 + 0 + 0 = 0
f2 , f3 = 0 + 0 + 0 + 0 = 0
f2 , f4 = 0 + 0 + 0 + 0 = 0
f3 , f4 = 0 + 1 + 0 + 1 = 0.
v=
2(b) Write b1 = (1, 1, 1), b2 = (1, 1, 1), b3 = (1, 1, 0). Note that in the Gram-Schmidt algorithm
we may multiply each ei by a nonzero constant and not change the subsequent ei . This avoids
fractions.
f1 = b1 = (1, 1, 1)
b2 , e1
f2 = b2
f1
e1 2
= (1, 1, 1) 46 (1, 1, 1)
= 31 (1, 5, 1); use e3 = (1, 5, 1) with no loss of generality
b3 , f1
b3 , f2
f3 = b3
f2
2 f1
e1
e2 2
(3)
= (1, 1, 0) 36 (1, 1, 1)
(1, 5, 1)
(30)
1
[(10, 10, 9) (5, 5, 5) + (1, 5, 1)]
= 10
1
= 5 (3, 0, 2); use f3 = (3, 0, 2) with no loss of generality
So the orthogonal basis is {(1, 1, 1), (1, 5, 1), (3, 0, 2)}.
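The Gram-Schmidt steps used in this exercise set can also be carried out numerically (a generic MATLAB sketch using the standard dot product; the inner product of a particular exercise may differ, and the hand solutions rescale the fi to avoid fractions, so answers agree only up to nonzero scalar multiples; the starting vectors below are illustrative):
>> b1 = [1; 1; 1]; b2 = [1; -1; 1]; b3 = [1; 1; 0];     % illustrative starting vectors
>> f1 = b1;
>> f2 = b2 - (dot(b2, f1)/dot(f1, f1)) * f1;
>> f3 = b3 - (dot(b3, f1)/dot(f1, f1)) * f1 - (dot(b3, f2)/dot(f2, f2)) * f2;
>> [dot(f1, f2), dot(f1, f3), dot(f2, f3)]              % all zero: f1, f2, f3 are orthogonal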
/
0
a b
a b
3(b) Note that
,
= aa + bb + cc + dd . For convenience write
c d
c d
1 1
1 0
1 0
1 0
b1 =
, b2 =
, b3 =
, b4 =
. Then:
0
f1 = b1 =
b2 , f1
f1
f2 = b2
f1 2
1 0
1
23
=
1
1
3
2
1
1
1
1
3
2
1
, the result is the same.
b3 , f1
b3 , f2
f3 = b3
f2
2 f1
f1
f2 2
1 0
1 1
2
=
23
15
0 1
0 1
1
2
= 51
.
2
1
3
2
1
1
2
2
1
b4 , f1
b4 , f2
b4 , f3
f4 = b4
f3
2 f1
2 f2
f1
f2
f3 2
1 0
1 1
1 2
1
=
13
15
0 0
0 1
3
1
1
0
1
=2
.
0
Use f4 =
orthogonal basis
'
1
10
1
2
2
1
4(b) f1 = 1;
f2 = x − (⟨x, f1⟩/||f1||^2)f1 = x − (2/2)·1 = x − 1;
f3 = x^2 − (⟨x^2, f1⟩/||f1||^2)f1 − (⟨x^2, f2⟩/||f2||^2)f2 = x^2 − (1/2)(8/3) − [(4/3)/(2/3)](x − 1) = x^2 − 2x + 2/3.
6(b) [x y z w] is in U⊥ if and only if
x + y = [x y z w] · [1 1 0 0] = 0.
Thus y = −x and
U⊥ = {[x, −x, z, w] | x, z, w in R} = span{[1, −1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]}.
Hence dim U⊥ = 3 and dim U = 1.
(d) If p(x) = a + bx + cx^2, then p is in U⊥ if and only if
0 = ⟨p, x⟩ = ∫ from 0 to 1 of (a + bx + cx^2)x dx = a/2 + b/3 + c/4.
Thus a = 2s + t, b = −3s, c = −2t where s and t are in R, so p(x) = (2s + t) − 3sx − 2tx^2.
Hence, U⊥ = span{2 − 3x, 1 − 2x^2} and dim U⊥ = 2, dim U = 1.
a b
is in U if and ony if
(f)
c d
0=
0=
0=
/
/
/
,
,
,
0
0
0
=a+b
=a+c
= a + c + d.
The solution d = 0, b = c = a, so
1 and dim U = 3.
1 0
7(b) Write b1 =
, b2 =
0
'
a
a
, and b3 =
a
0
(
a in R = span{[]} . Thus dim U =
1
0
If E3 =
f1 = b1 = []
b2 , f1
1
1
1
0
2
f2 = b2
2 f1 =
1
1
0
f1
b3 , f1
b3 , f2
f3 = b3
f2
2 f1
f1
f2 2
1 1
1 0
1
1
1
2
=
2
4
0 0
0 1
1 1
0
1
= 21
.
0
1
then
A, E1
A, E2
A, E3
E3
E
+
E
+
1
2
E1 2
E2 2
E3 2
1 0
1
1
0
1
4
4
2
=2
+4
+ 2
0 1
1 1
1 0
3 0
=
projU (A) =
x2 + 1, 1
x2 + 1, 2x 1
1
+
(2x 1)
12
2x 12
3/4
1 1
= x + 56 .
1/6
1/3 (2x 1)
1
6
is
11(b) We have ⟨v + w, v − w⟩ = ⟨v, v⟩ − ⟨v, w⟩ + ⟨w, v⟩ − ⟨w, w⟩ = ||v||^2 − ||w||^2. But this means
that ⟨v + w, v − w⟩ = 0 if and only if ||v|| = ||w||. This is what we wanted.
14(b) If v is in U⊥ then ⟨v, u⟩ = 0 for all u in U. In particular, ⟨v, ui⟩ = 0 for 1 ≤ i ≤ m, so v is in
{u1, . . . , um}⊥. This shows that U⊥ ⊆ {u1, . . . , um}⊥. Conversely, if v is in {u1, . . . , um}⊥
then v, ui = 0 for each i. If u is in U, write u = r1 u1 + + rm um , ri in R. Then
v, u = v, r1 u1 + + rm um
= r1 v, u1 + + rm v, um
= r1 0 + + rm 0
= 0.
As u was arbitrary in U, this shows that v is in U⊥; that is {u1, . . . , um}⊥ ⊆ U⊥.
18(b) Write e1 = (3, 2, 5) and e2 = (1, 1, 1), write B = {e1 , e2 } , and write U = span B. Then
B is orthogonal and so is an orthogonal basis of U. Thus if v = (5, 4, 3) then
v e1
v e2
e2
2 e1 +
e1
e2 2
6
= 38
38 (3, 2, 5) + 3 (1, 1, 1)
= (5, 4, 3)
= v.
projU (v) =
projU (v1 ) =
'
(
nw
19(b) The plane is U = {x | x n = 0}, so span n w, w
n U. Since
n2
'
(
nw
dim U = 2, it suffices to show that B = n w, w
n is independent. These
n2
two vectors are orthogonal (because (n w) n = 0 = (n w) w). Hence B is orthogonal
(and so independent) provided each of the vectors is nonzero. But n × w ≠ 0 because n
nw
and w are not parallel, and w
n is nonzero because w and n are not parallel, and
n2
nw
n (w
n) = 0.
n2
23(b) Let V be an inner product space, and let U be a subspace of V. If U = span{f1, ..., fm} with the fi an orthogonal basis of U, then
projU(v) = Σ_{i=1}^m (⟨v, fi⟩/||fi||^2) fi  by Theorem 7, so  ||projU(v)||^2 = Σ_{i=1}^m ⟨v, fi⟩^2/||fi||^2  by Pythagoras' theorem.
So it suffices to show that ||projU(v)||^2 ≤ ||v||^2.
Given v in V, write v = u + w where u = projU (v) is in U and w is in U . Since u and
w are orthogonal, Pythagoras theorem (again) gives
||v||^2 = ||u||^2 + ||w||^2 ≥ ||u||^2 = ||projU(v)||^2.
This is what we wanted.
Exercises 10.3
Orthogonal Diagonalization
T (E1 ) =
T (E2 ) =
T (E3 ) =
T (E4 ) =
0
0
0
1
1
= E1 + E3
= E2 + E4
= E1 + 2E3
= E2 + 2E4 .
Hence,
MB (T ) = [CB [T (E1 )] CB [T (E2 )] CB [T (E3 )] CB [T (E4 )]]
1
0
1
0
5(b) If E = {e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)} is the standard basis of R3 :
ME (T ) = [CE [T (e1 )] CE [T (e2 )] CE [T (e3 )]]
= [CE (7, 1, 0) CE (1, 7, 0) CE (0, 0, 2)]
x7
1
0
= (x 6)(x 8)(x 2) so the eigenvalues are 1 = 6,
x7
0
Thus, cT (x) = 1
0
0
x2
2 = 8, and
Corresponding (orthogonal) eigenvectors
3= 2, (real
as M
B0 (T ) is symmetric).
1
are x1 =
1
0
, x2 =
1
0
, and x3 =
1
2
1
1
0
0
1
, so
, 1
2
1
1
0
0
0
1
1 (1, 1, 0)
2
MB0 (T ) = CB0 [T (1)] CB0 [T (x)] CB0 [T (x2 )]
= CB0 (1 + x2 ) CB0 (3x) CB0 (1 x2 )
Hence, cT (x) =
x+1
x3
1
0
x+1
= x(x 3)(x + 2) so the (real) eigenvalues are 1 = 3,
0
, so
1
0
, X2 =
0
1
1
1
1 , 0 , 0 is an orthonormal basis of eigenvectors of
X3 = 0
2
2
1
0
1
1
MB0 (T ). These have the form CB0 (x), CB0 12 (1 + x2 ) , and CB0 12 (1 x2 ) , respectively,
so
+
,
x, 12 (1 + x2 ), 12 (1 x2 )
and compute:
1 0
0 0
MB (T ) = CB T
CB T
CB T
0 0
1 0
a 0
b 0
0 a
= CB
CB
CB
CB
c 0
d 0
0 c
CB T
Hence,
'
(
xI
0
A 0
cT (x) = det[xI MB (T )] = det
0
xI
0 A
xI A
0
= det
= det(xI A) det(xI A) = [cA (x)]2 .
xI A
MB (T ) = CB [T (f1 )] CB [T (f2 )]
Hence, column j of MB (T ) is
CB (T (fj )) =
fj , T (f1 )
fj , T (f2 )
..
.
fj , T (fn )
CB [T (fn )] .
by the definition of T . Hence the (i, j)-entry of MB (T ) is fj , T (fi ). But this is the (j, i)-entry
of MB (T ) by Theorem 2. Thus, MB (T ) is the transpose of MB (T ).
Exercises 10.4
2(b) We have T
a
b
Isometries
a
b
1
0
0
1
a
b
so T has matrix
1
0
0
1
, which is orthog-
cos
Theorem 4 2.6; see also the discussion following Theorem 3). This can also be seen directly
from the diagram.
X
a a
T =
b b
(d) We have T
a
b
0
1
1
0
a
b
so T has matrix
0
1
1
0
. This is orthog-
y = x
a b
T =
b a
= CB0
CB0
2
2
1
1
1 1
= 12
.
1
Hence, det T = 1 so T is a rotation. Indeed, (the discussion following) Theorem 3 shows that
T is a rotation through an angle where cos = 12 , sin = 12 ; that is = 4 .
3(b) T
a
b
c
cT (x) =
1
2
3c a
3a + c
2b
1
2
23
x+
0
x
3
2
21
1
2
1
2
23
x+
a
b
c
3
2
x2 12
, so T has matrix
3
2
2
x 12
1
2
3
2
x+
1
2
. Thus,
= (x1) x2 + 3 x + 1 .
2
Hence,
we are in (1) of Table 8.1 so T is a rotation about the line Re with direction vector
e=
3(d) T
a
b
c
a
b
c
, so T has matrix
. This is orthog-
onal,so T is an isometry. Since cT (x) = (x 1)(x + 1)2 , we are in case (4) of Table 1. Then
1
e=
0
0
3(f) T
1
2
Hence,
cT (x) =
a+c
2b
ca
1
2
=
0
x+1
1
2
1
2
0
x
1
2
2
0
1
0
1
= (x + 1)
, so T has matrix
b
c
1
2
1
2
0
1
0
1
x
2
1
2
1
2
1
0
1
2
0
1
0
1
= (x + 1)(x2 2x + 1).
eigenvalue 1, so T is rotation of 3
4 about the line Re (the y-axis) followed by a reflection
in the plane (Re) the x-z plane.
6. Let T be an arbitrary isometry, and let a be a real number. If aT is an isometry then Theorem
2 gives
||v|| = ||(aT)(v)|| = ||a(T(v))|| = |a| ||T(v)|| = |a| ||v|| for all v.
Thus |a| = 1 so, since a is real, a = ±1. Conversely, if a = ±1 then |a| = 1, so
||(aT)(v)|| = |a| ||T(v)|| = ||T(v)|| = ||v|| for all v. Hence aT is an isometry by Theorem 2.
12 (b) Assume that S = Su T where u is in V and T is an isometry of V. Since T is onto (by
Theorem 2), let u = T (w) where w V. Then for any v V, we have
(T Sw )(v) =
T (w + v) = T (w) + T (v) = ST (w) (T (v)) = (ST (w) T )(v).
Since this holds for all v V, it follows that T Sw = ST (w) T.
Exercises 10.5
The integrations involved in the computation of the Fourier coefficients are omitted in 1(b), 1(d),
and 2(b).
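Those integrations can be checked numerically (a MATLAB sketch; it assumes an expansion on [-pi, pi] with the usual coefficient formulas a_k = (1/pi)∫ f(x)cos(kx)dx and b_k = (1/pi)∫ f(x)sin(kx)dx, and uses f(x) = |x| purely as an illustration, since the exact functions of 1(b), 1(d) and 2(b) are not reproduced here):
>> f = @(x) abs(x);                                         % illustrative function only
>> a0 = (1/(2*pi)) * integral(f, -pi, pi)                   % constant term
>> ak = @(k) (1/pi) * integral(@(x) f(x).*cos(k*x), -pi, pi);
>> bk = @(k) (1/pi) * integral(@(x) f(x).*sin(k*x), -pi, pi);
>> [ak(1) ak(2) ak(3)]                                      % cosine coefficients
>> [bk(1) bk(2) bk(3)]                                      % sine coefficients (zero here, since f is even)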
1(b) f5 =
cos x +
cos 3x
32
cos 5x
52
+ sin x sin22x + sin33x
2x
4x
6x
8 cos
+ cos
+ cos
22 1
42 1
62 1
(d) f5 =
2(b)
sin 4x
4
sin 5x
5
cos x +
cos 3x
32
cos 5x
52
4. We use the addition formula cos(θ ± φ) = cos θ cos φ ∓ sin θ sin φ, so that
Exercises 11.1
x + 5 3 1 x + 1 x 1 0 x + 1
0
0
x 2 1 =
4
x2
1 =
4
x + 2 1 = (x + 1)3 .
1(b) cA (x) = 4
4
4
4
3
x
3
x
1
x
Hence, λ1 = −1 and we are in case k = 1 of the triangulation algorithm.
I A =
p11
1
1
p12
2 .
(I
A)
Hence, {p11 , p12 } is a basis of null(I
A).
We
now
expand
this
to
a
basis
of
null
However, (I A)2 = 0 so null (I A)2 = R3 . Hence,
inthis case, we expand {p11 , p12 }
0
R3 ,
P = [p11
p12
p13 ] =
0
1
. Hence
satisfies P 1 AP =
1
0
0
as may be verified.
x+3
x+3
1
0
1
0
x+1
3
(d) cA (x) = 4 x + 1 3 = 4
4
0
2
x + 1 x 1
x 4
x+3
1
0
= 4 x 2 3 = (x 1)2 (x + 2).
0
0
x1
Hence λ1 = 1 and λ2 = −2, and we are in case k = 2 of the triangulation algorithm.
I A=
3
0
p11 =
1
4
4
(I A)12 =
12
12
3
3
1
0
0
p11 =
1
4
4
Thus, null(I A) = span{p11 } . We enlarge {p11 } to a basis of null (I A)2
12
p12 =
0
1
2
Thus, null (I A)2 = span{p11 , p12 } . As dim[G1 (A)] = 2 in this case (by Lemma 1), we
have G1 (A) = span{p11 , p12 } . However, it is instructive to continue the process:
(I A)2 = 3
1
1
1
(I A)3 = 9
1
1
= 3(I A)2 .
This continues to give (I A)4 = 32 (I A)2, . . . , and in general (I A)k = 3k2 (I A)2 for
k 2. Thus null (I A)k = null (I A)2 for all k 2, so
G1 (A) = null (I A)2 = span {p11 , p12 } .
as we expected. Turning to λ2 = −2:
2I A =
0
3
p21 =
1
1
1
Hence, null[2I A] = span{p21 } . We need go no further with this as {p11 , p12 , p21 } is a
basis of R3 . Hence
P = [p11
p21 ] =
p12
1
1
1
satisfies P 1 AP =
as may be verified.
(f) To evaluate cA (x), we begin by adding column 4 to column 1:
cA (x) =
x + 1 6 3
0
2
x 3 2 2
x 3 2
=
0
1
3
x
1
3
x
1
1
2
x
x+1
1
2
x 3 2 2
x3
= (x + 1) 3
x
1 = (x + 1) 3
5
5
1
x+2
x 3 2
= (x + 1)2 (x 1)2 .
= (x + 1)1
x+3
1
x
2
x
x+1
0
0
x 1
x+1
x+1
x3
2
1
= (x + 1)
x+2
x3
x+1
x+1
I A =
(I A)2 =
13
23
13
13
12
10
0
0
1
0
p11
p11
0
0
1
1
0
0
1
p12 =
1
0
1
0
We have dim[G1(A)] = 2 as λ1 = −1 has multiplicity 2 in cA(x), so G1(A) = span{p11, p12}.
Turning to λ2 = 1:
I A=
(I A)2 =
1
1
2
0
p21 =
p21
0
0
as may be verified.
2
1
5
2
1
gives P 1 AP =
p22 =
0
0
0
1
1
0
0
4. Let B be any basis of V and write A = MB (T ). Then cT (x) = cA (x) and this is a polynomial:
cT (x) = a0 + a1 x + + an xn for some ai in R. Now recall that MB : L(V, V ) Mnn
is an isomorphism of vector spaces (Exercise 26, 9.1) with the additional property that
MB (T k ) = MB (T )k for k 1 (Theorem 1 9.2). With this we get
MB [cT (T )] = MB [a0 1V + a1 T + + an T n ]
= a0 MB (1V ) + a1 MB (T ) + + an MB (T )n
= a0 I + a1 A + + an An
= cA (A)
=0
by the Cayley-Hamilton theorem. Hence cT (T ) = 0 because MB is one-to-one.
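The Cayley-Hamilton step used above is easy to check numerically for any particular matrix (a MATLAB sketch, not part of the original solution; poly(A) returns the coefficients of cA(x) and polyvalm evaluates the corresponding matrix polynomial):
>> A = [1 2; 3 4];          % an illustrative square matrix
>> c = poly(A);             % coefficients of the characteristic polynomial of A
>> polyvalm(c, A)           % cA(A): the zero matrix up to rounding error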
Exercises 11.2
2.
, and
is invertible.
APPENDICES
Exercises A
Complex Numbers
1(b) 12 + 5i = (2 + xi)(3 2i) = (6 + 2x) + (4 + 3x)i. Equating real and imaginary parts gives
6 + 2x = 12, 4 + 3x = 5, so x = 3.
(d) 5 = (2 + xi)(2 − xi) = (4 + x^2) + 0i. Hence 4 + x^2 = 5, so x = ±1.
(3 − 2i)/(1 − i) − (3 − 7i)/(2 − 3i) = (3 − 2i)(1 + i)/[(1 − i)(1 + i)] − (3 − 7i)(2 + 3i)/[(2 − 3i)(2 + 3i)]
= (5 + i)/(1 + 1) − (27 − 5i)/(4 + 9) = 11/26 + (23/26)i.
(1 + 5i)/(1 + 2i) = (1 + 5i)(1 − 2i)/[(1 + 2i)(1 − 2i)] = (11 + 3i)/(1 + 4) = 11/5 + (3/5)i.
x = [−(−5) ± √((−5)^2 − 4·2·2)]/(2·2) = (5 ± √9)/4 = 2, 1/2.
5(b) If x = re^(iθ) then x^3 = −8 becomes r^3 e^(3iθ) = 8e^(πi). Thus r^3 = 8 (whence r = 2) and 3θ = π + 2kπ.
Hence θ = π/3 + 2kπ/3, k = 0, 1, 2. The roots are
2e^(iπ/3) = 1 + √3 i   (k = 0)
2e^(iπ) = −2           (k = 1)
2e^(5iπ/3) = 1 − √3 i  (k = 2).
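These three roots can be confirmed numerically (a MATLAB sketch; roots is given the coefficients of x^3 + 8, that is, of the equation x^3 = -8):
>> roots([1 0 0 8])         % returns -2 and 1 +/- sqrt(3)i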
2√2 e^(0i) = 2√2        (k = 0)
2√2 e^(iπ/2) = 2√2 i    (k = 1)
2√2 e^(iπ) = −2√2       (k = 2)
2√2 e^(3iπ/2) = −2√2 i  (k = 3).
6(b) The quadratic is (x u)(x u
) = x2 (u + u
)x + u u
= x2 4x + 13. The other root is
u
= 2 + 3i.
(d) The quadratic is (x u)(x u
) = x2 (u + u
)x + u u
= x2 6x + 25. The other root is
u
= 3 + 4i.
8. If u = 2 i, then u is a root of (x u)(x u
) = x2 (u + u
)x + u u
= x2 4x + 5.
2
If v = 3 2i, then v is a root of (x v)(x v) = x (v + v)x + v
v = x2 6x + 13.
Hence u and v are roots of
(x2 4x + 5)(x2 6x + 13) = x4 10x3 + 42x2 82x + 65.
10(b) Taking x = u = 2: x^2 + ix − (4 + 2i) = 4 + 2i − 4 − 2i = 0. If v is the other root then
u + v = −i (the negative of the coefficient of x), so v = −i − u = −2 − i.
(d) Taking x = u = −2 + i: (−2 + i)^2 + 3(1 − i)(−2 + i) − 5i = (3 − 4i) + 3(−1 + 3i) − 5i = 0.
If v is the other root then u + v = −3(1 − i), so v = −3(1 − i) − u = −1 + 2i.
12(b) |z 1| = 2 means that the distance from z to 1 is 2. Thus the graph is the circle, radius 2,
center at 1.
(d) If z = x + yi, then z̄ = −z becomes x − yi = −x − yi. This holds if and only if x = 0; that is,
if and only if z = yi. Hence the graph is the imaginary axis.
(f) If z = x + yi, then im z = m re z becomes y = mx. This is the line through the origin with
slope m.
18(b) 4i = 4e3i/2 .
4 + 4 3i = 8e2i/3 .
4
8
6 + 6i = 6 2e3i/4 .
1
2.
6 2
1
2
3
2 i.
i/4
1
= 2 2
2e
= 2 cos 4 + i sin
4
Thus =
1 .
2
1
+
i
sin
=
2
(f) 2 3e2i/6 = 2 3 cos
3 2
3
3
20(b) (1 +
4
3i)
= (2ei/3 )4 = 24 e4i/3
=
=
1
16
1
16
1 i
2
3,
so =
Thus =
2
3 .
and we have
so =
3
4 ;
whence
= 1 i.
3
2 i
3 3i.
[cos(4/3) + i sin(4/3)]
12 + 23 i
1
= 32
+
3
32 i.
i/4 10
2e
= ( 2)10 e5i/2 = ( 2)10 e(/22)i
+ i sin
= ( 2)10 ei/2 = 25 cos
2
2
(d) (1 i)10 =
= 32(0 i) = 32i.
5
(f) ( 3 i)9 (2 2i)5 = 2ei/6 ]9 2 2ei/4
= 216 (1 + i).
+ 2 k
k = 0, 1, 2, 3.
3 1 2
2 2 + 2i = 2
3+i
= 2 21 + 23 i = 22 1 + 3i
= 2 23 12 i = 22 1 + 3i
= 2 12 23 i = 22 1 + 3i
=
k=2
k =1
/6
k =3
k =4
k
z
3+i
1
2i
3+i
3i
4
2i
3i
26(b) Each point on the unit circle has polar form ei for some angle . As the n points are equally
spaced, the angle between consecutive points is 2
n . Suppose the first point into the first
i
2i/n
quadrant is z0 = e . Write w = e
. If the points are labeled z1 , z2 , z3 , . . . , zn around the
unit circle, they have polar form
z1 = ei
z2 = e(+2/n)i = ei e2i/n = z1 w
z3 = e[+2(2/n)]i = ei e4i/n = z1 w2
z4 = e[+3(2/n)]i = ei e6i/n = z1 w3
..
.
zn = e[+(n1)(2/n)]i = eai e2(n1)i/n = z1 wn1 .
Now w^n = (e^(2πi/n))^n = e^(2πi) = 1, so
(*)   0 = 1 − w^n = (1 − w)(1 + w + w^2 + ··· + w^(n−1)).
(The accompanying figure shows the points z1, z2, z3, z4, … spaced 2π/n apart around the unit circle.)
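The resulting identity 1 + w + w^2 + ··· + w^(n−1) = 0 (for w ≠ 1) is easy to see numerically as well (a short MATLAB sketch, with n = 8 as an illustration):
>> n = 8;
>> w = exp(2i*pi/n);        % a primitive n-th root of unity
>> z = w.^(0:n-1);          % the n equally spaced points on the unit circle
>> sum(z)                   % zero up to rounding error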
Exercises B
Proofs
1(b) (1). We are to prove that if the statement m is even and n is odd is true then the statement
m + n is odd is also true.
If m is even and n is odd, they have the form m = 2p and n = 2q + 1, where p and q are
integers. But then m + n = 2(p + q) + 1 is odd, as required.
(2). The converse is false. It states that if m + n is odd then m is even and n is odd; and a
counterexample is m = 1, n = 2.
(d) (1). We are to prove that if the statement x2 5x + 6 = 0 is true then the statement x = 2
or x = 3 is also true.
Observe first that x^2 − 5x + 6 = (x − 2)(x − 3). So if x is a number satisfying x^2 − 5x + 6 = 0,
then (x − 2)(x − 3) = 0, so either x = 2 or x = 3. [Note that we are using an important fact
about real numbers: If the product of two real numbers is zero then one of them is zero.]
(2). The converse is true. It states that if x = 2 or x = 3 then x satisfies the equation
x2 5x + 6 = 0. This is indeed the case as both x = 2 or x = 3 satisfy this equation.
2(b) The implication here is p q where p is the statement n is any odd integer, and q is the
statement n2 = 8k + 1 for some integer k. We are asked to either prove this implication or
give a counterexample.
This implication is true. If p is true then n is odd, say n = 2t + 1 for some integer t. Then
n^2 = (2t)^2 + 2(2t) + 1 = 4t(t + 1) + 1. But t(t + 1) is even (because t is either even or odd),
say t(t + 1) = 2k where k is an integer. Hence n^2 = 4t(t + 1) + 1 = 4(2k) + 1 = 8k + 1, as required.
3(b) The implication here is p q where p is the statement n + m = 25, where n and m are
integers, and q is the statement one of m and n is greater than 12 is also true. We are asked
to either prove this implication by the method of contradiction, or give a counterexample.
The implication is true. To prove it by contradiction, we assume that the conclusion q
is false, and look for a contradiction. In this case assuming that q is false means both n ≤ 12
and m ≤ 12. But then n + m ≤ 24, contradicting the hypothesis that n + m = 25. So the
statement is true by the method of proof by contradiction.
The converse is false. It states that q p, that is if one of m and n is greater than 12
then n + m = 25. But n = 13 and m = 13 is a counterexample.
(d) The implication here is p q where p is the statement mn is even, where n and m are
integers, and q is the statement m is even or n is even. We are asked to either prove this
implication by the method of contradiction, or give a counterexample.
This implication is true. To prove it by contradiction, we assume that the conclusion q
is false, and look for a contradiction. In this case assuming that q is false means that m and
n are both odd. But then mn is odd (if either were even the product would be even). This
contradicts the hypothesis, so the statement is true by the method of proof by contradiction.
The converse is true. It states that if m or n is even then mn is even, and this is true (if
m or n is a multiple of 2, then mn is a multiple of 2).
4(b) The implication here is: x is irrational and y is rational x + y is irrational.
5(b) At first glance the statement does not appear to be an implication. But another way to say
it is that if the statement n ≥ 2 is true then the statement n^3 ≥ 2^n is also true.
This is not true. In fact, n = 10 is a counterexample because 10^3 = 1000 while 2^10 = 1024.
It is worth noting that the statement n^3 ≥ 2^n does hold for 2 ≤ n ≤ 9.
Exercises C
Mathematical Induction
(Sn)   1/(1·2) + 1/(2·3) + ··· + 1/(n(n + 1)) = n/(n + 1).
Then S1 is true: it reads 1/(1·2) = 1/(1 + 1), which is true. Now assume Sn is true for some n ≥ 1.
We must use Sn to show that Sn+1 is also true. The statement Sn+1 reads as follows:
1/(1·2) + 1/(2·3) + ··· + 1/((n + 1)(n + 2)) = (n + 1)/(n + 2).
The second-last term on the left side is 1/(n(n + 1)), so we can use Sn:
1/(1·2) + 1/(2·3) + ··· + 1/((n + 1)(n + 2)) = [1/(1·2) + 1/(2·3) + ··· + 1/(n(n + 1))] + 1/((n + 1)(n + 2))
= n/(n + 1) + 1/((n + 1)(n + 2))
= [n(n + 2) + 1]/[(n + 1)(n + 2)]
= (n + 1)^2/[(n + 1)(n + 2)]
= (n + 1)/(n + 2).
Thus Sn+1 is true and the induction is complete.
14. Write Sn for the statement
(Sn)   1/√1 + 1/√2 + ··· + 1/√n ≤ 2√n − 1.
Then S1 is true as it asserts that 1/√1 ≤ 2√1 − 1, which is true. Now assume that Sn is true
for some n ≥ 1. We must use Sn to show that Sn+1 is also true. The statement Sn+1 reads
as follows:
1/√1 + 1/√2 + ··· + 1/√(n + 1) ≤ 2√(n + 1) − 1.
Using Sn, the left side satisfies
1/√1 + 1/√2 + ··· + 1/√(n + 1) = [1/√1 + 1/√2 + ··· + 1/√n] + 1/√(n + 1)
≤ 2√n − 1 + 1/√(n + 1)
= [2√(n^2 + n) + 1]/√(n + 1) − 1
< 2(n + 1)/√(n + 1) − 1
= 2√(n + 1) − 1,
where, at the second-last step, we used the fact that 2√(n^2 + n) + 1 < 2(n + 1); this follows by
showing that 4(n^2 + n) < (2n + 1)^2, and taking positive square roots. Thus Sn+1 is true and the
induction is complete.
18. Let Sn stand for the statement
n3 n is a multiple of 3.
Clearly S1 is true. If Sn is true, then n3 n = 3k for some integer k. Compute:
(n + 1)^3 − (n + 1) = (n^3 + 3n^2 + 3n + 1) − (n + 1) = (n^3 − n) + 3n^2 + 3n = 3k + 3n^2 + 3n,
which is clearly a multiple of 3. Hence Sn+1 is true, and so Sn is true for every n by induction.
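The claim itself is easy to spot-check alongside the induction (a one-line MATLAB sketch):
>> n = (1:10)';
>> [n, n.^3 - n, mod(n.^3 - n, 3)]     % the last column is identically zero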
20. Look at the first few values: B1 = 1, B2 = 5, B3 = 23, B4 = 119, … . If these are compared
to the factorials 1! = 1, 2! = 2, 3! = 6, 4! = 24, 5! = 120, …, it is clear that Bn = (n + 1)! − 1
holds for n = 1, 2, 3, 4 and 5. So it seems a reasonable conjecture that
Bn = (n + 1)! 1
for
n 1.
Sn
S9
S17 S25
Chapter 5:
Q1. Let U = {u1 = (1, 1, 2, 5), u2 = (4, 1, 1, 1), u3 = (7, 28, 5, 5)}. Using
Maple, verify if the following subset of R4 is independent. Verify if U contains
orthogonal vectors. If so, normalize them.
> with(LinearAlgebra)
> u1 := Vector([1, 1, 2, 5])
> u2 := Vector([4, 1, 1, 1])
> u3 := Vector([7, 28, 5, 5])
> A := <u1 | u2 | u3>
> LUDecomposition(A, output = 'R')
> DotProduct(u1, u2)
> DotProduct(u1, u3)
> DotProduct(u2, u3)
> normu1 := (1/Norm(u1, 2)) * u1
> normu2 := (1/Norm(u2, 2)) * u2
> normu3 := (1/Norm(u3, 2)) * u3
Q2. Let A be the 4 × 5 matrix with rows [1, 4, 2, 4, 3], [5, 1, 5, 2, 1], [4, 5, 19, 10, 3], [0, 11, 10, 9, 1]. Find bases for the row space and column space of A, and determine the rank of A.
> with(LinearAlgebra)
> A := Matrix([[1, 4, 2, 4, 3], [5, 1, 5, 2, 1], [4, 5, 19, 10, 3], [0, 11, 10, 9, 1]])
> RowSpace(A)        (returns a basis of the row space of A)
> ColumnSpace(A)     (returns a basis of the column space of A)
> Rank(A)            (returns the rank of A)
Chapter 6:
Q1. Let V = M2,2. Determine if the following subset U of V is independent:
{ [1, 0; 0, 1], [1, 0; 1, 0], [1, 1; 1, 1] }.
Does v = [1, 4; 5, 3] lie in span U?
> with(LinearAlgebra)
> e1 := Vector([1, 0, 0, 1])
> e2 := Vector([1, 0, 1, 0])
> e3 := Vector([1, 1, 1, 1])
> v := Vector([1, 4, 5, 3])
> A := <e1 | e2 | e3>
> LUDecomposition(A, output = 'R')
> newA := <e1 | e2 | e3 | v>
> LUDecomposition(newA, output = 'R')
Q2. Let V = P3 . Using Maple, determine if the following set of polynomials,
{1, 1 + x3 , x + x2 , 1 + x + x3 }, spans V .
> with(LinearAlgebra)
> e1 := Vector([1, 0, 0, 0])
> e2 := Vector([1, 0, 0, 1])
> e3 := Vector([0, 1, 0, 1])
> e4 := Vector([1, 1, 0, 1])
> A := <e1 | e2 | e3 | e4>
> Rank(A)
Chapter 5:
Q1. Let U = {u1 = (16, 5, 9, 4), u2 = (2, 11, 7, 14), u3 = (3, 10, 6, 15)}. Using Matlab, verify if this subset of R4 is independent. Is b = (13, 8, 12, 1) in the span of U? Verify if U contains orthogonal vectors.
>> u1 = [16; 5; 9; 4]
>> u2 = [2; 11; 7; 14]
>> u3 = [3; 10; 6; 15]
>> b = [13; 8; 12; 1]
>> A = [u1 u2 u3]
>> rref(A)
>> aug = [A b]
>> rref(aug)
>> u1' * u2
>> u1' * u3
>> u2' * u3
Q2. Let A = [-1 0 1; -3 4 1; 0 0 2] and B = [4 0 0; 0 1 0; 0 0 2]. Using Matlab, compute the rank and trace of A and B to check whether these two matrices can be similar. Using Theorem 3 in Section 5.5, find the eigenvalues and eigenvectors of A to determine if it is diagonalizable. If yes, then find the matrix P such that P^(-1) A P = D.
>> A = [-1 0 1; -3 4 1; 0 0 2]
>> B = [4 0 0; 0 1 0; 0 0 2]
>> rankA = rank(A)        (returns the rank of A)
>> rankB = rank(B)        (returns the rank of B)
>> traceA = trace(A)      (returns the trace of A)
>> traceB = trace(B)      (returns the trace of B)
>> [P, D] = eig(A)
>> test = inv(P) * A * P
Chapter 11:
Q3. Let T : R3 → R3 be defined by T(x, y, z) = (14y + z, 5y, -4x + 13y + 4z). Use MATLAB to find the Jordan canonical form of the transformation matrix, A, with respect to the standard basis. Verify that P transforms A to Jordan form by showing that P^(-1) A P = J.
Input the matrix A as follows:
>> A = [0 14 1; 0 5 0; -4 13 4]
>> [P, J] = jordan(A)     (returns the invertible matrix P and the Jordan form J)
>> inv(P) * A * P
Q2. Let T : R5 → R5 be defined by T(a, b, c, d, e) = (3a + b + 2c + 3d + 4e, 3b + c + 2d + 3e, 3c + d + 3e, 3d + e, e). Use MATLAB to find the Jordan canonical form of the transformation matrix, A, with respect to the standard basis. Verify that P transforms A to Jordan form by showing that P^(-1) A P = J.
>> A = [3 1 2 3 4; 0 3 1 2 3; 0 0 3 1 2; 0 0 0 3 1; 0 0 0 0 3]
>> [P, J] = jordan(A)
>> inv(P) * A * P