
Linear Algebra (MATH 3333 – 04) Spring 2011

Final Exam Practice Problem Solutions


Instructions: Try the following on your own, then use the book and notes where you need help. Afterwards,
check your solutions with mine online. For Sections 1 and 2, no explanations are necessary. For all other
problems, justify your work.
I highly recommend you make sure you can do all of these problems, as well as the Exam 1, Exam 2, and
practice midterm problems, on your own before the final exam.
Note: Not every problem on the practice sheet is modeled on one of your homework problems. However, you can figure out how to do them with an understanding of the basic concepts from the course. They are designed to help piece together your understanding of the course material.
1. Go over your old exams, Homeworks 9–11, and the practice midterm problems.

1 True/False
In this section A is an n × n matrix.

2. T F Two vectors are linearly independent if one is not a scalar multiple of the other.

True. See midterm problem solutions for a proof.

3. T F Every 2 × 2 matrix is diagonalizable.


   
False. We saw that
[ 1 1 ]        [ 1 0 ]
[ 0 1 ]   and  [ 1 1 ]
were not.

4. T F If A is diagonalizable, then there is a basis of eigenvectors of A.

True. In fact these are equivalent conditions.

5. T F If λ is an eigenvalue for A, then the eigenvectors with eigenvalue λ are scalar multiples of each
other.

False. This would mean the set of eigenvectors with eigenvalue λ (together with 0), i.e., ker(λI − A), is a line, but it could be a plane or higher dimensional.

6. T F If A = PDP^{-1}, then A^3 = P^3 D^3 P^{-3}.

False. A^3 = P D^3 P^{-1}.

7. T F A = P_{T←S} [A]_T P_{S←T} where S is the standard basis for Rn.

False. It should either be A = P_{S←T} [A]_T P_{T←S} or [A]_T = P_{T←S} A P_{S←T}.

8. T F If A does not have n distinct eigenvalues, then A is not diagonalizable.


 
False. It might be diagonalizable and it might not be. E.g.,
[ 1 0 ]
[ 0 1 ]
is diagonalizable, but its only eigenvalue is 1. See also #26 and #27. However, the other direction is true: if A has n distinct eigenvalues, then it has a basis of eigenvectors, so A is diagonalizable.

9. T F There is a linear transformation T : R2 → R2 whose image is the same as its kernel.

True, though I admit it's a bit of a strange question. Rank–Nullity says if the image is a line, so is the kernel, so it seems plausible. For example,
A = [ 1 0 ]
    [ 0 0 ]
is projection onto the x-axis and has kernel the y-axis, so composing this with
B = [ 0 1 ]
    [ 1 0 ],
which is reflection about y = x, gives
BA = [ 0 0 ]
     [ 1 0 ],
which has kernel and image both equal to the y-axis.
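If you like, you can sanity-check this construction numerically. Here is a minimal NumPy sketch (NumPy is not something we used in the course, just a convenient calculator) confirming that BA has the y-axis as both its image and its kernel.

```python
import numpy as np

A = np.array([[1, 0], [0, 0]])    # projection onto the x-axis
B = np.array([[0, 1], [1, 0]])    # reflection about the line y = x
BA = B @ A

print(BA @ np.array([1, 0]))      # [0 1]: the image contains (0, 1), the y-axis direction
print(BA @ np.array([0, 1]))      # [0 0]: (0, 1) is in the kernel
print(np.linalg.matrix_rank(BA))  # 1: image and kernel are both one-dimensional
```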

10. T F The set of all even degree polynomials (including 0) is a vector space.

False. It’s not closed under addition, e.g., let p(x) = x2 + x + 1 and q(x) = −x2 . Then p(x) + q(x) =
x + 1.

11. T F If S = {v1 , . . . , vn } ⊆ Rn and span(S) = Rn then S is a basis for Rn .

True. Since the size of S is the dimension of Rn , this means S is a minimal spanning set, which is
equivalent to being a basis.

12. T F A is invertible if and only if det(A) ≠ 0.

True.

13. T F If A is diagonalizable, then A is invertible.


 
False. For example,
[ 1 0 ]
[ 0 0 ]
is diagonalizable (already diagonal), but not invertible. Similarly, being invertible does not imply being diagonalizable, e.g., the examples in #3.

14. T F If rank(A) = n, then A is invertible.

True, by Rank-Nullity.

15. T F If Av = cv for some c ∈ R, then v is an eigenvector for A.

False. It would be true with the added condition that v ≠ 0.

2 Short Answer

16. State the definition for a subset S = {v1 , . . . , vk } of a vector space to be linearly independent.

17. State the definition of a basis for a vector space V .

18. State the definition of an eigenvalue and an eigenvector for an n × n matrix A.

19. State the Rank–Nullity Theorem.

See text for above answers.

20. State three things linear algebra has applications to.

This is a bit of a trick question, but in a good way, since linear algebra is so fundamental, pretty much
any 3 fields you write down will be okay. Some things we mentioned in class are engineering (solving
systems of equations), physics (linear motions, solving systems of equations), computer graphics (ani-
mating rotations), robotics (computing robot joint rotations), dynamical systems (population models),
search engines (Google pagerank), sports rankings.
Note that the dynamical systems applications include applications to almost any field—e.g., chemical
reactions, economic changes, etc.

21. Find a linear transformation A : R2 → R3 whose image is the plane x + 2y + 3z = 0.


   
Find a basis for the plane, e.g.,
[  3 ]       [  0 ]
[  0 ]  and  [  3 ],
[ −1 ]       [ −2 ]
and take
A = [  3  0 ]
    [  0  3 ]
    [ −1 −2 ]
This works because the image is the span of the columns, and the matrix is 3 × 2.
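As a quick sanity check of this answer (using NumPy purely as a calculator), both columns of A satisfy x + 2y + 3z = 0 and are linearly independent, so the image really is the whole plane:

```python
import numpy as np

A = np.array([[ 3,  0],
              [ 0,  3],
              [-1, -2]])
normal = np.array([1, 2, 3])      # normal vector of the plane x + 2y + 3z = 0

print(normal @ A)                 # [0 0]: each column lies on the plane
print(np.linalg.matrix_rank(A))   # 2: the columns are independent
```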

22. What is the geometric significance of det(A) for a 2 × 2 matrix A?

A scales areas by |det(A)| (a negative determinant means A also reverses orientation). Similarly, if A is 3 × 3, then A scales volumes by |det(A)|.

23. Why might you want to exponentiate a matrix?

One possible answer is: to analyze a dynamical system.


       
24. If A is a 2 × 2 matrix such that
A [ 1 ] = 4 [ 1 ]   and   A [  3 ] = 4 [  3 ],
  [ 2 ]     [ 2 ]           [ −1 ]     [ −1 ]
what is A?

A acts as multiplication by 4 on two linearly independent vectors, hence on all of R2, so A = 4I.

25. Is { (1, 2, −2), (1, −2, 3), (1, −1, 0), (−1, 1, −1) } a basis for R3?

No. This set has 4 elements, but the dimension of R3 is 3.

26. Give an example of a 3 × 3 matrix with only 2 eigenvalues which is not diagonalizable.
 
[ 1 1 0 ]
[ 0 1 0 ]
[ 0 0 2 ]
This has two eigenvalues, 1 and 2, but you can check it does not have a basis of eigenvectors. The idea here was to use the example from #3 in a 3 × 3 setting. Also note that if D is a diagonal matrix, the diagonal entries are just the eigenvalues. More generally, if A is an upper triangular matrix, i.e.,
A = [ a b c ]
    [ 0 d e ]
    [ 0 0 f ]
then the eigenvalues are just the diagonal entries a, d, f. If a, d, f are all distinct, such an A is automatically diagonalizable; when some diagonal entries repeat, diagonalizability depends on b, c, e (e.g., if a = d = f, then A is diagonalizable if and only if b, c, e are all 0). A similar statement is true for lower triangular matrices, and these statements hold for n × n matrices in general.
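If you want to verify the claim about this example numerically, here is a small NumPy sketch; it shows the eigenvalues are 1, 1, 2, but the eigenvalue 1 has only a one-dimensional eigenspace, so there is no basis of eigenvectors.

```python
import numpy as np

A = np.array([[1, 1, 0],
              [0, 1, 0],
              [0, 0, 2]])

print(np.linalg.eigvals(A))                      # eigenvalues 1, 1, 2 (in some order)
print(3 - np.linalg.matrix_rank(A - np.eye(3)))  # 1: dimension of the eigenspace for lambda = 1
```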

27. Give an example of a 3 × 3 matrix with 2 eigenvalues which is diagonalizable.

This is easier because we can just take a diagonal matrix:
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 2 ]

3 Problems
28. Find all solutions to the following system of equations:

−2x + y + z = 1
x + z = 0
x + y − 2z = −1.

We may write this system as an augmented matrix
[  1 0  1 |  0 ]
[  1 1 −2 | −1 ]
[ −2 1  1 |  1 ]
where we have reordered the equations to make the reduction simpler. This reduces to
[ 1 0 0 | −1/3 ]
[ 0 1 0 |   0  ]
[ 0 0 1 |  1/3 ]
which means the system has precisely one solution: (x, y, z) = (−1/3, 0, 1/3).
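If you want to double-check the arithmetic, here is a one-line verification (using NumPy as a calculator); it solves the original, unreordered system directly.

```python
import numpy as np

coeffs = np.array([[-2, 1,  1],
                   [ 1, 0,  1],
                   [ 1, 1, -2]])
rhs = np.array([1, 0, -1])

print(np.linalg.solve(coeffs, rhs))   # [-0.333..., 0, 0.333...], i.e. (-1/3, 0, 1/3)
```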

29. Is {(x, y, z) : 2x − 3y = 1 + z} a subspace of R3? Justify your answer.

No, for many reasons. The most obvious one is that it does not contain the origin (i.e., the zero vector).
     
 1 1 −1 
30. (i) Show T =  1  , −1 ,  1  is a basis for R3 .
−1 1 1
 
 
1
(ii) If v =  1 , find [v]T .
−1
 
1
(iii) If [v]T =  1 , find v.
−1
(iv) If S is the standard basis, find the transition matrices PS←T and PT ←S .

(i) It has the right number of elements, so just check they are linearly independent, e.g., reduce
[  1  1 −1 | 0 ]
[  1 −1  1 | 0 ]
[ −1  1  1 | 0 ]
to get
[ 1 0 0 | 0 ]
[ 0 1 0 | 0 ]
[ 0 0 1 | 0 ]

(ii) Since v is the first element in the basis T, we have [v]_T = (1, 0, 0).

(iii) Here v is the sum of the first two vectors in T minus the third, i.e., v = (3, −1, −1).

(iv) The columns of P_{S←T} are the vectors of T written in standard coordinates, so
P_{S←T} = [  1  1 −1 ]
          [  1 −1  1 ]
          [ −1  1  1 ]
Since P_{T←S} = P_{S←T}^{-1}, we reduce
[  1  1 −1 | 1 0 0 ]
[  1 −1  1 | 0 1 0 ]
[ −1  1  1 | 0 0 1 ]
to get
[ 1 0 0 | 1/2 1/2  0  ]
[ 0 1 0 | 1/2  0  1/2 ]
[ 0 0 1 |  0  1/2 1/2 ]
Thus
P_{T←S} = [ 1/2 1/2  0  ]
          [ 1/2  0  1/2 ]
          [  0  1/2 1/2 ]
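Here is a short NumPy sketch checking parts (ii)-(iv): the columns of P_{S←T} are the vectors of T, its inverse is P_{T←S}, and P_{T←S} converts standard coordinates into T-coordinates.

```python
import numpy as np

P_S_from_T = np.array([[ 1,  1, -1],
                       [ 1, -1,  1],
                       [-1,  1,  1]])       # columns are the vectors of T
P_T_from_S = np.linalg.inv(P_S_from_T)

print(P_T_from_S)                           # the matrix of 1/2's from part (iv)
print(P_T_from_S @ np.array([1, 1, -1]))    # (1, 0, 0): part (ii), [v]_T for v = (1, 1, -1)
print(P_S_from_T @ np.array([1, 1, -1]))    # (3, -1, -1): part (iii)
```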
 
31. Let
A = [  1 −1  1  1 ]
    [ −1  1  1 −1 ]
    [  1 −1 −1  1 ]
    [ −1  1 −1 −1 ]
(i) Find a basis for the image of A.
(ii) Find a basis for the kernel of A.
(iii) Find rank(A) and nullity(A).

Row reducing A gives
[ 1 −1 0 1 ]
[ 0  0 1 0 ]
[ 0  0 0 0 ]
[ 0  0 0 0 ]
This means
(iii) rank(A) = 2 and nullity(A) = 2.
(i) The first and third columns of A are linearly independent (they correspond to the leading 1's), so a basis for the image of A is
{ (1, −1, 1, −1), (1, 1, −1, −1) }.
(ii) Solving Av = 0 from our row reduction above gives (x, y, z, w) with y, w free, z = 0, and x − y + w = 0, so the kernel is { (y − w, y, 0, w) : y, w ∈ R }, and we can take a basis to be
{ (1, 1, 0, 0), (−1, 0, 0, 1) }.
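To double-check, here is a small NumPy computation confirming the rank and that the proposed kernel basis vectors are indeed sent to 0 by A.

```python
import numpy as np

A = np.array([[ 1, -1,  1,  1],
              [-1,  1,  1, -1],
              [ 1, -1, -1,  1],
              [-1,  1, -1, -1]])

print(np.linalg.matrix_rank(A))             # 2, so nullity = 4 - 2 = 2
for v in ([1, 1, 0, 0], [-1, 0, 0, 1]):     # proposed kernel basis
    print(A @ np.array(v))                  # both print [0 0 0 0]
```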
 

   
32. Let
A = [ 1 2 3 4 ]
    [ 2 4 1 3 ]
    [ 4 2 3 1 ]
    [ 3 1 4 2 ]
Show v = (1, 1, 1, 1) is an eigenvector for A.

Just compute the matrix product Av and see that you get 10v, so v is an eigenvector with eigenvalue 10.
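For instance, using NumPy as a calculator, the computation is one line:

```python
import numpy as np

A = np.array([[1, 2, 3, 4],
              [2, 4, 1, 3],
              [4, 2, 3, 1],
              [3, 1, 4, 2]])
v = np.ones(4)

print(A @ v)   # [10. 10. 10. 10.] = 10v, so v is an eigenvector with eigenvalue 10
```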
 
33. Let
A = [  0  0 −2 ]
    [  0 −2  0 ]
    [ −2  0  3 ]
(i) Find the eigenvectors and eigenvalues of A.
(ii) Diagonalize A, i.e., write A = PDP^{-1} for some diagonal matrix D.

See p. 465. The only thing that's not done there is explicitly writing A = PDP^{-1}. Here we should have
P = P_{S←T} = [ 0 −1 2 ]
              [ 1  0 0 ]
              [ 0  2 1 ]
with D = diag(−2, 4, −1) (the eigenvalues, in the order matching the columns of P), and we compute
P^{-1} = (1/5) [  0 5 0 ]
               [ −1 0 2 ]
               [  2 0 1 ]
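A quick NumPy check that this diagonalization (with the P^{-1} written above) really reproduces A:

```python
import numpy as np

A = np.array([[ 0,  0, -2],
              [ 0, -2,  0],
              [-2,  0,  3]])
P = np.array([[0, -1, 2],
              [1,  0, 0],
              [0,  2, 1]])
D = np.diag([-2, 4, -1])

print(np.allclose(P @ D @ np.linalg.inv(P), A))   # True: A = P D P^{-1}
print(np.linalg.inv(P) * 5)                       # the matrix [[0, 5, 0], [-1, 0, 2], [2, 0, 1]]
```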

34. Construct a matrix A which acts as reflection about the plane x + y + z = 0 in R3 .


   
A basis for the plane is {v1, v2} where v1 = (1, 0, −1) and v2 = (0, 1, −1). An orthogonal vector to the plane is v3 = (1, 1, 1).
(At this point, I realize we didn't cover orthogonal vectors to planes in R3 in the class, so if I ask a problem like this on the final, it will be a reflection in R2 or I will tell you the orthogonal vector. But this is covered in Calc 3/4 and Chapter 5 of our text; you need the dot product of v3 with v1 and v2 to be 0, and it is. In general, an orthogonal vector for the plane ax + by + cz = 0 is just (a, b, c). However, given this orthogonal vector, you should be able to do everything else.)

Let S be the standard basis for R3, and T = {v1, v2, v3}. Then
A = [A]_S = P_{S←T} [A]_T P_{T←S} = P [A]_T P^{-1}
where the reflection fixes v1 and v2 and sends v3 to −v3, so
[A]_T = [ 1 0  0 ]
        [ 0 1  0 ]
        [ 0 0 −1 ]
and
P = P_{S←T} = [  1  0 1 ]
              [  0  1 1 ]
              [ −1 −1 1 ]
and
P^{-1} = (1/3) [  2 −1 −1 ]
               [ −1  2 −1 ]
               [  1  1  1 ]
So we compute
A = (1/3) [  1 −2 −2 ]
          [ −2  1 −2 ]
          [ −2 −2  1 ]
You can double check your matrix A by verifying Av1 = v1, Av2 = v2 and Av3 = −v3. I recommend doing this on the exam.
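The recommended double check can also be done numerically; this NumPy sketch rebuilds A from P and [A]_T and then verifies Av1 = v1 and Av3 = −v3.

```python
import numpy as np

P = np.array([[ 1,  0, 1],
              [ 0,  1, 1],
              [-1, -1, 1]])        # columns are v1, v2, v3
D = np.diag([1, 1, -1])            # [A]_T: fix the plane, flip the normal vector
A = P @ D @ np.linalg.inv(P)

print(A * 3)                       # the matrix [[1, -2, -2], [-2, 1, -2], [-2, -2, 1]]
print(A @ np.array([1, 0, -1]))    # v1 is fixed
print(A @ np.array([1, 1, 1]))     # v3 is sent to -v3
```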

35. Suppose you have a (discrete) dynamical system given by

x(t + 1) = x(t) + 2y(t)
y(t + 1) = 4x(t) + 3y(t),

with initial conditions x(0) = 2, y(0) = 1. Find explicit formulas for x(t) and y(t).

Rewrite this as
[ x(t + 1) ]   [ 1 2 ] [ x(t) ]
[ y(t + 1) ] = [ 4 3 ] [ y(t) ]
The eigenvalues for
A = [ 1 2 ]
    [ 4 3 ]
are λ1 = −1 and λ2 = 5, with eigenvectors {cv1 : c ≠ 0} and {cv2 : c ≠ 0}, where v1 = (1, −1) and v2 = (1, 2). Let S be the standard basis and T = {v1, v2}. Then
P = P_{S←T} = [  1 1 ]
              [ −1 2 ]
and
P^{-1} = (1/3) [ 2 −1 ]
               [ 1  1 ]
Then A = PDP^{-1} where
D = [ −1 0 ]
    [  0 5 ]
so
[ x(t) ]       [ x(0) ]                [ 2 ]
[ y(t) ] = A^t [ y(0) ] = P D^t P^{-1} [ 1 ]

         = (1/3) [  1 1 ] [ (−1)^t  0  ] [ 2 −1 ] [ 2 ]   [ 5^t + (−1)^t   ]
                 [ −1 2 ] [   0    5^t ] [ 1  1 ] [ 1 ] = [ 2·5^t − (−1)^t ]

That is, x(t) = 5^t + (−1)^t and y(t) = 2·5^t − (−1)^t.
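To check the closed form, this short Python sketch iterates the recurrence directly and compares it with x(t) = 5^t + (−1)^t and y(t) = 2·5^t − (−1)^t for the first few values of t.

```python
import numpy as np

A = np.array([[1, 2],
              [4, 3]])
state = np.array([2, 1])                   # (x(0), y(0))

for t in range(6):
    assert state[0] == 5**t + (-1)**t      # x(t)
    assert state[1] == 2 * 5**t - (-1)**t  # y(t)
    state = A @ state                      # advance the recurrence one step
print("closed form matches the recurrence for t = 0, ..., 5")
```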

36. Suppose v is an eigenvector for an n × n matrix A with eigenvalue λ.
(i) Show cv is also an eigenvector with eigenvalue λ for any c ≠ 0.
(ii) Show v is also an eigenvector for A^2 with eigenvalue λ^2.

Suggestion: first write the definition.


This means v ≠ 0 and Av = λv.
(i) If c ≠ 0, then cv ≠ 0, and we see A(cv) = c(Av) = c(λv) = λ(cv). So, by definition, cv is also an eigenvector with eigenvalue λ.
(ii) Note A^2 v = A(Av) = A(λv) = λ(Av) = λ(λv) = λ^2 v. Again, by the definition, this means v is an eigenvector with eigenvalue λ^2 for A^2.
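The proof above is completely general, but here is a tiny numerical illustration with one concrete matrix and eigenvector (chosen only for this example), checking both (i) and (ii):

```python
import numpy as np

A = np.array([[2, 1],
              [0, 3]])
v = np.array([1, 1])       # eigenvector of A with eigenvalue 3, since Av = (3, 3)
lam = 3

print(np.allclose(A @ (5 * v), lam * (5 * v)))   # True: (i) with c = 5
print(np.allclose(A @ (A @ v), lam**2 * v))      # True: (ii) A^2 v = 9 v
```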
