Sample Proofs: Partial Solutions
This document contains a number of theorems, the proofs of which are at a difficulty level
where they could be put on a test or exam. This should not be taken as an indication that
the only theorems on tests or exams will be taken from this document, nor that every (or
any) theorem in this document need be tested. This document is for information purposes
only. Questions marked with a * are a little harder than the others, and the more stars, the
harder the question, but all are testable. More solutions will be provided as time allows.
1. Prove that for any set of vectors S = {v1 , . . . , vn } in a vector space V , span(S) is a
subspace of V .
Solution: Let u = c1 v1 + c2 v2 + . . . + cn vn and w = k1 v1 + k2 v2 + . . . + kn vn be any two vectors in span(S), and let k ∈ R. Note first that 0 = 0v1 + . . . + 0vn ∈ span(S), so span(S) is nonempty.
A1)
u + w = (c1 v1 + c2 v2 + . . . + cn vn ) + (k1 v1 + k2 v2 + . . . + kn vn )
= (c1 + k1 )v1 + (c2 + k2 )v2 + . . . + (cn + kn )vn ∈ span(S).
M1)
ku = k(c1 v1 + c2 v2 + . . . + cn vn )
= k(c1 v1 ) + k(c2 v2 ) + . . . + k(cn vn )
= (kc1 )v1 + (kc2 )v2 + . . . + (kcn )vn ∈ span(S).
Thus span(S) is nonempty and closed under both addition and scalar multiplication, so span(S) is a subspace of V .
2. Prove that if S is a linearly independent set of vectors, then S is a basis for span(S).
Solution: To be a basis for span(S), S must be linearly independent and must span the space. Certainly S spans span(S) (by the definition of span), and S is given to be linearly independent in the question. Thus S is a basis for span(S).
3. Show that if A is an m × n matrix, then the solution set V to the equation Ax = 0 is a
subspace of Rn .
Solution: First note that A0 = 0, so 0 ∈ V and V is nonempty.
A1) Let x1 , x2 ∈ V . Then x1 + x2 ∈ Rn , and
A(x1 + x2 ) = Ax1 + Ax2
= 0 + 0 = 0.
Thus x1 + x2 ∈ V .
M1) Let x1 ∈ V , k ∈ R. Then kx1 ∈ Rn , and
A(kx1 ) = k(Ax1 )
= k(0) = 0.
Thus kx1 ∈ V . Therefore V is a subspace of Rn .
4. Prove that any finite set of vectors containing the zero vector is linearly dependent.
Solution: Let S = {v1 , v2 , . . . , vn } be a finite set of vectors with, say, v1 = 0. Then the equation
c1 v1 + c2 v2 + . . . + cn vn = 0
has the solution c1 = 1, c2 = c3 = · · · = cn = 0, which is not all zeros, and thus the
set S is linearly dependent.
5. Prove that if S = {v1 , v2 , . . . , vn } is a basis for a vector space V , then every vector
v ∈ V can be expressed as a linear combination of the elements of S in exactly one way.
Solution: Since S spans V , every v ∈ V can be expressed in at least one such way. Suppose that v can be written in two ways:
v = c1 v1 + c2 v2 + . . . + cn vn
and
v = k1 v1 + k2 v2 + . . . + kn vn .
Then subtracting straight down we get
(c1 − k1 )v1 + (c2 − k2 )v2 + . . . + (cn − kn )vn = 0.
But since S is a basis, S is linearly independent, and thus the only way this could
happen is if c1 − k1 = 0, ..., cn − kn = 0. Thus c1 = k1 , . . . , cn = kn and so v can
only be written as a linear combination of the elements of S in one way.
∗
6. Given that {u, v, w} is a linearly independent set of vectors in some vector space V ,
prove that:
(a) the set {u, v} is linearly independent.
(b) the set {u, u + v} is linearly independent.
(c) the set {u + v, v + w} is linearly independent.
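A sketch of one possible argument for part (b), written in LaTeX and assuming only the definition of linear independence (part (a) follows because any subset of a linearly independent set is linearly independent, and part (c) is similar to (b) but uses the independence of all of {u, v, w}):
\begin{align*}
c_1 u + c_2 (u + v) = 0
\ &\Longrightarrow\ (c_1 + c_2)\, u + c_2\, v = 0 \\
&\Longrightarrow\ c_1 + c_2 = 0 \ \text{ and } \ c_2 = 0
\qquad (\text{since } \{u, v\} \text{ is independent, by part (a)}) \\
&\Longrightarrow\ c_1 = c_2 = 0,
\end{align*}
so {u, u + v} is linearly independent.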
∗
7. Let u, v ∈ R3 be nonzero vectors such that u • v = 0. Prove that {u, v} is a linearly independent set.
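One possible argument, sketched in LaTeX, takes the dot product of a dependence relation with u (the hypothesis that u and v are nonzero is essential here):
\begin{align*}
c_1 u + c_2 v = 0
\ &\Longrightarrow\ (c_1 u + c_2 v) \bullet u = 0 \bullet u = 0 \\
&\Longrightarrow\ c_1 (u \bullet u) + c_2 (v \bullet u) = 0 \\
&\Longrightarrow\ c_1 \|u\|^2 = 0 \qquad (\text{since } u \bullet v = 0) \\
&\Longrightarrow\ c_1 = 0 \qquad (\text{since } u \neq 0).
\end{align*}
Taking the dot product with v instead gives c2 = 0 the same way, so {u, v} is linearly independent.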
∗
8. Let V be a vector space. Prove that for every u ∈ V , 0 · u = 0.
0 · u = (0 + 0) · u (since 0 + 0 = 0)
= 0 · u ⊕ 0 · u. (axiom M 3)
By A5, there exists −(0 · u). Adding it to both sides,
0 · u ⊕ (−(0 · u)) = (0 · u ⊕ 0 · u) ⊕ (−(0 · u)),
and the left side is 0, while the right side is 0 · u ⊕ (0 · u ⊕ (−(0 · u))) = 0 · u ⊕ 0 = 0 · u. Therefore
0 · u = 0.
∗
9. Let V be a vector space. Prove that for every k ∈ R, k · 0 = 0.
k · 0 = k · (0 ⊕ 0) (by A4)
=k·0⊕k·0 (by M 2).
By A5, there exists −(k · 0). Then,
k · 0 ⊕ (−(k · 0)) = (k · 0 ⊕ k · 0) ⊕ (−(k · 0)),
so the left side is 0, but the right side is k · 0 ⊕ (k · 0 ⊕ (−(k · 0))) = k · 0 ⊕ 0 = k · 0. Therefore
k · 0 = 0.
∗
10. Let V be a vector space. Prove that for every u ∈ V , (−1) · u = −u.
Solution: Observe that
u ⊕ (−1) · u = 1 · u ⊕ (−1) · u (since 1 · u = u)
= (1 + (−1)) · u (by M 3)
= 0·u
= 0 (by question 8).
Thus (−1) · u is an additive inverse of u. Since additive inverses in a vector space are unique, −u = (−1) · u.
∗
11. Let V be a vector space. Prove that if for some k ∈ R and u ∈ V , k · u = 0, then
either k = 0, or u = 0.
Solution: Suppose k · u = 0 and k ̸= 0; we show u must be 0. Since k ̸= 0, the scalar 1/k exists, and
(1/k) · (k · u) = ((1/k)k) · u = 1 · u = u,
but
(1/k) · (k · u) = (1/k) · 0 (by assumption)
= 0 (by question 9).
Therefore u = 0.
∗
12. Prove that a set of vectors is linearly dependent if and only if at least one vector in
the set is a linear combination of the others.
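A sketch of the forward direction, in LaTeX, for a set {v1 , . . . , vn }: if some nontrivial dependence relation has ci ̸= 0, we can solve for vi:
\begin{align*}
c_1 v_1 + \cdots + c_n v_n = 0 \text{ with } c_i \neq 0
\ \Longrightarrow\
v_i = -\frac{c_1}{c_i}\, v_1 - \cdots - \frac{c_{i-1}}{c_i}\, v_{i-1}
- \frac{c_{i+1}}{c_i}\, v_{i+1} - \cdots - \frac{c_n}{c_i}\, v_n,
\end{align*}
so vi is a linear combination of the others. Conversely, if vi is a linear combination of the others, moving vi across the equals sign gives a dependence relation in which vi has coefficient −1 ̸= 0.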
∗
13. Let A be a m×n matrix. Prove that if both the set of rows of A and the set of columns
of A form linearly independent sets, then A must be square.
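A sketch of one approach, relying on the result of question 18 (any more than n vectors in Rn must be linearly dependent):
\begin{align*}
\text{rows independent: } & m \text{ vectors in } \mathbb{R}^n \ \Longrightarrow\ m \le n, \\
\text{columns independent: } & n \text{ vectors in } \mathbb{R}^m \ \Longrightarrow\ n \le m,
\end{align*}
so m = n and A is square.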
∗∗
14. Let V be the set of 2 × 2 matrices, together with the operation ⊕ defined for any 2 × 2
matrices A and B as
A ⊕ B = AB (the usual matrix multiplication),
and with the standard scalar multiplication for matrices.
(a) Show that the vector space axiom A4 holds.
(b) Prove that V is not a vector space.
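A sketch of both parts. For (a), the identity matrix plays the role of the zero vector; for (b), one axiom that fails is A5 (additive inverses), as the concrete matrix below shows:
\begin{align*}
\text{(a)}\quad & A \oplus I = AI = A \ \text{ and } \ I \oplus A = IA = A
\ \text{ for every } A, \text{ so } I \text{ acts as the zero vector.} \\
\text{(b)}\quad & \text{Let } A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.
\ \text{For every } B,\ \det(A \oplus B) = \det(AB) = \det(A)\det(B) = 0 \neq \det(I),
\end{align*}
so A ⊕ B ̸= I for every B: this A has no additive inverse, A5 fails, and V is not a vector space.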
∗∗
15. Let
V = {(a, b) ∈ R2 : a > 0, b > 0}
together with the operations defined as follows: for (a, b), (c, d) ∈ V , k ∈ R,
(a, b) ⊕ (c, d) = (ac, bd)
k · (a, b) = (ak , bk ).
(a) Show that the vector space axiom M 3 holds in this space.
(b) Does the axiom A4 hold in this space? If so, find the zero vector and prove it is the
zero vector. If not, show that there is no possible zero vector.
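A sketch of both parts, in LaTeX; for (a) the exponent laws do the work, and for (b) the zero vector turns out to be (1, 1):
\begin{align*}
\text{(a)}\quad (k + m) \cdot (a, b) &= (a^{k+m}, b^{k+m}) = (a^k a^m, b^k b^m)
= (a^k, b^k) \oplus (a^m, b^m) \\
&= k \cdot (a, b) \oplus m \cdot (a, b). \\
\text{(b)}\quad (a, b) \oplus (1, 1) &= (a \cdot 1, b \cdot 1) = (a, b) \quad \text{for all } (a, b) \in V,
\end{align*}
and (1, 1) ∈ V since both entries are positive, so A4 holds with zero vector (1, 1).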
∗∗
16. Let V be a vector space, and let W1 and W2 be subspaces of V . Prove that the set
U = {v : v ∈ W1 and v ∈ W2 }
(that is, the set of vectors in BOTH W1 and W2 ) is a subspace of V as well.
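A sketch using the standard subspace test, for u, v ∈ U and k ∈ R:
\begin{align*}
u, v \in U
&\ \Longrightarrow\ u, v \in W_1 \text{ and } u, v \in W_2 \\
&\ \Longrightarrow\ u + v \in W_1 \text{ and } u + v \in W_2
\qquad (W_1, W_2 \text{ are subspaces}) \\
&\ \Longrightarrow\ u + v \in U,
\end{align*}
and similarly ku ∈ W1 and ku ∈ W2 , so ku ∈ U . Also 0 ∈ W1 and 0 ∈ W2 , so 0 ∈ U and U is nonempty.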
∗∗
17. Let W be a subspace of a vector space V , and let v1 , v2 , v3 ∈ W . Prove then that
every linear combination of these vectors is also in W .
Solution: Let c1 v1 + c2 v2 + c3 v3 be an arbitrary linear combination of these vectors. Since W is a subspace, it is closed under scalar multiplication, so c1 v1 , c2 v2 , c3 v3 ∈ W . By closure under addition, c1 v1 + c2 v2 ∈ W , and then
(c1 v1 + c2 v2 ) + c3 v3 = c1 v1 + c2 v2 + c3 v3 ∈ W.
∗∗
18. Let S = {v1 , . . . , vr } be a set of vectors in Rn . Prove that if r > n, then S is linearly dependent.
Solution: Consider the vector equation
c1 v1 + . . . + cr vr = 0.
The coefficient matrix of this system has n rows and r columns. But r > n, so there are more unknowns than equations, and the system is guaranteed to have at least one free parameter. Since the system is homogeneous, it is consistent, and it therefore has infinitely many solutions. Thus there
is at least one solution for the ci ’s that is not all zero, and so the set S is linearly
dependent.
LINEAR TRANSFORMATION PROOFS
20. Prove that given two linear transformations T1 : U → V and T2 : V → W , the composi-
tion T2 ◦ T1 : U → W is also a linear transformation.
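A sketch in LaTeX, checking the two linearity conditions directly for u, v ∈ U and k ∈ R:
\begin{align*}
(T_2 \circ T_1)(u + v) &= T_2(T_1(u + v)) = T_2(T_1(u) + T_1(v)) \\
&= T_2(T_1(u)) + T_2(T_1(v)) = (T_2 \circ T_1)(u) + (T_2 \circ T_1)(v), \\
(T_2 \circ T_1)(k u) &= T_2(T_1(k u)) = T_2(k\, T_1(u)) = k\, T_2(T_1(u)) = k\, (T_2 \circ T_1)(u).
\end{align*}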
24. Prove that for any one-to-one linear transformation T : V → W , T −1 is also a one-to-one
linear transformation.
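A sketch of the additivity check for T −1 : T (V ) → V , writing w1 = T (v1 ) and w2 = T (v2 ); homogeneity is analogous, and T −1 is one-to-one since it has T itself as an inverse:
\begin{align*}
T(v_1 + v_2) = T(v_1) + T(v_2) = w_1 + w_2
\ \Longrightarrow\
T^{-1}(w_1 + w_2) = v_1 + v_2 = T^{-1}(w_1) + T^{-1}(w_2).
\end{align*}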
25. Prove that for any m × n matrix A, the transformation TA : Rn → Rm defined by
TA (v) = Av
is a linear transformation.
∗
26. If T : V → W is a linear transformation, then prove each of the following:
Solution (to one of the parts): Assume dim(V ) = dim(W ) and that T is one-to-one; we show T is onto. Suppose, for a contradiction, that
T (V ) ̸= W . Then it follows that rank(T ) = dim(T (V )) < dim(W ) = dim(V ). But then, by the
Dimension Theorem,
nullity(T ) = dim(V ) − rank(T ) > 0.
Then since nullity(T ) > 0, ker(T ) ̸= {0}, and thus there are two distinct vectors in V (the zero vector and any nonzero vector of ker(T )) that
map to the zero vector in W . Thus T is not 1:1, a contradiction.
∗∗
31. Let T1 : U → V and T2 : V → W be two linear transformations. Prove that if T2 ◦ T1
is one-to-one, then T1 must be one-to-one.
Solution: Let u, v ∈ U be such that T1 (u) = T1 (v). Applying T2 to both sides gives T2 (T1 (u)) = T2 (T1 (v)), and therefore,
(T2 ◦ T1 )(u) = (T2 ◦ T1 )(v).
Then, because (T2 ◦ T1 ) is one-to-one, it follows that u = v. Therefore T1 is one-to-
one.
∗∗
32. Let T : V → W be an onto linear transformation. Prove that if dim(V ) = dim(W ),
then T is an isomorphism.
Solution: Assume that dim(V ) = dim(W ). Then since T is onto, T (V ) = W , and
so rank(T ) = dim(W ) = dim(V ). But then by the Dimension Theorem,
nullity(T ) = dim(V ) − rank(T ) = dim(V ) − dim(V ) = 0.
Therefore ker(T ) = {0}, and therefore by the theorem above, we have that T is 1:1.
Therefore T is both 1:1 and onto, and is thus an isomorphism.
∗∗
33. Let T : V → W be a one-to-one linear transformation. Prove that T is an isomorphism
between V and T (V ).
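A sketch of one possible argument: T (V ) is a subspace of W (it is the range of a linear transformation), and the restriction T : V → T (V ) is the same map with a smaller codomain, so it remains linear and one-to-one. It is onto by construction:
\begin{align*}
w \in T(V) \ \Longleftrightarrow\ w = T(v) \ \text{ for some } v \in V.
\end{align*}
Thus T , viewed as a map from V to T (V ), is a one-to-one, onto linear transformation, i.e. an isomorphism between V and T (V ).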
∗∗
34. Let E be a fixed 2 × 2 elementary matrix.
(a) Does the formula T (A) = EA define a one-to-one linear operator on M2,2 ? Prove
or disprove.
(b) Does the formula T (A) = EA define an onto linear operator on M2,2 ? Prove or
disprove.
Solution: 1-to-1: Let A, B ∈ M2,2 be such that T (A) = T (B). Then EA = EB.
Since E is an elementary matrix, and all elementary matrices are invertible, E −1
exists. Multiplying both sides by E −1 we get E −1 EA = E −1 EB, and thus A = B.
Therefore T is 1:1.
Onto: Let A ∈ M2,2 . Then B = E −1 A is a matrix in M2,2 such that T (B) = EB = A.
Thus T is onto.
Thus T is 1:1 and onto (and is thus an isomorphism).
∗∗
35. Let B = {v1 , v2 , . . . , vn } be a basis for a vector space V , and let T : V → W be a
linear transformation. Show that if T (v1 ) = T (v2 ) = · · · = T (vn ) = 0W , then T is the
zero transformation (that is, for every v ∈ V , T (v) = 0W ).
Solution: Let v ∈ V . Since B is a basis for V , there exist scalars c1 , . . . , cn such that v = c1 v1 + . . . + cn vn . Then,
T (v) = T (c1 v1 + . . . + cn vn )
= T (c1 v1 ) + . . . + T (cn vn )
= c1 T (v1 ) + . . . + cn T (vn )
= c1 0W + . . . + cn 0W
= 0W .
∗∗∗
36. Let T1 : V → W and T2 : V → W be two linear transformations and let B =
{v1 , . . . , vn } be a basis for V . Prove that if for all i, 1 ≤ i ≤ n, T1 (vi ) = T2 (vi ), then
T1 = T2 (that is, for all v ∈ V , T1 (v) = T2 (v)).
Solution: Let v ∈ V . Since B is a basis for V , there exist scalars c1 , . . . , cn such that v = c1 v1 + . . . + cn vn . Then
T1 (v) = T1 (c1 v1 + . . . + cn vn )
= T1 (c1 v1 ) + . . . + T1 (cn vn )
= c1 T1 (v1 ) + . . . + cn T1 (vn )
= c1 T2 (v1 ) + . . . + cn T2 (vn )
= T2 (c1 v1 ) + . . . + T2 (cn vn )
= T2 (c1 v1 + . . . + cn vn )
= T2 (v).
Thus T1 = T2 .
EIGENVALUE/VECTOR AND INNER PRODUCT SPACE PROOFS
Prove that if λ is an eigenvalue of an n × n matrix A, then the set of all eigenvectors of A corresponding to λ, together with the zero vector, is a subspace of Rn .
Solution: Let E be the set of all eigenvectors corresponding to λ, together with the
zero vector. Let u, v ∈ E, k ∈ R.
A(u + v) = Au + Av
= λu + λv
= λ(u + v).
Therefore u + v is also in E.
A(ku) = kA(u)
= k(λu)
= λ(ku).
Thus ku ∈ E as well, and so E is a subspace of Rn .
Prove that in any inner product space V , for all u, v, w ∈ V ,
⟨u − v, w⟩ = ⟨u, w⟩ − ⟨v, w⟩.
42. Show that in any inner product space V , for all v ∈ V , ⟨v, 0⟩ = 0.
Solution: Let v ∈ V . Then
⟨v, 0⟩ = ⟨v, 0 · 0⟩ (since 0 · 0 = 0)
= 0⟨v, 0⟩ (by symmetry and homogeneity)
= 0.
43. Prove each of the following properties about inner product spaces: for all u, v, w in an
inner product space V , and all k ∈ R,
• ||u|| ≥ 0
• ||u|| = 0 if and only if u = 0
• ||ku|| = |k|||u||
• ||u + v|| ≤ ||u|| + ||v|| (Triangle Inequality)
• d(u, v) ≥ 0
• d(u, v) = 0 if and only if u = v
• d(u, v) = d(v, u)
• d(u, v) ≤ d(u, w) + d(w, v). (Triangle Inequality)
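No proofs are written out above; as one representative example, here is a LaTeX sketch of the Triangle Inequality for norms, assuming the Cauchy–Schwarz inequality ⟨u, v⟩ ≤ |⟨u, v⟩| ≤ ||u|| ||v||:
\begin{align*}
\|u + v\|^2 &= \langle u + v,\, u + v \rangle
= \|u\|^2 + 2\langle u, v \rangle + \|v\|^2 \\
&\le \|u\|^2 + 2\|u\|\,\|v\| + \|v\|^2
= (\|u\| + \|v\|)^2,
\end{align*}
and taking square roots gives ||u + v|| ≤ ||u|| + ||v||. The distance properties then follow by applying the norm properties to d(u, v) = ||u − v||.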
44. Prove that if u and v are orthogonal, then so are (1/||u||) u and (1/||v||) v.
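A sketch, assuming u, v ̸= 0 so that the norms are nonzero:
\begin{align*}
\left\langle \tfrac{1}{\|u\|}\, u,\ \tfrac{1}{\|v\|}\, v \right\rangle
= \tfrac{1}{\|u\|\,\|v\|}\, \langle u, v \rangle
= \tfrac{1}{\|u\|\,\|v\|} \cdot 0 = 0.
\end{align*}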
∗
45. Let A be an n × n matrix. Prove that A and AT have the same eigenvalues.
Solution: Since det(M T ) = det(M ) for every square matrix M ,
det(AT − λI) = det((A − λI)T ) = det(A − λI).
Thus A and AT have the same characteristic polynomial, and so must have the same
eigenvalues.
∗∗
46. Let A be an n×n matrix. Prove that A is invertible if and only if 0 is not an eigenvalue
of A.
Solution: I will solve this problem by proving the logically equivalent statement: A is not
invertible if and only if 0 is an eigenvalue of A.
(⇒): Assume A is not invertible. Then Ax = 0 has infinitely many solutions, and
thus at least one non-zero solution, say x0 . Then
Ax0 = 0 = 0x0
and thus 0 is an eigenvalue of A with eigenvector x0 .
(⇐): Assume 0 is an eigenvalue of A. Then det(A − 0I) = det(A) = 0. Thus A is
not invertible.
∗∗
47. Prove that if B = C −1 AC, then B and A have the same eigenvalues (HINT: Look at
the characteristic polynomials of B and A).
Solution: Suppose B = C −1 AC. Then
det(B − λI) = det(C −1 AC − λC −1 C)
= det(C −1 (A − λI)C)
= det(C −1 ) det(A − λI) det(C)
= det(A − λI) (since det(C −1 ) det(C) = 1).
Thus A and B have the same characteristic polynomial, and so must have the same
eigenvalues.
∗∗
48. Let v be a nonzero vector in an inner product space V . Let W be the set of all vectors
in V that are orthogonal to v. Prove that W is a subspace of V .
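A sketch using the subspace test, where W = {w ∈ V : ⟨w, v⟩ = 0}:
\begin{align*}
\langle w_1 + w_2,\, v \rangle &= \langle w_1, v \rangle + \langle w_2, v \rangle = 0 + 0 = 0, \\
\langle k w_1,\, v \rangle &= k \langle w_1, v \rangle = k \cdot 0 = 0,
\end{align*}
so W is closed under addition and scalar multiplication; and since ⟨0, v⟩ = 0, we have 0 ∈ W , so W is nonempty.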
∗∗
49. Prove that for any two vectors u and v in an inner product space, if
||u|| = ||v||,
then u + v is orthogonal to u − v.
⟨u + v, u − v⟩ = ⟨u, u − v⟩ + ⟨v, u − v⟩
= ⟨u − v, u⟩ + ⟨u − v, v⟩
= ⟨u, u⟩ − ⟨v, u⟩ + ⟨u, v⟩ − ⟨v, v⟩
= ||u||2 − ⟨u, v⟩ + ⟨u, v⟩ − ||v||2
= ||u||2 − ||v||2
= ||u||2 − ||u||2
= 0.
Thus u + v is orthogonal to u − v.
∗∗
50. Let B = {v1 , v2 , . . . , vr } be a basis for an inner product space V . Show that the zero
vector is the only vector in V that is orthogonal to all of the basis vectors.
Solution: We have a property (proved elsewhere) that for all v ∈ V (and thus for
all v ∈ B), ⟨0, v⟩ = 0. But this question is in some sense the opposite of this. Let
w ∈ V be a vector such that for all v ∈ B, ⟨w, v⟩ = 0. Then we must prove that w
must have been the zero vector.
Since B is a basis for V , we know that there exist k1 , . . . , kr ∈ R such that w =
k1 v1 + k2 v2 + · · · + kr vr .
Then look at ⟨w, w⟩:
⟨w, w⟩ = ⟨w, k1 v1 + k2 v2 + · · · + kr vr ⟩
= ⟨w, k1 v1 ⟩ + ⟨w, k2 v2 ⟩ + · · · + ⟨w, kr vr ⟩
= k1 ⟨w, v1 ⟩ + k2 ⟨w, v2 ⟩ + · · · + kr ⟨w, vr ⟩
= k1 (0) + k2 (0) + · · · + kr (0)
= 0.
Since ⟨w, w⟩ = 0, positive definiteness of the inner product forces w = 0, as required.
∗∗∗
51. Let S = {v1 , v2 , . . . , vn } be an orthonormal basis for an inner product space V , and
let u be any vector in V . Prove that
u = ⟨u, v1 ⟩v1 + ⟨u, v2 ⟩v2 + · · · + ⟨u, vn ⟩vn .
Solution: Let u ∈ V . Since S is a basis, there exist k1 , . . . , kn such that u =
k1 v1 + · · · + kn vn . Then
⟨u, vi ⟩ = ⟨k1 v1 + · · · + kn vn , vi ⟩
= ⟨k1 v1 , vi ⟩ + · · · + ⟨kn vn , vi ⟩
= k1 ⟨v1 , vi ⟩ + · · · + kn ⟨vn , vi ⟩
= ki ⟨vi , vi ⟩ (since S is orthogonal)
= ki ||vi ||2
= ki (since S is orthonormal).
Thus ki = ⟨u, vi ⟩ for each i, and so u = ⟨u, v1 ⟩v1 + · · · + ⟨u, vn ⟩vn .
∗∗∗
52. An n × n matrix A is said to be nilpotent if for some k ∈ Z+ , Ak is a zero matrix.
Prove that if A is nilpotent, then 0 is the only eigenvalue of A.
Solution: Let λ be any eigenvalue of A, with corresponding eigenvector x. Then
Ax = λx,
A2 x = A(λx) = λ(Ax) = λ2 x,
and continuing in this way (applying A repeatedly),
Ak x = λk x.
But Ak is a zero matrix, and so the left hand side is the zero vector. Thus λk x is the
zero vector. However x being an eigenvector forces x ̸= 0, and thus λk = 0, and so
λ = 0. Moreover, 0 actually is an eigenvalue: det(A)k = det(Ak ) = 0 forces det(A) = 0, so
Ax = 0 has a nontrivial solution. Thus 0 is the only eigenvalue of A.
∗∗∗
53. Let W be any subspace of an inner product space V , and let B = {b1 , . . . , bn } be an
orthonormal basis for W . Let v ∈ V , and let the vector v0 be defined as
v0 = ⟨v, b1 ⟩b1 + · · · + ⟨v, bn ⟩bn = ∑_{i=1}^{n} ⟨v, bi ⟩bi .
Prove that v − v0 is orthogonal to every vector in W .
Solution: We need to check that the inner product of v − v0 with every vector in W is 0.
Let w ∈ W . Then since B is an orthonormal basis for W , w = ∑_{j=1}^{n} ⟨w, bj ⟩bj . Then
⟨v0 , w⟩ = ⟨∑_{i=1}^{n} ⟨v, bi ⟩bi , w⟩ (by def’n of v0 )
= ∑_{i=1}^{n} ⟨⟨v, bi ⟩bi , w⟩ (by additivity)
= ∑_{i=1}^{n} ⟨v, bi ⟩⟨bi , w⟩ (by homogeneity)
= ∑_{i=1}^{n} ⟨v, bi ⟩⟨bi , ∑_{j=1}^{n} ⟨w, bj ⟩bj ⟩
= ∑_{i=1}^{n} ⟨v, bi ⟩ ∑_{j=1}^{n} ⟨bi , ⟨w, bj ⟩bj ⟩ (by additivity)
= ∑_{i=1}^{n} ⟨v, bi ⟩ ∑_{j=1}^{n} ⟨w, bj ⟩⟨bi , bj ⟩ (by homogeneity)
= ∑_{i=1}^{n} ⟨v, bi ⟩⟨w, bi ⟩⟨bi , bi ⟩ (since i ̸= j =⇒ ⟨bi , bj ⟩ = 0)
= ∑_{i=1}^{n} ⟨v, bi ⟩⟨w, bi ⟩||bi ||2
= ∑_{i=1}^{n} ⟨v, bi ⟩⟨w, bi ⟩ (since each bi is a unit vector)
= ∑_{i=1}^{n} ⟨v, ⟨w, bi ⟩bi ⟩ (by homogeneity)
= ⟨v, ∑_{i=1}^{n} ⟨w, bi ⟩bi ⟩ (by additivity)
= ⟨v, w⟩.
Therefore
⟨v − v0 , w⟩ = ⟨v, w⟩ − ⟨v0 , w⟩
= ⟨v, w⟩ − ⟨v, w⟩
= 0,
and so v − v0 is orthogonal to every vector in W .