Vector Spaces
Vector spaces and linear transformations are the primary objects of study in linear algebra. A
vector space (which I’ll define below) consists of two sets: A set of objects called vectors and a field (the
scalars).
Definition. A vector space V over a field F is a set V equipped with an operation called (vector) addition,
which takes vectors u and v and produces another vector u + v.
There is also an operation called scalar multiplication, which takes an element a ∈ F and a vector
u ∈ V and produces a vector au ∈ V .
These operations satisfy the following axioms:
1. Vector addition is associative: If u, v, w ∈ V , then
(u + v) + w = u + (v + w).
2. Vector addition is commutative: If u, v ∈ V , then
u + v = v + u.
3. There is a zero vector 0 ∈ V which satisfies
0 + u = u = u + 0 for all u ∈ V.
4. Every u ∈ V has an additive inverse −u ∈ V which satisfies
u + (−u) = 0 = (−u) + u.
5. If a, b ∈ F and x ∈ V , then
a(bx) = (ab)x.
6. If a, b ∈ F and x ∈ V , then
(a + b)x = ax + bx.
7. If a ∈ F and x, y ∈ V , then
a(x + y) = ax + ay.
8. If x ∈ V , then
1 · x = x.
The elements of V are called vectors; the elements of F are called scalars. As usual, the use of words
like “multiplication” does not imply that the operations involved look like ordinary “multiplication”.
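If you like, you can spot-check the eight axioms numerically for a few sample vectors. Here's a small Python sketch (a check on examples, not a proof; the helper names add and scale are mine, not from any library):

```python
# A numerical spot-check of the vector space axioms for R^3,
# with vectors represented as Python tuples.

def add(u, v):
    """Componentwise vector addition."""
    return tuple(a + b for a, b in zip(u, v))

def scale(k, u):
    """Scalar multiplication."""
    return tuple(k * a for a in u)

u, v, w = (1.0, 2.0, 3.0), (4.0, -5.0, 6.0), (0.5, 0.0, -2.0)
zero = (0.0, 0.0, 0.0)
a, b = 3.0, -2.0

assert add(add(u, v), w) == add(u, add(v, w))            # axiom 1: associativity
assert add(u, v) == add(v, u)                            # axiom 2: commutativity
assert add(zero, u) == u                                 # axiom 3: zero vector
assert add(u, scale(-1, u)) == zero                      # axiom 4: additive inverse
assert scale(a, scale(b, u)) == scale(a * b, u)          # axiom 5
assert scale(a + b, u) == add(scale(a, u), scale(b, u))  # axiom 6
assert scale(a, add(u, v)) == add(scale(a, u), scale(a, v))  # axiom 7
assert scale(1, u) == u                                  # axiom 8
print("all eight axioms hold for these sample vectors")
```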
Example. If F is a field and n is a positive integer, let
F^n = {(a1 , . . . , an ) | a1 , . . . , an ∈ F }.
F^n is called the vector space of n-dimensional vectors over F . The elements a1 , . . . , an are called the
vector's components.
F^n becomes a vector space over F with componentwise operations.
You're probably familiar with addition and scalar multiplication for these vectors:
(a1 , . . . , an ) + (b1 , . . . , bn ) = (a1 + b1 , . . . , an + bn ) and k(a1 , . . . , an ) = (ka1 , . . . , kan ).
For example, if F = Z3 = {0, 1, 2}, the field of integers mod 3, then (Z3)^2 consists of the nine vectors
(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2).
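The componentwise operations work over any field. Here's a sketch in Python of the arithmetic in (Z3)^2, assuming the nine vectors above are meant to be the elements of that space (the helper names are mine):

```python
# Componentwise operations in F^2 where F = Z_3 (integers mod 3).

F = [0, 1, 2]  # the elements of Z_3

def add(u, v):
    """Add componentwise, reducing each component mod 3."""
    return tuple((a + b) % 3 for a, b in zip(u, v))

def scale(k, u):
    """Multiply each component by k, reducing mod 3."""
    return tuple((k * a) % 3 for a in u)

# (Z_3)^2 has exactly 3 * 3 = 9 vectors:
vectors = [(a, b) for a in F for b in F]
print(vectors)

# Sample computations:
print(add((1, 2), (2, 2)))   # (1+2 mod 3, 2+2 mod 3) = (0, 1)
print(scale(2, (1, 2)))      # (2*1 mod 3, 2*2 mod 3) = (2, 1)
```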
Example. The set R[x] of polynomials with real coefficients is a vector space over R, using the standard
operations on polynomials. For example,
(x^2 + 2x − 1) + (3x + 5) = x^2 + 5x + 4 and 2 · (x^2 + 2x − 1) = 2x^2 + 4x − 2.
Example. Let C[0, 1] denote the continuous real-valued functions defined on the interval 0 ≤ x ≤ 1. Add
functions pointwise:
(f + g)(x) = f (x) + g(x) for f, g ∈ C[0, 1].
From calculus, you know that the sum of continuous functions is a continuous function.
If a ∈ R and f ∈ C[0, 1], define scalar multiplication in pointwise fashion:
(af )(x) = a · f (x) for f ∈ C[0, 1].
A constant times a continuous function is continuous, so af ∈ C[0, 1], and C[0, 1] is a vector space
over R. (The product of continuous functions is also continuous, so the integral of f (x)g(x) over [0, 1] is
defined.) This example shows that abstract vectors do not have to look like little arrows!
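To see the pointwise operations in action, here's a short Python sketch with functions represented as callables (the helpers add and scale, and the sample functions, are my own choices):

```python
# Pointwise addition and scalar multiplication in C[0, 1],
# with functions represented as Python callables.
import math

def f(x):
    return math.sin(x)

def g(x):
    return x ** 2

def add(f, g):
    """(f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(a, f):
    """(af)(x) = a * f(x)."""
    return lambda x: a * f(x)

h = add(f, g)       # h(x) = sin(x) + x^2
k = scale(3.0, f)   # k(x) = 3 sin(x)

print(h(0.5))  # sin(0.5) + 0.25
print(k(0.5))  # 3 * sin(0.5)
```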
Proposition. Let V be a vector space over F , and let x ∈ V . Then:
(a) 0 · x = 0.
(b) (−1) · x = −x.
(c) −(−x) = x.
Proof. (a) Note that the “0” on the left is the zero scalar in F , whereas the “0” on the right is the zero
vector in V . Since 0 = 0 + 0 in F ,
0 · x = (0 + 0) · x = 0 · x + 0 · x.
Subtracting 0 · x from both sides, I get 0 = 0 · x.
(b) (The “−1” on the left is the scalar −1; the “−x” on the right is the “negative” of x ∈ V .) Using
part (a),
x + (−1) · x = 1 · x + (−1) · x = (1 + (−1)) · x = 0 · x = 0.
Since adding (−1) · x to x gives the zero vector, (−1) · x = −x.
(c)
−(−x) = (−1) · [(−1) · x] = [(−1) · (−1)]x = 1 · x = x.
Definition. Let V be a vector space over a field F , and let W ⊂ V , W ≠ ∅. W is a subspace of V if:
(a) If u, v ∈ W , then u + v ∈ W .
(b) If k ∈ F and u ∈ W , then ku ∈ W .
In other words, W is closed under addition of vectors and under scalar multiplication.
Lemma. Let W be a subspace of a vector space V .
(a) The zero vector is in W : 0 ∈ W .
(b) If w ∈ W , then −w ∈ W .
Proof. (a) Take any vector w ∈ W (which you can do because W is nonempty), and take 0 ∈ F . Since W
is closed under scalar multiplication, 0 · w ∈ W . But 0 · w = 0, so 0 ∈ W .
(b) Since w ∈ W and −1 ∈ F , (−1) · w = −w is in W .
Example. Consider the real vector space R^3, and let
W = {(x, y, 1) | x, y ∈ R}.
Is W a subspace of R^3?
If you’re trying to decide whether a set is a subspace, it’s always good to check whether it contains
the zero vector before you start checking the axioms. In this case, the set consists of 3-dimensional vectors
whose third components are equal to 1. Obviously, the zero vector (0, 0, 0) doesn’t satisfy this condition.
Since W doesn’t contain the zero vector, it’s not a subspace of R^3.
Example. Let
W = {(x, sin x) | x ∈ R} .
Prove or disprove: W is a subspace of R^2.
Note that (0, 0) = (0, sin 0) ∈ W . This is not one of the axioms for a subspace, but it’s a good thing to
check first because you can usually do it quickly. If the zero vector is not in a set, then the lemma above
shows that the set is not a subspace. In this case, the zero vector is in W , so the issue isn’t settled, and I’ll
try to check the subspace axioms.
First, I might try to check that the set is closed under sums. I take two vectors in W — say (x, sin x)
and (y, sin y). I add them:
(x, sin x) + (y, sin y) = (x + y, sin x + sin y).
The last vector isn’t in the right form — it would be if sin x + sin y were equal to sin(x + y). That doesn’t
sound right, so I suspect that W is not a subspace. I try to get a specific counterexample to contradict
closure under addition.
First,
(π/2, sin(π/2)) = (π/2, 1) ∈ W and (π, sin π) = (π, 0) ∈ W.
On the other hand,
(π/2, sin(π/2)) + (π, sin π) = (π/2, 1) + (π, 0) = (3π/2, 1) ∉ W.
For I have sin(3π/2) = −1 ≠ 1.
Since W is not closed under vector addition, it is not a subspace.
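You can confirm the counterexample numerically. This Python sketch just evaluates the two vectors and their sum:

```python
# Numerical check of the counterexample: (pi/2, 1) and (pi, 0) are both
# on the graph of sine, but their sum (3*pi/2, 1) is not, since
# sin(3*pi/2) = -1.
import math

u = (math.pi / 2, math.sin(math.pi / 2))  # (pi/2, 1)
v = (math.pi, math.sin(math.pi))          # (pi, 0), up to rounding

s = (u[0] + v[0], u[1] + v[1])            # (3*pi/2, 1), up to rounding
print(s[1], math.sin(s[0]))               # approximately 1 vs approximately -1
```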
Example. Let F be a field, and let A, B ∈ M (n, F ). Consider the following subset of F^n:
W = {v ∈ F^n | Av = Bv}.
Show that W is a subspace of F^n.
First, I’ll check closure under addition. Let u, v ∈ W , so Au = Bu and Av = Bv. Add the two
equations and use the fact that matrix multiplication distributes over vector addition:
Au + Av = Bu + Bv
A(u + v) = B(u + v)
The last equation shows that u + v ∈ W .
Warning: Don’t say “A(u + v) = B(u + v) ∈ W ” — it doesn’t make sense! “A(u + v) = B(u + v)” is
an equation that u + v satisfies; it can’t be an element of W , because elements of W are vectors.
Next, I’ll check closure under scalar multiplication. Let k ∈ F and let v ∈ W . Since v ∈ W , I have
Av = Bv.
Multiply both sides by k, then commute the matrices and the scalar:
k(Av) = k(Bv)
A(kv) = B(kv)
The last equation shows that kv ∈ W . Since W is closed under addition and scalar multiplication, W is
a subspace of F^n.
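Here's the closure argument carried out numerically for specific 2 × 2 matrices of my own choosing (the text works with general A, B ∈ M (n, F )):

```python
# Closure check for W = { v : Av = Bv }, using specific sample matrices.

def matvec(M, v):
    """Matrix-vector product, with M a list of rows."""
    return tuple(sum(M[i][j] * v[j] for j in range(len(v)))
                 for i in range(len(M)))

A = [[1, 0],
     [0, 2]]
B = [[1, 0],
     [0, 0]]

# For these matrices, Av = Bv forces 2*v[1] = 0, so W = { (t, 0) }.
u = (1, 0)
v = (4, 0)
assert matvec(A, u) == matvec(B, u)
assert matvec(A, v) == matvec(B, v)

# Closure under addition: A(u + v) = B(u + v).
s = (u[0] + v[0], u[1] + v[1])
assert matvec(A, s) == matvec(B, s)

# Closure under scalar multiplication: A(kv) = B(kv).
k = 7
kv = (k * v[0], k * v[1])
assert matvec(A, kv) == matvec(B, kv)
print("u + v and kv remain in W")
```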
Lemma. If A is an m × n matrix over the field F , the set of n-dimensional vectors x which satisfy
Ax = 0
is a subspace of F^n. (This subspace is called the null space of A.)
Proof. If Ax = 0 and Ay = 0, then
A(x + y) = Ax + Ay = 0 + 0 = 0.
If k ∈ F and Ax = 0, then
A(kx) = k(Ax) = k · 0 = 0.
Thus, the set of solutions is closed under addition and scalar multiplication (and it contains x = 0), so
it is a subspace.
Example. Consider the system of linear equations

[ 1  1  0  1 ] [ w ]   [ 0 ]
[ 0  0  1  3 ] [ x ] = [ 0 ]
               [ y ]
               [ z ]

Solving (say by row reduction), with parameters s and t for the free variables x and z, gives

w = −s − t, x = s, y = −3t, z = t.

Thus,

[ w ]   [ −s − t ]
[ x ] = [    s   ]
[ y ]   [  −3t   ]
[ z ]   [    t   ]
The Lemma says that the set of all vectors of this form constitutes a subspace of R^4.
For example, if you add two vectors of this form, you get another vector of this form:
[ −s − t ]   [ −s′ − t′ ]   [ −(s + s′) − (t + t′) ]
[    s   ] + [    s′    ] = [        s + s′        ]
[  −3t   ]   [   −3t′   ]   [      −3(t + t′)      ]
[    t   ]   [    t′    ]   [        t + t′        ]
You can check that the set is also closed under scalar multiplication.
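The computation is easy to confirm in Python. This sketch checks that vectors of the stated form solve the system, and that the sum of two of them has the same form:

```python
# Checking that vectors of the form (-s - t, s, -3t, t) solve Ax = 0
# for A = [[1, 1, 0, 1], [0, 0, 1, 3]], and that the form is closed
# under addition, as the Lemma predicts.

A = [[1, 1, 0, 1],
     [0, 0, 1, 3]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(4)) for i in range(2)]

def solution(s, t):
    return [-s - t, s, -3 * t, t]

x = solution(2, 5)
y = solution(-1, 3)
assert matvec(A, x) == [0, 0]
assert matvec(A, y) == [0, 0]

# The sum is the solution with parameters s + s', t + t':
z = [a + b for a, b in zip(x, y)]
assert z == solution(2 + (-1), 5 + 3)
assert matvec(A, z) == [0, 0]
print("sums of solutions are solutions")
```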
Definition. If v1 , v2 , . . . , vn are vectors in a vector space V , a linear combination of the v’s is a vector
k1 v1 + k2 v2 + · · · + kn vn ,
where k1 , . . . , kn are scalars.
Example. Suppose u and v are vectors in R^2. Then
(√2 − 17)u + (π^2/4)v
is a linear combination of u and v. u and v are themselves linear combinations
of u and v, as is the zero vector (why?).
In fact, if u and v don’t lie on a common line through the origin, it turns out that any vector in R^2 is
a linear combination of u and v.
On the other hand, there are vectors in R^2 which are not linear combinations of p = ⟨1, −2⟩ and
q = ⟨−2, 4⟩. Do you see how this pair is different from the first? (Notice that q = −2p.)
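Since q = −2p, every combination x·p + y·q equals (x − 2y)·p, so the combinations all lie on the line y = −2x. Here's a quick Python check of that (the helper comb and the sample coefficients are my own):

```python
# Every linear combination of p = (1, -2) and q = (-2, 4) is a multiple
# of p, since q = -2p. So a vector off the line y = -2x, such as (1, 0),
# is not a combination of p and q.

p = (1, -2)
q = (-2, 4)

def comb(x, y):
    """The linear combination x*p + y*q."""
    return (x * p[0] + y * q[0], x * p[1] + y * q[1])

# Any combination has second component = -2 * (first component):
for x, y in [(1, 0), (3, -2), (0.5, 7)]:
    u = comb(x, y)
    assert u[1] == -2 * u[0]

# (1, 0) fails that relation, so it is not a linear combination of p, q:
assert 0 != -2 * 1
print("every combination lies on the line y = -2x; (1, 0) does not")
```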
Definition. If S is a subset of a vector space V , the span ⟨S⟩ of S is the set of all linear combinations of
vectors in S.
Proposition. The span ⟨S⟩ of a nonempty subset S of V is a subspace of V .
Proof. Typical elements of the span have the form
j1 u1 + j2 u2 + · · · + jn un and k1 v1 + k2 v2 + · · · + km vm ,
where the j’s and k’s are scalars and the u’s and v’s are elements of S.
Take two elements of the span and add them:
(j1 u1 + j2 u2 + · · · + jn un ) + (k1 v1 + k2 v2 + · · · + km vm ) = j1 u1 + j2 u2 + · · · + jn un + k1 v1 + k2 v2 + · · · + km vm .
This humongous sum is an element of the span, because it’s a sum of vectors in S, each multiplied by
a scalar. Thus, the span is closed under taking sums.
Take an element of the span and multiply it by a scalar:
c(j1 u1 + j2 u2 + · · · + jn un ) = (cj1 )u1 + (cj2 )u2 + · · · + (cjn )un .
This is an element of the span, because it’s a sum of vectors in S, each multiplied by a scalar. Thus,
the span is closed under scalar multiplication.
Therefore, the span is a subspace.
Example. Show that the span of ⟨3, 1, 0⟩ and ⟨2, 1, 0⟩ in R^3 is
V = {⟨a, b, 0⟩ | a, b ∈ R} .
To show that two sets are equal, you need to show that each is contained in the other. To do this, take
a typical element of the first set and show that it’s in the second set. Then take a typical element of the
second set and show that it’s in the first set.
Let W be the span of ⟨3, 1, 0⟩ and ⟨2, 1, 0⟩ in R^3. A typical element of W is a linear combination of the
two vectors:
x · ⟨3, 1, 0⟩ + y · ⟨2, 1, 0⟩ = ⟨3x + 2y, x + y, 0⟩.
Since the sum is a vector of the form ⟨a, b, 0⟩ for a, b ∈ R, it is in V . This proves that W ⊂ V .
Now let ⟨a, b, 0⟩ ∈ V . I have to show that this vector is a linear combination of ⟨3, 1, 0⟩ and ⟨2, 1, 0⟩.
This means that I have to find real numbers x and y such that
3x + 2y = a, x + y = b.
This is a system of linear equations which you can solve by row reduction or matrix inversion (for
instance). The solution is
x = a − 2b, y = −a + 3b.
In other words,
(a − 2b) · ⟨3, 1, 0⟩ + (−a + 3b) · ⟨2, 1, 0⟩ = ⟨a, b, 0⟩.
Since ⟨a, b, 0⟩ is a linear combination of ⟨3, 1, 0⟩ and ⟨2, 1, 0⟩, it follows that ⟨a, b, 0⟩ ∈ W . This proves
that V ⊂ W .
that V ⊂ W .
Since W ⊂ V and V ⊂ W , I have W = V .
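The formulas x = a − 2b, y = −a + 3b are easy to verify by machine. This Python sketch checks, for a few sample values of a and b, that the combination really reconstructs ⟨a, b, 0⟩:

```python
# Verifying the formulas x = a - 2b, y = -a + 3b: they express
# (a, b, 0) as a combination of (3, 1, 0) and (2, 1, 0).

def comb(x, y):
    """The linear combination x*(3,1,0) + y*(2,1,0)."""
    u, v = (3, 1, 0), (2, 1, 0)
    return tuple(x * ui + y * vi for ui, vi in zip(u, v))

for a, b in [(5, 2), (-1, 4), (0, 0)]:
    x = a - 2 * b
    y = -a + 3 * b
    assert comb(x, y) == (a, b, 0)
print("every (a, b, 0) is in the span of (3, 1, 0) and (2, 1, 0)")
```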
© 2008 by Bruce Ikenaga