
2-5-2008

Vector Spaces
Vector spaces and linear transformations are the primary objects of study in linear algebra. A
vector space (which I'll define below) consists of two sets: a set of objects called vectors and a field (the
scalars).
Definition. A vector space V over a field F is a set V equipped with an operation called (vector) addition,
which takes vectors u and v and produces another vector u + v.
There is also an operation called scalar multiplication, which takes an element a ∈ F and a vector
u ∈ V and produces a vector au ∈ V .
These operations satisfy the following axioms:
1. Vector addition is associative: If u, v, w ∈ V , then

(u + v) + w = u + (v + w).

2. Vector addition is commutative: If u, v ∈ V , then

u + v = v + u.

3. There is a zero vector 0 which satisfies

0 + u = u = u + 0 for all u ∈ V.

4. For every vector u ∈ V , there is a vector −u ∈ V which satisfies

u + (−u) = 0 = (−u) + u.

5. If a, b ∈ F and x ∈ V , then
a(bx) = (ab)x.

6. If a, b ∈ F and x ∈ V , then
(a + b)x = ax + bx.

7. If a ∈ F and x, y ∈ V , then
a(x + y) = ax + ay.

8. If x ∈ V , then
1 · x = x.

The elements of V are called vectors; the elements of F are called scalars. As usual, the use of words
like “multiplication” does not imply that the operations involved look like ordinary “multiplication”.

Example. If F is a field, then Fⁿ denotes the set

Fⁿ = {(a1, ..., an) | a1, ..., an ∈ F}.

Fⁿ is called the vector space of n-dimensional vectors over F. The elements a1, ..., an are called the
vector's components.

Fⁿ becomes a vector space over F with the following operations:

(a1, ..., an) + (b1, ..., bn) = (a1 + b1, ..., an + bn).

p · (a1, ..., an) = (pa1, ..., pan), where p ∈ F.

It's easy to check that the axioms hold. For example, I'll check Axiom 6. Let p, q ∈ F, and let
(a1, ..., an) ∈ Fⁿ. Then

(p + q)(a1, ..., an) = ((p + q)a1, ..., (p + q)an)        Definition of scalar multiplication
                    = (pa1 + qa1, ..., pan + qan)         Field axiom: distributivity
                    = (pa1, ..., pan) + (qa1, ..., qan)   Definition of vector addition
                    = p(a1, ..., an) + q(a1, ..., an)     Definition of scalar multiplication
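As a quick sanity check (separate from the proof, which of course covers all cases), here is a short Python sketch that spot-checks Axiom 6 for one particular choice of scalars and a vector in R³; the variable names are my own:

```python
# Spot-check Axiom 6 in R^3 for specific integer scalars and a
# specific vector. A sanity check, not a proof.
p, q = 2, -3
a = (1, 4, -2)

lhs = tuple((p + q) * ai for ai in a)     # (p + q)a
rhs = tuple(p * ai + q * ai for ai in a)  # pa + qa
print(lhs == rhs)   # True
```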

As a specific example, R³ consists of 3-dimensional vectors with real components, like

(3, −2, π)  or  (1/2, 0, −1.234).

You’re probably familiar with addition and scalar multiplication for these vectors:

(1, −2, 4) + (4, 5, 2) = (1 + 4, −2 + 5, 4 + 2) = (5, 3, 6).

7 · (−2, 0, 3) = (7 · (−2), 7 · 0, 7 · 3) = (−14, 0, 21).


(Sometimes people write ⟨3, −2, π⟩, using angle brackets to distinguish vectors from points. I'll use angle
brackets when there's a danger of confusion.)
Z₃² consists of 2-dimensional vectors with components in Z₃. Since each of the two components can be
any element in {0, 1, 2}, there are 3 · 3 = 9 such vectors:

(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2).

Here are examples of vector addition and scalar multiplication in Z₃²:

(1, 2) + (1, 1) = (1 + 1, 2 + 1) = (2, 0).

2 · (2, 1) = (2 · 2, 2 · 1) = (1, 2).
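These computations in Z₃² are easy to mirror in code. The following Python sketch (the helper names are my own, not from the text) does componentwise arithmetic mod 3:

```python
# Sketch: verify the Z_3^2 examples by componentwise arithmetic mod 3.
MOD = 3

def vec_add(u, v):
    """Componentwise addition mod 3."""
    return tuple((a + b) % MOD for a, b in zip(u, v))

def scalar_mul(k, u):
    """Componentwise scalar multiplication mod 3."""
    return tuple((k * a) % MOD for a in u)

print(vec_add((1, 2), (1, 1)))   # (2, 0)
print(scalar_mul(2, (2, 1)))     # (1, 2)
```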

Example. The set R[x] of polynomials with real coefficients is a vector space over R, using the standard
operations on polynomials. For example,

(2x2 + 3x + 5) + (x3 + 7x − 11) = x3 + 2x2 + 10x − 6.

4 · (−3x2 + 10) = −12x2 + 40.
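One way to treat R[x] concretely is to represent a polynomial by its list of coefficients, constant term first. This Python sketch (the representation and helper names are my choices) reproduces the two computations above:

```python
# Sketch: a polynomial a0 + a1 x + a2 x^2 + ... is the list [a0, a1, a2, ...].

def poly_add(p, q):
    """Add polynomials by padding to equal length and adding coefficients."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_scale(k, p):
    """Multiply a polynomial by the scalar k."""
    return [k * a for a in p]

# (2x^2 + 3x + 5) + (x^3 + 7x - 11) = x^3 + 2x^2 + 10x - 6
print(poly_add([5, 3, 2], [-11, 7, 0, 1]))   # [-6, 10, 2, 1]
# 4 * (-3x^2 + 10) = -12x^2 + 40
print(poly_scale(4, [10, 0, -3]))            # [40, 0, -12]
```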


Unlike Rⁿ, R[x] is infinite dimensional (in a sense to be made more precise shortly). Intuitively, you
need an infinite set of polynomials, like

1, x, x², x³, ...
to “construct” all the elements of R[x].

Example. Let C[0, 1] denote the continuous real-valued functions defined on the interval 0 ≤ x ≤ 1. Add
functions pointwise:
(f + g)(x) = f (x) + g(x) for f, g ∈ C[0, 1].

From calculus, you know that the sum of continuous functions is a continuous function.
If a ∈ R and f ∈ C[0, 1], define scalar multiplication in pointwise fashion:

(af )(x) = a · f (x).

For example, if f(x) = x² and a = 3, then

(af)(x) = 3x².

These operations make C[0, 1] into an R-vector space.


Like R[x], C[0, 1] is infinite dimensional. However, its dimension is uncountably infinite, while R[x]
has countably infinite dimension over R.
You can also define a "dot product" for two vectors f, g ∈ C[0, 1]:

f · g = ∫₀¹ f(x) g(x) dx.

The product of continuous functions is continuous, so the integral of f(x)g(x) is defined. This example
shows that abstract vectors do not have to look like little arrows!
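The integral "dot product" can be approximated numerically. Here is a Python sketch using a midpoint Riemann sum (the function dot and the sample f, g are my own choices); for f(x) = x² and g(x) = x the exact value is ∫₀¹ x³ dx = 1/4:

```python
# Sketch: approximate the "dot product" on C[0,1] by a midpoint
# Riemann sum with n subintervals.

def dot(f, g, n=100000):
    h = 1.0 / n
    return sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(n)) * h

val = dot(lambda x: x * x, lambda x: x)
print(val)   # approximately 0.25
```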

Proposition. Let V be a vector space over a field F .

(a) 0 · x = 0 for all x ∈ V .


(b) (−1) · x = −x for all x ∈ V .

(c) −(−x) = x for all x ∈ V .

Proof. (a) Note that the “0” on the left is the zero scalar in F , whereas the “0” on the right is the zero
vector in V .
0 · x = (0 + 0) · x = 0 · x + 0 · x.
Subtracting 0 · x from both sides, I get 0 = 0 · x.

(b) (The “−1” on the left is the scalar −1; the “−x” on the right is the “negative” of x ∈ V .)

(−1) · x + x = (−1) · x + 1 · x = ((−1) + 1) · x = 0 · x = 0.

(c)
−(−x) = (−1) · [(−1) · x] = [(−1) · (−1)]x = 1 · x = x.
Definition. Let V be a vector space over a field F, and let W ⊂ V, W ≠ ∅. W is a subspace of V if:

(a) If u, v ∈ W , then u + v ∈ W .

(b) If k ∈ F and u ∈ W , then ku ∈ W .

In other words, W is closed under addition of vectors and under scalar multiplication.
Lemma. Let W be a subspace of a vector space V .

(a) The zero vector is in W .

(b) If w ∈ W , then −w ∈ W .

Proof. (a) Take any vector w ∈ W (which you can do because W is nonempty), and take 0 ∈ F . Since W
is closed under scalar multiplication, 0 · w ∈ W . But 0 · w = 0, so 0 ∈ W .

(b) Since w ∈ W and −1 ∈ F , (−1) · w = −w is in W .

Example. If V is a vector space over a field F , {0} and V are subspaces of V .

Example. Consider the real vector space R², the usual x-y plane. Then

W1 = {(x, 0) | x ∈ R}  and  W2 = {(0, y) | y ∈ R}

are subspaces of R². (These are just the x- and y-axes, of course.)


I’ll check that W1 is a subspace. First, I have to show that two elements of W1 add to an element of W1 .
An element of W1 is a pair with the second component 0. So here are two elements of W1 : (x1 , 0), (x2 , 0).
Add them:
(x1 , 0) + (x2 , 0) = (x1 + x2 , 0).
(x1 + x2 , 0) is in W1 , because its second component is 0. Thus, W1 is closed under sums.
Next, I have to show that W1 is closed under scalar multiplication. Take a scalar k ∈ R and a vector
(x, 0) ∈ W1 . Take their product:
k · (x, 0) = (kx, 0).
The product (kx, 0) is in W1 because its second component is 0. Therefore, W1 is closed under scalar
multiplication.
Thus, W1 is a subspace.
Notice that in doing the proof, I did not use specific vectors in W1 like (42, 0) or (−17, 0). I’m trying to
prove statements about arbitrary elements of W1 , so I use “variable” elements.
In general, the subspaces of R² are {0}, R², and lines passing through the origin. (Why can't a line
which doesn't pass through the origin be a subspace?)
In R³, the subspaces are {0}, R³, and lines or planes passing through the origin.
And so on.

Example. Prove or disprove: The following subset of R³ is a subspace:

W = {(x, y, 1) | x, y ∈ R}.

If you’re trying to decide whether a set is a subspace, it’s always good to check whether it contains
the zero vector before you start checking the axioms. In this case, the set consists of 3-dimensional vectors
whose third components are equal to 1. Obviously, the zero vector (0, 0, 0) doesn’t satisfy this condition.
Since W doesn't contain the zero vector, it's not a subspace of R³.

Example. Let
W = {(x, sin x) | x ∈ R} .
Prove or disprove: W is a subspace of R².
Note that (0, 0) = (0, sin 0) ∈ W . This is not one of the axioms for a subspace, but it’s a good thing to
check first because you can usually do it quickly. If the zero vector is not in a set, then the lemma above
shows that the set is not a subspace. In this case, the zero vector is in W , so the issue isn’t settled, and I’ll
try to check the subspace axioms.
First, I might try to check that the set is closed under sums. I take two vectors in W — say (x, sin x)
and (y, sin y). I add them:
(x, sin x) + (y, sin y) = (x + y, sin x + sin y).

The last vector isn't in the right form: it would be if sin x + sin y were equal to sin(x + y). That doesn't
sound right, so I suspect that W is not a subspace. I'll try to find a specific counterexample to contradict
closure under addition.
First,

(π/2, sin(π/2)) = (π/2, 1) ∈ W  and  (π, sin π) = (π, 0) ∈ W.

On the other hand,

(π/2, 1) + (π, 0) = (3π/2, 1) ∉ W,

for sin(3π/2) = −1 ≠ 1.
Since W is not closed under vector addition, it is not a subspace.
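The counterexample can be confirmed numerically. In this Python sketch (variable names mine), a pair is in W exactly when its second component equals the sine of its first; note that floating-point sines are only approximate:

```python
# Sketch: confirm that W = {(x, sin x)} is not closed under addition,
# using the counterexample from the text.
import math

u = (math.pi / 2, math.sin(math.pi / 2))   # (pi/2, 1)
v = (math.pi, math.sin(math.pi))           # (pi, 0), up to rounding
s = (u[0] + v[0], u[1] + v[1])             # (3*pi/2, 1)

# s is in W only if its second component is sin of its first:
in_W = math.isclose(math.sin(s[0]), s[1])
print(in_W)   # False: sin(3*pi/2) = -1, not 1
```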

Example. Let F be a field, and let A, B ∈ M(n, F). Consider the following subset of Fⁿ:

W = {v ∈ Fⁿ | Av = Bv}.

Prove or disprove: W is a subspace.


This set is defined by a property rather than by appearance, and axiom checks for this kind of set often
give people trouble. The problem is that elements of W don’t “look like” anything — if you need to refer to
a couple of arbitrary elements of W , you might call them u and v (for instance). There’s nothing about the
symbols u and v which tells you that they belong to W . But u and v are like people who belong to a club:
You can’t tell from their appearance that they’re club members, but they’re carrying membership cards in
their pockets.
With this in mind, I’ll check closure under addition. Let u, v ∈ W . I must show that u + v ∈ W .
Since u and v are in W ,
Au = Bu and Av = Bv.
Adding the equations and factoring out, I get

Au + Av = Bu + Bv
A(u + v) = B(u + v)

The last equation shows that u + v ∈ W .

Warning: Don't say "A(u + v) = B(u + v) ∈ W"; it doesn't make sense! "A(u + v) = B(u + v)" is
an equation that u + v satisfies; an equation can't be an element of W, because elements of W are vectors.
Next, I’ll check closure under scalar multiplication. Let k ∈ F and let v ∈ W . Since v ∈ W , I have

Av = Bv.

Multiply both sides by k, then commute the matrices and the scalar:

k(Av) = k(Bv)
A(kv) = B(kv)

The last equation says that kv ∈ W .


Since W is closed under addition and scalar multiplication, it’s a subspace.
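The closure checks can be illustrated for concrete matrices. In this Python sketch, A and B are 2 × 2 matrices I chose for illustration (for them, Av = Bv forces the second component of v to be 0), and matvec is a hand-rolled matrix-vector product:

```python
# Sketch: for specific A, B, check that W = {v | Av = Bv} is closed
# under addition and scalar multiplication on sample vectors.

def matvec(M, v):
    """Multiply the matrix M (list of rows) by the vector v."""
    return tuple(sum(M[i][j] * v[j] for j in range(len(v)))
                 for i in range(len(M)))

A = [[1, 2], [0, 1]]
B = [[1, 0], [0, 1]]
# Here Av = Bv forces 2*v[1] = 0, so W is the x-axis {(x, 0)}.

u, v = (3, 0), (-5, 0)                 # two members of W
w = (u[0] + v[0], u[1] + v[1])         # their sum
print(matvec(A, w) == matvec(B, w))    # True: u + v is still in W

k = 7
ku = (k * u[0], k * u[1])              # a scalar multiple
print(matvec(A, ku) == matvec(B, ku))  # True: ku is still in W
```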

Example. Consider the following subsets of the polynomial ring R[x]:

V1 = {f (x) ∈ R[x] | f (2) = 0}, V2 = {f (x) ∈ R[x] | f (2) = 1}.

V1 is a subspace; it consists of all polynomials having x = 2 as a root.


V2 is not a subspace. One way to see this is to notice that the zero polynomial (i.e. the zero vector) is
not in V2 , because the zero polynomial does not give 1 when you plug in x = 2.
Alternatively, the constant polynomial f (x) = 1 is an element of V2 — it gives 1 when you plug in 2 —
but 2 · f (x) is not. So V2 is not closed under scalar multiplication.

Lemma. If A is an m × n matrix over the field F, the set of n-dimensional vectors x which satisfy

Ax = 0

is a subspace of Fⁿ (the solution space of the system).

Proof. If Ax = 0 and Ay = 0, then

A(x + y) = Ax + Ay = 0 + 0 = 0.

Therefore, if x and y are in the set, so is x + y.


If Ax = 0 and k is a scalar, then

A(kx) = k(Ax) = k · 0 = 0.

Therefore, if x is in the set, then so is kx.


Therefore, the solution space is a subspace.

Example. Consider the following system of linear equations over R:

    [ 1  1  0  1 ] [ w ]     [ 0 ]
    [ 0  0  1  3 ] [ x ]  =  [ 0 ]
                   [ y ]
                   [ z ]

The solution can be written as

w = −s − t, x = s, y = −3t, z = t.

Thus,
(w, x, y, z) = (−s − t, s, −3t, t).
The Lemma says that the set of all vectors of this form constitutes a subspace of R⁴.
For example, if you add two vectors of this form, you get another vector of this form:

(−s − t, s, −3t, t) + (−s′ − t′, s′, −3t′, t′) = (−(s + s′) − (t + t′), s + s′, −3(t + t′), t + t′).

You can check that the set is also closed under scalar multiplication.
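Here is a Python sketch of these checks (the helper names are my own): it verifies that the parametrized vectors satisfy both equations of the system, and that a sum of two of them has the same parametrized form:

```python
# Sketch: check that (-s - t, s, -3t, t) solves w + x + z = 0 and
# y + 3z = 0, and that sums of solutions have the same form.

def sol(s, t):
    """The general solution, parametrized by s and t."""
    return (-s - t, s, -3 * t, t)

def solves(v):
    """Does v satisfy both equations of the system?"""
    w, x, y, z = v
    return w + x + z == 0 and y + 3 * z == 0

print(solves(sol(2, 5)))                  # True
u, v = sol(1, 4), sol(-3, 2)
total = tuple(a + b for a, b in zip(u, v))
print(solves(total))                      # True
print(total == sol(1 + (-3), 4 + 2))      # True: the parameters add
```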

Definition. If v1 , v2 , . . . , vn are vectors in a vector space V , a linear combination of the v's is a vector

k1 v1 + k2 v2 + · · · + kn vn ,

where the k’s are scalars.

Example. Take u = ⟨1, 2⟩ and v = ⟨−3, 7⟩ in R². Here is a linear combination of u and v:

2u − 5v = 2 · ⟨1, 2⟩ − 5 · ⟨−3, 7⟩ = ⟨17, −31⟩.

(√2 − 17)u + (π²/4)v is also a linear combination of u and v. u and v are themselves linear combinations
of u and v, as is the zero vector (why?).
In fact, it turns out that any vector in R² is a linear combination of u and v.
On the other hand, there are vectors in R² which are not linear combinations of p = ⟨1, −2⟩ and
q = ⟨−2, 4⟩. Do you see how this pair is different from the first?
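Linear combinations are straightforward to compute mechanically. This Python sketch (the lincomb helper is my own) reproduces the 2u − 5v computation above:

```python
# Sketch: compute a linear combination k1*v1 + ... + kn*vn componentwise.

def lincomb(coeffs, vecs):
    """Sum of k_i * v_i over matching coefficients and vectors."""
    dim = len(vecs[0])
    return tuple(sum(k * v[i] for k, v in zip(coeffs, vecs))
                 for i in range(dim))

u, v = (1, 2), (-3, 7)
print(lincomb([2, -5], [u, v]))   # (17, -31)
```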

Definition. If S is a subset of a vector space V , the span ⟨S⟩ of S is the set of all linear combinations of
vectors in S.

Theorem. If S is a subset of a vector space V , the span ⟨S⟩ of S is a subspace of V .

Proof. Here are typical elements of the span of S:

j1 u1 + j2 u2 + · · · + jn un ,        k1 v1 + k2 v2 + · · · + km vm ,

where the j’s and k’s are scalars and the u’s and v’s are elements of S.
Take two elements of the span and add them:

(j1 u1 + j2 u2 + · · · + jn un ) + (k1 v1 + k2 v2 + · · · + km vm ) = j1 u1 + j2 u2 + · · · + jn un + k1 v1 + k2 v2 + · · · + km vm .

This humongous sum is an element of the span, because it’s a sum of vectors in S, each multiplied by
a scalar. Thus, the span is closed under taking sums.
Take an element of the span and multiply it by a scalar:

k · (k1 v1 + k2 v2 + · · · + kn vn ) = kk1 v1 + kk2 v2 + · · · + kkn vn .

This is an element of the span, because it’s a sum of vectors in S, each multiplied by a scalar. Thus,
the span is closed under scalar multiplication.
Therefore, the span is a subspace.

Example. Prove that the span of ⟨3, 1, 0⟩ and ⟨2, 1, 0⟩ in R³ is

V = {⟨a, b, 0⟩ | a, b ∈ R}.

To show that two sets are equal, you need to show that each is contained in the other. To do this, take
a typical element of the first set and show that it’s in the second set. Then take a typical element of the
second set and show that it’s in the first set.

Let W be the span of ⟨3, 1, 0⟩ and ⟨2, 1, 0⟩ in R³. A typical element of W is a linear combination of the
two vectors:

x · ⟨3, 1, 0⟩ + y · ⟨2, 1, 0⟩ = ⟨3x + 2y, x + y, 0⟩.

Since the sum is a vector of the form ⟨a, b, 0⟩ for a, b ∈ R, it is in V . This proves that W ⊂ V .
Now let ⟨a, b, 0⟩ ∈ V . I have to show that this vector is a linear combination of ⟨3, 1, 0⟩ and ⟨2, 1, 0⟩.
This means that I have to find real numbers x and y such that

x · ⟨3, 1, 0⟩ + y · ⟨2, 1, 0⟩ = ⟨a, b, 0⟩.

If I expand the left side, I get

⟨3x + 2y, x + y, 0⟩ = ⟨a, b, 0⟩.
Equating corresponding components, I get

3x + 2y = a, x + y = b.

This is a system of linear equations which you can solve by row reduction or matrix inversion (for
instance). The solution is
x = a − 2b, y = −a + 3b.
In other words,
(a − 2b) · ⟨3, 1, 0⟩ + (−a + 3b) · ⟨2, 1, 0⟩ = ⟨a, b, 0⟩.
Since ⟨a, b, 0⟩ is a linear combination of ⟨3, 1, 0⟩ and ⟨2, 1, 0⟩, it follows that ⟨a, b, 0⟩ ∈ W . This proves
that V ⊂ W .
Since W ⊂ V and V ⊂ W , I have W = V .
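The coefficient formulas can be spot-checked in Python. In this sketch (helper name mine), combo(a, b) forms the linear combination with x = a − 2b and y = −a + 3b, which should return ⟨a, b, 0⟩:

```python
# Sketch: verify that x = a - 2b, y = -a + 3b recover an arbitrary
# (a, b, 0) from the spanning vectors (3, 1, 0) and (2, 1, 0).

def combo(a, b):
    x, y = a - 2 * b, -a + 3 * b
    u, v = (3, 1, 0), (2, 1, 0)
    return tuple(x * ui + y * vi for ui, vi in zip(u, v))

for a, b in [(1, 0), (0, 1), (5, -7)]:
    print(combo(a, b) == (a, b, 0))   # True each time
```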


© 2008 by Bruce Ikenaga
