General Vector Spaces

A vector space is defined by a nonempty set V whose elements are called vectors, along with two operations - vector addition and scalar multiplication - that satisfy certain properties. A vector space has a basis, which is a linearly independent set of vectors whose linear combinations span the entire space. The dimension of a vector space is the number of elements in any basis. Important subspaces include the null space, row space, and column space of a matrix.


The definition of a vector space (V , +, ·)

1. For any ~u and ~v in V , ~u + ~v is also in V .


2. For any ~u and ~v in V , ~u + ~v = ~v + ~u .
3. For any ~u , ~v , ~w in V , ~u + (~v + ~w ) = (~u + ~v ) + ~w .
4. There is an element in V called the zero or null vector, which
we denote by ~0, such that for all ~u in V we have ~0 + ~u = ~u .
5. For every ~u in V , there is a vector called the negative of ~u
and denoted −~u , such that −~u + ~u = ~0.
6. If k is any scalar in R and ~u is any vector in V , then k · ~u is a
vector in V .
7. For any scalar k in R and any vectors ~u and ~v in V ,
k · (~u + ~v ) = k · ~u + k · ~v .
8. For any scalars k and m in R and any vector ~u in V ,
(k + m) · ~u = k · ~u + m · ~u .
9. For any scalars k and m in R and any vector ~u in V ,
k · (m · ~u ) = (k m) · ~u .
10. For any vector ~u in V , 1 · ~u = ~u .
What determines a vector space?

A nonempty set V whose elements are called vectors.


An operation + called vector addition, such that

vector + vector = vector

In other words, we have closure under vector addition.


An operation · called scalar multiplication, such that

scalar · vector = vector

In other words, we have closure under scalar multiplication.


The remaining 8 axioms are all satisfied.
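As a quick sanity check, the closure requirements and several of the axioms can be spot-checked on sample vectors. The sketch below (pure Python; the helper names add and scale are our own, not from the notes) uses R^2 with the usual operations:

```python
# R^2 vectors as tuples; 'add' and 'scale' are illustrative helper names.
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])          # vector + vector = vector

def scale(k, u):
    return (k * u[0], k * u[1])                # scalar . vector = vector

u, v, w = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)
k, m = 2.0, -3.0

assert add(u, v) == add(v, u)                  # axiom 2: commutativity
assert add(u, add(v, w)) == add(add(u, v), w)  # axiom 3: associativity
assert add((0.0, 0.0), u) == u                 # axiom 4: zero vector
assert add(scale(-1.0, u), u) == (0.0, 0.0)    # axiom 5: negative
assert scale(k, add(u, v)) == add(scale(k, u), scale(k, v))   # axiom 7
assert scale(k + m, u) == add(scale(k, u), scale(m, u))       # axiom 8
```

Passing on a handful of samples is of course only evidence; proving that a candidate space satisfies the axioms requires an argument for all vectors and scalars.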
Some basic identities in a vector space

Theorem: Let V be a vector space. The following statements are
always true.
a) 0 · ~u = ~0
b) k · ~0 = ~0
c) (−1) · ~u = −~u
d) If k · ~u = ~0 then k = 0 or ~u = ~0.
Vector subspace

Let (V , +, ·) be a vector space.


Definition: A subset W of V is called a subspace if W is itself a
vector space with the operations + and · defined on V .
Theorem: Let W be a nonempty subset of V . Then W is a
subspace of V if and only if it is closed under addition and scalar
multiplication, in other words, if:
i.) For any ~u , ~v in W , we have ~u + ~v is in W .
ii.) For any scalar k and any vector ~u in W we have k · ~u is in W .
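The two conditions of the theorem can be spot-checked on sample vectors. In the sketch below, the subset W = {(x, y, z) in R^3 : x + y + z = 0} and the helper in_W are our own illustrative choices; passing on samples is evidence of closure, not a proof:

```python
# Sketch: test the two closure conditions on samples for
# W = {(x, y, z) in R^3 : x + y + z = 0}  (our example set, not from the notes).
def in_W(u):
    return u[0] + u[1] + u[2] == 0

samples = [(1, -1, 0), (2, 3, -5), (0, 0, 0)]
scalars = [-2, 0, 1, 3]

# i.) closed under addition
for u in samples:
    for v in samples:
        s = (u[0] + v[0], u[1] + v[1], u[2] + v[2])
        assert in_W(s)

# ii.) closed under scalar multiplication
for k in scalars:
    for u in samples:
        assert in_W((k * u[0], k * u[1], k * u[2]))
```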
Linear combinations and the span of a set of vectors
Let (V , +, ·) be a vector space.
Definition: Let ~v be a vector in V . We say that ~v is a linear
combination of the vectors v~1 , v~2 , . . . , v~r if there are scalars
k1 , k2 , . . . , kr such that

~v = k1 · v~1 + k2 · v~2 + . . . + kr · v~r

The scalars k1 , k2 , . . . , kr are called the coefficients of the linear
combination.
Definition: Let S = {v~1 , v~2 , . . . , v~r } be a set of vectors in V .
Let W be the set of all linear combinations of v~1 , v~2 , . . . , v~r :

W = {k1 · v~1 + k2 · v~2 + . . . + kr · v~r : for all choices of scalars k1 , k2 , . . . , kr }

Then W is called the span of the set S.
We write:

W = span S or W = span {v~1 , v~2 , . . . , v~r }
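Deciding whether a given vector belongs to a span amounts to solving a linear system for the coefficients. A minimal sketch (the function solve_2x2 and the use of Cramer's rule are our own choices, not the notes' method):

```python
# Is v = (5, 7) a linear combination of v1 = (1, 1) and v2 = (1, 2)?
# Solve k1*v1 + k2*v2 = v, i.e. the 2x2 system
#   k1 +   k2 = 5
#   k1 + 2*k2 = 7
def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ (x, y) = (e, f) by Cramer's rule;
    returns None if the coefficient matrix is singular."""
    det = a * d - b * c
    if det == 0:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

coeffs = solve_2x2(1, 1, 1, 2, 5, 7)
print(coeffs)  # (3.0, 2.0): v = 3*v1 + 2*v2, so v lies in span{v1, v2}
```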


Linear combinations and the span of a set of vectors

Let (V , +, ·) be a vector space and let S = {v~1 , v~2 , . . . , v~r } be a
set of vectors in V .

span S = {k1 · v~1 + k2 · v~2 + . . . + kr · v~r : for all scalars k1 , k2 , . . . , kr }

(all linear combinations of v~1 , v~2 , . . . , v~r ).

Theorem: span {v~1 , v~2 , . . . , v~r } is a subspace of V .
It is in fact the smallest subspace of V that contains all the vectors
v~1 , v~2 , . . . , v~r .
Linear independence in a vector space V
Let V be a vector space and let S = {v~1 , v~2 , . . . , v~r } be a set of
vectors in V .
Definition: The set S is called linearly independent if the vector
equation
(∗) c1 · v~1 + c2 · v~2 + . . . + cr · v~r = ~0
has only one solution, the trivial one:

c1 = 0, c2 = 0, . . . , cr = 0

The set is called linearly dependent otherwise, that is, if equation (∗)
has other solutions besides the trivial one.

Theorem: The set of vectors S is linearly independent if and only if
no vector in the set is a linear combination of the other vectors in
the set.
The set of vectors S is linearly dependent if and only if one of the
vectors in the set is a linear combination of the other vectors in the
set.
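For three vectors in R^3, equation (∗) has only the trivial solution exactly when the 3×3 determinant of the matrix with those vectors as columns is nonzero. A sketch (the helper names det3 and independent are our own):

```python
# Linear independence test for three vectors in R^3 via the determinant
# of the matrix whose columns are u, v, w (cofactor expansion, first row).
def det3(u, v, w):
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - v[0] * (u[1] * w[2] - u[2] * w[1])
            + w[0] * (u[1] * v[2] - u[2] * v[1]))

def independent(u, v, w):
    return det3(u, v, w) != 0

print(independent((1, 0, 0), (1, 1, 0), (1, 1, 1)))  # True
print(independent((1, 2, 3), (2, 4, 6), (0, 0, 1)))  # False: v2 = 2*v1
```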
The solutions to a homogeneous system of equations
A · ~x = ~0

Consider a homogeneous system of m equations with n unknowns.
In other words,
let A be an m × n matrix
let ~x be an n × 1 matrix (or vector) whose entries are the
unknowns x1 , x2 , . . . , xn
let ~0 denote the m × 1 matrix (vector) whose entries are all 0.
The system can then be written as

A · ~x = ~0
Theorem: The set of solutions to a homogeneous system of m
equations with n unknowns is a subspace of Rn .
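The subspace property can be checked on an example: sums and scalar multiples of solutions of A · ~x = ~0 are again solutions. A sketch (the 1×3 matrix A and the helper matvec are our own choices):

```python
# Sketch: verify on an example that solutions of A.x = 0 are closed under
# addition and scalar multiplication (the subspace property of the theorem).
# A is 1x3 here, encoding the single equation x1 + x2 - 2*x3 = 0.
def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[1, 1, -2]]
x1 = [2, 0, 1]      # a solution: 2 + 0 - 2 = 0
x2 = [0, 2, 1]      # another solution
assert matvec(A, x1) == [0]
assert matvec(A, x2) == [0]

s = [a + b for a, b in zip(x1, x2)]
assert matvec(A, s) == [0]                      # sum of solutions is a solution
assert matvec(A, [5 * c for c in x1]) == [0]    # scalar multiple is a solution
```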
Reminder from MA1201 on systems of equations
The case when # of equations = # of unknowns
Theorem: Let A be a square matrix in Mn×n .
The following statements are equivalent:
1. A is invertible
2. det(A) ≠ 0
3. The homogeneous system A · ~x = ~0 has only the trivial
solution
4. The system of equations A · ~x = ~b has exactly one solution for
every vector ~b in Rn .

The case when # of unknowns > # of equations
Theorem: Let A be a matrix in Mm×n , where n > m.
Then the homogeneous system A · ~x = ~0 has infinitely many
solutions.
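A minimal illustration of the theorem, with m = 1 equation in n = 2 unknowns (the equation x1 + 2x2 = 0 is our own example): each value of the free variable t gives a distinct solution, so there are infinitely many.

```python
# One equation, two unknowns: x1 + 2*x2 = 0, so x2 is a free variable.
# Setting x2 = t forces x1 = -2*t, giving one solution for every t.
def solution(t):
    return (-2 * t, t)

for t in [0, 1, -3, 2.5]:
    x1, x2 = solution(t)
    assert x1 + 2 * x2 == 0   # every choice of t gives a genuine solution
```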
Basis in a vector space V

Definition: A set S = {v~1 , v~2 , . . . , v~r } of vectors is called a basis
for V if
1. S is linearly independent
2. span (S) = V .

In other words, the set S = {v~1 , v~2 , . . . , v~r } is a basis for V if
1. The equation c1 · v~1 + c2 · v~2 + . . . + cr · v~r = ~0 has only the
trivial solution.
2. The equation c1 · v~1 + c2 · v~2 + . . . + cr · v~r = ~b has a solution
for every ~b in V .
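For r = n vectors in R^n, the two basis conditions reduce (by the square-matrix theorem earlier) to a single determinant test. A sketch for n = 2 (the helper name is_basis_of_R2 is our own):

```python
# Two vectors form a basis of R^2 exactly when the 2x2 determinant of the
# matrix with those vectors as columns is nonzero: then c1*v1 + c2*v2 = 0
# has only the trivial solution AND c1*v1 + c2*v2 = b is solvable for all b.
def is_basis_of_R2(v1, v2):
    det = v1[0] * v2[1] - v1[1] * v2[0]
    return det != 0

print(is_basis_of_R2((1, 1), (1, 2)))   # True:  independent and spanning
print(is_basis_of_R2((1, 2), (2, 4)))   # False: v2 = 2*v1
```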
Standard bases for the most popular vector spaces

In R2 : {~i,~j}.
In R3 : {~i,~j, ~k}.
In Rn : {e1 , e2 , . . . , en }.
In Pn : {1, X , X^2 , . . . , X^n }.
In M2×2 : all matrices with all entries 0 except for one entry,
which is 1. There are 4 such matrices.
In Mm×n : all matrices with all entries 0 except for one entry,
which is 1. There are m · n such matrices.
Dimension of a vector space
Some vector spaces do not have a finite basis.
A vector space has many different bases. However,
Theorem: All bases of a finite dimensional vector space have the
same number of elements.

Definition: Let V be a finite dimensional vector space. The
dimension of V is the number of elements of a basis for V .
We use the notation dim(V ) for the dimension of V .

Example: Counting the elements of the standard basis of each of
the popular vector spaces, we have that:
dim(Rn ) = n
dim(Pn ) = n + 1
dim(Mm×n ) = m · n
The null space, row space and column space of a matrix

Definition: Given an m × n matrix A, we define:
1. The column space of A = the subspace of Rm spanned by the
columns of A. Its dimension is called the rank of A, rank (A).
2. The row space of A = the subspace of Rn spanned by its rows.
3. The null space of A = the solution space of A · ~x = ~0.
It is a subspace of Rn . Its dimension is called the nullity of A, nullity (A).
Theorem: Let A be an m × n matrix. Then
(a) rank (A)+ nullity (A) = n
(b) dim column space of A = dim row space of A
= rank (A) = number of leading 1s in the RREF of A.
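The rank-nullity statement in part (a) can be checked mechanically: row-reduce, count the leading 1s, and compare with n. A sketch using exact Fraction arithmetic (the rref helper and the example matrix are our own illustration):

```python
from fractions import Fraction

def rref(A):
    """Row-reduce a matrix (list of rows) to reduced row echelon form.
    Returns the reduced matrix and the list of pivot (leading-1) columns."""
    R = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(R), len(R[0])
    pivot_row, pivot_cols = 0, []
    for c in range(cols):
        # find a row with a nonzero entry in column c, at or below pivot_row
        r = next((i for i in range(pivot_row, rows) if R[i][c] != 0), None)
        if r is None:
            continue
        R[pivot_row], R[r] = R[r], R[pivot_row]
        R[pivot_row] = [x / R[pivot_row][c] for x in R[pivot_row]]  # leading 1
        for i in range(rows):                  # clear the rest of the column
            if i != pivot_row and R[i][c] != 0:
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[pivot_row])]
        pivot_cols.append(c)
        pivot_row += 1
        if pivot_row == rows:
            break
    return R, pivot_cols

# rank(A) = number of leading 1s in the RREF; nullity(A) = n - rank(A)
A = [[1, 2, 3],
     [2, 4, 6],
     [1, 0, 1]]
R, pivots = rref(A)
rank, n = len(pivots), 3
print(rank, n - rank)  # → 2 1, and indeed rank(A) + nullity(A) = n
```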
Finding bases for the null space, row space and column
space of a matrix
Given an m × n matrix A
1. Reduce the matrix A to the reduced row echelon form R.
2. Solve the system R · ~x = ~0. Find a basis for the solution
space.
The same basis for the solution space of R · ~x = ~0 is a basis
for the null space of A.
3. Consider the non-zero rows of R. They form a basis for the
row space of R.
The same basis for the row space of R is a basis for the row
space of A.
4. Take the columns of R with leading 1s. They form a basis for
the column space of R.
The corresponding column vectors in A form a basis for the
column space of A.
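The four steps above can be sketched in code. Everything below (the rref helper and the example matrix A) is our own illustration of the procedure, using exact Fraction arithmetic:

```python
from fractions import Fraction

def rref(A):
    """Reduced row echelon form; returns (R, pivot columns)."""
    R = [[Fraction(x) for x in row] for row in A]
    m, n = len(R), len(R[0])
    pr, pivots = 0, []
    for c in range(n):
        r = next((i for i in range(pr, m) if R[i][c] != 0), None)
        if r is None:
            continue
        R[pr], R[r] = R[r], R[pr]
        R[pr] = [x / R[pr][c] for x in R[pr]]
        for i in range(m):
            if i != pr and R[i][c] != 0:
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[pr])]
        pivots.append(c)
        pr += 1
    return R, pivots

A = [[1, 2, 0, 1],
     [2, 4, 1, 1],
     [1, 2, 1, 0]]
R, pivots = rref(A)                       # step 1

# Step 3: the nonzero rows of R form a basis for the row space of A.
row_basis = [row for row in R if any(x != 0 for x in row)]

# Step 4: the columns of A corresponding to the leading 1s form a basis
# for the column space of A.
col_basis = [[A[i][c] for i in range(len(A))] for c in pivots]

# Step 2: solve R.x = 0 — set each free variable to 1 in turn; the pivot
# variables are then read off from R. These vectors span the null space.
free = [c for c in range(len(A[0])) if c not in pivots]
null_basis = []
for f in free:
    x = [Fraction(0)] * len(A[0])
    x[f] = Fraction(1)
    for i, c in enumerate(pivots):
        x[c] = -R[i][f]
    null_basis.append(x)

print(len(row_basis), len(col_basis), len(null_basis))  # → 2 2 2
```

Note that rank (A) = 2 and nullity (A) = 2 here, so rank + nullity = 4 = n, as the theorem requires.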
Gaussian elimination: reduced row echelon form (RREF)
 
Example: The matrix
        [ 1 0 0 1 ]
    A = [ 0 1 0 2 ]   is in RREF.
        [ 0 0 1 3 ]
Definition: To be in RREF, a matrix A must satisfy the following 4
properties (if it satisfies just the first 3 properties, it is in REF):
1. If a row does not have only zeros, then its first nonzero number
is a 1. We call this a leading 1.
2. The rows that contain only zeros (if there are any) are at the
bottom of the matrix.
3. In any two consecutive rows that do not consist only of zeros, the
leading 1 in the lower row is farther to the right than the leading 1
in the row above.
4. Each column that contains a leading 1 has zeros everywhere
else in that column.
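A small checker for the four properties above (the helper is_rref is our own; it assumes the matrix is given as a list of rows):

```python
# Sketch: test whether a matrix satisfies the four RREF properties.
def is_rref(M):
    lead = -1                     # column of the previous leading 1
    seen_zero_row = False
    for row in M:
        nz = [j for j, x in enumerate(row) if x != 0]
        if not nz:
            seen_zero_row = True  # property 2: zero rows must stay at bottom
            continue
        if seen_zero_row:
            return False          # a nonzero row below a zero row
        j = nz[0]
        if row[j] != 1:
            return False          # property 1: first nonzero entry is 1
        if j <= lead:
            return False          # property 3: leading 1s move right
        lead = j
        if any(M[i][j] != 0 for i in range(len(M)) if M[i] is not row):
            return False          # property 4: rest of pivot column is zero
    return True

print(is_rref([[1, 0, 0, 1], [0, 1, 0, 2], [0, 0, 1, 3]]))  # True
print(is_rref([[1, 2], [0, 2]]))                            # False
```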
