
Fall 2011, 18-202 Lecture

Given by: Aurora Schmidt for Prof. Jose M. F. Moura


Oct. 24, 2011
Contents
1 Linear Combination and Span
2 Vector Space
3 Basis and Dimension
4 Linear Independence of Vectors
  4.1 Checking if n vectors in R^m are Linearly Independent
5 Finding a basis and properties of the basis
6 Inner Product
1 Linear Combination and Span
Linear Combination
A linear combination of two vectors a_1, a_2 is a weighted sum of the vectors.
Example: A linear combination of 2D vectors
Let v_1 = [1, 0]^T and v_2 = [0, 1]^T. Then

    v_3 = 3v_1 − 2v_2 = [3, −2]^T    (1)

The new vector v_3 is a linear combination of v_1 and v_2.
span
The span of a finite set of vectors, {v_1, . . . , v_r}, is the set of all linear combinations of those vectors:

    span{v_1, v_2, . . . , v_r} = { v = α_1 v_1 + α_2 v_2 + · · · + α_r v_r ,  α_i ∈ F }    (2)

where F is some scalar field, such as R or C.
Example: Again let v_1 = [1, 0]^T and v_2 = [0, 1]^T.

    span{v_1, v_2} = { v = α v_1 + β v_2 ,  α, β ∈ R }    (3)
                   = R^2    (4)
Showing why the span of these is all of R^2: Suppose we want to describe an arbitrary point in R^2, x = [a, b]^T. The vector x is in the span of {v_1, v_2} since we can choose α = a and β = b. Therefore the span of these two vectors is the whole of 2D space.
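As a quick numerical illustration (a minimal sketch, not part of the original notes, assuming Python with NumPy and made-up values of a and b), choosing α = a and β = b does reproduce an arbitrary point:

    import numpy as np

    v1 = np.array([1.0, 0.0])
    v2 = np.array([0.0, 1.0])

    # An arbitrary point x = [a, b]^T in R^2 (values are made up).
    a, b = 2.5, -4.0
    x = np.array([a, b])

    # Choosing alpha = a and beta = b reproduces x exactly,
    # so x lies in span{v1, v2}.
    alpha, beta = a, b
    assert np.allclose(alpha * v1 + beta * v2, x)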
2 Vector Space
Let V be a set of vectors over a field, F, along with the operations vector addition and scalar multiplication. Then V forms a vector space if these axioms are satisfied:
i. Closure under addition and scalar multiplication:
    ∀ u, v ∈ V, (u + v) ∈ V
    ∀ c ∈ F, ∀ v ∈ V, cv ∈ V
(This can also be called closure under linear combination.)
ii. Associativity of addition:
    ∀ u, v, w ∈ V, u + (v + w) = (u + v) + w
iii. Commutativity of addition:
    ∀ v, w ∈ V, v + w = w + v
iv. Identity element of addition:
    ∃ 0 ∈ V : v + 0 = v, ∀ v ∈ V
There is a unique element 0 ∈ V, called the zero vector, such that v + 0 = v for all vectors v in the space.
v. Inverse elements of addition:
    ∀ v ∈ V, ∃ w ∈ V : v + w = 0
For any v ∈ V, there exists an element w ∈ V, called the additive inverse of v, such that v + w = 0. The additive inverse may be denoted −v.
vi. Distributivity of scalar multiplication with respect to vector addition:
    ∀ v, w ∈ V, ∀ a ∈ F, a(v + w) = av + aw
vii. Distributivity of scalar multiplication with respect to field addition:
    ∀ v ∈ V, ∀ a, b ∈ F, (a + b)v = av + bv
viii. Associativity of scalar multiplication:
    a(bv) = (ab)v, ∀ a, b ∈ F, ∀ v ∈ V
ix. Identity element of scalar multiplication:
    1v = v, ∀ v ∈ V, where 1 is the multiplicative identity of F
Example 1, Real and Complex Column Vectors: R^n and C^n
Real and complex n-dimensional vectors are each vector spaces. Check closure and commutativity under addition for R^2:

    v_1 + v_2 = [v_11, v_12]^T + [v_21, v_22]^T = [v_11 + v_21, v_12 + v_22]^T

Clearly the output is a 2D real vector, therefore we have closure under addition. Also, we conclude commutativity under addition, since for real v_11, v_21 we have v_11 + v_21 = v_21 + v_11, so

    v_1 + v_2 = [v_11 + v_21, v_12 + v_22]^T = [v_21 + v_11, v_22 + v_12]^T = v_2 + v_1

The other properties can be similarly checked, citing properties inherited from the field of real numbers. The same can be done for C^2, where now we cite the same properties of the field of complex numbers.
Example 2, Real and Complex Rectangular Matrices: R^{m×n} and C^{m×n}
The sets of real and complex m-by-n matrices are also each a vector space. Check closure and commutativity under addition:

    ∀ A, B ∈ R^{m×n}, A + B = B + A ∈ R^{m×n}

What about commutativity of multiplication? Does AB = BA? No. Notice that multiplication need not be defined, nor commutative, for a set to be a vector space.
Example 3, Semi-infinite Real-valued Sequences:
The set of discrete real-valued sequences, described by

    f[k] = { f_k ∈ R,  for k ≥ 0
           { 0,        for k < 0

forms a vector space. We can show closure under linear combination. For real constants α and β, αf[k] + βg[k] will also be a real-valued sequence starting at k = 0:

    h[k] = αf[k] + βg[k] = { αf_k + βg_k ∈ R,  for k ≥ 0
                           { 0,                for k < 0

We can also check for the existence of a zero element. Let z[k] = 0, ∀k. This is a real sequence that is zero for all negative k, so it is an element of V. We can see that f[k] + z[k] = f[k] for any f[k], so this is the zero element. The other 7 properties also hold. The interesting aspect of this example is that each vector in this space is a sequence of infinite length, yet it may still be thought of as a vector.
Note that the same reasoning applies to complex-valued sequences, so these also form a vector space.
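As an illustrative sketch (not from the original notes; plain Python, with made-up sequence values), a semi-infinite sequence can be modeled as a function of k that returns 0 for k < 0, and closure under linear combination follows directly:

    def f(k):
        # f_k = 2^(-k) for k >= 0, zero for k < 0 (an arbitrary example)
        return 2.0 ** (-k) if k >= 0 else 0.0

    def g(k):
        # g_k = k for k >= 0, zero for k < 0 (another arbitrary example)
        return float(k) if k >= 0 else 0.0

    def combine(alpha, f, beta, g):
        # h[k] = alpha*f[k] + beta*g[k] is again zero for all k < 0,
        # so the set is closed under linear combination.
        return lambda k: alpha * f(k) + beta * g(k)

    h = combine(3.0, f, -1.0, g)
    assert h(-2) == 0.0                  # still zero for negative k
    assert h(2) == 3.0 * 0.25 - 2.0      # alpha*f[2] + beta*g[2]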
Example 4, Real and Complex-valued Functions:
The set of real-valued functions, described by

    f(t) ∈ R, for all t

and the set of complex-valued functions, described by

    f(t) ∈ C, for all t

are vector spaces. We can show closure under linear combination. For real functions and real constants α and β, αf(t) + βg(t) will also be a function of t:

    h(t) = αf(t) + βg(t) ∈ R for all t

Complex functions are also closed under linear combination using complex α and β. We can also check for the existence of a zero element; in this case it would be z(t) = 0, ∀t. The interesting aspect of this example is that each vector in this space is a function of real-valued t. You can also show that the set of continuous functions is a vector space, since all linear combinations of a finite number of continuous functions are also continuous.
Example 5, Polynomials of Degree n:
A polynomial of degree n can be written as

    p(x) = a_n x^n + a_{n−1} x^{n−1} + · · · + a_1 x + a_0

In order to be considered an nth degree polynomial, the coefficient a_n must not be zero.
If we consider just the space of polynomials with degree exactly equal to n, we do not have a vector space. For one, closure under addition fails. For example, if we add f(x) = a_n x^n + a_{n−1} x^{n−1} + · · · + a_1 x + a_0 to g(x) = −a_n x^n + b_{n−1} x^{n−1} + · · · + b_1 x + b_0, we get

    h(x) = (a_n − a_n) x^n + (a_{n−1} + b_{n−1}) x^{n−1} + · · · + (a_1 + b_1) x + (a_0 + b_0)

a polynomial of degree at most n − 1, which is not in the set of degree-n polynomials. Since polynomials can only decrease in degree when they are linearly combined with other polynomials of the same or lower degree, if we instead consider polynomials with degree at most n, we do have a vector space. This space is closed under linear combination. Also, there is a zero element,

    z(x) = 0x^n + 0x^{n−1} + · · · + 0x + 0 = 0

We note that we can represent these polynomials with degree less than or equal to n by collecting their coefficients into a vector,

    v = [a_n, a_{n−1}, . . . , a_1, a_0]^T

For polynomials with real coefficients and degree less than or equal to n, there is an equivalence to the space R^{n+1}. The difference lies in the interpretation of the vectors in this space.
As an example, use this vector representation to represent the 3rd order polynomial 2x^3 − x^2 + x as an element of the space of polynomials with degree at most 3.
Answer: v = [2, −1, 1, 0]^T
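To make the equivalence with R^4 concrete, here is a minimal sketch (not from the original notes; Python with NumPy, where np.polyval takes coefficients highest degree first, matching our vector ordering):

    import numpy as np

    # Coefficient vector [a_3, a_2, a_1, a_0] for p(x) = 2x^3 - x^2 + x.
    v = np.array([2.0, -1.0, 1.0, 0.0])

    # Evaluating the polynomial from its coefficient vector agrees
    # with evaluating it directly.
    x = 1.5
    assert np.isclose(np.polyval(v, x), 2 * x**3 - x**2 + x)

    # Adding two polynomials of degree <= 3 is vector addition in R^4.
    w = np.array([-2.0, 0.0, 5.0, 1.0])   # q(x) = -2x^3 + 5x + 1
    s = v + w                             # (p + q)(x); note the degree drops
    assert np.isclose(np.polyval(s, x), np.polyval(v, x) + np.polyval(w, x))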
Example 6, An Example that is Not a Vector Space:
So far almost everything we proposed has been a vector space. For illustration, we give an example that is not a vector space. Consider the set of real 2D vectors, i.e., R^2, but redefine the operation of vector addition, +:

    + : V × V → V,   v + u = [v_1, v_2]^T + [u_1, u_2]^T := [min(v_1, u_1), min(v_2, u_2)]^T

Now we check to see if the properties of vector spaces still hold.
The set is closed under this addition:

    [v_1, v_2]^T + [u_1, u_2]^T = [min(v_1, u_1), min(v_2, u_2)]^T ∈ V.

Commutativity holds:

    [v_1, v_2]^T + [u_1, u_2]^T = [min(v_1, u_1), min(v_2, u_2)]^T = [u_1, u_2]^T + [v_1, v_2]^T.

Associativity holds:

    ([v_1, v_2]^T + [u_1, u_2]^T) + [w_1, w_2]^T = [min(v_1, u_1), min(v_2, u_2)]^T + [w_1, w_2]^T
                                                 = [min(v_1, u_1, w_1), min(v_2, u_2, w_2)]^T = v + (u + w).

Existence of a zero element fails! For any vector v = [v_1, v_2]^T ∈ R^2, there must be an element z, also in R^2, such that v + z = v. The minimum of two numbers gives back the lower number, so this implies the entries of z should be greater than all possible entries of v. But in the set of reals there is no largest number. We could think of choosing z = [∞, ∞]^T. However, this z is not in R^2, by virtue of the fact that ∞ ∉ R. Hence this property fails, as does the existence of additive inverses.
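A small numerical illustration (a sketch, not from the original notes; Python with NumPy and made-up vectors): commutativity and associativity of the min-addition pass a spot check, but any finite candidate zero element fails:

    import numpy as np

    def min_add(v, u):
        # The redefined "addition": entrywise minimum.
        return np.minimum(v, u)

    v = np.array([3.0, -1.0])
    u = np.array([0.5, 4.0])
    w = np.array([2.0, 2.0])

    # Commutativity and associativity hold for this operation.
    assert np.array_equal(min_add(v, u), min_add(u, v))
    assert np.array_equal(min_add(min_add(v, u), w), min_add(v, min_add(u, w)))

    # But no finite z can act as a zero element: min_add(v, z) == v for
    # all v would require every entry of z to exceed every real number.
    z = np.array([1e9, 1e9])      # a very large candidate "zero" ...
    big = np.array([2e9, 2e9])    # ... is still beaten by a larger v
    assert not np.array_equal(min_add(big, z), big)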
3 Basis and Dimension
A basis for a vector space V is a minimal set of vectors, {v_1, . . . , v_r}, for which all other vectors in the space can be described as ∑_{i=1}^{r} c_i v_i, for some coefficients c_i ∈ F. The minimal number of vectors needed to form a basis is called the dimension of the space.
In other words, a vector space V may be described by all possible linear combinations (the span) of its basis vectors.
Example, A Basis for R^3:

    e_1 = [1, 0, 0]^T,  e_2 = [0, 1, 0]^T,  e_3 = [0, 0, 1]^T

Can we reach all points in R^3 by linearly combining these vectors? Take an arbitrary point [a, b, c]^T, where a, b, c are any real values. We can express

    [a, b, c]^T = a e_1 + b e_2 + c e_3

Clearly, span{e_1, e_2, e_3} = R^3. In fact, these particular vectors are called the standard basis for R^3, and they have the nice property that they are orthogonal to each other and each have unit norm, forming an orthonormal basis. We can also show this set is minimal; i.e., we cannot span R^3 with a smaller set of vectors. To show why, we need to define the concept of linear independence (next section).
Question: Is the choice of basis unique?
Answer: No, there are infinitely many choices of basis for a vector space. For example, consider these vectors as an alternative basis for R^3:

    v_1 = [1, 1, 1]^T,  v_2 = [1, 1, 0]^T,  v_3 = [1, 0, 0]^T

Again we can represent an arbitrary point [a, b, c]^T as a linear combination of the vectors:

    [a, b, c]^T = c v_1 + (b − c) v_2 + (a − b) v_3

Therefore this set of vectors spans all of R^3. It is also a minimal set because all of the vectors {v_i} are linearly independent.
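As a numerical check of the coefficients above (a minimal sketch, not from the original notes; Python with NumPy and made-up values of a, b, c):

    import numpy as np

    # The alternative basis vectors, stacked as the columns of B.
    B = np.column_stack([
        np.array([1.0, 1.0, 1.0]),   # v1
        np.array([1.0, 1.0, 0.0]),   # v2
        np.array([1.0, 0.0, 0.0]),   # v3
    ])

    a, b, c = 4.0, -2.0, 7.0
    x = np.array([a, b, c])

    # Solve B @ coeffs = x for the coordinates of x in this basis ...
    coeffs = np.linalg.solve(B, x)

    # ... and they match the closed form c, b - c, a - b from above.
    assert np.allclose(coeffs, [c, b - c, a - b])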
4 Linear Independence of Vectors
A set of vectors v_1, . . . , v_m is linearly independent if none of the vectors can be written as a linear combination of the others. This means that for every k = 1, . . . , m, there is no set of coefficients {c_i} ⊂ F such that

    v_k = ∑_{i≠k} c_i v_i

That is, within the set {v_i} no vector can be represented as a linear combination of the other vectors.
Now we can show why a basis must be comprised of linearly independent vectors. Suppose we had a basis for V, a set of vectors v_1, v_2, . . . , v_m, and suppose that there was a vector in the set that is a linear combination of the others:

    v_k = ∑_{i≠k} c_i v_i,  for some {c_i} ⊂ F

This means that v_k is in the span of the other vectors: v_k ∈ span{v_i : i ∈ {1, . . . , m}, i ≠ k}. Hence, any vector in V that is a combination of v_k and the other vectors could be written simply as a combination of the other vectors. This contradicts the requirement that a basis be a minimal set. Therefore, to be a basis, all of v_1, v_2, . . . , v_m must be linearly independent.
Example:
Check whether the vectors from the basis example are linearly independent:

    v_1 = [1, 1, 1]^T,  v_2 = [1, 1, 0]^T,  v_3 = [1, 0, 0]^T

Can we write v_1 = αv_2 + βv_3 for some α, β ∈ R?

    [1, 1, 1]^T = α [1, 1, 0]^T + β [1, 0, 0]^T

There is no way to get a 1 into the last coordinate, so v_1 is independent of {v_2, v_3}. We would also need to check that v_2 is independent of {v_1, v_3}, and that v_3 is independent of {v_1, v_2}. However, there is a way for us to check all combinations at once.
4.1 Checking if n vectors in R^m are Linearly Independent
We have a set of vectors { v_i ∈ R^m : i = 1, 2, . . . , n }. The set is linearly dependent if there is some k for which

    v_k = ∑_{i≠k} c_i v_i

which is equivalent to

    v_k − ( ∑_{i≠k} c_i v_i ) = 0

Multiplying this relation by a nonzero value, c_k, we have

    c_k v_k − ( ∑_{i≠k} c_k c_i v_i ) = 0

Define a new set of coefficients x_i for i = 1, 2, . . . , n, where

    x_i = {  c_k,       for i = k
          { −c_k c_i,   for i ≠ k

so we have

    x_k v_k + ∑_{i≠k} x_i v_i = ∑_{i=1}^{n} x_i v_i = 0

Write this in matrix form:

    [ v_1 v_2 · · · v_n ] [ x_1, x_2, . . . , x_n ]^T = 0    (5)
    A x = 0    (6)

where there is a requirement that at least one coefficient of x is nonzero (namely x_k = c_k ≠ 0). The set {v_i} being linearly independent means that there is no x with at least one nonzero coordinate such that the matrix A = [ v_1 · · · v_n ] times x yields the zero vector. Since A0 = 0, there is always the solution x = 0, which is called the trivial solution. We conclude that the columns of A are linearly independent when x = 0 is the only solution to Ax = 0, and conversely that they are dependent if there are solutions other than x = 0.
We now use this method to check the example basis for linear independence. With

    v_1 = [1, 1, 1]^T,  v_2 = [1, 1, 0]^T,  v_3 = [1, 0, 0]^T

we look for x solving

    [ 1 1 1 ] [ x_1 ]
    [ 1 1 0 ] [ x_2 ] = 0
    [ 1 0 0 ] [ x_3 ]

The last row implies x_1 = 0. The second row implies x_1 + x_2 = 0, and since we know x_1 = 0, x_2 must also be 0. The first row says x_1 + x_2 + x_3 = 0, and so we conclude x_3 must also be 0. The only solution to Ax = 0 is x = 0, the trivial solution, so these columns are linearly independent and therefore form a minimal set. Since they are a minimal set of vectors spanning R^3, they are indeed a basis for R^3.
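The same conclusion can be reached numerically (a minimal sketch, not from the original notes; Python with NumPy): the columns are independent exactly when A has full column rank, i.e., when Ax = 0 has only the trivial solution.

    import numpy as np

    # Columns of A are the candidate basis vectors v1, v2, v3.
    A = np.array([[1.0, 1.0, 1.0],
                  [1.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0]])

    # Full column rank <=> the only solution of A x = 0 is x = 0
    # <=> the columns are linearly independent.
    assert np.linalg.matrix_rank(A) == A.shape[1]

    # Equivalently, all singular values are nonzero, so the null space
    # of A is trivial.
    singular_values = np.linalg.svd(A, compute_uv=False)
    assert np.all(singular_values > 1e-12)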
5 Finding a basis and properties of the basis
Example: Let V be the set of vectors

    V = { v ∈ R^3 : v = α [1, 1, 0]^T + β [1, 0, 0]^T,  α, β ∈ R }    (7)

One can check that this is a vector space by going through the properties.
Let's give a basis for V. One possibility:

    v_1 = [1, 1, 0]^T,  v_2 = [1, 0, 0]^T    (8)

We first check that they span V. They do, by the definition of V as the set of all linear combinations of these vectors. Next we check if these vectors are linearly independent. For just two vectors, this means there is no c ≠ 0 for which v_1 = c v_2. The two vectors are linearly independent, so this is a basis for V. Since there are two vectors in the basis, the dimension of V is 2.
Notice that although the vectors within the space are 3-dimensional, the vector space itself is said to have dimension 2, since that is the number of vectors needed to give a basis for the space. This V is called a subspace of R^3, since it has lower dimension than the space it is embedded in.
Can we find another basis for this V? We notice that the vectors in V are exactly those whose 3rd coordinate is zero. Therefore another choice of basis is

    v_1 = [1, 0, 0]^T,  v_2 = [0, 1, 0]^T    (9)

This basis is special. It is called orthonormal, since for i ≠ j, v_i^T v_j = 0, and for every i, ‖v_i‖ = 1. An orthonormal basis is one where all basis vectors have unit norm and are orthogonal to each other. We will see that these are very convenient bases.
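Both conditions are easy to verify numerically (a minimal sketch, not from the original notes; Python with NumPy):

    import numpy as np

    v1 = np.array([1.0, 0.0, 0.0])
    v2 = np.array([0.0, 1.0, 0.0])

    # Orthogonality: v_i^T v_j = 0 for i != j.
    assert np.isclose(v1 @ v2, 0.0)

    # Unit norm: ||v_i|| = 1 for every i.
    assert np.isclose(np.linalg.norm(v1), 1.0)
    assert np.isclose(np.linalg.norm(v2), 1.0)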
Remark on dimension: Now let V be given by

    V = { v ∈ R^3 : v = α [0, 0, 0]^T,  α ∈ R }    (10)

The span of 0 is a set with a single element, namely {0}. It would seem by our definition of dimension that this vector space V has dimension 1, because it can be described by all linear combinations of the zero vector. However, the convention is that this space has dimension 0. This makes sense because we think of a plane as having 2 dimensions and a line as having 1 dimension, so a point should have 0 dimension.
6 Inner Product
An inner product is a mapping of two vectors to a scalar, i.e., ⟨·, ·⟩ : V × V → F, which satisfies three properties.
i. Positivity:
    ∀ v ∈ V, ⟨v, v⟩ ≥ 0, and ⟨v, v⟩ = 0 ⟺ v = 0
ii. Conjugate symmetry:
    ∀ x, y ∈ V, ⟨x, y⟩ = ⟨y, x⟩*
iii. Linearity:
    ∀ x, y, z ∈ V, ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩
    ∀ x, y ∈ V, ∀ α ∈ F, ⟨αx, y⟩ = α⟨x, y⟩
Example:
We consider an inner product in R^2:

    ⟨v, u⟩ = v^T u

First check that this is indeed a scalar:

    v^T u = [ v_1  v_2 ] [ u_1, u_2 ]^T = v_1 u_1 + v_2 u_2

We can additionally check for positivity:

    ⟨v, v⟩ = v^T v = v_1^2 + v_2^2

which is greater than or equal to 0 for all real vectors, and can only be 0 when v = 0.
Conjugate symmetry:

    ⟨v, u⟩ = v^T u = v_1 u_1 + v_2 u_2 = u_1 v_1 + u_2 v_2 = ⟨u, v⟩

In this case, since the field is real, the inner product is simply symmetric.
Linearity:

    ⟨αv, u⟩ = (αv)^T u = α v^T u = α⟨v, u⟩
    ⟨v + y, u⟩ = (v + y)^T u = v^T u + y^T u = ⟨v, u⟩ + ⟨y, u⟩
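These properties can also be spot-checked numerically (a minimal sketch, not from the original notes; Python with NumPy, using randomly generated vectors):

    import numpy as np

    rng = np.random.default_rng(0)
    v, u, y = rng.standard_normal((3, 2))   # three random vectors in R^2
    alpha = 2.5

    def inner(a, b):
        # <a, b> = a^T b for real column vectors.
        return a @ b

    # Positivity: <v, v> >= 0, with equality only at v = 0.
    assert inner(v, v) >= 0.0

    # Symmetry (conjugation is trivial over the reals).
    assert np.isclose(inner(v, u), inner(u, v))

    # Linearity in the first argument.
    assert np.isclose(inner(alpha * v, u), alpha * inner(v, u))
    assert np.isclose(inner(v + y, u), inner(v, u) + inner(y, u))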