Lecture 4
Vector Spaces
4.1 Vectors in R^n
4.2 Vector Spaces
4.3 Subspaces of Vector Spaces
4.4 Spanning Sets and Linear Independence
4.5 Basis and Dimension
4.6 Rank of a Matrix and Systems of Linear Equations
4.7 Coordinates and Change of Basis
4.1 Vectors in R^n
An ordered n-tuple:
a sequence of n real numbers (x1, x2, ..., xn)
n-space: R^n
the set of all ordered n-tuples
Ex:
n = 1: R^1 = 1-space = the set of all real numbers
n = 2: R^2 = 2-space = the set of all ordered pairs of real numbers (x1, x2)
n = 3: R^3 = 3-space = the set of all ordered triples of real numbers (x1, x2, x3)
n = 4: R^4 = 4-space = the set of all ordered quadruples of real numbers (x1, x2, x3, x4)
Notes:
(1) An n-tuple (x1, x2, ..., xn) can be viewed as a point in R^n
    with the xi's as its coordinates.
(2) An n-tuple (x1, x2, ..., xn) can be viewed as a vector
    x = (x1, x2, ..., xn) in R^n with the xi's as its components.
Ex:
[Figure: (x1, x2) viewed as a point in the plane, and as a vector from (0, 0) to (x1, x2)]
Equal:
u = v if and only if u1 = v1, u2 = v2, ..., un = vn

Addition:
u + v = (u1 + v1, u2 + v2, ..., un + vn)

Scalar multiple:
cu = (cu1, cu2, ..., cun)

Notes:
The sum of two vectors and the scalar multiple of a vector
in R^n are called the standard operations in R^n.

Negative:
-v = (-v1, -v2, ..., -vn)

Zero vector:
0 = (0, 0, ..., 0)

Notes:
(1) The zero vector 0 in R^n is called the additive identity in R^n.
(2) The vector -v is called the additive inverse of v.
Ex 5: (Vector operations in R^4)
Let u = (2, -1, 5, 0), v = (4, 3, 1, -1), and w = (-6, 2, 0, 3) be
vectors in R^4.
Solve for x in each of the following equations.
(a) x = 2u - (v + 3w)
(b) 3(x + w) = 2u - v + x
Sol: (a)
x = 2u - (v + 3w)
  = 2u - v - 3w
  = (4, -2, 10, 0) - (4, 3, 1, -1) - (-18, 6, 0, 9)
  = (4 - 4 + 18, -2 - 3 - 6, 10 - 1 - 0, 0 + 1 - 9)
  = (18, -11, 9, -8)
(b) 3(x + w) = 2u - v + x
    3x + 3w = 2u - v + x
    3x - x = 2u - v - 3w
    2x = 2u - v - 3w
    x = u - (1/2)v - (3/2)w
      = (2, -1, 5, 0) - (2, 3/2, 1/2, -1/2) - (-9, 3, 0, 9/2)
      = (9, -11/2, 9/2, -4)
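As a quick numerical check of Ex 5, a short Python/NumPy sketch (not part of the original slides; assumes numpy is available):

import numpy as np

u = np.array([2, -1, 5, 0])
v = np.array([4, 3, 1, -1])
w = np.array([-6, 2, 0, 3])

print(2*u - (v + 3*w))    # (a) [ 18 -11   9  -8]
print(u - v/2 - 3*w/2)    # (b) [ 9.  -5.5  4.5 -4. ]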
Thm 4.2: (Properties of additive identity and additive inverse)
Let v be a vector in R^n and c be a scalar. Then the following properties are true.
(1) The additive identity is unique. That is, if u + v = v, then u = 0.
(2) The additive inverse of v is unique. That is, if v + u = 0, then u = -v.
(3) 0v = 0
(4) c0 = 0
(5) If cv = 0, then c = 0 or v = 0.
(6) -(-v) = v
Linear combination:
The vector x is called a linear combination of v1, v2, ..., vn
if it can be expressed in the form
x = c1v1 + c2v2 + ... + cnvn,  where c1, c2, ..., cn are scalars.
Ex 6:
Given x = (-1, -2, -2), u = (0, 1, 4), v = (-1, 1, 2), and
w = (3, 1, 2) in R^3, show that x is a linear combination of u, v, and w.
That is, we need to find a, b, and c such that x = au + bv + cw.
Sol: Comparing components gives
     -b + 3c = -1
a +   b +  c = -2
4a + 2b + 2c = -2
=> a = 1, b = -2, c = -1.   Thus x = u - 2v - w.
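The coefficients a, b, c can also be found by solving the 3x3 system numerically; a NumPy sketch (names are illustrative, not from the lecture):

import numpy as np

u = np.array([0, 1, 4])
v = np.array([-1, 1, 2])
w = np.array([3, 1, 2])
x = np.array([-1, -2, -2])

M = np.column_stack([u, v, w])          # columns are u, v, w, so M @ [a, b, c] = x
a, b, c = np.linalg.solve(M, x)
print(a, b, c)                          # 1.0 -2.0 -1.0
print(np.allclose(a*u + b*v + c*w, x))  # True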
4.2 Vector Spaces
Vector spaces:
Let V be a set on which two operations (vector addition and
scalar multiplication) are defined. If the following axioms are
satisfied for every u, v, and w in V and every pair of scalars
(real numbers) c and d, then V is called a vector space.
Addition:
(1) u+v is in V
(2) u+v=v+u
(3) u+(v+w)=(u+v)+w
(4) V has a zero vector 0 such that for every u in V, u+0=u
(5) For every u in V, there is a vector in V denoted by –u
such that u+(–u)=0
Scalar multiplication:
(6) cu is in V.
(7) c(u + v) = cu + cv
(8) (c + d)u = cu + du
(9) c(du) = (cd)u
(10) 1(u) = u
Notes:
(1) A vector space consists of four entities:
    a set of vectors, a set of scalars, and two operations.
    V: nonempty set of vectors
    c: scalar
    (u, v) -> u + v : vector addition
    (c, u) -> cu : scalar multiplication
    (V, +, ·) is called a vector space.
(2) Matrix space: V = M(m×n) (the set of all m×n matrices with real entries)
    Ex (m = n = 2):
    [u11 u12]   [v11 v12]   [u11+v11 u12+v12]
    [u21 u22] + [v21 v22] = [u21+v21 u22+v22]    vector addition

      [u11 u12]   [ku11 ku12]
    k [u21 u22] = [ku21 ku22]                    scalar multiplication
(3) n-th degree polynomial space: V = P_n(x)
    (the set of all real polynomials of degree n or less)
    Given two polynomials p, q in P_n(x),
        p(x) = a0 + a1x + ... + anx^n
        q(x) = b0 + b1x + ... + bnx^n
    define
        p(x) + q(x) = (a0 + b0) + (a1 + b1)x + ... + (an + bn)x^n
        kp(x) = ka0 + ka1x + ... + kanx^n
    (see the coefficient-vector sketch after example (4))
(4) Function space: V = C(-∞, ∞)
    (the set of all real-valued continuous functions defined on the entire real line)
        (f + g)(x) = f(x) + g(x)
        (kf)(x) = kf(x)
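For the polynomial space P_n(x) in example (3) above, a polynomial can be stored as its coefficient vector (a0, a1, ..., an), so the two operations are componentwise; a small illustrative Python sketch (the sample polynomials are not from the lecture):

import numpy as np

p = np.array([1.0, 0.0, 2.0])    # p(x) = 1 + 2x^2   (coefficients a0, a1, a2)
q = np.array([3.0, -1.0, 0.0])   # q(x) = 3 - x
k = 2.0

print(p + q)   # [ 4. -1.  2.]  ->  (p + q)(x) = 4 - x + 2x^2
print(k * p)   # [ 2.  0.  4.]  ->  (kp)(x)    = 2 + 4x^2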
Notes: To show that a set is not a vector space, you need
only find one axiom that is not satisfied.
Ex 6: The set V of all integers (with the standard operations) is not a vector space.
Pf: 1 ∈ V and 1/2 is a scalar,
but (1/2)(1) = 1/2 ∉ V  (1/2 is a noninteger)
=> V is not closed under scalar multiplication, so V is not a vector space.
Ex 8:
V = R^2 = the set of all ordered pairs of real numbers
vector addition: (u1, u2) + (v1, v2) = (u1 + v1, u2 + v2)
scalar multiplication: c(u1, u2) = (cu1, 0)
Verify that V is not a vector space.
Sol:
1(1, 1) = (1, 0) ≠ (1, 1)   (axiom 10 fails)
=> the set (together with the two given operations) is
not a vector space
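A small Python sketch of the failed axiom in Ex 8 (the function name scalar_mul is illustrative, not from the lecture):

def scalar_mul(c, u):
    # the nonstandard scalar multiplication of Ex 8: c(u1, u2) = (c*u1, 0)
    return (c * u[0], 0)

u = (1, 1)
print(scalar_mul(1, u))        # (1, 0)
print(scalar_mul(1, u) == u)   # False -> axiom (10) fails, so V is not a vector space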
4.3 Subspaces of Vector Spaces
Subspace:
(V, +, ·): a vector space
W ⊆ V, W ≠ ∅: a nonempty subset of V
(W, +, ·): a vector space (under the operations of addition and
scalar multiplication defined in V)
=> W is a subspace of V
Trivial subspaces:
Every vector space V has at least two subspaces.
(1) The zero vector space {0} is a subspace of V.
(2) V is a subspace of V.
Ex: Subspaces of R^2
(1) {(0, 0)}: the zero subspace
(2) Lines through the origin
(3) R^2 itself
Ex: Subspaces of R^3
(1) {(0, 0, 0)}: the zero subspace
(2) Lines through the origin
(3) Planes through the origin
(4) R^3 itself
Ex 2: (A subspace of M2×2)
Let W be the set of all 2×2 symmetric matrices. Show that
W is a subspace of the vector space M2×2, with the standard
operations of matrix addition and scalar multiplication.
Sol:
W ⊆ M2×2, and M2×2 is a vector space.
Let A1, A2 ∈ W (i.e., A1^T = A1, A2^T = A2) and let k be a scalar.
(A1 + A2)^T = A1^T + A2^T = A1 + A2  =>  A1 + A2 ∈ W   (closed under addition)
(kA1)^T = kA1^T = kA1                =>  kA1 ∈ W       (closed under scalar multiplication)
Thus W is a nonempty subset of M2×2 closed under both operations, so W is a subspace of M2×2.
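A numerical illustration of the closure argument in Ex 2 (NumPy sketch; the specific matrices are illustrative, not from the lecture):

import numpy as np

A1 = np.array([[1., 2.], [2., 3.]])     # a symmetric 2x2 matrix
A2 = np.array([[0., -1.], [-1., 4.]])   # another symmetric 2x2 matrix
k = 3.0

S = A1 + A2
print(np.allclose(S, S.T))               # True: the sum is again symmetric
print(np.allclose(k * A1, (k * A1).T))   # True: a scalar multiple is again symmetric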
Ex 3: (The set of singular matrices is not a subspace of M2×2)
Let W be the set of singular matrices of order 2. Show that
W is not a subspace of M2×2 with the standard operations.
Sol:
    [1 0]            [0 0]
A = [0 0] ∈ W,   B = [0 1] ∈ W
        [1 0]
A + B = [0 1] ∉ W   (A + B is nonsingular)
=> W is not a subspace of M2×2

Ex: (A subset of R^2 that is not a subspace)
Sol:
Let u = (1, 1) ∈ W.
(-1)u = (-1)(1, 1) = (-1, -1) ∉ W   (not closed under scalar multiplication)
=> W is not a subspace of R^2
Ex 6: (Determining subspaces of R^2)
Which of the following two subsets is a subspace of R^2?
(a) The set of points on the line given by x + 2y = 0.
(b) The set of points on the line given by x + 2y = 1.
Sol:
(a) W = {(x, y) : x + 2y = 0} = {(-2t, t) : t ∈ R}
    Let v1 = (-2t1, t1) ∈ W and v2 = (-2t2, t2) ∈ W.
    v1 + v2 = (-2(t1 + t2), t1 + t2) ∈ W     (closed under addition)
    cv1 = (-2(ct1), ct1) ∈ W                 (closed under scalar multiplication)
    => W is a subspace of R^2
(b) Let v = (1, 0) ∈ W   (since 1 + 2(0) = 1).
    (-1)v = (-1, 0) ∉ W  (since -1 + 2(0) ≠ 1)
    => W is not a subspace of R^2
Ex 8: (Determining subspaces of R^3)
Which of the following subsets is a subspace of R^3?
(a) W = {(x1, x2, 1) : x1, x2 ∈ R}
(b) W = {(x1, x1 + x3, x3) : x1, x3 ∈ R}
Sol:
(a) Let v = (0, 0, 1) ∈ W.
    (-1)v = (0, 0, -1) ∉ W
    => W is not a subspace of R^3
(b) Let v = (v1, v1 + v3, v3) ∈ W and u = (u1, u1 + u3, u3) ∈ W.
    v + u = (v1 + u1, (v1 + u1) + (v3 + u3), v3 + u3) ∈ W
    kv = (kv1, kv1 + kv3, kv3) ∈ W
    => W is a subspace of R^3
4.4 Spanning Sets and Linear Independence
Linear combination: (recall)
A vector v in a vector space V is called a linear combination of
the vectors u1, u2, ..., uk in V if v can be written in the form
v = c1u1 + c2u2 + ... + ckuk,  where c1, c2, ..., ck are scalars.

Ex: (Finding a linear combination)
Given v1 = (1, 2, 3), v2 = (0, 1, 2), v3 = (-1, 0, 1) in R^3.
(a) Write w = (1, 1, 1) as a linear combination of v1, v2, v3, if possible.
(b) Write w = (1, -2, 2) as a linear combination of v1, v2, v3, if possible.
Sol: (a)  w = c1v1 + c2v2 + c3v3
 c1       -  c3 = 1         [1 0 -1 | 1]                 [1 0 -1 |  1]
2c1 +  c2       = 1   =>    [2 1  0 | 1]   --G.J.E.-->   [0 1  2 | -1]
3c1 + 2c2 +  c3 = 1         [3 2  1 | 1]                 [0 0  0 |  0]
=> c1 = 1 + t, c2 = -1 - 2t, c3 = t
(infinitely many solutions; e.g. t = 0 gives w = v1 - v2)

(b)  w = c1v1 + c2v2 + c3v3
[1 0 -1 |  1]                 [1 0 -1 |  1]
[2 1  0 | -2]   --G.J.E.-->   [0 1  2 | -4]
[3 2  1 |  2]                 [0 0  0 |  7]
The last row gives 0 = 7, so the system is inconsistent.
=> w cannot be written as a linear combination of v1, v2, v3.
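A consistency check for parts (a) and (b) using ranks; a NumPy sketch (not part of the original lecture):

import numpy as np

A = np.column_stack([[1, 2, 3], [0, 1, 2], [-1, 0, 1]])   # columns v1, v2, v3
for w in ([1, 1, 1], [1, -2, 2]):
    Aw = np.column_stack([A, w])
    consistent = np.linalg.matrix_rank(A) == np.linalg.matrix_rank(Aw)
    print(w, "is a linear combination of v1, v2, v3:", consistent)
# [1, 1, 1] -> True,  [1, -2, 2] -> False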
The span of a set: span(S)
If S = {v1, v2, ..., vk} is a set of vectors in a vector space V,
then the span of S is the set of all linear combinations of
the vectors in S:
span(S) = {c1v1 + c2v2 + ... + ckvk : c1, c2, ..., ck ∈ R}
Notes:
(1) span(∅) = {0}
(2) S ⊆ span(S)
(3) If S1, S2 ⊆ V and S1 ⊆ S2, then span(S1) ⊆ span(S2).
Ex 5: (A spanning set for R^3)
Show that the set S = {(1, 2, 3), (0, 1, 2), (-2, 0, 1)} spans R^3.
Sol:
We must determine whether an arbitrary vector u = (u1, u2, u3)
in R^3 can be written as a linear combination of v1, v2, and v3:
u = c1v1 + c2v2 + c3v3, i.e.
 c1       - 2c3 = u1
2c1 +  c2       = u2
3c1 + 2c2 +  c3 = u3
The problem thus reduces to determining whether this system
is consistent for all values of u1, u2, and u3.
The coefficient matrix
    [1 0 -2]
A = [2 1  0]   has det(A) = -1 ≠ 0,
    [3 2  1]
so the system has a solution for every u.
=> span(S) = R^3
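A NumPy sketch of the determinant/consistency check in Ex 5 (the sample vector u is illustrative):

import numpy as np

v1, v2, v3 = np.array([1, 2, 3]), np.array([0, 1, 2]), np.array([-2, 0, 1])
A = np.column_stack([v1, v2, v3])
print(np.linalg.det(A))                           # -1.0 (nonzero -> consistent for every u)
print(np.linalg.solve(A, np.array([1, 1, 1])))    # c1, c2, c3 for the sample vector u = (1, 1, 1)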
Thm 4.6: (Properties of span(S))
If S = {v1, v2, ..., vk} is a set of vectors in a vector space V, then
(a) span(S) is a subspace of V, and
(b) span(S) is the smallest subspace of V that contains S; that is,
    every other subspace of V that contains S must contain span(S).
Ex 9: (Testing for linear independence)
Determine whether the following set of vectors in P2 is L.I. or L.D.
S = {1 + x - 2x^2, 2 + 5x - x^2, x + x^2}
        v1            v2          v3
Sol:
c1v1 + c2v2 + c3v3 = 0
i.e. c1(1 + x - 2x^2) + c2(2 + 5x - x^2) + c3(x + x^2) = 0 + 0x + 0x^2
   c1 + 2c2      = 0
   c1 + 5c2 + c3 = 0
 -2c1 -  c2 + c3 = 0
Gauss-Jordan elimination shows this system has infinitely many solutions
(i.e., the system has nontrivial solutions, e.g. c1 = 2, c2 = -1, c3 = 3).
=> S is linearly dependent.
Ex: (Testing for linear independence in M2×2)
Determine whether the following set of 2×2 matrices is L.I. or L.D.
      [2 1]    [3 0]    [1 0]
S = { [0 1] ,  [2 1] ,  [2 0] }
       v1       v2       v3
Sol:
c1v1 + c2v2 + c3v3 = 0
        [2 1]      [3 0]      [1 0]   [0 0]
i.e. c1 [0 1] + c2 [2 1] + c3 [2 0] = [0 0]

2c1 + 3c2 +  c3 = 0        [2 3 1 | 0]                [1 0 0 | 0]
 c1             = 0        [1 0 0 | 0]   --G.J.E.-->  [0 1 0 | 0]
      2c2 + 2c3 = 0        [0 2 2 | 0]                [0 0 1 | 0]
 c1 +  c2       = 0        [1 1 0 | 0]                [0 0 0 | 0]

This system has only the trivial solution c1 = c2 = c3 = 0.
=> S is linearly independent.
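Linear independence can also be checked by comparing the rank of the coefficient matrix with the number of vectors; a NumPy sketch for the system above:

import numpy as np

# coefficient matrix of c1*v1 + c2*v2 + c3*v3 = 0 from the example above
A = np.array([[2, 3, 1],
              [1, 0, 0],
              [0, 2, 2],
              [1, 1, 0]])
print(np.linalg.matrix_rank(A) == A.shape[1])   # True -> only the trivial solution, so S is L.I.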
Thm 4.8: (A property of linearly dependent sets)
A set S = {v1, v2, ..., vk}, k ≥ 2, is linearly dependent if and only if
at least one of the vectors vi can be written as a linear combination
of the other vectors in S.
Pf:
(=>) Suppose S is linearly dependent.
Then c1v1 + c2v2 + ... + ckvk = 0 has a nontrivial solution,
so ci ≠ 0 for some i.
=> vi = -(c1/ci)v1 - ... - (c(i-1)/ci)v(i-1) - (c(i+1)/ci)v(i+1) - ... - (ck/ci)vk
(<=) Suppose vi = d1v1 + ... + d(i-1)v(i-1) + d(i+1)v(i+1) + ... + dkvk.
=> d1v1 + ... + d(i-1)v(i-1) - vi + d(i+1)v(i+1) + ... + dkvk = 0  (a nontrivial solution)
=> S is linearly dependent.
Basis:
A set of vectors S = {v1, v2, ..., vn} in a vector space V is called a basis for V
if (i) S spans V and (ii) S is linearly independent.
Notes:
(1) Ø is a basis for {0}.
(2) The standard basis for R^3:
    {i, j, k}, where i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1).
(3) The standard basis for R^n:
    {e1, e2, ..., en}, where e1 = (1, 0, ..., 0), e2 = (0, 1, ..., 0), ..., en = (0, 0, ..., 1).
    Ex: the standard basis for R^4: {(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)}
Uniqueness of basis representation:
If S = {v1, v2, ..., vn} is a basis for V and
x = c1v1 + c2v2 + ... + cnvn = b1v1 + b2v2 + ... + bnvn, then the
linear independence of S gives c1 = b1, c2 = b2, ..., cn = bn
(i.e., the representation of x relative to the basis S is unique).
Thm 4.9: (Bases and linear dependence)
If S = {v1, v2, ..., vn} is a basis for a vector space V, then every
set containing more than n vectors in V is linearly dependent.
Pf:
Let S1 = {u1, u2, ..., um}, m > n.
Since span(S) = V, each ui ∈ V can be written as
u1 = c11v1 + c21v2 + ... + cn1vn
u2 = c12v1 + c22v2 + ... + cn2vn
...
um = c1mv1 + c2mv2 + ... + cnmvn
Let k1u1 + k2u2 + ... + kmum = 0.
=> d1v1 + d2v2 + ... + dnvn = 0   (where di = ci1k1 + ci2k2 + ... + cimkm)
S is L.I.
=> di = 0 for all i, i.e.
c11k1 + c12k2 + ... + c1mkm = 0
c21k1 + c22k2 + ... + c2mkm = 0
...
cn1k1 + cn2k2 + ... + cnmkm = 0
Known: If a homogeneous system has fewer equations than variables,
then it must have infinitely many solutions.
m > n  =>  k1u1 + k2u2 + ... + kmum = 0 has nontrivial solutions
=> S1 is linearly dependent.
Thm 4.10: (Number of vectors in a basis)
If a vector space V has one basis with n vectors, then every
basis for V has n vectors. (All bases for a finite-dimensional
vector space have the same number of vectors.)
Pf:
Let S = {v1, v2, ..., vn} and S' = {u1, u2, ..., um} be two bases for a vector space V.
S is a basis (so S spans V) and S' is L.I.   --Thm 4.9-->   m ≤ n
S' is a basis (so S' spans V) and S is L.I.  --Thm 4.9-->   n ≤ m
=> n = m
Finite dimensional:
A vector space V is called finite dimensional
if it has a basis consisting of a finite number of vectors.
Infinite dimensional:
If a vector space V is not finite dimensional,
then it is called infinite dimensional.
Dimension:
The dimension of a finite-dimensional vector space V is
defined to be the number of vectors in a basis for V.
(V: a vector space, S: a basis for V  =>  dim(V) = #(S))
Notes: (dim(V) = n)
[Figure: bases are exactly the generating sets that are also linearly independent sets]
(1) dim({0}) = 0 = #(Ø)
Ex:
(1) Vector space R^n      - basis {e1, e2, ..., en}                      - dim(R^n) = n
(2) Vector space M(m×n)   - basis {Eij : i = 1, ..., m, j = 1, ..., n}   - dim(M(m×n)) = mn
(3) Vector space P_n(x)   - basis {1, x, x^2, ..., x^n}                  - dim(P_n(x)) = n + 1
(4) Vector space P(x)     - basis {1, x, x^2, ...}                       - dim(P(x)) = ∞
Ex 9: (Finding the dimension of a subspace)
(a) W={(d, c–d, c): c and d are real numbers}
(b) W={(2b, b, 0): b is a real number}
Sol: (Note: Find a set of L.I. vectors that spans the subspace)
(a) (d, c– d, c) = c(0, 1, 1) + d(1, – 1, 0)
=> S = {(0, 1, 1) , (1, – 1, 0)} (S is L.I. and S spans W)
=> S is a basis for W
=> dim(W) = #(S) = 2
(b) (2b, b, 0) = b(2, 1, 0)
=> S = {(2, 1, 0)} spans W and S is L.I.
=> S is a basis for W
=> dim(W) = #(S) = 1
Thm 4.11: (Basis tests in an n-dimensional space)
Let V be a vector space of dimension n.
(1) If S = {v1, v2, ..., vn} is a linearly independent set of
    vectors in V, then S is a basis for V.
(2) If S = {v1, v2, ..., vn} spans V, then S is a basis for V.
[Figure: when dim(V) = n, bases = generating sets ∩ linearly independent sets]
Recall:
Row vectors of A:
      [a11 a12 ... a1n]        A1 = [a11, a12, ..., a1n] = A(1)
A =   [a21 a22 ... a2n]        A2 = [a21, a22, ..., a2n] = A(2)
      [ ...           ]        ...
      [am1 am2 ... amn]        Am = [am1, am2, ..., amn] = A(m)
Four fundamental subspaces
Let A be an m×n matrix.
Row space:
The row space of A is the subspace of R^n spanned by
the row vectors of A.
RS(A) = {α1 A(1) + α2 A(2) + ... + αm A(m) : α1, α2, ..., αm ∈ R}
Column space:
The column space of A is the subspace of R^m spanned by
the column vectors of A.
Compare notations:
    Here        In your textbook
    RS(A)       C(A^T)
    CS(A)       C(A)
    NS(A)       N(A)
    NS(A^T)     N(A^T)
4.6 Rank of a Matrix and Systems of Linear Equations
Thm 4.13: (Basis for the row space of a matrix)
If a matrix A is row-equivalent to a matrix B in row-echelon
form, then the nonzero row vectors of B form a basis for the
row space of A.
Ex 2: (Finding a basis for a row space)
Find a basis for the row space of the matrix A below.
Sol: Apply Gaussian elimination to A:
      [ 1  3  1  3]                       [1 3 1 3]  w1
      [ 0  1  1  0]                       [0 1 1 0]  w2
A =   [-3  0  6 -1]   --Gauss E.-->   B = [0 0 0 1]  w3
      [ 3  4 -2  1]                       [0 0 0 0]
      [ 2  0 -4 -2]                       [0 0 0 0]
       a1 a2 a3 a4                         b1 b2 b3 b4
By Thm 4.13, a basis for RS(A) = {the nonzero row vectors of B}
                               = {w1, w2, w3} = {(1, 3, 1, 3), (0, 1, 1, 0), (0, 0, 0, 1)}
Notes:
(1) b3 = -2b1 + b2  =>  a3 = -2a1 + a2
(2) {b1, b2, b4} is L.I.  =>  {a1, a2, a4} is L.I.
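A sketch of the same computation with SymPy (assuming sympy is available; rref returns a reduced echelon form, so its nonzero rows may differ from w1, w2, w3 above but span the same row space):

from sympy import Matrix

A = Matrix([[ 1, 3,  1,  3],
            [ 0, 1,  1,  0],
            [-3, 0,  6, -1],
            [ 3, 4, -2,  1],
            [ 2, 0, -4, -2]])
R, pivots = A.rref()                          # reduced row-echelon form and pivot columns
basis = [R.row(i) for i in range(A.rank())]   # nonzero rows of R: a basis for RS(A)
print(basis)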
Ex 3: (Finding a basis for a subspace)
Find a basis for the subspace of R^3 spanned by
        v1          v2         v3
S = {(-1, 2, 5), (3, 0, 3), (5, 1, 8)}
Sol:
      [-1  2  5]  v1                  [1 -2 -5]  w1
A =   [ 3  0  3]  v2   --G.E.-->  B = [0  1  3]  w2
      [ 5  1  8]  v3                  [0  0  0]
a basis for span(S) = a basis for RS(A)
                    = {the nonzero row vectors of B}
                    = {w1, w2} = {(1, -2, -5), (0, 1, 3)}
Ex 4: (Finding a basis for the column space of a matrix)
Find a basis for the column space of the matrix A given in Ex 2.
Sol. 1: Use CS(A) = RS(A^T) and row-reduce A^T:
        [1  0 -3  3  2]                    [1 0 -3  3  2]  w1
A^T =   [3  1  0  4  0]   --Gauss E.-->    [0 1  9 -5 -6]  w2
        [1  1  6 -2 -4]                    [0 0  1 -1 -1]  w3
        [3  0 -1  1 -2]                    [0 0  0  0  0]
a basis for CS(A) = a basis for RS(A^T)
                  = {the nonzero row vectors of the reduced matrix}
                  = {w1, w2, w3}
i.e. the column vectors
(1, 0, -3, 3, 2)^T, (0, 1, 9, -5, -6)^T, (0, 0, 1, -1, -1)^T
form a basis for the column space of A.
Note: This basis is not a subset of {c1, c2, c3, c4}, the set of columns of A.
Sol. 2: Use the pivot columns of A directly.
      [ 1  3  1  3]                     [1 3 1 3]
      [ 0  1  1  0]                     [0 1 1 0]
A =   [-3  0  6 -1]   --G.E.-->   B =   [0 0 0 1]
      [ 3  4 -2  1]                     [0 0 0 0]
      [ 2  0 -4 -2]                     [0 0 0 0]
       c1 c2 c3 c4                       v1 v2 v3 v4
Leading 1's occur in columns 1, 2, and 4
=> {v1, v2, v4} is a basis for CS(B)
=> {c1, c2, c4} is a basis for CS(A)
Notes:
(1) This basis is a subset of {c1, c2, c3, c4}.
(2) v3 = -2v1 + v2, thus c3 = -2c1 + c2.
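The pivot columns reported by rref identify which columns of A form a basis for CS(A); a SymPy sketch mirroring Sol. 2:

from sympy import Matrix

A = Matrix([[ 1, 3,  1,  3],
            [ 0, 1,  1,  0],
            [-3, 0,  6, -1],
            [ 3, 4, -2,  1],
            [ 2, 0, -4, -2]])
_, pivots = A.rref()
print(pivots)                        # (0, 1, 3) -> columns c1, c2, c4
print([A.col(j) for j in pivots])    # those columns of A form a basis for CS(A)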
Thm 4.14: (Solutions of a homogeneous system)
If A is an m×n matrix, then the set of all solutions of the
homogeneous system of linear equations Ax = 0 is a subspace
of R^n called the nullspace of A:  NS(A) = {x ∈ R^n : Ax = 0}.
Pf:
NS(A) ⊆ R^n
NS(A) ≠ ∅   (∵ A0 = 0)
Let x1, x2 ∈ NS(A)  (i.e., Ax1 = 0, Ax2 = 0) and let c be a scalar.
Then (1) A(x1 + x2) = Ax1 + Ax2 = 0 + 0 = 0     (closed under addition)
     (2) A(cx1) = c(Ax1) = c(0) = 0             (closed under scalar multiplication)
Thus NS(A) is a subspace of R^n.
Notes: The nullspace of A is also called the solution space of
the homogeneous system Ax = 0.
Thm 4.15: (Row and column spaces have equal dimensions)
If A is an m×n matrix, then the row space and the column
space of A have the same dimension:
dim(RS(A)) = dim(CS(A))
Rank:
The dimension of the row (or column) space of a matrix A
is called the rank of A and is denoted by rank(A).
Nullity:
The dimension of the nullspace of A is called the nullity of A:
nullity(A) = dim(NS(A))
Thm 4.16: (Dimension of the solution space)
If A is an m×n matrix of rank r, then the dimension of
the solution space of Ax = 0 is n - r. That is,
n = rank(A) + nullity(A)
Notes:
(1) rank(A): the number of leading variables in the solution of Ax = 0
    (the number of nonzero rows in the row-echelon form of A).
(2) nullity(A): the number of free variables in the solution of Ax = 0.
Notes:
If A is an m×n matrix and rank(A) = r, then
dim(RS(A)) = dim(CS(A)) = rank(A) = r  and  dim(NS(A)) = nullity(A) = n - r.
Ex 7: (Rank and nullity of a matrix)
Let the column vectors of the matrix A be denoted by a1, a2, a3, a4, and a5.
      [ 1  0 -2  1   0]
      [ 0 -1 -3  1   3]
A =   [-2 -1  1 -1   3]
      [ 0  3  9  0 -12]
       a1 a2 a3 a4  a5
(a) Find the rank and nullity of A.
(b) Find a subset of the column vectors of A that forms a basis for
    the column space of A.
(c) If possible, write the third column of A as a linear combination
    of the first two columns.
Sol: Let B be the reduced row-echelon form of A:
      [ 1  0 -2  1   0]                      [1 0 -2 0  1]
      [ 0 -1 -3  1   3]                      [0 1  3 0 -4]
A =   [-2 -1  1 -1   3]   --G.J.E.-->  B =   [0 0  0 1 -1]
      [ 0  3  9  0 -12]                      [0 0  0 0  0]
       a1 a2 a3 a4  a5                        b1 b2 b3 b4 b5
(a) rank(A) = 3   (the number of nonzero rows in B)
    nullity(A) = n - rank(A) = 5 - 3 = 2
(b) The leading 1's in B occur in columns 1, 2, and 4.
=> {b1, b2, b4} is a basis for CS(B)
=> {a1, a2, a4} is a basis for CS(A), where
a1 = (1, 0, -2, 0), a2 = (0, -1, -1, 3), a4 = (1, 1, -1, 0).
(c) b3 = -2b1 + 3b2, so a3 = -2a1 + 3a2.
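A SymPy sketch of the rank/nullity computation in Ex 7 (exact arithmetic, assuming sympy is available):

from sympy import Matrix

A = Matrix([[ 1,  0, -2,  1,   0],
            [ 0, -1, -3,  1,   3],
            [-2, -1,  1, -1,   3],
            [ 0,  3,  9,  0, -12]])
r = A.rank()
print(r, A.cols - r)   # 3 2 -> rank(A) = 3, nullity(A) = 5 - 3 = 2
print(A.rref()[1])     # (0, 1, 3) -> pivot columns, so {a1, a2, a4} is a basis for CS(A)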
Ex 8: (Finding the solution set of a nonhomogeneous system)
Find the set of all solution vectors of the system of linear equations
 x1       - 2x3 +  x4 =  5
3x1 +  x2 - 5x3       =  8
 x1 + 2x2       - 5x4 = -9
Sol:
[1 0 -2  1 |  5]                 [1 0 -2  1 |  5]
[3 1 -5  0 |  8]   --G.J.E.-->   [0 1  1 -3 | -7]
[1 2  0 -5 | -9]                 [0 0  0  0 |  0]
Let x3 = s and x4 = t. Then
x1 = 2s -  t + 5
x2 = -s + 3t - 7
x3 =  s + 0t + 0     i.e.  x = s(2, -1, 1, 0) + t(-1, 3, 0, 1) + (5, -7, 0, 0)
x4 = 0s +  t + 0             = s u1 + t u2 + xp
where xp = (5, -7, 0, 0) is a particular solution vector of Ax = b,
and u1, u2 span the solution space of the homogeneous system Ax = 0.
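A NumPy sketch verifying the structure x = xp + s u1 + t u2 found in Ex 8:

import numpy as np

A = np.array([[1, 0, -2,  1],
              [3, 1, -5,  0],
              [1, 2,  0, -5]])
b = np.array([5, 8, -9])

xp = np.array([5, -7, 0, 0])    # particular solution of Ax = b
u1 = np.array([2, -1, 1, 0])    # solutions of the homogeneous system Ax = 0
u2 = np.array([-1, 3, 0, 1])

print(np.allclose(A @ xp, b))                     # True
print(A @ u1, A @ u2)                             # [0 0 0] [0 0 0]
print(np.allclose(A @ (xp + 2*u1 - 3*u2), b))     # True: every xp + s*u1 + t*u2 solves Ax = b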
Thm 4.18: (Solutions of a system of linear equations)
The system of linear equations Ax = b is consistent if and only
if b is in the column space of A.
Pf:
Let
      [a11 a12 ... a1n]        [x1]             [b1]
A =   [a21 a22 ... a2n],   x = [x2],   and  b = [b2]
      [ ...           ]        [...]            [...]
      [am1 am2 ... amn]        [xn]             [bm]
be the coefficient matrix, the column matrix of unknowns,
and the right-hand side, respectively, of the system Ax = b. Then
       [a11x1 + a12x2 + ... + a1nxn]        [a11]      [a12]            [a1n]
Ax =   [a21x1 + a22x2 + ... + a2nxn]  =  x1 [a21] + x2 [a22] + ... + xn [a2n]
       [ ...                       ]        [...]      [...]            [...]
       [am1x1 + am2x2 + ... + amnxn]        [am1]      [am2]            [amn]
Hence Ax = b has a solution if and only if b can be written as a linear
combination of the columns of A, i.e., if and only if b is in CS(A).
Note:
If rank([A b]) = rank(A),
then the system Ax = b is consistent.
Ex: Determine whether b is in the column space of A, where the system Ax = b is
 x1 +  x2 -  x3 = -1
 x1       +  x3 =  3
3x1 + 2x2 -  x3 =  1
Sol:
      [1 1 -1]                  [1 0  1]
A =   [1 0  1]   --G.J.E.-->    [0 1 -2]
      [3 2 -1]                  [0 0  0]

        [1 1 -1 | -1]                  [1 0  1 |  3]
[A b] = [1 0  1 |  3]   --G.J.E.-->    [0 1 -2 | -4]
        [3 2 -1 |  1]                  [0 0  0 |  0]
         c1 c2 c3  b                    w1 w2 w3  v
v = 3w1 - 4w2
=> b = 3c1 - 4c2 + 0c3   (b is in the column space of A, so Ax = b is consistent)
Check:
rank(A) = rank([A b]) = 2
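The rank test rank([A|b]) = rank(A) can be carried out numerically; a NumPy sketch for this example:

import numpy as np

A = np.array([[1, 1, -1],
              [1, 0,  1],
              [3, 2, -1]])
b = np.array([[-1], [3], [1]])

Ab = np.hstack([A, b])
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(Ab))   # 2 2 -> consistent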
Summary of equivalent conditions for square matrices:
If A is an n×n matrix, then the following conditions are equivalent.
(1) A is invertible.
(2) Ax = b has a unique solution for every n×1 matrix b.
(3) Ax = 0 has only the trivial solution.
(4) A is row-equivalent to In.
(5) det(A) ≠ 0
(6) rank(A) = n
(7) The n row vectors of A are linearly independent.
(8) The n column vectors of A are linearly independent.
4.7 Coordinates and Change of Basis
Coordinate representation relative to a basis:
Let B = {v1, v2, ..., vn} be an ordered basis for a vector space V
and let x be a vector in V such that
x = c1v1 + c2v2 + ... + cnvn.
The scalars c1, c2, ..., cn are called the coordinates of x relative
to the basis B. The coordinate matrix (or coordinate vector)
of x relative to B is the column matrix in R^n whose components
are the coordinates of x:
        [c1]
[x]B =  [c2]
        [...]
        [cn]
Ex 1: (Coordinates and components in R^n)
Find the coordinate matrix of x = (-2, 1, 3) in R^3
relative to the standard basis
S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.
Sol:
x = (-2, 1, 3) = (-2)(1, 0, 0) + (1)(0, 1, 0) + (3)(0, 0, 1)
          [-2]
=> [x]S = [ 1]
          [ 3]
Ex 2: (Finding a coordinate matrix relative to a nonstandard basis)
Find the coordinate matrix of x = (1, 2, -1) in R^3 relative to the
(nonstandard) basis B' = {u1, u2, u3} = {(1, 0, 1), (0, -1, 2), (2, 3, -5)}.
Sol: Write x = c1u1 + c2u2 + c3u3:
 c1       + 2c3 =  1            [1  0  2] [c1]   [ 1]
     - c2 + 3c3 =  2      i.e.  [0 -1  3] [c2] = [ 2]
 c1 + 2c2 - 5c3 = -1            [1  2 -5] [c3]   [-1]

[1  0  2 |  1]                  [1 0 0 |  5]                 [ 5]
[0 -1  3 |  2]   --G.J.E.-->    [0 1 0 | -8]   => [x]B' =    [-8]
[1  2 -5 | -1]                  [0 0 1 | -2]                 [-2]
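Finding [x]B' amounts to solving B'c = x with the basis vectors as columns; a NumPy sketch for Ex 2:

import numpy as np

u1, u2, u3 = np.array([1, 0, 1]), np.array([0, -1, 2]), np.array([2, 3, -5])
x = np.array([1, 2, -1])

Bp = np.column_stack([u1, u2, u3])   # basis vectors of B' as columns
print(np.linalg.solve(Bp, x))        # [ 5. -8. -2.]  ->  [x]B' = (5, -8, -2)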
Change of basis problem:
You are given the coordinates of a vector relative to one basis B,
and you are asked to find the coordinates of the same vector
relative to another basis B'.
Ex: (Change of basis)
Consider two bases for a vector space V:
B = {u1, u2},  B' = {u1', u2'}
              [a]               [c]
with [u1']B = [b],   [u2']B =   [d],
i.e., u1' = au1 + bu2 and u2' = cu1 + du2.
                        [k1]
Let v ∈ V with [v]B' =  [k2],  i.e. v = k1u1' + k2u2'.
Then
v = k1(au1 + bu2) + k2(cu1 + du2)
  = (k1a + k2c)u1 + (k1b + k2d)u2
            [k1a + k2c]   [a  c] [k1]
=> [v]B  =  [k1b + k2d] = [b  d] [k2] = [ [u1']B  [u2']B ] [v]B'
Transition matrix from B' to B:
Let B = {u1, u2, ..., un} and B' = {u1', u2', ..., un'} be two bases
for a vector space V.
If [v]B is the coordinate matrix of v relative to B and
[v]B' is the coordinate matrix of v relative to B',
then
[v]B = P [v]B' = [ [u1']B  [u2']B  ...  [un']B ] [v]B'
where
P = [ [u1']B  [u2']B  ...  [un']B ]
is called the transition matrix from B' to B.
Notes:
B = {u1, u2, ..., un},  B' = {u1', u2', ..., un'}
[v]B  = [ [u1']B  [u2']B  ...  [un']B ] [v]B'   i.e.  [v]B  = P [v]B'
[v]B' = [ [u1]B'  [u2]B'  ...  [un]B' ] [v]B    i.e.  [v]B' = P^(-1) [v]B
Thm 4.20: (Transition matrix from B to B')
Let B = {v1, v2, ..., vn} and B' = {u1, u2, ..., un} be two bases
for R^n. Then the transition matrix P^(-1) from B to B' can be found
by using Gauss-Jordan elimination on the n×2n matrix [B' B]
as follows:
[B' B]   --G.J.E.-->   [In  P^(-1)]
Ex 5: (Finding transition and coordinate matrices)
Let B = {(-3, 2), (4, -2)} and B' = {(-1, 2), (2, -2)} be two bases for R^2.
(a) Find the transition matrix P from B' to B.
(b) Given [v]B' = [1; 2], find [v]B.
(c) Find the transition matrix P^(-1) from B to B'.
Sol:
(a)  [-3  4 | -1  2]   --G.J.E.-->   [1 0 | 3 -2]
     [ 2 -2 |  2 -2]                 [0 1 | 2 -1]
         B     B'                      I     P
          [3 -2]
=>  P =   [2 -1]    (the transition matrix from B' to B)
(b)
                [3 -2] [1]   [-1]
[v]B = P[v]B' = [2 -1] [2] = [ 0]
Check:
[v]B' = [1; 2]   =>  v = (1)(-1, 2) + (2)(2, -2) = (3, -2)
[v]B  = [-1; 0]  =>  v = (-1)(-3, 2) + (0)(4, -2) = (3, -2)
(c)  [-1  2 | -3  4]   --G.J.E.-->   [1 0 | -1  2]
     [ 2 -2 |  2 -2]                 [0 1 | -2  3]
         B'    B                       I    P^(-1)
              [-1  2]
=>  P^(-1) =  [-2  3]    (the transition matrix from B to B')
Check:
            [3 -2] [-1  2]   [1 0]
P P^(-1) =  [2 -1] [-2  3] = [0 1] = I2
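Since the columns of P are the B-coordinates of the B' basis vectors, P also solves the matrix equation B P = B' (basis vectors written as columns); a NumPy sketch for Ex 5:

import numpy as np

B  = np.column_stack([[-3, 2], [4, -2]])    # basis B vectors as columns
Bp = np.column_stack([[-1, 2], [2, -2]])    # basis B' vectors as columns

P = np.linalg.solve(B, Bp)        # transition matrix from B' to B
P_inv = np.linalg.solve(Bp, B)    # transition matrix from B to B'
print(P)                                    # [[ 3. -2.] [ 2. -1.]]
print(P_inv)                                # [[-1.  2.] [-2.  3.]]
print(np.allclose(P @ P_inv, np.eye(2)))    # True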
Ex 6: (Coordinate representation in P3(x))
(a) Find the coordinate matrix of p = 3x^3 - 2x^2 + 4 relative to the
    standard basis S = {1, x, x^2, x^3} in P3(x).
(b) Find the coordinate matrix of p = 3x^3 - 2x^2 + 4 relative to the
    basis S' = {1, 1 + x, 1 + x^2, 1 + x^3} in P3(x).
Sol:
                                                              [ 4]
(a) p = (4)(1) + (0)(x) + (-2)(x^2) + (3)(x^3)     => [p]S  = [ 0]
                                                              [-2]
                                                              [ 3]
                                                                    [ 3]
(b) p = (3)(1) + (0)(1+x) + (-2)(1+x^2) + (3)(1+x^3)     => [p]S' = [ 0]
                                                                    [-2]
                                                                    [ 3]
Ex: (Coordinate representation in M2×2)
Find the coordinate matrix of x = [5 6; 7 8] relative to the
standard basis for M2×2:
B = { [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1] }
Sol:
    [5 6]     [1 0]     [0 1]     [0 0]     [0 0]
x = [7 8] = 5 [0 0] + 6 [0 0] + 7 [1 0] + 8 [0 1]
           [5]
=> [x]B =  [6]
           [7]
           [8]