
2.3. Linear Transformations


MATH 221: LINEAR ALGEBRA

MÜGE TAŞKIN
Boğaziçi University, Istanbul

Spring 2022/2023
Linear transformations on vector spaces

Among all functions f : U → V from the vector space U to V, we are interested in those satisfying, for all ~u1, ~u2 ∈ U and k ∈ R,

f(~u1 + ~u2) = f(~u1) + f(~u2) and f(k~u1) = kf(~u1),

and we call such a function a linear transformation from U to V.

Lemma
If a function f : U → V is a linear transformation, then f(~0) = ~0.

Proof.
Since f is a linear transformation, we must have f(~0) = f(~0 + ~0) = f(~0) + f(~0).

Observe that if x satisfies x + x = x, then x = x + x − x = (x + x) − x = x − x = 0.

Hence f(~0) = ~0.
Linear transformations on vector spaces
New Notation: Recall that a vector ~u in R^n with terminal point (u1, . . . , un) is represented by a column vector or a row vector. For the sake of clear notation, we also denote a vector by its terminal point. That is,

~u = (u1, . . . , un) or ~u = [u1 u2 . . . un]^T (a column vector) or ~u = [u1 u2 . . . un] (a row vector).

Examples:
1. Consider functions f, g, h from the one dimensional real vector space R to itself such that f(x) = 2x, g(x) = x + x², and h(x) = 5x + 2. Then check that f is a linear transformation but g and h are not.
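The claims in Example 1 can be spot-checked numerically. The sketch below is my own, not from the notes: it tests the two linearity conditions at sample points. A single failing instance disproves linearity, although passing finitely many instances proves nothing.

```python
# Spot-check the two linearity conditions for f, g, h from Example 1.
def additive(fn, u1, u2):
    # f(u1 + u2) == f(u1) + f(u2)?
    return fn(u1 + u2) == fn(u1) + fn(u2)

def homogeneous(fn, k, u):
    # f(k*u) == k*f(u)?
    return fn(k * u) == k * fn(u)

f = lambda x: 2 * x          # linear
g = lambda x: x + x ** 2     # not linear: the x^2 term breaks additivity
h = lambda x: 5 * x + 2      # not linear: h(0) = 2, not 0

assert additive(f, 1, 3) and homogeneous(f, 4, 2)
assert not additive(g, 1, 3)     # g(4) = 20 but g(1) + g(3) = 14
assert not additive(h, 1, 3)     # h(4) = 22 but h(1) + h(3) = 24
```

One failing pair is a complete disproof, which is why g and h can be dismissed with a single sample each.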
2. Consider functions f, g, h from R² to R³ given by the rule that for any vector (x, y) ∈ R²,

f(x, y) = (2x, x + y, x − 2y)
g(x, y) = (2x, 0, y + 2)
h(x, y) = (2x, x + y, x²)

Here one can see that f is a linear transformation but g and h are not, since

g(0, 0) = (0, 0, 2) ≠ (0, 0, 0)

and

h((x1, y1) + (x2, y2)) = h(x1 + x2, y1 + y2)
= (2(x1 + x2), x1 + x2 + y1 + y2, (x1 + x2)²)
≠ (2(x1 + x2), x1 + x2 + y1 + y2, x1² + x2²)
= h(x1, y1) + h(x2, y2)
Linear transformations on vector spaces

Any m × n matrix A induces a function A : R^n → R^m, since for any vector ~u ∈ R^n, A~u is a vector in R^m. Now A : R^n → R^m is in fact a linear transformation, since

A(~u + ~w) = A~u + A~w and A(k~u) = kA~u.

EXAMPLE: Observe that each matrix Ai below in fact produces a linear transformation fi from R² to R². That is, for ~u = (x, y), we have Ai~u = Ai(x, y) = fi(x, y). (Matrices are written row by row, rows separated by semicolons.)

A1 = [c 0; 0 c] and f1(x, y) = (cx, cy)
A2 = [0 1; 1 0] and f2(x, y) = (y, x)
A3 = [1 0; 0 0] and f3(x, y) = (x, 0)
A4 = [0 −1; 1 0] and f4(x, y) = (−y, x)

Here observe that f1 creates a vector in the same or opposite direction of (x, y ); f2 reflects (x, y ) around the line
y = x ; f3 projects (x, y ) on the x axis and f4 rotates (x, y ) in counterclockwise direction by π/2 radians.
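As a quick check (my own sketch, with the sample value c = 3), each matrix and its coordinate rule can be verified to agree on a sample vector:

```python
# The four maps above, both as matrices and as coordinate rules.
def matvec(A, v):
    # multiply a matrix (list of rows) by a vector (tuple)
    return tuple(sum(a * x for a, x in zip(row, v)) for row in A)

c = 3
A1 = [[c, 0], [0, c]];  f1 = lambda x, y: (c * x, c * y)   # scaling
A2 = [[0, 1], [1, 0]];  f2 = lambda x, y: (y, x)           # reflection about y = x
A3 = [[1, 0], [0, 0]];  f3 = lambda x, y: (x, 0)           # projection onto x-axis
A4 = [[0, -1], [1, 0]]; f4 = lambda x, y: (-y, x)          # rotation by pi/2

v = (2, 5)
for A, f in [(A1, f1), (A2, f2), (A3, f3), (A4, f4)]:
    assert matvec(A, v) == f(*v)

assert matvec(A4, (1, 0)) == (0, 1)   # e1 rotated counterclockwise by pi/2 is e2
```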
Linear transformations on vector spaces

EXAMPLE: Consider the vector space P(R) of all polynomials. It is easy to observe that for every polynomial p(x) = a0 + a1x + . . . + an x^n, the functions

Der(p(x)) = a1 + 2a2x + . . . + n an x^(n−1)
Int(p(x)) = a0x + (a1/2)x² + . . . + (an/(n+1))x^(n+1)

are in fact linear transformations from P(R) to P(R). As one can observe quickly, the first is the usual derivation and the second the usual integration on polynomials.
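Der and Int act on coefficient lists, so they can be sketched directly. The encoding below, a polynomial as the list [a0, a1, ..., an], is my own illustration, not from the notes; `Fraction` keeps the coefficients 1/2, 1/3, ... exact.

```python
from fractions import Fraction

def der(p):
    # a1 + 2*a2*x + ... + n*an*x^(n-1)
    return [Fraction(i) * a for i, a in enumerate(p)][1:] or [Fraction(0)]

def integ(p):
    # a0*x + (a1/2)*x^2 + ... + (an/(n+1))*x^(n+1)
    return [Fraction(0)] + [a / (i + 1) for i, a in enumerate(p)]

def add(p, q):
    # vector addition in P(R): pad to equal length, add coefficientwise
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

p = [Fraction(1), Fraction(2), Fraction(3)]   # 1 + 2x + 3x^2
q = [Fraction(0), Fraction(5)]                # 5x

assert der(p) == [2, 6]                       # Der(p) = 2 + 6x
assert integ(q) == [0, 0, Fraction(5, 2)]     # Int(q) = (5/2)x^2
# additivity of Der on one sample pair: Der(p + q) = Der(p) + Der(q)
assert der(add(p, q)) == add(der(p), der(q))
```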


Important properties of linear transformations
Remark: Any linear transformation f : U → V is uniquely determined by its effect on the basis elements of U. To see this, let {~e1, . . . , ~en} be a basis for the vector space U. Then any ~u ∈ U can be written as

~u = x1~e1 + . . . + xn~en for unique scalars x1, . . . , xn.

Then f(~u) = x1 f(~e1) + . . . + xn f(~en),

i.e., f(~u) can be written as a linear combination of {f(~e1), . . . , f(~en)} for the unique choice of scalars x1, . . . , xn.

Example: Determine the linear transformation f : R² → R³ if

f(1, 1) = (1, 2, 2) and f(1, −1) = (−1, 2, 0).

Observe that any vector (x, y) ∈ R² can be written as

(x, y) = ((x + y)/2)(1, 1) + ((x − y)/2)(1, −1).

Hence

f(x, y) = f( ((x + y)/2)(1, 1) + ((x − y)/2)(1, −1) )
= ((x + y)/2) f(1, 1) + ((x − y)/2) f(1, −1)
= ((x + y)/2)(1, 2, 2) + ((x − y)/2)(−1, 2, 0)
= (y, 2x, x + y).
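The worked example above can be checked numerically: a sketch (mine, not from the notes) that builds f from its two prescribed basis values and compares it with the closed form (y, 2x, x + y).

```python
# f is pinned down by f(1,1) = (1,2,2) and f(1,-1) = (-1,2,0).
def f(x, y):
    s, t = (x + y) / 2, (x - y) / 2          # (x, y) = s*(1,1) + t*(1,-1)
    a, b = (1, 2, 2), (-1, 2, 0)             # images of the basis vectors
    return tuple(s * ai + t * bi for ai, bi in zip(a, b))

assert f(1, 1) == (1, 2, 2)
assert f(1, -1) == (-1, 2, 0)
for x, y in [(3, 7), (-2, 5), (0, 0)]:
    assert f(x, y) == (y, 2 * x, x + y)      # matches the closed form
```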
Kernel and Range of linear transformations

Lemma
Given a linear transformation f : U → V, the following are true.
1. Kernel of f: Ker(f) := {~u ∈ U | f(~u) = ~0_V} is a subspace of U.
2. Range of f: Ran(f) := {~v ∈ V | ~v = f(~u) for some ~u ∈ U} is a subspace of V.
3. f is onto if and only if Ran(f) = V.
4. f is one to one if and only if Ker(f) = {~0_U}.

Proof.
The first three statements are easy to show; we deal only with the last.
First suppose that f is one to one. Let ~u ∈ U be such that f(~u) = ~0_V = f(~0_U). Since f is one to one, ~u = ~0_U and hence Ker(f) = {~0_U}.
Now suppose that Ker(f) = {~0_U}. Let ~u1, ~u2 ∈ U be such that f(~u1) = f(~u2). Then f(~u1 − ~u2) = f(~u1) − f(~u2) = ~0_V, hence ~u1 − ~u2 ∈ Ker(f) = {~0_U}. This shows that ~u1 = ~u2 and therefore f is one to one.

EXAMPLE: Let A be any m × n matrix and A : R^n → R^m the linear transformation induced by A. Then

Ker(A) = Null(A) and Ran(A) = Col(A).


Compositions of Linear transformations

Theorem
Let f : U → V and g : V → W be two linear transformations. Then the composition g ◦ f : U → W is also a linear transformation.

Proof.
Let ~u1, ~u2 ∈ U and k ∈ R. Then

g ◦ f(~u1 + k~u2) = g(f(~u1 + k~u2)) = g(f(~u1) + kf(~u2)) = g(f(~u1)) + kg(f(~u2)) = g ◦ f(~u1) + kg ◦ f(~u2).

Hence g ◦ f : U → W is also a linear transformation.


Isomorphism of vector spaces.

We say that a function f : U → V has a compositional inverse g : V → U if g ◦ f = id_U and f ◦ g = id_V.

One can show that:
The compositional inverse of f : U → V exists if and only if f is one to one and onto.
If the compositional inverse of f : U → V exists, then it is unique and denoted by f⁻¹ : V → U.

Theorem
Let f : U → V be a linear transformation. Suppose that f is one to one and onto. Then
1. f⁻¹ : V → U exists and it is also a linear transformation.
2. For any basis α = {~u1, . . . , ~un} of U, the set β = {f(~u1), . . . , f(~un)} is a basis for V.
3. dim(U) = dim(V).
Isomorphism of vector spaces.

Proof.
1. f⁻¹ : V → U exists since f is one to one and onto. Let ~v1, ~v2 ∈ V. Then there exist ~u1, ~u2 ∈ U such that ~v1 = f(~u1) and ~v2 = f(~u2). Moreover

f⁻¹(~v1 + k~v2) = f⁻¹(f(~u1) + kf(~u2)) = f⁻¹(f(~u1 + k~u2)) = ~u1 + k~u2 = f⁻¹(~v1) + kf⁻¹(~v2).

Hence f⁻¹ : V → U is also a linear transformation.

2. We first show that {f(~u1), . . . , f(~un)} is linearly independent in V. Consider

~0 = x1 f(~u1) + . . . + xn f(~un) = f(x1~u1 + . . . + xn~un).

Since f is one to one, ~0 = f(x1~u1 + . . . + xn~un) implies that ~0 = x1~u1 + . . . + xn~un. Now since {~u1, . . . , ~un} is linearly independent in U, this shows x1 = x2 = . . . = xn = 0. Hence {f(~u1), . . . , f(~un)} is linearly independent in V.

Now we show that Span(f(~u1), . . . , f(~un)) = V. Let ~v ∈ V. Since f is onto, there exists ~u ∈ U such that f(~u) = ~v. Now ~u = x1~u1 + . . . + xn~un for a unique choice of scalars. Then ~v = f(~u) = f(x1~u1 + . . . + xn~un) = x1 f(~u1) + . . . + xn f(~un). Hence Span(f(~u1), . . . , f(~un)) = V. Therefore f(~u1), . . . , f(~un) is a basis for V.

3. dim(U) = dim(V) follows from the previous fact.
Isomorphism of vector spaces.
DEFINITION: Let V and W be two vector spaces over R. If there exists an invertible linear transformation f : V → W, then V and W are said to be isomorphic vector spaces.

Theorem
If dim(U) = dim(V), then there exists an isomorphism f : U → V.

Proof.
For a fixed basis α = {~u1, . . . , ~un} of U and β = {~v1, . . . , ~vn} of V, the map f : U → V defined by the rule

f(x1~u1 + x2~u2 + . . . + xn~un) = x1~v1 + x2~v2 + . . . + xn~vn

is a well defined map, since for all ~u ∈ U there exist unique scalars c1, c2, . . . , cn such that

~u = c1~u1 + c2~u2 + . . . + cn~un, and hence f(~u) = c1~v1 + c2~v2 + . . . + cn~vn is uniquely determined.

It is easy to see that f is a linear transformation.


Claim: f is onto. For all ~v ∈ V there exist unique scalars d1, d2, . . . , dn such that

~v = d1~v1 + . . . + dn~vn = f(d1~u1 + . . . + dn~un).

Claim: f is one to one. If ~0_V = f(~u) = c1~v1 + c2~v2 + . . . + cn~vn, we must have c1 = c2 = . . . = cn = 0, since {~v1, . . . , ~vn} is linearly independent. This shows that ~u = ~0_U and hence Ker(f) = {~0_U}.
Matrix of linear transformations f : U → W
REMARK: Let U be a vector space of dimension n and α = {~u1, . . . , ~un} a fixed basis of U.

Let {~e1, . . . , ~en} be the standard basis of R^n, and consider the map k : U → R^n defined by the following rule: for any ~u ∈ U there exist unique real numbers x1, x2, . . . , xn such that ~u = x1~u1 + x2~u2 + . . . + xn~un, and

k(~u) = k(x1~u1 + x2~u2 + . . . + xn~un) = x1~e1 + x2~e2 + . . . + xn~en = [x1, . . . , xn]^T.

By the previous theorem, k : U → R^n is an isomorphism.

DEFINITION: Let U be a vector space and α = {~u1, . . . , ~un} a fixed basis of U.
For any ~u = x1~u1 + x2~u2 + . . . + xn~un ∈ U, the image k(~u) is called the α-coordinates of ~u and denoted by ~u_α. That is,

~u_α = k(~u) = [x1, . . . , xn]^T ∈ R^n.

One can easily see that for any ~u, ~w in U and c ∈ R,

(~u + ~w)_α = ~u_α + ~w_α and (c~u)_α = c~u_α.

Matrix of linear transformations f : U → W

Let U be a vector space with a fixed basis α = {~u1, . . . , ~un} and W another vector space with a fixed basis β = {~w1, . . . , ~wm}.

For any linear transformation f : U → W, construct an m × n matrix [f]^β_α with the following rule:

[f]^β_α := [ f(~u1)_β . . . f(~un)_β ]

where f(~ui)_β is the β-coordinates of the vector f(~ui).

Here [f]^β_α is called the matrix representing f with respect to the bases α of U and β of W.

Observe that

~u = x1~u1 + x2~u2 + . . . + xn~un ⟹ f(~u) = x1 f(~u1) + . . . + xn f(~un) ∈ W
⟹ f(~u)_β = x1 f(~u1)_β + . . . + xn f(~un)_β ∈ R^m
⟹ f(~u)_β = [ f(~u1)_β . . . f(~un)_β ] [x1, . . . , xn]^T
⟹ f(~u)_β = [f]^β_α ~u_α
Matrix of linear transformations f : U → W

EXAMPLE: f : P≤3(R) → P≤4(R) given by f(p(x)) = Int(p(x)) + Der(p(x)).

Take basis α = {1, x, x², x³} of P≤3(R) and β = {1, x, x², x³, x⁴} of P≤4(R). Now

[f]^β_α = [f(1)_β f(x)_β f(x²)_β f(x³)_β] = [0 1 0 0; 1 0 2 0; 0 1/2 0 3; 0 0 1/3 0; 0 0 0 1/4]

since

f(1) = x ⟹ f(1)_β = (0, 1, 0, 0, 0)
f(x) = 1 + x²/2 ⟹ f(x)_β = (1, 0, 1/2, 0, 0)
f(x²) = 2x + x³/3 ⟹ f(x²)_β = (0, 2, 0, 1/3, 0)
f(x³) = 3x² + x⁴/4 ⟹ f(x³)_β = (0, 0, 3, 0, 1/4)
Matrix of linear transformations f : U → W

Now for p(x) = a0 + a1x + a2x² + a3x³,

f(p(x)) = a1 + (a0 + 2a2)x + (a1/2 + 3a3)x² + (a2/3)x³ + (a3/4)x⁴.

Hence

f(p(x))_β = [a1, a0 + 2a2, a1/2 + 3a3, a2/3, a3/4]^T = [f]^β_α [a0, a1, a2, a3]^T = [f]^β_α p(x)_α.
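The identity f(p(x))_β = [f]^β_α p(x)_α can be checked on a sample polynomial. The sketch below (my own) encodes the 5 × 4 matrix above and multiplies it by the α-coordinates (a0, a1, a2, a3):

```python
from fractions import Fraction as F

# The matrix [f]^beta_alpha for f = Int + Der on P_{<=3}, rows as lists.
M = [[0, 1, 0, 0],
     [1, 0, 2, 0],
     [0, F(1, 2), 0, 3],
     [0, 0, F(1, 3), 0],
     [0, 0, 0, F(1, 4)]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# sample p = 4 + 6x + 9x^2 + 8x^3 (coefficients chosen so the result is integral)
a0, a1, a2, a3 = 4, 6, 9, 8
expected = [a1, a0 + 2 * a2, F(a1, 2) + 3 * a3, F(a2, 3), F(a3, 4)]
assert matvec(M, [a0, a1, a2, a3]) == expected   # [6, 22, 27, 3, 2]
```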
Matrix of linear transformations f : U → W
EXAMPLE: Consider the linear transformation from R⁴ into R³ given by

f(~x) = f(x1, x2, x3, x4) = (2x1 − 4x4, x2 − x3, −x1 + 4x3 − x4).

With respect to the standard bases of R⁴ and R³ we see that:

f(~e1) = f(1, 0, 0, 0) = (2, 0, −1)
f(~e2) = f(0, 1, 0, 0) = (0, 1, 0)
f(~e3) = f(0, 0, 1, 0) = (0, −1, 4)
f(~e4) = f(0, 0, 0, 1) = (−4, 0, −1)

Hence

[f] = [f(~e1) . . . f(~e4)] = [2 0 0 −4; 0 1 −1 0; −1 0 4 −1]

Observe that

[f] ~x = (2x1 − 4x4, x2 − x3, −x1 + 4x3 − x4) = f(~x).
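A quick numerical check of this example (a sketch of mine, not part of the notes): the standard matrix agrees with the coordinate rule on the basis vectors and on an arbitrary sample.

```python
# Standard matrix of f : R^4 -> R^3 versus its coordinate rule.
def matvec(A, v):
    return tuple(sum(a * x for a, x in zip(row, v)) for row in A)

A = [[ 2, 0,  0, -4],
     [ 0, 1, -1,  0],
     [-1, 0,  4, -1]]

def f(x1, x2, x3, x4):
    return (2*x1 - 4*x4, x2 - x3, -x1 + 4*x3 - x4)

samples = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1), (3, -2, 5, 1)]
for v in samples:
    assert matvec(A, v) == f(*v)
```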
Matrix of linear transformations f : U → W
 
EXAMPLE: f : M2×2(R) → M3×2(R) given by

f([x y; z t]) = [x+y 2y; x+2z y−z; x+t z+3t]

Take basis
α = {E11 = [1 0; 0 0], E12 = [0 1; 0 0], E21 = [0 0; 1 0], E22 = [0 0; 0 1]} of M2×2(R), and
β = {F11 = [1 0; 0 0; 0 0], F12 = [0 1; 0 0; 0 0], F21 = [0 0; 1 0; 0 0], F22 = [0 0; 0 1; 0 0], F31 = [0 0; 0 0; 1 0], F32 = [0 0; 0 0; 0 1]} of M3×2(R).

Now

f(E11) = [1 0; 1 0; 1 0] ⟹ f(E11)_β = (F11 + F21 + F31)_β = (1, 0, 1, 0, 1, 0)
f(E12) = [1 2; 0 1; 0 0] ⟹ f(E12)_β = (F11 + 2F12 + F22)_β = (1, 2, 0, 1, 0, 0)
f(E21) = [0 0; 2 −1; 0 1] ⟹ f(E21)_β = (2F21 − F22 + F32)_β = (0, 0, 2, −1, 0, 1)
f(E22) = [0 0; 0 0; 1 3] ⟹ f(E22)_β = (F31 + 3F32)_β = (0, 0, 0, 0, 1, 3)
Matrix of linear transformations f : U → W
Hence

[f]^β_α = [f(E11)_β f(E12)_β f(E21)_β f(E22)_β] = [1 1 0 0; 0 2 0 0; 1 0 2 0; 0 1 −1 0; 1 0 0 1; 0 0 1 3]

Now observe that while A = [x y; z t] = xE11 + yE12 + zE21 + tE22, we see that A_α = [x, y, z, t]^T, and

f(A) = [x+y 2y; x+2z y−z; x+t z+3t] = (x+y)F11 + (2y)F12 + (x+2z)F21 + (y−z)F22 + (x+t)F31 + (z+3t)F32,

so f(A)_β = [x+y, 2y, x+2z, y−z, x+t, z+3t]^T.

Now the following equality holds, as we expect:

[f]^β_α A_α = [x+y, 2y, x+2z, y−z, x+t, z+3t]^T = f(A)_β.
Properties of Matrix representations

Theorem
Let f : U → V and g : V → W be two linear transformations. Then the composition g ◦ f : U → W is also a linear transformation. Moreover, for fixed bases α = {~u1, . . . , ~un} of U, β = {~v1, . . . , ~vm} of V and γ = {~w1, . . . , ~wk} of W, we have

[g ◦ f]^γ_α = [g]^γ_β [f]^β_α

Proof.
Observe that for all ~u ∈ U, g(f(~u)) ∈ W. Then

g ◦ f(~u)_γ = [g ◦ f]^γ_α ~u_α.

On the other hand f(~u) ∈ V and therefore f(~u)_β = [f]^β_α ~u_α. Hence

g ◦ f(~u)_γ = g(f(~u))_γ = [g]^γ_β f(~u)_β = [g]^γ_β [f]^β_α ~u_α.

This shows that [g ◦ f]^γ_α = [g]^γ_β [f]^β_α.
Properties of Matrix representations

Example: Consider the following functions, with respect to the standard bases α of R² and β of R³:

f : R² → R³ where f(x, y) = (2x, x + y, x − 2y) and [f]^β_α = [2 0; 1 1; 1 −2]
g : R³ → R² where g(x, y, z) = (x + z, y − z) and [g]^α_β = [1 0 1; 0 1 −1]

Then g ◦ f : R² → R² where

g ◦ f(x, y) = g(2x, x + y, x − 2y) = (3x − 2y, 3y) and [g ◦ f]^α_α = [3 −2; 0 3]

Observe that

[g ◦ f]^α_α = [3 −2; 0 3] = [1 0 1; 0 1 −1][2 0; 1 1; 1 −2] = [g]^α_β [f]^β_α

HOMEWORK: Find f ◦ g and its matrix representation [f ◦ g]^β_β.
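The matrix identity [g ◦ f] = [g][f] in the example above can be checked numerically; this sketch (my own) multiplies the two representing matrices:

```python
# [g o f]^alpha_alpha should equal [g]^alpha_beta [f]^beta_alpha.
def matmul(A, B):
    # standard row-by-column product of matrices given as lists of rows
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

F = [[2, 0], [1, 1], [1, -2]]        # [f]^beta_alpha (3 x 2)
G = [[1, 0, 1], [0, 1, -1]]          # [g]^alpha_beta (2 x 3)

assert matmul(G, F) == [[3, -2], [0, 3]]   # matches [g o f]^alpha_alpha
```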
Isomorphism of vector spaces.

Theorem
Let f : U → V be a linear transformation. Suppose that f is one to one and onto. Then f⁻¹ : V → U exists and it is also a linear transformation.
Moreover, for fixed bases α = {~u1, . . . , ~un} of U and β = {~v1, . . . , ~vn} of V, we have

[f⁻¹]^α_β [f]^β_α = [f⁻¹ ◦ f]^α_α = [id_U]^α_α = I_n and [f]^β_α [f⁻¹]^α_β = [f ◦ f⁻¹]^β_β = [id_V]^β_β = I_n.

That is, if A = [f]^β_α, then A⁻¹ = [f⁻¹]^α_β.

Proof.
This follows from the previous theorem.
Isomorphism of vector spaces.

EXAMPLE: Find the inverse of f : R² → R² where f(x, y) = (2x + 5y, x + 3y), if it exists.

With respect to the standard basis α = {~e1, ~e2} of R²,

A = [f]^α_α = [2 5; 1 3], that is, [f]^α_α [x; y] = [2 5; 1 3][x; y] = [2x + 5y; x + 3y].

Observe that

A⁻¹ = [3 −5; −1 2], that is, ([f]^α_α)⁻¹ [x; y] = [3 −5; −1 2][x; y] = [3x − 5y; −x + 2y].

Hence f⁻¹(x, y) = (3x − 5y, −x + 2y) is the compositional inverse of f.
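The candidate inverse can be verified by composing in both orders, as required by the definition of a compositional inverse (a quick sketch of mine):

```python
# f and its claimed inverse g should satisfy g o f = id and f o g = id.
f = lambda x, y: (2*x + 5*y, x + 3*y)
g = lambda x, y: (3*x - 5*y, -x + 2*y)

for x, y in [(1, 0), (0, 1), (7, -4)]:
    assert g(*f(x, y)) == (x, y)   # g o f = identity
    assert f(*g(x, y)) == (x, y)   # f o g = identity
```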


Isomorphism of vector spaces.
HOMEWORK: Consider f : R² → R² where f(⟨x, y⟩) = ⟨2x + 5y, x + 3y⟩.

Find [f]^α_α, [f]^β_α and [f]^β_β, where α = {⟨1, 0⟩, ⟨0, 1⟩} and β = {⟨2, 1⟩, ⟨1, 1⟩}.

HOMEWORK: Recall that any n × n matrix A induces a linear transformation fA : R^n → R^n with the rule that

fA(~x) = A~x for any ~x ∈ R^n.

Under which condition is this function fA invertible?

 
HOMEWORK: Let A = [2 0; 1 1; 1 −2]. Consider fA : M2×2 → M3×2 with the rule that

fA(B) = AB for any matrix B ∈ M2×2.

Find [fA]^β_α, where α is the standard basis of M2×2 and β is the standard basis of M3×2.

HOMEWORK: Observe that any n × n matrix A induces a linear transformation fA : Mn×k → Mn×k with the rule that

fA(B) = AB for any matrix B ∈ Mn×k.

Under which condition is this function fA invertible?


Change of Coordinates
Consider the identity linear transformation Id : V → V such that Id(~x) = ~x for all ~x ∈ V.
Now, given any two bases α and β of V, the matrix

[Id]^β_α

satisfies the following: for all ~v ∈ V,

[Id]^β_α [~v]_α = [Id(~v)]_β = [~v]_β.

Therefore [Id]^β_α sends the α-coordinates [~v]_α of ~v to the β-coordinates [~v]_β of the same vector.

Definition: The matrix [Id]^β_α, which changes α-coordinates to β-coordinates, is called the change of coordinate matrix.

Remark: Given any two bases α and β of V, of dimension n, [Id]^β_α and [Id]^α_β satisfy the following:
1. [Id]^α_β [Id]^β_α = [Id ◦ Id]^α_α = [Id]^α_α = I_n. Hence ([Id]^β_α)⁻¹ = [Id]^α_β.
2. For any linear transformation T : V → V,

[Id]^β_α [T]^α_α [Id]^α_β = [T]^β_β.
Similar matrices and Change of Coordinates

DEFINITION: Two n × n matrices A and B are called similar if

A = SBS⁻¹ for some invertible matrix S.

CLAIM: If S⁻¹AS = B, then A and B are matrices representing the same linear transformation (with respect to different bases of R^n).

Let L_A : R^n → R^n be the linear transformation given by L_A(~x) = A~x.
Let α be the standard basis of R^n. Then [L_A]^α_α = A.

Now observe that since S is invertible, the columns of S = [~s1 . . . ~sn] are linearly independent. So β = {~s1, . . . , ~sn} is a basis of R^n. Moreover

S = [Id]^α_β and hence [Id]^β_α = ([Id]^α_β)⁻¹ = S⁻¹.

Now

B = S⁻¹AS = [Id]^β_α [L_A]^α_α [Id]^α_β = [Id ◦ L_A ◦ Id]^β_β = [L_A]^β_β.

That is, A = [L_A]^α_α and B = [L_A]^β_β, and therefore A and B represent the same linear transformation.
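A small numerical instance of the claim, with sample matrices A and S of my own choosing (not from the notes); here S⁻¹AS even comes out diagonal:

```python
# B = S^{-1} A S represents L_A in the basis given by the columns of S.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[2, 1], [0, 3]]
S = [[1, 1], [0, 1]]          # columns s1, s2 form the new basis beta
S_inv = [[1, -1], [0, 1]]     # inverse of S, computed by hand (det S = 1)

B = matmul(S_inv, matmul(A, S))
assert B == [[2, 0], [0, 3]]          # diagonal in the basis beta
assert matmul(S, B) == matmul(A, S)   # equivalent form: S B = A S
```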
Kernel and Range of linear transformations
Recall that, given a linear transformation f : U → V,
1. Kernel of f: Ker(f) := {u ∈ U | f(u) = 0} is a subspace of U.
2. Range of f: Ran(f) := {v ∈ V | v = f(u) for some u ∈ U} is a subspace of V.
3. f is one to one if and only if Ker(f) = {~0}.
4. f is onto if and only if Ran(f) = V.

And recall that, for any m × n matrix A, with A : R^n → R^m the induced linear transformation,

Ker(A) = Null(A) and Ran(A) = Col(A).

Theorem
Let f : U → W be a linear transformation and let

A = [f]^β_α

be the matrix representing f with respect to the bases α of U and β of W. Then

1. The map c_α : Ker(f) → Null(A), sending any u ∈ Ker(f) to its α-coordinates u_α, is an isomorphism of vector spaces.
2. The map c_β : Ran(f) → Col(A), sending any w ∈ Ran(f) to its β-coordinates w_β, is an isomorphism of vector spaces.

Proof.
HOMEWORK.
More on projection, reflection and rotation matrices:
Rotation in R²: For 0 ≤ θ ≤ π, let fθ : R² → R² be the function which rotates any vector (x, y) in R² by angle θ in the counterclockwise direction. Then

fθ(~e1) = (cos θ, sin θ) and fθ(~e2) = (− sin θ, cos θ).

[Figure: ~e1 = (1, 0) and ~e2 = (0, 1), each rotated by angle θ.]

For α = {~e1, ~e2} the standard basis for R²,

[fθ]^α_α = [fθ(~e1) fθ(~e2)] = [cos θ −sin θ; sin θ cos θ]

Observe that f−θ : R² → R² rotates any vector (x, y) by angle θ in the clockwise direction, and

[f−θ]^α_α = [cos(−θ) −sin(−θ); sin(−θ) cos(−θ)] = [cos θ sin θ; −sin θ cos θ]

It is easy to see that f−θ is the compositional inverse of fθ:

f−θ ◦ fθ = identity function = fθ ◦ f−θ

and the product of the matrix representations gives the identity matrix:

[f−θ]^α_α [fθ]^α_α = [cos θ sin θ; −sin θ cos θ][cos θ −sin θ; sin θ cos θ] = I₂
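These rotation identities can be confirmed numerically (a sketch of mine; equality is up to floating-point error):

```python
import math

def R(t):
    # rotation matrix by angle t, counterclockwise
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

t = math.pi / 6
P = matmul(R(-t), R(t))          # should be the identity matrix
for i in range(2):
    for j in range(2):
        assert abs(P[i][j] - (1 if i == j else 0)) < 1e-12

# e1 rotated by pi/2 lands (numerically) on e2
x, y = R(math.pi / 2)[0][0], R(math.pi / 2)[1][0]
assert abs(x) < 1e-12 and abs(y - 1) < 1e-12
```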
More on projection, reflection and rotation matrices:
Reflections in R²: For 0 ≤ θ ≤ π, let Lθ be the line which makes angle θ with the positive x-axis. Denote by rθ the function which reflects any vector (x, y) in R² about this line.

[Figure: ~e1 = (1, 0) and ~e2 = (0, 1), each reflected about the line Lθ.]

Observe that since the angle between rθ(~e1) and the positive x-axis is 2θ, we have

rθ(~e1) = (cos 2θ, sin 2θ).

On the other hand, the angle between rθ(~e2) and the positive x-axis is π/2 − 2(π/2 − θ) = 2θ − π/2, and so

rθ(~e2) = (cos(2θ − π/2), sin(2θ − π/2)) = (sin 2θ, − cos 2θ).

Now [rθ]^α_α = [rθ(~e1) rθ(~e2)] = [cos 2θ sin 2θ; sin 2θ −cos 2θ] = [2cos²θ − 1 2 cos θ sin θ; 2 cos θ sin θ 2sin²θ − 1].

Observe that the composition rθ ◦ rθ(x, y) = (x, y), i.e., it is the identity function. Hence the compositional inverse of rθ is rθ itself. Observe also that the matrix product [rθ]^α_α [rθ]^α_α is equal to the identity matrix I₂.
More on projection, reflection and rotation matrices:
Projections in R²: For 0 ≤ θ ≤ π, let Lθ be the line which makes angle θ with the positive x-axis.
Denote by pθ the function which projects any vector (x, y) onto Lθ.

Observe that (cos θ, sin θ) lies on Lθ, and for any (x, y), pθ(x, y) is a vector either in the same direction as or the opposite direction of (cos θ, sin θ).

[Figure: ~e1, ~e2 and a vector (x, y), each projected orthogonally onto the line Lθ.]

Observe that since the angle between ~e1 and Lθ is θ,

pθ(~e1) = cos θ (cos θ, sin θ).

Similarly, since the angle between ~e2 and Lθ is π/2 − θ,

pθ(~e2) = cos(π/2 − θ)(cos θ, sin θ) = sin θ (cos θ, sin θ).

Hence [pθ]^α_α = [pθ(~e1) pθ(~e2)] = [cos²θ sin θ cos θ; sin θ cos θ sin²θ], but neither pθ nor [pθ]^α_α is invertible.
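Two properties of the projection matrix can be checked numerically (my own sketch): it is idempotent, since projecting twice changes nothing, and its determinant is 0, which is why it is not invertible.

```python
import math

def P(t):
    # projection matrix onto the line at angle t
    c, s = math.cos(t), math.sin(t)
    return [[c * c, s * c], [s * c, s * s]]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

t = math.pi / 3
M = P(t)
M2 = matmul(M, M)
for i in range(2):
    for j in range(2):
        assert abs(M2[i][j] - M[i][j]) < 1e-12   # P^2 = P (idempotent)

det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert abs(det) < 1e-12                           # det P = 0, so P is singular
```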
