Definitions of Linear Algebra Terms
$$
\begin{bmatrix} 8 & 10 & 6 & 8 \\ 5 & 7 & 6 & 6 \\ 0 & 5 & 10 & 1 \\ 1 & 3 & 10 & 5 \\ 4 & 8 & 1 & 10 \end{bmatrix}
$$
is a $5 \times 4$ matrix.
$$
\mathbf{r} = \begin{bmatrix} 2 & 3 & 1 & 9 & 8 \end{bmatrix}
$$
is a row vector of dimension 5.
$$
\mathbf{c} = \begin{bmatrix} 4 \\ 5 \\ 10 \end{bmatrix}
$$
is a column vector of dimension 3.
Remark 13 If $A$ is an $m \times n$ matrix whose columns are made up of the column vectors $\mathbf{c}_1, \mathbf{c}_2, \ldots, \mathbf{c}_n$, then $A$ can be written as
$$
A = [\mathbf{c}_1 \; \mathbf{c}_2 \; \cdots \; \mathbf{c}_n].
$$
Likewise, if $A$ is made up of the row vectors $\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_m$, then $A$ can be written as
$$
A = \begin{bmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \\ \vdots \\ \mathbf{r}_m \end{bmatrix}.
$$
Example 14 The $5 \times 4$ matrix
$$
A = \begin{bmatrix} 8 & 10 & 6 & 8 \\ 5 & 7 & 6 & 6 \\ 0 & 5 & 10 & 1 \\ 1 & 3 & 10 & 5 \\ 4 & 8 & 1 & 10 \end{bmatrix}
$$
can be written as
$$
A = [\mathbf{c}_1 \; \mathbf{c}_2 \; \mathbf{c}_3 \; \mathbf{c}_4]
$$
where
$$
\mathbf{c}_1 = \begin{bmatrix} 8 \\ 5 \\ 0 \\ 1 \\ 4 \end{bmatrix}, \quad
\mathbf{c}_2 = \begin{bmatrix} 10 \\ 7 \\ 5 \\ 3 \\ 8 \end{bmatrix}, \quad
\mathbf{c}_3 = \begin{bmatrix} 6 \\ 6 \\ 10 \\ 10 \\ 1 \end{bmatrix}, \quad
\mathbf{c}_4 = \begin{bmatrix} 8 \\ 6 \\ 1 \\ 5 \\ 10 \end{bmatrix},
$$
or can be written as
$$
A = \begin{bmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \\ \mathbf{r}_3 \\ \mathbf{r}_4 \\ \mathbf{r}_5 \end{bmatrix}
$$
where
$$
\mathbf{r}_1 = \begin{bmatrix} 8 & 10 & 6 & 8 \end{bmatrix}, \quad
\mathbf{r}_2 = \begin{bmatrix} 5 & 7 & 6 & 6 \end{bmatrix}, \quad
\mathbf{r}_3 = \begin{bmatrix} 0 & 5 & 10 & 1 \end{bmatrix}, \quad
\mathbf{r}_4 = \begin{bmatrix} 1 & 3 & 10 & 5 \end{bmatrix}, \quad
\mathbf{r}_5 = \begin{bmatrix} 4 & 8 & 1 & 10 \end{bmatrix}.
$$
Definition 15 For a given linear system of equations
$$
\begin{array}{c}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\
\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m,
\end{array}
$$
the $m \times n$ matrix
$$
A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
$$
is called the coefficient matrix of the system, and the $m \times (n+1)$ matrix
$$
[A \; \mathbf{b}] = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\
\vdots & \vdots & & \vdots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn} & b_m
\end{bmatrix}
$$
is called the augmented matrix of the system.
Example 16 The coefficient matrix of the linear system
$$
\begin{array}{rcl}
2x_1 + 2x_2 - 5x_3 + 4x_4 - 2x_5 &=& 4 \\
-4x_1 + x_2 - 5x_4 &=& -1 \\
3x_1 - 2x_2 + 4x_4 - 2x_5 &=& -4
\end{array}
$$
is
$$
A = \begin{bmatrix}
2 & 2 & -5 & 4 & -2 \\
-4 & 1 & 0 & -5 & 0 \\
3 & -2 & 0 & 4 & -2
\end{bmatrix}
$$
and the augmented matrix of this system is
$$
[A \; \mathbf{b}] = \begin{bmatrix}
2 & 2 & -5 & 4 & -2 & 4 \\
-4 & 1 & 0 & -5 & 0 & -1 \\
3 & -2 & 0 & 4 & -2 & -4
\end{bmatrix}.
$$
Definition 17 An elementary row operation is performed on a matrix by changing the matrix in one of the following ways:

1. Interchange: Interchange two rows of the matrix. If rows $i$ and $j$ are interchanged, then we use the notation $r_i \leftrightarrow r_j$.

2. Scaling: Multiply all entries in a single row by a nonzero constant, $k$. If the $i$th row is multiplied by $k$, then we use the notation $k\,r_i \to r_i$.

3. Replacement: Replace a row with the sum of itself and a nonzero multiple of another row. If the $i$th row is replaced with the sum of itself and $k$ times the $j$th row, then we use the notation $r_i + k\,r_j \to r_i$.
Example 18 Here are some examples of elementary row operations performed on the matrix
$$
A = \begin{bmatrix} 2 & 5 & 7 & 7 \\ 7 & 4 & 8 & 2 \\ 8 & 7 & 8 & 2 \end{bmatrix}.
$$

1. Interchange rows 1 and 3:
$$
\begin{bmatrix} 2 & 5 & 7 & 7 \\ 7 & 4 & 8 & 2 \\ 8 & 7 & 8 & 2 \end{bmatrix}
\xrightarrow{r_1 \leftrightarrow r_3}
\begin{bmatrix} 8 & 7 & 8 & 2 \\ 7 & 4 & 8 & 2 \\ 2 & 5 & 7 & 7 \end{bmatrix}
$$
2. Scale row 1 by a factor of 5:
$$
\begin{bmatrix} 2 & 5 & 7 & 7 \\ 7 & 4 & 8 & 2 \\ 8 & 7 & 8 & 2 \end{bmatrix}
\xrightarrow{5r_1 \to r_1}
\begin{bmatrix} 10 & 25 & 35 & 35 \\ 7 & 4 & 8 & 2 \\ 8 & 7 & 8 & 2 \end{bmatrix}
$$
3. Replace row 2 with the sum of itself and 2 times row 3:
$$
\begin{bmatrix} 2 & 5 & 7 & 7 \\ 7 & 4 & 8 & 2 \\ 8 & 7 & 8 & 2 \end{bmatrix}
\xrightarrow{r_2 + 2r_3 \to r_2}
\begin{bmatrix} 2 & 5 & 7 & 7 \\ 9 & 18 & 8 & 2 \\ 8 & 7 & 8 & 2 \end{bmatrix}
$$
Definition 19 An $m \times n$ matrix, $A$, is said to be row equivalent (or simply equivalent) to an $m \times n$ matrix, $B$, if $A$ can be transformed into $B$ by a series of elementary row operations. If $A$ is equivalent to $B$, then we write $A \sim B$.
Example 20 As can be seen by studying the previous example,
$$
\begin{bmatrix} 2 & 5 & 7 & 7 \\ 7 & 4 & 8 & 2 \\ 8 & 7 & 8 & 2 \end{bmatrix}
\sim
\begin{bmatrix} 8 & 7 & 8 & 2 \\ 7 & 4 & 8 & 2 \\ 2 & 5 & 7 & 7 \end{bmatrix},
\qquad
\begin{bmatrix} 2 & 5 & 7 & 7 \\ 7 & 4 & 8 & 2 \\ 8 & 7 & 8 & 2 \end{bmatrix}
\sim
\begin{bmatrix} 10 & 25 & 35 & 35 \\ 7 & 4 & 8 & 2 \\ 8 & 7 & 8 & 2 \end{bmatrix},
$$
and
$$
\begin{bmatrix} 2 & 5 & 7 & 7 \\ 7 & 4 & 8 & 2 \\ 8 & 7 & 8 & 2 \end{bmatrix}
\sim
\begin{bmatrix} 2 & 5 & 7 & 7 \\ 9 & 18 & 8 & 2 \\ 8 & 7 & 8 & 2 \end{bmatrix}.
$$
Example 21 The sequence of row operations
$$
\begin{bmatrix} 3 & 6 \\ 1 & 6 \\ 7 & 1 \end{bmatrix}
\xrightarrow{2r_1 \to r_1}
\begin{bmatrix} 6 & 12 \\ 1 & 6 \\ 7 & 1 \end{bmatrix}
\xrightarrow{4r_3 \to r_3}
\begin{bmatrix} 6 & 12 \\ 1 & 6 \\ 28 & 4 \end{bmatrix}
\xrightarrow{r_1 \leftrightarrow r_3}
\begin{bmatrix} 28 & 4 \\ 1 & 6 \\ 6 & 12 \end{bmatrix}
$$
shows that
$$
\begin{bmatrix} 3 & 6 \\ 1 & 6 \\ 7 & 1 \end{bmatrix}
\sim
\begin{bmatrix} 28 & 4 \\ 1 & 6 \\ 6 & 12 \end{bmatrix}.
$$
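The chain of operations in Example 21 can also be reproduced numerically. The following is a minimal sketch, not part of the original notes, using NumPy (assumed available) with the entries exactly as printed above.

```python
import numpy as np

A = np.array([[3., 6.],
              [1., 6.],
              [7., 1.]])   # the starting matrix of Example 21

A[0] = 2 * A[0]            # scaling: 2*r1 -> r1
A[2] = 4 * A[2]            # scaling: 4*r3 -> r3
A[[0, 2]] = A[[2, 0]]      # interchange: r1 <-> r3
print(A)
# [[28.  4.]
#  [ 1.  6.]
#  [ 6. 12.]]
```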
Remark 22 (optional) Row equivalence of matrices is an example of what is called an equivalence relation. An equivalence relation on a nonempty set of objects, $X$, is a relation, $\sim$, that is reflexive, symmetric, and transitive. The reflexive property is the property that $x \sim x$ for all $x \in X$. The symmetric property is the property that if $x \in X$ and $y \in X$ with $x \sim y$, then $y \sim x$. The transitive property is the property that if $x \in X$, $y \in X$, and $z \in X$ with $x \sim y$ and $y \sim z$, then $x \sim z$. If we take $X$ to be the set of all $m \times n$ matrices, then row equivalence is an equivalence relation on $X$ because

1. $A \sim A$ for all $A \in X$.
2. If $A \in X$ and $B \in X$ and $A \sim B$, then $B \sim A$.
3. If $A \in X$, $B \in X$, and $C \in X$ and $A \sim B$ and $B \sim C$, then $A \sim C$.
1.2: Row Reduction and Echelon Forms
Definition 23 The leading entry in a row of a matrix is the first (from the left) nonzero entry in that row.
Example 24 For the matrix
$$
\begin{bmatrix} 0 & 1 & 4 & 8 \\ 0 & 0 & 4 & 3 \\ 5 & 0 & 0 & 10 \\ 0 & 0 & 0 & 1 \\ 9 & 7 & 2 & 8 \end{bmatrix},
$$
the leading entry in row 1 is 1, the leading entry in row 2 is 4, the leading entry in row 3 is 5, the leading entry in row 4 is 1, and the leading entry in row 5 is 9. (Note that if a matrix happens to have a row consisting of all zeros, then that row does not have a leading entry.)
Definition 25 A zero row of a matrix is a row consisting entirely of zeros. A nonzero row is a row that is not a zero row.
Definition 26 An echelon form matrix is a matrix for which

1. No zero row has a nonzero row below it.
2. Each leading entry is in a column to the right of the leading entry in the row above it.
3. All entries in the same column and below any leading entry are zeros.
Definition 27 A reduced echelon form matrix is an echelon form matrix which also has the properties:

4. The leading entry in each nonzero row is 1.
5. Each leading 1 is the only nonzero entry in its column.
Example 28 The matrix
$$
\begin{bmatrix} 6 & 4 & 10 \\ 1 & 2 & 10 \\ 4 & 4 & 2 \\ 0 & 10 & 1 \end{bmatrix}
$$
is not an echelon form matrix.
The matrix
$$
\begin{bmatrix} 0 & 0 & 0 & 4 & 8 \\ 0 & 0 & 0 & 0 & 3 \end{bmatrix}
$$
is an echelon form matrix but not a reduced echelon form matrix.
The matrix
$$
\begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
$$
is a reduced echelon form matrix.
Theorem 29 (Uniqueness of the Reduced Echelon Form) Each matrix, $A$, is equivalent to one and only one reduced echelon form matrix. This unique matrix is denoted by $\operatorname{rref}(A)$.
Example 30 For
$$
A = \begin{bmatrix} 6 & 4 & 10 \\ 1 & 2 & 10 \\ 4 & 4 & 2 \\ 0 & 10 & 1 \end{bmatrix},
$$
we can see (after some work) that
$$
\operatorname{rref}(A) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}.
$$
For
$$
A = \begin{bmatrix} 0 & 0 & 0 & 4 & 8 \\ 0 & 0 & 0 & 0 & 3 \end{bmatrix},
$$
we have
$$
\operatorname{rref}(A) = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}.
$$
For
$$
A = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},
$$
we have
$$
\operatorname{rref}(A) = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.
$$
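Computing a reduced echelon form by hand takes "some work," as noted above. The following short sketch, which is not part of the original notes and assumes SymPy is available, obtains $\operatorname{rref}(A)$ for the first matrix of Example 30; the method also reports the pivot columns.

```python
from sympy import Matrix

A = Matrix([[6, 4, 10],
            [1, 2, 10],
            [4, 4, 2],
            [0, 10, 1]])

R, pivot_columns = A.rref()   # R is rref(A); pivot_columns holds 0-based column indices
print(R)                      # the 4x3 matrix shown above (identity over a zero row)
print(pivot_columns)          # (0, 1, 2)
```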
Definition 31 A pivot position in a matrix, $A$, is a position in $A$ that corresponds to a leading 1 in $\operatorname{rref}(A)$. A pivot column of $A$ is a column of $A$ that contains a pivot position.
Example 32 By studying the preceding example, we can see that the pivot columns of
$$
A = \begin{bmatrix} 6 & 4 & 10 \\ 1 & 2 & 10 \\ 4 & 4 & 2 \\ 0 & 10 & 1 \end{bmatrix}
$$
are columns 1, 2, and 3. (Thus all columns of $A$ are pivot columns.)
The pivot columns of
$$
A = \begin{bmatrix} 0 & 0 & 0 & 4 & 8 \\ 0 & 0 & 0 & 0 & 3 \end{bmatrix}
$$
are columns 4 and 5.
The pivot columns of
$$
A = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
$$
are columns 3 and 4.
Definition 33 Let $A$ be the coefficient matrix of the linear system of equations
$$
\begin{array}{c}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\
\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m.
\end{array}
$$
Each variable that corresponds to a pivot column of $A$ is called a basic variable, and the other variables are called free variables.
Example 34 The coefficient matrix of the linear system
$$
\begin{array}{rcl}
2x_1 + 2x_2 - 5x_3 + 4x_4 - 2x_5 &=& 4 \\
-4x_1 + x_2 - 5x_4 &=& -1 \\
3x_1 - 2x_2 + 4x_4 - 2x_5 &=& -4
\end{array}
$$
is
$$
A = \begin{bmatrix} 2 & 2 & -5 & 4 & -2 \\ -4 & 1 & 0 & -5 & 0 \\ 3 & -2 & 0 & 4 & -2 \end{bmatrix}.
$$
Since
$$
\operatorname{rref}(A) = \begin{bmatrix} 1 & 0 & 0 & \tfrac{6}{5} & \tfrac{2}{5} \\ 0 & 1 & 0 & -\tfrac{1}{5} & \tfrac{8}{5} \\ 0 & 0 & 1 & -\tfrac{2}{5} & \tfrac{6}{5} \end{bmatrix},
$$
we see that the pivot columns of $A$ are columns 1, 2, and 3. Thus the basic variables of this linear system are $x_1$, $x_2$, and $x_3$. The free variables are $x_4$ and $x_5$.
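As an illustration only, not part of the notes, the basic and free variables can also be read off programmatically from the pivot columns reported by SymPy. The coefficient matrix below uses the signs as reconstructed in Example 34 above.

```python
from sympy import Matrix

A = Matrix([[ 2,  2, -5,  4, -2],
            [-4,  1,  0, -5,  0],
            [ 3, -2,  0,  4, -2]])   # signs as reconstructed above

_, pivots = A.rref()                    # 0-based indices of the pivot columns
basic = [f"x{j + 1}" for j in pivots]   # variables sitting over pivot columns
free = [f"x{j + 1}" for j in range(A.cols) if j not in pivots]
print(basic, free)                      # ['x1', 'x2', 'x3'] ['x4', 'x5']
```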
Theorem 35 (Existence and Uniqueness Theorem) A linear system of equations is consistent if and only if the rightmost column of the augmented matrix for the system is not a pivot column. If a linear system is consistent, then the solution set of the linear system contains either a unique solution (when there are no free variables) or infinitely many solutions (when there is at least one free variable).
Example 36 The augmented matrix of the system
$$
\begin{array}{rcl}
2x_1 + 2x_2 - 5x_3 + 4x_4 - 2x_5 &=& 4 \\
-4x_1 + x_2 - 5x_4 &=& -1 \\
3x_1 - 2x_2 + 4x_4 - 2x_5 &=& -4
\end{array}
$$
is
$$
[A \; \mathbf{b}] = \begin{bmatrix} 2 & 2 & -5 & 4 & -2 & 4 \\ -4 & 1 & 0 & -5 & 0 & -1 \\ 3 & -2 & 0 & 4 & -2 & -4 \end{bmatrix}.
$$
Since
$$
\operatorname{rref}([A \; \mathbf{b}]) = \begin{bmatrix} 1 & 0 & 0 & \tfrac{6}{5} & \tfrac{2}{5} & \tfrac{6}{5} \\ 0 & 1 & 0 & -\tfrac{1}{5} & \tfrac{8}{5} & \tfrac{19}{5} \\ 0 & 0 & 1 & -\tfrac{2}{5} & \tfrac{6}{5} & \tfrac{6}{5} \end{bmatrix},
$$
we see that the rightmost column of $[A \; \mathbf{b}]$ is not a pivot column. This means that the system is consistent. Furthermore, since the system does have free variables, there are infinitely many solutions to the system.
Here is a general procedure for determining whether a given linear system of equations is consistent or not and, if it is consistent, whether it has infinitely many solutions or a unique solution (a short computational sketch follows this list):

1. Compute $\operatorname{rref}([A \; \mathbf{b}])$ to determine whether or not the rightmost column of $[A \; \mathbf{b}]$ is a pivot column. If the rightmost column of $[A \; \mathbf{b}]$ is a pivot column, then the system is inconsistent.

2. Assuming that the rightmost column of $[A \; \mathbf{b}]$ is not a pivot column, take a look at $\operatorname{rref}(A)$. (This requires no extra computation.) If all columns of $A$ are pivot columns, then the linear system has a unique solution. If not all columns of $A$ are pivot columns, then the system has infinitely many solutions.
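The following is a small sketch of this two-step procedure, not part of the original notes, written with SymPy and applied to the system of Example 36; the matrix signs are those reconstructed above.

```python
from sympy import Matrix

def classify(A, b):
    """Return 'inconsistent', 'unique solution', or 'infinitely many solutions'."""
    _, aug_pivots = A.row_join(b).rref()
    if A.cols in aug_pivots:            # rightmost column of [A b] is a pivot column
        return "inconsistent"
    _, pivots = A.rref()
    return "unique solution" if len(pivots) == A.cols else "infinitely many solutions"

# The system of Example 36 (signs as reconstructed above).
A = Matrix([[2, 2, -5, 4, -2], [-4, 1, 0, -5, 0], [3, -2, 0, 4, -2]])
b = Matrix([4, -1, -4])
print(classify(A, b))                   # infinitely many solutions
```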
1.3: Vector Equations
Definition 37 Suppose that
$$
\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}
\quad \text{and} \quad
\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}
$$
are column vectors of dimension $n$. We define the sum of $\mathbf{u}$ and $\mathbf{v}$ to be the vector
$$
\mathbf{u} + \mathbf{v} = \begin{bmatrix} u_1 + v_1 \\ u_2 + v_2 \\ \vdots \\ u_n + v_n \end{bmatrix}.
$$
Definition 38 Suppose that
$$
\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}
$$
is a column vector of dimension $n$ and suppose that $c$ is a real number (also called a scalar). We define the scalar multiple $c\mathbf{u}$ to be
$$
c\mathbf{u} = \begin{bmatrix} cu_1 \\ cu_2 \\ \vdots \\ cu_n \end{bmatrix}.
$$
Definition 39 The symbol $\mathbb{R}^n$ denotes the set of all ordered $n$-tuples of numbers. The symbol $V_n$ denotes the set of all column vectors of dimension $n$. (The textbook author does not make this distinction. He uses $\mathbb{R}^n$ to denote both things. However, there really is a difference. Elements of $\mathbb{R}^n$ should be regarded as points, whereas elements of $V_n$ are matrices with one column.) If $x = (x_1, x_2, \ldots, x_n)$ is a point in $\mathbb{R}^n$, then the position vector of $x$ is the vector
$$
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}.
$$
Example 40 $x = (1, 2)$ is a point in $\mathbb{R}^2$. The position vector of this point (which is an element of $V_2$) is
$$
\mathbf{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}.
$$
Definition 41 Suppose that $\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n$ are vectors in $V_m$. (Note that this is a set of $n$ vectors, each of which has dimension $m$.) Also suppose that $c_1, c_2, \ldots, c_n$ are scalars. Then the vector
$$
c_1\mathbf{u}_1 + c_2\mathbf{u}_2 + \cdots + c_n\mathbf{u}_n
$$
is called a linear combination of the vectors $\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n$.
Example 42 Suppose that
$$
\mathbf{u}_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad
\mathbf{u}_2 = \begin{bmatrix} 0 \\ -5 \end{bmatrix}, \quad
\mathbf{u}_3 = \begin{bmatrix} 1 \\ 4 \end{bmatrix}.
$$
Then the vector
$$
4\mathbf{u}_1 - 2\mathbf{u}_2 + 3\mathbf{u}_3
= 4\begin{bmatrix} 1 \\ 2 \end{bmatrix} - 2\begin{bmatrix} 0 \\ -5 \end{bmatrix} + 3\begin{bmatrix} 1 \\ 4 \end{bmatrix}
= \begin{bmatrix} 7 \\ 30 \end{bmatrix}
$$
is a linear combination of $\mathbf{u}_1$, $\mathbf{u}_2$, and $\mathbf{u}_3$.
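A quick numerical check of this computation, not in the original notes, with the sign of $\mathbf{u}_2$ as reconstructed above:

```python
import numpy as np

u1 = np.array([1, 2])
u2 = np.array([0, -5])   # second entry negative, as reconstructed above
u3 = np.array([1, 4])

print(4 * u1 - 2 * u2 + 3 * u3)   # [ 7 30]
```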
Definition 43 Suppose that $\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n$ are vectors in $V_m$. The span of this set of vectors, denoted by $\operatorname{Span}\{\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n\}$, is the set of all vectors that are linear combinations of the vectors $\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n$. Formally,
$$
\operatorname{Span}\{\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n\}
= \{\mathbf{v} \in V_m \mid \text{there exist scalars } c_1, c_2, \ldots, c_n \text{ such that } \mathbf{v} = c_1\mathbf{u}_1 + c_2\mathbf{u}_2 + \cdots + c_n\mathbf{u}_n\}.
$$
Definition 44 The zero vector in $V_m$ is the vector
$$
\mathbf{0}_m = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.
$$
Remark 45 If $\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n$ is any set of vectors in $V_m$, then $\mathbf{0}_m \in \operatorname{Span}\{\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n\}$ because
$$
\mathbf{0}_m = 0\mathbf{u}_1 + 0\mathbf{u}_2 + \cdots + 0\mathbf{u}_n.
$$
Definition 46 A vector equation in the variables $x_1, x_2, \ldots, x_n$ is an equation of the form
$$
x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n = \mathbf{b}
$$
where $\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_n$ and $\mathbf{b}$ are vectors in $V_m$ (which are usually known beforehand). An ordered $n$-tuple $(s_1, s_2, \ldots, s_n)$ is called a solution of this vector equation if the equation is satisfied when we set $x_1 = s_1$, $x_2 = s_2$, ..., $x_n = s_n$. The solution set of the vector equation is the set of all of its solutions.
Example 47 An example of a vector equation is
$$
x_1\begin{bmatrix} 1 \\ 2 \end{bmatrix} + x_2\begin{bmatrix} 0 \\ -5 \end{bmatrix} + x_3\begin{bmatrix} 1 \\ 4 \end{bmatrix} = \begin{bmatrix} 4 \\ 2 \end{bmatrix}.
$$
The ordered triple $(x_1, x_2, x_3) = (9, -4/5, -5)$ is a solution of this vector equation because
$$
9\begin{bmatrix} 1 \\ 2 \end{bmatrix} + \left(-\frac{4}{5}\right)\begin{bmatrix} 0 \\ -5 \end{bmatrix} + (-5)\begin{bmatrix} 1 \\ 4 \end{bmatrix} = \begin{bmatrix} 4 \\ 2 \end{bmatrix}.
$$
Remark 48 The vector equation
$$
x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n = \mathbf{b}
$$
is consistent if and only if $\mathbf{b} \in \operatorname{Span}\{\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_n\}$.
Remark 49 The vector equation
$$
x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n = \mathbf{b}
$$
where
$$
\mathbf{a}_1 = \begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix}, \quad
\mathbf{a}_2 = \begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix}, \quad \ldots, \quad
\mathbf{a}_n = \begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix}
$$
and
$$
\mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}
$$
is equivalent to the system of linear equations
$$
\begin{array}{c}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\
\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m.
\end{array}
$$
This means that the vector equation and the system have the same solution sets. Each can be solved (or determined to be inconsistent) by row-reducing the augmented matrix
$$
[\mathbf{a}_1 \; \mathbf{a}_2 \; \cdots \; \mathbf{a}_n \; \mathbf{b}].
$$
Example 50 Let us solve the vector equation
$$
x_1\begin{bmatrix} 1 \\ 2 \end{bmatrix} + x_2\begin{bmatrix} 0 \\ -5 \end{bmatrix} + x_3\begin{bmatrix} 1 \\ 4 \end{bmatrix} = \begin{bmatrix} 4 \\ 2 \end{bmatrix}.
$$
This equation is equivalent to the linear system
$$
\begin{array}{rcl}
x_1 + x_3 &=& 4 \\
2x_1 - 5x_2 + 4x_3 &=& 2.
\end{array}
$$
Since
$$
[A \; \mathbf{b}] = \begin{bmatrix} 1 & 0 & 1 & 4 \\ 2 & -5 & 4 & 2 \end{bmatrix}
$$
and
$$
\operatorname{rref}([A \; \mathbf{b}]) = \begin{bmatrix} 1 & 0 & 1 & 4 \\ 0 & 1 & -\tfrac{2}{5} & \tfrac{6}{5} \end{bmatrix},
$$
we see that the vector equation is consistent (because the rightmost column of $[A \; \mathbf{b}]$ is not a pivot column) and that it in fact has infinitely many solutions (because not all columns of $A$ are pivot columns). We also see that the general solution is
$$
\begin{array}{rcl}
x_1 &=& 4 - x_3 \\
x_2 &=& \tfrac{6}{5} + \tfrac{2}{5}x_3 \\
x_3 &=& \text{free}.
\end{array}
$$
Note that if we set $x_3 = -5$, then we get the particular solution
$$
x_1 = 9, \quad x_2 = -\tfrac{4}{5}, \quad x_3 = -5
$$
that was identified in an earlier example.
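As a side note, not part of the notes, the same row reduction can be reproduced with SymPy; the entries below use the signs as reconstructed above.

```python
from sympy import Matrix

Ab = Matrix([[1,  0, 1, 4],
             [2, -5, 4, 2]])   # augmented matrix [A b], signs as reconstructed above

R, pivots = Ab.rref()
print(R)        # rows [1, 0, 1, 4] and [0, 1, -2/5, 6/5]
print(pivots)   # (0, 1): columns 1 and 2 are pivot columns, so x3 is free
```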
1.4: The Matrix Equation Ax = b
Definition 51 For a matrix of size $m \times n$,
$$
A = [\mathbf{a}_1 \; \mathbf{a}_2 \; \cdots \; \mathbf{a}_n]
= \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix},
$$
and a vector of dimension $n$,
$$
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix},
$$
we define the product $A\mathbf{x}$ to be
$$
A\mathbf{x} = x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n.
$$
Example 52 Suppose
$$
A = \begin{bmatrix} 2 & -3 & 4 \\ 4 & -3 & 3 \end{bmatrix}
\quad \text{and} \quad
\mathbf{x} = \begin{bmatrix} 4 \\ -1 \\ 0 \end{bmatrix}.
$$
Then
$$
A\mathbf{x} = 4\begin{bmatrix} 2 \\ 4 \end{bmatrix} - 1\begin{bmatrix} -3 \\ -3 \end{bmatrix} + 0\begin{bmatrix} 4 \\ 3 \end{bmatrix} = \begin{bmatrix} 11 \\ 19 \end{bmatrix}.
$$
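The definition says that $A\mathbf{x}$ is a linear combination of the columns of $A$. The short sketch below, which is not in the original notes, checks this against NumPy's built-in product for Example 52, with the signs as reconstructed above.

```python
import numpy as np

# Entries of Example 52 with the signs as reconstructed above.
A = np.array([[2, -3, 4],
              [4, -3, 3]])
x = np.array([4, -1, 0])

by_columns = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
print(by_columns)   # [11 19]
print(A @ x)        # the built-in product gives the same vector
```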
Definition 53 Suppose that
$$
A = [\mathbf{a}_1 \; \mathbf{a}_2 \; \cdots \; \mathbf{a}_n]
= \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
$$
is an $m \times n$ matrix and that
$$
\mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix} \in V_m.
$$
Then
$$
A\mathbf{x} = \mathbf{b}
$$
is called a matrix equation. Generally, $A$ and $\mathbf{b}$ are known beforehand and the problem is to solve for the unknown vector $\mathbf{x} \in V_n$.
Remark 54 The system of linear equations
$$
\begin{array}{c}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\
\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m,
\end{array}
$$
the vector equation
$$
x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n = \mathbf{b},
$$
and the matrix equation
$$
A\mathbf{x} = \mathbf{b}
$$
are all equivalent to each other! The only subtle difference is that solutions of the system and of the vector equation are ordered $n$-tuples
$$
(x_1, x_2, \ldots, x_n) = (s_1, s_2, \ldots, s_n) \in \mathbb{R}^n,
$$
whereas solutions of the matrix equation are the corresponding vectors
$$
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
= \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_n \end{bmatrix} \in V_n.
$$
1.5: Solution Sets of Linear Systems
Definition 55 A homogeneous linear system is one of the form
$$
A\mathbf{x} = \mathbf{0}_m
$$
where $A$ is an $m \times n$ matrix. A nonhomogeneous linear system is a linear system that is not homogeneous.

Remark 56 Homogeneous systems are consistent! That's because they have the solution $\mathbf{x} = \mathbf{0}_n$.
Definition 57 The solution $\mathbf{x} = \mathbf{0}_n$ is called the trivial solution of the homogeneous system $A\mathbf{x} = \mathbf{0}_m$. A nontrivial solution of the homogeneous system $A\mathbf{x} = \mathbf{0}_m$ is a solution $\mathbf{x} \in V_n$ such that $\mathbf{x} \neq \mathbf{0}_n$.
Example 58 The system
$$
\begin{array}{rcl}
5x + 2y - 3z + w &=& 0 \\
3x + y - 5z + w &=& 0 \\
x - 3y - 5z + 5w &=& 0
\end{array}
$$
is homogeneous. A nontrivial solution of this system (as the reader can check) is $(x, y, z, w) = (-18, 34, 1, 25)$.
Theorem 59 If every column of $A$ is a pivot column, then the homogeneous system $A\mathbf{x} = \mathbf{0}_m$ has only the trivial solution. If not every column of $A$ is a pivot column, then the homogeneous system $A\mathbf{x} = \mathbf{0}_m$ has infinitely many nontrivial solutions (in addition to the trivial solution).
Example 60 The system given in the preceding example has coefficient matrix
$$
A = \begin{bmatrix} 5 & 2 & -3 & 1 \\ 3 & 1 & -5 & 1 \\ 1 & -3 & -5 & 5 \end{bmatrix}.
$$
Without doing any computations, we can see that it is not possible that every column of $A$ can be a pivot column. (Why?) Therefore the homogeneous system $A\mathbf{x} = \mathbf{0}_m$ has infinitely many nontrivial solutions.
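A computational aside, not part of the notes: SymPy's `nullspace` method returns a basis for the solution set of $A\mathbf{x} = \mathbf{0}_m$, so a nonempty result exhibits nontrivial solutions. The matrix below uses the signs as reconstructed in Example 58.

```python
from sympy import Matrix

# Coefficient matrix of Examples 58 and 60 with the signs as reconstructed above.
A = Matrix([[5,  2, -3, 1],
            [3,  1, -5, 1],
            [1, -3, -5, 5]])

for v in A.nullspace():   # a basis for the solution set of Ax = 0
    print(v.T)            # Matrix([[-18, 34, 1, 25]]); any nonzero multiple is a nontrivial solution
```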
Theorem 61 Suppose that $A$ is an $m \times n$ matrix and suppose that $\mathbf{b} \in V_m$. Also suppose that $\mathbf{v} \in V_n$ is a solution of the homogeneous system $A\mathbf{x} = \mathbf{0}_m$ and that $\mathbf{w} \in V_n$ is a solution of the system $A\mathbf{x} = \mathbf{b}$. Then $\mathbf{v} + \mathbf{w}$ is also a solution of $A\mathbf{x} = \mathbf{b}$.
1.7: Linear Independence
Definition 62 A set of vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ in $V_m$ is said to be linearly independent if the vector equation
$$
x_1\mathbf{v}_1 + x_2\mathbf{v}_2 + \cdots + x_n\mathbf{v}_n = \mathbf{0}_m
$$
has only the trivial solution. Otherwise the set of vectors is said to be linearly dependent.
Example 63 Let $\mathbf{v}_1$ and $\mathbf{v}_2$ be the vectors
$$
\mathbf{v}_1 = \begin{bmatrix} 4 \\ 1 \\ 4 \end{bmatrix}, \quad
\mathbf{v}_2 = \begin{bmatrix} 1 \\ 1 \\ 4 \end{bmatrix}.
$$
We solve the vector equation
$$
x_1\mathbf{v}_1 + x_2\mathbf{v}_2 = \mathbf{0}_3
$$
by observing that
$$
A = \begin{bmatrix} 4 & 1 \\ 1 & 1 \\ 4 & 4 \end{bmatrix}
\sim
\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}.
$$
Since each column of $A$ is a pivot column, the vector equation has only the trivial solution. Therefore the set of vectors $\{\mathbf{v}_1, \mathbf{v}_2\}$ is linearly independent.
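A quick check of this example, not in the original notes: the set is linearly independent exactly when every column of $[\mathbf{v}_1 \; \mathbf{v}_2]$ is a pivot column, which SymPy reports directly.

```python
from sympy import Matrix

v1 = Matrix([4, 1, 4])
v2 = Matrix([1, 1, 4])
A = v1.row_join(v2)            # the matrix [v1 v2]

_, pivots = A.rref()
print(len(pivots) == A.cols)   # True, so {v1, v2} is linearly independent
```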
Definition 64 Suppose that $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ is a linearly dependent set of vectors and suppose that $(c_1, c_2, \ldots, c_n)$ is a nontrivial solution of the vector equation
$$
x_1\mathbf{v}_1 + x_2\mathbf{v}_2 + \cdots + x_n\mathbf{v}_n = \mathbf{0}_m.
$$
Then the equation
$$
c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}_m
$$
is called a linear dependence relation for the set of vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$.
Example 65 Consider the set of vectors
$$
\mathbf{v}_1 = \begin{bmatrix} 4 \\ 1 \\ 4 \end{bmatrix}, \quad
\mathbf{v}_2 = \begin{bmatrix} 1 \\ -1 \\ -4 \end{bmatrix}, \quad
\mathbf{v}_3 = \begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix}, \quad
\mathbf{v}_4 = \begin{bmatrix} 4 \\ 3 \\ 3 \end{bmatrix}.
$$
We solve the vector equation
$$
x_1\mathbf{v}_1 + x_2\mathbf{v}_2 + x_3\mathbf{v}_3 + x_4\mathbf{v}_4 = \mathbf{0}_3
$$
by observing that
$$
\begin{bmatrix} 4 & 1 & 2 & 4 \\ 1 & -1 & 3 & 3 \\ 4 & -4 & 4 & 3 \end{bmatrix}
\sim
\begin{bmatrix} 1 & 0 & 0 & \tfrac{11}{40} \\ 0 & 1 & 0 & \tfrac{13}{20} \\ 0 & 0 & 1 & \tfrac{9}{8} \end{bmatrix}
$$
and hence that the vector equation has nontrivial solutions and that the general solution is
$$
x_1 = 11t, \quad x_2 = 26t, \quad x_3 = 45t, \quad x_4 = -40t.
$$
A linear dependence relation for the set of vectors $\mathbf{v}_1$, $\mathbf{v}_2$, $\mathbf{v}_3$, $\mathbf{v}_4$ is
$$
11\mathbf{v}_1 + 26\mathbf{v}_2 + 45\mathbf{v}_3 - 40\mathbf{v}_4 = \mathbf{0}_3.
$$
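A computational aside, not part of the notes: a linear dependence relation can be read off from a null space basis vector. The matrix below uses the sign of $\mathbf{v}_2$ as reconstructed above.

```python
from sympy import Matrix

# Columns are v1, v2, v3, v4; the sign of v2 is as reconstructed above.
A = Matrix([[4,  1, 2, 4],
            [1, -1, 3, 3],
            [4, -4, 4, 3]])

c = A.nullspace()[0] * (-40)   # scale the basis vector to clear denominators
print(c.T)                     # Matrix([[11, 26, 45, -40]])
print((A * c).T)               # Matrix([[0, 0, 0]]), i.e. 11*v1 + 26*v2 + 45*v3 - 40*v4 = 0
```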
1.8: Introduction to Linear Transformations
Definition 66 A transformation (or function or mapping), $T$, from $V_n$ to $V_m$ is a rule that assigns each vector $\mathbf{x} \in V_n$ to a vector $T(\mathbf{x}) \in V_m$. The set $V_n$ is called the domain of $T$ and the set $V_m$ is called the codomain (or target set) of $T$. If $T$ is a transformation from $V_n$ to $V_m$, then we write $T : V_n \to V_m$.
Definition 67 Suppose that $T : V_n \to V_m$. For each $\mathbf{x} \in V_n$, the vector $T(\mathbf{x}) \in V_m$ is called the image of $\mathbf{x}$ under the action of $T$. The set of all images of vectors in $V_n$ under the action of $T$ is called the range of $T$. In set notation, the range of $T$ is
$$
\{\mathbf{y} \in V_m \mid \mathbf{y} = T(\mathbf{x}) \text{ for at least one } \mathbf{x} \in V_n\}.
$$
Example 68 This example illustrates the terms defined in the preceding two definitions. Consider the transformation $T : V_2 \to V_3$ defined by
$$
T\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) = \begin{bmatrix} x^2 \\ 4 \\ y - x \end{bmatrix}.
$$
The domain of $T$ is $V_2$ and the codomain of $T$ is $V_3$.
The image of the vector
$$
\mathbf{x} = \begin{bmatrix} 4 \\ -1 \end{bmatrix}
$$
is
$$
T(\mathbf{x}) = \begin{bmatrix} 16 \\ 4 \\ -5 \end{bmatrix}.
$$
The range of $T$ consists of all vectors in $V_3$ that have the form
$$
\begin{bmatrix} s \\ 4 \\ t \end{bmatrix}
$$
where $s$ can be any nonnegative real number and $t$ can be any real number. For example, the vector
$$
\mathbf{y} = \begin{bmatrix} 6 \\ 4 \\ 9 \end{bmatrix}
$$
is in the range of $T$ because
$$
\mathbf{y} = T\left(\begin{bmatrix} \sqrt{6} \\ 9 + \sqrt{6} \end{bmatrix}\right).
$$
Note that it is also true that
$$
\mathbf{y} = T\left(\begin{bmatrix} -\sqrt{6} \\ 9 - \sqrt{6} \end{bmatrix}\right).
$$
Are there any other vectors $\mathbf{x} \in V_2$ such that $T(\mathbf{x}) = \mathbf{y}$? The answer to this does not matter as far as determining whether or not $\mathbf{y}$ is in the range of $T$ is concerned. We know that $\mathbf{y}$ is in the range of $T$ because we know that there exists at least one vector $\mathbf{x} \in V_2$ such that $T(\mathbf{x}) = \mathbf{y}$.
Definition 69 A transformation $T : V_n \to V_m$ is said to be a linear transformation if both of the following conditions are satisfied:

1. $T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})$ for all vectors $\mathbf{u}$ and $\mathbf{v}$ in $V_n$.
2. $T(c\mathbf{u}) = cT(\mathbf{u})$ for all vectors $\mathbf{u}$ in $V_n$ and for all scalars $c$.
Example 70 Let $T : V_3 \to V_2$ be defined by
$$
T\left(\begin{bmatrix} x \\ y \\ z \end{bmatrix}\right) = \begin{bmatrix} x \\ x + y + 2z \end{bmatrix}.
$$
We will show that $T$ is a linear transformation. Let
$$
\mathbf{u} = \begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix}
\quad \text{and} \quad
\mathbf{v} = \begin{bmatrix} x_2 \\ y_2 \\ z_2 \end{bmatrix}
$$
be two vectors in $V_3$. Then
$$
\begin{aligned}
T(\mathbf{u} + \mathbf{v})
&= T\left(\begin{bmatrix} x_1 + x_2 \\ y_1 + y_2 \\ z_1 + z_2 \end{bmatrix}\right) \\
&= \begin{bmatrix} x_1 + x_2 \\ (x_1 + x_2) + (y_1 + y_2) + 2(z_1 + z_2) \end{bmatrix} \\
&= \begin{bmatrix} x_1 + x_2 \\ (x_1 + y_1 + 2z_1) + (x_2 + y_2 + 2z_2) \end{bmatrix} \\
&= \begin{bmatrix} x_1 \\ x_1 + y_1 + 2z_1 \end{bmatrix} + \begin{bmatrix} x_2 \\ x_2 + y_2 + 2z_2 \end{bmatrix} \\
&= T(\mathbf{u}) + T(\mathbf{v}).
\end{aligned}
$$
Now let
$$
\mathbf{u} = \begin{bmatrix} x \\ y \\ z \end{bmatrix}
$$
be a vector in $V_3$ and let $c$ be a scalar. Then
$$
T(c\mathbf{u}) = T\left(\begin{bmatrix} cx \\ cy \\ cz \end{bmatrix}\right)
= \begin{bmatrix} cx \\ cx + cy + 2(cz) \end{bmatrix}
= c\begin{bmatrix} x \\ x + y + 2z \end{bmatrix}
= cT(\mathbf{u}).
$$
We have shown that $T$ is a linear transformation.
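A numerical spot-check of the two linearity conditions (not a proof, and not part of the original notes) can be written as follows; NumPy is assumed available.

```python
import numpy as np

def T(v):
    # The transformation of Example 70.
    x, y, z = v
    return np.array([x, x + y + 2 * z])

rng = np.random.default_rng(0)
u, v = rng.integers(-9, 10, size=3), rng.integers(-9, 10, size=3)
c = 7

print(np.array_equal(T(u + v), T(u) + T(v)))   # True
print(np.array_equal(T(c * u), c * T(u)))      # True
```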
Definition 71 The zero transformation is the linear transformation $T : V_n \to V_m$ defined by $T(\mathbf{u}) = \mathbf{0}_m$ for all $\mathbf{u} \in V_n$.
Definition 72 The identity transformation is the linear transformation $T : V_n \to V_n$ defined by $T(\mathbf{u}) = \mathbf{u}$ for all $\mathbf{u} \in V_n$.

We leave it to the reader to prove that the zero transformation and the identity transformation are both linear transformations.
Example 73 The transformation $T : V_2 \to V_3$ defined by
$$
T\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) = \begin{bmatrix} x^2 \\ 4 \\ y - x \end{bmatrix}
$$
is not a linear transformation. We leave it to the reader to explain why not.
1.9: The Matrix of a Linear Transformation
Definition 74 A transformation $T : V_n \to V_m$ is said to be onto $V_m$ if the range of $T$ is $V_m$.

Definition 75 (Alternate Definition) A transformation $T : V_n \to V_m$ is said to be onto $V_m$ if each vector $\mathbf{y} \in V_m$ is the image of at least one vector $\mathbf{x} \in V_n$.

Definition 76 A transformation $T : V_n \to V_m$ is said to be one-to-one if each vector $\mathbf{y} \in V_m$ is the image of at most one vector $\mathbf{x} \in V_n$.

Definition 77 A transformation $T : V_n \to V_m$ that is both onto $V_m$ and one-to-one is called a bijection.
Remark 78 In reference to the dart analogy that we have been using in class: Supposing that $V_n$ is our box of darts and $V_m$ is our dartboard, a transformation $T : V_n \to V_m$ is onto $V_m$ if every point on the dartboard gets hit by at least one dart, whereas $T$ is one-to-one if every point on the dartboard either does not get hit or gets hit by exactly one dart. If $T$ is a bijection, then every point on the dartboard gets hit by exactly one dart.
2.2: The Inverse of a Matrix
Definition 79 Suppose that $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is a $2 \times 2$ matrix. The quantity $ad - bc$ is called the determinant of $A$. It is denoted either by $\det(A)$ or by $|A|$.
Definition 80 The $n \times n$ identity matrix, denoted by $I_n$, is the matrix that has entries of 1 at all positions along its main diagonal and entries of 0 elsewhere.
Example 81 The $3 \times 3$ identity matrix is
$$
I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.
$$
Definition 82 An $n \times n$ matrix, $A$, is said to be invertible if there exists an $n \times n$ matrix, $B$, such that $BA = AB = I_n$.
Proposition 83 If $A$ is an invertible $n \times n$ matrix and $B$ and $C$ are $n \times n$ matrices such that $BA = AB = I_n$ and $CA = AC = I_n$, then $B = C$. In other words, if $A$ has an inverse, then this inverse is unique.
Definition 84 Suppose that $A$ is an invertible $n \times n$ matrix. The inverse of $A$ is defined to be the unique matrix, $A^{-1}$, such that $A^{-1}A = AA^{-1} = I_n$.
Example 85 The inverse of the matrix
$$
A = \begin{bmatrix} 3 & 0 \\ -1 & 3 \end{bmatrix}
$$
is
$$
A^{-1} = \begin{bmatrix} \tfrac{1}{3} & 0 \\ \tfrac{1}{9} & \tfrac{1}{3} \end{bmatrix}.
$$
(The reader should perform the computations to check that $A^{-1}A = AA^{-1} = I_2$.)
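A quick numerical check of this example, not in the original notes, with the sign placement as reconstructed above:

```python
import numpy as np

A = np.array([[3., 0.],
              [-1., 3.]])   # sign placement as reconstructed above

A_inv = np.linalg.inv(A)
print(A_inv)                                 # approximately [[1/3, 0], [1/9, 1/3]]
print(np.allclose(A_inv @ A, np.eye(2)))     # True
print(np.allclose(A @ A_inv, np.eye(2)))     # True
```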
Definition 86 An elementary matrix is a matrix that can be obtained by performing a single elementary row operation on an identity matrix.
Example 87 The matrix
$$
E = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 9 & 0 & 1 \end{bmatrix}
$$
is an elementary matrix because it is obtained from $I_3$ by the elementary row operation $9R_1 + R_3 \to R_3$.
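As an illustration only, not part of the notes, the matrix $E$ can be produced by applying the stated row operation to $I_3$:

```python
import numpy as np

E = np.eye(3)
E[2] = E[2] + 9 * E[0]   # the row operation 9*R1 + R3 -> R3 applied to I3
print(E)
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [9. 0. 1.]]
```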
2.3: Characterizations of Invertible Matrices
Definition 88 A transformation $T : V_n \to V_n$ is said to be invertible if there exists a transformation $S : V_n \to V_n$ such that $S(T(\mathbf{x})) = \mathbf{x}$ for all $\mathbf{x} \in V_n$ and $T(S(\mathbf{x})) = \mathbf{x}$ for all $\mathbf{x} \in V_n$.
Proposition 89 If $T : V_n \to V_n$ is invertible, then there is a unique transformation $S : V_n \to V_n$ such that $S(T(\mathbf{x})) = \mathbf{x}$ for all $\mathbf{x} \in V_n$ and $T(S(\mathbf{x})) = \mathbf{x}$ for all $\mathbf{x} \in V_n$. This unique transformation is called the inverse of $T$ and is denoted by $T^{-1}$.
Example 90 If $T : V_2 \to V_2$ is the linear transformation that rotates all vectors in $V_2$ counterclockwise about the origin through an angle of $90^\circ$, then $T$ is invertible and $T^{-1} : V_2 \to V_2$ is the linear transformation that rotates all vectors in $V_2$ clockwise about the origin through an angle of $90^\circ$.
Theorem 91 Suppose that $A$ is an $n \times n$ matrix and let $T : V_n \to V_n$ be the linear transformation defined by $T(\mathbf{x}) = A\mathbf{x}$ for all $\mathbf{x} \in V_n$. Then the following statements are all equivalent to each other (meaning that all of the statements are true or all of them are false).

1. $A$ is invertible.
2. $T$ is invertible.
3. $T$ is onto $V_n$.
4. $T$ is one-to-one.
5. $A \sim I_n$.
6. The columns of $A$ span $V_n$.
7. The homogeneous equation $A\mathbf{x} = \mathbf{0}_n$ has only the trivial solution.
8. For every $\mathbf{b} \in V_n$, the equation $A\mathbf{x} = \mathbf{b}$ has a unique solution.