2.4 Corollary Where one matrix reduces to another, each row of the second is a
linear combination of the rows of the first.
Proof For any two interreducible matrices A and B there is some minimum
number of row operations that will take one to the other. We proceed by
induction on that number.
In the base step, that we can go from the first to the second using zero
reduction operations, the two matrices are equal. Then each row $\vec{\beta}_i$ of B is trivially a combination of the rows $\vec{\alpha}_1, \ldots, \vec{\alpha}_m$ of A:
$$\vec{\beta}_i = 0\cdot\vec{\alpha}_1 + \cdots + 1\cdot\vec{\alpha}_i + \cdots + 0\cdot\vec{\alpha}_m.$$
For the inductive step assume the inductive hypothesis: with k > 0, any
matrix that can be derived from A in k or fewer operations has rows that are
linear combinations of A's rows. Consider a matrix B such that reducing A to B
requires k + 1 operations. In that reduction there is a next-to-last matrix G, so
that $A \rightarrow \cdots \rightarrow G \rightarrow B$. The inductive hypothesis applies to this G because
it is only k steps away from A. That is, each row of G is a linear combination of
the rows of A.
We will verify that the rows of B are linear combinations of the rows of G.
Then the Linear Combination Lemma, Lemma 2.3, applies to show that the
rows of B are linear combinations of the rows of A.
If the row operation taking G to B is a swap then the rows of B are just the
rows of G reordered and each row of B is a linear combination of the rows of G.
If the operation taking G to B is multiplication of a row by a scalar, $c\rho_i$, then
$\vec{\beta}_i = c\vec{\gamma}_i$ (writing $\vec{\gamma}_i$ and $\vec{\beta}_i$ for the i-th rows of G and B) and the other rows
are unchanged. Finally, if the row operation is adding a multiple of one row to
another, $r\rho_i + \rho_j$, then only row j of B differs from the matching row of G, and
$\vec{\beta}_j = r\vec{\gamma}_i + \vec{\gamma}_j$, which is indeed a linear combination
of the rows of G.
Because we have proved both a base step and an inductive step, the proposition follows by the principle of mathematical induction.
QED
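To see the corollary in one small case, chosen just for illustration, apply $-3\rho_1+\rho_2$ to a $2\times 2$ matrix.
$$\begin{pmatrix}1&2\\3&4\end{pmatrix} \xrightarrow{-3\rho_1+\rho_2} \begin{pmatrix}1&2\\0&-2\end{pmatrix}$$
Each row of the result is a linear combination of the rows of the starting matrix: the top row is $1\cdot\begin{pmatrix}1&2\end{pmatrix}+0\cdot\begin{pmatrix}3&4\end{pmatrix}$ and the bottom row is $-3\cdot\begin{pmatrix}1&2\end{pmatrix}+1\cdot\begin{pmatrix}3&4\end{pmatrix}$.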
We now have the insight that Gauss's Method builds linear combinations
of the rows. But of course its goal is to end in echelon form, since that is a
particularly simple form of the system. For instance, in this echelon form matrix
$$R = \begin{pmatrix} 2&3&7&8&0&0\\ 0&0&1&5&1&1\\ 0&0&0&3&3&0\\ 0&0&0&0&2&1 \end{pmatrix}$$
$x_1$ has been removed from $x_5$'s equation. That is, Gauss's Method has made
$x_5$'s row in some way independent of $x_1$'s row.
The following result makes this intuition precise. What Gauss's Method
eliminates is linear relationships among the rows.
2.5 Lemma In an echelon form matrix, no nonzero row is a linear combination
of the other nonzero rows.
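For instance, consider the echelon form matrix R displayed above. No combination of its second, third, and fourth rows can equal its first row, because each of those rows has a 0 in the first column,
$$c_2\begin{pmatrix}0&0&1&5&1&1\end{pmatrix} + c_3\begin{pmatrix}0&0&0&3&3&0\end{pmatrix} + c_4\begin{pmatrix}0&0&0&0&2&1\end{pmatrix}$$
so any such combination has a 0 as its first entry, while the first row of R leads with 2.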
Proof Let R be an echelon form matrix and consider its non-$\vec{0}$ rows $\vec{\rho}_1, \ldots, \vec{\rho}_m$. First note that if one of them were a linear combination of the others then, moving everything to one side, we would have a relationship
$$\vec{0} = c_1\vec{\rho}_1 + c_2\vec{\rho}_2 + \cdots + c_m\vec{\rho}_m \qquad (*)$$
in which not all of the coefficients are zero (the coefficient on the combined row would be $-1$). So it is enough to show that in any such equation every coefficient must be 0. For that we argue by induction on the row index, writing $\ell_i$ for the number of the column in which row i has its leading entry.

For the base step consider the entries of the rows in column $\ell_1$. Equation $(*)$ gives
$$0 = c_1 r_{1,\ell_1} + c_2 r_{2,\ell_1} + \cdots + c_m r_{m,\ell_1}.$$
The matrix is in echelon form so every row after the first has a zero entry in that
column, $r_{2,\ell_1} = \cdots = r_{m,\ell_1} = 0$. Thus this equation shows that $c_1 = 0$, because
$r_{1,\ell_1} \neq 0$ as it leads the row.
The inductive step is much the same as the base step. Again consider
equation $(*)$. We will prove that if the coefficient $c_i$ is 0 for each row index
$i \in \{1, \ldots, k\}$ then $c_{k+1}$ is also 0. We focus on the entries from column $\ell_{k+1}$.
$$0 = c_1 r_{1,\ell_{k+1}} + \cdots + c_{k+1} r_{k+1,\ell_{k+1}} + \cdots + c_m r_{m,\ell_{k+1}}$$
By the inductive hypothesis the coefficients $c_1, \ldots, c_k$ are all 0, and because the matrix is in echelon form, every row after row $k+1$ has a zero entry in column $\ell_{k+1}$, that is, $r_{k+2,\ell_{k+1}} = \cdots = r_{m,\ell_{k+1}} = 0$. So the equation reduces to $0 = c_{k+1} r_{k+1,\ell_{k+1}}$, and since $r_{k+1,\ell_{k+1}} \neq 0$ as it leads its row, $c_{k+1} = 0$. QED

We can now show that each matrix is row equivalent to one and only one reduced echelon form matrix. Gauss's Method followed by eliminating up brings any matrix to at least one reduced echelon form matrix, so what needs checking is uniqueness. We argue by induction on the
number of columns n.
The base case is that the matrix has n = 1 column. If this is the zero matrix
then its echelon form is the zero matrix. If instead it has any nonzero entries
then when the matrix is brought to reduced echelon form it must have at least
one nonzero entry, which must be a 1 in the first row. Either way, its reduced
echelon form is unique.
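For instance, here is one nonzero single-column matrix brought to reduced echelon form; by the argument above, no choice of operations could have ended anywhere other than the column with a 1 on top and 0's below.
$$\begin{pmatrix}0\\3\\-1\end{pmatrix} \xrightarrow{\rho_1\leftrightarrow\rho_2} \begin{pmatrix}3\\0\\-1\end{pmatrix} \xrightarrow{(1/3)\rho_1} \begin{pmatrix}1\\0\\-1\end{pmatrix} \xrightarrow{\rho_1+\rho_3} \begin{pmatrix}1\\0\\0\end{pmatrix}$$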
For the inductive step we assume that n > 1 and that all matrices with m rows
and fewer than n columns have a unique reduced echelon form. Consider
an m×n matrix A and suppose that B and C are two reduced echelon form
matrices derived from A. We will show that these two must be equal.
Let $\hat{A}$ be the matrix consisting of the first $n-1$ columns of A. Observe
that any sequence of row operations that brings A to reduced echelon form will
also bring $\hat{A}$ to reduced echelon form. By the inductive hypothesis this reduced
echelon form of $\hat{A}$ is unique, so if B and C differ then the difference must occur
in column n.
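To see the idea in one small case, take A to be $2\times 3$, so that $\hat{A}$ consists of its first two columns.
$$\begin{pmatrix}1&2&0\\3&4&1\end{pmatrix} \xrightarrow{-3\rho_1+\rho_2} \begin{pmatrix}1&2&0\\0&-2&1\end{pmatrix} \xrightarrow{-(1/2)\rho_2} \begin{pmatrix}1&2&0\\0&1&-1/2\end{pmatrix} \xrightarrow{-2\rho_2+\rho_1} \begin{pmatrix}1&0&1\\0&1&-1/2\end{pmatrix}$$
The same operations take $\hat{A} = \begin{pmatrix}1&2\\3&4\end{pmatrix}$ to $\begin{pmatrix}1&0\\0&1\end{pmatrix}$, visible in the first two columns of the final matrix.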
We finish the inductive step, and the argument, by showing that the two
cannot differ only in that column. Consider a homogeneous system of equations
for which A is the matrix of coefficients.
$$\begin{aligned}
a_{1,1}x_1 + a_{1,2}x_2 + \cdots + a_{1,n}x_n &= 0\\
a_{2,1}x_1 + a_{2,2}x_2 + \cdots + a_{2,n}x_n &= 0\\
&\ \ \vdots\\
a_{m,1}x_1 + a_{m,2}x_2 + \cdots + a_{m,n}x_n &= 0
\end{aligned} \qquad (*)$$
By Theorem One.I.1.5 the set of solutions to that system is the same as the set
of solutions to B's system
$$\begin{aligned}
b_{1,1}x_1 + b_{1,2}x_2 + \cdots + b_{1,n}x_n &= 0\\
b_{2,1}x_1 + b_{2,2}x_2 + \cdots + b_{2,n}x_n &= 0\\
&\ \ \vdots\\
b_{m,1}x_1 + b_{m,2}x_2 + \cdots + b_{m,n}x_n &= 0
\end{aligned} \qquad (**)$$
and to C's.
$$\begin{aligned}
c_{1,1}x_1 + c_{1,2}x_2 + \cdots + c_{1,n}x_n &= 0\\
c_{2,1}x_1 + c_{2,2}x_2 + \cdots + c_{2,n}x_n &= 0\\
&\ \ \vdots\\
c_{m,1}x_1 + c_{m,2}x_2 + \cdots + c_{m,n}x_n &= 0
\end{aligned} \qquad (***)$$
With B and C different only in column n, suppose that they differ in row i.
Subtract row i of (***) from row i of (**) to get the equation $(b_{i,n} - c_{i,n})\,x_n = 0$.
We've assumed that $b_{i,n} \neq c_{i,n}$, so $x_n = 0$. Thus in (**) and (***) the n-th
column contains a leading entry, or else the variable $x_n$ would be free. That's a
contradiction, because with B and C equal on the first $n-1$ columns, the leading
entries in the n-th column would have to be in the same row, and with both
matrices in reduced echelon form, both leading entries would have to be 1 and
would have to be the only nonzero entries in that column. So B = C.
QED
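For instance, here are two different reductions of the same matrix, chosen just for illustration; as promised, both end at the same reduced echelon form matrix.
$$\begin{pmatrix}2&2\\1&3\end{pmatrix} \xrightarrow{(1/2)\rho_1} \begin{pmatrix}1&1\\1&3\end{pmatrix} \xrightarrow{-\rho_1+\rho_2} \begin{pmatrix}1&1\\0&2\end{pmatrix} \xrightarrow{(1/2)\rho_2} \begin{pmatrix}1&1\\0&1\end{pmatrix} \xrightarrow{-\rho_2+\rho_1} \begin{pmatrix}1&0\\0&1\end{pmatrix}$$
$$\begin{pmatrix}2&2\\1&3\end{pmatrix} \xrightarrow{\rho_1\leftrightarrow\rho_2} \begin{pmatrix}1&3\\2&2\end{pmatrix} \xrightarrow{-2\rho_1+\rho_2} \begin{pmatrix}1&3\\0&-4\end{pmatrix} \xrightarrow{-(1/4)\rho_2} \begin{pmatrix}1&3\\0&1\end{pmatrix} \xrightarrow{-3\rho_2+\rho_1} \begin{pmatrix}1&0\\0&1\end{pmatrix}$$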
That result answers the two questions from this section's introduction: do
any two echelon form versions of a linear system have the same number of free
variables, and if so are they exactly the same variables? We get from any echelon
form version to the reduced echelon form by eliminating up, so any echelon form
version of a system has the same free variables as the reduced echelon form, and
therefore uniqueness of reduced echelon form gives that the same variables are
free in all echelon form versions of a system. Thus both questions are answered
yes. There is no linear system and no combination of row operations such that,
say, we could solve the system one way and get y and z free but solve it another
way and get y and w free.
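For instance, consider this small system, chosen just for illustration.
$$\begin{aligned} x + y + z &= 0\\ 2x + 2y + z &= 0 \end{aligned}$$
One route is $-2\rho_1+\rho_2$, which gives the echelon form equations $x+y+z=0$ and $-z=0$, where y is free. Another route starts with $\rho_1\leftrightarrow\rho_2$ and then applies $-(1/2)\rho_1+\rho_2$, giving $2x+2y+z=0$ and $(1/2)z=0$, where again y is the free variable. No sequence of operations can make a different variable free.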
We close with a recap. In Gauss's Method we start with a matrix and then
derive a sequence of other matrices. We defined two matrices to be related if we
can derive one from the other. That relation is an equivalence relation, called
row equivalence, and so partitions the set of all matrices into row equivalence
classes.
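For instance, the two matrices shown in the class pictured below are related by a single row operation: $-2\rho_1+\rho_2$ takes $\begin{pmatrix}1&3\\2&7\end{pmatrix}$ to $\begin{pmatrix}1&3\\0&1\end{pmatrix}$, so they lie in the same class.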
[Picture: one row equivalence class, shown containing the matrices $\begin{pmatrix}1&3\\2&7\end{pmatrix}$ and $\begin{pmatrix}1&3\\0&1\end{pmatrix}$, with dots indicating its other members.]
(There are infinitely many matrices in the pictured class, but we've only got
room to show two.) We have proved there is one and only one reduced echelon
form matrix in each row equivalence class. So the reduced echelon form is a
canonical form for row equivalence: the reduced echelon form matrices are representatives of the classes.
[Picture: the partition of all matrices into row equivalence classes, each class marked with its reduced echelon form representative, such as $\begin{pmatrix}1&0\\0&1\end{pmatrix}$.]
2.9 Example We can describe all the classes by listing all possible reduced echelon
form matrices. Any 2×2 matrix lies in one of these: the class of matrices row
equivalent to this,
$$\begin{pmatrix}0&0\\0&0\end{pmatrix}$$
the infinitely many classes of matrices row equivalent to one of this type
$$\begin{pmatrix}1&a\\0&0\end{pmatrix}$$
where $a \in \mathbb{R}$ (including $a = 0$), the class of matrices row equivalent to this,
$$\begin{pmatrix}0&1\\0&0\end{pmatrix}$$
and the class of matrices row equivalent to this
$$\begin{pmatrix}1&0\\0&1\end{pmatrix}$$
(this is the class of nonsingular 2×2 matrices).
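For instance, to place a given matrix, just reduce it: $\begin{pmatrix}2&4\\1&2\end{pmatrix}$ goes, via $(1/2)\rho_1$ and then $-\rho_1+\rho_2$, to $\begin{pmatrix}1&2\\0&0\end{pmatrix}$, so it lies in the class of matrices row equivalent to $\begin{pmatrix}1&a\\0&0\end{pmatrix}$ with $a = 2$.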
Exercises
X 2.10 Decide if the matrices are row equivalent.
(a) $\begin{pmatrix}1&2\\4&8\end{pmatrix}$, $\begin{pmatrix}0&1\\1&2\end{pmatrix}$
(b) $\begin{pmatrix}1&0&2\\3&-1&1\\5&-1&5\end{pmatrix}$, $\begin{pmatrix}1&0&2\\0&2&10\\2&0&4\end{pmatrix}$
(c) $\begin{pmatrix}2&1&-1\\1&1&0\\4&3&-1\end{pmatrix}$, $\begin{pmatrix}1&0&2\\0&2&10\end{pmatrix}$
(d) $\begin{pmatrix}1&1&1\\-1&2&2\end{pmatrix}$, $\begin{pmatrix}0&3&-1\\2&2&5\end{pmatrix}$
(e) $\begin{pmatrix}1&1&1\\0&0&3\end{pmatrix}$, $\begin{pmatrix}0&1&2\\1&-1&1\end{pmatrix}$
2.11 Describe the matrices in each of the classes represented in Example 2.9.
2.12 Describe all matrices in the row equivalence class of these.
(a) $\begin{pmatrix}1&0\\0&0\end{pmatrix}$
(b) $\begin{pmatrix}1&2\\2&4\end{pmatrix}$
(c) $\begin{pmatrix}1&1\\1&3\end{pmatrix}$
2.13 How many row equivalence classes are there?
2.14 Can row equivalence classes contain different-sized matrices?
2.15 How big are the row equivalence classes?
(a) Show that for any matrix of all zeros, the class is finite.
(b) Do any other classes contain only finitely many members?
X 2.16 Give two reduced echelon form matrices that have their leading entries in the
same columns, but that are not row equivalent.
X 2.17 Show that any two n×n nonsingular matrices are row equivalent. Are any two
singular matrices row equivalent?
X 2.18 Describe all of the row equivalence classes containing these.
(a) 2×2 matrices
(b) 2×3 matrices
(c) 3×2 matrices