EXAMPLE 6
Solve the following homogeneous linear system:
x_1 + 3x_2 − 2x_3 + 2x_5 = 0
2x_1 + 6x_2 − 5x_3 − 2x_4 + 4x_5 − 3x_6 = 0
5x_3 + 10x_4 + 15x_6 = 0
2x_1 + 6x_2 + 8x_4 + 4x_5 + 18x_6 = 0        (4)
Solution
Observe first that the coefficients of the unknowns in this system are the same
as those in Example 5; that is, the two systems differ only in the constants on the right
side. The augmented matrix for the given homogeneous system is
⎡ 1  3  −2   0  2   0  0 ⎤
⎢ 2  6  −5  −2  4  −3  0 ⎥
⎢ 0  0   5  10  0  15  0 ⎥
⎣ 2  6   0   8  4  18  0 ⎦        (5)
which is the same as the augmented matrix for the system in Example 5, except for zeros
in the last column. Thus, the reduced row echelon form of this matrix will be the same
as that of the augmented matrix in Example 5, except for the last column. However,
a moment’s reflection will make it evident that a column of zeros is not changed by an
elementary row operation, so the reduced row echelon form of (5) is
⎡ 1  3  0  4  2  0  0 ⎤
⎢ 0  0  1  2  0  0  0 ⎥
⎢ 0  0  0  0  0  1  0 ⎥
⎣ 0  0  0  0  0  0  0 ⎦        (6)
The corresponding system of equations is
x_1 + 3x_2 + 4x_4 + 2x_5 = 0
x_3 + 2x_4 = 0
x_6 = 0
Solving for the leading variables, we obtain
x_1 = −3x_2 − 4x_4 − 2x_5
x_3 = −2x_4
x_6 = 0        (7)
If we now assign the free variables x_2, x_4, and x_5 arbitrary values r, s, and t, respectively, then we can express the solution set parametrically as
x_1 = −3r − 4s − 2t,   x_2 = r,   x_3 = −2s,   x_4 = s,   x_5 = t,   x_6 = 0
Note that the trivial solution results when r = s = t = 0.
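As a quick check on this parametric description, the formulas can be substituted back into system (4). The short Python sketch below is an illustration added here, not part of the text's solution; the coefficient rows are copied from (4), and the helper name `solution` is ours.

```python
# Verify that x1 = -3r - 4s - 2t, x2 = r, x3 = -2s, x4 = s, x5 = t, x6 = 0
# satisfies every equation of the homogeneous system (4).
coeffs = [
    [1, 3, -2,  0, 2,  0],   # x1 + 3x2 - 2x3 + 2x5 = 0
    [2, 6, -5, -2, 4, -3],   # 2x1 + 6x2 - 5x3 - 2x4 + 4x5 - 3x6 = 0
    [0, 0,  5, 10, 0, 15],   # 5x3 + 10x4 + 15x6 = 0
    [2, 6,  0,  8, 4, 18],   # 2x1 + 6x2 + 8x4 + 4x5 + 18x6 = 0
]

def solution(r, s, t):
    return [-3*r - 4*s - 2*t, r, -2*s, s, t, 0]

for (r, s, t) in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (2, -3, 5)]:
    x = solution(r, s, t)
    residuals = [sum(c*xi for c, xi in zip(row, x)) for row in coeffs]
    assert all(res == 0 for res in residuals), residuals
print("All sampled (r, s, t) values give solutions of the homogeneous system.")
```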
Free Variables in Homogeneous Linear Systems
Example 6 illustrates two important points about solving homogeneous linear systems:
1.
Elementary row operations do not alter columns of zeros in a matrix, so the reduced
row echelon form of the augmented matrix for a homogeneous linear system has
a final column of zeros. This implies that the linear system corresponding to the
reduced row echelon form is homogeneous, just like the original system.
2.
When we constructed the homogeneous linear system corresponding to augmented
matrix (6), we ignored the row of zeros because the corresponding equation
0x_1 + 0x_2 + 0x_3 + 0x_4 + 0x_5 + 0x_6 = 0
does not impose any conditions on the unknowns. Thus, depending on whether or
not the reduced row echelon form of the augmented matrix for a homogeneous linear
system has any zero rows, the linear system corresponding to that reduced row
echelon form will either have the same number of equations as the original system
or it will have fewer.
Now consider a general homogeneous linear system with n unknowns, and suppose that the reduced row echelon form of the augmented matrix has r nonzero rows. Since each nonzero row has a leading 1, and since each leading 1 corresponds to a leading variable, the homogeneous system corresponding to the reduced row echelon form of the augmented matrix must have r leading variables and n − r free variables. Thus, this system is of the form
x_{k_1} + Σ( ) = 0
x_{k_2} + Σ( ) = 0
   ⋮
x_{k_r} + Σ( ) = 0        (8)
where in each equation the expression Σ( ) denotes a sum that involves the free variables, if any [see (7), for example]. In summary, we have the following result.
THEOREM 1.2.1  Free Variable Theorem for Homogeneous Systems
If a homogeneous linear system has n unknowns, and if the reduced row echelon form of its augmented matrix has r nonzero rows, then the system has n − r free variables.
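Theorem 1.2.1 is easy to confirm numerically for the system of Example 6. The sketch below is a minimal illustration, assuming the third-party SymPy library (not used in the text): `Matrix.rref()` returns the reduced row echelon form together with the pivot-column indices, from which r and n − r can be read off.

```python
# Count leading and free variables for the homogeneous system of Example 6.
from sympy import Matrix

augmented = Matrix([
    [1, 3, -2,  0, 2,  0, 0],
    [2, 6, -5, -2, 4, -3, 0],
    [0, 0,  5, 10, 0, 15, 0],
    [2, 6,  0,  8, 4, 18, 0],
])

rref_form, pivot_cols = augmented.rref()   # pivot_cols are 0-based pivot column indices
n = augmented.cols - 1                     # number of unknowns (last column is the right side)
r = len(pivot_cols)                        # nonzero rows = number of leading 1's
print("leading variables:", r)             # 3  (x1, x3, x6)
print("free variables:   ", n - r)         # 3  (x2, x4, x5), as in Example 6
```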
Theorem 1.2.1 has an important implication for homogeneous linear systems with more unknowns than equations. Specifically, if a homogeneous linear system has m equations in n unknowns, and if m < n, then it must also be true that r < n (why?). This being the case, the theorem implies that there is at least one free variable, and this implies that the system has infinitely many solutions. Thus, we have the following result.
THEOREM 1.2.2
A homogeneous linear system with more unknowns than equations has infinitely many solutions.

Note that Theorem 1.2.2 applies only to homogeneous systems; a nonhomogeneous system with more unknowns than equations need not be consistent. However, we will prove later that if a nonhomogeneous system with more unknowns than equations is consistent, then it has infinitely many solutions.
In retrospect, we could have anticipated that the homogeneous system in Example 6
would have infinitely many solutions since it has four equations in six unknowns.
Gaussian Elimination and Back-Substitution
For small linear systems that are solved by hand (such as most of those in this text),
Gauss–Jordan elimination (reduction to reduced row echelon form) is a good procedure
to use. However, for large linear systems that require a computer solution, it is generally
more efficient to use Gaussian elimination (reduction to row echelon form) followed by
a technique known as
back-substitution
to complete the process of solving the system.
The next example illustrates this technique.
EXAMPLE 7
Example 5 Solved by Back-Substitution
From the computations in Example 5, a row echelon form of the augmented matrix is
⎡ 1  3  −2  0  2  0   0  ⎤
⎢ 0  0   1  2  0  3   1  ⎥
⎢ 0  0   0  0  0  1  1/3 ⎥
⎣ 0  0   0  0  0  0   0  ⎦
To solve the corresponding system of equations
x_1 + 3x_2 − 2x_3 + 2x_5 = 0
x_3 + 2x_4 + 3x_6 = 1
x_6 = 1/3
we proceed as follows:
Step 1.
Solve the equations for the leading variables.
x_1 = −3x_2 + 2x_3 − 2x_5
x_3 = 1 − 2x_4 − 3x_6
x_6 = 1/3
Step 2.
Beginning with the bottom equation and working upward, successively substitute
each equation into all the equations above it.
Substituting x_6 = 1/3 into the second equation yields
x_1 = −3x_2 + 2x_3 − 2x_5
x_3 = −2x_4
x_6 = 1/3
Substituting x_3 = −2x_4 into the first equation yields
x_1 = −3x_2 − 4x_4 − 2x_5
x_3 = −2x_4
x_6 = 1/3
Step 3.
Assign arbitrary values to the free variables, if any.
If we now assign x_2, x_4, and x_5 the arbitrary values r, s, and t, respectively, the general solution is given by the formulas
x_1 = −3r − 4s − 2t,   x_2 = r,   x_3 = −2s,   x_4 = s,   x_5 = t,   x_6 = 1/3
This agrees with the solution obtained in Example 5.
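When a row echelon form corresponds to a square system with no free variables, Steps 1 and 2 can be automated directly. The sketch below is a generic illustration under that assumption; the helper `back_substitute` and the sample system are ours, not the text's worked example, which also involves free variables.

```python
# Back-substitution for an upper-triangular system U x = b with nonzero
# diagonal entries (no free variables).  Works from the last equation upward.
def back_substitute(U, b):
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # everything to the right of the diagonal is already known
        known = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - known) / U[i][i]
    return x

# Example: x1 + 2x2 - x3 = 3,  x2 + 4x3 = 9,  2x3 = 4
U = [[1, 2, -1],
     [0, 1,  4],
     [0, 0,  2]]
b = [3, 9, 4]
print(back_substitute(U, b))   # [3.0, 1.0, 2.0]
```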
EXAMPLE 8
Suppose that the matrices below are augmented matrices for linear systems in the unknowns x_1, x_2, x_3, and x_4. These matrices are all in row echelon form but not reduced row echelon form. Discuss the existence and uniqueness of solutions to the corresponding linear systems.
(a)
⎡ 1  −3  7   2  5 ⎤
⎢ 0   1  2  −4  1 ⎥
⎢ 0   0  1   6  9 ⎥
⎣ 0   0  0   0  1 ⎦

(b)
⎡ 1  −3  7   2  5 ⎤
⎢ 0   1  2  −4  1 ⎥
⎢ 0   0  1   6  9 ⎥
⎣ 0   0  0   0  0 ⎦

(c)
⎡ 1  −3  7   2  5 ⎤
⎢ 0   1  2  −4  1 ⎥
⎢ 0   0  1   6  9 ⎥
⎣ 0   0  0   1  0 ⎦
Solution (a)  The last row corresponds to the equation

0x_1 + 0x_2 + 0x_3 + 0x_4 = 1

from which it is evident that the system is inconsistent.
Solution (b)  The last row corresponds to the equation

0x_1 + 0x_2 + 0x_3 + 0x_4 = 0
which has no effect on the solution set. In the remaining three equations the variables x_1, x_2, and x_3 correspond to leading 1's and hence are leading variables. The variable x_4 is a free variable. With a little algebra, the leading variables can be expressed in terms of the free variable, and the free variable can be assigned an arbitrary value. Thus, the system must have infinitely many solutions.
Solution (c)  The last row corresponds to the equation

x_4 = 0

which gives us a numerical value for x_4. If we substitute this value into the third equation, namely,

x_3 + 6x_4 = 9
we obtain x_3 = 9. You should now be able to see that if we continue this process and substitute the known values of x_3 and x_4 into the equation corresponding to the second row, we will obtain a unique numerical value for x_2; and if, finally, we substitute the known values of x_4, x_3, and x_2 into the equation corresponding to the first row, we will produce a unique numerical value for x_1. Thus, the system has a unique solution.
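The case analysis in parts (a)-(c) can be summarized by a small routine that inspects a row echelon form. The sketch below is an illustrative helper (the function `classify` and its exact decision rules are ours, not the text's): a row of the form 0 = nonzero signals inconsistency, a leading entry in every unknown signals a unique solution, and anything else leaves at least one free variable and hence infinitely many solutions.

```python
# Classify the solutions of a system from an augmented matrix in row echelon form.
def classify(aug):
    num_unknowns = len(aug[0]) - 1
    leading = 0
    for row in aug:
        coeffs, rhs = row[:-1], row[-1]
        if all(c == 0 for c in coeffs):
            if rhs != 0:
                return "inconsistent"      # row says 0 = nonzero
        else:
            leading += 1                   # row contributes one leading variable
    return "unique" if leading == num_unknowns else "infinitely many"

a = [[1, -3, 7, 2, 5], [0, 1, 2, -4, 1], [0, 0, 1, 6, 9], [0, 0, 0, 0, 1]]
b = [[1, -3, 7, 2, 5], [0, 1, 2, -4, 1], [0, 0, 1, 6, 9], [0, 0, 0, 0, 0]]
c = [[1, -3, 7, 2, 5], [0, 1, 2, -4, 1], [0, 0, 1, 6, 9], [0, 0, 0, 1, 0]]
print(classify(a), classify(b), classify(c))   # inconsistent infinitely many unique
```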
Some Facts About Echelon Forms
There are three facts about row echelon forms and reduced row echelon forms that are
important to know but we will not prove:
1.
Every matrix has a unique reduced row echelon form; that is, regardless of whether
you use Gauss–Jordan elimination or some other sequence of elementary row oper-
ations, the same reduced row echelon form will result in the end.*
2.
Row echelon forms are not unique; that is, different sequences of elementary row
operations can result in different row echelon forms.
3.
Although row echelon forms are not unique, the reduced row echelon form and all row echelon forms of a matrix A have the same number of zero rows, and the leading 1's always occur in the same positions. Those are called the pivot positions of A. A column that contains a pivot position is called a pivot column of A.
* A proof of this result can be found in the article "The Reduced Row Echelon Form of a Matrix Is Unique: A Simple Proof," by Thomas Yuster, Mathematics Magazine, Vol. 57, No. 2, 1984, pp. 93–94.
EXAMPLE 9
Pivot Positions and Columns
Earlier in this section (immediately after Definition 1) we found a row echelon form of
A =
⎡ 0  0   −2  0   7  12 ⎤
⎢ 2  4  −10  6  12  28 ⎥
⎣ 2  4   −5  6  −5  −1 ⎦
to be
⎡ 1  2  −5  3    6    14 ⎤
⎢ 0  0   1  0  −7/2   −6 ⎥
⎣ 0  0   0  0    1     2 ⎦
The leading 1’s occur in positions (row 1, column 1), (row 2, column 3), and (row 3,
column 5). These are the pivot positions. The pivot columns are columns 1, 3, and 5.
If A is the augmented matrix for a linear system, then the pivot columns identify the leading variables. As an illustration, in Example 5 the pivot columns are 1, 3, and 6, and the leading variables are x_1, x_3, and x_6.
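The pivot columns can also be read off mechanically from the reduced row echelon form. As an aside that assumes the SymPy library (not part of the text), the sketch below reproduces the pivot columns 1, 3, and 5 found above for the matrix A of Example 9.

```python
# Pivot columns of the matrix A from Example 9, computed with SymPy.
from sympy import Matrix

A = Matrix([
    [0, 0,  -2, 0,  7, 12],
    [2, 4, -10, 6, 12, 28],
    [2, 4,  -5, 6, -5, -1],
])

rref_form, pivots = A.rref()      # pivots are 0-based column indices
print([p + 1 for p in pivots])    # [1, 3, 5] -> pivot columns 1, 3, and 5
```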
Roundoff Error and Instability
There is often a gap between mathematical theory and its practical implementation—
Gauss–Jordan elimination and Gaussian elimination being good examples. The problem
is that computers generally approximate numbers, thereby introducing
roundoff
errors,
so unless precautions are taken, successive calculations may degrade an answer to a
degree that makes it useless. Algorithms (procedures) in which this happens are called
unstable
. There are various techniques for minimizing roundoff error and instability.
For example, it can be shown that for large linear systems Gauss–Jordan elimination
involves roughly 50% more operations than Gaussian elimination, so most computer
algorithms are based on the latter method. Some of these matters will be considered in
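The instability mentioned above shows up even in a 2 × 2 system. The following sketch is an illustrative experiment, not an example from the text: eliminating with a very small pivot destroys the computed answer in double-precision arithmetic, while swapping the equations first (partial pivoting) keeps it accurate.

```python
# Effect of a tiny pivot in naive Gaussian elimination (double precision).
# System:  1e-20*x1 + x2 = 1
#               x1 + x2 = 2      exact solution is very close to x1 = 1, x2 = 1.

def solve_2x2(a11, a12, b1, a21, a22, b2):
    """Eliminate x1 from the second equation, then back-substitute."""
    m = a21 / a11                       # multiplier; huge when a11 is tiny
    a22p = a22 - m * a12
    b2p = b2 - m * b1
    x2 = b2p / a22p
    x1 = (b1 - a12 * x2) / a11
    return x1, x2

# Without pivoting: the tiny coefficient 1e-20 is used as the pivot.
print(solve_2x2(1e-20, 1.0, 1.0, 1.0, 1.0, 2.0))   # roughly (0.0, 1.0) -- x1 is lost

# With partial pivoting: swap the equations so the larger pivot comes first.
print(solve_2x2(1.0, 1.0, 2.0, 1e-20, 1.0, 1.0))   # roughly (1.0, 1.0) -- accurate
```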