Linear Algebra

Articles
Linear Algebra/Cover
Linear Algebra/Notation
Linear Algebra/Introduction
Chapter I
  Linear Algebra/Solving Linear Systems
  Linear Algebra/Automation
Chapter II
  Linear Algebra/Basis
  Linear Algebra/Dimension
Chapter III
  Linear Algebra/Isomorphisms
  Linear Algebra/Inverses
  Linear Algebra/Projection
Chapter IV
  Linear Algebra/Determinants
  Linear Algebra/Exploration
Chapter V
  Linear Algebra/Diagonalizability
  Linear Algebra/Nilpotence
  Linear Algebra/Self-Composition
  Linear Algebra/Strings
Appendix
  Linear Algebra/Appendix
  Linear Algebra/Propositions
  Linear Algebra/Quantifiers
  Linear Algebra/Resources
  Linear Algebra/Bibliography
  Linear Algebra/Index
Linear Algebra/Cover
Linear Algebra/Notation
Notation
  R^n: ordered n-tuples of reals
  N: natural numbers {0, 1, 2, ...}
  C: complex numbers
  {... | ...}: set of ... such that ...
  M_{n×m}: set of n×m matrices
  t, s: transformations; maps from a space to itself
  T, S: square matrices
  Rep_{B,D}(h): matrix representing the map h
  h_{i,j}: matrix entry from row i, column j
Cover illustration: Cramer's Rule for a two-equation system. The size of the first box is the determinant shown (the absolute value of the size is the area). The size of the second box is the first unknown times that, and equals the size of the final box. Hence, the first unknown is the final determinant divided by the first determinant.
Linear Algebra/Introduction
This book helps students to master the material of a standard undergraduate linear algebra course.
The material is standard in that the topics covered are Gaussian reduction, vector spaces, linear maps, determinants,
and eigenvalues and eigenvectors. The audience is also standard: sophomores or juniors, usually with a background
of at least one semester of Calculus and perhaps with as much as three semesters.
The help that it gives to students comes from taking a developmental approach; this book's presentation
emphasizes motivation and naturalness, driven home by a wide variety of examples and extensive, careful exercises.
The developmental approach is what sets this book apart, so some expansion of the term is appropriate here.
Courses in the beginning of most Mathematics programs reward students less for understanding the theory and more
for correctly applying formulas and algorithms. Later courses ask for mathematical maturity: the ability to follow
different types of arguments, a familiarity with the themes that underlie many mathematical investigations like
elementary set and function facts, and a capacity for some independent reading and thinking. Linear algebra is an
ideal spot to work on the transition between the two kinds of courses. It comes early in a program so that progress
made here pays off later, but also comes late enough that students are often majors and minors. The material is
coherent, accessible, and elegant. There is a variety of argument styles (proofs by contradiction, if and only if
statements, and proofs by induction, for instance) and examples are plentiful.
So, the aim of this book's exposition is to help students develop from being successful at their present level, in
classes where a majority of the members are interested mainly in applications in science or engineering, to being
successful at the next level, that of serious students of the subject of mathematics itself.
Helping students make this transition means taking the mathematics seriously, so all of the results in this book are
proved. On the other hand, we cannot assume that students have already arrived, and so in contrast with more
abstract texts, we give many examples and they are often quite detailed.
In the past, linear algebra texts commonly made this transition abruptly. They began with extensive computations of
linear systems, matrix multiplications, and determinants. When the concepts (vector spaces and linear maps)
finally appeared, and definitions and proofs started, often the change brought students to a stop. In this book, while
we start with a computational topic, linear reduction, from the first we do more than compute. We do linear systems
quickly but completely, including the proofs needed to justify what we are computing. Then, with the linear systems
work as motivation and at a point where the study of linear combinations seems natural, the second chapter starts
with the definition of a real vector space. This occurs by the end of the third week.
Another example of our emphasis on motivation and naturalness is that the third chapter on linear maps does not
begin with the definition of homomorphism, but with that of isomorphism. That's because this definition is easily
motivated by the observation that some spaces are "just like" others. After that, the next section takes the reasonable
step of defining homomorphism by isolating the operation-preservation idea. This approach loses mathematical
slickness, but it is a good trade because it comes in return for a large gain in sensibility to students.
One aim of a developmental approach is that students should feel throughout the presentation that they can see how
the ideas arise, and perhaps picture themselves doing the same type of work.
The clearest example of the developmental approach taken here, and the feature that most recommends this
book, is the exercises. A student progresses most while doing the exercises, so they have been selected with great
care. Each problem set ranges from simple checks to reasonably involved proofs. Since an instructor usually assigns
about a dozen exercises after each lecture, each section ends with about twice that many, thereby providing a
selection. There are even a few problems that are challenging puzzles taken from various journals, competitions, or
problem collections. (These are marked with a "?" and, as part of the fun, the original wording has been retained as
much as possible.) In total, the exercises are aimed to both build an ability at, and help students experience the
pleasure of, doing mathematics.
week   Monday             Wednesday          Friday
1      One.I.1            One.I.1, 2         One.I.2, 3
2      One.I.3            One.II.1           One.II.2
3      One.III.1, 2       One.III.2          Two.I.1
4      Two.I.2            Two.II             Two.III.1
5      Two.III.1, 2       Two.III.2          Exam
6      Two.III.2, 3       Two.III.3          Three.I.1
7      Three.I.2          Three.II.1         Three.II.2
8      Three.II.2         Three.II.2         Three.III.1
9      Three.III.1        Three.III.2        Three.IV.1, 2
10     Three.IV.2, 3, 4   Three.IV.4         Exam
11     Four.I.1, 2        Four.I.3           Four.II
12     Four.II            Four.III.1         Five.I
13     Five.II.1          Five.II.2          Five.II.3
14     Review
The second timetable is more ambitious (it supposes that you know One.II, the elements of vectors, usually covered
in third semester calculus).
week   Monday             Wednesday             Friday
1      One.I.1            One.I.2               One.I.3
2      One.I.3            One.III.1, 2          One.III.2
3      Two.I.1            Two.I.2               Two.II
4      Two.III.1          Two.III.2             Two.III.3
5      Two.III.4          Three.I.1             Exam
6      Three.I.2          Three.II.1            Three.II.2
7      Three.III.1        Three.III.2           Three.IV.1, 2
8      Three.IV.2         Three.IV.3            Three.IV.4
9      Three.V.1          Three.V.2             Three.VI.1
10     Three.VI.2         Four.I.1              Exam
11     Four.I.2           Four.I.3              Four.I.4
12     Four.II            Four.II, Four.III.1   Four.III.2, 3
13     Five.II.1, 2       Five.II.3             Five.III.1
14     Five.III.2         Five.IV.1, 2          Five.IV.2
Author's Note. Inventing a good exercise, one that enlightens as well as tests, is a creative act, and hard work.
The inventor deserves recognition. But for some reason texts have traditionally not given attributions for questions. I
have changed that here where I was sure of the source. I would greatly appreciate hearing from anyone who can help
me to correctly attribute others of the questions.
Chapter I
Linear Algebra/Solving Linear Systems
Systems of linear equations are common in science and mathematics. These two examples from high school science
(O'Nan 1990) give a sense of how they arise.
The first example is from Physics. Suppose that we are given three objects, one with a mass known to be 2 kg, and
are asked to find the unknown masses. Suppose further that experimentation with a meter stick produces these two
balances.
Since the sum of moments on the left of each balance equals the sum of moments on the right (the moment of an
object is its mass times its distance from the balance point), the two balances give this system of two equations.
The second example of a linear system is from Chemistry. We can mix, under controlled conditions, toluene C7H8
and nitric acid HNO3 to produce trinitrotoluene C7H5O6N3 along with the byproduct water (conditions have to
be controlled very well; indeed, trinitrotoluene is better known as TNT). In what proportion should those
components be mixed? The number of atoms of each element present before the reaction
must equal the number present afterward. Applying that principle to the elements C, H, N, and O in turn gives this
system.
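Writing x, y, z, and w for the amounts of toluene, nitric acid, trinitrotoluene, and water (the variable names here are our own choice for this sketch), the element balances take this shape:

    \begin{aligned}
    7x     &= 7z         && \text{(C)}\\
    8x + y &= 5z + 2w    && \text{(H)}\\
    y      &= 3z         && \text{(N)}\\
    3y     &= 6z + w     && \text{(O)}
    \end{aligned}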
To finish each of these examples requires solving a system of equations. In each, the equations involve only the first
power of the variables. This chapter shows how to solve any such system.
References
O'Nan, Michael (1990), Linear Algebra (3rd ed.), Harcourt College Pub.
Linear Algebra/Gauss' Method
Definition 1.1
A linear equation in variables x_1, x_2, ..., x_n has the form a_1x_1 + a_2x_2 + ... + a_nx_n = d, where the numbers a_1, ..., a_n are the equation's coefficients and d is the constant. An n-tuple (s_1, s_2, ..., s_n) is a solution of, or satisfies, that equation if substituting the numbers s_1, ..., s_n for the variables gives a true statement. A system of linear equations has the solution (s_1, s_2, ..., s_n) if that n-tuple is a solution of all of the equations in the system.
Example 1.2
An ordered pair is a solution of a system of two equations in two unknowns exactly when substituting its components for the unknowns makes both equations true; a pair that satisfies one of the equations but not the other is not a solution.
Finding the set of all solutions is solving the system. No guesswork or good fortune is needed to solve a linear
system. There is an algorithm that always works. The next example introduces that algorithm, called Gauss'
method. It transforms the system, step by step, into one with a form that is easily solved.
Example 1.3
To solve this system
The third step is the only nontrivial one. We've mentally multiplied both sides of the first row by a suitable constant, mentally
added that to the old second row, and written the result in as the new second row.
Now we can find the value of each variable. The bottom equation shows the value of the last variable, the
middle equation then shows the value of the next variable up, and substituting those into the top equation gives the value of the first variable, and so the
Most of this subsection and the next one consists of examples of solving linear systems by Gauss' method. We will
use it throughout this book. It is fast and easy. But, before we get to those examples, we will first show that this
method is also safe in that it never loses solutions or picks up extraneous solutions.
Theorem 1.4 (Gauss' method)
If a linear system is changed to another by one of these operations
1. an equation is swapped with another
2. an equation has both sides multiplied by a nonzero constant
3. an equation is replaced by the sum of itself and a multiple of another
then the two systems have the same set of solutions.
Each of those three operations has a restriction. Multiplying a row by 0 is disallowed because obviously that can
change the solution set of the system. Similarly, adding a multiple of a row to itself is not allowed because adding
-1 times the row to itself has the effect of multiplying the row by 0. Finally, swapping a row with itself is
disallowed to make some results in the fourth chapter easier to state and remember (and besides, self-swapping
doesn't accomplish anything).
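As a concrete aside, the three operations are easy to express in code. The sketch below is our own (the helper names are not from the book) and acts on a system stored as a list of rows.

    # Elementary row operations on a system stored as a list of rows.
    # Each function returns a new matrix; each corresponds to one operation of Theorem 1.4.

    def swap(m, i, j):
        """Operation 1: swap equation i with equation j."""
        rows = [row[:] for row in m]
        rows[i], rows[j] = rows[j], rows[i]
        return rows

    def scale(m, i, k):
        """Operation 2: multiply both sides of equation i by a nonzero constant k."""
        assert k != 0, "rescaling by zero is disallowed"
        rows = [row[:] for row in m]
        rows[i] = [k * entry for entry in rows[i]]
        return rows

    def combine(m, k, i, j):
        """Operation 3: replace equation j by itself plus k times equation i (i != j)."""
        assert i != j, "adding a multiple of a row to itself is disallowed"
        rows = [row[:] for row in m]
        rows[j] = [b + k * a for a, b in zip(rows[i], rows[j])]
        return rows

For instance, combine(m, -2, 0, 1) adds -2 times the first row to the second, the kind of pivot step used in the examples that follow.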
Proof
We will cover the equation swap operation here and save the other two cases for Problem 14.
Consider swapping row i with row j. The n-tuple (s_1, ..., s_n) satisfies the system before the swap if and only if substituting the values s_1, ..., s_n for the variables x_1, ..., x_n gives a conjunction of true statements:
a_{1,1}s_1 + ... + a_{1,n}s_n = d_1, and ..., and a_{i,1}s_1 + ... + a_{i,n}s_n = d_i, and ..., and a_{j,1}s_1 + ... + a_{j,n}s_n = d_j, and ..., and a_{m,1}s_1 + ... + a_{m,n}s_n = d_m.
In a requirement consisting of statements and-ed together we can rearrange the order of the statements, so that this requirement is met if and only if
a_{1,1}s_1 + ... + a_{1,n}s_n = d_1, and ..., and a_{j,1}s_1 + ... + a_{j,n}s_n = d_j, and ..., and a_{i,1}s_1 + ... + a_{i,n}s_n = d_i, and ..., and a_{m,1}s_1 + ... + a_{m,n}s_n = d_m.
But that is exactly the requirement that (s_1, ..., s_n) solves the system after the row swap.
In describing reductions we will write the swap of rows i and j as ρ_i ↔ ρ_j, the rescaling of row i by a nonzero constant k as kρ_i, and the addition of k times row i to row j as kρ_i + ρ_j, with the row that is changed written second. We will also, to save writing, often list together pivot steps that use the same row.
Example 1.6
A typical use of Gauss' method is to solve this system.
The first transformation of the system involves using the first row to eliminate the leading terms in the second row and in
the third. To get rid of the second row's leading term, we multiply the entire first row by a suitable constant, add that
to the old second row, and write the result in as the new second row. To get rid of the third row's leading term, we do the same with
the third row, and write the result in as the new third row.
After these steps the last two equations involve only two unknowns. To finish we transform the second system into a third system, where
the last equation involves only one unknown. This transformation uses the second row to eliminate the second unknown from the third
row.
Now we are set up for the solution. The third row gives the value of the last variable; substituting back into the second row and then into the first row gives the other two.
Example 1.7
For the Physics problem from the start of this chapter, Gauss' method gives this.
So back-substitution gives the two unknown masses.
Example 1.8
The reduction brings the system to echelon form, and back-substitution then shows the value of each of the variables.
As these examples illustrate, Gauss' method uses the elementary reduction operations to set up back-substitution.
Definition 1.9
In each row, the first variable with a nonzero coefficient is the row's leading variable. A system is in echelon form
if each leading variable is to the right of the leading variable in the row above it (except for the leading variable in
the first row).
Example 1.10
The only operation needed in the examples above is pivoting. Here is a linear system that requires the operation of
swapping equations. After the first pivot the second equation has no leading term in the second variable. To get one, we look lower down in the system for a row that does have a leading entry in that variable and swap it in.
(Had there been more than one row below the second with such a leading entry, we could have swapped in any of them.)
Back-substitution then gives the value of each variable.
Strictly speaking, the operation of rescaling rows is not needed to solve linear systems. We have included it because
we will use it later in this chapter as part of a variation on Gauss' method, the Gauss-Jordan method.
All of the systems seen so far have the same number of equations as unknowns. All of them have a solution, and for
all of them there is only one solution. We finish this subsection by seeing for contrast some other things that can
happen.
Example 1.11
Linear systems need not have the same number of equations as unknowns. This system
has more equations than variables. Gauss' method helps us understand this system also, since this
reduction gives the values of the two unknowns. The "0 = 0" that appears in the echelon form is not a contradiction; it reflects a redundancy among the original equations.
That example's system has more equations than variables. Gauss' method is also useful on systems with more
variables than equations. Many examples are in the next subsection.
Another way that linear systems can differ from the examples shown earlier is that some linear systems do not have a
unique solution. This can happen in two ways.
The first is that it can fail to have any solution at all.
Example 1.12
Contrast the system in the last example with this one.
Here the system is inconsistent: no pair of numbers satisfies all of the equations simultaneously. Echelon form makes
this inconsistency obvious.
The other way that a linear system can fail to have a unique solution is to have many solutions.
Example 1.14
In this system
any pair of numbers satisfying the first equation automatically satisfies the second. The solution set
is infinite; some of its members are
,
, and
. The result of
applying Gauss' method here contrasts with the prior example because we do not get a contradictory equation.
Don't be fooled by the "0 = 0" equation in that example. It is not the signal that a system has many solutions.
Example 1.15
The absence of a "0 = 0" does not keep a system from having many different solutions. This system is in echelon
form, has no "0 = 0", and yet has infinitely many solutions. (There are infinitely many solutions because any triple whose first
component takes the value forced by the first equation and whose second component is the negative of the third is a solution.)
Nor does the presence of a "0 = 0" mean that the system must have many solutions. Example 1.11 shows that. So
does this system, which does not have many solutions (in fact it has none) despite that when it is brought to
echelon form it has a "0 = 0" row.
We will finish this subsection with a summary of what we've seen so far about Gauss' method.
Gauss' method uses the three row operations to set a system up for back substitution. If any step shows a
contradictory equation then we can stop with the conclusion that the system has no solutions. If we reach echelon
form without a contradictory equation, and each variable is a leading variable in its row, then the system has a
unique solution and we find it by back substitution. Finally, if we reach echelon form without a contradictory
equation, and there is not a unique solution (at least one variable is not a leading variable) then the system has many
solutions.
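To make the three cases concrete, here is a small sketch in Python (ours, not the book's) that reduces an augmented matrix to echelon form and then reports which of the three cases applies. It is intended for small examples with exact arithmetic.

    def classify(aug):
        """Reduce an augmented matrix (constants in the last column) to echelon
        form and report 'no solutions', 'unique solution', or 'many solutions'."""
        m = [row[:] for row in aug]
        rows, cols = len(m), len(m[0]) - 1
        lead_count, r = 0, 0
        for c in range(cols):
            # find a row at or below r with a nonzero entry in column c
            pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
            if pivot is None:
                continue                    # this variable will be free
            m[r], m[pivot] = m[pivot], m[r]
            for i in range(r + 1, rows):    # eliminate below the leading entry
                k = m[i][c] / m[r][c]
                m[i] = [a - k * b for a, b in zip(m[i], m[r])]
            lead_count += 1
            r += 1
            if r == rows:
                break
        # a row of zero coefficients with a nonzero constant is a contradiction
        if any(all(x == 0 for x in row[:-1]) and row[-1] != 0 for row in m):
            return "no solutions"
        return "unique solution" if lead_count == cols else "many solutions"

For example, classify([[1, 1, 2], [1, 1, 3]]) reports "no solutions", while classify([[1, 1, 2], [1, -1, 0]]) reports "unique solution".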
The next subsection deals with the third case we will see how to describe the solution set of a system with many
solutions.
Exercises
This exercise is recommended for all readers.
Problem 1
Use Gauss' method to find the unique solution for each system.
1.
2.
This exercise is recommended for all readers.
Problem 2
Use Gauss' method to solve each system or conclude "many solutions" or "no solutions".
1.
2.
3.
4.
5.
6.
and substitute that expression into the second equation. Find the resulting
, but this time substitute that expression into the third equation. Find this
What extra step must a user of this method take to avoid erroneously concluding a system has a solution?
and yet we can nonetheless apply Gauss' method. Do so. Does the system have a solution?
This exercise is recommended for all readers.
Problem 6
What conditions must the constants, the
's, satisfy so that each of these systems has a solution? Hint. Apply Gauss'
method and see what happens to the right side (Anton 1987).
1.
2.
Problem 7
True or false: a system with more unknowns than equations has at least one solution. (As always, to say "true" you
must prove it, while to say "false" you must produce a counterexample.)
Problem 8
Must any Chemistry problem like the one that starts this subsection a balance the reaction problem have
infinitely many solutions?
This exercise is recommended for all readers.
Problem 9
Find the coefficients
, and
Problem 10
, and
Gauss' method works by combining the equations in a system to make new equations.
1. Can the equation
this system?
Problem 11
Prove that, where
, if
then
possibilities: there is a unique solution, there is no solution, and there are infinitely many solutions.
Problem 14
Finish the proof of Theorem 1.4.
Problem 15
Is there a two-unknowns linear system whose solution set is all of
3. 23
4. 29
5. 17
(Salkind 1975, 1955 problem 38)
This exercise is recommended for all readers.
? Problem 20
Laugh at this:
simple example in addition, and it is required to identify the letters and prove the solution unique (Ransom & Gupta
1935).
? Problem 21
The Wohascum County Board of Commissioners, which has 20 members, recently had to elect a President. There
were three candidates (
,
, and
); on each ballot the three candidates were to be listed in order of
preference, with no abstentions. It was found that 11 members, a majority, preferred
preferred
over
suggested that
over
,
over
and
over
protested, and it
as their first choice (Gilbert, Krusemeyer & Larson 1993, Problem number 2)?
? Problem 22
"This system of
and
?"
"Quite so," said the Great Mathematician, pulling out his bassoon. "Indeed, the system has a unique solution. Can
you find it?"
"Good heavens!" cried the Poor Nut, "I am baffled."
Are you? (Dudley, Lebow & Rothman 1963)
Solutions
References
Anton, Howard (1987), Elementary Linear Algebra, John Wiley & Sons.
Dudley, Underwood (proposer); Lebow, Arnold (proposer); Rothman, David (solver) (Jan. 1963), "Elementary
problem 1151", American Mathematical Monthly 70 (1): 93.
Gilbert, George T.; Krusemeyer, Mark; Larson, Loren C. (1993), The Wohascum County Problem Book, The
Mathematical Association of America.
Ransom, W. R. (proposer); Gupta, Hansraj (solver) (Jan. 1935), "Elementary problem 105", American
Mathematical Monthly 42 (1): 47.
Salkind, Charles T. (1975), Contest Problem Book No 1: Annual High School Mathematics Examinations
1950-1960.
not all of the variables are leading variables. The Gauss' method theorem showed that a triple satisfies the first
system if and only if it satisfies the third. Thus, the solution set can also be described with the two echelon form equations. However, this second description is not much of an
improvement. It has two equations instead of three, but it still involves some hard-to-understand interaction among
the variables.
To get a description that is free of any such interaction, we take the variable that does not lead any equation and
use it to describe the variables that do lead. The second equation gives the middle variable in terms of the free one, and substituting into the first
equation then gives the leading variable in terms of the free one as well. Thus, the solution
set can be described by giving each leading variable as an expression in the free variable; a particular solution
is produced by choosing a value for the free variable and computing the first and second components from it.
The advantage of this description over the ones above is that the only variable appearing, the free variable, is unrestricted: it can
be any real number.
Definition 2.2
The non-leading variables in an echelon-form linear system are free variables.
In the echelon form system derived in the above example, the first two variables lead equations and the third variable is free.
Example 2.3
A linear system can end with more than one variable free. This row reduction
ends with two of the variables leading and with the other two free. To get the description that we prefer we work from the bottom: we first express the lower leading variable in terms of the two free variables, and then, moving up to the top equation, substituting and solving for the top leading variable expresses it in terms of the free variables as well.
We prefer this description because the only variables that appear, the free ones, are unrestricted. This makes the job of
deciding which four-tuples are system solutions into an easy one. For instance, taking any values for the two free
variables gives a solution, and every solution arises in this way.
Example 2.4
After this reduction
and
lead,
and
. For instance,
and
. The four-tuple
and
and
and
parameters. The terms "parameter" and "free" are related because, as we shall show later in this chapter, the solution
set of a system can always be parametrized with the free variables. Consequently, we shall parametrize all of our
descriptions in this way.)
Example 2.5
This is another system with infinitely many solutions.
, and
. The variable
. To express
in terms of
.) Write
, substitute for
in terms of
with
We finish this subsection by developing the notation for linear systems and their solution sets that we shall use in the
rest of this book.
Definition 2.6
An m×n matrix is a rectangular array of numbers with m rows and n columns. Each number in the array is an entry.
Entries are named with a double subscript giving first the entry's row and then its column, so the row is always
stated first. For instance, the entry in the second row and first column of a matrix A is written a_{2,1}.
(The parentheses around the array are a typographic device so that when two
matrices are side by side we can tell where one ends and the other starts.)
Matrices occur throughout this book. We shall use M_{n×m} to denote the collection of n×m matrices.
Example 2.7
We can abbreviate this linear system
The vertical bar just reminds a reader of the difference between the coefficients on the systems's left hand side and
the constants on the right. When a bar is used to divide a matrix into parts, we call it an augmented matrix. In this
notation, Gauss' method goes this way.
One advantage of the new notation is that the clerical load of Gauss' method (the copying of variables, the writing of plus and equals signs, and so on) is lighter.
We will also use the array notation to clarify the descriptions of solution sets. A description like the one
from Example 2.3 is hard to read. We will rewrite it to
group all of the constants together, all of the coefficients of one free variable together, and all of the coefficients
of the other free variable together, each group written vertically as a column. This vector notation will
help us picture the solution sets when they are written in this way.
Definition 2.8
A vector (or column vector) is a matrix with a single column. A matrix with a single row is a row vector. The
entries of a vector are its components.
Vectors are an exception to the convention of representing matrices with capital roman letters. We use lower-case
roman or greek letters overlined with an arrow: , , ... or , , ... (boldface is also common: or ). For
instance, this is a column vector with a third component of
Definition 2.9
The linear equation a_1x_1 + a_2x_2 + ... + a_nx_n = d with unknowns x_1, ..., x_n is satisfied by a vector if
substituting its components for the unknowns makes the equation true.
The style of description of solution sets that we use involves adding the vectors, and also multiplying them by real
numbers, such as the parameters. We need to define these operations.
Definition 2.10
The vector sum of two same-sized vectors u and v is the vector obtained by adding corresponding components.
In general, two matrices with the same number of rows and the same number of columns add in this way,
entry-by-entry.
Definition 2.11
The scalar multiplication of the real number r and the vector v is the vector obtained by multiplying each component
of v by r. We can write the scalar multiple as rv or as vr. (We do not refer
to scalar multiplication as "scalar product" because that name is used for a different operation.)
Example 2.12
Notice that the definitions of vector addition and scalar multiplication agree where they overlap; for instance, adding a vector to itself gives the same result as multiplying it by 2.
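A quick sketch of the two operations on vectors stored as Python lists (the helper names are ours):

    def add(u, v):
        """Vector sum: add corresponding components (the sizes must match)."""
        assert len(u) == len(v)
        return [a + b for a, b in zip(u, v)]

    def scalar_multiply(r, v):
        """Scalar multiplication: multiply every component of v by the real number r."""
        return [r * a for a in v]

    # The definitions agree where they overlap: v + v equals 2 * v.
    v = [1, -2, 0]
    assert add(v, v) == scalar_multiply(2, v)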
With the notation defined, we can now solve systems in the way that we will use throughout this book.
Example 2.13
This system
form.
Note again how well vector notation sets off the coefficients of each parameter. For instance, the third row of the
vector form shows plainly that if is held fixed then increases three times as fast as .
That format also shows plainly that there are infinitely many solutions. For example, we can fix
range over the real numbers, and consider the first component
hence infinitely many solutions.
Another thing shown plainly is that setting both
and
as
, let
reduces
Before the exercises, we pause to point out some things that we have yet to do.
The first two subsections have been on the mechanics of Gauss' method. Except for one result, Theorem 1.4
without which developing the method doesn't make sense since it says that the method gives the right answers we
have not stopped to consider any of the interesting questions that arise.
and
free)?
In the rest of this chapter we answer these questions. The answer to each is "yes". The first question is answered in
the last subsection of this section. In the second section we give a geometric description of solution sets. In the final
section of this chapter we tackle the last set of questions. Consequently, by the end of the first chapter we will not
only have a solid grounding in the practice of Gauss' method, we will also have a solid grounding in the theory. We
will be sure of what can and cannot happen in a reduction.
Exercises
This exercise is recommended for all readers.
Problem 1
Find the indicated entry of the matrix, if it is defined.
1.
2.
3.
4.
This exercise is recommended for all readers.
Problem 2
Give the size of each matrix.
1.
2.
3.
This exercise is recommended for all readers.
Problem 3
Do the indicated vector operation, if it is defined.
1.
2.
3.
4.
5.
6.
This exercise is recommended for all readers.
Problem 4
Solve each system using matrix notation. Express the solution using vectors.
1.
2.
3.
4.
5.
6.
This exercise is recommended for all readers.
Problem 5
Solve each system using matrix notation. Give each solution set in vector notation.
1.
2.
3.
4.
This exercise is recommended for all readers.
Problem 6
The vector is in the set. What value of the parameters produces that vector?
1.
2.
3.
Problem 7
Decide if the vector is in the set.
1.
2.
3.
4.
Problem 8
Parametrize the solution set of this one-equation system.
, and
matrix whose
1.
2.
to the
Problem 12
power.
-th entry is
, the transpose of
, written
. Find
4.
This exercise is recommended for all readers.
Problem 13
1. Describe all functions
2. Describe all functions
such that
such that
and
.
Problem 14
Show that any set of five points from the plane
lie on a common conic section, that is, they all satisfy some
where some of
are nonzero.
Problem 15
Make up a four equations/four unknowns system having
1. a one-parameter solution set;
2. a two-parameter solution set;
3. a three-parameter solution set.
? Problem 16
1. Solve the system of equations.
For what values of does the system fail to have solutions, and for what values of
solutions?
2. Answer the above question for the system.
aluminum, copper, silver, or lead. When weighed successively under standard conditions in water, benzene, alcohol,
and glycerine its respective weights are
,
,
, and
grams. How much, if any, of the
forenamed metals does it contain if the specific gravities of the designated substances are taken to be as follows?
Aluminum   2.7      Alcohol   0.81
Copper     8.9      Benzene   0.90
Gold                Water     1.00
Lead      11.3
Silver    10.8
References
The USSR Mathematics Olympiad, number 174.
Duncan, Dewey (proposer); Quelch, W. H. (solver) (Sept.-Oct. 1952), Mathematics Magazine 26 (1): 48
That example shows an infinite solution set conforming to the pattern. We can think of the other two kinds of
solution sets as also fitting the same pattern. A one-element solution set fits in that it has a particular solution, and
the unrestricted combination part is a trivial sum (that is, instead of being a combination of two vectors, as above, or
a combination of one vector, it is a combination of no vectors). A zero-element solution set fits the pattern since
there is no particular solution, and so the set of sums of that form is empty.
We will show that the examples from the prior subsection are representative, in that the description pattern discussed
above holds for every solution set.
Theorem 3.1
For any linear system there are vectors β_1, ..., β_k such that the solution set can be described as
{ p + c_1β_1 + ... + c_kβ_k | c_1, ..., c_k ∈ R }
where p is any particular solution and where the system has k free variables.
This description has two parts, the particular solution p and the unrestricted linear combination of the β's. We
shall prove the theorem in two corresponding parts, with two lemmas.
Homogeneous Systems
We will focus first on the unrestricted combination part. To do that, we consider systems that have the vector of
zeroes as one of the particular solutions, so that the description p + c_1β_1 + ... + c_kβ_k can be shortened to
c_1β_1 + ... + c_kβ_k.
Definition 3.2
A linear equation is homogeneous if it has a constant of zero, that is, if it can be put in the form
a_1x_1 + a_2x_2 + ... + a_nx_n = 0.
(These are "homogeneous" because all of the terms involve the same power of their variable, the first power, including the right side's zero, which we can imagine as zero times a variable.)
Example 3.3
With any linear system like
Our interest in the homogeneous system associated with a linear system can be understood by comparing the
reduction of the system
Obviously the two reductions go in the same way. We can study how linear systems are reduced by instead studying
how the associated homogeneous systems are reduced.
Studying the associated homogeneous system has a great advantage over studying the original system.
Nonhomogeneous systems can be inconsistent. But a homogeneous system must be consistent since there is always
at least one solution, the vector of zeros.
Definition 3.4
A column or row vector of all zeros is a zero vector, denoted
There are many different zero vectors, e.g., the one-tall zero vector, the two-tall zero vector, etc. Nonetheless, people
often refer to "the" zero vector, expecting that the size of the one being discussed will be clear from the context.
Example 3.5
Some homogeneous systems have the zero vector as their only solution.
Example 3.6
Some homogeneous systems have many solutions. One example is the Chemistry problem from the first page of this
book.
We now have the terminology to prove the two parts of Theorem 3.1. The first lemma deals with unrestricted
combinations.
Lemma 3.7
For any homogeneous linear system there exist vectors β_1, ..., β_k such that the solution set of the system is
{ c_1β_1 + ... + c_kβ_k | c_1, ..., c_k ∈ R },
where k is the number of free variables in an echelon form version of the system.
Before the proof, we will recall the back substitution calculations that were done in the prior subsection.
Imagine that we have brought a system to this echelon form.
We next perform back-substitution to express each variable in terms of the free variable . Working from the
bottom up, we get first that is
, next that is
, and then substituting those two into the top
equation
gives
of the solution set by starting at the bottom equation and using the free variables as the parameters to work
row-by-row to the top. The proof below follows this pattern.
Comment: That is, this proof just does a verification of the bookkeeping in back substitution to show that we haven't
overlooked any obscure cases where this procedure fails, say, by leading to a division by zero. So this argument,
while quite detailed, doesn't give us any new insights. Nevertheless, we have written it out for two reasons. The first
reason is that we need the result the computational procedure that we employ must be verified to work as
promised. The second reason is that the row-by-row nature of back substitution leads to a proof that uses the
technique of mathematical induction.[1] This is an important, and non-obvious, proof technique that we shall use a
number of times in this book. Doing an induction argument here gives us a chance to see one in a setting where the
proof material is easy to follow, and so the technique can be studied. Readers who are unfamiliar with induction
arguments should be sure to master this one and the ones later in this chapter before going on to the second chapter.
Proof
First use Gauss' method to reduce the homogeneous system to echelon form. We will show that each leading variable
can be expressed in terms of free variables. That will finish the argument because then we can use those free
variables as the parameters. That is, the 's are the vectors of coefficients of the free variables (as in Example 3.6,
where the solution is
, and
).
We will proceed by mathematical induction, which has two steps. The base step of the argument will be to focus on
the bottom-most non-"
" equation and write its leading variable in terms of the free variables. The inductive
step of the argument will be to argue that if we can express the leading variables from the bottom rows in terms of
free variables, then we can express the leading variable of the next row up the
-th row up from the
bottom in terms of free variables. With those two steps, the theorem will be proved because by the base step it is
true for the bottom equation, and by the inductive step the fact that it is true for the bottom equation shows that it is
true for the next one up, and then another application of the inductive step implies it is true for third equation up, etc.
For the base step, consider the bottom-most non-"
is trivial). We call that the
where
row
"
-th row:
".) Either there are variables in this equation other than the leading one
" equation (the case where all the equations are "
to express this leading variable in terms of free variables. If there are no free variables in this equation then
(see the "tricky point" noted following this proof).
For the inductive step, we assume that for the
-th equation, we can express the leading variable in terms of free variables (where
prove that the same is true for the next equation up, the
leads in a lower-down equation
). To
, to end with
expressed in terms of free variables.
Because we have shown both the base step and the inductive step, by the principle of mathematical induction the
proposition is true.
We say that the solution set { c_1β_1 + ... + c_kβ_k | c_1, ..., c_k ∈ R } of a homogeneous linear system is generated by or spanned by the set of vectors { β_1, ..., β_k }. There is a tricky point to this definition. If a homogeneous system has a unique solution, the zero
vector, then we say the solution set is generated by the empty set of vectors. This fits with the pattern of the other
solution sets: in the proof above the solution set is derived by taking the 's to be the free variables and if there is a
unique solution then there are no free variables.
This proof incidentally shows, as discussed after Example 2.4, that solution sets can always be parametrized using
the free variables.
Nonhomogeneous Systems
The next lemma finishes the proof of Theorem 3.1 by considering the particular solution part of the solution set's
description.
Lemma 3.8
For a linear system, where p is any particular solution, the solution set equals { p + h | h solves the associated homogeneous system }.
Proof
We will show mutual set inclusion, that any solution to the system is in the above set and that anything in the set is a
solution to the system.[2]
For set inclusion the first way, that if a vector solves the system then it is in the set described above, assume that
solves the system. Then
where
and
are the
solves the associated homogeneous system since for each equation index
-th components of
and
. We can write
in the required
For set inclusion the other way, take a vector of the form
as
, where
is the
-th component of
solves the
form.
, where
associated homogeneous system, and note that it solves the given system: for any equation index
where
solves the
The two lemmas above together establish Theorem 3.1. We remember that theorem with the slogan
"General = Particular + Homogeneous".
Example 3.9
This system illustrates Theorem 3.1.
Gauss' method
That single vector is, of course, a particular solution. The associated homogeneous system reduces via the same row
operations
As the theorem states, and as discussed at the start of this subsection, in this single-solution case the general solution
results from taking the particular solution and adding to it the unique solution of the associated homogeneous
system.
Example 3.10
Also discussed there is that the case of a system with no solutions fits the "General = Particular + Homogeneous" pattern too. Gauss' method applied to this system
shows that it has no solutions. The associated homogeneous system, of course, has a solution.
However, because no particular solution of the original system exists, the general solution set is empty: there are
no vectors of the form p + h because there are no p's.
Corollary 3.11
Solution sets of linear systems are either empty, have one element, or have infinitely many elements.
Proof
We've seen examples of all three happening so we need only prove that those are the only possibilities.
First, notice that a homogeneous system with at least one nonzero solution v has infinitely many solutions, because the
set of multiples of v is infinite: if the scalar s is not 1 then sv - v = (s - 1)v is nonzero, and so sv and v differ.
Now, apply Lemma 3.8 to conclude that a solution set is either empty (if there is no particular solution), has one
element (if there is a particular solution and the associated homogeneous system has only the zero solution), or is
infinite (if there is a particular solution and the associated homogeneous system has a nonzero
solution, and thus by the prior paragraph has infinitely many solutions).
This table summarizes the factors affecting the size of a general solution.
                               number of solutions of the
                             associated homogeneous system
                               one                 infinitely many
  particular    yes      unique solution      infinitely many solutions
  solution
  exists?       no       no solutions         no solutions
The factor on the top of the table is the simpler one. When we perform Gauss' method on a linear system, ignoring
the constants on the right side and so paying attention only to the coefficients on the left-hand side, we either end
with every variable leading some row or else we find that some variable does not lead a row, that is, that some
variable is free. (Of course, "ignoring the constants on the right" is formalized by considering the associated
homogeneous system. We are simply putting aside for the moment the possibility of a contradictory equation.)
A nice insight into the factor on the top of this table at work comes from considering the case of a system having the
same number of equations as variables. This system will have a solution, and the solution will be unique, if and only
if it reduces to an echelon form system where every variable leads its row, which will happen if and only if the
associated homogeneous system has a unique solution. Thus, the question of uniqueness of solution is especially
interesting when the system has the same number of equations as variables.
Definition 3.12
A square matrix is nonsingular if it is the matrix of coefficients of a homogeneous system with a unique solution. It
is singular otherwise, that is, if it is the matrix of coefficients of a homogeneous system with infinitely many
solutions.
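In computational terms, a square matrix is nonsingular exactly when reduction leaves every column with a leading entry, so that no variable of the associated homogeneous system is free. A sketch of that test (ours, assuming exact arithmetic on small examples):

    def is_nonsingular(matrix):
        """Return True if the square matrix, taken as the matrix of coefficients
        of a homogeneous system, gives a system with a unique solution."""
        m = [row[:] for row in matrix]
        n = len(m)
        for c in range(n):
            # look for a pivot for column c at or below row c
            pivot = next((i for i in range(c, n) if m[i][c] != 0), None)
            if pivot is None:
                return False              # no leading entry: that variable is free
            m[c], m[pivot] = m[pivot], m[c]
            for i in range(c + 1, n):
                k = m[i][c] / m[c][c]
                m[i] = [a - k * b for a, b in zip(m[i], m[c])]
        return True

For instance, is_nonsingular([[1, 2], [2, 4]]) is False, since the second row is twice the first.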
Example 3.13
The systems from Example 3.3, Example 3.5, and Example 3.9 each have an associated homogeneous system with a
unique solution. Thus these matrices are nonsingular.
The Chemistry problem from Example 3.6 is a homogeneous system with more than one solution so its matrix is
singular.
Example 3.14
The first of these matrices is nonsingular while the second is singular
because the first of these homogeneous systems has a unique solution while the second has infinitely many solutions.
We have made the distinction in the definition because a system (with the same number of equations as variables)
behaves in one of two ways, depending on whether its matrix of coefficients is nonsingular or singular. A system
where the matrix of coefficients is nonsingular has a unique solution for any constants on the right side: for instance,
and
coefficients is singular never has a unique solution it has either no solutions or else has infinitely many, as with
these.
with the same left sides but different right sides. Obviously, the first has a solution while the second does not, so here
the constants on the right side decide if the system has a solution. We could conjecture that the left side of a linear
system determines the number of solutions while the right side determines if solutions exist, but that guess is not
correct. Compare these two systems
with the same right sides but different left sides. The first has a solution but the second does not. Thus the constants
on the right side of a system don't decide alone whether a solution exists; rather, it depends on some interaction
between the left and right sides.
For some intuition about that interaction, consider this system with one of the coefficients left as the parameter
If the parameter takes any value other than the one forced by the first two rows, then this system has no solution, because on the left-hand side the third row is a sum of the first two while on the right-hand side the constants do not stand in that relationship. The general lesson is that
if one row of the matrix of coefficients on the left is a linear combination of other rows, then on the right the constant
from that row must be the same combination of constants from the same rows.
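For instance (an illustration of our own, not the book's example), in the system below the left side of the third equation is the sum of the left sides of the first two:

    \begin{aligned}
    x + y &= 1\\
    x - y &= 3\\
    2x    &= d
    \end{aligned}

Adding the first two equations forces 2x = 4, so the system has a solution exactly when the constant d is 1 + 3 = 4.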
More intuition about the interaction comes from studying linear combinations. That will be our focus in the second
chapter, after we finish the study of Gauss' method itself in the rest of this chapter.
Exercises
This exercise is recommended for all readers.
Problem 1
Solve each system. Express the solution set using vectors. Identify the particular solution and the solution set of the
homogeneous system.
1.
2.
3.
4.
5.
6.
Problem 2
Solve each system, giving the solution set in vector notation. Identify the particular solution and the solution of the
homogeneous system.
1.
2.
3.
4.
This exercise is recommended for all readers.
Problem 3
For the system
which of these can be used as the particular solution part of some general solution?
1.
2.
3.
. Find, if possible, a general solution to this system
1.
2.
3.
Problem 5
One of these is nonsingular while the other is singular. Which is which?
1.
2.
This exercise is recommended for all readers.
Problem 6
Singular or nonsingular?
1.
2.
3.
(Careful!)
4.
5.
This exercise is recommended for all readers.
Problem 7
Is the given vector in the set generated by the given set?
1.
2.
3.
4.
Problem 8
Prove that any linear system with a nonsingular matrix of coefficients has a solution, and that the solution is unique.
Problem 9
To tell the whole truth, there is another tricky point to the proof of Lemma 3.7. What happens if there are no non-"
" equations? (There aren't any more tricky points after this one.)
This exercise is recommended for all readers.
Problem 10
Prove that if
and
1.
2.
3.
for
What's wrong with: "These three show that if a homogeneous system has one solution then it has many solutions
any multiple of a solution is another solution, and any sum of solutions is a solution also so there are no
homogeneous systems with exactly one solution."?
Problem 11
Prove that if a system with only rational coefficients and constants has a solution then it has at least one all-rational
solution. Must it have infinitely many?
Solutions
Footnotes
[1] More information on mathematical induction is in the appendix.
[2] More information on equality of sets is in the appendix.
(take
and
but that clashes with the third component, similarly the first component gives
but the third component gives something different). Here is a third description of the same set:
We need to decide when two descriptions are describing the same set. More pragmatically stated, how can a person
tell when an answer to a homework question describes the same set as the one described in the back of the book?
Set Equality
Sets are equal if and only if they have the same members. A common way to show that two sets, S and T, are
equal is to show mutual inclusion: any member of S is also in T, and any member of T is also in S.[1]
Example 4.1
To show that
equals
For the first half we must check that any vector from
is also in
we need
and
such that
and
Similarly, if we try
and
and
is in
we need
and
such that
and
and
is a member of
and
there
such that
gives
and
and
is a member of
and
this way:
shows that
and
and
and
and
and
Example 4.2
Of course, sometimes sets are not equal. The method of the prior example will help us see the relationship between
the two sets. These
is a subset of
, it is a proper subset of
, and
because
is not a subset of
if we fix
Thus
with
, and
But, for the other direction, the reduction resulting from fixing
. For instance,
, and
and
.
and
, we
is in
but not in
Exercises
Problem 1
Decide if the vector is a member of the set.
1.
2.
3.
4.
5.
6.
Problem 2
Produce two descriptions of this set that are different than this one.
and
2.
3.
and
and
4.
5.
and
and
Solutions
Footnotes
[1] More information on set equality is in the appendix.
Linear Algebra/Automation
This is a PASCAL routine to apply Gauss' method to an augmented matrix.
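The PASCAL listing itself does not survive in this copy; as a rough stand-in (our sketch, not the book's routine), the same forward reduction can be written in Python. Picking the largest available entry in each column as the pivot (partial pivoting) is a standard way to limit the rounding trouble discussed below.

    def gauss_reduce(aug):
        """Bring an augmented matrix (a list of rows, constants in the last column)
        to echelon form, choosing the largest available pivot in each column."""
        rows, cols = len(aug), len(aug[0]) - 1
        r = 0
        for c in range(cols):
            # choose the row at or below r whose entry in column c is largest in size
            pivot = max(range(r, rows), key=lambda i: abs(aug[i][c]))
            if aug[pivot][c] == 0:
                continue                  # no pivot in this column; move on
            aug[r], aug[pivot] = aug[pivot], aug[r]
            for i in range(r + 1, rows):
                k = aug[i][c] / aug[r][c]
                aug[i] = [a - k * b for a, b in zip(aug[i], aug[r])]
            r += 1
            if r == rows:
                break
        return aug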
Trouble can also come from the data. Consider one system whose two lines cross cleanly next to another whose two lines are nearly parallel and so are hard to tell apart. In the first system, some small change in the numbers produces only a small change in the solution:
geometrically, moving one of the lines slightly does not move the intersection point by much. In the second system a slight change
tips one of the nearly parallel lines so that the intersection point moves a long way, and since a computer has only finitely many
digits to represent reals, its small representation errors act like such changes. In short, systems that are nearly singular may be hard to compute with.
Another thing that can go wrong is error propagation. In a system with a large number of equations (say, 100 or
more), small rounding errors early in the procedure can snowball to overwhelm the solution at the end.
These issues, and many others like them, are outside the scope of this book, but remember that just because Gauss'
method always works in theory and just because a program correctly implements that method and just because the
answer appears on green-bar paper, doesn't mean that answer is right. In practice, always use a package where
experts have worked hard to counter what can go wrong.
[Pictures of pairs of lines in the plane: one with no solutions, one with infinitely many solutions.]
These pictures don't prove the results from the prior section, which apply to any number of linear equations and any
number of unknowns, but nonetheless they do help us to understand those results. This section develops the ideas
that we need to express our results from the prior section, and from some future sections, geometrically. In
particular, while the two-dimensional case is familiar, to extend to systems with more than two unknowns we shall
need some higher-dimensional geometry.
one-dimensional space
and make the usual correspondence with
Now, with a scale and a direction, finding the point corresponding to, say
the direction of
, is easy start at
and head in
times as far.
The basic idea here, combining magnitude with direction, is the key to extending to higher dimensions.
An object comprised of a magnitude and a direction is a vector (we will use the same word as in the previous section
because we shall show below how to describe such an object with a column vector). We can draw a vector as having
some length, and pointing somewhere.
are equal, even though they start in different places, because they have equal lengths and equal directions. Again:
those vectors are not just alike, they are equal.
How can things that are in different places be equal? Think of a vector as representing a displacement ("vector" is
Latin for "carrier" or "traveler"). These squares undergo the same displacement, despite that those displacements
start in different places.
Sometimes, to emphasize this property vectors have of not being anchored, they are referred to as free vectors. Thus,
these free vectors are equal as each is a displacement of one over and two up.
More generally, vectors in the plane are the same if and only if they have the same change in first components and
the same change in second components: the vector extending from (a_1, a_2) to (b_1, b_2) equals the vector from
(c_1, c_2) to (d_1, d_2) if and only if b_1 - a_1 = d_1 - c_1 and b_2 - a_2 = d_2 - c_2.
An expression like "the vector that, were it to start at (a_1, a_2), would extend to (b_1, b_2)" is awkward. Instead we
describe such a vector by the column of its two changes, b_1 - a_1 and b_2 - a_2,
so that, for instance, the "one over and two up" arrows shown above picture this vector.
We often draw the arrow as starting at the origin, and we then say it is in the canonical position (or natural
position). When the vector
rather than "the endpoint of the canonical position of" that vector.
Thus, we will call both of these sets
In the prior section we defined vectors and vector operations with an algebraic motivation;
And, where
and
represent displacements,
The long arrow is the combined displacement in this sense: if, in one minute, a ship's motion gives it the
displacement relative to the earth of and a passenger's motion gives a displacement relative to the ship's deck of
, then
Another way to understand the vector sum is with the parallelogram rule. Draw the parallelogram formed by the
vectors
and then the sum
extends along the diagonal to the far corner.
The above drawings show how vectors and vector operations behave in
. We can extend to
, or to even
higher-dimensional spaces where we have no pictures, with the obvious generalization: the free vector that, if it starts
at
, ends at
, is represented by this column
(vectors are equal if they have the same representation), we aren't too careful to distinguish between a point and the
vector whose canonical representation ends at that point,
and
The vector associated with the parameter has its whole body in the line; it is a direction vector for the line. Note
that points on the line to the left of the starting point are described using negative values of the parameter.
In
and
are two vectors whose whole bodies lie in the plane). As with the line, note that some points in this plane are
described with negative 's or negative 's or both.
A description of planes that is often encountered in algebra and calculus uses a single equation as the condition that
describes the relationship among the first, second, and third coordinates of points in a plane.
The translation from such a description to the vector description that we favor in this book is to think of the condition
as a one-equation linear system and parametrize it by the non-leading variables.
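As a small worked instance (our own choice of plane, since the book's example does not appear above), the condition 2x + y + z = 4 can be treated as a one-equation system, solved for its leading variable, and parametrized by the free variables y and z:

    x = 2 - \tfrac{1}{2}y - \tfrac{1}{2}z,
    \qquad
    \begin{pmatrix} x\\ y\\ z \end{pmatrix}
      = \begin{pmatrix} 2\\ 0\\ 0 \end{pmatrix}
      + y\begin{pmatrix} -1/2\\ 1\\ 0 \end{pmatrix}
      + z\begin{pmatrix} -1/2\\ 0\\ 1 \end{pmatrix},
    \qquad y, z \in \mathbb{R}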
A set of the form { p + t_1v_1 + t_2v_2 + ... + t_kv_k | t_1, ..., t_k ∈ R }, where the v's are vectors in R^n and k ≤ n,
is a k-dimensional linear surface (or k-flat) in R^n. For example, in R^4 a set of that form with one parameter
is a line, one with two parameters
is a plane, and one with three parameters
is a three-dimensional linear surface. Again, the intuition is that a line permits motion in one direction, a plane
permits motion in combinations of two directions, etc.
A linear surface description can be misleading about the dimension: a description with two parameters, for instance, need not describe a plane, since if one of the two vectors is a multiple of the other then the set described is only a line.
We shall see in the Linear Independence section of Chapter Two what relationships among vectors cause the linear
surface they generate to be degenerate.
We finish this subsection by restating our conclusions from the first section in geometric terms. First, the solution set
of a linear system with n unknowns is a linear surface in R^n. Specifically, it is a k-dimensional linear surface,
where k is the number of free variables in an echelon form version of the system. Second, the solution set of a
homogeneous linear system is a linear surface passing through the origin. Finally, we can view the general solution
set of any linear system as being the solution set of its associated homogeneous system offset from the origin by a
vector, namely by any particular solution.
Exercises
This exercise is recommended for all readers.
Problem 1
Find the canonical name for each vector.
1.
2.
3.
4.
to
to
in
in
to
to
in
in
to
to
to
to
and
Problem 5
Describe the plane that contains this point and line.
2.
Problem 8
, and
When a plane does not pass through the origin, performing operations on vectors whose bodies lie in it is more
complicated than when the plane passes through the origin. Consider the picture in this subsection of the plane
, and
1. Redraw the picture, including the vector in the plane that is twice as long as the one with endpoint
and
Problem 10
How should
be defined?
miles per hour finds that the wind appears to blow directly from the north.
On doubling his speed it appears to come from the north east. What was the wind's velocity? (Klamkin 1957)
This exercise is recommended for all readers.
Problem 12
Euclid describes a plane as "a surface which lies evenly with the straight lines on itself". Commentators (e.g., Heron)
have interpreted this to mean "(A plane surface is) such that, if a straight line pass through two points on it, the line
coincides wholly with it at every spot, all ways". (Translations from Heath 1956, pp. 171-172.) Do planes, as
described in this section, have that property? Does this description adequately define planes?
Solutions
References
Klamkin, M. S. (proposer) (Jan.-Feb. 1957), "Trickie T-27", Mathematics Magazine 30 (3): 173.
Heath, T. (1956), Euclid's Elements, 1, Dover.
as "lines" and "planes" doesn't make them act like the lines
and planes of our prior experience. Rather, we must ensure that the names suit the sets. While we can't prove that the
sets satisfy our intuition (we can't prove anything about intuition), in this subsection we'll observe that a result
familiar from R^2 and R^3, when generalized to arbitrary R^n, supports the idea that a line is straight and a plane is
flat. Specifically, we'll see how to do Euclidean geometry in a "plane" by giving a definition of the angle between
two R^n vectors in the plane that they generate.
Definition 2.1
The length of a vector v ∈ R^n is the square root of the sum of the squares of its components:
|v| = sqrt(v_1^2 + v_2^2 + ... + v_n^2).
Remark 2.2
This is a natural generalization of the Pythagorean Theorem. A classic discussion is in (Pólya 1954).
We can use that definition to derive a formula for the angle between two vectors. For a model of what to do, consider
two vectors in
.
Put them in canonical position and, in the plane that they determine, consider the triangle formed by
, and
, where
and simplify.
In higher dimensions no picture suffices but we can make the same argument analytically. First, the form of the
numerator is clear it comes from the middle terms of the squares
,
, etc.
Definition 2.3
The dot product (or inner product, or scalar product) of two n-component real vectors is the linear combination
of their components: u · v = u_1v_1 + u_2v_2 + ... + u_nv_n.
Note that the dot product of two vectors is a real number, not a vector, and that the dot product of a vector from
R^n with a vector from R^m is defined only when n equals m.
Remark 2.4
The wording in that definition allows one or both of the two to be a row vector instead of a column vector. Some
books require that the first vector be a row vector and that the second vector be a column vector. We shall not be that
strict.
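A small sketch of these computations on vectors stored as Python lists (the helper names are ours); the angle formula anticipates the definition developed just below.

    import math

    def dot(u, v):
        """Dot product: the sum of the products of corresponding components."""
        assert len(u) == len(v), "defined only for vectors of the same size"
        return sum(a * b for a, b in zip(u, v))

    def length(v):
        """Length: the square root of the dot product of a vector with itself."""
        return math.sqrt(dot(v, v))

    def angle(u, v):
        """Angle in radians between two nonzero vectors."""
        return math.acos(dot(u, v) / (length(u) * length(v)))

    # Two vectors are orthogonal exactly when their dot product is zero.
    assert dot([1, 2], [-2, 1]) == 0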
Still reasoning with letters, but guided by the pictures, we use the next theorem to argue that the triangle formed by
u, v, and u + v in R^n lies in the planar subset of R^n generated by u and v.
Theorem 2.5 (Triangle Inequality)
For any u, v ∈ R^n, |u + v| ≤ |u| + |v|,
with equality if and only if one of the vectors is a nonnegative scalar multiple of the other one.
This inequality is the source of the familiar saying, "The shortest distance between two points is in a straight line."
Proof
(We'll use some algebraic properties of dot product that we have not yet checked, for instance that
and that
. See Problem 8.) The desired inequality holds if and only if
its square holds.
That, in turn, holds if and only if the relationship obtained by multiplying both sides by the nonnegative numbers
and
and rewriting
shows that this certainly is true since it only says that the square of the length of the vector
is not
negative.
As for equality, it holds when, and only when,
is
if and only if
" line.
is a
large number, with absolute value bigger than the right-hand side, it is a negative large number. The next result says
that no such pair of vectors exists.
Corollary 2.6 (Cauchy-Schwartz Inequality)
For any u, v ∈ R^n, |u · v| ≤ |u| |v|,
with equality if and only if one vector is a scalar multiple of the other.
Proof
The Triangle Inequality's proof shows that
so if
is
The angle between two nonzero vectors u, v ∈ R^n is θ = arccos( (u · v) / (|u| |v|) )
(the angle between the zero vector and any other vector is defined to be a right angle).
Thus vectors from R^n are orthogonal (or perpendicular) if and only if their dot product is zero.
Example 2.8
These vectors are orthogonal.
The arrows are shown away from canonical position but nevertheless the vectors are orthogonal.
Example 2.9
The R^2 angle formula given at the start of this subsection is a special case of the definition. Between two vectors
of the plane the angle computed this way agrees with the usual one. Note also that although one coordinate plane in R^3 may appear to
be perpendicular to another, in fact the two planes are that way only in the weak sense that there are vectors in
each orthogonal to all vectors in the other. Not every vector in each is orthogonal to all vectors in the other.
Exercises
This exercise is recommended for all readers.
Problem 1
Find the length of each vector.
1.
2.
3.
4.
5.
2.
3.
This exercise is recommended for all readers.
Problem 3
During maneuvers preceding the Battle of Jutland, the British battle cruiser Lion moved as follows (in nautical
miles):
miles north,
miles
degrees east of south,
miles at
degrees east of north, and
miles
at
degrees east of north. Find the distance between starting and ending positions (O'Hanian 1985).
Problem 4
Find
Problem 5
Describe the set of vectors in
axes?
Problem 7
Is any vector perpendicular to itself?
This exercise is recommended for all readers.
Problem 8
Describe the algebraic properties of dot product.
1.
2.
3.
4.
2. Show that
then
and
Problem 10
Suppose that
and
. Must
with
in
. Generalize to
Problem 13
Show that if
then
Problem 14
Show that if
then
is
times as long as
. What if
of length one is a unit vector. Show that the dot product of two unit vectors has absolute value
less than or equal to one. Can "less than" happen? Can "equal to"?
Problem 16
Prove that
Problem 17
for every
then
Problem 18
Is
Problem 19
What is the ratio between the sides in the Cauchy-Schwartz inequality?
Problem 20
Why is the zero vector defined to be perpendicular to every vector?
Problem 21
Describe the angle between two vectors in
Problem 22
Give a simple necessary and sufficient condition to determine whether the angle between two vectors is acute, right,
or obtuse.
This exercise is recommended for all readers.
Problem 23
Generalize to
and
.
Problem 24
Show that
if and only if
and
Problem 25
Show that if a vector is perpendicular to each of two others then it is perpendicular to each vector in the plane they
generate. (Remark. They could generate a degenerate plane a line or a point but the statement remains true.)
Problem 26
Prove that, where
Problem 27
Verify that the definition of angle is dimensionally correct: (1) if
and
and
, and (2) if
and
and
that
the
inner
product
operation
is
linear:
for
and
.
This exercise is recommended for all readers.
Problem 29
The geometric mean of two positive reals $x$ and $y$ is $\sqrt{xy}$.
Use the Cauchy-Schwartz inequality to show that the geometric mean of any two positive reals is less than or equal to their
arithmetic mean.
? Problem 30
; the wind blows apparently (judging by the vane on the mast) in the
direction of a vector
direction of a vector
to
Find the vector velocity of the wind (Ivanoff & Esty 1933).
Problem 31
Verify the Cauchy-Schwartz inequality by first proving Lagrange's identity:
and then noting that the final term is positive. (Recall the meaning
and
of the
notation.) This result is an improvement over Cauchy-Schwartz because it gives a formula for the
Solutions
References
O'Hanian, Hans (1985), Physics, 1, W. W. Norton
Ivanoff, V. F. (proposer); Esty, T. C. (solver) (Feb. 1933), "Problem 3529", American Mathematical Monthly 39
(2): 118
Pólya, G. (1954), Mathematics and Plausible Reasoning: Volume II. Patterns of Plausible Inference, Princeton
University Press
with
. The third
(after the first pivot the matrix is already in echelon form so the
We can keep going to a second stage by making the leading entries into ones
and then to a third stage that uses the leading entries to eliminate all of the other entries in each column by pivoting
upwards.
The answer is
, and
Note that the pivot operations in the first stage proceed from column one to column three while the pivot operations
in the third stage proceed from column three to column one.
Example 1.2
We often combine the operations of the middle stage into a single step, even though they are operations on different
rows.
The answer is
and
This extension of Gauss' method is Gauss-Jordan reduction. It goes past echelon form to a more refined, more
specialized, matrix form.
Definition 1.3
A matrix is in reduced echelon form if, in addition to being in echelon form, each leading entry is a one and is the
only nonzero entry in its column.
The disadvantage of using Gauss-Jordan reduction to solve a system is that the additional row operations mean
additional arithmetic. The advantage is that the solution set can just be read off.
In any echelon form, plain or reduced, we can read off when a system has an empty solution set because there is a
contradictory equation, we can read off when a system has a one-element solution set because there is no
contradiction and every variable is the leading variable in some row, and we can read off when a system has an
infinite solution set because there is no contradiction and at least one variable is free.
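For instance (a small illustrative system, separate from the example worked below), from this reduced echelon form augmented matrix the parametrization can be read directly:
$$\left(\begin{array}{ccc|c}1&0&2&3\\0&1&-1&4\end{array}\right)
\qquad\text{gives the solution set}\qquad
\left\{\begin{pmatrix}3\\4\\0\end{pmatrix}+z\begin{pmatrix}-2\\1\\1\end{pmatrix}\ \middle|\ z\in\mathbb{R}\right\}$$
since the third variable $z$ is free and each row gives its leading variable in terms of $z$ and the right-hand side.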
In reduced echelon form we can read off not just what kind of solution set the system has, but also its description.
Whether or not the echelon form is reduced, we have no trouble describing the solution set when it is empty, of
course. The two examples above show that when the system has a single solution then the solution can be read off
from the right-hand column. In the case when the solution set is infinite, its parametrization can also be read off of
the reduced echelon form. Consider, for example, this system that is shown brought to echelon form and then to
reduced echelon form.
Starting with the middle matrix, the echelon form version, back substitution produces
, then another back substitution gives
and then the final back substitution gives
. Thus the solution set is this.
implying that
so that
,
implying that
nonetheless we have in this book stuck to a convention of parametrizing using the unmodified free variables (that is,
instead of
). We can easily see that a reduced echelon form version of a system is equivalent to
a parametrization in terms of unmodified free variables. For instance,
(to move from left to right we also need to know how many equations are in the system). So, the convention of
parametrizing with the free variables by solving each equation for its leading variable and then eliminating that
leading variable from every other equation is exactly equivalent to the reduced echelon form conditions that each
leading entry must be a one and must be the only nonzero entry in its column.
Not as straightforward is the other part of the reason that the reduced echelon form version allows us to read off the
parametrization that we would have gotten had we stopped at echelon form and then done back substitution. The
prior paragraph shows that reduced echelon form corresponds to some parametrization, but why the same
parametrization? A solution set can be parametrized in many ways, and Gauss' method or the Gauss-Jordan method
can be done in many ways, so a first guess might be that we could derive many different reduced echelon form
versions of the same starting system and many different parametrizations. But we never do. Experience shows that
starting with the same system and proceeding with row operations in many different ways always yields the same
reduced echelon form and the same parametrization (using the unmodified free variables).
In the rest of this section we will show that the reduced echelon form version of a matrix is unique. It follows that the
parametrization of a linear system in terms of its unmodified free variables is unique because two different ones
would give two different reduced echelon forms.
We shall use this result, and the ones that lead up to it, in the rest of the book but perhaps a restatement in a way that
makes it seem more immediately useful may be encouraging. Imagine that we solve a linear system, parametrize,
and check in the back of the book for the answer. But the parametrization there appears different. Have we made a
mistake, or could these be different-looking descriptions of the same set, as with the three descriptions above of ?
The prior paragraph notes that we will show here that different-looking parametrizations (using the unmodified free
variables) describe genuinely different sets.
Here is an informal argument that the reduced echelon form version of a matrix is unique. Consider again the
example that started this section of a matrix that reduces to three different echelon form matrices. The first matrix of
the three is the natural echelon form version. The second matrix is the same as the first except that a row has been
halved. The third matrix, too, is just a cosmetic variant of the first. The definition of reduced echelon form outlaws
this kind of fooling around. In reduced echelon form, halving a row is not possible because that would change the
row's leading entry away from one, and neither is combining rows possible, because then a leading entry would no
longer be alone in its column.
This informal justification is not a proof; we have argued that no two different reduced echelon form matrices are
related by a single row operation step, but we have not ruled out the possibility that multiple steps might do. Before
we go to that proof, we finish this subsection by rephrasing our work in a terminology that will be enlightening.
Many different matrices yield the same reduced echelon form matrix. The three echelon form matrices from the start
of this section, and the matrix they were derived from, all give this reduced echelon form matrix.
We think of these matrices as related to each other. The next result speaks to this relationship.
Lemma 1.4
Elementary row operations are reversible.
Proof
For any matrix $A$, the effect of swapping rows is reversed by swapping them back, multiplying a row by a nonzero
scalar $k$ is undone by multiplying by $1/k$, and adding a multiple $k$ of row $i$ to row $j$ (with $i\neq j$) is undone by
subtracting $k$ times row $i$ from row $j$. (The $i\neq j$ condition is needed; see the exercises.)
This lemma suggests a point of view: where matrix $A$ reduces to matrix $B$, we shouldn't think of $B$ as "after" $A$
or as simpler than $A$. Instead we should think of them as interreducible or interrelated. Below is a picture of the idea.
The matrices from the start of this section and their reduced echelon form version are shown in a cluster. They are all
interreducible; these relationships are shown also.
We say that matrices that reduce to each other are "equivalent with respect to the relationship of row reducibility".
The next result verifies this statement using the definition of an equivalence.[1]
Lemma 1.5
Between matrices, "reduces to" is an equivalence relation.
Proof
We must check the conditions (i) reflexivity, that any matrix reduces to itself, (ii) symmetry, that if $A$ reduces to $B$
then $B$ reduces to $A$, and (iii) transitivity, that if $A$ reduces to $B$ and $B$ reduces to $C$ then $A$ reduces to $C$.
Reflexivity is easy; any matrix reduces to itself in zero row operations.
That the relationship is symmetric is Lemma 1.4: if $A$ reduces to $B$ by some row operations then $B$ reduces to $A$
by reversing each of those operations, in the opposite order.
Finally, for transitivity, suppose that $A$ reduces to $B$ and that $B$ reduces to $C$. Following the reduction steps
from $A$ to $B$ with those from $B$ to $C$ gives a reduction from $A$ to $C$.
Definition 1.6
Two matrices that are interreducible by the elementary row operations are row equivalent.
One of the classes in this partition is the cluster of matrices shown above, expanded to include all of the nonsingular
matrices.
The next subsection proves that the reduced echelon form of a matrix is unique; that every matrix reduces to one and
only one reduced echelon form matrix. Rephrased in terms of the row-equivalence relationship, we shall prove that
every matrix is row equivalent to one and only one reduced echelon form matrix. In terms of the partition what we
shall prove is: every equivalence class contains one and only one reduced echelon form matrix. So each reduced
echelon form matrix serves as a representative of its class.
After that proof we shall, as mentioned in the introduction to this section, have a way to decide if one matrix can be
derived from another by row reduction. We just apply the Gauss-Jordan procedure to both and see whether or not
they come to the same reduced echelon form.
Exercises
This exercise is recommended for all readers.
Problem 1
Use Gauss-Jordan reduction to solve each system.
1.
2.
3.
4.
This exercise is recommended for all readers.
Problem 2
Find the reduced echelon form of each matrix.
1.
2.
3.
4.
This exercise is recommended for all readers.
Problem 3
Find each solution set by using Gauss-Jordan reduction, then reading off the parametrization.
1.
2.
3.
4.
Problem 4
Give two distinct echelon form versions of this matrix.
.
3. Expand the proof of that lemma to make explicit exactly where the
. Show that in
operation is not reversed by
Solutions
Footnotes
[1] More information on equivalence relations is in the appendix.
[2] More information on partitions and class representatives is in the appendix.
where the
's
are scalars.
(We have already used the phrase "linear combination" in this book. The meaning is unchanged, but the next result's
statement makes a more formal definition in order.)
Lemma 2.2 (Linear Combination Lemma)
A linear combination of linear combinations is a linear combination.
Proof
Given the linear combinations
through
, consider a
combination of those
where the
's.
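In symbols (with coefficient names $c_{i,j}$ and $d_i$ chosen here just for the sketch): if each
$\vec{\beta}_i=c_{i,1}\vec{\alpha}_1+\cdots+c_{i,n}\vec{\alpha}_n$ then
$$d_1\vec{\beta}_1+\cdots+d_m\vec{\beta}_m
=\sum_{i=1}^{m}d_i\sum_{j=1}^{n}c_{i,j}\vec{\alpha}_j
=\sum_{j=1}^{n}\Bigl(\sum_{i=1}^{m}d_i\,c_{i,j}\Bigr)\vec{\alpha}_j,$$
which is again a linear combination of $\vec{\alpha}_1,\dots,\vec{\alpha}_n$.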
In this subsection we will use the convention that, where a matrix is named with an upper case roman letter, the
matching lower-case greek letter names the rows.
Corollary 2.3
Where one matrix reduces to another, each row of the second is a linear combination of the rows of the first.
The proof below uses induction on the number of row operations used to reduce one matrix to the other. Before we
proceed, here is an outline of the argument (readers unfamiliar with induction may want to compare this argument
with the one used in the "
" proof).[1] First, for the base step of the
argument, we will verify that the proposition is true when reduction can be done in zero row operations. Second, for
the inductive step, we will argue that if being able to reduce the first matrix to the second in some number
of
operations implies that each row of the second is a linear combination of the rows of the first, then being able to
reduce the first to the second in
operations implies the same thing. Together, this base step and induction step
prove this result because by the base step the proposition is true in the zero operations case, and by the inductive step
the fact that it is true in the zero operations case implies that it is true in the one operation case, and the inductive
step applied again gives that it is therefore true in the two operations case, etc.
Proof
We proceed by induction on the minimum number of row operations that take a first matrix
to a second one
In the base step, that zero reduction operations suffice, the two matrices are equal and each row of
combination of
's rows:
are
more
than
. This
zero
operations,
is only
there
be
that takes
next-to-last
matrix
in
or
operations.
so
that
is obviously a
reordered
operation, that it multiplies a row by a scalar and that it adds a multiple of one row to another, both result in the rows
of
being linear combinations of the rows of
. But therefore, by the Linear Combination Lemma, each row of
is a linear combination of the rows of
With that, we have both the base step and the inductive step, and so the proposition follows.
Example 2.4
In the reduction
, and
. The methods of the proof show that there are three sets of linear
relationships.
The prior result gives us the insight that Gauss' method works by taking linear combinations of the rows. But to what
end; why do we go to echelon form as a particularly simple, or basic, version of a linear system? The answer, of
course, is that echelon form is suitable for back substitution, because we have isolated the variables. For instance, in
this matrix
's row.
Independence of a collection of row vectors, or of any kind of vectors, will be precisely defined and explored in the
next chapter. But a first take on it is that we can show that, say, the third row above is not comprised of the other
rows, that
, and
The first row's leading entry is in the first column and narrowing our consideration of the above relationship to
consideration only of the entries from the first column
gives that
. The second
row's leading entry is in the third column and the equation of entries in that column
with the knowledge that
, gives that
, along
, along with
and
, gives
an impossibility.
The following result shows that this effect always holds. It shows that what Gauss' linear elimination method
eliminates is linear relationships among the rows.
Lemma 2.5
In an echelon form matrix, no nonzero row is a linear combination of the other rows.
Proof
Let
be in echelon form. Suppose, to obtain a contradiction, that some nonzero row is a linear combination of the
others.
, ...,
The base step of the induction argument is to show that the first coefficient
is zero. Let the first row's leading
entry be in column number and consider the equation of entries in that column.
, ...,
, ...,
, including
between
must be zero.
and
, if the coefficient
and the
is also zero. That argument, and the contradiction that finishes this
where
matrix to be the
and
if there is
no leading entry in that row. The lemma says that if two echelon form matrices are row equivalent then their forms
are equal sequences.
Proof
Let
and
be echelon form matrices that are row equivalent. Because they are row equivalent they must be the
. Let the column number of the leading entry in row
of
be
of
, that
be
This induction argument relies on the fact that the matrices are row equivalent, because the Linear Combination
Lemma and its corollary therefore give that each row of
is a linear combination of the rows of
and vice
versa:
where the
's and
The base step of the induction is to verify the lemma for the first rows of the matrices, that is, to verify that
. If either row is a zero row then the entire matrix is a zero matrix since it is in echelon form, and therefore both
matrices are zero matrices (by Corollary 2.3), and so both and are
. For the case where neither
nor
is a zero row, consider the
the interval
, but
symmetric argument shows that
The inductive step is to show that if
would be
isn't zero since it leads its row and so this is an impossibility. Next, a
also is impossible. Thus the
, and
, ..., and
(for
in
That lemma answers two of the questions that we have posed: (i) any two echelon form versions of a matrix have the
same free variables, and consequently, and (ii) any two echelon form versions have the same number of free
variables. There is no linear system and no combination of row operations such that, say, we could solve the system
one way and get and free but solve it another way and get and free, or solve it one way and get two free
variables while solving it another way yields three.
We finish now by specializing to the case of reduced echelon form matrices.
Theorem 2.7
Each matrix is row equivalent to a unique reduced echelon form matrix.
Proof
Clearly any matrix is row equivalent to at least one reduced echelon form matrix, via Gauss-Jordan reduction. For
the other half, that any matrix is equivalent to at most one reduced echelon form matrix, we will show that if a matrix
Gauss-Jordan reduces to each of two others then those two are equal.
Suppose that a matrix is row equivalent to two reduced echelon form matrices
and
equivalent to each other. The Linear Combination Lemma and its corollary allow us to write the rows of one, say
, as a linear combination of the rows of the other
2.6, says that in the two matrices, the same collection of rows are nonzero. Thus, if
rows of
are
through
through
-th column
between
and
-th row of
Since
up to
's in column
. But
, which is
. Therefore, each
. Thus each
, ..., and
.
We have shown that the only nonzero coefficient in the linear combination labelled (
Therefore
, which is
) is
, and
, which is
We end with a recap. In Gauss' method we start with a matrix and then derive a sequence of other matrices. We
defined two matrices to be related if one can be derived from the other. That relation is an equivalence relation,
called row equivalence, and so partitions the set of all matrices into row equivalence classes.
(There are infinitely many matrices in the pictured class, but we've only got room to show two.) We have proved
there is one and only one reduced echelon form matrix in each row equivalence class. So the reduced echelon form is
a canonical form[2] for row equivalence: the reduced echelon form matrices are representatives of the classes.
We can answer questions about the classes by translating them into questions about the representatives.
Example 2.8
We can decide if matrices are interreducible by seeing if Gauss-Jordan reduction produces the same reduced echelon
form result. Thus, these are not row equivalent
Example 2.9
Any nonsingular
Example 2.10
We can describe the classes by listing all possible reduced echelon form matrices. Any
these: the class of matrices row equivalent to this,
the infinitely many classes of matrices row equivalent to one of this type
where
(including
matrices).
Exercises
This exercise is recommended for all readers.
Problem 1
Decide if the matrices are row equivalent.
1.
2.
3.
4.
5.
Problem 2
Describe the matrices in each of the classes represented in Example 2.10.
Problem 3
Describe all matrices in the row equivalence class of these.
1.
2.
3.
Problem 4
How many row equivalence classes are there?
Problem 5
Can row equivalence classes contain different-sized matrices?
Problem 6
How big are the row equivalence classes?
1. Show that the class of any zero matrix is finite.
2. Do any other classes contain only finitely many members?
This exercise is recommended for all readers.
Problem 7
Give two reduced echelon form matrices that have their leading entries in the same columns, but that are not row
equivalent.
This exercise is recommended for all readers.
Problem 8
Show that any two
nonsingular matrices are row equivalent. Are any two singular matrices row equivalent?
matrices
matrices
matrices
matrices
Problem 10
1. Show that a vector
linear relationship
where
2. Use that to simplify the proof of Lemma 2.5.
This exercise is recommended for all readers.
Problem 11
case.)
for
also.
3. Find the contradiction.
Problem 12
Finish the induction argument in Lemma 2.6.
1. State the inductive hypothesis. Also state what must be shown to follow from that hypothesis.
2. Check that the inductive hypothesis implies that in the relationship
the coefficients
are each zero.
3. Finish the inductive step by arguing, as in the base case, that
and
are impossible.
Problem 13
Why, in the proof of Theorem 2.7, do we bother to restrict to the nonzero rows? Why not just stick to the relationship
that we began with,
, with
instead of , and argue using it that the only
nonzero coefficient is
, which is
Footnotes
[1] More information on mathematical induction is in the appendix.
[2] More information on canonical representatives is in the appendix.
References
Hoffman, Kenneth; Kunze, Ray (1971), Linear Algebra (Second ed.), Prentice Hall
Trono, Tony (compiler) (1991), University of Vermont Mathematics Department High School Prize
Examinations 1958-1991, mimeographed printing
It can be done by hand, but it would take a while and be error-prone. Using a computer is better.
We illustrate by solving that system under Maple (for another system, a user's manual would obviously detail the
exact syntax needed). The array of coefficients can be entered in this way
> A:=array( [[1,-1,-1,0,0,0,0],
[0,1,0,-1,0,-1,0],
[0,0,1,0,-1,1,0],
[0,0,0,1,1,0,-1],
[0,5,0,10,0,0,0],
[0,0,2,0,4,0,0],
[0,5,-2,0,0,50,0]] );
(putting the rows on separate lines is not necessary, but is done for clarity). The vector of constants is entered
similarly.
> u:=array( [0,0,0,0,10,10,0] );
Then the system is solved, like magic.
> linsolve(A,u);
7 2 5 2 5 7
Exercises
Answers for this Topic use Maple as the computer algebra system. In particular, all of these were tested on Maple V
running under MS-DOS NT version 4.0. (On all of them, the preliminary command to load the linear algebra
package, along with Maple's responses to the Enter key, have been omitted.) Other systems have similar commands.
Problem 1
Use the computer to solve the two problems that opened this chapter.
1. This is the Statics problem.
Problem 2
Use the computer to solve these systems from the first subsection, or conclude "many solutions" or "no solutions".
1.
2.
3.
4.
5.
6.
Problem 3
Use the computer to solve these systems from the second subsection.
1.
2.
3.
4.
5.
6.
Problem 4
What does the computer give for the solution of the general
system?
Solutions
                  used by steel   used by auto   used by others    total
value of steel         5 395          2 664                       25 448
value of auto             48          9 030                       30 346
For instance, the dollar value of steel used by the auto industry in this year is
external demands and of how auto and steel interact, this year, to meet them.
Now, imagine that the external demand for steel has recently been going up by
next year it will be
. Imagine also that for similar reasons we estimate that next year's external demand for
to
For one thing, a rise in steel will cause that industry to have an increased demand for autos, which will mitigate, to
some extent, the loss in external demand for autos. On the other hand, the drop in external demand for autos will
cause the auto industry to use less steel, and so lessen somewhat the upswing in steel's business. In short, these two
industries form a system, and we need to predict the totals at which the system as a whole will settle.
For that prediction, let
form these equations.
On the left side of those equations go the unknowns and . At the ends of the right sides go our external demand
estimates for next year
and
. For the remaining four terms, we look to the table of this year's
information about how the industries interact.
For instance, for next year's use of steel by steel, we note that this year the steel industry used
units of steel
input to produce
units out, we
units of steel output. So next year, when the steel industry will produce
is proportional to output. (We are assuming that the ratio of input to output remains constant over time; in practice,
models may try to take account of trends of change in the ratios.)
Next year's use of steel by the auto industry is similar. This year, the auto industry uses
units of steel input to
produce
units of auto output. So next year, when the auto industry's total output is
consume
, we expect it to
units of steel.
Filling in the other equation in the same way, we get this system of linear equations.
gives
and
Looking back, recall that above we described why the prediction of next year's totals isn't as simple as adding
to last year's steel total and subtracting
from last year's auto total. In fact, comparing these totals for next year to
the ones given at the start for the current year shows that, despite the drop in external demand, the total production of
the auto industry is predicted to rise. The increase in internal demand for autos caused by steel's sharp rise in
business more than makes up for the loss in external demand for autos.
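As a concrete illustration of the computation involved (the ratios and demands below are made-up values, not the ones from the table above), here is a small C sketch that solves a two-sector system of this form by elimination.

#include <stdio.h>

/* Solve a hypothetical two-sector input-output system
 *   s = r_ss*s + r_sa*a + d_s      (steel)
 *   a = r_as*s + r_aa*a + d_a      (auto)
 * by rewriting it as a 2x2 linear system and eliminating.
 * The ratios r_* and demands d_* are made-up illustrative values. */
int main(void) {
    double r_ss = 0.20, r_sa = 0.10;    /* hypothetical use-of-steel ratios */
    double r_as = 0.00, r_aa = 0.30;    /* hypothetical use-of-auto ratios  */
    double d_s  = 17000, d_a = 21000;   /* hypothetical external demands    */

    /* (1-r_ss)*s -     r_sa*a = d_s
          -r_as*s + (1-r_aa)*a = d_a                                       */
    double a11 = 1 - r_ss, a12 = -r_sa,    b1 = d_s;
    double a21 = -r_as,    a22 = 1 - r_aa, b2 = d_a;

    /* eliminate s from the second equation, then back-substitute */
    double m = a21 / a11;
    a22 -= m * a12;
    b2  -= m * b1;
    double a = b2 / a22;
    double s = (b1 - a12 * a) / a11;

    printf("predicted totals: steel %.0f, auto %.0f\n", s, a);
    return 0;
}

With these invented numbers the prediction is a steel total of 25 000 and an auto total of 30 000; the point is only to show the shape of the calculation.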
One of the advantages of having a mathematical model is that we can ask "What if ...?" questions. For instance, we
can ask "What if the estimates for next year's external demands are somewhat off?" To try to understand how much
the model's predictions change in reaction to changes in our estimates, we can try revising our estimate of next year's
external steel demand from
down to
, while keeping the assumption of next year's external
and
We are seeing how sensitive the predictions of our model are to the accuracy of the assumptions.
Obviously, we can consider larger models that detail the interactions among more sectors of an economy. These
models are typically solved on a computer, using the techniques of matrix algebra that we will develop in Chapter
Three. Some examples are given in the exercises. Obviously also, a single model does not suit every case; expert
judgment is needed to see if the assumptions underlying the model are reasonable for a particular case. With those
caveats, however, this model has proven in practice to be a useful and accurate tool for economic analysis. For
further reading, try (Leontief 1951) and (Leontief 1965).
Exercises
Hint: these systems are easiest to solve on a computer.
Problem 1
With the steel-auto system given above, estimate next year's total productions in these cases.
1. Next year's external demands are: up
2. Next year's external demands are: up
3. Next year's external demands are: up
Problem 2
In the steel-auto system, the ratio for the use of steel by the auto industry is
, about
Imagine that a new process for making autos reduces this ratio to
.
1. How will the predictions for next year's total productions change compared to the first example discussed above
(i.e., taking next year's external demands to be
for steel and
for autos)?
2. Predict next year's totals if, in addition, the external demand for autos rises to be
are cheaper.
Problem 3
This table gives the numbers for the auto-steel system from a different year, 1947 (see Leontief 1951). The units here
are billions of 1947 dollars.
used by steel   used by auto   used by others
value of steel:   6.90   1.28   18.69
value of auto:    4.40   14.27
total
1. Solve for total output if next year's external demands are: steel's demand up 10% and auto's demand up 15%.
2. How do the ratios compare to those given above in the discussion for the 1958 economy?
3. Solve the 1947 equations with the 1958 external demands (note the difference in units; a 1947 dollar buys about
what $1.30 in 1958 dollars buys). How far off are the predictions for total output?
Problem 4
Predict next year's total productions of each of the three sectors of the hypothetical economy shown below
used by farm   used by rail   used by shipping   used by others   total
value of farm:       25   50   100   500
value of rail:       25   50    50   300
value of shipping:   15   10   500
for farm,
for farm,
for rail,
for rail,
for shipping
for shipping
Problem 5
This table gives the interrelationships among three segments of an economy (see Clark & Coupe 1967).
used by food   used by wholesale   used by retail   used by others
value of food:        2 318   4 679   11 869
value of wholesale:     393   1 089   22 459   122 242
value of retail:         53      75   116 041
total
References
Leontief, Wassily W. (Oct. 1951), "Input-Output Economics", Scientific American 185 (4): 15.
Leontief, Wassily W. (Apr. 1965), "The Structure of the U.S. Economy", Scientific American 212 (4): 25.
Clark, David H.; Coupe, John D. (Mar. 1967), "The Bangor Area Economy: Its Present and Future", Report to the
City of Bangor, ME.
matrix a,
pivoting with the first row, then with the second row, etc.
for(pivot_row=1;pivot_row<=n-1;pivot_row++){
for(row_below=pivot_row+1;row_below<=n;row_below++){
multiplier=a[row_below,pivot_row]/a[pivot_row,pivot_row];
for(col=pivot_row;col<=n;col++){
a[row_below,col]-=multiplier*a[pivot_row,col];
}
}
}
(This code is in the C language. Here is a brief translation. The loop construct
for(pivot_row=1;pivot_row<=n-1;pivot_row++){...} sets pivot_row to 1 and then iterates
while pivot_row is less than or equal to
, each time through incrementing pivot_row by one with the
"++" operation. The other non-obvious construct is that the "-=" in the innermost loop amounts to the
a[row_below,col]=-multiplier*a[pivot_row,col]+a[row_below,col]} operation.)
While this code provides a quick take on how Gauss' method can be mechanized, it is not ready to use. It is naive in
many ways. The most glaring way is that it assumes that a nonzero number is always found in the pivot_row,
pivot_row position for use as the pivot entry. To make it practical, one way in which this code needs to be reworked
is to cover the case where finding a zero in that location leads to a row swap, or to the conclusion that the matrix is
singular.
Adding some if
statements to cover those cases is not hard, but we will instead consider some more subtle
ways in which the code is naive. There are pitfalls arising from the computer's reliance on finite-precision floating
point arithmetic.
For example, we have seen above that we must handle as a separate case a system that is singular. But systems that
are nearly singular also require care. Consider this one.
and
numbers to eight significant places (as is common, usually called single precision) will represent the second
equation internally as
, losing the digits in the ninth place. Instead of
reporting the correct solution, this computer will report something that is not even close this computer thinks that
the system is singular because the two equations are represented internally as equal.
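A tiny self-contained illustration of that representation effect (the numbers here are made up; float stands in for an eight-place single-precision machine):

#include <stdio.h>

/* Single precision keeps only about seven or eight significant
 * decimal digits, so two coefficients that differ in the ninth
 * place are stored as the same number.  The values are made up
 * for illustration.                                             */
int main(void) {
    float a = 1.0f;
    float b = 1.000000001f;   /* differs from a only in the ninth place */

    if (a == b)
        printf("stored as equal: the machine cannot tell the two apart\n");
    else
        printf("stored as different\n");
    return 0;
}

On a machine with standard single-precision floats this prints that the two are stored as equal, which is exactly the kind of internal collapse described above.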
For some intuition about how the computer could come up with something that far off, we can graph the system.
At the scale of this graph, the two lines cannot be resolved apart. This system is nearly singular in the sense that the
two lines are nearly the same line. Near-singularity gives this system the property that a small change in the system
can cause a large change in its solution; for instance, changing the
to
changes the
intersection point from
to
why the eight-place computer has trouble. A problem that is very sensitive to inaccuracy or uncertainties in the input
values is ill-conditioned.
The above example gives one way in which a system can be difficult to solve on a computer. It has the advantage
that the picture of nearly-equal lines gives a memorable insight into one way that numerical difficulties can arise.
Unfortunately this insight isn't very useful when we wish to solve some large system. We cannot, typically, hope to
understand the geometry of an arbitrary large system. In addition, there are ways that a computer's results may be
unreliable other than that the angle between some of the linear surfaces is quite small.
For an example, consider the system below, from (Hamming 1971).
, so
and thus both variables have values that are just less
. A computer using two digits represents the system internally in this way (we will do this example in
two-digit floating point arithmetic, but a similar one with eight digits is easy to invent).
, which the
. This
is quite bad.
Thus, another cause of unreliable output is a mixture of floating point arithmetic and a reliance on pivots that are
small.
An experienced programmer may respond that we should go to double precision where sixteen significant digits are
retained. This will indeed solve many problems. However, there are some difficulties with it as a general approach.
For one thing, double precision takes longer than single precision (on a '486 chip, multiplication takes eleven ticks in
single precision but fourteen in double precision (Microsoft 1993)) and has twice the memory requirements. So
attempting to do all calculations in double precision is just not practical. And besides, the above systems can
obviously be tweaked to give the same trouble in the seventeenth digit, so double precision won't fix all problems.
What we need is a strategy to minimize the numerical trouble arising from solving systems on a computer, and some
guidance as to how far the reported solutions can be trusted.
Mathematicians have made a careful study of how to get the most reliable results. A basic improvement on the naive
code above is to not simply take the entry in the pivot_row, pivot_row position for the pivot, but rather to look at all
of the entries in the pivot_row column below the pivot_row row, and take the one that is most likely to give reliable
results (e.g., take one that is not too small). This strategy is partial pivoting. For example, to solve the troublesome
system ( ) above, we start by looking at both equations for a best first pivot, and taking the in the second
equation as more likely to give good results. Then, the pivot step of
, which the computer will represent as
and, after back-substitution,
, both of which are close to right. The code from above can be adapted
to this purpose.
for(pivot_row=1;pivot_row<=n-1;pivot_row++){
/* find the largest pivot in this column (in row max) */
max=pivot_row;
for(row_below=pivot_row+1;row_below<=n;row_below++){
if (abs(a[row_below,pivot_row]) > abs(a[max,pivot_row]))
max=row_below;
}
/* swap rows to move that pivot entry up */
for(col=pivot_row;col<=n;col++){
temp=a[pivot_row,col];
a[pivot_row,col]=a[max,col];
a[max,col]=temp;
}
/* proceed as before */
for(row_below=pivot_row+1;row_below<=n;row_below++){
multiplier=a[row_below,pivot_row]/a[pivot_row,pivot_row];
for(col=pivot_row;col<=n;col++){
a[row_below,col]-=multiplier*a[pivot_row,col];
}
}
}
A full analysis of the best way to implement Gauss' method is outside the scope of the book (see (Wilkinson 1965)),
but the method recommended by most experts is a variation on the code above that first finds the best pivot among
the candidates, and then scales it to a number that is less likely to give trouble. This is scaled partial pivoting.
In addition to returning a result that is likely to be reliable, most well-done code will return a number, called the
condition number, that describes the factor by which uncertainties in the input numbers could be magnified to
become inaccuracies in the results returned (see (Rice 1993)).
The lesson of this discussion is that just because Gauss' method always works in theory, and just because computer
code correctly implements that method, and just because the answer appears on green-bar paper, doesn't mean that
the answer is reliable. In practice, always use a package where experts have worked hard to counter what can go
wrong.
Exercises
Problem 1
Using two decimal places, add
and
Problem 2
This intersect-the-lines problem contrasts with the example discussed above.
Illustrate that in this system some small change in the numbers will produce only a small change in the solution by
changing the constant in the bottom equation to
and solving. Compare it to the solution of the unchanged
system.
Problem 3
as unequal to
and
1. Solve the system by hand. Notice that the 's divide out only because there is an exact cancelation of the integer
parts on the right side as well as on the left.
2. Solve the system by hand, rounding to two decimal places, and with
.
Solutions
References
Hamming, Richard W. (1971), Introduction to Applied Numerical Analysis, Hemisphere Publishing.
Rice, John R. (1993), Numerical Methods, Software, and Analysis, Academic Press.
Wilkinson, J. H. (1965), The Algebraic Eigenvalue Problem, Oxford University Press.
Microsoft (1993), Microsoft Programmers Reference, Microsoft Press.
The designer of such a network needs to answer questions like: How much electricity flows when both the hi-beam
headlights and the brake lights are on? Below, we will use linear systems to analyze simpler versions of electrical
networks.
For the analysis we need two facts about electricity and two facts about electrical networks.
The first fact about electricity is that a battery is like a pump: it provides a force impelling the electricity to flow
through the circuits connecting the battery's ends, if there are any such circuits. We say that the battery provides a
potential to flow. Of course, this network accomplishes its function when, as the electricity flows through a circuit,
it goes through a light. For instance, when the driver steps on the brake then the switch makes contact and a circuit is
formed on the left side of the diagram, and the electrical current flowing through that circuit will make the brake
lights go on, warning drivers behind.
The second electrical fact is that in some kinds of network components the amount of flow is proportional to the
force provided by the battery. That is, for each such component there is a number, its resistance, such that the
potential is equal to the flow times the resistance. The units of measurement are: potential is described in volts, the
rate of flow is in amperes, and resistance to the flow is in ohms. These units are defined so that
volts = amperes · ohms.
Components with this property, that the voltage-amperage response curve is a line through the origin, are called
resistors. (Light bulbs such as the ones shown above are not this kind of component, because their ohmage changes
as they heat up.) For example, if a resistor measures ohms then wiring it to a
volt battery results in a flow of
amperes. Conversely, if we have flow of electrical current of
volt
potential difference between its ends. This is the voltage drop across the resistor. One way to think of an electrical
circuit like the one above is that the battery provides a voltage rise while the other components are voltage drops.
The two facts that we need about networks are Kirchhoff's Laws.
Current Law. For any point in a network, the flow in equals the flow out.
Voltage Law. Around any circuit the total drop equals the total rise.
In the above network there is only one voltage rise, at the battery, but some networks have more than one.
For a start we can consider the network below. It has a battery that provides the potential to flow and three resistors
(resistors are drawn as zig-zags). When components are wired one after another, as here, they are said to be in series.
volts.
ohms (the resistance of the wires is negligible), we get that the current
ohm resistor,
volts
ohm resistor.)
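In general terms, and with hypothetical values, the series case works like this: a $V$-volt battery driving resistances $r_1$, $r_2$, $r_3$ wired in series produces a single current $i$ with
$$V=i\,r_1+i\,r_2+i\,r_3=i\,(r_1+r_2+r_3),$$
so, for example, a $9$-volt battery across $2$, $3$, and $4$ ohm resistors gives $i=1$ ampere, with voltage drops of $2$, $3$, and $4$ volts.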
The prior network is so simple that we didn't use a linear system, but the next network is more complicated. In this
one, the resistors are in parallel. This network is more like the car lighting diagram shown earlier.
We begin by labeling the branches, shown below. Let the current through the left branch of the parallel portion be
and that through the right branch be
Kirchhoff's Current Law; for instance, all points in the right branch have the same current. Note
that we don't need to know the actual direction of flow; if current flows in the direction opposite to our arrow then
we will simply get a negative number in the solution.)
The Current Law, applied to the point in the upper right where the flow
. Applied to the lower right it gives
meets
and
, gives that
battery, down the left branch of the parallel portion, and back into the bottom of the battery, the voltage rise is
the voltage drop is
. And, in the circuit that simply loops around in the left and right branches of the
The solution is
while
, and
This network is a Wheatstone bridge (see Problem 4). To analyze it, we can place the arrows in this way.
Kirchhoff's Current Law, applied to the top node, the left node, the right node, and the bottom node gives these.
to
to
to
,
, and
.
Networks of other kinds, not just electrical ones, can also be analyzed in this way. For instance, networks of streets
are given in the exercises.
Exercises
Many of the systems for these problems are most easily solved on a computer.
Problem 1
Calculate the amperages in each part of each network.
1. This is a simple network.
Problem 2
In the first network that we analyzed, with the three resistors in series, we just added to get that they acted together
like a single resistor of
ohms. We can do a similar thing for parallel circuits. In the second circuit analyzed,
of
ohms.
1. What is the equivalent resistance if we change the
ohm resistor to ohms?
2. What is the equivalent resistance if the two are each ohms?
3. Find the formula for the equivalent resistance if the two resistors in parallel are
ohms and
ohms.
Problem 3
For the car dashboard example that opens this Topic, solve for these amperages (assume that all resistances are
ohms).
1. If the driver is stepping on the brakes, so the brake lights are on, and no other circuit is closed.
2. If the hi-beam headlights and the brake lights are on.
Problem 4
Show that, in this Wheatstone Bridge,
equals
, and
. At
is placed a
The hourly flow of cars into this network's entrances, and out of its exits can be observed.
(Note that to reach Jay a car must enter the network via some other road first, which is why there is no "into Jay"
entry in the table. Note also that over a long period of time, the total in must approximately equal the total out, which
is why both rows add to
cars.) Once inside the network, the traffic may flow in different ways, perhaps filling
Willow and leaving Jay mostly empty, or perhaps flowing in some other way. Kirchhoff's Laws give the limits on
that freedom.
1. Determine the restrictions on the flow inside this network of streets by setting up a variable for each block,
establishing the equations, and solving them. Notice that some streets are one-way only. (Hint: this will not yield
a unique solution, since traffic can flow through this network in various ways; you should get at least one free
variable.)
2. Suppose that some construction is proposed for Winooski Avenue East between Willow and Jay, so traffic on that
block will be reduced. What is the least amount of traffic flow that can be allowed on that block without
disrupting the hourly flow into and out of the network?
Solutions
and
(This code fragment is for illustration only, and is incomplete. For example, see the later topic on the Accuracy of
Gauss' Method. Nonetheless, this fragment will do for our purposes because analysis of finished versions, including
all the tests and sub-cases, is messier but gives essentially the same result.)
PIVINV=1./A(ROW,COL)
DO 10 I=ROW+1, N
DO 20 J=I, N
A(I,J)=A(I,J)-PIVINV*A(ROW,J)
20 CONTINUE
B(I)=B(I)-PIVINV*B(ROW)
10 CONTINUE
The outermost loop (not shown) runs through
arithmetic on the entries in A that are below and to the right of the pivot entry (and also on the entries in B, but to
simplify the analysis we will not count those operations---see Exercise ). We will assume the pivot is found in the
usual place, that is, that
(as above, analysis of the general case is messier but gives essentially the
same result). That means there are
. Thus we estimate the nested loops above will run something like
proportional to the square of the number of equations. Taking into account the outer loop that is not shown, we get
the estimate that the running time of the algorithm is proportional to the cube of the number of equations.
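One way to make that count concrete: when the pivot is in row $k$, the inner loops touch roughly $(n-k)^2$ entries, so over the whole reduction the arithmetic amounts to about
$$\sum_{k=1}^{n-1}(n-k)^2=(n-1)^2+(n-2)^2+\cdots+1^2=\frac{(n-1)\,n\,(2n-1)}{6}\approx\frac{n^3}{3}$$
operations, which grows like the cube of the number of equations.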
Algorithms that run in time directly proportional to the size of the data set are fast, algorithms that run in time
proportional to the square of the size of the data set are less fast, but typically quite usable, and algorithms that run in
time proportional to the cube of the size of the data set are still reasonable in speed.
Speed estimates like these are a good way of understanding how quickly or slowly an algorithm can be expected to
run on average. There are special cases, however, of systems on which the above Gauss' method code is especially
fast, so there may be factors about a problem that make it especially suitable for this kind of solution.
In practice, the code found in computer algebra systems, or in the standard packages, implements a variant on Gauss'
method, called triangular factorization. To state this method requires the language of matrix algebra, which we will
not see until Chapter Three. Nonetheless, the above code is conceptually quite close to that usually used in
applications.
There have been some theoretical speed-ups in the running time required to solve linear systems. Algorithms other
than Gauss' method have been invented that take a time proportional not to the cube of the size of the data set, but
instead to the (approximately)
power (this is still under active research, so this exponent may come down
somewhat over time). However, these theoretical improvements have not come into widespread use, in part because
the new methods take a quite large data set before they overtake Gauss' method (although they will outperform
Gauss' method on very large sets, there is some startup overhead that keeps them from being faster on the systems
that have, so far, been solved in practice).
Exercises
Problem 1
Computer systems allow the generation of random numbers (of course, these are only pseudo-random, in that they
are generated by some algorithm, but the sequence of numbers that is gotten passes a number of reasonable statistical
tests for apparent randomness).
1. Fill a
Repeat that experiment ten times. Are singular matrices frequent or rare (in this sense)?
2. Time the computer at solving ten
arrays of random numbers. Find the average time. (Notice that some
3.
4.
5.
6.
systems can be found to be singular quite quickly, for instance if the first row equals the second. In the light of the
first part, do you expect that singular systems play a large role in your average?)
Repeat the prior item for
arrays.
Repeat the prior item for
arrays.
Repeat the prior item for
arrays.
Graph the input size versus the average time.
Problem 2
What
array can you invent that takes your computer system the longest to reduce? The shortest?
Problem 3
Write the rest of the FORTRAN program to do a straightforward implementation of Gauss' method. Compare the
speed of your code to that used in a computer algebra system. Which is faster? (Most computer algebra systems will
apply some of the techniques of matrix algebra that we will have later, in Chapter Three.)
Problem 4
Extend the code fragment to handle the case where the B array has more than one column. That solves more than one
system at a time (all with the same matrix of coefficients A).
Problem 5
The FORTRAN language specification requires that arrays be stored "by column", that is, the entire first column is
stored contiguously, then the second column, etc. Does the code fragment given take advantage of this, or can it be
Chapter II
Linear Algebra/Vector Spaces
The first chapter began by introducing Gauss' method and finished with a fair understanding, keyed on the Linear
Combination Lemma, of how it finds the solution set of a linear system. Gauss' method systematically takes linear
combinations of the rows. With that insight, we now move to a general study of linear combinations.
We need a setting for this study. At times in the first chapter, we've combined vectors from
vectors from
to work in
, at other times
, and at other times vectors from even higher-dimensional spaces. Thus, our first impulse might be
, leaving
unspecified. This would have the advantage that any of the results would hold for
and for
and for many other spaces, simultaneously.
But, if having the results apply to many spaces at once is advantageous then sticking only to
's is overly
restrictive. We'd like the results to also apply to combinations of row vectors, as in the final section of the first
chapter. We've even seen some spaces that are not just a collection of all of the same-sized column vectors or row
vectors. For instance, we've seen a solution set of a homogeneous system that is a plane, inside of
. This solution
set is a closed system in the sense that a linear combination of these solutions is also a solution. But it is not just a
collection of all of the three-tall column vectors; only some of them are in this solution set.
We want the results about linear combinations to apply anywhere that linear combinations are sensible. We shall call
any such set a vector space. Our results, instead of being phrased as "Whenever we have a collection in which we
can sensibly take linear combinations ...", will be stated as "In any vector space ...".
Such a statement describes at once what happens in many spaces. The step up in abstraction from studying a single
space at a time to studying a class of spaces can be hard to make. To understand its advantages, consider this
analogy. Imagine that the government made laws one person at a time: "Leslie Jones can't jay walk." That would be a
bad idea; statements have the virtue of economy when they apply to many cases at once. Or, suppose that they ruled,
"Kim Ke must stop when passing the scene of an accident." Contrast that with, "Any doctor must stop when passing
the scene of an accident." More general statements, in some ways, are clearer.
Definition 1.1
A vector space (over $\mathbb{R}$) consists of a set $V$ along with two operations "+" and "$\cdot$" subject to these conditions.
1. For any $\vec{v},\vec{w}\in V$, their vector sum $\vec{v}+\vec{w}$ is an element of $V$.
2. For any $\vec{v},\vec{w}\in V$, $\vec{v}+\vec{w}=\vec{w}+\vec{v}$.
3. For any $\vec{u},\vec{v},\vec{w}\in V$, $(\vec{v}+\vec{w})+\vec{u}=\vec{v}+(\vec{w}+\vec{u})$.
4. There is a zero vector $\vec{0}\in V$ such that $\vec{v}+\vec{0}=\vec{v}$ for all $\vec{v}\in V$.
5. Each $\vec{v}\in V$ has an additive inverse $\vec{w}\in V$ such that $\vec{w}+\vec{v}=\vec{0}$.
6. For any scalar $r\in\mathbb{R}$ and $\vec{v}\in V$, the scalar multiple $r\cdot\vec{v}$ is in $V$.
7. For any $r,s\in\mathbb{R}$ and $\vec{v}\in V$, $(r+s)\cdot\vec{v}=r\cdot\vec{v}+s\cdot\vec{v}$.
8. For any $r\in\mathbb{R}$ and $\vec{v},\vec{w}\in V$, $r\cdot(\vec{v}+\vec{w})=r\cdot\vec{v}+r\cdot\vec{w}$.
9. For any $r,s\in\mathbb{R}$ and $\vec{v}\in V$, $(rs)\cdot\vec{v}=r\cdot(s\cdot\vec{v})$.
10. For any $\vec{v}\in V$, $1\cdot\vec{v}=\vec{v}$.
Remark 1.2
Because it involves two kinds of addition and two kinds of multiplication, that definition may seem confused. For
instance, in condition 7 "
", the first "+" is the real number addition operator while
the "+" to the right of the equals sign represents vector addition in the structure
ambiguous because, e.g.,
and
The best way to go through the examples below is to check all ten conditions in the definition. That check is written
out at length in the first example. Use it as a model for the others. Especially important are the first condition "
is in " and the sixth condition "
is in ". These are the closure conditions. They specify that the
addition and scalar multiplication operations are always sensible they are defined for every pair of vectors, and
every scalar and vector, and the result of the operation is a member of the set (see Example 1.4).
Example 1.3
The set
the result of
. For 2, that addition of vectors commutes, take all entries to
(the second equality follows from the fact that the components of the vectors are real numbers, and the addition of
real numbers is commutative). Condition 3, associativity of vector addition, is similar.
For the fourth condition we must produce a zero element the vector of zeroes is it.
we have
For 8, that scalar multiplication distributes from the left over vector addition, we have this.
The ninth
is a vector space with the usual operations of vector addition and scalar multiplication.
, we usually do not write the members as column vectors, i.e., we usually do not write "
".)
Example 1.4
This subset of
". Instead we
is a vector space if "+" and " " are interpreted in this way.
The addition and scalar multiplication operations here are just the ones of
inherits these operations from
illustrates that
sum of their three entries is zero and the result is a vector also in
(membership in
is also in
to
. We say that
means that
and
. To show that
(so that
satisfies that
the other conditions in the definition of a vector space are just as straightforward.
Example 1.5
Example 1.3 shows that the set of all two-tall vectors with real entries is a vector space. Example 1.4 gives a subset
of an
that is also a vector space. In contrast with those two, consider the set of two-tall columns with entries that
are integers (under the obvious operations). This is a subset of a vector space, but it is not itself a vector space. The
reason is that this set is not closed under scalar multiplication, that is, it does not satisfy condition 6. Here is a
column with integer entries, and a scalar, such that the outcome of the operation
is not a member of the set, since its entries are not all integers.
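For instance, one such column and scalar (any similar choice serves):
$$\frac{1}{2}\cdot\begin{pmatrix}1\\1\end{pmatrix}=\begin{pmatrix}1/2\\1/2\end{pmatrix}.$$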
Example 1.6
The singleton set
A vector space must have at least one element, its zero vector. Thus a one-element vector space is the smallest one
possible.
Definition 1.7
A one-element vector space is a trivial space.
Warning!
The examples so far involve sets of column vectors with the usual operations. But vector spaces need not be
collections of column vectors, or even of row vectors. Below are some other types of vector spaces. The term "vector
space" does not mean "collection of columns of reals". It means something more like "collection in which any linear
combination is sensible".
Examples
Example 1.8
Consider
(in this book, we'll take constant polynomials, including the zero polynomial, to be of degree zero). It is a vector
space under the operations
and
(the verification is easy). This vector space is worthy of attention because these are the polynomial operations
familiar from high school algebra.
Although this space is not a subset of any
as "the same" as
Things we are thinking of as "the same" add to "the same" sum. Chapter Three makes precise this idea of vector
space correspondence. For now we shall just leave it as an intuition.
Example 1.9
The set
of
matrices with real number entries is a vector space under the natural entry-by-entry
operations.
Example 1.10
The set
of all real-valued functions of one natural number variable is a vector space under the
operations of pointwise addition and scalar multiplication.
We can view this space as a generalization of Example 1.3: instead of two-tall vectors, these functions are like
infinitely-tall vectors.
Addition and scalar multiplication are component-wise, as in Example 1.3. (We can formalize "infinitely-tall" by
saying that it means an infinite sequence, or that it means a function from
to
.)
Example 1.11
The set of polynomials with real coefficients
"
of Example 1.8. This space contains not just degree three polynomials, but
degree thirty polynomials and degree three hundred polynomials, too. Each individual polynomial of course is of a
finite degree, but the set has no single bound on the degree of all of its members.
This example, like the prior one, can be thought of in terms of infinite-tuples. For instance, we can think of
as corresponding to
. However, don't confuse this space with the one from
Example 1.10. Each member of this set has a bounded degree, so under our correspondence there are no elements
from this space matching
. The vectors in this space correspond to infinite-tuples that end in
zeroes.
Example 1.12
The set
of all real-valued functions of one real variable is a vector space under these.
The difference between this and Example 1.10 is the domain of the functions.
Example 1.13
The set
is a vector space
inherited from the space in the prior example. (We can think of
corresponds to the vector with components
and
as "the same" as
in that
.)
Example 1.14
The set
and
of basic Calculus. This turns out to equal the space from the prior example functions satisfying this differential
equation have the form
but this description suggests an extension to solutions sets of other
differential equations.
Example 1.15
The set of solutions of a homogeneous linear system in $n$ variables is a vector space under the operations inherited
from $\mathbb{R}^n$. For closure under addition, if $\vec{v}$ and $\vec{w}$ are solutions then so is their sum $\vec{v}+\vec{w}$, as sketched below.
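A sketch of that check for a single equation (the general case repeats it for every equation of the system): if
$a_1v_1+\cdots+a_nv_n=0$ and $a_1w_1+\cdots+a_nw_n=0$ then
$$a_1(v_1+w_1)+\cdots+a_n(v_n+w_n)=(a_1v_1+\cdots+a_nv_n)+(a_1w_1+\cdots+a_nw_n)=0+0=0,$$
so the sum of two solutions is again a solution. Closure under scalar multiplication is similar.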
One answer is that this is just a definition it gives the rules of the game from here on, and if you don't like it, put
the book down and walk away.
Another answer is perhaps more satisfying. People in this area have worked hard to develop the right balance of
power and generality. This definition has been shaped so that it contains the conditions needed to prove all of the
interesting and important properties of spaces of linear combinations. As we proceed, we shall derive all of the
properties natural to collections of linear combinations from the conditions given in the definition.
The next result is an example. We do not need to include these properties in the definition of vector space because
they follow from the properties already listed there.
Lemma 1.17
In any vector space V, for any vector v ∈ V and scalar r ∈ R, we have
1. 0 · v = 0 (the zero vector), and
2. (-1 · v) + v = 0, and
3. r · 0 = 0.
Proof
For 1, note that v = (1 + 0) · v = v + (0 · v). Add to both sides the additive inverse of v, that is, the vector w such that w + v = 0; this leaves 0 = 0 · v. For 2, note that v + (-1 · v) = (1 · v) + (-1 · v) = (1 + (-1)) · v = 0 · v, which by the first item is the zero vector. For 3, writing the zero vector as 0 · v (using the first item), this r · 0 = r · (0 · v) = (r · 0) · v = 0 · v = 0 will do.
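Written out as a chain of equalities, a sketch of the first item using only the conditions in the definition (the multiplicative identity, distributivity over scalar addition, and the existence of additive inverses):

\[
\vec{v} \;=\; 1 \cdot \vec{v} \;=\; (1 + 0) \cdot \vec{v} \;=\; 1 \cdot \vec{v} + 0 \cdot \vec{v} \;=\; \vec{v} + 0 \cdot \vec{v}
\]

and adding the additive inverse of \(\vec{v}\) to both sides gives \(\vec{0} = 0 \cdot \vec{v}\).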
Summary
We finish with a recap.
Our study in Chapter One of Gaussian reduction led us to consider collections of linear combinations. So in this
chapter we have defined a vector space to be a structure in which we can form such combinations, expressions of the
form c_1 · v_1 + c_2 · v_2 + ... + c_n · v_n (subject to simple conditions on the addition and scalar multiplication operations). In
a phrase: vector spaces are the right context in which to study linearity.
Finally, a comment. From the fact that it forms a whole chapter, and especially because that chapter is the first one, a
reader could come to think that the study of linear systems is our purpose. The truth is, we will not so much use
vector spaces in the study of linear systems as we will instead have linear systems start us on the study of vector
spaces. The wide variety of examples from this subsection shows that the study of vector spaces is interesting and
important in its own right, aside from how it helps us understand linear systems. Linear systems won't go away. But
from now on our primary objects of study will be vector spaces.
Exercises
Problem 1
Name the zero vector for each of these vector spaces.
1. The space of degree three polynomials under the natural operations
2. The space of
matrices
3. The space
4. The space of real-valued functions of one natural number variable
This exercise is recommended for all readers.
Problem 2
Find the additive inverse, in the vector space, of the vector.
1. In
, the vector
2. In the space
,
3. In
vector
.
This exercise is recommended for all readers.
Problem 3
Show that each of these is a vector space.
1. The set of linear polynomials
multiplication operations.
2. The set of
matrices with real entries under the usual matrix operations.
3. The set of three-component row vectors with their usual operations.
4. The set
, this set
, this set
Problem 5
Define addition and scalar multiplication operations to make the complex numbers a vector space over
Problem 7
Show that the set of linear combinations of the variables
Problem 9
Prove or disprove that
1.
2.
This exercise is recommended for all readers.
Problem 10
For each, decide if it is a vector space; the intended operations are the natural ones.
1. The diagonal
2. This set of
matrices
matrices
3. This set
(so that
is
), and "
-th power of
Problem 13
Is
1.
2.
and
and
Problem 14
Prove or disprove that this is a vector space: the set of polynomials of degree greater than or equal to two, along with
the zero polynomial.
Problem 15
At this point "the same" is only an intuition, but nonetheless for each vector space identify the
is "the same" as
1. The
matrices under the usual operations
2. The
matrices (under their usual operations)
3. This set of
matrices
4. This set of
matrices
then
implies that
Problem 18
The definition of vector spaces does not explicitly say that
). Show
Problem 21
1. Prove that every point, line, or plane thru the origin in
2. What if it doesn't contain the origin?
Prove that
if and only if
.
Prove that
if and only if
.
Prove that any nontrivial vector space is infinite.
Use the fact that a nonempty solution set of a homogeneous linear system is a vector space to draw the
conclusion.
Problem 23
Is this a vector space under the natural operations: the real-valued functions of one real variable that are
differentiable?
Problem 24
A vector space over the complex numbers
scalars are drawn from
instead of from
has the same definition as a vector space over the reals except that
. Show that each of these is a vector space over the complex numbers.
and
.)
Problem 25
Name a property shared by all of the
and
is
Solutions
is a subspace of R^3. As specified in the definition, the operations are the ones that are inherited from the larger space; that is, vectors in this plane add as they add in R^3, and likewise for scalar multiplication. To show that it is a subspace the check is routine. For instance, for closure under addition, just note that if the summands each satisfy that their three components add to zero, then the sum also satisfies that its three components add to zero.
Example 2.3
The x-axis in R^2 is a subspace where the addition and scalar multiplication operations are the inherited ones.
As above, to verify that this is a subspace, we simply note that it is a subset and then check that it satisfies the
conditions in definition of a vector space. For instance, the two closure conditions are satisfied: (1) adding two
vectors with a second component of zero results in a vector with a second component of zero, and (2) multiplying a
scalar times a vector with a second component of zero results in a vector with a second component of zero.
Example 2.4
Another subspace of R^2 is its trivial subspace, the one consisting of just the zero vector. At the opposite extreme, any vector space has itself for a subspace.
These two are the improper subspaces. Other subspaces are proper.
Example 2.5
The condition in the definition requiring that the addition and scalar multiplication operations must be the ones inherited from the larger space is important. Consider a one-element subset of a vector space. Under suitably redefined addition and scalar multiplication operations that set is a vector space, specifically, a trivial space. But it is not a subspace of the larger space, because the operations are not the inherited ones.
Example 2.6
The vector space of polynomials has a subspace comprised of all linear polynomials.
Example 2.7
Another example of a subspace not taken from an
Example 2.8
Being vector spaces themselves, subspaces must satisfy the closure conditions. The set of positive reals is not a subspace of the vector space R^1 because with the inherited operations it is not closed under scalar multiplication: if v = 1 then -1 · v is not a positive real.
The next result says that Example 2.8 is prototypical. The only way that a subset can fail to be a subspace (if it is
nonempty and the inherited operations are used) is if it isn't closed.
Lemma 2.9
For a nonempty subset S of a vector space, under the inherited operations, the following are equivalent statements.[1]
1. S is a subspace of that vector space
2. S is closed under linear combinations of pairs of vectors: for any vectors s_1, s_2 ∈ S and scalars r_1, r_2 the vector r_1 s_1 + r_2 s_2 is in S
3. S is closed under linear combinations of any number of vectors: for any vectors s_1, ..., s_n ∈ S and scalars r_1, ..., r_n the vector r_1 s_1 + ... + r_n s_n is in S.
Briefly, the way that a subset gets to be a subspace is by being closed under linear combinations.
Proof
"The following are equivalent" means that each pair of statements are equivalent.
and
. This strategy is
are easy and so we need only argue the single
and that
is closed under
The first item in the vector space definition has five conditions. First, for closure under addition, if
, as
, the sum
in
a vector space, its addition is commutative), and that in turn equals the sum
then
in
(because
is
in
third condition is similar to that for the second. For the fourth, consider the zero vector of
Remark 2.10
At the start of this chapter we introduced vector spaces as collections in which linear combinations are "sensible".
The above result speaks to this.
The vector space definition has ten conditions but eight of them the conditions not about closure simply ensure
that referring to the operations as an "addition" and a "scalar multiplication" is sensible. The proof above checks that
these eight are inherited from the surrounding vector space provided that the nonempty set satisfies Lemma 2.9's
statement (2) (e.g., commutativity of addition in
). So, in this
's such expressions are "sensible" in that the vector described is defined and is in the
This second meaning suggests that a good way to think of a vector space is as a collection of unrestricted linear
combinations. The next two examples take some spaces and describe them in this way. That is, in these examples we
parametrize, just as we did in Chapter One to describe the solution set of a homogeneous linear system.
Example 2.11
This subset of
is a subspace under the usual addition and scalar multiplication operations of column vectors (the check that it is
nonempty and closed under linear combinations of two vectors is just like the one in Example 2.2). To parametrize,
we can take the defining condition to be a one-equation linear system and express the leading variable in terms of the free variables.
Now the subspace is described as the collection of unrestricted linear combinations of those two vectors. Of course,
in either description, this is a plane through the origin.
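As a concrete sketch of this kind of parametrization (the particular equation here is chosen only for illustration and need not be the one in Example 2.11), consider the plane through the origin in \(\mathbb{R}^3\) given by \(x + y + z = 0\). Expressing the leading variable in terms of the free variables, \(x = -y - z\), gives

\[
\{\, \begin{pmatrix} x \\ y \\ z \end{pmatrix} \;\big|\; x + y + z = 0 \,\}
= \{\, y \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + z \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} \;\big|\; y, z \in \mathbb{R} \,\}
\]

so this subspace is the collection of unrestricted linear combinations of those two vectors.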
Example 2.12
This is a subspace of the vector space of matrices
(checking that it is nonempty and closed under linear combinations is easy). To parametrize, express the condition as
.
As above, we've described the subspace as a collection of unrestricted linear combinations (by coincidence, also of
two elements).
Parametrization is an easy technique, but it is important. We shall use it often.
Definition 2.13
The span (or linear closure) of a nonempty subset S of a vector space is the set of all linear combinations of vectors from S: [S] = { c_1 s_1 + ... + c_n s_n | c_1, ..., c_n are scalars and s_1, ..., s_n ∈ S }.
The span of the empty subset of a vector space is the trivial subspace.
No notation for the span is completely standard. The square brackets used here are common, but other notations, such as writing "span" or "sp" in front of the set, are also in use.
Remark 2.14
In Chapter One, after we showed that the solution set of a homogeneous linear system can be written as
, we described that as the set "generated" by the 's. We now have the
technical term; we call that the "span" of the set
Recall also the discussion of the "tricky point" in that proof. The span of the empty set is defined to be the set containing just the zero vector because we follow the convention that a linear combination of no vectors sums to the zero vector. Defining the empty set's span to be the trivial subspace is a convenience in that it keeps results like the next one from having annoying exceptional cases.
Lemma 2.15
In a vector space, the span of any subset is a subspace.
Proof
Call the subset S. If S is empty then by definition its span is the trivial subspace, which is a subspace. If S is nonempty then by Lemma 2.9 we need only check that the span [S] is closed under linear combinations. Take two elements of [S]; each is a linear combination of vectors from S, and so, after multiplying out and regrouping the coefficients of the s's forming them, a linear combination of the two is again a linear combination of vectors from S and so is in [S].
Example 2.16
In any vector space V, for any vector v, the set { r · v | r is a scalar } is a subspace of V. This is true even when v is the zero vector, in which case the subspace is the degenerate line, the trivial subspace.
Example 2.17
The set given in this example is likewise a subspace of the space containing it.
and
and
Gauss' method
and
instance, for
and
the coefficients
and
and
of
are
there
and
such
that
, and
satisfying these.
, and
.
This shows, incidentally, that the set
also spans this subspace. A space can have more than one spanning
and
(Naturally, we usually prefer to work with spanning sets that have only a few members.)
Example 2.19
These are the subspaces of
that we now know of, the trivial subspace, the lines through the origin, the planes
through the origin, and the whole space (of course, the picture shows only a few of the infinitely many subspaces). In
the next section we will prove that
has no other type of subspaces, so in fact this picture shows them all.
The subsets are described as spans of sets, using a minimal number of members, and are shown connected to their
supersets. Note that these subspaces fall naturally into levels (planes on one level, lines on another, etc.) according to how many vectors are in a minimal-sized spanning set.
So far in this chapter we have seen that to study the properties of linear combinations, the right setting is a collection
that is closed under these combinations. In the first subsection we introduced such collections, vector spaces, and we
saw a great variety of examples. In this subsection we saw still more spaces, ones that happen to be subspaces of
others. In all of the variety we've seen a commonality. Example 2.19 above brings it out: vector spaces and
subspaces are best understood as a span, and especially as a span of a small number of vectors. The next section
studies spanning sets that are minimal.
Exercises
This exercise is recommended for all readers.
Problem 1
Which of these subsets of the vector space of
one that is a subspace, parametrize its description. For each that is not, give a condition that fails.
1.
2.
3.
4.
This exercise is recommended for all readers.
Problem 2
Is this a subspace of
1.
2.
, in
,
3.
, in
,
, in
Problem 4
Which of these are members of the span
variable?
1.
2.
3.
4.
This exercise is recommended for all readers.
Problem 5
Which of these sets spans
? That is, which of these sets has the property that any three-tall vector can be
2.
3.
4.
5.
This exercise is recommended for all readers.
Problem 6
Parametrize each subspace's description. Then express each subspace as a span.
1. The subset
2. This subset of
3. This subset of
4. The subset
5. The subset of
of quadratic polynomials
of
such that
-plane in
2.
in
3.
in
4.
5. The set
6.
in
in the space
in
Problem 8
Parametrize it with
to get
to get
Problem 9
Is
a subspace of
and
.
Problem 11
Example 2.16 says that for any vector
subspace of
, the set
is a
Problem 13
Example 2.19 shows that
has infinitely many subspaces. Does every nontrivial space have infinitely many
subspaces?
Problem 14
Finish the proof of Lemma 2.9.
Problem 15
Show that each vector space has only one trivial subspace.
This exercise is recommended for all readers.
Problem 16
Show that for any subset
. Members of
. (Hint. Members of
.)
All of the subspaces that we've seen use zero in their description in some way. For example, the subspace in
Example 2.3 consists of all the vectors from
with a second component of zero. In contrast, the collection of
vectors from
with a second component of one does not form a subspace (it is not closed under scalar
multiplication). Another example is Example 2.2, where the condition on the vectors is that the three components
add to zero. If the condition were that the three components add to one then it would not be a subspace (again, it
would fail to be closed). This exercise shows that a reliance on zero is not strictly necessary. Consider the set
What is the difference between this sum of three vectors and the sum of the first two of these three?
What is the difference between the prior sum and the sum of just the first one vector?
What should be the difference between the prior sum of one vector and the sum of no vectors?
So what should be the definition of the sum of no vectors?
Problem 19
Is a space determined by its subspaces? That is, if two vector spaces have the same subspaces, must the two be
equal?
Problem 20
1. Give a set that is closed under scalar multiplication but not addition.
in
is a subspace of
in
and
is a subset of
Problem 26
Is the relation "is a subspace of" transitive? That is, if
be a subspace of
is a subspace of
and
is a subspace of
, must
If
are subsets of a vector space, is
? Always? Sometimes? Never?
If
are subsets of a vector space, is
?
If
are subsets of a vector space, is
?
Is the span of the complement equal to the complement of the span?
Problem 28
Reprove Lemma 2.15 without doing the empty set separately.
Problem 29
Find a structure that is closed under linear combinations, and yet is not a vector space. (Remark. This is a bit of a
trick question.)
Solutions
References
[1] More information on equivalence of statements is in the appendix.
. The prior section also showed that a space can have many sets that span it. The space of linear
In this section we will use the second sense of "minimal spanning set" because of this technical convenience.
However, the most important result of this book is that the two senses coincide; we will prove that in the section
after this one.
for any
Proof
The left to right implication is easy. If
gives that
then, since
to show that
, write an element of
and substitute
, where
the spans
and
is in the span
The lemma says that if we have a spanning set then we can remove a
and only if
.
to get a new set
set is minimal if and only if it contains no vectors that are linear combinations of the others in that set. We have a
term for this important property.
Definition 1.3
A subset of a vector space is linearly independent if none of its elements is a linear combination of the others.
Otherwise it is linearly dependent.
Here is an important observation:
although this way of writing one vector as a combination of the others visually sets
algebraically there is nothing special in that equation about
can rewrite the relationship to set off
. For any
with a coefficient
that is nonzero, we
When we don't want to single out any vector by writing it alone on one side of the equation we will instead say that
are in a linear relationship and write the relationship with all of the vectors on the same side. The
next result rephrases the linear independence definition in this style. It gives what is usually the easiest way to
compute whether a finite set is dependent or independent.
Lemma 1.4
A subset S of a vector space is linearly independent if and only if, for any distinct vectors s_1, ..., s_n ∈ S, the only linear relationship among those vectors, c_1 s_1 + ... + c_n s_n = 0 with each c_i a scalar, is the trivial one: c_1 = 0, ..., c_n = 0.
Proof
This is a direct consequence of the observation above.
If the set
from
is not linearly
is a linear combination
, and subtracting
of
in front of
is linearly
and
are zero. So the only linear relationship between the two given row vectors is the trivial
with
and
Remark 1.6
Recall the Statics example that began this book. We first set the unknown-mass objects at
got a balance, and then we set the objects at
cm and
cm and
cm and
information we could compute values of the unknown masses. Had we instead first set the unknown-mass objects at
cm and
cm, and then at
cm and
cm, we would not have been able to compute the values of the
unknown masses (try it). Intuitively, the problem is that the
information that is,
is linearly independent in
because
gives
since polynomials are equal only if their coefficients are equal. Thus, the only linear relationship between these two
members of
is the trivial one.
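As a sketch of the typical computation with Lemma 1.4 (the vectors are chosen here only for illustration), to decide whether this subset of \(\mathbb{R}^3\) is linearly independent

\[
\{\, \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix} \,\}
\]

set \(c_1 \vec{v}_1 + c_2 \vec{v}_2 + c_3 \vec{v}_3 = \vec{0}\) and apply Gauss' method to the resulting system

\[
\begin{array}{rcrcrcl}
c_1 & & & + & c_3 &=& 0 \\
 & & c_2 & + & 2c_3 &=& 0 \\
2c_1 & + & c_2 & + & 4c_3 &=& 0
\end{array}
\]

Reduction shows that \(c_3\) is a free variable, so there are nontrivial solutions (for example \(c_1 = -1\), \(c_2 = -2\), \(c_3 = 1\)), and therefore the set is linearly dependent.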
Example 1.8
, where
the set
where not all of the scalars are zero (the fact that some of the scalars are zero doesn't matter).
Remark 1.9
That example illustrates why, although Definition 1.3 is a clearer statement of what independence is, Lemma 1.4 is
more useful for computations. Working straight from the definition, someone trying to compute whether
is
linearly independent would start by setting
and
But knowing that the first vector is not dependent on the other two is not enough. This person would have to go on to
try
to find the dependence
,
. Lemma 1.4 gets the same conclusion with
only one computation.
Example 1.10
The empty subset of a vector space is linearly independent. There is no nontrivial linear relationship among its
members as it has no members.
Example 1.11
In any vector space, any subset containing the zero vector is linearly dependent. For example, in the space
quadratic polynomials, consider the subset
of
One way to see that this subset is linearly dependent is to use Lemma 1.4: we have
and this is a nontrivial relationship as not all of the coefficients are zero. Another way to see that this subset is
linearly dependent is to go straight to Definition 1.3: we can express the third member of the subset as a linear
combination of the first two, namely,
is satisfied by taking
and
(in contrast to
the lemma, the definition allows all of the coefficients to be zero).
(There is still another way to see that this subset is dependent that is subtler. The zero vector is equal to the trivial
sum, that is, it is the sum of no vectors. So in a set containing the zero vector, there is an element that can be written
as a combination of a collection of other vectors from the set, specifically, the zero vector can be written as a
combination of the empty collection.)
The above examples, especially Example 1.5, underline the discussion that begins this section. The next result says
that given a finite set, we can produce a linearly independent subset by discarding what Remark 1.6 calls "repeats".
Theorem 1.12
In a vector space, any finite subset has a linearly independent subset with the same span.
Proof
If the set S is linearly independent then S itself is the desired linearly independent subset with the same span, so suppose that it is linearly dependent.
By the definition of dependence, there is a vector in S that is a linear combination of the others. Discard it: define the set S_1 consisting of the remaining vectors. By Lemma 1.1 the span does not shrink, [S_1] = [S].
Now, if S_1 is linearly independent then we are finished. Otherwise iterate the prior paragraph: take a vector of S_1 that is a linear combination of the other members and discard it, obtaining a set S_2 such that [S_2] = [S_1]. Because the starting set is finite this process must stop, and it can only stop at a linearly independent set (at worst, at the empty set, which is linearly independent); that set has the same span as the original. (Problem 20 asks for the induction that makes this argument formal.)
Example 1.13
This set spans
gives a three equations/five unknowns linear system whose solution set can be parametrized in this way.
So
and
first two. Thus, Lemma 1.1 says that discarding the fifth vector
get
is linearly
independent (this is easily checked), and so discarding any of its elements will shrink the span.
Example 1.15
In each of these three paragraphs the subset
is linearly independent.
the span
is the
dependent:
independent:
the span
is the
\qquad
If
then
. The reason is that for any vector that we would add to make a
has a solution
, and
So, in general, a linearly independent set may have a superset that is dependent. And, in general, a linearly
independent set may have a superset that is independent. We can characterize when the superset is one and when it is
the other.
Lemma 1.16
Where
for any
Proof
with
then
where each
and so
and
.
The other implication requires the assumption that
linearly dependent,
and independence of
then
equation as
shows that
.
(Compare this result with Lemma 1.1. Both say, roughly, that is a "repeat" if it is in the span of
. However,
Consider
is a linear combination
then
to a superset
independent
dependent
must be independent
may be either
may be either
must be dependent
In developing this table we've uncovered an intimate relationship between linear independence and span.
Complementing the fact that a spanning set is minimal if and only if it is linearly independent, a linearly independent
set is maximal if and only if it spans the space.
In summary, we have introduced the definition of linear independence to formalize the idea of the minimality of a
spanning set. We have developed some properties of this idea. The most important is Lemma 1.16, which tells us
that a linearly independent set is maximal when it spans the space.
Exercises
This exercise is recommended for all readers.
Problem 1
Decide whether each subset of
1.
2.
3.
4.
This exercise is recommended for all readers.
Problem 2
Which of these subsets of
1.
2.
3.
4.
This exercise is recommended for all readers.
Problem 3
Prove that each set
1.
2.
3.
to
and
and
and
This exercise is recommended for all readers.
Problem 4
Which of these subsets of the space of real-valued functions of one real variable is linearly dependent and which is
linearly independent? (Note that we have abbreviated some constant functions; e.g., in the first item, the " " stands
for the constant function
.)
1.
2.
3.
4.
5.
6.
Problem 5
Does the equation
is a linearly dependent subset of the set of all real-valued functions with domain the interval
of real
and
Problem 6
Why does Lemma 1.4 say "distinct"?
This exercise is recommended for all readers.
Problem 7
Show that the nonzero rows of an echelon form matrix form a linearly independent set.
This exercise is recommended for all readers.
Problem 8
1. Show that if the set
2. What is the relationship between the linear independence or dependence of the set
independence or dependence of
Problem 9
and the
Problem 11
Show that if
,
,
Problem 12
is in the span of
by finding
and
is unique.
. Prove that if
is in
, so that
unique (that is, unique up to reordering and adding or taking away terms of the form
adding to
). Thus
is
as a
Prove that a polynomial gives rise to the zero function if and only if it is the zero polynomial. (Comment. This
question is not a Linear Algebra matter, but we often use the result. A polynomial gives rise to a function in the
obvious way:
.)
Problem 14
Return to Section 1.2 and redefine point, line, plane, and other linear surfaces to avoid degenerate cases.
Problem 15
1. Show that any set of four vectors in
is linearly dependent.
2. Is this true for any set of five? Any set of three?
3. What is the most number of elements that a linearly independent subset of
can have?
Problem 17
Must every linearly dependent set have a subset that is dependent and a subset that is independent?
Problem 18
In
, what is the biggest linearly independent set you can find? The smallest? The biggest linearly dependent set?
The smallest? ("Biggest" and "smallest" mean that there are no supersets or subsets with the same property.)
This exercise is recommended for all readers.
Problem 19
Linear independence and linear dependence are properties of sets. We can thus naturally ask how those properties act
with respect to the familiar elementary set relations and operations. In this body of this subsection we have covered
the subset and superset relations. We can also consider the operations of intersection, complementation, and union.
1. How does linear independence relate to intersection: can an intersection of linearly independent sets be
independent? Must it be?
2. How does linear independence relate to complementation?
3. Show that the union of two linearly independent sets need not be linearly independent.
4. Characterize when the union of two linearly independent sets is linearly independent, in terms of the intersection
of the span of each.
This exercise is recommended for all readers.
Problem 20
For Theorem 1.12,
1. fill in the induction for the proof;
2. give an alternate proof that starts with the empty set and builds a sequence of linearly independent subsets of the
given finite set until one appears with the same span as the given set.
Problem 21
With a little calculation we can get formulas to determine whether or not a set of vectors is linearly independent.
1. Show that this subset of
linearly independent?
4. This is an opinion question: for a set of four vectors from
entries that determines independence of the set? (You needn't produce such a formula, just decide if one exists.)
This exercise is recommended for all readers.
Problem 22
1. Prove that a set of two perpendicular nonzero vectors from
2. What if
?
?
3. Generalize to more than two vectors.
Problem 23
Consider the set of functions from the open interval
to
1. Show that this set is a vector space under the usual operations.
2. Recall the formula for the sum of an infinite geometric series:
for all
is a subspace of
. Is that "only if"?
, if a subset
of
is linearly independent in
then
is also linearly
Linear Algebra/Basis
Definition 1.1
A basis for a vector space is a sequence of vectors that form a set that is linearly independent and that spans the
space.
We denote a basis with angle brackets, as in ⟨β_1, β_2, ...⟩, to signify that this collection is a sequence[1]: the order of the elements is significant. (The requirement that a basis be ordered will be needed, for instance, in Definition 1.13.)
Example 1.2
This is a basis for
It is linearly independent
and it spans
Example 1.3
This basis for
differs from the prior one because the vectors are in a different order. The verification that it is a basis is just as in the
prior example.
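As a sketch of both verifications (the two vectors here are chosen only for illustration), consider this candidate basis for \(\mathbb{R}^2\):

\[
B = \langle \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \end{pmatrix} \rangle
\]

For any \(\begin{pmatrix} x \\ y \end{pmatrix}\), solving \(c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix}\) gives the unique solution \(c_1 = (x + y)/2\) and \(c_2 = (x - y)/2\). Taking \(x = y = 0\) shows that the only combination totaling the zero vector is the trivial one, so the set is linearly independent, and the existence of a solution for every \(x, y\) shows that it spans the space.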
Example 1.4
The space
different in a discussion of
Example 1.6
, and
instead of
and
, and
instead of
and
Another basis is
's
Example 1.7
A natural basis for the vector space of cubic polynomials is ⟨1, x, x^2, x^3⟩. This space has other bases as well.
Example 1.9
The space of finite-degree polynomials has a basis with infinitely many elements, ⟨1, x, x^2, ...⟩.
Example 1.10
We have seen bases before. In the first chapter we described the solution set of homogeneous systems such as this
one
by parametrizing.
That is, we described the vector space of solutions as the span of a two-element set. We can easily check that this
two-vector set is also linearly independent. Thus the solution set is a subspace of
with a two-element basis.
Example 1.11
Parameterization helps find bases for other vector spaces, not just for solution sets of homogeneous systems. To find
a basis for this subspace of
The above work shows that it spans the space. To show that it is linearly independent is routine.
Consider again Example 1.2. It involves two verifications.
In the first, to check that the set is linearly independent we looked at linear combinations of the set's members that
total to the zero vector
that
must be
and
The second verification, that the set spans the space, looks at linear combinations that total to any member of the
space
. In Example 1.2 we noted only that the resulting calculation shows that such a
there is a
must be
and
That is, the first calculation is a special case of the second. The next result says that this holds in general for a
spanning set: the combination totaling to the zero vector is unique if and only if the combination totaling to any
vector is unique.
Theorem 1.12
In any vector space, a subset is a basis if and only if each vector in the space can be expressed as a linear
combination of elements of the subset in a unique way.
We consider combinations to be the same if they differ only in the order of summands or in the addition or deletion
of terms of the form "
".
Proof
By definition, a sequence is a basis if and only if its vectors form both a spanning set and a linearly independent set.
A subset is a spanning set if and only if each vector in the space is a linear combination of elements of that subset in
at least one way.
Thus, to finish we need only show that a subset is linearly independent if and only if every vector in the space is a
linear combination of elements from the subset in at most one way. Consider two expressions of a vector as a linear
combination of the members of the basis. We can rearrange the two sums, and if necessary add some
terms, so
that the two sums combine the same
and
. Now
holds if and only if
holds, and so asserting that each coefficient in the lower equation is zero is the same thing as asserting that
for each
Definition 1.13
In a vector space with basis B = ⟨β_1, ..., β_n⟩, the representation of a vector v with respect to B is the column vector of the coefficients used to express v as a linear combination of the basis vectors: where v = c_1 β_1 + c_2 β_2 + ... + c_n β_n, the representation of v with respect to B is the column with entries c_1, ..., c_n. The c's are the coordinates of v with respect to B.
We will later do representations in contexts that involve more than one basis. To help with the bookkeeping, we shall often attach a subscript B to the column vector.
Example 1.14
The representation of a vector with respect to one basis is a column of coordinates (note that the coordinates are scalars, not vectors). With respect to a different basis, the representation of the same vector is different.
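As a sketch of a representation computation (the vector and bases here are chosen only for illustration, and Rep_B(v) is written here for the representation of v with respect to B), in \(\mathbb{R}^2\) take \(\vec{v} = \begin{pmatrix} 3 \\ 1 \end{pmatrix}\). With respect to the standard basis \(E_2 = \langle \vec{e}_1, \vec{e}_2 \rangle\),

\[
\mathrm{Rep}_{E_2}(\vec{v}) = \begin{pmatrix} 3 \\ 1 \end{pmatrix}_{E_2}
\]

while with respect to \(B = \langle \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \end{pmatrix} \rangle\), solving \(c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 3 \\ 1 \end{pmatrix}\) gives \(c_1 = 2\) and \(c_2 = 1\), so

\[
\mathrm{Rep}_{B}(\vec{v}) = \begin{pmatrix} 2 \\ 1 \end{pmatrix}_{B}.
\]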
Remark 1.15
This use of column notation and the term "coordinates" has both a down side and an up side.
The down side is that representations look like vectors from R^n, and this can cause confusion when the space that we are working with is itself R^n, especially since we sometimes omit the subscript naming the basis. We must then infer the intent from the context: although the subscript may be omitted from a column, whether that column is meant as a vector or as a representation with respect to some basis must be clear from the surrounding discussion.
Our main use of representations will come in the third chapter. The definition appears here because the fact that
every vector is a linear combination of basis vectors in a unique way is a crucial property of bases, and also to help
make two points. First, we fix an order for the elements of a basis so that coordinates can be stated in that order.
Second, for calculation of coordinates, among other things, we shall restrict our attention to spaces with bases having
only finitely many elements. We will see that in the next subsection.
Exercises
This exercise is recommended for all readers.
Problem 1
Decide if each is a basis for
1.
2.
3.
4.
This exercise is recommended for all readers.
Problem 2
Represent the vector with respect to the basis.
1.
2.
3.
Problem 3
Find a basis for
, the space of all quadratic polynomials. Must any such basis contain a polynomial of each
, the space of
matrices.
matrices
Problem 7
Check Example 1.6.
This exercise is recommended for all readers.
Problem 8
Find the span of each set and then find a basis for that span.
1.
in
2.
in
This exercise is recommended for all readers.
Problem 9
Find a basis for each of these subspaces of the space
of cubic polynomials.
such that
and
,
,
, and~
,
, and~
Problem 10
We've seen that it is possible for a basis to remain a basis when it is reordered. Must it remain a basis?
Problem 11
Can a basis contain a zero vector?
This exercise is recommended for all readers.
Problem 12
Let
1. Show that
is a basis when
2. Prove that
is a basis where
Problem 13
Find one vector
1.
in
2.
in
3.
in
This exercise is recommended for all readers.
Problem 14
Where
each of the
Problem 15
A basis contains some of the vectors from a vector space; can it contain them all?
Problem 16
is
Theorem 1.12 shows that, with respect to a basis, every linear combination is unique. If a subset is not a basis, can
linear combinations be not unique? If so, must they be?
This exercise is recommended for all readers.
Problem 17
A square matrix is symmetric if for all indices
and
, entry
equals entry
Find a basis.
Solutions
Footnotes
[1] More information on sequences is in the appendix.
Linear Algebra/Dimension
In the prior subsection we defined the basis of a vector space, and we saw that a space can have many different
bases. For example, following the definition of a basis, we saw three different bases for
. So we cannot talk
about "the" basis for a vector space. True, some vector spaces have bases that strike us as more natural than others,
for instance,
's basis
or
's basis
or
's basis
. But, for example in the space
, no particular basis leaps out at us as the most natural one. We cannot, in
general, associate with a space any single basis that best describes that space.
We can, however, find something about the bases that is uniquely associated with the space. This subsection shows
that any two bases for a space have the same number of elements. So, with each space we can associate a number,
the number of vectors in any of its bases.
This brings us back to when we considered the two things that could be meant by the term "minimal spanning set".
At that point we defined "minimal" as linearly independent, but we noted that another reasonable interpretation of
the term is that a spanning set is "minimal" when it has the fewest number of elements of any set with the same span.
At the end of this subsection, after we have shown that all bases have the same number of elements, then we will
have shown that the two senses of "minimal" are equivalent.
Before we start, we first limit our attention to spaces where at least one basis has only finitely many members.
Definition 2.1
A vector space is finite-dimensional if it has a basis with only finitely many vectors.
(One reason for sticking to finite-dimensional spaces is so that the representation of a vector with respect to a basis is
a finitely-tall vector, and so can be easily written.) From now on we study only finite-dimensional vector spaces. We
shall take the term "vector space" to mean "finite-dimensional vector space". Other spaces are interesting and
important, but they lie outside of our scope.
To prove the main theorem we shall use a technical result.
Lemma 2.2 (Exchange Lemma)
Assume that B = ⟨β_1, ..., β_n⟩ is a basis for a vector space, and that for the vector v the relationship v = c_1 β_1 + c_2 β_2 + ... + c_n β_n has c_i ≠ 0. Then exchanging β_i for v yields another basis for the space.
Proof
Call the outcome of the exchange
We first show that
the members of
is zero. Because
other
. The basis
is assumed to be nonzero,
among
is linearly independent.
member
, is easy; any
can
be
written
, and hence is in
with
. For the
,
then
equation
.
of
members of
, substitute for
Now,
can
be
consider
rearranged
any
to
member
, and recognize (as in the first half of this argument) that the result is a linear combination of linear
combinations, of members of
, and hence is in
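As a concrete sketch of the exchange (the vectors are chosen here only for illustration), in \(\mathbb{R}^2\) start with the basis \(B = \langle \vec{e}_1, \vec{e}_2 \rangle\) and take \(\vec{v} = 3\vec{e}_1 + 2\vec{e}_2\). The coefficient of \(\vec{e}_2\) is nonzero, so exchanging \(\vec{e}_2\) for \(\vec{v}\) gives \(\hat{B} = \langle \vec{e}_1, \vec{v} \rangle\), which is again a basis: it is linearly independent since \(\vec{v}\) is not a multiple of \(\vec{e}_1\), and it spans the space since \(\vec{e}_2 = (\vec{v} - 3\vec{e}_1)/2\).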
Theorem 2.3
In any finite-dimensional vector space, all of the bases have the same number of elements.
Proof
Fix a vector space with at least one finite basis. Choose, from among all of this space's bases, one
of minimal size. We will show that any other basis
also has the same
number of members,
. Because
vectors.
The basis
is in the space, so
, resulting in a basis
. By
members of
and
members of
. We know that
new basis
with one more and one fewer than the previous basis
Repeat the inductive step until no 's remain, so that
contains
vectors because any
has at least
these
(for
. Exchange
for
to get a
.
. Now,
is linearly independent.
The dimension of a vector space is the number of vectors in any of its bases.
Example 2.5
Any basis for R^n has n vectors, since the standard basis has n vectors and all bases of a space have the same size. Thus this definition of dimension generalizes the most familiar use of the term: R^n is n-dimensional.
Example 2.6
The space of polynomials of degree n or less has dimension n + 1, since the basis ⟨1, x, ..., x^n⟩ has n + 1 elements.
Example 2.7
A trivial space is zero-dimensional since its basis is empty.
Again, although we sometimes say "finite-dimensional" as a reminder, in the rest of this book all vector spaces are
assumed to be finite-dimensional. An instance of this is that in the next result the word "space" should be taken to
mean "finite-dimensional vector space".
Corollary 2.8
No linearly independent set can have a size greater than the dimension of the enclosing space.
Proof
Inspection of the above proof shows that it never uses that the second basis spans the space, only that it is linearly independent.
Example 2.9
Recall the subspace diagram from the prior section showing the subspaces of
described with a minimal spanning set, for which we now have the term "basis". The whole space has a basis with
three members, the plane subspaces have bases with two members, the line subspaces have bases with one member,
and the trivial subspace has a basis with zero members. When we saw that diagram we could not show that these are
the only subspaces that this space has. We can show it now. The prior corollary proves that the only subspaces of
either three-, two-, one-, or zero-dimensional. Therefore, the diagram indicates all of the subspaces. There are no
subspaces somehow, say, between lines and planes.
Corollary 2.10
Any linearly independent set can be expanded to make a basis.
Proof
If a linearly independent set is not already a basis then it must not span the space. Adding to it a vector that is not in
the span preserves linear independence. Keep adding, until the resulting set does span the space, which the prior
corollary shows will happen after only a finite number of steps.
Corollary 2.11
Any spanning set can be shrunk to a basis.
Proof
Call the spanning set
. If
then it can be shrunk to the empty basis, thereby making it linearly independent, without changing its span.
Otherwise,
contains a vector
are done.
If not then there is a
with
such that
. If
; if
then we
We can repeat this process until the spans are equal, which must happen in at most finitely many steps.
Corollary 2.12
In an n-dimensional space, a set composed of n vectors is linearly independent if and only if it spans the space.
Proof
First we will show that a subset with n vectors is linearly independent if and only if it is a basis. "If" is trivially true; bases are linearly independent. "Only if" holds because a linearly independent set can be expanded to a basis, but a basis has n elements, so this expansion is actually the set that we began with.
To finish, we will show that any subset with n vectors spans the space if and only if it is a basis. Again, "if" is trivial. "Only if" holds because any spanning set can be shrunk to a basis, but a basis has n elements and so this shrunken set is just the one we started with.
The main result of this subsection, that all of the bases in a finite-dimensional vector space have the same number of
elements, is the single most important result in this book because, as Example 2.9 shows, it describes what vector
spaces and subspaces there can be. We will see more in the next chapter.
Remark 2.13
The case of infinite-dimensional vector spaces is somewhat controversial. The statement "any infinite-dimensional
vector space has a basis" is known to be equivalent to a statement called the Axiom of Choice (see (Blass 1984).)
Mathematicians differ philosophically on whether to accept or reject this statement as an axiom on which to base
mathematics (although, the great majority seem to accept it). Consequently the question about infinite-dimensional
vector spaces is still somewhat up in the air. (A discussion of the Axiom of Choice can be found in the Frequently
Asked Questions list for the Usenet group sci.math. Another accessible reference is (Rucker 1982).)
Exercises
Assume that all spaces are finite-dimensional unless otherwise stated.
This exercise is recommended for all readers.
Problem 1
Find a basis for, and the dimension of,
Problem 2
Find a basis for, and the dimension of, the solution set of this system.
matrices.
Problem 4
Find the dimension of the vector space of matrices
and
,
, and
such that
such that
such that
such that
and
,
,
, and
,
, and
Problem 6
What is the dimension of the span of the set
Problem 8
What is the dimension of the vector space
of
matrices?
under the
natural operations.
Problem 13
(See Problem 11.) What is the dimension of the vector space of functions
operations, where the domain
Problem 14
Show that any set of four vectors in
is linearly dependent.
Problem 15
Show that the set
is a basis if and only if there is no plane through the origin containing all
three vectors.
Problem 16
1. Prove that any subspace of a finite dimensional space has a basis.
2. Prove that any subspace of a finite dimensional space is finite dimensional.
Problem 17
Where is the finiteness of
and
then
is non-trivial. Generalize.
Problem 19
Because a basis for a space is a subset of that space, we are naturally led to how the property "is a basis" interacts
with set operations.
1. Consider first how bases might be related by "subset". Assume that
and that
for
for
and
such that
must
be a subset of
?
2. Is the intersection of bases a basis? For what space?
3. Is the union of bases a basis? For what space?
for
for
such that
? For any bases
and
.)
and
1. Prove that
.
2. Prove that equality of dimension holds if and only if
.
3. Show that the prior item does not hold if they are infinite-dimensional.
? Problem 21
For any vector
in
of the numbers
, ...,
(that is,
is a rearrangement of
and let
, ..., and
be the span of
References
Blass, A. (1984), "Existence of Bases Implies the Axiom of Choice", in Baumgartner, J. E., Axiomatic Set Theory, Providence RI: American Mathematical Society, pp. 31-33.
Rucker, Rudy (1982), Infinity and the Mind, Birkhäuser.
Gilbert, George T.; Krusemeyer, Mark; Larson, Loren C. (1993), The Wohascum County Problem Book, The Mathematical Association of America.
then
The linear dependence of the second on the first is obvious and so we can simplify this description to
.
Lemma 3.3
If the matrices A and B are related by a row operation (a row swap, multiplication of a row by a nonzero constant, or addition of a multiple of one row to another row) then their row spaces are equal. Hence, row-equivalent matrices have the same row space, and therefore the same row rank.
Each row of B is a linear combination of the rows of A and so lies in the row space of A; because a row space is closed under linear combinations, the row space of B is therefore contained in the row space of A. For the other containment, recall that row operations are reversible, so that A changes to B by a row operation if and only if B changes to A by a row operation. With that, the containment of the row space of A in the row space of B also follows from the prior paragraph, and so the two sets are equal.
Thus, row operations leave the row space unchanged. But of course, Gauss' method performs the row operations
systematically, with a specific goal in mind, echelon form.
Lemma 3.4
The nonzero rows of an echelon form matrix make up a linearly independent set.
Proof
A result in the first chapter, Lemma One.III.2.5, states that in an echelon form matrix, no nonzero row is a linear
combination of the other rows. This is a restatement of that result into new terminology.
Thus, in the language of this chapter, Gaussian reduction works by eliminating linear dependences among rows,
leaving the span unchanged, until no nontrivial linear relationships remain (among the nonzero rows). That is, Gauss'
for the row space. This is a basis for the row space of
both the starting and ending matrices, since the two row spaces are equal.
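As a sketch of the procedure (the matrix is chosen here only for illustration), to get a basis for the row space of

\[
A = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 3 \\ 1 & 2 & 2 \end{pmatrix}
\]

apply Gauss' method: subtract twice the first row from the second, subtract the first row from the third, and then subtract the new second row from the new third, to reach the echelon form

\[
\begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}
\]

so \(\langle (1\ \ 2\ \ 1), (0\ \ 0\ \ 1) \rangle\) is a basis for the row space of both matrices.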
Using this technique, we can also find bases for spans not directly involving row vectors.
Definition 3.6
The column space of a matrix is the span of the set of its columns. The column rank is the dimension of the column
space, the number of linearly independent columns.
Our interest in column spaces stems from our study of linear systems: a system has a solution if and only if the vector of constants can be written as a linear combination of the columns of the matrix of coefficients, that is, if and only if the vector of constants is in the column space.
Example 3.7
Given this matrix,
to get a basis for the column space, temporarily turn the columns into rows and reduce.
The result is a basis for the column space of the given matrix.
Definition 3.8
The transpose of a matrix is the result of interchanging the rows and columns of that matrix. That is, column j of the matrix A is row j of the transpose of A, and vice versa.
So the instructions for the prior example are "transpose, reduce, and transpose back".
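A sketch of those instructions on the illustrative matrix \(A\) from the prior sketch: transpose, reduce (subtract twice the first row from the second, subtract the first row from the third, then swap the second and third rows), and transpose back.

\[
A^{\mathsf{T}} = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 3 & 2 \end{pmatrix}
\longrightarrow
\begin{pmatrix} 1 & 2 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}
\]

so the column space of \(A\) has the basis \(\langle \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} \rangle\).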
We can even, at the price of tolerating the as-yet-vague idea of vector spaces being "the same", use Gauss' method to
find bases for spans in other types of vector spaces.
Example 3.9
To get a basis for the span of a set of polynomials, think of each polynomial in the space as "the same" as the row vector of its coefficients, put those row vectors into a matrix, and reduce; the nonzero rows of the result, read back as polynomials, form a basis for the span.
The column space of the left-hand matrix contains vectors with a second component that is nonzero. But the column
space of the right-hand matrix is different because it contains only vectors whose second component is zero. It is this
knowledge, that row operations can change the column space, that makes the next result surprising.
Lemma 3.10
Row operations do not change the column rank.
Proof
Restated, if the matrix A reduces to the matrix B then the column rank of B equals the column rank of A.
We will be done if we can show that row operations do not affect linear relationships among columns (e.g., if the
fifth column is twice the second plus the fourth before a row operation then that relationship still holds afterwards),
because the column rank is just the size of the largest set of unrelated columns. But this is exactly the first theorem of
this book: a relationship among the columns is a solution of the associated homogeneous linear system, and row operations do not change the set of solutions.
Another way, besides the prior result, to state that Gauss' method has something to say about the column space as
well as about the row space is to consider again Gauss-Jordan reduction. Recall that it ends with the reduced echelon
form of a matrix, as here.
Consider the row space and the column space of this result. Our first point made above says that a basis for the row
space is easy to get: simply collect together all of the rows with leading entries. However, because this is a reduced
echelon form matrix, a basis for the column space is just as easy: take the columns containing the leading entries,
that is,
. (Linear independence is obvious. The other columns are in the span of this set, since they all have
a third component of zero.) Thus, for a reduced echelon form matrix, bases for the row and column spaces can be
found in essentially the same way by taking the parts of the matrix, the rows or columns, containing the leading
entries.
Theorem 3.11
The row rank and column rank of a matrix are equal.
Proof
First bring the matrix to reduced echelon form. At that point, the row rank equals the number of leading entries since
each equals the number of nonzero rows. Also at that point, the number of leading entries equals the column rank
because the set of columns containing leading entries consists of some of the 's from a standard basis, and that set
is linearly independent and spans the set of columns. Hence, in the reduced echelon form matrix, the row rank equals
the column rank, because each equals the number of leading entries.
But Lemma 3.3 and Lemma 3.10 show that the row rank and column rank are not changed by using row operations
to get to reduced echelon form. Thus the row rank and the column rank of the original matrix are also equal.
Definition 3.12
The rank of a matrix is its row rank or column rank.
So our second point in this subsection is that the column space and row space of a matrix have the same dimension.
Our third and final point is that the concepts that we've seen arising naturally in the study of vector spaces are exactly
the ones that we have studied with linear systems.
Theorem 3.13
For linear systems with n unknowns and with matrix of coefficients A, the statements
1. the rank of A is r
2. the space of solutions of the associated homogeneous system has dimension n - r
are equivalent.
So if the system has at least one particular solution then for the set of solutions, the number of parameters equals
, the number of variables minus the rank of the matrix of coefficients.
Proof
The rank of A is r if and only if Gaussian reduction on A ends with r nonzero rows. That's true if and only if echelon form matrices row equivalent to A have r-many leading variables, which holds if and only if the reduction ends with n - r free variables; and the solution set of the homogeneous system has one parameter for each free variable, so its dimension is n - r.
Remark 3.14
(Munkres 1964)
Sometimes that result is mistakenly remembered to say that the general solution of an unknown system of
equations uses
parameters. The number of equations is not the relevant figure, rather, what matters is the
number of independent equations (the number of equations in a maximal independent set). Where there are
independent equations, the general solution involves
parameters.
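As a numeric sketch (the system is chosen here only for illustration), consider a homogeneous system with \(n = 3\) unknowns whose matrix of coefficients

\[
\begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \end{pmatrix}
\]

has rank \(r = 2\). Gauss' method leaves one free variable, and the solution set

\[
\{\, z \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix} \;\big|\; z \in \mathbb{R} \,\}
\]

has dimension \(n - r = 1\).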
Corollary 3.15
Where the matrix A is n×n, the statements
1. the rank of A is n
2. A is nonsingular
3. the rows of A form a linearly independent set
4. the columns of A form a linearly independent set
5. any linear system whose matrix of coefficients is A has one and only one solution
are equivalent.
Proof
Clearly
. The last,
Exercises
Problem 1
Transpose each.
1.
2.
3.
4.
5.
This exercise is recommended for all readers.
Problem 2
Decide if the vector is in the row space of the matrix.
1.
2.
,
This exercise is recommended for all readers.
Problem 3
Decide if the vector is in the column space.
1.
2.
,
This exercise is recommended for all readers.
Problem 4
Find a basis for the row space of this matrix.
column
2.
3.
4.
This exercise is recommended for all readers.
Problem 6
Find a basis for the span of each set.
1.
2.
3.
4.
Problem 7
Which matrices have rank zero? Rank one?
This exercise is recommended for all readers.
Problem 8
Given
, what choice of
Problem 9
Find the column rank of this matrix.
Problem 10
Show that a linear system with at least one solution has at most one solution if and only if the matrix of coefficients
has rank equal to the number of its columns.
, which set must be dependent, its set of rows or its set of columns?
Problem 12
Give an example to show that, despite that they have the same dimension, the row space and column space of a
matrix need not be equal. Are they ever equal?
Problem 13
Show that the set
is a subspace of
. Find a basis.
Problem 15
Show that the transpose operation is linear:
for
and
is bigger than
Problem 18
Show that the row rank of an
matrix is at most
Prove that a linear system has a solution if and only if that system's matrix of coefficients has the same rank as its
augmented matrix.
Problem 23
An
1. Show that a matrix can have both full row rank and full column rank only if it is square.
2. Prove that the linear system with matrix of coefficients
, ...,
if and only if
has full row rank.
3. Prove that a homogeneous system has a unique solution if and only if its matrix of coefficients
column rank.
4. Prove that the statement "if a system with matrix of coefficients
solution" holds if and only if
has full
Problem 24
How would the conclusion of Lemma 3.3 change if Gauss' method is changed to allow multiplying a row by zero?
This exercise is recommended for all readers.
Problem 25
What is the relationship between
is the relationship between
Solutions
and
,
? Between
, and
and
? What, if any,
References
Munkres, James R. (1964), Elementary Linear Algebra, Addison-Wesley.
, so the benchmark model would be left out. Besides, union is all wrong for this reason: a union of subspaces need not be a subspace (it need not be closed; for instance, this
vector
is in none of the three axes and hence is not in the union). In addition to the members of the subspaces, we must at
least also include all of the linear combinations.
Definition 4.1
Where W_1, ..., W_k are subspaces of a vector space, their sum is the span of their union: W_1 + W_2 + ... + W_k = [W_1 ∪ W_2 ∪ ... ∪ W_k].
(The notation, writing the "+" between sets in addition to using it between vectors, fits with the practice of using this symbol for any natural accumulation operation.)
is a member of the
and so
Example 4.3
A sum of subspaces can be less than the entire space: inside of a given space, the sum of two of its proper subspaces need not be all of the space, as the sketch below illustrates.
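One concrete sketch (the subspaces here are chosen only for illustration and need not be the ones from Example 4.3): inside of \(\mathbb{R}^3\), let \(W_1\) be the \(x\)-axis and \(W_2\) be the \(y\)-axis. Then

\[
W_1 + W_2 = \{\, \begin{pmatrix} x \\ y \\ 0 \end{pmatrix} \;\big|\; x, y \in \mathbb{R} \,\}
\]

is the \(xy\)-plane, which is not all of \(\mathbb{R}^3\).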
Example 4.4
A space can be described as a combination of subspaces in more than one way. Besides the decomposition
, we can also write
. To check this, note that
any
-plane;
The above definition gives one way in which a space can be thought of as a combination of some of its parts.
However, the prior example shows that there is at least one interesting property of our benchmark model that is not
captured by the definition of the sum of subspaces. In the familiar decomposition of
, we often speak of a
vector's "
part" or "
part" or "
part". That is, in this model, each vector has a unique decomposition into parts
that come from the parts making up the whole space. But in the decomposition used in Example 4.4, we cannot refer
to the "
part" of a vector these three sums
all describe the vector as comprised of something from the first plane plus something from the second plane, but the
"
part" is different in each.
That is, when we consider how
is put together from the three axes "in some way", we might mean "in such a
way that every vector has at least one decomposition", and that leads to the definition above. But if we take it to
mean "in such a way that every vector has one and only one decomposition" then we need another condition on
combinations. To see what this condition is, recall that vectors are uniquely represented in terms of a basis. We can
use this to break a space into a sum of subspaces such that any vector in the space breaks uniquely into a sum of
members of those subspaces.
Example 4.5
The benchmark is
is the
is the
. And, the fact that each such expression is unique reflects that fact that
is
linearly independent any equation like the one above has a unique solution.
Example 4.6
We don't have to take the basis vectors one at a time, the same idea works if we conglomerate them into larger
sequences. Consider again the space
and the vectors from the standard basis
. The subspace with the basis
is the
is the
the fact that any member of the space is a sum of members of the two subspaces in one and only one way
is a reflection of the fact that these vectors form a basis this system
These examples illustrate a natural way to decompose a space into a sum of subspaces in such a way that each vector
decomposes uniquely into a sum of vectors from the parts. The next result says that this way is the only way.
Definition 4.7
The concatenation of the sequences
, ...,
is their
adjoinment.
Lemma 4.8
Let
. Let
, ...,
be any
(with
) is unique.
(with
.
and is linearly independent. It spans the space because the assumption that
every
can be expressed as
means that
) to an expression of
as a
's from the concatenation. For linear independence, consider this linear relationship.
, ...,
to be
. Because of the assumption that decompositions are unique, and because the zero vector
obviously has the decomposition
, assume that
in order to show that it is trivial. (The relationship is written in this way because we are considering a combination of
nonzero vectors from only some of the
's; for instance, there might not be a
in this combination.) As in (
),
independence of
one of the
trivial.
Finally, for
is zero. Now,
and
in order to show
Definition 4.9
A collection of subspaces {W_1, ..., W_k} is independent if no nonzero vector from any W_i is a linear combination of vectors from the other subspaces.
Definition 4.10
A vector space V is the direct sum (or internal direct sum) of its subspaces W_1, ..., W_k if V = W_1 + W_2 + ... + W_k and the collection of subspaces is independent. We write V = W_1 ⊕ W_2 ⊕ ... ⊕ W_k.
Example 4.11
The benchmark model fits:
Example 4.12
The space of
It is the direct sum of subspaces in many other ways as well; direct sum decompositions are not unique.
Corollary 4.13
The dimension of a direct sum is the sum of the dimensions of its summands.
Proof
write
In Lemma 4.8, the number of basis vectors in the concatenation equals the sum of the number of vectors in the
subbases that make up the concatenation.
The special case of two subspaces is worth mentioning separately.
Definition 4.14
When a vector space is the direct sum of two of its subspaces, then they are said to be complements.
Lemma 4.15
A vector space V is the direct sum of two of its subspaces W_1 and W_2 if and only if it is the sum of the two, V = W_1 + W_2, and their intersection is trivial, W_1 ∩ W_2 = {0}.
Proof
Suppose first that
intersection, let
. By definition,
be a vector from
is a member of
is the sum of the two. To show that the two have a trivial
, and on the right side is a linear combination of members (actually, of only one member) of
is a
direct sum of the two, we need only show that the spaces are independent no nonzero member of the first is
expressible as a linear combination of members of the second, and vice versa. This is true because any relationship
(with
and
for all ) shows that the vector on the left is
also in
so
. The same argument works for any
Example 4.16
In the space R^2, the x-axis and the y-axis are complements, that is, R^2 is their direct sum. A space can have more than one pair of complementary subspaces; another pair here are the subspaces consisting of the lines through the origin with, for instance, slopes 1 and 2.
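A sketch of the check that the two axes are complements: every vector decomposes as

\[
\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ y \end{pmatrix}
\]

so the sum of the two axes is all of \(\mathbb{R}^2\), and the only vector lying on both axes is the zero vector, so the intersection is trivial; by Lemma 4.15 the two are complements.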
Example 4.17
In
the
space
the
subspaces
and
, the
is
-planes are not complements, which is the point of the discussion following
-plane is the
.
Example 4.19
Following Lemma 4.15, here is a natural question: is the simple sum of several subspaces also a direct sum if
and only if the intersection of the subspaces is trivial? The answer is that if there are more than two subspaces then
having a trivial intersection is not enough to guarantee unique decomposition (i.e., is not enough to ensure that the
spaces are independent). In R^3, let W_1 be the x-axis, let W_2 be the y-axis, and let W_3 be another line through the origin lying in the xy-plane but distinct from both axes.
(This example also shows that the requirement that all pairwise intersections of the subspaces be trivial is not enough either. See Problem 11.)
In this subsection we have seen two ways to regard a space as built up from component parts. Both are useful; in
particular, in this book the direct sum definition is needed to do the Jordan Form construction in the fifth chapter.
Exercises
This exercise is recommended for all readers.
Problem 1
Decide if
1.
2.
3.
and
4.
5.
,
This exercise is recommended for all readers.
Problem 2
Show that
1. the -axis
2. the line
Problem 3
Is
and
-axis,
:the plane
:the
-axis,
,
:the
:the
-axis,
-plane
can be combined to
1. sum to
?
2. direct sum to
Problem 7
What is
if
Problem 8
Does Example 4.5 generalize? That is, is this true or false:if a vector space
has a basis
then it is
?
Problem 10
This exercise makes the notation of writing "
are
, and
are
nontrivial.
This exercise is recommended for all readers.
Problem 12
Prove that if
then
is trivial whenever
the proof of Lemma 4.15 extends to the case of more than two subspaces. (Example 4.19 shows that this implication
does not reverse; the other half does not extend.)
Problem 13
Recall that no linearly independent set contains the zero vector. Can an independent set of subspaces contain the
trivial subspace?
This exercise is recommended for all readers.
Problem 14
Does every subspace have a complement?
This exercise is recommended for all readers.
Problem 15
Let
spans
spans
. Can
and that
span
? Must it?
? Must it?
Problem 16
When a vector space is decomposed as a direct sum, the dimensions of the subspaces add to the dimension of the
space. The situation with a space that is given as the sum of its subspaces is not as simple. This exercise considers
the two-subspace special case.
1. For these subspaces of
2. Suppose that
for
find
and
, and
is a basis
. Finally, suppose that the prior sequence has been expanded to give a sequence
that is a basis for
for
, and a sequence
that is a basis
.
be eight-dimensional subspaces of a ten-dimensional space. List all values possible for
.
Problem 17
Let
suppose that
. Prove
Problem 18
A matrix is symmetric if for each pair of indices
and
antisymmetric if each
1. Give a symmetric
, the
entry. A matrix is
entry.
matrix. (Remark. For the second one, be careful
is the direct sum of the space of symmetric matrices and the space of antisymmetric matrices.
Problem 19
Let
be
subspaces
of
vector
space.
Prove
that
. Can
-axis in
shows that
and
happen?
is a subspace of
(read "
perp").
-axis in
-axis in
-axis in
.
.
5. Show that if
is the orthocomplement of then is the orthocomplement of
6. Prove that a subspace and its orthocomplement have a trivial intersection.
7. Conclude that for any
and subspace
8. Show that
we have that
, is
?
Problem 23
We know that if
Problem 24
We can ask about the algebra of the "
" operation.
1. Is it commutative; is
2. Is it associative; is
3. Let
be a subspace of some vector space. Show that
4. Must there be an identity element, a subspace
?
.
such that
then
? Right cancelation?
Problem 25
Consider the algebraic properties of the direct sum operation.
1. Does direct sum commute: does
imply that
imply
right-cancel?
5. There is an identity element with respect to this operation. Find it.
6. Do some, or all, subspaces have inverses with respect to this operation:is there a subspace
space such that there is a subspace
item?
Solutions
References
Halsey, William D. (1979), Macmillan Dictionary, Macmillan.
? Does it
of some vector
1. for any
the result of
is in
and
if
2. for any
then
the result of
is in
and
if
then
3. if
then
4. there is an element
if
such that
then
for each
there is an element
5. there is an element
such that
if
such that
then
of
there is an element
such that
The number system consisting of the set of real numbers along with the usual addition and multiplication operations is a field, naturally. Another field is the set of rational numbers with its usual addition and multiplication operations. An example of an algebraic structure that is not a field is the integer number system; it fails the final condition since, for instance, 2 has no multiplicative inverse among the integers.
Some examples are surprising. The set
", and thus by taking coefficients, vector entries, and matrix entries to be elements of
("almost" because statements involving distances or angles are exceptions). Here are some examples; each
applies to a vector space
For any
i.
and
over a field
, and
ii.
, and
iii.
.
The span (the set of linear combinations) of a subset of
is a subspace of
We won't develop vector spaces in this more general setting because the additional abstraction can be a distraction.
The ideas we want to bring out already appear when we stick to the reals.
The only exception is in Chapter Five. In that chapter we must factor polynomials, so we will switch to considering
vector spaces over the field of complex numbers. We will discuss this more, including a brief review of complex
arithmetic, when we get there.
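A hedged sketch of a small field of the surprising kind mentioned above: the two-element set {0, 1} with addition and multiplication taken mod 2. The construction is the usual one and is assumed here, since the operation tables themselves are not reproduced; a few of the conditions are checked by brute force.

# The two-element field: {0, 1} with arithmetic mod 2.
add = lambda a, b: (a + b) % 2
mul = lambda a, b: (a * b) % 2
elements = [0, 1]
# Commutativity of both operations and distributivity hold for every choice of elements.
for a in elements:
    for b in elements:
        assert add(a, b) == add(b, a) and mul(a, b) == mul(b, a)
        for c in elements:
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
# 0 is the additive identity, 1 is the multiplicative identity, and the only
# nonzero element, 1, is its own multiplicative inverse.
assert all(add(0, a) == a and mul(1, a) == a for a in elements) and mul(1, 1) == 1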
Exercises
Problem 1
Show that the real numbers form a field.
Problem 2
Prove that these are fields.
1. The rational numbers
2. The complex numbers
Problem 3
Give an example that shows that the integer number system is not a field.
Problem 4
Consider the set
Problem 5
Give suitable operations to make the set
Solutions
a field.
Remarkably, the explanation for the cubical external shape is the simplest one possible: the internal shape, the way
the atoms lie, is also cubical. The internal structure is pictured below. Salt is sodium chloride, and the small spheres
shown are sodium while the big ones are chloride. (To simplify the view, only the sodiums and chlorides on the
front, top, and right are shown.)
The specks of salt that we see when we spread a little out on the table consist of many repetitions of this fundamental
unit. That is, these cubes of atoms stack up to make the larger cubical structure that we see. A solid, such as table
salt, with a regular internal structure is a crystal.
We can restrict our attention to the front face. There, we have this pattern repeated many times.
The distance between the corners of this cell is about 3.34 ångströms (an ångström is 10^-10 meters). Obviously that unit is unwieldy for describing points in the crystal lattice. Instead, the thing to do is to take as a unit the length
of each side of the square. That is, we naturally adopt this basis.
Then we can describe, say, the corner in the upper right of the picture above as
Another crystal from everyday experience is pencil lead. It is graphite, formed from carbon atoms arranged in this
shape.
This is a single plane of graphite. A piece of graphite consists of many of these planes layered in a stack. (The
chemical bonds between the planes are much weaker than the bonds inside the planes, which explains why graphite
writes---it can be sheared so that the planes slide off and are left on the paper.) A convenient unit of length can be
made by decomposing the hexagonal ring into three regions that are rotations of this unit cell.
A natural basis then would consist of the vectors that form the sides of that unit cell. The distance along the bottom
and slant is
ångströms, so this
is a good basis.
The selection of convenient bases extends to three dimensions. Another familiar crystal formed from carbon is
diamond. Like table salt, it is built from cubes, but the structure inside each cube is more complicated than salt's. In
addition to carbons at each corner,
(To show the added face carbons clearly, the corner carbons have been reduced to dots.) There are also four more
carbons inside the cube, two that are a quarter of the way up from the bottom and two that are a quarter of the way
down from the top.
(As before, carbons shown earlier have been reduced here to dots.) The distance along any edge of the cube is
ångströms. Thus, a natural basis for describing the locations of the carbons, and the bonds between them, is this.
Even the few examples given here show that the structure of crystals is complicated enough that some organized system to give the locations of the atoms, and how they are chemically bound, is needed. One tool for that organization is a convenient basis. This application of bases is simple, but it shows a context where the idea arises
naturally. The work in this chapter just takes this simple idea and develops it.
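A hedged sketch of the computation that this application calls for: expressing a lattice point with respect to a unit-cell basis is a matter of solving a small linear system. The unit-cell vectors below are illustrative stand-ins, not values taken from the pictures above.

import numpy as np

# Illustrative unit-cell vectors, measured in angstroms (stand-in numbers).
beta1 = np.array([1.42, 0.00])   # along the bottom of an assumed unit cell
beta2 = np.array([0.71, 1.23])   # along the slant of the assumed unit cell
# A lattice point two cells over and three up, then recovered from its angstrom coordinates.
target = 2 * beta1 + 3 * beta2
coords = np.linalg.solve(np.column_stack([beta1, beta2]), target)
print(coords)   # prints (2.0, 3.0): the representation with respect to the unit-cell basis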
Exercises
Problem 1
How many fundamental regions are there in one face of a speck of salt? (With a ruler, we can estimate that face is a
square that is
cm on a side.)
Problem 2
In the graphite picture, imagine that we are interested in a point
ångströms up and
the origin.
1. Express that point in terms of the basis given for graphite.
2. How many hexagonal shapes away is this point from the origin?
3. Express that point in terms of a second basis, where the first basis vector is the same, but the second is
perpendicular to the first (going up the plane) and of the same length.
Problem 3
Give the locations of the atoms in the diamond cube both in terms of the basis, and in ångströms.
Problem 4
This illustrates how the dimensions of a unit cell could be computed from the shape in which a substance crystallizes
(see Ebbing 1993, p. 462).
1. Recall that there are
atoms in a mole (this is Avogadro's number). From that, and the fact that
grams per cubic centimeter. From this, and the mass of a unit cell,
References
Ebbing, Darrell D. (1993), General Chemistry (Fourth ed.), Houghton Mifflin.
Number with
that
preference
straight-line order. That is, the majority cycle seems to arise in the aggregate, without being present in the elements
of that aggregate, the preference lists. Recently, however, linear algebra has been used (Zwicker 1991) to argue that a
tendency toward cyclic preference is actually present in each voter's list, and that it surfaces when there is more
adding of the tendency than cancelling.
For this argument, abbreviating the choices as
, and
as preferred to
way.) The descriptions for the other preference lists are in the Voting preferences table below.
Now, to conduct the election we linearly combine these descriptions; for instance, the Political Science mock
election
We will decompose vote vectors into two parts, one cyclic and the other acyclic. For the first part, we say that a
vector is purely cyclic if it is in this subspace of
.
For the second part, consider the subspace (see Problem 6) of vectors that are perpendicular to all of the vectors in
.
We can represent votes with respect to this basis, and thereby decompose them into a cyclic part and an acyclic part.
(Note for readers who have covered the optional section in this chapter: that is, the space is the direct sum of and
.)
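A hedged sketch of that decomposition, assuming (as the description suggests, though the displays are not reproduced here) that the cyclic subspace is the line spanned by the all-ones vector in the space of three pairwise margins; the acyclic part is the orthogonal projection onto the perpendicular subspace.

import numpy as np

# Decompose a vote vector of three pairwise margins into a cyclic and an acyclic part.
vote = np.array([-1.0, 1.0, 1.0])            # one voter's three pairwise margins (illustrative)
cyclic_direction = np.array([1.0, 1.0, 1.0])  # assumed spanning vector of the cyclic subspace
cyclic_part = (vote @ cyclic_direction / (cyclic_direction @ cyclic_direction)) * cyclic_direction
acyclic_part = vote - cyclic_part
assert np.allclose(cyclic_part + acyclic_part, vote)        # the two parts sum back to the vote
assert np.isclose(acyclic_part @ cyclic_direction, 0.0)     # and they are perpendicular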
voter discussed above. The representation in terms of the basis is easily
found,
so that
, and
. Then
gives the desired decomposition into a cyclic part and an acyclic part.
Thus, this
The
voter's rational preference list can indeed be seen to have a cyclic part.
voter is opposite to the one just considered in that the "
decomposition
shows that these opposite preferences have decompositions that are opposite. We say that the first voter has positive
spin since the cycle part is with the direction we have chosen for the arrows, while the second voter's spin is
negative.
The fact that these opposite voters cancel each other is reflected in the fact that their vote vectors add to zero.
This suggests an alternate way to tally an election. We could first cancel as many opposite preference lists as
possible, and then determine the outcome by adding the remaining lists.
The rows of the table below contain the three pairs of opposite preference lists. The columns group those pairs by
spin. For instance, the first row contains the two voters just considered.
Voting preferences
positive spin
negative spin
If we conduct the election as just described then after the cancellation of as many opposite pairs of voters as possible,
there will be left three sets of preference lists, one set from the first row, one set from the second row, and one set
from the third row. We will finish by proving that a voting paradox can happen only if the spins of these three sets
are in the same direction. That is, for a voting paradox to occur, the three remaining sets must all come from the left
of the table or all come from the right (see Problem 3). This shows that there is some connection between the
majority cycle and the decomposition that we are using---a voting paradox can happen only when the tendencies
toward cyclic preference reinforce each other.
For the proof, assume that opposite preference orders have been cancelled, and we are left with one set of preference
lists from each of the three rows. Consider the sum of these three (here, the numbers , , and could be
positive, negative, or zero).
and
and
all nonnegative or all nonpositive. On the left, at least two of the three numbers,
nonnegative or both nonpositive. We can assume that they are
nonnegative and
and
and
and
, are
and
, are both
only the first case, since the second is similar and the other two are also easy.
So assume that the cycle is nonnegative and that
add to give that
and
and
proof.
This result says only that having all three spin in the same direction is a necessary condition for a majority cycle. It is
not sufficient; see Problem 4.
Voting theory and associated topics are the subject of current research. There are many intriguing results, most
notably the one produced by K. Arrow (Arrow 1963), who won the Nobel Prize in part for this work, showing that
no voting system is entirely fair (for a reasonable definition of "fair"). For more information, some good introductory
articles are (Gardner 1970), (Gardner 1974), (Gardner 1980), and (Niemi & Riker 1976). A quite readable recent book
is (Taylor 1995). The long list of cases from recent American political history given in (Poundstone 2008) shows that
manipulation of these paradoxes is routine in practice (and the author proposes a solution).
This Topic is largely drawn from (Zwicker 1991). (Author's Note: I would like to thank Professor Zwicker for his
kind and illuminating discussions.)
Exercises
Problem 1
Here is a reasonable way in which a voter could have a cyclic preference. Suppose that this voter ranks each
candidate on each of three criteria.
1. Draw up a table with the rows labelled "Democrat", "Republican", and "Third", and the columns labelled
"character", "experience", and "policies". Inside each column, rank some candidate as most preferred, rank
another as in the middle, and rank the remaining one as least preferred.
2. In this ranking, is the Democrat preferred to the Republican in (at least) two out of three criteria, or vice versa? Is
the Republican preferred to the Third?
3. Does the table that was just constructed have a cyclic preference order? If not, make one that does.
So it is possible for a voter to have a cyclic preference among candidates. The paradox described above, however, is
that even if each voter has a straight-line preference list, a cyclic preference can still arise for the entire group.
Problem 2
Compute the values in the table of decompositions.
Problem 3
Do the cancellations of opposite preference orders for the Political Science class's mock election. Are all the
remaining preferences from the left three rows of the table or from the right?
Problem 4
The necessary condition that is proved above (a voting paradox can happen only if all three preference lists remaining after cancellation have the same spin) is not also sufficient.
1. Continuing the positive cycle case considered in the proof, use the two inequalities
and
to show that
.
2. Also show that
, and hence that
.
3. Give an example of a vote where there is a majority cycle, and addition of one more voter with the same spin
causes the cycle to go away.
4. Can the opposite happen; can addition of one voter with a "wrong" spin cause a cycle to appear?
5. Give a condition that is both necessary and sufficient to get a majority cycle.
Problem 5
A one-voter election cannot have a majority cycle because of the requirement that we've imposed that the
voter's list must be rational.
1. Show that a two-voter election may have a majority cycle. (We consider the group preference a majority cycle if
all three group totals are nonnegative or if all three are nonpositive---that is, we allow some zeros in the group
preference.)
2. Show that for any number of voters greater than one, there is an election involving that many voters that results in
a majority cycle.
Problem 6
Let
be a subspace of
is also a subspace of
References
Arrow, K. J. (1963), Social Choice and Individual Values, Wiley.
Gardner, Martin (April 1970), "Mathematical Games, Some mathematical curiosities embedded in the solar
system", Scientific American: 108-112.
Gardner, Martin (October 1974), "Mathematical Games, On the paradoxical situations that arise from
nontransitive relations", Scientific American.
Gardner, Martin (October 1980), "Mathematical Games, From counting votes to making votes count: the
mathematics of elections", Scientific American.
Niemi, G.; Riker, W. (June 1976), "The Choice of Voting Systems", Scientific American: 21-27.
Poundstone, W. (2008), Gaming the Vote, Hill and Wang, ISBN 978-0-8090-4893-9.
Taylor, Alan D. (1995), Mathematics and Politics: Strategy, Voting, Power, and Proof, Springer-Verlag.
Zwicker, S. (1991), "The Voters' Paradox, Spin, and the Borda Count", Mathematical Social Sciences 22:
187-227
However, the idea of including the units can be taken beyond bookkeeping. It can be used to draw conclusions about
what relationships are possible among the physical quantities.
To start, consider the physics equation:
seconds then this is a true statement about falling bodies. However, it is not correct in other unit systems; for
instance, it is not correct in the meter-second system. We can fix that by making the
a dimensional constant.
So our first point is that by "including the units" we mean that we are restricting our attention to equations that use
dimensional constants.
By using dimensional constants, we can be vague about units and say only that all quantities are measured in
combinations of some units of length
, mass
, and time . We shall refer to these three as dimensions
(these are the only three dimensions that we shall need in this Topic). For instance, velocity could be measured in
or
, but in all events it involves some unit of length divided by some unit of time
so the dimensional formula of velocity is
. We
shall prefer using negative exponents over the fraction bars and we shall include the dimensions with a zero
exponent, that is, we shall write the dimensional formula of velocity as
.
In this context, "You can't add apples to oranges" becomes the advice to check that all of an equation's terms have
the same dimensional formula. An example is this version of the falling body equation:
. The
dimensional formula of the
(
term is
is
is
, so
term is
dimensionally homogeneous.
Quantities with dimensional formula
.
The classic example of using the units for more than bookkeeping, using them to draw conclusions, considers the
formula for the period of a pendulum.
. So the quantities on the other side of the equation must have dimensional
's and
below has the quantities that an experienced investigator would consider possibly relevant. The only dimensional
formulas involving are for the length of the string and the acceleration due to gravity. For the 's of these two
to cancel, when they appear in the equation they must be in ratio, e.g., as
, or as
, or as
for numbers
, ...,
For the second, observe that an easy way to construct a dimensionally homogeneous expression is by taking a
product of dimensionless quantities or by adding such dimensionless terms. Buckingham's Theorem states that any
complete relationship among quantities with dimensional formulas can be algebraically manipulated into a form
where there is some function such that
of dimensionless products. (The first example below describes what makes a set
, ...,
don't involve
(as with
, here
By the first fact cited above, we expect the formula to have (possibly sums of terms of) the form
. To use the second fact, to find which combinations of the powers
, ...,
Note that
is
and so the mass of the bob does not affect the period. Gaussian reduction and parametrization of
(we've taken
as one of the parameters in order to express the period in terms of the other quantities).
Here is the linear algebra. The set of dimensionless products contains all terms subject to the conditions above. This set forms a vector space under the "addition" operation of multiplying two such products and the "scalar multiplication" operation of raising such a product to the power of the scalar (see Problem 5). The term "complete set of dimensionless products" in Buckingham's Theorem means a basis for this vector space.
We can get a basis by first taking
products are
and
and then
. Because the set
says that
where
is a function that we cannot determine from this analysis (a first year physics text will show by other
As earlier, the linear algebra here is that the set of dimensionless products of these quantities forms a vector space,
and we want to produce a basis for that space, a "complete" set of dimensionless products. One such set, gotten from
setting
and
,
and
also
setting
and
is
. With that, Buckingham's Theorem says that any complete
relationship among these quantities is stateable in this form.
, and
, the same
acceleration, about the same center (approximately). Hence, the orbit will be the same and so its period will be the
same, and thus the right side of the above equation also remains unchanged (approximately). Therefore,
is approximately constant as
varies. This is Kepler's Third Law: the square of the period of a
planet is proportional to the cube of the mean radius of its orbit about the sun.
The final example was one of the first explicit applications of dimensional analysis. Lord Rayleigh considered the
speed of a wave in deep water and suggested these as the relevant quantities.
The equation
, and so
is
times a constant (
is constant since it is a
function of no arguments).
As the three examples above show, dimensional analysis can bring us far toward expressing the relationship among
the quantities. For further reading, the classic reference is (Bridgman 1931); this brief book is delightful. Another source is (Giordano, Wells & Wilde 1987). A description of dimensional analysis's place in modeling is in (Giordano, Jaye & Weir 1986).
Exercises
Problem 1
Consider a projectile, launched with initial velocity
, at an angle
with the guess that these are the relevant quantities. (de Mestre 1990)
quantity dimensional formula
horizontal position
vertical position
initial speed
angle of launch
acceleration due to gravity
time
1. Show that
finding the appropriate free variables in the linear system that arises, but there is a shortcut that uses the properties
of a basis.)
2. These two equations of motion for projectiles are familiar:
and
.
Manipulate each to rewrite it as a relationship among the dimensionless products of the prior item.
Problem 2
Einstein (Einstein 1911) conjectured that the infrared characteristic frequencies of a solid may be determined
by the same forces between atoms as determine the solid's ordinary elastic behavior. The relevant quantities
are
quantity dimensional formula
characteristic frequency
compressibility
number of atoms per cubic cm
mass of an atom
Show that there is one dimensionless product. Conclude that, in any complete relationship among quantities with
these dimensional formulas, is a constant times
. This conclusion played an important role in
the early study of quantum phenomena.
Problem 3
The torque produced by an engine has dimensional formula
engine's rotation rate (with dimensional formula
formula
between
. (Tilley)
Problem 5
Prove that the dimensionless products form a vector space under the
and the
operation of raising such a product to the power of the scalar. (The vector arrows are a precaution
against confusion.) That is, prove that, for any particular homogeneous system, this set of products of powers of
, ...,
is a vector space under:
and
and
and
References
Bridgman, P. W. (1931), Dimensional Analysis, Yale University Press.
de Mestre, Neville (1990), The Mathematics of Projectiles in Sport, Cambridge University Press.
Giordano, R.; Jaye, M.; Weir, M. (1986), "The Use of Dimensional Analysis in Mathematical Modeling", UMAP
Modules (COMAP) (632).
Giordano, R.; Wells, M.; Wilde, C. (1987), "Dimensional Analysis", UMAP Modules (COMAP) (526).
Einstein, A. (1911), Annals of Physics 35: 686.
Tilley, Burt, Private Communication.
Chapter III
Linear Algebra/Isomorphisms
In the examples following the definition of a vector space we developed the intuition that some spaces are "the
same" as others. For instance, the space of two-tall column vectors and the space of two-wide row vectors are not
equal because their elements, column vectors and row vectors, are not equal, but we have the idea that these
spaces differ only in how their elements appear. We will now make this idea precise.
This section illustrates a common aspect of a mathematical investigation. With the help of some examples, we've
gotten an idea. We will next give a formal definition, and then we will produce some results backing our contention
that the definition captures the idea. We've seen this happen already, for instance, in the first section of the Vector
Space chapter. There, the study of linear systems led us to consider collections closed under linear combinations. We
defined such a collection as a vector space, and we followed it with some supporting results.
Of course, that definition wasn't an end point; instead, it led to new insights such as the idea of a basis. Here too, after
producing a definition, and supporting it, we will get two surprises (pleasant ones). First, we will find that the
definition applies to some unforeseen, and interesting, cases. Second, the study of the definition will lead to new
ideas. In this way, our investigation will build a momentum.
then this correspondence preserves the operations, for instance this addition
and
. A natural
correspondence is this.
Definition 1.3
An isomorphism between two vector spaces
and
and
(we write
is a map
that
[1]
then
, read "
is isomorphic to
of functions of
only when
. If
and
is one-to-one.
of
preserves structure.
preserves addition.
With that, conditions (1) and (2) are verified, so we know that
are isomorphic
Example 1.5
Let
be the space
is isomorphic to
, and
, the space of
quadratic polynomials.
To show this we will produce an isomorphism map. There is more than one possibility; for instance, here are four.
The first map is the more natural correspondence in that it just carries the coefficients over. However, below we shall
verify that the second one is an isomorphism, to underline that there are isomorphisms other than just the obvious
one (showing that
is an isomorphism is Problem 3).
To show that
then
The
gives,
assumption
by
the
definition
Thus
that
of
that
implies
and therefore
that
is one-to-one.
of the codomain is the image of some member of the
. For instance,
is
The computations for structure preservation are like those in the prior example. This map preserves addition
Thus
We are sometimes interested in an isomorphism of a space with itself, called an automorphism. An identity map is
an automorphism. The next two examples show that there are others.
Example 1.6
A dilation map
is an automorphism of
is a map
is an automorphism.
through
For
instance,
under
this
to
map
and
. This map is an automorphism of this space; the
with itself does more than just tell us that the space is "the same" as itself. It gives us some
insight into the space's structure. For instance, below is shown a family of parabolas, graphs of members of
Each has a vertex at
and
and
, etc.
for
in any function's argument shifts its graph to the right by one. Thus,
's action is to shift all of the parabolas to the right by one. Notice that the picture before
is
is applied, because while each parabola moves to the right, another one
comes in from the left to take its place. This also holds true for cubics, etc. So the automorphism
gives us the
insight that
has a certain horizontal-homogeneity; this space looks the same near
as near
.
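A hedged sketch of this kind of shift automorphism, using the space of quadratic polynomials as a convenient stand-in: written with respect to the coefficient basis consisting of 1, x, and x^2, the map sending p(x) to p(x - 1) is given by an invertible matrix, so it is one-to-one and onto, and composing it with the opposite shift gives the identity.

import numpy as np

# Columns are the images of the basis vectors under p(x) |-> p(x - 1):
# 1 |-> 1,  x |-> x - 1,  x^2 |-> x^2 - 2x + 1.
S = np.array([[1.0, -1.0,  1.0],
              [0.0,  1.0, -2.0],
              [0.0,  0.0,  1.0]])
assert np.linalg.det(S) != 0          # invertible, hence an automorphism
# The opposite shift p(x) |-> p(x + 1) undoes it.
S_back = np.array([[1.0, 1.0, 1.0],
                   [0.0, 1.0, 2.0],
                   [0.0, 0.0, 1.0]])
assert np.allclose(S @ S_back, np.eye(3))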
As described in the preamble to this section, we will next produce some results supporting the contention that the
definition of isomorphism above captures our intuition of vector spaces being the same.
Of course the definition itself is persuasive: a vector space consists of two components, a set and some structure, and
the definition simply requires that the sets correspond and that the structures correspond also. Also persuasive are the
examples above. In particular, Example 1.1, which gives an isomorphism between the space of two-wide row vectors
and the space of two-tall column vectors, dramatizes our intuition that isomorphic spaces are the same in all relevant
respects. Sometimes people say, where
, that "
is just painted green" any differences are merely
cosmetic.
Further support for the definition, in case it is needed, is provided by the following results that, taken together,
suggest that all the things of interest in a vector space correspond under an isomorphism. Since we studied vector
spaces to study linear combinations, "of interest" means "pertaining to linear combinations". Not of interest is the
way that the vectors are presented typographically (or their color!).
As an example, although the definition of isomorphism doesn't explicitly say that the zero vectors must correspond,
it is a consequence of that definition.
Lemma 1.8
An isomorphism maps a zero vector to a zero vector.
Proof
Where
. Then
The definition of isomorphism requires that sums of two vectors correspond and that so do scalar multiples. We can
extend that to say that all linear combinations correspond.
Lemma 1.9
1.
preserves structure
2.
3.
Proof
Since the implications
and
For the inductive step assume that statement 3 holds whenever there are
, or
, ..., or
. Assume statement 1.
. Consider the
-term sum.
when applied
times.
In addition to adding to the intuition that the definition of isomorphism does indeed preserve the things of interest in
a vector space, that lemma's second item is an especially handy way of checking that a map preserves structure.
We close with a summary. The material in this section augments the chapter on Vector Spaces. There, after giving
the definition of a vector space, we informally looked at what different things can happen. Here, we defined the
relation "
" between vector spaces and we have argued that it is the right way to split the collection of vector
spaces into cases because it preserves the features of interest in a vector space; in particular, it preserves linear
combinations. That is, we have now said precisely what we mean by "the same", and by "different", and so we have
precisely classified the vector spaces.
Exercises
This exercise is recommended for all readers.
Problem 1
Verify, using Example 1.4 as a model, that the two correspondences given before the definition are isomorphisms.
1. Example 1.1
2. Example 1.2
This exercise is recommended for all readers.
Problem 2
For the map
given by
3.
Show that this map is an isomorphism.
Problem 3
Show that the natural map
given by
2.
given by
3.
given by
4.
given by
Problem 5
Show that the map
given by
Problem 9
Find two isomorphisms between
and
is
isomorphic to
Problem 11
For what
Problem 12
is
isomorphic to
, it is isomorphic to the
-plane subspace of
to
given by
Problem 13
Why, in Lemma 1.8, must there be a
be nonempty?
Problem 14
Are any two trivial spaces isomorphic?
Problem 15
In the proof of Lemma 1.9, what about the zero-summands case (that is, if
is zero)?
Problem 16
Show that any isomorphism
. Thus, if
to
then also
is isomorphic to .
3. Show that a composition of isomorphisms is an isomorphism: if
is an isomorphism then so also is
isomorphic to
Problem 18
, then also
Suppose that
mapped by
is isomorphic to
is
is an isomorphism and
. Thus, if
is isomorphic to
and
is
is isomorphic
Problem 19
Suppose that
is linearly dependent.
Problem 20
Show that each type of map from Example 1.6 is an automorphism.
1. Dilation
2. Rotation
3. Reflection
by a nonzero scalar .
through an angle .
over a line through the origin.
Hint. For the second and third items, polar coordinates are useful.
Problem 21
Produce an automorphism of
other than the identity map, and other than a shift map
Problem 22
1. Show that a function
2. Let
.
be an automorphism of
for some
such that
. Find
for some
with
if and only if
.
4. Let be an automorphism of
with
Find
Problem 23
Refer to Lemma 1.8 and Lemma 1.9. Find two more things preserved by isomorphism.
Problem 24
We show that isomorphisms can be tailored to fit in that, sometimes, given vectors in the domain and in the range we
can produce an isomorphism associating those vectors.
1. Let
be a basis for
so that any
to
to
Problem 25
Prove that a space is
consider the map sending a vector over to its representation with respect to
Problem 26
(Requires the subsection on Combining Subspaces, which is optional.) Let
and
and
and
.
?
(in this case we say that
given by
is an isomorphism. Thus if the internal direct sum is defined then the internal and external direct sums are
isomorphic.
Solutions
Footnotes
[1] More information on one-to-one and onto maps is in the appendix.
is not an isomorphism because it is not onto. Of course, being a function, a homomorphism is onto some set, namely
its range; the map is onto the
-plane subset of
.
Lemma 2.1
Under a homomorphism, the image of any subspace of the domain is a subspace of the codomain. In particular, the
image of the entire space, the range of the homomorphism, is a subspace of the codomain.
Proof
Let
codomain
. The image
is a subset of the
is a subspace of
and
then
we need
are members of
is also a member of
from
sometimes denoted
.
is
(We shall soon see the connection between the rank of a map and the rank of a matrix.)
Example 2.3
Recall
that
the
derivative
map
is linear. The rangespace
polynomials
Example 2.4
With this homomorphism
given
by
an image vector in the range can have any constant term, must have an
same coefficient of
as of
. That is, the rangespace is
is the inverse
. Above, the three sets of many elements on the left are inverse images.
Example 2.5
Consider the projection
which is a homomorphism that is many-to-one. In this instance, an inverse image set is a vertical line of vectors in
the domain.
Example 2.6
This homomorphism
, the inverse image
The above examples have only to do with the fact that we are considering functions, specifically, many-to-one
functions. They show the inverse images as sets of vectors that are related to the image vector . But these are
more than just arbitrary functions; they are homomorphisms. What do the two preservation conditions say about the
relationships?
In generalizing from isomorphisms to homomorphisms by dropping the one-to-one condition, we lose the property
that we've stated intuitively as: the domain is "the same as" the range. That is, we lose that the domain corresponds
perfectly to the range in a one-vector-by-one-vector way.
What we shall keep, as the examples below illustrate, is that a homomorphism describes a way in which the domain
is "like", or "analgous to", the range.
Example 2.7
We think of
components
as being like
,
, and
, except that vectors have an extra component. That is, we think of the vector with
and
, we
make precise which members of the domain we are thinking of as related to which members of the codomain.
Understanding in what way the preservation conditions in the definition of homomorphism show that the domain
elements are like the codomain elements is easiest if we draw
as the
-plane inside of
. (Of course,
is
a set of two-tall vectors while the
-plane is a set of three-tall vectors with a third component of zero, but there is
is the "shadow" of
above
above
plus
above
equals
. (Preservation of scalar
lie in the domain in a vertical line (only one such vector is shown,
vectors". Now,
then
sense that any
and
vector equals a
and
Example 2.8
A homomorphism can be used to express an analogy between spaces that is more subtle than the prior one. For the
map
in the range
.A
that maps to
. Call
a"
Restated, if a
vector"
plus
vector is added to a
a"
vector"
equals a "
vector".
to a
-axis.
We won't describe how every homomorphism that we will use is an analogy because the formal sense that we make
of "alike in that ..." is "a homomorphism exists such that ...". Nonetheless, the idea that a homomorphism between
two spaces expresses how the domain's vectors fall into classes that act like the range's vectors is a good way to
view homomorphisms.
Another reason that we won't treat all of the homomorphisms that we see as above is that many vector spaces are
hard to draw (e.g., a space of polynomials). However, there is nothing bad about gaining insights from those spaces
that we are able to draw, especially when those insights extend to all vector spaces. We derive two such insights
from the three examples 2.7, 2.8, and 2.9.
First, in all three examples, the inverse images are lines or planes, that is, linear surfaces. In particular, the inverse
image of the range's zero vector is a line or plane through the origin, a subspace of the domain.
Lemma 2.10
For any homomorphism, the inverse image of a subspace of the range is a subspace of the domain. In particular, the
inverse image of the trivial subspace of the range is a subspace of the domain.
Proof
Let
and
, as
Example 2.12
be elements, so that
and
. Consider
are elements of
, since
is
.
Example 2.13
The map from Example 2.4 has this nullspace.
Now for the second insight from the above pictures. In Example 2.7, each of the vertical lines is squashed down to a single point; the map, in passing from the domain to the range, takes all of these one-dimensional vertical lines and "zeroes them out", leaving the range one dimension smaller than the domain. Similarly, in Example 2.8, the
two-dimensional domain is mapped to a one-dimensional range by breaking the domain into lines (here, they are
diagonal lines), and compressing each of those lines to a single member of the range. Finally, in Example 2.9, the
domain breaks into planes which get "zeroed out", and so the map starts with a three-dimensional domain but ends
with a one-dimensional range; this map "subtracts" two from the dimension. (Notice that, in this third example, the
codomain is two-dimensional but the range of the map is only one-dimensional, and it is the dimension of the range
that is of interest.)
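A hedged numerical check of this dimension bookkeeping, using a projection that drops the third component (an illustrative choice of map): the rank plus the nullity comes out to the dimension of the domain, which is the content of the theorem below.

import numpy as np

P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])          # a projection from R^3 onto R^2
rank = np.linalg.matrix_rank(P)          # dimension of the rangespace
nullity = P.shape[1] - rank              # dimension of the nullspace
assert (rank, nullity) == (2, 1)
assert rank + nullity == P.shape[1]      # rank plus nullity equals the dimension of the domain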
Theorem 2.14
A linear map's rank plus its nullity equals the dimension of its domain.
Proof
Let
is a basis for the rangespace. Then counting the size of these bases gives the result.
To see that
is linearly independent, consider the equation
gives that
and so
is in the nullspace of
is a basis for
To show that
is linearly independent.
and write
of
, and so
is
. As
of
. This
as a linear combination
This
gives
and since
, ...,
. Thus,
is
is
that
with
and
some
nonzero.
because
Then,
,
we
have
because
that
and the
-plane inside of
The prior observation allows us to adapt some results about isomorphisms to this setting.
Theorem 2.21
In an
1.
2.
3.
4.
5. if
, these:
then
is a basis for
.
range of
and so a linear combination of two members of that domain has the form
. On that
gives this.
Thus the inverse of a one-to-one linear map is automatically linear. But this also gives the
implication,
to
, but a one-to-one
, to show that
. Consider
. Expressing
, as desired.
implication, assume that
. Then every
to
as a
is a basis for
so that
by
(uniqueness of the representation makes this well-defined). Checking that it is linear and that it is the inverse of
are easy.
We've now seen that a linear map shows how the structure of the domain is like that of the range. Such a map can be
thought to organize the domain space into inverse images of points in the range. In the special case that the map is
one-to-one, each inverse image is a single point and the map is an isomorphism between the domain and the range.
Exercises
This exercise is recommended for all readers.
Problem 1
Let
be given by
rangespace?
1.
2.
3.
4.
5.
This exercise is recommended for all readers.
Problem 2
Find the nullspace, nullity, rangespace, and rank of each map.
1.
given by
2.
given by
3.
given by
of rank five
of rank one
, an onto map
4.
, onto
This exercise is recommended for all readers.
Problem 4
What is the nullspace of the differentiation transformation
derivative, as a transformation of
Problem 5
? The
-th derivative?
Example 2.7 restates the first condition in the definition of homomorphism as "the shadow of a sum is the sum of the
shadows". Restate the second condition in the same style.
Problem 6
For
the
homomorphism
given
find these.
1.
2.
3.
This exercise is recommended for all readers.
Problem 7
For the map
given by
, and
by
Problem 9
Describe the nullspace and rangespace of a transformation given by
Problem 10
List all pairs
to
Problem 11
Does the differentiation map
have an inverse?
given by
Problem 13
1. Prove that a homomorphism is onto if and only if its rank equals the dimension of its codomain.
2. Conclude that a homomorphism between vector spaces with the same dimension is one-to-one if and only if it is
onto.
Problem 14
Show that a linear map is nonsingular if and only if it preserves linear independence.
Problem 15
Corollary 2.17 says that for there to be an onto homomorphism from a vector space
necessary that the dimension of
to
to a vector space
, it is
, then
that is onto.
Problem 16
Let
is a basis for
Problem 17
Recall that the nullspace is a subset of the domain and the rangespace is a subset of the codomain. Are they
necessarily distinct? Is there a homomorphism that has a nontrivial intersection of its nullspace and its rangespace?
Problem 18
Prove that the image of a span equals the span of the images. That is, where
is a subset of
of
then
equals
is
is any subspace
.
for
with
denoted
2. Consider the map
(if
and any
, the set
is not onto then this set may be empty). Such a set is a coset of
.
given by
and is
where
is a particular solution of that linear system (if there is no particular solution then the above set is
empty).
4. Show that this map
is linear
, ...,
for each
, ...,
Problem 20
Prove that for any transformation
that is rank one, the map given by composing the operator with itself
satisfies
Problem 21
Show that for any space
is isomorphic to
of dimension
. It is often denoted
. Conclude that
Problem 22
Show that any linear map is the sum of maps of rank one.
Problem 23
Is "is homomorphic to" an equivalence relation? (Hint: the difficulty is to decide on an appropriate meaning for the
quoted phrase.)
Problem 24
Show that the rangespaces and nullspaces of powers of linear maps
form descending
and ascending
is such that
.
Similarly,
.
Solutions
then
Footnotes
[1] More information on many-to-one maps is in the appendix.
shows that, if we know the value of the map on the vectors in a basis, then we can compute the value of the map on
any vector at all. We just need to find the 's to express with respect to the basis.
This section gives the scheme that computes, from the representation of a vector in the domain
, the
, ...,
with domain
and codomain
(fixing
as the bases for these spaces) that is determined by this action on the vectors in the domain's basis.
To compute the action of this map on any vector at all from the domain, we first express
and
with
and
(these are easy to check). Then, as described in the preamble, for any member
image
in terms of the
's.
Thus,
with
then
For instance,
with
then
We will express computations like the one above with a matrix notation.
and
by a column vector
and
from
and
and
with bases
and
is a linear map. If
then
with respect to
, and that
Observe that the number of columns of the matrix is the dimension of the domain of the map, and the number of
rows
is the dimension of the codomain.
Example 1.3
If
is given by
then where
the action of
on
is given by
We will use lower case letters for a map, upper case for the matrix, and lower case again for the entries of the matrix.
Thus for the map , the matrix representing it is
, with entries
.
Theorem 1.4
Assume that
and
is a linear map. If
and
is represented by
is this.
and
with bases
and
, and that
Proof
Problem 18.
We will think of the matrix
.
Definition 1.5
The matrix-vector product of a
matrix and a
vector is this.
The point of Definition 1.2 is to generalize Example 1.1, that is, the point of the definition is Theorem 1.4, that the
matrix describes how to get from the representation of a domain vector with respect to the domain's basis to the
representation of its image in the codomain with respect to the codomain's basis. With Definition 1.5, we can restate
this as: application of a linear map is represented by the matrix-vector product of the map's representative and the
vector's representative.
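A hedged sketch of this restatement, with an illustrative matrix and the standard bases for domain and codomain: applying the map directly and multiplying the representative matrix by the representative vector give the same answer.

import numpy as np

H = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])     # represents some h from R^3 to R^2 wrt the standard bases
def h(v):
    # the map itself, written out coordinate by coordinate
    return np.array([v[0] + 2 * v[1], v[1] + 3 * v[2]])
v = np.array([2.0, -1.0, 1.0])
assert np.allclose(h(v), H @ v)     # the representation of h(v) equals Rep(h) times Rep(v)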
Example 1.6
With the matrix from Example 1.3 we can calculate where that map sends this vector.
To find
, by
Example 1.7
Let
For each vector in the domain's basis, we find its image under the map.
Then we find the representation of each image with respect to the codomain's basis
(these are easily checked). Finally, adjoining these representations gives the matrix representing
.
with respect to
We can illustrate Theorem 1.4 by computing the matrix-vector product representing the following statement about
the projection map.
Representing this vector from the domain with respect to the domain's basis
checks that the map's action is indeed reflected in the operation of the matrix. (We will sometimes compress these
three displayed equations into one
both as a domain basis and as a codomain basis is natural. Now, we find the image under the map of each vector in the domain's basis.
Then we represent these images with respect to the codomain's basis. Because this basis is
, vectors are
represented by themselves. Finally, adjoining the representations gives the matrix representing the map.
The advantage of this scheme is that just by knowing how to represent the image of the two basis vectors, we get a
formula that tells us the image of any vector at all; here a vector rotated by
.
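A hedged sketch, assuming the usual rotation matrix with respect to the standard basis: the one matrix gives the rotated image of any vector, checked here for a quarter turn applied to the first standard basis vector.

import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation through theta wrt the standard basis
e1 = np.array([1.0, 0.0])
assert np.allclose(R @ e1, np.array([0.0, 1.0]))   # a quarter turn carries e1 to e2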
We have already seen the addition and scalar multiplication operations of matrices and the dot product operation of
vectors. Matrix-vector multiplication is a new operation in the arithmetic of vectors and matrices. Nothing in
Definition 1.5 requires us to view it in terms of representations. We can get some insight into this operation by
turning away from what is being represented, and instead focusing on how the entries combine.
Example 1.9
In the definition the width of the matrix equals the height of the vector. Hence, the first product below is defined
while the second is not.
One reason that this product is not defined is purely formal: the definition requires that the sizes match, and these
sizes don't match. Behind the formality, though, is a reason why we will leave it undefined: the matrix represents a
map with a three-dimensional domain while the vector represents a member of a two-dimensional space.
A good way to view a matrix-vector product is as the dot products of the rows of the matrix with the column vector.
Looked at in this row-by-row way, this new operation generalizes dot product.
Matrix-vector product can also be viewed column-by-column.
Example 1.10
The result has the columns of the matrix weighted by the entries of the vector. This way of looking at it brings us
back to the objective stated at the start of this section, to compute
as
.
We began this section by noting that the equality of these two enables us to compute the action of the map on any argument, knowing only its action on the vectors of a basis. We now compute that action by taking the matrix-vector product of the matrix representing the map and the vector representing the argument. In this way, any linear map is represented with respect to some bases by a matrix. In the next subsection,
we will show the converse, that any matrix represents a linear map.
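A hedged sketch of the two views just described, with an illustrative matrix and vector: the row-by-row view computes each entry as a dot product, while the column-by-column view weights the columns of the matrix by the entries of the vector.

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
v = np.array([7.0, 8.0])
row_by_row = np.array([row @ v for row in A])            # each entry is a dot product
column_by_column = v[0] * A[:, 0] + v[1] * A[:, 1]       # columns weighted by the entries of v
assert np.allclose(A @ v, row_by_row)
assert np.allclose(A @ v, column_by_column)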
Exercises
This exercise is recommended for all readers.
Problem 1
Multiply the matrix
2.
3.
Problem 2
Perform, if possible, each matrix-vector multiplication.
1.
2.
3.
This exercise is recommended for all readers.
Problem 3
Solve this matrix equation.
where does
to
that sends
go?
1. Represent
2. Represent
with respect to
with respect to
where
where
.
.
with respect to
2.
with respect to
3.
with respect to
where
, given by
where
where
, given by
and
, given by
4.
with respect to
5.
where
with respect to
and
where
, given by
, given by
Problem 8
Represent the identity map on any nontrivial space with respect to
, where
is any basis.
Problem 9
Represent, with respect to the natural basis, the transpose transformation on the space
of
matrices.
Problem 10
Assume that
the
2.
3.
,
,
Problem 11
Example 1.8 shows how to represent the rotation transformation of the plane with respect to the standard basis.
Express these other transformations also with respect to the standard basis.
1. the dilation map
2. the reflection map
4. Using
Problem 13
Suppose that
image
the
has components
, ...,
, the column vector that is all zeroes except for a single one in the
-th position.
This exercise is recommended for all readers.
Problem 15
For each vector space of functions of one real variable, represent the derivative transformation with respect to
.
1.
2.
3.
Problem 16
Find the range of the linear transformation of
1.
2.
3. a matrix of the form
This exercise is recommended for all readers.
Problem 17
Can one matrix represent two different linear maps? That is, can
Problem 18
Prove Theorem 1.4.
This exercise is recommended for all readers.
Problem 19
Example 1.8 shows how to represent rotation of all vectors in the plane through an angle
about the
-axis is a transformation of
Represent it with respect to the standard bases. Arrange the rotation so that to someone whose feet are at the
origin and whose head is at
, the movement appears clockwise.
2. Repeat the prior item, only rotate about the -axis instead. (Put the person's head at .)
3. Repeat, about the -axis.
4. Extend the prior item to
-plane".)
Problem 20 (Schur's Triangularization Lemma)
1. Let
be a subspace of
vector from
with respect to
) with
respect to
?
2. What about maps?
3. Fix a basis
for
form a strictly increasing chain of subspaces. Show that for any linear map
of subspaces of
such that
there is a chain
for each
respect to
is upper-triangular (that is, each entry
5. Is an upper-triangular representation unique?
with
with
is zero).
Solutions
is described by a matrix
, with respect to
In this subsection, we will show the converse, that each matrix represents a linear map.
Recall that, in the definition of the matrix representation of a linear map, the number of columns of the matrix is the
dimension of the map's domain and the number of rows of the matrix is the dimension of the map's codomain. Thus,
for instance, a
matrix cannot represent a map from
to
. The next result says that, beyond this
restriction on the dimensions, there are no other limitations: the
fix any
and any
in
is,
is
defined
to
be
and
is linear. If
by the matrix
then
while
represented by
with respect to
represented by
maps
with respect to
is this map.
axis.
So not only is any linear map described by a matrix but any matrix describes a linear map. This means that we can,
when convenient, handle linear maps entirely as matrices, simply doing the computations, without having to worry that
a matrix of interest does not represent a linear map on some pair of spaces of interest. (In practice, when we are
working with a matrix but no spaces or bases have been specified, we will often take the domain and codomain to be
and
and use the standard bases. In this case, because the representation is transparent the representation
with respect to the standard basis of
Consequently, the column space of
is
the column space of the matrix equals the range of the map.
is often denoted by
.)
With the theorem, we have characterized linear maps as those maps that act in this matrix way. Each linear map is
described by a matrix and each matrix describes a linear map. We finish this section by illustrating how a matrix can
be used to tell things about its maps.
Theorem 2.3
The rank of a matrix equals the rank of any map that it represents.
Proof
Suppose that the matrix
is
bases
and
and
of dimension
and
, with
. The rank of
The rank of the matrix is its column rank (or its row rank; the two are equal). This is the dimension of the column
space of the matrix, which is the span of the set of column vectors
.
To see that the two spans have the same dimension, recall that a representation with respect to a basis gives an
isomorphism
. Under this isomorphism, there is a linear relationship among members of the
rangespace if and only if the same relationship holds in the column space, e.g.,
and only if
if
independent if and only if the corresponding subset of the column space is linearly independent. This means that the
size of the largest linearly independent subset of the rangespace equals the size of the largest linearly independent
subset of the column space, and so the two spaces have the same dimension.
Example 2.4
Any map represented by
must, by definition, be from a three-dimensional domain to a four-dimensional codomain. In addition, because the
rank of this matrix is two (we can spot this by eye or get it with Gauss' method), any map represented by this matrix
has a two-dimensional rangespace.
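A hedged sketch making the same point numerically, with an illustrative 4x3 matrix of rank two (the example's own matrix is not reproduced here).

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0]])
# Three columns: a three-dimensional domain.  Four rows: a four-dimensional codomain.
# Rank two: any map represented by this matrix has a two-dimensional rangespace.
assert np.linalg.matrix_rank(A) == 2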
Corollary 2.5
Let
. Then
Proof
For the first half, the dimension of the rangespace of
theorem. Since the dimension of the codomain of
is the rank of
, if the rank of
by the
equals the
number of rows, then the dimension of the rangespace equals the dimension of the codomain. But a subspace with
the same dimension as its superspace must equal that superspace (a basis for the rangespace is a linearly independent
subset of the codomain, whose size is equal to the dimension of the codomain, and so this set is a basis for the
codomain).
For the second half, a linear map is one-to-one if and only if it is an isomorphism between its domain and its range,
that is, if and only if its domain has the same dimension as its range. But the number of columns in
is the
dimension of
's range.
The above results end any confusion caused by our use of the word "rank" to mean apparently different things when
applied to matrices and when applied to maps. We can also justify the dual use of "nonsingular". We've defined a
matrix to be nonsingular if it is square and is the matrix of coefficients of a linear system with a unique solution, and
we've defined a linear map to be nonsingular if it is one-to-one.
Corollary 2.6
A square matrix represents nonsingular maps if and only if it is a nonsingular matrix. Thus, a matrix represents an
isomorphism if and only if it is square and nonsingular.
Proof
Immediate from the prior result.
Example 2.7
Any map from
to
represented by
Exercises
This exercise is recommended for all readers.
Problem 1
Decide if the vector is in the column space of the matrix.
1.
2.
3.
,
This exercise is recommended for all readers.
Problem 2
Decide if each vector lies in the range of the map from
to
the matrix.
1.
2.
,
This exercise is recommended for all readers.
Problem 3
Consider this matrix, representing a transformation of
mapped?
and
by this matrix?
Problem 4
What
transformation
of
is
and
represented
with
respect
to
by this matrix?
to
and
by this matrix.
Problem 6
Example 2.8 gives a matrix that is nonsingular, and is therefore associated with maps that are nonsingular.
1.
2.
3.
4.
5.
Find the set of column vectors representing the members of the nullspace of any map represented by this matrix.
Find the nullity of any such map.
Find the set of column vectors representing the members of the rangespace of any map represented by this matrix.
Find the rank of any such map.
Check that rank plus nullity equals the dimension of the domain.
This exercise is recommended for all readers.
Problem 7
Because the rank of a matrix equals the rank of any map it represents, if one matrix represents two different maps
(where
) then the dimension of the rangespace of equals
the dimension of the rangespace of
be an
vector representing
with respect to
linear transformation of
and
with respect to
, the column
. Show that is a
Problem 9
Example 2.2 shows that changing the pair of bases can change the map that a matrix represents, even though the
domain and codomain remain the same. Could the map ever not change? Is there a matrix
, vector spaces and
, and associated pairs of bases
represented by
with respect to
and
(with
or
by this
Problem 12
The fact that for any linear map the rank plus the nullity equals the dimension of the domain shows that a necessary
condition for the existence of a homomorphism between two spaces, onto the second space, is that there be no gain
in dimension. That is, where
is onto, the dimension of
must be less than or equal to the
dimension of
1. Show that this (strong) converse holds: no gain in dimension implies that there is a homomorphism and, further,
any matrix with the correct size and correct rank represents such a map.
2. Are there bases for
to
plane subspace of
Problem 13
Let
be an
. Fix a basis
for
Problem 14
Let
1. Suppose that
by the matrix
by expressing it in terms of
and
. Give the matrix
with respect to
by expressing it in terms of
and
is represented with respect to
by
and
by
with respect to
.
is represented with
by expressing it in terms of
of a map is related to the representation of that map. In later subsections we will see how to represent the composition of two maps and the inverse of a map.
The easiest way to see how the representations of the maps combine to represent the map sum is with an example.
Example 1.1
Suppose that
by these matrices.
and
is a transformation represented by
entry of
Definition 1.3
The sum of two same-sized matrices is their entry-by-entry sum. The scalar multiple of a matrix is the result of
entry-by-entry scalar multiplication.
Remark 1.4
These extend the vector addition and scalar multiplication operations that we defined in the first chapter.
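As a quick illustration of Definition 1.3, here is a small Python sketch (ours, not part of the text) that forms the entry-by-entry sum and the scalar multiple of matrices given as lists of rows.

def matrix_sum(A, B):
    # entry-by-entry sum of two same-sized matrices, given as lists of rows
    return [[a + b for a, b in zip(row_a, row_b)] for row_a, row_b in zip(A, B)]

def scalar_multiple(k, A):
    # entry-by-entry scalar multiplication
    return [[k * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matrix_sum(A, B))       # [[6, 8], [10, 12]]
print(scalar_multiple(3, A))  # [[3, 6], [9, 12]]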
Theorem 1.5
Let
by the matrices
by
by
and
, and let
Proof
Problem 2; generalize the examples above.
A notable special case of scalar multiplication is multiplication by zero. For any map
homomorphism and for any matrix
is the zero
Example 1.6
The zero map from any three-dimensional space to any two-dimensional space is represented by the
matrix
Exercises
This exercise is recommended for all readers.
Problem 1
Perform the indicated operations, if defined.
1.
2.
3.
4.
5.
Problem 2
Prove Theorem 1.5.
1. Prove that matrix addition represents addition of linear maps.
2. Prove that matrix scalar multiplication represents scalar multiplication of linear maps.
This exercise is recommended for all readers.
Problem 3
zero
and
, and
are scalars.
.
.
5.
6. Matrices have an additive inverse
7.
8.
Problem 4
Fix domain and codomain spaces. In general, one matrix can represent many different maps with respect to different
bases. However, prove that a zero matrix represents only a zero map. Are there other such matrices?
This exercise is recommended for all readers.
Problem 5
Let
and
and
to
is isomorphic to
.
This exercise is recommended for all readers.
Problem 6
Show that it follows from the prior questions that for any six transformations
scalars
such that
there are
question.)
Problem 7
The trace of a square matrix is the sum of the entries on the main diagonal (the
entry, etc.;
we will see the significance of the trace in Chapter Five). Show that
entry is the
entry of
. Verify these
identities.
1.
2.
This exercise is recommended for all readers.
Problem 9
A square matrix is symmetric if each
shows that
To see how the representation of the composite arises out of the representations of the two compositors, consider an
example.
Example 2.2
Let
and
, fix bases
representations.
The representation of
we fix a
is the product of
is the product of
, represent
of
's vector.
's gives
's vector.
of that. The
Definition 2.3
The matrix-multiplicative product of the
matrix
and the
matrix
is the
matrix
where
-th column.
Example 2.4
The matrices from Example 2.2 combine in this way.
Example 2.5
Theorem 2.6
A composition of linear maps is represented by the matrix product of the representatives.
Proof
(This argument parallels Example 2.2.) Let
respect to bases
component of
and so the
, and
, of sizes
be represented by
,
, and
. For any
and
, the
with
-th
is
-th component of
and
is this.
's.
The theorem is an example of a result that supports a definition. We can picture what the definition and theorem
together say with this arrow diagram ("wrt" abbreviates "with respect to").
Above the arrows, the maps show that the two ways of going from
else by way of
to
(this is just the definition of composition). Below the arrows, the matrices indicate that the product does the same
thing multiplying
into the column vector
has the same effect as multiplying the column first by
and then multiplying the result by
The definition of the matrix-matrix product operation does not restrict us to view it as a representation of a linear
map composition. We can get insight into this operation by studying it as a mechanical procedure. The striking thing
is the way that rows and columns combine.
One aspect of that combination is that the sizes of the matrices involved are significant. Briefly, an m×r matrix times an r×n matrix gives an m×n matrix.
Example 2.7
This product is not defined
because the number of columns on the left does not equal the number of rows on the right.
In terms of the underlying maps, the fact that the sizes must match up reflects the fact that matrix multiplication is
defined only when a corresponding function composition
is possible.
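To make the row-times-column description concrete, here is a small Python sketch (ours, not the book's). It assumes matrices are given as lists of rows and checks that the sizes match before forming the entries.

def matrix_product(G, H):
    # the i,j entry of GH is the dot product of row i of G with column j of H
    m, r = len(G), len(G[0])
    if len(H) != r:
        raise ValueError("not defined: columns of G must equal rows of H")
    n = len(H[0])
    return [[sum(G[i][k] * H[k][j] for k in range(r)) for j in range(n)]
            for i in range(m)]

G = [[1, 2], [3, 4], [5, 6]]    # a 3x2 matrix
H = [[1, 0, 1], [0, 1, 1]]      # a 2x3 matrix
print(matrix_product(G, H))     # a 3x3 result: [[1, 2, 3], [3, 4, 7], [5, 6, 11]]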
Remark 2.8
The order in which these things are written can be confusing. In the "
equation, the number written first
is the dimension of
"
is applied, that
" aloud as " following ".) That order then carries over to matrices:
is represented by
.
Another aspect of the way that rows and columns combine in the matrix product operation is that in the definition of
the
entry
may be unequal
Example 2.9
Matrix multiplication hardly ever commutes. Test that by multiplying randomly chosen matrices both ways.
Example 2.10
Commutativity can fail more dramatically:
while
while
. True, this
is not linear
and we might have hoped that linear functions commute, but this perspective shows that the failure of commutativity
for matrix multiplication fits into a larger context.
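A quick numeric check of that failure of commutativity, with two made-up 2×2 matrices rather than the book's examples:

def mult2(A, B):
    # multiply two 2x2 matrices given as lists of rows
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
print(mult2(A, B))  # [[2, 1], [1, 1]]
print(mult2(B, A))  # [[1, 1], [1, 2]] -- a different matrix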
Except for the lack of commutativity, matrix multiplication is algebraically well-behaved. Below are some nice
properties; more are in Problem 10 and Problem 11.
Theorem 2.12
If F, G, and H are matrices and the matrix products are defined, then the product is associative, (FG)H = F(GH), and distributes over matrix addition, F(G + H) = FG + FH and (G + H)F = GF + HF.
Proof
Associativity holds because matrix multiplication represents function composition, which is associative: the maps (f∘g)∘h and f∘(g∘h) are equal, as both send v to f(g(h(v))). Distributivity is similar. For instance, the first one goes f∘(g+h)(v) = f((g+h)(v)) = f(g(v) + h(v)) = f(g(v)) + f(h(v)) = (f∘g)(v) + (f∘h)(v) (the third equality uses the linearity of f).
Remark 2.13
We could alternatively prove that result by slogging through the indices. For example, associativity goes: the
-th entry of
(where
is
, and
are
, and
matrices), distribute
to get the
's
entry of
Contrast these two ways of verifying associativity, the one in the proof and the one just above. The argument just
above is hard to understand in the sense that, while the calculations are easy to check, the arithmetic seems
unconnected to any idea (it also essentially repeats the proof of Theorem 2.6 and so is inefficient). The argument in
the proof is shorter, clearer, and says why this property "really" holds. This illustrates the comments made in the
preamble to the chapter on vector spaces: at least some of the time an argument from higher-level constructs is
clearer.
We have now seen how the representation of the composition of two linear maps is derived from the representations
of the two maps. We have called the combination the product of the two matrices. This operation is extremely
important. Before we go on to study how to represent the inverse of a linear map, we will explore it some more in the
next subsection.
Exercises
This exercise is recommended for all readers.
Problem 1
Compute, or state "not defined".
1.
2.
3.
4.
This exercise is recommended for all readers.
Problem 2
Where
3.
4.
Problem 3
Which products are defined?
1.
2.
3.
4.
times
times
times
times
This exercise is recommended for all readers.
Problem 4
Give the size of the product or state "not defined".
1.
2.
3.
4.
a
a
a
a
matrix times a
matrix times a
matrix times a
matrix times a
matrix
matrix
matrix
matrix
Problem 6
As Definition 2.3 points out, the matrix product operation generalizes the dot product. Is the dot product of a
row vector and a
with respect to
where
. Show
that the product of this matrix with itself is defined; what map does it represent?
Problem 8
Show that composition of linear transformations on
Problem 9
Why is matrix multiplication not defined as entry-wise multiplication? That would be easier, and commutative too.
This exercise is recommended for all readers.
Problem 10
1. Prove that
2. Prove that
and
for positive integers
for any positive integer and scalar
.
.
? Is
?
2. How does matrix multiplication interact with linear combinations: is
Is
Problem 12
We can ask how the matrix product operation interacts with the transpose operation.
1. Show that
2. A square matrix is symmetric if each
transpose. Show that the matrices
.
entry equals the
and
about an axis is a linear map. Show that linear maps do not commute by showing
matrices?
onto the
and
with respect to
. Show that this matrix plays the role in matrix multiplication that the number
multiplication:
Problem 20
In real number algebra, quadratic equations have at most two solutions. That is not so with matrix algebra. Show that
the
matrix equation
has more than two solutions, where is the identity matrix (this matrix has
and
Problem 21
1. Prove that for any
matrix
the matrix
(where
. If
is the
's in its
to be
, such that
radians
counterclockwise.)
Problem 22
The infinite-dimensional space
map.
Show that the two maps don't commute
and
is this.
be the shift
of the right one. For instance, here a second row and a third column combine to make a
entry.
We can view this as the left matrix acting by multiplying its rows, one at a time, into the columns of the right matrix.
Of course, another perspective is that the right matrix uses its columns to act on the left matrix's rows. Below, we
will examine actions from the left and from the right for some simple matrices.
The first case, the action of a zero matrix, is very easy.
Example 3.1
Multiplying by an appropriately-sized zero matrix from the left or from the right results in a zero matrix.
Definition 3.2
A matrix with all zeroes except for a one in the i,j entry is an i,j unit matrix.
Example 3.3
This is the
unit matrix with three rows and two columns, multiplying from the left.
of the result.
Example 3.4
Rescaling these matrices simply rescales the result. This is the action from the left of the matrix that is twice the one
in the prior example.
And this is the action of the matrix that is minus three times the one from the prior example.
Next in complication are matrices with two nonzero entries. There are two cases. If a left-multiplier has entries in
different rows then their actions don't interact.
Example 3.5
But if the left-multiplier's nonzero entries are in the same row then that row of the result is a combination.
Example 3.6
and
, the columns of
times
is indeed the same as the right side of GH, except for the extra parentheses (the ones marking the columns as column
vectors). The other equation is similarly easy to recognize.
An application of those observations is that there is a matrix that just copies out the rows and columns.
Definition 3.8
The main diagonal (or principal diagonal or diagonal) of a square matrix goes from the upper left to the lower
right.
Definition 3.9
An identity matrix is square, with all entries zero except for ones in the main diagonal.
Example 3.10
The identity matrix leaves its multiplicand unchanged, from the left and from the right.
Example 3.11
So does the identity matrix of any other size. An identity matrix is the identity element with respect to matrix multiplication.
We next see two ways to generalize the identity matrix.
The first is that if the ones are relaxed to arbitrary reals, the resulting matrix will rescale whole rows or columns.
Definition 3.12
A diagonal matrix is square and has zeros off the main diagonal.
Example 3.13
From the left, the action of multiplication by a diagonal matrix is to rescale the rows.
The second generalization of identity matrices is that we can put a single one in each row and column in ways other
than putting them down the diagonal.
Definition 3.14
A permutation matrix is square and is all zeros except for a single one in each row and column.
Example 3.15
From the left these matrices permute rows.
We finish this subsection by applying these observations to get matrices that perform Gauss' method and
Gauss-Jordan reduction.
Example 3.16
We have seen how to produce a matrix that will rescale rows. Multiplying by this diagonal matrix rescales the
second row of the other by a factor of three.
We have seen how to produce a matrix that will swap rows. Multiplying by this permutation matrix swaps the first
and third rows.
To see how to perform a pivot, we observe something about those two examples. The matrix that rescales the second
row by a factor of three arises in this way from the identity.
Similarly, the matrix that swaps first and third rows arises in this way.
Example 3.17
The
.
Definition 3.18
The elementary reduction matrices are obtained from identity matrices with one Gaussian operation. We denote
them:
1.
for
2.
for
3.
for
Lemma 3.19
Gaussian reduction can be done through matrix multiplication.
1. If
2. If
then
then
3. If
then
.
.
.
Proof
Clear.
Example 3.20
This is the first system, from the first chapter, on which we performed Gauss' method.
It can be reduced with matrix multiplication. Swap the first and third rows,
and to finish, clear the third column and then the second column.
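Here is a small Python check of Lemma 3.19, using a made-up 3×3 matrix rather than the book's system: multiplying from the left by a permutation matrix swaps rows, and multiplying by an identity with one extra off-diagonal entry adds a multiple of one row to another.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
swap_1_3 = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]      # swap the first and third rows
add_2_to_3 = [[1, 0, 0], [0, 1, 0], [0, 2, 1]]    # add 2 times row two to row three
print(matmul(swap_1_3, M))    # rows one and three exchanged
print(matmul(add_2_to_3, M))  # third row becomes [12, 15, 18]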
We have observed the following result, which we shall use in the next subsection.
Corollary 3.22
For any matrix H there are elementary reduction matrices R_1, ..., R_r such that R_r · R_{r-1} ⋯ R_1 · H is in reduced echelon form.
Exercises
This exercise is recommended for all readers.
Problem 1
Predict the result of each multiplication by an elementary reduction matrix, and then check by multiplying it out.
1.
2.
3.
4.
5.
This exercise is recommended for all readers.
Problem 2
The need to take linear combinations of rows and columns in tables of numbers arises often in practice. For instance,
this is a map of part of Vermont and New York.
In part because of Lake Champlain, there are no roads directly connecting some pairs of towns. For instance, there is no
way to go from Winooski to Grand Isle without going through Colchester. (Of course, many other roads and towns have
been left off to simplify the graph. From top to bottom of this map is about forty miles.)
1. The incidence matrix of a map is the square matrix whose i, j entry is the number of roads from city i to city j. Produce the incidence matrix of this map (take the cities in alphabetical order).
2. A matrix is symmetric if it equals its transpose. Show that an incidence matrix is symmetric. (These are all
two-way streets. Vermont doesn't have many one-way streets.)
3. What is the significance of the square of the incidence matrix? The cube?
Problem 3
The table below shows the hours of regular and overtime work done by each employee, along with the hourly wage for each kind of work. Use matrix multiplication to compute each employee's pay.

            regular  overtime
Alan          40        12
Betty         35
Catherine     40        18
Donald        28

wage:  regular $25.00, overtime $45.00
(Remark. This illustrates, as did the prior problem, that in practice we often want to compute linear combinations of
rows and columns in a context where we really aren't interested in any associated linear maps.)
Problem 4
Find the product of this matrix with its transpose.
Problem 6
Does the identity matrix represent the identity map if the bases are unequal?
Problem 7
Show that every multiple of the identity commutes with every square matrix. Are there other matrices that commute
with all square matrices?
Problem 8
Prove or disprove: nonsingular matrices commute.
This exercise is recommended for all readers.
Problem 9
Show that the product of a permutation matrix and its transpose is an identity matrix.
Problem 10
Show that if the first and second rows of
Problem 11
Describe the product of two diagonal matrices.
Problem 12
Write
. Generalize.
Problem 13
Show that if a matrix has a row of zeros then any product having it as the left factor (if defined) has a row of zeros. Does that work for columns?
Problem 14
Show that the set of unit matrices forms a basis for
Problem 15
Find the formula for the
matrix and a
matrix?
2. Matrix multiplication is associative, so all associations yield the same result. The cost in number of
multiplications, however, varies. Find the association requiring the fewest real number multiplications to compute
the matrix product of a
matrix, a
matrix, a
matrix, and a
matrix.
3. (Very hard.) Find a way to multiply two 2×2
matrices using only seven multiplications instead of the eight
suggested by the naive approach.
? Problem 22
If
and
iff
iff
.
.
.
.
Problem 24
Prove (where
to
where
1.
2.
3.
is an
-dimensional space
is a basis) that
iff
iff
iff
with respect
. Conclude
;
;
and
4.
iff
5. (Requires the Direct Sum subsection, which is optional.)
;
iff
.
(Ackerson 1955)
Solutions
References
Ackerson, R. H. (Dec. 1955), "A Note on Vector Spaces", American Mathematical Monthly (American
Mathematical Society) 62 (10): 721.
Liebeck, Hans. (Dec. 1966), "A Proof of the Equality of Column Rank and Row Rank of a Matrix", American
Mathematical Monthly (American Mathematical Society) 73 (10): 1114.
William Lowell Putnam Mathematical Competition, Problem A-5, 1990.
Linear Algebra/Inverses
We now consider how to represent the inverse of a linear map.
We start by recalling some facts about function inverses.[1] Some functions have no inverse, or have an inverse on
the left side or right side only.
Example 4.1
Where
and
is the embedding
the composition
We say
. However,
doesn't give the identity map here is a vector that is not sent to itself under
(An example of a function with no inverse on either side is the zero transformation on R².) Some functions have a two-sided inverse map, another function that is the inverse of the first, both from the left and from the right. For instance, the map given by doubling a vector has the two-sided inverse given by halving a vector. In this subsection we will focus on two-sided inverses. The appendix shows that a function has a two-sided inverse if and only if it is both one-to-one and onto. The appendix also shows that if a function has a two-sided inverse then it is unique, and so it is called "the" inverse, and is denoted f⁻¹. So our purpose in this subsection is, where a linear map h has an inverse, to find the relationship between the representations of h and of h⁻¹ (recall that we have shown, in Section II of this chapter, that if a linear map has an inverse then the inverse is a linear map also).
Definition 4.2
A matrix G is a left inverse matrix of the matrix H if GH is the identity matrix. It is a right inverse matrix if HG is the identity. A matrix H with a two-sided inverse is an invertible matrix; that two-sided inverse is called the inverse matrix and is denoted H⁻¹.
Because of the correspondence between linear maps and matrices, statements about map inverses translate into
statements about matrix inverses.
Lemma 4.3
If a matrix has both a left inverse and a right inverse then the two are equal.
Theorem 4.4
A matrix is invertible if and only if it is nonsingular.
Proof
(For both results.) Given a matrix H, fix spaces of appropriate dimension for the domain and codomain and fix bases for those spaces. With respect to those bases, H represents a map h; the statements are true about the map and therefore they are true about the matrix.
Lemma 4.5
A product of invertible matrices is invertible: if G and H are invertible and if GH is defined then GH is invertible and (GH)⁻¹ = H⁻¹G⁻¹.
Proof
(This is just like the prior proof except that it requires two maps.) Fix appropriate spaces and bases and consider the
represented maps
and
. Note that
is a two-sided map inverse of
since
and
. This equality
Beyond its place in our general program of seeing how to represent map operations, another reason for our interest in
inverses comes from solving linear systems. A linear system is equivalent to a matrix equation, as here.
and
is mapped by
to the result
to get
. Then
? If we could
.
Answer:
, and
Remark 4.7
Why solve systems this way, when Gauss' method takes less arithmetic (this assertion can be made precise by
counting the number of arithmetic operations, as computer algorithm designers do)? Beyond its conceptual appeal of
fitting into our program of discovering how to represent the various map operations, solving linear systems by using
the matrix inverse has at least two advantages.
First, once the work of finding an inverse has been done, solving a system with the same coefficients but different
constants is easy and fast: if we change the entries on the right of the system ( ) then we get a related problem
In applications, solving many systems having the same matrix of coefficients is common.
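A small Python sketch of that point, with our own numbers: once the inverse is in hand (here computed with the 2×2 ad − bc formula that appears below as Corollary 4.12), each new right-hand side costs only one matrix-vector multiplication.

def inv2(A):
    # closed-form inverse of a 2x2 matrix; det is assumed nonzero
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

A_inv = inv2([[1.0, 2.0], [3.0, 4.0]])
for b in ([5.0, 6.0], [7.0, 8.0], [0.0, 1.0]):
    print(apply(A_inv, b))   # the solution x of A x = b, one multiplication per b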
Another advantage of inverses is that we can explore a system's sensitivity to changes in the constants. For example,
tweaking the on the right of the system ( ) to
to show that
changes by
moves by
for example, to decide how accurately data must be specified in a linear model to ensure that the solution has a
desired accuracy.
We finish by describing the computational procedure usually used to find the inverse matrix.
Lemma 4.8
A matrix is invertible if and only if it can be written as the product of elementary reduction matrices. The inverse can
be computed by applying to the identity matrix the same row steps, in the same order, as are used to Gauss-Jordan
reduce the invertible matrix.
Proof
A matrix
is invertible if and only if it is nonsingular and thus Gauss-Jordan reduces to the identity. By Corollary
, etc., gives
shows that
, etc., yields the inverse
of
.
Example 4.9
To find the inverse of
we do Gauss-Jordan reduction, meanwhile performing the same operations on the identity. For clerical convenience
we write the matrix and the identity side-by-side, and do the reduction steps together.
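The following Python sketch (ours, not the book's example) carries out the procedure of Lemma 4.8: augment the matrix with the identity, Gauss-Jordan reduce the left half, and read the inverse off the right half. It assumes the matrix is square and invertible and ignores numerical issues.

def inverse(A):
    n = len(A)
    # write the matrix and the identity side by side
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        # find a row with a nonzero entry in this column and swap it up
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # rescale the pivot row, then clear the rest of the column
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

print(inverse([[1.0, 2.0], [3.0, 4.0]]))  # [[-2.0, 1.0], [1.5, -0.5]]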
Example 4.10
This one happens to start with a row swap.
Example 4.11
A non-invertible matrix is detected by the fact that the left half won't reduce to the identity.
This procedure will find the inverse of a general n×n matrix. The 2×2 case is handy.
Corollary 4.12
The inverse of a 2×2 matrix with rows (a, b) and (c, d) exists if and only if ad − bc is not zero, and then it is 1/(ad − bc) times the matrix with rows (d, −b) and (−c, a).
Proof
This computation is Problem 10.
We have seen here, as in the Mechanics of Matrix Multiplication subsection, that we can exploit the correspondence
between linear maps and matrices. So we can fruitfully study both maps and matrices, translating back and forth to
whichever helps us the most.
Over the entire four subsections of this section we have developed an algebra system for matrices. We can compare
it with the familiar algebra system for the real numbers. Here we are working not with numbers but with matrices.
We have matrix addition and subtraction operations, and they work in much the same way as the real number
operations, except that they only combine same-sized matrices. We also have a matrix multiplication operation and
an operation inverse to multiplication. These are somewhat like the familiar real number operations (associativity,
and distributivity over addition, for example), but there are differences (failure of commutativity, for example). And,
we have scalar multiplication, which is in some ways another extension of real number multiplication. This matrix
system provides an example that algebra systems other than the elementary one can be interesting and useful.
Exercises
Problem 1
Supply the intermediate steps in Example 4.10.
This exercise is recommended for all readers.
Problem 2
Use Corollary 4.12 to decide if each matrix has an inverse.
1.
2.
3.
This exercise is recommended for all readers.
Problem 3
For each invertible matrix in the prior problem, use Corollary 4.12 to find its inverse.
This exercise is recommended for all readers.
Problem 4
Find the inverse, if it exists, by using the Gauss-Jordan method. Check the answers for the
matrices with
Corollary 4.12.
1.
2.
3.
4.
5.
6.
This exercise is recommended for all readers.
Problem 5
What matrix has this one for its inverse?
Problem 6
How does the inverse operation interact with scalar multiplication and addition of matrices?
1. What is the inverse of
2. Is
?
?
Problem 8
Is
invertible?
Problem 9
For each real number
let
Show that
Problem 10
Do the calculations for the proof of Corollary 4.12.
Problem 11
Show that this matrix
has infinitely many right inverses. Show also that it has no left inverse.
Problem 12
In Example 4.1, how many left inverses has
Problem 13
If a matrix has infinitely many right-inverses, can it have infinitely many left-inverses? Must it have?
This exercise is recommended for all readers.
Problem 14
Assume that
is a zero matrix.
Problem 15
Prove that if
if and only if
itself
is square and if
. Generalize.
be diagonal. Describe
appropriately.
Problem 18
Prove that any matrix row-equivalent to an invertible matrix is also invertible.
Problem 19
The first question below appeared as Problem 15 in the Matrix Multiplication subsection.
1. Show that the rank of the product of two matrices is less than or equal to the minimum of the rank of each.
2. Show that if G and H are square then GH is the identity matrix if and only if HG is.
Problem 20
Show that the inverse of a permutation matrix is its transpose.
Problem 21
The first two parts of this question appeared as Problem 12. of the Matrix Multiplication subsection
1. Show that
2. A square matrix is symmetric if each
.
entry equals the
map.
2. Prove that the composition of the derivatives
and
matrices?
Problem 24
Is the relation "is a two-sided inverse of" transitive? Reflexive? Symmetric?
Problem 25
Prove: if the sum of the elements of a square matrix is
matrix is
. (Wilansky 1951)
Solutions
Footnotes
[1] More information on function inverses is in the appendix.
References
Wilansky, Albert (Nov. 1951), "The Row-Sum of the Inverse Matrix", American Mathematical Monthly
(American Mathematical Society) 58 (9): 614.
for
, the vector
and
With our point of view that the objects of our studies are vectors and maps, in fixing bases we are adopting a scheme of tags or names for these objects that are convenient for computation. We will now see how to translate among these names; we will see exactly how representations vary as the bases vary.
to
accomplished by the identity map on the space, described so that the domain space vectors are represented with
respect to
and the codomain space vectors are represented with respect to
.
(The diagram is vertical to fit with the ones in the next subsection.)
Definition 1.1
The change of basis matrix for a pair of bases is the representation of the identity map with respect to those bases.
Lemma 1.2
Left-multiplication by the change of basis matrix for
respect to
to one with
then
matrix
the
Example 1.3
With these bases for
because
We finish this subsection by recognizing that the change of basis matrices are familiar.
Lemma 1.4
A matrix changes bases if and only if it is nonsingular.
Proof
For one direction, if left-multiplication by a matrix changes bases then the matrix represents an invertible function,
simply because the function is inverted by changing the bases back. Such a matrix is itself invertible, and so
nonsingular.
To finish, we will show that any nonsingular matrix
starting basis
to some ending basis. Because the matrix is nonsingular, it will Gauss-Jordan reduce to the identity,
their inverses are also elementary, so multiplying from the left first by
product of elementary matrices
, etc., gives
as a
, then by
changes
changes
to
, and
changes
changes
representation
with
respect
in this way.
to
to
one
with
respect
to
in this way.
and
Corollary 1.5
A matrix is nonsingular if and only if it represents the identity map with respect to some pair of bases.
In the next subsection we will see how to translate among representations of maps, that is, how to change
to
. The above corollary is a special case of this, where the domain and range are the
same space, and where the map is the identity map.
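A small Python illustration of changing representations, in the special case where one of the bases is the standard basis (our own numbers; the general change of basis matrix between two bases is built the same way): if the columns of B hold the new basis vectors written in standard coordinates, then left-multiplying by B⁻¹ converts a standard representation into the representation with respect to the new basis.

def inv2(M):
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = float(a * d - b * c)   # nonzero because the basis vectors are independent
    return [[d / det, -b / det], [-c / det, a / det]]

def apply2(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

B = [[1.0, 1.0], [0.0, 1.0]]     # basis vectors (1,0) and (1,1), written as columns
v = [3.0, 2.0]                    # a vector in standard coordinates
rep = apply2(inv2(B), v)
print(rep)                        # [1.0, 2.0]: indeed 1*(1,0) + 2*(1,1) = (3,2)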
Exercises
This exercise is recommended for all readers.
Problem 1
In
, where
to
and from
to
2.
3.
4.
Problem 3
For the bases in Problem 2, find the change of basis matrix in the other direction, from
This exercise is recommended for all readers.
Problem 4
Find the change of basis matrix for each
1.
2.
3.
This exercise is recommended for all readers.
to
Problem 5
Decide if each changes bases on
. To what basis is
changed?
1.
2.
3.
4.
Problem 6
Find bases such that this matrix represents the identity map with respect to those bases.
Problem 7
Consider the vector space of real-valued functions with the given basis. Show that the given set of functions
Problem 8
Where does this matrix
Problem 10
Prove that a matrix changes bases if and only if it is invertible.
Problem 11
Finish the proof of Lemma 1.4.
This exercise is recommended for all readers.
Problem 12
Let
be a
does
with basis
Find a basis
2. State and prove that any nonzero vector representation can be changed to any other.
Hint. The proof of Lemma 1.4 is constructive: it not only says the bases change, it shows how they change.
Problem 14
Let
be bases for
and
be bases for
. Where
is
Problem 15
Show that the columns of an
can the vectors from any
that left-multiplies the starting vector to yield the ending vector. Is there a matrix having these two
effects?
1.
2.
Give a necessary and sufficient condition for there to be a matrix such that
Solutions
and
To move from the lower-left of this diagram to the lower-right we can either go straight over, or else up to
over to
simply using
and
then
either by
then multiplying by
(To compare this equation with the sentence before it, remember that the equation is read from right to left because
function composition is read right to left and matrix multiplication represents the composition.)
Example 2.2
On
the map
then
in a way that is simpler, in that the action of a diagonal matrix is easy to understand.
Naturally, we usually prefer basis changes that make the representation easier to understand. When the
representation with respect to equal starting and ending bases is a diagonal matrix we say the map or matrix has been
diagonalized. In Chapter Five we shall see which maps and matrices are diagonalizable, and where one is not, we
shall see how to get a representation that is nearly diagonal.
We finish this subsection by considering the easier case where representations are with respect to possibly different
starting and ending bases. Recall that the prior subsection shows that a matrix changes bases if and only if it is
nonsingular. That gives us another version of the above arrow diagram and equation ( ).
Definition 2.3
Same-sized matrices
and
and
such that
.
Corollary 2.4
Matrix equivalent matrices represent the same map, with respect to appropriate pairs of bases.
Problem 10 checks that matrix equivalence is an equivalence relation. Thus it partitions the set of matrices into
matrix equivalence classes.
All matrices, partitioned into matrix equivalence classes.
We can get some insight into the classes by comparing matrix equivalence with row equivalence (recall that matrices
are row equivalent when they can be reduced to each other by row operations). In
, the matrices
and
are nonsingular and thus each can be written as a product of elementary reduction matrices (see Lemma 4.8).
Therefore, matrix equivalence is a generalization of row equivalence: two matrices are row equivalent if one can
be converted to the other by a sequence of row reduction steps, while two matrices are matrix equivalent if one can
be converted to the other by a sequence of row reduction steps followed by a sequence of column reduction steps.
Thus, if matrices are row equivalent then they are also matrix equivalent (since we can take
to be the identity
matrix and so perform no column operations). The converse, however, does not hold.
Example 2.5
These two
are matrix equivalent because the second can be reduced to the first by the column operation of taking
times the
first column and adding to the second. They are not row equivalent because they have different reduced echelon
forms (in fact, both are already in reduced form).
We will close this section by finding a set of representatives for the matrix equivalence classes.[1]
Theorem 2.6
Any
matrix of rank
Proof
As discussed above, Gauss-Jordan reduce the given matrix and combine all the reduction matrices used there to
make . Then use the leading entries to do column reduction and finish by swapping columns to put the leading
ones on the diagonal. Combine the reduction matrices used for those column operations into
Example 2.7
We illustrate the proof by finding the
and
to get the
equation.
Corollary 2.8
Two same-sized matrices are matrix equivalent if and only if they have the same rank. That is, the matrix
equivalence classes are characterized by rank.
Proof
Two same-sized matrices with the same rank are equivalent to the same block partial-identity matrix.
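Corollary 2.8 turns the question of matrix equivalence into a rank computation. This Python sketch (our own helper and matrices, not the book's) finds the rank by row reduction and compares.

def rank(A):
    M = [list(map(float, row)) for row in A]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        # find a pivot at or below row r in this column
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > 1e-12), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r:
                M[i] = [x - M[i][c] * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2], [2, 4]]
B = [[0, 1], [0, 3]]
print(rank(A) == rank(B))   # True: both have rank one, so they are matrix equivalent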
In this subsection we have seen how to change the representation of a map with respect to a first pair of bases to one
with respect to a second pair. That led to a definition describing when matrices are equivalent in this way. Finally we
noted that, with the proper choice of (possibly different) starting and ending bases, any map can be represented in
block partial-identity form.
One of the nice things about this representation is that, in some sense, we can completely understand the map when it
is expressed in this way: if the bases are
and
then the map sends
where
is the map's rank. Thus, we can understand any linear map as a kind of projection.
Of course, "understanding" a map expressed in this way requires that we understand the relationship between
. However, despite that difficulty, this is a good classification of linear maps.
Exercises
This exercise is recommended for all readers.
Problem 1
Decide if these matrices are matrix equivalent.
1.
2.
3.
,
This exercise is recommended for all readers.
Problem 2
Find the canonical representative of the matrix-equivalence class of each matrix.
1.
2.
Problem 3
Suppose that, with respect to
the transformation
2.
and
in the equation
and
then
and
Problem 13
How many matrix equivalence classes are there?
Problem 14
Are matrix equivalence classes closed under scalar multiplication? Addition?
Problem 15
Let
represented by
1. Find
with respect to
2. Describe
Problem 16
1. Let
have bases
and
that computes
from
2. Repeat the prior question with one basis for
. Where
Problem 17
1. If two matrices are matrix-equivalent and invertible, must their inverses be matrix-equivalent?
2. If two matrices have matrix-equivalent inverses, must the two be matrix-equivalent?
3. If two matrices are square and matrix-equivalent, must their squares be matrix-equivalent?
4. If two matrices are square and have matrix-equivalent squares, must they be matrix-equivalent?
is similar to
then
is similar to
exercise.
5. Prove that there are matrix equivalent matrices that are not similar.
Solutions
Footnotes
[1] More information on class representatives is in the appendix.
Linear Algebra/Projection
This section is optional; only the last two sections of Chapter Five require this material.
We have described the projection
from
into its
is the
onto a line
, darken a
point on the line if someone on that line and looking straight up or down (from that person's point of view) sees
The picture shows someone who has walked out on the line until the tip of
coefficient
is orthogonal to
itself, and then the consequent fact that the dot product
it must be
.
Definition 1.1
The orthogonal projection of
is this vector.
Problem 13 checks that the outcome of the calculation depends only on the line and not on which vector
happens
". This
Example 1.4
In
onto the
-axis is
orthogonal projection of a vector onto a line. We finish this subsection with two other ways.
Example 1.5
A railroad car left on an east-west track without its brake is pushed by a wind blowing toward the northeast at fifteen
miles per hour; what speed will the car reach?
The car can only be affected by the part of the wind blowing in the east-west direction the part of
direction of the
-axis is this (the picture has the same perspective as the railroad car picture above).
in the
Thus, another way to think of the picture that precedes the definition is that it shows
parts, the part with the line (here, the part with the tracks,
lying on the north-south axis). These two are "not interacting" or "independent", in the sense that the east-west car is
not at all affected by the north-south part of the wind (see Problem 5). So the orthogonal projection of onto the
line spanned by
Finally, another useful way to think of the orthogonal projection is to have the person stand not on the line, but on
the vector that is to be projected to the line. This person has a rope over the line and pulls it tight, naturally making
the rope orthogonal to the line.
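In coordinates, Definition 1.1 is a one-line computation. This Python sketch (ours, not the book's) projects a vector v onto the line spanned by a nonzero vector s, using the coefficient (v·s)/(s·s).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project_onto_line(v, s):
    c = dot(v, s) / float(dot(s, s))   # s is assumed nonzero
    return [c * si for si in s]

print(project_onto_line([2.0, 4.0], [1.0, 0.0]))  # [2.0, 0.0]: projection onto the x-axis
print(project_onto_line([1.0, 2.0], [1.0, 1.0]))  # [1.5, 1.5]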
Example 1.6
A submarine is tracking a ship moving along the line
stay where it is, at the origin on the chart below, or must it move to reach a place where the ship will pass within
range?
The formula for projection onto a line does not immediately apply because the line doesn't pass through the origin,
and so isn't the span of any . To adjust for this, we start by shifting the entire map down two units. Now the line is
, which is a subspace, and we can project to get the point
through the origin closest to
and
is approximately
This subsection has developed a natural projection map: orthogonal projection onto a line. As suggested by the
examples, it is often called for in applications. The next subsection shows how the definition of orthogonal
projection onto a line gives us a way to calculate especially convenient bases for vector spaces, again something that
is common in applications. The final subsection completely generalizes projection, orthogonal or not, onto any
subspace at all.
Exercises
This exercise is recommended for all readers.
Problem 1
Project the first vector orthogonally onto the line spanned by the second vector.
1.
2.
3.
4.
,
This exercise is recommended for all readers.
Problem 2
Project the vector orthogonally onto the line.
1.
2.
, the line
Problem 3
Although the development of Definition 1.1 is guided by the pictures, we are not restricted to spaces that we can
draw. In
project this vector onto this line.
and projecting
and
1.
2.
Show that in general the projection transformation is this.
and
interacting". Recall that the two are orthogonal. Show that any two nonzero orthogonal vectors make up a linearly
independent set.
Problem 6
1. What is the orthogonal projection of
2. Show that if
onto a line if
is linearly independent.
Problem 7
Definition 1.1 requires that
be nonzero. Why? What is the right definition of the orthogonal projection of a vector
Problem 10
Find the formula for the distance from a point to a line.
Problem 11
Find the scalar
such that
consider the distance function, set the first derivative equal to zero, and solve). Generalize to
This exercise is recommended for all readers.
Problem 12
Prove that the orthogonal projection of a vector onto a line is shorter than the vector.
This exercise is recommended for all readers.
Problem 13
Show that the definition of orthogonal projection onto a line does not depend on the spanning vector: if
nonzero multiple of
then
equals
is a
. These two
each show that the map is linear, the first one in a way that is bound to the coordinates (that is, it fixes a basis and
then computes) and the second in a way that is more conceptual.
let
line spanned by
spans of
if
and
, let
. That is,
radians counterclockwise.
be the projection of
, let
is the projection of
if
be the projection of
onto the
is odd. Must that sequence of vectors eventually settle down must there be a sufficiently large
that
equals
Solutions
and
equals
such
Linear Algebra/Gram-Schmidt Orthogonalization
This subsection is optional. It requires material from the prior, also optional, subsection. The work done here will
only be needed in the final two sections of Chapter Five.
The prior subsection suggests that projecting onto the line spanned by
decomposes a vector
that are orthogonal and so are "not interacting". We will now develop that suggestion.
Definition 2.1
Vectors
is zero.
Theorem 2.2
If the vectors in a set
independent.
Proof
Consider a linear relationship
. If
shows, since
is nonzero, that
is zero.
Corollary 2.3
If the vectors in a size
subset of a
dimensional space are mutually orthogonal and nonzero then that set is a
subset of a
Of course, the converse of Corollary 2.3 does not hold not every basis of every subspace of
is made of
mutually orthogonal vectors. However, we can get the partial converse that for every subspace of
there is at
and
a new basis for the same space that does have mutually orthogonal members. For
For the second member of the new basis, we take away from
pictured above, of
that is orthogonal to
We get
Finally, we get
by taking the third given vector and subtracting the part of it in the direction of
(we
only because we have not given a definition of orthogonality for other vector spaces).
the
then, where
Proof
We will use induction to check that each
preceding vectors:
equal to
is a member of a basis, it is
obviously in the desired span, and the "orthogonal to all preceding vectors" condition is vacuously met.
For the
case, expand the definition of
.
is
's (it is
is
shows that
case the second line has two kinds of terms. The first term is zero because
case. The second term is zero because
is orthogonal to
and so is
is similar.
Beyond having the vectors in the basis be orthogonal, we can do more; we can arrange for each vector to have length
one by dividing each by its own length (we can normalize the lengths).
Example 2.8
Normalizing the length of each vector in the orthogonal basis of Example 2.6 produces this orthonormal basis.
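The whole process is short in code. This Python sketch (ours, with made-up vectors rather than the data of Example 2.6) orthogonalizes a linearly independent list by subtracting projections onto the earlier vectors, and then normalizes each result to length one.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    # assumes the input vectors are linearly independent
    orthogonal = []
    for v in basis:
        w = list(v)
        for u in orthogonal:
            c = dot(w, u) / float(dot(u, u))
            w = [wi - c * ui for wi, ui in zip(w, u)]   # subtract the projection onto u
        orthogonal.append(w)
    # normalize each vector to length one
    return [[x / dot(w, w) ** 0.5 for x in w] for w in orthogonal]

print(gram_schmidt([[1.0, 1.0], [1.0, 0.0]]))
# roughly [[0.707, 0.707], [0.707, -0.707]]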
Besides its intuitive appeal, and its analogy with the standard basis
for
Exercises
Problem 1
Perform the Gram-Schmidt process on each of these bases for
1.
2.
3.
Then turn those orthogonal bases into orthonormal bases.
This exercise is recommended for all readers.
Problem 2
Perform the Gram-Schmidt process on each of these bases for
1.
2.
Then turn those orthogonal bases into orthonormal bases.
This exercise is recommended for all readers.
Problem 3
Find an orthonormal basis for this subspace of
: the plane
Problem 4
Find an orthonormal basis for this subspace of
Problem 5
Show that any linearly independent subset of
of
, ...,
.
2. Illustrate the prior item in
and
by using
as
is orthogonal to each
, using
as
, and taking
to have components
3. Show that
Hint. To the illustration done for the prior part, add a vector
first represent the vector with respect to the basis. Then project the vector onto the span of each basis vector
and
.
2. With this orthogonal basis for
with respect to the basis. Then project the vector onto the span of each basis vector.
Note that the coefficients in the representation and the projection are the same.
3. Let
subspace, the
from
4. Prove that
in the
.
.
Problem 10
Bessel's Inequality. Consider these orthonormal sets
, and
. Check that
, and
and
.
associated with the vectors in
Check that
, and
, and
and that
. Check that
.
is an orthonormal set and
's.
Problem 11
Prove or disprove: every vector in
Problem 12
Show that the columns of an
matrix form an orthonormal set if and only if the inverse of the matrix is its
)?
Problem 14
Theorem 2.7 describes a change of basis from any basis
. Consider the change of basis matrix
1. Prove that the matrix
changing bases in the direction opposite to that of the theorem has an upper
triangular shape all of its entries below the main diagonal are zeros.
2. Prove that the inverse of an upper triangular matrix is also upper triangular (if the matrix is invertible, that is).
This shows that the matrix
changing bases in the direction described in the theorem is upper
triangular.
Problem 15
Complete the induction argument in the proof of Theorem 2.7.
Solutions
where
and any
with
, the projection of
onto
along
is
This definition doesn't involve a sense of "orthogonal" so we can apply it to spaces other than subspaces of an
(Definitions of orthogonality for other spaces are perfectly possible, but we haven't seen any in this book.)
Example 3.2
The space
of
To project
onto
along
is a basis for the entire space, because the space is the direct sum, so we can use it to represent
onto
along
part.
Example 3.3
Both subscripts on
is an
, and changing this subspace would change the possible results. For an example showing that the
(Verification that
and
and represent
The projection of
onto
Representing
along
part.
is orthogonal.
A natural question is: what is the relationship between the projection operation defined above, and the operation of
orthogonal projection onto a line? The second picture above suggests the answer orthogonal projection onto a line
is a special case of the projection defined above; it is just projection along a subspace perpendicular to the line.
In addition to pointing out that projection along a subspace is a generalization, this scheme shows how to define
orthogonal projection onto any subspace of
, of any dimension.
Definition 3.4
The orthogonal complement of a subspace
(read "
of
is
along
Example 3.5
In
Any
We are thus left with finding the nullspace of the map represented by the matrix, that is, with calculating the solution
set of a homogeneous linear system.
Example 3.6
Where
is the
-plane subspace of
, what is
Instead
is the
is the
-plane.
-axis, since proceeding as in the prior example and taking the natural basis for the
-plane
gives this.
The two examples that we've seen since Definition 3.4 illustrate the first sentence in that definition. The next result
justifies the second sentence.
Lemma 3.7
Let
be a subspace of
the two
in
, the vector
Proof
First, the orthogonal complement
is a subspace of
nullspace.
Next, we can start with any basis
for
for the entire space. Apply the Gram-Schmidt process to get an orthogonal basis
This
for
that the entire space is the direct sum of the two subspaces.
Problem 9 from the prior subsection proves this about any orthogonal basis: each vector
) and
its orthogonal projections onto the lines spanned by the basis vectors.
To
check
this,
represent
the
vector
apply
to
both
sides
paragraph
this.
On
projections
onto
basis
and therefore (
vectors
from
) gives that
any
gives
is a linear combination of
gotten
by
keeping
only
the
.
combination of elements of
part
and
dropping
Therefore
. Then
the
consists
.
part
of
linear
We can find the orthogonal projection onto a subspace by following the steps of the proof, but the next result gives a
convenient formula.
Theorem 3.8
Let
be a vector in
and let
be a subspace of
with basis
's then
vector
Proof
The vector
. If
is a member of
. Since
such that
(this is expressed compactly with matrix multiplication as in Example 3.5 and 3.6). Because
is perpendicular to each member of the basis, we have this (again, expressed compactly).
Solving for
(showing that
is invertible is an exercise)
Example 3.9
To orthogonally project this vector onto this subspace
first make a matrix whose columns are a basis for the subspace
With the matrix, calculating the orthogonal projection of any vector onto
is easy.
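The formula of Theorem 3.8 is easy to carry out by machine. This Python sketch uses our own plane in R³ and our own vector, not the book's Example 3.9, and does the 2×2 inversion with the ad − bc formula; the theorem's discussion notes that the matrix being inverted is indeed invertible.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inv2(M):
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = float(a * d - b * c)   # nonzero because the columns of A are independent
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]     # columns form a basis for a plane in R^3
v = [[1.0], [2.0], [4.0]]                     # the vector to project, as a column
At = transpose(A)
P = matmul(A, matmul(inv2(matmul(At, A)), At))   # the projection matrix A (A^T A)^-1 A^T
print(matmul(P, v))   # roughly [[1.333], [2.333], [3.667]]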
Exercises
This exercise is recommended for all readers.
Problem 1
Project the vectors onto
along
1.
2.
3.
This exercise is recommended for all readers.
Problem 2
Find
1.
2.
3.
4.
5.
6.
7.
Problem 3
This subsection shows how to project orthogonally in two ways, the method of Example 3.2 and 3.3, and the method
of Theorem 3.8. To compare them, consider the plane specified by
in
.
1. Find a basis for
2. Find
and a basis for
.
3. Represent this vector with respect to the concatenation of the two bases from the prior item.
part, and the way of Theorem 3.8. For these cases, do all three ways.
1.
2.
Problem 5
Check that the operation of Definition 3.1 is well-defined. That is, in Example 3.2 and 3.3, doesn't the answer depend
on the choice of bases?
Problem 6
What is the orthogonal projection onto the trivial subspace?
Problem 7
What is the projection of
onto
along
if
Problem 8
Show that if
onto
is this.
This exercise is recommended for all readers.
Problem 9
Prove that the map
onto
along
(Recall
the
along
definition
the
difference
is the projection
of
two
maps:
.)
This exercise is recommended for all readers.
Problem 10
Show that if a vector is perpendicular to every vector in a set then it is perpendicular to every vector in the span of
that set.
Problem 11
True or false: the intersection of a subspace and its orthogonal complement is trivial.
Problem 12
Show that the dimensions of orthogonal complements add to the dimension of the entire space.
This exercise is recommended for all readers.
Problem 13
Suppose that
along
equal
, the projections of
and
onto
? (If so, what if we relax the condition to: all orthogonal projections of
be subspaces of
. The perp operator acts on subspaces; we can ask how it interacts with other such
operations.
1. Show that two perps cancel:
2. Prove that
implies that
3. Show that
.
.
.
given by
3. Represent
This, and related results, is called the Fundamental Theorem of Linear Algebra in (Strang 1993).
Problem 16
Define a projection to be a linear transformation
for all
there is a basis
for
such that
References
Strang, Gilbert (Nov. 1993), "The Fundamental Theorem of Linear Algebra", American Mathematical Monthly
(American Mathematical Society): 848-855.
Strang, Gilbert (1980), Linear Algebra and its Applications (2nd ed.), Harcourt Brace Jovanovich
number of flips    30  60  90
number of heads    16  34  51
Because of randomness, we do not find the exact proportion with this sample; there is no solution to this system.
That is, the vector of experimental data is not in the subspace of solutions.
The estimate (
The line with the slope
) is a bit high but not much, so probably the penny is fair enough.
is called the line of best fit for this data.
Minimizing the distance between the given vector and the vector used as the right-hand side minimizes the total of
these vertical lengths, and consequently we say that the line has been obtained through fitting by least-squares
(the vertical scale here has been exaggerated ten times to make the lengths visible).
We arranged the equation above so that the line must pass through the origin, because we wanted to guess at the line whose slope is this coin's true proportion of heads to flips. We can also handle cases where the line
need not pass through the origin.
For example, the different denominations of U.S. money have different average times in circulation (the $2 bill is left
off as a special case). How long should we expect a $25 bill to last?
denomination
5 10 20 50 100
20
The plot (see below) looks roughly linear. It isn't a perfect line, i.e., the linear system with equations
, ...,
has no solution, but we can again use orthogonal projection to find a best
approximation. Consider the matrix of coefficients of that linear system and also its vector of constants, the
experimentally-determined values.
The ending result in the subsection on Projection into a Subspace says that coefficients
combination of the columns of
and
Plugging
and a slope of
into the equation of the line shows that such a bill should last between five and six years.
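In the language of the Projection into a Subspace subsection, fitting the line amounts to solving the normal equations. This Python sketch uses made-up data rather than the table above; the Computer Code section at the end of this Topic computes the same kind of fit with the summary formulas.

def solve2(M, r):
    # solve a 2x2 system M b = r by the ad - bc formula
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = float(a * d - b * c)
    return [(d * r[0] - b * r[1]) / det, (-c * r[0] + a * r[1]) / det]

# the line y = b0 + b1*x comes from (A^T A) b = A^T y, where A has a column of
# ones and a column of the x values
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 2.9, 4.2, 4.8]
n = len(xs)
AtA = [[n, sum(xs)], [sum(xs), sum(x * x for x in xs)]]
Aty = [sum(ys), sum(x * y for x, y in zip(xs, ys))]
intercept, slope = solve2(AtA, Aty)
print("slope %f intercept %f" % (slope, intercept))   # roughly 0.94 and 1.15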
We close by considering the times for the men's mile race (Oakley & Baker 1977). These are the world records that
were in force on January first of the given years. We want to project when a 3:40 mile will be run.
year
1870
1880
1890
1900
1910
1920
1930
1940
1950
1960
1970
1980
1990
2000
We can see below that the data is surprisingly linear. With this input
When will a
.
second mile be run? Solving the equation of the line of best fit gives an estimate of the year
This example is amusing, but serves as a caution: obviously the linearity of the data will break down someday (as
indeed it does prior to 1860).
Exercises
The calculations here are best done on a computer. In addition, some of the problems require more data, available
in your library, on the net, in the answers to the exercises, or in the section following the exercises.
Problem 1
Use least-squares to judge if the coin in this experiment is fair.
flips    8   16   24   32   40
heads    4        13   17   20
Problem 2
For the men's mile record, rather than give each of the many records and its exact date, we've "smoothed" the data
somewhat by taking a periodic sample. Do the longer calculation and compare the conclusions.
Problem 3
Find the line of best fit for the men's
meter run. How does the slope compare with that for the men's mile?
meters.)
Problem 4
Find the line of best fit for the records for women's mile.
Problem 5
Do the lines of best fit for the men's and women's miles cross?
Problem 6
When the space shuttle Challenger exploded in 1986, one of the criticisms made of NASA's decision to launch was
in the way the analysis of number of O-ring failures versus temperature was made (of course, O-ring failure caused
the explosion). Four O-ring failures will cause the rocket to explode. NASA had data from 24 previous flights.
temp F 53 75 57 58 63 70 70 66 67 67 67
failures
temp F 68 69 70 70 72 73 75 76 76 78 79 80 81
failures
1. NASA based the decision to launch partially on a chart showing only the flights that had at least one O-ring
failure. Find the line that best fits these seven flights. On the basis of this data, predict the number of O-ring
failures when the temperature is
, and when the number of failures will exceed four.
2. Find the line that best fits all 24 flights. On the basis of this extra data, predict the number of O-ring failures when
the temperature is
, and when the number of failures will exceed four.
Which do you think is the more accurate method of predicting? (An excellent discussion appears in (Dalal, Folkes &
Hoadley 1989).)
Problem 7
This table lists the average distance from the sun to each of the first seven planets, using earth's average as a unit.
Mercury Venus Earth Mars Jupiter Saturn Uranus
0.39    0.72    1.00    1.52    5.20    9.54    19.2
, etc.) versus the distance. Note that it does not look like a line, and
This method was used to help discover Neptune (although the second item is misleading about the history; actually,
the discovery of Neptune in position prompted people to look for the "missing planet" in position ). See
(Gardner 1970)
Problem 8
William Bennett has proposed an Index of Leading Cultural Indicators for the US (Bennett 1993). Among the
statistics cited are the average daily hours spent watching TV, and the average combined SAT scores.
        1960  1965  1970  1975  1980  1985  1990  1992
TV
SAT      975   969   948   910   890   906   900   899
Suppose that a cause and effect relationship is proposed between the time spent watching TV and the decline in SAT
scores (in this article, Mr. Bennett does not argue that there is a direct connection).
1. Find the line of best fit relating the independent variable of average daily TV hours to the dependent variable of
SAT scores.
2. Find the most recent estimate of the average daily TV hours (Bennett cites Nielsen Media Research as the
source of these estimates). Estimate the associated SAT score. How close is your estimate to the actual average?
(Warning: a change has been made recently in the SAT, so you should investigate whether some adjustment needs
to be made to the reported average to make a valid comparison.)
Solutions
Computer Code
#!/usr/bin/python
# least_squares.py
# calculate the line of best fit for a data set
# data file format: each line is two numbers, x and y
n = 0
sum_x = 0
sum_y = 0
sum_x_squared = 0
sum_xy = 0
fn = raw_input("Name of the data file? ")
datafile = open(fn,"r")
while 1:
    ln = datafile.readline()
    if ln:
        data = ln.split()
        x = float(data[0])
        y = float(data[1])
        n += 1
        sum_x += x
        sum_y += y
        sum_x_squared += x*x
        sum_xy += x*y
    else:
        break
datafile.close()
slope = (sum_xy/n - (sum_x/n)*(sum_y/n)) / (sum_x_squared/n - (sum_x/n)**2)
intercept = sum_y/n - slope*sum_x/n
print "line of best fit: slope= %f intercept= %f" % (slope, intercept)
Additional Data
Data on the progression of the world's records (taken from the Runner's World web site) is below.
Progression of Men's Mile Record
time
name
date
4:52.0
02Sep52
4:45.0
03Nov58
4:40.0
24Nov59
4:33.0
23May62
10Mar68
03Apr68
31Mar73
4:26.0
30May74
19Jun75
16Aug80
03Jun82
21Jun84
26Aug93
4:17.0
06Jul95
28Aug95
27May11
4:14.4
31May13
4:12.6
16Jul15
4:10.4
23Aug23
04Oct31
4:07.6
15Jul33
4:06.8
16Jun34
4:06.4
28Aug37
4:06.2
01Jul42
4:04.6
04Sep42
4:02.6
01Jul43
4:01.6
18Jul44
4:01.4
17Jul45
3:59.4
06May54
3:58.0
21Jun54
3:57.2
19Jul57
3:54.5
06Aug58
3:54.4
27Jan62
3:54.1
17Nov64
3:53.6
09Jun65
3:51.3
17Jul66
3:51.1
23Jun67
3:51.0
17May75
3:49.4
12Aug75
3:49.0
17Jul79
3:48.8
01Jul80
3:48.53
19Aug81
3:48.40
26Aug81
3:47.33
28Aug81
3:46.32
27Jul85
3:44.39
3:43.13
name
date
4:09.0
30May00
4:06.2
15Jul00
4:05.4
03Sep04
3:59.8
30May08
3:59.2
26May12
3:56.8
02Jun12
3:55.8
08Jun12
3:55.0
16Jul15
3:54.7
05Aug17
3:53.0
23Aug23
3:52.6
19Jun24
3:51.0
11Sep26
3:49.2
05Oct30
3:49.0
17Sep33
3:48.8
30Jun34
3:47.8
06Aug36
3:47.6
10Aug41
3:45.8
17Jul42
3:45.0
17Aug43
3:43.0
07Jul44
3:42.8
04Jun54
3:41.8
21Jun54
3:40.8
28Jul55
3:40.6
03Aug56
3:40.2
11Jul57
3:38.1
12Jul57
3:36.0
28Aug58
3:35.6
06Sep60
3:33.1
08Jul67
3:32.2
02Feb74
3:32.1
15Aug79
27Aug80
28Aug83
04Sep83
16Jul85
23Aug85
name
date
6:13.2
24Jun21
5:27.5
20Aug32
5:24.0
01Jun36
5:23.0
18Jul36
5:20.8
08May37
5:17.0
07Aug37
5:15.3
22Jul39
5:11.0
14Jun52
5:09.8
04Jul53
5:08.0
12Sep53
5:02.6
30Sep53
5:00.3
01Nov53
5:00.2
26May54
4:59.6
29May54
4:50.8
24May55
4:45.0
21Sep55
4:41.4
4:39.2
13May67
4:37.0
03Jun67
4:36.8
14Jun69
4:35.3
20Aug71
4:34.9
07Jul73
4:29.5
08Aug73
4:23.8
21May77
4:22.1
27Jan79
4:21.7
26Jan80
09Jul82
16Sep82
4:15.8
05Aug84
10Jul89
References
Bennett, William (March 15, 1993), "Quantifying America's Decline", Wall Street Journal
Dalal, Siddhartha; Folkes, Edward; Hoadley, Bruce (Fall 1989), "Lessons Learned from Challenger: A Statistical
Perspective", Stats: the Magazine for Students of Statistics: 14-18
Gardner, Martin (April 1970), "Mathematical Games, Some mathematical curiosities embedded in the solar
system", Scientific American: 108-112
Oakley, Cletus; Baker, Justine (April 1977), "Least Squares and the 3:40 Mile", Mathematics Teacher
and
, which are linear. Each of the four pictures shows the domain
codomain
and
, and
. Note how the nonlinear maps distort the domain in transforming it into the range. For instance,
further from
than it is from
is
the map is spreading the domain out unevenly so that an interval near
The linear maps are nicer, more regular, in that for each map all of the domain is spread by the same factor.
to
The transformation of
, where
and
by a matrix
, which are
). So if we understand the effect of a linear map described by a partial-identity matrix, and the
effect of linear maps described by the elementary matrices, then we will in some sense understand the effect of any
linear map. (The pictures below stick to transformations of
for ease of drawing, but the statements hold for maps
from any
to any
.)
The geometric effect of the linear transformation represented by a partial-identity matrix is projection.
For the
matrices, the geometric action of a transformation represented by such a matrix (with respect to the
along the
-axis.
Note that if
or if
then the
-th component goes the other way; here, toward the left.
along
-th and
-th axes;
In higher dimensions, permutations involving many axes can be decomposed into a combination of swaps of pairs of
axes; see Problem 5.
The remaining case is that of matrices of the form
performs
is only
would be slid up
while
is
would be affected as is
, as was
; it would be slid up by
affects
-th component.
Another way to see this same point is to consider the action of this map on the unit square. In the next picture, vectors with a first component of zero, like the origin, are not pushed vertically at all, but vectors with a positive first component are slid up. Vectors on the same vertical line are slid up the same amount; here, they are slid up by twice their first component. The resulting shape, a rhombus, has the same base and height as the square (and thus the same area) but the right angles are gone.
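For a numerical check of that description, here is a minimal sketch (the shear factor of two is taken from the sentence above; the use of numpy is only a convenience).
import numpy as np

# The map slides each vector up by twice its first component:
# (x, y) maps to (x, y + 2x), represented by this matrix.
shear = np.array([[1, 0],
                  [2, 1]])

# Corners of the unit square, as columns.
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]])

print(shear @ square)   # corners of the rhombus: (0,0), (1,2), (1,3), (0,1)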
For contrast the next picture shows the effect of the map represented by
according to their second component. The vector
recall that under a linear map, the image of a subspace is a subspace and thus the linear transformation
represented by
maps lines through the origin to lines through the origin. (The dimension of the image space
cannot be greater than the dimension of the domain space, so a line can't map onto, say, a plane.) We will extend that
to show that any line, not just those through the origin, is mapped by
to a line. The proof is simply that the
partial-identity projection
's each turn a line input into a line output (verifying the four
cases is Problem 6), and therefore their composition also preserves lines. Thus, by understanding its components we
can understand arbitrary square matrices
, in the sense that we can prove things about them.
An understanding of the geometric effect of linear transformations on
is very important in mathematics. Here is
a familiar application from calculus. On the left is a picture of the action of the nonlinear function
. As at the start of this Topic, overall the geometric effect of this map is irregular in that at different domain points it
has different effects (e.g., as the domain point goes from to
, the associated range point
at first
decreases, then pauses instantaneously, and then increases).
But in calculus we don't focus on the map overall; we focus instead on the local effect of the map.
At
the derivative is
, so that near
we have
, in carrying the domain to the codomain this map causes it to grow by a factor
(When the above picture is drawn in the traditional Cartesian way then the prior sentence about the rate of growth of
is usually stated: the derivative
gives the slope of the line tangent to the graph at the point
.)
In higher dimensions, the idea is the same but the approximation is not just the
case. Instead, for a function
and a point
-to-
scalar multiplication
is
by a factor of
by a factor of
Exercises
Problem 1
Let
1. Find the matrix
radians.
to the
identity.
2. Translate the row reduction to a matrix equation
is
radians?
Problem 3
What combination of dilations, flips, skews, and projections produces the map
represented with
Problem 4
Show that any linear transformation of
Problem 5
of the numbers
, ...,
, the map
can be accomplished with a composition of maps, each of which only swaps a single pair of coordinates. Hint: it can
be done by induction on . (Remark: in the fourth chapter we will show this and we will also show that the parity
of the number of swaps used is determined by . That is, although a particular permutation could be accomplished
in two different ways with two different numbers of swaps, either both ways use an even number of swaps, or both
use an odd number.)
Problem 6
Show that linear maps preserve the linear structures of a space.
1. Show that for any linear map from
to
is between
Problem 7
Use a picture like the one that appears in the discussion of the Chain Rule to answer: if a function
has
an inverse, what's the relationship between how the function locally, approximately dilates space, and how its
inverse dilates space (assuming, of course, that it has an inverse)?
Solutions
and a
chance of moving to
or
Let
after flip
after
is
summarizes.
With the initial condition that the player starts with three dollars, calculation gives this.
As this computational exploration suggests, the game is not likely to go on for long, with the player quickly ending
in either state
or state
. For instance, after the fourth flip there is a probability of
that the game is
already over. (Because a player who enters either of the boundary states never leaves, they are said to be absorbing.)
This game is an example of a Markov chain, named for A.A. Markov, who worked in the first half of the 1900's.
Each vector of 's is a probability vector and the matrix is a transition matrix. The notable feature of a Markov
chain model is that it is historyless in that with a fixed transition matrix, the next state depends only on the current
state, not on any prior states. Thus a player, say, who arrives at
by starting in state
, then going to state
,
then to
, and then to
has at this point exactly the same chance of moving next to state
as does a player
whose history was to start in , then go to , and to , and then to .
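The successive distributions can be reproduced with a short script. This is only a sketch: it assumes a fair coin and takes the absorbing boundary states to be zero dollars and five dollars, so adjust these choices to match the game described at the start of this Topic.
import numpy as np

# states 0, 1, ..., 5 dollars; 0 and 5 are absorbing
T = np.zeros((6, 6))
T[0, 0] = 1.0          # a broke player stays broke
T[5, 5] = 1.0          # a player who reaches five dollars stops
for s in range(1, 5):
    T[s - 1, s] = 0.5  # lose a dollar on this flip
    T[s + 1, s] = 0.5  # win a dollar on this flip

v = np.zeros(6)
v[3] = 1.0             # start with three dollars
for flip in range(4):
    v = T @ v          # next distribution = transition matrix times current one
print(v)               # distribution over the states after four flips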
Here is a Markov chain from sociology. A study (Macdonald & Ridge 1988, p. 202) divided occupations in the
United Kingdom into upper level (executives and professionals), middle level (supervisors and skilled manual
workers), and lower level (unskilled). To determine the mobility across these levels in a generation, about two
thousand men were asked, "At which level are you, and at which level was your father when you were fourteen years
old?" This equation summarizes the results.
The Markov model assumption about history seems reasonable; we expect that while a parent's occupation has a direct influence on the occupation of the child, the grandparent's occupation has no such direct influence. With the initial distribution of the respondents' fathers given below, this table lists the distributions for the next five generations.
One more example, from a very important subject, indeed. The World Series of American baseball is played between
the team winning the American League and the team winning the National League (we follow [Brunner] but see also
[Woodside]). The series is won by the first team to win four games. That means that a series is in one of twenty-four
states: 0-0 (no games won yet by either team), 1-0 (one game won for the American League team and no games for
the National League team), etc. If we assume that there is a probability that the American League team wins each
game then we have the following transition matrix.
through
vectors. (The code to generate this table in the computer algebra system Octave follows the exercises.)
state   nonzero probabilities through seven games (with p = 0.5)
0-0     1
1-0     0   0.5
0-1     0   0.5
2-0     0   0.25
1-1     0   0.5
0-2     0   0.25
3-0     0   0.125
2-1     0   0.375
1-2     0   0.375
0-3     0   0.125
4-0     0   0.0625   0.0625   0.0625   0.0625
3-1     0   0.25
2-2     0   0.375
1-3     0   0.25
0-4     0   0.0625   0.0625   0.0625   0.0625
4-1     0   0.125    0.125    0.125
3-2     0   0.3125
2-3     0   0.3125
1-4     0   0.125    0.125    0.125
4-2     0   0.15625  0.15625
3-3     0   0.3125
2-4     0   0.15625  0.15625
4-3     0   0.15625
3-4     0   0.15625
Note that evenly-matched teams are likely to have a long series; there is a probability of
Exercises
Use a computer for these problems. You can, for instance, adapt the Octave script given below.
Problem 1
These questions refer to the coin-flipping game.
1. Check the computations in the table at the end of the first paragraph.
2. Consider the second row of the vector table. Note that this row has alternating
's. Must
be
when
is
NC
0.111 0.102
NC
0.966 0.034
0.063 0.937
For example, a firm in the Northeast region will be in the West region next year with probability
entry is a "birth-death" state. For instance, with probability
. (The Z
from the Northeast will move out of this system next year: go out of business, move abroad, or move to another
category of firm. There is a
probability that a firm in the National Census of Manufacturers will move into
Electronics, or be created, or move in from abroad, into the Northeast. Finally, with probability
a firm out of
through
.
3. Suppose that the initial distribution is this.
NE
NC
through
and
.
. Has the system settled down to an equilibrium?
Problem 4
This model has been suggested for some kinds of learning (Wickens 1982, p. 41). The learner starts in an undecided
state
. Eventually the learner has to decide to do either response
(that is, end in state
) or response
(ending in
to do (or
). However, the learner doesn't jump right from being undecided to being sure
). Instead, the learner spends some time in a "tentative-
and
entered it is never left. For the other state changes, imagine a transition is made with probability
or
is
in either
direction.
1. Construct the transition matrix.
2. Take
up at
?
3. Do the same for
.
4. Graph versus the chance of ending at
at
and
Problem 6
For the World Series application, use a computer to generate the seven vectors for
and
1. What is the chance of the National League team winning it all, even though they have only a probability of
or
of winning any one game?
2. Graph the probability against the chance that the American League team wins it all. Is there a threshold
value a
1. Check that the three transition matrices shown in this Topic meet these two conditions. Must any transition
matrix do so?
and
then
to
outcomes.
# 0-0 1-0 0-1_ 2-0 1-1 0-2__ 3-0 2-1 1-2_ 0-3 4-0 3-1__ 2-2 1-3 0-4_ 4-1 3-2 2-3__ 1-4 4-2 3-3_ 2-4 4-3 3-4
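If adapting the Octave script is inconvenient, the same computation can be sketched in Python; the win probability p is a parameter and the state ordering and names below are only illustrative choices.
import numpy as np

# States of the series: (a, n) = games won so far by the American and
# National League teams.  A state with a == 4 or n == 4 is absorbing.
states = [(a, n) for a in range(5) for n in range(5) if not (a == 4 and n == 4)]
index = {s: i for i, s in enumerate(states)}

def transition_matrix(p):
    """Column j holds the probabilities of moving from state j."""
    T = np.zeros((len(states), len(states)))
    for (a, n), j in index.items():
        if a == 4 or n == 4:                   # series over: stay put
            T[j, j] = 1.0
        else:
            T[index[(a + 1, n)], j] = p        # American League team wins
            T[index[(a, n + 1)], j] = 1 - p    # National League team wins
    return T

T = transition_matrix(0.5)
v = np.zeros(len(states))
v[index[(0, 0)]] = 1.0                         # the series starts at 0-0
for game in range(7):
    v = T @ v
print("P(American League team wins the series):",
      sum(v[index[(4, n)]] for n in range(4)))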
References
Feller, William (1968), An Introduction to Probability Theory and Its Applications, 1 (3rd ed.), Wiley.
Iosifescu, Marius (1980), Finite Markov Processes and Their Applications, UMI Research Press.
Kelton, Christina M.L. (1983), Trends on the Relocation of U.S. Manufacturing, Wiley.
Kemeny, John G.; Snell, J. Laurie (1960), Finite Markov Chains, D. Van Nostrand.
Macdonald, Kenneth; Ridge, John (1988), "Social Mobility", British Social Trends Since 1900 (Macmillan).
Wickens, Thomas D. (1982), Models for Behavior, W.H. Freeman.
In modern terminology, "picking the plane up ..." means considering a map from the plane to itself. Euclid has
limited consideration to only certain transformations of the plane, ones that may possibly slide or turn the plane but
not bend or stretch it. Accordingly, we define a map
to be distance-preserving or a rigid motion or
an isometry, if for all points
to
to
. We also define a plane figure to be a set of points in the plane and we say that two figures are congruent if
there is a distance-preserving map from the plane to itself that carries one figure onto the other.
Many statements from Euclidean geometry follow easily from these definitions. Some are: (i) collinearity is
invariant under any distance-preserving map (that is, if
,
, and
are collinear then so are
,
, and
is
and
is between
and
then so
distance-preserving map (if a figure is a triangle then the image of that figure is also a triangle), (iv) and the property
of being a circle is invariant under any distance-preserving map. In 1872, F. Klein suggested that Euclidean
geometry can be characterized as the study of properties that are invariant under these maps. (This forms part of
Klein's Erlanger Program, which proposes the organizing principle that each kind of geometry (Euclidean, projective, etc.) can be described as the study of the properties that are invariant under some group of
transformations. The word "group" here means more than just "collection", but that lies outside of our scope.)
We can use linear algebra to characterize the distance-preserving maps of the plane.
First, there are distance-preserving transformations of the plane that are not linear. The obvious example is this
translation.
However, this example turns out to be the only example, in the sense that if
then the map
for some
that
to
that is
Recall that if we fix three non-collinear points then any point in the plane can be described by giving its distance
from those three. So any point in the domain is determined by its distance from the three fixed points ,
,
and
and
in the codomain is determined by its distance from the three fixed points
(these three are not collinear because, as mentioned above, collinearity is invariant and
, the distance
from
from
) works in the
,
,
in the plane
from
, its
from
, and
cases
is assumed to send
to itself)
and
) describes
and
and
we have this.
One thing that is neat about this characterization is that we can easily recognize matrices that represent such a map
with respect to the standard bases: they are the matrices whose columns, when written as vectors, are of
length one and are mutually orthogonal. Such a matrix is called an orthonormal matrix or orthogonal matrix (the
second term is commonly used to mean not just that the columns are orthogonal, but also that they have length one).
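A numerical way to check the column condition is to test whether the transpose times the matrix gives the identity. In this sketch the angle is an arbitrary illustrative choice, and the two matrices follow the usual rotation and reflection forms.
import numpy as np

def is_orthonormal(M, tol=1e-12):
    """The columns have length one and are mutually orthogonal
    exactly when the transpose times the matrix is the identity."""
    return np.allclose(M.T @ M, np.eye(M.shape[1]), atol=tol)

t = np.pi / 6   # an arbitrary angle, for illustration
rotation = np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])
reflection = np.array([[np.cos(t),  np.sin(t)],
                       [np.sin(t), -np.cos(t)]])
print(is_orthonormal(rotation), is_orthonormal(reflection))   # True True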
We can use this insight to delimit the geometric actions possible in distance-preserving maps. Because
, any is mapped by to lie somewhere on the circle about the origin that has radius equal to the
length of
. In particular,
and
are mapped to the unit circle. What's more, once we fix the unit vector
and
as
measured counterclockwise. The first matrix above represents, with respect to the standard bases, a rotation of the
plane by radians.
The second matrix above represents a reflection of the plane through the line bisecting the angle between
.
and
and
runs counterclockwise, and in the first map above the angle from
to
is also counterclockwise, so the orientation of the angle is preserved. But in the second map the orientation is
reversed. A distance-preserving map is direct if it preserves orientations and opposite if it reverses orientation.
So, we have characterized the Euclidean study of congruence: it considers, for plane figures, the properties that are
invariant under combinations of (i) a rotation followed by a translation, or (ii) a reflection followed by a translation
(a reflection followed by a non-trivial translation is a glide reflection).
Another idea, besides congruence of figures, encountered in elementary geometry is that figures are similar if they
are congruent after a change of scale. These two triangles are similar since the second is the same shape as the first,
but
-ths the size.
From the above work, we have that figures are similar if there is an orthonormal matrix
on
by
Although many of these ideas were first explored by Euclid, mathematics is timeless and they are very much in use
today. One application of the maps studied above is in computer graphics. We can, for example, animate this top
view of a cube by putting together film frames of it rotating; that's a rigid motion.
Frame 1
Frame 2
Frame 3
We could also make the cube appear to be moving away from us by producing film frames of it shrinking, which
gives us figures that are similar.
Frame 1:
Frame 2:
Frame 3:
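As a sketch of how such frames might be generated (the rotation angle and the shrink factor below are arbitrary choices, made only for illustration):
import numpy as np

# Top-view corners of a unit square, as columns.
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]], dtype=float)

def rotation(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# Rigid-motion frames: rotate a little more in each frame.
rigid_frames = [rotation(k * np.pi / 12) @ square for k in range(3)]

# Similarity frames: shrink a little more in each frame.
shrinking_frames = [(0.8 ** k) * square for k in range(3)]

for k, frame in enumerate(rigid_frames, start=1):
    print("rigid-motion frame", k)
    print(frame.round(2))
print("last shrinking frame")
print(shrinking_frames[2].round(2))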
Computer graphics incorporates techniques from linear algebra in many other ways (see Problem 4).
So the analysis above of distance-preserving maps is useful as well as interesting. A beautiful book that explores
some of this area is (Weyl 1952). More on groups, of transformations and otherwise, can be found in any book on
Modern Algebra, for instance (Birkhoff & MacLane 1965). More on Klein and the Erlanger Program is in (Yaglom
1988).
Exercises
Problem 1
Decide if each of these is an orthonormal matrix.
1.
2.
3.
Problem 2
Write down the formula for each of these distance-preserving maps.
1. the map that rotates
radians, and then translates by
2. the map that reflects about the line
3. the map that reflects about
and translates over
and up
Problem 3
1. The proof that a map that is distance-preserving and sends the zero vector to itself incidentally shows that such a
map is one-to-one and onto (the point in the domain determined by , , and corresponds to the point in
the codomain determined by those three). Therefore any distance-preserving map has an inverse. Show that the
inverse is also distance-preserving.
2. Prove that congruence is an equivalence relation between plane figures.
Problem 4
In practice the matrix for the distance-preserving linear transformation and the translation are often combined into
one. Check that these two computations yield the same first two components.
References
Chapter IV
Linear Algebra/Determinants
In the first chapter of this book we considered linear systems and we picked out the special case of systems with the same number of equations as unknowns, those where the matrix of coefficients is square. We noted a distinction between two classes of such matrices. While these systems may have a unique solution or no solutions or infinitely many solutions, a matrix that is associated with a unique solution in any system, such as the homogeneous system, is associated with a unique solution in every system; the other kind of matrix, where every linear system for which it is the matrix of coefficients has either no solution or infinitely many solutions, is the singular kind. Among the ways of formulating the statement that a matrix is nonsingular are these:
1. a system with that matrix of coefficients has a solution, and that solution is unique;
2. Gauss-Jordan reduction of the matrix yields an identity matrix;
3. the rows of the matrix form a linearly independent set;
4. the columns of the matrix form a basis for the whole space;
5. any map that the matrix represents is an isomorphism;
6. an inverse matrix exists.
So when we look at a particular square matrix, the question of whether it is nonsingular is one of the first things that
we ask. This chapter develops a formula to determine this. (Since we will restrict the discussion to square matrices,
in this chapter we will usually simply say "matrix" in place of "square matrix".)
More precisely, we will develop infinitely many formulas, one for 1×1 matrices, one for 2×2 matrices, etc. Of course, these formulas are related; that is, we will develop a family of formulas, a scheme that describes the formula for each size.
The
The
such that an
matrix
is
then "
".)
Linear Algebra/Exploration
This subsection is optional. It briefly describes how an investigator might come to a good general definition, which
is given in the next subsection.
The three cases above don't show an evident pattern to use for the general
term
terms
and
terms
, etc.,
have three letters. We may also observe that in those terms there is a letter from each row and column of the matrix,
e.g., the letters in the
term
come one from each row and one from each column. But these observations perhaps seem more puzzling than
enlightening. For instance, we might wonder why some of the terms are added while others are subtracted.
A good problem solving strategy is to see what properties a solution must have and then search for something with
those properties. So we shall start by asking what properties we require of the formulas.
At this point, our primary way to decide whether a matrix is singular is to do Gaussian reduction and then check
whether the diagonal of the resulting echelon form matrix has any zeroes (that is, to check whether the product down the
diagonal is zero). So, we may expect that the proof that a formula determines singularity will involve applying
Gauss' method to the matrix, to show that in the end the product down the diagonal is zero if and only if the
determinant formula gives zero. This suggests our initial plan: we will look for a family of functions with the
property of being unaffected by row operations and with the property that a determinant of an echelon form matrix is
the product of its diagonal entries. Under this plan, a proof that the functions determine singularity would go, "Where
is the Gaussian reduction, the determinant of
equals the determinant of
(because the
determinant is unchanged by row operations), which is the product down the diagonal, which is zero if and only if
the matrix is singular". In the rest of this subsection we will test this plan on the
and
determinants that
we know. We will end up modifying the "unaffected by row operations" part, but not by much.
and
operation of pivoting: if
then is
pivot
operation
pivot
as do the other
pivot operations.
We are exploring a possibility here and we do not yet have all the facts. Nonetheless, so far, so good.
The next step is to compare
with
row swap
. This
swap inside of a
matrix
also does not give the same determinant as before the swap again there is a sign change. Trying a different
swap
to
. One of the
and the other case has the same result. Here is one
cases is
case
and the other two are similar. These lead us to suspect that multiplying a row by
. This fits with our modified plan because we are asking only that the zeroness of the determinant be unchanged and
we are not focusing on the determinant's sign or magnitude.
In summary, to develop the scheme for the formulas to compute determinants, we look for determinant functions that
remain unchanged under the pivoting operation, that change sign on a row swap, and that rescale on the rescaling of
a row. In the next two subsections we will find that for each such a function exists and is unique.
For the next subsection, note that, as above, scalars come out of each row without affecting other rows. For instance,
in this equality
the
isn't factored out of all three rows, only out of the top row. The determinant acts on each row of independently
of the other rows. When we want to use this property of determinants, we shall write the determinant as a function of
the rows: "
", instead of as "
" or "
". The definition of the
determinant that starts the next subsection is written in this way.
Exercises
This exercise is recommended for all readers.
Problem 1
Evaluate the determinant of each.
1.
2.
3.
Problem 2
Evaluate the determinant of each.
1.
2.
3.
This exercise is recommended for all readers.
Problem 3
Verify that the determinant of an upper-triangular matrix is the product down the diagonal.
2.
3.
This exercise is recommended for all readers.
Problem 6
Each pair of matrices differ by one row operation. Use this operation to compare
1.
2.
3.
Problem 7
Show this.
with
Problem 8
Which real numbers make this matrix singular?
Problem 9
Do the Gaussian reduction to check the formula for
is nonsingular iff
Problem 10
Show that the equation of a line in
thru
and
sum the products on the forward diagonals and subtract the products on the backward diagonals. That is, first write
1. Check that this agrees with the formula given in the preamble to this section.
2. Does it extend to other-sized determinants?
Problem 12
The cross product of the vectors
Note that the first row is composed of vectors, the vectors from the standard basis for
matrices.
.
Matrices
and
such that
. (This definition is in
Problem 15
Prove that for
for
matrices, the determinant of a matrix equals the determinant of its transpose. Does that also hold
matrices?
This exercise is recommended for all readers.
Problem 16
Is the determinant function linear is
Problem 17
Show that if
is
then
Problem 18
Which real numbers
make
, ...,
stating such a formula. Instead, we will begin by considering the function that such a formula calculates. We will
define the function by its properties, then prove that the function with these properties exist and is unique and also
describe formulas that compute this function. (Because we will show that the function exists and is unique, from the
start we will say "
" instead of "if there is a determinant function then
" and "the determinant"
instead of "any determinant".)
Definition 2.1
A
determinant is a function
1.
2.
3.
4.
such that
for
for
for
where
(the
is an identity matrix
for
Remark 2.2
Property (2) is redundant since
swaps rows
and
The first result shows that a function satisfying these conditions gives a criterion for nonsingularity. (Its last sentence
is that, in the context of the first three conditions, (4) is equivalent to the condition that the determinant of an echelon
form matrix is the product down the diagonal.)
Lemma 2.3
A matrix with two identical rows has a determinant of zero. A matrix with a zero row has a determinant of zero. A
matrix is nonsingular if and only if its determinant is nonzero. The determinant of an echelon form matrix is the
product down its diagonal.
Proof
To verify the first sentence, swap the two equal rows. The sign of the determinant changes, but the matrix is
unchanged and so its determinant is unchanged. Thus the determinant is zero.
For the second sentence, we multiply a zero row by -1 and apply property (3). Multiplying a zero row by a constant leaves the matrix unchanged, so property (3) implies that the determinant equals its own negative. The only way this can be is if the determinant is zero. For the third sentence, a singular matrix reduces under Gauss' method to an echelon form matrix with a zero row; by the second sentence of this lemma its determinant is zero.
Finally, for the fourth sentence, if an echelon form matrix is singular then it has a zero on its diagonal, that is, the
product down its diagonal is zero. The third sentence says that if a matrix is singular then its determinant is zero. So
if the echelon form matrix is singular then its determinant equals the product down its diagonal.
If an echelon form matrix is nonsingular then none of its diagonal entries is zero so we can use property (3) of the
definition to factor them out (again, the vertical bars
indicate the determinant operation).
Next, the Jordan half of Gauss-Jordan elimination, using property (1) of the definition, leaves the identity matrix.
Therefore, if an echelon form matrix is nonsingular then its determinant is the product down its diagonal.
That result gives us a way to compute the value of a determinant function on a matrix. Do Gaussian reduction,
keeping track of any changes of sign caused by row swaps and any scalars that are factored out, and then finish by
multiplying down the diagonal of the echelon form result. This procedure takes the same time as Gauss' method and
so is sufficiently fast to be practical on the size matrices that we see in this book.
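Here is a sketch of that procedure in code (plain Python; refinements such as partial pivoting for numerical accuracy are left out, and the sample matrix is only an illustration).
def det(matrix):
    """Determinant by Gauss' method: reduce to echelon form while tracking
    sign changes from row swaps, then multiply down the diagonal."""
    m = [row[:] for row in matrix]   # work on a copy
    n = len(m)
    sign = 1
    for col in range(n):
        # find a row at or below the diagonal with a nonzero entry in this column
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return 0                 # a zero comes up on the diagonal: singular
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign             # a row swap changes the sign
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[col])]
    product = 1
    for i in range(n):
        product *= m[i][i]
    return sign * product

print(det([[2, 1], [1, 3]]))   # 5.0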
Example 2.4
Doing
determinants
determinant is usually easier to calculate with Gauss' method than with the formula given earlier.
Example 2.5
Determinants of matrices any bigger than
are almost always most quickly done with this Gauss' method
procedure.
The prior example illustrates an important point. Although we have not yet found a
determinant formula, if
one exists then we know what value it gives to the matrix if there is a function with properties (1)-(4) then on the
above matrix the function must return .
Lemma 2.6
For each
, if there is an
Proof
For any
matrix we can perform Gauss' method on the matrix, keeping track of how the sign alternates on row
swaps, and then multiply down the diagonal of the echelon form result. By the definition and the lemma, all
determinant functions must return this value on this matrix. Thus all
there is only one input argument/output value relationship satisfying the four conditions.
determinant function" emphasizes that, although we can use Gauss' method to compute the
only value that a determinant function could possibly return, we haven't yet shown that such a determinant function
exists for all . In the rest of the section we will produce determinant functions.
Exercises
For these, assume that an
2.
Problem 2
Use Gauss' method to find each.
1.
2.
Problem 3
For which values of
1.
2.
3.
This exercise is recommended for all readers.
Problem 5
Find the determinant of a diagonal matrix.
Problem 6
Describe the solution set of a homogeneous linear system if the determinant of the matrix of coefficients is nonzero.
This exercise is recommended for all readers.
Problem 7
Show that this determinant is zero.
Problem 8
1. Find the
,
, and
matrices with
2. Find the determinant of the square matrix with
entry given by
entry
.
Problem 9
1. Find the
,
, and
matrices with
2. Find the determinant of the square matrix with
entry given by
entry
.
Problem 11
The second condition in the definition, that row swaps change the sign of a determinant, is somewhat annoying. It
means we have to keep track of the number of swaps, to compute how the sign alternates. Can we get rid of it? Can
we replace it with the condition that row swaps leave the determinant unchanged? (If so then we would need new
,
, and
formulas, but that would be a minor matter.)
Problem 12
Prove that the determinant of any triangular matrix, upper or lower, is the product down its diagonal.
Problem 13
Refer to the definition of elementary matrices in the Mechanics of Matrix Multiplication subsection.
1. What is the determinant of each kind of elementary matrix?
2. Prove that if
is any elementary matrix then
3. (This question doesn't involve determinants.) Prove that if
4. Show that
5. Show that if
.
is nonsingular then
Problem 14
Prove that the determinant of a product is the product of the determinants
1.
2.
3.
4.
5.
Problem 15
A submatrix of a given matrix
is one that can be obtained by deleting some of the rows and columns of
Prove that for any square matrix, the rank of the matrix equals the size of the largest square submatrix with a nonzero determinant.
Following Definition 2.1 gives that both calculations yield the determinant
track of the fact that the row swap changes the sign of the result of multiplying down the diagonal. But if we follow
the supposition and change the second condition then the two calculations yield different values,
and . That
is, under the supposition the outcome would not be well-defined; no function exists that satisfies the changed second condition along with the other three.
Of course, observing that Definition 2.1 does the right thing in this one instance is not enough; what we will do in
the rest of this section is to show that there is never a conflict. The natural way to try this would be to define the
determinant function with: "The value of the function is the result of doing Gauss' method, keeping track of row
swaps, and finishing by multiplying down the diagonal". (Since Gauss' method allows for some variation, such as a
choice of which row to use when swapping, we would have to fix an explicit algorithm.) Then we would be done if
we verified that this way of computing the determinant satisfies the four properties. For instance, if and are
related by a row swap then we would need to show that this algorithm returns determinants that are negatives of each
other. However, how to verify this is not evident. So the development below will not proceed in this way. Instead, in
this subsection we will define a different way to compute the value of a determinant, a formula, and we will use this
way to prove that the conditions are satisfied.
The formula that we shall use is based on an insight gotten from property (3) of the definition of determinants. This
property shows that determinants are not linear.
Example 3.1
For this matrix
Since scalars come out a row at a time, we might guess that determinants are linear a row at a time.
Definition 3.2
Let
is multilinear if
1.
2.
for
and
Lemma 3.3
Determinants are multilinear.
Proof
The definition of determinants gives property (2) (Lemma 2.3 following that definition covers the
case) so
If the set
is linearly dependent then all three matrices are singular and so all three
determinants are zero and the equality is trivial. Therefore assume that the set is linearly independent. This set of
-wide row vectors has
members, so we can make a basis by adding one more vector
. Express
and
giving this.
to
, etc. Thus
and
and
in terms of the basis, e.g., start with the pivot operations of adding
to
and
to
, etc.
Multilinearity allows us to expand a determinant into a sum of determinants, each of which involves a simple matrix.
Example 3.4
We can use multilinearity to split this determinant into two, first breaking up the first row
and then separating each of those two, breaking along the second rows.
We are left with four determinants, such that in each row of each matrix there is a single entry from the original
matrix.
Example 3.5
In the same way, a
determinant separates into a sum of many simpler determinants. We start by splitting along
Each of these three will itself split in three along the second row. Each of the resulting nine splits in three along the
third row, resulting in twenty seven determinants
such that each row contains a single entry from the starting matrix.
So an
single entry from the starting matrix. However, many of these summand determinants are zero.
Example 3.6
In each of these three matrices from the above expansion, two of the rows have their entry from the starting matrix in
the same column, e.g., in the first matrix, the and the both come from the first column.
Any such matrix is singular, because in each, one row is a multiple of the other (or is a zero row). Thus, any such
determinant is zero, by Lemma 2.3.
determinant into the sum of the twenty seven determinants simplifies to
To finish, we evaluate those six determinants by row-swapping them to the identity matrix, keeping track of the
resulting sign changes.
determinant to get
separate
determinants, each with one distinguished entry per row. We can drop most of these new determinants because the
matrices are singular, with one row a multiple of another. We are left with the one-entry-per-row determinants also
having only one entry per column (one entry from the original determinant, that is). And, since we can factor scalars
out, we can further reduce to only considering determinants of one-entry-per-row-and-column matrices where the
entries are ones.
These are permutation matrices. Thus, the determinant can be computed in this three-step way (Step 1) for each
permutation matrix, multiply together the entries from the original matrix where that permutation matrix has ones,
(Step 2) multiply that by the determinant of the permutation matrix and (Step 3) do that for all permutation matrices
and sum the results together.
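The three-step process can be sketched directly in code; in this sketch the signum of a permutation is computed by counting inversions, as in the Determinants Exist subsection below, and the sample matrix is only an illustration.
from itertools import permutations

def sgn(perm):
    """+1 for an even number of inversions, -1 for an odd number."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return 1 if inversions % 2 == 0 else -1

def entries_product(matrix, perm):
    """Multiply together the entries picked out by the permutation,
    one from each row and one from each column."""
    product = 1
    for row, col in enumerate(perm):
        product *= matrix[row][col]
    return product

def det(matrix):
    """Permutation expansion: weight each product by the signum and sum."""
    n = len(matrix)
    return sum(sgn(p) * entries_product(matrix, p)
               for p in permutations(range(n)))

print(det([[1, 2], [3, 4]]))   # -2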
To state this as a formula, we introduce a notation for permutation matrices. Let
zeroes except for a one in its
Definition 3.7
is
to
's.
, ...,
Example 3.8
The
-permutations are
and
, and
are
-permutations
are
, and
are
, and
Definition 3.9
The permutation expansion for determinants is
where
-permutations.
". This
phrase is just a restating of the three-step process (Step 1) for each permutation matrix, compute
(Step 2) multiply that by
and (Step 3) sum all such terms together.
Example 3.10
The familiar formula for the determinant of a
(the second permutation matrix takes one row swap to pass to the identity). Similarly, the formula for the
determinant of a
matrix is this.
Computing a determinant by permutation expansion usually takes longer than Gauss' method. However, here we are
not trying to do the computation efficiently, we are instead trying to give a determinant formula that we can prove to
be well-defined. While the permutation expansion is impractical for computations, it is useful in proofs. In particular,
there is a
determinant function.
The proof is deferred to the following subsection. Also there is the proof of the next result (they share some
features).
Theorem 3.12
The determinant of a matrix equals the determinant of its transpose.
The consequence of this theorem is that, while we have so far stated results in terms of rows (e.g., determinants are
multilinear in their rows, row swaps change the signum, etc.), all of the results also hold in terms of columns. The
final result gives examples.
Corollary 3.13
A matrix with two equal columns is singular. Column swaps change the sign of a determinant. Determinants are
multilinear in their columns.
Proof
For the first statement, transposing the matrix results in a matrix with the same determinant, and with two equal
rows, and hence a determinant of zero. The other two are proved in the same way.
We finish with a summary (although the final subsection contains the unfinished business of proving the two
theorems). Determinant functions exist, are unique, and we know how to compute them. As for what determinants
are about, perhaps these lines (Kemp 1982) help make it memorable.
Determinant none,
Solution: lots or none.
Determinant some,
Solution: just one.
Exercises
These summarize the notation used in this book for the
- and
2.
This exercise is recommended for all readers.
- permutations.
Problem 2
Compute these both with Gauss' method and with the permutation expansion formula.
1.
2.
This exercise is recommended for all readers.
Problem 3
Use the permutation expansion formula to derive the formula for
determinants.
Problem 4
List all of the
-permutations.
Problem 5
A permutation, regarded as a function from the set
permutation has an inverse.
1. Find the inverse of each
2. Find the inverse of each
-permutation.
-permutation.
Problem 6
Prove that
and
, this holds.
Problem 7
Find the only nonzero term in the permutation expansion of this matrix.
Problem 9
Verify the second and third statements in Corollary 3.13.
This exercise is recommended for all readers.
Problem 10
Show that if an
can be expressed as a
entry is zero?
matrix.
Problem 13
How many
Problem 14
A matrix
is skew-symmetric if
Show that
, as in this matrix.
matrix has a
determinant of zero?
This exercise is recommended for all readers.
Problem 16
If
we
have
data
points
and
want
to
find
polynomial
equation/
unknown linear system. The matrix of coefficients for that system is called the
Vandermonde matrix. Prove that the determinant of the transpose of that matrix of coefficients
with
that the determinant is zero, and the linear system has no solution, if and only if the
Problem 17
. (This shows
and
ones in the upper left and lower right, and the zero blocks in
the upper right and lower left. Show that if a matrix can be partitioned as
where
and
and
matrix
distinct reals
has
arrays in
Show that
be the sum of the integer elements of a magic square of order three and let
? Problem 22
Show that the determinant of the
References
Kemp, Franklin (Oct. 1982), "Linear Equations", American Mathematical Monthly (American Mathematical
Society): 608.
Silverman, D. L. (proposer); Trigg, C. W. (solver) (Jan. 1963), "Quickie 237", Mathematics Magazine (American
Mathematical Society) 36 (1).
Strang, Gilbert (1980), Linear Algebra and its Applications (2nd ed.), Harcourt Brace Jovanovich
Trigg, C. W. (proposer) (Jan. 1963), "Quickie 307", Mathematics Magazine (American Mathematical Society) 36
(1): 77.
Trigg, C. W. (proposer); Walker, R. J. (solver) (Jan. 1949), "Elementary Problem 813", American Mathematical
Monthly (American Mathematical Society) 56 (1).
Rupp, C. A. (proposer); Aude, H. T. R. (solver) (Jun.-July 1931), "Problem 3468", American Mathematical
Monthly (American Mathematical Society) 37 (6): 355.
This reduces the problem to showing that there is a determinant function on the set of permutation matrices of that
size.
Of course, a permutation matrix can be row-swapped to the identity matrix and to calculate its determinant we can
keep track of the number of row swaps. However, the problem is still not solved. We still have not shown that the
result is well-defined. For instance, the determinant of
or with three.
some way to do it with an even number of swaps? Corollary 4.6 below proves that there is no permutation matrix
that can be row-swapped to an identity matrix in two ways, one with an even number of swaps and the other with an
odd number of swaps.
Definition 4.1
Two rows of a permutation matrix
such that
Example 4.2
This permutation matrix
precedes
precedes
, and
precedes
Lemma 4.3
A row-swap in a permutation matrix changes the number of inversions from even to odd, or from odd to even.
Proof
Consider a swap of rows
and
, where
then the swap changes the total number of inversions by one either removing or producing one inversion,
depending on whether
or not, since inversions involving rows not in this pair are not affected.
Consequently, the total number of inversions changes from odd to even or from even to odd.
If the rows are not adjacent then they can be swapped via a sequence of adjacent swaps, first bringing row
up
down.
Each of these adjacent swaps changes the number of inversions from odd to even or from even to odd. There are an
odd number
of them. The total change in the number of inversions is from even to odd or
from odd to even.
Definition 4.4
The signum of a permutation
is
is even, and is
if the number
of inversions is odd.
Example 4.5
With the subscripts from Example 3.8 for the
-permutations,
while
Corollary 4.6
If a permutation matrix has an odd number of inversions then swapping it to the identity takes an odd number of
swaps. If it has an even number of inversions then swapping to the identity takes an even number of swaps.
Proof
The identity matrix has zero inversions. To change an odd number to zero requires an odd number of swaps, and to
change an even number to zero requires an even number of swaps.
We still have not shown that the permutation expansion is well-defined because we have not considered row
operations on permutation matrices other than row swaps. We will finesse this problem: we will define a function
by altering the permutation expansion formula, replacing
with
(this gives the same value as the permutation expansion because the prior result shows that
).
This formula's advantage is that the number of inversions is clearly well-defined; just count them. Therefore, we will show that a determinant function exists for all sizes by showing that this function is it, that is, that it satisfies the four conditions.
Lemma 4.7
The function
Proof
We must check that it has the four properties from the definition.
Property (4) is easy; in
all of the summands are zero except for the product down the diagonal, which is one.
For property (3) consider
Factor the
To convert to unhatted
numbers
where
are
interchanged,
that equals
and
with
this
Replacing
gives
-th and
-th
the
.
in
Now
by a swap of the
-th and
-th
numbers. But any permutation can be derived from some other permutation by such a swap, in one and only one
way, so this summation is in fact a sum over all permutations, taken once and only once. Thus
.
To do property (1) let
and consider
(notice: that's
, not
where
, not
is a matrix equal to
). Thus,
of
and
is a copy of row
of
(because the
corresponding terms
shows that the corresponding permutation matrices are transposes. That is, there is a relationship between these
corresponding permutations. Problem 6 shows that they are inverses.
Theorem 4.9
The determinant of a matrix equals the determinant of its transpose.
Proof
Call the matrix
with
's so that
and we can finish the argument by manipulating the expression on the right to be recognizable as the determinant of
the transpose. We have written all permutation expansions (as in the middle expression above) with the row indices
ascending. To rewrite the expression on the right in this way, note that because is a permutation, the row indices
, ...,
, ...,
). Substituting on
right gives
as required.
Exercises
These summarize the notation used in this book for the
- and
- permutations.
Problem 1
Give the permutation expansion of a general
-permutation.
-permutation.
-permutation.
-permutation.
Problem 4
What is the signum of the
-permutation
? (Strang 1980)
Problem 5
Prove these.
1. Every permutation has an inverse.
2.
3. Every permutation is the inverse of another.
Problem 6
Prove that the matrix of the permutation inverse is the transpose of the matrix of the permutation
for any permutation .
This exercise is recommended for all readers.
Problem 7
Show that a permutation matrix with
with Corollary 4.6.
let
on all
on all
and
with
-permutations.
-permutations.
Many authors give this formula as the definition of the signum function.
Solutions
References
Strang, Gilbert (1980), Linear Algebra and its Applications (2nd ed.), Harcourt Brace Jovanovich
is familiar from the construction of the sum of the two vectors. One way to compute the area that it encloses is to
draw this rectangle and subtract the area of each subregion.
The fact that the area equals the value of the determinant
is no coincidence. The properties in the definition of determinants make reasonable postulates for a function that
measures the size of the region enclosed by the vectors in the matrix.
For instance, this shows the effect of multiplying one of the box-defining vectors by a scalar (the scalar used is
).
and
is bigger, by a factor of
and
in
general
expect
of
the
size
and
. That
measure
that
two have the same base and the same height and hence the same area. This illustrates that
.
Generalized,
, which is a restatement of the determinant
postulate.
Of course, this picture
shows that
, which is a restatement of the property that the determinant of the identity matrix is one.
With that, because property (2) of determinants is redundant (as remarked right after the definition), we have that all
of the properties of determinants are reasonable to expect of a function that gives the size of boxes. We can now cite
the work done in the prior section to show that the determinant exists and is unique to be assured that these
postulates are consistent and sufficient (we do not need any more postulates). That is, we've got an intuitive
justification to interpret
as the size of the box formed by the vectors. (Comment. An even more
basic approach, which also leads to the definition below, is in (Weston 1959).)
Example 1.1
The volume of this parallelepiped, which can be found by the usual formula from high school geometry, is
Remark 1.2
Although property (2) of the definition of determinants is redundant, it raises an important point. Consider these two.
The only difference between them is in the order in which the vectors are taken. If we take
, follow the counterclockwise arc shown, then the sign is positive. Following a clockwise arc gives a negative sign.
The sign returned by the size function reflects the "orientation" or "sense" of the box. (We see the same thing if we
picture the effect of scalar multiplication by a negative scalar.)
Although it is both interesting and important, the idea of orientation turns out to be tricky. It is not needed for the
development below, and so we will pass it by. (See Problem 20.)
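A small numerical illustration of the sign reflecting orientation (the two vectors here are arbitrary choices):
import numpy as np

u = np.array([2.0, 1.0])   # illustrative box-defining vectors
v = np.array([1.0, 3.0])

# The absolute value of the determinant is the area of the box;
# swapping the two vectors reverses the orientation, so the sign flips.
d1 = np.linalg.det(np.column_stack([u, v]))
d2 = np.linalg.det(np.column_stack([v, u]))
print(d1, d2)   # 5.0 and -5.0, up to rounding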
Definition 1.3
The box (or parallelepiped) formed by
The definition of volume gives a geometric interpretation to something in the space, boxes made from vectors. The
next result relates the geometry to the functions that operate on spaces.
Theorem 1.5
A transformation
box
is
changes the size of all boxes by the same factor, namely the size of the image of a
times the size of the box
, where
.
The two sentences state the same idea, first in map terms and then in matrix terms. Although we tend to prefer a map
point of view, the second sentence, the matrix version, is more convenient for the proof and is also the way that we shall use this result later. (Alternate proofs are given as Problem 16 and Problem 21.)
Proof
The two statements are equivalent because
the unit box
basis).
First consider the case that
if
. A matrix has a zero determinant if and only if it is not invertible. Observe that
multiplication
shows that
then neither is
if
Now consider the case that
then
, that
such that
an
elementary
matrix
is
Example 1.6
then
then
equals
The
result
follow
. The
because
.
has been multiplied by
. But
and
to this
will
then
. The
, again by the
holds for
is not invertible
). Therefore, if
by
, and so
Corollary 1.7
If a matrix is invertible then the determinant of its inverse is the inverse of its determinant
Proof
Exercises
Problem 1
Find the volume of the region formed.
1.
2.
3.
. The
does equal
1.
2.
3.
This exercise is recommended for all readers.
Problem 5
By what factor does each transformation change the size of boxes?
1.
2.
3.
Problem 6
What is the area of the image of the rectangle
Problem 7
If
and
then
area is
determinant is
area is
Problem 11
1. Suppose that
and that
. Find
2. Assume that
. Prove that
.
.
be the matrix representing (with respect to the standard bases) the map that rotates plane vectors
counterclockwise thru
change sizes?
with endpoints
, and
triangle defines a plane what is the area of the triangle in that plane?)
This exercise is recommended for all readers.
Problem 16
An alternate proof of Theorem 1.5 uses the definition of determinant functions.
1. Note that the vectors forming
given by
. Show that has the first property of a determinant.
has the remaining three properties of a determinant function.
4. Conclude that
Problem 17
Give a non-identity matrix with the property that
. Show that if
then
and
such that
(we will
study this relation in Chapter Five). Show that similar matrices have the same determinant.
Problem 20
We usually represent vectors in
with respect to the standard basis so vectors in the first quadrant have both
coordinates positive.
gives the same counterclockwise cycle. We say these two bases have the same orientation.
1.
2.
3.
4.
5. What happens in
6. What happens in
?
?
thru
and
or
, and
, and
.
is
whose
References
Bittinger, Marvin (proposer) (Jan. 1973), "Quickie 578", Mathematics Magazine (American Mathematical
Society) 46 (5): 286,296.
Gardner, Martin (1990), The New Ambidextrous Universe, W. H. Freeman and Company.
Peterson, G. M. (Apr. 1955), "Area of a triangle", American Mathematical Monthly (American Mathematical
Society) 62 (4): 249.
Weston, J. D. (Aug./Sept. 1959), "Volume in Vector Spaces", American Mathematical Monthly (American
Mathematical Society) 66 (7): 575-577.
we can, for instance, factor out the entries from the first row
The point of the swapping (one swap to each of the permutation matrices on the second line and two swaps to each
on the third line) is that the three lines simplify to three terms.
The formula given in Theorem 1.5, which generalizes this example, is a recurrence the determinant is expressed
as a combination of determinants. This formula isn't circular because, as here, the determinant is expressed in terms
of determinants of matrices of smaller size.
Definition 1.2
For any
matrix
minor of
Example 1.3
The
, the
. The
cofactor
is
and column
cofactor of the matrix from Example 1.1 is the negative of the second
of
minor of
is the
.
determinant.
Example 1.4
Where
and
cofactors.
is an
Proof
Problem 15.
Example 1.6
We can compute the determinant
or column
Example 1.7
A row or column with many zeroes suggests a Laplace expansion.
We finish by applying this result to derive a new formula for the inverse of a matrix. With Theorem 1.5, the
determinant of an
matrix can be calculated by taking linear combinations of entries from a row and their
associated cofactors.
Recall that a matrix with two identical rows has a zero determinant. Thus, for any matrix
by entries from the "wrong" row row
with
) and (
gives zero
equal to row
. This equation
).
Note that the order of the subscripts in the matrix of cofactors is opposite to the order of subscripts in the other
matrix; e.g., along the first row of the matrix of cofactors the subscripts are
then
, etc.
Definition 1.8
The matrix adjoint to the square matrix
where
is the
is
cofactor.
Theorem 1.9
Where
is a square matrix,
Proof
Equations (
) and (
).
Example 1.10
If
is
Corollary 1.11
If
then
Example 1.12
The inverse of the matrix from Example 1.10 is
The formulas from this section are often used for by-hand calculation and are sometimes useful with special types of
matrices. However, they are not the best choice for computation with arbitrary matrices because they require more
arithmetic than, for instance, the Gauss-Jordan method.
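As an illustration of these formulas, here is a sketch that builds the cofactors, the matrix adjoint, and the inverse; the determinants are done by Laplace expansion along the first row, so this is only suitable for small matrices, and the sample matrix is an arbitrary choice.
def minor(matrix, i, j):
    """The matrix left after deleting row i and column j."""
    return [row[:j] + row[j+1:] for k, row in enumerate(matrix) if k != i]

def det(matrix):
    """Determinant by Laplace expansion along the first row."""
    if len(matrix) == 1:
        return matrix[0][0]
    return sum((-1) ** j * matrix[0][j] * det(minor(matrix, 0, j))
               for j in range(len(matrix)))

def adjoint(matrix):
    """The matrix adjoint: the transpose of the matrix of cofactors."""
    n = len(matrix)
    return [[(-1) ** (i + j) * det(minor(matrix, j, i)) for j in range(n)]
            for i in range(n)]

def inverse(matrix):
    d = det(matrix)
    return [[entry / d for entry in row] for row in adjoint(matrix)]

A = [[1, 2], [3, 4]]
print(inverse(A))   # [[-2.0, 1.0], [1.5, -0.5]]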
Exercises
This exercise is recommended for all readers.
Problem 1
Find the cofactor.
1.
2.
3.
This exercise is recommended for all readers.
Problem 2
Find the determinant by expanding
Problem 4
Find the matrix adjoint to each.
1.
2.
3.
4.
This exercise is recommended for all readers.
Problem 5
Find the inverse of each matrix in the prior question with Theorem 1.9.
Problem 6
Find the matrix adjoint to this one.
matrix.
matrix.
entry is zero in the part above the diagonal, that is, when
1. Must the adjoint of an upper triangular matrix be upper triangular? Lower triangular?
2. Prove that the inverse of an upper triangular matrix is upper triangular, if an inverse exists.
Problem 15
This question requires material from the optional Determinants Exist subsection. Prove Theorem 1.5 by using the
permutation expansion.
Problem 16
Prove that the determinant of a matrix equals the determinant of its transpose using Laplace's expansion and
induction on the size of the matrix.
? Problem 17
Show that
where
of order
is the
-th term of
Solutions
and
and
So even without determinants we can state the algebraic issue that opened this book, finding the solution of a linear
system, in geometric terms: by what factors
and
must we dilate the vectors to expand the small parallelogram
to fill the larger one?
However, by employing the geometric significance of determinants we can get something that is not just a
restatement, but also gives us a new insight and sometimes allows us to compute answers quickly. Compare the sizes
of these shaded boxes.
and
times the size of the first box. Since the third box is formed from
and
column to the first column, the size of the third box equals that of the second. We have this.
is formed from
has the
with the vector
we do this computation.
Cramer's Rule allows us to solve many two equations/two unknowns systems by eye. It is also sometimes used for
three equations/three unknowns systems. But computing large determinants takes a long time, so solving large
systems by Cramer's Rule is not practical.
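For the two equations/two unknowns case the rule is short enough to code directly; the coefficient names in this sketch, for the system a*x + b*y = e and c*x + d*y = f, are only for illustration.
def cramer_2x2(a, b, c, d, e, f):
    """Solve  a*x + b*y = e,  c*x + d*y = f  by Cramer's Rule."""
    det = a * d - b * c            # determinant of the matrix of coefficients
    if det == 0:
        raise ValueError("Cramer's Rule needs a nonsingular matrix of coefficients")
    x = (e * d - b * f) / det      # constants replace the first column
    y = (a * f - e * c) / det      # constants replace the second column
    return x, y

print(cramer_2x2(1, 2, 3, -1, 4, 5))   # x = 2.0, y = 1.0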
Exercises
Problem 1
Use Cramer's Rule to solve each for each of the variables.
1.
2.
Problem 2
Use Cramer's Rule to solve this system for
Problem 3
Prove Cramer's Rule.
Problem 4
Suppose that a linear system has as many equations as unknowns, that all of its coefficients and constants are
integers, and that its matrix of coefficients has determinant . Prove that the entries in the solution are all integers.
(Remark. This is often used to invent linear systems for exercises. If an instructor makes the linear system with this
property then the solution is not some disagreeable fraction.)
Problem 5
Use Cramer's Rule to give a formula for the solution of a two equations/two unknowns linear system.
Problem 6
Can Cramer's Rule tell the difference between a system with no solutions and one with infinitely many?
Problem 7
The first picture in this Topic (the one that doesn't use determinants) shows a unique solution case. Produce a similar
picture for the case of infinitely many solutions, and the case of no solutions.
Solutions
There are
is a large value; for instance, even if
different
is only
are obtained by multiplying entries together. This is a very large number of multiplications (for instance, (Knuth
1988) suggests
steps as a rough boundary for the limit of practical calculation). The factorial function grows
faster than the square function. It grows faster than the cube function, the fourth power function, or any polynomial
function. (One way to see that the factorial function grows faster than the square is to note that multiplying the first
two factors in gives
, which for large is approximately
, and then multiplying in more factors
will make it even larger. The same argument works for the cube function, etc.) So a computer that is programmed to
use the permutation expansion formula, and thus to perform a number of operations that is greater than or equal to
the factorial of the number of rows, would take very long times as its input data set grows.
In contrast, the time taken by the row reduction method does not grow so fast. This fragment of row-reduction code
is in the computer language FORTRAN. The matrix is stored in the
array A. For each ROW, parts of the program not shown here have already found the pivot entry A(ROW,COL).
(This code fragment is for illustration only and is incomplete. Still, analysis of a finished version that includes all of
the tests and subcases is messier but gives essentially the same conclusion.)
C     SUBTRACT A MULTIPLE OF THE PIVOT ROW FROM EACH LOWER ROW
      PIVINV=1.0/A(ROW,COL)
      DO 10 I=ROW+1, N
        DO 20 J=I, N
          A(I,J)=A(I,J)-PIVINV*A(I,COL)*A(ROW,J)
   20   CONTINUE
   10 CONTINUE
The outermost loop (not shown) runs through the rows. For each row, the nested I and J loops shown
perform arithmetic on the entries in A that are below and to the right of the pivot entry. Assume that the pivot is
found in the expected place, that is, that COL = ROW. Then there are (N-ROW)^2 entries below and to
the right of the pivot. On average, ROW will be about N/2, so we expect this arithmetic to be performed
about (N/2)^2 times, that is, to run in a time proportional to the square of the number of equations. Taking into
account the outer loop that is not shown, we get the estimate that the running time of the algorithm is proportional to
the cube of the number of equations.
Finding the fastest algorithm to compute the determinant is a topic of current research. Algorithms are known that
run in time between the second and third power.
Speed estimates like these help us to understand how quickly or slowly an algorithm will run. Algorithms that run in
time proportional to the size of the data set are fast, algorithms that run in time proportional to the square of the size
of the data set are less fast, but typically quite usable, and algorithms that run in time proportional to the cube of the
size of the data set are still reasonable in speed for not-too-big input data. However, algorithms that run in time
(greater than or equal to) the factorial of the size of the data set are not practical for input of any appreciable size.
There are other methods besides the two discussed here that are also used for computation of determinants. Those lie
outside of our scope. Nonetheless, this contrast of the two methods for computing determinants makes the point that
although in principle they give the same answer, in practice the idea is to select the one that is fast.
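A small experiment makes the contrast vivid. The sketch below is written in Python rather than FORTRAN, and the helper names det_permutation and det_gauss are our own, chosen for illustration. It computes the same determinant by the permutation expansion and by Gauss' method, so the factorial-versus-cube growth can be timed directly on larger and larger arrays.

from itertools import permutations

def det_permutation(a):
    # sum, over all permutations, of the signed products of entries
    n = len(a)
    total = 0.0
    for perm in permutations(range(n)):
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])      # parity gives the sign
        prod = 1.0
        for row, col in enumerate(perm):
            prod *= a[row][col]
        total += (-1) ** inversions * prod
    return total

def det_gauss(a):
    # Gauss' method, assuming (as the FORTRAN fragment does) that no zero pivot arises
    a = [row[:] for row in a]
    n = len(a)
    det = 1.0
    for col in range(n):
        det *= a[col][col]
        pivinv = 1.0 / a[col][col]
        for i in range(col + 1, n):
            factor = pivinv * a[i][col]
            for j in range(col, n):
                a[i][j] -= factor * a[col][j]
    return det

m = [[2.0, 1.0, 3.0], [0.0, 4.0, 1.0], [5.0, 2.0, 2.0]]
print(det_permutation(m), det_gauss(m))    # both print -43.0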
Exercises
Most of these problems presume access to a computer.
Problem 1
Computer systems generate random numbers (of course, these are only pseudo-random, in that they are generated by
an algorithm, but they pass a number of reasonable statistical tests for randomness).
1. Fill an array with random numbers, see whether it is singular, and repeat that experiment a
few times. Are singular matrices frequent or rare (in this sense)?
2. Time your computer algebra system at finding the determinant of ten arrays of some moderate size and get the
average time per array. Repeat the prior item for arrays of larger sizes. (Notice
that, when an array is singular, it can sometimes be found to be so quite quickly, for instance if the first row
equals the second. In the light of your answer to the first part, do you expect that singular systems play a large
role in your average?)
3. Graph the input size versus the average time.
Problem 2
Compute the determinant of each of these by hand using the two methods discussed above.
1.
2.
3.
Count the number of multiplications and divisions used in each case, for each of the methods. (On a computer,
multiplications and divisions take much longer than additions and subtractions, so algorithm designers worry about
them more.)
Problem 3
What
array can you invent that takes your computer system the longest to reduce? The shortest?
Problem 4
Write the rest of the FORTRAN program to do a straightforward implementation of calculating determinants via
Gauss' method. (Don't test for a zero pivot.) Compare the speed of your code to that used in your computer algebra
system.
Problem 5
The FORTRAN language specification requires that arrays be stored "by column", that is, the entire first column is
stored contiguously, then the second column, etc. Does the code fragment given take advantage of this, or can it be
rewritten to make it faster, by taking advantage of the fact that computer fetches are faster from contiguous
locations?
Solutions
References
Knuth, Donald E. (1988), The Art of Computer Programming, Addison Wesley.
What is there in the room, for instance where the ceiling meets the left and right walls, are lines that are parallel.
However, what a viewer sees is lines that, if extended, would intersect. The intersection point is called the vanishing
point. This aspect of perspective is also familiar as the image of a long stretch of railroad tracks that appear to
converge at the horizon.
To depict the room, da Vinci has adopted a model of how we see, of how we project the three dimensional scene to a
two dimensional image. This model is only a first approximation: it does not take into account that our retina is
curved and our lens bends the light, that we have binocular vision, or that our brain's processing greatly affects what
we see. But nonetheless it is interesting, both artistically and mathematically.
The projection is not orthogonal; it is a central projection from a single point to the plane of the canvas.
(It is not an orthogonal projection since the line from the viewer to
picture suggests, the operation of central projection preserves some geometric properties: lines project to lines.
However, it fails to preserve some others: equal length segments can project to segments of unequal length; the
length of
is greater than the length of
because the segment projected to
is closer to the viewer and
closer things look bigger. The study of the effects of central projections is projective geometry. We will see how
linear algebra can be used in this study.
There are three cases of central projection. The first is the projection done by a movie projector.
We can think that each source point is "pushed" from the domain plane outward to the image point in the codomain
plane. This case of projection has a somewhat different character than the second case, that of the artist "pulling" the
source back to the canvas.
in the middle. An example of this is when we use a pinhole to shine the image of a solar eclipse onto a piece
of paper.
of
to
Consider again the effect of railroad tracks that appear to converge to a point. We model this with parallel lines in a
domain plane and a projection via a to a codomain plane . (The gray lines are parallel to and .)
All three projection cases appear here. The first picture below shows
points from part of
acts like
projected near to the vanishing point are the ones that are far out on the bottom left of
to the vertical gray line are sent high up on
. Points in
There are two awkward things about this situation. The first is that neither of the two points in the domain nearest to
the vertical gray line (see below) has an image because a projection from those two is along the gray line that is
parallel to the codomain plane (we sometimes say that these two are projected "to infinity"). The second awkward
thing is that the vanishing point in isn't the image of any point from because a projection to this point would be
along the gray line that is parallel to the domain plane (we sometimes say that the vanishing point is the image of a
projection "from infinity").
looks outward, anything in the line of vision is projected to the same spot on the dome. This includes things on the
line between and the dome, as in the case of projection by the movie projector. It includes things on the line
further from
behind
than the dome, as in the case of projection by the painter. It also includes things on the line that lie
, all of the spots on the line are seen as the same point. Accordingly, for any nonzero vector
of
representative member of the line, so that the projective point shown above can be represented in any of these three
ways.
This picture, and the above definition that arises from it, clarifies the description of central projection but there is
something awkward about the dome model: what if the viewer looks down? If we draw 's line of sight so that the
part coming toward us, out of the page, goes down below the dome then we can trace the line of sight backward, up
past and toward the part of the hemisphere that is behind the page. So in the dome model, looking down gives a
projective point that is behind the viewer. Therefore, if the viewer in the picture above drops the line of sight toward
the bottom of the dome then the projective point drops also and as the line of sight continues down past the equator,
the projective point suddenly shifts from the front of the dome to the back of the dome. This discontinuity in the
drawing means that we often have to treat equatorial points as a separate case. That is, while the railroad track
discussion of central projection has three cases, the dome model has two.
We can do better than this. Consider a sphere centered at the origin. Any line through the origin intersects the sphere
in two spots, which are said to be antipodal. Because we associate each line through the origin with a point in the
projective plane, we can draw such a point as a pair of antipodal spots on the sphere. Below, the two antipodal spots
are shown connected by a dashed line to emphasize that they are not two different points, the pair of spots together
make one projective point.
While drawing a point as a pair of antipodal spots is not as natural as the one-spot-per-point dome model, on the other
hand the awkwardness of the dome model is gone, in that as a line of view slides from north to south, no sudden
changes happen in the picture. This model of central projection is uniform: the three cases are reduced to one.
So far we have described points in projective geometry. What about lines? What a viewer at the origin sees as a line
is shown below as a great circle, the intersection of the model sphere with a plane through the origin.
(One of the projective points on this line is shown to bring out a subtlety. Because two antipodal spots together make
up a single projective point, the great circle's behind-the-paper part is the same set of projective points as its
in-front-of-the-paper part.) Just as we did with each projective point, we will also describe a projective line with a
triple of reals. For instance, the members of this plane through the origin in
distinguish lines from points). In general, for any nonzero three-wide row vector
). For instance, the projective point described above by the column vector with
lies in the projective line described by
lies in the plane through the origin whose equation is of the form
, because points incident on the line are characterized by having the property that their
representatives satisfy this equation. One difference from familiar Euclidean analytic geometry is that in projective
geometry we talk about the equation of a point. For a fixed point like
the property that characterizes lines through this point (that is, lines incident on this point) is that the components of
any representatives satisfy
and so this is the equation of .
This symmetry of the statements about lines and points brings up the Duality Principle of projective geometry: in
any true statement, interchanging "point" with "line" results in another true statement. For example, just as two
distinct points determine one and only one line, in the projective plane, two distinct lines determine one and only one
point. Here is a picture showing two lines that cross in antipodal spots and thus cross at one projective point.
( )
Contrast this with Euclidean geometry, where two distinct lines may have a unique intersection or may be parallel. In
this way, projective geometry is simpler, more uniform, than Euclidean geometry.
That simplicity is relevant because there is a relationship between the two spaces: the projective plane can be viewed
as an extension of the Euclidean plane. Take the sphere model of the projective plane to be the unit sphere in
and
take Euclidean space to be the plane
corresponding to points in Euclidean space, because all of the points on the plane are projections of antipodal spots
from the sphere.
(
)
Note though that projective points on the equator don't project up to the plane. Instead, these project "out to infinity".
We can thus think of projective space as consisting of the Euclidean plane with some extra points adjoined; the
Euclidean plane is embedded in the projective plane. These extra points, the equatorial points, are the ideal points or
points at infinity and the equator is the ideal line or line at infinity (note that it is not a Euclidean line, it is a
projective line).
The advantage of the extension to the projective plane is that some of the awkwardness of Euclidean geometry
disappears. For instance, the projective lines shown above in ( ) cross at antipodal spots, a single projective point,
on the sphere's equator. If we put those lines into (
) then they correspond to Euclidean lines that are parallel.
That is, in moving from the Euclidean plane to the projective plane, we move from having two cases, that lines either
intersect or are parallel, to having only one case, that lines intersect (possibly at a point at infinity).
The projective case is nicer in many ways than the Euclidean case but has the problem that we don't have the same
experience or intuitions with it. That's one advantage of doing analytic geometry, where the equations can lead us to
the right conclusions. Analytic projective geometry uses linear algebra. For instance, for three points of the
projective plane , , and , setting up the equations for those points by fixing vectors representing each, shows
that the three are collinear (incident in a single line) if and only if the resulting three-equation system has
infinitely many row vector solutions representing that line. That, in turn, holds if and only if this determinant is zero.
Thus, three points in the projective plane are collinear if and only if any three representative column vectors are
linearly dependent. Similarly (and illustrating the Duality Principle), three lines in the projective plane are incident
on a single point if and only if any three row vectors representing them are linearly dependent.
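This test is easy to carry out on a computer. The sketch below, in Python with the NumPy library (the function name collinear and the sample points are our own choices, for illustration), stacks three representative column vectors into a matrix and checks whether its determinant vanishes.

import numpy as np

def collinear(p, q, r, tol=1e-9):
    # three projective points are collinear exactly when representative
    # column vectors are linearly dependent, that is, when the determinant is zero
    return abs(np.linalg.det(np.column_stack([p, q, r]))) < tol

# representatives of three points incident on the projective line with row vector (1, 1, -1)
print(collinear([1, 0, 1], [0, 1, 1], [1, 1, 2]))    # True
print(collinear([1, 0, 1], [0, 1, 1], [1, 0, 0]))    # False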
The following result is more evidence of the "niceness" of the geometry of the projective plane, compared to the
Euclidean case. These two triangles are said to be in perspective from because their corresponding vertices are
collinear.
and
, the sides
and
. Desargues' Theorem is that when the three pairs of corresponding sides are extended to lines,
, the point
We will prove this theorem using projective geometry. (These are drawn as Euclidean figures because it is the more
familiar image. To consider them as projective figures, we can imagine that, although the line segments shown are
parts of great circles and so are curved, the model has such a large radius compared to the size of the figures that the
sides appear in this sketch to be straight.)
For this proof, we need a preliminary lemma (Coxeter 1974): if
plane (no three of which are collinear) then there are homogeneous coordinate vectors
projective points, and a basis
for
, satisfying this.
, and
for the
and
and
of the line
and
are of the
are similar.
and
is this.
(This is, of course, the homogeneous coordinate vector of a projective point.) The other two intersections are similar.
The proof is finished by noting that these projective points are on one projective line because the sum of the three
homogeneous coordinate vectors is zero.
Every projective theorem has a translation to a Euclidean version, although the Euclidean result is often messier to
state and prove. Desargues' Theorem illustrates this. In the translation to Euclidean space, the case where
lies on
the ideal line must be treated separately for then the lines
, and
are parallel.
The parenthetical remark following the statement of Desargues' Theorem suggests thinking of the Euclidean pictures
as figures from projective geometry for a model of very large radius. That is, just as a small area of the earth appears
flat to people living there, the projective plane is also "locally Euclidean".
Although its local properties are the familiar Euclidean ones, there is a global property of the projective plane that is
quite different. The picture below shows a projective point. At that point is drawn an
-axis. There is something
interesting about the way this axis appears at the antipodal ends of the sphere. In the northern hemisphere, where the
axes are drawn in black, a right hand put down with fingers on the -axis will have the thumb point along the
-axis. But the antipodal axes have just the opposite: a right hand placed with its fingers on the -axis will have the
thumb point the wrong way; instead, it is a left hand that works. Briefly, the projective plane is not orientable: in
this geometry, left and right handedness are not fixed properties of figures.
The sequence of pictures below dramatizes this non-orientability. They sketch a trip around this space in the
direction of the part of the
-axis. (Warning: the trip shown is not halfway around, it is a full circuit. True, if
we made this into a movie then we could watch the northern hemisphere spots in the drawing above gradually rotate
about halfway around the sphere to the last picture below. And we could watch the southern hemisphere spots in the
picture above slide through the south pole and up through the equator to the last picture. But: the spots at either end
of the dashed line are the same projective point. We don't need to continue on much further; we are pretty much back
to the projective point where we started by the last picture.)
part of the
-axes sticks out in the other direction. Thus, in the projective plane we
cannot describe a figure as right- or left-handed (another way to make this point is that we cannot describe a spiral
as clockwise or counterclockwise).
This exhibition of the existence of a non-orientable space raises the question of whether our universe is orientable: is
it possible for an astronaut to leave right-handed and return left-handed? An excellent nontechnical reference is
(Gardner 1990). A classic science fiction story about orientation reversal is (Clarke 1982).
So projective geometry is mathematically interesting, in addition to the natural way in which it arises in art. It is
more than just a technical device to shorten some proofs. For an overview, see (Courant & Robbins 1978). The
approach we've taken here, the analytic approach, leads to quick theorems and, most importantly for us,
illustrates the power of linear algebra (see Hanes (1990), Ryan (1986), and Eggar (1998)). But another approach, the
synthetic approach of deriving the results from an axiom system, is both extraordinarily beautiful and is also the
historical route of development. Two fine sources for this approach are (Coxeter 1974) or (Seidenberg 1962). An
interesting and easy application is (Davies 1990).
Exercises
Problem 1
What is the equation of this point?
Problem 2
1. Find the line incident on these points in the projective plane.
Problem 3
Find the formula for the line incident on two projective points. Find the formula for the point incident on two
projective lines.
Problem 4
Prove that the definition of incidence is independent of the choice of the representatives of
,
,
, and
, and
, prove that
and
, and
. That is, if
,
if and only if
.
Problem 5
Give a drawing to show that central projection does not preserve circles, that a circle may project to an ellipse. Can a
(non-circular) ellipse project to a circle?
Problem 6
Give the formula for the correspondence between the non-equatorial part of the antipodal model of the projective
plane, and the plane
.
Problem 7
(Pappus's Theorem) Assume that
, and
of the lines
of
and
and
.
, and
of the lines
2. Apply the lemma used in Desargue's Theorem to get simple homogeneous coordinate vectors for the
's and
.
3. Find the resulting homogeneous coordinate vectors for
7. Verify that
is on the
line.
Solutions
References
Eggar, M.H. (Aug./Sept. 1998), "Pinhole Cameras, Perspective, and Projective Geometry", American
Mathematical Monthly (American Mathematical Society): 618-630.
Gardner, Martin (1990), The New Ambidextrous Universe (Third revised ed.), W. H. Freeman and Company.
Hanes, Kit (1990), "Analytic Projective Geometry and its Applications", UMAP Modules (UMAP UNIT 710):
111.
Ryan, Patrick J. (1986), Euclidean and Non-Euclidean Geometry: an Analytic Approach, Cambridge University
Press.
Seidenberg, A. (1962), Lectures in Projective Geometry, Van Nostrand.
Chapter V
Linear Algebra/Introduction to Similarity
While studying matrix equivalence, we have shown that for any homomorphism there are bases
and
such
to
action of the map is easy to understand because most of the matrix entries are zero.
This chapter considers the special case where the domain and the codomain are equal, that is, where the
homomorphism is a transformation. In this case we naturally ask to find a single basis
so that
is as
simple as possible (we will take "simple" to mean that it has many zeroes). A matrix having the above block
partial-identity form is not always possible here. But we will develop a form that comes close, a representation that is
nearly diagonal.
References
Halmos, Paul R. (1958), Finite Dimensional Vector Spaces (Second ed.), Van Nostrand.
Hoffman, Kenneth; Kunze, Ray (1971), Linear Algebra (Second ed.), Prentice-Hall.
Just as integers have a division operation, in which one number goes some number of times into another with a
remainder, so do polynomials.
Theorem 1.1 (Division Theorem for Polynomials)
Let c(x) be a polynomial. If m(x) is a non-zero polynomial then there are quotient and remainder polynomials q(x)
and r(x) such that c(x) = m(x) q(x) + r(x), where the degree of r(x) is strictly less than the degree of m(x).
In this book constant polynomials, including the zero polynomial, are said to have degree
while
goes
goes
times into
with remainder
and
then
and
. Note that
is divided by
Proof
The remainder must be a constant polynomial because it is of degree less than the divisor x - λ. To determine the
constant, take m(x) from the theorem to be x - λ and substitute λ for x to get c(λ) = q(λ)·(λ - λ) + r(x).
If a divisor
factor of
such that
is a
) is a root of
since
then
divides
is a factor of
Finding the roots and factors of a high-degree polynomial can be hard. But for second-degree polynomials we have
the quadratic formula: the roots of
are
is negative then the polynomial has no real number roots). A polynomial that cannot
be factored into two lower-degree polynomials with real number coefficients is irreducible over the reals.
Theorem 1.5
Any constant or linear polynomial is irreducible over the reals. A quadratic polynomial is irreducible over the reals if
and only if its discriminant is negative. No cubic or higher-degree polynomial is irreducible over the reals.
Corollary 1.6
Any polynomial with real coefficients can be factored into linear and irreducible quadratic polynomials. That
factorization is unique; any two factorizations have the same powers of the same factors.
Note the analogy with the prime factorization of integers. In both cases, the uniqueness clause is very useful.
Example 1.7
Because of uniqueness we know, without multiplying them out, that
.
Example 1.8
By
uniqueness,
if
then
where
and
, we know that
.
has no real roots and so doesn't factor over the real numbers, if we imagine a root traditionally
While
denoted
so that
then
to the reals and close the new system with respect to addition, multiplication, etc. (i.e., we
also add
, and
, and
and
we can factor (obviously, at least some) quadratics that would be irreducible if we were to stick to the real
numbers. Surprisingly, in
Example 1.9
The second degree polynomial
factors over the complex numbers into the product of two first degree
polynomials.
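A short computation shows the same thing. This Python sketch (the function name quadratic_roots is our own, for illustration) applies the quadratic formula with complex arithmetic: a quadratic with negative discriminant has no real roots, but over the complex numbers it splits into two first degree factors.

import cmath

def quadratic_roots(a, b, c):
    # roots of a*x^2 + b*x + c by the quadratic formula, allowing complex values
    disc = b * b - 4 * a * c
    root = cmath.sqrt(disc)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

# x^2 + 1 has a negative discriminant, so it is irreducible over the reals,
# but over the complex numbers its roots are i and -i, so it factors as (x - i)(x + i)
print(quadratic_roots(1, 0, 1))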
References
Ebbinghaus, H. D. (1990), Numbers, Springer-Verlag.
Birkhoff, Garrett; MacLane, Saunders (1965), Survey of Modern Algebra (Third ed.), Macmillan.
and multiplication.
Example 2.1
For instance,
and
Handling scalar operations with those rules, all of the operations that we've covered for real vector spaces carry over
unchanged.
Example 2.2
Matrix multiplication is the same, although the scalar arithmetic involves more bookkeeping.
Everything else from prior chapters that we can, we shall also carry over unchanged. For instance, we shall call this
and
and
such that
showing that
and
both represent
but with respect to different pairs of bases. We now specialize that setup
to the case where the codomain equals the domain, and where the codomain's basis equals the domain's basis.
To move from the lower left to the lower right we can either go straight over, or up, over, and then down. In matrix
terms,
(recall that a representation of composition like this one reads right to left).
Definition 1.1
The matrices
and
Example 1.3
such that
and
. The only matrix similar to the identity matrix is the identity matrix itself.
Since matrix similarity is a special case of matrix equivalence, if two matrices are similar then they are equivalent.
What about the converse: must matrix equivalent square matrices be similar? The answer is no. The prior example
shows that the similarity classes are different from the matrix equivalence classes, because the matrix equivalence
class of the identity consists of all nonsingular matrices of that size. Thus, for instance, these two are matrix
equivalent but not similar.
So some matrix equivalence classes split into two or more similarity classes; similarity gives a finer partition than
does equivalence. This picture shows some matrix equivalence classes subdivided into similarity classes.
To understand the similarity relation we shall study the similarity classes. We approach this question in the same
way that we've studied both the row equivalence and matrix equivalence relations, by finding a canonical form for
representatives[1] of the similarity classes, called Jordan form. With this canonical form, we can decide if two
matrices are similar by checking whether they reduce to the same representative. We've also seen with both row
equivalence and matrix equivalence that a canonical form gives us insight into the ways in which members of the
same class are alike (e.g., two identically-sized matrices are matrix equivalent if and only if they have the same
rank).
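Readers with access to a computer can get a feel for the relation by checking a case numerically. In this Python sketch, which uses the NumPy library (the particular matrices are arbitrary choices, for illustration), the matrix P T P⁻¹ is similar to T; its entries look quite different, but the determinant and the rank come out the same.

import numpy as np

T = np.array([[2.0, 1.0], [0.0, 3.0]])
P = np.array([[1.0, 2.0], [1.0, 3.0]])               # any nonsingular matrix will do

S = P @ T @ np.linalg.inv(P)                         # a matrix similar to T
print(S)                                             # the entries differ from T's
print(np.linalg.det(T), np.linalg.det(S))            # but the determinants agree, up to roundoff
print(np.linalg.matrix_rank(T), np.linalg.matrix_rank(S))   # and so do the ranks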
Exercises
Problem 1
For
check that
Problem 4
Consider the transformation
1. Find
described by
where
2. Find
3. Find the matrix
, and
where
such that
.
.
act by
to
and
. Then compute
and
and
-axis in
. Consider also a
Problem 11
Prove that similarity preserves determinants and rank. Does the converse hold?
Problem 12
Is there a matrix equivalence class with only one matrix similarity class inside? One with infinitely many similarity
classes?
Problem 13
Can two different diagonal matrices be in the same similarity class?
This exercise is recommended for all readers.
Problem 14
Prove that if two matrices are similar then their
. What if
Problem 15
Let
be
the
polynomial
Show
that
if
is
is similar to
similar
to
then
Problem 16
List all of the matrix equivalence classes of
and
and
Solutions
References
Halmos, Paul R. (1958), Finite Dimensional Vector Spaces (Second ed.), Van Nostrand.
References
[1] More information on representatives is in the appendix.
Linear Algebra/Diagonalizability
The prior subsection defines the relation of similarity and shows that, although similar matrices are necessarily
matrix equivalent, the converse does not hold. Some matrix-equivalence classes break into two or more similarity
classes (the nonsingular
matrices, for instance). This means that the canonical form for matrix equivalence, a
block partial-identity, cannot be used as a canonical form for matrix similarity because the partial-identities cannot
be in more than one similarity class, so there are similarity classes without one. This picture illustrates. As earlier in
this book, class representatives are shown with stars.
We are developing a canonical form for representatives of the similarity classes. We naturally try to build on our
previous work, meaning first that the partial identity matrices should represent the similarity classes into which they
fall, and beyond that, that the representatives should be as simple as possible. The simplest extension of the
partial-identity form is a diagonal form.
Definition 2.1
A transformation is diagonalizable if it has a diagonal representation with respect to the same basis for the
codomain as for the domain. A diagonalizable matrix is one that is similar to a diagonal matrix:
T is diagonalizable if there is a nonsingular P such that PTP⁻¹ is diagonal.
Example 2.2
The matrix
is diagonalizable.
Example 2.3
Not every matrix is diagonalizable. The square of
that
represents (with respect to the same basis for the domain as for the
That example shows that a diagonal form will not do for a canonical form; we cannot find a diagonal matrix in
each matrix similarity class. However, the canonical form that we are developing has the property that if a matrix can
be diagonalized then the diagonal matrix is the canonical representative of the similarity class. The next result
characterizes which maps can be diagonalized.
Corollary 2.4
A transformation
such that
Proof
and scalars
This representation is equivalent to the existence of a basis satisfying the stated conditions simply by the definition
of matrix representation.
Example 2.5
To diagonalize
such that
and
and we
has solutions
and
In the bottom equation the two numbers multiply to give zero only if at least one of them is zero so there are two
possibilities,
and
. In the
possibility, the first equation gives that either
or
. Since the case of both
the first equation in (
and
) is
. With it,
In the
) is
) is
are vectors
In the next subsection, we will expand on that example by considering more closely the property of Corollary 2.4.
This includes seeing another way, the way that we will routinely use, to find the 's.
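The same computation can be carried out numerically. In this Python/NumPy sketch (the matrix is an arbitrary choice with distinct eigenvalues, for illustration), the eigenvectors returned by the eig routine are taken as the columns of a change of basis matrix, and changing to that basis produces a diagonal matrix with the eigenvalues down the diagonal.

import numpy as np

T = np.array([[3.0, 2.0], [0.0, 1.0]])       # distinct eigenvalues 3 and 1

w, V = np.linalg.eig(T)                      # eigenvalues w; eigenvectors are the columns of V
D = np.linalg.inv(V) @ T @ V                 # represent the map with respect to the eigenvector basis

print(w)
print(np.round(D, 10))                       # diagonal, with the eigenvalues on the diagonal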
Exercises
This exercise is recommended for all readers.
Problem 1
Repeat Example 2.5 for the matrix from Example 2.2.
Problem 2
Diagonalize these upper triangular matrices.
1.
2.
This exercise is recommended for all readers.
Problem 3
What form do the powers of a diagonal matrix have?
Problem 4
Give two same-sized diagonal matrices that are not similar. Must any two different diagonal matrices come from
different similarity classes?
Problem 5
Give a nonsingular diagonal matrix. Can a diagonal matrix ever be singular?
This exercise is recommended for all readers.
Problem 6
Show that the inverse of a diagonal matrix is the diagonal of the inverses, if no element on that diagonal is zero.
What happens when a diagonal entry is zero?
Problem 7
The equation ending Example 2.5
we must take the first matrix, which is shown as an inverse, and for
Problem 9
Find a formula for the powers of this matrix Hint: see Problem 3.
we take the
and the
Diagonalize these.
1.
2.
Problem 11
We can ask how diagonalization interacts with the matrix operations. Assume that
diagonalizable. Is
? What about
are each
Problem 13
Show that each of these is diagonalizable.
1.
2.
Solutions
such that
.
("Eigen" is German for "characteristic of" or "peculiar to"; some authors call these characteristic values and vectors.
No authors call them "peculiar".)
Example 3.2
The projection map
has an eigenvalue of
where
and
are non-
is not an eigenvalue of
since no non-
vector is
doubled.
That example shows why the "non-zero" requirement appears in the definition: disallowing the zero vector as an eigenvector eliminates trivial eigenvalues.
Example 3.3
The only transformation on the trivial space
is
.
This map has no eigenvalues because there are no non-
vectors
of
themselves.
Example 3.4
Consider the homomorphism
given by
. The range of
is
where
Definition 3.5
A square matrix
eigenvector
if
Remark 3.6
Although this extension from maps to matrices is obvious, there is a point that must be made. Eigenvalues of a map
are also the eigenvalues of matrices representing that map, and so similar matrices have the same eigenvalues. But
the eigenvectors are different; similar matrices need not have the same eigenvectors.
For instance, consider again the transformation
It has an eigenvalue of
given by
.
where
. If we represent
with
respect to
then
is an eigenvalue of
with respect to
gives
are these.
. In contrast,
such that
and factor
for non-
is a matrix while
has a non-
eigenvectors
; the expression
solution if and only if the matrix is singular. We can determine when that happens.
and
gives
is non-
gives
with
Example 3.8
If
(here
so
has eigenvalues of
for a scalar
where
and
Definition 3.9
) then
for
, where
is a
is
Problem 11 checks that the characteristic polynomial of a transformation is well-defined, that is, any choice of basis
yields the same polynomial.
Lemma 3.10
A linear transformation on a nontrivial vector space has at least one eigenvalue.
Proof
Any root of the characteristic polynomial is an eigenvalue. Over the complex numbers, any polynomial of degree
one or greater has a root. (This is the reason that in this chapter we've gone to scalars that are complex.)
Notice the familiar form of the sets of eigenvectors in the above examples.
Definition 3.11
The eigenspace of a transformation
is
. The
is
since
).
Example 3.13
In Example 3.8 the eigenspace associated with the eigenvalue
are these.
Example 3.14
In Example 3.7, these are the eigenspaces associated with the eigenvalues
and
Remark 3.15
The characteristic equation has a repeated root, so in some sense the eigenvalue occurs "twice". However, there are not
"twice" as many eigenvectors, in that the dimension of the eigenspace is one, not two. The next example shows a
case where a number, , is a double root of the characteristic equation and the dimension of the associated
eigenspace is two.
Example 3.16
With respect to the standard bases, this matrix
represents projection.
If two eigenvectors are associated with the same eigenvalue then any linear combination
of those two is also an eigenvector associated with that same eigenvalue. But, if two eigenvectors are
associated with different eigenvalues then their sum need not be associated with an eigenvalue at all. In
fact, just the opposite: if the eigenvalues are different then the eigenvectors are not linearly related.
Theorem 3.17
For any set of distinct eigenvalues of a map or matrix, a set of associated eigenvectors, one per eigenvalue, is linearly
independent.
Proof
We will use induction on the number of eigenvalues. If there is no eigenvalue or only one eigenvalue then the set of
associated eigenvectors is empty or is a singleton set with a nonzero member, and in either case is linearly
independent.
For induction, assume that the theorem is true for any set of
distinct eigenvalues, and let
are
be associated eigenvectors. If
displayed equation, and subtracting the first result from the second, we have this.
The induction hypothesis now applies:
eigenvalues are distinct,
are all
must be
.
Example 3.18
The eigenvalues of
are distinct:
, and
is linearly independent.
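A numerical check of the theorem is easy. In this Python/NumPy sketch (the matrix is an arbitrary choice, for illustration) the eigenvalues are distinct, and the matrix whose columns are associated eigenvectors has full rank, so those eigenvectors form a linearly independent set.

import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [0.0, 5.0, 3.0],
              [0.0, 0.0, -1.0]])             # distinct eigenvalues 2, 5, and -1

w, V = np.linalg.eig(T)                      # columns of V are associated eigenvectors
print(w)
print(np.linalg.matrix_rank(V))              # 3, so the three eigenvectors are independent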
Corollary 3.19
An n×n matrix with n distinct eigenvalues is diagonalizable.
Proof
Form a basis of eigenvectors and apply Corollary 2.4.
Exercises
Problem 1
For each, find the characteristic polynomial and the eigenvalues.
1.
2.
3.
4.
5.
This exercise is recommended for all readers.
Problem 2
For each matrix, find the characteristic equation, and the eigenvalues and associated eigenvectors.
1.
2.
Problem 3
Find the characteristic equation, and the eigenvalues and associated eigenvectors for this matrix. Hint. The
eigenvalues are complex.
Problem 4
Find the characteristic polynomial, the eigenvalues, and the associated eigenvectors of this matrix.
2.
This exercise is recommended for all readers.
Problem 6
be
Problem 9
Prove that
the eigenvalues of a triangular matrix (upper or lower triangular) are the entries on the diagonal.
This exercise is recommended for all readers.
Problem 10
Find the formula for the characteristic polynomial of a
matrix.
Problem 11
Prove that the characteristic polynomial of a transformation is well-defined.
This exercise is recommended for all readers.
Problem 12
1. Can any nonzero vector in any nontrivial vector space be an eigenvector? That is, given a nonzero vector from a
nontrivial space, is there a transformation and a scalar such that the vector is an eigenvector of that transformation
associated with that scalar?
2. Given a scalar, can any nonzero vector in any nontrivial vector space be an eigenvector associated with that
scalar?
Problem 13
Prove that the eigenvectors of a transformation associated with a given eigenvalue are exactly the
nonzero vectors in the kernel of the map represented (with respect to the same bases) by the matrix minus that
eigenvalue times the identity.
Problem 14
Prove that if
and
are the
then
is
and
are scalars.
then
has eigenvalues
. Is
with an associated eigenvector
then
is an eigenvector of
is an eigenvalue of
is not an isomorphism.
Problem 18
1. Show that if is an eigenvalue of
then
is an eigenvalue of
.
2. What is wrong with this proof generalizing that? "If is an eigenvalue of
then
is an eigenvalue for
, for, if
and
and
is an eigenvalue for
then
,
"?
(Strang 1980)
Problem 19
Do matrix-equivalent matrices have the same eigenvalues?
Problem 20
Show that a square matrix with real entries and an odd number of rows has at least one real eigenvalue.
Problem 21
Diagonalize.
Problem 22
Suppose that T is a nonsingular matrix. Show that the similarity transformation map sending a matrix S to TST⁻¹
is an isomorphism.
? Problem 23
Show that if
is an
then
is a characteristic root of
(Morrison 1967)
Solutions
References
Morrison, Clarence C. (proposer) (1967), "Quickie", Mathematics Magazine 40 (4): 232.
Strang, Gilbert (1980), Linear Algebra and its Applications (Second ed.), Harcourt Brace Jovanovich.
Linear Algebra/Nilpotence
The goal of this chapter is to show that every square matrix is similar to one that is a sum of two kinds of simple
matrices. The prior section focused on the first kind, diagonal matrices. We now consider the other kind.
Linear Algebra/Self-Composition
This subsection is optional, although it is necessary for later material in this section and in the next one.
A linear transformation, because it has the same domain and codomain, can be iterated.[1] That is, compositions of
the transformation with itself, such as its square and its cube, are defined.
Note that this power notation for the linear transformation functions dovetails with the notation that we've used
earlier for their squared matrix representations because if
then
.
Example 1.1
For the derivative map
given by
matrices
After that,
and
, etc.
These examples suggest that on iteration more and more zeros appear until there is a settling down. The next result
makes this precise.
Lemma 1.3
For any transformation
Further, there is a
and
and
Proof
).
then
then
We will do the rangespace half and leave the rest for Problem 6. Recall, however, that for any map the dimension of
its rangespace plus the dimension of its nullspace equals the dimension of its domain. So if the rangespaces shrink
then the nullspaces must grow.
That the rangespaces form chains is clear because if
and so
equal
, so that
, then
. To verify the "further" property, first observe that if any pair of rangespaces in the chain are
then all subsequent ones are also equal
because the map carries equal domains to equal ranges, and so it gives equal rangespaces at the next power
(and induction shows that this holds for all higher powers). So if the chain of rangespaces ever stops being strictly
decreasing then it is stable from that point onward. But the chain must stop decreasing. Each rangespace is a
subspace of the one before it. For it to be a proper subspace it must be of strictly lower dimension (see Problem 4).
These spaces are finite-dimensional and so the chain can fall for only finitely many steps; that is, the power is at
most the dimension of the domain.
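The behavior described by the lemma is easy to watch on a machine. This Python/NumPy sketch (the matrix is an arbitrary choice, for illustration) prints the rank of each successive power of a transformation of a four-dimensional space; the ranks fall for a few steps and then hold steady, so the nullities grow and then hold steady as well.

import numpy as np

T = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 2.0]])

M = np.eye(4)
for power in range(1, 6):
    M = M @ T
    print(power, np.linalg.matrix_rank(M))   # prints ranks 3, 2, 1, 1, 1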
Example 1.4
The derivative map
Example 1.5
The transformation
has
and
Example 1.6
Let
be the map
rangespace shrinks
This graph illustrates Lemma 1.3. The horizontal axis gives the power
the dimension of the rangespace of
as the distance above zero and thus also shows the dimension of the
nullspace as the distance below the gray horizontal line, because the two add to the dimension
of the domain.
As sketched, on iteration the rank falls and with it the nullity grows until the two reach a steady state. This state must
be reached by the -th iterate. The steady state's distance above zero is the dimension of the generalized rangespace
and its distance below is the dimension of the generalized nullspace.
Definition 1.7
Let be a transformation on an
rangespace) is
Exercises
Problem 1
Give the chains of rangespaces and nullspaces for the zero and identity transformations.
Problem 2
For each map, give the chain of rangespaces and the chain of nullspaces, and the generalized rangespace and the
generalized nullspace.
1.
2.
3.
4.
Problem 3
Prove that function composition is associative
without specifying
a grouping.
Problem 4
Check that a subspace must be of dimension less than or equal to the dimension of its superspace. Check that if the
subspace is proper (the subspace does not equal the superspace) then the dimension is strictly less. (This is used in
the proof of Lemma 1.3.)
Problem 5
is trivial, if
References
[1] More information on function iteration is in the appendix.
Linear Algebra/Strings
This subsection is optional, and requires material from the optional Direct Sum subsection.
The prior subsection shows that as
's rise, in such a way that this rank and nullity split the dimension of
a basis is
?
The answer is yes for the smallest power
since
and
Proof
We will verify the second sentence, which is equivalent to the first. The first clause, that the dimension of the
domain of equals the rank of plus the nullity of , holds for any transformation and so we need only verify
the second clause.
Assume that
, to prove that
is
. Because
, the map
is in the nullspace,
is a
implies that
.
from the map
because the
[1]
domains or codomains might differ. The second one is said to be the restriction
of
to
. We shall use
later a point from that proof about the restriction map, namely that it is nonsingular.
In contrast to the
and
cases, for intermediate powers the space might not be the direct sum of
and
. The next example shows that the two can have a nontrivial intersection.
Example 2.3
Consider the transformation of
The vector
is in both the rangespace and nullspace. Another way to depict this map's action is with a string.
Example 2.4
A map
whose action on
has
equal
span
to
the
has
and
has
. The matrix representation is all zeros except for some subdiagonal ones.
Example 2.5
Transformations can act via more than one string. A transformation
acting on a basis
by
is represented by a matrix that is all zeros except for blocks of subdiagonal ones
Not all nilpotent matrices are all zeros except for blocks of subdiagonal ones.
Example 2.9
With the matrix
from Example 2.4, and this four-vector basis
The new matrix is nilpotent; its fourth power is the zero matrix since
and
The goal of this subsection is Theorem 2.13, which shows that the prior example is prototypical in that every
nilpotent matrix is similar to one that is all zeros except for blocks of subdiagonal ones.
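Such a canonical matrix is easy to examine computationally. In this Python/NumPy sketch (the block sizes are our own choice, for illustration) the matrix is all zeros except for one block of subdiagonal ones of size three and one of size two; counting the nonzero entries of its powers shows that the third power, matching the longest block, is the zero matrix.

import numpy as np

N = np.array([[0, 0, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 0, 0, 0],
              [0, 0, 0, 1, 0]], dtype=float)

for k in range(1, 4):
    print(k, np.count_nonzero(np.linalg.matrix_power(N, k)))
# prints 1 3, then 2 1, then 3 0: the third power is the zero matrix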
Definition 2.10
Let t be a nilpotent transformation on V. A t-string generated by a vector v is a sequence ⟨v, t(v), t²(v), ...⟩ of
iterates of v, of some finite length. A t-string basis is a basis that is a concatenation of t-strings.
Example 2.11
In Example 2.5, the
-strings
and
Lemma 2.12
If a space has a t-string basis then the longest string in it has length equal to the index of nilpotency of t.
Proof
Suppose not. Those strings cannot be longer; if the index is
the string to
space has a
vector
then
-string basis where all of the strings are shorter than length
such that
. Represent
. Because
has index
, there is a
. We are
supposing that
sends each basis element to but that it does not send to . That is impossible.
We shall show that every nilpotent map has an associated string basis. Then our goal theorem, that every nilpotent
matrix is similar to one that is all zeros except for blocks of subdiagonal ones, is immediate, as in Example 2.5.
Looking for a counterexample, a nilpotent map without an associated string basis that is disjoint, will suggest the
idea for the proof. Consider the map
with this action.
Even after omitting the zero vector, these three strings aren't disjoint, but that doesn't end hope of finding a
-string basis. It only means that will not do for the string basis.
To find a basis that will do, we first find the number and lengths of its strings. Since 's index of nilpotency is two,
Lemma 2.12 says that at least one string in the basis has length two. Thus the map must act on a string basis in one of
these two ways.
Now, the key point. A transformation with the left-hand action has a nullspace of dimension three since that's how
many basis vectors are sent to zero. A transformation with the right-hand action has a nullspace of dimension four.
Using the matrix representation above, calculation of 's nullspace
and
from
Finally, take
and
such that
and
, the matrix of
is as desired.
Theorem 2.13
Any nilpotent transformation is associated with a
the length of the strings is determined by .
-string basis. While the basis is not unique, the number and
This illustrates the proof. Basis vectors are categorized into kind
, kind
, and kind
Proof
Fix a vector space
then
, ...,
. If that index is
the theorem holds for any transformation with an index of nilpotency between
and
case.
First observe that the restriction to the rangespace
inductive hypothesis to get a string basis for
. Apply the
, so there are
Second, note that taking the final nonzero vector in each string gives a basis
. (These are illustrated with
so that
their number
.
Finally,
is a basis for
for
is mapped to zero if and only
if it is a linear combination of those basis vectors that are mapped to zero. Extend
(The
is the set of squares.) While many choices are possible for the
as it is the dimension of
.
's,
and so
each
is
in
and
by the addition of
extend
vectors
such
that
represented by
application the nullspace has dimension one and so one vector of the basis is sent to zero. On a second application,
the nullspace has dimension two and so the other basis vector is sent to zero. Thus, the action of the map is
and the canonical form of the matrix is this.
(If we take
to be a representative with respect to some nonstandard bases then this picking step is just more
, where
Example 2.16
The matrix
That table shows that any string basis must satisfy: the nullspace after one map application has dimension two so two
basis vectors are sent directly to zero, the nullspace after the second application has dimension four so two additional
basis vectors are sent to zero by the second iteration, and the nullspace after three applications is of dimension five
so the final basis vector is sent to zero in three hops.
then add
such that
and
) such that
Exercises
This exercise is recommended for all readers.
Problem 1
What is the index of nilpotency of the left-shift operator, here acting on the space of triples of reals?
2.
3.
Also give the canonical form of the matrix.
Problem 3
Decide which of these matrices are nilpotent.
1.
2.
3.
4.
5.
This exercise is recommended for all readers.
Problem 4
Find the canonical form of this matrix.
2.
3.
Put each in canonical form.
Problem 7
Describe the effect of left or right multiplication by a matrix that is in the canonical form for nilpotent matrices.
Problem 8
Is nilpotence invariant under similarity? That is, must a matrix similar to a nilpotent matrix also be nilpotent? If so,
with the same index?
This exercise is recommended for all readers.
Problem 9
Show that the only eigenvalue of a nilpotent matrix is zero.
Problem 10
Is there a nilpotent transformation of index three on a two-dimensional space?
Problem 11
In the proof of Theorem 2.13, why isn't the proof's base case that the index of nilpotency is zero?
This exercise is recommended for all readers.
Problem 12
Let
is such that
but
3. Prove that the -string is linearly independent and so is a basis for its span.
4. Represent the restriction map with respect to the -string basis.
Problem 13
Finish the proof of Theorem 2.13.
Problem 14
Show that the terms "nilpotent transformation" and "nilpotent matrix", as given in Definition 2.6, fit with each other:
a map is nilpotent if and only if it is represented by a nilpotent matrix. (Is it that a transformation is nilpotent if and
only if there is a basis such that the map's representation with respect to that basis is a nilpotent matrix, or that any
representation is a nilpotent matrix?)
Problem 15
Let
be?
Problem 16
Recall that similar matrices have the same eigenvalues. Show that the converse does not hold.
Problem 17
Prove a nilpotent matrix is similar to one that is all zeros except for blocks of super-diagonal ones.
This exercise is recommended for all readers.
Problem 18
Prove that if a transformation has the same rangespace as nullspace, then the dimension of its domain is even.
Problem 19
Prove that if two nilpotent matrices commute then their product and sum are also nilpotent.
Problem 20
Consider the transformation of
is nilpotent then so is
Problem 21
Show that if
given by
where
is nilpotent then
Solutions
References
[1] More information on map restrictions is in the appendix.
is an
and
linear transformation
. This chapter revisits this issue in the special case that the map is a
. Of course, the general result still applies but with the codomain and domain
equal we naturally ask about having the two bases also be equal. That is, we want a canonical form to represent
transformations as
.
After a brief review section, we began by noting that a block partial identity form matrix is not always obtainable in
this
case. We therefore considered the natural generalization, diagonal matrices, and showed that if its
eigenvalues are distinct then a map or matrix can be diagonalized. But we also gave an example of a matrix that
cannot be diagonalized and in the section prior to this one we developed that example. We showed that a linear map
is nilpotent (if we take higher and higher powers of the map or matrix then we eventually get the zero map or
matrix) if and only if there is a basis on which it acts via disjoint strings. That led to a canonical form for nilpotent
matrices.
Now, this section concludes the chapter. We will show that the two cases we've studied are exhaustive in that for any
linear transformation there is a basis such that the matrix representation
is the sum of a diagonal matrix
and a nilpotent matrix in its canonical form.
linearly dependent and so there are scalars such that
Definition 1.3
For any polynomial
, where
transformation
matrix
Remark 1.4
If, for instance,
is the
is the
.
, then most authors write in the identity matrix:
. But most
then
, and
, and
Definition 1.5
The minimal polynomial
of a transformation
or a square matrix
has a smaller degree than either and still sends the map or matrix to zero.
is the zero polynomial and the two are equal. (The leading coefficient requirement also
up to the power
Next, put
Setting
is to set
, , and to zero forces and to also come out as zero. To get a leading one, the most we can do
and to zero. Thus the minimal polynomial is quadratic.
reduction on a system with nine equations in ten unknowns. We shall develop an alternative. To begin, note that we
can break a polynomial of a map or a matrix into its components.
Lemma 1.7
Suppose that the polynomial
factors as
. If
is
and
Proof
This argument is by induction on the degree of the polynomial. The cases where the polynomial is of degree
and
are clear. The full induction argument is Problem 21 but the degree two case gives its sense.
A quadratic polynomial factors into two linear terms (the roots might be equal). We
is linear.
In particular, if a minimal polynomial for a transformation factors into linear terms then applying the factored
polynomial to the transformation gives the zero map, and similarly for a matrix; in either case at least one of the
linear factors must send some nonzero vectors to zero. Rewording both cases: at least some of the roots of the
minimal polynomial are eigenvalues.
Recall how we have found eigenvalues: we look for the scalars that make the given matrix, minus that scalar times
the identity, into a singular matrix, by computing a determinant. That determinant is a polynomial in the variable, the
characteristic polynomial, whose roots are the eigenvalues. The major result of this subsection, the next result, is
that there is a connection between this characteristic polynomial and the minimal polynomial. This result expands
on the prior paragraph's insight that some roots of the minimal polynomial are eigenvalues by asserting that every
root of the minimal polynomial is an eigenvalue and, further, that every eigenvalue is a root of the minimal
polynomial.
Theorem 1.8 (Cayley-Hamilton)
If the characteristic polynomial of a transformation or square matrix factors into a product of powers of linear terms,
then its minimal polynomial factors into a product of powers of the same linear terms, where the power of each term
in the minimal polynomial is at least one and at most its power in the characteristic polynomial.
The proof takes up the next three lemmas. Although they are stated only in matrix terms, they apply equally well to
maps. We give the matrix version only because it is convenient for the first proof.
The first result is the key; some authors call it the Cayley-Hamilton Theorem and call Theorem 1.8 above a
corollary. For the proof, observe that a matrix of polynomials can be thought of as a polynomial with matrix
coefficients.
Lemma 1.9
If
then
Proof
Consider the matrix whose determinant is the characteristic polynomial.
Recall that the product of the adjoint of a matrix with the matrix itself is the determinant of that matrix times the
identity.
where each
is a
, the coefficients of
, etc.
, etc.
We sometimes refer to that lemma by saying that a matrix or map satisfies its characteristic polynomial.
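The statement is easy to check numerically for any particular matrix. This Python/NumPy sketch (the matrix is an arbitrary choice, for illustration) gets the coefficients of the characteristic polynomial and then evaluates that polynomial at the matrix by Horner's method; the result is the zero matrix, up to roundoff.

import numpy as np

T = np.array([[1.0, 2.0], [3.0, 4.0]])

coeffs = np.poly(T)            # characteristic polynomial coefficients, highest power first
P = np.zeros_like(T)
for c in coeffs:               # Horner's method, with the identity standing in for the constant term
    P = P @ T + c * np.eye(2)

print(np.round(P, 10))         # the zero matrix: T satisfies its characteristic polynomial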
Lemma 1.10
Where
is a polynomial, if
be minimal for
the degree of
is divisible by
. Plugging
where
in shows that
. The
proof of the Cayley-Hamilton Theorem is finished by showing that in fact the characteristic polynomial has no extra
roots
, etc.
Lemma 1.11
Each linear factor of the characteristic polynomial of a square matrix is also a linear factor of the minimal
polynomial.
Proof
Let
polynomial of
is an eigenvalue of
is, that
.
In general, where is associated with the eigenvector
matrix
to
associated
the
is a factor of
by the scalar
eigenvector
. (For instance, if
, application of the
has eigenvalue
and
then
.) Now, as
and therefore
We can use the Cayley-Hamilton Theorem to help find the minimal polynomial of this matrix.
, that
or
and
and so
Exercises
This exercise is recommended for all readers.
Problem 1
What are the possible minimal polynomials if a matrix has the given characteristic polynomial?
1.
2.
3.
4.
What is the degree of each possibility?
This exercise is recommended for all readers.
Problem 2
Find the minimal polynomial of each matrix.
1.
2.
3.
4.
or
5.
6.
Problem 3
Find the minimal polynomial of this matrix.
on
Problem 6
What is the minimal polynomial of the transformation of
that sends
to
Problem 7
What is the minimal polynomial of the map
Problem 8
Find a
Problem 9
What is wrong with this claimed proof of Lemma 1.9: "if
then
"?
(Cullen 1990)
Problem 10
Verify Lemma 1.9 for
, can happen.
(not
Problem 12
The only eigenvalue of a nilpotent map is zero. Show that the converse statement holds.
Problem 13
What is the minimal polynomial of a zero map or matrix? Of an identity map or matrix?
This exercise is recommended for all readers.
Problem 14
Interpret the minimal polynomial of Example 1.2 geometrically.
Problem 15
What is the minimal polynomial of a diagonal matrix?
This exercise is recommended for all readers.
Problem 16
A projection is any transformation
such that
projecting each vector onto its first coordinate will, if done twice, result in the same value as if it is done just once.)
What is the minimal polynomial of a projection?
Problem 17
The first two items of this question are review.
1. Prove that the composition of one-to-one maps is one-to-one.
2. Prove that if a linear map is not one-to-one then at least one nonzero vector from the domain is sent to the zero
vector in the codomain.
3. Verify the statement, excerpted here, that precedes Theorem 1.8.
...
if
minimal
polynomial
for
transformation
then
map. Since
as
is the zero
factors
are eigenvalues.
then the
Problem 19
Let
and
is similar to
1. Now show that similar matrices have the same characteristic polynomial.
2. Show that similar matrices have the same minimal polynomial.
3. Decide if these are similar.
Problem 20
1. Show that a matrix is invertible if and only if the constant term in its minimal polynomial is not 0.
2. Show that if a square matrix is not invertible then there is a nonzero matrix such that
and
equal the zero matrix.
This exercise is recommended for all readers.
Problem 21
1. Finish the proof of Lemma 1.7.
2. Give an example to show that the result does not hold if
is not linear.
both
Problem 22
Any transformation or square matrix has a minimal polynomial. Does the converse hold?
Solutions
References
Cullen, Charles G. (1990), Matrices and Linear Transformations (Second ed.), Dover.
is nilpotent.
We have a canonical form for nilpotent matrices, that is, for each matrix whose single eigenvalue is zero: each such
matrix is similar to one that is all zeroes except for blocks of subdiagonal ones. (To make this representation unique
we can fix some arrangement of the blocks, say, from longest to shortest.) We next extend this to all
single-eigenvalue matrices.
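For instance (an illustrative matrix, not one of the book's examples), a 4×4 nilpotent matrix in this canonical form, with blocks of subdiagonal ones of sizes three and one arranged from longest to shortest, is
$$\begin{pmatrix}0&0&0&0\\1&0&0&0\\0&1&0&0\\0&0&0&0\end{pmatrix}.$$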
Observe that if
then
because
if and only if
. The natural way to extend the results for nilpotent matrices is to represent
canonical form
for
in the
works.
Lemma 2.2
If the matrices
and
and
matrices.
Proof
With
we have
diagonal matrix
since the
. Therefore
as required.
Example 2.3
The characteristic polynomial of
is
and so
, and
is nilpotent. The null spaces are routine to find; to ease this computation we
. Thus for
The dimensions of these null spaces show that the action of an associated map
. Thus, the canonical form for
on a string basis is
We can produce the similarity computation. Recall from the Nilpotence section how to find the change of basis
matrices and
to express
as
. The similarity diagram
describes that to move from the lower left to the upper left we multiply by
and to move from the upper right to the lower right we multiply by this matrix.
. The nullities of
and
.
has the
for
Jordan block. We have shown that Jordan block matrices are canonical representatives of the similarity classes of
single-eigenvalue matrices.
Example 2.5
The
separate into three similarity classes. The three classes have these
canonical representatives.
belongs to the similarity class represented by the middle one, because we have adopted the convention of ordering
the blocks of subdiagonal ones from the longest block to the shortest.
We will now finish the program of this chapter by extending this work to cover maps and matrices with multiple
eigenvalues. The best possibility for general maps and matrices would be if we could break them into a part
involving their first eigenvalue
(which we represent using its Jordan block), a part with
, etc.
This ideal is in fact what happens. For any transformation
sum of a part on which
take
three
steps
to
to
this
section's
where
major
Suppose that
linear transformation on
with
third
step
to a subspace
shows
that
need not be a
a transformation to a "part" of a space is a transformation on the partwe need the next condition.
Definition 2.6
Let
be a transformation. A subspace
(shorter:
is
invariant if whenever
then
).
then
is also a member of
Thus the spaces
because
for some
.
and
because, simply, if
are
where
is the
and then
shows that
then
of any
is nilpotent on
Lemma 2.7
A subspace is
invariant.
. In particular, where
, the spaces
is an
and
For the first sentence we check the two implications of the "if and only if" separately. One of them is easy: if the
subspace is
invariant for any then taking
shows that it is invariant. For the other implication
suppose that the subspace is
subspace
invariant, so that if
then
, and let
then
then
, as required.
The second sentence follows straight from the first. Because the two spaces are
. Thus if
invariant, they are therefore
invariant. From this, applying the first sentence again, we conclude that they are also
invariant.
The second step of the three that we will take to prove this section's major result makes use of an additional property
of
and
, that they are complementary. Recall that if a space is the direct sum of two
others
and
and
and
is linearly
do not "overlap"). The next result says that for any subspaces
on
and
Let
and
be
where
where
and
. Then
and
Proof
Since the two subspaces are complementary, the concatenation of a basis for
for
makes a basis
is in
is
left of
, ...,
is all zeroes.
where the
and
Proof
Suppose that
is
, that
is
, and that
is
is all zeroes, so if a
then
the
term
from
zero,
e.g.,
and a
column numbers
if
then
.
So the above formula reduces to a sum over all permutations with two halves: any significant
of a
. The
is the composition
law (and the fact that the signum of a composition is the product of the signums) gives that this
equals
Example 2.10
From Lemma 2.9 we conclude that if two subspaces are complementary and invariant then the map is nonsingular if and only if its restrictions to both subspaces are nonsingular.
Now for the promised third, final, step to the main result.
Lemma 2.11
If a linear transformation
then (1)
.
Proof
Because
is the degree
is trivial whenever
and
are
to
and
.
is a linear
is trivial (Lemma V.II.3.10 shows that the only transformation without any
eigenvalues
is on the trivial space). To prove statement (2), fix the index. Decompose
as
By Lemma 2.9,
Arithmetic, the determinants of the blocks have the same factors as the characteristic polynomial
and
, and the sum of the
powers of these factors is the power of the factor in the characteristic polynomial:
if we will show that
and that
for all
, ...,
. Statement (2)
which equals
to
for all
.
Now consider the restriction of
and so
is
to
. And thus
is not an eigenvalue of
on it
is nonsingular on
is not a factor of
, and so
.
Our major result just translates those steps into matrix terms.
Theorem 2.12
Any square matrix is similar to one in Jordan form
where each
except for
Proof
Given an
matrix
where
. Because each
is
are the
is
represented by a matrix that is all zeroes except for square blocks along the diagonal. To make those blocks into
Jordan blocks, pick each
to be a string basis for the action of
on
.
Jordan form is a canonical form for similarity classes of square matrices, provided that we make it unique by
arranging the Jordan blocks from least eigenvalue to greatest and then arranging the subdiagonal blocks inside
each Jordan block from longest to shortest.
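For instance (an illustrative matrix under that convention, not one of the book's numbered examples), this matrix is in Jordan form: the Jordan block for the eigenvalue 2 comes before the block for the eigenvalue 3, and inside the eigenvalue-2 block the subdiagonal strings run from longest to shortest.
$$\begin{pmatrix}2&0&0&0\\1&2&0&0\\0&0&2&0\\0&0&0&3\end{pmatrix}$$
Its eigenvalues are 2 and 3, its characteristic polynomial is (x − 2)³(x − 3), and its minimal polynomial is (x − 2)²(x − 3).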
Example 2.13
This matrix has the characteristic polynomial
and
separately.
on this subspace. From the way that the nullities grow we know that the action of
is nilpotent
on a string basis
where many choices of basis are possible. Consequently, the action of the restriction of
to
is
to
on
is the
is the
is the concatenation of
and
is
Example 2.14
Contrast the prior example with
where
to
is
nilpotent of index only one. (So the contrast with the prior example is that while the characteristic polynomial tells
us to look at the action of the
on its generalized null space, the characteristic polynomial does not describe
completely its action and we must do some computations to find, in this example, that the minimal polynomial is
.) The restriction of
and
For the other eigenvalue, the arguments for the second eigenvalue of the prior example apply again. The restriction
of
to
is nilpotent of index one (it can't be of index less than one, and since
is a factor of
the characteristic polynomial to the power one it can't be of index more than one either). Thus
form
is the
zero matrix, and the associated Jordan block
Therefore, is diagonalizable.
is in the nullspace of
is the
is routine.)
Example 2.15
A bit of computing with
. This table
's canonical
.
to
and
.
A similar calculation for the other eigenvalue
to its generalized null space acts on a string basis via the two separate strings
We close with the statement that the subjects considered earlier in this chapter are indeed, in this sense, exhaustive.
Corollary 2.16
Every square matrix is similar to the sum of a diagonal matrix and a nilpotent matrix.
Exercises
Problem 1
Do the check for Example 2.3.
Problem 2
Each matrix is in Jordan form. State its characteristic polynomial and its minimal polynomial.
1.
2.
3.
4.
5.
6.
7.
8.
9.
is
Problem 4
Find the change of basis matrices for each example.
1. Example 2.13
2. Example 2.14
3. Example 2.15
This exercise is recommended for all readers.
Problem 5
Find the Jordan form and a Jordan basis for each matrix.
1.
2.
3.
4.
5.
6.
7.
Problem 7
Find all possible Jordan forms of a transformation with characteristic polynomial
and minimal
and minimal
Problem 13
Find the Jordan form of this matrix.
and
invariant.
Problem 18
Prove or disprove: two
matrices are similar if and only if they have the same characteristic and minimal
polynomials.
Problem 19
The trace of a square matrix is the sum of its diagonal entries.
1. Find the formula for the characteristic polynomial of a
matrix.
2. Show that trace is invariant under similarity, and so we can sensibly speak of the "trace of a map". (Hint: see the
prior item.)
3. Is trace invariant under matrix equivalence?
4. Show that the trace of a map is the sum of its eigenvalues (counting multiplicities).
5. Show that the trace of a nilpotent map is zero. Does the converse hold?
Problem 20
To use Definition 2.6 to check whether a subspace is invariant, we seemingly have to check all of the infinitely
many vectors in a (nontrivial) subspace to see if they satisfy the condition. Prove that a subspace is invariant if and
only if its subbasis has the property that for all of its elements,
is in the subspace.
This exercise is recommended for all readers.
Problem 21
Is
Problem 22
Give a way to order the Jordan blocks if some of the eigenvalues are complex numbers. That is, suggest a reasonable
ordering for the complex numbers.
Problem 23
Let
invariant subspace of
, does any of
then
is an
, ...,
have
an invariant complement?
Problem 24
In
polynomials,
and
is even while
Are they complementary? Are they invariant under the differentiation transformation?
Problem 25
Lemma 2.8 says that if
and
(with respect to the same ending as starting basis, of course). Does the implication reverse?
Problem 26
A matrix
if
Solutions
Footnotes
[1] More information on restrictions of functions is in the appendix.
sends lines through the origin to lines through the origin. Thus, two points on a line
The second vector is times the first, and the image of the second is times the image of the first. Not only does
the transformation preserve the fact that the vectors are collinear, it also preserves the relative scale of the vectors.
That is, a transformation treats the points on a line through the origin uniformly. To describe the effect of the map
on the entire line, we need only describe its effect on a single non-zero point in that line.
Since every point in the space is on some line through the origin, to understand the action of a linear transformation
of
, it is sufficient to pick one point from each line through the origin (say the point that is on the upper half of
the unit circle) and show the map's effect on that set of points.
Here is such a picture for a straightforward dilation.
Below, the same map is shown with the circle and its image superimposed.
Certainly the geometry here is more evident. For example, we can see that some lines through the origin are actually
sent to themselves: the -axis is sent to the -axis, and the -axis is sent to the -axis.
This is the flip shown earlier, here with the circle and its image superimposed.
Contrast the picture of this map's effect on the unit square with this one.
Here is a somewhat more complicated map (the second coordinate function is the same as the map in the prior
picture, but the first coordinate function is different).
Observe that some vectors are being both dilated and rotated through some angle
Exercises
Problem 1
Show the effect each matrix has on the top half of the unit circle.
1.
2.
3.
Which vectors stay on the same line through the origin?
Solutions
matrix
has the
on
distinct eigenvalues
. For any
, ...,
. Then
, where
gives these.
, has a larger absolute value than any of the other eigenvalues then its term will
and, because
gives this,
.
is not zero), as
eigenvectors associated with the dominant eigenvalue, and, consequently, the ratios of the lengths
will tend toward that dominant eigenvalue.
For example (sample computer code for this follows the exercises), because the matrix
and
and
. Arbitrarily taking
to have the
gives
Two implementation issues must be addressed. The first issue is that, instead of finding the powers of
applying them to
calculate
, we will compute
as
as
and
sparse. The second issue is that, to avoid generating numbers that are so large that they overflow our computer's
capability, we can normalize the
's at each step. For instance, we can divide each
by its length (other
possibilities are to divide it by its largest component, or simply by its first component). We thus implement this
method by generating
One way we could be "satisfied" is to iterate until our approximation of the eigenvalue settles down. We could
decide, for instance, to stop the iteration process not after some fixed number of steps, but instead when
differs
from
by less than one percent, or when they agree up to the second significant digit.
go to zero, where
is the
eigenvalue of second largest norm. If that ratio is much less than one then convergence is fast, but if it is only
slightly less than one then convergence can be quite slow. Consequently, the method of powers is not the most
commonly used way of finding eigenvalues (although it is the simplest one, which is why it is here as the illustration
of the possibility of computing eigenvalues without solving the characteristic polynomial). Instead, there are a
variety of methods that generally work by first replacing the given matrix with another that is similar to it and so
has the same eigenvalues, but is in some reduced form such as tridiagonal form: the only nonzero entries are on the
diagonal, or just above or below it. Then special techniques can be used to find the eigenvalues. Once the
eigenvalues are known, the eigenvectors of
can be easily computed. These other methods are outside of our
scope. A good reference is (Goult et al. 1975).
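Here is a minimal sketch of the method of powers in the Scheme dialect used for the other computer code in this book. It is only an illustration, not the sample code referred to above; the matrix and starting vector in the final comment are hypothetical. A matrix is represented as a list of rows and a vector as a list of numbers, and after a fixed number of normalized iterations the dominant eigenvalue is estimated by the quotient w·(Mw)/(w·w).
(define (dot u v) (apply + (map * u v)))                  ; dot product of two vectors
(define (mat-vec m v) (map (lambda (row) (dot row v)) m)) ; matrix-vector product, m a list of rows
(define (normalize v)                                     ; divide a vector by its length
  (let ((len (sqrt (dot v v))))
    (map (lambda (x) (/ x len)) v)))
(define (power-iterate m v n)                             ; n normalized applications of m to v
  (if (= n 0)
      v
      (power-iterate m (normalize (mat-vec m v)) (- n 1))))
(define (dominant-eigenvalue m v n)                       ; estimate of the largest eigenvalue
  (let ((w (power-iterate m v n)))
    (/ (dot w (mat-vec m w)) (dot w w))))
; e.g. (dominant-eigenvalue '((3 1) (1 3)) '(1 0) 10) tends toward 4,
; the larger of that hypothetical matrix's two eigenvalues 4 and 2.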
Exercises
Problem 1
Use ten iterations to estimate the largest eigenvalue of these matrices, starting from the vector with components
and
. Compare the answer with the one obtained by solving the characteristic equation.
1.
2.
Problem 2
Redo the prior exercise by iterating until
. At each step, normalize by dividing each vector by its length. How many iterations are required? Are the answers significantly different?
Problem 3
Use ten iterations to estimate the largest eigenvalue of these matrices, starting from the vector with components
, and
. Compare the answer with the one obtained by solving the characteristic equation.
1.
2.
Problem 4
Redo the prior exercise by iterating until
. At each step,
normalize by dividing each vector by its length. How many iterations does it take? Are the answers significantly
different?
Problem 5
What happens if
? That is, what happens if the initial vector does not have any component in the
References
Goult, R.J.; Hoskins, R.F.; Milner, J.A.; Pratt, M.J. (1975), Computational Methods in Linear Algebra, Wiley.
and in
We can set this system up as a matrix equation (see the Markov Chain topic).
and
becomes
. The
is
and
.
If we start with a park population of ten thousand animals, so that the rest of the world has one hundred thousand,
then every year ten percent (a thousand animals) of those inside will leave the park, and every year one percent (a
thousand) of those from the rest of the world will enter the park. It is stable, self-sustaining.
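As a sketch of that matrix equation (with hypothetical symbols, p for the park population and r for the population of the rest of the world, and the rates quoted above: ten percent of the park population leaves each year while one percent of the outside population enters):
$$\begin{pmatrix}p_{n+1}\\ r_{n+1}\end{pmatrix}=\begin{pmatrix}0.90&0.01\\0.10&0.99\end{pmatrix}\begin{pmatrix}p_{n}\\ r_{n}\end{pmatrix}$$
The eigenvalues of this transition matrix are 1 and 0.89, which match the two stable growth rates found below: no change at all, or an 11% annual decline.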
Now imagine that we are trying to gradually build up the total world population of this species. We can try, for
instance, to have the world population grow at a rate of 1% per year. In this case, we can take a "stable" state for the
park's population to be that it also grows at 1% per year. The equation
leads to
, which gives this system.
and
population that we can establish at the park and expect that it will grow at the same rate as the rest of the world.
Knowing that an annual world population growth rate of 1% forces an unstable park population, we can ask which
growth rates there are that would allow an initial population for the park that will be self-sustaining. We consider
and solve for .
is an eigenvalue of
. Thus there are two ways to have a stable park population (a population that grows at the same rate as the
population of the rest of the world, despite the leaky park boundaries): have a world population that does not grow
or shrink, and have a world population that shrinks by 11% every year.
So this is one meaning of eigenvalues and eigenvectors they give a stable state for a system. If the eigenvalue is
then the system is static. If the eigenvalue isn't
Exercises
Problem 1
What initial population for the park discussed above should be set up in the case where world populations are
allowed to decline by 11% every year?
Problem 2
What will happen to the population of the park in the event of a growth in world population of 1% per year? Will it
lag the world growth, or lead it? Assume that the initial park population is ten thousand, and the world population is
one hundred thousand, and calculate over a ten year span.
Problem 3
The park discussed above is partially fenced so that now, every year, only 5% of the animals from inside of the park
leave (still, about 1% of the animals from the outside find their way in). Under what conditions can the park maintain
a stable population now?
Problem 4
Suppose that a species of bird only lives in Canada, the United States, or in Mexico. Every year, 4% of the Canadian
birds travel to the US, and 1% of them travel to Mexico. Every year, 6% of the US birds travel to Canada, and 4% go
to Mexico. From Mexico, every year 10% travel to the US, and 0% go to Canada.
1. Give the transition matrix.
2. Is there a way for the three countries to have constant populations?
3. Find all stable situations.
Solutions
This is an example of a recurrence relation (it is called that because the values of
other, prior, values of
The sequence of numbers defined by the above equation (of which the first few are listed) is the Fibonacci sequence.
The material of this chapter can be used to give a formula with which we can calculate
without
having to first find
, etc.
For that, observe that the recurrence is a linear relationship and so we can give a suitable matrix formulation of it.
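As a sketch of one such formulation (using the familiar recurrence f(n) = f(n−1) + f(n−2); the symbols are illustrative rather than the book's own notation):
$$\begin{pmatrix}f(n+1)\\ f(n)\end{pmatrix}=\begin{pmatrix}1&1\\1&0\end{pmatrix}\begin{pmatrix}f(n)\\ f(n-1)\end{pmatrix}$$
The characteristic equation of this matrix is λ² − λ − 1 = 0, whose roots the quadratic formula gives as (1 ± √5)/2.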
and
we have
, and the
, we have
matrix
is the diagonal matrix whose entries that are the -th powers of the entries of
.
The characteristic equation of
is
. The quadratic formula gives its roots as
and
We can compute
Notice that
Although we have extended the elementary model of population growth by adding a delay period before the onset of
(it is also called a difference equation). This recurrence relation is homogeneous because there is no constant term;
i.e., it can be put into the form
. This
is said to be a relation of order
, ...,
completely
by first
, etc. In this Topic, we shall see how linear algebra can be used to solve linear recurrence
relations.
First, we define the vector space in which we are working. Let
numbers
, that
is a subspace of
and
of
of
, and
Problem 3 shows that this map is linear. Because, as noted above, any solution of the recurrence is uniquely
determined by the initial conditions, this map is one-to-one and onto. Thus it is an isomorphism, and thus has
dimension
So (again, without any initial conditions), we can describe the set of solutions of any linear homogeneous recurrence
relation of degree by taking linear combinations of only linearly independent functions. It remains to produce
those functions.
For that, we express the recurrence
In trying to find the characteristic function of the matrix, we can see the pattern in the
and
case
case.
We call that the polynomial "associated" with the recurrence relation. (We will be finding the roots of this
polynomial and so we can drop the
as irrelevant.)
If the roots of the associated polynomial are distinct then we consider the functions given by powers of those roots. Problem 2 shows that each is a solution of the recurrence and that together they form a linearly independent set. So, given a homogeneous linear recurrence relation (that is, given its associated equation), we consider the roots of that equation. (The case of repeated roots is also easily done, but we won't cover it here; see any text on Discrete Mathematics.) Now, given some initial conditions, so that we are interested in a particular solution, we can solve for the coefficients.
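In symbols (a sketch with illustrative names: r1, ..., rk for the distinct roots of the associated polynomial and c1, ..., ck for the coefficients fixed by the initial conditions), every solution has the form
$$f(n)=c_{1}r_{1}^{\,n}+c_{2}r_{2}^{\,n}+\cdots+c_{k}r_{k}^{\,n}.$$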
For instance, the polynomial associated with the Fibonacci relation is x² − x − 1, whose roots are (1 ± √5)/2, and so any solution of the Fibonacci equation is a combination of the n-th powers of those two roots; solving with the initial conditions yields the coefficients of that combination.
We close by considering the nonhomogeneous case. As in an earlier
chapter of this book, only a small adjustment is needed to make the transition from the homogeneous case. This
classic example illustrates.
In 1883, Edouard Lucas posed the following problem.
In the great temple at Benares, beneath the dome which marks the center of the world, rests a brass plate
in which are fixed three diamond needles, each a cubit high and as thick as the body of a bee. On one of
these needles, at the creation, God placed sixty four disks of pure gold, the largest disk resting on the
brass plate, and the others getting smaller and smaller up to the top one. This is the Tower of Bramah.
Day and night unceasingly the priests transfer the disks from one diamond needle to another according
to the fixed and immutable laws of Bramah, which require that the priest on duty must not move more
than one disk at a time and that he must place this disk on a needle so that there is no smaller disk below
it. When the sixty-four disks shall have been thus transferred from the needle on which at the creation
God placed them to one of the other needles, tower, temple, and Brahmins alike will crumble into dust,
and with a thunderclap the world will vanish.
(Translation of De Parville (1884) from Ball (1962).)
How many disk moves will it take? Instead of tackling the sixty four disk problem right away, we will consider the
problem for smaller numbers of disks, starting with three.
To begin, all three disks are on the same needle.
After moving the small disk to the far needle, the mid-sized disk to the middle needle, and then moving the small
disk to the middle needle we have this.
Now we can move the big disk over. Then, to finish, we repeat the process of moving the smaller disks, this time so
that they end up on the third needle, on top of the big disk.
So the thing to see is that to move the very largest disk, the bottom disk, at a minimum we must: first move the
smaller disks to the middle needle, then move the big one, and then move all the smaller ones from the middle needle
to the ending needle. Those three steps give us this recurrence.
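In symbols (a sketch, writing T(n) for the minimum number of moves needed with n disks; the name T is illustrative):
$$T(n)=2\,T(n-1)+1,\qquad T(1)=1,$$
whose solution, as the computer session later in this Topic confirms, is T(n) = 2ⁿ − 1.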
derive
this
equation
associated polynomial
write
the
original
relation as
, get its
.
is so simple, in a few minutes (or by
this one is easily spotted). So we have that without yet considering the initial condition any solution of
is the sum of the homogeneous solution and this particular solution:
.
The initial condition
, and we've gotten the formula that generates the table: the
Exercises
Problem 1
Solve each homogeneous linear recurrence relation.
1.
2.
3.
Problem 2
Give a formula for the relations of the prior exercise, with these initial conditions.
1.
2.
3.
,
,
,
Problem 3
Check that the isomorphism given between
and
, let
, ...,
Problem 6
(This refers to the value
Transferring one disk per second, how many years would it take the priests at the Tower of Hanoi to finish the job?
Computer Code
This code allows the generation of the first few values of a function defined by a recurrence and initial conditions. It
is in the Scheme dialect of LISP (specifically, it was written for A. Jaffer's free scheme interpreter SCM, although it
should run in any Scheme implementation).
First, the Tower of Hanoi code is a straightforward implementation of the recurrence.
(define (tower-of-hanoi-moves n)
(if (= n 1)
1
(+ (* (tower-of-hanoi-moves (- n 1))
2)
1) ) )
(Note for readers unused to recursive code: to compute the value for n, the computer is told to compute twice the value for n − 1, plus one, which requires, of course, first computing the value for n − 1. The computer sets the "times two plus one" aside for a moment to do that. It computes the value for n − 1 by using this same piece of code (that's what "recursive" means), and to do that it works down to the base case n = 1, which it can compute directly. It then returns the results back up the chain until the original computation finishes.)
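For instance (a hypothetical check, not a transcript of the author's session), evaluating the procedure above on small inputs gives values matching the closed form 2ⁿ − 1 sketched earlier.
>(tower-of-hanoi-moves 3)
7
>(tower-of-hanoi-moves 10)
1023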
The next routine calculates a table of the first few values. (Some language notes: '() is the empty list, that is, the
empty sequence, and cons pushes something onto the start of a list. Note that, in the last line, the procedure proc
is called on argument n.)
(define (first-few-outputs proc n)
  (first-few-outputs-aux proc n '()) )

(define (first-few-outputs-aux proc n lst)
  (if (< n 1)
      lst
      (first-few-outputs-aux proc (- n 1) (cons (proc n) lst)) ) )
The session at the SCM prompt went like this.
>(first-few-outputs tower-of-hanoi-moves 64)
Evaluation took 120 mSec
(1 3 7 15 31 63 127 255 511 1023 2047 4095 8191 16383 32767
65535 131071 262143 524287 1048575 2097151 4194303 8388607
16777215 33554431 67108863 134217727 268435455 536870911
1073741823 2147483647 4294967295 8589934591 17179869183
34359738367 68719476735 137438953471 274877906943 549755813887
1099511627775 2199023255551 4398046511103 8796093022207
17592186044415 35184372088831 70368744177663 140737488355327
281474976710655 562949953421311 1125899906842623
References
Ball, W.W. (1962), Mathematical Recreations and Essays, MacMillan (revised by H.S.M. Coxeter).
De Parville (1884), La Nature, I, Paris, pp.285-286.
Gardner, Martin (May. 1957), "Mathematical Games: About the remarkable similarity between the Icosian Game
and the Tower of Hanoi", Scientific American: 150-154.
Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1988), Concrete Mathematics, Addison-Wesley.
Hofstadter, Douglas R. (1985), Metamagical Themas: Questing for the Essence of Mind and Pattern, Basic
Books.
Appendix
Linear Algebra/Appendix
Mathematics is made of arguments (reasoned discourse that is, not crockery-throwing). This section is a reference to
the most used techniques. A reader having trouble with, say, proof by contradiction, can turn here for an outline of
that method.
But this section gives only a sketch. For more, these are classics: Methods of Logic by Quine, Induction and Analogy
in Mathematics by Pólya, and Naive Set Theory by Halmos.
Linear Algebra/Propositions
The point at issue in an argument is the proposition. Mathematicians usually write the point in full before the proof
and label it either Theorem for major points, Corollary for points that follow immediately from a prior one, or
Lemma for results chiefly used to prove other results.
The statements expressing propositions can be complex, with many subparts. The truth or falsity of the entire
proposition depends both on the truth value of the parts, and on the words used to assemble the statement from its
parts.
Not
For example, where
is not
Where the box encloses all natural numbers, and inside the circle are the primes, the shaded area holds numbers
satisfying "not ".
To prove that a "not
is false.
And
Consider the statement form "
is
is prime and
To prove "
and
and
and
". For the statement to be true both halves must hold: "
is prime and so
is not" is false.
".
Or
A"
or
is prime or
prime" is false. We take "or" inclusively so that if both halves are true "
is prime or
is not prime or
is
a whole is true. (In everyday speech, sometimes "or" is meant in an exclusive way "Eat your vegetables or no
dessert" does not intend both halves to hold but we will not use "or" in that way.)
The Venn diagram for "or" includes all of both circles.
To prove "
or
", show that in all cases at least one half holds (perhaps sometimes one half and sometimes the
If-then
An "if
then
is true while
materially implies
is prime then
implies
is also prime" is false. (Contrary to its use in casual speech, in mathematics "if
is false: "if
" or "
is prime then
then
is
is
is not" are both true statements, sometimes said to be vacuously true. We adopt this convention
because we want statements like "if a number is a perfect square then it is not prime" to be true, for instance when
the number is or when the number is .
The diagram
shows that
holds whenever
is sufficient to give
There are two main ways to establish an implication. The first way is direct: assume that
assumption, prove
. For instance, to show "if a number is divisible by 5 then twice that number is divisible by
is false then
is also false").
As an example, to show "if a number is prime then it is not a perfect square", argue that if it were a square
then it could be factored
where
or
don't
give
but they are nonprime by definition).
Note two things about this statement form.
First, an "if
then
number is divisible by
or strengthening
number is divisible by
then its square is divisible by
".
Second, after showing "if then
", a good next step is to look into whether there are cases where
does not. The idea is to better understand the relationship between
the proposition.
. Thus, "if a
and
holds but
Equivalence
An if-then statement cannot be improved when not only does
imply
sufficient to give
if and only if
", "
", "
iff
", "
and
, but also
implies
. Some ways to
is necessary and
". For example, "a number is divisible by a prime if and only if that number
if and only if
then
then
Linear Algebra/Quantifiers
Compare these two statements about natural numbers: "there is an
"for all numbers
, that
is divisible by
such that
is divisible by
" is false. We call the "there is" and "for all" prefixes quantifiers.
For all
The "for all" prefix is the universal quantifier, symbolized
Venn diagrams aren't very helpful with quantifiers, but in a sense the box we draw to border the diagram shows the
universal quantifier since it delineates the universe of possible members.
To prove that a statement holds in all cases, we must show that it holds in each case. Thus, to prove "every number
divisible by has its square divisible by
", take a single number of the form
and square it
.
This is a "typical element" or "generic element" proof.
This kind of argument requires that we are careful to not assume properties for that element other than those in the
hypothesis for instance, this type of wrong argument is a common mistake: "if is divisible by a prime, say ,
so that
then
and the square of the number is divisible by the square of the prime".
, but it isn't a proof for general
There exists
We will also use the existential quantifier, symbolized
As noted above, Venn diagrams are not much help with quantifiers, but a picture of "there is a number such that
would show both that there can be more than one and that not all numbers need satisfy
"
An existence proposition can be proved by producing something satisfying the property: once, to settle the question
of primality of
, Euler produced its divisor
. But there are proofs showing that something exists
without saying how to find it; Euclid's argument given in the next subsection shows there are infinitely many primes
without naming them. In general, while demonstrating existence is better than nothing, giving an example is better,
and an exhaustive list of all instances is great. Still, mathematicians take what they can get.
Finally, along with "Are there any?" we often ask "How many?" That is why the issue of uniqueness often arises in
conjunction with questions of existence. Many times the two arguments are simpler if separated, so note that just as
proving something exists does not show it is unique, neither does proving something is unique show that it exists.
(Obviously "the natural number with more factors than any other" would be unique, but in fact no such number
exists.)
, and so on ...". These are called proofs by induction. Such a proof has two steps. In the base
or
Here is an example.
We will prove that
For the base step we must show that the formula holds when
number does indeed equal
For the inductive step, assume that the formula holds for the numbers
From this assumption we will deduce that the formula therefore also holds in the
We've shown in the base case that the above proposition holds for
holds for the case of
is a product of primes.
is also
to
to
, etc., down to
. So "next
number" could mean "next lowest number". Of course, at the end we have not shown the fact for all natural numbers,
only for those less than or equal to
.
Contradiction
Another technique of proof is to show something is true by showing it can't be false.
The classic example is Euclid's, that there are infinitely many primes.
Suppose there are only finitely many primes
. Consider
on this supposedly exhaustive list divides that number evenly, each leaves a remainder of
. But every
number is a product of primes so this can't be. Thus there cannot be only finitely many primes.
Every proof by contradiction has the same form: assume that the false proposition is true and derive some
contradiction to known facts. This kind of logic is known as Aristotelian Logic, or Term Logic (http://en.wikipedia.org/wiki/Term_logic).
Another example is this proof that
Suppose that
Factor out the
.
's:
and
and rewrite.
The Prime Factorization Theorem says that there must be the same number of factors of
there are an odd number
cannot be.
Both of these examples aimed to prove something doesn't exist. A negative proposition often suggests a proof by
contradiction.
such that \ldots"). We name sets with capital roman letters as with the primes
,
. To denote that something
We use "
of
but
and "
is a proper subset of
and
is a subset
.
Because of Extensionality, to prove that two sets are equal
Usually we show mutual inclusion, that both
Set operations
Venn diagrams are handy here. For instance,
and "
can be pictured
Note that this is a repeat of the diagram for "if ... then ..." propositions. That's because "
then
".
In general, for every propositional logic operator there is an associated set operator. For instance, the complement of
is
the union is
When two sets share no members their intersection is the empty set
, symbolized
set for a subset, by the "vacuously true" property of the definition of implication.
Sequences
We shall also use collections where order does matter and where repeats do not collapse. These are sequences,
denoted with angle brackets:
. A sequence of length 2 is sometimes called an ordered pair
and written with parentheses:
ordered
Functions
We first see functions in elementary Algebra, where they are presented as formulas (e.g.,
),
but progressing to more advanced Mathematics reveals more general functions trigonometric ones, exponential
and logarithmic ones, and even constructs like absolute value that involve piecing together parts and we see that
functions aren't formulas; instead, the key idea is that a function associates with its input a single output
.
Consequently, a function or map is defined to be a set of ordered pairs
such that suffices to
determine
, that is: if
then
's
domain and the set of output values is its range. Usually we don't need to know what is and is not in the range and we
instead work with a superset of the range, the codomain. The notation for a function with domain
and
codomain
is
, read "
. The composition of
with
defined by
is equal to
identity.
A map that is both a left and right inverse of
and
. So an identity map plays the same role with respect to function composition
that the number plays in real number addition, or that the number
In line with that analogy, define a left inverse of a map
because if both
that
", or "
. It is denoted
the composition
to
'.
maps under
are inverses of
plays in multiplication.
to be a function
is a
such that
such
is the
(the middle equality comes from the associativity of function composition), so we often call it "the" inverse, written
. For instance, the inverse of the function
given by
is the function
given by
.
The superscript "
" notation for function inverse can be confusing it doesn't mean
. It is used
because it fits into a larger scheme. Functions that have the same codomain as domain can be iterated, so that where
, we can consider the composition of with itself:
, and
, etc.
Naturally enough, we write
as
is invertible, writing
and
and
as
only if it is onto (this is not hard to check). If no two arguments share an image, if
implies that
, then the function is one-to-one. A function has a left inverse if and only if it is one-to-one (this
is also not hard to check).
By the prior paragraph, a map has an inverse if and only if it is both onto and one-to-one; such a function is a
correspondence. It associates one and only one element of the domain with each element of the range (for example,
finite sets must have the same number of elements to be matched up in this way). Because a composition of
one-to-one maps is one-to-one, and a composition of onto maps is onto, a composition of correspondences is a
correspondence.
We sometimes want to shrink the domain of a function. For instance, we may take the function
by
given
Technically,
as its argument.
Relations
Some familiar operations are obviously functions: addition maps
here take the approach of rephrasing "
on a set
" to "
to
is in the relation
" or "
"? We
,
, and
. Neither
nor
is a member.
Those examples illustrate the generality of the definition. All kinds of relationships (e.g., "both numbers even" or
"first number is the second with the digits reversed") are covered under the definition.
Equivalence Relations
We shall need to say, formally, that two objects are alike in some way. While these alike things aren't identical, they
are related (e.g., two integers that "give the same remainder when divided by ").
A binary relation
is related to
(To see that these conditions formalize being the same, read them again, replacing "is related to" with "is like".)
Some examples (on the integers): " " is an equivalence relation, "
a equivalence, while "nearer than
" fails transitivity.
Partitions
In "same sign"
there are two kinds of pairs, the first with both numbers
positive and the second with both negative. So integers fall into exactly one of two classes, positive or negative.
A partition of a set
one
is a collection of subsets
, and if
is not equal to
. Picture
Thus, the first paragraph says "same sign" partitions the integers into the positives and the negatives.
Similarly, the equivalence relation "=" partitions the integers into one-element sets.
Another example is the fractions. Of course,
and
and
to be equivalent if
. We can check that this is an equivalence relation, that is, that it satisfies the above three conditions.
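In symbols (a sketch using the standard condition, with illustrative names a/b and c/d): two fractions are taken to be equivalent exactly when
$$\frac{a}{b}\sim\frac{c}{d}\quad\Longleftrightarrow\quad ad=bc.$$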
With that,
Before we show that equivalence relations always give rise to partitions, we first illustrate the argument. Consider
the relationship between two integers of "same parity", the set
(i.e., "give the same
remainder when divided by
"). We want to say that the natural numbers split into two pieces, the evens and the
odds, and inside a piece each member has the same parity as each other. So for each
associated with it:
. Some examples are
and
, and
is the
odds.
Theorem An equivalence relation induces a partition on the underlying set.
Proof
Call the set
define
.
Observe that, as
is a member if
then
Let
of
and
and
, the two
and
. To show that
are members
we
so that
. But
to conclude that
. Thus
is also
.
Therefore
implies
, and so
.
The same argument in the other direction gives the other inclusion, and so the two sets are equal, completing the
contrapositive argument.
We call each part of a partition an equivalence class (or informally, "part").
We sometimes pick a single element of each equivalence class to be the class representative.
Usually when we pick representatives we have some natural scheme in mind. In that case we call them the canonical
representatives.
An example is the simplest form of a fraction. We've defined
and
work we often use the "simplest form" or "reduced form" fraction as the class representatives.
References
[1] http://joshua.smcvt.edu/linearalgebra/
Linear Algebra/Resources
Other Books and Lectures
Linear Algebra [1] - A free textbook by Prof. Jim Hefferon of St. Michael's College. This wikibook began as a
wikified copy of Prof. Hefferon's text. Prof. Hefferon's book may differ from the book here, as both are still under
development.
A Course in Linear Algebra [1] - A free set of video lectures given at the Massachusetts Institute of Technology by
Prof. Gilbert Strang. Prof. Strang's book on linear algebra has been a widely influential book and it is referenced
many times in this text.
A First Course in Linear Algebra [2] - A free textbook by Prof. Rob Beezer at the University of Puget Sound,
released under GFDL.
Lecture Notes on Linear Algebra [3] - An online viewable set of lecture notes by Prof. José Figueroa-O'Farrill at
the University of Edinburgh.
Software
Octave [4] - a free and open source application for Numerical Linear Algebra. Use of this software is referenced
several times in the text. There is also an Octave Programming Tutorial wikibook under development.
A toolkit for linear algebra students [5] - An online software resource aimed at helping linear algebra students
learn and practice basic linear algebra procedures, such as Gauss-Jordan reduction, calculating the determinant,
or checking for linear independence. This software was produced by Przemyslaw Bogacki in the Department of
Mathematics and Statistics at Old Dominion University.
Wikipedia
Wikipedia is frequently a great resource that often gives a general non-technical overview of a subject. Wikipedia
has many articles on the subject of Linear Algebra. Below are some articles about some of the material in this book.
References
[1] http://ocw.mit.edu/OcwWeb/Mathematics/18-06Spring-2005/CourseHome/index.htm
[2] http://linear.ups.edu/
[3] http://xmlearning.maths.ed.ac.uk
[4] http://www.octave.org/
[5] http://www.math.odu.edu/~bogacki/cgi-bin/lat.cgi
Linear Algebra/Bibliography
Dalal, Siddhartha; Folkes, Edward; Hoadley, Bruce (Fall 1989), "Lessons Learned from Challenger: A Statistical
Perspective", Stats: the Magazine for Students of Statistics: 14-18
Davies, Thomas D. (Jan. 1990), "New Evidence Places Peary at the Pole", National Geographic Magazine 177
(1): 44.
de Mestre, Neville (1990), The Mathematics of Projectiles in sport, Cambridge University Press.
De Parville (1884), La Nature, I, Paris, pp.285-286.
Duncan, Dewey (proposer); Quelch, W. H. (solver) (Sept.-Oct. 1952), Mathematics Magazine 26 (1): 48
Dudley, Underwood (proposer); Lebow, Arnold (proposer); Rothman, David (solver) (Jan. 1963), "Elementary
problem 1151", American Mathematical Monthly 70 (1): 93.
Ebbing, Darrell D. (1993), General Chemistry (Fourth ed.), Houghton Mifflin.
Ebbinghaus, H. D. (1990), Numbers, Springer-Verlag.
Eggar, M.H. (Aug./Sept. 1998), "Pinhole Cameras, Perspective, and Projective Geometry", American
Mathematical Monthly (American Mathematical Society): 618-630.
Einstein, A. (1911), Annals of Physics 35: 686.
Feller, William (1968), An Introduction to Probability Theory and Its Applications, 1 (3rd ed.), Wiley.
Gardner, Martin (May. 1957), "Mathematical Games: About the remarkable similarity between the Icosian Game
and the Tower of Hanoi", Scientific American: 150-154.
Gardner, Martin (April 1970), "Mathematical Games, Some mathematical curiosities embedded in the solar
system", Scientific American: 108-112.
Gardner, Martin (October 1974), "Mathematical Games, On the paradoxical situations that arise from
nontransitive relations", Scientific American.
Gardner, Martin (October 1980), "Mathematical Games, From counting votes to making votes count: the
mathematics of elections", Scientific American.
Gardner, Martin (1990), The New Ambidextrous Universe (Third revised ed.), W. H. Freeman and Company.
Gilbert, George T.; Krusemeyer, Mark; Larson, Loren C. (1993), The Wohascum County Problem Book, The
Mathematical Association of America.
Giordano, R.; Jaye, M.; Weir, M. (1986), "The Use of Dimensional Analysis in Mathematical Modeling", UMAP
Modules (COMAP) (632).
Giordano, R.; Wells, M.; Wilde, C. (1987), "Dimensional Analysis", UMAP Modules (COMAP) (526).
Goult, R.J.; Hoskins, R.F.; Milner, J.A.; Pratt, M.J. (1975), Computational Methods in Linear Algebra, Wiley.
Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1988), Concrete Mathematics, Addison-Wesley.
Haggett, Vern (proposer); Saunders, F. W. (solver) (Apr. 1955), "Elementary problem 1135", American
Mathematical Monthly (American Mathematical Society) 62 (5): 257.
Halmos, Paul P. (1958), Finite Dimensional Vector Spaces (Second ed.), Van Nostrand.
Halsey, William D. (1979), Macmillan Dictionary, Macmillan.
Hamming, Richard W. (1971), Introduction to Applied Numerical Analysis, Hemisphere Publishing.
Hanes, Kit (1990), "Analytic Projective Geometry and its Applications", UMAP Modules (UMAP UNIT 710):
111.
Heath, T. (1956), Euclid's Elements, 1, Dover.
Hoffman, Kenneth; Kunze, Ray (1971), Linear Algebra (Second ed.), Prentice Hall
Hofstadter, Douglas R. (1985), Metamagical Themas: Questing for the Essence of Mind and Pattern, Basic
Books.
Iosifescu, Marius (1980), Finite Markov Processes and Their Applications, UMI Research Press.
Ivanoff, V. F. (proposer); Esty, T. C. (solver) (Feb. 1933), "Problem 3529", American Mathematical Monthly 39
(2): 118
Kelton, Christina M.L. (1983), Trends on the Relocation of U.S. Manufacturing, Wiley.
Kemeny, John G.; Snell, J. Laurie (1960), Finite Markov Chains, D. Van Nostrand.
Kemp, Franklin (Oct. 1982), "Linear Equations", American Mathematical Monthly (American Mathematical
Society): 608.
Klamkin, M. S. (proposer) (Jan.-Feb. 1957), "Trickie T-27", Mathematics Magazine 30 (3): 173.
Knuth, Donald E. (1988), The Art of Computer Programming, Addison Wesley.
Leontief, Wassily W. (Oct. 1951), "Input-Output Economics", Scientific American 185 (4): 15.
Leontief, Wassily W. (Apr. 1965), "The Structure of the U.S. Economy", Scientific American 212 (4): 25.
Liebeck, Hans. (Dec. 1966), "A Proof of the Equality of Column Rank and Row Rank of a Matrix", American
Mathematical Monthly (American Mathematical Society) 73 (10): 1114.
Macdonald, Kenneth; Ridge, John (1988), "Social Mobility", British Social Trends Since 1900 (Macmillan).
Morrison, Clarence C. (proposer) (1967), "Quickie", Mathematics Magazine 40 (4): 232.
Munkres, James R. (1964), Elementary Linear Algebra, Addison-Wesley.
Neimi, G.; Riker, W. (June 1976), "The Choice of Voting Systems", Scientific American: 21-27.
O'Hanian, Hans (1985), Physics, 1, W. W. Norton
O'Nan, Micheal (1990), Linear Algebra (3rd ed.), Harcourt College Pub.
Oakley, Cletus; Baker, Justine (April 1977), "Least Squares and the 3:40 Mile", Mathematics Teacher
Pólya, G. (1954), Mathematics and Plausible Reasoning: Volume II Patterns of Plausible Inference, Princeton
University Press
Peterson, G. M. (Apr. 1955), "Area of a triangle", American Mathematical Monthly (American Mathematical
Society) 62 (4): 249.
Linear Algebra/Index
: Top - 09 A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
A
accuracy
of Gauss' method
addition
vector
additive inverse
adjoint matrix
angle
antipodal
antisymmetric matrix
argument
Arithmetic-Geometric Mean Inequality
arrow diagram 1, 2, 3, 4, 5
augmented matrix
automorphism
dilation
reflection
rotation
B
back-substitution
base step
of induction
basis 1, 2, 3
change of
definition
natural
orthogonal
orthogonalization
orthonormal
standard 1, 2
standard over the complex numbers
string
best fit line
binary relation
block matrix
box
orientation
sense
volume
C
C language
classes
equivalence
canonical form
for row equivalence
for matrix equivalence
for nilpotent matrices
for similarity
canonical representative
Cauchy-Schwartz Inequality
Cayley-Hamilton theorem
change of basis
characteristic
equation
polynomial
value
vector
characterized
Chemistry problem 1, 2, 3
central projection
circuits
parallel
series
series-parallel
closure
of rangespace
of nullspace
codomain
cofactor
column
vector
column rank
full
column space
complement
complementary subspaces
orthogonal
complex numbers
vector space over
component
composition
self
computer algebra systems
concatenation
condition number
congruent figures
congruent plane figures
contradiction
contrapositive
convex set
coordinates
homogeneous
with respect to a basis
corollary
correspondence 1, 2
cosets
Cramer's Rule
cross product
crystals
diamond
graphite
salt
unit cell
D
da Vinci, Leonardo
determinant 1, 2
cofactor
Cramer's Rule
definition
exists 1, 2, 3
Laplace Expansion
minor
Vandermonde
permutation expansion 1, 2
diagonal matrix 1, 2
diagonalizable
difference equation
homogeneous
dilation
matrix representation
dimension
physical
dilation 1, 2
direct map
direct sum
definition
of two subspaces
external
internal
direction vector
distance-preserving
division theorem
domain
dot product
double precision
dual space
E
echelon form
leading variable
free variable
reduced
eigenvalue
of a matrix
of a transformation
eigenvector
of a matrix
of a transformation
eigenspace
element
elementary
matrix
elementary reduction matrices
elementary reduction operations
pivoting
rescaling
swapping
elementary row operations
empty
Erlanger Program
entry
equivalence
class
canonical representative
representative
equivalence relation 1, 2
row equivalence
isomorphism
matrix equivalence
matrix similarity
equivalent statements
Euclid
even functions 1, 2
even polynomials
external direct sum
F
Fibonacci sequence
field
definition
finite-dimensional vector space
flat
form
free variable
full column rank
full row rank
function 1, 2
argument
codomain
composition
composition
correspondence
domain
even
identity
inverse 1, 2
inverse image
left inverse
multilinear
range
restriction
odd
one-to-one function
onto
right inverse
structure preserving 1, 2
see homomorphism
two sided inverse
value
well-defined
zero
Fundamental Theorem
of Linear Algebra
G
Gauss' Method
accuracy
back-substitution
elementary operations
Gauss-Jordan
Gauss-Jordan
Gaussian operations
generalized nullspace
generalized rangespace
Gram-Schmidt Orthogonalization
Geometry of Eigenvalues
Geometry of Linear Maps
H
historyless
Markov Chain
homogeneous
homogeneous coordinate vector
homogeneous coordinates
homomorphism
composition
matrix representation 1, 2, 3
nonsingular 1, 2
nullity
nullspace
rank 1, 2
rangespace
rank
zero
I
ideal line
ideal point
identity function
identity matrix 1, 2
identity function
if-then statement
ill-conditioned
image
under a function
improper subspace
incidence matrix
index
of nilpotency
induction 1, 2
inductive step
of induction
inherited operations
inner product
Input-Output Analysis
internal direct sum 1, 2
intersection
invariant subspace
definition
inverse
additive
left inverse
matrix
right inverse
two-sided
inverse function
inverse image
inversion
isometry
isomorphism 1, 2, 3
characterized by dimension
definition
of a space with itself
J
Jordan form
represents similarity classes
Jordan block
K
kernel
Kirchhoff's Laws
Klein, F.
L
Laplace Expansion
leading variable
least squares
lemma
length
Leontief, W.
line
at infinity
in projective plane
of best fit
linear
transpose operation
linear combination
Linear Combination Lemma
linear equation
coefficients
constant
homogeneous
solution of
Cramer's Rule
Gauss' Method
Gauss-Jordan
system of
satisfied by a vector
linear map
dilation
see homomorphism
reflection
rotation 1, 2
skew
trace
linear recurrence
linear relationship
linear surface
linear transformation
linearly dependent
linearly independent
LINPACK
M
map
extended linearly
distance-preserving
self composition
Maple
Markov Chain
historyless
Markov matrix
material implication
Mathematica
mathematical induction 1, 2
MATLAB
matrix
adjoint
antisymmetric
augmented
block 1, 2
change of basis
characteristic polynomial
cofactor
column
column space
condition number
determinant 1, 2
diagonal matrix 1, 2
diagonalizable
diagonalized
eigenvalue
eigenvector
elementary reduction 1, 2
entry
equivalent
form
identity 1, 2
incidence
inverse 1, 2
existence
left inverse
main diagonal
Markov
minimal polynomial
minor
nilpotent
nonsingular
orthogonal
orthonormal
right inverse
scalar multiple
skew-symmetric
similar
similarity
singular
submatrix
sum
symmetric 1, 2, 3,4, 5, 6
trace 1, 2, 3
transition
transpose 1, 2, 3
Markov
matrix-vector product
minimal polynomial
multiplication
nonsingular 1, 2
permutation
principal diagonal
rank
representation
row
row equivalence
row rank
row space
scalar multiple
singular
sum
symmetric 1, 2, 3,4, 5
trace 1, 2
transpose 1, 2, 3
triangular 1, 2, 3
unit
Vandermonde
matrix equivalence
definition
canonical form
rank characterization
mean
arithmetic
geometric
member
method of powers
minimal polynomial 1, 2
minor
morphism
multilinear
multiplication
matrix-matrix
matrix-vector
MuPAD
mutual inclusion 1, 2
N
natural representative
networks
Kirchhoff's Laws
nilpotent
canonical form for
definition
matrix
transformation
nilpotency
index
nonsingular
homomorphism
matrix
normalize
nullity
nullspace
closure of
generalized
O
Octave
odd functions 1, 2
odd polynomials
one-to-one function
onto function
opposite map
order of a recurrence
ordered pair
orientation 1, 2
orthogonal
basis
complement
mutually
projection
matrix
orthogonalization
orthonormal basis
orthonormal matrix
P
pair
ordered
parallelepiped
parallelogram rule
parameter
partition
matrix equivalence classes
partial pivoting
partitions
row equivalence classes
isomorphism classes
matrix equivalence classes
Pascal's triangle
permutation
inversions
matrix
signum
permutation expansion 1, 2
perp
perpendicular
permutation expansion
perspective
triangles
Physics problem
pivoting
on rows
full
partial
scaled
plane figure
congruence
polynomial
division theorem
even
odd
of a map
of a matrix
minimal
point
at infinity
in the projective plane
populations, stable
potential
powers, method of
preserves structure
probability vector
projection 1, 2, 3, 4
along a subspace
central
vanishing point
onto a line
onto a subspace
orthogonal 1, 2
Projective Geometry
projective plane
Duality Principle
ideal line
ideal points
lines
proof techniques
induction
proper
subspace
subset
propositions
equivalent
Q
quantifier 1, 2
existential
universal
R
range
rangespace
closure of
generalized
rank 1, 2
column
of a homomorphism 1, 2
row
recurrence 1, 2
homogeneous
initial conditions
reduced echelon form
reflection
glide
reflection about a line
matrix representation
reflexivity
relation
equivalence
reflexive
symmetric
transitive
relationship
linear
representation
of a vector
of a matrix
representative
canonical
for row equivalence
of matrix equivalence classes
of similarity classes
rescaling rows
resistance
equivalent
resistor
restriction
rigid motion
rotation 1, 2, 3
matrix representation
rounding error
row
vector
row equivalence
row rank
full
row space
S
scaled partial pivoting
scalar
scalar multiple
matrix
vector
scalar multiplication
vector 1, 2
scalar product
Schur's triangularization lemma
Schwartz Inequality
SciLab
self composition
of maps
sense
sequence
concatenation
set
complement
element
empty
empty
intersection
linearly dependent
linearly independent
member
mutual inclusion 1, 2
proper subset
span of
subset
union
signum
similar 1, 2, 3
canonical form
similar triangles
similarity transformation
single precision
singular
matrix
size 1, 2
sgn
see signum
skew
skew-symmetric
span
of a singleton
spin
square root
stable populations
standard basis
state
absorbing
Statics problem
string
basis
of basis vectors
structure
preservation
submatrix
subset
subspace
closed
complementary
direct sum
definition
improper
independence
invariant
orthogonal complement
proper
sum
sum
of matrices
of subspaces
vector 1, 2, 3, 4
summation notation
for permutation expansion
swapping rows
symmetric matrix 1, 2, 3, 4
symmetry
system of linear equations
Gauss' Method
solving
T
theorem
trace 1, 2, 3
transformation
characteristic polynomial
composed with itself
diagonalizable
eigenvalue
eigenvector
eigenspace
Jordan form
minimal polynomial
nilpotent
canonical representative
projection
size change
transition matrix
transitivity
translation
transpose 1, 2
interaction with sum and scalar multiplication
determinant 1, 2
triangles
similar
Triangle Inequality
triangular matrix
triangularization
trivial space 1, 2
turning map
matrix representation
U
union
unit matrix
V
vacuously true
value
Vandermonde
determinant
matrix
vanishing point
vector 1, 2
angle between vectors
canonical position
closure
column
complex scalars
component
cross product
direction
dot product
free
homogeneous coordinate
length
natural position
orthogonal
probability
representation of 1, 2
row
satisfies an equation
scalar multiple
scalar multiplication 1, 2
sum 1, 2, 3, 4
unit
zero vector 1, 2
vector space
basis
definition 1, 2
dimension
dual
finite-dimensional
homomorphism
isomorphism
map
over complex numbers
subspace
trivial 1, 2
Venn diagram
voltage drop
volume
voting paradox:
majority cycle
rational preference
spin
voting paradoxes
W
Wheatstone bridge
well-defined
Z
zero divisor 1, 2
zero homomorphism
zero vector 1, 2
License
Creative Commons Attribution-Share Alike 3.0 Unported
//creativecommons.org/licenses/by-sa/3.0/