
MATH100

Applied Linear Algebra

Zed Chance

Spring 2021

Contents
1 Vectors
  1.1 Vector properties
  1.2 Linear combinations and coordinates
  1.3 Dot Product
  1.4 Distance between vectors
  1.5 Projections
  1.6 Lines and planes
  1.7 Lines in R3
  1.8 Planes in R3
2 Systems of Linear Equations
  2.1 Direct methods of solving systems
  2.2 Gaussian and Gauss-Jordan Elimination
  2.3 Spanning Sets and Linear Independence
  2.4 Linear Independence
3 Matrices
  3.1 Subspaces of Matrices
  3.2 Nullspace
  3.3 Column space
  3.4 Linear transformations
  3.5 Composition of linear transformations
  3.6 Inverses of linear transformations
4 Eigenvalues and Eigenvectors
5 Determinants
  5.1 Cofactor expansion
  5.2 Invertibility
  5.3 Cramer's rule
  5.4 Determinants and Eigenvalues
  5.5 Similarity and Diagonalization
6 Distance and approximation
  6.1 Least squares approximation


1 Vectors
Definition 1 (Vectors). Vectors are directed line segments; they have both magnitude and direction. Jan 23
They exist in a “space,” such as the plane R2 , ordinary space R3 , or an n-dimensional space Rn .
• In R3 , the vector v can be represented by its components as v = [v1 , v2 , v3 ].
• v can also be represented as a line segment with an arrowhead pointing in the direction of v.

Properties. Vectors can be combined to form new vectors. Whether we are combining our vectors algebraically (manipulating their components) or geometrically (manipulating their graphs), the following properties apply. Let u, v, and w be vectors, and c and d be real numbers; then

u + v = v + u                  (commutative)
(u + v) + w = u + (v + w)      (associative)
c(du) = (cd)u                  (associative)
u + 0 = u                      (additive identity)
u + (−u) = 0                   (additive inverse)
c(u + v) = cu + cv             (distributive)
(c + d)u = cu + du             (distributive)
1u = u                         (multiplicative identity)
Jan 27

Representing vectors

Row vector:

V̄ = [2, 3]

Column vector:

V̄ = [ 2 ]
    [ 3 ]

Note 1. Vectors u and v are equivalent if they have the same length and direction.

1.1 Vector properties


Let ū = [1, 2] and v̄ = [3, 1].

ū + v̄ = [1, 2] + [3, 1]
= [1 + 3, 2 + 1]
= [4, 3]

Geometrically, this is the “tip to tail” method. Any two vectors define a parallelogram.
Let ū = [1, 2], and think about ū + ū.


ū + ū = [1, 2] + [1, 2]
= [2, 4]
2ū = 2[1, 2]
= [2, 4]

Also think about multiplying ū by -1:

(−1)ū = (−1)[1, 2]
= [−1, −2]

This points the vector in the opposite direction; the result is "antiparallel" to ū. In general, multiplying by a negative scalar reverses the vector's direction as well as scaling its length.

Definition 2 (Scalar multiplication). For constant c and V̄ = [v1 , v2 , v3 ], then

cV̄ = [cv1 , cv2 , cv3 ]

Definition 3 (Vector subtraction).

ū − v̄ = ū + (−v̄)

So with our existing vectors:

ū − v̄ = ū + (−v̄)
= [1, 2] + [−3, −1]
= [−2, 1]

The sum and the difference are the two diagonals of the parallelogram determined by the vectors.

Note 2. Vector addition is commutative, but vector subtraction is not (it is anticommutative).

v̄ − ū = [3, 1] + [−1, −2]


= [2, −1]

Note 3. All of these properties hold true in all dimensions: Rn .


Concerning the additive identity: In R3 the “zero vector” is 0̄ = [0, 0, 0].


How to represent the length of a vector: for ū = [1, 2],

||ū|| = √(1² + 2²)
      = √5

We use the double bars to represent the length of a vector.
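As a quick numerical check (an illustrative NumPy sketch, not part of the original notes), the length of ū = [1, 2] can be computed with numpy.linalg.norm:

```python
import numpy as np

u = np.array([1, 2])
print(np.linalg.norm(u))   # sqrt(1**2 + 2**2) = sqrt(5) ~ 2.23606...
```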


Jan 29

1.2 Linear combinations and coordinates


Definition 4. v̄ is a linear combination of a set of vectors v̄1 , v̄2 , . . . , v̄k if v̄ = c1 v̄1 + · · · + ck v̄k for
scalars ci .

Example 1. See Handout 1.

Definition 5 (Standard Basis Vectors and Standard Coordinates). In R2 : ē1 = [1, 0], ē2 = [0, 1], these
are the standard basis vectors.
Then v̄ = [v1 , v2 ],
and the standard coordinates of v̄ are v1 , v2 .

1.3 Dot Product


Definition 6 (Dot Product). If ū = [u1 , u2 , . . . , un ], v̄ = [v1 , v2 , . . . , vn ], then the dot product of ū Feb 01
with v̄ is

ū · v̄ = u1 v1 + u2 v2 + · · · + un vn

Example 2.

[2, −1, 7] · [3, 5, −2] = (2)(3) + (−1)(5) + (7)(−2)


= 6 − 5 − 14
= −13

Properties of dot products (scalar products). Let ū, v̄, w̄ be vectors, and c be a scalar; then
ū · v̄ = v̄ · ū                         (commutative)
ū · (v̄ + w̄) = (ū · v̄) + (ū · w̄)       (distributive)
(cū) · v̄ = c(ū · v̄)
0̄ · v̄ = 0
v̄ · v̄ = v1² + v2² + · · · + vn²


Length. In R2: ||v̄|| = √(v1² + v2²)
In general: ||v̄|| = √(v1² + v2² + · · · + vn²)

Example 3. If v̄ = [2, −1, 7], then the length is

||v̄|| = √(2² + (−1)² + 7²)
      = √(4 + 1 + 49)
      = √54
      = 3√6

Note 4.

||v̄||² = v̄ · v̄, equivalently ||v̄|| = √(v̄ · v̄)

Definition 7. A vector of length 1 is called a unit vector. For any vector v̄ ≠ 0̄, the vector v̄/||v̄|| is a unit vector
in the same direction as v̄.

Note 5.

v̄ = ||v̄|| ( v̄ / ||v̄|| )

   
Example 4. In R2: ē1 = [1, 0] and ē2 = [0, 1] are unit vectors.

Important inequalities
• Triangle inequality
In the triangle formed by ū, v̄, and ū + v̄, the length of any one side cannot be
greater than the sum of the lengths of the other two sides.

||ū + v̄|| ≤ ||ū|| + ||v̄||

• Cauchy-Schwarz inequality

|ū · v̄| ≤ ||ū|| ||v̄||

Proof by the Law of Cosines:


c² = a² + b² − 2ab cos θ
||ū − v̄||² = ||ū||² + ||v̄||² − 2||ū|| ||v̄|| cos θ
(ū − v̄) · (ū − v̄) = ||ū||² − 2(ū · v̄) + ||v̄||²
||ū||² − 2(ū · v̄) + ||v̄||² = ||ū||² + ||v̄||² − 2||ū|| ||v̄|| cos θ

ū · v̄ = ||ū|| ||v̄|| cos θ
|ū · v̄| = ||ū|| ||v̄|| |cos θ|
|ū · v̄| ≤ ||ū|| ||v̄||

Angle θ between vectors ū and v̄ (excluding the zero vector): let 0 ≤ θ ≤ π, then

cos θ = (ū · v̄) / (||ū|| ||v̄||)

So, θ = cos⁻¹( (ū · v̄) / (||ū|| ||v̄||) ).

Note 6. If ū, v̄ ≠ 0̄, then θ = π/2 if and only if ū · v̄ = 0.

ū ⊥ v̄ iff ū · v̄ = 0
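A hedged numerical sketch of the angle formula (NumPy, reusing the vectors from Example 2):

```python
import numpy as np

u = np.array([2, -1, 7])
v = np.array([3, 5, -2])

cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.arccos(cos_theta)     # angle in radians, 0 <= theta <= pi
print(np.degrees(theta))         # obtuse (> 90 degrees), since u . v = -13 < 0
```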

Feb 03

1.4 Distance between vectors


Definition 8. The distance between two vectors is the distance between their tips.
If ū = [u1, u2] and v̄ = [v1, v2], then

d(ū, v̄) = √((u1 − v1)² + (u2 − v2)²)
        = ||ū − v̄||
        = d(v̄, ū)
        = ||v̄ − ū||

Example 5. In R3 :
For ū = [2, −1, 7] and v̄ = [3, 5, −2] :
Find the distance:


d(ū, v̄) = ||ū − v̄||
        = ||[(2 − 3), (−1 − 5), (7 + 2)]||
        = ||[−1, −6, 9]||
        = √((−1)² + (−6)² + 9²)
        = √(1 + 36 + 81)
        = √118

1.5 Projections
Definition 9. Let projū v̄ be the vector projection of v̄ onto ū; then the signed length of projū v̄ is given
by

||v̄|| cos θ = ||v̄|| (ū · v̄) / (||v̄|| ||ū||)
            = (v̄ · ū) / ||ū||

So,

projū v̄ = ( (v̄ · ū) / ||ū|| ) ( ū / ||ū|| )
        = ( (v̄ · ū) / (ū · ū) ) ū

Note 7. Recall, ū · ū = ||ū||²

Note 8. Remember, ū/||ū|| is the unit vector in the direction of ū, so projū v̄ is the scalar (v̄ · ū)/||ū|| times that unit vector.

Example 6. For ū = [2, 1, −2] and v̄ = [3, 0, 8], find the projection of v̄ onto ū:


projū v̄ = ( ([3, 0, 8] · [2, 1, −2]) / ([2, 1, −2] · [2, 1, −2]) ) [2, 1, −2]
        = ( (6 + 0 − 16) / (4 + 1 + 4) ) [2, 1, −2]
        = (−10/9) [2, 1, −2]

Since the coefficient is negative, the angle between the two vectors is more than 90 degrees.
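The same projection can be checked numerically; a small NumPy sketch using the vectors from Example 6:

```python
import numpy as np

u = np.array([2, 1, -2])
v = np.array([3, 0, 8])

proj = (np.dot(v, u) / np.dot(u, u)) * u   # (v.u / u.u) u = (-10/9) [2, 1, -2]
print(proj)                                # [-2.222..., -1.111...,  2.222...]
```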

1.6 Lines and planes


See Handout 2
Comparing vector and parametric forms:

x̄ = p̄ + t d̄

[ x ]   [ p1 ]     [ d1 ]
[ y ] = [ p2 ] + t [ d2 ]

x = p1 + t d1
y = p2 + t d2

The solution to this is the line l.


Comparing the normal and general forms:

Let n̄ = [ a ]
        [ b ]

n̄ · x̄ = n̄ · p̄

[ a ]   [ x ]   [ a ]   [ p1 ]
[ b ] · [ y ] = [ b ] · [ p2 ]

ax + by = ap1 + bp2
ax + by = c

Remember, ap1 + bp2 is a constant, so we can call it c.

Example 7. See Handout 1.


Find an equation for that line that passes through the point (-3, 2) and is parallel to the vector [2, 1].
1. Vector form


x̄ = p̄ + t d̄

[ x ]   [ −3 ]     [ 2 ]
[ y ] = [  2 ] + t [ 1 ]

2. Parametric form

x = −3 + 2t
y = 2 + t

Feb 05
Example 8. Cont from previous example
3. General form

t = (x + 3)/2 = (y − 2)/1
x + 3 = 2y − 4
x − 2y = −7

4. Normal form

n̄ · x̄ = n̄ · p̄

[  1 ]   [ x ]   [  1 ]   [ −3 ]
[ −2 ] · [ y ] = [ −2 ] · [  2 ]

Making sense of the normal form See handout 2’s graph


1. Note that x̄ − p̄ is parallel to the line l.
2. Also, n̄ ⊥ (x̄ − p̄) by definition of n̄.
3. Then, n̄ · (x̄ − p̄) = 0 by a property of dot products.
We can use the distributive property:

n̄ · x̄ − n̄ · p̄ = 0
n̄ · x̄ = n̄ · p̄
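To see numerically that the vector form and the general form describe the same line, here is an illustrative NumPy sketch using the point and vectors from Example 7 (the variable names are ours):

```python
import numpy as np

p = np.array([-3, 2])    # point on the line
d = np.array([2, 1])     # direction vector
n = np.array([1, -2])    # normal vector, n . d = 0

for t in [0.0, 1.0, -2.5]:
    x = p + t * d                # vector form
    print(np.dot(n, x))          # always n . p = -7, i.e. x - 2y = -7
```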

1.7 Lines in R3
See handout 2


Vector form

x̄ = p̄ + t d̄

[ x ]   [ p1 ]     [ d1 ]
[ y ] = [ p2 ] + t [ d2 ]
[ z ]   [ p3 ]     [ d3 ]

Parametric form

x = p1 + td1
y = p2 + td2
z = p3 + td3

Example 9. See handout 2


Find vector and parametric forms for the equation for the line containing the points (2, 4, -3) and (3,
-1, 1).

 
p̄1 = [2, 4, −3]
p̄2 = [3, −1, 1]

d̄ = p̄2 − p̄1
  = [3, −1, 1] − [2, 4, −3]
  = [1, −5, 4]

Pick one of the points for our point vector p̄.


1. Vector form

x̄ = p̄ + t d̄

[ x ]   [  2 ]     [  1 ]
[ y ] = [  4 ] + t [ −5 ]
[ z ]   [ −3 ]     [  4 ]

2. Parametric form


x=2+t
y = 4 − 5t
z = −3 + 4t

1.8 Planes in R3
See handout 2
 
Normal form. Let

n̄ = [ a ]
    [ b ]
    [ c ]

n̄ · x̄ = n̄ · p̄

[ a ]   [ x ]   [ a ]   [ p1 ]
[ b ] · [ y ] = [ b ] · [ p2 ]
[ c ]   [ z ]   [ c ]   [ p3 ]

General form

ax + by + cz = ap1 + bp2 + cp3


ax + by + cz = d

We can combine the constants on the right into one single constant, d.

Example 10. See handout 2


Find normal and general forms for the equation of the plane orthogonal to the vector [2,3,4] that passes
through the point (2,4,-1).
   
Let n̄ = [2, 3, 4] and p̄ = [2, 4, −1]. We can start off by putting this in the normal form.
1. Normal form

n̄ · x̄ = n̄ · p̄

[ 2 ]   [ x ]   [ 2 ]   [  2 ]
[ 3 ] · [ y ] = [ 3 ] · [  4 ]
[ 4 ]   [ z ]   [ 4 ]   [ −1 ]

2. General form

2x + 3y + 4z = (2)(2) + (3)(4) + (4)(−1)


2x + 3y + 4z = 12


Example 11. See handout 2


Find a vector form for the plane in the previous example.
Let ū, v̄ be in the plane. Then, ū ⊥ n̄, v̄ ⊥ n̄, so

ū · n̄ = 0 v̄ · n̄ = 0

And, ū is not parallel to v̄.


 
Let ū = [u1, u2, 0], where u3 = 0. Then,

n̄ · ū = 0
[2, 3, 4] · [u1, u2, 0] = 0
2u1 + 3u2 = 0

Let ū = [3, −2, 0].

Let v̄ = [v1, 0, v3], where v2 = 0. Then,

n̄ · v̄ = 0
[2, 3, 4] · [v1, 0, v3] = 0
2v1 + 4v3 = 0

Let v̄ = [2, 0, −1].

So the vector form is

x̄ = p̄ + sū + tv̄

[ x ]   [  2 ]     [  3 ]     [  2 ]
[ y ] = [  4 ] + s [ −2 ] + t [  0 ]
[ z ]   [ −1 ]     [  0 ]     [ −1 ]


And our parametric form is

x = 2 + 3s + 2t
y = 4 − 2s
z = −1 − t
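As an optional check (a NumPy sketch using the vectors from Examples 10 and 11), every point produced by the vector form of the plane satisfies the general form 2x + 3y + 4z = 12:

```python
import numpy as np

p = np.array([2, 4, -1])
u = np.array([3, -2, 0])
v = np.array([2, 0, -1])
n = np.array([2, 3, 4])

for s, t in [(0, 0), (1.5, -2.0), (-3.0, 4.0)]:
    x = p + s * u + t * v        # vector form of the plane
    print(np.dot(n, x))          # always 12, since n . u = n . v = 0
```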

2 Systems of Linear Equations


Definition 10. A linear equation in the n variables x1 , x2 , . . . , xn is an equation that can be written Feb 08
in the form:

a1 x1 + a2 x2 + · · · + an xn = b

where the coefficients a1 , . . . , an and the constant term b are constants.

Definition 11. A finite set of linear equations is a system of linear equations. A solution set of
a system of linear equations is the set of all solutions of the system. A system of linear equations is
either “consistent” if it has a solution, or it is “inconsistent” if there is no such solution.

Theorem 1. A system of linear equations has either


1. A unique solution – consistent
2. Infinitely many solutions – consistent
3. No solution – inconsistent

Definition 12. Two linear systems are said to be equivalent if they have the same solution set.

Example 12. See handout 3, problem 1

2x + y = 8
x − 3y = −3
y = 2, x = 3

Example 13. See handout 3, problem 2

Example 14. See handout 3, problem 1


Let


 
A = [ 2  1 ]
    [ 1 −3 ]

be the matrix of coefficients, and let

b̄ = [  8 ]
    [ −3 ]

be the vector of constants.

Then,

[A | b̄] = [ 2  1 |  8 ]
          [ 1 −3 | −3 ]

is called the augmented matrix.

2.1 Direct methods of solving systems


Example 15. For the system Feb 10

2x − y = 3
x + 3y = 5

The coefficient matrix A is

 
A = [ 2 −1 ]
    [ 1  3 ]

The constant vector b̄ is

b̄ = [ 3 ]
    [ 5 ]

The augmented matrix is

[A | b̄] = [ 2 −1 | 3 ]
          [ 1  3 | 5 ]

Definition 13 (Row echelon form of a matrix). See handout 4


2.2 Gaussian and Gauss-Jordan Elimination


To solve a system of linear equations using Gaussian Elimination: Feb 12
• Write out an augmented matrix for the system of linear equations
• Use elementary row operations to reduce the matrix to row echelon form
• Write out a system of equations corresponding to the row echelon matrix
• Use back substitution to find solution(s), if any exist, to the new system of equations
To solve the system of linear equations using Gauss-Jordan Elimination: reduce the augmented matrix to
reduced row echelon form
• Write out the system of equations corresponding to the reduced row echelon matrix
• Use back substitution to find solution(s), if any exist, to the new system of equations
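As a sketch of how this procedure can be automated (not part of the handouts; SymPy's rref is used here as a stand-in for hand row reduction), Gauss-Jordan elimination on the augmented matrix from Example 15:

```python
from sympy import Matrix

# Augmented matrix for  2x - y = 3,  x + 3y = 5
aug = Matrix([[2, -1, 3],
              [1,  3, 5]])

rref_matrix, pivot_columns = aug.rref()   # reduced row echelon form (Gauss-Jordan)
print(rref_matrix)                        # Matrix([[1, 0, 2], [0, 1, 1]]) -> x = 2, y = 1
```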

Example 16. See handout 5

Feb 15
Example 17. See handout 5

2.3 Spanning Sets and Linear Independence


Definition 14 (Linear Combinations of Vectors). V̄ is a linear combination of the set of vectors Feb 17
V̄1 , . . . , V̄k if you can write V̄ as the sum of scalar multiples of those vectors,

V̄ = c1 V̄1 + · · · + ck V̄k

for constants c1 , . . . , ck .


    
Example 18. Is [8, −3] a linear combination of [2, 1] and [1, −3]?
Alternatively, does [8, −3] = x[2, 1] + y[1, −3] have a solution?
Equivalently, does the following system have a solution?

2x + y = 8
x − 3y = −3


   
[ 2  1 |  8 ]
[ 1 −3 | −3 ]

R1 ↔ R2:
[ 1 −3 | −3 ]
[ 2  1 |  8 ]

R2 − 2R1:
[ 1 −3 | −3 ]
[ 0  7 | 14 ]

(1/7)R2:
[ 1 −3 | −3 ]
[ 0  1 |  2 ]

R1 + 3R2:
[ 1  0 | 3 ]
[ 0  1 | 2 ]

So,

x = 3, y = 2

     
Example 19. For what values a, b will [a, b] be a linear combination of [2, 1] and [1, −3]?

[a, b] = x[2, 1] + y[1, −3]

If and only if,

2x + y = a
x − 3y = b

So we can solve it using our augmented matrix:

   
[ 2  1 | a ]
[ 1 −3 | b ]

R1 ↔ R2:
[ 1 −3 | b ]
[ 2  1 | a ]

R2 − 2R1:
[ 1 −3 | b      ]
[ 0  7 | a − 2b ]

(1/7)R2:
[ 1 −3 | b          ]
[ 0  1 | (a − 2b)/7 ]

R1 + 3R2:
[ 1  0 | b + 3(a − 2b)/7 ]
[ 0  1 | (a − 2b)/7      ]

= [ 1  0 | (3a + b)/7 ]
  [ 0  1 | (a − 2b)/7 ]

So we have


x = (3a + b)/7
y = (a − 2b)/7

So the answer is that any choice of a, b will work. We can say that [2, 1] and [1, −3] "span" the plane (R2).

Definition 15 (Spanning Sets). If S = {V̄1 , . . . , V̄k } is a set of vectors in Rn , then the set of all linear
combinations of V̄1 , . . . , V̄k is called the span of V̄1 , . . . , V̄k , or

span(S)

If the span(S) = Rn , then we say S is a spanning set of Rn .

Example 20. Describe the span of S where

   
S = { [1, 0, 3], [−1, 1, −3] }

Another way to think about it: what vectors [a, b, c] are in the span of S?

[a, b, c] = x[1, 0, 3] + y[−1, 1, −3]

So,

x − y = a
y = b
3x − 3y = c

So we can use our augmented matrix:


   
[ 1 −1 | a ]   R3 − 3R1   [ 1 −1 | a      ]
[ 0  1 | b ]      =       [ 0  1 | b      ]
[ 3 −3 | c ]              [ 0  0 | c − 3a ]

R1 + R2:
[ 1  0 | a + b  ]
[ 0  1 | b      ]
[ 0  0 | c − 3a ]

So this system only has solutions if c − 3a = 0, or c = 3a. So vectors of the form [a, b, 3a] form the span of S.

Note 9. Linear systems of the form

x−y =a
y=b
3x − 3y = 3a

have solutions for arbitrary a, b. Here a and b are the free variables, but the third constant must be 3 times a.

2.4 Linear Independence


Definition 16. A set of vectors v̄1 , . . . , v̄k is linearly dependent if there are scalars c1 , . . . , ck (not Feb 22
all zero), such that

c1 v̄1 + · · · ck v̄k = 0̄

Otherwise, the set is linearly independent.

     
Example 21. Decide if the set { [1, 2, 0], [1, 1, −1], [1, 4, 2] } is linearly independent.
So this is asking if this is true:

c1 [1, 2, 0] + c2 [1, 1, −1] + c3 [1, 4, 2] = [0, 0, 0]

for non-trivial constants.


This leads to an augmented matrix:

[ 1  1 1 | 0 ]   R2 − 2R1   [ 1  1 1 | 0 ]
[ 2  1 4 | 0 ]      =       [ 0 −1 2 | 0 ]
[ 0 −1 2 | 0 ]              [ 0 −1 2 | 0 ]

R1 + R2 and R3 − R2:
[ 1  0 3 | 0 ]
[ 0 −1 2 | 0 ]
[ 0  0 0 | 0 ]

(−1)R2:
[ 1 0  3 | 0 ]
[ 0 1 −2 | 0 ]
[ 0 0  0 | 0 ]

So we have

c1 + 3c3 = 0
c2 − 2c3 = 0

So we can solve for c3:

c1 = −3c3
c2 = 2c3

So c3 is arbitrary; it doesn't have to be 0. So the system has non-trivial solutions, therefore the set is linearly
dependent.

−3c3 [1, 2, 0] + 2c3 [1, 1, −1] + c3 [1, 4, 2] = c3 [0, 0, 0]

This is called the "linear dependence relation."
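A quick numerical cross-check of Example 21 (an illustrative NumPy sketch): the rank of the matrix whose columns are the three vectors is less than 3, so the set is linearly dependent.

```python
import numpy as np

# Columns are the vectors [1,2,0], [1,1,-1], [1,4,2] from Example 21
A = np.column_stack(([1, 2, 0], [1, 1, -1], [1, 4, 2]))

print(np.linalg.matrix_rank(A))   # 2 < 3 columns -> linearly dependent
print(A @ np.array([-3, 2, 1]))   # the dependence relation: -3v1 + 2v2 + v3 = 0
```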

Note 10. A system whose augmented matrix has all 0s in the rightmost column (all constant terms zero) is called a homogeneous system of equations.

Theorem 2. Any set of m vectors in Rn is linearly dependent if m > n.

Example 22. Consider this set of vectors

     
S = { [1, 0], [0, 1], [0, 2] }


We can tell without doing anything else that these vectors have to be dependent. They are in R2 , but
there are 3 vectors total. We are guaranteed that there is a non-trivial linear combination that will
make the 0̄.

       
0 [1, 0] + 2 [0, 1] − [0, 2] = [0, 0]

However, it does not guarantee that each of the vectors can be written as a linear combination of the
others:

     
[1, 0] = c1 [0, 1] + c2 [0, 2]

has no solution.
Feb 24
Example 23. See handout 6

3 Matrices
See handout X
See handout 7 Mar 01
See handout 8 Mar 05

3.1 Subspaces of Matrices


Definition 17. Subspaces of Rn : A collection S of vectors in Rn such that Mar 10
1. The zero vector 0̄ is in S
2. If ū, v̄ are both in S, then ū + v̄ is in S
3. If ū is in S, then any scalar multiple cū is in S.
If all are true, then S is a subspace of Rn

You can combine 2 and 3 above as: If ū1 , . . . , ūk are in S and c1 , . . . , ck are scalars, then c1 ū1 + · · · + ck ūk
is in S. S is closed under linear combinations.


Theorem 3. Let v̄1 , . . . , v̄k be vectors in Rn , then S = span(v̄1 , . . . , v̄k ) is a subspace of Rn .


Proof:
Recall that the span of a set of vectors (v̄1 , . . . , v̄k ) is the set of all linear combinations of v̄1 , . . . , v̄k .
1. 0̄ = 0v̄1 + · · · + 0v̄k , so 0̄ is in the span S.
2. Let ū = c1 v̄1 + · · · + ck v̄k ; then by definition ū is in the span S. Also let v̄ = d1 v̄1 + · · · + dk v̄k .

ū + v̄ = (c1 v̄1 + · · · + ck v̄k ) + (d1 v̄1 + · · · + dk v̄k )


= (c1 + d1 )v̄1 + · · · + (ck + dk )v̄k

So ū + v̄ is in the span S.
3. If ū is in S, then cū = c(c1 v̄1 + · · · + ck v̄k ), then

cū = c(c1 v̄1 + · · · + ck v̄k )


= cc1 v̄1 + · · · + cck v̄k

So cū is in S.

Example 24. See handout 11

3.2 Nullspace
Example 25. See handout 11 Mar 12

3.3 Column space


3.4 Linear transformations Mar 17

See handout 12
Mar 29

Definition 18. The map T : Rn → Rm is linear if


1. T(ū + v̄) = T(ū) + T(v̄)   (here ū + v̄ is in Rn, while T(ū) and T(v̄) are in Rm)

2. T(cv̄) = cT(v̄)
Alternatively: T(cū + dv̄) = cT(ū) + dT(v̄)
for every ū, v̄ ∈ Rn and all scalars c, d.


Theorem 4. See Theorem 3.30 in Poole


Let A be an m × n matrix. Then the map defined by Ax̄ is linear for x̄ ∈ Rn.
Proof:
A(cū + dv̄) = cAū + dAv̄ by properties of matrix multiplication.
We can write this as TA (x̄) = Ax̄

Theorem 5. See Theorem 3.31 in Poole. (This is an important theorem!)

Let T : Rn → Rm be a linear transformation. Then there is an m × n matrix A such that T = TA .
Specifically, let ē1 , . . . , ēn be the standard basis for Rn .

   
ē1 = [1, 0, . . . , 0]    ēn = [0, . . . , 0, 1]

So we can find A by:

A = [T (ē1 ) | . . . | T (ēn )]m×n

See notes for proof

x̄ = x1 ē1 + · · · + xn ēn

T(x̄) = T(x1 ē1 + · · · + xn ēn)
     = x1 T(ē1) + · · · + xn T(ēn)
     = [ T(ē1) | . . . | T(ēn) ] [ x1 ]
                                 [ x2 ]
                                 [ ⋮  ]
                                 [ xn ]
T(x̄) = Ax̄

Example 26. See handout 14

Apr 02
Example 27. See handout 14, example 3
Let T : R2 → R2 be the projection of the vector v̄ onto the line ` through the origin.
See notes for drawing
Show that T is a linear transformation.


 
Let d̂ = [d1, d2] be a unit direction vector for `, where ||d̂|| = √(d1² + d2²) = 1. Note that

T(v̄) = proj_d̂ v̄
     = ( (v̄ · d̂) / ||d̂||² ) d̂

So our strategy is to find T (ē1 ) and T (ē2 ).

T(ē1) = proj_d̂ ē1
      = ( ([1, 0] · [d1, d2]) / ||d̂||² ) d̂
      = d1 [d1, d2]
      = [d1², d1 d2]

T(ē2) = proj_d̂ ē2
      = ( ([0, 1] · [d1, d2]) / ||d̂||² ) d̂
      = d2 [d1, d2]
      = [d1 d2, d2²]

So the standard matrix of T is

A = [ d1²    d1 d2 ]
    [ d1 d2  d2²   ]

So, the projection onto a line through the origin is a linear transformation.
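A small NumPy sketch of this standard matrix (the particular direction vector is our own choice, normalized to a unit vector as the derivation assumes):

```python
import numpy as np

d = np.array([1.0, 2.0])
d = d / np.linalg.norm(d)     # unit direction vector for the line

A = np.outer(d, d)            # [[d1*d1, d1*d2], [d1*d2, d2*d2]]
v = np.array([3.0, 0.0])

print(A @ v)                  # T(v) via the standard matrix
print(np.dot(v, d) * d)       # same result from the projection formula
```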

Example 28. Special case:


Project v̄, in the plane, onto the x-axis.
 
Let v̄ = [x, y]. We can drop this to the x-axis, and see that

T(v̄) = [x, 0]


Let

d̂ = [d1, d2] = [1, 0] = ē1

We can use the standard matrix we found in the previous example. Note that d1 = 1 and d2 = 0.

A = [ d1²    d1 d2 ]   [ 1 0 ]
    [ d1 d2  d2²   ] = [ 0 0 ]

Example 29. Another special case:


Project v̄, in the plane, onto the line y = x.
We can use the standard matrix we found in the previous example.

Let x̄ = [x, y], and

d̂ = [d1, d2] = [a, a]

where

||d̂|| = √(a² + a²)
      = √(2a²)
      = |a|√2
      = 1

so |a| = 1/√2.

Let

d̂ = [1/√2, 1/√2]


So

A = [ 1/2  1/2 ]
    [ 1/2  1/2 ]

  = (1/2) [ 1 1 ]
          [ 1 1 ]

T(v̄) = Av̄
     = (1/2) [ 1 1 ] [ x ]
             [ 1 1 ] [ y ]
     = (1/2) [ x + y ]
             [ x + y ]
     = [ (x + y)/2 ]
       [ (x + y)/2 ]

Both components equal the average of the components in the original vector. This is important in
statistics.

Example 30. Derive the formula for cos(α + β) and sin(α + β).
Recall the rotation matrix from Example 1 on Handout 14 that
 
A = [ cos θ  −sin θ ]
    [ sin θ   cos θ ]

Apply the rotation by β to the unit vector v̄ = [cos α, sin α], which points at angle α; rotating it by β gives the unit vector at angle α + β:

T(v̄) = [ cos β  −sin β ] [ cos α ]
       [ sin β   cos β ] [ sin α ]
     = [ cos β cos α − sin β sin α ]
       [ sin β cos α + cos β sin α ]
     = [ cos(α + β) ]
       [ sin(α + β) ]

So, since the components of equal vectors are equal to each other:

cos(α + β) = cos β cos α − sin β sin α


sin(α + β) = sin β cos α + cos β sin α

However, usually the text book will rearrange this:

cos(α + β) = cos α cos β − sin α sin β


sin(α + β) = sin α cos β + cos α sin β


We can also find the difference of the angles by thinking about

cos(α − β) = cos(α + (−β))


cos(−β) = cos β
sin(−β) = − sin β

Apr 05
Example 31. Additional questions for example 3 on Handout 14 :
Recall: T : R2 → R2 and T (v̄) = Av̄.
1. What is the range of the projection?
The line ` is the range. Since every vector gets projected onto `, that is the range.
2. Is the line ` a subspace of R2 ?
Yes! Recall that subspaces need to include the zero vector 0̄.
• The line ` contains the point (0, 0).
• If ū||` and v̄||`, then (ū + v̄)||`.
• If ū||`, then cū||`.
3. Find a basis for the subspace `.
 
{ [d1, d2] }

ū = a d̂
v̄ = b d̂
ū + v̄ = a d̂ + b d̂
      = (a + b) d̂

So, (a + b)d̂ || `.

cū = c(a d̂)
   = (ca) d̂

and, (ca)d̂ || `.
4. Describe the column space of A. Recall:

A = [ d1²    d1 d2 ]
    [ d1 d2  d2²   ]

The answer is the line `.

Av̄ = b̄

A big takeaway: the range of T is the column space of A.


Let's find the column space of A, using the augmented matrix:

[A | b̄] = [ d1²    d1 d2 | a ]
          [ d1 d2  d2²   | b ]

(d2)R1, (d1)R2:
[ d1² d2   d1 d2²  | d2 a ]
[ d1² d2   d1 d2²  | d1 b ]

R2 − R1:
[ d1² d2   d1 d2²  | d2 a        ]
[ 0        0       | d1 b − d2 a ]

d1 b = d2 a
b = (d2/d1) a

Notice that this is a line through the origin with slope m = d2/d1, which is the line `. So the
column space of the matrix A is the same as the range of T. The basis for the column space is
the same as the basis of the range of T.

Col(A) = { [d1, d2] }

5. What is the rank of A?


The Rank(A) = dim(Col(A)) = dim(Row(A)), so the rank is 1.
6. Describe the null space of A. We are looking for Av̄ = 0̄.
Any vector that is orthogonal to ` and passes through the origin will be projected to the zero
vector. This is the line `2 whose slope is m = −d1/d2.
So let's show this analytically:

[A | 0̄] = [ d1²    d1 d2 | 0 ]
          [ d1 d2  d2²   | 0 ]
        = [ d1  d2 | 0 ]
          [ 0   0  | 0 ]

d1 x + d2 y = 0
y = −(d1/d2) x

7. Find a basis for the null space Null(A).

{ [d2, −d1] }

Example 32. Let T : Rm → Rn with a standard matrix B, and S : Rn → Rp with a standard matrix
A.

S(T(v̄)) = S(Bv̄)
        = A(Bv̄)
        = (AB)v̄
        = (S ◦ T)(v̄)


3.5 Composition of linear transformations


Definition 19. If S(ū) = Aū and T (v̄) = Bv̄, then Apr 07

S(T (v̄)) = (S ◦ T )(v̄)


= ABv̄

and AB is the standard matrix for this composition (S ◦ T )(v̄).

Example 33. Show that reflection in the plane about the x-axis is a linear transformation.
See notes for drawing
T : R2 → R2 .
 
T(ē1) = T([1, 0]) = [1, 0] = ē1

T(ē2) = T([0, 1]) = [0, −1] = −ē2

So the standard matrix for T is

A = [ 1  0 ]
    [ 0 −1 ]

Since we found a matrix that implements the transformation, that means that reflection about the
x-axis must be linear.

Example 34. Let Fx : R2 → R2 be a reflection about the x-axis. Let Rθ : R2 → R2 be a rotation by


θ. Find the standard matrix for R60 deg (Fx (v̄)).
See notes for drawing
 
R60 = [ cos 60  −sin 60 ]
      [ sin 60   cos 60 ]

Fx = [ 1  0 ]
     [ 0 −1 ]

R60 ◦ Fx = [ cos 60  −sin 60 ] [ 1  0 ]
           [ sin 60   cos 60 ] [ 0 −1 ]

         = [ 1/2    √3/2 ]
           [ √3/2  −1/2  ]


This is the standard matrix for reflection about the x-axis followed by rotation of 60 degrees.

Example 35. Find the standard matrix that rotates by 60 degrees, then reflects about the x-axis. This
is reverse order of the previous problem.

Fx(R60(v̄)) = (Fx ◦ R60)(v̄)

Fx ◦ R60 = [ 1  0 ] [ 1/2   −√3/2 ]
           [ 0 −1 ] [ √3/2   1/2  ]

         = [  1/2   −√3/2 ]
           [ −√3/2  −1/2  ]

This is the standard matrix for (Fx ◦ R60 )(v̄).
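Both compositions can be reproduced numerically; an illustrative NumPy sketch (our own check, not from the handouts) that also shows the two orders give different matrices:

```python
import numpy as np

theta = np.radians(60)
R60 = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
Fx = np.array([[1.0, 0.0],
               [0.0, -1.0]])

print(R60 @ Fx)   # reflect about the x-axis, then rotate by 60 degrees (Example 34)
print(Fx @ R60)   # rotate by 60 degrees, then reflect (Example 35) -- a different matrix
```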

3.6 Inverses of linear transformations


Definition 20. Let T : Rn → Rn be a linear transformation, then T −1 is the inverse linear transfor-
mation, if

T −1 (T (v̄)) = v̄
T (T −1 (v̄)) = v̄

Let A be the standard matrix of T , then T has an inverse T −1 if and only if A has an inverse. Further-
more, the standard matrix of the inverse T −1 is A−1 .

T −1 (T (v̄)) = T −1 (Av̄)
= A−1 (Av̄)
= (A−1 A)v̄
= I v̄
= v̄

Example 36. Let R60 : R2 → R2 be a rotation by 60 degrees. What is the inverse R60⁻¹?
See notes for drawing
We are looking for something that rotates by negative 60 degrees.

R60⁻¹ = R−60
      = [ cos(−60)  −sin(−60) ]
        [ sin(−60)   cos(−60) ]
      = [  cos(60)  sin(60) ]
        [ −sin(60)  cos(60) ]
      = [  1/2   √3/2 ]
        [ −√3/2  1/2  ]


Let's check:

R60⁻¹(R60(v̄)):

[  1/2   √3/2 ] [ 1/2   −√3/2 ]   [ 1 0 ]
[ −√3/2  1/2  ] [ √3/2   1/2  ] = [ 0 1 ]

Example 37. Find the inverse of the reflection Fx. We are looking for Fx⁻¹. Since reflecting
a second time returns the vector to its original position, the reflection is its own inverse. The standard
matrix is

[ 1  0 ]
[ 0 −1 ]

You can check this by multiplying it by itself, and it returns the identity matrix I.

Example 38. Does projection onto the line ` (through the origin) have an inverse (in the plane)?
P` : R2 → R2 .
See notes for drawing.
The standard matrix of P` is

P` = [ d1²    d1 d2 ]
     [ d1 d2  d2²   ]

where d̂ = [d1, d2] is a unit direction vector for `.
Since infinitely many different vectors project to the same vector on `, there is no
inverse. Equivalently, since the standard matrix P` is not invertible, P`⁻¹ does not exist.
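A one-line numerical illustration (a NumPy sketch, with a direction vector chosen by us) that the projection matrix is singular:

```python
import numpy as np

d = np.array([1.0, 2.0]) / np.sqrt(5.0)   # unit direction vector for the line
P = np.outer(d, d)                        # standard matrix of the projection

print(np.linalg.det(P))                   # 0.0 (up to rounding) -> P is not invertible
```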

Apr 09
Example 39. See problem 26 in 3.6 of Poole
If the angle between ` and the positive x-axis is θ, show that the matrix of F` is
 
[ cos 2θ   sin 2θ ]
[ sin 2θ  −cos 2θ ]

See notes for drawing


We can rotate the entire plane so it is then a reflection about the x-axis.

Rθ(Fx(Rθ⁻¹(v̄))) = Rθ(Fx(R−θ(v̄)))
                = (Rθ ◦ Fx ◦ R−θ)(v̄)
                = F`(v̄)

The standard matrix of F` is

[ cos θ  −sin θ ] [ 1  0 ] [  cos θ  sin θ ]
[ sin θ   cos θ ] [ 0 −1 ] [ −sin θ  cos θ ]

= [ cos θ  −sin θ ] [ cos θ   sin θ ]
  [ sin θ   cos θ ] [ sin θ  −cos θ ]

= [ cos²θ − sin²θ               cos θ sin θ + sin θ cos θ ]
  [ sin θ cos θ + cos θ sin θ   sin²θ − cos²θ             ]

= [ cos²θ − sin²θ   2 sin θ cos θ  ]
  [ 2 sin θ cos θ   sin²θ − cos²θ  ]

= [ cos 2θ   sin 2θ ]
  [ sin 2θ  −cos 2θ ]

Aside 1.

cos 2θ = cos(θ + θ)
= cos2 θ − sin2 θ

4 Eigenvalues and Eigenvectors


Definition 21. Let A be an n × n matrix. A scalar λ is an eigenvalue of the matrix A if there is a
non-zero vector v̄ such that

Av̄ = λv̄

where v̄ is an eigenvector associated with λ.


Eigenvector can be abbreviated e-vector, and eigenvalue can be abbreviated e-value.

Note 11. If λ is real, then the new vector will be parallel to the original vector. It is possible that
λ is complex.


   
Example 40. Show that [2, −3] is an eigenvector of the matrix

A = [  1 −2 ]
    [ −3  2 ]

and find its eigenvalue.

Av̄ = [  1 −2 ] [  2 ]
     [ −3  2 ] [ −3 ]
   = [   8 ]
     [ −12 ]
   = 4 [  2 ]
       [ −3 ]

So [2, −3] is an e-vector with an e-value of λ = 4.

 
Example 41. Show that λ1 = −2 and λ2 = 5 are e-values of the matrix

A = [ 2 3 ]
    [ 4 1 ]

and find associated e-vectors.
We’ll start with λ1 = −2 :

Av¯1 = −2v¯1
Av¯1 + 2v¯1 = 0̄
Av¯1 + 2I v¯1 = 0̄
(A + 2I)v¯1 = 0̄

So v¯1 is in the null space of A + 2I.

Aside 2.

2I = 2 [ 1 0 ]   [ 2 0 ]
       [ 0 1 ] = [ 0 2 ]

A + 2I = A − λI
       = [ 2 3 ]   [ 2 0 ]
         [ 4 1 ] + [ 0 2 ]
       = [ 4 3 ]
         [ 4 3 ]

We are looking for v̄ that is in the null space.

[ A + 2I | 0̄ ] = [ 4 3 | 0 ]
                 [ 4 3 | 0 ]
               = [ 4 3 | 0 ]
                 [ 0 0 | 0 ]

4x + 3y = 0


   
Let v̄1 = [3, −4]; then v̄1 is an e-vector for λ1 = −2. We can check this by

Av̄1 = [ 2 3 ] [  3 ]
      [ 4 1 ] [ −4 ]
    = [ −6 ]
      [  8 ]
    = −2 [  3 ]
         [ −4 ]
    = −2 v̄1
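The eigenvalues and eigenvectors of this matrix can also be found numerically; an illustrative NumPy sketch (eigenvector scaling, sign, and ordering may differ from the hand computation):

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # 5 and -2 (order may vary)
print(eigenvectors)   # columns are unit eigenvectors, e.g. [3, -4]/5 for lambda = -2
```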

Apr 14
Example 42. See handout 16

Apr 16
Example 43. See handout 16 example 2

5 Determinants
See handout 18

5.1 Cofactor expansion


Example 44. See handout 18 example at end on cofactors Apr 19

5.2 Invertibility
Definition 22. If a matrix A is full rank and square (n × n ), then it will row reduce to the identity
matrix In×n . Therefore,
• The matrix is invertible.

[ A | I ] → [ I | A⁻¹ ]
• The determinant is non-zero.

An n × n matrix that is less than full rank row reduces to a matrix with a row of zeros at the bottom. Therefore, Apr 21
• It will have a zero determinant.
• It will not be invertible.

Theorem 6. The n × n matrix A is invertible if and only if det(A) ≠ 0.

See more theorems in handout 18

5.3 Cramer’s rule


Definition 23. See handout 18


Let A be an invertible n × n matrix, and let b̄ be any vector in Rn . Then the unique solution x̄ of the
system Ax̄ = b̄ is given by

xi = det(Ai(b̄)) / det(A)

for i = 1, . . . , n.
Note that Ai (b) is created by replacing the ith column of A with the vector b̄.
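A short sketch of Cramer's rule in NumPy (our own illustration, applied to the system from Example 15), replacing the ith column of A with b̄:

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [1.0,  3.0]])
b = np.array([3.0, 5.0])

x = np.empty(2)
for i in range(2):
    Ai = A.copy()
    Ai[:, i] = b                                   # A_i(b): replace the i-th column with b
    x[i] = np.linalg.det(Ai) / np.linalg.det(A)

print(x)                                           # [2., 1.] -- solves 2x - y = 3, x + 3y = 5
```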

Example 45. See handout 18, example on Cramer’s rule

5.4 Determinants and Eigenvalues


See handout 19 Apr 23
To find the eigenvalues and eigenvectors:
1. Find λ such that det(A − λI) = 0.
2. Substitute into the equation

[A − λI]v̄ = 0̄

and solve for v̄.

Example 46. See handout 19 example 1

Example 47. See handout 19 example 2a/b

Apr 26
Example 48. See handout 19 example 3

Example 49. See handout 19 example 4

5.5 Similarity and Diagonalization


Definition 24. For n × n matrices A and B, A is similar to B, written A ∼ B, if an invertible n × n Apr 28
matrix P exists such that

P −1 AP = B

Definition 25. An n × n matrix A is diagonalizable if there is a diagonal matrix D that is similar to


A.

Theorem 7. The n × n matrix A is diagonalizable if and only if A has n linearly independent eigen-
vectors. (Deficient matrices need not apply!)


Example 50. See handout 20 example 1

Apr 30
Theorem 8. Let P be the matrix whose columns are independent eigenvectors of matrix A. Then the
entries of the diagonal matrix D = P⁻¹AP are the eigenvalues of A.
Proof:
Let P be an invertible matrix of eigenvectors of the n × n matrix A. Let P̄j be the jth column vector of P.

P = [ P̄1 · · · P̄n ]

Then

P⁻¹P = P⁻¹ [ P̄1 · · · P̄n ]
     = [ P⁻¹P̄1 · · · P⁻¹P̄n ]
     = [ ē1 · · · ēn ]
     = In×n

Now,

P⁻¹AP = P⁻¹ A [ P̄1 · · · P̄n ]
      = P⁻¹ [ AP̄1 · · · AP̄n ]
      = P⁻¹ [ λ1 P̄1 · · · λn P̄n ]
      = [ λ1 P⁻¹P̄1 · · · λn P⁻¹P̄n ]
      = [ λ1 ē1 · · · λn ēn ]
      = D

where D is the diagonal matrix with the corresponding eigenvalues λ1, . . . , λn along its diagonal. So A ∼ D, where the diagonal
entries of D are the corresponding eigenvalues.
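A numerical illustration of the theorem (a NumPy sketch using the matrix from Example 41; floating point noise is rounded away):

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, 1.0]])

eigvals, P = np.linalg.eig(A)    # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P

print(np.round(D, 10))           # diagonal, with the eigenvalues of A on the diagonal
print(eigvals)
```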

Example 51. See handout 20 example 3

Example 52. See handout 20 example 4

May 03

6 Distance and approximation


6.1 Least squares approximation
See handout 21
Recall our Ax̄ = b̄ problem, where A is an m × n matrix, and x̄ is what we're solving for.
Recognizing that Ax̄ = b̄ has no solution for most overdetermined systems, we transform the problem into
a related (but different) problem,

AT Ax̃ = AT b̄

Note 12. Overdetermined systems are when we have more equations than variables. It is then almost
certain that we don't have an exact solution, because we have too many constraints on the variables.

We are only considering the case where A is full rank. In general rank(A) ≤ min{m, n}; for skinny matrices (m > n)
this means rank(A) ≤ n, and A is full rank if rank(A) = n, which holds if and only if the columns of A form a linearly
independent set.

rank(AT A) = rank(AAT ) = rank(A) = n

x̃ is called the least squares approximation for Ax̄ = b̄.
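A minimal sketch (NumPy, with made-up data of our own) showing that solving the normal equations AᵀAx̃ = Aᵀb̄ agrees with a built-in least squares solver:

```python
import numpy as np

# Overdetermined system: fit y = c0 + c1*t to three data points (made-up data)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 4.0])

x_normal = np.linalg.solve(A.T @ A, A.T @ b)       # normal equations
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)    # built-in least squares

print(x_normal)   # the least squares approximation x~
print(x_lstsq)    # same values
```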
