MATH100 Lecture Notes
Zed Chance
Spring 2021
Contents
1 Vectors
  1.1 Vector properties
  1.2 Linear combinations and coordinates
  1.3 Dot Product
  1.4 Distance between vectors
  1.5 Projections
  1.6 Lines and planes
  1.7 Lines in R^3
  1.8 Planes in R^3
2 Systems of Linear Equations
  2.1 Direct methods of solving systems
  2.2 Gaussian and Gauss-Jordan Elimination
  2.3 Spanning Sets and Linear Independence
  2.4 Linear Independence
3 Matrices
  3.1 Subspaces of Matrices
  3.2 Nullspace
  3.3 Column space
  3.4 Linear transformations
  3.5 Composition of linear transformations
  3.6 Inverses of linear transformations
4 Eigenvalues and Eigenvectors
5 Determinants
  5.1 Cofactor expansion
  5.2 Invertibility
  5.3 Cramer's rule
  5.4 Determinants and Eigenvalues
  5.5 Similarity and Diagonalization
6 Distance and Approximation
  6.1 Least squares approximation
1 Vectors
Jan 23

Definition 1 (Vectors). Vectors are directed line segments; they have both magnitude and direction. They exist in a "space," such as the plane R^2, ordinary space R^3, or an n-dimensional space R^n.
• In R^3, the vector v can be represented by its components as v = [v_1, v_2, v_3].
• v can also be represented as a line segment with an arrowhead pointing in the direction of v.
1.1 Vector properties

Properties Vectors can be combined to form new vectors. Whether we are combining our vectors algebraically (manipulating their components) or geometrically (manipulating their graphs), the following properties apply. Let u, v, and w be vectors, and c and d be real numbers; then

u + v = v + u                      (commutative)
(u + v) + w = u + (v + w)          (associative)
c(du) = (cd)u                      (associative)
u + 0 = u                          (additive identity)
u + (−u) = 0                       (additive inverse)
c(u + v) = cu + cv                 (distributive)
(c + d)u = cu + du                 (distributive)
1u = u                             (multiplicative identity)
Jan 27
Row vector: v̄ = [2, 3]. As a column vector:

v̄ = \begin{bmatrix} 2 \\ 3 \end{bmatrix}
Note 1. Vectors u and v are equivalent if they have the same length and direction.
ū + v̄ = [1, 2] + [3, 1]
= [1 + 3, 2 + 1]
= [4, 3]
Geometrically, this is the “tip to tail” method. Any two vectors define a parallelogram.
Let ū = [1, 2], and think about ū + ū.
2
1 VECTORS 1.1 Vector properties
ū + ū = [1, 2] + [1, 2]
= [2, 4]
2ū = 2[1, 2]
= [2, 4]
(−1)ū = (−1)[1, 2]
= [−1, −2]
Multiplying by −1 points the vector in the opposite direction; the result is "antiparallel" to ū. In general, a negative scalar both scales the vector and reverses its direction.
ū − v̄ = ū + (−v̄)
      = [1, 2] + [−3, −1]
      = [−2, 1]
The sum and the difference are the two diagonals of the parallelogram created by the vectors.
Note 2. Vector addition is commutative, but vector subtraction is not (it is anticommutative).
For ū = [1, 2]:

||ū|| = \sqrt{1^2 + 2^2} = \sqrt{5}
1.2 Linear combinations and coordinates

Definition 5 (Standard Basis Vectors and Standard Coordinates). In R^2: ē_1 = [1, 0], ē_2 = [0, 1]; these are the standard basis vectors. Then v̄ = [v_1, v_2] = v_1 ē_1 + v_2 ē_2, and the standard coordinates of v̄ are v_1, v_2.
1.3 Dot Product

ū · v̄ = u_1 v_1 + u_2 v_2 + · · · + u_n v_n
Example 2.
Properties of dot products (scalar products) Let ū, v̄, w̄ be vectors, and c be a scalar; then

ū · v̄ = v̄ · ū                              (commutative)
ū · (v̄ + w̄) = (ū · v̄) + (ū · w̄)            (distributive)
(cū) · v̄ = c(ū · v̄)
0̄ · v̄ = 0
v̄ · v̄ = v_1^2 + v_2^2 + · · · + v_n^2
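These properties can be sanity-checked numerically. A minimal sketch using NumPy (the library choice is an assumption; the course itself works by hand):

    import numpy as np

    u = np.array([1.0, 2.0])
    v = np.array([3.0, 1.0])
    w = np.array([-2.0, 4.0])
    c = 5.0

    print(np.dot(u, v) == np.dot(v, u))                               # commutative
    print(np.isclose(np.dot(u, v + w), np.dot(u, v) + np.dot(u, w)))  # distributive
    print(np.isclose(np.dot(c * u, v), c * np.dot(u, v)))             # scalars factor out
    print(np.isclose(np.dot(v, v), np.sum(v ** 2)))                   # v·v = v_1^2 + ... + v_n^2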
Length In R^2: ||v̄|| = \sqrt{v_1^2 + v_2^2}. In general: ||v̄|| = \sqrt{v_1^2 + v_2^2 + · · · + v_n^2}.
Example 3. For v̄ = [2, −1, 7]:

||v̄|| = \sqrt{2^2 + (−1)^2 + 7^2} = \sqrt{4 + 1 + 49} = \sqrt{54} = 3\sqrt{6}
Note 4. ||v̄|| = \sqrt{v̄ · v̄}
Definition 7. A vector of length 1 is called a unit vector. For any vector v̄ ≠ 0̄, the vector v̄/||v̄|| is a unit vector in the same direction as v̄.
Note 5. v̄ = ||v̄|| \frac{v̄}{||v̄||}: every vector is its length times the unit vector in its direction.
Example 4. In R^2: ē_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} and ē_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} are unit vectors.
Important inequalities
• Triangle inequality: ||ū + v̄|| ≤ ||ū|| + ||v̄||. In the triangle created by the parallelogram of a vector addition, the length of any one side cannot be greater than the sum of the other two sides.
• Cauchy-Schwarz inequality: |ū · v̄| ≤ ||ū|| ||v̄||.
1.4 Distance between vectors
By the law of cosines, c^2 = a^2 + b^2 − 2ab \cos θ. Applied to the triangle formed by ū, v̄, and ū − v̄, where θ is the angle between ū and v̄:

||ū − v̄||^2 = ||ū||^2 + ||v̄||^2 − 2||ū|| ||v̄|| \cos θ

Expanding the left side, (ū − v̄) · (ū − v̄) = ||ū||^2 − 2(ū · v̄) + ||v̄||^2. Cancelling common terms gives

\cos θ = \frac{ū · v̄}{||ū|| ||v̄||}

So, θ = \cos^{−1}\left(\frac{ū · v̄}{||ū|| ||v̄||}\right).

Note 6. If ū, v̄ ≠ 0̄, then θ = π/2 if and only if ū · v̄ = 0:
ū ⊥ v̄ iff ū · v̄ = 0
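As a numerical illustration of the angle formula (a sketch assuming NumPy; the vectors are the ones used in Example 5 below):

    import numpy as np

    u = np.array([2.0, -1.0, 7.0])
    v = np.array([3.0, 5.0, -2.0])

    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    theta = np.arccos(cos_theta)          # angle between u and v, in radians
    print(np.degrees(theta))

    # perpendicularity test: dot product of perpendicular vectors is 0
    print(np.dot(np.array([1.0, 0.0]), np.array([0.0, 1.0])))   # 0.0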
Feb 03
d(ū, v̄) = \sqrt{(u_1 − v_1)^2 + (u_2 − v_2)^2} = ||ū − v̄|| = d(v̄, ū) = ||v̄ − ū||
Example 5. In R^3, for ū = [2, −1, 7] and v̄ = [3, 5, −2], find the distance:

d(ū, v̄) = ||ū − v̄|| = ||[−1, −6, 9]|| = \sqrt{1 + 36 + 81} = \sqrt{118}
1.5 Projections
Definition 9. Let proj_ū v̄ be the vector projection of v̄ onto ū; the signed length of proj_ū v̄ is given by

||v̄|| \cos θ = ||v̄|| \frac{ū · v̄}{||v̄|| ||ū||} = \frac{v̄ · ū}{||ū||}
So,

proj_ū v̄ = \frac{v̄ · ū}{||ū||} \cdot \frac{ū}{||ū||} = \frac{v̄ · ū}{ū · ū} ū
Note 8. Remember, ū/||ū|| is the unit vector in the direction of ū, so the projection can also be written

proj_ū v̄ = \left( v̄ · \frac{ū}{||ū||} \right) \frac{ū}{||ū||}
Example 6. For ū = [2, 1, −2] and v̄ = [3, 0, 8], find the projection of v̄ onto ū:

proj_ū v̄ = \frac{v̄ · ū}{ū · ū} ū = \frac{6 + 0 − 16}{4 + 1 + 4} ū = −\frac{10}{9} [2, 1, −2]

Since the coefficient is negative, the angle between the two vectors is more than 90 degrees.
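The same computation in NumPy (a sketch; it uses the projection formula from Definition 9):

    import numpy as np

    u = np.array([2.0, 1.0, -2.0])
    v = np.array([3.0, 0.0, 8.0])

    proj = (np.dot(v, u) / np.dot(u, u)) * u   # (v·u / u·u) u
    print(proj)                                # -10/9 * [2, 1, -2]
    print(np.dot(v - proj, u))                 # the residual v - proj is orthogonal to u: ~0.0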
1.6 Lines and planes

Vector form of a line in R^2:

x̄ = p̄ + t d̄

\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} p_1 \\ p_2 \end{bmatrix} + t \begin{bmatrix} d_1 \\ d_2 \end{bmatrix}

Parametric form:

x = p_1 + t d_1
y = p_2 + t d_2

Normal form:

n̄ · x̄ = n̄ · p̄

\begin{bmatrix} a \\ b \end{bmatrix} · \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} a \\ b \end{bmatrix} · \begin{bmatrix} p_1 \\ p_2 \end{bmatrix}

General form:

ax + by = a p_1 + b p_2, i.e., ax + by = c
Example 7. Find equations for the line through (−3, 2) with direction d̄ = [2, 1].

1. Vector form

x̄ = p̄ + t d̄

\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} −3 \\ 2 \end{bmatrix} + t \begin{bmatrix} 2 \\ 1 \end{bmatrix}

2. Parametric form

x = −3 + 2t
y = 2 + t
Feb 05
Example 8. Continued from the previous example.

3. General form
Solving each parametric equation for t:

t = \frac{x + 3}{2} = \frac{y − 2}{1}

x + 3 = 2y − 4
x − 2y = −7

4. Normal form
With n̄ = [1, −2] (read off the general form), n̄ · (x̄ − p̄) = 0, i.e., n̄ · x̄ = n̄ · p̄:

\begin{bmatrix} 1 \\ −2 \end{bmatrix} · \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1 \\ −2 \end{bmatrix} · \begin{bmatrix} −3 \\ 2 \end{bmatrix}
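The four forms describe the same line, which can be cross-checked numerically (a sketch assuming NumPy): points generated by the vector form should satisfy the general form x − 2y = −7.

    import numpy as np

    p = np.array([-3.0, 2.0])   # point on the line
    d = np.array([2.0, 1.0])    # direction vector
    n = np.array([1.0, -2.0])   # normal vector

    for t in [-1.0, 0.0, 2.5]:
        x = p + t * d                                  # vector form
        print(np.isclose(x[0] - 2 * x[1], -7.0))       # general form holds
        print(np.isclose(np.dot(n, x), np.dot(n, p)))  # normal form holds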
1.7 Lines in R^3
See handout 2
Vector form

x̄ = p̄ + t d̄

\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix} + t \begin{bmatrix} d_1 \\ d_2 \\ d_3 \end{bmatrix}

Parametric form

x = p_1 + t d_1
y = p_2 + t d_2
z = p_3 + t d_3
Example. Find the vector form of the line through the points

p̄_1 = \begin{bmatrix} 2 \\ 4 \\ −3 \end{bmatrix}, \quad p̄_2 = \begin{bmatrix} 3 \\ −1 \\ 1 \end{bmatrix}

1. Vector form
A direction vector is

d̄ = p̄_2 − p̄_1 = \begin{bmatrix} 3 \\ −1 \\ 1 \end{bmatrix} − \begin{bmatrix} 2 \\ 4 \\ −3 \end{bmatrix} = \begin{bmatrix} 1 \\ −5 \\ 4 \end{bmatrix}

So x̄ = p̄ + t d̄:

\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ −3 \end{bmatrix} + t \begin{bmatrix} 1 \\ −5 \\ 4 \end{bmatrix}
2. Parametric form
x=2+t
y = 4 − 5t
z = −3 + 4t
1.8 Planes in R^3
See handout 2
Normal form Let n̄ = \begin{bmatrix} a \\ b \\ c \end{bmatrix}. Then

n̄ · x̄ = n̄ · p̄

\begin{bmatrix} a \\ b \\ c \end{bmatrix} · \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} a \\ b \\ c \end{bmatrix} · \begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix}
General form
Expanding the normal form gives ax + by + cz = a p_1 + b p_2 + c p_3. We can combine the constants on the right into one single constant d:

ax + by + cz = d
Example. Find equations for the plane through p̄ = [2, 4, −1] with normal vector n̄ = [2, 3, 4].

1. Normal form

n̄ · x̄ = n̄ · p̄

\begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix} · \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix} · \begin{bmatrix} 2 \\ 4 \\ −1 \end{bmatrix}
2. General form

2x + 3y + 4z = (2)(2) + (3)(4) + (4)(−1) = 12

3. Vector form
We need two non-parallel vectors ū and v̄ in the plane, i.e., with ū · n̄ = 0 and v̄ · n̄ = 0. Take u_3 = 0:

n̄ · ū = \begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix} · \begin{bmatrix} u_1 \\ u_2 \\ 0 \end{bmatrix} = 0, \text{ so } 2u_1 + 3u_2 = 0
Let ū = \begin{bmatrix} 3 \\ −2 \\ 0 \end{bmatrix}. Similarly, let v̄ = \begin{bmatrix} v_1 \\ 0 \\ v_3 \end{bmatrix}, where v_2 = 0.
Then

n̄ · v̄ = \begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix} · \begin{bmatrix} v_1 \\ 0 \\ v_3 \end{bmatrix} = 0, \text{ so } 2v_1 + 4v_3 = 0

Let v̄ = \begin{bmatrix} 2 \\ 0 \\ −1 \end{bmatrix}.
So the vector form is
x̄ = p̄ + sū + tv̄
\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ −1 \end{bmatrix} + s \begin{bmatrix} 3 \\ −2 \\ 0 \end{bmatrix} + t \begin{bmatrix} 2 \\ 0 \\ −1 \end{bmatrix}
4. Parametric form

x = 2 + 3s + 2t
y = 4 − 2s
z = −1 − t
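The parametric and general forms of this plane can be cross-checked numerically (a sketch assuming NumPy):

    import numpy as np

    p = np.array([2.0, 4.0, -1.0])
    u = np.array([3.0, -2.0, 0.0])
    v = np.array([2.0, 0.0, -1.0])
    n = np.array([2.0, 3.0, 4.0])

    print(np.dot(n, u), np.dot(n, v))   # both 0: u and v lie in the plane
    for s, t in [(0.0, 0.0), (1.0, -2.0), (0.5, 3.0)]:
        x = p + s * u + t * v                  # point from the vector form
        print(np.isclose(np.dot(n, x), 12.0))  # general form: 2x + 3y + 4z = 12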
2 Systems of Linear Equations

A linear equation in the variables x_1, . . . , x_n has the form

a_1 x_1 + a_2 x_2 + · · · + a_n x_n = b
Definition 11. A finite set of linear equations is a system of linear equations. The solution set of a system of linear equations is the set of all solutions of the system. A system of linear equations is "consistent" if it has a solution, and "inconsistent" if there is no such solution.
Definition 12. Two linear systems are said to be equivalent if they have the same solution set.
2x + y = 8
x − 3y = −3
y = 2, x = 3
2.1 Direct methods of solving systems
The coefficient matrix A and augmented matrix [A | b̄] for this system:

A = \begin{bmatrix} 2 & 1 \\ 1 & −3 \end{bmatrix}, \quad b̄ = \begin{bmatrix} 8 \\ −3 \end{bmatrix}, \quad [A | b̄] = \left[\begin{array}{cc|c} 2 & 1 & 8 \\ 1 & −3 & −3 \end{array}\right]
For the system

2x − y = 3
x + 3y = 5

A = \begin{bmatrix} 2 & −1 \\ 1 & 3 \end{bmatrix}, \quad b̄ = \begin{bmatrix} 3 \\ 5 \end{bmatrix}, \quad [A | b̄] = \left[\begin{array}{cc|c} 2 & −1 & 3 \\ 1 & 3 & 5 \end{array}\right]
2.2 Gaussian and Gauss-Jordan Elimination
Feb 15
Example 17. See handout 5
A vector v̄ is a linear combination of vectors v̄_1, . . . , v̄_k if

v̄ = c_1 v̄_1 + · · · + c_k v̄_k

for constants c_1, . . . , c_k.
Example 18. Is \begin{bmatrix} 8 \\ −3 \end{bmatrix} a linear combination of \begin{bmatrix} 2 \\ 1 \end{bmatrix} and \begin{bmatrix} 1 \\ −3 \end{bmatrix}?

Alternatively, does

\begin{bmatrix} 8 \\ −3 \end{bmatrix} = x \begin{bmatrix} 2 \\ 1 \end{bmatrix} + y \begin{bmatrix} 1 \\ −3 \end{bmatrix}

have a solution? Equivalently, does the following system have a solution?

2x + y = 8
x − 3y = −3
\left[\begin{array}{cc|c} 2 & 1 & 8 \\ 1 & −3 & −3 \end{array}\right]
\xrightarrow{R_1 ↔ R_2}
\left[\begin{array}{cc|c} 1 & −3 & −3 \\ 2 & 1 & 8 \end{array}\right]
\xrightarrow{R_2 − 2R_1}
\left[\begin{array}{cc|c} 1 & −3 & −3 \\ 0 & 7 & 14 \end{array}\right]
\xrightarrow{\frac{1}{7} R_2}
\left[\begin{array}{cc|c} 1 & −3 & −3 \\ 0 & 1 & 2 \end{array}\right]
\xrightarrow{R_1 + 3R_2}
\left[\begin{array}{cc|c} 1 & 0 & 3 \\ 0 & 1 & 2 \end{array}\right]

So, x = 3, y = 2.
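Hand row reduction can be checked against a solver (a sketch; np.linalg.solve is NumPy's built-in linear solver):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, -3.0]])
    b = np.array([8.0, -3.0])

    print(np.linalg.solve(A, b))   # [3. 2.], matching x = 3, y = 2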
Example 19. For what values a, b will \begin{bmatrix} a \\ b \end{bmatrix} be a linear combination of \begin{bmatrix} 2 \\ 1 \end{bmatrix} and \begin{bmatrix} 1 \\ −3 \end{bmatrix}?

\begin{bmatrix} a \\ b \end{bmatrix} = x \begin{bmatrix} 2 \\ 1 \end{bmatrix} + y \begin{bmatrix} 1 \\ −3 \end{bmatrix}

2x + y = a
x − 3y = b

\left[\begin{array}{cc|c} 2 & 1 & a \\ 1 & −3 & b \end{array}\right]
\xrightarrow{R_1 ↔ R_2}
\left[\begin{array}{cc|c} 1 & −3 & b \\ 2 & 1 & a \end{array}\right]
\xrightarrow{R_2 − 2R_1}
\left[\begin{array}{cc|c} 1 & −3 & b \\ 0 & 7 & a − 2b \end{array}\right]
\xrightarrow{\frac{1}{7} R_2}
\left[\begin{array}{cc|c} 1 & −3 & b \\ 0 & 1 & \frac{a − 2b}{7} \end{array}\right]
\xrightarrow{R_1 + 3R_2}
\left[\begin{array}{cc|c} 1 & 0 & b + 3\left(\frac{a − 2b}{7}\right) \\ 0 & 1 & \frac{a − 2b}{7} \end{array}\right]
=
\left[\begin{array}{cc|c} 1 & 0 & \frac{3a + b}{7} \\ 0 & 1 & \frac{a − 2b}{7} \end{array}\right]
So we have

x = \frac{3a + b}{7}, \quad y = \frac{a − 2b}{7}

So any choice of a, b will work. We can say that \begin{bmatrix} 2 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ −3 \end{bmatrix} "span" the plane R^2.
2.3 Spanning Sets and Linear Independence

Definition 15 (Spanning Sets). If S = {v̄_1, . . . , v̄_k} is a set of vectors in R^n, then the set of all linear combinations of v̄_1, . . . , v̄_k is called the span of v̄_1, . . . , v̄_k, written span(S).
Example 20. Describe the span of

S = \left\{ \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix}, \begin{bmatrix} −1 \\ 1 \\ −3 \end{bmatrix} \right\}

Another way to think about it: which vectors \begin{bmatrix} a \\ b \\ c \end{bmatrix} are in the span of S?

\begin{bmatrix} a \\ b \\ c \end{bmatrix} = x \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix} + y \begin{bmatrix} −1 \\ 1 \\ −3 \end{bmatrix}

So,

x − y = a
y = b
3x − 3y = c
\left[\begin{array}{cc|c} 1 & −1 & a \\ 0 & 1 & b \\ 3 & −3 & c \end{array}\right]
\xrightarrow{R_3 − 3R_1}
\left[\begin{array}{cc|c} 1 & −1 & a \\ 0 & 1 & b \\ 0 & 0 & c − 3a \end{array}\right]
\xrightarrow{R_1 + R_2}
\left[\begin{array}{cc|c} 1 & 0 & a + b \\ 0 & 1 & b \\ 0 & 0 & c − 3a \end{array}\right]
So this system only has solutions if c − 3a = 0, i.e., c = 3a. So vectors of the form \begin{bmatrix} a \\ b \\ 3a \end{bmatrix} make up the span of S: the system

x − y = a
y = b
3x − 3y = 3a

has solutions for arbitrary a, b (a and b are free), but the third component must be 3 times a.
2.4 Linear Independence

A set of vectors {v̄_1, . . . , v̄_k} is linearly independent if the only solution of

c_1 v̄_1 + · · · + c_k v̄_k = 0̄

is the trivial one, c_1 = · · · = c_k = 0; otherwise the set is linearly dependent.
Example 21. Decide if the set \left\{ \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ −1 \end{bmatrix}, \begin{bmatrix} 1 \\ 4 \\ 2 \end{bmatrix} \right\} is linearly independent.

So this is asking whether

c_1 \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} 1 \\ 1 \\ −1 \end{bmatrix} + c_3 \begin{bmatrix} 1 \\ 4 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}

has only the trivial solution.
\left[\begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 2 & 1 & 4 & 0 \\ 0 & −1 & 2 & 0 \end{array}\right]
\xrightarrow{R_2 − 2R_1}
\left[\begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 0 & −1 & 2 & 0 \\ 0 & −1 & 2 & 0 \end{array}\right]
\xrightarrow{R_1 + R_2,\; R_3 − R_2}
\left[\begin{array}{ccc|c} 1 & 0 & 3 & 0 \\ 0 & −1 & 2 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]
\xrightarrow{(−1)R_2}
\left[\begin{array}{ccc|c} 1 & 0 & 3 & 0 \\ 0 & 1 & −2 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]
So we have

c_1 + 3c_3 = 0
c_2 − 2c_3 = 0

i.e., c_1 = −3c_3 and c_2 = 2c_3. So c_3 is arbitrary; it doesn't have to be 0. The system has non-trivial solutions, therefore the set is linearly dependent.
−3c_3 \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix} + 2c_3 \begin{bmatrix} 1 \\ 1 \\ −1 \end{bmatrix} + c_3 \begin{bmatrix} 1 \\ 4 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
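A numerical way to detect linear dependence (a sketch assuming NumPy): stack the vectors as columns and compare the rank of the matrix to the number of vectors.

    import numpy as np

    # columns are the three vectors from Example 21
    A = np.array([[1.0, 1.0, 1.0],
                  [2.0, 1.0, 4.0],
                  [0.0, -1.0, 2.0]])
    print(np.linalg.matrix_rank(A))   # 2 < 3, so the set is linearly dependent

    # verify the non-trivial combination found above, taking c3 = 1
    c = np.array([-3.0, 2.0, 1.0])
    print(A @ c)                      # [0. 0. 0.]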
Note 10. A system whose constant terms are all 0, so that the augmented matrix has all 0s in the rightmost column, is called a homogeneous system of equations.
Example 22. Consider

S = \left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 2 \end{bmatrix} \right\}
We can tell without doing anything else that these vectors have to be dependent: they are in R^2, but there are 3 vectors total. We are guaranteed that there is a non-trivial linear combination that makes the 0̄:
0 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + 2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} − \begin{bmatrix} 0 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
However, it does not guarantee that any particular one of the vectors can be written as a linear combination of the others:

\begin{bmatrix} 1 \\ 0 \end{bmatrix} = c_1 \begin{bmatrix} 0 \\ 1 \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ 2 \end{bmatrix}

has no solution.
Feb 24
Example 23. See handout 6
3 Matrices
See handout X
Mar 01
See handout 7
Mar 05
See handout 8
3.1 Subspaces of Matrices

A subspace S of R^n is a set of vectors in R^n such that:
1. The zero vector 0̄ is in S.
2. If ū and v̄ are in S, then ū + v̄ is in S (closed under addition).
3. If ū is in S and c is a scalar, then cū is in S (closed under scalar multiplication).

You can combine 2 and 3 above as: if ū_1, . . . , ū_k are in S and c_1, . . . , c_k are scalars, then c_1 ū_1 + · · · + c_k ū_k is in S. S is closed under linear combinations.
So ū + v̄ is in the span of S.
3. If ū is in the span of S, then cū = c(c_1 v̄_1 + · · · + c_k v̄_k) = (cc_1)v̄_1 + · · · + (cc_k)v̄_k, which is again a linear combination of v̄_1, . . . , v̄_k. So cū is in the span of S.
3.2 Nullspace
Mar 12
Example 25. See handout 11
3.3 Column space
See handout 12
3.4 Linear transformations

Mar 29

A transformation T : R^n → R^m is linear if:
1. T(ū + v̄) = T(ū) + T(v̄)
2. T(cv̄) = cT(v̄)
Alternatively: T(cū + dv̄) = cT(ū) + dT(v̄), for every ū, v̄ ∈ R^n and all scalars c, d.
The standard basis vectors in R^n:

ē_1 = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad . . . , \quad ē_n = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}
Apr 02
Example 27. See handout 14, example 3
Let T : R^2 → R^2 be the projection of the vector v̄ onto the line ` through the origin.
See notes for drawing
Show that T is a linear transformation.
Let d̂ = \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} be a unit direction vector for `, so ||d̂|| = \sqrt{d_1^2 + d_2^2} = 1. Note that

T(v̄) = proj_d̂ v̄ = \left( \frac{v̄ · d̂}{||d̂||^2} \right) d̂
T(ē_1) = proj_d̂ ē_1 = \left( \begin{bmatrix} 1 \\ 0 \end{bmatrix} · \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} \right) \frac{d̂}{||d̂||^2} = d_1 \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} = \begin{bmatrix} d_1^2 \\ d_1 d_2 \end{bmatrix}
T(ē_2) = proj_d̂ ē_2 = \left( \begin{bmatrix} 0 \\ 1 \end{bmatrix} · \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} \right) d̂ = d_2 \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} = \begin{bmatrix} d_1 d_2 \\ d_2^2 \end{bmatrix}
So, the projection onto a line through the origin is a linear transformation, with standard matrix

A = [T(ē_1) \; T(ē_2)] = \begin{bmatrix} d_1^2 & d_1 d_2 \\ d_1 d_2 & d_2^2 \end{bmatrix}
Example 28 (projection onto the x-axis). Let

d̂ = \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} = ē_1
We can use the standard matrix we found in the previous example, with d_1 = 1 and d_2 = 0:

A = \begin{bmatrix} d_1^2 & d_1 d_2 \\ d_1 d_2 & d_2^2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}
Example 29 (projection onto the line y = x). A direction vector for this line is \begin{bmatrix} a \\ a \end{bmatrix}; for a unit direction vector we need

||d̂|| = \sqrt{a^2 + a^2} = \sqrt{2a^2} = |a|\sqrt{2} = 1

so |a| = \frac{1}{\sqrt{2}}. Let

d̂ = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}
So

A = \begin{bmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}

T(v̄) = Av̄ = \frac{1}{2} \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{2} \begin{bmatrix} x + y \\ x + y \end{bmatrix} = \begin{bmatrix} \frac{x+y}{2} \\ \frac{x+y}{2} \end{bmatrix}

Both components equal the average of the components in the original vector. This is important in statistics.
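The averaging behavior, and the fact that projecting twice changes nothing, are easy to confirm numerically (a sketch assuming NumPy):

    import numpy as np

    A = 0.5 * np.array([[1.0, 1.0],
                        [1.0, 1.0]])
    v = np.array([3.0, 7.0])

    print(A @ v)                   # [5. 5.]: both components are the mean of 3 and 7
    print(np.allclose(A @ A, A))   # True: projection matrices satisfy A^2 = A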
Example 30. Derive the formulas for cos(α + β) and sin(α + β).
Recall the rotation matrix from Example 1 on Handout 14:

A = \begin{bmatrix} \cos θ & −\sin θ \\ \sin θ & \cos θ \end{bmatrix}

Rotating the unit vector [\cos α, \sin α] by β:

T(v̄) = \begin{bmatrix} \cos β & −\sin β \\ \sin β & \cos β \end{bmatrix} \begin{bmatrix} \cos α \\ \sin α \end{bmatrix} = \begin{bmatrix} \cos β \cos α − \sin β \sin α \\ \sin β \cos α + \cos β \sin α \end{bmatrix} = \begin{bmatrix} \cos(α + β) \\ \sin(α + β) \end{bmatrix}

So, since the components of equal vectors are equal to each other:

\cos(α + β) = \cos α \cos β − \sin α \sin β
\sin(α + β) = \sin α \cos β + \cos α \sin β
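The derivation can be spot-checked for particular angles (a sketch assuming NumPy; the angles are arbitrary):

    import numpy as np

    def rot(theta: float) -> np.ndarray:
        """Standard matrix of rotation by theta radians."""
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    a, b = 0.7, 1.1
    lhs = rot(b) @ np.array([np.cos(a), np.sin(a)])   # rotate [cos a, sin a] by b
    rhs = np.array([np.cos(a + b), np.sin(a + b)])
    print(np.allclose(lhs, rhs))                      # True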
Apr 05
Example 31. Additional questions for example 3 on Handout 14 :
Recall: T : R^2 → R^2 and T(v̄) = Av̄.
1. What is the range of the projection?
The line ` is the range. Since every vector gets projected onto `, that is the range.
2. Is the line ` a subspace of R^2?
Yes! Recall that subspaces need to include the zero vector 0̄ and be closed under addition and scalar multiplication:
• The line ` contains the point (0, 0).
• If ū ∥ ` and v̄ ∥ `, then ū + v̄ ∥ `.
• If ū ∥ `, then cū ∥ `.
3. Find a basis for the subspace `.

\left\{ \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} \right\}
To verify closure, write ū = a d̂ and v̄ = b d̂. Then

ū + v̄ = a d̂ + b d̂ = (a + b) d̂

so (a + b) d̂ ∥ `. And

cū = c(a d̂) = (ca) d̂

so (ca) d̂ ∥ `.
4. Describe the column space of A. Recall:

A = \begin{bmatrix} d_1^2 & d_1 d_2 \\ d_1 d_2 & d_2^2 \end{bmatrix}

The column space is the set of all vectors b̄ with Av̄ = b̄ for some v̄; each column of A is a multiple of \begin{bmatrix} d_1 \\ d_2 \end{bmatrix}.
Notice that this is a line through the origin with a slope of m = d_2/d_1, which is the line `. So the column space of the matrix A is the same as the range of T. The basis for the column space is the same as the basis of the range of T:
Col(A) = span\left\{ \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} \right\}
5. Describe the null space of A: solve Av̄ = 0̄.

[A | 0̄] = \left[\begin{array}{cc|c} d_1^2 & d_1 d_2 & 0 \\ d_1 d_2 & d_2^2 & 0 \end{array}\right] \longrightarrow \left[\begin{array}{cc|c} d_1^2 & d_1 d_2 & 0 \\ 0 & 0 & 0 \end{array}\right]

d_1 x + d_2 y = 0

y = −\frac{d_1}{d_2} x

This is the line through the origin perpendicular to `.
3.5 Composition of linear transformations

Example 32. Let T : R^m → R^n with standard matrix A, and S : R^n → R^p with standard matrix B. Then the composition S ∘ T : R^m → R^p is a linear transformation, and its standard matrix is BA.
Example 33. Show that reflection in the plane about the x-axis is a linear transformation.
See notes for drawing
T : R^2 → R^2.
T(ē_1) = T\left( \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right) = \begin{bmatrix} 1 \\ 0 \end{bmatrix} = ē_1

T(ē_2) = T\left( \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right) = \begin{bmatrix} 0 \\ −1 \end{bmatrix} = −ē_2

So the standard matrix is

A = [T(ē_1) \; T(ē_2)] = \begin{bmatrix} 1 & 0 \\ 0 & −1 \end{bmatrix}

Since we found a matrix that implements the transformation, reflection about the x-axis must be linear.
\begin{bmatrix} \cos 60° & −\sin 60° \\ \sin 60° & \cos 60° \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & −1 \end{bmatrix} = \begin{bmatrix} 1/2 & \sqrt{3}/2 \\ \sqrt{3}/2 & −1/2 \end{bmatrix}

This is the standard matrix for reflection about the x-axis followed by rotation of 60 degrees.
Example 35. Find the standard matrix that rotates by 60 degrees, then reflects about the x-axis. This is the reverse order of the previous problem:

\begin{bmatrix} 1 & 0 \\ 0 & −1 \end{bmatrix} \begin{bmatrix} \cos 60° & −\sin 60° \\ \sin 60° & \cos 60° \end{bmatrix} = \begin{bmatrix} 1/2 & −\sqrt{3}/2 \\ −\sqrt{3}/2 & −1/2 \end{bmatrix}
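That the two orders give different matrices is quick to confirm numerically (a sketch assuming NumPy; angles are in radians):

    import numpy as np

    theta = np.deg2rad(60.0)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # rotation by 60 degrees
    F = np.array([[1.0, 0.0],
                  [0.0, -1.0]])                       # reflection about the x-axis

    print(R @ F)   # reflect, then rotate (the previous example)
    print(F @ R)   # rotate, then reflect (Example 35)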
3.6 Inverses of linear transformations

An inverse transformation T^{−1} undoes T:

T^{−1}(T(v̄)) = v̄
T(T^{−1}(v̄)) = v̄

Let A be the standard matrix of T. Then T has an inverse T^{−1} if and only if A has an inverse, and the standard matrix of the inverse T^{−1} is A^{−1}:

T^{−1}(T(v̄)) = T^{−1}(Av̄) = A^{−1}(Av̄) = (A^{−1}A)v̄ = I v̄ = v̄
Example 36. Let R_{60} : R^2 → R^2 be a rotation by 60 degrees. What is the inverse R_{60}^{−1}?
See notes for drawing
We are looking for something that rotates by a negative 60 degrees.

R_{60}^{−1} = R_{−60} = \begin{bmatrix} \cos(−60°) & −\sin(−60°) \\ \sin(−60°) & \cos(−60°) \end{bmatrix} = \begin{bmatrix} \cos 60° & \sin 60° \\ −\sin 60° & \cos 60° \end{bmatrix} = \begin{bmatrix} 1/2 & \sqrt{3}/2 \\ −\sqrt{3}/2 & 1/2 \end{bmatrix}
Let's check:

R_{60}^{−1}(R_{60}(v̄)): \quad \begin{bmatrix} 1/2 & \sqrt{3}/2 \\ −\sqrt{3}/2 & 1/2 \end{bmatrix} \begin{bmatrix} 1/2 & −\sqrt{3}/2 \\ \sqrt{3}/2 & 1/2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
Example 37. Find the inverse of the reflection F_x. We are looking for F_x^{−1}. Since reflecting a second time returns the vector to its original position, F_x is its own inverse. The standard matrix is

\begin{bmatrix} 1 & 0 \\ 0 & −1 \end{bmatrix}

You can check this by multiplying it by itself: the product is the identity matrix I.
Example 38. Does projection onto the line ` (through the origin) have an inverse (in the plane)?
P_` : R^2 → R^2.
See notes for drawing.
The standard matrix of P_` is

P_` = \begin{bmatrix} d_1^2 & d_1 d_2 \\ d_1 d_2 & d_2^2 \end{bmatrix}

where d̂ = \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} is a unit direction vector for `.
Since there are infinitely many vectors that project to the same vector on `, there is no inverse. Equivalently, the standard matrix P_` is not invertible, so P_`^{−1} does not exist.
Apr 09
Example 39. See problem 26 in 3.6 of Poole
If the angle between ` and the positive x-axis is θ, show that the matrix of F_` is

\begin{bmatrix} \cos 2θ & \sin 2θ \\ \sin 2θ & −\cos 2θ \end{bmatrix}
The idea: we can rotate the entire plane so that ` lies along the x-axis, so the map becomes a reflection about the x-axis, then rotate back.
Aside 1.
\cos 2θ = \cos(θ + θ) = \cos^2 θ − \sin^2 θ
4 Eigenvalues and Eigenvectors

A non-zero vector v̄ is an eigenvector of a square matrix A, with eigenvalue λ, if

Av̄ = λv̄
Note 11. If λ is real, then the new vector will be parallel to the original vector. It is possible that
λ is complex.
Example 40. Show that \begin{bmatrix} 2 \\ −3 \end{bmatrix} is an eigenvector of the matrix \begin{bmatrix} 1 & −2 \\ −3 & 2 \end{bmatrix} and find its eigenvalue.

Av̄ = \begin{bmatrix} 1 & −2 \\ −3 & 2 \end{bmatrix} \begin{bmatrix} 2 \\ −3 \end{bmatrix} = \begin{bmatrix} 8 \\ −12 \end{bmatrix} = 4 \begin{bmatrix} 2 \\ −3 \end{bmatrix}

So \begin{bmatrix} 2 \\ −3 \end{bmatrix} is an e-vector with an e-value of λ = 4.
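np.linalg.eig recovers the same eigenpair (a sketch; NumPy returns eigenvectors normalized to length 1, so they may differ from ours by a scalar multiple):

    import numpy as np

    A = np.array([[1.0, -2.0],
                  [-3.0, 2.0]])
    vals, vecs = np.linalg.eig(A)
    print(vals)    # contains 4 (the other eigenvalue is -1)

    # direct check of Av = 4v for v = [2, -3]
    v = np.array([2.0, -3.0])
    print(A @ v, 4 * v)   # [8. -12.] both times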
Example 41. Show that λ_1 = −2 and λ_2 = 5 are e-values of the matrix \begin{bmatrix} 2 & 3 \\ 4 & 1 \end{bmatrix} and find associated e-vectors.

We'll start with λ_1 = −2:

Av̄_1 = −2v̄_1
Av̄_1 + 2v̄_1 = 0̄
Av̄_1 + 2I v̄_1 = 0̄
(A + 2I)v̄_1 = 0̄

Aside 2.

2I = 2 \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}

A + 2I = A − λ_1 I = \begin{bmatrix} 2 & 3 \\ 4 & 1 \end{bmatrix} + \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 4 & 3 \\ 4 & 3 \end{bmatrix}
Solving (A + 2I)v̄_1 = 0̄, i.e., 4x + 3y = 0: let v̄_1 = \begin{bmatrix} 3 \\ −4 \end{bmatrix}. Then v̄_1 is an e-vector for λ_1 = −2. We can check this by

Av̄_1 = \begin{bmatrix} 2 & 3 \\ 4 & 1 \end{bmatrix} \begin{bmatrix} 3 \\ −4 \end{bmatrix} = \begin{bmatrix} −6 \\ 8 \end{bmatrix} = −2 \begin{bmatrix} 3 \\ −4 \end{bmatrix} = −2v̄_1
Apr 14
Example 42. See handout 16
Apr 16
Example 43. See handout 16 example 2
5 Determinants
5.1 Cofactor expansion
See handout 18
5.2 Invertibility
Definition 22. If a matrix A is full rank and square (n × n), then it will row reduce to the identity matrix I_{n×n}. Therefore,
• The matrix is invertible.
• [A | I] → [I | A^{−1}]

Apr 21
Matrices of less than full rank (n × n) row reduce to a matrix with a row of zeros at the bottom. Therefore,
• It will have a zero determinant.
• It will not be invertible.
5.3 Cramer's rule

For a system Ax̄ = b̄ with det A ≠ 0, Cramer's rule gives

x_i = \frac{\det(A_i(b̄))}{\det A}

for i = 1, . . . , n. Note that A_i(b̄) is created by replacing the i-th column of A with the vector b̄.
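A direct implementation of Cramer's rule (a sketch assuming NumPy; np.linalg.det supplies the determinants):

    import numpy as np

    def cramer(A: np.ndarray, b: np.ndarray) -> np.ndarray:
        """Solve Ax = b by Cramer's rule; requires det(A) != 0."""
        det_A = np.linalg.det(A)
        x = np.empty(len(b))
        for i in range(len(b)):
            Ai = A.copy()
            Ai[:, i] = b                      # replace the i-th column of A with b
            x[i] = np.linalg.det(Ai) / det_A
        return x

    A = np.array([[2.0, 1.0], [1.0, -3.0]])
    b = np.array([8.0, -3.0])
    print(cramer(A, b))             # [3. 2.]
    print(np.linalg.solve(A, b))    # same answer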
5.4 Determinants and Eigenvalues

An eigenvector v̄ for eigenvalue λ satisfies

[A − λI]v̄ = 0̄

and this has non-trivial solutions exactly when det(A − λI) = 0.
Apr 26
Example 48. See handout 19 example 3
5.5 Similarity and Diagonalization

Matrices A and B are similar, written A ∼ B, if there is an invertible matrix P with

P^{−1}AP = B
Theorem 7. The n × n matrix A is diagonalizable if and only if A has n linearly independent eigen-
vectors. (Deficient matrices need not apply!)
Apr 30
Theorem 8. Let P be the matrix whose columns are independent eigenvectors of matrix A. Then the diagonal entries of the diagonal matrix D = P^{−1}AP are the eigenvalues of A.

Proof:
Let P be an invertible matrix of eigenvectors of A_{n×n}, and let P̄_j be the j-th column of P:

P = [P̄_1 · · · P̄_n]

Then

P^{−1}P = P^{−1}[P̄_1 · · · P̄_n] = [P^{−1}P̄_1 · · · P^{−1}P̄_n] = [ē_1 · · · ē_n] = I_{n×n}

Now,

P^{−1}AP = P^{−1}[AP̄_1 · · · AP̄_n] = P^{−1}[λ_1 P̄_1 · · · λ_n P̄_n] = [λ_1 P^{−1}P̄_1 · · · λ_n P^{−1}P̄_n] = [λ_1 ē_1 · · · λ_n ē_n]

which is the diagonal matrix with λ_1, . . . , λ_n along the diagonal. So A ∼ D, where the diagonal entries of D are the corresponding eigenvalues.
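Numerically, using the matrix from Example 41 (a sketch assuming NumPy; the eigenvector [3, −4] for λ = −2 was found above, and [1, 1] for λ = 5 comes from the same procedure):

    import numpy as np

    A = np.array([[2.0, 3.0],
                  [4.0, 1.0]])
    P = np.array([[3.0, 1.0],    # columns: eigenvectors for -2 and 5
                  [-4.0, 1.0]])

    D = np.linalg.inv(P) @ A @ P
    print(np.round(D, 10))       # diag(-2, 5): the eigenvalues on the diagonal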
6 Distance and Approximation

May 03

6.1 Least squares approximation

The least squares approximate solution x̃ of a system Ax̄ = b̄ satisfies the normal equations

A^T A x̃ = A^T b̄
Note 12. Overdetermined systems are those with more equations than variables. Such a system typically has no solution, because there are too many constraints on the variables.

We only consider the case where A is full rank. In general rank(A) ≤ min{m, n}, so for skinny matrices (m > n) we have rank(A) ≤ n. A is full rank if rank(A) = n, which holds if and only if the columns of A form a linearly independent set.
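A sketch of the normal equations in NumPy (the data here is made up for illustration; a 4 × 2 system is overdetermined):

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0]])
    b = np.array([1.0, 2.0, 2.0, 5.0])

    x_tilde = np.linalg.solve(A.T @ A, A.T @ b)       # normal equations
    x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # NumPy's least squares
    print(x_tilde)
    print(x_lstsq)                                    # same solution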