
MATH 304

Linear Algebra
Lecture 22:
Eigenvalues and eigenvectors (continued).
Characteristic polynomial.
Eigenvalues and eigenvectors of a matrix

Definition. Let A be an n×n matrix. A number λ ∈ R is called an eigenvalue of the matrix A if Av = λv for a nonzero column vector v ∈ Rⁿ.
The vector v is called an eigenvector of A belonging to (or associated with) the eigenvalue λ.

Remarks. • Alternative notation:
eigenvalue = characteristic value,
eigenvector = characteristic vector.
• The zero vector is never considered an eigenvector.
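As a quick illustration of the definition (a minimal NumPy sketch; the 2×2 matrix below is an assumed example, not one from the lecture), the defining equation Av = λv can be checked directly:

```python
import numpy as np

# Assumed example: A swaps the two coordinates, so v = (1, 1) is left
# unchanged, i.e. v is an eigenvector with eigenvalue 1.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
v = np.array([1.0, 1.0])
lam = 1.0

assert np.allclose(A @ v, lam * v)   # Av = lambda*v, with v nonzero
```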
Diagonal matrices

Let A be an n×n matrix. Then A is diagonal if and only if vectors e1, e2, . . . , en of the standard basis for Rⁿ are eigenvectors of A.
If this is the case, then the diagonal entries of the matrix A are the corresponding eigenvalues:
A = \begin{pmatrix} \lambda_1 & & & O \\ & \lambda_2 & & \\ & & \ddots & \\ O & & & \lambda_n \end{pmatrix} \iff A e_i = \lambda_i e_i
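A hedged numerical illustration (a minimal sketch, assuming NumPy is available; the specific diagonal entries are just an assumed example): applied to a diagonal matrix, np.linalg.eig returns the diagonal entries as eigenvalues and the standard basis vectors as eigenvectors.

```python
import numpy as np

# Assumed example: a 3x3 diagonal matrix with diagonal entries 4, -1, 7.
A = np.diag([4.0, -1.0, 7.0])

eigenvalues, eigenvectors = np.linalg.eig(A)

# The eigenvalues are exactly the diagonal entries (possibly reordered),
# and the eigenvector matrix consists of standard basis vectors.
assert np.allclose(np.sort(eigenvalues), np.sort(np.diag(A)))
print(eigenvectors)                    # columns are e1, e2, e3 (up to order)

# Each returned eigenpair satisfies A v = lambda v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```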
Eigenspaces

Let A be an n×n matrix. Let v be an eigenvector of A belonging to an eigenvalue λ.
Then Av = λv =⇒ Av = (λI)v =⇒ (A − λI)v = 0.
Hence v ∈ N(A − λI), the nullspace of the matrix A − λI.
Conversely, if x ∈ N(A − λI) then Ax = λx.
Thus the eigenvectors of A belonging to the eigenvalue λ are nonzero vectors from N(A − λI).
Definition. If N(A − λI) ≠ {0} then it is called the eigenspace of the matrix A corresponding to the eigenvalue λ.
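A small numerical sketch of this characterization (assuming SciPy is available; the 2×2 matrix is an assumed example): scipy.linalg.null_space computes an orthonormal basis of N(A − λI), which is exactly the eigenspace when λ is an eigenvalue.

```python
import numpy as np
from scipy.linalg import null_space

# Assumed example matrix; lam = 2 is one of its eigenvalues.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam = 2.0

# Columns of N span N(A - lam*I), i.e. the eigenspace for lam.
N = null_space(A - lam * np.eye(2))
print(N)                          # one column: this eigenspace is a line

# Any nonzero vector in that nullspace is an eigenvector for lam.
v = N[:, 0]
assert np.allclose(A @ v, lam * v)
```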
How to find eigenvalues and eigenvectors?
Theorem Given a square matrix A and a scalar λ,
the following statements are equivalent:
• λ is an eigenvalue of A,
• N(A − λI) ≠ {0},
• the matrix A − λI is singular,
• det(A − λI ) = 0.

Definition. det(A − λI) = 0 is called the characteristic equation of the matrix A.
Eigenvalues λ of A are roots of the characteristic equation. Associated eigenvectors of A are nonzero solutions of the equation (A − λI)x = 0.
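The two-step recipe (solve the characteristic equation, then solve (A − λI)x = 0) can also be carried out symbolically. This is a minimal sketch using SymPy; the 2×2 matrix is an assumed example, not one from the lecture.

```python
from sympy import Matrix, symbols, solve, eye

lam = symbols('lam')

# Assumed example matrix.
A = Matrix([[5, 4],
            [1, 2]])

# Step 1: solve the characteristic equation det(A - lam*I) = 0.
char_eq = (A - lam * eye(2)).det()
eigenvalues = solve(char_eq, lam)
print(char_eq.expand())    # lam**2 - 7*lam + 6
print(eigenvalues)         # [1, 6]

# Step 2: for each eigenvalue, the eigenvectors are the nonzero solutions
# of (A - lam*I) x = 0, i.e. the nonzero vectors in the nullspace.
for ev in eigenvalues:
    print(ev, (A - ev * eye(2)).nullspace())
```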
 
Example. A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.

\det(A - \lambda I) = \begin{vmatrix} a - \lambda & b \\ c & d - \lambda \end{vmatrix}
= (a - \lambda)(d - \lambda) - bc
= \lambda^2 - (a + d)\lambda + (ad - bc).
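A symbolic check of this 2×2 formula (a sketch assuming SymPy is available):

```python
from sympy import symbols, Matrix, eye, expand

a, b, c, d, lam = symbols('a b c d lam')

A = Matrix([[a, b],
            [c, d]])

# Characteristic polynomial det(A - lam*I) of a general 2x2 matrix.
p = (A - lam * eye(2)).det()

# It agrees with  lam**2 - (a + d)*lam + (a*d - b*c)  derived above.
assert expand(p - (lam**2 - (a + d)*lam + (a*d - b*c))) == 0
print(expand(p))
```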
 
Example. A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.

\det(A - \lambda I) = \begin{vmatrix} a_{11} - \lambda & a_{12} & a_{13} \\ a_{21} & a_{22} - \lambda & a_{23} \\ a_{31} & a_{32} & a_{33} - \lambda \end{vmatrix} = -\lambda^3 + c_1\lambda^2 - c_2\lambda + c_3,

where c_1 = a_{11} + a_{22} + a_{33} (the trace of A),

c_2 = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} + \begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} + \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix},

c_3 = \det A.
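A numerical cross-check of these coefficient formulas (a minimal NumPy sketch; the 3×3 matrix is an assumed example). Note that np.poly(A) returns the coefficients of det(λI − A) = λ³ − c₁λ² + c₂λ − c₃, which differs from det(A − λI) only by the overall sign (−1)³.

```python
import numpy as np

# Assumed example matrix for testing the coefficient formulas.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

c1 = np.trace(A)
c3 = np.linalg.det(A)
# c2 = sum of the three principal 2x2 minors.
c2 = sum(np.linalg.det(A[np.ix_(idx, idx)])
         for idx in ([0, 1], [0, 2], [1, 2]))

# np.poly(A): coefficients of det(lam*I - A) = lam^3 - c1*lam^2 + c2*lam - c3.
assert np.allclose(np.poly(A), [1.0, -c1, c2, -c3])
print(np.poly(A))
```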
Theorem. Let A = (a_{ij}) be an n×n matrix. Then det(A − λI) is a polynomial in λ of degree n:

\det(A - \lambda I) = (-1)^n \lambda^n + c_1 \lambda^{n-1} + \dots + c_{n-1}\lambda + c_n.

Furthermore, (-1)^{n-1} c_1 = a_{11} + a_{22} + \dots + a_{nn} and c_n = \det A.

Definition. The polynomial p(λ) = det(A − λI) is called the characteristic polynomial of the matrix A.

Corollary. Any n×n matrix has at most n eigenvalues.
 
Example. A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}.

Characteristic equation: \begin{vmatrix} 2 - \lambda & 1 \\ 1 & 2 - \lambda \end{vmatrix} = 0.

(2 - \lambda)^2 - 1 = 0 \implies \lambda_1 = 1, \ \lambda_2 = 3.
    
(A - I)x = 0 \iff \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \iff \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \iff x + y = 0.
The general solution is (−t, t) = t(−1, 1), t ∈ R.
Thus v1 = (−1, 1) is an eigenvector associated
with the eigenvalue 1. The corresponding
eigenspace is the line spanned by v1 .
    
(A - 3I)x = 0 \iff \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \iff \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \iff x - y = 0.

The general solution is (t, t) = t(1, 1), t ∈ R.
Thus v2 = (1, 1) is an eigenvector associated with the eigenvalue 3. The corresponding eigenspace is the line spanned by v2.
 
Summary. A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}.
• The matrix A has two eigenvalues: 1 and 3.
• The eigenspace of A associated with the
eigenvalue 1 is the line t(−1, 1).
• The eigenspace of A associated with the
eigenvalue 3 is the line t(1, 1).
• Eigenvectors v1 = (−1, 1) and v2 = (1, 1) of the matrix A form an orthogonal basis for R².
• Geometrically, the mapping x ↦ Ax is a stretch by a factor of 3 away from the line x + y = 0 in the orthogonal direction.
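A quick NumPy verification of this summary (a sketch, assuming NumPy; np.linalg.eig normalizes its eigenvectors, so they agree with v1 and v2 only up to scaling):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(np.sort(eigenvalues))          # [1. 3.]

v1 = np.array([-1.0, 1.0])           # eigenvector for eigenvalue 1
v2 = np.array([1.0, 1.0])            # eigenvector for eigenvalue 3
assert np.allclose(A @ v1, 1.0 * v1)
assert np.allclose(A @ v2, 3.0 * v2)
assert np.isclose(v1 @ v2, 0.0)      # v1 and v2 are orthogonal
```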
 
Example. A = \begin{pmatrix} 1 & 1 & -1 \\ 1 & 1 & 1 \\ 0 & 0 & 2 \end{pmatrix}.

Characteristic equation: \begin{vmatrix} 1 - \lambda & 1 & -1 \\ 1 & 1 - \lambda & 1 \\ 0 & 0 & 2 - \lambda \end{vmatrix} = 0.
Expand the determinant by the 3rd row:

(2 - \lambda) \begin{vmatrix} 1 - \lambda & 1 \\ 1 & 1 - \lambda \end{vmatrix} = 0.

\bigl((1 - \lambda)^2 - 1\bigr)(2 - \lambda) = 0 \iff -\lambda(2 - \lambda)^2 = 0 \implies \lambda_1 = 0, \ \lambda_2 = 2.
    
Ax = 0 \iff \begin{pmatrix} 1 & 1 & -1 \\ 1 & 1 & 1 \\ 0 & 0 & 2 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}

Convert the matrix to reduced row echelon form:

\begin{pmatrix} 1 & 1 & -1 \\ 1 & 1 & 1 \\ 0 & 0 & 2 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & -1 \\ 0 & 0 & 2 \\ 0 & 0 & 2 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}

Ax = 0 \iff \begin{cases} x + y = 0, \\ z = 0. \end{cases}
The general solution is (−t, t, 0) = t(−1, 1, 0),
t ∈ R. Thus v1 = (−1, 1, 0) is an eigenvector
associated with the eigenvalue 0. The
corresponding eigenspace is the line spanned by v1 .
    
(A - 2I)x = 0 \iff \begin{pmatrix} -1 & 1 & -1 \\ 1 & -1 & 1 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \iff \begin{pmatrix} 1 & -1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \iff x - y + z = 0.
The general solution is x = t − s, y = t, z = s,
where t, s ∈ R. Equivalently,
x = (t − s, t, s) = t(1, 1, 0) + s(−1, 0, 1).
Thus v2 = (1, 1, 0) and v3 = (−1, 0, 1) are
eigenvectors associated with the eigenvalue 2.
The corresponding eigenspace is the plane spanned
by v2 and v3 .
 
Summary. A = \begin{pmatrix} 1 & 1 & -1 \\ 1 & 1 & 1 \\ 0 & 0 & 2 \end{pmatrix}.
• The matrix A has two eigenvalues: 0 and 2.
• The eigenvalue 0 is simple: the corresponding
eigenspace is a line.
• The eigenvalue 2 is of multiplicity 2: the
corresponding eigenspace is a plane.
• Eigenvectors v1 = (−1, 1, 0), v2 = (1, 1, 0), and v3 = (−1, 0, 1) of the matrix A form a basis for R³.
• Geometrically, the map x ↦ Ax is the projection onto the plane Span(v2, v3) along lines parallel to v1, followed by scaling by a factor of 2.
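The action described in this summary can be checked numerically (a minimal NumPy sketch):

```python
import numpy as np

A = np.array([[1.0, 1.0, -1.0],
              [1.0, 1.0,  1.0],
              [0.0, 0.0,  2.0]])

v1 = np.array([-1.0, 1.0, 0.0])   # eigenvalue 0
v2 = np.array([ 1.0, 1.0, 0.0])   # eigenvalue 2
v3 = np.array([-1.0, 0.0, 1.0])   # eigenvalue 2

assert np.allclose(A @ v1, 0.0 * v1)
assert np.allclose(A @ v2, 2.0 * v2)
assert np.allclose(A @ v3, 2.0 * v3)

# v1, v2, v3 form a basis for R^3: the matrix with these columns has rank 3.
P = np.column_stack([v1, v2, v3])
assert np.linalg.matrix_rank(P) == 3
print(np.linalg.eigvals(A))       # 0 and 2 (the latter with multiplicity 2)
```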
Eigenvalues and eigenvectors of an operator
Definition. Let V be a vector space and L : V → V
be a linear operator. A number λ is called an
eigenvalue of the operator L if L(v) = λv for a
nonzero vector v ∈ V . The vector v is called an
eigenvector of L associated with the eigenvalue λ.
(If V is a function space, then eigenvectors are also called eigenfunctions.)
If V = Rⁿ, then the linear operator L is given by L(x) = Ax, where A is an n×n matrix. In this case, the eigenvalues and eigenvectors of the operator L are precisely the eigenvalues and eigenvectors of the matrix A.
Eigenspaces
Let L : V → V be a linear operator.
For any λ ∈ R, let Vλ denote the set of all
solutions of the equation L(x) = λx.
Then Vλ is a subspace of V since Vλ is the kernel of a linear operator given by x ↦ L(x) − λx.
Vλ minus the zero vector is the set of all
eigenvectors of L associated with the eigenvalue λ.
In particular, λ ∈ R is an eigenvalue of L if and only if Vλ ≠ {0}.
If Vλ ≠ {0} then it is called the eigenspace of L corresponding to the eigenvalue λ.
Example. V = C∞(R), D : V → V, Df = f′.
A function f ∈ C∞(R) is an eigenfunction of the operator D belonging to an eigenvalue λ if f′(x) = λf(x) for all x ∈ R.
Solving this differential equation shows that f(x) = ce^{λx}, where c is a nonzero constant.
Thus each λ ∈ R is an eigenvalue of D. The corresponding eigenspace is spanned by e^{λx}.
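A symbolic confirmation that these really are eigenfunctions (a sketch assuming SymPy):

```python
from sympy import symbols, exp, diff, simplify

x, lam, c = symbols('x lam c')

# f(x) = c*exp(lam*x) satisfies Df = f' = lam*f, so it is an eigenfunction
# of the differentiation operator D with eigenvalue lam.
f = c * exp(lam * x)
assert simplify(diff(f, x) - lam * f) == 0
print(diff(f, x))    # c*lam*exp(lam*x)
```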
Theorem If v1 , v2 , . . . , vk are eigenvectors of a
linear operator L associated with distinct
eigenvalues λ1 , λ2 , . . . , λk , then v1 , v2 , . . . , vk are
linearly independent.

Corollary 1. If λ1, λ2, . . . , λk are distinct real numbers, then the functions e^{λ1 x}, e^{λ2 x}, . . . , e^{λk x} are linearly independent.
Proof: Consider a linear operator D : C∞(R) → C∞(R) given by Df = f′.
Then e^{λ1 x}, . . . , e^{λk x} are eigenfunctions of D associated with distinct eigenvalues λ1, . . . , λk.
Corollary 2 Let A be an n×n matrix such that
the characteristic equation det(A − λI ) = 0 has n
distinct real roots. Then Rⁿ has a basis consisting
of eigenvectors of A.
Proof: Let λ1 , λ2 , . . . , λn be distinct real roots of the
characteristic equation. Any λi is an eigenvalue of A, hence
there is an associated eigenvector vi . By the theorem, vectors
v1 , v2 , . . . , vn are linearly independent. Therefore they form a
basis for Rⁿ.
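Corollary 2 in action (a minimal NumPy sketch; the upper-triangular example matrix is assumed, chosen so the three distinct eigenvalues can be read off the diagonal):

```python
import numpy as np

# Assumed example: upper triangular, so the eigenvalues are 1, 2, 3 (distinct).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

eigenvalues, V = np.linalg.eig(A)    # columns of V are eigenvectors
print(eigenvalues)                   # 1, 2, 3 in some order

# With n = 3 distinct real eigenvalues, the eigenvectors are linearly
# independent, so the columns of V form a basis for R^3.
assert np.linalg.matrix_rank(V) == 3
```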

Corollary 3. Let λ1, λ2, . . . , λk be distinct eigenvalues of a linear operator L. For any 1 ≤ i ≤ k, let Si be a basis for the eigenspace associated with the eigenvalue λi. Then the union S1 ∪ S2 ∪ · · · ∪ Sk is a linearly independent set.
Diagonalization

Suppose L : V → V is a linear operator on a vector space V of dimension n.
Let v1, v2, . . . , vn be a basis for V and B be the matrix of the operator L with respect to this basis.
Theorem The matrix B is diagonal if and only if vectors
v1 , v2 , . . . , vn are eigenvectors of the operator L.
If this is the case, then the diagonal entries of the matrix B
are the corresponding eigenvalues of L:
L(v_i) = \lambda_i v_i \iff B = \begin{pmatrix} \lambda_1 & & & O \\ & \lambda_2 & & \\ & & \ddots & \\ O & & & \lambda_n \end{pmatrix}
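A concrete instance of this change of basis (a minimal NumPy sketch, re-using the 2×2 example A from earlier in the lecture): with the eigenvectors as the new basis, the matrix of the operator x ↦ Ax becomes diagonal.

```python
import numpy as np

# Matrix of the operator x -> Ax in the standard basis (example from above).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of P are the eigenvector basis v1 = (-1, 1), v2 = (1, 1).
P = np.array([[-1.0, 1.0],
              [ 1.0, 1.0]])

# B = P^{-1} A P is the matrix of the same operator in the new basis.
B = np.linalg.inv(P) @ A @ P
assert np.allclose(B, np.diag([1.0, 3.0]))   # diagonal; entries = eigenvalues
print(B)
```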
Characteristic polynomial of an operator

Let L be a linear operator on a finite-dimensional vector space V. Let u1, u2, . . . , un be a basis for V. Let A be the matrix of L with respect to this basis.
Definition. The characteristic polynomial of the
matrix A is called the characteristic polynomial
of the operator L.
Then the eigenvalues of L are the roots of its characteristic polynomial.
Theorem. The characteristic polynomial of the
operator L is well defined. That is, it does not
depend on the choice of a basis.
Proof: Let B be the matrix of L with respect to a different basis v1, v2, . . . , vn. Then A = UBU⁻¹, where U is the transition matrix from the basis v1, . . . , vn to u1, . . . , un. We obtain

\det(A - \lambda I) = \det(UBU^{-1} - \lambda I) = \det\bigl(UBU^{-1} - U(\lambda I)U^{-1}\bigr) = \det\bigl(U(B - \lambda I)U^{-1}\bigr)
= \det(U)\det(B - \lambda I)\det(U^{-1}) = \det(B - \lambda I).
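A quick numerical sanity check of this basis independence (a sketch assuming NumPy; the matrices and the random change of basis are assumed examples): similar matrices have the same characteristic polynomial coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Matrix of an operator in one basis, and a random change-of-basis matrix U.
B = np.array([[2.0, 1.0],
              [1.0, 2.0]])
U = rng.standard_normal((2, 2)) + 3.0 * np.eye(2)   # invertible in practice

A = U @ B @ np.linalg.inv(U)    # matrix of the same operator in another basis

# np.poly gives the characteristic polynomial coefficients; they coincide.
assert np.allclose(np.poly(A), np.poly(B))
print(np.poly(A), np.poly(B))
```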
