Eigenvector and Eigenvalue Project
1.0 Introduction
An eigenvector of a linear transformation is a nonzero vector which, when that transformation is applied to it, may change in length, but not in direction.
For each eigenvector of a linear transformation, there is a corresponding scalar value called
an eigenvalue for that vector, which determines the amount the eigenvector is scaled under the
linear transformation. For example, an eigenvalue of +2 means that the eigenvector is doubled in
length and points in the same direction. An eigenvalue of +1 means that the eigenvector is
unchanged, while an eigenvalue of −1 means that the eigenvector is reversed in direction.
An eigenspace of a given transformation for a particular eigenvalue is the set (linear span) of the
eigenvectors associated to this eigenvalue, together with the zero vector (which has no direction).
In linear algebra, every linear transformation between finite-dimensional vector spaces can be expressed as a matrix, which is a rectangular array of numbers arranged in rows and columns. Applying the transformation to a vector then amounts to matrix multiplication, discussed below.
Many kinds of mathematical objects can be treated as vectors: functions, harmonic modes, and quantum states, for example. In these cases, the notion of direction loses its ordinary meaning and is given an abstract definition. Even so, if this abstract direction is unchanged by a given linear transformation, the prefix “eigen” is used, as in eigenfunction, eigenmode, and eigenstate.
Matrix multiplication is defined differently than the multiplication operation of ordinary arithmetic. Two matrices can only be multiplied if their dimensions are compatible, which means the number of columns in the first matrix is the same as the number of rows in the second matrix. An important property of matrix multiplication is that it is not commutative: for matrices A and B, the product A*B is in general not equal to B*A (except in special cases, for instance when A or B is the identity matrix). Similarly, for the multiplication of a matrix and a column vector, the number of columns in the matrix must be the same as the number of rows of the vector. For a matrix A with dimensions m x n and a matrix B with dimensions n x p, the product C = A*B has dimensions m x p. For the special case of square matrices A and B with dimensions n x n, the product is always n x n, while for a square n x n matrix and an n x 1 vector the product is again an n x 1 vector.
C = A·B, where A = [a_ij] is m × n, B = [b_ij] is n × p, and C = [c_ij] is m × p, with each entry given by
c_ij = a_i1·b_1j + a_i2·b_2j + … + a_in·b_nj = Σ_{k=1}^{n} a_ik·b_kj
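These dimension rules can be sketched in NumPy (the matrices here are made-up examples, not from the text above):

```python
import numpy as np

# Made-up example matrices: A is m x n (2 x 3), B is n x p (3 x 4).
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.arange(12).reshape(3, 4)

C = A @ B                     # valid: columns of A (3) == rows of B (3)
print(C.shape)                # (2, 4), i.e. m x p

# Each entry follows the summation formula c_ij = sum_k a_ik * b_kj.
assert C[0, 1] == sum(A[0, k] * B[k, 1] for k in range(3))

# Reversing the order is not even defined here: B is 3 x 4, A is 2 x 3.
try:
    B @ A
except ValueError:
    print("B @ A: incompatible dimensions")
```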
Every square matrix has its own eigenvalues (which are scalars, or "numbers") and eigenvectors. It is easier to understand what these special values of a matrix are through an example of matrix-vector multiplication. The result of such a multiplication can be the initial vector multiplied by a number (or scalar). The vectors with this property are the eigenvectors of that matrix, while the number in front of them is the eigenvalue. A special case of an eigenvector is the unit eigenvector, which is an eigenvector with a magnitude (length) of one.
Mathematically, eigenvectors are the vectors that, after the linear transformation (which is the matrix multiplication), change only by a scalar factor, with that scalar being the eigenvalue and representing the change of the magnitude of the initial vector. Eigenvalues can take any real or complex value, including zero (the zero eigenvalue is discussed further below); eigenvectors, however, are required to be nonzero, because the zero vector would trivially satisfy the eigenvalue equation for every scalar. An interesting relation between eigenvalues and eigenvectors can be found. Consider a vector x = c*y, where c is a constant and y a unit vector. Then:
A·x = λ·x ⇒ A·(c·y) = λ·(c·y) ⇒ c·(A·y) = c·(λ·y) ⇒ A·y = λ·y
An eigenvector multiplied by a nonzero scalar is again an eigenvector with the same eigenvalue, and since this scalar can be an arbitrary real or complex number, there are infinitely many eigenvectors for each eigenvalue. Each eigenvalue leads to its own unit eigenvector (up to sign) and has its own set of infinitely many eigenvectors. So, to each distinct eigenvalue of a matrix corresponds its own unit eigenvector.
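A minimal NumPy sketch of this scaling property, using an arbitrary example matrix (chosen here for illustration):

```python
import numpy as np

# Illustrative symmetric matrix; its eigenvalues happen to be 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
lam = eigenvalues[0]
v = eigenvectors[:, 0]          # NumPy returns unit eigenvectors

# A applied to an eigenvector only rescales it: A v = lam v.
assert np.allclose(A @ v, lam * v)

# Any nonzero scalar multiple c*v is again an eigenvector, same eigenvalue.
c = 7.5
assert np.allclose(A @ (c * v), lam * (c * v))
```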
1.1 Background of the Study
Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, they arose in the study of quadratic forms and differential equations. A matrix is a two-dimensional array of expressions or numbers, which defines a system of linear equations. The roots of the characteristic equation of this system are termed eigenvalues; they are also known as characteristic values or characteristic roots. In branches like physics and engineering, knowledge of eigenvalues is essential, and they are found from the characteristic equation:
|A–λI|=0
The two parallel lines | | represent the determinant of the expression written within them.
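For a 2 × 2 matrix, |A − λI| = 0 expands to λ² − trace(A)·λ + det(A) = 0. A NumPy sketch with a made-up matrix (not taken from the text):

```python
import numpy as np

# Illustrative 2 x 2 matrix: trace = 7, det = 10, so the characteristic
# equation is lambda^2 - 7*lambda + 10 = 0, with roots 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
lams = np.roots(coeffs)

# Each root makes A - lambda*I singular (determinant effectively zero).
for lam in lams:
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-9

# The roots agree with NumPy's direct eigenvalue routine.
assert np.allclose(sorted(lams), sorted(np.linalg.eigvals(A)))
```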
(1) A matrix possesses an inverse if and only if all of its eigenvalues are nonzero.
(2) If λ1, λ2, …, λn are the eigenvalues of a matrix A, then the eigenvalues of the power A^k are
λ1^k, λ2^k, …, λn^k.
(3) If the matrix A is invertible, then its inverse A^-1 has the eigenvalues
1/λ1, 1/λ2, …, 1/λn.
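The power and inverse properties can be checked numerically; the triangular matrix below is an illustrative choice, not from the original text:

```python
import numpy as np

# Made-up invertible matrix; being triangular, its eigenvalues are 2 and 3.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

lams = np.linalg.eigvals(A)

# Eigenvalues of A^k are the k-th powers of the eigenvalues of A.
k = 4
assert np.allclose(sorted(np.linalg.eigvals(np.linalg.matrix_power(A, k))),
                   sorted(lams ** k))

# Eigenvalues of the inverse A^-1 are the reciprocals 1/lambda.
assert np.allclose(sorted(np.linalg.eigvals(np.linalg.inv(A))),
                   sorted(1.0 / lams))
```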
There may be situations in which zero is one of the eigenvalues of a matrix. In this case at least one root of the eigenvalue equation of the given matrix is zero, and the matrix is singular. In the setting of linear dynamical systems, this happens when the system has an entire line of equilibrium points through the origin rather than a single equilibrium at (0, 0).
(4) If A is an n × n triangular matrix (upper triangular, lower triangular, or diagonal), then the eigenvalues of A are the entries on its main diagonal.
(5) A square matrix A is invertible if and only if λ = 0 is not an eigenvalue of A.
(6) If λ is an eigenvalue of an invertible matrix A, and x ≠ 0 a corresponding eigenvector, then 1/λ is an eigenvalue of A^-1 with the same corresponding eigenvector x.
(7) For an n × n matrix A, the following statements are equivalent:
A is invertible.
det(A) ≠ 0.
A^T·A is invertible.
A has rank n.
A has nullity 0.
T_A (the transformation x → Ax) is one-to-one.
λ = 0 is not an eigenvalue of A.
(8) For every eigenvalue of A, the geometric multiplicity is less than or equal to the algebraic multiplicity.
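A short NumPy sketch of the triangular-matrix property and the invertibility criterion, on made-up matrices:

```python
import numpy as np

# Triangular case: the eigenvalues are exactly the diagonal entries.
U = np.array([[5.0, 2.0, 1.0],
              [0.0, -1.0, 3.0],
              [0.0, 0.0, 4.0]])   # illustrative upper triangular matrix
assert np.allclose(sorted(np.linalg.eigvals(U)), sorted(np.diag(U)))

# lambda = 0 is not among them, so U is invertible: det(U) != 0.
assert not np.isclose(np.linalg.det(U), 0.0)

# A singular matrix, by contrast, has 0 as an eigenvalue.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])        # rank 1, det = 0
assert np.isclose(np.linalg.eigvals(S).min(), 0.0)
```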
(9) Let A and B be similar matrices. If the similarity transformation is performed by an orthogonal or unitary matrix, that is,
B = Q^T·A·Q with Q orthogonal, or B = U^H·A·U with U unitary,
we say that the matrices A and B are unitarily similar. Since unitarily similar matrices are a special case of similar matrices, the eigenvalues of unitarily similar matrices are the same.
For example, the eigenvalues of the singular matrix with rows (2, 2) and (2, 2) are λ = 0 and λ = 4.
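One matrix with exactly these eigenvalues (chosen here for illustration) can be verified with NumPy:

```python
import numpy as np

# Singular example matrix: trace = 4, det = 0, so eigenvalues are 0 and 4.
A = np.array([[2.0, 2.0],
              [2.0, 2.0]])

lams = sorted(np.linalg.eigvals(A))
assert np.allclose(lams, [0.0, 4.0])
```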
1.1.3 Dominant and Complex Eigenvalue
Dominant Eigenvalue
The dominant eigenvalue of a matrix is the eigenvalue with the largest absolute value among all of its eigenvalues.
Let us suppose that A is a square matrix of order n and λ1, λ2, …, λn are its eigenvalues, such that:
|λ1| > |λ2| ≥ … ≥ |λn|.
Then λ1, which is the largest of all eigenvalues of matrix A in absolute value, is known as the dominant eigenvalue.
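The dominant eigenvalue can be approximated by power iteration: repeatedly applying A pulls any starting vector toward the dominant eigenvector. The following is an illustrative sketch; the function name and example matrix are chosen here, not taken from the original text:

```python
import numpy as np

def dominant_eigenvalue(A, iterations=100):
    """Power iteration: repeatedly apply A and normalize; the vector
    converges toward the eigenvector of the dominant eigenvalue."""
    x = np.ones(A.shape[0])
    for _ in range(iterations):
        x = A @ x
        x = x / np.linalg.norm(x)
    # The Rayleigh quotient then estimates the corresponding eigenvalue.
    return (x @ A @ x) / (x @ x)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # eigenvalues 1 and 3; dominant is 3
assert np.isclose(dominant_eigenvalue(A), 3.0)
```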
Complex Eigenvalue
So far, we know that all the values of λ are computed from the eigenvalue (characteristic) equation |A – λI| = 0. When the characteristic equation, thus solved, gives roots that are complex in nature, the matrix is said to have complex eigenvalues. In other words, complex eigenvalues of a matrix are the eigenvalues of the form:
λ = a + bi,
where “a” and “b” are the real and imaginary parts, respectively.
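A classic illustration (the matrix is chosen here, not from the text): a 90-degree rotation leaves no real direction unchanged, so its matrix has purely imaginary eigenvalues.

```python
import numpy as np

# 90-degree rotation matrix: no nonzero real vector keeps its direction
# under rotation, so there are no real eigenvalues.
R = np.array([[0.0, -1.0],
              [1.0, 0.0]])

lams = sorted(np.linalg.eigvals(R), key=lambda z: z.imag)
# Eigenvalues are lambda = a + bi with a = 0 and b = -1, +1.
assert np.allclose(lams, [-1j, 1j])
```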
The eigenvalue problem arises in many areas of application. For example, it is crucial in solving systems of differential equations, analyzing population growth models, and calculating powers of matrices (in order to define the matrix exponential). Other areas such as physics, sociology, biology, economics and statistics also rely heavily on eigenvalues and eigenvectors in their computations.
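As a sketch of the matrix-powers application (example matrix chosen here for illustration): if A = P·D·P⁻¹ with D the diagonal matrix of eigenvalues, then Aᵏ = P·Dᵏ·P⁻¹, so only the scalar eigenvalues need to be raised to the k-th power.

```python
import numpy as np

# Illustrative diagonalizable matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lams, P = np.linalg.eig(A)          # eigenvalues and eigenvector matrix P

# A^k via eigendecomposition: only the eigenvalues are powered.
k = 5
A_k = P @ np.diag(lams ** k) @ np.linalg.inv(P)

# Agrees with repeated matrix multiplication.
assert np.allclose(A_k, np.linalg.matrix_power(A, k))
```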
When eigenvalues and eigenvectors are introduced to students, the formal-world concept definition may be given in words, but since it has an embedded symbolic form the student is soon drawn into symbolic-world manipulations of algebraic and matrix representations, e.g. transforming Ax = λx to |A – λI| = 0. In this way the strong visual, or embodied metaphorical, image of eigenvectors can be obscured by the strength of the formal and symbolic thrust. However, using an enactive, embodied approach first could give a feeling for what eigenvalues and their associated eigenvectors are, and how they relate to the algebraic representation.