Module 1 Theory of Matrices

Theory of Matrices

An arrangement of numbers in an array of m rows and n columns, such as

        [ a11  ⋯  a1n ]
    A = [  ⋮   ⋱   ⋮  ]
        [ am1  ⋯  amn ]

m – number of rows in A, n – number of columns in A. Thus, the
matrix is denoted by A(m×n).
A = [aij](m×n), where aij denotes the element
belonging to the i-th row and j-th column.
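As a quick illustration of rows, columns, and the element aij (using NumPy, which indexes from 0 rather than 1 — the matrix chosen here is just an example):

```python
import numpy as np

# A 2x3 matrix: m = 2 rows, n = 3 columns.
A = np.array([[1, 2, 3],
              [4, 5, 6]])

m, n = A.shape          # (2, 3)

# NumPy indexes from 0, so the element a_ij (1-based, as in the text)
# is A[i - 1, j - 1].
a_12 = A[0, 1]          # element in row 1, column 2 -> 2
print(m, n, a_12)       # 2 3 2
```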
Types of Matrices

Operations of Matrices

Scalar x Multiplied to a Matrix

Matrix Multiplication
Important remarks
(Find examples wherever possible)

• In general, AB ≠ BA.
• AB = 0 does not imply that A = 0 or B = 0.
• AB = AC does not imply that B = C.
• The distributive law holds for matrix multiplication:
  A(B + C) = AB + AC,  (A + B)C = AC + BC
• A² = A × A, A³ = A² × A; in general, (A^m)^n = A^(mn)
• A^m A^n = A^(m+n)
• (A′)′ = A
• (AB)′ = B′A′
• (A + B)′ = A′ + B′
• (AB)⁻¹ = B⁻¹A⁻¹ (for invertible A and B)
• |AB| = |A| · |B|
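A single pair of matrices can illustrate several of these remarks at once. A small NumPy sketch (the particular matrices are just one convenient choice):

```python
import numpy as np

A = np.array([[1, 1],
              [0, 0]])
B = np.array([[ 1, 0],
              [-1, 0]])

AB = A @ B        # the zero matrix, although neither A nor B is zero
BA = B @ A        # [[1, 1], [-1, -1]], so AB != BA

# AB = AC does not force B = C: take C = 0, then A @ C = 0 = A @ B.
C = np.zeros((2, 2), dtype=int)
print(AB)         # [[0 0]
                  #  [0 0]]
```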
Elementary Row and Column Operations
Rank of a Matrix
A matrix is said to be of rank r if
1. there is at least one minor of order r which is not equal to zero, and
2. every minor of order (r + 1) is equal to zero.
Rank is denoted by ρ(A).

         [ 1 2 3 ]
Ex.  A = [ 2 4 6 ]
         [ 3 6 9 ]

Note that |A| = 0 and every minor of order 2 × 2 is also 0. Hence the
largest order of a non-vanishing minor is 1, so ρ(A) = 1.

Remark: ρ(A(m×n)) ≤ min(m, n)


Ways to find Rank

Echelon Form: the given matrix is converted to row echelon form using
only elementary row operations. Then the number of non-zero rows is
the rank of the matrix.

Normal Form: the given matrix is converted to normal form (roughly,
an identity matrix) using both elementary row and column operations.
Then the order of the (roughly) identity matrix is the rank of the
matrix.
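The echelon-form method can be sketched in code. The helper below is an assumed minimal Gaussian-elimination routine (not from the text): it reduces a copy of the matrix to row echelon form with row swaps and counts the non-zero rows, then compares with NumPy's built-in rank:

```python
import numpy as np

def rank_by_echelon(M, tol=1e-9):
    """Reduce M to row echelon form using only elementary row
    operations, then count the non-zero rows (= the rank)."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0                                   # current pivot row
    for c in range(cols):
        pivot = r + np.argmax(np.abs(M[r:, c]))
        if abs(M[pivot, c]) < tol:
            continue                        # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]       # row swap R_r <-> R_pivot
        # eliminate everything below the pivot: R_i <- R_i - (a_ic/a_rc) R_r
        M[r + 1:] -= np.outer(M[r + 1:, c] / M[r, c], M[r])
        r += 1
        if r == rows:
            break
    return r

A = np.array([[1, 2, 3],
              [2, 4, 6],
              [3, 6, 9]])

print(rank_by_echelon(A))            # 1 (the example from above)
print(np.linalg.matrix_rank(A))      # 1, for comparison
```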
Example 1.
Example 2.
Normal Form

By performing elementary row and column transformations, any
non-zero matrix A can be reduced to one of the following four forms,
called the normal form:

• [ Ir ]
• [ Ir  0 ]
• [ Ir ]
  [ 0  ]
• [ Ir  0 ]
  [ 0   0 ]

The number r so obtained is called the rank of the matrix.
Example on Rank using Normal form
Example 2
System of Linear Equations

Matrix form of System of Linear Equations

In the matrix form AX = B, A is the coefficient matrix, X is the
column of unknowns, and B is the column of constants.
Solution of system of Linear Equations
AX = B (m equations, n unknowns)

• If ρ([A|B]) ≠ ρ(A): the system AX = B is inconsistent.
• If ρ([A|B]) = ρ(A): the system AX = B is consistent.
  – If ρ([A|B]) = ρ(A) = number of variables: unique solution.
  – If ρ([A|B]) = ρ(A) < number of variables: infinitely many solutions.
Solution of System AX = B (Method)
1. Write the given system in matrix form (AX = B).
2. Form the augmented matrix [A|B] from the given system.
3. Reduce the augmented matrix to echelon form.
Note: in [A|B], the first part gives ρ(A) and the whole matrix gives
ρ([A|B]).
4. Conclude whether the system has a unique solution, infinitely many
solutions, or no solution.
5. If consistent with ρ([A|B]) = ρ(A) = number of variables, then
rewrite the equations and find the values.
6. If consistent with ρ([A|B]) = ρ(A) = r < number of variables,
then set the n − r free variables to u, v, w, etc., and find the
values of the other variables in terms of the free variables.
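The rank test and step 5 can be sketched numerically. The system below is an assumed example (not from the text), chosen to be consistent with a unique solution:

```python
import numpy as np

# x + y + z = 6,  x + 2y + 3z = 14,  x + 4y + 9z = 36
A = np.array([[1, 1, 1],
              [1, 2, 3],
              [1, 4, 9]], dtype=float)
B = np.array([6.0, 14.0, 36.0])

aug = np.column_stack([A, B])          # the augmented matrix [A|B]
rA = np.linalg.matrix_rank(A)
rAB = np.linalg.matrix_rank(aug)

if rAB != rA:
    print("inconsistent")
elif rA == A.shape[1]:                 # rank = number of variables
    X = np.linalg.solve(A, B)          # unique solution
    print("unique solution:", X)       # [1. 2. 3.]
else:
    print("infinitely many solutions")
```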
System of linear equations having a unique solution

System of linear equations with infinitely many solutions
(Note that different values of 't' will yield different solutions.)

Inconsistent system of linear equations

Follow the chart for the solution of a system of linear equations.
Linear Dependence and Independence

By the triangle law, c⃗ = a⃗ + b⃗.
The set {a⃗, b⃗, c⃗} is linearly dependent.
The set {a⃗, b⃗} is linearly independent.
Definition

Linearly Independent: the system of n vectors {x1, x2, x3, …, xn} is
said to be linearly independent if every relation of the type
c1x1 + c2x2 + ⋯ + cnxn = 0
has only the trivial solution c1 = c2 = ⋯ = cn = 0.

Linearly Dependent: the system of n vectors {x1, x2, x3, …, xn} is
said to be linearly dependent if some relation of the type
c1x1 + c2x2 + ⋯ + cnxn = 0
holds with some ci ≠ 0.
Remarks
• c1x1 + c2x2 + ⋯ + cnxn = 0 is a linear equation with c1, c2, c3, …, cn
as unknowns.
• If the vectors x1, x2, x3, …, xn are linearly dependent, then in the
equation c1x1 + c2x2 + ⋯ + cnxn = 0, by definition some ck ≠ 0.
• By simplification we get
  ck xk = −c1x1 − c2x2 − ⋯ − cnxn (omitting the k-th term), so
  xk = −(c1/ck)x1 − (c2/ck)x2 − ⋯ − (cn/ck)xn
• Thus, if the vectors are dependent, there exists a relation between
the given vectors.
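Dependence can be tested with the rank idea from earlier: n vectors are independent exactly when the matrix having them as rows has rank n. A small NumPy sketch (the vectors are arbitrary examples):

```python
import numpy as np

def is_independent(vectors):
    """Vectors are linearly independent iff the rank of the matrix
    having them as rows equals the number of vectors."""
    M = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(M) == len(vectors)

print(is_independent([(1, 0, 0), (0, 1, 0)]))   # True
print(is_independent([(1, 2, 3), (2, 4, 6)]))   # False: second = 2 * first
```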
Linearly independent set of vectors

Linearly dependent set of vectors; hence, a relation between the
vectors is found.
Linear Transformation

Matrix S is applied to the vectors (1, 0) and (0, 1). Matrix S
transforms each and every vector on the plane.

A linear transformation is a map from n-dimensional space to itself,
generally represented by
y1 = a11 x1 + a12 x2 + ⋯ + a1n xn
y2 = a21 x1 + a22 x2 + ⋯ + a2n xn
⋮
yn = an1 x1 + an2 x2 + ⋯ + ann xn
Matrix Representation of Linear Transformation

• The matrix representation of the above linear transformation is
given by

  [ y1 ]   [ a11 ⋯ a1n ] [ x1 ]
  [  ⋮ ] = [  ⋮  ⋱  ⋮  ] [  ⋮ ]
  [ yn ]   [ an1 ⋯ ann ] [ xn ]

• Every linear transformation is of the form Y = AX (expressing Y
in terms of X).
• The properties of a linear transformation depend on the matrix A.
• If |A| = 0, then the matrix A is called singular and the
transformation is called a singular transformation.
• If |A| ≠ 0, then the matrix A is called non-singular and the
transformation is called a non-singular or regular transformation.
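The singular/regular distinction is just a determinant check. A short sketch (the two matrices are assumed examples, not from the text):

```python
import numpy as np

A = np.array([[1, 2],
              [2, 4]], dtype=float)    # rows proportional -> |A| = 0
S = np.array([[0, -1],
              [1,  0]], dtype=float)   # 90-degree rotation -> |S| = 1

print(np.linalg.det(A))   # 0.0 : singular transformation
print(np.linalg.det(S))   # 1.0 : non-singular (regular) transformation
```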
Orthogonal Transformation
• The transformation Y = AX is said to be orthogonal if A is an
orthogonal matrix.
• Matrix A is called orthogonal if A′A = AA′ = I, where A′ is the
transpose of A.

Properties of an orthogonal matrix:

1. A⁻¹ = A′
2. |A| = ±1
3. If A is orthogonal, then A⁻¹ is also orthogonal.
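Rotation matrices are a standard example of orthogonal matrices; the properties above can be checked numerically (the 30-degree angle is an arbitrary choice):

```python
import numpy as np

t = np.pi / 6                              # 30 degrees, arbitrary
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])    # a rotation matrix

I = np.eye(2)
assert np.allclose(A.T @ A, I) and np.allclose(A @ A.T, I)  # A'A = AA' = I
assert np.allclose(np.linalg.inv(A), A.T)                   # A^-1 = A'
print(round(float(np.linalg.det(A)), 10))                   # 1.0 (|A| = +-1)
```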
Eigen Values and Eigen Vectors

Linearly transformed (using some matrix A)

• Observe the blue and red vectors before and after the transformation.
The blue vector has the same direction even after the transformation,
whereas the red vector changes its direction.

Remember the blue vector?

Graphically, an Eigen vector is one that does not change its direction,
and the Eigen value is the extent by which the Eigen vector changes its
length.
Characteristic Equation
• The characteristic equation is also called the characteristic
polynomial because |A − λI| = 0 is a polynomial equation of degree
equal to the order of the square matrix A.
• For a 2 × 2 matrix
  [ a11 a12 ]
  [ a21 a22 ]
the characteristic polynomial is given by
  λ² − S1 λ + |A| = 0,
where S1 = sum of diagonal elements = Trace(A).
• For a 3 × 3 matrix
  [ a11 a12 a13 ]
  [ a21 a22 a23 ]
  [ a31 a32 a33 ]
the characteristic polynomial is given by
  λ³ − S1 λ² + S2 λ − |A| = 0,
where S1 = sum of diagonal elements = Trace(A), and
S2 = sum of the minors of order two of the diagonal elements:
  S2 = | a22 a23 | + | a11 a13 | + | a11 a12 |
       | a32 a33 |   | a31 a33 |   | a21 a22 |
Method
(To find Eigen values and Eigen vectors for a given matrix A)
1. Write the characteristic polynomial for A, i.e. |A − λI| = 0 (use
the shortcuts for 2 × 2 and 3 × 3 matrices).
2. The roots of the characteristic polynomial are the Eigen values
(say λ1, λ2, λ3, …).
3. For each Eigen value λi, find the Eigen vectors by solving the
system of linear equations
(A − λi I) X = 0
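The three steps can be checked against NumPy's eigen-solver. The matrix below is an assumed example whose characteristic polynomial factors nicely:

```python
import numpy as np

A = np.array([[4, 1],
              [2, 3]], dtype=float)

# Step 1-2: lambda^2 - S1*lambda + |A| = 0 with S1 = Trace(A) = 7,
# |A| = 10, so (lambda - 5)(lambda - 2) = 0 -> Eigen values 5 and 2.
vals, vecs = np.linalg.eig(A)
print(np.sort(vals).round(6))          # [2. 5.]

# Step 3: each column of `vecs` solves (A - lambda_i I) X = 0,
# i.e. A v = lambda v.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```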

Brain Teaser
Can the same Eigen vector correspond to two different Eigen values?
If no, then why? If yes, can you find one?
HINT: use the basic definition of an Eigen vector.

Can you quickly check the Eigen values for
[ 0 −1 ]
[ 1  0 ] ?
Any conclusion?
As homework, try finding the Eigen vectors for λ = 1 and 2.

Interesting facts about this particular example:
1. Repeated Eigen value.
2. Two free variables.
3. The matrix is symmetric.

[ −r − s ]     [ −1 ]     [ −1 ]
[   r    ] = r [  1 ] + s [  0 ]
[   s    ]     [  0 ]     [  1 ]

The two vectors (−1, 1, 0) and (−1, 0, 1) are linearly independent.
Cayley-Hamilton Theorem
Statement: Every square matrix satisfies its own characteristic
equation.
We know that the characteristic equation of a square matrix A is
|A − λI| = 0,
i.e. a0 λⁿ + a1 λⁿ⁻¹ + ⋯ + a(n−1) λ + an = 0.
The Cayley-Hamilton theorem says that A satisfies this equation, which
means that if we replace λ by the matrix A, we get the null matrix on
the RHS:
a0 Aⁿ + a1 Aⁿ⁻¹ + ⋯ + a(n−1) A + an I = 0,
where I is the identity matrix of size n.
• Applications of the Cayley-Hamilton theorem:
1. Any power of a square matrix can be expressed as a linear
combination of lower powers of A.
2. To find the inverse of a non-singular matrix A.

e.g. 2: Verify the Cayley-Hamilton theorem for
    [ 2 1 1 ]
A = [ 0 1 0 ]
    [ 1 1 2 ]
and use it to find the matrix
A⁸ − 5A⁷ + 7A⁶ − 3A⁵ + A⁴ − 5A³ + 8A² − 2A + I
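The verification part of this example can be done numerically. For this A, the shortcuts above give S1 = 5, S2 = 7, |A| = 3, so the characteristic equation is λ³ − 5λ² + 7λ − 3 = 0. A sketch of the check (and of application 2, the inverse):

```python
import numpy as np

A = np.array([[2, 1, 1],
              [0, 1, 0],
              [1, 1, 2]], dtype=float)
I = np.eye(3)

# Characteristic equation of A: lambda^3 - 5 lambda^2 + 7 lambda - 3 = 0
# (S1 = Trace(A) = 5, S2 = 7, |A| = 3). Cayley-Hamilton: A satisfies it.
lhs = A @ A @ A - 5 * (A @ A) + 7 * A - 3 * I
assert np.allclose(lhs, np.zeros((3, 3)))      # A^3 - 5A^2 + 7A - 3I = 0

# Application 2: multiply through by A^-1 to get
# A^2 - 5A + 7I - 3 A^-1 = 0  =>  A^-1 = (A^2 - 5A + 7I) / 3
A_inv = (A @ A - 5 * A + 7 * I) / 3
assert np.allclose(A @ A_inv, I)
print("Cayley-Hamilton verified")
```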


Application of Matrices in 2D & 3D Transformations

Application in 2D
Reflection about the X-axis
Reflection about the Y-axis
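The 2D reflections and a rotation can be checked with a quick NumPy sketch (the point (3, 4) and the 90-degree angle are arbitrary choices):

```python
import numpy as np

reflect_x = np.array([[1,  0],
                      [0, -1]])        # reflection about the X-axis
reflect_y = np.array([[-1, 0],
                      [ 0, 1]])        # reflection about the Y-axis

p = np.array([3, 4])
print(reflect_x @ p)    # [ 3 -4]
print(reflect_y @ p)    # [-3  4]

theta = np.pi / 2       # rotate the plane by 90 degrees
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
print(np.round(rot @ p).astype(int))   # [-4  3]
```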
Rotation in 3D:
NOTE
• Let P(x, y, z) be the coordinates of a given point, and suppose the
origin is shifted to (u, v, w). Let the Z-axis be the axis of rotation.
Then the coordinates (X, Y, Z) of P in the NEW COORDINATE SYSTEM are:

[ X ]   [  cos θ   sin θ   0   −u ] [ x ]
[ Y ] = [ −sin θ   cos θ   0   −v ] [ y ]
[ Z ]   [   0       0      1   −w ] [ z ]
[ 1 ]   [   0       0      0    1 ] [ 1 ]

• If the axis of rotation is changed, then only the rotation matrix
(the first three columns) in the above expression changes. Everything
else remains the same.
Examples:
Exercises:
