Chapter 4
MATRIX ALGEBRA FOR NON-HOMOGENEOUS LINEAR ALGEBRAIC SYSTEM
WEEK 6: MATRIX ALGEBRA FOR NON-HOMOGENEOUS LINEAR ALGEBRAIC SYSTEM
6.1 INTRODUCTION
A matrix is an array of mn elements (where m and n are integers) arranged in m rows and n columns, where a_mn is the element of the matrix at the m-th row and n-th column. If m = n, it is known as a square matrix; a non-square matrix has m ≠ n.
The difference between a matrix and a column/row vector is shown in Table 6.1.
Table 6.1 Notation for matrices versus vectors.
Matrix: upper-case non-italic bold letter (e.g. A = [1 2; 3 4]) or symbol in square brackets (e.g. [A] = [1 2; 3 4]).
Vector: upper-case italic bold letter (e.g. A = {1; 3}) or symbol in curly brackets (e.g. {A} = {1; 3}).
Table 6.3 Types of matrices.
• Null (zero) matrix: every element equals 0, e.g. [0 0 0; 0 0 0; 0 0 0].
• Symmetric matrix: a_ij = a_ji, e.g. [5 1 2; 1 3 7; 2 7 8].
• Diagonal matrix: all elements off the main diagonal equal 0, e.g. [5 0 0; 0 3 0; 0 0 8].
• Identity matrix: diagonal matrix with diagonal elements equal to 1, e.g. [1 0 0; 0 1 0; 0 0 1].
• Upper triangular matrix: all elements below the main diagonal equal 0, e.g. [2 −5 6; 0 3 8; 0 0 1].
• Lower triangular matrix: all elements above the main diagonal equal 0, e.g. [2 0 0; 9 3 0; 20 −3 1].
The basic operations on matrices, such as trace, equality, addition/subtraction, scalar multiplication, transpose, multiplication, determinant, cofactor, adjoint, and inverse, are provided in Table 6.4.
Table 6.4 Basic matrix operations.
Trace: F = [4 13 3; −2 19 1; 3 2 0]; Trace(F) = summation of the diagonal elements = 4 + 19 + 0 = 23.
Equality: A = [4 13; −2 19]; B = [4 13; −2 19]; C = [4 13 2; −2 19 1]. ∴ A = B; A ≠ C.
Addition/Subtraction: D = A + B = [8 26; −4 38]; E = A − B = [0 0; 0 0].
Scalar multiplication: 2D = [16 52; −8 76].
Transpose, [•]^T: C = [4 13 2; −2 19 1]; C^T = [4 −2; 13 19; 2 1].
Matrix multiplication: [3 1; 8 6; 0 4] (size 3×2) × [5 9; 7 2] (size 2×2) = [22 29; 82 84; 28 8] (size 3×2).
Note: AB ≠ BA. Reversing the order, [5 9; 7 2] (size 2×2) × [3 1; 8 6; 0 4] (size 3×2) = error, because the interior dimensions (2 and 3) are not equal.
Determinant, cofactor & adjoint: for F = [4 13 3; −2 19 1; 3 2 0], each cofactor is a signed 2×2 minor obtained by deleting one row and one column:
cofactor(F) = [+|19 1; 2 0| −|−2 1; 3 0| +|−2 19; 3 2|; −|13 3; 2 0| +|4 3; 3 0| −|4 13; 3 2|; +|13 3; 19 1| −|4 3; −2 1| +|4 13; −2 19|] = [−2 3 −61; 6 −9 31; −44 −10 102]
adjoint(F) = cofactor(F)^T = [−2 6 −44; 3 −9 −10; −61 31 102]
Note: It is inefficient to calculate the cofactor & adjoint manually for a 4×4 matrix and above.
For a 4×4 matrix, each cofactor is already a signed 3×3 determinant, e.g. for
G = [4 13 3 3; −2 19 1 1; 3 2 0 0; 0 2 1 0]
the first row of cofactors is
[+|19 1 1; 2 0 0; 2 1 0|  −|−2 1 1; 3 0 0; 0 1 0|  +|−2 19 1; 3 2 0; 0 2 0|  −|−2 19 1; 3 2 0; 0 2 1|]
and the remaining rows follow the same delete-one-row-and-one-column pattern, giving
cofactor(G) = [2 −3 6 55; −6 9 −18 −13; 44 10 −20 −82; 0 0 152 −152]
adjoint(G) = cofactor(G)^T = [2 −6 44 0; −3 9 10 0; 6 −18 −20 152; 55 −13 −82 −152]
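These operations can be cross-checked numerically. The sketch below uses Python with NumPy (an add-on to these notes, not part of Table 6.4); it reproduces the trace, transpose, multiplication, and adjoint results above, with the adjoint obtained from adj(F) = det(F)·F⁻¹.

```python
import numpy as np

F = np.array([[4, 13, 3],
              [-2, 19, 1],
              [3, 2, 0]], dtype=float)
print(np.trace(F))                 # 23.0 = 4 + 19 + 0

C = np.array([[4, 13, 2],
              [-2, 19, 1]], dtype=float)
print(C.T)                         # 3x2 transpose of the 2x3 matrix C

A = np.array([[3, 1], [8, 6], [0, 4]], dtype=float)
B = np.array([[5, 9], [7, 2]], dtype=float)
print(A @ B)                       # [[22 29], [82 84], [28 8]]
# B @ A raises ValueError: non-equal interior dimensions (2x2 times 3x2)

detF = np.linalg.det(F)            # -152 (up to round-off)
print(np.round(detF * np.linalg.inv(F)))  # adjoint(F) = det(F) * inverse(F)
```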
6.2 Solving Non-Homogeneous System of Linear Equations
Linear algebraic equations are written as a coefficient matrix [A], an unknown vector {X}, and a non-zero right-hand side {B}. For example, with n = 2, the two equations
4x1 + 13x2 = 8
−2x1 + 19x2 = 2
become [4 13; −2 19]{x1; x2} = {8; 2}, where 2 sets of equations are given to solve for the 2 unknowns x1 & x2.
For n ≤ 3, methods frequently used to solve a non-homogeneous system of linear equations are given below:
(i) Matrix inversion (moderate efficiency for n = 3 & high efficiency for n = 2)
(ii) Graphical method (less efficient, but useful for visualizing & enhancing intuition)
(iii) Cramer's rule (high efficiency for n ≤ 3 - Main Focus)
(iv) Method of elimination (less efficient)
However, methods (i)–(iv) are less efficient for n > 3, thus more advanced methods (e.g. Gauss elimination, introduced later) are applied to the standard matrix form:
[A]{X} = {B}
6.2.2 Graphical Method
Rearrange each equation into linear-plot form, x2 = (slope)x1 + (intercept), and then plot it. Using the graphical method, the solution that satisfies both equations is the intersection point.
For a singular system, the slopes of the equations are equal (equivalently, the determinant of the coefficient matrix is zero), and this leads to either no solution (parallel lines) or infinite solutions (identical lines). For example:
No solution: x2 = +(1/2)x1 + (1) and x2 = +(1/2)x1 + (2)
Infinite solutions: x2 = +(1/2)x1 + (1) and x2 = +(1/2)x1 + (1)
In both cases the difference of slopes is zero, e.g. |1/2 −1; 1/2 −1| = −1/2 − (−1/2) = 0.
For an ill-conditioned system (also known as an ill-posed system), the slopes of the equations are almost equal (equivalently, the determinant is close to zero), and this leads to a solution that is very sensitive to small changes in the coefficients. For example:
x2 = +(2.3/5)x1 + (1.1)
x2 = +(1/2)x1 + (1)
Difference of slopes: |2.3/5 −1; 1/2 −1| = −0.46 − (−0.5) = 0.04 ≈ 0
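The slope/determinant check can be automated. A minimal sketch (Python/NumPy assumed, not part of the original notes) rewrites each line x2 = m·x1 + c as −m·x1 + x2 = c and inspects the determinant of the coefficient matrix:

```python
import numpy as np

# Each line x2 = m*x1 + c is rewritten as -m*x1 + 1*x2 = c.
singular = np.array([[-1/2, 1.0],
                     [-1/2, 1.0]])    # equal slopes (parallel/identical lines)
ill_cond = np.array([[-2.3/5, 1.0],
                     [-1/2, 1.0]])    # almost equal slopes

print(np.linalg.det(singular))  # 0.0  -> singular system
print(np.linalg.det(ill_cond))  # 0.04 -> close to zero: ill-conditioned
```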
6.2.4 Method of Elimination (or Substitution Method)
Eliminate the unknowns one at a time: obtain x2 = ⋯ in terms of x3, then obtain the x3 solution and substitute back.
Limitation: Extremely tedious to solve manually. However, the elimination approach can be extended and made more systematic to improve efficiency, as in the Gauss elimination method.
Gauss elimination is an extension of the method of elimination with a systematic scheme of forward elimination & back substitution procedures.
Forward elimination #1
R3' = R3 − R1 × f31, where the factor f31 = a31/a11; for example, a31' = a31 − a11·f31 = 0.
Forward elimination #2
R3'' = R3' − R2' × f32, where the factor f32 = a32'/a22'; for example, a32'' = a32' − a22'·f32 = 0.
Note: •' and •'' indicate the change of value after the first and second elimination procedures, respectively.
Back Substitution
After forward elimination the last row reads [0 0 a33'' | b3''], so x3 = b3''/a33''; the remaining unknowns follow by substituting upward. For example (with x3 = 7.0000 from back substitution #1):
Back substitution #2: x2 = (−19.5617 − (−0.293333)(7.0000)) / 7.00333 = −2.50000
Back substitution #3: x1 = (7.85 − (−0.1)(−2.5) − (−0.2)(7)) / 3 = 3.00000
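The forward-elimination/back-substitution scheme translates directly into code. Below is a minimal sketch of naive Gauss elimination in Python/NumPy (the function name and structure are illustrative, not from the notes); the usage example solves the well-conditioned system that appears later in this section.

```python
import numpy as np

def naive_gauss(A, b):
    """Naive Gauss elimination: forward elimination, then back substitution.
    No pivoting, so a zero pivot element raises a division error."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):              # forward elimination
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]       # factor f_ik = a_ik / a_kk
            A[i, k:] -= f * A[k, k:]
            b[i] -= f * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):      # back substitution, from x_n upward
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[-1, 1, 2], [3, -1, 1], [-1, 3, 4]])
b = np.array([2, 6, 4])
print(naive_gauss(A, b))   # [ 1. -1.  2.]
```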
Limitation: Naive Gauss elimination suffers from the division-by-zero issue when a pivot element is zero, and its solution is sensitive to round-off error.
For example, forward elimination #1 on
[0 2 3; 4 6 7; 2 1 6]{x1; x2; x3} = {8; −3; 5}
requires R2' = R2 − R1 × (4/0) → Error! and R3' = R3 − R1 × (2/0) → Error!, because the pivot element a11 = 0.
The round-off sensitivity appears when the computation carries only a few significant figures. For example:
[2 100000; 1 1]{x1; x2} = {100000; 2}
Forward elimination #1: R2' = R2 − R1 × (1/2)
With rounding, the eliminated system yields {x1; x2} = {0; 1}.
Verification of solution:
LHS = {2(0) + 100000(1); 1(0) + 1(1)} = {100000; 1}; RHS = {100000; 2}
Percentage of error: {% error_b1; % error_b2} = {(|100000 − 100000|/100000) × 100%; (|2 − 1|/2) × 100%} = {0%; 50%}
∴ LHS ≠ RHS: {x1; x2} = {0; 1} is a poor solution, as it is different from the actual solution. The solution is sensitive to round-off error, which leads to a high error discrepancy.
The limitation of naive Gauss elimination can be improved by using GEwPP, which consists of scaling analysis & a pivoting strategy:
(a) Scaling analysis: indicates whether pivoting is required to avoid the division-by-zero issue. Scale each row of the coefficient matrix so that its maximum element is 1:
[2 100000; 1 1]{x1; x2} = {100000; 2} → (scaling) → [0.00002 1; 1 1]{x1; x2} = {1; 2}, where the scaled pivot element (0.00002) is small.
Rule of thumb: if the scaled pivot element is smaller than the corresponding elements of the other rows, then pivoting is needed.
(b) Pivoting strategy: switch rows/columns so that the pivot element is not zero or close to zero.
(i) Naive Gauss Elimination - Gaussian elimination (GE) without a pivoting strategy:
[2 100000; 1 1]{x1; x2} = {100000; 2}
Note: Previously we kept the original formulation and obtained a poor solution after solving it.
(ii) Gauss Elimination with Partial Pivoting (GEwPP) - switch rows so that the largest element becomes the pivot element (Main Focus):
[0.00002 1; 1 1]{x1; x2} = {1; 2} → (partial pivoting) → [1 1; 0.00002 1]{x1; x2} = {2; 1}, where the pivot element is now the largest.
Example of GEwPP
[2 100000; 1 1]{x1; x2} = {100000; 2}
→ (scaling) → [0.00002 1; 1 1]{x1; x2} = {1; 2}   Note: scaling indicates partial pivoting is needed.
→ (partial pivoting) → [1 1; 0.00002 1]{x1; x2} = {2; 1}   Note: the pivot element is the largest after PP.
→ (forward elimination) → [1 1; 0 1]{x1; x2} = {2; 1}   Note: round-off error happens when we use approximate values (1 − 0.00002 ≈ 1 and 1 − 0.00002 × 2 ≈ 1).
Back substitution #1: x2 = 1
Back substitution #2: x1 = 2 − x2 = 1
Verification of solution:
LHS = {2(1) + 100000(1); 1(1) + 1(1)} = {100002; 2}; RHS = {100000; 2}
Percentage of error: {% error_b1; % error_b2} = {(|100000 − 100002|/100000) × 100%; (|2 − 2|/2) × 100%} = {0.002%; 0%}
∴ LHS ≈ RHS: {x1; x2} = {1; 1} is an accurate solution, as it is close to the actual solution. The solution is much less sensitive to round-off error using GEwPP, compared with naive GE.
Determinant analysis can be done before GEwPP to check whether the system is well-conditioned or singular. Precaution: scaling is performed to standardize the matrix before calculating the determinant.
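A sketch of GEwPP in Python/NumPy follows (illustrative, not from the notes). It uses scaled partial pivoting: each candidate pivot is judged relative to the largest element of its own row, mirroring the scaling analysis above.

```python
import numpy as np

def gauss_elim_pp(A, b):
    """Gauss elimination with (scaled) partial pivoting - a sketch."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    s = np.max(np.abs(A), axis=1)             # scale factor of each row
    for k in range(n - 1):
        # Pivoting: bring the row with the largest scaled pivot to row k.
        p = k + int(np.argmax(np.abs(A[k:, k]) / s[k:]))
        A[[k, p]], b[[k, p]], s[[k, p]] = A[[p, k]], b[[p, k]], s[[p, k]]
        for i in range(k + 1, n):             # forward elimination
            f = A[i, k] / A[k, k]
            A[i, k:] -= f * A[k, k:]
            b[i] -= f * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 100000.0], [1.0, 1.0]])
b = np.array([100000.0, 2.0])
print(gauss_elim_pp(A, b))   # ~[1. 1.], the accurate solution
```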
Example: Solving a well-conditioned system using GEwPP
[−1 1 2; 3 −1 1; −1 3 4]{x1; x2; x3} = {2; 6; 4}
Scaling each row by its largest element and partially pivoting (the second row, scaled to [1 −1/3 1/3 | 2], has the largest scaled pivot) gives the augmented system [1 −1/3 1/3 | 2; −1/2 1/2 1 | 1; −1/4 3/4 1 | 1].
Forward elimination #1: R2' = R2 − R1 × (−1/2)/1 and R3' = R3 − R1 × (−1/4)/1 give
[1 −1/3 1/3 | 2; 0 1/3 7/6 | 2; 0 2/3 13/12 | 1.5]
→ (scaling) → [1 −1/3 1/3 | 2; 0 2/7 1 | 12/7; 0 8/13 1 | 18/13] → (pivoting) → [1 −1/3 1/3 | 2; 0 8/13 1 | 18/13; 0 2/7 1 | 12/7]
Forward elimination #2: R3'' = R3' − R2' × (2/7)/(8/13) gives
[1 −1/3 1/3 | 2; 0 8/13 1 | 18/13; 0 0 15/28 | 15/14]
Back substitution:
x3 = (15/14)/(15/28) = 2
x2 = (18/13 − (1)x3)/(8/13) = −1
x1 = (2 − (−1/3)x2 − (1/3)x3)/1 = 1
∴ {x1; x2; x3} = {1; −1; 2} is an accurate solution, as LHS = RHS (verification).
Example: Solving a singular system (infinite solutions) using GEwPP
[−1 1 2; 3 −1 1; −2 2 4]{x1; x2; x3} = {2; 6; 4}
→ (scaling) → [−1/2 1/2 1 | 1; 1 −1/3 1/3 | 2; −2/4 2/4 1 | 1]
→ (pivoting) → [1 −1/3 1/3 | 2; −1/2 1/2 1 | 1; −2/4 2/4 1 | 1]
Forward elimination #1: R2' = R2 − R1 × (−1/2)/1 and R3' = R3 − R1 × (−2/4)/1 give
[1 −1/3 1/3 | 2; 0 1/3 7/6 | 2; 0 1/3 7/6 | 2]
Forward elimination #2: R3'' = R3' − R2' × (1/3)/(1/3) gives
[1 −1/3 1/3 | 2; 0 1/3 7/6 | 2; 0 0 0 | 0]
Back substitution: the last row reads 0·x3 = 0, so x3 is a free variable. Let x3 = t, where −∞ < t < ∞; then
x2 = (2 − (7/6)t)/(1/3)
x1 = (2 − (−1/3)x2 − (1/3)t)/1
∴ {x1; x2; x3} = {2 − (−1/3)[(2 − (7/6)t)/(1/3)] − (1/3)t; (2 − (7/6)t)/(1/3); t}: there are infinite solutions that satisfy the eqns.
Example: Solving a singular system (no solution) using GEwPP
[−1 1 2; 3 −1 1; −2 2 4]{x1; x2; x3} = {2; 6; 8}
→ (scaling) → [−1/2 1/2 1 | 1; 1 −1/3 1/3 | 2; −2/4 2/4 1 | 2]
→ (pivoting) → [1 −1/3 1/3 | 2; −1/2 1/2 1 | 1; −2/4 2/4 1 | 2]
Forward elimination #1: R2' = R2 − R1 × (−1/2)/1 and R3' = R3 − R1 × (−2/4)/1 give
[1 −1/3 1/3 | 2; 0 1/3 7/6 | 2; 0 1/3 7/6 | 3]
Forward elimination #2: R3'' = R3' − R2' × (1/3)/(1/3) gives
[1 −1/3 1/3 | 2; 0 1/3 7/6 | 2; 0 0 0 | 1]
Back substitution: the last row reads 0·x3 = 1, which is impossible. ∴ No solutions satisfy the eqns.
6.3 Row Echelon Form, Reduced Row Echelon Form, Rank, & Linear Dependency
After GEwPP, the coefficient matrix is in Row Echelon Form (REF). From the previous examples, we obtain:
[A]_GEwPP = [1 −1/3 1/3; 0 8/13 1; 0 0 15/28] (well-conditioned system)
[A]_GEwPP = [1 −1/3 1/3; 0 1/3 7/6; 0 0 0] (singular system)
REF can be further reduced to Reduced Row Echelon Form (RREF) by using Gauss-Jordan Elimination
with Partial Pivoting (GJEwPP) as shown in the example below:
Well-conditioned system:
→ Scale the pivot elements to 1 (R2 → R2 × 13/8; R3 → R3 × 28/15): [1 −1/3 1/3; 0 1 13/8; 0 0 1]
→ Eliminate above pivot #2 (R1 → R1 − R2 × (−1/3)): [1 0 7/8; 0 1 13/8; 0 0 1]
→ Eliminate above pivot #3 (R1 → R1 − R3 × (7/8); R2 → R2 − R3 × (13/8)): [1 0 0; 0 1 0; 0 0 1]
Singular system:
→ Scale the pivot elements to 1 (R2 → R2 × 3): [1 −1/3 1/3; 0 1 3.5; 0 0 0]
→ Eliminate above pivot #2 (R1 → R1 − R2 × (−1/3)): [1 0 1.5; 0 1 3.5; 0 0 0]
RREF has the following characteristics:
• It is also a REF.
• It is unique for a given matrix.
• Each pivot element is scaled to 1.
• Every element above (and below) a pivot element is 0.
Once the RREF is obtained, the rank of the matrix can be evaluated by counting the number of non-zero rows of the RREF. Note: the rank is the maximum number of linearly independent vectors.
RREF = [1 0 0; 0 1 0; 0 0 1]: rank = 3. All 3 equations given are linearly independent, therefore finding 3 unknowns from 3 linearly independent equations is possible:
−1x1 + 1x2 + 2x3 = 2; 3x1 − 1x2 + 1x3 = 6; −1x1 + 3x2 + 4x3 = 4
RREF = [1 0 1.5; 0 1 3.5; 0 0 0]: rank = 2. Only 2 out of the 3 equations given are linearly independent, therefore finding a unique solution for 3 unknowns from 2 linearly independent equations is not possible:
−1x1 + 1x2 + 2x3 = 2; 3x1 − 1x2 + 1x3 = 6; −2x1 + 2x2 + 4x3 = 4
As a rule of thumb, n linearly independent equations are required to solve for n unknowns. To solve for 3 unknowns with fewer than 3 linearly independent equations, i.e. more unknowns than independent equations, we get the singular-system issue.
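RREF and rank need not be computed by hand. SymPy (a Python symbolic-math library, assumed available here) provides Matrix.rref() and Matrix.rank() directly; the sketch below reproduces the two cases above with exact rational arithmetic:

```python
from sympy import Matrix

A_full = Matrix([[-1, 1, 2], [3, -1, 1], [-1, 3, 4]])   # 3 independent eqns
A_sing = Matrix([[-1, 1, 2], [3, -1, 1], [-2, 2, 4]])   # row 3 = 2 x row 1

print(A_full.rref()[0])   # identity matrix -> rank 3
print(A_full.rank())      # 3
print(A_sing.rref()[0])   # [[1, 0, 3/2], [0, 1, 7/2], [0, 0, 0]] -> rank 2
print(A_sing.rank())      # 2
```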
6.4 Engineering Applications of Solving Non-Homogeneous Systems of Linear Equations
(a) Production planning: transform the given information into multiple linear algebraic equations to be solved simultaneously. The amounts of metal, plastic, and rubber needed for electrical component types #1, #2, and #3 are given (per component: 15, 17, 19 g of metal; 0.25, 0.33, 0.42 g of plastic; 1.0, 1.2, 1.6 g of rubber). If totals of 2120, 43.4, and 164 g of metal, plastic, and rubber respectively are available each day, how many components can be produced per day?
15Comp1 + 17Comp2 + 19Comp3 = 2120
0.25Comp1 + 0.33Comp2 + 0.42Comp3 = 43.4
1.0Comp1 + 1.2Comp2 + 1.6Comp3 = 164
[15 17 19; 0.25 0.33 0.42; 1.0 1.2 1.6]{Comp1; Comp2; Comp3} = {2120; 43.4; 164}
Note: It is important for students to convert the information into multiple linear algebraic equations & matrix format. Then the system can be solved by using GEwPP, Cramer's rule, etc., as sketched below.
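For instance, a short NumPy sketch solves the component-production system (np.linalg.solve uses a pivoted LU factorization internally, i.e. Gauss elimination with partial pivoting):

```python
import numpy as np

A = np.array([[15.0, 17.0, 19.0],      # metal (g per component)
              [0.25, 0.33, 0.42],      # plastic
              [1.0, 1.2, 1.6]])        # rubber
b = np.array([2120.0, 43.4, 164.0])    # daily totals available (g)

comp = np.linalg.solve(A, b)           # pivoted-LU solve (GE with pivoting)
print(comp)                            # [20. 40. 60.] components per day
```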
(b) Electrical system (mesh-current analysis):
[R_A + R_B, −R_B, −R_A; −R_B, R_B + R_C, −R_C; −R_A, −R_C, R_A + R_C + R_D]{I1; I2; I3} = {+V1; −V2; +V3}
(c) Mechanical vibration system:
[k1 + k2 − ω²m1, −k2; −k2, k2 + k3 − ω²m2]{X1; X2} = {F1; F2}
Source: https://www.brown.edu/Departments/Engineering/Courses/En4/Notes/vibrations_mdof/vibrations_mdof.htm
E.g. if the stiffness k, mass m, force F, and excitation frequency ω are given, estimate the output response of the 2-dof mass-spring vibration system.
(d) Dynamics system (coupled masses with acceleration a and tensions T, R):
[70 1 0; 60 −1 1; 40 0 −1]{a; T; R} = {70(9.81) − 10(5); 60(9.81) − 14(5); 40(9.81) − 17(5)}
MATRIX ALGEBRA FOR HOMOGENEOUS LINEAR ALGEBRAIC SYSTEM
WEEK 7: MATRIX ALGEBRA FOR HOMOGENEOUS LINEAR ALGEBRAIC SYSTEM
7.1 Solving Homogeneous System of Linear Equations
A homogeneous system has the form [C]{X} = {0}: a coefficient matrix [C] (which depends on the physical system or application it represents), an unknown vector {X}, and a zero right-hand side {B} = {0}.
(i) In this study, methods that solve for the total solution {X} from [C] & a zero {B} are out of scope.
• If |C| = 0 & {B} = {0}, then the solutions of {X} due to initial/boundary conditions are non-zero/non-trivial.
• If |C| ≠ 0 & {B} = {0}, then the solutions of {X} due to initial/boundary conditions are zero/trivial.
(ii) In this study, the main focus is to find the characteristics of the system in terms of the eigenvalues λ_i & the corresponding eigenvectors {x1; x2; x3}_λi, where i = 1, 2, …, n is the mode number. This is known as the eigenvalue/eigenvector problem. {x}_i is the corresponding eigenvector for each λ_i and {x}_i ≠ {0}, i.e. the non-trivial solutions.
Note: In general, an n-dof system has n eigenvalues & eigenvectors. For example, a 2-dof mass-spring system has 2 eigenvalues and 2 eigenvectors, while a 3-dof electrical circuit system has 3 eigenvalues and 3 eigenvectors.
7.2 Eigenvalue/Eigenvector Problem
Example: Link the eigenvalue and eigenvector to the characteristic of the given system.
−𝑘𝑥1 − 𝑘(𝑥1 − 𝑥2 ) = 𝑚1 𝑥̈ 1
𝑘(𝑥1 − 𝑥2 ) − 𝑘𝑥2 = 𝑚2 𝑥̈ 2
𝑤ℎ𝑒𝑟𝑒 𝑥1 = 𝐴1 𝑠𝑖𝑛(𝜔𝑡 + 𝜃1 ) , 𝑥̈ 1 = −𝜔2 𝑥1
𝑥2 = 𝐴2 𝑠𝑖𝑛(𝜔𝑡 + 𝜃2 ), 𝑥̈ 2 = −𝜔2 𝑥2
Given stiffness k = k1 = k2 = 200 N/m and mass m = m1 = m2 = 40 kg, substituting the assumed motion gives
[2k/m1 − ω², −k/m1; −k/m2, 2k/m2 − ω²]{x1; x2} = {0; 0}
[10 − ω², −5; −5, 10 − ω²]{x1; x2} = {0; 0}
Rearranging step by step (left), with the corresponding general eigenproblem form (right):
([10 −5; −5 10] − ω²[1 0; 0 1]){x1; x2} = {0; 0}          ([A] − λ_i[I]){x}_i = {0}
[10 −5; −5 10]{x1; x2} − ω²{x1; x2} = {0; 0}              [A]{x}_i − λ_i{x}_i = {0}
[10 −5; −5 10]{x1; x2} = ω²{x1; x2}                       [A]{x}_i = λ_i{x}_i
We find that the coefficient matrix [A] = [10 −5; −5 10], with eigenvalue λ = ω².
Note: Natural frequency and mode shape are important characteristics for a vibration system
that can be obtained from the eigenvalue/eigenvector problem.
Example: Solving the eigenvalue/eigenvector problem.
[10 − ω², −5; −5, 10 − ω²]{x1; x2} = {0; 0}
To have a non-trivial solution, {x1; x2} ≠ {0; 0}, the determinant must be zero:
|10 − ω², −5; −5, 10 − ω²| = 0
Let the eigenvalue λ = ω²:
|10 − λ, −5; −5, 10 − λ| = 0
(10 − λ)² − 25 = λ² − 20λ + 75 = (λ − 5)(λ − 15) = 0, so λ1 = 5 and λ2 = 15.
Since λ = ω², we obtain the natural frequencies of the system: ω1 = √5 and ω2 = √15.
At mode 1, ω1 = √5 or λ1 = 5, we obtain the unscaled eigenvector as follows:
[5 −5; −5 5]{x1; x2} = {0; 0} → (expand) → 5x1 − 5x2 = 0 and −5x1 + 5x2 = 0 → at x1 = 1, x2 = 1 for both eqns → {x1; x2}_1 = {1; 1}
A normalized eigenvector has a unique shape and a unique scale, obtained by dividing the eigenvector by its length (e.g. {1; 1}/√2).
At mode 2, ω2 = √15 or λ2 = 15, we obtain the unscaled eigenvector as follows:
[−5 −5; −5 −5]{x1; x2} = {0; 0} → (expand) → −5x1 − 5x2 = 0 for both rows → at x1 = 1, x2 = −1 for both eqns → {x1; x2}_2 = {1; −1}
Note: Depending on the question, students should know how to find the unscaled eigenvector/eigenspace/normalized eigenvector. In general, if not stated otherwise for manual calculation, providing the unscaled eigenvector (t = 1) is sufficient. Software usually returns the normalized eigenvector, as the sketch below illustrates.
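A minimal NumPy sketch for the 2-dof system (assuming NumPy; np.linalg.eig returns unit-length eigenvectors as columns):

```python
import numpy as np

A = np.array([[10.0, -5.0], [-5.0, 10.0]])
lam, vec = np.linalg.eig(A)   # eigenvalues & unit-length eigenvectors

print(lam)             # 5 and 15 (order not guaranteed)
print(np.sqrt(lam))    # natural frequencies sqrt(5) and sqrt(15)
print(vec)             # columns ~ {1; 1}/sqrt(2) and {1; -1}/sqrt(2)
```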
Example: Find the eigenvalues & eigenvectors of the following matrix.
A = [1 −3 3; 3 −5 3; 6 −6 4]
Eigenvalue/eigenvector problem: (A − λI)x = 0
([1 −3 3; 3 −5 3; 6 −6 4] − λ[1 0 0; 0 1 0; 0 0 1]){x1; x2; x3} = {0; 0; 0}
[1 − λ, −3, 3; 3, −5 − λ, 3; 6, −6, 4 − λ]{x1; x2; x3} = {0; 0; 0}
Since {x1; x2; x3} ≠ {0; 0; 0}, |1 − λ, −3, 3; 3, −5 − λ, 3; 6, −6, 4 − λ| = 0
(1 − λ)[(−5 − λ)(4 − λ) − 3(−6)] − (−3)[3(4 − λ) − 3(6)] + 3[3(−6) − (−5 − λ)(6)] = 0
(λ − 4)(λ² + 4λ + 4) = 0, so λ1 = λ2 = −2 (repeated) and λ3 = 4.
Cases 1 & 2: λ1 = λ2 = −2 (repeated eigenvalue case)
[3 −3 3; 3 −3 3; 6 −6 6]{x1; x2; x3}_λ=−2 = {0; 0; 0}
→ (augmented) → [3 −3 3 | 0; 3 −3 3 | 0; 6 −6 6 | 0]
→ (scale pivot element to 1: R1 → R1/3) → [1 −1 1 | 0; 3 −3 3 | 0; 6 −6 6 | 0]
→ (forward elimination: R2 → R2 − 3R1; R3 → R3 − 6R1) → [1 −1 1 | 0; 0 0 0 | 0; 0 0 0 | 0]
Note: The RREF shows rank 1 (i.e. 1 linearly independent vector), so there are two free variables.
x1 − x2 + x3 = 0 → x1 = x2 − x3
Eigenspace for λ = −2: {x1; x2; x3} = {x2 − x3; x2; x3} = t{1; 1; 0} + s{−1; 0; 1}, where t, s ∈ R (t = x2, s = x3).
Eigenvectors: {1; 1; 0} & {−1; 0; 1} for the repeated eigenvalues λ1 = −2 and λ2 = −2, respectively.
Case 3: λ3 = 4 (distinct eigenvalue case)
[−3 −3 3; 3 −9 3; 6 −6 0]{x1; x2; x3}_λ=4 = {0; 0; 0}
→ (augmented) → [−3 −3 3 | 0; 3 −9 3 | 0; 6 −6 0 | 0]
→ (scale pivot element to 1: R1 → −R1/3) → [1 1 −1 | 0; 3 −9 3 | 0; 6 −6 0 | 0]
→ (forward elimination: R2 → R2 − 3R1; R3 → R3 − 6R1) → [1 1 −1 | 0; 0 −12 6 | 0; 0 −12 6 | 0]
→ (scale pivot element to 1: R2 → −R2/12) → [1 1 −1 | 0; 0 1 −1/2 | 0; 0 −12 6 | 0]
→ (forward elimination: R3 → R3 + 12R2) → [1 1 −1 | 0; 0 1 −1/2 | 0; 0 0 0 | 0]
→ (eliminate above the pivot: R1 → R1 − R2) → [1 0 −1/2 | 0; 0 1 −1/2 | 0; 0 0 0 | 0]
Note: The RREF shows rank 2 (i.e. 2 linearly independent vectors), so there is one free variable.
x1 − (1/2)x3 = 0 → x1 = (1/2)x3
x2 − (1/2)x3 = 0 → x2 = (1/2)x3
Eigenspace for λ = 4: {x1; x2; x3} = {(1/2)x3; (1/2)x3; x3} = t{1/2; 1/2; 1}, where t ∈ R (t = x3).
Eigenvector: {x1; x2; x3}_λ=4 = {0.5; 0.5; 1}
For verification, each eigenvector should satisfy the eigenvalue/eigenvector problem Ax = λx:
[1 −3 3; 3 −5 3; 6 −6 4]{1; 1; 0} = −2{1; 1; 0};  [1 −3 3; 3 −5 3; 6 −6 4]{−1; 0; 1} = −2{−1; 0; 1};  [1 −3 3; 3 −5 3; 6 −6 4]{0.5; 0.5; 1} = 4{0.5; 0.5; 1}
Or we can combine all cases into a single matrix operation:
(a) Diagonalization
A = [1 −3 3; 3 −5 3; 6 −6 4], with eigenvectors {1; 1; 0} & {−1; 0; 1} at λ = −2 and {0.5; 0.5; 1} at λ = 4.
∴ The eigenvector matrix consists of all the eigenvectors as columns:
P = [{x}_λ1 {x}_λ2 {x}_λ3] = [1 −1 0.5; 1 0 0.5; 0 1 1]
D = P⁻¹AP = [1 −1 0.5; 1 0 0.5; 0 1 1]⁻¹ [1 −3 3; 3 −5 3; 6 −6 4] [1 −1 0.5; 1 0 0.5; 0 1 1] = [−2 0 0; 0 −2 0; 0 0 4]
Note: We can convert a non-diagonal matrix A to a diagonal matrix D whose diagonal elements are the eigenvalues of A: λ1 = −2, λ2 = −2, λ3 = 4.
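The diagonalization can be verified numerically (a NumPy sketch; P is the eigenvector matrix assembled above):

```python
import numpy as np

A = np.array([[1.0, -3.0, 3.0],
              [3.0, -5.0, 3.0],
              [6.0, -6.0, 4.0]])
P = np.array([[1.0, -1.0, 0.5],     # columns = eigenvectors of A
              [1.0, 0.0, 0.5],
              [0.0, 1.0, 1.0]])

D = np.linalg.inv(P) @ A @ P        # D = P^-1 A P
print(np.round(D, 10))              # diag(-2, -2, 4): the eigenvalues of A
```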
(b) Extension of Diagonalization
D = P⁻¹AP  ↔  A = PDP⁻¹
where A is expressed in terms of the eigenvector matrix P and the eigenvalue matrix D. Repeated substitution gives
A^k = P D^k P⁻¹, where D^k = [λ1 0 0; 0 λ2 0; 0 0 λ3]^k = [λ1^k 0 0; 0 λ2^k 0; 0 0 λ3^k], k ∈ R
(Comment: the power of a diagonal matrix can be computed easily!)
Note: This formula implies that changing the power of A changes the eigenvalue matrix while the eigenvector matrix remains the same. If A has eigenvalues λ1, λ2, λ3, then A^k has eigenvalues λ1^k, λ2^k, λ3^k; e.g. A⁻¹ has eigenvalues 1/λ1, 1/λ2, 1/λ3.
You can find A^k (the power of a matrix) from the eigenvalue & eigenvector matrices by using this formula. For
A = [1 −3 3; 3 −5 3; 6 −6 4]: eigenvalue matrix D = [−2 0 0; 0 −2 0; 0 0 4]; eigenvector matrix P = [1 −1 0.5; 1 0 0.5; 0 1 1]
Eigenvalue(kA) = kλ_i, where k ∈ R → Eigenvalue(5A) = 5λ1 = −10; 5λ2 = −10; 5λ3 = 20
Eigenvalue(A^k) = λ_i^k, where k ∈ R → Eigenvalue(A⁵) = λ1⁵ = (−2)⁵; λ2⁵ = (−2)⁵; λ3⁵ = (4)⁵
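As a numerical check (a NumPy sketch), A⁵ computed through the eigendecomposition matches the direct matrix power:

```python
import numpy as np

A = np.array([[1.0, -3.0, 3.0],
              [3.0, -5.0, 3.0],
              [6.0, -6.0, 4.0]])
P = np.array([[1.0, -1.0, 0.5],
              [1.0, 0.0, 0.5],
              [0.0, 1.0, 1.0]])
D = np.diag([-2.0, -2.0, 4.0])

A5 = P @ np.linalg.matrix_power(D, 5) @ np.linalg.inv(P)  # A^5 = P D^5 P^-1
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))      # True
```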
(c) Cayley-Hamilton Theorem
A = [1 −3 3; 3 −5 3; 6 −6 4] → [1 − λ, −3, 3; 3, −5 − λ, 3; 6, −6, 4 − λ]{x1; x2; x3} = {0; 0; 0}
• Characteristic eqn: f(λ) = |A − λI| = 0, where λ = eigenvalue:
p0 + p1λ + p2λ² + ⋯ + pnλⁿ = 0
λ³ − 12λ − 16 = 0
• The Cayley-Hamilton theorem states that A satisfies its own characteristic equation:
A³ − 12A − 16I = 0
where A is the matrix that has the eigenvalues λ. It shows that not only the eigenvalues satisfy the characteristic equation, but also the original coefficient matrix A.
You can find Aⁿ (the power of a matrix) from the characteristic equation alone by using this theorem. For example:
A² − 12I − 16A⁻¹ = 0 → A⁻¹ = (1/16)(A² − 12I)
A³ = 12A + 16I
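Both identities can be verified numerically (a NumPy sketch):

```python
import numpy as np

A = np.array([[1.0, -3.0, 3.0],
              [3.0, -5.0, 3.0],
              [6.0, -6.0, 4.0]])
I = np.eye(3)

# A satisfies its own characteristic equation: A^3 - 12A - 16I = 0.
print(np.allclose(np.linalg.matrix_power(A, 3) - 12*A - 16*I, 0))  # True

A_inv = (A @ A - 12*I) / 16        # A^-1 = (1/16)(A^2 - 12I)
print(np.allclose(A_inv @ A, I))   # True
```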
7.3 Engineering Application of Solving Homogeneous Systems of Linear Equations
So far, we have determined the eigenvalues & eigenvectors of a homogeneous system of linear equations. This information is useful for finding the eigenfunctions (i.e. each of a set of independent functions that are solutions of a given differential equation). For the 2-dof mass-spring system:
−kx1 − k(x1 − x2) = m1ẍ1
k(x1 − x2) − kx2 = m2ẍ2
where x1 = A1 sin(ωt + θ1), ẍ1 = −ω²x1
x2 = A2 sin(ωt + θ2), ẍ2 = −ω²x2
The total solution of the homogeneous system is the superposition of all the eigenfunctions:
{x1; x2} = c1{1; 1}sin(√5 t + θ1) + c2{1; −1}sin(√15 t + θ2)
where c1 & c2 are unknown constants that can be obtained from the initial or boundary conditions. You will learn this in the ODE chapter later.
Verification of eigenfunction #1: {x1; x2} = {1; 1}sin(√5 t + θ1); {ẍ1; ẍ2} = {−5; −5}sin(√5 t + θ1)
Substituting into the equations of motion (k = 200 N/m, m = 40 kg):
LHS (eqn 1) = −200 sin(√5 t + θ1); RHS (eqn 1) = 40(−5) sin(√5 t + θ1) = −200 sin(√5 t + θ1)
LHS (eqn 2) = −200 sin(√5 t + θ1); RHS (eqn 2) = −200 sin(√5 t + θ1)
∴ Since LHS = RHS, it is proven that eigenfunction #1 is one of the solutions.
Verification of eigenfunction #2: {x1; x2} = {1; −1}sin(√15 t + θ2); {ẍ1; ẍ2} = {−15; 15}sin(√15 t + θ2)
LHS (eqn 1) = −600 sin(√15 t + θ2); RHS (eqn 1) = 40(−15) sin(√15 t + θ2) = −600 sin(√15 t + θ2)
LHS (eqn 2) = 600 sin(√15 t + θ2); RHS (eqn 2) = 600 sin(√15 t + θ2)
∴ Since LHS = RHS, it is proven that eigenfunction #2 is one of the solutions.