
Chapter 4


MATRIX ALGEBRA FOR NON-HOMOGENEOUS LINEAR ALGEBRAIC SYSTEM
WEEK 6: MATRIX ALGEBRA FOR NON-HOMOGENEOUS LINEAR ALGEBRAIC SYSTEM
6.1 INTRODUCTION

A matrix is an array of mn elements (where m and n are integers) arranged in m rows and n columns.
The differences between a matrix, a column vector, and a row vector are shown in Table 6.1.

Table 6.1 Matrix, column vector, and row vector.

Matrix:
    A = [a11 a12 a13 ⋯ a1n
         a21 a22 a23 ⋯ a2n
         a31 a32 a33 ⋯ a3n
          ⋮   ⋮   ⋮  ⋱  ⋮
         am1 am2 am3 ⋯ amn]        Size(A) = m × n

Column vector (i.e. matrix with one column):
    C = {c11
         c21
         c31
          ⋮
         cm1}                      Size(C) = m × 1

Row vector (i.e. matrix with one row):
    R = {r11 r12 r13 ⋯ r1n}        Size(R) = 1 × n

where a_mn is the element of the matrix at the mth row and nth column. If m = n, the matrix is known as a square matrix; if m ≠ n, it is a non-square matrix.

The common notation of matrix and vector is shown in Table 6.2:

Table 6.2 Common notation of matrix and vector.

Matrix:
    upper-case non-italic bold letter, e.g. A = [1 2; 3 4]
    symbol in square (box) brackets, e.g. [A] = [1 2; 3 4]
    (Here and below, semicolons separate the rows of a matrix, so [1 2; 3 4] is a 2 × 2 matrix.)

Vector:
    upper-case italic bold letter, e.g. 𝑨 = {1; 3}
    symbol in curly brackets, e.g. {A} = {1; 3}

Table 6.3 Types of matrices.

Zero/null matrix (all elements are 0):
    [0 0 0; 0 0 0; 0 0 0]

Symmetric matrix (a_ij = a_ji):
    [5 1 2; 1 3 7; 2 7 8]

Diagonal matrix (all elements off the main diagonal are 0):
    [5 0 0; 0 3 0; 0 0 8]

Identity/unit matrix (diagonal matrix with all diagonal elements = 1):
    [1 0 0; 0 1 0; 0 0 1]

Upper triangular matrix (all elements below the main diagonal are 0):
    [2 -5 6; 0 3 8; 0 0 1]

Lower triangular matrix (all elements above the main diagonal are 0):
    [2 0 0; 9 3 0; 20 -3 1]

Banded matrix (all elements are 0 except for a band centered on the main diagonal):
    [a11 a12 a13 0 0; a21 a22 a23 a24 0; a31 a32 a33 a34 a35; 0 a42 a43 a44 a45; 0 0 a53 a54 a55]

Tridiagonal matrix (a banded matrix with a bandwidth of 3):
    [1 3 0 0; 5 2 9 0; 0 6 5 1; 0 0 -3 -9]

Anti-symmetric/skew-symmetric matrix (a_ij = -a_ji, which forces the diagonal elements to be 0):
    [0 1 -2; -1 0 7; 2 -7 0]

The basic operations of matrices, such as trace, equality, addition/subtraction, scalar multiplication, transpose, multiplication, determinant, cofactor, adjoint, and inverse, are provided in Table 6.4.

Table 6.4 Basic operations of matrices.

Trace
    F = [4 13 3; -2 19 1; 3 2 0]
    Trace(F) = summation of the diagonal elements = 4 + 19 + 0 = 23

Equality
    A = [4 13; -2 19];  B = [4 13; -2 19];  C = [4 13 2; -2 19 1]
    ∴ A = B;  A ≠ C

Addition/Subtraction
    D = A + B = [8 26; -4 38]
    E = A - B = [0 0; 0 0]

Scalar Multiplication
    2D = [16 52; -8 76]

Transpose, [•]ᵀ
    C = [4 13 2; -2 19 1];  Cᵀ = [4 -2; 13 19; 2 1]

Matrix Multiplication
    [3 1; 8 6; 0 4] [5 9; 7 2] = [22 29; 82 84; 28 8]
      (size 3×2)      (size 2×2)     (size 3×2)
    Note: AB ≠ BA in general.
    [5 9; 7 2] [3 1; 8 6; 0 4] = error (because of non-equal interior dimensions)
     (size 2×2)    (size 3×2)

Determinant, |•|
    Note: |•| denotes the determinant here, not the absolute value.
    A = [4 13; -2 19];  |A| = 4(19) - (-2)(13) = 102
    F = [4 13 3; -2 19 1; 3 2 0]
    |F| = 4|19 1; 2 0| - 13|-2 1; 3 0| + 3|-2 19; 3 2| = -152
    G = [4 13 3 3; -2 19 1 1; 3 2 0 0; 0 2 1 0]
    |G| = 4|19 1 1; 2 0 0; 2 1 0| - 13|-2 1 1; 3 0 0; 0 1 0| + 3|-2 19 1; 3 2 0; 0 2 0| - 3|-2 19 1; 3 2 0; 0 2 1|
        = 4(2) - 13(3) + 3(6) - 3(-55) = 152
    Note: it is inefficient to calculate the determinant manually for 4×4 matrices and above.

Cofactor & Adjoint
    Note: adjoint = cofactorᵀ.
    A = [4 13; -2 19];  cofactor(A) = [|19| -|-2|; -|13| |4|] = [19 2; -13 4]
    adjoint(A) = (cofactor(A))ᵀ = [19 2; -13 4]ᵀ = [19 -13; 2 4]
    F = [4 13 3; -2 19 1; 3 2 0]
    cofactor(F) = [+|19 1; 2 0| -|-2 1; 3 0| +|-2 19; 3 2|; -|13 3; 2 0| +|4 3; 3 0| -|4 13; 3 2|; +|13 3; 19 1| -|4 3; -2 1| +|4 13; -2 19|]
                = [-2 3 -61; 6 -9 31; -44 -10 102]
    adjoint(F) = [-2 6 -44; 3 -9 -10; -61 31 102]
    G = [4 13 3 3; -2 19 1 1; 3 2 0 0; 0 2 1 0]
    cofactor(G) = [2 -3 6 55; -6 9 -18 -13; 44 10 -20 -82; 0 0 152 -152]
    adjoint(G) = [2 -6 44 0; -3 9 10 0; 6 -18 -20 152; 55 -13 -82 -152]
    Note: it is inefficient to calculate the cofactor and adjoint manually for 4×4 matrices and above.

Inverse, [•]⁻¹
    A = [4 13; -2 19];  A⁻¹ = (1/|A|) adjoint(A) = (1/102)[19 -13; 2 4]
    F⁻¹ = (1/|F|) adjoint(F) = (1/-152)[-2 6 -44; 3 -9 -10; -61 31 102]
    G⁻¹ = (1/|G|) adjoint(G) = (1/152)[2 -6 44 0; -3 9 10 0; 6 -18 -20 152; 55 -13 -82 -152]

Augmentation
    0.1x1 + 7x2 - 0.3x3 = -19.3
    3x1 - 0.1x2 - 0.2x3 = 7.85
    0.3x1 - 0.2x2 + 10x3 = 71.4

    (i) Conventional matrix form:
    [0.1 7 -0.3; 3 -0.1 -0.2; 0.3 -0.2 10] {x1; x2; x3} = {-19.3; 7.85; 71.4}

    (ii) Augmented matrix form:
    [0.1 7 -0.3 | -19.3; 3 -0.1 -0.2 | 7.85; 0.3 -0.2 10 | 71.4]
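These operations become tedious by hand for larger matrices, so it helps to check them numerically. Below is a minimal sketch in Python with NumPy (an assumption of these notes; any numerical package would do), using the matrices from Table 6.4:

```python
import numpy as np

F = np.array([[4, 13, 3], [-2, 19, 1], [3, 2, 0]])
A = np.array([[4, 13], [-2, 19]])

print(np.trace(F))           # 23
print(A.T)                   # transpose
print(np.linalg.det(A))      # 102.0 (up to floating-point round-off)
print(np.linalg.det(F))      # -152.0
print(np.linalg.inv(A))      # (1/102) * [[19, -13], [2, 4]]

# Matrix multiplication: (3x2)(2x2) is valid, (2x2)(3x2) is not
P = np.array([[3, 1], [8, 6], [0, 4]])
Q = np.array([[5, 9], [7, 2]])
print(P @ Q)                 # [[22 29], [82 84], [28 8]]
# Q @ P would raise ValueError: non-equal interior dimensions
```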

6.2 Solving Non-Homogeneous System of Linear Equations

Multicomponent systems result in n sets of mathematical equations that must be solved simultaneously. They can be represented in the matrix format [A]{X} = {B}. If {B} ≠ {0}, the system is known as a non-homogeneous system of linear equations. Several methods for solving the unknown {X} from [A] and the non-zero {B} are discussed next.

Example with n = 2 (2 eqns. are given to solve x1 and x2):
    4x1 + 13x2 = 8
    -2x1 + 19x2 = 2
    Coefficient matrix [A] = [4 13; -2 19];  unknown {X} = {x1; x2};  non-zero {B} = {8; 2}

Example with n = 3 (3 eqns. are given to solve x1, x2, and x3):
    0.5x1 + 2.5x2 - 9x3 = -6
    -4.5x1 + 3.5x2 - 2x3 = 5
    -8x1 - 9x2 + 22x3 = 2
    Coefficient matrix [A] = [0.5 2.5 -9; -4.5 3.5 -2; -8 -9 22];  unknown {X} = {x1; x2; x3};  non-zero {B} = {-6; 5; 2}

For n ≤ 3, methods frequently used to solve the non-homogeneous system of linear equations are given below:

(i) Matrix inversion (moderately efficient for n = 3 and highly efficient for n = 2)
(ii) Graphical method (less efficient, but useful for visualizing and enhancing intuition)
(iii) Cramer's rule (highly efficient for n ≤ 3 - Main Focus)
(iv) Method of elimination (less efficient)

However, methods (i)-(iv) are inefficient for n > 3, thus more advanced methods are introduced:

(a) Gaussian Elimination (Naïve vs Partial Pivoting) --- Main Focus


(b) LU Decomposition --- Out of scope
(c) Thomas algorithm --- Out of scope
(d) Gauss Seidel Method --- Out of scope

6.2.1 Matrix Inversion Approach

[𝐴]{𝑋} = {𝐵}

If [A] is a square and non-singular matrix, [A][A]⁻¹ = [A]⁻¹[A] = [I], so

{X} = [A]⁻¹{B}

A = [4 13; -2 19];  A⁻¹ = (1/|A|) adjoint(A) = (1/102)[19 -13; 2 4]

{X} = A⁻¹{B} = (1/102)[19 -13; 2 4]{8; 2} = {126/102; 24/102}
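A quick numerical check of the inversion approach, sketched in Python/NumPy (in practice np.linalg.solve is preferred, since explicitly forming the inverse is slower and less accurate):

```python
import numpy as np

A = np.array([[4.0, 13.0], [-2.0, 19.0]])
B = np.array([8.0, 2.0])

X = np.linalg.inv(A) @ B      # {126/102; 24/102}
print(X)                      # approx [1.2353 0.2353]
print(np.linalg.solve(A, B))  # same result without forming A^-1
```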

6.2.2 Graphical Method

Rearrange the equations into linear plot format and then plot it.

Original linear equations:
    a11 x1 + a12 x2 = b1        3x1 + 2x2 = 18
    a21 x1 + a22 x2 = b2        -x1 + 2x2 = 2

Linear plot format, x2 = m x1 + c, where m = slope and c = intercept:
    x2 = -(a11/a12) x1 + (b1/a12)        x2 = -(3/2) x1 + (18/2)
    x2 = -(a21/a22) x1 + (b2/a22)        x2 = -(-1/2) x1 + (2/2)

Using the graphical method, the solution that satisfies both equations is the intersection point.

For a singular system, the slopes of the equations are equal (equivalently, the determinant is zero), and this leads to

(a) the no-solution case, when there is no intersection between the lines, or

(b) the infinite-solutions case, when there are infinitely many intersection points between the lines.

(a) No solution case:
    x2 = +(1/2)x1 + (1)
    x2 = +(1/2)x1 + (1/2)
    Difference of slopes = 1/2 - 1/2 = 0; the intercepts differ, so the parallel lines never intersect.

(b) Infinite solutions case:
    x2 = +(1/2)x1 + (1)
    x2 = +(1/2)x1 + (1)
    Difference of slopes = 1/2 - 1/2 = 0; the intercepts are also equal, so the two equations describe the same line.

For an ill-conditioned system (also known as an ill-posed system), the slopes of the equations are almost equal (equivalently, the determinant is close to zero), and this leads to

(c) the many-solutions case, which is sensitive to round-off error.

(c) Many solutions case:
    x2 = +(2.3/5)x1 + (1.1)
    x2 = +(1/2)x1 + (1)
    Difference of slopes = |0.46 - 0.5| = 0.04 ≈ 0 (nearly parallel lines).
2

6.2.3 Cramer’s Rule


For the system

    [a11 a12 a13; a21 a22 a23; a31 a32 a33] {x1; x2; x3} = {b1; b2; b3}

Cramer's rule gives each unknown as a ratio of determinants, where the numerator replaces the corresponding column of [A] with {B}:

    x1 = |b1 a12 a13; b2 a22 a23; b3 a32 a33| / |A|
    x2 = |a11 b1 a13; a21 b2 a23; a31 b3 a33| / |A|
    x3 = |a11 a12 b1; a21 a22 b2; a31 a32 b3| / |A|

For example,

    [0.3 0.52 1; 0.5 1 1.9; 0.1 0.3 0.5] {x1; x2; x3} = {-0.01; 0.67; -0.44}

    x1 = |-0.01 0.52 1; 0.67 1 1.9; -0.44 0.3 0.5| / |0.3 0.52 1; 0.5 1 1.9; 0.1 0.3 0.5| = -14.9
    x2 = |0.3 -0.01 1; 0.5 0.67 1.9; 0.1 -0.44 0.5| / |0.3 0.52 1; 0.5 1 1.9; 0.1 0.3 0.5| = -29.5
    x3 = |0.3 0.52 -0.01; 0.5 1 0.67; 0.1 0.3 -0.44| / |0.3 0.52 1; 0.5 1 1.9; 0.1 0.3 0.5| = 19.8

Limitation: impractical for n > 3.
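Cramer's rule translates directly into a short routine: replace each column of [A] with {B} and take the ratio of determinants. A minimal sketch in Python/NumPy (the function name cramer is just for illustration):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule (practical only for small n)."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                      # replace the i-th column with b
        x[i] = np.linalg.det(Ai) / det_A  # x_i = |Ai| / |A|
    return x

A = np.array([[0.3, 0.52, 1.0], [0.5, 1.0, 1.9], [0.1, 0.3, 0.5]])
b = np.array([-0.01, 0.67, -0.44])
print(cramer(A, b))                       # approx [-14.9, -29.5, 19.8]
```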

6.2.4 Method of Elimination (Or Substitution Method)

    [0.3 0.52 1; 0.5 1 1.9; 0.1 0.3 0.5] {x1; x2; x3} = {-0.01; 0.67; -0.44}

• Step 1: Express x1 = ⋯ in terms of x2 and x3 from the 1st eqn.

• Step 2: Substitute x1 = ⋯ into the 2nd and 3rd eqns; obtain x2 = ⋯ in terms of x3.

• Step 3: Substitute x2 = ⋯ into the 3rd eqn; obtain the x3 solution.

• Step 4: Back substitute to obtain the x1 and x2 solutions.

Limitation: extremely tedious to solve manually. However, the elimination approach can be extended and made more systematic to improve efficiency, as in the Gauss elimination method.

6.2.5 Naïve Gauss Elimination

It is an extension of the method of elimination with a systematic scheme of forward elimination and back substitution.

Forward elimination #1

    [a11 a12 a13; a21 a22 a23; a31 a32 a33] {x1; x2; x3} = {b1; b2; b3}
    → [a11 a12 a13; 0 a22′ a23′; 0 a32′ a33′] {x1; x2; x3} = {b1; b2′; b3′}

R1 is the pivot equation, where a11 is the pivot element used to turn a21 and a31 into 0:
    R2′ = R2 - R1 × f21, where factor f21 = a21/a11; for example, a21′ = a21 - a11 f21 = 0
    R3′ = R3 - R1 × f31, where factor f31 = a31/a11; for example, a31′ = a31 - a11 f31 = 0

Forward elimination #2

    [a11 a12 a13; 0 a22′ a23′; 0 a32′ a33′] {x1; x2; x3} = {b1; b2′; b3′}
    → [a11 a12 a13; 0 a22′ a23′; 0 0 a33′′] {x1; x2; x3} = {b1; b2′; b3′′}

R2′ is the pivot equation, where a22′ is the pivot element used to turn a32′ into 0:
    R3′′ = R3′ - R2′ × f32, where factor f32 = a32′/a22′; for example, a32′′ = a32′ - a22′ f32 = 0

Note: ′ and ′′ indicate values changed after the first and second elimination procedures, respectively.

Back Substitution

Starting from the upper-triangular system

    [a11 a12 a13; 0 a22′ a23′; 0 0 a33′′] {x1; x2; x3} = {b1; b2′; b3′′}

Back substitution #1:  x3 = b3′′ / a33′′
Back substitution #2:  x2 = (b2′ - a23′ x3) / a22′
Back substitution #3:  x1 = (b1 - a12 x2 - a13 x3) / a11

For example:

    [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10] {x1; x2; x3} = {7.85; -19.3; 71.4}

Forward elimination #1 (R2′ = R2 - R1 × 0.1/3; R3′ = R3 - R1 × 0.3/3):
    [3 -0.1 -0.2; 0 7.00333 -0.293333; 0 -0.190000 10.0200] {x1; x2; x3} = {7.85; -19.5617; 70.6150}

Forward elimination #2 (R3′′ = R3′ - R2′ × (-0.19/7.00333)):
    [3 -0.1 -0.2; 0 7.00333 -0.293333; 0 0 10.0120] {x1; x2; x3} = {7.85; -19.5617; 70.0843}

Back substitution #1:  x3 = 70.0843/10.0120 = 7.0000
Back substitution #2:  x2 = (-19.5617 - (-0.293333)(7.0000))/7.00333 = -2.50000
Back substitution #3:  x1 = (7.85 - (-0.1)(-2.5) - (-0.2)(7))/3 = 3.00000
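The forward elimination and back substitution steps above map one-to-one onto code. A minimal naive Gauss elimination sketch in Python/NumPy (no pivoting, exactly as described, so it shares the limitations discussed next):

```python
import numpy as np

def naive_gauss(A, b):
    """Naive Gauss elimination: forward elimination then back substitution.
    No pivoting, so it fails when a pivot element is (close to) zero."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):                    # forward elimination
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]             # factor f_ik = a_ik / a_kk
            A[i, k:] -= f * A[k, k:]
            b[i] -= f * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):            # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]])
b = np.array([7.85, -19.3, 71.4])
print(naive_gauss(A, b))                      # approx [ 3.  -2.5  7. ]
```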

Limitation: naive Gauss elimination suffers from the division-by-zero issue, and its solution can be sensitive to round-off error.

For example (division by zero):

    [0 2 3; 4 6 7; 2 1 6] {x1; x2; x3} = {8; -3; 5}

Forward elimination #1:  R2′ = R2 - R1 × (4/0) → Error!  (the pivot element a11 = 0)
                         R3′ = R3 - R1 × (2/0) → Error!

For example (round-off error):

    [2 100000; 1 1] {x1; x2} = {100000; 2}

Forward elimination #1 (R2′ = R2 - R1 × 1/2):
    [2 100000; 0 -49999] {x1; x2} = {100000; -49998}

Back substitution #1:  x2 = -49998/-49999 ≈ 1 (rounded off)
Back substitution #2:  x1 = (100000 - 100000 x2)/2 = 0

Verification of solution:

LHS:  [2 100000; 1 1]{0; 1} = {100000; 1}        RHS:  {100000; 2}

Percentage of error:
    {% error_b1; % error_b2} = {(|100000 - 100000|/100000) × 100%; (|2 - 1|/2) × 100%} = {0%; 50%}

∴ LHS ≠ RHS. Thus, {x1; x2} = {0; 1} is a poor solution, as it differs from the actual solution. The solution is sensitive to round-off error, which leads to a large discrepancy.

6.2.5 Gauss Elimination with Partial Pivoting (GEwPP)

The limitation of naive Gauss elimination can be overcome by using GEwPP, which consists of a scaling analysis and a pivoting strategy:

(a) Scaling analysis: indicates whether pivoting is required to avoid the division-by-(near-)zero issue.

    [2 100000; 1 1] {x1; x2} = {100000; 2}
    → (scale the coefficient matrix so each row has a maximum value of 1)
    [0.00002 1; 1 1] {x1; x2} = {1; 2}    (the pivot element is the smaller one)

Rule of thumb: if the scaled pivot element is smaller than the elements of the other rows, then pivoting is needed.

(b) Pivoting strategy: switch rows/columns to avoid a pivot element that is zero or close to zero.

(i) Naive Gauss elimination - Gauss elimination (GE) without a pivoting strategy:

    [2 100000; 1 1] {x1; x2} = {100000; 2}
    Note: previously we kept the original formulation and got a poor solution after solving it.

(ii) Gauss elimination with partial pivoting (GEwPP) - switch rows so that the largest element becomes the pivot element (Main Focus):

    [0.00002 1; 1 1] {x1; x2} = {1; 2}
    → (partial pivoting)
    [1 1; 0.00002 1] {x1; x2} = {2; 1}    (the pivot element is the largest)

Example of GEwPP

    [2 100000; 1 1] {x1; x2} = {100000; 2}

Scaling:            [0.00002 1; 1 1] {x1; x2} = {1; 2}      (scaling indicates partial pivoting is needed)
Partial pivoting:   [1 1; 0.00002 1] {x1; x2} = {2; 1}      (the pivot element is the largest after PP)
Forward elimination: [1 1; 0 1] {x1; x2} = {2; 1}           (round-off occurs when 1 - 0.00002 is approximated as 1)
Back substitution #1:  x2 = 1
Back substitution #2:  x1 = 2 - x2 = 1

Verification of solution:

LHS:  [2 100000; 1 1]{1; 1} = {100002; 2}        RHS:  {100000; 2}

Percentage of error:
    {% error_b1; % error_b2} = {(|100000 - 100002|/100000) × 100%; (|2 - 2|/2) × 100%} = {0.002%; 0%}

∴ LHS ≈ RHS. Thus, {x1; x2} = {1; 1} is an accurate solution, as it is close to the actual solution. The solution is far less sensitive to round-off error when GEwPP is used, compared with naive GE.
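A sketch of GEwPP in Python/NumPy, extending the naive routine with the scaling analysis and row switching described above (a teaching sketch rather than a production solver; libraries delegate this to LAPACK):

```python
import numpy as np

def gauss_partial_pivot(A, b):
    """Gauss elimination with partial pivoting and scaled pivot selection."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    s = np.abs(A).max(axis=1)                 # scale factor of each row
    for k in range(n - 1):
        # pick the row whose scaled pivot element is largest, then switch rows
        p = k + int(np.argmax(np.abs(A[k:, k]) / s[k:]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
            s[[k, p]] = s[[p, k]]
        for i in range(k + 1, n):             # forward elimination
            f = A[i, k] / A[k, k]
            A[i, k:] -= f * A[k, k:]
            b[i] -= f * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):            # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 100000.0], [1.0, 1.0]])
b = np.array([100000.0, 2.0])
print(gauss_partial_pivot(A, b))              # approx [1.00002, 0.99998]
```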

Determinant analysis can be done before GEwPP to determine whether the system is well-conditioned or singular. Precaution: scaling is performed to standardize the matrix before calculating the determinant.

Well-conditioned system, |•| ≠ 0:
    -1x1 + 1x2 + 2x3 = 2
    3x1 - 1x2 + 1x3 = 6
    -1x1 + 3x2 + 4x3 = 4
    Scaled determinant: |-1/2 1/2 1; 1 -1/3 1/3; -1/4 3/4 1| ≈ 0.4 ≠ 0
    → A unique solution for {x1; x2; x3} exists.

Singular system, |•| = 0:
    -1x1 + 1x2 + 2x3 = 2            -1x1 + 1x2 + 2x3 = 2
    3x1 - 1x2 + 1x3 = 6             3x1 - 1x2 + 1x3 = 6
    -2x1 + 2x2 + 4x3 = 4            -2x1 + 2x2 + 4x3 = 8
    Scaled determinant: |-1/2 1/2 1; 1 -1/3 1/3; -1/2 1/2 1| = 0
    → We get no solution or infinite solutions for {x1; x2; x3}; GEwPP can tell which of the two it is.

Rule of thumb: in this study, a scaled determinant with -0.1 ≤ |•| ≤ 0.1 is considered an ill-conditioned system.

Example: Solving a well-conditioned system using GEwPP

    [-1 1 2; 3 -1 1; -1 3 4] {x1; x2; x3} = {2; 6; 4}

Scaling (divide each row by its largest-magnitude element):
    [-1/2 1/2 1 | 1; 1 -1/3 1/3 | 2; -1/4 3/4 1 | 1]

Pivoting (swap R1 and R2 so the largest scaled element is the pivot):
    [1 -1/3 1/3 | 2; -1/2 1/2 1 | 1; -1/4 3/4 1 | 1]

Forward elimination #1 (R2′ = R2 - R1 × (-1/2); R3′ = R3 - R1 × (-1/4)):
    [1 -1/3 1/3 | 2; 0 1/3 7/6 | 2; 0 2/3 13/12 | 1.5]

Scaling rows 2 and 3:
    [1 -1/3 1/3 | 2; 0 2/7 1 | 12/7; 0 8/13 1 | 18/13]

Pivoting (swap rows 2 and 3, since 8/13 > 2/7):
    [1 -1/3 1/3 | 2; 0 8/13 1 | 18/13; 0 2/7 1 | 12/7]

Forward elimination #2 (R3′′ = R3′ - R2′ × (2/7)/(8/13)):
    [1 -1/3 1/3 | 2; 0 8/13 1 | 18/13; 0 0 15/28 | 15/14]

Back substitution:
    x3 = (15/14)/(15/28) = 2
    x2 = (18/13 - (1)x3)/(8/13) = -1
    x1 = (2 - (-1/3)x2 - (1/3)x3)/1 = 1

∴ {x1; x2; x3} = {1; -1; 2} is an accurate solution, as LHS = RHS (verification).

Example: Solving a singular system (infinite solutions case) using GEwPP

    [-1 1 2; 3 -1 1; -2 2 4] {x1; x2; x3} = {2; 6; 4}

Scaling:
    [-1/2 1/2 1 | 1; 1 -1/3 1/3 | 2; -2/4 2/4 1 | 1]

Pivoting (swap R1 and R2):
    [1 -1/3 1/3 | 2; -1/2 1/2 1 | 1; -2/4 2/4 1 | 1]

Forward elimination #1 (R2′ = R2 - R1 × (-1/2); R3′ = R3 - R1 × (-2/4)):
    [1 -1/3 1/3 | 2; 0 1/3 7/6 | 2; 0 1/3 7/6 | 2]

Forward elimination #2 (R3′′ = R3′ - R2′ × (1/3)/(1/3)):
    [1 -1/3 1/3 | 2; 0 1/3 7/6 | 2; 0 0 0 | 0]

Back substitution: the last row gives 0x3 = 0, which any x3 satisfies, so let
    x3 = t, where -∞ < t < ∞
    x2 = (2 - (7/6)t)/(1/3) = 6 - 3.5t
    x1 = (2 - (-1/3)x2 - (1/3)t)/1 = 4 - 1.5t

∴ {x1; x2; x3} = {4 - 1.5t; 6 - 3.5t; t}: infinite solutions that satisfy the eqns.

Example: Solving a singular system (no solution case) using GEwPP

    [-1 1 2; 3 -1 1; -2 2 4] {x1; x2; x3} = {2; 6; 8}

Scaling:
    [-1/2 1/2 1 | 1; 1 -1/3 1/3 | 2; -2/4 2/4 1 | 2]

Pivoting (swap R1 and R2):
    [1 -1/3 1/3 | 2; -1/2 1/2 1 | 1; -2/4 2/4 1 | 2]

Forward elimination #1 (R2′ = R2 - R1 × (-1/2); R3′ = R3 - R1 × (-2/4)):
    [1 -1/3 1/3 | 2; 0 1/3 7/6 | 2; 0 1/3 7/6 | 3]

Forward elimination #2 (R3′′ = R3′ - R2′ × (1/3)/(1/3)):
    [1 -1/3 1/3 | 2; 0 1/3 7/6 | 2; 0 0 0 | 1]

Back substitution: the last row gives 0x3 = 1, a contradiction.
∴ No solution satisfies the eqns.

6.3 Row Echelon Form, Reduced Row Echelon Form, Rank, & Linear Dependency

After GEwPP, the coefficient matrix is in Row Echelon Form (REF). From the previous examples we obtain:

Well-conditioned system, |•| ≠ 0:
    -1x1 + 1x2 + 2x3 = 2
    3x1 - 1x2 + 1x3 = 6
    -1x1 + 3x2 + 4x3 = 4

    Coefficient matrix: [A] = [-1 1 2; 3 -1 1; -1 3 4]
    Coefficient matrix after GEwPP is in REF:
    [A]_GEwPP = [1 -1/3 1/3; 0 8/13 1; 0 0 15/28]

Singular system, |•| = 0:
    -1x1 + 1x2 + 2x3 = 2
    3x1 - 1x2 + 1x3 = 6
    -2x1 + 2x2 + 4x3 = 4

    Coefficient matrix: [A] = [-1 1 2; 3 -1 1; -2 2 4]
    Coefficient matrix after GEwPP is in REF:
    [A]_GEwPP = [1 -1/3 1/3; 0 1/3 7/6; 0 0 0]

REF has the following characteristics:

• Zero row(s), if any, are always below the non-zero rows.
• The pivot element of a lower non-zero row must be to the right of the pivot element above it.
• REF is non-unique; it can be in different scales.

REF can be further reduced to Reduced Row Echelon Form (RREF) by using Gauss-Jordan elimination with partial pivoting (GJEwPP), as shown in the examples below:

Well-conditioned case:
    [A]_GEwPP = [1 -1/3 1/3; 0 8/13 1; 0 0 15/28]
    → (scale the pivot elements to 1: R2 → R2 × 13/8; R3 → R3 × 28/15)
    [1 -1/3 1/3; 0 1 13/8; 0 0 1]
    → (eliminate above the pivots: R1 → R1 - R2 × (-1/3))
    [1 0 7/8; 0 1 13/8; 0 0 1]
    → (R1 → R1 - R3 × 7/8; R2 → R2 - R3 × 13/8)
    [1 0 0; 0 1 0; 0 0 1]

Singular case:
    [A]_GEwPP = [1 -1/3 1/3; 0 1/3 7/6; 0 0 0]
    → (scale the pivot elements to 1: R2 → R2 × 3)
    [1 -1/3 1/3; 0 1 3.5; 0 0 0]
    → (eliminate above the pivots: R1 → R1 - R2 × (-1/3))
    [1 0 1.5; 0 1 3.5; 0 0 0]

RREF has the following characteristics:

• It is also a REF.
• It is unique; each pivot element is scaled to 1.
• Every element above a pivot element is 0.

Once the RREF is obtained, the rank of the matrix can be evaluated by counting the number of non-zero rows of the RREF. Note: the rank is the maximum number of linearly independent vectors.

Well-conditioned system, |•| ≠ 0:
    REF = [1 -1/3 1/3; 0 8/13 1; 0 0 15/28]
    RREF = [1 0 0; 0 1 0; 0 0 1]
    Rank = 3

    All 3 given equations are linearly independent; therefore finding 3 unknowns from 3 linearly independent equations is possible.
        -1x1 + 1x2 + 2x3 = 2
        3x1 - 1x2 + 1x3 = 6
        -1x1 + 3x2 + 4x3 = 4
    ∴ [A] is a full-rank matrix.

Singular system, |•| = 0:
    REF = [1 -1/3 1/3; 0 1/3 7/6; 0 0 0]
    RREF = [1 0 1.5; 0 1 3.5; 0 0 0]
    Rank = 2

    Only 2 of the 3 given equations are linearly independent; therefore finding 3 unknowns from 2 linearly independent equations is not possible.
        -1x1 + 1x2 + 2x3 = 2
        3x1 - 1x2 + 1x3 = 6
        -2x1 + 2x2 + 4x3 = 4
    ∴ [A] is a rank-deficient matrix.

As a rule of thumb, n linearly independent equations are required to solve for n unknowns. To solve for 3 unknowns, if we have fewer than 3 linearly independent equations, i.e. more unknowns than independent equations, then we get the singular-system issue.
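Rank and RREF can also be checked numerically; a sketch assuming NumPy and SymPy are available (SymPy's rref returns exact fractions, so 1.5 appears as 3/2):

```python
import numpy as np
import sympy as sp

A_full = np.array([[-1, 1, 2], [3, -1, 1], [-1, 3, 4]])
A_def  = np.array([[-1, 1, 2], [3, -1, 1], [-2, 2, 4]])

print(np.linalg.matrix_rank(A_full))  # 3 -> full-rank matrix
print(np.linalg.matrix_rank(A_def))   # 2 -> rank-deficient matrix

rref, pivots = sp.Matrix(A_def).rref()
print(rref)    # Matrix([[1, 0, 3/2], [0, 1, 7/2], [0, 0, 0]])
print(pivots)  # (0, 1) -> pivot columns
```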

6.4 Engineering Application of Non-Homogeneous System of Linear Equations

(a) Transform information into multiple linear algebraic equations to be solved simultaneously.

The amounts of metal, plastic, and rubber needed for electrical component types #1, #2, and #3 are given per component. If totals of 2120, 43.4, and 164 g of metal, plastic, and rubber, respectively, are available each day, how many components of each type can be produced per day?

    15Comp1 + 17Comp2 + 19Comp3 = 2120
    0.25Comp1 + 0.33Comp2 + 0.42Comp3 = 43.4
    1.0Comp1 + 1.2Comp2 + 1.6Comp3 = 164

    [15 17 19; 0.25 0.33 0.42; 1.0 1.2 1.6] {Comp1; Comp2; Comp3} = {2120; 43.4; 164}

Note: it is important for the student to convert the given information into multiple linear algebraic equations and matrix format. The system can then be solved by using GEwPP, Cramer's rule, etc., as sketched below.
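Once the problem is in matrix form, any of the solvers above applies. A sketch using NumPy's built-in solver (which internally performs LU/Gauss elimination with partial pivoting):

```python
import numpy as np

A = np.array([[15.0, 17.0, 19.0],
              [0.25, 0.33, 0.42],
              [1.0, 1.2, 1.6]])
b = np.array([2120.0, 43.4, 164.0])

comps = np.linalg.solve(A, b)
print(comps)   # [20. 40. 60.] -> 20, 40, and 60 components of types #1, #2, #3
```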

(b) Electrical system

    [RA+RB -RB -RA; -RB RB+RC -RC; -RA -RC RA+RC+RD] {I1; I2; I3} = {+V1; -V2; +V3}

E.g. if the resistances, R, and voltages, V, are given, estimate the output currents, I, of the 3-dof electrical circuit system:

    [4+2 -2 -4; -2 2+8 -8; -4 -8 4+6+8] {I1; I2; I3} = {16; -40; 0}

Source: https://www.youtube.com/watch?v=2naaCxfbq_M
Note: The derivation of the eqns involves circuit theory, thus it is not examined in this study.

(c) Mechanical vibration system

Assume f1 = F1 cos ωt, f2 = F2 cos ωt, x1 = X1 cos ωt, x2 = X2 cos ωt, ẍ1 = -X1 ω², ẍ2 = -X2 ω², ω = 10.

    [k1+k2-ω²m1 -k2; -k2 k2+k3-ω²m2] {X1; X2} = {F1; F2}

E.g. if the stiffness, k, mass, m, force, F, and excitation frequency, ω, are given, estimate the output response of the 2-dof mass-spring vibration system:

    [400-10²(40) -200; -200 400-10²(40)] {X1; X2} = {F1; F2}

Source: https://www.brown.edu/Departments/Engineering/Courses/En4/Notes/vibrations_mdof/vibrations_mdof.htm
Note: The derivation of the eqns involves vibration theory, thus it is not examined in this study.

(d) Dynamic system

    [m1 1 0; m2 -1 1; m3 0 -1] {a; T; R} = {m1 g - c1 v; m2 g - c2 v; m3 g - c3 v}

E.g. if the masses, m, drag coefficients, c, and free-fall velocity, v, are given, estimate the output tension and acceleration of the 3-dof falling-parachutist system:

    [70 1 0; 60 -1 1; 40 0 -1] {a; T; R} = {70(9.81) - 10(5); 60(9.81) - 14(5); 40(9.81) - 17(5)}

Note: The derivation of the eqns involves dynamics theory, thus it is not examined in this study.

Advanced applications of matrix algebra include transformation matrices, image processing, signal processing, finite element simulation, the PageRank algorithm, Hill cipher encryption, etc. Thus, mastering matrix algebra is important and has a huge impact.

MATRIX ALGEBRA FOR HOMOGENEOUS LINEAR ALGEBRAIC SYSTEM
WEEK 7: MATRIX ALGEBRA FOR HOMOGENEOUS LINEAR ALGEBRAIC SYSTEM
7.1 Solving Homogeneous System of Linear Equations

Multicomponent systems result in n sets of mathematical equations that must be solved simultaneously. They can be represented in the matrix format [C]{X} = {B}. If {B} = {0}, the system is known as a homogeneous system of linear equations.

(i) In this study, methods used to find the total solution of {X} from [C] and the zero {B} are out of scope.

Example with n = 3 (3 eqns. are given to solve x1, x2, and x3):
    0.5x1 + 2.5x2 - 9x3 = 0
    -4.5x1 + 3.5x2 - 2x3 = 0
    -8x1 - 9x2 + 22x3 = 0
    Coefficient matrix [C] = [0.5 2.5 -9; -4.5 3.5 -2; -8 -9 22];  unknown {X} = {x1; x2; x3};  zero {B} = {0; 0; 0}

The nature of the solution depends on the coefficient matrix, which represents the physical system or application:
• If |C| = 0 and {B} = {0}, then the solutions of {X} due to the initial/boundary conditions are non-zero/non-trivial.
• If |C| ≠ 0 and {B} = {0}, then the solutions of {X} due to the initial/boundary conditions are zero/trivial.

(ii) In this study, the main focus is to find the characteristics of the system in terms of the eigenvalues, λi, and the corresponding eigenvectors, {x1; x2; x3}λi, where i = 1, 2, …, n (one per mode). This is known as the eigenvalue/eigenvector problem:

    [A]{x}i = λi {x}i
    ([A] - λi [I]){x}i = {0}

where λi is one of the eigenvalues of the matrix [A];
{x}i is the corresponding eigenvector for each λi, and {x}i ≠ {0}, i.e. a non-trivial solution;
[I] = identity matrix.

Note: in general, an n-dof system has n eigenvalues and n eigenvectors. For example, a 2-dof mass-spring system has 2 eigenvalues and 2 eigenvectors, while a 3-dof electrical circuit system has 3 eigenvalues and 3 eigenvectors.

7.2 Eigenvalue/Eigenvector Problem

Example: Link the eigenvalue and eigenvector to the characteristic of the given system.

The equations of motion of the 2-mass spring system are provided:

    -kx1 - k(x1 - x2) = m1 ẍ1
    k(x1 - x2) - kx2 = m2 ẍ2

    where x1 = A1 sin(ωt + θ1), ẍ1 = -ω²x1
          x2 = A2 sin(ωt + θ2), ẍ2 = -ω²x2

Given stiffness, k = k1 = k2 = 200 N/m, and mass, m = m1 = m2 = 40 kg:

    [2k/m1 - ω²  -k/m1; -k/m2  2k/m2 - ω²] {x1; x2} = {0; 0}

Note: The derivation of the eqns involves vibration theory, thus it is not examined in this study.

    [10 - ω²  -5; -5  10 - ω²] {x1; x2} = {0; 0}

Rearranging step by step:

    ([10 -5; -5 10] - ω² [1 0; 0 1]) {x1; x2} = {0; 0}
    [10 -5; -5 10] {x1; x2} - ω² {x1; x2} = {0; 0}
    [10 -5; -5 10] {x1; x2} = ω² {x1; x2}

This matches the general formulation of the eigenvalue/eigenvector problem, written for mode i:

    ([A] - λi [I]) {x}i = {0}
    [A] {x}i - λi {x}i = {0}
    [A] {x}i = λi {x}i

By comparing with the general formulation, we find that:

    Coefficient matrix, [A] = [10 -5; -5 10]
    Eigenvalue, λi = ω², where ω = natural frequency of the system
    Eigenvector, {x}i = {x1; x2} = mode shape of the system (i.e. the pattern of the maximum vibration amplitude) at the corresponding ith mode/case

Note: the natural frequencies and mode shapes are important characteristics of a vibration system that can be obtained from the eigenvalue/eigenvector problem.

Example: Solving the eigenvalue/eigenvector problem.

    [10 - ω²  -5; -5  10 - ω²] {x1; x2} = {0; 0}

To have a non-trivial solution, {x1; x2} ≠ {0; 0}, so the determinant must be zero:

    |10 - ω²  -5; -5  10 - ω²| = 0

Let the eigenvalue λ = ω²:

    |10 - λ  -5; -5  10 - λ| = 0

    λ² - 20λ + 75 = 0    Note: this is known as the characteristic equation.

We obtain 2 eigenvalues for the 2-mass spring system: λ1 = 5 and λ2 = 15.

Hint: common practice is to arrange the λi in ascending order, i.e. λ1 < λ2.

Since λ = ω², we can obtain the natural frequencies of the system: ω1 = √5 and ω2 = √15.

At mode 1, ω1 = √5 or λ1 = 5, we obtain the unscaled eigenvector as follows:

    [5 -5; -5 5] {x1; x2} = {0; 0}
    → (expand) 5x1 - 5x2 = 0 and -5x1 + 5x2 = 0
    → both eqns give x2 = x1; at x1 = 1, x2 = 1, so {x1; x2}1 = {1; 1}

Unscaled eigenvector for mode #1 at ω1 = √5:
    {x1; x2}1 = {1; 1} means that the maximum vibration of x1 is in phase with x2: both masses move one unit in the +x direction at the same time.

    {x1; x2}1 = {5; 5} is also an acceptable unscaled eigenvector answer. The important point is that the eigenvector tells the unique shape/vibration pattern at each mode, regardless of the scale. In general, all acceptable eigenvector solutions form the eigenspace, i.e. {x1; x2}1 = t{1; 1}, where -∞ < t < ∞.

A normalized eigenvector has a unique shape and a unique scale, via the following formula:

    {x1; x2}1,scaled = ±(1/magnitude){x1; x2}1,unscaled = ±(1/√(1² + 1²)){1; 1} = ±(1/√(5² + 5²)){5; 5} = ±{0.707; 0.707}

At mode 2, ω2 = √15 or λ2 = 15, we obtain the unscaled eigenvector as follows:

    [-5 -5; -5 -5] {x1; x2} = {0; 0}
    → (expand) -5x1 - 5x2 = 0 for both eqns
    → x2 = -x1; at x1 = 1, x2 = -1, so {x1; x2}2 = {1; -1}

Unscaled eigenvector for mode #2 at ω2 = √15:
    {x1; x2}2 = {1; -1} means that at ω2 = √15 rad/s the maximum vibration of x1 is out of phase with x2: one mass moves in the +x direction while the other moves in the -x direction at the same time.

Eigenspace for mode #2:
    {x1; x2}2 = t{1; -1}, where -∞ < t < ∞.

Normalized eigenvector for mode #2:
    {x1; x2}2,scaled = ±(1/magnitude){x1; x2}2,unscaled = ±(1/√(1² + 1²)){1; -1} = ±{0.707; -0.707}

Note: depending on the question, the student should know how to find the unscaled eigenvector, the eigenspace, or the normalized eigenvector. In general, if not stated otherwise for manual calculation, providing the unscaled eigenvector (t = 1) is sufficient. Software usually reports the normalized eigenvector.
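The same eigenpairs can be obtained in one call; a sketch with NumPy (np.linalg.eig returns normalized eigenvectors as columns and does not guarantee ascending order of the eigenvalues):

```python
import numpy as np

A = np.array([[10.0, -5.0], [-5.0, 10.0]])
lam, vec = np.linalg.eig(A)

print(lam)            # eigenvalues, e.g. [15. 5.] (order not guaranteed)
print(vec)            # columns are normalized eigenvectors, approx +/-0.707
print(np.sqrt(lam))   # natural frequencies sqrt(15) and sqrt(5) rad/s
```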

Visualisation of the eigenvalue and eigenvector information:

                          Mode 1/Case 1                      Mode 2/Case 2
Eigenvalue:               λ1 = 5                             λ2 = 15
Natural frequency:        ω1 = √5 (lower frequency)          ω2 = √15 (higher frequency)
Unscaled eigenvector      {x1; x2}1 = ±{1; 1}                {x1; x2}2 = ±{1; -1}
(also called mode shape)

Hint: Period = 1/frequency = 2π/ω.

Example: Find the eigenvalues & eigenvectors of the following matrix.

    A = [1 -3 3; 3 -5 3; 6 -6 4]

Eigenvalue/eigenvector problem: (A - λI)x = 0

    ([1 -3 3; 3 -5 3; 6 -6 4] - λ[1 0 0; 0 1 0; 0 0 1]) {x1; x2; x3} = {0; 0; 0}

    [1-λ -3 3; 3 -5-λ 3; 6 -6 4-λ] {x1; x2; x3} = {0; 0; 0}

Since {x1; x2; x3} ≠ {0; 0; 0}:

    |1-λ -3 3; 3 -5-λ 3; 6 -6 4-λ| = 0

    (1 - λ)[(-5 - λ)(4 - λ) - 3(-6)] - (-3)[3(4 - λ) - 3(6)] + 3[3(-6) - (-5 - λ)(6)] = 0

Characteristic eqn: λ³ - 12λ - 16 = 0

    (λ - 4)(λ² + 4λ + 4) = 0

    λ1 = -2, λ2 = -2 (repeated eigenvalue case), λ3 = 4

Case 1: λ1 = -2, and Case 2: λ2 = -2 (repeated eigenvalue case)

    [3 -3 3; 3 -3 3; 6 -6 6] {x1; x2; x3}λ=-2 = {0; 0; 0}

    → (augmented) [3 -3 3 | 0; 3 -3 3 | 0; 6 -6 6 | 0]
    → (scale the pivot element to 1: R1 → R1/3) [1 -1 1 | 0; 3 -3 3 | 0; 6 -6 6 | 0]
    → (forward elimination: R2 → R2 - 3R1; R3 → R3 - 6R1) [1 -1 1 | 0; 0 0 0 | 0; 0 0 0 | 0]

Note: the RREF shows rank 1 (i.e. 1 linearly independent vector).

    x1 - x2 + x3 = 0  →  x1 = x2 - x3

Eigenspace for λ = -2 (let x2 = t and x3 = s):
    {x1; x2; x3}λ=-2 = {x2 - x3; x2; x3} = t{1; 1; 0} + s{-1; 0; 1}, where t, s ∈ ℝ

Eigenvectors: {x1; x2; x3} = {1; 1; 0} and {-1; 0; 1} for the repeated eigenvalues λ1 = -2 and λ2 = -2, respectively.
𝑥3 λ=-2 0 1

Case 3: λ3 = 4 (distinct eigenvalue case)

    [-3 -3 3; 3 -9 3; 6 -6 0] {x1; x2; x3}λ=4 = {0; 0; 0}

    → (augmented) [-3 -3 3 | 0; 3 -9 3 | 0; 6 -6 0 | 0]
    → (scale the pivot element to 1: R1 → -R1/3) [1 1 -1 | 0; 3 -9 3 | 0; 6 -6 0 | 0]
    → (forward elimination: R2 → R2 - 3R1; R3 → R3 - 6R1) [1 1 -1 | 0; 0 -12 6 | 0; 0 -12 6 | 0]
    → (scale the pivot element to 1: R2 → -R2/12) [1 1 -1 | 0; 0 1 -1/2 | 0; 0 -12 6 | 0]
    → (forward elimination: R3 → R3 + 12R2) [1 1 -1 | 0; 0 1 -1/2 | 0; 0 0 0 | 0]
    → (eliminate above the pivot: R1 → R1 - R2) [1 0 -1/2 | 0; 0 1 -1/2 | 0; 0 0 0 | 0]

Note: the RREF shows rank 2 (i.e. 2 linearly independent vectors).

    x1 - (1/2)x3 = 0  →  x1 = (1/2)x3
    x2 - (1/2)x3 = 0  →  x2 = (1/2)x3

Eigenspace for λ = 4 (let x3 = t):
    {x1; x2; x3}λ=4 = {(1/2)x3; (1/2)x3; x3} = t{1/2; 1/2; 1}, where t ∈ ℝ

Eigenvector: {x1; x2; x3}λ=4 = {0.5; 0.5; 1}

For verification, the eigenvector results should satisfy the eigenvalue/eigenvector problem [A]{x}i = λi{x}i:

Case 1 (λ = -2):  [1 -3 3; 3 -5 3; 6 -6 4]{1; 1; 0} = -2{1; 1; 0}
Case 2 (λ = -2):  [1 -3 3; 3 -5 3; 6 -6 4]{-1; 0; 1} = -2{-1; 0; 1}
Case 3 (λ = 4):   [1 -3 3; 3 -5 3; 6 -6 4]{0.5; 0.5; 1} = 4{0.5; 0.5; 1}

Or we can combine all cases into a single matrix operation:

Eigenvector/modal matrix, P or [P]: consists of all eigenvectors as columns.
    P = [{x}1 {x}2 {x}3] = [1 -1 0.5; 1 0 0.5; 0 1 1]
    where {x1; x2; x3}1 = eigenvector #1, etc.

Eigenvalue/spectral matrix, D or [D]: consists of all eigenvalues on the diagonal.
    D = [λ1 0 0; 0 λ2 0; 0 0 λ3] = [-2 0 0; 0 -2 0; 0 0 4]

Verification of the eigenvalue/eigenvector problem for all cases: [A]{x}i = λi{x}i extends to [A][P] = [P][D].

    [1 -3 3; 3 -5 3; 6 -6 4][1 -1 0.5; 1 0 0.5; 0 1 1] = [1 -1 0.5; 1 0 0.5; 0 1 1][-2 0 0; 0 -2 0; 0 0 4]

    [-2 2 2; -2 0 2; 0 -2 4] = [-2 2 2; -2 0 2; 0 -2 4]

∴ Since LHS = RHS, [P] and [D] are verified.

7.3 Engineering Application of Eigenvalue/Eigenvector Problem

(a) Diagonalization

• Eigenvectors are useful to diagonalize a square matrix:

• D = P⁻¹AP if |P| ≠ 0, where P = full-rank eigenvector or modal matrix.

Previously, the eigenvalue/eigenvector problem (A - λI)x = 0 for

    A = [1 -3 3; 3 -5 3; 6 -6 4]

gave eigenvectors {1; 1; 0} and {-1; 0; 1} for λ = -2, and {0.5; 0.5; 1} for λ = 4.

∴ The eigenvector matrix consists of all eigenvectors:
    P = [{x}λ1 {x}λ2 {x}λ3] = [1 -1 0.5; 1 0 0.5; 0 1 1]

    D = P⁻¹AP = [1 -1 0.5; 1 0 0.5; 0 1 1]⁻¹ [1 -3 3; 3 -5 3; 6 -6 4] [1 -1 0.5; 1 0 0.5; 0 1 1] = [-2 0 0; 0 -2 0; 0 0 4]

Note: we can convert a non-diagonal matrix A into a diagonal matrix D whose diagonal elements are the eigenvalues of A: λ1 = -2, λ2 = -2, λ3 = 4.
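A numerical check of the diagonalization, sketched with NumPy using the modal matrix P found above:

```python
import numpy as np

A = np.array([[1.0, -3.0, 3.0], [3.0, -5.0, 3.0], [6.0, -6.0, 4.0]])
P = np.array([[1.0, -1.0, 0.5], [1.0, 0.0, 0.5], [0.0, 1.0, 1.0]])

D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))   # diag(-2, -2, 4), i.e. the eigenvalues of A
```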

(b) Extension from Diagonalization

    D = P⁻¹AP
    A = PDP⁻¹

where A can be expressed in terms of the eigenvector matrix, P, and the eigenvalue matrix, D.

    A² = AA = (PDP⁻¹)(PDP⁻¹) = PD²P⁻¹, since P⁻¹P = I
    A³ = A²A = (PD²P⁻¹)(PDP⁻¹) = PD³P⁻¹
     ⋮
    Aᵏ = PDᵏP⁻¹, where Dᵏ = [λ1 0 0; 0 λ2 0; 0 0 λ3]ᵏ = [λ1ᵏ 0 0; 0 λ2ᵏ 0; 0 0 λ3ᵏ], k ∈ ℝ

(Comment: the power of a diagonal matrix can be computed easily, element by element!)

Note: this formula implies that changing the power of A changes the eigenvalue matrix while keeping the eigenvector matrix. If A has eigenvalues λ1, λ2, λ3, then Aᵏ has eigenvalues λ1ᵏ, λ2ᵏ, λ3ᵏ; e.g. A⁻¹ has eigenvalues 1/λ1, 1/λ2, 1/λ3.

You can find Aᵏ (the power of a matrix) from the eigenvalue and eigenvector matrices by using this formula:

    A = [1 -3 3; 3 -5 3; 6 -6 4];  D = [-2 0 0; 0 -2 0; 0 0 4];  P = [1 -1 0.5; 1 0 0.5; 0 1 1]

    A¹⁰⁰ = PD¹⁰⁰P⁻¹ = [1 -1 0.5; 1 0 0.5; 0 1 1] [(-2)¹⁰⁰ 0 0; 0 (-2)¹⁰⁰ 0; 0 0 4¹⁰⁰] [1 -1 0.5; 1 0 0.5; 0 1 1]⁻¹

         ≈ 10⁶⁰ × [0.8035 -0.8035 0.8035; 0.8035 -0.8035 0.8035; 1.6069 -1.6069 1.6069]
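The A¹⁰⁰ result can be reproduced directly from Aᵏ = PDᵏP⁻¹; a sketch with NumPy, cross-checked against repeated multiplication:

```python
import numpy as np

A = np.array([[1.0, -3.0, 3.0], [3.0, -5.0, 3.0], [6.0, -6.0, 4.0]])
P = np.array([[1.0, -1.0, 0.5], [1.0, 0.0, 0.5], [0.0, 1.0, 1.0]])
lam = np.array([-2.0, -2.0, 4.0])

A100 = P @ np.diag(lam ** 100) @ np.linalg.inv(P)   # A^100 = P D^100 P^-1
print(A100 / 1e60)   # approx [[0.8035 -0.8035 0.8035], ..., [1.6069 ...]]

# cross-check against naive repeated multiplication
print(np.allclose(A100, np.linalg.matrix_power(A, 100)))   # True
```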
Some useful properties of the eigenvalues, λi:

    Trace(A) = Σ λi  →  1 - 5 + 4 = -2 - 2 + 4 = 0
    Determinant(A) = Π λi  →  1(-20 + 18) - (-3)(12 - 18) + 3(-18 + 30) = (-2)(-2)(4) = 16
    Eigenvalues(A) = eigenvalues(Aᵀ) = λi  →  λ1 = -2; λ2 = -2; λ3 = 4
    Eigenvalues(kA) = kλi, where k ∈ ℝ  →  eigenvalues(5A) = 5λ1 = -10; 5λ2 = -10; 5λ3 = 20
    Eigenvalues(Aᵏ) = λiᵏ, where k ∈ ℝ  →  eigenvalues(A⁵) = λ1⁵ = (-2)⁵; λ2⁵ = (-2)⁵; λ3⁵ = (4)⁵
    Eigenvalues(A ± kI) = λi ± k  →  eigenvalues(A + 5I) = λ1 + 5 = 3; λ2 + 5 = 3; λ3 + 5 = 9

(c) Cayley-Hamilton Theorem

• The characteristic equation of the eigenvalue/eigenvector problem is useful for computing powers of a matrix.

Previously, the eigenvalue/eigenvector problem (A - λI)x = 0:

    A = [1 -3 3; 3 -5 3; 6 -6 4]  →  [1-λ -3 3; 3 -5-λ 3; 6 -6 4-λ] {x1; x2; x3} = {0; 0; 0}

• Characteristic eqn: f(λ) = |A - λI| = 0, where λ = eigenvalue:
    p0 + p1λ + p2λ² + ⋯ + pnλⁿ = 0
    λ³ - 12λ - 16 = 0

• Cayley-Hamilton theorem: f(A) = p0I + p1A + p2A² + ⋯ + pnAⁿ = 0,

where A is the matrix that has the eigenvalues λ. It shows that not only the eigenvalues satisfy the characteristic equation, but also the original coefficient matrix A:

    A³ - 12A - 16I = 0    [Cayley-Hamilton form]

You can find Aⁿ (the power of a matrix) from the characteristic equation alone by using this theorem. For example, multiplying through by A⁻¹ or A and rearranging:

    A² - 12I - 16A⁻¹ = 0  →  A⁻¹ = (1/16)(A² - 12I)

    A³ = 12A + 16I

    A⁴ = 12A² + 16A = 12(12I + 16A⁻¹) + 16A

    A⁵ = 12A³ + 16A² = 12(12A + 16I) + 16(12I + 16A⁻¹)
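Both the theorem and the derived identities are easy to verify numerically; a sketch with NumPy:

```python
import numpy as np

A = np.array([[1.0, -3.0, 3.0], [3.0, -5.0, 3.0], [6.0, -6.0, 4.0]])
I = np.eye(3)

# Cayley-Hamilton: A satisfies its own characteristic equation
print(np.allclose(A @ A @ A - 12 * A - 16 * I, 0))           # True

# Inverse from the theorem: A^-1 = (A^2 - 12 I) / 16
print(np.allclose(np.linalg.inv(A), (A @ A - 12 * I) / 16))  # True
```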

7.4 Engineering Application of Solving Homogeneous System of Linear Equations

So far, we have determined the eigenvalues and eigenvectors of a homogeneous system of linear equations. This information is useful for finding the eigenfunctions (i.e. each of a set of independent functions that are solutions to a given differential equation).

The equations of motion of the 2-mass spring system are provided:

    -kx1 - k(x1 - x2) = m1 ẍ1
    k(x1 - x2) - kx2 = m2 ẍ2

    where x1 = A1 sin(ωt + θ1), ẍ1 = -ω²x1
          x2 = A2 sin(ωt + θ2), ẍ2 = -ω²x2

Given stiffness, k = k1 = k2 = 200 N/m, and mass, m = m1 = m2 = 40 kg:

    [10 - ω²  -5; -5  10 - ω²] {x1; x2} = {0; 0}

Previously, solving the eigenvalue/eigenvector problem gave:

    Eigenvalues: λ1 = 5; λ2 = 15 (where ω1 = √λ1; ω2 = √λ2)
    Eigenvectors: {x1; x2}1 = {1; 1}; {x1; x2}2 = {1; -1}

Note: finding the eigenfunctions, i.e. the solution of the homogeneous system of linear equations, is out of scope; it is included here for your extra info.

    Eigenfunction #1 = {x1; x2}1 sin(√λ1 t + θ1) = {1; 1} sin(√5 t + θ1)
    Eigenfunction #2 = {x1; x2}2 sin(√λ2 t + θ2) = {1; -1} sin(√15 t + θ2)

The total solution of the homogeneous linear algebraic system is equal to the superposition of all the eigenfunctions:

    {x1; x2} = c1 {1; 1} sin(√5 t + θ1) + c2 {1; -1} sin(√15 t + θ2)

where c1 and c2 are unknown constants that can be obtained from the initial or boundary conditions. You will learn this in the ODE chapter later.

Verification of the eigenfunctions as solutions to the equations:

    -200x1 - 200(x1 - x2) = 40ẍ1
    200(x1 - x2) - 200x2 = 40ẍ2

Verification of eigenfunction #1: {x1; x2} = {1; 1} sin(√5 t + θ1);  {ẍ1; ẍ2} = {-5; -5} sin(√5 t + θ1)

Eqn 1:
    LHS = -200x1 - 200(x1 - x2) = -200 sin(√5 t + θ1) - 200(sin(√5 t + θ1) - sin(√5 t + θ1)) = -200 sin(√5 t + θ1)
    RHS = 40ẍ1 = 40(-5 sin(√5 t + θ1)) = -200 sin(√5 t + θ1)
Eqn 2:
    LHS = 200(x1 - x2) - 200x2 = 200(sin(√5 t + θ1) - sin(√5 t + θ1)) - 200 sin(√5 t + θ1) = -200 sin(√5 t + θ1)
    RHS = 40ẍ2 = 40(-5 sin(√5 t + θ1)) = -200 sin(√5 t + θ1)

∴ Since LHS = RHS, eigenfunction #1 is one of the solutions.

Verification of eigenfunction #2: {x1; x2} = {1; -1} sin(√15 t + θ2);  {ẍ1; ẍ2} = {-15; 15} sin(√15 t + θ2)

Eqn 1:
    LHS = -200 sin(√15 t + θ2) - 200(sin(√15 t + θ2) - (-sin(√15 t + θ2))) = -600 sin(√15 t + θ2)
    RHS = 40ẍ1 = 40(-15 sin(√15 t + θ2)) = -600 sin(√15 t + θ2)
Eqn 2:
    LHS = 200(sin(√15 t + θ2) - (-sin(√15 t + θ2))) - 200(-sin(√15 t + θ2)) = 600 sin(√15 t + θ2)
    RHS = 40ẍ2 = 40(15 sin(√15 t + θ2)) = 600 sin(√15 t + θ2)

∴ Since LHS = RHS, eigenfunction #2 is one of the solutions.
