Lecture Notes 4
If the leading nonzero entries of the rows in the row echelon form are reduced to 1 by using
elementary row operations, the resulting form is called reduced row echelon form.
[1 𝑎12∗ 𝑎13∗ ⋮ 𝑏1∗]
[0  1   𝑎23∗ ⋮ 𝑏2∗]
[0  0    1   ⋮ 𝑏3∗]
Reduced row echelon form of augmented matrix
𝑎12∗ = 𝑎12/𝑎11,  𝑎13∗ = 𝑎13/𝑎11,  𝑏1∗ = 𝑏1/𝑎11
𝑎23∗ = 𝑎′23/𝑎′22,  𝑏2∗ = 𝑏′2/𝑎′22
𝑏3∗ = 𝑏″3/𝑎″33
The reduced row echelon form of a matrix is unique.
Note that we do not need the reduced row echelon form to solve systems of linear equations;
this form offers no numerical or theoretical advantage over the row echelon form.
The Gauss-Jordan elimination method uses elementary row operations to reduce the coefficient
matrix 𝐴 in [𝐴 ⋮ 𝑏] to the identity matrix such that:
[1 0 0 ⋮ 𝑏1∗]
[0 1 0 ⋮ 𝑏2∗]
[0 0 1 ⋮ 𝑏3∗]
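As a minimal sketch of the procedure (the function name and the small 2×2 test system are illustrative, not from the notes), Gauss-Jordan elimination with partial pivoting can be written as:

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] to [I | x] with partial pivoting."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    n = len(b)
    aug = np.hstack([A, b])
    for k in range(n):
        # Partial pivoting: bring the largest |entry| in column k to the pivot row.
        p = k + np.argmax(np.abs(aug[k:, k]))
        aug[[k, p]] = aug[[p, k]]
        aug[k] /= aug[k, k]          # scale the pivot row so the pivot becomes 1
        for i in range(n):           # eliminate column k from every other row
            if i != k:
                aug[i] -= aug[i, k] * aug[k]
    return aug[:, -1]                # the last column is the solution vector

# Illustrative system: 2x1 + x2 = 3, x1 + 3x2 = 5
x = gauss_jordan([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```

Unlike plain Gauss elimination, every row above and below the pivot is cleared, so no back-substitution step is needed.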
          [ 1  2 1  0 ⋮ 1 0 0 0]
[𝐴 ⋮ 𝐼] = [ 0 −1 2 −1 ⋮ 0 1 0 0]
          [−1  2 1  1 ⋮ 0 0 1 0]
          [−1  1 0  1 ⋮ 0 0 0 1]
Use Gauss-Jordan elimination and find 𝐴−1:
            [1 0 0 0 ⋮  2  1 −4  5]
[𝐼 ⋮ 𝐴−1] = [0 1 0 0 ⋮ −1 −1  3 −4]
            [0 0 1 0 ⋮  1  1 −2  3]
            [0 0 0 1 ⋮  3  2 −7 10]
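The same full-elimination loop applied to [𝐴 ⋮ 𝐼] reproduces the inverse shown above; this sketch assumes NumPy, and the function name is illustrative:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Reduce [A | I] to [I | A^-1] by Gauss-Jordan elimination with pivoting."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])
    for k in range(n):
        p = k + np.argmax(np.abs(aug[k:, k]))   # partial pivoting
        aug[[k, p]] = aug[[p, k]]
        aug[k] /= aug[k, k]
        for i in range(n):
            if i != k:
                aug[i] -= aug[i, k] * aug[k]
    return aug[:, n:]                            # right half is A^-1

A = np.array([[ 1, 2, 1,  0],
              [ 0,-1, 2, -1],
              [-1, 2, 1,  1],
              [-1, 1, 0,  1]], dtype=float)
Ainv = gauss_jordan_inverse(A)
```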
Use Gauss-Jordan elimination and find the solution vector:
          [1 0 0 0 ⋮ −4]
[𝐴 ⋮ 𝑏] → [0 1 0 0 ⋮ −2]
          [0 0 1 0 ⋮  4]
          [0 0 0 1 ⋮ −7]
Pitfalls of Gauss elimination method
Division by zero: During the elimination steps, a pivot element may become equal to, or very
close to, zero.
Techniques to improve the solution:
Pivoting: Interchanging the rows and columns so that the largest coefficient is the pivot
element.
Round-off errors: Rounding errors can increase rapidly due to the large number of mathematical
operations in the elimination steps.
Techniques to improve the solution:
Precision: Use of more significant figures
Pivoting: reduces rounding errors
Ill-conditioned systems
If the determinant of the coefficient matrix 𝐴, 𝑑𝑒𝑡(𝐴) = 0, then the system is singular.
If the determinant of the coefficient matrix 𝐴, 𝑑𝑒𝑡(𝐴) ≅ 0 , then the system is ill-conditioned.
Without partial pivoting:
[𝑥1]   [4.2    ]
[𝑥2] = [0.496  ]
[𝑥3]   [0.00586]
With partial pivoting
[3 −1    0    ⋮ −2    ]
[0  4.67 3    ⋮  2.33 ]
[0  0    1.72 ⋮ 0.0121]
[𝑥1]   [−0.502  ]
[𝑥2] = [ 0.494  ]
[𝑥3]   [ 0.00708]
Example: Solve the following system of linear equations using Gauss elimination method with and
without scaling. Use 3 and 4 significant figures for two different estimates.
𝑥1 + 100000𝑥2 = 10000
𝑥1 − 𝑥2 = 1
Without scaling
[1  100000 ⋮  10000]
[0 −100001 ⋮  −9999]
With 3 significant figures:        With 4 significant figures:
[𝑥1]   [0    ]                     [𝑥1]   [1      ]
[𝑥2] = [0.100]                     [𝑥2] = [0.09999]
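The effect of limited precision can be imitated by rounding every intermediate result to three significant figures (the `rsig` helper and the variable names are illustrative assumptions, not part of the notes). Without scaling no row swap occurs, and back-substitution suffers catastrophic cancellation; with scaled partial pivoting the second row becomes the pivot row and the answer is close to the exact solution 𝑥1 ≈ 1.09999:

```python
import math

def rsig(x, n=3):
    """Round x to n significant figures (simulating limited-precision arithmetic)."""
    if x == 0:
        return 0.0
    return round(x, n - 1 - int(math.floor(math.log10(abs(x)))))

# Without scaling: rows [1, 100000 | 10000] and [1, -1 | 1]; multiplier m = 1/1.
a22 = rsig(-1 - 1 * 100000)                  # -100001 rounds to -100000
b2  = rsig(1 - 1 * 10000)                    # -9999  rounds to -10000
x2_no = rsig(b2 / a22)                       # 0.100
x1_no = rsig(10000 - rsig(100000 * x2_no))   # catastrophic cancellation gives 0

# With scaled partial pivoting: scaled pivots are 1e-5 vs 1, so the second
# row is the pivot row and x1 is eliminated from the first row instead.
a12 = rsig(100000 - 1 * (-1))                # 100001 rounds to 100000
b1  = rsig(10000 - 1 * 1)                    # 9999  rounds to 10000
x2_sc = rsig(b1 / a12)                       # 0.100
x1_sc = rsig(1 + 1 * x2_sc)                  # 1.1, close to the exact x1
```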
LU decomposition method
Consider the system of equations:
𝐴𝑥 = 𝑏
The coefficient matrix 𝐴 can be written as the product of a lower triangular matrix 𝐿 and an
upper triangular matrix 𝑈.
𝐴 = 𝐿𝑈
It is possible to write any matrix with diagonal elements that are all non-zero as a product of
lower and upper triangular matrices in many ways.
[ 3  1 5]   [  1    0   0] [3  1    5  ]
[ 2  3 4] = [ 2/3   1   0] [0 7/3  2/3 ]
[−1 −2 1]   [−1/3 −5/7  1] [0  0  22/7 ]

[ 3  1 5]   [ 3   0     0  ] [1 1/3 5/3]
[ 2  3 4] = [ 2  7/3    0  ] [0  1  2/7]
[−1 −2 1]   [−1 −5/3  22/7 ] [0  0   1 ]
Solution of the system of equations using LU decomposition:
𝐴𝑥 = 𝑏
𝐿𝑈𝑥 = 𝑏
Let 𝑈𝑥 = 𝑦, then
𝐿𝑦 = 𝑏
First solve 𝐿𝑦 = 𝑏 for 𝑦 by forward substitution, then solve 𝑈𝑥 = 𝑦 for 𝑥 by back substitution.
Here 𝑈 is the row echelon form 𝐴∗ from Gauss elimination and 𝑦 is the transformed right-hand
side 𝑏∗. The two triangular systems are equivalent to the original one, since
𝐿𝑈𝑥 − 𝐿𝑦 = 𝐴𝑥 − 𝑏 = 0
At the end of the Gauss elimination method, the row echelon form of 𝐴 is the upper triangular
matrix 𝑈. The elements below the diagonal of the lower triangular matrix 𝐿 are the
multiplication factors from the elimination phase:
𝑙21 = 𝑎21/𝑎11
𝑙31 = 𝑎31/𝑎11
⋮
𝑙𝑛1 = 𝑎𝑛1/𝑎11
𝑙𝑛2 = 𝑎′𝑛2/𝑎′22
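Collecting those multipliers while eliminating gives the Doolittle factorization (unit diagonal in 𝐿). A minimal sketch, assuming NumPy and no pivoting, tested on the 3×3 matrix factored above:

```python
import numpy as np

def lu_doolittle(A):
    """LU decomposition without pivoting: L has a unit diagonal and U is the
    row echelon form produced by Gauss elimination."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]        # multiplier l_ik = a_ik / a_kk
            U[i, k:] -= L[i, k] * U[k, k:]     # eliminate entry (i, k)
    return L, U

A = np.array([[3, 1, 5], [2, 3, 4], [-1, -2, 1]], dtype=float)
L, U = lu_doolittle(A)
```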
Example: Use LU decomposition method and solve the following system of linear equations. Use
four significant figures.
2𝑥1 + 3𝑥2 + 𝑥3 = −1
5𝑥1 + 2𝑥2 − 𝑥3 = 2
3𝑥1 − 2𝑥2 + 4𝑥3 = 3
Solution:
    [1    0     0]
𝐿 = [2.5  1     0]
    [1.5  1.182 1]

    [2  3     1    ]
𝑈 = [0 −5.5  −3.5  ]
    [0  0     6.636]

[𝑥1]   [ 0.6712]
[𝑥2] = [−0.7397]
[𝑥3]   [−0.1233]
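The two triangular solves for this example can be sketched as follows (assuming NumPy; the function name is illustrative, and 𝑙32 = 6.5/5.5 ≈ 1.182 to four significant figures):

```python
import numpy as np

def solve_lu(L, U, b):
    """Forward substitution for Ly = b, then back substitution for Ux = y."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                        # forward: L is lower triangular
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # backward: U is upper triangular
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# L and U from the worked example (four significant figures).
L = np.array([[1.0, 0.0, 0.0], [2.5, 1.0, 0.0], [1.5, 1.182, 1.0]])
U = np.array([[2.0, 3.0, 1.0], [0.0, -5.5, -3.5], [0.0, 0.0, 6.636]])
x = solve_lu(L, U, np.array([-1.0, 2.0, 3.0]))
```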
LU decomposition and pivoting
𝐴𝑥 = 𝑏
𝑃𝐴𝑥 = 𝑃𝑏
𝑃 is the permutation matrix and it represents row interchanges due to the partial pivoting. Then
𝑃𝐴 = 𝐿𝑈
and
𝐿𝑈𝑥 = 𝑃𝑏
Example: Use LU decomposition method with partial pivoting and solve the following system of
linear equations. Use four significant figures.
𝑥1 − 4𝑥2 + 2𝑥3 = −4
3𝑥1 − 𝑥2 − 𝑥3 = 1
9𝑥1 − 3𝑥2 + 5𝑥3 = −2
Solution:
    [1      0 0]
𝐿 = [0.1111 1 0]
    [0.3333 0 1]

    [9 −3      5    ]
𝑈 = [0 −3.667  1.444]
    [0  0     −2.667]

    [0 0 1]
𝑃 = [1 0 0]
    [0 1 0]

[𝑥1]   [ 0.3864]
[𝑥2] = [ 0.7841]
[𝑥3]   [−0.6251]
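SciPy's `scipy.linalg` routines perform exactly this pivoted factorization; a short sketch checking the example above (note that SciPy's `lu` returns 𝐴 = 𝑃𝐿𝑈, so its 𝑃 is the transpose of the permutation in the notes' 𝑃𝐴 = 𝐿𝑈 convention):

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

A = np.array([[1, -4, 2], [3, -1, -1], [9, -3, 5]], dtype=float)
b = np.array([-4, 1, -2], dtype=float)

# Pivoted factorization: A = P @ L @ U.
P, L, U = lu(A)

# lu_factor/lu_solve reuse the same factorization to solve the system.
x = lu_solve(lu_factor(A), b)
```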
Exercise: Use LU decomposition method and solve the following system of linear equations. Use
four significant figures.
𝑥1 + 2𝑥2 + 𝑥3 = −3
−𝑥1 − 2𝑥2 + 3𝑥3 = 1
3𝑥1 + 4𝑥2 + 7𝑥3 = 0
Inverse of a matrix by using LU decomposition
𝐴𝐴−1 = 𝐼  or  𝐴−1𝐴 = 𝐼
Let
    [𝑎11 𝑎12 ⋯ 𝑎1𝑛]
𝐴 = [𝑎21 𝑎22 ⋯ 𝑎2𝑛]
    [ ⋮   ⋮  ⋱  ⋮ ]
    [𝑎𝑛1 𝑎𝑛2 ⋯ 𝑎𝑛𝑛]

      [𝛼11 𝛼12 ⋯ 𝛼1𝑛]
𝐴−1 = [𝛼21 𝛼22 ⋯ 𝛼2𝑛]
      [ ⋮   ⋮  ⋱  ⋮ ]
      [𝛼𝑛1 𝛼𝑛2 ⋯ 𝛼𝑛𝑛]
Writing 𝐴𝐴−1 = 𝐼 column by column gives 𝑛 systems of equations that share the coefficient
matrix 𝐴. Use LU decomposition to solve these matrix equations, because the decomposition
of 𝐴 needs to be done only once.
Example: Find the inverse of the given matrix.
3 1 5
𝐴=[ 2 3 4]
−1 −2 1
If
    [  1    0   0]           [3  1    5  ]
𝐿 = [ 2/3   1   0]  and  𝑈 = [0 7/3  2/3 ]
    [−1/3 −5/7  1]           [0  0  22/7 ]
Solution:
[ 3  1 5]
[ 2  3 4] [𝛼⃗1 𝛼⃗2 𝛼⃗3] = [𝐼⃗1 𝐼⃗2 𝐼⃗3]
[−1 −2 1]

[ 3  1 5] [𝛼11]   [1]
[ 2  3 4] [𝛼21] = [0]
[−1 −2 1] [𝛼31]   [0]

[ 3  1 5] [𝛼12]   [0]
[ 2  3 4] [𝛼22] = [1]
[−1 −2 1] [𝛼32]   [0]

[ 3  1 5] [𝛼13]   [0]
[ 2  3 4] [𝛼23] = [0]
[−1 −2 1] [𝛼33]   [1]
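The column-by-column idea can be sketched with SciPy: factor 𝐴 once, then reuse the factorization for each right-hand side 𝐼⃗𝑖 (the variable names are illustrative):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[3, 1, 5], [2, 3, 4], [-1, -2, 1]], dtype=float)

# Factor once, then solve A @ alpha_i = e_i for each column of the inverse.
factorization = lu_factor(A)
Ainv = np.column_stack([lu_solve(factorization, e) for e in np.eye(3)])
```

Each extra column costs only a pair of triangular solves, which is why this beats running full elimination 𝑛 times.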
𝐶𝑜𝑛𝑑(𝐴) = ‖𝐴‖‖𝐴−1‖
where ‖𝐴‖ is the row-sum (infinity) norm:
‖𝐴‖ = max (1 ≤ 𝑖 ≤ 𝑛) Σ (𝑗 = 1 … 𝑛) |𝑎𝑖𝑗|
A large condition number means that small changes in vector 𝑏 or rounding errors in 𝐴 can cause
large changes in the solution 𝑥. A small condition number means the system is well-behaved and
stable.
Example: What is the condition number of the given system? How does a change in 𝑏 by
Δ𝑏 = [0.01, 0.01]ᵀ affect the behaviour of the system?
𝐴 = [1.1563 0.5711]    𝑏 = [0.3142]
    [0.3164 0.1564]        [0.1123]
Solution:
‖𝐴‖ = 1.7274
det(𝐴) = 0.00014928
𝐴−1 = [ 1048 −3826]
      [−2120  7746]
‖𝐴−1‖ = 9866, so 𝐶𝑜𝑛𝑑(𝐴) = (1.7274)(9866) ≈ 17043
𝑥 = [−100.44]  for 𝑏 = [0.3142]
    [ 203.91]          [0.1123]

𝑥 = [−128.22]  for 𝑏 = [0.3242]
    [ 260.17]          [0.1223]
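NumPy can reproduce this sensitivity directly; `np.linalg.cond(A, np.inf)` uses the same row-sum norm as above (variable names are illustrative):

```python
import numpy as np

A  = np.array([[1.1563, 0.5711], [0.3164, 0.1564]])
b  = np.array([0.3142, 0.1123])
db = np.array([0.01, 0.01])

cond = np.linalg.cond(A, np.inf)     # row-sum norm: ||A|| * ||A^-1||
x  = np.linalg.solve(A, b)
x2 = np.linalg.solve(A, b + db)

# A change of a few percent in b produces a change of tens of percent in x
# for this ill-conditioned system.
rel_change_b = np.linalg.norm(db, np.inf) / np.linalg.norm(b, np.inf)
rel_change_x = np.linalg.norm(x2 - x, np.inf) / np.linalg.norm(x, np.inf)
```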
Indirect methods
Jacobi Method
Consider the following system of equations:
𝑎11 𝑥1 + 𝑎12 𝑥2 + ⋯ + 𝑎1𝑛 𝑥𝑛 = 𝑏1
𝑎21 𝑥1 + 𝑎22 𝑥2 + ⋯ + 𝑎2𝑛 𝑥𝑛 = 𝑏2
⋮
𝑎𝑛1 𝑥1 + 𝑎𝑛2 𝑥2 + ⋯ + 𝑎𝑛𝑛 𝑥𝑛 = 𝑏𝑛
The above equations can be rewritten in iterative form by solving the 𝑖-th equation for 𝑥𝑖:
𝑥𝑖^(𝑘+1) = (𝑏𝑖 − Σ (𝑗 ≠ 𝑖) 𝑎𝑖𝑗 𝑥𝑗^(𝑘)) / 𝑎𝑖𝑖 ,  𝑖 = 1, 2, … , 𝑛
where 𝑘 is the iteration number. Start with an initial guess
𝑥0 = [𝑥10, 𝑥20, … , 𝑥𝑛0]ᵀ
Then calculate the next approximations. The Jacobi method solves each equation for one
variable, assuming the other variables keep their values from the previous step, and updates
all variables simultaneously in each step.
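A minimal sketch of a Jacobi solver (assuming NumPy; function and variable names are illustrative), tested on the diagonally dominant example system that appears later in these notes:

```python
import numpy as np

def jacobi(A, b, x0, iterations=100):
    """One Jacobi sweep solves row i for x_i using only the previous iterate,
    so all components are updated simultaneously."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(A)                    # diagonal entries a_ii
    R = A - np.diagflat(D)            # off-diagonal part of A
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        x = (b - R @ x) / D           # x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii
    return x

# Diagonally dominant system (the Jacobi/Gauss-Seidel example in these notes).
A = [[5, 2, -1, 1], [1, 6, -2, 0], [-1, -1, -6, 3], [0, 2, -1, 5]]
b = [1, -1, 0, 2]
x = jacobi(A, b, [0, 0, 0, 0])
```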
Gauss-Seidel Method
Consider the following system of equations:
𝑎11 𝑥1 + 𝑎12 𝑥2 + ⋯ + 𝑎1𝑛 𝑥𝑛 = 𝑏1
𝑎21 𝑥1 + 𝑎22 𝑥2 + ⋯ + 𝑎2𝑛 𝑥𝑛 = 𝑏2
⋮
𝑎𝑛1 𝑥1 + 𝑎𝑛2 𝑥2 + ⋯ + 𝑎𝑛𝑛 𝑥𝑛 = 𝑏𝑛
The above equations can be rewritten in iterative form, this time using the newest available
value of each variable:
𝑥𝑖^(𝑘+1) = (𝑏𝑖 − Σ (𝑗 < 𝑖) 𝑎𝑖𝑗 𝑥𝑗^(𝑘+1) − Σ (𝑗 > 𝑖) 𝑎𝑖𝑗 𝑥𝑗^(𝑘)) / 𝑎𝑖𝑖 ,  𝑖 = 1, 2, … , 𝑛
Start with an initial guess
𝑥0 = [𝑥10, 𝑥20, … , 𝑥𝑛0]ᵀ
In this method, whenever an approximation of an independent variable is available, use it to
approximate the next independent variable. For example; use initial guesses and estimate 𝑥11 .
Then update 𝑥10 with 𝑥11 to approximate 𝑥21 . Replace 𝑥20 with 𝑥21 to approximate 𝑥31 and so on. The
Gauss-Seidel method is an improvement on the Jacobi method because it uses the updated values
in the current step.
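The in-place update described above can be sketched as follows (assuming NumPy; names are illustrative), again tested on the diagonally dominant example system from these notes:

```python
import numpy as np

def gauss_seidel(A, b, x0, iterations=100):
    """Each sweep updates x_i in place, so later rows in the same sweep
    already use the newest values."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.array(x0, dtype=float)
    n = len(b)
    for _ in range(iterations):
        for i in range(n):
            # Rows j < i hold current-sweep values, rows j > i previous-sweep values.
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = [[5, 2, -1, 1], [1, 6, -2, 0], [-1, -1, -6, 3], [0, 2, -1, 5]]
b = [1, -1, 0, 2]
x = gauss_seidel(A, b, [0, 0, 0, 0])
```

Because each sweep reuses fresh values immediately, Gauss-Seidel typically needs fewer sweeps than Jacobi to reach the same tolerance.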
Relative approximate error for the Jacobi and Gauss-Seidel methods
𝜀𝑎,𝑛 = |(𝑥𝑛^(𝑖) − 𝑥𝑛^(𝑖−1)) / 𝑥𝑛^(𝑖)|
where 𝑖 is the iteration number.
Example: Use the Jacobi and Gauss-Seidel methods to solve the following system of equations.
Start with an initial guess of 𝑥 0 = [0 0 0 0]𝑇 and use four significant figures in your
calculations.
5𝑥1 + 2𝑥2 − 𝑥3 + 𝑥4 = 1
𝑥1 + 6𝑥2 − 2𝑥3 = −1
−𝑥1 − 𝑥2 − 6𝑥3 + 3𝑥4 = 0
2𝑥2 − 𝑥3 + 5𝑥4 = 2
Jacobi method
Iteration number 𝒙𝟏 𝒙𝟐 𝒙𝟑 𝒙𝟒
Gauss-Seidel method
Iteration number 𝒙𝟏 𝒙𝟐 𝒙𝟑 𝒙𝟒
In other words, the system converges if the absolute value of the diagonal term in a row is greater
than the sum of the absolute values of the remaining terms in that row. This condition is sufficient
but not necessary for convergence.
Example: Use the Gauss-Seidel method to solve the following system of equations with an
approximate relative error of less than 0.001.
The system of equations is not diagonally dominant. The iterations will diverge.
Iteration number 𝒙𝟏 𝒙𝟐 𝒙𝟑
1 10 -76.67 -153
Iteration number   𝒙𝟏   𝒙𝟐   𝒙𝟑   Error 𝐦𝐚𝐱(𝜺𝒙𝟏 , 𝜺𝒙𝟐 , 𝜺𝒙𝟑)