Lecture Notes 4

The document discusses the determinant of a matrix in row echelon form, explaining how it can be calculated and the implications of row interchanges. It also covers the Gauss-Jordan elimination method for solving systems of linear equations and finding the inverse of a matrix, highlighting the uniqueness of reduced row echelon form. Additionally, it addresses potential pitfalls in the Gauss elimination method, including division by zero and round-off errors, and introduces LU decomposition as an alternative method for solving linear equations.

Uploaded by

talhakiran122

The determinant of a matrix from row echelon form

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad A^{*} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a'_{22} & \cdots & a'_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a^{(n-1)}_{nn} \end{bmatrix}$$

$A^{*}$ is the row echelon form of the coefficient matrix. The determinant of the matrix $A$ is equal to the product of the pivots:

$$\det(A) = a_{11} \cdot a'_{22} \cdot a''_{33} \cdots a^{(n-1)}_{nn}$$

If there are row interchanges,

$$\det(A) = (-1)^{k} \cdot a_{11} \cdot a'_{22} \cdot a''_{33} \cdots a^{(n-1)}_{nn}$$

where $k$ is the number of row interchanges.
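As an illustration (not part of the original notes), this rule can be sketched in Python; the function name and the 1e-12 zero-pivot tolerance are assumptions. The sketch reduces a copy of the matrix to row echelon form with partial pivoting, counts the row interchanges, and multiplies the pivots:

```python
def det_by_elimination(A):
    """Determinant from the row echelon form: the product of the pivots,
    multiplied by (-1)**k, where k is the number of row interchanges."""
    U = [row[:] for row in A]            # work on a copy of A
    n = len(U)
    sign = 1.0
    for j in range(n):
        # partial pivoting: bring the largest candidate into the pivot spot
        p = max(range(j, n), key=lambda i: abs(U[i][j]))
        if abs(U[p][j]) < 1e-12:
            return 0.0                   # no usable pivot: singular matrix
        if p != j:
            U[j], U[p] = U[p], U[j]      # each interchange flips the sign
            sign = -sign
        for i in range(j + 1, n):
            f = U[i][j] / U[j][j]
            for k in range(j, n):
                U[i][k] -= f * U[j][k]
    det = sign
    for j in range(n):
        det *= U[j][j]                   # product of the diagonal entries
    return det
```

For example, `det_by_elimination([[3, 1, 5], [2, 3, 4], [-1, -2, 1]])` gives 22 up to round-off (pivots 3, 7/3, 22/7, no interchanges).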
Gauss-Jordan elimination method
At the end of Gauss elimination, the coefficient matrix and the augmented matrix are in a form called row echelon form.
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} & \vdots & b_{1} \\ 0 & a'_{22} & a'_{23} & \vdots & b'_{2} \\ 0 & 0 & a''_{33} & \vdots & b''_{3} \end{bmatrix}$$

Row echelon form of the augmented matrix

If the leading nonzero entries of the rows in the row echelon form are reduced to 1 by using elementary row operations, the resulting form is called reduced row echelon form.

$$\begin{bmatrix} 1 & a^{*}_{12} & a^{*}_{13} & \vdots & b^{*}_{1} \\ 0 & 1 & a^{*}_{23} & \vdots & b^{*}_{2} \\ 0 & 0 & 1 & \vdots & b^{*}_{3} \end{bmatrix}$$

Reduced row echelon form of the augmented matrix

$$a^{*}_{12} = \frac{a_{12}}{a_{11}}, \quad a^{*}_{13} = \frac{a_{13}}{a_{11}}, \quad b^{*}_{1} = \frac{b_{1}}{a_{11}}$$

$$a^{*}_{23} = \frac{a'_{23}}{a'_{22}}, \quad b^{*}_{2} = \frac{b'_{2}}{a'_{22}}$$

$$b^{*}_{3} = \frac{b''_{3}}{a''_{33}}$$
The reduced row echelon form of a matrix is unique.

Note that we do not need the reduced row echelon form to solve systems of linear equations; it offers no numerical or theoretical advantage over the row echelon form.

The Gauss-Jordan elimination method uses elementary row operations to reduce the coefficient matrix $[A]$ in $[A \,\vdots\, b]$ to the identity matrix, so that:

$$\begin{bmatrix} 1 & 0 & 0 & \vdots & b^{*}_{1} \\ 0 & 1 & 0 & \vdots & b^{*}_{2} \\ 0 & 0 & 1 & \vdots & b^{*}_{3} \end{bmatrix}$$

Then the solution set of the system of equations is

$$x_{1} = b^{*}_{1}, \quad x_{2} = b^{*}_{2}, \quad x_{3} = b^{*}_{3}$$
Inverse of a matrix by using Gauss-Jordan elimination
In this method, the matrix 𝐴 and the identity matrix 𝐼 can be used to construct the augmented
matrix [𝐴 ⋮ 𝐼]. Perform Gauss-Jordan elimination and reduce 𝐴 to the identity matrix, then the
identity matrix on the right-hand side becomes the inverse matrix of 𝐴.

Consider the matrix equation:


$$AX = I$$

where the unknown matrix $X$ is the inverse $A^{-1}$.

Construct the augmented matrix:


[𝐴 ⋮ 𝐼]

Perform elementary row operations and obtain:


[𝐼 ⋮ 𝐴−1 ]
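A minimal Python sketch of this procedure (illustrative, not from the notes; it assumes the pivots never become zero, since no pivoting safeguard is included):

```python
def gauss_jordan_inverse(A):
    """Reduce the augmented matrix [A | I] to [I | A^-1]
    with elementary row operations."""
    n = len(A)
    # build the augmented matrix [A | I]
    M = [list(map(float, row)) + [float(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for j in range(n):
        piv = M[j][j]                    # assumed nonzero (no pivoting here)
        M[j] = [v / piv for v in M[j]]   # scale the pivot row: leading 1
        for i in range(n):
            if i != j:                   # eliminate column j in all other rows
                f = M[i][j]
                M[i] = [a - f * b for a, b in zip(M[i], M[j])]
    return [row[n:] for row in M]        # the right half is A^-1
```

Applied to the 4 x 4 matrix of the example that follows, it reproduces the inverse shown there.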
Example: Solve the given system of equations and calculate the inverse of the matrix A using
the Gauss-Jordan elimination method.
$$A = \begin{bmatrix} 1 & 2 & 1 & 0 \\ 0 & -1 & 2 & -1 \\ -1 & 2 & 1 & 1 \\ -1 & 1 & 0 & 1 \end{bmatrix}, \qquad b = \begin{bmatrix} 2 \\ -1 \\ 3 \\ 1 \end{bmatrix}$$
Construct the augmented matrix $[A \,\vdots\, I]$:

$$[A \,\vdots\, I] = \left[\begin{array}{rrrr:rrrr} 1 & 2 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & -1 & 2 & -1 & 0 & 1 & 0 & 0 \\ -1 & 2 & 1 & 1 & 0 & 0 & 1 & 0 \\ -1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right]$$

Use Gauss-Jordan elimination and find

$$[I \,\vdots\, A^{-1}] = \left[\begin{array}{rrrr:rrrr} 1 & 0 & 0 & 0 & 2 & 1 & -4 & 5 \\ 0 & 1 & 0 & 0 & -1 & -1 & 3 & -4 \\ 0 & 0 & 1 & 0 & 1 & 1 & -2 & 3 \\ 0 & 0 & 0 & 1 & 3 & 2 & -7 & 10 \end{array}\right]$$
Similarly, apply Gauss-Jordan elimination to $[A \,\vdots\, b]$ and find

$$\left[\begin{array}{rrrr:r} 1 & 0 & 0 & 0 & -4 \\ 0 & 1 & 0 & 0 & 4 \\ 0 & 0 & 1 & 0 & -2 \\ 0 & 0 & 0 & 1 & -7 \end{array}\right], \qquad x = \begin{bmatrix} -4 \\ 4 \\ -2 \\ -7 \end{bmatrix}$$
Pitfalls of Gauss elimination method
Division by zero: during the elimination steps, a pivot element may become equal to, or very close to, zero.
Techniques to improve the solution:
Pivoting: Interchanging the rows and columns so that the largest coefficient is the pivot
element.

Round-off errors: Rounding errors can increase rapidly due to the large number of mathematical
operations in the elimination steps.
Techniques to improve the solution:
Precision: Use of more significant figures
Pivoting: reduces rounding errors
Ill-conditioned systems

If the determinant of the coefficient matrix 𝐴, 𝑑𝑒𝑡(𝐴) = 0, then the system is singular.

If the determinant of the coefficient matrix 𝐴, 𝑑𝑒𝑡(𝐴) ≅ 0 , then the system is ill-conditioned.

Techniques to improve the solution:


Scaling: Normalization of the elements in a row so that the largest element is set to 1.

Gauss elimination method with partial pivoting


In partial pivoting, the rows are interchanged so that the largest element (in absolute value) in the current column becomes the pivot.
In full pivoting, both row and column interchanges are considered to bring the largest absolute
value in the entire remaining submatrix to the pivot position. However, full pivoting is rarely used
because it makes the algorithm more complicated.
Example: Solve the following system of linear equations using Gauss elimination method with
partial pivoting. Use 3 significant figures in your calculation.
0.0001𝑥1 + 2𝑥2 + 3𝑥3 = 1.01
2𝑥1 + 4𝑥2 + 3𝑥3 = 1
3𝑥1 − 𝑥2 = −2
Solution:
Without pivoting:

$$\begin{bmatrix} 0.0001 & 2 & 3 & \vdots & 1.01 \\ 0 & -39996 & -59997 & \vdots & -20199 \\ 0 & 0 & 6 & \vdots & 0.0352 \end{bmatrix}, \qquad \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 4.2 \\ 0.496 \\ 0.00586 \end{bmatrix}$$

With partial pivoting:

$$\begin{bmatrix} 3 & -1 & 0 & \vdots & -2 \\ 0 & 4.67 & 3 & \vdots & 2.33 \\ 0 & 0 & 1.72 & \vdots & 0.0121 \end{bmatrix}, \qquad \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -0.502 \\ 0.494 \\ 0.00708 \end{bmatrix}$$
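Gauss elimination with partial pivoting can be sketched in Python (an illustrative implementation, not from the notes; the function name is an assumption and the system is assumed nonsingular):

```python
def gauss_solve(A, b):
    """Gauss elimination with partial pivoting, then back substitution."""
    n = len(b)
    M = [list(map(float, A[i])) + [float(b[i])] for i in range(n)]  # [A | b]
    for j in range(n):
        # swap in the row whose column-j entry is largest in absolute value
        p = max(range(j, n), key=lambda i: abs(M[i][j]))
        M[j], M[p] = M[p], M[j]
        for i in range(j + 1, n):        # eliminate below the pivot
            f = M[i][j] / M[j][j]
            for k in range(j, n + 1):
                M[i][k] -= f * M[j][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):       # back substitution
        s = sum(M[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x
```

In double precision this solver gives roughly x = (-0.5013, 0.4962, 0.005866) for the system above; the 3-significant-figure hand results differ because of the aggressive rounding.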
Example: Solve the following system of linear equations using Gauss elimination method with and
without scaling. Use 3 and 4 significant figures for two different estimates.
𝑥1 + 100000𝑥2 = 10000
𝑥1 − 𝑥2 = 1
Without scaling:

$$\begin{bmatrix} 1 & 100000 & \vdots & 10000 \\ 0 & -100001 & \vdots & -9999 \end{bmatrix}$$

$$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0.100 \end{bmatrix} \text{ (3 sig. figures)}, \qquad \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 0.09999 \end{bmatrix} \text{ (4 sig. figures)}$$

With scaling and pivoting:

$$\begin{bmatrix} 1 & -1 & \vdots & 1 \\ 0 & 1 & \vdots & 0.1 \end{bmatrix}$$

$$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1.10 \\ 0.100 \end{bmatrix} \text{ (3 sig. figures)}, \qquad \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1.100 \\ 0.09999 \end{bmatrix} \text{ (4 sig. figures)}$$

LU decomposition method
Consider the system of equation:
𝐴𝑥 = 𝑏

The coefficient matrix $A$ can be written as the product of a lower triangular matrix $L$ and an upper triangular matrix $U$.

𝐴 = 𝐿𝑈

A matrix whose pivots in the elimination are all nonzero can be written as the product of a lower and an upper triangular matrix, and in more than one way:

$$\begin{bmatrix} 3 & 1 & 5 \\ 2 & 3 & 4 \\ -1 & -2 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 2/3 & 1 & 0 \\ -1/3 & -5/7 & 1 \end{bmatrix} \begin{bmatrix} 3 & 1 & 5 \\ 0 & 7/3 & 2/3 \\ 0 & 0 & 22/7 \end{bmatrix}$$

$$\begin{bmatrix} 3 & 1 & 5 \\ 2 & 3 & 4 \\ -1 & -2 & 1 \end{bmatrix} = \begin{bmatrix} 3 & 0 & 0 \\ 2 & 7/3 & 0 \\ -1 & -5/3 & 22/7 \end{bmatrix} \begin{bmatrix} 1 & 1/3 & 5/3 \\ 0 & 1 & 2/7 \\ 0 & 0 & 1 \end{bmatrix}$$
Solution of the system of equations using LU decomposition:
𝐴𝑥 = 𝑏

𝐿𝑈𝑥 = 𝑏

Let 𝑈𝑥 = 𝑦, then

𝐿𝑦 = 𝑏

First use forward substitution to solve 𝐿𝑦 = 𝑏.

Then use 𝑦 and apply backward substitution to solve 𝑈𝑥 = 𝑦.
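Given the factors, the two substitution sweeps can be sketched in Python (illustrative, not from the notes; the names are assumptions):

```python
def lu_solve(L, U, b):
    """Solve L U x = b: forward substitution for L y = b,
    then backward substitution for U x = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                   # forward: first row first
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):       # backward: last row first
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

With the $L$ and $U$ factors of the 3 x 3 example above and $b = (9, 9, -2)^T$ (chosen so that the exact solution is $x = (1, 1, 1)^T$), `lu_solve` returns `[1.0, 1.0, 1.0]` up to round-off.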

LU decomposition using Gauss Elimination method


𝐴𝑥 − 𝑏 = 0

At the end of the elimination step


𝑈𝑥 − 𝑦 = 0

𝑈 = 𝐴∗ and 𝑦 = 𝑏 ∗

Multiply both sides of the equation by $L$

𝐿𝑈𝑥 − 𝐿𝑦 = 0

Then
𝐿𝑈𝑥 − 𝐿𝑦 = 𝐴𝑥 − 𝑏

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$

$$L = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ l_{21} & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & 1 \end{bmatrix} \quad \text{and} \quad U = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a'_{22} & \cdots & a'_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a^{(n-1)}_{nn} \end{bmatrix}$$

At the end of the Gauss elimination method, the row echelon form of $A$ is the upper triangular matrix $U$. The elements below the diagonal of the lower triangular matrix $L$ are the multiplication factors used in the elimination phase.
$$l_{21} = \frac{a_{21}}{a_{11}}, \quad l_{31} = \frac{a_{31}}{a_{11}}, \quad \ldots, \quad l_{n1} = \frac{a_{n1}}{a_{11}}, \qquad l_{n2} = \frac{a'_{n2}}{a'_{22}}, \quad \ldots$$
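These multiplication factors fall out of the elimination loop directly; a Python sketch (illustrative, without pivoting, assuming all pivots are nonzero):

```python
def lu_decompose(A):
    """Gauss elimination without pivoting: U is the row echelon form,
    L holds the multipliers l_ij = a_ij / a_jj below a unit diagonal."""
    n = len(A)
    U = [list(map(float, row)) for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            f = U[i][j] / U[j][j]        # multiplication factor l_ij
            L[i][j] = f
            for k in range(j, n):
                U[i][k] -= f * U[j][k]
    return L, U
```

For the 3 x 3 matrix factored earlier it reproduces $l_{21} = 2/3$, $l_{31} = -1/3$, $l_{32} = -5/7$ and the pivots 3, 7/3, 22/7.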

Example: Use LU decomposition method and solve the following system of linear equations. Use
four significant figures.
2𝑥1 + 3𝑥2 + 𝑥3 = −1
5𝑥1 + 2𝑥2 − 𝑥3 = 2
3𝑥1 − 2𝑥2 + 4𝑥3 = 3
Solution:
$$L = \begin{bmatrix} 1 & 0 & 0 \\ 2.5 & 1 & 0 \\ 1.5 & 1.182 & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} 2 & 3 & 1 \\ 0 & -5.5 & -3.5 \\ 0 & 0 & 6.636 \end{bmatrix}, \qquad \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0.6712 \\ -0.7397 \\ -0.1233 \end{bmatrix}$$
LU decomposition and pivoting
𝐴𝑥 = 𝑏

𝑃𝐴𝑥 = 𝑃𝑏

$P$ is a permutation matrix representing the row interchanges due to partial pivoting. Then

𝑃𝐴 = 𝐿𝑈

and
𝐿𝑈𝑥 = 𝑃𝑏
Example: Use LU decomposition method with partial pivoting and solve the following system of
linear equations. Use four significant figures.
𝑥1 − 4𝑥2 + 2𝑥3 = −4
3𝑥1 − 𝑥2 − 𝑥3 = 1
9𝑥1 − 3𝑥2 + 5𝑥3 = −2
Solution:
$$L = \begin{bmatrix} 1 & 0 & 0 \\ 0.1111 & 1 & 0 \\ 0.3333 & 0 & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} 9 & -3 & 5 \\ 0 & -3.667 & 1.444 \\ 0 & 0 & -2.667 \end{bmatrix}$$

$$P = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \qquad \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0.3864 \\ 0.7841 \\ -0.6251 \end{bmatrix}$$
Exercise: Use LU decomposition method and solve the following system of linear equations. Use
four significant figures.
𝑥1 + 2𝑥2 + 𝑥3 = −3
−𝑥1 − 2𝑥2 + 3𝑥3 = 1
3𝑥1 + 4𝑥2 + 7𝑥3 = 0
Inverse of a matrix by using LU decomposition

𝐴𝐴−1 = 𝐼 𝑜𝑟 𝐴−1 𝐴 = 𝐼

Let
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad A^{-1} = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1n} \\ \alpha_{21} & \alpha_{22} & \cdots & \alpha_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_{n1} & \alpha_{n2} & \cdots & \alpha_{nn} \end{bmatrix}$$

Writing $A A^{-1} = I$ column by column,

$$A \, [\vec{\alpha}_1 \;\; \vec{\alpha}_2 \;\; \cdots \;\; \vec{\alpha}_n] = [\vec{I}_1 \;\; \vec{I}_2 \;\; \cdots \;\; \vec{I}_n]$$

Then one can solve the following matrix equations to find $A^{-1}$:

$$A \begin{bmatrix} \alpha_{11} \\ \alpha_{21} \\ \vdots \\ \alpha_{n1} \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \qquad A \begin{bmatrix} \alpha_{12} \\ \alpha_{22} \\ \vdots \\ \alpha_{n2} \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix}, \qquad \ldots, \qquad A \begin{bmatrix} \alpha_{1n} \\ \alpha_{2n} \\ \vdots \\ \alpha_{nn} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}$$

$$A^{-1} = [\vec{\alpha}_1 \;\; \vec{\alpha}_2 \;\; \cdots \;\; \vec{\alpha}_n]$$

LU decomposition is well suited to these matrix equations because the decomposition of the coefficient matrix $A$ has to be performed only once; each right-hand side then needs only a forward and a backward substitution.
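A Python sketch of the column-by-column procedure (illustrative, not from the notes; the factorization is repeated inline so the function is self-contained, and no pivoting is used):

```python
def inverse_via_lu(A):
    """Factor A = LU once, then solve A a_j = e_j for each column a_j of A^-1."""
    n = len(A)
    # Doolittle factorization (no pivoting, nonzero pivots assumed)
    U = [list(map(float, row)) for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            L[i][j] = U[i][j] / U[j][j]
            for k in range(j, n):
                U[i][k] -= L[i][j] * U[j][k]
    cols = []
    for j in range(n):
        e = [float(i == j) for i in range(n)]       # j-th column of I
        y = [0.0] * n
        for i in range(n):                          # forward substitution
            y[i] = e[i] - sum(L[i][k] * y[k] for k in range(i))
        x = [0.0] * n
        for i in range(n - 1, -1, -1):              # backward substitution
            x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
        cols.append(x)
    return [[cols[j][i] for j in range(n)] for i in range(n)]  # columns -> rows
```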
Example: Find the inverse of the given matrix.
3 1 5
𝐴=[ 2 3 4]
−1 −2 1
If
1 0 0 3 1 5
𝐿 = [ 2/3 1 0] and 𝑈 = [0 7/3 2/3 ]
−1/3 −5/7 1 0 0 22/7
Solution:

$$\begin{bmatrix} 3 & 1 & 5 \\ 2 & 3 & 4 \\ -1 & -2 & 1 \end{bmatrix} [\vec{\alpha}_1 \;\; \vec{\alpha}_2 \;\; \vec{\alpha}_3] = [\vec{I}_1 \;\; \vec{I}_2 \;\; \vec{I}_3]$$

$$\begin{bmatrix} 3 & 1 & 5 \\ 2 & 3 & 4 \\ -1 & -2 & 1 \end{bmatrix} \begin{bmatrix} \alpha_{11} \\ \alpha_{21} \\ \alpha_{31} \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \qquad \begin{bmatrix} 3 & 1 & 5 \\ 2 & 3 & 4 \\ -1 & -2 & 1 \end{bmatrix} \begin{bmatrix} \alpha_{12} \\ \alpha_{22} \\ \alpha_{32} \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \qquad \begin{bmatrix} 3 & 1 & 5 \\ 2 & 3 & 4 \\ -1 & -2 & 1 \end{bmatrix} \begin{bmatrix} \alpha_{13} \\ \alpha_{23} \\ \alpha_{33} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$

$$A^{-1} = \begin{bmatrix} 0.5 & -0.5 & -0.5 \\ -0.2727 & 0.3636 & -0.09091 \\ -0.04545 & 0.2273 & 0.3182 \end{bmatrix}$$
Matrix Condition Number
The condition number of a matrix measures how much the solution of a linear system ($Ax = b$) changes when the input data change. It indicates how stable the solution is and how reliable the calculation is.

$$Cond(A) = \|A\| \, \|A^{-1}\|$$

$\|A\|$ is the norm of the matrix $A$, and can be calculated as the maximum row sum:

$$\|A\| = \max_{1 \le i \le n} \left\{ \sum_{j=1}^{n} |a_{ij}| \right\}$$

For

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$

the row sums are

$$\|A\|_1 = \sum_{j=1}^{n} |a_{1j}| = |a_{11}| + |a_{12}| + \cdots + |a_{1n}|$$

$$\|A\|_2 = \sum_{j=1}^{n} |a_{2j}| = |a_{21}| + |a_{22}| + \cdots + |a_{2n}|$$

$$\vdots$$

$$\|A\|_n = \sum_{j=1}^{n} |a_{nj}| = |a_{n1}| + |a_{n2}| + \cdots + |a_{nn}|$$

$$\|A\| = \max\{\|A\|_1, \|A\|_2, \cdots, \|A\|_n\}$$

A large condition number means that small changes in vector 𝑏 or rounding errors in 𝐴 can cause
large changes in the solution 𝑥. A small condition number means the system is well-behaved and
stable.

Example: What is the condition number of the given system? How does a change in $b$ by $\Delta b = \begin{bmatrix} 0.01 \\ 0.01 \end{bmatrix}$ affect the behaviour of the system?

$$A = \begin{bmatrix} 1.1563 & 0.5711 \\ 0.3164 & 0.1564 \end{bmatrix}, \qquad b = \begin{bmatrix} 0.3142 \\ 0.1123 \end{bmatrix}$$

Solution:

$$\|A\| = 1.7274, \qquad \det(A) = 0.00014928$$

$$A^{-1} = \begin{bmatrix} 1048 & -3826 \\ -2120 & 7746 \end{bmatrix}, \qquad \|A^{-1}\| = 9866$$

$$Cond(A) = 1.7274 \times 9866 = 17042$$

$$x = \begin{bmatrix} -100.44 \\ 203.91 \end{bmatrix} \text{ for } b = \begin{bmatrix} 0.3142 \\ 0.1123 \end{bmatrix}, \qquad x = \begin{bmatrix} -128.22 \\ 260.17 \end{bmatrix} \text{ for } b = \begin{bmatrix} 0.3242 \\ 0.1223 \end{bmatrix}$$
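The row-sum norm and the condition number are straightforward to compute; a Python sketch (illustrative, not from the notes; it takes a precomputed inverse as input rather than inverting the matrix itself):

```python
def row_sum_norm(M):
    """Maximum-row-sum (infinity) norm of a matrix."""
    return max(sum(abs(v) for v in row) for row in M)

def cond_row_sum(A, A_inv):
    """Cond(A) = ||A|| * ||A^-1|| with the row-sum norm."""
    return row_sum_norm(A) * row_sum_norm(A_inv)
```

For the example above, with the rounded inverse, this gives Cond(A) of roughly 17042, matching the hand calculation.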
Indirect methods
Jacobi Method
Consider the following system of equations:
𝑎11 𝑥1 + 𝑎12 𝑥2 + ⋯ + 𝑎1𝑛 𝑥𝑛 = 𝑏1
𝑎21 𝑥1 + 𝑎22 𝑥2 + ⋯ + 𝑎2𝑛 𝑥𝑛 = 𝑏2

𝑎𝑛1 𝑥1 + 𝑎𝑛2 𝑥2 + ⋯ + 𝑎𝑛𝑛 𝑥𝑛 = 𝑏𝑛
The above equations can be rewritten in iterative form as follows:

$$x_1^{(i+1)} = \frac{-a_{12}x_2^{(i)} - \cdots - a_{1n}x_n^{(i)} + b_1}{a_{11}}$$

$$x_2^{(i+1)} = \frac{-a_{21}x_1^{(i)} - \cdots - a_{2n}x_n^{(i)} + b_2}{a_{22}}$$

$$\vdots$$

$$x_n^{(i+1)} = \frac{-a_{n1}x_1^{(i)} - \cdots - a_{n,n-1}x_{n-1}^{(i)} + b_n}{a_{nn}}$$

Start with an initial guess

$$x^{(0)} = \begin{bmatrix} x_1^{(0)} \\ x_2^{(0)} \\ \vdots \\ x_n^{(0)} \end{bmatrix}$$
And calculate the next approximations. The Jacobi method solves each equation for one variable,
assuming the other variables stay the same as they were in the previous step. It updates all
variables at the same time in each step.
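A Python sketch of the Jacobi sweep (illustrative, not from the notes; the function name and argument order are assumptions):

```python
def jacobi(A, b, x0, iterations):
    """Jacobi iteration: each new x_k uses only values from the
    previous iterate, so all variables update simultaneously."""
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        x_new = [0.0] * n
        for k in range(n):
            s = sum(A[k][j] * x[j] for j in range(n) if j != k)
            x_new[k] = (b[k] - s) / A[k][k]
        x = x_new                        # replace the whole vector at once
    return x
```

One sweep from the zero vector on the 4 x 4 example system solved later reproduces the first Jacobi row of its iteration table: (0.2, -0.1667, 0, 0.4).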
Gauss-Seidel Method
Consider the following system of equations:
𝑎11 𝑥1 + 𝑎12 𝑥2 + ⋯ + 𝑎1𝑛 𝑥𝑛 = 𝑏1
𝑎21 𝑥1 + 𝑎22 𝑥2 + ⋯ + 𝑎2𝑛 𝑥𝑛 = 𝑏2

𝑎𝑛1 𝑥1 + 𝑎𝑛2 𝑥2 + ⋯ + 𝑎𝑛𝑛 𝑥𝑛 = 𝑏𝑛
The above equations can be rewritten in iterative form as follows:

$$x_1^{(i+1)} = \frac{-a_{12}x_2^{(i)} - \cdots - a_{1n}x_n^{(i)} + b_1}{a_{11}}$$

$$x_2^{(i+1)} = \frac{-a_{21}x_1^{(i+1)} - a_{23}x_3^{(i)} - \cdots - a_{2n}x_n^{(i)} + b_2}{a_{22}}$$

$$\vdots$$

$$x_n^{(i+1)} = \frac{-a_{n1}x_1^{(i+1)} - a_{n2}x_2^{(i+1)} - \cdots - a_{n,n-1}x_{n-1}^{(i+1)} + b_n}{a_{nn}}$$

Start with an initial guess

$$x^{(0)} = \begin{bmatrix} x_1^{(0)} \\ x_2^{(0)} \\ \vdots \\ x_n^{(0)} \end{bmatrix}$$
In this method, whenever an updated approximation of a variable is available, it is used immediately to approximate the next variable. For example, use the initial guesses to estimate $x_1^{(1)}$; then replace $x_1^{(0)}$ with $x_1^{(1)}$ to approximate $x_2^{(1)}$, replace $x_2^{(0)}$ with $x_2^{(1)}$ to approximate $x_3^{(1)}$, and so on. The Gauss-Seidel method is an improvement on the Jacobi method because it uses the updated values within the current step.
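A Python sketch of the Gauss-Seidel sweep (illustrative, not from the notes; overwriting the solution vector in place is exactly what makes each new value available immediately):

```python
def gauss_seidel(A, b, x0, iterations):
    """Gauss-Seidel iteration: each updated value is used
    immediately within the same sweep."""
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        for k in range(n):
            s = sum(A[k][j] * x[j] for j in range(n) if j != k)
            x[k] = (b[k] - s) / A[k][k]  # overwrite in place right away
    return x
```

One sweep from the zero vector on the 4 x 4 example system below reproduces the first Gauss-Seidel row of its iteration table: (0.2, -0.2, 0, 0.48).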
Relative approximate error for the Jacobi and Gauss-Seidel methods

$$\varepsilon_{a,n} = \left| \frac{x_n^{(i)} - x_n^{(i-1)}}{x_n^{(i)}} \right|$$

Example: Use the Jacobi and Gauss-Seidel methods to solve the following system of equations.
Start with an initial guess of 𝑥 0 = [0 0 0 0]𝑇 and use four significant figures in your
calculations.
5𝑥1 + 2𝑥2 − 𝑥3 + 𝑥4 = 1
𝑥1 + 6𝑥2 − 2𝑥3 = −1
−𝑥1 − 𝑥2 − 6𝑥3 + 3𝑥4 = 0
2𝑥2 − 𝑥3 + 5𝑥4 = 2
Jacobi method

Iteration    x1        x2        x3        x4
    1        0.2       -0.1667   0         0.4
    2        0.1867    -0.2      0.1944    0.4667
    3        0.2256    -0.133    0.2356    0.5189
    4        0.1965    -0.1257   0.244     0.5003
    5        0.199     -0.1181   0.2384    0.4991
    6        0.1951    -0.1204   0.2361    0.4949
    7        0.1964    -0.1205   0.235     0.4954
    8        0.1961    -0.1211   0.235     0.4952

Gauss-Seidel method

Iteration    x1        x2        x3        x4
    1        0.2       -0.2      0         0.48
    2        0.184     -0.1973   0.2422    0.5274
    3        0.2219    -0.1229   0.2472    0.4986
    4        0.1989    -0.1174   0.2357    0.4941
    5        0.1953    -0.1206   0.2346    0.4952
    6        0.1961    -0.1212   0.2351    0.4955


Convergence of iteration methods
The Jacobi and Gauss-Seidel methods converge when the system of equations is diagonally
dominant.
$$|a_{ii}| > \sum_{j \ne i} |a_{ij}|$$

In other words, the system converges if the absolute value of the diagonal term in a row is greater
than the sum of the absolute values of the remaining terms in that row. This condition is sufficient
but not necessary for convergence.
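This sufficient condition is easy to check programmatically (an illustrative Python sketch, not from the notes):

```python
def is_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row i."""
    return all(
        abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
        for i, row in enumerate(A)
    )
```

For instance, it returns False for the coefficient matrix of the example below as written, and True once the equations are reordered.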

Example: Use the Gauss-Seidel method to solve the following system of equations with an
approximate relative error of less than 0.001.

0.1𝑥1 + 0.2𝑥2 − 0.8𝑥3 = 1
−2𝑥1 − 0.3𝑥2 − 0.6𝑥3 = 3
−0.1𝑥1 + 6𝑥2 − 3𝑥3 = −2

The system of equations is not diagonally dominant. The iterations will diverge.

Iteration    x1        x2        x3
    1        10        -76.67    -153
    2        -1061     7367      14770
    3        103438    -719135   -1441718

Reordering the equations makes the system diagonally dominant, and the iterations converge:

Iteration    x1        x2        x3        max(εx1, εx2, εx3)
    1        -1.5      -0.3583   -1.5271   -
    2        -0.9881   -1.1133   -1.6519   0.6782
    3        -0.8374   -1.1732   -1.648    0.18
    4        -0.8296   -1.1712   -1.6465   0.0094
    5        -0.8304   -1.1704   -1.6464   0.0009633
