Iterative Methods For Solution of Systems of Linear Equations

The document discusses iterative methods for solving systems of linear equations, including the Jacobi method and Gauss-Seidel method. The Jacobi method constructs a convergent sequence by iteratively solving for x using the previous value of x on the right-hand side. The Gauss-Seidel method similarly solves for each unknown iteratively, but uses already updated values instead of the previous iteration's values. Both methods converge if the matrix is diagonally dominant. The document provides examples applying these methods to systems of equations.

Uploaded by pedroquiroga7
© Attribution Non-Commercial (BY-NC)

Chapter 5: Iterative Methods for Solution of Systems of Linear Equations

PEDRO FERNANDO QUIROGA NOVOA
Escuela de Ingeniería de Petróleos
UNIVERSIDAD INDUSTRIAL DE SANTANDER
Iterative methods offer an alternative to the elimination methods described in the previous installment (Direct Methods for Solving Linear Systems) for approximating the solution. Topics:

 Special Matrices
 Jacobi method
 Gauss-Seidel method
 Gauss-Seidel method with relaxation

1. Special Matrices:

Banded matrix:

A banded matrix is a square matrix in which all elements are zero except for a band centered on the main diagonal.
Gauss elimination and LU decomposition can be applied to banded systems but are often inefficient, so there are alternative algorithms that allow a more efficient solution. One example is the Thomas algorithm.

 Tridiagonal Matrix:

A tridiagonal matrix is a banded matrix with a bandwidth of three.

Example: For a tridiagonal system, by LU decomposition we have the factors [L] and [U]. Forward substitution is performed as follows:

r2 = 0.8 − (−0.49)(40.8) = 20.8
r3 = 0.8 − (−0.645)(20.8) = 14.221
r4 = 200.8 − (−0.717)(14.221) = 210.996

Therefore the right-hand-side vector is

{40.8, 20.8, 14.221, 210.996}

which is used with the matrix [U] to perform the back substitution, and then we have:

T4 = 210.996 / 1.323 = 159.480
T3 = [14.221 − (−159.480)] / 1.395 = 124.538
T2 = [20.800 − (−124.538)] / 1.550 = 93.778
T1 = [40.800 − (−93.778)] / 2.040 = 65.970
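The forward elimination and back substitution above follow the Thomas algorithm. Below is a minimal Python sketch; since the example matrix itself is not written out above, the coefficients are inferred from the worked numbers (diagonal elements 2.04, off-diagonal elements −1, right-hand side {40.8, 0.8, 0.8, 200.8}), which is an assumption:

```python
def thomas(e, f, g, r):
    """Solve a tridiagonal system by the Thomas algorithm.
    e: sub-diagonal (e[0] unused), f: main diagonal,
    g: super-diagonal (g[-1] unused), r: right-hand side."""
    n = len(f)
    f, r = f[:], r[:]                 # work on copies
    for k in range(1, n):             # decomposition + forward substitution
        m = e[k] / f[k - 1]
        f[k] -= m * g[k - 1]
        r[k] -= m * r[k - 1]
    x = [0.0] * n
    x[-1] = r[-1] / f[-1]             # back substitution
    for k in range(n - 2, -1, -1):
        x[k] = (r[k] - g[k] * x[k + 1]) / f[k]
    return x

# Coefficients inferred from the worked values above (an assumption):
T = thomas([0.0, -1.0, -1.0, -1.0],
           [2.04, 2.04, 2.04, 2.04],
           [-1.0, -1.0, -1.0, 0.0],
           [40.8, 0.8, 0.8, 200.8])
print([round(t, 2) for t in T])       # [65.97, 93.78, 124.54, 159.48]
```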

2. Jacobi method:
The Jacobi method is an iterative method used for solving systems of linear equations Ax = b. The algorithm is named after the German mathematician Carl Gustav Jacob Jacobi. The basis of the method is to construct a convergent sequence defined iteratively.

Given a square system Ax = b of n linear equations, where A is the coefficient matrix, x the vector of unknowns, and b the right-hand-side vector, A can be decomposed into a diagonal component D and a remainder R = L + U, where L and U contain the elements below and above the diagonal. The system of linear equations may be rewritten as:

(D + R)x = b
Dx + Rx = b
Dx = b − Rx

The Jacobi method is an iterative technique that solves the left-hand side of this expression for x, using the previous value of x on the right-hand side. Analytically, this may be written as:

x^(k+1) = D^(−1) (b − R x^(k))

The element-based formula is thus:

x_i^(k+1) = (1 / a_ii) ( b_i − Σ_{j≠i} a_ij x_j^(k) ),   i = 1, 2, …, n

The Jacobi method always converges if the matrix A is strictly diagonally dominant, and it can converge even if this condition is not satisfied. It is necessary, however, that the diagonal elements of the matrix be greater in magnitude than the other elements.

Example: Given

A = [2 1; 5 7],  b = [11; 13],  x^(0) = [1; 1]

with T and C calculated below, we estimate x as x^(1) = D^(−1)(b − R x^(0)) = T x^(0) + C.

D^(−1) = [1/2 0; 0 1/7],  L = [0 0; 5 0],  U = [0 1; 0 0]

We determine T = −D^(−1)(L + U) as

T = −[1/2 0; 0 1/7] ([0 0; 5 0] + [0 1; 0 0]) = [0 −0.5; −0.714 0]

And C = D^(−1) b as:

C = [1/2 0; 0 1/7] [11; 13] = [5.5; 1.857]

Then

x^(1) = [0 −0.5; −0.714 0] [1; 1] + [5.5; 1.857] = [5.0; 1.143]

The next iteration yields

x^(2) = [0 −0.5; −0.714 0] [5.0; 1.143] + [5.5; 1.857] = [4.929; −1.713]

This process is repeated until convergence (i.e., until ||A x^(n) − b|| is small). The solution after 25 iterations is

x = [7.111; −3.222]
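The element-based formula can be sketched directly in Python; the function name `jacobi` is ours, while A, b and x^(0) are the example values above:

```python
def jacobi(A, b, x0, iterations):
    """Jacobi iteration: every component of the new iterate is computed
    from the previous iterate only (unlike Gauss-Seidel)."""
    n = len(b)
    x = x0[:]
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[2.0, 1.0], [5.0, 7.0]]
b = [11.0, 13.0]
print(jacobi(A, b, [1.0, 1.0], 1))    # first iterate: [5.0, 1.1428...]
print(jacobi(A, b, [1.0, 1.0], 25))   # ≈ [7.111, -3.222]
```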
3. Gauss-Seidel method

This is the most commonly used iterative method. Assuming a system of three equations:

[A][X] = [B]

If the diagonal elements are all nonzero, each equation can be solved for one of the unknowns:

x1 = (b1 − a12 x2 − a13 x3) / a11
x2 = (b2 − a21 x1 − a23 x3) / a22
x3 = (b3 − a31 x1 − a32 x2) / a33

You can start by choosing initial values for x. Assuming that all are zero gives x1 = b1 / a11; this value is then substituted into the other equations to obtain the new values of x2 and x3. Then return to the first equation and repeat the procedure until a good approximation is reached.

The convergence is evaluated using the following criterion:

ε_a,i = |(x_i^j − x_i^(j−1)) / x_i^j| · 100% < ε_s

The superscript j indicates the current iteration and j−1 the previous iteration.

 Example:

Solve the system using the Gauss-Seidel method:

3x1 − 0.1x2 − 0.2x3 = 7.85
0.1x1 + 7x2 − 0.3x3 = −19.3
0.3x1 − 0.2x2 + 10x3 = 71.4

Solving for each unknown on the diagonal we have:

x1 = (7.85 + 0.1x2 + 0.2x3) / 3
x2 = (−19.3 − 0.1x1 + 0.3x3) / 7
x3 = (71.4 − 0.3x1 + 0.2x2) / 10

Assuming x2 and x3 are zero we have:

x1 = 7.85 / 3 = 2.616667

Substituting this value and assuming x3 is still zero we have:

x2 = (−19.3 − 0.1(2.616667)) / 7 = −2.794524

To complete the first iteration these values are substituted into the equation for x3:

x3 = (71.4 − 0.3(2.616667) + 0.2(−2.794524)) / 10 = 7.005610

For the second iteration the procedure is repeated:

x1 = (7.85 + 0.1(−2.794524) + 0.2(7.005610)) / 3 = 2.990557
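The Gauss-Seidel sweep and the ε_a stopping criterion can be sketched as follows; the helper name and the default ε_s value are ours, while the system is the example above:

```python
def gauss_seidel(A, b, x0, es=1e-4, max_sweeps=100):
    """Gauss-Seidel: each updated component is used immediately within
    the same sweep; stop when every epsilon_a (in percent) is below es."""
    n = len(b)
    x = x0[:]
    for _ in range(max_sweeps):
        worst = 0.0
        for i in range(n):
            old = x[i]
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            if x[i] != 0.0:
                worst = max(worst, abs((x[i] - old) / x[i]) * 100.0)
        if worst < es:
            break
    return x

A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
print(gauss_seidel(A, b, [0.0, 0.0, 0.0]))   # ≈ [3.0, -2.5, 7.0]
```

The first sweep reproduces the hand computation above (2.616667, −2.794524, 7.005610), since each new value is substituted immediately.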

x2 = (−19.3 − 0.1(2.990557) + 0.3(7.005610)) / 7 = −2.499625

x3 = (71.4 − 0.3(2.990557) + 0.2(−2.499625)) / 10 = 7.000291

Since in a real problem the errors are not known in advance, the following equation is a way to estimate them:

ε_a,i = |(x_i^j − x_i^(j−1)) / x_i^j| · 100%

ε_a,1 = |(2.990557 − 2.616667) / 2.990557| · 100% = 12.5%
ε_a,2 = 11.8%
ε_a,3 = 0.076%

4. Gauss-Seidel method with relaxation:

Relaxation is a modification of the Gauss-Seidel method which improves the convergence. After calculating each new value of x, this value is modified using the expression:

x_i^new = λ x_i^new + (1 − λ) x_i^old

where λ has a value between 0 and 2.

 If λ is equal to 1 the result is unchanged.
 If λ has a value between 0 and 1, the result is a weighted average of the current and previous results. This modification, known as underrelaxation, can make a non-convergent system converge or can hasten convergence.
 If λ has a value between 1 and 2, extra weight is given to the current value. This type of modification, called overrelaxation, can accelerate the convergence of a system that is already convergent.
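The relaxation step drops into the Gauss-Seidel sweep as a one-line modification; λ = 1.2 below is only an illustrative overrelaxation value, not one prescribed by the text:

```python
def gauss_seidel_relaxed(A, b, x0, lam, sweeps):
    """Gauss-Seidel with relaxation:
    x_new = lam * x_new + (1 - lam) * x_old, with 0 < lam < 2."""
    n = len(b)
    x = x0[:]
    for _ in range(sweeps):
        for i in range(n):
            new = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            x[i] = lam * new + (1.0 - lam) * x[i]   # the relaxation step
    return x

A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
# lam = 1 reproduces plain Gauss-Seidel; lam = 1.2 overrelaxes
print(gauss_seidel_relaxed(A, b, [0.0, 0.0, 0.0], 1.2, 20))   # ≈ [3.0, -2.5, 7.0]
```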
The choice of an appropriate value of λ is problem-specific and is determined empirically.

Bibliography

 CHAPRA, STEVEN (2007). "Métodos numéricos para ingeniería". Chapter 11.
 http://es.wikipedia.org/wiki/Metodo de Jacobi
