
CSC475: Numerical Computing

Tanveer Ahmed Siddiqui

Department of Computer Science


COMSATS University, Islamabad
Lecture # 09

Solution of
Systems of Linear Equations

Recap

The steps in LU decomposition.



Today Covered

After completing this lecture, you should know:
• Jacobi's Iterative Method
• Example
• Gauss-Seidel Iterative Method
• Example



Background

• Why do we need another method to solve a set of simultaneous linear equations?
• In certain cases, such as when a system of equations is large, iterative methods of solving equations are more advantageous.
  • Elimination methods, such as Gaussian elimination, are prone to large round-off errors for a large set of equations.
  • Iterative methods, such as the Gauss-Seidel method, give the user control over the round-off error.
  • Also, if the physics of the problem is well known, the initial guesses needed in iterative methods can be made more judiciously, leading to faster convergence.
Background

• A system of linear equations can also be solved by using an iterative approach.
• The process is, in principle, the same as in the fixed-point iteration method used for solving a single nonlinear equation.
• Let us first review the fixed-point iteration method.


Background

• In this method, we rewrite the given equation f(x) = 0 in the form
  x = φ(x)
• Let x = x0 be the initial approximation to the actual root; then the first approximation is
  x1 = φ(x0)
• and the successive approximations are
  x2 = φ(x1)
  x3 = φ(x2)
  x4 = φ(x3)
  ...
  xn = φ(xn−1)
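As an illustration (not from the slides), here is a minimal Python sketch of fixed-point iteration; the sample equation, starting point, and tolerance are assumed for the example.

import math

# Fixed-point iteration: solve x = phi(x) starting from x0,
# stopping when successive iterates are close enough.
def fixed_point(phi, x0, tol=1e-8, max_iter=100):
    x = x0
    for k in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Example: f(x) = x^2 - 2x - 3 = 0 rewritten as x = phi(x) = sqrt(2x + 3); the root is x = 3.
root, iterations = fixed_point(lambda x: math.sqrt(2 * x + 3), x0=4.0)
print(root, iterations)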
Background

• In an iterative process for solving a system of equations, the equations are written in an explicit form in which each unknown is written in terms of the other unknowns. The explicit form for a system of four equations is illustrated in the following figure.

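As a sketch of the explicit form such a figure typically shows (standard notation for a four-equation system [a][x] = [b]; not copied from the slide image):

x_1 = \frac{1}{a_{11}}\left(b_1 - a_{12}x_2 - a_{13}x_3 - a_{14}x_4\right)
x_2 = \frac{1}{a_{22}}\left(b_2 - a_{21}x_1 - a_{23}x_3 - a_{24}x_4\right)
x_3 = \frac{1}{a_{33}}\left(b_3 - a_{31}x_1 - a_{32}x_2 - a_{34}x_4\right)
x_4 = \frac{1}{a_{44}}\left(b_4 - a_{41}x_1 - a_{42}x_2 - a_{43}x_3\right)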


Background

• The solution process starts by assuming initial values for the unknowns (the first estimated solution).
• In the first iteration, the first assumed solution is substituted on the right-hand side of the equations, and the new values calculated for the unknowns are the second estimated solution.
• In the second iteration, the second solution is substituted back into the equations to give new values for the unknowns, which are the third estimated solution.
• The iterations continue in the same manner; when the method works, the solutions obtained in successive iterations converge toward the actual solution.
Condition for convergence

• For a system of n equations [a][x] = [b], a sufficient condition for convergence is that in each row of the coefficient matrix [a] the absolute value of the diagonal element is greater than the sum of the absolute values of the off-diagonal elements.

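In symbols (a standard statement of this diagonal-dominance condition, with coefficients a_ij; not copied from the slide):

\left| a_{ii} \right| > \sum_{j \ne i} \left| a_{ij} \right|, \qquad i = 1, 2, \dots, n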


Condition for convergence

• This condition is sufficient but not necessary for convergence when the iteration method is used.
• When the condition is satisfied, the matrix [a] is classified as diagonally dominant, and the iteration process converges toward the solution.
• The solution, however, might converge even when the condition is not satisfied.

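A minimal Python sketch (not part of the lecture) for testing this sufficient condition on a coefficient matrix; the example matrix is the one from the Jacobi example later in this lecture.

# Check whether a coefficient matrix is strictly diagonally dominant,
# i.e. satisfies the sufficient convergence condition described above.
def is_diagonally_dominant(A):
    n = len(A)
    for i in range(n):
        off_diagonal_sum = sum(abs(A[i][j]) for j in range(n) if j != i)
        if abs(A[i][i]) <= off_diagonal_sum:
            return False
    return True

A = [[83, 11, -4],
     [7, 52, 13],
     [3, 8, 29]]
print(is_diagonally_dominant(A))  # True: 83 > 15, 52 > 20, 29 > 11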


Iterative methods

• Two specific iterative methods for executing the iterations, the Jacobi and Gauss-Seidel methods, are presented next.
• It should be noted that these methods make two assumptions:
  • First, the system of linear equations to be solved must have a unique solution.
  • Second, there should not be any zeros on the main diagonal of the coefficient matrix A. If zeros do exist on the main diagonal, rows must be interchanged to obtain a coefficient matrix that does not have zero entries on the main diagonal.



Jacobi Iterative Method

• In the Jacobi method, an initial (first) value is assumed for each of the unknowns.
• If no information is available regarding the approximate values of the unknowns, the initial value of all the unknowns can be assumed to be zero.
• The second estimate of the solution is calculated by substituting the first estimate into the right-hand side of Eqs. (4.51).



Iterative methods

• In general, the (k + 1)th estimate of the solution is calculated from the (k)th estimate (a standard form of the update is sketched below).
• The iterations continue until the differences between the values obtained in successive iterations are small.

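A standard way to write the Jacobi update for a system [a][x] = [b] (the usual textbook form, given here as a sketch rather than copied from the slide):

x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j \ne i} a_{ij}\, x_j^{(k)}\right), \qquad i = 1, 2, \dots, n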


Iterative methods

• The iterations can be stopped when the absolute value of the estimated relative error of every unknown is smaller than some predetermined value (a common form of this criterion is sketched below).

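One common form of this stopping test (assumed here, with a user-chosen tolerance ε):

\left| \frac{x_i^{(k+1)} - x_i^{(k)}}{x_i^{(k+1)}} \right| < \varepsilon \qquad \text{for all } i = 1, 2, \dots, n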


Example

• Find the solution to the following system of equations using Jacobi's iterative method for the first five iterations:

  83x + 11y − 4z = 95
  7x + 52y + 13z = 104
  3x + 8y + 29z = 71



Solution

• The given system of equations can be rewritten as

  x = 95/83 − (11/83)y + (4/83)z
  y = 104/52 − (7/52)x − (13/52)z
  z = 71/29 − (3/29)x − (8/29)y
First Iteration

• Taking the initial starting solution vector as (0, 0, 0)^T, we have from the above equations the first approximation as

  x^(1) = 95/83 = 1.1446
  y^(1) = 104/52 = 2.0000
  z^(1) = 71/29 = 2.4483
Second Iteration

• Now, using the first approximation x^(1) = 1.1446, y^(1) = 2.0000, z^(1) = 2.4483, the second approximation is computed from the equations

  x^(2) = 1.1446 − 0.1325 y^(1) + 0.0482 z^(1)
  y^(2) = 2.0 − 0.1346 x^(1) − 0.25 z^(1)
  z^(2) = 2.4483 − 0.1035 x^(1) − 0.2759 y^(1)


Second Iteration

• Making use of these equations, we get the second approximation as

  x^(2) = 0.9976
  y^(2) = 1.2339
  z^(2) = 1.7424



• A similar procedure yields the third, fourth, and fifth approximations to the required solution; they are tabulated below.



Iteration r   x        y        z
1             1.1446   2.0000   2.4483
2             0.9976   1.2339   1.7424
3             1.0651   1.4301   2.0046
4             1.0517   1.3555   1.9435
5             1.0587   1.3726   1.9655
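For reference, a minimal Python sketch (not from the slides) of Jacobi's method applied to this system, starting from the zero vector and printing the first five iterations; small differences from the hand-computed table can occur because the slide rounds intermediate coefficients to four decimals.

# Jacobi iteration for the example system (illustrative sketch):
#   83x + 11y - 4z = 95,  7x + 52y + 13z = 104,  3x + 8y + 29z = 71

A = [[83.0, 11.0, -4.0],
     [7.0, 52.0, 13.0],
     [3.0, 8.0, 29.0]]
b = [95.0, 104.0, 71.0]

def jacobi(A, b, x0, iterations):
    n = len(b)
    x = list(x0)
    for r in range(1, iterations + 1):
        # Every new component is computed from the previous iterate only.
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
        print(f"iteration {r}: " + ", ".join(f"{v:.4f}" for v in x))
    return x

jacobi(A, b, x0=[0.0, 0.0, 0.0], iterations=5)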
Your Turn

• Solve the system of linear equations by Jacobi's method.



Gauss-Seidel Iterative Method



Example

• Find the solution of the following system of equations using the Gauss-Seidel method and perform the first five iterations:

  4x1 − x2 − x3 = 2
  −x1 + 4x2 − x4 = 2
  −x1 + 4x3 − x4 = 1
  −x2 − x3 + 4x4 = 1
Solution

• The given system of equations can be rewritten as

  x1 = 0.5 + 0.25 x2 + 0.25 x3
  x2 = 0.5 + 0.25 x1 + 0.25 x4
  x3 = 0.25 + 0.25 x1 + 0.25 x4
  x4 = 0.25 + 0.25 x2 + 0.25 x3


Solution

• Taking x2 = x3 = x4 = 0 on the right-hand side of the first equation of the system, we get x1^(1) = 0.5.
• Taking x3 = x4 = 0 and the current value of x1, we get from the 2nd equation of the system

  x2^(1) = 0.5 + (0.25)(0.5) + 0 = 0.625


Solution

• Further, taking x4 = 0 and using the current value of x1, we obtain from the third equation of the system

  x3^(1) = 0.25 + (0.25)(0.5) + 0 = 0.375



Solution

• Now, using the current values of x2 and x3, the fourth equation of the system gives

  x4^(1) = 0.25 + (0.25)(0.625) + (0.25)(0.375) = 0.5



Solution

• The Gauss-Seidel iterations for the given set of equations can be written as

  x1^(r+1) = 0.5 + 0.25 x2^(r) + 0.25 x3^(r)
  x2^(r+1) = 0.5 + 0.25 x1^(r+1) + 0.25 x4^(r)
  x3^(r+1) = 0.25 + 0.25 x1^(r+1) + 0.25 x4^(r)
  x4^(r+1) = 0.25 + 0.25 x2^(r+1) + 0.25 x3^(r+1)



Solution

• Now, by the Gauss-Seidel procedure, the 2nd and subsequent approximations can be obtained; the first five approximations are tabulated below.



Solution

Iteration r   x1        x2        x3        x4
1             0.5       0.625     0.375     0.5
2             0.75      0.8125    0.5625    0.59375
3             0.84375   0.85938   0.60938   0.61719
4             0.86719   0.87110   0.62110   0.62305
5             0.87305   0.87402   0.62402   0.62451
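For reference, a matching Python sketch (again not from the slides) of the Gauss-Seidel iteration for this system; each newly computed component is used immediately within the same sweep, which matches the tabulated values up to rounding.

# Gauss-Seidel iteration for the example system (illustrative sketch):
#   4x1 - x2 - x3 = 2,  -x1 + 4x2 - x4 = 2,  -x1 + 4x3 - x4 = 1,  -x2 - x3 + 4x4 = 1

A = [[4.0, -1.0, -1.0, 0.0],
     [-1.0, 4.0, 0.0, -1.0],
     [-1.0, 0.0, 4.0, -1.0],
     [0.0, -1.0, -1.0, 4.0]]
b = [2.0, 2.0, 1.0, 1.0]

def gauss_seidel(A, b, x0, iterations):
    n = len(b)
    x = list(x0)
    for r in range(1, iterations + 1):
        for i in range(n):
            # Updated values of earlier unknowns are used immediately.
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
        print(f"iteration {r}: " + ", ".join(f"{v:.5f}" for v in x))
    return x

gauss_seidel(A, b, x0=[0.0, 0.0, 0.0, 0.0], iterations=5)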
Your Turn

• Solve the system of linear equations by Jacobi's method.



Iterative methods

• The difference between the two methods is in the way the newly calculated values of the unknowns are used.
  • In the Jacobi method, the estimated values of the unknowns used on the right-hand side of Eq. (4.51) are updated all at once at the end of each iteration.
  • In the Gauss-Seidel method, the value of each unknown is updated as soon as its new estimate is calculated, and that value is used immediately in the calculation of the new estimates of the remaining unknowns within the same iteration.

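To make the contrast concrete, a tiny illustrative sketch (the 2-by-2 system here is assumed purely for demonstration): one sweep of each method on the same data.

# One sweep of Jacobi vs. Gauss-Seidel on a small assumed system:
#   4x1 + x2 = 9,  2x1 + 5x2 = 12
A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 12.0]
n = len(b)

# Jacobi: the new vector is built only from the old one.
x_old = [0.0, 0.0]
x_jacobi = [(b[i] - sum(A[i][j] * x_old[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

# Gauss-Seidel: x is overwritten in place, so x_gs[0] is already updated
# when x_gs[1] is computed.
x_gs = [0.0, 0.0]
for i in range(n):
    x_gs[i] = (b[i] - sum(A[i][j] * x_gs[j] for j in range(n) if j != i)) / A[i][i]

print(x_jacobi)  # [2.25, 2.4]
print(x_gs)      # [2.25, 1.5]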
