
Numerical Methods Lecture 2a


NUMERICAL METHODS

Solving a System of Linear Equations


Systems of linear equations that have to be solved
simultaneously arise in problems that include
several (possibly many) variables that are
dependent on each other. A system of two (or three)
equations with two (or three) unknowns can be
solved manually by substitution or other
mathematical methods (e.g., Cramer's rule).
Solving a system in this way is practically
impossible as the number of equations (and
unknowns) increases beyond three.
Solving a System of Linear Equations
An example of a problem in
electrical engineering that requires
the solution of a system of
equations is circuit analysis: using
Kirchhoff's laws, the currents i1,
i2, i3, and i4 can be determined by
solving a system of four equations
in the four unknowns.

Overview of Numerical Methods for Solving a
System of Linear Algebraic Equations
The general form of a system of n linear algebraic
equations is:

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn

In matrix form this is written as [a][x] = [b], where
[a] is the n x n matrix of coefficients, [x] is the
column vector of the unknowns, and [b] is the column
vector of the constants.
Methods:
Direct Methods - the solution is calculated by performing
arithmetic operations with the equations.
•  Gaussian Elimination
•  Gauss-Jordan Elimination Method
•  LU Decomposition
•  Cramer’s Rule, Inverse Method
Indirect / Iterative Methods - an initial approximate
solution is assumed and then used in an iterative process
for obtaining successively more accurate solutions.
• Jacobi Iteration
• Gauss-Seidel Iteration
Upper Triangular Form
The system in this form has all zero coefficients
below the diagonal and is solved by a procedure
called back substitution. This form is used in the
Gauss elimination method.
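
A minimal Python sketch of back substitution (illustrative only, not taken from the slides; it assumes the diagonal entries are nonzero):

```python
import numpy as np

def back_substitution(U, b):
    """Solve U x = b for an upper-triangular matrix U."""
    n = len(b)
    x = np.zeros(n)
    # Start from the last equation, which contains only x[n-1],
    # and work upward, substituting the values already found.
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - np.dot(U[i, i + 1:], x[i + 1:])) / U[i, i]
    return x

U = np.array([[2.0, 1.0, -1.0],
              [0.0, 3.0,  2.0],
              [0.0, 0.0,  4.0]])
b = np.array([3.0, 7.0, 8.0])
print(back_substitution(U, b))   # -> [2. 1. 2.]
```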

Lower Triangular Form
The system in this form has all zero coefficients above
the diagonal and is solved by a procedure called
forward substitution. Both the lower and the upper
triangular forms are used in the LU decomposition
method.
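
A corresponding Python sketch of forward substitution, under the same assumptions (illustrative only):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b for a lower-triangular matrix L."""
    n = len(b)
    x = np.zeros(n)
    # Start from the first equation, which contains only x[0],
    # and work downward, substituting the values already found.
    for i in range(n):
        x[i] = (b[i] - np.dot(L[i, :i], x[:i])) / L[i, i]
    return x
```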

Diagonal Form
The system in this form has nonzero coefficients along
the diagonal and zeros everywhere else, so each unknown
is obtained directly as xi = bi / aii. This form is
produced by the Gauss-Jordan method.

Gauss Elimination Method
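
A minimal Python sketch of the basic procedure (forward elimination to upper triangular form followed by back substitution), assuming all pivots are nonzero; the function name and the small test system are illustrative only:

```python
import numpy as np

def gauss_eliminate(A, b):
    """Solve A x = b by naive Gauss elimination (no pivoting)."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)

    # Forward elimination: zero out the entries below each pivot A[k, k].
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]        # multiplier for row i
            A[i, k:] -= m * A[k, k:]     # row i <- row i - m * row k
            b[i] -= m * b[k]

    # Back substitution on the resulting upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - np.dot(A[i, i + 1:], x[i + 1:])) / A[i, i]
    return x

A = np.array([[4.0, -2.0, 1.0],
              [1.0,  3.0, 2.0],
              [2.0,  1.0, 5.0]])
b = np.array([3.0, 6.0, 8.0])
print(gauss_eliminate(A, b))   # -> [1. 1. 1.]
```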

Potential Difficulties When Applying
the Gauss Elimination Method
•  The pivot element is zero
•  The pivot element is small relative to the
other terms in the pivot row
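
Both difficulties are normally handled by partial pivoting: before each elimination step, the row with the largest-magnitude candidate pivot is swapped into the pivot position. A minimal sketch of that step (illustrative only; it would be called at the start of each step k in the elimination sketch above):

```python
import numpy as np

def partial_pivot(A, b, k):
    """Swap rows of A and b (in place) so that the pivot A[k, k]
    has the largest magnitude among the candidates A[k:, k]."""
    p = k + np.argmax(np.abs(A[k:, k]))   # row with the largest candidate pivot
    if p != k:
        A[[k, p]] = A[[p, k]]             # swap rows k and p of A
        b[[k, p]] = b[[p, k]]             # apply the same swap to b
```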


Gauss-Jordan Elimination Method
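
A minimal Python sketch of Gauss-Jordan elimination (no pivoting, assuming nonzero pivots; illustrative only). The coefficient matrix is reduced to the identity, so the right-hand-side column ends up holding the solution:

```python
import numpy as np

def gauss_jordan(A, b):
    """Solve A x = b by Gauss-Jordan elimination (no pivoting)."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    for k in range(n):
        # Normalize the pivot row so that the pivot becomes 1.
        pivot = A[k, k]
        A[k, :] /= pivot
        b[k] /= pivot
        # Eliminate the k-th column from every other row (above and below).
        for i in range(n):
            if i != k:
                m = A[i, k]
                A[i, :] -= m * A[k, :]
                b[i] -= m * b[k]
    return b   # A is now the identity, so b holds the solution
```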

Condition for Convergence
For a system of n equations [a][x] = [b], a
sufficient condition for convergence is that in
each row of the matrix of coefficients [a] the
absolute value of the diagonal element is greater
than the sum of the absolute values of the off-
diagonal elements.
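
Written symbolically, this condition (strict diagonal dominance) is:

\[ |a_{ii}| \;>\; \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}|, \qquad i = 1, 2, \ldots, n \]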

Condition for Convergence
This condition is sufficient but not necessary for
convergence of the iterative methods. When this
condition is satisfied, the matrix [a] is classified
as diagonally dominant, and the iteration process is
guaranteed to converge toward the solution. The
iterations, however, might still converge even when
this condition is not satisfied.
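
A small Python sketch of this check (illustrative only):

```python
import numpy as np

def is_diagonally_dominant(A):
    """Return True if |a_ii| > sum of |a_ij| (j != i) in every row of A."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))
```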

Gauss-Seidel Iterative Method
In the Gauss-Seidel method, initial (first)
values are assumed for the unknowns x2,
x3, ..., xn (all of the unknowns except x1). If
no information is available regarding the
approximate value of the unknowns, the
initial value of all the unknowns can be
assumed to be zero.
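
A minimal Python sketch of the iteration (illustrative only; it starts from a zero initial guess and stops on a relative-error test):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-6, max_iter=100):
    """Solve A x = b by Gauss-Seidel iteration."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n)                       # initial guess: all unknowns zero
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use the newest available values of the other unknowns.
            s = np.dot(A[i, :i], x[:i]) + np.dot(A[i, i + 1:], x[i + 1:])
            x[i] = (b[i] - s) / A[i, i]
        # Relative-error test, rearranged to avoid dividing by zero.
        if np.all(np.abs(x - x_old) <= tol * np.abs(x)):
            break
    return x
```

The key difference from the Jacobi method described below is that each updated value is used immediately within the same sweep.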

Jacobi Iterative Method
In the Jacobi method, an initial (first) value is assumed for
each of the unknowns x1, x2, ..., xn. If no
information is available regarding the approximate values
of the unknowns, the initial value of all the unknowns can
be assumed to be zero. The second estimate of the solution
x1, x2, ..., xn is calculated by substituting the
first estimate into the right-hand side of the equations.

Jacobi Iterative Method
In general, the (k+1)th estimate of the solution is calculated
from the (k)th estimate by:
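
In standard notation, this Jacobi update is:

\[ x_i^{(k+1)} = \frac{1}{a_{ii}} \Bigl( b_i - \sum_{\substack{j=1 \\ j \neq i}}^{n} a_{ij}\, x_j^{(k)} \Bigr), \qquad i = 1, 2, \ldots, n \]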

The iterations continue until the differences between the
values obtained in successive iterations are small. The
iterations can be stopped when the absolute value of the
estimated relative error of all the unknowns is smaller than
some predetermined value:
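
A common form of this stopping test is

\[ \left| \frac{x_i^{(k+1)} - x_i^{(k)}}{x_i^{(k+1)}} \right| < \varepsilon \quad \text{for all } i, \]

where epsilon is the chosen tolerance. A minimal Python sketch of the Jacobi iteration (illustrative only; unlike Gauss-Seidel, every component of the new estimate is computed from the previous estimate only):

```python
import numpy as np

def jacobi(A, b, tol=1e-6, max_iter=100):
    """Solve A x = b by Jacobi iteration."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(len(b))                  # initial guess: all unknowns zero
    D = np.diag(A)                        # diagonal elements a_ii
    R = A - np.diagflat(D)                # off-diagonal part of A
    for _ in range(max_iter):
        # x_i^(k+1) = (b_i - sum_{j != i} a_ij * x_j^(k)) / a_ii
        x_new = (b - R @ x) / D
        # Relative-error test, rearranged to avoid dividing by zero.
        if np.all(np.abs(x_new - x) <= tol * np.abs(x_new)):
            return x_new
        x = x_new
    return x
```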

