
Lec3 Numerical Model


Universiti Teknologi PETRONAS

MDB3053 Numerical Methods

LINEAR ALGEBRAIC EQUATIONS


(LAE)

Reading Assignment:
Chapter 9 & 10 in Textbook

1
LEARNING OUTCOMES
• Numerical Methods to solve Linear Algebraic Equations (LAE)
1. Direct methods → exact solution
2. Iterative methods → approximate solution

Chap9/3

2
LINEAR ALGEBRAIC EQUATIONS (LAE)

• Example of a system of L.A.E:

3x1 + 5x2 + 2x3 = 8
2x1 + 3x2 + x3 = 1          Three coupled,
x1 + 2x2 + 3x3 = 1          linear equations

•Why coupled? Each equation has terms in common with the others, that is x1, x2 and x3.
•Why linear? It means the effects are proportional to their cause. Each equation contains only first-order terms of x1, x2 and x3. No terms like x1^2, log(x3) or 1/(x2·x3).
3
GENERAL FORM OF L.A.E

•The general form of a system of L.A.E for n equations:

a11·x1 + a12·x2 + ... + a1n·xn = b1
a21·x1 + a22·x2 + ... + a2n·xn = b2
  ...
an1·x1 + an2·x2 + ... + ann·xn = bn

where the aij are the coefficients, the bi are the constants, and x1, ..., xn are the unknown variables.
4
MATRIX FORM OF L.A.E

• Linear Algebraic Equations in matrix form: Ax = b

    [a11 ... a1n]        [x1]        [b1]
A = [ .  ...  . ]    x = [..]    b = [..]
    [am1 ... amn]        [xn]        [bm]

A = coefficient matrix; dimension: m x n
x = vector of unknowns; dimension: n x 1
b = vector of constants; dimension: m x 1

5
MATRIX FORM OF L.A.E

•LAE in matrix form, Ax = b, can also be represented in augmented form as [A | b]:

          [a11 ... a1n | b1]
[A | b] = [ .  ...  .  | . ]
          [am1 ... amn | bm]

Example: augmented form

                      A        x     b
2x1 + 3x2 = 1      [ 2  3] [x1]   [1]        [ 2  3 | 1]
-4x1 + x2 = 5      [-4  1] [x2] = [5]   →    [-4  1 | 5]
6
SPECIAL TYPES OF MATRICES

7
Numerical methods to solve LAE

• Graphical Method – by plotting

• Cramer’s Rule – by determinant concept*

• Direct Methods     – Gauss Elimination
  (Exact Sol)        – Gauss-Jordan
                     – LU Factorization

• Iterative Methods  – Gauss-Seidel Method
  (Approx. Sol)      – Jacobi Method
8
GRAPHICAL METHOD

•The SOLUTION is shown graphically
•The graphs are straight lines because the equations are linear

3x1 + 2x2 = 18
-x1 + 2x2 = 2

solve for x2:
x2 = 9 - (3/2)·x1 .......(1)
x2 = 1 + x1/2 ..........(2)          Chap9/10
9
BASIC CONCEPTS OF SOLUTION

• Unique Solution (non-homogeneous equations)

  Linearly independent equations:
  2x1 + 3x2 = 6
  2x1 + 9x2 = 12

• Without Solution

  Parallel lines:
  3x1 + 9x2 = 5
  x1 + 3x2 = 6
10
(con’t)

• Infinite Solutions (no unique solution)

  Linearly dependent equations → identical lines:
  2x1 + 3x2 = 4
  4x1 + 6x2 = 8

•Ill-conditioned system → the equations are close to each other; very hard to determine the solution:

  2x1 + 2.2x2 = 5.7      close equations
  2x1 + 2x2 = 5.5
11
GAUSS ELIMINATION

• Most frequently used Direct Method

• Principle: to reduce the coefficient matrix A into the equivalent upper triangular form, U

• Consists of TWO stages:
1. Forward Elimination – transform the original matrix A into an upper triangular matrix, U

2. Back Substitution – the unknowns are solved through back substitution in the upper triangular matrix
12
GAUSS ELIMINATION - ALGORITHM

(Figure: the augmented form [A | b] is reduced to an upper triangular matrix, U.)
13
FORWARD ELIMINATION

 2 3 1
• Forward Elimination stage reduces the  
  4 1 5
augmented matrix into an upper triangular  
matrix, U.
  4 1 5
• It requires Elementary Row  
 2 3 1
Transformations (3 Basic approaches).  
You can:
  4 1 5
1. Interchange any two row in position  
 4 6 2
2. Multiply (or divide) a row by a  
nonzero scalar , k (eg. k i R )
  4 1 5
3. Multiply (or divide) a row by a  
scalar and adding it (or subtracting  0 7 7
 
it from) another row ( Rj + k  Ri)

14
EXAMPLE: Reduction to an upper triangular matrix

• Consider the matrix

  R1: [1 2 2 |  3]
  R2: [4 4 2 |  6]
  R3: [4 6 4 | 10]

• First step of the elimination, at the second row R2:

  R2' = R2 - (4/1)·R1       4/1 is the elimination factor (or multiplier);
                            ‘1’ is called the 1st pivot element

•It means: "To get the new second row R2', multiply every element of the first row R1 by 4 and subtract it from the elements of the second row, R2."
15
(con’t)

 Thus, the new 2nd row R2' becomes:

R2' = [4 4 2 | 6] - 4·[1 2 2 | 3]
    = [0 -4 -6 | -6]

• Next, transformation at the 3rd row:

R3' = R3 - (4/1)·R1
R3' = [4 6 4 | 10] - 4·[1 2 2 | 3]
    = [0 -2 -4 | -2]

R3'' = [0 -2 -4 | -2] - 1/2·[0 -4 -6 | -6]
     = [0 0 -1 | 1]

Original matrix:          Transformation:
[1 2 2 |  3]              [1  2  2 |  3]
[4 4 2 |  6]              [0 -4 -6 | -6]
[4 6 4 | 10]              [0 -2 -4 | -2]

You can perform one more transformation to obtain the U matrix.
16
BACK SUBSTITUTION

• The new augmented matrix is:

[1  2  2 |  3]
[0 -4 -6 | -6]      (all entries below the diagonal are ZEROS)
[0  0 -1 |  1]

By using Back Substitution:

Row #3: -x3 = 1  →  x3 = -1
Row #2: -4x2 - 6x3 = -6  →  x2 = 3
Row #1: x1 + 2x2 + 2x3 = 3  →  x1 = -1

A direct method yields the exact solution.
17
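The two stages can be sketched in code. Below is a minimal Python/NumPy version of naive Gauss elimination (no pivoting, so it assumes no zero pivot is encountered), checked against the worked example from the previous slides; the deck itself uses MATLAB, so this is only an illustrative translation.

```python
import numpy as np

def gauss_eliminate(A, b):
    """Naive Gauss elimination: forward elimination to upper-triangular
    form, then back substitution. Assumes nonzero pivots (no pivoting)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Stage 1 - forward elimination: zero out entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]          # elimination factor
            A[i, k:] -= f * A[k, k:]
            b[i] -= f * b[k]
    # Stage 2 - back substitution on the upper-triangular system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[1, 2, 2], [4, 4, 2], [4, 6, 4]])
b = np.array([3, 6, 10])
print(gauss_eliminate(A, b))   # x1 = -1, x2 = 3, x3 = -1
```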
Elimination factor = 4/1:    R2' = R2 - (4/1)·R1
Elimination factor = 4/1:    R3' = R3 - (4/1)·R1
Elimination factor = -2/-4:  R3'' = R3' - (2/4)·R2'

Back substitution on the U-matrix:
x1 = -1; x2 = 3; x3 = -1
18
CLASS ACTIVITY
Use the Gauss elimination method to solve

2x1 - x2 + 3x3 = 2
4x1 + x2 + x3 = -1
-2x1 + 3x2 + 2x3 = 3

Note: Use fractions to reduce round-off errors.
Ans: x1 = -0.5; x2 = 0; x3 = 1

Try this in EXCEL (p296):
 To invert, highlight the output range, enter =MINVERSE(..) and press Ctrl+Shift+Enter
 To multiply, enter =MMULT(..)

Try this in MATLAB (p298):
>> A = [2 -1 3; 4 1 1; -2 3 2]
>> b = [2; -1; 3]
>> x = A\b
19
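A Python/NumPy equivalent of the MATLAB A\b call (also an LU-based direct solve) can serve as a quick check; this is an illustrative alternative to the spreadsheet/MATLAB tips above.

```python
import numpy as np

# Same system as the class activity, mirroring the MATLAB "x = A\b" call
A = np.array([[ 2.0, -1.0, 3.0],
              [ 4.0,  1.0, 1.0],
              [-2.0,  3.0, 2.0]])
b = np.array([2.0, -1.0, 3.0])
x = np.linalg.solve(A, b)      # LAPACK direct solver (LU with pivoting)
print(x)                       # x1 = -0.5, x2 = 0, x3 = 1
```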
SOLUTION
Original matrix:
[ 2 -1  3 |  2]
[ 4  1  1 | -1]
[-2  3  2 |  3]

Elimination factor = 4/2:    R2' = R2 - (4/2)·R1    →  [0 3 -5 | -5]
Elimination factor = -2/2:   R3' = R3 - (-2/2)·R1   →  [0 2  5 |  5]
Elimination factor = 2/3:    R3'' = R3' - (2/3)·R2' →  [0 0 25/3 | 25/3]

[2 -1    3 |    2]
[0  3   -5 |   -5]
[0  0 25/3 | 25/3]

Back substitution on the U-matrix:
x1 = -0.5; x2 = 0; x3 = 1
20
Pitfalls of Gauss Elimination

• Division by zero. It is possible that during both the elimination and back-substitution phases a division by zero can occur. → can be solved by (partial) pivoting
• Round-off errors. Because a computer can only store a fixed number of digits → use more significant figures or fractions
• Ill-conditioned systems. Systems where small changes in the coefficients result in large changes in the solution. This happens when two or more equations are nearly identical.

Partial pivoting – switching the rows so that the element largest in magnitude becomes the pivot element (diagonal term)
21
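Partial pivoting can be sketched in Python/NumPy as follows. The 3×3 system below is made up for illustration (its exact solution is x = [1, 1, 1]); note that its a11 = 0 would break naive elimination at the very first step.

```python
import numpy as np

def gauss_pivot(A, b):
    """Gauss elimination with partial pivoting: before eliminating
    column k, swap in the row whose column-k entry is largest in
    magnitude, so the pivot is never zero (for a nonsingular A)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # row of the largest pivot
        if p != k:                            # switch the rows
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]
            A[i, k:] -= f * A[k, k:]
            b[i] -= f * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # back substitution
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# a11 = 0: naive elimination would divide by zero, pivoting does not
A = np.array([[0.0, 2.0, 3.0], [4.0, 6.0, 7.0], [2.0, -3.0, 6.0]])
b = np.array([5.0, 17.0, 5.0])
print(gauss_pivot(A, b))   # x = [1, 1, 1]
```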
GAUSS-JORDAN METHOD

• A variation of Gauss elimination, but with about 50% more computational work than Gauss elimination
• The elimination steps reduce all off-diagonal elements to zero; the result is an identity matrix, due to the added normalization step
• No back substitution is required

[a11 a12 a13 | c1]    Elimination +     [1 0 0 | h1]
[a21 a22 a23 | c2]    Normalization     [0 1 0 | h2]    Identity
[a31 a32 a33 | c3]         →            [0 0 1 | h3]    matrix
22
GAUSS-JORDAN

(Figure: elimination + normalization reduce the augmented system directly to the identity matrix; no back substitution is needed.)

The answers for x1, x2, x3 are read off directly:
x1 = C1; x2 = C2; x3 = C3
23
CLASS ACTIVITY

Solve the system of equations below using the Gauss-Jordan method

x1 - 2x2 + x3 = 7
2x1 - 5x2 + 2x3 = 6
3x1 + 2x2 - x3 = 1

Ans: x1 = 2; x2 = 8; x3 = 21
24
SOLUTION

Gauss-Jordan
2
Elimination factor =
1
2
R2  R2  R1
1

3
R3  R3  R1
1 new R1  R1  R3
2
R1  R3  R2 new R3   R3 / 4 Identity matrix:
1
8
Elimination factor =
-1
8
R"3  R3  R2
1 R2   R2 x1 = 2; x2 = 8; x3 =21
25
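The elimination + normalization sequence can be sketched in Python/NumPy; a minimal version without pivoting (it assumes no zero pivot appears along the way), checked against this class activity.

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A|b] to [I|x]: normalize each pivot
    row to 1, then eliminate above and below each pivot."""
    Ab = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for k in range(n):
        Ab[k] /= Ab[k, k]                    # normalization step
        for i in range(n):
            if i != k:
                Ab[i] -= Ab[i, k] * Ab[k]    # zero the off-diagonal entry
    return Ab[:, -1]                         # last column is the solution

A = np.array([[1, -2, 1], [2, -5, 2], [3, 2, -1]])
b = np.array([7, 6, 1])
print(gauss_jordan(A, b))    # x = [2, 8, 21], no back substitution needed
```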
*MATRIX INVERSION

• The inverse matrix A^-1 can be found through the Gauss-Jordan Method

STEP 1: Form [A | I]
STEP 2: Transform [A | I] to [I | B] using the elementary row transformations.

STEP 3: Finally, B = A^-1, where B is the inverse matrix

26
*CLASS ACTIVITY
Find the inverse of the matrix below using the Gauss-Jordan Method

    [1 2 2]
A = [1 3 1]
    [2 1 6]

                [-17  10   4]
Answer: A^-1 =  [  4  -2  -1]
                [  5  -3  -1]
27
MDB3053
Numerical Methods

LU DECOMPOSITION

Chapter 10 in Textbook

28
RECAP on previous lecture

• What is L.A.E ?
• L.A.E in matrix form
• Direct Methods to solve L.A.E
• Gaussian Elimination

• Gauss-Jordan

29
Gaussian Elimination
Gauss-Jordan Method
30
LU DECOMPOSITION

Direct method #3 using LU Decomposition

 Doolittle Method

 *Crout’s Method

31
LU DECOMPOSITION

• Gauss elimination produces an upper triangular matrix, U, from matrix A.

• We can also get a lower triangular matrix, L, in such a way that A = LU

32
LU DECOMPOSITION

• Advantage: the round-off errors created each time an element of the reduced matrix is recomputed are avoided

STEP 1: A is decomposed into L and U matrices

              L                  U
    [l11   0    0 ]   [u11 u12 u13]
A = [l21  l22   0 ] × [ 0  u22 u23] = LU
    [l31  l32  l33]   [ 0   0  u33]
33
(con’t)
STEP 2: Let Ux = y
  given Ax = b  →  Ax = LUx = Ly = b

STEP 3: Solve Ly = b for y by forward substitution
  Ly = b → find {y}

STEP 4: With y, then solve Ux = y for x by backward substitution
  Ux = y → find {x}
34
(con’t)
Fig.10.1

35
Decomposing A into LU

Example of Chap 9
• Perform the Gauss elimination as usual to obtain the upper triangular matrix.
• The elimination factors employed can be assembled into the lower triangular matrix.

Original matrix:          Gauss elimination:
[1 2 2 |  3]              [1  2  2 |  3]
[4 4 2 |  6]              [0 -4 -6 | -6]
[4 6 4 | 10]              [0  0 -1 |  1]

Elimination factors:
a21 term:  f21 = 4/1 = 4
a31 term:  f31 = 4/1 = 4
a'32 term: f32 = -2/-4 = 0.5

    [1  0   0]        [1  2  2]
L = [4  1   0]    U = [0 -4 -6]
    [4 0.5  1]        [0  0 -1]
36
Example: Substitution Steps
1 2 2  3
   
given A  4 4 2 b   6 
4 6 4 10 
 
• Let’s say after applying decomposition: A = LU
1 0 0 1 2 2
A  L1U  4 1 0 0  4  6
4 0.5 1 0 0  1

• Use Forward Substitution: Ly = b

 1 0 0  y1   3   y1   3 
        
 4 1 0  y2    6    y2     6 
 4 0.5 1  y  10  y   1 
  3     3  
37
(con’t)

• Use Backward Substitution: Ux = y

[1  2  2] [x1]   [ 3]        [x1]   [-1]
[0 -4 -6] [x2] = [-6]   →    [x2] = [ 3]
[0  0 -1] [x3]   [ 1]        [x3]   [-1]

This is LU decomposition based on the Gaussian Elimination method.
38
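The four steps can be sketched in Python/NumPy. This is an illustrative version of LU decomposition via Gauss elimination (no pivoting): the factorization simply records the elimination factors in L, then solves Ly = b forward and Ux = y backward, on the same Chapter 9 example.

```python
import numpy as np

def lu_decompose(A):
    """Run Gauss elimination on A and store each elimination factor
    f_ik in L (Doolittle form: 1's on L's diagonal)."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]     # elimination factor
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                      # forward: Ly = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # backward: Ux = y
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[1, 2, 2], [4, 4, 2], [4, 6, 4]])
L, U = lu_decompose(A)                      # L holds factors 4, 4, 0.5
x = lu_solve(L, U, np.array([3, 6, 10]))    # x = [-1, 3, -1]
```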
LU DECOMPOSITION – Doolittle Method

Doolittle Method – 1’s along the diagonal of L (L is L1)

              L1               U
          [ 1   0   0] [u11 u12 u13]
A = L1U = [l21  1   0] [ 0  u22 u23]
          [l31 l32  1] [ 0   0  u33]

Crout’s Method – 1’s along the diagonal of U (U is U1)

              L                U1
          [l11  0    0 ] [1 u12 u13]
A = LU1 = [l21 l22   0 ] [0  1  u23]
          [l31 l32  l33] [0  0   1 ]
39
Example

 Solve the system of equations below using the Doolittle Method:

2x1 - x2 + 3x3 = 14
x1 - 6x2 + 4x3 = 3
3x1 - 7x2 + 2x3 = -8

Answers: x1 = 1; x2 = 3; x3 = 5
40
SOLUTION

Step 1: Doolittle Method: A = L1U

[2 -1 3]   [ 1   0   0] [u11 u12 u13]
[1 -6 4] = [l21  1   0] [ 0  u22 u23]
[3 -7 2]   [l31 l32  1] [ 0   0  u33]

STEP 2: Carrying out the matrix multiplication on the RHS:

[2 -1 3]   [u11       u12               u13                  ]
[1 -6 4] = [l21·u11   l21·u12 + u22     l21·u13 + u23        ]
[3 -7 2]   [l31·u11   l31·u12 + l32·u22 l31·u13 + l32·u23 + u33]

Next: comparing terms at each row:
41
(con’t)

Step 3: Comparing terms at each row:

Row 1: u11 = 2;  u12 = -1;  u13 = 3

Row 2: l21·u11 = 1  →  l21 = 0.5
       l21·u12 + u22 = -6  →  u22 = -6 - 0.5(-1) = -5.5
       l21·u13 + u23 = 4  →  u23 = 4 - 0.5(3) = 2.5
42
(con’t)

Row 3: l31·u11 = 3  →  l31 = 1.5
       l31·u12 + l32·u22 = -7  →  l32 = (-7 - 1.5(-1))/(-5.5) = 1
       l31·u13 + l32·u23 + u33 = 2  →  u33 = 2 - 1.5(3) - 1(2.5) = -5

Rewriting the matrices:

[ 1   0   0] [u11 u12 u13]   [ 1   0  0] [2   -1    3]
[l21  1   0] [ 0  u22 u23] = [0.5  1  0] [0 -5.5  2.5]
[l31 l32  1] [ 0   0  u33]   [1.5  1  1] [0    0   -5]
(con’t)

Step 4: Forward substitution Ly = b

[ 1   0  0] [y1]   [14]        [y1]   [ 14]
[0.5  1  0] [y2] = [ 3]   →    [y2] = [ -4]
[1.5  1  1] [y3]   [-8]        [y3]   [-25]

Step 5: Backward substitution Ux = y

[2   -1    3] [x1]   [ 14]        [x1]   [1]
[0 -5.5  2.5] [x2] = [ -4]   →    [x2] = [3]
[0    0   -5] [x3]   [-25]        [x3]   [5]

Solution
44
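The term-by-term comparison of Steps 2-3 generalizes to compact loops. Below is a Python/NumPy sketch of Doolittle factorization by direct comparison (no pivoting assumed), checked against this example.

```python
import numpy as np

def doolittle(A):
    """Doolittle factorization A = L1*U by term-by-term comparison:
    row k of U, then column k of L, with 1's on L's diagonal."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for k in range(n):
        # Row k of U: u_kj = a_kj - sum_{m<k} l_km * u_mj
        for j in range(k, n):
            U[k, j] = A[k, j] - L[k, :k] @ U[:k, j]
        # Column k of L: l_ik = (a_ik - sum_{m<k} l_im * u_mk) / u_kk
        for i in range(k + 1, n):
            L[i, k] = (A[i, k] - L[i, :k] @ U[:k, k]) / U[k, k]
    return L, U

A = np.array([[2.0, -1.0, 3.0],
              [1.0, -6.0, 4.0],
              [3.0, -7.0, 2.0]])
L, U = doolittle(A)
# L = [[1,0,0],[0.5,1,0],[1.5,1,1]]; U = [[2,-1,3],[0,-5.5,2.5],[0,0,-5]]
```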
ITERATIVE METHODS

• 2 methods
– Gauss-Seidel
– *Jacobi

• Approximate solution using a trial-and-error procedure – initial solutions are assumed and then refined further

• Advantages
– Can be used to solve nonlinear simultaneous equations
– Can be used to solve ill-conditioned systems

45
GAUSS-SEIDEL METHOD
• The Gauss-Seidel method is an iterative (approximate) method that continuously updates the variables.

• The system [A]{X}={B} is reshaped by solving the first equation for x1, the second equation for x2, the third for x3, … and the nth equation for xn.

• For conciseness, we will limit ourselves to a 3x3 set of equations:

a11·x1 + a12·x2 + a13·x3 = b1
a21·x1 + a22·x2 + a23·x3 = b2
a31·x1 + a32·x2 + a33·x3 = b3
GAUSS-SEIDEL METHOD
b1  a12 x2  a13 x3
x1  requires aii0
a11
i.e. diagonal
Rewrite the b2  a21 x1  a23 x3 element must
equations : x2  be non-zero
a22
b3  a31 x1  a32 x2
x1 
a33

•Next step, assume initial guesses x’s are zeros at zero iteration:
at zero iteration.
 x1  0
0

•. These zeros can be substituted into x1equation  0  


(1) to calculate a new x1=b1/a11.  x2   0
 x 0  0
 3   
• New x1 is substituted to calculate x2. And new x1 and x2 are
substituted to find x3. The procedure is repeated until the
convergence criterion is satisfied:
x2  0 ; x3  0

b1  a12 x2  a13 x3
x1 
a11
b2  a21 x1  a23 x3
x’s are x2 
continuously a22
updated
b3  a31 x1  a32 x2
x3 
a33
xij  xij 1
error estimate:  a ,i  j
100%   s
xi
For all i, where j is the present and j-1 is the previous iterations
GAUSS-SEIDEL METHOD
Continuous updating of the sub-diagonal variables as the computations proceed:

x1^(k+1) = (b1 - a12·x2^k - a13·x3^k - ... - a1n·xn^k)/a11
x2^(k+1) = (b2 - a21·x1^(k+1) - a23·x3^k - ... - a2n·xn^k)/a22
   ...                               variables are updated continuously
xn^(k+1) = (bn - an1·x1^(k+1) - an2·x2^(k+1) - ... - an,n-1·x(n-1)^(k+1))/ann
CLASS ACTIVITY

 Solve the system of equations below using the Gauss-Seidel method:

6x1 - x2 - x3 = 3
6x1 + 9x2 + x3 = 40
-3x1 + x2 + 12x3 = 50

Exact sol: x1 = 1.697; x2 = 2.829; x3 = 4.355

Note: For good convergence, the equations should be arranged so that they are diagonally dominant, that is, the diagonal element must be greater than the off-diagonal elements in each row. (see textbook, p293)
SOLUTION
Rewrite the equations:

x1 = (3 + x2 + x3)/6
x2 = (40 - 6x1 - x3)/9
x3 = (50 + 3x1 - x2)/12

First iteration (assume x2 = x3 = 0 at the zeroth iteration):

x1 = (3 + 0 + 0)/6 = 0.5
x2 = (40 - 6(0.5) - 0)/9 = 4.1111
x3 = (50 + 3(0.5) - 4.1111)/12 = 3.9491

2nd iteration:

x1 = (3 + 4.1111 + 3.9491)/6 = 1.8434
x2 = (40 - 6(1.8434) - 3.9491)/9 = 2.7767
x3 = (50 + 3(1.8434) - 2.7767)/12 = 4.3961

Error estimate for each variable:

εa,1 = |(1.8434 - 0.5)/1.8434| × 100% = 72.88%
εa,2 = |(2.7767 - 4.1111)/2.7767| × 100% = 48.05%
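A Python/NumPy sketch of Gauss-Seidel for this activity (zero initial guesses, the percent relative error above as the stopping test); the deck's own implementation is the MATLAB M-file shown later, so this is just an illustrative translation.

```python
import numpy as np

def gauss_seidel(A, b, es=1e-5, maxit=50):
    """Gauss-Seidel iteration: each new x_i immediately uses the
    freshest values computed in the same sweep (continuous updating)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(maxit):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        # largest percent relative error over all variables
        ea = np.max(np.abs((x - x_old) / x)) * 100
        if ea <= es:
            break
    return x

A = np.array([[ 6.0, -1.0, -1.0],
              [ 6.0,  9.0,  1.0],
              [-3.0,  1.0, 12.0]])
b = np.array([3.0, 40.0, 50.0])
print(gauss_seidel(A, b))   # converges to about [1.697, 2.829, 4.355]
```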
*JACOBI’S METHOD
• Jacobi’s method is almost similar to the Gauss-Seidel method, except that the x values are not updated immediately.

• Substitution takes place only at the end of the iteration, when all the variables are updated simultaneously.

• It is called simultaneous updating.

Fig 11.4: Gauss-Seidel Method vs Jacobi’s Method
53
*CLASS ACTIVITY
Solve the system of equations below using the Jacobi method.
Compute the errors.

3x1 + 2x2 = 18
-x1 + 2x2 = 2

Exact solutions: x1 = 4; x2 = 3

SOLUTION:

x1 = (18 - 2x2)/3
x2 = (2 + x1)/2

0th iteration:  x1 = 0;     x2 = 0
1st iteration:  x1 = 6;     x2 = 1
2nd iteration:  x1 = 16/3;  x2 = 4.0;   εa1 = 12.5%; εa2 = 75%
3rd iteration:  x1 = (18 - 2(4))/3 = 10/3;  x2 = (2 + 16/3)/2 = 11/3
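Simultaneous updating can be sketched in Python/NumPy. Unlike Gauss-Seidel's continuous updating, every x_i here is computed from the previous sweep only; the first sweeps reproduce the iterations above (x = [6, 1], then [16/3, 4]).

```python
import numpy as np

def jacobi(A, b, iters=25):
    """Jacobi iteration: all x_i are updated simultaneously from the
    values of the previous iteration only."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        x_new = np.empty(n)
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x_new[i] = (b[i] - s) / A[i, i]
        x = x_new            # simultaneous update at the end of the sweep
    return x

A = np.array([[3.0, 2.0], [-1.0, 2.0]])
b = np.array([18.0, 2.0])
print(jacobi(A, b))   # converges toward the exact solution [4, 3]
```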
M-file: Gauss-Seidel

function x = Gauss_Seidel(A,b,es,maxit)
% x = Gauss_Seidel(A,b)
% A = coefficient matrix
% b = RHS vector
% es = stop criterion (default = 0.00001%)
% maxit = max. iterations (default = 50)
% x = output solution vector
%
% if necessary, assign default values
%
[m,n] = size(A);
if m ~= n,
    error('Matrix A must be SQUARE'); end
C = A;
% set diagonal terms as zero
for i = 1:n
    C(i,i) = 0;
    x(i) = 0;
    ea(i) = 100;
end
x = x'; % transpose
% normalize equations
for i = 1:n
    C(i,1:n) = C(i,1:n)/A(i,i);
end
for i = 1:n
    d(i) = b(i)/A(i,i);
end
% start Gauss-Seidel iteration for x
iter = 0;
while (1)
    xold = x;
    for i = 1:n
        x(i) = d(i) - C(i,:)*x;
        if x(i) ~= 0
            ea(i) = abs((x(i) - xold(i))/x(i)) * 100;
        end
    end
    iter = iter + 1;
    if max(ea) <= es | iter >= maxit, break, end
    fprintf('\n%d %10.7f %10.7f', iter, x', ea');
end

>> Gauss_Seidel(A,b,es,maxit)
ans = ???

CM/Chap 5/p55
LINEAR ALGEBRAIC EQUATIONS

 We have covered Chapters 9 & 10 in the textbook


THE END

57
