
Numerical Methods for Engineers
EE0903301
Lecture 11: Gauss Elimination (Part 1)
Dr. Yanal Faouri
Solving Small Numbers of Equations: The Graphical Method

• For small (n ≤ 3) sets of simultaneous equations, there are appropriate methods that do not require a computer.
• For two equations, each equation can be plotted as a straight line; the solution can be read from the graph as the point where the two lines intersect.

• For three simultaneous equations, each equation would be represented by a plane in a three-dimensional coordinate system. The point where the three planes intersect would represent the solution.
• Beyond three equations, graphical methods break down and, consequently, have little practical value for solving simultaneous equations. However, they are useful in visualizing properties of the solutions.
Solving Small Numbers of Equations: The Graphical Method

• Three cases that can pose problems when solving sets of linear equations:
• (a) No solution, when the two equations represent parallel lines ➔ called a singular system (det = 0).
• (b) Infinite solutions, when the two lines coincide ➔ also a singular system.
• (c) The slopes are so close that the point of intersection is difficult to detect visually ➔ an ill-conditioned system (det ≈ 0); systems of this type are extremely sensitive to roundoff error.
Solving Small Numbers of Equations: Determinants and Cramer’s Rule

• This method is based on determinants. In addition, the determinant has relevance to the evaluation of the ill-conditioning of a matrix.

• Cramer’s Rule. This rule states that each unknown in a system of linear algebraic equations may be expressed as
a fraction of two determinants with denominator D and with the numerator obtained from D by replacing the
column of coefficients of the unknown in question by the constants b1, b2, . . . , bn. For example, for three
equations, x1 would be computed as
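The slide shows this formula as an image; reconstructed in standard notation, x1 for the three-equation case is:

```latex
x_1 = \frac{\begin{vmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{vmatrix}}{D},
\qquad
D = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}
```

x2 and x3 follow the same pattern, with the constants replacing the second and third columns of D, respectively.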
Example: Cramer’s Rule

• Problem Statement.
• Use Cramer’s rule to solve
0.3x1 + 0.52x2 + x3 = −0.01
0.5x1 + x2 + 1.9x3 = 0.67
0.1x1 + 0.3 x2 + 0.5x3 = −0.44
• Solution. The determinant D can be evaluated as:
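The slide shows this evaluation as an image; the cofactor expansion along the first row can be reconstructed as:

```latex
D = \begin{vmatrix} 0.3 & 0.52 & 1 \\ 0.5 & 1 & 1.9 \\ 0.1 & 0.3 & 0.5 \end{vmatrix}
  = 0.3\begin{vmatrix} 1 & 1.9 \\ 0.3 & 0.5 \end{vmatrix}
  - 0.52\begin{vmatrix} 0.5 & 1.9 \\ 0.1 & 0.5 \end{vmatrix}
  + 1\begin{vmatrix} 0.5 & 1 \\ 0.1 & 0.3 \end{vmatrix}
  = 0.3(-0.07) - 0.52(0.06) + 1(0.05) = -0.0022
```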

• The det Function. The determinant can be computed directly in MATLAB with the det function. For example, using the system from the previous example:
>> A = [0.3 0.52 1;0.5 1 1.9;0.1 0.3 0.5];
>> D = det(A)
➔ D = -0.0022
>> A(:,1) = [-0.01;0.67;-0.44]
>> x1 = det(A)/D
➔ x1 = -14.9000
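The same numbers can be checked outside MATLAB. A minimal Python/NumPy sketch of Cramer's rule for this system (not from the lecture; note that, unlike the MATLAB session above, it copies A before overwriting each column, so all three unknowns can be computed):

```python
import numpy as np

A = np.array([[0.3, 0.52, 1.0],
              [0.5, 1.0, 1.9],
              [0.1, 0.3, 0.5]])
b = np.array([-0.01, 0.67, -0.44])

D = np.linalg.det(A)                # denominator determinant, ≈ -0.0022
x = np.empty(3)
for j in range(3):
    Aj = A.copy()
    Aj[:, j] = b                    # replace column j with the constants
    x[j] = np.linalg.det(Aj) / D
print(np.round(x, 1))               # → [-14.9 -29.5  19.8]
```

The first component reproduces x1 = −14.9000 from the MATLAB session.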
Solving Small Numbers of Equations: Elimination of Unknowns

• The elimination of unknowns by combining equations is an algebraic approach. Consider the pair of equations:

a11x1 + a12x2 = b1
a21x1 + a22x2 = b2

• The basic strategy is to multiply the equations by constants so that one of the unknowns will be eliminated when
the two equations are combined. The result is a single equation that can be solved for the remaining unknown.
This value can then be substituted into either of the original equations to compute the other variable.

• The elimination of unknowns can be extended to systems with more than two or three equations. However, the numerous calculations that are required for larger systems make the method extremely tedious to implement by hand.
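For the two-equation system above, carrying out the elimination (multiply the first equation by a21∕a11, subtract it from the second, solve for x2, then back-substitute) gives results consistent with Cramer's rule; reconstructed here since the slide shows them as an image:

```latex
x_2 = \frac{a_{11}b_2 - a_{21}b_1}{a_{11}a_{22} - a_{21}a_{12}},
\qquad
x_1 = \frac{a_{22}b_1 - a_{12}b_2}{a_{11}a_{22} - a_{21}a_{12}}
```

The shared denominator is exactly the determinant D of the 2 × 2 coefficient matrix.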
Naive Gauss Elimination

• Gauss elimination comprises systematic techniques for forward elimination and back substitution. Although these techniques are ideally suited for implementation on computers, some modifications will be required to obtain a reliable algorithm.
• In particular, the computer program must avoid division by zero. The following method is called “naive” Gauss elimination because it does not avoid this problem.
• The approach is designed to solve a general set of n equations:
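The general system appears on the slide as an image; in standard notation it is:

```latex
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}
```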
Naive Gauss Elimination

• Forward Elimination of Unknowns.
• The first phase is designed to reduce the set of equations to an upper triangular system. The initial step will be to eliminate the first unknown x1 from the second through the nth equations. To do this, multiply the first row by a21∕a11 to give:

• Then subtract it from the second equation:

• where the prime indicates that the elements have been changed from their original values.
• The procedure is then repeated for the remaining equations.
• The first equation is called the pivot equation and a11 is called the pivot element.
• A zero-pivot element can interfere with normalization by causing a division by zero.
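Written out (the slide shows these equations as images), the row operation above updates the second equation's elements as:

```latex
a'_{2j} = a_{2j} - \frac{a_{21}}{a_{11}}\,a_{1j} \quad (j = 2,\dots,n),
\qquad
b'_2 = b_2 - \frac{a_{21}}{a_{11}}\,b_1
```

By construction the new coefficient of x1 in the second equation is zero, which is precisely the point of the step.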
Naive Gauss Elimination

• The procedure can be continued using the remaining pivot equations. The final manipulation in the sequence is
to use the (n − 1)th equation to eliminate the xn−1 term from the nth equation. At this point, the system will have
been transformed to an upper triangular system:

• Back Substitution.
• The last equation can now be solved for xn:
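The back-substitution formulas appear on the slide as images; reconstructed here, with superscripts (an assumed notation) indicating how many elimination passes have modified an element:

```latex
x_n = \frac{b_n^{(n-1)}}{a_{nn}^{(n-1)}},
\qquad
x_i = \frac{b_i^{(i-1)} - \displaystyle\sum_{j=i+1}^{n} a_{ij}^{(i-1)} x_j}{a_{ii}^{(i-1)}},
\quad i = n-1, n-2, \dots, 1
```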
Example: Naive Gauss Elimination

• Problem Statement.
• Use Gauss elimination to solve:

• Solution. The first part of the procedure is forward elimination (start by multiplying equation 1 by 0.1∕3 and subtracting the result from equation 2), then continue until you reach:

• Then by back substitution to find x3, x2 and x1:


An M-file to implement naive Gauss elimination: GaussNaive

function x = GaussNaive(A,b)
% GaussNaive: naive Gauss elimination
%   x = GaussNaive(A,b): Gauss elimination without pivoting.
% input:
%   A = coefficient matrix
%   b = right hand side vector
% output:
%   x = solution vector
[m,n] = size(A);
if m~=n, error('Matrix A must be square'); end
nb = n+1;
Aug = [A b];
% forward elimination
for k = 1:n-1
    for i = k+1:n
        factor = Aug(i,k)/Aug(k,k);
        Aug(i,k:nb) = Aug(i,k:nb)-factor*Aug(k,k:nb);
    end
end
% back substitution
x = zeros(n,1);
x(n) = Aug(n,nb)/Aug(n,n);
for i = n-1:-1:1
    x(i) = (Aug(i,nb)-Aug(i,i+1:n)*x(i+1:n))/Aug(i,i);
end
MATLAB M-file: GaussNaive

• An M-file that implements naive Gauss elimination combines the coefficient matrix A
and the right-hand-side vector b in the augmented matrix Aug. Thus, the operations
are performed on Aug rather than separately on A and b.
• Two nested loops provide a concise representation of the forward elimination step.
• An outer loop moves down the matrix from one pivot row to the next. The inner
loop moves below the pivot row to each of the subsequent rows where elimination
is to take place.
• Finally, the actual elimination is represented by a single line that takes advantage of
MATLAB’s ability to perform matrix operations.
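For readers working outside MATLAB, a minimal Python/NumPy sketch of the same algorithm (the function name gauss_naive is ours, not from the lecture; like the M-file, it does not pivot, so a zero pivot still causes a division by zero):

```python
import numpy as np

def gauss_naive(A, b):
    """Naive Gauss elimination without pivoting, mirroring GaussNaive.m."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    if m != n:
        raise ValueError("Matrix A must be square")
    # Build the augmented matrix [A | b] and operate on it directly.
    aug = np.hstack([A, b.reshape(-1, 1)])
    # Forward elimination: reduce to an upper triangular system.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = aug[i, k] / aug[k, k]   # fails if pivot aug[k, k] is zero
            aug[i, k:] -= factor * aug[k, k:]
    # Back substitution, from the last unknown upward.
    x = np.zeros(n)
    x[n - 1] = aug[n - 1, n] / aug[n - 1, n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (aug[i, n] - aug[i, i + 1:n] @ x[i + 1:n]) / aug[i, i]
    return x

# Reuse the system from the Cramer's rule example as a check.
A = [[0.3, 0.52, 1.0], [0.5, 1.0, 1.9], [0.1, 0.3, 0.5]]
b = [-0.01, 0.67, -0.44]
print(np.round(gauss_naive(A, b), 1))   # → [-14.9 -29.5  19.8]
```

The answer matches the Cramer's rule result, as it must: both methods solve the same linear system exactly (up to roundoff).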
