EE0903301
Lecture 11: Gauss Elimination (Part 1)
Dr. Yanal Faouri
Solving Small Numbers of Equations: The Graphical Method
• There are appropriate methods for solving small (n ≤ 3) sets of simultaneous equations that do not require a computer.
• i.e., for two equations, each can be plotted as a line and the solution can be read from the graph as the point where the two lines intersect, as sketched in the MATLAB snippet below.
• Three cases that can pose problems when solving sets of linear equations:
• (a) No solution: the two equations represent parallel lines ➔ called a singular system (det = 0)
• (b) Infinite solutions: the two lines coincide ➔ also called a singular system
• (c) The slopes are so close that the point of intersection is difficult to detect visually ➔ an ill-conditioned system (det ≈ 0); such systems are extremely sensitive to round-off error.
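• A quick MATLAB sketch of the graphical approach (the system used here is an assumed illustrative example, not one taken from the slides): each equation is solved for x2 and plotted versus x1, and the solution is read off at the intersection.
% assumed example system: 3x1 + 2x2 = 18 and -x1 + 2x2 = 2 (solution x1 = 4, x2 = 3)
x1 = linspace(0, 8, 100);
x2a = (18 - 3*x1)/2;    % x2 from the first equation
x2b = (2 + x1)/2;       % x2 from the second equation
plot(x1, x2a, x1, x2b), grid on
xlabel('x_1'), ylabel('x_2')
legend('3x_1 + 2x_2 = 18', '-x_1 + 2x_2 = 2')   % the lines cross at (4, 3)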
Solving Small Numbers of Equations: Determinants and Cramer’s Rule
• This method is based on determinants. In addition, the determinant is relevant to evaluating the ill-conditioning of a matrix.
• Cramer’s Rule. This rule states that each unknown in a system of linear algebraic equations may be expressed as
a fraction of two determinants with denominator D and with the numerator obtained from D by replacing the
column of coefficients of the unknown in question by the constants b1, b2, . . . , bn. For example, for three
equations, x1 would be computed as
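• Written out (this display is reconstructed from the description above):
x_1 = \frac{\begin{vmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{vmatrix}}{D}
• Analogous expressions hold for x2 and x3, with the second and third columns of D replaced by the constants, respectively.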
Example: Cramer’s Rule
• Problem Statement.
• Use Cramer’s rule to solve
0.3x1 + 0.52x2 + x3 = −0.01
0.5x1 + x2 + 1.9x3 = 0.67
0.1x1 + 0.3x2 + 0.5x3 = −0.44
• Solution. The determinant D can be evaluated as:
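• Expanding along the first row (the arithmetic below is reconstructed by cofactor expansion, as a check of the worked example):
D = 0.3(1×0.5 − 1.9×0.3) − 0.52(0.5×0.5 − 1.9×0.1) + 1(0.5×0.3 − 1×0.1) = 0.3(−0.07) − 0.52(0.06) + 1(0.05) = −0.0022
• Replacing, in turn, the first, second, and third columns of D by the constants and applying Cramer’s rule gives
x1 = 0.03278/(−0.0022) = −14.9,  x2 = 0.0649/(−0.0022) = −29.5,  x3 = −0.04356/(−0.0022) = 19.8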
Solving Small Numbers of Equations: Elimination of Unknowns
• The basic strategy is to multiply the equations by constants so that one of the unknowns will be eliminated when the two equations are combined. The result is a single equation that can be solved for the remaining unknown. This value can then be substituted into either of the original equations to compute the other variable.
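• For example, to eliminate x1 from the second of three equations, multiply the first equation by a21/a11 and subtract it from the second, which leaves
(a22 − (a21/a11)a12)x2 + (a23 − (a21/a11)a13)x3 = b2 − (a21/a11)b1, i.e., a′22x2 + a′23x3 = b′2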
• where the prime indicates that the elements have been changed from their original values.
• The procedure is then repeated for the remaining equations.
• The first equation is called the pivot equation and a11 is called the pivot element.
• A zero-pivot element can interfere with normalization by causing a division by zero.
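• For instance (an illustrative system, not one from the slides), if the first equation is 0x1 + 2x2 = 3 and the second is 4x1 + 5x2 = 6, the factor a21/a11 = 4/0 is undefined, so the rows must be reordered before elimination can proceed; this is the motivation for pivoting strategies.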
Naive Gauss Elimination
• The procedure can be continued using the remaining pivot equations. The final manipulation in the sequence is
to use the (n − 1)th equation to eliminate the xn−1 term from the nth equation. At this point, the system will have
been transformed to an upper triangular system:
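• Written out in the standard notation (reconstructed here), the transformed system has the form
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1\\
a'_{22}x_2 + a'_{23}x_3 + \cdots + a'_{2n}x_n &= b'_2\\
a''_{33}x_3 + \cdots + a''_{3n}x_n &= b''_3\\
&\;\;\vdots\\
a^{(n-1)}_{nn}x_n &= b^{(n-1)}_n
\end{aligned}
• where the primes and superscripts indicate how many times each coefficient has been modified during elimination.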
• Back Substitution.
• The last equation can now be solved for xn:
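• In the notation above:
x_n = \frac{b_n^{(n-1)}}{a_{nn}^{(n-1)}}
• This result can be back-substituted into the (n − 1)th equation to solve for x_{n−1}; repeating the procedure gives the general back-substitution formula
x_i = \frac{b_i^{(i-1)} - \sum_{j=i+1}^{n} a_{ij}^{(i-1)} x_j}{a_{ii}^{(i-1)}}, \qquad i = n-1, n-2, \ldots, 1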
Example: Naive Gauss Elimination
• Problem Statement.
• Use Gauss elimination to solve:
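• For concreteness, a representative system consistent with the 0.1/3 multiplier used in the solution below (the specific coefficients are an assumption, not reproduced from the slide) is
3x1 − 0.1x2 − 0.2x3 = 7.85
0.1x1 + 7x2 − 0.3x3 = −19.3
0.3x1 − 0.2x2 + 10x3 = 71.4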
• Solution. The first part of the procedure is forward elimination: start by multiplying equation 1 by 0.1/3 and subtracting the result from equation 2, then continue until you reach the upper triangular form.
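• For the assumed system above, forward elimination produces the upper triangular system
3x1 − 0.1x2 − 0.2x3 = 7.85
7.00333x2 − 0.293333x3 = −19.5617
10.0120x3 = 70.0843
• Back substitution then gives x3 ≈ 7.0000, x2 ≈ −2.5000, and x1 ≈ 3.0000.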
• An M-file that implements naive Gauss elimination combines the coefficient matrix A
and the right-hand-side vector b in the augmented matrix Aug. Thus, the operations
are performed on Aug rather than separately on A and b.
• Two nested loops provide a concise representation of the forward elimination step.
• An outer loop moves down the matrix from one pivot row to the next. The inner
loop moves below the pivot row to each of the subsequent rows where elimination
is to take place.
• Finally, the actual elimination is represented by a single line that takes advantage of
MATLAB’s ability to perform matrix operations.
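• A sketch consistent with that description (the function name GaussNaive and the exact formatting are assumptions, not necessarily the lecture’s M-file):
function x = GaussNaive(A, b)
% GaussNaive: naive Gauss elimination without pivoting
%   x = GaussNaive(A, b) solves A*x = b, where b is a column vector
[m, n] = size(A);
if m ~= n, error('Matrix A must be square'); end
nb = n + 1;
Aug = [A b];                          % augmented matrix [A | b]
% forward elimination
for k = 1:n-1                         % move down from one pivot row to the next
    for i = k+1:n                     % rows below the pivot row
        factor = Aug(i,k)/Aug(k,k);   % elimination factor
        Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);   % eliminate x_k from row i
    end
end
% back substitution
x = zeros(n,1);
x(n) = Aug(n,nb)/Aug(n,n);
for i = n-1:-1:1
    x(i) = (Aug(i,nb) - Aug(i,i+1:n)*x(i+1:n))/Aug(i,i);
end
end
• For the assumed example above, x = GaussNaive([3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10], [7.85; -19.3; 71.4]) returns approximately [3; -2.5; 7].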