Lecture - Linear Systems

A system of simultaneous linear equations can be written in matrix form as [A][X] = [C].
Gaussian Elimination for solving [A][X ] = [C]
consists of two steps:
1. Forward Elimination of unknowns
The goal of Forward Elimination is to transform the coefficient matrix into an
Upper Triangular Matrix
$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix} \rightarrow \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$
2. Back Substitution
The goal of Back Substitution is to solve each of the equations using the upper
triangular matrix.
Gaussian Elimination
Example 3.16
Forward Elimination
Linear Equations
A set of n equations and n unknowns
Forward Elimination
Transform to an Upper Triangular Matrix
Step 1: Eliminate x1 in 2nd equation using equation 1 as
the pivot equation (pivot row)
$$\left[\frac{\text{Eqn. 1}}{a_{11}}\right] \times (a_{21})$$
Forward Elimination
Zero out the coefficient of x1 in the 2nd equation by subtracting this scaled equation from the 2nd equation:
$$\left(a_{22} - \frac{a_{21}}{a_{11}}a_{12}\right)x_2 + \dots + \left(a_{2n} - \frac{a_{21}}{a_{11}}a_{1n}\right)x_n = b_2 - \frac{a_{21}}{a_{11}}b_1$$
or
$$a'_{22}x_2 + \dots + a'_{2n}x_n = b'_2$$
where
$$a'_{22} = a_{22} - \frac{a_{21}}{a_{11}}a_{12}, \quad \dots, \quad a'_{2n} = a_{2n} - \frac{a_{21}}{a_{11}}a_{1n}, \quad b'_2 = b_2 - \frac{a_{21}}{a_{11}}b_1$$
Forward Elimination
Repeat this procedure for the remaining equations to reduce the set of equations.
Forward Elimination
$$\text{Eqn. 3} - \left[\frac{\text{Eqn. 2}}{a_{22}}\right] \times (a_{32})$$
Forward Elimination
This procedure is repeated for the remaining equations.
Forward Elimination
Continue this procedure by using the third equation as the pivot
equation and so on.
At the end of (n-1) Forward Elimination steps, the system of
equations will look like:
$$a_{nn}^{(n-1)} x_n = b_n^{(n-1)}$$
Forward Elimination
At the end of the Forward Elimination steps, the system has been reduced to upper triangular form.
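The forward-elimination procedure above can be sketched in Python with NumPy. This is a minimal naive version (no pivoting); the function and variable names are my own, not from the lecture.

```python
import numpy as np

def forward_elimination(A, b):
    """Reduce [A][X] = [b] to upper triangular form (naive, no pivoting).

    For each pivot row k, subtract (a_ik / a_kk) times row k from every
    row i below it.  Assumes no zero pivots are encountered.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):            # pivot rows
        for i in range(k + 1, n):     # rows below the pivot
            m = A[i, k] / A[k, k]     # elimination multiplier
            A[i, k:] -= m * A[k, k:]  # zero out column k of row i
            b[i] -= m * b[k]
    return A, b

# The 3x3 example from the slides
A = np.array([[25.0, 5.0, 1.0],
              [64.0, 8.0, 1.0],
              [144.0, 12.0, 1.0]])
b = np.array([106.8, 177.2, 279.2])
U, c = forward_elimination(A, b)
```

Running this reproduces the upper triangular matrix shown earlier, with 0, -4.8, -1.56 and 0, 0, 0.7 in the last two rows.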
Back Substitution
The goal of Back Substitution is to solve each of
the equations using the upper triangular matrix.
Example of a system of 3 equations:
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}$$
Back Substitution
Start with the last equation because it has only
one unknown
$$x_n = \frac{b_n^{(n-1)}}{a_{nn}^{(n-1)}}$$
Back Substitution
Representing Back Substitution for all equations by the formula
$$x_i = \frac{b_i^{(i-1)} - \sum_{j=i+1}^{n} a_{ij}^{(i-1)} x_j}{a_{ii}^{(i-1)}} \qquad \text{for } i = n-1,\, n-2, \dots, 1$$
and
$$x_n = \frac{b_n^{(n-1)}}{a_{nn}^{(n-1)}}$$
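The back-substitution formula above can be sketched as follows (NumPy; names are my own, not from the lecture):

```python
import numpy as np

def back_substitution(U, c):
    """Solve [U][X] = [c] for an upper triangular U.

    Start with x_n = c_n / u_nn, then work upward, subtracting the
    already-known terms u_ij * x_j for j > i.
    """
    n = len(c)
    x = np.zeros(n)
    x[-1] = c[-1] / U[-1, -1]        # last equation has only one unknown
    for i in range(n - 2, -1, -1):   # i = n-1, n-2, ..., 1
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Upper triangular system produced by forward elimination in the example
U = np.array([[25.0, 5.0, 1.0],
              [0.0, -4.8, -1.56],
              [0.0, 0.0, 0.7]])
c = np.array([106.8, -96.21, 0.735])
x = back_substitution(U, c)
```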
Potential Pitfalls
- Division by zero: may occur in the forward elimination steps.
- Round-off error: the method is prone to round-off errors.
Improvements
- Increase the number of significant digits: decreases round-off error.
Division by zero
• Trivial pivoting
Round-off error
• Partial pivoting
• Scaled partial pivoting
Partial Pivoting
(find the element of largest absolute value in the column, on or below the main diagonal)
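The pivot-selection rule above can be folded into the elimination loop. A sketch in Python (NumPy; function name is my own):

```python
import numpy as np

def gauss_elimination_partial_pivoting(A, b):
    """Gaussian elimination with partial pivoting.

    Before eliminating column k, find the row (on or below the main
    diagonal) whose entry in column k has the largest absolute value,
    and swap it into the pivot position.  This avoids division by zero
    and reduces round-off error.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # index of max |a_ik| for i >= k
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:                     # swap rows k and p
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    return A, b

# The example system used below
A = np.array([[10.0, -7.0, 0.0],
              [-3.0, 2.099, 6.0],
              [5.0, -1.0, 5.0]])
b = np.array([7.0, 3.901, 6.0])
U, c = gauss_elimination_partial_pivoting(A, b)
```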
Partial Pivoting: Example
Consider the system of equations
$$10x_1 - 7x_2 = 7$$
$$-3x_1 + 2.099x_2 + 6x_3 = 3.901$$
$$5x_1 - x_2 + 5x_3 = 6$$
In matrix form:
$$\begin{bmatrix} 10 & -7 & 0 \\ -3 & 2.099 & 6 \\ 5 & -1 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 3.901 \\ 6 \end{bmatrix}$$
Forward Elimination: Step 1
$$\begin{bmatrix} 10 & -7 & 0 \\ -3 & 2.099 & 6 \\ 5 & -1 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 3.901 \\ 6 \end{bmatrix} \Rightarrow \begin{bmatrix} 10 & -7 & 0 \\ 0 & -0.001 & 6 \\ 0 & 2.5 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 6.001 \\ 2.5 \end{bmatrix}$$
Partial Pivoting: Example
Forward Elimination: Step 2
Examining the values in the second column on or below the diagonal:
|-0.001| and |2.5|, i.e., 0.001 and 2.5
The largest absolute value is 2.5, so row 2 is switched with row 3.
$$\begin{bmatrix} 10 & -7 & 0 \\ 0 & -0.001 & 6 \\ 0 & 2.5 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 6.001 \\ 2.5 \end{bmatrix} \Rightarrow \begin{bmatrix} 10 & -7 & 0 \\ 0 & 2.5 & 5 \\ 0 & -0.001 & 6 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 2.5 \\ 6.001 \end{bmatrix}$$
Eliminating the x2 term in the third row then gives
$$\begin{bmatrix} 10 & -7 & 0 \\ 0 & 2.5 & 5 \\ 0 & 0 & 6.002 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 2.5 \\ 6.002 \end{bmatrix}$$
Partial Pivoting: Example
Back Substitution
Solving the equations through back substitution:
$$\begin{bmatrix} 10 & -7 & 0 \\ 0 & 2.5 & 5 \\ 0 & 0 & 6.002 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 2.5 \\ 6.002 \end{bmatrix}$$
$$x_3 = \frac{6.002}{6.002} = 1$$
$$x_2 = \frac{2.5 - 5x_3}{2.5} = \frac{2.5 - 5(1)}{2.5} = -1$$
$$x_1 = \frac{7 + 7x_2 - 0x_3}{10} = \frac{7 + 7(-1)}{10} = 0$$
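As a quick sanity check on the example, NumPy's built-in solver recovers the same answer:

```python
import numpy as np

# The original system from the partial-pivoting example
A = np.array([[10.0, -7.0, 0.0],
              [-3.0, 2.099, 6.0],
              [5.0, -1.0, 5.0]])
b = np.array([7.0, 3.901, 6.0])

x = np.linalg.solve(A, b)
print(x)  # approximately [0, -1, 1]
```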
Scaled partial pivoting example
LU Decomposition
(Triangular Factorization)
LU Decomposition
Method: decompose [A] into [L] and [U]
[U] is the same as the coefficient matrix at the end of the forward
elimination step.
[L] is obtained using the multipliers that were used in the forward
elimination process
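The description above (U from forward elimination, L from the stored multipliers) can be sketched as a Doolittle-style decomposition in Python (NumPy; no pivoting; names are my own):

```python
import numpy as np

def lu_decompose(A):
    """LU decomposition without pivoting.

    [U] is the matrix left by forward elimination; [L] stores the
    multipliers used at each elimination step, with 1s on its diagonal.
    """
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float)   # astype copies, so A is not modified
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]     # store the multiplier
            U[i, k:] -= L[i, k] * U[k, k:]  # eliminate as usual
    return L, U

A = np.array([[25.0, 5.0, 1.0],
              [64.0, 8.0, 1.0],
              [144.0, 12.0, 1.0]])
L, U = lu_decompose(A)
```

This reproduces the [L] and [U] factors shown in the example that follows, and L @ U multiplies back to [A].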
LU Decomposition
Given[A][X ] = [C ]
Decompose [A ] into [L] and [U ] ⇒ [L][U ][ X ] = [C ]
Define [Z ] = [U ][X ]
LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Using the procedure for finding the [L] and [U] matrices
$$[A] = [L][U] = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix} \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$
LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Set [L][Z] = [C]:
$$\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$
Solve for [Z]:
$$[Z] = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ -96.21 \\ 0.735 \end{bmatrix}$$
LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Set [U][X] = [Z]:
$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ -96.21 \\ 0.735 \end{bmatrix}$$
Solve for [X]:
$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 0.2900 \\ 19.70 \\ 1.050 \end{bmatrix}$$
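The two substitution passes of the example can be carried out directly (NumPy; note the slides round z3 to 0.735, so full-precision arithmetic shifts the last digits slightly):

```python
import numpy as np

# [L], [U], and [C] from the example above
L = np.array([[1.0, 0.0, 0.0],
              [2.56, 1.0, 0.0],
              [5.76, 3.5, 1.0]])
U = np.array([[25.0, 5.0, 1.0],
              [0.0, -4.8, -1.56],
              [0.0, 0.0, 0.7]])
c = np.array([106.8, 177.2, 279.2])
n = len(c)

# Forward substitution: solve [L][Z] = [C] from the top down
z = np.zeros(n)
for i in range(n):
    z[i] = c[i] - L[i, :i] @ z[:i]   # L has 1s on its diagonal

# Back substitution: solve [U][X] = [Z] from the bottom up
x = np.zeros(n)
for i in range(n - 1, -1, -1):
    x[i] = (z[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
```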
Factorization with Pivoting
PA = LU.
• Theorem (PA = LU, Factorization with Pivoting). Given that A is nonsingular, the solution X to the linear system AX = B is found in four steps:
1. Construct the matrices L, U, and P
2. Compute the column vector PB
3. Solve LY = PB for Y using forward substitution
4. Solve UX = Y for X using back substitution
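A sketch of the factorization step in Python (NumPy; names are my own). Row swaps are applied to the permutation matrix and to the already-stored multipliers so that P @ A == L @ U holds:

```python
import numpy as np

def lup_decompose(A):
    """PA = LU factorization with partial pivoting.

    Returns P, L, U with P @ A equal to L @ U.
    """
    n = A.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    U = A.astype(float)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))   # pivot row for column k
        if p != k:
            U[[k, p]] = U[[p, k]]
            P[[k, p]] = P[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]     # swap stored multipliers only
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U

A = np.array([[10.0, -7.0, 0.0],
              [-3.0, 2.099, 6.0],
              [5.0, -1.0, 5.0]])
P, L, U = lup_decompose(A)
```

The remaining three steps of the theorem are then forward substitution on L with the permuted right-hand side P @ B, followed by back substitution on U.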
$$[L] = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 4 & 0 & 1 \end{bmatrix}$$
To decompose [A], time is proportional to $\frac{n^3}{3}$.
Total computational time for LU Decomposition is proportional to
$$\frac{n^3}{3} + 2\left(\frac{n^2}{2}\right) \quad \text{or} \quad \frac{n^3}{3} + n^2$$
while the total time for Gaussian Elimination is proportional to
$$\frac{n^3}{3} + \frac{n^2}{2}$$
LU Decomposition
What about a situation where the [C] vector changes?
The LU decomposition of [A] is independent of the [C] vector, so it only needs to be done once.
Let m = the number of times the [C] vector changes
The computational times are proportional to
$$\text{Gauss Elimination: } m\left(\frac{n^3}{3} + \frac{n^2}{2}\right) \qquad \text{LU Decomposition: } \frac{n^3}{3} + m\,n^2$$
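A small illustration of this comparison (the values of n and m here are arbitrary, chosen only to show the gap):

```python
# Proportional operation counts for solving [A][X] = [C] with m
# different right-hand-side vectors
n, m = 1000, 20

gauss = m * (n**3 / 3 + n**2 / 2)   # full elimination repeated m times
lu = n**3 / 3 + m * n**2            # factor once, then m cheap solves

print(f"Gaussian elimination ~ {gauss:.3e} operations")
print(f"LU decomposition     ~ {lu:.3e} operations")
print(f"ratio ~ {gauss / lu:.1f}x")
```

For large n the ratio approaches m, since the O(n^3) factorization is done only once.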
Simultaneous Linear Equations:
Iterative Methods
-Algebraically solve each linear equation for xi
-Assume an initial guess
-Solve for each xi and repeat
-Check if error is within a pre-specified tolerance.
Two such methods: Jacobi and Gauss-Seidel.
Algorithm
Rewrite each equation to solve for one unknown:
$$x_1 = \frac{c_1 - a_{12}x_2 - a_{13}x_3 - \cdots - a_{1n}x_n}{a_{11}} \qquad \text{from equation 1}$$
$$x_2 = \frac{c_2 - a_{21}x_1 - a_{23}x_3 - \cdots - a_{2n}x_n}{a_{22}} \qquad \text{from equation 2}$$
$$\vdots$$
$$x_{n-1} = \frac{c_{n-1} - a_{n-1,1}x_1 - a_{n-1,2}x_2 - \cdots - a_{n-1,n-2}x_{n-2} - a_{n-1,n}x_n}{a_{n-1,n-1}} \qquad \text{from equation } n-1$$
$$x_n = \frac{c_n - a_{n1}x_1 - a_{n2}x_2 - \cdots - a_{n,n-1}x_{n-1}}{a_{nn}} \qquad \text{from equation } n$$
Stopping criterion
Absolute Relative Error
$$\left|\epsilon_a\right|_i = \left|\frac{x_i^{\text{new}} - x_i^{\text{old}}}{x_i^{\text{new}}}\right| \times 100$$
Given the system of equations
$$12x_1 + 3x_2 - 5x_3 = 1$$
$$x_1 + 5x_2 + 3x_3 = 28$$
$$3x_1 + 7x_2 + 13x_3 = 76$$
with an initial guess of
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$
the first iteration gives
$$x_1 = \frac{1 - 3x_2 + 5x_3}{12} = \frac{1 - 3(0) + 5(1)}{12} = 0.50000$$
$$x_2 = \frac{28 - x_1 - 3x_3}{5} = \frac{28 - (0.50000) - 3(1)}{5} = 4.9000$$
$$x_3 = \frac{76 - 3x_1 - 7x_2}{13} = \frac{76 - 3(0.50000) - 7(4.9000)}{13} = 3.0923$$
The maximum absolute relative error after the first iteration is 100%.
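The iteration above (each new value used immediately, i.e., Gauss-Seidel) can be sketched as follows (NumPy; function name is my own):

```python
import numpy as np

def gauss_seidel(A, c, x0, sweeps=1):
    """Gauss-Seidel iteration: each x_i is recomputed from its own
    equation, immediately using the newest values of the other unknowns."""
    x = np.array(x0, dtype=float)
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            others = A[i] @ x - A[i, i] * x[i]   # off-diagonal terms
            x[i] = (c[i] - others) / A[i, i]
    return x

# System and initial guess from the example above
A = np.array([[12.0, 3.0, -5.0],
              [1.0, 5.0, 3.0],
              [3.0, 7.0, 13.0]])
c = np.array([1.0, 28.0, 76.0])

x_first = gauss_seidel(A, c, [1.0, 0.0, 1.0], sweeps=1)
print(x_first)  # approximately [0.5, 4.9, 3.0923]
```

Running more sweeps drives the iterates toward the exact solution (1, 3, 4).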
Repeating more iterations, the following values are obtained (the table of iterates and relative errors is not reproduced here).
The solution obtained:
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0.99919 \\ 3.0001 \\ 4.0001 \end{bmatrix}$$
The exact solution:
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix}$$
What went wrong?
Even when the iterations are performed correctly, the answer may not converge to the correct solution.
This illustrates a pitfall of the Jacobi and Gauss-Seidel methods: not all systems of equations will converge.
Is there a fix?
The magnitude of the coefficient on the diagonal must be greater than the sum of the magnitudes of the other coefficients in that row; a matrix satisfying this in every row is strictly diagonally dominant, and convergence is then guaranteed.
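The convergence condition above is easy to test programmatically (NumPy; function name is my own):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """In every row, the magnitude of the diagonal coefficient must
    exceed the sum of the magnitudes of the other coefficients."""
    M = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(M)
    off_sum = M.sum(axis=1) - diag
    return bool(np.all(diag > off_sum))

# The convergent example above is strictly diagonally dominant:
# |12| > 3+5, |5| > 1+3, |13| > 3+7
A = [[12, 3, -5],
     [1, 5, 3],
     [3, 7, 13]]
print(is_strictly_diagonally_dominant(A))  # True
```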