
MA423 Matrix Computations

Lecture 8: System of Linear Equations-II

Rafikul Alam
Department of Mathematics
IIT Guwahati



Outline

• Gaussian elimination with pivoting


• Permuted LU decomposition



Pivoting
 
Consider $A = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}$. Since the pivot element, that is, the $(1,1)$ entry of the matrix, is 0, the elimination fails to reduce the matrix to upper triangular form.
However, interchanging the rows we obtain an upper triangular matrix:
$$
\underbrace{\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}}_{P}
\underbrace{\begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}}_{A}
= \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}
= \underbrace{\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}}_{L}
\underbrace{\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}}_{U}.
$$

The matrix P is a permutation matrix. A permutation matrix is obtained by interchanging rows of the identity matrix. The process of interchanging rows is called partial pivoting.

Theorem (GEPP): Let A be an n × n matrix. Then there is a permutation matrix P such that

PA = LU

where L is unit lower triangular and U is upper triangular.
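
For the $2 \times 2$ example above, the same factorization can be reproduced with MATLAB's built-in lu command (discussed later in this lecture); a minimal check:

A = [0 1; 1 1];
[L, U, P] = lu(A);     % partial pivoting: P*A = L*U
% expected: P = [0 1; 1 0], L = eye(2), U = [1 1; 0 1]
norm(P*A - L*U)        % 0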



Gaussian elimination with partial pivoting
Let $P_1$ be a permutation matrix so that the $(1,1)$ entry of $P_1 A$ is nonzero. Then for
$L_1 := I + \ell_1 e_1^\top$, with $\ell_{i1} := a_{i1}/a_{11}$, $i = 2:n$, we have

$$
L_1^{-1} P_1 A =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
0 & a_{22}^{(1)} & \cdots & a_{2n}^{(1)} \\
\vdots & \vdots & & \vdots \\
0 & a_{n2}^{(1)} & \cdots & a_{nn}^{(1)}
\end{bmatrix},
\quad \text{where } a_{ij}^{(1)} = a_{ij} - \ell_{i1} a_{1j}. \quad \text{Cost: } 2(n-1)^2 \text{ flops.}
$$

If $a_{22}^{(1)} = 0$ then elimination breaks down. However, if, say, $a_{n2}^{(1)} \neq 0$, then we can interchange rows, make $a_{n2}^{(1)}$ the pivot element, and continue elimination:

$$
P_2 L_1^{-1} P_1 A =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
0 & a_{n2}^{(1)} & \cdots & a_{nn}^{(1)} \\
\vdots & \vdots & & \vdots \\
0 & a_{22}^{(1)} & \cdots & a_{2n}^{(1)}
\end{bmatrix}.
$$

Here $P_2$ is the permutation matrix that interchanges the second row with the $n$-th row of $L_1^{-1} P_1 A$.
Gaussian elimination with partial pivoting
For $L_2 := I + \ell_2 e_2^\top$ with $\ell_{i2} := a_{i2}^{(1)}/a_{n2}^{(1)}$, $i = 3:n$, we have

$$
L_2^{-1} P_2 L_1^{-1} P_1 A =
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
0 & a_{n2}^{(1)} & a_{n3}^{(1)} & \cdots & a_{nn}^{(1)} \\
0 & 0 & a_{33}^{(2)} & \cdots & a_{3n}^{(2)} \\
\vdots & \vdots & \vdots & & \vdots \\
0 & 0 & a_{n3}^{(2)} & \cdots & a_{nn}^{(2)}
\end{bmatrix},
\quad a_{ij}^{(2)} = a_{ij}^{(1)} - \ell_{i2}\, a_{2j}^{(1)}. \quad \text{Cost: } 2(n-2)^2 \text{ flops.}
$$

Repeating the process we have $L_{n-1}^{-1} P_{n-1} L_{n-2}^{-1} \cdots P_2 L_1^{-1} P_1 A = U$.

Cost: $2(n-1)^2 + 2(n-2)^2 + \cdots + 2 \simeq 2n^3/3$ flops.

The matrix $L_{n-1}^{-1} P_{n-1} L_{n-2}^{-1} \cdots P_2 L_1^{-1} P_1$ may NOT be lower triangular. However, we show that

$$
PA = \underbrace{\hat L_1 \hat L_2 \cdots \hat L_{n-2} L_{n-1}}_{L}\, U = LU,
$$

where $L$ is unit lower triangular, the $\hat L_j$'s are obtained from the $L_j$'s by permuting their multipliers,
and $P := P_{n-1} P_{n-2} \cdots P_2 P_1$.
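
The $2n^3/3$ estimate follows from evaluating the sum of per-step costs in closed form:

$$\sum_{k=1}^{n-1} 2k^2 \;=\; \frac{(n-1)\,n\,(2n-1)}{3} \;=\; \frac{2n^3}{3} + O(n^2).$$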
Gaussian Elimination with Partial Pivoting (GEPP):
function [L, U, p] = GEPP(A)
% [L, U, p] = GEPP(A) produces a unit
% lower triangular matrix L, an upper
% triangular matrix U and a permutation
% vector p, so that A(p,:) = L*U.

[n, n] = size(A);
p = (1:n)';
for k = 1:n-1
    % find the largest element (in magnitude) in A(k:n,k)
    [r, m] = max( abs( A(k:n,k) ) );
    m = m + k - 1;
    if (m ~= k)              % swap rows k and m
        A([k m],:) = A([m k],:);
        p([k m]) = p([m k]);
    end



GEPP (cont.)

    if (A(k,k) ~= 0)
        % compute multipliers for the k-th step
        A(k+1:n,k) = A(k+1:n,k)/A(k,k);
        % rank-1 update of A(k+1:n,k+1:n)
        j = k+1:n;
        A(j,j) = A(j,j) - A(j,k)*A(k,j);
    end
end
L = eye(n,n) + tril(A,-1);   % strict lower triangle of A, plus I
U = triu(A);                 % upper triangle of A

The search for the largest entry (in magnitude) in each column guarantees that the denominator A(k,k) in the entries L(k+1:n,k) = A(k+1:n,k)/A(k,k) is at least as large in magnitude as each numerator. This ensures that |L(i, j)| ≤ 1 for all i, j, which is crucial for numerical stability.
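
A minimal usage sketch (assuming the function above is saved as GEPP.m; the test matrix is the one used in the example on the next slide):

A = [2 4 -2; 4 9 -3; -2 -3 7];
[L, U, p] = GEPP(A);
norm(A(p,:) - L*U)    % should be of the order of eps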



Example
Consider
$$
\underbrace{\begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{P_1}
\underbrace{\begin{bmatrix} 2 & 4 & -2 \\ 4 & 9 & -3 \\ -2 & -3 & 7 \end{bmatrix}}_{A}
= \begin{bmatrix} 4 & 9 & -3 \\ 2 & 4 & -2 \\ -2 & -3 & 7 \end{bmatrix}.
$$

Then $L_1 = I + \begin{bmatrix} 0 \\ \tfrac12 \\ -\tfrac12 \end{bmatrix} e_1^\top$, so
$L_1^{-1} P_1 A = \begin{bmatrix} 4 & 9 & -3 \\ 0 & -\tfrac12 & -\tfrac12 \\ 0 & \tfrac32 & \tfrac{11}{2} \end{bmatrix}$. Now

$$
\underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}}_{P_2}
\begin{bmatrix} 4 & 9 & -3 \\ 0 & -\tfrac12 & -\tfrac12 \\ 0 & \tfrac32 & \tfrac{11}{2} \end{bmatrix}
= \begin{bmatrix} 4 & 9 & -3 \\ 0 & \tfrac32 & \tfrac{11}{2} \\ 0 & -\tfrac12 & -\tfrac12 \end{bmatrix}.
$$

Then $L_2 = I + \begin{bmatrix} 0 \\ 0 \\ -\tfrac13 \end{bmatrix} e_2^\top$, so
$L_2^{-1} P_2 L_1^{-1} P_1 A = \begin{bmatrix} 4 & 9 & -3 \\ 0 & \tfrac32 & \tfrac{11}{2} \\ 0 & 0 & \tfrac43 \end{bmatrix}$.
Example (cont.)
Thus we have $L_2^{-1} P_2 L_1^{-1} P_1 A = U \implies A = MU$, where $M := (L_2^{-1} P_2 L_1^{-1} P_1)^{-1} = P_1 L_1 P_2 L_2$.
Now
$$
P_1 L_1 = P_1 \begin{bmatrix} 1 & 0 & 0 \\ \tfrac12 & 1 & 0 \\ -\tfrac12 & 0 & 1 \end{bmatrix}
= \begin{bmatrix} \tfrac12 & 1 & 0 \\ 1 & 0 & 0 \\ -\tfrac12 & 0 & 1 \end{bmatrix},
\qquad
P_2 L_2 = P_2 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -\tfrac13 & 1 \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 \\ 0 & -\tfrac13 & 1 \\ 0 & 1 & 0 \end{bmatrix}
$$
which shows that $M$ is not lower triangular. Next, observe that
$$
U = L_2^{-1} P_2 L_1^{-1} P_1 A = L_2^{-1} P_2 L_1^{-1} P_2\, P_2 P_1 A \implies P_2 P_1 A = P_2 L_1 P_2 L_2\, U = LU,
$$
where $L := P_2 L_1 P_2 L_2$ is unit lower triangular. Indeed
$$
P_2 L_1 P_2 L_2 = \begin{bmatrix} 1 & 0 & 0 \\ -\tfrac12 & 1 & 0 \\ \tfrac12 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -\tfrac13 & 1 \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 \\ -\tfrac12 & 1 & 0 \\ \tfrac12 & -\tfrac13 & 1 \end{bmatrix}.
$$
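
A quick numerical check of this worked example in MATLAB (the matrices below are exactly those displayed above):

A  = [2 4 -2; 4 9 -3; -2 -3 7];
P1 = [0 1 0; 1 0 0; 0 0 1];
P2 = [1 0 0; 0 0 1; 0 1 0];
L  = [1 0 0; -1/2 1 0; 1/2 -1/3 1];
U  = [4 9 -3; 0 3/2 11/2; 0 0 4/3];
norm(P2*P1*A - L*U)    % 0, confirming P2*P1*A = L*U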



Permuted LU decomposition (PA = LU)
By GEPP we have $L_{n-1}^{-1} P_{n-1} \cdots L_2^{-1} P_2 L_1^{-1} P_1 A = U$, where $P_k$ is the permutation matrix and $L_k := I + \ell_k e_k^\top$ is the elimination matrix at the $k$-th step.

Theorem: Set $L(\ell_k) := L_k$. Then $P_m L(\ell_k) = L(P_m \ell_k) P_m$ for $m > k$. Hence

$$P_{n-1} \cdots P_{k+1} L(\ell_k) = L(P_{n-1} \cdots P_{k+1} \ell_k)\, P_{n-1} \cdots P_{k+1}.$$

Set $\hat L_k := L(P_{n-1} \cdots P_{k+1} \ell_k)$. Then $\hat L_k$ is unit lower triangular and

$$L_{n-1}^{-1} P_{n-1} \cdots L_2^{-1} P_2 L_1^{-1} P_1 A = \hat L_{n-1}^{-1} \hat L_{n-2}^{-1} \cdots \hat L_2^{-1} \hat L_1^{-1} P_{n-1} \cdots P_1 A.$$

Thus, setting $P := P_{n-1} P_{n-2} \cdots P_1$ and $L := \hat L_1 \cdots \hat L_{n-2} L_{n-1}$, we have $PA = LU$.

Proof: Note that the first $m-1$ rows of $P_m$ ($P_m$ is used at the $m$-th step of elimination) are the same as the first $m-1$ rows of $I_n$. Hence $e_k^\top P_m = e_k^\top$ for $k = 1 : m-1$.
Permuted LU decomposition (PA = LU)
Since $e_k^\top P_m = e_k^\top$ for $m > k$, we have
$$P_m L(\ell_k) = P_m (I + \ell_k e_k^\top) = P_m + P_m \ell_k e_k^\top = P_m + P_m \ell_k e_k^\top P_m = L(P_m \ell_k) P_m.$$

Consequently, $P_{n-1} \cdots P_{k+1} L(\ell_k) = L(P_{n-1} \cdots P_{k+1} \ell_k)\, P_{n-1} \cdots P_{k+1}$.

Now
$$
\begin{aligned}
L_3^{-1} P_3 L_2^{-1} P_2 L_1^{-1} P_1 A &= L(-\ell_3) P_3 L(-\ell_2) P_2 L(-\ell_1) P_1 A \\
&= L(-\ell_3) L(-P_3 \ell_2) P_3 P_2 L(-\ell_1) P_1 A \\
&= L(-\ell_3) L(-P_3 \ell_2) L(-P_3 P_2 \ell_1) P_3 P_2 P_1 A.
\end{aligned}
$$
Continuing this process, we have

$$L_{n-1}^{-1} P_{n-1} \cdots L_2^{-1} P_2 L_1^{-1} P_1 A = \hat L_{n-1}^{-1} \hat L_{n-2}^{-1} \cdots \hat L_2^{-1} \hat L_1^{-1} P_{n-1} \cdots P_1 A.$$

Hence the results follow. □
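
A small numerical sanity check of the key identity $P_m L(\ell_k) = L(P_m \ell_k) P_m$ in MATLAB; the multiplier vector and permutation below are arbitrary illustrative choices with $m > k$:

n = 4; k = 1;
ek = zeros(n,1); ek(k) = 1;
lk = [0; 0.3; -0.5; 0.7];                  % multiplier vector, zero in positions 1:k
Pm = eye(n); Pm([3 4],:) = Pm([4 3],:);    % P_m interchanges rows 3 and 4 (m = 3 > k)
Lk = eye(n) + lk*ek';                      % L(l_k)
norm(Pm*Lk - (eye(n) + (Pm*lk)*ek')*Pm)    % 0, confirming P_m L(l_k) = L(P_m l_k) P_m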


Solution of linear system using GEPP
A linear system Ax = b can be solved using GEPP as follows.

Ax = b =⇒ PAx = Pb =⇒ LUx = Pb.

1. Compute PA = LU ($2n^3/3$ flops).
2. Compute y = Pb (permute the entries of b, no arithmetic needed).
3. Solve Lz = y by forward substitution ($n^2$ flops).
4. Solve Ux = z by back substitution ($n^2$ flops).

Total cost: $2n^3/3$ flops.

GEPP is the standard method used in practice for solving linear systems and is the default method in MATLAB: the command x = A\b solves Ax = b using GEPP, and the command [L, U, P] = lu(A) computes PA = LU.
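
Written out in MATLAB, the four steps above look as follows (A is the matrix from the earlier example and b is an arbitrary right-hand side; backslash applied to a triangular matrix performs the corresponding substitution):

A = [2 4 -2; 4 9 -3; -2 -3 7];
b = [2; 8; 10];           % arbitrary right-hand side
[L, U, P] = lu(A);        % step 1: PA = LU
y = P*b;                  % step 2: permute b
z = L\y;                  % step 3: forward substitution
x = U\z;                  % step 4: back substitution
norm(A*x - b)             % residual of the order of eps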



Gaussian elimination with complete pivoting
Gaussian elimination with complete pivoting (GECP) is a variant of GE. At the k-th step, GECP searches not just the column A(k:n,k) but the entire submatrix A(k:n,k:n) for the largest entry and then swaps rows and columns to put that entry into A(k,k). After k − 1 steps,

$$
L_{k-1}^{-1} P_{k-1} \cdots L_1^{-1} P_1 A Q_1 \cdots Q_{k-1} =
\begin{bmatrix}
* & \cdots & * & * & \cdots & * \\
  & \ddots & \vdots & \vdots & & \vdots \\
  &        & * & * & \cdots & * \\
  &        &   & a_{kk} & \cdots & a_{kn} \\
  &        &   & \vdots & & \vdots \\
  &        &   & a_{nk} & \cdots & a_{nn}
\end{bmatrix}.
$$

After $n-1$ steps, we have $PAQ = LU$, where $P$ and $Q$ are permutation matrices. If $\operatorname{rank}(A) = r$ then
$$U = \begin{bmatrix} U_1 & U_2 \\ 0 & 0 \end{bmatrix},$$
where $U_1$ is an $r \times r$ nonsingular upper triangular matrix.

GECP guarantees $|L(i,j)| \le 1$ and $|U(i,j)| \le |U(i,i)|$.

Cost: $2n^3/3 + n^3/3 = n^3$ flops. The additional $n^3/3$ flops are due to finding the maximum element at each step.
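
A minimal GECP sketch in the style of the GEPP function above. This is an illustrative outline, not code from the lecture; it returns permutation vectors p and q so that A(p,q) = L*U:

function [L, U, p, q] = GECP(A)
% Gaussian elimination with complete pivoting (sketch):
% unit lower triangular L, upper triangular U and permutation
% vectors p, q such that A(p,q) = L*U.
[n, n] = size(A);
p = (1:n)'; q = (1:n)';
for k = 1:n-1
    % find the largest entry (in magnitude) of the submatrix A(k:n,k:n)
    [colmax, rowidx] = max(abs(A(k:n,k:n)), [], 1);
    [~, colidx] = max(colmax);
    i = rowidx(colidx) + k - 1;     % row index of that entry in A
    j = colidx + k - 1;             % column index of that entry in A
    if (i ~= k)                     % swap rows k and i
        A([k i],:) = A([i k],:);  p([k i]) = p([i k]);
    end
    if (j ~= k)                     % swap columns k and j
        A(:,[k j]) = A(:,[j k]);  q([k j]) = q([j k]);
    end
    if (A(k,k) ~= 0)
        A(k+1:n,k) = A(k+1:n,k)/A(k,k);                           % multipliers
        A(k+1:n,k+1:n) = A(k+1:n,k+1:n) - A(k+1:n,k)*A(k,k+1:n);  % rank-1 update
    end
end
L = eye(n,n) + tril(A,-1);
U = triu(A);

As with GEPP, norm(A(p,q) - L*U) should then be of the order of eps.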
GEPP versus GECP

• GECP is more expensive ($O(n^3)$ more operations) than GEPP.

• GECP is usually no more accurate than GEPP, which is why GEPP is the default method.

• Examples exist for which GECP does much better than GEPP.
• Sparsity of A can be exploited by clever choice of row and column
interchanges (at possible detriment to stability).

• We still do not fully understand why GEPP and GECP work so well in the
presence of roundoff.

***

