
Numerical Methods for Eng [ENGR 391] [Lyes KADEM 2007]

III. LU decomposition Method


Gauss elimination becomes inefficient when solving several systems that share the same coefficient matrix [A] but have different right-hand-side vectors {b}.
LU decomposition separates the time-consuming elimination of [A] from the manipulation of {b}.
Hence, the decomposed [A] can be reused with several {b} vectors in an efficient manner.
LU decomposition is based on the fact that any square matrix [A] can be written as a product of
two matrices:

$$[A] = [L][U]$$

where [L] is a lower triangular matrix and [U] is an upper triangular matrix.
III.1. Crout's method
To illustrate Crout's method for LU decomposition, let us start with an example; consider
the 3×3 matrix:

$$
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
=
\begin{bmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{bmatrix}
\begin{bmatrix} 1 & u_{12} & u_{13} \\ 0 & 1 & u_{23} \\ 0 & 0 & 1 \end{bmatrix}
$$
Hence

$$
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
=
\begin{bmatrix}
l_{11} & l_{11}u_{12} & l_{11}u_{13} \\
l_{21} & l_{21}u_{12}+l_{22} & l_{21}u_{13}+l_{22}u_{23} \\
l_{31} & l_{31}u_{12}+l_{32} & l_{31}u_{13}+l_{32}u_{23}+l_{33}
\end{bmatrix}
$$
We can find, therefore, the elements of the matrices [L] and [U] by equating the two above
matrices:

$$
\begin{aligned}
&l_{11} = a_{11}; \qquad l_{21} = a_{21}; \qquad l_{31} = a_{31}\\
&l_{11}u_{12} = a_{12} \ \Rightarrow\ u_{12} = \frac{a_{12}}{l_{11}}; \qquad
l_{11}u_{13} = a_{13} \ \Rightarrow\ u_{13} = \frac{a_{13}}{l_{11}}\\
&l_{21}u_{12} + l_{22} = a_{22} \ \Rightarrow\ l_{22} = a_{22} - l_{21}u_{12}; \qquad
l_{31}u_{12} + l_{32} = a_{32} \ \Rightarrow\ l_{32} = a_{32} - l_{31}u_{12}\\
&l_{21}u_{13} + l_{22}u_{23} = a_{23} \ \Rightarrow\ u_{23} = \frac{a_{23} - l_{21}u_{13}}{l_{22}}\\
&l_{31}u_{13} + l_{32}u_{23} + l_{33} = a_{33} \ \Rightarrow\ l_{33} = a_{33} - l_{31}u_{13} - l_{32}u_{23}
\end{aligned}
$$
Linear Algebraic Equations
52
Oooooooooouffffffffff!!!
For a general n×n matrix, you have to apply the following expressions to find the LU
decomposition of a matrix [A]:

$$
l_{ij} = a_{ij} - \sum_{k=1}^{j-1} l_{ik} u_{kj}; \quad i \ge j; \quad j = 1,2,\ldots,n
$$

$$
u_{ij} = \frac{a_{ij} - \sum_{k=1}^{i-1} l_{ik} u_{kj}}{l_{ii}}; \quad i < j; \quad j = 2,3,\ldots,n
$$

and

$$
u_{ii} = 1; \quad i = 1,2,\ldots,n
$$
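These element formulas map almost line-by-line onto code. Below is a minimal sketch in Python with NumPy (the function name `crout` and the NumPy dependency are my own choices, not part of the notes); it performs no pivoting, so it assumes every l_jj comes out nonzero:

```python
import numpy as np

def crout(A):
    """Crout LU decomposition: A = L @ U, with a unit diagonal on U.

    A sketch of the element formulas above; no pivoting is performed,
    so every diagonal element l_jj is assumed to be nonzero.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)                      # u_ii = 1
    for j in range(n):
        # column j of L: l_ij = a_ij - sum_{k<j} l_ik u_kj, for i >= j
        for i in range(j, n):
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        # row j of U: u_ji = (a_ji - sum_{k<j} l_jk u_ki) / l_jj, for i > j
        for i in range(j + 1, n):
            U[j, i] = (A[j, i] - L[j, :j] @ U[:j, i]) / L[j, j]
    return L, U
```

Note that each column of [L] is completed before the corresponding row of [U], which is exactly the computation order recommended in the IMPORTANT NOTE below.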
III.2. Solution of equations
Now, to solve our system of linear equations, we express the initial system

$$[A]\{x\} = \{b\}$$

in the following form:

$$[A]\{x\} = [L][U]\{x\} = \{b\}$$

To find the solution {x}, we first define a vector {z}:

$$\{z\} = [U]\{x\}$$

Our initial system then becomes:

$$[L]\{z\} = \{b\}$$
IMPORTANT NOTE
As for the 3×3 matrix (see above), it is better to follow a certain order when computing the
terms of the [L] and [U] matrices. This order is: l_{i1}, u_{1j}; l_{i2}, u_{2j}; …; l_{i,n-1}, u_{n-1,j}; l_{nn}.
Example
Find the LU decomposition of the following matrix using Crout's method:

$$
[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
= \begin{bmatrix} 2 & 1 & 1 \\ 4 & 3 & 1 \\ 3 & 2 & 2 \end{bmatrix}
$$
As [L] is a lower triangular matrix, {z} can be computed by a forward-substitution process,
starting from z_1 up to z_n. The values of {x} can then be found using the equation

$$\{z\} = [U]\{x\}$$

As [U] is an upper triangular matrix, it is possible to compute {x} using a back-substitution
process, starting from x_n down to x_1. [You will understand better with an example.]
The general form to solve a system of linear equations using LU decomposition is:

$$
z_1 = \frac{b_1}{l_{11}}; \qquad z_i = \frac{b_i - \sum_{k=1}^{i-1} l_{ik} z_k}{l_{ii}}; \quad i = 2,3,\ldots,n
$$

and

$$
x_n = z_n; \qquad x_i = z_i - \sum_{k=i+1}^{n} u_{ik} x_k; \quad i = n-1, n-2, \ldots, 2, 1
$$
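The forward- and back-substitution formulas above can be sketched as a short Python/NumPy helper (the function name is my own, not code from the notes); it assumes [L] and [U] come from a Crout decomposition, so [U] has ones on its diagonal:

```python
import numpy as np

def lu_solve(L, U, b):
    """Solve L U x = b: forward substitution for z, then back substitution
    for x. Assumes U has a unit diagonal (Crout convention)."""
    n = len(b)
    z = np.zeros(n)
    # forward substitution: z_1 = b_1/l_11, then z_i = (b_i - sum l_ik z_k)/l_ii
    for i in range(n):
        z[i] = (b[i] - L[i, :i] @ z[:i]) / L[i, i]
    x = np.zeros(n)
    # back substitution: x_n = z_n, then x_i = z_i - sum_{k>i} u_ik x_k
    for i in range(n - 1, -1, -1):
        x[i] = z[i] - U[i, i + 1:] @ x[i + 1:]
    return x
```

Once [L] and [U] are available, each new right-hand side {b} costs only these two triangular sweeps, which is the efficiency argument made at the start of the chapter.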
Example
Solve the following equations using the LU decomposition:

$$
\begin{aligned}
2x_1 + x_2 + x_3 &= 4\\
4x_1 + 3x_2 + x_3 &= 6\\
3x_1 + 2x_2 + 2x_3 &= 15
\end{aligned}
$$

Note on storage of [A], [L], [U]
1- In practice, the matrices [L] and [U] do not need to be stored separately. By omitting
the zeroes in [L] and [U] and the ones on the diagonal of [U], it is possible to store the
elements of [L] and [U] in the same matrix.
2- Note also that, in the general formula for LU decomposition, once an element of the
matrix [A] has been used, it is not needed in the subsequent computations. Hence, the
elements of the matrix generated in point (1) above can be stored in [A].

III.3. Cholesky's method for symmetric matrices
In many engineering applications, the matrices involved are symmetric and positive definite. It
is then better to use Cholesky's method.
In Cholesky's method, the matrix [A] is decomposed into:

$$[A] = [U]^{T}[U]$$

where [U] is an upper triangular matrix.
The elements of [U] are given by:

$$
u_{11} = \left( a_{11} \right)^{1/2}; \qquad u_{1j} = \frac{a_{1j}}{u_{11}}; \quad j = 2,3,\ldots,n
$$

$$
u_{ii} = \left( a_{ii} - \sum_{k=1}^{i-1} u_{ki}^{2} \right)^{1/2}; \quad i = 2,3,\ldots,n
$$

$$
u_{ij} = \frac{a_{ij} - \sum_{k=1}^{i-1} u_{ki} u_{kj}}{u_{ii}}; \quad i = 2,3,\ldots,n; \ j = i+1,\ldots,n
$$

$$
u_{ij} = 0; \quad i > j
$$
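A direct transcription of these element formulas, as a sketch in plain Python (the function name is my own; no check is made that [A] really is symmetric positive definite, which the method requires):

```python
import math

def cholesky_upper(A):
    """Cholesky decomposition A = U^T U for a symmetric
    positive-definite matrix A, returning the upper triangular U.

    Sketch of the element formulas above; math.sqrt will raise an
    error if A is not positive definite.
    """
    n = len(A)
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # diagonal term: u_ii = (a_ii - sum_{k<i} u_ki^2)^(1/2)
        s = A[i][i] - sum(U[k][i] ** 2 for k in range(i))
        U[i][i] = math.sqrt(s)
        # off-diagonal terms of row i: u_ij = (a_ij - sum u_ki u_kj) / u_ii
        for j in range(i + 1, n):
            U[i][j] = (A[i][j] - sum(U[k][i] * U[k][j] for k in range(i))) / U[i][i]
    return U
```

Because only [U] is stored, Cholesky needs roughly half the storage and half the arithmetic of a general LU decomposition.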
III.4. Inverse of a symmetric matrix
If a matrix [A] is square and nonsingular, there is another matrix [A]^{-1}, called the inverse of [A], such that:

$$[A][A]^{-1} = [A]^{-1}[A] = [I] \quad \text{(the identity matrix)}$$

To compute the inverse matrix, the first column of [A]^{-1} is obtained by solving the problem (for a
3×3 matrix):

$$
[A]\{x\} = \{b\} = \begin{Bmatrix} 1 \\ 0 \\ 0 \end{Bmatrix}; \qquad
\text{second column: } [A]\{x\} = \{b\} = \begin{Bmatrix} 0 \\ 1 \\ 0 \end{Bmatrix}; \qquad
\text{third column: } [A]\{x\} = \{b\} = \begin{Bmatrix} 0 \\ 0 \\ 1 \end{Bmatrix}
$$

The best way to implement such a calculation is to use LU decomposition: [A] is decomposed
once, and only the forward and back substitutions are repeated for each right-hand side.
In engineering, the inverse matrix is of particular interest, since its elements represent the
response of a single part of the system to a unit stimulus applied to any other part of the system.
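The column-by-column procedure can be sketched as follows, assuming a Crout decomposition without pivoting (all names here are my own; the point is that [A] is decomposed once and only the substitutions are repeated per column):

```python
import numpy as np

def inverse_via_lu(A):
    """Invert a square matrix by Crout LU decomposition, solving one
    unit-vector system per column of the inverse (sketch, no pivoting)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # Crout decomposition: A = L @ U, unit diagonal on U (done once)
    L, U = np.zeros((n, n)), np.eye(n)
    for j in range(n):
        for i in range(j, n):
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        for i in range(j + 1, n):
            U[j, i] = (A[j, i] - L[j, :j] @ U[:j, i]) / L[j, j]
    inv = np.zeros((n, n))
    for c in range(n):
        e = np.zeros(n)
        e[c] = 1.0                          # unit stimulus in component c
        z = np.zeros(n)
        for i in range(n):                  # forward substitution
            z[i] = (e[i] - L[i, :i] @ z[:i]) / L[i, i]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):      # back substitution
            x[i] = z[i] - U[i, i + 1:] @ x[i + 1:]
        inv[:, c] = x                       # c-th column of A^{-1}
    return inv
```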
III.5. Matrix condition number
III.5.1. Vector and matrix norms
A norm is a real-valued function that provides a measure of the size or length of multicomponent
mathematical entities:
For a vector

$$
\{x\} = \begin{Bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{Bmatrix}
$$

the Euclidean norm of this vector is defined as:

$$
\|x\|_{2} = \left( x_1^2 + x_2^2 + \cdots + x_n^2 \right)^{1/2}
$$

In general, the L_p norm of a vector {x} is defined as:

$$
L_p = \left( \sum_{i=1}^{n} |x_i|^{p} \right)^{1/p}
$$
For a matrix, the first and infinity norms are defined as:

$$
\|A\|_{1} = \max_{1 \le j \le n} \sum_{i=1}^{n} |a_{ij}| \quad \text{(maximum column sum)}
$$

$$
\|A\|_{\infty} = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}| \quad \text{(maximum row sum)}
$$
III.5.2. Matrix condition number
The matrix condition number is defined as:
Note
If the value of p is increased to infinity in the above expression, the value of the L_∞ norm
tends to the value of the largest component of {x}:

$$
L_{\infty} = \max_{1 \le i \le n} |x_i|
$$


$$
\mathrm{Cond}[A] = \|A\| \cdot \|A^{-1}\|
$$

For a matrix [A], we have that: Cond[A] ≥ 1,
and

$$
\frac{\|\Delta x\|}{\|x\|} \le \mathrm{Cond}[A] \, \frac{\|\Delta A\|}{\|A\|}
$$

Therefore, the error on the solution can be as large as the relative error on the norm of [A]
multiplied by the condition number.
If the precision on [A] is t digits (10^{-t}) and Cond[A] = 10^{c}, the solution {x} may be valid to only
t − c digits (10^{c−t}).
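As a quick illustration, the infinity-norm condition number can be computed directly from the definitions above (a Python/NumPy sketch with a name of my own choosing; NumPy's built-in `np.linalg.cond(A, np.inf)` computes the same quantity):

```python
import numpy as np

def cond_inf(A):
    """Condition number based on the infinity (maximum row sum) norm:
    Cond[A] = ||A||_inf * ||A^-1||_inf (sketch using NumPy's inverse)."""
    A = np.asarray(A, dtype=float)
    norm_A = np.max(np.sum(np.abs(A), axis=1))               # ||A||_inf
    norm_Ainv = np.max(np.sum(np.abs(np.linalg.inv(A)), axis=1))
    return norm_A * norm_Ainv
```

For a well-conditioned matrix such as the identity the result is exactly 1, the lower bound stated above; a large value warns that few digits of the computed solution can be trusted.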
III.6. Jacobi iteration Method
The Jacobi method is an iterative method to solve systems of linear algebraic equations.
Consider the following system:

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1\\
&\ \ \vdots\\
a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \cdots + a_{nn}x_n &= b_n
\end{aligned}
$$
This system can be written in the following form:

$$
\begin{aligned}
x_1 &= \frac{1}{a_{11}} \left( b_1 - a_{12}x_2 - a_{13}x_3 - \cdots - a_{1n}x_n \right)\\
&\ \ \vdots\\
x_n &= \frac{1}{a_{nn}} \left( b_n - a_{n1}x_1 - a_{n2}x_2 - \cdots - a_{n,n-1}x_{n-1} \right)
\end{aligned}
$$
The general formulation is:

$$
x_i = \frac{1}{a_{ii}} \left( b_i - \sum_{\substack{j=1 \\ j \ne i}}^{n} a_{ij} x_j \right); \quad i = 1,2,\ldots,n
$$
Here we start with an initial guess for x_1, x_2, …, x_n, and we compute the new values for the next
iteration. If no good initial guess is available, each component can be assumed to be zero.
We generate the solution at the next iteration using the following expression:
$$
x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{\substack{j=1 \\ j \ne i}}^{n} a_{ij} x_j^{(k)} \right); \quad i = 1,2,\ldots,n; \quad k = 1,2,\ldots \text{ until convergence}
$$
The calculation must be stopped when:

$$
\left| \frac{x_i^{(k+1)} - x_i^{(k)}}{x_i^{(k+1)}} \right| \le \varepsilon
$$

where ε is the desired precision.
It is possible to show that a sufficient condition for the convergence of the Jacobi method is:

$$
|a_{ii}| > \sum_{\substack{j=1 \\ j \ne i}}^{n} |a_{ij}|
$$

i.e., the system is diagonally dominant.
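Putting the iteration formula, the stopping test, and a zero initial guess together gives a compact Jacobi solver; a minimal Python/NumPy sketch (the function name, tolerance, and iteration cap are my own choices, and the stopping test is a vectorized variant of the per-component criterion above):

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration starting from the zero vector (sketch).

    Every component of the new iterate uses only values from the
    previous iteration, so two full vectors are kept in storage.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            # all off-diagonal terms use the PREVIOUS iterate x
            s = A[i, :] @ x - A[i, i] * x[i]
            x_new[i] = (b[i] - s) / A[i, i]
        # relative stopping criterion (vectorized form of the test above)
        if np.max(np.abs(x_new - x)) <= tol * np.max(np.abs(x_new)):
            return x_new
        x = x_new
    return x
```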
III.7. Gauss-Seidel iteration Method
It can be seen that, in the Jacobi iteration method, all the new values are computed using the
values from the previous iteration. This implies that both the present and the previous sets of
values have to be stored. The Gauss-Seidel method improves the storage requirement as well
as the convergence.
In the Gauss-Seidel method, the values x_1^{(k+1)}, x_2^{(k+1)}, …, x_{i-1}^{(k+1)} computed in the current iteration, as
well as x_{i+1}^{(k)}, x_{i+2}^{(k)}, …, x_n^{(k)}, are used in finding the value x_i^{(k+1)}. This implies that always the
most recent approximations are used during the computation. The general expression is:

$$
x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i
- \underbrace{\sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)}}_{NEW}
- \underbrace{\sum_{j=i+1}^{n} a_{ij} x_j^{(k)}}_{OLD} \right);
\quad i = 1,2,\ldots,n; \quad k = 1,2,3,\ldots
$$
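Because each x_i is overwritten as soon as it is computed, Gauss-Seidel needs only one solution vector in storage. A minimal Python/NumPy sketch (function name, tolerance, and iteration cap are my own choices):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration: updates x in place, so each component
    immediately uses the most recent values (sketch)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds NEW values, x[i+1:] still holds OLD ones
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) <= tol * np.max(np.abs(x)):
            break
    return x
```

Compared with the Jacobi sketch, the only change is that the sweep reads from the same vector it writes to, which is what brings both the storage saving and the faster convergence mentioned above.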
Note
- The Gauss-Seidel method will converge to the correct solution irrespective of the initial
estimate, if the system of equations is diagonally dominant. But, in many cases, the solution
will converge even if the system is only weakly diagonally dominant.
Example
Find the solution of the following equations using the Gauss-Seidel iteration method:

$$
\begin{aligned}
5x_1 + x_2 + 2x_3 &= 1\\
2x_1 + 6x_2 + 3x_3 &= 2\\
2x_1 + 2x_2 + 7x_3 &= 3
\end{aligned}
$$
III.7.1. Improvement of convergence using relaxation
Relaxation is used to enhance convergence; the new value is written in the form:

$$
x_i^{NEW} = \lambda \, x_i^{NEW} + (1 - \lambda) \, x_i^{OLD}
$$

and usually 0 < λ < 2.
If 0 < λ < 1, we are using under-relaxation, used to make a non-convergent system converge
or to damp the oscillations.
If 1 < λ < 2, we are using over-relaxation, to accelerate the convergence of an already
convergent system.
The choice of λ depends on the problem to be solved.
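Applied inside a Gauss-Seidel sweep, this weighting gives the classical successive over-relaxation (SOR) iteration; a minimal Python/NumPy sketch, where `lam` plays the role of the relaxation factor λ (names and defaults are my own choices):

```python
import numpy as np

def sor(A, b, lam=1.25, tol=1e-10, max_iter=500):
    """Gauss-Seidel with relaxation (SOR). lam is the relaxation factor:
    0 < lam < 1 under-relaxes, 1 < lam < 2 over-relaxes (sketch)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # plain Gauss-Seidel update for component i ...
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            # ... blended with the old value: x = lam*NEW + (1-lam)*OLD
            x[i] = lam * gs + (1.0 - lam) * x[i]
        if np.max(np.abs(x - x_old)) <= tol * max(np.max(np.abs(x)), 1e-30):
            break
    return x
```

With lam = 1 this reduces exactly to Gauss-Seidel, which makes it easy to experiment with the choice of λ on a given problem.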
III.8. Choice of the method
1- If the equations are to be solved for different right-hand-side vectors, a direct method, like
LU decomposition, is preferred.
2- The Gauss-Seidel method will give an accurate solution even when the number of
equations reaches several thousand (if the system is diagonally dominant). It is usually twice
as fast as the Jacobi method.