
Chapter Five: Systems of Linear Algebraic Equations


Chapter Five

SYSTEMS OF LINEAR ALGEBRAIC EQUATIONS

file: matrix.ppt p. 1
System of Linear Equations
• We have focused our last lectures on
finding a value of x that satisfied a single
equation
f(x) = 0
• Now we will deal with the case of
determining the values of x1, x2, ..., xn that
simultaneously satisfy a set of equations

file: matrix.ppt p. 2
System of Linear Equations
• Simultaneous equations
f1(x1, x2, ..., xn) = 0
f2(x1, x2, ..., xn) = 0
...
fn(x1, x2, ..., xn) = 0
• Methods will be for linear equations
a11 x1 + a12 x2 + ... + a1n xn = c1
a21 x1 + a22 x2 + ... + a2n xn = c2
...
an1 x1 + an2 x2 + ... + ann xn = cn


file: matrix.ppt p. 3
Mathematical Background
Matrix Notation

      [ a11  a12  a13  ...  a1n ]
[A] = [ a21  a22  a23  ...  a2n ]
      [  .    .    .         .  ]
      [ am1  am2  am3  ...  amn ]

• a horizontal set of elements is called a row


• a vertical set is called a column
• first subscript refers to the row number
• second subscript refers to column number
file: matrix.ppt p. 4
      [ a11  a12  a13  ...  a1n ]
[A] = [ a21  a22  a23  ...  a2n ]
      [  .    .    .         .  ]
      [ am1  am2  am3  ...  amn ]

note
This matrix has m rows and n columns.
It has the dimensions m by n (m x n)
file: matrix.ppt p. 5
Note the consistent scheme with subscripts
denoting row, column (e.g. a23 lies in row 2, column 3)

      [ a11  a12  a13  ...  a1n ]
[A] = [ a21  a22  a23  ...  a2n ]
      [  .    .    .         .  ]
      [ am1  am2  am3  ...  amn ]
file: matrix.ppt p. 6
Row vector: m=1
[B] = [ b1  b2  ...  bn ]

Column vector: n=1 Square matrix: m = n

      [ c1 ]                 [ a11  a12  a13 ]
      [ c2 ]           [A] = [ a21  a22  a23 ]
[C] = [ .  ]                 [ a31  a32  a33 ]
      [ .  ]
      [ cm ]

file: matrix.ppt p. 7
The diagonal consists of the elements a11, a22, a33:

      [ a11  a12  a13 ]
[A] = [ a21  a22  a23 ]
      [ a31  a32  a33 ]

• Symmetric matrix
• Diagonal matrix
• Identity matrix
• Upper triangular matrix
• Lower triangular matrix
• Banded matrix
file: matrix.ppt p. 8
Symmetric Matrix

aij = aji for all i’s and j’s

5 1 2 
 A  1 3 7 Does a23 = a32 ?
2 7 8 
Yes. Check the other elements
on your own.

file: matrix.ppt p. 9
Diagonal Matrix

A square matrix where all elements off
the main diagonal are zero

      [ a11   0    0    0  ]
[A] = [  0   a22   0    0  ]
      [  0    0   a33   0  ]
      [  0    0    0   a44 ]

file: matrix.ppt p. 10
Identity Matrix

A diagonal matrix where all elements on
the main diagonal are equal to 1
1 0 0 0
0 1 0 0
 A   
0 0 1 0
0 0 0 1

The symbol [I] is used to denote the identify matrix.

file: matrix.ppt p. 11
Upper Triangular Matrix

Elements below the main diagonal are zero
      [ a11  a12  a13 ]
[A] = [  0   a22  a23 ]
      [  0    0   a33 ]

file: matrix.ppt p. 12
Lower Triangular Matrix

All elements above the main diagonal are zero

5 0 0
 A  1 3 0
2 7 8

file: matrix.ppt p. 13
Banded Matrix

All elements are zero with the exception
of a band centered on the main diagonal
      [ a11  a12   0    0  ]
[A] = [ a21  a22  a23   0  ]
      [  0   a32  a33  a34 ]
      [  0    0   a43  a44 ]

file: matrix.ppt p. 14
Matrix Operating Rules
• Addition/subtraction
add/subtract corresponding terms
aij + bij = cij
• Addition/subtraction are commutative
[A] + [B] = [B] + [A]
• Addition/subtraction are associative
[A] + ([B]+[C]) = ([A] +[B]) + [C]

file: matrix.ppt p. 15
Matrix Operating Rules
• Multiplication of a matrix [A] by a scalar g
is obtained by multiplying every element of
[A] by g
             [ g a11  g a12  ...  g a1n ]
[B] = g[A] = [ g a21  g a22  ...  g a2n ]
             [   .      .           .   ]
             [ g am1  g am2  ...  g amn ]

file: matrix.ppt p. 16
Matrix Operating Rules
• The product of two matrices is represented as
[C] = [A][B]

where n = column dimension of [A]
        = row dimension of [B]

cij = Σ (k = 1 to n) aik bkj

file: matrix.ppt p. 17
Simple way to check whether
matrix multiplication is possible:

[A] m x n  [B] n x k  =  [C] m x k

The interior dimensions (n) must be equal; the exterior
dimensions (m, k) give the dimension of the resulting matrix.

file: matrix.ppt p. 18
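The multiplication rule lends itself to a short sketch (not part of the original deck); the function name and example matrices below are illustrative only:

```python
# Implements c_ij = sum over k of (a_ik * b_kj), with the
# interior-dimension check described on the previous slide.
def matmul(A, B):
    m, n = len(A), len(A[0])              # [A] is m x n
    if len(B) != n:                       # interior dimensions must be equal
        raise ValueError("columns of [A] must equal rows of [B]")
    k = len(B[0])                         # [B] is n x k, so [C] is m x k
    return [[sum(A[i][p] * B[p][j] for p in range(n)) for j in range(k)]
            for i in range(m)]

# (2 x 3)(3 x 2) -> 2 x 2
print(matmul([[1, 2, 3], [4, 5, 6]], [[1, 0], [0, 1], [1, 1]]))
```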
Matrix multiplication
• If the dimensions are suitable, matrix
multiplication is associative
([A][B])[C] = [A]([B][C])
• If the dimensions are suitable, matrix
multiplication is distributive
([A] + [B])[C] = [A][C] + [B][C]
• Multiplication is generally not commutative
[A][B] is not equal to [B][A]

file: matrix.ppt p. 19
Inverse of [A]

 A A 1
  A
1
 A   I 

file: matrix.ppt p. 20
Inverse of [A]
 A A 1
  A
1
 A  I

Transpose of [A]
 a11 a21 . . . am1 
a a22 . . . am2 
 12 
 . . . . . . 
 A  
t

. . . . . . 
 
 . . . . . . 
 
 a1n a2 n . . . amn 

file: matrix.ppt p. 21
Determinants

Denoted as det A or |A|

For a 2 x 2 matrix:

| a  b |
| c  d |  =  ad - bc

file: matrix.ppt p. 22
    | a11  a12  a13 |
D = | a21  a22  a23 |
    | a31  a32  a33 |

        | a22  a23 |        | a21  a23 |        | a21  a22 |
  = a11 | a32  a33 |  - a12 | a31  a33 |  + a13 | a31  a32 |

file: matrix.ppt p. 23
EXAMPLE
Calculate the determinant of the following 3x3 matrix.
First, calculate it using the 1st row (the way you probably
have done it all along).

1 7 9 
 4 3 2
 
6 1 5
file: matrix.ppt p. 24
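A sketch of that expansion in Python. Caveat: any minus signs inside the slide's matrix were lost in conversion, so the all-positive entries below are an assumption for illustration:

```python
def det2(a, b, c, d):
    return a * d - b * c          # 2 x 2 determinant: ad - bc

def det3(A):
    # cofactor expansion along the first row, as on the previous slide
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = A
    return (a11 * det2(a22, a23, a32, a33)
            - a12 * det2(a21, a23, a31, a33)
            + a13 * det2(a21, a22, a31, a32))

# assumed entries; gives 1(13) - 7(8) + 9(-14) = -169
print(det3([[1, 7, 9], [4, 3, 2], [6, 1, 5]]))
```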
Properties of Determinants
• det A = det AT
• If all entries of any row or column are zero,
then det A = 0
• If two rows or two columns are identical,
then det A = 0

file: matrix.ppt p. 25
How to represent a system of
linear equations as a matrix

[A]{X} = {C}

where {X} and {C} are both column vectors

file: matrix.ppt p. 26
0.3 x1 + 0.52 x2 + x3 = -0.01
0.5 x1 + x2 + 1.9 x3 = 0.67
0.1 x1 + 0.3 x2 + 0.5 x3 = -0.44

[A]{X} = {C}

[ 0.3  0.52  1.0 ] { x1 }   { -0.01 }
[ 0.5  1.0   1.9 ] { x2 } = {  0.67 }
[ 0.1  0.3   0.5 ] { x3 }   { -0.44 }

file: matrix.ppt p. 27
Practical application
• Consider a problem in structural engineering
• Find the forces and reactions associated with
a statically determinate truss

[Figure: truss with members at 30, 60, and 90 degrees.
The hinge transmits both vertical and horizontal forces
at the surface; the roller transmits vertical forces.]

file: matrix.ppt p. 28
First label the nodes

[Figure: free body diagram with nodes labeled 1, 2, 3
and member angles of 30, 60, and 90 degrees]

file: matrix.ppt p. 29
Determine where you are
evaluating tension/compression

[Figure: free body diagram with member forces F1, F2, F3
drawn between nodes 1, 2, and 3]

file: matrix.ppt p. 30
Label forces at the hinge and roller

[Figure: free body diagram with the 1000 kg load at node 1,
member forces F1, F2, F3, hinge reactions H2 and V2 at
node 2, and roller reaction V3 at node 3]

file: matrix.ppt p. 31
[Figure: the same free body diagram with the 1000 kg load;
equilibrium applies at each node]

ΣFH = 0
ΣFV = 0

file: matrix.ppt p. 32
Node 1

[Figure: node 1 with applied reactions F1,V and F1,H and
member forces F1 (at 30 degrees) and F3 (at 60 degrees)]

ΣFH = 0 = -F1 cos 30 + F3 cos 60 + F1,H

ΣFV = 0 = -F1 sin 30 - F3 sin 60 + F1,V

-F1 cos 30 + F3 cos 60 = 0
-F1 sin 30 - F3 sin 60 = -1000
file: matrix.ppt p. 33
Node 2

[Figure: node 2 with member forces F1 (at 30 degrees) and F2,
and hinge reactions H2 and V2]

ΣFH = 0 = H2 + F2 + F1 cos 30

ΣFV = 0 = V2 + F1 sin 30
file: matrix.ppt p. 34
Node 3

[Figure: node 3 with member forces F2 and F3 (at 60 degrees)
and roller reaction V3]

ΣFH = 0 = -F2 - F3 cos 60

ΣFV = 0 = F3 sin 60 + V3
file: matrix.ppt p. 35
-F1 cos 30 + F3 cos 60 = 0
-F1 sin 30 - F3 sin 60 = -1000
H2 + F2 + F1 cos 30 = 0
V2 + F1 sin 30 = 0
-F2 - F3 cos 60 = 0
F3 sin 60 + V3 = 0

SIX EQUATIONS, SIX UNKNOWNS

file: matrix.ppt p. 36
Do some bookkeeping

       F1       F2     F3     H2   V2   V3 |   rhs
1   -cos30      0     cos60   0    0    0  |     0
2   -sin30      0    -sin60   0    0    0  | -1000
3    cos30      1      0      1    0    0  |     0
4    sin30      0      0      0    1    0  |     0
5      0       -1    -cos60   0    0    0  |     0
6      0        0     sin60   0    0    1  |     0

file: matrix.ppt p. 37
This is the basis for your matrices and the equation
[A]{x}={c}

[ -0.866   0     0.5    0   0   0 ] { F1 }   {     0 }
[ -0.5     0    -0.866  0   0   0 ] { F2 }   { -1000 }
[  0.866   1     0      1   0   0 ] { F3 } = {     0 }
[  0.5     0     0      0   1   0 ] { H2 }   {     0 }
[  0      -1    -0.5    0   0   0 ] { V2 }   {     0 }
[  0       0     0.866  0   0   1 ] { V3 }   {     0 }

file: matrix.ppt p. 38
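As a sketch, the system can be solved directly with NumPy. The signs below follow the reconstructed bookkeeping table and should be treated as an assumption, since the original slide graphics did not survive:

```python
import numpy as np

c30, s30 = np.cos(np.pi / 6), np.sin(np.pi / 6)   # cos 30, sin 30
c60, s60 = np.cos(np.pi / 3), np.sin(np.pi / 3)   # cos 60, sin 60

A = np.array([[-c30, 0,  c60, 0, 0, 0],    # node 1, horizontal
              [-s30, 0, -s60, 0, 0, 0],    # node 1, vertical
              [ c30, 1,  0,   1, 0, 0],    # node 2, horizontal
              [ s30, 0,  0,   0, 1, 0],    # node 2, vertical
              [ 0,  -1, -c60, 0, 0, 0],    # node 3, horizontal
              [ 0,   0,  s60, 0, 0, 1]])   # node 3, vertical
c = np.array([0, -1000, 0, 0, 0, 0])

F1, F2, F3, H2, V2, V3 = np.linalg.solve(A, c)
print(F1, F2, F3, H2, V2, V3)
```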
Matrix Methods
• Gauss elimination
• Matrix inversion
• Gauss Seidel
• LU decomposition
      [ a11  a12  a13 ]
[A] = [ a21  a22  a23 ]
      [ a31  a32  a33 ]

file: matrix.ppt p. 39
Graphical Method
2 equations, 2 unknowns
a11 x1 + a12 x2 = c1
a21 x1 + a22 x2 = c2

Solving each for x2 gives straight lines in the (x1, x2) plane;
their intersection is the solution:

x2 = -(a11/a12) x1 + c1/a12
x2 = -(a21/a22) x1 + c2/a22

file: matrix.ppt p. 40
3 x1 + 2 x2 = 18
-x1 + 2 x2 = 2

x2 = -(3/2) x1 + 9
x2 = (1/2) x1 + 1

[Figure: the two lines plotted against x1 and x2 intersect at (4, 3)]

Check: 3(4) + 2(3) = 12 + 6 = 18

file: matrix.ppt p. 41
Special Cases
• No solution
• Infinite solutions
• Ill-conditioned

[Figure: two nearly parallel lines; the intersection (x1, x2) is poorly defined]

file: matrix.ppt p. 42
a) No solution - the lines have the same slope

b) Infinite solutions - the two equations describe the same line:
-1/2 x1 + x2 = 1
-x1 + 2 x2 = 2

c) Ill-conditioned - the slopes are so close that the point of
intersection is difficult to detect visually
file: matrix.ppt p. 43
Let's consider how we know if the system is
ill-conditioned. Start by considering systems
where the slopes are identical.
• If the determinant is zero, the slopes are
identical

a11 x1 + a12 x2 = c1
a21 x1 + a22 x2 = c2

Rearrange these equations so that we have an
alternative version in the form of a straight
line: i.e. x2 = (slope) x1 + intercept
file: matrix.ppt p. 44
x2 = -(a11/a12) x1 + c1/a12
x2 = -(a21/a22) x1 + c2/a22

If the slopes are nearly equal (ill-conditioned):

a11/a12 ≈ a21/a22

a11 a22 ≈ a21 a12

a11 a22 - a21 a12 ≈ 0        Isn't this the determinant?

| a11  a12 |
| a21  a22 |  =  a11 a22 - a21 a12  =  det A

file: matrix.ppt p. 45
If the determinant is zero, the slopes are equal.
This can mean:
- no solution
- infinite number of solutions

If the determinant is close to zero, the system is
ill-conditioned.

So it seems that we should check the determinant of a
system before any further calculations are done.

Let's try an example.

file: matrix.ppt p. 46
Example

Determine whether the following matrix is ill-conditioned.

37.2 4.7  x1   22 
19.2 2.5  x    12
  2   

file: matrix.ppt p. 47
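A sketch of the determinant check for this example: det A = 37.2(2.5) - 4.7(19.2) = 2.76, which looks harmless until each row is scaled so its largest coefficient is 1:

```python
A = [[37.2, 4.7],
     [19.2, 2.5]]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]       # ad - bc = 2.76
scaled = [[v / max(abs(x) for x in row) for v in row] for row in A]
det_s = scaled[0][0] * scaled[1][1] - scaled[0][1] * scaled[1][0]
print(det, det_s)   # det_s is nearly zero -> the system is ill-conditioned
```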
Cramer’s Rule
• Not efficient for solving large numbers of
linear equations
• Useful for explaining some inherent
problems associated with solving linear
equations.

[ a11  a12  a13 ] { x1 }   { b1 }
[ a21  a22  a23 ] { x2 } = { b2 }      [A]{x} = {b}
[ a31  a32  a33 ] { x3 }   { b3 }

file: matrix.ppt p. 48
Cramer’s Rule
     | b1  a12  a13 |                | a11  b1  a13 |
x1 = | b2  a22  a23 | / |A|     x2 = | a21  b2  a23 | / |A|
     | b3  a32  a33 |                | a31  b3  a33 |

to solve for xi - place {b} in the ith column
file: matrix.ppt p. 49
Cramer’s Rule
     | b1  a12  a13 |                | a11  b1  a13 |
x1 = | b2  a22  a23 | / |A|     x2 = | a21  b2  a23 | / |A|
     | b3  a32  a33 |                | a31  b3  a33 |

     | a11  a12  b1 |
x3 = | a21  a22  b2 | / |A|     to solve for xi - place {b} in the ith column
     | a31  a32  b3 |

file: matrix.ppt p. 50
Cramer’s Rule
     | b1  a12  a13 |                | a11  b1  a13 |
x1 = | b2  a22  a23 | / |A|     x2 = | a21  b2  a23 | / |A|
     | b3  a32  a33 |                | a31  b3  a33 |

     | a11  a12  b1 |
x3 = | a21  a22  b2 | / |A|     to solve for xi - place {b} in the ith column
     | a31  a32  b3 |

file: matrix.ppt p. 51
EXAMPLE
Use of Cramer’s Rule
2 x1 + 3 x2 = 5
x1 + x2 = 5

[ 2  3 ] { x1 }   { 5 }
[ 1  1 ] { x2 } = { 5 }

file: matrix.ppt p. 52
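A sketch of Cramer's rule for this 2 x 2 example. The operators in the source were garbled, so all-plus signs are assumed here:

```python
A = [[2, 3],
     [1, 1]]
b = [5, 5]

D  = A[0][0] * A[1][1] - A[0][1] * A[1][0]     # determinant of [A]
x1 = (b[0] * A[1][1] - A[0][1] * b[1]) / D     # {b} placed in column 1
x2 = (A[0][0] * b[1] - b[0] * A[1][0]) / D     # {b} placed in column 2
print(x1, x2)                                  # 10.0, -5.0 for these values
```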
Elimination of Unknowns
( algebraic approach)
a11 x1 + a12 x2 = c1
a21 x1 + a22 x2 = c2

Multiply the first equation by a21 and the second by a11:

a21 a11 x1 + a21 a12 x2 = a21 c1
a11 a21 x1 + a11 a22 x2 = a11 c2        SUBTRACT

file: matrix.ppt p. 53
Elimination of Unknowns
( algebraic approach)
a21 a11 x1 + a21 a12 x2 = a21 c1
a11 a21 x1 + a11 a22 x2 = a11 c2        SUBTRACT

a12 a21 x2 - a22 a11 x2 = a21 c1 - a11 c2

x2 = (a21 c1 - a11 c2) / (a12 a21 - a22 a11)     NOTE: same result as Cramer's Rule

x1 = (a22 c1 - a12 c2) / (a11 a22 - a12 a21)
file: matrix.ppt p. 54
Gauss Elimination
• One of the earliest methods developed for
solving simultaneous equations
• Important algorithm in use today
• Involves combining equations in order to
eliminate unknowns

file: matrix.ppt p. 55
Blind (Naive) Gauss Elimination
• Technique for larger matrices
• Same principles of elimination
- manipulate equations to eliminate an unknown
from an equation
- solve directly then back-substitute into one of
the original equations

file: matrix.ppt p. 56
Two Phases of Gauss Elimination

[ a11  a12  a13 | c1 ]
[ a21  a22  a23 | c2 ]        Forward Elimination
[ a31  a32  a33 | c3 ]

[ a11  a12   a13   | c1   ]       Note: the prime indicates the
[ 0    a'22  a'23  | c'2  ]       number of times the element has
[ 0    0     a''33 | c''3 ]       changed from the original value.
file: matrix.ppt p. 57
Two Phases of Gauss Elimination

[ a11  a12   a13   | c1   ]
[ 0    a'22  a'23  | c'2  ]       Back substitution
[ 0    0     a''33 | c''3 ]

x3 = c''3 / a''33

x2 = (c'2 - a'23 x3) / a'22

x1 = (c1 - a12 x2 - a13 x3) / a11
file: matrix.ppt p. 58
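The two phases translate directly into a sketch of naive Gauss elimination; there is no pivoting, so a zero pivot breaks it (see the pitfalls slides later):

```python
def gauss_naive(A, c):
    n = len(c)
    # phase 1: forward elimination to an upper triangular system
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]          # elimination factor
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            c[i] -= f * c[k]
    # phase 2: back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / A[i][i]
    return x
```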
Example

file: matrix.ppt p. 59
solution
• The first part of the procedure is forward
elimination.
Multiply the first Eq. by (0.1)/3 and subtract the
result from the second Eq. to give:

• Then multiply the first Eq. by (0.3)/3 and
subtract it from the third Eq. to eliminate x1.
After these operations, the set of equations is
file: matrix.ppt p. 60
To complete the forward elimination, x2 must be removed from
Eq. six. To accomplish this, multiply Eq. five by
-0.190000/7.00333 and subtract the result from
Eq. six. This eliminates x2 from the third equation and reduces
the system to an upper triangular form, as in

file: matrix.ppt p. 61
Solution cont.
• We can now solve these equations by back
substitution
First, Eq. nine can be solved for

This result can be back-substituted into Eq. eight

file: matrix.ppt p. 62
Cont…
• Finally, x3 and x2 can be substituted into the
first Eq.:

file: matrix.ppt p. 63
Pitfalls of the Elimination
Method
• Division by zero
• Round off errors
magnitude of the pivot element is small compared to other elements
• Ill conditioned systems

file: matrix.ppt p. 64
Division by Zero
• When we normalize i.e. a12/a11 we need to
make sure we are not dividing by zero
• This may also happen if the coefficient is
very close to zero
2 x2 + 3 x3 = 8
4 x1 + 6 x2 + 7 x3 = -3
2 x1 + x2 + 6 x3 = 5

file: matrix.ppt p. 65
Techniques for Improving the
Solution
• Use of more significant figures
• Pivoting
• Scaling
[ a11  a12  a13 ] { x1 }   { b1 }
[ a21  a22  a23 ] { x2 } = { b2 }      [A]{x} = {b}
[ a31  a32  a33 ] { x3 }   { b3 }

file: matrix.ppt p. 66
Use of more significant figures

• Simplest remedy for ill conditioning


• Extend precision

file: matrix.ppt p. 67
Pivoting

• Problems occur when the pivot element is
zero - division by zero
• Problems also occur when the pivot element
is smaller in magnitude compared to other
elements (i.e. round-off errors)
• Prior to normalizing, determine the largest
available coefficient

file: matrix.ppt p. 68
Pivoting
• Partial pivoting
rows are switched so that the largest element is
the pivot element
• Complete pivoting
columns as well as rows are searched for the
largest element and switched
rarely used because switching columns changes
the order of the x's, adding unjustified
complexity to the computer program
file: matrix.ppt p. 69
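A sketch of the partial-pivoting step; calling pivot(A, c, k) before eliminating column k in the gauss_naive sketch above turns it into Gauss elimination with partial pivoting:

```python
def pivot(A, c, k):
    # find the row at or below k with the largest available coefficient
    n = len(c)
    p = max(range(k, n), key=lambda i: abs(A[i][k]))
    if p != k:                     # switch rows so it becomes the pivot
        A[k], A[p] = A[p], A[k]
        c[k], c[p] = c[p], c[k]
```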
Division by Zero - Solution
Pivoting has been developed
to partially avoid these problems

2 x2 + 3 x3 = 8               4 x1 + 6 x2 + 7 x3 = -3
4 x1 + 6 x2 + 7 x3 = -3       2 x2 + 3 x3 = 8
2 x1 + x2 + 6 x3 = 5          2 x1 + x2 + 6 x3 = 5

file: matrix.ppt p. 70
Scaling

• Minimizes round-off errors for cases where some
of the equations in a system have much larger
coefficients than others
• In engineering practice, this is often due to the
widely different units used in the development of
the simultaneous equations
• As long as each equation is consistent, the system
will be technically correct and solvable

file: matrix.ppt p. 71
Scaling

Scale each equation by its largest coefficient:

2 x1 + 100,000 x2 = 100,000        0.00002 x1 + x2 = 1
x1 + x2 = 2                        x1 + x2 = 2

Pivot rows to put the greatest value on the diagonal:

x1 + x2 = 2
0.00002 x1 + x2 = 1

Without scaling and pivoting, round-off yields x1 = 0.00, x2 = 1.00;
with them, elimination gives the correct x1 = x2 = 1.

file: matrix.ppt p. 72
EXAMPLE
(solution in notes)

Use Gauss Elimination to solve the following set


of linear equations
3 x2 + 13 x3 = 50
2 x1 + 6 x2 + x3 = 45
4 x1 + 8 x3 = 4

file: matrix.ppt p. 73
SOLUTION
3 x2 + 13 x3 = 50
2 x1 + 6 x2 + x3 = 45
4 x1 + 8 x3 = 4

First write in matrix form, employing the shorthand
presented in class.

[ 0  3  13 | 50 ]      We will clearly run into
[ 2  6   1 | 45 ]      problems of division by zero.
[ 4  0   8 |  4 ]

Use partial pivoting

file: matrix.ppt p. 74
 0 3 13  50 Pivot with equation
 2 6 1 45 


 with largest an1
 4 0 8  4 

file: matrix.ppt p. 75
 0 3 13  50
 2 6 1  45 
 
 4 0 8  4 

 4 0 8  4 
 2 6 1  45
 
 0 3 13  50

file: matrix.ppt p. 76
 0 3 13  50
 2 6 1  45 
 
 4 0 8  4 

 4 0 8  4 
 2 6 1  45
 
 0 3 13  50

 4 0 8  4  Begin developing
 0 6 3  43  upper triangular matrix
 
 0 3 13  50

file: matrix.ppt p. 77
 4 0 8  4 
 0 6 3  43 
 
 0 3 13  50

 4 0 8  4 
 0 6 3  43 
 
 0 0 14 .5  28.5

28.5 43  3 1.966
x3   1.966 x2   8149
.
14 .5 6
4  8 1.966
x1   2 .931
4
CHECK
...end of
3 8149
.   13 1.966  50 okay
problem

file: matrix.ppt p. 78
GAUSS-JORDAN
• Variation of Gauss elimination
• primary motive for introducing this method is that
it provides a simple and convenient method for
computing the matrix inverse
• When an unknown is eliminated, it is eliminated
from all other equations, rather than just the
subsequent one

file: matrix.ppt p. 79
GAUSS-JORDAN
• All rows are normalized by dividing them by their
pivot elements
• Elimination step results in an identity matrix
rather than an UT matrix

      [ a11  a12  a13 ]        [ 1  0  0  0 ]
[A] = [  0   a22  a23 ]  [A] = [ 0  1  0  0 ]
      [  0    0   a33 ]        [ 0  0  1  0 ]
                               [ 0  0  0  1 ]
file: matrix.ppt p. 80
Graphical depiction of Gauss-Jordan
[ a11  a12  a13 | c1 ]       [ 1  0  0 | c1(n) ]
[ a21  a22  a23 | c2 ]  -->  [ 0  1  0 | c2(n) ]
[ a31  a32  a33 | c3 ]       [ 0  0  1 | c3(n) ]

x1 = c1(n)
x2 = c2(n)
x3 = c3(n)

file: matrix.ppt p. 81
Matrix Inversion
• [A] [A] -1 = [A]-1 [A] = I
• One application of the inverse is to solve
several systems differing only by {c}
[A]{x} = {c}
[A]-1[A] {x} = [A]-1{c}
[I]{x}={x}= [A]-1{c}
• One quick method to compute the inverse is
to augment [A] with [I] instead of {c}

file: matrix.ppt p. 82
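A sketch of that quick method: augment [A] with [I] and run Gauss-Jordan (no pivoting here, so nonzero pivots are assumed):

```python
def gauss_jordan_inverse(A):
    n = len(A)
    # build the augmented matrix [A | I]
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(A)]
    for k in range(n):
        piv = aug[k][k]                        # assumed nonzero
        aug[k] = [v / piv for v in aug[k]]     # normalize the pivot row
        for i in range(n):
            if i != k:                         # eliminate from ALL other rows
                f = aug[i][k]
                aug[i] = [v - f * p for v, p in zip(aug[i], aug[k])]
    return [row[n:] for row in aug]            # right half is now [A]^-1
```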
Graphical Depiction of the Gauss-Jordan
Method with Matrix Inversion
[A | I]

[ a11  a12  a13 | 1  0  0 ]              Note: the superscript "-1"
[ a21  a22  a23 | 0  1  0 ]              denotes that the original
[ a31  a32  a33 | 0  0  1 ]              values have been converted
                                         to the matrix inverse,
[ 1  0  0 | a11^-1  a12^-1  a13^-1 ]     not 1/aij
[ 0  1  0 | a21^-1  a22^-1  a23^-1 ]
[ 0  0  1 | a31^-1  a32^-1  a33^-1 ]

[I | A^-1]

file: matrix.ppt p. 83
Stimulus-Response
Computations
• Conservation Laws
mass
force
heat
momentum
• We considered the conservation of
force in the earlier example of a
truss
file: matrix.ppt p. 84
Stimulus-Response Computations
• [A]{x}={c}
• [interactions]{response}={stimuli}
• Superposition
if a system is subject to several different stimuli, the response
can be computed individually and the results summed to
obtain a total response
• Proportionality
multiplying the stimuli by a quantity results in the response
to those stimuli being multiplied by the same quantity
• These concepts are inherent in the scaling of terms during
the inversion of the matrix

file: matrix.ppt p. 85
Error Analysis and System
Condition
• Scale the matrix of coefficients, [A] so that the largest
element in each row is 1. If there are elements of [A]-1 that
are several orders of magnitude greater than one, it is
likely that the system is ill-conditioned.
• Multiply the inverse by the original coefficient matrix. If
the results are not close to the identity matrix, the system
is ill-conditioned.
• Invert the inverted matrix. If it is not close to the original
coefficient matrix, the system is ill-conditioned.

file: matrix.ppt p. 86
To further study the concepts of ill
conditioning, consider the norm and the
matrix condition number

• norm - provides a measure of the size or
length of vectors and matrices
• Cond [A] >> 1 suggests that the system is
ill-conditioned

file: matrix.ppt p. 87
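A sketch of both measures using NumPy's built-ins, applied to the earlier 2 x 2 example:

```python
import numpy as np

A = np.array([[37.2, 4.7],
              [19.2, 2.5]])
print(np.linalg.norm(A))   # Frobenius norm: size/length of the matrix
print(np.linalg.cond(A))   # cond [A] >> 1 suggests ill conditioning
```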
LU Decomposition Methods
Chapter 10
• Elimination methods
Gauss elimination
Gauss Jordan
LU Decomposition Methods

file: matrix.ppt p. 88
Naive LU Decomposition
• [A]{x}={c}
• Suppose this can be rearranged as an upper
triangular matrix with 1’s on the diagonal
• [U]{x}={d}
• [A]{x}-{c}=0 [U]{x}-{d}=0
• Assume that a lower triangular matrix exists
that has the property
[L]{[U]{x}-{d}}= [A]{x}-{c}
file: matrix.ppt p. 89
Naive LU Decomposition
• [L]{[U]{x}-{d}}= [A]{x}-{c}
• Then from the rules of matrix multiplication
• [L][U]=[A]
• [L]{d}={c}
• [L][U]=[A] is referred to as the LU
decomposition of [A]. After it is
accomplished, solutions can be obtained
very efficiently by a two-step substitution
procedure file: matrix.ppt p. 90
Consider how Gauss elimination can be
formulated as an LU decomposition

[U] is a direct product of the forward
elimination step if each row is scaled by
the diagonal:

      [ 1  a12  a13 ]
[U] = [ 0   1   a23 ]
      [ 0   0    1  ]

file: matrix.ppt p. 91
Although not as apparent, the matrix [L] is also
produced during the step. This can be readily
illustrated for a three-equation system
[ a11  a12  a13 ] { x1 }   { c1 }
[ a21  a22  a23 ] { x2 } = { c2 }
[ a31  a32  a33 ] { x3 }   { c3 }

The first step is to multiply row 1 by the factor

f21 = a21 / a11

Subtracting the result from the second row eliminates a21

file: matrix.ppt p. 92
[ a11  a12  a13 ] { x1 }   { c1 }
[ a21  a22  a23 ] { x2 } = { c2 }
[ a31  a32  a33 ] { x3 }   { c3 }

Similarly, row 1 is multiplied by

f31 = a31 / a11

The result is subtracted from the third row to eliminate a31.
The final step for a 3 x 3 system is to multiply the modified
second row by

f32 = a'32 / a'22

Subtract the result from the third row to eliminate a'32.
file: matrix.ppt p. 93
The values f21 , f31, f32 are in fact the elements
of an [L] matrix

1 0 0
 L   f 21 1 0

 f 31 f 32 1

CONSIDER HOW THIS RELATES TO THE


LU DECOMPOSITION METHOD TO SOLVE
FOR {X}

file: matrix.ppt p. 94
[A] {x} = {c}

decompose into [L][U]

solve [L] {d} = {c} for {d}

solve [U] {x} = {d} for {x}
file: matrix.ppt p. 95
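A sketch of this flow: the factors f_ij are retained in [L] while forward elimination builds [U] (this variant keeps 1's on the diagonal of [L], matching the [L] shown two slides back), then the two substitution passes recover {d} and {x}:

```python
def lu_decompose(A):
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = U[i][k] / U[k][k]     # same factor as Gauss elimination...
            L[i][k] = f               # ...but stored in [L]
            for j in range(k, n):
                U[i][j] -= f * U[k][j]
    return L, U

def lu_solve(L, U, c):
    n = len(c)
    d = [0.0] * n                     # forward: [L]{d} = {c}
    for i in range(n):
        d[i] = c[i] - sum(L[i][j] * d[j] for j in range(i))
    x = [0.0] * n                     # back: [U]{x} = {d}
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (d[i] - s) / U[i][i]
    return x
```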
Crout Decomposition
• Gauss elimination method involves two
major steps
forward elimination
back substitution
• Efforts in improvement focused on
development of improved elimination
methods
• One such method is Crout decomposition

file: matrix.ppt p. 96
Crout Decomposition
Represents an efficient algorithm for decomposing [A]
into [L] and [U]

[ l11  0    0   ] [ 1  u12  u13 ]   [ a11  a12  a13 ]
[ l21  l22  0   ] [ 0   1   u23 ] = [ a21  a22  a23 ]
[ l31  l32  l33 ] [ 0   0    1  ]   [ a31  a32  a33 ]

file: matrix.ppt p. 97
[ l11  0    0   ] [ 1  u12  u13 ]   [ a11  a12  a13 ]
[ l21  l22  0   ] [ 0   1   u23 ] = [ a21  a22  a23 ]
[ l31  l32  l33 ] [ 0   0    1  ]   [ a31  a32  a33 ]

Recall the rules of matrix multiplication.

The first step is to multiply the rows of [L] by the
first column of [U]:

a11 = l11 (1) + 0 (0) + 0 (0) = l11      Thus the first
a21 = l21                                column of [A]
a31 = l31                                is the first column of [L]

file: matrix.ppt p. 98
[ l11  0    0   ] [ 1  u12  u13 ]   [ a11  a12  a13 ]
[ l21  l22  0   ] [ 0   1   u23 ] = [ a21  a22  a23 ]
[ l31  l32  l33 ] [ 0   0    1  ]   [ a31  a32  a33 ]

Next we multiply the first row of [L] by the columns
of [U] to get

l11 = a11
l11 u12 = a12
l11 u13 = a13

file: matrix.ppt p. 99
l11 = a11
l11 u12 = a12
l11 u13 = a13

u12 = a12 / l11
u13 = a13 / l11

Once the first row of [U] is established,
the operation can be represented concisely:

u1j = a1j / l11      for j = 2, 3, ..., n

file: matrix.ppt p. 100


[Schematic depicting Crout Decomposition]

file: matrix.ppt p. 101


li1 = ai1                                        for i = 1, 2, ..., n

u1j = a1j / l11                                  for j = 2, 3, ..., n

For j = 2, 3, ..., n-1:

  lij = aij - Σ (k = 1 to j-1) lik ukj           for i = j, j+1, ..., n

  ujk = (ajk - Σ (i = 1 to j-1) lji uik) / ljj   for k = j+1, j+2, ..., n

lnn = ann - Σ (k = 1 to n-1) lnk ukn

file: matrix.ppt p. 102
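A sketch of the recurrences above as a Crout routine (general [L], unit diagonal on [U]); a single loop over j covers the first-column and first-row special cases as well:

```python
def crout(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):          # column j of [L]
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for k in range(j + 1, n):      # row j of [U]
            U[j][k] = (A[j][k]
                       - sum(L[j][i] * U[i][k] for i in range(j))) / L[j][j]
    return L, U
```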


The Substitution Step
• [L]{[U]{x}-{d}}= [A]{x}-{c}
• [L][U]=[A]
• [L]{d}={c}
• [U]{x}={d}
• Recall our earlier graphical depiction of the
LU decomposition method

file: matrix.ppt p. 103


[A] {x} = {c}

decompose into [L][U]

solve [L] {d} = {c} for {d}

solve [U] {x} = {d} for {x}
file: matrix.ppt p. 104
d1 = c1 / l11

di = (ci - Σ (j = 1 to i-1) lij dj) / lii      for i = 2, 3, ..., n

Back substitution: recall [U]{x} = {d}

xn = dn

xi = di - Σ (j = i+1 to n) uij xj              for i = n-1, n-2, ..., 1

file: matrix.ppt p. 105
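A sketch of the substitution step for the Crout factors, following the di and xi formulas above (the forward pass divides by lii, while the back pass needs no division because [U] has a unit diagonal):

```python
def crout_solve(L, U, c):
    n = len(c)
    d = [0.0] * n
    for i in range(n):                    # forward: [L]{d} = {c}
        s = sum(L[i][j] * d[j] for j in range(i))
        d[i] = (c[i] - s) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):        # back: [U]{x} = {d}
        x[i] = d[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))
    return x
```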


Gauss Seidel Method
• An iterative approach
• Continue until we converge within some pre-specified tolerance of
error
• Round off is no longer an issue, since you control the level of error
that is acceptable
• Fundamentally different from Gauss elimination; this is an
approximate, iterative method particularly good for large numbers of
equations

file: matrix.ppt p. 106


Gauss-Seidel Method
• If the diagonal elements are all nonzero, the first equation can be solved for
x1:

x1 = (c1 - a12 x2 - a13 x3 - ... - a1n xn) / a11

• Solve the second equation for x2, etc.

To assure that you understand this, write the equation for x2
file: matrix.ppt p. 107


x1 = (c1 - a12 x2 - a13 x3 - ... - a1n xn) / a11

x2 = (c2 - a21 x1 - a23 x3 - ... - a2n xn) / a22

x3 = (c3 - a31 x1 - a32 x2 - ... - a3n xn) / a33

...

xn = (cn - an1 x1 - an2 x2 - ... - an,n-1 xn-1) / ann

file: matrix.ppt p. 108


Gauss-Seidel Method
• Start the solution process by guessing
values of x
• A simple way to obtain initial guesses is to
assume that they are all zero
• Calculate new values of xi starting with
x1 = c1/a11
• Progressively substitute through the
equations
• Repeat until tolerance is reached file: matrix.ppt p. 109
x1 = (c1 - a12 x2 - a13 x3) / a11
x2 = (c2 - a21 x1 - a23 x3) / a22
x3 = (c3 - a31 x1 - a32 x2) / a33

x'1 = (c1 - a12 (0) - a13 (0)) / a11 = c1 / a11
x'2 = (c2 - a21 x'1 - a23 (0)) / a22
x'3 = (c3 - a31 x'1 - a32 x'2) / a33

file: matrix.ppt p. 110
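A sketch of the whole procedure: guess zeros, reuse each new xi immediately, and stop when the largest relative change drops below a chosen tolerance (the tolerance and iteration limit here are illustrative):

```python
def gauss_seidel(A, c, tol=1e-6, max_it=100):
    n = len(c)
    x = [0.0] * n                        # initial guesses of zero
    for _ in range(max_it):
        worst = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (c[i] - s) / A[i][i]
            if new != 0.0:               # relative change for this unknown
                worst = max(worst, abs((new - x[i]) / new))
            x[i] = new                   # used immediately in later rows
        if worst < tol:                  # tolerance reached
            break
    return x
```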


EXAMPLE
Given the following augmented matrix,
complete one iteration of the Gauss
Seidel method.

2 3 1  2
4 1 2  2
 
 3 2 1  1 

file: matrix.ppt p. 111


Gauss-Seidel Method
convergence criterion
εa,i = | (xi^j - xi^(j-1)) / xi^j | × 100% < εs

as in previous iterative procedures in finding the roots,
we consider the present and previous estimates.

As with the open methods we studied previously with
one-point iterations:

1. The method can diverge
2. It may converge very slowly
file: matrix.ppt p. 112
Convergence criteria for two
linear equations
u(x1, x2) = c1/a11 - (a12/a11) x2
v(x1, x2) = c2/a22 - (a21/a22) x1

consider the partial derivatives of u and v:

∂u/∂x1 = 0            ∂u/∂x2 = -a12/a11
∂v/∂x1 = -a21/a22     ∂v/∂x2 = 0
file: matrix.ppt p. 113
Convergence criteria for two
linear equations
u(x1, x2) = c1/a11 - (a12/a11) x2        Class question:
v(x1, x2) = c2/a22 - (a21/a22) x1        where do these formulas come from?

consider the partial derivatives of u and v:

∂u/∂x1 = 0            ∂u/∂x2 = -a12/a11
∂v/∂x1 = -a21/a22     ∂v/∂x2 = 0
file: matrix.ppt p. 114
Convergence criteria for two linear
equations cont.

u v
 1 Criteria for convergence
x x where presented earlier
u v in class material
 1
y y for nonlinear equations.

Noting that x = x1 and


y = x2

Substituting the previous equation:

file: matrix.ppt p. 115


Convergence criteria for two linear
equations cont.
| a21 / a22 | < 1        | a12 / a11 | < 1

This states that the absolute values of the slopes must
be less than unity to ensure convergence.

Extended to n equations:

|aii| > Σ |aij|      where j = 1, n excluding j = i

file: matrix.ppt p. 116


Convergence criteria for two linear
equations cont.
|aii| > Σ |aij|      where j = 1, n excluding j = i

This condition is sufficient but not necessary for convergence.

When met, the matrix is said to be diagonally dominant.

file: matrix.ppt p. 117
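The test is a one-line sketch in Python:

```python
def diagonally_dominant(A):
    # |a_ii| > sum of |a_ij| for j != i, in every row
    return all(abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
               for i, row in enumerate(A))
```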


Review the concepts of divergence and
convergence by graphically illustrating
Gauss-Seidel for two linear equations:

u: 11 x1 + 13 x2 = 286
v: 11 x1 - 9 x2 = 99

[Figure: the two lines plotted in the (x1, x2) plane]

file: matrix.ppt p. 118


v: 11 x1 - 9 x2 = 99
u: 11 x1 + 13 x2 = 286

[Figure: successive Gauss-Seidel estimates step between
the two lines toward their intersection]

Note: we are converging on the solution

file: matrix.ppt p. 119


u: 11 x1 + 13 x2 = 286
v: 11 x1 - 9 x2 = 99

Change the order of the equations: i.e. change
direction of initial estimates

[Figure: successive estimates step away from the intersection]

This solution is diverging!

file: matrix.ppt p. 120
Improvement of Convergence
Using Relaxation
This is a modification that will enhance slow convergence.

After each new value of x is computed, calculate a new value
based on a weighted average of the present and previous
iteration:

xi_new = λ xi_new + (1 - λ) xi_old

file: matrix.ppt p. 121


Improvement of Convergence Using
Relaxation
xi_new = λ xi_new + (1 - λ) xi_old

• if λ = 1, the result is unmodified
• if 0 < λ < 1, underrelaxation
  nonconvergent systems may converge
  hasten convergence by dampening out oscillations
• if 1 < λ < 2, overrelaxation
  extra weight is placed on the present value
  assumption is that the new value is moving toward the
  correct solution, but too slowly
file: matrix.ppt p. 122
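As a sketch, the weighted average is a one-line helper; in the gauss_seidel sketch earlier, replacing x[i] = new with x[i] = relax(new, x[i], lam) applies it:

```python
def relax(new, old, lam):
    # lam = 1: unmodified; 0 < lam < 1: underrelaxation;
    # 1 < lam < 2: overrelaxation
    return lam * new + (1.0 - lam) * old
```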
Jacobi Iteration
• Iterative like Gauss Seidel
• Gauss-Seidel immediately uses the value of
xi in the next equation to predict xi+1
• Jacobi uses only the previous iteration's values
to calculate the complete set of new xi values

file: matrix.ppt p. 123
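A sketch of Jacobi for comparison: every new value comes from the previous iteration's full vector, and the whole set is swapped in at once:

```python
def jacobi(A, c, tol=1e-6, max_it=100):
    n = len(c)
    x = [0.0] * n
    for _ in range(max_it):
        new = [(c[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
               / A[i][i] for i in range(n)]       # uses only old values
        if all(abs(a - b) < tol for a, b in zip(new, x)):
            return new
        x = new
    return x
```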


Graphical depiction of difference between Gauss-Seidel and Jacobi

FIRST ITERATION
  Gauss-Seidel:                              Jacobi:
  x'1 = (c1 - a12 x2  - a13 x3 ) / a11       x'1 = (c1 - a12 x2 - a13 x3) / a11
  x'2 = (c2 - a21 x'1 - a23 x3 ) / a22       x'2 = (c2 - a21 x1 - a23 x3) / a22
  x'3 = (c3 - a31 x'1 - a32 x'2) / a33       x'3 = (c3 - a31 x1 - a32 x2) / a33

SECOND ITERATION
  x''1 = (c1 - a12 x'2  - a13 x'3 ) / a11    x''1 = (c1 - a12 x'2 - a13 x'3) / a11
  x''2 = (c2 - a21 x''1 - a23 x'3 ) / a22    x''2 = (c2 - a21 x'1 - a23 x'3) / a22
  x''3 = (c3 - a31 x''1 - a32 x''2) / a33    x''3 = (c3 - a31 x'1 - a32 x'2) / a33
file: matrix.ppt p. 124
EXAMPLE
Given the following augmented matrix, complete
one iteration of the Gauss Seidel method and the
Jacobi method.

2 3 1  2
4 1 2  2
 
 3 2 1  1 

We worked the Gauss Seidel method earlier

file: matrix.ppt p. 125
