
Week 4 Lecture Slides PDF


Linear algebraic equations

Dr. Meead Saberi


Research Centre for Integrated Transport
Innovation (rCITI)
School of Civil and Environmental Engineering
Mathematical Background

      [a11 a12 a13]
[A] = [a21 a22 a23]
      [a31 a32 a33]

      [1 3 5]
[A] = [3 2 4]   Symmetric matrix: aij = aji
      [5 4 6]

      [a11  0   0 ]
[A] = [ 0  a22  0 ]   Diagonal matrix
      [ 0   0  a33]
Mathematical Background

      [1 0 0]
[A] = [0 1 0]   Identity matrix
      [0 0 1]

      [a11 a12 a13]
[A] = [ 0  a22 a23]   Upper triangular matrix
      [ 0   0  a33]

      [a11  0   0 ]
[A] = [a21 a22  0 ]   Lower triangular matrix
      [a31 a32 a33]
Mathematical Background

[A] + [B] = [aij + bij]   Addition of two matrices

[A] - [B] = [aij - bij]   Subtraction of two matrices

[A] + [B] = [B] + [A]   Commutative

([A] + [B]) + [C] = [A] + ([B] + [C])   Associative

        [g*a11 g*a12 g*a13]
g [A] = [g*a21 g*a22 g*a23]   Multiplication by a scalar g
        [g*a31 g*a32 g*a33]
Mathematical Background

[C] = [A][B]   Matrix multiplication

cij = sum over k of (aik * bkj)

([A][B])[C] = [A]([B][C])   Associative

[A]([B] + [C]) = [A][B] + [A][C]   Distributive

[A][B] != [B][A]   Not generally commutative

Mathematical Background

[A][A]^-1 = [A]^-1 [A] = [I]   Analogous to division

                 1           [ a22  -a12]
[A]^-1 = ------------------- [-a21   a11]   Matrix inversion (2x2)
         a11 a22 - a12 a21

        [a11 a21 a31]
[A]^T = [a12 a22 a32]   Matrix transpose
        [a13 a23 a33]

[a11 a12 a13 | 1 0 0]
[a21 a22 a23 | 0 1 0]   Augmentation
[a31 a32 a33 | 0 0 1]
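The 2x2 inversion formula above can be verified directly; the numeric entries here are illustrative values, not from the slides.

```python
import numpy as np

# Illustrative 2x2 matrix; a11*a22 - a12*a21 must be nonzero.
a11, a12, a21, a22 = 4.0, 7.0, 2.0, 6.0
A = np.array([[a11, a12],
              [a21, a22]])

det = a11 * a22 - a12 * a21            # 4*6 - 7*2 = 10
A_inv = (1.0 / det) * np.array([[ a22, -a12],
                                [-a21,  a11]])

# [A]^-1 [A] = [I], and the transpose swaps rows and columns.
assert np.allclose(A_inv @ A, np.eye(2))
assert np.allclose(A.T, np.array([[a11, a21], [a12, a22]]))
```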
Simultaneous linear equations
How can we numerically solve simultaneous equations?

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn


Graphical method
For solving small sets of simultaneous equations (fewer than three),
we don't really need a computer.

a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2

Both equations can be solved for x2, giving slope-intercept form:

x2 = -(a11/a12) x1 + b1/a12   (slope: -a11/a12, intercept: b1/a12)
x2 = -(a21/a22) x1 + b2/a22   (slope: -a21/a22, intercept: b2/a22)
Example
Use the graphical method to solve the following simultaneous
equations:

3x1 + 2x2 = 18
-x1 + 2x2 = 2

Let's re-arrange the equations:

x2 = -(3/2) x1 + 9
x2 = (1/2) x1 + 1
Example
Plotting the two lines

x2 = -(3/2) x1 + 9
x2 = (1/2) x1 + 1

shows that they intersect at x1 = 4, x2 = 3, which is the solution.
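The graphical solution above amounts to finding where the two rearranged lines cross; a minimal sketch of that intersection computation:

```python
# The two rearranged lines from the example:
#   x2 = -(3/2) x1 + 9   and   x2 = (1/2) x1 + 1
# Setting the right-hand sides equal gives x1, then x2.
m1, c1 = -1.5, 9.0   # slope and intercept of the first line
m2, c2 = 0.5, 1.0    # slope and intercept of the second line

x1 = (c2 - c1) / (m1 - m2)   # (1 - 9) / (-1.5 - 0.5) = 4
x2 = m1 * x1 + c1            # -1.5 * 4 + 9 = 3

assert abs(x1 - 4.0) < 1e-12
assert abs(x2 - 3.0) < 1e-12
```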
Cramer’s rule
Cramer’s rule is another method for small numbers of equations.

The rule states that each unknown in a system of linear equations
may be expressed as the ratio of two determinants: the denominator
is D, the determinant of the coefficient matrix, and the numerator
is obtained from D by replacing the column of coefficients of the
unknown in question with the constants b1, b2, ..., bn.

But what was a determinant? Let's review that first.


Determinant
The determinant can be illustrated for a set of three equations

[A]{x} = {b}

where [A] is the coefficient matrix

      [a11 a12 a13]
[A] = [a21 a22 a23]   This is a matrix
      [a31 a32 a33]

The determinant D of this system is

    | a11 a12 a13 |
D = | a21 a22 a23 |   This is a single number
    | a31 a32 a33 |
Determinant
For example, a second-order determinant is

    | a11 a12 |
D = | a21 a22 | = a11 a22 - a12 a21

For a third-order determinant

    | a11 a12 a13 |
D = | a21 a22 a23 |
    | a31 a32 a33 |

        | a22 a23 |       | a21 a23 |       | a21 a22 |
  = a11 | a32 a33 | - a12 | a31 a33 | + a13 | a31 a32 |
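The two determinant formulas above translate directly into code; this is a plain sketch of the cofactor expansion, not an optimized routine:

```python
def det2(a11, a12, a21, a22):
    """Second-order determinant: a11*a22 - a12*a21."""
    return a11 * a22 - a12 * a21

def det3(a):
    """Third-order determinant by cofactor expansion along the first
    row, matching the formula above; `a` is a 3x3 nested list."""
    return (a[0][0] * det2(a[1][1], a[1][2], a[2][1], a[2][2])
          - a[0][1] * det2(a[1][0], a[1][2], a[2][0], a[2][2])
          + a[0][2] * det2(a[1][0], a[1][1], a[2][0], a[2][1]))

# Sanity checks: det of the identity is 1; det2 of [[1,2],[3,4]] is -2.
assert det3([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1
assert det2(1, 2, 3, 4) == -2
```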
Back to Cramer’s rule
So, for example, in a set of three simultaneous equations:

     | b1 a12 a13 |
     | b2 a22 a23 |
     | b3 a32 a33 |
x1 = ---------------
           D
Example
Use Cramer's rule to solve the following set of simultaneous
equations:

0.3x1 + 0.52x2 + x3 = -0.01
0.5x1 + x2 + 1.9x3 = 0.67
0.1x1 + 0.3x2 + 0.5x3 = -0.44
Example
We start with calculating the determinant

0.3x1 + 0.52x2 + x3 = -0.01
0.5x1 + x2 + 1.9x3 = 0.67
0.1x1 + 0.3x2 + 0.5x3 = -0.44

    | 0.3  0.52  1   |
D = | 0.5  1     1.9 | = -0.0022
    | 0.1  0.3   0.5 |
Example
Now, we can calculate x1, x2 and x3

     | b1 a12 a13 |
     | b2 a22 a23 |
     | b3 a32 a33 |
x1 = ---------------
           D

     | -0.01  0.52  1   |
     |  0.67  1     1.9 |
     | -0.44  0.3   0.5 |
x1 = -------------------- = -14.9
           -0.0022
Example
Now, we can calculate x1, x2 and x3

     | a11 b1 a13 |
     | a21 b2 a23 |
     | a31 b3 a33 |
x2 = ---------------
           D

     | 0.3  -0.01  1   |
     | 0.5   0.67  1.9 |
     | 0.1  -0.44  0.5 |
x2 = -------------------- = -29.5
           -0.0022
Example
Now, we can calculate x1, x2 and x3

     | a11 a12 b1 |
     | a21 a22 b2 |
     | a31 a32 b3 |
x3 = ---------------
           D

     | 0.3  0.52  -0.01 |
     | 0.5  1      0.67 |
     | 0.1  0.3   -0.44 |
x3 = -------------------- = 19.8
           -0.0022
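The Cramer's-rule computation worked above can be sketched with NumPy, using numpy.linalg.det for the determinants and replacing one column of [A] with {b} at a time:

```python
import numpy as np

# The example system from the slides.
A = np.array([[0.3, 0.52, 1.0],
              [0.5, 1.0,  1.9],
              [0.1, 0.3,  0.5]])
b = np.array([-0.01, 0.67, -0.44])

D = np.linalg.det(A)            # approximately -0.0022

x = np.empty(3)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                # replace column i with the constants
    x[i] = np.linalg.det(Ai) / D

# Matches the hand-computed results (rounded to 3 significant figures).
assert abs(D - (-0.0022)) < 1e-4
assert np.allclose(x, [-14.9, -29.5, 19.8], atol=0.1)
```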
How about more equations?
For more than three equations, Cramer's rule becomes impractical
because calculating the determinants is time-consuming and
computationally expensive.
Elimination of unknowns
The elimination of unknowns is an approach that can easily be
illustrated for a set of two equations:

a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2

How? We multiply the equations by constants so that one of the
unknowns will be eliminated when the two equations are combined.
Multiplying the first equation by a21 and the second by a11:

a21 a11 x1 + a21 a12 x2 = a21 b1
a11 a21 x1 + a11 a22 x2 = a11 b2
Elimination of unknowns
a21 a11 x1 + a21 a12 x2 = a21 b1
a11 a21 x1 + a11 a22 x2 = a11 b2

Now if we subtract one equation from the other, we will eliminate
one of the unknowns:

a11 a22 x2 - a21 a12 x2 = a11 b2 - a21 b1

Which simply gives us

     a11 b2 - a21 b1
x2 = ------------------      x1 can then be computed by
     a11 a22 - a21 a12       back-substitution.
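The two-equation elimination above fits in a few lines; `solve2` is an illustrative helper name, not from the slides:

```python
def solve2(a11, a12, a21, a22, b1, b2):
    """Elimination of unknowns for two equations, as derived above:
    multiply eq.1 by a21 and eq.2 by a11, then subtract to
    eliminate x1; recover x1 by back-substitution."""
    x2 = (a11 * b2 - a21 * b1) / (a11 * a22 - a21 * a12)
    x1 = (b1 - a12 * x2) / a11
    return x1, x2

# The graphical-method example: 3x1 + 2x2 = 18, -x1 + 2x2 = 2.
x1, x2 = solve2(3, 2, -1, 2, 18, 2)
assert abs(x1 - 4) < 1e-12 and abs(x2 - 3) < 1e-12
```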
Naïve Gauss elimination
Gauss elimination extends the "elimination of unknowns" method
to large sets of equations by developing a systematic algorithm to
eliminate unknowns and to back-substitute.

Why is it called "naïve"?

The technique is ideally suited for implementation by computers.
In particular, the computer program must avoid division by zero.
However, it is called "naïve" because it does not avoid this
problem.
Forward elimination
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn

In the first step, we reduce the set of equations to an upper
triangular system. This eliminates the first unknown, x1, from
the second all the way through the nth equation.

To do this, we multiply the first equation by a21/a11. We then
subtract it from the second equation.
Forward elimination
To do this, we multiply the first equation by a21/a11:

a11 x1 + a12 x2 + ... + a1n xn = b1

a21 x1 + (a21/a11) a12 x2 + ... + (a21/a11) a1n xn = (a21/a11) b1

We then subtract it from the second equation:

(a22 - (a21/a11) a12) x2 + ... + (a2n - (a21/a11) a1n) xn = b2 - (a21/a11) b1

a'22 x2 + ... + a'2n xn = b'2
Forward elimination
We then repeat this for the remaining equations. For example, we
again multiply the first equation by a31/a11 and we subtract it from
the third equation. If we keep doing this, we will have

a11 x1 + a12 x2 + ... + a1n xn = b1
         a'22 x2 + ... + a'2n xn = b'2
         a'32 x2 + ... + a'3n xn = b'3
         ...
         a'n2 x2 + ... + a'nn xn = b'n
Forward elimination
We repeat this to eliminate the second unknown from the third
equation all the way through the nth equation.

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
         a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
                   a''33 x3 + ... + a''3n xn = b''3
                   ...
                   a''n3 x3 + ... + a''nn xn = b''n
What does it look like at the end?
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
         a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
                   a''33 x3 + ... + a''3n xn = b''3
                   ...
                             a_nn^(n-1) xn = b_n^(n-1)
Back substitution
Now that we have eliminated the unknowns, we begin back-
substituting by starting from the last equation.

a_nn^(n-1) xn = b_n^(n-1)      so      xn = b_n^(n-1) / a_nn^(n-1)

Now that we have xn, we can back-substitute it into the (n-1)th
equation and obtain x_(n-1). So on and so forth:

     b_i^(i-1) - sum (j = i+1..n) of a_ij^(i-1) xj
xi = ---------------------------------------------   for i = n-1, n-2, ..., 1
                      a_ii^(i-1)
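Forward elimination and back-substitution together can be sketched as a short routine; `naive_gauss` is an illustrative name, and the function deliberately does no pivoting, so a zero pivot on the diagonal will fail, which is exactly why the method is called "naïve":

```python
import numpy as np

def naive_gauss(A, b):
    """Naive Gauss elimination: forward elimination to an upper
    triangular system, then back-substitution. No pivoting."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)

    # Forward elimination: zero out column k below the pivot row k.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]

    # Back-substitution, starting from the last equation.
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Small sanity check: 2x + y = 3, x + 3y = 4 has solution x = y = 1.
assert np.allclose(naive_gauss([[2, 1], [1, 3]], [3, 4]), [1, 1])
```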
Example
Use the Naïve Gauss elimination method to solve the following set
of equations:

3x1 - 0.1x2 - 0.2x3 = 7.85
0.1x1 + 7x2 - 0.3x3 = -19.3
0.3x1 - 0.2x2 + 10x3 = 71.4
Forward elimination: removing x1
3x1 - 0.1x2 - 0.2x3 = 7.85
0.1x1 + 7x2 - 0.3x3 = -19.3
0.3x1 - 0.2x2 + 10x3 = 71.4

We multiply the first equation by 0.1/3 and subtract the result from
the second equation, which gives us

7.00333x2 - 0.293333x3 = -19.5617
Forward elimination: removing x1
3x1 - 0.1x2 - 0.2x3 = 7.85
0.1x1 + 7x2 - 0.3x3 = -19.3
0.3x1 - 0.2x2 + 10x3 = 71.4

We then multiply the first equation by 0.3/3 and subtract the result
from the third equation, which gives us

-0.19x2 + 10.02x3 = 70.6150
Forward elimination: removing x2
3x1 - 0.1x2 - 0.2x3 = 7.85
7.00333x2 - 0.293333x3 = -19.5617
-0.19x2 + 10.02x3 = 70.6150

We now multiply the second equation by -0.19/7.00333 and subtract
the result from the third equation, which gives us

10.012x3 = 70.0843
System reduced to upper triangular form
3x1 - 0.1x2 - 0.2x3 = 7.85
7.00333x2 - 0.293333x3 = -19.5617
10.012x3 = 70.0843

[3  -0.1      -0.2     ] [x1]   [  7.85  ]
[0   7.00333  -0.293333] [x2] = [-19.5617]
[0   0        10.012   ] [x3]   [ 70.0843]
Back-substituting
3x1 - 0.1x2 - 0.2x3 = 7.85
7.00333x2 - 0.293333x3 = -19.5617
10.012x3 = 70.0843

x3 = 7.00003
x2 = -2.5
x1 = 3
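The hand computation above can be cross-checked against NumPy's built-in solver:

```python
import numpy as np

# The example system, checked with numpy.linalg.solve.
A = np.array([[3, -0.1, -0.2],
              [0.1, 7, -0.3],
              [0.3, -0.2, 10]])
b = np.array([7.85, -19.3, 71.4])

x = np.linalg.solve(A, b)
assert np.allclose(x, [3.0, -2.5, 7.0], atol=1e-4)
```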
Pitfalls of elimination methods
Division by zero. The primary reason we called the previous
method "naïve" Gauss elimination is that during the elimination
and back-substitution, it is possible that a division by zero occurs.

Round-off errors. Given the large number of mathematical
operations involved in elimination and back-substitution, any
round-off error may propagate through the subsequent steps and
thus reduce the accuracy of the final estimate.

Ill-conditioned systems. When small changes in coefficients
result in large changes in the solution.
LU decomposition
LU decomposition is a class of elimination methods that is less
time-consuming. It provides an efficient means to find solutions
of a system of equations and also to perform matrix inversion.

[A]{x} = {b}
[A]{x} - {b} = 0

With Gauss elimination, we managed to reduce the system to an
upper triangular form, right?

[u11 u12 u13]
[ 0  u22 u23] {x} - {d} = 0   i.e.   [U]{x} - {d} = 0
[ 0   0  u33]
LU decomposition
Now assume that there is a lower triangular matrix with 1's on
the diagonal:

      [1   0   0]
[L] = [l21 1   0]
      [l31 l32 1]

and the following property holds:

[L]{[U]{x} - {d}} = [A]{x} - {b}

[L][U] = [A]
[L]{d} = {b}
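The multipliers from Gauss elimination are exactly the entries of [L], which is why [L][U] = [A] holds; a minimal sketch (the function name `lu_doolittle` and the sample matrix are illustrative, and no pivoting is done):

```python
import numpy as np

def lu_doolittle(A):
    """LU decomposition via Gauss elimination without pivoting:
    the elimination factors become the strictly lower entries of
    [L] (1's on its diagonal), and the reduced matrix is [U]."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]      # store the factor
            A[i, k:] -= L[i, k] * A[k, k:]   # eliminate below pivot
    return L, A                              # A is now [U]

# [L][U] must reproduce the original matrix.
M = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_doolittle(M)
assert np.allclose(L @ U, M)
assert np.allclose(L, [[1.0, 0.0], [1.5, 1.0]])
```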
Steps in LU decomposition
Example
Solve the same set of equations as in the previous example with an
LU decomposition based on Gauss elimination:

3x1 - 0.1x2 - 0.2x3 = 7.85
0.1x1 + 7x2 - 0.3x3 = -19.3
0.3x1 - 0.2x2 + 10x3 = 71.4
Example
Let's first re-write the system in matrix format.

[A]{x} = {b}

      [3    -0.1  -0.2]
[A] = [0.1   7    -0.3]
      [0.3  -0.2  10  ]

After forward elimination, the following upper triangular matrix
is obtained:

      [3  -0.1      -0.2     ]
[U] = [0   7.00333  -0.293333]
      [0   0        10.012   ]
Example
The lower triangular matrix is obtained using the factors
employed for forward elimination:

l21 = 0.1/3 = 0.03333
l31 = 0.3/3 = 0.1
l32 = -0.19/7.00333 = -0.02713

      [1        0        0]
[L] = [0.03333  1        0]
      [0.1     -0.02713  1]
Example
Recall the system of equations can be written as

[3    -0.1  -0.2] [x1]   [  7.85]
[0.1   7    -0.3] [x2] = [-19.3 ]
[0.3  -0.2  10  ] [x3]   [ 71.4 ]

Now that we have [L], we can solve [L]{d} = {b}:

[1        0        0] [d1]   [  7.85]          [  7.85  ]
[0.03333  1        0] [d2] = [-19.3 ]  →  {d} = [-19.5617]
[0.1     -0.02713  1] [d3]   [ 71.4 ]          [ 70.0843]
Example
Now that we have {d}, we can solve [U]{x} - {d} = 0:

[3  -0.1      -0.2     ] [x1]   [  7.85  ]
[0   7.00333  -0.293333] [x2] = [-19.5617]
[0   0        10.012   ] [x3]   [ 70.0843]

      [x1]   [ 3      ]
{x} = [x2] = [-2.5    ]
      [x3]   [ 7.00003]
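The two substitution sweeps above (forward with [L], backward with [U]) can be sketched directly, using the [L] and [U] obtained in this example:

```python
import numpy as np

# [L] and [U] from the forward-elimination step of this example.
L = np.array([[1.0,      0.0,      0.0],
              [0.03333,  1.0,      0.0],
              [0.1,     -0.02713,  1.0]])
U = np.array([[3.0, -0.1,      -0.2],
              [0.0,  7.00333,  -0.293333],
              [0.0,  0.0,      10.012]])
b = np.array([7.85, -19.3, 71.4])

# Forward substitution: [L]{d} = {b}
d = np.empty(3)
for i in range(3):
    d[i] = b[i] - L[i, :i] @ d[:i]

# Back substitution: [U]{x} = {d}
x = np.empty(3)
for i in range(2, -1, -1):
    x[i] = (d[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]

assert np.allclose(d, [7.85, -19.5617, 70.0843], atol=0.01)
assert np.allclose(x, [3.0, -2.5, 7.0], atol=1e-3)
```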
Matrix inversion
Recall the matrix inverse: [A]^-1 [A] = [A][A]^-1 = [I]

Interestingly, one way to numerically compute [A]^-1 is by
applying the LU decomposition:

[A]{x} = {b}   →   [L][U]{x} = {b}

      [1]
{b} = [0]   This gives us the first column of [A]^-1
      [0]

      [0]
{b} = [1]   This gives us the second column of [A]^-1
      [0]
Example
Based on the previous example, let's compute the inverse of [A].

      [3    -0.1  -0.2]
[A] = [0.1   7    -0.3]
      [0.3  -0.2  10  ]

Recall that the decomposition resulted in [L] and [U]:

      [1        0        0]
[L] = [0.03333  1        0]
      [0.1     -0.02713  1]

      [3  -0.1      -0.2     ]
[U] = [0   7.00333  -0.293333]
      [0   0        10.012   ]
Example: step 1
Solve [L]{d} = {b} with {b} = {1, 0, 0}:

[1        0        0] [d1]   [1]          [ 1      ]
[0.03333  1        0] [d2] = [0]  →  {d} = [-0.03333]
[0.1     -0.02713  1] [d3]   [0]          [-0.1009 ]

Then solve [U]{x} = {d}:

[3  -0.1      -0.2     ] [x1]   [ 1      ]
[0   7.00333  -0.293333] [x2] = [-0.03333]
[0   0        10.012   ] [x3]   [-0.1009 ]

         [ 0.33249  0  0]
[A]^-1 = [-0.00518  0  0]   This gives us the first column of [A]^-1
         [-0.01008  0  0]
Example: step 2
Solve [L]{d} = {b} with {b} = {0, 1, 0}:

[1        0        0] [d1]   [0]          [0      ]
[0.03333  1        0] [d2] = [1]  →  {d} = [1      ]
[0.1     -0.02713  1] [d3]   [0]          [0.02713]

Then solve [U]{x} = {d}:

[3  -0.1      -0.2     ] [x1]   [0      ]
[0   7.00333  -0.293333] [x2] = [1      ]
[0   0        10.012   ] [x3]   [0.02713]

         [ 0.33249  0.004944  0]
[A]^-1 = [-0.00518  0.142903  0]   This gives us the second column of [A]^-1
         [-0.01008  0.00271   0]
Example: step 3
Solve [L]{d} = {b} with {b} = {0, 0, 1}:

[1        0        0] [d1]   [0]          [0]
[0.03333  1        0] [d2] = [0]  →  {d} = [0]
[0.1     -0.02713  1] [d3]   [1]          [1]

Then solve [U]{x} = {d}, which gives the third column and thus the
full inverse:

         [ 0.33249  0.004944  0.006798]
[A]^-1 = [-0.00518  0.142903  0.004183]
         [-0.01008  0.00271   0.09988 ]
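The column-by-column procedure above can be sketched compactly: solve [A]{x} = {e_i} for each unit vector e_i and stack the solutions as columns (here numpy.linalg.solve stands in for the explicit [L]/[U] substitutions):

```python
import numpy as np

A = np.array([[3, -0.1, -0.2],
              [0.1, 7, -0.3],
              [0.3, -0.2, 10]])

# Each solve against a unit vector yields one column of [A]^-1.
A_inv = np.column_stack([np.linalg.solve(A, e) for e in np.eye(3)])

assert np.allclose(A_inv @ A, np.eye(3))
# First column matches the step-1 result above.
assert np.allclose(A_inv[:, 0], [0.33249, -0.00518, -0.01008], atol=1e-4)
```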
