Numerical Methods Notes

This document discusses solving linear systems of equations using Gaussian elimination and LU decomposition. It begins by explaining Gaussian elimination, which involves forward elimination to transform the system into upper triangular form, followed by back substitution to solve for the unknowns. Potential pitfalls like division by zero are addressed through pivoting and scaling techniques. LU decomposition is then introduced as an alternative method that factors the coefficient matrix A into lower and upper triangular matrices L and U. This decomposition allows the system to be solved efficiently by forward and back substitution on the triangular systems.

Solution of Linear System of Equations

Gaussian Elimination

Objectives:
Introduction
Gaussian Elimination
Pitfalls in Gauss Elimination Method
Pivoting
Scaling

Introduction
We'll see how to solve a set of n linear equations in n unknowns.
If n <= 3, the small set of linear equations can be solved analytically.
Simultaneous linear equations arise in a variety of engineering contexts, e.g. trusses, reactors, and electric circuits.
Broadly, there are two types of methods for solving linear equations: direct methods and iterative methods.

Gaussian Elimination
Solving a set of n linear equations:
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
.
.
an1 x1 + an2 x2 + ... + ann xn = bn
Solution consists of two phases:
1. Forward elimination of unknowns
2. Back substitution

1. Forward elimination of the unknowns

Eliminate the first unknown, x1, from the second through the nth equations. Multiply equation (1) by a21/a11:

a21 x1 + (a21/a11) a12 x2 + ... + (a21/a11) a1n xn = (a21/a11) b1

Subtract this from the second equation to get

(a22 - (a21/a11) a12) x2 + ... + (a2n - (a21/a11) a1n) xn = b2 - (a21/a11) b1

or

a'22 x2 + ... + a'2n xn = b'2

where the prime indicates the new values of the elements.

Similarly, equation (1) can be multiplied by a31/a11 and the result subtracted from the third equation. Repeating the procedure for the remaining equations gives the following modified form:

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
         a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
         a'32 x2 + a'33 x3 + ... + a'3n xn = b'3
         .
         a'n2 x2 + a'n3 x3 + ... + a'nn xn = b'n

The second unknown x2 is then eliminated by multiplying the second equation by a'32/a'22 and subtracting the result from the third equation.
The procedure is continued using the remaining pivot equations. At the end of the elimination stage, the original system of equations is transformed to an upper triangular system:

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
         a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
                   a''33 x3 + ... + a''3n xn = b''3
                   .
                   .
                   ann^(n-1) xn = bn^(n-1)

2. Back Substitution
The last unknown xn is now given by:

xn = bn^(n-1) / ann^(n-1)

This value of xn is then back-substituted into equation (n-1) to find x(n-1). The procedure is repeated to evaluate the values of the remaining x's.
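A minimal sketch of these two phases in Python (NumPy assumed; the function and variable names are illustrative, not from the notes):

import numpy as np

def gauss_eliminate(A, b):
    """Solve A x = b by naive Gauss elimination (no pivoting)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: zero out entries below each pivot A[k, k]
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]          # assumes A[k, k] != 0
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    # Back substitution on the upper triangular system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(gauss_eliminate(A, b))   # approximately [3.0, -2.5, 7.00003], as in Example 1 below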

Example 1
Use Gauss elimination to solve:
3 x1 - 0.1 x2 - 0.2 x3 = 7.85        (a)
0.1 x1 + 7 x2 - 0.3 x3 = -19.3       (b)
0.3 x1 - 0.2 x2 + 10 x3 = 71.4       (c)
Use 6 significant figures in your computation.

Solution of Example 1

1. Forward elimination
Multiply the 1st equation (a) by 0.1/3 and subtract the result from the 2nd equation (b) to eliminate x1 from the latter:
7.00333 x2 - 0.293333 x3 = -19.5617

Then multiply the 1st equation (a) by 0.3/3 and subtract the result from the 3rd equation (c) to eliminate x1 from the latter:
-0.190000 x2 + 10.0200 x3 = 70.6150

The system of equations is now:
3 x1 - 0.1 x2 - 0.2 x3 = 7.85                 (a)
7.00333 x2 - 0.293333 x3 = -19.5617           (b)
-0.190000 x2 + 10.0200 x3 = 70.6150           (c)

Now let's eliminate x2 from the latest 3rd equation (c) by multiplying equation (b) by -0.190000/7.00333 and subtracting the result from equation (c):
10.0120 x3 = 70.0843

The system of equations is now reduced to upper triangular form:
3 x1 - 0.1 x2 - 0.2 x3 = 7.85                 (a)
7.00333 x2 - 0.293333 x3 = -19.5617           (b)
10.0120 x3 = 70.0843                          (c)

2. Back Substitution
Solve the last equation (c) for x3:
x3 = 70.0843 / 10.0120 = 7.00003

Substitute x3 into equation (b) and solve for x2:
7.00333 x2 - 0.293333 (7.00003) = -19.5617
x2 = -2.50000

Finally substitute the values of x2 and x3 into equation (a) to find x1:
3 x1 - 0.1(-2.50000) - 0.2(7.00003) = 7.85
x1 = 3.00000

3. Check your results
Substitute the values of x1, x2 and x3 into the original system of equations:
3 (3.00000) - 0.1 (-2.50000) - 0.2 (7.00003) = 7.84999
0.1 (3.00000) + 7 (-2.50000) - 0.3 (7.00003) = -19.30000
0.3 (3.00000) - 0.2 (-2.50000) + 10 (7.00003) = 71.40030

Some Pitfalls of Gauss Elimination Method


1. Division by zero
Occurs when a pivot coefficient is zero or very close to zero.
It can be avoided by using a pivoting technique.

2. Round-off errors
These arise mainly because a large number of equations must be solved and every result depends on previous results, so errors in early steps tend to propagate.

Round-off errors may be reduced by:
Using more significant figures.
Using fractions instead of decimals.
Always substitute your answers back into the original system of equations to check for the occurrence of substantial errors.

Pivoting
This involves the following steps:
1. Determine the largest coefficient available in the column below the pivot element.
2. Switch the rows so that the largest element is the pivot element. This is known as partial pivoting.
3. If columns as well as rows are searched for the largest element and then switched, the process is called complete pivoting.
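A sketch of partial pivoting added to the elimination loop from before (only the pivot-selection lines are new; names are illustrative):

import numpy as np

def gauss_partial_pivot(A, b):
    """Gauss elimination with partial (row) pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: pick the row with the largest |A[i, k]| for i >= k
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[0.0003, 3.0], [1.0, 1.0]])
b = np.array([2.0001, 1.0])
print(gauss_partial_pivot(A, b))   # approximately [1/3, 2/3], as in Example 2 below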
Example 2
Solve the following system of equations
0.0003 x1 + 3.0000 x2 = 2.0001       (1)
1.0000 x1 + 1.0000 x2 = 1.0000       (2)
1. Using naive Gauss elimination
2. Using partial pivoting

Solution of example 2
1. First, let's solve these equations the same way we did in Example 1:
Forward elimination:
Multiply equation (1) by 1/0.0003 to get
x1 + 10,000 x2 = 6667
Subtract the resulting equation from equation (2) to get
-9999 x2 = -6666,  so  x2 = 2/3

Back substitution:
0.0003 x1 + 3.0000 x2 = 2.0001
x1 = (2.0001 - 3(2/3)) / 0.0003

Now let's check the effect of the number of significant figures carried on the result for x1 (which is very sensitive to the rounded value of x2):

Significant figures    x2           x1           Error in x1 (%)
3                      0.667        -3.33        1099
4                      0.6667       0.0000       100
5                      0.66667      0.30000      10
6                      0.666667     0.330000     1
7                      0.6666667    0.3330000    0.1

2. Second, let's solve these equations by applying partial pivoting. Swapping the rows gives:
1.0000 x1 + 1.0000 x2 = 1.0000       (1)
0.0003 x1 + 3.0000 x2 = 2.0001       (2)

Forward elimination:
Multiply equation (1) by 0.0003 to get
0.0003 x1 + 0.0003 x2 = 0.0003
Subtract the resulting equation from equation (2) to get
2.9997 x2 = 1.9998,  so  x2 = 2/3

Back substitution:
x1 + x2 = 1
x1 = 1 - (2/3) = 1/3

This case is much less sensitive to the number of significant figures carried:
Significant figures    x2           x1           Error in x1 (%)
3                      0.667        0.333        0.1
4                      0.6667       0.3333       0.01
5                      0.66667      0.33333      0.001
6                      0.666667     0.333333     0.0001
7                      0.6666667    0.3333333    0.00001

Scaling
Scaling is necessary when different units are used in the
same system of equations.
Scaling is used to:
1. Standardize the size of the coefficients in the system
of equations.
2. Minimize the round-off errors caused by having much
larger coefficients than others.
Example 3
Use three significant figures to solve
2 x1 + 100000 x2 = 10000             (1)
x1 + x2 = 2                          (2)
1. Naive Gauss elimination
2. Gauss elimination with scaling and pivoting
3. Gauss elimination without scaling and with pivoting

Solution of example 3
1. Naive Gauss elimination:
Forward elimination
2 x1 + 100000 x2 = 10000             (1)
        -50000 x2 = -5000            (2)
Back substitution
x2 = 0.10   and   x1 = 0.00

2. Gauss elimination with scaling and pivoting:
Scaling
0.00002 x1 + x2 = 0.1                (1)
x1 + x2 = 2                          (2)
Pivoting
x1 + x2 = 2                          (1)
0.00002 x1 + x2 = 0.1                (2)
Forward elimination
x1 + x2 = 2
         x2 = 0.1
Back substitution
x1 = 1.90   and   x2 = 0.10

3. Gauss elimination without scaling and with pivoting of the original equations:
Pivoting
x1 + x2 = 2                          (1)
2 x1 + 100000 x2 = 10000             (2)
Forward elimination
x1 + x2 = 2
     100000 x2 = 10000
Back substitution
x1 = 1.90   and   x2 = 0.10

Factorization Methods

Objectives:
Introduction
LU Decomposition
Computational complexity
The Matrix Inverse
Extending the Gaussian Elimination Process
Introduction
Provides an efficient way to compute matrix inverse by
separating the time consuming elimination of the Matrix
[A] from manipulations of the right-hand side {B}.
Gauss elimination, in which the forward elimination
comprises the bulk of the computational effort, can be
implemented as an LU decomposition.

LU Decomposition
The matrix [A] of the linear system [A]{X}={B} is factorized into the product of two matrices [L] and [U], where [L] is a lower triangular matrix and [U] is an upper triangular matrix:
[L][U] = [A]
[L][U]{X} = {B}
Similar to the first phase of Gauss elimination, consider
[U]{X} = {D}
[L]{D} = {B}
The solution is obtained in two steps:
1. First solve [L]{D}={B} for the intermediate vector {D} by forward substitution.
2. Then solve [U]{X}={D} for {X} by back substitution.
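A compact sketch of this two-step solve, with the factorization done as in Gauss elimination (Doolittle form, no pivoting; function names are illustrative):

import numpy as np

def lu_decompose(A):
    """Doolittle LU factorization without pivoting: A = L @ U."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]     # store the multiplier m_ik
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    # Forward substitution: solve L d = b
    d = np.zeros(n)
    for i in range(n):
        d[i] = b[i] - L[i, :i] @ d[:i]
    # Back substitution: solve U x = d
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
L, U = lu_decompose(A)
print(lu_solve(L, U, b))   # approximately [3.0, -2.5, 7.00003]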

17

In matrix form (for a 3x3 system), this is written as

[ a11  a12  a13 ] [ x1 ]   [ b1 ]
[ a21  a22  a23 ] [ x2 ] = [ b2 ]
[ a31  a32  a33 ] [ x3 ]   [ b3 ]

How do we obtain the triangular factorization? Start from the identity factorization

        [ 1  0  0 ] [ a11  a12  a13 ]
[A]  =  [ 0  1  0 ] [ a21  a22  a23 ]
        [ 0  0  1 ] [ a31  a32  a33 ]

Use Gauss elimination on the right-hand factor and store the multipliers mij as the subdiagonal entries of [L]. The multipliers are

m21 = a21 / a11,   m31 = a31 / a11,   m32 = a'32 / a'22

The triangular factorization of matrix [A] is then [A] = [L][U]:

        [ 1    0    0 ] [ a11  a12    a13   ]
[A]  =  [ m21  1    0 ] [ 0    a'22   a'23  ]
        [ m31  m32  1 ] [ 0    0      a''33 ]

Example 1:
Use LU decomposition to solve:
3 x1 - 0.1 x2 - 0.2 x3 = 7.85
0.1 x1 + 7 x2 - 0.3 x3 = -19.3
0.3 x1 - 0.2 x2 + 10 x3 = 71.4
Use 6 significant figures in your computation.

Example 1 - Solution
In matrix form:

[ 3     -0.1   -0.2 ] [ x1 ]   [ 7.85  ]
[ 0.1    7     -0.3 ] [ x2 ] = [ -19.3 ]
[ 0.3   -0.2    10  ] [ x3 ]   [ 71.4  ]

The multipliers are

m21 = 0.1/3 = 0.0333333,   m31 = 0.3/3 = 0.100000,   m32 = -0.19/7.00333 = -0.0271300

The LU decomposition is

        [ 1          0           0 ] [ 3   -0.1      -0.2      ]
[A]  =  [ 0.0333333  1           0 ] [ 0    7.00333  -0.293333 ]
        [ 0.100000  -0.0271300   1 ] [ 0    0         10.0120  ]

The solution is obtained in two steps:
1. First solve [L]{D}={B} for {D} by forward substitution:

[ 1          0           0 ] [ d1 ]   [ 7.85  ]
[ 0.0333333  1           0 ] [ d2 ] = [ -19.3 ]
[ 0.100000  -0.0271300   1 ] [ d3 ]   [ 71.4  ]

d1 = 7.85
d2 = -19.3 - 0.0333333(7.85) = -19.5617
d3 = 71.4 - 0.1(7.85) + 0.02713(-19.5617) = 70.0843

2. Then solve [U]{X}={D} for {X} by back substitution:

[ 3   -0.1      -0.2      ] [ x1 ]   [ 7.85     ]
[ 0    7.00333  -0.293333 ] [ x2 ] = [ -19.5617 ]
[ 0    0         10.0120  ] [ x3 ]   [ 70.0843  ]

x3 = 70.0843 / 10.0120 = 7.00003
x2 = (-19.5617 + 0.293333(7.00003)) / 7.00333 = -2.5
x1 = (7.85 + 0.1(-2.5) + 0.2(7.00003)) / 3 = 3

Computational Complexity
The triangular factorization portion of [A]=[L][U] requires
(N^3 - N)/3 multiplications and divisions
(2N^3 - 3N^2 + N)/6 subtractions
Finding the solution of [L][U]{X}={B} requires
N^2 multiplications and divisions
N^2 - N subtractions
The bulk of the calculation lies in the factorization portion.
LU decomposition is usually chosen over Gauss elimination when the linear system is to be solved many times, with the same [A] but with different {B}.
It saves computing time by separating the time-consuming elimination step from the manipulations of the right-hand side.
It also provides an efficient means to compute the matrix inverse, which in turn provides a means to test whether systems are ill-conditioned.

The Matrix Inverse
Find the matrix [A]^-1, the inverse of [A], for which
[A][A]^-1 = [A]^-1[A] = [I]
The inverse can be computed in a column-by-column fashion by generating solutions with unit vectors as the right-hand-side constants {B}:

The solution of [L][U]{X}={B} with {B} = {1, 0, 0}^T will be the first column of [A]^-1.
The solution of [L][U]{X}={B} with {B} = {0, 1, 0}^T will be the second column of [A]^-1.
The solution of [L][U]{X}={B} with {B} = {0, 0, 1}^T will be the third column of [A]^-1.
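A self-contained sketch of this column-by-column construction (factor once, then one forward/back substitution per unit vector; names are illustrative):

import numpy as np

def lu_inverse(A):
    """Build A^-1 one column at a time: factor A = L U once, then solve A x = e_j for each j."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):                     # Doolittle factorization (no pivoting)
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    inv = np.zeros((n, n))
    for j in range(n):                         # one unit vector per column of the inverse
        e = np.zeros(n)
        e[j] = 1.0
        d = np.zeros(n)
        for i in range(n):                     # forward substitution: L d = e
            d[i] = e[i] - L[i, :i] @ d[:i]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):         # back substitution: U x = d
            x[i] = (d[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        inv[:, j] = x
    return inv

A = np.array([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]])
print(lu_inverse(A))   # first column is approximately [0.33249, -0.00518, -0.01008]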

Example 2:
Use LU decomposition to determine the matrix inverse for the following system and use it to find the solution:
3 x1 - 0.1 x2 - 0.2 x3 = 7.85
0.1 x1 + 7 x2 - 0.3 x3 = -19.3
0.3 x1 - 0.2 x2 + 10 x3 = 71.4
Use 6 significant figures in your computation.

Example 2 - Solution
In matrix form:

        [ 3     -0.1   -0.2 ]
[A]  =  [ 0.1    7     -0.3 ]
        [ 0.3   -0.2    10  ]

The triangular factorization of [A] is

        [ 1          0           0 ]            [ 3   -0.1      -0.2      ]
[L]  =  [ 0.0333333  1           0 ]    [U]  =  [ 0    7.00333  -0.293333 ]
        [ 0.100000  -0.0271300   1 ]            [ 0    0         10.0120  ]

First column of [A]^-1: solve [L]{D} = {1, 0, 0}^T, then [U]{X} = {D}:
{D} = {1, -0.03333, -0.1009}^T
{X} = {0.33249, -0.00518, -0.01008}^T

Second column of [A]^-1: solve [L]{D} = {0, 1, 0}^T, then [U]{X} = {D}:
{D} = {0, 1, 0.02713}^T
{X} = {0.004944, 0.142903, 0.00271}^T

Third column of [A]^-1: solve [L]{D} = {0, 0, 1}^T, then [U]{X} = {D}:
{D} = {0, 0, 1}^T
{X} = {0.006798, 0.004183, 0.09988}^T

The matrix inverse [A]^-1 is:

           [  0.33249   0.004944  0.006798 ]
[A]^-1  =  [ -0.00518   0.142903  0.004183 ]
           [ -0.01008   0.00271   0.09988  ]

Check your result by verifying that [A][A]^-1 = [I].

The final solution is

                    [  0.33249   0.004944  0.006798 ] [ 7.85  ]   [  3        ]
{X} = [A]^-1 {B} =  [ -0.00518   0.142903  0.004183 ] [ -19.3 ] = [ -2.50002  ]
                    [ -0.01008   0.00271   0.09988  ] [ 71.4  ]   [  7        ]

Extending the Gaussian Elimination Process


If pivoting is required to solve [A]{X}={B}, then there
exists a permutation matrix [P] so that:
[P][A ]=[L][U]

The solution {X} is found in four steps:


1. Construct the matrices [L], [U] and [P].
2. Compute the column vector [P]{B}.
3. Solve [L]{D}=[P]{B} for {D} using forward
substitution.
4. Solve [U]{X}={D} for {X} using back substitution.
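In practice these four steps are often delegated to a library that performs the pivoted factorization; a short sketch with SciPy (assuming SciPy is available; lu_factor does steps 1-2 and lu_solve does steps 3-4 internally):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[0.0003, 3.0], [1.0, 1.0]])
b = np.array([2.0001, 1.0])

lu, piv = lu_factor(A)          # pivoted LU factorization of A
x = lu_solve((lu, piv), b)      # forward and back substitution
print(x)                        # approximately [0.33333, 0.66667], as in Example 3 below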
Example 3
Use LU decomposition with permutation to solve the following system of equations:
0.0003 x1 + 3.0000 x2 = 2.0001
1.0000 x1 + 1.0000 x2 = 1.0000

Example 3 - Solution
In matrix form [A]{X}={B}:

[ 0.0003  3 ] [ x1 ]   [ 2.0001 ]
[ 1       1 ] [ x2 ] = [ 1      ]

We saw previously that pivoting is required to solve this system of equations, hence [P][A] = [L][U].
The solution {X} is found in four steps:

1. Construct the matrices [L], [U] and [P]:

       [ 0  1 ]          [ 1       0 ]          [ 1  1      ]
[P] =  [ 1  0 ]   [L]  = [ 0.0003  1 ]   [U]  = [ 0  2.9997 ]

2. Compute the column vector [P]{B}:

[ 0  1 ] [ 2.0001 ]   [ 1      ]
[ 1  0 ] [ 1      ] = [ 2.0001 ]

3. Solve [L]{D} = [P]{B} for {D} using forward substitution:

[ 1       0 ] [ d1 ]   [ 1      ]              [ 1      ]
[ 0.0003  1 ] [ d2 ] = [ 2.0001 ]   gives {D} = [ 1.9998 ]

4. Solve [U]{X} = {D} for {X} using back substitution:

[ 1  1      ] [ x1 ]   [ 1      ]              [ 0.33333 ]
[ 0  2.9997 ] [ x2 ] = [ 1.9998 ]   gives {X} = [ 0.66667 ]

Iterative Methods
Objectives:
Introduction
Jacobi Method
Gauss-Seidel Method

Introduction
To solve the linear system Ax = b we may use either:
Direct Methods
- Gaussian elimination
- PLU decomposition
Iterative Methods
- Jacobi Method
- Gauss-Seidel Method

Iterative Methods
Suppose we solve Ax = b for a given matrix A by finding
the PLU decomposition
If we change the vector b, we may continue to use the PLU
If we change A, we now have to re-compute the PLU
decomposition: expensive
Instead, suppose we have solved the system Ax = b
for a given matrix A

29

Suppose we change A slightly, e.g., modify a single resistor


in a circuit
If we call that new matrix Amod, is it possible to use the
solution to Ax = b to solve Amodx = b?
They provide an alternative to the elimination method.
Let Ax = b be the set of equations to be solved.
The system Ax = b is reshaped by solving the first equation
for x1 , the second equation for x2 , and the third for x3 ,
and nth equation for xn .
For ease of computation, let's assume we have a 3x3 system of equations to solve:

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

If the diagonal elements are all non-zero, then:

x1 = (b1 - a12 x2 - a13 x3) / a11
x2 = (b2 - a21 x1 - a23 x3) / a22
x3 = (b3 - a31 x1 - a32 x2) / a33

Jacobi Iteration Method
1. Assume all the x's are zero.
2. Substitute the zeros into the three equations to get:

x1 = b1 / a11,   x2 = b2 / a22,   x3 = b3 / a33

3. Repeat the procedure until the error criterion is satisfied for every unknown:

x1^(i+1) = (b1 - a12 x2^i - a13 x3^i) / a11
x2^(i+1) = (b2 - a21 x1^i - a23 x3^i) / a22
x3^(i+1) = (b3 - a31 x1^i - a32 x2^i) / a33

ea,j = | (xj^i - xj^(i-1)) / xj^i | x 100%  <  es
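A minimal sketch of Jacobi iteration (the tolerance and iteration cap are illustrative choices, not from the notes):

import numpy as np

def jacobi(A, b, tol=1e-7, max_iter=100):
    """Jacobi iteration: every new x_i uses only values from the previous sweep."""
    n = len(b)
    x = np.zeros(n)                               # step 1: start from all zeros
    for _ in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            s = A[i, :] @ x - A[i, i] * x[i]      # sum of a_ij * x_j for j != i
            x_new[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x_new - x)) < tol:       # stop when the update is small
            return x_new
        x = x_new
    return x

A = np.array([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(jacobi(A, b))   # approximately [3.0, -2.5, 7.00003]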

Gauss-Seidel Method
It is the most commonly used iterative method.
Gauss-Seidel Procedure
1. Assume all the x's are zero.
2. Substitute the zeros into the first equation, i.e. equation (1), to give:

x1 = b1 / a11

3. Substitute the new value of x1 and x3 = 0 into equation (2) to compute x2.
4. Substitute the value of x1 and the new value of x2 into equation (3) to estimate x3.
5. Return to equation (1) and repeat the entire procedure until the error criterion is satisfied:

x1^(i+1) = (b1 - a12 x2^i       - a13 x3^i) / a11
x2^(i+1) = (b2 - a21 x1^(i+1)   - a23 x3^i) / a22
x3^(i+1) = (b3 - a31 x1^(i+1)   - a32 x2^(i+1)) / a33

ea,j = | (xj^i - xj^(i-1)) / xj^i | x 100%  <  es
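The same loop as Jacobi, but updating x in place so each equation immediately uses the newest values (a sketch; the optional relaxation factor lam defaults to plain Gauss-Seidel and anticipates the Relaxation section below):

import numpy as np

def gauss_seidel(A, b, lam=1.0, tol=1e-7, max_iter=100):
    """Gauss-Seidel iteration with an optional relaxation factor lam (0 < lam < 2)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :] @ x - A[i, i] * x[i]          # uses freshly updated x_j values
            x_new = (b[i] - s) / A[i, i]
            x[i] = lam * x_new + (1.0 - lam) * x[i]   # relaxation step
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

A = np.array([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(gauss_seidel(A, b))   # approximately [3.0, -2.5, 7.00003]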

Example 1:
Use the Gauss-Seidel method to solve the following set of linear equations:
3 x1 - 0.1 x2 - 0.2 x3 = 7.85        (1)
0.1 x1 + 7 x2 - 0.3 x3 = -19.3       (2)
0.3 x1 - 0.2 x2 + 10 x3 = 71.4       (3)

Example 1 - Solution
First we have:

x1 = (7.85 + 0.1 x2 + 0.2 x3) / 3
x2 = (-19.3 - 0.1 x1 + 0.3 x3) / 7
x3 = (71.4 - 0.3 x1 + 0.2 x2) / 10

1st iteration
Assume that x2 = 0 and x3 = 0; we obtain
x1 = 7.85 / 3 = 2.616667
Substitute x1 = 2.616667 and x3 = 0 into equation (2):
x2 = (-19.3 - 0.1(2.616667) + 0) / 7 = -2.794524
Substitute x1 = 2.616667 and x2 = -2.794524 into equation (3):
x3 = (71.4 - 0.3(2.616667) + 0.2(-2.794524)) / 10 = 7.005610
This completes the first iteration.

2nd iteration
x1 = (7.85 + 0.1(-2.794524) + 0.2(7.005610)) / 3 = 2.990557
x2 = (-19.3 - 0.1(2.990557) + 0.3(7.005610)) / 7 = -2.499625
x3 = (71.4 - 0.3(2.990557) + 0.2(-2.499625)) / 10 = 7.000291

Error estimates
For x1:  ea,1 = |(2.990557 - 2.616667) / 2.990557| x 100% = 12.5%
For x2:  ea,2 = |(-2.499625 - (-2.794524)) / (-2.499625)| x 100% = 11.8%
For x3:  ea,3 = |(7.000291 - 7.005610) / 7.000291| x 100% = 0.076%
Convergence
Gauss-Seidel is similar in spirit to simple fixed-point iteration.
Gauss-Seidel will converge if, for every equation of the system,

|aii| > sum over j (j != i) of |aij|

Such a system is said to be diagonally dominant.
This criterion is sufficient but not necessary for convergence.

Relaxation
Designed to enhance convergence.
After each new value of x is computed, that value is modified using:

xi_new = lambda * xi_new + (1 - lambda) * xi_old

where 0 < lambda < 2 is a weighting factor.
The choice of lambda is problem-specific and is often determined empirically.

36

Gauss-Seidel vs. Jacobi Iteration
Gauss-Seidel iteration generally converges more rapidly than Jacobi iteration, since it uses the latest updates.
But there are some cases in which Jacobi iteration converges while Gauss-Seidel does not.

Eigenvalues and Eigenvectors


Objectives
Introduction
Mathematical background
Physical background
Polynomial Method
Power Method

Introduction
Eigenvalue problems are a special class of problems that
are common in engineering contexts involving vibrations
and elasticity.
Many systems of ODEs can be reduced to eigenvalue
problems.

Mathematical Background
So far we have learned to solve [A]{x}={b}.
Such systems are called nonhomogeneous because of the presence of {b}.
If det[A] != 0, there is a unique solution {x}.

Homogeneous systems have the general form [A]{x} = 0.
Nontrivial solutions of such systems are possible, but they are generally not unique.

Eigenvalue problems are of the general form:

(a11 - λ) x1 + a12 x2 + ... + a1n xn = 0
a21 x1 + (a22 - λ) x2 + ... + a2n xn = 0
...
an1 x1 + an2 x2 + ... + (ann - λ) xn = 0

λ is the unknown parameter called the eigenvalue or characteristic value.
A solution {x1, x2, ..., xn} of such a system is referred to as an eigenvector.
The set of equations may also be expressed as:

[[A] - λ[I]] {x} = 0

The determinant of the matrix [[A] - λ[I]] must equal zero for nontrivial solutions to be possible.
Expanding the determinant yields a polynomial in λ.
The roots of this polynomial are the eigenvalues.

Physical Background
The following mass-spring system is a simple illustration of how eigenvalues occur in an engineering context.
A force balance for each mass is developed using Newton's second law:

m1 d²x1/dt² = -k x1 + k (x2 - x1)
m2 d²x2/dt² = -k (x2 - x1) - k x2

From vibration theory, the solutions of these equations are of the form
xi = Xi sin(ωt)
where
Xi is the amplitude of the vibration of mass i
ω is the frequency of the vibration, given by ω = 2π / Tp
Tp is the period.

This system of equations can be converted to an eigenvalue problem for the vibrations:

(2k/m1 - ω²) X1 - (k/m1) X2 = 0
-(k/m2) X1 + (2k/m2 - ω²) X2 = 0

The Polynomial Method


When dealing with complicated systems or systems with
heterogeneous properties, analytical solutions are often
difficult or impossible to obtain.
Numerical solutions to such equations may be the only
practical alternatives.
These equations can be solved by substituting a central
finite-divided difference approximation for the derivatives.
Writing this equation for a series of nodes yields a
homogeneous system of equations.

The Polynomial Method - Procedure
Convert the system to an eigenvalue problem:
[[A] - λ[I]] {x} = 0
Expand the determinant det[[A] - λ[I]] = 0. This will yield a polynomial in λ.
Solve for λ.
For each value of λ, establish the relationship between the unknown x's, called an eigenvector (note that there is no unique solution, only ratios).

42

Example 1
Use the polynomial method to evaluate the eigenvalues and eigenvectors of the spring-mass example for the case where m1 = m2 = 40 kg and k = 200 N/m.

Example 1 - Solution
Convert the system to an eigenvalue problem:
(10 - ω²) X1 - 5 X2 = 0
-5 X1 + (10 - ω²) X2 = 0
Expand the determinant det[[A] - ω²[I]] = 0:
(ω²)² - 20 ω² + 75 = 0
Solve for ω²:
ω² = 15 s⁻²  and  ω² = 5 s⁻²

The frequencies of vibration of the masses are ω = 3.873 s⁻¹ and ω = 2.236 s⁻¹.
The corresponding periods are Tp = 1.62 s and Tp = 2.81 s.
For each value of ω², plug back into the matrix equation to solve for the eigenvector X's:
- For the first mode (ω² = 15):
(10 - 15) X1 - 5 X2 = 0
-5 X1 + (10 - 15) X2 = 0
X1 = -X2
- Similarly, for the second mode (ω² = 5): X1 = X2

What does this mean physically? It gives valuable information about:
Period
Amplitude
1st mode: X1 = -X2 (the masses move out of phase)
2nd mode: X1 = X2 (the masses move in phase)

The Power Method
An iterative approach that can be employed to determine the largest eigenvalue.
With slight modification, it can also be used to determine the smallest eigenvalue.
To determine the largest eigenvalue the system must be expressed in the form:
[A]{X} = λ{X}

The Power Method - Procedure
Rearrange the equations so that we have [A]{X} = λ{X}.
Plug an initial guess into the left-hand side; assume all the X's are equal to 1.
Evaluate the right-hand side.
Pull a scalar λ out of the right-hand side so that the largest entry of the vector equals 1.
Plug the resulting eigenvector back into the left-hand side and repeat until the eigenvalue converges with εa < εs.
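A sketch of this normalize-and-repeat loop (the stopping tolerance is an illustrative choice):

import numpy as np

def power_method(A, tol=1e-6, max_iter=200):
    """Estimate the largest-magnitude eigenvalue of A and its eigenvector."""
    n = A.shape[0]
    x = np.ones(n)                         # initial guess: all ones
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x                          # multiply through by A
        lam_new = y[np.argmax(np.abs(y))]  # scale factor: entry of largest magnitude
        x = y / lam_new                    # normalize so that entry becomes 1
        if abs((lam_new - lam) / lam_new) * 100 < tol:   # epsilon_a < epsilon_s
            return lam_new, x
        lam = lam_new
    return lam, x

A = np.array([[40.0, -20.0, 0.0], [-20.0, 40.0, -20.0], [0.0, -20.0, 40.0]])
lam, vec = power_method(A)
print(lam)   # ~ 68.284, as in Example 2 below
print(vec)   # ~ [-0.707, 1, -0.707] (same eigenvector as the notes, up to sign)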

Example 2
Employ the power method to determine the highest eigenvalue and its associated eigenvector of a three mass-four spring system for the case where m1 = m2 = m3 = 1 kg and k1 = k2 = k3 = k4 = k = 20 N/m.

Example 2 - Solution
Convert the system to an eigenvalue problem:

(2k/m1 - ω²) X1 - (k/m1) X2 = 0
-(k/m2) X1 + (2k/m2 - ω²) X2 - (k/m2) X3 = 0
-(k/m3) X2 + (2k/m3 - ω²) X3 = 0

Substitute the values of the m's and k's and express the system in the matrix form [A]{X} = λ{X} (with λ = ω²):

[ 40  -20   0  ] [ X1 ]       [ X1 ]
[ -20  40  -20 ] [ X2 ] = λ   [ X2 ]
[ 0   -20   40 ] [ X3 ]       [ X3 ]

The iteration eventually converges to
Eigenvalue λ = 68.28427

              [  0.707107 ]
Eigenvector = [ -1        ]
              [  0.707107 ]

The Power Method - Extra
Example 2 found the largest eigenvalue and its corresponding eigenvector.
To find the smallest eigenvalue, apply the power method to the matrix inverse of [A]. The power method will then converge to the largest value of 1/λ, which corresponds to the smallest λ.
After finding the largest eigenvalue, it is possible to determine the next highest by replacing the original matrix with one that includes only the remaining eigenvalues. This process is called deflation.

Assignment #5
27.11, 28.25, 28.27

49

Curve Fitting and Interpolation

Polynomial Interpolation

Objectives
Introduction
Newton's Divided Difference Method
i. Linear interpolation
ii. Quadratic interpolation
iii. General form of Newton's interpolation
Lagrange Interpolation

Introduction
Techniques to fit curves to discrete values of data to obtain
intermediate estimates.
- Regression (imprecise data)
- Interpolation (precise data)
Curve fitting is widely used in engineering
- Trend analysis: extrapolation and interpolation
- Hypothesis testing: compare a mathematical
model with measured data.

50

Linear Interpolation
Using similar triangles,

(f1(x) - f(x0)) / (x - x0) = (f(x1) - f(x0)) / (x1 - x0)

and rearranging, we get

f1(x) = f(x0) + [(f(x1) - f(x0)) / (x1 - x0)] (x - x0)

f1(x) is a first-order interpolating polynomial.
The term (f(x1) - f(x0)) / (x1 - x0) represents the slope of the line connecting the points.
The smaller the interval between data points, the better the approximation.

Example 1:
Estimate the natural logarithm of 2 using linear interpolation:
1. Interpolate between ln 1 and ln 6
2. Use the smaller interval from ln 1 to ln 4

Example 1 - Solution
Linear interpolation:
f1(x) = f(x0) + [(f(x1) - f(x0)) / (x1 - x0)] (x - x0)

1. Using ln 1 and ln 6:
f1(2) = ln 1 + [(ln 6 - ln 1) / (6 - 1)] (2 - 1) = 0.3583519        (εt = 48.3%)

2. Using ln 1 and ln 4:
f1(2) = ln 1 + [(ln 4 - ln 1) / (4 - 1)] (2 - 1) = 0.4620981        (εt = 33.3%)

52

Quadratic Interpolation
This is a means of improving the estimate by introducing some curvature into the line connecting the points.
Using three data points, a second-order (quadratic) polynomial, i.e. a parabola, is used to carry out the estimate:

f2(x) = b0 + b1(x - x0) + b2(x - x0)(x - x1)
      = b0 + b1 x - b1 x0 + b2 x² - b2 x0 x - b2 x x1 + b2 x0 x1
      = a0 + a1 x + a2 x²
where
a0 = b0 - b1 x0 + b2 x0 x1
a1 = b1 - b2 x0 - b2 x1
a2 = b2

The values of the coefficients b0, b1 and b2 are computed as follows:
Evaluate f2 at x = x0:   b0 = f(x0)
Evaluate f2 at x = x1:   b1 = (f(x1) - f(x0)) / (x1 - x0)
Evaluate f2 at x = x2:   b2 = [ (f(x2) - f(x1))/(x2 - x1) - (f(x1) - f(x0))/(x1 - x0) ] / (x2 - x0)

The first two terms of f2(x) are equivalent to linear interpolation from x0 to x1.
b1 represents the slope of the line connecting points x0 and x1.
b2(x - x0)(x - x1) introduces the second-order curvature into the formula.
Example 2:
Fit a second-order polynomial to the three points used to evaluate the natural logarithm of 2, i.e.
x0 = 1      f(x0) = 0
x1 = 4      f(x1) = 1.386294
x2 = 6      f(x2) = 1.791759

Example 2 - Solution
First, let's compute the coefficients b0, b1 and b2:
b0 = f(x0) = ln 1 = 0
b1 = (f(x1) - f(x0)) / (x1 - x0) = (1.386294 - 0) / (4 - 1) = 0.4620981
b2 = [ (ln 6 - ln 4)/(6 - 4) - (ln 4 - ln 1)/(4 - 1) ] / (6 - 1) = -0.0518731

The quadratic polynomial is then:
f2(x) = 0 + 0.4620981(x - 1) - 0.0518731(x - 1)(x - 4)

Let's now evaluate f2(x) at x = 2:
f2(2) = 0.5658444
and the relative error is εt = 18.4%.

General Form of Newton's Interpolation
The analysis used in linear and quadratic interpolation can be generalized to fit an (n-1)th-order polynomial to n data points:

f_{n-1}(x) = b0 + b1(x - x0) + ... + b_{n-1}(x - x0)(x - x1)...(x - x_{n-2})

The data points are used to evaluate the coefficients:
b0 = f(x0)
b1 = f[x1, x0]
b2 = f[x2, x1, x0]
.
.
b_{n-1} = f[x_{n-1}, x_{n-2}, ..., x1, x0]

The bracketed function evaluations are finite divided differences:

f[xi, xj] = (f(xi) - f(xj)) / (xi - xj)
f[xi, xj, xk] = (f[xi, xj] - f[xj, xk]) / (xi - xk)
f[x_{n-1}, ..., x1, x0] = (f[x_{n-1}, ..., x1] - f[x_{n-2}, ..., x0]) / (x_{n-1} - x0)

The general form of Newton's interpolating polynomial is:

f_{n-1}(x) = f(x0) + (x - x0) f[x1, x0] + (x - x0)(x - x1) f[x2, x1, x0]
             + ... + (x - x0)(x - x1)...(x - x_{n-2}) f[x_{n-1}, x_{n-2}, ..., x1, x0]
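A short sketch that builds the divided-difference coefficients in place and evaluates the polynomial with nested multiplication (names are illustrative):

import math

def newton_interp(xs, ys, x):
    """Evaluate Newton's divided-difference interpolating polynomial at x."""
    n = len(xs)
    coef = list(ys)                          # coef[j] becomes f[x_j, ..., x_0]
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    # Nested evaluation of b0 + b1(x-x0) + b2(x-x0)(x-x1) + ...
    result = coef[-1]
    for i in range(n - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs = [1.0, 4.0, 6.0, 5.0]
ys = [math.log(v) for v in xs]
print(newton_interp(xs, ys, 2.0))   # ~ 0.6287686, matching Example 3 below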

Example 3
Fit a third-order Newton's interpolating polynomial to the four points used to evaluate the natural logarithm of 2, i.e.
x0 = 1      f(x0) = 0
x1 = 4      f(x1) = 1.386294
x2 = 6      f(x2) = 1.791759
x3 = 5      f(x3) = 1.609438

Example 3 - Solution
The third-order polynomial is
f3(x) = b0 + b1(x - x0) + b2(x - x0)(x - x1) + b3(x - x0)(x - x1)(x - x2)
The values of the coefficients of the polynomial are:
b0 = f(x0) = 0
b1 = f[x1, x0] = 0.462098
b2 = f[x2, x1, x0] = -0.051873
b3 = f[x3, x2, x1, x0] = 0.0078655

Therefore, the third-order polynomial is
f3(x) = 0 + 0.462098(x - x0) - 0.051873(x - x0)(x - x1) + 0.0078655(x - x0)(x - x1)(x - x2)
Then
f3(2) = 0.6287686        (εt = 9.3%)

Lagrange Interpolating Polynomials
The Lagrange interpolating polynomial is a reformulation of the Newton polynomial that avoids the computation of divided differences:

fn(x) = sum over i = 0..n of Li(x) f(xi)

where Li(x) represents the product

Li(x) = product over j = 0..n, j != i, of (x - xj) / (xi - xj)

For n = 1, i.e. the linear (1st-order) version:

f1(x) = [(x - x1)/(x0 - x1)] f(x0) + [(x - x0)/(x1 - x0)] f(x1)

For n = 2, i.e. the quadratic (2nd-order) version:

f2(x) = [(x - x1)(x - x2) / ((x0 - x1)(x0 - x2))] f(x0)
      + [(x - x0)(x - x2) / ((x1 - x0)(x1 - x2))] f(x1)
      + [(x - x0)(x - x1) / ((x2 - x0)(x2 - x1))] f(x2)
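A direct transcription of this formula as a sketch (the order is determined by the number of points supplied):

import math

def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        L = 1.0
        for j in range(n):
            if j != i:
                L *= (x - xs[j]) / (xs[i] - xs[j])   # the product L_i(x)
        total += L * ys[i]
    return total

xs = [1.0, 4.0, 6.0]
ys = [math.log(v) for v in xs]
print(lagrange_interp(xs, ys, 2.0))   # ~ 0.5658444, the second-order estimate of ln 2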

60

Example 4:
Use Lagrange interpolating polynomials of the first and second order to evaluate ln 2 given the following data:
x0 = 1      f(x0) = 0
x1 = 4      f(x1) = 1.386294
x2 = 6      f(x2) = 1.791759

Example 4 - Solution
Using the first-order Lagrange polynomial:
f1(2) = [(2 - 4)/(1 - 4)] (0) + [(2 - 1)/(4 - 1)] (1.386294) = 0.4620981

Using the second-order Lagrange polynomial:
f2(2) = [(2 - 4)(2 - 6) / ((1 - 4)(1 - 6))] (0)
      + [(2 - 1)(2 - 6) / ((4 - 1)(4 - 6))] (1.386294)
      + [(2 - 1)(2 - 4) / ((6 - 1)(6 - 4))] (1.791759)
      = 0.5658444

The results are similar to those of Newton's interpolation.

Interpolation by Spline Functions

Objectives
Introduction
Linear Splines
Quadratic Splines
Cubic Splines

Introduction
Higher-order polynomials can lead to erroneous results because of round-off error and oscillations, which should be avoided.
Lower-order polynomial fits should be used instead.
An alternative approach is to apply lower-order polynomials to subsets of the data points: spline functions.
The name comes from the drafting technique of using a thin flexible strip (a spline) to draw smooth curves through a series of points. Notice how, at the end points, the spline straightens out: this is the natural spline.

Notation used to derive splines: note that there are n-1 intervals and n data points.

Why Splines?
[Figure: 1st-, 4th- and 8th-order polynomial fits compared with the original function f(x) = 1/(1 + 25x²); the higher-order polynomial interpolations oscillate badly, so they are a bad idea.]

Spline Interpolation - Definition
Each interval i has its own spline function Si(x).

Given n distinct knots xi such that

x1 < x2 < ... < x_{n-1} < xn

with n knot values fi, find a spline function

         S1(x)        x in [x1, x2]
S(x) =   S2(x)        x in [x2, x3]
         ...
         S_{n-1}(x)   x in [x_{n-1}, xn]

with each Si(x) a low-degree polynomial.

Linear Splines
The simplest form of spline interpolation.
Given the data (x1, f1), (x2, f2), ..., (x_{n-1}, f_{n-1}), (xn, fn),
consecutive data points are connected by straight lines.
Each Si is a linear interpolation function constructed as:

Si(x) = ai + bi (x - xi)

Using the formulation of Newton linear interpolation, the linear splines can be written as:

S1(x) = f(x1) + [(f(x2) - f(x1)) / (x2 - x1)] (x - x1)                              x in [x1, x2]
S2(x) = f(x2) + [(f(x3) - f(x2)) / (x3 - x2)] (x - x2)                              x in [x2, x3]
...
S_{n-1}(x) = f(x_{n-1}) + [(f(xn) - f(x_{n-1})) / (xn - x_{n-1})] (x - x_{n-1})     x in [x_{n-1}, xn]

Example 1
Given the following data, evaluate the function at x = 5 using linear splines.

i     1     2     3     4
xi    3.0   4.5   7.0   9.0
fi    2.5   1.0   2.5   0.5

Example 1 - Solution
x = 5 lies between x = 4.5 and x = 7. Hence f(5) is evaluated using the linear spline on the second interval:

S2(x) = f(x2) + [(f(x3) - f(x2)) / (x3 - x2)] (x - x2)
      = 1.0 + [(2.5 - 1.0) / (7.0 - 4.5)] (x - 4.5)

The function value at x = 5 is
f(5) ≈ S2(5) = 1.0 + [(2.5 - 1.0) / (7.0 - 4.5)] (5 - 4.5) = 1.3

Quadratic Splines
Quadratic splines are rarely used for interpolation in practice; they are presented mainly as a stepping stone to cubic splines.
Each Si is a second-order polynomial constructed as:

Si(x) = ai + bi (x - xi) + ci (x - xi)²

The splines are given by:

S1(x) = a1 + b1 (x - x1) + c1 (x - x1)²                                       x in [x1, x2]
S2(x) = a2 + b2 (x - x2) + c2 (x - x2)²                                       x in [x2, x3]
...
S_{n-1}(x) = a_{n-1} + b_{n-1} (x - x_{n-1}) + c_{n-1} (x - x_{n-1})²         x in [x_{n-1}, xn]

We must find ai, bi, ci for i = 1, 2, ..., n-1, i.e. 3(n-1) unknown constants, so 3(n-1) equations are required.

Derivation of Quadratic Splines
1. The function must pass through all the points (continuity condition), i.e. for x = xi we have
Si(xi) = fi = ai + bi (xi - xi) + ci (xi - xi)²,   so   ai = fi
(n-1) equations
2. The function values of adjacent polynomials must be equal at the knots, i.e. for knot i+1 we have
fi + bi (x_{i+1} - xi) + ci (x_{i+1} - xi)² = f_{i+1} + b_{i+1} (x_{i+1} - x_{i+1}) + c_{i+1} (x_{i+1} - x_{i+1})²
Setting hi = x_{i+1} - xi, this simplifies to:
fi + bi hi + ci hi² = f_{i+1}
This equation can be written for knots i = 1, 2, ..., n-1:
(n-1) equations
3. The first derivatives at the interior knots must be equal (adjacent splines are joined smoothly). The quadratic spline can be differentiated to give
S'i(x) = bi + 2 ci (x - xi)
Equating the first derivatives at interior knot i+1 gives:
bi + 2 ci hi = b_{i+1}
(n-2) equations
4. Make one arbitrary choice, i.e. assume the second derivative is zero at the first point:
c1 = 0
(1 equation)

Example 2
Given the following data, evaluate the function at x = 5 using quadratic splines.

i     1     2     3     4
xi    3.0   4.5   7.0   9.0
fi    2.5   1.0   2.5   0.5

Example 2 - Solution
Since there are 4 data points, 3 quadratic splines pass through them:

S1(x) = a1 + b1 (x - 3) + c1 (x - 3)²            x in [3, 4.5]
S2(x) = a2 + b2 (x - 4.5) + c2 (x - 4.5)²        x in [4.5, 7]
S3(x) = a3 + b3 (x - 7) + c3 (x - 7)²            x in [7, 9]

The necessary function and interval-width values are:

i     xi    fi    hi
1     3.0   2.5   4.5 - 3.0 = 1.5
2     4.5   1.0   7.0 - 4.5 = 2.5
3     7.0   2.5   9.0 - 7.0 = 2.0
4     9.0   0.5

Setting up the equations:
1. The function must pass through all the points (ai = fi):
a1 = f1 = 2.5
a2 = f2 = 1.0
a3 = f3 = 2.5

2. Function values of adjacent polynomials must be equal at the knots (fi + bi hi + ci hi² = f_{i+1}):
2.5 + 1.5 b1 + 2.25 c1 = 1.0
1.0 + 2.5 b2 + 6.25 c2 = 2.5
2.5 + 2.0 b3 + 4 c3 = 0.5

3. The first derivatives at the interior knots must be equal (bi + 2 ci hi = b_{i+1}):
b1 + 3 c1 = b2
b2 + 5 c2 = b3

4. Assume the second derivative is zero at the first point:
c1 = 0

With c1 = 0 and a1, a2, a3 known, the remaining five equations can be assembled in matrix form (unknowns ordered b1, b2, c2, b3, c3):

[ 1.5  0    0     0   0 ] [ b1 ]   [ -1.5 ]
[ 0    2.5  6.25  0   0 ] [ b2 ]   [  1.5 ]
[ 0    0    0     2   4 ] [ c2 ] = [ -2   ]
[ 1   -1    0     0   0 ] [ b3 ]   [  0   ]
[ 0    1    5    -1   0 ] [ c3 ]   [  0   ]

These equations can be solved with the results:
b1 = -1.0     b2 = -1.0     b3 = 2.2
c2 = 0.64     c3 = -1.6
along with a1 = 2.5, a2 = 1.0, a3 = 2.5 and c1 = 0.

The quadratic splines are then given by

S1(x) = 2.5 - (x - 3)                               x in [3, 4.5]
S2(x) = 1.0 - (x - 4.5) + 0.64 (x - 4.5)²           x in [4.5, 7]
S3(x) = 2.5 + 2.2 (x - 7) - 1.6 (x - 7)²            x in [7, 9]

Because x = 5 lies in the second interval, we use S2 to make the prediction:

S2(5) = 1.0 - (5 - 4.5) + 0.64 (5 - 4.5)² = 0.66

Cubic Splines
Third-order curves used to connect each pair of data points are called cubic splines:

Si(x) = ai + bi (x - xi) + ci (x - xi)² + di (x - xi)³

With n data points there are n-1 intervals, hence 4(n-1) unknown constants, so 4(n-1) conditions are needed.
The first three conditions are identical to those used for the quadratic splines:
1. The spline functions must pass through all the data points.
2. The function values must be equal at the interior knots.
3. The first derivatives at the interior knots must be equal.
4. The second derivatives at the interior knots must be equal.
5. The second derivatives at the end knots are zero,
or
5'. The first derivatives at the first and last knots are specified.

Derivation of Cubic Splines
1. The function must pass through all the points (continuity condition), i.e. for x = xi:
ai = fi                                                          (1)
2. The function values of adjacent polynomials must be equal at the knots, i.e. for knot i+1:
fi + bi hi + ci hi² + di hi³ = f_{i+1}                           (2)
3. The first derivatives at the interior knots must be equal; at interior knot i+1:
bi + 2 ci hi + 3 di hi² = b_{i+1}                                (3)
4. The second derivatives at the interior knots must also be equal. Differentiating the cubic spline twice gives
S''i(x) = 2 ci + 6 di (x - xi)
so equating second derivatives at interior knot i+1 gives:
ci + 3 di hi = c_{i+1}                                           (4)
5. The second derivatives at the end knots are zero:
c1 = 0,   cn = 0                                                 (5)

Solve Eq. (4) for di:
di = (c_{i+1} - ci) / (3 hi)                                     (6)

Substitute into Eq. (2) and (3):
fi + bi hi + (hi²/3)(2 ci + c_{i+1}) = f_{i+1}                   (7)
b_{i+1} = bi + hi (ci + c_{i+1})                                 (8)

Solve Eq. (7) for bi:
bi = (f_{i+1} - fi)/hi - (hi/3)(2 ci + c_{i+1})                  (9)

Reduce the index of Eq. (9) and (8) by 1:
b_{i-1} = (fi - f_{i-1})/h_{i-1} - (h_{i-1}/3)(2 c_{i-1} + ci)   (10)
bi = b_{i-1} + h_{i-1} (c_{i-1} + ci)                            (11)

Substitute Eq. (9) and (10) into Eq. (11):

h_{i-1} c_{i-1} + 2(h_{i-1} + hi) ci + hi c_{i+1} = 3( f[x_{i+1}, xi] - f[xi, x_{i-1}] )      (12)

Equation (12), together with c1 = cn = 0, can be written in matrix form as

[ 1                                                 ] [ c1      ]   [ 0                                          ]
[ h1   2(h1 + h2)   h2                              ] [ c2      ]   [ 3( f[x3, x2] - f[x2, x1] )                 ]
[        ...            ...          ...            ] [ ...     ] = [ ...                                        ]
[        h_{n-2}   2(h_{n-2} + h_{n-1})   h_{n-1}   ] [ c_{n-1} ]   [ 3( f[xn, x_{n-1}] - f[x_{n-1}, x_{n-2}] )  ]
[                                                 1 ] [ cn      ]   [ 0                                          ]
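In practice this tridiagonal system is usually assembled and solved by a library; a sketch with SciPy's cubic spline, where bc_type='natural' imposes the zero second derivatives at the end knots (assuming SciPy is available):

import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([3.0, 4.5, 7.0, 9.0])
f = np.array([2.5, 1.0, 2.5, 0.5])

# 'natural' boundary conditions: second derivative zero at the first and last knots
spline = CubicSpline(x, f, bc_type='natural')
print(spline(5.0))   # ~ 1.103, compare Example 3 below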

80

Example 3
Given the following data, evaluate the function at x = 5 using cubic splines.

xi    3.0   4.5   7.0   9.0
fi    2.5   1.0   2.5   0.5

Example 3 - Solution
Since there are 4 data points, 3 cubic splines pass through them:

S1(x) = a1 + b1 (x - 3) + c1 (x - 3)² + d1 (x - 3)³              x in [3, 4.5]
S2(x) = a2 + b2 (x - 4.5) + c2 (x - 4.5)² + d2 (x - 4.5)³        x in [4.5, 7]
S3(x) = a3 + b3 (x - 7) + c3 (x - 7)² + d3 (x - 7)³              x in [7, 9]

The necessary function and interval-width values are:

i     xi    fi    hi
1     3.0   2.5   4.5 - 3.0 = 1.5
2     4.5   1.0   7.0 - 4.5 = 2.5
3     7.0   2.5   9.0 - 7.0 = 2.0
4     9.0   0.5

Determine the c coefficients from the set of simultaneous equations:

[ 1                              ] [ c1 ]   [ 0                          ]
[ h1   2(h1 + h2)   h2           ] [ c2 ]   [ 3( f[x3, x2] - f[x2, x1] ) ]
[      h2   2(h2 + h3)   h3      ] [ c3 ] = [ 3( f[x4, x3] - f[x3, x2] ) ]
[                              1 ] [ c4 ]   [ 0                          ]

[ 1    0    0    0 ] [ c1 ]   [  0   ]
[ 1.5  8    2.5  0 ] [ c2 ]   [  4.8 ]
[ 0    2.5  9    2 ] [ c3 ] = [ -4.8 ]
[ 0    0    0    1 ] [ c4 ]   [  0   ]

c1 = 0,   c2 = 0.8395,   c3 = -0.7665,   c4 = 0

Use equations (9) and (6) to compute bi and di:

bi = (f_{i+1} - fi)/hi - (hi/3)(2 ci + c_{i+1})
di = (c_{i+1} - ci) / (3 hi)

b1 = -1.42     b2 = -0.16     b3 = 0.022
d1 = 0.187     d2 = -0.214    d3 = 0.128

The cubic splines for each interval are:

S1(x) = 2.5 - 1.42 (x - 3) + 0.187 (x - 3)³                                    x in [3, 4.5]
S2(x) = 1.0 - 0.16 (x - 4.5) + 0.84 (x - 4.5)² - 0.214 (x - 4.5)³              x in [4.5, 7]
S3(x) = 2.5 + 0.022 (x - 7) - 0.767 (x - 7)² + 0.128 (x - 7)³                  x in [7, 9]

The function value at x = 5 is calculated using S2:

S2(5) = 1.0 - 0.16 (5 - 4.5) + 0.84 (5 - 4.5)² - 0.214 (5 - 4.5)³ = 1.103

Example - Summary
[Figure: spline fits of the set of four points given in the example: (a) linear spline, (b) quadratic spline, and (c) cubic spline.]

Curve Fitting and Interpolation


Least Squares Regression

Objectives
Introduction
Linear regression
Polynomial regression
Multiple linear regression
General linear least squares
Nonlinear regression
Curve Fitting
Experimentation produces data at discrete points or times, while estimates are often required at points between the discrete values.
Curves are fit to the data in order to estimate the intermediate values.
There are two methods, depending on the error in the data:
Interpolation
- Precise data
- Force the curve through each data point
[Figure: temperature (deg F) versus time (s) data points connected by an interpolating curve.]
Regression
- Noisy data
- Represent the trend of the data
[Figure: noisy f(x) data with a straight regression line capturing the general trend.]

Least Squares Regression
Experimental data is noisy (it contains errors or inaccuracies); the x values are accurate, the y values are not.
We want the relationship between x and y = f(x), fitting the general trend without matching individual points.
Derive a curve that minimizes the discrepancy between the data points and the curve: least-squares regression.

Linear Regression: Definition
Noisy data from an experiment:

x     2.10   6.22   7.17   10.5   13.7
y     2.90   3.83   5.98   5.71   7.74

A straight line characterizes the trend without passing through any particular point.

Linear Regression: criteria for a best fit
How do we measure the goodness of fit of the line to the data?
[Figure: data points (xi, yi), the regression model y = a0 + a1 x, and the residuals e = y - (a0 + a1 x) between each point and the line.]

Use the curve that minimizes the residuals between the data points and the line.
Model: y = a0 + a1 x, so the prediction at xi is a0 + a1 xi and the residual is

ei = yi - a0 - a1 xi

The sum of the squared residuals is

Sr = sum over i = 1..n of ei² = sum of (yi - a0 - a1 xi)²

Find the values of a0 and a1 that minimize Sr by taking the derivatives of Sr with respect to a0 and a1 and setting them to zero:

dSr/da0 = -2 sum of (yi - a0 - a1 xi) = 0
dSr/da1 = -2 sum of (yi - a0 - a1 xi) xi = 0

This gives a set of two simultaneous linear equations in the two unknowns a0 and a1 (the normal equations):

n a0 + (sum of xi) a1 = sum of yi
(sum of xi) a0 + (sum of xi²) a1 = sum of xi yi

The normal equations can be solved simultaneously for:

a1 = [ n sum(xi yi) - sum(xi) sum(yi) ] / [ n sum(xi²) - (sum(xi))² ]
a0 = ybar - a1 xbar
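A sketch that applies these two formulas directly (NumPy assumed; names are illustrative):

import numpy as np

def linear_fit(x, y):
    """Least-squares slope a1 and intercept a0 from the normal equations."""
    n = len(x)
    a1 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
    a0 = np.mean(y) - a1 * np.mean(x)
    return a0, a1

x = np.array([2.10, 6.22, 7.17, 10.5, 13.7])
y = np.array([2.90, 3.83, 5.98, 5.71, 7.74])
print(linear_fit(x, y))   # approximately (2.04, 0.402), compare Example 1 below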

90

Example 1:
Fit a straight line to the values in the following table.

i      xi      yi      xi²       xi yi
1      2.10    2.90    4.41      6.09
2      6.22    3.83    38.69     23.82
3      7.17    5.98    51.41     42.88
4      10.5    5.71    110.25    59.96
5      13.7    7.74    187.69    106.04
Sum    39.69   26.16   392.32    238.74

The resulting least-squares line is
y = 2.038392 + 0.4023190 x

Linear Regression: Quantification of Error
Suppose we have data points (xi, yi) and modeled (predicted) points (xi, yhat_i) from the model yhat = f(x).
The data {yi} have two types of variation: (i) variation explained by the model and (ii) variation not explained by the model.
Residual sum of squares (variation not explained by the model):

Sr = sum over i of (yi - yhat_i)²

Total sum of squares of the deviations from the mean:

St = sum over i of (yi - ybar)²

The coefficient of determination is

r² = (St - Sr) / St

For a perfect fit, Sr = 0 and r = r² = 1, signifying that the line explains 100% of the variability of the data.
For r = r² = 0, Sr = St and the fit represents no improvement over simply using the mean.
In addition to r² and r, define the standard error of the estimate

Sy|x = sqrt( Sr / (n - 2) )

which represents the spread of the residuals around the regression line:
- Large Sy|x means large residuals
- Small Sy|x means small residuals
Example 2
Compute the total standard deviation, the standard error of the estimate, and the correlation coefficient for the data in Example 1.

Example 2 - Solution

i      xi      yi      (yi - ybar)²    (yi - a0 - a1 xi)²
1      2.1     2.9     5.4382          0.0003
2      6.22    3.83    1.9656          0.5045
3      7.17    5.98    0.5595          1.1183
4      10.5    5.71    0.2285          0.3049
5      13.7    7.74    6.2901          0.0363
Sum    39.69   26.16   14.4819         1.9643

The total standard deviation is

Sy = sqrt( St / (n - 1) ) = sqrt( 14.4819 / (5 - 1) ) = 1.9028

The standard error of the estimate is

Sy|x = sqrt( Sr / (n - 2) ) = sqrt( 1.9643 / (5 - 2) ) = 0.8092

The coefficient of determination and correlation coefficient are

r² = (St - Sr) / St = (14.4819 - 1.9643) / 14.4819 = 0.8644
r = sqrt(0.8644) = 0.9297

Linearization of Nonlinear Relationships
If the relationship is an exponential function y = a e^(bx):
to make it linear, take the logarithm of both sides,
ln y = ln a + b x
Now it's a linear relation between ln(y) and x.

If the relationship is a power function y = a x^b:
to make it linear, take the logarithm of both sides,
ln y = ln a + b ln x
Now it's a linear relation between ln(y) and ln(x).

Polynomial Regression
Minimize the residuals between the data points and the curve (least-squares regression) for polynomial models:

Linear:      yi ≈ a0 + a1 xi
Quadratic:   yi ≈ a0 + a1 xi + a2 xi²
Cubic:       yi ≈ a0 + a1 xi + a2 xi² + a3 xi³
General:     yi ≈ a0 + a1 xi + a2 xi² + a3 xi³ + ... + am xi^m

We must find the values of a0, a1, a2, ..., am.

Residual:
ei = yi - a0 - a1 xi - a2 xi² - ... - am xi^m
Sum of squared residuals:
Sr = sum over i of ei² = sum of (yi - a0 - a1 xi - a2 xi² - ... - am xi^m)²

Minimizing Sr by taking derivatives with respect to each coefficient and setting them to zero yields the normal equations:

[ n            sum xi         sum xi²         ...  sum xi^m      ] [ a0 ]   [ sum yi      ]
[ sum xi       sum xi²        sum xi³         ...  sum xi^(m+1)  ] [ a1 ]   [ sum xi yi   ]
[ sum xi²      sum xi³        sum xi⁴         ...  sum xi^(m+2)  ] [ a2 ] = [ sum xi² yi  ]
[ ...                                                            ] [ ...]   [ ...         ]
[ sum xi^m     sum xi^(m+1)   sum xi^(m+2)    ...  sum xi^(2m)   ] [ am ]   [ sum xi^m yi ]
Example 3
Fit a third-order polynomial to the data given in the table below.
x   0    1.0  1.5  2.3  2.5  4.0  5.1  6.0  6.5  7.0  8.1  9.0
y   0.2  0.8  2.5  2.5  3.5  4.3  3.0  5.0  3.5  2.4  1.3  2.0
x   9.3  11.0 11.3 12.1 13.1 14.0 15.5 16.0 17.5 17.8 19.0 20.0
y   -0.3 -1.3 -3.0 -4.0 -4.9 -4.0 -5.2 -3.0 -3.5 -1.6 -1.4 -0.1

Example 3 - Solution
The normal equations are:

[ 24        229.6      3060.2      46342.79    ] [ a0 ]   [ -1.30       ]
[ 229.6     3060.2     46342.79    752835.2    ] [ a1 ]   [ -316.88     ]
[ 3060.2    46342.79   752835.2    12780148.0  ] [ a2 ] = [ -6037.242   ]
[ 46342.79  752835.2   12780148.0  223518120.0 ] [ a3 ]   [ -99433.597  ]

Solving gives:
a0 = -0.35934718
a1 = 2.3051112
a2 = -0.35319014
a3 = 0.01206020

Regression Equation
y = -0.359 + 2.305 x - 0.353 x² + 0.012 x³
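Rather than assembling the normal equations by hand, such a fit is usually obtained from a library; a sketch with NumPy's polyfit, which returns the coefficients from the highest power down (data taken from Example 3 above):

import numpy as np

x = np.array([0, 1.0, 1.5, 2.3, 2.5, 4.0, 5.1, 6.0, 6.5, 7.0, 8.1, 9.0,
              9.3, 11.0, 11.3, 12.1, 13.1, 14.0, 15.5, 16.0, 17.5, 17.8, 19.0, 20.0])
y = np.array([0.2, 0.8, 2.5, 2.5, 3.5, 4.3, 3.0, 5.0, 3.5, 2.4, 1.3, 2.0,
              -0.3, -1.3, -3.0, -4.0, -4.9, -4.0, -5.2, -3.0, -3.5, -1.6, -1.4, -0.1])

coeffs = np.polyfit(x, y, 3)     # third-order least-squares fit
print(coeffs)                    # approximately [0.012, -0.353, 2.305, -0.359]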
[Figure: the data points and the fitted cubic regression curve y = -0.359 + 2.305x - 0.353x² + 0.012x³.]

Multiple Linear Regression
y = a0 + a1 x1 + a2 x2 + e
The treatment is again very similar: minimize the sum of the squared residuals e.
Polynomial and multiple regression both fall within the definition of General Linear Least Squares:

y = a0 z0 + a1 z1 + a2 z2 + ... + am zm + e

where z0, z1, ..., zm are m+1 basis functions. In matrix form:

{Y} = [Z]{A} + {E}

[Z] = matrix of the values of the basis functions evaluated at the measured values of the independent variable
{Y} = observed values of the dependent variable
{A} = unknown coefficients
{E} = residuals

Sr = sum over i of ( yi - sum over j of aj zji )²

Sr is minimized by taking its partial derivative with respect to each of the coefficients and setting the resulting equations equal to zero.

Not all equations can be cast in the General Linear Least Squares form, e.g. y = a0 (1 - e^(-a1 x)) + e.
Such models are solved with nonlinear least squares using iterative methods like the Gauss-Newton method.
The equation can sometimes be transformed into a linear form.
Caveat: when fitting transformed data you minimize the residuals of the transformed data you are working with, which may not give exactly the same fit as nonlinear regression on the untransformed data.

Numerical Differentiation and Integration


Numerical Differentiation

Objectives
Introduction
Taylor Series
Forward difference method
Backward difference method
Central difference method
Introduction
Calculus is the mathematics of change. Because engineers must continuously deal with systems and processes that change, calculus is an essential tool of engineering.
Standing at the heart of calculus are the mathematical concepts of differentiation and integration:

Δy/Δx = [f(xi + Δx) - f(xi)] / Δx

dy/dx = limit as Δx → 0 of [f(xi + Δx) - f(xi)] / Δx

I = ∫ from a to b of f(x) dx

Integration and differentiation are closely linked processes; they are, in fact, inversely related.
Types of functions to be differentiated or integrated:
1. Simple polynomial, exponential, or trigonometric functions - analytically
2. Complicated functions - numerically
3. Tabulated functions of experimental data - numerically

Applications
Differentiation has many engineering applications (heat transfer, fluid dynamics, chemical reaction kinetics, etc.).
Integration is equally used in engineering (computing work in mechanical engineering, nonuniform forces in structural engineering, the cross-sectional area of a river, etc.).

Differentiation

Δy/Δx = [f(xi + Δx) - f(xi)] / Δx
dy/dx = limit as Δx → 0 of [f(xi + Δx) - f(xi)] / Δx

The finite difference becomes a derivative as Δx approaches zero.

[Figures: integration as the area under f(x), and differentiation vs. integration as inverse operations.]

Taylor Series Expansion
Non-elementary functions such as trigonometric, exponential, and other functions are expressed in an approximate fashion using Taylor series when their values, derivatives, and integrals are computed.
Any smooth function can be approximated by a polynomial.
The Taylor series provides a means to predict the value of a function at one point in terms of the function value and its derivatives at another point.

Numerical Application of the Taylor Series
If f(x) and its first n+1 derivatives are continuous on an interval containing xi+1 and xi, then:

f(xi+1) = f(xi) + f'(xi)(xi+1 - xi) + [f''(xi)/2!](xi+1 - xi)²
          + [f'''(xi)/3!](xi+1 - xi)³ + ... + [f^(n)(xi)/n!](xi+1 - xi)^n + Rn

where the remainder Rn is defined as

Rn = [f^(n+1)(ξ)/(n+1)!] (xi+1 - xi)^(n+1)

and ξ is a value of x that lies somewhere between xi and xi+1.

The series is built term by term:
Zero-order approximation:    f(xi+1) ≈ f(xi)
1st-order approximation:     f(xi+1) ≈ f(xi) + f'(xi)(xi+1 - xi)
2nd-order approximation:     f(xi+1) ≈ f(xi) + f'(xi)(xi+1 - xi) + [f''(xi)/2!](xi+1 - xi)²
Continuing to add terms gives better approximations, until the full series with remainder Rn is recovered.

Taylor Series - the Remainder Term
Limitations:
ξ is not exactly known; it lies somewhere between xi and xi+1.
To evaluate Rn, the (n+1)th derivative of f(x) has to be determined, which requires knowing f(x); but if f(x) were known there would be no need to perform the Taylor series expansion!
Modification:
Rn = O(h^(n+1)): the truncation error is of the order of h^(n+1), where h = xi+1 - xi.
If the error is O(h), halving the step size will halve the error.
If the error is O(h²), halving the step size will quarter the error.
In general, the truncation error is decreased by the addition of more terms in the Taylor series.

Forward Difference Formulas - 1st derivative
The 2nd-order Taylor series expansion of f(x) can be written as:

f(xi+1) = f(xi) + f'(xi) h + [f''(xi)/2] h² + O(h³)                       (1)

Then the first derivative can be expressed as:

f'(xi) = [f(xi+1) - f(xi)] / h - [f''(xi)/2] h + O(h²)                    (2)

Given that f''(x) can be approximated by:

f''(xi) = [f(xi+2) - 2 f(xi+1) + f(xi)] / h² + O(h)                       (3)

Substituting equation (3) into equation (2) and collecting terms:

f'(xi) = [-f(xi+2) + 4 f(xi+1) - 3 f(xi)] / (2h) + O(h²)

Note that the inclusion of the second-derivative term has improved the accuracy to O(h²).

Forward Difference Formulas - 2nd derivative
Start with the Lagrange interpolation polynomial for f(x) based on the four points xi, xi+1, xi+2 and xi+3.
Differentiate the products in the numerators twice.
Substitute x = xi and use the fact that xj - xi = (j - i) h.
The expression for the second derivative is then:

f''(xi) = [-f(xi+3) + 4 f(xi+2) - 5 f(xi+1) + 2 f(xi)] / h² + O(h²)
Backward Difference Formulas - 1st derivative
Using a backward step in the Taylor series expansion,

f(xi-1) = f(xi) - f'(xi) h + [f''(xi)/2] h² - O(h³)

and given that f''(x) can be approximated by

f''(xi) = [f(xi) - 2 f(xi-1) + f(xi-2)] / h² + O(h)

the second-order estimate of f'(x) can be obtained:

f'(xi) = [3 f(xi) - 4 f(xi-1) + f(xi-2)] / (2h) + O(h²)

Backward Difference Formulas - 2nd derivative
Start with the Lagrange interpolation polynomial for f(x) based on the four points xi, xi-1, xi-2 and xi-3.
Differentiate the products in the numerators twice.
Substitute x = xi and use the fact that xj - xi = (j - i) h.
The expression for the second derivative is then:

f''(xi) = [2 f(xi) - 5 f(xi-1) + 4 f(xi-2) - f(xi-3)] / h² + O(h²)

Centered Difference Formulas - 1st derivative [O(h²)]
Start with the 2nd-degree Taylor expansions about x for f(x+h) and f(x-h):

f(xi+1) = f(xi) + f'(xi) h + [f''(xi)/2] h² + O(h³)        (4)
f(xi-1) = f(xi) - f'(xi) h + [f''(xi)/2] h² - O(h³)        (5)

Subtract (5) from (4):

f(xi+1) - f(xi-1) = 2 f'(xi) h + O(h³)

Hence

f'(xi) = [f(xi+1) - f(xi-1)] / (2h) + O(h²)
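A quick numerical check of the basic two-point forward and backward differences and the centered difference above (a sketch; the function and step size are those of Example 1 later in this section):

def f(x):
    return -0.1*x**4 - 0.15*x**3 - 0.5*x**2 - 0.25*x + 1.2

x, h = 0.5, 0.25

forward  = (f(x + h) - f(x)) / h               # O(h) forward difference
backward = (f(x) - f(x - h)) / h               # O(h) backward difference
centered = (f(x + h) - f(x - h)) / (2 * h)     # O(h^2) centered difference

print(forward, backward, centered)   # -1.155, -0.714, -0.934 versus the exact value -0.9125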

Centered Difference Formulas - 1st derivative [O(h⁴)]
Start with the difference between the 4th-degree Taylor expansions about x for f(x+h) and f(x-h):

f(xi+1) - f(xi-1) = 2 f'(xi) h + [2 f'''(xi)/3!] h³ + O(h⁵)                  (6)

Use the step size 2h, instead of h, in (6):

f(xi+2) - f(xi-2) = 4 f'(xi) h + [16 f'''(xi)/3!] h³ + O(h⁵)                 (7)

Multiply equation (6) by 8, subtract (7) from it, and solve for f'(x):

f'(xi) = [-f(xi+2) + 8 f(xi+1) - 8 f(xi-1) + f(xi-2)] / (12h) + O(h⁴)

Centered Difference Formulas - 2nd derivative [O(h²)]
Start with the 3rd-degree Taylor expansions about x for f(x+h) and f(x-h):

f(xi+1) = f(xi) + f'(xi) h + [f''(xi)/2] h² + [f'''(xi)/3!] h³ + O(h⁴)       (8)
f(xi-1) = f(xi) - f'(xi) h + [f''(xi)/2] h² - [f'''(xi)/3!] h³ + O(h⁴)       (9)

Add equations (8) and (9) and solve for f''(x):

f''(xi) = [f(xi+1) - 2 f(xi) + f(xi-1)] / h² + O(h²)

Centered Difference Formulas - 2nd derivative [O(h⁴)]
Start with the sum of the 5th-degree Taylor expansions about x for f(x+h) and f(x-h):

f(xi+1) + f(xi-1) = 2 f(xi) + [2 f''(xi)/2!] h² + [2 f⁽⁴⁾(xi)/4!] h⁴ + O(h⁶)      (10)

Use the step size 2h, instead of h, in (10):

f(xi+2) + f(xi-2) = 2 f(xi) + [8 f''(xi)/2!] h² + [32 f⁽⁴⁾(xi)/4!] h⁴ + O(h⁶)     (11)

Multiply equation (10) by 16, subtract (11) from it, and solve for f''(x):

f''(xi) = [-f(xi+2) + 16 f(xi+1) - 30 f(xi) + 16 f(xi-1) - f(xi-2)] / (12h²) + O(h⁴)

Error Analysis
Generally, when numerical differentiation is performed, only about half the accuracy of which the computer is capable is obtained, unless we are fortunate enough to find an optimal step size.
The total error has a part due to round-off error plus a part due to truncation error:
Total numerical error = truncation error + round-off error.
There is a trade-off between truncation and round-off errors: decreasing the step size reduces the truncation error but increases the relative effect of round-off.

Example 1
Estimate the first and second derivatives of

f(x) = 1.2 - 0.25 x - 0.5 x² - 0.15 x³ - 0.1 x⁴

at x = 0.5 with h = 0.25 using
a) forward finite-divided differences
b) centered finite-divided differences
c) backward finite-divided differences

Example 1 - Solution
a) Forward differences
1st derivative. The data needed is:
xi = 0.5       f(xi) = 0.925
xi+1 = 0.75    f(xi+1) = 0.636328
xi+2 = 1       f(xi+2) = 0.2

First derivative:
f'(xi) = [-f(xi+2) + 4 f(xi+1) - 3 f(xi)] / (2h)
f'(0.5) = [-f(1) + 4 f(0.75) - 3 f(0.5)] / (2(0.25)) = -0.859375
(true value = -0.9125, so εt = 5.82%)

2nd derivative. The data needed is:
xi = 0.5       f(xi) = 0.925
xi+1 = 0.75    f(xi+1) = 0.636328
xi+2 = 1       f(xi+2) = 0.2
xi+3 = 1.25    f(xi+3) = -0.430859

Second derivative:
f''(xi) = [-f(xi+3) + 4 f(xi+2) - 5 f(xi+1) + 2 f(xi)] / h²
f''(0.5) = [0.430859 + 4(0.2) - 5(0.636328) + 2(0.925)] / (0.25)² = -1.6125
(true value = -1.75)

b) Centered finite-divided differences
The data needed is:
xi-2 = 0       f(xi-2) = 1.2
xi-1 = 0.25    f(xi-1) = 1.103516
xi+1 = 0.75    f(xi+1) = 0.636328
xi+2 = 1       f(xi+2) = 0.2

First derivative:
f'(xi) = [-f(xi+2) + 8 f(xi+1) - 8 f(xi-1) + f(xi-2)] / (12h)
f'(0.5) = [-(0.2) + 8(0.636328) - 8(1.103516) + 1.2] / (12(0.25)) = -0.9125
(true value = -0.9125, so εt = 0%)

c) Backward finite-divided differences
The data needed is:
xi-2 = 0       f(xi-2) = 1.2
xi-1 = 0.25    f(xi-1) = 1.103516
xi = 0.5       f(xi) = 0.925

First derivative:
f'(xi) = [3 f(xi) - 4 f(xi-1) + f(xi-2)] / (2h)
f'(0.5) = [3(0.925) - 4(1.103516) + 1.2] / (2(0.25)) = -0.878125
(true value = -0.9125, so εt = 3.77%)

Numerical Differentiation and Integration


Numerical Integration (I)

Objectives
Introduction
Trapezoidal Rule
Multiple-Application Trapezoidal Rule
Simpson's 1/3 Rule
Multiple-Application Simpson's 1/3 Rule
Simpson's 3/8 Rule
Integration with unequal segments
Gauss Quadrature
Integration - Graphical representation
[Figure: the definite integral I = ∫ from a to b of f(x) dx as the area under the curve f(x).]

Integration - Engineering applications
[Figure: examples of integration in engineering practice.]

Newton-Cotes Integration Formulas
The Newton-Cotes formulas are the most common numerical integration schemes.
They are based on replacing a complicated function or tabulated data with an approximating function that is easy to integrate:

I = ∫ from a to b of f(x) dx ≈ ∫ from a to b of fn(x) dx

where fn(x) is a polynomial of order n:

fn(x) = a0 + a1 x + a2 x² + ... + a_{n-1} x^(n-1) + an x^n

Closed and open forms:
Closed: the data points at the beginning and end of the limits of integration are known.
Open: the integration limits extend beyond the range of the data.

Trapezoidal Rule
The trapezoidal rule is the first of the Newton-Cotes closed integration formulas, corresponding to the case where the polynomial is first order, i.e. n = 1:

I = ∫ from a to b of f(x) dx ≈ ∫ from a to b of f1(x) dx

f1(x) is expressed using the linear interpolation formula:

f1(x) = f(a) + [(f(b) - f(a)) / (b - a)] (x - a)

The area under this straight line is an estimate of the integral of f(x):

I ≈ ∫ from a to b of { f(a) + [(f(b) - f(a)) / (b - a)] (x - a) } dx

The result of this integration is

I = (b - a) [f(a) + f(b)] / 2            (Trapezoidal rule)

Area of a trapezoid = (width) x (average height).

Error of the Trapezoidal Rule
The truncation error is expressed as:

Et = -(1/12) f''(ξ) (b - a)³

An approximation of the second derivative is given by the average value

f''(ξ) ≈ f''bar = [∫ from a to b of f''(x) dx] / (b - a)

The approximate error estimate is then:

Ea = -(1/12) f''bar (b - a)³

Example 1
Use the trapezoidal rule to numerically integrate

f(x) = 0.2 + 25 x - 200 x² + 675 x³ - 900 x⁴ + 400 x⁵

from a = 0 to b = 0.8.

Example 1 - Solution
Evaluate f(a) and f(b):
f(0) = 0.2
f(0.8) = 0.232
Evaluate the integral:

I = (0.8 - 0) (0.2 + 0.232) / 2 = 0.1728

Approximate error estimate:

f''bar = [∫ from 0 to 0.8 of (-400 + 4050 x - 10800 x² + 8000 x³) dx] / (0.8 - 0) = -60

Ea = -(1/12) f''bar (b - a)³ = -(1/12)(-60)(0.8 - 0)³ = 2.56

The true value of the integral is 1.640533, so
Et = 1.640533 - 0.1728 = 1.467733      (εt = 89.5%)
Ea = 2.56
The discrepancy between Et and Ea is due to the fact that f''bar is not an accurate approximation of f''(ξ) for an interval of this size.

Multiple-Application or Composite Trapezoidal Rule


Dividing the integration interval from a to b into n segments of equal width improves the accuracy of the trapezoidal rule.
The total integral can be represented as

I = \int_{x_0}^{x_1} f(x) dx + \int_{x_1}^{x_2} f(x) dx + ... + \int_{x_{n-1}}^{x_n} f(x) dx

Substituting the trapezoidal rule for each integral, we get

I = h \frac{f(x_0) + f(x_1)}{2} + h \frac{f(x_1) + f(x_2)}{2} + ... + h \frac{f(x_{n-1}) + f(x_n)}{2}

where h = (b - a)/n is the segment width. By grouping terms, we have the Multiple-Application Trapezoidal Rule:

I = \frac{h}{2} \left[ f(x_0) + 2 \sum_{i=1}^{n-1} f(x_i) + f(x_n) \right]

Error of Multiple-Application Trapezoidal Rule

The truncation error is obtained by summing the individual errors for each segment:

E_t = -\frac{(b - a)^3}{12 n^3} \sum_{i=1}^{n} f''(\xi_i)

By estimating the average value of f'' for the entire interval,

\bar{f''} \approx \frac{\sum_{i=1}^{n} f''(\xi_i)}{n}   so that   \sum_{i=1}^{n} f''(\xi_i) \approx n \bar{f''}

the approximate error estimate for the multiple-application trapezoidal rule is:

E_a = -\frac{(b - a)^3}{12 n^2} \bar{f''}

As you double the number of segments, the error is quartered.
Example 2
Use the two-segment trapezoidal rule to estimate the integral of:

f(x) = 0.2 + 25x - 200x^2 + 675x^3 - 900x^4 + 400x^5

from a = 0 to b = 0.8


Example 2 - Solution
We have n = 2

h = \frac{0.8 - 0}{2} = 0.4

f(0) = 0.2,   f(0.4) = 2.456,   f(0.8) = 0.232

Evaluate the integral:

I = (0.8 - 0) \frac{0.2 + 2(2.456) + 0.232}{4} = 1.0688

True and Approximate error estimate


E_t = 1.640533 - 1.0688 = 0.571733

E_a = -\frac{(0.8 - 0)^3}{12(2)^2} (-60) = 0.64

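A short Python sketch of the composite trapezoidal rule, reproducing Example 2 and showing how the error falls as n grows (names are illustrative):

def f(x):
    return 0.2 + 25*x - 200*x**2 + 675*x**3 - 900*x**4 + 400*x**5

def composite_trapezoid(f, a, b, n):
    # I = (h/2) [ f(x0) + 2*sum(interior f(xi)) + f(xn) ], h = (b - a)/n
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):          # interior points get a weight of 2
        total += 2.0 * f(a + i * h)
    return (h / 2.0) * total

true_I = 1.640533
for n in (1, 2, 4, 8):
    I = composite_trapezoid(f, 0.0, 0.8, n)
    print(f"n = {n}: I = {I:.6f}, Et = {true_I - I:.6f}")   # n = 2 gives 1.0688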
Simpson's 1/3 Rule


It corresponds to the case where n = 2 i.e. a polynomial of
second order:

I = \int_a^b f(x) dx \approx \int_a^b f_2(x) dx

Let a = x_0 and b = x_2, and let f_2(x) be a second-order Lagrange polynomial. Then

I = \int_{x_0}^{x_2} \left[ \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} f(x_0) + \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)} f(x_1) + \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)} f(x_2) \right] dx

After integration, we have:


I = \frac{h}{3} \left[ f(x_0) + 4 f(x_1) + f(x_2) \right]          Simpson's 1/3 rule

where h = \frac{b - a}{2}.

Estimated truncation error:

E_a = -\frac{(b - a)^5}{2880} \bar{f^{(4)}},   where   \bar{f^{(4)}} = \frac{\int_a^b f^{(4)}(x) dx}{b - a}

Example 3
Evaluate the integral of:

f(x) = 0.2 + 25x - 200x^2 + 675x^3 - 900x^4 + 400x^5

from a = 0 to b = 0.8 using Simpson's 1/3 rule.
Compute the truncation errors Et and Ea
Example 3 - Solution
Evaluate the integral with h = (b-a)/2 = 0.4
I = \frac{h}{3} \left[ f(x_0) + 4 f(x_1) + f(x_2) \right] = \frac{0.4}{3} \left[ 0.2 + 4(2.456) + 0.232 \right] = 1.367467

True truncation error


E_t = 1.640533 - 1.367467 = 0.2730667,   ε_t = 16.6%

Approximate truncation error:

f^{(4)}(x) = -21600 + 48000x

\bar{f^{(4)}} = \frac{\int_0^{0.8} f^{(4)}(x) dx}{0.8 - 0} = -2400

E_a = -\frac{(b - a)^5}{2880} \bar{f^{(4)}} = -\frac{(0.8)^5}{2880} (-2400) = 0.2730667

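A minimal Python sketch of the single application of Simpson's 1/3 rule used in Example 3 (helper names are illustrative):

def f(x):
    return 0.2 + 25*x - 200*x**2 + 675*x**3 - 900*x**4 + 400*x**5

def simpson13(f, a, b):
    # I = (h/3) [ f(x0) + 4 f(x1) + f(x2) ], h = (b - a)/2
    h = (b - a) / 2.0
    return (h / 3.0) * (f(a) + 4.0 * f(a + h) + f(b))

I = simpson13(f, 0.0, 0.8)
print("I  =", round(I, 6))               # 1.367467
print("Et =", round(1.640533 - I, 6))    # 0.273067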
Multiple-Application of Simpson's 1/3 Rule


Just as with the trapezoidal rule, Simpson's 1/3 rule can be improved by dividing the integration interval into n segments of equal width.
The total integral can then be represented by
I = \frac{h}{3} \left[ f(x_0) + 4 \sum_{i=1,3,5,...}^{n-1} f(x_i) + 2 \sum_{j=2,4,6,...}^{n-2} f(x_j) + f(x_n) \right]

E_a = -\frac{(b - a)^5}{180 n^4} \bar{f^{(4)}}

Example 4:
Use n = 4 to evaluate the integral of:

f(x) = 0.2 + 25x - 200x^2 + 675x^3 - 900x^4 + 400x^5

from a = 0 to b = 0.8 using Simpson's 1/3 rule.


Compute the truncation errors Et and Ea

Example 4 - Solution
The integral using the composite Simpson's 1/3 rule, with h = (0.8 - 0)/4 = 0.2:

f(0) = 0.2,   f(0.2) = 1.288,   f(0.4) = 2.456,   f(0.6) = 3.464,   f(0.8) = 0.232

I = \frac{0.2}{3} \left[ 0.2 + 4(1.288) + 2.456 \right] + \frac{0.2}{3} \left[ 2.456 + 4(3.464) + 0.232 \right] = 1.623466

E_t = 1.640533 - 1.623466 = 0.017067,   ε_t = 1.0%

Approximate truncation error:

E_a = -\frac{(b - a)^5}{180 n^4} \bar{f^{(4)}} = -\frac{(0.8)^5}{180 (4)^4} (-2400) = 0.017067

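The same computation as a composite Simpson's 1/3 sketch in Python (the even-segment check and names are illustrative):

def f(x):
    return 0.2 + 25*x - 200*x**2 + 675*x**3 - 900*x**4 + 400*x**5

def composite_simpson13(f, a, b, n):
    # I = (h/3) [ f(x0) + 4*sum(odd i) + 2*sum(even i) + f(xn) ], n must be even
    if n % 2 != 0:
        raise ValueError("Simpson's 1/3 rule needs an even number of segments")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4.0 if i % 2 == 1 else 2.0) * f(a + i * h)
    return (h / 3.0) * total

I = composite_simpson13(f, 0.0, 0.8, 4)
print("I  =", round(I, 6))               # 1.623467 (cf. Example 4)
print("Et =", round(1.640533 - I, 6))    # 0.017067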

Simpson's 3/8 Rule


A 3rd-order polynomial is fit to 4 points and integrated to yield

I = \frac{3h}{8} \left[ f(x_0) + 3 f(x_1) + 3 f(x_2) + f(x_3) \right],   where   h = \frac{b - a}{3}

Simpson's 3/8 rule has an error of

E_t = -\frac{(b - a)^5}{6480} f^{(4)}(\xi)

There is also a multiple segment version.


Combination of Simpson's 1/3 and 3/8 rules
Combinations of Simpson's 1/3 and 3/8 rules can also be used for unevenly spaced data.

Example 5
1. Use Simpson's 3/8 rule to integrate:

f(x) = 0.2 + 25x - 200x^2 + 675x^3 - 900x^4 + 400x^5

from a = 0 to b = 0.8

2. Use Simpson's 1/3 and 3/8 rules together to integrate the same function for five segments.

Example 5 - Solution
1. A single application of Simpson's 3/8 rule requires 4 equally spaced points:

h = (0.8 - 0)/3 = 0.2667

f(0) = 0.2,   f(0.2667) = 1.432724,   f(0.5333) = 3.487177,   f(0.8) = 0.232

I = \frac{3(0.2667)}{8} \left[ 0.2 + 3(1.432724 + 3.487177) + 0.232 \right] = 1.519170

2. For five segments:

h = (0.8 - 0)/5 = 0.16

f(0) = 0.2,   f(0.16) = 1.296919,   f(0.32) = 1.743393,
f(0.48) = 3.186015,   f(0.64) = 3.181929,   f(0.8) = 0.232

The integral for the first 2 segments (Simpson's 1/3 rule):

I = \frac{0.16}{3} \left[ 0.2 + 4(1.296919) + 1.743393 \right] = 0.3803237

For the last 3 segments (Simpson's 3/8 rule):

I = \frac{3(0.16)}{8} \left[ 1.743393 + 3(3.186015 + 3.181929) + 0.232 \right] = 1.264754

The total integral:

I = 0.3803237 + 1.264754 = 1.645077

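A Python sketch of Simpson's 3/8 rule and the 1/3 + 3/8 combination of Example 5 (function and helper names are illustrative):

def f(x):
    return 0.2 + 25*x - 200*x**2 + 675*x**3 - 900*x**4 + 400*x**5

def simpson13(f, x0, x2):
    h = (x2 - x0) / 2.0
    return (h / 3.0) * (f(x0) + 4.0*f(x0 + h) + f(x2))

def simpson38(f, x0, x3):
    h = (x3 - x0) / 3.0
    return (3.0 * h / 8.0) * (f(x0) + 3.0*f(x0 + h) + 3.0*f(x0 + 2*h) + f(x3))

# 1. Single application of the 3/8 rule over [0, 0.8]
print("3/8 rule:", round(simpson38(f, 0.0, 0.8), 6))        # about 1.51917

# 2. Five segments of width h = 0.16: 1/3 rule on the first two segments,
#    3/8 rule on the last three.
h = 0.16
I = simpson13(f, 0.0, 2*h) + simpson38(f, 2*h, 5*h)
print("1/3 + 3/8:", round(I, 6))                            # about 1.645077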

Remarks: Trapezoidal & Simpson's Rules


Rather than applying the trapezoidal rule with ever finer segments, go to higher-order polynomials:
Quadratic is used: Simpson's 1/3
Must be used with an even number of segments
Cubic is used: Simpson's 3/8
Odd number of segments
Slightly more accurate than the 1/3 rule but requires more points
Simpson's 1/3 is preferred since it needs fewer data points for the same order of accuracy.
Therefore, for an odd number of segments, use the 1/3 rule for all but the last 3 segments, then use the 3/8 rule.
Integration with unequal segments
Experimental data is often not equally spaced
All the previous equations were for equally spaced data
points
If some points in the data set are equally spaced, fit them with Simpson's rules
Fit the rest of the segments with individual trapezoidal rules

Gauss Quadrature

Gauss quadrature implements a strategy of positioning two


points on a curve to define a straight line that would
balance the positive and negative errors.
The area evaluated under this straight line provides a faster and improved estimate of the integral compared to the trapezoidal and Simpson's rules.
The positions of these points are found using the Gauss-Legendre formulas.

Method of Undetermined Coefficients

I \approx c_0 f(a) + c_1 f(b)
The trapezoidal rule yields exact results when the function
being integrated is a constant or a straight line, such as y=1
and y=x:

c_0 + c_1 = \int_{-(b-a)/2}^{(b-a)/2} 1 dx = b - a

-c_0 \frac{b - a}{2} + c_1 \frac{b - a}{2} = \int_{-(b-a)/2}^{(b-a)/2} x dx = 0

Solving simultaneously gives

c_0 = c_1 = \frac{b - a}{2}

so that

I = \frac{b - a}{2} f(a) + \frac{b - a}{2} f(b)

which is equivalent to the trapezoidal rule.

Two-Point Gauss-Legendre Formula


The object of Gauss quadrature is to determine the
equations of the form
I \approx c_0 f(x_0) + c_1 f(x_1)
However, in contrast to the trapezoidal rule, which uses fixed end points a and b, the function arguments x_0 and x_1 are not fixed end points but unknowns.
Thus, there are four unknowns to be evaluated, which require four conditions.
The first two conditions are obtained by assuming that the above equation fits the integral of a constant and of a linear function exactly.
The other two conditions are obtained by extending this reasoning to parabolic and cubic functions.

c_0 f(x_0) + c_1 f(x_1) = \int_{-1}^{1} 1 dx = 2

c_0 f(x_0) + c_1 f(x_1) = \int_{-1}^{1} x dx = 0

c_0 f(x_0) + c_1 f(x_1) = \int_{-1}^{1} x^2 dx = \frac{2}{3}

c_0 f(x_0) + c_1 f(x_1) = \int_{-1}^{1} x^3 dx = 0

Solving these four equations simultaneously gives

c_0 = c_1 = 1,   x_0 = -\frac{1}{\sqrt{3}},   x_1 = \frac{1}{\sqrt{3}}

so the Two-Point Gauss-Legendre Formula is

I \approx f\left(-\frac{1}{\sqrt{3}}\right) + f\left(\frac{1}{\sqrt{3}}\right)

which yields an integral estimate that is third-order accurate.

Notice that the integration limits are from -1 to 1. This was done for simplicity and to make the formulation as general as possible.

A simple change of variable can be used to translate other limits of integration into the interval from -1 to 1. Let

x = a_0 + a_1 x_d

If x = a and x = b correspond to x_d = -1 and x_d = 1 respectively, then

a = a_0 - a_1,   b = a_0 + a_1

which gives

a_0 = \frac{b + a}{2},   a_1 = \frac{b - a}{2}

This yields

x = \frac{(b + a) + (b - a) x_d}{2}   and   dx = \frac{b - a}{2} dx_d

Example 6
Use two-point Gauss quadrature to integrate:

f(x) = 0.2 + 25x - 200x^2 + 675x^3 - 900x^4 + 400x^5

from a = 0 to b = 0.8

Example 6 - Solution
First, perform the change of variable:

x = 0.4 + 0.4 x_d,   dx = 0.4 dx_d

Evaluate the integral:

I = \int_0^{0.8} (0.2 + 25x - 200x^2 + 675x^3 - 900x^4 + 400x^5) dx

  = \int_{-1}^{1} \left[ 0.2 + 25(0.4 + 0.4x_d) - 200(0.4 + 0.4x_d)^2 + 675(0.4 + 0.4x_d)^3 - 900(0.4 + 0.4x_d)^4 + 400(0.4 + 0.4x_d)^5 \right] 0.4 dx_d

The right-hand side is now suitable for Gauss quadrature:

I \approx f\left(-\frac{1}{\sqrt{3}}\right) + f\left(\frac{1}{\sqrt{3}}\right) = 0.516741 + 1.305837 = 1.822578

ε_t = 11.1%

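A Python sketch of two-point Gauss-Legendre quadrature with the change of variable above, applied to Example 6 (names are illustrative):

import math

def f(x):
    return 0.2 + 25*x - 200*x**2 + 675*x**3 - 900*x**4 + 400*x**5

def gauss_legendre_2pt(f, a, b):
    # nodes +-1/sqrt(3) with weights c0 = c1 = 1 on [-1, 1]
    xd = 1.0 / math.sqrt(3.0)
    half_width, midpoint = (b - a) / 2.0, (b + a) / 2.0
    total = f(midpoint - half_width * xd) + f(midpoint + half_width * xd)
    return half_width * total          # the factor (b - a)/2 comes from dx = (b-a)/2 dxd

I = gauss_legendre_2pt(f, 0.0, 0.8)
print("I =", round(I, 6))                                           # about 1.822578
print("eps_t = %.1f%%" % (abs((1.640533 - I) / 1.640533) * 100))    # about 11.1%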

Higher-Point Gauss Quadrature Formulas


Higher-point formulas can be developed in the general
form
I \approx c_0 f(x_0) + c_1 f(x_1) + ... + c_{n-1} f(x_{n-1})
This requires the function to be known, since it is unlikely that tabulated data points will fall exactly at the required locations.
Provided that the higher-order derivatives do not increase substantially with an increasing number of points (n), Gauss quadrature is superior to the Newton-Cotes formulas.

E_t = \frac{2^{2n+3} \left[ (n + 1)! \right]^4}{(2n + 3) \left[ (2n + 2)! \right]^3} f^{(2n+2)}(\xi)

Error for the Gauss-Legendre formulas


Numerical Solutions of Ordinary Differential Equations


Initial Value Problem: Euler's Method
Objectives
Introduction
Euler's Method
Convergence analysis
Error analysis
Error estimate
Introduction
Differential equations: equations composed of an unknown
function and its derivatives.
\frac{dv(t)}{dt} = g - \frac{c}{m} v(t)

where v is the dependent variable and t is the independent variable.

Differential equations play a fundamental role in


engineering because many physical phenomena are best
formulated mathematically in terms of their rate of change.


Fundamental laws of physics, mechanics, electricity and


thermodynamics are written in terms of the rate of change
of variables (t = time and x = position):

Newton's second law of motion:

\frac{dv}{dt} = \frac{F}{m}

Fourier's heat law:

q = -k' \frac{dT}{dx}

Laplace equation (steady-state heat conduction):

\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} = 0

Heat conduction equation:

\frac{\partial T}{\partial t} = k' \frac{\partial^2 T}{\partial x^2}

etc.
One independent variable: Ordinary Differential Equation (ODE)
Two or more independent variables: Partial Differential Equation (PDE)
The order of a differential equation is determined by the highest derivative.


Higher order equations can be reduced to a system of first


order equations, by redefining a variable.
An ODE is accompanied by auxiliary conditions. These conditions are used to evaluate the integration constants that result during the solution of the equation. An nth-order equation requires n conditions.
If all conditions are specified at the same value of the
independent variable, then we have an initial-value
problem.
If the conditions are specified at different values of the
independent variable, usually at extreme points or
boundaries of a system, then we have a boundary-value
problem.

Euler's Method
Solve ordinary differential equations of the form

\frac{dy}{dx} = f(x, y)

with initial condition

y(x_i) = y_i

The estimate of the solution is

y_{i+1} = y_i + \phi h

where the slope estimate \phi = f(x_i, y_i) is the differential equation evaluated at x_i and y_i, so the estimate is given by

y_{i+1} = y_i + f(x_i, y_i) h

This is Euler's method (also called the Euler-Cauchy or point-slope method).

Error Analysis for Euler's Method


The numerical solution of an ODE involves two types of error:
1. Round-off errors: caused by the limited number of significant digits that can be retained by a computer.
2. Truncation errors:
a) Local truncation error: due to the application of the
method over a single step.
b) Propagated truncation error: due to approximation
produced during previous steps.


Total or global truncation error = local truncation error +


propagated truncation error.
Error Estimate for Euler's Method
The general form of the differential equation being
integrated:
y' = \frac{dy}{dx} = f(x, y)                                                        (1)

Taylor series expansion of y about a starting value (x_i, y_i):

y_{i+1} = y_i + y'_i h + \frac{y''_i}{2!} h^2 + ... + \frac{y_i^{(n)}}{n!} h^n + O(h^{n+1})          (2)

Substituting (1) into (2) gives

y_{i+1} = y_i + f(x_i, y_i) h + \frac{f'(x_i, y_i)}{2!} h^2 + ... + \frac{f^{(n-1)}(x_i, y_i)}{n!} h^n + O(h^{n+1})    (3)

Subtracting the Euler's method equation from (3) gives the truncation error:

E_t = \frac{f'(x_i, y_i)}{2!} h^2 + ... + \frac{f^{(n-1)}(x_i, y_i)}{n!} h^n + O(h^{n+1})

Example 1:
1. Use Euler's method to numerically integrate the ODE:

\frac{dy}{dx} = -2x^3 + 12x^2 - 20x + 8.5

from x = 0 to x = 4 with a step size of 0.5, given the initial condition y = 1 at x = 0.
2. Estimate and tabulate the errors
Example 1 - Solution
1. Using Euler's method

y_{i+1} = y_i + f(x_i, y_i) h,   with h = 0.5 and y(0) = 1

Evaluate the first prediction, y(0.5):

f(0, 1) = -2(0)^3 + 12(0)^2 - 20(0) + 8.5 = 8.5
y(0.5) = 1 + (8.5)(0.5) = 5.25

Evaluate the second prediction, y(1):

y(1) = y(0.5) + f(0.5, 5.25)(0.5)
f(0.5, 5.25) = -2(0.5)^3 + 12(0.5)^2 - 20(0.5) + 8.5 = 1.25
y(1) = 5.25 + (1.25)(0.5) = 5.875

Evaluate the third prediction, y(1.5):

y(1.5) = y(1) + f(1, 5.875)(0.5)
f(1, 5.875) = -2(1)^3 + 12(1)^2 - 20(1) + 8.5 = -1.5
y(1.5) = 5.875 + (-1.5)(0.5) = 5.125

x      y_Euler
0.0    1.000
0.5    5.250
1.0    5.875
1.5    5.125
2.0    4.500
2.5    4.750
3.0    5.875
3.5    7.125
4.0    7.000

2. Estimate the truncation error


E_t = \frac{f'(x_i, y_i)}{2!} h^2 + \frac{f''(x_i, y_i)}{3!} h^3 + \frac{f^{(3)}(x_i, y_i)}{4!} h^4

For the first step:

f'(x_i, y_i) = -6x^2 + 24x - 20,      f'(0, 1) = -20
f''(x_i, y_i) = -12x + 24,            f''(0, 1) = 24
f^{(3)}(x_i, y_i) = -12,              f^{(3)}(0, 1) = -12

E_t = \frac{-20}{2} (0.5)^2 + \frac{24}{6} (0.5)^3 + \frac{-12}{24} (0.5)^4 = -2.03125

x        y_Euler    E_t
0.00000  1.00000    0.00000
0.50000  5.25000   -2.03125
1.00000  5.87500   -2.87500
1.50000  5.12500   -2.90625
2.00000  4.50000   -2.50000
2.50000  4.75000   -2.03125
3.00000  5.87500   -1.87500
3.50000  7.12500   -2.40625
4.00000  7.00000   -4.00000

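A minimal Python sketch of Euler's method for Example 1, reproducing the y_Euler column above (variable names are illustrative):

def dydx(x, y):
    return -2*x**3 + 12*x**2 - 20*x + 8.5

def euler(f, x0, y0, h, x_end):
    x, y = x0, y0
    history = [(x, y)]
    while x < x_end - 1e-12:          # guard against floating-point drift
        y = y + f(x, y) * h           # y_{i+1} = y_i + f(x_i, y_i) * h
        x = x + h
        history.append((x, y))
    return history

for x, y in euler(dydx, 0.0, 1.0, 0.5, 4.0):
    print(f"x = {x:3.1f}   y = {y:7.5f}")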
Remarks about Euler's Method


The Taylor series provides only an estimate of the local
truncation error. It does not give a measure of the global
truncation error.
The global truncation error is O(h), that is, it is proportional
to the step size.
The error can be reduced by decreasing the step size.


If the underlying function, y(x), is linear, the method provides error-free predictions. This is because for a straight line the second derivative would be zero.

Numerical Solutions of Ordinary Differential


Equations
Initial Value Problem: Runge-Kutta Methods

Objectives
Introduction
Runge-Kutta Methods
Second-order Runge-Kutta
Heun Method
Midpoint Method
Ralston's Method
Fourth-order Runge-Kutta
System of ODE
Introduction
A fundamental source of error in Euler's method is that the derivative at the beginning of the interval is assumed to apply across the entire interval.
Improving the estimate of the slope will improve the
solution.
Runge-Kutta Methods


Runge-Kutta (RK) methods achieve the accuracy of a


Taylor series without requiring the calculation of higher
derivatives.
For a differential equation of the form:
\frac{dy}{dx} = f(x, y)

the estimate of the solution is given by

y_{i+1} = y_i + \phi(x_i, y_i, h) h

where \phi(x_i, y_i, h) is the increment function, which represents the slope over the interval.

The increment function can be written in general form as

\phi = a_1 k_1 + a_2 k_2 + a_3 k_3 + ... + a_n k_n

where a_1, a_2, a_3, ..., a_n are constants and the k's are

k_1 = f(x_i, y_i)
k_2 = f(x_i + p_1 h, y_i + q_{11} k_1 h)
k_3 = f(x_i + p_2 h, y_i + q_{21} k_1 h + q_{22} k_2 h)
.
.
.
k_n = f(x_i + p_{n-1} h, y_i + q_{n-1,1} k_1 h + q_{n-1,2} k_2 h + ... + q_{n-1,n-1} k_{n-1} h)
where p_1, p_2, ..., p_{n-1} and the q_{ij} are all constants, and n is the order of the RK method.
The k's are recurrence relationships; because each k is a functional evaluation, this recurrence makes RK methods efficient for computer calculations.
Various types of RK methods can be devised by employing different numbers of terms in the increment function, as specified by n.
Once n is chosen, values of the a's, p's, and q's are evaluated by setting the general equation equal to terms in a Taylor series expansion.
The first-order RK method (n = 1) is Euler's method.
Second-order Runge-Kutta
The second-order version of the Runge-Kutta methods is
Second-order Runge-Kutta
The second-order version of Runge-Kutta methods is
y_{i+1} = y_i + (a_1 k_1 + a_2 k_2) h          (i)

where

k_1 = f(x_i, y_i)
k_2 = f(x_i + p_1 h, y_i + q_{11} k_1 h)

Values for a_1, a_2, p_1 and q_{11} are evaluated by setting equation (i) equal to a Taylor series expansion through the second-order term (see Box 25.1, p. 703, for details).

Three equations are derived for the four unknown constants:

a_1 + a_2 = 1
a_2 p_1 = 1/2
a_2 q_{11} = 1/2

Since there are three equations with four unknowns, we


must assume a value of one of the unknowns to determine
the other three.
Suppose that we specify a value for a_2; then

a_1 = 1 - a_2

p_1 = q_{11} = \frac{1}{2 a_2}

Because we can choose an infinite number of values for a_2, there are an infinite number of second-order RK methods.
Every version would yield exactly the same results if the solution to the ODE were quadratic, linear, or constant.
However, they yield different results if the solution is more complicated (which is typically the case).
Three of the most commonly used second-order RK methods are:
Heun's Method with a Single Corrector (a_2 = 1/2)
The Midpoint Method (a_2 = 1)
Ralston's Method (a_2 = 2/3)
Heun's Method with a Single Corrector
a_2 is set to 1/2, so that

a_1 = 1 - a_2 = 1/2

p_1 = q_{11} = \frac{1}{2 a_2} = 1

Substituting into equation (i):

y_{i+1} = y_i + \left( \frac{1}{2} k_1 + \frac{1}{2} k_2 \right) h

where

k_1 = f(x_i, y_i)
k_2 = f(x_i + h, y_i + k_1 h)

k_1 is the slope at the beginning of the interval and k_2 is the slope at the end of the interval.
The Midpoint Method
a_2 is set to 1, so that

a_1 = 0,   p_1 = q_{11} = \frac{1}{2}

Substituting into equation (i):

y_{i+1} = y_i + k_2 h

where

k_1 = f(x_i, y_i)
k_2 = f(x_i + \frac{1}{2} h, y_i + \frac{1}{2} k_1 h)

Ralston's Method
a_2 is set to 2/3, so that

a_1 = \frac{1}{3},   p_1 = q_{11} = \frac{3}{4}

Substituting into equation (i):

y_{i+1} = y_i + \left( \frac{1}{3} k_1 + \frac{2}{3} k_2 \right) h

where

k_1 = f(x_i, y_i)
k_2 = f(x_i + \frac{3}{4} h, y_i + \frac{3}{4} k_1 h)

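The three variants above differ only in a_2, so they can be written as one Python routine. The sketch below is illustrative; the test ODE is borrowed from the Euler example (dy/dx = -2x^3 + 12x^2 - 20x + 8.5, y(0) = 1, h = 0.5) purely to show usage.

def rk2_step(f, x, y, h, a2):
    # Generic second-order RK step: Heun a2 = 1/2, midpoint a2 = 1, Ralston a2 = 2/3
    a1 = 1.0 - a2
    p1 = q11 = 1.0 / (2.0 * a2)
    k1 = f(x, y)
    k2 = f(x + p1 * h, y + q11 * k1 * h)
    return y + (a1 * k1 + a2 * k2) * h

def dydx(x, y):
    return -2*x**3 + 12*x**2 - 20*x + 8.5

for name, a2 in [("Heun", 0.5), ("Midpoint", 1.0), ("Ralston", 2.0/3.0)]:
    print(name, rk2_step(dydx, 0.0, 1.0, 0.5, a2))
    # for reference, the exact solution of this ODE gives y(0.5) = 3.21875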

Fourth-order Runge-Kutta
This is the most popular of the RK methods

y_{i+1} = y_i + \frac{1}{6} (k_1 + 2 k_2 + 2 k_3 + k_4) h

where

k_1 = f(x_i, y_i)
k_2 = f(x_i + \frac{1}{2} h, y_i + \frac{1}{2} k_1 h)
k_3 = f(x_i + \frac{1}{2} h, y_i + \frac{1}{2} k_2 h)
k_4 = f(x_i + h, y_i + k_3 h)

Example 1
Integrate using fourth-order Runge-Kutta method:

\frac{dy}{dx} = 4 e^{0.8x} - 0.5 y

Using h = 0.5 with y(0) = 2 from x = 0 to 0.5.

Example 1 - Solution
Compute the slope at the beginning of the interval:

k_1 = f(0, 2) = 4 e^{0.8(0)} - 0.5(2) = 3

Use this slope to compute a value of y and the slope at the midpoint:

y(0.25) = 2 + 3(0.25) = 2.75
k_2 = f(0.25, 2.75) = 4 e^{0.8(0.25)} - 0.5(2.75) = 3.510611

Use this slope to compute another value of y and slope at the midpoint:

y(0.25) = 2 + 3.510611(0.25) = 2.877653
k_3 = f(0.25, 2.877653) = 4 e^{0.8(0.25)} - 0.5(2.877653) = 3.446785

Use this slope to compute the value of y and the slope at the end of the interval:

y(0.5) = 2 + 3.446785(0.5) = 3.723392
k_4 = f(0.5, 3.723392) = 4 e^{0.8(0.5)} - 0.5(3.723392) = 4.105603

Evaluate the final prediction at the end of the interval:

y_{i+1} = y_i + \frac{1}{6} (k_1 + 2 k_2 + 2 k_3 + k_4) h

y(0.5) = 2 + \frac{1}{6} \left[ 3 + 2(3.510611) + 2(3.446785) + 4.105603 \right] (0.5) = 3.751699
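The same single step in Python, as a minimal sketch of the classical fourth-order RK method (function names are illustrative):

import math

def dydx(x, y):
    return 4.0 * math.exp(0.8 * x) - 0.5 * y

def rk4_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h/2.0, y + k1 * h/2.0)
    k3 = f(x + h/2.0, y + k2 * h/2.0)
    k4 = f(x + h, y + k3 * h)
    return y + (k1 + 2*k2 + 2*k3 + k4) * h / 6.0

print(rk4_step(dydx, 0.0, 2.0, 0.5))   # about 3.7517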
System of Equations
Many practical problems in engineering require the solution of a system of simultaneous ODEs rather than a single equation:
\frac{dy_1}{dx} = f_1(x, y_1, y_2, ..., y_n)

\frac{dy_2}{dx} = f_2(x, y_1, y_2, ..., y_n)

.
.
.

\frac{dy_n}{dx} = f_n(x, y_1, y_2, ..., y_n)

Solution requires that n initial conditions be known at the


starting value x.
Procedure for Solution Using RK Methods
1. Compute the slopes for all variables at the beginning of the interval (the set of k_1's).
2. Use these slopes to make predictions of the dependent variables (y_1, y_2, ...) at the midpoint of the interval. Use the midpoint values to compute a first set of slopes at the midpoint of the interval (the set of k_2's).
3. The new set of slopes k_2 is used to make another set of midpoint predictions (y_1, y_2, ...). Use this latest set of midpoint predictions to compute a second set of midpoint slopes (the set of k_3's).
4. Use the values of k_3 to make predictions (y_1, y_2, ...) at the end of the interval. Use the end-of-interval predictions to compute the endpoint slopes (the set of k_4's).
5. Combine the k's into a set of increment functions and bring them back to the beginning of the interval to make the final prediction of (y_1, y_2, ...).

Example 2
Solve the following set of differential equations using the 4th
order RK method, assuming that at x = 0, y1 = 4 and y2 = 6.
Integrate to x = 2 with a step size of 0.5

\frac{dy_1}{dx} = -0.5 y_1 = f_1(x, y_1, y_2)

\frac{dy_2}{dx} = 4 - 0.3 y_2 - 0.1 y_1 = f_2(x, y_1, y_2)

Example 2 - Solution
1. Compute the slopes for all variables at the beginning of the interval (the set of k_1's):

k_{1,1} = f_1(0, 4, 6) = -0.5(4) = -2

k_{1,2} = f_2(0, 4, 6) = 4 - 0.3(6) - 0.1(4) = 1.8

2. Use the set of k_1's to make predictions of (y_1, y_2) at the midpoint of the interval, and use these values to compute the first set of midpoint slopes (the set of k_2's):

y_1(0.25) = y_1(0) + k_{1,1} \frac{h}{2} = 4 + (-2)(0.25) = 3.5

y_2(0.25) = y_2(0) + k_{1,2} \frac{h}{2} = 6 + (1.8)(0.25) = 6.45

k_{2,1} = f_1(0.25, 3.5, 6.45) = -0.5(3.5) = -1.75

k_{2,2} = f_2(0.25, 3.5, 6.45) = 4 - 0.3(6.45) - 0.1(3.5) = 1.715

3. Use the set of k_2's to make another set of midpoint predictions (y_1, y_2), and use them to compute the second set of midpoint slopes (the set of k_3's):

y_1(0.25) = y_1(0) + k_{2,1} \frac{0.5}{2} = 4 + (-1.75)(0.25) = 3.5625

y_2(0.25) = y_2(0) + k_{2,2} \frac{0.5}{2} = 6 + (1.715)(0.25) = 6.42875

k_{3,1} = f_1(0.25, 3.5625, 6.42875) = -0.5(3.5625) = -1.78125

k_{3,2} = f_2(0.25, 3.5625, 6.42875) = 4 - 0.3(6.42875) - 0.1(3.5625) = 1.715125

4. Use the values of k_3 to make predictions (y_1, y_2) at the end of the interval, and use them to compute the endpoint slopes (the set of k_4's):

y_1(0.5) = y_1(0) + k_{3,1} h = 4 + (-1.78125)(0.5) = 3.109375

y_2(0.5) = y_2(0) + k_{3,2} h = 6 + (1.715125)(0.5) = 6.857563

k_{4,1} = f_1(0.5, 3.109375, 6.857563) = -0.5(3.109375) = -1.554688

k_{4,2} = f_2(0.5, 3.109375, 6.857563) = 4 - 0.3(6.857563) - 0.1(3.109375) = 1.631794

5. Combine the k's into a set of increment functions and bring them back to the beginning of the interval to make the final predictions:

y_1(0.5) = 4 + \frac{1}{6} \left[ (-2) + 2(-1.75) + 2(-1.78125) + (-1.554688) \right] (0.5) = 3.115234

y_2(0.5) = 6 + \frac{1}{6} \left[ (1.8) + 2(1.715) + 2(1.715125) + (1.631794) \right] (0.5) = 6.857670

Repeating the procedure for the remaining steps gives the following results:

x      y1          y2
0.0    4.000000    6.000000
0.5    3.115234    6.857670
1.0    2.426171    7.632106
1.5    1.889523    8.326886
2.0    1.471577    8.946865

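The whole march to x = 2 can be written as a short Python sketch of RK4 for a system (helper names are illustrative):

def f(x, y):
    y1, y2 = y
    return [-0.5 * y1, 4.0 - 0.3 * y2 - 0.1 * y1]

def rk4_system_step(f, x, y, h):
    def axpy(a, k):                       # y + a*k, element by element
        return [yi + a * ki for yi, ki in zip(y, k)]
    k1 = f(x, y)
    k2 = f(x + h/2, axpy(h/2, k1))
    k3 = f(x + h/2, axpy(h/2, k2))
    k4 = f(x + h, axpy(h, k3))
    return [yi + (a + 2*b + 2*c + d) * h / 6.0
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

x, y, h = 0.0, [4.0, 6.0], 0.5
while x < 2.0 - 1e-12:
    y = rk4_system_step(f, x, y, h)
    x += h
    print(f"x = {x:3.1f}   y1 = {y[0]:.6f}   y2 = {y[1]:.6f}")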

Numerical Solutions of Ordinary Differential Equations


Boundary Value Problems

Objectives
Introduction
Shooting Method
Finite Difference Method
Introduction
An ODE is accompanied by auxiliary conditions. These conditions are used to evaluate the integration constants that result during the solution of the equation. An nth-order equation requires n conditions.
If all conditions are specified at the same value of the
independent variable, then we have an initial-value
problem.
If the conditions are specified at different values of the
independent variable, usually at extreme points or
boundaries of a system, then we have a boundary-value
problem.
Initial-value versus boundary-value problems


Initial-value problem where all the conditions are specified


at the same value of the independent variable.
Boundary-value problem where the conditions are specified
at different values of the independent variable.

Determination of eigenvalues: a special class of boundary-value problems that are common in engineering, involving vibrations, elasticity, and other oscillating systems.

Two general approaches for solving BVP:


Shooting method
Finite-difference method
Both approaches will be illustrated by an example of heat
balance.
Heat balance problem

Heat balance of a long, thin rod


Rod not insulated along its length and in a steady state


The equation describing the problem is

\frac{d^2 T}{dx^2} + h'(T_a - T) = 0

with T_a = 20, L = 10 m, and h' = 0.01 m^{-2}.

Boundary conditions:

T(0) = T_1 = 40,   T(L) = T_2 = 200

Analytical solution:

T = 73.4523 e^{0.1x} - 53.4523 e^{-0.1x} + 20

The Shooting Method


Converts the boundary-value problem into an initial-value problem.
A trial-and-error approach is then implemented to solve this initial-value problem.

For example, the second-order equation can be expressed as two first-order ODEs:

\frac{dT}{dx} = z

\frac{dz}{dx} = h'(T - T_a)

An initial value is guessed, say z(0) = 10.


The solution is then obtained by integrating the two 1st
order ODEs simultaneously.
Using a 4th order RK method with a step size of 2:
T(10)=168.3797.

This differs from T(10)=200. Therefore a new guess is


made, z(0)=20 and the computation is performed again:
T(10)=285.8980

Because the original ODE is linear, the two sets of points, (z, T)_1 and (z, T)_2, are linearly related, so a linear interpolation formula can be used to compute the value of z(0) that hits T(10) = 200:

z(0) = 10 + \frac{20 - 10}{285.8980 - 168.3797} (200 - 168.3797) = 12.6907

z(0) = 12.6907 is then used to determine the correct


solution.
First shot:       z(0) = 10         T(10) = 168.3797
Second shot:      z(0) = 20         T(10) = 285.8980
Final exact hit:  z(0) = 12.6907    T(10) = 200

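A Python sketch of this linear shooting procedure, using RK4 with a step size of 2 on the equivalent first-order system as described above (constants and names are illustrative):

HP, TA, L, T0, TL = 0.01, 20.0, 10.0, 40.0, 200.0   # h', Ta, rod length, boundary temperatures

def f(x, u):
    T, z = u
    return [z, HP * (T - TA)]          # dT/dx = z, dz/dx = h'(T - Ta)

def rk4_step(f, x, u, h):
    add = lambda v, a, k: [vi + a * ki for vi, ki in zip(v, k)]
    k1 = f(x, u)
    k2 = f(x + h/2, add(u, h/2, k1))
    k3 = f(x + h/2, add(u, h/2, k2))
    k4 = f(x + h, add(u, h, k3))
    return [ui + (a + 2*b + 2*c + d) * h / 6.0
            for ui, a, b, c, d in zip(u, k1, k2, k3, k4)]

def shoot(z0, h=2.0):
    x, u = 0.0, [T0, z0]
    while x < L - 1e-12:
        u = rk4_step(f, x, u, h)
        x += h
    return u[0]                        # T(10) for this guess of z(0)

T1, T2 = shoot(10.0), shoot(20.0)      # about 168.38 and 285.90
# The problem is linear, so interpolate the two shots to hit T(10) = 200:
z0 = 10.0 + (20.0 - 10.0) / (T2 - T1) * (TL - T1)
print("z(0) =", round(z0, 4), "  T(10) =", round(shoot(z0), 4))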

The Shooting Method


Nonlinear Two-Point Problems.
For a nonlinear problem a better approach involves
recasting it as a roots problem.
T_{10} = f(z_0)

200 = f(z_0)

g(z_0) = f(z_0) - 200
Driving this new function, g(z0), to zero provides the


solution.
Finite-Difference Methods
These are the most common alternatives to the shooting method.

Finite differences are substituted for the derivatives in the original equation:

\frac{d^2 T}{dx^2} \approx \frac{T_{i+1} - 2T_i + T_{i-1}}{\Delta x^2}

\frac{T_{i+1} - 2T_i + T_{i-1}}{\Delta x^2} - h'(T_i - T_a) = 0

Collecting terms:

-T_{i-1} + (2 + h' \Delta x^2) T_i - T_{i+1} = h' \Delta x^2 T_a

This finite-difference equation applies at each of the interior nodes. The first and last interior nodes, T_{i-1} and T_{i+1} respectively, are specified by the boundary conditions.
Thus, the linear ODE is transformed into a set of simultaneous algebraic equations. The system is tridiagonal, so it can be solved efficiently.
If we use a segment length \Delta x = 2 m (4 interior nodes):

-T_{i-1} + 2.04 T_i - T_{i+1} = 0.8

which gives the following set of simultaneous algebraic equations.

| 2.04   -1      0      0   | |T1|   |  40.8 |
| -1     2.04   -1      0   | |T2| = |   0.8 |
|  0    -1      2.04   -1   | |T3|   |   0.8 |
|  0     0     -1      2.04 | |T4|   | 200.8 |

which can be solved for

T_1 = 65.9698,   T_2 = 93.7785,   T_3 = 124.5382,   T_4 = 159.4795

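One standard way to solve this tridiagonal system is the Thomas algorithm; the notes only say the system "can be solved efficiently", so the following Python sketch is illustrative (node count, names, and the solver choice are assumptions):

def thomas(a, b, c, d):
    # Solve a tridiagonal system: a = sub-diagonal (a[0] unused), b = main diagonal,
    # c = super-diagonal (c[-1] unused), d = right-hand side.
    n = len(d)
    b, d = b[:], d[:]                     # work on copies
    for i in range(1, n):                 # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# Four interior nodes, dx = 2, h' = 0.01, Ta = 20, T(0) = 40, T(10) = 200
hp, dx, Ta = 0.01, 2.0, 20.0
diag = 2.0 + hp * dx**2                   # 2.04
rhs = [hp * dx**2 * Ta] * 4               # 0.8 at every interior node
rhs[0] += 40.0                            # fold in the known boundary temperatures
rhs[-1] += 200.0
a = [0.0, -1.0, -1.0, -1.0]
b = [diag] * 4
c = [-1.0, -1.0, -1.0, 0.0]
print([round(T, 4) for T in thomas(a, b, c, rhs)])
# expected: roughly [65.97, 93.78, 124.54, 159.48], matching the values above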
