
Function Approximation, Interpolation, and Curve Fitting
APPROXIMATION
Computational procedures used in computer
software for the evaluation of library functions
involve polynomial approximation.

f(x) ≈ pn(x)

For example,

e^x = 1 + x + x²/2! + x³/3! + x⁴/4! + ⋯
Taylor Polynomial Approximation
If f ∈ C^(n+1)[a, b] and x0 ∈ [a, b] is a fixed value,
then

f(x) ≈ pn(x)

where

pn(x) = Σ_{k=0}^{n} [f^(k)(x0) / k!] (x − x0)^k

Note: When x0 = 0, it is called the Maclaurin series.


Polynomial Approximation
e^x = 1 + x + x²/2! + x³/3! + x⁴/4! + ⋯
Interpolation
To estimate a missing function value by a
weighted average of known function
values at neighboring points:

f0 = f(x0), f1 = f(x1), …, fn = f(xn)

Find pn(x) such that


pn(x0) = f0, pn(x1) = f1, …, pn(xn) = fn
How to find pn?
• Solve a linear system for its coefficients
• Lagrange interpolation
• Newton interpolation
• Splines, Chebyshev polynomials, Padé approximations, …
Lagrange Interpolation
• Multiply each fj by a polynomial that is 1 at
xj and 0 at the other nodes, and then take
the sum of these n+1 polynomials to get the
unique interpolation polynomial of degree n
or less.
• Linear interpolation:

y = p1(x) = f0 (x − x1)/(x0 − x1) + f1 (x − x0)/(x1 − x0)
Example 1
Compute ln 9.2 from ln 9.0 = 2.1972, ln 9.5
= 2.2513 by linear Lagrange interpolation
and determine the error from ln 9.2 =
2.2192 (4D).
Solution:
ln 9.2 ≈ p1(9.2) = 2.1972*0.6 + 2.2513*0.4
= 2.2188
ε = 2.2192 – 2.2188 = 0.0004.
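Example 1 can be checked numerically; a minimal Python sketch (the function name is illustrative):

```python
# Linear Lagrange interpolation (Example 1): estimate ln 9.2 from
# ln 9.0 = 2.1972 and ln 9.5 = 2.2513.

def lagrange_linear(x, x0, f0, x1, f1):
    # p1(x) = f0*(x - x1)/(x0 - x1) + f1*(x - x0)/(x1 - x0)
    return f0 * (x - x1) / (x0 - x1) + f1 * (x - x0) / (x1 - x0)

p = lagrange_linear(9.2, 9.0, 2.1972, 9.5, 2.2513)
print(round(p, 4))           # 2.2188
print(round(2.2192 - p, 4))  # 0.0004
```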
Lagrange Interpolation
Quadratic interpolation is interpolation of
given (x0, f0), (x1, f1), (x2, f2) by a second-degree
polynomial p2(x).

p2(x) = L0(x)⋅f0 + L1(x)⋅f1 + L2(x)⋅f2

where L0(x0) = 1, L1(x1) = 1, L2(x2) = 1, and
L0(x1) = L0(x2) = 0, etc.
Example 2
Quadratic interpolation. Compute ln 9.2
from the data in Ex. 1 and ln 11 = 2.3979.
Solution:
L0(x) = x² − 20.5x + 104.5,        L0(9.2) = 0.5400
L1(x) = −(1/0.75)(x² − 20x + 99),  L1(9.2) = 0.4800
L2(x) = (1/3)(x² − 18.5x + 85.5),  L2(9.2) = −0.0200
p2 (9.2) = 2.2192
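The cardinal-polynomial values in Example 2 can be reproduced directly from the nodes; a short Python sketch:

```python
# Quadratic Lagrange interpolation of ln 9.2 (Example 2).
xs = [9.0, 9.5, 11.0]          # nodes
fs = [2.1972, 2.2513, 2.3979]  # ln 9.0, ln 9.5, ln 11

def L(k, x):
    # Cardinal polynomial: 1 at xs[k], 0 at the other nodes.
    v = 1.0
    for j, xj in enumerate(xs):
        if j != k:
            v *= (x - xj) / (xs[k] - xj)
    return v

vals = [round(L(k, 9.2), 4) for k in range(3)]
p2 = sum(L(k, 9.2) * f for k, f in enumerate(fs))
print(vals)          # [0.54, 0.48, -0.02]
print(round(p2, 4))  # 2.2192
```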
Lagrange coefficient polynomials

Example 3
Construct P2(x) for the data points (0, -1),
(1, -1), and (2, 7).

Solution:

P2(x) = (−1)⋅(x − 1)(x − 2)/2 + (−1)⋅x(x − 2)/(−1) + 7⋅x(x − 1)/2
      = 4x² − 4x − 1
Lagrange Interpolation
In general, the nth-degree Lagrange
interpolation polynomial may be written as

f(x) ≈ pn(x) = Σ_{k=0}^{n} Lk(x)⋅fk

where

Lk(x) = Π_{j=0, j≠k}^{n} (x − xj)/(xk − xj)
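The product formula translates directly into code; a small pure-Python sketch (function name illustrative):

```python
# nth-degree Lagrange interpolation, a direct transcription of the
# sum-of-products formula.

def lagrange_interp(xs, fs, x):
    total = 0.0
    for k in range(len(xs)):
        Lk = 1.0                  # build Lk(x) as a running product
        for j in range(len(xs)):
            if j != k:
                Lk *= (x - xs[j]) / (xs[k] - xs[j])
        total += Lk * fs[k]
    return total

# Example 3's parabola through (0, -1), (1, -1), (2, 7) simplifies to
# 4x^2 - 4x - 1, so the interpolant should give 23 at x = 3.
print(lagrange_interp([0, 1, 2], [-1, -1, 7], 3))  # 23.0
```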
Newton Polynomials
• It is sometimes useful to find several
approximating polynomials p1(x), p2(x), …, pn(x)
and then choose the one that suits our needs.
• If the Lagrange polynomials are used, there is
no constructive relationship between pn−1(x) and
pn(x).
• Construct Newton polynomials that have a
recursive pattern.
Newton Polynomials

p1(x) = a0 + a1(x − x0)
p2(x) = a0 + a1(x − x0) + a2(x − x0)(x − x1)
p3(x) = a0 + a1(x − x0) + a2(x − x0)(x − x1)
      + a3(x − x0)(x − x1)(x − x2)
⋮
pn(x) = a0 + a1(x − x0) + ⋯ + an(x − x0)(x − x1)⋯(x − xn−1)
      = pn−1(x) + an(x − x0)(x − x1)⋯(x − xn−1)
How do we find ak?
• p1(x) = a0 + a1(x − x0)

a0 = f(x0) and a1 = (f(x1) − f(x0)) / (x1 − x0)

These are the divided differences f[x0] and f[x0, x1].

• p2(x) = p1(x) + a2(x − x0)(x − x1)

a2 = [ (f(x2) − f(x1))/(x2 − x1) − (f(x1) − f(x0))/(x1 − x0) ] / (x2 − x0)

the Newton divided difference f[x0, x1, x2]
Divided Differences
• The divided differences for a function f(x)
are defined as follows:

f[xk] = f(xk)
f[xk−1, xk] = (f[xk] − f[xk−1]) / (xk − xk−1)
f[xk−2, xk−1, xk] = (f[xk−1, xk] − f[xk−2, xk−1]) / (xk − xk−2)
f[xk−3, xk−2, xk−1, xk] = (f[xk−2, xk−1, xk] − f[xk−3, xk−2, xk−1]) / (xk − xk−3)
Newton Polynomials
Suppose that x0, x1, …, xn are n+1 distinct numbers
in [a, b]. The Newton form of the interpolation
polynomial is
pn(x) = a0 + a1(x − x0) + ⋯ + an(x − x0)(x − x1)⋯(x − xn−1)

where

ak = f[x0, x1, …, xk], for k = 0, 1, …, n
Divided Difference Table
for y = f(x)
Example
Let f(x) = x3 – 4x. Construct the divided-
difference table based on the nodes x0 =
1, x1 = 2, …, x5 = 6, and find the Newton
Polynomial p3(x) based on x0, x1, x2, and
x3.

Solution:
p3(x) = -3+3(x-1)+6(x-1)(x-2)+(x-1)(x-2)(x-3)
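The table and polynomial in this example can be generated programmatically; a Python sketch (helper names are illustrative):

```python
# Divided differences for f(x) = x^3 - 4x at nodes 1..6, and evaluation
# of the Newton polynomial built from them.

def divided_differences(xs, ys):
    # Return the top edge of the table: [f[x0], f[x0,x1], ..., f[x0..xn]].
    n = len(xs)
    col = list(ys)
    coeffs = [col[0]]
    for level in range(1, n):
        col = [(col[i + 1] - col[i]) / (xs[i + level] - xs[i])
               for i in range(n - level)]
        coeffs.append(col[0])
    return coeffs

def newton_eval(coeffs, xs, x):
    # Nested evaluation: a0 + (x-x0)(a1 + (x-x1)(a2 + ...)).
    result = coeffs[-1]
    for a, xk in zip(reversed(coeffs[:-1]), reversed(xs[:len(coeffs) - 1])):
        result = result * (x - xk) + a
    return result

xs = [1, 2, 3, 4, 5, 6]
ys = [x**3 - 4 * x for x in xs]
c = divided_differences(xs, ys)
print(c[:4])                        # [-3, 3.0, 6.0, 1.0]
print(newton_eval(c[:4], xs, 1.5))  # -2.625  (equals f(1.5), f being cubic)
```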
Interpolation
Legend:
– piecewise linear function
– quadratic interpolating polynomial
– 6th-order Lagrange polynomial
SPLINE INTERPOLATION
The natural cubic spline function satisfies
the following:
• s(x) is a polynomial of degree ≤ 3 on each
subinterval [xj−1, xj], j = 2, 3, …, n.
• s(x), s′(x), and s″(x) are continuous on [a, b].
• s″(x1) = s″(xn) = 0.
Spline Interpolation
Construction
Introduce variables Mi = s″(xi), i = 1, 2, …, n.
For any two points:

s″(x) = [ (xj − x)Mj−1 + (x − xj−1)Mj ] / (xj − xj−1),   xj−1 ≤ x ≤ xj

Taking the second antiderivative and applying
the conditions s(xj−1) = yj−1, s(xj) = yj, we obtain

s(x) = [ (xj − x)³Mj−1 + (x − xj−1)³Mj ] / [ 6(xj − xj−1) ]
     + [ (xj − x)yj−1 + (x − xj−1)yj ] / (xj − xj−1)
     − (1/6)(xj − xj−1)[ (xj − x)Mj−1 + (x − xj−1)Mj ]
Spline Interpolation
Construction
Ensuring the continuity of s′(x) leads to the
following system of linear equations:

[(xj − xj−1)/6] Mj−1 + [(xj+1 − xj−1)/3] Mj + [(xj+1 − xj)/6] Mj+1
= (yj+1 − yj)/(xj+1 − xj) − (yj − yj−1)/(xj − xj−1),   j = 2, 3, …, n − 1

These n − 2 equations form a tridiagonal system for
M1, M2, M3, …, Mn, with M1 = Mn = 0 assumed.
Spline Interpolation
Example
Calculate the natural cubic spline
interpolating the data

{(1,1), (2, ½), (3, 1/3), (4, ¼)}


Spline Interpolation
Example
The number of points is n = 4, and all xj − xj−1 = 1. The system
of equations becomes:

(1/6)M1 + (2/3)M2 + (1/6)M3 = 1/3
(1/6)M2 + (2/3)M3 + (1/6)M4 = 1/12

Together with M1 = M4 = 0, we obtain M2 = ½ and M3 = 0.

s(x) = { (1/12)x³ − (1/4)x² − (1/3)x + 3/2,    1 ≤ x ≤ 2
       { −(1/12)x³ + (3/4)x² − (7/3)x + 17/6,  2 ≤ x ≤ 3
       { −(1/12)x + 7/12,                      3 ≤ x ≤ 4
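The tridiagonal system can be solved with the Thomas algorithm; a pure-Python sketch that reproduces M2 = ½ and M3 = 0 for this data (the helper name is illustrative):

```python
# Solve the natural-spline tridiagonal system for the data
# {(1, 1), (2, 1/2), (3, 1/3), (4, 1/4)}.

def natural_spline_moments(xs, ys):
    # Return M1..Mn (second derivatives at the nodes), with M1 = Mn = 0.
    n = len(xs)
    h = [xs[j + 1] - xs[j] for j in range(n - 1)]
    # Interior equations, unknowns M2..M(n-1).
    a = [h[j - 1] / 6 for j in range(1, n - 1)]           # sub-diagonal
    b = [(h[j - 1] + h[j]) / 3 for j in range(1, n - 1)]  # diagonal
    c = [h[j] / 6 for j in range(1, n - 1)]               # super-diagonal
    d = [(ys[j + 1] - ys[j]) / h[j] - (ys[j] - ys[j - 1]) / h[j - 1]
         for j in range(1, n - 1)]
    # Thomas algorithm: forward sweep, then back substitution.
    for i in range(1, len(d)):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * len(d)
    m[-1] = d[-1] / b[-1]
    for i in range(len(d) - 2, -1, -1):
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]
    return [0.0] + m + [0.0]

M = natural_spline_moments([1, 2, 3, 4], [1, 1/2, 1/3, 1/4])
print(round(M[1], 6), abs(M[2]) < 1e-9)  # 0.5 True
```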
CURVE FITTING
• Data is often given for discrete values, but
estimates at points between the discrete values
are needed.
• If data are known to a high degree of accuracy,
then the polynomial curve y=P(x) that passes
through them can be considered.
• If there is a significant degree of error or “noise” in
the tabulated values, then methods of curve
fitting should be considered.
Curve Fitting
• Given experimental data (xi, yi), determine
a formula y=f(x) that relates these
variables.
• A class of allowable formulas is chosen
and then coefficients are determined. For
example, linear y = ax + b, power y = ax^m
• Evaluate the “goodness” of fit.
Curve Fitting
• Due to experimental errors, the measured value
yi deviates from the true value f(xi); the errors ei
(also called deviations or residuals) are
ei = f(xi) − yi
• The best-fit curve is the function that
minimizes the sum of squares of the errors E:

E = Σ_{i=1}^{N} ei²
Least Squares Line
• Function y = ax + b
• Error
ei = (axi + b) − yi
[Figure: a data point (xi, yi) and the corresponding point (xi, axi + b) on the line]
• Sum of squares of errors

E = Σ_{i=1}^{N} ei² = Σ_{i=1}^{N} [(axi + b) − yi]²

• Obtain a and b by minimizing E.
Least Squares Line
To minimize E, take partial derivatives with respect
to the unknown constants a and b:

∂E/∂a = 2 Σ_{i=1}^{N} [(axi + b) − yi] xi = 0
∂E/∂b = 2 Σ_{i=1}^{N} [(axi + b) − yi] = 0

Rewriting,

Σ_{i=1}^{N} (axi + b − yi) = 0
Σ_{i=1}^{N} (axi² + bxi − xi yi) = 0
Least Squares Line
The final form of the least squares equations
becomes

a Σ_{i=1}^{N} xi + bN = Σ_{i=1}^{N} yi
a Σ_{i=1}^{N} xi² + b Σ_{i=1}^{N} xi = Σ_{i=1}^{N} xi yi

Solve for a and b using any method.
Least Squares Line
• For the data points, write
axi + b = yi
In compact form
Bx = f
where
x = {a; b},  B = [x1 1; x2 1; ⋮ ⋮; xN 1],  f = {y1; y2; ⋮; yN}
Least Squares Line
• Set up the normal equation

(BᵀB)x = Bᵀf

• Solve for the coefficients of the best-fit line

x = (BᵀB)⁻¹(Bᵀf)
Goodness of Fit
The quality of the curve fit is evaluated using E
or the r² value defined as

r² = 1 − E/Em,   Em = Σ_{i=1}^{N} (yi − ȳ)²

where Em is the sum of the squares of the
deviations about the mean ȳ.
An r² value close to 1 generally indicates a
good fit.
Example
An engineer has measured the force
exerted by a spring as a function of its
displacement from its equilibrium position.
The following data have been obtained.
distance: 2, 4, 7, 11, 17
force:    2, 3.5, 4.5, 8, 9.5
Find the least squares line for the data
points given.
Solution 1
Compute the following quantities:
N = 5, Σxi = 41, Σyi = 27.5, Σxi² = 479, Σxi yi = 299

Then set up the equations:

41a + 5b = 27.5
479a + 41b = 299
Solution 1
• The equation of the best fit line is

y = 0.514706 x + 1.279412

• The r² value is 0.9577.
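The fit can be checked numerically with the normal-equation sums; a Python sketch (the r² computed here follows the r² = 1 − E/Em definition above):

```python
# Least-squares line for the spring data.
xs = [2, 4, 7, 11, 17]
ys = [2, 3.5, 4.5, 8, 9.5]
N = len(xs)

Sx = sum(xs)                              # 41
Sy = sum(ys)                              # 27.5
Sxx = sum(x * x for x in xs)              # 479
Sxy = sum(x * y for x, y in zip(xs, ys))  # 299

# Solve  a*Sxx + b*Sx = Sxy,  a*Sx + b*N = Sy  by Cramer's rule.
det = Sxx * N - Sx * Sx
a = (Sxy * N - Sx * Sy) / det
b = (Sxx * Sy - Sx * Sxy) / det
print(round(a, 6), round(b, 6))  # 0.514706 1.279412

# Goodness of fit: r^2 = 1 - E/Em.
E = sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))
ym = Sy / N
Em = sum((y - ym) ** 2 for y in ys)
print(round(1 - E / Em, 4))  # 0.9577
```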


Solution 2
The observation equations and the
corresponding normal equations become

Bx = f,   (BᵀB)x = Bᵀf

[2 1; 4 1; 7 1; 11 1; 17 1]{a; b} = {2; 3.5; 4.5; 8; 9.5}

[479 41; 41 5]{a; b} = {299; 27.5}

hence we obtain the same best-fit line.


Least Squares Parabola
To fit data to a parabola of the form
y = ax² + bx + c
we write the observation equations as
Bx = f
where
x = {a; b; c},  B = [x1² x1 1; x2² x2 1; ⋮ ⋮ ⋮; xN² xN 1],  f = {y1; y2; ⋮; yN}
Least Squares Parabola
The normal equations become

[ Σxi⁴  Σxi³  Σxi² ] {a}   { Σxi² yi }
[ Σxi³  Σxi²  Σxi  ] {b} = { Σxi yi  }
[ Σxi²  Σxi   N    ] {c}   { Σyi     }

The coefficients are solved for, and r² is
evaluated, in the same way.
Example
Find the least squares parabola for the four
points (-3, 3), (0, 1), (2,1), and (4, 3).
Solution:
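The slide leaves the solution blank; a sketch that builds and solves the 3×3 normal equations with plain Gaussian elimination gives approximately y = 0.1785x² − 0.1925x + 0.8505:

```python
# Least-squares parabola for (-3, 3), (0, 1), (2, 1), (4, 3).
xs = [-3, 0, 2, 4]
ys = [3, 1, 1, 3]
N = len(xs)

def S(p):  # power sums of x
    return sum(x ** p for x in xs)

def T(p):  # moment sums x^p * y
    return sum((x ** p) * y for x, y in zip(xs, ys))

# Normal equations [[Sx4, Sx3, Sx2], [Sx3, Sx2, Sx1], [Sx2, Sx1, N]].
A = [[S(4), S(3), S(2)], [S(3), S(2), S(1)], [S(2), S(1), N]]
r = [T(2), T(1), T(0)]

# Gaussian elimination with back substitution (no pivoting needed here).
for i in range(3):
    for k in range(i + 1, 3):
        m = A[k][i] / A[i][i]
        A[k] = [akj - m * aij for akj, aij in zip(A[k], A[i])]
        r[k] -= m * r[i]
sol = [0.0] * 3
for i in range(2, -1, -1):
    sol[i] = (r[i] - sum(A[i][j] * sol[j] for j in range(i + 1, 3))) / A[i][i]

a, b, c = sol
print(round(a, 4), round(b, 4), round(c, 4))  # 0.1785 -0.1925 0.8505
```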
Data Fit to Nonlinear functions
There are two methods to obtain a nonlinear
function that best fits the given data points:
1. Data Linearization
2. Principle of least squares
Data Linearization
To fit a nonlinear curve (e.g., exponential) to
given data, we linearize the function
using a suitable transformation of
variables.
The original points (xi, yi) in the xy-plane are
transformed into points (Xi, Yi) in the XY-
plane.
Exponential Fit y = ce^(ax)

To fit an exponential curve y = ce^(ax), we
linearize using the following
transformations:
1. Take the logarithm of both sides:
ln(y) = ax + ln(c)
2. Introduce the change of variables:
Y = ln(y), X = x and B = ln(c)
Exponential Fit y = ce^(ax)

3. This results in the linear relation
Y = aX + B
4. Set up the normal equations and solve for
the unknowns a and B.
5. Compute the parameter c as
c = e^B
Example
Use the data linearization method to find the
exponential fit
y = ce^(ax)
for the five data points (0, 1.5), (1, 2.5), (2,
3.5), (3, 5.0), and (4, 7.5).
Solution
• Applying the transformation to the original
points, we obtain the points (X, Y) =
(0, 0.405465), (1, 0.916291), (2, 1.252763),
(3, 1.609438), (4, 2.014903).
• The resulting normal equations become

[30 10; 10 5]{a; B} = {16.309742; 6.198860}
Solution
• Solving the normal equations yields
Y = aX + B = 0.3912023X + 0.457367,   r² = 0.9948
• Then the coefficient c becomes
c = e^0.457367 = 1.579910
• Hence the exponential fit becomes
y = 1.579910e^(0.3912023x)
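Steps 1–5, applied to this example, transcribe directly into a short Python sketch:

```python
# Data-linearization exponential fit y = c*e^(a*x) for the five points
# in the example.
import math

pts = [(0, 1.5), (1, 2.5), (2, 3.5), (3, 5.0), (4, 7.5)]
X = [x for x, _ in pts]
Y = [math.log(y) for _, y in pts]  # Y = ln(y)
N = len(pts)

SX, SY = sum(X), sum(Y)
SXX = sum(x * x for x in X)
SXY = sum(x * y for x, y in zip(X, Y))

# Normal equations: SXX*a + SX*B = SXY,  SX*a + N*B = SY.
det = SXX * N - SX * SX
a = (SXY * N - SX * SY) / det
B = (SXX * SY - SX * SXY) / det
c = math.exp(B)
print(round(a, 7), round(c, 4))  # 0.3912023 1.5799
```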
Change of Variable(s) for
Data Linearization
Nonlinear Least Squares
• To fit the data points to a nonlinear function, we
set up the least squares criterion

E = Σ_{i=1}^{N} ei² = Σ_{i=1}^{N} [f(xi) − yi]² → minimum

• Take the partial derivatives with respect to the
unknown coefficients and equate them to zero.
• The resulting equations may be nonlinear, hence
we solve for the unknowns using iterative
methods, e.g., Newton’s method.
Nonlinear Least Squares
Example: Exponential fit y = ce^(ax)
• Least squares criterion

E = Σ_{i=1}^{N} ei² = Σ_{i=1}^{N} [ce^(axi) − yi]²

• Minimizing results in 2 nonlinear equations:

∂E/∂a = 2 Σ_{i=1}^{N} [ce^(axi) − yi]⋅(c xi e^(axi)) = 0
∂E/∂c = 2 Σ_{i=1}^{N} [ce^(axi) − yi]⋅(e^(axi)) = 0
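These two stationarity conditions can be solved iteratively; this sketch uses a Gauss–Newton update (an assumed choice, since the text only says "iterative methods"), starts from the data-linearization estimate, and checks that both partial derivatives vanish at the result:

```python
# Nonlinear least squares for y = c*e^(a*x) on the five points from the
# data-linearization example, via Gauss-Newton iteration.
import math

xs = [0, 1, 2, 3, 4]
ys = [1.5, 2.5, 3.5, 5.0, 7.5]

def gauss_newton(a, c, steps=30):
    for _ in range(steps):
        r = [c * math.exp(a * x) - y for x, y in zip(xs, ys)]  # residuals
        Ja = [c * x * math.exp(a * x) for x in xs]             # dr/da
        Jc = [math.exp(a * x) for x in xs]                     # dr/dc
        # Solve the 2x2 system (J^T J) d = -J^T r for the update d.
        aa = sum(v * v for v in Ja)
        ac = sum(u * v for u, v in zip(Ja, Jc))
        cc = sum(v * v for v in Jc)
        ga = -sum(u * v for u, v in zip(Ja, r))
        gc = -sum(u * v for u, v in zip(Jc, r))
        det = aa * cc - ac * ac
        a += (ga * cc - ac * gc) / det
        c += (aa * gc - ac * ga) / det
    return a, c

a, c = gauss_newton(0.3912023, 1.579910)  # start from the linearized fit
dE_da = 2 * sum((c * math.exp(a * x) - y) * c * x * math.exp(a * x)
                for x, y in zip(xs, ys))
dE_dc = 2 * sum((c * math.exp(a * x) - y) * math.exp(a * x)
                for x, y in zip(xs, ys))
print(abs(dE_da) < 1e-6, abs(dE_dc) < 1e-6)  # True True
```

Note that the converged (a, c) differ slightly from the data-linearization values, because linearization minimizes the squared error in Y = ln(y) rather than in y itself.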
