
Chapter Two


Approximation Theory
Introduction
The problem of approximating a function is an important problem in numerical analysis due to its wide
application in the development of software for digital computers. The functions commonly used for
approximating given functions are polynomials, trigonometric functions, exponential functions, and
rational functions. However, from an application point of view, the polynomial functions are mostly used.
Approximation theory involves two types of problems. The first problem arises when a function is given
explicitly, but we wish to find a simpler type of function such as a polynomial, that can be used to
approximate values of the given function.
The second kind of problem in approximation theory is concerned with fitting functions to a given set of
data and finding the “best” function in a certain class that can be used to represent the set of data. To handle
these two problems, we use the basic methods of approximation. Some of the approximation methods in
existence include Taylor approximation, Lagrange polynomials, least-squares approximation, Hermite
approximation, cubic spline interpolation, Chebyshev approximation, Legendre polynomials, rational
function approximation, and a few others.
Most approximation methods involve polynomials; hence, some methods of approximation are named after
the polynomials they use. For example, Chebyshev approximation is often referred to as approximation by
Chebyshev polynomials.
2.1. Least square approximation
The least squares method of curve fitting was suggested early in the nineteenth century by the French
mathematician Adrien-Marie Legendre. The method of least squares assumes that the best-fitting curve is
the one for which the sum of the squares of the vertical distances of the points (x_i, y_i) from the
curve is a minimum.
The Least Squares Approximation methods can be classified into two, namely the discrete least square
approximation and the continuous least squares approximation. The first involves fitting a polynomial
function to a set of data points using the least squares approach, while the latter requires the use of
orthogonal polynomials to determine an appropriate polynomial function that fits a given function.
1. Discrete Least-Squares Approximation
The basic idea of least squares approximation is to fit a polynomial function P(x) to a set of data
points (x_i, y_i) having a theoretical solution

y = f(x) .................... (1)

The aim is to minimize the sum of the squares of the errors. Given a set of m discrete data points
(x_i, y_i), i = 1, 2, ..., m, find the algebraic polynomial

P_n(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n \quad (n < m) .................... (2)

such that the error E(a_0, a_1, ..., a_n) in the least-squares sense is minimized; that is,

E(a_0, a_1, \dots, a_n) = \sum_{i=1}^{m} \big( y_i - (a_0 + a_1 x_i + a_2 x_i^2 + \dots + a_n x_i^n) \big)^2

is minimum. Here E(a_0, a_1, ..., a_n) is a function of the (n + 1) variables a_0, a_1, ..., a_n. For
this function to be a minimum we must have

\frac{\partial E}{\partial a_i} = 0, \quad i = 0, 1, 2, \dots, n
\frac{\partial E}{\partial a_0} = -2 \sum_{i=1}^{m} \big( y_i - (a_0 + a_1 x_i + a_2 x_i^2 + \dots + a_n x_i^n) \big)

\frac{\partial E}{\partial a_1} = -2 \sum_{i=1}^{m} \big( y_i - (a_0 + a_1 x_i + a_2 x_i^2 + \dots + a_n x_i^n) \big) \, x_i

\frac{\partial E}{\partial a_2} = -2 \sum_{i=1}^{m} \big( y_i - (a_0 + a_1 x_i + a_2 x_i^2 + \dots + a_n x_i^n) \big) \, x_i^2

\quad \vdots

\frac{\partial E}{\partial a_n} = -2 \sum_{i=1}^{m} \big( y_i - (a_0 + a_1 x_i + a_2 x_i^2 + \dots + a_n x_i^n) \big) \, x_i^n

Setting each of these partial derivatives to zero and rearranging, we obtain the system


a_0 \sum_{i=1}^{m} 1 + a_1 \sum_{i=1}^{m} x_i + a_2 \sum_{i=1}^{m} x_i^2 + \dots + a_n \sum_{i=1}^{m} x_i^n = \sum_{i=1}^{m} y_i

a_0 \sum_{i=1}^{m} x_i + a_1 \sum_{i=1}^{m} x_i^2 + a_2 \sum_{i=1}^{m} x_i^3 + \dots + a_n \sum_{i=1}^{m} x_i^{n+1} = \sum_{i=1}^{m} x_i y_i

\quad \vdots

a_0 \sum_{i=1}^{m} x_i^n + a_1 \sum_{i=1}^{m} x_i^{n+1} + a_2 \sum_{i=1}^{m} x_i^{n+2} + \dots + a_n \sum_{i=1}^{m} x_i^{2n} = \sum_{i=1}^{m} x_i^n y_i

.................... (4)
Solving equation (4) to determine a_0, a_1, ..., a_n and substituting into equation (2) gives the
best-fitted curve to (1). The set of equations in (4) is a system of (n + 1) equations in the (n + 1)
unknowns a_0, a_1, ..., a_n; these are called the normal equations of the least squares method. In
practice, equation (4) is used by creating a table of values for each sum appearing in it and computing
each summation from the table. We shall now illustrate how to use the set of equations (4) in tabular
form.
Example: Using least squares approximation, fit
a. a straight line
b. a parabola
to the data below.

x:   1     2    3    4    5    6
y:  120   90   60   70   35   11
Solution

a. In order to fit a straight line to the set of data above, we assume an equation of the form

y = a_0 + a_1 x

From this equation we have to determine two unknowns, a_0 and a_1. The normal equations necessary to
determine these unknowns are obtained from equation (4) as:

\sum_{i=1}^{m} y_i = m\, a_0 + a_1 \sum_{i=1}^{m} x_i

\sum_{i=1}^{m} x_i y_i = a_0 \sum_{i=1}^{m} x_i + a_1 \sum_{i=1}^{m} x_i^2

For part (b) we shall also need the sums of x^3, x^4 and x^2 y, so we construct columns for the values
of x^2, x^3, x^4, xy and x^2 y in addition to the x and y values already given. The table below shows
the necessary columns:
 x      y     x^2    x^3    x^4      xy    x^2 y
 1     120     1      1       1     120     120
 2      90     4      8      16     180     360
 3      60     9     27      81     180     540
 4      70    16     64     256     280    1120
 5      35    25    125     625     175     875
 6      11    36    216    1296      66     396
 Σ: 21  386    91    441    2275    1001    3411

Substituting into the normal equations, we have


386 = 6𝑎0 + 21𝑎1
1001 = 21𝑎0 + 91𝑎1
Solving these two equations, we obtain
𝑎0 = 134.33 , 𝑎1 = −20
Therefore, the straight line fitted to the given data is
𝑦 = 134.33 – 20𝑥
(b) In a similar manner, the parabola can be written as y = a_0 + a_1 x + a_2 x^2. Hence the normal
equations required to determine the unknowns a_0, a_1 and a_2 are:

\sum_{i=1}^{m} y_i = a_0 \sum_{i=1}^{m} 1 + a_1 \sum_{i=1}^{m} x_i + a_2 \sum_{i=1}^{m} x_i^2

\sum_{i=1}^{m} x_i y_i = a_0 \sum_{i=1}^{m} x_i + a_1 \sum_{i=1}^{m} x_i^2 + a_2 \sum_{i=1}^{m} x_i^3

\sum_{i=1}^{m} x_i^2 y_i = a_0 \sum_{i=1}^{m} x_i^2 + a_1 \sum_{i=1}^{m} x_i^3 + a_2 \sum_{i=1}^{m} x_i^4

Substituting into the normal equations, we have


6𝑎0 + 21𝑎1 + 91𝑎2 = 386
21𝑎0 + 91𝑎1 + 441𝑎2 = 1001
91𝑎0 + 441𝑎1 + 2275𝑎2 = 3411
Solving these equations, we obtain

a_0 = 136, \quad a_1 = -\frac{85}{4}, \quad a_2 = \frac{5}{28}

Hence the fitted curve is

y = 136 - \frac{85}{4} x + \frac{5}{28} x^2
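Both fits above can be checked numerically. The sketch below (assuming NumPy is available; the helper name `lsq_poly` is ours, not from the text) builds and solves the normal equations (4) directly:

```python
import numpy as np

# Data from the example above.
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([120, 90, 60, 70, 35, 11], dtype=float)

def lsq_poly(x, y, n):
    """Fit a degree-n polynomial by solving the normal equations (4)."""
    # A[j, k] = sum of x_i^(j+k);  b[j] = sum of x_i^j * y_i
    A = np.array([[np.sum(x ** (j + k)) for k in range(n + 1)]
                  for j in range(n + 1)])
    b = np.array([np.sum(x ** j * y) for j in range(n + 1)])
    return np.linalg.solve(A, b)  # coefficients a_0, ..., a_n

print(lsq_poly(x, y, 1))  # straight line: a_0 ≈ 134.33, a_1 = -20
print(lsq_poly(x, y, 2))  # parabola: a_0 = 136, a_1 = -85/4, a_2 = 5/28
```

For low degrees `np.polyfit(x, y, n)` returns the same coefficients (highest degree first); solving the normal equations explicitly mirrors the tabular method used in the text.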
2. Continuous Least-Squares Approximation of a Function
If we wish to find a least squares approximation to a continuous function f(x), our previous approach
must be modified, since the number of points (x_i, y_i) at which the approximation is measured is now
infinite (and uncountable). Therefore we cannot use a summation such as \sum_{i=1}^{m} [f(x_i) - P(x_i)]^2,
but must use a continuous measure, that is, an integral. Hence if the interval of approximation is
[a, b], so that a \le x \le b for all points under consideration, then we must minimize

\int_a^b [f(x) - P(x)]^2 \, dx

where 𝑦 = 𝑓(𝑥) is our continuous function and 𝑃(𝑥) is our approximating function.
Here we describe Continuous least-square approximations of a function 𝑓(𝑥) by using polynomials. First,
consider approximation by a polynomial with monomial basis: {1, 𝑥, 𝑥 2 , . . . , 𝑥 𝑛 }.
2.1. Least-Square Approximations of a Function Using Monomial Polynomials
Given a function 𝑓(𝑥), continuous on [𝑎, 𝑏], find a polynomial 𝑃𝑛 (𝑥) of degree at most 𝑛:
𝑃𝑛 (𝑥) = 𝑎0 + 𝑎1 𝑥 + 𝑎2 𝑥 2 + … + 𝑎𝑛 𝑥 𝑛
such that the integral of the square of the error is minimized. That is,
E = \int_a^b \big( f(x) - P_n(x) \big)^2 \, dx

is minimized.
The polynomial 𝑃𝑛 (𝑥) is called the Least-Squares Polynomial.
Since 𝐸 is a function of 𝑎0 , 𝑎1 , . . . , 𝑎𝑛 , we denote this by 𝐸(𝑎0 , 𝑎1 , . . . , 𝑎𝑛 ).
For minimization, we must have
\frac{\partial E}{\partial a_i} = 0, \quad i = 0, 1, \dots, n
As before, these conditions will give rise to a system of (𝑛 + 1) normal equations in (𝑛 + 1) unknowns:
𝑎0 , 𝑎1 , . . . , 𝑎𝑛 . Solution of these equations will yield the unknowns: 𝑎0 , 𝑎1 , . . . , 𝑎𝑛 .
Setting up the Normal Equations. Since

E = \int_a^b [f(x) - (a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n)]^2 \, dx,

we have

\frac{\partial E}{\partial a_0} = -2 \int_a^b [f(x) - (a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n)] \, dx

\frac{\partial E}{\partial a_1} = -2 \int_a^b x \, [f(x) - (a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n)] \, dx

\quad \vdots

\frac{\partial E}{\partial a_n} = -2 \int_a^b x^n \, [f(x) - (a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n)] \, dx

Since \frac{\partial E}{\partial a_i} = 0 for i = 0, 1, 2, \dots, n, we have

\int_a^b f(x)\,dx = a_0 \int_a^b dx + a_1 \int_a^b x\,dx + \dots + a_n \int_a^b x^n\,dx

\int_a^b x f(x)\,dx = a_0 \int_a^b x\,dx + a_1 \int_a^b x^2\,dx + \dots + a_n \int_a^b x^{n+1}\,dx

\quad \vdots

\int_a^b x^n f(x)\,dx = a_0 \int_a^b x^n\,dx + a_1 \int_a^b x^{n+1}\,dx + \dots + a_n \int_a^b x^{2n}\,dx
Example: Find Linear and Quadratic least-squares approximations to 𝑓(𝑥) = 𝑒 𝑥 on [−1, 1].
Solution
Linear Approximation: 𝑛 = 1; 𝑃1 (𝑥) = 𝑎0 + 𝑎1 𝑥
\int_{-1}^{1} f(x)\,dx = a_0 \int_{-1}^{1} dx + a_1 \int_{-1}^{1} x\,dx

\int_{-1}^{1} x f(x)\,dx = a_0 \int_{-1}^{1} x\,dx + a_1 \int_{-1}^{1} x^2\,dx

But

\int_{-1}^{1} dx = 2, \quad \int_{-1}^{1} x\,dx = 0, \quad \int_{-1}^{1} x^2\,dx = \frac{2}{3}

\int_{-1}^{1} f(x)\,dx = \int_{-1}^{1} e^x\,dx = e - \frac{1}{e} \approx 2.3504

\int_{-1}^{1} x f(x)\,dx = \int_{-1}^{1} x e^x\,dx = \frac{2}{e} \approx 0.7358
So

2.3504 = 2 a_0 + (0)\, a_1

0.7358 = (0)\, a_0 + \frac{2}{3} a_1

giving a_0 = 1.1752 and a_1 = 1.1037. Hence P_1(x) = 1.1752 + 1.1037 x.
Quadratic Fitting: 𝑛 = 2; 𝑃2 (𝑥) = 𝑎0 + 𝑎1 𝑥 + 𝑎2 𝑥 2
\int_{-1}^{1} f(x)\,dx = a_0 \int_{-1}^{1} dx + a_1 \int_{-1}^{1} x\,dx + a_2 \int_{-1}^{1} x^2\,dx

\int_{-1}^{1} x f(x)\,dx = a_0 \int_{-1}^{1} x\,dx + a_1 \int_{-1}^{1} x^2\,dx + a_2 \int_{-1}^{1} x^3\,dx

\int_{-1}^{1} x^2 f(x)\,dx = a_0 \int_{-1}^{1} x^2\,dx + a_1 \int_{-1}^{1} x^3\,dx + a_2 \int_{-1}^{1} x^4\,dx

But

\int_{-1}^{1} dx = 2, \quad \int_{-1}^{1} x\,dx = 0, \quad \int_{-1}^{1} x^2\,dx = \frac{2}{3}, \quad \int_{-1}^{1} x^3\,dx = 0, \quad \int_{-1}^{1} x^4\,dx = \frac{2}{5}

\int_{-1}^{1} f(x)\,dx = \int_{-1}^{1} e^x\,dx = e - \frac{1}{e} \approx 2.3504

\int_{-1}^{1} x f(x)\,dx = \int_{-1}^{1} x e^x\,dx = \frac{2}{e} \approx 0.7358

\int_{-1}^{1} x^2 f(x)\,dx = \int_{-1}^{1} x^2 e^x\,dx = e - \frac{5}{e} \approx 0.8789
So

2.3504 = 2 a_0 + (0)\, a_1 + \frac{2}{3} a_2

0.7358 = (0)\, a_0 + \frac{2}{3} a_1 + (0)\, a_2

0.8789 = \frac{2}{3} a_0 + (0)\, a_1 + \frac{2}{5} a_2

Solving this system of equations, we obtain

a_0 = 0.9963, \quad a_1 = 1.1037, \quad a_2 = 0.5368

The quadratic least squares polynomial is P_2(x) = 0.9963 + 1.1037 x + 0.5368 x^2.
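The continuous normal equations can be evaluated numerically as well. A sketch (assuming NumPy and SciPy are available; the helper name `continuous_lsq` is ours) replaces the sums of the discrete case with integrals computed by quadrature:

```python
import numpy as np
from scipy.integrate import quad

# Continuous least squares for f(x) = e^x on [-1, 1] with the monomial
# basis: the normal equations now contain integrals instead of sums.
def continuous_lsq(f, n, a=-1.0, b=1.0):
    A = np.array([[quad(lambda x: x ** (j + k), a, b)[0]
                   for k in range(n + 1)] for j in range(n + 1)])
    rhs = np.array([quad(lambda x: x ** j * f(x), a, b)[0]
                    for j in range(n + 1)])
    return np.linalg.solve(A, rhs)

print(continuous_lsq(np.exp, 1))  # ≈ [1.1752, 1.1036]
print(continuous_lsq(np.exp, 2))  # ≈ [0.9964, 1.1036, 0.5364]
```

The small differences from the hand computation above (e.g. 0.5364 vs 0.5368) come from rounding the integrals to four decimal places before solving.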
2.2. Least-squares Approximation of a Function Using Orthogonal Polynomials
Definition: The set of functions {\phi_0, \phi_1, \dots, \phi_n} on [a, b] is called a set of
orthogonal functions with respect to a weight function w(x) if

\int_a^b w(x)\, \phi_j(x) \phi_i(x)\, dx = \begin{cases} 0 & \text{if } i \neq j \\ C_j & \text{if } i = j \end{cases}

where each C_j is a positive real number. Furthermore, if C_j = 1 for j = 0, 1, \dots, n, then the
orthogonal set is called an orthonormal set.
Using this property, least-squares computations can be made more numerically efficient, as shown below.
Without any loss of generality, let us assume that w(x) = 1. Then finding the least-squares
approximation of f(x) on [a, b] using orthogonal polynomials can be stated as follows:

Given f(x), continuous on [a, b], find a_0, a_1, ..., a_n using a polynomial of the form

P_n(x) = a_0 Q_0(x) + a_1 Q_1(x) + \dots + a_n Q_n(x),

where {Q_k(x)}_{k=0}^{n} is a given set of orthogonal polynomials on [a, b], such that the error
function

E(a_0, a_1, \dots, a_n) = \int_a^b [f(x) - (a_0 Q_0(x) + a_1 Q_1(x) + \dots + a_n Q_n(x))]^2 \, dx

is minimized.
As before, we set

\frac{\partial E}{\partial a_i} = 0, \quad i = 0, 1, \dots, n

Now

\frac{\partial E}{\partial a_0} = 0 \implies \int_a^b Q_0(x) f(x)\,dx = \int_a^b Q_0(x)\,[a_0 Q_0(x) + a_1 Q_1(x) + \dots + a_n Q_n(x)]\,dx

Since {Q_k(x)}_{k=0}^{n} is an orthogonal set, we have

\int_a^b Q_0^2(x)\,dx = C_0 \quad \text{and} \quad \int_a^b Q_0(x) Q_i(x)\,dx = 0 \ \text{for } i \neq 0.

Applying this orthogonality property, we see from the above that

\int_a^b Q_0(x) f(x)\,dx = C_0 a_0,

that is,

a_0 = \frac{1}{C_0} \int_a^b Q_0(x) f(x)\,dx.

Similarly,

a_k = \frac{1}{C_k} \int_a^b Q_k(x) f(x)\,dx, \quad k = 0, 1, 2, \dots, n,

where C_k = \int_a^b Q_k^2(x)\,dx.

Expressions for a_k with a weight function w(x). If the weight function w(x) is included, then a_k is
modified to

a_k = \frac{1}{C_k} \int_a^b w(x)\, Q_k(x) f(x)\,dx, \quad k = 0, 1, 2, \dots, n,

where C_k = \int_a^b w(x)\, Q_k^2(x)\,dx.

2.2. Legendre polynomials
The Legendre polynomials possess an oscillatory property that makes them important in numerical
analysis. They have their root in the Legendre equation, a second-order differential equation; the
first set of solutions of the Legendre equation is known as the Legendre polynomials.
Legendre Polynomial Approximation
When we try to find good polynomial approximations to a given function f(x), we are trying to represent
f(x) in the form

f(x) = \sum_{k=0}^{n} c_k x^k,

which is a series in the functions \phi_k(x) = x^k. Unfortunately, the set 1, x, x^2, ... is not
orthogonal over any non-zero interval. For example,

\int_a^b \phi_1(x) \phi_3(x)\,dx = \int_a^b x^4\,dx > 0,

which contradicts the assertion that {x^k} is orthogonal. It is, however, possible to construct a set
of polynomials Q_0(x), Q_1(x), Q_2(x), ..., Q_n(x), ..., where Q_n(x) is of degree n, which are
orthogonal over the interval [-1, 1]; from these, a set of polynomials orthogonal over any given finite
interval [a, b] can be obtained. The method for finding a set of polynomials which are orthogonal and
normal over [-1, 1] is relatively simple, and we illustrate it by finding the first few such
polynomials. We shall at this juncture give a definition of the Legendre polynomials which can be used
to generate the required set.
Definition: The Rodrigues formula for generating the Legendre polynomials is given by

Q_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} \big[ (x^2 - 1)^n \big]

From the definition given above, it will be observed that an n-th derivative must be carried out before
a polynomial of degree n is obtained. Thus the first few Legendre polynomials can be obtained as
follows.

Q_0(x) involves no derivative, since n = 0; hence Q_0(x) = 1. Also, for n = 1 and n = 2, we have

Q_1(x) = \frac{1}{2^1 1!} \frac{d}{dx}(x^2 - 1) = \frac{1}{2} \cdot 2x = x

Q_2(x) = \frac{1}{2^2 2!} \frac{d^2}{dx^2}(x^2 - 1)^2 = \frac{1}{8} \frac{d^2}{dx^2}(x^4 - 2x^2 + 1) = \frac{1}{2}(3x^2 - 1)

To obtain Q_3(x) would require differentiating three times, which becomes cumbersome as n increases.
Because of this difficulty with higher derivatives, especially for n > 2 in the Rodrigues formula
above, a simpler formula for generating the Legendre polynomials is given by their recurrence relation.
Recurrence formula for the Legendre polynomials
The recurrence formula for the Legendre polynomial Q_n(x) is given by

Q_{n+1}(x) = \left( \frac{2n+1}{n+1} \right) x\, Q_n(x) - \left( \frac{n}{n+1} \right) Q_{n-1}(x) \quad \dots\dots\dots (*)

where Q_n(x) satisfies the Legendre differential equation

(1 - x^2)\, Q_n''(x) - 2x\, Q_n'(x) + n(n+1)\, Q_n(x) = 0

Once Q_0(x) = 1 and Q_1(x) = x are obtained from the Rodrigues formula, we can substitute into equation
(*) to generate higher-order polynomials. Thus for n = 1, 2, 3, ... we obtain from equation (*) as
follows.
For n = 1, we have

Q_2(x) = \frac{3}{2} x\, Q_1(x) - \frac{1}{2} Q_0(x) = \frac{3}{2} x^2 - \frac{1}{2},

which is the same as the Q_2(x) obtained earlier using the Rodrigues formula. Furthermore, for n = 2,
we have

Q_3(x) = \frac{5}{3} x\, Q_2(x) - \frac{2}{3} Q_1(x) = \frac{1}{2}(5x^3 - 3x)

Similarly, for n = 3 and n = 4, we have

Q_4(x) = \frac{7}{4} x\, Q_3(x) - \frac{3}{4} Q_2(x) = \frac{1}{8}(35x^4 - 30x^2 + 3)

Q_5(x) = \frac{9}{5} x\, Q_4(x) - \frac{4}{5} Q_3(x) = \frac{1}{8}(63x^5 - 70x^3 + 15x), \quad \text{etc.}
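The recurrence (*) is easy to mechanize. A sketch (assuming NumPy; the helper name `legendre_polys` is ours) stores each Q_n as an array of coefficients, lowest degree first:

```python
import numpy as np

# Generate Legendre polynomials Q_0..Q_n_max from the recurrence (*):
# Q_{n+1} = ((2n+1)/(n+1)) x Q_n - (n/(n+1)) Q_{n-1}
def legendre_polys(n_max):
    Q = [np.array([1.0]), np.array([0.0, 1.0])]   # Q_0 = 1, Q_1 = x
    for n in range(1, n_max):
        x_Qn = np.concatenate(([0.0], Q[n]))       # multiply Q_n by x
        Qm1 = np.concatenate((Q[n - 1], [0.0, 0.0]))  # pad Q_{n-1} to same length
        Q.append((2 * n + 1) / (n + 1) * x_Qn - n / (n + 1) * Qm1)
    return Q

Q = legendre_polys(5)
print(Q[2])  # [-0.5, 0, 1.5], i.e. (3x^2 - 1)/2
print(Q[4])  # (35x^4 - 30x^2 + 3)/8
```

The results can be cross-checked against `numpy.polynomial.legendre`, which uses the same polynomials.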

Properties of the Legendre polynomials

The Legendre polynomials Q_n(x) satisfy the orthogonality property

\int_{-1}^{1} Q_n(x) Q_m(x)\,dx = \begin{cases} 0 & \text{if } m \neq n \\ \dfrac{2}{2n+1} & \text{if } m = n \end{cases} \qquad (**)

This orthogonality property permits them to serve as polynomial approximations to any continuous
function within the range [-1, 1]. It follows at once from equation (**) that {Q_n(x)} forms an
orthogonal, but not orthonormal, set over [-1, 1] with respect to the weight function w(x) = 1, and
that the set

q_n(x) = \sqrt{\frac{2n+1}{2}}\, Q_n(x)

forms an orthonormal set.


Least Squares Approximation by Legendre Polynomials
Legendre polynomials are known to be applicable to least squares approximation of functions: we follow
the least squares approximation technique and adapt it to the Legendre polynomials. Recall the first
few Legendre polynomials given above. To use the Legendre polynomials to find the least-squares
approximation of a function f(x), we set w(x) = 1 and [a, b] = [-1, 1], and compute C_k and a_k for
k = 0, 1, ..., n. That is,
C_k = \int_{-1}^{1} Q_k^2(x)\,dx \quad \text{and} \quad a_k = \frac{1}{C_k} \int_{-1}^{1} f(x) Q_k(x)\,dx

The least-squares polynomial will then be given by

P_n(x) = a_0 Q_0(x) + a_1 Q_1(x) + \dots + a_n Q_n(x).

A few of the C_k are listed below (consistent with the orthogonality property (**), C_n = 2/(2n+1)):

C_0 = \int_{-1}^{1} Q_0^2(x)\,dx = \int_{-1}^{1} 1\,dx = 2

C_1 = \int_{-1}^{1} Q_1^2(x)\,dx = \int_{-1}^{1} x^2\,dx = \frac{2}{3}

C_2 = \int_{-1}^{1} Q_2^2(x)\,dx = \int_{-1}^{1} \frac{1}{4}(3x^2 - 1)^2\,dx = \frac{2}{5}

and so on.
Example: Find linear and quadratic least-squares approximation to 𝑓(𝑥) = 𝑒 𝑥 using Legendre
polynomials.
Solution
Linear Approximation: P_1(x) = a_0 Q_0(x) + a_1 Q_1(x)

But Q_0(x) = 1, Q_1(x) = x and Q_2(x) = \frac{1}{2}(3x^2 - 1).

We now compute the values of a_0, a_1 and a_2 in the following steps.

Step 1. Compute

C_0 = \int_{-1}^{1} Q_0^2(x)\,dx = \int_{-1}^{1} dx = 2

C_1 = \int_{-1}^{1} Q_1^2(x)\,dx = \int_{-1}^{1} x^2\,dx = \frac{2}{3}

C_2 = \int_{-1}^{1} Q_2^2(x)\,dx = \int_{-1}^{1} \frac{1}{4}(3x^2 - 1)^2\,dx = \frac{2}{5}

Step 2. Compute a_0, a_1 and a_2:

a_0 = \frac{1}{C_0} \int_{-1}^{1} Q_0(x) f(x)\,dx = \frac{1}{2} \int_{-1}^{1} e^x\,dx = \frac{1}{2}\left(e - \frac{1}{e}\right) \approx 1.1752

a_1 = \frac{1}{C_1} \int_{-1}^{1} Q_1(x) f(x)\,dx = \frac{3}{2} \int_{-1}^{1} x e^x\,dx = \frac{3}{e} \approx 1.1036

a_2 = \frac{1}{C_2} \int_{-1}^{1} Q_2(x) f(x)\,dx = \frac{5}{4} \int_{-1}^{1} (3x^2 - 1)\, e^x\,dx = \frac{5}{2}\left(e - \frac{7}{e}\right) \approx 0.3576

So

P_1(x) = \frac{1}{2}\left(e - \frac{1}{e}\right) + \frac{3}{e}\, x

Quadratic Approximation: P_2(x) = a_0 Q_0(x) + a_1 Q_1(x) + a_2 Q_2(x), with

a_0 = \frac{1}{2}\left(e - \frac{1}{e}\right), \quad a_1 = \frac{3}{e}, \quad a_2 = \frac{5}{2}\left(e - \frac{7}{e}\right)

The quadratic least squares polynomial is therefore

P_2(x) = \frac{1}{2}\left(e - \frac{1}{e}\right) + \frac{3}{e}\, x + \frac{5}{2}\left(e - \frac{7}{e}\right) \cdot \frac{1}{2}(3x^2 - 1) \approx 0.9964 + 1.1036\, x + 0.5364\, x^2,

which agrees with the quadratic approximation obtained earlier with the monomial basis.
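These coefficients can be verified numerically. A sketch (assuming NumPy and SciPy; the choice of `numpy.polynomial.legendre` for evaluating Q_k is ours) computes each a_k from its integral and converts the result back to the monomial basis:

```python
import numpy as np
from scipy.integrate import quad
from numpy.polynomial import legendre as leg

# a_k = ((2k+1)/2) * integral of f(x) Q_k(x) over [-1, 1], using C_k = 2/(2k+1).
# leg.legval(x, c) evaluates the Legendre series with coefficients c.
f = np.exp
a = np.array([(2 * k + 1) / 2
              * quad(lambda x: f(x) * leg.legval(x, [0.0] * k + [1.0]), -1, 1)[0]
              for k in range(3)])
print(a)                # ≈ [1.1752, 1.1036, 0.3576]
print(leg.leg2poly(a))  # monomial form ≈ [0.9964, 1.1036, 0.5364]
```

The monomial form matches the continuous least-squares result for e^x found earlier, as it must: both minimize the same error integral.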
3. Chebyshev polynomials
Chebyshev polynomials are often useful for approximating functions. For this reason we shall examine
the nature, properties and efficiency of the Chebyshev polynomials. The Chebyshev polynomial is based
on the function cos nθ, which is a polynomial of degree n in cos θ. Thus we give the following basic
definition of the Chebyshev polynomial.
Definition: The Chebyshev polynomials are defined in terms of the cosine function as

T_n(x) = \cos(n \arccos x), \quad n \ge 0, \ x \in [-1, 1].

This definition can be translated to polynomials in x: putting x = \cos\theta, the Chebyshev polynomial
defined above becomes

T_n(x) = \cos(n\theta)

T_n(x) belongs to the orthogonal family of polynomials of degree n, with weight function

w(x) = \frac{1}{\sqrt{1 - x^2}}, \quad -1 \le x \le 1.

It has the oscillatory property that on 0 \le \theta \le \pi the function attains alternating equal
maximum and minimum values of \pm 1 at the n + 1 points

\theta_r = \frac{r\pi}{n}, \quad r = 0, 1, 2, \dots, n, \quad \text{that is,} \quad x_r = \cos\left(\frac{r\pi}{n}\right), \quad r = 0, 1, 2, \dots, n.

Thus, to obtain the Chebyshev polynomials, we start from the definition:

T_0(x) = \cos 0 = 1 \quad \text{(the Chebyshev polynomial of degree zero)}

T_1(x) = \cos(\cos^{-1} x) = x \quad \text{(the Chebyshev polynomial of degree 1)}

T_2(x) = \cos(2\cos^{-1} x) = 2\cos^2(\cos^{-1} x) - 1 = 2x^2 - 1

T_3(x) = \cos(3\cos^{-1} x) = 4\cos^3(\cos^{-1} x) - 3\cos(\cos^{-1} x) = 4x^3 - 3x

T_4(x) = \cos(4\cos^{-1} x) = 2\cos^2(2\cos^{-1} x) - 1 = 2(2x^2 - 1)^2 - 1 = 8x^4 - 8x^2 + 1

T_5(x) = \cos(5\cos^{-1} x) = \cos(3\cos^{-1} x)\cos(2\cos^{-1} x) - \sin(3\cos^{-1} x)\sin(2\cos^{-1} x) = 16x^5 - 20x^3 + 5x

Similarly, T_6(x) = 32x^6 - 48x^4 + 18x^2 - 1, and so on.
A Recursive Relation for Generating Chebyshev Polynomials
In order to express T_n(x) in terms of polynomials, the definition can be used to some extent, but as n
increases it becomes more difficult to obtain the actual polynomial. For this reason, the simpler way
of generating the Chebyshev polynomials is to use the recurrence formula for T_n(x) on [-1, 1]:

T_{n+1}(x) = 2x\, T_n(x) - T_{n-1}(x)
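As with the Legendre recurrence, this relation mechanizes directly. A sketch (assuming NumPy; the helper name `chebyshev_polys` is ours) stores each T_n as a coefficient array, lowest degree first:

```python
import numpy as np

# Chebyshev polynomials T_0..T_n_max from T_{n+1} = 2x T_n - T_{n-1}.
def chebyshev_polys(n_max):
    T = [np.array([1.0]), np.array([0.0, 1.0])]   # T_0 = 1, T_1 = x
    for n in range(1, n_max):
        x_Tn = np.concatenate(([0.0], T[n]))       # multiply T_n by x
        Tm1 = np.concatenate((T[n - 1], [0.0, 0.0]))  # pad T_{n-1}
        T.append(2.0 * x_Tn - Tm1)
    return T

T = chebyshev_polys(6)
print(T[3])  # [0, -3, 0, 4], i.e. 4x^3 - 3x
print(T[6])  # [-1, 0, 18, 0, -48, 0, 32]
```

The output for T_6 matches the expansion 32x^6 − 48x^4 + 18x^2 − 1 derived above from the cosine identities.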
3.1. The Least-Square Approximation using Chebyshev Polynomials
As before, the Chebyshev polynomials can be used to find least-squares approximations to a function
f(x). Set

w(x) = \frac{1}{\sqrt{1 - x^2}}, \quad [a, b] = [-1, 1], \quad Q_k(x) = T_k(x).

Then, using the orthogonality property of the Chebyshev polynomials, it is easy to see that

C_0 = \int_{-1}^{1} \frac{T_0^2(x)}{\sqrt{1 - x^2}}\,dx = \pi \quad \text{and} \quad C_k = \int_{-1}^{1} \frac{T_k^2(x)}{\sqrt{1 - x^2}}\,dx = \frac{\pi}{2}, \quad k = 1, \dots, n.

These give

a_0 = \frac{1}{\pi} \int_{-1}^{1} \frac{f(x)}{\sqrt{1 - x^2}}\,dx

a_i = \frac{2}{\pi} \int_{-1}^{1} \frac{f(x)\, T_i(x)}{\sqrt{1 - x^2}}\,dx, \quad i = 1, \dots, n.

The least-squares approximating polynomial P_n(x) of f(x) using Chebyshev polynomials is then given by

P_n(x) = a_0 T_0(x) + a_1 T_1(x) + \dots + a_n T_n(x).

Example: Find a linear least-square approximation of 𝑓(𝑥) = 𝑒 𝑥 using Chebyshev polynomials.


Solution
P_1(x) = a_0 T_0(x) + a_1 T_1(x) = a_0 + a_1 x,

where

a_0 = \frac{1}{\pi} \int_{-1}^{1} \frac{f(x)}{\sqrt{1 - x^2}}\,dx = \frac{1}{\pi} \int_{-1}^{1} \frac{e^x}{\sqrt{1 - x^2}}\,dx \approx 1.2660

a_1 = \frac{2}{\pi} \int_{-1}^{1} \frac{f(x)\, T_1(x)}{\sqrt{1 - x^2}}\,dx = \frac{2}{\pi} \int_{-1}^{1} \frac{x e^x}{\sqrt{1 - x^2}}\,dx \approx 1.1303

Thus, P_1(x) = 1.2660 + 1.1303 x.
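The weighted integrals above can be evaluated numerically. A convenient sketch (assuming NumPy and SciPy) substitutes x = cos θ, which cancels the 1/√(1−x²) weight and leaves smooth integrals over [0, π]:

```python
import numpy as np
from scipy.integrate import quad

# Chebyshev least-squares coefficients for f(x) = e^x.
# With x = cos(t): a_0 = (1/pi) ∫_0^pi f(cos t) dt,
#                  a_1 = (2/pi) ∫_0^pi f(cos t) cos(t) dt.
f = np.exp
a0 = quad(lambda t: f(np.cos(t)), 0, np.pi)[0] / np.pi
a1 = 2 / np.pi * quad(lambda t: f(np.cos(t)) * np.cos(t), 0, np.pi)[0]
print(a0, a1)  # ≈ 1.2661 and 1.1303
```

The substitution avoids the integrable singularities of the weight at x = ±1, which would otherwise need special handling in the quadrature.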
