Chapter Two
Approximation Theory
Introduction
The problem of approximating a function is an important problem in numerical analysis due to its wide
application in the development of software for digital computers. The functions commonly used for
approximating given functions are polynomials, trigonometric functions, exponential functions, and
rational functions. However, from an application point of view, polynomial functions are the most widely used.
Approximation theory involves two types of problems. The first arises when a function is given
explicitly, but we wish to find a simpler type of function, such as a polynomial, that can be used to
approximate values of the given function.
The second kind of problem in approximation theory is concerned with fitting functions to a given set of
data and finding the “best” function in a certain class that can be used to represent the set of data. To handle
these two problems, we use the basic methods of approximation. Among the approximation methods in existence are Taylor approximation, Lagrange polynomials, least-squares approximation, Hermite approximation, cubic spline interpolation, Chebyshev approximation, Legendre polynomials, and rational function approximation, among a few others.
Almost every approximation method involves polynomials; hence, some methods of approximation are named after the family of polynomials they use. For example, Chebyshev approximation is often referred to as approximation by Chebyshev polynomials.
2.1. Least-Squares Approximation
The least-squares method of curve fitting was suggested early in the nineteenth century by the French mathematician Adrien Legendre. The method of least squares takes as the best-fitting line (or curve) the one for which the sum of the squares of the vertical distances of the points (xᵢ, yᵢ) from the line is minimum.
Least-squares approximation methods can be classified into two types, namely discrete least-squares approximation and continuous least-squares approximation. The former involves fitting a polynomial function to a set of data points using the least-squares approach, while the latter uses orthogonal polynomials to determine an appropriate polynomial function that fits a given function.
1. Discrete Least-Squares Approximation
The basic idea of least-squares approximation is to fit a polynomial function P(x) to a set of data points (xᵢ, yᵢ) having a theoretical solution
y = f(x)                                                      (1)
The aim is to minimize the sum of the squares of the errors. Given a set of m discrete data points (xᵢ, yᵢ), i = 1, 2, …, m, find the algebraic polynomial
Pₙ(x) = a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ   (n < m)                   (2)
such that the error E(a₀, a₁, …, aₙ) in the least-squares sense is minimized; that is,
E(a₀, a₁, …, aₙ) = ∑ᵢ₌₁ᵐ (yᵢ − (a₀ + a₁xᵢ + a₂xᵢ² + ⋯ + aₙxᵢⁿ))²                   (3)
is minimum. Here E is a function of the (n + 1) unknowns a₀, a₁, …, aₙ. Setting ∂E/∂aₖ = 0 for k = 0, 1, …, n gives the system
∑ xᵢᵏyᵢ = a₀ ∑ xᵢᵏ + a₁ ∑ xᵢᵏ⁺¹ + ⋯ + aₙ ∑ xᵢᵏ⁺ⁿ,   k = 0, 1, …, n                   (4)
(all sums taken over i = 1, 2, …, m).
Solving the system (4) for a₀, a₁, …, aₙ and substituting these values into equation (2) gives the best-fitting curve to (1). The set of equations in (4) is a system of (n + 1) equations in the (n + 1) unknowns a₀, a₁, …, aₙ; these equations are called the Normal Equations of the least-squares method. In practice, equation (4) is applied by constructing a table of values for each required sum and evaluating each summation. We shall now illustrate how to use the set of equations (4) in tabular form.
Example: By using the Least Squares Approximation, fit
a. a straight line
b. a parabola to the given data below
𝑥 1 2 3 4 5 6
𝑦 120 90 60 70 35 11
Solution
a. In order to fit a straight line to the set of data above, we assume an equation of the form
y = a₀ + a₁x
From this straight-line equation, we have two unknowns, a₀ and a₁, to determine. The normal equations needed to determine them are obtained from equation (4) as:
∑ yᵢ = m a₀ + a₁ ∑ xᵢ
∑ xᵢyᵢ = a₀ ∑ xᵢ + a₁ ∑ xᵢ²
(the sums run over i = 1, …, m, where m is the number of data points)
Hence we shall need to construct columns for the values of x², x³, x⁴, xy and x²y in addition to the x and y values already given. The table below shows the necessary columns:
x      y      x²     x³     x⁴      xy      x²y
1     120      1      1      1     120      120
2      90      4      8     16     180      360
3      60      9     27     81     180      540
4      70     16     64    256     280     1120
5      35     25    125    625     175      875
6      11     36    216   1296      66      396
∑x = 21   ∑y = 386   ∑x² = 91   ∑x³ = 441   ∑x⁴ = 2275   ∑xy = 1001   ∑x²y = 3411
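Substituting the tabulated sums (with m = 6 data points) into the two normal equations for the straight line gives
386 = 6a₀ + 21a₁
1001 = 21a₀ + 91a₁
Solving these simultaneously gives a₁ = −20 and a₀ = 806/6 ≈ 134.33, so the fitted straight line is y ≈ 134.33 − 20x.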
b. To fit a parabola to the same data, we assume an equation of the form y = a₀ + a₁x + a₂x². The normal equations obtained from equation (4) are:
∑ yᵢ = m a₀ + a₁ ∑ xᵢ + a₂ ∑ xᵢ²
∑ xᵢyᵢ = a₀ ∑ xᵢ + a₁ ∑ xᵢ² + a₂ ∑ xᵢ³
∑ xᵢ²yᵢ = a₀ ∑ xᵢ² + a₁ ∑ xᵢ³ + a₂ ∑ xᵢ⁴
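For a quick check of both fits, one can also form and solve these normal equations numerically. The sketch below is a NumPy illustration (not part of the original notes; the helper name normal_equations_fit is introduced here for convenience):

```python
import numpy as np

# Data from the example above
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([120, 90, 60, 70, 35, 11], dtype=float)

def normal_equations_fit(x, y, n):
    """Fit a degree-n least-squares polynomial by forming and solving
    the normal equations (4) directly."""
    # A[k, j] = sum_i x_i^(k + j),  b[k] = sum_i x_i^k * y_i
    A = np.array([[np.sum(x**(k + j)) for j in range(n + 1)] for k in range(n + 1)])
    b = np.array([np.sum(x**k * y) for k in range(n + 1)])
    return np.linalg.solve(A, b)  # coefficients a_0, a_1, ..., a_n

print(normal_equations_fit(x, y, 1))  # straight line: approx [134.33, -20.0]
print(normal_equations_fit(x, y, 2))  # parabola: approx [136.0, -21.25, 0.179]
```

The printed coefficients give the straight line y ≈ 134.33 − 20x and the parabola y ≈ 136 − 21.25x + 0.179x² for part (b).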
2. Continuous Least-Squares Approximation
In the continuous case the sum of squared errors is replaced by an integral, E = ∫ₐᵇ (f(x) − P(x))² dx, where y = f(x) is our continuous function and P(x) is our approximating function. Here we describe continuous least-squares approximation of a function f(x) using polynomials. First, consider approximation by a polynomial with the monomial basis {1, x, x², …, xⁿ}.
2.2. Least-Squares Approximation of a Function Using Monomial Polynomials
Given a function 𝑓(𝑥), continuous on [𝑎, 𝑏], find a polynomial 𝑃𝑛 (𝑥) of degree at most 𝑛:
Pₙ(x) = a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ
such that the integral of the square of the error is minimized. That is,
E = ∫ₐᵇ (f(x) − Pₙ(x))² dx   is minimized.
The polynomial 𝑃𝑛 (𝑥) is called the Least-Squares Polynomial.
Since 𝐸 is a function of 𝑎0 , 𝑎1 , . . . , 𝑎𝑛 , we denote this by 𝐸(𝑎0 , 𝑎1 , . . . , 𝑎𝑛 ).
For minimization, we must have
∂E/∂aᵢ = 0,   i = 0, 1, …, n
As before, these conditions will give rise to a system of (𝑛 + 1) normal equations in (𝑛 + 1) unknowns:
𝑎0 , 𝑎1 , . . . , 𝑎𝑛 . Solution of these equations will yield the unknowns: 𝑎0 , 𝑎1 , . . . , 𝑎𝑛 .
Setting up the Normal Equations. Since
E = ∫ₐᵇ [f(x) − (a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ)]² dx,
we have
∂E/∂a₀ = −2 ∫ₐᵇ [f(x) − (a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ)] dx
∂E/∂a₁ = −2 ∫ₐᵇ x[f(x) − (a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ)] dx
⋮
∂E/∂aₙ = −2 ∫ₐᵇ xⁿ[f(x) − (a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ)] dx
Since ∂E/∂aᵢ = 0 for i = 0, 1, 2, …, n, we have
∫ₐᵇ f(x) dx = a₀ ∫ₐᵇ dx + a₁ ∫ₐᵇ x dx + ⋯ + aₙ ∫ₐᵇ xⁿ dx
∫ₐᵇ x f(x) dx = a₀ ∫ₐᵇ x dx + a₁ ∫ₐᵇ x² dx + ⋯ + aₙ ∫ₐᵇ xⁿ⁺¹ dx
⋮
∫ₐᵇ xⁿ f(x) dx = a₀ ∫ₐᵇ xⁿ dx + a₁ ∫ₐᵇ xⁿ⁺¹ dx + ⋯ + aₙ ∫ₐᵇ x²ⁿ dx
Example: Find linear and quadratic least-squares approximations to f(x) = eˣ on [−1, 1].
Solution
Linear Approximation: 𝑛 = 1; 𝑃1 (𝑥) = 𝑎0 + 𝑎1 𝑥
∫₋₁¹ f(x) dx = a₀ ∫₋₁¹ dx + a₁ ∫₋₁¹ x dx
∫₋₁¹ x f(x) dx = a₀ ∫₋₁¹ x dx + a₁ ∫₋₁¹ x² dx
But
∫₋₁¹ dx = 2,  ∫₋₁¹ x dx = 0,  ∫₋₁¹ x² dx = 2/3
∫₋₁¹ f(x) dx = ∫₋₁¹ eˣ dx = e − 1/e ≈ 2.3504
∫₋₁¹ x f(x) dx = ∫₋₁¹ x eˣ dx = 2/e ≈ 0.7358
So
2.3504 = 2a₀ + (0)a₁
0.7358 = (0)a₀ + (2/3)a₁
a₀ = 1.1752 and a₁ = 1.1037
Hence P₁(x) = 1.1752 + 1.1037x
Quadratic Fitting: n = 2; P₂(x) = a₀ + a₁x + a₂x²
∫₋₁¹ f(x) dx = a₀ ∫₋₁¹ dx + a₁ ∫₋₁¹ x dx + a₂ ∫₋₁¹ x² dx
∫₋₁¹ x f(x) dx = a₀ ∫₋₁¹ x dx + a₁ ∫₋₁¹ x² dx + a₂ ∫₋₁¹ x³ dx
∫₋₁¹ x² f(x) dx = a₀ ∫₋₁¹ x² dx + a₁ ∫₋₁¹ x³ dx + a₂ ∫₋₁¹ x⁴ dx
But
∫₋₁¹ dx = 2,  ∫₋₁¹ x dx = 0,  ∫₋₁¹ x² dx = 2/3,  ∫₋₁¹ x³ dx = 0,  ∫₋₁¹ x⁴ dx = 2/5
∫₋₁¹ f(x) dx = ∫₋₁¹ eˣ dx = e − 1/e ≈ 2.3504
∫₋₁¹ x f(x) dx = ∫₋₁¹ x eˣ dx = 2/e ≈ 0.7358
∫₋₁¹ x² f(x) dx = ∫₋₁¹ x² eˣ dx ≈ 0.8789
So
2.3504 = 2a₀ + (0)a₁ + (2/3)a₂
0.7358 = (0)a₀ + (2/3)a₁ + (0)a₂
0.8789 = (2/3)a₀ + (0)a₁ + (2/5)a₂
Hence, solving the above system of equations, we obtain
a₀ = 0.9963,  a₁ = 1.1037,  a₂ = 0.5368
The quadratic least-squares polynomial is P₂(x) = 0.9963 + 1.1037x + 0.5368x²
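As an illustrative check (not part of the original text), the same normal equations can be assembled by numerical integration; the sketch below assumes NumPy and SciPy and reproduces the quadratic coefficients up to rounding:

```python
import numpy as np
from scipy.integrate import quad

f = np.exp                     # the function being approximated on [-1, 1]
a, b, n = -1.0, 1.0, 2

# Normal equations for the monomial basis {1, x, ..., x^n}:
# A[i, j] = integral of x^(i+j),  rhs[i] = integral of x^i * f(x)
A = np.array([[quad(lambda x, p=i + j: x**p, a, b)[0] for j in range(n + 1)]
              for i in range(n + 1)])
rhs = np.array([quad(lambda x, p=i: x**p * f(x), a, b)[0] for i in range(n + 1)])

print(np.linalg.solve(A, rhs))  # approximately [0.9963, 1.1036, 0.5367]
```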
2.3. Least-Squares Approximation of a Function Using Orthogonal Polynomials
Definition: The set of functions {𝜙0 , 𝜙1 , . . . , 𝜙𝑛 } in [𝑎, 𝑏] is called a set of orthogonal functions, with
respect to a weight function 𝑤(𝑥), if
∫ₐᵇ w(x) ϕⱼ(x)ϕᵢ(x) dx = { 0 if i ≠ j;  Cⱼ if i = j }
Furthermore, if 𝐶𝑗 = 1, 𝑗 = 0, 1, . . . , 𝑛, then the orthogonal set is
called an orthonormal set.
Using this property, least-squares computations can be made more numerically efficient, as shown below. Without any loss of generality, assume that w(x) = 1. Then the problem of finding the least-squares approximation of f(x) on [a, b] using orthogonal polynomials can be stated as follows:
Given f(x), continuous on [a, b], find a₀, a₁, …, aₙ in a polynomial of the form
Pₙ(x) = a₀Q₀(x) + a₁Q₁(x) + ⋯ + aₙQₙ(x),
where {Qₖ(x)}ₖ₌₀ⁿ is a given set of orthogonal polynomials on [a, b], such that the error function
E(a₀, a₁, …, aₙ) = ∫ₐᵇ [f(x) − (a₀Q₀(x) + a₁Q₁(x) + ⋯ + aₙQₙ(x))]² dx
is minimized.
As before, we set
∂E/∂aᵢ = 0,   i = 0, 1, …, n
Now
∂E/∂a₀ = 0  ⟹  ∫ₐᵇ Q₀(x)f(x) dx = ∫ₐᵇ Q₀(x)[a₀Q₀(x) + a₁Q₁(x) + ⋯ + aₙQₙ(x)] dx
By orthogonality, every term on the right-hand side vanishes except the one involving Q₀²(x), so
∫ₐᵇ Q₀(x)f(x) dx = a₀ ∫ₐᵇ Q₀²(x) dx = a₀C₀.
The same argument applies to each coefficient, giving
aₖ = (1/Cₖ) ∫ₐᵇ Qₖ(x)f(x) dx,   k = 0, 1, …, n,
where Cₖ = ∫ₐᵇ Qₖ²(x) dx.
2.4. Legendre Polynomials
The Legendre polynomials are known to possess an oscillatory property which makes them important in the field of numerical analysis. They have their root in the Legendre equation, which is a second-order differential equation; the first set of solutions of the Legendre equation is known as the Legendre polynomials.
Legendre Polynomial Approximation
When we try to find good polynomial approximations to a given function f(x), we are trying to represent f(x) in the form
f(x) = ∑ₖ₌₀ⁿ cₖxᵏ,
which is a series in the basis functions φₖ(x) = xᵏ. Unfortunately, the set 1, x, x², … is not
orthogonal over any non-zero interval. For example
∫ₐᵇ φ₁(x)φ₃(x) dx = ∫ₐᵇ x⁴ dx > 0,
which contradicts the assertion that {xᵏ} is orthogonal. It is, however, possible to construct a set of polynomials Q₀(x), Q₁(x), Q₂(x), …, Qₙ(x), …, where Qₙ(x) is of degree n, which are orthogonal over the interval [−1, 1]; from these, a set of polynomials orthogonal over any given finite interval [a, b] can be obtained.
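One standard way of constructing such an orthogonal set is to apply the Gram-Schmidt process to the monomials 1, x, x², …. The following SymPy sketch (an illustration, not part of the original notes) carries this out on [−1, 1]; up to scalar multiples it produces the Legendre polynomials introduced below.

```python
import sympy as sp

x = sp.symbols('x')

def gram_schmidt(basis, a=-1, b=1):
    """Orthogonalize a list of polynomials on [a, b] with respect to
    the inner product <f, g> = integral of f*g over [a, b]."""
    ortho = []
    for p in basis:
        for q in ortho:
            p = p - sp.integrate(p * q, (x, a, b)) / sp.integrate(q * q, (x, a, b)) * q
        ortho.append(sp.expand(p))
    return ortho

print(gram_schmidt([1, x, x**2, x**3]))
# [1, x, x**2 - 1/3, x**3 - 3*x/5]  (scalar multiples of Q_0, Q_1, Q_2, Q_3)
```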
The method for finding a set of polynomials which are orthogonal and normal over [−1, 1] is relatively simple, and we illustrate it by finding the first few such polynomials. We shall at this juncture give a definition of the Legendre polynomials, which can be used to generate the required set of polynomials.
Definition: Rodrigues' formula for generating the Legendre polynomials is given by
Qₙ(x) = (1/(2ⁿ n!)) dⁿ/dxⁿ [(x² − 1)ⁿ]
From the definition given above, it will be observed that an nth derivative must be carried out before a polynomial of degree n is obtained. Thus the first few Legendre polynomials can be obtained as follows.
Q₀(x) will not involve any derivative since n = 0; hence Q₀(x) = 1. Also, for n = 1 and n = 2, we have
Q₁(x) = (1/(2¹·1!)) d/dx (x² − 1) = (1/2)(2x) = x
Q₂(x) = (1/(2²·2!)) d²/dx² (x² − 1)² = (1/8) d²/dx² (x⁴ − 2x² + 1) = (1/2)(3x² − 1)
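As a quick check of Rodrigues' formula, the polynomials above can be regenerated by symbolic differentiation. This SymPy sketch is an illustration, not part of the original notes:

```python
import sympy as sp

x = sp.symbols('x')

def legendre_rodrigues(n):
    """Q_n(x) = (1/(2^n n!)) d^n/dx^n [(x^2 - 1)^n]."""
    return sp.expand(sp.diff((x**2 - 1)**n, x, n) / (2**n * sp.factorial(n)))

for n in range(4):
    print(n, legendre_rodrigues(n))
# 0 1
# 1 x
# 2 3*x**2/2 - 1/2
# 3 5*x**3/2 - 3*x/2
```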
To obtain Q₃(x) would require differentiating three times, and this becomes cumbersome as n increases. Because of this difficulty with higher-order differentiation, especially for n > 2 in Rodrigues' formula above, a simpler way of generating the Legendre polynomials is given by their recurrence relation.
Recurrence formula for Legendre polynomials
The recurrence formula for the Legendre polynomials Qₙ(x) is given by
Qₙ₊₁(x) = ((2n + 1)/(n + 1)) x Qₙ(x) − (n/(n + 1)) Qₙ₋₁(x)                (∗)
where Qₙ(x) satisfies the Legendre differential equation
(1 − x²)Qₙ″(x) − 2xQₙ′(x) + n(n + 1)Qₙ(x) = 0
Once Q₀(x) and Q₁(x) are obtained from Rodrigues' formula as
Q₀(x) = 1 and Q₁(x) = x,
we can substitute them into equation (∗) to generate higher-order polynomials. Thus, for n = 1, 2, 3, … we obtain from equation (∗) as follows.
For n = 1:  Q₂(x) = (3/2)x·Q₁(x) − (1/2)Q₀(x) = (3/2)x·x − (1/2)·1 = (3/2)x² − 1/2,
which is the same as the Q₂(x) obtained earlier using Rodrigues' formula. Furthermore, for n = 2:
Q₃(x) = (5/3)x·Q₂(x) − (2/3)Q₁(x) = (5/3)x·((3/2)x² − 1/2) − (2/3)x = (1/2)(5x³ − 3x)
Higher-order Legendre polynomials follow in the same way. Since ∫₋₁¹ Qₙ²(x) dx = 2/(2n + 1), the corresponding orthonormal set on [−1, 1] is
{qₙ(x)} = {√((2n + 1)/2) · Qₙ(x)}.
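The recurrence (∗) is straightforward to automate. The SymPy sketch below (an illustration, not part of the original notes) regenerates Q₂(x) and Q₃(x) from Q₀(x) = 1 and Q₁(x) = x:

```python
import sympy as sp

x = sp.symbols('x')

def legendre_recurrence(n_max):
    """Build Q_0, ..., Q_{n_max} from the recurrence (*):
    Q_{n+1}(x) = ((2n+1)/(n+1)) x Q_n(x) - (n/(n+1)) Q_{n-1}(x)."""
    Q = [sp.Integer(1), x]                       # Q_0 and Q_1 from Rodrigues' formula
    for n in range(1, n_max):
        Q.append(sp.expand(sp.Rational(2*n + 1, n + 1) * x * Q[n]
                           - sp.Rational(n, n + 1) * Q[n - 1]))
    return Q

print(legendre_recurrence(3))  # [1, x, 3*x**2/2 - 1/2, 5*x**3/2 - 3*x/2]
```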
Example: Find linear and quadratic least-squares approximations to f(x) = eˣ using Legendre polynomials.
Solution
Linear Approximation: 𝑃1 (𝑥) = 𝑎0 𝑄0 (𝑥) + 𝑎1 𝑄1 (𝑥)
But Q₀(x) = 1, Q₁(x) = x and Q₂(x) = (1/2)(3x² − 1).
We now compute the values of a₀, a₁ and a₂, following the steps below.
Step 1. Compute
C₀ = ∫₋₁¹ Q₀²(x) dx = ∫₋₁¹ dx = 2
C₁ = ∫₋₁¹ Q₁²(x) dx = ∫₋₁¹ x² dx = 2/3
C₂ = ∫₋₁¹ Q₂²(x) dx = (1/4) ∫₋₁¹ (3x² − 1)² dx = 2/5
Step 2. Compute a₀, a₁ and a₂:
a₀ = (1/C₀) ∫₋₁¹ Q₀(x)f(x) dx = (1/2) ∫₋₁¹ eˣ dx = (1/2)(e − 1/e) ≈ 1.1752
a₁ = (1/C₁) ∫₋₁¹ Q₁(x)f(x) dx = (3/2) ∫₋₁¹ x eˣ dx = 3/e ≈ 1.1037
a₂ = (1/C₂) ∫₋₁¹ Q₂(x)f(x) dx = (5/4) ∫₋₁¹ (3x² − 1) eˣ dx = (5/2)(e − 7/e) ≈ 0.3578
So
P₁(x) = a₀Q₀(x) + a₁Q₁(x) = (1/2)(e − 1/e) + (3/e)x = 1.1752 + 1.1037x
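As a numerical check on this example (a NumPy/SciPy sketch, not part of the original notes), the coefficients can be computed by quadrature; expanding a₀Q₀ + a₁Q₁ + a₂Q₂ then recovers the monomial-basis quadratic P₂(x) ≈ 0.9963 + 1.1037x + 0.5368x² found earlier.

```python
import numpy as np
from scipy.integrate import quad

f = np.exp
Q = [lambda x: 1.0 + 0 * x,            # Q_0(x) = 1
     lambda x: x,                      # Q_1(x) = x
     lambda x: 0.5 * (3 * x**2 - 1)]   # Q_2(x) = (3x^2 - 1)/2

C = [quad(lambda x, q=q: q(x)**2, -1, 1)[0] for q in Q]                    # 2, 2/3, 2/5
a = [quad(lambda x, q=q: q(x) * f(x), -1, 1)[0] / c for q, c in zip(Q, C)]
print(a)  # approximately [1.1752, 1.1036, 0.3578]

# Expand a0*Q0 + a1*Q1 + a2*Q2 back into powers of x:
P2 = a[0] * np.poly1d([1]) + a[1] * np.poly1d([1, 0]) + a[2] * np.poly1d([1.5, 0, -0.5])
print(P2)  # ~ 0.5368 x^2 + 1.1036 x + 0.9963, matching the monomial-basis fit
```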
2.5. Chebyshev Polynomials
The Chebyshev polynomial of degree n is defined by Tₙ(x) = cos(n cos⁻¹ x), −1 ≤ x ≤ 1 (equivalently, Tₙ(x) = cos nθ with x = cos θ). Tₙ(x) belongs to an orthogonal family of polynomials of degree n with weight function
w(x) = 1/√(1 − x²),   −1 ≤ x ≤ 1.
It has the oscillatory property that on 0 ≤ θ ≤ π the function takes alternating equal maximum and minimum values of ±1 at the (n + 1) points
θᵣ = rπ/n,  r = 0, 1, 2, …, n,   or equivalently   xᵣ = cos(rπ/n),  r = 0, 1, 2, …, n.
Thus, to obtain the Chebyshev polynomials, we start from the definition:
T₀(x) = cos 0 = 1 (the Chebyshev polynomial of degree zero),
T₁(x) = cos(cos⁻¹ x) = x (the Chebyshev polynomial of degree 1),
T₂(x) = cos(2 cos⁻¹ x) = 2cos²(cos⁻¹ x) − 1 = 2x² − 1,
and so on. For the least-squares approximation of a function f(x) using Chebyshev polynomials, we take Qₖ(x) = Tₖ(x) together with the weight w(x) = 1/√(1 − x²). Then, using the orthogonality property of the Chebyshev polynomials, it is easy to see that:
C₀ = ∫₋₁¹ T₀²(x)/√(1 − x²) dx = π   and   Cₖ = ∫₋₁¹ Tₖ²(x)/√(1 − x²) dx = π/2,   k = 1, …, n
Thus
a₀ = (1/π) ∫₋₁¹ f(x)/√(1 − x²) dx
aᵢ = (2/π) ∫₋₁¹ f(x)Tᵢ(x)/√(1 − x²) dx,   i = 1, …, n
The least-squares approximating polynomial Pₙ(x) of f(x) using Chebyshev polynomials is given by
Pₙ(x) = a₀T₀(x) + a₁T₁(x) + ⋯ + aₙTₙ(x),
where aᵢ = (2/π) ∫₋₁¹ f(x)Tᵢ(x)/√(1 − x²) dx, i = 1, …, n, and a₀ = (1/π) ∫₋₁¹ f(x)/√(1 − x²) dx.
Example: For f(x) = eˣ on [−1, 1], these integrals evaluate numerically to a₀ ≈ 1.2660 and a₁ ≈ 1.1303.
Thus, P₁(x) = 1.2660 + 1.1303x.
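A short numerical check (assuming SciPy is available; not part of the original notes) reproduces these Chebyshev coefficients for f(x) = eˣ, using the substitution x = cos θ to avoid the endpoint singularity in the weight function:

```python
import numpy as np
from scipy.integrate import quad

f = np.exp   # the function approximated in the example above

def chebyshev_coeff(i):
    """With x = cos(t): a_0 = (1/pi) * integral of f(cos t) dt and
    a_i = (2/pi) * integral of f(cos t) cos(i t) dt, both over [0, pi]."""
    val, _ = quad(lambda t: f(np.cos(t)) * np.cos(i * t), 0, np.pi)
    return (1 if i == 0 else 2) * val / np.pi

a0, a1 = chebyshev_coeff(0), chebyshev_coeff(1)
print(f"P1(x) = {a0:.4f} + {a1:.4f} x")  # P1(x) = 1.2661 + 1.1303 x
```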