1 Taylor Series

The Taylor polynomial of order N of a function f around a point x_0 is defined as

P_N(x) = \sum_{n=0}^{N} \frac{f^{(n)}(x_0)}{n!} (x - x_0)^n.
P_N is indeed a polynomial in x, as the dependence on x is restricted to the (x - x_0)^n terms and f^{(n)}(x_0) is just a constant (it does not depend on x). Here we used the convention that f^{(0)} = f.
Theorem 1 If |x - x_0| is small enough, then

F(x) = \lim_{N \to \infty} P_N(x)

exists (note that the limit is taken on N; x is fixed here) and F(x) = f(x).
Why is this result powerful? This theorem states that, for x close to x_0, f(x) can be expressed as a sum of polynomial terms. However, for the equality to hold, all the terms must be included (there is an infinity of them!). More generally,
f(x) = \underbrace{\sum_{n=0}^{N} \frac{f^{(n)}(x_0)}{n!} (x - x_0)^n}_{P_N(x)} + \underbrace{\sum_{n=N+1}^{\infty} \frac{f^{(n)}(x_0)}{n!} (x - x_0)^n}_{R_N(x)}    (1)
and R_N(x) is called the remainder: it is the truncation error that is made when replacing f(x) by P_N(x). When x \to x_0, R_N(x) \to 0 (here the limit is taken in x, with N fixed): P_N(x) is a better approximation of f(x) as we get closer to x_0. The result is even stronger:
\lim_{x \to x_0} \frac{R_N(x)}{(x - x_0)^N} = \lim_{x \to x_0} \left[ \sum_{k=1}^{\infty} \frac{f^{(k+N)}(x_0)}{(k+N)!} (x - x_0)^k \right] = 0
and this tells us that R_N(x) goes to zero faster than (x - x_0)^N as x goes to x_0.
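As a quick numerical illustration (a sketch, not part of the original notes; the choice of f(x) = e^x, x_0 = 0, N = 3 and the sample points is arbitrary), we can watch the ratio R_N(x)/(x - x_0)^N shrink as x approaches x_0:

```python
import math

def p_n_exp(x, N):
    # Taylor polynomial of e^x at x0 = 0: sum of x^n / n! for n = 0..N
    return sum(x**n / math.factorial(n) for n in range(N + 1))

N = 3
for x in [0.1, 0.01, 0.001]:
    remainder = math.exp(x) - p_n_exp(x, N)   # R_N(x)
    print(x, remainder / x**N)                # ratio tends to 0 as x -> 0
```

The printed ratios decrease towards zero, confirming that the remainder vanishes faster than (x - x_0)^N.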
Theorem 2 For a given x_0 and a given x, with R_N(x) defined in (1), there exists \xi with |\xi - x_0| \le |x - x_0| (that means that \xi is between x_0 and x) such that

R_N(x) = \frac{f^{(N+1)}(\xi)}{(N+1)!} (x - x_0)^{N+1}.    (2)
This second theorem allows us to replace the infinite sum in R_N by a single term (but note that \xi changes if we change x). And if f is such that |f^{(N+1)}(t)| \le M_{N+1} for all t near x_0, then we have a bound on the remainder (that is, on the truncation error):

|R_N(x)| \le \frac{M_{N+1}}{(N+1)!} |x - x_0|^{N+1}.
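For instance (an illustrative check, with the function and values chosen for convenience), for f = sin every derivative is bounded by 1, so M_{N+1} = 1 and the bound can be verified directly:

```python
import math

def sin_taylor(x, N):
    # Taylor polynomial of sin at x0 = 0; f^{(n)}(0) cycles through 0, 1, 0, -1
    derivs = [0, 1, 0, -1]
    return sum(derivs[n % 4] * x**n / math.factorial(n) for n in range(N + 1))

x, N = 0.5, 3
remainder = abs(math.sin(x) - sin_taylor(x, N))
bound = abs(x)**(N + 1) / math.factorial(N + 1)   # M_{N+1} = 1 for sin
print(remainder <= bound)
```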
This allows us to make our prior statement precise: R_N goes to zero at least as fast as (x - x_0)^{N+1}. This is expressed with the following mathematical notation, also known as big O notation:
Big O notation f is said to be a big O of g when x goes to x_0 if and only if there exists A such that |f(x)| \le A|g(x)| for all x close enough to x_0. We then write f(x) = O(g(x)).
Note that A is not specified here; we just say that there exists such a constant A. A few properties of this notation:
- if f(x) = O(g(x)) as x \to x_0, then f(x) = O(\lambda g(x)) for any non-zero constant \lambda;
- if f(x) = O(g(x)) and h(x) = O(g(x)) as x \to x_0, then f(x) + h(x) = O(g(x)).
Equation (1) can be rewritten:

f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2}(x - x_0)^2 + \dots + \frac{f^{(N-1)}(x_0)}{(N-1)!}(x - x_0)^{N-1} + \frac{f^{(N)}(x_0)}{N!}(x - x_0)^N + O\left((x - x_0)^{N+1}\right)    (3)
The big O notation allows us to write an equals sign (the previous equation is NOT an approximation), as it includes all the error terms and tells us how fast they go to zero as x approaches x_0.
Remark: In general, Taylor series lead to an infinite sum (or a big O notation to represent this infinite sum). There is only one particular case where the series is finite: if f is itself a polynomial. Then, for large enough Q, f^{(n)}(x_0) = 0 for all n \ge Q.
2 Taylor Series and truncation error
A convenient way to compute a derivative using finite differences is the following:

f'(t) \approx \frac{f(t+h) - f(t)}{h}

We must use a \approx sign instead of an equality, as this is true only when h \to 0 (definition of a derivative as a limit).
Using Taylor series, we can compute f(t+h) by expanding f in a Taylor series near t, as h is small. Substituting in (3) with x = t+h and x_0 = t, we obtain (setting N = 2 for example):

f(t+h) = f(t) + h f'(t) + \frac{h^2}{2} f''(t) + O(h^3).
Subtracting f(t) from both sides and dividing by h, we therefore obtain:

\frac{f(t+h) - f(t)}{h} = f'(t) + \frac{h}{2} f''(t) + O(h^2).    (4)
Note that in the previous equation we have been able to divide by h inside the O notation. Is that correct? If f(x) = O(x^n) as x goes to zero, this means that, close enough to 0, |f(x)| \le M|x^n|. Dividing this inequality by |x| \ne 0, we obtain |f(x)/x| \le M|x^{n-1}|, hence f(x)/x = O(x^{n-1}). You can verify immediately that the same works for a multiplication.
Coming back to (4), we observe that by replacing f'(t) by the finite difference (f(t+h) - f(t))/h, we make an error of order O(h). It is important to know the order of the error, as it tells us how fast we will converge to the correct result when we decrease h. The error is said to be linear if it is O(h), quadratic if it is O(h^2), and so on.
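A small experiment (an illustrative sketch; the test function sin, the point t = 1, and the step sizes are arbitrary choices) makes the linear behaviour of (4) visible: halving h roughly halves the error.

```python
import math

def forward_diff(f, t, h):
    # one-sided finite-difference approximation of f'(t)
    return (f(t + h) - f(t)) / h

t = 1.0
exact = math.cos(t)                      # exact derivative of sin
for h in [0.1, 0.05, 0.025]:
    err = abs(forward_diff(math.sin, t, h) - exact)
    print(h, err)                        # error shrinks roughly like h
```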
Exercise 1: What is the order of the error when computing the first derivative f'(t) as

f'(t) \approx \frac{f(t+h) - f(t-h)}{2h} ?
3 Examples of some famous Taylor Series
f(x) = e^x. For any n \ge 0, f^{(n)}(x) = e^x. Therefore f^{(n)}(0) = 1, and the Taylor series of f(x) = e^x written at x_0 = 0 is:

f(x) = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \dots + \frac{x^n}{n!} + \dots = \sum_{n=0}^{\infty} \frac{x^n}{n!}.
One of the particularities of the exponential function is that the previous expansion at x_0 = 0 gives an expression for f(x) that is valid for all real and complex x. In particular, if x = iy, then by definition of the cos and sin functions: e^{iy} = \cos y + i \sin y. Substituting x = iy in the expansion above and taking the real and imaginary parts of the resulting equation leads to:
\cos y = 1 - \frac{y^2}{2} + \frac{y^4}{24} - \dots = \sum_{k=0}^{\infty} (-1)^k \frac{y^{2k}}{(2k)!}

\sin y = y - \frac{y^3}{6} + \frac{y^5}{120} - \dots = \sum_{k=0}^{\infty} (-1)^k \frac{y^{2k+1}}{(2k+1)!}
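This identity can be sketched in code (assuming nothing beyond the series above; the value y = 0.7 and the 30-term cutoff are arbitrary): summing enough terms of \sum z^n / n! with z = iy reproduces cos y and sin y as the real and imaginary parts.

```python
import math

def exp_series(z, terms=30):
    # partial sum of sum_{n>=0} z^n / n!; works for complex z as well
    total, term = 0, 1
    for n in range(terms):
        total += term
        term = term * z / (n + 1)
    return total

y = 0.7
e = exp_series(1j * y)
print(abs(e.real - math.cos(y)), abs(e.imag - math.sin(y)))  # both essentially zero
```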
g(x) = \frac{1}{1-x}. Taking the successive derivatives, we obtain:

g'(x) = \frac{1}{(1-x)^2}, \quad g''(x) = \frac{2}{(1-x)^3}, \quad g'''(x) = \frac{6}{(1-x)^4}, \quad \dots, \quad g^{(n)}(x) = \frac{n!}{(1-x)^{n+1}}
therefore g^{(n)}(0) = n!, with the convention that 0! = 1. Then we obtain the Taylor series expansion of g at x_0 = 0 as

g(x) = 1 + x + x^2 + x^3 + \dots = \sum_{n=0}^{\infty} x^n
You can actually prove mathematically that the above expression is valid for |x| < 1. (Remember that the Taylor series theorems only tell you that it is valid close to x_0 = 0.)
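The |x| < 1 restriction is easy to observe numerically (a quick sketch with arbitrarily chosen values): the partial sums settle down for |x| < 1 and blow up otherwise.

```python
def geom_partial(x, N):
    # P_N(x) = 1 + x + ... + x^N, the Taylor polynomial of 1/(1 - x) at x0 = 0
    return sum(x**n for n in range(N + 1))

print(abs(geom_partial(0.5, 50) - 1 / (1 - 0.5)))  # essentially zero: converges
print(geom_partial(1.5, 50))                       # huge: the series diverges
```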
One can integrate or differentiate Taylor series (with some care): for example, defining h(x) = \log(1+x), we want to obtain the Taylor series of h near x_0 = 0. We have h'(x) = \frac{1}{1+x}. Using the previous result (with x replaced by -x), we have:

h'(x) = 1 - x + x^2 - x^3 + \dots = \sum_{n=0}^{\infty} (-1)^n x^n
and after integration, using h(0) = 0, we obtain:

h(x) = \log(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \dots = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{x^n}{n}
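As a sanity check (an illustrative sketch; the point x = 0.3 and the truncation order are arbitrary), the partial sums of this series do match math.log(1 + x) for small |x|:

```python
import math

def log1p_series(x, N):
    # sum_{n=1}^{N} (-1)^{n+1} x^n / n, the series for log(1 + x) at x0 = 0
    return sum((-1)**(n + 1) * x**n / n for n in range(1, N + 1))

x = 0.3
print(abs(log1p_series(x, 40) - math.log(1 + x)))  # essentially zero
```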
Exercise 2 Obtain the Taylor series expansion of \tan x near x_0 = 0 up to the term of order x^5.
If you want to know more about Taylor series, I also recommend that you look at the following websites:
http://en.wikipedia.org/wiki/Taylor_series
http://www.math.unh.edu/~jjp/taylor/taylor.html
SOLUTIONS
Exercise 1 Using the same approach as before, we can form the Taylor series expansion of f near t at t+h and t-h:

f(t+h) = f(t) + h f'(t) + \frac{h^2}{2} f''(t) + \frac{h^3}{6} f'''(t) + \frac{h^4}{24} f^{(4)}(t) + O(h^5)

f(t-h) = f(t) - h f'(t) + \frac{h^2}{2} f''(t) - \frac{h^3}{6} f'''(t) + \frac{h^4}{24} f^{(4)}(t) + O(h^5)
Subtracting the two equations and dividing by 2h, we obtain:

\frac{f(t+h) - f(t-h)}{2h} = f'(t) + \frac{h^2}{6} f'''(t) + O(h^4).
Now the error varies like h^2, so the error is quadratic: by using h/2 instead of h, the error will be divided by 4.
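This quadratic behaviour can be observed directly (a sketch, with sin as an arbitrary test function and arbitrary step sizes): halving h divides the error by about 4.

```python
import math

def central_diff(f, t, h):
    # symmetric difference quotient approximating f'(t)
    return (f(t + h) - f(t - h)) / (2 * h)

t = 1.0
exact = math.cos(t)                                  # exact derivative of sin
e1 = abs(central_diff(math.sin, t, 0.1) - exact)
e2 = abs(central_diff(math.sin, t, 0.05) - exact)
print(e1 / e2)  # close to 4, as expected for a quadratic error
```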
Exercise 2 \tan x = \frac{\sin x}{\cos x}. We have obtained the Taylor series expansions of cos and sin previously:

\sin x = x - \frac{x^3}{6} + \frac{x^5}{120} + O(x^7), \quad \cos x = 1 - \frac{x^2}{2} + \frac{x^4}{24} + O(x^6) = 1 - y, \quad \text{with } y = \frac{x^2}{2} - \frac{x^4}{24} + O(x^6)
Then

\frac{1}{\cos x} = \frac{1}{1 - y} = 1 + y + y^2 + O(y^3)

(it is not necessary to keep higher-order terms, as y = O(x^2) and therefore y^3 = O(x^6)). Expanding y, y^2 and y^3 as functions of x, we obtain:
\frac{1}{\cos x} = 1 + \left( \frac{x^2}{2} - \frac{x^4}{24} + O(x^6) \right) + \left( \frac{x^2}{2} - \frac{x^4}{24} + O(x^6) \right)^2 + O(x^6)

= 1 + \frac{x^2}{2} - \frac{x^4}{24} + \frac{x^4}{4} + O(x^6)

= 1 + \frac{x^2}{2} + \frac{5x^4}{24} + O(x^6)
Then

\tan x = \frac{\sin x}{\cos x} = \left( x - \frac{x^3}{6} + \frac{x^5}{120} + O(x^7) \right) \left( 1 + \frac{x^2}{2} + \frac{5x^4}{24} + O(x^6) \right)

= x - \frac{x^3}{6} + \frac{x^3}{2} + \frac{x^5}{120} - \frac{x^5}{12} + \frac{5x^5}{24} + O(x^7)

= x + \frac{x^3}{3} + \frac{2x^5}{15} + O(x^7)
You can check this result by using the successive derivatives of \tan x and the definition of the Taylor series.
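A numerical cross-check of the expansion (a sketch; the evaluation points are arbitrary): the truncated polynomial x + x^3/3 + 2x^5/15 stays within an O(x^7) error of math.tan, so halving x should shrink the error by roughly 2^7 = 128.

```python
import math

def tan_taylor(x):
    # the expansion obtained above, with an O(x^7) error
    return x + x**3 / 3 + 2 * x**5 / 15

e_big = abs(math.tan(0.2) - tan_taylor(0.2))
e_small = abs(math.tan(0.1) - tan_taylor(0.1))
print(e_big, e_small)
```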