Numerical Methods With MATLAB PDF
Numerical Methods With MATLAB PDF
Contents
Chapter 1. Solutions of Nonlinear Equations
1.1. Computer Arithmetics
1.2. Review of Calculus
1.3. The Bisection Method
1.4. Fixed Point Iteration
1.5. Newtons, Secant, and False Position Methods
1.6. Accelerating Convergence
1.7. Horners Method and the Synthetic Division
1.8. M
ullers Method
1
1
4
4
8
13
20
22
25
27
27
29
32
35
36
37
41
41
43
45
47
49
51
53
55
57
59
59
67
71
73
75
78
82
83
84
iii
iv
CONTENTS
87
87
88
91
99
100
101
104
109
122
131
131
131
134
136
136
140
Bibliography
141
143
143
145
149
155
155
156
157
159
159
161
162
164
166
Solutions to
Solutions
Solutions
Solutions
Solutions
169
169
171
172
176
Index
183
CHAPTER 1
a
a
=
a
a
respectively.
(3) A roundoff error occurs when a computer approximates a real number
by a number with only a finite number of digits to the right of the decimal
point (see Subsection 1.1.2).
(4) In scientific computation, the floating point representation of a number c of length d in the base is
c = 0.b1 b2 bd N ,
where b1 6= 0, 0 bi < . We call b1 b2 bd the mantissa or decimal
part and N the exponent of c. For instance, with d = 5 and = 10,
0.27120 102 ,
0.31224 103 .
0.1230 102 ,
0.1000 103 ,
is 4, 3, and 1, respectively.
(6) The term truncation error is used for the error committed when an
infinite series is truncated after a finite number of terms.
1
Remark 1.1. For simplicity, we shall often write floating point numbers
without exponent and with zeros immediately to the right of the decimal point
or with nonzero numbers to the left of the decimal point:
0.001203,
12300.04
1.9950000 2.00
4.9850000 4.99
Floating-point numbers are chopped to k digits by replacing the digits to the
right of the kth digit by zeros.
1.1.3. Cancellation in computations. Cancellation due to the subtraction of two almost equal numbers leads to a loss of significant digits. It is better
to avoid cancellation than to try to estimate the error due to cancellation. Example 1.2 illustrates these points.
Example 1.2. Use 10-digit rounded arithmetic to solve the quadratic equation
x2 1634x + 2 = 0.
Solution. The usual formula yields
Four of the six zeros at the end of the fractional part of x2 are the result of
cancellation and thus are meaningless. A more accurate result for x2 can be
obtained if we use the relation
x1 x2 = 2.
In this case
x2 = 1.223 991 125 103 ,
From Example 1.2, it is seen that a numerically stable formula for solving the
quadratic equation
ax2 + bx + c = 0,
a 6= 0,
is
i
p
c
1 h
x2 =
b sign (b) b2 4ac ,
,
2a
ax1
where the signum function is
(
+1, if x 0,
sign (x) =
1, if x < 0.
x1 =
Example 1.3. If the value of x rounded to three digits is 4.81 and the value
of y rounded to five digits is 12.752, find the smallest interval which contains the
exact value of x y.
Solution. Since
4.805 x < 4.815 and 12.7515 y < 12.7525,
then
4.805 12.7525 < x y < 4.815 12.7515 7.9475 < x y < 7.9365.
Example 1.4. Find the error and the relative error in the commonly used
rational approximations 22/7 and 355/113 to the transcendental number and
express your answer in three-digit floating point numbers.
Solution. The error and the relative error in 22/7 are
= 22/7 ,
r = /,
respectively. Similarly, Matlab computes the error and relative error in 355/113
as
r2 = 355/113.
r2 = 3.14159292035398
abserr2 = r2 - pi
abserr2 = 2.667641894049666e-07
relerr2 = abserr2/pi
relerr2 = 8.491367876740610e-08
Hence, the error and the relative error in 355/113 rounded to three digits are
= 0.267 106
i=1
and f
then, by Corollary 1.1, f (x) = 0 has a root between a and b. The root is either
between
a+b
a+b
< 0,
, if f (a) f
a and
2
2
y
(a n , f (an ))
y = f (x)
bn
0
an
x n+1
(b n , f (bn ))
Figure 1.1. The nth step of the bisection method.
or between
a+b
2
and b,
if f
or exactly at
a+b
2
f (b) < 0,
a+b
a+b
= 0.
, if f
2
2
The nth step of the bisection method is shown in Fig. 1.1.
The algorithm of the bisection method is as follows.
Algorithm 1.1 (Bisection Method). Given that f (x) is continuous on [a, b]
and f (a) f (b) < 0:
(1) Choose a0 = a, b0 = b; tolerance N ; maximum number of iteration N0 .
(2) For n = 0, 1, 2, . . . , N0 , compute
an + b n
.
2
If f (xn+1 ) = 0 or (bn an )/2 < T OL, then output p (= xn+1 ) and stop.
Else if f (xn+1 ) and f (an ) have opposite signs, set an+1 = an and bn+1 =
xn+1 .
Else set an+1 = xn+1 and bn+1 = bn .
Repeat (2), (3), (4) and (5).
Ouput Method failed after N0 iterations and stop.
xn+1 =
(3)
(4)
(5)
(6)
(7)
Other stopping criteria are described in Subsection 1.4.1. The rate of convergence of the bisection method is low but the method always converges.
The bisection method is programmed in the following Matlab function M-file
which is found in ftp://ftp.cs.cornell.edu/pub/cv.
function root = Bisection(fname,a,b,delta)
%
% Pre:
%
fname
string that names a continuous function f(x) of
%
a single variable.
%
%
a,b
define an interval [a,b]
%
f is continuous, f(a)f(b) < 0
%
%
delta
non-negative real number.
%
% Post:
%
root
the midpoint of an interval [alpha,beta]
%
with the property that f(alpha)f(beta)<=0 and
%
|beta-alpha| <= delta+eps*max(|alpha|,|beta|)
%
fa = feval(fname,a);
fb = feval(fname,b);
if fa*fb > 0
disp(Initial interval is not bracketing.)
return
end
if nargin==3
delta = 0;
end
while abs(a-b) > delta+eps*max(abs(a),abs(b))
mid = (a+b)/2;
fmid = feval(fname,mid);
if fa*fmid<=0
% There is a root in [a,mid].
b = mid;
fb = fmid;
else
% There is a root in [mid,b].
a = mid;
fa = fmid;
end
end
root = (a+b)/2;
an + b n
2
by
are listed in Table 1.1. The answer is
the bisection method. The results 2
2 1.414063 with an accuracy of 10 . Note that a root lies in the interval
[1.414063, 1.421875].
Example 1.6. Show that the function f (x) = x3 +4 x2 10 has a unique root
in the interval [1, 2] and give an approximation to this root using eight iterations
of the bisection method. Give a bound for the absolute error.
Solution. Since
f (1) = 5 < 0 and f (2) = 14 > 0,
xn
1.500000
1.250000
1.375000
1.437500
1.406250
1.421875
1.414063
an
1
1
1.250000
1.375000
1.375000
1.406250
1.406250
1.414063
bn
|xn1 xn | f (xn ) f (an )
2
1.500000
.500000
+
1.500000
.250000
1.500000
.125000
1.437500
.062500
+
1.437500
.031250
1.421875
.015625
+
1.421875
.007812
xn
1.500000000
1.250000000
1.375000000
1.312500000
1.343750000
1.359375000
1.367187500
1.363281250
an
1
1
1.250000000
1.250000000
1.312500000
1.343750000
1.359375000
1.359375000
1.363281250
bn
f (xn )
2
1.500000000
+
1.500000000
1.375000000
+
1.375000000
1.375000000
1.375000000
1.367187500
+
1.367187500
f (an )
then f (x) has a root, p, in [1, 2]. This root is unique since f (x) is strictly increasing
on [1, 2]; in fact
f (x) = 3 x2 + 4 x > 0 for all x between 1 and 2.
The results are listed in Table 1.2.
After eight iterations, we find that p lies between 1.363281250 and 1.367187500.
Therefore, the absolute error in p is bounded by
1.367187500 1.363281250 = 0.00390625.
Example 1.7. Find the number of iterations needed in Example 1.6 to have
an absolute error less than 104 .
Solution. Since the root, p, lies in each interval [an , bn ], after n iterations
the error is at most bn an . Thus, we want to find n such that bn an < 104 .
Since, at each iteration, the length of the interval is halved, it is easy to see that
bn an = (2 1)/2n .
Therefore, n satisfies the inequality
2n < 104 ,
that is,
ln 2n < ln 104 ,
or
n ln 2 < 4 ln 10.
Thus,
n > 4 ln 10/ln 2 = 13.28771238 = n = 14.
Hence, we need 14 iterations.
(1.1)
x = (9 x3 )/9
are equivalent. The problem is to choose a suitable function g(x) and a suitable
initial value x0 to have convergence. To treat this question we need to define the
different types of fixed points.
Definition 1.1. A fixed point, p = g(p), of an iterative scheme
xn+1 = g(xn ),
is said to be attractive, repulsive or indifferent if the multiplier, g (p), of g(x)
satisfies
|g (p)| < 1, |g (p)| > 1, or |g (p)| = 1,
respectively.
Theorem 1.6 (Fixed Point Theorem). Let g(x) be a real-valued function
satisfying the following conditions:
(1) g(x) [a, b] for all x [a, b].
(2) g(x) is differentiable on [a, b].
(3) There exists a number K, 0 < K < 1, such that |g (x)| K for all
x (a, b).
Then g(x) has a unique attractive fixed point p [a, b]. Moreover, for arbitrary
x0 [a, b], the sequence x0 , x1 , x2 , . . . defined by
xn+1 = g(xn ),
converges to p.
n = 0, 1, 2, . . . ,
By Corollary 1.1, there exists a number p ]a, b[ such that h(p) = 0, that is,
g(p) = p and p is a fixed point for g(x).
To prove uniqueness, suppose that p and q are distinct fixed points for g(x)
in [a, b]. By the Mean Value Theorem 1.3, there exists a number c between p and
q (and hence in [a, b]) such that
|p q| = |g(p) g(q)| = |g (c)| |p q| K|p q| < |p q|,
which is a contradiction. Thus p = q and the attractive fixed point in [a, b] is
unique.
We now prove convergence. By the Mean Value Theorem 1.3, for each pair
of numbers x and y in [a, b], there exists a number c between x and y such that
g(x) g(y) = g (c)(x y).
Hence,
In particular,
as
n ,
Corollary 1.1 implies that f (x) has a root, p, between 0 and 1. Condition (3) of
Theorem 1.6 is satisfied with K = 1/3 since
|g (x)| = | x2 /3| 1/3
for all x between 0 and 1. The other conditions are also satisfied.
Five iterations are performed with Matlab starting with x0 = 0.5. The function M-file exp8_8.m is
function x1 = exp8_8(x0); % Example 8.8.
x1 = (9-x0^3)/9;
10
xn
error n
n /n1
0.50000000000000 0.41490784153366
1.00000000000000
0.98611111111111
0.07120326957745
0.07120326957745
0.89345451579409 0.02145332573957 0.30129691890395
0.92075445888550
0.00584661735184 0.01940483617658
0.91326607850598 0.00164176302768
0.08460586900804
0.91536510274262
0.00045726120896
0.00540460389243
x1 ,
x2 ,
...,
xk1 ,
x2 = g 2 (x0 ),
...,
xk1 = g k1 (x0 ),
x0 = g k (x0 ).
j = 0, 1, . . . , k 1.
> 1,
= 1.
11
The multiplier of a cycle is seen to be the same at every point of the cycle.
Example 1.9. Find a root of the equation
f (x) = x3 + 4 x2 10 = 0
in the interval [1, 2] by fixed point iterative schemes and study their convergence
properties.
Solution. Since f (1)f (2) = 70 < 0, the equation f (x) = 0 has a root in
the interval [1, 2]. The exact roots are given by the Matlab command roots
p=[1 4 0 -10]; % the polynomial f(x)
r =roots(p)
r =
-2.68261500670705 + 0.35825935992404i
-2.68261500670705 - 0.35825935992404i
1.36523001341410
There is one real root, which we denote by x , in the interval [1, 2], and a pair
of complex conjugate roots.
Six iterations are performed with the following five rearrangements x = gj (x),
j = 1, 2, 3, 4, 5, of the given equation f (x) = 0. The derivative of gj (x) is evaluated
at the real root x 1.365.
x = g1 (x) =: 10 + x 4x2 x3 ,
p
x = g2 (x) =: (10/x) 4x,
1p
x = g3 (x) =:
10 x3 ,
2
p
x = g4 (x) =: 10/(4 + x),
x3 + 4x2 10
,
3x2 + 8x
The Matlab function M-file exp1_9.m is
x = g5 (x) =: x
g1 (x ) 15.51,
g2 (x ) 3.42,
g3 (x ) 0.51,
g4 (x ) 0.13
g5 (x ) = 0.
12
n
0
1
2
3
4
5
6
g1 (x)
10 + x 4x2 x3
1.5
0.8750
6.732421875
4.6972001 102
1.0275 108
1.08 1024
1.3 1072
g (x)
g (x)
p 2
3
(10/x) 4x 0.5 10 x3
1.5
1.5
0.816
1.286953
2.996
1.402540
0.00 2.94 i
1.345458
2.75 2.75 i
1.375170
1.81 3.53 i
1.360094
2.38 3.43 i
1.367846
g (x)
p 4
10/(4 + x)
1.5
1.348399
1.367376
1.364957
1.365264
1.365225
1.365230
g5 (x)
x3 +4x2 10
2x2 +8x
1.5
1.373333333
1.365262015
1.365230014
1.365230013
to produce a 10-digit correct answer. On the other hand, the sequence g2 (xn ) is
trapped in an attractive two-cycle,
with multiplier
z = 2.27475487839820 3.60881272309733 i,
Then, we have
g (p) 2
+ ....
2! n
g (p) 2
+ ...
2! n
13
y
Tangent
y = f (x)
(xn , f (x n ))
xn
p x n+1
g (p) 2
+ ....
2! n
(1.4)
= constant.
2n
2
1.5. Newtons, Secant, and False Position Methods
1.5.1. Newtons method. Let xn be an approximation to a root, p, of
f (x) = 0. Draw the tangent line
y = f (xn ) + f (xn )(x xn )
to the curve y = f (x) at the point (xn , f (xn )) as shown in Fig. 1.2. Then xn+1
is determined by the point of intersection, (xn+1 , 0), of this line with the x-axis,
0 = f (xn ) + f (xn ) (xn+1 xn ).
If f (xn ) 6= 0, solving this equation for xn+1 we obtain Newtons method, also
called the NewtonRaphson method,
xn+1 = xn
f (xn )
.
f (xn )
(1.5)
Note that Newtons method is a fixed point method since it can be rewritten in
the form
f (x)
.
xn+1 = g(xn ),
where g(x) = x
f (x)
14
xn
|xn xn1 |
2
1.5
0.5
1.416667
0.083333
1.414216
0.002451
1.414214
0.000002
xn
|xn xn1 |
1.5
1.37333333333333
0.126667
1.36526201487463
0.00807132
1.36523001391615
0.000032001
1.3652300134141 5.0205 1010
1.3652300134141 2.22045 1016
1.3652300134141 2.22045 1016
x2n + 2
x2n 2
f (xn )
=
.
=
x
n
f (xn )
2 xn
2xn
2 1.414214.
Note that the number of zeros in the errors roughly doubles as it is the case with
methods of second order.
Example 1.11. Use six iterations of Newtons method to approximate a root
p [1, 2] of the polynomial
given in Example 1.9.
f (x) = x3 + 4 x2 10 = 0
2(x3n + 2x2n + 5)
x3 + 4x2 10
f (xn )
=
.
= xn n 2 n
f (xn )
3xn + 8xn
3x2n + 8xn
Theorem 1.7. Let p be a simple root of f (x) = 0, that is, f (p) = 0 and
f (p) 6= 0. If f (p) exists, then Newtons method is at least of second order near
p.
Proof. Differentiating the function
g(x) = x
f (x)
f (x)
15
xn
n+1 /n
0.000
0.400
0.600
0.652
2.245
0.806
0.143
0.895
0.537
0.945
0.522
0.972
0.512
xn
0.00000000000000
0.80000000000000
0.98461538461538
0.99988432620012
0.99999999331095
1
1
n+1 /2n
0.2000
0.3846
0.4887
0.4999
we have
(f (x))2 f (x) f (x)
(f (x))2
f (x) f (x)
=
.
(f (x))2
g (x) = 1
1 f (p) 2
,
2 f (p) n
which explains the doubling of the number of leading zeros in the error of Newtons
method near a simple root of f (x) = 0.
Example 1.12. Use six iterations of the ordinary and modified Newtons
methods
f (xn )
f (xn )
xn+1 = xn
,
xn+1 = xn 2
f (xn )
f (xn )
to approximate the double root, x = 1, of the polynomial
f (x) = (x 1)2 (x 2).
Solution. The two methods have iteration functions
(x 1)(x 2)
(x 1)(x 2)
,
g2 (x) = x
,
g1 (x) = x
2(x 2) + (x 1)
(x 2) + (x 1)
respectively. We take x0 = 0. The results are listed in Table 1.7. One sees that
Newtons method has first-order convergence near a double zero of f (x), but one
16
y
(xn-1, f (xn-1))
y = f (x)
(xn , f (xn ))
Secant
x n-1
xn
x n+1
f (xn )
f (xn )
f (xn ) f (xn1 )
(x xn ).
xn xn1
f (xn ) f (xn1 )
(xn+1 xn ).
xn xn1
(xn xn1 )
f (xn ).
f (xn ) f (xn1 )
(1.6)
(xn xn1 )
f (xn ),
f (xn ) f (xn1 )
17
y
(a n , f (an ))
y = f (x)
Secant
p
0
an
x n+1
bn
(b n , f (bn ))
Figure 1.4. The nth step of the method of false position.
This method converges to a simple root to order 1.618 and may not converge
to a multiple root. Thus it is generally slower than Newtons method. However,
it does not require the derivative of f (x). In general applications of Newtons
method, the derivative of the function f (x) is approximated numerically by the
slope of a secant to the curve.
1.5.3. The method of false position. The method of false position, also
called regula falsi, is similar to the secant method, but with the additional condition that, for each n = 0, 1, 2, . . ., the pair of approximate values, an and bn ,
to the root, p, of f (x) = 0 be such that f (an ) f (bn ) < 0. The next iterate,
xn+1 , is determined by the intersection of the secant passing through the points
(an , f (an )) and (bn , f (bn )) with the x-axis.
The equation for the secant through (an , f (an )) and (bn , f (bn )), shown in
Fig. 1.4, is
f (bn ) f (an )
(x an ).
y = f (an ) +
b n an
Hence, xn+1 satisfies the equation
f (bn ) f (an )
(xn+1 an ),
b n an
which leads to the method of false position:
an f (bn ) bn f (an )
.
xn+1 =
f (bn ) f (an )
The algorithm for the method of false position is as follows.
0 = f (an ) +
(1.7)
an f (bn ) bn f (an )
.
f (bn ) f (an )
18
xn
1.333333
1.400000
1.411765
1.413793
1.414141
an
bn
1
2
1.333333 2
1.400000 2
1.411765 2
1.413793 2
1.414141 2
0.066667
0.011765
0.002028
0.000348
(6) Repeat (2)(5) until the selected stopping criterion is satisfied (see Subsection 1.4.1).
This method is generally slower than Newtons method, but it does not require
the derivative of f (x) and it always converges to a nested root. If the approach
to the root is one-sided, convergence can be accelerated by replacing the value of
f (x) at the stagnant end position with f (x)/2.
19
%
fpName
string that names the derivative function f(x).
%
a,b
A root of f(x) is sought in the interval [a,b]
%
and f(a)*f(b)<=0.
%
tolx,tolf
Nonnegative termination criteria.
%
nEvalsMax
Maximum number of derivative evaluations.
%
% Post:
%
x
An approximate zero of f.
%
fx
The value of f at x.
%
nEvals
The number of derivative evaluations required.
%
aF,bF
The final bracketing interval is [aF,bF].
%
% Comments:
%
Iteration terminates as soon as x is within tolx of a true zero
%
or if |f(x)|<= tolf or after nEvalMax f-evaluations
fa = feval(fName,a);
fb = feval(fName,b);
if fa*fb>0
disp(Initial interval not bracketing.)
return
end
x
= a;
fx = feval(fName,x);
fpx = feval(fpName,x);
disp(sprintf(%20.15f %20.15f
%20.15f,a,x,b))
nEvals = 1;
while (abs(a-b) > tolx ) & (abs(fx) > tolf) &
((nEvals<nEvalsMax) | (nEvals==1))
%[a,b] brackets a root and x = a or x = b.
if StepIsIn(x,fx,fpx,a,b)
%Take Newton Step
disp(Newton)
x
= x-fx/fpx;
else
%Take a Bisection Step:
disp(Bisection)
x = (a+b)/2;
end
fx = feval(fName,x);
fpx = feval(fpName,x);
nEvals = nEvals+1;
if fa*fx<=0
% There is a root in [a,x]. Bring in right endpoint.
b = x;
fb = fx;
else
20
end
disp(sprintf(%20.15f %20.15f
end
aF = a;
bF = b;
%20.15f,a,x,b))
.
xn p
xn+1 p
xn+1 an
xn+2 an
=
xn an
xn+1 an
(xn+1 xn )2
.
xn+2 2xn+1 + xn
an p
= 0.
xn p
(xn )2
.
2 xn
(1.8)
21
-5
-5
0
x
(z1 sn )2
.
z2 2z1 + sn
x0 = 1.
The following Matlab M function and script produce the results listed in Table 1.9. The second, third, and fourth columns are the iterates xn , Aitkens
and Steffensens accelerated sequences an and sn , respectively. The fifth column,
which lists n+1 /2n = (sn+2 sn+1 )/(sn+1 sn )2 tending to a constant, indicates
that the Steffensen sequence sn converges to second order.
The M function function is:
function f = twosine(x);
f = 2*sin(x);
22
xn
1.00000000000000
1.68294196961579
1.98743653027215
1.82890755262358
1.93374764234016
1.86970615363078
1.91131617912526
1.88516234821223
an
2.23242945471637
1.88318435428750
1.89201364327283
1.89399129067379
1.89492839486397
1.89525656226218
sn
n+1 /2n
1.00000000000000 0.2620
2.23242945471637
0.3770
1.83453173271065
0.3560
1.89422502453561
0.3689
1.89549367325365
0.3691
1.89549426703385
1.89549426703398
NaN
23
If bn = an and
bk = ak + bk+1 x0 ,
for
then
k = n 1, n 2, . . . , 1, 0,
b0 = p(x0 ).
Moreover, if
q(x) = bn xn1 + bn1 xn2 + + b2 x + b1 ,
then
p(x) = (x x0 )q(x) + b0 .
Proof. By the definition of q(x),
(x x0 )q(x) + b0 = (x x0 )(bn xn1 + bn1 xn2 + + b2 x + b1 ) + b0
= (bn xn + bn1 xn1 + + b2 x2 + b1 x)
= an xn + an1 xn1 + + a1 x + a0
= p(x)
and
b0 = p(x0 ).
1.7.2. Synthetic division. Evaluating a polynomial at x = x0 by Horners
method is equivalent to applying the synthetic division as shown in Example 1.15.
Example 1.15. Find the value of the polynomial
p(x) = 2x4 3x2 + 3x 4
at x0 = 2 by Horners method.
b4 = 2
a3 = 0
4
b3 = 4
a2 = 3
8
b2 = 5
a1 = 3
10
b1 = 7
a0 = 4
14
b0 = 10
24
Thus
p(x) = (x + 2)(2x3 4x2 + 5x 7) + 10
and
p(2) = 10.
Horners method can be used efficiently with Newtons method to find zeros
of a polynomial p(x). Differentiating
p(x) = (x x0 )q(x) + b0
we obtain
p (x) = (x x0 )q (x) + q(x).
Hence
p (x0 ) = q(x0 ).
Putting this in Newtons method we have
p(xn1 )
p (xn1 )
p(xn1 )
.
= xn1
q(xn1 )
xn = xn1
0
4
4
4
8
3
8
5
16
21
=3
10
7
42
4
14
49 = p (2)
Thus
p(2) = 10,
p (2) = 49,
and
x1 = 2
10
1.7959.
49
10 = p(2)
1.8. MULLERS
METHOD
25
1.8. M
ullers Method
M
ullers, or the parabola, method finds the real or complex roots of an equation
f (x) = 0.
This method uses three initial approximations, x0 , x1 , and x2 , to construct a
parabola,
p(x) = a(x x2 )2 + b(x x2 ) + c,
through the three points (x0 , f (x0 )), (x1 , f (x1 )), and (x2 , f (x2 )) on the curve
f (x) and determines the next approximation x3 as the point of intersection of the
parabola with the real axis closer to x2 .
The coefficients a, b and c defining the parabola are obtained by solving the
linear system
f (x0 ) = a(x0 x2 )2 + b(x0 x2 ) + c,
We immediately have
c = f (x2 )
and obtain a and b from the linear system
(x0 x2 )2 (x0 x2 )
a
f (x0 ) f (x2 )
=
.
(x1 x2 )2 (x1 x2 )
b
f (x1 ) f (x2 )
Then, we set
p(x3 ) = a(x3 x2 )2 + b(x3 x2 ) + c = 0
b2 4ac
2a
b b2 4ac b b2 4ac
2a
b b2 4ac
2c
.
=
b b2 4ac
x3 x2 =
2c
.
b + sign(b) b2 4ac
M
ullers method converges approximately to order 1.839 to a simple or double
root. It may not converge to a triple root.
Example 1.17. Find the four zeros of the polynomial
16x4 40x3 + 5x2 + 20x + 6,
whose graph is shown in Fig. 1.6, by means of M
ullers method.
Solution. The following Matlab commands do one iteration of M
ullers
method on the given polynomial which is transformed into its nested form:
26
10
5
0
-5
-10
-1
1
x
-0.3561 + 0.1628i
CHAPTER 2
(x1 , f1 ),
(x2 , f2 ),
...,
(xn , fn ),
(2.1)
where fi is the observed value for f (xi ). We would like to use these data to
approximate f (x) at an arbitrary point x 6= xi .
When we want to estimate f (x) for x between two of the xi s, we talk about
interpolation of f (x) at x. When x is not between two of the xi s, we talk about
extrapolation of f (x) at x.
The idea is to construct an interpolating polynomial, pn (x), of degree n whose
graph passes through the n + 1 points listed in (2.1). This polynomial will be
used to estimate f (x).
2.1. Lagrange Interpolating Polynomial
The Lagrange
interpolating polynomial, pn (x), of degree n through the n + 1
points xk , f (xk ) , k = 0, 1, . . . , n, is expressed in terms of the following Lagrange
basis:
(x x0 )(x x1 ) (x xk1 )(x xk+1 ) (x xn )
.
Lk (x) =
(xk x0 )(xk x1 ) (xk xk1 )(xk xk+1 ) (xk xn )
Clearly, Lk (x) is a polynomial of degree n and
(
1, x = xk ,
Lk (x) =
0, x = xj , j 6= k.
(2.2)
L0 (x) =
27
28
Thus,
1 (4x + 24)x 32 1 (x 4.5)x + 5
1
[(x 6.5)x + 10] +
+
2
2.5
3
4
3
= (0.05x 0.425)x + 1.15.
p(x) =
f (n+1) ((x))
(x x0 ) (x x1 ) (x xn ),
(n + 1)!
(2.3)
and
axb
(n + 1)!
for a x b.
Proof. First, note that the error is 0 at x = x0 , x1 , . . . , xn since
pn (xk ) = f (xk ),
k = 0, 1, . . . , n,
from the interpolating property of pn (x). For x 6= xk , define the auxiliary function
(t x0 )(t x1 ) (t xn )
(x x0 )(x x1 ) (x xn )
n
Y
t xi
= f (t) pn (t) [f (x) pn (x)]
.
x
xi
i=0
For t = xk ,
g(xk ) = f (xk ) pn (xk ) [f (x) pn (x)] 0 = 0
and for t = x,
g(x) = f (x) pn (x) [f (x) pn (x)] 1 = 0.
Thus g C n+1 [a, b] and it has n + 2 zeros in [a, b]. By the generalized Rolle
theorem, g (t) has n + 1 zeros in [a, b], g (t) has n zeros in [a, b], . . . , g (n+1) (t)
has 1 zero, [a, b],
" n
n+1
Y t xi
d
g (n+1) () = f (n+1) () p(n+1)
() [f (x) pn (x)] n+1
n
dt
x xi
i0
(n + 1)!
= f (n+1) () 0 [f (x) pn (x)] Qn
i=0 (x xi )
=0
t=
29
f (n+1) ((x))
(x x0 ) (x x1 ) (x xn ).
(n + 1)!
implies that
p1 (x0 ) = a0 = f0 ,
Solving for a1 we have
p1 (x1 ) = f0 + a1 (x1 x0 ) = f1 .
a1 =
If we let
f1 f0
.
x1 x0
f1 f0
x1 x0
be the first divided difference, then the divided difference interpolating polynomial of degree one is
f [x0 , x1 ] =
p1 (x) = f0 + (x x0 ) f [x0 , x1 ].
Example 2.2. Consider a function f (x) which passes through the points
(2.2, 6.2) and (2.5, 6.7). Find the divided difference interpolating polynomial of
degree one for f (x) and use it to interpolate f at x = 2.35.
Solution. Since
f [2.2, 2.5] =
then
6.7 6.2
= 1.6667,
2.5 2.2
30
Example 2.3. Approximate cos 0.2 linearly using the values of cos 0 and
cos /8.
Solution. We have the points
(0, cos 0) = (0, 1) and
8
cos2 =
=
8
, cos
1
,
8 2
2+2
1 + cos(2 )
2
to get
1
=
8
2
2+2
4
f [0, /8] =
2+22 .
=
/8 0
cos
This leads to
4
p1 (x) = 1 +
In particular,
q
2 + 2 2 x.
p1 (0.2) = 0.96125.
Note that cos 0.2 = 0.98007 (rounded to five digits). The absolute error is 0.01882.
Consider the three data points
(x0 , f0 ),
(x1 , f1 ),
(x2 , f2 ),
where xi 6= xj
for
i 6= j.
Then the divided difference interpolating polynomial of degree two through these
points is
p2 (x) = f0 + (x x0 ) f [x0 , x1 ] + (x x0 ) (x x1 ) f [x0 , x1 , x2 ]
where
f [x1 , x2 ] f [x0 , x1 ]
f1 f0
and f [x0 , x1 , x2 ] :=
x1 x0
x2 x0
are the first and second divided differences, respectively.
f [x0 , x1 ] :=
Example 2.4. Interpolate a given function f (x) through the three points
(2.2, 6.2),
(2.5, 6.7),
(2.7, 6.5),
f [2.5, 2.7] = 1
and
f [2.2, 2.5, 2.7] =
31
Therefore,
p2 (x) = 6.2 + (x 2.2) 1.6667 + (x 2.2) (x 2.5) (5.3334).
In particular, p2 (2.35) = 6.57.
2 + 2.
cos =
8
2
Hence, from the three data points
(0, 1),
(/4, 2/2),
4
f [0, /8] =
2+22 ,
f [/8, /4] =
and
q
2
2+2 ,
/4 /8
q
16
= 2
22
2+2 .
Hence,
4
p2 (x) = 1 + x
q
q
16
2+22 +x x
22
2+2 , .
8 2
p2 (0.2) = 0.97881.
The absolute error is 0.00189.
(x1 , f1 ),
...,
(xn , fn ),
where, by definition,
f [xj , xj+1 , . . . , xk ] =
32
x
x0
x1
x2
x3
x4
x5
First
f (x) divided differences
f [x0 ]
f [x0 , x1 ]
f [x1 ]
f [x1 , x2 ]
f [x2 ]
f [x2 , x3 ]
f [x3 ]
f [x3 , x4 ]
f [x4 ]
f [x4 , x5 ]
f [x5 ]
Second
Third
divided differences divided differences
f [x0 , x1 , x2 ]
f [x0 , x1 , x2 , x3 ]
f [x1 , x2 , x3 ]
f [x1 , x2 , x3 , x4 ]
f [x2 , x3 , x4 ]
f [x2 , x3 , x4 , x5 ]
f [x3 , x4 , x5 ]
Example 2.6. Construct the cubic interpolating polynomial through the four
unequally spaced points
(1.0, 2.4),
(1.3, 2.2),
(1.5, 2.3),
(1.7, 2.4),
f (xi )
1.0
2.4
1.3
2.2
f [xi , xi+1 ]
0.66667
2.33333
0.500000
1.5
2.3
0.00000
3.33333
0.500000
1.7
2.4
Therefore,
p3 (x) = 2.4 + (x 1.0) (0.66667) + (x 1.0) (x 1.3) 2.33333
2 fj := fj+1 fj ,
33
f [x0 , . . . , xk ] :=
If we set
r=
1
k f0 .
k! hk
x x0
,
h
and
n
X
r (r 1) (r k + 1) k
f0
k!
k=1
n
X
r
=
k f0 ,
k
pn (r) = f0 +
(2.5)
k=0
where
r (r 1) (r k + 1)
r
=
k!
1
k
if k > 0,
if k = 0.
xi
yi
1988
1989
36000
1990
36500
yi
2 yi
3 yi
4 yi
5 yi
35000
1000
500
500
500
3
1991
37000
1992
37800
1993
39000
500
0
300
300
800
100
400
1200
200
200
34
r(r 1)(r 2)
r(r 1)
(500) +
(500)
2
6
r(r 1)(r 2)(r 3)
r(r 1)(r 2)(r 3)(r 4)
+
(200) +
(0).
24
120
Extrapolating the data at 1994 we have r = 6 and
p5 (r) = 35000 + r (1000) +
p5 (6) = 40500.
An iterative use of the Matlab diff(y,n) command produces a difference table.
y = [35000 36000 36500 37000 37800 39000]
dy = diff(y);
dy = 1000
500
500
d2y = diff(y,2)
d2y = -500
0
300
400
d3y = diff(y,3)
d3y = 500
300
100
d4y = diff(y,4)
d4y = -200 -200
d5y = diff(y,5)
d5y = 0
800
1200
Example 2.8. Use the following equally spaced data to approximate f (1.5).
x
1.0
1.3
1.6
1.9
2.2
f (x) 0.7651977 0.6200860 0.4554022 0.2818186 0.1103623
Solution. The forward difference table is
i
xi
yi
1.0
0.7651977
1.3
0.6200860
1.6
0.4554022
2 yi
yi
0.145112
-0.164684
0.0195721
1.9
0.2818186
4 yi
0.0106723
-0.0088998
-0.173584
3 yi
0.0003548
0.0110271
0.0021273
-0.170856
4 2.2
0.1103623
Setting r = (x 1.0)/0.3, we have
r(r 1)
(0.0195721)
2
r(r 1)(r 2)(r 3)
r(r 1)(r 2)
(0.0106723) +
(0.0003548).
+
6
24
Interpolating f (x) at x = 1, we have r = 5/3 and and
p4 (r) = 0.7651977 + r (0.145112) +
p4 (5/3) = 0.511819.
35
and
n
X
r (r + 1) (r + k 1) k
fk
k!
k=1
n
X
r+k1
=
k fk ,
k
pn (r) = f0 +
(2.6)
k=0
1.0
0.7651977
1.3
0.6200860
1.6
1.9
2.2
2 yi
3 yi
4 yi
-0.145112
-0.0195721
-0.164684
0.4554022
0.0106723
-0.173584
0.2818186
0.1103623
0.0003548
-0.0088998
0.0110271
0.0021273
0.170856
r(r + 1)
(0.0021273)
2
r(r + 1)(r + 2)(r + 3)
r(r + 1)(r + 2)
(0.0110271) +
(0.0003548).
+
6
24
Since
r=
1
2.1 2.2
= ,
0.3
3
then
p4 (1/3) = 0.115904.
36
n
X
hm (x)fm +
m=0
n
X
m=0
b
hm (x)fm
,
p2n+1 (xk ) = fk ,
k = 0, 1, . . . , n.
k 6= m,
hm (xm ) = 1,
hm (xm ) = 0,
and
b
hm (xk ) = b
hm (xk ) = 0,
k 6= m,
b
hm (xm ) = 0,
b
hm (xm ) = 1.
where
Lm (x) =
n
Y
k=0,k6=m
x xk
xm xk
i = 0, 1, . . . , n,
and take
f (x0 ) for f [z0 , z1 ],
...,
in the divided difference table for the Hermite interpolating polynomial of degree
2n + 1. Thus,
p2n+1 (x) = f [z0 ] +
2n+1
X
k=1
z
z0 = x0
f (z)
f [z0 ] = f (x0 )
z1 = x0
f [z1 ] = f (x0 )
First
divided differences
37
Second
divided differences
Third
divided differences
f [z0 , z1 ] = f (x0 )
f [z0 , z1 , z2 ]
f [z1 , z2 ]
z2 = x1
f [z2 ] = f (x1 )
z3 = x1
f [z3 ] = f (x1 )
z4 = x2
f [z4 ] = f (x2 )
z5 = x2
f [z5 ] = f (x2 )
f [z2 , z3 ] = f (x1 )
f [z0 , z1 , z2 , z3 ]
f [z1 , z2 , z3 ]
f [z1 , z2 , z3 , z4 ]
f [z2 , z3 , z4 ]
f [z3 , z4 ]
f [z4 , z5 ] = f (x2 )
f [z2 , z3 , z4 , z5 ]
f [z3 , z4 , z5 ]
Example 2.10. Interpolate the underlined data, given in the table below, at
x = 1.5 by a Hermite interpolating polynomial of degree five.
Solution. In the difference table the underlined entries are the given data.
The remaining entries are generated by standard divided differences.
1.3 0.6200860
1.3 0.6200860
0.5220232
1.6 0.4554022
0.5489460
1.6 0.4554022
0.5698959
1.9 0.2818186
0.5786120
1.9 0.2818186
0.5811571
0.0897427
0.0663657
0.0698330
0.0679655
0.0290537
0.0026663
0.0010020
0.0027738
0.0685667
0.0084837
= 0.5118277.
38
(d) Sj+1
(xj+1 ) = Sj (xj+1 ) for each j = 0, 1, . . . , n 2;
for each j = 0, 1, . . . , n 1.
The following existence and uniqueness theorems hold for natural and clamped
spline interpolants, respectively.
Theorem 2.2 (Natural Spline). If f is defined at a = x0 < x1 < < xn =
b, then f has a unique natural spline interpolant S on the nodes x0 , x1 , . . . , xn
with boundary conditions S (a) = 0 and S (b) = 0.
Theorem 2.3 (Clamped Spline). If f is defined at a = x0 < x1 < <
xn = b and is differentiable at a and b, then f has a unique clamped spline
interpolant S on the nodes x0 , x1 , . . . , xn with boundary conditions S (a) = f (a)
and S (b) = f (b).
The following Matlab commands generate a sine curve and sample the spline
over a finer mesh:
x = 0:10; y = sin(x);
xx = 0:0.25:10;
yy = spline(x,y,xx);
subplot(2,2,1); plot(x,y,o,xx,yy);
The result is shown in Fig 2.1.
The following Matlab commands illustrate the use of clamped spline interpolation where the end slopes are prescribed. Zero slopes at the ends of an
interpolant to the values of a certain distribution are enforced:
x = -4:4; y = [0 .15 1.12 2.36 2.36 1.46 .49 .06 0];
cs = spline(x,[0 y 0]);
xx = linspace(-4,4,101);
plot(x,y,o,xx,ppval(cs,xx),-);
The result is shown in Fig 2.2.
39
0.5
-0.5
-1
10
x
Figure 2.1. Spline interpolant of sine curve.
1.5
1
0.5
0
-0.5
-4
-2
0
x
CHAPTER 3
f (x) = f (x0 )
f (x) = f (x0 )
f (x0 + h) f (x0 ) h
f ().
h
2
(3.1)
f (x) = f (x0 )
2xj x0 x2
2xj x1 x2
+ f (x1 )
(x0 x1 )(x0 x2 )
(x1 x0 )(x1 x2 )
+ f (x2 )
2xj x0 x1
1
+ f ((xj ))
(x2 x0 )(x2 x1 ) 6
41
2
Y
k=0,k6=j
(xj xk ).
42
1
h
1
3
h2
f (x0 ) 2f (x1 ) + f (x2 ) + f (2 ).
2
2
3
f (x0 ) =
[f (x0 + h) f (x0 h)] f (1 ).
2h
6
f (x0 ) =
(3.2)
(3.3)
1
f (x0 )h3 +
6
1
f (x0 )h3 +
6
1 (4)
f (0 )h4 ,
24
1 (4)
f (1 )h4 .
24
i
1 h (4)
f (0 ) + f (4) (1 ) h4 .
24
i
1 h (4)
1
[f (x0 h) 2f (x0 ) + f (x0 + h)]
f (0 ) + f (4) (1 ) h2 .
2
h
24
By the Mean Value Theorem 1.5 for sums, there is a value , x0 h < < x0 + h,
such that
i
1 h (4)
f (0 ) + f (4) (1 ) = f (4) ().
2
We thus obtain the three-point second-order centered difference formula
f (x0 ) =
1
h2 (4)
[f
(x
h)
2f
(x
)
+
f
(x
+
h)]
f ().
0
0
0
h2
12
(3.4)
43
1/ h
1/h*
Subtituting these values in (3.3), we have the total error, which is the sum of the
roundoff and the truncation errors,
f (x0 )
fe(x0 + h) fe(x0 h)
e(x0 + h) e(x0 h) h2 (3)
=
f ().
2h
2h
6
Taking the absolute value of the right-hand side and applying the triangle inequality, we have
2
e(x0 + h) e(x0 h) h2 (3)
1 (|e(x0 +h)|+|e(x0 h)|)+ h |f (3) ()|.
f
()
2h
6
2h
6
If
|e(x0 h)| ,
then
|f (3) (x)| M,
h2
fe(x0 + h) fe(x0 h)
M.
+
f (x0 )
h
2h
6
z(h) =
h2
+
M
h
6
first decreases and afterwards increases as 1/h increases, as shown in Fig. 3.1. The
term M h2 /6 is due to the trunctation error and the term /h is due to roundoff
errors.
Example 3.1. (a) Given the function f (x) and its first derivative f (x):
f (x) = cos x,
f (x) = sin x,
44
approxminate f (0.7) with h = 0.1 by the five-point formula, without the truncation error term,
f (x) =
h4 (5)
1
f (x + 2h) + 8f (x + h) 8f (x h) + f (x 2h) +
f (),
12h
30
max
0.5x0.9
Total error
= 1.0111 105 .
45
N1 (h) = N (h),
h
h
+ N1
N1 (h) ,
N2 (h) = N1
2
2
M = N2 (h)
Now with h/4, we have
M = N2
1
3
K 2 h2 K 3 h3 . . . .
2
4
1
3
h
K 2 h2
K 3 h3 + . . . .
2
8
32
Subtracting the second last expression for M from 4 times the last one and dividing the result by 3, we elininate the term in h2 :
N2 (h/2) N2 (h)
1
h
+
+ K 3 h3 + . . . .
M = N2
2
3
8
Now, putting
we have
N2 (h/2) N2 (h)
h
+
N3 (h) = N2
,
2
3
1
K 3 h3 + . . . .
8
The presence of the number 2j1 1 in the denominator of the second term of
Nj (h) ensures convergence. It is clear how to continue this process which is called
Richardsons extrapolation.
An important case of Richardsons extrapolation is when N (h) is the centred
difference formula (3.3) for f (x), that is,
M = N3 (h) +
h2
1
h4 (5)
f (x0 + h) f (x0 h)
f (x0 )
f (x0 ) . . . .
2h
6
120
Since, in this case, the error term contains only even powers of h, the convergence
of Richardsons extrapolation is very fast. Putting
1
f (x0 + h) f (x0 h) ,
N1 (h) = N (h) =
2h
f (x0 ) =
46
f (x0 ) = N1
f (x0 )
f (x0 ) . . . .
2
24
1920
f (x0 ) = N1 (h)
Subtracting the second last formula for f (x0 ) from 4 times the last one and
dividing by 3, we have
f (x0 ) = N2 (h)
h4 (5)
f (x0 ) + . . . ,
480
where
N1 (h/2) N1 (h)
h
+
.
N2 (h) = N1
2
3
The presence of the number 4j1 1 in the denominator of the second term of
Nj (h) provides fast convergence.
Example 3.2. Let
f (x) = x ex .
Apply Richardsons extrapolation to the centred difference formula to compute
f (x) at x0 = 2 with h = 0.2.
Solution. We have
1
[f (2.2) f (1.8)] = 22.414 160,
0.4
1
N1 (0.1) = N (0.1) =
[f (2.1) f (1.9)] = 22.228 786,
0.2
1
N1 (0.05) = N (0.05) =
[f (2.05) f (1.95)] = 22.182 564.
0.1
N1 (0.2) = N (0.2) =
Next,
N1 (0.1) N1 (0.2)
= 22.166 995,
3
N1 (0.05) N1 (0.1)
= 22.167 157.
N2 (0.1) = N1 (0.05) +
3
N2 (0.2) = N1 (0.1) +
Finally,
N2 (0.1) N2 (0.2)
= 22.167 168,
15
which is correct to all 6 decimals. The results are listed in Table 3.1. One
sees the fast convergence of Richarsons extrapolation for the centred difference
formula.
N3 (0.2) = N1 (0.1) +
47
where the function f (x) is smooth on [a, b] and a < b, we subdivide the interval
[a, b] into n subintervals of equal length h = (b a)/n. The function f (x) is
approximated on each of these subintervals by an interpolating polynomial and
the polynomials are integrated.
For the midpoint rule, f (x) is interpolated on each subinterval [xi1 , x1 ] by
f ([xi1 + x1 ]/2), and the integral of f (x) over a subinterval is estimated by the
area of a rectangle (see Fig. 3.2).
For the trapezoidal rule, f (x) is interpolated on each subinterval [xi1 , x1 ]
by a polynomial of degree one, and the integral of f (x) over a subinterval is
estimated by the area of a trapezoid (see Fig. 3.3).
For Simpsons rule, f (x) is interpolated on each pair of subintervals, [x2i , x2i+1 ]
and [x2i+1 , x2i+2 ], by a polynomial of degree two (parabola), and the integral of
f (x) over such pair of subintervals is estimated by the area under the parabola
(see Fig. 3.4).
3.4.1. Midpoint rule. The midpoint rule,
Z x1
1
f ()h3 ,
f (x) dx = hf (x1 ) +
24
x0
x0 < < x1 ,
(3.5)
where the integral over the linear term (x x1 ) is zero because this term is an odd
function with respect to the midpoint x = x1 and the Mean Value Theorem 1.4
for integrals has been used in the integral of the quadratic term (x x0 )2 which
does not change sign over the interval [x0 , x1 ]. The result follows from the value
of the integral
Z
x1
1 x1
1 3
1
(x x1 )2 dx = (x x1 )3 x0 =
h .
2 x0
6
24
48
x0 < < x1 ,
(3.6)
x x0
x x1
+ f (x1 )
.
x0 x1
x1 x0
f ()
(x x0 )(x x1 ),
2
x1
p1 (x) dx =
x0
we have
Z x1
x0
h
f (x) dx [f (x0 ) + f (x1 )] =
2
x0 < < x1 .
h
f (x0 ) + f (x1 ) ,
2
Z
x1
x0
x1
[f (x) p1 (x)] dx
f ((x))
(x x0 )(x x1 ) dx
2
x0
Z
f () x1
=
(x x0 )(x x1 ) dx
2
x0
x 1
f () x3
x0 + x1 2
=
x + x0 x1 x
2
3
2
x0
=
f () 3
h ,
12
where the Mean Value Theorem 1.4 for integrals has been used to obtain the third
equality since the term (x x0 )(x x1 ) does not change sign over the interval
[x0 , x1 ]. The last equality follows by some algebraic manipulation.
3.4.3. Simpsons rule. Simpsons rule
Z
x2
x0
f (x) dx =
h5 (4)
h
f (x0 ) + 4f (x1 ) + f (x2 )
f (),
3
90
f (x1 )
f (4) ((x))
f (x1 )
(xx1 )2 +
(xx1 )3 +
(xx1 )4 .
2
6
24
Integrating this expression from x0 to x2 and noticing that the odd terms (x x1 )
and (x x1 )3 are odd functions with respect to the point x = x1 so that their
49
f (4) (1 ) 5
h3
f (x1 ) +
h ,
3
60
where the Mean Value Theorem 1.4 for integrals was used in the integral of the
error term because the factor (x x1 )4 does not change sign over the interval
[x0 , x2 ].
Substituting the three-point centered difference formula (3.4) for f (x1 ) in
terms of f (x0 ), f (x1 ) and f (x2 ):
= 2hf (x1 ) +
f (x1 ) =
1 (4)
1
[f (x0 ) 2f (x1 ) + f (x2 )]
f (2 )h2 ,
h2
12
we obtain
Z x2
h5 1 (4)
h
1
f (x) dx =
f (x0 ) + 4f (x1 ) + f (x2 )
f (2 ) f (4) (2 ) .
3
12 3
5
x0
In this case, we cannot apply the Mean Value Theorem 1.5 for sums to express the
error term in the form of f (4) () evaluated at one point since the weights 1/3 and
1/5 have different signs. However, since the formula is exact for polynomials of
degree less than or equal to 4, to obtain the factor 1/90 it suffices to apply the
formula to the monomial f (x) = x4 and, for simplicity, integrate from h to h:
Z h
h
x4 dx =
(h)4 + 4(0)4 + h4 + kf (4) ()
3
h
2 5
2
= h + 4!k = h5 ,
3
5
where the last term is the exact value of the integral. It follows that
1
1 2 2 5
h = h5 ,
k=
4! 5 3
90
which yields (3.7).
3.5. The Composite Midpoint Rule
We subdivide the interval [a, b] into n subintervals of equal length h = (b
a)/n with end-points
x0 = a,
x1 = a + h,
...,
xi = a + ih,
...,
xn = b.
50
Rectangle
y = f (x)
x i-1 x*i x i
Multiplying and dividing the error term by n, applying the Mean Value Theorem 1.5 for sums to this term and using the fact that nh = b a, we have
n
nh3 X 1
(b a)h2
f (i ) =
f ()h2 ,
12 i=1 n
12
a < < b.
(b a)h2
f (),
24
We see that the composite midpoint rule is a method of order O(h2 ), which is
exact for polynomials of degree smaller than or equal to 1.
Example 3.3. Use the composite midpoint rule to approximate the integral
Z 1
2
I=
ex dx
0
with step size h such that the absolute truncation error is bounded by 104 .
Solution. Since
f (x) = ex
and f (x) = (2 + 4 x2 ) ex ,
then
0 f (x) 6 e for x [0, 1].
1
1
6 e(1 0)h2 = eh2 < 104 .
24
4
Thus
h < 0.0121
1
= 82.4361.
h
51
Trapezoid
y = f (x)
x i-1
xi
x1 = a + h,
...,
xi = a + ih,
...,
xn = b.
(xi , 0),
Multiplying and dividing the error term by n, applying the Mean Value Theorem 1.5 for sums to this term and using the fact that nh = b a, we have
n
(b a)h2
nh3 X 1
f (i ) =
f (),
12 i=1 n
12
a < < b.
52
(3.9)
We see that the composite trapezoidal rule is a method of order O(h2 ), which is
exact for polynomials of degree smaller than or equal to 1. Its absolute truncation
error is twice the absolute truncation error of the midpoint rule.
Example 3.4. Use the composite trapezoidal rule to approximate the integral
Z 1
2
ex dx
I=
0
with step size h such that the absolute truncation error is bounded by 104 .
Compare with Examples 3.3 and 3.6.
Solution. Since
f (x) = ex
and f (x) = (2 + 4 x2 ) ex ,
then
0 f (x) 6 e for x [0, 1].
Therefore,
|T |
1
1
6 e(1 0)h2 = eh2 < 104 ,
12
2
that is,
The following Matlab commands produce the trapesoidal integration of numerical values yk at nodes k/117, k = 0, 1, . . . , 117, with stepsize h = 1/117.
x = 0:117; y = exp((x/117).^2);
z = trapz(x,y)/117
z = 1.4627
Example 3.5. How many subintervals are necessary for the composite trapezoidal rule to approximate the integral
Z 2
1
x2 (x 1.5)4 dx
I=
12
1
with step size h such that the absolute truncation error is bounded by 103 .
53
y
y = f (x)
0 a
x 2i x 2i+1 x 2i+2
1
(x 1.5)4 .
12
Then
f (x) = 2 (x 1.5)2 .
It is clear that
This gives
h
and
p
6 103 = 0.0775
1
= 12.9099 n = 13.
h
1
,
13
n = 13.
x1 = a + h,
...,
xi = a + i h,
...,
x2m = b.
On the subinterval [x2i , x2i+2 ], the function f (x) is interpolated by the quadratic
polynomial p2 (x) which passes through the points
x2i , f (x2i ) ,
x2i+1 , f (x2i+1 ) ,
x2i+2 , f (x2i+2 ) ,
as shown in Fig. 3.4.
Thus, by the basic Simpsons rule (3.7),
Z x2i+2
h5
h
f (x) dx =
f (x2i )+4f (x2i+1 )+f (x2i+2 ) f (4) (i ),
3
90
x2i
54
Multiplying and dividing the error term by m, applying the Mean Value Theorem 1.5 for sums to this term and using the fact that 2mh = nh = b a, we
have
m
(b a)h4 (4)
2mh5 X 1 (4)
f (i ) =
f (),
a < < b.
2 90 i=1 m
180
with stepsize h such that the absolute truncation error is bounded by 104 . Compare with Examples 3.3 and 3.4.
Solution. We have
f (x) = ex
Thus
and
f (4) (x) = 4 ex
3 + 12x2 + 4x4 .
55
Example 3.7. Use the composite Simpsons rule to approximate the integral
Z 2p
1 + cos2 x dx
I=
0
Solution. We must determine the step size h such that the absolute truncation error, |S |, will be bounded by 0.0001. For
p
f (x) = 1 + cos2 x,
we have
f (4) (x) =
3 cos4 (x)
(1 + cos2 (x))
+
+
3/2
4 cos2 (x)
18 cos4 (x) sin2 (x)
+p
5/2
1 + cos2 (x)
(1 + cos2 (x))
3/2
4 sin2 (x)
15 cos4 (x) sin4 (x)
p
7/2
1 + cos2 (x)
(1 + cos2 (x))
3 sin4 (x)
(1 + cos2 (x))3/2
87
(2 0)h4 .
180
Hence,
1
> 9.915 604 269.
h
To have 2m 1/h = 9.9 we take n = 2m = 10 and h = 1/10. The approximation
is
p
p
1 hp
1 + cos2 (0/20) + 4 1 + cos2 (1/20) + 2 1 + cos2 (2/20) +
I
20 3
i
p
p
p
+ 2 1 + cos2 (18/20) + 4 1 + cos2 (19/20) + 1 + cos2 (20/20)
h < 0.100 851 140,
= 2.48332.
h2 =
h
,
2
h3 =
h
,
22
...,
hk =
h
,
2k1
...,
56
one can cancel errors of order h2 , h4 , etc. as follows. Suppose Rk,1 and Rk+1,1
have been computed, then we have
I = Rk,1 + K1 h2k + K2 h4k + K3 h6k + . . .
and
h2
h2
h2k
+ K2 k + K3 k + . . . .
4
16
64
Subtracting the first expression for I from 4 times the second expression and
dividing by 3, we obtain
Rk+1,1 Rk,1
K2 1
K3 1
4
I = Rk+1,1 +
+
1 hk +
1 h4k + . . . .
3
3 4
3 16
Put
Rk,1 Rk1,1
Rk,2 = Rk,1 +
3
and, in general,
Rk,j1 Rk1,j1
.
Rk,j = Rk,j1 +
4j1 1
Then Rk,j is a better approximation to I than Rk,j1 and Rk1,j1 . The relations
between the Rk,j are shown in Table 3.2.
I = Rk+1,1 + K1
R2,2
R3,2
R4,2
..
.
Rn,2
..
.
R3,3
R4,3
..
.
Rn,3
R4,4
Rn,4
Rn,n
0.34778141
0.34667417
0.34658054
0.34657404
0.34657362
0.34660035
0.34657430
0.34657360
0.34657359
0.34657388
0.34657359
0.34657359
0.34657359
0.34657359
0.34657359
57
Assuming that
h5 (4)
f (1 ),
I = S(a, b)
90
a+b
a+b
2 h5 (4)
I = S a,
+S
,b
f (2 ).
2
2
32 90
(3.11)
(3.12)
f (4) (2 ) f (4) (1 )
and subtracting the second expression for I from the first we have an expression
for the error term:
a+b
a+b
16
h5 (4)
S(a, b) S a,
S
f (1 )
,b .
90
15
2
2
Putting this expression in (3.12), we obtain an estimate for the absolute error:
I S a, a + b S a + b , b 1 S(a, b) S a, a + b S a + b , b .
15
2
2
2
2
If the right-hand side of this estimate is smaller than a given tolerance, then
a+b
a+b
+S
,b
S a,
2
2
shown in Fig. 3.5, with toleralance 104 , the adaptive quadrature uses 23 subintervals and requires 93 evaluations of f . On the other hand, the composite Simpsons
rule uses a constant value of h = 1/88 and requires 177 evaluations of f . It is
seen from the figure that f varies quickly over the interval [1, 1.5]. The adaptive
58
20
0
-20
-40
-60
1.5
2
x
2.5
as follows.
>> v1 = quad(sin,0,pi/2)
v1 = 1.00000829552397
>> v2 = quad8(sin,0,pi/2)
v2 = 1.00000000000000
respectively, within a relative error of 103 .
CHAPTER 4
Matrix Computations
With the advent of digitized systems in many areas of science and engineering,
matrix computation is occupying a central place in modern computer software.
In this chapter, we study the solutions of linear systems,
Ax = b,
A Rmn ,
A Rnn ,
x 6= 0,
A Rnn ,
3
9 6
x1
23
18
48 39 x2 = 136
9 27 42
x3
45
60
4. MATRIX COMPUTATIONS
Solution. Since a21 = 18 is the largest pivot in absolute value in the first
column of A,
|18| > |3|, |18| > |9|,
18
48 39
0
9 6 , where P1 = 1
P1 A = 3
9 27 42
0
18
48 39
1 0 0
1/6 1 0 3
9 6 =
9 27 42
1/2 0 1
with multipliers 1/6 and 1/2. Thus
1 0
0 0 .
0 1
18
48
39
0
1 1/2 ,
0 51 45/2
M1 P1 A = A1 .
Considering the 2 2 submatrix
1 1/2
51 45/2
18
48
39
P2 A1 = 0 51 45/2 , where
0
1 1/2
1
0 0
0
1 0
0 1/51 1
1 0
P2 = 0 0
0 1
0
1 .
0
18
48
39
18
48
39
0 51 45/2 = 0 51
22.5 ,
0
1 1/2
0
0 0.0588
M2 P2 A1 = U.
Therefore
M2 P2 M1 P1 A = U,
and
A = P11 M11 P21 M21 U = LU.
The inverse of a Gaussian transformation is easily written:
1 0 0
1 0
M1 = a 1 0
=
M11 = a 1
b 0 1
b 0
1
0 0
1 0
1 0
M2 = 0
=
M21 = 0 1
0 c 1
0 c
0
0 ,
1
0
0 ,
1
4.1. LU SOLUTION OF Ax = b
61
once the multipliers a, b, c are known. Moreover the product M11 M21 can
be easily written:
1 0 0
1 0 0
1 0 0
M11 M21 = a 1 0 0 1 0 = a 1 0 .
b 0 1
0 c 1
b c 1
It is easily seen that a permutation P , which consists of the identity matrix I
with permuted rows, is an orthogonal matrix. Hence,
P 1 = P T .
Therefore, if
L = P1T M11 P2T M21 ,
then, solely by a rearrangement of the elements of M11 and M21
arithemetic operations, we obtain
0 1 0
1 0 0
1 0 0
1
0
1
L = 1 0 0 1/6 1 0 0 0 1 0
0 0 1
1/2 0 1
0 1 0
0 1/51
1/6 1 0
1
0 0
1/6 1/51 1
=
1 0 0 0 1/51 1 = 1
0 0
1/2 0 1
0
1 0
1/2
1 0
without any
0
0
1
1/6 1/51 1
23
y1
1
0 0 y2 = 136 ,
1/2
1 0
45
y3
y1 = 136,
y2 = 45 136/2 = 23,
Ux = y
is solved by backward substitution:
18
48
39
136
x1
0 51
22.5 x2 =
23 ,
0
0 0.0588
x3
0.1176
x3 = 0.1176/0.0588 = 2,
62
4. MATRIX COMPUTATIONS
9
48
-27
6
39
42
-0.0196
0
1.0000
1.0000
0
0
U =
18.0000
0
0
48.0000
-51.0000
0
39.0000
22.5000
-0.0588
% forward substitution
y =
136.0000
-23.0000
-0.1176
>> x = U\y
% backward substitution
x =
-0.3333
1.3333
2.0000
>> z = A\b
z =
-0.3333
1.3333
2.0000
The didactic Matlab command
[L,U,P] = lu(A)
4.1. LU SOLUTION OF Ax = b
63
finds the permutation matrix P which does all the pivoting at once on the system
Ax = b
and produces the equivalent permuted system
P Ax = P b
which requires no further pivoting. Then it computes the LU decomposition of
P A,
P A = LU,
where the matrix L is unit lower triangular with lij 1, for i > j, and the matrix
U is upper triangular.
We repeat the previous Matlab session making use of the matrix P .
A = [3 9 6; 18 48 39; 9 -27 42]
A =
3
9
6
18
48
39
9
-27
42
b = [23; 136; 45]
b =
23
136
45
[L,U,P] = lu(A)
L =
1.0000
0.5000
0.1667
U =
P =
18.0000
0
0
0
0
1
1
0
0
0
1.0000
-0.0196
0
0
1.0000
48.0000
-51.0000
0
39.0000
22.5000
-0.0588
0
1
0
y = L\P*b
y = 136.0000
-23.0000
-0.1176
x = U\y
x = -0.3333
1.3333
2.0000
Theorem 4.1. The LU decomposition of a matrix A exists if and only if all
its principal minors are nonzero.
The principal minors of A are the determinants of the top left submatrices of
A. Partial pivoting attempts to make the principal minors of P A nonzero.
64
4. MATRIX COMPUTATIONS
3 2 0
A = 12 13 6 ,
3 8 9
14
b = 40 ,
28
1 0 0
3 2 0
3 2 0
4 1 0 12 13 6 = 0 5 6 .
1 0 1
3 8 9
0 10 9
For M2 A1 = U , we have
1
0 0
3
0
1 0 0
0 2 1
0
that is
2 0
3
5 6 = 0
10 9
0
M2 M1 A = U,
Thus
L = M11 M21
1 0
= 4 1
1 0
1
4
1
thus
2
0
5
6 = U,
0 3
0
1 0
0 0 1
1
0 2
0
1 0 0
0 = 4 1 0 .
1
1 2 1
to obtain y from Ly = b,
0 0
y1
14
1 0 y2 = 40 ;
2 1
y3
28
y1 = 14,
y2 = 40 56 = 16,
y3 = 28 + 14 + 32 = 18.
Finally, backward substitution is used to obtain x from U x = y,
3 2
0
x1
14
0 5
6 x2 = 16 ;
0 0 3
x3
18
thus
x3 = 6,
x2 = (16 + 36)/5 = 4,
x1 = (14 8)/3 = 2.
4.1. LU SOLUTION OF Ax = b
65
591400x2
6.130x2
=
=
591700
46.70
we find that
30.00
5.291
|a21 |
|a11 |
=
=
= 0.5073 104 ,
= 0.8631.
s1
591400
s2
6.130
Hence the scaled pivot is in the second equation. Note that the scaling is done only
for comprison purposes and the division to determine the scaled pivots produces
no roundoff error in solving the system. Thus the LU decomposition applied to
the interchanged system
5.29x1
30.00x1
6.130x2
591400x2
=
=
46.70
591700
x2 = 1.000.
66
4. MATRIX COMPUTATIONS
% Post:
%
x
Lx = b
n = length(b);
x = zeros(n,1);
for j=1:n-1
x(j) = b(j)/L(j,j);
b(j+1:n) = b(j+1:n) - L(j+1:n,j)*x(j);
end
x(n) = b(n)/L(n,n);
The backward substitution algorithm solves a upper triangular system:
function x = UTriSol(U,b)
%
% Pre:
%
U
n-by-n nonsingular upper triangular matrix
%
b
n-by-1
%
% Post:
%
x
Lx = b
n = length(b);
x = zeros(n,1);
for j=n:-1:2
x(j) = b(j)/U(j,j);
b(1:j-1) = b(1:j-1) - x(j)*U(1:j-1,j);
end
x(1) = b(1)/U(1,1);
The LU decomposition without pivoting is performed by the following function.
function
%
% Pre:
%
A
%
% Post:
%
L
%
U
%
[L,U] = GE(A);
n-by-n
[n,n] = size(A);
for k=1:n-1
A(k+1:n,k) = A(k+1:n,k)/A(k,k);
A(k+1:n,k+1:n) = A(k+1:n,k+1:n) - A(k+1:n,k)*A(k,k+1:n);
end
L = eye(n,n) + tril(A,-1);
U = triu(A);
The LU decomposition with pivoting is performed by the following function.
function [L,U,piv] = GEpiv(A);
%
% Pre:
%
A
%
% Post:
%
L
%
U
%
piv
%
%
67
n-by-n
[n,n] = size(A);
piv = 1:n;
for k=1:n-1
[maxv,r] = max(abs(A(k:n,k)));
q = r+k-1;
piv([k q]) = piv([q k]);
A([k q],:) = A([q k],:);
if A(k,k) ~= 0
A(k+1:n,k) = A(k+1:n,k)/A(k,k);
A(k+1:n,k+1:n) = A(k+1:n,k+1:n) - A(k+1:n,k)*A(k,k+1:n);
end
end
L = eye(n,n) + tril(A,-1);
U = triu(A);
4.2. Cholesky Decomposition
The important class of positive definite symmetric matrices admits the Cholesky
decomposition
A = GGT
where G is lower triangular.
if
xT Ax > 0,
In that case we write A > 0.
x Rn .
for all x 6= 0,
x 6= 0,
det
a11
a21
a12
a22
> 0,
det A > 0.
68
4. MATRIX COMPUTATIONS
i = 1, 2, . . . , n.
GT x = y.
4 6
8
A = 6 34 52 ,
8 52 129
0
b = 160 .
452
4 6
8
g11 g21 g31
g11
0
0
g21 g22
0 0 g22 g32 = 6 34 52 .
0
0 g33
8 52 129
g31 g32 g33
2
g11
=4
g11 g21 = 6
g11 g31 = 8
2
2
g21
+ g22
= 34
= g11 = 2 > 0,
= g21 = 3,
= g31 = 4,
= g22 = 5 > 0,
= g32 = 8,
2
2
2
g31
+ g32
+ g33
= 129
= g33 = 7 > 0.
Hence
2
G= 3
4
and
0 0
5 0 ,
8 7
Solving Gy = b by forward
2
3
4
substitution,
0 0
y1
0
5 0 y2 = 160 ,
8 7
y3
452
69
we have
y1 = 0,
y2 = 32,
2 3 4
x1
0
0 5 8 x2 = 32 ,
0 0 7
x3
28
we have
x3 = 4,
x2 = (32 + 32)/5 = 0,
x1 = (0 3 0 + 16)/2 = 8.
70
4. MATRIX COMPUTATIONS
end
end
The dot product of two vectors returns a scalar, c = xT y. Noticing that the
k-loop in CholScalar oversees an inner product between subrows of G, we obtain
the following level-1 dot product implementation.
function G = CholDot(A);
%
% Pre: A is a symmetric and positive definite matrix.
% Post: G is lower triangular and A = G*G.
[n,n] = size(A);
G = zeros(n,n);
for i=1:n
% Compute G(i,1:i)
for j=1:i
if j==1
s = A(j,i);
else
s = A(j,i) - G(j,1:j-1)*G(i,1:j-1);
end
if j<i
G(i,j) = s/G(j,j);
else
G(i,i) = sqrt(s);
end
end
end
An update of the form
vector vector + vector scalar
is called a saxpy operation, which stands for scalar a times x plus y, that is
y = ax + y. A column-orientation version that features the saxpy operation is
the following implementation.
function G = CholSax(A);
%
% Pre: A is a symmetric and positive definite matrix.
% Post: G is lower triangular and A = G*G.
[n,n] = size(A);
G = zeros(n,n);
s = zeros(n,1);
for j=1:n
s(j:n) = A(j:n,j);
for k=1:j-1
s(j:n) = s(j:n) - G(j:n,k)*G(j,k);
end
G(j:n,j) = s(j:n)/sqrt(s(j));
end
71
[n,n] = size(A);
G = zeros(n,n);
s = zeros(n,1);
for j=1:n
if j==1
s(j:n) = A(j:n,j);
else
s(j:n) = A(j:n,j) - G(j:n,1:j-1)*G(j,1:j-1);
end
G(j:n,j) = s(j:n)/sqrt(s(j));
end
There is also a recursive implementation which computes the Cholesky factor row
by row, just like ChoScalar
function G = CholRecur(A);
%
% Pre: A is a symmetric and positive definite matrix.
% Post: G is lower triangular and A = G*G.
[n,n] = size(A);
if n==1
G = sqrt(A);
else
G(1:n-1,1:n-1) = CholRecur(A(1:n-1,1:n-1));
G(n,1:n-1)
= LTriSol(G(1:n-1,1:n-1),A(1:n-1,n));
G(n,n)
= sqrt(A(n,n) - G(n,1:n-1)*G(n,1:n-1));
end
There is even a high performance level-3 implementation of the Cholesky decomposition CholBlock
72
4. MATRIX COMPUTATIONS
kAk = sup
There are three important vector norms in scientific computation, the l1 -norm of
x,
n
X
|xi | = |x1 | + |x2 | + + |xn |,
kxk1 =
i=1
sup
i=1,2,...,n
It can be shown that the corresponding matrix norms are given by the following
formulae.
The l1 -norm, or column sum norm, of A is
n
X
|aij |
(largest column in the l1 vector norm),
kAk1 = max
j=1,2,...,n
i=1
j=1
1/2
max {i
i=1,2,...,n
where the i2 0 are the eigenvalues of AT A. The singular values of a matrix are
considered in Subsection 4.9.
An important non-subordinate matrix norm is the Frobenius norm, or Euclidean matrix norm,
1/2
n X
n
X
kAkF =
|aij |2 .
j=1 i=1
73
where all experimental and numerical roundoff errors are lumped into A and
b. Then we have the bound
kAk kbk
kb
x xk
.
(4.2)
(A)
+
kxk
kAk
kbk
We say that a system Ax = b is well conditioned if (A) is small; otherwise it is
ill conditioned.
Example 4.5. Study the ill condition of the following system
1.0001 1
x1
2.0001
=
1
1.0001
2.0001
x2
with exact and some approximate solutions
1
2.0000
b=
x=
,
x
,
1
0.0001
respectively.
2.0001
1
2.0001
2.0003
=
2.0001
2.0001
b is
However, the relative error in x
(1.0000 + 0.9999)
kb
x xk1
=
1,
kxk1
1+1
that is 100%. This is explained by the fact that the system is very ill conditioned.
In fact,
1
1.0001 1.0000
5000.5 5000.0
=
,
A1 =
1.0001
5000.0
5000.5
0.0002 1.0000
and
1 (A) = (1.0001 + 1.0000)(5000.5 + 5000.0) = 20 002.
The l1 -norm of the matrix A of the previous example and the l1 -condition
number of A are obtained by the following numeric Matlab commands:
>> A = [1.0001 1;1 1.0001];
>> N1 = norm(A,1)
N1 = 2.0001
>> K1 = cond(A,1)
K1 = 2.0001e+04
4.4. Iterative Methods
One can solve linear systems by iterative methods, especially when dealing
with very large systems. One such method is GaussSeidels method which uses
the latest values for the variables as soon as they are obtained. This method is
best explained by means of an example.
74
4. MATRIX COMPUTATIONS
+
+
+
2x2
5x2
x2
+ x3
x3
+ 8x3
= 14,
= 10,
= 20,
(0)
with
x1
(0)
x2
(0)
x3
=
=
=
1,
1,
1.
x1
(n+1)
x2
(n+1)
x3
=
=
=
1
4 (14
1
5 (10
1
8 (20
For n = 0, we have
(1)
(n+1)
x1
(n+1)
x1
(n)
2x2
x2
(n+1)
(n)
x3 ),
(n)
x3 ),
),
(0)
x1
(0)
x2
(0)
x3
= 1,
= 1,
= 1.
1
11
(14 2 1) =
= 2.75
4
4
1
= (10 2.75 + 1) = 1.65
5
1
= (20 2.75 1.65) = 1.95.
8
x1 =
(1)
x2
(1)
x3
For n = 1:
1
(14 2 1.65 1.95) = 2.1875
4
1
(2)
x2 = (10 2.1875 + 1.95) = 1.9525
5
1
(2)
x3 = (20 2.1875 1.9525) = 1.9825
8
GaussSeidels iteration to solve the system Ax = b is given by the following
iterative scheme:
with properly chosen x(0) ,
x(m+1) = D1 b Lx(m+1) U x(m) ,
(2)
x1 =
where the matrix A has been split as the sum of three matrices,
A = D + L + U,
75
end
x
x =
1.0000
0.4286
0.1861
0.1380
0.1357
0.1356
1.0000
-0.1299
0.1492
0.1588
0.1596
0.1596
1.0000
-1.8268
-1.8848
-1.8912
-1.8915
-1.8916
It is important to rearrange the coefficient matrix of a given linear system
in as much a diagonally dominant matrix as possible since this may assure or
improve the convergence of the GaussSeidel iteration.
Example 4.7. Rearrange the system
2x1
x1
10x1
+ 10x2
+ 2x2
x2
x3
15x3
+ 2x3
=
=
=
32
17
35
35
32
17
x1
(n+1)
x2
(n+1)
x3
=
=
=
1
4 (14
1
5 (10
1
8 (20
(n)
x1
(n)
x1
(n)
2x2
x2
(n)
(n)
x3 ),
(n)
x3 ),
),
(0)
x1
(0)
x2
(0)
x3
=
=
=
1,
1,
1.
(x2 , y2 ),
...,
(xN , yN ),
76
4. MATRIX COMPUTATIONS
i = 1, 2, . . . , N.
N
X
i=1
(f (xi ) yi )2 .
Typically, N n + 1. If the functions j (x) are linearly independent, the quadratic form is nondegenerate and the minimum is attained for values of a0 , a1 , . . . , an ,
such that
Q
= 0,
j = 0, 1, 2, . . . , n.
aj
Writing the quadratic form Q explicitly,
Q=
N
X
i=1
P
P
P
a0
0 (xi )0 (xi )
1 (xi )0 (xi )
n (xi )0 (xi )
..
..
..
.
.
.
P
P
P
an
0 (xi )n (xi )
1 (xi )n (xi )
n (xi )n (xi )
P
0 (xi )yi
..
=
, (4.3)
.
P
n (xi )yi
where all sums are over i from 1 to N . Setting the N (n + 1) matrix A, and
the N vector y as
..
..
A=
y = ... ,
,
.
.
yN
0 (xN ) 1 (xN ) n (xN )
we see that the previous square system can be written in the form
a0
AT A ... = AT y.
an
1 (x) = x,
77
2 (x) = x2 ,
1 (x) = x,
PN
N
i=1 xi
P
P
N
N
2
x
i=1 xi
i=1 i
P
PN
N
3
2
i=1 xi
i=1 xi
PN 2
PN
y
x
a
i
0
i=1
i=1 i
PN
PN
x y .
x3i a1 =
i=1
i=1
PN 2i i
PN 4
a2
i=1 xi yi
i=1 xi
1
1 1 1 1 1
1
0 1 2 4 6 1
0 1 4 16 36 1
1
1 2
0 1
3 1
3 4
2 4
0 1
5
6
4
0 0
1 1
a0
2 4
a1
4 16 a2
6 36
1
= 0
0
that is
or
1 1
1 2
1 4
5 13
57
a0
9
13 57 289 a1 = 29 ,
57 289 1569
a2
161
N a = b.
2.2361
0
0
0 .
G = 5.8138 4.8166
25.4921 29.2320 8.0430
1 1
4 6
16 36
3
1
0
1
4
78
4. MATRIX COMPUTATIONS
= [0 1 2 4 6];
= [x.^0 x x.^2];
= [3 1 0 1 4];
= (A*A\(A*y))
a =
2.8252
-2.0490
0.3774
5
4
3
2
1
0
x Rn or Cn ,
(4.4)
(4.5)
(4.6)
79
(4.7)
(4.8)
xi1
xi+1
xn
x1
ai,i1
ai,i+1
ain .
xi
xi
xi
xi
Taking absolute values on both sides of this equation, applying the triangle inequality |a+b| |a|+|b| (where a and b are any complex numbers), and observing
that because of the choice of i,
x1
1, . . . , xn 1,
xj
xi
we obtain (4.7).
Example 4.9. Using Gershgorin Theorem, determine and sketch the Gershgorin disks Dk that contain the eigenvalues of the matrix
3
0.5i i
A = 1 i 1 + i 0 .
0.1i
1
i
Solution. The centres, ci , and radii, ri , of the disks are
c1 = 3,
c2 = 1 + i,
c3 = i,
r1 = |0.5i| + | i| = 1.5
r2 = |1 i| + |0| = 2
r3 = |0.1i| + 1
= 1.1
1.0347 + 1.1630i,
0.2027 1.0082i.
80
4. MATRIX COMPUTATIONS
y
D2
c1
-5
-4
c2
D1
-3
-1
-2
1+i
1
0
-1 c 3
D3
Applying Ak to x, we have
Ak x = a1 k1 z 1 + a2 k2 z 2 + + an kn z n
"
k
k #
2
n
k
= 1 a1 z 1 + a2
z 2 + + an
zn
1
1
k1 a1 z 1 = y
as k .
u(1) =
Au(1) = x(2) ,
u(2) =
..
.
x(1)
,
kx(1) k
x(2)
,
kx(2) k
Au(n) = x(n+1)
1 u(n) .
Example 4.10. Using the power method, find the largest eigenvalue and the
corresponding eigenvector of the matrix
3 2
.
2 5
Hence
1
1
, we have
3 2
1
5
=
= x(1) ,
2 5
1
7
3 2
5/7
4.14
=
= x(2) ,
2 5
1
6.43
3 2
0.644
3.933
=
= x(3) ,
2 5
1
6.288
1 6.288,
81
x1
0.6254
1
(1)
u(2) =
u(3) =
5/7
1
0.644
1
,
0.6254
1
,
.
Numeric Matlab has the command eig to find the eigenvalues and eigenvectors of a numeric matrix. For example
>> A = [3 2;2 5];
>> [X,D] = eig(A)
X =
0.8507
0.5257
-0.5257
0.8507
D =
1.7639
0
0
6.2361
where the columns of the matrix X are the eigenvectors of A and the diagonal elements of the diagonal matrix D are the eigenvalues of A. The numeric command
eig uses the QR algorithm with shifts to be described in Section 4.8.
4.6.3. The inverse power method. A more versatile method to determine
any eigenvalue of a matrix A Rnn , or Cnn , is the inverse power method . It
is derived as follows, under the simplifying assumption that A has n linearly
independent eigenvectors z 1 , . . . , z n , and is near 1 .
We have
(A I)x(1) = x(0) = a1 z 1 + + an z n ,
1
1
1
z 1 + a2
z 2 + + an
zn .
x(1) = a1
1
2
n
Similarly, by recurrence,
k
k
1
1
1
z
+
a
z
+
+
a
zn
x(k) = a1
1
2
2
n
(1 )k
2
n
1
z1 ,
as k ,
a1
(1 )k
since
1
j 6= 1.
j < 1,
Thus, the sequence x(k) converges in the direction of z 1 . In practice the vectors
x(k) are normalized and the system
(A I)x(k+1) = x(k)
82
4. MATRIX COMPUTATIONS
Choose x(0)
For k = 1, 2, 3, . . . , do
Solve
(A I)y (k) = x(k1) by the LU decomposition with partial pivoting.
x(k) = y (k) /ky(k) k
Stop if k(A I)x(k) k < ckAk , where c is a constant of order unity and
is the machine epsilon.
4.7. The QR Decomposition
A very powerful method to solve ill-conditioned and overdetermined system
A Rmn ,
Ax = b,
is the QR decomposition,
m n,
A = QR,
where Q is orthogonal, or unitary, and R is upper triangular. In this case,
kAx bk2 = kQRx bk2 = kRx QT bk2 .
obtained by solving
R1 x = c
by backward substitution and the residual is
= minn kAx bk2 = kdk2 .
xR
1
0
v = x + sign (x1 ) e1 ,
where e1 = . .
..
0
In this case,
x1
x2
..
.
xn
kxk2
0
..
.
0
83
The matrix P is symmetric and orthogonal and it is equal to its own inverse, that
is, it satisfies the relations
P T = P = P 1 .
To minimize the number of floating point operations and memory allocation, the
scalar
s = 2/vT v
is first computed and then
P x = x s(v T x)v
is computed taking the special structure of the matrix P into account. To keep
P in memory, only the number s and the vector v need be stored.
Softwares systematically use the QR decomposition to solve overdetermined
systems. So does the Matlab left-division command \ with an overdetermined or
singular system.
The numeric Matlab command qr produces the QR decomposition of a matrix:
>> A = [1 2 3; 4 5 6; 7 8 9];
>> [Q,R] = qr(A)
Q =
-0.1231
0.9045
0.4082
-0.4924
0.3015
-0.8165
-0.8616
-0.3015
0.4082
R =
-8.1240
-9.6011 -11.0782
0
0.9045
1.8091
0
0
-0.0000
It is seen that the matrix A is singular since the diagonal element r33 = 0.
4.8. The QR algorithm
The QR algorithm uses a sequence of QR decompositions
A = Q1 R1
A1 = R1 Q1 = Q2 R2
A2 = R2 Q2 = Q3 R3
..
.
to yield the eigenvalues of A, since An converges to an upper or quasi-upper triangular matrix with the real eigenvalues on the diagonal and complex eigenvalues
in 2 2 diagonal blocks, respectively. Combined with simple shifts, double shifts,
and other shifts, convergence is very fast.
For large matrices, of order n 100, one seldom wants all the eigenvalues.
To find selective eigenvalues, one may use Lanczos method.
The Jacobi method to find the eigenvalues of a symmetric matrix is being
revived since it is parallelizable for parallel computers.
84
4. MATRIX COMPUTATIONS
kA1 k2 = n .
The same decomposition holds for complex matrices A Cmn . In this case U
and V are unitary and the transpose V T is replaced by the Hermitian transpose
V = V T .
The rank of a matrix A is the number of nonzero singular values of A.
The numeric Matlab command svd produces the singular values of a matrix:
A = [1 2 3; 4 5 6; 7 8 9];
[U,S,V] = svd(A)
U =
0.2148
0.8872
-0.4082
0.5206
0.2496
0.8165
0.8263
-0.3879
-0.4082
S =
16.8481
0
0
0
1.0684
0
0
0
0.0000
V =
0.4797
-0.7767
0.4082
0.5724
-0.0757
-0.8165
0.6651
0.6253
0.4082
The diagonal elements of the matrix S are the singular values of A. The l2 norm
of A is kAk2 = 1 = 16.8481. Since 3 = 0, the matrix A is singular.
If A is symmetric, AT = A, Hermitian symmetric AH = A or, more generally,
normal , AAH = AH A, then the moduli of the eigenvalues of A are the singular
values of A.
Theorem 4.7 (Schur Decomposition). Any square matrix A admits the Schur
decomposition
A = U T U H,
where the diagonal elements of the upper triangular matrix T are the eigenvalues
of A and the matrix U is unitary.
For normal matrices, the matrix T of the Schur decomposition is diagonal.
85
CHAPTER 5
y(x0 ) = y0 .
(5.1)
(5.2)
for all (x, y1 ) and (x, y2 ) in D, where L is the Lipschitz constant, then the initial
value problem (5.1) is well-posed.
In the sequel, we shall assume that the conditions of Theorem 5.1 hold and
(5.1) is well posed. Moreover, we shall suppose that f (x, y) has mixed partial
derivatives of arbitrary order.
In considering numerical methods for the solution of (5.1) we shall use the
following notation:
h > 0 denotes the integration step size
xn = x0 + nh is the n-th node
y(xn ) is the exact solution at xn
yn is the numerical solution at xn
fn = f (xn , yn ) is the numerical value of f (x) at (xn , yn )
A function, g(x), is said to be of order p as x x0 , written g O(|x x0 |p )
if
|g(x)| < M |x x0 |p ,
M a constant,
for all x near x0 .
87
88
y (n )
(xn+1 xn )2
2
for n between xn and xn+1 , n = 0, 1, . . . , N 1. Since y (xn ) = f (xn , y(xn ))
and xn+1 xn = h, it follows that
y (n ) 2
h .
y(xn+1 ) = y(xn ) + f xn , y(xn ) h +
2
We obtain Eulers method,
y(xn+1 ) = y(xn ) + y (xn ) (xn+1 xn ) +
yn+1 = yn + hf (xn , yn ),
(5.3)
(5.4)
y(1) = 1,
(5.5)
xf = 1.5,
y0 = 1,
f (x, y) = 0.2xy.
Hence
xn = x0 + hn = 1 + 0.1n,
N=
1.5 1
= 5,
0.1
and
yn+1 = yn + 0.1 0.2(1 + 0.1n)yn ,
with y0 = 1,
for n = 0, 1, . . . , 4. The numerical results are listed in Table 5.1. Note that the
differential equation in (5.5) is separable. The (unique) solution of (5.5) is
y(x) = e(0.1x
0.1)
This formula has been used to compute the exact values y(xn ) in the previous
table.
The next example illustrates the limitations of Eulers method. In the next
subsections, we shall see more accurate methods than Eulers method.
89
xn
yn
y(xn )
0
1
2
3
4
5
1.00
1.10
1.20
1.30
1.40
s1.50
1.0000
1.0200
1.0424
1.0675
1.0952
1.1259
1.0000
1.0212
1.0450
1.0714
1.1008
1.1331
Absolute
error
0.0000
0.0012
0.0025
0.0040
0.0055
0.0073
Relative
error
0.00
0.12
0.24
0.37
0.50
0.64
xn
yn
y(xn )
0
1
2
3
4
5
1.00
1.10
1.20
1.30
1.40
1.50
1.0000
1.2000
1.4640
1.8154
2.2874
2.9278
1.0000
1.2337
1.5527
1.9937
2.6117
3.4904
Absolute
error
0.0000
0.0337
0.0887
0.1784
0.3244
0.5625
Relative
error
0.00
2.73
5.71
8.95
12.42
16.12
Example 5.2. Use Eulers method with h = 0.1 to approximate the solution
to the initial value problem
y (x) = 2xy,
y(1) = 1,
(5.6)
xf = 1.5,
y0 = 1,
xn = x0 + hn = 1 + 0.1n,
N=
1.5 1
= 5,
0.1
y0 = 1,
for n = 0, 1, 2, 3, 4. The numerical results are listed in Table 5.2. The relative
errors show that our approximations are not very good.
Definition 5.2. The local truncation error of a method of the form
yn+1 = yn + h (xn , yn ),
(5.7)
90
z = Mh / 2 + / h
1/ h
1/h*
M=
then |n |
h
2
x0 xxf
for n = 0, 1, . . . , N.
If
|e0 | < 0
and the precision in the computations is bounded by , then it can be shown that
L(xnx0 )
1 Mh
e
1 + 0 eL(xn x0 ) ,
+
|en |
L
2
h
where L is the Lipschitz constant defined in Theorem 5.1,
M=
max
x0 xxf
|y (x)|,
Mh
+
2
h
first decreases and afterwards increases as 1/h increases, as shown in Fig. 5.1.
The term M h/2 is due to the trunctation error and the term /h is due to the
roundoff errors.
91
xn
ynP
ynC
y(xn )
1.00
1.0000 1.0000
1.10 1.200 1.2320 1.2337
1.20
1.5479 1.5527
1.30
1.9832 1.9937
1.40
2.5908 2.6117
1.50
3.4509 3.4904
Absolute
error
0.0000
0.0017
0.0048
0.0106
0.0209
0.0344
Relative
error
0.00
0.14
0.31
0.53
0.80
1.13
Example 5.4. Use the improved Euler method with h = 0.1 to approximate
the solution to the initial value problem of Example 5.2.
y (x) = 2xy,
y(1) = 1,
1 x 1.5.
Solution. We have
xn = x0 + hn = 1 + 0.1n,
n = 0, 1, . . . , 5.
for n = 0, 1, . . . , 4. The numerical results are listed in Table 5.3. These results
are much better than those listed in Table 5.2 for Eulers method.
We need to develop methods of order greater than one, which, in general, are
more precise than Eulers method.
5.3. Low-Order Explicit RungeKutta Methods
RungeKutta methods are one-step multistage methods.
5.3.1. SecNond-order RungeKutta method. Two-stage explicit Runge
Kutta methods are given by the formula (left) and, conveniently, in the form of
a Butcher tableau (right):
c
A
k1 = hf (xn , yn )
k1
0
0
k
c
a
0
2
2
21
k2 = hf (xn + c2 h, yn + a21 k1 )
yn+1 = yn + b1 k1 + b2 k2
yn+1
bT
b1
b2
92
In a Butcher tableau, the components of the vector c are the increments of xn and
the entries of the matrix A are the multipliers of the approximate slopes which,
after multiplication by the step size h, increments yn . The components of the
vector b are the weights in the combination of the intermediary values kj . The
left-most column of the tableau is added here for the readers convenience.
To attain second order, c, A and b have to be chosen judiciously. We proceed
to derive two-stage second-order Runge-Kutta methods.
By Taylors Theorem, we have
1
y(xn+1 ) = y(xn ) + y (xn )(xn+1 xn ) + y (xn )(xn+1 xn )2
2
1
+ y (n )(xn+1 xn )3 (5.8)
6
for some n between xn and xn+1 and n = 0, 1, . . . , N 1. From the differential
equation
y (x) = f x, y(x) ,
and its first total derivative with respect to x, we obtain expressions for y (xn )
and y (xn ),
y (xn ) = f xn , y(xn ) ,
d
f x, y(x) x=xn
y (xn ) =
dx
= fx xn , y(xn ) + fy xn , y(xn ) f xn , y(xn ) .
In order for the expressions (5.8) and (5.9) to be equal to order h, we must have
a + b = 1,
b = 1/2,
b = 1/2.
93
Thus, we have three equations in four unknowns. This gives rise to a oneparameter family of solutions. Identifying the parameters:
c1 = ,
a21 = ,
b1 = a,
b2 = b,
k1 = hf (xn , yn )
A
0
1
k1
k2 = hf (xn + h, yn + k1 )
0
k2
1
yn+1 = yn + (k1 + k2 )
yn+1 bT 1/2 1/2
2
This is Heuns method of order 2.
Other two-stage second-order methods are the mid-point method:
k1 = hf (xn , yn )
1
1
k2 = hf xn + h, yn + k1
2
2
yn+1 = yn + k2
k1
k2
yn+1
c
A
0
0
1/2 1/2 0
bT
c
A
0
0
2/3 2/3
k1
k2
yn+1
bT
1/4 3/4
5.3.2. Third-order RungeKutta method. We list two common threestage third-order RungeKatta methods in their Butcher tableau, namely Heuns
third-order formula and Kuttas third-order rule.
k1
k2
k3
yn+1
c
A
0
0
1/3 1/3 0
2/3 0 2/3
bT
1/4
0
3/4
k1
k2
k3
yn+1
c
0
0
1/2 1/2
1
1
bT
A
0
2
94
y (xn )(xn+1 xn ) +
y (3) (xn )
y (xn )
(xn+1 xn )2 +
(xn+1 xn )3
2!
3!
y (4) (xn )
+
(xn+1 xn )4 + O(h5 )
4!
is equal to
ak1 + bk2 + ck3 + dk4 + O(h5 ),
where
k1 = hf (xn , yn ),
k2 = hf (xn + 1 h, yn + 1 k1 ),
k3 = hf (xn + 2 h, yn + 2 k2 ),
k4 = hf (xn + 3 h, yn + 3 k3 ).
This follows from the relations
xn+1 xn = h,
y (xn ) =
and Taylors Theorem for functions of two variables. The lengthy computation is
omitted.
The (classic) four-stage RungeKutta method of order 4 given by its formula
(left) and, conveniently, in the form of a Butcher tableau (right).
k1 = hf (xn , yn )
1
1
k2 = hf xn + h, yn + k1
2
2
1
1
k3 = hf xn + h, yn + k2
2
2
k4 = hf (xn + h, yn + k3 )
1
yn+1 = yn + (k1 + 2k2 + 2k3 + k4 )
6
k1
k2
k3
k4
yn+1
c
A
0
0
1/2 1/2 0
1/2 0 1/2
1
0
0
bT
0
1
95
yn
y(xn )
1.00
1.10
1.20
1.30
1.40
1.50
1.0000
1.2337
1.5527
1.9937
2.6116
3.4902
1.0000
1.2337
1.5527
1.9937
2.6117
3.4904
Absolute
error
0.0000
0.0000
0.0000
0.0000
0.0001
0.0002
Relative
error
0.0
0.0
0.0
0.0
0.0
0.0
The next example shows that the fourth-order RungeKutta method yields
better results for (5.6) than the previous methods.
Example 5.5. Use the fourth-order RungeKutta method with h = 0.1 to
approximate the solution to the initial value problem of Example 5.2,
y (x) = 2xy,
y(1) = 1,
for n = 0, 1, . . . , 5.
With the starting value y0 = 1.0, the approximation yn to y(xn ) is given by the
scheme
1
yn+1 = yn + (k1 + 2 k2 + 2 k3 + k4 )
6
where
k1 = 0.1 2(1.0 + 0.1n)yn ,
and n = 0, 1, 2, 3, 4. The numerical results are listed in Table 5.4. These results
are much better than all those previously obtained.
Example 5.6. Consider the initial value problem
y = (y x 1)2 + 2,
y(0) = 1.
xn
0.0
0.1
0.2
0.3
0.4
yn
1.000 000 000
1.200 334 589
1.402 709 878
1.609 336 039
1.822 792 993
Exact value
y(xn )
1.000 000 000
1.200 334 672
1.402 710 036
1.609 336 250
1.822 793 219
Global error
y(xn ) yn
0.000 000 000
0.000 000 083
0.000 000 157
0.000 000 181
0.000 000 226
96
y(0) = 0,
0
0.1000
0.2000
0.3000
0.4000
0.5000
0.6000
0.7000
0.8000
0.9000
0
0.0052
0.0214
0.0499
0.0918
0.1486
0.2218
0.3128
0.4228
0.5531
97
yn
0.6
0.4
0.2
0.5
1.5
xn
1.0000
0.7040
on 0 x 20, print every tenth value, and plot the numerical solution. Also,
use the ode23 code to solve (5.11) and plot the solution.
= y2 ,
= y2 1 y12 y1 ,
98
clear
h = 0.1; t0= 0; tf= 21; % step size, initial and final times
y0 = [0 0.25]; % initial conditions
n = ceil((xf-t0)/h); % number of steps
count = 2; print_control = 10; % when to write to output
t = t0; y = y0; % initialize t and y
output = [t0 y0]; % first row of matrix of printed values
w = [t0, y0]; % first row of matrix of plotted values
for i=1:n
k1 = h*exp1vdp(x,y);
k2 = h*exp1vdp(x+h/2,y+k1/2);
k3 = h*exp1vdp(x+h/2,y+k2/2); k4 = h*exp1vdp(x+h,y+k3);
z = y + (1/6)*(k1+2*k2+2*k3+k4);
t = t + h;
if count > print_control
output = [output; t z]; % augmenting matrix of printed values
count = count - print_control;
end
y = z;
w = [w; t z]; % augmenting matrix of plotted values
count = count + 1;
end
[output(1:11,:) output(12:22,:)] % print numerical values of solution
save w % save matrix to plot the solution
The command output prints the values of t, y1 , and y2 .
t
0
1.0000
2.0000
3.0000
4.0000
5.0000
6.0000
7.0000
8.0000
9.0000
10.0000
y(1)
0
0.3586
0.6876
0.4313
-0.7899
-1.6075
-0.9759
0.8487
1.9531
1.3357
-0.0939
y(2)
0.2500
0.4297
0.1163
-0.6844
-1.6222
0.1456
1.0662
2.5830
-0.2733
-0.8931
-2.2615
t
11.0000
12.0000
13.0000
14.0000
15.0000
16.0000
17.0000
18.0000
19.0000
20.0000
21.0000
y(1)
-1.9923
-1.6042
-0.5411
1.6998
1.8173
0.9940
-0.9519
-1.9688
-1.3332
0.1068
1.9949
y(2)
-0.2797
0.7195
1.6023
1.6113
-0.5621
-1.1654
-2.6628
0.3238
0.9004
2.2766
0.2625
1
yn
yn
-1
-1
-2
-2
-3
10
tn
15
99
20
-3
10
tn
15
20
as h 0,
100
k
X
n=0
(5.13)
lie inside or on the boundary of the unit disk, and those on the unit circle are
simple.
We finally can state the following fundamental theorem.
Theorem 5.2. A method is convergent as h 0 if and only if it is zero-stable
and consistent.
All numerical methods considered in this chapter are convergent.
5.5. Absolutely Stable Numerical Methods
We now turn attention to the application of a consistent and zero-stable
numerical solver with small but nonvanishing step size.
For n = 0, 1, 2, . . ., let yn be the numerical solution of (5.1) at x = xn , and
y [n] (xn+1 ) be the exact solution of the local problem:
y = f (x, y),
y(xn ) = yn .
(5.14)
(5.15)
(p+1)
n+1 Cp+1 hp+1
(xn ) + O(hp+2
n+1 y
n+1 ),
(5.16)
then we say that the local error is of order p + 1 and Cp+1 is the error constant of
the method. For consistent and zero-stable methods, the global error is of order
p whenever the local error is of order p + 1. In such case, we say that the method
is of order p. We remark that a method of order p 1 is consistent according to
Definition 5.4.
Let us now apply the solver (5.12), with its small nonvanishing parameter h,
to the linear test equation
y = y,
< 0.
(5.17)
b
The region of absolute stability, R, is that region in the complex h-plane,
where b
h = h, for which the numerical solution yn of (5.17) goes to zero, as n
goes to infinity.
101
The region of absolute stability of the explicit Euler method is the disk of
radius 1 and center (1, 0), see curve k = 1 in Fig. 5.7. The region of stability
of the implicit backward Euler method is the outside of the disk of radius 1 and
center (1, 0), hence it contains the left half-plane, see curve k = 1 in Fig. 5.10.
The region of absolute stability, R, of an explicit method is very roughly a
disk or cardioid in the left half-plane (the cardioid overlaps with the right halfplane with a cusp at the origin). The boundary of R cuts the real axis at ,
where < < 0, and at the origin. The interval [, 0] is called the interval
of absolute stability. For methods with real coefficients, R is symmetric with
respect to the real axis. All methods considered in this work have real coefficients;
hence Figs. 5.7, 5.8 and 5.10, below, show only the upper half of R.
The region of stability, R, of implicit methods extends to infinity in the left
half-plane, that is = . The angle subtended at the origin by R in the left
half-plane is usually smaller for higher order methods, see Fig. 5.10.
If the region R does not include the whole negative real axis, that is, <
< 0, then the inclusion
h R
restricts the step size:
.
Re
In practice, we want to use a step size h small enough to ensure accuracy of the
numerical solution as implied by (5.15)(5.16), but not too small.
h Re = 0 < h
< 0,
102
3i
k=4
k=3
k=2
1i
k=1
-3
-2
-1
%
% Pre:
%
%
%
%
%
%
%
%
%
%
% Post:
%
103
if k==1
k1 = h*fc;
ynew = yc + k1;
elseif k==2
k1 = h*fc;
k2 = h*feval(fname,tc+h,yc+k1);
ynew = yc + (k1 + k2)/2;
elseif k==3
k1 = h*fc;
k2 = h*feval(fname,tc+(h/2),yc+(k1/2));
k3 = h*feval(fname,tc+h,yc-k1+2*k2);
ynew = yc + (k1 + 4*k2 + k3)/6;
elseif k==4
k1 = h*fc;
k2 = h*feval(fname,tc+(h/2),yc+(k1/2));
k3 = h*feval(fname,tc+(h/2),yc+(k2/2));
k4 = h*feval(fname,tc+h,yc+k3);
ynew = yc + (k1 + 2*k2 + 2*k3 + k4)/6;
elseif k==5
k1 = h*fc;
k2 = h*feval(fname,tc+(h/4),yc+(k1/4));
k3 = h*feval(fname,tc+(3*h/8),yc+(3/32)*k1
+(9/32)*k2);
k4 = h*feval(fname,tc+(12/13)*h,yc+(1932/2197)*k1
-(7200/2197)*k2+(7296/2197)*k3);
k5 = h*feval(fname,tc+h,yc+(439/216)*k1
- 8*k2 + (3680/513)*k3 -(845/4104)*k4);
k6 = h*feval(fname,tc+(1/2)*h,yc-(8/27)*k1
+ 2*k2 -(3544/2565)*k3 + (1859/4104)*k4 - (11/40)*k5);
104
ynew
= yc + (16/135)*k1 + (6656/12825)*k3 +
(28561/56430)*k4 - (9/50)*k5 + (2/55)*k6;
end
tnew = tc+h;
fnew = feval(fname,tnew,ynew);
5.7. Embedded Pairs of RungeKutta methods
Thus far, we have only considered a constant step size h. In practice, it is
advantageous to let h vary so that h is taken larger when y(x) does not vary
rapidly and smaller when y(x) changes rapidly. We turn to this problem.
Embedded pairs of RungeKutta methods of orders p and p + 1 have built-in
local error and step-size controls by monitoring the difference between the higher
and lower order solutions, yn+1 ybn+1 . Some pairs include an interpolant which
is used to interpolate the numerical solution between the nodes of the numerical
solution and also, in some case, to control the step-size.
5.7.1. Matlabs four-stage RK pair ode23. The code ode23 consists in a
four-stage pair of embedded explicit RungeKutta methods of orders 2 and 3 with
error control. It advances from yn to yn+1 with the third-order method (so called
local extrapolation) and controls the local error by taking the difference between
the third-order and the second-order numerical solutions. The four stages are:
k1 = h f (xn , yn ),
k2 = h f (xn + (1/2)h, yn + (1/2)k1 ),
k3 = h f (xn + (3/4)h, yn + (3/4)k2 ),
k4 = h f (xn + h, yn + (2/9)k1 + (1/3)k2 + (4/9)k3 ),
The first three stages produce the solution at the next time step:
1
4
2
k1 + k2 + k3 ,
9
3
9
and all four stages give the local error estimate:
yn+1 = yn +
1
1
1
5
k1 +
k2 + k3 k4 .
72
12
9
8
However, this is really a three-stage method since the first step at xn+1 is the
[n]
[n+1]
= k4 . Such methods are called FSAL
same as the last step at xn , that is k1
methods.
The natural interpolant used in ode23 is the two-point Hermite polynomial of degree 3 which interpolates yn and f (xn , yn ) at x = xn , and yn+1 and
f (xn+1 , xn+1 ) at t = xn+1 .
E=
Example 5.9. Use Matlabs four-stage FSAL ode23 method with h = 0.1 to
approximate y(0.1) and y(0.2) to 5 decimal places and estimate the local error
for the initial value problem
y = xy + 1,
y(0) = 1.
105
With n = 0:
k1 = 0.1 1 = 0.1
k4 = 0.1 (0.1 1.105 346 458 333 33 + 1) = 0.111 053 464 583 33
y1 = 1.105 346 458 333 33
106
2.5
2
1.5
1
0.2
0.4
0.6
0.8
k1
k2
k3
k4
k5
k6
k7
1
5
3
10
4
5
8
9
1
1
ybn+1
yn+1
T
bT
b b
yn+0.5
bT
b
bT
A
0
1
5
3
40
44
45
19372
6561
9017
3168
35
384
5179
57600
35
384
71
57 600
5783653
57600000
0
9
40
56
15
25360
2187
355
33
0
0
0
0
0
32
9
64448
6561
46732
5247
500
1113
7571
16695
500
1113
1671
695
466123
1192500
0
212
729
49
176
125
192
393
640
125
192
71
1 920
41347
1920000
0
5103
18656
2187
6784
92097
339200
2187
6784
17 253
339
200
16122321
339200000
0
11
84
187
2100
11
84
22
525
7117
20000
1
40
0
1
40
183
10000
(5.19)
107
4i
2i
-4
-2
108
1
4
3
8
12
13
1
4
3
32
1932
2197
439
216
8
27
1
1
2
2197
4104
ybn+1
15
bT
b
b T bT
b
0
9
32
7200
2197
8
2
0
7296
2197
3680
513
3544
2565
845
4104
0
11
40
6656
12825
128
4275
28561
56430
2197
75240
9
50
0
1859
4104
(5.20)
0
16
135
1
360
0
0
1
50
2
55
2
55
(5.22)
where
k1 = hf (xn , yn ),
k2 = hf (xn + h/4, yn + k1 /4),
k3 = hf (xn + 3h/8, yn + 3k1 /32 + 9k2 /32),
k4 = hf (xn + 12h/13, yn + 1932k1/2197 7200k2 /2197 + 7296k3/2197),
k6 = hf (xn + h/2, yn 8k1 /27 + 2k2 + 3544k3/2565 + 1859k4 /4104 11k5 /40).
(2) If |b
yj+1 yn+1 | < h, accept yn+1 as the approximation to y(xn+1 ).
Replace h by qh where
1/4
q = h/(2|b
yj+1 yn+1 |)
and go back to step (1) to compute an approximation for yj+2 .
109
(3) If |b
yj+1 yn+1 | h, replace h by qh where
1/4
q = h/(2|b
yj+1 yn+1 |)
and go back to step (1) to compute the next approximation for yn+1 .
One can show that the local truncation error for (5.21) is approximately
|b
yj+1 yn+1 |/h.
At step (2), one requires that this error be smaller than h in order to get |y(xn )
yn | < for all j (and in particular |y(xf ) yf | < ). The formula to compute q
in (2) and (3) (and hence a new value for h) is derived from the relation between
the local truncation errors of (5.21) and (5.22).
RKF(4,5) overestimate the error in the order-four solution because its local
error constant is minimized. The next method, RKV, corrects this fault.
5.7.4. Eight-stage RungeKuttaVerner pair RKV(5,6). The eightstage RungeKuttaVerner pair RKV(5,6) of order 5 and 6 is presented in a
Butcher tableau. Note that 8 stages are necessary to get order 6. The method
attempts to keep the global error proportional to a user-specified tolerance. It is
efficient for nonstiff systems where the derivative evaluations are not expensive
and where the solution is not required at a large number of finely spaced points
(as might be required for graphical output).
c
0
k1
k2
k3
k4
k5
k6
k7
k8
1
6
4
15
2
3
5
6
1
1
15
yn+1
ybn+1
bT
bT
b
A
0
1
6
4
75
5
6
165
64
12
5
8263
15000
3501
1720
13
160
3
40
0
16
75
38
55
6
124
75
300
43
0
0
0
5
2
425
64
4015
612
643
680
297275
52632
2375
5984
875
2244
0
85
96
11
36
81
250
319
2322
5
16
23
72
0
88
255
2484
10625
24068
84065
0
0
0
12
85
264
1955
3
44
(5.23)
3850
26703
125
11592
43
616
y(a) = ,
(5.24)
where f (x) is continuous with respect to x and Lipschitz continuous with respect
to y on the strip [a, b] (, ). Then, by Theorem 5.1, the exact solution,
y(x), exists and is unique on [a, b].
We look for an approximate numerical solution {yn } at the nodes xn = a + nh
110
j yn+j = h
n=0
k
X
j fn+j ,
(5.25)
n=0
h<
1
L|k |
implies convergence.
Applying (5.25) to the test equation,
y = y,
< 0,
k
X
n=0
(j b
hj )rj
satisfy |rs (b
h)| < 1, s = 1, 2, . . . , k. In that case, we say that the linear multistep
method (5.25) is absolutely stable for given b
h. The region of absolute stability, R, in the complex plane is the set of values of b
h for with the method is
absolutely stable.
5.8.2. Adams-Bashforth-Moulton linear multistep methods. Popular linear k-step methods are (explicit) AdamsBashforth (AB) and (implicit)
AdamsMoulton (AM) methods,
yn+1 yn = h
k1
X
j=0
j fn+jk+1 ,
yn+1 yn = h
k
X
j=0
j fn+jk+1 ,
111
respectively. Tables 5.5 and 5.6 list the AB and AM methods of stepnumber 1 to
6, respectively. In the tables, the coefficients of the methods are to be divided by
Cp+1
1/2
5/12
12
3/8
24
251/720
720
95/288
6 19 087/60 480
3
23
55
1901 2774
4277 7923
59
16
37
1616 1274
9982 7298
251
9
251
Cp+1
12
19
1/12
24
106 19
720
27 1440
6 863/60 480
646 264
482 173
1/24
19/720
3/160
value yn+1
, which is then inserted in the right-hand side of an AM method used
as a corrector to obtain the corrected value yn+1 . Such combination is called an
ABM predictor-corrector which, when of the same order, comes with the Milne
estimate for the principal local truncation error
Cp+1
(yn+1 yn+1
).
n+1
Cp+1 Cp+1
The procedure called local approximation improves the higher-order solution yn+1
by the addition of the error estimator, namely,
Cp+1
yn+1 +
(yn+1 yn+1
).
Cp+1 Cp+1
112
k=1
3i
k=1
2i
k=2
k=1 k=2
k=3
1i
k=4
-1
1i
k=4
k=3
-2
3i
-6
-2
-4
2i
k=1
k=1
k=3
k=2
k=4
-1
2i
1i
1i
k=3
-2
k=2
k=4
-2
-1
113
Example 5.10. Solve to six decimal places the initial value problem
y = x + sin y,
y(0) = 0,
n
0
1
2
3
4
5
6
7
8
9
10
Starting
Predicted Corrected 105 Local Error in ynC
C
xn
yn
ynP
ynC
(ynC ynP ) 104
0.0 0.000 000 0
0.2 0.021 404 7
0.4 0.091 819 5
0.6
0.221 260 0.221 977
7
0.8
0.423 703 0.424 064
4
1.0
0.710 725 0.709 623
11
1.2
1.088 004 1.083 447
46
1.4
1.542 694 1.533 698
90
1.6
2.035 443 2.026 712
87
1.8
2.518 039 2.518 431
4
2.0
2.965 994 2.975 839
98
yn1/2 =
(5.34)
yn3/2
(5.35)
114
f
< 0,
y
where
f
< 0.
y
y(0) = 0.
n
0
1
2
3
4
5
6
7
8
9
10
xn
0.0
0.2
0.4
0.6
0.8
1.0
1.2
1.4
1.6
1.8
2.0
We see that the method is stable since the error does not grow.
Example 5.12. Solve to six decimal places the initial value problem
y = arctan x + arctan y,
y(0) = 0,
clear
h = 0.2; x0= 0; xf= 2; y0 = 0;
n = ceil((xf-x0)/h); % number of steps
%
count = 2; print_time = 1; % when to write to output
x = x0; y = y0; % initialize x and y
output = [0 x0 y0 0];
%RK4
for i=1:3
k1 = h*exp5_12(x,y);
k2 = h*exp5_12(x+h/2,y+k1/2);
k3 = h*exp5_12(x+h/2,y+k2/2);
k4 = h*exp5_12(x+h,y+k3);
z = y + (1/6)*(k1+2*k2+2*k3+k4);
x = x + h;
if count > print_time
output = [output; i x z 0];
count = count - print_time;
end
y = z;
count = count + 1;
end
% ABM4
for i=4:n
zp = y + (h/24)*(55*exp5_12(output(i,2),output(i,3))-...
59*exp5_12(output(i-1,2),output(i-1,3))+...
37*exp5_12(output(i-2,2),output(i-2,3))-...
9*exp5_12(output(i-3,2),output(i-3,3)) );
z = y + (h/24)*( 9*exp5_12(x+h,zp)+...
19*exp5_12(output(i,2),output(i,3))-...
5*exp5_12(output(i-1,2),output(i-1,3))+...
exp5_12(output(i-2,2),output(i-2,3)) );
x = x + h;
if count > print_time
errest = -(19/270)*(z-zp);
output = [output; i x z errest];
count = count - print_time;
end
y = z;
count = count + 1;
end
output
save output %for printing the graph
Error estimate
115
116
yn
2
1.5
1
0.5
0
0.5
1
xn
1.5
0
0.2
0.4
0.6
0.8
1.0
1.2
1.4
1.6
1.8
2.0
0
0.02126422549044
0.08962325332457
0.21103407185113
0.39029787517821
0.62988482479868
0.92767891924367
1.27663327419538
1.66738483675693
2.09110753309673
2.54068815072267
0
0
0
0
0.00001007608281
0.00005216829834
0.00004381671342
-0.00003607372725
-0.00008228934754
-0.00005318684309
-0.00001234568256
% Pre:
%
%
%
%
%
%
% Post:
%
%
117
[tvals,yvals,fvals] = StartAB(fname,t0,y0,h,k);
tc = tvals(k);
yc = yvals(:,k);
fc = fvals(:,k);
for j=k:n
% Take a step and then update.
[tc,yPred,fPred,yc,fc] = PCstep(fname,tc,yc,fvals,h,k);
tvals = [tvals tc];
yvals = [yvals yc];
fvals = [fc fvals(:,1:k-1)];
end
The starting values are obtained by the following M-file by means of a Runge
Kutta method.
function [tvals,yvals,fvals] = StartAB(fname,t0,y0,h,k)
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
118
tvals = tc;
yvals = yc;
fvals = fc;
for j=1:k-1
[tc,yc,fc] = RKstep(fname,tc,yc,fc,h,k);
tvals = [tvals tc];
yvals = [yvals yc];
fvals = [fc fvals];
end
The function M-file Rkstep is found in Subsection 5.6 The Adams-Bashforth
predictor step is taken by the following M-file.
function [tnew,ynew,fnew] = ABstep(fname,tc,yc,fvals,h,k)
%
% Pre: fname is a string that names a function of the form f(t,y)
%
where t is a scalar and y is a column d-vector.
%
%
yc is an approximate solution to y(t) = f(t,y(t)) at t=tc.
%
%
fvals is an d-by-k matrix where fvals(:,i) is an approximation
%
to f(t,y) at t = tc +(1-i)h, i=1:k
%
%
h is the time step.
%
%
k is the order of the AB method used, 1<=k<=5.
%
% Post: tnew=tc+h, ynew is an approximate solution at t=tnew, and
%
fnew = f(tnew,ynew).
if k==1
ynew = yc + h*fvals;
elseif k==2
ynew = yc + (h/2)*(fvals*[3;-1]);
elseif k==3
ynew = yc + (h/12)*(fvals*[23;-16;5]);
elseif k==4
ynew = yc + (h/24)*(fvals*[55;-59;37;-9]);
elseif k==5
ynew = yc + (h/720)*(fvals*[1901;-2774;2616;-1274;251]);
end
tnew = tc+h;
fnew = feval(fname,tnew,ynew);
The Adams-Moulton corrector step is taken by the following M-file.
function [tnew,ynew,fnew] = AMstep(fname,tc,yc,fvals,h,k)
%
% Pre: fname is a string that names a function of the form f(t,y)
%
where t is a scalar and y is a column d-vector.
%
119
%
yc is an approximate solution to y(t) = f(t,y(t)) at t=tc.
%
%
fvals is an d-by-k matrix where fvals(:,i) is an approximation
%
to f(t,y) at t = tc +(2-i)h, i=1:k
%
%
h is the time step.
%
%
k is the order of the AM method used, 1<=k<=5.
%
% Post: tnew=tc+h, ynew is an approximate solution at t=tnew, and
%
fnew = f(tnew,ynew).
if k==1
ynew = yc + h*fvals;
elseif k==2
ynew = yc + (h/2)*(fvals*[1;1]);
elseif k==3
ynew = yc + (h/12)*(fvals*[5;8;-1]);
elseif k==4
ynew = yc + (h/24)*(fvals*[9;19;-5;1]);
elseif k==5
ynew = yc + (h/720)*(fvals*[251;646;-264;106;-19]);
end
tnew = tc+h;
fnew = feval(fname,tnew,ynew);
The predictor-corrector step is taken by the following M-file.
function [tnew,yPred,fPred,yCorr,fCorr] = PCstep(fname,tc,yc,fvals,h,k)
%
% Pre: fname is a string that names a function of the form f(t,y)
%
where t is a scalar and y is a column d-vector.
%
%
yc is an approximate solution to y(t) = f(t,y(t)) at t=tc.
%
%
fvals is an d-by-k matrix where fvals(:,i) is an approximation
%
to f(t,y) at t = tc +(1-i)h, i=1:k
%
%
h is the time step.
%
%
k is the order of the Runge-Kutta method used, 1<=k<=5.
%
% Post: tnew=tc+h,
%
yPred is the predicted solution at t=tnew
%
fPred = f(tnew,yPred)
%
yCorr is the corrected solution at t=tnew
%
fCorr = f(tnew,yCorr).
120
[tnew,yPred,fPred] = ABstep(fname,tc,yc,fvals,h,k);
[tnew,yCorr,fCorr] = AMstep(fname,tc,yc,[fPred fvals(:,1:k-1)],h,k);
5.8.4. Specification of multistep methods. The left-hand side of Adams
methods is of the form
yn+1 yn .
AdamsBashforth methods are explicit and AdamsMoulton methods are implicit. In the following formulae, Adams methods are obtained by taking a = 0
and b = 0. The integer k is the number of steps of the method. The integer p is
the order of the method and the constant Cp+1 is the constant of the top-order
error term.
Explicit Methods
k=1:
1 = 1,
0 = 1, 0 = 1,
p = 1;
Cp+1 = 21 .
k=2:
2 = 1,
1 = 1 a, 1 = 21 (3 a),
0 = a,
0 = 21 (1 + a),
p = 2;
Cp+1 =
1
12 (5
+ a).
2 = 1 a, 2 =
1 = a + b,
0 = b,
p = 3;
3 = 1 a, 3 =
2 = a + b,
1 = b c,
0 = c,
p = 4;
121
k=1:
1 = 12 ,
1 = 1,
0 = 1, 0 = 21 ,
1
Cp+1 = 12
.
p = 2;
k=2:
2 = 1,
1
12 (5 + a),
1 = 32 (1 a),
1
0 = 12
(1 5a),
1
(1 + a),
Cp+1 = 24
1
Cp+1 = 90 .
2 =
1 = 1 a,
0 = a,
If a 6= 1, p = 3;
If a = 1, p = 4;
k=3:
1
24 (9 + a + b),
1
2 = 24
(19 13a 5b),
1
1 = 24 (5 13a + 19b),
1
(1 + a + 9b),
0 = 24
1
(19 + 11a + 19b).
Cp+1 = 720
3 = 1,
3 =
2 = 1 a,
1 = a + b,
0 = b,
p = 4;
4 =
3 = 1 a, 3 =
2 = a + b,
2 =
1 = b c,
1 =
0 = c,
0 =
1
720 (251 + 19a + 11b + 19c),
1
360 (323 173a 37b 53c),
1
30 (11 19a + 19b + 11c),
1
360 (53 + 37a + 173b 323c),
1
720 (19 11a 19b 251c).
Cp+1 =
1
(27 + 11a + 11b + 27c).
1440
Cp+1 =
1
(74 + 10a 10b 74c).
15 120
122
(5.36)
where Nagumos matrix index notation has been used. We assume that the n
eigenvalues, 1 , . . . , n , of the matrix J have negative real parts, Re j < 0, and
are ordered as follows:
Re n Re 2 Re 1 < 0.
(5.37)
Definition 5.6. The stiffness ratio of the system y = f (x, y) is the positive
number
Re n
,
(5.38)
r=
Re 1
where the eigenvalues of the Jacobian matrix (5.36) of the system satisfy the
relations (5.37).
The phenomenon of stiffness appears under various aspects (see [2], p. 217
221):
A linear constant coefficient system is stiff if all of its eigenvalues have
negative real parts and the stiffness ratio is large.
Stiffness occurs when stability requirements, rather than those of accuracy, constrain the step length.
Stiffness occurs when some components of the solution decay much more
rapidly than others.
A system is said to be stiff in a given interval I containing t if in I the
neighboring solution curves approach the solution curve at a rate which
is very large in comparison with the rate at which the solution varies in
that interval.
A statement that we take as a definition of stiffness is one which merely relates
what is observed happening in practice.
Definition 5.7. If a numerical method with a region of absolute stability,
applied to a system of differential equation with any initial conditions, is forced
to use in a certain interval I of integration a step size which is excessively small
123
in relation to the smoothness of the exact solution in I, then the system is said
to be stiff in I.
Explicit RungeKutta methods and predictor-corrector methods, which, in
fact, are explicit pairs, cannot handle stiff systems in an economical way, if they
can handle them at all. Implicit methods require the solution of nonlinear equations which are almost always solved by some form of Newtons method. Two
such implicit methods are in the following two sections.
5.9.2. Backward differentiation formulae. We define a k-step backward differentiation formula (BDF) in standard form by
k
X
j yn+jk+1 = hk fn+1 ,
j=0
where k = 1. BDFs are implicit methods. Table 5.7 lists the BDFs of stepnumber 1 to 6, respectively. In the table, k is the stepnumber, p is the order,
Cp+1 is the error constant, and is half the angle subtended at the origin by the
region of absolute stability R.
Table 5.7. Coefficients of the BDF methods.
k
1
2
5
6
1
1
360
147
300
137
450
147
48
25
300
137
400
147
18
11
36
25
200
137
225
147
Cp+1
90
2
3
6
11
12
25
60
137
60
147
29
90
34
9
11
16
25
75
137
72
147
1
3
2
= 11
3
25
12
137
10
147
3
22
86
12
4 125
73
20
6 343
18
110
5 137
51
The left part of Fig. 5.10 shows the upper half of the region of absolute
stability of the 1-step BDF, which is the exterior of the unit disk with center 1,
and the regions of absolute stability of the 2- and 3-step BDFs which are the
exterior of closed regions in the right-hand plane. The angle subtended at the
origin is = 90 in the first two cases and = 88 in the third case. The right
part of Fig. 5.10 shows the regions of absolute stability of the 4-, 5-, and 6-steps
BDFs which include the negative real axis and make angles subtended at the
origin of 73 , 51 , and 18 , respectively.
A short proof of the instability of the BDF formulae for k 7 is found in [4].
BDF methods are used to solve stiff systems.
5.9.3. Numerical differentiation formulae. Numerical differentiation formulae (NDF) are a modification of BDFs. Letting
yn = yn yn1
124
k=4
6i
6i
k=5
k=3
3i
k=2
k=1
k=6
-6
-3
yn+1 =
k
X
1 m
yn .
m
m=0
k
X
1 m
[0]
yn+1 = hfn+1 + k yn+1 yn+1 ,
m
m=1
P
where is a scalar parameter and k = kj=1 1/j. The NDF of order 1 to 5 are
given in Table 5.8.
37/200
2
3
4
5
1/9
2
1
0.0823
0.0415
1
1
1
300
137
48
25
300
137
18
11
36
25
200
137
Cp+1
90
2
3
6
11
12
25
60
137
92
90
12
125
66
43
9
11
16
25
75
137
1
3
2
11
3
25
12
137
3
4
5
3
22
110
137
80
51
or
y1 (0)
y2 (0)
1
1
125
(5.39)
y(0) = y 0 .
1 = 1,
the stiffness ratio (5.38) of the system is
r = 10q .
The solution is
ex
q
.
e10 x
Even though the second part of the solution containing the fast decaying factor
exp(10q t) for large q numerically disappears quickly, the large stiffness ratio
continues to restrict the step size of any explicit schemes, including predictorcorrector schemes.
y1 (x)
y2 (x)
Example 5.13. Study the effect of the stiffness ratio on the number of steps
used by the five Matlab ode codes in solving problem (5.39) with q = 1 and
q = 5.
Solution. The function M-file exp5_13.m is
function uprime = exp5_13(x,u); % Example 5.13
global q % global variable
A=[-1 0;0 -10^q]; % matrix A
uprime = A*u;
The following commands solve the non-stiff initial value problem with q = 1,
and hence r = e10 , with relative and absolute tolerances equal to 1012 and
1014 , respectively. The option stats on requires that the code keeps track of
the number of function evaluations.
clear;
global q; q=1;
tspan = [0 1]; y0 = [1 1];
options = odeset(RelTol,1e-12,AbsTol,1e-14,Stats,on);
[x23,y23] = ode23(exp5_13,tspan,y0,options);
[x45,y45] = ode45(exp5_13,tspan,y0,options);
[x113,y113] = ode113(exp5_13,tspan,y0,options);
[x23s,y23s] = ode23s(exp5_13,tspan,y0,options);
[x15s,y15s] = ode15s(exp5_13,tspan,y0,options);
Similarly, when q = 5, and hence r = exp(105 ), the program solves a pseudostiff initial value problem (5.39). Table 5.9 lists the number of steps used with
q = 1 and q = 5 by each of the five methods of the ODE suite.
It is seen from the table that nonstiff solvers are hopelessly slow and very
expensive in solving pseudo-stiff equations.
126
(103 , 106 )
1
5
29
39 823
13
30 143
28
62 371
37
57
43
89
(1012 , 1014 )
1
5
24 450 65 944
601 30 856
132 64 317
30 500 36 925
773 1 128
We consider another example of a second-order equation, with one real parameter q, which we first solve analytically. We shall obtain a coupled system in
this case.
Example 5.14. Solve the initial value problem
y + (10q + 1)y + 10q = 0
on [0, 1],
y(0) = 2,
and real parameter q.
Solution. Substituting
y(x) = ex
in the differential equation, we obtain the characteristic polynomial and eigenvalues:
2 + (10q + 1) + 10q = ( + 10q )( + 1) = 0 = 1 = 10q ,
y1 = e10 x ,
The general solution is
2 = 1.
y2 (x) = ex .
q
y(x) = c1 e10 x + c2 ex .
Using the initial conditions, one finds that c1 = 1 and c2 = 1. Thus the unique
solution is
q
y(x) = e10 x + ex .
In view of solving the problem in Example 5.14 with numeric Matlab, we
reformulate it into a system of two first-order equations.
Example 5.15. Reformulate the initial value problem
y + (10q + 1)y + 10q y = 0
on [0, 1],
y (0) = 10q 1,
and real parameter q, into a system of two first-order equations and find its vector
solution.
127
Solution. Set
u2 = y .
u1 = y,
Hence,
u2 = u1 ,
with
u1 (0)
u2 (0)
2
10q 1
u(x) = c ex
in the differential system, we obtain the matrix eigenvalue problem
1
(A I)c =
c = 0,
10q (10q + 1)
1 = 10q ,
2 = 1.
and
10q
10q
1
10q
The general solution is
1
1
1
10q
v 1 = 0 = v 1 =
v 2 = 0 = v 2 =
1
10q
1
1
u(x) = c1 e10 x v 1 + c2 ex v 2 .
The initial conditions implies that c1 = 1 and c2 = 1. Thus the unique solution is
u1 (x)
1
1
10q x
=
e
+
ex .
u2 (x)
10q
1
We see that the stiffness ratio of the equation in Example 5.15 is
10q .
Example 5.16. Use the five Matlab ode solvers to solve the nonstiff differential equations
y + (10q + 1)y + 10q = 0
on [0, 1],
y (0) = 10q 1,
128
1x1
2x1
26x2
32x2
20x2
25x2
49x2
26x1
32x1
20x1
25x1
49x1
1x2
8
16
416
512
320
400
784
208
256
160
200
392
16
double
double
double
double
double
double
double
double
double
double
double
double
double
array (global)
array
array
array
array
array
array
array
array
array
array
array
array
y (0) = 10q 1,
[x23,u23] =
[x45,u45] =
[x113,u113]
[x23s,u23s]
[x15s,u15s]
whos
Name
q
u0
u113
u15s
u23
u23s
u45
x113
x15s
x23
x23s
x45
xspan
129
ode23(exp5_16,xspan,u0);
ode45(exp5_16,xspan,u0);
= ode113(exp5_16,xspan,u0);
= ode23s(exp5_16,xspan,u0);
= ode15s(exp5_16,xspan,u0);
Size
1x1
2x1
62258x2
107x2
39834x2
75x2
120593x2
62258x1
107x1
39834x1
75x1
120593x1
1x2
Bytes
8
16
996128
1712
637344
1200
1929488
498064
856
318672
600
964744
16
Class
double
double
double
double
double
double
double
double
double
double
double
double
double
array (global)
array
array
array
array
array
array
array
array
array
array
array
array
CHAPTER 6
All these methods have a built-in local error estimate to control the step size.
Moreover ode113 and ode15s are variable-order packages which use higher order
methods and smaller step size when the solution varies rapidly.
The command odeset lets one create or alter the ode option structure.
The ODE suite is presented in a paper by Shampine and Reichelt [5] and
the Matlab help command supplies precise information on all aspects of their
use. The codes themselves are found in the toolbox/matlab/funfun folder of
Matlab 6. For Matlab 4.2 or later, it can be downloaded for free by ftp on
ftp.mathworks.com in the
pub/mathworks/toolbox/matlab/funfun directory.
131
132
hf (xn , yn ),
hf (xn + (1/2)h, yn + (1/2)k1 ),
k3 =
k4 =
The first three stages produce the solution at the next time step:
yn+1 = yn + (2/9)k1 + (1/3)k2 + (4/9)k3 ,
and all four stages give the local error estimate:
E=
1
1
1
5
k1 +
k2 + k2 k4 .
72
12
9
8
However, this is really a three-stage method since the first step at xn+1 is the
[n+1]
[n]
same as the last step at xn , that is k1
= k4 (that is, a FSAL method).
The natural interpolant used in ode23 is the two-point Hermite polynomial of degree 3 which interpolates yn and f (xn , yn ) at x = xn , and yn+1 and
f (xn+1 , xn+1 ) at t = xn+1 .
6.2.2. The ode45 method. The code ode45 is the Dormand-Prince pair
DP(5,4)7M with a high-quality free interpolant of order 4 that was communicated to Shampine and Reichelt [5] by Dormand and Prince. Since ode45 can use
long step size, the default is to use the interpolant to compute solution values at
four points equally spaced within the span of each natural step.
6.2.3. The ode113 method. The code ode113 is a variable step variable
order method which uses AdamsBashforthMoulton predictor-correctors of order
1 to 13. This is accomplished by monitoring the integration very closely. In the
Matlab graphics context, the monitoring is expensive. Although more than
graphical accuracy is necessary for adequate resolution of moderately unstable
problems, the high accuracy formulae available in ode113 are not nearly as helpful
in the present context as they are in general scientific computation.
133
6.2.4. The ode23s method. The code ode23s is a triple of modified implicit Rosenbrock methods of orders 3 and 2 with error control for stiff systems.
It advances from yn to yn+1 with the second-order method (that is, without local
extrapolation) and controls the local error by taking the difference between the
third- and second-order numerical solutions. Here is the algorithm:
f0
= hf (xn , yn ),
k1
= W 1 (f0 + hdT ),
f1
k2
= W 1 (f1 k1 ) + k1 ,
= yn + hk2 ,
yn+1
f2
= f (xn+1 , yn+1 ),
where
and
W = I hdJ,
d = 1/(2 +
2 ),
c32 = 6 +
2,
f
f
(xn , yn ),
T
(xn , yn ).
y
t
This method is FSAL (First Step As Last). The interpolant used in ode23s is
the quadratic polynomial in s:
s(1 s)
s(s 2d)
yn+s = yn + h
k1 +
k2 .
1 2d
1 2d
J
134
[
[
[
[
[
[
[
[
[
[
[
[
[
[
[
[
[
The following commands solve a problem with different methods and different
options.
[t,
[t,
[t,
[t,
[t,
The ode options are used in the demo problems in Sections 8 and 9 below. Others
ways of inserting the options in the ode M-file are explained in [7].
The command ODESET creates or alters ODE OPTIONS structure as follows
OPTIONS = ODESET(NAME1, VALUE1, NAME2, VALUE2, . . . )
creates an integrator options structure OPTIONS in which the named
properties have the specified values. Any unspecified properties have
default values. It is sufficient to type only the leading characters that
uniquely identify the property. Case is ignored for property names.
OPTIONS = ODESET(OLDOPTS, NAME1, VALUE1, . . . ) alters an
existing options structure OLDOPTS.
OPTIONS = ODESET(OLDOPTS, NEWOPTS) combines an existing
options structure OLDOPTS with a new options structure NEWOPTS.
Any new properties overwrite corresponding old properties.
ODESET with no input arguments displays all property names and their
possible values.
Here is the list of the odeset properties.
RelTol : Relative error tolerance [ positive scalar 1e-3 ] This scalar
applies to all components of the solution vector and defaults to 1e-3
135
136
137
6.5.1. The a2ode and a3ode problems. A2ODE and A3ODE are stiff
linear problems with real eigenvalues (problem A2 of [11]). These nine- and fourequation systems from circuit theory have a constant tridiagonal Jacobian and
also a constant partial derivative with respect to t because they are autonomous.
Remark 6.1. When the ODE solver JConstant property is set to off, these
examples test the effectiveness of schemes for recognizing when Jacobians need
to be refreshed. Because the Jacobians are constant, the ODE solver property
JConstant can be set to on to prevent the solvers from unnecessarily recomputing
the Jacobian, making the integration more reliable and faster.
6.5.2. The b5ode problem. B5ODE is a stiff problem, linear with complex eigenvalues (problem B5 of [11]). See Ex. 5, p. 298 of Shampine [10] for a
discussion of the stability of the BDFs applied to this problem and the role of
the maximum order permitted (the MaxOrder property accepted by ODE15S).
ODE15S solves this problem efficiently if the maximum order of the NDFs is
restricted to 2. Remark 6.1 applies to this example.
This six-equation system has a constant Jacobian and also a constant partial
derivative with respect to t because it is autonomous.
6.5.3. The buiode problem. BUIODE is a stiff problem with analytical
solution due to Bui. The parameter values here correspond to the stiffest case of
[12]; the solution is
y(1) = e4t ,
y(2) = et .
6.5.4. The brussode problem. BRUSSODE is a stiff problem modelling
a chemical reaction (the Brusselator) [1]. The command BRUSSODE(T, Y) or
BRUSSODE(T, Y, [ ], N) returns the derivatives vector for the Brusselator problem. The parameter N >= 2 is used to specify the number of grid points; the
resulting system consists of 2N equations. By default, N is 2. The problem becomes increasingly stiff and increasingly sparse as N is increased. The Jacobian
for this problem is a sparse matrix (banded with bandwidth 5).
BRUSSODE([ ], [ ], jpattern) or BRUSSODE([ ], [ ], jpattern, N)
returns a sparse matrix of 1s and 0s showing the locations of nonzeros in the Jacobian F/Y . By default, the stiff solvers of the ODE Suite generate Jacobians
numerically as full matrices. However, if the ODE solver property JPattern is
set to on with ODESET, a solver calls the ODE file with the flag jpattern. The
ODE file returns a sparsity pattern that the solver uses to generate the Jacobian
numerically as a sparse matrix. Providing a sparsity pattern can significantly
reduce the number of function evaluations required to generate the Jacobian and
can accelerate integration. For the BRUSSODE problem, only 4 evaluations of
the function are needed to compute the 2N 2N Jacobian matrix.
6.5.5. The chm6ode problem. CHM6ODE is the stiff problem CHM6 from
Enright and Hull [13]. This four-equation system models catalytic fluidized bed
dynamics. A small absolute error tolerance is necessary because y(:,2) ranges from
7e-10 down to 1e-12. A suitable AbsTol is 1e-13 for all solution components. With
this choice, the solution curves computed with ode15s are plausible. Because the
step sizes span 15 orders of magnitude, a loglog plot is appropriate.
6.5.6. The chm7ode problem. CHM7ODE is the stiff problem CHM7 from
[13]. This two-equation system models thermal decomposition in ozone.
138
6.5.7. The chm9ode problem. CHM9ODE is the stiff problem CHM9 from
[13]. It is a scaled version of the famous Belousov oscillating chemical system.
There is a discussion of this problem and plots of the solution starting on p. 49
of Aiken [14]. Aiken provides a plot for the interval [0, 5], an interval of rapid
change in the solution. The default time interval specified here includes two full
periods and part of the next to show three periods of rapid change.
6.5.8. The d1ode problem. D1ODE is a stiff problem, nonlinear with real
eigenvalues (problem D1 of [11]). This is a two-equation model from nuclear
reactor theory. In [11] the problem is converted to autonomous form, but here
it is solved in its original non-autonomous form. On page 151 in [15], van der
Houwen provides the reference solution values
t = 400,
y(1) = 22.24222011,
y(2) = 27.11071335
6.5.9. The fem1ode problem. FEM1ODE is a stiff problem with a timedependent mass matrix,
M (t)y = f (t, y).
Remark 6.2. FEM1ODE(T, Y) or FEM1ODE(T, Y, [ ], N) returns the
derivatives vector for a finite element discretization of a partial differential equation. The parameter N controls the discretization, and the resulting system
consists of N equations. By default, N is 9.
FEM1ODE(T, [ ], mass) or FEM1ODE(T, [ ], mass, N) returns the timedependent mass matrix M evaluated at time T. By default, ODE15S solves systems of the form
y = f (t, y).
However, if the ODE solver property Mass is set to on with ODESET, the solver
calls the ODE file with the flag mass. The ODE file returns a mass matrix that
the solver uses to solve
M (t)y = f (t, y).
If the mass matrix is a constant M, then the problem can be also be solved with
ODE23S.
FEM1ODE also responds to the flag init (see RIGIDODE).
For example, to solve a 20 20 system, use
[t, y] = ode15s(fem1ode, [ ], [ ], [ ], 20);
6.5.10. The fem2ode problem. FEM2ODE is a stiff problem with a timeindependent mass matrix,
M y = f (t, y).
Remark 6.2 applies to this example, which can also be solved by ode23s with
the command
[t, y] = ode23s(fem2ode, [ ], [ ], [ ], 20).
6.5.11. The gearode problem. GEARODE is a simple stiff problem due to
Gear as quoted by van der Houwen [15] who, on page 148, provides the reference
solutionvalues
t = 50,
y(1) = 0.5976546988,
y(2) = 1.40234334075
139
6.5.12. The hb1ode problem. HB1ODE is the stiff problem 1 of Hindmarsh and Byrne [16]. This is the original Robertson chemical reaction problem
on a very long interval. Because the components tend to a constant limit, it
tests reuse of Jacobians. The equations themselves can be unstable for negative
solution components, which is admitted by the error control. Many codes can,
therefore, go unstable on a long time interval because a solution component goes
to zero and a negative approximation is entirely possible. The default interval is
the longest for which the Hindmarsh and Byrne code EPISODE is stable. The
system satisfies a conservation law which can be monitored:
y(1) + y(2) + y(3) = 1.
6.5.13. The hb2ode problem. HB2ODE is the stiff problem 2 of [16]. This
is a non-autonomous diurnal kinetics problem that strains the step size selection
scheme. It is an example for which quite small values of the absolute error tolerance are appropriate. It is also reasonable to impose a maximum step size so
as to recognize the scale of the problem. Suitable values are an AbsTol of 1e-20
and a MaxStep of 3600 (one hour). The time interval is 1/3; this interval is used
by Kahaner, Moler, and Nash, p. 312 in [17], who display the solution on p. 313.
That graph is a semilog plot using solution values only as small as 1e-3. A small
threshold of 1e-20 specified by the absolute error control tests whether the solver
will keep the size of the solution this small during the night time. Hindmarsh and
Byrne observe that their variable order code resorts to high orders during the day
(as high as 5), so it is not surprising that relatively low order codes like ODE23S
might be comparatively inefficient.
6.5.14. The hb3ode problem. HB3ODE is the stiff problem 3 of Hindmarsh and Byrne [16]. This is the Hindmarsh and Byrne mockup of the diurnal
variation problem. It is not nearly as realistic as HB2ODE and is quite special in
that the Jacobian is constant, but it is interesting because the solution exhibits
quasi-discontinuities. It is posed here in its original non-autonomous form. As
with HB2ODE, it is reasonable to impose a maximum step size so as to recognize the scale of the problem. A suitable value is a MaxStep of 3600 (one hour).
Because y(:,1) ranges from about 1e-27 to about 1.1e-26, a suitable AbsTol is
1e-29.
Because of the constant Jacobian, the ODE solver property JConstant prevents the solvers from recomputing the Jacobian, making the integration more
reliable and faster.
6.5.15. The vdpode problem. VDPODE is a parameterizable van der Pol
equation (stiff for large mu) [18]. VDPODE(T, Y) or VDPODE(T, Y, [ ], MU)
returns the derivatives vector for the van der Pol equation. By default, MU is
1, and the problem is not stiff. Optionally, pass in the MU parameter as an
additional parameter to an ODE Suite solver. The problem becomes more stiff
as MU is increased.
When MU is 1000 the equation is in relaxation oscillation, and the problem
becomes very stiff. The limit cycle has portions where the solution components
change slowly and the problem is quite stiff, alternating with regions of very sharp
change where it is not stiff (quasi-discontinuities). The initial conditions are close
to an area of slow change so as to test schemes for the selection of the initial step
size.
140
Bibliography
[1] E. Hairer and G. Wanner, Solving ordinary differential equations II, stiff and differentialalgebraic problems, Springer-Verlag, Berlin, 1991, pp. 58.
[2] J. D. Lambert, Numerical methods for ordinary differential equations. The initial value
problem, Wiley, Chichester, 1991.
[3] J. R. Dormand and P. J. Prince, A family of embedded RungeKutta formulae, J. Computational and Applied Mathematics, 6(2) (1980), 1926.
[4] E. Hairer and G. Wanner, On the instability of the BDF formulae, SIAM J. Numer. Anal.,
20(6) (1983), 12061209.
[5] L. F. Shampine and M. W. Reichelt, The Matlab ODE suite, SIAM J. Sci. Comput.,
18(1), (1997) 122.
[6] R. Ashino and R. Vaillancourt, Hayawakari Matlab (Introduction to Matlab), Kyoritsu
Shuppan, Tokyo, 1997, xvi211 pp., 6th printing, 1999 (in Japanese). (Korean translation,
1998.)
[7] Using MATLAB, Version, 5.1, The MathWorks, Chapter 8, Natick, MA, 1997.
[8] L. F. Shampine and M. K. Gordon, Computer solution of ordinary differential equations,
W.H. Freeman & Co., San Francisco, 1975.
[9] T. E. Hull, W. H. Enright, B. M. Fellen, and A. E. Sedgwick, Comparing numerical
methods for ordinary differential equations, SIAM J. Numer. Anal., 9(4) (1972) 603637.
[10] L. F. Shampine, Numerical solution of ordinary differential equations, Chapman & Hall,
New York, 1994.
[11] W. H. Enright, T. E. Hull, and B. Lindberg, Comparing numerical methods for stiff
systems of ODEs, BIT 15(1) (1975), 1048.
[12] L. F. Shampine, Measuring stiffness, Appl. Numer. Math., 1(2) (1985), 107119.
[13] W. H. Enright and T. E. Hull, Comparing numerical methods for the solution of stiff
systems of ODEs arising in chemistry, in Numerical Methods for Differential Systems, L.
Lapidus and W. E. Schiesser eds., Academic Press, Orlando, FL, 1976, pp. 4567.
[14] R. C. Aiken, ed., Stiff computation, Oxford Univ. Press, Oxford, 1985.
[15] P. J. van der Houwen, Construction of integration formulas for initial value problems,
North-Holland Publishing Co., Amsterdam, 1977.
[16] A. C. Hindmarsh and G. D. Byrne, Applications of EPISODE: An experimental package
for the integration of ordinary differential equations, in Numerical Methods for Differential Systems, L. Lapidus and W. E. Schiesser eds., Academic Press, Orlando, FL, 1976,
pp. 147166.
[17] D. Kahaner, C. Moler, and S. Nash, Numerical methods and software, Prentice-Hall,
Englewood Cliffs, NJ, 1989.
[18] L. F. Shampine, Evaluation of a test set for stiff ODE solvers, ACM Trans. Math. Soft.,
7(4) (1981) 409420.
141
CHAPTER 7
Orthogonal polynomials
Orthoggonal polynomials are solutions of SturmLiouville problems given by
second-order differential equations with boundary conditions. These polynomials
have desirable properties in the applications.
7.1. FourierLegendre Series
Properties of the Legendre polynomials are listed in Section 8.1 We present
simple examples of expansions in FourierLegendre series.
Example 7.1. Expand the polynomial
    p(x) = x³ − 2x² + 4x + 1
over [−1, 1] in terms of the Legendre polynomials P0(x), P1(x), . . .

Solution. We express the powers of x in terms of the basis of Legendre polynomials:
    P0(x) = 1        ⟹  1  = P0(x),
    P1(x) = x        ⟹  x  = P1(x),
    P2(x) = (1/2)(3x² − 1)  ⟹  x² = (2/3) P2(x) + (1/3) P0(x),
    P3(x) = (1/2)(5x³ − 3x) ⟹  x³ = (2/5) P3(x) + (3/5) P1(x).
This way, one avoids computing integrals. Thus
    p(x) = (2/5) P3(x) + (3/5) P1(x) − 2 [(2/3) P2(x) + (1/3) P0(x)] + 4 P1(x) + P0(x)
         = (2/5) P3(x) − (4/3) P2(x) + (23/5) P1(x) + (1/3) P0(x).
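This expansion can be verified in Matlab by comparing coefficient vectors; the following small check is ours, not part of the text's solution. Polynomials are represented in descending powers of x.

P0 = 1; P1 = [1 0]; P2 = [3/2 0 -1/2]; P3 = [5/2 0 -3/2 0];
p  = [1 -2 4 1];                   % x^3 - 2x^2 + 4x + 1
q  = (2/5)*P3 - (4/3)*[0 P2] + (23/5)*[0 0 P1] + (1/3)*[0 0 0 P0];
max(abs(p - q))                    % returns 0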
Example 7.2. Expand the polynomial
    p(x) = 2 + 3x + 5x²
over [3, 7] in terms of the Legendre polynomials P0(x), P1(x), . . .

Solution. To map the segment x ∈ [3, 7] onto the segment s ∈ [−1, 1] (see Fig. 7.1) we consider the affine transformation
    s ↦ x = αs + β,
such that
    −1 ↦ 3 = −α + β,   1 ↦ 7 = α + β.   (7.1)
Solving (7.1) for α and β, we have α = 2 and β = 5; thus x = 2s + 5 and s = (x − 5)/2. Then
    p(x) = 2 + 3(2s + 5) + 5(2s + 5)²
         = 142 + 106 s + 20 s²
         = 142 P0(s) + 106 P1(s) + 20 [(2/3) P2(s) + (1/3) P0(s)];
consequently, we have
    p(x) = (142 + 20/3) P0((x − 5)/2) + 106 P1((x − 5)/2) + (40/3) P2((x − 5)/2).
Example 7.3. Compute the first three terms of the Fourier–Legendre expansion of the function
    f(x) = 0 for −1 < x < 0,   f(x) = x for 0 < x < 1.

Solution. Putting
    f(x) = Σ_{m=0}^∞ a_m P_m(x),   −1 < x < 1,
we have
    a_m = ((2m + 1)/2) ∫_{−1}^{1} f(x) P_m(x) dx.
Hence
    a0 = (1/2) ∫_{−1}^{1} f(x) P0(x) dx = (1/2) ∫_{0}^{1} x dx = 1/4,
    a1 = (3/2) ∫_{−1}^{1} f(x) P1(x) dx = (3/2) ∫_{0}^{1} x² dx = 1/2,
    a2 = (5/2) ∫_{−1}^{1} f(x) P2(x) dx = (5/2) ∫_{0}^{1} (x/2)(3x² − 1) dx = 5/16.
Thus we have the approximation
    f(x) ≈ (1/4) P0(x) + (1/2) P1(x) + (5/16) P2(x).
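The coefficients can be checked numerically in Matlab; the sketch below, which is ours, uses quad (integral in recent releases) on the nonzero part of each integrand.

a0 = (1/2)*quad(@(x) x, 0, 1)                 % = 1/4
a1 = (3/2)*quad(@(x) x.^2, 0, 1)              % = 1/2
a2 = (5/2)*quad(@(x) x.*(3*x.^2-1)/2, 0, 1)   % = 5/16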
Example 7.4. Compute the first three terms of the Fourier–Legendre expansion of the function
    f(x) = e^x,   0 ≤ x ≤ 1.

Solution. To use the orthogonality of the Legendre polynomials, we transform the domain of f(x) from [0, 1] to [−1, 1] by the substitution
    s = 2(x − 1/2),   that is,   x = s/2 + 1/2.
Then
    f(x) = e^x = e^{(1+s)/2} = Σ_{m=0}^∞ a_m P_m(s),   −1 ≤ s ≤ 1,
where
    a_m = ((2m + 1)/2) ∫_{−1}^{1} e^{(1+s)/2} P_m(s) ds.
We first compute the following three integrals by recurrence:
    I0 = ∫_{−1}^{1} e^{s/2} ds = 2 (e^{1/2} − e^{−1/2}),
    I1 = ∫_{−1}^{1} s e^{s/2} ds = [2s e^{s/2}]_{−1}^{1} − 2 ∫_{−1}^{1} e^{s/2} ds
       = 2 (e^{1/2} + e^{−1/2}) − 2 I0 = −2 e^{1/2} + 6 e^{−1/2},
    I2 = ∫_{−1}^{1} s² e^{s/2} ds = [2s² e^{s/2}]_{−1}^{1} − 4 ∫_{−1}^{1} s e^{s/2} ds
       = 2 (e^{1/2} − e^{−1/2}) − 4 I1 = 10 e^{1/2} − 26 e^{−1/2}.
Thus
    a0 = (1/2) e^{1/2} I0 = e − 1 ≈ 1.7183,
    a1 = (3/2) e^{1/2} I1 = −3e + 9 ≈ 0.8452,
    a2 = (5/4) e^{1/2} (3 I2 − I0) = 35e − 95 ≈ 0.1399.
We finally have the approximation
    f(x) ≈ 1.7183 P0(2x − 1) + 0.8452 P1(2x − 1) + 0.1399 P2(2x − 1).
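The closed-form coefficients are easily evaluated in Matlab as a check:

a0 = exp(1) - 1      % 1.7183
a1 = -3*exp(1) + 9   % 0.8452
a2 = 35*exp(1) - 95  % 0.1399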
7.2. Derivation of Gaussian Quadratures

The two-point Gaussian quadrature formula,
    ∫_{−1}^{1} f(x) dx ≈ a f(x1) + b f(x2),
is obtained by requiring that it be exact for polynomials of degree three or less, that is, for the basis P0(x), . . . , P3(x):
    2 = ∫_{−1}^{1} P0(x) dx = a P0(x1) + b P0(x2),   (7.2)
    0 = ∫_{−1}^{1} P1(x) dx = a P1(x1) + b P1(x2),   (7.3)
    0 = ∫_{−1}^{1} P2(x) dx = a P2(x1) + b P2(x2),   (7.4)
    0 = ∫_{−1}^{1} P3(x) dx = a P3(x1) + b P3(x2).   (7.5)
It follows that a = b = 1 and that the nodes are the zeros of
    P2(x) = (1/2)(3x² − 1) = 0   ⟹   x1 = −x2 = −1/√3 = −0.577 350 27.   (7.6)
Similarly, the three-point Gaussian quadrature formula,
    ∫_{−1}^{1} f(x) dx ≈ a f(x1) + b f(x2) + c f(x3),
will be exact for polynomials of degree five or less. By Example 7.1, it suffices to consider the basis P0(x), . . . , P5(x). Thus,
    2 = ∫_{−1}^{1} P0(x) dx = a P0(x1) + b P0(x2) + c P0(x3),   (7.7)
    0 = ∫_{−1}^{1} P1(x) dx = a P1(x1) + b P1(x2) + c P1(x3),   (7.8)
    0 = ∫_{−1}^{1} P2(x) dx = a P2(x1) + b P2(x2) + c P2(x3),   (7.9)
    0 = ∫_{−1}^{1} P3(x) dx = a P3(x1) + b P3(x2) + c P3(x3),   (7.10)
    0 = ∫_{−1}^{1} P4(x) dx = a P4(x1) + b P4(x2) + c P4(x3),   (7.11)
    0 = ∫_{−1}^{1} P5(x) dx = a P5(x1) + b P5(x2) + c P5(x3).   (7.12)
The nodes are taken to be the zeros of
    P3(x) = (1/2)(5x³ − 3x) = (1/2) x (5x² − 3),
that is,
    x1 = −√(3/5) = −0.774 596 7,   x2 = 0,   x3 = √(3/5) = 0.774 596 7.
Then (7.8) becomes
    −√(3/5) a + √(3/5) c = 0   ⟹   a = c.
We immediately see that (7.12) is satisfied since P5(x) is odd. Moreover, by substituting a = c in (7.9), we have
    a (1/2)(3 · (3/5) − 1) + b (−1/2) + a (1/2)(3 · (3/5) − 1) = 0,
that is,
    4a − 5b + 4a = 0,   or   8a − 5b = 0.   (7.13)
Equation (7.7) gives
    2a + b = 2,   or   10a + 5b = 10.   (7.14)
Adding the second expressions in (7.13) and (7.14), we have
    a = 10/18 = 5/9 = 0.555 . . . .
Thus
    b = 2 − 10/9 = 8/9 = 0.888 . . . .
Finally, we verify that (7.11) is satisfied. Since
    P4(x) = (1/8)(35x⁴ − 30x² + 3),
we have
    2 · (5/9) · (1/8) [35 (3/5)² − 30 (3/5) + 3] + (8/9) · (3/8)
      = (2 · 5)/(9 · 8) · (315 − 450 + 75)/25 + (8 · 3)/(9 · 8)
      = (2 · 5 · (−60))/(9 · 8 · 25) + (8 · 3)/(9 · 8)
      = (−24 + 24)/(9 · 8)
      = 0.
Therefore, the three-point Gaussian quadrature formula is
    ∫_{−1}^{1} f(x) dx ≈ (5/9) f(−√(3/5)) + (8/9) f(0) + (5/9) f(√(3/5)).   (7.15)
Remark 7.1. The interval of integration in the Gaussian quadrature formulae is normalized to [−1, 1]. To integrate over the interval [a, b] we use the change of independent variable (see Example 7.2)
    s ↦ x = αs + β,   such that   −1 ↦ a = −α + β,   1 ↦ b = α + β,
leading to
    x = ((b − a) t + b + a)/2,   dx = ((b − a)/2) dt.
Then
    ∫_a^b f(x) dx = ((b − a)/2) ∫_{−1}^{1} f( ((b − a) t + b + a)/2 ) dt.
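As a Matlab illustration, formula (7.15) and this change of variable combine into a short three-point rule. The following function is a minimal sketch under our own naming (gauss3 is not a built-in function); f must accept a vector argument.

function v = gauss3(f, a, b)
% GAUSS3  Three-point Gaussian quadrature (7.15) on [a,b],
% using the change of variable of Remark 7.1.
t = [-sqrt(3/5); 0; sqrt(3/5)];   % nodes in [-1,1]
w = [5/9; 8/9; 5/9];              % weights
x = ((b - a)*t + b + a)/2;        % mapped nodes in [a,b]
v = (b - a)/2 * (w' * f(x));      % scaled weighted sum
end

For instance, gauss3(@sin, 0, pi/2) approximates the integral of Example 7.7 below; the rule is exact for polynomials of degree five or less.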
Example 7.7. Evaluate
    I = ∫_0^{π/2} sin x dx
by applying the two-point Gaussian quadrature formula once over the interval [0, π/2] and over the half-intervals [0, π/4] and [π/4, π/2].

Solution. Let
    x = ((π/2) t + π/2)/2 = (π/4)(t + 1),   dx = (π/4) dt.
At t = −1, x = 0 and, at t = 1, x = π/2. Hence
    I = (π/4) ∫_{−1}^{1} sin((π/4)(t + 1)) dt
      ≈ (π/4) [sin((π/4)(−1/√3 + 1)) + sin((π/4)(1/√3 + 1))] = 0.9985.
Over the half-intervals,
    I ≈ (π/8) [sin((π/8)(−1/√3 + 1)) + sin((π/8)(1/√3 + 1))
             + sin((π/8)(−1/√3 + 3)) + sin((π/8)(1/√3 + 3))]
      = 0.999 910 166 769 89.
The error is 8.983 × 10⁻⁵. The Matlab solution is as follows. For generality, it is convenient to set up a function M-file exp7_7.m,
function f=exp7_7(t)
% evaluate the function f(t)
f=sin(t);
The two-point Gaussian quadrature is programmed as follows.
>> clear
>> a = 0; b = pi/2; c = (b-a)/2; d= (a+b)/2;
>> weight = [1 1]; node = [-1/sqrt(3) 1/sqrt(3)];
>> syms x t
>> x = c*node+d;
>> nv1 = c*weight*exp7_7(x)' % numerical value of integral
nv1 = 0.9985
>> error1 = 1 - nv1 % error in solution
error1 = 0.0015
The other part is done in a similar way.
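For instance, the composite two-point rule over the half-intervals can be evaluated by continuing the same session (a sketch; it reuses node, weight and exp7_7 from above):

>> c = pi/8; d1 = pi/8; d2 = 3*pi/8;   % maps for [0,pi/4] and [pi/4,pi/2]
>> x1 = c*node+d1; x2 = c*node+d2;
>> nv2 = c*(weight*exp7_7(x1)' + weight*exp7_7(x2)')
nv2 = 0.9999
>> error2 = 1 - nv2
error2 = 8.9833e-05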
Remark 7.2. The Gaussian quadrature formulae are the most accurate integration formulae for a given number of nodes. The error in the n-point formula is
    E_n(f) = (2^{2n+1} (n!)⁴) / ((2n + 1) [(2n)!]³) f^{(2n)}(ξ),   −1 < ξ < 1.
The formula is therefore exact for polynomials of degree 2n − 1 or less.
Matlab's adaptive Simpson's rule quad and adaptive Newton–Cotes 8-panel rule quad8 evaluate the integral of Example 7.7 as follows:
>> v1 = quad('sin',0,pi/2)
v1 = 1.00000829552397
>> v2 = quad8('sin',0,pi/2)
v2 = 1.00000000000000
respectively, within a relative error of 10⁻³.
Uniformly spaced composite rules that are exact for polynomials of degree d are efficient if the (d + 1)st derivative f^{(d+1)} is uniformly behaved across the interval of integration [a, b]. However, if the magnitude of this derivative varies widely across this interval, the error control process may result in an unnecessarily large number of function evaluations. This is because the number n of nodes is determined by an interval-wide derivative bound M_{d+1}. In regions where f^{(d+1)} is small compared to this value, the subintervals are (possibly) much shorter than necessary. Adaptive quadrature methods address this problem by discovering where the integrand is ill behaved and shortening the subintervals accordingly. See Section 3.9 for an example.
7.3. Numerical Solution of Integral Equations of the Second Kind
The theory and application of integral equations is an important subject in
applied mathematics, science and engineering. In this section we restrict attention
to Fredholm integral equations of the second kind in one variable. The general form of such an equation is
    f(t) = λ ∫_a^b K(t, s) f(s) ds + g(t),   λ ≠ 0.   (7.16)
We shall assume that the kernel K(t, s) is continuous on the square [a, b] × [a, b] ⊂ R².
A significant use of Gaussian quadrature formulae is in the numerical solution of Fredholm integral equations of the second kind by the Nyström method. We explain this method.
Let a numerical integration scheme be given:
    ∫_a^b y(s) ds ≈ Σ_{j=1}^N w_j y(s_j),   (7.17)
where the N numbers {w_j} are the weights of the quadrature rule and the N points {s_j} are the nodes used by the method. One may use the trapezoidal or Simpson's rules, but for smooth nonsingular problems Gaussian quadrature seems by far superior.
If we apply the numerical integration scheme to the integral equation (7.16), we get
    f(t) = λ Σ_{j=1}^N w_j K(t, s_j) f(s_j) + g(t),   (7.18)
where, for simplicity, we have written f(t) for f_N(t). We evaluate this equation at the quadrature points:
    f(t_i) = λ Σ_{j=1}^N w_j K(t_i, s_j) f(s_j) + g(t_i),   i = 1, . . . , N.   (7.19)
Let f_i be the vector f(t_i), g_i the vector g(t_i), K_ij the matrix K(t_i, s_j), and define
    K̃_ij = K_ij w_j.
Then (7.19) takes the matrix form
    (I − λ K̃) f = g.   (7.20)
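In Matlab, the Nyström system (7.20) can be assembled and solved in a few lines. The following function is a minimal sketch; the name nystrom and the vectorized arguments K and g are our own assumptions, not the book's code.

function [f, t] = nystrom(K, g, lambda, t, w)
% NYSTROM  Minimal sketch of the Nystrom method: solve the system
% (I - lambda*Ktilde) f = g of (7.20) at the quadrature nodes t
% with weights w.  K(t,s) and g(t) must accept matrix/vector input.
N = length(t);
[S, T] = meshgrid(t(:), t(:));            % T(i,j) = t_i, S(i,j) = s_j
Ktilde = K(T, S) .* repmat(w(:)', N, 1);  % Ktilde_ij = K(t_i,s_j) w_j
f = (eye(N) - lambda*Ktilde) \ g(t(:));   % nodal values f(t_i)
end

With K(t, s) = e^{ts} and the data of Example 7.8 below, this reproduces the nodal solutions computed there.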
Example 7.8. Solve the Fredholm integral equation
    f(t) = λ ∫_0^1 e^{ts} f(s) ds + g(t),   (7.21)
with λ = 0.5 and f(t) = e^t. Compare the errors in the numerical solutions at the nodes of Simpson's rule and three-point Gaussian quadrature, respectively.
Solution. Substituting f(t) = e^t in the integral equation, we see that the function g(t) on the right-hand side is
    g(t) = e^t − (1/(2(t + 1))) (e^{t+1} − 1).
Choosing the Simpson nodes
    t1 = 0,   t2 = 0.5,   t3 = 1,
and solving the resulting algebraic system (7.20), say, by the LU decomposition, we have the error in the solution f3:
    f(0) − f3(0) = −0.0047,
    f(0.5) − f3(0.5) = −0.0080,
    f(1) − f3(1) = −0.0164.
Choosing the three-point Gaussian nodes
    t1 = (1 − √0.6)/2 = 0.112 701 67,   t2 = 0.5,   t3 = (1 + √0.6)/2 = 0.887 298 33,
and solving the resulting algebraic system (7.20), say, by the LU decomposition, we have the error in the solution f3:
    f(t1) − f3(t1) = 0.2099 × 10⁻⁴,
    f(t2) − f3(t2) = 0.3195 × 10⁻⁴,   (7.22)
    f(t3) − f3(t3) = 0.6315 × 10⁻⁴,
which is much smaller than with Simpson's rule when using the same number of nodes.
The function M-file exp7_8.m:
function g=exp7_8(t) % Example 7.8
% evaluate the right-hand side g(t)
global lambda
syms s
g = exp(t)-lambda*int(exp(t*s)*exp(s),s,0,1);
computes the value of the function g(t), and the following Matlab commands produce these results:
clear; global lambda
lambda = 1/2; h = 1/2;
snode = [0 1/2 1]; sweight = [1/3 4/3 1/3]; % Simpson's rule
sK = h*exp(snode'*snode)*diag(sweight);
sA = eye(3)-lambda*sK;
sb = double(exp7_8(snode'));
sf3 = sA\sb;
serror = exp(snode)'-sf3
serror =
   -0.0047
   -0.0080
   -0.0164
gnode = [(1-sqrt(0.6))/2 1/2 (1+sqrt(0.6))/2]; % Gaussian quadrature
gweight = [5/18 8/18 5/18];
gK = exp(gnode'*gnode)*diag(gweight);
gA = eye(3)-lambda*gK;
gb = double(exp7_8(gnode'));
gf3 = gA\gb;
gerror = exp(gnode)'-gf3
gerror = 1.0e-04 *
    0.2099
    0.3195
    0.6315
Note that the use of matrices in computing sK and gK avoids recourse to loops.
Quadratic interpolation can be used to extend the numerical solution to all other t ∈ [0, 1], but it generally results in a much larger error. For example,
    f(1.0) − P2 f3(1.0) = 0.0158,
where P2 f3(t) denotes the quadratic polynomial interpolating the Nyström solution at the Gaussian quadrature nodes given above. In contrast, the Nyström formula (7.18) gives errors that are consistent in size with those in (7.22). For example,
    f(1.0) − f3(1.0) = 8.08 × 10⁻⁵.
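The Nyström interpolation value at t = 1.0 can be computed directly from (7.18) by continuing the Matlab session above; this snippet assumes gnode, gweight, gf3 and the global lambda are still in the workspace.

t = 1.0;
fN = lambda*(gweight.*exp(t*gnode))*gf3 + double(exp7_8(t));
err = exp(t) - fN    % about 8.08e-05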
Example 7.9. Consider the integral equation of Example 7.8 with λ = 1/50 and f(t) = e^t. Compare the errors in the Nyström–trapezoidal method and Nyström–Gaussian method, respectively.

Solution. In Table 7.1 we give numerical results when using the trapezoidal rule with N = 2, 4, 8, 16 nodes. In Table 7.2 we give results when using N-point Gaussian quadratures for N = 1, 2, 3, 4, 5. The following norms are used:
    E1 = max_{1≤i≤N} |f(t_i) − f_N(t_i)|,   E2 = max_{0≤x≤1} |f(x) − f_N(x)|.
For E2, f_N(x) is obtained using the Nyström interpolation formula (7.18). The results for the trapezoidal rule show clearly the O(h²) behavior of the error. It is seen that the use of Gaussian quadrature leads to very rapid convergence of f_N to f.

Table 7.1. Errors in the Nyström–trapezoidal method of Example 7.9.
  N    E1         Ratio    E2         Ratio
  2    5.35E−03            5.44E−03
  4    1.35E−03   3.9      1.37E−03   4.0
  8    3.39E−04   4.0      3.44E−04   4.0
 16    8.47E−05   4.0      8.61E−05   4.0
Table 7.2. Errors in the Nyström–Gaussian method of Example 7.9.
  N    E1         Ratio    E2         Ratio
  1    4.19E−03            9.81E−03
  2    1.22E−04    34      2.18E−04    45
  3    1.20E−06   100      1.86E−06   117
  4    5.09E−09   200      8.47E−09   220
  5    1.74E−11   340      2.39E−11   354
CHAPTER 8
8.1. Legendre Polynomials P_n(x) on [−1, 1]

(1) The Legendre differential equation is
    (1 − x²) y″ − 2x y′ + n(n + 1) y = 0,   −1 ≤ x ≤ 1.
(2) The solution y(x) = P_n(x) is given by
    P_n(x) = (1/2ⁿ) Σ_{m=0}^{[n/2]} (−1)^m C(n, m) C(2n − 2m, n) x^{n−2m},
where [n/2] denotes the greatest integer smaller than or equal to n/2.
(3) The three-point recurrence relation is
    (n + 1) P_{n+1}(x) = (2n + 1) x P_n(x) − n P_{n−1}(x).
(4) The standardization is
    P_n(1) = 1.
(5) The norm of P_n(x) is
    ∫_{−1}^{1} [P_n(x)]² dx = 2/(2n + 1).
(6) The generating function is
    1/√(1 − 2xt + t²) = Σ_{n=0}^∞ P_n(x) tⁿ,   −1 ≤ x ≤ 1,  |t| < 1.   (8.1)
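The recurrence relation (3) gives a stable way to evaluate P_n(x) numerically; the following Matlab function is a sketch of ours (legendre_rec is not a built-in).

function p = legendre_rec(n, x)
% LEGENDRE_REC  Evaluate P_n(x) by the three-point recurrence
% (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x).
pm1 = ones(size(x));                 % P_0(x)
if n == 0, p = pm1; return, end
p = x;                               % P_1(x)
for k = 1:n-1
    [p, pm1] = deal(((2*k+1)*x.*p - k*pm1)/(k+1), p);
end
end

The standardization (4) provides a quick check: legendre_rec(n, 1) should return 1 for every n.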
Figure 8.1. Plot of the first five Legendre polynomials.
8.2. Laguerre Polynomials

Figure 8.2. Plot of the first four Laguerre polynomials.
The Laguerre polynomials on 0 ≤ x < ∞ are defined by
    L_n(x) = (e^x / n!) dⁿ(xⁿ e^{−x}) / dxⁿ,   n = 0, 1, . . . .
The first four Laguerre polynomials are (see Figure 8.2)
    L_0(x) = 1,   L_1(x) = 1 − x,
    L_2(x) = 1 − 2x + (1/2) x²,   L_3(x) = 1 − 3x + (3/2) x² − (1/6) x³.
The L_n(x) can be obtained by the three-point recurrence formula
    (n + 1) L_{n+1}(x) = (2n + 1 − x) L_n(x) − n L_{n−1}(x).
They satisfy the Laguerre differential equation
    x y″ + (1 − x) y′ + n y = 0.
A function f(x) defined on [−1, 1] can be expanded in a Fourier–Legendre series:
    f(x) = Σ_{n=0}^∞ a_n P_n(x),   −1 ≤ x ≤ 1,
where
    a_n = ((2n + 1)/2) ∫_{−1}^{1} f(x) P_n(x) dx,   n = 0, 1, 2, . . . .   (8.3)
This expansion follows from the orthogonality relations
    ∫_{−1}^{1} P_m(x) P_n(x) dx = 0 if m ≠ n,   and = 2/(2n + 1) if m = n.
Exercises for Chapter 1

1.1. Use the bisection method to find x3 for f(x) = x − cos x on [0, 1]. Angles in radian measure.

1.2. Use the bisection method to find x3 for
    f(x) = 3(x + 1)(x − 1/2)(x − 1)
on the following intervals:
    [−2, 1.5],   [−1.25, 2.5].

1.3. Use the bisection method to find a solution accurate to 10⁻³ for f(x) = x − tan x on [4, 4.5]. Angles in radian measure.

1.5. Show that the fixed point iteration
    x_{n+1} = √(2x_n + 3)
for solving the equation f(x) = x² − 2x − 3 = 0 converges in the interval [2, 4].

1.6. Use a fixed point iteration method, other than Newton's method, to determine a solution accurate to 10⁻² for f(x) = x³ − x − 1 = 0 on [1, 2]. Use x0 = 1.
1.12. Compute a root of the equation f(x) = x − tan x given in Exercise 1.10 with the secant method with starting values x0 = 1 and x1 = 0.5. Find the order of convergence to the root.

1.13. Repeat Exercise 1.12 with the method of false position. Find the order of convergence of the method.

1.14. Repeat Exercise 1.11 with the secant method with starting values x0 = 1 and x1 = 0.5. Find the order of convergence of the method.

1.15. Repeat Exercise 1.14 with the method of false position. Find the order of convergence of the method.

1.16. Consider the fixed point method of Exercise 1.5:
    x_{n+1} = √(2x_n + 3).
Complete the table:
    x_n            √(2x_n + 3)
    x1 = 4.000     √(2x1 + 3) =
    x2 =
    x3 =

1.17. Apply Steffensen's method to the result of Exercise 1.9. Find the order of convergence of the method.

1.18. Use Müller's method to find the three zeros of
    f(x) = x³ + 3x² − 1.

1.19. Use Müller's method to find the four zeros of
    f(x) = x⁴ + 2x² − x − 3.

1.20. Sketch the function f(x) = x − tan x and compute a root of the equation f(x) = 0 to six decimals by means of Newton's method with x0 = 1. Find the multiplicity of the root and the order of convergence of Newton's method to this root.
Exercises for Chapter 2

f(8.3) = 17.56492,   f(8.6) = 18.50515,   f(8.7) = 18.82091.

2.4. The points
    (0.1, 1.0100502),   (0.2, 1.04081077),   (0.4, 1.1735109)
lie on the graph of a certain function f(x). Use these points to estimate f(0.3).
2.5. Complete the following table of divided differences:

    i    x_i    f[x_i]    f[·,·]    f[·,·,·]    f[·,·,·,·]
    0    3.2    22.0
                          8.400
    1    2.7    17.8                2.856
                                                −0.528
    2    1.0    14.2
    3    4.8    38.3
    4    5.6    51.7

Write the interpolating polynomial of degree 3 that fits the data at all points from x0 = 3.2 to x3 = 4.8.
2.6. Interpolate the data
    (1, 2),   (0, 0),   (1.5, 1),   (2, 4),
    (0, 0),   (1, 1),   (2, 4),
    (0, 1),   (1, 0),   (2, 5).
2.11. Consider the data
    x      0.0       0.2       0.4       0.6       0.8
    f(x)   1.00000   1.22140   1.49182   1.82212   2.22554

2.12. Approximate f(0.65) using the data in Exercise 2.11 and the Gregory–Newton backward interpolating polynomial of degree four.
2.13. Construct a Hermite interpolating polynomial of degree three for the data
    x      f(x)        f′(x)
    8.3    17.56492    3.116256
    8.6    18.50515    3.151762
Exercises for Chapter 3

3.1. (d) With ε = ndf − df, verify that |ε| is bounded by the absolute value of the error term.

3.2. Use Richardson's extrapolation with h = 0.4, h/2 and h/4 to improve the value f′(1.4) obtained by formula (4.5).

3.3. Evaluate ∫_0^1 dx/(1 + x) by the trapezoidal rule with n = 10.

3.4. Evaluate ∫_0^1 dx/(1 + x) by Simpson's rule with n = 2m = 10.

3.5. Evaluate ∫_0^1 dx/(1 + 2x²) by the trapezoidal rule with n = 10.

3.6. Evaluate ∫_0^1 dx/(1 + 2x²) by Simpson's rule with n = 2m = 10.

3.7. Evaluate ∫_0^1 dx/(1 + x³) by the trapezoidal rule with h for an error of 10⁻⁴.

3.8. Evaluate ∫_0^1 dx/(1 + x³) by Simpson's rule with h for an error of 10⁻⁶.

3.9. Determine the values of h and n to approximate
    ∫_1^3 ln x dx
to 10⁻⁴ by the trapezoidal rule and by Simpson's rule, respectively.

    ∫ dx/(x + 4)
Exercises for Chapter 4

4.1. Solve the linear system
    … + 2x2 + 2x3 = 4,
    … + 2x2 + 3x3 = 32,
    …        + 4x3 = 17.

4.2. Solve the linear system
    x1 + x2 + x3 = 5,
    x1 + 2x2 + 2x3 = 6,
    x1 + 2x2 + 3x3 = 8.

4.3. Solve the linear system
    … + x2 + 5x3 = 4,
    … + 3x2 + 9x3 = 6,
    … + 3x2 + … = 2.

4.4. Solve the linear system
    3x1 + 9x2 − 6x3 = 23,
    18x1 + 48x2 − 39x3 = 136,
    9x1 − 27x2 + 42x3 = 45.

4.5. Scale each equation in the l∞-norm, so that the largest coefficient of each row on the left-hand side is equal to 1 in absolute value, and solve the scaled system by the LU decomposition with partial pivoting:
    x1 + x2 + 2x3 = 3.8,
    4x1 + 3x2 + x3 = 5.7,
    5x1 + 10x2 + 3x3 = 2.8.
4.6. Find the inverse of the Gaussian transformation
    M = [1 0 0 0; a 1 0 0; b 0 1 0; c 0 0 1].

4.7. Find the product of the three Gaussian transformations
    [1 0 0 0; a 1 0 0; b 0 1 0; c 0 0 1]
    [1 0 0 0; 0 1 0 0; 0 d 1 0; 0 e 0 1]
    [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 f 1].
4.8. Find the Cholesky decomposition of the matrices
    A = [1 1 2; 1 5 4; 2 4 22],
    B = [9 9 9 0; 9 13 13 2; 9 13 14 3; 0 2 3 18].

4.9. Solve the linear system
    [16 4 4; 4 10 1; 4 1 5] x = [12; 3; 1].

4.10. Solve the linear system
    [4 10 8; 10 26 26; 8 26 61] x = [44; 128; 214].
Do three iterations of Gauss–Seidel's scheme on the following properly permuted systems with given initial values x(0).

4.11.
    6x1 + x2 + x3 = 3,
    x1 + x2 + 7x3 = 17,
    x1 + 5x2 + x3 = 0,
with x1(0) = 1, x2(0) = 1, x3(0) = 1.

4.12.
    2x1 + x2 + 6x3 = 22,
    x1 + 4x2 + 2x3 = 13,
    7x1 + 2x2 − x3 = 6,
with x1(0) = 1, x2(0) = 1, x3(0) = 1.
4.13. Using least squares, fit a straight line to the data
    (0.5, 5),   (1.6, 15),   (2.1, 20),
where s is the elongation of an elastic spring under a force F, and estimate from it the spring modulus k = F/s. (F = ks is called Hooke's law.)

4.14. Using least squares, fit a parabola to the data
    (−1, 2),   (0, 0),   (1, 1),   (2, 2).

4.15. Using least squares, fit a straight line to the data
    (1, 3.0),   (2, 2.4),   (3, 1.8).
4.16. Fit, in the least-squares sense, the data in the table
    x      0    0.25      0.5       0.75      1
    f(x)   1    e^{1/4}   e^{1/2}   e^{3/4}   e
by means of
    f(x) = a0 P0(x) + a1 P1(x) + a2 P2(x),
where P0, P1 and P2 are the Legendre polynomials of degree 0, 1 and 2, respectively. Plot f(x) and g(x) = e^x on the same graph.
Using Theorem 4.3, determine and sketch disks that contain the eigenvalues of the following matrices.

4.17.

4.18.
    A = [−2 1/2 i/2; 1/2 0 i/2; −i/2 −i/2 2].

4.19. Find the l1-norm of the matrix in Exercise 4.17 and the l∞-norm of the matrix in Exercise 4.18.
Do three iterations of the power method to find the largest eigenvalue, in absolute value, and the corresponding eigenvector of the following matrices.

4.20.
    A = [10 4; 4 2],   with x(0) = [1; 1].

4.21.
    A = [3 2 3; 2 6 6; 3 6 3],   with x(0) = [1; 1; 1].
Exercises for Chapter 5

Use Euler's method with h = 0.1 to obtain a four-decimal approximation for each initial value problem on 0 ≤ x ≤ 1 and plot the numerical solution.

5.1. y′ = e^y − y + 1,   y(0) = 0.
5.2. y′ = x + sin y,   y(0) = 0.
5.3. y′ = x + cos y,   y(0) = 0.
5.4. y′ = x + y²,   y(0) = 1.
5.5. y′ = 1 + y²,   y(0) = 1.

Use the improved Euler method with h = 0.1 to obtain a four-decimal approximation for each initial value problem on 0 ≤ x ≤ 1 and plot the numerical solution.

5.6. y′ = e^y − y + 1,   y(0) = 0.
5.7. y′ = x + sin y,   y(0) = 0.
5.8. y′ = x + cos y,   y(0) = 0.
5.9. y′ = x + y²,   y(0) = 1.
5.10. y′ = 1 + y²,   y(0) = 1.

Use the Runge–Kutta method of order 4 with h = 0.1 to obtain a six-decimal approximation for each initial value problem on 0 ≤ x ≤ 1 and plot the numerical solution.

5.12. y′ = x + sin y,   y(0) = 0.
5.13. y′ = x + cos y,   y(0) = 0.
5.14. y′ = e^(…),   y(0) = 0.
5.15. y′ = y² + 2y − x,   y(0) = 0.

Use the Matlab ode23 embedded pair of order 3 with h = 0.1 to obtain a six-decimal approximation for each initial value problem on 0 ≤ x ≤ 1 and estimate the local truncation error by means of the given formula.

5.16. y′ = x² + 2y²,   y(0) = 1.
5.17. y′ = x + 2 sin y,   y(0) = 0.
5.18. y′ = x + 2 cos y,   y(0) = 0.
5.19. y′ = e^(…),   y(0) = 0.
5.20. y′ = y² + 2y − x,   y(0) = 0.

Use the Adams–Bashforth–Moulton four-step method with h = 0.1 to obtain a six-decimal approximation for each initial value problem on 0 ≤ x ≤ 1 and plot the numerical solution.

5.22. y′ = x + cos y,   y(0) = 0.
5.23. y′ = y² − y + 1,   y(0) = 0.
5.25. y′ = x + cos y,   y(0) = 0.
5.26. y′ = y² − y + 1,   y(0) = 0.
Figure 8.3. Graph of two functions and their difference for Exercise 1.11.
Ex. 1.12. Compute a root of the equation f(x) = x − tan x given in Exercise 1.10 with the secant method with starting values x0 = 1 and x1 = 0.5. Find the order of convergence to the root.

Solution. x0 = 1; x1 = 0.5; % starting values
x = zeros(20,1);
x(1) = x0; x(2) = x1;
for n = 3:20
x(n) = x(n-1) -(x(n-1)-x(n-2)) ...
/(x(n-1)-tan(x(n-1))-x(n-2)+tan(x(n-2)))*(x(n-1)-tan(x(n-1)));
end
dx = abs(diff(x));
p = 1; % checking convergence of order 1
dxr = dx(2:19)./(dx(1:18).^p);
table = [[0:19]' x [0; dx] [0; 0; dxr]]
table =
  n    x_n                 |x_n − x_{n−1}|     |x_n − x_{n−1}| / |x_{n−1} − x_{n−2}|
  0    1.00000000000000
  1    0.50000000000000    0.50000000000000
  2    0.45470356524435    0.04529643475565    0.09059286951131
  3    0.32718945543123    0.12751410981312    2.81510256824784
  4    0.25638399918811    0.07080545624312    0.55527546204022
  5    0.19284144711319    0.06354255207491    0.89742451283310
  6    0.14671560243705    0.04612584467614    0.72590481763723
  7    0.11082587909404    0.03588972334302    0.77808273420254
  8    0.08381567002072    0.02701020907332    0.75258894628889
  9    0.06330169146740    0.02051397855331    0.75948980985777
 10    0.04780894321090    0.01549274825651    0.75522884145761
 11    0.03609714636358    0.01171179684732    0.75595347277403
 12    0.02725293456160    0.00884421180198    0.75515413367179
 13    0.02057409713542    0.00667883742618    0.75516479882196
 14    0.01553163187404    0.00504246526138    0.75499146627099
 15    0.01172476374403    0.00380686813002    0.75496169684658
 16    0.00885088980844    0.00287387393559    0.75491817353192
 17    0.00668139206035    0.00216949774809    0.75490358892216
 18    0.00504365698583    0.00163773507452    0.75489134568691
 19    0.00380735389990    0.00123630308593    0.75488588182657

An approximate solution to the triple root x = 0 is x19 = 0.0038. Since the ratio
    |x_n − x_{n−1}| / |x_{n−1} − x_{n−2}| ≈ 0.75
is approximately constant, the convergence is of order 1. Note that x = 0 is a root of multiplicity 3, since f(0) = f′(0) = f″(0) = 0 and f‴(0) ≠ 0.
In general, the secant method may not converge at all to a multiple root.
Ex. 2.4. The points
    (0.1, 1.0100502),   (0.2, 1.04081077),   (0.4, 1.1735109)
lie on the graph of a certain function f(x). Use these points to estimate f(0.3).

Solution. We have
    f[0.1, 0.2] = (1.04081077 − 1.0100502)/0.1 = 0.307606,
    f[0.2, 0.4] = (1.1735109 − 1.04081077)/0.2 = 0.663501,
and
    f[0.1, 0.2, 0.4] = (0.663501 − 0.307606)/0.3 = 1.18632.
Therefore,
    p2(x) = 1.0100502 + (x − 0.1) × 0.307606 + (x − 0.1)(x − 0.2) × 1.18632
and
    p2(0.3) = 1.0953.
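The Newton form is evaluated directly in Matlab:

x  = 0.3;
p2 = 1.0100502 + (x-0.1)*0.307606 + (x-0.1)*(x-0.2)*1.18632
% p2 = 1.0953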
Ex. 2.12. Approximate f(0.65) using the data in Exercise 2.11:
    x      0.0       0.2       0.4       0.6       0.8
    f(x)   1.00000   1.22140   1.49182   1.82212   2.22554

Solution. The backward difference table is
    x_n    f_n      ∇f_n      ∇²f_n     ∇³f_n     ∇⁴f_n
    0.0    1.0000
    0.2    1.2214   0.22140
    0.4    1.4918   0.27042   0.04902
    0.6    1.8221   0.33030   0.05988   0.01086
    0.8    2.2255   0.40342   0.07312   0.01324   0.00238
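With r = (x − x_n)/h = (0.65 − 0.8)/0.2 = −0.75 and the bottom row of the table, the Gregory–Newton backward polynomial of degree four gives the required value. A short Matlab check (the data appear to be samples of e^x, so exp(0.65) serves as a reference):

r  = -0.75;
nd = [0.40342 0.07312 0.01324 0.00238];      % backward differences
p4 = 2.22554 + r*nd(1) + r*(r+1)/2*nd(2) ...
   + r*(r+1)*(r+2)/6*nd(3) + r*(r+1)*(r+2)*(r+3)/24*nd(4)
% p4 = 1.9155, and exp(0.65) = 1.9155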
Ex. 4.2. Solve the linear system
    x1 + x2 + x3 = 5,
    x1 + 2x2 + 2x3 = 6,
    x1 + 2x2 + 3x3 = 8.

Solution. The Matlab numeric solution:
A = [1 1 1; 1 2 2; 1 2 3]; b = [5 6 8]';
lu(A) % LU decomposition of A
The factors are
    L = [1 0 0; 1 1 0; 1 1 1],   U = [1 1 1; 0 1 1; 0 0 1].
Solving LUx = b by forward and backward substitution gives
x =
     4
    -1
     2
Ex. 4.4. Solve the linear system
    3x1 + 9x2 − 6x3 = 23,
    18x1 + 48x2 − 39x3 = 136,
    9x1 − 27x2 + 42x3 = 45.

Solution. With partial pivoting, the LU decomposition of the coefficient matrix has upper triangular factor
    U = [18.0000   48.0000   -39.0000
          0       -51.0000    61.5000
          0         0          1.70588],
and the solution follows by forward and backward substitution.
Ex. 4.6. Find the inverse of the Gaussian transformation
    M = [1 0 0 0; a 1 0 0; b 0 1 0; c 0 0 1].

Solution. The inverse is obtained by changing the signs of the multipliers:
    M⁻¹ = [1 0 0 0; −a 1 0 0; −b 0 1 0; −c 0 0 1].
Ex. 4.7. Find the product of the three Gaussian transformations
    [1 0 0 0; a 1 0 0; b 0 1 0; c 0 0 1]
    [1 0 0 0; 0 1 0 0; 0 d 1 0; 0 e 0 1]
    [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 f 1].

Solution. The product simply collects the multipliers below the diagonal:
    L = [1 0 0 0; a 1 0 0; b d 1 0; c e f 1].
Ex. 4.10. Solve the linear system
    [4 10 8; 10 26 26; 8 26 61] x = [44; 128; 214].

Solution. The matrix is symmetric positive definite; solving, e.g., by the Cholesky decomposition, gives x1 = −8, x2 = 6, x3 = 2.
Ex. 4.11. Do three iterations of Gauss–Seidel's scheme on the properly permuted system
    6x1 + x2 + x3 = 3,
    x1 + 5x2 + x3 = 0,
    x1 + x2 + 7x3 = 17,
with x1(0) = 1, x2(0) = 1, x3(0) = 1.

Solution. At each sweep, the first equation is solved for x1 using the latest available values of x2 and x3, then the second for x2, and the third for x3:
    x1(n+1) = (1/6) [3 − ⋯],
    x2(n+1) = (1/5) [0 − ⋯],
    x3(n+1) = (1/7) [17 + ⋯].
Hence
    x(1) = [0.5;  0.3;  2.31429],
    x(2) = [0.164 28;  0.430 00;  2.466 53],
    x(3) = [0.017 24;  0.489 86;  2.496 09],
and x(n) → [0;  0.5;  2.5] as n → ∞.
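A generic Matlab sweep (a minimal sketch of ours) makes the computation mechanical for any properly permuted system:

function x = gauss_seidel(A, b, x, nit)
% GAUSS_SEIDEL  Perform nit Gauss-Seidel sweeps on A*x = b.
% A is assumed already permuted so its diagonal is nonzero
% (and preferably dominant); x is the initial column vector x(0).
n = length(b);
for k = 1:nit
    for i = 1:n
        j = [1:i-1, i+1:n];                  % off-diagonal indices
        x(i) = (b(i) - A(i,j)*x(j))/A(i,i);  % solve i-th equation
    end
end
end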
Ex. 4.14. Using least squares, fit a parabola to the data
    (−1, 2),   (0, 0),   (1, 1),   (2, 2).

Solution. The Matlab command A\y produces the same answer. It uses the normal equations with the Cholesky or LU decomposition or, perhaps, the QR decomposition.
Ex. 4.18. Determine and sketch the Gershgorin disks that contain the eigenvalues of the matrix
    A = [−2 1/2 i/2; 1/2 0 i/2; −i/2 −i/2 2].

Solution. The centers of the disks are the diagonal entries,
    c1 = −2,   c2 = 0,   c3 = 2,
and the radii are the row sums of the moduli of the off-diagonal entries,
    r1 = |1/2| + |i/2| = 1,
    r2 = |1/2| + |i/2| = 1,
    r3 = |−i/2| + |−i/2| = 1.
Note that the eigenvalues are real since the matrix A is Hermitian, A* = A.
Solutions to Exercises for Chapter 5

The M-file exr5_25.m for Exercises 5.3, 5.8, 5.13 and 5.25 is
function yprime = exr5_25(x,y); % Exercises 5.3, 5.8, 5.13 and 5.25
yprime = x+cos(y);
Ex. 5.3. Use Euler's method with h = 0.1 to obtain a four-decimal approximation for the initial value problem
    y′ = x + cos y,   y(0) = 0.

Solution. The numerical values are:
  n    x_n    y_n
  0    0.0    0
  1    0.1    0.10000000000000
  2    0.2    0.20950041652780
  3    0.3    0.32731391010682
  4    0.4    0.45200484393704
  5    0.5    0.58196216946658
  6    0.6    0.71550074191996
  7    0.7    0.85097722706339
  8    0.8    0.98690209299587
  9    0.9    1.12202980842386
 10    1.0    1.25541526027779
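The table is produced by a loop like the following minimal sketch:

h = 0.1; x = 0:h:1; y = zeros(size(x)); y(1) = 0;
for n = 1:length(x)-1
    y(n+1) = y(n) + h*(x(n) + cos(y(n)));    % Euler step
end
[x' y']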
Ex. 5.8. Use the improved Euler method with h = 0.1 to obtain a four-decimal approximation for the initial value problem
    y′ = x + cos y,   y(0) = 0.

Solution. The numerical values are:
  n    x_n    y_n
  0    0.0    0
  1    0.1    0.10475020826390
  2    0.2    0.21833345972227
  3    0.3    0.33935117091202
  4    0.4    0.46622105817179
  5    0.5    0.59727677538612
  6    0.6    0.73088021271199
  7    0.7    0.86552867523997
  8    0.8    0.99994084307400
  9    0.9    1.13311147003613
 10    1.0    1.26433264384505
Ex. 5.13. Use the Runge–Kutta method of order 4 with h = 0.1 to obtain a six-decimal approximation for the initial value problem
    y′ = x + cos y,   y(0) = 0.

Solution. The numerical values are:
  n    x_n    y_n
  0    0.0    0
  1    0.1    0.10482097362427
  2    0.2    0.21847505355285
  3    0.3    0.33956414151249
  4    0.4    0.46650622608728
  5    0.5    0.59763447559658
  6    0.6    0.73130914485224
  7    0.7    0.86602471267959
  8    0.8    1.00049620051241
  9    0.9    1.13371450064800
 10    1.0    1.26496830711844
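A minimal classic Runge–Kutta loop producing these values, written in current Matlab syntax, is:

f = @(x,y) x + cos(y);
h = 0.1; x = 0:h:1; y = zeros(size(x));
for n = 1:length(x)-1
    k1 = f(x(n),       y(n));
    k2 = f(x(n) + h/2, y(n) + h*k1/2);
    k3 = f(x(n) + h/2, y(n) + h*k2/2);
    k4 = f(x(n) + h,   y(n) + h*k3);
    y(n+1) = y(n) + h*(k1 + 2*k2 + 2*k3 + k4)/6;  % RK4 step
end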
Ex. 5.25. The driver uses the Adams–Bashforth–Moulton four-step method with h = 0.1 for y′ = x + cos y, y(0) = 0; the tail of the driver M-file is:
    count = count + 1;
end
output4
save output4 % for printing the graph
The command output4 prints the values of n, x, and y:
  n    x_n    y_n                   Error estimate
  0    0.0    0                      0
  1    0.1    0.10482097362427       0
  2    0.2    0.21847505355285       0
  3    0.3    0.33956414151249       0
  4    0.4    0.46650952510670      -0.00000234408483
  5    0.5    0.59764142006542      -0.00000292485029
  6    0.6    0.73131943222018      -0.00000304450366
  7    0.7    0.86603741396612      -0.00000269077058
  8    0.8    1.00050998975914      -0.00000195879670
  9    0.9    1.13372798977088      -0.00000104794662
 10    1.0    1.26498035231682      -0.00000017019624
The numerical solutions for Exercises 5.3, 5.8, 5.13 and 5.25 are plotted by the commands:
load output1; load output2; load output3; load output4;
subplot(2,2,1); plot(output1(:,2),output1(:,3));
title('Plot of solution y_n for Exercise 5.3');
xlabel('x_n'); ylabel('y_n');
subplot(2,2,2); plot(output2(:,2),output2(:,3));
title('Plot of solution y_n for Exercise 5.8');
xlabel('x_n'); ylabel('y_n');
subplot(2,2,3); plot(output3(:,2),output3(:,3));
title('Plot of solution y_n for Exercise 5.13');
xlabel('x_n'); ylabel('y_n');
subplot(2,2,4); plot(output4(:,2),output4(:,3));
title('Plot of solution y_n for Exercise 5.25');
xlabel('x_n'); ylabel('y_n');
print -deps Fig9_3
181
1.2
1.2
0.8
0.8
yn
1.4
yn
1.4
0.6
0.6
0.4
0.4
0.2
0.2
0.2
0.4
0.6
0.8
0.2
0.4
xn
0.6
0.8
xn
1.2
1.2
0.8
0.8
yn
1.4
yn
1.4
0.6
0.6
0.4
0.4
0.2
0.2
0.2
0.4
0.6
xn
0.8
0.2
0.4
0.6
xn
Figure 8.4. Graph of numerical solutions of Exercises 12.3 (Euler), 12.8 (improved Euler), 12.13 (RK4) and 12.25 (ABM4).
0.8
Index
absolute error, 1
absolutely stable method for ODE, 101
absolutely stable multistep method, 110
Adams–Bashforth multistep method, 110
Adams–Bashforth–Moulton method
four-step, 113
three-step, 112
Adams–Moulton multistep method, 110
Aitken's process, 20
Gauss–Seidel iteration, 74
Gaussian quadrature, 145
three-point, 146
two-point, 146
Gaussian transformation, 60
inverse, 60
product, 61
Gershgorin
disk, 79
Theorem, 79
global Newton-bisection method, 18
eigenvalue of a matrix, 78
eigenvector, 78
error, 1
Euclidean matrix norm, 72
Euler's method, 88
Müller's method, 25
Newton divided difference, 29
parabola method, 25
interval of absolute stability, 101
inverse power method, 81
iterative method, 73
Jacobi iteration, 75
Jacobi method for eigenvalues, 83
l1-norm
of a matrix, 72
of a vector, 72
l2-norm
of a matrix, 72
of a vector, 72
l∞-norm
of a matrix, 72
of a vector, 72
Lagrange basis, 27
Lagrange interpolating polynomial, 27
Legendre
differential equation, 155
polynomial Pn (x), 143
linear regression, 76
Lipschitz condition, 87
local approximation, 111
local error of method for ODE, 100
local extrapolation, 106
local truncation error, 88, 89, 100
Matlab
fzero function, 20
ode113, 121
ode15s, 129
ode23, 104
ode23s, 129
ode23t, 129
ode23tb, 129
mean value theorem, 4
for integral, 4
for sum, 4
method of false position, 17
method of order p, 100
midpoint rule, 47, 48
multistep method, 110
natural boundary, 38
natural spline, 38
NDF (numerical differentiation formula), 123
Newton's method, 13
modified, 15
Newton–Raphson method, 13
normal equations, 76
normal matrix, 84
numerical differentiation formula, 123
numerical solution of ODE, 87
operation
gaxpy, 71
saxpy, 70
order of an iterative method, 13
overdetermined system, 75, 83
partial pivoting, 59
PECE mode, 112
PECLE mode, 112
phenomenon of stiffness, 122
pivot, 60
positive definite matrix, 67
power method, 80
predictor, 111
principal minor, 67
QR
algorithm, 83
decomposition, 82
quadratic regression, 77
rate of convergence, 13
region of absolute stability, 100
regula falsi, 17
relative error, 1
residual, 82
Richardson's extrapolation, 45
RKF(4,5), 107
RKV(5,6), 109
roundoff error, 1, 43, 90
Runge–Kutta method
four-stage, 94
fourth-order, 94
second-order, 93
third-order, 93
Runge–Kutta–Fehlberg pair
six-stage, 107
Runge–Kutta–Verner pair
eight-stage, 109
scaling rows, 65
Schur decomposition, 84
secant method, 16
signum function sign, 3
singular value decomposition, 84
stability function, 101
stiff system, 122
in an interval, 123
stiffness ratio, 122
stopping criterion, 12
Sturm–Liouville problem, 143
subordinate matrix norm, 72
substitution
backward, 61
forward, 61
supremum norm
of a vector, 72
three-point formula for f′(x), 42
trapezoidal rule, 48
truncation error, 1
truncation error of a method, 90