Course Note
Contents
Preface
2 Root Finding
2.1 Introduction
2.2 Four Algorithms for Root Finding
2.2.1 Bisection Method
2.2.2 Fixed Point Iteration
2.2.3 Newton’s Method
2.2.4 Secant Method
2.2.5 Stopping Criteria for Iterative Functions
2.3 Rate of Convergence
2.4 Convergence Theory
2.4.1 Fixed Point Iteration
2.4.2 Newton’s Method
2.4.3 Secant Method
2.4.4 Overview
5 Interpolation
5.1 Polynomial Interpolation
5.1.1 The Vandermonde Matrix
5.1.2 Lagrange Form
5.1.3 Hermite Interpolation
5.2 Piecewise Polynomial Interpolation
5.2.1 Piecewise Linear Interpolation
5.2.2 Spline Interpolation
5.2.3 Further Generalizations
6 Integration
6.1 Integration of an Interpolating Polynomial
6.1.1 Midpoint Rule: y(x) degree 0
6.1.2 Trapezoid Rule: y(x) degree 1
6.1.3 Simpson Rule: y(x) degree 2
6.1.4 Accuracy, Truncation Error and Degree of Precision
6.2 Composite Integration
6.2.1 Composite Trapezoid Rule
6.2.2 Composite Simpson Rule
6.3 Gaussian Integration
The goal of computational mathematics, put simply, is to find or develop algorithms that
solve mathematical problems computationally (ie. using computers). In particular, we desire
that any algorithm we develop fulfills three primary properties:
Chapter 1
Errors and Error Propagation
Problem Consider an arbitrary problem P with input x. We must compute the desired
output z = fP (x).
In general our only tools for solving such problems are primitive mathematical operations
(for example, addition, subtraction, multiplication and division) combined with flow con-
structs (if statements and loops). As such, even simple problems such as evaluating the
exponential function may be difficult computationally.
Example 1.1 Consider the problem P defined by the evaluation of the exponential function
z = exp(x). We wish to find the approximation ẑ for z = exp(x) computationally.
Algorithm A. Recall from calculus that the Taylor series expansion of the exponential
function is given by
exp(x) = 1 + x + x^2/2! + x^3/3! + ... = Σ_{i=0}^{∞} x^i/i!. (1.1)
Since we obviously cannot compute an infinite sum computationally without also using
infinite resources, consider the truncated series constructed by only summing the first n
terms. This yields the expansion
n
X xi
ẑ = (1.2)
i=0
i!
As we will later see, this is actually a poor algorithm for computing the exponential -
although the reason for this may not be immediately obvious.
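As a concrete illustration, here is a minimal Python sketch of Algorithm A; the number of terms (25) and the function name are illustrative assumptions, not part of the original algorithm statement.

import math

def exp_taylor(x, n=25):
    # Approximate exp(x) by summing the first n terms of its Taylor series.
    total = 0.0
    term = 1.0                 # x^0 / 0!
    for i in range(n):
        total += term
        term *= x / (i + 1)    # advance from x^i/i! to x^(i+1)/(i+1)!
    return total

print(exp_taylor(1.0), math.exp(1.0))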
1. Errors introduced in the input x. In particular, this form of error can be generally
classified to be from one of two sources:
a) Measurement error, caused by a difference between the “exact” value x and the
”measured” value x̃. The measurement error is computed as ∆x = |x − x̃|.
b) Rounding error, caused by a difference between the “exact” value x and the com-
putational or floating-point representation x̂ = f l(x). Since infinite precision
cannot be achieved with finite resources, the computational representation is a
finite precision approximation of the exact value.
However, a computer can only represent finite precision, so we are not guaranteed to retain all digits from the initial number. Consider, for example, a hypothetical “decimal” computer with at most 5 digits in the mantissa; the floating-point representation of x on this computer is obtained by rounding the mantissa to 5 digits.
In analyzing sources of error, it is often useful to provide a mathematical basis for the error analysis. In particular, there are two principal ways of measuring the generated error: the absolute error
∆z = z − ẑ, (1.7)
and the relative error δz = ∆z/z.
1. The algorithm may contain one or more “avoidable” steps that each greatly amplify
errors.
2. The algorithm may propagate initially small errors such that the error is amplified
without bound in many small steps.
Example 1.2 Consider the problem P with input x defined by the evaluation of the expo-
nential function z = exp(x) (as considered in (1.1)). However, in performing the calculation,
assume the following conditions:
Consider solving this problem with input x = −5.5; we find a numerical approximation
to the exact value z = exp(−5.5) ≈ 0.0040868.
In solving this problem, we first apply Algorithm A, truncating the series after the first 25 terms. This yields the formula ẑ = Σ_{i=0}^{24} x^i/i!. Performing this calculation with our floating point system yields the approximation ẑ_A = 0.0057563, which an observant reader will notice has no significant digits in common with the exact solution. We conclude that precision was lost in this calculation, and in fact notice that the relative error is δz_A = (1/z)(z − ẑ_A) = −0.41 = −41%. This is a substantial error!
Table I: Terms x^i/i! and partial sums Σ_{k=0}^{i} x^k/k! for x = −5.5

i    term x^i/i!          partial sum
0    1.0000000000000      1.0000000000000
1    -5.5000000000000     -4.5000000000000
2    15.1250000000000     10.6250000000000
3    -27.7300000000000    -17.1050000000000
4    38.1280000000000     21.0230000000000
5    -41.9400000000000    -20.9170000000000
6    38.4460000000000     17.5290000000000
7    -30.2060000000000    -12.6770000000000
8    20.7670000000000     8.0900000000000
9    -12.6910000000000    -4.6010000000000
10   6.9803000000000      2.3793000000000
11   -3.4900000000000     -1.1107000000000
12   1.5996000000000      0.4889000000000
13   -0.6767600000000     -0.1878600000000
14   0.2658700000000      0.0780100000000
15   -0.0974840000000     -0.0194740000000
16   0.0335100000000      0.0140360000000
17   -0.0108420000000     0.0031940000000
18   0.0033127000000      0.0065067000000
19   -0.0009589000000     0.0055478000000
20   0.0002637100000      0.0058115000000
21   -0.0000690670000     0.0057424000000
22   0.0000172670000      0.0057597000000
23   -0.0000041289000     0.0057556000000
24   0.0000009462300      0.0057565000000
25   -0.0000002081700     0.0057563000000
26   0.0000000440350      0.0057563000000
Our answer to this question can be found in a related problem: Consider the subtraction
of two numbers that are almost equal to one another.
So, in general, the approximation of these two numbers to their floating-point equivalents
produce relatively small errors. Now consider the subtraction z = x1 −x2 . The exact solution
to this is z = 0.000013715 and the computed solution using the floating-point representations
is ẑ = fl(x1) − fl(x2) = 0.00001. The relative error in this case is δẑ = (1/z)(z − ẑ) ≈ 27%. So
what was an almost negligible error when performing rounding becomes a substantial error
after we have completed the subtraction.
Thus, if possible, we will need to avoid these kinds of subtractions when we are developing our algorithms. Looking back at the additions we performed in calculating exp(−5.5) (see Table I), we see that several subtractions of numbers of similar magnitude were performed. Fortunately, we have an easy way around this if we simply take advantage of a property of the exponential, namely exp(−x) = (exp(x))^{−1}. This provides us with Algorithm B:
Algorithm B. Applying the truncated Taylor expansion for exp(5.5), we get the following
formula for exp(−5.5):
exp(x) ≈ ( Σ_{i=0}^{24} (−x)^i/i! )^{−1}. (1.9)
This yields ẑB = 0.0040865, which matches 4 out of 5 digits of the exact value.
We conclude that Algorithm B is numerically stable (largely since it avoids cancellation).
Similarly, Algorithm A is unstable with respect to relative error for the input x = −5.5.
However, it is also worth noting that for any positive value of the input we are better off
using Algorithm A - Algorithm B would end up with the same cancellation issues we had
originally with Algorithm A.
• The base, which defines the base of the number system being used in the representation.
This is specified as a positive integer bf .
• The mantissa, which contains the normalized value of the number being represented.
Its maximal size is specified as a positive integer mf , which represents the number of
digits allowed in the mantissa.
• The exponent, which effectively defines the offset from normalization. Its maximal
size is specified by a positive integer ef , which represents the number of digits allowed
in the exponent.
In shorthand, we write F [b = bf , m = mf , e = ef ].
Combined, the three components allow us to approximate any real number in the fol-
lowing form:
0.x1 x2 · · · xm × b^{y1 y2 · · · ye}, (1.10)
where 0.x1 x2 · · · xm is the mantissa, b is the base, and y1 y2 · · · ye is the exponent.
Example 1.3 Consider a “decimal” computer with a floating point number system defined
by base 10, 5 digits in the mantissa and 3 digits in the exponent. In shorthand, we write the
system as F [b = 10, m = 5, e = 3].
Consider the representation of the number x = 0.000123458765 under this system. To
find the representation, we first normalize the value and then perform rounding so both the
mantissa and exponent have the correct number of digits:
input x = 0.000123458765
normalize x = 0.123458765 × 10−3
round x̂ = fl(x) = 0.12346 × 10−003
Under this system, our mantissa is bounded by 99999 and the exponent is bounded by
999 (each representing the largest numbers we can display under this base in 5 and 3 digits,
respectively). The largest number we can represent under this system is 0.99999 × 10999 and
the smallest positive number we can represent is 0.00001 × 10−999 .
Note: Instead of rounding in the final step, we can also consider “chopping”. With chop-
ping, we simply drop all digits that cannot be represented under our system. Thus, in our
example we would get x̂ = fl(x) = 0.12345 × 10−003 since all other digits simply would be
dropped.
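A small Python sketch of fl(x) for the hypothetical system F[b = 10, m = 5, e = 3] is given below; the helper name and the renormalization loop are illustrative assumptions (a mantissa that rounds up to exactly 1.0 is not renormalized here, for brevity).

def fl(x, m=5):
    # Round x to m significant decimal digits, mimicking a base-10 mantissa.
    if x == 0.0:
        return 0.0
    mant, e = abs(x), 0
    while mant >= 1.0:        # normalize so that 0.1 <= mant < 1
        mant /= 10.0
        e += 1
    while mant < 0.1:
        mant *= 10.0
        e -= 1
    mant = round(mant, m)     # keep m digits in the mantissa
    return (mant if x > 0 else -mant) * 10.0 ** e

print(fl(0.000123458765))     # about 0.12346 x 10^-3, as in Example 1.3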
Example 1.4 Consider the binary number x = (1101.011)b. Similar to decimal, this nota-
tion is equivalent to writing
Floating-point number conversions from binary work exactly the same way as decimal
conversions, except for the new base:
Example 1.5 Consider the binary number x = (1101.011)b under the floating-point system
F [b = 2, m = 4, e = 3].
input x = (1101.011)_b
normalize x = (0.1101011)_b × 2^4
            = (0.1101011)_b × 2^{(100)_b}
| s_m | mantissa: m = 23 bits | s_e | exponent: e = 7 bits |
Here sm is the sign bit of the mantissa and se is the sign bit for the exponent.
Recall that we normally write floating point numbers in the form given by (1.10). The
typical convention for sign bits is to use 0 to represent positive numbers and 1 to represent
negative numbers.
Example 1.6 Consider the decimal number x = −0.3125. We note that we may write this
in binary as
x = −0.3125
  = −(0 · 2^{−1} + 1 · 2^{−2} + 0 · 2^{−3} + 1 · 2^{−4})
  = −(0.0101)_b
  = −(0.101)_b × 2^{−1}
  = −(0.101)_b × 2^{−(1)_b}
Under the single-precision floating point number system, the largest and smallest num-
bers that can be represented are given as follows (without consideration for normalization,
in the case of the smallest number).
x̂_max = |0|11111111111111111111111|0|1111111|
       = (1 − 2^{−23}) · 2^{127}
       ≈ 2^{127}
       ≈ 1.7 × 10^{38}
x̂_min = |0|00000000000000000000001|1|1111111|
       = 2^{−23} · 2^{−127}
       = 2^{−150}
       ≈ 7.0 × 10^{−46}
Note that there are 512 ways to represent zero under the single precision system proposed here (we only require that the mantissa is zero, meaning that the sign bits and exponent can be arbitrary). Under the IEEE standard (which is used by real computers) there are some additional optimizations that take advantage of this “lost” space (for example, special representations for ±∞ and NaN).
Double Precision Numbers. Double precision is the other standard floating point num-
ber system. Numbers are represented on a computer in a 64-bit chunk of memory (8 bytes),
divided as follows:
| s_m | mantissa: m = 52 bits | s_e | exponent: e = 10 bits |
In this case, the maximum and minimum numbers that can be represented are x_max ≈ 9 × 10^{307} and x_min ≈ 2.5 × 10^{−324}, respectively.
δx = (x − fl(x))/x. (1.11)
This motivates the question: Is there a bound on the absolute value of the relative error |δx|
for a given floating point number system?
An answer to this requires some delving into the way in which floating point numbers
are represented. Consider the floating point number given by
Definition 1.3 The machine epsilon ǫmach is the smallest number ǫ > 0 such that fl(1+
ǫ) > 1.
Proof. For simplicity we only prove part (a) of the proposition. If rounding is used a more
complex analysis will yield the result of part (b).
Consider the following subtraction:
1 + ǫ = 0.1 0 0 · · · 0 1 × b^1   (digits in positions b^{−1}, b^{−2}, . . . , b^{−m})
1     = 0.1 0 0 · · · 0 0 × b^1
ǫ     = 0.0 0 0 · · · 0 1 × b^1 = b^{−m} · b^1 = b^{1−m}
|δx| = |(x − fl(x))/x| ≤ ǫmach. (1.13)
But we know the numerator is less than or equal to 1 (since 1 = 1 · b^0) and the denominator is greater than or equal to 0.1 (since 0.1 = 1 · b^{−1}), which gives the bound above.
Note. Since δx = (1/x)(x − fl(x)), we may also write fl(x) = x(1 − δx) with |δx| ≤ ǫmach.
Hence we often say
fl(x) = x(1 + η) with |η| ≤ ǫmach . (1.14)
Single Precision. Under single precision, m = 23 and so ǫ = 2^{−22} ≈ 0.24 × 10^{−6}. Thus |δx| ≤ 0.24 × 10^{−6}, and so we expect 6 to 7 decimal digits of accuracy.
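The definition of machine epsilon can also be probed experimentally; the short Python loop below (an illustration, not part of the notes' derivation) halves a candidate ǫ until 1 + ǫ/2 is no longer distinguishable from 1.

eps = 1.0
while 1.0 + eps / 2.0 > 1.0:   # keep halving while 1 + eps/2 still differs from 1
    eps /= 2.0
print(eps)                     # about 2.2e-16 in IEEE double precision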
with |η1|, |η2| and |η| ≤ ǫmach. In general the operation of addition under F is not associative. That is,
(a ⊕ b) ⊕ c ≠ a ⊕ (b ⊕ c). (1.18)
Note: There are analogous operations for subtraction, multiplication and division, written
using the symbols ⊖, ⊗, and ⊘.
Definition 1.5 We say that a problem P is well-conditioned with respect to the absolute
error if small changes ∆~x in ~x result in small changes ∆~z in ~z. Similarly, we say P is
ill-conditioned with respect to the absolute error if small changes ∆~x in ~x result in large
changes ∆~z in ~z.
Definition 1.6 The condition number of a problem P with respect to the absolute error is given by the absolute condition number κ_A = ‖∆~z‖ / ‖∆~x‖.
The condition number with respect to the relative error is given by the relative condition number κ_R:
κ_R = (‖∆~z‖ / ‖~z‖) / (‖∆~x‖ / ‖~x‖). (1.20)
ẑ = x̂ + ŷ = (x + y) − (∆x + ∆y).
∆z = z − ẑ = ∆x + ∆y. (1.25)
κ_A ≤ (|∆x| + |∆y|) / (|∆x| + |∆y|) = 1. (1.26)
There are three standard vector norms known as the 2-norm, the ∞-norm and the 1-norm. Further, any vector norm that is induced by an inner product (such as the 2-norm, but not the 1-norm or the ∞-norm) satisfies the Cauchy-Schwarz Inequality, which will be of use later:
Theorem 1.2 Cauchy-Schwarz Inequality. Let ‖ · ‖ be a vector norm over a vector space V induced by an inner product ⟨·, ·⟩. Then |⟨~x, ~y⟩| ≤ ‖~x‖ ‖~y‖ for all ~x, ~y ∈ V.
ẑ = x̂ · ŷ = (x − ∆x)(y − ∆y) = xy − y∆x − x∆y + ∆x∆y ≈ xy − y∆x − x∆y,
where the second-order term ∆x∆y is neglected, and so we define our error by ∆z = z − ẑ ≈ y∆x + x∆y.
We conclude that P is well-conditioned with respect to the absolute error, except when
x or y are large.
and so can immediately conclude that P is well-conditioned with respect to the relative
error. In fact, in this particular case
κ_R = (|∆z|/|z|) / (‖∆~x‖/‖~x‖)
Example 1.9 Stability with respect to the relative error. Consider the problem P defined
by z = exp(x) with x = 5.5. The approximate values for x and z are denoted x̂ and ẑ
respectively. They are related by the following formula:
to yield,
for small |∆x|. We conclude that the problem is well conditioned with respect to
∆, except for large x.
Intermission:
Asymptotic Behaviour of Polynomials
We wish to consider a general method of analyzing the behaviour of polynomials in two cases:
when x → 0 and when x → ∞. In particular, if we are only interested in the asymptotic
behaviour of the polynomial as opposed to the exact value of the function, we may employ
the concept of Big-Oh notation.
Definition 1.11 Suppose that f (x) is a polynomial in x without a constant term. Then the
following are equivalent:
a) f(x) = O(x^n) as x → 0.
b) ∃ c > 0, x0 > 0 such that |f(x)| < c|x|^n ∀ x with |x| < |x0|.
c) f(x) is bounded from above by |x|^n, up to a constant c, as x → 0.
This effectively means that the dominant term in f (x) is the term with xn as x → 0, or
f (x) goes to zero with order n.
Example 1.7 Consider the polynomial g(x) = 3x^2 + 7x^3 + 10x^4 + 7x^{12}. We say
g(x) = O(x^2) as x → 0
g(x) ≠ O(x^3) as x → 0
g(x) = 3x^2 + O(x^3) as x → 0
We note that g(x) = O(x) as well, but this statement is not so useful because it is not a
sharp bound.
Definition 1.12 Suppose that f (x) is a polynomial in x. Then the following are equivalent:
a) f(x) = O(x^n) as x → ∞.
b) ∃ c > 0, x0 > 0 such that |f(x)| < c|x|^n ∀ x with |x| > |x0|.
c) f(x) is bounded from above by |x|^n, up to a constant c, as x → ∞.
As before, this effectively means that the dominant term in f (x) is the term with xn as
x → ∞, or f (x) goes to infinity with order n.
Example 1.8 Consider the polynomial g(x) = 3x^2 + 7x^3 + 10x^4 + 7x^{12}. We say
g(x) = O(x^{12}) as x → ∞
g(x) ≠ O(x^8) as x → ∞
g(x) = 7x^{12} + O(x^4) as x → ∞
f (x) = x + O(x2 ) as x → 0
g(x) = 2x + O(x3 ) as x → 0
Example 1.10 Instability with respect to absolute error ∆. Consider the problem P defined by z = ∫_0^1 x^n/(x + α) dx where α > 0. The approximate values for α and z are denoted α̂ and ẑ respectively. It can be shown that P is well-conditioned (for instance with respect to the integration boundaries 0 and 1).
We derive an algorithm for solving this problem using a recurrence relation. In deriving
a recurrence, we need to consider the base case (a) and the recursive case (b) and then
ensure that they are appropriately related.
a) Consider n = 0:
I_0 = ∫_0^1 1/(x + α) dx = [log(x + α)]_0^1 = log(1 + α) − log(α),
and so we get
I_0 = log((1 + α)/α). (1.35)
b) For general n we can derive the following recurrence:
I_n = ∫_0^1 x^n/(x + α) dx
    = ∫_0^1 x^{n−1}(x + α − α)/(x + α) dx
    = ∫_0^1 x^{n−1} dx − α ∫_0^1 x^{n−1}/(x + α) dx
    = [x^n/n]_0^1 − α I_{n−1},
which yields the expression
I_n = 1/n − α I_{n−1}. (1.36)
Thus we may formulate an algorithm using the base case and the recurrence:
Algorithm A.
1. Calculate I0 from (1.35).
2. Calculate I1 , I2 , . . . , In using (1.36).
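A minimal Python sketch of Algorithm A is given below; the sample values of α and the number of steps are illustrative assumptions.

import math

def algorithm_a(alpha, n):
    # I_0 = log((1+alpha)/alpha), then I_k = 1/k - alpha * I_{k-1}.
    I = math.log((1.0 + alpha) / alpha)
    for k in range(1, n + 1):
        I = 1.0 / k - alpha * I
    return I

for alpha in (0.5, 2.0):
    print(alpha, algorithm_a(alpha, 100))   # I_100; blows up for alpha = 2.0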
An implementation of this algorithm on a computer provides the following results, for
two sample values of α:
For α = 2.0, we obtain a very large result! This may be a first indication that something
is wrong. From the original equation we have that
I_100 = ∫_0^1 x^{100}/(x + α) dx ≤ (1/(1 + α)) · 1 = 1/3. (1.37)
Hence we conclude something is definitely amiss. We need to consider the propagation of
the error in our algorithm, since we know that the algorithm is definitely correct using exact
arithmetic.
Consider In to be the exact value at each step of the algorithm, with Iˆn the numerical
approximation. Then the absolute error at each step of the algorithm is given by ∆In =
In − Iˆn . Thus the initial error is ∆I0 = I0 − Iˆ0 = I0 − fl(I0 ).
The exact value satisfies
I_n = 1/n − α I_{n−1} (1.38)
and the numerical approximation is calculated from
Î_n = 1/n − α Î_{n−1} + η_n, (1.39)
where ηn is the rounding error we introduce in step n.
For a first analysis, we neglect ηn and simply investigate the propagation of the initial
error ∆I0 only. Then
∆I_n = I_n − Î_n = (1/n − α I_{n−1}) − (1/n − α Î_{n−1}) = −α(I_{n−1} − Î_{n−1}) = −α ∆I_{n−1}.
Applying this relation recursively yields an expression for the accumulated error after n steps: ∆I_n = (−α)^n ∆I_0.
Conclusion. From this expression, we note that there are potentially two very different
outcomes:
a) If α > 1 then the initial error is propagated in such a way that blow-up occurs. Thus
Algorithm A is numerically unstable with respect to ∆.
b) If α ≤ 1 then the initial error remains constant or dies out. Thus Algorithm A is
numerically stable with respect to ∆.
We note that further analysis would lead to the conclusion that the rounding error ηn
is also propagated in the same way as the initial error ∆I0 . These results confirm the
experimental results observed in the implementation.
Chapter 2
Root Finding
2.1 Introduction
The root finding problem is a classical example of numerical methods in practice. The
problem is stated as follows:
Problem Given any function f (x), find x∗ such that f (x∗ ) = 0. The value x∗ is called a
root of the equation f (x) = 0.
If f (x) = 0 has a root x∗ there is no truly guaranteed method for finding x∗ compu-
tationally for an arbitrary function, but there are several techniques which are useful for
many cases. A computational limitation inherent to this problem can be fairly easily seen
by an observant reader: any interval of the real line contains an infinite number of points,
but computationally we can solve this problem with only a finite number of evaluations of
the function f .
Additionally, since the value x∗ may not be defined in our floating point number system,
we will not be able to find x∗ exactly. Therefore, we consider a computational version of
the same problem:
Problem (Computational) Given any f (x) and some error tolerance ǫ > 0, find x∗ such
that |f (x∗ )| < ǫ.
Example 2.1
1) Consider the function f(x) = x^2 + x − 6 = (x − 2)(x + 3). The equation f(x) = 0 has two roots, at x = 2 and x = −3.
3) f(x) = 3x^5 + 5x^4 + (1/3)x^3 + 1 = 0. We have no general closed form solution for the roots
of a polynomial with degree larger than 4. As a result, we will need to use numerical
approximation by iteration.
Definition 2.1 We say that x∗ is a double root of f (x) = 0 if and only if f (x∗ ) = 0 and
f ′ (x∗ ) = 0.
We naturally examine this computational problem from an iterative standpoint. That is,
we wish to generate a sequence of iterates (xk ) such that any iterate xk+1 can be written as
some function of xk , xk−1 , . . . , x0 . We assume that some initial conditions are applied to the
problem so that xp , xp−1 , . . . x0 are either given or arbitrarily chosen. Obviously, we require
that the iterates actually converge to the solution of the problem, i.e. x∗ = limk→∞ xk .
A natural question to ask might be, “how do we know where a root of a function may
approximately be located?” A simple result from first year calculus will help in answering
this:
Thus if we can find [a, b] such that f (a) · f (b) < 0 then by the Intermediate Value
Theorem, [a, b] will contain at least one root x∗ as long as f (x) is continuous.
When applying the bisection method, we only require continuity of the function f (x) and
an initial knowledge of two points a0 and b0 such that f (a0 ) · f (b0 ) ≤ 0. We are guaranteed
the existence of a root in the interval [a0 , b0 ] by the Intermediate Value Theorem (2.1) and
are further guaranteed that the bisection method will converge to a solution.
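A minimal Python sketch of the bisection method as described above follows; the test function and tolerance are illustrative choices (the test function is the example used later in this chapter).

import math

def bisection(f, a, b, tol=1e-6):
    # Halve [a, b] until its length is below tol; assumes f(a)*f(b) <= 0.
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        mid = 0.5 * (a + b)
        if f(a) * f(mid) <= 0:   # root lies in the left half
            b = mid
        else:                    # root lies in the right half
            a = mid
    return 0.5 * (a + b)

print(bisection(lambda x: x**2 - 0.5 * math.exp(-x), 0.0, 1.0))  # approx 0.53984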
We consider the question of “speed of convergence,” namely given a, b and t, how many
steps does it take to reach t? If we suppose that x∗ = limk→∞ xk then at each iteration the
interval containing x∗ is halved. Thus, assuming it takes n steps to fulfill |b − a| ≤ t we have
that
2^{−n} |b − a| ≤ t
⇒ n log 2 ≥ log(|b − a|/t)
⇒ n ≥ (1/log 2) · log(|b − a|/t).
Thus we conclude that for a given tolerance t and initial interval [a, b], bisection will take
n ≥ (1/log 2) · log(|b − a|/t) (2.3)
steps to converge.
Example 2.2 Given |b − a| = 1 and t = 10−6 , how many steps does it take to converge?
From (2.3) we have that
n ≥ (1/log10(2)) · log10(10^6)
⇒ n ≥ 3.4 · 6
⇒ n ≥ 20.
Definition 2.2 We say that x∗ is a fixed point of g(x) if g(x∗ ) = x∗ , i.e. if x∗ is mapped
to itself under g.
We note that if our function g has certain desirable properties (in particular, as will
be shown later, if |g ′ (x∗ )| < 1 and x0 is ”close enough” to x∗ ), then repeated application
of g will actually cause us to converge to this fixed point. This implies we can write our
algorithm for fixed-point iteration as follows:
i = 0
x[0] = x0
repeat
i = i + 1
x[i] = g(x[i-1])
until |x[i] - x[i-1]| < t
x = x[i]
We note that it is not required that we limit ourselves to the choice of g(x) = x − f (x)
in applying this scheme. In general we can write g(x) = x − H(f (x)) as long as we choose
H such that H(0) = 0. Not all choices will lead to a converging method. For convergence
we must also choose H such that |g ′ (x∗ )| < 1 (see later).
But we know that f(x*) = 0, and so we find a new, often better approximation x1 from x0 by requiring that 0 = f(x0) + f′(x0)(x1 − x0), i.e. x1 = x0 − f(x0)/f′(x0).
i = 0
x[0] = x0
repeat
i = i + 1
if f’(x[i-1]) == 0 stop
x[i] = x[i-1] - f(x[i-1]) / f’(x[i-1])
until |x[i] - x[i-1]| < t
x = x[i]
f′(x_i) ≈ (f(x_i) − f(x_{i−1})) / (x_i − x_{i−1}). (2.9)
This result can be plugged into Newton’s method to give the defining equation for the Secant
method:
" #
xi − xi−1
xi+1 = xi − f (xi ) . (2.10)
f (xi ) − f (xi−1 )
Note that this method actually requires the two previous values (xi and xi−1 ) in order
to compute xi+1 . Thus, we also need two initial values x0 and x1 in order to begin iteration.
Also, as in Newton’s method where we needed to check for f ′ (xi ) = 0 here we need to be
wary of the case where f (xi ) ≈ f (xi−1 ), as this will potentially give undesirable results.
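A short Python sketch of the Secant method (2.10) is given below; the guard against nearly equal function values follows the caution above, and the test function is an illustrative choice.

import math

def secant(f, x0, x1, tol=1e-10, max_steps=50):
    # Iterate x_{i+1} = x_i - f(x_i)*(x_i - x_{i-1})/(f(x_i) - f(x_{i-1})).
    for _ in range(max_steps):
        f0, f1 = f(x0), f(x1)
        if abs(f1 - f0) < 1e-14:          # nearly equal values: stop to avoid blow-up
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

print(secant(lambda x: x**2 - 0.5 * math.exp(-x), 0.0, 1.0))  # approx 0.53983527690282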
1. Maximum number of steps. Using this method for stopping, we impose some
maximum number of steps in the iteration imax and stop when i = imax . This provides
a safeguard against infinite loops, but is not very efficient - i.e. even if we are very
close to the root after 1 iteration, this method will always run for the same number of
iterations.
2. Tolerance on the size of the correction. Under this criterion, we are given
some tolerance t and stop when |xi+1 − xi | ≤ t. Under the bisection method and
fixed-point iteration (with “spiral-wise” convergence, see later) we actually are able to
guarantee that |xi+1 − x∗ | ≤ t. Unfortunately, this criterion does not guarantee that
|xi+1 − x∗ | ≈ t, in general.
This may not work very well if one desires a small function value in the approximate
root for steep functions such as f(x) = a(x − x*) with a = 10^{11}. Even a small error in
x will mean a large value for f (x).
3. Tolerance on the size of the function value. Under this criterion, we are given
some tolerance t and stop when |f (xi )| < t. This may not work well for a flat function
such as f(x) = a(x − x*) with a = 10^{−9}. In this case, for x_i far from x*, |f(x_i)| may
be smaller than t.
In conclusion, choosing a good stopping criterion is difficult and dependent on the prob-
lem. Often trial and error is used to determine a good criterion and combines any of the
aforementioned options.
ei = xi − x∗ . (2.11)
We will define the rate of convergence by how quickly the error converges to zero (and hence how quickly {x_i}_{i=0}^∞ converges to x*). If {x_i}_{i=0}^∞ diverges from x*, then we note that lim_{i→∞} e_i = ±∞.
With these definitions in mind, we may consider the rate of convergence of each of our
iteration methods. Consider the example given in Table 2.1 and Table 2.2. We wish to
determine the positive root of f(x) = x^2 − (1/2)exp(−x), which has the exact value x* =
0.53983527690282. We measure the value of the iterate xi in Table 2.1 and the value of the
error ei in Table 2.2.
Bisection Method. From the derivation (2.3) we note that, on average, |e_{i+1}| ≈ (1/2)|e_i|. But the error may increase for certain iterations, depending on the initial interval. Thus, we cannot directly apply the definition for convergence to the bisection method, but we nonetheless say that the method behaves like a linearly convergent method, with the following justification:
Consider the sequence defined by {L_i}_{i=1}^∞ with L_i = |b_i − a_i| the length of the interval at step i. We know that L_{i+1} = (1/2)L_i and so the sequence {L_i} converges to 0 linearly. We also know that |e_i| ≤ L_i, and so we say that {e_i} converges to 0 at least linearly.
Fixed Point Iteration. The rate of convergence for this method is highly variable and
depends greatly on the actual problem being solved. In the example at hand, we find that
if we define ci by |ei+1 | = ci |ei | then, on average, it appears that limi→∞ ci = 0.37. Thus
we note that fixed point iteration converges linearly as well.
Secant Method. With a thorough analysis of the Secant method, we find that the Secant method converges faster than fixed point iteration, but slower than Newton's method. If we consider |e_{i+1}| = c_i |e_i|^q we actually find that q = (1/2)(1 + √5) ≈ 1.618. In the example at hand, we actually get that lim_{i→∞} c_i ≈ 0.74.
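These rates can be observed experimentally by tracking the errors e_i = x_i − x* against the exact root quoted above; the sketch below is an illustration (the secant iteration is reimplemented inline) and prints the ratios |e_{i+1}|/|e_i|^q for q ≈ 1.618.

import math

f = lambda x: x**2 - 0.5 * math.exp(-x)
x_star = 0.53983527690282          # exact root quoted in the text
q = 0.5 * (1 + math.sqrt(5))       # expected order of the secant method

x0, x1 = 0.0, 1.0
errors = [abs(x0 - x_star), abs(x1 - x_star)]
for _ in range(6):
    x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    errors.append(abs(x1 - x_star))

# the ratios |e_{i+1}| / |e_i|^q should settle near a constant c_i
for e_prev, e_next in zip(errors[1:], errors[2:]):
    if e_prev > 0 and e_next > 0:
        print(e_next / e_prev**q)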
For fixed point iteration, we construct the iteration sequence x_{i+1} = g(x_i) and iterate to approximate the fixed point x* = g(x*). We demonstrated previously that this is equivalent to root-finding if we use g(x) = x − H(f(x)) for any function H with H(0) = 0. Since f(x*) = 0 by the definition of a root, we also have that H(f(x*)) = H(0) = 0 and so g(x*) = x*. We will show now that this method converges when |g′(x*)| < 1.
The theory for the fixed point iteration method goes hand in hand with the theory behind
contractions and contraction mapping from real analysis:
Definition 2.5 Suppose that g is a real-valued function, defined and continuous on a bounded
closed interval [a, b] of the real line. Then, g is said to be a contraction on [a, b] if there
exists a constant L ∈ (0, 1) such that
|g(x) − g(y)| ≤ L|x − y| for all x, y ∈ [a, b]. (2.16)
[Figure: a contraction g(x) on [a, b] plotted together with the line y = x; the image interval [g(b), g(a)] is smaller than [a, b].]
For one, we notice |g(a) − g(b)| is smaller than |a − b| and so we may say that the interval
[a, b] has been contracted to a smaller interval [g(a), g(b)].
Table 2.1: Root-Finding Iteration x_i (f(x) = x^2 − (1/2)exp(−x))
Table 2.2: Root-Finding Iteration e_i (f(x) = x^2 − (1/2)exp(−x))
[Figure 2.1: two nested rectangles with sides 1, φ and φ − 1, 1.]
The golden ratio appears often in nature and in mathematics. It can be defined using two nested rectangles that have equivalent aspect ratios, as depicted in Figure 2.1. The aspect ratio of the larger rectangle is 1/φ and that of the smaller rectangle is (φ − 1)/1. Equating these gives
1/φ = (φ − 1)/1, (2.13)
or equivalently
φ^2 − φ − 1 = 0. (2.14)
Finally, we apply the quadratic formula and take the positive root to get
φ = (1 + √5)/2, (2.15)
the golden ratio.
Also, we note that for any x, y ∈ [a, b] such that x ≠ y, a simple manipulation indicates that a contraction fulfills
|g(x) − g(y)| / |x − y| ≤ L < 1. (2.17)
Thus we notice that the slope of any secant line within the interval [a, b] cannot exceed L in
absolute value.
An observant reader might notice that this definition of a contraction appears very
similar to the definition for a derivative. In fact, if we have g(x) differentiable on [a, b] with
|g ′ (x)| < 1 ∀ x ∈ [a, b], then g(x) is a contraction on [a, b] with
L = max_{x∈[a,b]} |g′(x)|.
Proof: Existence of the fixed point. The existence of a fixed point x∗ for g is a
consequence of the Intermediate Value Theorem. Define u(x) = x − g(x). Then u(a) = a − g(a) ≤ 0 and u(b) = b − g(b) ≥ 0, since g(x) ∈ [a, b] for all x ∈ [a, b].
Then by the Intermediate Value Theorem, there exists x* ∈ [a, b] such that u(x*) = 0. Thus
x∗ − g(x∗ ) = 0, or equivalently x∗ = g(x∗ ) and so x∗ is a fixed point of g.
Uniqueness of the fixed point. The uniqueness of this fixed point follows from (2.16)
by contradiction. Suppose that g has a second fixed point, x*_2, in [a, b] such that g(x*) = x* and g(x*_2) = x*_2. Then
|x* − x*_2| = |g(x*) − g(x*_2)| ≤ L|x* − x*_2|,
or equivalently, L ≥ 1. However, from the contraction property we know L ∈ (0, 1). Thus
we have a contradiction and so there is no second fixed point.
Convergence property. Let x0 be any element of [a, b]. Consider the sequence {x_i} defined by x_{i+1} = g(x_i), where x_i ∈ [a, b] implies x_{i+1} ∈ [a, b]. For any x_{i−1} in the interval we have, by the contraction property,
|x_i − x*| = |g(x_{i−1}) − g(x*)| ≤ L|x_{i−1} − x*|,
and applying this bound recursively gives
|x_i − x*| ≤ L^i |x0 − x*|.
But since we also know that L ∈ (0, 1), lim_{i→∞} L^i = 0. Thus our inequality reduces to
lim_{i→∞} |x_i − x*| = 0,
or equivalently
lim_{i→∞} x_i = x*.
From the contraction mapping theorem, we note that convergence to the fixed point x* is at least linear; in particular, we get from the contraction property that |e_i| ≤ L|e_{i−1}|.
Intuitively, only one fixed point is allowed since we require a slope greater than 1 to get
multiple fixed points (see Figure 2.3).
Figure 2.3: In order to get multiple fixed points, we need the slope of g(x) to be greater
than 1.
We can determine when a sequence converges from the following corollary to the Con-
traction Mapping Theorem:
Corollary 2.1 Let g be a real-valued function, defined and continuous on a bounded closed
interval [a, b] of the real line, and assume that g(x) ∈ [a, b] for all x ∈ [a, b]. Let x∗ = g(x∗ )
be a fixed point of g(x) with x∗ ∈ [a, b]. Assume there exists δ such that g ′ (x) is continuous
in Iδ = [x∗ − δ, x∗ + δ]. Define the sequence {xi }∞i=0 by xi+1 = g(xi ). Then:
I. If |g ′ (x∗ )| < 1 then there exists ǫ such that {xi } converges to x∗ for |x0 − x∗ | < ǫ.
Further, convergence is linear with limi→∞ ci = |g ′ (x∗ )|.
II. If |g ′ (x∗ )| > 1 then {xi } diverges for any starting value x0 .
Using this corollary, we can come up with a method of choosing our form for g(x) in
terms of f (x) depending on the derivative at the point x∗ .
Example 2.3 Suppose that we somehow know that f ′ (x∗ ) = 3/2 where we wish to solve
for the root using f (x) = 0. Then if we add and subtract x from the equivalent equation
2.4. CONVERGENCE THEORY 41
−f (x) = 0 we get that x − x − f (x) = 0. We define g(x) = x + f (x) so we can apply fixed
point iteration on g(x) to solve x = g(x). Using this definition of g(x) we get that
|g ′ (x∗ )| = |1 + f ′ (x∗ )| = |1 + 3/2| = 5/2 > 1
and so from the corollary we note that we will not have convergence.
If we instead add and subtract x from f (x) = 0 we get that x − x + f (x) = 0. We define
g(x) = x − f (x) so we can apply fixed point iteration on g(x) as before. However, with this
definition of g(x) we get that
|g ′ (x∗ )| = |1 − f ′ (x∗ )| = |1 − 3/2| = 1/2 < 1
and so from the corollary we can choose some x0 close enough to x∗ to get convergence.
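The effect of the two choices can be seen on a hypothetical test function with f′(x*) = 3/2, for instance f(x) = (3/2)(x − 1) with root x* = 1; this specific f is an assumption made for illustration, not taken from the example above.

def fixed_point(g, x0, steps=20):
    # Apply x_{i+1} = g(x_i) a fixed number of times and return the final iterate.
    x = x0
    for _ in range(steps):
        x = g(x)
    return x

f = lambda x: 1.5 * (x - 1.0)        # f'(x*) = 3/2 at the root x* = 1
g_bad  = lambda x: x + f(x)          # |g'(x*)| = 5/2 > 1: diverges
g_good = lambda x: x - f(x)          # |g'(x*)| = 1/2 < 1: converges

print(fixed_point(g_bad, 1.1))       # wanders far from 1
print(fixed_point(g_good, 1.1))      # approaches 1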
lim_{i→∞} c_i = |f″(x*)| / (2|f′(x*)|). (2.18)
Note that in this case, the defining equation for c_i is |x_{i+1} − x*| = c_i |x_i − x*|^2. We
note that if f ′ (x∗ ) = 0 then the rate of convergence degrades to linear convergence. If the
conditions for Newton’s method are not met, we may not be able to achieve convergence for
any starting value x0 .
Theorem 2.5 Convergence Theorem for Secant Method (Atkinson p.67). If f(x*) = 0, f′(x*) ≠ 0 and f, f′ and f″ are all continuous in I_δ = [x* − δ, x* + δ], with x0 sufficiently close to x*, then the sequence {x_i}_{i=0}^∞ converges with order q = (1/2)(1 + √5).
2.4.4 Overview
Each of the root-finding methods has its own strengths and weaknesses that make it
particularly useful for some situations and inapplicable in others. The following two tables
provide a summary of the functionality of each root-finding method:
In practice, MATLAB uses a combination of the Secant method, Bisection method and
a method not discussed here called Inverse Quadratic Interpolation (the combined method
is accessible through the fzero command). It uses the method that converges fastest if
possible, defaulting to the guaranteed convergence of Bisection if necessary. Further, it
requires no knowledge of the derivative. This approach allows MATLAB’s general root-
finding function fzero to be well-suited to a variety of applications.
Chapter 3
Numerical Linear Algebra
3.1 Introduction
Consider the linear system A~x = ~b, where A ∈ R^{n×n} and ~b ∈ R^n are given and ~x is the unknown. In solving such a system we are concerned with:
• accuracy (we must make sure that this is a well-conditioned problem and that we
have a stable algorithm)
• efficiency (we wish to solve potentially large systems with limited resources)
This problem has several well-known applications. For example, the Google search engine
uses linear systems in order to rank the results it retrieves for each keyword search. Here
we see that efficiency is very important: the linear systems they use may contain more than
three billion equations!
We may also ask if the problem of solving a given linear system is well posed, i.e., can
we find a single unique solution ~x that satisfies the linear system? We may appeal to a
standard result from linear algebra to answer this question:
3.2.1 LU Factorization
Before continuing, we first consider the definition of a triangular matrix. Linear systems
involving triangular matrices are very easy to solve and appear when performing Gaussian
elimination.
In this case A^{(2)} is obtained from A^{(1)} by taking linear combinations of the first row of A^{(1)} with each of the other rows so as to generate zeroes in the first column. This operation
may also be represented by matrix multiplication on the left with the matrix M1 :
M_1 A^{(1)} = A^{(2)}
[ 1  0  0 ] [ 1 2 3 ]   [ 1  2   3 ]
[−4  1  0 ] [ 4 5 6 ] = [ 0 −3  −6 ]
[−7  0  1 ] [ 7 8 1 ]   [ 0 −6 −20 ].
In general, we may write
M_1 = [ 1, 0, 0; −a_21^{(1)}/a_11^{(1)}, 1, 0; −a_31^{(1)}/a_11^{(1)}, 0, 1 ],   A^{(2)} = [ a_11^{(2)}, a_12^{(2)}, a_13^{(2)}; a_21^{(2)}, a_22^{(2)}, a_23^{(2)}; a_31^{(2)}, a_32^{(2)}, a_33^{(2)} ].
We now choose a_22^{(2)} as the pivot element. We may then compute the matrix A^{(3)} using M_2 · A^{(2)} = A^{(3)}:
Step i = 3:  M_2 = [ 1, 0, 0; 0, 1, 0; 0, −(−6)/(−3), 1 ],   A^{(3)} = [ 1, 2, 3; 0, −3, −6; 0, 0, −8 ],
where we obtain A^{(3)} by
M_2 A^{(2)} = A^{(3)}
[ 1  0  0 ] [ 1  2   3 ]   [ 1  2  3 ]
[ 0  1  0 ] [ 0 −3  −6 ] = [ 0 −3 −6 ]
[ 0 −2  1 ] [ 0 −6 −20 ]   [ 0  0 −8 ].
M_2 · (M_1 · A) = U
(M_2 · M_1) · A = U
A = (M_2 · M_1)^{−1} U
A = M_1^{−1} M_2^{−1} U
We will define a matrix L by M_1^{−1} M_2^{−1} = L and so may write the matrix A as the product A = LU. We now wish to consider the properties of the matrices M_i and L. Consider the matrix M_1, introduced above, and its inverse:
M_1 = [ 1, 0, 0; −4, 1, 0; −7, 0, 1 ],   M_1^{−1} = [ 1, 0, 0; 4, 1, 0; 7, 0, 1 ].
We define the matrix Li as the inverse of the matrix Mi (so Li Mi = I). The following
inversion property then follows from the structure of the matrix Mi :
Inversion Property: Li can be obtained from Mi by swapping the signs of the off-
diagonal elements.
Example 3.2 Consider the matrix M2 , introduced above. A simple calculation allows us to
obtain the following result:
L_2 = M_2^{−1} = [ 1, 0, 0; 0, 1, 0; 0, −2, 1 ]^{−1} = [ 1, 0, 0; 0, 1, 0; 0, 2, 1 ],
which satisfies the inversion property.
In addition, the structure of L can be directly determined from the matrices Li using
the combination property:
Combination Property: In general, L = ∏_{i=1}^{n−1} L_i = L_1 · L_2 · · · L_{n−1}. L can be obtained from the L_i by placing all of the off-diagonal elements of the matrices L_i in the corresponding position in L.
We note that the matrix L is a special type of lower-triangular matrix, defined as follows:
Definition 3.2 L is called a lower triangular matrix with unit diagonal if and only
if the matrix elements of L vanish above the diagonal and are 1 on the diagonal (i.e. ℓij =
0 ∀ j > i and ℓii = 1 ∀ i).
A = L U
[ 1 2 3 ]   [ 1 0 0 ] [ 1  2  3 ]
[ 4 5 6 ] = [ 4 1 0 ] [ 0 −3 −6 ]
[ 7 8 1 ]   [ 7 2 1 ] [ 0  0 −8 ].
The technique discussed here may be generalized to square matrices of any size and is
more generally known as LU decomposition.
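The factorization of the 3 × 3 example can be reproduced with a few lines of Python; this is a minimal sketch without pivoting, and the function name is an illustrative choice.

import numpy as np

def lu_no_pivot(A):
    # Return L (unit lower triangular) and U such that A = L @ U, without pivoting.
    A = A.astype(float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for p in range(n - 1):             # pivot column
        for r in range(p + 1, n):      # rows below the pivot
            L[r, p] = U[r, p] / U[p, p]
            U[r, :] -= L[r, p] * U[p, :]
    return L, U

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 1]])
L, U = lu_no_pivot(A)
print(L)                               # [[1,0,0],[4,1,0],[7,2,1]]
print(U)                               # [[1,2,3],[0,-3,-6],[0,0,-8]]
print(np.allclose(L @ U, A))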
A(1) = A
A(2) = M1 · A(1)
A(3) = M2 · A(2) = M2 M1 A(1)
..
.
A(n) = Mn−1 · A(n−1)
We obtain M_j from the identity matrix by replacing the entries below the diagonal in column j with
c_ij = −a_ij^{(j)} / a_jj^{(j)},   i > j.
After computing the product of all M_i, we have
M_{n−1} · M_{n−2} · · · M_2 · M_1 · A = U,
which may be inverted to obtain
A = L_1 · L_2 · · · L_{n−2} · L_{n−1} · U,
and so obtain the LU decomposition of A,
A = L · U.
Since L and U are triangular matrices, we may easily compute the solution to a linear
system with either matrix L or U . The decomposition then leads to the final step in our
method of solving a general linear system:
Note: Retaining L and U is advantageous when systems have to be solved with the same
A and multiple right-hand-side vectors ~b.
3.2.2 Pivoting
We observe that the LU decomposition algorithm breaks down when at some step i the
pivot element aii is equal to zero. However, this problem does not necessarily imply that
the system is unsolvable.
We note that the pivot element in this example is zero in the first step, making it
impossible to proceed using the LU decomposition algorithm described previously. However,
we will have no problem proceeding if we swap the first and second rows before applying
the LU decomposition.
[ 2 1 ] [ x1 ]   [ 3 ]
[ 0 1 ] [ x2 ] = [ 1 ].
In this case, swapping rows reduces the system to upper-triangular form, and so we may
solve it very easily. By inspection, this system has the solution ~x = (1 1)T .
More formally, the operation of swapping rows can be written as multiplication on the
left with a permutation matrix P . For example, in the previous example we may write
P = [ 0 1; 1 0 ],   A = [ 0 1; 2 1 ],
⟹ PA = [ 0 1; 1 0 ] [ 0 1; 2 1 ] = [ 2 1; 0 1 ].
Definition 3.3 P ∈ Rn×n is a permutation matrix if and only if P is obtained from the
unit matrix In by swapping any number of rows.
Theorem 3.2 For all A ∈ Rn×n there exists a permutation matrix P , a unit lower trian-
gular matrix L and an upper triangular matrix U (all of type Rn×n ) such that P A = LU .
Proof. We write our linear system as A~x = ~b and multiply both sides by the permutation
matrix P (from Theorem 3.2) to obtain P A~x = P~b. From Corollary 3.1, we have that
P A = LU and so may write LU ~x = P~b. Thus we may simply apply forward and backward
substitution and so solve this system by LU decomposition. The substitution steps will not
lead to divisions by zero, as L and U do not have any vanishing diagonal elements. This follows because det(A) = det(U) det(L)/det(P) ≠ 0, which means that ∏_{i=1}^{n} u_ii ≠ 0, as det(P) = ±1 and det(L) = 1 (see Section 3.2.4).
LU-Decomposition
L = diag(1)
U = A
for p = 1:n-1
for r = p+1:n
m = -u(r,p)/u(p,p)
u(r,p) = 0
for c = p+1:n
u(r,c) = u(r,c) + m * u(p,c)
end for
l(r,p) = -m
end for
end for
Here, the variable p represents the row of the pivot element, r is the current row and c
is the current column.
A = LU = [ 1 0 0 0; × 1 0 0; × × 1 0; × × × 1 ] [ × × × ×; 0 × × ×; 0 0 × ×; 0 0 0 × ].
We may store L and U together as a single matrix since we know that the diagonal com-
ponents of L are all equal to 1. Thus the diagonal and upper-triangular portion of the
new combined matrix will consist of the elements of U and the lower-triangular portion will
consist of the elements of L.
With this storage mechanism, we can also perform LU decomposition in place; i.e. per-
form all computations directly on top of the input matrix A. However, using this technique
we lose the original contents of the matrix A.
(Close inspection reveals that the additional multiplication only affects the O(n^2) terms.)
Row i of the system L~y = ~b reads y(i) + Σ_{k=1}^{i−1} L(i,k) y(k) = b(i), since ℓ_ii = 1. We rewrite this as
y(i) = b(i) − Σ_{k=1}^{i−1} L(i,k) y(k).
Thus we may formulate an algorithm for forward substitution by solving for each y(i) in
turn:
Forward Substitution
y = b
for r = 2:n
for c = 1:r-1
y(r) = y(r) - L(r,c) * y(c)
end for
end for
Here, r is the current row and c is the current column. We may determine the compu-
tational cost of the algorithm, as with LU decomposition:
W = Σ_{r=2}^{n} Σ_{c=1}^{r−1} (1M + 1A)
  = Σ_{r=2}^{n} (r − 1)(1M + 1A)
  = Σ_{s=1}^{n−1} s (1M + 1A)
  = (1/2) n(n − 1)(M + A)
  = n(n − 1) flops
Thus, the total number of floating point operations required for Forward Substitution is n(n − 1) = O(n^2).
For Backward Substitution, row i of the system U~x = ~y reads
Σ_{k=i}^{n} u(i,k) x(k) = y(i).
We rewrite this as
" n
#
X 1
x(i) = y(i) − u(i, k)x(k) .
u(i, i)
k=i+1
Thus we may formulate an algorithm for backward substitution by solving for each x(i)
in turn:
Backward Substitution
x = y
for r = n:-1:1
for c = r+1:n
x(r) = x(r) - U(r,c) * x(c)
end for
x(r) = x(r) / U(r,r)
end for
Here, r is the current row and c is the current column. The computational complexity
will be the same as with forward substitution:
Note: If u(i, i) = 0 for some i, the backward substitution algorithm breaks down, but this can never happen if det(A) ≠ 0.
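For completeness, here is a small Python version of the two substitution algorithms; it is a sketch that assumes L has a unit diagonal, as in the decomposition above, and the test data reuse the earlier 3 × 3 example.

import numpy as np

def forward_substitution(L, b):
    # Solve L y = b for unit lower triangular L.
    y = b.astype(float)
    for r in range(1, len(b)):
        y[r] -= L[r, :r] @ y[:r]
    return y

def backward_substitution(U, y):
    # Solve U x = y for upper triangular U.
    x = y.astype(float)
    for r in range(len(y) - 1, -1, -1):
        x[r] -= U[r, r + 1:] @ x[r + 1:]
        x[r] /= U[r, r]
    return x

L = np.array([[1, 0, 0], [4, 1, 0], [7, 2, 1]], dtype=float)
U = np.array([[1, 2, 3], [0, -3, -6], [0, 0, -8]], dtype=float)
b = np.array([6, 15, 16], dtype=float)                 # A @ [1, 1, 1] with A = L @ U
print(backward_substitution(U, forward_substitution(L, b)))   # [1. 1. 1.]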
3.2.4 Determinants
Before continuing, we consider some of the properties of the determinant.
with A_ij the (n − 1) × (n − 1) matrix obtained by removing row i and column j from the original matrix A; that is,
det(A) = Σ_{j=1}^{n} (−1)^{i+j} a_ij det(A_ij), for fixed i.
This is the expansion of the determinant about row i for any 1 ≤ i ≤ n. We may also
consider the expansion of the determinant about column j for any 1 ≤ j ≤ n as follows:
det(A) = Σ_{i=1}^{n} (−1)^{i+j} a_ij det(A_ij), for fixed j. (3.7)
Example 3.5 We compute the determinant of the matrix used in Example 3.1 using an expansion of the first row:
det [ 1 2 3; 4 5 6; 7 8 1 ] = 1 · det [ 5 6; 8 1 ] − 2 · det [ 4 6; 7 1 ] + 3 · det [ 4 5; 7 8 ] = 24.
The determinant satisfies several useful properties, which we may formulate in the fol-
lowing proposition:
The proofs of 1, 3, and 4 are similar and are left as an exercise for the reader.
Recall that we may solve the linear system A~x = ~b using Gaussian elimination as follows:
• Phase 1: P A = LU (LU decomposition with pivoting)
• Phase 2: L~y = P~b (forward substitution)
• Phase 3: U~x = ~y (backward substitution)
However, recall that the algorithm for the LU decomposition (phase 1) can only be
performed if there are no divisions by zero. How can we guarantee that this will not occur?
Proposition 3.2 Consider a matrix A ∈ Rn×n . Then det(A) 6= 0 if and only if the decom-
position P A = LU has uii 6= 0 ∀ i.
Proof.
P A = LU
det(P A) = det(LU)
det(P) det(A) = det(L) det(U), with det(P) = ±1, det(L) = 1, det(U) = ∏_{i=1}^{n} u_ii,
± det(A) = ∏_{i=1}^{n} u_ii.
If we multiply det(A) by x1 and apply a property of determinants, we may take the x1 inside
the determinant along one of the columns.
x1 det(A) = x1 det [ a11 a12 a13; a21 a22 a23; a31 a32 a33 ] = det [ x1·a11, a12, a13; x1·a21, a22, a23; x1·a31, a32, a33 ]. (3.9)
By another property of determinants, we may add to a column a linear combination of the
other columns without changing the determinant. We write
x1 det(A) = det [ x1·a11 + x2·a12 + x3·a13, a12, a13; x1·a21 + x2·a22 + x3·a23, a22, a23; x1·a31 + x2·a32 + x3·a33, a32, a33 ] = det [ b1, a12, a13; b2, a22, a23; b3, a32, a33 ]. (3.10)
We define D_i as the determinant of A with the ith column replaced with ~b. If det(A) ≠ 0, it follows from (3.10) that x_i = D_i / det(A). So, for our simple linear system, with D = det(A):
x1 = det [ b1 a12 a13; b2 a22 a23; b3 a32 a33 ] / D,   x2 = det [ a11 b1 a13; a21 b2 a23; a31 b3 a33 ] / D,   x3 = det [ a11 a12 b1; a21 a22 b2; a31 a32 b3 ] / D.
This procedure is known as Cramer’s Rule and can be generalized to a set of n equations.
Using this method, we can solve our linear system by calculating the determinants of n + 1
matrices.
Consider the general case of A ∈ Rn×n . In order to apply Cramer’s Rule we must compute
n + 1 determinants, each of a matrix of size n × n. If we use the recursive method to
compute each of these determinants, for each of the n + 1 determinants, we must compute n
determinants of a matrix of size (n − 1) × (n − 1). A short calculation reveals that we need
to compute (n + 1)! determinants in the general case. This complexity is much higher than that of any polynomial-time algorithm; in fact, it is even worse than an exponential-time algorithm! Therefore, calculation of the determinant as in the proof of Proposition 3.2, which requires O(n^3) operations, is a much better idea.
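To make the contrast concrete, a direct recursive implementation of the cofactor expansion looks like the sketch below (the function name is an illustrative choice); it performs on the order of n! work, whereas numpy's determinant is computed via a factorization in O(n^3) operations.

import numpy as np

def det_cofactor(A):
    # Determinant by recursive expansion about the first row (O(n!) work).
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # drop row 0, column j
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 1]], dtype=float)
print(det_cofactor(A), np.linalg.det(A))    # both about 24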
1. ‖A‖_p ≥ 0, and ‖A‖_p = 0 ⇔ A = 0.
2. ‖αA‖_p = |α| ‖A‖_p.
Since the natural matrix p-norm satisfies the properties of a norm, it can be used as a
mechanism for measuring the “size” of a matrix. This will be useful later when considering
condition and stability.
From this statement of the problem, we may write ~x = f~(A, ~b). If we want to consider the
condition of this problem, there are then two dependent variables which can be perturbed
and will contribute to the condition number. We want to consider the change ∆~x if we
have inputs A + ∆A and ~b + ∆~b.
Example 3.6 Consider the linear system A~x = ~b and solution ~x given by
[ 1      2     ] [ x1 ]   [ 3   ]          [ 1 ]
[ 0.499  1.001 ] [ x2 ] = [ 1.5 ],   ~x = [ 1 ].
We perturb A by a small matrix ∆A to yield a new linear system and solution given by
[ 1      2     ] [ x1 ]   [ 3   ]          [ 3 ]
[ 0.500  1.001 ] [ x2 ] = [ 1.5 ],   ~x = [ 0 ].
Thus a small change in A results in a large change in ~x; this seems to imply that the problem
is ill conditioned.
[Figure: the two systems plotted as lines in the plane; the small change in one coefficient moves the intersection point from (1, 1) to (3, 0).]
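This sensitivity can be quantified with the matrix condition number κ(A) = ‖A‖ ‖A^{−1}‖ introduced below; as a quick illustration, the following sketch reproduces the two solutions of Example 3.6 with numpy and prints the (large) 2-condition number.

import numpy as np

A  = np.array([[1.0, 2.0], [0.499, 1.001]])
dA = np.array([[0.0, 0.0], [0.001, 0.0]])     # the small perturbation from Example 3.6
b  = np.array([3.0, 1.5])

print(np.linalg.solve(A, b))          # [1, 1]
print(np.linalg.solve(A + dA, b))     # [3, 0]
print(np.linalg.cond(A))              # large condition number explains the sensitivity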
In order to examine the condition of the initial problem P (A~x = ~b) we need to consider
a slight perturbation on the input data A and ~b,
We now look for bounds on κR . We note that it is difficult to derive a bound on κR without
considering a simplification of the defining equation (3.12). As a result, we consider the two
most obvious simplified cases:
‖∆~x‖/‖~x‖ ≤ ‖A^{−1}‖ ‖A‖ · ‖∆~b‖/‖~b‖,
κ_R = (‖∆~x‖/‖~x‖) / (‖∆~b‖/‖~b‖) ≤ ‖A‖ ‖A^{−1}‖. (3.16)
Thus if κ(A) is small, problem P is well conditioned. The natural error due to rounding in ~b produces an error of relative magnitude ‖∆~b‖/‖~b‖ ≈ ǫmach, so it follows that there will be an error in ~x of relative magnitude ‖∆~x‖/‖~x‖ ≤ κ(A) · ǫmach.
We expand this equation and subtract A~x = ~b. This allows us to bound ‖∆~x‖.
We may apply the approximation ‖∆~x‖/‖~x + ∆~x‖ ≈ ‖∆~x‖/‖~x‖ for ∆~x small. This gives us
an expression for the relative condition number in this case:
κ_R = (‖∆~x‖/‖~x‖) / (‖∆A‖/‖A‖) ≤ ‖A‖ ‖A^{−1}‖. (3.18)
Case 3: In the case of perturbations in A and ~b, i.e. ∆A ≠ 0 and ∆~b ≠ 0, it can also be shown that κ_R ≤ κ(A) = ‖A‖ ‖A^{−1}‖. However, the derivation is tedious and so will not be given here.
From these three cases, it appears that the condition number of a matrix κ(A) is all
we need to determine the condition number of the problem P defined by the linear system
A~x = ~b. In particular, we note that the 2-condition number of a matrix (defined by using
the 2-norm) has a useful property that makes it unnecessary to compute the inverse A−1 :
Proof. We know that ‖A‖_2 = max_{1≤i≤n} (λ_i(A^T A))^{1/2}, where λ_i(A^T A) > 0 ∀ i and the λ_i are eigenvalues of A^T A. Also, if B~x = λ_i ~x then B^{−1}~x = λ_i^{−1} ~x, and so the λ_i^{−1} are eigenvalues of B^{−1} = (A^T A)^{−1}. Using this property with B = (A^T A)^{−1}, we obtain
A A^T ~x = λ_i ~x
A^T (A A^T ~x) = λ_i A^T ~x
A^T A ~y = λ_i ~y,   with ~y = A^T ~x.
Thus the result follows from the definition of the condition number of a matrix.
Problem Consider the mathematical problem defined by z(x) = a/x with a constant.
We now wish to know the absolute and relative condition number of this problem. The absolute condition number is computed from
κ_A = |∆z/∆x| ≈ |dz(x)/dx| = |−a/x^2| = |a|/x^2.
Thus, if x is small then this problem is ill-conditioned with respect to the absolute error.
The relative condition number is computed from
κ_R = (|∆z|/|z|) / (|∆x|/|x|) ≈ |dz(x)/dx · x/z| = |(−a/x^2) · (x^2/a)| = 1,
so the problem is well-conditioned with respect to the relative error. These calculations in-
dicate that dividing by a small number (or multiplying by a large number) is ill-conditioned;
this is a bad step in any algorithm because the absolute error may increase by a lot.
We conclude that the problem of linear systems will be ill-conditioned if we have many
divisions by small numbers. Consider the following example, with δ small.
Consider another approach to this problem, where we first interchange the two equations
(and so use pivot 1 instead of δ). We rewrite the linear system as
[ 1 1 ] [ x1 ]   [ 2 ]
[ δ 1 ] [ x2 ] = [ 1 ].
Thus, after applying Gaussian elimination,
[ 1    1   ] [ x1 ]   [ 2      ]
[ 0  1 − δ ] [ x2 ] = [ 1 − 2δ ].
We recompute under the finite precision system:
x̂2 = fl(1 − 2δ)/fl(1 − δ) = fl(1 − 2 · 10^{−5})/fl(1 − 10^{−5}) = fl(0.99998)/fl(0.99999) = 1,
x̂1 = fl(2 − x̂2) = fl(2 − 1) = 1,
and so avoid the large error.
We note that we divide by the pivot element a_22^{(2)} in M_2. In order to minimize the error, we
should rearrange the rows in every step of the algorithm so that we get the largest possible
pivot element (in absolute value). This will give us the most stable algorithm for computing
the solution to the linear system since we avoid divisions by small pivot elements. This
approach is called LU decomposition with partial pivoting.
Proof. Left as an exercise for the reader (this can be easily proven with the Gershgorin
Circle Theorem).
and determine ~xnew from ~xold by either the Jacobi or Gauss-Seidel algorithm.
a11 x1^new + a12 x2^old + a13 x3^old = b1
a21 x1^old + a22 x2^new + a23 x3^old = b2 (3.22)
a31 x1^old + a32 x2^old + a33 x3^new = b3.
This system may be easily rearranged to solve for ~xnew , giving us the defining equation for
the Jacobi method:
x_i^new = (1/a_ii) ( b_i − Σ_{j=1, j≠i}^{n} a_ij x_j^old ). (3.23)
Gauss-Seidel: In the Jacobi method we may write ~x^new = J(~x^old) for some function J. However, there is no reason to ignore the elements of x_i^new derived earlier in the same step: we can indeed construct a linear system as
a11 x1^new + a12 x2^old + a13 x3^old = b1
a21 x1^new + a22 x2^new + a23 x3^old = b2 (3.24)
a31 x1^new + a32 x2^new + a33 x3^new = b3.
For both of these methods, we must choose a starting vector ~x^(0) and generate the sequence ~x^(1), ~x^(2), . . . = {~x^(i)}_{i=1}^∞. We may also formulate these methods in matrix form using the decomposition A = A_L + A_D + A_R:
A = A_L + A_D + A_R, (3.26)
where A_L is the strictly lower triangular part of A, A_D its diagonal part, and A_R its strictly upper triangular part.
Theorem 3.3 Consider A~x = ~b and let ~x^(0) be any starting vector. Let {~x^(i)}_{i=0}^∞ be the sequence generated by either the Jacobi or the Gauss-Seidel iterative method. Then if A is strictly
sequence generated by either Jacobi or Gauss-Seidel iterative methods. Then if A is strictly
diagonally dominant the sequence converges to the unique solution of the system A~x = ~b.
Since this theorem is only a sufficient condition, we can have a matrix A that is not
strictly diagonally dominant but leads to a convergent method.
In general, we note that often Gauss-Seidel will converge faster than Jacobi, but this is
not always the case. For sparse matrices, we often obtain W = O(n^2) for both methods.
Definition 3.9 The residual of a linear system A~x = ~b for some vector ~u is
~r = ~b − A~u. (3.29)
We write the residual at step i as ~r(i) . As ~r(i) becomes smaller, we approach convergence
of the system. Hence, for a given relative tolerance t_rel = 10^{−6} (for example), we compute the residual at each step and stop when ‖~r^(i)‖_2 / ‖~r^(0)‖_2 ≤ t_rel.
For an iterative approximation ~u, the error is given by ~e = ~x − ~u. This leads to the
following relation between the error and the residual: A~e = A(~x − ~u) = ~b − A~u = ~r.
There are several benefits to using the residual instead of the error:
• ~r can be calculated easily (because calculating a matrix-vector product is “cheap” in
terms of computational work compared with solving a linear system).
• ~e is generally unknown.
Sometimes ~r is small but ~e is large (this may happen when κ(A) is large). Nevertheless,
we will assume our linear system is well-conditioned and use ~r instead of ~e in the stopping
criterion.
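A compact Python sketch of Jacobi iteration with the relative-residual stopping test described above is given below; the test matrix is an illustrative strictly diagonally dominant example, not one taken from the notes.

import numpy as np

def jacobi(A, b, tol=1e-6, max_iter=1000):
    # Jacobi iteration, stopped when ||r_i||_2 / ||r_0||_2 <= tol.
    x = np.zeros(len(b))
    D = np.diag(A)                       # diagonal entries a_ii
    r0 = np.linalg.norm(b - A @ x)
    for _ in range(max_iter):
        x = (b - (A @ x - D * x)) / D    # x_i^new = (b_i - sum_{j != i} a_ij x_j^old) / a_ii
        if np.linalg.norm(b - A @ x) <= tol * r0:
            break
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])   # strictly diagonally dominant
b = np.array([5.0, 8.0, 8.0])
print(jacobi(A, b), np.linalg.solve(A, b))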
If we knew the error ~e for any approximation ~u we could write out the solution to the
linear system directly:
~x = ~u + (~x − ~u) = ~u + ~e,
i.e. exact solution = approximation + error.
However, since the error is unknown, we can instead use the residual:
~x = ~u + A−1~r. (3.30)
with det(A) ≠ 0. If there exists a p-norm for which ‖I − B^{−1}A‖_p < 1, then the iterative method will converge for any starting value ~x^(0). Convergence will then hold in any p-norm.
‖~e^(i+1)‖_p ≤ ‖I − B^{−1}A‖_p^{i+1} ‖~e^(0)‖_p.
lim_{i→∞} ‖~e^(i+1)‖_p = 0,
which is equivalent to
lim_{i→∞} ~x^(i) = ~x.
We note that ‖I − B^{−1}A‖_p < 1 means that I ≈ B^{−1}A, or B^{−1} ≈ A^{−1}. Recall from before that we require B^{−1} to be an approximation for A^{−1} in order to apply equation (3.31). The
convergence theorem is very similar to Theorem 2.3 (The Contraction Mapping Theorem),
which governs convergence of the fixed point method. In fact, Banach’s Contraction Theorem
(from real analysis) provides a generalization of both theorems.
Since (3.32) is a general form for iterative methods, we should be able to determine B for
the Jacobi and Gauss-Seidel methods. We rewrite the matrix form of Jacobi into standard
form:
Chapter 4
Discrete Fourier Methods
4.1 Introduction
We define a complex number z as z = a + ib, with a the real part of z, b the imaginary part, and i = √−1. As such, we may depict a complex number z = a + ib as a vector in the complex plane:
Im(z)
z = a+ib
b
0 Re(z)
a
Term: Definition
Complex conjugate: $\bar{z} = a - ib$
Real part: $\text{Re}(z) = a$
Imaginary part: $\text{Im}(z) = b$
Modulus: $r = |z| = \sqrt{a^2 + b^2}$
Phase angle: $\theta = \arctan(b/a)$
We may write the complex number in terms of the modulus and phase angle
z = r exp(iθ), (4.1)
in conjunction with the Euler formulas:
exp(iθ) = cos(θ) + i sin(θ), (4.2)
exp(−iθ) = cos(θ) − i sin(θ). (4.3)
We may invert these formulas in order to write sinusoids in terms of complex exponentials:
$$\cos(\theta) = \frac{1}{2}\left(\exp(i\theta) + \exp(-i\theta)\right), \qquad (4.4)$$
$$\sin(\theta) = \frac{1}{2i}\left(\exp(i\theta) - \exp(-i\theta)\right). \qquad (4.5)$$
Consider an arbitrary wave signal given by y(t) = sin(2π · kt) for some constant k. The
frequency of the wave is defined by
f = k, (4.6)
and has dimensions of oscillations per second (or Hz). The period is defined as the time
required to complete one oscillation. It is related to the frequency by
$$T = \frac{1}{f} = \frac{1}{k}, \qquad (4.7)$$
and has dimensions of seconds. The angular frequency ω is related to the frequency by
ω = 2π f (4.8)
with dimensions of radians per second. These definitions can be easily generalized to any
periodic function (not just sinusoids). This characterization is broadly applicable to sound
waves, water waves and any other periodic behaviour. Human audible sound, for example,
occurs in the frequency range from 20Hz to 20kHz.
[Figure: three example sinusoids. $y(t) = \sin(2\pi t)$: $f = 1$ oscillation/sec $= 1$ Hz, $T = 1$ sec, $\omega = 2\pi$ rad/sec. $y(t) = \sin(2\pi\,\tfrac{1}{3}t)$: $f = \tfrac{1}{3}$ Hz, $T = 3$ sec, $\omega = \tfrac{2}{3}\pi$ rad/sec. $y(t) = \sin(2\pi\cdot 3t)$: $f = 3$ Hz, $T = \tfrac{1}{3}$ sec, $\omega = 6\pi$ rad/sec.]
[Figure: a function $f(x)$ defined on the interval $[a, b]$.]
The Fourier series of $f(x)$ over $[a, b]$ is
$$g(x) = \frac{a_0}{2} + \sum_{k=1}^{\infty}\left[a_k\cos\left(k\,\frac{2\pi x}{b-a}\right) + b_k\sin\left(k\,\frac{2\pi x}{b-a}\right)\right], \qquad (4.9)$$
with
$$a_k = \frac{2}{b-a}\int_a^b f(x)\cos\left(k\,\frac{2\pi x}{b-a}\right)dx \qquad (4.10)$$
$$b_k = \frac{2}{b-a}\int_a^b f(x)\sin\left(k\,\frac{2\pi x}{b-a}\right)dx. \qquad (4.11)$$
Example 4.1 Compute the Fourier series of the function $f(x)$ defined by
$$f(x) = \begin{cases} -\frac{\pi}{4} & x \in [-\pi, 0) \\ \;\;\,\frac{\pi}{4} & x \in [0, \pi] \end{cases}$$
over the interval $[a, b] = [-\pi, \pi] \Rightarrow b - a = 2\pi$.
[Figure: the square wave $f(x)$, equal to $-\pi/4$ on $[-\pi, 0)$ and $\pi/4$ on $[0, \pi]$.]
We note that $f(x)$ is an odd function. Since $\cos(kx)$ is even, it follows that $f(x)\cos(kx)$ is odd. Thus,
$$a_k = \frac{2}{2\pi}\int_{-\pi}^{\pi} f(x)\cos(kx)\,dx = 0$$
for all $k$. We compute $b_k$ as follows:
$$b_k = \frac{2}{2\pi}\int_{-\pi}^{\pi} f(x)\sin(kx)\,dx = \frac{4}{2\pi}\int_0^{\pi} f(x)\sin(kx)\,dx \quad(\text{because odd}\times\text{odd}=\text{even})$$
$$= \frac{1}{2}\int_0^{\pi}\sin(kx)\,dx = \frac{1}{2}\,\frac{1}{k}\Big[-\cos(kx)\Big]_0^{\pi} = \frac{1}{2k}\left(-\cos(\pi k) + 1\right) = \frac{1}{2k}\left((-1)^{k+1} + 1\right).$$
Thus
$$b_k = \begin{cases} \frac{1}{k} & \text{if } k \text{ is odd,} \\ 0 & \text{if } k \text{ is even.} \end{cases}$$
The complete Fourier series for $f(x)$, given by (4.9), may then be written as
$$g(x) = \sin(x) + \tfrac{1}{3}\sin(3x) + \tfrac{1}{5}\sin(5x) + \cdots.$$
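As a quick numerical illustration (the number of terms and evaluation points below are arbitrary choices), truncating this series and evaluating it away from the jumps shows the partial sums approaching the value $\pi/4$ that $f(x)$ takes on $(0, \pi)$:

import numpy as np

def square_wave_partial_sum(x, n_terms=50):
    """Partial sum of g(x) = sum over odd k of sin(kx)/k for the square wave of Example 4.1."""
    g = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):        # odd k only, since b_k = 0 for even k
        g += np.sin(k * x) / k
    return g

x = np.linspace(0.1, np.pi - 0.1, 200)        # stay away from the jumps at 0 and pi
err = np.max(np.abs(square_wave_partial_sum(x) - np.pi / 4))
print(err)   # shrinks toward 0 (away from the discontinuities) as n_terms grows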
The Fourier coefficients of odd and even functions simplify under the following proposi-
tion:
Proposition 4.1 The Fourier coefficients of a function f (t) satisfy
f (t) even =⇒ bk = 0 ∀ k,
f (t) odd =⇒ ak = 0 ∀ k.
We may wonder: what functions can be expressed in terms of a Fourier series? The
following fundamental theorem, originally proposed by Dirichlet, describes how the Fourier
series g(x) is related to the original function f (x).
Theorem 4.1 Fundamental Convergence Theorem for Fourier Series.
Let
$$V = \left\{ f(x) \;\Big|\; \sqrt{\int_a^b f(x)^2\,dx} < \infty \right\}.$$
Then for all $f(x) \in V$ there exist coefficients $a_0, a_k, b_k$ (with $1 \le k < \infty$) such that
$$g_n(x) = \frac{a_0}{2} + \sum_{k=1}^{n}\left[a_k\cos\left(k\,\frac{2\pi x}{b-a}\right) + b_k\sin\left(k\,\frac{2\pi x}{b-a}\right)\right]$$
converges to $f(x)$ for $n \to \infty$ in the sense that $\sqrt{\int_a^b \left(f(x) - g(x)\right)^2\,dx} = 0$, with $g(x) = \lim_{n\to\infty} g_n(x)$.
Note that gn (x) is sometimes also called Sn (x) (the nth partial sum of the Fourier series).
This theorem holds for any bounded interval [a, b], but for simplicity we will generally
consider the interval to be [a, b] = [−π, π].
V is called the set of square integrable functions over [a, b] and contains all polynomials,
sinusoids and other nicely-behaved bounded functions. However, V does not contain many
common functions, including $f(x) = \tan(x)$ and $f(x) = (x - a)^{-1}$.
In addition, V is a vector space of functions, i.e. if f1 (x) ∈ V and f2 (x) ∈ V , then
$c_1 f_1(x) + c_2 f_2(x) \in V$ ($\forall\, c_1, c_2 \in \mathbb{R}$). We define a norm over $V$ by
$$\|h(x)\|_2 = \sqrt{\int_a^b h(x)^2\,dx} \qquad (L_2 \text{ norm}). \qquad (4.12)$$
As a norm, this measures the “size” of the function h(x). This implies a measure of the
“distance” between f (x) ∈ V and g(x) ∈ V by
$$\|f(x) - g(x)\|_2 = \sqrt{\int_a^b \left(f(x) - g(x)\right)^2\,dx}. \qquad (4.13)$$
We call the set of functions $V$ that are defined on an interval $[a, b]$ $L_2([a, b])$. We write the complex form of the Fourier series as
$$h(t) = \sum_{k=-\infty}^{\infty} c_k \exp(ikt) \qquad (4.15)$$
with
$$c_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\exp(-ikt)\,dt. \qquad (4.16)$$
Note: In some books (and in MATLAB!) the complex form of the Fourier series is defined using a different sign convention, namely
$$h(t) = \sum_k c_k \exp(-ikt)$$
with
$$c_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\exp(ikt)\,dt.$$
Relation between $c_k$ and $a_k$, $b_k$: We may apply Euler's formula (4.3) to the complex form (4.15) to obtain
$$c_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\left(\cos(-kt) + i\sin(-kt)\right)dt = \frac{1}{2}\left[\frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\cos(kt)\,dt - i\,\frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\sin(kt)\,dt\right] = \frac{a_k - i\,b_k}{2}. \qquad (4.17)$$
Proposition 4.2 The real and complex Fourier coefficients of a real function $f(t)$ obey
1. $c_k = \overline{c_{-k}}$
Proof.
1. From (4.16) we write
$$\overline{c_k} = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\exp(ikt)\,dt = c_{-k}.$$
It also follows that $\text{Re}(c_k) = \text{Re}(c_{-k})$ (the real part of $c_k$ is even in $k$) and $\text{Im}(c_k) = -\text{Im}(c_{-k})$ (the imaginary part of $c_k$ is odd in $k$).
2. This result follows from (4.10) and (4.11).
3. This result follows from (4.17).
4. This result follows from Proposition 4.1 in conjunction with (4.17).
5. This result follows from (4.11) and (4.17).
Although the complex and real forms of the Fourier series appear very different, they
describe the same Fourier series. We demonstrate this result in the following theorem:
Theorem 4.2 The complex and real forms of the Fourier series are equivalent (h(t) = g(t)).
Proof. We use $i^2 = -1 \implies i = -\frac{1}{i}$ to write
$$g(t) = \frac{a_0}{2} + \sum_{k=1}^{\infty}\left[\frac{a_k - i\,b_k}{2}\exp(ikt) + \frac{a_k + i\,b_k}{2}\exp(-ikt)\right].$$
Thus,
$$g(t) = \sum_{k=-\infty}^{\infty} c_k \exp(ikt) = h(t),$$
which establishes the equivalence.
Example 4.2 Compute the complex Fourier series of the function $f(x)$, defined by
$$f(x) = \begin{cases} -\frac{\pi}{4} & x \in [-\pi, 0] \\ \;\;\,\frac{\pi}{4} & x \in [0, \pi]. \end{cases}$$
Recall from Example 4.1 that the real Fourier coefficients are
$$a_k = 0 \;\;\forall k, \qquad b_k = \begin{cases} \frac{1}{k}, & k \text{ odd} \\ 0, & k \text{ even.} \end{cases}$$
Any scalar product also induces a norm over the vector space by
$$\|\vec{x}\|_2 = \sqrt{\langle\vec{x}, \vec{x}\rangle} = \sqrt{x_1^2 + x_2^2}. \qquad (4.20)$$
[Figure: the standard basis vectors $\vec{e}_1 = (1, 0)$ and $\vec{e}_2 = (0, 1)$ in the plane.]
In particular, we wish to focus on a particular type of basis which has some useful
properties:
Definition 4.3 We say that a basis $B = \{\vec{e}_1, \ldots, \vec{e}_n\}$ is an orthogonal basis if and only if
$$\langle\vec{e}_i, \vec{e}_j\rangle = c_{ij} \quad\text{for } 1 \le i, j \le n,$$
where $c_{ij}$ is nonzero if and only if $i = j$.
We note that the standard basis $B = \{\vec{e}_1, \vec{e}_2\}$ defined over $V$ is orthogonal, since
$$\langle\vec{e}_1, \vec{e}_1\rangle = 1, \qquad \langle\vec{e}_2, \vec{e}_2\rangle = 1, \qquad \langle\vec{e}_1, \vec{e}_2\rangle = 0.$$
Given an orthogonal basis {~e1 , ~e2 } we can easily find the components of any vector ~x in
the basis. Consider the scalar product of ~x and ~e1 :
Proof. We note that B is a basis due to Theorem 4.1. In order to prove that B is
orthogonal, we need to consider the scalar product of all basis vectors:
$$\langle 1, 1\rangle = \int_{-\pi}^{\pi} 1^2\,dt = 2\pi$$
$$\langle\cos(kt), \cos(kt)\rangle = \int_{-\pi}^{\pi}\cos^2(kt)\,dt = \pi \quad (k \ge 1)$$
$$\langle\sin(kt), \sin(kt)\rangle = \int_{-\pi}^{\pi}\sin^2(kt)\,dt = \pi \quad (k \ge 1).$$
(If our basis vectors were normalized, each of these terms would equal 1). We must now
show that the scalar products between different basis vectors all vanish:
$$\langle 1, \cos(kt)\rangle = \int_{-\pi}^{\pi}\cos(kt)\,dt = 0 \quad (k \ge 1)$$
$$\langle 1, \sin(kt)\rangle = \int_{-\pi}^{\pi}\sin(kt)\,dt = 0 \quad (k \ge 1)$$
$$\langle\cos(kt), \sin(\ell t)\rangle = \int_{-\pi}^{\pi}\underbrace{\cos(kt)}_{\text{even}}\,\underbrace{\sin(\ell t)}_{\text{odd}}\,dt = 0 \quad (k, \ell \ge 1).$$
The scalar product between different cosine basis vectors requires a more extensive derivation:
$$\langle\cos(kt), \cos(\ell t)\rangle = \int_{-\pi}^{\pi}\cos(kt)\cos(\ell t)\,dt = \int_{-\pi}^{\pi}\frac{1}{2}\left[\cos((k+\ell)t) + \cos((k-\ell)t)\right]dt$$
$$= \frac{1}{2}\left[\frac{1}{k+\ell}\sin((k+\ell)t)\Big|_{-\pi}^{\pi} + \frac{1}{k-\ell}\sin((k-\ell)t)\Big|_{-\pi}^{\pi}\right] = 0 \qquad (k \neq \ell).$$
80 CHAPTER 4. DISCRETE FOURIER METHODS
We must also show $\langle\sin(kt), \sin(\ell t)\rangle = 0$ for $k \neq \ell$, $k, \ell \ge 1$. This is left as an exercise for the reader.
Since B forms an orthogonal basis, we may use the projection formula (4.21) to determine
the coefficients ak and bk . By projection, we have
$$a_k = \frac{\langle f(t), \cos(kt)\rangle}{\langle\cos(kt), \cos(kt)\rangle} = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\cos(kt)\,dt \qquad (4.27)$$
$$b_k = \frac{\langle f(t), \sin(kt)\rangle}{\langle\sin(kt), \sin(kt)\rangle} = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\sin(kt)\,dt \qquad (4.28)$$
$$\frac{a_0}{2} = \frac{\langle f(t), 1\rangle}{\langle 1, 1\rangle} = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\,dt, \qquad (4.29)$$
which are simply (4.10) and (4.11), the standard formulae for the Fourier coefficients.
For the complex form of the Fourier series, we can recover formula (4.16) by choosing
the alternate basis
B = {exp(ikx)} (−∞ < k < ∞) (4.30)
and using the scalar product for complex functions,
$$\langle f(t), g(t)\rangle = \int_{-\pi}^{\pi} f(t)\,\overline{g(t)}\,dt. \qquad (4.31)$$
The basis functions then satisfy $\langle\exp(ikt), \exp(i\ell t)\rangle = 2\pi\,\delta_{k\ell}$, where $\delta_{k\ell}$ is the Kronecker delta. Applying the projection formula yields
$$c_k = \frac{\langle f(t), \exp(ikt)\rangle}{\langle\exp(ikt), \exp(ikt)\rangle} = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\exp(-ikt)\,dt, \qquad (4.33)$$
which is (4.16).
Figure 4.1: A discrete temperature profile f [n] with Discrete Fourier transform F [k].
[Figure: the 8th roots of unity $W_8^k$, $0 \le k \le 7$, plotted on the unit circle in the complex plane; for example $W_8^0 = 1$ and $W_8^4 = -1$.]
The $N$th roots of unity $W_N = \exp\left(i\,\frac{2\pi}{N}\right)$ satisfy $(W_N^k)^N = 1$.
Proof. By definition,
$$(W_N^k)^N = \exp\left(i\,\frac{kN\,2\pi}{N}\right) = \exp(ik2\pi) = 1.$$
Proposition 4.5 The $N$th roots of unity satisfy
$$W_N^{-k} = W_N^{N-k}. \qquad (4.37)$$
Proof. By definition,
$$W_N^{-k} = W_N^{N}\cdot W_N^{-k} = W_N^{N-k}.$$
We are now prepared to define the discrete Fourier transform:
Definition 4.5 The discrete Fourier transform of a discrete time signal f [n] with 0 ≤
n ≤ N − 1 is
$$F[k] = DFT\{f[n]\} = \frac{1}{N}\sum_{n=0}^{N-1} f[n]\,W_N^{-kn}, \qquad 0 \le k \le N-1. \qquad (4.38)$$
Definition 4.6 The inverse discrete Fourier transform of a discrete frequency signal
F [k] with 0 ≤ k ≤ N − 1 is
$$f[n] = IDFT\{F[k]\} = \sum_{k=0}^{N-1} F[k]\,W_N^{kn}, \qquad 0 \le n \le N-1. \qquad (4.39)$$
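A direct transcription of definitions (4.38) and (4.39) into Python can serve as a reference implementation; note that in this convention the $1/N$ factor sits in the forward transform, which differs from some library conventions. This is only a sketch (the test signal is arbitrary):

import numpy as np

def dft(f):
    """Direct DFT following (4.38): F[k] = (1/N) * sum_n f[n] W_N^{-kn}."""
    N = len(f)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    W = np.exp(1j * 2 * np.pi / N)            # W_N, the Nth root of unity
    return (f * W ** (-k * n)).sum(axis=1) / N

def idft(F):
    """Inverse DFT following (4.39): f[n] = sum_k F[k] W_N^{kn}."""
    N = len(F)
    k = np.arange(N)
    n = k.reshape(-1, 1)
    W = np.exp(1j * 2 * np.pi / N)
    return (F * W ** (k * n)).sum(axis=1)

f = np.array([1.0, 2.0, 0.0, -1.0])
print(np.allclose(idft(dft(f)), f))           # True: the transforms invert each other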
Note that these expressions are closely related to the complex form of the Fourier series,
given in equation (4.15). Recall that when we compute the Fourier series of a function,
we enforce that it is periodic outside the interval we are examining. This requirement is
analogous in the discrete case; we implicitly assume that the time signal f [n] is periodic
when applying the discrete Fourier transform, which necessarily implies that the frequency
signal is also periodic:
Proposition 4.6 The frequency signal F [k] given by (4.38) is periodic with period N :
F [k] = F [k + sN ] with s ∈ Z. (4.40)
Proposition 4.7 For any real time signal f [n], the frequency signal F [k] satisfies
1. Re(F [k]) is even in k
2. Im(F [k]) is odd in k
3. $F[k] = \overline{F[-k]}$
4. f [n] is even in n ⇒ Im(F [k]) = 0 (the DFT is real)
5. f [n] is odd in n ⇒ Re(F [k]) = 0 (the DFT is purely imaginary)
Example 4.4 Consider a cosine wave f (t) = cos(2πt), t ∈ [0, 1]. We sample at N = 6
points
$$f[n] = \cos(2\pi t_n), \qquad t_n = \frac{n}{N}, \qquad 0 \le n \le N-1. \qquad (4.42)$$
[Figure: the cosine wave $f(t) = \cos(2\pi t)$ on $[0, 1]$ together with the $N = 6$ sample points $f[n]$.]
[Figure: the DFT $F[k]$ of the sampled cosine, plotted for $-7 \le k \le 7$, with values $\tfrac{1}{2}$ at $k = 1$ and $k = 5$ (and their periodic copies).]
We note that cos(2πtn ) is a low frequency wave, but we still get higher frequency com-
ponents like F [5]. This effect is called aliasing.
For the sampled signals $f(t) = \cos(2\pi\ell t)$ of Figure 4.2, the nonzero DFT coefficients are:
$\ell = 0$: $F[0] = 1$ (and $F[6] = 1$ due to periodicity),
$\ell = 1$: $F[1] = \tfrac{1}{2}$, $F[5] = \tfrac{1}{2}$,
$\ell = 2$: $F[2] = \tfrac{1}{2}$, $F[4] = \tfrac{1}{2}$,
$\ell = 3$: $F[3] = 1$,
$\ell = 4$: $F[4] = \tfrac{1}{2}$, $F[2] = \tfrac{1}{2}$.
Examining these results, we may find it a little worrisome that the ℓ = 4 case and the
ℓ = 2 case match. The sampling rate, or sampling frequency fs for this example is 6 samples
/ second, or 6Hz. The “critical frequency” for aliasing occurs at f = 3Hz (we have 2 f =fs ).
In general, if fs ≥ 2 f then there will be no aliasing. If fs < 2 f then aliasing will occur.
Figure 4.2: Left: Continuous time signals $f(t) = \cos(2\pi\ell t)$, $0 \le \ell \le 6$, with 6 discrete sampling points $f[n]$. Right: Discrete Fourier transforms of $f[n]$. Aliasing occurs for $\ell = 4, 5$ and $6$: sampled high-frequency signals show up as low-frequency discrete signals when the sampling rate is not high enough.
For example, human audible sound falls in the range 20Hz to 20kHz. We will require
a sampling frequency fs ≥ 2 · 20000Hz = 40kHz to avoid aliasing problems. As a result of
this requirement, digital music CDs have a sampling frequency of fs = 44.1kHz.
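A small numerical check of this criterion (the sampling rate and frequencies below are arbitrary choices): a 4 Hz cosine sampled at $f_s = 6$ Hz produces exactly the same samples as a 2 Hz cosine, so the two cannot be distinguished after sampling.

import numpy as np

fs = 6.0                                  # sampling frequency: 6 samples per second
t = np.arange(6) / fs                     # one second of sample times
low  = np.cos(2 * np.pi * 2 * t)          # 2 Hz: below fs/2 = 3 Hz
high = np.cos(2 * np.pi * 4 * t)          # 4 Hz: above fs/2, so it aliases
print(np.allclose(low, high))             # True: identical once sampled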
We wish to numerically compute the discrete Fourier transform F [k] of a time signal f [n].
If we assume that the factors WN−kn are precomputed and stored in a table (there will be
N factors), then we can use formula (4.38) directly to compute the N coefficients F [k]
$(0 \le k \le N-1)$. Each coefficient requires roughly $2N$ (complex) flops, so the total amount of work is
$$W = N \cdot 2N = 2N^2 \text{ (complex) flops.}$$
This process is fairly inefficient; for a typical one minute sound file sampled at 44.1kHz we require $2.646 \times 10^6$ samples for the time signal and about $1.4 \times 10^{13}$ (complex) flops to compute the discrete Fourier transform, which is very large, even for today's fast computers. We pose
the question: “how do we compute the discrete Fourier coefficients more efficiently?”
In order to answer this, we first require some intermediate results:
Theorem 4.4 If $N = 2^m$ for some integer $m$, then the length $N$ discrete Fourier transform $F[k]$ of a discrete time signal $f[n]$ can be calculated by combining two length $\frac{N}{2}$ discrete Fourier transforms. We define
$$g[n] = f[n] + f[n + N/2], \qquad h[n] = \left(f[n] - f[n + N/2]\right)W_N^{-n}, \qquad 0 \le n \le N/2 - 1.$$
Then
$$F[2\ell] = \tfrac{1}{2}\,DFT\{g[n]\} \quad\text{(even indices)} \qquad (4.45)$$
$$F[2\ell+1] = \tfrac{1}{2}\,DFT\{h[n]\} \quad\text{(odd indices)}. \qquad (4.46)$$
Proof. From the direct formula for the discrete Fourier transform,
$$F[k] = \frac{1}{N}\sum_{n=0}^{N-1} f[n]W_N^{-kn} = \frac{1}{N}\sum_{n=0}^{N/2-1} f[n]W_N^{-kn} + \frac{1}{N}\sum_{n=N/2}^{N-1} f[n]W_N^{-kn}$$
$$= \frac{1}{N}\sum_{n=0}^{N/2-1} f[n]W_N^{-kn} + \frac{1}{N}\sum_{\ell=0}^{N/2-1} f[\ell + N/2]\,W_N^{-k(\ell + N/2)}$$
$$= \frac{1}{N}\sum_{n=0}^{N/2-1}\left(f[n] + f[n + N/2]\,W_N^{-kN/2}\right)W_N^{-kn}.$$
For even indices $k = 2\ell$ we have $W_N^{-kN/2} = W_N^{-\ell N} = 1$ and $W_N^{-2\ell n} = W_{N/2}^{-\ell n}$, so
$$F[2\ell] = \frac{1}{N}\sum_{n=0}^{N/2-1}\left(f[n] + f[n + N/2]\right)W_N^{-2\ell n} = \frac{1}{2}\,\frac{1}{N/2}\sum_{n=0}^{N/2-1} g[n]\,W_{N/2}^{-\ell n} = \tfrac{1}{2}\,DFT\{g[n]\}.$$
For odd indices $k = 2\ell + 1$ we have $W_N^{-kN/2} = W_N^{-\ell N - N/2} = -1$, so
$$F[2\ell+1] = \frac{1}{N}\sum_{n=0}^{N/2-1}\left(f[n] - f[n + N/2]\right)W_N^{-2\ell n - n} = \frac{1}{2}\,\frac{1}{N/2}\sum_{n=0}^{N/2-1} h[n]\,W_{N/2}^{-\ell n} = \tfrac{1}{2}\,DFT\{h[n]\}.$$
If we split the discrete Fourier transform into two transforms (each of length $N/2$), the total work required will be
$$W_{tot} = 2 \cdot 2(N/2)^2 \text{ flops} = N^2 \text{ flops}.$$
Recall that the direct method requires $2N^2$ flops. In splitting up the Fourier transform, we only require half as much work. But we may further apply the splitting method recursively! In order to compute the splitting, we require $N/2$ additions for $g[n]$, $N/2$ additions and multiplications for $h[n]$, and $N$ multiplications for the transform. In total, we will require $\frac{5}{2}N$ flops at each level (where $N$ is the length at each level).
Theorem 4.5 The fast Fourier transform (FFT) requires O(N log2 N ) flops.
function F = FastFT(f, N)
% Recursive radix-2 FFT using (4.45)-(4.46); assumes N = 2^m and f is a row vector.
if N == 1
    F = f;
else
    n = 0:(N/2 - 1);
    g = f(1:N/2) + f(N/2+1:N);                          % g[n] = f[n] + f[n+N/2]
    h = (f(1:N/2) - f(N/2+1:N)) .* exp(-1i*2*pi*n/N);   % h[n] = (f[n] - f[n+N/2]) W_N^{-n}
    F = zeros(1, N);
    F(1:2:N) = (1/2) * FastFT(g, N/2);                  % even indices F[2l]
    F(2:2:N) = (1/2) * FastFT(h, N/2);                  % odd indices  F[2l+1]
end
We note that this algorithm can also be applied to signals where $N \neq 2^m$ by padding the signal with zeroes. In addition, this algorithm also works with complex $f[n]$, but requires
that all computations be done in complex flops. Computationally, the fast Fourier transform
is almost always used, due to its efficiency.
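In practice one usually calls a library FFT routine rather than writing the recursion by hand. The sketch below (the signal and function name are arbitrary) pads a signal to the next power of two and applies NumPy's FFT, dividing by the padded length to match the $1/N$ normalization of Definition 4.5; note that the result is the DFT of the padded signal, not of the original length-$N$ signal.

import numpy as np

def fft_padded(f):
    """FFT with the 1/N normalization of (4.38), zero-padding to the next power of two."""
    N = len(f)
    N_pad = 1 << (N - 1).bit_length()          # smallest power of two >= N
    f_pad = np.concatenate([f, np.zeros(N_pad - N)])
    return np.fft.fft(f_pad) / N_pad           # DFT of the padded, length-N_pad signal

f = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # length 5, not a power of two
print(fft_padded(f))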
Then any vector ~x can be expressed in terms of either basis (see figure).
[Figure: the standard basis vectors $\vec{e}_1 = (1, 0)$, $\vec{e}_2 = (0, 1)$ and a second pair of basis vectors $\vec{f}_1$, $\vec{f}_2$ in the plane.]
Consider the discrete Fourier transform with N = 4, for simplicity. The time signal f [n]
(0 ≤ n ≤ N − 1 = 3) may be defined as a time signal vector
f~ = f [0], f [1], f [2], f [3] ∈ R4 . (4.47)
We may expand this vector in the standard basis, $\vec{f} = \sum_{n=0}^{N-1} f[n]\,\vec{e}_n$, or in the DFT basis,
$$\vec{f} = \sum_{k=0}^{N-1} F[k]\,\vec{F}_k = F[0]\vec{F}_0 + F[1]\vec{F}_1 + F[2]\vec{F}_2 + F[3]\vec{F}_3, \qquad (4.50)$$
where
$$\vec{F}_0 = \left(W_N^{0\cdot 0},\; W_N^{0\cdot 1},\; W_N^{0\cdot 2},\; W_N^{0\cdot 3}\right) \quad (k = 0)$$
$$\vec{F}_1 = \left(W_N^{1\cdot 0},\; W_N^{1\cdot 1},\; W_N^{1\cdot 2},\; W_N^{1\cdot 3}\right) \quad (k = 1)$$
$$\vec{F}_2 = \left(W_N^{2\cdot 0},\; W_N^{2\cdot 1},\; W_N^{2\cdot 2},\; W_N^{2\cdot 3}\right) \quad (k = 2) \qquad (4.52)$$
$$\vec{F}_3 = \left(W_N^{3\cdot 0},\; W_N^{3\cdot 1},\; W_N^{3\cdot 2},\; W_N^{3\cdot 3}\right) \quad (k = 3),$$
with the $n$th entry of $\vec{F}_k$ equal to $W_N^{kn}$, $n = 0, 1, 2, 3$.
We conclude from (4.50) that the discrete Fourier coefficients F [k] of the time signal f [n]
are just the coordinates of the time signal vector f~ in the DFT basis B ′ . This is the same
as saying that the DFT is nothing more than a basis transformation (with a basis that is
useful to extract frequency information).
If $i = j$ then
$$\langle\vec{F}_i, \vec{F}_i\rangle = \sum_{k=0}^{N-1} 1 = N. \qquad (4.54)$$
Figure 4.3: A basis for the discrete Fourier transform over $N = 8$ points. For each basis vector $\vec{F}_k$, the function on the left represents the real component and the function on the right represents the imaginary component.
Recall that since B ′ is an orthogonal basis, we can use the projection formula (4.21) in
conjunction with expressions (4.47) and (4.52) to find F [k]:
$$F[k] = \frac{\langle\vec{f}, \vec{F}_k\rangle}{\langle\vec{F}_k, \vec{F}_k\rangle} = \frac{1}{N}\sum_{n=0}^{N-1} f[n]\,\overline{W_N^{kn}} = \frac{1}{N}\sum_{n=0}^{N-1} f[n]\,W_N^{-kn}.$$
Definition 4.7 Let $F[k]$ be the complex Fourier coefficients of a discrete (or continuous) signal $f[n]$ (or $f(t)$). Then the power spectrum of $f[n]$ (or $f(t)$) is $|F[k]|^2$.
Parseval’s theorem then provides a connection between the power of a signal in the time
domain and the summed power spectrum in the frequency domain:
(A) (Continuous case) Let $F[k]$ be the complex Fourier coefficients of a real continuous signal $f(t)$. Then
$$\underbrace{\frac{1}{b-a}\int_a^b f(t)^2\,dt}_{\text{total power in time domain}} = \underbrace{\sum_{k=-\infty}^{\infty}|F[k]|^2}_{\text{total power in frequency domain}}. \qquad (4.56)$$
(B) (Discrete case) Let $F[k]$ be the complex Fourier coefficients of a real discrete signal $f[n]$. Then
$$\underbrace{\frac{1}{N}\sum_{n=0}^{N-1}|f[n]|^2}_{\text{total power in time domain}} = \underbrace{\sum_{k=0}^{N-1}|F[k]|^2}_{\text{total power in frequency domain}}. \qquad (4.57)$$
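Before turning to the proof, the discrete identity (4.57) is easy to check numerically; in the sketch below (the test signal is arbitrary) the library FFT is divided by $N$ so that it matches the convention of (4.38).

import numpy as np

f = np.random.default_rng(0).standard_normal(16)   # arbitrary real test signal
F = np.fft.fft(f) / len(f)                         # convention (4.38): forward transform carries 1/N
lhs = np.sum(np.abs(f) ** 2) / len(f)              # (1/N) sum |f[n]|^2
rhs = np.sum(np.abs(F) ** 2)                       # sum |F[k]|^2
print(np.isclose(lhs, rhs))                        # True, as (4.57) asserts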
Proof of (A).
$$\frac{1}{b-a}\int_a^b f(t)^2\,dt = \frac{1}{b-a}\int_a^b f(t)\cdot f(t)\,dt = \frac{1}{b-a}\int_a^b\left(\sum_{k=-\infty}^{\infty} F[k]\exp\left(i2\pi\frac{kt}{b-a}\right)\right) f(t)\,dt.$$
Interpolation
We wish to now focus our attention on the problem of interpolation. If we have a set of
discrete data, we may wish to determine a continuous function that interpolates all the data
points. Consider the following statement of the problem:
Problem Given $n + 1$ discrete data points $\{(x_i, f_i)\}_{i=0}^{n}$ with $x_i \neq x_j$ for $i \neq j$, determine a continuous function $y(x)$ that interpolates the data: $y(x_i) = f_i$ for $0 \le i \le n$.
[Figure: a continuous function $y(x)$ passing through the data points at $x_0, x_1, x_2, \ldots, x_{n-1}, x_n$.]
The points (xi , fi ) could come from measurements, expensive calculations, discrete data
analysis, or computer graphics (2D and 3D). There are several reasons for which we require
access to a continuous function y(x). For example,
where $V_{ij}$ is the $n \times n$ matrix obtained by removing row $i$ and column $j$ from matrix $V$:
$$V_{ij} = \begin{pmatrix}
v_{00} & v_{01} & \cdots & v_{0,j-1} & v_{0,j+1} & \cdots & v_{0n} \\
v_{10} & v_{11} & \cdots & v_{1,j-1} & v_{1,j+1} & \cdots & v_{1n} \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
v_{i-1,0} & v_{i-1,1} & \cdots & v_{i-1,j-1} & v_{i-1,j+1} & \cdots & v_{i-1,n} \\
v_{i+1,0} & v_{i+1,1} & \cdots & v_{i+1,j-1} & v_{i+1,j+1} & \cdots & v_{i+1,n} \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
v_{n0} & v_{n1} & \cdots & v_{n,j-1} & v_{n,j+1} & \cdots & v_{nn}
\end{pmatrix}.$$
All of the sub-determinants appearing in the cofactor expansion along the last row are independent of $x_n$, since they do not contain any elements from the $n$th row. Hence $\det(V)$ is a polynomial of degree $n$ in $x_n$, and we may write $\det(V) = p_n(x_n)$.
We know $p_n(x_0) = 0$ because $\det(V) = 0$ when $x_n = x_0$; $V$ then has two equal rows. But we also know $p_n(x_1) = 0$, $p_n(x_2) = 0$, $\ldots$, $p_n(x_{n-1}) = 0$, which gives us $n$ roots of $p_n(x_n)$. From this information, we know that $p_n(x_n)$ may be written as
$$p_n(x_n) = c\,(x_n - x_0)(x_n - x_1)\cdots(x_n - x_{n-1}),$$
for some constant $c$ that does not depend on $x_n$.
Our result is obtained by repeating the decomposition (5.6) recursively to obtain the
desired result.
We may wish to consider when the interpolating polynomial is well-defined, i.e. when
we may solve the linear system (5.3) to find a unique solution. It turns out that as long as
$x_i \neq x_j$ for $i \neq j$, we can always obtain a polynomial that interpolates the given points.
This is proven in the following theorem:
We note that we rarely solve the linear system V ~a = f~ in practice, for two reasons:
1. We note that this approach requires $W = O(n^3)$ time to solve the linear system. There are more efficient methods than this to find the interpolating polynomial.
Instead, we consider a different approach which will allow us to write down the interpo-
lating polynomial directly.
Linear Case (n=1): We have two points (x0 , f0 ) and (x1 , f1 ). The polynomial is of the
form
y1 (x) = a0 + a1 x,
with the conditions
y1 (x0 ) = f0 , y1 (x1 ) = f1 .
With a little intuition, we may think to write y1 (x) as
$$y_1(x) = \frac{x - x_1}{x_0 - x_1}\,f_0 + \frac{x - x_0}{x_1 - x_0}\,f_1 = \ell_0(x)\,f_0 + \ell_1(x)\,f_1,$$
where $\ell_0(x)$ and $\ell_1(x)$ are both degree 1 polynomials. We verify that $y_1(x)$ is an interpolating
polynomial:
y1 (x0 ) = 1 · f0 + 0 · f1 = f0 , OK!
y1 (x1 ) = 0 · f0 + 1 · f1 = f1 , OK!
By Theorem 5.1 we know that the interpolating polynomial is unique, so this must be the
interpolating polynomial associated with the given points. If we collected the terms of this
polynomial, we would find that this is simply another way of writing the solution we would
get if we solved the Vandermonde system (5.3).
We may generalize this method for writing the interpolating polynomial to an arbitrary
number of points as follows:
Definition 5.2 The n + 1 Lagrange polynomials for a set of points {(xi , fi )}ni=0 are the
degree n polynomials that satisfy the property
$$\ell_i(x_j) = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise.} \end{cases} \qquad (5.7)$$
In general, the Lagrange form of the interpolating polynomial may be written as follows:
yn (x) = ℓ0 (x)f0 + ℓ1 (x)f1 + · · · + ℓn (x)fn , (5.10)
or, using summation notation,
$$y_n(x) = \sum_{i=0}^{n}\ell_i(x)\,f_i, \qquad (5.11)$$
with the Lagrange polynomials ℓi (x) defined by (5.8). This form is an alternative way of
writing the interpolating polynomial yn (x). Using this form, interpolation can be done in
O(n2 ) time without solving a linear system!
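As a sketch (the function and variable names below are arbitrary), formula (5.11) can be evaluated directly, costing $O(n^2)$ operations per evaluation point and requiring no linear solve; the three data points of Example 5.1 below are used as a check.

def lagrange_eval(x_nodes, f_nodes, x):
    """Evaluate the Lagrange form (5.11) of the interpolating polynomial at x."""
    y = 0.0
    n = len(x_nodes)
    for i in range(n):
        # build the Lagrange polynomial l_i(x): equal to 1 at x_i, 0 at the other nodes
        l_i = 1.0
        for j in range(n):
            if j != i:
                l_i *= (x - x_nodes[j]) / (x_nodes[i] - x_nodes[j])
        y += l_i * f_nodes[i]
    return y

# the three points of Example 5.1 below
xs, fs = [2.0, 3.0, 5.0], [1.5, 2.0, 1.0]
print([lagrange_eval(xs, fs, xi) for xi in xs])   # reproduces [1.5, 2.0, 1.0]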
Example 5.1 Write the interpolating polynomial of degree 2 for the set of points $\left\{\left(2, \tfrac{3}{2}\right),\; (3, 2),\; (5, 1)\right\}$.
[Figure: the quadratic interpolant $y(x)$ through the three points, plotted for $1 \le x \le 6$.]
Definition 5.3 Given $\{(x_i, f_i, f_i')\}_{i=0}^{n}$, the Hermite interpolating polynomial is the polynomial $y(x)$ of degree $2n + 1$ which satisfies
$y(x_i) = f_i$ ($n + 1$ conditions),
$y'(x_i) = f_i'$ ($n + 1$ conditions),
for a total of $2n + 2$ conditions.
Example 5.2 Consider the case of n = 1. We have two points (x0 , f0 , f0′ ) and (x1 , f1 , f1′ ).
The polynomial is of degree 2n + 1 = 2 · 1 + 1 = 3 (a cubic), so we may write
y(x) = a0 + a1 x + a2 x2 + a3 x3 .
Similar to the standard polynomial interpolation problem, we must solve for these coef-
ficients. We consider two methods:
y(x) = a0 + a1 x + a2 x2 + a3 x3 ,
′
y (x) = a1 + 2a2 x + 3a3 x2 .
Method 2: Determine a, b, c, d Similar to the idea of the Lagrange form, we can actually
write the Hermite polynomial in a form that makes solving for the polynomial coefficients
much easier. We write the polynomial and its derivative as
$$y(x) = a + b(x - x_0) + c(x - x_0)^2 + d(x - x_0)^2(x - x_1),$$
$$y'(x) = b + 2c(x - x_0) + d\left[2(x - x_0)(x - x_1) + (x - x_0)^2\right].$$
We note that c and d are the only coefficients we need to solve for, since a and b are
immediately determined. We may rearrange the system to obtain
$$c = \frac{1}{(x_1 - x_0)^2}\left(f_1 - f_0 - f_0'(x_1 - x_0)\right)$$
$$d = \frac{1}{(x_1 - x_0)^2}\left(f_1' - f_0' - \frac{2}{x_1 - x_0}\left(f_1 - f_0 - f_0'(x_1 - x_0)\right)\right).$$
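A small sketch of Method 2 (the data values below are arbitrary): compute $a$, $b$, $c$, $d$ from these formulas and verify that the resulting cubic matches the prescribed values and derivatives at $x_0$ and $x_1$.

def hermite_cubic(x0, x1, f0, f1, df0, df1):
    """Coefficients a, b, c, d of y(x) = a + b(x-x0) + c(x-x0)^2 + d(x-x0)^2(x-x1)."""
    h = x1 - x0
    a = f0
    b = df0
    c = (f1 - f0 - df0 * h) / h**2
    d = (df1 - df0 - 2.0 * (f1 - f0 - df0 * h) / h) / h**2
    return a, b, c, d

# arbitrary test data: check that y and y' hit the prescribed values at x0 and x1
x0, x1, f0, f1, df0, df1 = 0.0, 2.0, 1.0, 3.0, 0.5, -1.0
a, b, c, d = hermite_cubic(x0, x1, f0, f1, df0, df1)
y  = lambda x: a + b*(x - x0) + c*(x - x0)**2 + d*(x - x0)**2*(x - x1)
dy = lambda x: b + 2*c*(x - x0) + d*(2*(x - x0)*(x - x1) + (x - x0)**2)
print(y(x0), dy(x0), y(x1), dy(x1))   # 1.0 0.5 3.0 -1.0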
So why does this work? Recall that we could write a basis for P3 (x) as either {1, x, x2 , x3 }
(standard basis) or {ℓ0 (x), ℓ1 (x), ℓ2 (x), ℓ3 (x)} (Lagrange basis). We may also choose the
following basis for P3 (x):
{1, x − x0 , (x − x0 )2 , (x − x0 )2 (x − x1 )}.
Writing y(x) under this basis yields the form we used in method 2. This particular basis is
chosen such that the calculations are somewhat simplified.
[Figure: piecewise linear interpolation of the data, with intervals $I_i = [x_{i-1}, x_i]$, $i = 1, \ldots, n$.]
This method of interpolation has the drawback of not being smooth at the interpolation
points (we end up with jagged edges on the interpolating curve). We may instead consider
piecewise quadratic (see figure) or piecewise cubic interpolation, but these will also inevitably
have jagged points at the boundaries.
[Figure: piecewise quadratic interpolation, with each interval $I_1, I_2, I_3$ spanning two data spacings.]
[Figure: spline interpolation on the intervals $I_1, \ldots, I_n$ with $I_i = [x_{i-1}, x_i]$.]
Under this definition, there will be n intervals and k + 1 coefficients for each polynomial.
In total, this gives n(k + 1) = nk + n unknowns.
We will have 2n interpolation conditions from part 2 and (k − 1)(n − 1) smoothness
conditions (note that the smoothness conditions only apply to the n − 1 internal points of
the spline.) Thus, in total we will have 2n + kn − k − n + 1 = kn + n − k + 1 conditions.
Comparing the number of unknowns and the number of conditions, it is clear we need
to impose k − 1 extra conditions. These are supplied by extra boundary conditions at x0
and xn .
Example 5.3 Consider the case of k = 3 (a cubic spline.) We will need to impose 2 extra
boundary conditions.
There are many different types of boundary conditions we could impose. Three possible
types of boundary conditions are
• “free boundary”: y1′′ (x0 ) = 0, yn′′ (xn ) = 0. A cubic spline with this boundary condition
is known as a “natural” cubic spline.
• “clamped boundary”: We specify the first derivatives at the ends by choosing constants
f0′ and fn′ . We then impose y1′ (x0 ) = f0′ and yn′ (xn ) = fn′ .
• “periodic boundary”: If f0 = fn we may impose that the first and second derivatives
of the first and last polynomial are equal at x0 and xn . We obtain the conditions
y1′ (x0 ) = yn′ (xn ) and y1′′ (x0 ) = yn′′ (xn ).
When we impose any one of these types of boundary conditions, we obtain an $(nk + n) \times (nk + n)$ linear system that may be uniquely solved for the coefficients of the $y_i(x)$, $1 \le i \le n$.
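In practice this linear system is rarely assembled by hand. For instance, SciPy's CubicSpline routine exposes the "natural" and "clamped" boundary conditions directly; the data below are an arbitrary illustration, not part of the course material.

import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.sin(x)                                         # arbitrary sample data

natural = CubicSpline(x, y, bc_type='natural')        # y'' = 0 at both ends ("free boundary")
clamped = CubicSpline(x, y, bc_type=((1, np.cos(0.0)),    # prescribe y'(x0) = f0'
                                     (1, np.cos(4.0))))   # and y'(xn) = fn'
print(natural(2.5), clamped(2.5))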
Integration
Problem Given a continuous function $f(x)$ and an interval $[a, b]$, find a numerical approximation for $I = \int_a^b f(x)\,dx$.
1. if $f(x)$ is given but no closed form solution can be found. For example, the integral of the function $f(x) = e^{-x^2}$ has no closed form solution and requires numerical integration to compute.
2. if $f(x)$ is not given, but $\{(x_i, f_i)\}_{i=0}^{n}$ is given.
In each case, numerical integration may be the only method of determining the inte-
gral. We consider three methods for performing numerical integration: integration of an
interpolating polynomial, composite integration and Gaussian integration.
[Figure: midpoint rule, approximating $f(x)$ on $[a, b]$ by the constant $y(x) = f\!\left(\frac{a+b}{2}\right)$.]
$$y(x) = \frac{x - x_1}{x_0 - x_1}\,f_0 + \frac{x - x_0}{x_1 - x_0}\,f_1.$$
[Figure: trapezoid rule, approximating $f(x)$ on $[a, b]$ by the linear interpolant $y(x)$ through $(a, f(a))$ and $(b, f(b))$.]
$(x_0 = a,\; f_0 = f(a))$, $\left(x_1 = \frac{a+b}{2},\; f_1 = f\!\left(\frac{a+b}{2}\right)\right)$, and $(x_2 = b,\; f_2 = f(b))$.
We may write the interpolating polynomial in Lagrange form:
$$y(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}\,f_0 + \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}\,f_1 + \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}\,f_2.$$
The numerical approximation using the interpolating polynomial is then
$$\hat{I}_2 = \int_{x_0}^{x_2}\left[\frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}\,f_0 + \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}\,f_1 + \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}\,f_2\right]dx.$$
This can be rewritten as
Iˆ2 = w0 f0 + w1 f1 + w2 f2 ,
with
$$w_0 = \int_{x_0}^{x_2}\frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}\,dx = \frac{b - a}{6}.$$
Repeating a similar integration for $w_1$ and $w_2$ yields the Simpson rule
$$\hat{I}_2 = \frac{b - a}{6}\left(f_0 + 4f_1 + f_2\right). \qquad (6.6)$$
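The three rules are simple to implement directly; the sketch below (the test integrand and interval are arbitrary choices) applies each of them on a single interval and prints the resulting errors.

import numpy as np

def midpoint(f, a, b):
    return (b - a) * f((a + b) / 2)                          # interpolate by a constant

def trapezoid(f, a, b):
    return (b - a) / 2 * (f(a) + f(b))                       # interpolate by a line

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))  # rule (6.6)

f = np.exp                                                   # arbitrary test integrand on [0, 1]
exact = np.e - 1.0
for rule in (midpoint, trapezoid, simpson):
    print(rule.__name__, abs(rule(f, 0.0, 1.0) - exact))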
[Figure: Simpson rule, approximating $f(x)$ on $[a, b]$ by the quadratic interpolant $y(x)$ through the points at $a$, $\frac{a+b}{2}$ and $b$.]
Truncation error: We can use Taylor's theorem to compute the truncation error of each integration rule. For example, for the midpoint rule it can be shown that
$$T_0 = I - \hat{I}_0 = \frac{(b-a)^3}{24}f''(\xi_0), \qquad \xi_0 \in (a, b).$$
The three integration formulas we discussed are summarized in the following table, along with their truncation errors:

Rule: Approximation / Truncation error
Midpoint: $\hat{I}_0 = (b-a)\,f\!\left(\frac{a+b}{2}\right)$ / $\frac{(b-a)^3}{24}f''(\xi_0)$
Trapezoid: $\hat{I}_1 = \frac{b-a}{2}\left(f(a) + f(b)\right)$ / $-\frac{(b-a)^3}{12}f''(\xi_1)$
Simpson: $\hat{I}_2 = \frac{b-a}{6}\left(f_0 + 4f_1 + f_2\right)$ / $-\frac{(b-a)^5}{2880}f^{(4)}(\xi_2)$
Clearly the Simpson rule is the most accurate approximation, but also requires the
most computation. Perhaps surprising is the fact that the midpoint rule seems to provide
comparable accuracy to the more computationally-intensive trapezoid rule. We consider the
application of these methods to the following example:
As predicted, the Simpson rule is the most accurate. We also find, in this case, that the
Midpoint rule is more accurate than the trapezoid rule.
with
$$I_i = \int_{x_{i-1}}^{x_i} f(x)\,dx. \qquad (6.8)$$
Local Truncation Error: The local truncation error is the truncation error expected in each subinterval. We write
$$T_{loc,i} = I_i - \hat{I}_i = -\frac{1}{12}(x_i - x_{i-1})^3 f''(\xi_i), \qquad \xi_i \in (x_{i-1}, x_i).$$
Hence, the local truncation error for the trapezoid rule is given by
$$T_{loc,i} = -\frac{1}{12}f''(\xi_i)\,h^3. \qquad (6.11)$$
We say that the local truncation error is of the order O(h3 ).
Global Truncation Error: The global truncation error is the total truncation error over all intervals. We write
$$T_{global} = I - \hat{I} = \sum_{i=1}^{n}\left(I_i - \hat{I}_i\right) = \sum_{i=1}^{n} T_{loc,i}.$$
In order to relate Tglobal to the interval length h, we will need the following theorem:
Theorem 6.1 The global truncation error for the composite trapezoid rule is O(h2 ).
Proof. The global truncation error, in terms of the local truncation error, is given by:
$$|T_{global}| = \left|\sum_{i=1}^{n} T_{loc,i}\right| \le \sum_{i=1}^{n}|T_{loc,i}| \quad\text{(by the triangle inequality)}.$$
From (6.10),
$$|T_{global}| \le \sum_{i=1}^{n}|f''(\xi_i)|\,\frac{h^3}{12}.$$
We define $M = \max_{a \le x \le b}|f''(x)|$ and so obtain
$$|T_{global}| \le nM\,\frac{h^3}{12}.$$
Substituting $n = \frac{b-a}{h}$ yields
$$|T_{global}| \le (b-a)M\,\frac{h^2}{12}.$$
We conclude $T_{global} = O(h^2)$.
Using this definition, we may write the Simpson rule over one subinterval as
$$\hat{I}_i = \frac{h}{6}\left(f_{i-1} + 4f_{i-1/2} + f_i\right). \qquad (6.12)$$
Summing over all intervals yields the expression for the composite Simpson rule:
$$\hat{I} = \sum_{i=1}^{n}\hat{I}_i = \frac{h}{6}\left[f_0 + 4f_{1/2} + 2f_1 + \cdots + 4f_{n-1/2} + f_n\right]. \qquad (6.13)$$
Finally, we can devise a theorem for the global truncation error of this expansion:
Theorem 6.2 The global truncation error for the Simpson rule is O(h4 ).
Proof. The proof is similar to the proof for the trapezoid rule, except that it uses $T_{loc,i} = O(h^5)$. This is left as an exercise for the reader.
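A short sketch of both composite rules (the test integrand and subinterval counts are arbitrary): halving $h$ should reduce the composite trapezoid error by roughly a factor 4 and the composite Simpson error by roughly a factor 16, matching the $O(h^2)$ and $O(h^4)$ results above.

import numpy as np

def composite_trapezoid(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    return (b - a) / n * (0.5 * f(x[0]) + f(x[1:-1]).sum() + 0.5 * f(x[-1]))

def composite_simpson(f, a, b, n):
    h = (b - a) / n
    x = np.linspace(a, b, n + 1)                  # subinterval endpoints
    mid = (x[:-1] + x[1:]) / 2                    # subinterval midpoints f_{i-1/2}
    return h / 6 * (f(x[0]) + f(x[-1]) + 2 * f(x[1:-1]).sum() + 4 * f(mid).sum())

f, exact = np.exp, np.e - 1.0
for n in (4, 8, 16):
    et = abs(composite_trapezoid(f, 0.0, 1.0, n) - exact)
    es = abs(composite_simpson(f, 0.0, 1.0, n) - exact)
    print(n, et, es)   # halving h cuts et by ~4 (O(h^2)) and es by ~16 (O(h^4))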
Consider an approximation over $[-1, 1]$ of the form
$$\hat{I} = w_1 f(x_1) + w_2 f(x_2). \qquad (6.15)$$
This expression has four unknowns (namely $w_1$, $w_2$, $x_1$ and $x_2$). We want to determine these unknowns so that the degree of precision is maximal. Note that the location of function evaluation is now also a variable that will be determined optimally, in addition to the function weights.
Recall that the degree of precision of an integration formula is the highest degree of polynomials that are integrated exactly. Since there are 4 unknowns in this problem, we assume we can require exact integration of polynomials up to degree 3, i.e. of $f(x) = 1$, $x$, $x^2$ and $x^3$. (Note: these form a basis for all polynomials of degree $\le 3$.) Mathematically, these conditions are written as the following non-linear system in the four unknowns:
$$1.\quad \int_{-1}^{1} 1\,dx = w_1 + w_2 \;\Rightarrow\; 2 = w_1 + w_2$$
$$2.\quad \int_{-1}^{1} x\,dx = w_1 x_1 + w_2 x_2 \;\Rightarrow\; 0 = w_1 x_1 + w_2 x_2$$
$$3.\quad \int_{-1}^{1} x^2\,dx = w_1 x_1^2 + w_2 x_2^2 \;\Rightarrow\; \tfrac{2}{3} = w_1 x_1^2 + w_2 x_2^2$$
$$4.\quad \int_{-1}^{1} x^3\,dx = w_1 x_1^3 + w_2 x_2^3 \;\Rightarrow\; 0 = w_1 x_1^3 + w_2 x_2^3.$$
Solving this non-linear system yields
$$x_1 = -\frac{1}{\sqrt{3}}, \qquad x_2 = \frac{1}{\sqrt{3}}, \qquad w_1 = 1, \qquad w_2 = 1.$$
Substituting these constants back into (6.15) yields
$$\hat{I} = 1\cdot f\!\left(-\frac{1}{\sqrt{3}}\right) + 1\cdot f\!\left(\frac{1}{\sqrt{3}}\right). \qquad (6.16)$$
We conclude that this is an approximation for $\int_{-1}^{1} f(x)\,dx$ with degree of precision $m = 3$.
By a change of integration variable, result (6.16) can be generalized as follows:
$$\hat{I} = \frac{b-a}{2}\left[f\!\left(\frac{b-a}{2}\left(-\frac{1}{\sqrt{3}}\right) + \frac{b+a}{2}\right) + f\!\left(\frac{b-a}{2}\,\frac{1}{\sqrt{3}} + \frac{b+a}{2}\right)\right] \qquad (6.17)$$
is an approximation for $I = \int_a^b f(x)\,dx$ with degree of precision $m = 3$.
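As a sketch (the test integrands below are arbitrary), the rule (6.17) can be checked to have degree of precision 3: cubics are integrated exactly while quartics are not.

import numpy as np

def gauss2(f, a, b):
    """Two-point Gauss rule (6.17) on [a, b]."""
    mid, half = (a + b) / 2, (b - a) / 2
    return half * (f(mid - half / np.sqrt(3)) + f(mid + half / np.sqrt(3)))

# degree of precision 3: cubics are integrated exactly, quartics are not
print(gauss2(lambda x: x**3, 0.0, 1.0) - 0.25)   # ~0 (exact up to rounding)
print(gauss2(lambda x: x**4, 0.0, 1.0) - 0.2)    # nonzero truncation error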
The substitution
$$x = a\,\frac{1-t}{2} + b\,\frac{1+t}{2}, \qquad dx = \frac{b-a}{2}\,dt,$$
maps $[-1, 1]$ onto $[a, b]$, so that
$$I = \int_a^b f(x)\,dx = \int_{-1}^{1} f\!\left(\frac{b-a}{2}\,t + \frac{b+a}{2}\right)\frac{b-a}{2}\,dt = \frac{b-a}{2}\int_{-1}^{1} g(t)\,dt,$$
with
$$g(t) = f\!\left(\frac{b-a}{2}\,t + \frac{b+a}{2}\right).$$
Applying (6.16) to $g(t)$ then gives
$$\hat{I} = \frac{b-a}{2}\left[g\!\left(-\frac{1}{\sqrt{3}}\right) + g\!\left(\frac{1}{\sqrt{3}}\right)\right], \qquad (6.18)$$
which is equivalent to (6.17).
(a) Show that this function has a double root x∗ , and locate it.
(b) Apply the formula for Newton’s method and find an expression for xn+1 as a
function of xn .
(c) Determine cn = |xn+1 − x∗ |/|xn − x∗ |. What does this prove about the or-
der of convergence? Does this contradict the convergence theorem for Newton’s
method? Explain.
(d) Can the Fixed Point Iteration method with g(x) = x + f (x) be applied to this
problem? If so, give the interval I in which the initial value x0 can be chosen
such that convergence will occur. (Watch out, this is not an obvious question.)
3. (Root finding.) [20]
(a) Define ‘contraction’. Illustrate with a graph and give one interpretation in terms
of geometrical properties of the graph.
(b) Formulate the contraction mapping theorem, and prove its first part (existence
and uniqueness of the fixed point).
1. (a) Define [15] machine epsilon of a floating point number system, and give an upper
bound for the relative error |δx| in the single precision floating point representa-
tion of a real number (assume chopping).
(b) Give a short derivation of the Secant method (starting from a Taylor series ex-
pansion). Provide a geometrical interpretation for the Secant method.
2. (a) Find [10] the positive root of $f(x) = x^2 - 6$ using Newton's method with starting value $x_0 = 1$. Stop the iteration when the fifth digit does not change anymore in
the result. (You can limit your calculations to numbers with 6 digits if you prefer
to do so.)
(b) If you were asked to find the positive root of f (x) using the bisection method
with initial interval [1, 3], how many iterations would be required to make sure
that the interval that contains the root has length smaller than t = 0.0001?
(b) Consider the general form of an iterative method for solving $A\vec{x} = \vec{b}$, with $A$ non-singular. Show that if $\|I - B^{-1}A\|_p < 1$ for any p-norm, then the iterative method will converge to the solution for any starting value $\vec{x}_0$.
(c) Show that if $\|A\|_p < 1$ for any p-norm, then $I + A$ is non-singular. (Hint: assume that $I + A$ is singular, and demonstrate a contradiction.)
4. (a) Define frequency, period and angular frequency of a sine wave. [25] How are they
related?
(b) Explain how the formula for the DFT coefficients F [k] of a time signal vector
f~ = (f [0], f [1], . . . , f [N −1]) can be obtained using the general projection formula
that is valid in any orthogonal basis. Briefly justify all steps in your reasoning.
(c) Find the Fourier Series of
$$f(t) = \frac{1}{2}(\pi - |t|), \qquad t \in [-\pi, \pi].$$
(Hint: note that $f(t)$ is an even function, and recall that $\int_a^b f g'\,dt = fg\big|_a^b - \int_a^b f' g\,dt$.)
(d) Recall the general formula for calculating the length of a vector $\vec{x} = \sum_{n=0}^{N-1} x[n]\,\vec{e}_n$ in an N-dimensional vector space with basis $\{\vec{e}_0, \vec{e}_1, \ldots, \vec{e}_{N-1}\}$:
$$\|\vec{x}\| = \sqrt{\langle\vec{x}, \vec{x}\rangle} = \sqrt{\sum_n\sum_l \left\langle x[n]\vec{e}_n,\; x[l]\vec{e}_l\right\rangle}.$$
Calculate the length of the time signal vector f~ = (f [0], f [1], . . . , f [N − 1]) both
using the expression for f~ in the time domain basis {f~0 , f~1 , . . . , f~N −1 }, and the ex-
pression in the frequency domain basis {F~0 , F~1 , . . . , F~N −1 }. The resulting length
should of course be the same in both cases. Is it the same? What is the physical
interpretation of this ‘length’ of time signal vector f~?
5. (a) Given [15] f0 , f0′ , f1 in points x0 and x1 , determine the coefficients a, b and c
of the interpolating polynomial y(x) = a(x − x0 )2 + b(x − x0 ) + c that satisfies
y(x0 ) = f0 , y ′ (x0 ) = f0′ and y(x1 ) = f1 .
(b) Given $\{(x_i, f_i)\}_{i=0}^{n}$, you are asked to determine the coefficients $c_j$ such that $h_n(x) = \sum_{j=0}^{n} c_j \exp(jx)$ interpolates the data. Show that there is a unique solution for the coefficients $c_j$. You can assume that all the $x_i$ are different.
6. (a) Find [15] approximations for the integral
$$I = \int_0^4 \exp\!\left(\frac{1}{1+x}\right) dx,$$
using the midpoint rule, the trapezoid rule, and the Simpson rule. An accurate value for the integral is $I \approx 6.1056105$. Which of the three methods is the most accurate?
(b) Given the general expression for the truncation error of the Simpson rule in interval $i$,
$$T_{loc,i} = \frac{-h^5}{2880}\,f^{(4)}(\xi_i),$$
derive an upper bound for the global error $T_{global}$ of the composite Simpson rule for approximating $I = \int_a^b f(x)\,dx$ using $n$ subintervals of equal length $h$. What is the global order of accuracy of the composite Simpson rule?