B1 Numerical Linear Algebra and Numerical Solution of Differential Equations
Candidates should submit answers to a maximum of four questions that include an answer to at
least one question in each section.
Do not turn this page until you are told that you may do so
Section A: Numerical Solution of Differential Equations
1. The solution [t_0, T] ∋ t → y(t) to the initial value problem

       y′(t) = f(t, y(t)) for t ∈ (t_0, T],   y(t_0) = y_0,   (1)

   satisfies

       y(t + h) = y(t) + ∫_t^{t+h} f(τ, y(τ)) dτ .   (2)

   To devise a one-step method to solve (1), one can replace the integral on the right-hand side
   of (2) with

       h(1 − θ) f(t, y(t)) + hθ f(t + h, y(t + h)),

   where θ ∈ [0, 1] is a real parameter. The resulting scheme reads

       y_{n+1} = y_n + h(1 − θ) f(t_n, y_n) + hθ f(t_{n+1}, y_{n+1}) .
(a) [6 marks] Derive the Butcher table of this family of Runge-Kutta methods and write the
formulas of its Runge-Kutta stages. For which values of θ is the method explicit?
(b) [4 marks] Give the definition of consistency error and consistency order of a one-step
method.
(c) [10 marks] In the case that f is autonomous (so that f(t, y) = f(y)), use Taylor expansion
to verify that the Runge-Kutta scheme obtained by setting θ = 1 has consistency order 1.
(d) [5 marks] In the case that f(t, y) = Ay for a given matrix A, show that the scheme
obtained by setting θ = 1/2 preserves quadratic invariants. State clearly any results you
quote.
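A minimal numerical sketch of the scheme above in the linear case f(t, y) = Ay of part (d): each step solves (I − hθA) y_{n+1} = (I + h(1 − θ)A) y_n. The matrix A, step size and number of steps below are illustrative assumptions.

    import numpy as np

    def theta_step(A, y, h, theta):
        # One step of the scheme above for f(t, y) = A y:
        # (I - h*theta*A) y_{n+1} = (I + h*(1 - theta)*A) y_n.
        I = np.eye(len(y))
        return np.linalg.solve(I - h * theta * A, (I + h * (1 - theta) * A) @ y)

    A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric, so ||y||_2^2 is a quadratic invariant
    y = np.array([1.0, 0.0])
    h = 0.1
    for _ in range(100):
        y = theta_step(A, y, h, theta=0.5)
    print(np.linalg.norm(y))                  # stays equal to 1 up to round-off (cf. part (d))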
2. We consider the following family of quadrature rules

       ∫_a^b f(τ) dτ ≈ (b − a) f(a + θ(b − a)),

   where θ ∈ [0, 1] is a real parameter. In particular, note that ∫_0^1 f(τ) dτ ≈ f(θ).
(a) [7 marks] Show that the Butcher table of the family of collocation Runge-Kutta methods
based on these quadratures reads
    θ │ θ
    ──┼──
      │ 1
[Hint: Let s > 1 and i ∈ {1, 2, . . . , s}. The i-th Lagrange polynomial associated to s
distinct points c1 , . . . , cs is the polynomial of degree s − 1 that satisfies Li (cj ) = δij .]
(b) [4 marks] Derive the stability function of this family of Runge-Kutta methods (keeping θ
generic).
(c) [4 marks] Give the definition of: (i) stability domain of a Runge-Kutta method, (ii) A-stability, and (iii) L-stability.
(d) [4 marks] Consider the IVP
and denote by {yk }k∈N a sequence of approximations obtained by employing: (i) an ex-
plicit Runge-Kutta method, (ii) an A-stable (but not L-stable) Runge-Kutta method, and
(iii) an L-stable Runge-Kutta method. For each case, describe the qualitative behaviour
of {yk }k∈N and compare it to the qualitative behaviour of the exact solution to this IVP.
(e) [6 marks] Show that, if the stability domain S_Ψ of a Runge-Kutta method Ψ satisfies
S_Ψ = C^−, then that Runge-Kutta method Ψ cannot be L-stable.
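A numerical sketch relating to parts (b)–(d), assuming the one-stage Butcher table (c, a, b) = (θ, θ, 1) from part (a) and the standard formula R(z) = 1 + z b (1 − z a)^{−1} for the stability function of a one-stage method:

    def R(z, theta):
        # Stability function of the one-stage method with Butcher table (c, a, b) = (theta, theta, 1):
        # R(z) = 1 + z * b * (1 - z*a)^{-1}.
        return 1.0 + z / (1.0 - z * theta)

    for theta in (0.5, 1.0):
        print(theta, [abs(R(z, theta)) for z in (-1e1, -1e3, -1e6)])
    # theta = 1/2: |R(z)| -> 1 as z -> -infinity (A-stable but not L-stable);
    # theta = 1  : |R(z)| -> 0 as z -> -infinity (backward Euler, L-stable).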
3. The first and second characteristic polynomials of the linear multi-step method BDF2 are
    ρ(z) = z² − (4/3) z + 1/3   and   σ(z) = (2/3) z²,
respectively.
(a) [4 marks] Write the update formula of BDF2 in terms of h, y_n, y_{n+1}, y_{n+2}, f(t_n, y_n),
f(t_{n+1}, y_{n+1}), and f(t_{n+2}, y_{n+2}).
(b) [7 marks] Give the definition of zero-stability of a linear k-step method and describe how
to verify this property using the root condition. Is BDF2 zero-stable?
(c) [7 marks] Show that
    ρ(e^h) − h σ(e^h) = O(h³) .
What can you conclude about the consistency order of BDF2?
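A symbolic cross-check of the expansion in part (c), as a short sketch using sympy:

    import sympy as sp

    h = sp.symbols('h')
    rho = sp.exp(h)**2 - sp.Rational(4, 3) * sp.exp(h) + sp.Rational(1, 3)
    sigma = sp.Rational(2, 3) * sp.exp(h)**2
    print(sp.series(rho - h * sigma, h, 0, 4))
    # -> -2*h**3/9 + O(h**4): the expansion of rho(e^h) - h*sigma(e^h) starts at h^3.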
(d) [7 marks] The linear multi-step method
    (11/6) y_n − 3 y_{n−1} + (3/2) y_{n−2} − (1/3) y_{n−3} = h f(t_n, y_n)
is implicit. Describe how to use Newton’s method to approximate y_n provided that
y_{n−1}, y_{n−2}, and y_{n−3} are available.
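Relating to part (d), a minimal sketch for vector-valued y stored as numpy arrays; the user-supplied Jacobian dfdy and the initial Newton guess y ≈ y_{n−1} are assumptions, not part of the question.

    import numpy as np

    def newton_lmm_step(f, dfdy, t_n, y_nm1, y_nm2, y_nm3, h, tol=1e-12, maxit=20):
        # Approximate y_n satisfying
        #   11/6 y_n - 3 y_{n-1} + 3/2 y_{n-2} - 1/3 y_{n-3} = h f(t_n, y_n)
        # by Newton's method; dfdy(t, y) is the Jacobian of f with respect to y.
        rhs = 3.0 * y_nm1 - 1.5 * y_nm2 + (1.0 / 3.0) * y_nm3   # known terms moved to the right
        y = y_nm1.copy()                                        # initial guess (an assumption)
        I = np.eye(len(y))
        for _ in range(maxit):
            g = (11.0 / 6.0) * y - h * f(t_n, y) - rhs          # residual g(y)
            J = (11.0 / 6.0) * I - h * dfdy(t_n, y)             # Jacobian of g
            delta = np.linalg.solve(J, -g)
            y += delta
            if np.linalg.norm(delta) < tol:
                break
        return y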
(c) [6 marks] Let N = 1/∆x. Show that the eigenvectors of the matrix

    K = \begin{pmatrix} -2 & 1 & & \\ 1 & -2 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & -2 \end{pmatrix} ∈ R^{N−1,N−1}

are given by

    z_p^T = (sin(pπ∆x), sin(2pπ∆x), ..., sin((N − 1)pπ∆x)) ,   p = 1, ..., N − 1 .
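A quick numerical check of this claim; N = 8 is an arbitrary choice, and the value 2(cos(pπ∆x) − 1) used below is the eigenvalue associated with z_p for this tridiagonal matrix.

    import numpy as np

    N = 8
    dx = 1.0 / N
    K = -2.0 * np.eye(N - 1) + np.eye(N - 1, k=1) + np.eye(N - 1, k=-1)

    for p in range(1, N):
        z = np.sin(np.arange(1, N) * p * np.pi * dx)    # z_p as in the question
        lam = 2.0 * (np.cos(p * np.pi * dx) - 1.0)      # corresponding eigenvalue
        print(p, np.allclose(K @ z, lam * z))           # True for every p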
Section B: Numerical Linear Algebra
5. Throughout this question we consider a rectangular matrix A ∈ R^{m×n} and a nonsingular
matrix B ∈ R^{n×n}.
(a) [4 marks] What is a QR factorisation of A? You do not need to show that such a factori-
sation exists.
What is an LU factorisation of B? You do not need to show that such a factorisation
exists.
(b) [4 marks] If m > n, the columns of the given matrix A are linearly independent and
b ∈ R^m is also given, explain how to solve the linear least squares problem

    min_{x ∈ R^n} ‖Ax − b‖_2

using a QR factorisation.
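A minimal sketch of part (b) on randomly generated data (the sizes and the data are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((10, 3))        # m > n, columns (almost surely) linearly independent
    b = rng.standard_normal(10)

    Q, R = np.linalg.qr(A)                  # reduced QR factorisation: A = QR, R upper triangular
    x = np.linalg.solve(R, Q.T @ b)         # solve the triangular system R x = Q^T b
    print(np.allclose(A.T @ (A @ x - b), 0.0))   # residual is orthogonal to range(A): True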
(c) [2 marks] If QR = B = LU , identify an LU factorisation of Q.
(d) [3 marks] Supposing that all the required factorisations exist, let B = B_1 and

    for k = 1, 2, . . .
        L_k U_k = B_k − µ_k I     (i.e. perform an LU factorisation of B_k − µ_k I)
        B_{k+1} = U_k L_k + µ_k I (i.e. define B_{k+1} by matrix multiplication and addition of µ_k I)
    end
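A sketch of the iteration in part (d); the test matrix and the shift values are arbitrary. Note that scipy's lu performs row pivoting, a practical deviation from the unpivoted factorisation written above, but each iterate it produces is still similar to its predecessor.

    import numpy as np
    from scipy.linalg import lu

    B1 = np.array([[4.0, 1.0, 0.0],
                   [1.0, 3.0, 1.0],
                   [0.0, 1.0, 2.0]])
    Bk = B1.copy()
    n = Bk.shape[0]
    for mu in [1.9] * 5:                                    # arbitrary shifts
        PL, U = lu(Bk - mu * np.eye(n), permute_l=True)     # Bk - mu*I = PL @ U
        Bk = U @ PL + mu * np.eye(n)                        # next iterate

    print(np.sort(np.linalg.eigvals(B1)))
    print(np.sort(np.linalg.eigvals(Bk).real))              # same eigenvalues up to round-off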
(e) Show that

    ‖δx‖_2 / ‖x + δx‖_2  ≤  ‖B‖_2 ‖B^{−1}‖_2 ( ‖δB‖_2 / ‖B‖_2 ) ,

where Bx = b and (B + δB)(x + δx) = b with x ≠ −δx. What is the relevance of this
inequality to the computational solution of a linear system of equations?
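A numerical illustration of this inequality on randomly generated data (the sizes and the perturbation level are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(1)
    B = rng.standard_normal((5, 5))                     # illustrative nonsingular matrix
    b = rng.standard_normal(5)
    dB = 1e-8 * rng.standard_normal((5, 5))             # small perturbation delta B

    x = np.linalg.solve(B, b)
    x_pert = np.linalg.solve(B + dB, b)                 # x + delta x

    lhs = np.linalg.norm(x_pert - x) / np.linalg.norm(x_pert)
    rhs = np.linalg.cond(B) * np.linalg.norm(dB, 2) / np.linalg.norm(B, 2)
    print(lhs <= rhs)                                   # True: the bound holds for this example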
(f) [7 marks] Calculate ‖B‖_2 ‖B^{−1}‖_2 for the matrix

    B = \begin{pmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 0 & 5 \end{pmatrix} .
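A numerical cross-check only (the question asks for a calculation by hand); ‖B‖_2 ‖B^{−1}‖_2 is the ratio of the largest to the smallest singular value of B.

    import numpy as np

    B = np.array([[1.0, 1.0, 0.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 3.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 4.0, 0.0],
                  [0.0, 0.0, 0.0, 0.0, 5.0]])

    s = np.linalg.svd(B, compute_uv=False)   # singular values, largest first
    print(s[0] / s[-1])                      # equals ||B||_2 ||B^{-1}||_2 (same as np.linalg.cond(B))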
starting with x_0 = [0, 1]^T. Does the iteration converge to the solution x? Does the
sequence {‖x_k − x‖_2 , k = 0, 1, 2, . . .} reduce monotonically?
(b) [8 marks] What is a Jordan canonical form?
[You may assume that any square matrix has a Jordan canonical form.]
If for any particular splitting A = M − N we have that all of the eigenvalues of M^{−1}N lie
strictly inside the unit disc, prove that the simple iteration based on this splitting must
generate a sequence of iterates that converge to the solution for any x_0.
Further prove that if additionally M^{−1}N is symmetric, then
[Hint: Any symmetric matrix is orthogonally diagonalisable, so that there exists an or-
thonormal basis of eigenvectors.]
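A minimal sketch of the simple iteration in part (b); the Jacobi splitting of a small, strictly diagonally dominant matrix is an illustrative choice for which the spectral radius condition holds.

    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 4.0, 1.0],
                  [0.0, 1.0, 4.0]])
    M = np.diag(np.diag(A))                  # splitting A = M - N
    N = M - A
    b = np.array([1.0, 2.0, 3.0])

    G = np.linalg.solve(M, N)                # iteration matrix M^{-1} N
    print(max(abs(np.linalg.eigvals(G))))    # spectral radius, here < 1

    x = np.zeros(3)                          # arbitrary starting vector x_0
    for _ in range(100):
        x = np.linalg.solve(M, N @ x + b)    # simple iteration M x_{k+1} = N x_k + b
    print(np.allclose(A @ x, b))             # True: the iterates converge to the solution of A x = b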
(c) [8 marks] Consider the nonsingular matrix

    A = \begin{pmatrix} B & -I & 0 & \cdots & 0 \\ -I & B & -I & \ddots & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & \ddots & -I & B & -I \\ 0 & \cdots & 0 & -I & B \end{pmatrix},
    \quad \text{where} \quad
    B = \begin{pmatrix} 4+ε & -1 & 0 & \cdots & 0 \\ -1 & 4+ε & -1 & \ddots & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & \ddots & -1 & 4+ε & -1 \\ 0 & \cdots & 0 & -1 & 4+ε \end{pmatrix}

is a tridiagonal matrix with B ∈ R^{n×n}, A ∈ R^{n²×n²}, and ε a positive constant.
Prove that the simple iteration based on the splitting A = M − N with
    M = \begin{pmatrix} B & 0 & \cdots & 0 \\ 0 & B & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & B \end{pmatrix} ∈ R^{n²×n²}

will generate a sequence that will converge to the solution of Ax = b for any b and any
x_0. Quote, but do not prove, any results that you use.
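A numerical illustration of part (c), with n = 6 and ε = 0.1 as arbitrary choices; the Kronecker products below are simply a compact way of assembling the block matrices A and M.

    import numpy as np

    n, eps = 6, 0.1
    I = np.eye(n)
    T = np.eye(n, k=1) + np.eye(n, k=-1)     # 0/1 pattern of the first off-diagonals
    B = (4.0 + eps) * I - T                  # B = tridiag(-1, 4 + eps, -1)

    A = np.kron(I, B) - np.kron(T, I)        # diagonal blocks B, off-diagonal blocks -I
    M = np.kron(I, B)                        # block-diagonal splitting matrix
    N = M - A

    G = np.linalg.solve(M, N)                # iteration matrix M^{-1} N
    print(max(abs(np.linalg.eigvals(G))))    # spectral radius < 1, consistent with convergence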