Questions 3
MTH3320
Computational Linear Algebra
Assignment 3
This assignment contains four questions for a total of 100 marks (equal to 7.5% of the
final mark for this unit).
Late submissions will be subject to late penalties (see the unit guide for full
details).
The relevant outputs of the programming exercises are to be submitted as part of your
electronic submission. You can either include the outputs in your PDF file (if
you can edit the PDF document), or include them in a separate Word document (label
the figures and outputs consistently with the question numbers). In addition,
collect all your Matlab m-files in one zip file and submit this file through Moodle.
Let ⃗q1 , . . . , ⃗qn denote eigenvectors corresponding to the eigenvalues of A. Suppose further
that we have an initial vector ⃗x such that ⃗x∗ ⃗qK ̸= 0. Prove that, at iteration k, the vector
⃗b(k) in the inverse iteration converges as

∥⃗b(k) − ⃗qK ∥ = O( ((λK − µ)/(λL − µ))^k ).
(i) Suppose all the eigenvectors ⃗q1 , ⃗q2 , . . . , ⃗qn are orthonormal. Write down the
eigendecomposition of (A − µI)^(−1).
(ii) Use induction to show that the inverse iteration method produces a sequence of
vectors ⃗b(k) , k = 1, 2, . . . of the form

⃗b(k) = (A − µI)^(−k) ⃗b(0) / ∥(A − µI)^(−k) ⃗b(0)∥.
(iii) Use the results in (i) and (ii) to complete the rest of the proof.
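For intuition on the recurrence in (ii): each step solves a shifted system and renormalises, which is exactly what the closed form above unrolls. A minimal Matlab sketch of this recurrence follows; the function name, signature, and the LU-based solve are illustrative assumptions, not the interface required by the assignment.

```matlab
% Sketch of the inverse iteration recurrence from part (ii).
% The function name and signature are assumptions for illustration only.
function [lambda, b] = inverse_iteration_sketch(A, mu, b0, maxit)
    n = size(A, 1);
    b = b0 / norm(b0);               % normalised starting vector b^(0)
    [L, U, P] = lu(A - mu*eye(n));   % factor (A - mu*I) once, reuse each step
    for k = 1:maxit
        w = U \ (L \ (P*b));         % solve (A - mu*I) w = b^(k-1)
        b = w / norm(w);             % b^(k): apply (A - mu*I)^{-1}, renormalise
    end
    lambda = b' * A * b;             % Rayleigh quotient as eigenvalue estimate
end
```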
School of Mathematics, Monash University, 2022
(c) Download test_steepest_cg.m to test the correctness of (a) and (b). Report on
the errors and number of steps. (The errors should be smaller than 10^(-10).)
(d) Make a driver program CG_driver.m that applies steepest.m and conjGrad.m to
A⃗x = ⃗b, with A the 2D model problem matrix generated by build_laplace_2D.m
or build_laplace_2D_kron.m, and ⃗b a vector of all twos. Use maxit=400 and
tol=1e-6. For N = 30, generate a plot in which you compare the residual
convergence of the steepest descent and conjugate gradient methods (starting
from a zero initial guess): for each method, plot log10 of the residual as a
function of the iteration number. Which method requires the fewest iterations
to obtain an accurate solution, and is this what you expected?
(e) Provide a table in which you list, for steepest descent and conjugate gradient, how
many iterations are needed to reduce the residual by a factor of 1e-4, for N = 16, 32, 64.
(You can use maxit=500.) What are the total number of unknowns and the
condition number for each problem size? (You can use cond(full(A)); no need
to do this for N = 64, because it may take too long.) For each method, do
you see the expected behaviour in the number of required iterations as a function
of the total number of unknowns? (Explain.) Using these numerical results, briefly
discuss the computational cost/complexity of each method as a function
of the total number of unknowns (discuss the cost per iteration, the number of
iterations required, and the total cost, as a function of total problem size). Which
method is best for large problem sizes?
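A driver along the lines of (d) could be sketched as below. The signatures of steepest, conjGrad, and build_laplace_2D (in particular, that the solvers return a residual history as a second output) are assumptions here; match them to your own implementations.

```matlab
% Illustrative sketch of CG_driver.m; helper signatures are assumed.
N     = 30;
A     = build_laplace_2D(N);        % 2D model problem matrix (assumed helper)
b     = 2 * ones(size(A, 1), 1);    % right-hand side of all twos
x0    = zeros(size(b));             % zero initial guess
maxit = 400;  tol = 1e-6;

[~, res_sd] = steepest(A, b, x0, maxit, tol);  % assumed residual history output
[~, res_cg] = conjGrad(A, b, x0, maxit, tol);

figure; hold on
plot(log10(res_sd), 'r-')           % log10 of residuals vs iteration number
plot(log10(res_cg), 'b-')
xlabel('iteration'); ylabel('log_{10} ||r||')
legend('steepest descent', 'conjugate gradient')
```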
% your code
end
(b) Implement the Rayleigh quotient iteration in Matlab according to the pseudocode
discussed in class. Your implementation RayleighIteration.m should store the
estimated eigenvalues and eigenvectors at each step, except step 0. You should
use the function header given below.
% your code
end
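The core of the method can be sketched as follows. This is a minimal illustration only; the function name and the stored-history interface below are assumptions, not the required header from the assignment sheet.

```matlab
% Minimal sketch of Rayleigh quotient iteration with per-step history.
% Name and signature are illustrative assumptions.
function [lambdas, B] = rayleigh_sketch(A, b0, maxit)
    b  = b0 / norm(b0);
    mu = b' * A * b;                 % initial Rayleigh quotient shift
    lambdas = zeros(maxit, 1);       % eigenvalue estimate at each step
    B = zeros(length(b0), maxit);    % eigenvector estimate at each step
    for k = 1:maxit
        w  = (A - mu*eye(size(A,1))) \ b;  % shifted solve; near-singular as
        b  = w / norm(w);                  % mu converges, which is expected
        mu = b' * A * b;                   % update shift from the new vector
        lambdas(k) = mu;  B(:, k) = b;     % store step k (step 0 excluded)
    end
end
```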
(d) Run the same test for your implementation of InverseIteration.m, using the
initial shift value given in my_test_Rayleigh.m and the matrix data Q4.mat. Modify
the convergence plots in my_test_Rayleigh.m to also plot the convergence
of the eigenvalue and eigenvector estimates produced by the inverse iteration on the
same graph showing the convergence of the Rayleigh quotient iteration. Include
a printout of the plots produced by my_test_Rayleigh.m in the main assignment
package. Comment on the performance of your implementation of the Rayleigh
quotient iteration compared with the inverse iteration.
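Overlaying the two histories on one graph can be done with a snippet like the following; err_ray and err_inv are assumed to hold the per-step eigenvector errors from the Rayleigh quotient and inverse iterations, respectively.

```matlab
% Sketch of overlaying both convergence histories on one semilog graph.
% err_ray, err_inv are assumed per-step error vectors (illustrative names).
semilogy(err_ray, 'b-o'); hold on
semilogy(err_inv, 'r-s')
xlabel('iteration'); ylabel('error')
legend('Rayleigh quotient iteration', 'inverse iteration')
```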