Lanczos Method Seminar For Eigenvalue Reading Group: Andre Leger
• Why don’t we just use the QR method? If A is sparse, applying an iteration of the QR approach does not maintain the sparsity of the new matrix. INEFFICIENT.
• Note: We only find ONE eigenvector and eigenvalue. What if we want more?
• So after n iterations the vectors
v, Av, . . . , A^{n−1}v
are linearly independent and x can be formed from this space.
• By the Power Method, the n-th iterate tends to an eigenvector, hence the sequence becomes (numerically) linearly dependent, but we want a sequence of linearly independent vectors.
• Hence we orthogonalise the vectors; this is the basis of the Lanczos Method.
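As a rough illustration of this orthogonalisation step, here is a small NumPy sketch (the test matrix and all variable names are ours, not from the notes): each new Krylov direction is projected against the basis built so far, Gram–Schmidt style.

```python
import numpy as np

# A minimal sketch (our construction): orthogonalise each new
# Krylov vector against the basis built so far.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B + B.T                         # symmetric test matrix

v = rng.standard_normal(6)
K = [v / np.linalg.norm(v)]
for _ in range(3):
    w = A @ K[-1]                   # next Krylov direction
    for q in K:                     # subtract projections onto earlier vectors
        w -= (q @ w) * q
    K.append(w / np.linalg.norm(w))

Q = np.column_stack(K)
print(np.allclose(Q.T @ Q, np.eye(Q.shape[1])))   # columns are orthonormal → True
```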
4 Lanczos Method
• Assume we have orthonormal vectors
q1 , q2 , . . . , qN
• So we define T to be the rectangular tridiagonal matrix
$$
T_{k+1,k} =
\begin{pmatrix}
\alpha_1 & \beta_1  &          &             &             \\
\beta_1  & \alpha_2 & \beta_2  &             &             \\
         & \beta_2  & \alpha_3 & \ddots      &             \\
         &          & \ddots   & \ddots      & \beta_{k-1} \\
         &          &          & \beta_{k-1} & \alpha_k    \\
         &          &          &             & \beta_k
\end{pmatrix}
\in \mathbb{C}^{(k+1)\times k}
$$
• After k steps we have AQ_k = Q_{k+1}T_{k+1,k} for A ∈ C^{N×N}, Q_k ∈ C^{N×k}, Q_{k+1} ∈ C^{N×(k+1)}, T_{k+1,k} ∈ C^{(k+1)×k}.
• We observe that
AQ_k = Q_{k+1}T_{k+1,k} = Q_k T_{k,k} + β_k q_{k+1} e_k^T
• Now AQ = QT hence
A [q1 , q2 , . . . , qk ] = [q1 , q2 , . . . , qk ] Tk
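The factorisation above can be sketched in NumPy as follows; the function name `lanczos` and the test matrix are our own illustration, not part of the notes. It builds Q_k and T_{k,k} by the three-term recurrence and checks AQ_k = Q_k T_{k,k} + β_k q_{k+1} e_k^T numerically.

```python
import numpy as np

def lanczos(A, v, k):
    """Run k Lanczos steps on symmetric A starting from v.

    Returns Q (n x (k+1), orthonormal columns), the k x k tridiagonal
    T_{k,k}, and the last off-diagonal entry beta_k.
    """
    n = len(v)
    Q = np.zeros((n, k + 1))
    alphas, betas = np.zeros(k), np.zeros(k)
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ Q[:, j]                      # apply the operator
        if j > 0:
            w -= betas[j - 1] * Q[:, j - 1]  # remove the q_{j-1} component
        alphas[j] = Q[:, j] @ w
        w -= alphas[j] * Q[:, j]             # remove the q_j component
        betas[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / betas[j]
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return Q, T, betas[-1]

rng = np.random.default_rng(1)
B = rng.standard_normal((8, 8))
A = B + B.T                                  # symmetric test matrix
k = 4
Q, T, beta_k = lanczos(A, rng.standard_normal(8), k)
Qk, ek = Q[:, :k], np.zeros(k)
ek[-1] = 1.0
# Verify A Q_k = Q_k T_{k,k} + beta_k q_{k+1} e_k^T
print(np.allclose(A @ Qk, Qk @ T + beta_k * np.outer(Q[:, k], ek)))   # → True
```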
Proof
• We assume q_{i+1}^T q_i = 0 = q_{i+1}^T q_{i−1} and, by the induction step, q_i^T q_k = 0 for k < i.
Rearranging the factorisation AQ_k = Q_{k+1}T_{k+1,k} column by column, we obtain
β_i q_{i+1} = Aq_i − α_i q_i − β_{i−1} q_{i−1}
• If β_k = 0 then
1. We diagonalise the matrix T_k using the simple QR method to find the exact eigenvalues.
2. We form the corresponding eigenvectors Y of A from the eigenvectors S_k of T_k, so that Y = Q_k S_k.
3. We converge to the k largest eigenvalues. The proof is very difficult and is omitted.
– So if β_k → 0 we can prove θ_j → λ_j.
– Otherwise |β_k||S_{kj}| needs to be small to have a good approximation, hence the convergence
criterion
|β_k||S_{kj}| < ε
for some prescribed tolerance ε.
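A hedged sketch of the steps above (all variable names and the test matrix are ours): we diagonalise T_k with NumPy's symmetric eigensolver, standing in for the "simple QR method", form the Ritz vectors Y = Q_k S_k, and check that the true residual of a Ritz pair equals the bound |β_k||S_{kj}| used in the convergence criterion.

```python
import numpy as np

# Build a symmetric test matrix and run k Lanczos steps with full
# re-orthogonalisation (our construction, for numerical robustness).
rng = np.random.default_rng(2)
n, k = 50, 20
B = rng.standard_normal((n, n))
A = B + B.T

q = rng.standard_normal(n)
Q = np.zeros((n, k))
Q[:, 0] = q / np.linalg.norm(q)
alphas, betas = np.zeros(k), np.zeros(k)
for j in range(k):
    w = A @ Q[:, j]
    alphas[j] = Q[:, j] @ w
    w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # orthogonalise against all q_i
    betas[j] = np.linalg.norm(w)
    if j + 1 < k:
        Q[:, j + 1] = w / betas[j]

T = np.diag(alphas) + np.diag(betas[:k - 1], 1) + np.diag(betas[:k - 1], -1)
theta, S = np.linalg.eigh(T)     # Ritz values theta_j, eigenvectors S_k
Y = Q @ S                        # Ritz vectors Y = Q_k S_k
j = np.argmax(np.abs(theta))     # most extreme Ritz pair
resid = np.linalg.norm(A @ Y[:, j] - theta[j] * Y[:, j])
# The residual equals |beta_k| |S_{kj}|, the quantity in the criterion.
print(np.isclose(resid, betas[k - 1] * abs(S[k - 1, j])))   # → True
```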
• As soon as one eigenvalue converges all the basis vectors qi pick up perturbations biased
toward the direction of the corresponding eigenvector and orthogonality is lost.
• A “ghost” copy of the eigenvalue will appear again in the tridiagonal matrix T.
• To counter this we fully re-orthonormalize the sequence by using Gram-Schmidt or even QR.
• However, either approach becomes expensive if the dimension of the Krylov space is large.
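A small experiment (our construction, not from the notes) illustrating this: on a matrix with one well-separated eigenvalue a Ritz pair converges quickly, after which the plain three-term recurrence loses orthogonality, while a full Gram–Schmidt sweep per step keeps the basis orthonormal.

```python
import numpy as np

def lanczos_basis(A, v, k, reorth):
    """Return k Lanczos basis vectors, optionally fully re-orthogonalised."""
    n = len(v)
    Q = np.zeros((n, k))
    Q[:, 0] = v / np.linalg.norm(v)
    beta, q_prev = 0.0, np.zeros(n)
    for j in range(k - 1):
        w = A @ Q[:, j] - beta * q_prev
        w -= (Q[:, j] @ w) * Q[:, j]
        if reorth:                      # full Gram-Schmidt sweep over all q_i
            w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        beta, q_prev = np.linalg.norm(w), Q[:, j]
        Q[:, j + 1] = w / beta
    return Q

# One well-separated eigenvalue (100) so a Ritz pair converges quickly.
D = np.diag(np.concatenate(([100.0], np.linspace(0.0, 1.0, 99))))
v = np.random.default_rng(3).standard_normal(100)
loss = lambda Q: np.max(np.abs(Q.T @ Q - np.eye(Q.shape[1])))
loss_plain = loss(lanczos_basis(D, v, 30, reorth=False))
loss_reorth = loss(lanczos_basis(D, v, 30, reorth=True))
print(loss_plain, loss_reorth)   # plain loss is many orders larger
```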
• If the eigenvalues of A are not well separated, then we can use a shift and employ the matrix
(A − σI)^{−1},
following the shifted inverse power method, to generate the appropriate Krylov subspaces.
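A minimal shift-and-invert sketch, assuming a symmetric A with a known spectrum for illustration (the matrix, shift, and names are ours): each Lanczos step applies (A − σI)^{−1} by solving a linear system rather than forming the inverse, so eigenvalues of A closest to σ dominate the Krylov subspace, and a Ritz value θ of the inverted operator maps back to σ + 1/θ.

```python
import numpy as np

n, k, sigma = 41, 15, 0.3
A = np.diag(np.linspace(-5.0, 5.0, n))   # known spectrum, spacing 0.25
M = A - sigma * np.eye(n)                # in practice factor once (e.g. LU)

rng = np.random.default_rng(4)
q = rng.standard_normal(n)
Q = np.zeros((n, k))
Q[:, 0] = q / np.linalg.norm(q)
alphas, betas = np.zeros(k), np.zeros(k - 1)
for j in range(k):
    w = np.linalg.solve(M, Q[:, j])            # w = (A - sigma I)^{-1} q_j
    alphas[j] = Q[:, j] @ w
    w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full re-orthogonalisation
    if j + 1 < k:
        betas[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / betas[j]

T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
theta = np.linalg.eigvalsh(T)
# Most extreme theta corresponds to the eigenvalue of A nearest sigma.
nearest = sigma + 1.0 / theta[np.argmax(np.abs(theta))]
print(round(nearest, 6))   # → 0.25, the spectrum point closest to 0.3
```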