Applied Mathematical Sciences, Vol. 2, 2008, no. 27, 1327-1334

A New Computational Harmonic Projection Algorithm for Large Unsymmetric Generalized Eigenproblems

H. Saberi Najafi (1), H. Moosaei (1) and M. Moosaei (2)

(1) Department of Mathematics, Faculty of Sciences, University of Guilan, P.O. Box 41335-1914, Rasht, Iran
(2) Department of Mathematics, Faculty of Sciences, Bu-Ali Sina University, Hamedan, Iran

hnajafi@guilan.ac.ir, hmoosaei@gmail.com, m.moosaei@basu.ac.ir

Abstract

The harmonic projection method can be used to find interior eigenpairs of large matrices. Given a target point or shift σ to which the needed interior eigenvalues are close, the desired interior eigenpairs are the eigenvalues nearest σ and the associated eigenvectors. In this article we use the harmonic projection algorithm to compute the interior eigenpairs of a large unsymmetric generalized eigenvalue problem.

Keywords: Eigenvalue, unsymmetric matrix, harmonic, Arnoldi, shift-and-invert

1 Introduction

The eigenvalue problem is one of the most important subjects in the applied sciences and engineering. One of the most practical topics in computational mathematics is computing some of the interior generalized eigenvalues close to a target point or shift σ together with the associated eigenvectors. Consider the large unsymmetric generalized eigenproblem

    C X_i = θ_i B X_i,    (1)

where C and B are large N × N matrices; we are interested in computing some eigenvalues of the pair (C, B) close to a given shift σ and the associated eigenvectors. Since the matrices are large, the standard numerical methods, which are fine for small or medium-size matrices, cannot be used. For computing the eigenpairs of problem (1) we consider two methods: the first is a shift-and-invert Arnoldi method, and the second is the new computational harmonic projection algorithm proposed in this paper. We compare the two methods, and the results show that the new harmonic projection algorithm performs better than the shift-and-invert Arnoldi method.

2 Definition

Let A be an n × n matrix and v ∈ R^n a vector. The subspace spanned by the vectors v, Av, ..., A^{m-1} v is called the Krylov subspace and is denoted by K_m(A, v), i.e.

    K_m(A, v) = span{v, Av, ..., A^{m-1} v}.

3 Algorithm 1 (Arnoldi MGS process)

    Choose a vector v_1 of norm 1
    For j = 1, ..., m do
        w := A v_j
        For i = 1, ..., j do
            h_{ij} := (w, v_i)
            w := w - h_{ij} v_i
        End do
        h_{j+1,j} := ||w||_2
        v_{j+1} := w / h_{j+1,j}
    End do

4 Theorems

Theorem 4.1: The vectors v_1, v_2, ..., v_m produced by the Arnoldi algorithm form an orthonormal basis of the subspace K_m = span{v_1, Av_1, ..., A^{m-1} v_1}. Proof in [2].

Theorem 4.2: Denote by V_m the n × m matrix with column vectors v_1, v_2, ..., v_m and by H_m the m × m Hessenberg matrix whose nonzero entries are defined by the algorithm. Then the following relations hold:

    A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^H,
    V_m^H A V_m = H_m.

Proof in [2].

5 The shift-and-invert Arnoldi method

If the matrix C - σB is invertible for some shift σ, the eigenproblem (1) can be transformed into a standard eigenproblem:

    C X_i = θ_i B X_i
    ⇒ C X_i - σ B X_i = θ_i B X_i - σ B X_i
    ⇒ (C - σB) X_i = (θ_i - σ) B X_i
    ⇒ (C - σB)^{-1} B X_i = (1 / (θ_i - σ)) X_i.

Now set λ_i = 1 / (θ_i - σ) and A = (C - σB)^{-1} B. The problem then becomes

    A X_i = λ_i X_i.    (2)

It is easy to show that (θ_i, X_i) is an eigenpair of problem (1) if and only if (λ_i, X_i) is an eigenpair of problem (2).
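Before describing the method in matrix form, the following NumPy/SciPy sketch shows Algorithm 1 (Arnoldi with modified Gram-Schmidt) applied to the shift-and-invert operator A = (C - σB)^{-1} B of problem (2). The function name arnoldi_shift_invert, the dense LU factorization, and the breakdown handling are illustrative choices and not part of the paper.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def arnoldi_shift_invert(C, B, sigma, v1, m):
        """Algorithm 1 applied to A = (C - sigma*B)^{-1} B: returns V with n x (m+1)
        orthonormal columns and the (m+1) x m upper Hessenberg matrix H
        (the quantities V_{m+1} and H~_m of relation (3) below)."""
        n = C.shape[0]
        lu = lu_factor((C - sigma * B).astype(complex))   # factor C - sigma*B once
        V = np.zeros((n, m + 1), dtype=complex)
        H = np.zeros((m + 1, m), dtype=complex)
        V[:, 0] = v1 / np.linalg.norm(v1)
        for j in range(m):
            w = lu_solve(lu, B @ V[:, j])                 # w := A v_j
            for i in range(j + 1):                        # modified Gram-Schmidt
                H[i, j] = np.vdot(V[:, i], w)             # h_ij := (w, v_i)
                w = w - H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)               # h_{j+1,j} := ||w||_2
            if H[j + 1, j] == 0:                          # breakdown: invariant subspace found
                return V[:, :j + 1], H[:j + 2, :j + 1]
            V[:, j + 1] = w / H[j + 1, j]
        return V, H

For example, calling arnoldi_shift_invert(C, B, sigma, v1, m) with a random unit vector v1 produces the basis and Hessenberg matrix used throughout the rest of this section.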
Therefore, the shift-and-invert Arnoldi method for eigenproblem (1) is mathematically equivalent to the standard Arnoldi method for the transformed eigenproblem (2). It starts with a given unit-length vector v_1 (usually chosen randomly) and builds up an orthonormal basis V_m of the Krylov subspace K_m(A, v_1) by means of the Gram-Schmidt orthogonalization process. In finite precision, reorthogonalization is performed whenever severe cancellation occurs [2]. Approximate eigenpairs of the transformed eigenproblem (2) can then be extracted from K_m(A, v_1), and approximate solutions of problem (1) are recovered from them. By Theorem 4.2 the shift-and-invert Arnoldi process can be written in matrix form as

    (C - σB)^{-1} B V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^H,

or

    (C - σB)^{-1} B V_m = V_{m+1} H̃_m,    (3)

where e_m is the m-th coordinate vector of dimension m, V_{m+1} = (V_m, v_{m+1}) = (v_1, v_2, ..., v_{m+1}) is an N × (m+1) matrix whose columns form an orthonormal basis of the (m+1)-dimensional Krylov subspace K_{m+1}(A, v_1), and H̃_m is the (m+1) × m upper Hessenberg matrix that coincides with H_m except for an additional row whose only nonzero entry is h_{m+1,m} in position (m+1, m).

Suppose that (λ̃_i, ỹ_i), i = 1, 2, ..., m, are the eigenpairs of the matrix H_m,

    H_m ỹ_i = λ̃_i ỹ_i.    (4)

Let λ̃_i = 1 / (θ̃_i - σ) and X̃_i = V_m ỹ_i. Then the shift-and-invert Arnoldi method uses (θ̃_i, X̃_i) to approximate the eigenpairs (θ_i, X_i) of problem (1). The θ̃_i and X̃_i are called the Ritz values and Ritz vectors of C with respect to K_m(A, v_1), respectively. Define the corresponding residual

    r̃_i = (C - θ̃_i B) X̃_i.

Then we have the following theorem.

Theorem 5.1: The residuals r̃_i corresponding to the approximate eigenpairs (θ̃_i, X̃_i) produced by the shift-and-invert Arnoldi method satisfy

    ‖r̃_i‖ ≤ h_{m+1,m} |θ̃_i - σ| ‖C - σB‖ |e_m^H ỹ_i|.

Proof: From relations (3) and (4) we obtain

    ‖r̃_i‖ = ‖(C - θ̃_i B) X̃_i‖ = ‖(C - θ̃_i B) V_m ỹ_i‖ = ‖((C - σB) - (θ̃_i - σ) B) V_m ỹ_i‖
           = ‖(C - σB)(I - (θ̃_i - σ)(C - σB)^{-1} B) V_m ỹ_i‖
           = |θ̃_i - σ| ‖(C - σB)((C - σB)^{-1} B - λ̃_i I) V_m ỹ_i‖
           ≤ |θ̃_i - σ| ‖C - σB‖ ‖V_{m+1}(H̃_m - λ̃_i Ĩ_m) ỹ_i‖
           = h_{m+1,m} |θ̃_i - σ| ‖C - σB‖ |e_m^H ỹ_i|,

where Ĩ_m denotes the m × m identity matrix with a zero row appended.

Ruhe has developed a shift-and-invert Arnoldi algorithm; see, e.g., sptarn.m in MATLAB, which is designed to compute all the eigenvalues in a rectangle together with the associated eigenvectors. The algorithm can be modified to compute the l eigenvalues nearest a target point σ and the associated eigenvectors. We call the resulting algorithm Algorithm 2.

6 The new computational harmonic projection algorithm

We have shown in Section 5 that the eigenproblem C X_i = θ_i B X_i can be transformed into the problem A X_i = λ_i X_i with θ_i = 1/λ_i + σ and A = (C - σB)^{-1} B. Since we are interested in computing some of the interior eigenvalues close to σ, the eigenvalues λ_i of largest magnitude must be considered. If τ is large and is not an eigenvalue of A, so that A - τI is invertible, then A X_i = λ_i X_i gives

    (A - τI)^{-1} X_i = (1 / (λ_i - τ)) X_i.

Therefore the eigenvalues of A near τ are transformed into exterior eigenvalues of (A - τI)^{-1} of largest magnitude. For the given τ and a subspace K_m(A, v_1), the harmonic projection method seeks the pairs (λ̃_i, X̃_i) satisfying the harmonic projection conditions (for more details see [3, 4])

    X̃_i ∈ K_m(A, v_1),    (7a)
    A X̃_i - λ̃_i X̃_i ⊥ (A - τI) K_m(A, v_1),    (7b)

and uses them to approximate some eigenvalues of A near τ and the associated eigenvectors. A sketch of how these conditions reduce to a small m × m eigenproblem is given below.
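Substituting X̃_i = V_m g_i and the Arnoldi relation (3) into (7a)-(7b) yields a small projected eigenproblem; the following sketch forms it directly from H̃_m and h_{m+1,m}. It uses the second (always applicable) form of step 4 of Algorithm 3 below; the function name harmonic_ritz_pairs and the use of scipy.linalg.eig are illustrative choices, not prescribed by the paper.

    import numpy as np
    from scipy.linalg import eig

    def harmonic_ritz_pairs(H, tau):
        """H is the (m+1) x m Hessenberg matrix of relation (3).  Returns the
        harmonic Ritz values and the columns g_i of the corresponding small
        vectors, by solving
        ((H_m - tau I)^H (H_m - tau I) + |h_{m+1,m}|^2 e_m e_m^H) g
            = (lambda~ - tau) (H_m - tau I)^H g."""
        m = H.shape[1]
        Hm = H[:m, :]                          # square part H_m
        h = H[m, m - 1]                        # h_{m+1,m}
        em = np.zeros((m, 1)); em[m - 1, 0] = 1.0
        K = Hm - tau * np.eye(m)
        lhs = K.conj().T @ K + (abs(h) ** 2) * (em @ em.T)
        mu, G = eig(lhs, K.conj().T)           # mu_i = lambda~_i - tau
        return mu + tau, G

Keeping the l returned values with smallest |λ̃_i - τ| and forming X̃_i = V_m g_i corresponds to steps 5 and 6 of Algorithm 3.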
Theorem 6.1: Let X̃_i^(m) = V_m g_i^(m) be a harmonic Ritz vector and λ̃_i^(m) the corresponding harmonic Ritz value. Then the residual r̃_i^(m) = (A - λ̃_i^(m) I) X̃_i^(m) satisfies

    ‖r̃_i^(m)‖ = ‖ [ (H_m - λ̃_i^(m) I) g_i^(m) ; h_{m+1,m} e_m^H g_i^(m) ] ‖,

where [u ; α] denotes the vector u with the scalar α appended.

Proof: Since A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^H, we have

    A X̃_i^(m) = A V_m g_i^(m) = V_m H_m g_i^(m) + h_{m+1,m} v_{m+1} e_m^H g_i^(m),

so

    (A - λ̃_i^(m) I) X̃_i^(m) = V_m (H_m - λ̃_i^(m) I) g_i^(m) + h_{m+1,m} v_{m+1} e_m^H g_i^(m)
                              = V_{m+1} [ (H_m - λ̃_i^(m) I) g_i^(m) ; h_{m+1,m} e_m^H g_i^(m) ],

and the claim follows because the columns of V_{m+1} are orthonormal.

Algorithm 3 (The new computational harmonic projection algorithm)

1. Input: τ, σ, v_1 with ‖v_1‖ = 1, l (the number of desired eigenvalues), k (the required accuracy), and the N × N large matrices C and B.
2. Set A = (C - σB)^{-1} B.
3. Run Algorithm 1 (to compute V_{m+1} = [v_1, v_2, ..., v_{m+1}] and H_m).
4. If H_m - τI is nonsingular, solve
       ((H_m - τI) + (H_m - τI)^{-H} e_m h̄_{m+1,m} h_{m+1,m} e_m^H) g_i = (λ̃_i - τ) g_i;
   else solve
       ((H_m - τI)^H (H_m - τI) + e_m h̄_{m+1,m} h_{m+1,m} e_m^H) g_i = (λ̃_i - τ)(H_m - τI)^H g_i;
   end if. (This yields (λ̃_i, g_i), i = 1, ..., m.)
5. Select the l values λ̃_i with the smallest |λ̃_i - τ|, i = 1, ..., l, to approximate the desired eigenvalues.
6. Take the harmonic Ritz pairs (λ̃_i, X̃_i = V_m g_i), i = 1, ..., l, as approximations.
7. Compute r̃_i = ‖(A - λ̃_i I) X̃_i‖, i = 1, ..., l.
8. If r̃_i < k for all i = 1, ..., l, then stop and set θ̃_i = 1/λ̃_i + σ; else go to step 9.
9. Restart: use the harmonic Ritz vectors X̃_i, i = 1, ..., l, to build a new initial vector v_1 and go to step 3.

7 Numerical experiments

The algorithms have been tested using MATLAB 7.0.1 on a Pentium IV 2.8 GHz with 512 MB of main memory. Let C and B be the following 1001 × 1001 matrices: C is tridiagonal with 1 on the superdiagonal, -1 on the subdiagonal, and main diagonal

    (-510, -509, ..., -12, -11, ..., 11, 12, ..., 509, 510),

and

    B = diag(2, 3, ..., 1002).

We apply the shift-and-invert Arnoldi algorithm (Algorithm 2) and Algorithm 3.

For 100 iterations with m = 3, τ = 600, σ = 2, the residual norm of Algorithm 2 is 510, i.e. it does not converge, while the residual norm of Algorithm 3 is 0.002; Algorithm 3 is therefore better than Algorithm 2.

For 200 iterations with m = 4, τ = 550, σ = 2, the residual norm of Algorithm 2 is 550, i.e. it does not converge, while the residual norm of Algorithm 3 is 0.003; again Algorithm 3 is better than Algorithm 2.

As the results show, the new computational harmonic projection algorithm (Algorithm 3) works better than the shift-and-invert method (Algorithm 2) and gives results with high accuracy.
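For readers who wish to reproduce the setup, the following sketch builds a test pair (C, B) matching the description above. The middle entries of the printed diagonal are not fully legible, so the code assumes the diagonal consists of the integers of magnitude 11 to 510 plus a single zero to reach dimension 1001; the dense SciPy solve is only a reference check of which eigenvalue the parameters σ = 2, τ = 600 single out and is not part of Algorithms 2 or 3.

    import numpy as np
    from scipy.linalg import eig

    n = 1001
    # Assumed diagonal: -510, ..., -11, 0, 11, ..., 510 (exact middle entries not recoverable).
    main_diag = np.concatenate([np.arange(-510, -10), [0], np.arange(11, 511)]).astype(float)
    assert main_diag.size == n
    C = np.diag(main_diag) + np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    B = np.diag(np.arange(2, n + 2, dtype=float))      # B = diag(2, 3, ..., 1002)

    sigma, tau = 2.0, 600.0                            # parameters of the first run
    theta_target = sigma + 1.0 / tau                   # theta = 1/lambda + sigma with lambda near tau
    theta, _ = eig(C, B)                               # dense reference solve, for checking only
    closest = theta[np.argmin(np.abs(theta - theta_target))]
    print("generalized eigenvalue of (C, B) closest to the target:", closest)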
References

[1] Y. Saad, Numerical Methods for Large Eigenvalue Problems, Algorithms and Architectures for Advanced Scientific Computing, Manchester University Press, 1992.

[2] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing Company, a division of International Thomson Publishing Inc., USA, 1996.

[3] G. Wu, A modified harmonic block Arnoldi algorithm with adaptive shifts for large interior eigenproblems, Journal of Computational and Applied Mathematics, 205 (2007), 343-365.

[4] R. B. Morgan and M. Zeng, Harmonic projection methods for large non-symmetric eigenvalue problems, Numerical Linear Algebra with Applications, 5 (1998), 33-55.

[5] G. H. Golub and C. F. Van Loan, Matrix Computations, second edition, The Johns Hopkins University Press, Baltimore, MD, 1989.

[6] Z. Jia, The refined harmonic Arnoldi method and an implicitly restarted refined algorithm for computing interior eigenpairs of large matrices, Applied Numerical Mathematics, 42 (2002), 489-512.

[7] Z. Jia and Y. Zhang, A refined shift-and-invert Arnoldi algorithm for large unsymmetric generalized eigenproblems, Computers and Mathematics with Applications, 44 (2002), 1117-1127.

[8] H. Saberi Najafi and H. Moosaei, A new restarting method in the harmonic projection algorithm for computing the eigenvalues of a nonsymmetric matrix, Applied Mathematics and Computation, in press.

[9] H. Saberi Najafi and H. Moosaei, The harmonic projection method for large non-symmetric generalized eigenproblems by shift-and-invert, submitted.

Received: November 6, 2007