This document discusses several modified matrix eigenvalue problems and how to reduce them to standard eigenvalue problems that can be solved using existing algorithms. Specifically, it considers:
1. Finding the stationary values of a quadratic form subject to linear constraints, which reduces to finding the eigenvalues of a projected matrix.
2. Finding the stationary values of a bilinear form subject to constraints, which reduces to finding the singular values of a projected matrix.
3. Inverse eigenvalue problems, such as determining constraints to specify a given set of eigenvalues.
4. Statistical problems that lead to interesting eigenvalue problems, such as determining a matrix to specify constraints on some eigenvalues.
The key ideas are using projections to eliminate constrained components, and relating the modified problems to standard symmetric eigenvalue and singular value problems so that existing algorithms can be applied.
SOME MODIFIED MATRIX EIGENVALUE PROBLEMS*

GENE H. GOLUB†

Dedicated to the memory of Professor H. Rutishauser

* Received by the editors September 8, 1971.
† Computer Science Department, Stanford University, Stanford, California 94305. This work was supported in part by grants from the National Science Foundation and the Atomic Energy Commission.

Abstract. We consider the numerical calculation of several matrix eigenvalue problems which require some manipulation before the standard algorithms may be used. This includes finding the stationary values of a quadratic form subject to linear constraints and determining the eigenvalues of a matrix which is modified by a matrix of rank one. We also consider several inverse eigenvalue problems. This includes the problem of determining the coefficients for the Gauss-Radau and Gauss-Lobatto quadrature rules. In addition, we study several eigenvalue problems which arise in least squares.

Introduction and notation. In the last several years, there has been a great development in devising and analyzing algorithms for computing eigensystems of matrix equations. In particular, the works of H. Rutishauser and J. H. Wilkinson have had great influence on the development of this subject. It often happens in applied situations that one wishes to compute the eigensystem of a slightly modified system, or one wishes to specify some of the eigenvalues and then compute an associated matrix. In this paper we shall consider some of these problems, and also some statistical problems which lead to interesting eigenvalue problems. In general, we show how to reduce the modified problems to standard eigenvalue problems so that the standard algorithms may be used. We assume that the reader has some familiarity with the standard techniques for computing eigensystems.

We ordinarily indicate matrices by capital letters such as $A$, $B$, $E$; vectors by boldface lower-case letters such as $\mathbf{x}$, $\mathbf{y}$, $\boldsymbol{\alpha}$; and scalars by lower-case letters. We indicate the eigenvalues of a matrix by $\lambda(X)$, where $X$ may be an expression; e.g., $\lambda(A^2 + I)$ indicates the eigenvalues of $A^2 + I$. In a similar fashion we indicate the singular values of a matrix by $\sigma(X)$; we assume that the reader has some familiarity with singular values (cf. [9]). Usually we order the singular values $\sigma_i(A) = [\lambda_i(A^TA)]^{1/2}$ of a matrix $A$ so that $\sigma_1(A) \le \sigma_2(A) \le \cdots \le \sigma_n(A)$, and if $A$ is symmetric, the eigenvalues so that $\lambda_1(A) \le \lambda_2(A) \le \cdots \le \lambda_n(A)$.

1. Stationary values of a quadratic form subject to linear constraints. Let $A$ be a real symmetric matrix of order $n$, and $c$ a given vector with $c^Tc = 1$. In many applications (cf. [10]) it is desirable to find the stationary values of

(1.1)  $x^TAx$

subject to the constraints

(1.2)  $x^Tx = 1$,

(1.3)  $c^Tx = 0$.

Let

(1.4)  $\varphi(x; \lambda, \mu) = x^TAx - \lambda(x^Tx - 1) + 2\mu x^Tc$,

where $\lambda, \mu$ are Lagrange multipliers. Differentiating (1.4), we are led to the equation

(1.5)  $Ax - \lambda x + \mu c = 0$.

Multiplying (1.5) on the left by $c^T$ and using the condition that $\|c\|_2 = 1$, we have

(1.6)  $\mu = -c^TAx$.

Then substituting (1.6) into (1.5), we obtain

(1.7)  $PAx = \lambda x$, where $P = I - cc^T$.

Although $P$ and $A$ are symmetric, $PA$ is not necessarily so. Note that $P^2 = P$, so that $P$ is a projection matrix. It is well known (cf. [14, p. 54]) that for two arbitrary square matrices $G$ and $H$, the eigenvalues of $GH$ equal the eigenvalues of $HG$. Thus $\lambda(PA) = \lambda(P^2A) = \lambda(PAP)$. The matrix $PAP$ is symmetric, and hence one can use one of the standard algorithms for finding its eigenvalues. Then if $K = PAP$ and $Kz_i = \lambda_i z_i$, it follows that $x_i = Pz_i$, $i = 1, 2, \dots, n$, where $x_i$ is the eigenvector which satisfies (1.7). At least one eigenvalue of $K$ will be equal to zero, and $c$ will be an eigenvector associated with a zero eigenvalue.
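In floating point, the reduction (1.7) amounts to only a few lines. The sketch below (plain numpy; the function name and random test data are ours, not the paper's) forms $P = I - cc^T$, solves the symmetric eigenproblem for $K = PAP$, and recovers the stationary vectors $x_i = Pz_i$:

```python
import numpy as np

def constrained_stationary_values(A, c):
    """Stationary values of x^T A x subject to x^T x = 1, c^T x = 0,
    computed as eigenvalues of the symmetric matrix P A P (eq. (1.7)).
    A minimal sketch; A is assumed symmetric."""
    c = c / np.linalg.norm(c)             # enforce c^T c = 1
    P = np.eye(len(c)) - np.outer(c, c)   # projection P = I - c c^T
    K = P @ A @ P                         # symmetric; lambda(PA) = lambda(PAP)
    lam, Z = np.linalg.eigh(K)            # standard symmetric eigensolver
    X = P @ Z                             # stationary vectors x_i = P z_i
    return lam, X

# small check: one eigenvalue of K is zero, with eigenvector c
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = (A + A.T) / 2
c = rng.standard_normal(5)
lam, X = constrained_stationary_values(A, c)
print(lam)  # contains a (near-)zero eigenvalue corresponding to c
```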
Now suppose we replace the constraint (1.3) by the set of constraints

(1.8)  $C^Tx = 0$,

where $C$ is an $n \times p$ matrix of rank $r$. It can be verified that if

(1.9)  $P = I - CC^-$,

where $C^-$ is a generalized inverse which satisfies

(1.10)  $CC^-C = C$, $\quad CC^- = (CC^-)^T$,

then the stationary values of $x^TAx$ subject to (1.2) and (1.8) are eigenvalues of $K = PAP$. At least $r$ of the eigenvalues of $K$ will be equal to zero, and hence it would be desirable to deflate the matrix $K$ so that these eigenvalues are eliminated.

By permuting the columns of $C$, we may compute the orthogonal decomposition

(1.11)  $QC\Pi = \begin{bmatrix} R & S \\ 0 & 0 \end{bmatrix}$,

where $R$ is an upper triangular matrix of order $r$, $S$ is $r \times (p - r)$, $Q^TQ = I_n$, and $\Pi$ is a permutation matrix. The matrix $Q$ may be constructed as the product of $r$ Householder transformations (cf. [8]). A simple calculation shows

(1.12)  $P = Q^T \begin{bmatrix} 0 & 0 \\ 0 & I_{n-r} \end{bmatrix} Q \equiv Q^TJQ$,

and thus

$\lambda(PAP) = \lambda(Q^TJQAQ^TJQ) = \lambda(JQAQ^TJ)$.

Then if

(1.13)  $G = QAQ^T = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix}$,

where $G_{11}$ is an $r \times r$ matrix and $G_{22}$ is an $(n-r) \times (n-r)$ matrix,

$JQAQ^TJ = \begin{bmatrix} 0 & 0 \\ 0 & G_{22} \end{bmatrix}$.

Hence the stationary values of $x^TAx$ subject to (1.2) and (1.8) are simply the eigenvalues of the $(n-r) \times (n-r)$ matrix $G_{22}$. Finally, if $G_{22}z_i = \lambda_i z_i$, $i = 1, 2, \dots, n-r$, then

$x_i = Q^T \begin{bmatrix} 0 \\ z_i \end{bmatrix}$.

The details of the algorithm are given in [10].

From (1.13) we see that $\lambda(G) = \lambda(A)$. Then by the Courant-Fischer theorem (cf. [14, p. 101]),

(1.14)  $\lambda_j(A) \le \lambda_j(G_{22}) \le \lambda_{r+j}(A)$, $\quad j = 1, 2, \dots, n-r$,

when $\lambda_j(A) \le \lambda_{j+1}(A)$ and $\lambda_j(G_{22}) \le \lambda_{j+1}(G_{22})$. Furthermore, if the columns of the matrix $C$ span the same space as the $r$ eigenvectors associated with the $r$ smallest eigenvalues of $A$, then

(1.15)  $\lambda_j(G_{22}) = \lambda_{r+j}(A)$.

Thus, we see that there is a strong relationship between the eigenvalues of $A$ and the stationary values of the function

(1.16)  $\varphi(x; \lambda, \boldsymbol{\mu}) = x^TAx - \lambda(x^Tx - 1) + 2\boldsymbol{\mu}^TC^Tx$,

where $\boldsymbol{\mu}$ is a vector of Lagrange multipliers.

2. Stationary values of a bilinear form subject to linear constraints. Now let us consider the problem of determining the nonnegative stationary values of

(2.1)  $(x^TAy)/(\|x\|_2\,\|y\|_2)$,

where $A$ is an $m \times n$ matrix, subject to the constraints

(2.2)  $C^Tx = 0$, $\quad D^Ty = 0$.

The nonnegative stationary values of (2.1) are the singular values of $A$ (i.e., $\sigma(A) = [\lambda(A^TA)]^{1/2}$). It is easy to verify that the nonnegative stationary values of (2.1) subject to (2.2) are the singular values of

(2.3)  $P_CAP_D$, where $P_C = I - CC^-$, $P_D = I - DD^-$.

The singular values of $P_CAP_D$ can be computed using the algorithm given in [9]. Again it is not necessary to compute the matrices $P_C$ and $P_D$ explicitly. If, as in (1.11),

$P_C = Q_C^T \begin{bmatrix} 0 & 0 \\ 0 & I_{m-r} \end{bmatrix} Q_C \equiv Q_C^TJ_CQ_C$, $\quad P_D = Q_D^T \begin{bmatrix} 0 & 0 \\ 0 & I_{n-s} \end{bmatrix} Q_D \equiv Q_D^TJ_DQ_D$,

where $r$ is the rank of $C$ and $s$ is the rank of $D$, then

$\sigma(P_CAP_D) = \sigma(Q_C^TJ_CQ_CAQ_D^TJ_DQ_D) = \sigma(J_CQ_CAQ_D^TJ_D)$.

Hence if

$G = Q_CAQ_D^T = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix}$,

where $G_{11}$ is $r \times s$ and $G_{22}$ is $(m-r) \times (n-s)$, then

$J_CQ_CAQ_D^TJ_D = \begin{bmatrix} 0 & 0 \\ 0 & G_{22} \end{bmatrix}$.

Thus the desired stationary values are the singular values of $G_{22}$.
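As a concrete illustration of (2.3), the following numpy sketch (the function name is ours, and the projections are formed explicitly via pseudoinverses, whereas the approach in [9] avoids forming $P_C$ and $P_D$) computes the constrained stationary values directly:

```python
import numpy as np

def bilinear_stationary_values(A, C, D):
    """Nonnegative stationary values of x^T A y / (||x||_2 ||y||_2) subject to
    C^T x = 0 and D^T y = 0, as singular values of P_C A P_D, eq. (2.3).
    The Moore-Penrose pseudoinverse satisfies the conditions (1.10)."""
    m, n = A.shape
    Pc = np.eye(m) - C @ np.linalg.pinv(C)   # P_C = I - C C^-
    Pd = np.eye(n) - D @ np.linalg.pinv(D)   # P_D = I - D D^-
    # trailing zero singular values correspond to the constrained directions
    return np.linalg.svd(Pc @ A @ Pd, compute_uv=False)
```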
3. Some inverse eigenvalue problems. Suppose we are given a symmetric matrix $A$ with eigenvalues $\{\lambda_i\}_{i=1}^n$ ($\lambda_i < \lambda_{i+1}$) and a set of values $\{\mu_i\}_{i=1}^{n-1}$ ($\mu_i < \mu_{i+1}$) with

(3.1)  $\lambda_i < \mu_i < \lambda_{i+1}$.

We wish to determine the linear constraint $c^Tx = 0$ so that the stationary values of $x^TAx$ subject to $x^Tx = 1$ and $c^Tx = 0$ ($c^Tc = 1$) are the set $\{\mu_i\}_{i=1}^{n-1}$. From (1.5) we have $x = -\mu(A - \lambda I)^{-1}c$, and hence

(3.2)  $c^Tx = -\mu c^T(A - \lambda I)^{-1}c = 0$.

Assuming $\mu \ne 0$, and given $A = Q\Lambda Q^T$, where $\Lambda$ is the diagonal matrix of eigenvalues of $A$ and $Q$ is the matrix of orthonormalized eigenvectors, substitution into (3.2) gives

(3.3a)  $\sum_{i=1}^n \dfrac{d_i^2}{\lambda_i - \lambda} = 0$

with

(3.3b)  $\sum_{i=1}^n d_i^2 = 1$,

where $Qd = c$. Setting $\lambda = \mu_j$, $j = 1, 2, \dots, n-1$, then leads to a system of linear equations defining the $d_i^2$. We shall, however, give an explicit solution to this system. Let

(3.4)  $\psi(\lambda) = \prod_{j=1}^{n-1} (\mu_j - \lambda)$.

We convert the rational form (3.3a) to a polynomial,

(3.5)  $\chi(\lambda) = \sum_{i=1}^n d_i^2 \prod_{j=1,\, j \ne i}^n (\lambda_j - \lambda)$.

We wish to compute $d$ ($d^Td = 1$) so that $\chi(\lambda) \equiv \psi(\lambda)$. Both polynomials are of degree $n - 1$, so it suffices to equate them at the $n$ points $\lambda = \lambda_k$. Now

$\chi(\lambda_k) = d_k^2 \prod_{j=1,\, j \ne k}^n (\lambda_j - \lambda_k)$, $\quad \psi(\lambda_k) = \prod_{j=1}^{n-1} (\mu_j - \lambda_k)$.

Hence $\chi(\lambda_k) = \psi(\lambda_k)$ for $k = 1, 2, \dots, n$ if

(3.6)  $d_k^2 = \dfrac{\prod_{j=1}^{n-1} (\mu_j - \lambda_k)}{\prod_{j=1,\, j \ne k}^n (\lambda_j - \lambda_k)}$.

The condition (3.1) guarantees that the right-hand side of (3.6) will be positive. Note that we may assign $d_k$ a positive or negative value, so that there are $2^n$ different solutions. Once the vector $d$ has been computed, it is an easy matter to compute $c = Qd$.

We have seen in §1 that the stationary values of (1.16) interlace the eigenvalues of $A$. In certain statistical applications [4] the following problem arises. Given a matrix $A$ and an $n \times p$ matrix $C$, we wish to find an orthogonal matrix $H$ so that the stationary values of

(3.7)  $\varphi(x; \lambda, \boldsymbol{\mu}) = x^TAx - \lambda(x^Tx - 1) + \boldsymbol{\mu}^T(HC)^Tx$

are equal to the $n - r$ largest eigenvalues of $A$. As was pointed out in the last paragraph of §1, the stationary values of (3.7) will be equal to the $n - r$ largest eigenvalues of $A$ provided the columns of $HC$ span the space associated with the $r$ smallest eigenvalues of $A$. For simplicity, we assume that $\operatorname{rank}(C) = p$. From (1.11), we see that we may write

$QC = \begin{bmatrix} R \\ 0 \end{bmatrix}$.

Let us assume that the columns of some $n \times p$ matrix $V$ span the same space as the eigenvectors associated with the $p$ smallest eigenvalues of $A$. We can construct the decomposition $V = W_1S$, where $W_1^TW_1 = I_p$ and $S$ is upper triangular, and complete $W_1$ to an orthogonal matrix $W = [W_1, W_2]$. Then the constraints $(HC)^Tx = 0$ are equivalent to $[R^T \;\; 0]\,QH^Tx = 0$, and thus if $H$ is chosen to be $H = WQ$, the columns of $HC = W_1R$ span the desired space, and the stationary values of (3.7) will be equal to the $n - p$ largest eigenvalues of $A$.
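The explicit solution (3.6) is easy to check numerically. In the small numpy sketch below (the function name, the random orthogonal $Q$, and the test data are ours), we form $d$ from (3.6), set $c = Qd$, and confirm that $PAP$ has eigenvalues $\{\mu_i\}$ together with a zero; note that $d^Td = 1$ holds automatically, since matching $\chi$ and $\psi$ at $n$ points also matches their leading coefficients:

```python
import numpy as np

def inverse_constraint(lam, mu, Q):
    """Given eigenvalues lam[0]<...<lam[n-1] of A = Q diag(lam) Q^T and targets
    mu with lam[i] < mu[i] < lam[i+1], build c so that the stationary values of
    x^T A x on x^T x = 1, c^T x = 0 are exactly the mu_i -- formula (3.6)."""
    n = len(lam)
    d2 = np.empty(n)
    for k in range(n):
        num = np.prod(mu - lam[k])                 # prod_j (mu_j - lam_k)
        den = np.prod(np.delete(lam, k) - lam[k])  # prod_{j != k} (lam_j - lam_k)
        d2[k] = num / den                          # positive by (3.1)
    d = np.sqrt(d2)                                # any of the 2^n sign choices works
    return Q @ d                                   # c = Q d; c^T c = 1 automatically

lam = np.array([1.0, 2.0, 4.0, 7.0])
mu  = np.array([1.5, 3.0, 5.0])
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((4, 4)))
c = inverse_constraint(lam, mu, Q)
A = Q @ np.diag(lam) @ Q.T
P = np.eye(4) - np.outer(c, c)
print(np.sort(np.linalg.eigvalsh(P @ A @ P)))  # ~ [0, 1.5, 3, 5]
```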
4. Intersection of spaces. Suppose we are given two symmetric $n \times n$ matrices $A$ and $B$ with $B$ positive definite, and we wish to compute the eigensystem for

(4.1)  $Ax = \lambda Bx$.

One ordinarily avoids computing $C = B^{-1}A$ since the matrix $C$ is usually not symmetric. Since $B$ is positive definite, it is possible to compute a matrix $F$ such that $F^TBF = I$, and we can verify from the determinantal equation that $\lambda(F^TAF) = \lambda(B^{-1}A)$. The matrix $F^TAF$ is symmetric, and hence one of the standard algorithms may be used for computing its eigenvalues.

Now let us consider the following example. Suppose

$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$, $\quad B = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \varepsilon & 0 \\ 0 & 0 & 0 \end{bmatrix}$,

where $\varepsilon$ is a small positive value. Note $B$ is no longer positive definite. When $x^T = [1, 0, 0]$, then $Ax = Bx$ and hence $\lambda = 1$. When $x^T = [0, 1, 0]$, then $Ax = \varepsilon^{-1}Bx$; here $\lambda = \varepsilon^{-1}$, and hence as $\varepsilon$ gets arbitrarily small, $\lambda(\varepsilon)$ becomes arbitrarily large. This eigenvalue is unstable; such problems have been carefully studied by Fix and Heiberger [5]. Finally, for $x^T = [0, 0, 1]$, $Ax = \lambda Bx$ for all values of $\lambda$. Thus we have the situation of continuous eigenvalues. We shall now examine ways of eliminating the problem of continuous eigenvalues.

The eigenvalue problem $Ax = \lambda Bx$ can have continuous eigenvalues if the null space associated with $A$ and the null space associated with $B$ intersect. Therefore we wish to determine a basis for the intersection of these two null spaces. Let us assume we have determined $X$ and $Y$ so that $AX = 0$, $BY = 0$ with

(4.2)  $X^TX = I_p$ and $Y^TY = I_q$.

Let

(4.3)  $Z = [X, Y]$.

Suppose the columns of $H$ form a basis for the null space of $Z$, with

$H = \begin{bmatrix} E \\ F \end{bmatrix}$,

where $E$ is $p \times t$ and $F$ is $q \times t$. Then $ZH = XE + YF = 0$, so the columns of $XE = -YF$ lie in both null spaces; hence the nullity of $Z$ determines the dimension of the intersection of the two spaces.

Consider the matrix $L = Z^TZ$. Note $\operatorname{nullity}(L) = \operatorname{nullity}(Z)$. From (4.3), we see that

(4.4)  $L = \begin{bmatrix} I_p & X^TY \\ Y^TX & I_q \end{bmatrix} = I_{p+q} + W$, $\quad W = \begin{bmatrix} 0 & T \\ T^T & 0 \end{bmatrix}$, $\quad T = X^TY$.

Since $\lambda(L) = \lambda(I + W) = 1 + \lambda(W)$, and the nonzero eigenvalues of $W$ are $\pm\sigma(T)$,

(4.5)  $\lambda(L) = 1 \pm \sigma(T)$.

Therefore, if $\sigma_j(T) = 1$ for $j = 1, 2, \dots, t$, from (4.5) we see that $\operatorname{nullity}(L) = t$. Consider the singular value decomposition of the matrix

$T = X^TY = U\Sigma V^T$,

where $U = [u_1, \dots, u_p]$, $V = [v_1, \dots, v_q]$. The matrices $\tilde{X} = XU$ and $\tilde{Y} = YV$ yield orthonormal bases for the null spaces of the matrices $A$ and $B$, respectively. Since $\sigma_j(T) = 1$ for $j = 1, \dots, t$,

$\tilde{x}_j = Xu_j = Yv_j$ for $j = 1, \dots, t$,

and thus the vectors $\{\tilde{x}_j\}_{j=1}^t$ yield a basis for the intersection of the two spaces. The singular values of $X^TY$ can be thought of as the cosines of the angles between the spaces generated by $X$ and $Y$. An analysis of the numerical methods for computing angles between linear subspaces is given in [2]. There are other techniques for computing a basis for the intersection of the subspaces, but the advantage of this method is that it also gives a way of finding vectors which are almost in the intersection of the subspaces.
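In code, the intersection basis drops out of a single SVD. A numpy sketch (the function name and the tolerance are ours) for matrices $X$, $Y$ with orthonormal columns:

```python
import numpy as np

def nullspace_intersection(X, Y, tol=1e-10):
    """Basis for range(X) ∩ range(Y), where X and Y have orthonormal columns
    (e.g., null-space bases of A and B).  Uses the SVD of T = X^T Y; singular
    values equal to one identify the common directions (Section 4)."""
    U, s, Vt = np.linalg.svd(X.T @ Y)
    t = int(np.sum(s > 1.0 - tol))   # number of cosines equal to one
    return (X @ U)[:, :t]            # x_j = X u_j = Y v_j, j = 1, ..., t
```

Singular values slightly below one flag vectors that are almost in the intersection, which is the practical advantage noted above.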
5. Eigenvalues of a matrix modified by a rank one matrix. It is sometimes desirable to determine some eigenvalues of a diagonal matrix which is modified by a matrix of rank one. In this section, we give an algorithm for determining in $O(n^2)$ numerical operations some or all of the eigenvalues and eigenvectors of $C = D + \sigma uu^T$, where $D = \operatorname{diag}(d_i)$ is a diagonal matrix of order $n$. We denote the eigenvalues of $C$ by $\lambda_1, \lambda_2, \dots, \lambda_n$, and we assume $\lambda_i \le \lambda_{i+1}$ and $d_i \le d_{i+1}$. It can be shown (cf. [14]) that

(i) if $\sigma \ge 0$, then $d_i \le \lambda_i \le d_{i+1}$, $i = 1, 2, \dots, n-1$, and $d_n \le \lambda_n \le d_n + \sigma u^Tu$;

(ii) if $\sigma \le 0$, then $d_{i-1} \le \lambda_i \le d_i$, $i = 2, \dots, n$, and $d_1 + \sigma u^Tu \le \lambda_1 \le d_1$.

Thus, we have precise bounds on each of the eigenvalues of $C$.

The eigenvalues of the matrix $C$ satisfy the equation $\det(D + \sigma uu^T - \lambda I) = 0$, which after some manipulation can be shown to be equivalent to the characteristic equation

$\varphi_n(\lambda) \equiv \prod_{i=1}^n (d_i - \lambda) + \sigma \sum_{i=1}^n u_i^2 \prod_{j=1,\, j \ne i}^n (d_j - \lambda) = 0$.

Now if we write

(5.2)  $\varphi_{k+1}(\lambda) = (d_{k+1} - \lambda)\varphi_k(\lambda) + \sigma u_{k+1}^2\psi_{k+1}(\lambda)$, $\quad k = 0, 1, \dots, n-1$,
       $\psi_{k+1}(\lambda) = (d_k - \lambda)\psi_k(\lambda)$, $\quad k = 1, 2, \dots, n-1$,

with $\psi_1(\lambda) = \varphi_0(\lambda) = 1$, then it is a simple matter to evaluate the characteristic equation for any value of $\lambda$. Several well-known methods may be used for computing the eigenvalues of $C$. For instance, it is a simple matter to differentiate the expressions (5.2) with respect to $\lambda$ and hence determine $\varphi_n'(\lambda)$ for any value of $\lambda$; thus Newton's method can be used in an effective manner for computing the eigenvalues.

An alternative method has been given in [1], and we shall describe that technique. Let $K$ be a bidiagonal matrix of the form

$K = \begin{bmatrix} 1 & r_1 & & \\ & 1 & r_2 & \\ & & \ddots & r_{n-1} \\ & & & 1 \end{bmatrix}$,

and let $M = \operatorname{diag}(\mu_i)$. Then $KMK^T$ is a symmetric tridiagonal matrix whose $k$-th row has elements

$\{\mu_kr_{k-1},\ \mu_k + \mu_{k+1}r_k^2,\ \mu_{k+1}r_k\}_{k=1}^n$, $\quad r_0 = r_n = 0$, $\ \mu_{n+1} = 0$.

Consider the matrix equation

(5.3)  $(D + \sigma uu^T)x = \lambda x$.

Multiplying (5.3) on the left by $K$ and letting $x = K^Ty$, we have

$K(D + \sigma uu^T)K^Ty = \lambda KK^Ty$, or $(KDK^T + \sigma Kuu^TK^T)y = \lambda KK^Ty$.

Let us assume that we have reordered the elements of $u$ (and hence of $D$, also) so that $u_1 = u_2 = \cdots = u_{p-1} = 0$ and $0 < |u_p| \le |u_{p+1}| \le \cdots \le |u_n|$. Now it is possible to determine the elements of $K$ so that

(5.4)  $Ku = \gamma e_n$, $\quad e_n^T = [0, \dots, 0, 1]$.

Specifically,

$r_i = 0$ for $i < p$, $\quad r_i = -u_i/u_{i+1}$ for $i \ge p$,

and we note $|r_i| \le 1$. Therefore, if $Ku$ satisfies (5.4), we see that $KDK^T + \sigma Kuu^TK^T$ is a symmetric tridiagonal matrix, and so is $KK^T$. Thus we have a problem of the form $Ay = \lambda By$, where $A$ and $B$ are symmetric, tridiagonal matrices and $B$ is positive definite. Peters and Wilkinson [13] have shown how linear interpolation may be used effectively for computing the eigenvalues of such matrices when the eigenvalues are isolated. The algorithm makes use of $\det(A - \lambda B)$, which is quite simple to compute when $A$ and $B$ are tridiagonal. Once the eigenvalues have been computed, it is easy to compute the eigenvectors by inverse iteration. Even if several of the eigenvalues are equal, it is often possible to compute accurate eigenvectors. This can be accomplished by choosing the initial vector in the inverse iteration process to be orthogonal to all the previously computed eigenvectors, and by forcing the computed vector after the inverse iteration to be orthogonal to the previously computed eigenvectors. In some unusual situations, however, this procedure may fail.

Another technique which is useful for finding the eigenvalues of (5.3) is to note that if $u_i \ne 0$ for $i = 1, 2, \dots, n$, then

$\det(D + \sigma uu^T - \lambda I) = \det(D - \lambda I)\det(I + \sigma(D - \lambda I)^{-1}uu^T) = \prod_{i=1}^n (d_i - \lambda)\Big(1 + \sigma \sum_{i=1}^n \frac{u_i^2}{d_i - \lambda}\Big)$.

Thus, the eigenvalues of (5.3) can be computed by finding the zeros of the secular equation

$\omega(\lambda) = 1 + \sigma \sum_{i=1}^n \frac{u_i^2}{d_i - \lambda}$.
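The secular-equation route is easy to realize for $\sigma > 0$ with distinct $d_i$ and nonzero $u_i$: by the bounds (i), the $i$-th eigenvalue lies in $(d_i, d_{i+1})$ (the last in $(d_n, d_n + \sigma u^Tu]$), and $\omega(\lambda)$ increases monotonically between its poles. A bisection sketch in numpy (the function name, tolerances, and iteration count are ours; a production code would use the safeguarded Newton-type iterations discussed above):

```python
import numpy as np

def rank_one_eigenvalues(d, u, sigma, iters=100):
    """Eigenvalues of D + sigma*u*u^T (D = diag(d), d strictly increasing,
    all u_i nonzero, sigma > 0) as zeros of the secular equation
        omega(lam) = 1 + sigma * sum_i u_i^2 / (d_i - lam)   (Section 5),
    bracketed by the interlacing bounds (i)."""
    n = len(d)
    omega = lambda lam: 1.0 + sigma * np.sum(u**2 / (d - lam))
    eps = 1e-12
    brackets = [(d[i] + eps, d[i + 1] - eps) for i in range(n - 1)]
    brackets.append((d[-1] + eps, d[-1] + sigma * (u @ u)))
    lams = []
    for lo, hi in brackets:
        for _ in range(iters):         # omega rises from -inf to +inf
            mid = 0.5 * (lo + hi)      # on each interval (d_i, d_{i+1})
            lo, hi = (mid, hi) if omega(mid) < 0 else (lo, mid)
        lams.append(0.5 * (lo + hi))
    return np.array(lams)
```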
6. Least squares problems. In this section we shall show how eigenvalue problems arise in linear least squares problems. The first problem we shall consider is that of performing a fit when there is error in the observations and in the data. The approach we take here is a generalization of the one in [9]. Let $A$ be a given $m \times n$ matrix and let $b$ be a given vector with $m$ components. We wish to construct a vector $x$ which satisfies the constraints

(6.1)  $(A + E)x = b + \delta$

and for which

(6.2)  $\|P[E \mid \delta]Q\| = $ minimum,

where $P$ is a given diagonal matrix with $p_i > 0$, $Q$ is a given diagonal matrix with $q_j > 0$, and $\|\cdot\|$ indicates the Euclidean norm of the matrix. We rewrite (6.1) as

$\big([A \mid b] + [E \mid \delta]\big)\begin{bmatrix} x \\ -1 \end{bmatrix} = 0$,

or equivalently as

(6.3)  $By + Fy = 0$,

where

(6.4)  $B = [A \mid b]Q$, $\quad F = [E \mid \delta]Q$, $\quad y = Q^{-1}\begin{bmatrix} x \\ -1 \end{bmatrix}$.

Our problem now is to determine $F$ so that (6.3) is satisfied and $\|PF\| = $ minimum. Again we use Lagrange multipliers as a device for minimizing $\|PF\|$ subject to (6.3). Consider the function

(6.5)  $\varphi(F, y, \boldsymbol{\lambda}) = \sum_{i=1}^m \sum_{j=1}^{n+1} p_i^2f_{ij}^2 - 2\sum_{i=1}^m \lambda_i \sum_{j=1}^{n+1} (b_{ij} + f_{ij})y_j$.

Then

$\dfrac{\partial\varphi}{\partial f_{rs}} = 2p_r^2f_{rs} - 2\lambda_ry_s$,

so that we have a stationary point of (6.5) when

(6.6)  $P^2F = \boldsymbol{\lambda}y^T$.

Note that the matrix $F$ must be of rank one. Substituting (6.6) into (6.3), we have $\boldsymbol{\lambda} = -P^2By/(y^Ty)$, and hence

$PF = -PByy^T/(y^Ty)$.

Thus

$\|PF\|^2 = y^TB^TP^2By/(y^Ty)$,

and hence $\|PF\| = $ minimum when $y$ is the eigenvector associated with the smallest eigenvalue of $B^TP^2B$. Of course a more accurate procedure is to compute the smallest singular value of $PB$. Then, in order to compute $x$, we perform the following calculations:

(a) Form the singular value decomposition of $PB$, viz., $PB = U\Sigma V^T$. (It is generally not necessary to compute $U$.)

(b) Let $v$ be the column vector of $V$ associated with $\sigma_{\min}(PB)$, so that $v = \theta y$ for some scalar $\theta$. Compute $z = Qv$.

(c) From (6.4), $x_i = -z_i/z_{n+1}$, $i = 1, 2, \dots, n$.

Note that $\min\|PF\| = \sigma_{\min}(PB)$, and that

$[E \mid \delta] = -[A \mid b]Qvv^TQ^{-1}$.

The solution will not be unique if the smallest singular value is multiple. Furthermore, it will not be possible to compute the solution if $z_{n+1} = 0$. This will occur, for example, if $P = I_m$, $Q = I_{n+1}$, $A^Tb = 0$ and $\sigma_{\min}(A) < \|b\|_2$.

Another problem which arises frequently is that of finding a least squares solution with a quadratic constraint; we have considered this problem previously in [1]. We seek a vector $x$ such that

(6.7)  $\|b - Ax\|_2 = $ minimum

with the constraint that

(6.8)  $\|x\|_2 = \alpha$.

The condition (6.8) is frequently imposed when the matrix $A$ is ill-conditioned. Now let

(6.9)  $\varphi(x; \lambda) = (b - Ax)^T(b - Ax) + \lambda(x^Tx - \alpha^2)$,

where $\lambda$ is a Lagrange multiplier. Differentiating (6.9), we are led to the equation

(6.10)  $A^TAx - A^Tb + \lambda x = 0$,

or

(6.11)  $(A^TA + \lambda I)x = A^Tb$.

Note that (6.10) represents the usual normal equations that arise in the linear least squares problem, with the diagonal elements of $A^TA$ shifted by $\lambda$. The parameter $\lambda$ will be positive when $\alpha < \|A^+b\|_2$, and we assume that this condition is satisfied. Since $x = (A^TA + \lambda I)^{-1}A^Tb$, we have from (6.8) that

(6.12)  $b^TA(A^TA + \lambda I)^{-2}A^Tb - \alpha^2 = 0$.

By repeated use of the identity

$\det\begin{bmatrix} X & Y \\ Z & W \end{bmatrix} = \det(X)\det(W - ZX^{-1}Y)$ if $\det(X) \ne 0$,

we can show that (6.12) is equivalent to the equation

(6.13)  $\det\big((A^TA + \lambda I)^2 - \alpha^{-2}A^Tbb^TA\big) = 0$.

Finally, if $A = U\Sigma V^T$, the singular value decomposition of $A$, then

(6.14)  $A^TA = VDV^T$, $\quad V^TV = I$,

where $D = \Sigma^T\Sigma$, and (6.13) becomes

(6.15)  $\det\big((D + \lambda I)^2 - uu^T\big) = 0$,

where $u = \alpha^{-1}\Sigma^TU^Tb$. Equation (6.15) has $2n$ roots; it can be shown (cf. [6]) that we need the largest real root of (6.15), which we denote by $\lambda^*$. By a simple argument, it can be shown that $\lambda^*$ is the unique root in the interval $[0, (u^Tu)^{1/2}]$. Thus we have the problem of determining an eigenvalue of a diagonal matrix which is modified by a matrix of rank one. As in §5, we can determine a matrix $K$ so that $Ku$ satisfies (5.4), and hence (6.15) is equivalent to

(6.16)  $\det\big(K(D + \lambda I)^2K^T - Kuu^TK^T\big) = 0$.

The matrix $G(\lambda) = K(D + \lambda I)^2K^T - Kuu^TK^T$ is tridiagonal, so that it is easy to evaluate $G(\lambda)$ and $\det G(\lambda)$. Since we have an upper and lower bound on $\lambda^*$, it is possible to use linear interpolation to find $\lambda^*$, even though $G(\lambda)$ is quadratic in $\lambda$. Numerical experiments have indicated that it is best to compute $G(\lambda) = K(D + \lambda I)^2K^T - Kuu^TK^T$ afresh for each approximate value of $\lambda^*$ rather than computing

$G(\lambda) = (KD^2K^T - Kuu^TK^T) + 2\lambda KDK^T + \lambda^2KK^T$.

Another approach for determining $\lambda^*$ is the following: we substitute the decomposition (6.14) into (6.12) and are led to the equation

(6.17)  $\varphi(\lambda) \equiv \sum_{i=1}^n \dfrac{u_i^2}{(d_i + \lambda)^2} - 1 = 0$

with $u = \alpha^{-1}\Sigma^TU^Tb$. The determinant in (6.15) can be evaluated by recurrences of the same form as (5.2):

(6.18)  $\varphi_{k+1}(\lambda) = (d_{k+1} + \lambda)^2\varphi_k(\lambda) - u_{k+1}^2\psi_{k+1}(\lambda)$, $\quad k = 0, 1, \dots, n-1$,
        $\psi_{k+1}(\lambda) = (d_k + \lambda)^2\psi_k(\lambda)$, $\quad k = 1, 2, \dots, n-1$,

with $\psi_1(\lambda) = \varphi_0(\lambda) = 1$. Then, using (6.18), we can easily evaluate $\varphi_n(\lambda)$ and $\varphi_n'(\lambda)$, and hence use one of the standard root finding techniques for determining $\lambda^*$. It is easy to verify that

$x = V(D + \lambda^*I)^{-1}\Sigma^TU^Tb$.

A similar problem arises when it is required to make $\|x\|_2 = $ minimum when

$\|b - Ax\|_2 = \beta$, $\quad \beta > \min\|b - Ax\|_2$.

Again the Lagrange multiplier $\lambda$ satisfies a quadratic equation which is similar to the equation given by (6.12).
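A short sketch of the secular-equation route (6.17) for the constrained problem (6.7)-(6.8), in numpy, assuming $\alpha < \|A^+b\|_2$; bisection on $[0, (u^Tu)^{1/2}]$ stands in for the Newton or linear-interpolation schemes discussed above, and the function name and iteration count are ours:

```python
import numpy as np

def ls_with_norm_constraint(A, b, alpha, iters=200):
    """min ||b - A x||_2 subject to ||x||_2 = alpha, assuming
    alpha < ||A^+ b||_2 so that lambda* > 0 (Section 6).  Solves the secular
    equation (6.17) by bisection and forms x from the SVD of A."""
    U, sig, Vt = np.linalg.svd(A, full_matrices=False)
    d = sig**2                              # D = Sigma^T Sigma
    u = (sig * (U.T @ b)) / alpha           # u = alpha^{-1} Sigma^T U^T b
    phi = lambda lam: np.sum(u**2 / (d + lam)**2) - 1.0
    lo, hi = 0.0, np.sqrt(u @ u)            # lambda* lies in [0, (u^T u)^{1/2}]
    for _ in range(iters):                  # phi is decreasing on [0, inf)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
    lam = 0.5 * (lo + hi)
    # x = V (D + lambda* I)^{-1} Sigma^T U^T b
    return Vt.T @ ((sig * (U.T @ b)) / (d + lam))
```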
7. Gauss-type quadrature rules with preassigned nodes. In many applications it is desirable to generate Gauss-type quadrature rules with preassigned nodes. This is particularly true for numerical methods which depend on the theory of moments for determining bounds (cf. [3]), and for solving boundary value problems [12]. We shall show that it is possible to generate these quadrature rules as a modified eigenvalue problem.

Let $\omega(x) > 0$ be a fixed weight function defined on the interval $[a, b]$. For $\omega(x)$ it is possible to define a sequence of polynomials $p_0(x), p_1(x), \dots$ which are orthonormal with respect to $\omega(x)$ and in which $p_n(x)$ is of exact degree $n$, so that

$\int_a^b p_m(x)p_n(x)\omega(x)\,dx = \begin{cases} 1 & \text{when } m = n, \\ 0 & \text{when } m \ne n. \end{cases}$

The polynomial $p_n(x) = k_n\prod_{i=1}^n (x - t_i^{(n)})$, $k_n > 0$, has $n$ distinct real roots $a < t_1^{(n)} < t_2^{(n)} < \cdots < t_n^{(n)} < b$. The roots of the orthogonal polynomials play an important role in Gauss-type quadrature.

THEOREM. Let $f(x) \in C^{2N}[a, b]$; then it is possible to determine positive $w_j$ so that

$\int_a^b f(x)\omega(x)\,dx = \sum_{j=1}^N w_jf(t_j^{(N)}) + R[f]$,

where

$R[f] = \dfrac{f^{(2N)}(\eta)}{(2N)!}\int_a^b \prod_{j=1}^N (x - t_j^{(N)})^2\,\omega(x)\,dx$, $\quad a < \eta < b$.

Thus, the Gauss-type quadrature rule is exact for all polynomials of degree less than or equal to $2N - 1$.

Any set of orthonormal polynomials satisfies a three-term recurrence relationship:

(7.1)  $\beta_jp_j(x) = (x - \alpha_j)p_{j-1}(x) - \beta_{j-1}p_{j-2}(x)$ for $j = 1, 2, \dots, N$,
       $p_{-1}(x) \equiv 0$, $\quad p_0(x) \equiv 1$.

We may identify (7.1) with the matrix equation

(7.2)  $x\,\mathbf{p}(x) = J_N\mathbf{p}(x) + \beta_Np_N(x)e_N$,

where

$\mathbf{p}(x)^T = [p_0(x), p_1(x), \dots, p_{N-1}(x)]$, $\quad e_N^T = [0, 0, \dots, 1]$,

and

$J_N = \begin{bmatrix} \alpha_1 & \beta_1 & & \\ \beta_1 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \beta_{N-1} \\ & & \beta_{N-1} & \alpha_N \end{bmatrix}$.

Suppose that the eigenvalues of $J_N$ are computed so that

$J_Nq_j = \lambda_jq_j$, $\quad j = 1, 2, \dots, N$,

with $q_j^Tq_j = 1$ and $q_j^T = [q_{1j}, q_{2j}, \dots, q_{Nj}]$. Then it is shown in [11] that

(7.3)  $t_j^{(N)} = \lambda_j$, $\quad w_j = (q_{1j})^2$

(note that $\int_a^b \omega(x)\,dx = 1$ here, since $p_0 \equiv 1$ is orthonormal). From here on, we drop the superscripts on the $t_j$'s. A very effective way to compute the eigenvalues of $J_N$ and the first components of the orthonormalized eigenvectors is to use the QR method of Francis (cf. [14]).

Now let us consider the problem of determining the quadrature rule so that

$\int_a^b f(x)\omega(x)\,dx = \sum_{j=1}^N w_jf(t_j) + \sum_{k=1}^M v_kf(z_k) + R[f]$,

where the nodes $\{z_k\}_{k=1}^M$ are prescribed. It is possible to determine $\{w_j, t_j\}_{j=1}^N$, $\{v_k\}_{k=1}^M$ so that we have for the remainder

$R[f] = \dfrac{f^{(2N+M)}(\eta)}{(2N+M)!}\int_a^b \prod_{k=1}^M (x - z_k)\prod_{j=1}^N (x - t_j)^2\,\omega(x)\,dx$, $\quad a < \eta < b$.

For $M = 1$ and $z_1 = a$ or $z_1 = b$, we have the Gauss-Radau-type formula, and for $M = 2$ with $z_1 = a$ and $z_2 = b$, we have the Gauss-Lobatto-type formula.

First we shall show how the Gauss-Radau-type rule may be computed. For convenience, we assume that $z_1 = a$. Now we wish to determine the polynomial $p_{N+1}(x)$ so that $p_{N+1}(a) = 0$. From (7.1) we see that this implies that

$0 = p_{N+1}(a) = (a - \alpha_{N+1})p_N(a) - \beta_Np_{N-1}(a)$,

or

(7.4)  $\alpha_{N+1} = a - \beta_N\dfrac{p_{N-1}(a)}{p_N(a)}$.

From equation (7.2) we have

$(J_N - aI)\mathbf{p}(a) = -\beta_Np_N(a)e_N$,

or equivalently,

(7.5)  $(J_N - aI)\boldsymbol{\delta}(a) = \beta_N^2e_N$, $\quad \delta_j(a) = -\beta_N\,\dfrac{p_{j-1}(a)}{p_N(a)}$, $\quad j = 1, 2, \dots, N$.

Thus,

(7.6)  $\alpha_{N+1} = a + \delta_N(a)$.

Hence, in order to compute the Gauss-Radau-type rule, we do the following:

(a) Generate the matrix $J_N$ and the element $\beta_N$.
(b) Solve the system of equations (7.5) for $\delta_N(a)$.
(c) Compute $\alpha_{N+1}$ by (7.6) and use it to replace the $(N+1, N+1)$ element of $J_{N+1}$.
(d) Use the QR algorithm to compute the eigenvalues and first elements of the eigenvectors of the tridiagonal matrix

$J_{N+1} = \begin{bmatrix} J_N & \beta_Ne_N \\ \beta_Ne_N^T & \alpha_{N+1} \end{bmatrix}$.

Of course, one of the eigenvalues of the matrix $J_{N+1}$ must be equal to $a$. Since $a < \lambda_{\min}(J_N)$, the matrix $J_N - aI$ will be positive definite, and hence Gaussian elimination without pivoting may be used to solve (7.5). It is not even necessary to solve the complete system, since it is only necessary to compute the element $\delta_N(a)$. However, one may wish to use iterative refinement to compute $\delta_N(a)$ very precisely, since for $N$ large, $\lambda_{\min}(J_N)$ may be close to $a$, and hence the system of equations (7.5) may be quite ill-conditioned. When $z_1 = b$, the calculation of $J_{N+1}$ is identical except with $b$ replacing $a$ in equations (7.5) and (7.6); the matrix $J_N - bI$ will be negative definite since $b > \lambda_{\max}(J_N)$.
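As an illustration of steps (a)-(d), the following numpy sketch builds the Gauss-Radau rule with preassigned node $a = -1$ for the Legendre weight on $[-1, 1]$, normalized so that $\int\omega = 1$; the recurrence coefficients $\alpha_j = 0$, $\beta_j = j/\sqrt{4j^2 - 1}$ are the standard ones for this weight, and the function name and the small checks are ours (np.linalg.eigh stands in for the QR algorithm):

```python
import numpy as np

def gauss_radau(N, a=-1.0):
    """Gauss-Radau rule with fixed node a for the Legendre weight w(x) = 1/2
    on [-1, 1], following steps (a)-(d) of Section 7."""
    beta = np.array([j / np.sqrt(4.0 * j * j - 1.0) for j in range(1, N + 1)])
    J = np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)  # J_N (alpha_j = 0)
    # (b) solve (J_N - a I) delta = beta_N^2 e_N for its last component
    rhs = np.zeros(N); rhs[-1] = beta[-1] ** 2
    delta = np.linalg.solve(J - a * np.eye(N), rhs)
    alpha_next = a + delta[-1]                          # (7.6)
    # (c)-(d) bordered matrix J_{N+1} and its eigensystem
    Jp = np.zeros((N + 1, N + 1))
    Jp[:N, :N] = J
    Jp[N, N] = alpha_next
    Jp[N - 1, N] = Jp[N, N - 1] = beta[-1]
    lam, Qe = np.linalg.eigh(Jp)
    return lam, Qe[0, :] ** 2                           # nodes and weights, (7.3)

nodes, w = gauss_radau(4)
print(nodes[0])                 # one node equals the preassigned a = -1
print(w @ nodes**2, 1.0 / 3.0)  # integrates x^2 against w(x) = 1/2 exactly
```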
To compute the Gauss-Lobatto quadrature rule, we need to compute a matrix $J_{N+1}$ such that $\lambda_{\min}(J_{N+1}) = a$ and $\lambda_{\max}(J_{N+1}) = b$. Thus, we wish to determine $p_{N+1}(x)$ so that

(7.7)  $p_{N+1}(a) = p_{N+1}(b) = 0$.

Now from (7.1) we have

$\beta_{N+1}p_{N+1}(x) = (x - \alpha_{N+1})p_N(x) - \beta_Np_{N-1}(x)$,

so that (7.7) implies that

(7.8)  $\alpha_{N+1}p_N(a) + \beta_Np_{N-1}(a) = ap_N(a)$,
       $\alpha_{N+1}p_N(b) + \beta_Np_{N-1}(b) = bp_N(b)$.

Using the relationship (7.2), if

(7.9)  $(J_N - aI)\boldsymbol{\gamma} = e_N$ and $(J_N - bI)\boldsymbol{\mu} = e_N$,

then

(7.10)  $\gamma_j = -\dfrac{1}{\beta_N}\,\dfrac{p_{j-1}(a)}{p_N(a)}$, $\quad \mu_j = -\dfrac{1}{\beta_N}\,\dfrac{p_{j-1}(b)}{p_N(b)}$, $\quad j = 1, 2, \dots, N$.

Thus, (7.8) is equivalent to the system of equations

(7.11)  $\alpha_{N+1} - \gamma_N\beta_N^2 = a$, $\quad \alpha_{N+1} - \mu_N\beta_N^2 = b$.

Hence, in order to compute the Gauss-Lobatto-type rule, we perform the following calculations:

(a) Generate the matrix $J_N$.
(b) Solve the systems of equations (7.9) for $\gamma_N$ and $\mu_N$.
(c) Solve (7.11) for $\alpha_{N+1}$ and $\beta_N^2$.
(d) Use the QR algorithm to compute the eigenvalues and first elements of the eigenvectors of the tridiagonal matrix

$J_{N+1} = \begin{bmatrix} J_N & \beta_Ne_N \\ \beta_Ne_N^T & \alpha_{N+1} \end{bmatrix}$.

Galant [7] has given an algorithm for computing the Gaussian-type quadrature rules with preassigned nodes which is based on a theorem of Christoffel. His method constructs the orthogonal polynomials with respect to a modified weight function.

Acknowledgments. The author wishes to thank Mr. Michael Saunders for making several helpful suggestions for improving the presentation, Mrs. Erin Brent for performing so carefully some of the calculations indicated in §5 and §6, and Miss Linda Kaufman for expertly performing the numerical experiments associated with §7. Special thanks to Dr. C. Paige for his careful reading of this manuscript and his many excellent suggestions. The referees also made several beneficial comments.

REFERENCES

[1] R. H. Bartels, G. H. Golub and M. A. Saunders, Numerical techniques in mathematical programming, Nonlinear Programming, J. B. Rosen, O. L. Mangasarian and K. Ritter, eds., Academic Press, New York, 1970, pp. 123-176.
[2] A. Björck and G. H. Golub, Numerical methods for computing angles between linear subspaces, Math. Comp., to appear.
[3] G. Dahlquist, S. Eisenstat and G. H. Golub, Bounds for the error of linear systems of equations using the theory of moments, J. Math. Anal. Appl., 37 (1972), pp. 151-166.
[4] J. Durbin, An alternative to the bounds test for testing for serial correlation in least squares regression, Econometrica, 38 (1970), pp. 422-429.
[5] G. Fix and R. Heiberger, An algorithm for the ill-conditioned generalized eigenvalue problem, Numer. Math., to appear.
[6] G. E. Forsythe and G. H. Golub, On the stationary values of a second degree polynomial on the unit sphere, SIAM J. Appl. Math., 13 (1965), pp. 1050-1068.
[7] D. Galant, An implementation of Christoffel's theorem in the theory of orthogonal polynomials, Math. Comp., 24 (1971), pp. 111-113.
[8] G. H. Golub, Numerical methods for solving linear least squares problems, Numer. Math., 7 (1965), pp. 206-216.
[9] G. H. Golub and C. Reinsch, Singular value decomposition and least squares solutions, Numer. Math., 14 (1970), pp. 403-420.
[10] G. H. Golub and R. Underwood, Stationary values of the ratio of quadratic forms subject to linear constraints, Z. Angew. Math. Phys., 21 (1970), pp. 318-326.
[11] G. H. Golub and J. H. Welsch, Calculation of Gauss quadrature rules, Math. Comp., 23 (1969), pp. 221-230.
[12] V. I. Krylov, Approximate Calculation of Integrals, Macmillan, New York, 1962.
[13] G. Peters and J. H. Wilkinson, Eigenvalues of Ax = λBx with band symmetric A and B, Comput. J., 12 (1969), pp. 398-404.
[14] J. H. Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.