into equation (5.1.11), and then setting z = 1. Sometimes you will want to compute a function from a series representation even when the computation is not efficient. For example, you may be using the values obtained to fit the function to an approximating form that you will use subsequently (cf. §5.8). If you are summing very large numbers of slowly convergent terms, pay attention to roundoff errors! In floating-point representation it is more accurate to sum a list of numbers in the order starting with the smallest one, rather than starting with the largest one. It is even better to group terms pairwise, then in pairs of pairs, etc., so that all additions involve operands of comparable magnitude.


5.2 Evaluation of Continued Fractions


Continued fractions are often powerful ways of evaluating functions that occur in scientific applications. A continued fraction looks like this:

$$ f(x) = b_0 + \cfrac{a_1}{b_1 + \cfrac{a_2}{b_2 + \cfrac{a_3}{b_3 + \cfrac{a_4}{b_4 + \cfrac{a_5}{b_5 + \cdots}}}}} $$   (5.2.1)

Printers prefer to write this as

$$ f(x) = b_0 + \frac{a_1}{b_1 +}\ \frac{a_2}{b_2 +}\ \frac{a_3}{b_3 +}\ \frac{a_4}{b_4 +}\ \frac{a_5}{b_5 +} \cdots $$   (5.2.2)

In either (5.2.1) or (5.2.2), the a's and b's can themselves be functions of x, usually linear or quadratic monomials at worst (i.e., constants times x or times x^2). For example, the continued fraction representation of the tangent function is

$$ \tan x = \frac{x}{1 -}\ \frac{x^2}{3 -}\ \frac{x^2}{5 -}\ \frac{x^2}{7 -} \cdots $$   (5.2.3)
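As a quick concrete illustration (an ad hoc sketch, not one of the book's routines), equation (5.2.3) can be evaluated by truncating it at a guessed depth N and working from the innermost term outward; the function name tan_cf_backward and the fixed depth N are choices made only for this example, and the discussion below explains why this "blind" right-to-left approach is not the recommended one.

/* Evaluate the tangent fraction (5.2.3) truncated at a guessed depth N,
   working from the innermost term outward (right to left). */
double tan_cf_backward(double x, int N)
{
    double tail = 0.0;                       /* neglected remainder beyond level N */
    int k;
    for (k = N; k >= 2; k--)                 /* fold in x^2/((2k-1) - tail) */
        tail = x*x/((2.0*k - 1.0) - tail);
    return x/(1.0 - tail);                   /* outermost level: x/(1 - ...) */
}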

Continued fractions frequently converge much more rapidly than power series expansions, and in a much larger domain in the complex plane (not necessarily including the domain of convergence of the series, however). Sometimes the continued fraction converges best where the series does worst, although this is not

a general rule. Blanch [1] gives a good review of the most useful convergence tests for continued fractions.

There are standard techniques, including the important quotient-difference algorithm, for going back and forth between continued fraction approximations, power series approximations, and rational function approximations. Consult Acton [2] for an introduction to this subject, and Fike [3] for further details and references.

How do you tell how far to go when evaluating a continued fraction? Unlike a series, you can't just evaluate equation (5.2.1) from left to right, stopping when the change is small. Written in the form of (5.2.1), the only way to evaluate the continued fraction is from right to left, first (blindly!) guessing how far out to start. This is not the right way.

The right way is to use a result that relates continued fractions to rational approximations, and that gives a means of evaluating (5.2.1) or (5.2.2) from left to right. Let f_n denote the result of evaluating (5.2.2) with coefficients through a_n and b_n. Then

$$ f_n = \frac{A_n}{B_n} $$   (5.2.4)


where A_n and B_n are given by the following recurrence:

$$ A_{-1} \equiv 1 \qquad B_{-1} \equiv 0 \qquad A_0 \equiv b_0 \qquad B_0 \equiv 1 $$
$$ A_j = b_j A_{j-1} + a_j A_{j-2} \qquad B_j = b_j B_{j-1} + a_j B_{j-2} \qquad j = 1, 2, \ldots, n $$   (5.2.5)

This method was invented by J. Wallis in 1655 (!), and is discussed in his Arithmetica Infinitorum [4]. You can easily prove it by induction. In practice, this algorithm has some unattractive features: The recurrence (5.2.5) frequently generates very large or very small values for the partial numerators and denominators A_j and B_j. There is thus the danger of overflow or underflow of the floating-point representation. However, the recurrence (5.2.5) is linear in the A's and B's. At any point you can rescale the currently saved two levels of the recurrence, e.g., divide A_j, B_j, A_{j-1}, and B_{j-1} all by B_j. This incidentally makes A_j = f_j and is convenient for testing whether you have gone far enough: See if f_j and f_{j-1} from the last iteration are as close as you would like them to be. (If B_j happens to be zero, which can happen, just skip the renormalization for this cycle. A fancier level of optimization is to renormalize only when an overflow is imminent, saving the unnecessary divides. All this complicates the program logic.)
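For concreteness, here is a minimal sketch of this scheme (a generic driver assumed for this illustration, not one of the book's routines): it applies the recurrence (5.2.5) and renormalizes by B_j at every step; the coefficient callbacks aj() and bj() and the relative convergence test are choices made here.

#include <math.h>

/* Evaluate b0 + a1/(b1+) a2/(b2+) ... via the recurrence (5.2.5),
   rescaling the two saved levels by B_j each step to avoid overflow/underflow. */
double cf_wallis(double (*aj)(int, double), double (*bj)(int, double),
                 double x, int nmax, double eps)
{
    double Am = 1.0, Bm = 0.0;        /* A_{j-2}, B_{j-2}: start as A_{-1}, B_{-1} */
    double A  = bj(0, x), B = 1.0;    /* A_{j-1}, B_{j-1}: start as A_0, B_0 */
    double f = A, fold;
    int j;

    for (j = 1; j <= nmax; j++) {
        double An = bj(j, x)*A + aj(j, x)*Am;   /* recurrence (5.2.5) */
        double Bn = bj(j, x)*B + aj(j, x)*Bm;
        Am = A;  Bm = B;
        A  = An; B  = Bn;
        if (B != 0.0) {               /* renormalize: divide all four by B_j */
            Am /= B;  Bm /= B;
            A  /= B;  B   = 1.0;      /* now A is f_j itself */
            fold = f;  f = A;
            if (fabs(f - fold) <= eps*fabs(f)) break;   /* close enough? */
        }                             /* if B_j happens to be zero, skip this cycle */
    }
    return f;
}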

Two newer algorithms have been proposed for evaluating continued fractions. Steed's method does not use A_j and B_j explicitly, but only the ratio D_j = B_{j-1}/B_j. One calculates D_j and Δf_j = f_j − f_{j-1} recursively using

$$ D_j = 1/(b_j + a_j D_{j-1}) $$   (5.2.6)
$$ \Delta f_j = (b_j D_j - 1)\,\Delta f_{j-1} $$   (5.2.7)

Steed's method (see, e.g., [5]) avoids the need for rescaling of intermediate results. However, for certain continued fractions you can occasionally run into a situation
where the denominator in (5.2.6) approaches zero, so that D_j and Δf_j are very large. The next Δf_{j+1} will typically cancel this large change, but with loss of accuracy in the numerical running sum of the Δf_j's. It is awkward to program around this, so Steed's method can be recommended only for cases where you know in advance that no denominator can vanish. We will use it for a special purpose in the routine bessik (§6.7).

The best general method for evaluating continued fractions seems to be the modified Lentz's method [6]. The need for rescaling intermediate results is avoided by using both the ratios

$$ C_j = A_j / A_{j-1}, \qquad D_j = B_{j-1} / B_j $$   (5.2.8)

and calculating f_j by

$$ f_j = f_{j-1} C_j D_j $$   (5.2.9)

From equation (5.2.5), one easily shows that the ratios satisfy the recurrence relations

$$ D_j = 1/(b_j + a_j D_{j-1}), \qquad C_j = b_j + a_j / C_{j-1} $$   (5.2.10)
In this algorithm there is the danger that the denominator in the expression for D_j, or the quantity C_j itself, might approach zero. Either of these conditions invalidates (5.2.10). However, Thompson and Barnett [5] show how to modify Lentz's algorithm to fix this: Just shift the offending term by a small amount, e.g., 10^{-30}. If you work through a cycle of the algorithm with this prescription, you will see that f_{j+1} is accurately calculated.

In detail, the modified Lentz's algorithm is this:
    Set f_0 = b_0; if b_0 = 0 set f_0 = tiny.
    Set C_0 = f_0.
    Set D_0 = 0.
    For j = 1, 2, . . .
        Set D_j = b_j + a_j D_{j-1}.
        If D_j = 0, set D_j = tiny.
        Set C_j = b_j + a_j / C_{j-1}.
        If C_j = 0, set C_j = tiny.
        Set D_j = 1/D_j.
        Set Δ_j = C_j D_j.
        Set f_j = f_{j-1} Δ_j.
        If |Δ_j − 1| < eps then exit.

Here eps is your floating-point precision, say 10^{-7} or 10^{-15}. The parameter tiny should be less than typical values of eps |b_j|, say 10^{-30}.

The above algorithm assumes that you can terminate the evaluation of the continued fraction when |f_j − f_{j-1}| is sufficiently small. This is usually the case, but by no means guaranteed. Jones [7] gives a list of theorems that can be used to justify this termination criterion for various kinds of continued fractions.

There is at present no rigorous analysis of error propagation in Lentz's algorithm. However, empirical tests suggest that it is at least as good as other methods.
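As a concrete illustration of the algorithm just listed, here is a minimal C sketch (the generic driver contfrac, its coefficient callbacks aj() and bj(), and the tan x coefficients are assumptions made for this example, not routines from the book):

#include <math.h>

#define TINY 1.0e-30      /* "tiny": smaller than eps*|b_j| for typical b_j */
#define EPS  1.0e-15      /* "eps": roughly double-precision accuracy */

/* Evaluate b0 + a1/(b1+) a2/(b2+) ... by the modified Lentz method. */
double contfrac(double (*aj)(int, double), double (*bj)(int, double),
                double x, int maxit)
{
    double f, C, D, delta;
    int j;

    f = bj(0, x);                         /* f_0 = b_0, or tiny if b_0 = 0 */
    if (f == 0.0) f = TINY;
    C = f;
    D = 0.0;
    for (j = 1; j <= maxit; j++) {
        D = bj(j, x) + aj(j, x)*D;        /* D_j = b_j + a_j D_{j-1} */
        if (D == 0.0) D = TINY;
        C = bj(j, x) + aj(j, x)/C;        /* C_j = b_j + a_j / C_{j-1} */
        if (C == 0.0) C = TINY;
        D = 1.0/D;
        delta = C*D;                      /* Delta_j = C_j D_j */
        f *= delta;                       /* f_j = f_{j-1} Delta_j */
        if (fabs(delta - 1.0) < EPS) break;
    }
    return f;
}

/* Illustrative coefficients for tan x, equation (5.2.3):
   b_0 = 0, b_j = 2j-1 for j >= 1; a_1 = x, a_j = -x*x for j >= 2. */
double b_tan(int j, double x) { return j == 0 ? 0.0 : 2.0*j - 1.0; }
double a_tan(int j, double x) { return j == 1 ? x : -x*x; }

With this, contfrac(a_tan, b_tan, x, 100) approximates tan x; the unused x argument in b_tan is kept only so that both callbacks share one signature.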

Manipulating Continued Fractions


Several important properties of continued fractions can be used to rewrite them in forms that can speed up numerical computation. An equivalence transformation

$$ a_n \to \lambda a_n, \qquad b_n \to \lambda b_n, \qquad a_{n+1} \to \lambda a_{n+1} $$   (5.2.11)

leaves the value of a continued fraction unchanged. By a suitable choice of the scale factor λ you can often simplify the form of the a's and the b's. Of course, you can carry out successive equivalence transformations, possibly with different λ's, on successive terms of the continued fraction.

The even and odd parts of a continued fraction are continued fractions whose successive convergents are f_{2n} and f_{2n+1}, respectively. Their main use is that they converge twice as fast as the original continued fraction, and so if their terms are not much more complicated than the terms in the original there can be a big savings in computation. The formula for the even part of (5.2.2) is

$$ f_{\rm even} = d_0 + \frac{c_1}{d_1 +}\ \frac{c_2}{d_2 +} \cdots $$   (5.2.12)

where in terms of intermediate variables

$$ \alpha_1 = \frac{a_1}{b_1}, \qquad \alpha_n = \frac{a_n}{b_n b_{n-1}}, \quad n \ge 2 $$   (5.2.13)

we have

$$ d_0 = b_0, \qquad c_1 = \alpha_1, \qquad d_1 = 1 + \alpha_2 $$
$$ c_n = -\alpha_{2n-1}\alpha_{2n-2}, \qquad d_n = 1 + \alpha_{2n-1} + \alpha_{2n}, \quad n \ge 2 $$   (5.2.14)

You can find the similar formula for the odd part in the review by Blanch [1]. Often a combination of the transformations (5.2.14) and (5.2.11) is used to get the best form for numerical work. We will make frequent use of continued fractions in the next chapter.
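As a small illustration of (5.2.13) and (5.2.14), the following sketch builds the even-part coefficients c_n and d_n from given a_n and b_n; the helper name even_part and its array conventions are hypothetical choices made for this example, not a book routine.

#include <stdlib.h>

/* Fill c[1..m] and d[0..m] of the even part (5.2.12) from a[1..2m] and b[0..2m],
   using the intermediate alpha_n of (5.2.13) and the relations (5.2.14). */
void even_part(int m, const double a[], const double b[], double c[], double d[])
{
    double *alpha = (double *) malloc((2*m + 1)*sizeof(double));
    int n;

    alpha[1] = a[1]/b[1];
    for (n = 2; n <= 2*m; n++)
        alpha[n] = a[n]/(b[n]*b[n-1]);          /* (5.2.13) */
    d[0] = b[0];
    c[1] = alpha[1];
    d[1] = 1.0 + alpha[2];
    for (n = 2; n <= m; n++) {                  /* (5.2.14) */
        c[n] = -alpha[2*n-1]*alpha[2*n-2];
        d[n] = 1.0 + alpha[2*n-1] + alpha[2*n];
    }
    free(alpha);
}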
CITED REFERENCES AND FURTHER READING:
Abramowitz, M., and Stegun, I.A. 1964, Handbook of Mathematical Functions, Applied Mathematics Series, Volume 55 (Washington: National Bureau of Standards; reprinted 1968 by Dover Publications, New York), §3.10.
Blanch, G. 1964, SIAM Review, vol. 6, pp. 383–421. [1]
Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), Chapter 11. [2]
Cuyt, A., and Wuytack, L. 1987, Nonlinear Methods in Numerical Analysis (Amsterdam: North-Holland), Chapter 1.
Fike, C.T. 1968, Computer Evaluation of Mathematical Functions (Englewood Cliffs, NJ: Prentice-Hall), §8.2, §10.4, and §10.5. [3]
Wallis, J. 1695, in Opera Mathematica, vol. 1, p. 355, Oxoniae: e Theatro Sheldoniano. Reprinted by Georg Olms Verlag, Hildesheim, New York (1972). [4]

Thompson, I.J., and Barnett, A.R. 1986, Journal of Computational Physics, vol. 64, pp. 490–509. [5]
Lentz, W.J. 1976, Applied Optics, vol. 15, pp. 668–671. [6]
Jones, W.B. 1973, in Padé Approximants and Their Applications, P.R. Graves-Morris, ed. (London: Academic Press), p. 125. [7]

5.3 Polynomials and Rational Functions


A polynomial of degree N is represented numerically as a stored array of coefficients, c[j] with j = 0, . . . , N. We will always take c[0] to be the constant term in the polynomial, c[N] the coefficient of x^N; but of course other conventions are possible. There are two kinds of manipulations that you can do with a polynomial: numerical manipulations (such as evaluation), where you are given the numerical value of its argument, or algebraic manipulations, where you want to transform the coefficient array in some way without choosing any particular argument. Let's start with the numerical. We assume that you know enough never to evaluate a polynomial this way:
p=c[0]+c[1]*x+c[2]*x*x+c[3]*x*x*x+c[4]*x*x*x*x;

or (even worse!),
p=c[0]+c[1]*x+c[2]*pow(x,2.0)+c[3]*pow(x,3.0)+c[4]*pow(x,4.0);

Come the (computer) revolution, all persons found guilty of such criminal behavior will be summarily executed, and their programs won't be! It is a matter of taste, however, whether to write
p=c[0]+x*(c[1]+x*(c[2]+x*(c[3]+x*c[4])));

or
p=(((c[4]*x+c[3])*x+c[2])*x+c[1])*x+c[0];

If the number of coefficients c[0..n] is large, one writes


p=c[n]; for(j=n-1;j>=0;j--) p=p*x+c[j];

or
p=c[j=n]; while (j>0) p=p*x+c[--j];

Another useful trick is for evaluating a polynomial P(x) and its derivative dP(x)/dx simultaneously:
p=c[n]; dp=0.0; for(j=n-1;j>=0;j--) {dp=dp*x+p; p=p*x+c[j];}
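If the second derivative is wanted as well, the same trick extends one more level (a sketch in the same spirit, not one of the book's listed one-liners); note that the innermost accumulator produces P''(x)/2!, so a factor of 2 is restored at the end:

p=c[n]; dp=0.0; ddp=0.0;
for(j=n-1;j>=0;j--) {ddp=ddp*x+dp; dp=dp*x+p; p=p*x+c[j];}
ddp *= 2.0;   /* the loop accumulates P''(x)/2! */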
