Construction of new generalizations of Wynn’s epsilon and rho algorithm by solving finite difference equations in the transformation order

Abstract

We construct new sequence transformations based on Wynn’s epsilon and rho algorithms. The recursions of the new algorithms include the recursions of Wynn’s epsilon and rho algorithm and of Osada’s generalized rho algorithm as special cases. We demonstrate the performance of our algorithms numerically by applying them to some linearly and logarithmically convergent sequences as well as some divergent series.

References

  1. Aitken, A.C.: On Bernoulli’s numerical solution of algebraic equations. Proc. Roy. Soc. Edinburgh. 46, 289–305 (1926)

  2. Baker, G.A. Jr., Graves-Morris, P.R.: Padé Approximants, 2nd edn. Cambridge University Press, Cambridge (1996)

  3. Barbeau, E.J.: Euler subdues a very obstreperous series. Amer. Math. Monthly. 86, 356–372 (1979)

  4. Barbeau, E.J., Leah, P.J.: Euler’s 1760 paper on divergent series. Hist. Math. 3, 141–160 (1976)

  5. Barber, M.N., Hamer, C.J.: Extrapolation of sequences using a generalized epsilon-algorithm. J. Austral. Math. Soc. B 23, 229–240 (1982)

  6. Bender, C.M., Wu, T.T.: Anharmonic oscillator. Phys. Rev. 184, 1231–1260 (1969)

  7. Bender, C.M., Wu, T.T.: Large-order behavior of perturbation theory. Phys. Rev. Lett. 27, 461–465 (1971)

  8. Bender, C.M., Wu, T.T.: Anharmonic oscillator. II. A study in perturbation theory in large order. Phys. Rev. D 7, 1620–1636 (1973)

  9. Bhowmick, S., Bhattacharya, R., Roy, D.: Iterations of convergence accelerating nonlinear transforms. Comput. Phys. Commun. 54, 31–36 (1989)

  10. Bickley, W.G., Miller, J.C.P.: The numerical summation of slowly convergent series of positive terms. Phil. Mag. 22, 754–767 (1936)

  11. Bjørstad, P., Dahlquist, G., Grosse, E.: Extrapolations of asymptotic expansions by a modified Aitken δ2-formula. BIT 21, 56–65 (1981)

  12. Borghi, R.: Computational optics through sequence transformations. In: Visser, T D (ed.) Progress in Optics, vol 61, chap 1, pp 1–70. Academic Press, Amsterdam (2016)

  13. Borghi, R., Weniger, E.J.: Convergence analysis of the summation of the factorially divergent Euler series by Padé approximants and the delta transformation. Appl. Numer. Math. 94, 149–178 (2015)

  14. Bornemann, F., Laurie, D., Wagon, S., Waldvogel, J.: The SIAM 100-Digit Challenge: a Study in High-Accuracy Numerical Computing. Society of Industrial Applied Mathematics, Philadelphia (2004)

  15. Bowman, K.O., Shenton, L.R.: Continued Fractions in Statistical Applications. Marcel Dekker, New York (1989)

  16. Brezinski, C.: Accélération de suites à convergence logarithmique. C. R. Acad. Sci. Paris 273 A, 727–730 (1971a)

  17. Brezinski, C.: Méthodes d’accélération de la convergence en analyse numérique. Thèse d’Etat, Université de Grenoble (1971b)

  18. Brezinski, C.: Conditions d’application et de convergence de procédés d’extrapolation. Numer. Math. 20, 64–79 (1972)

  19. Brezinski, C.: Accélération de la Convergence en Analyse Numérique. Springer, Berlin (1977)

  20. Brezinski, C.: Algorithmes d’Accélération de la Convergence—Étude Numérique. Éditions Technip, Paris (1978)

  21. Brezinski, C.: Padé-Type Approximation and General Orthogonal Polynomials. Basel, Birkhäuser (1980)

  22. Brezinski, C.: A Bibliography on Continued Fractions, Padé Approximation, Sequence Transformation and Related Subjects. Prensas Universitarias de Zaragoza, Zaragoza (1991a)

  23. Brezinski, C.: History of Continued Fractions and Padé Approximants. Springer, Berlin (1991b)

  24. Brezinski, C.: Extrapolation algorithms and Padé approximations: a historical survey. Appl. Numer. Math. 20, 299–318 (1996)

  25. Brezinski, C.: Convergence Acceleration During the 20th Century. J. Comput. Appl. Math. 122, 1–21 (2000a). Reprinted in: Brezinski C (ed) Numerical Analysis 2000, Vol. 2: Interpolation and Extrapolation, Elsevier, Amsterdam, pp 1–21

  26. Brezinski, C.: Some pioneers of extrapolation methods. In: Bultheel, A, Cools, R (eds.) The Birth of Numerical Analysis, pp 1–22. World Scientific, Singapore (2009)

  27. Brezinski, C.: Reminiscences of Peter Wynn. Numer. Algor. 80, 5–10 (2019)

  28. Brezinski, C., Redivo Zaglia, M.: Extrapolation Methods. North-Holland, Amsterdam (1991)

  29. Brezinski, C., Redivo Zaglia, M.: The genesis and early developments of Aitken’s process, Shanks’ transformation, the ε-algorithm, and related fixed point methods. Numer. Algor. 80, 11–133 (2019)

  30. Brezinski, C., Redivo Zaglia, M., Weniger, E.J.: Special Issue: approximation and extrapolation of convergent and divergent sequences and series (CIRM, Luminy, France, 2009). Appl. Numer. Math. 60, 1183–1464 (2010)

  31. Brezinski, C., He, Y., Hu, X.B., Redivo-Zaglia, M., Sun, J.Q.: Multistep ε-algorithm, Shanks’ transformation, and the Lotka-Volterra system by Hirota’s method. Math. Comput. 81, 1527–1549 (2012)

  32. Bromwich, T.J.I.: An Introduction to the Theory of Infinite Series, 3rd edn. Chelsea, New York (1991). Originally published by Macmillan (London, 1908 and 1926)

  33. Burkhardt, H.: Über den Gebrauch divergenter Reihen in der Zeit von 1750–1860. Math. Annal. 70, 169–206 (1911)

  34. Caliceti, E., Meyer-Hermann, M., Ribeca, P., Surzhykov, A., Jentschura, U.D.: From useful algorithms for slowly convergent series to physical predictions based on divergent perturbative expansions. Phys. Rep. 446, 1–96 (2007)

  35. Chang, X.K., He, Y., Hu, X.B., Li, S.H.: A new integrable convergence acceleration algorithm for computing Brezinski-Durbin-Redivo-Zaglia’s sequence transformation via pfaffians. Numer. Algor. 78, 87–106 (2018)

  36. Čížek, J., Zamastil, J., Skalá, L.: New summation technique for rapidly divergent perturbation series. Hydrogen atom in magnetic field. J. Math. Phys. 44, 962–968 (2003)

  37. Cuyt, A., Wuytack, L.: Nonlinear Methods in Numerical Analysis. North-Holland, Amsterdam (1987)

  38. Delahaye, J.P.: Sequence Transformations. Springer, Berlin (1988)

  39. Drummond, J.E.: Summing a common type of slowly convergent series of positive terms. J. Austral. Math. Soc. B 19, 416–421 (1976)

  40. Dyson, F.J.: Divergence of perturbation theory in quantum electrodynamics. Phys. Rev. 85, 631–632 (1952)

  41. Ferraro, G.: The Rise and Development of the Theory of Series up to the Early 1820s. Springer, New York (2008)

  42. Filter, E., Steinborn, E.O.: Extremely compact formulas for molecular one-electron integrals and Coulomb integrals over Slater-type atomic orbitals. Phys. Rev. A 18, 1–11 (1978a)

  43. Filter, E., Steinborn, E.O.: The three-dimensional convolution of reduced Bessel functions and other functions of physical interest. J. Math. Phys. 19, 79–84 (1978b)

  44. Fischer, J.: On the role of power expansions in quantum field theory. Int. J. Mod. Phys. A 12, 3625–3663 (1997)

  45. Gil, A., Segura, J., Temme, N.M.: Numerical Methods for Special Functions. SIAM, Philadelphia (2007)

  46. Gil, A., Segura, J., Temme, N.M.: Basic methods for computing special functions. In: Simos, T E (ed.) Recent Advances in Computational and Applied Mathematics, pp 67–121. Springer, Dordrecht (2011)

  47. Graves-Morris, P.R., Roberts, D.E., Salam, A.: The epsilon algorithm and related topics. J. Comput. Appl. Math. 122, 51–80 (2000a). Reprinted in: Brezinski C (ed) Numerical Analysis 2000, Vol. 2: Interpolation and Extrapolation, Elsevier, Amsterdam, pp 51–80

  48. Grosswald, E.: Bessel Polynomials. Springer, Berlin (1978)

  49. Grotendorst, J., Weniger, E.J., Steinborn, E.O.: Efficient evaluation of infinite-series representations for overlap, two-center nuclear attraction, and Coulomb integrals using nonlinear convergence accelerators. Phys. Rev. A 33, 3706–3726 (1986)

  50. Hardy, G.H.: Divergent Series. Clarendon Press, Oxford (1949)

  51. He, Y., Hu, X.B., Sun, J.Q., Weniger, E.J.: Convergence acceleration algorithm via an equation related to the lattice Boussinesq equation. SIAM J. Sci. Comput. 33, 1234–1245 (2011)

  52. Homeier, H.H.H.: Scalar Levin-type sequence transformations. J. Comput. Appl. Math. 122, 81–147 (2000a). Reprinted in: Brezinski C (ed) Numerical Analysis 2000, Vol. 2: Interpolation and Extrapolation, Elsevier, Amsterdam, pp 81–147

  53. Kummer, E.E.: Eine neue Methode, die numerischen Summen langsam convergirender Reihen zu berechnen. J. Reine. Angew. Math. 16, 206–214 (1837)

  54. Le Guillou, J.C., Zinn-Justin J. (eds.): Large-Order Behaviour of Perturbation Theory. North-Holland, Amsterdam (1990)

  55. Levin, D.: Development of non-linear transformations for improving convergence of sequences. Int. J. Comput. Math. B 3, 371–388 (1973)

  56. Liem, C.B., Lü, T., Shih, T.M.: The Splitting Extrapolation Method. World Scientific, Singapore (1995)

  57. Marchuk, G.I., Shaidurov, V.V.: Difference Methods and Their Extrapolations. Springer, New York (1983)

  58. Nagai, A., Satsuma, J.: Discrete soliton equations and convergence acceleration algorithms. Phys. Lett. A 209, 305–312 (1995)

  59. Olver, F.W.J., Lozier, D.W., Boisvert, R.F., Clark, C.W. (eds.): NIST Handbook of Mathematical Functions. Cambridge University Press, Cambridge (2010). Available online under http://dlmf.nist.gov/

  60. Osada, N.: A convergence acceleration method for some logarithmically convergent sequences. SIAM J. Numer. Anal. 27, 178–189 (1990)

  61. Osada, N.: An acceleration theorem for the ρ algorithm. Numer. Math. 73, 521–531 (1996)

  62. Osada, N.: The early history of convergence acceleration methods. Numer. Algor. 60, 205–221 (2012)

  63. Padé, H.: Sur la représentation approchée d’une fonction par des fractions rationnelles. Ann. Sci. Éc. Norm. Sup. 9, 3–93 (1892)

  64. Papageorgiou, V., Grammaticos, B., Ramani, A.: Integrable lattices and convergence acceleration algorithms. Phys. Lett. A 179, 111–115 (1993)

  65. Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes: The Art of Scientific Computing. 3rd edn. Cambridge University Press, Cambridge (2007)

  66. Sablonniere, P.: Comparison of four algorithms accelerating the convergence of a subset of logarithmic fixed point sequences. Numer. Algor. 1, 177–197 (1991)

  67. Schmidt, J.R.: On the numerical solution of linear simultaneous equations by an iterative method. Philos. Mag. 32, 369–383 (1941)

  68. Sedogbo, G.A.: Convergence acceleration of some logarithmic sequences. J. Comput. Appl. Math. 32, 253–260 (1990)

  69. Shanks, D.: An analogy between transient and mathematical sequences and some nonlinear sequence transforms suggested by it. Part I. Tech. rep., Naval Ordnance Laboratory, White Oak, Memorandum 9994 (1949)

  70. Shanks, D.: Non-linear transformations of divergent and slowly convergent sequences. J. Math. Phys. (Cambridge Mass.) 34, 1–42 (1955)

  71. Shavitt, I.: The Gaussian function in calculations of statistical mechanics and quantum mechanics. In: Alder, B, Fernbach, S, Rotenberg, M (eds.) Methods in Computational Physics, vol. 2, Quantum Mechanics, pp 1–45. Academic Press, New York (1963)

  72. Sidi, A.: A convergence and stability study of the iterated Lubkin transformation and the θ-algorithm. Math. Comput. 72, 419–433 (2002)

  73. Sidi, A.: Practical Extrapolation Methods. Cambridge University Press, Cambridge (2003)

  74. Smith, D.A., Ford, W.F.: Acceleration of linear and logarithmic convergence. SIAM J. Numer. Anal. 16, 223–240 (1979)

  75. Smith, D.A., Ford, W.F.: Numerical comparisons of nonlinear convergence accelerators. Math. Comput. 38, 481–499 (1982)

  76. Steinborn, E.O., Filter, E.: Translations of fields represented by spherical-harmonic expansions for molecular calculations. III. Translations of reduced Bessel functions, Slater-type s-orbitals, and other functions. Theor. Chim. Acta. 38, 273–281 (1975)

  77. Sun, J.Q., Chang, X.K., He, Y., Hu, X.B.: An extended multistep Shanks transformation and convergence acceleration algorithm with their convergence and stability analysis. Numer. Math. 125, 785–809 (2013)

  78. Suslov, I.M.: Divergent perturbation series. J. Exp. Theor. Phys. (JETP) 100, 1188–1234 (2005)

  79. Temme, N.M.: Numerical aspects of special functions. Acta. Numer. 16, 379–478 (2007)

  80. Todd, J.: Motivation for working in numerical analysis. In: Todd, J (ed.) Survey of Numerical Analysis, pp 1–26. McGraw-Hill, New York (1962)

  81. Todd, J.: The lemniscate constants. Commun. ACM 18, 14–19 (1975)

  82. Trefethen, L.N.: Approximation Theory and Approximation Practice. SIAM, Philadelphia (2013)

  83. Tucciarone J.: The development of the theory of summable divergent series from 1880 to 1925. Arch. Hist. Ex. Sci. 10, 1–40 (1973)

  84. Vanden Broeck, J.M., Schwartz, L.W.: A one-parameter family of sequence transformations. SIAM J. Math. Anal. 10, 658–666 (1979)

  85. Walz, G.: Asymptotics and Extrapolation. Akademie Verlag, Berlin (1996)

  86. Weniger, E.J.: Untersuchung der Verwendbarkeit reduzierter Besselfunktionen als Basissatz für ab initio Rechnungen an Molekülen. Vergleichende Rechnungen am Beispiel des \(\mathrm {H}_{2}^{+}\). Diplomarbeit, Fachbereich Chemie und Pharmazie, Universität Regensburg (1977)

  87. Weniger, E.J.: Reduzierte Bessel-Funktionen als LCAO-Basissatz: Analytische und numerische Untersuchungen. PhD thesis, Fachbereich Chemie und Pharmazie, Universität Regensburg (1982). A short abstract of this thesis was published in Zentralblatt für Mathematik 523, 444 (1984), abstract no. 65015

  88. Weniger, E.J.: Nonlinear sequence transformations for the acceleration of convergence and the summation of divergent series. Comput. Phys. Rep. 10, 189–371 (1989). Los Alamos Preprint arXiv:math-ph/0306302

  89. Weniger, E.J.: On the derivation of iterated sequence transformations for the acceleration of convergence and the summation of divergent series. Comput. Phys. Commun. 64, 19–45 (1991)

  90. Weniger, E.J.: Interpolation between sequence transformations. Numer. Algor. 3, 477–486 (1992)

  91. Weniger, E.J.: Verallgemeinerte Summationsprozesse als numerische Hilfsmittel für quantenmechanische und quantenchemische Rechnungen. Habilitation thesis, Fachbereich Chemie und Pharmazie, Universität Regensburg, Los Alamos Preprint arXiv:math-ph/0306048 (1994)

  92. Weniger, E.J.: Prediction properties of Aitken’s iterated Δ2 process, of Wynn’s epsilon algorithm, and of Brezinski’s iterated theta algorithm. J. Comput. Appl. Math. 122, 329–356 (2000a). Reprinted in: Brezinski C (ed) Numerical Analysis 2000, Vol. 2: Interpolation and Extrapolation, Elsevier, Amsterdam, pp 329–356

  93. Weniger, E.J.: Mathematical properties of a new Levin-type sequence transformation introduced by Čížek, Zamastil, and Skalá. I. Algebraic theory. J. Math. Phys. 45, 1209–1246 (2004)

  94. Weniger, E.J.: Further discussion of sequence transformation methods. Subtopic “Related Resources” (R1) on the Numerical Recipes (Third Edition) Webnotes page http://www.nr.com/webnotes/ (2007)

  95. Weniger, E.J.: On the analyticity of Laguerre series. J. Phys. A 41, 425207-1–425207-43 (2008)

  96. Weniger, E.J.: The strange history of B functions or how theoretical chemists and mathematicians do (not) interact. Int. J. Quantum. Chem. 109, 1706–1716 (2009)

  97. Weniger, E.J.: An introduction to the topics presented at the conference “Approximation and extrapolation of convergent and divergent sequences and series” CIRM Luminy: September 28, 2009–October 2, 2009. Appl. Numer. Math. 60, 1184–1187 (2010)

  98. Weniger, E.J., Kirtman, B.: Extrapolation methods for improving the convergence of oligomer calculations to the infinite chain limit of quasi-onedimensional stereoregular polymers. Comput. Math. Appl. 45, 189–215 (2003)

  99. Weniger, E.J., Steinborn, E.O.: Numerical properties of the convolution theorems of B functions. Phys. Rev. A 28, 2026–2041 (1983)

  100. Wimp, J.: Sequence Transformations and Their Applications. Academic Press, New York (1981)

  101. Wynn, P.: On a device for computing the \(e_{m}(S_{n})\) transformation. Math. Tables Aids Comput. 10, 91–96 (1956a)

  102. Wynn, P.: On a Procrustean technique for the numerical transformation of slowly convergent sequences and series. Proc. Camb. Phil. Soc. 52, 663–671 (1956b)

  103. Wynn, P.: A note on programming repeated applications of the ε-algorithm. Rev. Franc. Trait. Inform. Chiffres. 8, 23–62 (1965)

  104. Wynn, P.: On the convergence and the stability of the epsilon algorithm. SIAM J. Numer. Anal. 3, 91–122 (1966)

Funding

Y. He was supported by the National Natural Science Foundation of China (Grant No. 11571358), the China Postdoctoral Science Foundation funded project (Grant Nos. 2012M510186 and 2013T60761), and the Youth Innovation Promotion Association CAS. X.K. Chang was supported by the National Natural Science Foundation of China (Grant Nos. 11701550, 11731014) and the Youth Innovation Promotion Association CAS. X.B. Hu was supported by the National Natural Science Foundation of China (Grant Nos. 11871336, 11571358). J.Q. Sun was supported by the Fundamental Research Funds for the Central Universities (201964008), and the National Natural Science Foundation of China (Grant Nos. 11401546, 11871444).

Author information

Correspondence to Yi He.

Additional information

We dedicate this article to the memory of Peter Wynn (1931–2017)

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1. Reduced Bessel functions

In this Appendix, we briefly discuss the most important properties of the so-called reduced Bessel functions [76, Eqs. (3.1) and (3.2)] and their anisotropic generalizations, the so-called B functions [42, Eqs. (3.3) and (3.4)]. These functions have gained some prominence in molecular electronic structure theory (see, for example, [96] and references therein), but have been largely ignored in mathematics. Reduced Bessel and B functions had also been the topic of Weniger’s Diploma and PhD theses [86, 87]. But in this article, we are predominantly interested in reduced Bessel functions because of the series expansion (A1.10) of 1/z in terms of reduced Bessel functions, which is well suited to test the ability of a sequence transformation to accelerate the logarithmic convergence of model sequences of the type of (2.13) with a non-integral decay parameter.

Based on previous work by Shavitt [71, Eq. (55) on p. 15], the reduced Bessel function with, in general, complex order ν and complex argument z was introduced by Steinborn and Filter [76, Eqs. (3.1) and (3.2)]:

$$ \widehat{k}_{\nu} (z) = (2/\pi)^{1/2} z^{\nu} K_{\nu} (z) . $$
(A1.1)

Here, \(K_{\nu}(z)\) is a modified Bessel function of the second kind [59, Eqs. (10.27.4) and (10.27.5)].

Many properties of the reduced Bessel functions follow directly from the corresponding properties of Kν(z). One example is the recursion

$$ \widehat{k}_{\nu+ 1} (z) = 2 \nu \widehat{k}_{\nu} (z) + z^{2} \widehat{k}_{\nu-1} (z) , $$
(A1.2)

which is stable upwards and which follows directly from the three-term recursion \(K_{\nu+1}(z) = (2\nu/z) K_{\nu}(z) + K_{\nu-1}(z)\) [59, Eq. (10.29.1)]. Another example is the symmetry relationship \(\widehat{k}_{-\nu}(z) = z^{-2\nu} \widehat{k}_{\nu}(z)\), which follows from the symmetry relationship \(K_{-\nu}(z) = K_{\nu}(z)\) [59, Eq. (10.27.3)].

If the order ν of \(\widehat {k}_{\nu } (z)\) is half-integral, ν = n + 1/2 with \(n \in \mathbb {N}_{0}\), a reduced Bessel function can be expressed as an exponential multiplied by a terminating confluent hypergeometric series 1F1 (see, for example, [99, Eq. (3.7)]):

$$ \widehat{k}_{n + 1/2} (z) = 2^{n} (1/2)_{n} \mathrm{e}^{-z} {~}_{1} F_{1} (-n; -2n; 2z) , \qquad n \in \mathbb{N}_{0} . $$
(A1.3)

The functions \(\widehat {k}_{n + 1/2} (z)\) can be computed conveniently and reliably with the help of the recursion (A1.2) in combination with the starting values \(\widehat {k}_{-1/2} (z) = \mathrm {e}^{-z}/z\) and \(\widehat {k}_{1/2} (z) = \mathrm {e}^{-z}\).
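For illustration, the following minimal Python sketch (ours, not part of the original material) implements this upward recursion with the quoted starting values; the comparison with the explicit special case \(\widehat{k}_{3/2}(z) = (1 + z)\,\mathrm{e}^{-z}\), which follows from (A1.3) with n = 1, is only a consistency check.

```python
import math

def rbf_half_integral(n_max, z):
    """Reduced Bessel functions k_hat_{n+1/2}(z) for n = 0, ..., n_max,
    computed with the upward recursion (A1.2),
        k_hat_{nu+1}(z) = 2 nu k_hat_{nu}(z) + z^2 k_hat_{nu-1}(z),
    started from k_hat_{-1/2}(z) = exp(-z)/z and k_hat_{1/2}(z) = exp(-z)."""
    km, k = math.exp(-z) / z, math.exp(-z)    # k_hat_{-1/2}(z), k_hat_{1/2}(z)
    values = [k]
    for n in range(n_max):
        nu = n + 0.5                          # current order
        km, k = k, 2.0 * nu * k + z * z * km  # next function k_hat_{nu+1}(z)
        values.append(k)
    return values

# consistency check: k_hat_{3/2}(z) = (1 + z) * exp(-z)
z = 2.0
print(rbf_half_integral(3, z)[1], (1.0 + z) * math.exp(-z))
```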

The polynomial part of \(\widehat {k}_{n + 1/2} (z)\) had been considered independently in the mathematical literature. There, the notation

$$ \theta_{n} (z) = \mathrm{e}^{z} \widehat{k}_{n + 1/2} (z) = 2^{n} (1/2)_{n} {~}_{1} F_{1} (-n; -2n; 2z) $$
(A1.4)

is used [48, Eq. (1) on p. 34]. Together with some other, closely related polynomials, the θn(z) are called Bessel polynomials [48]. According to Grosswald [48, Section 14], they have been applied in such diverse and seemingly unrelated fields as number theory, statistics, and the analysis of complex electrical networks.

Bessel polynomials also occur in the theory of Padé approximants. Padé [63, p. 82] had shown in his thesis that the Padé approximant [n/m]exp(z) to the exponential function exp(z) can be expressed in closed form, and Baker and Graves-Morris [2, Eq. (2.12)] showed that Padé’s expression corresponds in modern mathematical notation to the following ratio of two terminating confluent hypergeometric series 1F1:

$$ [n/m]_{\exp} (z) = \frac {{~}_{1} F_{1} (- n; - n - m; z)} {{~}_{1} F_{1} (- m; - n - m; - z)} , \qquad m, n \in \mathbb{N}_{0} . $$
(A1.5)

Thus, the diagonal Padé approximants [n/n]exp(z) are ratios of two Bessel polynomials [88, Eq. (14.3-15)]:

$$ [n/n]_{\exp} (z) = \frac{\theta_{n} (z/2)}{\theta_{n} (-z/2)} , \qquad n \in \mathbb{N}_{0} . $$
(A1.6)
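As a small numerical illustration (ours, not taken from the references), the diagonal approximants (A1.6) can be evaluated from the polynomial recursion \(\theta_{n+1}(z) = (2n+1)\,\theta_{n}(z) + z^{2}\,\theta_{n-1}(z)\) with \(\theta_{0}(z) = 1\) and \(\theta_{1}(z) = 1 + z\), which follows from (A1.2) and (A1.4). A minimal Python sketch:

```python
import math

def theta(n, z):
    """Bessel polynomial theta_n(z) via the three-term recursion implied by
    (A1.2) and (A1.4): theta_{k+1}(z) = (2k+1) theta_k(z) + z^2 theta_{k-1}(z)."""
    tm, t = 1.0, 1.0 + z          # theta_0(z), theta_1(z)
    if n == 0:
        return tm
    for k in range(1, n):
        tm, t = t, (2 * k + 1) * t + z * z * tm
    return t

def pade_exp(n, z):
    """Diagonal Pade approximant [n/n] to exp(z) according to (A1.6)."""
    return theta(n, z / 2.0) / theta(n, -z / 2.0)

# [3/3] already reproduces exp(1) to about five significant digits
print(pade_exp(3, 1.0), math.exp(1.0))
```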

The known monotonicity properties of the modified Bessel function Kν(z) imply that the reduced Bessel functions \({\widehat k}_{\nu } (z)\) are for ν > 0 and z ≥ 0 positive and bounded by their values at the origin [99, Eq. (3.1)]. In the case of reduced Bessel functions with half-integral orders, this yields the following bound:

$$ 0 < \widehat{k}_{n + 1/2} (z) \le \widehat{k}_{n + 1/2} (0) = 2^{n} (1/2)_{n} , \qquad 0 \le z < \infty , \quad n \in \mathbb{N}_{0} . $$
(A1.7)

In Grosswald’s book [48], it is shown that for fixed and finite argument z the Bessel polynomials θn(z) satisfy the leading-order asymptotic estimate [48, p. 125]

$$ \theta_{n} (z) \sim \frac {(2 n)!} {2^{n} n!} \mathrm{e}^{z} = 2^{n} (1/2)_{n} \mathrm{e}^{z} , \qquad n \to \infty . $$
(A1.8)

Combination of (A1.4) and (A1.8) shows that the dominant term of the Poincaré-type asymptotic expansion of \(\widehat {k}_{n + 1/2} (z)\) with fixed and finite argument z corresponds to its value at the origin [99, Eq. (3.9)]:

$$ \widehat{k}_{n + 1/2} (z) = \widehat{k}_{n + 1/2} (0) \Bigl[1 + \mathrm{O} \bigl(n^{- 1} \bigr) \Bigr] = 2^{n} (1/2)_{n} \Bigl[1 + \mathrm{O} \bigl(n^{- 1} \bigr) \Bigr] , \qquad n \to \infty . $$
(A1.9)

For several functions, finite or infinite expansions in terms of reduced Bessel functions are known. As a challenging problem for sequence transformations, we propose to use the series expansion [43, Eq. (6.5)]

$$ \frac{1}{z} = \sum\limits_{m = 0}^{\infty} \widehat{k}_{m-1/2} (z) / [2^{m} m!] , \qquad z > 0 . $$
(A1.10)

Equation (A1.9) implies that the terms \(\widehat{k}_{m-1/2} (z) / [2^{m} m!]\) of this series decay like \(m^{-3/2}\) as \(m \to \infty\) [49, p. 3709], or that the remainders \({\sum}_{m=n+1}^{\infty} \widehat{k}_{m-1/2} (z) / [2^{m} m!]\) decay like \(n^{-1/2}\) as \(n \to \infty\). Thus, the series (A1.10) converges as slowly as the Dirichlet series \(\zeta (s) = {\sum}_{m = 0}^{\infty} (m + 1)^{-s}\) with \(s = 3/2\), which is notorious for its extremely poor convergence. The slow convergence of the infinite series (A1.10) was demonstrated in [49, Table I]. After \(10^{6}\) terms, only 3 decimal digits were correct, which is in agreement with the truncation error estimate given above.
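The following Python sketch (ours) reproduces this behavior. To avoid overflow of the individual factors \(\widehat{k}_{m-1/2}(z)\) and \(2^{m} m!\), it propagates the scaled terms \(t_{m} = \widehat{k}_{m-1/2}(z)/[2^{m} m!]\) directly; the rescaled recursion \(t_{m+1} = \tfrac{2m-1}{2(m+1)}\, t_{m} + \tfrac{z^{2}}{4m(m+1)}\, t_{m-1}\), with \(t_{0} = \mathrm{e}^{-z}/z\) and \(t_{1} = \mathrm{e}^{-z}/2\), is an elementary consequence of (A1.2).

```python
import math

def partial_sum(z, N):
    """Partial sum with N + 1 terms of the expansion (A1.10) of 1/z.
    The scaled terms t_m = k_hat_{m-1/2}(z) / (2^m m!) are propagated with a
    rescaled version of the recursion (A1.2) to avoid overflow."""
    t0 = math.exp(-z) / z      # term m = 0
    t1 = math.exp(-z) / 2.0    # term m = 1
    s = t0 + (t1 if N >= 1 else 0.0)
    for m in range(1, N):
        t0, t1 = t1, (2 * m - 1) / (2.0 * (m + 1)) * t1 + z * z / (4.0 * m * (m + 1)) * t0
        s += t1
    return s

# the remainders decay roughly like N**(-1/2): a 100-fold increase of N
# buys only about one additional decimal digit of 1/z
for N in (10**2, 10**4, 10**6):
    s = partial_sum(1.0, N)
    print(N, s, abs(s - 1.0))
```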

Appendix 2. Levin’s transformation

A2.1. The general Levin transformation

A sequence transformation of the type of Wynn’s epsilon algorithm (2.6) only requires as input a finite substring of a sequence \(\{ S_{n} \}_{n = 0}^{\infty }\). No additional information is needed for the computation of approximations to the (generalized) limit S of the input sequence. Obviously, this is a highly advantageous feature. However, in more fortunate cases, additional information on the index dependence of the truncation errors \(R_{n} = S_{n} - S\) is available. For example, the truncation errors of Stieltjes series are bounded in magnitude by the first terms neglected in the partial sums (see, for example, [88, Theorem 13-2]), and they also have the same sign pattern as the first terms neglected. The utilization of such structural information in a transformation process should enhance its efficiency. Unfortunately, there is no obvious way of incorporating such information into Wynn’s recursive epsilon algorithm (2.6) or into other sequence transformations with similar features.

In 1973, Levin [55] introduced a sequence transformation which overcame these limitations and which now bears his name. It uses as input data not only the elements of the sequence \(\{ S_{n} \}_{n = 0}^{\infty }\), which is to be transformed, but also the elements of another sequence \(\{ \omega _{n} \}_{n = 0}^{\infty }\) of explicit estimates of the remainders \(R_{n} = S_{n} - S\). These remainder estimates, which must be explicitly known, make it possible to incorporate additional information into the transformation process and are thus ultimately responsible for the remarkable versatility of Levin’s transformation.

For Levin’s transformation, we use the notation introduced in [88, Eqs. (7.1-6) and (7.1-7)]:

$$ \begin{array}{@{}rcl@{}} \mathcal{L}_{k}^{(n)} (\beta, S_{n}, \omega_{n}) & =& \frac {{\Delta}^{k} [(\beta+n)^{k-1} S_{n} / \omega_{n}]} {{\Delta}^{k} [(\beta+n)^{k-1} / \omega_{n}]} \end{array} $$
(A2.1)
$$ \begin{array}{@{}rcl@{}} & = & \frac {\displaystyle \sum\limits_{j = 0}^{k} (-1)^{j} {\binom{k}{j}} \frac {(\beta+n+j)^{k-1}} {(\beta+n+k)^{k-1}} \frac {S_{n+j}} {\omega_{n+j}}} {\displaystyle \sum\limits_{j = 0}^{k} (-1)^{j} {\binom{k}{j}} \frac {(\beta+n+j)^{k-1}} {(\beta+n+k)^{k-1}} \frac {1} {\omega_{n+j}} } , \quad \!k, n \in \mathbb{N}_{0} , \quad \!\beta \!>\! 0 .\\ \end{array} $$
(A2.2)

Here, β > 0 is a shift parameter, \(\{ S_{n} \}_{n = 0}^{\infty }\) is the input sequence, and \(\{ \omega _{n} \}_{n = 0}^{\infty }\) is a sequence of remainder estimates.
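A direct, deliberately naive Python transcription of the single-sum representation (A2.2) may help to fix the notation (our sketch; in actual computations the numerator and denominator sums are normally evaluated recursively, see, for example, [93, Eq. (3.11)]). The example accelerates the partial sums of the alternating series for log 2 with the u-type remainder estimate \(\omega_{n} = (\beta + n) {\Delta} S_{n-1}\) discussed in A2.2 below.

```python
from math import comb, log

def levin(k, n, beta, S, omega):
    """Levin's sequence transformation L_k^(n)(beta, S_n, omega_n), Eq. (A2.2).
    S and omega are 0-based sequences that must provide S[n], ..., S[n+k]
    and omega[n], ..., omega[n+k]; beta > 0 is the shift parameter."""
    num, den = 0.0, 0.0
    for j in range(k + 1):
        c = (-1) ** j * comb(k, j) * ((beta + n + j) / (beta + n + k)) ** (k - 1)
        num += c * S[n + j] / omega[n + j]
        den += c / omega[n + j]
    return num / den

# partial sums S_m of the alternating series 1 - 1/2 + 1/3 - ... = log(2)
S = [sum((-1) ** j / (j + 1) for j in range(m + 1)) for m in range(12)]
# u-type remainder estimates omega_m = (beta + m) * Delta S_{m-1} with beta = 1;
# omega[0] is only a placeholder and is never referenced for n = 1
omega = [S[0]] + [(1.0 + m) * (S[m] - S[m - 1]) for m in range(1, 12)]
# the transformed value is far closer to log(2) than any of the partial sums used
print(levin(8, 1, 1.0, S, omega), log(2.0))
```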

In (A2.1) and (A2.2), it is essential that the input sequence {Sn} starts with the sequence element S0. The reason is that both the definitions (A2.1) and (A2.2) and the recurrence formulas for the numerator and denominator sums (see, for example, [93, Eq. (3.11)]) depend explicitly on n as well as on β.

Levin’s transformation, which is also discussed in the NIST Handbook [59, §3.9(v) Levin’s and Weniger’s Transformations], is generally considered to be both a very powerful and a very versatile sequence transformation (see, for example, [13, 28, 73,74,75, 88, 93] and references therein). The undeniable success of Levin’s transformation inspired others to construct alternative sequence transformations that also use explicit remainder estimates (see, for example, [36, 52, 88, 90, 93] and references therein).

We still have to discuss the choice of the so far unspecified remainder estimates \(\{ \omega _{n} \}_{n = 0}^{\infty }\). One approach, at least in principle, is to look for remainder estimates that reproduce the leading-order asymptotics of the actual remainders [88, Eq. (7.3-1)]:

$$ R_{n} = S_{n} - S = \omega_{n} \left[ C + \mathrm{O} \bigl(1/n \bigr) \right] , \qquad C \ne 0 , \quad n \to \infty . $$
(A2.3)

This is a completely valid approach, but there is the undeniable problem that for every input sequence \(\{ S_{n} \}_{n = 0}^{\infty }\) the leading-order asymptotics of the corresponding remainders \(R_{n} = S_{n} - S\) as \(n \to \infty\) has to be determined. Unfortunately, such an asymptotic analysis may well lead to difficult technical problems and be a non-trivial research problem in its own right.

In practice, it is much more convenient to use the simple explicit remainder estimates introduced by Levin [55] and Smith and Ford [74], respectively, which are known to work well even in the case of purely numerical input data. In this article, we only consider Levin’s remainder estimates \(\omega_{n} = (\beta + n) {\Delta} S_{n-1}\) and \(\omega_{n} = - [{\Delta} S_{n-1}] [{\Delta} S_{n}] / [{\Delta}^{2} S_{n-1}]\), which lead to Levin’s u and v variants [55, Eqs. (58) and (67)]. These estimates lead to transformations that are known to be powerful accelerators for both linear and logarithmic convergence (compare also the asymptotic estimates in [93, Section IV]). Therefore, Levin’s u and v transformations are in principle suited to represent the remaining initial condition \(T_{2}^{(n)}\) in (3.19).

A2.2. Levin’s u transformation

Levin’s u transformation, which had already appeared in 1936 in the work of Bickley and Miller [10], is characterized by the remainder estimate \(\omega_{n} = (\beta + n) {\Delta} S_{n-1}\) [55, Eq. (58)]. Inserting this into the finite difference representation (A2.1) yields:

$$ u_{k}^{(n)} (\beta, S_{n}) = \frac {{\Delta}^{k} [(\beta+n)^{k-2} S_{n} / {\Delta} S_{n-1}]} {{\Delta}^{k} [(\beta+n)^{k-2} / {\Delta} S_{n-1}]} . $$
(A2.4)

For k = 2, the right-hand side of this expression depends on n only implicitly via the input data {Sn} and also does not depend on the scaling parameter β. Thus, we obtain:

$$ u_{2}^{(n)} (\beta, S_{n}) = \frac {{\Delta}^{2} \bigl[ S_{n} / {\Delta} S_{n-1} \bigr]} {{\Delta}^{2} \bigl[ 1 / {\Delta} S_{n-1} \bigr]} . $$
(A2.5)

If we now use \(S_{n} = S_{n-1} + {\Delta} S_{n-1}\), we obtain \(S_{n} / {\Delta} S_{n-1} = S_{n-1} / {\Delta} S_{n-1} + 1\), which implies \({\Delta}^{2} [S_{n} / {\Delta} S_{n-1}] = {\Delta}^{2} [S_{n-1} / {\Delta} S_{n-1}]\). Thus, we obtain the alternative expression,

$$ u_{2}^{(n)} (\beta, S_{n}) = \frac {{\Delta}^{2} [ S_{n-1} / {\Delta} S_{n-1} ]} {{\Delta}^{2} [ 1 / {\Delta} S_{n-1} ]} , $$
(A2.6)

which can be reformulated as follows:

$$ u_{2}^{(n)} (\beta, S_{n}) = \frac {S_{n} \bigl[ {\Delta} S_{n + 1} \bigr] \bigl[ {\Delta}^{2} S_{n-1} \bigr] - S_{n + 1} \bigl[ {\Delta} S_{n-1} \bigr] \bigl[ {\Delta}^{2} S_{n} \bigr]} {\bigl[ {\Delta} S_{n + 1} \bigr] \bigl[ {\Delta}^{2} S_{n-1} \bigr] - \bigl[ {\Delta} S_{n-1} \bigr] \bigl[ {\Delta}^{2} S_{n} \bigr]} . $$
(A2.7)

In order to enhance numerical stability, it is advisable to rewrite this expression as follows:

$$ u_{2}^{(n)} (\beta, S_{n}) = S_{n} - \frac {\bigl[ {\Delta} S_{n-1} \bigr] \bigl[ {\Delta} S_{n} \bigr] \bigl[ {\Delta}^{2} S_{n} \bigr]} {\bigl[ {\Delta} S_{n + 1} \bigr] \bigl[ {\Delta}^{2} S_{n-1} \bigr] - \bigl[ {\Delta} S_{n-1} \bigr] \bigl[ {\Delta}^{2} S_{n} \bigr]} . $$
(A2.8)
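A literal Python transcription of (A2.8) (our sketch; the quick check uses the logarithmically convergent partial sums of \(\zeta(2) = \pi^{2}/6\) and is only meant as an illustration):

```python
def u2(S, n):
    """Levin's u transformation u_2^(n) in the numerically safer form (A2.8).
    Needs the four input elements S[n-1], S[n], S[n+1], S[n+2]."""
    d = lambda m: S[m + 1] - S[m]                    # forward difference Delta S_m
    d2 = lambda m: S[m + 2] - 2.0 * S[m + 1] + S[m]  # second difference Delta^2 S_m
    denom = d(n + 1) * d2(n - 1) - d(n - 1) * d2(n)
    return S[n] - d(n - 1) * d(n) * d2(n) / denom

# quick check on the logarithmically convergent sequence S_m = sum_{j=0}^{m} 1/(j+1)^2
S = [sum(1.0 / (j + 1) ** 2 for j in range(m + 1)) for m in range(8)]
print(S[5], u2(S, 5))   # u_2^(5) is much closer to pi^2/6 = 1.6449... than S_5
```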

A2.3. Levin’s v transformation

Levin’s v transformation is characterized by the remainder estimate [55, Eq. (67)] which is inspired by Aitken’s Δ2 formula (1.8):

$$ \omega_{n} = \frac{[{\Delta} S_{n-1}] [{\Delta} S_{n}]} {{\Delta} S_{n-1} - {\Delta} S_{n}} = - \frac{[{\Delta} S_{n-1}] [{\Delta} S_{n}]}{{\Delta}^{2} S_{n-1}} = \frac{[{\Delta} S_{n-1}] [{\Delta} S_{n}]}{{\Delta}^{2} S_{n-1}} , $$
(A2.9)

In (A2.9), we made use of the fact that Levin’s transformation \(\mathcal {L}_{k}^{(n)} (\beta , S_{n}, \omega _{n})\) is a homogeneous function of degree zero of the k + 1 remainder estimates ωn, ωn+ 1, …. Thus, we can use any of the expressions in (A2.9) without affecting the value of the transformation.

Inserting the remainder estimate (A2.9) into the finite difference representation (A2.1) yields:

$$ v_{k}^{(n)} (\beta, S_{n}) = \frac {\displaystyle {\Delta}^{k} \frac{\displaystyle (\beta+n)^{k-1} S_{n} [{\Delta}^{2} S_{n-1}]}{\displaystyle [{\Delta} S_{n-1}] [{\Delta} S_{n}]}} {\displaystyle {\Delta}^{k} \frac{\displaystyle (\beta+n)^{k-1} [{\Delta}^{2} S_{n-1}]}{\displaystyle [{\Delta} S_{n-1}] [{\Delta} S_{n}]}} . $$
(A2.10)

For k = 1, the right-hand side of this expression depends on n only implicitly via the input data {Sn} and also does not depend on the scaling parameter β. We obtain:

$$ v_{1}^{(n)} (\beta, S_{n}) = \frac {\displaystyle S_{n + 1} [{\Delta} S_{n-1}] [{\Delta}^{2} S_{n}] - S_{n} [{\Delta} S_{n + 1}] [{\Delta}^{2} S_{n-1}]} {\displaystyle [{\Delta} S_{n-1}] [{\Delta}^{2} S_{n}] - [{\Delta} S_{n + 1}] [{\Delta}^{2} S_{n-1}]} . $$
(A2.11)

Comparison of (A2.7) and (A2.11) yields \(v_{1}^{(n)} (\beta , S_{n}) = u_{2}^{(n)} (\beta , S_{n})\) for arbitrary \(\beta \in \mathbb {R}\) and \(n \in \mathbb {N}_{0}\).

Again, it is advisable to rewrite (A2.11) as follows:

$$ v_{1}^{(n)} (\beta, S_{n}) = S_{n} + \frac {\displaystyle [{\Delta} S_{n}] [{\Delta} S_{n-1}] [{\Delta}^{2} S_{n}]} {\displaystyle [{\Delta} S_{n-1}] [{\Delta}^{2} S_{n}] - [{\Delta} S_{n + 1}] [{\Delta}^{2} S_{n-1}]}. $$
(A2.12)
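The corresponding Python sketch of (A2.12) (ours); the final comparison with the limit \(\pi^{2}/6\) also illustrates numerically the identity \(v_{1}^{(n)} = u_{2}^{(n)}\) stated above, since the printed value coincides with that of the u2 sketch given after (A2.8).

```python
import math

def v1(S, n):
    """Levin's v transformation v_1^(n) in the form (A2.12).
    Needs the four input elements S[n-1], S[n], S[n+1], S[n+2]."""
    d = lambda m: S[m + 1] - S[m]                    # forward difference Delta S_m
    d2 = lambda m: S[m + 2] - 2.0 * S[m + 1] + S[m]  # second difference Delta^2 S_m
    denom = d(n - 1) * d2(n) - d(n + 1) * d2(n - 1)
    return S[n] + d(n) * d(n - 1) * d2(n) / denom

# same logarithmically convergent test sequence as in the sketch after (A2.8)
S = [sum(1.0 / (j + 1) ** 2 for j in range(m + 1)) for m in range(8)]
print(v1(S, 5), math.pi ** 2 / 6)   # v_1^(5) is much closer to the limit than S_5
```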

Cite this article

Chang, XK., He, Y., Hu, XB. et al. Construction of new generalizations of Wynn’s epsilon and rho algorithm by solving finite difference equations in the transformation order. Numer Algor 83, 593–627 (2020). https://doi.org/10.1007/s11075-019-00695-w
