Abstract
In this paper, we extend the study of superconvergence properties of Chebyshev–Gauss-type spectral interpolation in Zhang (SIAM J Numer Anal 50(5):2966–2985, 2012) to general Jacobi–Gauss-type interpolation. We follow the same principle as in that work to identify superconvergence points from interpolating analytic functions, but the rigorous error analysis turns out to be much more involved, even in the Legendre case. We also address the implications of this study for functions of limited regularity: at the superconvergence points of interpolating analytic functions, the leading term of the interpolation error vanishes, but there is no gain in the order of convergence, in distinct contrast to the analytic case. We provide a general framework for exponential convergence and superconvergence analysis, obtain interpolation error bounds for Jacobi–Gauss-type interpolation, and explicitly characterize the dependence on the underlying parameters and constants whenever possible. Moreover, we provide illustrative numerical examples to show the tightness of the bounds.
References
Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions. Dover, New York (1972)
Bernstein, S.N.: Sur l'ordre de la meilleure approximation des fonctions continues par des polynômes de degré donné. Mémoires publiés par la Classe des Sciences, Académie Royale de Belgique 2(4), 1–103 (1912)
Davis, P.J.: Interpolation and Approximation. Dover, New York (1975)
Gautschi, W.: Orthogonal Polynomials: Computation and Approximation. Numerical Mathematics and Scientific Computation. Oxford University Press, New York (2004)
Gottlieb, D., Shu, C.W.: On the Gibbs phenomenon and its resolution. SIAM Rev. 39(4), 644–668 (1997)
Gottlieb, D., Shu, C.W., Solomonoff, A., Vandeven, H.: On the Gibbs phenomenon. I. Recovering exponential accuracy from the Fourier partial sum of a nonperiodic analytic function. J. Comput. Appl. Math. 43(1–2), 81–98 (1992). Orthogonal polynomials and numerical methods
Hale, N., Tee, T.W.: Conformal maps to multiply slit domains and applications. SIAM J. Sci. Comput. 31(4), 3195–3215 (2009)
Mason, J.C., Handscomb, D.C.: Chebyshev Polynomials. Chapman & Hall/CRC, Boca Raton (2003)
Mastroianni, G., Milovanović, G.V.: Interpolation Processes: Basic Theory and Applications. Springer Monographs in Mathematics. Springer, Berlin (2008)
Pecaric, J., Ujevic, N.: A general interpolating formula and error bounds. Tamsui Oxf. J. Math. Sci. 26(1), 103–127 (2010)
Platte, R.B., Trefethen, L.N., Kuijlaars, A.B.J.: Impossibility of fast stable approximation of analytic functions from equispaced samples. SIAM Rev. 53(2), 308–318 (2011)
Ramanujan, S.: Collected Papers of Srinivasa Ramanujan. Chelsea, New York (1962)
Reddy, S.C., Weideman, J.A.C.: The accuracy of the Chebyshev differencing method for analytic functions. SIAM J. Numer. Anal. 42(5), 2176–2187 (2005). (electronic)
Shen, J., Tang, T., Wang, L.L.: Spectral Methods: Algorithms, Analysis and Applications, Volume 41 of Series in Computational Mathematics. Springer, Berlin (2011)
Szegő, G.: Orthogonal Polynomials, 4th edn. American Mathematical Society Colloquium Publications, vol. 23. AMS, Providence (1975)
Tadmor, E.: The exponential accuracy of Fourier and Chebyshev differencing methods. SIAM J. Numer. Anal. 23(1), 1–10 (1986)
Tee, T.W., Trefethen, L.N.: A rational spectral collocation method with adaptively transformed Chebyshev grid points. SIAM J. Sci. Comput. 28(5), 1798–1811 (2006)
Wang, H.Y., Xiang, S.H.: On the convergence rates of Legendre approximation. Math. Comp. 81(278), 861–877 (2012)
Wang, M.K., Chu, Y.M., Qiu, S.L., Jiang, Y.P.: Bounds for the perimeter of an ellipse. J. Approx. Theory 164(7), 928–937 (2012)
Xiang, S.H.: On error bounds for orthogonal polynomial expansions and Gauss-type quadrature. SIAM J. Numer. Anal. 50(3), 1240–1263 (2012)
Xie, Z.Q., Wang, L.L., Zhao, X.D.: On exponential convergence of Gegenbauer interpolation and spectral differentiation. Math. Comp. 82(282), 1017–1036 (2013)
Zhang, Z.: Superconvergence of spectral collocation and \(p\)-version methods in one dimensional problems. Math. Comp. 74(252), 1621–1636 (2005). (electronic)
Zhang, Z.: Superconvergence of a Chebyshev spectral collocation method. J. Sci. Comput. 34(3), 237–246 (2008)
Zhang, Z.: Superconvergence points of polynomial spectral interpolation. SIAM J. Numer. Anal. 50(5), 2966–2985 (2012)
Zhao, X.D., Wang, L.L., Xie, Z.Q.: Sharp error bounds for Jacobi expansions and Gegenbauer–Gauss quadrature of analytic functions. SIAM J. Numer. Anal. 51(3), 1443–1469 (2013)
Additional information
Li-Lian Wang and Xiaodan Zhao: The research of these two authors is partially supported by Singapore MOE AcRF Tier 1 Grant (RG 15/12), MOE AcRF Tier 2 Grant (2013–2016), and A\(^*\)STAR-SERC-PSF Grant (122-PSF-007).
Zhimin Zhang: This author is supported in part by the US National Science Foundation under Grant DMS-1115530.
Appendices
Appendix A: Jacobi Polynomials
The Jacobi polynomials satisfy the derivative relation (see [15, (4.21.7)]):
$$\begin{aligned} \frac{d}{dx}P_n^{(\alpha ,\beta )}(x)=\frac{1}{2}(n+\alpha +\beta +1)\,P_{n-1}^{(\alpha +1,\beta +1)}(x), \end{aligned}$$
and there holds (see [15, (4.5.4)]):
Another recurrence relation reads (see [15, (4.5.5)–(4.5.6)]):
where
Appendix B: Proof of Theorem 4.2
We first present some necessary lemmas for its proof.
The following formula (see [3, Lemma 12.4.1]) is of paramount importance.
Lemma 7.1
Let \(z=(w+w^{-1})/2.\) Then
where
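As an aside grounded in the statement of Lemma 7.1: the map \(z=(w+w^{-1})/2\) sends the circle \(|w|=\rho >1\) onto the Bernstein ellipse \(\mathcal{E }_\rho \) with semi-axes \(\frac{1}{2}(\rho \pm \rho ^{-1})\). A minimal numerical sketch (illustrative only, not part of the proof):

```python
import cmath
import math

def joukowski(w: complex) -> complex:
    """The map z = (w + 1/w)/2 from Lemma 7.1."""
    return (w + 1 / w) / 2

rho = 1.5
a = (rho + 1 / rho) / 2  # semi-major axis of the Bernstein ellipse E_rho
b = (rho - 1 / rho) / 2  # semi-minor axis

# Images of points on the circle |w| = rho satisfy x^2/a^2 + y^2/b^2 = 1.
for k in range(16):
    w = rho * cmath.exp(2j * math.pi * k / 16)
    z = joukowski(w)
    assert abs(z.real**2 / a**2 + z.imag**2 / b**2 - 1) < 1e-12
print("all sampled images lie on E_rho")
```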
In fact, the coefficients \(\{g_k\}\) are related to the following Laurent series expansion.
Lemma 7.2
We have
which converges uniformly for all complex-valued \(w\) such that \(|w|>1.\)
Proof
Recall the binomial expansion:
This implies (7.3).\(\square \)
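Since the display of (7.3) is not reproduced above, we note that \(g_0=1\) together with the ratio (7.11) forces the closed form \(g_k=\binom{2k}{k}4^{-k}=\frac{\Gamma (k+1/2)}{\sqrt{\pi }\,\Gamma (k+1)}\), the Maclaurin coefficients of \((1-t)^{-1/2}\). The sketch below checks this identification numerically; the closed form is our inference from (7.11) and the value of \(g_N\) used in Lemma 7.3, not a formula quoted verbatim from the paper:

```python
from math import comb, gamma, pi, sqrt

def g(k: int) -> float:
    """Closed form g_k = C(2k, k) / 4^k, consistent with g_0 = 1
    and the ratio g_{k+1}/g_k = (k + 1/2)/(k + 1) in (7.11)."""
    return comb(2 * k, k) / 4**k

# Check the ratio (7.11) and the Gamma-function form used in Appendix B.
for k in range(1, 20):
    assert abs(g(k + 1) / g(k) - (k + 0.5) / (k + 1)) < 1e-12
    assert abs(g(k) - gamma(k + 0.5) / (sqrt(pi) * gamma(k + 1))) < 1e-9

# These are the Maclaurin coefficients of (1 - t)^{-1/2}: a partial sum
# at t = 0.25 should approach (1 - 0.25)^{-1/2}.
partial = sum(g(k) * 0.25**k for k in range(60))
assert abs(partial - (1 - 0.25) ** -0.5) < 1e-9
print("closed form for g_k confirmed")
```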
The key idea of estimating \(m_Q\) is to show that for \(z\in \mathcal{E }_\rho \) with \(|w|=\rho >1,\)
and, more importantly, to quantify the rate at which it decays. For this purpose, we split the error term into two parts:
where
with
We deduce from (7.2) and (7.8) the following useful properties of \(\{g_k\}\) and \(\{q_k\}.\)
Lemma 7.3
(i) For \(k\ge 0, \, g_k>0,\) and \(\{g_k\}\) is strictly decreasing, namely,
$$\begin{aligned} 1=g_0>g_1>\cdots >g_{k}>g_{k+1}>\cdots . \end{aligned}$$ (7.9)
(ii) We have
$$\begin{aligned} 0=q_0<q_1<\cdots <q_{N}<q_{N+1}. \end{aligned}$$ (7.10)
Moreover, \((q_k+1)g_k<1\) for \(1\le k\le N\), and \((q_{k+1}+1)g_k<1\) for \(1\le k\le N-1.\) In addition, \((q_{N+1}+1)g_{N+1}=1\), and \(q_{N+1}g_N<1\) for \(N\ge 2\).
Proof
(i) It is clear from (7.2) that \(g_0=1\) and \(0<g_k<1\) for all \(k\ge 1.\) Since
$$\begin{aligned} \frac{g_{k+1}}{g_k}= \frac{k+1/2}{k+1}<1, \end{aligned}$$ (7.11)
the sequence \(\{g_k\}\) is strictly decreasing with respect to \(k\).
(ii) Observe from (7.8) and (i) that \(q_0=0\) and \(\{q_k\}\) is strictly increasing. A direct calculation shows that for \(1\le k\le N-1,\)
$$\begin{aligned} (q_{k+1}+1)g_k&= \frac{g_{N-k}}{g_{N+1}}g_{k}=\left( \prod _{j=0}^{k-2} \frac{1-\frac{1/2}{k-j}}{1-\frac{1/2}{N+1-j}}\right) \frac{(N-k+2)(N-k+1)}{2(N-k+3/2)(N-k+1/2)}\\&\le \frac{1}{2}\Big (1+\frac{1/2}{N-k+3/2}\Big ) \Big (1+\frac{1/2}{N-k+1/2}\Big )\le \frac{4}{5}<1, \end{aligned}$$
where we used the fact that \(1+\frac{1/2}{N-k+3/2}\) and \(1+\frac{1/2}{N-k+1/2}\) are strictly increasing with respect to \(k\). Note that for \(k=1\), the product in the second identity is empty and equals \(1\).
which implies \((q_{k}+1)g_k<1\) for \(1\le k\le N\).
Next, by (7.2) and (7.8), we have \((q_{N+1}+1)g_{N+1}=1\) and
since \(g_N=\frac{\Gamma (N+1/2)}{\sqrt{\pi }N\Gamma (N)} >\frac{1}{\sqrt{\pi }N}>\frac{1}{2N+1}\) for \(N\ge 2\). This ends the proof.\(\square \)
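Lemma 7.3 is also easy to confirm numerically. Since the display (7.8) is not shown above, the sketch below takes \(q_k=g_{N+1-k}/g_{N+1}-1\) for \(k\ge 1\) (with \(q_0=0\)), the form implied by the first identity in the proof of (ii); this definition is an inference from that identity, not the paper's stated one.

```python
from math import comb

def g(k: int) -> float:
    # g_k = C(2k, k)/4^k; see (7.11) for the ratio g_{k+1}/g_k.
    return comb(2 * k, k) / 4**k

def q(k: int, N: int) -> float:
    # Inferred from (q_{k+1} + 1) g_k = (g_{N-k}/g_{N+1}) g_k in the
    # proof of (ii); an assumption, since (7.8) is not reproduced here.
    return 0.0 if k == 0 else g(N + 1 - k) / g(N + 1) - 1

N = 12
# (i): g_0 = 1 and g_k is strictly decreasing and positive.
assert g(0) == 1 and all(g(k) > g(k + 1) > 0 for k in range(N + 1))
# (ii): 0 = q_0 < q_1 < ... < q_{N+1}.
qs = [q(k, N) for k in range(N + 2)]
assert qs[0] == 0 and all(a < b for a, b in zip(qs, qs[1:]))
# (q_k + 1) g_k < 1 for 1 <= k <= N, and (q_{k+1} + 1) g_k < 1 for k <= N-1.
assert all((q(k, N) + 1) * g(k) < 1 for k in range(1, N + 1))
assert all((q(k + 1, N) + 1) * g(k) < 1 for k in range(1, N))
# (q_{N+1} + 1) g_{N+1} = 1, and q_{N+1} g_N < 1.
assert abs((q(N + 1, N) + 1) * g(N + 1) - 1) < 1e-12
assert q(N + 1, N) * g(N) < 1
print("Lemma 7.3 verified numerically for N =", N)
```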
Lemma 7.4
We have
where \(D_{k,N}\) is defined in (4.17).
Proof
By (7.2) and (7.8), we obtain from (4.2)–(4.3) that
A direct calculation leads to
Using the fact that \(\sqrt{1+x}\le 1+\frac{x}{2}\) for \(x\ge 0\) (since \(\big (1+\frac{x}{2}\big )^2=1+x+\frac{x^2}{4}\ge 1+x\)) yields
Consequently, we obtain
which gives the desired upper bound.\(\square \)
Proof of Theorem 4.2
By (7.6),
As \(|w|=\rho ,\) we find
We now work on the upper bound of \(R_N^e(\rho )+R_N^o(\rho )\) defined in (7.7). Using the properties in Lemma 7.3 and (7.3)–(7.4), we obtain that for some \(1<K<N,\)
and similarly,
Collecting the terms leads to the upper bound
Thus, a combination of (7.14)–(7.17) yields (4.15).
It remains to show the asymptotic estimate (4.18). Observe that for \(N\gg 1,\) if we choose \(K=[N^\varepsilon ]\) with \(0<\varepsilon <1,\) then
Therefore,
Thus, for \(N\gg 1,\)
Hence, the conclusion follows from (4.15). \(\square \)
Cite this article
Wang, LL., Zhao, X. & Zhang, Z. Superconvergence of Jacobi–Gauss-Type Spectral Interpolation. J Sci Comput 59, 667–687 (2014). https://doi.org/10.1007/s10915-013-9777-x
Keywords
- Superconvergence points
- Jacobi–Gauss, Jacobi–Gauss–Radau and Jacobi–Gauss–Lobatto interpolations
- Analytic functions
- Bernstein ellipse
- Exponential convergence
- Error remainder