Journal of Computational and Applied Mathematics 164–165 (2004) 673 – 689


www.elsevier.com/locate/cam

Recent developments of the Sinc numerical methods


Masaaki Sugihara∗, Takayasu Matsuo
Department of Computational Science and Engineering, Graduate School of Engineering, Nagoya University,
Furo-Cho, Chikusa, Nagoya 464-8603, Japan

Received 20 September 2002; received in revised form 23 April 2003

Abstract

This paper gives a survey of recent developments of the Sinc numerical methods. A variety of Sinc numerical
methods have been developed by Stenger and his school. For a certain class of problems, the Sinc numerical
methods have the convergence rates of O(exp(−κ√n)) with some κ > 0, where n is the number of nodes or
bases used in the methods. Recently it has turned out that the Sinc numerical methods can achieve convergence
rates of O(exp(−κ′n/log n)) with some κ′ > 0 for a smaller but still practically meaningful class of problems,
and that these convergence rates are best possible. The present paper demonstrates these facts for two Sinc
numerical methods: the Sinc approximation and the Sinc-collocation method for two-point boundary value
problems.
© 2003 Elsevier B.V. All rights reserved.

MSC: 30D55; 41A25; 41A30; 65D15; 65L10; 65L60

Keywords: Double-exponential transformation; Function approximation; Sinc approximation; Sinc-collocation method; Sinc methods; Two-point boundary value problem

1. Introduction

The Sinc approximation for a function f defined on the real line R is given by

$$ f(x) \approx \sum_{j=-N}^{N} f(jh)\,S(j,h)(x), \qquad (1) $$

This work is partially supported by the Grant-in-Aid for Scientific Research of the Ministry of Education, Culture,
Sports, Science and Technology of Japan.

Corresponding author.
E-mail addresses: sugihara@na.cse.nagoya-u.ac.jp (M. Sugihara), matsuo@na.cse.nagoya-u.ac.jp (T. Matsuo).

doi:10.1016/j.cam.2003.09.016

where S(j,h)(x) is the so-called Sinc function defined by

$$ S(j,h)(x) = \frac{\sin[(\pi/h)(x - jh)]}{(\pi/h)(x - jh)} $$

and the step size h is suitably chosen for a given positive integer n = 2N + 1.
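As a concrete illustration (not part of the original paper), the Sinc functions and the truncated expansion (1) can be coded in a few lines. The sketch below is in Python/NumPy, with function names of our own choosing; it uses the fact that NumPy's sinc is the normalized sinc, sin(πt)/(πt).

```python
import numpy as np

def S(j, h, x):
    """Sinc basis S(j,h)(x) = sin[(pi/h)(x - jh)] / [(pi/h)(x - jh)]."""
    # np.sinc(t) = sin(pi t)/(pi t), hence S(j,h)(x) = np.sinc((x - j*h)/h).
    return np.sinc((np.asarray(x, dtype=float) - j * h) / h)

def sinc_approximation(f, N, h, x):
    """Right-hand side of (1): sum_{j=-N}^{N} f(jh) S(j,h)(x)."""
    return sum(f(j * h) * S(j, h, x) for j in range(-N, N + 1))
```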
During the last three decades, a variety of numerical methods based on the Sinc approximation have been
developed; they are now collectively referred to as Sinc numerical methods (see
[8,10,15,16]). The Sinc numerical methods cover

• function approximation,
• approximation of derivatives,
• approximate definite and indefinite integration,
• approximate solution of initial and boundary value ordinary differential equation (ODE) problems,
• approximation and inversion of Fourier and Laplace transforms,
• approximation of Hilbert transforms,
• approximation of definite and indefinite convolution,
• approximate solution of PDEs,
• approximate solution of integral equations,
• construction of conformal maps.

In the standard setup of the Sinc numerical methods, the errors are known to be O(exp(−κ√n))
with some κ > 0, where n is the number of nodes or bases used in the methods. However, Sugihara
[18,19] has recently found that the errors in the Sinc numerical methods are O(exp(−κ′n/log n)) with
some κ′ > 0, in another setup that is also meaningful both theoretically and practically. It has also
been found that the error bounds of O(exp(−κ′n/log n)) are best possible in a certain mathematical
sense.
In the present paper we give a detailed account of the above findings for two Sinc methods: (1)
the Sinc method for function approximation (=the Sinc approximation) and (2) the Sinc-collocation
method for linear two-point boundary value problems for second-order ODEs. First, we review the
Sinc approximation and the Sinc-collocation method for the two-point boundary value problem in Sections 2
and 3, respectively. We describe the methods together with the standard convergence analyses, which
show the convergence rates of O(exp(−κ√n)). Next, in Section 4 we describe the recent develop-
ments of the Sinc approximation and the Sinc-collocation method with the improved convergence
rates of O(exp(−κ′n/log n)). It is also shown that these convergence rates of O(exp(−κ′n/log n))
are best possible. Finally, we make some remarks in Section 5.

2. Review of the Sinc approximation

Following the standard treatises on the Sinc numerical methods [10,15], we first treat the Sinc
approximation on the entire interval (−∞, ∞), and then proceed to that on a general interval
[a, b].

2.1. Sinc approximation on the entire interval (−∞, ∞)

The Sinc approximation on the entire interval (−∞, ∞) is nothing but the Sinc approximation
(1) itself given in the Introduction.
To state convergence theorems, we introduce some notation and definitions.

Definition 1. Let D_d denote the infinite strip region of width 2d (d > 0) in the complex plane:

$$ D_d \equiv \{\, z \in \mathbb{C} : |\mathrm{Im}\, z| < d \,\}. $$

For 0 < ε < 1, let D_d(ε) be defined by

$$ D_d(\varepsilon) = \{\, z \in \mathbb{C} : |\mathrm{Re}\, z| < 1/\varepsilon,\ |\mathrm{Im}\, z| < d(1 - \varepsilon) \,\}. $$

Let H¹(D_d) be the Hardy space over the region D_d, i.e., the set of functions f analytic in D_d such
that

$$ \lim_{\varepsilon \to 0} \int_{\partial D_d(\varepsilon)} |f(z)|\, |dz| < \infty. $$

The following theorem, due to Stenger [15], presents the convergence result on the Sinc approx-
imation, which shows that the convergence rate is given by O(exp(−κ√n)) if the function to be
approximated belongs to H¹(D_d) and decays exponentially on the real line.

Theorem 1 (Stenger [15]). Assume, with positive constants α, β and d, that

(1) f belongs to H¹(D_d);
(2) f decays exponentially on the real line, that is,
    |f(x)| ≤ α exp(−β|x|) for all x ∈ R.

Then we have

$$ \sup_{-\infty<x<\infty}\,\Biggl|\, f(x) - \sum_{j=-N}^{N} f(jh)\,S(j,h)(x) \Biggr| \le C\, N^{1/2} \exp[-(\pi d \beta N)^{1/2}] $$

for some C, where the mesh size h is taken as

$$ h = \left(\frac{\pi d}{\beta N}\right)^{1/2}. $$
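As an illustration of Theorem 1 (ours, not from the paper), the following Python/NumPy sketch applies the mesh-size rule h = (πd/(βN))^{1/2} to f(x) = sech x, which is analytic in the strip |Im z| < π/2 and satisfies |f(x)| ≤ 2 exp(−|x|); the choices β = 1 and d = 1.5 (just below π/2) are our own assumptions.

```python
import numpy as np

def sinc_approx(f, N, h, x):
    """Truncated Sinc expansion (1): sum_{j=-N}^{N} f(jh) S(j,h)(x)."""
    jh = h * np.arange(-N, N + 1)
    # np.sinc is the normalized sinc, so S(j,h)(x) = np.sinc((x - jh)/h).
    return np.sinc((np.asarray(x)[:, None] - jh[None, :]) / h) @ f(jh)

f = lambda x: 1.0 / np.cosh(x)      # |f(x)| <= 2 exp(-|x|), analytic for |Im z| < pi/2
beta, d = 1.0, 1.5                  # assumed constants (d just below pi/2)

x = np.linspace(-10.0, 10.0, 2001)
for N in (4, 8, 16, 32, 64):
    h = np.sqrt(np.pi * d / (beta * N))             # mesh size of Theorem 1
    err = np.max(np.abs(f(x) - sinc_approx(f, N, h, x)))
    print(f"N = {N:3d}   h = {h:.4f}   sup-error ~ {err:.2e}")
```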

2.2. Sinc approximation on a general interval [a, b]

The basic idea of the Sinc approximation of f(x) on the interval [a, b] is first to transform the
function f(x) to that on the entire interval (−∞, ∞) by means of a properly selected variable
transformation x = φ(ξ), and then to apply the Sinc approximation on the entire interval (−∞, ∞)
to the transformed function f(φ(ξ)). A more formal description of the Sinc approximation on the
interval [a, b] follows.

Step 1: With a suitably selected variable transformation x = φ(ξ) such that

$$ \varphi : (-\infty, \infty) \to (a, b), \qquad \lim_{\xi \to -\infty} \varphi(\xi) = a, \qquad \lim_{\xi \to \infty} \varphi(\xi) = b, $$

transform the function f(x) defined on [a, b] to f(φ(ξ)) on the entire interval (−∞, ∞).
Step 2: Apply the Sinc approximation on the entire interval (−∞, ∞) to the transformed function
f(φ(ξ)) to obtain

$$ f(\varphi(\xi)) \approx \sum_{j=-N}^{N} f(\varphi(jh))\, S(j,h)(\xi), \qquad -\infty < \xi < \infty, $$

or equivalently,

$$ f(x) \approx \sum_{j=-N}^{N} f(\varphi(jh))\, S(j,h)(\varphi^{-1}(x)), \qquad a \le x \le b. \qquad (2) $$
For the convergence of this approximation, the following theorem follows immediately from
Theorem 1.

Theorem 2 (Stenger [15]). Assume that, for a variable transformation z = φ(ζ), the transformed
function f(φ(ζ)) satisfies assumptions 1 and 2 in Theorem 1 with some α, β and d. Then we have

$$ \sup_{a \le x \le b}\,\Biggl|\, f(x) - \sum_{j=-N}^{N} f(\varphi(jh))\, S(j,h)(\varphi^{-1}(x)) \Biggr| \le C\, N^{1/2} \exp[-(\pi d \beta N)^{1/2}] $$

for some C, where the mesh size h is taken as

$$ h = \left(\frac{\pi d}{\beta N}\right)^{1/2}. $$

The variable transformation in the above theorem is at our disposal. This suggests the possibility
that even a function f with an end-point singularity can be approximated successfully by (2) with a
suitable choice of the transformation φ. This is indeed the case, as is demonstrated below.

Example 1. The Sinc approximation yields a good result even for a function that has an end-point
singularity. For the function

$$ f(x) = x^{1/2}(1 - x)^{3/4}, \qquad x \in [0, 1], \qquad (3) $$

which has algebraic singularities at x = 0 and 1, we employ the variable transformation¹

$$ x = \psi_1(\xi) \equiv \frac{1}{2}\tanh\frac{\xi}{2} + \frac{1}{2}. \qquad (4) $$

¹ This variable transformation was originally proposed for numerical integration [6,13,14,20], and is widely used in the
Sinc numerical methods.

[Fig. 1: semi-log plot of |ERROR| versus n for the curves "Chebyshev" and "Ordinary-Sinc".]

Fig. 1. Errors in the Sinc approximation and in the polynomial interpolation with the Chebyshev nodes for the function
x^{1/2}(1 − x)^{3/4}.

With a little calculation, we can verify that the transformed function f(ψ₁(ξ)) satisfies all the
assumptions in Theorem 1 with β = 1/2 and d < π, and hence Theorem 2 applies. (We here omit the
value of α, for it is irrelevant to both the order of magnitude of the error estimate and the selection
of the mesh size h. For the same reason we will do so in the following.) Using β = 1/2, d = π/2, we
apply the Sinc approximation on the interval (−∞, ∞) to the transformed function. Fig. 1 shows
the error in the Sinc approximation, which is denoted by "Ordinary-Sinc". (We estimate the error
sup_{0≤x≤1} |f(x) − Σ_{j=−N}^{N} f(ψ₁(jh))S(j,h)(ψ₁^{−1}(x))| by computing the difference between f(x) and
Σ_{j=−N}^{N} f(ψ₁(jh))S(j,h)(ψ₁^{−1}(x)) at 2000 equally spaced points in [0, 1]. In the following examples,
we estimate the errors in a similar way.) For comparison, we also show the error in the polynomial
interpolation with the Chebyshev nodes [5], which is denoted by "Chebyshev". In Fig. 1 we observe
that the Sinc approximation yields a good result and that its error behaves like O(exp(−κ√n)), as
expected from Theorem 2. We also observe that the polynomial interpolation with the Chebyshev
nodes yields a poor result.
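A small Python/NumPy sketch of Example 1 (ours; the paper reports only the plotted errors) is given below. It evaluates approximation (2) with the transformation (4) and the Theorem 2 mesh size, with β = 1/2 and d = π/2 as stated above; the endpoint clipping is a purely numerical safeguard, since ψ₁⁻¹ is infinite at x = 0 and 1.

```python
import numpy as np

def psi1(xi):
    """Single-exponential (tanh) transformation (4): (-inf, inf) -> (0, 1)."""
    return 0.5 * np.tanh(0.5 * xi) + 0.5

f = lambda x: np.sqrt(x) * (1.0 - x) ** 0.75       # function (3)
beta, d = 0.5, np.pi / 2                           # parameter values of Example 1

x = np.linspace(0.0, 1.0, 2000)                    # evaluation grid, as in the paper
xi = 2.0 * np.arctanh(np.clip(2.0 * x - 1.0, -1 + 1e-15, 1 - 1e-15))   # psi1^{-1}(x)

for N in (8, 16, 32, 64):
    h = np.sqrt(np.pi * d / (beta * N))            # mesh size of Theorem 2
    jh = h * np.arange(-N, N + 1)
    approx = np.sinc((xi[:, None] - jh[None, :]) / h) @ f(psi1(jh))
    print(f"N = {N:3d}   sup-error ~ {np.max(np.abs(f(x) - approx)):.2e}")
```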

3. Review of the Sinc-collocation method for two-point boundary value problems

In this section we review the Sinc-collocation method for solving the following linear two-point
boundary value problems for second-order ODE with the zero Dirichlet boundary condition:

$$ y''(x) + \mu(x)\,y'(x) + \nu(x)\,y(x) = \sigma(x), \qquad x \in (a, b), $$
$$ y(a) = y(b) = 0. \qquad (5) $$

As in the Sinc approximation, we first describe the Sinc-collocation method on the entire interval
(−∞, ∞), and then proceed to that on a general interval [a, b].

3.1. Sinc-collocation method on the entire interval (−∞, ∞)

The problem we consider here is the following:

$$ Ly(x) \equiv y''(x) + \mu(x)\,y'(x) + \nu(x)\,y(x) = \sigma(x), \qquad x \in (-\infty, \infty), $$
$$ \lim_{x \to \pm\infty} y(x) = 0. \qquad (6) $$

We assume an approximate solution y_n(x) to (6) of the form

$$ y_n(x) \equiv \sum_{j=-N}^{N} w_j\, S(j,h)(x), \qquad n = 2N + 1. \qquad (7) $$

Note that y_n(x) satisfies the boundary condition in (6) because

$$ \lim_{x \to \pm\infty} S(j,h)(x) = 0. $$

The coefficients {w_j} are determined from the collocation condition

$$ Ly_n(kh) = \sigma(kh), \qquad k = -N, -N+1, \ldots, N-1, N, \qquad (8) $$

which yields the following system of linear equations in {w_j}:

$$ \sum_{j=-N}^{N} \Bigl\{ \delta^{(2)}_{jk}/h^2 + \mu(kh)\,\delta^{(1)}_{jk}/h + \nu(kh)\,\delta^{(0)}_{jk} \Bigr\}\, w_j = \sigma(kh), \qquad k = -N, -N+1, \ldots, N-1, N, \qquad (9) $$

where

$$ \delta^{(0)}_{jk} \equiv S(j,h)(kh) = \begin{cases} 1 & \text{if } j = k, \\ 0 & \text{if } j \ne k, \end{cases} $$

$$ \delta^{(1)}_{jk} \equiv h\,S'(j,h)(kh) = \begin{cases} 0 & \text{if } j = k, \\ \dfrac{(-1)^{k-j}}{k-j} & \text{if } j \ne k, \end{cases} $$

$$ \delta^{(2)}_{jk} \equiv h^2\,S''(j,h)(kh) = \begin{cases} -\dfrac{\pi^2}{3} & \text{if } j = k, \\ \dfrac{-2(-1)^{k-j}}{(k-j)^2} & \text{if } j \ne k. \end{cases} $$
By solving (9) by Gaussian elimination, we obtain the approximate solution y_n(x).
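The assembly of system (9) is mechanical; the following Python/NumPy sketch (ours, with our own function names) builds the three δ matrices and solves the resulting linear system with np.linalg.solve, i.e., by Gaussian elimination (LU factorization).

```python
import numpy as np

def delta_matrices(N):
    """delta^(0), delta^(1), delta^(2) of system (9); entry [k, j] corresponds to delta_{jk}."""
    idx = np.arange(-N, N + 1)
    D = idx[:, None] - idx[None, :]            # D[k, j] = k - j
    sgn = np.where(D % 2 == 0, 1.0, -1.0)      # (-1)^(k-j)
    Ds = np.where(D != 0, D, 1)                # avoid division by zero on the diagonal
    d0 = np.eye(2 * N + 1)
    d1 = np.where(D != 0, sgn / Ds, 0.0)
    d2 = np.where(D != 0, -2.0 * sgn / Ds ** 2, -np.pi ** 2 / 3.0)
    return d0, d1, d2

def sinc_collocation(mu, nu, sigma, N, h):
    """Solve (9) for the coefficients w_j, j = -N, ..., N, of the approximate solution (7)."""
    k = h * np.arange(-N, N + 1)               # collocation points kh
    d0, d1, d2 = delta_matrices(N)
    A = d2 / h ** 2 + np.diag(mu(k)) @ (d1 / h) + np.diag(nu(k)) @ d0
    return np.linalg.solve(A, sigma(k))
```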
For this Sinc-collocation method, the following convergence theorem holds, which, roughly
speaking, states that the Sinc-collocation method converges at the rate of O(exp(−κ√n)) if the
solution of problem (6) belongs to H¹(D_d) and decays exponentially on the real line.

Theorem 3 (Bialecki [2] and Stenger [15]). Assume that problem (6) has a unique solution y(x),
and that the solution y(x) is analytic on the real line. Furthermore assume, with positive constants
α, β and d, that

(1) μ, ν and σ are analytic and bounded in the strip region D_d;
(2) μ takes real values on the real line;
(3) Re{2ν(x) − μ′(x)} ≤ 0 for all x ∈ R;
(4) σ belongs to H¹(D_d) and decays exponentially on the real line, that is,
    |σ(x)| ≤ α exp(−β|x|) for all x ∈ R;
(5) y belongs to H¹(D_d) and decays exponentially on the real line, that is,
    |y(x)| ≤ α exp(−β|x|) for all x ∈ R.

Then we have

$$ \sup_{-\infty<x<\infty} |y(x) - y_n(x)| \le C\, N^{5/2} \exp[-(\pi d \beta N)^{1/2}] $$

for some C, where the mesh size h in the Sinc-collocation method is taken as

$$ h = \left(\frac{\pi d}{\beta N}\right)^{1/2}. $$

3.2. Sinc-collocation method on a general interval [a, b]

The two-point boundary value problem (5) on a general interval [a, b] is now considered. The
basic idea is first to transform the problem, with a properly selected variable transformation, to that
on the entire interval (−∞, ∞), and then to solve the transformed problem by the Sinc-collocation
method described in Section 3.1. A more formal description of the Sinc-collocation method on the
interval [a, b] follows.
Step 1: With an adroitly selected variable transformation x = φ(ξ) such that

$$ \varphi : (-\infty, \infty) \to (a, b), \qquad \lim_{\xi \to -\infty} \varphi(\xi) = a, \qquad \lim_{\xi \to \infty} \varphi(\xi) = b, $$

transform problem (5) to that on the entire interval (−∞, ∞); the resulting problem is given by

$$ \tilde y''(\xi) + \tilde\mu(\xi)\,\tilde y'(\xi) + \tilde\nu(\xi)\,\tilde y(\xi) = \tilde\sigma(\xi), \qquad \xi \in (-\infty, \infty), $$
$$ \lim_{\xi \to \pm\infty} \tilde y(\xi) = 0, \qquad (10) $$

where ỹ(ξ) = y(φ(ξ)), and

$$ \tilde\mu(\xi) \equiv \varphi'(\xi)\,\mu(\varphi(\xi)) - \varphi''(\xi)/\varphi'(\xi), $$
$$ \tilde\nu(\xi) \equiv (\varphi'(\xi))^2\,\nu(\varphi(\xi)), $$
$$ \tilde\sigma(\xi) \equiv (\varphi'(\xi))^2\,\sigma(\varphi(\xi)). $$

Step 2: Apply the Sinc-collocation method on the entire interval (−∞, ∞) to the transformed
problem (10).
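In code, Step 1 amounts to composing the given coefficients with φ and its derivatives. The helper below (our own sketch, in Python) returns the coefficients of the transformed problem (10) for any transformation supplied together with its first two derivatives; for the tanh transformation (4), φ′(ξ) = (1/4) sech²(ξ/2) and φ″(ξ) = −(1/4) sech²(ξ/2) tanh(ξ/2).

```python
import numpy as np

def transform_coefficients(mu, nu, sigma, phi, dphi, ddphi):
    """Coefficients of the transformed problem (10) for the substitution x = phi(xi)."""
    mu_t    = lambda xi: dphi(xi) * mu(phi(xi)) - ddphi(xi) / dphi(xi)
    nu_t    = lambda xi: dphi(xi) ** 2 * nu(phi(xi))
    sigma_t = lambda xi: dphi(xi) ** 2 * sigma(phi(xi))
    return mu_t, nu_t, sigma_t

# Example: the tanh transformation (4), x = psi1(xi) = tanh(xi/2)/2 + 1/2.
psi1   = lambda xi: 0.5 * np.tanh(0.5 * xi) + 0.5
dpsi1  = lambda xi: 0.25 / np.cosh(0.5 * xi) ** 2
ddpsi1 = lambda xi: -0.25 * np.tanh(0.5 * xi) / np.cosh(0.5 * xi) ** 2
```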
For the convergence of the Sinc-collocation method on the interval [a, b], the following theorem
follows from Theorem 3.

Theorem 4 (Bialecki [2] and Stenger [15]). Assume that problem (5) has a unique solution y(x),
and that y(x) is analytic on the interval (a, b). Furthermore assume that, for a variable transfor-
mation z = φ(ζ), the transformed problem satisfies assumptions 1–5 in Theorem 3 with some α, β
and d. Then we have

$$ \sup_{a \le x \le b} |y(x) - \tilde y_n(\varphi^{-1}(x))| \le C\, N^{5/2} \exp[-(\pi d \beta N)^{1/2}] $$

for some C, where ỹ_n(ξ) is the approximate solution to the transformed problem obtained by the
Sinc-collocation method on the interval (−∞, ∞) with mesh size

$$ h = \left(\frac{\pi d}{\beta N}\right)^{1/2}. $$

This theorem affords a theoretical foundation for the feature of the Sinc-collocation method
that it works well even if the solution of the problem has an end-point
singularity.

Example 2. To illustrate that the Sinc-collocation method works well even if the solution of the
problem has an end-point singularity, we consider the problem [9, Example 3.1]

$$ y''(x) - \frac{3}{4x^2}\, y(x) = -3\sqrt{x}, \qquad x \in (0, 1), $$
$$ y(0) = y(1) = 0, \qquad (11) $$

which has a regular singular point at x = 0, and has the exact solution y(x) = x^{3/2}(1 − x). We employ
the variable transformation (4) used in Example 1. With some calculation we can prove that the
transformed problem, which is given by

$$ \tilde y''(\xi) + (2\psi_1(\xi) - 1)\,\tilde y'(\xi) - \tfrac{3}{4}(1 - \psi_1(\xi))^2\,\tilde y(\xi) = -3(\psi_1(\xi))^{5/2}(1 - \psi_1(\xi))^2, \qquad \xi \in (-\infty, \infty), $$
$$ \lim_{\xi \to \pm\infty} \tilde y(\xi) = 0, $$

satisfies all the assumptions in Theorem 3 with β = 1 and d < π, and hence Theorem 4 applies. Using
β = 1, d = π/2, we apply the Sinc-collocation method on the interval (−∞, ∞) to the transformed
problem. Fig. 2 shows the error in the approximate solution of the Sinc-collocation method, which is
denoted by "Ordinary-Sinc". For comparison, we also show the error in the Chebyshev-collocation
solution [3, pp. 161–166], which is denoted by "Chebyshev". We observe that the Sinc-collocation
method yields a good result and that the error behaves like O(exp(−κ√n)), as expected from
Theorem 4. We also observe that the Chebyshev-collocation method gives a good result, though its
convergence rate is lower than that of the Sinc-collocation method.
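For concreteness, the following Python/NumPy sketch (ours) solves Example 2: it assembles system (9) for the transformed coefficients written out above, uses the Theorem 4 mesh size with β = 1 and d = π/2, and compares the collocation solution with the exact solution y(x) = x^{3/2}(1 − x).

```python
import numpy as np

psi1    = lambda xi: 0.5 * np.tanh(0.5 * xi) + 0.5          # transformation (4)
y_exact = lambda x: x ** 1.5 * (1.0 - x)                    # exact solution of (11)

# Coefficients of the transformed problem written out in Example 2.
mu_t    = lambda xi: 2.0 * psi1(xi) - 1.0
nu_t    = lambda xi: -0.75 * (1.0 - psi1(xi)) ** 2
sigma_t = lambda xi: -3.0 * psi1(xi) ** 2.5 * (1.0 - psi1(xi)) ** 2

beta, d = 1.0, np.pi / 2                                    # parameter values of Example 2

def solve(N):
    h   = np.sqrt(np.pi * d / (beta * N))                   # mesh size of Theorem 4
    k   = h * np.arange(-N, N + 1)                          # collocation points kh
    idx = np.arange(-N, N + 1)
    D   = idx[:, None] - idx[None, :]                       # D[k, j] = k - j
    sgn = np.where(D % 2 == 0, 1.0, -1.0)
    Ds  = np.where(D != 0, D, 1)
    d1  = np.where(D != 0, sgn / Ds, 0.0)
    d2  = np.where(D != 0, -2.0 * sgn / Ds ** 2, -np.pi ** 2 / 3.0)
    A   = d2 / h ** 2 + np.diag(mu_t(k)) @ (d1 / h) + np.diag(nu_t(k))
    return h, np.linalg.solve(A, sigma_t(k))

x  = np.linspace(1e-6, 1.0 - 1e-6, 2000)
xi = 2.0 * np.arctanh(2.0 * x - 1.0)                        # psi1^{-1}(x)
for N in (8, 16, 32, 64):
    h, w = solve(N)
    jh = h * np.arange(-N, N + 1)
    yn = np.sinc((xi[:, None] - jh[None, :]) / h) @ w       # y_n(psi1^{-1}(x))
    print(f"N = {N:3d}   sup-error ~ {np.max(np.abs(y_exact(x) - yn)):.2e}")
```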

[Fig. 2: semi-log plot of |ERROR| versus n for the curves "Chebyshev" and "Ordinary-Sinc".]
Fig. 2. Errors in the Sinc-collocation solution and in the Chebyshev-collocation solution of problem (11).

4. New developments of the Sinc approximation and the Sinc-collocation method

As seen in the preceding sections, the Sinc approximation and the Sinc-collocation method on
the entire interval (−∞, ∞) converge at the rates of O(exp(−κ√n)) under the assumption that the
function or the solution belongs to H¹(D_d) and decays exponentially on the real line. A careful
examination of the proofs of these results reveals the following fact:
For the convergence analysis of the Sinc methods, the assumption that functions belong to
H¹(D_d) is mandatory, whereas the assumption that functions decay exponentially on the real line
is not mandatory, although some assumption on the decay rate of functions on the real line is
required.
This fact naturally leads us to the study of the Sinc methods for functions that enjoy another type
of decay rate on the real line, such as the double exponential, the triple exponential and so on.
In Sections 4.1 and 4.2, we consider functions that decay double exponentially on the real line
and show that the Sinc approximation and the Sinc-collocation method achieve convergence rates
of O(exp(−κ′n/log n)) for such functions. In Section 4.3, we deny the possibility of a more rapid
decay rate than the double exponential, and establish the optimality of the convergence rates of
O(exp(−κ′n/log n)).

4.1. An O(exp(−κ′n/log n)) convergence of the Sinc approximation

The Sinc approximation method applied to a function with a double-exponential decay rate on the
real line converges at the rate of O(exp(−κ′n/log n)).

Theorem 5 (Sugihara [19]). Assume, with positive constants α, β, γ and d, that

(1) f belongs to H¹(D_d);
(2) f decays double exponentially on the real line, that is,
    |f(x)| ≤ α exp(−β exp(γ|x|)) for all x ∈ R.

Then we have

$$ \sup_{-\infty<x<\infty}\,\Biggl|\, f(x) - \sum_{j=-N}^{N} f(jh)\,S(j,h)(x) \Biggr| \le C \exp\!\left(\frac{-\pi d \gamma N}{\log(\pi d \gamma N/\beta)}\right) \qquad (12) $$

for some C, where the mesh size h is taken as

$$ h = \frac{\log(\pi d \gamma N/\beta)}{\gamma N}. $$

For the convergence of the Sinc approximation on the interval [a, b], the following theorem follows
directly from Theorem 5.

Theorem 6 (Sugihara [19]). Assume that, for a variable transformation z = φ(ζ), the transformed
function f(φ(ζ)) satisfies assumptions 1 and 2 in Theorem 5 with some α, β, γ and d. Then we
have

$$ \sup_{a \le x \le b}\,\Biggl|\, f(x) - \sum_{j=-N}^{N} f(\varphi(jh))\, S(j,h)(\varphi^{-1}(x)) \Biggr| \le C \exp\!\left(\frac{-\pi d \gamma N}{\log(\pi d \gamma N/\beta)}\right) $$

for some C, where the mesh size h is taken as

$$ h = \frac{\log(\pi d \gamma N/\beta)}{\gamma N}. $$

Example 3. Function (3) treated in Example 1 is considered again. As a variable transformation,
one of the so-called double-exponential transformations²

$$ x = \psi_2(\xi) \equiv \frac{1}{2}\tanh\!\left(\frac{\pi}{2}\sinh\xi\right) + \frac{1}{2} \qquad (13) $$

is employed. With a little calculation it is shown that the transformed function f(ψ₂(ξ)) satisfies all
the assumptions in Theorem 5 with β = π/4, γ = 1 and d < π/2, and hence Theorem 6 applies. With
β = π/4, γ = 1, d = π/4, the Sinc approximation on the interval (−∞, ∞) is applied to the transformed
function. Fig. 3 shows the error in the Sinc approximation employing the variable transformation (13),
which is denoted by "DE-Sinc". For comparison, the error in the Sinc approximation incorporated
with the variable transformation (4) is shown again, which is denoted by "Ordinary-Sinc". It is
observed that the variable transformation (13) enhances the Sinc approximation method remarkably,
and that the error in the Sinc approximation with the variable transformation (13) converges to zero
at the rate of O(exp(−κ′n/log n)), as expected from Theorem 6.

² This variable transformation was originally devised for numerical integration in [21] (see also [11]). Recently, its
usefulness in the Sinc numerical methods has been recognized [7,12].
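A Python/NumPy sketch of Example 3 (ours) is given below; it differs from the Example 1 sketch only in the transformation (13) and in the mesh-size rule of Theorem 6, with β = π/4, γ = 1 and d = π/4 as stated above.

```python
import numpy as np

def psi2(xi):
    """Double-exponential transformation (13): (-inf, inf) -> (0, 1)."""
    return 0.5 * np.tanh(0.5 * np.pi * np.sinh(xi)) + 0.5

f = lambda x: np.sqrt(x) * (1.0 - x) ** 0.75              # function (3)
beta, gamma, d = np.pi / 4, 1.0, np.pi / 4                # parameter values of Example 3

x  = np.linspace(0.0, 1.0, 2000)
# psi2^{-1}(x) = arcsinh((2/pi) artanh(2x - 1)); clip the endpoints to stay finite.
xi = np.arcsinh(2.0 / np.pi * np.arctanh(np.clip(2.0 * x - 1.0, -1 + 1e-15, 1 - 1e-15)))

for N in (4, 8, 16, 32):
    h  = np.log(np.pi * d * gamma * N / beta) / (gamma * N)   # mesh size of Theorem 6
    jh = h * np.arange(-N, N + 1)
    approx = np.sinc((xi[:, None] - jh[None, :]) / h) @ f(psi2(jh))
    print(f"N = {N:3d}   sup-error ~ {np.max(np.abs(f(x) - approx)):.2e}")
```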

[Fig. 3: semi-log plot of |ERROR| versus n for the curves "Chebyshev", "Ordinary-Sinc" and "DE-Sinc".]

Fig. 3. Errors in the Sinc approximation for the function x^{1/2}(1 − x)^{3/4} using the variable transformations (4) and (13)
(for comparison, the error in the polynomial interpolation with the Chebyshev nodes is also displayed).

4.2. An O(exp(−κ′n/log n)) convergence of the Sinc-collocation method

We here consider the Sinc-collocation method for the problem whose solution decays double ex-
ponentially on the real line. We can prove the following theorem, which shows that the convergence
rate of the Sinc-collocation method is given by O(exp(−κ′n/log n)).

Theorem 7 (Sugihara [18]). Assume that problem (6) has a unique solution y(x), and that the solution
y(x) is analytic on the real line. Furthermore assume, with positive constants A, B, α, β, γ and
d, that

(1) μ, ν and σ are analytic in the strip region D_d, and their absolute values on the real line are
    bounded from above as follows:
    |μ(x)|, |ν(x)|, |σ(x)| ≤ A exp(B|x|) for all x ∈ R;
(2) y″, μy′ and νy belong to H¹(D_d);
(3) μ takes real values on the real line;
(4) Re{2ν(x) − μ′(x)} ≤ 0 for all x ∈ R;
(5) σ belongs to H¹(D_d) and decays double exponentially on the real line, that is,
    |σ(x)| ≤ α exp(−β exp(γ|x|)) for all x ∈ R;
(6) y belongs to H¹(D_d) and decays double exponentially on the real line, that is,
    |y(x)| ≤ α exp(−β exp(γ|x|)) for all x ∈ R.

Then we have

$$ \sup_{-\infty<x<\infty} |y(x) - y_n(x)| \le C (\log N)\, N^{B/\gamma + 3/2} \exp\!\left(\frac{-\pi d \gamma N}{\log(\pi d \gamma N/\beta)}\right) $$

for some C, where the mesh size h in the Sinc-collocation method is taken as

$$ h = \frac{\log(\pi d \gamma N/\beta)}{\gamma N}. $$

For the convergence of the Sinc-collocation method on the interval [a, b], we obtain the following
theorem from Theorem 7.

Theorem 8 (Sugihara [18]). Assume that problem (5) has a unique solution y(x), and that y(x) is
analytic on the interval (a, b). Furthermore assume that, for a variable transformation z = φ(ζ),
the transformed problem satisfies assumptions 1–6 in Theorem 7 with some A, B, α, β, γ and d.
Then we have

$$ \sup_{a \le x \le b} |y(x) - \tilde y_n(\varphi^{-1}(x))| \le C (\log N)\, N^{B/\gamma + 3/2} \exp\!\left(\frac{-\pi d \gamma N}{\log(\pi d \gamma N/\beta)}\right) $$

for some C, where ỹ_n(ξ) is the approximate solution to the transformed problem obtained by the
Sinc-collocation method on the interval (−∞, ∞) with mesh size

$$ h = \frac{\log(\pi d \gamma N/\beta)}{\gamma N}. $$

Example 4. Problem (11) treated in Example 2 is considered again. Variable transformation (13)
used in Example 3 is employed. The transformed problem is given by

$$ \tilde y''(\xi) + \{(\pi\cosh\xi)(2\psi_2(\xi) - 1) - \tanh\xi\}\,\tilde y'(\xi) - \tfrac{3}{4}(\pi\cosh\xi)^2(1 - \psi_2(\xi))^2\,\tilde y(\xi) = -3(\pi\cosh\xi)^2(\psi_2(\xi))^{5/2}(1 - \psi_2(\xi))^2, \qquad \xi \in (-\infty, \infty), $$
$$ \lim_{\xi \to \pm\infty} \tilde y(\xi) = 0. $$

With lengthy calculation, it can be proved that this problem satisfies all the assumptions in Theorem
7 with B = 2, β = π/2, γ = 1 and d < π/2, and hence Theorem 8 applies. With β = π/2, γ = 1, d =
π/4, the Sinc-collocation method on the interval (−∞, ∞) is applied to the transformed problem.
Fig. 4 shows the error in the approximate solution of the Sinc-collocation method employing the
variable transformation (13), which is denoted by "DE-Sinc". For comparison, the error by the
Sinc-collocation method with the variable transformation (4) is shown again, which is denoted by
"Ordinary-Sinc". It is observed that the variable transformation (13) enhances the Sinc-collocation
method considerably, to the convergence rate of O(exp(−κ′n/log n)), as expected from Theorem 8.
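The Example 2 sketch carries over almost verbatim (again, our own illustration, not part of the paper): only the transformed coefficients and the mesh-size rule change. With the double-exponential transformation (13) and the Theorem 8 mesh size, a Python/NumPy version reads:

```python
import numpy as np

psi2    = lambda xi: 0.5 * np.tanh(0.5 * np.pi * np.sinh(xi)) + 0.5   # transformation (13)
y_exact = lambda x: x ** 1.5 * (1.0 - x)                              # exact solution of (11)

# Coefficients of the transformed problem written out in Example 4.
mu_t    = lambda xi: np.pi * np.cosh(xi) * (2.0 * psi2(xi) - 1.0) - np.tanh(xi)
nu_t    = lambda xi: -0.75 * (np.pi * np.cosh(xi)) ** 2 * (1.0 - psi2(xi)) ** 2
sigma_t = lambda xi: -3.0 * (np.pi * np.cosh(xi)) ** 2 * psi2(xi) ** 2.5 * (1.0 - psi2(xi)) ** 2

beta, gamma, d = np.pi / 2, 1.0, np.pi / 4                            # parameter values of Example 4

def solve(N):
    h   = np.log(np.pi * d * gamma * N / beta) / (gamma * N)          # mesh size of Theorem 8
    k   = h * np.arange(-N, N + 1)
    idx = np.arange(-N, N + 1)
    D   = idx[:, None] - idx[None, :]
    sgn = np.where(D % 2 == 0, 1.0, -1.0)
    Ds  = np.where(D != 0, D, 1)
    d1  = np.where(D != 0, sgn / Ds, 0.0)
    d2  = np.where(D != 0, -2.0 * sgn / Ds ** 2, -np.pi ** 2 / 3.0)
    A   = d2 / h ** 2 + np.diag(mu_t(k)) @ (d1 / h) + np.diag(nu_t(k))
    return h, np.linalg.solve(A, sigma_t(k))

x  = np.linspace(1e-6, 1.0 - 1e-6, 2000)
xi = np.arcsinh(2.0 / np.pi * np.arctanh(2.0 * x - 1.0))              # psi2^{-1}(x)
for N in (4, 8, 16, 32):
    h, w = solve(N)
    jh = h * np.arange(-N, N + 1)
    yn = np.sinc((xi[:, None] - jh[None, :]) / h) @ w
    print(f"N = {N:3d}   sup-error ~ {np.max(np.abs(y_exact(x) - yn)):.2e}")
```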

4.3. Optimality of the O(exp(−κ′n/log n)) convergence rate

We may naturally be tempted to proceed to the case where the decay rate is more rapid than the
double exponential. However, the following astonishing theorem denies this possibility.

[Fig. 4: semi-log plot of |ERROR| versus n for the curves "Chebyshev", "Ordinary-Sinc" and "DE-Sinc".]

Fig. 4. Errors in the Sinc-collocation method for problem (11) employing the variable transformations (4) and (13) (for
comparison, the error in the Chebyshev-collocation solution is also displayed).

Theorem 9 (Sugihara [17]). If a function f satisfies the following two conditions:

(1) f belongs to H¹(D_d);
(2) the decay rate on the real line satisfies
    f(x) = O(exp(−β exp(γ|x|))) as |x| → ∞,
    where β > 0 and γ > π/(2d),

then f ≡ 0.

This theorem implies that the double exponential decay treated in the preceding sections is extremal
and that the convergence rates of O(exp(−κ′n/log n)) are best possible.

5. Concluding remarks

5.1. More examples

In Examples 1– 4, we saw that the “double-exponential” Sinc methods, i.e., the Sinc numeri-
cal methods incorporated with double-exponential transformations, outperform both the “ordinary”
Sinc methods, i.e., the Sinc numerical methods combined with ordinary transformations, and the
polynomial-based methods. It should be noted, however, that there exist some functions for which
this is not the case. Three examples are shown below.

Example 5. Consider the function

$$ f(x) = \frac{x(1-x)\,e^{-x}}{(1/2)^2 + (x - 1/2)^2}, \qquad x \in [0, 1], \qquad (14) $$

[Fig. 5: semi-log plot of |ERROR| versus n for the curves "Chebyshev", "Ordinary-Sinc" and "DE-Sinc".]

Fig. 5. Errors for the function x(1 − x)e^{−x}/((1/2)² + (x − 1/2)²) in the polynomial interpolation with the Chebyshev nodes
and in the Sinc approximation using the variable transformations (4) and (13).

which is analytic on the interval [0, 1], but has poles at 1/2 ± (1/2)i. To this function we first apply the
Sinc approximation with x = ψ₁(ξ) in (4). We take the parameter values β = 1 and d = π/4, since
the transformed function f(ψ₁(ξ)) satisfies the assumptions in Theorem 1 with β = 1 and d < π/2.
In Fig. 5 we observe that the error, denoted by "Ordinary-Sinc", behaves like O(exp(−κ√n)), as
expected from Theorem 1. Secondly, we apply the Sinc approximation using the double-exponential
transformation x = ψ₂(ξ) in (13). We take the parameter values β = π/2, γ = 1, d = π/12, because the
transformed function f(ψ₂(ξ)) satisfies the assumptions in Theorem 5 with β = π/2, γ = 1 and d < π/6.
Fig. 5 shows that the error in the double-exponential Sinc approximation, denoted by "DE-Sinc",
converges to zero at the rate of O(exp(−κ′n/log n)), as expected from Theorem 5. Thirdly, we
apply the polynomial interpolation with the Chebyshev nodes, whose error is also shown in Fig. 5,
denoted by "Chebyshev". We observe that the polynomial-based method yields an extremely good
result. The theory of polynomial interpolation [4] tells us that the error is O((1 + √2)^{−n}), which
accords with the behavior of the observed error. In general, for functions that are analytic on the
underlying interval the error in the polynomial interpolation with the Chebyshev nodes converges to
zero at the rate of O(R^{−n}) with R > 1. Hence in these cases the polynomial-based approximation
outdoes both the ordinary Sinc approximation and the double-exponential Sinc approximation.
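The polynomial comparison in Example 5 is easy to reproduce; the sketch below is ours (the paper does not specify the implementation, so the choice of Chebyshev points of the second kind and the NumPy helpers are our assumptions). It interpolates function (14) at n + 1 Chebyshev points mapped to [0, 1] and prints the observed sup-error next to the predicted rate (1 + √2)^{−n}.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Function (14): analytic on [0, 1], with poles at 1/2 +- (1/2)i.
f = lambda x: x * (1.0 - x) * np.exp(-x) / (0.25 + (x - 0.5) ** 2)

x = np.linspace(0.0, 1.0, 2000)
for n in (8, 16, 32, 64):
    t = C.chebpts2(n + 1)                        # n + 1 Chebyshev points in [-1, 1]
    coef = C.chebfit(t, f(0.5 * (t + 1.0)), n)   # degree-n interpolant on the mapped nodes
    err = np.max(np.abs(f(x) - C.chebval(2.0 * x - 1.0, coef)))
    print(f"n = {n:3d}   sup-error ~ {err:.2e}   (1+sqrt(2))^(-n) ~ {(1 + np.sqrt(2)) ** (-n):.1e}")
```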

Example 6. Consider approximating the function

$$ f(x) = x^{1/2}(1-x)^{3/4}\,\mathrm{sn}\bigl(\log(x/(1-x)),\, (1/2)^{1/2}\bigr), \qquad x \in [0, 1], \qquad (15) $$

where sn(x, (1/2)^{1/2}) is the Jacobi elliptic function, which has singularities at K(2m + (2n + 1)i),
where K = 1.85407467730137... and m and n are integers [1]. We first apply the Sinc approximation
with the ordinary transformation x = ψ₁(ξ) in (4). Since the assumptions in Theorem 1 with β = 1/2
and d < K are met, we choose the parameter values β = 1/2 and d = K/2. The result is shown in
Fig. 6 by "Ordinary-Sinc". We observe that the error behaves like O(exp(−κ√n)), as expected
from Theorem 1. Note that the error oscillates due to the oscillation of the approximated function.
Secondly, we apply the Sinc approximation with the double-exponential transformation x = ψ₂(ξ) in

[Fig. 6: semi-log plot of |ERROR| versus n for the curves "Chebyshev", "Ordinary-Sinc" and "DE-Sinc".]

Fig. 6. Errors for the function x^{1/2}(1 − x)^{3/4} sn(log(x/(1 − x)), (1/2)^{1/2}) in the polynomial interpolation with the Chebyshev
nodes and in the Sinc approximation using the variable transformations (4) and (13).

(13). In this case Theorem 5, which gives a theoretical foundation for the use of the double-exponential
Sinc approximation, is not applicable, since assumption 1 is not satisfied for any d because
the singularities of the transformed function lie arbitrarily near the real axis. Though without
theoretical justification, we used the same parameter values β = π/4, γ = 1, d = π/4 as for the
double-exponential Sinc approximation in Example 3. The result is shown in Fig. 6 by "DE-Sinc".
The performance of the double-exponential Sinc approximation is comparable to that of the ordinary
Sinc approximation, although no theoretical analysis explains this. Thirdly, we apply the polynomial
interpolation with the Chebyshev nodes, whose error is also shown in Fig. 6 by "Chebyshev". Not
surprisingly, this gives only a poor result.

Example 7. Consider the function

$$ f(x) = x^{1/2}(1-x)^{3/4}\,\mathrm{sn}\bigl(\sinh(\log(x/(1-x))),\, (1/2)^{1/2}\bigr) $$
$$ \phantom{f(x)} = x^{1/2}(1-x)^{3/4}\,\mathrm{sn}\bigl((1/(1-x) - 1/x)/2,\, (1/2)^{1/2}\bigr), \qquad x \in [0, 1], $$

to which we apply the three approximation schemes as above. For the ordinary Sinc approximation
we use the same parameter values as in Example 1, and for the double-exponential Sinc
approximation we use β = π/4, γ = 1 and d = π/4, the same values as in Example 3. Note that
these parameter values are not justified theoretically. The results are shown in Fig. 7, which are
desperately poor. Approximating this function seems to be beyond the power of the Sinc methods.

5.2. Near-optimality of the Sinc-collocation method

Recently, near-optimality of the Sinc approximation has been established in certain function spaces
[19]. We suspect that the near-optimality of the Sinc-collocation method can be formulated with some
proper setup of function spaces.

[Fig. 7: semi-log plot of |ERROR| versus n for the curves "Chebyshev", "Ordinary-Sinc" and "DE-Sinc".]

Fig. 7. Errors for the function x^{1/2}(1 − x)^{3/4} sn((1/(1 − x) − 1/x)/2, (1/2)^{1/2}) in the polynomial interpolation with the Chebyshev
nodes and in the Sinc approximation using the variable transformations (4) and (13).

5.3. Tractability of the assumptions in Theorems 3 and 7

Assumption 5 in Theorem 3 for the convergence of the ordinary Sinc-collocation method and
assumptions 2 and 6 in Theorem 7 for the convergence of the double-exponential Sinc-collocation
method seem less tractable, because checking them requires knowledge of the true
solution. Hence it is desirable to replace these assumptions with much more tractable ones,
which is left as future work.

Acknowledgements

The authors are grateful to the referees for their valuable comments.

References

[1] M. Abramowitz, I. Stegun, Handbook of Mathematical Functions, NBS Applied Mathematics Series, Vol. 55, NBS,
Washington, DC, 1964, p. 1046.
[2] B. Bialecki, Sinc-collocation methods for two-point boundary value problems, IMA J. Numer. Anal. 11 (1991)
357–375.
[3] B. Fornberg, A Practical Guide to Pseudospectral Methods, Cambridge University Press, New York, 1996.
[4] D. Gaier, Lectures on Complex Approximation, Birkhäuser, Boston, 1987.
[5] W. Gautschi, Numerical Analysis: An Introduction, Birkhäuser, Boston, 1997.
[6] S. Haber, The tanh rule for numerical integration, SIAM J. Numer. Anal. 14 (1977) 668–685.
[7] F. Keinert, Uniform approximation to |x| by Sinc function, J. Approx. Theory 66 (1991) 44–52.
[8] M.A. Kowalski, K.A. Sikorski, F. Stenger, Selected Topics in Approximation and Computation, Oxford University
Press, Oxford, 1995.
[9] J. Lund, Symmetrization of the Sinc–Galerkin method for boundary value problems, Math. Comput. 47 (1986) 571–588.
[10] J. Lund, K.L. Bowers, Sinc Methods for Quadrature and Differential Equations, SIAM, Philadelphia, PA, 1992.

[11] M. Mori, Quadrature formulas obtained by variable transformation and DE rule, J. Comput. Appl. Math. 12, 13
(1985) 119–130.
[12] M. Mori, M. Sugihara, The double exponential transformation in numerical analysis, in: W. Gautschi, F. Marcellán,
L. Reichel (Eds.), Numerical Analysis in the 20th Century, Vol. 5: Quadrature and Orthogonal Polynomials, Elsevier,
Amsterdam, 2001; J. Comput. Appl. Math. 127 (2001) 287–296.
[13] C. Schwartz, Numerical integration of analytic functions, J. Comput. Phys. 4 (1969) 19–29.
[14] F. Stenger, Integration formulas based on the trapezoidal formula, J. Inst. Math. Appl. 12 (1973) 103–114.
[15] F. Stenger, Numerical Methods Based on Sinc and Analytic Functions, Springer, Berlin, New York, 1993.
[16] F. Stenger, Summary of Sinc numerical methods, in: L. Wuytack, J. Wimp (Eds.), Numerical Analysis in the 20th
Century, Vol. 1: Approximation Theory, J. Comput. Appl. Math. 121 (2000) 379 – 420.
[17] M. Sugihara, Optimality of the double exponential formula—functional analysis approach, Numer. Math. 75 (1997)
379–395.
[18] M. Sugihara, Double exponential transformation in the Sinc-collocation method, J. Comput. Appl. Math. 149 (2002)
239–250.
[19] M. Sugihara, Near optimality of the Sinc approximation, Math. Comput. 72 (2003) 767–786.
[20] H. Takahasi, M. Mori, Quadrature formulas obtained by variable transformation, Numer. Math. 21 (1973) 206–219.
[21] H. Takahasi, M. Mori, Double exponential formulas for numerical integration, Publ. RIMS Kyoto Univ. 9 (1974)
721–741.
