Cox, Matthews, JCP, 2002, Exponential Time Differencing For Stiff Systems
We develop a class of numerical methods for stiff systems, based on the method of
exponential time differencing. We describe schemes with second- and higher-order
accuracy, introduce new Runge–Kutta versions of these schemes, and extend the
method to show how it may be applied to systems whose linear part is nondiagonal.
We test the method against other common schemes, including integrating factor and
linearly implicit methods, and show how it is more accurate in a number of applications.
We apply the method to both dissipative and dispersive partial differential
equations, after illustrating its behavior using forced ordinary differential equations
with stiff linear parts. © 2002 Elsevier Science (USA)
Key Words: stiff systems; exponential time differencing; integrating factor methods.
1. INTRODUCTION
Stiff systems of ordinary differential equations (ODEs) arise commonly when solving
partial differential equations (PDEs) by spectral methods, and their numerical solution
requires special treatment if accurate solutions are to be found efficiently. In this paper,
we develop and test a class of numerical methods for integrating stiff systems, based on
exact integration of the linear part of the equations. These methods differ from the usual
integrating factor methods in their treatment of the nonlinear terms and are considerably
more accurate than those methods, as we demonstrate in this paper.
When solving a PDE subject to spatially periodic boundary conditions, it is natural to write
the solution as a sum of Fourier modes with time-dependent coefficients. The corresponding
spectral and pseudo-spectral methods have been shown to be remarkably successful for a
wide range of applications [3–5, 21]. However, the set of ODEs for the mode amplitudes
is stiff, because the time scale associated with the nth mode scales as O(n^{-m}) for large
n, where m is the order of the highest spatial derivative, so that the highest modes evolve
on short time scales. It is generally the case that the linear terms are primarily responsible
for the stiffness of a system of ODEs for modes in a spectral simulation. Several important
PDEs, such as the Cahn–Hilliard equation [22] or the Kuramoto–Sivashinsky equation [12],
involve four spatial derivatives (m = 4). In fact, the methods discussed in this paper were
derived while considering model PDEs with six spatial derivatives [14, 15], so that m = 6,
and particularly careful consideration of the time-stepping method is required.
In problems where the boundary conditions are not periodic, a basis other than Fourier
modes may be appropriate (e.g., Chebyshev polynomials) and the linearized system may
no longer be diagonal; in this case, the stiffness problem is exacerbated [20].
The numerical method treated in this paper is the so-called "exponential time differencing"
(ETD) scheme [9, 17, 18], which involves exact integration of the governing equations
followed by an approximation of an integral involving the nonlinear terms. It arose originally
in the field of computational electrodynamics [19], where the problem of computing the
electric and magnetic fields in a box with absorbing boundaries is stiff (essentially because
of the large value of c, the speed of light). For that problem, standard, explicit time-stepping
techniques require an extremely small time step in order to be stable, and although implicit
schemes with less stringent constraints on the time step are available, they are costly (or
infeasible) to implement in three dimensions [9]. Explicit “exponential time differencing”
[9, 17, 18] with first-order accuracy has been used widely in computational electrodynamics.
This original first-order explicit scheme has since been extended to implicit and explicit
schemes of arbitrary order, and the stability of such schemes has been discussed in some
detail [1]. To develop ETD methods further, in this paper we derive new, more accurate
(Runge–Kutta) ETD methods and provide a more succinct derivation of ETD methods than
previously given. We also apply ETD methods to various PDEs and compare them with
alternative time-stepping methods to illustrate the superior performance of ETD schemes,
and show how ETD methods can be applied to systems of ODEs whose linear part is not
diagonal. All ETD schemes discussed in this paper are explicit.
The structure of this paper is as follows. In Section 2, we describe the first- and second-
order-accurate exponential time differencing scheme and give a derivation of ETD schemes
of arbitrary order. We then describe the new Runge–Kutta ETD methods. Other schemes for
stiff systems are also described for comparison, and in Section 3 we discuss the stability of
the various schemes. In Section 4, we compare the second- and fourth-order ETD schemes
with other methods, for several stiff problems. Our primary interest lies in solving PDEs,
and we give comparisons for both dissipative and dispersive PDEs. However, to illustrate
the behavior of the method, we also provide simpler and more detailed examples in the
form of a single, linear, inhomogeneous ODE with either a real or an imaginary linear part.
A final set of examples concerns problems in which the linearized system is nondiagonal,
and shows how the method can readily be adapted to this important case. We summarize
our results in Section 5.
2. DERIVATION OF METHODS
When a PDE is discretized using a Fourier spectral method, a stiff system of coupled
ODEs for the Fourier coefficients is obtained. The linear part of this system is diagonal,
while the nonlinear terms are usually evaluated by transforming to physical space, evaluating
the nonlinear terms at grid points and then transforming back to spectral space.
In stiff systems, solutions evolve on two time scales. If the stiffness is due to rapid
exponential decay of some modes (as with a dissipative PDE), then there is a rapid approach
to a “slow manifold,” followed by slower evolution along the slow manifold itself [2]. If,
by contrast, the stiffness is due to rapid oscillations of some modes (as with a dispersive
PDE), then the solution rapidly oscillates about its projection on the slow manifold; it is
this projection which evolves slowly. In general, stiffness may have features of both rapid
decay and rapid oscillation.
Although our primary interest lies in solving PDEs, it is clearer and more instructive
first to describe ETD methods in the context of a simple model ODE for the evolution of a
single Fourier mode. Since the linear operator in a Fourier basis is diagonal, the extension
of the method to the system of ODEs for the mode amplitudes is then immediate. The model
ODE is

du/dt = cu + F(u, t),  (1)

where c is a constant and F(u, t) represents nonlinear and forcing terms. For the high-
order Fourier modes, c is large and negative (for dissipative PDEs) or large and imaginary
(for dispersive PDEs). A suitable time-stepping method for (1) should be able to handle
the stiffness caused by the large values of |c| without requiring time steps of order |1/c|.
However, since the coefficients c span a wide range of values when all Fourier modes
are considered, the time-stepping method should also be applicable to small values of |c|.
Finally, we require that the term F(u, t) be handled explicitly, since fully implicit methods
are too costly for large-scale PDE simulations.
When |c| ≫ 1, solutions of (1) generally consist of two elements: a fast phase in which
u = O(1) and du/dt = O(c), and a "slow manifold" on which u = O(1/c) and du/dt =
O(1); if Re(c) < 0, solutions are attracted to the slow manifold. The slow manifold can be
expressed as an asymptotic series in powers of 1/c as

u ∼ −F/c − (1/c²) dF/dt − (1/c³) d²F/dt² − · · · ,  (2)
and forms the basis of “nonlinear Galerkin” methods [3, 13]. Since initial conditions do not
generally lie on the slow manifold, a numerical method should ideally give stable, highly
accurate solutions during both the fast and slow phases.
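As a quick numerical illustration of (2) (our own check, not from the paper): for u̇ = cu + F(t) with F(t) = sin t and c = −100, the exact slow (particular) solution can be found by undetermined coefficients and compared with the truncated series. The test point t = 0.3 is arbitrary.

```python
import numpy as np

# First three terms of the slow-manifold series (2) for u' = cu + sin t, c = -100
c, t = -100.0, 0.3
F, Fdot, Fddot = np.sin(t), np.cos(t), -np.sin(t)
u_series = -F/c - Fdot/c**2 - Fddot/c**3

# Exact particular (slow) solution of u' = cu + sin t, by undetermined coefficients
u_slow = -(c*np.sin(t) + np.cos(t))/(1.0 + c**2)
```

The truncated series agrees with the slow solution to O(1/c⁴), while the leading term −F/c alone is only accurate to O(1/c²).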
Multiplying (1) by the integrating factor e^{−ct} and integrating over a single time step
from t = t_n to t = t_{n+1} gives the exact result

u(t_{n+1}) = u(t_n) e^{ch} + e^{ch} ∫_0^h e^{−cτ} F(u(t_n + τ), t_n + τ) dτ.  (3)

This formula is exact, and the essence of the ETD methods is in deriving approximations
to the integral in this expression.
We denote the numerical approximation to u(tn ) by u n and write F(u n , tn ) as Fn . The
simplest approximation to the integral in (3) is that F is constant, F = Fn + O(h), between
t = t_n and t = t_{n+1}, so that (3) becomes the scheme ETD1, given by

u_{n+1} = u_n e^{ch} + F_n (e^{ch} − 1)/c,  (4)

which has a local truncation error h² Ḟ/2. This version of the exponential time differencing
method has been applied in computational electrodynamics [9, 17, 19], but (4) is rarely
mentioned outside of this field in the numerical analysis literature (the notable exception
being [1]). Note that for small |c|, (4) approaches the forward Euler method, while for large
|c| the first term in the series (2) is recovered.
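ETD1 can be implemented in a few lines. The sketch below (our own illustration, with an arbitrary test problem) exploits the fact that for constant forcing F ≡ 1 the approximation underlying (4) is exact, so ETD1 reproduces the true solution of u̇ = cu + 1 to rounding error even when |ch| ≫ 1, a regime in which forward Euler is unstable.

```python
import numpy as np

def etd1(u0, c, F, h, nsteps):
    """ETD1, scheme (4): u_{n+1} = u_n e^{ch} + F_n (e^{ch} - 1)/c."""
    ech = np.exp(c*h)
    coef = (ech - 1.0)/c
    u, t = u0, 0.0
    for _ in range(nsteps):
        u = u*ech + coef*F(u, t)
        t += h
    return u

# For F = 1 the assumption F = F_n is exact, so ETD1 is exact for u' = cu + 1.
c, h, n = -100.0, 0.1, 20          # ch = -10, far outside the forward Euler stability region
u_num = etd1(1.0, c, lambda u, t: 1.0, h, n)
u_exact = (1.0 + 1.0/c)*np.exp(c*n*h) - 1.0/c
```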
If instead of assuming that F is constant over the interval t_n ≤ t ≤ t_{n+1}, we use the higher-
order approximation

F = F_n + τ(F_n − F_{n−1})/h + O(h²),  (5)

where τ = t − t_n, we obtain the second-order scheme ETD2,

u_{n+1} = u_n e^{ch} + F_n ((1 + hc)e^{ch} − 1 − 2hc)/(hc²) + F_{n−1}(−e^{ch} + 1 + hc)/(hc²),  (6)
which has a local truncation error of 5h³F̈/12. Note that the apparent divergence of the
coefficients in (6) as c → 0 is illusory; in fact (6) becomes the second-order Adams–
Bashforth method in this limit. For large |c|, (6) gives the first two terms of (2).
For completeness we derive concisely in the next section ETD schemes of arbitrary order
(cf. [1]). The availability of high-order ETD schemes represents an important advantage of
these methods over standard linearly implicit methods, for which the order is limited by
stability constraints.
We now note two practical points that must be borne in mind when applying the ETD
methods to PDEs. First, there is often some mode or modes with c = 0 (as is the case with
(58) and (59) later). In that event, the explicit formulae for the coefficients of Fn , Fn−1 ,
etc., in the equivalent of (4) or (6) cannot be used directly, since they involve division
by zero. Instead, the limiting form of the coefficients as c → 0 must be used for such a
mode. Another practical consideration is that care must be taken in the evaluation of the
coefficients for modes where |ch| is small, to avoid rounding errors arising from the large
amount of cancellation in the coefficients. This becomes increasingly important as the order
of the method is raised; in some cases the Taylor series for the coefficients should be used
rather than the explicit formulae themselves, when |ch| ≪ 1.
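In code, the switch to a series can be handled as below (a minimal sketch; the thresholds and the number of Taylor terms are illustrative choices, not values from the paper). Here z = ch, g₀ = (e^z − 1)/z and g₁ = (e^z − 1 − z)/z², so the c → 0 limits g₀ → 1 and g₁ → 1/2 are recovered automatically.

```python
import numpy as np

def g0(z, tol=1e-4):
    """g0 = (e^z - 1)/z, switching to a Taylor series when |z| < tol."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    small = np.abs(z) < tol
    zs = z[small]
    out[small] = 1.0 + zs/2.0 + zs**2/6.0       # series about z = 0
    zb = z[~small]
    out[~small] = np.expm1(zb)/zb                # expm1 limits cancellation in e^z - 1
    return out

def g1(z, tol=1e-3):
    """g1 = (e^z - 1 - z)/z^2, switching to a Taylor series when |z| < tol."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    small = np.abs(z) < tol
    zs = z[small]
    out[small] = 0.5 + zs/6.0 + zs**2/24.0
    zb = z[~small]
    out[~small] = (np.expm1(zb) - zb)/zb**2
    return out
```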
Writing G_n(τ) = F(u(t_n + τ), t_n + τ) for the nonlinear term, a backward difference
approximation through the s known values G_n(0), G_{n−1}(0), . . . , G_{n−s+1}(0) gives

G_n(τ) ≈ [1 − \binom{−τ/h}{1} ∇ + \binom{−τ/h}{2} ∇² − · · · + (−1)^{s−1} \binom{−τ/h}{s−1} ∇^{s−1}] G_n(0),  (7)

where
∇G_n(0) = G_n(0) − G_{n−1}(0),   ∇²G_n(0) = G_n(0) − 2G_{n−1}(0) + G_{n−2}(0),  (8)
etc., and
\binom{−τ/h}{m} = (−τ/h)(−τ/h − 1) · · · (−τ/h − m + 1)/m!,  (9)

for m = 1, . . . , s − 1.
It then follows from (3) that our approximation to u_{n+1} satisfies

u_{n+1} − u_n e^{ch} = e^{ch} ∫_0^h e^{−cτ} [1 − \binom{−τ/h}{1} ∇ + · · · + (−1)^{s−1} \binom{−τ/h}{s−1} ∇^{s−1}] G_n(0) dτ

  = e^{ch} Σ_{m=0}^{s−1} (−1)^m ∫_0^h e^{−cτ} \binom{−τ/h}{m} dτ ∇^m G_n(0)

  = h Σ_{m=0}^{s−1} (−1)^m ∫_0^1 e^{ch(1−λ)} \binom{−λ}{m} dλ ∇^m G_n(0)

  = h Σ_{m=0}^{s−1} g_m ∇^m G_n(0),  (10)

where

g_m = (−1)^m ∫_0^1 e^{ch(1−λ)} \binom{−λ}{m} dλ.  (11)
Σ_{m=0}^∞ g_m z^m = e^{ch}(1 − z − e^{−ch}) / [(1 − z)(ch + log(1 − z))].  (13)
g_0 = (e^{ch} − 1)/(ch)   and   g_1 = (g_0 − 1)/(ch) = (e^{ch} − 1 − ch)/(c²h²),  (18)
which give rise to the scheme (6).
Having obtained the g_m, the ETD scheme (10) is given explicitly as

u_{n+1} = u_n e^{ch} + h Σ_{m=0}^{s−1} g_m Σ_{k=0}^{m} (−1)^k \binom{m}{k} F_{n−k}.  (19)
To obtain a Runge–Kutta version of ETD2, we first take a single step of ETD1 to give

a_n = u_n e^{ch} + F_n (e^{ch} − 1)/c.  (20)

Then the approximation

F = F_n + (τ/h)(F(a_n, t_n + h) − F_n) + O(h²)  (21)

is applied on the interval t_n ≤ t ≤ t_{n+1}, and is substituted into (3) to yield the scheme
ETD2RK, given by

u_{n+1} = a_n + (F(a_n, t_n + h) − F_n)(e^{ch} − 1 − ch)/(hc²).  (22)

The truncation error per step for this method is −h³F̈/12; note that this is smaller by a
factor of 5 than that of ETD2.
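A scalar ETD2RK step, an ETD1 predictor followed by a corrector, can be sketched as follows (the test problem, step sizes, and the empirical order check are our own choices, not from the paper):

```python
import numpy as np

def etd2rk_step(u, t, c, F, h):
    """One ETD2RK step for u' = cu + F(u, t): ETD1 predictor, then corrector."""
    ech = np.exp(c*h)
    Fn = F(u, t)
    a = u*ech + Fn*(ech - 1.0)/c                            # ETD1 predictor
    return a + (F(a, t + h) - Fn)*(ech - 1.0 - c*h)/(h*c*c)  # corrector

# empirical order check on u' = cu + sin t with c = -10 (illustrative problem)
c, T, u0 = -10.0, 1.0, 1.0
F = lambda u, t: np.sin(t)

def solve(h):
    u, t = u0, 0.0
    for _ in range(round(T/h)):
        u = etd2rk_step(u, t, c, F, h)
        t += h
    return u

err = [abs(solve(h) - solve(h/16)) for h in (0.02, 0.01)]
order = np.log2(err[0]/err[1])   # should be close to 2
```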
The terms an and bn approximate the values of u at tn + h/2 and tn + h, respectively. The
final formula (25) is the quadrature formula for (3) derived from quadratic interpolation
through the points tn , tn + h/2, and tn + h.
A straightforward extension of the standard fourth-order RK method yields a scheme
which is only third order. However, by varying the scheme and introducing further
parameters, a fourth-order scheme ETD4RK is obtained:

a_n = u_n e^{ch/2} + (e^{ch/2} − 1) F_n/c,  (26)

b_n = u_n e^{ch/2} + (e^{ch/2} − 1) F(a_n, t_n + h/2)/c,  (27)

c_n = a_n e^{ch/2} + (e^{ch/2} − 1)(2F(b_n, t_n + h/2) − F_n)/c,  (28)

u_{n+1} = u_n e^{ch} + {F_n [−4 − ch + e^{ch}(4 − 3ch + c²h²)]
  + 2[F(a_n, t_n + h/2) + F(b_n, t_n + h/2)][2 + ch + e^{ch}(−2 + ch)]
  + F(c_n, t_n + h)[−4 − 3ch − c²h² + e^{ch}(4 − ch)]}/(c³h²).  (29)
The computer algebra package Maple was used to confirm that this method is indeed fourth
order.
The integrating factor (IF) methods are derived by multiplying (1) through by e^{−ct} to
obtain

d(u e^{−ct})/dt = F(u, t) e^{−ct},  (30)
and then applying a time-stepping scheme to this equation. If we use the second-order
Adams–Bashforth method we obtain the IF method
IFAB2:  u_{n+1} = u_n e^{ch} + (3h/2) F_n e^{ch} − (h/2) F_{n−1} e^{2ch}.  (31)
The truncation error per step of this method is

(5/12) h³ d²(F e^{−ct})/dt² ∼ (5/12) h³ c² F e^{−ct}  as |c| → ∞.  (32)
This method has been used previously for a spectral simulation of the Cahn–Hilliard equation
[22]. The truncation errors for the linearly implicit methods (34) and (35) have the same
scaling. For solutions that lie off the slow manifold, the error per step is of order c3 h 3 for
large |c|, but for solutions on the slow manifold the error is smaller, of order h 3 .
3. STABILITY
In this section we compare the stability of several of the second-order methods described
above. The general approach for stability analysis of a numerical method that uses different
methods for the linear and nonlinear parts of the equation is as follows (see for example,
[1, 6]). For the nonlinear, autonomous ODE
u̇ = cu + F(u), (36)
we suppose that there is a fixed point u_0, so that cu_0 + F(u_0) = 0. Linearizing about this
fixed point leads to

u̇ = cu + λu,  (37)

where λ = F′(u_0) and u now denotes the perturbation to u_0.
When a second-order numerical method, for example, ETD2, is applied to (36), the
linearization of the nonlinear term in the numerical method leads to a recurrence relation
involving u n+1 , u n , and u n−1 . This is equivalent to applying the method to (37), with the
term λu regarded as the nonlinear term. Note that an implicit assumption in this approach
is that the fixed points of the numerical method are the same as those of the ODE. This is
true for ETD and LI methods, but not for the IF methods; it follows that the meaning of the
stability analysis for IF methods is not clear.
In general, both c and λ in (37) are complex, so the stability region for these methods is
four dimensional. In order to plot two-dimensional stability regions, previous authors have
used the complex λ-plane, assuming c to be fixed and real [1], or have assumed that both c
and λ are purely imaginary [6]. An alternative approach, used below, is to concentrate on
the case in which c and λ are real.
Consider first the method AB2AM2 applied to (37). This leads to

u_{n+1} = u_n + (h/2)(cu_n + cu_{n+1}) + (3h/2) λu_n − (h/2) λu_{n−1}.  (38)

After defining r = u_{n+1}/u_n, x = λh, y = ch, we find the following quadratic equation for
the factor r by which the solution is multiplied after each step:

(1 − y/2) r² − (1 + y/2 + 3x/2) r + x/2 = 0.  (39)

With the assumption that x and y are real, it can be shown that r is real, so that the stability
boundaries correspond to r = 1 and r = −1 in (39). These correspond to the lines x + y = 0
and x = −1 in the (x, y) plane, respectively.
Similarly, for the method AB2BD2, the quadratic for r is

(3/2 − y) r² − 2(1 + x) r + (1/2 + x) = 0,  (40)

and the stability boundaries are the lines x + y = 0 and y = 4 + 3x. For ETD2, the equation
for r is

r² − r [e^y + x((1 + y)e^y − 1 − 2y)/y²] − x(1 + y − e^y)/y² = 0,  (41)

and the stability boundaries are the line x + y = 0 (from r = 1) and, from r = −1, the curve

x = −y²(1 + e^y) / (ye^y + 2e^y − 2 − 3y).  (42)

For ETD2RK, the equation for r is

r = e^y + x(e^y − 1)/y + x(e^y − 1 − y)[e^y − 1 + x(e^y − 1)/y]/y²,  (43)

and the stability region is bounded by two curves on which r = 1, which are x + y = 0 and
x = −y²/(e^y − 1 − y).
The stability regions for these four methods are shown in Fig. 1. Note that for all four
methods the stability region includes the negative y-axis, and the width of this region
increases as |y| increases. The right-hand boundary is the same for all four methods, cor-
responding simply to the true stability boundary of (37). For both ETD2 and AB2BD2 the
left-hand boundary is parallel to y = 3x as y → −∞, while for ETD2RK, which has the
largest stability region, it is parallel to y = x.
An alternative way of presenting the stability regions is in the complex x plane, at a fixed
value of y that is real and negative. The boundaries of the stability regions are obtained by
substituting r = eiθ into the formulae (39)–(41) and (43) and then by solving for x. Figure 2
shows the stability regions plotted in this way for the same four methods, with y = −20. For
each method, the boundary of the stability region passes through the point x = −y. As in the
purely real case, AB2AM2 has the smallest stability region and ETD2RK has the largest.
In the limit y → −∞, the stability region for ETD2RK simplifies to the disc |x| < |y|.
In the same limit, the boundaries of the stability regions for ETD2 and AB2BD2 become
x = ye^{2iθ}/(1 − 2e^{iθ}) and that of AB2AM2 becomes x = −y(e^{2iθ} + e^{iθ})/(3e^{iθ} − 1). Note
that for AB2AM2 the radius of the stability region does not grow linearly with y at θ = π ,
as is also apparent from Fig. 1.
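This construction is easy to check numerically. For AB2AM2 the recurrence is linear in x once r is fixed, so on the unit circle r = e^{iθ} we can solve for x in closed form; the short sketch below (our own check, not code from the paper) confirms that the boundary passes through x = −y at θ = 0 and through x = −1 at θ = π.

```python
import numpy as np

def ab2am2_boundary(theta, y):
    """x on the AB2AM2 stability boundary for r = e^{i*theta}:
    the quadratic for r is linear in x, so x = [2r(r-1) - y r(r+1)]/(3r - 1)."""
    r = np.exp(1j*np.asarray(theta, dtype=float))
    return (2.0*r*(r - 1.0) - y*r*(r + 1.0))/(3.0*r - 1.0)

y = -20.0
x0 = ab2am2_boundary(0.0, y)     # r = 1: boundary should pass through x = -y
xpi = ab2am2_boundary(np.pi, y)  # r = -1: x = -1, independent of y
```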
In this section the second- and fourth-order ETD methods and the new Runge–Kutta
ETD methods described above in Sections 2.1 and 2.2 are compared with the standard
methods described in Sections 2.3 and 2.4. Such tests do not seem previously to have been
carried out. Our primary goal is to test the ETD methods on dissipative and dispersive partial
differential equations, in which the high-wavenumber modes experience rapid decay and
rapid oscillation, respectively. As a precursor to these PDE tests, we begin by examining
model ordinary differential equations for which the linear part gives either rapid decay or
FIG. 2. Stability regions (interior of closed curves) in the complex x plane with y = −20. The four methods
are AB2AM2 (dashed), ETD2 (solid), AB2BD2 (dash-dot), and ETD2RK (dotted).
oscillation. In these simple models more analytical progress is possible. In order to make a
fair comparison between different methods, most examples will focus on the second-order
formulae; however, in applications, higher-order schemes may be more appropriate.
Our first example is the linear, inhomogeneous ODE

u̇ = cu + sin t,  u(0) = u_0,  (44)

where we choose c = −100 to generate the rapid linear decay characteristic of stiff systems,
compared with the O(1) time scale of the forcing. For subsequent evaluation of the different
numerical schemes, we note that the exact solution is

u(t) = u_0 e^{ct} + (e^{ct} − cos t − c sin t)/(1 + c²).  (45)

The solution has a fast initial transient followed by slow evolution on the slow manifold,
and it is natural to seek numerical methods that capture both of these phases accurately, and work well when
the time step h is of the order of 1/c.
In evaluating the different numerical schemes, a useful feature of (44) is that the recur-
rence relations resulting from the various numerical methods can be solved exactly (see
Section 4.1.1 below). However, a disadvantage of this equation from the point of view of
distinguishing between the various numerical schemes is that initial conditions tend to get
lost as the solution becomes phase-locked to the forcing term. So poor numerical schemes
are “helped along” by the forcing, and can recover from a bad start. Below, in Sections 4.2
and 4.5, we treat systems with self-sustained oscillations, which do not have this drawback;
these later problems provide a more demanding test of the various numerical schemes.
For each of the schemes apart from AB2BD2, application to (44) yields a recurrence
relation of the form

w_{n+1} = α w_n + iβγ^n,  (46)

where u_n = Re(w_n). Table I gives the values of α, β, and γ for each numerical scheme.
The solution to (46), subject to the initial value w_0 = u_0, is
w_n = α^n w_0 + iβ(γ^n − α^n)/(γ − α).  (47)
It is possible to use this formula to evaluate the accuracy of the various numerical schemes
(apart from AB2BD2). To do so, we note that the exact solution (45) may be written as
u(t) = Re(w(t)), where
w(t) = e^{ct} w_0 − i(e^{−it} − e^{ct})/(i + c).  (48)
TABLE I
The Values of α, β, and γ in the Recurrence Relation (46) Corresponding
to Various Numerical Schemes

Scheme    α                  β                                                                  γ
ETD2      e^{ch}             [(1 + ch)e^{ch} − (1 + 2ch)]/(hc²) + e^{ih}[−e^{ch} + 1 + ch]/(hc²)    e^{−ih}
ETD2RK    e^{ch}             [(ch − 1)e^{ch} + 1]/(hc²) + e^{−ih}[e^{ch} − (1 + ch)]/(hc²)          e^{−ih}
IFAB2     e^{ch}             (h/2) e^{ch}(3 − e^{ch} e^{ih})                                        e^{−ih}
IFRK2     e^{ch}             (h/2)(e^{ch} + e^{−ih})                                               e^{−ih}
AB2AM2    (2 + ch)/(2 − ch)  h(3 − e^{ih})/(2 − ch)                                               e^{−ih}
where

ξ = (2 + √(1 + 2ch))/(3 − 2ch)   and   η = (2 − √(1 + 2ch))/(3 − 2ch).  (51)
Note that for AB2BD2, two starting values are needed. In situations where an exact
solution is known, this may be used to generate the second starting value; otherwise a
Runge–Kutta scheme, for instance, may be used at start-up.
Figure 3 shows the magnitude of the relative error (u num − u exact )/u exact in the numerical
solution to (44), with u 0 = 1 (off the slow manifold), at t = π/2 for the six methods above.
FIG. 3. Magnitude of the relative error at t = π/2 in (44) with u 0 = 1 for six methods.
TABLE II
The Relative Error at t = π/2 in the Numerical Solution to (44), with u_0 = 1
and c = −100, is Given by kh², Where k is the Constant Given in the Table
It is a straightforward matter to calculate, using the results above, this relative error in the
limit as h → 0. In this limit there is a clear ranking of the various schemes—the results are
shown in Table II. Note that for each method the relative error is smaller by a factor of c
than that expected from the truncation error per step; this is because the errors introduced
are exponentially damped as the calculation proceeds.
The most striking feature of Fig. 3 and Table II is the very poor performance of the
integrating factor methods IFAB2 and IFRK2; the error in these methods is a factor of
approximately 10³–10⁴ greater than that of the other methods. This is because the truncation
error per step in these methods is of order c²h³, while for the other methods it is of order
h³ ≪ c²h³. Other authors have pointed out the weakness of integrating factor methods [3, 6, 7].
The relative error for small h is shown in Fig. 4 for different values of c: The poor
performance of the integrating factor methods is again illustrated in this figure. The LI
FIG. 4. In the limit as h → 0, the relative error at t = π/2 in the numerical solution to (44) is given by kh².
The constant |k| is plotted as a function of c for various numerical schemes. Initial conditions on (u_0 = 0) and off
(u_0 = 1) the slow manifold give the first and second figures, respectively. When −c ≫ 1, the value of k for the
various numerical schemes is, in both cases: IFAB2 (−5c²/12); IFRK2 (c²/12); AB2BD2 (1); AB2AM2 (1/2);
ETD2 (5/12); ETD2RK (−1/12).
methods are almost as accurate as the ETD methods. However, this is due to fortuitous
damping of the errors that arise in the initial fast phase; if the results are compared at an
earlier time, the ETD methods show considerably greater accuracy than the LI methods.
The best of the methods considered, for all values of h, is ETD2RK. The error is smaller
than that of ETD2 by a factor of 5, which is consistent with the truncation errors for the two
methods. However, it should be remembered that this method requires twice as much CPU
time, so when this is taken into account the accuracy is improved only by a factor of 5/4.
The advantages of ETD methods become greater if we consider methods of fourth order.
Because of the second Dahlquist stability barrier, LI methods are not A-stable if their order
is greater than two (see, for example, [5] or [10]). The stability region for the fourth-order
Adams–Moulton method AM4 does not include the entire negative real axis, so this method
is not suitable for (44) except for very small h. For the fourth-order backward difference
method BD4, the stability region does include the negative real axis, so a fourth-order
linearly implicit method AB4BD4 can be constructed for (1);
FIG. 5. Magnitude of the relative error at t = π/2 in (44) with u 0 = 1 for four fourth-order methods. Accuracy
is ultimately limited by machine precision (here double-precision arithmetic is used).
half the storage (a significant factor for PDE applications) since it uses only previous values
of F, whereas AB4BD4 requires previous values of both F and u. But the most accurate
method is ETD4RK, with an error smaller than that of ETD4 by a factor of almost 400.
The corresponding example with a dispersive linear part is

u̇ = icu + e^{it},  u(0) = u_0,  (56)

with exact solution

u(t) = u_0 e^{ict} + (e^{it} − e^{ict})/(i(1 − c)),  (57)

where our interest is in the stiff case for which the real parameter c satisfies c ≫ 1. Our
analysis of the various numerical schemes follows that given above in Section 4.1.
analysis of the various numerical schemes follows that given above in Section 4.1.
By solving exactly the difference equations that correspond to the various numerical
schemes identified above, we are able to calculate the absolute and relative errors in advancing
the solution to time t = T using the time step h. For IFAB2, IFRK2, ETD2, and
ETD2RK, the results for large c are shown in Table III, for initial conditions on and off the
slow manifold (this manifold corresponds to u 0 = −i/(1 − c), so that the rapid oscillations
in (57) are removed). The absolute error is then k₁h²(e^{iT} − e^{iTc}), where k₁ is given in the
table. The relative error for initial conditions on the slow manifold is k₂h²(e^{iT} − e^{iTc})e^{−iT}
and off the slow manifold is k₃h²(e^{iT} − e^{iTc})e^{−iTc}; again k₂ and k₃ are given in Table III.
The corresponding large-c results for AB2AM2 and AB2BD2 are shown in Table IV. The
absolute error and relative error are denoted by a h² and r h², respectively, with superscripts
"on" or "off" indicating whether the initial condition lies on or off the slow manifold.
For initial conditions on the slow manifold, the results are very similar to those for (44):
The ETD methods are slightly more accurate than the LI methods, which are more accurate
than the IF methods by a factor of c2 . However, for initial conditions off the slow manifold,
the ETD methods are more accurate by a factor of c2 than IF methods, which in turn are more
accurate than LI methods by a factor c2 (this strongly situation-dependent performance of
LI and IF methods is discussed by Boyd [3], p. 269).
TABLE III
Errors in the Numerical Solution to (56) for Large c

      IFAB2          IFRK2          ETD2           ETD2RK
k₁    5ic/12         −ic/12         5i/(12c)       −i/(12c)
k₂    5c²/12         −c²/12         5/12           −1/12
k₃    5ic/(12u₀)     −ic/(12u₀)     5i/(12u₀c)     −i/(12u₀c)
TABLE IV
Errors in the Numerical Solution to (56) for Large c:
Schemes AB2AM2 and AB2BD2

          AB2AM2                           AB2BD2
a^on      (1/2) i(e^{iT} − e^{icT})/c      i(e^{iT} − e^{icT})/c
a^off     −(1/12) iT e^{icT} u₀c³          −(1/3) iT e^{icT} u₀c³
r^on      (1/2)(1 − e^{i(c−1)T})           1 − e^{i(c−1)T}
r^off     −(1/12) iT c³                    −(1/3) iT c³
The relative errors for (56) with c = 100 and the initial condition u 0 = 1 (off the slow
manifold) are shown in Fig. 6. Note that the ETD methods are the most accurate, by a factor
of order c2 = 104 , for a very wide range of h.
∂u/∂t = −2 ∂²u/∂x² − ∂⁴u/∂x⁴ − u ∂u/∂x,  (58)
which is the well-known Kuramoto–Sivashinsky equation [12]. The boundary conditions
are periodic, with spatial period 2π ; the initial condition chosen is u(x, 0) = 0.03 sin x. The
FIG. 6. Magnitude of the relative error at t = π/2 in (56) with u 0 = 1 for six methods.
FIG. 7. Magnitude of the relative error at t = 6 for the Kuramoto–Sivashinsky equation (58) for four methods.
fourth-derivative term makes the linear part of (58) extremely stiff, with rapid linear decay
of the high-wavenumber modes, so that standard explicit methods are impractical. Indeed,
the motivation for our interest in ETD methods came from PDEs similar to (58) but with
six derivatives in the linear term [14, 15].
The PDE (58) was solved with a pseudospectral method using 32 grid points without
dealiasing, and using double precision arithmetic for 0 ≤ t ≤ 6. The time stepping was carried
out in spectral space, so that the linear parts of the evolution equations for each Fourier
mode are uncoupled and the ETD, IF, and LI methods can be straightforwardly applied.
The relative error in ∫ u² dx at t = 6 is plotted in Fig. 7 for four second-order methods. The
error was measured by comparison with the “true” solution determined numerically using
a fourth-order method with a very small time step.
In applying the ETD methods to (58), we have used the limiting form of the coefficients
for the mode with zero wavenumber, in order to avoid division by zero. However, we did not
find it necessary to replace the explicit formulae for the coefficients by their Taylor series
for the modes with |ch| ≪ 1.
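A minimal solver along these lines might look as follows (a sketch, not the authors' code): ETD2 in Fourier space with 32 modes and no dealiasing, an ETD1 step at start-up, and the limiting (Adams–Bashforth) coefficient values where c = 0.

```python
import numpy as np

def ks_etd2(u0, h, nsteps):
    """Pseudospectral ETD2 for (58), u_t = -2 u_xx - u_xxxx - u u_x, period 2*pi."""
    N = u0.size
    k = np.fft.fftfreq(N, 1.0/N)           # integer wavenumbers
    c = 2.0*k**2 - k**4                     # diagonal linear operator in Fourier space
    ech = np.exp(c*h)
    with np.errstate(divide='ignore', invalid='ignore'):
        f1 = ((1.0 + c*h)*ech - 1.0 - 2.0*c*h)/(h*c**2)   # ETD2 weight of F_n
        f2 = (-ech + 1.0 + c*h)/(h*c**2)                  # ETD2 weight of F_{n-1}
    f1[c == 0], f2[c == 0] = 1.5*h, -0.5*h  # limiting values as c -> 0 (the AB2 weights)

    def nonlin(v):
        u = np.real(np.fft.ifft(v))
        ux = np.real(np.fft.ifft(1j*k*v))
        return np.fft.fft(-u*ux)

    v = np.fft.fft(u0)
    Fprev = nonlin(v)
    v = v*ech + (f1 + f2)*Fprev             # start-up: f1 + f2 = (e^{ch}-1)/c, i.e. ETD1
    for _ in range(nsteps - 1):
        Fn = nonlin(v)
        v, Fprev = v*ech + f1*Fn + f2*Fprev, Fn
    return np.real(np.fft.ifft(v))

x = 2.0*np.pi*np.arange(32)/32
u6 = ks_etd2(0.03*np.sin(x), h=0.01, nsteps=600)   # integrate (58) to t = 6
```

The small initial condition grows on the unstable k = 1 mode and saturates nonlinearly, so the solution at t = 6 should be O(1) and bounded.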
The results in Fig. 7 are qualitatively similar to those for (44), showing that (44) is a good
model problem for dissipative PDEs. The most accurate method is ETD2, with errors lower
than those of AB2AM2 by a factor of 1.7 for a wide range of h. The least accurate method
is the standard integrating factor method IFRK2.
The time-dependence of the relative error is shown in Fig. 8, for the same four methods,
with a fixed time step of h = 0.01. In the initial phase of exponential growth, both of the LI
methods perform poorly, compared with the IFRK2 and ETD2 methods. In the later stage
of nonlinear equilibration, IFRK2 is the least accurate, while the other three methods have
exponentially decaying errors. At all times the ETD2 method is the most accurate.
FIG. 8. Magnitude of the relative error as a function of t for the Kuramoto–Sivashinsky equation (58), with
h = 0.01.
FIG. 9. Error after one soliton period for the KdV equation (59) for six methods.
with the standard fourth-order RK scheme) are shown in Fig. 9. The ETD method shows a
tenfold improvement over the IF method.
As an example of a system with a nondiagonal linear part, we consider the ordinary
differential equations

u̇ = cu − v + (λv − cu)r²,  (60)

v̇ = u + cv − (λu + cv)r²,  (61)

where r² = u² + v² and c > 0. The behavior of this system is more readily observed by
using the amplitude r and phase θ , such that u = r cos θ and v = r sin θ. Then
ṙ = cr(1 − r²),  (62)

θ̇ = 1 − λr²,  (63)
so that (unless r is initially zero) r → 1 and θ̇ → 1 − λ at large time. The exact solution to
(62) and (63), i.e.,
r²(t) = r_0² / [r_0²(1 − e^{−2ct}) + e^{−2ct}],  (64)

θ(t) = θ_0 + (1 − λ)t − (λ/2c) log[r_0²(1 − e^{−2ct}) + e^{−2ct}],  (65)
where r (0) = r0 and θ (0) = θ0 , is useful in calculating the amplitude and phase errors in
the numerical solutions.
To implement the nondiagonal ETD methods, we first write (60) and (61) in the vector
form
u̇ = Lu + F, (66)
where
u c −1
(λv − cu)r 2
u= , L= , F= . (67)
v 1 c −(λu + cv)r 2
Note that the matrix L is nondiagonal. Although in this case we could diagonalize L by a
suitable change of variables, in applications this may not be a desirable procedure and it
may be more convenient to solve the problem in the original variables, for which the linear
operator is nondiagonal. By analogy with (3), we then multiply (66) by e−Lt and integrate
from t = tn to tn+1 to give the exact result
u_{n+1} = e^{Lh} u_n + e^{Lh} ∫_0^h e^{−Lτ} F(t_n + τ) dτ.  (68)
where

M_1 = L^{−1}(e^{Lh} − I),  (70)
and where I is the 2 × 2 identity matrix. For ETD2 we adopt the approximation
∫_0^h e^{−Lτ} F(t_n + τ) dτ = ∫_0^h e^{−Lτ} (F_n + (F_n − F_{n−1})τ/h) dτ + O(h³).  (71)
Thus,

e^{Lh} ∫_0^h e^{−Lτ} F(t_n + τ) dτ = M_1 F_n + M_2 (F_n − F_{n−1})/h + O(h³),  (73)

where M_2 = e^{Lh} ∫_0^h τ e^{−Lτ} dτ = L^{−2}(e^{Lh} − I − Lh).
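For a general diagonalizable L these matrices can also be generated numerically. The sketch below (our own check, with illustrative values of c and h) builds M₁ and M₂ by linear solves from the relations M₁ = L⁻¹(e^{Lh} − I) and M₂ = L⁻²(e^{Lh} − I − Lh), and compares e^{Lh} and M₁ against the closed forms (74) and (75).

```python
import numpy as np

def expm(A):
    """Matrix exponential via eigendecomposition (adequate for the normal matrix L here)."""
    w, V = np.linalg.eig(A)
    return ((V*np.exp(w)) @ np.linalg.inv(V)).real

c, h = 100.0, 0.05                          # illustrative values
L = np.array([[c, -1.0], [1.0, c]])
I = np.eye(2)
eLh = expm(L*h)
M1 = np.linalg.solve(L, eLh - I)            # M1 = L^{-1}(e^{Lh} - I)
M2 = np.linalg.solve(L @ L, eLh - I - L*h)  # M2 = L^{-2}(e^{Lh} - I - Lh)

# closed forms for comparison: (74) for e^{Lh} and (75) for M1
E, C, S = np.exp(c*h), np.cos(h), np.sin(h)
eLh_exact = E*np.array([[C, -S], [S, C]])
M1_exact = np.array([[-c + E*(c*C + S), -1.0 + E*(C - c*S)],
                     [ 1.0 - E*(C - c*S), -c + E*(c*C + S)]])/(1.0 + c*c)
```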
In this particular example, all the necessary matrices can be computed exactly. The useful
matrices are
e^{Lh} = e^{ch} [cos h  −sin h; sin h  cos h],  (74)

M_1 = (1/(1 + c²)) [−c + e^{ch}(c cos h + sin h)   −1 + e^{ch}(cos h − c sin h);
                    1 − e^{ch}(cos h − c sin h)    −c + e^{ch}(c cos h + sin h)],  (75)
M_2 = [µ_d  µ_o; −µ_o  µ_d],  (76)

where µ_d and µ_o follow from M_2 = L^{−2}(e^{Lh} − I − Lh). With these matrices, the scheme
ETD1 is
u_{n+1} = e^{Lh} u_n + M_1 F_n,  (79)
and ETD2 is

u_{n+1} = e^{Lh} u_n + M_1 F_n + M_2 (F_n − F_{n−1})/h.  (80)
The scheme ETD2 may be extended to give a Runge–Kutta scheme ETD2RK by first
calculating
a_n = e^{Lh} u_n + M_1 F_n,  (81)

and then setting

u_{n+1} = a_n + M_2 (F(a_n, t_n + h) − F_n)/h.  (82)
In order to evaluate the schemes ETD1, ETD2, and ETD2RK, we also introduce the more
successful competitors from above: AB2AM2 and IFRK2.
The AB2AM2 scheme is derived from the formula

u_{n+1} − u_n = (h/2) L(u_{n+1} + u_n) + (3h/2) F_n − (h/2) F_{n−1},  (83)

from which it follows that

u_{n+1} = (I − hL/2)^{−1}(I + hL/2) u_n + (h/2)(I − hL/2)^{−1}(3F_n − F_{n−1}).  (84)
For the standard Runge–Kutta integrating factor method IFRK2 we first calculate
and the parameter values c = 100, λ = 1/2. We evaluate the various schemes by comparing
their predicted amplitude and phase with the exact values of these quantities at time t = 1,
although other parameter values and end-times give similar results. The results are summarized
in Fig. 10, where it is seen that ETD2, ETD2RK, and AB2AM2 are the best schemes.
In particular, the integrating factor scheme IFRK2 is particularly poor in its calculation of
the amplitude (being out-performed by even the first-order scheme ETD1 in the range of
time steps considered).
While ETD2RK is the best scheme at capturing the amplitude of the evolving solution, it is rather poorer at calculating the phase. An error analysis reveals that there is no simple factor-of-5 difference between the schemes ETD2 and ETD2RK in this system.
FIG. 10. The magnitude of the relative error at t = 1 in the numerical solution to (66), with u_0 = 2 and v_0 = 1, and with c = 100 and λ = 1/2. The first figure shows the amplitude error (i.e., the error in r), and the second the corresponding phase error (in θ).

When L is not invertible, the inverse L^{-1} in (70) may be replaced by the pseudoinverse L^\dagger, giving

M_1 \equiv L^\dagger (e^{Lh} - I) + h e^{Lh} (I - L^\dagger L),   (88)

while e^{Lh} \int_0^h \tau e^{-L\tau}\, d\tau is given by

M_2 \equiv L^{\dagger 2} \bigl[e^{Lh} - (I + Lh)\bigr] + \tfrac{1}{2} h^2 e^{Lh} (I - L^\dagger L).   (89)

The numerical schemes ETD1, ETD2, and ETD2RK are then given by (79), (80), and (82), with M_1 and M_2 as in (88) and (89), respectively.
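Formulas (88) and (89) can be implemented with numpy's pinv; the following sketch (our own, with a hypothetical singular example for L) checks that a zero eigenvalue of L recovers the nonstiff limits M_1 → h and M_2 → h²/2:

```python
import numpy as np
from scipy.linalg import expm

def etd_matrices_pinv(L, h):
    """M1, M2 from (88)-(89) using the pseudoinverse; valid even for singular L."""
    n = L.shape[0]
    I = np.eye(n)
    E = expm(L * h)                      # e^{Lh}
    Ld = np.linalg.pinv(L)               # L^dagger
    P = I - Ld @ L                       # projector onto the null space of L
    M1 = Ld @ (E - I) + h * E @ P
    M2 = Ld @ Ld @ (E - (I + L * h)) + 0.5 * h**2 * E @ P
    return M1, M2

h = 1e-2
# Singular example: the zero eigenvalue would defeat L^{-1} but not L^dagger
L = np.array([[0.0, 0.0], [0.0, -10.0]])
M1, M2 = etd_matrices_pinv(L, h)

# For the zero mode, M1 reduces to h and M2 to h^2/2 (the nonstiff limits)
assert np.isclose(M1[0, 0], h)
assert np.isclose(M2[0, 0], 0.5 * h**2)
```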
and the initial condition f(y, 0) = f_0(y). This initial boundary-value problem arises in calculating the flow of a viscous fluid in the channel −1 ≤ y ≤ 1, driven by uniform withdrawal
of fluid through the porous walls of the channel [11]; the Reynolds number R is a dimensionless measure of the withdrawal speed. The solution to (90) and (91), after transients have
decayed, can exhibit self-sustained oscillations with intricate spatial and temporal behavior
and a rich analytical structure. Our numerical scheme to solve (90) and (91) proceeds by
first rendering the boundary conditions homogeneous by writing f (y, t) = p(y) + g(y, t),
where p(y) = −(y − 2)(y + 1)^2/4 satisfies p(−1) = p′(−1) = p′(1) = p(1) − 1 = 0 and p′′′′(y) ≡ 0. We then solve the resulting forced PDE for g by Chebyshev collocation
(although clearly a variety of other methods could be used). With this method, the vector
g = (g_1, . . . , g_{N−1})^t of values of g at the interior collocation points satisfies an evolution
equation of the form
ġ = Lg + n, (92)
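The stated properties of the homogenizing polynomial p(y) = −(y − 2)(y + 1)²/4 are easy to verify; a short check of our own using numpy's polynomial module:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# p(y) = -(y - 2)(y + 1)^2 / 4, coefficients in increasing powers of y
p = P.polymul([-2.0, 1.0], P.polymul([1.0, 1.0], [1.0, 1.0]))  # (y-2)(y+1)^2
p = [-coef / 4.0 for coef in p]
dp = P.polyder(p)

# Boundary conditions: p(-1) = p'(-1) = p'(1) = 0 and p(1) = 1
assert np.isclose(P.polyval(-1.0, p), 0.0)
assert np.isclose(P.polyval(-1.0, dp), 0.0)
assert np.isclose(P.polyval(1.0, dp), 0.0)
assert np.isclose(P.polyval(1.0, p), 1.0)

# p is cubic, so its fourth derivative vanishes identically
assert len(P.polyder(p, 4)) == 1 and P.polyder(p, 4)[0] == 0.0
```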
5. CONCLUSIONS
We have developed and tested a class of numerical methods for systems with stiff linear
parts, based on combining exponential time differencing for the linear terms with a method
similar to Adams–Bashforth for the nonlinear terms. The ETD method is straightforward
to apply and can be extended to arbitrary order (cf. [1]). As the stiffness parameter tends to
zero, the ETD method approaches the Adams–Bashforth method of the same order; as the
stiffness parameter tends to infinity, the nonlinear Galerkin method [13] is recovered.
In addition to these multistep ETD methods, we have derived new Runge–Kutta forms
of the ETD method, of second, third, and fourth order. These are easier to use than the
high-order multistep forms, since they do not require initialization, and are more accurate.
These ETD methods have good stability properties and are widely applicable to dissipative
PDEs and nonlinear wave equations. They are particularly well suited to Fourier spectral methods, for which the linear part is diagonal.
We have carried out extensive tests of ETD methods, comparing them with linearly
implicit and integrating factor methods. For all the examples tested, the ETD methods are
more accurate than either LI or IF methods. For solutions which follow a slow manifold,
the second-order ETD method is slightly more accurate than the LI methods, and has the
advantage that it readily generalizes to higher order, whereas LI methods do not. But for
solutions off the slow manifold, the results of Section 4.2 show that second-order ETD
methods are more accurate than LI methods by the fourth power of the stiffness parameter c ≫ 1, and more accurate than integrating factor methods by a factor c^2. Like integrating
factor methods, the ETD methods solve the linear parts exactly. However, ETD methods
avoid the major drawback of IF methods, which is the introduction of the fast time scale
into the nonlinear terms, leading to large error constants.
REFERENCES
1. G. Beylkin, J. M. Keiser, and L. Vozovoi, A new class of time discretization schemes for the solution of
nonlinear PDEs, J. Comput. Phys. 147, 362 (1998).
2. J. P. Boyd, Eight definitions of the slow manifold: Seiches, pseudoseiches and exponential smallness, Dyn.
Atmos. Oceans 22, 49 (1995).
3. J. P. Boyd, Chebyshev and Fourier Spectral Methods (Dover, New York, 2001).
4. C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang, Spectral Methods in Fluid Dynamics, Springer
Series in Computational Physics (Springer-Verlag, Berlin, 1988).
5. B. Fornberg, A Practical Guide to Pseudospectral Methods (Cambridge Univ. Press, Cambridge, UK, 1995).
6. B. Fornberg and T. A. Driscoll, A fast spectral algorithm for nonlinear wave equations with linear dispersion,
J. Comput. Phys. 155, 456 (1999).
7. B. García-Archilla, Some practical experience with the time integration of dissipative equations, J. Comput. Phys. 122, 25 (1995).
8. P. Henrici, Discrete Variable Methods in Ordinary Differential Equations (Wiley, New York, 1962).
9. R. Holland, Finite-difference time-domain (FDTD) analysis of magnetic diffusion, IEEE Trans. Electromagn.
Compat. 36, 32 (1994).
10. A. Iserles, A First Course in the Numerical Analysis of Differential Equations (Cambridge Univ. Press,
Cambridge, UK, 1996).
11. J. R. King and S. M. Cox, Asymptotic analysis of the steady-state and time-dependent Berman problem,
J. Eng. Math. 39, 87 (2001).
12. Y. Kuramoto and T. Tsuzuki, Persistent propagation of concentration waves in dissipative media far from
thermal equilibrium, Prog. Theor. Phys. 55, 356 (1976).
13. M. Marion and R. Temam, Nonlinear Galerkin methods, SIAM J. Numer. Anal. 26, 1139 (1989).
14. P. C. Matthews and S. M. Cox, Pattern formation with a conservation law, Nonlinearity 13, 1293 (2000).
15. P. C. Matthews and S. M. Cox, One-dimensional pattern formation with Galilean invariance near a stationary
bifurcation, Phys. Rev. E 62, R1473 (2000).
16. P. A. Milewski and E. Tabak, A pseudospectral procedure for the solution of nonlinear wave equations with
examples from free-surface flows, SIAM J. Sci. Comp. 21, 1102 (1999).
17. P. G. Petropoulos, Analysis of exponential time-differencing for FDTD in lossy dielectrics, IEEE Trans.
Antennas Propagation 45, 1054 (1997).
18. C. Schuster, A. Christ, and W. Fichtner, Review of FDTD time-stepping for efficient simulation of electric conductive media, Microwave Optical Technol. Lett. 25, 16 (2000).
19. A. Taflove, Computational Electrodynamics: The Finite-Difference Time-Domain Method, Artech House
Antenna Library (Artech House, London, 1995).
20. L. N. Trefethen, Lax-stability vs. eigenvalue stability of spectral methods, in Numerical Methods for Fluid
Dynamics III, edited by K. W. Morton and M. J. Baines (Clarendon Press, Oxford, 1988), pp. 237–253.
21. L. N. Trefethen, Spectral Methods in Matlab (Soc. for Industr. & Appl. Math., Philadelphia, 2000).
22. J. Z. Zhu, L.-Q. Chen, J. Shen, and V. Tikare, Coarsening kinetics from a variable-mobility Cahn–Hilliard
equation: Application of a semi-implicit Fourier spectral method, Phys. Rev. E 60, 3564 (1999).