
Highlights

High-Order Schemes of Exponential Time Differencing for Stiff Systems with Nondiagonal Linear Part

Evelina V. Permyakova, Denis S. Goldobin

• An approach to the implementation of exponential time differencing methods is presented
• The approach allows for simulation of equation systems with a nondiagonal linear part
• The performance gain for high-precision calculations is up to several orders of magnitude
• Efficient for PDEs, Fokker–Planck equations, and large ensembles of oscillators


High-Order Schemes of Exponential Time Differencing for Stiff Systems with Nondiagonal Linear Part

Evelina V. Permyakova^a, Denis S. Goldobin^{a,b,∗}

^a Institute of Continuous Media Mechanics UB RAS, 1 Akad. Koroleva street, Perm 614013, Russia
^b Institute of Physics and Mathematics, Perm State University, 15 Bukireva street, Perm 614990, Russia

∗ Corresponding author. Email addresses: evelina.v.permyakova@gmail.com (Evelina V. Permyakova), Denis.Goldobin@gmail.com (Denis S. Goldobin)

arXiv:2208.14292v2 [math.NA] 13 Oct 2024

Abstract
Exponential time differencing (ETD) methods are a powerful tool for high-performance numerical simulation of computationally challenging problems in condensed matter physics, fluid dynamics, and chemical and biological physics, where mathematical models often possess fast oscillating or decaying modes; in other words, they are stiff systems. The practical implementation of these methods for systems with a nondiagonal linear part of the equations is hampered by the infeasibility of an analytical calculation of the exponential of a nondiagonal linear operator; in this case, the coefficients of the exponential time differencing scheme cannot be calculated analytically. We suggest an approach where these coefficients are calculated numerically by means of auxiliary problems. We rewrite the high-order Runge–Kutta type schemes in terms of the solutions to these auxiliary problems and practically examine the accuracy and computational performance of these methods for a heterogeneous Cahn–Hilliard equation, a sixth-order spatial derivative equation governing pattern formation in the presence of an additional conservation law, and a Fokker–Planck equation governing the macroscopic dynamics of a network of neurons.
Keywords:
exponential time differencing, direct numerical simulation, nonlinear partial differential equations, stiff systems

1. Introduction

In condensed matter physics and in chemical and biological physics, many systems are governed by mathematical models which can be called 'stiff' systems, i.e., systems where some modes are fast oscillating or decaying. The dynamics of the wave function in atomic lattices with frozen parametric disorder in the potential is essentially shaped by fast oscillating modes, and this dynamics gives rise to the phenomenon of Anderson localization [1, 2, 3, 4, 5, 6]. The spinodal decomposition [7, 8, 9, 10, 11, 12] of two-component mixtures and heat and mass transfer in active media [13, 14, 15, 16, 17, 18, 19, 20, 21, 22] are governed by partial differential equations with high-order spatial derivatives; these derivatives drive a fast decay of short-wavelength perturbations. Systems near a SNIPER (saddle-node infinite period) bifurcation of chemical oscillations [23] are subject to thermal noise and can be subject to other noise sources; at the bifurcation point, these systems can be described by the same mathematical models as quadratic integrate-and-fire neurons with noise and synaptic coupling [24, 25, 26, 27, 28, 29, 30]. The numerical simulation of the Fokker–Planck equation of these chemical oscillation and neuronal systems is again the case of a stiff system.
Generally, fast oscillating or decaying modes impose a severe limitation on the time stepsize, which has to be very small. For oscillating modes, a small stepsize is needed to maintain the demanded accuracy of simulation [6]. Fast decaying modes die out and do not influence the dynamics of the chemical/physical system; therefore, there is no need for a high precision of their simulation. However, for a finite stepsize, fast decay can produce a numerical instability if the stepsize is not small enough. As a result, the numerical simulation of stiff systems requires the employment of ad hoc approaches and implicit schemes, which require sophisticated individual preliminary mathematical work, are costly or infeasible in higher dimensions, and are helpless for fast oscillatory modes in conservative systems. Otherwise, the simulation of these problems has to be performed with a tiny time stepsize and becomes extremely CPU (central processing unit) time consuming. An alternative approach is a relatively new class of methods of exponential time differencing (ETD) [31, 32, 33, 34, 35, 36, 37, 38, 39, 40].
In ETD methods [31, 32, 33] the linear part of the equations, which gives rise to the fast modes, is solved exactly. For the nonlinear part, in the case of a first-order scheme one constructs an exact solution for a quasistatic approximation of these terms during one time step. For an nth-order scheme, one effectively constructs the exact solution for the case where the nonlinear part within one time step is an (n − 1)-order polynomial of time [31]. This approach provides high resolution for the temporal dynamics of fast modes and prevents numerical instabilities even for quite large values of the time stepsize.
The application of ETD methods is straightforward when the linear part is diagonal, which is common for spectral methods for problems with periodic boundary conditions and the Fourier basis. However, for spectral methods with other basis functions (e.g., Chebyshev polynomials) the linear part can become nondiagonal. Moreover, for problems with frozen heterogeneity of parameters [26, 41, 42, 43, 44, 45], the linear part becomes essentially nondiagonal. In this case the analytical calculation of the coefficients of ETD schemes (considered in Sec. 2) can become not just difficult but completely impossible.
In this paper we develop an approach to the calculation of the coefficients of high-order schemes of exponential time differencing for systems with a nondiagonal linear part. In this approach the coefficients are calculated by means of the numerical integration of auxiliary problems. This integration is carried out with a plain standard method, like the predictor–corrector one, and a very small time stepsize τ_1, ensuring numerical stability, but over a short time interval, namely the time step τ of the ETD scheme. The basic concept was suggested in [46]; here we develop and apply the approach to the Runge–Kutta type schemes [31] of 3rd and 4th order. For the sake of illustration, we consider the numerical simulation of the discrete-space versions of partial differential equations, the Cahn–Hilliard equation [7, 8] and the Matthews–Cox equation [15, 16], and of the Fourier projection of a Fokker–Planck equation [24, 27, 47]. The performance gain varies from two to many orders of magnitude. The technical implementation of the suggested approach and the bulk of the text below do not employ advanced topics from linear algebra and tensor calculus.
The paper is organized as follows. In Sec. 2, we provide three exponential differencing schemes of the Runge–Kutta type and rewrite them in terms of the solutions of the auxiliary problems. The procedure for constructing the solutions of the auxiliary problems is provided. In Sec. 3, we consider examples of the application of the ETD schemes to three generic mathematical models: the Cahn–Hilliard equation (Sec. 3.1), the sixth-order spatial derivative Matthews–Cox equation (Sec. 3.2), and the Fokker–Planck equation for a network of theta-neurons (Sec. 3.3). The accuracy and CPU time performance of the ETD schemes are analysed. In Sec. 4 we derive conclusions. In Appendix A, we examine the "strong" accuracy of the suggested approach to the numerical solution of the auxiliary problems.

2. Exponential time differencing

We deal with the problem of numerical integration of an equation system of the following form:

u̇ = L · u + f(u, t) ,   (1)

where u(t) is an N-component vector, L is an [N × N] matrix with time-independent elements, and f(u, t) is the nonlinear part of the equations. The decomposition of the equations into the linear part with time-independent coefficients, L · u, and the nonlinear part is not unique and is guided by convenience: f(u, t) can even contain a time-independent linear part. However, it is desirable that all the dominating sources of numerical instability be collected into L · u as fully as possible.
In this paper we consider the implementation of three exponential differencing schemes of the Runge–Kutta type [31]; for equation system (1) these schemes take the following form:
(ETD2RK) two-step scheme:

a = e^{Lτ} · u(t) + L^{−1} · (e^{Lτ} − I) · f(u(t), t) ,   (2a)
u(t + τ) = a + L^{−2} · (e^{Lτ} − I − Lτ) · [f(a, t + τ) − f(u(t), t)]/τ ,   (2b)

where τ is the time stepsize of the numerical scheme, a is a preliminary approximation of u for the next time step t + τ, I is the unit matrix, and the exponential of a matrix is defined by the series exp A = I + A + A^2/2! + A^3/3! + ... ;
(ETD3RK) three-step scheme:

a = e^{Lτ/2} · u(t) + L^{−1} · (e^{Lτ/2} − I) · f(u(t), t) ,   (3a)
b = e^{Lτ} · u(t) + L^{−1} · (e^{Lτ} − I) · [2f(a, t + τ/2) − f(u(t), t)] ,   (3b)
u(t + τ) = e^{Lτ} · u(t) + (L^{−3}/τ^2) · { [−4 − Lτ + e^{Lτ}(4 − 3Lτ + L^2τ^2)] · f(u(t), t)
    + 4[2 + Lτ + e^{Lτ}(−2 + Lτ)] · f(a, t + τ/2) + [−4 − 3Lτ − L^2τ^2 + e^{Lτ}(4 − Lτ)] · f(b, t + τ) } ,   (3c)

where a and b are the preliminary approximations of u for t + τ/2 and t + τ, respectively;
(ETD4RK) four-step scheme:

a = e^{Lτ/2} · u(t) + L^{−1} · (e^{Lτ/2} − I) · f(u(t), t) ,   (4a)
b = e^{Lτ/2} · u(t) + L^{−1} · (e^{Lτ/2} − I) · f(a, t + τ/2) ,   (4b)
c = e^{Lτ/2} · a + L^{−1} · (e^{Lτ/2} − I) · [2f(b, t + τ/2) − f(u(t), t)] ,   (4c)
u(t + τ) = e^{Lτ} · u(t) + (L^{−3}/τ^2) · { [−4 − Lτ + e^{Lτ}(4 − 3Lτ + L^2τ^2)] · f(u(t), t)
    + 2[2 + Lτ + e^{Lτ}(−2 + Lτ)] · [f(a, t + τ/2) + f(b, t + τ/2)]
    + [−4 − 3Lτ − L^2τ^2 + e^{Lτ}(4 − Lτ)] · f(c, t + τ) } .   (4d)

The error of the ETD2RK scheme on one time step is −τ^3 f̈/12 [31], the ETD3RK scheme error is ∝ τ^4 d^3f/dt^3, and the ETD4RK scheme error is ∝ τ^5 d^4f/dt^4.
In the numerical schemes above, the inverse matrix L^{−1} is introduced for the brevity of the expressions involving the exponential; wherever the matrix L^{−n} appears in the equations, it is multiplied by expressions whose series representation starts from the matrix L^m, m ≥ n. For instance, L^{−1} · (e^{Lτ} − I) is actually a formal expression for the series τI + τ^2 L/2 + τ^3 L^2/3! + τ^4 L^3/4! + ... . Therefore, the zero eigenvalues of the matrix L, which would make L^{−1} → ∞, are not an issue; the divergence of L^{−1} only prevents the usage of a shorter form of the equations. However, in this paper, we are handling the case of a nondiagonal matrix L, for which the analytical calculation of the matrix e^{Lτ} can be impossible or problematic.
More convenient for our approach is to recast the numerical schemes in terms of the solutions of the following auxiliary problems [46]. The general solution to the Cauchy problem

u̇ = L · u   (5)

is given by the matrix Q(τ) ≡ e^{Lτ} such that

u(t = τ | u(0), f = 0) = Q(τ) · u(0) .

Hence, one obtains the definition of the matrix Q(τ), which is convenient for numerical simulations:

Q_{jk}(τ) = u_j(t = τ | u_l(0) = δ_{lk}, f = 0) .   (6)

The solutions to the problems

u̇ = L · u + g t^{n−1} ,   u(0) = 0 ,   g = const   (7)

are given by the matrices M_n(τ) ≡ ∫_0^τ e^{L(τ−t)} t^{n−1} dt :

u(t = τ | u(0) = 0, f(t) = g t^{n−1}) = M_n(τ) · g .

Hence, one obtains the definition of the matrix M_n(τ), which is convenient for numerical simulations:

(M_n(τ))_{jk} = u_j(t = τ | u(0) = 0, f_l(t) = δ_{lk} t^{n−1}) .   (8)
Using the recurrence relation M_{n+1}(τ) = L^{−1} · (−τ^n I + n M_n(τ)), one can obtain

M_1(τ) = L^{−1} · (e^{Lτ} − I) ,   (9a)
M_2(τ) = L^{−2} · (e^{Lτ} − I − Lτ) ,   (9b)
M_3(τ) = L^{−3} · (2e^{Lτ} − 2I − 2Lτ − L^2τ^2) ,   (9c)
... .

As proposed in [46], for an essentially nondiagonal shape of matrix L, the evaluation of matrices Q and Mn defined
by equations (6) and (8) can be conducted via the direct numerical integration of problems (5) and (7) with a very
small time stepsize but on a short time interval — one step of the ETD scheme τ.
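For concreteness, the following minimal sketch (in Python/NumPy; the authors' implementation is in FORTRAN and is not reproduced here) shows how definitions (6) and (8) translate into a computation: each column of Q(τ) is obtained by integrating the homogeneous problem (5) from a unit initial condition, and each column of M_n(τ) by integrating the forced problem (7) with f_l(t) = δ_{lk} t^{n−1} from zero initial conditions. The function and argument names are illustrative; the internal integrator is a simple predictor–corrector (Heun) scheme with a small stepsize τ_1.

```python
import numpy as np

def build_etd_matrices(rhs_linear, N, tau, tau1, n_max=3, dtype=float):
    """Evaluate Q(tau) and M_1..M_{n_max} per definitions (6) and (8).

    rhs_linear(u) must return L @ u for the (possibly nondiagonal) linear part.
    Each matrix column is the solution of an auxiliary problem, integrated over
    [0, tau] with a predictor-corrector (Heun) step of internal stepsize tau1.
    """
    n_steps = int(round(tau / tau1))

    def integrate(u, forcing):
        t = 0.0
        for _ in range(n_steps):
            F0 = rhs_linear(u) + forcing(t)
            u_pred = u + tau1 * F0                      # predictor
            F1 = rhs_linear(u_pred) + forcing(t + tau1)
            u = u + 0.5 * tau1 * (F0 + F1)              # corrector
            t += tau1
        return u

    Q = np.empty((N, N), dtype=dtype)
    M = [np.empty((N, N), dtype=dtype) for _ in range(n_max)]
    for k in range(N):
        e_k = np.zeros(N, dtype=dtype)
        e_k[k] = 1.0
        # column k of Q: homogeneous problem (5) with u_l(0) = delta_{lk}
        Q[:, k] = integrate(e_k, lambda t: 0.0)
        # column k of M_n: forced problem (7) with f_l(t) = delta_{lk} t^{n-1}
        for n in range(1, n_max + 1):
            M[n - 1][:, k] = integrate(np.zeros(N, dtype=dtype),
                                       lambda t, n=n: e_k * t ** (n - 1))
    return Q, M
```

With Q and the M_n assembled once, each step of the ETD schemes below reduces to a handful of matrix-vector products.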
In terms of matrices M_n the schemes (2), (3), and (4) acquire the following form:
(ETD2RK):

a = Q · u(t) + M_1 · f(u(t), t) ,   (10a)
u(t + τ) = a + τ^{−1} M_2 · [f(a, t + τ) − f(u(t), t)] ;   (10b)

(ETD3RK):

a = Q_{τ/2} · u(t) + M_{1,τ/2} · f(u(t), t) ,   (11a)
b = Q · u(t) + M_1 · [2f(a, t + τ/2) − f(u(t), t)] ,   (11b)
u(t + τ) = Q · u(t) + [2M_3/τ^2 − 3M_2/τ + M_1] · f(u(t), t)
    − [4M_3/τ^2 − 4M_2/τ] · f(a, t + τ/2) + [2M_3/τ^2 − M_2/τ] · f(b, t + τ) ,   (11c)

where Q_{τ/2} = Q(τ/2), M_{1,τ/2} = M_1(τ/2), and for the full stepsize τ the corresponding subscript is omitted;
(ETD4RK):

a = Q_{τ/2} · u(t) + M_{1,τ/2} · f(u(t), t) ,   (12a)
b = Q_{τ/2} · u(t) + M_{1,τ/2} · f(a, t + τ/2) ,   (12b)
c = Q_{τ/2} · a + M_{1,τ/2} · [2f(b, t + τ/2) − f(u(t), t)] ,   (12c)
u(t + τ) = Q · u(t) + [2M_3/τ^2 − 3M_2/τ + M_1] · f(u(t), t)
    − [2M_3/τ^2 − 2M_2/τ] · [f(a, t + τ/2) + f(b, t + τ/2)] + [2M_3/τ^2 − M_2/τ] · f(c, t + τ) .   (12d)
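As a usage illustration, a single ETD4RK step in the form (12) then looks as follows (a sketch, assuming Q, Q_{τ/2}, M_1, M_{1,τ/2}, M_2, M_3 have been precomputed, e.g., with the routine sketched above, and f(u, t) is supplied by the user; names are illustrative):

```python
import numpy as np

def etd4rk_step(u, t, tau, f, Q, Qh, M1, M1h, M2, M3):
    """One step of scheme (12); Qh = Q(tau/2) and M1h = M_1(tau/2)."""
    fu = f(u, t)
    a = Qh @ u + M1h @ fu                            # (12a)
    fa = f(a, t + tau / 2)
    b = Qh @ u + M1h @ fa                            # (12b)
    fb = f(b, t + tau / 2)
    c = Qh @ a + M1h @ (2.0 * fb - fu)               # (12c)
    W_u = 2.0 * M3 / tau**2 - 3.0 * M2 / tau + M1    # weight of f(u(t), t)
    W_ab = 2.0 * M3 / tau**2 - 2.0 * M2 / tau        # weight of f(a) + f(b)
    W_c = 2.0 * M3 / tau**2 - M2 / tau               # weight of f(c)
    return Q @ u + W_u @ fu - W_ab @ (fa + fb) + W_c @ f(c, t + tau)   # (12d)
```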
In the literature, for the difficult case of a non-diagonalizable L, the matrices Q and M_n are calculated approximately from formulas (9) either by means of Taylor expansions and projection onto Krylov subspaces [38, 39] or with discretized contour integrals in the complex plane [40]. Additionally, for small eigenvalues of the matrix τL (the presence of which is common), the computations with formula (e^{τL} − I)/(τL) [Eq. (9a)] suffer from cancellation errors, which grow rapidly for M_n as n increases further above 1. Tackling the latter issue is involved but doable [31, 40]; this issue does not appear with the algorithm we consider in this paper. Moreover, the technical implementation of this algorithm and the text below do not employ any advanced topics from linear algebra and tensor calculus.

3. Examples

3.1. 1D Cahn–Hilliard equation


Figure 1: Oscillatory solutions of Cahn–Hilliard equation (13) (left panel) and Matthews–Cox equation (17) (right panel) are plotted for advection velocity v = 1 and localized excitability q(x) (14) in the domain of length L = 10.

For a broad class of active-media systems the pattern formation in thin layers is governed by the Cahn–Hilliard equation [13, 14, 18] (CHE), which also describes the spinodal decomposition of two-component mixtures [7, 8, 12]. For the generality of consideration, we will also allow for an advective transfer along the layer [8, 18]. In the
one-dimensional case (the patterns are homogeneous along the second direction in the layer plane), CHE for the field
u(x, t) with advection velocity v reads

∂u/∂t = −v ∂u/∂x − ∂^2/∂x^2 [ q(x)u + ∂^2u/∂x^2 − u^3 ] ,   (13)
where q(x) is the local deviation of the bifurcation parameter from the instability threshold of the homogeneous infinite
layer (the patterns are excited for q > 0).
As an example, we consider CHE in a domain 0 < x < L with trivial boundary conditions:

u(0) = ∂u/∂x|_{x=0} = u(L) = ∂u/∂x|_{x=L} = 0 .

The local excitability parameter q(x) is assumed to be positive within a certain excitation zone and negative beyond it:

q(x) = 2.5 for 3L/10 < x < 7L/10, and q(x) = −3 otherwise.   (14)

For a localized excitation q(x), strong enough advection v results in an oscillatory behavior (e.g., see [45]). For v = 1
and q(x) given by (14), the oscillatory solution of Eq. (13) is presented in Fig. 1.

3.1.1. Numerical scheme


The spatially-discrete version of CHE (13) for the vector u(t) ≡ {u_j(t) | j = 1, 2, ..., N}, where u_j(t) = u(j h_x, t), h_x = L/N, can be written as Eq. (1) with

(L · u)_j = v (u_{j−1} − u_{j+1})/(2h_x) − (q_{j+1}u_{j+1} − 2q_j u_j + q_{j−1}u_{j−1})/h_x^2
    − (u_{j+2} − 4u_{j+1} + 6u_j − 4u_{j−1} + u_{j−2})/h_x^4 ,   (15a)
f_j(u, t) = (u_{j+1}^3 − 2u_j^3 + u_{j−1}^3)/h_x^2 ,   (15b)

where q_j = q(j h_x) is given by (14). The trivial boundary conditions we consider imply that one can formally set u_{−1} = u_0 = u_{N+1} = u_{N+2} = 0 in the right-hand sides of (15).
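A sketch of the discrete right-hand sides (15a) and (15b), with the trivial boundary conditions imposed by the zero padding u_{−1} = u_0 = u_{N+1} = u_{N+2} = 0 (the function names are illustrative; the same linear callable can serve as the right-hand side of the auxiliary problems of Sec. 2):

```python
import numpy as np

def che_operators(N, L_domain, v, q_func):
    """Discrete Cahn-Hilliard operators, Eqs. (15a) and (15b), with zero padding."""
    hx = L_domain / N
    x = hx * np.arange(1, N + 1)
    q = q_func(x)
    jj = np.arange(2, N + 2)          # positions of u_1..u_N inside the padded array

    def pad(w):
        # prepend u_{-1} = u_0 = 0 and append u_{N+1} = u_{N+2} = 0
        return np.concatenate(([0.0, 0.0], w, [0.0, 0.0]))

    def linear_part(u):               # (L . u)_j, Eq. (15a)
        U, Qu = pad(u), pad(q * u)
        return (v * (U[jj - 1] - U[jj + 1]) / (2 * hx)
                - (Qu[jj + 1] - 2 * Qu[jj] + Qu[jj - 1]) / hx**2
                - (U[jj + 2] - 4 * U[jj + 1] + 6 * U[jj] - 4 * U[jj - 1] + U[jj - 2]) / hx**4)

    def nonlinear_part(u, t):         # f_j(u, t), Eq. (15b)
        U3 = pad(u**3)
        return (U3[jj + 1] - 2 * U3[jj] + U3[jj - 1]) / hx**2

    return linear_part, nonlinear_part
```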
Figure 2: For Cahn–Hilliard equation (13), matrices log_10 |Q_jk| (panel a), log_10 |(M_1)_jk| (panel b), log_10 |(τ^{−1} M_2)_jk| (panel c), log_10 |(τ^{−2} M_3)_jk| (panel d) are plotted versus indices k and j for τ = 2.5 × 10^{−4}, L = 10, N = 200; τ_1 = 0.1 h_x^4 = 6.25 × 10^{−7}.

The problem (1,15) can be numerically simulated with the basic Euler scheme or the predictor–corrector (PC) one. For both of these schemes, the fourth-order x-derivative term imposes a limitation on the time stepsize, τ_1 < h_x^4/8; for a bigger time stepsize the direct numerical simulation becomes unstable. More sophisticated Runge–Kutta type schemes or other high-order methods are not needed here, as the PC scheme with such a small time stepsize already warrants an excessive accuracy of the numerical integration in time. Henceforth, we will employ the PC scheme:

F(t) = L · u(t) + f(u(t), t) ,   (16a)
a = u(t) + F(t) τ_1 ,   (16b)
F(t + τ_1) = L · a + f(a, t + τ_1) ,   (16c)
u(t + τ_1) = u(t) + τ_1 [F(t) + F(t + τ_1)]/2 .   (16d)
For problem (1,15), the PC scheme (16) was employed to integrate the auxiliary problems (5) and (7) for t ∈ [0, τ] to evaluate the matrices Q_{τ/2}, Q (6) and M_{1,τ/2}, M_n, n = 1, 2, 3 (8), respectively. The time stepsize for the direct numerical integration of the auxiliary problems was τ_1 = 0.1 h_x^4. In Fig. 2, a sample structure of these matrices can be seen.
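Putting the pieces together for this example (a sketch reusing the hypothetical helpers `che_operators`, `build_etd_matrices`, and `etd4rk_step` introduced above; parameter values follow Fig. 2):

```python
import numpy as np

N, L_dom, v, tau = 200, 10.0, 1.0, 2.5e-4
q_func = lambda x: np.where((x > 0.3 * L_dom) & (x < 0.7 * L_dom), 2.5, -3.0)   # Eq. (14)

linear_part, nonlinear_part = che_operators(N, L_dom, v, q_func)
hx = L_dom / N
tau1 = 0.1 * hx**4            # safely below the PC stability limit h_x^4 / 8

# preparatory stage: matrices for the full step tau and for the half step tau/2
Q, (M1, M2, M3) = build_etd_matrices(linear_part, N, tau, tau1, n_max=3)
Qh, (M1h,) = build_etd_matrices(linear_part, N, tau / 2, tau1, n_max=1)

# time stepping with the ETD4RK scheme (12)
u, t = np.zeros(N), 0.0
u[N // 2] = 0.1               # an arbitrary small initial disturbance
for _ in range(int(round(1.0 / tau))):
    u = etd4rk_step(u, t, tau, nonlinear_part, Q, Qh, M1, M1h, M2, M3)
    t += tau
```

The preparatory stage dominates only for short simulated time intervals; for long runs its cost is amortized (cf. Figs. 3 and 4).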

3.1.2. Accuracy and performance of numerical simulation with exponential time differencing schemes
The ETD2RK, ETD3RK, and ETD4RK schemes with matrices Q_{τ/2}, Q, M_{1,τ/2}, M_n, n = 1, 2, 3 calculated as described in Sec. 3.1.1 were employed for the numerical simulation of CHE (13) with v = 1 and q(x) given by (14) in the domain of length L = 10 (see Fig. 1). In Figs. 3 and 4, the dependencies of the accuracy and the simulation performance on the stepsize τ of the ETD schemes are presented.
In Fig. 3, one can compare the performance of the ETD methods to that of the PC method (blue solid circles).

Figure 3: The error rate and the CPU time for the numerical simulation of Cahn–Hilliard equation (13) with several ETD schemes over the time interval 0 < t < 50 are plotted versus the ETD stepsize τ for N = 200 (i.e., h_x = 0.05). Solid squares: ETD2RK; solid up-triangles: ETD3RK; solid down-triangles: ETD4RK; solid circles: predictor–corrector scheme with fixed time stepsize τ_1 = 0.1 h_x^4 = 6.25 × 10^{−7}. CPU times for the preliminary (preparatory) calculation of the matrices Q and M_n required for the respective ETD scheme are plotted in the right panel with open symbols. CPU times are provided for an Intel(R) Core(TM) i7-4790K CPU at 4.00 GHz with hyperthreading disabled; RAM: DDR3 16 GB; program in FORTRAN.


Figure 4: The error rate and the CPU time for the numerical simulation of CHE (13) with several ETD schemes are plotted versus the ETD stepsize τ for N = 400 (i.e., h_x = 0.025); τ_1 = 0.1 h_x^4 = 3.90625 × 10^{−8}. See the caption of Fig. 3 for notations and values of other parameters.

The numerical error of a specific scheme is evaluated as the deviation of the result from the result calculated with the PC scheme (16) with τ_1 = 0.01 h_x^4, which is by a factor of 10 smaller than the time stepsize 0.1 h_x^4 needed for practical computations. The accuracy of the PC method is excessive, but it is dictated by the maximal admissible value of the time stepsize h_x^4/8. In turn, the ETD methods can provide decent accuracy even for large values of τ, where they perform much faster (see the right panel) than the PC method. For example, with a required accuracy of 10^{−6} (see crests in Fig. 3), one can pick a time stepsize as large as τ ≈ 0.005 for the ETD2RK scheme, 0.02 for ETD3RK, and 0.04 for ETD4RK. The biggest performance gain is achieved with the ETD4RK scheme and exceeds a factor of 100 for all three schemes. For an increased accuracy of the spatial discretization (see Fig. 4, where N is doubled compared to the case of Fig. 3), the performance gain provided by the ETD schemes significantly increases.
Note, the CPU time for the preliminary calculations of Q and Mn increases linearly with τ and can become
nonsmall (see open symbols in the right panels of Figs. 3 and 4). This CPU time can be significant for the problems
where simulation over short or moderate intervals of time t is sufficient. For these problems, an optimal performance
is achieved for the values of τ, which provide approximately equal CPU times for the preliminary calculation of Q
and Mn and the run of the ETD scheme (see the crossings of the lines marked by solid and open symbols in Figs. 3
and 4). The overall performance gain is decreased, but still large. The ETD2RK scheme becomes the most efficient
one in this case. However, the most computationally demanding problems are those of complex dynamics [8, 10, 9,
19, 20, 21, 48, 49, 50, 51, 52, 53, 54, 55] (including spatiotemporal chaos) and dynamics in the presence of frozen
parametric disorder [41, 43, 42, 44, 45], where the CPU time for the preliminary calculations can be neglected.
Figure 5: For Matthews–Cox equation (17), matrices log_10 |Q_jk| (panel a), log_10 |(M_1)_jk| (panel b), log_10 |(τ^{−1} M_2)_jk| (panel c), log_10 |(τ^{−2} M_3)_jk| (panel d) are plotted versus indices k and j for τ = 1.6 × 10^{−4}, L = 10, N = 100; τ_1 = 0.02 h_x^6 = 2 × 10^{−8}.

In Appendix A, our analysis is complemented with the examination of the accuracy of the approximate calculation of Q and M_n. The inaccuracies in Q and M_n are primarily contributed by fast decaying modes, for which the contribution
into the inaccuracy of a numerically simulated solution u(t) is exponentially suppressed. The impact of inaccuracies
of Q and Mn related to different decaying perturbation modes onto the inaccuracy of u(t) is practically impossible to
unpack. The net error of the numerical method is contributed by two sources: the error of the ETD scheme and the
error of approximate calculation of matrices Q and Mn . With all the importance of the information on the inaccuracy
of Q and Mn , the net error decides the practical applicability of the approach; this error is reported in the left panels
of Figs. 3 and 4.

3.2. 1D pattern formation with a conservation law


The conservation laws (of the chemical species mass, etc.) are structurally stable (persistent) properties of systems and influence the general form of the equations governing pattern formation. Matthews and Cox [15] studied the model equation governing pattern formation with an additional conservation law:

∂u/∂t = −v ∂u/∂x − ∂^2/∂x^2 [ q(x)u − 2 ∂^2u/∂x^2 − ∂^4u/∂x^4 − u^3 ] ,   (17)

where the terms inside the brackets are just those of the well-studied Swift–Hohenberg equation [56] and an additional
advective v-term is introduced [16]. Let us consider the Matthews–Cox equation (MCE) as an example of the pattern
formation equation with the sixth-order spatial derivative.


Figure 6: The error rate and the CPU time for the numerical simulation of MCE (17) with several ETD schemes are plotted versus the ETD stepsize τ for N = 50 (i.e., h_x = 0.2); τ_1 = 0.02 h_x^6 = 1.6 × 10^{−7}. See the caption of Fig. 3 for notations and values of other parameters.


Figure 7: The error rate and the CPU time for the numerical simulation of MCE (17) with several ETD schemes are plotted versus the ETD stepsize τ for N = 100 (i.e., h_x = 0.1); τ_1 = 0.02 h_x^6 = 2 × 10^{−8}. See the caption of Fig. 3 for notations and values of other parameters.

We consider MCE in a domain 0 < x < L with trivial boundary conditions:

u(0) = ∂u/∂x|_{x=0} = ∂^2u/∂x^2|_{x=0} = u(L) = ∂u/∂x|_{x=L} = ∂^2u/∂x^2|_{x=L} = 0 .

The local excitability parameter q(x) is assumed to be the same as for CHE (14). For a localized excitation q(x), strong enough advection v results in an oscillatory behavior. For v = 1 and q(x) given by (14), the oscillatory solution of Eq. (17) is presented in Fig. 1.

3.2.1. Numerical scheme


The spatially-discrete version of MCE (17) for the vector u(t) ≡ {u_j(t) | j = 1, 2, ..., N}, where u_j(t) = u(j h_x, t), h_x = L/N, can be written as Eq. (1) with

(L · u)_j = v (u_{j−1} − u_{j+1})/(2h_x) − (q_{j+1}u_{j+1} − 2q_j u_j + q_{j−1}u_{j−1})/h_x^2
    + 2 (u_{j+2} − 4u_{j+1} + 6u_j − 4u_{j−1} + u_{j−2})/h_x^4
    + (u_{j+3} − 6u_{j+2} + 15u_{j+1} − 20u_j + 15u_{j−1} − 6u_{j−2} + u_{j−3})/h_x^6 ,   (18a)
f_j(u, t) = (u_{j+1}^3 − 2u_j^3 + u_{j−1}^3)/h_x^2 ,   (18b)

where q_j = q(j h_x) is given by (14). The trivial boundary conditions we consider imply that one can formally set u_{−2} = u_{−1} = u_0 = u_{N+1} = u_{N+2} = u_{N+3} = 0 in the right-hand sides of (18).
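Relative to the CHE sketch above, only the linear operator changes; a minimal sketch of (18a) with the extended zero padding u_{−2} = ... = u_{N+3} = 0 (the nonlinear part (18b) is identical to (15b); names are illustrative):

```python
import numpy as np

def mce_linear_part(u, q, v, hx):
    """(L . u)_j of Eq. (18a); three zero ghost nodes are used on each side."""
    N = u.size
    U = np.concatenate(([0.0] * 3, u, [0.0] * 3))
    Qu = np.concatenate(([0.0] * 3, q * u, [0.0] * 3))
    jj = np.arange(3, N + 3)
    return (v * (U[jj - 1] - U[jj + 1]) / (2 * hx)
            - (Qu[jj + 1] - 2 * Qu[jj] + Qu[jj - 1]) / hx**2
            + 2 * (U[jj + 2] - 4 * U[jj + 1] + 6 * U[jj] - 4 * U[jj - 1] + U[jj - 2]) / hx**4
            + (U[jj + 3] - 6 * U[jj + 2] + 15 * U[jj + 1] - 20 * U[jj]
               + 15 * U[jj - 1] - 6 * U[jj - 2] + U[jj - 3]) / hx**6)
```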
The problem (1,18) can be numerically simulated with the basic Euler scheme or the predictor–corrector one. For both of these schemes, the sixth-order x-derivative term imposes a limitation on the time stepsize, τ_1 < h_x^6/32; for a bigger time stepsize the direct numerical simulation becomes unstable. Henceforth, we will employ the PC scheme (16).
For problem (1,18), the PC scheme (16) was employed to integrate the auxiliary problems (5) and (7) for t ∈ [0, τ] to evaluate the matrices Q_{τ/2}, Q, M_{1,τ/2}, M_n, n = 1, 2, 3. The time stepsize for the direct numerical integration of the auxiliary problems was τ_1 = 0.02 h_x^6. In Fig. 5, a sample structure of these matrices can be seen.

3.2.2. Accuracy and performance of numerical simulation with exponential time differencing schemes
The ETD2RK, ETD3RK, and ETD4RK schemes with matrices Q_{τ/2}, Q, M_{1,τ/2}, M_n, n = 1, 2, 3 calculated as described in Sec. 3.2.1 were employed for the numerical simulation of MCE (17) with v = 1 and q(x) given by (14) in the domain of length L = 10 (see Fig. 1). In Figs. 6 and 7, the dependencies of the accuracy and the simulation performance on the stepsize τ of the ETD schemes are presented. One can see an even more dramatic performance gain than for the numerical simulation of CHE (13).
Generally, for partial differential equations with the highest-order spatial derivative ∂^m/∂x^m, the performance gain for the ETD methods is ∝ h_x^{m−2}, as anticipated in [46]. However, in Figs. 2 and 5, one can see that the matrix elements rapidly decay away from the diagonal. For nonlarge τ, the absolute values of a significant fraction of the elements are below ∼ 10^{−16}, which is the level of the double-precision computer accuracy. The elements smaller than the required error level can be set to zero without any damage to the scheme accuracy. The optimization of a program code for the sparseness of the matrices Q and M_n allows for a noticeable performance acceleration. For a given stepsize τ, a higher derivative ∂^m/∂x^m creates a bigger spread of nonsmall values of the matrix elements (compare Fig. 2 to Fig. 5, where the stepsize τ is even somewhat smaller). Therefore, this additional optimization becomes less beneficial for high-order derivatives, where the gain is actually large without any optimization.
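The sparsity optimization mentioned above can be as simple as zeroing the matrix elements that fall below the acceptable error level and switching to a sparse storage format; a minimal sketch (SciPy's CSR format is one possible choice and is not prescribed by the paper):

```python
import numpy as np
from scipy.sparse import csr_matrix

def sparsify(A, drop_tol=1e-14):
    """Zero the elements of A below drop_tol (relative to the largest element) and store sparsely."""
    B = A.copy()
    B[np.abs(B) < drop_tol * np.abs(A).max()] = 0.0
    return csr_matrix(B)

# e.g., Q_s = sparsify(Q); the ETD step then uses Q_s @ u instead of Q @ u.
```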

3.3. Macroscopic dynamics of population of quadratic integrate-and-fire neurons with noise


The next example we consider is the macroscopic dynamics of a large population of coupled oscillators. The quadratic integrate-and-fire neuron (QIF) [57] is not only a useful toy model for the excitable behavior of neurons, but also a normal form for Class I excitable neurons near the bifurcation point [58]. Networks of QIFs can exhibit sophisticated macroscopic dynamics; these dynamics and the effect of endogenic noise on them can often be studied and well understood within the framework of the Fokker–Planck equation [24, 25, 26, 27, 28, 29]. However, a reasonably accurate direct numerical simulation of this equation for some important collective regimes becomes a challenging task. Below we provide the equations for a large recurrent synaptic network of QIFs with global coupling and endogenic Gaussian noise, explain the typical accuracy breakdown of its direct numerical simulation, and test the three ETD schemes for it.

3.3.1. Governing equations


In this section we provide a concise derivation and interpretation of equation system (25) and (27), the direct numerical simulation of which can be a challenging task tackled by means of ETD methods.
We consider a population of QIFs with endogenic noise:

V̇_n = V_n^2 + I_n ,   (19)
I_n = η_n + σζ_n(t) + J s(t) + I(t) ,   (20)

where V_n is the membrane voltage of the nth neuron, η_n is the excitability parameter of the individual neuron (an isolated neuron is excitable for η_n < 0 and periodically spiking otherwise), I(t) is the external input current, and σζ_n(t) are independent Gaussian endogenic noises: ⟨ζ_n(t) ζ_m(t + t′)⟩ = 2δ_{nm} δ(t′), where δ_{nm} is the Kronecker delta, which is 1 for n = m and 0 otherwise. When V_n reaches +∞ it is reset to −∞ and a synaptic spike is generated [58]. The input synaptic current from other neurons, J s(t), is characterised by the coupling coefficient J, which is negative for an inhibitory coupling, and a common field s(t) equal to the neuron firing rate r(t) for a thermodynamically large population with instantaneous synaptic spikes.


Figure 8: Instantaneous state {z_j} of system (25) and (27) after a transient process for η_0 = −2, γ = 1, J = 10, σ^2 = 10, and periodically modulated external input current I(t) = 0.3 sin 10t. The numerical errors {δz_j} are plotted for the simulations with ETD schemes with τ = 0.025 and τ_1 = 1.25 × 10^{−6}.

One can introduce a phase variable φ,

V_n = tan(φ_n/2) ,

and rewrite Eq. (19) as

φ̇_n = (1 − cos φ_n) + (1 + cos φ_n)[η_n + σζ_n(t) + J s(t) + I(t)] .   (21)

Let us index the QIFs with the value of their parameter η_n. The Fokker–Planck equation for the probability density w_η(φ, t) of stochastic system (21) reads

∂w_η/∂t + ∂/∂φ { [1 − cos φ + (1 + cos φ)(η + Js + I(t))] w_η } = σ^2 ∂/∂φ ( (1 + cos φ) ∂/∂φ [(1 + cos φ) w_η] ) .   (22)
Now we calculate the firing rate r in terms of φ. Given the distribution of η is g(η), in the thermodynamic limit of a large population the firing rate equals r(t) = ∫ q_η(φ = π) g(η) dη, where q_η is the probability density flux,

q_η = [1 − cos φ + (1 + cos φ)(η + Js + I(t))] w_η − σ^2 (1 + cos φ) ∂/∂φ [(1 + cos φ) w_η] .

For φ = π, the flux q_η(φ = π) = 2w_η(π) and one finds

r(t) = ∫ q_η(π) g(η) dη = 2 ∫ w_η(π) g(η) dη .   (23)

In the Fourier space, w_η(φ, t) = (1/2π)[1 + Σ_{j=1}^∞ (a_j(t) e^{−ijφ} + c.c.)], where "c.c." stands for the complex conjugate, and Eq. (22) takes the following form:

ȧ_j(η) = j [ 2i a_j + (i/2) A_η (a_{j−1} + 2a_j + a_{j+1}) ]
    − σ^2 [ (3/2) j^2 a_j + (j^2 − j/2) a_{j−1} + (j^2 + j/2) a_{j+1} + (j(j − 1)/4) a_{j−2} + (j(j + 1)/4) a_{j+2} ] ,   (24)

where A_η = η + Js + I(t) − 1; a_0 = 1 and a_{−j} = a_j^*, by definition. For a heterogeneous population with the Lorentzian distribution g(η) = γ/[π(γ^2 + (η − η_0)^2)], with median η_0 and half-width at half maximum γ, one can employ the residue theorem [59, 60, 61] to derive from Eq. (24) the infinite chain of dynamics equations for the Kuramoto–Daido order parameters [62]

z_j = ∫ dη g(η) a_j(η) = ∫ dη g(η) ∫ dφ w_η(φ) e^{ijφ} .

Figure 9: For equation system (26) and (27), matrices log_10 |Q_jk| (panel a), log_10 |(M_1)_jk| (panel b), log_10 |(τ^{−1} M_2)_jk| (panel c), log_10 |(τ^{−2} M_3)_jk| (panel d) are plotted versus indices k and j for τ = 10^{−4}, η_0 = −2, γ = 1, J = 10, σ^2 = 10; τ_1 = 1.25 × 10^{−6}.

One finds

ż_j = j [ 2i z_j + ((iA_η − γ)/2)(z_{j−1} + 2z_j + z_{j+1}) ]
    − σ^2 [ (3/2) j^2 z_j + (j^2 − j/2) z_{j−1} + (j^2 + j/2) z_{j+1} + (j(j − 1)/4) z_{j−2} + (j(j + 1)/4) z_{j+2} ]   (25)
    ≡ (L · z)_j + f_j(z, t) ,   (26)

where z = {z_0, z_1, z_2, ...} (which does not contradict z_0 = 1),

f_j = i j [(Jr + I(t))/2] (z_{j−1} + 2z_j + z_{j+1}) ,

and everything else is collected in the linear part with constant coefficients. Notice that the Jr-term of f is quadratic with respect to z, as the firing rate r(t) is a function of the system state. Namely, substituting the Fourier series of w_η(φ) into Eq. (23), one finds the neuron firing rate [63]

r = (1/π)(1 − z_1 − z_1^* + z_2 + z_2^* − z_3 − z_3^* + z_4 + z_4^* + ...)
  = Re W / π ,   W = 1 − 2z_1 + 2z_2 − 2z_3 + 2z_4 + ... .   (27)
To summarize, the macroscopic dynamics of a recurrent synaptic network of QIFs with all-to-all coupling is
governed by the system of equations (25) and (27); its Eq. (1) form is given by (26) and (27).
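A sketch of the right-hand side in the split form (26),(27) for a finite truncation z_1, ..., z_N follows (here z_0 = 1, z_{−1} = z_1^*, and z_{N+1} = z_{N+2} = 0 are imposed explicitly; after the residue-theorem reduction, the constant part of the coefficient, i(η_0 − 1) − γ, is assigned to the linear operator, while the (Jr + I(t))-dependent part forms f; variable names and the default parameter values, taken from Fig. 8, are illustrative):

```python
import numpy as np

def qif_chain_rhs(z, t, eta0=-2.0, gamma=1.0, J=10.0, sigma2=10.0,
                  I_ext=lambda t: 0.3 * np.sin(10.0 * t)):
    """Split right-hand side (L . z)_j + f_j(z, t) of Eqs. (25)-(27) for z = (z_1, ..., z_N)."""
    N = z.size
    jj = np.arange(1, N + 1)
    # extended array ze such that ze[m + 1] = z_m for m = -1, 0, 1, ..., N + 2
    ze = np.concatenate(([np.conj(z[0]), 1.0 + 0.0j], z, [0.0, 0.0]))
    zm2, zm1, z0, zp1, zp2 = ze[jj - 1], ze[jj], ze[jj + 1], ze[jj + 2], ze[jj + 3]

    # firing rate, Eq. (27): W = 1 - 2 z_1 + 2 z_2 - 2 z_3 + ...
    W = 1.0 + 2.0 * np.sum((-1.0) ** jj * z)
    r = W.real / np.pi

    zs = zm1 + 2.0 * z0 + zp1
    # constant-coefficient part of Eq. (25): i(eta0 - 1) - gamma stays in L
    Lz = (jj * (2.0j * z0 + 0.5 * (1j * (eta0 - 1.0) - gamma) * zs)
          - sigma2 * (1.5 * jj**2 * z0 + (jj**2 - jj / 2) * zm1 + (jj**2 + jj / 2) * zp1
                      + jj * (jj - 1) / 4 * zm2 + jj * (jj + 1) / 4 * zp2))
    # state- and time-dependent part f_j(z, t) of Eq. (26)
    fz = 0.5j * jj * (J * r + I_ext(t)) * zs
    return Lz, fz
```

Because Lz is linear in z, the same callable can be used to assemble the matrices Q and M_n numerically (with dtype=complex in the routine sketched in Sec. 2), while fz plays the role of f(u, t) in the ETD steps.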
3.3.2. Direct numerical simulation challenges and ETD methods
For perfect synchrony regimes, the distribution w(φ, t) is a travelling delta-function δ(φ − ϕ(t)), the Fourier spectrum of which is z_j = e^{ijϕ(t)} and does not decay with j. In physically meaningful set-ups the synchrony will be imperfect, but high-synchrony regimes are still of interest. The series z_j for these regimes decays slowly and may require as many as 1000–2000 elements for a reasonably accurate simulation of the dynamics of the observables z_1(t), r(t), etc. [47] In what follows, we estimate how demanding Eqs. (25) with large j are with respect to the numerical time stepsize.
Typical explicit schemes which are stable for an infinitesimally small time stepsize can become unstable for stepsizes larger than some τ_*. For a perturbation δz_j ∼ C_k e^{ikj}, where only −π < k < π make sense as j is integer, the reference value of τ_* can be estimated from the condition |δż_j τ_*|/|δz_j| ∼ 1 (details vary from scheme to scheme, but the order of magnitude is 1). For Eq. (25) and large j, this condition reads:

| { j [2i + (iA_η − γ)(1 + cos k)] − jσ^2 [ j(3/2 + 2 cos k + (1/2) cos 2k) + i(sin k + (1/2) sin 2k) ] } τ_* | ∼ 1 .

Keeping only the principal contributions and maximizing over k, one finds approximately

[ 2j √((1 + A_η)^2 + γ^2) + 4j^2 σ^2 ] τ_* ∼ 1 .

The Runge–Kutta–Merson and the simplest predictor–corrector schemes yield stability threshold values of the same order of magnitude. For computationally challenging cases of j ∼ 1000, the maximal admissible time stepsize for both conventional schemes becomes as small as τ_* ∼ 10^{−7}/(σ^2 + 0.0005 √((1 + A_η)^2 + γ^2)); however, this or an even bigger number of modes is required for some high-synchrony regimes.
For the application of an ETD method to this problem, one practically cannot make the matrix L diagonal or calculate the matrix exp(Lτ) analytically in some other way. On the other hand, one can calculate the matrices Q, M_n, etc. numerically with the Runge–Kutta–Merson or the simplest predictor–corrector scheme. In Fig. 8, one can see the results of the direct numerical simulation of system (25) and (27) with ETD methods for N = 200 modes z_j. The reference "accurate" solution is calculated with the predictor–corrector scheme (16) and a small time step τ = 1.25 × 10^{−7}.[1] The ETD scheme stepsize is set to a large value τ = 0.025 in order to illustrate the accuracy order of the three employed ETD schemes. In Fig. 9, the structure of the numerically calculated matrices Q, M_n is presented for τ = 10^{−4}.
In Table 1, the performance of the three ETD schemes is compared against the background of the conventional Runge–Kutta–Merson and predictor–corrector (16) schemes. For typical studies, as many modes as N = 200–400 are sufficient for accurate resolution of fine elements of phase diagrams. For short-term simulations, the optimal stepsize was found to be around τ = 0.0005, and the ETD schemes provide a performance gain in the range 3–10; for long-term simulations, where one can neglect the time for the calculation of the scheme coefficients, a bigger stepsize τ = 0.005 allows one to have decent accuracy along with a performance gain of up to two orders of magnitude.

4. Conclusion

We have suggested a practical approach to the implementation of the exponential time differencing schemes of the Runge–Kutta type [31] for the numerical simulation of stiff systems with a nondiagonal linear part. In this approach, the numerical schemes are written in terms of the solutions of auxiliary problems which are preliminarily integrated numerically by means of an explicit basic method with a very small time step but over a short interval of time, namely one step of the ETD scheme. As a result, one can not only use the ETD methods for systems where an analytical calculation of the exponential of a nondiagonal linear operator is infeasible, but also employ the same program code for different equation systems; the only part of the code which is subject to change is the subroutine with the equations to be simulated.

[1] The stepsize around τ ∼ 10^{−7} was found to provide the best accuracy with double-precision calculations for the given set of parameter values. Here, the accumulation of the relative machine error ∼ 10^{−15} at each time step sums up over unit time to a value much smaller than ∼ 10^{−15}/10^{−7} = 10^{−8} because of the fast decay of the leading perturbation modes; as a result, the net numerical error rate per unit time is well below 10^{−12}.

Table 1: CPU time for the direct numerical simulation of equation system (25) and (27), averaged over 100 runs on the dimensionless time interval t ∈ [0; 5] for η_0 = −2, γ = 1, J = 10, σ^2 = 10, I(t) = 0.3 sin 10t; "preparation": time for the calculation of Q and M_n [sec]; "run": time of the simulation with a numerical scheme with pre-calculated matrices Q and M_n [sec]. The stepsize τ was varied for the ETD schemes, and for the RKM and PC schemes the maximal values allowing for numerically stable simulations were used; relative error ε ≡ Σ_{j=1}^N |δz_j| / Σ_{j=1}^N |z_j|. For the CPU specifications see the caption of Fig. 3.

scheme                  N     τ             preparation [sec]   run [sec]   ε
Runge–Kutta–            200   2.3 × 10^−6          —             16.1        —
Merson method           300   10^−6                —             53.0        —
                        400   0.5 × 10^−6          —             139         —
predictor–              200   1.2 × 10^−6          —             13.8       6.1 × 10^−11
corrector (16)          300   5.5 × 10^−7          —             49.9       1.4 × 10^−11
                        400   3.1 × 10^−7          —             106        4.9 × 10^−10
ETD2RK                  200   0.0005              0.752          0.936      5.1 × 10^−8
                        200   0.005               7.51           0.149      5.0 × 10^−6
                        300   0.0005              4.23           1.96       5.1 × 10^−8
                        300   0.005               42.1           0.300      5.0 × 10^−6
                        400   0.0005              12.2           3.58       5.1 × 10^−8
                        400   0.005               119            0.656      5.0 × 10^−6
ETD3RK                  200   0.0005              1.25           2.12       1.1 × 10^−10
                        200   0.005               12.5           0.345      1.0 × 10^−7
                        300   0.0005              7.05           5.56       1.9 × 10^−10
                        300   0.005               70.1           1.21       1.0 × 10^−7
                        400   0.0005              20.3           17.6       4.3 × 10^−10
                        400   0.005               199            3.23       1.0 × 10^−7
ETD4RK                  200   0.0005              1.25           2.59       5.6 × 10^−11
                        200   0.005               12.5           0.397      4.2 × 10^−10
                        300   0.0005              7.05           6.02       1.5 × 10^−10
                        300   0.005               70.1           1.11       4.5 × 10^−10
                        400   0.0005              20.3           17.5       4.5 × 10^−10
                        400   0.005               199            3.36       7.9 × 10^−10

The employment of this approach for the Cahn–Hilliard equation (13) and the sixth-order spatial derivative Matthews–Cox equation (17) demonstrated that one can have a performance gain of two orders of magnitude for the former and of three orders for the latter. Moreover, for partial differential equations, the performance gain grows as one increases the required accuracy of the spatial resolution; the basic gain for an equation with the highest-order spatial derivative ∂^m/∂x^m is ∝ h_x^{m−2}. For nonlarge m, one can introduce a code optimization accounting for the sparseness of the matrices Q and M_n, which gives some further performance acceleration on top of the basic one (this acceleration is by far not as large as the basic gain). For the simulation of a long-term evolution, as required for complex spatiotemporal dynamics or Anderson localization phenomena, the maximal performance gain is achieved with the 4th-order Runge–Kutta type ETD scheme; for a short-time simulation, the usage of the 2nd-order ETD scheme is somewhat more efficient.
The employment of the approach for spectral methods is illustrated with the 1D Fokker–Planck equation governing the macroscopic dynamics of populations of quadratic integrate-and-fire neurons with endogenic noise. Here, one can have a controllable accuracy and achieve a performance gain of up to one order of magnitude for time-independent regimes, where short-term simulations are sufficient, and of several orders of magnitude for time-dependent regimes where long-term simulations are demanded. Our approach is also efficient for the numerical simulation of other classical systems such as populations of coupled active rotators [53, 54].
The suggested approach can be extended to multiple dimensions in a straightforward way [46].

Figure 10: The strong accuracy of the calculation of Q_jk, (M_1)_jk, (τ^{−1} M_2)_jk, (τ^{−2} M_3)_jk is demonstrated with the values of the coefficients for CHE (13) and j = 100 (panel a: τ = 2.5 × 10^{−4}, N = 200, i.e., h_x = 0.05; other parameter values are as in Figs. 2 and 3) and for MCE (17) and j = 25 (panel b: τ = 3.2 × 10^{−4}, N = 50, i.e., h_x = 0.2; other parameter values are as in Fig. 6). Solid lines: "exact" values; dotted lines: error; the color coding is identical in panels a and b. The "exact" coefficients are calculated with Taylor series of length M = 2088 and 384 decimal digits of intermediate computations (a) and M = 1044 and 192 decimal digits (b).


Figure 11: For Fokker–Planck equation (25), "exact" coefficients Q_jk, (M_1)_jk, (τ^{−1} M_2)_jk, (τ^{−2} M_3)_jk are plotted versus index k with solid lines for j = 50 (the left set of curves) and j = 100 (the right set of curves); the error of the coefficients calculated with the proposed procedure is plotted with dotted curves. The "exact" coefficients are calculated with Taylor series of length M = 522 and 96 decimal digits. See the caption of Fig. 9 for the parameter values.

However, the basic version of the code for a partial differential equation, say, in 2D requires matrices Q and M_n of size [N × N] with N = N_x × N_y, where N_x and N_y are the numbers of nodes (or modes) in the x- and y-directions. With the optimization for the sparseness of Q and M_n, one can store a reduced amount [N × N_spr] of elements of each matrix, where N_spr = 2mτ min{ N_y |c_x| ln[h_x/(err τ)], N_x |c_y| ln[h_y/(err τ)] } for the highest-order spatial derivative term c_x ∂^m u/∂x^m + c_y ∂^m u/∂y^m and the acceptable relative error per unit time, err. For high-precision simulations in multiple dimensions the usage of the ETD schemes becomes memory-demanding. Elevated memory requirements and the fact that the bulk of the computation process is the multiplication of large matrices and vectors make these high-performance simulation methods naturally suitable for parallel and super-computing.

Appendix A. Strong accuracy of computation of Q and M_n

In this section we evaluate the "strong" accuracy of the approximate calculation of matrices Q and M_n with the suggested algorithm. Below we also discuss why one distinguishes between the "strong" accuracy of the calculation of these coefficients and the accuracy of the numerical solution u(t), which is the "soft" accuracy of the numerical simulation of dynamical system (1) and is often higher; significantly higher in the examples we considered in this paper.

Matrices Q = e^{τL} and M_n given by Eqs. (9) are determined by their series:

Q = e^{τL} = I + τL + (τ^2/2!) L^2 + (τ^3/3!) L^3 + ... ,   (A.1)
M_n = (τ^n/n) I + (τ^{n+1}/(n(n+1))) L + (τ^{n+2}/(n(n+1)(n+2))) L^2 + ... + (τ^{n+m}/(n(n+1)...(n+m))) L^m + ... .   (A.2)

Notice that the set of formulas (9) is just a short way to write series (A.2), and the series are well defined for the case where matrix L possesses zero eigenvalues (which are inherent to conservation laws in physical systems and to some bifurcation points) and L^{−1} diverges.
A mathematically idealistic way to compute these matrices is related to their diagonalization, or decomposition over the basis of eigenvectors y_j (j = 1, 2, ..., N) of L, defined by L · y_j = λ_j y_j, with the corresponding Hermitian adjoint problems v_j^T · L = λ_j v_j^T, where the superscript "T" indicates a transposed vector/matrix and the orthonormalization condition v_j^T · y_m = δ_{jm} is fulfilled. With a solved eigenvalue problem, one can substitute L = Σ_{j=1}^N λ_j y_j v_j^T into series (A.1) and (A.2) and calculate

Q = Σ_{j=1}^N e^{λ_j τ} y_j v_j^T ,   (A.3)
M_n = Σ_{j=1}^N [(n − 1)!/(λ_j)^n] ( e^{λ_j τ} − Σ_{m=0}^{n−1} (λ_j τ)^m/m! ) y_j v_j^T .   (A.4)

The j-th summand in M_n is again correctly defined also for λ_j = 0: one takes the limit λ_j → 0 and finds (τ^n/n) y_j v_j^T.
In practice, the exact solution of the eigenvalue problem is not always feasible, and approaches for high-precision approximate calculations are available in the literature (e.g., employing the projection onto Krylov subspaces [38, 39]).
Our task in this section is to calculate the "exact" matrices Q and M_n with a controlled accuracy (arbitrarily small error) and examine the results of their calculation with the algorithm of this paper. Therefore, we perform a calculation with Taylor series (A.1) and (A.2) with as many elements and decimal digits as needed to obtain the results with the standard double precision accuracy, i.e., not less than 16 decimal digits. The Taylor series of the exponentials of (λ_j τ) in Eqs. (A.3) and (A.4) with respect to τ are equivalent to series (A.1) and (A.2). Since the exponential converges for all argument values, the series we have to compute always converges, even though a gigantic number of elements can be required.
The most operation-number-efficient way of computing Q employs formula (A.1) in the form Q = I + τL · (I + (τ/2)L · (I + (τ/3)L · (I + ...))). According to this form, for a Taylor series truncated after the M-th element, one can set Q_M = I and iteratively compute

Q_{m−1} = I + (τ/m) L · Q_m ,   (A.5)

descending from m = M to m = 1. This iterative procedure yields all the required matrices,

Q = Q_0 ,   M_n = (τ^n/n) Q_n ,   (A.6)

and provides the slowest accumulation rate for the machine cancellation error.
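A sketch of the descending recursion (A.5),(A.6) in ordinary double precision follows (the reference values of this appendix use the same recursion but with extended-precision arithmetic in Maple; a multiple-precision library could be substituted for that part; names are illustrative):

```python
import numpy as np

def taylor_Q_M(L, tau, M_trunc, n_max=3):
    """Evaluate Q = exp(tau L) and M_1..M_{n_max} via the recursion (A.5), (A.6).

    M_trunc should be chosen of the order of e*|lambda_fast|*tau, cf. Eq. (A.7).
    """
    N = L.shape[0]
    I = np.eye(N)
    Q_m = I.copy()                              # Q_M = I
    M = [None] * (n_max + 1)
    for m in range(M_trunc, 0, -1):             # descend from m = M_trunc to m = 1
        Q_m = I + (tau / m) * (L @ Q_m)         # Q_{m-1} = I + (tau/m) L . Q_m, Eq. (A.5)
        n = m - 1
        if 1 <= n <= n_max:
            M[n] = (tau**n / n) * Q_m           # M_n = (tau^n / n) Q_n, Eq. (A.6)
    return Q_m, M[1:]                           # Q = Q_0 and [M_1, ..., M_{n_max}]
```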
The core purpose of the ETD methods is to iterate a numerical scheme with a time stepsize τ for which λ_fast τ ≫ 1, where λ_fast is the eigenvalue of the fastest decaying or oscillating mode. Hence, the practically relevant cases require the calculation of Taylor series with as many elements as needed to resolve e^{λ_fast τ} [see Eqs. (A.3) and (A.4), which are equivalent to (A.1) and (A.2)] for a large λ_fast τ, or at least to avoid its numerical explosion. To avoid the numerical instability of the fastest mode caused by the Taylor series truncation, one needs the order of magnitude of the truncated terms for this mode, ∼ |λ_fast τ|^M/M!, to be below 1. With Stirling's approximation M! ≈ √(2πM) (M/e)^M, one can estimate the minimal required truncation length M_*:

( e|λ_fast|τ / M_* ) · (2πM_*)^{−1/(2M_*)} ∼ 1 .

The factor (2πM_*)^{1/(2M_*)} tends to 1 for large M_* and is always bigger than 1; therefore, we can safely adopt

M_* = e|λ_fast|τ .   (A.7)

For CHE (13), the short wavelength modes δu(x, t) ∝ c_κ(t) e^{iκx} with wavenumber κ ≫ 1 and amplitude c_κ(t) are the fastest decaying ones. Their decay rate is dominated by the last term in (15a); substituting the perturbation c_κ(t) e^{iκx} into (15a), one finds λ_κ c_κ(t) e^{iκx} = [ ... − (e^{iκh_x/2} − e^{−iκh_x/2})^4/h_x^4 ] c_κ(t) e^{iκx} = [ ... − (16/h_x^4) sin^4(κh_x/2) ] c_κ(t) e^{iκx}. Hence, λ_fast ≈ −16/h_x^4 and Eq. (A.7) yields

M_* = 16eτ/h_x^4 .

For the example in Fig. 2 with τ = 2.5 × 10^{−4} and the same values of the other parameters, we find M_* = 1740 and take the truncation length M = 1.2M_* = 2088 to be on the safe side with the series convergence. We use the Maple analytical calculation package for computations with a variable number of decimal digits to handle the problem of cancellation error accumulation. For 256 decimal digits the first 20 digits of Q_jk are still affected by the variation of the mantissa length (cancellation error). We conduct the calculations with 384 digits and find the first 20 digits of the results to be the same as with 512 digits, for M = 1.2M_* and for the elongated series M = 1.4M_* = 2436. Therefore, we trust the first 20 digits of the calculated "exact" matrices Q and M_n. In Fig. 10a, one can see that the relative error of the significant parts of Q_jk and (M_n)_jk is somewhat smaller than 10^{−6}.
With the approach of this paper, the fine dynamics of the fastest decaying modes is resolved accurately enough to avoid the numerical instability, but not with a high precision. The corresponding contributions to Q and M_n are not very accurate. These contributions result in a "strong" accuracy of the calculation of the matrices ≲ 10^{−6} for the example in Figs. 10a and 2. However, the inaccuracy of the fastest decaying perturbations makes an exponentially small contribution to the numerically simulated system state; the accuracy of the numerical solutions is what is practically important, and it is much higher, as one can see in Figs. 3 and 4. The latter can be referred to as the "soft" accuracy of the numerical scheme.
Similarly, for MCE (17), the short wavelength modes δu(x, t) ∝ c_κ(t) e^{iκx} are the fastest decaying ones. Their decay rate is dominated by the last term in (18a), and one finds λ_κ c_κ(t) e^{iκx} = [ ... + (e^{iκh_x/2} − e^{−iκh_x/2})^6/h_x^6 ] c_κ(t) e^{iκx} = [ ... − (64/h_x^6) sin^6(κh_x/2) ] c_κ(t) e^{iκx}. Hence, λ_fast ≈ −64/h_x^6 and Eq. (A.7) yields

M_* = 64eτ/h_x^6 .

For the example in Fig. 6 with τ = 3.2 × 10^{−4} and the same values of the other parameters, we find M_* = 870 and take the truncation length M = 1.2M_* = 1044. For calculations with 128 decimal digits the first 20 digits of Q_jk are still affected by the cancellation error. We conduct the calculations with 192 digits and find the first 20 digits of the results to be the same as with 256 digits, for M = 1.2M_* and M = 1.4M_* = 1218. Therefore, we trust the first 20 digits of the calculated "exact" matrices Q and M_n. In Fig. 10b, one can see that the relative error of the significant parts of Q_jk and (M_n)_jk is ≲ 10^{−6}. The "soft" accuracy of the numerical scheme is higher, as one can see in Figs. 6 and 7.
For Fokker–Planck equation (25), the fastest decaying modes δz_j(t) ∝ c_κ(t) e^{iκj} are localized at the largest j ≈ N. Their decay rate is dominated by the σ^2-term in (25), and one finds λ_κ c_κ(t) e^{iκj} = [ ... − σ^2 j^2 (e^{iκ/2} + e^{−iκ/2})^4/4 ] c_κ(t) e^{iκj} = [ ... − 4 j^2 σ^2 cos^4(κ/2) ] c_κ(t) e^{iκj}. Hence, λ_fast ≈ −4N^2 σ^2 and Eq. (A.7) yields

M_* = 4eσ^2 N^2 τ .

For the example in Fig. 9 with τ = 10^{−4} and the same values of the other parameters, we find M_* = 435 and take the truncation length M = 1.2M_* = 522. For calculations with 64 decimal digits the first 20 digits of Q_jk are still affected by the cancellation error. We conduct the calculations with 96 digits and find the first 20 digits of the results to be the same as with 128 digits, for M = 1.2M_* and M = 1.4M_* = 609. Therefore, we trust the first 20 digits of the calculated "exact" matrices Q and M_n. In Fig. 11, one can see that the relative error of the significant parts of Q_jk and (M_n)_jk is somewhat smaller than 10^{−5}. The "soft" accuracy of the numerical scheme is still better, as one can see in Fig. 8 and Table 1.

CRediT authorship contribution statement

Evelina V. Permyakova: Conceptualization (supporting), Methodology (equal), Software (equal), Validation (equal), Formal analysis (equal), Visualization (supporting), Writing – original draft (supporting), Writing – review & editing (supporting). Denis S. Goldobin: Conceptualization (lead), Methodology (equal), Software (equal), Validation (equal), Formal analysis (equal), Visualization (lead), Writing – original draft (lead), Writing – review & editing (lead).

Declaration of competing interest

The authors declare no competing interests to disclose.

Data availability statement

The data that support the findings of this study are available within the article in graphic form. The data sheets for the graphs and the program codes in FORTRAN are available on request from the authors.

Acknowledgements

The work was carried out as part of a major scientific project (Agreement no. 075-15-2024-535 dated April 23, 2024).

References
[1] P. W. Anderson, Absence of diffusion in certain random lattices, Phys. Rev. 109 (5) (1958) 1492–1505.
doi:{10.1103/PhysRev.109.1492}.
URL https://doi.org/10.1103/PhysRev.109.1492
[2] J. Fröhlich, T. Spencer, A rigorous approach to Anderson localization, Physics Reports 103 (1) (1984) 9–25.
doi:{10.1016/0370-1573(84)90061-9} .
URL https://doi.org/10.1016/0370-1573(84)90061-9
[3] I. M. Lifshitz, S. A. Gredeskul, L. A. Pastur, Introduction to the Theory of Disordered Systems, Wiley, New York, 1988.
[4] R. Blümel, S. Fishman, U. Smilansky, Excitation of molecular rotation by periodic microwave pulses. a testing ground for anderson localization,
J. Chem. Phys. 84 (5) (1985) 2604–2614. doi:{10.1063/1.450330}.
URL https://doi.org/10.1063/1.450330
[5] T. Schwartz, G. Bartal, S. Fishman, M. Segev, Transport and Anderson localization in disordered two-dimensional photonic lattices, Nature
446 (7131) (2007) 52–55. doi:{10.1038/nature05623}.
URL https://doi.org/10.1038/nature05623
[6] A. S. Pikovsky, D. L. Shepelyansky, Destruction of Anderson localization by a weak nonlinearity, Phys. Rev. Lett. 100 (9) (2008) 094101.
doi:{10.1103/PhysRevLett.100.094101}.
URL https://doi.org/10.1103/PhysRevLett.100.094101
[7] J. W. Cahn, J. E. Hilliard, Free energy of a nonuniform system. I. Interfacial free energy, J. Chem. Phys. 28 (2) (1958) 258–267.
doi:{10.1063/1.1744102}.
URL https://doi.org/10.1063/1.1744102
[8] A. A. Golovin, A. A. Nepomnyashchy, S. H. Davis, M. A. Zaks, Convective Cahn-Hilliard Models: From Coarsening to Roughening, Phys.
Rev. Lett. 86 (2001) 1550–1553. doi:{10.1103/PhysRevLett.86.1550} .
URL https://doi.org/10.1103/PhysRevLett.86.1550
[9] A. Podolny, M. A. Zaks, B. Y. Rubinstein, A. A. Golovin, A. A. Nepomnyashchy, Dynamics of domain walls governed by the convective Cahn–Hilliard equation,
Phys. D 201 (3) (2005) 291–305. doi:{10.1016/j.physd.2005.01.003} .
URL https://doi.org/10.1016/j.physd.2005.01.003
[10] S. J. Watson, F. Otto, B. Y. Rubinstein, S. H. Davis, Coarsening dynamics of the convective Cahn-Hilliard equation, Phys. D 178 (3) (2003)
127–148. doi:{10.1016/S0167-2789(03)00048-4}.
URL https://doi.org/10.1016/S0167-2789(03)00048-4
[11] T. Speck, A. M. Menzel, J. Bialké, H. Löwen, Dynamical mean-field theory and weakly non-linear analysis for the phase separation of active Brownian particles,
J. Chem. Phys. 142 (22) (2015) 224109. doi:{10.1063/1.4922324}.
URL https://doi.org/10.1063/1.4922324
[12] Y. Kuramoto, T. Tsuzuki, Persistent propagation of concentration waves in dissipative media far from thermal equilibrium, Prog. Theor.
Phys. 55 (2) (1976) 356–369. doi:{10.1143/PTP.55.356}.
URL https://doi.org/10.1143/PTP.55.356
[13] E. Knobloch, Pattern selection in long-wavelength convection, Phys. D 41 (3) (1990) 450–479. doi:{10.1016/0167-2789(90)90008-D}.
URL https://doi.org/10.1016/0167-2789(90)90008-D
[14] L. Shtilman, G. Sivashinsky, Hexagonal structure of large-scale Marangoni convection, Phys. Nonlin. Phenom. 52 (2–3) (1958) 477–488.
doi:{10.1016/0167-2789(91)90140-5} .
URL https://doi.org/10.1016/0167-2789(91)90140-5

18
[15] P. C. Matthews, S. M. Cox, Pattern formation with a conservation law, Nonlinearity 13 (4) (2000) 1293–1320.
doi:{10.1088/0951-7715/13/4/317}.
URL https://doi.org/10.1088/0951-7715/13/4/317
[16] P. C. Matthews, S. M. Cox, One-dimensional pattern formation with Galilean invariance near a stationary bifurcation, Phys. Rev. E 62 (2)
(2000) R1473–R1476. doi:{10.1103/PhysRevE.62.R1473}.
URL https://doi.org/10.1103/PhysRevE.62.R1473
[17] R. R. Mosheva, E. A. Siraev, D. A. Bratsun, Chemoconvection of miscible solutions in an inclined layer, Computational Continuum Mechan-
ics 16 (1) (2023) 5–16. doi:{10.7242/1999-6691/2023.16.1.1}.
URL https://doi.org/10.7242/1999-6691/2023.16.1.1
[18] D. S. Goldobin, E. V. Shklyaeva, Large-scale thermal convection in a horizontal porous layer, Phys. Rev. E 78 (2) (2008) 027301.
doi:{10.1103/PhysRevE.78.027301}.
URL https://doi.org/10.1103/PhysRevE.78.027301
[19] S. Shklyaev, A. A. Alabuzhev, M. Khenner, Long-wave Marangoni convection in a thin film heated from below, Phys. Rev. E 85 (2012)
016328. doi:{10.1103/PhysRevE.85.016328} .
URL https://doi.org/10.1103/PhysRevE.85.016328
[20] A. E. Samoilova, A. Nepomnyashchy, Feedback control of Marangoni convection in a thin film heated from below, J. Fluid Mech. 876 (2019)
573–590. doi:{10.1017/jfm.2019.578}.
URL https://doi.org/10.1017/jfm.2019.578
[21] A. E. Samoilova, A. Nepomnyashchy, Nonlinear feedback control of Marangoni wave patterns in a thin film heated from below, Phys. D 412
(2020) 132627. doi:{10.1016/j.physd.2020.132627} .
URL https://doi.org/10.1016/j.physd.2020.132627
[22] N. V. Kozlov, Direct numerical simulation of double-diffusive convection at vibrations, Computational Continuum Mechanics 16 (3) (2023)
277–288. doi:{10.7242/1999-6691/2023.16.3.24}.
URL https://doi.org/10.7242/1999-6691/2023.16.3.24
[23] R. Erban, S. J. Chapman, I. G. Kevrekidis, T. Vejchodský, Analysis of a stochastic chemical system close to a SNIPER bifurcation of its mean-field model,
SIAM J. Appl. Math. 70 (3) (2009) 984–1016. doi:{10.1137/080731360}.
URL https://doi.org/10.1137/080731360
[24] I. Ratas, K. Pyragas, Noise-induced macroscopic oscillations in a network of synaptically coupled quadratic integrate-and-fire neurons, Phys.
Rev. E 100 (5) (2019) 052211. doi:{10.1103/PhysRevE.100.052211}.
URL https://doi.org/10.1103/PhysRevE.100.052211
[25] D. S. Goldobin, M. di Volo, A. Torcini, Reduction Methodology for Fluctuation Driven Population Dynamics, Phys. Rev. Lett. 127 (2021)
038301. doi:{10.1103/PhysRevLett.127.038301}.
URL https://doi.org/10.1103/PhysRevLett.127.038301
[26] M. Di Volo, M. Segneri, D. S. Goldobin, A. Politi, A. Torcini, Coherent oscillations in balanced neural networks driven by endogenous fluctuations,
Chaos 32 (2) (2022) 023120. doi:{10.1063/5.0075751}.
URL https://doi.org/10.1063/5.0075751
[27] T. Zheng, K. Kotani, Y. Jimbo, Distinct effects of heterogeneity and noise on gamma oscillation in a model of neuronal network with different reversal potential,
Sci. Rep. 11 (1) (2021) 12960. doi:{10.1038/s41598-021-91389-8}.
URL https://doi.org/10.1038/s41598-021-91389-8
[28] D. S. Goldobin, E. V. Permyakova, L. S. Klimenko, Macroscopic behavior of populations of quadratic integrate-and-fire neurons subject to non-Gaussian white noise,
Chaos 34 (1) (2024) 013121. doi:{10.1063/5.0172735}.
URL https://doi.org/10.1063/5.0172735
[29] B. Pietras, R. Cestnik, A. Pikovsky, Exact finite-dimensional description for networks of globally coupled spiking neurons, Phys. Rev. E 107
(2023) 024315. doi:{10.1103/PhysRevE.107.024315}.
URL https://doi.org/10.1103/PhysRevE.107.024315
[30] D. S. Goldobin, M. di Volo, A. Torcini, Discrete synaptic events induce global oscillations in balanced neural networks (2023).
arXiv:2311.06159, doi:{10.48550/arXiv.2311.06159}.
URL https://doi.org/10.48550/arXiv.2311.06159
[31] S. M. Cox, P. C. Matthews, Exponential time differencing for stiff systems, J. Comput. Phys. 176 (2) (2002) 430–455.
doi:{10.1006/jcph.2002.6995}.
URL https://doi.org/10.1006/jcph.2002.6995
[32] B. V. Minchev, W. M. Wright, A review of exponential integrators for first order semi-linear problems, Report NTNU-N-2005-2, Norwegian
University of Science and Technology, Trondheim, Norway (2005).
URL http://www.math.ntnu.no/preprint/numerics/2005/N2-2005.ps
[33] M. Hochbruck, A. Ostermann, Exponential integrators, Acta Numer. 19 (2010) 209–286. doi:{10.1017/S0962492910000048}.
URL https://doi.org/10.1017/S0962492910000048
[34] G. Beylkin, J. M. Keiser, L. Vozovoi, A new class of time discretization schemes for the solution of nonlinear PDEs, J. Comput. Phys. 147 (2)
(1998) 362–387. doi:{10.1006/jcph.1998.6093}.
URL https://doi.org/10.1006/jcph.1998.6093
[35] R. Holland, Finite-difference time-domain (FDTD) analysis of magnetic diffusion, IEEE Trans. Electromagn. Compat. 36 (1) (1994) 32–39.
doi:{10.1109/15.265477}.
URL https://doi.org/10.1109/15.265477
[36] P. G. Petropoulos, Analysis of exponential time-differencing for FDTD in lossy dielectrics, IEEE Trans. Antennas Propag. 45 (6) (1997)
1054–1057. doi:{10.1109/8.585755}.
URL https://doi.org/10.1109/8.585755
[37] C. Schuster, A. Christ, W. Fichtner, Review of FDTD time-stepping for efficient simulation of electric conductive media, Microw. Opt. Tech-
nol. Lett. 25 (1) (2000) 16–21.
URL https://doi.org/10.1002/(SICI)1098-2760(20000405)25:1<16::AID-MOP6>3.0.CO;2-O
[38] M. Hochbruck, C. Lubich, H. Selhofer, Exponential Integrators for Large Systems of Differential Equations, SIAM J. Sci. Comput. 19 (5)
(1998) 1552–1574. doi:{10.1137/S1064827595295337}.
URL https://doi.org/10.1137/S1064827595295337
[39] M. Tokman, Efficient integration of large stiff systems of ODEs with exponential propagation iterative (EPI) methods, J. Comp. Phys. 213 (2)
(2006) 748–776. doi:{10.1016/j.jcp.2005.08.032}.
URL https://www.sciencedirect.com/science/article/pii/S0021999105004158
[40] A.-K. Kassam, L. N. Trefethen, Fourth-Order Time-Stepping for Stiff PDEs, SIAM J. Sci. Comput. 26 (4) (2005) 1214–1233.
doi:{10.1137/S1064827502410633}.
URL https://doi.org/10.1137/S1064827502410633
[41] M. Hammele, S. Schuler, W. Zimmermann, Effects of parametric disorder on a stationary bifurcation, Phys. D 218 (2) (2006) 139–157.
doi:{10.1016/j.physd.2006.05.001}.
URL https://doi.org/10.1016/j.physd.2006.05.001
[42] D. S. Goldobin, E. V. Shklyaeva, Diffusion of a passive scalar by convective flows under parametric disorder, J. Stat. Mech.: Theory Exp.
(2009) P01024. doi:{10.1088/1742-5468/2009/01/P01024}.
URL https://doi.org/10.1088/1742-5468/2009/01/P01024
[43] D. S. Goldobin, E. V. Shklyaeva, Localization and advectional spreading of convective currents under parametric disorder, J. Stat. Mech.:
Theory Exp. (2013) P09027. doi:{10.1088/1742-5468/2013/09/P09027}.
URL https://doi.org/10.1088/1742-5468/2013/09/P09027
[44] D. S. Goldobin, Advectional enhancement of eddy diffusivity under parametric disorder, Phys. Scr. T142 (2010) 014050.
doi:{10.1088/0031-8949/2010/T142/014050}.
URL https://doi.org/10.1088/0031-8949/2010/T142/014050
[45] D. S. Goldobin, Two scenarios of advective washing-out of localized convective patterns under frozen parametric disorder, Phys. Scr. 94 (1)
(2019) 014011. doi:{10.1088/1402-4896/aaeefa}.
URL https://doi.org/10.1088/1402-4896/aaeefa
[46] E. V. Permyakova, D. S. Goldobin, Exponential time differencing for stiff systems with nondiagonal linear part, J. Appl. Mech. Tech. Phys.
61 (7) (2020) 1227–1237. doi:{10.1134/S002189442007010X}.
URL https://doi.org/10.1134/S002189442007010X
[47] D. S. Goldobin, Mean-field models of populations of quadratic integrate-and-fire neurons with noise on the basis of the circular cumulant approach,
Chaos 31 (8) (2021) 083112. doi:{10.1063/5.0061575}.
URL https://doi.org/10.1063/5.0061575
[48] A. Zincenko, S. Petrovskii, V. Volpert, M. Banerjee, Turing instability in an economic-demographic dynamical system may lead to pattern formation on a geographical scale,
J. R. Soc. Interface 18 (177) (2021) 20210034. doi:{10.1098/rsif.2021.0034}.
URL https://doi.org/10.1098/rsif.2021.0034
[49] S. Pal, S. Petrovskii, S. Ghorai, M. Banerjee, Spatiotemporal pattern formation in 2d prey-predator system with nonlocal intraspecific competition,
Commun. Nonlinear Sci. Numer. Simul. 93 (2021) 105478. doi:{10.1016/j.cnsns.2020.105478}.
URL https://doi.org/10.1016/j.cnsns.2020.105478
[50] P. R. Chowdhury, M. Banerjee, S. Petrovskii, Canards, relaxation oscillations, and pattern formation in a slow-fast ratio-dependent predator-prey system,
Appl. Math. Model. 109 (2022) 519–535. doi:{10.1016/j.apm.2022.04.022}.
URL https://doi.org/10.1016/j.apm.2022.04.022
[51] B. Boaretto, R. C. Budzinski, T. L. Prado, J. Kurths, S. R. Lopes, Neuron dynamics variability and anomalous phase synchronization of neural networks,
Chaos 28 (10) (2018) 106304. doi:{10.1063/1.5023878}.
URL https://doi.org/10.1063/1.5023878
[52] V. Godavarthi, P. Kasthuri, S. Mondal, R. I. Sujith, N. Marwan, J. Kurths, Synchronization transition from chaos to limit cycle oscillations when a locally coupled
Chaos 30 (3) (2020) 033121. doi:{10.1063/1.5134821}.
URL https://doi.org/10.1063/1.5134821
[53] H. Sakaguchi, S. Shinomoto, Y. Kuramoto, Phase Transitions and Their Bifurcation Analysis in a Large Population of Active Rotators with Mean-Field Coupling,
Prog. Theor. Phys. 79 (1988) 600–607. doi:{10.1143/PTP.79.600}.
URL https://doi.org/10.1143/PTP.79.600
[54] V. V. Klinshov, S. Y. Kirillov, V. I. Nekorkin, M. Wolfrum, Noise-induced dynamical regimes in a system of globally coupled excitable units,
Chaos 31 (8) (2021) 083103. doi:{10.1063/5.0056504}.
URL https://doi.org/10.1063/5.0056504
[55] I. Franović, S. Eydam, N. Semenova, A. Zakharova, Unbalanced clustering and solitary states in coupled excitable systems, Chaos 32 (1)
(2022) 011104. doi:{10.1063/5.0077022}.
URL https://doi.org/10.1063/5.0077022
[56] J. Swift, P. C. Hohenberg, Hydrodynamic fluctuations at the convective instability, Phys. Rev. A 15 (1) (1977) 319–328.
doi:{10.1103/PhysRevA.15.319}.
URL https://doi.org/10.1103/PhysRevA.15.319
[57] E. M. Izhikevich, Dynamical Systems in Neuroscience, MIT Press, Cambridge, MA, 2007.
[58] G. B. Ermentrout, N. Kopell, Parabolic bursting in an excitable system coupled with a slow oscillation, SIAM J. Appl. Math. 46 (2) (1986)
233–253. doi:{10.1137/0146017}.
URL https://doi.org/10.1137/0146017
[59] E. I. Yakubovich, Dynamics of processes in media with inhomogeneous broadening of the line of the working transition, Sov. Phys. JETP 28
(1969) 160–164.
URL "http://jetp.ras.ru/cgi-bin/dn/e_028_01_0160.pdf"
[60] M. I. Rabinovich, D. I. Trubetskov, Oscillations and Waves: In Linear and Nonlinear Systems, Springer Netherlands, 1989.
[61] J. D. Crawford, Amplitude expansions for instabilities in populations of globally-coupled oscillators, J. Stat. Phys. 74 (1994) 1047–1084.
doi:{10.1007/BF02188217}.
URL https://doi.org/10.1007/BF02188217
[62] H. Daido, Onset of cooperative entrainment in limit-cycle oscillators with uniform all-to-all interactions: bifurcation of the order function,
Phys. D 91 (1) (1996) 24–66. doi:{10.1016/0167-2789(95)00260-X}.
URL https://doi.org/10.1016/0167-2789(95)00260-X
[63] E. Montbrió, D. Pazó, A. Roxin, Macroscopic Description for Networks of Spiking Neurons, Phys. Rev. X 5 (2015) 021028.
doi:{10.1103/PhysRevX.5.021028}.
URL https://doi.org/10.1103/PhysRevX.5.021028