On the partial autocorrelation function for locally stationary time series: characterization, estimation and inference
Abstract
For stationary time series, it is common to use plots of the partial autocorrelation function (PACF) or PACF-based tests to explore the temporal dependence structure of such processes. To the best of our knowledge, such analogs for non-stationary time series have not been fully established yet. In this paper, we fill this gap for locally stationary time series with short-range dependence. First, we characterize the PACF locally in the time domain and show that the PACF decays with the lag at a rate that is adaptive to the temporal dependence of the time series. Second, at each time point, we justify that the PACF can be efficiently approximated by the best linear prediction coefficients via the Yule-Walker equations. This allows us to study the PACF via ordinary least squares (OLS) locally. Third, we show that the PACF is smooth in time for locally stationary time series. We use the sieve method with OLS to estimate the PACF and construct statistics to test the PACFs and infer the structure of the time series. These tests generalize and modify those used for stationary time series in Brockwell & Davis (1987). Finally, a multiplier bootstrap algorithm is proposed for practical implementation and an R package is provided to implement our algorithm. Numerical simulations and real data analysis confirm the usefulness of our results.
keywords:
Locally stationary time series; PACF; Sieve method; Multiplier bootstrapping.

\arabicsection Introduction
The partial autocorrelation function (PACF) is one of the most popular and powerful tools for stationary time series modelling and analysis (Brockwell & Davis, 1987). However, in the era of big data, as increasingly longer time series are collected, it has become more appropriate to model many of these series as locally stationary processes whose data-generating mechanisms evolve smoothly over time. In this setting, the effectiveness of the classical PACF deteriorates, and there is a pressing need to establish a theory of the PACF for locally stationary time series.
Even though there exists a rich body of literature on locally stationary time series analysis (see Dahlhaus (2012); Dahlhaus et al. (2019) for a review), much less has been done on the PACF. In Dégerine & Lambert-Lacroix (2003), the authors generalized the characterization of PACFs via a useful decomposition introduced in Ramsey (1974) to general non-stationary processes. They also briefly discussed how to estimate the PACFs based on a generalized Levinson-Durbin algorithm when the autocovariance function is given. More recently, in Killick et al. (2020), by generalizing the partial autocorrelations of stationary processes to locally stationary time series from a wavelet spectrum perspective, the authors provided two new estimators for the local PACFs. The consistency of the wavelet-based estimator and the asymptotic distribution of the windowed estimator under a Gaussian assumption have also been studied. However, the decay rate of the local PACFs as a function of the lag has not been established, and a direct time-domain characterization of the PACFs of locally stationary processes has not been fully investigated. Moreover, inference for the PACFs of locally stationary time series, for example significance tests and PACF-based Portmanteau tests, is still missing in the literature.
Motivated by the above challenges, in this paper, we aim to systematically study the theory of PACFs for locally stationary time series. For characterization, in contrast to Dégerine & Lambert-Lacroix (2003); Killick et al. (2020), we define PACFs for general locally stationary time series in the time domain using stationary approximations at each time point (c.f. Definition \arabicsection.\arabicthm). There are several advantages to this characterization. First, since the time series is approximately stationary locally, the lower-order PACFs can be well approximated by the best short-term linear prediction coefficients via the Yule-Walker equations; see (\arabicsection.\arabicequation) and (\arabicsection.\arabicequation). This connection not only allows us to study the PACF via ordinary least squares (OLS) estimation but also enables us to establish decay properties of the PACFs which are adaptive to the temporal dependence decay of the time series; see (\arabicsection.\arabicequation). Second, the smoothness of the locally stationary covariance structure can be easily translated to that of the PACFs; see (\arabicsection.\arabicequation). Therefore, it suffices to estimate some smooth functions at different lags. More specifically, together with the OLS, the smooth PACFs can be estimated adaptively using the nonparametric method of sieves via flexible choices of basis functions such as wavelets and orthogonal polynomials (Chen, 2007); see Section \arabicsection.\arabicsubsection. Theoretically, under mild assumptions, the estimators are uniformly consistent in the time domain (c.f. Theorem \arabicsection.\arabicthm). Third, based on the time-domain characterization and the OLS form of the sieve estimators, one can further conduct various tests on the PACFs. For example, one can perform a white noise Portmanteau test or significance tests on some PACFs (e.g., checking the order of an AR process) uniformly over time and lags.
Neither test has been fully studied under the locally stationary time series framework. We establish the asymptotic normality and conduct a power analysis for the tests (c.f. Theorems \arabicsection.\arabicthm and \arabicsection.\arabicthm). We also propose a multiplier bootstrap procedure for practical implementation (c.f. Algorithm 1). Numerical simulations and real data analysis are provided to illustrate the usefulness of our results, and an R package is provided for users. Since our method covers stationary time series as a special case, we recommend our package over the default R function, which only handles stationary time series.
The paper is organized as follows. In Section \arabicsection, we provide the characterization of the PACFs for locally stationary time series and study their asymptotic properties. In Section \arabicsection, we introduce our estimator for PACFs based on the nonparametric sieve method and inference procedures based on multiplier bootstrap. In Section \arabicsection, we provide theoretical analysis for our estimation and inference procedures. Numerical simulations and real data analysis are offered in Section \arabicsection. More details are provided in our online supplement Ding & Zhou (2024). In particular, technical proofs are deferred to Section A, further discussions and remarks are provided in Section B, a tuning parameter selection algorithm is given in Section C, and additional simulation results are offered in Section D.
Conventions. For a random variable and some constant we denote by its norm. We simply write when For any deterministic vector we use for its (or Euclidean) norm. For any matrix we use to stand for its operator norm. For two sequences of deterministic positive values and , we write if for some positive constant . Moreover, we write if for some positive sequence For a sequence of random variables and positive real values we use the notation to state that is bounded in norm; that is for some finite constant If we simply write We use for a identity matrix. We use for the function space on of continuous functions that have continuous first derivatives.
\arabicsection Characterization of PACF for locally stationary time series
In this section, we provide the characterization of the PACF of locally stationary time series and study its properties. Suppose that we observe . For simplicity and without loss of generality, we assume the time series is centered (i.e., its mean is zero). For the rest of the paper, for notational simplicity, we always write From time to time, we will emphasize the dependence on for various quantities.
In this paper, we focus on a general class of locally stationary time series following the definition introduced in Ding & Zhou (2023). It covers many commonly used locally stationary time series models in the literature; see for instance Ding & Zhou (2020); Dahlhaus et al. (2019); Dahlhaus (2012); Roueff & Sanchez-Perez (2018); Kley et al. (2019); Dette et al. (2011); Dette & Wu (2020); Vogt (2012); Zhou & Wu (2009). We refer the readers to Example B.\arabicthm in our supplement for more details.
Definition \arabicsection.\arabicthm (Locally stationary time series and its PACF).
A non-stationary time series is a locally stationary time series (in covariance) if there exists a function such that
(\arabicsection.\arabicequation) |
Moreover, we assume that is Lipschitz continuous in and for any fixed is the autocovariance function (ACF) of some stationary process whose th order PACF is denoted as . We shall also call the th order PACF at rescaled time of
Observe that Definition \arabicsection.\arabicthm essentially means that the covariance structure of can be well approximated locally by that of a stationary process. Consequently, at each rescaled time , the PACF of is defined by that of the approximating stationary process. Now we study the PACF introduced above. Since only one realization of the series is available, we will need to assume short-range dependence that for some
(\arabicsection.\arabicequation) |
and add additional regularity conditions. These will be summarized in Assumption \arabicsection.\arabicthm after some necessary notations are introduced. For the autocovariance function in (\arabicsection.\arabicequation), given a lag we define a vector of functions via the local Yule-Walker equation
(\arabicsection.\arabicequation) |
where is a symmetric matrix whose entry is defined as and is a vector whose th entry is defined as . Consequently, we can write (Brockwell & Davis, 1987)
(\arabicsection.\arabicequation) |
Note that for stationary time series the PACF is closely related to the best linear forecast coefficients of the process (Brockwell & Davis, 1987). The representation (\arabicsection.\arabicequation), together with the smoothness of the time series covariance structure, implies that we can study the PACF of locally stationary time series via local best linear forecasts and therefore via a simple local regression analysis. Specifically, for all denote the th order best linear forecast of as where are the best linear forecast coefficients. Define the residual as We now write
(\arabicsection.\arabicequation) |
Using (\arabicsection.\arabicequation), we introduce the following notations
(\arabicsection.\arabicequation) |
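To make the Yule-Walker/PACF connection above concrete, the following sketch (ours, not the paper's implementation; the AR(1) autocovariance example is an assumption for illustration) recovers the PACF at a fixed rescaled time from the local autocovariances via the standard Durbin-Levinson recursion.

```python
import numpy as np

def pacf_from_acov(gamma):
    """Durbin-Levinson recursion: PACF at lags 1..K from local
    autocovariances gamma[0..K] at a fixed rescaled time."""
    K = len(gamma) - 1
    phi = np.zeros((K + 1, K + 1))   # phi[k, j]: j-th coefficient of order-k forecast
    pacf = np.zeros(K)
    pacf[0] = phi[1, 1] = gamma[1] / gamma[0]
    v = gamma[0] * (1 - phi[1, 1] ** 2)          # order-1 prediction error variance
    for k in range(2, K + 1):
        num = gamma[k] - phi[k - 1, 1:k] @ gamma[k - 1:0:-1]
        phi[k, k] = num / v                      # k-th PACF = last forecast coefficient
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, k - 1:0:-1]
        v *= 1 - phi[k, k] ** 2
        pacf[k - 1] = phi[k, k]
    return pacf

# AR(1) with coefficient 0.5: gamma(h) = 0.5**h / (1 - 0.25);
# its PACF is 0.5 at lag 1 and vanishes at higher lags
gamma = np.array([0.5 ** h for h in range(6)]) / 0.75
print(np.round(pacf_from_acov(gamma), 6))
```

For a locally stationary series, running this recursion with the time-varying autocovariance at each rescaled time yields the local PACF of the definition above.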
The following theorem studies the uniform decay property of the PACF and builds the connection between the PACF defined in Definition \arabicsection.\arabicthm and in (\arabicsection.\arabicequation) for the locally stationary time series.
Theorem \arabicsection.\arabicthm.
For the locally stationary time series satisfying Definition \arabicsection.\arabicthm, suppose Assumption \arabicsection.\arabicthm holds. Then the following statements hold.
-
\arabicenumi.
For in (\arabicsection.\arabicequation), we have that for in Definition \arabicsection.\arabicthm and in (\arabicsection.\arabicequation)
(\arabicsection.\arabicequation) -
\arabicenumi.
for some integer defined in (\arabicsection.\arabicequation) and
(\arabicsection.\arabicequation)
Remark \arabicsection.\arabicthm.
Several remarks on Theorem \arabicsection.\arabicthm are in order. First, (\arabicsection.\arabicequation) implies that, uniformly over time, the PACF decays polynomially fast to 0 as a function of the lag, with a speed adaptive to that of the autocovariance. If the decay exponent is sufficiently large, this implies that in practice we only need to consider the first few lags of the PACF. In this sense, we conclude from (\arabicsection.\arabicequation) that we only need to focus on the lags up to a slowly growing truncation level. Second, (\arabicsection.\arabicequation) demonstrates that under Definition \arabicsection.\arabicthm and Assumption \arabicsection.\arabicthm, the PACFs can be well approximated by the local best linear forecast coefficients if the lag is not too large. Since these coefficients are closely related to OLS, (\arabicsection.\arabicequation) lays the theoretical foundation for the estimation and inference procedures in Section \arabicsection.
\arabicsection Estimation and inference procedures
In this section, we provide the procedures of the estimation and inference of the PACFs of locally stationary time series. The theoretical justifications will be provided in Section \arabicsection. As mentioned in Remark \arabicsection.\arabicthm, in what follows, we shall only consider lag with
(\arabicsection.\arabicequation) |
\arabicsection.\arabicsubsection Sieve nonparametric estimation
In this section, we estimate the PACFs. As proved in Theorem \arabicsection.\arabicthm, the PACF is smooth in rescaled time, so it is natural to approximate it via a basis expansion with a diverging number of terms using the method of sieves, as in Chen (2007); Ding & Zhou (2020, 2023). Recall is defined via in (\arabicsection.\arabicequation). Now we will work with
According to (Chen, 2007, Section 2.3), we have that for some pre-chosen orthonormal basis functions on , denoted as
(\arabicsection.\arabicequation) |
where is the number of basis functions. Here are the coefficients to be estimated. In fact, using (Ding & Zhou, 2023, Theorem 2.11) and a discussion similar to (3.8) therein, we have by (\arabicsection.\arabicequation) that
(\arabicsection.\arabicequation) |
where is defined in (\arabicsection.\arabicequation) and . Observe that a key component in (\arabicsection.\arabicequation) is the approximation of by local best linear forecast coefficients as in (\arabicsection.\arabicequation).
Using (\arabicsection.\arabicequation), we can estimate all the coefficients using one OLS regression. In particular, we stack the coefficients as a vector; then the OLS estimator can be written as where is the design matrix of (\arabicsection.\arabicequation) and
After estimating the coefficients, the PACF is estimated using (\arabicsection.\arabicequation) as
(\arabicsection.\arabicequation) |
where has blocks and the th block is and zeros otherwise.
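As a minimal illustration of the sieve-OLS idea in this subsection (a sketch under our own assumptions, not the paper's R implementation: the tvAR(1) model, coefficient function, basis size, and sample size are all stand-ins), one can regress the series on lagged values multiplied by Legendre basis functions of rescaled time and read off the time-varying coefficient curve:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(0)
n, c = 2000, 6                                   # sample size, number of basis functions
a = lambda t: 0.3 + 0.3 * np.sin(2 * np.pi * t)  # hypothetical smooth tvAR(1) coefficient

# simulate X_i = a(i/n) X_{i-1} + e_i
X = np.zeros(n)
for i in range(1, n):
    X[i] = a(i / n) * X[i - 1] + rng.standard_normal()

# sieve-OLS: one regression of X_i on {phi_j(i/n) * X_{i-1}},
# with the Legendre basis mapped from [-1, 1] to [0, 1]
t = np.arange(1, n) / n
B = legvander(2 * t - 1, c - 1)                  # (n-1) x c matrix of basis values
D = B * X[:-1, None]                             # design: basis times lagged series
beta = np.linalg.lstsq(D, X[1:], rcond=None)[0]  # sieve coefficients

a_hat = B @ beta                                 # estimated coefficient curve
err = np.max(np.abs(a_hat - a(t)))               # uniform estimation error
print(round(err, 3))
```

The paper's estimator stacks such regressions across all lags into a single high-dimensional OLS; the uniform error above shrinks as the sample size grows, in line with the uniform consistency established later.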
\arabicsection.\arabicsubsection Multiplier bootstrap based inference
In this subsection we propose a multiplier bootstrap procedure to infer the PACFs. Statistical inference of the PACFs plays an important role in stationary time series analysis. For example, it can be used to determine the order of an AR process and check whether the time series (or residuals after an ARIMA model fitting) is white noise. We refer the readers to Chapter 3 of Shumway & Stoffer (2017) for more details. However, the analogs for locally stationary time series are largely missing. We aim to fill the gap in this section.
Based on our estimators in (\arabicsection.\arabicequation), we can conduct various important tests on in (\arabicsection.\arabicequation). For example, we can test whether the PACFs are identical to some given functions of interest. As another instance, we can check whether a group of the PACFs are time-invariant, i.e., constant over rescaled time. Note that if we take the given functions to be zero, this reduces to testing the significance of the PACFs; in particular, when all lags are tested against zero, it is equivalent to testing whether the time series is white noise. While we are able to conduct several different tests on the PACFs, in this paper, motivated by their applications in model selection and goodness of fit, for conciseness we will focus on two such tests.
First, we are interested in testing
(\arabicsection.\arabicequation) |
The hypothesis (\arabicsection.\arabicequation) tests the significance of a single PACF. Similar to the stationary setting in Brockwell & Davis (1987), it can be used to select the order of a locally stationary AR process; see Remark B.\arabicthm of our supplement for more discussions.
Second, we are also interested in testing the significance for all the lags that
(\arabicsection.\arabicequation) |
The hypothesis (\arabicsection.\arabicequation) tests for white noise (or lack of serial correlation) in the underlying locally stationary time series. For stationary white noise, the well-known Box–Pierce (BP) test statistic (Box & Pierce, 1970) with a fixed lag truncation number is probably the most commonly used statistic. Later, this test was extended to locally stationary white noise in Goerg (2012). We emphasize that the Portmanteau-type BP tests in Box & Pierce (1970); Goerg (2012) used the autocorrelation functions (ACFs) instead of the PACFs. However, the estimation of ACFs for locally stationary time series involves estimating the time-varying marginal variances, which requires additional tuning parameters and may lead to deteriorated estimation accuracy in finite samples. Inspired by the above challenges and the discussion in Section 9.4 of Brockwell & Davis (1987), we propose a PACF-based Portmanteau test.
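For the stationary benchmark that the paper builds on (Section 9.4 of Brockwell & Davis (1987)), a PACF-based Box-Pierce-type statistic can be sketched as follows. This is our stand-alone illustration of the stationary special case, not the paper's time-integrated statistic; the white-noise sample and truncation lag K are our choices.

```python
import numpy as np

def sample_pacf(x, K):
    """Sample PACF at lags 1..K: Durbin-Levinson recursion applied
    to the sample autocovariances."""
    n = len(x)
    x = x - x.mean()
    g = np.array([x[:n - h] @ x[h:] / n for h in range(K + 1)])
    phi = np.zeros((K + 1, K + 1))
    pacf = np.zeros(K)
    pacf[0] = phi[1, 1] = g[1] / g[0]
    v = g[0] * (1 - phi[1, 1] ** 2)
    for k in range(2, K + 1):
        phi[k, k] = (g[k] - phi[k - 1, 1:k] @ g[k - 1:0:-1]) / v
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, k - 1:0:-1]
        v *= 1 - phi[k, k] ** 2
        pacf[k - 1] = phi[k, k]
    return pacf

rng = np.random.default_rng(3)
x = rng.standard_normal(2000)                 # white-noise null sample
K = 10
# under white noise, sqrt(n) * pacf(k) are asymptotically i.i.d. N(0, 1),
# so Q is approximately chi-squared with K degrees of freedom
Q = len(x) * (sample_pacf(x, K) ** 2).sum()
print(round(Q, 1))                            # compare with the chi2(10) 95% quantile, 18.31
```

The paper's statistic replaces the constant sample PACFs by estimated PACF curves and aggregates over rescaled time, with critical values obtained by the bootstrap below rather than a chi-squared limit.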
We mention again that as discussed in Remark \arabicsection.\arabicthm, when is large, we have that for defined in (\arabicsection.\arabicequation). Therefore, from an inferential viewpoint, for can be effectively treated as zero. Consequently, we only need to consider the setting for (\arabicsection.\arabicequation) and (\arabicsection.\arabicequation) once is identified.
\arabicsection.\arabicsubsection.\arabicsubsubsection Test statistics
In this section, we propose the test statistics for (\arabicsection.\arabicequation) and (\arabicsection.\arabicequation). First, when the null hypothesis in (\arabicsection.\arabicequation) holds, the following statistic should be small
(\arabicsection.\arabicequation) |
Therefore, it is natural to use (\arabicsection.\arabicequation) to test (\arabicsection.\arabicequation).
Second, to test (\arabicsection.\arabicequation), motivated by the BP test Box & Pierce (1970), we may want to directly use the following statistic
(\arabicsection.\arabicequation) |
where we recall in (\arabicsection.\arabicequation). Even though it is natural to use , as described in Section \arabicsection.\arabicsubsection, obtaining its value requires running high-dimensional OLS regressions, which can be computationally expensive, especially when is large (or equivalently, is small). To address this issue, we consider a lower-order best linear prediction as in (\arabicsection.\arabicequation). That is, According to Theorem 2.11 of Ding & Zhou (2023), by setting in (\arabicsection.\arabicequation), we find that
(\arabicsection.\arabicequation) |
The smooth coefficients can be estimated via the sieve method using only one high-dimensional OLS as in Section \arabicsection.\arabicsubsection whose estimators are denoted as
Now we define another statistic
(\arabicsection.\arabicequation) |
As will be seen later in Theorem \arabicsection.\arabicthm below, when (\arabicsection.\arabicequation) holds, under some mild conditions on (c.f. (\arabicsection.\arabicequation)), will be close to while the calculation of only needs one OLS regression. Therefore, we will use (\arabicsection.\arabicequation) to test (\arabicsection.\arabicequation).
\arabicsection.\arabicsubsection.\arabicsubsubsection Practical implementation
We point out that it is still difficult to directly use and since the variances in their limiting Gaussian distributions are usually hard to estimate and hence plug-in estimators are unavailable; see Theorems \arabicsection.\arabicthm and \arabicsection.\arabicthm for more details. To address this issue, we utilize the multiplier bootstrap procedure as in Ding & Zhou (2023); Zhou (2013a). We first explain how to construct the bootstrapped statistics. Using the sieve estimates as in Section \arabicsection.\arabicsubsection, denote the residual
(\arabicsection.\arabicequation) |
where for and for
Let where Given a block size we denote as
(\arabicsection.\arabicequation) |
where is the Kronecker product and are i.i.d. standard Gaussian random variables independent of the observed time series. Recall the discussions around (\arabicsection.\arabicequation) and that is the design matrix. Denote and Moreover, let be a block diagonal matrix whose only non-zero part is the identity matrix in the last diagonal block. Inspired by Remark \arabicsection.\arabicthm below, we use the following statistics to mimic the distributions of the test statistics. Let be constructed as in (\arabicsection.\arabicequation) using and be constructed using . Then we denote
(\arabicsection.\arabicequation) |
We point out that for the implementation, one needs to select some large value of to construct We discuss how to choose this parameter in Section C of our supplement.
Finally, based on the above results, we propose Algorithm 1 for practical implementation. Note that in order to implement Algorithm 1, two tuning parameters, the number of basis functions and the block size, have to be chosen properly. In our R package, these parameters can be chosen automatically according to the methods provided in Section C of our supplement Ding & Zhou (2024).
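The block multiplier idea behind Algorithm 1 can be sketched generically as follows (our simplified scalar version, not the paper's algorithm: the residual sequence, block size m, and number of replicates B are illustrative choices). Local block sums of residual terms are weighted by i.i.d. Gaussian multipliers, so the bootstrap draws mimic the long-run variance of the normalized statistic without estimating it explicitly.

```python
import numpy as np

def block_multiplier_bootstrap(z, m, B, rng):
    """Gaussian multiplier bootstrap with block size m: each draw sums
    overlapping local block sums of z weighted by i.i.d. N(0, 1)
    multipliers, mimicking the law of sum(z) / sqrt(n) under dependence."""
    n = len(z)
    S = np.array([z[i:i + m].sum() for i in range(n - m + 1)])  # local block sums
    draws = np.empty(B)
    for b in range(B):
        R = rng.standard_normal(len(S))                         # Gaussian multipliers
        draws[b] = (S * R).sum() / np.sqrt(m * (n - m + 1))
    return draws

rng = np.random.default_rng(1)
z = rng.standard_normal(500)                  # placeholder residual sequence
stat = z.sum() / np.sqrt(len(z))              # normalized sum-type statistic
draws = block_multiplier_bootstrap(z, m=8, B=2000, rng=rng)
crit = np.quantile(np.abs(draws), 0.95)       # bootstrap two-sided critical value
print(round(crit, 2), bool(abs(stat) > crit))
```

Conditionally on the data, the draws are Gaussian with (approximately) the right long-run variance, which is why the bootstrap quantiles can replace the hard-to-estimate variances in the limiting distributions.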
\arabicsection Theoretical analysis
In this section, we provide some theoretical analysis of our estimation and inference procedures. For the rest of the paper, for notational simplicity, we assume that the locally stationary time series admits the general physical representation equipped with the physical dependence measures (see (B.\arabicequation) and (B.\arabicequation) of our supplement). In addition, we need the following assumptions.
Assumption \arabicsection.\arabicthm.
Throughout the paper, we suppose the following conditions hold:
-
(1).
For all sufficiently large we assume that there exists a universal constant such that
(\arabicsection.\arabicequation) where is the smallest eigenvalue of the given matrix and is the covariance matrix of the given vector.
-
(2).
For all and we assume that there exists some constant such that (\arabicsection.\arabicequation) holds. In addition, we assume that
-
(3).
For some given integer , we assume that for any
(\arabicsection.\arabicequation)
The conditions in the above assumption are mild and can be satisfied by many commonly used locally stationary time series. Due to space constraints, we defer further discussion to Remark B.\arabicthm of the supplement Ding & Zhou (2024).
\arabicsection.\arabicsubsection Uniform consistency
In what follows, we establish the consistency for our estimators. Recall below (\arabicsection.\arabicequation). Denote
(\arabicsection.\arabicequation) |
The following mild assumption will be needed to ensure consistent estimation; it has been used frequently in the literature, see Ding & Zhou (2020, 2023, 2021); Vogt (2012). Recall in (\arabicsection.\arabicequation). For all denote the matrix whose th entry is
Assumption \arabicsection.\arabicthm.
For denote the long-run integrated covariance matrix as
(\arabicsection.\arabicequation) |
where we recall that is the Kronecker product. We assume that the eigenvalues of are bounded above and also away from zero by some universal constants.
Then we proceed to state the main results of this section. Recall (\arabicsection.\arabicequation). Denote
(\arabicsection.\arabicequation) |
Theorem \arabicsection.\arabicthm.
Suppose Assumptions \arabicsection.\arabicthm and \arabicsection.\arabicthm hold true. Moreover, for satisfying
(\arabicsection.\arabicequation) |
we have that for our estimator (\arabicsection.\arabicequation)
(\arabicsection.\arabicequation) |
Remark \arabicsection.\arabicthm.
Theorem \arabicsection.\arabicthm implies that our proposed estimator (\arabicsection.\arabicequation) is uniformly consistent under mild conditions. First, condition (\arabicsection.\arabicequation) ensures that the Gram matrix in the OLS estimator will converge to its population counterpart, which guarantees the regular behavior of the estimator. In fact, (\arabicsection.\arabicequation) can be easily satisfied. Note that and can be calculated for specific sieve basis functions, and so can the convergence rate in (\arabicsection.\arabicequation). For example, when are Fourier basis functions or normalized orthogonal polynomials, and Consequently, when even for in (\arabicsection.\arabicequation), (\arabicsection.\arabicequation) only requires that In particular, if we set for some small constant we only need
Second, for the rate in (\arabicsection.\arabicequation) and (\arabicsection.\arabicequation), when and even for it reads . Therefore, for sufficiently large and it has an order of for some small constant
\arabicsection.\arabicsubsection Asymptotic normality and power analysis for the proposed statistics
In this section, we study the accuracy and power of the proposed statistics in (\arabicsection.\arabicequation) and in (\arabicsection.\arabicequation). We first prepare some notations. Recall (\arabicsection.\arabicequation). Following the conventions below (\arabicsection.\arabicequation), for regarding and regarding we denote
(\arabicsection.\arabicequation) |
where we recall According to (3.17) of Ding & Zhou (2023) or Lemma 3.1 of Ding & Zhou (2021), we see that has a physical representation in the sense that for some measurable function we have
(\arabicsection.\arabicequation) |
where and are i.i.d. centered random variables. Denote the long-run covariance matrix of as and the integrated long-run covariance matrix as In what follows, to ease our discussion, we assume that is of the form
(\arabicsection.\arabicequation) |
Armed with the above notations, we now proceed to provide the theoretical properties of the statistic Recall the matrix in (\arabicsection.\arabicequation). For define
(\arabicsection.\arabicequation) |
where and is defined from (\arabicsection.\arabicequation) using
Theorem \arabicsection.\arabicthm.
Suppose the assumptions of Theorem \arabicsection.\arabicthm hold. Then we have that
-
\arabicenumi.
Suppose Assumption A.\arabicthm of our supplement holds and
(\arabicsection.\arabicequation) we have that when in (\arabicsection.\arabicequation) holds
(\arabicsection.\arabicequation) -
\arabicenumi.
When in (\arabicsection.\arabicequation) holds in the sense that
(\arabicsection.\arabicequation) where as assuming (\arabicsection.\arabicequation), then we have that for any
where is the th quantile of the standard Gaussian distribution.
The above theorem establishes the asymptotic normality of our proposed statistic in (\arabicsection.\arabicequation) concerning (\arabicsection.\arabicequation). Moreover, (\arabicsection.\arabicequation) shows that our proposed statistic can have asymptotic power one under weak local alternatives. Next, we provide the theoretical properties of the second statistic. For define
(\arabicsection.\arabicequation) |
where and is defined from (\arabicsection.\arabicequation) using
Theorem \arabicsection.\arabicthm.
Suppose the assumptions of Theorem \arabicsection.\arabicthm hold. Then we have that
-
\arabicenumi.
Suppose Assumption A.\arabicthm of our supplement holds and
(\arabicsection.\arabicequation) when in (\arabicsection.\arabicequation) holds, we have that
(\arabicsection.\arabicequation) -
\arabicenumi.
Recall (\arabicsection.\arabicequation). Suppose (\arabicsection.\arabicequation) holds. Then when in (\arabicsection.\arabicequation) holds, we have that
(\arabicsection.\arabicequation) Consequently, if we further assume that
(\arabicsection.\arabicequation) then
(\arabicsection.\arabicequation) -
\arabicenumi.
When in (\arabicsection.\arabicequation) holds in the sense that
(\arabicsection.\arabicequation) where as assuming the assumptions of parts 1 and 2 hold, then we have that for any
where is the th quantile of the standard Gaussian distribution.
Theorem \arabicsection.\arabicthm implies that and have the same asymptotic Gaussian distribution and power performance, so that we can directly use the statistic which only requires one OLS regression for estimation. We point out that (\arabicsection.\arabicequation) and (\arabicsection.\arabicequation) essentially impose upper bound conditions on As discussed in Remark \arabicsection.\arabicthm, if one chooses Fourier or orthogonal polynomial basis functions, when and are large enough, we require to guarantee (\arabicsection.\arabicequation). Analogously, (\arabicsection.\arabicequation) requires that Recall (\arabicsection.\arabicequation). In particular, when . Note that is only required to be larger than or equal to . As a result, the above constraints are not restrictive.
Remark \arabicsection.\arabicthm.
We add a remark on the assumptions of the parameters. First, (\arabicsection.\arabicequation) and (\arabicsection.\arabicequation) are mainly used to guarantee that can be written as a quadratic form in terms of in (\arabicsection.\arabicequation) under the null hypotheses. More specifically, denote we can conclude from Ding & Zhou (2023) that
(\arabicsection.\arabicequation) |
Second, (\arabicsection.\arabicequation) guarantees the difference between and is negligible when in (\arabicsection.\arabicequation) holds.
Before concluding this section, we summarize the properties of in (\arabicsection.\arabicequation), which explain the validity of the bootstrap procedure. The motivation comes from the arguments in Remark \arabicsection.\arabicthm: the statistics are essentially quadratic forms of the locally stationary vector (c.f. (\arabicsection.\arabicequation)) and are asymptotically Gaussian. Consequently, it is possible to mimic their asymptotic distributions using a multiplier bootstrap procedure.
Corollary \arabicsection.\arabicthm.
Suppose the assumptions of Theorems \arabicsection.\arabicthm and \arabicsection.\arabicthm hold. Moreover, assume the time series has finite fourth moment and as and
Then there exists a sequence of sets such that and under the event we have that conditional on the data the results in Theorems \arabicsection.\arabicthm and \arabicsection.\arabicthm still hold by replacing with
\arabicsection Numerical simulations and real data analysis
In this section, we use numerical simulations and a real data analysis to illustrate the usefulness of our estimation and inference procedures for the PACFs. All the calculations, implementations and plots can be done with a few lines of code using our R package.
\arabicsection.\arabicsubsection Numerical simulations
In this section, we conduct numerical simulations to illustrate the usefulness of our methodologies using both stationary and non-stationary models. Due to space constraints, we focus on reporting the results for AR-type models in the following. Additional simulation results on other types of models can be found in Section D of our supplement.
In what follows, for some we consider the stationary AR(2) process
(\arabicsection.\arabicequation) |
and the locally stationary AR(2) process
(\arabicsection.\arabicequation) |
where are i.i.d. standard Gaussian random variables. Note that when (\arabicsection.\arabicequation) reduces to a standard white noise process and (\arabicsection.\arabicequation) reduces to a time-varying white noise process.
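The exact coefficient values of the two AR(2) models above were lost in extraction; as a hedged stand-in, the sketch below simulates a stable locally stationary AR(2) with coefficient functions of our own choosing and checks that the local serial dependence indeed changes over time:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
# hypothetical smooth time-varying AR(2) coefficients (stand-ins for the
# paper's parameterized choices); kept inside the AR(2) stability region
a1 = lambda t: 0.5 * np.sin(2 * np.pi * t)
a2 = lambda t: -0.2

X = np.zeros(n)
for i in range(2, n):
    t = i / n
    X[i] = a1(t) * X[i - 1] + a2(t) * X[i - 2] + rng.standard_normal()

def lag1_corr(x):
    """Sample lag-1 autocorrelation."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# local dependence differs across time: positive early, negative late,
# tracking the sign of a1(t) in each quarter of the sample
print(round(lag1_corr(X[:n // 4]), 2), round(lag1_corr(X[-n // 4:]), 2))
```

A windowed diagnostic of this kind is exactly what the local PACF formalizes; a stationary AR(2) generated the same way (with constant coefficients) would show the same local dependence in both quarters.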
\arabicsection.\arabicsubsection.\arabicsubsubsection Estimation of PACFs
We first estimate the PACFs using the sieve method introduced in Section \arabicsection.\arabicsubsection. For concreteness and due to space constraints, regarding and in (\arabicsection.\arabicequation) and (\arabicsection.\arabicequation), we only report the results for a representative choice; similar results and conclusions hold for other choices.
In Figure \arabicfigure, we estimate the PACFs of the first 10 lags for both models (\arabicsection.\arabicequation) and (\arabicsection.\arabicequation) using the estimators in (\arabicsection.\arabicequation). We use the Legendre polynomials as the basis functions, and the number of basis functions can be chosen using the cross-validation method described in Section C. The computation of the PACFs can be done directly using the function from our package. From these plots, we can see that our method applies to both models and obtains reasonably accurate estimates. According to the cut-off property of the PACFs of AR models, these plots also suggest that the time series may be generated from AR(2)-type models. In Section D.\arabicsubsection of our supplement, we compare our proposed method with the ones introduced in Killick et al. (2020) and find that our method is generally more accurate in terms of mean integrated squared error. The main reason is that the sieve method is adaptive to the smoothness of the covariance structure and suffers less from boundary issues than the kernel method. In addition, more simulation results on other types of models can be found in Section D.\arabicsubsection of our supplement, where similar conclusions can be drawn.
\arabicsection.\arabicsubsection.\arabicsubsubsection Inference of PACFs
In this section, we examine the accuracy and sensitivity of our proposed Algorithm 1 when applied to testing (\arabicsection.\arabicequation) and (\arabicsection.\arabicequation). We first investigate the accuracy. To test (\arabicsection.\arabicequation) for some individual PACF at lag , in the context of (\arabicsection.\arabicequation) and (\arabicsection.\arabicequation), we consider the following four settings: (1) and ; (2) and ; (3) and ; (4) and . Moreover, to test (\arabicsection.\arabicequation) for white noise, we consider the following setting in the context of (\arabicsection.\arabicequation) and (\arabicsection.\arabicequation): (5) .
In Table \arabictable, we report the simulated type I error rates for all the above null settings and three different types of basis functions when ; the left five columns correspond to the nominal level 0.1 and the right five to 0.05. We can see that our Algorithm 1 is quite accurate for both tests (\arabicsection.\arabicequation) and (\arabicsection.\arabicequation).
Basis/Setting | (1) | (2) | (3) | (4) | (5) | (1) | (2) | (3) | (4) | (5) |
---|---|---|---|---|---|---|---|---|---|---|
Model (\arabicsection.\arabicequation) | ||||||||||
Fourier | 0.108 | 0.096 | 0.108 | 0.11 | 0.098 | 0.048 | 0.059 | 0.061 | 0.054 | 0.049 |
Legendre | 0.109 | 0.1 | 0.096 | 0.103 | 0.108 | 0.06 | 0.053 | 0.055 | 0.064 | 0.048 |
Daubechies-9 | 0.096 | 0.102 | 0.11 | 0.101 | 0.095 | 0.058 | 0.061 | 0.054 | 0.058 | 0.043 |
Model (\arabicsection.\arabicequation) | ||||||||||
Fourier | 0.098 | 0.11 | 0.108 | 0.099 | 0.107 | 0.048 | 0.062 | 0.057 | 0.048 | 0.039 |
Legendre | 0.103 | 0.094 | 0.103 | 0.095 | 0.113 | 0.055 | 0.053 | 0.061 | 0.048 | 0.053 |
Daubechies-9 | 0.093 | 0.09 | 0.093 | 0.104 | 0.1 | 0.052 | 0.058 | 0.058 | 0.047 | 0.046 |
Then we study the power. For definiteness, we report the results for the white noise test that
(\arabicsection.\arabicequation) |
More specifically, in view of the models (\arabicsection.\arabicequation) and (\arabicsection.\arabicequation) under consideration, in (\arabicsection.\arabicequation) is equivalent to , while is an AR(1) alternative that . In Figure \arabicfigure below, we study the power of our proposed method as deviates away from zero. We can conclude that our proposed tests are reasonably powerful once the alternative deviates from the null.
Finally, for further visualization, using (\arabicsection.\arabicequation) as an example, in Figure \arabicfigure below we provide two typical plots of the p-values associated with each lag with (\arabicsection.\arabicequation) under both the null and the alternative as in (\arabicsection.\arabicequation). Here, for the alternative, we set and in (\arabicsection.\arabicequation). From the plots we can easily distinguish the null and the alternative. Moreover, the plots clearly suggest that the null is a white noise process and the alternative is an AR(1) process. Additionally, more simulation results on other types of models can be found in Section D.\arabicsubsection of our supplement, where similar conclusions hold.
\arabicsection.\arabicsubsection Real data analysis
In this section, we apply our methods to the monthly Euro-Dollar exchange rate data set which has also been considered in Killick et al. (2020). The data set can be downloaded from EuroStat at https://ec.europa.eu/eurostat/web/products-datasets/-/ei_mfrt_m. We analyze the exchange rates from January 1999 until October 2017.
Following the convention in financial data analysis, we consider the log returns of the exchange rate. We apply our methods to estimate the PACFs and conduct inference on them. As can be seen from the left panel of Figure \arabicfigure, based on our inference, a white-noise-driven AR(1) model will be useful to model the data. This agrees with the findings in Killick et al. (2020). In fact, we can conduct the white noise test for the original time series as in (\arabicsection.\arabicequation) using Algorithm 1 and find that the p-value is . This shows that the time series is not white noise. Furthermore, after fitting the time series with a time-varying AR(1) model (note that for an AR(1) model, the coefficient is identical to ), we can conduct the white noise test for the residuals and find that the p-value is . This confirms that the AR(1) model is likely to be appropriate.
In the right panel of Figure \arabicfigure, we provide the estimate of the PACF at lag one, which is also the coefficient of a time-varying AR(1) model. It can be seen that even though the PACF is relatively stable, it experiences some smooth changes over time. In fact, one can directly generalize the significance test into a test of constancy by considering . The multiplier bootstrap of yields a p-value of , indicating that is likely to be time-varying. This suggests that a locally stationary AR(1) model may be more appropriate for model fitting. Due to the pronounced responsiveness of Euro-Dollar exchange rates to the global economy, our plots serve as reasonably accurate reflections of prevailing global economic conditions. For instance, leading up to the global financial crisis (2005-2007), the PACF displays a consistent pattern. From 2008 onwards, a gradual decline is observed, hitting its lowest point around 2013, coinciding with the recognized period of the financial crisis. Following this, the PACF demonstrates a resurgence. Such a visual representation offers valuable insight into the evolving dynamics of Euro-Dollar exchange rates, aiding a deeper comprehension of their temporal behavior.
Supplementary file
In the supplementary file, we provide the technical proofs, some additional remarks, practical methods for choosing the tuning parameters, and additional simulation results.
A Technical proofs
A.\arabicsubsection Proofs of Section \arabicsection
In this subsection, we prove Theorem \arabicsection.\arabicthm. The strategies and ideas are similar to those of Theorems 2.4 and 2.11 of Ding & Zhou (2023). We focus on explaining the main differences.
[Proof of Theorem \arabicsection.\arabicthm] The first part of the proof is analogous to the proof of equation (2.6) of Ding & Zhou (2023), so we only sketch it. Due to similarity, we only prove the first control in (\arabicsection.\arabicequation). Recall (\arabicsection.\arabicequation). Set . It is easy to see from (1) of Assumption \arabicsection.\arabicthm and Yule-Walker’s equation that
(A.\arabicequation) |
where we denoted that
where . For the rest of the proof, we follow the proof of Theorem 2.4 of Ding & Zhou (2023) verbatim. More specifically, according to (A.\arabicequation), we find that in order to study , it suffices to control the entries of the th row of and all the entries of . For , we denote the symmetric banded matrix as
Here is some large constant. Using (1) and (2) of Assumption \arabicsection.\arabicthm, by a discussion similar to equation (D.7) of Ding & Zhou (2023), we find that for some constant
where is defined via Moreover, using (1) of Assumption \arabicsection.\arabicthm, according to the discussions below equation (D.8) of Ding & Zhou (2023), we find that for some constant
Combining the above controls, we complete the proof of (\arabicsection.\arabicequation).
For the second part of the proof, the smoothness follows from Lemma 3.1 of Ding & Zhou (2020). To see (\arabicsection.\arabicequation), by (A.\arabicequation) and (\arabicsection.\arabicequation), using the Cauchy-Schwarz inequality, we have that
(A.\arabicequation) |
The rest of the proof follows exactly the arguments of the proof of Theorem 2.11 of Ding & Zhou (2023). In particular, using (1)-(3) of Assumption \arabicsection.\arabicthm, by an argument similar to the equations between (D.31) and (D.32) of Ding & Zhou (2023), we find that the first term on the right-hand side of (A.\arabicequation) can be bounded by and the second term can be bounded by . This completes our proof.
A.\arabicsubsection Proofs of Section \arabicsection
In this subsection, we prove the main results in Section \arabicsection.
[Proof of Theorem \arabicsection.\arabicthm] The proof is similar to that of Theorem 3.2 of Ding & Zhou (2021). Due to similarity, we only sketch the key points. Similar to (\arabicsection.\arabicequation), we set as the sieve estimator for , that is,
(A.\arabicequation) |
Recall (\arabicsection.\arabicequation). For denote
and Recall (\arabicsection.\arabicequation). The starting point is the following decomposition
Since the second term on the right-hand side of the above equation can be bounded by using (\arabicsection.\arabicequation), it suffices to control the first term. According to (A.\arabicequation), by the Cauchy-Schwarz inequality, we see that
(A.\arabicequation) |
Moreover, for the OLS estimation, we have that
(A.\arabicequation) |
where , with representing the error term in (\arabicsection.\arabicequation). The rest of the discussion follows the proof of Theorem 3.2 of Ding & Zhou (2021).
To control the right-hand side of (A.\arabicequation), first, by an argument similar to (A.9) of Ding & Zhou (2021), for defined in (\arabicsection.\arabicequation), we find that
Combining the above discussion, together with Assumption \arabicsection.\arabicthm and the assumption of (\arabicsection.\arabicequation), we see that
(A.\arabicequation) |
Second, according to a discussion similar to (A.12) of Ding & Zhou (2021) and the fact that , we have that
(A.\arabicequation) |
We point out that the error rate is faster than the in (A.12) of Ding & Zhou (2021) or (2.23) and (2.24) of Ding & Zhou (2023) because of the assumption that is a mean-zero time series. Inserting the above two bounds into (A.\arabicequation), we conclude that
Combining (A.\arabicequation), we can conclude our proof.
[Proof of Theorem \arabicsection.\arabicthm] The proof follows the proof of Proposition 3.7 of Ding & Zhou (2023) verbatim. We omit the details.
[Proof of Theorem \arabicsection.\arabicthm] The proof of part 1 follows the proof of Proposition 3.7 of Ding & Zhou (2023) verbatim. We omit the details.
For part 2, we start with (\arabicsection.\arabicequation). Before proceeding to the actual proof, we first explore the relation between and for under the null hypothesis of (\arabicsection.\arabicequation). First of all, under (\arabicsection.\arabicequation), according to Theorem \arabicsection.\arabicthm, we can conclude that
(A.\arabicequation) |
Combining this with an argument similar to that of Theorem 2.11 of Ding & Zhou (2023) and the smoothness of , we can conclude that for all
(A.\arabicequation) |
Then with an argument similar to Theorem 2.11 of Ding & Zhou (2023), in light of the representation (\arabicsection.\arabicequation), we see that
Therefore, we can repeat (A.\arabicequation) and (A.\arabicequation) to get that
(A.\arabicequation) |
and conclude that
Using our assumptions on , we find that the error term of the above approximation is always negligible with high probability. With the above preparation, we proceed to the proof. By definition, we have that
By a discussion similar to Theorem \arabicsection.\arabicthm, we have that (see Theorem 3.3 of Ding & Zhou (2020)) for
where we recall (\arabicsection.\arabicequation). Combining the above controls with Theorem \arabicsection.\arabicthm, under the null hypothesis (\arabicsection.\arabicequation), we see that
Consequently, we have that
Then, for (\arabicsection.\arabicequation), using the definition of in (\arabicsection.\arabicequation) and the fact that the proof follows from (\arabicsection.\arabicequation), (\arabicsection.\arabicequation) and the assumption of (\arabicsection.\arabicequation).
Finally, for part 3, by a discussion similar to (A.\arabicequation) and (A.\arabicequation), we find that (\arabicsection.\arabicequation) yields that
Then the proof follows the proof of Proposition 3.7 of Ding & Zhou (2023) verbatim.
[Proof of Corollary \arabicsection.\arabicthm] The proof follows directly from the proof of Theorem 3.10 of Ding & Zhou (2023) by setting therein.
A.\arabicsubsection Additional assumptions
In this section, we collect some more assumptions and provide more discussions on these technical assumptions. The following assumption will be needed in our proof. It is a mild assumption and can be easily satisfied by many time series. We refer the readers to Section C.1 of Ding & Zhou (2023) for more details.
Assumption A.\arabicthm.
We assume that the following conditions hold:
-
\arabicenumi.
Suppose in Assumption \arabicsection.\arabicthm. Moreover, we assume that is of the form (\arabicsection.\arabicequation) and satisfies that for large constant
-
\arabicenumi.
We assume that the derivatives of decay with as follows
where denotes the th derivative of with respect to .
-
\arabicenumi.
There exist constants such that for some constant we have
B Additional remarks and examples
In this section, we provide several remarks and examples. The following remark provides more discussions on Theorem \arabicsection.\arabicthm.
Remark B.\arabicthm.
Theorem \arabicsection.\arabicthm is established for locally stationary time series as in Definition \arabicsection.\arabicthm where an exact cut-off is not available in general. An exception is the locally stationary AR(p) process as in Zhou (2013b), where
(B.\arabicequation) |
where is a time-varying white noise process. In this setting, it is clear that
so that , and (\arabicsection.\arabicequation) holds trivially once is fixed or diverges slowly. In fact, as proved in Theorem 2.11 of Ding & Zhou (2023), any locally stationary time series satisfying Definition \arabicsection.\arabicthm and Assumption \arabicsection.\arabicthm can always be well approximated by a time series of the form (B.\arabicequation) with slowly diverging . In this regard, our results can be used to provide an order for the AR approximation; see Remark B.\arabicthm below.
The following remark provides more explanations on the conditions in Assumption \arabicsection.\arabicthm.
Remark B.\arabicthm.
The conditions (1)–(3) in Assumption \arabicsection.\arabicthm are mild and commonly used in the literature. First, (1) is introduced to rule out erratic behavior of the time series and is frequently used in the statistics literature on covariance and precision matrix estimation Cai et al. (2016); Chen et al. (2013); Ding & Zhou (2020, 2023); Yuan (2010). Moreover, as proved in (Ding & Zhou, 2023, Proposition 2.9), it is equivalent to the uniform positiveness of the local spectral density function of . Second, (2) imposes the condition that the temporal structure of decays polynomially fast. This amounts to a short-range dependence requirement for when . Analogous results can easily be obtained in the exponential decay setting where
Third, (3) requires that the autocovariance functions of are smooth so that its PACFs can be estimated consistently. It is commonly used in the literature of locally stationary time series Dahlhaus (2012); Dahlhaus et al. (2019); Ding & Zhou (2020, 2023); Zhou & Wu (2009).
The remark below provides some insights on how to use our results to estimate the order of a locally stationary AR process.
Remark B.\arabicthm.
We discuss how to generalize the use of PACFs of stationary AR process to locally stationary AR process. For definiteness, we focus on the following time-varying AR() process which has been used in Zhou (2013b); Dahlhaus et al. (2019)
(B.\arabicequation) |
where is some locally-stationary white noise process and are some smooth functions on . Before estimating these time-varying coefficients, we first need to provide an estimator for Here is allowed to diverge with Based on the results of Theorem \arabicsection.\arabicthm, inspired by the ideas in Ding & Yang (2022); Ding et al. (2023), we can propose a sequential test estimate based on the following hypothesis testing problem
(B.\arabicequation) |
where is some pre-given integer representing our belief of the true value of and is some large integer that is interpreted as the maximum possible order the model can have. In light of (\arabicsection.\arabicequation), we can use the following estimate
(B.\arabicequation) |
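As a rough illustration of the sequential estimate above, the sketch below selects an AR order by scanning for the smallest order beyond which all sample PACFs are insignificant. For simplicity it replaces the paper's bootstrap test with a crude stationary large-sample band (an assumption made purely for illustration), and computes the sample PACF from the sample ACF via the Durbin–Levinson recursion; the AR(2) data-generating process is also our own toy choice.

```python
import numpy as np

def sample_acf(x, kmax):
    # Sample autocorrelations rho_0, ..., rho_kmax.
    x = x - x.mean()
    c0 = x @ x / len(x)
    return np.array([1.0] + [x[:-k] @ x[k:] / (len(x) * c0)
                             for k in range(1, kmax + 1)])

def pacf_from_acf(rho):
    # Durbin-Levinson recursion: autocorrelations -> partial autocorrelations.
    kmax = len(rho) - 1
    phi = np.zeros((kmax + 1, kmax + 1))
    phi[1, 1] = rho[1]
    for k in range(2, kmax + 1):
        phi[k, k] = (rho[k] - phi[k - 1, 1:k] @ rho[k - 1:0:-1]) / \
                    (1 - phi[k - 1, 1:k] @ rho[1:k])
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, k - 1:0:-1]
    return phi.diagonal()[1:]          # lag-1, ..., lag-kmax PACFs

def sequential_order(pacf, n, b):
    # Smallest r such that the PACFs at lags r+1, ..., b are all insignificant;
    # the 3 / sqrt(n) band is a crude stand-in for the paper's bootstrap test.
    band = 3.0 / np.sqrt(n)
    for r in range(b + 1):
        if np.all(np.abs(pacf[r:]) < band):
            return r
    return b

rng = np.random.default_rng(42)
n, b = 5000, 8
x = np.zeros(n)
for i in range(2, n):                  # a stationary AR(2) example
    x[i] = 0.5 * x[i - 1] - 0.3 * x[i - 2] + rng.standard_normal()
pacf = pacf_from_acf(sample_acf(x, b))
r_hat = sequential_order(pacf, n, b)
```

On such AR(2) data the first two sample PACFs sit far outside the band while the higher lags fall inside it, so the scan stops at an order near two.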
The following remark is related to the hypothesis testing (\arabicsection.\arabicequation).
Remark B.\arabicthm.
Two remarks are in order. First, in classical stationary time series analysis, in addition to the Box-Pierce (BP) test Box & Pierce (1970), one also uses the Ljung–Box (LB) test Ljung & Box (1978). The BP and LB tests are asymptotically equivalent and follow the chi-squared distribution with the same degrees of freedom. In this regard, we can also modify the LB test using
Moreover, we can study such a modified statistic as in Theorem \arabicsection.\arabicthm. Due to the asymptotic equivalence, we omit further details. Second, (\arabicsection.\arabicequation) is frequently used for model diagnostics. In this regard, it provides an alternative approach to choosing the order of AR approximations: one checks whether the residuals are white noise after fitting some AR models.
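For reference, the classical stationary forms of the two statistics can be written down directly. In our setting the critical values come from the multiplier bootstrap rather than the chi-squared limit, so the sketch below (with hypothetical white-noise data) only illustrates the algebraic relation between the BP statistic n Σ ρ̂_k² and the LB statistic n(n+2) Σ ρ̂_k²/(n−k), which underlies their asymptotic equivalence.

```python
import numpy as np

def sample_acf(x, m):
    # Sample autocorrelations rho_1, ..., rho_m.
    x = np.asarray(x, dtype=float) - np.mean(x)
    c0 = x @ x / len(x)
    return np.array([x[:-k] @ x[k:] / (len(x) * c0) for k in range(1, m + 1)])

def box_pierce(x, m):
    # Q_BP = n * sum_{k=1}^m rho_k^2
    return len(x) * np.sum(sample_acf(x, m) ** 2)

def ljung_box(x, m):
    # Q_LB = n * (n + 2) * sum_{k=1}^m rho_k^2 / (n - k)
    n = len(x)
    rho = sample_acf(x, m)
    return n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, m + 1)))

rng = np.random.default_rng(7)
x = rng.standard_normal(5000)          # white noise under the null
q_bp, q_lb = box_pierce(x, 5), ljung_box(x, 5)
```

Since each LB term carries the factor (n+2)/(n−k) > 1, Q_LB always exceeds Q_BP, but the gap vanishes as the sample size grows.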
Finally, we provide two frequently-used models of locally stationary time series in the literature and explain how Definition \arabicsection.\arabicthm and Assumption \arabicsection.\arabicthm can be easily satisfied.
Example B.\arabicthm.
We shall first consider the locally stationary time series model in Zhou & Wu (2009, 2010) using a physical representation so that
(B.\arabicequation) |
where and are i.i.d. centered random variables, and is a measurable function such that is a properly defined random variable for all . In (B.\arabicequation), by allowing the data-generating mechanism to depend on the time index in such a way that it changes smoothly with respect to , one obtains local stationarity in the sense that the subsequence is approximately stationary if its length is sufficiently small compared to . Moreover, the temporal decay is quantified using the physical dependence measure for (B.\arabicequation) as follows
(B.\arabicequation) |
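The physical dependence measure can be approximated by direct Monte Carlo from its coupling definition: replace the single innovation at lag j by an i.i.d. copy and measure the L² distance between the two outputs. The sketch below uses an illustrative causal filter with geometric coefficients a(t)^k and a(t) = t/2 (our own toy choice, not a model from the paper), for which the lag-j measure equals √2 · a(t)^j exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

def G(t, eps):
    # Toy filter G(t, F_i) = sum_k a(t)^k * eps_{i-k} with a(t) = t / 2;
    # eps has shape (reps, depth), column k holding the lag-k innovation.
    a = 0.5 * t
    return eps @ (a ** np.arange(eps.shape[1]))

def delta(t, j, reps=20000, depth=40):
    # Monte-Carlo estimate of the physical dependence measure delta(j, 2):
    # couple the inputs by replacing the lag-j innovation with an i.i.d. copy.
    eps = rng.standard_normal((reps, depth))
    eps_star = eps.copy()
    eps_star[:, j] = rng.standard_normal(reps)
    diff = G(t, eps) - G(t, eps_star)
    return float(np.sqrt(np.mean(diff ** 2)))

d1, d3 = delta(1.0, 1), delta(1.0, 3)   # geometric decay: d3 / d1 = a^2
```

At t = 1 the coefficient is a = 0.5, so the estimates should cluster around 0.5·√2 and 0.125·√2, exhibiting the geometric temporal decay that Assumption B controls.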
Moreover, the following assumptions are needed to ensure local stationarity.
Assumption B.\arabicthm.
defined in (B.\arabicequation) satisfies the property of stochastic Lipschitz continuity, i.e., for some and
(B.\arabicequation) |
where Furthermore,
(B.\arabicequation) |
It can be shown that a time series with physical representation (B.\arabicequation) satisfying Assumption B.\arabicthm fulfills Definition \arabicsection.\arabicthm. In particular, for each fixed , in Definition \arabicsection.\arabicthm can be found easily using the following
(B.\arabicequation) |
Note that the assumptions (B.\arabicequation) and (B.\arabicequation) ensure that is Lipschitz continuous in . Moreover, for each fixed , is the autocovariance function of , which is a stationary process.
The physical representation form (B.\arabicequation) includes many commonly used locally stationary time series models. For example, let be zero-mean i.i.d. random variables (or a white noise) with variance . We also assume that are functions such that
(B.\arabicequation) |
Then (B.\arabicequation) is a locally stationary linear process. It is easy to see that (2) and (3) of Assumption \arabicsection.\arabicthm will be satisfied if and
(B.\arabicequation) |
Furthermore, we note that the local spectral density function of (B.\arabicequation) can be written as , where is defined such that , with being the backshift operator. As discussed in Remark B.\arabicthm, (1) of Assumption B.\arabicequation will be satisfied if for all and , where is some universal constant. For more examples of locally stationary time series of the form (B.\arabicequation), especially nonlinear time series, we refer the readers to Wu (2005), (Ding & Zhou, 2020, Section 2.1), (Dahlhaus et al., 2019, Example 2.2 and Proposition 4.4), (Karmakar et al., 2022, Proposition E.6) and Ding & Zhou (2021); Karmakar et al. (2022); Mayer et al. (2020). In particular, the time-varying AR and ARCH models can be written in the form (B.\arabicequation) asymptotically Ding & Zhou (2023), and Assumptions \arabicsection.\arabicthm and B.\arabicthm can be easily satisfied under mild conditions. We refer the readers to the aforementioned references for more details.
For a second example, note that in Dahlhaus & Rao (2006); Dette et al. (2011); Vogt (2012), a locally stationary time series is defined as follows (see Definition 2.1 of Vogt (2012)): is a locally stationary time series if for each rescaled time point there exists a strictly stationary process such that
(B.\arabicequation) |
where for some . By arguments similar to those for model (B.\arabicequation) Ding & Zhou (2023), Definition \arabicsection.\arabicthm as well as the assumptions of this subsection can be verified for (B.\arabicequation); in particular, (B.\arabicequation) implies (\arabicsection.\arabicequation).
C Tuning parameters selection
In this section, we explain how to choose the tuning parameters associated with our proposed methodology.
First, we discuss how to choose the tuning parameters and used in Algorithm 1. We use a data-driven procedure proposed in Bishop (2013) to choose . For a given integer , say , we divide the time series into two parts: the training part and the validation part . With some preliminary initial value , we propose a sequence of candidates in an appropriate neighborhood of , where is some given integer. For each of the choices, we fit a time-varying AR() model as in (\arabicsection.\arabicequation) with sieve basis expansion using the training data set. Then, using the fitted model, we forecast the time series in the validation part. Let be the forecasts of , respectively, using the parameter . We then choose the parameter with the minimum sample mean squared forecast error, i.e.,
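The train/validation selection just described can be sketched as follows, assuming (for illustration only) a time-varying AR(1) fitted with Legendre sieve basis functions; candidate numbers c of basis functions are compared by their one-step-ahead forecast mean squared error on the held-out last quarter of the sample.

```python
import numpy as np

def design(t, xlag, c):
    # tv-AR(1) sieve design: lagged values times the first c Legendre polynomials.
    B = np.polynomial.legendre.legvander(2 * t - 1, c - 1)
    return B * xlag[:, None]

def cv_choose_c(x, candidates, split=0.75):
    n = len(x)
    m = int(split * n)                       # size of the training part
    t, xlag, y = np.arange(1, n) / n, x[:-1], x[1:]
    mses = []
    for c in candidates:
        beta, *_ = np.linalg.lstsq(design(t[:m - 1], xlag[:m - 1], c),
                                   y[:m - 1], rcond=None)
        pred = design(t[m - 1:], xlag[m - 1:], c) @ beta
        mses.append(float(np.mean((y[m - 1:] - pred) ** 2)))
    return candidates[int(np.argmin(mses))], mses

rng = np.random.default_rng(3)
n = 4000
tt = np.arange(n) / n
phi = 1.5 * tt - 0.75                        # linearly varying AR(1) coefficient
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi[i] * x[i - 1] + rng.standard_normal()
c_hat, mses = cv_choose_c(x, candidates=[1, 2, 3, 4, 5, 6])
```

Since the true coefficient varies over time, the constant-coefficient fit (c = 1) forecasts noticeably worse than the time-varying fits, and the selection rule rejects it.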
To choose an for practical implementation, in Zhou (2013a), the author used the minimum volatility (MV) method to choose the window size for the scalar covariance function. The MV method does not depend on the specific form of the underlying time series dependence structure and hence is robust to misspecification of the latter structure Politis et al. (1999). The MV method utilizes the fact that the covariance structure of becomes stable when the block size is in an appropriate range, where is defined as
(C.\arabicequation) |
Therefore, we aim to minimize the standard errors of the latter covariance structure over a suitable range of candidate ’s. In detail, for a given large value and a neighborhood control parameter , we choose a sequence of window sizes and obtain by replacing with in (C.\arabicequation). For each , we calculate the matrix norm error of in the -neighborhood, i.e.,
where . We then choose the estimate of using
Note that in Zhou (2013a) the author used and we also adopt this choice in the current paper.
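A generic minimum-volatility sketch follows: compute a candidate statistic over a grid of window sizes and pick the window whose statistic is most stable across its 2h+1 neighbors (here with h = 3, the choice adopted above). The statistic below, a block-based long-run variance estimate, is our own stand-in for the covariance quantity in (C.\arabicequation), whose exact form we do not reproduce here.

```python
import numpy as np

def block_lrv(x, m):
    # Stand-in statistic: long-run variance estimate from overlapping block
    # sums of window size m (illustrative only, not the paper's quantity).
    sums = np.convolve(x, np.ones(m), mode='valid') / np.sqrt(m)
    return float(np.mean(sums ** 2))

def mv_choose(x, windows, h=3):
    # Minimum volatility: pick the window whose statistic has the smallest
    # standard deviation over its 2h+1 neighboring candidates.
    stats = np.array([block_lrv(x, m) for m in windows])
    ses = np.array([stats[j - h:j + h + 1].std()
                    for j in range(h, len(windows) - h)])
    return windows[h + int(np.argmin(ses))]

rng = np.random.default_rng(5)
e = rng.standard_normal(3000)
x = np.empty(3000)
x[0] = e[0]
for i in range(1, 3000):                 # AR(1) data with moderate dependence
    x[i] = 0.4 * x[i - 1] + e[i]
m_hat = mv_choose(x, windows=list(range(5, 61, 5)))
```

By construction the selected window lies in the interior of the candidate grid, where the statistic has settled down; the edges of the grid are excluded because their neighborhoods are truncated.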
Second, we discuss how to choose a large value of to construct the statistic in (\arabicsection.\arabicequation). Theoretically, the lower bound for the order of is given by as in (\arabicsection.\arabicequation), while the upper bound is provided in the assumption of (\arabicsection.\arabicequation). The lower bound yields that for all while the upper bound guarantees that the error term is negligible. To balance these two conditions, for practical implementation, we use the following value
where is some pre-given large value (say, ). We emphasize that this can be easily done using our package by generating a plot as in Figure \arabicfigure.
D Additional simulation results
In this section, we provide additional numerical simulation results.
D.\arabicsubsection More results on other types of models
In this section, we conduct more numerical simulations using both stationary and non-stationary MA(1) models. For some constant we consider the stationary MA(1) process
(D.\arabicequation) |
and the locally stationary MA(1) process
(D.\arabicequation) |
where are i.i.d. standard Gaussian random variables. Note that when in (D.\arabicequation) and (D.\arabicequation), they both reduce to the standard white noise. Since (D.\arabicequation) and (D.\arabicequation) are essentially AR() models, one can follow the discussions of Section 3.3 of Shumway & Stoffer (2017) to calculate the true PACFs.
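Concretely, writing the MA(1) at a fixed time point as x_t = ε_t + a ε_{t−1} (a parameterization we assume for illustration), the autocorrelations are ρ(1) = a/(1+a²) and ρ(k) = 0 for k ≥ 2, and the Durbin–Levinson recursion reproduces the closed-form PACF φ_kk = −(−a)^k (1−a²)/(1−a^{2(k+1)}) given in Section 3.3 of Shumway & Stoffer (2017):

```python
import numpy as np

def pacf_from_acf(rho):
    # Durbin-Levinson recursion mapping autocorrelations to the PACF.
    kmax = len(rho) - 1
    phi = np.zeros((kmax + 1, kmax + 1))
    phi[1, 1] = rho[1]
    for k in range(2, kmax + 1):
        phi[k, k] = (rho[k] - phi[k - 1, 1:k] @ rho[k - 1:0:-1]) / \
                    (1 - phi[k - 1, 1:k] @ rho[1:k])
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, k - 1:0:-1]
    return phi.diagonal()[1:].copy()

a = 0.6                                   # MA(1) coefficient at a fixed time
kmax = 10
rho = np.zeros(kmax + 1)
rho[0], rho[1] = 1.0, a / (1 + a ** 2)    # MA(1) autocorrelations
pacf_dl = pacf_from_acf(rho)

ks = np.arange(1, kmax + 1)
pacf_closed = -(-a) ** ks * (1 - a ** 2) / (1 - a ** (2 * (ks + 1)))
```

The two computations agree to machine precision; note that the MA(1) PACF alternates in sign and decays geometrically rather than cutting off, in contrast to the AR case.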
For the estimation of the PACFs, in Figure A, we provide the plots of the PACFs of the first 10 lags for both (D.\arabicequation) and (D.\arabicequation). We can see that our estimates are reasonably accurate. Regarding the inference of the PACFs, for definiteness, we focus on the white noise test (\arabicsection.\arabicequation), where corresponds to in (D.\arabicequation) and (D.\arabicequation) and corresponds to an MA(1) alternative that . Under the null, (D.\arabicequation) and (D.\arabicequation) are essentially the same model. For , under the type I error rate , the simulated type I error rates are for the Fourier, Legendre and Daubechies-9 basis functions, respectively, based on 1,000 repetitions. This shows the accuracy of our test. To examine the power, in Figure B, we report how the simulated power changes as deviates away from zero. We can conclude that our proposed test is reasonably powerful once the alternative deviates from the null.
D.\arabicsubsection Comparison with Killick et al. (2020) on estimating the PACFs
In this section, we compare our method with the ones proposed in Killick et al. (2020) in terms of the estimation of the PACFs using the mean integrated squared error (MISE). To implement Killick et al. (2020), we use the package developed by the authors of Killick et al. (2020).
For definiteness, we consider the AR type models (\arabicsection.\arabicequation) and (\arabicsection.\arabicequation) with and Consequently, for model (\arabicsection.\arabicequation), the true PACFs are
and for model (\arabicsection.\arabicequation), the true PACFs are
Let be some estimator for ; the MISE is defined as
In actual calculations, the MISE can be well approximated by a Riemann sum. In Table A, we record the MISEs for our proposed method (denoted as Proposed) and the two methods in Killick et al. (2020) (the wavelet-based method is denoted as Lpacf-I and the Epanechnikov windowed method as Lpacf-II). We can conclude that our proposed method has better finite sample performance than the methods of Killick et al. (2020), which are known to perform worse near the boundaries.
Methods/Lags | ||||
---|---|---|---|---|
Model (\arabicsection.\arabicequation) | ||||
Proposed | ||||
Lpacf-I | ||||
Lpacf-II | 0.042 | 0.054 | 0.043 | 0.051 |
Model (\arabicsection.\arabicequation) | ||||
Proposed | ||||
Lpacf-I | 0.016 | |||
Lpacf-II | 0.049 | 0.051 | 0.042 | 0.055 |
References
- Bishop (2013) Bishop, C. (2013). Pattern Recognition and Machine Learning. Information science and statistics. Springer.
- Box & Pierce (1970) Box, G. E. & Pierce, D. A. (1970). Distribution of residual autocorrelations in autoregressive-integrated moving average time series models. Journal of the American statistical Association 65, 1509–1526.
- Brockwell & Davis (1987) Brockwell, P. & Davis, R. (1987). Time Series: Theory and Methods. Springer-Verlag.
- Cai et al. (2016) Cai, T. T., Liu, W. & Zhou, H. H. (2016). Estimating sparse precision matrix: Optimal rates of convergence and adaptive estimation. Ann. Statist. 44, 455–488.
- Chen (2007) Chen, X. (2007). Large sample sieve estimation of semi-nonparametric models. Handbook of econometrics 6, 5549–5632.
- Chen et al. (2013) Chen, X., Xu, M. & Wu, W. B. (2013). Covariance and precision matrix estimation for high-dimensional time series. Ann. Statist. 41, 2994–3021.
- Dahlhaus (2012) Dahlhaus, R. (2012). Locally stationary processes. In Handbook of statistics, vol. 30. Elsevier, pp. 351–413.
- Dahlhaus & Rao (2006) Dahlhaus, R. & Rao, S. S. (2006). Statistical inference for time-varying arch processes. The Annals of Statistics 34, 1075–1114.
- Dahlhaus et al. (2019) Dahlhaus, R., Richter, S. & Wu, W. B. (2019). Towards a general theory for nonlinear locally stationary processes. Bernoulli 25, 1013–1044.
- Dette et al. (2011) Dette, H., Preuß, P. & Vetter, M. (2011). A measure of stationarity in locally stationary processes with applications to testing. J. Am. Stat. Assoc. 106, 1113–1124.
- Dette & Wu (2020) Dette, H. & Wu, W. (2020). Prediction in locally stationary time series. Journal of Business & Economic Statistics, 1–12.
- Ding et al. (2023) Ding, X., Xie, J., Yu, L. & Zhou, W. (2023). Extreme eigenvalues of sample covariance matrices under generalized elliptical models with applications. arXiv preprint arXiv:2303.03532 .
- Ding & Yang (2022) Ding, X. & Yang, F. (2022). Tracy-Widom distribution for heterogeneous gram matrices with applications in signal detection. IEEE Transactions on Information Theory 68, 6682–6715.
- Ding & Zhou (2020) Ding, X. & Zhou, Z. (2020). Estimation and inference for precision matrices of nonstationary time series. Ann. Statist. 48, 2455–2477.
- Ding & Zhou (2021) Ding, X. & Zhou, Z. (2021). Simultaneous sieve inference for time-inhomogeneous nonlinear time series regression. arXiv preprint arXiv:2112.08545 .
- Ding & Zhou (2023) Ding, X. & Zhou, Z. (2023). Auto-regressive approximations to non-stationary time series, with inference and applications. The Annals of Statistics 51, 1207–1231.
- Ding & Zhou (2024) Ding, X. & Zhou, Z. (2024). Supplement to “On the partial autocorrelation function for locally stationary time series: characterization, estimation and inference”.
- Dégerine & Lambert-Lacroix (2003) Dégerine, S. & Lambert-Lacroix, S. (2003). Characterization of the partial autocorrelation function of nonstationary time series. Journal of Multivariate Analysis 87, 46–59.
- Goerg (2012) Goerg, G. M. (2012). Testing for white noise against locally stationary alternatives. Statistical Analysis and Data Mining: The ASA Data Science Journal 5, 478–492.
- Karmakar et al. (2022) Karmakar, S., Richter, S. & Wu, W. B. (2022). Simultaneous inference for time-varying models. Journal of Econometrics 227, 408–428.
- Killick et al. (2020) Killick, R., Knight, M. I., Nason, G. P. & Eckley, I. A. (2020). The local partial autocorrelation function and some applications. Electronic Journal of Statistics 14, 3268–3314.
- Kley et al. (2019) Kley, T., Preuß, P. & Fryzlewicz, P. (2019). Predictive, finite-sample model choice for time series under stationarity and non-stationarity. Electronic Journal of Statistics 13, 3710–3774.
- Ljung & Box (1978) Ljung, G. M. & Box, G. E. (1978). On a measure of lack of fit in time series models. Biometrika 65, 297–303.
- Mayer et al. (2020) Mayer, U., Zähle, H. & Zhou, Z. (2020). Functional weak limit theorem for a local empirical process of non-stationary time series and its application. Bernoulli 26, 1891–1911.
- Politis et al. (1999) Politis, D. N., Romano, J. P. & Wolf, M. (1999). Subsampling. Springer Series in Statistics. Springer, New York.
- Ramsey (1974) Ramsey, F. L. (1974). Characterization of the partial autocorrelation function. The Annals of Statistics 2, 1296–1301.
- Roueff & Sanchez-Perez (2018) Roueff, F. & Sanchez-Perez, A. (2018). Prediction of weakly locally stationary processes by auto-regression. Lat. Am. J. Probab. Math. Stat. 15, 1215–1239.
- Shumway & Stoffer (2017) Shumway, R. & Stoffer, D. (2017). Time Series Analysis and Its Applications: With R Examples. Springer Texts in Statistics. Springer International Publishing, 4th ed.
- Vogt (2012) Vogt, M. (2012). Nonparametric regression for locally stationary time series. Ann. Statist. 40, 2601–2633.
- Wu (2005) Wu, W. (2005). Nonlinear system theory: Another look at dependence. Proc. Natl. Acad. Sci. USA 102, 14150–14154.
- Yuan (2010) Yuan, M. (2010). High dimensional inverse covariance matrix estimation via linear programming. J. Mach. Learn. Res. 11, 2261–2286.
- Zhou (2013a) Zhou, Z. (2013a). Heteroscedasticity and autocorrelation robust structural change detection. J. Am. Stat. Assoc. 108, 726–740.
- Zhou (2013b) Zhou, Z. (2013b). Inference for non-stationary time-series autoregression. Journal of Time Series Analysis 34, 508–516.
- Zhou & Wu (2009) Zhou, Z. & Wu, W. (2009). Local linear quantile estimation for non-stationary time series. Ann. Stat. 37, 2696–2729.
- Zhou & Wu (2010) Zhou, Z. & Wu, W. (2010). Simultaneous inference of linear models with time varying coefficients. J. R. Statist. Soc. B 72, 513–531.