Article

Bayesian Forecasting of Dynamic Extreme Quantiles

by
Douglas E. Johnston
Farmingdale State College, The State University of New York, Farmingdale, NY 11735, USA
Forecasting 2021, 3(4), 729-740; https://doi.org/10.3390/forecast3040045
Submission received: 1 September 2021 / Revised: 1 October 2021 / Accepted: 7 October 2021 / Published: 11 October 2021
(This article belongs to the Special Issue Bayesian Time Series Forecasting)

Abstract

In this paper, we provide a novel Bayesian solution to forecasting extreme quantile thresholds that are dynamic in nature. This is an important problem in many fields of study including climatology, structural engineering, and finance. We utilize results from extreme value theory to provide the backdrop for developing a state-space model for the unknown parameters of the observed time-series. To solve for the requisite probability densities, we derive a Rao-Blackwellized particle filter and, most importantly, a computationally efficient, recursive solution. Using the filter, the predictive distribution of future observations, conditioned on the past data, is forecast at each time-step and used to compute extreme quantile levels. We illustrate the improvement in forecasting ability, versus traditional methods, using simulations and also apply our technique to financial market data.

1. Introduction

Forecasting extreme quantiles, which is a part of extreme value (EV) analysis, is an important problem in many applications. For example, in analyzing ocean storm severity, it is often critical to forecast an extreme quantile, such as the 99% quantile, for the maximum wave height over a certain time period to facilitate adequate construction design [1]. In financial risk management, forecasting extreme quantiles, or so-called VaR analysis, is critical for establishing adequate bank capital levels [2]. In fact, extreme value analysis and quantile forecasting have found wide application in fields as varied as target detection [3], communication systems [4], image analysis [5], power systems [6], and population studies [7].
In this paper, which is an extension of the work presented in [8], we derive a recursive method for Bayesian forecasting of extreme quantiles where the underlying process is non-stationary. This differs from our previous work in that our focus here is on the predictive distribution of future observations, rather than simply the model parameters, and we also properly invert the predictive distribution to obtain efficient quantile forecasts. In our previous work, we obtained improved parameter estimates, versus traditional methods, but our prediction ability, as measured by predictive p-values, was hampered by a number of issues. We made a number of improvements to our previous work, including modifying the support assumptions on the observed data and implementing an adaptive particle filtering algorithm, both of which resulted in demonstrable improvement in quantile forecasting.
We model the maximum of a block of data, which has an asymptotic generalized extreme value (GEV) distribution [9], via a non-linear state-space model where the parameters are driven by a hidden stochastic process with unknown static parameters. To recursively estimate the model parameters, we utilize a particle filter (PF) [10] and, in particular, we derive a Rao-Blackwellized particle filter (RBPF) [11] to marginalize the unknown, static parameters. Importantly, we derive a recursive solution for the predictive state density, which is required for the particle filter, and we design an algorithm that bears similarity to the well-known Kalman filter [12] and is therefore readily implemented.
The work presented here, and in [8], is an extension of some previous studies. In [13], a deterministic trend is applied to a subset of the GEV parameters, while in [14,15,16], a GEV parameter subset is dynamic with known state equation. More recently, Ref. [17] developed a dynamic model for the tail-index and a Gaussian-mixture approximation to linearize the estimation problem. Our work has extended these previous studies in a number of ways. First, under a reasonable assumption, we reduce the number of GEV parameters and model the remaining set as a vector Markov process, with unknown system and covariance matrix. Second, we recursively compute the marginalized state density, eliminating the need to estimate unwanted nuisance parameters and, in the process, we derive recursive expressions for the necessary sufficient statistics. This allows for a fast, real-time implementation without the need to batch-process observations. Lastly, we derive the predictive distribution, for the block-maximum, that we use to forecast extreme quantiles.
The paper is organized as follows. In the next section, we formulate the problem for the time-varying block-maximum and propose a Bayesian approach by deriving the recursive solution for marginalizing the nuisance parameters (Rao-Blackwellization). In Section 3, we implement our solution and derive our simplified likelihood function. We describe our algorithm in detail and show the method's ability to forecast extreme quantiles, outperforming traditional methods, using simulated data. In the last section, we discuss our results and use our approach to analyze S&P 500 stock market returns from 1928 to 2020. We also conclude the paper with ideas for further research.

2. Materials and Methods

The starting point for our analysis is a sequence of block-maximums denoted by $y_k \in \mathbb{R}$, $k = 1, \dots, N$. The $k$'th block-maximum is the maximum of a strictly stationary time series, $s_t$, so that

$$y_k = \max_{N_{k-1} < t \leq N_k} s_t, \qquad (1)$$

where the index $t$ typically represents time and the $k$'th block consists of data with time indices $(N_{k-1}, N_k]$ and $N_0 = 0$. An example would be in financial markets where the underlying data is the daily return of a stock market index (e.g., S&P 500) and $y_k$ is the largest daily return loss experienced in year $k$ (see Figure 1). In this case, $s_t = -r_t$, where $r_t = (p_t - p_{t-1})/p_{t-1}$ is the daily return defined as the percentage change in the price, $p_t$.
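As a small illustration of Equation (1) in the financial setting, the following sketch computes annual block-maxima of daily return losses. It is our own helper, not code from the paper, and it assumes a pandas series of daily closing prices indexed by date:

```python
import pandas as pd

def annual_loss_maxima(prices: pd.Series) -> pd.Series:
    """Annual block-maxima y_k = max_t s_t with s_t = -r_t, as in Eq. (1).

    `prices` is assumed to be a daily closing-price series indexed by date,
    e.g., S&P 500 closes.
    """
    r = prices.pct_change().dropna()          # r_t = (p_t - p_{t-1}) / p_{t-1}
    s = -r                                    # losses: positive when the market falls
    return s.groupby(s.index.year).max()      # one block-maximum per calendar year
```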
Similar to the central-limit theorem, there is a limiting distribution for the normalized block-maximum of i.i.d. random variables (RVs). The Fisher-Tippet-Gnedenko (FTG) theorem states that the only non-degenerate limiting distribution for the block-maximum is in the generalized extreme value (GEV) family [18] with shape parameter, or tail-index, $\xi$. That is, the cumulative distribution function (cdf) of the block-maximum, $y_k$, properly normalized, converges to a distribution $H_\xi$ as $(N_k - N_{k-1}) \to \infty$, which has the Jenkinson-von Mises representation [19],

$$H_\xi(y) = \exp\left( -(1 + \xi y)^{-1/\xi} \right). \qquad (2)$$

When the tail-index is positive ($\xi > 0$), the resulting distribution is the Fréchet distribution with support $y \in [-1/\xi, \infty)$. The Fréchet is the limiting distribution for many heavy-tailed underlying random variables [20]. Upon normalization, a three-parameter family is obtained and, therefore, we asymptotically model each block-maximum as $y_k \sim H_{\xi,\beta,\mu}(y_k) = H_\xi\left(\frac{y_k - \mu}{\beta}\right)$, with $\beta > 0$ and support $y \in [\mu - \beta/\xi, \infty)$. We note that the FTG theorem's i.i.d. assumption can be relaxed to include most strictly stationary processes [21].
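For later reference, here is a minimal sketch of the shifted and scaled GEV cdf $H_{\xi,\beta,\mu}$ for the Fréchet case ($\xi > 0$); this is our own helper, not code from the paper:

```python
import numpy as np

def gev_cdf(y, xi, beta=1.0, mu=0.0):
    """H_{xi,beta,mu}(y) = exp(-(1 + xi*(y - mu)/beta)^(-1/xi)), from Eq. (2).

    For xi > 0 the support is y in [mu - beta/xi, inf); the cdf is 0 below it.
    """
    z = 1.0 + xi * (np.asarray(y, dtype=float) - mu) / beta
    z = np.where(z > 0, z, np.nan)                    # mask points off the support
    return np.where(np.isnan(z), 0.0, np.exp(-z ** (-1.0 / xi)))
```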
To capture the non-stationary effects and clustering often witnessed in real-world data (see Figure 1), we propose to model the parameters of the GEV distribution as time-varying and, to ensure positivity for the shape and scale parameters, we define the unknown state vector as

$$x_k = [\log(\xi_k), \log(\beta_k), \mu_k]^T \qquad (3)$$

and the state transition equation as

$$x_k = \eta + \Theta x_{k-1} + C^{1/2} u_k, \qquad (4)$$

where $x_k \in \mathbb{R}^3$, $k \in \mathbb{N}_0$, $u_k \sim \mathcal{N}(0, I_3)$, $\eta \in \mathbb{R}^3$, and $\Theta, C \in \mathbb{R}^{3 \times 3}$. We assume $\eta$, $\Theta$, and $C$ are unknown; the Gaussian process noise, $u_k$, was chosen for its analytical tractability and maximal entropy property. Modeling the state as a first-order auto-regressive process, as in Equations (3) and (4), is similar to stochastic volatility models [22].
We specify the observation, or measurement, equation as

$$y_k = x_{3,k} + e^{x_{2,k}} w_k, \qquad (5)$$

where $e^{x_{2,k}} = \beta_k$, $x_{3,k} = \mu_k$, and $w_k$ is distributed according to (2) and, hence, is a function of $e^{x_{1,k}} = \xi_k$. Specifically, $y_k$ is a standard GEV RV that is scaled and translated and, combined, Equations (4) and (5) form a non-linear state-space model from which we wish to make Bayesian inferences.
The Bayesian inferences we wish to make come in the form of three probability density functions (pdf). The first is the state filtering density $p(x_k \mid y_{1:k})$, where $y_{1:k} \triangleq \{y_1, y_2, \dots, y_k\}$. The second is the state predictive density $p(x_{k+1} \mid y_{1:k})$. Lastly, and most germane to our goal, is the predictive distribution of the observation

$$p(y_{k+1} \mid y_{1:k}) = \int p(y_{k+1} \mid x_{k+1}) \, p(x_{k+1} \mid y_{1:k}) \, dx_{k+1}. \qquad (6)$$

Since our goal is to estimate extreme quantiles, we require the cdf of this predictive density,

$$F(y_{k+1} \mid y_{1:k}) = \int_{-\infty}^{y_{k+1}} p(\zeta \mid y_{1:k}) \, d\zeta, \qquad (7)$$

which we can obtain under suitable conditions as

$$F(y_{k+1} \mid y_{1:k}) = E_{x_{k+1} \mid y_{1:k}}\left[ H_{x_{k+1}}(y_{k+1}) \right]. \qquad (8)$$

In words, the cumulative predictive distribution is the expected GEV distribution conditioned on the state predictive density $p(x_{k+1} \mid y_{1:k})$, and our goal is to forecast $\eta_\alpha = F^{-1}(\alpha \mid y_{1:k})$, the $\alpha$-quantile.
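In the particle-filter setting developed below, the expectation in Equation (8) is approximated by a weighted average over particles and then inverted to obtain the quantile forecast. A sketch of that forecast step, using the `gev_cdf` helper above and grid inversion (our implementation choice, not the paper's code):

```python
import numpy as np

def forecast_quantile(particles, weights, y_grid, alpha):
    """Approximate Eq. (8), F(y|y_{1:k}) ~ sum_m w^(m) H_{x^(m)}(y), on a grid
    and invert it to forecast the alpha-quantile eta_alpha = F^{-1}(alpha).

    `particles` are assumed draws x_{k+1}^(m) = [log xi, log beta, mu] from the
    state predictive density p(x_{k+1}|y_{1:k}).
    """
    F = np.zeros_like(y_grid, dtype=float)
    for w, x in zip(weights, particles):
        F += w * gev_cdf(y_grid, xi=np.exp(x[0]), beta=np.exp(x[1]), mu=x[2])
    idx = min(np.searchsorted(F, alpha), len(y_grid) - 1)  # first grid point with F >= alpha
    return y_grid[idx]
```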
To accomplish our goal, we need to derive a recursive solution for the filtering density, p ( x k | y 1 : k ) , and the state predictive density, p ( x k + 1 | y 1 : k ) . This can be done analytically for the case of known linear state-space models with Gaussian noise. However, with a non-linear state-space model, or non-Gaussian disturbances, a numerical approach is needed.
From standard Bayesian analysis, we can write

$$p(x_{0:k} \mid y_{1:k}) \propto p(y_k \mid x_k) \, p(x_{0:k} \mid y_{1:k-1}), \qquad (9)$$

where $x_{0:k} \triangleq \{x_0, x_1, \dots, x_k\}$ is the state stream, i.e., the state vectors from the initial state, at time 0, up to the current state, at time $k$. The likelihood function, $p(y_k \mid x_k)$, highlights the assumption that the current observation depends only on the current value of the state vector. If we further assume that the state vector process is Markov, independent of the observations, we can write

$$p(x_{0:k} \mid y_{1:k}) \propto \left\{ p(y_k \mid x_k) \, p(x_k \mid x_{0:k-1}) \right\} p(x_{0:k-1} \mid y_{1:k-1}), \qquad (10)$$

which is the classic Bayesian recursion equation. If we can further invoke that the state process is first-order Markov, so that $p(x_k \mid x_{0:k-1}) = p(x_k \mid x_{k-1})$, then we get the integral form of the Bayesian recursion [23],

$$p(x_k \mid y_{1:k}) \propto p(y_k \mid x_k) \int_{x_{k-1}} p(x_k \mid x_{k-1}) \, p(x_{k-1} \mid y_{1:k-1}) \, dx_{k-1}, \qquad (11)$$

which can be implemented with a standard particle filter [24].
In our case, where the static parameters in the state transition Equation (4) are unknown, we cannot directly implement Equation (10) or (11) since the current state depends on its complete history through the unknown parameters. The tack we take is to derive a Rao-Blackwellized particle filter via marginalizing $p(x_k, \eta, \Theta, C \mid x_{0:k-1})$,

$$\int_{\eta, \Theta, C} p(x_k \mid \eta, \Theta, C, x_{0:k-1}) \, dP(\eta, \Theta, C \mid x_{0:k-1}), \qquad (12)$$

to obtain $p(x_k \mid T_{k-1})$, where $T_{k-1}$ is a set of sufficient statistics dependent only on the state stream up to time $k-1$. In doing so, we need an efficient, recursive solution for $T_{k-1}$ so that the filter can be implemented without the need for repeated, large matrix operations or the need to retain the complete state history.
To start, we note that Equation (4) can be written compactly as a general linear model for the complete state stream up to time $k$ as

$$X_k = \Phi H_k + U_k, \qquad (13)$$

where $X_k = [x_k, x_{k-1}, \dots, x_1] \in \mathbb{R}^{p \times k}$, with $p$ being the number of state variables. The unknown matrix $\Phi = [\eta \;\; \Theta] \in \mathbb{R}^{p \times (p+1)}$ and $H_k = [h_k, h_{k-1}, \dots, h_1] \in \mathbb{R}^{q \times k}$, with $q = p + 1$, has columns given by $h_k = [1 \;\; x_{k-1}^T]^T$. The noise term $U_k \in \mathbb{R}^{p \times k}$ is a random Normal matrix whose columns are i.i.d. $\mathcal{N}(0, C)$ with $C$ unknown. For the Rao-Blackwellized particle filter, the columns of $X_k$ are formed by the particles from 1 to $k$ and the columns of $H_k$ include the past particles from 0 to $k-1$. Therefore, Equation (13) is the state transition equation that describes the evolution of the state stream and, in the filter, each particle stream operates under its own version of this multivariate regression model.
The predictive density for the general linear model, with a non-informative, Jeffreys prior [25], results in a multivariate Student-t distribution [26], which for a $p$-dimensional random vector is denoted as $t_p(v, \mu, \Sigma) \propto \left[ 1 + (x - \mu)^T \Sigma^{-1} (x - \mu)/v \right]^{-(v+p)/2}$ with $v$ degrees of freedom, mean $\mu$, and scaling matrix $\Sigma$. Using the non-informative prior, $p(\eta, \Theta, C^{-1}) \propto |C|^{(p+1)/2}$, we can write the marginal predictive density for $x_{k+1}$ as a multivariate Student-t distribution with $v_k = k - 2p$ degrees of freedom, i.e.,

$$p(x_{k+1} \mid X_k, H_k, h_{k+1}) \sim t_p(v_k, \hat{x}_{k+1}, \hat{\Sigma}_{k+1}).$$

The mean of this distribution is derived as

$$\hat{x}_{k+1} = S_{XH}^k (S_{HH}^k)^{-1} h_{k+1} = \hat{\Phi}_k h_{k+1} \qquad (14)$$

and the scale matrix is

$$\hat{\Sigma}_{k+1} = \frac{1 + h_{k+1}^T (S_{HH}^k)^{-1} h_{k+1}}{v_k} \left( S_{XX}^k - \hat{\Phi}_k (S_{XH}^k)^T \right), \qquad (15)$$

with the matrices defined as

$$S_{XX}^k = X_k X_k^T, \quad S_{XH}^k = X_k H_k^T, \quad S_{HH}^k = H_k H_k^T. \qquad (16)$$
As is, Equations (14)–(16) can be used to compute the state predictive density and to extrapolate particles from time $k$ to time $k+1$. That said, the resulting matrix operations are computationally expensive and a recursive solution is desired. Letting $P_k = (S_{HH}^k)^{-1}$, we derived recursive expressions for the mean and the scale matrix as follows:

$$K_k = h_k^T P_{k-1} \left[ 1 + h_k^T P_{k-1} h_k \right]^{-1}, \qquad (17)$$

$$P_k = P_{k-1} \left[ I - h_k K_k \right], \qquad (18)$$

$$\hat{\Phi}_k = \hat{\Phi}_{k-1} + (x_k - \hat{x}_k) K_k, \qquad (19)$$

$$\hat{\Sigma}_{k+1} = \frac{v_k - 1}{v_k} \cdot \frac{1 + h_{k+1}^T P_k h_{k+1}}{1 + h_k^T P_{k-1} h_k} \left[ \hat{\Sigma}_k + \frac{(x_k - \hat{x}_k)(x_k - \hat{x}_k)^T}{v_k - 1} \right], \qquad (20)$$

$$\hat{x}_{k+1} = \hat{\Phi}_k h_{k+1}. \qquad (21)$$
These equations bear a resemblance to the recursive equations found in the Kalman filter [27] and are readily implemented with minimum computational and memory requirements. The specific algorithm we use is discussed in the next section and we note that the above procedure can likewise be used in the case of a linear observation equation with unknown observation matrix.
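As an illustration, here is a compact sketch of one pass of Equations (17)–(21) under our reading of the recursions; variable names and shapes are our own:

```python
import numpy as np

def recursive_update(h_k, x_k, x_hat_k, h_next, Phi, P, Sigma, v_k):
    """One step of Eqs. (17)-(21): rank-one update of the sufficient statistics
    behind the marginalized Student-t predictive density.

    Shapes: h_k, h_next are (q,); x_k, x_hat_k are (p,); Phi is (p, q);
    P is (q, q) and equals (S_HH)^{-1}; Sigma is (p, p).
    """
    denom = 1.0 + h_k @ P @ h_k
    K = (h_k @ P) / denom                                  # (17) gain row-vector
    P_new = P @ (np.eye(len(h_k)) - np.outer(h_k, K))      # (18) inverse update
    innov = x_k - x_hat_k                                  # one-step prediction error
    Phi_new = Phi + np.outer(innov, K)                     # (19) regression update
    factor = (v_k - 1) / v_k * (1.0 + h_next @ P_new @ h_next) / denom
    Sigma_new = factor * (Sigma + np.outer(innov, innov) / (v_k - 1))  # (20)
    x_hat_next = Phi_new @ h_next                          # (21) next predictive mean
    return Phi_new, P_new, Sigma_new, x_hat_next
```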

3. Results

To implement our solution, we first make a simplifying assumption that reduces our state vector to $x_k = [\log(\xi_k), \log(\beta_k)]^T$. Recall that for the GEV distribution, with $\xi > 0$, the support for the observation is $y \in [\mu - \beta/\xi, \infty)$, where we have dropped the subscript $k$ for simplicity. To eliminate the parameter $\mu$, we constrain $H_{\xi,\beta,\mu}(y_0) = \alpha_0$, allowing us to solve for $\mu$ in terms of $\xi$, $\beta$, $y_0$, and $\alpha_0$. This is a generalization of the constraint used in [8], where the support was $y \in [0, \infty)$, or $\alpha_0 = y_0 = 0$, resulting in $\mu = \beta/\xi$. The main issue in the previous study was that small values of the parameter $\xi$ left a substantial portion of the left tail with negligible probability, since $H^{-1}_{\xi,\beta,\beta/\xi}(\alpha) \to \infty$ as $\xi \to 0^+$ for all $\alpha > 0$, resulting in sub-par prediction performance. While the parameter estimates in [8] were quite reasonable, and outperformed the maximum likelihood (ML) method, quantile estimates were biased.
Under our support constraint, we can write

$$\mu = y_0 - \frac{\beta}{\xi} \left[ (-\ln(\alpha_0))^{-\xi} - 1 \right] \qquad (22)$$

and the now two-parameter GEV distribution is

$$H_{\xi,\beta}(y) = \exp\left\{ -\left[ \frac{\xi}{\beta}(y - y_0) + (-\ln(\alpha_0))^{-\xi} \right]^{-1/\xi} \right\} \qquad (23)$$

with support $y \in \left[\, y_0 - \frac{\beta}{\xi}(-\ln(\alpha_0))^{-\xi}, \infty \right)$. The likelihood function, $p(y \mid x)$, can then be written as

$$p(y \mid x) = \frac{d}{dy} H_{\xi,\beta}(y) = \frac{1}{\beta} H_{\xi,\beta}(y) \left[ \frac{\xi}{\beta}(y - y_0) + (-\ln(\alpha_0))^{-\xi} \right]^{-(1 + \xi^{-1})}, \qquad (24)$$

where the substitutions $\xi = e^{x_1}$ and $\beta = e^{x_2}$ can readily be made.
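For concreteness, a direct transcription of Equations (23) and (24), useful for weighting particles; the default $y_0$ and $\alpha_0$ values are illustrative placeholders, not values from the paper:

```python
import numpy as np

def gev2_cdf(y, xi, beta, y0=0.0, alpha0=0.05):
    """Two-parameter GEV cdf of Eq. (23) under the constraint H(y0) = alpha0."""
    g = xi / beta * (np.asarray(y, dtype=float) - y0) + (-np.log(alpha0)) ** (-xi)
    g = np.where(g > 0, g, np.nan)            # mask points off the support
    return np.where(np.isnan(g), 0.0, np.exp(-g ** (-1.0 / xi)))

def gev2_logpdf(y, x, y0=0.0, alpha0=0.05):
    """Log of the likelihood in Eq. (24), with xi = exp(x[0]), beta = exp(x[1])."""
    xi, beta = np.exp(x[0]), np.exp(x[1])
    g = xi / beta * (y - y0) + (-np.log(alpha0)) ** (-xi)
    if g <= 0:
        return -np.inf                        # outside the support of Eq. (23)
    return -np.log(beta) - g ** (-1.0 / xi) - (1.0 + 1.0 / xi) * np.log(g)
```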
Armed with the likelihood in Equation (24) and the sufficient statistics, computed recursively in Equations (17)–(21), we are ready to implement our version of the RBPF (see Algorithm 1).
Algorithm 1 RBPF recursive implementation.
Initialization ($k = 0$, $m = 1, \dots, M$ particles):
  • Generate $\tilde{p} > 2p + 1$ random particles, $x_{0:\tilde{p}}^{(m)}$, from (13) with random $\Phi$, $C$.
  • Compute $S_{XX}^{\tilde{p}}$, $S_{XH}^{\tilde{p}}$, and $S_{HH}^{\tilde{p}}$ from (16).
  • Initialize $\hat{\Phi}_0 = S_{XH}^{\tilde{p}} (S_{HH}^{\tilde{p}})^{-1}$, $\hat{x}_1$ from (14), and $\hat{\Sigma}_1$ from (15).
  • Let $P_0 = (S_{HH}^{\tilde{p}})^{-1}$ and $v_k = \tilde{p} - 2p$.
  • Set particle weights $w^{(m)} = 1/M$ and $M_1 = M$.
Recursive Loop ($k = 1, \dots, N$):
  • Sample $x_k^{(m)} \sim t_p(v_k, \hat{x}_k, \hat{\Sigma}_k)$ for $m = 1, \dots, M_k$ (particle extrapolation).
  • Compute predictive statistics: $\hat{x}_k = \sum_{m=1}^{M} w^{(m)} x_k^{(m)}$.
  • Compute quantile estimates, p-values, and $M_k$ (see Algorithm 2).
  • Systematically resample particles, $x_k^{(m)} \to x_k^{(m)+}$, using $p(y_k \mid x_k)$ in (24).
  • Compute contemporaneous statistics: $\hat{x}_k^{+} = \sum_{m=1}^{M} w^{(m)} x_k^{(m)+}$.
  • Compute $K_k$ from (17); update $P_{k-1} \to P_k$ via (18) and $\hat{\Phi}_{k-1} \to \hat{\Phi}_k$ via (19).
  • Update to $\hat{\Sigma}_{k+1}$ from (20) and $\hat{x}_{k+1}$ from (21).
  • Set $k = k + 1$, $v_k = v_k + 1$, and repeat the loop.
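To make the extrapolate-weight-resample portion of Algorithm 1 concrete, here is a hedged sketch; `sample_predictive` stands in for the per-particle $t_p(v_k, \hat{x}_k, \hat{\Sigma}_k)$ draw and `logpdf` for Equation (24), both carried by each particle stream:

```python
import numpy as np

def rbpf_resample_step(M, y_k, sample_predictive, logpdf, rng=None):
    """Extrapolate, weight, and systematically resample M particles, as in the
    Recursive Loop of Algorithm 1 (a sketch, not the paper's code)."""
    rng = np.random.default_rng() if rng is None else rng
    particles = np.array([sample_predictive(m) for m in range(M)])  # x_k^(m)
    logw = np.array([logpdf(y_k, x) for x in particles])            # Eq. (24) weights
    w = np.exp(logw - logw.max())                # stabilized likelihood weights
    w /= w.sum()
    # Systematic resampling: one uniform offset, M evenly spaced positions.
    positions = (rng.uniform() + np.arange(M)) / M
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[idx], np.full(M, 1.0 / M)   # resampled x_k^(m)+ and weights
```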
In Algorithm 1, estimates for $\xi_k$ and $\beta_k$ are produced by a weighted sum of the particles $x_k^{(m)+}$ available after processing the observation, $y_k$, at time $k$. To illustrate the performance of the algorithm, we simulated the stochastic GEV parameters from an uncoupled, mean-reverting, AR(1) process as follows:

$$\log \xi_k = (1 - 0.95) \log(0.45) + 0.95 \log \xi_{k-1} + 0.1 u_{1,k}, \qquad (25)$$

$$\log \beta_k = (1 - 0.9) \log(0.012) + 0.9 \log \beta_{k-1} + 0.1 u_{2,k}, \qquad (26)$$

where $u_{1,k}$ and $u_{2,k}$ are independent $\mathcal{N}(0, 1)$. The process was chosen with $\bar{\xi} = 0.45$ and $\bar{\beta} = 1.25\%$ to be representative of time-series encountered in heavy-tailed phenomena, such as financial markets, and we refer the reader to [8] for representative simulation examples.
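A sketch of this simulation setup, drawing each block-maximum by inverting Equation (23); the $y_0$ and $\alpha_0$ values are our illustrative choices, not values from the paper:

```python
import numpy as np

def simulate_gev_series(N=225, seed=0, y0=0.0, alpha0=0.05):
    """Simulate the AR(1) parameter paths of Eqs. (25)-(26) and one GEV
    block-maximum per step via inverse-cdf sampling of Eq. (23)."""
    rng = np.random.default_rng(seed)
    log_xi, log_beta = np.log(0.45), np.log(0.012)   # start at the AR(1) means
    y = np.empty(N)
    for k in range(N):
        log_xi = (1 - 0.95) * np.log(0.45) + 0.95 * log_xi + 0.1 * rng.standard_normal()
        log_beta = (1 - 0.9) * np.log(0.012) + 0.9 * log_beta + 0.1 * rng.standard_normal()
        xi, beta = np.exp(log_xi), np.exp(log_beta)
        u = rng.uniform()                            # y_k = H^{-1}(u), inverting Eq. (23)
        y[k] = y0 + beta / xi * ((-np.log(u)) ** (-xi) - (-np.log(alpha0)) ** (-xi))
    return y
```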
We produced 225 samples for each simulation run and initialized the filter with $M_1 = 1024$ particles. At each point in time, we computed estimates for the GEV parameters, $\hat{\xi}_k$ and $\hat{\beta}_k$. We also computed the maximum likelihood (ML) estimates for the GEV parameters from (24) and used at least 25 observations ($k \geq 25$) before reporting estimation errors. For each of the 100 simulation runs, we computed a root-mean-square (RMS) error for the parameter estimates, across time; shown in Figure 2 are the histograms of the RMS errors for both the RBPF and ML methods. We can see that the RBPF method outperforms, with an average RMS error for $\hat{\xi}$ of 0.11 vs. 0.17 for the ML method. Similarly, the average RMS error for $\hat{\beta}$ was 0.32% vs. 0.41% for the ML method.
One of the improvements we made to our original algorithm was to use systematic resampling of the existing set of particles, versus standard multinomial resampling, which improves the filter's performance. The other improvement was to adapt the number of particles based on an assessment of the filter's predictive performance, as in [28], although we modified their method so that new particles, if needed, are generated from the initial prior distribution. Finally, the quantile estimates, $\hat{\eta}_\alpha = \hat{F}^{-1}(\alpha \mid y_{1:k})$ for $\alpha \in [0, 1]$, based on averaging $H_{x_{k+1}}(\cdot)$ over the particles that approximate the predictive state distribution, were improved. In particular, we computed an estimate of the predictive distribution by approximating Equation (8), versus using $E_{x_{k+1} \mid y_{1:k}}[H^{-1}_{x_{k+1}}(\alpha)]$. Algorithm 2 provides the details of the method used for quantile estimation, computing p-values, and adapting the number of particles in the filter.
To test our approach, we examined the predictive performance of our method, outlined in Algorithm 2, versus maximum likelihood. We applied a chi-square test as described in [29]. At each time $k$, we effectively partitioned the support of the predictive cdf estimate, $\hat{F}(y_{k+1} \mid y_{1:k})$, into $B$ equiprobable buckets. For the ML method, we did the same using the ML estimates of the GEV parameters as "truth". We then compared the $(k+1)$st observation, $y_{k+1}$, to each bucket interval to find its location and to create an empirical frequency distribution. If the model is true, this frequency distribution will be a sample estimate of a uniform distribution, and the statistic $D$ is approximately chi-square with $B - 1$ degrees of freedom (see Algorithm 2 for details).
For our test, we chose $B = 20$ intervals, and shown in Figure 3 are the frequency distributions for the entire 100 runs. We note that, in total, there were 20,000 sets of quantiles forecasted and each forecast was based solely on past data. More importantly, each prediction was based on a varied amount of past data. For example, in each simulation run, the first set of quantile forecasts was based on just 25 observations ($y_{1:25}$) and the last forecasts were based on all of that run's observations except the last ($y_{1:224}$). We see that the RBPF method has a more uniform frequency distribution compared to the ML. The ML, similar to our previous work in [8], tended to under-estimate the 5% quantile, leading to more observations than expected. More importantly, the extreme quantiles were overestimated, leading to fewer observations exceeding those thresholds. While one might argue that the bias toward higher thresholds, for those extreme quantiles, is at least "conservative", it is clearly more beneficial to have a more accurate cdf and quantile forecast, particularly since these quantile forecasts are typically used to compute exceedance probabilities over multi-block periods.
Algorithm 2 Compute quantile estimates, p-values, and $M_k$.
Initialization (prior to Recursive Loop):
  • Set $\alpha_j = j/B$ for $j = 0, \dots, B$ ($B$ = # of buckets)
  • Create y-grid: $\tilde{y} = 0 : y_{step} : y_{max}$ and set $[\hat{\xi}_k, \hat{\beta}_k]^T = \exp(\hat{x}_k)$
  • Initialize counters $C(j) = 0$ for $j = 1, \dots, B$
Within Recursive Loop ($k \geq N_{pred}$):
  • Compute predictive CDF: $\hat{F}(\tilde{y} \mid y_{1:k}) = \sum_{m=1}^{M} w^{(m)} H_{\hat{\xi}_k, \hat{\beta}_k}(\tilde{y})$
  • Invert predictive CDF: $\eta_{\alpha_j} = \hat{F}^{-1}(\alpha_j)$ for all $j$
  • Find the index $j^*$ such that $y_k \in [\eta_{\alpha_{j^*-1}}, \eta_{\alpha_{j^*}})$, $j^* \in \{1, \dots, B\}$
  • Increment counter $C(j^*)$
  • If $k \geq 2 N_{pred}$ (compute p-values):
      Compute $N_c = \sum_{j=1}^{B} C(j)$
      Compute $\chi^2$ statistic: $D = B (N_c - 1) \sum_{j=1}^{B} \left( C(j)/N_c - 1/B \right)^2$
      Compute p-value using the $\chi^2_{B-1}(D)$ test
      If p-value $< p_{low}$ (double # of particles):
        * $M_{new} = \min(2 M_k, M_{max}) - M_k$
        * Initialize $M_{new}$ new particles (see Initialization in Algorithm 1)
        * $M_k = M_k + M_{new}$
      If p-value $> p_{high}$ (halve # of particles):
        * Randomly discard $M_k - \max(M_k/2, M_{min})$ existing particles
        * $M_k = \max(M_k/2, M_{min})$
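The goodness-of-fit computation in Algorithm 2 reduces to a few lines; a sketch using the bucket counts $C(j)$, with scipy's chi-square survival function supplying the p-value:

```python
import numpy as np
from scipy import stats

def predictive_pvalue(counts):
    """Chi-square p-value from the bucket counts C(j) of Algorithm 2; under a
    correct predictive model the counts are uniform over the B buckets."""
    counts = np.asarray(counts, dtype=float)
    B, Nc = len(counts), counts.sum()
    D = B * (Nc - 1) * np.sum((counts / Nc - 1.0 / B) ** 2)   # statistic from Alg. 2
    return stats.chi2.sf(D, df=B - 1)                         # chi^2_{B-1} tail probability
```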
To further illustrate the improvement in the performance of our approach, we show the histogram of the final p-values, for the 100 simulation runs, in Figure 4. The ML method has about 35 of the p-values in the range [0, 0.1), compared to an expected count of 10, and most of the p-values are less than 0.5. This indicates that, in most instances, one would reject the hypothesis that the ML model is correct. In our previous work, our simulated p-values were worse than the ML method due to the reasons cited previously. Our new RBPF method, derived herein, has p-values close to the expected number for each bucket (i.e., 10) and produces quite reasonable quantile forecasts. To further assess the accuracy of forecasting extreme quantiles with the RBPF, we plotted the empirical quantile estimate versus the predicted in Figure 5.
As seen in Figure 5, at the predicted 99.9%-quantile, 99.87% of the observations were below the threshold. Stated differently, 26 out of the 20,000 predictions exceeded the forecasted 99.9%-quantile threshold, six more than the 20 expected. We find this to be excellent given that many of the forecasts are made with limited data in hand. At the 99%-quantile, 0.15% additional observations exceeded our forecasts: out of the 20,000 predictions, 200 exceedances were expected versus the 230 observed. Our simulation results bear out the view that our methodology not only outperforms traditional quantile forecast methods (e.g., ML) but also our previous work in [8].

4. Discussion

From our results, illustrated in the previous section, we see that our methodology offers improvements in forecasting extreme quantiles, particularly when the time-series is non-stationary. This is true in many applications where extreme behaviour tends to cluster in time, such as in finance, or trend over time, such as in climatology. To illustrate the application of our approach to real-world data, we applied it to the data shown in Figure 1, which are the block-maximums of daily losses, or negative returns expressed in percent, for the S&P 500 stock market index, a widely used proxy for the overall stock market.
For completeness, we show in Figure 6 the likelihood surface for all of the data from 1928–2020 (93 years/block-maximums). While the ML estimates, using all of the data, are $\hat{\xi}_{ML} = 0.48$ and $\hat{\beta}_{ML} = 1.26\%$, one can see there is a large degree of uncertainty in the GEV parameters. For example, there is considerable probability that $\xi > 0.5$, which would imply an underlying time-series with infinite variance, and this has implications for financial models that rely on finite second moments (e.g., Black-Scholes). The Bayesian approach is to use all of the information available from the data to produce posterior densities from which to produce parameter estimates and, more importantly, to forecast. This is particularly important when forecasting quantiles since they can be quite sensitive to point estimates given the highly non-linear form of the GEV cdf.
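A sketch of how such a likelihood surface can be tabulated from the block-maximums, summing the log of Equation (24) over a $(\xi, \beta)$ grid; it assumes the `gev2_logpdf` helper sketched earlier, and the grid ranges are illustrative:

```python
import numpy as np

def loglik_surface(y, xi_grid=np.linspace(0.1, 1.0, 50),
                   beta_grid=np.linspace(0.005, 0.03, 50)):
    """Static GEV log-likelihood over a (xi, beta) grid, as in Figure 6."""
    L = np.empty((len(xi_grid), len(beta_grid)))
    for i, xi in enumerate(xi_grid):
        for j, beta in enumerate(beta_grid):
            x = np.log([xi, beta])                 # state parameterization, Eq. (3)
            L[i, j] = sum(gev2_logpdf(v, x) for v in y)
    return L
```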
Shown in Figure 7 are the S&P 500 block-maximums along with quantile forecasts over time. For the 90% quantile forecast, 8 out of 92 observed stock-market returns exceeded the forecast, or 8.7% of the time, which is well within what one would expect. The current forecasted 90% quantile is 7.1% and the 99% quantile forecast is 23.8%; we should expect to see this type of market "crash", or worse, about once a century. While the range of the forecasted 90% quantile, from 5.6% in the 1970s to 8% post financial crisis, may seem minimal, we do note that more extreme quantile estimates have significant variation. For example, the 99% quantile forecast went from 18% to 25% and the 99.5% quantile went from 25.8% to 35% over the same period. This would be of use to a financial institution when constructing risk scenarios for stress testing its portfolios. Last year, in 2020, the block-maximum was a 12% loss, during the COVID-19 pandemic crisis, which was at the 96% quantile. We should expect to see a loss of at least 12% about once every 25 years, and it should not be considered unusual, highlighting the importance of quantile forecasting: bad things do happen!
It is worth pointing out that the p-value based on the forecasted quantiles was a robust 0.75, versus 0.01 for the ML method. While one must be wary of using p-values to accept a model, we feel that the results shown in this paper clearly indicate an improved method of forecasting. But much work remains to be done. For one, we used only block-maximums in this paper, which one can argue wastes data. We would like to extend our approach to use higher-order statistics, or exceedances above a threshold. We would also like to broaden our simulation study and apply our method to other data sets, particularly in the area of climatology. Lastly, additional computational efficiencies should be explored to allow real-time implementation on high-frequency data.

Funding

This research was funded in part by a Farmingdale State College, Mathematics Department, Summer Research Grant.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Daily closing prices for the S&P 500 index were downloaded from Yahoo! finance, https://finance.yahoo.com, and are available upon request, access date: 1 September 2021.

Acknowledgments

The author thanks Farmingdale State College and Carlos Marques for their continued support.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
cdf	Cumulative distribution function
EV	Extreme value
FTG	Fisher-Tippet-Gnedenko
GEV	Generalized extreme value
ML	Maximum likelihood
pdf	Probability density function
PF	Particle filter
RBPF	Rao-Blackwellized particle filter
RMS	Root mean square
RV	Random variable
S&P	Standard and Poor's
VaR	Value at Risk

References

  1. Northrop, P.J.; Attalides, N.; Jonathan, P. Cross-validatory extreme value threshold selection and uncertainty with application to ocean storm severity. J. R. Stat. Soc. Appl. Stat. 2017, 66, 93–120. [Google Scholar] [CrossRef] [Green Version]
  2. Johnston, D.E.; Djurić, P.M. The science behind risk management: A signal processing perspective. IEEE Signal Process. Mag. 2011, 28, 26–36. [Google Scholar] [CrossRef]
  3. Broadwater, J.B.; Chellappa, R. Adaptive threshold estimation via extreme value theory. IEEE Trans. Signal Process. 2010, 58, 490–500. [Google Scholar] [CrossRef]
  4. Resnick, S.I.; Rootzén, H. Self-similar communication models and very heavy tails. Ann. Appl. Probab. 2000, 10, 753–778. [Google Scholar] [CrossRef]
  5. Roberts, S.J. Extreme value statistics for novelty detection in biomedical data processing. IEE Proc. Sci. Meas. Technol. 2000, 147, 363–367. [Google Scholar] [CrossRef] [Green Version]
  6. Shenoy, S.; Gorinevsky, D. Estimating long tail models for risk trends. IEEE Signal Process. Lett. 2015, 22, 968–972. [Google Scholar] [CrossRef]
  7. Anderson, S.C.; Branch, T.A.; Cooper, A.B.; Dulvy, N.K. Black-swan events in animal populations. Proc. Natl. Acad. Sci. USA 2017, 114, 3252–3257. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Johnston, D.E.; Djurić, P.M. A recursive Bayesian model for extreme values. In Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019. [Google Scholar]
  9. Coles, S. An Introduction to Statistical Modeling of Extreme Values; Springer: London, UK, 2004. [Google Scholar]
  10. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188. [Google Scholar] [CrossRef] [Green Version]
  11. Chen, R.; Wang, X.; Liu, J.S. Adaptive joint detection and decoding in flat fading channels via mixture Kalman filtering. IEEE Trans. Inf. Theory 2000, 46, 2079–2094. [Google Scholar] [CrossRef]
  12. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef] [Green Version]
  13. Hundecha, Y.; St-Hilaire, A.; Ouarda, T.; Adlouni, S.E. A nonstationary extreme value analysis for the assessment of changes in extreme annual wind speed over the Gulf of St. Lawrence, Canada. Environmetrics 2013, 24, 51–62. [Google Scholar] [CrossRef]
  14. Huerta, G.; Sansó, B. Time-varying models for extreme values. Environ. Ecol. Stat. 2007, 14, 285–299. [Google Scholar] [CrossRef]
  15. Toulemonde, G.; Guillou, A.; Naveau, P. Particle filtering for Gumbel-distributed daily maxima of methane and nitrous oxide. J. Appl. Meteorol. Climatol. 2008, 47, 2745–2759. [Google Scholar] [CrossRef]
  16. Wei, Y.; Huerta, G. Dynamic generalized extreme value modeling via particle filters. Commun. Stat. Theory Methods 2017, 46, 6324–6341. [Google Scholar] [CrossRef]
  17. Mao, G.; Zhang, Z. Stochastic tail index model for high frequency financial data with Bayesian analysis. J. Econom. 2018, 205, 470–487. [Google Scholar] [CrossRef]
  18. Embrechts, P.; Kluppelberg, C.; Mikosch, T. Modelling Extremal Events; Springer: New York, NY, USA, 2003. [Google Scholar]
  19. Smith, R.L. Statistics of extremes with applications in environment, insurance and finance. In Extreme Values in Finance, Telecommunications and the Environment; Finkenstädt, B., Rootzén, H., Eds.; Chapman and Hall CRC: Boca Raton, FL, USA, 2003; pp. 1–78. [Google Scholar]
  20. Resnick, S. Heavy Tail Phenomena: Probabilistic and Statistical Modeling; Springer: New York, NY, USA, 2008. [Google Scholar]
  21. McNeil, A.; Frey, R.; Embrechts, P. Quantitative Risk Management; Princeton University Press: Princeton, NJ, USA, 2005. [Google Scholar]
  22. Jacquier, E.; Polson, N.G.; Rossi, P.E. Bayesian analysis of stochastic volatility models. J. Bus. Econ. Stat. 1994, 12, 371–417. [Google Scholar]
  23. Kramer, S.C.; Sorenson, H.W. Bayesian parameter estimation. IEEE Trans. Autom. Control. 1988, 33, 217–222. [Google Scholar] [CrossRef]
  24. Djurić, P.M.; Bugallo, M.F. Particle filtering. In Adaptive Signal Processing; Adali, T., Haykin, S., Eds.; Wiley & Sons: Hoboken, NJ, USA, 2010; pp. 271–331. [Google Scholar]
  25. Jaynes, E.T. Prior Probabilities. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 227–241. [Google Scholar] [CrossRef]
  26. Geisser, S. Bayesian estimation in multivariate analysis. Ann. Math. Stat. 1965, 36, 150–159. [Google Scholar] [CrossRef]
  27. Meinhold, R.J.; Singpurwalla, N.D. Understanding the Kalman filter. Am. Stat. 1983, 37, 123–127. [Google Scholar]
  28. Elvira, V.; Míguez, J.; Djurić, P.M. Adapting the number of particles in sequential Monte Carlo methods through an online scheme for convergence assessment. IEEE Trans. Signal Process. 2017, 65, 1781–1794. [Google Scholar] [CrossRef]
  29. Djurić, P.M.; Khan, M.; Johnston, D.E. Particle filtering of stochastic volatility modeled with leverage. IEEE J. Sel. Top. Signal Process. 2012, 6, 327–336. [Google Scholar] [CrossRef]
Figure 1. Annual block maximum of S&P 500 return loss (1928–2020).
Figure 2. Histogram of RMS errors for RBPF vs. ML for 100 simulation runs.
Figure 3. Empirical frequency distribution of observation predictions for RBPF vs. ML simulation.
Figure 4. Distribution of final p-values for RBPF vs. ML simulation.
Figure 5. Empirical extreme quantile estimates versus predicted for the RBPF.
Figure 6. Likelihood surface for GEV parameters using S&P 500 block-maximums.
Figure 7. S&P 500 max annual loss and predicted quantiles.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
