Forecasting Stock Market Volatility With Regime-Switching GARCH Models
Juri Marcucci
The previous version of this paper has been awarded the SNDE Best Graduate Student Paper Prize at the Eleventh Annual
Symposium of the Society for Nonlinear Dynamics and Econometrics, held in Florence, March 2003. I would like to thank the
editor and an anonymous referee for valuable and helpful comments that greatly helped me improve the paper. I also would like to
thank Carlo Bianchi, Graham Elliott, Robert Engle, Giampiero Gallo, Raffaella Giacomini, Clive Granger, James Hamilton, Bruce
Lehmann, Francesca Lotti, Andrew Patton, Kevin Sheppard, Allan Timmermann, Halbert White and all the participants in the Symposium for valuable and helpful comments. All errors remain, of course, my own responsibility. E-mail: jmarcucc@weber.ucsd.edu
1 Introduction
In the last few decades a growing number of scholars have focused their attention on the analysis and forecasting of volatility, due to its crucial role in financial markets. Portfolio managers, option traders and market makers are all interested in forecasting this important magnitude with a reasonable level of accuracy, in order to obtain either higher profits or less risky positions.
So far in the literature many models have been put forward, but those that seem to be the most successful and popular are the GARCH (Generalized Autoregressive Conditional Heteroskedasticity) models of Bollerslev (1986), who generalizes the seminal idea on ARCH models by Engle (1982). Their remarkable popularity stems from their ability to capture, with a very flexible structure, some of the typical stylized facts of financial time series, such as volatility clustering, that is, the tendency for periods of volatility of similar magnitude to cluster. GARCH models can usually account for the time-varying volatility phenomenon over a long period (French et al., 1987; Franses and Van Dijk, 1996) and provide very good in-sample estimates.
Furthermore, as argued by Andersen and Bollerslev (1998), GARCH models do provide good volatility forecasts, even though researchers may obtain a good in-sample fit but very poor forecasting performance. This poor predictive power in forecasting volatility can originate both from an erroneous form of judgement, such as the practice of comparing the model forecasts to a noisy proxy for the ex-post volatility, and from an incorrect choice of the statistical loss function to be minimized. For these reasons the authors put forward a new way to compare volatility forecasts, by means of the so-called realized volatility, calculated with intra-daily data.
One of the main goals of the present paper is to show that possible concerns about the forecasting ability of GARCH models can arise because of the excessive persistence of individual shocks on volatility that is usually estimated. Hamilton and Susmel (1994), for example, find that, for their stock return data, a shock in a given week would produce non-negligible effects on the variance more than one year later. This can be one of the main reasons why GARCH volatility forecasts are sometimes too smooth and too high across periods with different levels of turbulence.
Financial returns exhibit sudden jumps that are due not only to structural breaks in the real economy, but also to changes in operators' expectations about the future, which can originate from different information or dissimilar preferences. Real volatility is affected by millions of shocks that never persist for long, rendering its behavior mean-reverting. It follows that a good volatility model should entail a different way of treating shocks, in order to produce better forecasts. For these reasons, in the present work GARCH models are incorporated into a regime-switching framework that allows one, rather parsimoniously, to take into account the existence of two regimes characterized by different levels of volatility. In both regimes volatility follows a GARCH-like pattern, specified in such a way as to prevent the actual variance from depending on the entire information set, as in Gray (1996) and Klaassen (2002).
Besides, some fat-tailed distributions, such as the Student's t and the GED, are adopted. In some cases the corresponding shape parameters can vary across regimes, to model possible variations in the conditional kurtosis in a way that generalizes Dueker's (1997) Regime-Switching GARCH models, where only a few parameters are state-dependent.
GARCH models typically show high volatility persistence. Lamoureux and Lastrapes (1990) attribute
this persistence to the possible presence of structural breaks in the variance. They demonstrate that shifts
in the unconditional variance are likely to lead to wrong estimates of the GARCH parameters in a way that
implies persistence.
Cai (1994) and Hamilton and Susmel (1994) are the first to apply the seminal idea of regime-switching parameters by Hamilton (1988, 1989 and 1990) to an ARCH specification in order to account for the possible presence of structural breaks. They use an ARCH specification instead of a GARCH to avoid problems of infinite path-dependence.
The property that makes Markov Regime-Switching GARCH (henceforth MRS-GARCH) and GARCH models so different is their completely opposite representation of the concept of time-varying volatility. While GARCH models describe volatility as an ARMA process, thus incorporating innovations directly, MRS-GARCH models keep the same structure for the volatility while adding the possibility of sudden jumps from the turbulent regime to the tranquil state and vice versa.
The main empirical results, using US stock market data, point out that MRS-GARCH models significantly outperform standard GARCH models in forecasting volatility at shorter horizons according to a broad set of statistical loss functions. The significance of the performance differentials among the competing models is assessed both through Diebold-Mariano-type tests for equal predictive ability and through tests for superior predictive ability, such as White's (2000) Reality Check test and Hansen's (2001) test for Superior Predictive Ability. Overall these tests show that the MRS-GARCH model with normal innovations outperforms all standard GARCH models in forecasting volatility at shorter horizons. At longer horizons, standard GARCH models outperform the MRS-GARCH. The tests for superior predictive ability display the predictive superiority of the MRS-GARCH model with normal innovations, and the results do not change if we compare only MRS-GARCH models, without including single-regime GARCH models. Since volatility is a key ingredient of VaR estimates, a risk-management loss function is also adopted to compare the forecasting performance of the competing models. According to this loss function, MRS-GARCH models seem to fare much worse than under the standard statistical loss functions, also at shorter horizons (the only exception is the MRS-GARCH model with GED innovations). However, as in previous studies such as Brooks and Persand (2003), there is no clear answer as to which model fares best under this VaR-based loss function.
The plan of this paper is as follows. Linear and non-linear GARCH models are presented in section
2. Section 3 is devoted to a detailed description of MRS-GARCH models. The stock market data (daily
and intra-daily) and the methodology are discussed in section 4, while in section 5 a digression on the
various tests used to evaluate the one-day, one-week, two-week and one-month ahead volatility forecasts
is presented. All the empirical results and the discussion are developed in section 6 and conclusions are
sketched in section 7.
2 GARCH Models
Let us consider a stock market index $p_t$ and its corresponding rate of return $r_t$, defined as the continuously compounded rate of return (in percent)

$$ r_t = 100\,[\log(p_t) - \log(p_{t-1})] \qquad (1) $$

where the index $t$ denotes the daily closing observations and $t = -R+1, \ldots, n$. The sample period consists of an estimation (or in-sample) period with $R$ observations ($t = -R+1, \ldots, 0$), and an evaluation (or out-of-sample) period with $n$ observations ($t = 1, \ldots, n$).
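As a concrete illustration, the return definition in (1) takes only a few lines of Python (the paper's own computations were done in MATLAB and C/C++; the closing prices below are made up):

```python
import math

def pct_log_returns(prices):
    """Continuously compounded returns in percent: r_t = 100*(log p_t - log p_{t-1})."""
    return [100.0 * (math.log(p1) - math.log(p0))
            for p0, p1 in zip(prices, prices[1:])]

prices = [100.0, 101.0, 99.5]      # hypothetical daily closing prices
returns = pct_log_returns(prices)  # n prices yield n-1 returns
```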
The GARCH(1,1) model for the series of returns $r_t$ can be written as

$$ r_t = \mu + \varepsilon_t = \mu + \eta_t \sqrt{h_t} \qquad (2) $$

$$ h_t = \alpha_0 + \alpha_1 \varepsilon^2_{t-1} + \beta_1 h_{t-1} \qquad (3) $$

where $\alpha_0 > 0$, $\alpha_1 \geq 0$ and $\beta_1 \geq 0$ to ensure a positive conditional variance, and the innovation $\varepsilon_t$ is conveniently expressed as the product of an i.i.d. process $\eta_t$ with zero mean and unit variance times the square root of the conditional variance.
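A minimal sketch of the variance recursion (3), with made-up shocks and parameters (the positivity constraints above are assumed to hold):

```python
def garch11_variance(eps, alpha0, alpha1, beta1, h0):
    """Conditional variance path h_t = alpha0 + alpha1*eps_{t-1}^2 + beta1*h_{t-1}."""
    h = [h0]
    for e in eps[:-1]:
        h.append(alpha0 + alpha1 * e * e + beta1 * h[-1])
    return h

# hypothetical shocks and parameters; alpha1 + beta1 < 1 gives mean reversion
path = garch11_variance([1.0, 2.0, 0.0], alpha0=0.1, alpha1=0.1, beta1=0.8, h0=1.0)
```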
In order to cope with the skewness often encountered in financial returns, Nelson (1991) introduces the Exponential GARCH (EGARCH) model, where the logarithm of the conditional variance is modeled as

$$ \log(h_t) = \alpha_0 + \alpha_1 \frac{|\varepsilon_{t-1}|}{\sqrt{h_{t-1}}} + \gamma \frac{\varepsilon_{t-1}}{\sqrt{h_{t-1}}} + \beta_1 \log(h_{t-1}) \qquad (4) $$

with no parameter constraints.
Glosten, Jagannathan and Runkle (1993) put forward a modified GARCH model (GJR) to account for the leverage effect. This is an asymmetric GARCH model that allows the conditional variance to respond differently to shocks of either sign and is defined as follows

$$ h_t = \alpha_0 + \alpha_1 \varepsilon^2_{t-1}\left(1 - I_{\{\varepsilon_{t-1} > 0\}}\right) + \alpha_2 \varepsilon^2_{t-1}\, I_{\{\varepsilon_{t-1} > 0\}} + \beta_1 h_{t-1} \qquad (5) $$

where $I_{\{\cdot\}}$ is the indicator function, equal to one when the condition in braces is true and zero otherwise.
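The GJR recursion (5) differs from (3) only in the indicator terms; a sketch with hypothetical parameters, where $\alpha_1$ loads on non-positive shocks and $\alpha_2$ on positive ones, following the indicator convention in (5):

```python
def gjr_variance(eps, a0, a1, a2, b1, h0):
    """h_t = a0 + a1*eps^2*(1 - I{eps>0}) + a2*eps^2*I{eps>0} + b1*h_{t-1}."""
    h = [h0]
    for e in eps[:-1]:
        pos = 1.0 if e > 0 else 0.0
        h.append(a0 + a1 * e * e * (1 - pos) + a2 * e * e * pos + b1 * h[-1])
    return h

# made-up values: the negative shock raises variance more than the positive one
path = gjr_variance([-1.0, 1.0, 0.0], a0=0.05, a1=0.15, a2=0.05, b1=0.8, h0=1.0)
```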
Another common finding in the GARCH literature is the leptokurtosis of the empirical distribution of financial returns. To model such fat-tailed distributions, researchers have adopted the Student's t or the Generalized Error Distribution (GED). Therefore, in addition to the classic Gaussian assumption, in what follows the errors $\varepsilon_t$ are also assumed to be distributed according to a Student's t or a GED distribution. If a Student's t distribution with $\nu$ degrees of freedom is assumed, the probability density function (pdf) of $\varepsilon_t$ takes the form

$$ f(\varepsilon_t) = \frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)\sqrt{\pi}\,(\nu-2)^{1/2}}\, h_t^{-1/2} \left[1 + \frac{\varepsilon^2_t}{h_t(\nu-2)}\right]^{-\frac{\nu+1}{2}} \qquad (6) $$
where $\Gamma(\cdot)$ is the Gamma function and $\nu$ is the degree-of-freedom (or shape) parameter, constrained to be greater than two so that the second moment exists. With a GED distribution instead, the pdf of the innovations becomes

$$ f(\varepsilon_t) = \frac{\nu \exp\left[-\frac{1}{2}\left|\varepsilon_t / \left(\lambda h_t^{1/2}\right)\right|^{\nu}\right]}{\lambda\, h_t^{1/2}\, 2^{(1+1/\nu)}\, \Gamma(1/\nu)} \qquad (7) $$

with $\lambda \equiv \left[2^{-2/\nu}\,\Gamma(1/\nu)/\Gamma(3/\nu)\right]^{1/2}$, where $\Gamma(\cdot)$ is the Gamma function and $\nu$ is the thickness-of-tail (or shape) parameter, satisfying the condition $0 < \nu \leq \infty$ and indicating how thick the tails of the distribution are compared to the normal. When the shape parameter $\nu = 2$, the GED becomes a standard normal distribution, while for $\nu < 2$ and $\nu > 2$ the distribution has thicker and thinner tails than the normal, respectively.
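The density in (6) can be checked numerically. The sketch below evaluates it with the standard-library `math.gamma` and verifies by crude numerical integration that it integrates to roughly one (the values of $\nu$ and $h_t$ are arbitrary):

```python
import math

def student_t_pdf(x, nu, h):
    """Density of eq. (6): Student's t scaled to variance h, requires nu > 2."""
    c = (math.gamma((nu + 1) / 2)
         / (math.gamma(nu / 2) * math.sqrt(math.pi * (nu - 2) * h)))
    return c * (1 + x * x / (h * (nu - 2))) ** (-(nu + 1) / 2)

# crude Riemann-sum check that the density integrates to about 1
grid = [-50 + i * 0.01 for i in range(10001)]
area = sum(student_t_pdf(x, nu=5.0, h=1.0) * 0.01 for x in grid)
```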
3 Markov Regime-Switching GARCH Models
The main feature of regime-switching models is the possibility for some or all the parameters of the model
to switch across different regimes (or states of the world) according to a Markov process, which is governed
by a state variable, denoted s
t
. The logic behind this kind of modeling is having a mixture of distributions
with different characteristics, from which the model draws the current value of the variable, according to the
more likely (unobserved) state that could have determined such observation. The state variable is assumed
to evolve according to a rst-order Markov chain, with transition probability
Pr (s
t
= j|s
t1
= i) = p
ij
(8)
that indicates the probability of switching from state $i$ at time $t-1$ into state $j$ at time $t$. Usually these probabilities are grouped together into the transition matrix

$$ P = \begin{bmatrix} p_{11} & p_{21} \\ p_{12} & p_{22} \end{bmatrix} = \begin{bmatrix} p & (1-q) \\ (1-p) & q \end{bmatrix} \qquad (9) $$

where for simplicity the existence of only two regimes has been considered. The ergodic probability (that is, the unconditional probability) of being in state $s_t = 1$[1] is given by $\pi_1 = (1-q)/(2-p-q)$.
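The ergodic probability can be cross-checked by iterating the one-step probability update of the chain until it reaches its fixed point; a small sketch (the values of $p$ and $q$ are made up):

```python
def ergodic_prob_state1(p, q):
    """pi_1 = (1 - q)/(2 - p - q) for a two-state chain with
    p = Pr(stay in regime 1) and q = Pr(stay in regime 2)."""
    return (1 - q) / (2 - p - q)

def iterate_chain(p, q, pi1, steps=200):
    """Iterate Pr(next state = 1) = p*pi1 + (1-q)*(1-pi1) to its fixed point."""
    for _ in range(steps):
        pi1 = p * pi1 + (1 - q) * (1 - pi1)
    return pi1

pi1 = ergodic_prob_state1(0.9, 0.8)   # hypothetical persistence probabilities
```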
[1] For further details on regime-switching models, see Hamilton (1994).

The MRS-GARCH model in its most general form can be written as
$$ r_t\,|\,\Omega_{t-1} \sim \begin{cases} f\left(\theta^{(1)}_t\right) & \text{w.p. } p_{1,t} \\ f\left(\theta^{(2)}_t\right) & \text{w.p. } 1 - p_{1,t} \end{cases} \qquad (10) $$
where $f(\cdot)$ represents one of the possible conditional distributions that can be assumed, that is, Normal (N), Student's t or GED, $\theta^{(i)}_t$ denotes the vector of parameters in the $i$-th regime that characterize the distribution, $p_{1,t} = \Pr[s_t = 1\,|\,\Omega_{t-1}]$ is the ex-ante probability, and $\Omega_{t-1}$ denotes the information set at time $t-1$, that is, the $\sigma$-algebra induced by all the variables observed at $t-1$. More specifically, the vector of time-varying parameters can be decomposed into three components

$$ \theta^{(i)}_t = \left(\mu^{(i)}_t,\ h^{(i)}_t,\ \nu^{(i)}_t\right) \qquad (11) $$
where $\mu^{(i)}_t \equiv E(r_t\,|\,\Omega_{t-1})$ is the conditional mean (or location parameter), $h^{(i)}_t \equiv Var(r_t\,|\,\Omega_{t-1})$ is the conditional variance (or scale parameter), and $\nu^{(i)}_t$ is the shape parameter of the conditional distribution.[2] Hence, the family of density functions of $r_t$ is a location-scale family with time-varying shape parameters in the most general setting.
Therefore, the MRS-GARCH model consists of four elements: the conditional mean, the conditional variance, the regime process and the conditional distribution. The conditional mean equation, which is generally modeled through a random walk with or without drift, is here simply modeled as

$$ r_t = \mu^{(i)}_t + \varepsilon_t = \mu^{(i)} + \varepsilon_t \qquad (12) $$

where $i = 1, 2$, $\varepsilon_t = \eta_t \sqrt{h_t}$ and $\eta_t$ is a zero-mean, unit-variance process. The main reason for this choice is our focus on volatility forecasting.
The conditional variance of $r_t$, given the whole regime path (not observed by the econometrician) $\tilde{s}_t = (s_t, s_{t-1}, \ldots)$, is[3] $h^{(i)}_t = Var[\varepsilon_t\,|\,\tilde{s}_t, \Omega_{t-1}]$. For this conditional variance the following GARCH(1,1)-like expression is assumed

$$ h^{(i)}_t = \alpha^{(i)}_0 + \alpha^{(i)}_1 \varepsilon^2_{t-1} + \beta^{(i)}_1 h_{t-1} \qquad (13) $$

where $h_{t-1}$ is a state-independent average of past conditional variances. In a regime-switching context, a GARCH model with a state-dependent past conditional variance would in fact be infeasible. The conditional variance would depend not only on the observable information $\Omega_{t-1}$ and on the current regime $s_t$, which determines all the parameters, but also on all past states $\tilde{s}_{t-1}$. This would require integration over a number of (unobserved) regime paths that grows exponentially with the sample size, rendering the model essentially intractable and impossible to estimate. Therefore, a simplification is needed to prevent the conditional variance from being a function of all past states.
[2] In all formulas the superscript (i) denotes the regime in which the process is at time t.
[3] Here we are using Klaassen's (2002) model, simplifying his notation.
Cai (1994) and Hamilton and Susmel (1994) are the first to point out this difficulty; they combine the regime-switching approach with ARCH models only, thus eliminating the GARCH term in (13). However, both Cai (1994) and Hamilton and Susmel (1994) realize that many lags are needed for such processes to be sensible.

To avoid the path-dependence problem, Gray (1996) suggests integrating out the unobserved regime path $\tilde{s}_{t-1}$ in the GARCH term in (13) by using the conditional expectation of the past variance. In particular, Gray (1996) uses the information observable at time $t-2$ to integrate out the unobserved regimes as follows
$$ h_{t-1} = E_{t-2}\left\{h^{(j)}_{t-1}\right\} = p_{1,t-1}\left[\left(\mu^{(1)}_{t-1}\right)^2 + h^{(1)}_{t-1}\right] + (1-p_{1,t-1})\left[\left(\mu^{(2)}_{t-1}\right)^2 + h^{(2)}_{t-1}\right] - \left[p_{1,t-1}\,\mu^{(1)}_{t-1} + (1-p_{1,t-1})\,\mu^{(2)}_{t-1}\right]^2 \qquad (14) $$

where $j = 1, 2$. The main drawback of this specification is its inconvenience for volatility forecasting, because multi-step-ahead volatility forecasts turn out to be rather complicated.
Dueker (1997) uses a collapsing procedure in the spirit of Kim's (1994) algorithm to overcome the path-dependence problem, but he essentially adopts the same framework as Gray (1996).
All these models have been put into a unified framework by Lin (1998), who gives the following specification for the conditional standard deviation

$$ \sigma^{\delta_1}_t = a^{s_t}_1 + a^{s_t}_2(L)^p\, \bar{\sigma}^{\delta_2}_{t-1}\, |f(\varepsilon_{t-1})|^{w} + a^{s_t}_3(L)^q\, \bar{\sigma}^{\delta_1}_{t-1} \qquad (15) $$

where $\delta_1$, $\delta_2$ and $w$ are real exponents, $\bar{\sigma}_t$ denotes the conditional expectation of $\sigma_t$, $a^{s_t}_2(L)^p$ and $a^{s_t}_3(L)^q$ represent polynomials in the lag operator $(L)$ of order $p$ and $q$ respectively, and $f(\varepsilon_t) = \varepsilon_t$. Lin (1998) follows Gray's (1996) approach to avoid path-dependence.
Recently, Klaassen (2002) has suggested using the conditional expectation of the lagged conditional variance with a broader information set than in Gray (1996). To integrate out the past regimes while also taking into account the current one, Klaassen (2002) adopts the following expression for the conditional variance

$$ h^{(i)}_t = \alpha^{(i)}_0 + \alpha^{(i)}_1 \varepsilon^2_{t-1} + \beta^{(i)}_1 E_{t-1}\left\{h^{(i)}_{t-1}\,|\,s_t\right\} \qquad (16) $$
where the expectation is computed as

$$ E_{t-1}\left\{h^{(i)}_{t-1}\,|\,s_t\right\} = \tilde{p}_{ii,t-1}\left[\left(\mu^{(i)}_{t-1}\right)^2 + h^{(i)}_{t-1}\right] + \tilde{p}_{ji,t-1}\left[\left(\mu^{(j)}_{t-1}\right)^2 + h^{(j)}_{t-1}\right] - \left[\tilde{p}_{ii,t-1}\,\mu^{(i)}_{t-1} + \tilde{p}_{ji,t-1}\,\mu^{(j)}_{t-1}\right]^2 \qquad (17) $$
and the probabilities are calculated as

$$ \tilde{p}_{ji,t} = \Pr(s_t = j\,|\,s_{t+1} = i, \Omega_{t-1}) = \frac{p_{ji}\,\Pr(s_t = j\,|\,\Omega_{t-1})}{\Pr(s_{t+1} = i\,|\,\Omega_{t-1})} = \frac{p_{ji}\, p_{j,t}}{p_{i,t+1}} \qquad (18) $$

with $i, j = 1, 2$.
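Equation (18) is a one-line computation once the transition matrix and the ex-ante regime probabilities are available; a sketch with hypothetical values, where indices 0 and 1 stand for regimes 1 and 2:

```python
def klaassen_prob(p_trans, p_filt, j, i):
    """Eq. (18): p~_{ji,t} = p_{ji} * Pr(s_t=j|O_{t-1}) / Pr(s_{t+1}=i|O_{t-1}).
    p_trans[j][i] = Pr(s_{t+1}=i | s_t=j); p_filt[j] = Pr(s_t=j | O_{t-1})."""
    denom = sum(p_filt[k] * p_trans[k][i] for k in (0, 1))  # Pr(s_{t+1}=i|O_{t-1})
    return p_trans[j][i] * p_filt[j] / denom

p_trans = [[0.9, 0.1], [0.2, 0.8]]  # hypothetical transition matrix
p_filt = [0.6, 0.4]                 # hypothetical ex-ante regime probabilities
k00 = klaassen_prob(p_trans, p_filt, 0, 0)
k10 = klaassen_prob(p_trans, p_filt, 1, 0)
```

For each future regime i the two probabilities sum to one, which is what makes (17) a proper mixture.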
Klaassen's (2002) regime-switching GARCH has two main advantages over the other models. Within the model, it allows higher flexibility in capturing the persistence of shocks to volatility.[4] Furthermore, it yields straightforward expressions for the multi-step-ahead volatility forecasts, which can be calculated recursively as in standard GARCH models.
Since there is no serial correlation in the returns, the $h$-step-ahead volatility forecast made at time $T$ can be calculated as follows

$$ \hat{h}_{T,T+h} = \sum_{\tau=1}^{h} \hat{h}_{T,T+\tau} = \sum_{\tau=1}^{h} \sum_{i=1}^{2} \Pr(s_{T+\tau} = i\,|\,\Omega_T)\, \hat{h}^{(i)}_{T,T+\tau} \qquad (19) $$
where $\hat{h}_{T,T+h}$ denotes the time-$T$ aggregated volatility forecast for the next $h$ steps, and $\hat{h}^{(i)}_{T,T+\tau}$ indicates the $\tau$-step-ahead volatility forecast in regime $i$ made at time $T$, which can be calculated recursively

$$ \hat{h}^{(i)}_{T,T+\tau} = \alpha^{(i)}_0 + \left(\alpha^{(i)}_1 + \beta^{(i)}_1\right) E_T\left\{\hat{h}^{(i)}_{T,T+\tau-1}\,|\,s_{T+\tau}\right\} \qquad (20) $$
Therefore, the multi-step-ahead volatility forecasts are computed as a weighted average of the multi-step-ahead volatility forecasts in each regime, where the weights are the prediction probabilities. Each regime's volatility forecast is obtained with a GARCH-like formula in which the expectation of the previous period's volatility is determined by weighting the previous regime volatilities with the probabilities in (18). In general, to compute the volatility forecasts one needs the predicted probability $\tau$ periods ahead, $\Pr(s_{t+\tau} = i\,|\,\Omega_t) = p_{i,t+\tau}$, obtained by applying the transition matrix $\tau$ times to the filtered probabilities $p_{i,t}$.
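A stylized sketch of the aggregation logic in (19)-(20): to keep it short, the mean terms appearing in (17) are ignored and the conditional expectation is approximated by mixing the regime variances with the prediction probabilities, so this is an illustration rather than a faithful implementation (all parameters are made up):

```python
def aggregate_forecast(h1, alpha0, alpha1, beta1, P, pi, horizon):
    """h1: one-step regime variance forecasts [h^(1), h^(2)];
    P[j][i] = Pr(next regime = i | current = j); pi: current regime probabilities.
    Returns the aggregated variance forecast over `horizon` steps, as in (19)."""
    total = 0.0
    h = list(h1)
    for _ in range(horizon):
        # prediction probabilities for this step
        pi = [sum(pi[j] * P[j][i] for j in (0, 1)) for i in (0, 1)]
        total += sum(pi[i] * h[i] for i in (0, 1))
        # GARCH-like update (20), mixing regime variances as a stand-in for (17)
        mix = sum(pi[i] * h[i] for i in (0, 1))
        h = [alpha0[i] + (alpha1[i] + beta1[i]) * mix for i in (0, 1)]
    return total

# degenerate check: identical regimes at their unconditional variance of 1.0,
# so the aggregated h-step forecast should equal the horizon itself
agg = aggregate_forecast([1.0, 1.0], [0.1, 0.1], [0.1, 0.1], [0.8, 0.8],
                         [[0.5, 0.5], [0.5, 0.5]], [0.5, 0.5], horizon=3)
```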
Typically, in the Markov regime-switching literature maximum likelihood estimation is adopted to estimate the numerous parameters. An essential ingredient is the ex-ante probability $p_{1,t} = \Pr[s_t = 1\,|\,\Omega_{t-1}]$, i.e. the probability of being in the first regime at time $t$ given the information at time $t-1$, whose specification is
$$ p_{1,t} = \Pr[s_t = 1\,|\,\Omega_{t-1}] = (1-q)\left[\frac{f(r_{t-1}|s_{t-1}=2)\,(1-p_{1,t-1})}{f(r_{t-1}|s_{t-1}=1)\,p_{1,t-1} + f(r_{t-1}|s_{t-1}=2)\,(1-p_{1,t-1})}\right] + p\left[\frac{f(r_{t-1}|s_{t-1}=1)\,p_{1,t-1}}{f(r_{t-1}|s_{t-1}=1)\,p_{1,t-1} + f(r_{t-1}|s_{t-1}=2)\,(1-p_{1,t-1})}\right] \qquad (21) $$
where $p$ and $q$ are the transition probabilities in (9) and $f(\cdot)$ is the likelihood given in (10).

Thus, the log-likelihood function can be written as

$$ \ell = \sum_{t=-R+w+1}^{w} \log\left[p_{1,t}\, f(r_t\,|\,s_t=1) + (1-p_{1,t})\, f(r_t\,|\,s_t=2)\right] \qquad (22) $$

where $w = 0, 1, \ldots, n$ indexes the rolling estimation samples and $f(\cdot\,|\,s_t = i)$ is the conditional distribution given that regime $i$ occurs at time $t$.

[4] A shock can be followed by a volatile period not only because of GARCH effects but also because of a switch to the higher-variance regime. Having different parameters across regimes can capture the pressure-relieving effect of some large shocks.
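The filter (21) and the likelihood (22) can be sketched as follows. To keep the example self-contained, constant within-regime variances replace the GARCH recursions and a Gaussian conditional density is assumed; the data and parameters are made up:

```python
import math

def normal_pdf(x, mu, h):
    """Gaussian density with mean mu and variance h."""
    return math.exp(-(x - mu) ** 2 / (2 * h)) / math.sqrt(2 * math.pi * h)

def loglik_two_regime(returns, mu, h, p, q, p1_init=0.5):
    """Log-likelihood of eq. (22), with the ex-ante probability updated as in
    eq. (21). mu = (mu1, mu2) and h = (h1, h2) are constant regime parameters
    standing in for the full GARCH recursions."""
    p1, ll = p1_init, 0.0
    for r in returns:
        f1 = normal_pdf(r, mu[0], h[0])
        f2 = normal_pdf(r, mu[1], h[1])
        lik = p1 * f1 + (1 - p1) * f2
        ll += math.log(lik)
        # filtered probability of regime 1 at time t, then the eq. (21) update
        filt1 = p1 * f1 / lik
        p1 = p * filt1 + (1 - q) * (1 - filt1)
    return ll
```

When the two regimes share the same parameters, the likelihood collapses to the plain i.i.d. Gaussian one, which gives an easy correctness check.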
4 Data and Methodology
The data set analyzed in this paper is the Standard & Poor 100 (S&P100) stock market daily closing price
index. The sample period is from January 1, 1988 to October 15, 2003 for a total of 4095 observations all
obtained from Datastream. The sample is divided in two parts. The rst 3985 observations (from January
1, 1988 to September 29, 2001) are used as the in-sample for estimation purposes, while the remaining
511 observations (from October 1, 2001 to October 15, 2003) are taken as the out-of-sample for forecast
evaluation purposes.
Table 1 shows some descriptive statistics of the S&P100 rate of return. The mean is quite small (about 0.5%) and the standard deviation is around unity. The kurtosis is significantly higher than the normal value of 3, indicating that fat-tailed distributions are necessary to correctly describe $r_t$'s conditional distribution. The skewness is significant, small and negative, showing that the lower tail of the empirical distribution of the returns is longer than the upper tail; that is, negative returns are more likely to be far below the mean than positive returns are to be far above it.
[INSERT TABLE 1 HERE]
LM(12) is the Lagrange Multiplier test for ARCH effects in the OLS residuals from the regression of the returns on a constant, while $Q^2(12)$ is the corresponding Ljung-Box statistic on the squared standardized residuals. Both statistics are highly significant, suggesting the presence of ARCH effects in the S&P100 returns up to the twelfth order.
The group of competing GARCH models, with or without state-dependent parameters, is estimated by quasi-maximum likelihood. The conditional mean and the conditional variance are estimated jointly by maximizing the log-likelihood function, which is computed as the logarithm of the product of the conditional densities of the prediction errors, as shown in (22). The ML estimates are obtained by maximizing the log-likelihood with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton optimization algorithm in the MATLAB numerical optimization routines.[5]
The true volatility is needed to evaluate the forecasting performance of the competing GARCH models both in-sample and out-of-sample. So far in the literature many researchers have used either the ex-ante or the ex-post squared returns to proxy the realized volatility. However, squared returns are a very noisy estimate of the unobserved volatility and can lead to wrong and unfair assessments of the real ability of the various GARCH models to forecast volatility. As highlighted in Andersen and Bollerslev (1998), one possibility to avoid such misleading conclusions about the relatively poor out-of-sample performance of GARCH models is to use a more precise measure of volatility, obtained with intra-daily data. This measure is called realized volatility and is based on the cumulative squared intra-daily returns over time intervals of a few minutes or a few hours.

In this paper we adopt three different measures of actual volatility, denoted $\sigma^2_{t+1}$. The first is the realized volatility, computed as the sum of squared 5-minute returns over each day. Intra-daily returns on the S&P100 are obtained from www.disktrading.com. These data are also used by Hol and Koopman (2005). To calculate the $h$-step-ahead volatility we sum the daily realized volatility over the $h$ days. The second measure is the more classical squared daily return, which is summed over the relevant days for horizons greater than one day. The third measure is the squared return over the forecasting horizon. Thus, if for example we are forecasting volatility at the one-week horizon, we use the squared log difference between the closing prices at times $t$ and $t+5$.

[5] Some of the iterative procedures have been written in C/C++ in order to enhance speed and to add capabilities not directly available in MATLAB.
We denote the $h$-step-ahead volatility forecast by $\hat{h}_{t+h|t}$, computed as the aggregated sum of the forecasts for the next $h$ steps made at time $t$, i.e. $\hat{h}_{t+h|t} = \sum_{j=1}^{h} \hat{h}_{t+j|t}$. We compute volatility forecasts at 1, 5, 10 and 22 days by aggregating the volatility forecasts over the next 1, 5, 10 and 22 days. In fact, practitioners and risk managers are interested in the volatility over the whole forecast horizon rather than in multi-step-ahead one-day volatility forecasts, such as the volatility at time $t+22$ predicted at $t$.
5 Evaluation of Volatility Forecasts
5.1 Standard Statistical Loss Functions
Forecast evaluation is a key step in any forecasting exercise. A popular way to evaluate different forecast models is the minimization of a particular statistical loss function. However, evaluating the quality of competing volatility models can be very difficult because, as remarked by both Bollerslev, Engle and Nelson (1994) and Lopez (2001), there does not exist a unique criterion capable of selecting the best model. Many researchers have highlighted the importance of evaluating volatility forecasts by means of the real loss function faced by the final user. For example, Engle, Hong, Kane and Noh (1993) and West, Edison, and Cho (1993) suggest profit-based and utility-based criteria for evaluating the accuracy of volatility forecasts. Unfortunately, it is not possible to know such a loss function exactly, because it depends on the unknown and unobservable preferences of economic agents. Thus, even though this choice is open to criticism, so far most of the literature has focused on a particular statistical loss function, the Mean Squared Error (MSE).

In the present work, instead of choosing a particular statistical loss function as the best and unique criterion, we adopt seven different loss functions, which have different interpretations and lead to a more complete forecast evaluation of the competing models. These statistical loss functions are:
$$ MSE_1 = n^{-1} \sum_{t=1}^{n} \left(\sigma_{t+1} - \hat{h}^{1/2}_{t+1|t}\right)^2 \qquad (23) $$

$$ MSE_2 = n^{-1} \sum_{t=1}^{n} \left(\sigma^2_{t+1} - \hat{h}_{t+1|t}\right)^2 \qquad (24) $$

$$ QLIKE = n^{-1} \sum_{t=1}^{n} \left(\log \hat{h}_{t+1|t} + \sigma^2_{t+1}\, \hat{h}^{-1}_{t+1|t}\right) \qquad (25) $$

$$ R2LOG = n^{-1} \sum_{t=1}^{n} \left[\log\left(\sigma^2_{t+1}\, \hat{h}^{-1}_{t+1|t}\right)\right]^2 \qquad (26) $$

$$ MAD_1 = n^{-1} \sum_{t=1}^{n} \left|\sigma_{t+1} - \hat{h}^{1/2}_{t+1|t}\right| \qquad (27) $$

$$ MAD_2 = n^{-1} \sum_{t=1}^{n} \left|\sigma^2_{t+1} - \hat{h}_{t+1|t}\right| \qquad (28) $$

$$ HMSE = n^{-1} \sum_{t=1}^{n} \left(\sigma^2_{t+1}\, \hat{h}^{-1}_{t+1|t} - 1\right)^2 \qquad (29) $$
The criteria in (23) and (24) are the typical mean squared error metrics. The criteria in (24) and (26) are exactly equivalent to using the $R^2$ metric in the Mincer-Zarnowitz regressions of $\sigma^2_{t+1}$ on a constant and $\hat{h}_{t+1|t}$, and of $\log(\sigma^2_{t+1})$ on a constant and $\log(\hat{h}_{t+1|t})$, respectively, provided that the forecasts are unbiased. Moreover, the R2LOG loss function has the particular feature of penalizing volatility forecasts asymmetrically in low-volatility and high-volatility periods, as pointed out by Pagan and Schwert (1990), who put forward (26), calling it the logarithmic loss function. The loss function in (25) corresponds to the loss implied by a Gaussian likelihood and is suggested by Bollerslev, Engle and Nelson (1994). The Mean Absolute Deviation (MAD) criteria in (27) and (28) are useful because they are generally more robust to the possible presence of outliers than the MSE criteria, but they impose the same penalty on over- and under-predictions and are not invariant to scale transformations. Bollerslev and Ghysels (1996) propose the heteroskedasticity-adjusted MSE in (29).
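The seven loss functions (23)-(29) are straightforward to compute; a compact sketch, where `sigma2` holds whichever volatility proxy is adopted and `hhat` the forecasts:

```python
import math

def losses(sigma2, hhat):
    """The seven statistical loss functions in (23)-(29)."""
    n = len(sigma2)
    pairs = list(zip(sigma2, hhat))
    return {
        "MSE1":  sum((math.sqrt(s) - math.sqrt(h)) ** 2 for s, h in pairs) / n,
        "MSE2":  sum((s - h) ** 2 for s, h in pairs) / n,
        "QLIKE": sum(math.log(h) + s / h for s, h in pairs) / n,
        "R2LOG": sum(math.log(s / h) ** 2 for s, h in pairs) / n,
        "MAD1":  sum(abs(math.sqrt(s) - math.sqrt(h)) for s, h in pairs) / n,
        "MAD2":  sum(abs(s - h) for s, h in pairs) / n,
        "HMSE":  sum((s / h - 1) ** 2 for s, h in pairs) / n,
    }

# perfect forecasts drive every loss except QLIKE to zero
out = losses([1.0, 2.0], [1.0, 2.0])
```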
When comparing different volatility forecasts it can also be useful to measure the number of times a given model correctly predicts the direction of change[6] of the actual volatility. Such directional accuracy of volatility forecasts can be of great importance because the direction of predicted volatility changes can be used to construct particular trading strategies such as straddles (Engle, Hong, Kane and Noh, 1993). Several tests of directional predictive ability have been proposed in the literature. In the present paper we use the so-called Success Ratio (SR) and the Directional Accuracy (DA) test of Pesaran and Timmermann (1992).
Let $\sigma_{t+j}$ be the proxy for the actual volatility after subtracting its non-zero mean and let $\hat{h}_{t+j|t+j-1}$ be the demeaned volatility forecast.[7] The SR is simply the fraction of the volatility forecasts that have the same direction of change as the corresponding realizations and is given by

$$ SR = m^{-1} \sum_{j=1}^{m} I_{\{\sigma_{t+j}\, \hat{h}_{t+j|t+j-1} > 0\}} \qquad (30) $$

where $I_{\{g>0\}}$ is the indicator function, that is, $I_{\{g>0\}} = 1$ if $g$ is positive and zero otherwise. Thus the SR measures the number of times the volatility forecast correctly predicts the direction of the true volatility process.

The DA test is instead given by

$$ DA = \frac{SR - SRI}{\sqrt{Var(SR) - Var(SRI)}} \qquad (31) $$

where $SRI = P\hat{P} + (1-P)(1-\hat{P})$, with $P$ and $\hat{P}$ denoting the fractions of positive (demeaned) realizations and forecasts, respectively. To compare the forecast accuracy of two competing models, let $d_t$ denote their loss differential at time $t$ and $\bar{d} = n^{-1}\sum_{t=1}^{n} d_t$ its sample mean, so that under suitable conditions $\sqrt{n}\,\bar{d} \rightarrow_d N(0, V(\bar{d}))$. An estimate of the asymptotic variance is

$$ \hat{V}(\bar{d}) = n^{-1}\left(\hat{\gamma}_0 + 2\sum_{k=1}^{q} \omega_k\, \hat{\gamma}_k\right) $$

where $q = h - 1$, $\omega_k = 1 - k/(q+1)$ is the lag window and $\hat{\gamma}_k$ is an estimate of the $k$-th order autocovariance of the series $\{d_t\}_{t=1}^{n}$, which can be estimated as $\hat{\gamma}_k = n^{-1}\sum_{t=k+1}^{n} (d_t - \bar{d})(d_{t-k} - \bar{d})$ for $k = 1, \ldots, q$.[8]

[6] We talk about the direction of change because volatility is always positive.
[7] The author acknowledges that, because of Jensen's inequality, $E(X^2) \geq [E(X)]^2$, such a procedure can give results that might be partially misleading. However, both the averages of $\sigma^2_t$ and $\hat{h}_t$ can be overestimated, and particularly the former. Therefore, the results for the sign tests should be only partially underestimated.
[8] Because for optimal $h$-step-ahead forecasts the sequence of forecast errors follows an MA process of order $h-1$; $q$ has been chosen so that $q = 4(n/100)^{2/9}$.
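The Success Ratio in (30) reduces to counting sign agreements of the demeaned series; a minimal sketch:

```python
def success_ratio(actual, forecast):
    """Eq. (30): fraction of forecasts whose demeaned direction of change
    matches that of the demeaned realization."""
    m = len(actual)
    a_mean = sum(actual) / m
    f_mean = sum(forecast) / m
    hits = sum(1 for a, f in zip(actual, forecast)
               if (a - a_mean) * (f - f_mean) > 0)
    return hits / m
```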
The DM test statistic for testing the null hypothesis of equal forecast accuracy is then given by $DM = \bar{d}/\sqrt{\hat{V}(\bar{d})}$; under the null hypothesis of equal forecast accuracy the DM test statistic has a standard normal distribution asymptotically. Harvey, Leybourne and Newbold (1997) argue that the DM test can be quite over-sized in small samples, and this problem becomes even more dramatic as the forecast horizon increases. They thus suggest a modified DM (MDM) test, where DM is multiplied by $\sqrt{n^{-1}\left[n + 1 - 2h + n^{-1}h(h-1)\right]}$, where $h$ is the forecast horizon and $n$ is the length of the evaluation period.[9]
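A sketch of the DM statistic with the lag window above and the Harvey-Leybourne-Newbold correction; for $h = 1$ the variance estimate collapses to $\hat{\gamma}_0/n$ (the loss differentials below are made up):

```python
import math

def dm_test(d, h=1):
    """Diebold-Mariano statistic dbar/sqrt(V(dbar)) with the truncated lag
    window omega_k = 1 - k/(q+1), q = h-1, plus the HLN correction factor."""
    n = len(d)
    dbar = sum(d) / n
    q = h - 1
    gamma = [sum((d[t] - dbar) * (d[t - k] - dbar) for t in range(k, n)) / n
             for k in range(q + 1)]
    v = (gamma[0] + 2 * sum((1 - k / (q + 1)) * gamma[k]
                            for k in range(1, q + 1))) / n
    dm = dbar / math.sqrt(v)
    mdm = dm * math.sqrt((n + 1 - 2 * h + h * (h - 1) / n) / n)
    return dm, mdm

dm, mdm = dm_test([1.0, 2.0, 3.0], h=1)  # hypothetical loss differentials
```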
Instead of testing for EPA, as in Diebold and Mariano (1995) or in West (1996), the Reality Check (RC) for data snooping of White (2000) is a test for superior predictive ability[10] (SPA). The RC is constructed to test whether a particular forecasting model is significantly outperformed by a set of alternative models, where the performance of each forecasting model is defined according to a pre-specified loss function.
White (2000) compares $l + 1$ forecasting models. Model 0 is the benchmark and the null hypothesis is that none of the models $k = 1, \ldots, l$ outperforms the benchmark in terms of the specific loss function chosen. The best forecast model is the one that produces the smallest expected loss. Let $L_{t,k} \equiv L(\sigma^2_t, \hat{h}_{k,t})$ denote the loss[11] if one makes the prediction $\hat{h}_{k,t}$ with the $k$-th model when the realized volatility turns out to be $\sigma^2_t$. The performance of model $k$ relative to the benchmark model (at time $t$) can be defined as

$$ f_{k,t} = L_{t,0} - L_{t,k} \qquad k = 1, \ldots, l; \quad t = 1, \ldots, n \qquad (32) $$
Assuming stationarity for $f_{k,t}$, we can define the expected performance of model $k$ relative to the benchmark as $\mu_k = E[f_{k,t}]$ for $k = 1, \ldots, l$. If model $w$ outperforms the benchmark, then the value of $\mu_w$ will be positive. Therefore, we can analyze whether any of the competing models significantly outperforms the benchmark by testing the null hypothesis that $\mu_k \leq 0$ for $k = 1, \ldots, l$. Consequently, the null hypothesis that none of the models is better than the benchmark (that is, no predictive superiority over the benchmark itself) can be equivalently formulated as

$$ H_0: \mu_{\max} \equiv \max_{k=1,\ldots,l} \mu_k \leq 0 \qquad (33) $$
against the alternative that the best model is superior to the benchmark.

[9] Harvey, Leybourne and Newbold (1997) suggest comparing the statistic with the critical values from the Student's t distribution with $n - 1$ degrees of freedom rather than from the normal distribution, as with the DM test.
[10] In economics, testing for SPA is certainly more relevant than testing for EPA, because we are more interested in the possible existence of a best forecasting model than in whether one model of a given pair is better than the other.
[11] The function $L(\cdot)$ can be any one of the loss functions given before. For example, it can be $L_{k,t} = (\sigma^2_t - \hat{h}_{t,k})^2$ if we consider the loss function in (24).

By the law of large numbers one can consistently estimate $\mu_k$ with the sample average $\bar{f}_{k,n} = n^{-1}\sum_{t=1}^{n} f_{k,t}$ and then obtain the test statistic
T
n
max
k=1,...,l
n
1/2
f
k,n
(34)
If we reject the null hypothesis, we have evidence that, among the competing models, at least one is significantly better than the benchmark.
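To make the mechanics concrete, the loss differentials in (32), the statistic in (34) and a stationary-bootstrap p-value can be sketched in a few lines. This is only an illustrative sketch, not the paper's code: the function names, defaults and the simple RC-style re-centring around the sample mean are my assumptions.

```python
import numpy as np

def reality_check_pvalue(loss_bench, loss_models, B=1000, q=0.33, seed=0):
    """White's (2000) Reality Check bootstrap p-value (sketch).

    loss_bench : (n,) losses of the benchmark model 0
    loss_models: (n, l) losses of the l competing models
    q          : stationary-bootstrap parameter (mean block length 1/q)
    """
    rng = np.random.default_rng(seed)
    f = loss_bench[:, None] - loss_models          # relative performance f_{k,t}, eq. (32)
    n, l = f.shape
    fbar = f.mean(axis=0)                          # sample averages of f_{k,t}
    T_n = np.sqrt(n) * fbar.max()                  # test statistic, eq. (34)

    T_star = np.empty(B)
    for b in range(B):
        idx = stationary_bootstrap_indices(n, q, rng)
        fbar_b = f[idx].mean(axis=0)
        # re-centre the bootstrap averages around the sample mean
        T_star[b] = np.sqrt(n) * (fbar_b - fbar).max()
    return T_n, (T_star >= T_n).mean()

def stationary_bootstrap_indices(n, q, rng):
    """Politis-Romano (1994) stationary-bootstrap index series."""
    idx = np.empty(n, dtype=int)
    idx[0] = rng.integers(n)
    for t in range(1, n):
        if rng.random() < q:                       # start a new block w.p. q
            idx[t] = rng.integers(n)
        else:                                      # else continue the block
            idx[t] = (idx[t - 1] + 1) % n
    return idx
```

A clearly inferior benchmark yields a p-value near zero, a dominant one a p-value near one.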
The most difficult problem is to derive the distribution of the statistic $T_n$ under $H_0$, because the distribution is not unique. Hansen (2001) emphasizes that the Reality Check test applies a supremum over the non-standardized performances $T_n$ and, more dangerously, a conservative asymptotic distribution that makes the RC very sensitive to the inclusion of poor models. Hansen (2001) argues that, since the distribution of the statistic is not unique under the null hypothesis, it is necessary to obtain a consistent estimate of the p-value, as well as a lower and an upper bound. Hansen (2001) applies a supremum over the standardized performances and tests the null hypothesis
$$H_0^s: \max_{k=1,\ldots,l} \frac{\mu_k}{\sqrt{\mathrm{var}\left(n^{1/2}\bar{f}_{k,n}\right)}} \le 0 \tag{35}$$
using the statistic
$$T_n^s = \max_{k} \frac{n^{1/2}\,\bar{f}_{k,n}}{\sqrt{\widehat{\mathrm{var}}\left(n^{1/2}\bar{f}_{k,n}\right)}} \tag{36}$$
where $\widehat{\mathrm{var}}(n^{1/2}\bar{f}_{k,n})$ is an estimate of the variance of $n^{1/2}\bar{f}_{k,n}$ obtained via the bootstrap. Therefore,
Hansen (2001) suggests additional refinements to the RC test and some modifications of the asymptotic distribution that result in tests that are less sensitive to the inclusion of poor models and have better power. He also argues that the p-values of the RC are generally inconsistent (that is, too large) and that the test can be asymptotically biased. To overcome these drawbacks, Hansen (2001) shows that it is possible to derive a consistent estimate of the p-value together with an upper and a lower bound. Such a test is called the Superior Predictive Ability (SPA) test, and it includes the RC as a special case. The upper bound ($SPA_u$) is the p-value of a conservative test (that is, one with the same asymptotic distribution as the RC test) in which it is implicitly assumed that all the competing models ($k = 1, \ldots, l$) are as good as the benchmark in terms of expected loss. Hence, the upper-bound p-value coincides with the RC test p-value. The lower bound ($SPA_l$) is the p-value of a liberal test whose null hypothesis assumes that the models with worse performance than the benchmark are poor models in the limit. With the SPA test it is possible to assess which models are worse than the benchmark and, asymptotically, to prevent them from affecting the distribution of the test statistic. The conservative test (and thus the Reality Check test) is quite sensitive to the inclusion of poor and irrelevant models in the comparison, while the consistent ($SPA_c$) and the liberal tests are not.$^{12}$

$^{12}$ For a detailed description of how to implement the RC and SPA tests, see White (2000) and Hansen (2001).
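The studentized statistic in (36) can be sketched by reusing the bootstrap draws to estimate $\mathrm{var}(n^{1/2}\bar{f}_{k,n})$. Again an illustrative sketch with hypothetical names: only the conservative RC-style re-centring is implemented, so the resulting p-value corresponds to the upper bound ($SPA_u$), not to Hansen's consistent or lower-bound variants, which re-centre the bootstrap distribution differently.

```python
import numpy as np

def spa_upper_pvalue(loss_bench, loss_models, B=1000, q=0.33, seed=0):
    """Studentized statistic T^s_n of eq. (36) with a conservative
    (RC-style) re-centring, i.e. an SPA_u-type p-value (sketch)."""
    rng = np.random.default_rng(seed)
    f = loss_bench[:, None] - loss_models          # f_{k,t}, eq. (32)
    n, l = f.shape
    fbar = f.mean(axis=0)

    # bootstrap averages: used both for var(n^{1/2} fbar_k) and the p-value
    fbar_star = np.empty((B, l))
    for b in range(B):
        idx = _sb_indices(n, q, rng)
        fbar_star[b] = f[idx].mean(axis=0)
    var_k = n * fbar_star.var(axis=0)              # est. of var(n^{1/2} fbar_k)
    sd_k = np.sqrt(np.maximum(var_k, 1e-12))

    T_s = (np.sqrt(n) * fbar / sd_k).max()         # eq. (36)
    T_star = (np.sqrt(n) * (fbar_star - fbar) / sd_k).max(axis=1)
    return T_s, (T_star >= T_s).mean()

def _sb_indices(n, q, rng):
    """Stationary-bootstrap indices (Politis and Romano, 1994)."""
    idx = np.empty(n, dtype=int)
    idx[0] = rng.integers(n)
    for t in range(1, n):
        idx[t] = rng.integers(n) if rng.random() < q else (idx[t - 1] + 1) % n
    return idx
```

The studentization is what makes the statistic insensitive to rescaling individual models' loss differentials, which is the source of the better power discussed above.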
5.3 Risk Management Loss Functions
Since one typical use of volatility forecasts is as an input to financial risk management, we also employ a risk-management loss function based on the calculation of the Value at Risk (VaR). An institution's VaR is a measure of the market risk of a portfolio which quantifies, in monetary terms, the likely losses that could arise from market fluctuations. Brooks and Persand (2003) and Sarma, Thomas and Shah (2002) both suggest using VaR-based loss functions.
The VaR at time $t$ of model $i$, at the $\alpha$\% significance level, is calculated as follows:
$$VaR^{i}_{t}[n, \alpha] = \mu^{i}_{t+n} + F^{-1}(\alpha)\sqrt{h^{i}_{t+n}} \tag{37}$$
where $F^{-1}(\alpha)$ is the $\alpha$-quantile of the assumed cumulative distribution function of the innovations, $n$ is the investment horizon ($n = 1, 5, 10, 20$ days), $\alpha = 1\%$ or $5\%$, $\mu^{i}_{t+n}$ is the conditional mean at $t+n$ and $h^{i}_{t+n}$ is the volatility forecast at $t+n$ of model $i$.
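For example, under Gaussian innovations equation (37) reduces to a one-liner; `var_forecast` and its defaults are illustrative assumptions, and for the Student's $t$ or GED models the corresponding quantile must be supplied instead of the normal one.

```python
import math
from statistics import NormalDist

def var_forecast(mu, h, alpha=0.05, quantile=None):
    """VaR as in eq. (37): mu_{t+n} + F^{-1}(alpha) * sqrt(h_{t+n}).

    mu, h   : conditional mean and variance forecasts for t+n
    quantile: F^{-1}(alpha); defaults to the standard normal quantile
              (Gaussian innovations). Pass a Student's t or GED quantile
              for the fat-tailed models.
    """
    if quantile is None:
        quantile = NormalDist().inv_cdf(alpha)     # negative for alpha < 0.5
    return mu + quantile * math.sqrt(h)
```

With $\alpha = 5\%$ the quantile is negative, so the VaR sits in the left tail of the forecast distribution, and the 99% VaR lies further out than the 95% VaR.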
We thus employ three methods to determine the adequacy of the volatility forecasts used as an input for
the VaR.
The TUFF test is based on the number of observations before the first exception. The relevant null is, once again, $H_0: \alpha = \alpha_0$, and the corresponding LR test is
$$LR_{TUFF}(\tilde{T}, \alpha) = -2\log\left\{\alpha(1-\alpha)^{\tilde{T}-1}\right\} + 2\log\left\{\frac{1}{\tilde{T}}\left(1-\frac{1}{\tilde{T}}\right)^{\tilde{T}-1}\right\} \tag{38}$$
where $\tilde{T}$ denotes the number of observations before the first exception. $LR_{TUFF}$ is also asymptotically distributed as $\chi^2(1)$. Kupiec (1995) notes that this test has limited power to distinguish among alternative hypotheses, because all observations after the first failure are ignored, resulting in a test which is over-sized.
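Equation (38) can be computed directly; `lr_tuff` is an illustrative name, not the paper's code. Note that the statistic is identically zero when the first exception arrives exactly at observation $1/\alpha$, since the restricted and unrestricted likelihoods then coincide.

```python
import math

def lr_tuff(t_tilde, alpha):
    """Kupiec's (1995) TUFF likelihood-ratio statistic, eq. (38).

    t_tilde: number of observations until the first VaR exception."""
    if t_tilde < 1:
        raise ValueError("need at least one observation")
    # log-likelihood under H0: failure probability alpha
    restricted = math.log(alpha) + (t_tilde - 1) * math.log(1.0 - alpha)
    if t_tilde == 1:
        unrestricted = 0.0                 # MLE p_hat = 1, log-likelihood = 0
    else:
        p_hat = 1.0 / t_tilde              # MLE of the failure probability
        unrestricted = math.log(p_hat) + (t_tilde - 1) * math.log(1.0 - p_hat)
    return -2.0 * restricted + 2.0 * unrestricted
```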
A correctly specified VaR model should generate the pre-specified failure rate conditionally at every point in time. This property is known as conditional coverage of the VaR. Christoffersen (1998) develops a framework for interval forecast evaluation. The VaR is interpreted as a forecast that the portfolio return will lie in $(-\infty, VaR_t)$ with a pre-specified probability $p$. Christoffersen emphasizes the importance of testing for conditional coverage because of the well-known stylized fact of volatility clustering in financial time series. Good interval forecasts should be narrow in tranquil periods and wide in volatile times, so that observations falling outside a forecasted interval are spread over the entire sample rather than concentrated in clusters. A poor interval forecast may produce correct unconditional coverage but fail to account for higher-order dynamics. In such a case, the observable symptom is a clustering of failures.
Consider a sequence of one-period-ahead VaR forecasts $\{v_{t|t-1}\}_{t=1}^{T}$, estimated at a significance level $1-p$. These forecasts are intended as one-sided interval forecasts $(-\infty, v_{t|t-1}]$ with coverage probability $p$. Given the realizations of the return series $r_t$ and the ex-ante VaR forecasts, the following indicator variable can be calculated:
$$I_t = \begin{cases} 1, & \text{if } r_t < v_{t|t-1} \\ 0, & \text{otherwise} \end{cases}$$
The stochastic process $\{I_t\}$ is called the failure process. The VaR forecasts are said to be efficient if they display correct conditional coverage, i.e. if $E[I_{t|t-1}] = p$ for all $t$ or, equivalently, if $\{I_t\}$ is iid with mean $p$.
Christoffersen (1998) develops a three-step procedure to test for correct conditional coverage: (i) a test for correct unconditional coverage, (ii) a test for independence, and (iii) a test for correct conditional coverage.
In the test for correct unconditional coverage, the null hypothesis of a failure probability $p$ is tested against the alternative that the failure probability differs from $p$, under the assumption of an independently distributed failure process. In the test for independence, the hypothesis of an independently distributed failure process is tested against the alternative of a first-order Markov failure process. Finally, the test of correct conditional coverage is carried out by testing the null hypothesis of an independent failure process with failure probability $p$ against the alternative of a first-order Markov failure process.
All three tests are carried out in the likelihood ratio (LR) framework. The LR statistics are as follows:
1. LR statistic for the test of unconditional coverage:
$$LR_{UC} = LR_{PF} = -2\log\left[\frac{p^{n_1}(1-p)^{n_0}}{\hat{\pi}^{n_1}(1-\hat{\pi})^{n_0}}\right] \sim \chi^2(1)$$
where $p$ is the tolerance level at which the VaR measures are estimated (i.e. 1 or 5\%), $n_1$ is the number of 1s in the indicator series, $n_0$ is the number of 0s, and $\hat{\pi} = n_1/(n_0+n_1)$ is the ML estimate of $p$.
2. LR statistic for the test of independence:
$$LR_{ind} = -2\log\left[\frac{(1-\hat{\pi}_2)^{n_{00}+n_{10}}\,\hat{\pi}_2^{\,n_{01}+n_{11}}}{(1-\hat{\pi}_{01})^{n_{00}}\,\hat{\pi}_{01}^{\,n_{01}}\,(1-\hat{\pi}_{11})^{n_{10}}\,\hat{\pi}_{11}^{\,n_{11}}}\right] \sim \chi^2(1)$$
where $n_{ij}$ is the number of $i$ values followed by a $j$ value in the $I_t$ series ($i,j = 0,1$), $\hat{\pi}_{ij} = \Pr\{I_t = j \mid I_{t-1} = i\}$ ($i,j = 0,1$), $\hat{\pi}_{01} = n_{01}/(n_{00}+n_{01})$, $\hat{\pi}_{11} = n_{11}/(n_{10}+n_{11})$, and $\hat{\pi}_2 = (n_{01}+n_{11})/(n_{00}+n_{01}+n_{10}+n_{11})$.
3. LR statistic for the test of correct conditional coverage:
$$LR_{cc} = -2\log\left[\frac{(1-p)^{n_0}\,p^{n_1}}{(1-\hat{\pi}_{01})^{n_{00}}\,\hat{\pi}_{01}^{\,n_{01}}\,(1-\hat{\pi}_{11})^{n_{10}}\,\hat{\pi}_{11}^{\,n_{11}}}\right] \sim \chi^2(2)$$
If we condition on the first observation, these LR test statistics are related by the identity $LR_{cc} = LR_{UC} + LR_{ind}$.
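All three statistics can be computed from the failure series alone. A minimal sketch (illustrative names; degenerate cases with $\hat{\pi}_{ij} \in \{0,1\}$ are handled crudely, and $LR_{cc}$ is obtained through the identity $LR_{cc} = LR_{UC} + LR_{ind}$ rather than evaluated directly):

```python
import math

def christoffersen_tests(hits, p):
    """Christoffersen's (1998) LR tests from a 0/1 failure series `hits`
    and nominal coverage probability p. Returns (LR_uc, LR_ind, LR_cc)."""
    n0, n1 = hits.count(0), hits.count(1)
    n00 = n01 = n10 = n11 = 0
    for prev, cur in zip(hits, hits[1:]):          # transition counts n_ij
        if prev == 0:
            n00, n01 = n00 + (cur == 0), n01 + (cur == 1)
        else:
            n10, n11 = n10 + (cur == 0), n11 + (cur == 1)
    pi_hat = n1 / (n0 + n1)
    pi01 = n01 / (n00 + n01) if (n00 + n01) else 0.0
    pi11 = n11 / (n10 + n11) if (n10 + n11) else 0.0
    pi2 = (n01 + n11) / (n00 + n01 + n10 + n11)

    def ll(pi, zeros, ones):                       # Bernoulli log-likelihood
        if pi in (0.0, 1.0):
            return 0.0 if (zeros == 0 or ones == 0) else float("-inf")
        return zeros * math.log(1.0 - pi) + ones * math.log(pi)

    lr_uc = -2.0 * (ll(p, n0, n1) - ll(pi_hat, n0, n1))
    lr_ind = -2.0 * (ll(pi2, n00 + n10, n01 + n11)
                     - ll(pi01, n00, n01) - ll(pi11, n10, n11))
    return lr_uc, lr_ind, lr_uc + lr_ind
```

A clustered failure series inflates $LR_{ind}$ even when the overall failure rate is close to nominal, which is precisely the symptom the conditional-coverage test is designed to detect.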
In the VaR-based forecast evaluation we use all of these tests. For the interpretation of the results, models are ranked as follows. As in Brooks and Persand (2003), we assume that any model whose percentage of failures in the rolling hold-out sample is greater than the nominal threshold should be rejected as inadequate. Thus, the lowest-ranking models (the worst) are those with the highest percentage of failures above the nominal value. Once these models have been exhausted, we assume that any model generating far fewer failures than expected should be less desirable than models whose number of failures is closer to the nominal level. Hence, the best models under this loss function are those generating a coverage rate that is below the nominal one but as close as possible to it. We then check for correct unconditional and conditional coverage. If a model also passes these tests, it is considered adequate for risk-management purposes.
6 Empirical Results and Discussion
The whole sample consists of the S&P100 closing prices from January 1, 1988 to October 15, 2003, for a total of 4096 observations. The return series is calculated by taking the log difference of the price index and multiplying by 100. The estimation is carried out on a moving (or rolling) window of 3085 observations. In this section we present the empirical estimates of the single-regime GARCH and MRS-GARCH models, together with the in-sample statistics and the out-of-sample forecast evaluation.
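The return construction and rolling estimation scheme above can be sketched as follows (hypothetical helper names; the window length is the one quoted in the text):

```python
import math

def log_returns(prices):
    """Percentage log returns: r_t = 100 * (log P_t - log P_{t-1})."""
    return [100.0 * (math.log(p1) - math.log(p0))
            for p0, p1 in zip(prices, prices[1:])]

def rolling_windows(returns, window=3085):
    """Yield successive estimation windows of fixed length, each shifted
    forward by one observation (the rolling scheme described above)."""
    for start in range(len(returns) - window + 1):
        yield returns[start:start + window]
```

Each window would be passed to the estimation routine in turn, so that every out-of-sample forecast is produced with parameters re-estimated on the most recent 3085 observations.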
6.1 Single-regime GARCH
The parameter estimates for the different state-independent GARCH(1,1) models are presented in Table 2.
For each model, three different distributions for the innovations are considered: the Normal, the Student's $t$ and the GED. The in-sample period runs from January 1, 1988 through September 28, 2001. The 511 observations from October 1, 2001 through September 15, 2003 are reserved for the evaluation of the out-of-sample performance. The standard errors are asymptotic. Regarding the conditional mean, all the parameters of the various GARCH models are significant. The conditional variance estimates show that almost all the parameters are highly significant, except for the constants $\alpha_0$ in the GARCH and GJR models. Hence GARCH models perform quite well, at least in-sample. In addition, for the Student's $t$ distribution, the estimated degrees of freedom are always greater than 6, suggesting that all the conditional moments up to the sixth order exist. In particular, the conditional kurtosis for the Student's $t$ distribution is given by $3(\nu-2)/(\nu-4)$. Consequently, in the GARCH, EGARCH and GJR models the conditional kurtosis is 5.538, 5.003 and 5.009 respectively, confirming the typical fat-tailed behavior of financial returns.
[TABLE 2 HERE]
Moreover, for the models with GED innovations, the estimates clearly suggest that the conditional distribution has fatter tails than the Gaussian, since all the shape parameters significantly lie between 1 and 2. The same conclusion follows from the conditional kurtosis, which for this distribution is given by $\Gamma(1/\nu)\,\Gamma(5/\nu)/\left[\Gamma(3/\nu)\right]^2$, where $\Gamma(\cdot)$ is the gamma function. For the GARCH, EGARCH and GJR models the kurtosis is 4.149, 4.026 and 4.044 respectively, confirming that the estimated conditional distribution of S&P100 returns is indeed fat-tailed.
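The two kurtosis formulas are easy to check numerically. The functions below are illustrative helpers, not the paper's code; recovering the Gaussian value of 3 at $\nu = 2$ for the GED is a useful sanity check.

```python
import math

def t_kurtosis(nu):
    """Kurtosis of a Student's t with nu > 4 degrees of freedom:
    3 * (nu - 2) / (nu - 4)."""
    if nu <= 4:
        raise ValueError("kurtosis requires nu > 4")
    return 3.0 * (nu - 2.0) / (nu - 4.0)

def ged_kurtosis(nu):
    """Kurtosis of a GED with shape parameter nu:
    Gamma(1/nu) * Gamma(5/nu) / Gamma(3/nu)**2."""
    g = math.gamma
    return g(1.0 / nu) * g(5.0 / nu) / g(3.0 / nu) ** 2
```

Both functions exceed 3 in the fat-tailed region (small $\nu$ for the GED, $\nu$ close to 4 for the Student's $t$) and approach the Gaussian value as the tails thin out.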
6.2 MRS-GARCH
The parameter estimates for MRS-GARCH models are presented in Table 3. Both the models with constant degrees of freedom and the one in which the degrees-of-freedom parameter is allowed to switch between the two regimes show highly significant in-sample estimates. The conditional mean estimates are all significant, whereas for almost half of the conditional variance parameters, especially the constants $\alpha_0^{(i)}$, we fail to reject the null of a zero value. The estimates confirm the existence of two states: the first regime is characterized by low volatility and almost nil persistence of shocks to the conditional volatility, whereas the second reveals high volatility and a higher persistence. The transition probabilities are all significant and, except in the Normal case, not far from unity, showing that almost all regimes are particularly persistent.
[TABLE 3 HERE]
Table 3 also reports the unconditional probabilities and the expected durations for each MRS-GARCH model. The unconditional probability $\pi_1$ of being in the first regime, which is characterized by an overall lower volatility than the second, ranges between 2% for the model with Gaussian innovations and 61% for the model with GED innovations. The expected duration of this low-volatility regime ranges between 53 and 8771 trading days. On the other hand, the unconditional probability of being in the high-volatility regime (the second one) ranges between 39% for the GED model and 98% for the model with Normal innovations. The expected duration of the high-volatility state is roughly between one day and 7519 days. For the Student's $t$ version of the MRS-GARCH with constant degrees of freedom across regimes, the shape parameter is below four, indicating the existence of conditional moments only up to the third. This means that by allowing state-dependent parameters it is possible to model most of the leptokurtosis in the data. In the GED case the shape parameter is below the threshold value of 2, showing that the distribution has thicker tails than the normal. The conditional kurtosis for the GED case is 5.134.
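The unconditional probabilities and expected durations reported in Table 3 follow from the textbook formulas for a two-state Markov chain with staying probabilities $p_{11}$ and $p_{22}$: $\pi_1 = (1-p_{22})/(2-p_{11}-p_{22})$ and expected duration $1/(1-p_{ii})$. The helper below and its name are mine, not the paper's.

```python
def regime_stats(p11, p22):
    """Ergodic probabilities and expected durations of a two-state
    Markov chain with staying probabilities p11 and p22."""
    pi1 = (1.0 - p22) / (2.0 - p11 - p22)   # unconditional P(state 1)
    dur1 = 1.0 / (1.0 - p11)                # expected duration of state 1
    dur2 = 1.0 / (1.0 - p22)                # expected duration of state 2
    return pi1, 1.0 - pi1, dur1, dur2
```

For instance, very large expected durations like the 8771 days quoted above correspond to a staying probability extremely close to one.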
The MRS-GARCH model with Student's $t$ innovations is also presented in the version in which the degrees of freedom are allowed to switch, implying a time-varying kurtosis as in Hansen (1994) and Dueker (1997). There is one main difference between those papers and the present work: while Hansen suggests a model in which the Student's $t$ degrees-of-freedom parameter varies over time according to a logistic function of variables in the information set up to time $t-1$, and Dueker allows only this parameter to be state-dependent, in the present paper the degrees-of-freedom parameter switches across regimes together with all the other parameters. Since both regimes show an estimated number of degrees of freedom greater than 4, we can argue that both regimes have fatter tails than the normal.
6.3 In-Sample statistics
A major problem arises when one attempts to compare single-regime with regime-switching GARCH models. Standard econometric tests for model specification may not be appropriate because some parameters are unidentified under the null.$^{13}$ Since the main focus is on predictive ability, we only present some statistics in Table 4, without carrying out any formal test.

$^{13}$ See Hansen (1992, 1996), who proposes simulation-based tests that can avoid this problem.
[TABLE 4 HERE]
Table 4 reports some in-sample goodness-of-fit statistics, which are used as model selection criteria. Among the state-independent GARCH models, the largest log-likelihood is attained by the EGARCH model with GED innovations, while among the MRS-GARCH models, and overall, the best result is reached by the MRS-GARCH with Student's $t$ distribution in which the degrees of freedom are allowed to switch.
The Akaike Information Criterion (AIC) and the Schwarz Criterion (BIC) both indicate that the best model among the constant-parameter GARCH models, and overall, is the EGARCH with GED errors, while among the MRS-GARCH models the MRS-GARCH(1,1)-t2 fits best. Another property of the MRS-GARCH models that emerges from Table 4 is the high persistence of shocks to the conditional variance, which is not as small as expected: only in one regime is the persistence slightly smaller than in standard GARCH models. Table 4 also shows that, according to all the statistical loss functions considered in this work except the HMSE, the best model in-sample is the MRS-GARCH with Gaussian innovations, while among the standard GARCH models the best is the EGARCH with normal innovations.
6.4 Out-of-Sample forecast evaluation
One possible way to overcome the problems highlighted in the previous section is to compare the models through their out-of-sample forecasting performance. An out-of-sample test can control for possible over-fitting or over-parametrization problems, and gives a more powerful framework for evaluating the performance of competing models.
Since most models are only simple approximations of the true data-generating process, a good in-sample fit is neither a necessary nor a sufficient condition for accurate and reliable forecasts. Furthermore, researchers and practitioners are particularly interested in good volatility forecasts rather than good in-sample fits, which are much more likely with highly parameterized models such as MRS-GARCH.
Table 5 reports the out-of-sample evaluation of the one- and five-step-ahead volatility forecasts according to the statistical loss functions in Section 5. Table 6 displays the out-of-sample evaluation of the ten- and twenty-two-step-ahead volatility forecasts. In both tables the true volatility is proxied by the realized volatility.
[TABLES 5 AND 6 HERE]
All models exhibit a high SR (more than 60%, with an average of 80%) and a highly significant DA test at all forecast horizons.
At the one-step-ahead horizon, the best model is the MRS-GARCH-N and the second best is the GJR-N. At the five-step-ahead horizon, the best model is again the MRS-GARCH-N, while the second best is the EGARCH-N.
At the two-week horizon, the best model is the EGARCH-N, and the MRS-GARCH-N is only the best model among the MRS-GARCH. At the one-month horizon, the best model is the EGARCH-GED, while the MRS-GARCH-N is again only the best among the MRS-GARCH.$^{14}$
From the previous results it is quite evident that MRS-GARCH models fare better at shorter forecast horizons, while at longer ones (more than a week) EGARCH and GJR models with non-normal innovations are best. This is confirmed by the DM test for EPA, for which, for the sake of brevity, we only present the tables where the benchmark is the MRS-GARCH-N at the one-day horizon and the EGARCH-N at the two-week horizon.
Table 7 reports the DM test when the benchmark is the best model at the one-day horizon (MRS-GARCH-N), compared in turn with each of the other models. The comparison is carried out using all the statistical loss functions introduced in Section 5. From the table it is evident that the MRS-GARCH-N (the benchmark) significantly outperforms every standard GARCH model at any usual confidence level. Remarkably, the sign of the DM statistic, when the benchmark is compared to standard GARCH models, is always negative, implying that the benchmark's loss is lower than the loss implied by these models. When we consider the pairwise comparisons with the other MRS-GARCH models, we almost always reject the null of equal forecast accuracy: only with the HMSE loss function are there some models for which we cannot reject the null hypothesis.
[TABLE 7 HERE]
Table 8, instead, presents the DM test when the benchmark is the best model at the two-week horizon (EGARCH-N). Here, for all statistical loss functions and for all models except the MRS-GARCH-N, we reject the null of EPA, suggesting that the benchmark fares best. When the benchmark is compared with the second best (MRS-GARCH-N), we fail to reject the null of equal forecast accuracy for all loss functions but the HMSE.
[TABLE 8 HERE]
The results for all the other models and forecast horizons$^{15}$ show that when the benchmark is a GARCH model, the tests of EPA are rejected for all MRS-GARCH models but the one with normal innovations. In other words, the benchmark outperforms all MRS-GARCH models but the MRS-GARCH-N, which, in particular at shorter horizons, always implies a lower loss than the benchmark. In addition, the EGARCH-N and EGARCH-GED also outperform the benchmark at horizons longer than one day. When the benchmark is the EGARCH model, it outperforms almost all MRS-GARCH models and some other standard GARCH models. In particular, at shorter horizons (up to one week) the MRS-GARCH-N always fares better than the benchmark, whereas the other EGARCH models seem to outperform it at longer horizons. Only the EGARCH-t fails to reject the null of EPA against almost all the other competing models. If the GJR is the benchmark, it outperforms all MRS-GARCH models but the MRS-GARCH-N and fares better than many other standard GARCH models. The MRS-GARCH-N outperforms it at shorter horizons, while the EGARCH-N and EGARCH-GED outperform it at longer ones (more than one day). Furthermore, at all horizons, the GJR-N fares better than the same model with fat-tailed distributions. When the benchmark is the MRS-GARCH-N, it outperforms all the other models up to the one-week horizon, whereas it is beaten by the EGARCH-N and EGARCH-GED at longer horizons. However, it fares best when compared with the other MRS-GARCH models. If the benchmark is the MRS-GARCH-t2, it only outperforms the MRS-GARCH-t and MRS-GARCH-GED, and it is beaten by almost all the other models, which imply a smaller loss. When the benchmark is the MRS-GARCH-t, all the other models outperform it at all horizons, since they display a significantly smaller loss than the benchmark. The same is true for the MRS-GARCH-GED, which only beats the MRS-GARCH-t.

$^{14}$ When the proxy for volatility is the $d$-day squared return, at the one-day, one-, two- and four-week horizons the best model is the EGARCH-t, while the second best is the MRS-GARCH-N. When the volatility proxy is the sum of the daily squared returns, at the one-day horizon the best model is the GJR-N and the best among the MRS-GARCH (MRS-GARCH-N) is only sixth. At the one-week horizon the best model is the GJR-t, whereas the best among the MRS-GARCH (MRS-GARCH-t2) is eighth. At the two-week horizon the best model is the GJR-t, while the best among the MRS-GARCH (MRS-GARCH-GED) is sixth. At the one-month horizon the best model is the GJR-t and the best among the MRS-GARCH (MRS-GARCH-GED) is fourth. All the corresponding tables are available upon request from the author.

$^{15}$ For all forecast horizons we have also computed the MDM statistics of Harvey, Leybourne and Newbold (1997). The overall results are only slightly different from the DM test and lead to exactly the same conclusions. This is due to the fact that the multiplicative factor $n^{-1}\left[n + 1 - 2h + n^{-1}h(h-1)\right]$ is .95, .98, .99, .99 for the one-, five-, ten- and twenty-two-step-ahead horizons, respectively. These results are also available upon request.
In sum, at shorter horizons the MRS-GARCH-N fares best, but at longer ones other standard GARCH models, such as the EGARCH-N, EGARCH-GED or GJR-N, tend to be superior. Another striking feature of this pairwise analysis is that the MRS-GARCH models with fat-tailed distributions are outperformed by almost all standard GARCH models. These results do not carry over when the more general forecast evaluation for SPA is undertaken.
Table 9 reports the Reality Check test for superior predictive ability of each model against all the others at the one-day forecast horizon. For each benchmark model in the rows and each loss function, the table presents three p-values: RC is the Reality Check p-value, while $SPA_c$ and $SPA_l$ are Hansen's (2001) consistent and lower-bound p-values, respectively.$^{16}$
[TABLE 9 HERE]
The p-values reported in Table 9 for the RC and SPA tests distinctly show that all the tests reject the null hypothesis of SPA when the benchmark is one of the standard GARCH models. This means that, among those considered, there is a competing model significantly better than the benchmark. This happens for all the single-regime GARCH models and for every loss function except the HMSE, for which the EGARCH-N and EGARCH-GED are not beaten by any competing model. These apparently striking results are not new in the literature: Hansen and Lunde (2001) obtain similar results with stock market data, finding that the GARCH(1,1) specification is not the best model (in terms of SPA) when compared with other single-regime specifications. Table 9 also presents the RC and SPA test p-values when the benchmark is one of the MRS-GARCH models and the comparison is still carried out against all the other models. It is evident that the MRS-GARCH-N significantly outperforms all the other models at the usual significance level of 5%. As a matter of fact, for all the loss functions but the HMSE we fail to reject the null that no superior model is available. According to this loss function, the EGARCH-N, EGARCH-GED, MRS-GARCH-t and MRS-GARCH-GED are the only models for which we cannot reject the null of SPA. Similar results are obtained for all the other forecast horizons and different block lengths. In general, the MRS-GARCH-N is always the best model according to all loss functions but the HMSE, for which many other models fare best. For some of the other loss functions and with shorter block lengths we also find a few standard GARCH models that outperform all the competing models in terms of SPA.

$^{16}$ Such p-values are calculated adopting the stationary bootstrap of Politis and Romano (1994), as in White (2000) and Hansen and Lunde (2001). The number of bootstrap re-samples $B$ is 3000 and the block-length parameter $q$ is 0.33. However, we have carried out the same calculations with $B = (1000, 3000)$ and a different set of values for $q$ (0.10, 0.20 and 0.33). The results do not change considerably; therefore we report the table with $B = 3000$ and $q = 0.33$. The other tables are available upon request.
[TABLE 10 HERE]
Table 10 reports the same RC and SPA test p-values when the comparison is carried out only among the MRS-GARCH models' two-week-ahead volatility forecasts. This table can thus help us understand the possible implications of including poor models in these tests. The results are quite different from the previous ones. Now the MRS-GARCH-t significantly outperforms all the other MRS-GARCH models, while the MRS-GARCH-GED fares best according to the MSE$_2$ and QLIKE loss functions. We obtain quite similar results for different forecast horizons and shorter block lengths: the MRS-GARCH-t still outperforms all the other MRS-GARCH models for every loss but the HMSE, and the MRS-GARCH-GED also fares best according to some loss functions.
These and the previous results must also be compared with a VaR-based evaluation criterion. Since one of the main purposes of volatility forecasting is to provide an input for subsequent VaR estimation, it is necessary to see how the competing models fare in terms of a risk-management loss function. This is closely related to the results of Dacco and Satchell (1999), who demonstrate that evaluating forecasts from non-linear models, such as regime-switching models, using statistical measures can be misleading, and who propose adopting alternative economic loss functions. Their approach is followed by Brooks and Persand (2003), who use both statistical and risk-management loss functions to evaluate a set of models in terms of their ability to predict volatility. As already discussed in Section 5, we go a little further than Brooks and Persand (2003) by comparing the models in terms of the unconditional and conditional coverage of the corresponding VaR estimates. Sarma, Thomas and Shah (2003) go even further by adopting a second-stage selection criterion for their VaR models, using subjective loss functions that incorporate the risk manager's preferences. These loss functions take into account the magnitude of the VaR failures, penalizing the bigger ones more heavily.
Table 11 reports the risk-management out-of-sample evaluation of our competing GARCH models for the 1-day, 1-week, 2-week and 1-month horizons.
Five statistics are presented for each forecast horizon: the TUFF, the proportion of failures (PF), the test of correct unconditional coverage ($LR_{PF}$), which checks whether PF is significantly higher than the nominal rate, the $LR_{ind}$, which tests independence, and the $LR_{cc}$, which tests for correct conditional coverage.
The rank for each forecast horizon gives the ordering according to the percentage PF. Models with a PF greater than the coverage probability (5 and 1%) are judged inadequate. The theoretical TUFF at 5 and 1% should be 20 and 100, respectively. We can thus see that only for the 99% VaR at the one-day horizon are there values below the theoretical one, except for the MRS-GARCH-t2 and MRS-GARCH-t. At all the other forecast horizons the TUFF is greater than 100. In addition, if the objective is to cover either 99% or 95% of future losses, then many models seem inadequate, especially at the shortest and longest forecast horizons. The three LR tests reject almost all models at longer horizons. It is noticeable that, at all horizons and for both coverage probabilities, the best model according to the statistical forecast evaluation criteria - i.e. the MRS-GARCH-N - is always rejected for a PF that is too high. The other MRS-GARCH models fare better according to this loss function, even though only the MRS-GARCH-GED is not rejected by any of the three LR tests at the 1-day horizon. At this horizon, however, other standard GARCH models also pass the three LR tests, showing good out-of-sample performance under the risk-management loss function. Nevertheless, it is not really clear which model among these fares best. At longer horizons, no model can really pass all the tests. Therefore, a few models seem to provide reasonable and accurate VaR estimates at the 1-day horizon, with a coverage rate close to the nominal level, but there is no uniformly most accurate model according to the risk-management loss functions. This result is not new, since Brooks and Persand (2003) also find no clear answer for most of the series they examine. Our results thus confirm Dacco and Satchell's (1999) argument that the choice of the loss function is fundamental for assessing the accuracy of volatility forecasts from non-linear models.
[FIGURES 2 and 3 HERE]
Figures 2 and 3 depict the VaR exceedances of the 95% and 99% VaR from the GJR-t and MRS-GARCH-t2 models. The differences in the performance of the two models are not clear-cut, although the MRS-GARCH-t2 seems slower than the GJR-t at capturing quick changes in the volatility of returns.
Figure 1 illustrates the volatility forecasts at the 1-day, 1-week, 2-week and 1-month horizons from the best models according to the statistical out-of-sample evaluation, when the proxy for volatility is the realized volatility. Each sub-figure depicts the comparison between the forecasts of the best standard GARCH model and the best MRS-GARCH model. From the plots it is quite evident that, at the shorter horizons, the volatility forecasts of standard GARCH models tend to have higher spikes than those of the MRS-GARCH, while the reverse is true at longer forecast horizons. Thus the model that fares best usually gives much smoother forecasts than the other.
[FIGURE 1 HERE]
In sum, no model seems to outperform all the others in forecasting volatility according to the different out-of-sample evaluation criteria adopted. Accounting for regime shifts in all the parameters of the first two moments of the conditional distribution of US stock returns, together with the inclusion of GARCH effects and fat-tailedness, gives a better in-sample fit and outstanding out-of-sample results according to the usual statistical loss functions. However, when a more realistic loss function is used, such as a risk-management loss function, the results confirm Dacco and Satchell's (1999) theoretical finding that, although most non-linear techniques give a good in-sample fit, they are usually outperformed in out-of-sample forecasting by simpler models when an economic loss function is used. They argue that such a finding may be due to possible over-fitting and to the mean squared error metric, which may be inappropriate for non-linear models. Therefore, the practical relevance of regime-switching models in predicting volatility turns out to be dubious. Further research is needed to evaluate these highly non-linear models according to loss functions that capture what is really relevant for the final use of the volatility forecast.
7 Conclusions
In this paper we compare a set of standard GARCH models and Markov Regime-Switching GARCH (MRS-GARCH)
models in terms of their ability to forecast US stock market volatility. The standard GARCH models considered
are the GARCH(1,1), EGARCH(1,1) and GJR(1,1); in the MRS-GARCH models, each parameter of the first two
conditional moments is allowed to switch between two regimes, one characterized by a lower volatility than
the other. All models are estimated assuming both Gaussian innovations and fat-tailed distributions, such as
the Student's t and the GED. Further, to model time-varying conditional kurtosis, the degrees-of-freedom
parameter in the Student's t distribution is allowed to switch across the two regimes, in a setting completely
different from the one considered by Hansen (1994) or Dueker (1997).
The main goal is to evaluate the performance of the different GARCH models in terms of their ability to
characterize and predict out-of-sample the volatility of the S&P100 index. This out-of-sample comparison is
carried out on the one-day, one-week, two-week and one-month-ahead volatility forecasts.
The proxy for the true volatility is the realized volatility calculated by aggregating five-minute returns.
The forecasting performance of each model is measured with both statistical and VaR-based loss functions.
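As a concrete illustration of this proxy, daily realized volatility can be computed from each day's intraday returns as follows (a minimal Python sketch, not the code used in the paper; the two-day toy input is hypothetical):

```python
import numpy as np

def realized_volatility(intraday_returns):
    """Daily realized volatility: the square root of the sum of squared
    intraday (e.g. five-minute) returns, one row of returns per day."""
    rv = np.sum(np.asarray(intraday_returns, dtype=float) ** 2, axis=1)
    return np.sqrt(rv)

# two toy "days" with two intraday returns each;
# second day: sqrt(0.03**2 + 0.04**2) = 0.05
print(realized_volatility([[0.01, -0.02], [0.03, 0.04]]))
```

Summing squared five-minute returns over a trading day gives the realized variance; taking the square root puts the proxy on the standard-deviation scale used by the loss functions.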
Overall, the empirical results show that MRS-GARCH models significantly outperform standard GARCH
models in forecasting volatility at the shorter horizons according to a broad set of statistical loss functions.
This conclusion rests on testing whether the differences in performance are statistically significant: we apply
pairwise tests of equal predictive ability of the Diebold-Mariano type, the more general Reality Check for
superior predictive ability of White (2000), and the test for Superior Predictive Ability of Hansen (2001).
According to these tests, the MRS-GARCH-N model outperforms all the competing models.
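A Diebold-Mariano-type test of equal predictive ability can be sketched as follows (an illustrative implementation under standard assumptions — a rectangular HAC variance with h−1 autocovariances and an asymptotic normal null — not necessarily the exact variant used in the paper):

```python
import math
import numpy as np

def diebold_mariano(loss_a, loss_b, h=1):
    """Diebold-Mariano test of equal predictive accuracy.

    loss_a, loss_b : per-period forecast losses of the two models
    h              : forecast horizon, used as the HAC truncation lag
    Returns (statistic, two-sided p-value); the statistic is asymptotically
    N(0,1) under the null, and a positive value means `loss_a` is larger
    on average (model `a` forecasts worse under this loss).
    """
    d = np.asarray(loss_a, dtype=float) - np.asarray(loss_b, dtype=float)
    T = d.size
    dbar = d.mean()
    dc = d - dbar
    lrv = np.dot(dc, dc) / T                      # gamma_0
    for k in range(1, h):                         # add 2*gamma_k for k < h
        lrv += 2.0 * np.dot(dc[k:], dc[:-k]) / T
    stat = dbar / math.sqrt(lrv / T)
    pval = math.erfc(abs(stat) / math.sqrt(2.0))  # two-sided normal p-value
    return stat, pval
```

Feeding the per-period losses of a candidate model and of the benchmark into this function reproduces the kind of pairwise comparison reported in Tables 7 and 8.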
Since volatility forecasts are mainly used as an input for subsequent VaR estimation, we also evaluate
the competing models out-of-sample according to a VaR-based loss function. A few models provide
reasonable and accurate VaR estimates at the 1-day horizon, with a coverage rate close to the nominal level,
but no model is uniformly most accurate according to the risk-management loss functions. This result
is not new: Brooks and Persand (2003) also find no clear answer for most of the series they examine.
Our results likewise confirm Dacco and Satchell's (1999) argument that the choice of the correct loss
function is fundamental for assessing the accuracy of volatility forecasts from non-linear models.
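The coverage-rate check mentioned above is commonly formalized with Kupiec's proportion-of-failures likelihood-ratio test (the LR_PF statistic reported in Table 11); a minimal sketch, assuming the asymptotic chi-square(1) null with 5% critical value 3.84:

```python
import math

def kupiec_lr_pf(violations, T, alpha=0.05):
    """Kupiec's proportion-of-failures LR test of unconditional VaR coverage.

    violations : number of days the realized loss exceeded the VaR forecast
    T          : number of out-of-sample days
    alpha      : VaR tail level (5% for a 95% VaR)
    Under correct unconditional coverage the statistic is asymptotically
    chi-square(1), so values above 3.84 reject at the 5% level.
    """
    x, p = violations, alpha
    if x == 0:                      # guard the degenerate boundary cases
        return -2.0 * T * math.log(1.0 - p)
    if x == T:
        return -2.0 * T * math.log(p)
    phat = x / T
    ll_null = x * math.log(p) + (T - x) * math.log(1.0 - p)
    ll_alt = x * math.log(phat) + (T - x) * math.log(1.0 - phat)
    return -2.0 * (ll_null - ll_alt)
```

With 50 violations in 1000 days at the 5% level the empirical and nominal coverage coincide and the statistic is zero, while a clearly excessive violation count rejects correct coverage.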
References
Andersen, T. G., and T. Bollerslev (1998) Answering the Critics: Yes, ARCH Models Do Provide Good
Volatility Forecasts. International Economic Review 39(4), 885–905
Bollerslev, T. (1986) Generalized Autoregressive Conditional Heteroskedasticity. Journal of Econometrics
31, 307–327
Bollerslev, T., and E. Ghysels (1996) Periodic Autoregressive Conditional Heteroskedasticity. Journal of
Business and Economic Statistics 14, 139–157
Bollerslev, T., R. F. Engle, and D. Nelson (1994) ARCH Models. In Handbook of Econometrics, Vol. IV,
ed. R. F. Engle and D. L. McFadden (Amsterdam: North-Holland) pp. 2959–3038
Brooks, C., and G. Persand (2003) Volatility Forecasting for Risk Management. Journal of Forecasting
22, 1–22
Cai, J. (1994) A Markov Model of Unconditional Variance in ARCH. Journal of Business and Economic
Statistics 12, 309–316
Dacco, R., and S. Satchell (1999) Why Do Regime-Switching Models Forecast so Badly? Journal of
Forecasting 18, 1–16
Diebold, F. X., and R. S. Mariano (1995) Comparing Predictive Accuracy. Journal of Business and
Economic Statistics 13(3), 253–263
Dueker, M. J. (1997) Markov Switching in GARCH Processes and Mean-Reverting Stock Market
Volatility. Journal of Business and Economic Statistics 15(1), 26–34
Engle, R. F. (1982) Autoregressive Conditional Heteroscedasticity with Estimates of U.K. Inflation.
Econometrica 50, 987–1008
Engle, R. F., C. H. Hong, A. Kane, and J. Noh (1993) Arbitrage Valuation of Variance Forecasts with
Simulated Options. In Advances in Futures and Options Research, ed. D. M. Chance and R. R. Trippi
(Greenwich: JAI Press)
Franses, P. H., and R. Van Dijk (1996) Forecasting Stock Market Volatility Using (Non-Linear) GARCH
Models. Journal of Forecasting 15(3), 229–235
French, K. R., G. W. Schwert, and R. F. Stambaugh (1987) Expected Stock Returns and Volatility. Journal
of Financial Economics 19, 3–30
Glosten, L. R., R. Jagannathan, and D. Runkle (1993) Relationship Between the Expected Value and the
Volatility of the Nominal Excess Return on Stocks. Journal of Finance 48, 1779–1801
Gray, S. (1996) Modeling the Conditional Distribution of Interest Rates as a Regime-Switching Process.
Journal of Financial Economics 42, 27–62
Hamilton, J. D. (1988) Rational-Expectations Econometric Analysis of Changes in Regime. Journal of
Economic Dynamics and Control 12, 385–423
(1989) A New Approach to the Economic Analysis of Nonstationary Time Series and the Business
Cycle. Econometrica 57, 357–384
(1990) Analysis of Time Series Subject to Changes in Regime. Journal of Econometrics 45, 39–70
(1994) Time Series Analysis (Princeton: Princeton University Press)
Hamilton, J. D., and R. Susmel (1994) Autoregressive Conditional Heteroskedasticity and Changes in
Regime. Journal of Econometrics 64, 307–333
Hansen, B. E. (1992) The Likelihood Ratio Test under Nonstandard Conditions: Testing the Markov
Switching Model of GNP. Journal of Applied Econometrics 7, S61–S82
(1994) Autoregressive Conditional Density Estimation. International Economic Review 35(3), 705–730
(1996) Erratum: The Likelihood Ratio Test under Nonstandard Conditions: Testing the Markov
Switching Model of GNP. Journal of Applied Econometrics 11, 195–198
Hansen, P. R. (2001) An Unbiased and Powerful Test of Superior Predictive Ability. Mimeo, Brown
University
Hansen, P. R., and A. Lunde (2001) A Forecast Comparison of Volatility Models: Does Anything Beat a
GARCH(1,1)? Mimeo, Brown University
Harvey, D., S. Leybourne, and P. Newbold (1997) Testing the Equality of Prediction Mean Squared Errors.
International Journal of Forecasting 13, 281–291
Hol, E., and S. J. Koopman (2005) Forecasting Daily Variability of the S&P 100 Stock Index Using
Historical, Realised and Implied Volatility Measurements. Journal of Empirical Finance
Kim, C. J. (1994) Dynamic Linear Models with Markov-Switching. Journal of Econometrics 60, 1–22
Klaassen, F. (2002) Improving GARCH Volatility Forecasts. Empirical Economics 27, 363–394
Lamoureux, C., and W. Lastrapes (1990) Persistence in Variance, Structural Change, and the GARCH
Model. Journal of Business and Economic Statistics 8, 225–234
Lin, G. (1998) Nesting Regime-Switching GARCH Models and Stock Market Volatility, Returns and the
Business Cycle. PhD dissertation, University of California, San Diego
Lopez, J. A. (2001) Evaluating the Predictive Accuracy of Volatility Models. Journal of Forecasting
20(1), 87–109
Nelson, D. B. (1991) Conditional Heteroskedasticity in Asset Returns: A New Approach. Econometrica
59, 347–370
Pesaran, M. H., and A. Timmermann (1992) A Simple Nonparametric Test of Predictive Performance.
Journal of Business and Economic Statistics 10(4), 461–465
Politis, D. N., and J. P. Romano (1994) The Stationary Bootstrap. Journal of the American Statistical
Association 89, 1303–1313
Sarma, M., S. Thomas, and A. Shah (2003) Selection of Value-at-Risk Models. Journal of Forecasting
22(4), 337–358
West, K. D. (1996) Asymptotic Inference About Predictive Ability. Econometrica 64, 1067–1084
West, K. D., H. J. Edison, and D. Cho (1993) A Utility-Based Comparison of Some Models of Exchange
Rate Volatility. Journal of International Economics 35, 23–45
White, H. (2000) A Reality Check for Data Snooping. Econometrica 68(5), 1097–1126
Table 1: Descriptive Statistics of r_t

                 Standard                                            Normality
        Mean    Deviation     Min       Max     Skewness  Kurtosis     Test     LM(12)   Q^2(12)
       0.0359    1.0887    -7.6445    5.6901    -0.1972    7.3103    3214.44    404.99   863.69

Note: The sample period is January 1, 1988 through October 15, 2003. The Normality Test is the Jarque-Bera test,
which has a chi-square distribution with 2 degrees of freedom under the null hypothesis of normally distributed
errors. The 5% critical value is, therefore, 5.99. The LM(12) statistic is the ARCH LM test up to the twelfth lag;
under the null hypothesis of no ARCH effects it has a chi-square(q) distribution, where q is the number of lags.
The Q^2(12) statistic is the Ljung-Box test on the squared residuals of the conditional mean regression up to the
twelfth order. Under the null hypothesis of no serial correlation, the test is also distributed as a chi-square(q),
where q is the number of lags. Thus, for both tests the 5% critical value is 21.03. At a confidence level of 5%
both skewness and kurtosis are significant, since the standard errors under the null of normality are
sqrt(6/T) = 0.038 and sqrt(24/T) = 0.076, respectively.
Table 2: Maximum Likelihood Estimates of standard GARCH models with different conditional distributions.

           GARCH-N    GARCH-t    GARCH-GED  EGARCH-N   EGARCH-t   EGARCH-GED GJR-N      GJR-t      GJR-GED
Mean       0.0562     0.0610     0.0441     0.0362     0.0453     0.0305     0.0382     0.0500     0.0340
           (0.0140)   (0.0130)   (0.0120)   (0.0140)   (0.0130)   (0.0120)   (0.0150)   (0.0130)   (0.0120)
alpha_0    0.0220     0.0182     0.0187     -0.0747    -0.0775    -0.0745    0.0285     0.0209     0.0230
           (0.0020)   (0.0030)   (0.0030)   (0.0050)   (0.0100)   (0.0100)   (0.0020)   (0.0030)   (0.0040)
alpha_1    0.0752     0.0751     0.0746     0.0975     0.1021     0.0985     0.1293     0.1289     0.1300
           (0.0050)   (0.0100)   (0.0100)   (0.0070)   (0.0140)   (0.0140)   (0.0090)   (0.0150)   (0.0150)
beta_1     0.9017     0.9049     0.9046     0.9855     0.9900     0.9889     0.8977     0.9022     0.9011
           (0.0060)   (0.0090)   (0.0100)   (0.0020)   (0.0100)   (0.0030)   (0.0070)   (0.0100)   (0.0110)
Asymmetry  -          -          -          -0.0598    -0.0635    -0.0614    0.0141     0.0203     0.0186
                                            (0.0050)   (0.0030)   (0.0100)   (0.0070)   (0.0110)   (0.0120)
nu         -          5.4416     1.2047     -          5.4469     1.2162     -          5.7332     1.2259
                      (0.4640)   (0.0290)              (0.4680)   (0.0280)              (0.5040)   (0.0290)
Log(L)     -4816.3791 -4671.9133 -4668.2808 -4777.9987 -4630.1921 -4633.4375 -4780.2798 -4649.2711 -4646.1104

Note: Each GARCH model has been estimated with a Normal (N), a Student's t and a GED distribution. The
in-sample data consist of S&P100 returns from 1/1/1988 to 9/28/2001. Asymptotic standard errors are in
parentheses.
Table 3: Maximum Likelihood Estimates of MRS-GARCH models with different conditional distributions.

             MRS-GARCH-N  MRS-GARCH-t2  MRS-GARCH-t  MRS-GARCH-GED
Mean(1)      0.0592       0.0571        0.0487       0.0715
             (0.0140)     (0.0140)      (0.0130)     (0.0210)
Mean(2)      -1.6623      0.0558        0.0792       0.0252
             (0.2090)     (0.0310)      (0.0290)     (0.0120)
alpha_0(1)   0.0062       0.0026        0.0359       0.0923
             (0.0040)     (0.0010)      (0.0090)     (0.0180)
alpha_0(2)   0.5961       0.1132        0.1130       0.0341
             (0.1420)     (0.0310)      (0.0210)     (0.0100)
alpha_1(1)   0.0230       0.0137        0.0413       0.0560
             (0.0070)     (0.0050)      (0.0120)     (0.0140)
alpha_1(2)   0.0225       0.0754        0.0565       0.0388
             (0.1170)     (0.0190)      (0.0160)     (0.0150)
beta_1(1)    0.9093       0.9805        0.8473       0.8952
             (0.0090)     (0.0060)      (0.0330)     (0.0200)
beta_1(2)    0.9633       0.8569        0.8924       0.8533
             (0.1810)     (0.0290)      (0.0210)     (0.0370)
p            0.9811       0.9987        0.9998       0.9999
             (0.0040)     (0.0010)      (0.0001)     (0.0001)
q            0.1533       0.9988        0.9999       0.9998
             (0.0850)     (0.0010)      (0.0001)     (0.0002)
nu(1)        -            4.7318        5.3826       1.2212
                          (0.5370)      (0.4440)     (0.0350)
nu(2)        -            7.1652        -            -
                          (1.3270)
Log(L)       -4698.6929   -4632.0178    -4631.4338   -4630.9573
N. of Par.   10           12            11           11
pi_1         0.02         0.52          0.61         0.33
pi_2         0.98         0.48          0.39         0.67
d_1          53.01        765.70        4901.96      8771.93
d_2          1.18         842.46        7518.80      4291.85

Note: Each MRS-GARCH model has been estimated with different conditional distributions (see
Section 3). The in-sample data consist of S&P100 returns from 1/1/1988 to 10/28/2001. The
superscripts indicate the regime; pi_j is the unconditional probability of being in regime j, while d_j
is the half-life or expected duration of the j-th state. Asymptotic standard errors are in parentheses.
Table 4: In-sample goodness-of-fit statistics.

Model          N. of Par.  Pers.  AIC (Rank)  BIC (Rank)  Log(L) (Rank)   MSE1 (Rank)  MSE2 (Rank)   QLIKE (Rank)  R2LOG (Rank)  MAD2 (Rank)  MAD1 (Rank)  HMSE (Rank)
GARCH-N        4           0.977  2.689 (13)  2.696 (13)  -4816.379 (13)  0.599 (11)   9.291 (10)    0.848 (8)     8.635 (11)    1.227 (11)   0.587 (10)   6.382 (3)
GARCH-t        5           0.98   2.609 (7)   2.618 (6)   -4671.913 (7)   0.612 (13)   10.205 (12)   0.849 (9)     8.569 (9)     1.245 (13)   0.586 (9)    6.775 (6)
GARCH-GED      5           0.979  2.607 (5)   2.616 (5)   -4668.281 (6)   0.608 (12)   9.939 (11)    0.85 (10)     8.568 (8)     1.239 (12)   0.585 (8)    6.77 (5)
EGARCH-N       5           0.986  2.668 (11)  2.677 (11)  -4777.999 (11)  0.525 (2)    7.389 (3)     0.828 (2)     8.455 (3)     1.117 (2)    0.565 (2)    7.059 (8)
EGARCH-t       6           0.894  2.652 (10)  2.663 (10)  -4748.077 (10)  0.557 (4)    7.597 (6)     0.904 (13)    9.164 (13)    1.15 (4)     0.598 (13)   5.502 (1)
EGARCH-GED     6           0.989  2.588 (1)   2.599 (1)   -4633.437 (2)   0.53 (3)     7.386 (2)     0.829 (3)     8.432 (2)     1.125 (3)    0.566 (3)    7.365 (10)
GJR-N          5           0.969  2.67 (12)   2.678 (12)  -4780.28 (12)   0.57 (5)     8.22 (7)      0.829 (4)     8.561 (7)     1.192 (5)    0.578 (4)    6.439 (4)
GJR-t          6           0.977  2.597 (4)   2.607 (3)   -4649.271 (4)   0.593 (10)   9.279 (9)     0.83 (6)      8.493 (4)     1.225 (10)   0.58 (7)     7.157 (9)
GJR-GED        6           0.975  2.595 (3)   2.606 (2)   -4646.11 (3)    0.586 (9)    9.002 (8)     0.83 (5)      8.498 (5)     1.215 (9)    0.579 (5)    7.035 (7)
MRS-GARCH-N    10          0.986  2.627 (9)   2.644 (9)   -4698.693 (9)   0.524 (1)    7.33 (1)      0.846 (7)     8.312 (1)     1.113 (1)    0.559 (1)    8.83 (12)
MRS-GARCH-t2   12          0.994  2.591 (2)   2.612 (4)   -4632.018 (1)   0.579 (6)    10.697 (13)   0.821 (1)     8.554 (6)     1.204 (6)    0.58 (6)     5.782 (2)
MRS-GARCH-t    11          0.951  2.608 (6)   2.627 (7)   -4663.811 (5)   0.583 (8)    7.52 (5)      0.861 (11)    8.691 (12)    1.209 (7)    0.596 (12)   7.988 (11)
MRS-GARCH-GED  11          0.949  2.613 (8)   2.632 (8)   -4673.056 (8)   0.582 (7)    7.498 (4)     0.867 (12)    8.576 (10)    1.211 (8)    0.593 (11)   9.559 (13)

Note: Pers. is the persistence of shocks to volatility (for MRS-GARCH only the highest persistence is reported).
AIC is the Akaike information criterion, calculated as -2 log(L)/T + 2k/T, where k is the number of parameters
and T the number of observations. BIC is the Schwarz criterion, calculated as -2 log(L)/T + (k/T) log(T).
MSE1, MSE2, QLIKE, R2LOG, MAD1, MAD2, and HMSE are the statistical loss functions introduced in Section 5.
Table 5: Out-of-sample evaluation of one- and five-step-ahead volatility forecasts.

1-step-ahead volatility forecasts
Model          MSE1 (Rank)  MSE2 (Rank)   QLIKE (Rank)  R2LOG (Rank)  MAD2 (Rank)  MAD1 (Rank)  HMSE (Rank)   SR    DA
GARCH-N        0.1167 (6)   1.0154 (6)    1.1552 (7)    0.3342 (5)    0.2697 (5)   0.6846 (4)   0.2335 (9)    0.79  11.8365**
GARCH-t        0.1177 (7)   1.0576 (10)   1.1532 (5)    0.3291 (4)    0.2679 (4)   0.6859 (5)   0.2311 (8)    0.80  12.2942**
GARCH-GED      0.1165 (5)   1.0384 (9)    1.1534 (6)    0.3281 (3)    0.2671 (3)   0.6816 (3)   0.2353 (10)   0.80  12.0885**
EGARCH-N       0.1288 (8)   0.8859 (3)    1.1772 (8)    0.4183 (9)    0.3106 (9)   0.7642 (9)   0.1946 (5)    0.81  13.4606**
EGARCH-t       0.1387 (9)   1.0288 (8)    1.2137 (11)   0.411 (8)     0.3003 (8)   0.7242 (8)   0.4977 (13)   0.67  5.0149**
EGARCH-GED     0.1608 (11)  1.2491 (11)   1.1977 (9)    0.4769 (10)   0.3478 (11)  0.8903 (11)  0.2137 (6)    0.81  13.3399**
GJR-N          0.102 (2)    0.767 (2)     1.1442 (2)    0.3226 (2)    0.2616 (2)   0.6495 (2)   0.1667 (2)    0.83  13.8278**
GJR-t          0.1157 (4)   0.9453 (5)    1.1513 (4)    0.3439 (7)    0.2775 (7)   0.7071 (7)   0.1715 (4)    0.83  13.9998**
GJR-GED        0.111 (3)    0.8924 (4)    1.1482 (3)    0.335 (6)     0.2716 (6)   0.6879 (6)   0.169 (3)     0.83  13.8821**
MRS-GARCH-N    0.0686 (1)   0.4396 (1)    1.1192 (1)    0.2475 (1)    0.2111 (1)   0.4923 (1)   0.1544 (1)    0.79  12.4806**
MRS-GARCH-t2   0.1499 (10)  1.0193 (7)    1.2082 (10)   0.5074 (11)   0.3384 (10)  0.8161 (10)  0.2294 (7)    0.80  12.2593**
MRS-GARCH-t    0.2261 (13)  1.5097 (13)   1.2749 (13)   0.7129 (13)   0.4266 (13)  1.0627 (13)  0.2832 (12)   0.81  12.6904**
MRS-GARCH-GED  0.1857 (12)  1.2767 (12)   1.2385 (12)   0.599 (12)    0.3832 (12)  0.9445 (12)  0.2537 (11)   0.81  12.6904**

5-step-ahead volatility forecasts
Model          MSE1 (Rank)  MSE2 (Rank)   QLIKE (Rank)  R2LOG (Rank)  MAD2 (Rank)  MAD1 (Rank)  HMSE (Rank)   SR    DA
GARCH-N        0.476 (7)    21.4731 (9)   2.764 (9)     0.2579 (7)    0.536 (7)    3.0732 (7)   0.1619 (9)    0.78  12.0842**
GARCH-t        0.4813 (9)   22.6374 (11)  2.7619 (6)    0.2527 (6)    0.5314 (6)   3.0803 (8)   0.1587 (7)    0.79  12.4675**
GARCH-GED      0.4749 (6)   22.0788 (10)  2.762 (7)     0.2518 (5)    0.5301 (5)   3.0605 (5)   0.1609 (8)    0.79  12.2714**
EGARCH-N       0.3072 (2)   10.0325 (2)   2.7427 (2)    0.2115 (2)    0.4546 (2)   2.4296 (2)   0.1149 (1)    0.83  14.7704**
EGARCH-t       0.5811 (10)  20.898 (6)    2.847 (11)    0.3235 (10)   0.5775 (10)  3.071 (6)    0.677 (13)    0.62  3.2527**
EGARCH-GED     0.4007 (3)   14.8289 (3)   2.7568 (4)    0.2489 (4)    0.5257 (4)   2.9322 (4)   0.1305 (4)    0.83  14.6525**
GJR-N          0.4096 (4)   16.3435 (4)   2.7558 (3)    0.2467 (3)    0.5127 (3)   2.8795 (3)   0.1298 (3)    0.82  13.8586**
GJR-t          0.4778 (8)   20.9307 (7)   2.7628 (8)    0.2665 (9)    0.5475 (9)   3.1745 (10)  0.1359 (6)    0.83  14.3333**
GJR-GED        0.4573 (5)   19.6661 (5)   2.7603 (5)    0.2596 (8)    0.5366 (8)   3.0878 (9)   0.1336 (5)    0.82  14.1382**
MRS-GARCH-N    0.2343 (1)   8.068 (1)     2.7274 (1)    0.1564 (1)    0.3769 (1)   1.9757 (1)   0.1277 (2)    0.80  12.8808**
MRS-GARCH-t2   0.6522 (11)  21.4315 (8)   2.8255 (10)   0.4372 (11)   0.7051 (11)  3.7787 (11)  0.2102 (10)   0.78  11.9647**
MRS-GARCH-t    0.9644 (13)  31.3951 (13)  2.8803 (13)   0.6011 (13)   0.8826 (13)  4.8889 (13)  0.257 (12)    0.79  12.2077**
MRS-GARCH-GED  0.8133 (12)  28.8327 (12)  2.8484 (12)   0.504 (12)    0.7989 (12)  4.4342 (12)  0.2289 (11)   0.79  12.5770**

Note: Out-of-sample evaluation of one- and five-step-ahead volatility forecasts; ranks are in parentheses.
The volatility proxy is given by the realized volatility at the 1-minute frequency.
Table 6: Out-of-sample evaluation of ten- and twenty-two-step-ahead volatility forecasts.

10-step-ahead volatility forecasts
Model          MSE1 (Rank)  MSE2 (Rank)    QLIKE (Rank)  R2LOG (Rank)  MAD2 (Rank)  MAD1 (Rank)   HMSE (Rank)   SR    DA
GARCH-N        0.8886 (8)   77.5016 (8)    3.4565 (9)    0.2375 (9)    0.7541 (10)  6.1026 (9)    0.1551 (8)    0.78  11.7964**
GARCH-t        0.9014 (9)   82.4203 (11)   3.4543 (7)    0.2324 (7)    0.7496 (9)   6.1428 (10)   0.1516 (6)    0.79  12.2714**
GARCH-GED      0.8878 (7)   80.0378 (10)   3.4544 (8)    0.2315 (6)    0.7468 (8)   6.0888 (8)    0.1538 (7)    0.78  11.9817**
EGARCH-N       0.3281 (1)   22.3524 (1)    3.4052 (1)    0.1107 (1)    0.4416 (1)   3.2895 (1)    0.0815 (1)    0.83  14.4145**
EGARCH-t       1.0611 (10)  75.3449 (7)    3.5441 (12)   0.2844 (10)   0.7139 (5)   5.3417 (4)    0.7788 (13)   0.65  4.5233**
EGARCH-GED     0.4004 (2)   28.8078 (2)    3.4112 (2)    0.1278 (2)    0.4968 (3)   3.8117 (3)    0.085 (2)     0.83  14.5667**
GJR-N          0.725 (4)    56.5389 (4)    3.4446 (4)    0.2172 (4)    0.6875 (4)   5.4102 (5)    0.1204 (3)    0.80  13.1769**
GJR-t          0.8619 (6)   74.6869 (6)    3.4515 (6)    0.2369 (8)    0.7383 (7)   6.0228 (7)    0.126 (5)     0.81  13.4579**
GJR-GED        0.8249 (5)   69.9049 (5)    3.4496 (5)    0.2314 (5)    0.7244 (6)   5.8569 (6)    0.1243 (4)    0.81  13.3637**
MRS-GARCH-N    0.4448 (3)   32.4315 (3)    3.4233 (3)    0.1307 (3)    0.4938 (2)   3.6847 (2)    0.1594 (9)    0.78  12.1315**
MRS-GARCH-t2   1.2661 (11)  77.5618 (9)    3.5231 (10)   0.4286 (11)   1.0062 (11)  7.5573 (11)   0.2115 (10)   0.77  11.4764**
MRS-GARCH-t    1.7847 (13)  110.6103 (12)  3.5672 (13)   0.5611 (13)   1.2096 (13)  9.3906 (13)   0.2476 (12)   0.77  11.3732**
MRS-GARCH-GED  1.5652 (12)  111.2944 (13)  3.5396 (11)   0.4765 (12)   1.1242 (12)  8.8471 (12)   0.2241 (11)   0.79  12.1775**

22-step-ahead volatility forecasts
Model          MSE1 (Rank)  MSE2 (Rank)    QLIKE (Rank)  R2LOG (Rank)  MAD2 (Rank)  MAD1 (Rank)   HMSE (Rank)   SR    DA
GARCH-N        1.8496 (7)   331.8337 (7)   4.2502 (8)    0.2219 (9)    1.1106 (10)  13.1113 (9)   0.1754 (6)    0.75  10.4164**
GARCH-t        1.8827 (9)   355.8608 (11)  4.2479 (6)    0.2168 (8)    1.1041 (9)   13.2158 (10)  0.1707 (4)    0.76  10.7708**
GARCH-GED      1.8536 (8)   344.544 (8)    4.2482 (7)    0.216 (7)     1.0996 (8)   13.0925 (8)   0.1748 (5)    0.76  10.7708**
EGARCH-N       1.0426 (2)   168.3615 (2)   4.2319 (3)    0.1302 (2)    0.7016 (2)   7.6866 (2)    0.2568 (11)   0.83  14.8844**
EGARCH-t       2.3655 (10)  345.6922 (9)   4.3616 (13)   0.3002 (10)   1.0587 (7)   11.4232 (5)   0.8939 (13)   0.63  6.6727**
EGARCH-GED     0.9031 (1)   146.9349 (1)   4.2214 (1)    0.1156 (1)    0.652 (1)    7.1307 (1)    0.2148 (7)    0.83  14.8618**
GJR-N          1.3614 (3)   216.5264 (3)   4.2316 (2)    0.1876 (3)    0.9601 (4)   10.9613 (4)   0.1246 (1)    0.80  13.0751**
GJR-t          1.6322 (6)   292.4681 (6)   4.2373 (5)    0.2054 (6)    1.0292 (6)   12.1805 (7)   0.126 (2)     0.81  13.5560**
GJR-GED        1.5729 (5)   273.8635 (5)   4.2365 (4)    0.2026 (5)    1.0176 (5)   11.9323 (6)   0.1261 (3)    0.81  13.4643**
MRS-GARCH-N    1.5118 (4)   232.1209 (4)   4.273 (9)     0.1884 (4)    0.8551 (3)   9.3355 (3)    0.433 (12)    0.77  11.7853**
MRS-GARCH-t2   2.7924 (11)  348.3369 (10)  4.3218 (10)   0.4306 (11)   1.5221 (11)  16.7397 (11)  0.2269 (9)    0.77  10.9851**
MRS-GARCH-t    3.5822 (13)  453.4756 (12)  4.3526 (12)   0.5254 (13)   1.7272 (13)  19.439 (13)   0.2484 (10)   0.74  9.9734**
MRS-GARCH-GED  3.2914 (12)  518.3628 (13)  4.3254 (11)   0.4433 (12)   1.6263 (12)  18.937 (12)   0.2235 (8)    0.77  11.1647**

Note: Out-of-sample evaluation of ten- and twenty-two-step-ahead volatility forecasts; ranks are in parentheses.
The volatility proxy is given by the realized volatility at the 1-minute frequency.
Table 7: Diebold-Mariano Test. (Benchmark: MRS-GARCH-N, 1-step-ahead)
Model MSE1 MSE2 QLIKE R2LOG MAD2 MAD1 HMSE
GARCH-N -3.12** -2.48* -3.81** -3.49** -3.18** -3.61** -1.94
p-values 0.00 0.01 0.00 0.00 0.00 0.00 0.05
GARCH-t -2.95** -2.41* -3.59** -3.23** -3.01** -3.39** -1.88
p-values 0.00 0.02 0.00 0.00 0.00 0.00 0.06
GARCH-GED -2.96** -2.42* -3.60** -3.23** -3.02** -3.40** -1.87
p-values 0.00 0.02 0.00 0.00 0.00 0.00 0.06
EGARCH-N -5.80** -3.78** -6.54** -6.83** -5.78** -7.08** -3.10**
p-values 0.00 0.00 0.00 0.00 0.00 0.00 0.00
EGARCH-t -3.76** -3.14** -4.00** -4.24** -4.19** -4.45** -2.90**
p-values 0.00 0.00 0.00 0.00 0.00 0.00 0.00
EGARCH-GED -6.16** -3.99** -7.71** -7.92** -6.18** -7.83** -4.23**
p-values 0.00 0.00 0.00 0.00 0.00 0.00 0.00
GJR-N -3.79** -2.63** -4.30** -4.90** -3.87** -4.53** -1.18
p-values 0.00 0.01 0.00 0.00 0.00 0.00 0.24
GJR-t -4.04** -2.80** -4.83** -5.37** -4.13** -4.92** -1.56
p-values 0.00 0.01 0.00 0.00 0.00 0.00 0.12
GJR-GED -3.92** -2.74** -4.57** -5.12** -4.01** -4.74** -1.36
p-values 0.00 0.01 0.00 0.00 0.00 0.00 0.17
MRS-GARCH-t2 -7.44** -3.92** -8.23** -8.18** -8.10** -9.81** -5.76**
p-values 0.00 0.00 0.00 0.00 0.00 0.00 0.00
MRS-GARCH-t -10.16** -6.20** -9.38** -9.20** -10.47** -11.29** -7.10**
p-values 0.00 0.00 0.00 0.00 0.00 0.00 0.00
MRS-GARCH-GED -8.64** -4.68** -9.22** -9.09** -8.85** -10.77** -6.53**
p-values 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Note: * and ** denote DM statistics for which one can reject the null hypothesis of equal predictive
accuracy at the 5% and 1% level, respectively. † and †† denote DM statistics for which one can reject the null
at the 5% and 1% level, respectively, but with a positive sign, indicating that the benchmark implies a
bigger loss.
Table 8: Diebold-Mariano Test. (Benchmark: EGARCH-N, 10-step-ahead)
Model MSE1 MSE2 QLIKE R2LOG MAD2 MAD1 HMSE
GARCH-N -4.47** -3.00** -7.35** -6.69** -4.96** -6.51** -8.05**
p-values 0.00 0.00 0.00 0.00 0.00 0.00 0.00
GARCH-t -4.08** -2.83** -6.95** -6.30** -4.61** -6.07** -7.64**
p-values 0.00 0.00 0.00 0.00 0.00 0.00 0.00
GARCH-GED -4.18** -2.87** -7.17** -6.45** -4.69** -6.20** -7.82**
p-values 0.00 0.00 0.00 0.00 0.00 0.00 0.00
EGARCH-t -2.83** -2.56* -2.95** -3.05** -3.27** -3.47** -2.59**
p-values 0.00 0.01 0.00 0.00 0.00 0.00 0.01
EGARCH-GED -2.74** -2.09* -2.98** -3.83** -2.99** -3.32** -0.82
p-values 0.01 0.04 0.00 0.00 0.00 0.00 0.41
GJR-N -4.78** -2.99** -5.97** -6.35** -5.03** -6.46** -3.50**
p-values 0.00 0.00 0.00 0.00 0.00 0.00 0.00
GJR-t -4.55** -2.91** -6.03** -6.49** -4.80** -6.16** -3.45**
p-values 0.00 0.00 0.00 0.00 0.00 0.00 0.00
GJR-GED -4.58** -2.91** -6.01** -6.46** -4.82** -6.22** -3.44**
p-values 0.00 0.00 0.00 0.00 0.00 0.00 0.00
MRS-GARCH-N -1.63 -1.59 -1.95 -1.20 -1.17 -1.28 -2.61**
p-values 0.10 0.11 0.05 0.23 0.24 0.20 0.01
MRS-GARCH-t2 -9.78** -6.68** -8.74** -8.29** -11.32** -11.29** -8.83**
p-values 0.00 0.00 0.00 0.00 0.00 0.00 0.00
MRS-GARCH-t -9.95** -8.38** -8.91** -8.42** -11.87** -11.58** -8.88**
p-values 0.00 0.00 0.00 0.00 0.00 0.00 0.00
MRS-GARCH-GED -8.60** -4.72** -9.69** -9.27** -9.55** -11.83** -8.88**
p-values 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Note: * and ** denote DM statistics for which one can reject the null hypothesis of equal predictive
accuracy at the 5% and 1% level, respectively. † and †† denote DM statistics for which one can reject the null
at the 5% and 1% level, respectively, but with a positive sign, indicating that the benchmark implies a
bigger loss.
Table 9: Reality Check and Superior Predictive Ability Tests (all models, one-day horizon).

                                          Loss Functions
Benchmark       Test    MSE1   MSE2   QLIKE  R2LOG  MAD1   MAD2   HMSE
GARCH-N         SPA_l   0      0.003  0      0      0      0      0
                SPA_c   0      0.003  0      0      0      0.001  0
                RC      0.001  0.003  0.021  0.004  0      0.001  0.008
GARCH-t         SPA_l   0      0.002  0.001  0      0      0      0
                SPA_c   0      0.002  0.001  0      0      0      0
                RC      0.001  0.002  0.024  0.01   0      0      0.006
GARCH-GED       SPA_l   0      0.004  0      0      0      0      0
                SPA_c   0      0.004  0.001  0      0      0      0
                RC      0.001  0.004  0.022  0.013  0      0      0.007
EGARCH-N        SPA_l   0      0      0      0      0      0      0.036
                SPA_c   0      0      0      0      0      0      0.051
                RC      0      0      0      0      0      0      0.572
EGARCH-t        SPA_l   0      0.003  0      0      0      0      0
                SPA_c   0      0.004  0      0      0      0      0
                RC      0      0.004  0      0      0      0      0
EGARCH-GED      SPA_l   0      0      0      0      0      0      0.502
                SPA_c   0      0      0      0      0      0      0.725
                RC      0      0      0      0      0      0      0.939
GJR-N           SPA_l   0      0      0      0      0      0      0
                SPA_c   0      0.004  0.001  0      0      0      0
                RC      0.006  0.018  0.071  0.016  0      0.001  0.131
GJR-t           SPA_l   0      0.001  0      0      0      0      0
                SPA_c   0      0.001  0      0      0      0      0
                RC      0.002  0.002  0.03   0.002  0      0      0.174
GJR-GED         SPA_l   0      0      0      0      0      0      0
                SPA_c   0      0      0      0      0      0      0
                RC      0.001  0.003  0.044  0.003  0      0      0.148
MRS-GARCH-N     SPA_l   0.514  0.524  0.531  0.627  0.578  0.562  0
                SPA_c   1      1      1      1      1      1      0
                RC      1      1      1      1      1      1      0
MRS-GARCH-t2    SPA_l   0      0.001  0      0      0      0      0
                SPA_c   0      0.001  0      0      0      0      0
                RC      0      0.001  0      0      0      0      0.309
MRS-GARCH-t     SPA_l   0      0      0      0      0      0      0.492
                SPA_c   0      0      0      0      0      0      0.781
                RC      0      0      0      0      0      0      0.953
MRS-GARCH-GED   SPA_l   0      0      0      0      0      0      0.165
                SPA_c   0      0      0      0      0      0      0.302
                RC      0      0      0      0      0      0      0.837

Note: This table presents the p-values of White's (2000) Reality Check test (RC), and the p-values of the
Consistent (SPA_c) and Lower bound (SPA_l) versions of Hansen's (2001) SPA test of one-step-ahead forecasts.
Each model in the row is the benchmark versus all the other competitors. The null hypothesis is that none of the
models is better than the benchmark. The number of bootstrap replications used to calculate the p-values is 3000
and the block length parameter is 0.33.
Table 10: Reality Check and Superior Predictive Ability Tests (MRS-GARCH models only, two-week horizon).

                         Loss Functions
Benchmark       Test   MSE1   MSE2   QLIKE  R2LOG  MAD1   MAD2   HMSE
MRS-GARCH-N     SPA_l  0      0      0      0      0      0      0
                SPA_c  0      0      0      0      0      0      0
                RC     0      0      0      0      0      0      0
MRS-GARCH-t2    SPA_l  0      0.007  0.007  0      0      0      0
                SPA_c  0      0.007  0.007  0      0      0      0
                RC     0.001  0.007  0.007  0      0      0      0
MRS-GARCH-t     SPA_l  0.517  0.512  0.604  0.502  0.507  0.487  0
                SPA_c  0.517  1      0.604  0.502  1      1      0
                RC     1      1      1      1      1      1      0
MRS-GARCH-GED   SPA_l  0.001  0.014  0.06   0      0      0      0
                SPA_c  0.001  0.014  0.091  0      0      0      0
                RC     0.024  0.277  0.091  0.006  0.015  0.037  0

Note: This table presents the p-values of White's (2000) Reality Check test (RC) and the p-values of the Consistent (SPA_c) and Lower bound (SPA_l) versions of Hansen's (2001) SPA test for forecasts at the two-week horizon. Each model in the row is the benchmark versus all other MRS-GARCH models. The null hypothesis is that none of the models is better than the benchmark. The number of bootstrap replications used to calculate the p-values is 3000 and the block length is 0.3.
Table 11: Risk management Out-of-sample Evaluation: 95% and 99% VaR.

95% VaR, one step ahead
Model          TUFF  PF(%)   Rank  LR_PF     LR_ind    LR_cc
GARCH-N          20  6.458   11=     2.102     0.010     2.112
GARCH-t          20  3.718    3=     1.932     0.116     2.049
GARCH-GED        20  6.458   11=     2.102     0.010     2.112
EGARCH-N         20  4.892    8      0.013     2.579     2.591
EGARCH-t         20  4.697    7      0.101     2.443     2.544
EGARCH-GED       20  3.718    3=     1.932     1.471     3.403
GJR-N            20  5.675   10      0.471     3.499     3.970
GJR-t            20  3.718    3=     1.932     1.471     3.403
GJR-GED          20  5.479    9      0.240     3.255     3.495
MRS-GARCH-N      20  7.045   13      4.014*    0.092     4.106
MRS-GARCH-t2     86  1.761    1     14.876*    2.164    17.041*
MRS-GARCH-t      20  2.544    2      7.854*    0.945     8.798*
MRS-GARCH-GED    20  4.305    6      0.544     0.989     1.533

95% VaR, five steps ahead
Model          TUFF  PF(%)   Rank  LR_PF     LR_ind    LR_cc
GARCH-N          49  5.871   11=     0.775    34.293*   35.067*
GARCH-t          86  2.544    3=     7.854*   21.812*   29.665*
GARCH-GED        49  5.871   11=     0.775    34.293*   35.067*
EGARCH-N         49  4.892    8=     0.013    26.224*   26.236*
EGARCH-t         49  4.892    8=     0.013    32.066*   32.079*
EGARCH-GED       49  4.305    6      0.544    25.474*   26.018*
GJR-N            49  4.892    8=     0.013    20.859*   20.871*
GJR-t            49  2.544    3=     7.854*    4.319*   12.173*
GJR-GED          49  4.501    7      0.277    18.460*   18.737*
MRS-GARCH-N      41  7.241   13      4.773*   32.357*   37.130*
MRS-GARCH-t2    200  0.978    1     25.646*   23.353*   48.999*
MRS-GARCH-t     187  1.37     2     19.674*   17.739*   37.413*
MRS-GARCH-GED    49  3.914    5      1.367    35.952*   37.319*

95% VaR, ten steps ahead
Model          TUFF  PF(%)   Rank  LR_PF     LR_ind    LR_cc
GARCH-N          77  5.088   10      0.008    42.734*   42.743*
GARCH-t         186  1.566    2     17.148*   24.637*   41.784*
GARCH-GED        77  5.284   11      0.085    47.098*   47.183*
EGARCH-N         77  5.479   12      0.240    44.659*   44.899*
EGARCH-t        169  4.892    8=     0.013    60.188*   60.201*
EGARCH-GED       77  4.892    8=     0.013    38.380*   38.392*
GJR-N            77  3.718    5=     1.932    24.513*   26.445*
GJR-t           187  1.761    3=    14.876*   22.123*   36.999*
GJR-GED          77  3.718    5=     1.932    24.513*   26.445*
MRS-GARCH-N      70  8.611   13     11.643*   67.578*   79.221*
MRS-GARCH-t2    200  0.587    1     33.279*   18.523*   51.802*
MRS-GARCH-t     187  1.761    3=    14.876*   14.026*   28.903*
MRS-GARCH-GED    77  4.11     7      0.906    33.734*   34.640*

95% VaR, twenty-two steps ahead
Model          TUFF  PF(%)   Rank  LR_PF     LR_ind    LR_cc
GARCH-N          69  9.002    9=    14.070*  135.694*  149.764*
GARCH-t          70  4.501    4      0.277    76.168*   76.445*
GARCH-GED        69  9.002    9=    14.070*  135.694*  149.764*
EGARCH-N         65 13.894   12     58.625*  156.722*  215.347*
EGARCH-t        129  8.806    7=    12.832*  131.663*  144.496*
EGARCH-GED       65 12.916   11     47.839*  141.262*  189.101*
GJR-N            69  8.806    7=    12.832*  114.588*  127.420*
GJR-t            70  4.11     3      0.906    48.444*   49.350*
GJR-GED          69  8.611    6     11.643*  118.947*  130.590*
MRS-GARCH-N      65 17.417   13    103.924*  261.995*  365.918*
MRS-GARCH-t2    175  1.761    1     14.876*   42.362*   57.239*
MRS-GARCH-t     175  3.327    2      3.397    62.162*   65.559*
MRS-GARCH-GED    69  8.415    5     10.503*  114.918*  125.422*

99% VaR, one step ahead
Model          TUFF  PF(%)   Rank  LR_PF     LR_ind    LR_cc
GARCH-N          20  2.544   13      8.621*    0.945     9.566*
GARCH-t          86  0.391    3      2.487     0.016     2.503
GARCH-GED        86  1.174    8      0.148     0.143     0.291
EGARCH-N         20  1.566   10      1.408     0.255     1.663
EGARCH-t         86  0.783    6=     0.263     0.063     0.326
EGARCH-GED       86  0.587    4=     1.033     0.036     1.069
GJR-N            20  1.761   11      2.438     0.323     2.762
GJR-t            86  0.587    4=     1.033     0.036     1.069
GJR-GED          20  1.37     9      0.633     0.195     0.828
MRS-GARCH-N      20  2.348   12      6.803*    0.578     7.382*
MRS-GARCH-t2    511  0        1=    10.271*    0.000    10.271*
MRS-GARCH-t     511  0        1=    10.271*    0.000    10.271*
MRS-GARCH-GED    86  0.783    6=     0.263     0.063     0.326

99% VaR, five steps ahead
Model          TUFF  PF(%)   Rank  LR_PF     LR_ind    LR_cc
GARCH-N         206  0.587    9      1.033     6.844*    7.878*
GARCH-t         207  0.196    2=     4.991*    0.004     4.995
GARCH-GED       206  0.391    5=     2.487     8.926*   11.412*
EGARCH-N        187  0.783   10      0.263     5.505*    5.768
EGARCH-t        200  1.174   12      0.148    20.197*   20.346*
EGARCH-GED      206  0.391    5=     2.487     8.926*   11.412*
GJR-N           187  0.978   11      0.002     4.522*    4.525
GJR-t           511  0        1     10.271*    0.000    10.271*
GJR-GED         206  0.391    5=     2.487     8.926*   11.412*
MRS-GARCH-N     187  1.566   13      1.408    15.727*   17.135*
MRS-GARCH-t2    207  0.196    2=     4.991*    0.004     4.995
MRS-GARCH-t     207  0.196    2=     4.991*    0.004     4.995
MRS-GARCH-GED   206  0.391    5=     2.487     8.926*   11.412*

99% VaR, ten steps ahead
Model          TUFF  PF(%)   Rank  LR_PF     LR_ind    LR_cc
GARCH-N         187  0.783    7      0.263    15.083*   15.346*
GARCH-t         200  0.587    1=     1.033    18.523*   19.556*
GARCH-GED       200  0.587    1=     1.033    18.523*   19.556*
EGARCH-N        187  1.566   12      1.408    24.637*   26.045*
EGARCH-t        187  1.174   10      0.148    10.944*   11.093*
EGARCH-GED      200  0.978    8=     0.002    23.353*   23.355*
GJR-N           187  1.37    11      0.633    17.739*   18.372*
GJR-t           200  0.587    1=     1.033    18.523*   19.556*
GJR-GED         200  0.978    8=     0.002    23.353*   23.355*
MRS-GARCH-N     186  2.153   13      5.156*   26.028*   31.185*
MRS-GARCH-t2    200  0.587    1=     1.033    18.523*   19.556*
MRS-GARCH-t     200  0.587    1=     1.033    18.523*   19.556*
MRS-GARCH-GED   200  0.587    1=     1.033    18.523*   19.556*

99% VaR, twenty-two steps ahead
Model          TUFF  PF(%)   Rank  LR_PF     LR_ind    LR_cc
GARCH-N         175  1.761    8=     2.438    42.362*   44.801*
GARCH-t         188  0.587    4      1.033    18.523*   19.556*
GARCH-GED       187  0.978    6      0.002    23.353*   23.355*
EGARCH-N         70  5.871   12     57.665*   92.787*  150.453*
EGARCH-t        187  1.761    8=     2.438    42.362*   44.801*
EGARCH-GED      129  2.153   11      5.156*   56.608*   61.764*
GJR-N           175  1.761    8=     2.438    42.362*   44.801*
GJR-t           189  0.196    2      4.991*    0.004     4.995
GJR-GED         187  1.174    7      0.148    31.493*   31.641*
MRS-GARCH-N      69  7.828   13     97.298*  119.832*  217.130*
MRS-GARCH-t2    511  0        1     10.271*    0.000    10.271*
MRS-GARCH-t     189  0.391    3      2.487     8.926*   11.412*
MRS-GARCH-GED   187  0.783    5      0.263    27.804*   28.067*

Note: This table presents the time until first failure (TUFF), the percentage proportion of failures (PF(%)), the LR test for unconditional coverage (LR_PF), the LR test for independence (LR_ind), and the LR test for conditional coverage (LR_cc) for both the 95% and 99% VaR failure processes at one, five, ten and twenty-two steps ahead. An '=' after a rank denotes a tie. * indicates significance at 5%.
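The three LR statistics in Table 11 can be reproduced from a 0/1 hit sequence (1 when the realized loss exceeds the VaR forecast). A minimal sketch of the Kupiec unconditional-coverage test and Christoffersen's independence and conditional-coverage tests; function and variable names are mine, not the paper's:

```python
import numpy as np

def _xlogy(x, y):
    # x * log(y) with the convention 0 * log(0) = 0
    return 0.0 if x == 0 else x * np.log(y)

def var_backtests(hits, p):
    """LR backtests for a VaR failure process.

    hits : 0/1 sequence, 1 = return fell below the VaR forecast.
    p    : nominal failure rate (0.05 for 95% VaR, 0.01 for 99% VaR).
    Returns (LR_PF, LR_ind, LR_cc), asymptotically chi-square with
    1, 1 and 2 degrees of freedom under the respective nulls.
    """
    hits = np.asarray(hits, dtype=int)
    n1 = int(hits.sum()); n0 = hits.size - n1
    pi = n1 / hits.size
    # Unconditional coverage: observed failure rate pi vs nominal p
    lr_pf = -2.0 * (_xlogy(n0, 1 - p) + _xlogy(n1, p)
                    - _xlogy(n0, 1 - pi) - _xlogy(n1, pi))
    # First-order Markov transition counts of consecutive hits
    pairs = list(zip(hits[:-1], hits[1:]))
    n00 = pairs.count((0, 0)); n01 = pairs.count((0, 1))
    n10 = pairs.count((1, 0)); n11 = pairs.count((1, 1))
    pi01 = n01 / (n00 + n01) if n00 + n01 else 0.0
    pi11 = n11 / (n10 + n11) if n10 + n11 else 0.0
    pi1 = (n01 + n11) / (n00 + n01 + n10 + n11)
    # Independence: iid hit probability vs Markov alternative
    lr_ind = -2.0 * (_xlogy(n00 + n10, 1 - pi1) + _xlogy(n01 + n11, pi1)
                     - _xlogy(n00, 1 - pi01) - _xlogy(n01, pi01)
                     - _xlogy(n10, 1 - pi11) - _xlogy(n11, pi11))
    # Conditional coverage combines the two
    return lr_pf, lr_ind, lr_pf + lr_ind
```

The additivity LR_cc = LR_PF + LR_ind used here is visible throughout the table (e.g. 2.438 + 42.362 = 44.801 for GARCH-N at the 99% level, twenty-two steps ahead).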
[Figure 1 appears here: four panels of volatility forecasts plotted against the realized-volatility proxy (from RV 1 Min.) over October 2001 to April 2003: one-step-ahead forecasts of GJR-N and MRS-GARCH-N; five- and ten-step-ahead forecasts of EGARCH-N and MRS-GARCH-N; and twenty-two-step-ahead forecasts of EGARCH-GED and MRS-GARCH-N.]

Figure 1: Comparison of one-, five-, ten- and twenty-two-step-ahead volatility forecasts from the Markov Regime-Switching GARCH models and standard GARCH.
[Figure 2 appears here: four panels of excessive losses for the 95% VaR of GJR-t and MRS-GARCH-t2 at one, five, ten and twenty-two steps ahead, plotting returns against each model's 95% VaR over October 2001 to April 2003.]

Figure 2: 95% VaR estimates for the S&P 100 series.
[Figure 3 appears here: four panels of excessive losses for the 99% VaR of GJR-t and MRS-GARCH-t2 at one, five, ten and twenty-two steps ahead, plotting returns against each model's 99% VaR over October 2001 to April 2003.]

Figure 3: 99% VaR estimates for the S&P 100 series.