Tssol 11
1. (a) The lag 1 differencing operator $\nabla$ is defined by $\nabla X_t = X_t - X_{t-1} = (1 - B)X_t$ and the lag $d$ differencing operator $\nabla_d$ by $\nabla_d X_t = X_t - X_{t-d} = (1 - B^d)X_t$.
Let $m_t = \beta_0 + \beta_1 t + \ldots + \beta_k t^k$. Then, since $\nabla^k m_t = k!\,\beta_k$, the differenced series $\nabla^k X_t$ will have no trend. Similarly, since $\nabla_d s_t = s_t - s_{t-d} = 0$, the differenced series $\nabla_d X_t$ will have no seasonality. By combining these two operations, both the trend and the seasonality can be removed from the time series $\{X_t\}$. [6]

(b) When $k = 1$, we may write $\nabla_d X_t = \nabla_d m_t + \nabla_d s_t + \nabla_d Y_t = \beta_0 + \beta_1 t - \beta_0 - \beta_1 (t - d) + \nabla_d Y_t = \beta_1 d + \nabla_d Y_t$. For stationarity, we need to check that the expectation and variance are constant, and that the covariances do not depend on $t$. Clearly, $E(\nabla_d X_t) = \beta_1 d$, which does not depend on $t$. We also have
$$\mathrm{cov}(\nabla_d X_t, \nabla_d X_{t+\tau}) = \mathrm{cov}(\nabla_d Y_t, \nabla_d Y_{t+\tau}) = 2\gamma_Y(\tau) - \gamma_Y(\tau - d) - \gamma_Y(\tau + d).$$
Since this does not depend on $t$, the differenced series is stationary. [5]

(c) The first step of the classical decomposition method involves estimating the trend using a moving average filter of period length $d$. The seasonal effects are estimated in the next step by computing the averages of the detrended values and adjusting them so that the seasonal effects meet the model assumptions. Using the estimated seasonal effects, the time series $\{X_t\}$ is then deseasonalised and the trend re-estimated from the deseasonalised values. In the final step, the residuals are calculated as the detrended and deseasonalised values. [6]
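To make the four steps of part (c) concrete, here is a minimal numpy sketch, assuming an even period $d$ and using a least-squares line as the re-estimated trend; the function name and the toy series are illustrative choices, not part of the question.

```python
import numpy as np

def classical_decomposition(x, d):
    """Classical decomposition sketch for a series with even period d."""
    n, q = len(x), d // 2
    t = np.arange(n)

    # Step 1: moving-average trend estimate (endpoints left undefined).
    m_hat = np.full(n, np.nan)
    for i in range(q, n - q):
        m_hat[i] = (0.5 * x[i - q] + x[i - q + 1:i + q].sum() + 0.5 * x[i + q]) / d

    # Step 2: seasonal effects = per-season averages of the detrended values,
    # centred so that they sum to zero (the model assumption on s_t).
    w = np.array([np.nanmean((x - m_hat)[k::d]) for k in range(d)])
    s_hat = np.tile(w - w.mean(), n // d + 1)[:n]

    # Step 3: re-estimate the trend from the deseasonalised series,
    # here with a least-squares line (one simple choice).
    b1, b0 = np.polyfit(t, x - s_hat, 1)
    m_hat2 = b0 + b1 * t

    # Step 4: residuals = the detrended and deseasonalised values.
    return m_hat2, s_hat, x - m_hat2 - s_hat

# Toy series: linear trend + period-12 seasonality + noise.
rng = np.random.default_rng(0)
x = 0.5 * np.arange(120) + 3 * np.sin(2 * np.pi * np.arange(120) / 12) + rng.normal(size=120)
m, s, resid = classical_decomposition(x, 12)
```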
2. (a) First note that $E(Z_t) = 0$ for all $t$, and so $E(X_t) = 0$ for all $t$. Thus, we may write
$$\begin{aligned}
\mathrm{cov}(X_t, X_{t+\tau}) &= E\{(Z_t + \theta_1 Z_{t-1} + \theta_2 Z_{t-2})(Z_{t+\tau} + \theta_1 Z_{t+\tau-1} + \theta_2 Z_{t+\tau-2})\} \\
&= E(Z_t Z_{t+\tau} + \theta_1 Z_t Z_{t+\tau-1} + \theta_2 Z_t Z_{t+\tau-2} + \theta_1 Z_{t-1} Z_{t+\tau} + \theta_1^2 Z_{t-1} Z_{t+\tau-1} \\
&\qquad + \theta_1 \theta_2 Z_{t-1} Z_{t+\tau-2} + \theta_2 Z_{t-2} Z_{t+\tau} + \theta_1 \theta_2 Z_{t-2} Z_{t+\tau-1} + \theta_2^2 Z_{t-2} Z_{t+\tau-2}).
\end{aligned}$$
Since the $Z_t$ are uncorrelated random variables with $\mathrm{var}(Z_t) = \sigma^2$, we obtain
$$\gamma(\tau) = \begin{cases}
\sigma^2 (1 + \theta_1^2 + \theta_2^2) & \text{if } \tau = 0, \\
\sigma^2 \theta_1 (1 + \theta_2) & \text{if } \tau = \pm 1, \\
\sigma^2 \theta_2 & \text{if } \tau = \pm 2, \\
0 & \text{if } |\tau| > 2.
\end{cases}$$
It follows that
$$\rho(\tau) = \begin{cases}
1 & \text{if } \tau = 0, \\
\dfrac{\theta_1 (1 + \theta_2)}{1 + \theta_1^2 + \theta_2^2} & \text{if } \tau = \pm 1, \\
\dfrac{\theta_2}{1 + \theta_1^2 + \theta_2^2} & \text{if } \tau = \pm 2, \\
0 & \text{if } |\tau| > 2.
\end{cases}$$
For an MA($q$) process, $\rho(\tau)$ is not necessarily zero when $|\tau| \le q$ and $\rho(\tau) = 0$ when $|\tau| > q$. [12]

(b) The above MA(2) process is invertible if and only if $\theta(z) = 1 + \theta_1 z + \theta_2 z^2 = 0$ only for $|z| > 1$. Solving this quadratic equation yields
$$z = \frac{-\theta_1 \pm \sqrt{\theta_1^2 - 4\theta_2}}{2\theta_2}.$$
The process is invertible for those values of $\theta_1$ and $\theta_2$ for which both roots satisfy $|z| > 1$. [6]

(c) The seasonal MA(2)$_h$ process is defined as $X_t = Z_t + \theta_1 Z_{t-h} + \theta_2 Z_{t-2h}$. It is invertible if and only if $\theta(z^h) = 1 + \theta_1 z^h + \theta_2 z^{2h} = 0$ only for $|z| > 1$. [4]
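As a quick numerical sanity check of the ACF in (a) and the invertibility condition in (b), here is a short numpy sketch; the values $\theta_1 = 0.5$, $\theta_2 = 0.3$, $\sigma^2 = 1$ are illustrative and not taken from the question.

```python
import numpy as np

theta1, theta2 = 0.5, 0.3
rng = np.random.default_rng(1)
z = rng.normal(size=200_000)              # white noise with sigma^2 = 1
x = z[2:] + theta1 * z[1:-1] + theta2 * z[:-2]   # simulated MA(2)

denom = 1 + theta1**2 + theta2**2
exact = {0: 1.0, 1: theta1 * (1 + theta2) / denom, 2: theta2 / denom}
for tau in range(4):
    # Sample correlation at lag tau versus the closed-form rho(tau) above.
    sample = np.corrcoef(x[:-tau], x[tau:])[0, 1] if tau else 1.0
    print(tau, round(sample, 3), round(exact.get(tau, 0.0), 3))

# Invertible iff both roots of theta2 z^2 + theta1 z + 1 lie outside the
# unit circle (np.roots takes coefficients from highest degree down).
print("invertible:", np.all(np.abs(np.roots([theta2, theta1, 1])) > 1))
```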
3. (a) In operator form, the process is $\phi(B) X_t = \theta(B) Z_t$, where $\phi(z) = 1 - \phi_1 z - \phi_2 z^2$ and $\theta(z) = 1 + \theta z$. It would be an ARMA(2, 1) process if the polynomials $\phi(z)$ and $\theta(z)$ have no common factors. Since the single root of $\theta(z)$ is $z = -1/\theta$, the condition for this is that $\phi(-1/\theta) = 1 + \phi_1/\theta - \phi_2/\theta^2 \ne 0$. [4]
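A one-line numerical check of this condition, using the parameter values $\phi_1 = 0.3$, $\phi_2 = 0.4$, $\theta = 0.9$ from part (b):

```python
phi1, phi2, theta = 0.3, 0.4, 0.9
# phi(-1/theta) != 0 means no common factor with theta(z).
print(1 + phi1 / theta - phi2 / theta**2)  # approx 0.84, so genuinely ARMA(2, 1)
```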
(b) The linear process form of the time series is
$$X_t = \sum_{j=0}^{\infty} \psi_j Z_{t-j} = \psi(B) Z_t, \qquad \text{where } \psi(B) = \sum_{j=0}^{\infty} \psi_j B^j.$$
Thus, in terms of polynomials in $z$, we may write
$$(1 - \phi_1 z - \phi_2 z^2)(\psi_0 + \psi_1 z + \psi_2 z^2 + \ldots) = 1 + \theta z.$$
Equating the coefficients of $z^j$, $j = 0, 1, \ldots$, we obtain $\psi_0 = 1$, $\psi_1 = \phi_1 + \theta$ and $\psi_j = \phi_1 \psi_{j-1} + \phi_2 \psi_{j-2}$ for $j \ge 2$. When $\phi_1 = 0.3$, $\phi_2 = 0.4$ and $\theta = 0.9$, the roots of $\phi(z)$ are $z_1 = -2$ and $z_2 = 5/4$. So the general solution to this second-order difference equation is
$$\psi_j = c_1 z_1^{-j} + c_2 z_2^{-j},$$
where $c_1$ and $c_2$ can be obtained from the initial conditions. In this case, the initial conditions yield the equations
$$c_1 + c_2 = 1, \qquad -\frac{1}{2} c_1 + \frac{4}{5} c_2 = 1.2.$$
These give $c_1 = -4/13$ and $c_2 = 17/13$. Thus, we have
$$\psi_j = -\frac{4}{13} (-2)^{-j} + \frac{17}{13} \left(\frac{5}{4}\right)^{-j}$$
for $j \ge 2$. [15]
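As a check on the algebra, here is a short numpy sketch comparing the recursion for $\psi_j$ with the closed form just derived; the lag cutoff of 10 is an arbitrary choice.

```python
import numpy as np

phi1, phi2, theta = 0.3, 0.4, 0.9

# psi_0 = 1, psi_1 = phi1 + theta, then psi_j = phi1 psi_{j-1} + phi2 psi_{j-2}.
psi = [1.0, phi1 + theta]
for j in range(2, 10):
    psi.append(phi1 * psi[-1] + phi2 * psi[-2])

# Closed form with c1 = -4/13, c2 = 17/13, 1/z1 = -1/2, 1/z2 = 4/5.
closed = [-4 / 13 * (-0.5) ** j + 17 / 13 * 0.8 ** j for j in range(10)]
print(np.allclose(psi, closed))  # True
```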
(c) The difference equations in terms of the autocorrelation function for an ARMA(2, 1) process are given by
$$\rho(\tau) - \phi_1 \rho(\tau - 1) - \phi_2 \rho(\tau - 2) = 0 \quad \text{for } \tau \ge 2,$$
with initial conditions
$$\rho(\tau) - \phi_1 \rho(\tau - 1) - \phi_2 \rho(\tau - 2) = \frac{\sigma^2 (\theta_\tau \psi_0 + \theta_{\tau + 1} \psi_1)}{\gamma(0)} \quad \text{for } 0 \le \tau < 2,$$
where $\theta_0 = 1$, $\theta_1 = \theta$ and $\theta_\tau = 0$ if $\tau > 1$. The autocorrelation function tails off for this process. [5]
4. (a) There are three cases to consider: (i) $|\phi| < 1$; (ii) $|\phi| = 1$; and (iii) $|\phi| > 1$. In cases (i) and (iii), an AR(1) process is stationary, since $X_t$ can be expressed as a linear combination of the $Z_t$. It is only causal in case (i), because $X_t$ depends on future values of $Z_t$ in case (iii). In case (ii), an AR(1) process reduces to a random walk, which is neither stationary nor causal. However, the first difference of a random walk is stationary, since it is just white noise. [6]

(b) The difference equation in terms of the autocorrelation function for a causal AR(1) process is $\rho(\tau) - \phi \rho(\tau - 1) = 0$ for $\tau \ge 1$, with initial condition $\rho(0) = 1$. Thus, we can write
$$\rho(\tau) = \phi \rho(\tau - 1) = \phi^2 \rho(\tau - 2) = \ldots = \phi^\tau \rho(0) = \phi^\tau$$
for $\tau = 0, 1, \ldots$. Since $\rho(-\tau) = \rho(\tau)$ for all $\tau$, the autocorrelation function is $\rho(\tau) = \phi^{|\tau|}$ for $\tau = 0, \pm 1, \pm 2, \ldots$. The partial autocorrelation function is $\phi_{11} = \phi$ and $\phi_{\tau\tau} = 0$ for $\tau > 1$. [8]

(c) The best linear predictor of $X_{n+1}$ based on $X_1, \ldots, X_n$ is $\hat{X}_{n+1} = \phi X_n$. The Yule-Walker estimators are $\hat{\phi} = \hat{\rho}(1)$ and $\hat{\sigma}^2 = \hat{\gamma}(0)\left(1 - \hat{\rho}^2(1)\right)$, where $\hat{\rho}(1)$ is the sample autocorrelation at lag 1 and $\hat{\gamma}(0)$ is the sample variance. So the estimated predictor is $\hat{X}_{n+1}^{(n)} = \hat{\rho}(1) X_n$, and the approximate 95% prediction bounds are $\hat{X}_{n+1}^{(n)} \pm 1.96\,\hat{\sigma}$. [6]
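A simulation sketch of part (c); the value $\phi = 0.7$, the seed and the series length are illustrative choices, not part of the question.

```python
import numpy as np

# Yule-Walker fit and one-step prediction for a causal AR(1).
rng = np.random.default_rng(2)
phi, n = 0.7, 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

xc = x - x.mean()
gamma0 = np.mean(xc**2)                           # sample variance gamma_hat(0)
rho1 = np.sum(xc[:-1] * xc[1:]) / np.sum(xc**2)   # sample ACF at lag 1

phi_hat = rho1                                    # Yule-Walker estimate of phi
sigma2_hat = gamma0 * (1 - rho1**2)               # Yule-Walker estimate of sigma^2

pred = phi_hat * x[-1]                            # estimated one-step predictor
half = 1.96 * np.sqrt(sigma2_hat)
print(pred - half, pred + half)                   # approximate 95% bounds
```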
5. (a) The sample ACF of $y_t = \nabla x_t$ cuts off after lag 1 and the sample PACF tails off with the same sign. This suggests that $y_t$ can be modelled as an MA(1) process with a negative value of the moving average parameter. Hence, we have $p = 0$, $q = 1$, and, since the $x_t$ were differenced once, $d = 1$. This means that the time series belongs to the ARIMA(0, 1, 1) class. [8]

(b) The suggested model for $X_t$ is $\nabla X_t = Z_t + \theta Z_{t-1}$, which can be written as $(1 - B) X_t = (1 + \theta B) Z_t$, where $\{Z_t\} \sim \mathrm{WN}(0, \sigma^2)$ and $\theta < 0$. [3]

(c) The residual ACF, PACF, normal plot and histogram can be used to assess whether the residuals behave like a Gaussian white noise process. If this is the case, the sample autocorrelations at lags $\tau \ne 0$ will be approximately zero and lie within the approximate 95% confidence bounds, the normal plot will show a straight line and the histogram will be approximately bell-shaped. The Ljung-Box-Pierce $Q$ statistic can also be used to test whether groups of autocorrelations are zero. If the null hypothesis is rejected, then there are correlations in the data that the model has not accounted for. [6]
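Finally, a minimal self-contained sketch of the Ljung-Box test from (c), assuming $h$ lags and $m$ fitted ARMA parameters; the helper function and the white-noise example are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def ljung_box(resid, h, m=0):
    """Ljung-Box Q over lags 1..h; m = number of fitted ARMA parameters."""
    n = len(resid)
    r = resid - resid.mean()
    denom = np.sum(r**2)
    rho = np.array([np.sum(r[:-k] * r[k:]) / denom for k in range(1, h + 1)])
    q = n * (n + 2) * np.sum(rho**2 / (n - np.arange(1, h + 1)))
    return q, chi2.sf(q, h - m)  # p-value under the white-noise null

# Gaussian white noise should typically give a large p-value.
rng = np.random.default_rng(3)
q, p = ljung_box(rng.normal(size=500), h=10)
print(round(q, 2), round(p, 3))
```

For residuals from the fitted ARIMA(0, 1, 1) model one would take $m = 1$, since a single moving average parameter was estimated.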