Chapter 4 - Lecture Notes
Chapter 4 (Part 1)
Estimation, diagnostic checking for nonseasonal Box-Jenkins models
DEFINITION 1 : Nonzero mean AR(p): Xt = δ + ϕ1 Xt−1 + · · · + ϕp Xt−p + Zt . Then

EXt = δ/(1 − ϕ1 − · · · − ϕp ).

If we define xt = Xt − EXt , then

xt = ϕ1 xt−1 + · · · + ϕp xt−p + Zt .

Moreover, we have

Cov(xt , xs ) = Cov(Xt , Xs ).

Therefore, all the properties of the nonzero-mean AR(p) are the same as those of the zero-mean AR(p) in terms of the ACF.
DEFINITION 2 : Nonzero mean MA(q): Xt = δ + Zt + θ1 Zt−1 + · · · + θq Zt−q . Then EXt = δ. If we define xt = Xt − EXt = Xt − δ, then

xt = Zt + θ1 Zt−1 + · · · + θq Zt−q ,

so Cov(xt , xs ) = Cov(Xt , Xs ), and all the properties of the nonzero-mean MA(q) are the same as those of the zero-mean MA(q).

DEFINITION 3 : Nonzero mean ARMA(p,q): Xt = δ + ϕ1 Xt−1 + · · · + ϕp Xt−p + Zt + θ1 Zt−1 + · · · + θq Zt−q . Then

EXt = δ/(1 − ϕ1 − · · · − ϕp ).

If we define xt = Xt − EXt , then it follows that

Cov(xt , xs ) = Cov(Xt , Xs ).

Therefore, all the properties of the nonzero-mean ARMA(p,q) are the same as those of the zero-mean ARMA(p,q).
Denote the sample mean by X̄ = (1/n) ∑_{j=1}^n Xj , which is one possible point estimate of the population mean µ = EXt . If X̄ is statistically (significantly) different from zero, it is reasonable to assume that µ does not equal zero and, therefore, that δ does not equal zero. Let s stand for the standard deviation, with s² = ∑_{i=1}^n (Xi − X̄)²/(n − 1). One rough rule of thumb is to decide that X̄ is statistically different from zero if the absolute value of

X̄/(s/√n)

is greater than 1.96.
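This rule of thumb is easy to compute directly. Below is a minimal R sketch; the vector x and its values are hypothetical stand-ins for an observed series.

# Rough check of whether the sample mean differs significantly from zero
x <- c(-0.06, -0.18, 0.06, 0.15, 0.13, -0.02, 0.19, -0.13, -0.26, -0.29)
n <- length(x)
xbar <- mean(x)
s <- sd(x)                        # sqrt(sum((x - xbar)^2) / (n - 1))
abs(xbar / (s / sqrt(n))) > 1.96  # TRUE suggests delta is nonzero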
2 Invertibility
2.1 Motivation
Why do we need the concept of invertibility? Consider the following example.
EXAMPLE 1 Evaluate ACF of the following two MA(1) models.
1. Xt = Zt + (1/3)Zt−1 .
2. Xt = Zt + 3Zt−1 .
For the first model,

γk = Cov(Xt , Xt+k ) = E[(Zt + (1/3)Zt−1 )(Zt+k + (1/3)Zt+k−1 )]
= E(Zt Zt+k ) + (1/3)E(Zt Zt+k−1 ) + (1/3)E(Zt−1 Zt+k ) + (1/9)E(Zt−1 Zt+k−1 ).

This gives γ0 = 1 + 1/9 = 10/9, γ1 = 1/3 and γk = 0 for k > 1, which further implies ρ1 = γ1/γ0 = (1/3)/(10/9) = 3/10. Similarly, for the second model, we may obtain γ0 = 10, γ1 = 3 and γk = 0 for k > 1, so ρ1 = γ1/γ0 = 3/10. So two different models have the same ACF.
We cannot distinguish between these two models by looking at the sample ACF. Hence we will have to choose only one of them. We now look further at the difference between them.
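A quick numerical check with R's built-in ARMAacf, using the two θ values from Example 1, confirms that both models share the same ACF:

# Theoretical ACF of the two MA(1) models from Example 1
ARMAacf(ma = 1/3, lag.max = 3)  # lag-1 autocorrelation = 0.3
ARMAacf(ma = 3,   lag.max = 3)  # lag-1 autocorrelation = 0.3 as well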
Intuitively speaking, the most recent observations should carry more weight than observations from the more distant past in representing Xt . When |θ| < 1, |θ|^j becomes smaller as j gets larger. So we should choose model 1 in Example 1, with θ = 1/3.
A model is said to be invertible if it can be rewritten as

Xt = Zt + ψ1 Xt−1 + ψ2 Xt−2 + · · · ,

such that ∑_{j=1}^∞ |ψj | < ∞.
(a) An AR(p) model

Xt = ϕ1 Xt−1 + · · · + ϕp Xt−p + Zt ,

where {Zt } ∼ WN(0, σ²), is always invertible.

(b) An MA(q) model Xt = Zt + θ1 Zt−1 + · · · + θq Zt−q is invertible if and only if

θ(B) = 1 + θ1 B + · · · + θq B^q = 0

has all its roots outside the unit circle, i.e. all the roots have modulus (complex norm) greater than 1.

For example, if q = 1, the root of 1 + θ1 B = 0 is

B = −1/θ1 ,

so the model is invertible if and only if |θ1 | < 1. If q = 2, the condition is equivalent to

θ2 + θ1 > −1, θ2 − θ1 > −1, |θ2 | < 1.

(c) ARMA(p,q): the model is invertible if and only if

θ(B) = 1 + θ1 B + · · · + θq B^q = 0

has all its roots outside the unit circle, i.e. all the roots have modulus (complex norm) greater than 1.
EXAMPLE 2 Is the model

Xt = Zt + 2Zt−1 + Zt−2

invertible? Here θ(B) = 1 + 2B + B² = (1 + B)², whose (double) root B = −1 has modulus 1, not greater than 1, so the model is not invertible.
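Such root conditions can also be checked numerically. A minimal sketch using R's polyroot (coefficients given in increasing order of power):

# Moduli of the roots of theta(B) = 0 for the MA models above
Mod(polyroot(c(1, 1/3)))   # 3.0  > 1: invertible
Mod(polyroot(c(1, 3)))     # 0.33 < 1: not invertible
Mod(polyroot(c(1, 2, 1)))  # 1, 1 : on the unit circle, not invertible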
3 Estimation of an AR(p) model
Suppose that Xt : t = 1, 2, · · · , n is a TS (time series), and we want to fit the
following regression model
Xt = δ + ϕ1 Xt−1 + · · · + ϕp Xt−p + Zt , t = p + 1, · · · , n.

What are X and Y? In matrix form the model is Y = Xβ + Z, where Y = (Xp+1 , · · · , Xn )′, each row of the design matrix X is (1, Xt−1 , · · · , Xt−p ), and β = (δ, ϕ1 , · · · , ϕp )′. Using LSE (least-squares estimation) we have

(δ̂, ϕ̂1 , · · · , ϕ̂p )′ = (X′X)−1 X′Y.
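For concreteness, here is a minimal R sketch of this computation for p = 1, assuming the observed series is stored in a vector x (a hypothetical name):

# Least-squares fit of an AR(1) with intercept: X_t = delta + phi1*X_{t-1} + Z_t
n <- length(x)
Y <- x[2:n]                            # response: X_2, ..., X_n
X <- cbind(1, x[1:(n - 1)])            # design matrix: intercept and X_{t-1}
beta <- solve(t(X) %*% X, t(X) %*% Y)  # (delta.hat, phi1.hat)
# the same fit via the built-in regression function:
# lm(Y ~ x[1:(n - 1)])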
b) Prediction errors
time    Xt        intercept   Xt−1
1       1.0445    1           –
2       -0.1338   1           1.0445
3       0.6706    1           -0.1338
...     ...       1           ...
20      0.5438    1           -0.4062
we have

(X′X)−1 = [  0.054303878   −0.007102646
            −0.007102646    0.030166596 ]

and

X′Y = ( 3.97280, −26.35337 )′.
We have
δ̂ = 0.4029171, ϕ̂1 = −0.8232089
The fitted values are (time: prediction) 2: -0.4569, 3: 0.5130, 4: -0.1491, 5:
0.0938, 6: 0.8235, 7: 0.5965, 8: 0.2716, 9: -0.9354, 10: 1.7809, 11: -1.6121, 12: 2.9564,
13: -1.8082, 14: 1.2183, 15: -0.5942, 16: 0.47939, 17: -0.4124, 18: -0.0262, 19: 0.4966,
20: 0.73730
The estimate of σ² = Var(Zt ) is

∑_{t=2}^{20} (Xt − X̂t )²/(19 − 2) = 0.5948365

and the standard error of ϕ̂1 is

√(0.5948365 × 0.030166596) = 0.1339559.
The fitted model is

X̂t = 0.4029171 − 0.8232089Xt−1 ,

or, from the SAS output,

X̂t = 0.4263259 − 0.82323Xt−1 .
SAS output (II):

Model for variable TS
Autoregressive Factors
MAS451/MTH451/MH4500 TIME SERIES ANALYSIS
Chapter 4 (Part 2)
Estimation, diagnostic checking for nonseasonal Box-Jenkins models
Basic Questions:

(a) How to forecast?

THEOREM 1 The minimum of E(Y − c)² over all constants c is attained at c = EY.

To see this, let g(c) = E(Y − c)². Since g(c) is quadratic in c and opens upward, solving g′(c) = 0 will produce the required minimum. Note that

g′(c) = −2EY + 2c = 0

gives c = EY.
Now consider the situation where a second random variable X is available and
we wish to use the observed value of X to help predict Y. Again, our criterion
will be to minimize the mean square error of prediction. We need to choose the
function h(X), say, that minimizes
E(Y − h(X))2 .
Rewrite

E(Y − h(X))² = E[E((Y − h(X))²|X)].

For each value of x, h(x) is a constant and hence we can apply Theorem 1 to the conditional distribution of Y given X = x. Thus the best choice of h(x) is h(x) = E(Y |X = x).
It follows that h(X) = E(Y |X) is the best predictor of Y of all functions of X.
THEOREM 2 The minimum of E(Y − g(X))2 is obtained when g(X) = E(Y |X).
For an AR(p) model, by Theorem 2 the best predictor of Xn+1 given X1 , · · · , Xn is

X̂n+1 = E(Xn+1 |X1 , · · · , Xn ) = δ + φ1 Xn + · · · + φp Xn+1−p .
It follows that
X̂n+1 = δ̂ + φ̂1 Xn + · · · + φ̂p Xn−p+1 .
Likewise, to predict Xn+2 , use

X̂n+2 = δ̂ + φ̂1 X̂n+1 + φ̂2 Xn + · · · + φ̂p Xn+2−p .
An approximate 95% prediction interval for Xn+τ is X̂n+τ ± 1.96 sn+τ (n). Here sn+τ (n) is the standard error of the forecast error. The calculation of sn+τ (n) is beyond the scope of this module. However, SAS (or R) can help us to calculate it.
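In R, for example, forecasts and their standard errors come directly from a fitted model. A minimal sketch, where fit and x are hypothetical names:

fit <- arima(x, order = c(1, 0, 0))  # e.g. an AR(1) fit
p <- predict(fit, n.ahead = 5)       # point forecasts and standard errors
cbind(lower = p$pred - 1.96 * p$se,  # approximate 95% prediction intervals
      upper = p$pred + 1.96 * p$se)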
EXAMPLE 1 yt : 3.91, 3.86, 3.81, 3.02, 2.62, 1.89, -1.13, -3.82, -5.08, -4.42, -1.99, 0.70,
1.86, 2.98, 1.78, 3.01, 2.13, 3.23, 3.17, 4.64, 5.20, 6.76, 5.79, 5.08, 1.88, -0.72, -2.00, -3.03,
-2.35, -3.34, -3.21, -3.57, -4.28, -3.54, -3.16, -1.41, 0.48, 1.61, 2.42, 2.11, 2.45, 1.39, 2.04,
1.71, 3.26, 3.20, 1.43, 1.68, 4.17, 4.75
Using SAS, ACF and PACF graphs can be obtained by the following codes:
proc arima data=mydata;
identify var=ts nlag=20 outcov=exp2;
run;
proc gplot data=exp2;
symbol i=needle width=6;
plot corr*lag;
run;
proc gplot data=exp2;
symbol i=needle width=6;
plot partcorr*lag/vref=-0.2771859 0.2771859 lvref=2;
run;
Because the SPACF has a clear cut-off after lag 2, we can choose an AR(2) model for yt .

The estimates are µ̂ = 3.06451, φ̂1 = 1.40646, φ̂2 = −0.50907. Thus δ̂ = (1 − φ̂1 − φ̂2 )µ̂ = 0.3144.
Fitted values: ŷt = 0.3144 + 1.40646yt−1 − 0.50907yt−2 , t = 3, · · · , 50.

Residuals:

e3 = y3 − ŷ3 = 0.0571
e4 = y4 − ŷ4 = −0.688
...
e50 = y50 − ŷ50 = −0.57
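These fitted values and residuals are straightforward to reproduce. A minimal R sketch, assuming the 50 observations are stored in a vector y:

# Fitted values and residuals of the estimated AR(2)
delta <- (1 - 1.40646 + 0.50907) * 3.06451     # about 0.3144
yhat  <- delta + 1.40646 * y[2:49] - 0.50907 * y[1:48]
e     <- y[3:50] - yhat                        # e_3, ..., e_50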
Prediction
SAS codes:
Fourth step: estimation and prediction.
proc arima data=mydata;
identify var=ts nlag=20;
estimate p=2 plot;
run;
forecast lead=5;
run;
quit;
The output results of the four steps, including identify, estimate and forecast
statements, are listed in Appendix A.
Further topics: (diagnostic) checking of the model (we will discuss this later).
Check whether there is autocorrelation in the residuals.
Since the autocorrelations of the residuals have been provided in the output (Appendix A), we can plot the ACF by the following codes:
data mydata;
input id acf;
datalines;
1 -0.11259
2 0.28399
3 -0.17813
4 0.02414
5 -0.05594
6 -0.02071
7 0.00486
8 0.10026
9 -0.01289
10 0.02440
11 -0.04040
12 -0.08307
13 -0.08352
14 -0.12912
15 -0.00174
;
proc gplot data=mydata;
symbol i=needle width=6;
title ’acf of residuals’;
axis2 order=(-1 to 1 by 0.2);
plot acf*id/vaxis=axis2;
run;
quit;
3 LSE of MA model
Suppose that X1 , X2 , · · · , Xn is a sample. To estimate MA(1):
Xt = Zt + θ1 Zt−1
For example: Time series Xt : -0.3771, -0.7009, -0.6063, -2.1099, -2.0939, -0.6972,
-0.8131, -0.4401, 0.0068, 0.0498, 0.5552, -0.3333, -2.3981, -1.9854, -1.3579, 0.6725,
2.0068, 2.4630, 1.8879, -0.2878, -1.2468, 0.5062, 1.5715, 2.7983, 0.8574, 0.1626, 0.6390,
0.2969, 0.3278, 0.6458
Xt = Zt + θ1 Zt−1
or
Xt = Zt + θ1 Xt−1 − θ1²Xt−2 + · · · .
To find θ1 , we will minimize

MSE = (1/n) ∑_{t=1}^n (Xt − X̂t )².
For the example, the minimum point for θ1 is 0.7423. Our estimated model is then

Xt = Zt + 0.7423Zt−1 .
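The minimization can be carried out by a simple grid search over θ1 using the conditional sum of squares (setting Z0 = 0). A sketch, assuming the 30 observations are in a vector x:

# Conditional least squares for MA(1): X_t = Z_t + theta*Z_{t-1}
css <- function(theta, x) {
  z <- 0                    # Z_0 is set to zero
  s <- 0
  for (t in seq_along(x)) {
    z <- x[t] - theta * z   # recover Z_t recursively
    s <- s + z^2
  }
  s / length(x)
}
grid <- seq(-0.99, 0.99, by = 0.0001)
grid[which.min(sapply(grid, css, x = x))]  # about 0.7423 for these data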
SAS codes:
Read data into sas:
data example1;
input id ts;
datalines;
1 -0.3771
2 -0.7009
3 -0.6063
...
28 0.2969
29 0.3278
30 0.6458
;
proc print;
run;
proc gplot data=example1;
symbol i=spline c=red v=star;
plot ts*id;
run;
ARIMA procedure:
proc arima data=example1;
identify var=ts nlag=14 outcov=exp3;
run;
GOPTIONS RESET=ALL;
proc gplot data=exp3;
symbol i=needle width=6;
plot corr*lag/VAXIS=(-0.5 to 1.0 by 0.1) vref=-0.3578454 0.3578454 lvref=2;
run;

GOPTIONS RESET=ALL;
proc gplot data=exp3;
symbol i=needle width=6;
plot partcorr*lag/vref=-0.3578454 0.3578454 lvref=2;
run;
quit;
proc arima data=example1;
identify var=ts nlag=14 outcov=exp3;
estimate q=1 plot;
run;
The output results are listed in Appendix B.
Then the estimated model is

Xt = 0.02271 + Zt + 0.75661Zt−1 .
Plot the ACF of the residuals by the following codes (the autocorrelations of the residuals are listed in the output results in Appendix B):
data mydata;
input id acf;
datalines;
1 0.28638
2 0.19386
3 -0.19558
4 0.209419
5 0.01885
6 0.27097
7 0.15829
8 0.285044
9 -0.06605
10 -0.01413
11 -0.11075
12 -0.05437
13 -0.07005
14 -0.15193
;
proc gplot data=mydata;
symbol i=needle width=6;
title ’acf of residuals’;
axis2 order=(-1 to 1 by 0.2);
plot acf*id/vaxis=axis2;
run;
quit;
MAS451/MTH451/MH4500 TIME SERIES ANALYSIS
Chapter 4 (Part 3)
Estimation, diagnostic checking for nonseasonal Box-Jenkins models
Basic Questions:
(a) Use SAS to implement the estimation.
(b) Estimate ARMA models based on the properties.
EXAMPLE 1 yt , t = 1, · · · , 50 are observed as: -1.30, -0.18, 0.94, -0.26, -1.05, -0.78,
-0.82, 0.43, 0.57, 1.41, -1.47, 0.49, 0.00, -0.15, -0.64, 0.24, -0.79, 0.82, -0.20, -0.80, -0.22,
0.88, -0.75, 0.55, 0.73, -0.82, 0.70, -1.54, 0.04, -0.70, -0.58, -1.38, -1.28, 0.49, -0.76, 1.08,
0.16, 1.11, -0.06, 0.88, 0.89, 0.31, 0.03, -1.19, -0.38, 0.49, 1.02, -0.98, 0.50, -0.57
The output results are listed in the Appendix, and part of them is listed below.
Conditional Least Squares Estimation

Parameter   Estimate   Standard Error   t Value   Approx Pr > |t|   Lag

Autoregressive Factors
2 Estimation of ARMA model based on the ACF and PACF: Yule–Walker estimation method

Consider an AR(p) model of the form

Xt = φ1 Xt−1 + · · · + φp Xt−p + Zt .

Our aim is to find estimators of the coefficient vector φ = (φ1 , . . . , φp ) and the white noise variance σ² based on the observations X1 , . . . , XN . The Yule–Walker equations are

ρ(k) = φ1 ρ(k − 1) + · · · + φp ρ(k − p), k = 1, · · · , p,
σ² = γ(0)(1 − φ1 ρ(1) − · · · − φp ρ(p)),

where ρ(k) is the ACF of the time series. We need to calculate the sample ACF. We can then solve the above equations to estimate φ1 , · · · , φp and σ².
Suppose that X1 , X2 , · · · , Xn are observations.
AR(1) model with mean 0: Xt = φ1 Xt−1 + Zt
Recall that we have
γ(1) = φ1 γ(0)
i.e.
φ1 = ρ(1)
EXAMPLE 2 Fit an AR(1): Xt = φ1 Xt−1 + Zt to data -0.06, -0.18, 0.06, 0.15, 0.13, -0.02,
0.19, -0.13, -0.26, -0.29, -0.17, -0.10, 0.10, 0.17, 0.04, 0.00, 0.15, 0.11, 0.01, 0.19
Because r1 = 0.4755, the estimated model is
X̂t = 0.4755Xt−1 .
AR(1) model with nonzero mean: Xt = δ + φ1 Xt−1 + Zt . Let xt = Xt − µ with µ = EXt = δ/(1 − φ1 ). Then

γx (1) = φ1 γx (0),

i.e.

φ1 = ρx (1).

We can use the sample ACF r1 to estimate ρ(1) and thus φ1 : φ̂1 = r1 ; and δ̂ = (1 − φ̂1 )µ̂.
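A minimal R sketch of this estimator, where x is assumed to hold the observations:

# Yule-Walker estimation of an AR(1) with nonzero mean
r1    <- acf(x, lag.max = 1, plot = FALSE)$acf[2]  # sample lag-1 ACF
phi1  <- r1
delta <- (1 - phi1) * mean(x)
c(delta = delta, phi1 = phi1)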
EXAMPLE 3 Fit an AR(1): Xt = δ + φ1 Xt−1 + Zt to data: 5.05, 5.02, 4.78, 4.73, 4.86, 4.81,
4.86, 4.74, 4.89, 5.03, 5.13, 5.16, 5.19, 5.13, 5.16, 5.10, 5.04, 5.07, 4.95, 4.91
We have

µ̂ = X̄ = (1/20) ∑_{t=1}^{20} Xt = 4.98

and

r1 = ∑_{t=1}^{19} (Xt − X̄)(Xt+1 − X̄)/∑_{t=1}^{20} (Xt − X̄)² = 0.7747.

Thus φ̂1 = r1 = 0.7747 and

δ̂ = (1 − φ̂1 )µ̂ = 1.1220.

Finally, the estimated model is

X̂t = 1.1220 + 0.7747Xt−1 .
AR(2) model with mean 0: Xt = φ1 Xt−1 + φ2 Xt−2 + Zt
Recall that we have
ρ(1) = φ1 + φ2 ρ(1)
ρ(2) = φ1 ρ(1) + φ2 .

Replacing ρ(k) by the sample ACF rk gives

r1 = φ1 + φ2 r1
r2 = φ1 r1 + φ2 .
EXAMPLE 4 Fit an AR(2): Xt = φ1 Xt−1 + φ2 Xt−2 + Zt to data 0.15, -0.06, -0.39, -0.56,
-0.52, -0.26, -0.11, 0.32, 0.31, 0.01, 0.00, 0.17, 0.52, 0.32, -0.08, -0.30, -0.16, 0.32, 0.29,
0.07
Because r1 = 0.64, r2 = 0.04, by solving

0.64 = φ1 + 0.64φ2
0.04 = 0.64φ1 + φ2

we have

φ̂1 = 1.04, φ̂2 = −0.62.

The estimated model is

X̂t = 1.04Xt−1 − 0.62Xt−2 .
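The same two equations can be solved with one matrix call in R; a small sketch:

# Solve the sample Yule-Walker equations for AR(2)
r1 <- 0.64; r2 <- 0.04
A   <- matrix(c(1, r1, r1, 1), nrow = 2)  # coefficient matrix
phi <- solve(A, c(r1, r2))                # approximately (1.04, -0.62)
phi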
EXAMPLE 5 Xt : 1.25, 1.64, 1.78, 1.33, 1.21, 1.04, 1.04, 1.55, 1.31, 0.89, 0.78,
1.28, 1.79, 2.42, 2.09, 1.57, 1.05, 0.97, 1.26, 1.70
We have X̄ = 1.40. Let xt = Xt − 1.40. For xt we have

r1 = 0.57, r2 = −0.11.

By solving

0.57 = φ1 + 0.57φ2
−0.11 = 0.57φ1 + φ2

we obtain φ̂1 = 0.94, φ̂2 = −0.64, so δ̂ = (1 − φ̂1 − φ̂2 )X̄ = 0.98 and the estimated model is X̂t = 0.98 + 0.94Xt−1 − 0.64Xt−2 .
MA(1) model with mean 0: Xt = Zt + θ1 Zt−1

We have

ρ1 = θ1 /(1 + θ1²).
Thus

ρ1 θ1² − θ1 + ρ1 = 0,

i.e.

r1 θ̂1² − θ̂1 + r1 = 0.

We can estimate θ1 by solving the above equation (we discard the root with absolute value greater than 1).
MA(1) model with nonzero mean: Xt = δ + Zt + θ1 Zt−1 . Because EXt = δ, define zt = Xt − δ. We can estimate the MA(1) model for zt and then for Xt .
EXAMPLE 7 Xt : -0.89 -0.53 0.54 -0.26 -1.34 -1.97 -0.35 0.46 -0.08 -1.13 0.04 1.64 1.95
0.94 -0.11 0.18 0.72 0.91 -1.09 0.12 1.29 0.79 1.67 -0.60 -1.72 -0.76 -2.60 -1.71 -0.39
-1.18
Fit an MA(1) model. We have

ȳ = ∑_{t=1}^{30} Xt /30 = −0.182, r1 = 0.5.

Then

θ̂1 = (1 − √(1 − 4r1²))/(2r1 ) = 1.00.
Thus the estimated model is
Xt = Zt + 1.00Zt−1 .
MA(q) model with nonzero mean: In general there is no analytic solution, though for some special cases solutions still exist, for example

Xt = δ + Zt + θp Zt−p .
There is no simple method for the estimation of a general ARMA(p,q) model (the details are beyond the scope of the module).
MAS451/MTH451/MH4500 TIME SERIES ANALYSIS
Chapter 4 (Part 4)
Estimation, diagnostic checking for nonseasonal Box-Jenkins models
Basic Questions:
(a) How to select ARMA(p,q) model by AIC ?
(b) “Is the fitted model OK?” What does OK mean?
(c) How to use Ljung-Box statistics?
(d) How to improve a fitted model ?
Suppose that the white noise Zt is Gaussian with variance σ². Let

σ̂² = (1/n) ∑_{j=1}^n (Xj − X̂j )²/rj−1 ,

with rj−1 being some constants, independent of σ². It turns out that the log likelihood function of ARMA(p,q) models is

ln L = −(n/2) ln σ̂² + Const.
The AIC (Akaike information criterion) is

AIC = −2 ln L + 2(p + q).

How does it work? Choose the model (choose the values of p, q) with minimum AIC. Intuitively, one can think of 2(p + q) as a penalty term to discourage over-parameterization.
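In R, candidate orders can be compared directly. A minimal sketch, where x is a hypothetical series:

# Compare candidate ARMA orders by AIC and keep the smallest
fits <- list(ar1    = arima(x, order = c(1, 0, 0)),
             ar2    = arima(x, order = c(2, 0, 0)),
             arma11 = arima(x, order = c(1, 0, 1)))
sapply(fits, AIC)  # pick the order with the minimum value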
For a white noise series, the sample ACF satisfies approximately

ρ̂(k) = rk ∼ N(0, 1/n).

We can use the above as a rough guide on whether each ρ(k) is zero.
Q(m) = n(n + 2) ∑_{k=1}^m rk²/(n − k),

where 0 ≪ m ≪ n (usually, m ≈ n/5). Q(m) is called the Ljung-Box statistic (or Portmanteau statistic).
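R provides this test directly. A minimal sketch for the residuals of a fitted ARMA(p,q), where fit is a hypothetical fitted object and fitdf = p + q:

# Ljung-Box test on the residuals; fitdf adjusts the chi-squared
# degrees of freedom for the number of estimated parameters
Box.test(residuals(fit), lag = 5, type = "Ljung-Box", fitdf = 2)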
EXAMPLE 1 The data Xt , t = 1, · · · , 20 are observed as: 0.50, -0.41, 0.37, -0.61, 0.23,
-0.13, 0.06, -0.11, 0.18, -0.14, 0.20, 0.09, -0.03, -0.02, -0.14, -0.07, 0.09, 0.09, -0.01, -0.10
[Figure: sample ACF and PACF of the series y.]
The residuals are et : 2, 3, ..., 20: 0.00, 0.02, -0.31, -0.29, 0.05, -0.06, -0.07, 0.08,
0.00, 0.08, 0.25, 0.04, -0.05, -0.16, -0.19, 0.02, 0.16, 0.06, -0.12
The SACF of et can then be computed.

(each H0 : ρe (k) = 0, k = 1, ..., 10 can be accepted separately, why?)

Consider the Ljung-Box test. If we let m = 5, then Q(5) is computed from the first five residual autocorrelations.
[Figure: standardized residuals, ACF of residuals, and p-values for the Ljung-Box statistic.]
Suppose the fitted model is

Xt = ϕ1 Xt−1 + Zt ,

and let Ẑt = Xt − ϕ̂1 Xt−1 denote the residuals.
If the SACF of Ẑt has a cut-off after lag 1, then it suggests

Ẑt ∼ MA(1),

i.e.

Ẑt = et + θet−1 .

Thus

Xt ∼ ARMA(1, 1).
Hopefully, êt is now closer to white noise.
If the SPACF of Ẑt has a cut-off after lag 1, it suggests

Ẑt ∼ AR(1),

i.e.

Ẑt = ψ1 Ẑt−1 + et .

Thus

(Xt − ϕ1 Xt−1 ) = ψ1 (Xt−1 − ϕ1 Xt−2 ) + et ,

i.e.

Xt ∼ AR(2).
Hopefully, êt is now closer to white noise.
MAS451/MTH451/MH4500 TIME SERIES ANALYSIS
Chapter 4 (Part 5)
Estimation, diagnostic checking for nonseasonal Box-Jenkins models
Basic Questions:
(a) Change a non-stationary time series into a stationary one.
(b) Fit ARIMA models.
Value of λ    Transformation
−1.0          1/Xt
−0.5          1/√Xt
0.0           ln(Xt )
0.5           √Xt
1.0           Xt (no transformation)
(a) If the variability of a time series increases as time advances, then the time series is non-stationary with respect to its variance; see Figure 1 below.
Figure 1: no. of passengers against time (month).
Figure 2: log(no. of passengers) against time (month).
After transformation, we then consider possible differencing to make the time series stationary.

Criterion: the ACF of a non-stationary time series converges to zero slowly, whereas that of a stationary time series converges to zero quickly.
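In R, differencing and this visual check each take one line. A minimal sketch, where y is a hypothetical series:

# First difference and its sample ACF/PACF; a slowly decaying ACF of y
# (and a quickly decaying ACF of z) suggests differencing once is enough
z <- diff(y)
acf(y); acf(z); pacf(z)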
2 ARIMA model
EXAMPLE 3 Figure 3 shows US Dow Jones Industrial Average Market Index {Yt }
from 17-Jul-02 to 20-Mar-03.
{Yt } is not stationary. See figures 3 and 4.
zt = yt − yt−1
Figure 3: the US Dow Jones Industrial Average Market Index {Yt }.
Figure 4: sample ACF and PACF of series y.
Figure 5: sample ACF and PACF of series z (or x).
Based on the sample ACF and PACF of the differenced series, we may fit

xt = Zt + θ1 Zt−1 + · · · + θ19 Zt−19 ,

or

xt − φ1 xt−1 − · · · − φ19 xt−19 = Zt .

Generally, we can fit the difference xt = Xt − Xt−1 of a time series by an ARMA(p,q) model,

φp (B)xt = θq (B)Zt ,

or

φp (B)(1 − B)Xt = θq (B)Zt .

More generally, if we fit wt = (1 − B)^d Xt by

φp (B)wt = θq (B)Zt ,

or

φp (B)(1 − B)^d Xt = θq (B)Zt ,

then Xt is said to follow an ARIMA(p,d,q) model.
For the example, we can fit an ARIMA(0,1,19) model to yt .

tsdiag(fitma)
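The notes do not show how the fitted object fitma was created; presumably it came from something like the following hypothetical call, consistent with the chosen order:

fitma <- arima(y, order = c(0, 1, 19))  # ARIMA(0,1,19): difference once, MA(19)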
[Figure: tsdiag output: standardized residuals, ACF of residuals, and p-values for the Ljung-Box statistic.]
Figure 6: the black dots are the observations; the red dots are the predictions.
predict(fitma, n.ahead = 20)

The fitted model is given in the R output.
[Figure: tsdiag output for the fitted model.]
Figure 7: the black dots are the observations; the red dots are the predictions.
EXAMPLE 4 Weekly sales of Super Tech Videocassette Tape [the data can be found
at the website].
Figure 8: series y.
Figure 9: sample ACF and PACF of series y.
zt = yt − yt−1 = (1 − B)yt .
[Figure: sample ACF and PACF of series z.]
[Figure: standardized residuals, ACF of residuals, and p-values for the Ljung-Box statistic for the fitted model.]
Thus, the model is adequate (OK), i.e. there is no significant autocorrelation in the residuals.
1. Write down the estimated model.
> fit
Call: arima(x = y, order = c(0, 1, 6))
Coefficients:
ma1 ma2 ma3 ma4 ma5 ma6
0.6331 -0.0160 0.0361 -0.0264 -0.1490 -0.4374
s.e. 0.0771 0.0892 0.0917 0.0879 0.1055 0.0783
sigma^2 estimated as 4.896: log likelihood = -356.33, aic = 726.65
The fitted model is

yt − yt−1 = Zt + 0.6331Zt−1 − 0.0160Zt−2 + 0.0361Zt−3 − 0.0264Zt−4 − 0.1490Zt−5 − 0.4374Zt−6 .
Figure 12: the black dots are the observations and the red dots are the predictions.