ECN302E Problem Set 08: Introduction to TSR and Forecasting, Part 2 (Solutions)
Table of contents
Question 1: Solution
Question 2
Question 2: Solution
Question 3
Question 3: Solution
Question 4 (Optional)
Question 4 (Optional): Solution
Question 1: Solution
Define the following terms in your own words.
a) One-step ahead forecast is the forecast for the next time period (T + 1), made using the existing time series data through period T.
b) Multi-step ahead forecast is the forecast for more than one period ahead, made using the existing time series data through period T.
c) Final prediction error (FPE) is an estimate of the MSFE that incorporates both terms in
$$\text{MSFE} = \sigma_u^2 + \operatorname{Var}\!\left(\hat{\beta}_0 + \hat{\beta}_1 Y_T + \cdots + \hat{\beta}_p Y_{T-p+1}\right).$$
The FPE estimates the MSFE as
$$\widehat{\text{FPE}} = \left(\frac{T+p+1}{T}\right) s_{\hat{u}}^2 = \left(\frac{T+p+1}{T-p-1}\right)\frac{SSR}{T}, \qquad \text{where } s_{\hat{u}}^2 = \frac{SSR}{T-p-1}.$$
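As a quick numerical illustration, the two algebraically equivalent forms of the FPE can be computed and compared; the function names and the example values of SSR, T, and p below are purely hypothetical:

```python
# Final prediction error (FPE): two algebraically equivalent forms.
# The function names and example inputs are illustrative, not from the problem set.

def fpe(ssr: float, T: int, p: int) -> float:
    """FPE = ((T+p+1)/T) * s_u^2, where s_u^2 = SSR / (T-p-1)."""
    s2 = ssr / (T - p - 1)          # s_u^2: SSR over degrees of freedom
    return (T + p + 1) / T * s2

def fpe_alt(ssr: float, T: int, p: int) -> float:
    """Equivalent form: FPE = ((T+p+1)/(T-p-1)) * (SSR/T)."""
    return (T + p + 1) / (T - p - 1) * (ssr / T)

# The two forms agree for any SSR, T, p:
print(abs(fpe(50.0, 100, 2) - fpe_alt(50.0, 100, 2)) < 1e-12)  # True
```

Either form can be used to compare candidate lag lengths p: the model with the smallest FPE is preferred.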
d) Pseudo out-of-sample forecasting is forecasting computed over part of the sample using a
procedure that treats those sample data as if they had not yet been realized.
e) Trend is a persistent long-term movement of a variable over time. A time series variable fluctuates
around its trend.
Deterministic trend is a nonrandom function of time. In other words, a persistent long-term
movement of a variable over time that can be represented as a nonrandom function of time.
Stochastic trend is random and varies over time. In other words, a persistent but random long-term
movement of a variable over time.
f) Random walk: A time series Yt is said to follow a random walk if the change in Yt , ∆Yt = ut , is i.i.d.:
$$Y_t = Y_{t-1} + u_t$$
g) Random walk with drift: A time series Yt is said to follow a random walk with drift if the change
in Yt is i.i.d. plus the drift β0 , that is, ∆Yt = β0 + ut :
$$Y_t = \beta_0 + Y_{t-1} + u_t$$
j) Dickey-Fuller statistic is a regression-based statistic used to test for a unit root in a
first-order autoregression, AR(1):
$$\Delta Y_t = \beta_0 + \delta Y_{t-1} + u_t, \qquad \text{DF statistic} = \frac{\hat{\delta}}{se(\hat{\delta})}$$
$$H_0: \delta = 0 \quad \text{vs.} \quad H_1: \delta < 0$$
k) Augmented Dickey-Fuller statistic is a regression-based statistic used to test for a unit root in a
p-th order autoregression, AR(p):
$$\Delta Y_t = \beta_0 + \delta Y_{t-1} + \gamma_1 \Delta Y_{t-1} + \cdots + \gamma_{p-1} \Delta Y_{t-p+1} + u_t, \qquad \text{ADF statistic} = \frac{\hat{\delta}}{se(\hat{\delta})}$$
$$H_0: \delta = 0 \quad \text{vs.} \quad H_1: \delta < 0$$
Question 2
The index of industrial production (IPt ) is a monthly time series that measures the quantity of industrial
commodities produced in a given month. This problem uses data on this index for the United States. All
regressions are estimated over the sample period 1986:M1 to 2017:M12 (that is, January 1986 through
December 2017). A researcher tests for a stochastic trend in ln(IPt ) using a regression of ∆ ln(IPt ) on a
constant, a linear time trend t, and ln(IPt–1 ), with standard errors computed using the homoskedasticity-only
formula; the estimated coefficient on ln(IPt–1 ) is –0.0070 with a standard error of 0.0037. Use the ADF
statistic to test for a stochastic trend (unit root) in ln(IPt ).
Question 2: Solution
To test for a stochastic trend (unit root) in ln(IPt ), the ADF statistic is the t-statistic testing the null
hypothesis that the coefficient on ln(IPt–1 ) is zero against the alternative hypothesis that the coefficient
on ln(IPt–1 ) is less than zero.
$$t = \text{ADF statistic} = \frac{\text{coefficient}}{se(\text{coefficient})} = \frac{-0.0070}{0.0037} = -1.89$$
From Table 15.4 in the textbook, the 10% critical value with a time trend is –3.12.
Because –1.89 > –3.12, the test does not reject the null hypothesis that ln(IPt ) has a unit autoregressive
root at the 10% significance level. That is, the test does not reject the null hypothesis that ln(IPt ) contains
a stochastic trend, against the alternative that it is stationary around a deterministic trend.
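The arithmetic of the test decision can be sketched in a few lines of Python; the helper name is made up for illustration, while the numbers (–0.0070, 0.0037, and the –3.12 critical value from Table 15.4) are those used in the solution above:

```python
# Sketch of the ADF test decision for Question 2; the helper name is illustrative.

def adf_statistic(coef: float, se: float) -> float:
    """ADF statistic: the coefficient on ln(IP_{t-1}) divided by its standard error."""
    return coef / se

stat = adf_statistic(-0.0070, 0.0037)
crit_10pct_trend = -3.12  # 10% critical value with a time trend (Table 15.4)

print(round(stat, 2))            # -1.89
print(stat < crit_10pct_trend)   # False -> do not reject the unit root null
```

Because the alternative is one-sided (δ < 0), we reject only when the statistic falls below the critical value; here it does not.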
Question 3
Suppose Yt follows a random walk Yt = Yt–1 + ut , for t = 1, 2, . . . , T, where Y0 = 0 and ut is i.i.d. with
mean 0 and variance σu2 .
a. Compute E(Yt ) and Var(Yt ).
b. Compute Cov(Yt , Ys ).
c. Is Yt stationary? Explain.
Question 3: Solution
Before starting the solution, we write Yt in a form that lets us apply the properties of expectation, variance,
and covariance. Substituting t = 1, 2, 3 in turn reveals the pattern.
$$Y_1 = Y_0 + u_1 = u_1 \qquad (t = 1,\ Y_0 = 0)$$
$$Y_2 = Y_1 + u_2 = u_1 + u_2 \qquad (t = 2)$$
$$Y_3 = Y_2 + u_3 = u_1 + u_2 + u_3 \qquad (t = 3)$$
Generalizing the pattern for any t,
$$Y_t = \sum_{i=1}^{t} u_i.$$
a.
$$E(Y_t) = E\!\left(\sum_{i=1}^{t} u_i\right) = E(u_1) + \cdots + E(u_t) = 0 \qquad \text{(linearity of expectation)} \;\blacksquare$$
$$\operatorname{Var}[Y_t] = E\!\left[\big(Y_t - E[Y_t]\big)^2\right] = E\!\left[\left(\sum_{i=1}^{t} u_i - 0\right)^{\!2}\right] = E\!\left[\left(\sum_{i=1}^{t} u_i\right)^{\!2}\right]$$
Recall: $(a + b + c + \cdots + n)^2 = a^2 + b^2 + c^2 + \cdots + n^2 + 2\underbrace{(ab + ac + \cdots + an + bc + \cdots)}_{\text{cross products}}$
$$= E\!\left[u_1^2 + u_2^2 + \cdots + u_t^2 + 2u_1 u_2 + 2u_1 u_3 + \cdots + 2u_{t-1} u_t\right]$$
$$= E[u_1^2] + E[u_2^2] + \cdots + E[u_t^2] + 2E[u_1 u_2] + 2E[u_1 u_3] + \cdots + 2E[u_{t-1} u_t]$$
$$= \underbrace{\sigma_u^2 + \sigma_u^2 + \cdots + \sigma_u^2}_{t\ \text{terms}} + 0 + 0 + \cdots + 0 = t\,\sigma_u^2 \;\blacksquare$$
b.
$$\operatorname{Cov}(Y_t, Y_s) = E[Y_t Y_s] - E[Y_t]\,E[Y_s] = E\!\left[\left(\sum_{i=1}^{t} u_i\right)\!\left(\sum_{j=1}^{s} u_j\right)\right]$$
$$= E\big[(u_1 + u_2 + \cdots + u_t)(u_1 + u_2 + \cdots + u_s)\big]$$
Expanding the product, each matched pair $u_i u_i$ contributes $E[u_i^2] = \sigma_u^2$, while every cross term
$u_i u_j$ with $i \neq j$ has expectation zero because the $u_i$ are i.i.d. with mean zero. There are
$\min(t, s)$ matched pairs, so
$$\operatorname{Cov}(Y_t, Y_s) = \min(t, s)\,\sigma_u^2 \;\blacksquare$$
c. A series is stationary if its mean, variance, and covariances are not functions of time.
• The mean of this series, E(Yt ) = 0, is not a function of time.
• The variance, Var(Yt ) = tσu2 , is a function of time.
• The covariance, Cov(Yt , Ys ) = min(t, s)σu2 , is a function of time.
Thus, this series is not stationary. (We can also read “is a function of time” as “depends on time, t”.)
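The three moments derived above (zero mean, variance tσu², covariance min(t, s)σu²) can be checked with a small Monte Carlo simulation; the parameter choices below (σu = 1, T = 10, 20,000 paths) are arbitrary:

```python
import random
import statistics

# Monte Carlo check of E(Y_t) = 0, Var(Y_t) = t*sigma_u^2, and
# Cov(Y_t, Y_s) = min(t, s)*sigma_u^2 for the random walk Y_t = Y_{t-1} + u_t, Y_0 = 0.
random.seed(0)
N, T, sigma_u = 20_000, 10, 1.0

paths = []
for _ in range(N):
    y, path = 0.0, []
    for _t in range(T):
        y += random.gauss(0.0, sigma_u)  # u_t ~ N(0, sigma_u^2), i.i.d.
        path.append(y)
    paths.append(path)

y5 = [p[4] for p in paths]   # draws of Y_5 across paths
y10 = [p[9] for p in paths]  # draws of Y_10 across paths

m5, m10 = statistics.mean(y5), statistics.mean(y10)
var10 = statistics.variance(y10)                                     # theory: 10 * 1 = 10
cov = sum((a - m5) * (b - m10) for a, b in zip(y5, y10)) / (N - 1)   # theory: min(5, 10) = 5

print(round(m5, 1), round(var10, 1), round(cov, 1))
```

The sample mean stays near 0 while the sample variance grows with t, which is exactly the failure of stationarity argued in part c.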
Question 4 (Optional)
Suppose Yt follows the stationary AR(1) model Yt = 2.5 + 0.7Yt–1 + ut , where ut is i.i.d. with E(ut ) = 0
and Var(ut ) = 9.
a. Compute Var(Yt ).
b. Compute the first and second autocovariances, Cov(Yt , Yt–1 ) and Cov(Yt , Yt–2 ).
c. Compute the first and second autocorrelations, Corr(Yt , Yt–1 ) and Corr(Yt , Yt–2 ).
d. Given the value of YT , compute the forecast of YT+1 .
Question 4 (Optional): Solution
a. Taking the variance of both sides of the model,
$$\operatorname{Var}(Y_t) = \operatorname{Var}(2.5 + 0.7Y_{t-1} + u_t)$$
$$= \underbrace{\operatorname{Var}(2.5)}_{0} + \underbrace{\operatorname{Var}(0.7Y_{t-1})}_{0.49\operatorname{Var}(Y_{t-1})} + \underbrace{\operatorname{Var}(u_t)}_{9} + 2\Big[\underbrace{\operatorname{Cov}(2.5,\, 0.7Y_{t-1})}_{0} + \underbrace{\operatorname{Cov}(2.5,\, u_t)}_{0} + \underbrace{\operatorname{Cov}(0.7Y_{t-1},\, u_t)}_{0.7\operatorname{Cov}(Y_{t-1},\, u_t)\,=\,0}\Big]$$
$$= 0.49\operatorname{Var}(Y_{t-1}) + 9 = 0.49\operatorname{Var}(Y_t) + 9 \qquad \text{(stationarity: } \operatorname{Var}(Y_{t-1}) = \operatorname{Var}(Y_t)\text{)}$$
$$0.51\operatorname{Var}(Y_t) = 9 \;\Longrightarrow\; \operatorname{Var}(Y_t) = \frac{9}{0.51} \approx 17.647 \;\blacksquare$$
b. We expand Yt using the model so that the stationarity property can be applied inside Cov(·, ·), and then
substitute the value of the variance.
$$\operatorname{Cov}(Y_t, Y_{t-1}) = \operatorname{Cov}(2.5 + 0.7Y_{t-1} + u_t,\; Y_{t-1})$$
$$= \underbrace{\operatorname{Cov}(2.5,\, Y_{t-1})}_{0} + \underbrace{\operatorname{Cov}(0.7Y_{t-1},\, Y_{t-1})}_{0.7\operatorname{Cov}(Y_{t-1},\, Y_{t-1})} + \underbrace{\operatorname{Cov}(u_t,\, Y_{t-1})}_{0} \qquad \text{(bilinearity of covariance)}$$
$$= 0.7\operatorname{Cov}(Y_{t-1}, Y_{t-1}) = 0.7\underbrace{\operatorname{Cov}(Y_t, Y_t)}_{\operatorname{Var}(Y_t)} \qquad \text{(stationarity)}$$
$$= 0.7\,\sigma_Y^2 = 0.7 \times 17.647 \approx 12.353 \;\blacksquare$$
$$\operatorname{Cov}(Y_t, Y_{t-2}) = \operatorname{Cov}(2.5 + 0.7Y_{t-1} + u_t,\; Y_{t-2})$$
$$= \underbrace{\operatorname{Cov}(2.5,\, Y_{t-2})}_{0} + \underbrace{\operatorname{Cov}(0.7Y_{t-1},\, Y_{t-2})}_{0.7\operatorname{Cov}(Y_{t-1},\, Y_{t-2})} + \underbrace{\operatorname{Cov}(u_t,\, Y_{t-2})}_{0} \qquad \text{(bilinearity of covariance)}$$
$$= 0.7\operatorname{Cov}(Y_{t-1}, Y_{t-2}) = 0.7\operatorname{Cov}(2.5 + 0.7Y_{t-2} + u_{t-1},\; Y_{t-2})$$
$$= 0.7\Big[\underbrace{\operatorname{Cov}(2.5,\, Y_{t-2})}_{0} + \underbrace{\operatorname{Cov}(0.7Y_{t-2},\, Y_{t-2})}_{0.7\operatorname{Cov}(Y_{t-2},\, Y_{t-2})} + \underbrace{\operatorname{Cov}(u_{t-1},\, Y_{t-2})}_{0}\Big]$$
$$= (0.7)^2 \underbrace{\operatorname{Cov}(Y_t, Y_t)}_{\operatorname{Var}(Y_t)} = (0.7)^2\,\sigma_Y^2 = 0.49 \times 17.647 \approx 8.647 \;\blacksquare \qquad \text{(stationarity)}$$
c.
$$\operatorname{Corr}(Y_t, Y_{t-1}) = \frac{\operatorname{Cov}(Y_t, Y_{t-1})}{\sqrt{\operatorname{Var}(Y_t)\operatorname{Var}(Y_{t-1})}} = \frac{0.7\,\sigma_Y^2}{\sigma_Y^2} = 0.7 \;\blacksquare$$
$$\operatorname{Corr}(Y_t, Y_{t-2}) = \frac{\operatorname{Cov}(Y_t, Y_{t-2})}{\sqrt{\operatorname{Var}(Y_t)\operatorname{Var}(Y_{t-2})}} = \frac{(0.7)^2\,\sigma_Y^2}{\sigma_Y^2} = 0.49 \;\blacksquare$$
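The variance, autocovariances, and autocorrelations derived above can likewise be verified by simulating the AR(1) model directly; the sample size and burn-in length below are arbitrary choices:

```python
import random
import statistics

# Simulate Y_t = 2.5 + 0.7*Y_{t-1} + u_t with Var(u_t) = 9 (so sd = 3) and check
# Var(Y) ~ 17.647, Corr(Y_t, Y_{t-1}) ~ 0.7, Corr(Y_t, Y_{t-2}) ~ 0.49.
random.seed(1)
n_burn, n = 1_000, 100_000

y, ys = 0.0, []
for t in range(n_burn + n):
    y = 2.5 + 0.7 * y + random.gauss(0.0, 3.0)
    if t >= n_burn:              # discard burn-in so the sample is effectively stationary
        ys.append(y)

mean = statistics.mean(ys)       # theory: 2.5 / (1 - 0.7) ~ 8.333
var = statistics.variance(ys)    # theory: 9 / 0.51 ~ 17.647

def autocorr(k: int) -> float:
    """Sample autocorrelation of the simulated series at lag k."""
    num = sum((ys[i] - mean) * (ys[i + k] - mean) for i in range(len(ys) - k))
    den = sum((v - mean) ** 2 for v in ys)
    return num / den

print(round(var, 1), round(autocorr(1), 2), round(autocorr(2), 2))
```

The sample autocorrelations decay geometrically at rate 0.7, matching the pattern Corr(Yt , Yt–k ) = (0.7)^k implied by the derivation.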
d. Because the value of YT is given, we simply substitute it into the conditional expectation implied by the model:
$$\hat{Y}_{T+1|T} = E(Y_{T+1} \mid Y_T) = 2.5 + 0.7\,Y_T.$$