Theme 6 Time Series and Autocorrelation Lecture Notes

The document discusses time series data, differentiating between static and dynamic models, and the concept of autocorrelation, including its detection and remedial measures. It covers the classification of time series models, the implications of autocorrelation in regression analysis, and methods for detecting autocorrelation such as the Durbin-Watson test. Additionally, it highlights the importance of using Generalized Least Squares (GLS) over Ordinary Least Squares (OLS) for hypothesis testing in the presence of autocorrelation.


Time Series and Autocorrelation

6.1 What is meant by time series data
6.2 Differentiate between static and dynamic (Finite Distributed Lag) models in time series
6.3 Detecting autocorrelation [Durbin–Watson – runs test – Breusch–Godfrey]
6.4 Remedial measures for autocorrelation
6.5 Applications of autocorrelation
6.6 Stationary vs. non-stationary time series [correlogram – unit root tests – random walk – deterministic trends – AR(p) and MA(q) structure]
6.7 Seasonality in time series and dummy variables

6.1 Time Series Data: Basic Analysis

Time series data is the type of data where the past can affect the future, but not vice versa.

What, then, is the main difference between cross-sectional data and time series data?
Ø Time series data has a temporal ordering, unlike cross-sectional data.
Ø Economic time series still satisfy the intuitive requirements of random variables: for example, today we cannot predict precisely what the Dow Jones Industrial Average will be at the close of the trading session; we can only speculate.
Ø Since outcomes are not foreknown, they should clearly be treated as random variables.
Ø A sequence of random variables indexed by time is known as a stochastic process.

A generic time series regression model can be written as:

y_t = β_0 + β_1 x_t1 + … + β_k x_tk + u_t
6.1.1 Classification of Time Series Models

Static model: the name is driven from the fact that a contemporaneous relationship is being modelled between y and z, i.e. y_t and z_t are dated contemporaneously.

Example: the Phillips curve in economics.

inf_t (fitted) = 1.42 − 0.468 unem_t
se = (1.72) (0.289)
R² = 0.053, adjusted R² = 0.033, n = 49

Ø inf_t is the annual inflation rate.
Ø unem_t is the unemployment rate.

The Phillips curve assumes a constant natural rate of unemployment and constant inflationary expectations in the economy, and it seeks the tradeoff between the two; refer to the Mankiw model (1994). The tradeoff between inflation and unemployment is tested with H_0: β_1 = 0 against H_1: β_1 < 0, i.e. a negative relationship between inflation and unemployment: in a recession unemployment rises sharply while inflation becomes lower, as consumer demand for goods weakens and businesses hire fewer employees.
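A static regression of this form can be reproduced on simulated data. In the sketch below the unemployment series, the error scale, and the seed are hypothetical assumptions, chosen so that the true slope matches the −0.468 estimate quoted above:

```python
import numpy as np

# Hypothetical simulation of a static Phillips-curve regression.
rng = np.random.default_rng(0)
n = 49
unem = rng.uniform(3.0, 10.0, n)                       # unemployment rate, % (assumed)
inf = 1.42 - 0.468 * unem + rng.normal(0.0, 1.5, n)    # inflation rate, % (assumed DGP)

# OLS: regress inflation on a constant and unemployment
X = np.column_stack([np.ones(n), unem])
b0, b1 = np.linalg.lstsq(X, inf, rcond=None)[0]
print(f"intercept={b0:.3f}, slope={b1:.3f}")           # slope should come out negative
```

The estimated slope recovers the assumed negative tradeoff between inflation and unemployment.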

6.2 Finite Distributed Lag Models

Ø A type of time series model in which one or more variables are allowed to affect y with a lag, using, for example, annual observations:

y_t = α_0 + δ_0 x_t + δ_1 x_{t−1} + δ_2 x_{t−2} + u_t

which is a finite distributed lag model of order two.
Ø δ_0 is the impact propensity: it reflects the immediate change in y.
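Estimating a finite distributed lag of order two by OLS only requires constructing the lagged regressors. In this sketch the data are simulated and the true lag coefficients (δ_0 = 1.0, δ_1 = 0.6, δ_2 = 0.3) are assumptions for the illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
x = rng.normal(size=T)
u = rng.normal(scale=0.1, size=T)

# assumed DGP: y_t = 0.5 + 1.0 x_t + 0.6 x_{t-1} + 0.3 x_{t-2} + u_t
y = np.empty(T)
y[:2] = 0.0                                   # first two obs dropped below
y[2:] = 0.5 + 1.0 * x[2:] + 0.6 * x[1:-1] + 0.3 * x[:-2] + u[2:]

# regressors: constant, x_t, x_{t-1}, x_{t-2}
X = np.column_stack([np.ones(T - 2), x[2:], x[1:-1], x[:-2]])
coef = np.linalg.lstsq(X, y[2:], rcond=None)[0]

impact = coef[1]            # delta_0, the impact propensity
long_run = coef[1:].sum()   # long-run propensity: delta_0 + delta_1 + delta_2
```

The impact propensity is the coefficient on the contemporaneous regressor; summing all lag coefficients gives the long-run propensity.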

6.3 What is meant by autocorrelation?

Nature of the problem:
Ø The term autocorrelation may be defined as "correlation between members of series of observations ordered in time [as in time series data] or space [as in cross-sectional data]".[2]
Ø In the regression context, the classical linear regression model assumes that such correlation does not exist in the disturbances u_i. Symbolically,

cov(u_i, u_j | x_i, x_j) = E(u_i u_j) = 0,  i ≠ j

Put simply, the classical model assumes that the disturbance term relating to any observation is not influenced by the disturbance term relating to any other observation. For example, if we are dealing with quarterly time series data involving the regression of output on labor and capital inputs and if, say, there is a labor strike affecting output in one quarter, there is no reason to believe that this disruption will be carried over to the next quarter: if output is lower this quarter, there is no reason to expect it to be lower next quarter. Similarly, if we are dealing with cross-sectional data involving the regression of family consumption expenditure on family income, the effect of an increase of one family's income on its consumption expenditure is not expected to affect the consumption expenditure of another family.

However, if there is such a dependence, we have autocorrelation. Symbolically,

E(u_i u_j) ≠ 0,  i ≠ j

In this situation, the disruption caused by a strike this quarter may very well affect output next quarter, or the increase in the consumption expenditure of one family may very well prompt another family to increase its consumption expenditure if it wants to keep up with the Joneses.

Before we find out why autocorrelation exists, it is essential to clear up some terminological questions. Although it is now a common practice to treat the terms autocorrelation and serial correlation synonymously, some authors prefer to distinguish the two.[1] For example, Tintner defines autocorrelation as "lag correlation of a given series with itself, lagged by a number of time units", whereas he reserves the term serial correlation for lag correlation between two different series: correlation between u1, u2, ..., u10 and the same series lagged, u2, u3, ..., u11, is autocorrelation, whereas correlation between series such as u1, ..., u10 and v1, ..., v11, where u and v are two different time series, is serial correlation.

Footnotes:
[1] On this, see William H. Greene, Econometric Analysis, 4th ed., Prentice Hall, NJ, 2000, and Paul A. Rudd, An Introduction to Classical Econometric Theory, Oxford University Press, Chapter 19.
[2] Maurice G. Kendall and William R. Buckland, A Dictionary of Statistical Terms, Hafner Publishing Company, New York, 1971, p. 8.

6.3.1 OLS Estimation in the Presence of Autocorrelation

What are the types of autocorrelation? Figure 6.1 illustrates the two types: û_t and û_{t−1} positively correlated (+ve corr.) or negatively correlated (−ve corr.).

Ø As a starting point, or first approximation, one can assume that the disturbance, or error, terms are generated by the following mechanism:

u_t = ρ u_{t−1} + ε_t,  −1 < ρ < 1   (12.2.1)

Ø where ρ (rho) is known as the coefficient of autocovariance: the value of the disturbance term in period t equals ρ times its value in the previous period plus a purely random error term. Here ε_t is a stochastic disturbance term that satisfies the standard OLS assumptions, namely:

E(ε_t) = 0
var(ε_t) = σ²_ε   (12.2.2)
cov(ε_t, ε_{t+s}) = 0 for s ≠ 0

In the engineering literature, an error term with the preceding properties is often called a white noise error term. The scheme (12.2.1) is known as a Markov first-order autoregressive scheme, or simply a first-order autoregressive scheme, usually denoted AR(1). The name autoregressive is appropriate because (12.2.1) can be interpreted as the regression of u_t on itself lagged one period; it is first order because u_t and its immediate past value are involved, that is, the maximum lag is 1. If the model were u_t = ρ_1 u_{t−1} + ρ_2 u_{t−2} + ε_t, it would be an AR(2), or second-order, autoregressive scheme, and so on.

6.4 Consequences of Using OLS in the Presence of Autocorrelation

Ø As in the case of heteroscedasticity, in the presence of autocorrelation the OLS estimators are still linear, unbiased, consistent, and asymptotically normally distributed, but they are no longer efficient (i.e., minimum variance).
Ø Figure 6.2 compares confidence intervals based on GLS and OLS. Do you see the difference between the GLS 95% interval and the OLS 95% interval? The OLS interval could be misleading in hypothesis testing and in judging the significance of relationships between variables.
Ø The message is: to establish confidence intervals and to test hypotheses, one should use GLS and not OLS, even though the estimators derived from the latter are unbiased and consistent.
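One feasible-GLS idea along these lines, Cochrane–Orcutt quasi-differencing, can be sketched on simulated data with AR(1) errors. The data-generating values (ρ = 0.8, slope 2) are assumptions for the illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 300
x = rng.normal(size=T)
e = rng.normal(scale=0.5, size=T)

# AR(1) disturbances: u_t = 0.8 u_{t-1} + e_t (rho = 0.8 assumed)
u = np.empty(T)
u[0] = e[0]
for t in range(1, T):
    u[t] = 0.8 * u[t - 1] + e[t]
y = 1.0 + 2.0 * x + u

def ols(X, yv):
    b = np.linalg.lstsq(X, yv, rcond=None)[0]
    return b, yv - X @ b

def dw(r):
    """Durbin-Watson statistic of a residual series."""
    return np.sum(np.diff(r) ** 2) / np.sum(r ** 2)

X = np.column_stack([np.ones(T), x])
b_ols, resid = ols(X, y)                    # OLS ignores the autocorrelation

# step 1: estimate rho from the OLS residuals
rho = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])

# step 2: quasi-difference and re-estimate (Cochrane-Orcutt step)
y_star = y[1:] - rho * y[:-1]
X_star = X[1:] - rho * X[:-1]
b_co, resid_star = ols(X_star, y_star)
```

The Durbin–Watson statistic of the raw OLS residuals is far below 2, while that of the quasi-differenced residuals is close to 2, showing the transformation has removed most of the AR(1) structure.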
6.5 Detecting Autocorrelation

[Graphical plot – Durbin–Watson – Breusch–Godfrey]

I. Graphical Method
Ø We can simply plot the residuals against time (the time sequence plot); here this shows the residuals obtained from the log wages–productivity regression.
Ø We can also plot the standardized residuals against time, i.e. û_t divided by the standard error of the regression (σ̂), that is, û_t/σ̂.

Figure 6.3 Residuals (magnified 100 times) and standardized residuals from the wages–productivity regression.

Both û_t and the standardized û_t exhibit a pattern, suggesting that perhaps u_t is not random and autocorrelation might exist.
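The standardized residuals used in such a plot can be computed as follows; the trending series below is a simulated stand-in for the wages–productivity data:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 100
t = np.arange(T, dtype=float)
y = 0.5 + 0.02 * t + rng.normal(scale=0.3, size=T)   # hypothetical log-wage-like series

# OLS of y on a constant and a time trend
X = np.column_stack([np.ones(T), t])
b = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b

sigma_hat = np.sqrt(resid @ resid / (T - 2))         # standard error of regression
std_resid = resid / sigma_hat                        # standardized residuals

# plotting std_resid against t (e.g. with matplotlib) gives the time sequence plot;
# long runs of same-signed residuals suggest autocorrelation
```

Standardized residuals have mean zero and roughly unit standard deviation, which makes patterns easier to judge on a common scale.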
6.5.1 Formal Tests for Autocorrelation [Durbin–Watson – Breusch–Godfrey]

II. Durbin–Watson d Test:

The most celebrated test for detecting autocorrelation is that developed by the statisticians Durbin and Watson. It is popularly known as the Durbin–Watson d statistic.

Ø If there is no autocorrelation (of the first order), d is expected to be about 2.
Ø d < 2: û_t and û_{t−1} are positively correlated.
Ø d > 3: û_t and û_{t−1} are negatively correlated.
Ø d in [2, 3]: the absence of autocorrelation.
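The d statistic itself is simple to compute from the residuals. The two simulated series below (assumed white noise and an assumed AR(1) with coefficient 0.7) illustrate d ≈ 2 under no autocorrelation and d well below 2 under positive autocorrelation:

```python
import numpy as np

def durbin_watson(resid):
    """d = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2; ranges 0..4, about 2 under no AR(1)."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(4)
eps = rng.normal(size=500)

d_white = durbin_watson(eps)              # white noise: d near 2

ar = np.empty(500)
ar[0] = eps[0]
for t in range(1, 500):
    ar[t] = 0.7 * ar[t - 1] + eps[t]      # positively autocorrelated series
d_pos = durbin_watson(ar)                 # d well below 2, roughly 2 * (1 - 0.7)
```

The approximation d ≈ 2(1 − ρ̂) links the statistic directly to the estimated first-order autocorrelation of the residuals.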

6.5.2 Application: Relationship between Wages and Productivity in the Business Sector of the United States, 1960–2005

Now that we have discussed the consequences of autocorrelation, the obvious question is: how do we detect it, and how do we correct for it? Before we turn to these topics, it is useful to consider a concrete example. Table 12.4 gives data on indexes of real compensation per hour Y (RCOMPB) and output per hour X (PRODB) in the business sector of the U.S. economy for the period 1960–2005, the base of the indexes being 1992 = 100.

TABLE 12.4 Indexes of Real Compensation and Productivity, U.S., 1960–2005 (index numbers, 1992 = 100; quarterly data seasonally adjusted). Source: Economic Report of the President, 2007, Table B-49.

Year  Y      X     | Year  Y      X
1960  60.8   48.9  | 1983  90.3   83.0
1961  62.5   50.6  | 1984  90.7   85.2
1962  64.6   52.9  | 1985  92.0   87.1
1963  66.1   55.0  | 1986  94.9   89.7
1964  67.7   56.8  | 1987  95.2   90.1
1965  69.1   58.8  | 1988  96.5   91.5
1966  71.7   61.2  | 1989  95.0   92.4
1967  73.5   62.5  | 1990  96.2   94.4
1968  76.2   64.7  | 1991  97.4   95.9
1969  77.3   65.0  | 1992  100.0  100.0
1970  78.8   66.3  | 1993  99.7   100.4
1971  80.2   69.0  | 1994  99.0   101.3
1972  82.6   71.2  | 1995  98.7   101.5
1973  84.3   73.4  | 1996  99.4   104.5
1974  83.3   72.3  | 1997  100.5  106.5
1975  84.1   74.8  | 1998  105.2  109.5
1976  86.4   77.1  | 1999  108.0  112.8
1977  87.6   78.5  | 2000  112.0  116.1
1978  89.1   79.3  | 2001  113.5  119.1
1979  89.3   79.3  | 2002  115.7  124.0
1980  89.1   79.2  | 2003  117.7  128.7
1981  89.3   80.8  | 2004  119.0  132.7
1982  90.4   80.1  | 2005  120.2  135.7

Notes: Y = index of real compensation per hour, business sector (1992 = 100); X = index of output per hour, business sector (1992 = 100).

First plotting the data on Y and X, we obtain Figure 12.7. Since the relationship between real compensation and labor productivity is expected to be positive, it is not surprising that the two variables are positively related. What is surprising is that the relationship between the two is almost linear, although there is some hint that at higher values of productivity the relationship may be slightly nonlinear. Therefore, a linear as well as a log–linear model were estimated, with the following results (d is the Durbin–Watson statistic, which will be discussed shortly):

Ŷ_t = 32.7419 + 0.6704 X_t
se = (1.3940) (0.0157)
t = (23.4874) (42.7813)   (12.5.1)
r² = 0.9765, d = 0.1739, σ̂ = 2.3845

ln Y_t (fitted) = 1.6067 + 0.6522 ln X_t
se = (0.0547) (0.0124)
t = (29.3680) (52.7996)   (12.5.2)
r² = 0.9845, d = 0.2176, σ̂ = 0.0221

Since model (12.5.2) is double-log, the slope coefficient represents an elasticity: if labor productivity goes up by 1 percent, the index of real compensation goes up, on average, by about 0.65 percent. Qualitatively, both models give similar results. In both cases the estimated coefficients are "highly" significant, as indicated by the high t values. In the linear model, if the index of productivity goes up by a unit, on average the index of compensation goes up by about 0.67 units.

How reliable are the results in Eqs. (12.5.1) and (12.5.2) if there is autocorrelation? As stated previously, if there is autocorrelation, the estimated standard errors are biased, as a result of which the estimated t ratios are unreliable. We obviously need to find out if our data suffer from autocorrelation; the detection methods are illustrated with the log–linear model (12.5.2).

Recall that the assumption of nonautocorrelation of the classical model relates to the population disturbances u_t, which are not directly observable. What we have instead are their proxies, the residuals û_t, which can be obtained by the usual OLS procedure. (Newey–West HAC standard errors take into account both autocorrelation and heteroscedasticity, provided the sample is reasonably large.)

6.5.5 General Test of Autocorrelation: The Breusch–Godfrey (BG) Test

Ø The null hypothesis to be tested is H0: ρ1 = ρ2 = ··· = ρp = 0, i.e. no serial correlation of any order up to p.
Ø The alternative hypothesis is H1: at least one ρj ≠ 0.
Ø The judgement criterion for the presence or absence of autocorrelation is the test's p-value compared with a significance level such as 0.05.

How do we correct or treat for the presence of autocorrelation?
Ø Taking lags of the variables, which is a typical treatment for time series data.
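The BG test can be sketched directly from its definition: an auxiliary regression of the OLS residuals on the original regressors plus p lagged residuals, with LM = (n − p)·R² distributed χ²(p) under H0. The simulated data and the AR(1) coefficient 0.6 below are assumptions for the demonstration:

```python
import numpy as np

def breusch_godfrey_lm(y, X, p=1):
    """LM version of the BG test: regress OLS residuals on X plus p lagged
    residuals; LM = (n - p) * R^2 ~ chi2(p) under H0 of no serial correlation."""
    n = len(y)
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    lagged = np.column_stack([np.r_[np.zeros(k), e[:-k]] for k in range(1, p + 1)])
    Z = np.column_stack([X, lagged])
    g = np.linalg.lstsq(Z, e, rcond=None)[0]
    v = e - Z @ g
    tss = (e - e.mean()) @ (e - e.mean())
    return (n - p) * (1.0 - (v @ v) / tss)

rng = np.random.default_rng(5)
T = 200
x = rng.normal(size=T)
eps = rng.normal(scale=0.5, size=T)
u = np.empty(T)
u[0] = eps[0]
for t in range(1, T):
    u[t] = 0.6 * u[t - 1] + eps[t]        # AR(1) errors (assumed for the demo)
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(T), x])
lm = breusch_godfrey_lm(y, X, p=1)
# reject H0 at the 5% level if lm exceeds the chi-square(1) critical value, 3.84
```

With serially correlated errors the LM statistic far exceeds the 5% critical value, so the null of no autocorrelation is rejected.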

Application 1: A Concluding Example (Gujarati, Section 12.13)

In Example 10.2, data were presented on consumption, income, wealth, and interest rates for the U.S., all in real terms. Based on these data, the following consumption function was estimated for the U.S. for the period 1947–2000, regressing the log of consumption on the logs of income and wealth. The interest rate is not expressed in log form because some of the real interest rate figures were negative.

Dependent Variable: ln(CONSUMPTION)


Method: Least Squares
Sample: 1947–2000
Included observations: 54

Coefficient Std. Error t-Statistic Prob.


C -0.467711 0.042778 -10.93343 0.0000
ln(INCOME) 0.804873 0.017498 45.99836 0.0000
ln(WEALTH) 0.201270 0.017593 11.44060 0.0000
INTEREST -0.002689 0.000762 -3.529265 0.0009

R-squared 0.999560 Mean dependent var. 7.826093


Adjusted R-squared 0.999533 S.D. dependent var. 0.552368
S.E. of regression 0.011934 F-statistic 37832.59
Sum squared resid. 0.007121 Prob. (F-statistic) 0.000000
Log likelihood 164.5880 Durbin-Watson stat. 1.289219

As expected, the income and wealth elasticities are positive and the interest rate semielasticity is negative. Although the estimated coefficients seem to be individually highly statistically significant, we need to check for possible autocorrelation in the error term; as we know, in the presence of autocorrelation the estimated standard errors may be underestimated. Examining the output:

Ø All coefficient signs are pro-intuitive, in line with the literature findings and the main theoretical review: wealth and income are positively related to consumption, whereas interest rates have a negative relationship with consumption.
Ø All variables are highly significant, with p-values less than 0.05.
Ø The overall goodness of fit of the model is impressive, with an R² of 0.99.
Ø The Durbin–Watson statistic is too low, which might raise doubts about the validity of the results and point to the presence of serial correlation/autocorrelation.
Ø Test for the presence of serial correlation between wealth and income, as the residuals of both variables might be correlated.
Ø Take AR(1) lags, first differences of the variables, and an MA(1) of the residuals to make sure they are not autocorrelated.
6.6 Stationary vs. Non-Stationary Time Series

Ø Stationary process: one whose probability distribution is stable over time, in the sense that any collection of random variables drawn from the process has the same joint distribution as the same collection shifted in time; it follows a stable stochastic process without a deterministic trend or a random-walk component.

Ø Thus, stationarity implies that the x_t's are identically distributed and that the nature of any correlation between adjacent terms is the same across all periods.

Example: the joint distribution of (x_1, x_2), the first two terms in the sequence, must be the same as the joint distribution of (x_t, x_{t+1}) for any t ≥ 1. Stationarity concerns the distribution of a single process over time; it has nothing to do with correlation between different variables across time periods.
6.6.1 Random Walk (highly persistent time series)

Ø A random walk is an AR(1) model, y_t = y_{t−1} + e_t, with ρ = 1, meaning the series is not weakly dependent.
Ø Note that trending and persistence are different things: a series can be trending but weakly dependent, or a series can be highly persistent without any trend.
Ø A random walk with drift is an example of a highly persistent series that is trending:

y_t = α_0 + y_{t−1} + e_t,  t = 1, 2, …   [11.23]

where {e_t : t = 1, 2, …} and y_0 satisfy the same properties as in the random walk model, and α_0 is called the drift term. By repeated substitution,

y_t = α_0 t + e_t + e_{t−1} + … + e_1 + y_0,

so if y_0 = 0, E(y_t) = α_0 t: the expected value of y_t grows over time if α_0 > 0 and shrinks over time if α_0 < 0. The best prediction of y_{t+h} at time t is y_t plus the drift α_0 h, and the variance of y_t increases over time, which violates stationarity; a random walk therefore needs to be treated, since it does not deliver a relationship for y_t that is consistent over time.
Ø Figure 11.3 (Wooldridge) shows a realization of a random walk with drift, y_t = 2 + y_{t−1} + e_t, with y_0 = 0, e_t ~ Normal(0, 9), and n = 50: y_t tends to grow over time, but the series does not regularly return to the trend line E(y_t) = 2t. (Figure 11.2 shows the U.S. three-month T-bill rate, 1948–1996, an example of a highly persistent series with no trend.)
Ø A random walk with drift is another example of a unit root process, because it is the special case ρ = 1 in an AR(1) model with an intercept: y_t = α_0 + ρ y_{t−1} + e_t.
Ø Why persistence matters: if GDP is asymptotically uncorrelated, then the level of GDP in the coming year is at best weakly related to what GDP was, say, 30 years ago, and a policy that affected GDP long ago has very little lasting impact. On the other hand, if GDP is strongly dependent, then next year's GDP can be highly correlated with the GDP from many years ago, and a policy that causes a discrete change in GDP can have long-lasting effects.
Ø Unit root test regressions:

Δy_t = a_0 + δ y_{t−1} + u_t (test for a unit root with drift)
Δy_t = a_0 + a_1 t + δ y_{t−1} + u_t (test for a unit root with drift and deterministic trend)
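The random walk with drift of Figure 11.3 (drift 2, e_t ~ Normal(0, 9)) can be simulated to check that the mean path follows 2t while the variance grows with t:

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps = 50, 2000
e = rng.normal(0.0, 3.0, size=(reps, n))   # e_t ~ Normal(0, 9), as in Figure 11.3

# y_t = 2 + y_{t-1} + e_t with y_0 = 0, simulated reps times
y = np.cumsum(2.0 + e, axis=1)

mean_path = y.mean(axis=0)                 # should track E(y_t) = 2t
var_path = y.var(axis=0)                   # should track 9t: grows without bound
```

The growing variance is exactly the violation of stationarity discussed above: no matter how far from the start, the series never settles around a fixed distribution.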

6.6.2 Time Series Data: Correlogram

Factors such as interest rates, inflation rates, and unemployment rates are thought by many to be highly persistent, but they have no obvious upward or downward trend. However, it is often the case that a highly persistent series also contains a clear trend; one model that leads to this behavior is the random walk with drift, and when ρ = 1 and {e_t} is any weakly dependent process we obtain a whole class of highly persistent time series processes that also have linearly trending means.

Transformations on highly persistent time series: using time series with strong persistence of the type displayed by a unit root process in a regression equation can lead to very misleading (spurious regression) results if the CLM assumptions are violated. Fortunately, simple transformations, such as first differencing, are available that render a unit root process weakly dependent.

To view a correlogram in EViews:
Ø View → Correlogram
Ø At levels and at the default lags

The correlogram shown has three main parts:
Ø Autocorrelation (AC): the correlation coefficient for values of the series k periods apart. If the value of the first AC is non-zero, the series is first-order serially correlated. If the autocorrelation dies off geometrically as k increases, this is a sign of a low-order AR process; if it drops to 0 after a small number of lags, this is a sign of a low-order MA process.
Ø Partial autocorrelation (PAC): the partial correlation at lag k is the coefficient on the k-th lag in a regression of x_t on a constant and all lags of x up to k. If the partial correlation at lag k is close to 0, the autocorrelation is of order less than k.
Ø The Ljung–Box Q-statistics with their p-values, which test the joint null of no autocorrelation up to each lag.
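The AC patterns described above (geometric decay for an AR process, sharp cutoff for an MA process) can be verified on simulated AR(1) and MA(1) series; the coefficient 0.8 is an assumption for the illustration:

```python
import numpy as np

def acf(x, k):
    """Sample autocorrelation of a series at lag k."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return (x[k:] @ x[:-k]) / (x @ x)

rng = np.random.default_rng(7)
T = 5000
e = rng.normal(size=T + 1)

ar1 = np.empty(T)
ar1[0] = e[0]
for t in range(1, T):
    ar1[t] = 0.8 * ar1[t - 1] + e[t]      # AR(1): ACF decays geometrically

ma1 = e[1:] + 0.8 * e[:-1]                # MA(1): ACF cuts off after lag 1

ar_acf = [acf(ar1, k) for k in (1, 2, 3)]
ma_acf = [acf(ma1, k) for k in (1, 2, 3)]
```

For the AR(1) series the sample ACF falls off roughly as 0.8, 0.64, 0.51; for the MA(1) series only the lag-1 autocorrelation (about θ/(1 + θ²) ≈ 0.49) is non-zero.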
6.6.3 Correlogram and Unit Root Tests for the Stationarity of GDP

Steps to be followed:
1) View → Unit Root Test.
2) To check whether the lGDP series is stationary, leave it at levels. The test statistic is insignificant: we cannot reject that GDP has a unit root, so it is non-stationary.
3) To treat this problem of non-stationarity, we test for a unit root at the 1st difference.
4) View → Unit Root Test, this time taking the 1st difference. The p-value is significant, so we reject non-stationarity: the GDP series has been transformed into a stationary one.
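Outside EViews, the same levels-versus-first-difference logic can be sketched with a hand-rolled Dickey–Fuller regression (constant only; the 5% critical value of about −2.86 is the standard DF table value):

```python
import numpy as np

def df_test(y):
    """Dickey-Fuller regression with constant: dy_t = a + delta * y_{t-1} + u_t.
    Returns the t-statistic on delta; compare with the 5% critical value ~ -2.86."""
    y = np.asarray(y, dtype=float)
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    b = np.linalg.lstsq(X, dy, rcond=None)[0]
    resid = dy - X @ b
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return b[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(8)
rw = np.cumsum(rng.normal(size=500))   # simulated random walk: has a unit root

t_level = df_test(rw)                  # typically above -2.86: cannot reject unit root
t_diff = df_test(np.diff(rw))          # first difference is stationary: strong rejection
```

At levels the statistic usually fails to reject the unit root, while for the first difference it is far below the critical value, mirroring steps 2) and 4) above.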

6.6.4 Testing for Unit Roots

Ø It is essential to know whether a time series follows a unit root process, denoted I(1), or not. If the data follow a unit root process they are non-stationary: a shock has a permanent effect on the data, the normal AR(p) or MA(q) structures and other processes cannot be applied normally, and some transformation needs to be done.
Ø The simple approach to testing for a unit root uses an AR(1) model:

y_t = α + ρ y_{t−1} + e_t,  t = 1, 2, …

where the null of a unit root, H_0: ρ = 1, is tested against H_1: ρ < 1.
6.6.5 Example: Unit Root Test for Annual Inflation

Using annual data on U.S. inflation, based on the CPI, we test for a unit root in inflation, restricting the data to the years 1948–1996 and allowing for one and two lags of Δinf_t in the augmented Dickey–Fuller regression. With one lag, the estimated regression is:

Δinf_t (fitted) = 1.36 − 0.310 inf_{t−1} + 0.138 Δinf_{t−1}
se = (0.517) (0.103) (0.126)
n = 47, R² = 0.172

A unit root can be tested via:
Ø the augmented Dickey–Fuller (ADF) test for a unit root (large samples);
Ø the Phillips–Perron test, which is robust to issues like autocorrelation and heteroskedasticity and is used for small samples;
Ø the KPSS test, where H_0 is trend stationarity rather than a unit root;
Ø the ADF-GLS test, which uses GLS to increase efficiency.

A unit root test is needed to demonstrate whether the inflation and CPI series are stationary or non-stationary (have a unit root). The test specification at each stage:
Ø At level: intercept
Ø At first difference: trend + intercept
Ø At second difference: none
6.6.6 Analysis of a Time Series Correlogram

Correlogram: Example 2
1. Open the lGDP series in the TimeSeries workfile page.
2. Click View → Correlogram.
3. The Correlogram Specification box opens. Select 1st Difference under the "Correlogram of" section.
4. Under the "Lags to include" section, type the number of lags (36).
5. Click OK.

It can be seen that the autocorrelation pattern is less pronounced for 1st differences relative to levels: the first differences show normal stochastic oscillations and no decaying trend as before.

6.6.7 Famous Time Series Models with AR(p) and MA(q) Lags

6.7 Seasonality in Time Series

Ø Often time-series data exhibit some periodicity, referred to as seasonality.
Ø Example: quarterly data on retail sales will tend to jump up in the 4th quarter because of the Christmas holidays.
Ø Seasonality can be dealt with and captured by adding a set of seasonal dummies.
Ø As with trends, the series can be seasonally adjusted before running the regression.
Ø To take into account the increase in retail sales during the Christmas holidays on a yearly basis, the model with seasonal dummies will be as follows:

y_t = β_0 + δ_1 feb_t + δ_2 mar_t + δ_3 apr_t + … + δ_11 dec_t + β_1 x_t1 + … + β_k x_tk + u_t

Ø where feb_t, mar_t, …, dec_t are dummy variables indicating whether time t corresponds to a specific month or not, January is the base month, and β_0 is the intercept. If there is a seasonality effect, then δ_1, δ_2, … will not all be equal to zero, which can be tested through the F test.

Application 1:

I) Use the following autocorrelation function (ACF) and partial autocorrelation function (PACF) of the prices of stocks over 12 lags. (The series is the simulated data set Y1, containing 100 values, from the file SIM_2.csv accompanying Enders (2010); due to differences in data handling and rounding, answers need only be approximately equal. Note that the PACF in Enders is derived using the Yule–Walker method, so the default PACF values in EViews may differ considerably.)

ACF
1:  0.739  0.584  0.471  0.389  0.344  0.335
7:  0.297  0.325  0.269  0.201  0.189  0.082
PACF
1:  0.739  0.083  0.030  0.026  0.060  0.089
7: -0.017  0.144 -0.100 -0.065  0.070 -0.204
Ljung–Box Q-statistics: Q(8) = 177.58, Q(16) = 197.84, Q(24) = …

(a) Plot the sequence against time. Does the series appear to be stationary?
(b) If the series is not stationary, indicate which tests will be conducted first and then how it could be made stationary.
II) Design a Financial Application using time series data

Monthly Average Return Strategy

Objective: To create a basic investment strategy that relies on historical monthly average
returns to inform investment decisions.

Components:

Historical Monthly Returns: Utilize time series data for the monthly returns of a financial asset.

Average Monthly Return: Calculate the average monthly return over a specified historical period.

Investment Decision Rule: If the current month's return is above the average, consider it a positive signal. If the current month's return is below the average, consider it a negative signal.

Position Management: Execute buy orders when the current month's return is above the average. Execute sell orders when the current month's return is below the average.

Risk Management: Implement a simple risk management rule, such as setting a maximum
allowable portfolio loss for each trade.

This simplified strategy is based on the premise that if the current month's return is
consistently above (or below) the historical average, it may indicate a trend that can be
exploited for investment decisions.
The regression model for this simple strategy could be represented as:

Return_t = β_0 + β_1 X_t + u_t

where X_t is the chosen signal variable.
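A minimal sketch of the decision rule and the risk cap on hypothetical simulated returns (the return distribution and the −2% loss cap are assumptions):

```python
import numpy as np

rng = np.random.default_rng(10)
returns = rng.normal(0.005, 0.04, size=120)   # 10 years of monthly returns (assumed)
avg = returns.mean()                          # historical average monthly return

# decision rule: +1 = buy signal, -1 = sell signal
signals = np.where(returns > avg, 1, -1)

# naive evaluation: hold this month's signal into next month
strategy_returns = signals[:-1] * returns[1:]

# simple risk-management rule: cap any single-trade loss at -2%
strategy_returns = np.maximum(strategy_returns, -0.02)
```

This is a toy backtest only: it ignores transaction costs and look-ahead issues, and serves merely to make the decision rule concrete.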
