
MH4500 TIME SERIES ANALYSIS

Chapter 2 (Part 1): Stationarity and Autocorrelation function

A key role in time series analysis is played by processes whose properties, or
some of them, do not vary with time. Such a property is captured by the following
important concept, stationarity.

1 Strong and weak stationarity


Loosely speaking, a time series {Xt , t = 0, ±1, · · · } is said to be stationary if it
has statistical properties similar to those of the "time shifted" series {Xt+h , t =
0, ±1, · · · } for each integer h. We can make this idea precise with the following
definitions.

DEFINITION 1 The expectation function of X is defined as

µX(t) = E[Xt ], t ∈ T.

And the covariance function of X is given by

γ(t, s) = cov(Xt , Xs ) = E[(Xt − µX(t))(Xs − µX(s))]

for all t, s ∈ T.
The variance function is defined by

σ²X(t) = γ(t, t) = var(Xt ).

Thus µX(t), var(Xt ) and γ(t, s) are just real-valued functions of t and of (t, s), respectively.

EXAMPLE 1. Consider the Gaussian process (Xt , t ∈ [0, 1]) of i.i.d. N(0, 1) random
variables Xt. Its expectation and covariance functions are given by

µX(t) = 0 and γ(t, s) = 1 if t = s, and 0 otherwise.

DEFINITION 2 A time series is said to be strictly stationary if, for any n ∈ Z+ and
all integers h, (X1 , ..., Xn ) and (X1+h , · · · , Xn+h ) have the same distribution.

DEFINITION 3 Denote

µt = EXt and γ(t, k) = cov(Xt , Xt+k ), t, k ∈ Z.

A time series is said to be weakly stationary if (a) µt = µ is independent of t;
and (b) γ(t, k) is independent of t for each k.

Weak stationarity is often referred to simply as stationarity, since it is the type
of stationarity we are mainly interested in. The relationship between strict and
weak stationarity is as follows. Strict stationarity together with finiteness of the
second moment ensures weak stationarity. Generally speaking, however, weak
stationarity does not imply strict stationarity. An exceptional case is that of a
Gaussian process Xt, for which strict and weak stationarity coincide. Note that
some distributions, such as the Cauchy distribution, have infinite second moment.
REMARK 1. Usually, we first check whether
µ(t) = c0 and var(Xt ) = c1 (1.1)
for constants c0 and c1. If so, we then check whether the following holds:
γ(t, s) = h(t − s). (1.2)
If (1.1) does not hold, we conclude that Xt is not weakly stationary, and the
verification of (1.2) is not required. □
EXAMPLE 2 Consider
Xt = β0 + β1 t + et ,
where the et 's are i.i.d. with mean zero and variance one, and β1 ≠ 0. Is Xt stationary?
Solution: It is not stationary because
EXt = β0 + β1 t,
which depends on time t. □
EXAMPLE 3 Consider a random walk of the form

Xt = Xt−1 + et , t = 1, 2, . . . ,

where {et } is a sequence of i.i.d. random variables with E[et ] = 0 and E[e²t ] = 1.
Let X0 = 0.

◃ Justify whether Xt is stationary.

◃ Is Zt = Xt − Xt−1 stationary?
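
As a quick illustration (not part of the original notes; a minimal sketch assuming Python with numpy), we can simulate this random walk and its differenced series:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
e = rng.standard_normal(T)      # i.i.d. errors with mean 0, variance 1
X = np.cumsum(e)                # X_t = X_{t-1} + e_t, with X_0 = 0

# var(X_t) = t grows with t, so {X_t} cannot be weakly stationary;
# compare the sample variances over the first and second halves:
print(X[:T // 2].var(), X[T // 2:].var())

# The differenced series Z_t = X_t - X_{t-1} = e_t is i.i.d., hence stationary.
Z = np.diff(X)
```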

2 Autocovariance and autocorrelation functions


In view of condition (b) in Definition 3, whenever we use the term covariance
function with reference to a stationary process {Xt }, γ(t, k) is a function of a
single variable, the time lag k, and we hence denote it by γk .

DEFINITION 4 Let Xt be a stationary time series. The autocovariance function
at lag k is

γk = cov(Xt , Xt+k ) = E[(Xt − µ)(Xt+k − µ)], k ∈ Z;

and its autocorrelation function (ACF) is

ρk = cov(Xt , Xt+k ) / √(var(Xt ) var(Xt+k )) = γk /γ0 , k ∈ Z.

The two functions have the following properties,

◃ γ0 = var(Xt ); ρ0 = 1.

◃ γk = γ−k ; ρk = ρ−k .
Therefore, the ACFs are often plotted only for the nonnegative lags.

EXAMPLE 4 If Zt is i.i.d. with variance Var(Zt ) = σ², then for any s, t with s ≠ t,

Cov(Zt , Zs ) = 0

and {Zt } is stationary. □

DEFINITION 5 The process Zt is said to be white noise if it is stationary with
Cov(Zt , Zs ) = 0 for s ≠ t, and Var(Zt ) = σ². We denote this by Zt ∼ WN(0, σ²).

EXAMPLE 5 We can build time series from a white noise sequence. Suppose Zt ∼
WN(0, σ²). Let
Xt = Zt + θZt−1 (MA(1) model).
Then
E(Xt ) = E(Zt + θZt−1 ) = EZt + θEZt−1 = 0

and

Var(Xt ) = Var(Zt + θZt−1 )
         = Var(Zt ) + θ²Var(Zt−1 )
         = σ² + θ²σ² = (1 + θ²)σ².

What about Cov(Xt , Xt−1 ) and Cov(Xt+h , Xt ) for h > 1? Is Xt stationary? □
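
As a check (a sketch, not from the notes, assuming Python with numpy), the lag-1 and lag-2 autocovariances of the MA(1) model can be estimated by simulation; the theoretical values are γ1 = θσ² and γ2 = 0:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, n = 0.6, 1.0, 100_000
Z = rng.normal(0.0, sigma, n + 1)   # white noise
X = Z[1:] + theta * Z[:-1]          # X_t = Z_t + theta * Z_{t-1}

# The mean of X is 0, so E[X_t X_{t+k}] estimates the autocovariance.
gamma1 = np.mean(X[1:] * X[:-1])    # close to theta * sigma^2 = 0.6
gamma2 = np.mean(X[2:] * X[:-2])    # close to 0
print(gamma1, gamma2)
```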

3 Sample autocovariance and autocorrelation functions

Although we have just seen how to compute the autocorrelation function for
a few simple time series models, in reality we do not start with a model but
with observations x1 , x2 , · · · , xn . Supposing that {xt } is stationary, we then have to
estimate µx , γx (h) and ρx (h) for h = 0, 1, 2, · · · . These estimates may suggest which
of the many possible stationary models is a suitable candidate for representing
the dependence in the data. For example, a sample ACF that is close to zero for
all nonzero lags suggests that an appropriate model for the data might be i.i.d.
noise.

DEFINITION 6 The sample mean is

µ̂x = x̄ = (1/n) ∑_{t=1}^{n} xt .

The sample autocovariance function (SACVF) is

γ̂(h) = (1/n) ∑_{t=1}^{n−h} (xt+h − x̄)(xt − x̄).

The sample autocorrelation function (SACF) at lag h is

ρ̂(h) = γ̂(h) / γ̂(0) = ∑_{t=1}^{n−h} (xt+h − x̄)(xt − x̄) / ∑_{t=1}^{n} (xt − x̄)².
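
These formulas translate directly into code. A minimal sketch (not from the notes; it assumes Python with numpy):

```python
import numpy as np

def sample_acf(x, max_lag):
    # Sample ACF rho_hat(h) of Definition 6 (note the divisor n at every lag).
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    gamma = [np.sum(d[h:] * d[:n - h]) / n for h in range(max_lag + 1)]
    return np.array(gamma) / gamma[0]

# Example: for i.i.d. noise, rho_hat(h) is close to 0 for all h >= 1.
rng = np.random.default_rng(2)
print(sample_acf(rng.standard_normal(500), max_lag=5))
```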

MH4500 TIME SERIES ANALYSIS
Chapter 2 (Part 2): Time series regression

The first step in the analysis of any time series is to plot the data. If there
are apparent discontinuities in the series, such as a sudden change of level, it
may be advisable to analyze the series by first breaking it into homogeneous
segments. If there are outlying observations, they should be studied carefully
to check whether there is any justification for discarding them. Inspection of a
graph may also suggest the possibility of representing the data as a realization of
the process,
Xt = Tt + St + et , (0.1)
where Tt is a slowly changing function known as a trend component, St is a
function with known period d referred to as a seasonal component, and et is a
random noise component. The error term et represents random fluctuations that
cause the Xt values to deviate from the average level EXt .
If the seasonal and noise fluctuations appear to increase with the level of the
process, then a preliminary transformation of the data is often used to make the
transformed data compatible with model (0.1).
Our aim is to estimate and extract the deterministic components Tt and St in
the hope that the residual or noise component et will turn out to be a stationary
random process. We can then use the theory of such processes to find a satisfac-
tory probabilistic model for the process et , to analyze its properties, and to use
it in conjunction with Tt and St for purposes of prediction and control of Xt .
An alternative approach is to apply difference operators repeatedly to the data
set Xt until the differenced observations resemble a realization of some stationary
process Zt . We can then use the theory of stationary processes for the modelling,
analysis and prediction of Zt and hence of the original process.
The two approaches to trend and seasonality removal, (a) by estimation of Tt
and St in (0.1) and (b) by differencing the data {Xt } , will be discussed in some
detail.

1 Modelling trend by using simple functions


In the absence of a seasonal component, model (0.1) becomes the following.

DEFINITION 1 A trend model is

Xt = Tt + et ,

where Xt is the time series in period t, Tt is the trend in time period t, and et is
the error term in time period t.

Possible types of trend


Three examples are shown in the figures.

[Figures: S&P 500 index (daily) and world population (millions, years 0–2000), both displaying long-run trends.]

(i) No long-run growth (or decline) trend

Tt = β0

(ii) Straight-line trend


Tt = β0 + β1 t, β1 ≠ 0

[Figure: world birth rate, 1950–2010.]

i. if β1 > 0: linear growth trend


ii. if β1 < 0: linear decline trend
(iii) Quadratic trend
Tt = β0 + β1 t + β2 t²,
which implies a quadratic (or curvilinear) long-run change over
time. It can represent growth at either an increasing or a decreasing rate.

The above three are the most commonly used. This rests on the fact that
many functions can be well approximated, on an interval of finite length, by a
polynomial of reasonably low degree. However, other more complicated trends
exist, such as the p-th order polynomial trend.

1.1 Estimation and Elimination of Trend


Method 1 (Least squares estimation of Tt ). In this procedure we attempt to fit a
parametric family of functions, e.g.

Tt = a0 + a1 t + a2 t² (1.1)

to the data by choosing the parameters, in this illustration a0 , a1 and a2 , to
minimize

∑_t (Xt − Tt )².

This method requires the error term et to satisfy the constant-variance and
independence assumptions.
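
A minimal sketch of such a fit (not from the notes; it assumes Python with numpy, on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(1.0, 101.0)
x = 2.0 + 0.5 * t + 0.01 * t**2 + rng.standard_normal(t.size)  # synthetic series

# np.polyfit minimizes sum_t (x_t - T_t)^2 over the polynomial coefficients;
# it returns them from the highest degree down.
a2, a1, a0 = np.polyfit(t, x, deg=2)
trend = a0 + a1 * t + a2 * t**2
residual = x - trend        # detrended series, hopefully stationary
```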
Method 2 (Differencing to generate stationary data). As an alternative, we now
attempt to eliminate the trend term by differencing.

DEFINITION 2 We define the first difference operator ∇ by

∇Xt = Xt − Xt−1 = (1 − B)Xt ,

where B is the backward shift operator,

BXt = Xt−1 , B^i Xt = Xt−i , for i = 1, 2, ....

Similarly,
∇^i Xt = ∇(∇^{i−1} Xt )
with ∇^0 Xt = Xt .

Polynomials in B and ∇ are manipulated in precisely the same way as poly-
nomial functions of real variables. For example,

∇²Xt = ∇(∇Xt ) = (1 − B)(1 − B)Xt = (1 − 2B + B²)Xt = Xt − 2Xt−1 + Xt−2 .

If the operator ∇ is applied to a linear trend function

Tt = at + b,

then we obtain the constant function

∇Tt = a.

In the same way, any polynomial trend of degree k can be reduced to a constant
by application of the operator ∇^k.

Starting therefore with the model Xt = Tt + et , where Tt = ∑_{j=0}^{k} aj t^j and et is
stationary with mean zero, we obtain

∇^k Xt = k! ak + ∇^k et ,

a stationary process with mean k! ak . These considerations suggest the possibility,
given any sequence Xt of data, of applying the operator ∇ repeatedly until we find
a sequence ∇^k Xt which can plausibly be modelled as a realization of a stationary
process. It is often found in practice that the order k of differencing required is
quite small, frequently one or two.
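
As a numerical illustration (a sketch, not from the notes, assuming Python with numpy): applying ∇² to a quadratic trend plus noise leaves 2! a2 plus a differenced noise term:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(200.0)
a0, a1, a2 = 1.0, 0.3, 0.05
x = a0 + a1 * t + a2 * t**2 + rng.standard_normal(t.size)

d2 = np.diff(x, n=2)          # nabla^2 X_t
print(d2.mean(), 2 * a2)      # both close to 2! * a2 = 0.1
```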

2 Estimation of Both Trend and Seasonality


We now consider time series that display seasonal variation, and hence return
to model (0.1),
Xt = Tt + St + et .
There are two types of seasonal variation.

DEFINITION 3 We say that the time series exhibits constant seasonal variation if
the magnitude of the seasonal swing does not depend on the level of the time
series.
We say that the time series exhibits varying (increasing) seasonal variation
if the magnitude of the seasonal swing depends on the level of the time series.

When a time series displays varying (increasing) seasonal variation, it is com-
mon practice to apply a transformation to the data in order to obtain a transformed
series that displays constant seasonal variation. A commonly used transformation
is the following.

DEFINITION 4 Box-Cox transformation: for some λ ≥ 0,

zt = (x_t^λ − 1)/λ, if λ > 0.

It can be proved that as the power λ approaches zero, the transformation approaches

log(xt ).

Taking logarithms is in fact a common way to make data display constant seasonal
variation. In addition, one may also take the square-root transformation (λ = 1/2).
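
A minimal sketch of the transformation (not from the notes; it assumes Python with numpy), including a numerical check of the λ → 0 limit:

```python
import numpy as np

def box_cox(x, lam):
    # Box-Cox transform of Definition 4; lam = 0 is taken as log(x).
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

x = np.array([878.0, 1005.0, 1173.0])
# For lam close to 0 the transform is close to log(x):
print(box_cox(x, 1e-8) - np.log(x))   # approximately zero
```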

2.1 Modelling seasonality using Dummy variables


The relation between the number of seasons and dummy variables is as follows.

DEFINITION 5 A dummy variable is a variable taking values 0 and 1. Suppose that
there are L seasons. We need to introduce L − 1 dummy variables to describe
the seasons:

season        D1,t   D2,t   · · ·   DL−1,t
1      ↦       1      0     · · ·     0
2      ↦       0      1     · · ·     0
· · ·
L−1    ↦       0      0     · · ·     1
L      ↦       0      0     · · ·     0

or, equivalently,

Dk,t = 1 if time period t is season k, and 0 otherwise.

For example, (normal) seasons Spring, Summer, Autumn, Winter can be de-
scribed by
seasons D1,t D2,t D3,t
Spring 1 0 0
Summer 0 1 0
Autumn 0 0 1
Winter 0 0 0

DEFINITION 6 The seasonal factor expressed using dummy variables is

St = βs,1 D1,t + · · · + βs,L−1 DL−1,t .

EXAMPLE 1 (International Airline Passengers: Quarterly Totals, 1956–1960). The
time series is

878, 1005, 1173, 883, 972, 1125, 1336, 988, 1020, 1146,

1400, 1006, 1108, 1288, 1570, 1174, 1227, 1468, 1736, 1283.

[Figure: quarterly numbers of airline passengers, 1956–1960.]

Our observations: 1. there is a trend; 2. the trend is linear; 3. there is seasonality;
4. the seasonality is increasing.
Method 1. Take a transformation of xt :

yt = log(xt ).

The plot suggests that taking logarithms of the data equalizes the seasonal variation
reasonably well.
Fit the following model

yt = Tt + St + et

[Figure: log of the quarterly passenger numbers; the seasonal variation is roughly constant after the transformation.]

where Tt = β0 + β1 t.
Using dummy variables for the seasonality gives the following regression
model

yt = β0 + β1 t + βs,1 D1,t + βs,2 D2,t + βs,3 D3,t + et

where Dk,t = 1 if time period t is season k, and 0 otherwise.
Below we assume that the constant-variance and independence assumptions
regarding the error et are satisfied.

The data are listed below

time xt yt = log(xt ) const t D1 D2 D3 pred et


1 878 6.78 1 1 1 0 0 6.7635 0.0141
2 1005 6.91 1 2 0 1 0 6.909 0.0037
3 1173 7.07 1 3 0 0 1 7.0875 -0.0202
4 883 6.78 1 4 0 0 0 6.7856 -0.0023
5 972 6.88 1 5 1 0 0 6.8525 0.0269
6 1125 7.03 1 6 0 1 0 6.998 0.0275
7 1336 7.20 1 7 0 0 1 7.1765 0.021
8 988 6.90 1 8 0 0 0 6.8746 0.0211
9 1020 6.93 1 9 1 0 0 6.9414 -0.0139
10 1146 7.04 1 10 0 1 0 7.087 -0.0429
11 1400 7.24 1 11 0 0 1 7.2654 -0.0212
12 1006 6.91 1 12 0 0 0 6.9636 -0.0498
13 1108 7.01 1 13 1 0 0 7.0304 -0.0201
14 1288 7.16 1 14 0 1 0 7.1759 -0.0151
15 1570 7.36 1 15 0 0 1 7.3544 0.0044
16 1174 7.07 1 16 0 0 0 7.0525 0.0156
17 1227 7.11 1 17 1 0 0 7.1194 -0.007
18 1468 7.29 1 18 0 1 0 7.2649 0.0268
19 1736 7.46 1 19 0 0 1 7.4434 0.016
20 1283 7.16 1 20 0 0 0 7.1415 0.0154
The last four rows (t = 21, · · · , 24) contain only the regressors and fitted values
for the coming year, for which no observations are available:
1 21 1 0 0 7.2083
1 22 0 1 0 7.3539
1 23 0 0 1 7.5323
1 24 0 0 0 7.2305
where X is the 20 × 5 design matrix whose t-th row is (1, t, D1,t , D2,t , D3,t ) and
Y is the vector of observations yt :

X = ( 1  1  1 0 0
      1  2  0 1 0
      · · ·
      1 20  0 0 0 ),     Y = ( 6.78, 6.91, · · · , 7.16 )^T.
We have the following calculations:

(X^T X)^{−1} =

  0.4250  −0.0187  −0.2562  −0.2375  −0.2187
 −0.0187   0.0016   0.0047   0.0031   0.0016
 −0.2562   0.0047   0.4141   0.2094   0.2047
 −0.2375   0.0031   0.2094   0.4063   0.2031
 −0.2187   0.0016   0.2047   0.2031   0.4016

X^T Y = ( 141.29, 1498.36, 34.71, 35.43, 36.33 )^T.

The estimators of the β's are

β̂ = (β̂0 , β̂1 , β̂s,1 , β̂s,2 , β̂s,3 )^T = (X^T X)^{−1} X^T Y = ( 6.6967, 0.0222, 0.0446, 0.1679, 0.3241 )^T.

The prediction errors are then

êt = yt − (1, t, D1,t , D2,t , D3,t )β̂

The estimator of σ² = Var(et ) is

σ̂² = ∑_{t=1}^{n} ê_t² / (n − np ) = 6.7435 × 10⁻⁴

(with n = 20 and np = 5).
The Durbin–Watson statistic is

DW = ∑_{t=2}^{n} (êt − êt−1 )² / ∑_{t=1}^{n} ê_t² = 0.8346.

The standard deviations s√c_kk of the estimators, where c_kk is the k-th diagonal
entry of (X^T X)^{−1} and s = σ̂, are

(0.0169, 0.0010, 0.0167, 0.0166, 0.0165).

Total sum of squares: we have ȳ = 7.0644 and

Syy = ∑_{t=1}^{n} (yt − ȳ)² = 0.6577.
The sum of squares of the prediction errors is

SSE = ∑_{t=1}^{n} ê_t² = 0.0101.

Thus, we have
R² = 1 − SSE/Syy = 0.9846.
The F statistic is

F = [(Syy − SSE)/(np − 1)] / [SSE/(n − np )] = 240.0942.

Our estimated model is then

yt = 6.70 + 0.02 t + 0.04 D1,t + 0.17 D2,t + 0.32 D3,t
    (0.0169) (0.0010) (0.0167)  (0.0166)  (0.0165)

R² = 0.9846, DW = 0.8346, F = 240.09, σ̂² = 6.7435 × 10⁻⁴.

(i) Point predictions: ŷt = Xt β̂. For the first quarter of year 61, Xt = (1, 21, 1, 0, 0), so

ŷ21 = 6.70 + 0.02 × 21 + 0.04 × 1 + 0.17 × 0 + 0.32 × 0 = 7.2.

ŷ22 = ??, ŷ23 = · · ·

(ii) Prediction intervals with 95% confidence:

ŷt ± 1.96 s √(1 + Xt (X^T X)^{−1} Xt^T ).

First quarter of 61: 7.2 ± 0.0608 = [7.14, 7.26], or, on the original scale xt :
[e^{7.14} , e^{7.26} ] = [1261, 1422].
Second quarter of 61: ??
Third quarter of 61: ??
Fourth quarter of 61: ??
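
The whole fit can be reproduced in a few lines. A sketch (not part of the notes; it assumes Python with numpy, and should approximately reproduce the numbers above):

```python
import numpy as np

x = np.array([878, 1005, 1173, 883, 972, 1125, 1336, 988, 1020, 1146,
              1400, 1006, 1108, 1288, 1570, 1174, 1227, 1468, 1736, 1283],
             dtype=float)
y = np.log(x)
n, n_p = 20, 5
t = np.arange(1, n + 1)

D = np.zeros((n, 3))
for k in range(3):
    D[t % 4 == k + 1, k] = 1.0           # D_{k,t} = 1 if t is in season k
X = np.column_stack([np.ones(n), t, D])  # rows (1, t, D1, D2, D3)

beta = np.linalg.solve(X.T @ X, X.T @ y)      # (X'X)^{-1} X'y
e = y - X @ beta                              # residuals e_hat
s2 = e @ e / (n - n_p)                        # sigma^2 hat, ~6.74e-4
dw = np.sum(np.diff(e)**2) / np.sum(e**2)     # Durbin-Watson, ~0.83

# 95% prediction interval for the first quarter of year 61 (t = 21):
x_new = np.array([1.0, 21.0, 1.0, 0.0, 0.0])
y_hat = x_new @ beta
half = 1.96 * np.sqrt(s2 * (1.0 + x_new @ np.linalg.solve(X.T @ X, x_new)))
print(y_hat, (y_hat - half, y_hat + half))
print(np.exp([y_hat - half, y_hat + half]))   # interval in passenger numbers
```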

The predictions are shown in the following figure.

[Figure: observations and predictions, in logs (top) and in numbers of passengers (bottom), quarters 0–25.]
DEFINITION 7 The Durbin–Watson statistic is

DW = ∑_{t=2}^{n} (et − et−1 )² / ∑_{t=1}^{n} e_t² ,

where e1 , · · · , en are the time-ordered residuals.


Consider testing the null hypothesis

H0 : The error terms are not autocorrelated

versus the alternative hypothesis

H1 : The error terms are (positively) autocorrelated.

We reject H0 if DW < dα , where dα is the critical value corresponding to the level
of significance α.

Small values of DW lead to the conclusion of autocorrelation because, if DW is
small, the differences et − et−1 are small. This indicates that the adjacent residuals
et and et−1 are of similar magnitude and sign, which in turn suggests that the
adjacent error terms et and et−1 are (positively) correlated.
Here is an example of autocorrelated residuals:

[Figure: a time series (top) and its prediction errors (bottom), illustrating autocorrelated residuals.]
2.2 Modelling seasonality using trigonometric functions
Sometimes regression models involving trigonometric terms can be used to fore-
cast time series exhibiting either constant or increasing seasonal variation. Such
models are as follows.

DEFINITION 8 Trigonometric model for constant seasonal variation:

(1) St = βs,1 sin(2πt/L) + βs,2 cos(2πt/L).

DEFINITION 9 Trigonometric model for increasing seasonal variation:

(2) St = βs,1 sin(2πt/L) + βs,2 cos(2πt/L) + βs,3 t sin(2πt/L) + βs,4 t cos(2πt/L).

EXAMPLE 2 We again consider Example 1, the International Airline Passengers
data. This time we use trigonometric functions for the seasonality:

yt = β0 + β1 t + βs,1 sin(2πt/4) + βs,2 cos(2πt/4) + et ,

where the data and regressors are

time  xt  yt = log(xt )  const  t  sin(2πt/4)  cos(2πt/4)  pred
1 878 6.78 1 1 1.00 0.00 6.7199
2 1005 6.91 1 2 0.00 -1.00 6.9651
3 1173 7.07 1 3 -1.00 0.00 7.0439
4 883 6.78 1 4 0.00 1.00 6.8417
5 972 6.88 1 5 1.00 0.00 6.8058
6 1125 7.03 1 6 0.00 -1.00 7.0509
7 1336 7.20 1 7 -1.00 0.00 7.1298
8 988 6.90 1 8 0.00 1.00 6.9275
9 1020 6.93 1 9 1.00 0.00 6.8916
10 1146 7.04 1 10 0.00 -1.00 7.1368
11 1400 7.24 1 11 -1.00 0.00 7.2156
12 1006 6.91 1 12 0.00 1.00 7.0134
13 1108 7.01 1 13 1.00 0.00 6.9775
14 1288 7.16 1 14 0.00 -1.00 7.2226
15 1570 7.36 1 15 -1.00 0.00 7.3015
16 1174 7.07 1 16 0.00 1.00 7.0992
17 1227 7.11 1 17 1.00 0.00 7.0633
18 1468 7.29 1 18 0.00 -1.00 7.3085
19 1736 7.46 1 19 -1.00 0.00 7.3873
20 1283 7.16 1 20 0.00 1.00 7.1851
(The last four rows give the regressors and fitted values for the forecast quarters t = 21, · · · , 24.)
21    1 21  1.00  0.00 7.1492
22    1 22  0.00 -1.00 7.3943
23    1 23 -1.00  0.00 7.4732
24    1 24  0.00  1.00 7.2709

[Similarly, we can estimate the model and make predictions (how?); a sketch is given after the figure below.]

The predictions are shown in the following figure.

[Figure: observations and predictions from the trigonometric model, in logs (top) and in numbers of passengers (bottom), quarters 0–25.]
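
A sketch of the estimation and prediction step (not from the notes; it assumes Python with numpy):

```python
import numpy as np

x = np.array([878, 1005, 1173, 883, 972, 1125, 1336, 988, 1020, 1146,
              1400, 1006, 1108, 1288, 1570, 1174, 1227, 1468, 1736, 1283],
             dtype=float)
y = np.log(x)
t = np.arange(1, 21)

def design(t):
    # Regressors (1, t, sin(2*pi*t/4), cos(2*pi*t/4)) for period L = 4.
    return np.column_stack([np.ones_like(t, dtype=float), t,
                            np.sin(np.pi * t / 2), np.cos(np.pi * t / 2)])

beta, *_ = np.linalg.lstsq(design(t), y, rcond=None)  # least squares fit

t_new = np.arange(21, 25)            # the four quarters of the coming year
print(design(t_new) @ beta)          # point predictions, ~7.15, 7.39, 7.47, 7.27
```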

3 Elimination of both trend and seasonality


Consider the removal of seasonality from the following model

Xt = Tt + St + et , (3.1)

where St+L = St and L is the length of the seasonal period; this corresponds to
constant seasonal variation.
Method 2 (Differencing at lag L). The technique of differencing which we
applied earlier to non-seasonal data can be adapted to deal with seasonality of
period L by introducing the lag-L difference operator ∇L defined by

∇L Xt = Xt − Xt−L = (1 − B^L )Xt .

(This operator ∇L should not be confused with the operator ∇^L = (1 − B)^L defined
earlier.)
Applying the operator ∇L to the model

Xt = Tt + St + et ,

where St has period L, we obtain

∇L Xt = Tt − Tt−L + et − et−L ,

which gives a decomposition of the difference ∇L Xt into a trend component
(Tt − Tt−L ) and a noise term (et − et−L ). The new trend, Tt − Tt−L , can then be
eliminated using the methods already described, for example by application of
some power of the operator ∇.
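
A minimal sketch of seasonal differencing (not from the notes; it assumes Python with numpy):

```python
import numpy as np

def seasonal_diff(x, L):
    # Lag-L difference: (nabla_L X)_t = X_t - X_{t-L}.
    x = np.asarray(x, dtype=float)
    return x[L:] - x[:-L]

# Quarterly toy series: seasonal pattern of period 4 plus a linear trend.
x = np.arange(12.0) % 4 + 0.1 * np.arange(12.0)
print(seasonal_diff(x, 4))   # constant 0.4: the seasonal component is removed,
                             # and any remaining trend can be handled by np.diff.
```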

EXAMPLE 3 Consider a simple random walk of the form

Xt = Xt−1 + et , t = 1, 2, ..., T,

where {et } is a sequence of independent and identically distributed (i.i.d.) random
errors with zero mean and variance σ² = 1. Let X0 = 0.

◃ Justify whether {Xt } is weakly stationary.

◃ If not, how can one construct a weakly stationary sequence based on {Xt }? □

EXAMPLE 4 Suppose that

Tt = c0 + c1 t + c2 t², t = 0, 1, ....

Define a process of the form

Xt = Tt + et , t = 0, 1, ...,

where et is a sequence of i.i.d. random variables with E[et ] = 0 and E[e²t ] = 1.
Justify whether Xt is weakly stationary. If not, can you construct a weakly
stationary process based on Xt ? □
