Introduction to Time Series Analysis. Lecture 4.
Peter Bartlett
Last lecture:
1. Sample autocorrelation function
Mean, Autocovariance, Stationarity
\[
\gamma_X(t+h, t) = \mathrm{Cov}(X_{t+h}, X_t) = E\big[(X_{t+h} - \mu_{t+h})(X_t - \mu_t)\big].
\]
For a stationary process, the mean µ_t = µ is constant and the autocovariance γ_X(t+h, t) = γ(h) depends only on the lag h.
Estimating the ACF: Sample ACF
For observations X_1, …, X_n of a time series, the sample mean is
\[
\bar X = \frac{1}{n} \sum_{t=1}^{n} X_t.
\]
The sample autocovariance function is
\[
\hat\gamma(h) = \frac{1}{n} \sum_{t=1}^{n-|h|} (X_{t+|h|} - \bar X)(X_t - \bar X), \qquad -n < h < n,
\]
and the sample autocorrelation function is ρ̂(h) = γ̂(h)/γ̂(0).
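A minimal NumPy sketch of these estimators (the function names sample_acvf and sample_acf, and the white-noise test data, are illustrative additions, not from the slides):

```python
import numpy as np

def sample_acvf(x, h):
    """Sample autocovariance gamma_hat(h): average of products of
    mean-centered values |h| apart, normalized by n (not n - |h|)."""
    n, h = len(x), abs(h)
    xc = x - x.mean()
    return np.sum(xc[h:] * xc[:n - h]) / n

def sample_acf(x, max_lag):
    """Sample autocorrelation rho_hat(h) = gamma_hat(h) / gamma_hat(0)."""
    g0 = sample_acvf(x, 0)
    return np.array([sample_acvf(x, h) / g0 for h in range(max_lag + 1)])

rng = np.random.default_rng(0)
x = rng.standard_normal(200)   # white noise: rho(h) should be near 0 for h != 0
print(sample_acf(x, 5))
```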
Properties of the autocovariance function
1. γ(0) ≥ 0,
2. |γ(h)| ≤ γ(0),
3. γ(h) = γ(−h),
4. γ is positive semidefinite.
Introduction to Time Series Analysis. Lecture 4.
1. Review: ACF, sample ACF.
2. Properties of estimates of µ and ρ.
3. Convergence in mean square.
Properties of the sample autocovariance function
1. γ̂(h) = γ̂(−h),
2. γ̂ is positive semidefinite, and hence |γ̂(h)| ≤ γ̂(0), so |ρ̂(h)| ≤ 1.
Properties of the sample autocovariance function: psd
\[
\hat\Gamma_n =
\begin{pmatrix}
\hat\gamma(0) & \hat\gamma(1) & \cdots & \hat\gamma(n-1) \\
\hat\gamma(1) & \hat\gamma(0) & \cdots & \hat\gamma(n-2) \\
\vdots & \vdots & \ddots & \vdots \\
\hat\gamma(n-1) & \hat\gamma(n-2) & \cdots & \hat\gamma(0)
\end{pmatrix}
= \frac{1}{n} M M' \quad \text{(see next slide)},
\]
so
\[
a' \hat\Gamma_n a = \frac{1}{n}(a'M)(M'a) = \frac{1}{n}\|M'a\|^2 \ge 0.
\]
Properties of the sample autocovariance function: psd
\[
M = \begin{pmatrix}
0 & \cdots & 0 & 0 & \tilde X_1 & \tilde X_2 & \cdots & \tilde X_n \\
0 & \cdots & 0 & \tilde X_1 & \tilde X_2 & \cdots & \tilde X_n & 0 \\
0 & \cdots & \tilde X_1 & \tilde X_2 & \cdots & \tilde X_n & 0 & 0 \\
\vdots & & & & & & & \vdots \\
\tilde X_1 & \tilde X_2 & \cdots & \tilde X_n & 0 & \cdots & 0 & 0
\end{pmatrix},
\]
where \(\tilde X_t = X_t - \bar X\).
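A small numerical check of this factorization, assuming the sample autocovariance defined earlier (the construction of M follows the slide; the specific n and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
x = rng.standard_normal(n)
xt = x - x.mean()                          # X-tilde: mean-centered observations

# M has n rows and 2n-1 columns; row i holds X-tilde shifted left by i.
M = np.zeros((n, 2 * n - 1))
for i in range(n):
    M[i, n - 1 - i : 2 * n - 1 - i] = xt

# Sample autocovariances and the Toeplitz matrix Gamma_hat_n they define.
gamma_hat = [np.sum(xt[h:] * xt[:n - h]) / n for h in range(n)]
Gamma_n = np.array([[gamma_hat[abs(i - j)] for j in range(n)] for i in range(n)])

print(np.allclose(Gamma_n, M @ M.T / n))              # True: Gamma_n = M M' / n
print(np.all(np.linalg.eigvalsh(Gamma_n) >= -1e-12))  # True: psd
```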
Estimating µ
For a stationary process, the natural estimator of µ = E X_t is the sample mean
\[
\bar X_n = \frac{1}{n} \sum_{t=1}^{n} X_t.
\]
It is unbiased (E X̄_n = µ), and its variance is
\[
\mathrm{var}(\bar X_n) = \frac{1}{n} \sum_{h=-(n-1)}^{n-1} \left(1 - \frac{|h|}{n}\right) \gamma(h).
\]
Estimating µ
To see why:
\[
\begin{aligned}
\mathrm{var}(\bar X_n)
&= E\left[\left(\frac{1}{n}\sum_{i=1}^{n} X_i - \mu\right)\left(\frac{1}{n}\sum_{j=1}^{n} X_j - \mu\right)\right] \\
&= \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} E(X_i - \mu)(X_j - \mu) \\
&= \frac{1}{n^2} \sum_{i,j} \gamma(i-j) \\
&= \frac{1}{n} \sum_{h=-(n-1)}^{n-1} \left(1 - \frac{|h|}{n}\right) \gamma(h),
\end{aligned}
\]
since for each lag h there are n − |h| pairs (i, j) with i − j = h.
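A quick Monte Carlo check of this identity; the MA(1) process with θ = 0.5 is an illustrative choice, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta, reps = 50, 0.5, 200_000

# For MA(1) with unit noise variance: gamma(0) = 1 + theta^2, gamma(+-1) = theta.
gamma = {0: 1 + theta**2, 1: theta, -1: theta}
formula = sum((1 - abs(h) / n) * g for h, g in gamma.items()) / n

w = rng.standard_normal((reps, n + 1))
x = w[:, 1:] + theta * w[:, :-1]           # X_t = W_t + theta W_{t-1}
print(formula, x.mean(axis=1).var())       # the two values should nearly agree
```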
Estimating µ
Since
\[
\mathrm{var}(\bar X_n) = \frac{1}{n} \sum_{h=-n}^{n} \left(1 - \frac{|h|}{n}\right) \gamma(h),
\]
if γ(h) → 0 as h → ∞, then var(X̄_n) → 0, and so X̄_n converges in mean square to µ.
Estimating µ
Also, since
\[
\mathrm{var}(\bar X_n) = \frac{1}{n} \sum_{h=-n}^{n} \left(1 - \frac{|h|}{n}\right) \gamma(h),
\]
if \(\sum_h |\gamma(h)| < \infty\), then
\[
n \,\mathrm{var}(\bar X_n) \to \sum_{h=-\infty}^{\infty} \gamma(h) = \sigma^2 \sum_{h=-\infty}^{\infty} \rho(h).
\]
Estimating µ
\[
n \,\mathrm{var}(\bar X_n) \to \sigma^2 \sum_{h=-\infty}^{\infty} \rho(h).
\]
i.e., instead of \(\mathrm{var}(\bar X_n) \approx \sigma^2/n\), we have \(\mathrm{var}(\bar X_n) \approx \sigma^2/(n/\tau)\), with \(\tau = \sum_h \rho(h)\). The effect of the correlation is a reduction of sample size from n to n/τ. (c.f. mixing time.)
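To make this concrete, consider an AR(1) process with parameter φ (an added example, not from this slide): ρ(h) = φ^{|h|}, so τ = Σ_h φ^{|h|} = (1 + φ)/(1 − φ). A sketch comparing the approximation σ²τ/n with simulation:

```python
import numpy as np

rng = np.random.default_rng(3)
phi, n, reps = 0.8, 500, 20_000

gamma0 = 1 / (1 - phi**2)              # var(X_t) for stationary AR(1), unit noise
tau = (1 + phi) / (1 - phi)            # tau = sum over h of rho(h) = phi^|h|
print("approximation:", gamma0 * tau / n)

# Simulate AR(1) from its stationary distribution: X_t = phi X_{t-1} + W_t.
x = np.empty((reps, n))
x[:, 0] = rng.normal(0.0, np.sqrt(gamma0), reps)
w = rng.standard_normal((reps, n))
for t in range(1, n):
    x[:, t] = phi * x[:, t - 1] + w[:, t]
print("simulated:    ", x.mean(axis=1).var())
```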
Estimating µ: Asymptotic distribution
Knowing the asymptotic distribution of X̄_n allows us to construct confidence intervals for µ. For linear processes, X̄_n is asymptotically normal:
Estimating µ for a linear process: Asymptotically normal
Theorem (A.5) For a linear process \(X_t = \mu + \sum_j \psi_j W_{t-j}\), if \(\sum_j \psi_j \neq 0\), then
\[
\bar X_n \sim AN\left(\mu_x, \frac{V}{n}\right),
\]
where
\[
V = \sum_{h=-\infty}^{\infty} \gamma(h) = \sigma_w^2 \left( \sum_{j=-\infty}^{\infty} \psi_j \right)^2.
\]
(Here \(X_n \sim AN(\mu_n, \sigma_n^2)\) means \(\sigma_n^{-1}(X_n - \mu_n) \stackrel{d}{\to} Z\) for a standard normal Z.)
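As a worked instance (an added example, using the MA(1) process that reappears later in this lecture): for \(X_t = \mu + W_t + \theta W_{t-1}\), the only nonzero weights are ψ_0 = 1 and ψ_1 = θ, so
\[
V = \sigma_w^2 (1 + \theta)^2,
\qquad\text{and hence}\qquad
\bar X_n \sim AN\!\left(\mu, \frac{\sigma_w^2 (1 + \theta)^2}{n}\right).
\]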
Estimating µ for a linear process
Recall: for a linear process \(X_t = \mu + \sum_j \psi_j W_{t-j}\),
\[
\gamma_X(h) = \sigma_w^2 \sum_{j=-\infty}^{\infty} \psi_j \psi_{h+j},
\]
so
\[
\begin{aligned}
\lim_{n\to\infty} n\,\mathrm{var}(\bar X_n)
&= \lim_{n\to\infty} \sum_{h=-(n-1)}^{n-1} \left(1 - \frac{|h|}{n}\right) \gamma(h) \\
&= \lim_{n\to\infty} \sigma_w^2 \sum_{j=-\infty}^{\infty} \sum_{h=-(n-1)}^{n-1} \left( \psi_j \psi_{j+h} - \frac{|h|}{n}\, \psi_j \psi_{j+h} \right) \\
&= \sigma_w^2 \left( \sum_{j=-\infty}^{\infty} \psi_j \right)^2,
\end{aligned}
\]
since the \(|h|/n\) term vanishes in the limit (dominated convergence).
Estimating the ACF: Sample ACF for White Noise
For i.i.d. noise, ρ(h) = 0 for all h ≠ 0, and Theorem A.7 (below) implies that the sample autocorrelations ρ̂(1), …, ρ̂(K) are approximately independent and distributed as N(0, 1/n) for large n.
Sample ACF and testing for white noise
This gives a simple test for white noise: plot the sample ACF together with the bounds ±1.96/√n. If the series is i.i.d. noise, roughly 95% of the sample autocorrelations at nonzero lags should fall within these bounds.
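A minimal sketch of this check (the function name and test data are illustrative; for i.i.d. data the reported fraction should be close to 0.95):

```python
import numpy as np

def whiteness_check(x, max_lag=20):
    """Fraction of sample autocorrelations at lags 1..max_lag lying
    inside the +/- 1.96/sqrt(n) bounds."""
    n = len(x)
    xc = x - x.mean()
    g0 = np.dot(xc, xc) / n
    rho = np.array([np.dot(xc[h:], xc[:n - h]) / (n * g0)
                    for h in range(1, max_lag + 1)])
    return np.mean(np.abs(rho) <= 1.96 / np.sqrt(n))

rng = np.random.default_rng(4)
print(whiteness_check(rng.standard_normal(1000)))
```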
Sample ACF for white Gaussian (hence i.i.d.) noise
[Figure: sample ACF of a realization of white Gaussian noise, lags −20 to 20.]
Estimating the ACF: Sample ACF
Theorem (A.7) For a linear process \(X_t = \mu + \sum_j \psi_j W_{t-j}\), if \(E(W_t^4) < \infty\), then
\[
\begin{pmatrix} \hat\rho(1) \\ \vdots \\ \hat\rho(K) \end{pmatrix}
\sim AN\left( \begin{pmatrix} \rho(1) \\ \vdots \\ \rho(K) \end{pmatrix}, \frac{1}{n} V \right),
\]
where
\[
V_{ij} = \sum_{h=1}^{\infty} \big(\rho(h+i) + \rho(h-i) - 2\rho(i)\rho(h)\big)\big(\rho(h+j) + \rho(h-j) - 2\rho(j)\rho(h)\big).
\]
Sample ACF for MA(1)
Recall: ρ(0) = 1, ρ(±1) = θ/(1 + θ²), and ρ(h) = 0 for |h| > 1. Thus,
\[
V_{11} = \sum_{h=1}^{\infty} \big(\rho(h+1) + \rho(h-1) - 2\rho(1)\rho(h)\big)^2 = \big(\rho(0) - 2\rho(1)^2\big)^2 + \rho(1)^2,
\]
\[
V_{22} = \sum_{h=1}^{\infty} \big(\rho(h+2) + \rho(h-2) - 2\rho(2)\rho(h)\big)^2 = \sum_{h=-1}^{1} \rho(h)^2.
\]
And if ρ̂ is the sample ACF from a realization of this MA(1) process, then with probability 0.95,
\[
|\hat\rho(h) - \rho(h)| \le 1.96 \sqrt{\frac{V_{hh}}{n}}.
\]
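A sketch evaluating these bands numerically; θ = 0.6 and n = 200 are illustrative choices, not from the slides:

```python
import numpy as np

theta, n = 0.6, 200
rho1 = theta / (1 + theta**2)

V11 = (1 - 2 * rho1**2) ** 2 + rho1**2    # variance factor for rho_hat(1)
V22 = 1 + 2 * rho1**2                     # variance factor for rho_hat(h), h >= 2

# Approximate 95% intervals rho(h) +/- 1.96 sqrt(V_hh / n):
print("lag 1:", rho1, "+/-", 1.96 * np.sqrt(V11 / n))
print("lag 2:", 0.0,  "+/-", 1.96 * np.sqrt(V22 / n))
```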
Sample ACF for MA(1)
[Figure: true ACF, sample ACF, and 95% confidence intervals for an MA(1) process, lags −10 to 10.]
Introduction to Time Series Analysis. Lecture 4.
1. Review: ACF, sample ACF.
2. Properties of estimates of µ and ρ.
3. Convergence in mean square.
Convergence in Mean Square
A sequence of random variables S_n converges in mean square to a random variable S if
\[
E(S_n - S)^2 \to 0 \qquad \text{as } n \to \infty.
\]
Example: Linear Processes
\[
X_t = \sum_{j=-\infty}^{\infty} \psi_j W_{t-j}.
\]
Then if \(\sum_{j=-\infty}^{\infty} |\psi_j| < \infty\):
(1) X_t is finite almost surely;
(2) the truncated sums \(S_n = \sum_{j=-n}^{n} \psi_j W_{t-j}\) converge to X_t in mean square.
Example: Linear Processes (Details)
\[
\begin{aligned}
\text{(1)} \quad P(|X_t| \ge \alpha) &\le \frac{1}{\alpha} E|X_t| && \text{(Markov's inequality)} \\
&\le \frac{1}{\alpha} \sum_{j=-\infty}^{\infty} |\psi_j|\, E|W_{t-j}| \\
&\le \frac{\sigma}{\alpha} \sum_{j=-\infty}^{\infty} |\psi_j| && \text{(Jensen's inequality)} \\
&\to 0 \qquad \text{as } \alpha \to \infty.
\end{aligned}
\]
Example: Linear Processes (Details)
For (2):
The Riesz–Fisher Theorem (Cauchy criterion): S_n converges in mean square iff
\[
\lim_{m,n\to\infty} E(S_m - S_n)^2 = 0.
\]
Example: Linear Processes (Details)
(2) \(S_n = \sum_{j=-n}^{n} \psi_j W_{t-j}\) converges in mean square, since, for m ≤ n,
\[
\begin{aligned}
E(S_m - S_n)^2 &= E\left( \sum_{m \le |j| \le n} \psi_j W_{t-j} \right)^2 \\
&= \sum_{m \le |j| \le n} \psi_j^2\, \sigma^2 && \text{(cross terms vanish: the } W_t \text{ are uncorrelated)} \\
&\le \sigma^2 \left( \sum_{m \le |j| \le n} |\psi_j| \right)^2 \\
&\to 0 \qquad \text{as } m, n \to \infty.
\end{aligned}
\]
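A numerical illustration of this Cauchy criterion; the weights ψ_j = 0.7^{|j|} (absolutely summable) are an illustrative choice:

```python
import numpy as np

sigma2 = 1.0
psi = lambda j: 0.7 ** abs(j)          # illustrative absolutely summable weights

def tail_ms(m, n):
    """E(S_m - S_n)^2 = sigma^2 * sum of psi_j^2 over m <= |j| <= n."""
    return sigma2 * sum(psi(j) ** 2 for j in range(-n, n + 1) if abs(j) >= m)

for m in (1, 5, 10, 20):
    print(m, tail_ms(m, 10 * m))       # shrinks toward 0 as m grows
```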
Example: AR(1)
Consider the AR(1) recursion X_t − φX_{t−1} = W_t, with W_t ∼ WN(0, σ²). If |φ| < 1, then
\[
X_t = \sum_{j=0}^{\infty} \phi^j W_{t-j}
\]
is a solution. The same argument as before shows that this infinite sum converges in mean square, since |φ| < 1 implies \(\sum_{j \ge 0} |\phi^j| < \infty\).
Example: AR(1)
\[
X_t - \phi X_{t-1} = W_t, \qquad \text{where } W_t \sim WN(0, \sigma^2).
\]
If |φ| < 1,
\[
X_t = \sum_{j=0}^{\infty} \phi^j W_{t-j}.
\]
What if φ = 1? φ = −1? |φ| > 1?
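A quick check of the |φ| < 1 representation (φ = 0.9 and the truncation length are illustrative choices): unrolling the recursion from a zero start reproduces the truncated series exactly:

```python
import numpy as np

rng = np.random.default_rng(5)
phi, m = 0.9, 500
w = rng.standard_normal(m + 1)

# Via the recursion X_t = phi X_{t-1} + W_t, started at zero:
x = 0.0
for t in range(m + 1):
    x = phi * x + w[t]

# Via the truncated MA(infinity) representation: sum of phi^j W_{t-j}.
x_ma = sum(phi**j * w[m - j] for j in range(m + 1))

print(x, x_ma)   # identical up to floating-point rounding
```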
Introduction to Time Series Analysis. Lecture 4.
1. Review: ACF, sample ACF.
2. Properties of estimates of µ and ρ.
3. Convergence in mean square.