PS2 Solutions
ECON301 — Econometrics I
These are my suggested solutions. It is your responsibility to cross-check this document with your own answers.
1. (a) $E(\bar{Y}\mid X's) = E\left(\frac{1}{n}\sum_i Y_i \,\Big|\, X's\right) = \frac{1}{n}\sum_i E[Y_i\mid X's] = \frac{1}{n}\sum_i E[Y_i\mid X_i] = \frac{1}{n}\sum_i (\beta_0 + \beta_1 X_i) = \beta_0 + \beta_1\bar{X}$. The third equality is by strong exogeneity and the fourth equality is by assumption 2 (linearity).
(b) $E(\hat\beta_0 + \hat\beta_1 X_1\mid X's) = E(\hat\beta_0\mid X's) + X_1\,E(\hat\beta_1\mid X's) = \beta_0 + \beta_1 X_1$, where the last equality uses the results from theorem 3.3.
(c) $\mathrm{Var}(\hat\beta_0 + \hat\beta_1 X_1\mid X's) = \mathrm{Var}(\hat\beta_0\mid X's) + X_1^2\,\mathrm{Var}(\hat\beta_1\mid X's) + 2X_1\,\mathrm{Cov}(\hat\beta_0,\hat\beta_1\mid X's) = \frac{\sigma^2}{n s_{xx}}\sum_i X_i^2 + X_1^2\frac{\sigma^2}{s_{xx}} - 2X_1\frac{\sigma^2}{s_{xx}}\bar{X} = \frac{\sigma^2}{s_{xx}}\left[(1/n)\sum_i X_i^2 + X_1^2 - 2X_1\bar{X}\right]$. The first equality is by theorems 2.3 and 2.4. The rest is by theorem 3.3.
(d) $\mathrm{Var}(\bar{Y} - \beta_1\bar{X}\mid X's) = \mathrm{Var}(\bar{Y}\mid X's) = \frac{1}{n^2}\sum_i \mathrm{Var}(Y_i\mid X's) = \frac{1}{n^2}\,n\sigma^2 = \frac{\sigma^2}{n}$. The first equality is because the term $\beta_1\bar{X}$ is a constant conditional on the $X's$. The second equality is the variance calculation formula from theorem 2.2 and also uses strong exogeneity. The third equality is by assumption 4 (homoskedasticity).
(e) $\mathrm{Cov}(\bar{Y},\hat\beta_1\mid X's) = \frac{1}{n s_{xx}}\mathrm{Cov}\left(\sum_i Y_i,\; s_{xy}\,\Big|\, X's\right) = \frac{1}{n s_{xx}}\mathrm{Cov}\left(\sum_i Y_i,\; \sum_i (X_i - \bar{X})Y_i \,\Big|\, X's\right) = \frac{1}{n s_{xx}}\sum_i (X_i - \bar{X})\,\mathrm{Cov}(Y_i, Y_i\mid X's) = \frac{1}{n s_{xx}}\sum_i (X_i - \bar{X})\,\mathrm{Var}(Y_i\mid X's) = \frac{\sigma^2}{n s_{xx}}\sum_i (X_i - \bar{X}) = 0$, since $\sum_i (X_i - \bar{X}) = 0$. Here, most of the computations use the results from theorem 2.3. The third equality uses strong exogeneity, and the cross terms $\mathrm{Cov}(Y_i, Y_j\mid X's)$ for $i \neq j$ vanish by independence.
(f) $E(\bar{Y} - \hat\beta_1 X_1\mid X's) = E(\bar{Y}\mid X's) - X_1\,E[\hat\beta_1\mid X's] = \beta_0 + \beta_1\bar{X} - X_1\beta_1 = \beta_0 + \beta_1(\bar{X} - X_1)$. The first equality is by linearity of conditional expectation and the second equality is from task 1.a in this problem set and theorem 3.3, i.e., the conditional unbiasedness of the LSE.
(g) $\mathrm{Var}(\bar{Y} - \hat\beta_1 X_1\mid X's) = \mathrm{Var}(\bar{Y}\mid X's) + X_1^2\,\mathrm{Var}(\hat\beta_1\mid X's) - 2X_1\,\mathrm{Cov}(\bar{Y},\hat\beta_1\mid X's) = \frac{\sigma^2}{n} + \frac{\sigma^2}{s_{xx}}X_1^2 = \frac{\sigma^2}{s_{xx}}\left[\frac{s_{xx}}{n} + X_1^2\right] = \frac{\sigma^2}{s_{xx}}\left[\frac{1}{n}\sum_i X_i^2 - \bar{X}^2 + X_1^2\right]$. The first equality is the variance calculation formula. The second equality uses the results from items 1.d, 1.e, and theorem 3.3. The rest is just algebra and the result from lemma 3.1.
3. In lecture 3, part 3, we calculated the expression for $\mathrm{Var}(\hat\beta_1\mid X's)$. We need to calculate the remaining expressions in the variance-covariance matrix: $\mathrm{Var}(\hat\beta_0\mid X's)$ and $\mathrm{Cov}(\hat\beta_0,\hat\beta_1\mid X's)$. We start by calculating the conditional variance of $\hat\beta_0$:

$$\mathrm{Var}(\hat\beta_0\mid X's) = \mathrm{Var}(\bar{Y} - \hat\beta_1\bar{X}\mid X's) = \mathrm{Var}(\bar{Y}\mid X's) + \bar{X}^2\,\mathrm{Var}(\hat\beta_1\mid X's) - 2\bar{X}\,\mathrm{Cov}(\bar{Y},\hat\beta_1\mid X's) = \frac{\sigma^2}{n} + \frac{\sigma^2\bar{X}^2}{s_{xx}} = \frac{\sigma^2}{n s_{xx}}\left(s_{xx} + n\bar{X}^2\right) = \frac{\sigma^2}{n s_{xx}}\sum_i X_i^2.$$

The calculation above uses many of the results from task 1 in this problem set. The third equality uses 1.d and 1.e, as well as the expression for the variance of $\hat\beta_1$ that was calculated in theorem 3.3; it also uses item 1.f as inspiration about how to compute this variance. The penultimate equality is just algebra, and the last equality uses the algebraic result from lemma 3.1.
Next, we calculate the conditional covariance between $\hat\beta_0$ and $\hat\beta_1$:

$$\mathrm{Cov}(\hat\beta_0,\hat\beta_1\mid X's) = \mathrm{Cov}(\bar{Y} - \hat\beta_1\bar{X},\; \hat\beta_1\mid X's) = \mathrm{Cov}(\bar{Y},\hat\beta_1\mid X's) - \bar{X}\,\mathrm{Var}(\hat\beta_1\mid X's) = -\bar{X}\,\mathrm{Var}(\hat\beta_1\mid X's) = -\frac{\sigma^2\bar{X}}{s_{xx}}.$$

In this calculation, we exploit all the results from task 1 in this problem set. The penultimate equality is due to task 1.e. The last equality exploits our knowledge of $\mathrm{Var}(\hat\beta_1\mid X's)$, which we derived in the proof of theorem 3.3.
Note that the results from tasks 2 and 3 in this problem set close the remaining gaps in the proof of theorem 3.3 which I gave in the lecture.
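The two expressions just derived can also be checked by simulation. The sketch below (again with hypothetical $\beta_0 = 3$, $\beta_1 = 2$, $\sigma^2 = 3$ and an arbitrary fixed design of my own choosing) compares the sample variance of $\hat\beta_0$ and the sample covariance of $(\hat\beta_0, \hat\beta_1)$ across repeated samples with $\sigma^2\sum_i X_i^2/(n s_{xx})$ and $-\sigma^2\bar{X}/s_{xx}$.

```python
import random

# Simulation check of Var(b0_hat | X's) and Cov(b0_hat, b1_hat | X's).
random.seed(1)
b0, b1, sigma2 = 3.0, 2.0, 3.0          # hypothetical population parameters
X = [2.0, 3.0, 5.0, 6.0]                # arbitrary fixed design
n = len(X)
xbar = sum(X) / n
sxx = sum((x - xbar) ** 2 for x in X)
sum_x2 = sum(x * x for x in X)

b0_draws, b1_draws = [], []
for _ in range(200_000):
    Y = [b0 + b1 * x + random.gauss(0.0, sigma2 ** 0.5) for x in X]
    ybar = sum(Y) / n
    b1_hat = sum((x - xbar) * y for x, y in zip(X, Y)) / sxx
    b0_hat = ybar - b1_hat * xbar       # LSE intercept
    b0_draws.append(b0_hat)
    b1_draws.append(b1_hat)

N = len(b0_draws)
m0 = sum(b0_draws) / N
m1 = sum(b1_draws) / N
var_b0 = sum((v - m0) ** 2 for v in b0_draws) / (N - 1)
cov_b0b1 = sum((a - m0) * (b - m1) for a, b in zip(b0_draws, b1_draws)) / (N - 1)

var_b0_formula = sigma2 * sum_x2 / (n * sxx)   # task-3 variance expression
cov_formula = -sigma2 * xbar / sxx             # task-3 covariance expression
```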
4. First note that the population parameters are: $\beta_0 = 3$, $\beta_1 = 2$ and $\sigma^2 = 3$.
(a) With the data that we have, we calculate $\bar{X} = 3.5$. Now we can calculate $s_{xx} = \sum_{i=1}^{3} X_i^2 - 3(3.5^2) = 52.25 - 36.75 = 15.5$. Since $s_{xx} > 0$, assumption 3 is satisfied.
(b) From item 1.f above we know that $E(\bar{Y} - \hat\beta_1 X_1\mid X's) = \beta_0 + \beta_1(\bar{X} - X_1) = 3 + 2(3.5 - 1) = 8$.
Now we calculate the expression for $\mathrm{Var}(\hat\beta_0 + \hat\beta_1\mid X's) = \mathrm{Var}(\hat\beta_0\mid X's) + \mathrm{Var}(\hat\beta_1\mid X's) + 2\,\mathrm{Cov}(\hat\beta_0,\hat\beta_1\mid X's) = \frac{\sigma^2}{s_{xx}}\left[\frac{1}{n}\sum_i X_i^2 + 1 - 2\bar{X}\right]$. For the data given: $\mathrm{Var}(\hat\beta_0 + \hat\beta_1\mid X's) = \frac{3}{15.5}\left[\frac{52.25}{3} + 1 - 2(3.5)\right] = 2.209677$.
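The arithmetic can be reproduced in a few lines; this is just the formula from 1.(c) evaluated at $X_1 = 1$ with the summary statistics of this task:

```python
# Summary statistics from task 4: n = 3, sum of X_i^2 = 52.25, Xbar = 3.5, sxx = 15.5.
sigma2, n = 3.0, 3
sum_x2, xbar, sxx = 52.25, 3.5, 15.5

# Var(b0_hat + b1_hat | X's) = (sigma2 / sxx) * [(1/n) * sum X_i^2 + 1 - 2 * Xbar]
var_sum = (sigma2 / sxx) * (sum_x2 / n + 1 - 2 * xbar)
print(round(var_sum, 6))  # 2.209677
```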
(c) From the expression in theorem 3.3 and using the data from this problem, we have that

$$\mathrm{Var}(\hat\beta\mid X's) = \frac{3}{15.5}\begin{bmatrix} 52.25/3 & -3.5 \\ -3.5 & 1 \end{bmatrix} = \begin{bmatrix} 3.3710 & -0.6774 \\ -0.6774 & 0.1935 \end{bmatrix}.$$
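These matrix entries follow from the same summary statistics; here is a quick check in plain Python (no linear-algebra library needed):

```python
# Var(beta_hat | X's) = (sigma2 / sxx) * [[sum X_i^2 / n, -Xbar], [-Xbar, 1]]
sigma2, n = 3.0, 3
sum_x2, xbar, sxx = 52.25, 3.5, 15.5

scale = sigma2 / sxx
V = [[scale * sum_x2 / n, -scale * xbar],
     [-scale * xbar,       scale]]
# Rounded to 4 decimals this gives [[3.371, -0.6774], [-0.6774, 0.1935]].
```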
(b) For simplicity, we will introduce the following notation: for any vector with $n$ components $(z_1,\ldots,z_n)$, let $z_{-i} = (z_j)_{j\neq i}$, i.e., the vector with $n-1$ components in which we delete the component $i$ but all other ones remain. Similarly, denote by $dz_{-i} = \prod_{j\neq i} dz_j$, i.e., the product of all differentials $dz_j$ except $dz_i$. Then we have that
$$f(y_i\mid x_1,\ldots,x_n) = \int\!\cdots\!\int f(y_i, y_{-i}\mid x_1,\ldots,x_n)\,dy_{-i} = \int\!\cdots\!\int \prod_{j=1}^{n} f(y_j\mid x_j)\,dy_{-i} = \int\!\cdots\!\int f(y_i\mid x_i)\prod_{j\neq i} f(y_j\mid x_j)\,dy_{-i} = f(y_i\mid x_i)\prod_{j\neq i}\int f(y_j\mid x_j)\,dy_j = f(y_i\mid x_i).$$
(c) Similarly, for $i \neq j$, we have that
$$f(y_i, y_j\mid x_1,\ldots,x_n) = \int\!\cdots\!\int f(y_i, y_j, y_{-i,-j}\mid x_1,\ldots,x_n)\,dy_{-i,-j} = \int\!\cdots\!\int \prod_{k=1}^{n} f(y_k\mid x_k)\,dy_{-i,-j} = \int\!\cdots\!\int f(y_i\mid x_i)f(y_j\mid x_j)\prod_{k\neq i,\,k\neq j} f(y_k\mid x_k)\,dy_{-i,-j} = f(y_i\mid x_i)f(y_j\mid x_j)\prod_{k\neq i,\,k\neq j}\int f(y_k\mid x_k)\,dy_k = f(y_i\mid x_i)\,f(y_j\mid x_j), \qquad (3)$$
where the fourth equality comes from item (b) in this task and the last equality comes from the fact that the random variables are identically distributed, as mentioned earlier. Combining (3) and (4) we get the statement written in the question.
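The marginalization argument above can be illustrated with a discrete analogue, replacing integrals by sums. The sketch below is purely illustrative (the pmfs are made up): when the joint conditional pmf is the product $\prod_k f(y_k\mid x_k)$, summing out the other coordinates recovers $f(y_i\mid x_i)$, and the pairwise joint equals $f(y_i\mid x_i)f(y_j\mid x_j)$.

```python
from itertools import product

# Hypothetical conditional pmfs f(y | x) over y in {0, 1}, one per observation.
f = [
    {0: 0.3, 1: 0.7},   # f(y_1 | x_1)
    {0: 0.6, 1: 0.4},   # f(y_2 | x_2)
    {0: 0.5, 1: 0.5},   # f(y_3 | x_3)
]

def joint(y):
    # Assumption 1 (independent sample): the joint conditional pmf factorizes.
    return f[0][y[0]] * f[1][y[1]] * f[2][y[2]]

# Marginal of Y_1: "integrate out" y_2 and y_3 -> recovers f(y_1 | x_1).
marg_1 = {y1: sum(joint((y1, y2, y3)) for y2, y3 in product((0, 1), repeat=2))
          for y1 in (0, 1)}

# Pairwise joint of (Y_1, Y_2): sum out y_3 -> f(y_1 | x_1) * f(y_2 | x_2).
marg_12 = {(y1, y2): sum(joint((y1, y2, y3)) for y3 in (0, 1))
           for y1, y2 in product((0, 1), repeat=2)}
```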
6. You have met this distribution before: in the example from lecture 2 and in problem set 1. First of all, note that, as we computed in lecture 2, $E[Y\mid X = x] = \frac{3x+2}{6x+3}$. Thus, assumption 2 from lecture 3 about the linearity of the conditional expectation will not be met. However, assumption 1 is met because we do have a random sample with $n$ independent observations. Thus, strong exogeneity will be true, and we will have that $E[Y_i\mid X_i] = \frac{3X_i+2}{6X_i+3}$.
Let's compute $E(\hat\beta_1\mid X's)$:

$$E(\hat\beta_1\mid X's) = E(s_{xy}/s_{xx}\mid X's) = \frac{1}{s_{xx}}\,E[s_{xy}\mid X's] = \frac{1}{s_{xx}}\,E\left[\sum_i (X_i - \bar{X})Y_i \,\Big|\, X's\right] = \frac{1}{s_{xx}}\sum_i (X_i - \bar{X})\,E[Y_i\mid X's] = \frac{1}{s_{xx}}\sum_i (X_i - \bar{X})\,E[Y_i\mid X_i],$$
where the last equality comes from strong exogeneity. Up until now, the calculation looks like the proof of
theorem 3.3, but since we do not have linearity, we cannot proceed with the calculation in the same way as we
did in the proof of that theorem. However, we do have an expression for the conditional expectation:
$$E(\hat\beta_1\mid X's) = \frac{1}{s_{xx}}\sum_i (X_i - \bar{X})\,\frac{3X_i + 2}{6X_i + 3}.$$
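To see that this expression depends on the realized $X's$ and is not some fixed slope parameter, here is a small numerical illustration; the design points are my own hypothetical choices, not data from the problem:

```python
# Evaluate E(b1_hat | X's) = (1/sxx) * sum_i (X_i - Xbar) * (3*X_i + 2)/(6*X_i + 3)
# for a hypothetical design.  The CEF is decreasing in x, so the result is negative,
# and it changes with the X's.
X = [0.2, 0.5, 0.8, 1.0]
n = len(X)
xbar = sum(X) / n
sxx = sum((x - xbar) ** 2 for x in X)

def cef(x):
    # E[Y | X = x] from lecture 2
    return (3 * x + 2) / (6 * x + 3)

e_b1 = sum((x - xbar) * cef(x) for x in X) / sxx
```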
7. Let me begin by introducing some new notation that will help with the calculations. Denote the components of the variance-covariance matrix as follows:

$$\mathrm{Var}(\hat\beta\mid X's) = \sigma^2\begin{bmatrix} \frac{1}{n s_{xx}}\sum_i X_i^2 & -\frac{\bar{X}}{s_{xx}} \\ -\frac{\bar{X}}{s_{xx}} & \frac{1}{s_{xx}} \end{bmatrix} = \sigma^2\begin{bmatrix} V_0 & V_{01} \\ V_{01} & V_1 \end{bmatrix}$$
Then note that $\mathrm{Var}(\hat\beta_i\mid X's) = \sigma^2 V_i$. From theorem 3.4, we know that the LSE has a bivariate normal distribution. It is a well-known result that the marginal distributions from a bivariate normal distribution are also normal. Thus we have that

$$\hat\beta_i\mid X's \sim N(\beta_i,\ \sigma^2 V_i).$$

Therefore, we have that:

$$\frac{\hat\beta_i - \beta_i}{\sigma\sqrt{V_i}} \sim N(0, 1).$$
On the other hand, as explained in the lecture, we have the following:

$$\frac{(n-2)\,\hat\sigma^2}{\sigma^2}\,\Big|\, X's \sim \chi^2(n-2).$$

Note that $SE(\hat\beta_i\mid X's) = \hat\sigma\sqrt{V_i}$. Thus
$$\frac{\dfrac{\hat\beta_i - \beta_i}{\sigma\sqrt{V_i}}}{\sqrt{\dfrac{(n-2)\hat\sigma^2}{\sigma^2}\Big/(n-2)}} = \frac{\dfrac{\hat\beta_i - \beta_i}{\sigma\sqrt{V_i}}}{\hat\sigma/\sigma} = \frac{\hat\beta_i - \beta_i}{\hat\sigma\sqrt{V_i}} = \frac{\hat\beta_i - \beta_i}{SE(\hat\beta_i\mid X's)} = t_i$$
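As a numerical sanity check on this distributional claim (my own sketch, not part of the solution): with normal errors and a hypothetical design of $n = 8$ points, the simulated $t_i$ for the slope should behave like a $t(6)$ variable, whose mean is $0$ and whose variance is $6/(6-2) = 1.5$.

```python
import random

# Monte Carlo sketch: the studentized slope should follow t(n - 2).
random.seed(2)
b0, b1, sigma2 = 3.0, 2.0, 3.0                 # hypothetical population parameters
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]   # n = 8, so df = n - 2 = 6
n = len(X)
xbar = sum(X) / n
sxx = sum((x - xbar) ** 2 for x in X)

t_draws = []
for _ in range(100_000):
    Y = [b0 + b1 * x + random.gauss(0.0, sigma2 ** 0.5) for x in X]
    ybar = sum(Y) / n
    b1_hat = sum((x - xbar) * y for x, y in zip(X, Y)) / sxx
    b0_hat = ybar - b1_hat * xbar
    ssr = sum((y - b0_hat - b1_hat * x) ** 2 for x, y in zip(X, Y))
    sigma2_hat = ssr / (n - 2)                 # unbiased estimator of sigma^2
    se_b1 = (sigma2_hat / sxx) ** 0.5          # SE(b1_hat | X's) = sigma_hat * sqrt(V1)
    t_draws.append((b1_hat - b1) / se_b1)

mean_t = sum(t_draws) / len(t_draws)
var_t = sum((t - mean_t) ** 2 for t in t_draws) / (len(t_draws) - 1)
```

The simulated mean should be near $0$ and the simulated variance near $1.5$, up to Monte Carlo error.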