Problem Set 2—Solutions

ECON301—Econometrics I

Prof. Alejandro Melo Ponce, Nazarbayev University

These are my suggested solutions. It is your responsibility to cross-check this document with your own answers.
1. (a) $E(\bar{Y} \mid X's) = E\left(\frac{1}{n}\sum_i Y_i \mid X's\right) = \frac{1}{n}\sum_i E[Y_i \mid X's] = \frac{1}{n}\sum_i E[Y_i \mid X_i] = \frac{1}{n}\sum_i (\beta_0 + \beta_1 X_i) = \beta_0 + \beta_1 \bar{X}$. The third equality is by strong exogeneity and the fourth equality is by assumption 2 (linearity).
   (b) $E(\hat\beta_0 + \hat\beta_1 X_1 \mid X's) = E(\hat\beta_0 \mid X's) + X_1 E(\hat\beta_1 \mid X's) = \beta_0 + \beta_1 X_1$, where the last equality uses the results from theorem 3.3.
   (c) $\operatorname{Var}(\hat\beta_0 + \hat\beta_1 X_1 \mid X's) = \operatorname{Var}(\hat\beta_0 \mid X's) + X_1^2 \operatorname{Var}(\hat\beta_1 \mid X's) + 2X_1 \operatorname{Cov}(\hat\beta_0, \hat\beta_1) = \frac{\sigma^2}{n s_{xx}}\sum_i X_i^2 + X_1^2 \frac{\sigma^2}{s_{xx}} - 2X_1 \frac{\sigma^2}{s_{xx}}\bar{X} = \frac{\sigma^2}{s_{xx}}\left[\frac{1}{n}\sum_i X_i^2 + X_1^2 - 2X_1\bar{X}\right]$. The first equality is by theorems 2.3 and 2.4. The rest is by theorem 3.3.
   (d) $\operatorname{Var}(\bar{Y} - \beta_1\bar{X} \mid X's) = \operatorname{Var}(\bar{Y} \mid X's) = \frac{1}{n^2}\sum_i \operatorname{Var}(Y_i \mid X's) = \frac{1}{n^2}\, n\sigma^2 = \frac{\sigma^2}{n}$. The first equality is because the term $\beta_1\bar{X}$ is a constant conditional on the $X's$. The second equality is the variance calculation formula from theorem 2.2 and also uses strong exogeneity.[1] The third equality is by assumption 4 of homoskedasticity.
P P P  P
(e) Cov.YN ; ˇO1 jX 0 s/ D ns1xx Cov. i Yi ; sxy jX 0 s/ D ns1xx Cov i Yi ; i .Xi
N i jX 0 s D 1
X/Y nsxx i .Xi
1 P 1 P  2 P
N
X/Cov.Y 0
i ; Yi jX s/ D nsxx i .Xi XN /Var.Yi jX s/ D nsxx i .Xi X/
0 N 2
D nsxx i .Xi XN / D 0: Here,
most of the computations use the results from theorem 2.3. The third equality uses strong exogeneity.2
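The zero-covariance result in 1.e lends itself to a quick numerical sanity check. The sketch below is only illustrative: the fixed design and the parameter values are made up for the simulation and are not taken from the problem set.

```python
import random

random.seed(42)

# Hypothetical fixed design and parameters, chosen only for illustration
X = [1.0, 2.0, 4.0, 5.0]
beta0, beta1, sigma = 3.0, 2.0, 1.0
n = len(X)
xbar = sum(X) / n
sxx = sum((x - xbar) ** 2 for x in X)

ybars, b1hats = [], []
for _ in range(50_000):
    # Draw Y_i = beta0 + beta1*X_i + eps_i, holding the X's fixed (conditioning)
    Y = [beta0 + beta1 * x + random.gauss(0.0, sigma) for x in X]
    ybars.append(sum(Y) / n)
    b1hats.append(sum((X[i] - xbar) * Y[i] for i in range(n)) / sxx)

mean_ybar = sum(ybars) / len(ybars)
mean_b1 = sum(b1hats) / len(b1hats)
cov = sum((a - mean_ybar) * (b - mean_b1)
          for a, b in zip(ybars, b1hats)) / len(ybars)
print(abs(cov) < 0.01)  # the sample covariance is essentially zero
```

The simulated covariance between $\bar{Y}$ and $\hat\beta_1$ sits within Monte Carlo noise of zero, and the sample mean of the $\hat\beta_1$ draws also illustrates the conditional unbiasedness from theorem 3.3.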
   (f) $E(\bar{Y} - \hat\beta_1 X_1 \mid X's) = E(\bar{Y} \mid X's) - X_1 E[\hat\beta_1 \mid X's] = \beta_0 + \beta_1\bar{X} - X_1\beta_1 = \beta_0 + \beta_1(\bar{X} - X_1)$. The first equality is by linearity of conditional expectation and the second equality is from task 1.a in this problem set and theorem 3.3, i.e., the conditional unbiasedness of the LSE.
   (g) $\operatorname{Var}(\bar{Y} - \hat\beta_1 X_1 \mid X's) = \operatorname{Var}(\bar{Y} \mid X's) + X_1^2 \operatorname{Var}(\hat\beta_1 \mid X's) - 2X_1 \operatorname{Cov}(\bar{Y}, \hat\beta_1) = \frac{\sigma^2}{n} + \frac{\sigma^2}{s_{xx}}X_1^2 = \frac{\sigma^2}{s_{xx}}\left(\frac{s_{xx}}{n} + X_1^2\right) = \frac{\sigma^2}{s_{xx}}\left[\frac{1}{n}\sum_i X_i^2 - \bar{X}^2 + X_1^2\right]$. The first equality is the variance calculation formula. The second equality uses the results from items 1.d, 1.e, and theorem 3.3. The rest is just algebra and the result from lemma 3.1.

2. E.ˇO0 jX 0 s/ D E.YN ˇO1 XjX N ˇO1 jX 0 s/ D ˇ0 C ˇ1 XN Xˇ


N 0 s/ D E.YN jX 0 s/ XE. N 1 D ˇ0 : The results follow from
the results in item 1.a in this problem set and the hint, which one of the results proven in theorem 3.3.

3. In Lecture 3, part 3, we calculated the expression for $\operatorname{Var}(\hat\beta_1 \mid X's)$. We need to calculate the remaining expressions in the variance-covariance matrix: $\operatorname{Var}(\hat\beta_0 \mid X's)$ and $\operatorname{Cov}(\hat\beta_0, \hat\beta_1 \mid X's)$. We start by calculating the conditional variance of $\hat\beta_0$:

Var.ˇO0 jX 0 s/ D Var.YN ˇO1 XjX


N 0 s/ D Var.YN jX 0 s/ C XN 2 Var.ˇO1 jX 0 s/ 2XCov.
N YN ; ˇO1 /
!
2 2  2  sxx 2
  2
1 X
2 2 2
D C 0D C XN D Xi XN C XN
n sxx XN 2 sxx n sxx n
i
!
2 1 X 2
D Xi
sxx n
i

The calculation above uses many of the results from task 1 in this problem set. The third equality uses 1.d and
1.e, as well as the expression for the variance of ˇO1 that was calculated in theorem 3.3; it also uses item 1.f as
inspiration about how to compute this variance. The penultimate equality uses the algebraic result from lemma
3.1.
Next, we calculate the conditional covariance between $\hat\beta_0$ and $\hat\beta_1$:

$$
\operatorname{Cov}(\hat\beta_0, \hat\beta_1 \mid X's) = \operatorname{Cov}(\bar{Y} - \hat\beta_1\bar{X}, \hat\beta_1 \mid X's) = \operatorname{Cov}(\bar{Y}, \hat\beta_1 \mid X's) - \bar{X}\operatorname{Cov}(\hat\beta_1, \hat\beta_1 \mid X's) = -\bar{X}\operatorname{Var}(\hat\beta_1 \mid X's) = -\frac{\sigma^2}{s_{xx}}\bar{X}
$$
[1] An attentive student should recognize that in the variance computation, a bunch of covariance terms between $Y_i$ and $Y_j$ for $i \neq j$ should appear. I skipped them because, by strong exogeneity, they equal zero. For a formal proof of this fact, see exercise 5.d in this problem set.

[2] Once again, you should recognize that in the sum of covariances computation, a bunch of covariance terms between $Y_i$ and $Y_j$ for $i \neq j$ should appear. I skipped them because, by strong exogeneity, they equal zero. For a formal proof of this fact, see exercise 5.d in this problem set.

In this calculation, we exploit all the results from task 1 in this problem set. The penultimate equality is due to task 1.e. The last equality exploits our knowledge of $\operatorname{Var}(\hat\beta_1 \mid X's)$, which we derived in the proof of theorem 3.3.
Note that the results from tasks 2 and 3 in this problem set close the remaining gaps in the proof of theorem 3.3 which I gave in the lecture.
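The closed-form variance and covariance just derived can be checked against a simulation. As before, this is only a sketch: the fixed design and parameter values below are made up for illustration, not taken from the problem set.

```python
import random

random.seed(7)

# Hypothetical fixed design and parameters, for illustration only
X = [1.0, 2.0, 4.0, 5.0]
beta0, beta1, sigma = 3.0, 2.0, 1.0
n = len(X)
xbar = sum(X) / n
sxx = sum((x - xbar) ** 2 for x in X)
sum_x2 = sum(x * x for x in X)

# Theoretical values from problems 2-3
var_b0_theory = sigma**2 * sum_x2 / (n * sxx)   # sigma^2/(n*sxx) * sum Xi^2
cov_theory = -sigma**2 * xbar / sxx             # -sigma^2 * Xbar / sxx

b0s, b1s = [], []
for _ in range(100_000):
    Y = [beta0 + beta1 * x + random.gauss(0.0, sigma) for x in X]
    ybar = sum(Y) / n
    b1 = sum((X[i] - xbar) * Y[i] for i in range(n)) / sxx
    b0s.append(ybar - b1 * xbar)   # LSE intercept: b0 = Ybar - b1*Xbar
    b1s.append(b1)

m0 = sum(b0s) / len(b0s)
m1 = sum(b1s) / len(b1s)
var_b0_mc = sum((b - m0) ** 2 for b in b0s) / len(b0s)
cov_mc = sum((a - m0) * (b - m1) for a, b in zip(b0s, b1s)) / len(b0s)
print(round(var_b0_theory, 3), round(cov_theory, 3))  # 1.15 -0.3
```

For this design the Monte Carlo estimates of $\operatorname{Var}(\hat\beta_0 \mid X's)$ and $\operatorname{Cov}(\hat\beta_0, \hat\beta_1 \mid X's)$ land within simulation noise of the theoretical values.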
4. First note that the population parameters are: $\beta_0 = 3$, $\beta_1 = 2$ and $\sigma^2 = 3$.
   (a) With the data that we have, we calculate $\bar{X} = 3.5$. Now we can calculate $s_{xx} = \sum_{i=1}^{3} X_i^2 - 3(3.5^2) = 15.5$. Since $s_{xx} > 0$, assumption 3 is satisfied.
   (b) From item 1.f above we know that $E(\bar{Y} - \hat\beta_1 X_1 \mid X's) = \beta_0 + \beta_1(\bar{X} - X_1) = 3 + 2(3.5 - 1) = 8$.
   Now we calculate the expression for $\operatorname{Var}(\hat\beta_0 + \hat\beta_1 \mid X's) = \operatorname{Var}(\hat\beta_0 \mid X's) + \operatorname{Var}(\hat\beta_1 \mid X's) + 2\operatorname{Cov}(\hat\beta_0, \hat\beta_1) = \frac{\sigma^2}{s_{xx}}\left[\frac{1}{n}\sum_i X_i^2 + 1 - 2\bar{X}\right]$. For the data given: $\operatorname{Var}(\hat\beta_0 + \hat\beta_1 \mid X's) = \frac{3}{15.5}\left[\frac{52.25}{3} + 1 - 2(3.5)\right] = 2.209677$.
   (c) From the expression in theorem 3.3 and using the data from this problem, we have that
$$
\operatorname{Var}(\hat\beta \mid X's) = \frac{3}{15.5}\begin{bmatrix} 52.25/3 & -3.5 \\ -3.5 & 1 \end{bmatrix} = \begin{bmatrix} 3.3710 & -0.6774 \\ -0.6774 & 0.1935 \end{bmatrix}
$$
5. Because of independence between each of the $(X_i, Y_i)_{i=1}^{n}$, we have that
$$
f(x_1, y_1, \ldots, x_n, y_n) = \prod_{i=1}^{n} f(x_i, y_i).
$$

   (a) We have that
$$
f(y_1, \ldots, y_n \mid x_1, \ldots, x_n) = \frac{f(x_1, y_1, \ldots, x_n, y_n)}{f(x_1, \ldots, x_n)} = \frac{\prod_{i=1}^{n} f(x_i, y_i)}{f(x_1, \ldots, x_n)}. \qquad (1)
$$
The denominator is equal to
$$
\begin{aligned}
f(x_1, \ldots, x_n) &= \int \cdots \int f(x_1, y_1, \ldots, x_n, y_n)\,dy_1 \cdots dy_n = \int \cdots \int \prod_{i=1}^{n} f(x_i, y_i)\,dy_1 \cdots dy_n \\
&= \prod_{i=1}^{n}\left[\int f(x_i, y_i)\,dy_i\right] = \prod_{i=1}^{n} f(x_i). \qquad (2)
\end{aligned}
$$
Combining (1) and (2) we get
$$
f(y_1, \ldots, y_n \mid x_1, \ldots, x_n) = \frac{\prod_{i=1}^{n} f(x_i, y_i)}{\prod_{i=1}^{n} f(x_i)} = \prod_{i=1}^{n}\frac{f(x_i, y_i)}{f(x_i)} = \prod_{i=1}^{n} f(y_i \mid x_i).
$$

   (b) For simplicity, we will introduce the following notation: for any vector with $n$ components $(z_1, \ldots, z_n)$, let $z_{-i} = (z_j)_{j \neq i}$, i.e., the vector with $n-1$ components in which we delete component $i$ but all other ones remain. Similarly, denote by $dz_{-i} = \prod_{j \neq i} dz_j$, i.e., the product of all differentials $dz_j$ except $dz_i$. Then we have that
$$
\begin{aligned}
f(y_i \mid x_1, \ldots, x_n) &= \int \cdots \int f(y_i, y_{-i} \mid x_1, \ldots, x_n)\,dy_{-i} = \int \cdots \int \prod_{j=1}^{n} f(y_j \mid x_j)\,dy_{-i} \\
&= \int \cdots \int f(y_i \mid x_i)\prod_{j \neq i} f(y_j \mid x_j)\,dy_{-i} = f(y_i \mid x_i)\prod_{j \neq i}\left[\int f(y_j \mid x_j)\,dy_j\right] \\
&= f(y_i \mid x_i),
\end{aligned}
$$
because all the integrands $f(y_j \mid x_j)$, $j = 1, \ldots, n$, $j \neq i$, integrate to one.


   (c) Similarly as above, for any vector with $n$ components $(z_1, \ldots, z_n)$, let $z_{-i,-j} = (z_k)_{k \neq i, k \neq j}$, i.e., the vector with $n-2$ components in which we delete the $i$th and $j$th entries but all other ones remain. Similarly, denote by $dz_{-i,-j} = \prod_{k \neq i, k \neq j} dz_k$, i.e., the product of all differentials $dz_k$ except $dz_i$ and $dz_j$. Then we have that
$$
\begin{aligned}
f(y_i, y_j \mid x_1, \ldots, x_n) &= \int \cdots \int f(y_i, y_j, y_{-i,-j} \mid x_1, \ldots, x_n)\,dy_{-i,-j} = \int \cdots \int \prod_{k=1}^{n} f(y_k \mid x_k)\,dy_{-i,-j} \\
&= \int \cdots \int f(y_i \mid x_i) f(y_j \mid x_j)\prod_{k \neq i, k \neq j} f(y_k \mid x_k)\,dy_{-i,-j} \\
&= f(y_i \mid x_i) f(y_j \mid x_j)\prod_{k \neq i, k \neq j}\left[\int f(y_k \mid x_k)\,dy_k\right] \\
&= f(y_i \mid x_i) f(y_j \mid x_j),
\end{aligned}
$$
because all the integrands $f(y_k \mid x_k)$, $k = 1, \ldots, n$, $k \neq i$, $k \neq j$, integrate to one.


   (d) First note that since we have independent and identically distributed random variables, $\operatorname{Var}(Y_1 \mid X_1) = \operatorname{Var}(Y_i \mid X_i)$ for $i = 1, \ldots, n$. With this in mind, we are going to calculate the covariance next. We need to analyze two cases:

   Case 1: $i \neq j$. When the two terms are different we have that:
$$
\begin{aligned}
\operatorname{Cov}(Y_i, Y_j \mid X_1, \ldots, X_n) &= E[Y_i Y_j \mid X_1, \ldots, X_n] - E[Y_i \mid X_1, \ldots, X_n]\,E[Y_j \mid X_1, \ldots, X_n] \\
&= \iint y_i y_j f(y_i, y_j \mid x_1, \ldots, x_n)\,dy_i\,dy_j \\
&\quad - \left[\int y_i f(y_i \mid x_1, \ldots, x_n)\,dy_i\right]\left[\int y_j f(y_j \mid x_1, \ldots, x_n)\,dy_j\right] \\
&= \iint y_i y_j f(y_i \mid x_i) f(y_j \mid x_j)\,dy_i\,dy_j - \left[\int y_i f(y_i \mid x_i)\,dy_i\right]\left[\int y_j f(y_j \mid x_j)\,dy_j\right] \\
&= \left[\int y_i f(y_i \mid x_i)\,dy_i\right]\left[\int y_j f(y_j \mid x_j)\,dy_j\right] - \left[\int y_i f(y_i \mid x_i)\,dy_i\right]\left[\int y_j f(y_j \mid x_j)\,dy_j\right] \\
&= 0 \qquad (3)
\end{aligned}
$$

   Case 2: $i = j$. For this case we have that
$$
\begin{aligned}
\operatorname{Cov}(Y_i, Y_j \mid X_1, \ldots, X_n) &= \operatorname{Cov}(Y_i, Y_i \mid X_1, \ldots, X_n) = \operatorname{Var}(Y_i \mid X_1, \ldots, X_n) \\
&= E[Y_i^2 \mid X_1, \ldots, X_n] - (E[Y_i \mid X_1, \ldots, X_n])^2 \\
&= \int y_i^2 f(y_i \mid x_1, \ldots, x_n)\,dy_i - \left[\int y_i f(y_i \mid x_1, \ldots, x_n)\,dy_i\right]^2 \\
&= \int y_i^2 f(y_i \mid x_i)\,dy_i - \left[\int y_i f(y_i \mid x_i)\,dy_i\right]^2 \qquad (4) \\
&= E[Y_i^2 \mid X_i] - (E[Y_i \mid X_i])^2 \\
&= \operatorname{Var}(Y_i \mid X_i) = \operatorname{Var}(Y_1 \mid X_1),
\end{aligned}
$$
where the fourth equality comes from item (b) in this task and the last equality comes from the fact that the random variables are identically distributed, as mentioned earlier. Combining (3) and (4) we get the statement written in the question.
6. You have met this distribution before: in the example from lecture 2 and in problem set 1. First of all, note that, as we computed in lecture 2, $E[Y \mid X = x] = \frac{3x+2}{6x+3}$. Thus, assumption 2 from lecture 3 about the linearity of the conditional expectation will not be met. However, assumption 1 is met because we do have a random sample with $n$ independent observations. Thus, strong exogeneity will be true, and we will have that $E[Y_i \mid X_i] = \frac{3X_i+2}{6X_i+3}$.
Let's compute $E(\hat\beta_1 \mid X's)$:
$$
E(\hat\beta_1 \mid X's) = E(s_{xy}/s_{xx} \mid X's) = \frac{1}{s_{xx}}E[s_{xy} \mid X's] = \frac{1}{s_{xx}}E\left[\sum_i (X_i - \bar{X})Y_i \mid X's\right] = \frac{1}{s_{xx}}\sum_i (X_i - \bar{X})E[Y_i \mid X's] = \frac{1}{s_{xx}}\sum_i (X_i - \bar{X})E[Y_i \mid X_i],
$$
where the last equality comes from strong exogeneity. Up until now, the calculation looks like the proof of theorem 3.3, but since we do not have linearity, we cannot proceed with the calculation in the same way as we did in the proof of that theorem. However, we do have an expression for the conditional expectation:
$$
= \frac{1}{s_{xx}}\sum_i (X_i - \bar{X})\left(\frac{3X_i + 2}{6X_i + 3}\right).
$$
With the data that we have: $E(\hat\beta_1 \mid X's) = \frac{-0.11508}{15.5} = -0.00742$.
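This number can be reproduced with a short computation. Note the hedge in the comments: the individual observations are not reprinted in these solutions, so the dataset below is inferred from the summary statistics used in problem 4, not quoted from the text.

```python
# The summaries used in problem 4 (n = 3, X1 = 1, Xbar = 3.5, sum of Xi^2 = 52.25)
# pin the X's down uniquely as {1, 3, 6.5}; this dataset is inferred, not quoted.
X = [1.0, 3.0, 6.5]
xbar = sum(X) / len(X)
sxx = sum(x * x for x in X) - len(X) * xbar**2

# Numerator of E(b1_hat | X's): sum of (Xi - Xbar) * (3Xi + 2) / (6Xi + 3)
num = sum((x - xbar) * (3 * x + 2) / (6 * x + 3) for x in X)
print(round(sxx, 2), round(num, 5), round(num / sxx, 5))  # 15.5 -0.11508 -0.00742
```

With this design the sum works out negative, so $E(\hat\beta_1 \mid X's)$ is a small negative number whose magnitude matches the figures quoted above.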

7. Let me begin by introducing some new notation that will help with the calculations. Denote the components of the variance-covariance matrix as follows:
$$
\operatorname{Var}(\hat\beta \mid X's) = \sigma^2\begin{bmatrix} \frac{1}{n s_{xx}}\sum_i X_i^2 & -\frac{\bar{X}}{s_{xx}} \\ -\frac{\bar{X}}{s_{xx}} & \frac{1}{s_{xx}} \end{bmatrix} = \sigma^2\begin{bmatrix} V_0 & V_{01} \\ V_{01} & V_1 \end{bmatrix}
$$
Then note that $\operatorname{Var}(\hat\beta_i \mid X's) = \sigma^2 V_i$. From theorem 3.4, we know that the LSE has a bivariate normal distribution. It is a well-known result that the marginal distributions from a bivariate normal distribution are also normal. Thus we have that
$$
\hat\beta_i \mid X's \sim N(\beta_i, \sigma^2 V_i).
$$
Therefore, we have that:
$$
\frac{\hat\beta_i - \beta_i}{\sigma\sqrt{V_i}} \sim N(0, 1).
$$
On the other hand, as explained in the lecture, we have the following:
$$
\frac{(n-2)\hat\sigma^2}{\sigma^2} \,\Big|\, X's \sim \chi^2(n-2).
$$
Note that $SE(\hat\beta_i \mid X's) = \hat\sigma\sqrt{V_i}$. Thus
$$
\frac{\dfrac{\hat\beta_i - \beta_i}{\sigma\sqrt{V_i}}}{\sqrt{\dfrac{(n-2)\hat\sigma^2}{\sigma^2}\Big/(n-2)}} = \frac{\dfrac{\hat\beta_i - \beta_i}{\sigma\sqrt{V_i}}}{\hat\sigma/\sigma} = \frac{\hat\beta_i - \beta_i}{\hat\sigma\sqrt{V_i}} = \frac{\hat\beta_i - \beta_i}{SE(\hat\beta_i \mid X's)} = t_i
$$
Thus, conditional on the $X's$, the $t$-statistic $t_i \sim t(n-2)$.


8. Using the previous result, a confidence interval for $\beta_1$ is $\left[\hat\beta_1 \pm t_{\alpha/2}^{(n-2)} SE(\hat\beta_1 \mid X's)\right] = \left[\hat\beta_1 \pm t_{\alpha/2}^{(n-2)}\hat\sigma\sqrt{V_1}\right]$. For the data that we have, $s_{xx} = 15.5$, so that $V_1 = 1/15.5 = 0.0645$, $\hat\sigma = \sqrt{\frac{\sum_i \hat\varepsilon_i^2}{n-2}} = \sqrt{\frac{8.6}{1}} = 2.9326$, and thus $SE(\hat\beta_1 \mid X's) = 2.9326\sqrt{0.0645} = 0.7449$. For a 95% two-sided confidence interval, we need the critical value for a Student's $t$ distribution with one degree of freedom: $t_{0.025}^{(1)} = 12.7062$. Thus, the confidence interval is $[\hat\beta_1 \pm 9.4648]$.
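The interval arithmetic can be double-checked from the quantities stated in the problem; the $t(1)$ critical value 12.7062 is taken as given rather than recomputed.

```python
import math

# Quantities stated in problem 8
sxx, rss, n, t_crit = 15.5, 8.6, 3, 12.7062

V1 = 1 / sxx                           # 1/sxx entry of the V matrix
sigma_hat = math.sqrt(rss / (n - 2))   # residual standard error
se_b1 = sigma_hat * math.sqrt(V1)      # SE(beta1_hat | X's)
half_width = t_crit * se_b1            # half-width of the 95% interval
print(round(sigma_hat, 4), round(se_b1, 4))  # 2.9326 0.7449
```

Carried at full precision the half-width is about 9.4645; the 9.4648 quoted above comes from rounding the standard error to 0.7449 before multiplying by the critical value.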
