Chapter 3
School of Commerce
Department of Economics
Introduction to Econometrics
December 8, 2023
I Implications of A2:
1 E(ui | X1i, X2i, ..., Xki) = 0 ⇒ E(ui) = 0 ∀i. This follows from the law of
iterated expectations, which states that E[E(ui | X's)] = E(ui). Since
E(ui | X's) = 0, A2 implies that E(ui) = E[E(ui | X's)] = E[0] = 0.
If the conditional mean of u equals zero for each and every population value
of the X's, then the mean of these zero conditional means must also be zero.
2 Orthogonality condition:
E(ui | X1i, X2i, ..., Xki) = 0 ⇒ cov(Xji, ui) = E(Xji ui) = 0, ∀i, j = 1, ..., k
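These two implications can be illustrated with a short simulation sketch, using a hypothetical data-generating process in which the error is drawn with zero mean at every value of the regressor:

```python
# Hypothetical DGP: E(u|X) = 0 by construction, so in a large sample
# the average error and the sample covariance of X and u are both near 0.
import random

random.seed(0)
n = 100_000
xs, us = [], []
for _ in range(n):
    x = random.uniform(0, 10)   # regressor value
    u = random.gauss(0, 2)      # error drawn with mean 0 for every x
    xs.append(x)
    us.append(u)

mean_u = sum(us) / n
x_bar = sum(xs) / n
cov_xu = sum((x - u_bar0) * u for x, u_bar0, u in zip(xs, [x_bar] * n, us)) / n

print(mean_u, cov_xu)  # both close to 0
```

The exact numbers vary with the seed; the point is only that both sample moments shrink toward zero as n grows, matching E(ui) = 0 and cov(Xji, ui) = 0.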
I Assumption A3 (homoskedasticity):
var(ui | Xji) = E([ui − E(ui | Xji)]² | Xji) = E(ui² | Xji) = σ² > 0
Implications of A3:
1 The unconditional variance of the random error u is also equal to σ²:
Var(ui) = E(ui − E(ui))² = E(ui²) = σ². This again follows from the law of
iterated expectations.
2 The conditional variance of the regressand Yi corresponding to a given set
of regressor values Xji, j = 1, ..., k, equals the conditional error variance:
Var(Yi | Xji) = σ².
β̂0 ∑X1i + β̂1 ∑X1i² + β̂2 ∑X1i X2i = ∑X1i Yi
β̂0 ∑X2i + β̂1 ∑X1i X2i + β̂2 ∑X2i² = ∑X2i Yi
(all sums run over i = 1, ..., n)
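These normal equations can be solved directly. A minimal sketch on a small hypothetical dataset, using Cramer's rule and assuming the standard first normal equation for the intercept, n·β̂0 + β̂1 ∑X1i + β̂2 ∑X2i = ∑Yi:

```python
# Hypothetical data; any small two-regressor dataset works.
X1 = [1, 2, 3, 4, 5]
X2 = [2, 1, 4, 3, 5]
Y  = [3, 4, 8, 9, 12]
n = len(Y)

s = lambda v: sum(v)
sp = lambda a, b: sum(x * y for x, y in zip(a, b))

# Coefficient matrix and right-hand side of the three normal equations.
A = [[n,     s(X1),      s(X2)],
     [s(X1), sp(X1, X1), sp(X1, X2)],
     [s(X2), sp(X1, X2), sp(X2, X2)]]
rhs = [s(Y), sp(X1, Y), sp(X2, Y)]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

D = det3(A)
betas = []
for j in range(3):                      # replace column j with rhs
    Aj = [row[:] for row in A]
    for i in range(3):
        Aj[i][j] = rhs[i]
    betas.append(det3(Aj) / D)

b0, b1, b2 = betas
print(b0, b1, b2)
```

By construction the resulting residuals sum to zero and are orthogonal to each regressor, which is exactly what the normal equations state.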
I For β̂1 and β̂2, we can solve from the minimization of the deviation-form
equation. So we minimize:
∑ei² = ∑(yi − ŷi)² = ∑(yi − β̂1 x1i − β̂2 x2i)²
I We need to solve:
∂(∑ei²)/∂β̂1 = ∂(∑(yi − ŷi)²)/∂β̂1 = ∂(∑(yi − β̂1 x1i − β̂2 x2i)²)/∂β̂1 = 0
and
∂(∑ei²)/∂β̂2 = ∂(∑(yi − ŷi)²)/∂β̂2 = ∂(∑(yi − β̂1 x1i − β̂2 x2i)²)/∂β̂2 = 0
I which gives respectively the following normal equations:
∑x1i yi = β̂1 ∑x1i² + β̂2 ∑x1i x2i
and
∑x2i yi = β̂1 ∑x1i x2i + β̂2 ∑x2i²
I Solving these two equations simultaneously gives:
β̂1 = [(∑x2i²)(∑x1i yi) − (∑x1i x2i)(∑x2i yi)] / [(∑x1i²)(∑x2i²) − (∑x1i x2i)²]
and
β̂2 = [(∑x1i²)(∑x2i yi) − (∑x1i x2i)(∑x1i yi)] / [(∑x1i²)(∑x2i²) − (∑x1i x2i)²]
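A minimal sketch of these deviation-form formulas on a small hypothetical dataset:

```python
# Hypothetical data; lower-case x1, x2, y denote deviations from the mean.
X1 = [1, 2, 3, 4, 5]
X2 = [2, 1, 4, 3, 5]
Y  = [3, 4, 8, 9, 12]
n = len(Y)

mx1, mx2, my = sum(X1) / n, sum(X2) / n, sum(Y) / n
x1 = [v - mx1 for v in X1]
x2 = [v - mx2 for v in X2]
y  = [v - my for v in Y]

sp = lambda a, b: sum(u * v for u, v in zip(a, b))
D = sp(x1, x1) * sp(x2, x2) - sp(x1, x2) ** 2   # common denominator

b1 = (sp(x2, x2) * sp(x1, y) - sp(x1, x2) * sp(x2, y)) / D
b2 = (sp(x1, x1) * sp(x2, y) - sp(x1, x2) * sp(x1, y)) / D
b0 = my - b1 * mx1 - b2 * mx2   # intercept recovered from the means

print(b1, b2, b0)
```

Working in deviations eliminates the intercept from the minimization; β̂0 is then recovered afterwards from ȳ = β̂0 + β̂1 X̄1 + β̂2 X̄2.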
var(β̂0) = [1/n + (X̄1² ∑x2i² + X̄2² ∑x1i² − 2X̄1X̄2 ∑x1i x2i) / (∑x1i² ∑x2i² − (∑x1i x2i)²)] σ²
var(β̂1) = [∑x2i² / (∑x1i² ∑x2i² − (∑x1i x2i)²)] σ²
var(β̂2) = [∑x1i² / (∑x1i² ∑x2i² − (∑x1i x2i)²)] σ²
I ∑ei² = ∑yi ei, since ∑xki ei = 0 by assumption.
I Again, substituting ei = yi − β̂1 x1i − β̂2 x2i into this last equation, we
have:
∑ei² = ∑yi (yi − β̂1 x1i − β̂2 x2i) = ∑yi² − β̂1 ∑x1i yi − β̂2 ∑x2i yi
⇒ ∑yi² = (β̂1 ∑x1i yi + β̂2 ∑x2i yi) + ∑ei² ⇒ TSS = ESS + RSS
where ∑yi² = TSS; β̂1 ∑x1i yi + β̂2 ∑x2i yi = ∑ŷi² = ESS; ∑ei² = RSS
I Then,
R² = ESS/TSS = ∑ŷi²/∑yi² = (β̂1 ∑x1i yi + β̂2 ∑x2i yi)/∑yi²
or
R² = 1 − RSS/TSS = 1 − ∑ei²/∑yi²
Introduction to Econometrics Addis Ababa University, School of Commerce December 8, 2023 29 / 43
The Problem of Estimation
Adjusted R2
R² = ∑ŷi²/∑yi² = 1 − ∑ei²/∑yi²
The adjusted R², which corrects for degrees of freedom, is:
R̄² = 1 − (∑ei²/(n − k)) / (∑yi²/(n − 1))
Example:
I From the earlier example we have the following information:
n = 10; TSS = ∑yi² = 3450; ESS = ∑ŷi² = 3085.78; RSS = ∑ei² = 364.22;
β̂0 = 111.692; β̂1 = −7.19 and β̂2 = 0.0143
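A quick check of the R² formulas with these figures; k = 3 is assumed here (two slope coefficients plus the intercept):

```python
# Figures quoted in the example above.
n, k = 10, 3
TSS, ESS, RSS = 3450.0, 3085.78, 364.22

R2 = ESS / TSS                                    # = 1 - RSS/TSS
R2_adj = 1 - (RSS / (n - k)) / (TSS / (n - 1))    # degrees-of-freedom corrected

print(round(R2, 4), round(R2_adj, 4))  # 0.8944 0.8643
```

Note that ESS + RSS = 3450 = TSS, so the two expressions for R² agree, and R̄² is below R² as expected.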
ui ∼ N(0, σ²) ⇒ Yi ∼ N(β0 + β1X1i + β2X2i, σ²)
β̂0 ∼ N(β0, var(β̂0));
var(β̂0) = [1/n + (X̄1² ∑x2i² + X̄2² ∑x1i² − 2X̄1X̄2 ∑x1i x2i) / (∑x1i² ∑x2i² − (∑x1i x2i)²)] σ²
β̂1 ∼ N(β1, var(β̂1)); var(β̂1) = [∑x2i² / (∑x1i² ∑x2i² − (∑x1i x2i)²)] σ²
β̂2 ∼ N(β2, var(β̂2)); var(β̂2) = [∑x1i² / (∑x1i² ∑x2i² − (∑x1i x2i)²)] σ²
Confidence Intervals
I Given the level of significance (type I error) denoted by α, the
confidence intervals for β0 and βj are given as follows:
100(1 − α)% two-sided CI for β0: β̂0 ± t_{α/2}(n − k − 1) · se(β̂0)
100(1 − α)% two-sided CI for βj: β̂j ± t_{α/2}(n − k − 1) · se(β̂j)
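A sketch of the interval computation. The standard error below is a hypothetical placeholder (the notes do not report it); 2.365 is the tabulated two-sided 5% t critical value for n − k − 1 = 7 degrees of freedom:

```python
# CI sketch: estimate +/- t_crit * se.
b_hat = 111.692   # intercept estimate from the example in the notes
se_b  = 25.0      # hypothetical standard error, for illustration only
t_crit = 2.365    # t_{0.025} with 7 df (n = 10, k = 2 slopes)

lower = b_hat - t_crit * se_b
upper = b_hat + t_crit * se_b
print(round(lower, 3), round(upper, 3))  # 52.567 170.817
```

The interpretation: in repeated sampling, 95% of intervals constructed this way would contain the true β0.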
Hypothesis Testing
I To test the statistical relationship between economic variables in
MLR, we use two types of tests:
1 the t-test: to test individual coefficients
2 the F-test: to test more than one coefficient (to conduct joint tests)
I The t-test is conducted using:
t = (β̂j − βj) / se(β̂j)
The F-test
I The F-statistic is used to test the joint significance of all the slope
coefficients in a multiple linear regression model.
I If the unrestricted PRE is given as:
Yi = β0 + β1 X1i + β2 X2i + ... + βk Xki + ui
I The null and alternative hypotheses are:
H0: βj = 0 for all j = 1, ..., k
HA: βj ≠ 0 for at least one j = 1, ..., k
I The null hypothesis H0 says that all slope coefficients are jointly
equal to zero.
I The alternative hypothesis HA says that some or all of the slope
coefficients are not equal to zero.
Decision Rule:
I Retain H0 at significance level α if F0 ≤ Fα[K − 1, N − K].
I Reject H0 at significance level α if F0 > Fα[K − 1, N − K].
or
I Retain H0 at significance level α if the p-value of F0 ≥ α.
I Reject H0 at significance level α if the p-value of F0 < α.
Example: Test the joint significance of X1 and X2 in the earlier examples:
⇒ H0: β1 = β2 = 0 vs HA: β1 ≠ 0 and/or β2 ≠ 0
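Using the sums of squares quoted earlier, the F statistic for this joint test can be computed as follows (k = 3 estimated coefficients is assumed, so the numerator has k − 1 = 2 degrees of freedom and the denominator n − k = 7):

```python
# F statistic for the joint significance of all slopes:
# F0 = (ESS/(k-1)) / (RSS/(n-k))
n, k = 10, 3
ESS, RSS = 3085.78, 364.22

F0 = (ESS / (k - 1)) / (RSS / (n - k))
print(round(F0, 2))  # 29.65, compared against F_alpha[2, 7]
```

Since 29.65 far exceeds typical F[2, 7] critical values, H0 would be rejected: X1 and X2 are jointly significant.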