Econometrics Module
DISTANCE EDUCATION
PROGRAM
INTRODUCTION
TO
ECONOMETRICS
(Econ 352)
Prepared by
Alemayehu Kebede
Edited by
Tesfaye Melaku
December 25/06
Chapter 1: Definition, Scope, & Division of Econometrics..............................................4
1.1 Econometrics & Other Disciplines of Economics.....................................................4
1.2 Goals of Econometrics..............................................................................................5
1.2.1 Analysis: Testing of Economic Theories...........................................5
1.2.2 Policy Making....................................................................5
1.2.3 Forecasting.........................................................................................................6
1.3 Division of Econometrics.........................................................................................6
1.3.1 Theoretical Econometrics...................................................................................6
1.3.2 Applied Econometrics........................................................................................7
1.4 Methodology of Econometric Research...................................................................7
Chapter 2: Simple linear regression model...................................................................15
OR......................................................................................................................................15
The Classical linear regression model...............................................................................15
2.1 The simple linear regression Analysis...................................................................15
2.2 Assumptions of the linear stochastic regression model..........................................17
2.2.1 Assumption about Ui.......................................................................................17
2.2.2 Assumptions about Ui & Xi...............................................17
2.2.3 Relationship among explanatory variables........................................18
2.3 Estimation of the model..........................................................................................19
2.4 Statistical tests of Estimates....................................................................................22
2.5 The relationship between r2 and the slope coefficient .........................................25
2.6 Significance test of the Parameter estimates..........................................................26
Chapter 3 :Properties of the least square Estimates...........................................................44
3.1 The Gauss-Markov Theorem.................................................................................45
3.2 Importance of the BLUE Properties:-.....................................................................46
3.3 Maximum Likelihood Estimation...........................................................................46
Chapter 4: Multiple Linear Regression..............................................................................51
4.1 Variance & standard errors of OLS estimators......................................................53
4.2 Test of significance of the parameter estimates of multiple regressions................53
4.3 Importance of the statistical Test of Significance...................................................62
Chapter 5: Relaxing the assumptions of the classical model.............................................65
5.1 Violation of the important assumptions..................................................................65
5.1.1 Heteroscedasticity..........................................................65
5.1.1.1 Consequences of heteroscedasticity................................67
5.2.1 Graphic methods of detecting autocorrelation.........................................................91
5.3.3 Test for detecting multicollinearity.......................................................................116
Chapter 6 : Estimation with Dummy Variables...............................................................123
6.1 Some important uses of Dummy variables..........................................127
6.2 Use of Dummy variables in Seasonal analysis......................................................131
Chapter 7 : Simultaneous equation models.....................................................................134
7.1 The nature of simultaneous – equation model......................................................136
7.2 Inconsistency & Simultaneity Bias of OLS Estimators.........................................137
7.3 Solution to the simultaneous equations....................................................................140
7.4 Some definitions of simultaneous equations............................................................140
7.4 The Identification Problem........................................................................................146
7.4.1 Under identification...............................................................................................147
7.4.2 Exact /Just/ Identification......................................................................................148
7.4.3 Over identification.................................................................................................150
7.5.1 Order Condition for identification.........................................................................152
7.5.2 Rank Condition for identification..........................................................................153
7.5.3 Rank Condition......................................................................................................155
7.6 Estimation of Simultaneous equation models...........................................................155
7.6.1 Ordinary Least Squares..........................................................................................156
Chapter 1: Definition, Scope, & Division of Econometrics
Ex. 2. Ct = β0 + β1Yt + β2Yt-1
Where
Ct: consumption expenditure
Yt: current income
Yt-1: previous income
Again, this mathematical relation does not capture other factors that affect consumption expenditure. Mathematical economics thus explains the exact relationship between the dependent variable (Ct) & the independent variables (Yt & Yt-1) while ignoring other variables that affect consumption expenditure.
C) Economic Statistics:- This is the descriptive aspect of economics, i.e. collecting, processing and presenting economic data in the form of tables & charts. Though economic statistics provides numerical summaries such as the mean, median and standard deviation, it does not establish reliable relationships between economic variables.
D) Mathematical statistics:- This is based upon probability theory, which was developed on the basis of controlled experiments. Such statistical methods cannot be applied directly to most economic relationships, because controlled experiments cannot be designed for economic phenomena. Probability theory of this kind applies in very few cases in economics, such as agricultural or industrial experimentation.
All of the above methods completely ignore the other factors that affect an economic relationship. Econometrics differentiates itself from them by developing methods for dealing with the random term that affects economic relationships.
β2 = marginal propensity to export
β3 & β4: import & export propensities, respectively
Then, on the basis of the numerical values of these coefficients, the government will decide whether devaluation will eliminate the country's deficit or not.
1.2.3 Forecasting
Forecasting means using the numerical values of the coefficients of economic relationships to judge whether to take a policy measure in order to influence the future values of economic variables.
Assume, for example, a model estimated from the Ethiopian economy for the years 1985-1995.
1.3.2 Applied Econometrics:- This is the application of theoretical econometric methods to specific branches of economic theory, i.e. the application of theoretical econometrics for the verification & forecasting of demand, cost, supply, production, investment, consumption & other related fields of economic theory.
The stages of econometric research: Economic theory → Collecting data → Evaluation of estimates (hypothesis testing) → Application (forecasting).
Stage 1. The first step in econometrics is to formulate the economic theory that will be tested against reality using econometrics.
Ex. 1.
The theory may hypothesize that "Aggregate saving in the economy is affected by the average interest rate and the one-year lag of income (previous year's income)."
Ex. 2.
Or, taking Keynes' psychological law of consumption, it hypothesizes that consumption is a function of income; to be precise, aggregate consumption in terms of wage units (Cw) is a function of aggregate income in terms of wage units (Yw), and this relationship is called the propensity to consume.
Ex. 3.
Or consumption of an individual at any period of time depends upon the income of the individual at period t & the future rate of interest.
Ex. 1.
Aggregate saving is the dependent variable & the remaining variables (interest rate & previous year's income) are the independent variables.
Ex. 2.
Aggregate consumption in terms of wage units is the dependent variable & aggregate income in terms of wage units is the independent variable.
Ex. 3.
Consumption of an individual at time t is the dependent variable, and the future rate of interest & income are the independent variables.
B) Determine the theoretical values: a priori expectations of the sign & magnitude of the parameters. This needs only a theoretical background to determine the relationship between the dependent & independent variables, i.e. whether the relationship between the variables is negative or positive. From our examples we can have the following signs or directions of relationship between variables.
Ex. 1.
i. Interest rate & saving have a positive relationship, and there is also a positive relationship between income and saving. Then we can say that the signs of the parameters that explain the relationship between aggregate saving & interest rate & income have to be positive.
ii. Aggregate consumption in terms of wage units & aggregate income in terms of wage units are positively related & the sign of the parameter has to be positive.
iii. Consumption at time t & income at time t have a positive relationship, & consumption at time t & the future rate of interest have a negative relationship (if the future rate of interest is high, an individual will cut down his consumption at time t, postpone it to another period & increase his savings).
C) Specification of the model: In this stage we specify the relationships between the dependent & independent variables on the basis of economic theories. In this stage we also determine the number of equations (single-equation or simultaneous-equation model) & the type of equation, i.e. whether the relationship between economic variables is explained using linear or non-linear equations. Let's specify our previous theoretical relationships.
Ex. 1.
β represents the marginal propensity to consume, and its magnitude, 0 < β < 1, is determined by economic theory. The interpretation of the magnitudes in equations 1.9 & 1.10 differs from that in equation 1.11. In equations 1.9 & 1.10, which are linear, the coefficients of the variables measure marginal magnitudes, but in equation 1.11 they measure elasticities.
Ex. In equations 1.9 & 1.10, if income increases by 1 birr, consumption will on average increase by β. The same is true of the interpretation of β2 in equation 1.10: if the rate of interest increases by one birr, saving will on average increase by β2. But in equation 1.11 β1 & β2 are elasticities, i.e. if income increases by 1%, consumption will on average increase by β1%, & if the rate of interest increases by 1%, consumption will on average be cut down by β2%.
Cross-sectional data: data on one or more variables collected at a particular period of time. Ex. the number of children registered for schooling in all K.G. schools of Bahir Dar in 1999 E.C., by sex, age, religion etc.
Panel data:- These are the results of a repeated survey of a single (cross-sectional) sample in different periods of time. Ex. if the consumption expenditure on teff, coffee & clothing of a sample of the population of Bahir Dar city is taken in 1985, 1990 & 1996.
Pooled data:- These are data combining both time series & cross-sectional data.
Dummy variables: These are data constructed by econometricians when they are faced with qualitative data. Such qualitative data may not be measurable in any one of the above forms, ex. sex, religion, race, profession etc. The values of these data can be approximated using dummy variables; ex. if religion appears on the independent side of the equation, since we do not have quantitative data we can assign 1 for Christian & 0 otherwise.
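The 1/0 assignment described above can be sketched in code. This is an illustrative example of my own; the records and field names are hypothetical, not from the module:

```python
# A minimal sketch of dummy-variable encoding (hypothetical survey data).
# The qualitative variable "religion" is mapped to a 0/1 indicator:
# 1 for Christian, 0 otherwise, as in the module's example.

records = [
    {"income": 1200, "religion": "Christian"},
    {"income": 900,  "religion": "Muslim"},
    {"income": 1500, "religion": "Christian"},
]

def encode_religion(rec):
    # dummy = 1 if Christian, 0 otherwise
    return {**rec, "d_christian": 1 if rec["religion"] == "Christian" else 0}

encoded = [encode_religion(r) for r in records]
print([r["d_christian"] for r in encoded])  # [1, 0, 1]
```

The numeric column `d_christian` can then enter a regression in place of the unmeasurable qualitative variable.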
Accuracy of data:- Though plenty of data are available for research purposes, the quality of the data matters in arriving at good results. The quality of data may be poor for different reasons.
i. Most social science data are non-experimental in nature, i.e. there will be omissions, errors etc.
ii. Approximation & rounding off of numbers introduce errors of measurement.
iii. In questionnaire-type surveys, non-response and respondents not answering all questions may lead to selectivity bias (rejecting non-respondents & excluding those questions which were not answered by the respondent).
iv. The data obtained from one sample may vary from the data obtained from another sample, & it is difficult to compare the results of the two samples.
v. Economic data are available at an aggregated level & errors may be committed in aggregation.
vi. Due to the confidentiality of some data it may be impossible to obtain them, or they may be published only in aggregated form.
Because of the above reasons, one can deduce that the results obtained by any researcher depend highly upon the quality of the data. Then, if you get unsatisfactory results even though you have correctly specified the model, the reason may be the quality of the data.
Simultaneous equation techniques:- When we have more than one equation & the numerical values of the coefficients are determined simultaneously, we use one of the following methods of estimation: Three Stage Least Squares (3SLS) & the Full Information Maximum Likelihood (FIML) method. The selection of the technique of estimation depends upon many factors.
a) The nature of the relationship between economic variables and its identification. Under this condition, if we study the economic relationship using a single-equation method then the best method is OLS. But if the relationships between the economic variables form a system of simultaneous equations, we may use any of the techniques stated above.
b) The properties of the estimated coefficients obtained from each method. A good estimate should possess the properties of unbiasedness, consistency, efficiency & sufficiency, or a combination of such properties. If one method gives an estimate which possesses more of these desirable characteristics than the estimates from other methods, then that technique will be selected.
c) The purpose of the econometric research:- If the purpose of the model is forecasting, the property of minimum variance is very important, i.e. the technique which gives the minimum variance of the coefficients of the variables will be selected. But if the purpose of the research is policy making (analysis), the technique which gives unbiasedness of the estimates will be selected.
d) The simplicity of the technique: if our interest is simple computation, we can select the technique which involves simpler computation & fewer data requirements.
e) The time & cost required for computation of the coefficients of the variables may also determine the selection of the econometric technique.
The estimation of the model (the coefficients of the variables) can be computed using any of the above-stated econometric techniques. Some techniques which are theoretically applicable may not be usable for estimation purposes due to the non-availability of data or defects in the statistical results obtained from the technique.
Having selected the econometric method applicable for estimation of the model, one should take into consideration whether the model is linear in the variables & in the parameters.
a) If the model is non-linear in the parameters, it is beyond the scope of this level of econometric analysis.
Ex.
Y = α + β1²X1 + β2³X2 + Ui -------------------------1.16
Since the coefficient β1 appears raised to the power 2 & β2 to the power 3, we call this kind of model non-linear in the coefficients.
b) If the model is non-linear in the variables, then before estimation the model has to be transformed into a linear model.
To know whether the model is linear or non-linear in the variables, we can take the first derivative: if the first derivative of the model gives us a constant number, the model is linear in the variables, but if it does not give a constant number, the model is non-linear in the variables.
Example (1)
Y = α + βX² + Ui, so dY/dX = 2βX; since the first derivative is not equal to a constant number, the model is non-linear in the variable.
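The first-derivative test can also be checked numerically. The sketch below is my own illustration, not part of the module; the coefficients α = 2, β = 3 are assumed. A model linear in the variable gives the same derivative at every X, a non-linear one does not:

```python
# Numerical first-derivative test for linearity in variables.
def numerical_derivative(f, x, h=1e-4):
    # central-difference approximation of dY/dX
    return (f(x + h) - f(x - h)) / (2 * h)

linear    = lambda x: 2 + 3 * x       # Y = a + bX   -> dY/dX = 3 (constant)
nonlinear = lambda x: 2 + 3 * x ** 2  # Y = a + bX^2 -> dY/dX = 6X (not constant)

pts = [1.0, 2.0, 5.0]
d_lin = [round(numerical_derivative(linear, x), 4) for x in pts]
d_non = [round(numerical_derivative(nonlinear, x), 4) for x in pts]
print(d_lin)  # [3.0, 3.0, 3.0] -- constant, so linear in the variable
print(d_non)  # [6.0, 12.0, 30.0] -- varies with X, so non-linear in the variable
```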
To estimate a model which is non-linear in a variable we should first transform the model into a linear model. For example:
Yt = αXi^(-β)e^Ui transforms into ln Yt = ln α - β ln Xi + Ui
Yt = e^(α + βXi + Ui) transforms into ln Yt = α + βXi + Ui
Y = α + β ln Xi + Ui transforms into Y = α + βXi* + Ui, where Xi* = ln Xi
Y = α + β(1/Xi) + Ui transforms into Y = α + βXi* + Ui, where Xi* = 1/Xi
Having transformed the model from non-linearity in the variables to linearity in the variables, we can then estimate the model using the appropriate (selected) econometric method.
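As a sketch of the whole procedure, the example below generates hypothetical data from a power function Y = A·X^β (with assumed values A = 2, β = 0.5), applies the log transformation, and recovers the parameters by OLS on the transformed variables:

```python
import math

# Sketch: a model non-linear in the variable, Y = A * X**beta, is transformed
# to linearity by taking logs: ln Y = ln A + beta * ln X, then fitted by OLS.
# The data are constructed exactly from A = 2, beta = 0.5 (hypothetical values).

A_true, beta_true = 2.0, 0.5
X = [1.0, 4.0, 9.0, 16.0, 25.0]
Y = [A_true * x ** beta_true for x in X]

# OLS on the log-transformed variables
lx = [math.log(x) for x in X]
ly = [math.log(y) for y in Y]
n = len(X)
mx, my = sum(lx) / n, sum(ly) / n
beta_hat = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / sum((a - mx) ** 2 for a in lx)
lnA_hat = my - beta_hat * mx
A_hat = math.exp(lnA_hat)

print(round(beta_hat, 3), round(A_hat, 3))  # 0.5 2.0 -- parameters recovered
```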
Step A. Economic a priori criterion: at this stage we should confirm whether the estimated values conform to economic theory or not, i.e. it refers to the signs & magnitudes of the coefficients of the variables.
Ex. 1. If we have the following consumption function
Ct = α + β1Yt + Ut -------------------------------------------1.22
Where Ct: consumption expenditure,
Yt: income
From economic theory (the economic relationship between consumption and income) it is known that β1 represents the MPC (marginal propensity to consume). Then, on the basis of the a priori economic criterion, it is determined that the sign of β1 has to be positive & its magnitude (size) has to lie between zero & one (0 < β1 < 1). Suppose the estimated result of the above consumption function gives
Ĉt = α̂ + 0.203Yt ----------------------------1.23
This estimated relationship states that if your income increases by 1 birr, your consumption will on average increase by less than one birr, i.e. by 0.203 birr. Then the value of β̂1 is less than one & greater than zero in magnitude (size), and the sign of β̂1 is positive. Therefore the estimated model explains the economic theory (the economic relationship between consumption & income), i.e. it satisfies the a priori economic criterion. Suppose another estimation of the model using other data gives an estimated result in which β̂1 is negative & greater than one in magnitude -----------------------------1.24
Where Ct is consumption expenditure & Yt is income. From economic theory it is known that β1 has to be positive & its magnitude greater than zero & less than one. Since the estimated model gives a negative sign for β̂1 & a magnitude greater than one, we reject the model because the results contradict, or do not confirm, the economic theory.
In the evaluation of the estimates of the model, we should take into consideration the signs & magnitudes of the estimated coefficients. If the sign & magnitude of a parameter do not confirm the economic relationship between the variables explained by the economic theory, the model will be rejected. But if there is a good reason to accept the model, then the reason should be clearly stated. In general, if the a priori theoretical criteria are not satisfied, the estimates should be considered unsatisfactory. In most cases, deficiencies in the empirical data utilized for the estimation of the model are responsible for the occurrence of wrong signs or sizes of the estimated parameters. Deficiency of the empirical data means either that the sample observations do not represent the population (due to a sampling-procedure problem or the collection of inadequate data) or that some assumptions of the method employed are violated.
Step B. First-order test or statistical criterion: If the model passes the a priori economic criterion, the reliability of the estimates of the parameters will be evaluated using statistical criteria. The most widely used statistical criteria are:
The coefficient of determination R² (r²)
The standard error (S.E.) of the estimates
The t-ratio or t-test of the estimates.
Since the estimated values are obtained from a sample of observations taken from the population, statistical tests of the estimated values help to find out how accurate these estimates are (how accurately they explain the population).
R² explains the percentage of the total variation of the dependent variable that is explained by changes in the explanatory variables (how much of the dependent variable, in %, is explained by the explanatory variables).
The S.E. (standard error or deviation) measures the dispersion of the sample estimates around the true population parameters. The lower the S.E., the higher the reliability of the estimates (the sample estimates are closer to the population parameters), & vice versa.
Step C. Second-order test (econometric criterion): after the a priori test & the statistical tests, the investigator should check the reliability of the estimates by examining whether the econometric assumptions hold true or not. If any one of the econometric assumptions is violated:
the estimates of the parameters cease to possess some of the desirable properties (unbiasedness, consistency, sufficiency etc.),
or the statistical criteria lose their validity & become unreliable.
If the assumptions of the econometric technique are violated, the researcher has to re-specify the model already utilized. To do so the researcher introduces additional variables into the model, omits some variables from the model, transforms the original variables, etc.
By re-specifying the model, the investigator proceeds with re-estimation & re-application of all the tests (a priori, statistical & econometric) until the estimates satisfy all the tests.
Step 7. Forecasting or Prediction
Forecasting is one of the prime aims of econometric research. The estimated model may be economically meaningful and statistically & econometrically correct for the sample period; but, given all this, it may still not have good forecasting power, due to inaccuracy in the explanatory variables or deficiency of the data used in obtaining the estimated values.
To check this, the forecasted value should be compared with the actual (realized) magnitude of the relevant dependent variable. The difference between the actual & forecasted values is tested statistically. If the difference is significant, we conclude that the forecasting power of the model is poor; if it is statistically insignificant, the forecasting power of the model is good.
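The comparison of forecasted with realized values can be sketched as follows. The numbers are hypothetical, and the t-type statistic on the mean forecast error used here is one simple way (among several) to judge whether the difference is significant:

```python
import math

# Sketch of comparing forecasts with realized values (hypothetical numbers).
# If the mean forecast error is large relative to its standard error,
# the forecasting power of the model is judged poor.

actual   = [100.0, 104.0, 109.0, 115.0, 121.0]
forecast = [ 99.0, 105.0, 108.0, 116.0, 120.0]

errors = [a - f for a, f in zip(actual, forecast)]
n = len(errors)
mean_e = sum(errors) / n
var_e = sum((e - mean_e) ** 2 for e in errors) / (n - 1)  # sample variance
t_stat = mean_e / math.sqrt(var_e / n)  # t-statistic on the mean error

print(round(mean_e, 3))  # 0.2 -- errors nearly average out
print(round(t_stat, 3))  # small t-value: difference insignificant, good forecasts
```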
Chapter 2: Simple linear regression model OR The Classical linear regression model
b) Even if we know the factors, they may not be statistically measurable; for example, psychological factors (tastes, preferences, expectations etc.) are not measurable.
c) Some factors are random, appearing in an unpredictable way & time; for example epidemics, earthquakes, etc.
d) Some factors may be omitted due to their small influence on the dependent variables.
e) Even if all factors are known, the available data may not be adequate to measure all the factors influencing a relationship.
ii. The erratic nature of human beings:- Human behavior may deviate from the normal situation to a certain extent in an unpredictable way.
iii. Misspecification of the mathematical model:- We may wrongly specify the relationship between the variables. We may fit a linear function to non-linearly related variables, or use a single-equation model for simultaneously determined relationships.
iv. Error of aggregation:- Aggregation of data introduces error into a relationship. Much economic data is available only in aggregate form; consumption, income etc. are found in aggregate form, i.e. as added magnitudes referring to individuals whose behavior is dissimilar.
v. Errors of measurement:- When we are collecting data we may commit errors of measurement.
In order to take into account the above sources of error, we introduce into econometric functions a random variable, usually denoted by the letter U & called the error term, random disturbance term or stochastic term of the function. By introducing this random variable into the function, the model becomes like equation (2.3), and the relationship between the variables is split into two parts.
Ex. From equation (2.3): Ct = α + βYt represents the exact relationship explained by the line, and the part represented by the random term Ui is the part unexplained by the line.
Figure 1: Scatter of consumption expenditure (Ct) against current income (Yt). The observations lie around the line Ct = α + βYt, deviating from it by the random terms U1, U2, ..., Un.
The line Ct = α + βYt shows (explains) the exact relationship between consumption & income, but the other variables that affect consumption expenditure scatter the observations around the straight line. The true relationship is therefore explained by the scatter of observations between Ct & Yt:
Ct = α + βYt + Ut ------------------------ (2.4)
Variation in consumption = Explained variation + Unexplained variation.
To estimate this equation we need data on Ct, Yt & Ut. Since Ut is never observed like the other variables (Ct & Yt), we must make some assumptions about the shape of each Ui (mean, S.E., covariance etc.).
ii) The mean value of Ui in any particular period is zero: E(Ui) = 0.
iii) Homoscedasticity (constant variance): the variation of each Ui around all values of the explanatory variable is the same, i.e. the deviation of Ui around the straight line (in Figure 1) remains the same: var(Ui) = σu².
iv) The variable Ui has a normal distribution with mean zero & variance σu²: Ui ~ N(0, σu²).
v) Ui is serially independent:- the value of U in one period does not depend upon the value of Ui in another period, meaning the covariance between Ui & Uj is zero:
Cov(Ui, Uj) = E[Ui - 0][Uj - 0] = E(Ui)E(Uj) = 0
2.2.2 Assumptions about Ui & Xi
i- The disturbance term Ui is not correlated with the explanatory variables: the Ui's & Xi's do not move together, i.e. the covariance between Ui & the Xi's is zero.
Cov(Ui, Xi) = E{[Ui - E(Ui)][Xi - E(Xi)]}
By assumption we have E(Ui) = 0, then
= E{[Ui - 0][Xi - E(Xi)]}
= E{UiXi - UiE(Xi)}
= E(UiXi) - E(Ui)E(Xi)
Again, by assumption E(Ui) = 0,
= E(UiXi) - 0·E(Xi)
= E(UiXi); since the values of the Xi's are fixed,
= XiE(Ui) = 0
Therefore Cov(Ui, Xi) = 0.
ii- The explanatory variables Xi are measured without error, i.e. there is no problem of aggregation, rounding off etc. If there is such a problem in measurement, it will be absorbed by the random term Ui.
Yi = α + βXi + Ui
The mean of Yi (the expected value of Yi) can be found as follows:
E(Yi) = E[α + βXi + Ui]
E(Yi) = α + βXi, where E(Ui) = 0 by assumption.
E(Yi) = α + βXi is the mean value of the dependent variable Yi.
Variance of Yi:
Var(Yi) = E[Yi - E(Yi)]²
Substituting α + βXi in place of E(Yi):
Var(Yi) = E[Yi - (α + βXi)]²
Again, substituting Yi = α + βXi + Ui in place of Yi:
Var(Yi) = E[α + βXi + Ui - α - βXi]² = E(Ui²)
From our previous assumption, the variance of Ui is E(Ui²) = σu², so Var(Yi) = σu².
2.3 Estimation of the model
The relationship
Yi = α + βXi + Ui
holds for the population of values of X & Y. Since these population values are unknown, we do not know the exact numerical values of α & β. To obtain numerical values of α & β, we take sample observations on Y & X. By substituting these values into the population regression we obtain the sample regression, which gives estimated values of α & β, denoted α̂ & β̂ respectively. The sample regression line is then given by
Ŷi = α̂ + β̂Xi
The true relationship between the variables (that explains the population) is given by Yi = α + βXi + Ui. If we estimate this relationship using sample observations, we get the estimated relationship
Yi = α̂ + β̂Xi + ei
We can estimate the values of α & β using the ordinary least squares method (OLS), or classical least squares (CLS). The reasons to start with, or use, the OLS (CLS) method are many:
i. The parameters obtained by this method have some optimal properties, i.e. they are BLUE (Best, Linear, Unbiased Estimators).
ii. The computational procedure of OLS is fairly simple as compared to other
econometric methods.
iii. OLS is one of the most commonly employed methods in estimating
econometric models.
iv. The mechanics of OLS is simple to understand.
v. OLS is an essential component of most other econometric techniques
From the sample observations we have
Yi = α̂ + β̂Xi + ei, so ei = Yi - α̂ - β̂Xi
We find the values of the estimates which minimize the sum of squared residuals,
Σei² = Σ(Yi - α̂ - β̂Xi)²
To find the values of α̂ & β̂ that minimize this sum, we differentiate with respect to α̂ & β̂ and set the partial derivatives equal to zero:
∂Σei²/∂α̂ = -2Σ(Yi - α̂ - β̂Xi) = 0
Running the sum over the equation gives the first normal equation,
ΣYi = nα̂ + β̂ΣXi
(dividing by n gives Ȳ = α̂ + β̂X̄, so α̂ = Ȳ - β̂X̄);
∂Σei²/∂β̂ = -2ΣXi(Yi - α̂ - β̂Xi) = 0
Multiplying through by Xi and summing gives the second normal equation,
ΣXiYi = α̂ΣXi + β̂ΣXi²
The numerical value of β̂ can also be found in deviation form. Writing equation 2.16,
β̂ = (nΣXiYi - ΣXiΣYi) / (nΣXi² - (ΣXi)²),
in deviation form: take the numerator,
nΣXiYi - ΣXiΣYi = n(ΣXiYi - nX̄Ȳ) = nΣxiyi,
and similarly the denominator becomes nΣxi², where xi = Xi - X̄ and yi = Yi - Ȳ. Taking n in common and cancelling,
β̂ = Σxiyi / Σxi² -------------------------2.17
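The deviation-form formula, together with α̂ = Ȳ - β̂X̄, can be computed directly. The data below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Sketch of OLS in deviation form: beta_hat = sum(x*y) / sum(x**2),
# alpha_hat = Ybar - beta_hat * Xbar, where x, y are deviations from the means.
# Hypothetical data.

X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.0, 4.0, 5.0, 4.0, 5.0]

n = len(X)
Xbar, Ybar = sum(X) / n, sum(Y) / n
x = [xi - Xbar for xi in X]  # deviations of X from its mean
y = [yi - Ybar for yi in Y]  # deviations of Y from its mean

beta_hat = sum(a * b for a, b in zip(x, y)) / sum(a ** 2 for a in x)
alpha_hat = Ybar - beta_hat * Xbar

print(round(beta_hat, 2), round(alpha_hat, 2))  # 0.6 2.2
```

The fitted sample regression line is therefore Ŷi = 2.2 + 0.6Xi for this data.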
When we estimate a two-variable model (one independent variable X & one dependent variable Y) we find r². But if we have more than two variables (one dependent variable & more than one independent variable (X1, X2, ..., Xn)) we have the coefficient of determination R².
Definition of r² (R²): After estimating α̂ & β̂ from the sample observations of Y & X using the OLS method, we need to know how 'good' the fit of this line to the sample observations of Y & X is; that is, we measure the dispersion of the sample observations around the regression line. The closer the observations are to the line, the better is the explanation of the variation of Y by the changes in the explanatory variables (the X's).
r² shows the percentage of the total variation of the dependent variable that can be explained by the independent variable X.
[Diagram: the deviation of an observation Yi from the sample mean Ȳ (the total variation) is split into the explained variation, measured along the regression line Ŷi = α̂ + β̂Xi, and the unexplained variation.]
Suppose a researcher has the model Yi = α + βXi + Ui. To estimate this model he takes some sample observations to estimate the values of α & β. In his estimation the data points may fall below, above or on the line. Then, using r², he can observe whether the regression line gives the best fit for these data or not.
Yi is the observed sample value;
Ȳ is the mean value of the sample;
Ŷi = α̂ + β̂Xi is the estimated regression line using the sample data.
Yi - Ȳ shows by how much the actual sample value deviates from the sample mean value. This is called the total variation, represented by small yi.
Ŷi - Ȳ explains by how much the estimated values deviate from the sample mean value. This is called the explained variation & is represented by ŷi.
Yi - Ŷi shows the difference between the actual value of Yi & the estimated value of Yi (Ŷi). This is called the unexplained variation, represented by ei.
Therefore:
Total variation: yi = Yi - Ȳ
Explained variation: ŷi = Ŷi - Ȳ
Unexplained variation: ei = Yi - Ŷi
Summing each equation over the sample & squaring, we have Σyi², Σŷi² & Σei².
We square them because the sum of the deviations of any variable around its mean value is zero; to avoid this we square.
yi = ŷi + ei
This shows that each deviation of the sample observed value of Y from its mean consists of two components:
i. ŷi, which shows the amount explained by the regression line;
ii. ei, the variation left unexplained by the regression line.
Taking equation 2.28, summing it over the sample and squaring:
Σyi² = Σ(ŷi + ei)² = Σŷi² + Σei² + 2Σŷiei
Since ŷi = β̂xi is in deviation form, Σŷiei = β̂Σxiei; and from equation 2.19 we know that Σxiei = 0, so substituting in the above equation gives Σŷiei = 0. Then
Σyi² = Σŷi² + Σei²
25
From equation 2.21
. =
. or or
Limiting values of r²
1) If the regression line explains the total variation of Y (all observations lie on the line), then Σei² = 0 & r² will be one.
2) If the regression line explains only part of the variation of Y, then ei will have some value & r² will be greater than zero but less than one.
3) If the regression line does not explain any part of the variation, then Σei²/Σyi² will be one & r² will be zero.
Hence 0 ≤ r² ≤ 1.
Variance of β̂
By definition it is known that the sum of any variable's deviations from its mean is equal to zero:
Σxi = Σ(Xi − X̄) = ΣXi − nX̄ = 0
(since X̄ is a constant number, summing it over the sample means multiplying it by n, & nX̄ = ΣXi).
Using Σxi = 0 we can write β̂ = Σxiyi/Σxi² = ΣxiYi/Σxi² = Σ(xi/Σxi²)Yi — — — 2.37
The values of the independent variable are a set of fixed values which do not change from sample
to sample. Then xi/Σxi² is a constant number; let us represent it by ki, & equation 2.37 can
be written as
β̂ = ΣkiYi
By definition ki = xi/Σxi², & the ki have the following properties:
Σki = Σxi/Σxi² = 0, because the sum of any variable's deviations from its mean is zero;
Σki² = Σxi²/(Σxi²)² = 1/Σxi².
Then, since the Yi are independent,
Var(β̂) = Var(ΣkiYi) = Σki²·Var(Yi)
By assumption number 2, Var(Ui) = σ². Substitute α + βXi + Ui in place of Yi; since α + βXi is non-stochastic,
Var(Yi) = Var(Ui) = σ² — — — 2.43
From equation no 2.43 we have the following:
Var(β̂) = σ²Σki² — — — 2.44
We know that Σki² = 1/Σxi², so
Var(β̂) = σ²/Σxi² — — — 2.45
Then β̂ ~ N(β, σ²/Σxi²).
Mean of α̂
From equation no 2.14 we have α̂ = Ȳ − β̂X̄. Again, averaging the model Yi = α + βXi + Ui over the sample gives Ȳ = α + βX̄ + Ū. Substituting,
α̂ = α + βX̄ + Ū − β̂X̄
Take the expected value of α̂; since α, β & X̄ are constants, & the expected value of a
constant is the constant itself,
E(α̂) = α + βX̄ + E(Ū) − E(β̂)X̄
We know that E(Ū) = 0 & E(β̂) = β. Then
E(α̂) = α + βX̄ − βX̄ = α — — — 2.46
so α̂ is an unbiased estimator of α.
Variance of α̂
α̂ = Ȳ − β̂X̄ = Σ(1/n − X̄ki)Yi, so
Var(α̂) = Σ(1/n − X̄ki)²·Var(Yi)
We know that Var(Yi) = σ² from equation no 2.43; substitute it in place of Var(Yi). Then we will have
Var(α̂) = σ²Σ(1/n − X̄ki)² = σ²(Σ1/n² − (2X̄/n)Σki + X̄²Σki²)
We know that the summation of a constant number over the sample is equal to multiplying the constant number
by n, so Σ1/n² = n(1/n²) = 1/n. We proved that Σki = 0 & Σki² = 1/Σxi². Again,
Var(α̂) = σ²(1/n + X̄²/Σxi²) = σ²ΣXi²/(nΣxi²) — — — 2.47
(using Σxi² + nX̄² = ΣXi²).
Standard Error (S.E.) values of α̂ & β̂
The S.E. is the square root of the variance:
S.E.(β̂) = √Var(β̂) = √(σ²/Σxi²)
S.E.(α̂) = √Var(α̂) = √(σ²ΣXi²/(nΣxi²))
Since σ² cannot be easily computed, it is substituted by its unbiased estimate σ̂² = Σei²/(n−2) (equations 2.47 & 2.48):
S.E.(β̂) = √(σ̂²/Σxi²)
S.E.(α̂) = √(σ̂²ΣXi²/(nΣxi²))
If we accept the null hypothesis H0: β = 0, it means:
- The independent variable is insignificant.
- The sample parameter does not explain the population parameter, or
- The independent variable does not influence the dependent variable (no relationship
between the dependent & independent variables).
Then the above equation will be written as Yi = α̂ + ei,
because under H0 the value of β̂ is not different from zero, i.e. the slope of the line is zero.
ii. Again, in the case of the intercept:
if S.E.(α̂) > α̂/2 we accept the null hypothesis H0: α = 0 & reject the alternative H1: α ≠ 0;
if S.E.(α̂) < α̂/2 we reject H0, & it means the equation will have an intercept, i.e. α̂ is statistically significant.
[Figures (a)–(d): sampling distributions of the estimates, illustrating the cases in which the standard error test leads to accepting or rejecting the null hypothesis.]
4) If we find that S.E.(β̂) < β̂/2 we reject the null hypothesis H0: β = 0 & accept the alternative H1: β ≠ 0.
The level of significance is the probability of rejecting a null hypothesis when it is true (committing a Type I
error); i.e., the level of probability with which we are willing to risk a Type I error
is called the significance level.
Ex. If the level of significance is 5%, then there are 5 chances out of 100 that we would reject the
null hypothesis when it is correct (i.e. we commit an error & reject H0: β = 0 when it should be accepted),
OR we are 95% confident that we have made the right decision, & only with 5% probability
might we have done wrong.
iii. Define the number of degrees of freedom (d.f.), i.e. N − K, where
N = total sample size
K = number of estimated parameters
The d.f. = N − K.
a) If the calculated t falls in the critical region, we reject the null hypothesis H0: β = 0 & accept the
alternative, i.e. H1: β ≠ 0.
b) If the calculated t falls in the acceptance region, we accept the null hypothesis H0: β = 0 & reject the
alternative H1: β ≠ 0.
[Figure (e): the t-distribution, with the acceptance region between −2.228 & +2.228 and the rejection (critical) regions in the two tails.]
Interpretation of the t-test
If the calculated t exceeds the table value, we reject the null hypothesis H0: β = 0.
As a rough rule of thumb, if the calculated t is greater than +2 or less than −2 we reject the null hypothesis H0: β = 0 &
accept the alternative H1: β ≠ 0; the t-value then lies in the critical region. If the calculated t is
smaller than +2 & greater than −2 (i.e. −2 < t < 2) we accept the null hypothesis & reject the alternative.
By constructing confidence intervals we can define how close our estimates are to the true
population parameters. Interpretation of confidence intervals, for a 95% confidence level:
i. In the long run, 95 out of 100 such intervals will contain the true parameter within their limits. OR
ii. We are 95% confident that the unknown population parameter (β) will lie within
the limits/interval. OR
iii. In 5% of the cases the population parameter will lie outside the confidence limits.
But it does not mean that we can say a specific computed confidence interval contains the true
population parameter (β) with probability 0.95, because the probability that a specific fixed interval contains β is either 1 or 0.
Confidence interval from the Student t-distribution
The 95% confidence interval for β is β̂ ± t(0.025, N−K)·S.E.(β̂).
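Using the numbers from the investment example that follows (β̂ = −2225, S.E.(β̂) = 2103.66, a sample of 10 years so d.f. = 8, & the table value t(0.025, 8) = 2.306), a 95% interval can be sketched as:

```python
beta_hat = -2225.0
se_beta = 2103.66
t_table = 2.306                 # t(0.025, 8), two-tailed 5% with 8 d.f.

lower = beta_hat - t_table * se_beta
upper = beta_hat + t_table * se_beta
contains_zero = lower <= 0.0 <= upper   # beta = 0 inside => insignificant at 5%
```

Since zero lies inside the interval, the slope is insignificant at the 5% level, agreeing with the S.E. & t-tests of the example.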
where Xt is the rate of interest & α is the intercept.
iii. Specification of the econometric model.
There are other variables which affect investment: the marginal efficiency of capital
(MEC), saving, consumption, political stability, etc. Since these & other variables are not
incorporated in the mathematical model of the investment function, we can capture them
by incorporating a random term Ui in our model:
It = α + βXt + Ui
Then, by adding the random (error, stochastic, or disturbance) term Ui, we convert the
mathematical (exact) relationship between investment & the rate of interest into the inexact
relationship of an econometric model.
iv. Obtaining data: a sample of 10 years of observations is given to estimate the
model; the data are time-series data.
v. Estimation of the econometric model.
The economic relationship is explained using a single-equation model, so the most
appropriate method of estimating this equation is the OLS method. To estimate the model we
use table 2.1 & obtain the following results in deviation form:
Î = 878 − 2225Xt
i. If the interest rate is zero, i.e. Xt = 0, then Î = α̂ = 878, meaning that if the interest rate is zero,
investment will be equal to 878 birr. This is the interpretation of the constant term in the
case of a linear equation.
ii. β̂ = −2225 indicates that if the rate of interest increases/decreases by one unit, investment
will decrease/increase on the average by 2225 birr. Therefore it passes the economic
criterion, because it reflects the inverse relationship between investment & the interest rate.
b) Since the model passes the a priori economic criterion, the next step is to test the reliability of
the estimated parameters using statistical tests: the r², S.E. & t-tests.
i. The correlation coefficient test, r²
To estimate r² we can use the formulas of 2.31-2.35.
We know that β̂ = −2225 and, from table 2.1, r² = 0.123.
This means 12.3% of the change in investment is accounted for (explained) by the interest rate &
the remaining 87.7% is not explained by the rate of interest but by other factors,
represented by Ui in our model.
ii. Standard error test:
S.E.(α̂) = √(σ̂²ΣXi²/(nΣxi²)) = 133.046
S.E.(β̂) = √(σ̂²/Σxi²) = 2103.661
Having obtained the values of S.E.(α̂) & S.E.(β̂) we can undertake the S.E. test using
hypothesis testing.
Test of S.E.(α̂): if S.E.(α̂) < α̂/2, then we can reject the null hypothesis H0: α = 0 &
accept the alternative H1: α ≠ 0.
S.E.(α̂) = 133.046 & α̂/2 = 878/2 = 439
439 > 133.046
Then we can conclude that α̂ is significant: we reject the null hypothesis & accept the
alternative.
Test of S.E.(β̂): if S.E.(β̂) > β̂/2 we accept the null hypothesis and reject the alternative.
Here β̂/2 = 2225/2 = 1112.5 & S.E.(β̂) = 2103.661 > 1112.5, so β̂ is insignificant.
Given the 5% level of significance we can make the following approximate t-test
measurements:
t = α̂/S.E.(α̂) = 878/133.046 = 6.6; since the t-value is greater than 2 we can reject the null hypothesis & accept
the alternative.
t = β̂/S.E.(β̂) = −2225/2103.66 = −1.058
Again, if t > +2 or t < −2 we reject the null hypothesis & accept the alternative.
Since this t-value is greater than −2 (it lies between −2 & +2), we accept the null hypothesis
& reject the alternative. To summarize, we can write the results as follows:
Ît = 878 − 2225Xt
S.E. (133.046) (2103.66)
t (6.6) (−1.058)
The equation passed the economic (a priori) criterion, but we found mixed results in the statistical
tests.
a) α̂ satisfied the statistical test: because
S.E.(α̂) < α̂/2 & also t > 2, we can say that we reject the null hypothesis & accept
the alternative H1: α ≠ 0. It means that the equation should have an intercept term.
b) β̂ does not satisfy the statistical test: because
S.E.(β̂) > β̂/2, we accept the null hypothesis that β̂
is statistically insignificant:
- Investment (Y) is not affected by the change in the interest rate, or investment is not
interest-sensitive.
- There is no relationship between investment and the interest rate because β = 0.
- We accept H0: β = 0 & reject the alternative, meaning β̂ is not different from
zero.
Again, this is supported by the t-test. Since t = −1.058, which is greater than −2 (t is found
between ±2), we can conclude that we accept the null hypothesis and:
- The estimate β̂ is statistically insignificant.
- β̂ is not different from zero.
- There is no relationship between investment & the interest rate.
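The rule-of-thumb standard error test applied throughout this example can be sketched as a small helper function, evaluated on the two estimates above:

```python
def se_test(estimate, se):
    """Rule of thumb used in the module: the estimate is 'significant'
    when S.E. < |estimate| / 2, otherwise 'insignificant'."""
    return "significant" if se < abs(estimate) / 2 else "insignificant"

# Values from the investment example above
intercept_result = se_test(878.0, 133.046)    # intercept alpha_hat
slope_result = se_test(-2225.0, 2103.66)      # slope beta_hat
```

The helper reproduces the conclusions above: the intercept passes the test while the slope fails it.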
Example 2
Given the data in table 2.2, fit the data and estimate the income elasticity.
Table 2.1
(yi = Yi − Ȳ, xi = Xi − X̄; Ŷi = 878 − 2225Xi; ei = Yi − Ŷi)
Year  Invest-   Rate of        yi      xi       yi²       xi²        xiyi     Ŷi        ei        ei²
      ment (Y)  interest (X)
58    658       0.05          -95.4   -0.006   9101.16   0.000036    0.5724  766.75   -108.75   11826.5625
59    804       0.045          50.6   -0.011   2560.36   0.000121   -0.5566  777.875    26.125    682.5156
60    836       0.045          82.6   -0.011   6822.76   0.000121   -0.9086  777.875    58.125   3378.516
61    765       0.055          11.6   -0.001    134.56   0.000001   -0.0116  755.625     9.375     87.89
62    777       0.06           23.6    0.004    556.96   0.000016    0.0944  744.5      32.5     1056.25
63    711       0.06          -42.4    0.004   1797.76   0.000016   -0.1696  744.5     -33.5     1122.25
64    755       0.06            1.6    0.004      2.56   0.000016    0.0064  744.5      10.5      110.25
65    745       0.05           -6.4   -0.006     40.96   0.000036    0.0384  766.75    -10.75     390.0625
66    696       0.07          -57.4    0.014   3294.76   0.000196   -0.8036  722.25    -26.25     689.0625
67    787       0.065          33.6    0.009   1128.96   0.000081    0.3024  733.375    53.625   2875.641
Sums: ΣYi = 7,534  ΣXi = 0.56  (Ȳ = 753.4, X̄ = 0.056)
Table 2.2
(yi = log Yi − mean of log Y, xi = log Xi − mean of log X; ei = log Yi − log Ŷi)
Year  Y   X   log Y   log X   yi        xi        yi²       xi²       xiyi      log Ŷi    ei         ei²
58    8   17  0.9     1.23   -0.317    -0.399    0.1004    0.1596    0.1286    0.9163   -0.0132    0.000176
59    12  27  1.079   1.431  -0.141    -0.1986   0.0198    0.0394    0.02797   1.069     0.0101    0.000103
60    15  36  1.176   1.556  -0.044    -0.0737   0.00193   0.00543   0.00323   1.164     0.0121    0.000146
61    18  46  1.255   1.663   0.0352    0.0327   0.001244  0.00107   0.00115   1.245     0.01037   0.000108
62    22  57  1.342   1.756   0.1224    0.1258   0.0149    0.0158    0.0154    1.316     0.0267    0.000716
63    23  67  1.361   1.826   0.1417    0.196    0.02      0.0384    0.0278    1.369    -0.00729   0.0000531
64    26  81  1.415   1.908   0.1949    0.278    0.038     0.0776    0.0543    1.431    -0.01668   0.000278
Sums: Σlog Yi = 8.53, Σlog Xi = 11.37, Σyi² = 0.1965, Σxi² = 0.3374, Σxiyi = 0.2564, Σlog Ŷi = 8.51, Σei² = 0.00158
Means: mean of log Y = 1.22, mean of log X = 1.624
Exercise for chapter two
1) Given the following table about the relationship between corn production & fertilizer utilization:
Corn       40 44 46 48 52 58 60 68 74 80
Fertilizer  6 10 12 14 16 18 22 24 26 32
If the relationship between the production of corn & fertilizer is linear:
a) Estimate the parameter coefficients & standard errors, & test the significance of the
parameters.
b) What proportion of the variation in corn production is explained by fertilizer?
c) Interpret the results obtained from the linear relationship. Had the relationship been non-
linear, what would be the interpretation of the coefficients?
2) Suppose that Mr. A is estimating a consumption function and obtains the following results:
.
t (3.1) (18.7)
Chapter 3: Properties of the Least Squares Estimates
The least squares estimators are:
a) Linear functions of the sample observations.
b) Unbiased, i.e. E(α̂) = α & E(β̂) = β, where α̂ & β̂ are the sample estimators, α & β are the true population parameters & E means the
expected or average value.
c) Of minimum variance, i.e. the variance of α̂ & β̂ is the smallest as
compared to the variance obtained from any other econometric method.
Gauss–Markov Theorem: given the assumptions of the CLR model, the least squares estimators satisfy the
properties of BLUE (Best Linear Unbiased Estimators).
1) Linearity:
β̂ = Σxiyi/Σxi² = ΣkiYi, where ki = xi/Σxi²
We know that the ki are constants; then β̂ is a linear function of the dependent variable Yi.
Thus both α̂ & β̂ are expressed as linear functions of the Y's.
2) Unbiasedness: the bias of an estimator is defined as the difference between its expected
value & the true parameter:
Bias = E(β̂) − β
If the estimator is unbiased its bias is zero,
i.e. E(β̂) = β; we proved this in the previous page, equation number 2.42.
Again, bias = E(α̂) − α; if the estimator is unbiased its bias is zero, E(α̂) = α. We also proved this & you
can refer to equation number 2.46.
3) The minimum variance property (best estimator)
The property of minimum variance is the main reason for the popularity of the OLS method.
'Best' in this sense means definitely superior: one should know that when we say the
OLS estimator is the best estimator, it has a minimum variance as compared to any estimators obtained
using other econometric methods such as 2SLS, 3SLS, maximum likelihood estimators, etc.
[Figure: two candidate probability distributions, A & B, drawn over the same axis, with the sample observations x1, x2, …, x8 lying mainly under distribution A.]
Given the distributions A & B: if the true population were B, then the probability that we would have
obtained the sample shown would be quite small. But if the true population were A, then the probability
that we would have drawn the sample would be substantially larger. So we select population A as the one most likely to have yielded the observed data.
We define the maximum likelihood estimator of β as the value of β which would most likely generate the observed sample observations Y1,
Y2, Y3, …, Yn. Then, if Yi is normally distributed & each of the Y's is drawn independently, the maximum
likelihood estimator maximizes
P(Y1)·P(Y2)·…·P(Yn)
where each P represents a probability associated with the normal distribution. P(Y1)·P(Y2)·…·P(Yn) is often referred to as the likelihood function. The
likelihood function depends not only on the sample values but also on the unknown parameters of
the problem.
In describing the likelihood function we often think of the unknown parameters as
varying while the Y's (dependent variables) are fixed. This seems reasonable because finding the
maximum likelihood estimates involves a search over alternative parameter estimates which would be
most likely to generate the given sample. For this reason the likelihood function must be interpreted
differently from the joint probability distribution: in the latter case the Y's are allowed to vary & the
underlying parameters are fixed, while the reverse is true in the case of maximum likelihood. Now we are in a
position to search for the maximum likelihood estimators of the parameters of the two-variable regression
model.
We know that Yi ~ N(α + βXi, σ²),
i.e. Y is normally distributed with mean α + βXi & variance σ².
Assume all the assumptions of least squares hold & assume further that the disturbance term has a normal
distribution. Will the ML estimators differ from the least squares estimators? Will such estimators
possess the desirable properties? In our model the sample consists of n observations on Y & X.
Then the Yi have mean values α + βX1, α + βX2, …, α + βXn but a common
variance σ². Why a different mean but a constant variance? The reason is simply
that Y assumes a different mean value for each fixed value of Xi, while the dispersion around each mean is the same.
The joint probability density function can be written as a product of n individual density
functions:
f(Y1, Y2, …, Yn) = f(Y1)·f(Y2)·…·f(Yn)
f(Yi) = (1/(σ√(2π)))·exp[−(Yi − α − βXi)²/(2σ²)] — — — 3.3
f(Y1)·f(Y2)·…·f(Yn)
= (1/(σ√(2π)))·exp[−(Y1 − α − βX1)²/(2σ²)] × … × (1/(σ√(2π)))·exp[−(Yn − α − βXn)²/(2σ²)]
= (1/(σⁿ(2π)^(n/2)))·exp[−Σ(Yi − α − βXi)²/(2σ²)] — — — 3.6
The values of Y1, Y2, …, Yn are given, but the values of α, β & σ² are not known; then this function can
be called the likelihood function and denoted by L(α, β, σ²):
L(α, β, σ²) = (1/(σⁿ(2π)^(n/2)))·exp[−Σ(Yi − α − βXi)²/(2σ²)] — — — 3.8
Take the log of L:
log L = −n log σ − (n/2) log(2π) − Σ(Yi − α − βXi)²/(2σ²)
Since the first two terms of log L do not contain α & β, maximizing log L with respect to α & β is equivalent to minimizing Σ(Yi − α − βXi)².
Setting the partial derivatives of log L with respect to α & β equal to zero gives
Σ(Yi − α̂ − β̂Xi) = 0
ΣXi(Yi − α̂ − β̂Xi) = 0 — — — 3.15
These two equations are the same normal equations as OLS, so the ML estimators of α & β coincide with the OLS estimators. From the derivative with respect to σ² we can obtain the ML estimator of σ²:
∂ log L/∂σ² = −n/(2σ²) + Σ(Yi − α − βXi)²/(2σ⁴) = 0
We know that Yi − α̂ − β̂Xi = ei. Therefore
σ̃² = Σei²/n — — — 3.17
The ML estimator of σ² is different from the OLS estimator: in OLS the variance was estimated by σ̂² = Σei²/(n−2).
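This difference can be sketched numerically (with made-up data): the ML & OLS fits of α & β coincide, but the ML variance estimator divides Σei² by n while OLS divides by n − 2:

```python
# Illustrative data (assumed)
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.0, 4.1, 5.9, 8.2, 9.8]
n = len(X)

x_bar, y_bar = sum(X) / n, sum(Y) / n
xi = [x - x_bar for x in X]
beta = sum(a * (y - y_bar) for a, y in zip(xi, Y)) / sum(a * a for a in xi)
alpha = y_bar - beta * x_bar

rss = sum((y - (alpha + beta * x)) ** 2 for x, y in zip(X, Y))  # Sum(ei^2)

sigma2_ml = rss / n          # ML estimator (equation 3.17), biased downward
sigma2_ols = rss / (n - 2)   # unbiased OLS estimator
```

The ratio of the two estimators is always n/(n−2), so they converge as the sample size grows.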
a) What are the properties that make the OLS estimators BLUE?
b) What are the important properties of BLUE as compared to other
methods of econometrics?
Chapter 4: Multiple Linear Regression
In a simple regression we study the relationship between the dependent variable (Yi) & only one
independent (explanatory) variable Xi. In this simple regression analysis, for example, the quantity
demanded (Y) depends upon the price of the product alone, other things being constant (absorbed by
Ui). But in multiple regression analysis the dependent variable (Yi) depends upon many
explanatory (independent) variables.
Ex. Quantity demanded may be a function of the price of the product, the prices of other goods (substitute &
complementary goods), income, wealth, previous year's income, consumption behavior, etc. For the sake of
simplicity let's consider the three-variable case
Yi = β0 + β1X1i + β2X2i + Ui
where Yi is the dependent variable, X1 & X2 are the independent (explanatory) variables & Ui is the stochastic
disturbance term.
β0 measures the constant term: the average value of Y when X1 & X2 are
zero.
β1 measures the change in Y for a unit change in X1 alone (the effect of X1 on Y
given that X2 is constant).
β2 measures the change in Y for a unit change in X2 alone (the effect of X2 on Y
given that X1 is constant).
The values of β1 & β2 are called partial regression coefficients or marginal coefficients, but not
elasticities, & these values measure only average changes in Y because the regression line is a linear
regression line. But if the regression equation is non-linear, like the following Cobb–Douglas production
function
Y = A·L^α·K^β
where Y = output, L is labor & K is capital, and A, α & β are parameters, then to estimate this equation we first
transform it into linearity using logs:
log Y = log A + α log L + β log K
r(Ui, Uj) = 0, i.e. Cov(Ui, Uj) = 0 for i ≠ j (no autocorrelation).
In addition to this we assume further that there is no exact linear relationship between the explanatory
variables X1 & X2 (no multicollinearity between X1 & X2).
Given n observations on Yi, X1 & X2, our problem is to estimate the values of β0, β1 & β2. We apply
OLS to obtain their estimates β̂0, β̂1 & β̂2. As in the simple linear regression model, we minimize the sum of squared residuals
Σei² = Σ(Yi − β̂0 − β̂1X1i − β̂2X2i)²
Differentiating Σei² with respect to β̂0, β̂1 & β̂2 and setting the derivatives equal to zero gives the normal equations.
If you write the above equations in lower-case letters for deviations from means, the solutions can be written in
the following way:
β̂1 = (Σx1iyi·Σx2i² − Σx2iyi·Σx1ix2i) / (Σx1i²·Σx2i² − (Σx1ix2i)²) — — — 4.14
β̂2 = (Σx2iyi·Σx1i² − Σx1iyi·Σx1ix2i) / (Σx1i²·Σx2i² − (Σx1ix2i)²) — — — 4.15
β̂0 = Ȳ − β̂1X̄1 − β̂2X̄2
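Equations 4.14 & 4.15 can be sketched directly in code from the sums of squares & cross-products in deviation form. The data below are made up for illustration:

```python
# Illustrative data (assumed)
Y  = [10.0, 12.0, 15.0, 14.0, 18.0, 20.0]
X1 = [2.0, 3.0, 4.0, 4.0, 5.0, 6.0]
X2 = [1.0, 2.0, 2.0, 3.0, 3.0, 4.0]
n = len(Y)

def dev(v):
    """Deviations of a series from its mean."""
    m = sum(v) / len(v)
    return [u - m for u in v]

y, x1, x2 = dev(Y), dev(X1), dev(X2)
S11 = sum(a * a for a in x1)                 # Sum(x1^2)
S22 = sum(a * a for a in x2)                 # Sum(x2^2)
S12 = sum(a * b for a, b in zip(x1, x2))     # Sum(x1*x2)
S1y = sum(a * b for a, b in zip(x1, y))      # Sum(x1*y)
S2y = sum(a * b for a, b in zip(x2, y))      # Sum(x2*y)

D = S11 * S22 - S12 ** 2   # nonzero only if there is no exact collinearity
b1 = (S1y * S22 - S2y * S12) / D             # equation 4.14
b2 = (S2y * S11 - S1y * S12) / D             # equation 4.15
b0 = sum(Y) / n - b1 * sum(X1) / n - b2 * sum(X2) / n

e = [yv - (b0 + b1 * a + b2 * c) for yv, a, c in zip(Y, X1, X2)]
```

As a sanity check, the OLS residuals must sum to zero & be uncorrelated with each regressor, which is exactly what the normal equations state.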
Var(β̂1) = σ̂²·Σx2i² / (Σx1i²·Σx2i² − (Σx1ix2i)²) — — — 4.16
Var(β̂2) = σ̂²·Σx1i² / (Σx1i²·Σx2i² − (Σx1ix2i)²) — — — 4.17
Var(β̂0) = σ̂²·[1/n + (X̄1²Σx2i² + X̄2²Σx1i² − 2X̄1X̄2Σx1ix2i)/(Σx1i²Σx2i² − (Σx1ix2i)²)] — — — 4.18
where σ̂² = Σei²/(n−3) is the estimated variance of the disturbance term.
S.E.(β̂1) = √Var(β̂1) — — — 4.19
S.E.(β̂2) = √Var(β̂2) — — — 4.20
S.E.(β̂0) = √Var(β̂0) — — — 4.21
If the estimated values of β1 & β2 are equal to zero, i.e. if we accept the above null hypothesis, it
means:
a) The dependent variable Yi is not explained by the explanatory variables X1i & X2i, or
there is no relationship between Y & the Xi's.
OR
b) We accept the null hypothesis that there is no relationship between
Yi & the Xi's.
OR
c) The estimates are not significantly different from zero, i.e. the estimates are insignificant.
The alternative hypotheses are
H1: β1 ≠ 0
H1: β2 ≠ 0
These mean:
a) The values of the estimators are different from zero, i.e. there is a relationship between Yi & X1
& X2, or Y is explained by X1 & X2.
OR
b) The estimators are significantly different from zero (they are not equal to zero).
OR
c) β̂1 & β̂2 are statistically significant. Having these ideas in mind we can undertake the test
as follows.
The null hypotheses: H0: β1 = 0
H0: β2 = 0
The alternative hypotheses: H1: β1 ≠ 0
H1: β2 ≠ 0
a) If we find that S.E.(β̂1) > β̂1/2 & S.E.(β̂2) > β̂2/2 (if the estimated S.E. is greater than half of the
estimator):
We accept the null hypothesis & interpret it as stated above, i.e.:
- There is no relationship between Y & X1, X2.
- β̂1 & β̂2 are insignificant (X1 & X2 do not affect Yi).
- β1 & β2 are equal to zero.
- We reject the alternative hypothesis, which says the β's are different from zero.
b) If S.E.(β̂1) < β̂1/2 & S.E.(β̂2) < β̂2/2:
It means the S.E.'s are less than half of the estimators; then we can interpret it as follows:
a) We accept the alternative, meaning the values of β1 & β2 are different from zero, and reject the null
hypothesis, which says the values of β1 & β2 are equal to zero.
b) There is a relationship between the dependent variable Yi and the independent variables X1 &
X2, or X1 & X2 explain Yi.
c) β̂1 & β̂2 are significant.
a) Computed value of t: for each estimator, including the intercept term,
t* = β̂i / S.E.(β̂i)
b) Theoretical or table value of t. This can be obtained from the table as follows:
First, count the sample size & represent it by N.
Second, determine the level of significance, represented by α.
Third, determine whether you are taking a one-tail or two-tail test.
Fourth, count the number of estimated parameters & represent it by K.
Then, having determined these, we can write the table value of t as
t(α/2, N−K)
The significance level is given at the top row of the t-table & the d.f. can be obtained in the first column of that table. The point where α/2
(the significance level) & the d.f. intersect gives you the table value of the t-test.
c) The last stage is to compare a & b, i.e. the calculated value with the table value.
Now one should be aware of the following idea: the computed value of t should be calculated for each
estimator, but you will have only one table value of t obtained from the statistical table. Then
each calculated value of t from the estimators should be compared with the fixed table value of t, & the
procedure of comparison is:
1. If the calculated value of t is less than the table value of t, then accept the null hypothesis &
reject the alternative. It means t* < t(α/2, N−K).
2. If the calculated value of t is greater than the table value, reject the null hypothesis &
accept the alternative, i.e. t* > t(α/2, N−K).
R² = Σŷi²/Σyi² = 1 − Σei²/Σyi²
Adjusted R² (written R̄²):
R̄² = 1 − (Σei²/(N−K)) / (Σyi²/(N−1))
OR
R̄² = 1 − (1 − R²)·(N−1)/(N−K)
If the sample size is small, R̄² < R²; but if the sample size is very large, R̄² & R² will be very close to each
other. Here we should be aware that for a very small sample R̄² may be negative, but it is then taken as
zero. Note that if R² = 1, R̄² = 1; when R² = 0, R̄² = (1−K)/(N−K), in which case R̄² will be negative if K > 1.
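The adjustment formula can be sketched as a one-line helper; the call below uses R² = 0.8599 with N = 15 & K = 3, the numbers of the supply example worked later in this chapter:

```python
def adjusted_r2(r2, n, k):
    """Adjusted R-squared: 1 - (1 - R^2) * (n - 1) / (n - k)."""
    return 1 - (1 - r2) * (n - 1) / (n - k)

adj = adjusted_r2(0.8599, 15, 3)   # close to the 0.8365 reported in the example
```

Note the two boundary behaviours stated above: at R² = 1 the adjustment changes nothing, and at R² = 0 the adjusted value (1−K)/(N−K) is negative whenever K > 1.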
F-tests:
Under this task we compare the computed F-value with the table value of the F-test. Importance of F-tests: this test is undertaken in multiple regression analysis for the following reasons.
a) To test the overall significance of the explanatory variables, i.e. whether the explanatory variables
X1, X2 actually do have influence on the explained variable Yi or not.
b) Test of improvement: by introducing one additional variable into the model, to test
whether the additional variable improves the influence of the explanatory variables on the
explained variable or not.
Ex.
Y = β0 + β1X1 + β2X2 + U
Y = β0 + β1X1 + β2X2 + β3X3 + U
Now if you take the first equation it has only two explanatory variables, X1 & X2, but in the second we
included X3. Then the addition of X3 may affect positively or negatively the relationship between Y & X1,
X2.
c) To test the equality of coefficients obtained from different sample sizes (Chow test). Suppose you
have sample data on the agricultural output of Adet woreda from 1974 E.C. up to 1994 E.C.
Now if you want to know the change in agricultural output before & after the fall of the Derg
(1983 E.C.), by splitting these data into two you can compare the coefficients, & by doing so you can
undertake an F-test and see whether there is a change in agricultural output or not.
d) Testing the stability of the coefficients of the variables as the sample size increases.
Ex. You may first take a 10-year sample for your study & estimate your coefficients. But if you
increase the sample size to 15 years, whether the coefficients of the variables are stable
or not can be tested using F-tests.
The F-statistic can be calculated for a given estimated equation using the following formula:
F* = (Σŷi²/(K−1)) / (Σei²/(N−K)) = (R²/(K−1)) / ((1−R²)/(N−K))
The table value of the F-test can be read from the statistical table as F(α; V1, V2),
where V1 = (K−1) & V2 = (N−K); both of them are degrees of freedom, V1 for the numerator & V2
for the denominator. To undertake the F-test we should compare the calculated value with the
table value.
1) If the calculated F-value is less than the table value, i.e.
F* < F(K−1, N−K)
where F* is the calculated F-value
& F is the table value,
it means we accept the null hypotheses H0: β1 = 0 & H0: β2 = 0 and reject the alternative hypotheses
H1: β1 ≠ 0 & H1: β2 ≠ 0. The interpretation will be:
a) The estimators (β̂1, β̂2) are equal to zero; then the estimates are insignificant.
b) The explanatory variables (X1, X2) do not have influence on the explained variable Yi.
2) If the calculated F-value is greater than its table value, i.e.
F* > F(K−1, N−K),
it means we reject the null hypotheses H0: β1 = 0 & H0: β2 = 0 & accept the alternatives H1: β1 ≠ 0
& H1: β2 ≠ 0. The interpretation of this is just the opposite of a & b in the above sentences.
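The computed-F formula can be sketched in code; the values plugged in here (R² = 0.8599, N = 15, K = 3, table value 3.89) are the numbers of the supply example worked just below:

```python
def f_statistic(r2, n, k):
    """F* = (R^2/(k-1)) / ((1-R^2)/(n-k))."""
    return (r2 / (k - 1)) / ((1.0 - r2) / (n - k))

F_star = f_statistic(0.8599, 15, 3)
F_table = 3.89               # F(2, 12) at the 5% level
reject_null = F_star > F_table
```

Since F* is far above the table value, the overall regression is significant, matching the conclusion of the example.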
Example:
Suppose the quantity supplied of a commodity is assumed to be a linear function of the price of the
commodity itself & the wage rate of labor used in the production of the commodity. The supply
equation is given by
Qs = β0 + β1Px + β2Z + U
where Qs = quantity supplied, Px = the price of commodity X & Z = the wage rate.
Using the following sample data
a) Estimate the parameters using OLS
b) Test the statistical significance of the individual coefficients at the 5% significance level.
c) Test the overall significance of the coefficients (F-test).
d) Compute the price elasticity of supply.
β̂1 = 1.06 & β̂2 = −7.69
β̂0 = Ȳ − β̂1X̄ − β̂2Z̄ = 85.4 − (1.06×36.2) − (−7.69×5.6)
= 85.4 − 38.372 − (−42.946) = 89.974
Q̂ = 89.974 + 1.06X − 7.69Z. This equation will be read as follows:
β̂0 = 89.974 means that if the price of the commodity & the wage rate are zero, the supplier will
supply 89.974 units of the good. But it is often meaningless to interpret the constant term (in
some analyses it does not make sense).
β̂1 is the coefficient of the price of the commodity. The value 1.06 signifies that if the price
of the commodity increases by one birr, the wage rate being constant, quantity
supplied will increase on the average by 1.06 units.
β̂2 is the coefficient of the wage rate. If the wage rate increases by 1 birr, keeping
the price of the commodity constant, quantity supplied will decrease on the average by
7.69 units.
β̂1 & β̂2 are the coefficients of the explanatory variables & they are the marginal
values: if you take the first partial derivatives of the equation, you obtain them.
Ex. Q̂ = 89.974 + 1.06Px − 7.69Z, so ∂Q̂/∂Px = 1.06 & ∂Q̂/∂Z = −7.69.
The estimated variances of the coefficients are the squares of their standard errors:
Var(β̂0) = 1007.94, Var(β̂1) = 0.271, Var(β̂2) = 6.396
From the above values we can calculate S.E.(β̂0), S.E.(β̂1) & S.E.(β̂2) as follows:
S.E.(β̂0) = √Var(β̂0) = 31.748
S.E.(β̂1) = √Var(β̂1) = 0.521
S.E.(β̂2) = √Var(β̂2) = 2.529
Having calculated the S.E. of the coefficients of the variables (β̂0, β̂1, β̂2), we can undertake the S.E. tests
as follows.
- If S.E.(β̂i) > β̂i/2 we can accept the null hypothesis & reject the alternative.
β̂0/2 = 89.974/2 = 44.987
Then S.E.(β̂0) = 31.748 is less than β̂0/2, which is 44.987. Therefore we can conclude that β̂0 is
statistically significant.
S.E.(β̂1) = 0.521 & β̂1/2 = 1.06/2 = 0.53
From this we can see that S.E.(β̂1) is less than β̂1/2; then we can conclude that β̂1 is
significant. Lastly, S.E.(β̂2) = 2.529 & β̂2/2 = 7.697/2 = 3.848; again here S.E.(β̂2) is less than
β̂2/2. All the estimators are statistically significant, i.e. we reject the null hypotheses H0:
β1 = 0, H0: β2 = 0 (we reject the hypothesis which says β1 & β2 are equal to zero) & accept
the alternatives H1: β1 ≠ 0, H1: β2 ≠ 0 (β1 & β2 are different from zero).
The economic interpretation of rejecting the null hypothesis and accepting the alternative is the
following:
i) The estimators are statistically significant.
ii) The explanatory variables Px & Z (the price of commodity X & the wage rate) influence the supply of
the commodity (Q).
Student t-test
In the t-test analysis we compare the calculated t with the table value of t. How to get the calculated t-value:
Computed t for β̂0 = 89.974/31.748 = 2.83
Computed t for β̂2 = 7.69/2.529 = 3.043 (in absolute value)
Significance level: 5%, two-tailed, with N−K = 15−3 = 12 degrees of freedom, i.e.
t(α/2, N−K) = t0.025,12
From the t-table, in the top row find 0.025 & in the first column of the table find 12; then where
these two values intersect, that point will give you the table value of t. For t0.025,12 the table value is 2.179. Compare this table value with the computed value: if the computed
value is greater than the table value we reject the null hypothesis & accept the alternative. Again, if the
computed value is less than the table value we accept the null hypothesis & reject the alternative.
R² = Σŷi²/Σyi² = 1 − Σei²/Σyi²
R² = 0.8599
This means 85.99% of the variation in quantity supplied is explained by the price of the commodity & the wage rate.
Adjusted R²:
R̄² = 1 − (1 − R²)(N−1)/(N−K) = 1 − (1 − 0.8599)(14/12) = 0.8365
F-test
The overall significance of the explanatory variables can be tested using the F-test. Just like the t-test, in the case
of the F-test we will have a computed & a table value of F. The calculated value of F* can be obtained using the
following formula:
F* = (R²/(K−1)) / ((1−R²)/(N−K)) = (0.8599/2)/(0.1401/12)
= 36.826
The table value of F can be obtained by taking the numerator degrees of freedom (K−1) & the denominator
degrees of freedom (N−K): F(K−1, N−K). Given the significance level of 5% we can get F2,12,
i.e. K−1 = 3−1 = 2 & N−K = 15−3 = 12. Then from the table we find the numerator degrees of
freedom (K−1) in the top row of the table & the denominator degrees of freedom (N−K) in the first column
of the table; at the intersection point of these values you will get the table value. From our example, F2,12
at the 5% level is 3.89. Comparison of calculated F & table value of F: if the calculated value of F is greater
than the table value, then we can reject the null hypothesis & accept the alternative. From our example the
calculated F-value, 36.826, is greater than the table value, 3.89. The economic interpretation of this is:
i) All the estimators are significant, i.e. statistically different from zero.
ii) Quantity supplied is affected by the price of the commodity & the wage rate.
Presentation of regression results: different books use different presentation methods, but the most
common is the one below, using our previous example:
Q̂ = 89.974 + 1.06Px − 7.69Z
S.E. (31.748) (0.521) (2.529)
t (2.83) (2.03) (3.04)
1) From the following data compute the regression of automobile expenditure on consumer expenditure &
other travel expenditure (take the linear regression analysis):
Automobile expenditure:  212  158  180  253  175  429  437  419  318  355
Other travel expense:     29   46   28   26   29   64  119   81   74   66
Consumer expenditure:   2437 2476 2132 2256 2258 3566 4486 3602 3446 3736
b) Write the equation.
c) Test the significance of the parameters using standard errors & the t-test.
d) Construct the 95% confidence interval for the parameters.
e) Calculate the unadjusted & adjusted R². Why does the difference arise?
g) If the relationship is non-linear, how would you interpret the results (coefficients)?
h) Calculate the partial correlation coefficients of the parameters & interpret the results.
2) Given the following data, answer questions a) up to h) from question one:
Year: 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999
Y:      40   45   50   55   60   70   65   65   75   75   80  100   90   95   85
X1:      9    8    9    8    7    6    6    8    5    5    5    3    4    3    4
X2:    400  500  600  700  800  900 1000 1100 1200 1300 1400 1500 1600 1700 1800
Assume that income (Y) is the dependent variable & the remaining are the independent variables.
Chapter 5: Relaxing the assumptions of the classical model
In the previous chapters, to estimate the coefficients of the variables in the two-variable & more-than-two-
variable cases, we utilized two basic assumptions, one about the random term Ui & the other about
X (the independent variable). These are:
5.1.1 Heteroscedasticity:
If the probability distribution of Ui remains the same over all values of the explanatory variables, this assumption is
called homoscedasticity, i.e. Var(Ui) = σ², a constant variance. In this case the variation of Ui around the
explanatory variables remains constant. But if the distribution of Ui around the explanatory variables is not
constant, we say that the Ui's are heteroscedastic (non-constant variance): Var(Ui) = σi², where the subscript i signifies that
the individual variances may all be different.
The assumption of homoscedasticity states that the variation of each random term Ui around its zero
mean is constant and does not change as the explanatory variables change; whether the sample size is
increasing, decreasing or remains the same, it will not affect the variance of Ui, which is constant.
Var(Ui) = σ² ≠ f(Xi) — — — 5.1
This explains that the variation of the random term around its mean does not depend upon the explanatory
variable Xi. This constant variance is called homoscedasticity (constant variance).
But the dispersion of the random term around the regression line may not be constant, i.e. the variance of
the random term Ui may be a function of the explanatory variables: Var(Ui) = σi², where the subscript i
signifies that the individual variances may all be different. This is called heteroscedasticity, or non-constant
variance. The case of heteroscedasticity is shown by the increasing or decreasing dispersion of the random
term around the regression line, as in figures (b), (c) & (d).
[Figures (a)–(d): scatter of Y against X. In (a) the dispersion of Ui around the regression line is constant; in (b) it increases with X; in (c) it decreases with X; in (d) it follows a more complicated pattern.]
In diagram (a) you can see that the random term Ui is dispersed within a constant variance around the
regression line.
In figure (b), as the value of X increases the variance of the random term Ui also increases.
Ex1.
St = α + βYt + Ut — — — 5.2
where St is saving, Yt is income, α & β are parameters & Ut is the random term.
If you want to estimate this saving function & collect cross-sectional data, at the lower income group
levels the variation of saving is lower, whereas there is greater variation in the saving behavior of high-
income families. Thus the random term Ui is very low at lower incomes and it tends to increase as income
increases, due to the variation in saving behavior.
Ex. 2: Suppose we try to study consumption expenditure from a given cross-sectional sample of family budgets:
Ct = β0 + β1Yt + Ut ----------------------------------------5.3
where Ct is consumption expenditure and Yt is the disposable income of the household.
Again, at a lower income level the consumption expenditures are almost equal, i.e. there is little variation in consumption expenditure, but at a higher level of income there is wide variation in consumption between households. Then there is a possibility of increasing variance of the random term, i.e. heteroscedasticity. In figure (c) we can see that as the value of X increases the variation of the random term Ui decreases: through learning-by-doing, the errors committed decrease over time, so the variation of the term Ui falls as Xi increases. Again here we will have heteroscedasticity. In figure (d) we may have a more complicated form of heteroscedasticity, i.e. a high variation of Ui at low values of Xi and again a higher variation of Ui at high values of Xi.
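The saving example can be illustrated with a small simulation (a sketch; the parameter values, the sample size and the proportionality factor 0.02 are made-up assumptions, not estimates from this module):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative saving function S = b0 + b1*Y + U, where the spread of U
# grows with income Y, as in fig. (b).  All parameter values here are
# arbitrary assumptions for the sketch.
n = 500
income = rng.uniform(5_000, 40_000, n)
b0, b1 = -650.0, 0.085
u = rng.normal(0.0, 0.02 * income)      # sd of U_i proportional to Y_i
saving = b0 + b1 * income + u

# Split at the median income and compare the spread of the disturbances.
order = np.argsort(income)
low_u, high_u = u[order[: n // 2]], u[order[n // 2:]]
print(low_u.std(), high_u.std())        # high-income spread is clearly larger
```

The larger standard deviation in the high-income half is exactly the widening scatter of saving behaviour described above.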
1) Under the assumption of homoscedasticity we obtained (equations 2.47 and 2.48)
Var(β̂1) = σu²/Σxi²  and  Var(β̂0) = σu²·ΣXi²/(n·Σxi²)
Since the variance of Ui is assumed to be constant we took σu² out of the summation in equations 2.47 and 2.48. But under the heteroscedasticity condition σi² is not constant and we will have
Var(β̂1) = Σxi²σi²/(Σxi²)²
and similarly for Var(β̂0). In this case σi² is not a constant number but varies as X changes. To calculate Var(β̂0) and Var(β̂1) we should know the values of σi², i = 1, 2, ..., n. The problem encountered here is that since we cannot observe the values of Ui we have to estimate them from the sample data, using the residuals ei as proxies for the unobservable errors. Then we would have to estimate n variances σ1², ..., σn² from n observations, one for each variance, a situation in which estimation is impossible.
2) OLS estimators will be inefficient: if the random term Ui is heteroscedastic, the OLS estimates do not have the minimum variance in the class of unbiased estimators; therefore they are not efficient, in either small or large samples.
In the case of heteroscedasticity assume that
σi² = Ki·σu²
Since σu² is constant, this means the heteroscedastic variance σi² is a proportion Ki of the homoscedastic variance σu², where Ki is a non-stochastic constant weight. Substitute σi² = Ki·σu² in the variance of β̂1 when it is heteroscedastic:
Var(β̂1) = Σxi²σi²/(Σxi²)² = σu²·ΣKixi²/(Σxi²)²
a) The variance of β̂1 under heteroscedasticity will be greater than its variance under homoscedasticity whenever ΣKixi² > Σxi². Following this, the true standard error of β̂1 will be underestimated and the t-value associated with it will be overestimated.
i. This leads to the conclusion that, in the specific case at hand, β̂1 is statistically significant (which in fact may not be true).
ii. A further consequence of the heteroscedastic variance of β̂1 is that the usual confidence limits and tests of significance will not be applicable.
iii. If we proceed with our model under the false assumption of homogeneous variances, then our inferences and predictions about the population coefficients will be incorrect.
Heteroscedasticity thus has serious effects on the OLS estimates. If so, how does one know whether heteroscedasticity exists in a model? Various tests have been suggested for establishing heteroscedasticity, and some of them are described below.
1) The Spearman rank correlation test
Obtain the values of the residuals ei, which are estimates of the U's. Then arrange the ei's (ignoring the sign, i.e. taking the absolute value of ei) in ascending or descending order together with the values of X, and compute the rank correlation coefficient
r = 1 − 6ΣD²/(n(n² − 1))
where D is the difference between the rank of |ei| and the rank of Xi. A high rank correlation suggests the presence of heteroscedasticity.
2) The Goldfeld-Quandt test
This test requires that the sample size be at least twice the number of explanatory variables: if the number of explanatory variables is 3 (X1, X2, X3), then the sample size must be at least 6. In addition to the size of the sample, this test assumes normality and no autocorrelation of the U's. The steps are described as follows:
a) Formulate the hypotheses:
The null hypothesis H0: the Ui's are homoscedastic.
The alternative H1: the Ui's are not homoscedastic (the U's are heteroscedastic).
b) Order the observations according to the magnitude of the explanatory variable Xi.
c) Omit a certain number of central observations, say c of them. After deducting the c omitted observations we are left with n − c observations. Then divide this remaining sample into two parts of (n − c)/2 observations each: one part includes the small values of X, while the other part includes the large values of X.
d) Fit separate regressions by the OLS procedure to each part and obtain the sum of squared residuals from each of them.
e) Let Σe1² be the residual sum of squares from the sub-sample of low values of X and Σe2² that from the sub-sample of large values of X. Then calculate the F-test using the ratio of the residual variances:
F* = [Σe2²/((n − c)/2 − k)] / [Σe1²/((n − c)/2 − k)]
The numerator degrees of freedom are (n − c)/2 − k and the denominator degrees of freedom are also (n − c)/2 − k, where k is the number of estimated parameters. From this one can see that the numerator degrees of freedom equal the denominator degrees of freedom, i.e. V1 = V2. Using V1 and V2 we can find the table value of FV1,V2 and compare this table value with the calculated value above.
1) If the calculated value F* is greater than the table value of FV1,V2, then our decision will be:
a. Reject the null hypothesis H0 that the Ui's are homoscedastic.
b. Accept the alternative H1 that the Ui's are heteroscedastic: the variance of the Ui's is not constant.
Ex. Given the following hypothetical data on consumption expenditure Y and income X, ranked in ascending order of X:

Original data        Ranked by X
Y    X               Y    X
55   80              55   80
65   100             70   85
70   85              75   90
80   110             65   100
79   120             74   105
84   115             80   110
98   130             84   115
95   140             79   120
90   125             90   125
75   90              98   130
74   105             95   140
110  160             108  145
113  150             113  150   <- lower 13 values of X
125  165             110  160
108  145             125  165
115  180             115  180
140  225             130  185   <- middle c = 4 observations, to be omitted
120  240             135  190
145  185             120  200
130  220             140  205
152  210             144  210
144  245             152  220
175  260             140  225
180  190             137  230
135  205             145  240
140  265             175  245
178  270             189  250
191  230             180  260
137  250             178  265
189  275             191  270   <- higher 13 values of X
Take the first 13 observations (the lower values of X) and run the regression of Y on X:
Ŷi = 3.4094 + 0.6968Xi
s.e. (8.7049) (0.0744)
r² = 0.8887, Σe1² = 377.17
Again, from the remaining 13 observations run the regression of Y on X, and you will have
Ŷi = −28.0272 + 0.7941Xi
s.e. (30.6421) (0.1319)
r² = 0.7681, Σe2² = 1536.8
Computed value: F* = (Σe2²/11)/(Σe1²/11) = (1536.8/11)/(377.17/11) = 4.07
Table value of FV1,V2: F11,11 from the F table at the 5% level is 2.82. Compare the table value with the computed value: if the computed value is greater than the table value, reject the null hypothesis of homoscedasticity and accept the alternative that there is heteroscedasticity. The computed value 4.07 is greater than the table value 2.82, so we can say that there is heteroscedasticity.
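The computation above can be replicated with a short script (a sketch using ordinary least squares via numpy; the two sub-samples are the ranked data from the table, with the 4 central observations omitted):

```python
import numpy as np

# Ranked consumption data: the 13 lowest-X and 13 highest-X observations.
y_low = np.array([55, 70, 75, 65, 74, 80, 84, 79, 90, 98, 95, 108, 113], dtype=float)
x_low = np.array([80, 85, 90, 100, 105, 110, 115, 120, 125, 130, 140, 145, 150], dtype=float)
y_high = np.array([135, 120, 140, 144, 152, 140, 137, 145, 175, 189, 180, 178, 191], dtype=float)
x_high = np.array([190, 200, 205, 210, 220, 225, 230, 240, 245, 250, 260, 265, 270], dtype=float)

def rss(y, x):
    """Residual sum of squares from an OLS fit of y on a constant and x."""
    A = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    e = y - A @ beta
    return float(e @ e)

# Goldfeld-Quandt statistic: each sub-sample has 13 - 2 = 11 degrees of freedom.
F = (rss(y_high, x_high) / 11) / (rss(y_low, x_low) / 11)
print(round(F, 2))   # about 4.07, above the 5% table value F(11,11) = 2.82
```

Since F exceeds 2.82, the script reaches the same conclusion as the text: reject homoscedasticity.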
3) The Glejser test
a) Find the values of the residuals ei, which are estimates of the U's.
b) Regress |ei| on each explanatory variable Xi independently, using different powers, e.g.
|ei| = b0 + b1Xi + vi
|ei| = b0 + b1√Xi + vi -------------------------5.10
|ei| = b0 + b1(1/Xi) + vi
etc.
Here it is assumed that vi satisfies all the assumptions of OLS. We choose the form of regression which gives the best fit in the light of
i. R² and the standard errors of the coefficients b̂0 and b̂1.
ii. Formulate the hypotheses:
The null hypothesis H0: b1 = 0, i.e. the Ui's are homoscedastic.
The alternative hypothesis H1: b1 ≠ 0, i.e. the Ui's are heteroscedastic.
Using the standard error test we can accept or reject the null hypothesis as follows:
If b̂1 > 2·S.E.(b̂1) we reject the null hypothesis (the existence of homoscedasticity is rejected) and accept the alternative, which says that there is heteroscedasticity. If, in addition, b̂0 is significantly different from zero, this kind of heteroscedasticity is called mixed heteroscedasticity.
If b̂1 < 2·S.E.(b̂1) we accept the null hypothesis for b1 and conclude that the Ui's are homoscedastic.
d) Correcting heteroscedasticity. The transformation of the original model consists in dividing the original relationship through by the square root of the term which is responsible for the heteroscedasticity, i.e. by the square root of the variable (or function) to which Var(Ui) is proportional.
Case (a): Suppose the original model is Yi = β0 + β1Xi + Ui and the form of heteroscedasticity is
Var(Ui) = E(Ui²) = K²Xi² ------------------------------------5.14
where K² is a finite constant to be estimated from the model. It says that the variance of the random term increases proportionately with Xi².
Solve for K²:
K² = E(Ui²)/Xi²
This suggests that the appropriate transformation of the original model is the division of the original relationship by √(Xi²) = Xi:
Yi/Xi = β0(1/Xi) + β1 + Ui/Xi ------------------------------------5.16
The reason for this is that, dividing the original model by Xi, the new transformed random term Ui/Xi is homoscedastic (constant variance):
Var(Ui/Xi) = E(Ui/Xi)² = (1/Xi²)·E(Ui²)
Substituting the assumed form of heteroscedasticity, E(Ui²) = K²Xi² from (5.14), you will get
Var(Ui/Xi) = (1/Xi²)·K²Xi² = K²
K² is a constant number, which proves that the new random term in the model has a finite constant variance K². Then we can apply OLS to the transformed model (5.16).
In this transformed model the positions of the coefficients have changed: β0, the constant term of the original model, is now the coefficient of 1/Xi, while β1, which was the coefficient of Xi in the original model, appears as the constant term of the transformed model. Then, if you want to get back the original model, multiply the transformed model through by Xi.
Case (b): Suppose the form of heteroscedasticity is
Var(Ui) = E(Ui²) = K²Xi -------------------------------5.22
It means that as Xi increases or decreases, the variance of the random term Ui increases or decreases in proportion, with factor of proportionality K². From equation (5.22) we can get the value of K²:
K² = E(Ui²)/Xi
This means the original model will be divided by √Xi:
Yi/√Xi = β0(1/√Xi) + β1√Xi + Ui/√Xi -------------------------------5.25
With the form of heteroscedasticity assumed in (5.22) substituted, the variance of the transformed random term is
Var(Ui/√Xi) = (1/Xi)·E(Ui²) = (1/Xi)·K²Xi = K²
which shows that the variance of the random term after the transformation of the original data is a constant number equal to K². Therefore we can apply OLS to equation (5.25). In the transformed model we do not have an intercept term, so the regression passes through the origin when we estimate it. In this model, if you want to get back the original model, multiply the transformed model through by √Xi.
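Case (b) can be checked numerically: dividing a model whose disturbance has Var(Ui) = K²Xi through by √Xi leaves a disturbance with constant variance K² (a simulation sketch; the sample size, the range of X and the value K = 3 are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Case (b): Var(U_i) = K^2 * X_i.  Dividing through by sqrt(X_i) should
# leave a transformed disturbance U_i/sqrt(X_i) with constant variance K^2.
n, K = 100_000, 3.0
X = rng.uniform(1.0, 100.0, n)
U = rng.normal(0.0, K * np.sqrt(X))     # sd = K*sqrt(X)  =>  Var = K^2 * X
U_star = U / np.sqrt(X)                 # transformed disturbance

# The variance of U grows with X; the variance of U* does not.
lowX, highX = X < 50, X >= 50
print(U[lowX].var(), U[highX].var())              # very different
print(U_star[lowX].var(), U_star[highX].var())    # both close to K^2 = 9
```

Running OLS on the transformed variables is equivalent to weighted least squares with weights 1/√Xi, which is why the transformation restores efficiency.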
Case (c): Suppose the form of heteroscedasticity is
Var(Ui) = E(Ui²) = K²[E(Yi)]² -----------------------------------------------5.29
In this equation we assume that the variance of the disturbance term is proportional to the square of the expected value of the dependent variable Y, E(Yi) = β0 + β1Xi. The appropriate transformation divides the original model by E(Yi):
Yi/E(Yi) = β0(1/E(Yi)) + β1(Xi/E(Yi)) + Ui/E(Yi) ----------5.30
Var(Ui/E(Yi)) = E[Ui/E(Yi)]² = (1/[E(Yi)]²)·E(Ui²) -----------------------------------------5.31
Substituting the assumed form (5.29):
Var(Ui/E(Yi)) = (1/[E(Yi)]²)·K²[E(Yi)]² = K²
so the variance of the random term after the transformation is a constant number equal to K². Since E(Yi) is unknown, in practice we divide by the fitted values Ŷi = β̂0 + β̂1Xi obtained from a first OLS regression on the original data.
Table 5.1
No. | Saving (S) | Disposable income (Yd) | s = S − S̄ | yd = Yd − Ȳd | s² | yd² | yd·s | Ŝ | e = S − Ŝ | e²
1 -
264 8777 -986.32258 13646.03226 972832.2318 186214196.4 13459389.75 95.06082 168.9392 28540.45
2 -
105 9210 -1145.32258 13213.03226 1311763.812 174584221.5 15133184.2 131.7186 -26.7186 713.8836
3 -
90 9954 -1160.32258 12469.03226 1346348.49 155476765.5 14468099.68 194.7056 -104.706 10963.27
4 -
131 10508 -1119.32258 11915.03226 1252883.038 141967993.8 13336764.65 241.6073 -110.607 12233.97
5 -
122 10979 -1128.32258 11444.03226 1273111.845 130965874.4 12912560.01 281.4821 -159.482 25434.55
6 -
107 11912 -1143.32258 10511.03226 1307186.522 110481799.2 12017500.52 360.4699 -253.47 64247
7
406 12747 -844.32258 -9676.03226 712880.6191 93625600.3 8169692.522 431.161 -25.161 633.0769
8
503 13499 -747.32258 -8924.03226 558491.0386 79638351.78 6669130.813 494.8253 8.17466 66.82507
9
431 14269 -819.32258 -8154.03226 671289.4901 66488242.1 6680782.749 560.0135 -129.014 16644.49
10
588 15522 -662.32258 -6901.03226 438671.2 47624246.25 4570709.491 666.0925 -78.0925 6098.442
11
898 16730 -352.32258 -5693.03226 124131.2004 32410616.31 2005783.814 768.3618 129.6382 16806.06
12
950 17663 -300.32258 -4760.03226 90193.65206 22657907.12 1429545.169 847.3496 102.6504 10537.11
13
779 18575 -471.32258 -3848.03226 222144.9744 14807352.27 1813664.493 924.5595 -145.56 21187.57
14
819 19635 -431.32258 -2788.03226 186039.168 7773123.883 1202541.268 1014.299 -195.299 38141.74
15
1222 21163 -28.32258 -1260.03226 802.1685379 1587681.296 35687.36449 1143.66 78.34042 6137.221
16
1702 22880 451.67742 456.96774 204012.4917 208819.5154 206402.0098 1289.021 412.9792 170551.8
17
1578 24127 327.67742 1703.96774 107372.4916 2903506.059 558351.7528 1394.592 183.4082 33638.56
18
1654 25604 403.67742 3180.96774 162955.4594 10118555.76 1284084.85 1519.635 134.3654 18054.05
19
1400 26500 149.67742 4076.96774 22403.33006 16621665.95 610230.0127 1595.49 -195.49 38216.34
20
1829 27670 578.67742 5246.96774 334867.5564 27530670.46 3036301.755 1694.542 134.4578 18078.9
21
2200 28300 949.67742 5876.96774 901887.2021 34538749.82 5581223.561 1747.878 452.122 204414.3
22
2017 27430 766.67742 5006.96774 587794.2663 25069725.95 3838729.109 1674.224 342.7762 117495.5
23
2105 29560 854.67742 7136.96774 730473.4923 50936308.52 6099805.175 1854.55 250.4504 62725.4
24
1600 28150 349.67742 5726.96774 122274.2981 32798159.5 2002591.304 1735.179 -135.179 18273.36
25
2250 32100 999.67742 9676.96774 999354.9441 93643704.64 9673846.144 2069.586 180.414 32549.21
26
2420 32500 1169.67742 10076.96774 1368145.267 101545278.8 11786801.63 2103.45 316.55 100203.9
27
2570 35250 1319.67742 12826.96774 1741548.493 164531101.4 16927459.69 2336.265 233.735 54632.05
28
1720 33500 469.67742 11076.96774 220596.8789 122699214.3 5202601.63 2188.11 -468.11 219127
29
1900 36000 649.67742 13576.96774 422080.7501 184334053 8820649.373 2399.76 -499.76 249760.1
30
2100 36200 849.67742 13776.96774 721951.7181 189804840.1 11705978.4 2416.692 -316.692 100293.8
31
2300 38200 1049.67742 15776.96774 1101822.686 248912711.1 16560726.79 2586.012 -286.012 81802.86
Sum  | 38760 | 695114 | | | 20218310.77 | 2572501037 | 217800819.7 | | −0.35124 | 1778203
Mean | 1250.32258 | 22423.03226
Table 5.2: Lower values of the disposable income
No. | S | Yd | s = S − S̄ | yd = Yd − Ȳd | s² | yd² | yd·s | Ŝ | e = S − Ŝ | e²
1  | 264 | 8777  | −67.363  | −3414.54 | 4537.7738   | 11659083.41 | 230013.658  | 32.6414  | 231.3586 | 53526.8
2  | 105 | 9210  | −226.363 | −2981.54 | 51240.2078  | 8889580.772 | 674910.339  | 70.832   | 34.168   | 1167.452
3  | 90  | 9954  | −241.363 | −2237.54 | 58256.0978  | 5006585.252 | 540059.367  | 136.4528 | −46.4528 | 2157.863
4  | 131 | 10508 | −200.363 | −1683.54 | 40145.3318  | 2834306.932 | 337319.125  | 185.3156 | −54.3156 | 2950.184
5  | 122 | 10979 | −209.363 | −1212.54 | 43832.8658  | 1470253.252 | 253861.012  | 226.8578 | −104.858 | 10995.16
6  | 107 | 11912 | −224.363 | −279.54  | 50338.7558  | 78142.6116  | 62718.433   | 309.1484 | −202.148 | 40863.98
7  | 406 | 12747 | 74.637   | 555.46   | 5570.6818   | 308535.8116 | 41457.868   | 382.7954 | 23.2046  | 538.4535
8  | 503 | 13499 | 171.637  | 1307.46  | 29459.2598  | 1709451.652 | 224408.512  | 449.1218 | 53.8782  | 2902.86
9  | 431 | 14269 | 99.637   | 2077.46  | 9927.5318   | 4315840.052 | 206991.882  | 517.0358 | −86.0358 | 7402.159
10 | 588 | 15522 | 256.637  | 3330.46  | 65862.5498  | 11091963.81 | 854719.263  | 627.5504 | −39.5504 | 1564.234
11 | 898 | 16730 | 566.637  | 4538.46  | 321077.4898 | 20597619.17 | 2571659.359 | 734.096  | 163.904  | 26864.52
Sum  | 3645 | 134107 | 0.007 | 0.06 | 680248.5455 | 67961362.73 | 5998118.818 | | −26.8474 | 150933.7
Mean | 331.36364 | 12191.5455
Table 5.3: Large values of the disposable income
No. | S | Yd | s = S − S̄ | yd = Yd − Ȳd | s² | yd² | yd·s | Ŝ | e = S − Ŝ | e²
1  | 1829 | 27670 | −261.363 | −4823.63 | 68310.6178  | 23267406.38 | 1260718.408  | 1936.483 | −107.483 | 11552.6
2  | 1600 | 28150 | −490.363 | −4343.63 | 240455.8718 | 18867121.58 | 2129955.438  | 1951.795 | −351.795 | 123759.7
3  | 2200 | 28300 | 109.637  | −4193.63 | 12020.2718  | 17586532.58 | −459777.012  | 1956.58  | 243.42   | 59253.3
4  | 2105 | 29560 | 14.637   | −2933.63 | 214.2418    | 8606184.977 | −42939.542   | 1996.774 | 108.226  | 11712.87
5  | 2250 | 32100 | 159.637  | −393.63  | 25483.9718  | 154944.5769 | −62837.912   | 2077.8   | 172.2    | 29652.84
6  | 2420 | 32500 | 329.637  | 6.37     | 108660.5518 | 40.5769     | 2099.788     | 2090.56  | 329.44   | 108530.7
7  | 1720 | 33500 | −370.363 | 1006.37  | 137168.7518 | 1012780.577 | −372722.212  | 2122.46  | −402.46  | 161974.1
8  | 2570 | 35250 | 479.637  | 2756.37  | 230051.6518 | 7597575.577 | 1322057.038  | 2178.285 | 391.715  | 153440.6
9  | 1900 | 36000 | −190.363 | 3506.37  | 36238.0718  | 12294630.58 | −667483.112  | 2202.21  | −302.21  | 91330.88
10 | 2100 | 36200 | 9.637    | 3706.37  | 92.8718     | 13737178.58 | 35718.288    | 2208.59  | −108.59  | 11791.79
11 | 2300 | 38200 | 209.637  | 5706.37  | 43947.6718  | 32562658.58 | 1196266.288  | 2272.39  | 27.61    | 762.3121
Sum  | 22994 | 357430 | 0.007 | 0.07 | 902644.5455 | 135687054.546 | 4341055.455 | | 0.073 | 763761.7
Mean | 2090.3636 | 32493.6364
Table 5.4: Table 5.1 ordered in ascending order of disposable income (Yd)
No. | Saving (S) | Disposable income (Yd) | s = S − S̄ | yd = Yd − Ȳd | s² | yd² | yd·s | Ŝ | e = S − Ŝ | e²
1 -
264 8777 -986.32258 13646.03226 972832.2318 186214196.4 13459389.75 95.06082 168.9392 28540.45
2 - -
105 9210 1145.32258 13213.03226 1311763.812 174584221.5 15133184.2 131.7186 -26.7186 713.8836
3 - -
90 9954 1160.32258 12469.03226 1346348.49 155476765.5 14468099.68 194.7056 -104.706 10963.27
4 - -
131 10508 1119.32258 11915.03226 1252883.038 141967993.8 13336764.65 241.6073 -110.607 12233.97
5 - -
122 10979 1128.32258 11444.03226 1273111.845 130965874.4 12912560.01 281.4821 -159.482 25434.55
6 - -
107 11912 1143.32258 10511.03226 1307186.522 110481799.2 12017500.52 360.4699 -253.47 64247
7
406 12747 -844.32258 -9676.03226 712880.6191 93625600.3 8169692.522 431.161 -25.161 633.0769
8
503 13499 -747.32258 -8924.03226 558491.0386 79638351.78 6669130.813 494.8253 8.17466 66.82507
9
431 14269 -819.32258 -8154.03226 671289.4901 66488242.1 6680782.749 560.0135 -129.014 16644.49
10
588 15522 -662.32258 -6901.03226 438671.2 47624246.25 4570709.491 666.0925 -78.0925 6098.442
11
898 16730 -352.32258 -5693.03226 124131.2004 32410616.31 2005783.814 768.3618 129.6382 16806.06
12
950 17663 -300.32258 -4760.03226 90193.65206 22657907.12 1429545.169 847.3496 102.6504 10537.11
13
779 18575 -471.32258 -3848.03226 222144.9744 14807352.27 1813664.493 924.5595 -145.56 21187.57
14
819 19635 -431.32258 -2788.03226 186039.168 7773123.883 1202541.268 1014.299 -195.299 38141.74
15
1222 21163 -28.32258 -1260.03226 802.1685379 1587681.296 35687.36449 1143.66 78.34042 6137.221
16
1702 22880 451.67742 456.96774 204012.4917 208819.5154 206402.0098 1289.021 412.9792 170551.8
17
1578 24127 327.67742 1703.96774 107372.4916 2903506.059 558351.7528 1394.592 183.4082 33638.56
18 1654 25604 403.67742 3180.96774 162955.4594 10118555.76 1284084.85 1519.635 134.3654 18054.05
19
1400 26500 149.67742 4076.96774 22403.33006 16621665.95 610230.0127 1595.49 -195.49 38216.34
20
1829 27670 578.67742 5246.96774 334867.5564 27530670.46 3036301.755 1694.542 134.4578 18078.9
21
2200 28300 949.67742 5876.96774 901887.2021 34538749.82 5581223.561 1747.878 452.122 204414.3
22
2017 27430 766.67742 5006.96774 587794.2663 25069725.95 3838729.109 1674.224 342.7762 117495.5
23
2105 29560 854.67742 7136.96774 730473.4923 50936308.52 6099805.175 1854.55 250.4504 62725.4
24
1600 28150 349.67742 5726.96774 122274.2981 32798159.5 2002591.304 1735.179 -135.179 18273.36
25
2250 32100 999.67742 9676.96774 999354.9441 93643704.64 9673846.144 2069.586 180.414 32549.21
26
2420 32500 1169.67742 10076.96774 1368145.267 101545278.8 11786801.63 2103.45 316.55 100203.9
27
2570 35250 1319.67742 12826.96774 1741548.493 164531101.4 16927459.69 2336.265 233.735 54632.05
28
1720 33500 469.67742 11076.96774 220596.8789 122699214.3 5202601.63 2188.11 -468.11 219127
29
1900 36000 649.67742 13576.96774 422080.7501 184334053 8820649.373 2399.76 -499.76 249760.1
30
2100 36200 849.67742 13776.96774 721951.7181 189804840.1 11705978.4 2416.692 -316.692 100293.8
31
2300 38200 1049.67742 15776.96774 1101822.686 248912711.1 16560726.79 2586.012 -286.012 81802.86
From table 5.1 we can find the following values:
N = 31, S̄ = 1250.32, Ȳd = 22423.03, Σs² = 20218310.77, Σyd² = 2572501037, Σyd·s = 217800819.7
β̂1 = Σyd·s/Σyd² = 217800819.7/2572501037 = 0.0847
β̂0 = S̄ − β̂1Ȳd = 1250.32 − 0.0847(22423.03) = −648.2
so the estimated saving function for the whole sample is Ŝ = −648.2 + 0.0847Yd, with Σe² = 1778203.
Now let us test whether there is heteroscedasticity in our model, using the Goldfeld and Quandt test with tables 5.2 to 5.4.
Goldfeld-Quandt test
1st: Order the observations in ascending order of disposable income (Yd), as given in table 5.4.
2nd: Omit the 9 central observations (c = 9, roughly a quarter to a third of the total sample). Then you will have two sets of (31 − 9)/2 = 11 observations each: one containing the small values of Yd (table 5.2) and one containing the large values of Yd (table 5.3).
For the small values of Yd (from table 5.2) we will have
β̂1 = Σyd·s/Σyd² = 5998118.8/67961362.7 ≈ 0.0882 and β̂0 = S̄ − β̂1Ȳd ≈ −741.5
so the estimated equation for the small values of Yd can be written as
Ŝ = −741.5 + 0.0882Yd, with Σe1² = 150933.7
Again, the estimated line for the large values of Yd (see table 5.3) is
Ŝ = 1053.47 + 0.0319Yd
s.e. (710.3) (0.025)
t (1.48) (1.276)
r² = 0.154, Σe2² = 763761.7
Now, to test whether there is heteroscedasticity, we undertake the Goldfeld-Quandt F-test:
F* = (Σe2²/9)/(Σe1²/9) = (763761.7/9)/(150933.7/9) = 5.06
The table value has V1 = (n − c)/2 − k degrees of freedom in the numerator, which equals the denominator degrees of freedom V2, where n = 31 is the number of observations, c = 9 the number of omitted central observations, and k = 2 the number of estimated parameters. Thus V1 = V2 = (31 − 9)/2 − 2 = 9, and F9,9 from the table at the 5% level is 3.18. Compare the calculated F* value, 5.06, with the table value, 3.18: since the calculated value is greater than the table value, we reject the null hypothesis (homoscedasticity) and accept the alternative: in this model there is heteroscedasticity, i.e. the variance of Ui is not constant.
Order the values of Yd in ascending order and, following this order, rank the values of ei (from table 5.1). Table 5.5:
No. | disposable income (Yd) | rank of Yd | rank of ei | D = rank(ei) − rank(Yd) | D²
1 8777 1 23 22 484
2
9210 2 15 13 169
3
9954 3 13 10 100
4
10508 4 12 8 64
5
10979 5 8 3 9
6
11912 6 5 -1 1
7
12747 7 16 9 81
8
13499 8 17 9 81
9
14269 9 11 2 4
10
15522 10 14 4 16
11
16730 11 20 9 81
12
17663 12 19 7 49
13
18575 13 9 -4 16
14
19635 14 7 -7 49
15
21163 15 18 3 9
16
22880 16 30 14 196
17
24127 17 25 8 64
18
25604 18 21 3 9
19
26500 19 6 -13 169
20
27670 21 22 1 1
21
28300 23 31 8 64
22
27430 20 29 9 81
23
29560 24 27 3 9
24
28150 22 10 -12 144
25
32100 25 24 -1 1
26
32500 26 28 2 4
27
35250 28 26 -2 4
28
33500 27 2 -25 625
29
36000 29 1 -28 784
30
36200 30 3 -27 729
31
38200 31 4 -27 729
Sum 4826
r = 1 − 6ΣD²/(n(n² − 1)) = 1 − 6(4826)/(31(31² − 1)) = 1 − 28956/29760 ≈ 0.03
This Spearman rank correlation between the ranks of the residuals and of disposable income is low, so this particular test gives little evidence of heteroscedasticity; the Goldfeld-Quandt test above, however, has already rejected homoscedasticity.
To remove the heteroscedasticity, transform the original model by dividing it through by Yd. Then our saving equation will be transformed as follows:
S/Yd = β0(1/Yd) + β1 + U/Yd
No. | Saving (S) | Disposable income (Yd) | S/Yd (×10³) | 1/Yd (×10⁴) | squared deviation of S/Yd | squared deviation of 1/Yd | product of deviations | e | e²
1
264 8777 30.0786 1.1393 349.0196 0.3532 -11.1035 -722.4 521858.3
2
105 9210 11.4007 1.0858 1395.7717 0.2924 -20.2034 -722.4 521865.2
3
90 9954 9.0416 1.0046 1577.6060 0.2113 -18.2557 -722.4 521875.8
4
131 10508 12.4667 0.9517 1317.2536 0.1654 -14.7592 -722.4 521882.6
5
122 10979 11.1121 0.9108 1417.4138 0.1338 -13.7730 -722.4 521887.9
6
107 11912 8.9825 0.8395 1582.3005 0.0867 -11.7142 -722.4 521897.2
7
406 12747 31.8506 0.7845 285.9497 0.0574 -4.0499 -722.4 521904.3
8
503 13499 37.2620 0.7408 132.2192 0.0383 -2.2514 -722.4 521910.0
9
431 14269 30.2053 0.7008 344.3006 0.0243 -2.8913 -722.4 521915.2
10
588 15522 37.8817 0.6442 118.3519 0.0098 -1.0797 -722.4 521922.6
11
898 16730 53.6760 0.5977 24.1607 0.0028 0.2592 -722.4 521928.6
12
950 17663 53.7847 0.5662 25.2413 0.0004 0.1063 -722.4 521932.7
13
779 18575 41.9381 0.5384 46.5478 0.0000 0.0453 -722.5 521936.3
14
819 19635 41.7112 0.5093 49.6947 0.0013 0.2517 -722.5 521940.1
15
1222 21163 57.7423 0.4725 80.6692 0.0053 -0.6510 -722.5 521944.9
16
1702 22880 74.3881 0.4371 656.7653 0.0117 -2.7661 -722.5 521949.5
17
1578 24127 65.4039 0.4145 276.9969 0.0170 -2.1724 -722.5 521952.4
18
1654 25604 64.5993 0.3906 250.8613 0.0239 -2.4461 -722.5 521955.5
19 1400 26500 52.8302 0.3774 16.5609 0.0281 -0.6822 -722.5 521957.2
20
1829 27670 66.1005 0.3614 300.6683 0.0337 -3.1835 -722.5 521959.3
21
2200 28300 77.7385 0.3534 839.7150 0.0367 -5.5534 -722.5 521960.3
22
2017 27430 73.5326 0.3646 613.6494 0.0326 -4.4697 -722.5 521958.9
23
2105 29560 71.2111 0.3383 504.0212 0.0427 -4.6406 -722.5 521962.3
24
1600 28150 56.8384 0.3552 65.2490 0.0360 -1.5328 -722.5 521960.1
25
2250 32100 70.0935 0.3115 455.0874 0.0545 -4.9806 -722.5 521965.8
26
2420 32500 74.4615 0.3077 660.5341 0.0563 -6.0990 -722.5 521966.3
27
2570 35250 72.9078 0.2837 583.0835 0.0683 -6.3099 -722.5 521969.4
28
1720 33500 51.3433 0.2985 6.6698 0.0608 -0.6366 -722.5 521967.5
29
1900 36000 52.7778 0.2778 16.1371 0.0714 -1.0735 -722.5 521970.2
30
2100 36200 58.0110 0.2762 85.5693 0.0722 -2.4861 -722.5 521970.4
31
2300 38200 60.2094 0.2618 131.0737 0.0802 -3.2425 -722.5 521972.2
Sum -
16.8957 2.1086 152.3450 16179999.0
R² = 0.77
To return to the original form, the transformed equation must be multiplied through by Yd. The resulting equation is free of heteroscedasticity.
5.2 Autocorrelation
One of the assumptions of OLS is that the successive values of the random term Ui are temporally independent, i.e. the value of Ui at time t is not correlated with its value in period t − 1. This is expressed by saying that the Ui's are not correlated with each other, that there is no autocorrelation, or that Ut and Ut−1 are not serially dependent (they are serially independent).
This assumption of no autocorrelation (serial independence) states that the covariance between Ut and Ut−1 is equal to zero:
Cov(Ut, Ut−1) = E{[Ut − E(Ut)][Ut−1 − E(Ut−1)]}
We know from our assumptions that E(Ut) = 0 and E(Ut−1) = 0, so we are left with E(UtUt−1) = 0.
But if this assumption is violated, i.e. if the value of Ut (the random term at time t) is correlated with (depends upon) its own previous value Ut−1, we say that there is autocorrelation, or that the random term is serially dependent. Autocorrelation is a special type of correlation: it refers only to the successive values of the same variable, not to the successive values of different variables.
i.e. et = Yt − Ŷt. After the values of the et's are found we can set out the values of et−1 as follows:

et      et−1
e2      e1
e3      e2
e4      e3
.       .
.       .
en      en−1
In the first case, when the time period is 2, et will be e2 and et−1 will be e2−1 = e1. By doing so you can get the pairs (e2, e1), (e3, e2), (e4, e3), etc. Now plot these corresponding values on a two-dimensional diagram. If, on plotting, most of the points (et, et−1) fall in the 1st and 3rd quadrants (as shown in fig. a), we can say that there is positive autocorrelation, i.e. the products of et and et−1 are positive. If most of the points (et, et−1) fall in the 2nd and 4th quadrants, there will be negative autocorrelation, because the products etet−1 are negative (as shown in fig. b).
[Figure: scatter of et against et−1. Fig. (a): points concentrated in the 1st and 3rd quadrants — positive autocorrelation. Fig. (b): points concentrated in the 2nd and 4th quadrants — negative autocorrelation.]
Autocorrelation may be positive or negative, but in most practical cases autocorrelation is positive. The main reason for this is that economic variables tend to move in the same direction. For example, in periods of boom, employment, investment, output, growth of GNP, consumption, etc. all move upwards, and the random term Ui follows the same pattern; again, in periods of recession all the economic variables move downwards and the random term follows the same pattern.
Another method commonly used in applied econometric research for the detection of autocorrelation is to plot the residuals against time (t). We then have two alternatives:
a) If the successive values of the residuals change their sign rapidly, we can say there is negative autocorrelation.
b) If the successive values of the residuals do not change sign frequently, i.e. several positive values are followed by several negative values, we can conclude that there is positive autocorrelation.
This can be seen in the following diagram.
[Figure: residuals et plotted against time t. Left panel: the et alternate in sign from period to period — negative autocorrelation. Right panel: runs of several positive et are followed by runs of several negative et — positive autocorrelation.]
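This visual check can be mechanized by counting runs of same-signed residuals (a sketch; the two residual series below are made-up illustrations, not data from the module):

```python
import numpy as np

def count_runs(e):
    """Number of runs, i.e. maximal blocks of consecutive same-signed residuals."""
    signs = np.sign(e)
    signs = signs[signs != 0]                     # drop exact zeros
    return int(1 + np.sum(signs[1:] != signs[:-1]))

# Few runs (long same-sign stretches) suggest positive autocorrelation;
# many runs (rapid sign changes) suggest negative autocorrelation.
smooth = np.array([-1.2, -0.8, -0.5, 0.3, 0.9, 1.1, 0.7, -0.4, -0.9])   # 3 runs
jagged = np.array([0.5, -0.4, 0.6, -0.7, 0.3, -0.2, 0.4, -0.5, 0.6])    # 9 runs
print(count_runs(smooth), count_runs(jagged))
```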
1) The runs test. A run is an uninterrupted sequence of residuals with the same sign. If the e's change sign frequently there are many runs, which indicates negative autocorrelation. Similarly, if there are too few runs, this suggests positive autocorrelation.
Now let us define the variables:
n = total number of sample observations, equal to n1 + n2
where n1 is the number of +ve residuals (e's)
and n2 is the number of −ve residuals (e's),
K = number of runs.
Assuming that n1 > 10 and n2 > 10, the number of runs K is distributed approximately normally with
Mean: E(K) = 2n1n2/n + 1
Variance: Var(K) = 2n1n2(2n1n2 − n)/(n²(n − 1))
Now we set up the hypothesis test as follows:
The null hypothesis H0: Cov(et, et−1) = 0, i.e. no autocorrelation, against
the alternative H1: Cov(et, et−1) ≠ 0, i.e. there is autocorrelation,
and establish the confidence interval with 95% confidence:
E(K) ± 1.96·√Var(K)
If the estimated K lies within this limit, accept the null hypothesis that there is no autocorrelation and reject the existence of autocorrelation. Again, if the estimated K lies outside this limit, reject the null hypothesis and accept the alternative that there is autocorrelation.
In our example:
n1 = 14, the total number of +ve residuals (e's)
n2 = 18, the total number of −ve residuals (e's), so n = 32.
Using our formulas:
Mean value of K: E(K) = 2(14)(18)/32 + 1 = 16.75
Var(K) = 2(14)(18)[2(14)(18) − 32]/(32²(32 − 1)) = 504(472)/31744 = 7.49
The 95% confidence interval is calculated as follows. For 95% confidence the level of significance is 1 − 0.95 = 0.05, and since we have a two-tailed test, α/2 = 0.025; from the normal table the value 0.025 is found at the intersection of 1.9 in the first column and 0.06 in the first row. Thus the normal-distribution value for a two-tailed test at the 5% significance level is 1.96. Hence the 95% confidence interval for K (the number of runs) is
16.75 ± 1.96√7.49 = 16.75 ± 5.37, i.e. (11.38, 22.12)
But the calculated number of runs from our example is K = 5: a 1st run of −ve e's, a 2nd run of +ve e's, a 3rd run of −ve e's, a 4th run of +ve e's and a 5th run of −ve e's. Since the calculated value of K lies outside this interval, we reject the null hypothesis that there is no autocorrelation between the e's and accept the alternative that the e's are correlated with each other.
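The interval in this example can be reproduced with the formulas above (n1 = 14, n2 = 18 and K = 5 are taken from the example; the normal approximation assumes n1, n2 > 10):

```python
import math

def runs_test_interval(n1: int, n2: int, z: float = 1.96):
    """95% acceptance interval for the number of runs K under H0 (no autocorrelation)."""
    n = n1 + n2
    mean_k = 2 * n1 * n2 / n + 1
    var_k = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    half = z * math.sqrt(var_k)
    return mean_k - half, mean_k + half

# Example from the text: 14 positive and 18 negative residuals, K = 5 runs.
lo, hi = runs_test_interval(14, 18)
print(round(lo, 2), round(hi, 2))   # about 11.38 and 22.12; K = 5 lies outside,
                                    # so H0 of no autocorrelation is rejected
```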
2) The Von Neumann ratio test for the existence of autocorrelation. This is the ratio δ²/s² of the variance of the first differences of a variable X to the variance of X. The test is applicable to directly observed series and to variables which are random, i.e. variables whose successive values are not autocorrelated.
In the case of the random term Ui the values are not directly observable but are estimated by the OLS residuals (e's). The ratio is then
δ²/s² = [Σ(et − et−1)²/(n − 1)] / [Σ(et − ē)²/n]
The method is applicable only for large samples, i.e. n > 30. For very large samples the ratio δ²/s² is approximately normally distributed with
Mean: E(δ²/s²) = 2n/(n − 1)
Variance: Var(δ²/s²) = 4n²(n − 2)/[(n + 1)(n − 1)³]
To undertake the test, first compute the Von Neumann ratio; second, using the formulas for the mean and variance, determine the confidence interval. Then:
If the calculated value of the Von Neumann ratio lies in the confidence interval, accept the null hypothesis that there is no autocorrelation and reject the alternative.
If the calculated value of the Von Neumann ratio lies outside the confidence interval, reject the null hypothesis and accept the alternative that there is autocorrelation in the model.
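The ratio can be computed directly from a residual series (a sketch; the two simulated series below are illustrative, not data from the module):

```python
import numpy as np

def von_neumann_ratio(e):
    """delta^2/s^2 = [sum (e_t - e_{t-1})^2 / (n-1)] / [sum (e_t - e_bar)^2 / n]."""
    e = np.asarray(e, dtype=float)
    n = len(e)
    num = np.sum(np.diff(e) ** 2) / (n - 1)
    den = np.sum((e - e.mean()) ** 2) / n
    return num / den

# For a serially independent series the ratio is near its mean 2n/(n-1) ~ 2;
# positive autocorrelation pushes it toward 0, negative toward 4.
rng = np.random.default_rng(2)
r_random = von_neumann_ratio(rng.normal(size=2000))            # near 2
r_walk = von_neumann_ratio(np.cumsum(rng.normal(size=2000)))   # far below 2
print(r_random, r_walk)
```

The random walk is strongly positively autocorrelated, which is why its ratio collapses toward zero.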
3) The Durbin-Watson test
The most celebrated test for the detection of serial correlation (autocorrelation) was developed by the two statisticians Durbin and Watson. It is popularly known as the Durbin-Watson d-statistic. This test:
a) is applicable for small samples;
b) is based upon the sum of the squared differences of successive values of the estimated disturbance term;
c) is appropriate only for the first-order autoregressive scheme, i.e. only when each value is correlated with the immediately preceding one: et with et−1, but not et with et−2 and so on.
The d-statistic is obtained using the following formula:
d = Σ(t = 2 to n)(et − et−1)² / Σ(t = 1 to n)et²
i. If successive values of et and et−1 are close to each other (mostly of the same sign), the differences in the numerator will be small and d will be close to zero: an unusually small value of d indicates positive autocorrelation.
ii. If there are large differences between et and et−1, i.e. when the successive values of et and et−1 have different signs, the numerator will be large. The signal for this type of autocorrelation is an unusually large value of d, which indicates negative autocorrelation.
The d-statistic test may be outlined as follows.
Step 1: Set up the null hypothesis H0: ρ = 0, meaning that there is no autocorrelation (the e's are serially independent), against the alternative H1: ρ ≠ 0, meaning that the e's are serially correlated (dependent) with each other, i.e. there is autocorrelation between the successive values et and et−1. Then we compute the d-statistic to test the null hypothesis. Expanding the formula,
d ≈ 2(1 − Σetet−1/Σet²) = 2(1 − ρ̂), where −1 ≤ ρ̂ ≤ 1.
From the above we can conclude that the value of d lies between 0 and 4 (0 < d < 4). The next step is to compare the computed value d* with the table values of d. The problem associated with this test is that the exact distribution of d is not known. Because of this, Durbin and Watson established a range of values within which we can neither accept nor reject the null hypothesis. These are the upper (dU) and lower (dL) limits of the significance points of d, which are appropriate for testing no autocorrelation against the alternative hypothesis of the existence of autocorrelation. For the two-tailed Durbin-Watson test we have a set of five regions for the value of d, shown in the diagram below.
(Diagram: the five regions of the d statistic on the interval from 0 to 4.)
0 ---- dL ---- dU ---- (4-dU) ---- (4-dL) ---- 4
Critical region, reject H0: positive autocorrelation (0 to dL); inconclusive region (dL to dU); accept H0: no autocorrelation (dU to 4-dU); inconclusive region (4-dU to 4-dL); critical region, reject H0 and accept that there is negative autocorrelation (4-dL to 4).
i) If the calculated d is less than dL (i.e. d < dL) we reject the null hypothesis and accept the alternative that there is autocorrelation, and the type of autocorrelation is positive.
ii) If the calculated d is greater than (4-dL), i.e. d > (4-dL), we reject the null hypothesis of no autocorrelation and accept the alternative that there is autocorrelation. The type of autocorrelation is negative.
iii) If the calculated d lies between dU and (4-dU), i.e. dU < d < (4-dU), we accept the null hypothesis of no autocorrelation and reject the alternative.
iv) If the calculated d falls in either dL < d < dU or (4-dU) < d < (4-dL), the test is inconclusive.
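The decision rules above can be sketched in a few lines of code; this is a minimal illustration of the test logic, where the bounds dL and dU must still be looked up in the Durbin-Watson tables for the given n and k:

```python
# Sketch of the Durbin-Watson d statistic and its five-region decision rule.
# dL and dU are assumed to come from the published Durbin-Watson tables.

def dw_statistic(e):
    """d = sum of squared successive differences of the residuals
    divided by the sum of squared residuals."""
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    den = sum(et ** 2 for et in e)
    return num / den

def dw_decision(d, dL, dU):
    """Map a computed d onto the five regions of the two-tailed test."""
    if d < dL:
        return "reject H0: positive autocorrelation"
    if d > 4 - dL:
        return "reject H0: negative autocorrelation"
    if dU < d < 4 - dU:
        return "accept H0: no autocorrelation"
    return "inconclusive"
```

For example, `dw_decision(0.83, 1.10, 1.37)` falls below dL and signals positive autocorrelation.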
This alternative test for autocorrelation has the advantage of applicability to any form of autocorrelation, and it provides estimates of the coefficients of the autocorrelation relationship, which are required for the remedial transformation of the original observations. The procedure is as follows.
1st: Apply OLS to the sample observations, i.e. regress your model by fitting the data and obtain the values of the estimated residuals, the e's.
2nd: Since we are not sure a priori about the existence of autocorrelation, we may experiment with various forms of autoregressive structures. If the calculated value (F*) is greater than the table value, the autoregressive structure is significant.
Thus, irrespective of whether the random term U_i is serially independent or not, the estimates of the parameters do not have any statistical bias as long as the U_i's and X_i's are uncorrelated.
From the variance formula we know that if the U_i's are serially independent, then E(U_iU_j) = 0 for i ≠ j, the cross-product terms drop out, and we are left with the usual OLS variance.
If you compare the two variances (the one with no autocorrelation and the one with autocorrelation), the variance of the estimator with no autocorrelation is smaller than its variance with autocorrelation.
c) The variance of the random term U_i may be seriously underestimated if the U's are autocorrelated. In particular, the variance of U_i will be seriously underestimated in the case of positive autocorrelation of the error term. As a result, the R2, t and F statistics are exaggerated.
d) Finally, if the values of U are autocorrelated, predictions based upon the OLS estimates will be inefficient. Forecasts made from the OLS estimates will be imprecise, because these predictions have a larger variance than predictions based on estimators obtained from other techniques.
autocorrelation', since the autocorrelation occurs due to the pattern of the omitted explanatory variables. In this case the autocorrelation arises not from the behavior of the random term U_i but from the pattern of the omitted explanatory variable. If several autocorrelated explanatory variables are omitted, their effect on the random term may not be observed, because the autocorrelation patterns of the omitted regressors may cancel each other out.
ii. Misspecification of the mathematical model. If we have adopted a mathematical form which differs from the true form of the relationship, the U's may show serial correlation. For example, suppose we have chosen a linear function
Yt = b0 + b1Xt + Ut
while the true relationship also contains lagged terms such as lagged income.
- Now, if you omit the lagged income, its influence will be reflected in the random term Ut and autocorrelation will occur.
- If you omit both lagged income and wealth, their effects may cancel each other out, their influence on the random term may not be observed, and Ut will be serially independent.
To eliminate the autocorrelation that appears following the omitted variable (Xt-1), introduce the omitted variable into the function.
b) If the source of autocorrelation is misspecification of the mathematical form of the relationship, the relevant approach is to change the functional form. This can be investigated by regressing the residuals against higher powers of the explanatory variables, or by computing a linear-in-logs form and re-examining the resulting new estimates.
Given the above cases, if autocorrelation is observed the appropriate procedure is
a) To transform the original data
b) To apply OLS to the transformed data.
Transformation of the original data. The transformation of the original data depends upon the pattern of the autoregressive structure, which may be first order or of a higher order.
If in this model the autocorrelation follows the first-order scheme, Ut depends upon Ut-1, i.e. on the successive pairs e1e2, e2e3, e3e4, etc. This is also called a first-order autoregressive scheme; Ut is then correlated with its preceding value:
Ut = pUt-1 + vt
where vt satisfies all the OLS assumptions made about Ui, and p is the coefficient of Ut-1. Since Ut is not observable, we approximate it using the sample observations by obtaining et. Therefore
et = pet-1 + vt
The estimated value will be
p^ = Σ(et*et-1) / Σ(et-1)^2
This is the estimated coefficient of the first-order autoregressive scheme, i.e. when et is correlated with et-1. To transform the original data, take the lagged form of the equation.
The estimators obtained are efficient only if our sample is large, so that the loss of one observation becomes negligible. The above procedure is possible only when the value of p is known. We can now describe the method through which the parameters of the autocorrelated model can be estimated.
Multiply the lagged equation by p and subtract it from the original. Suppose one assumes that there is perfect positive autocorrelation, i.e. p = 1. Let
Yt - Yt-1 = Yt*
Xt - Xt-1 = Xt*
Ut - Ut-1 = Vt
Then we can write the model in terms of the starred variables. Here the intercept is suppressed, and we have an equation that passes through the origin. Suppose instead one assumes that there is perfect negative autocorrelation, i.e. p = -1. The original model then becomes a two-period moving average regression model, because we are regressing the moving average of one variable on that of the other.
This method of first differences is quite popular in applied research for its simplicity. The problem with this method is that it depends upon the assumption of perfect positive or perfect negative autocorrelation in the data. But now the question arises: how do we know whether the value of p is equal to +1 or -1? The answer is to estimate p, for example from the d-statistic through the relation p^ ≈ 1 - d/2. This relation will not be accurate if the sample size is small; it holds for large samples. For small samples Theil and Nagar have suggested an adjusted relation.
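The two approximations can be sketched as follows; the Theil-Nagar formula is written here in its commonly stated form, p^ = [n^2(1 - d/2) + k^2] / (n^2 - k^2), which should be checked against the source the reader is using:

```python
# Sketch: approximating rho from the Durbin-Watson d statistic.

def rho_from_d(d):
    """Large-sample approximation: rho ~ 1 - d/2."""
    return 1 - d / 2

def rho_theil_nagar(d, n, k):
    """Theil-Nagar small-sample adjustment, where n is the sample size and
    k is the number of estimated coefficients (including the intercept)."""
    return (n ** 2 * (1 - d / 2) + k ** 2) / (n ** 2 - k ** 2)
```

For instance, a d of 0.8252 gives a large-sample rho of about 0.59.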
Table 5.7

n    Y    X    x2=(X-Xbar)2  y2=(Y-Ybar)2  xy   Yhat    e=Y-Yhat  e2
1    2    1    49            25            35    0.63    1.37     1.8769
2    2    2    36            25            30    1.54    0.46     0.2116
3    2    3    25            25            25    2.45   -0.45     0.2025
4    1    4    16            36            24    3.36   -2.36     5.5696
5    3    5     9            16            12    4.27   -1.27     1.6129
6    5    6     4             4             4    5.18   -0.18     0.0324
7    6    7     1             1             1    6.09   -0.09     0.0081
8    6    8     0             1             0    7.00   -1.00     1.0000
9   10    9     1             9             3    7.91    2.09     4.3681
10  10   10     4             9             6    8.82    1.18     1.3924
11  10   11     9             9             9    9.73    0.27     0.0729
12  12   12    16            25            20   10.64    1.36     1.8496
13  15   13    25            64            40   11.55    3.45    11.9025
14  10   14    36             9            18   12.46   -2.46     6.0516
15  11   15    49            16            28   13.37   -2.37     5.6169
Sum 105  120  280           274           255                    41.7680
Mean  7    8
The computation of the d statistic uses a second table with columns et, et-1, et^2, (et - et-1), (et - et-1)^2 and (et-1)^2. The rows that survive here are:

n     et      et-1    et^2     et-et-1  (et-et-1)^2  (et-1)^2
9     2.09   -1.00    4.3681    3.09     9.5481       1.0000
10    1.18    2.09    1.3924   -0.91     0.8281       4.3681
11    0.27    1.18    0.0729   -0.91     0.8281       1.3924
12    1.36    0.27    1.8496    1.09     1.1881       0.0729
13    3.45    1.36   11.9025    2.09     4.3681       1.8496
14   -2.46    3.45    6.0516   -5.91    34.9281      11.9025
15   -2.37   -2.46    5.6169    0.09     0.0081       6.0516
Sum (over all 15 observations): Σet^2 = 41.768, Σ(et - et-1)^2 = 60.2134, Σ(et-1)^2 = 36.1511

Therefore
d = Σ(et - et-1)^2 / Σet^2 = 60.2134 / 41.768 ≈ 1.44
From the d-table, dL = 1.08 and dU = 1.36, so 4 - dU = 2.64. Since dU = 1.36 < d ≈ 1.44 < 2.64 = 4 - dU, the computed d lies in the no-autocorrelation region and we accept H0: there is no autocorrelation.
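The sums used in this example can be verified with a short computation; this minimal sketch takes the residual column of Table 5.7 as given:

```python
# Recomputing the d statistic from the residuals (column e of Table 5.7).
e = [1.37, 0.46, -0.45, -2.36, -1.27, -0.18, -0.09, -1.00,
     2.09, 1.18, 0.27, 1.36, 3.45, -2.46, -2.37]

sum_e2 = sum(x ** 2 for x in e)                                    # sum of e_t^2
sum_diff2 = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))  # sum of (e_t - e_{t-1})^2
d = sum_diff2 / sum_e2                                             # Durbin-Watson d
```

This reproduces Σet^2 = 41.768, Σ(et - et-1)^2 = 60.2134 and d ≈ 1.44.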
Example 2
Suppose the hypothetical data on consumption expenditure and disposable income are given in table 5.9, and the consumption function is estimated from them.
Table 5.9 (columns: n, Ct, Yt, c = Ct - Cbar, y = Yt - Ybar, c2, y2, cy, Chat, et = Ct - Chat, et2, et-1, et - et-1, (et - et-1)2, (et-1)2, et*et-1)

n   Ct     Yt     c         y        c2        y2        cy        Chat    et     et2    et-1   et-et-1  (et-et-1)2  (et-1)2  et*et-1
1   206.3  226.5  -140.647  -152.84  19781.69  23358.84  21495.99  208.48  -2.18   4.75
2   216.7  238.6  -130.247  -140.74  16964.39  19806.62  18330.50  219.44  -2.74   7.52  -2.18  -0.56     0.32        4.75     5.97
3   230    252.6  -116.947  -126.74  13676.69  16062.01  14821.45  232.13  -2.13   4.52  -2.74   0.62     0.38        7.52     5.83
4   236.5  257.4  -110.447  -121.94  12198.63  14868.39  13467.51  236.47   0.03   0.00  -2.13   2.15     4.63        4.52    -0.05
5   254.4  275.3   -92.5474 -104.04   8565.02  10823.49   9628.26  252.69   1.71   2.92   0.03   1.68     2.83        0.00     0.04
6   266.7  293.2   -80.2474  -86.14   6439.65   7419.41   6912.19  268.91  -2.21   4.88   1.71  -3.92    15.35        2.92    -3.77
7   281.4  308.5   -65.5474  -70.84   4296.46   5017.74   4643.12  282.77  -1.37   1.88  -2.21   0.84     0.70        4.88     3.03
8   290.1  318.8   -56.8474  -60.54   3231.63   3664.61   3441.31  292.10  -2.00   4.01  -1.37  -0.63     0.40        1.88     2.75
9   311.2  337.3   -35.7474  -42.04   1277.88   1767.03   1502.68  308.86   2.34   5.46  -2.00   4.34    18.83        4.01    -4.68
10  325.2  350     -21.7474  -29.34    472.95    860.60    637.98  320.37   4.83  23.33   2.34   2.49     6.22        5.46    11.28
11  335.2  364.4   -11.7474  -14.94    138.00    223.08    175.46  333.42   1.78   3.18   4.83  -3.05     9.28       23.33     8.61
12  355.1  385.5     8.1526    6.16     66.46     37.99     50.25  352.53   2.57   6.59   1.78   0.78     0.61        3.18     4.58
13  375    404.6    28.0526   25.26    786.95    638.27    708.72  369.84   5.16  26.65   2.57   2.60     6.74        6.59    13.25
14  401.2  438.1    54.2526   58.76   2943.34   3453.21   3188.10  400.19   1.01   1.02   5.16  -4.15    17.23       26.65     5.22
15  432.8  473.2    85.8526   93.86   7370.67   8810.45   8058.47  431.99   0.81   0.66   1.01  -0.20     0.04        1.02     0.82
16  466.3  511.9   119.3526  132.56  14245.04  17573.21  15821.86  467.05  -0.75   0.56   0.81  -1.56     2.44        0.66    -0.61
17  492.1  546.3   145.1526  166.96  21069.28  27876.98  24235.26  498.22  -6.12  37.43  -0.75  -5.37    28.80        0.56     4.60
18  536.2  591     189.2526  211.66  35816.55  44801.65  40057.96  538.72  -2.52   6.33  -6.12   3.60    12.97       37.43    15.39
19  579.6  634.2   232.6526  254.86  54127.23  64955.66  59294.77  577.86   1.74   3.04  -2.52   4.26    18.15        6.33    -4.39
Sum 6592   7207.6                    223468.51 272019.24 246471.84               144.73                 145.92      141.68    67.87
From the table we found the following information: the estimated consumption function shows that almost all the variation in consumption is explained by disposable income, because R2 = 0.99 (99%). Let us now examine whether the error term ut is autocorrelated or not.
d = Σ(et - et-1)^2 / Σet^2 = 145.92 / 144.73 ≈ 1.01, which falls below the lower limit dL from the d-table, so the residuals show positive autocorrelation.
Now p^ = Σ(et*et-1) / Σ(et-1)^2 = 67.87 / 141.68 ≈ 0.48.
First transform the previous data, i.e. table 5.9, using C*t = Ct - 0.48Ct-1 and Y*t = Yt - 0.48Yt-1; the new model is then estimated from the transformed data.
Table 5.10 (columns as in the original: sn, Ct-1, Yt-1, .48Ct-1, .48Yt-1, C*, Y*, c*, y*, c*2, y*2, c*y*)

sn  Ct-1    Yt-1    .48Ct-1   .48Yt-1   C*       Y*       c*         y*        c*2       y*2       c*y*
1   216.7   238.6   104.016   114.528   102.284  111.972  -61.45289  -67.028   3776.458  4492.753  4119.064
2   230     252.6   110.4     121.248   106.3    117.352  -57.43689  -61.648   3298.996  3800.476  3540.869
3   236.5   257.4   113.52    123.552   116.48   129.048  -47.25689  -49.952   2233.214  2495.202  2360.576
4   254.4   275.3   122.112   132.144   114.388  125.256  -49.34889  -53.744   2435.313  2888.418  2652.207
5   266.7   293.2   128.016   140.736   126.384  134.564  -37.35289  -44.436   1395.238  1974.558  1659.813
6   281.4   308.5   135.072   148.08    131.628  145.12   -32.10889  -33.88    1030.981  1147.854  1087.849
7   290.1   318.8   139.248   153.024   142.152  155.476  -21.58489  -23.524    465.9074  553.3786  507.7629
8   311.2   337.3   149.376   161.904   140.724  156.896  -23.01289  -22.104    529.5931  488.5868  508.6769
9   325.2   350     156.096   168       155.104  169.3     -8.632889  -9.7       74.52677  94.09     83.73902
10  335.2   364.4   160.896   174.912   164.304  175.088   -0.567111  -3.912     0.321615  15.30374   2.218539
11  355.1   385.5   170.448   185.04    164.752  179.36     1.015111   0.36      1.030451   0.1296    0.36544
12  375     404.6   180       194.208   175.1    191.292   11.36311   12.292    129.1203  151.0933  139.6754
13  401.2   438.1   192.576   210.288   182.424  194.312   18.68711   15.312    349.2081  234.4573  286.137
14  432.8   473.2   207.744   227.136   193.456  210.964   29.71911   31.964    883.2256 1021.697   949.9417
15  466.3   511.9   223.824   245.712   208.976  227.488   45.23911   48.488   2046.577  2351.086  2193.554
16  492.1   546.3   236.208   262.224   230.092  249.676   66.35511   70.676   4403.001  4995.097  4689.714
17  536.2   591     257.376   283.68    234.724  262.62    70.98711   83.62    5039.17   6992.304  5935.942
18  579.6   634.2   278.208   304.416   257.992  286.584   94.25511  107.584   8884.026 11574.32  10140.34
Sum                                    2947.264 3222.368   ≈0          0.368   36975.91  45270.8   40854.01

With Cbar* = 2947.264/18 = 163.74 and Ybar* = 3222.368/18 = 179.02, the estimated intercept is
b0* = 163.74 - (1.10488)(179.02) = -34.9722
The regression model after the data are transformed can be written in terms of the starred variables. It can then be re-expressed in terms of the original variables, since C*t = Ct - 0.48Ct-1 and Y*t = Yt - 0.48Yt-1, with n = 18.
Example 3: Given the following two models,
Model A and Model B,
where t is time, regressions on data for 1949-1964 gave the following results:
Model A: t = (-3.9608), R2 = 0.5284, d = 0.8252
Model B: t = (-3.2724) (2.777), R2 = 0.6629, d = 1.82
a) Is there serial correlation in model A or in model B? The period is 1949-1964, so n = 16 years, which is the sample size in both models. In model A we have only one explanatory variable, K = 1. Given a 5% significance level, n = 16 and K = 1, from the d-table dL = 1.10 and dU = 1.37. The calculated d value of regression model A is 0.8252; since it is less than the lower value of the d-statistic, we reject the null hypothesis and accept that in model A the random term Ui is positively autocorrelated.
b) In the case of model B we have two explanatory variables, K = 2, n = 16; from the table at the 5% significance level dL = 0.98 and dU = 1.54. The calculated d value of model B is 1.82, which lies between dU = 1.54 and 4 - dU = 2.46, so in model B we accept the null hypothesis of no autocorrelation in the random term Ui.
5.3 Multicollinearity
In the assumptions of OLS we assumed that the explanatory variables are not perfectly linearly correlated, i.e. we assume that the explanatory variables X1, X2 and X3 satisfy rX1X2 ≠ 1, rX1X3 ≠ 1 and rX2X3 ≠ 1. But if this does not hold, we say that there is perfect multicollinearity among the explanatory variables. If the explanatory variables are multicollinear, then we cannot identify the independent effects of the explanatory variables on the explained variable.
Suppose Ct = f(Yt, Yt-1), where Ct = consumption expenditure, Yt = current income and Yt-1 = previous income. Here Yt and Yt-1 may be correlated with each other. Hence the problem of multicollinearity may be observed in distributed lag models.
3rd: Multicollinearity is usually connected with time series data, because economic variables tend to move together in the same direction.
4th: Multicollinearity is also quite frequent in cross-sectional data. Suppose
Qt = A Lt^a Kt^b e^ut
where Qt = output, Lt = labour and Kt = capital. If you collect data from different manufacturing firms at a single period (at a point in time), you will find that in large firms capital and labour are very high compared to small firms, i.e. capital and labour tend to move in the same direction and will be correlated with each other.
5th: An overdetermined model: when the model has more explanatory variables than the number of observations, there will be multicollinearity. This could happen in medical research, where there may be a small number of patients about whom information is collected on a large number of variables.
Assume further that X1 and X2 are related to each other, and that their relation is X2 = kX1, where k is any arbitrary number. If we substitute this relation into the formulae for the estimation of the coefficients, the estimates are indeterminate, i.e. there is no way of finding separate values for them, and the standard errors of the estimates become infinitely large.
d) If multicollinearity is high, one may obtain a high R2 while none, or very few, of the estimated regression coefficients are statistically significant.
Suppose Y = f(X1, X2, X3); then the procedure is to regress Y on X1, X2 and X3 separately, i.e. stepwise: first regress Y on X1, then on X2, and finally on X3. Thus:
a) In the elementary regressions we examine the results on the basis of a priori and statistical criteria. We choose the elementary regression which appears to give the most plausible results on both a priori and statistical criteria. Then we gradually insert additional variables and examine their effects on the individual coefficients, their standard errors and R2. The new variable is classified as useful, superfluous or detrimental as follows.
i. If the new variable improves R2 without rendering the individual coefficients unacceptable (wrong) on a priori considerations, the variable is considered useful and is retained as an explanatory variable.
ii. If the new variable does not improve R2 and does not affect to any considerable extent the values of the individual coefficients, it is considered superfluous and is rejected.
iii. If the new variable considerably affects the signs or values of the coefficients, it is considered detrimental.
If the individual coefficients are affected in such a way as to become unacceptable on theoretical, a priori, considerations, then we may say that this is a warning of the existence of multicollinearity. In this case the new variable is important, but because of its intercorrelation with the other explanatory variables its influence cannot be assessed statistically by OLS. This does not mean that we must reject the detrimental variable. If we omit the detrimental variable completely to avoid multicollinearity:
a) We must bear in mind that the influence of the detrimental variable will be absorbed by the other coefficients, and
b) Its influence may be absorbed by the random term, which may then become correlated with the variables left in, i.e. E(XiUi) ≠ 0. This violates the OLS assumption that Cov(Ui, Xi) = 0.
2) Examination of partial correlations. If in the regression of Y on X2, X3 and X4 the R2 is very high, but the partial correlations of Y with each of X2, X3 and X4, holding the other regressors constant, are comparatively low, this may suggest that the variables X2, X3 and X4 are highly intercorrelated.
3) Auxiliary regressions. Here we regress each X on the remaining X variables and compute the corresponding R2, which we denote by R2i. Each of these regressions is called an auxiliary regression, auxiliary to the main regression of Y on the X variables. We then rely on the relation between the F-test and R2.
This follows the F distribution with k - 2 numerator and n - k + 1 denominator degrees of freedom, where in the equation:
n stands for the sample size;
k is the number of explanatory variables including the intercept term;
R2 Xi.X1X2...Xk is the coefficient of determination in the regression of variable Xi on the remaining X variables.
If the computed F at the chosen level of significance exceeds the table value of F, it indicates the presence of multicollinearity between Xi and the other X variables. If it does not, we say that the particular Xi is not collinear, and we may retain the variable in the model. This rule is somewhat simplified if we use Klein's rule of thumb, which states that we suspect multicollinearity if the R2 computed from an auxiliary regression is greater than the overall R2 obtained from the regression of Y on all the regressors. The above tests of multicollinearity show us the location of the multicollinearity.
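Klein's rule of thumb can be sketched with ordinary least squares; the data in the test below are hypothetical, and this is only an illustration of the comparison between the auxiliary and overall R2:

```python
# Sketch of Klein's rule of thumb using plain least squares.
import numpy as np

def r_squared(y, X):
    """R^2 from an OLS regression of y on X (an intercept column is added)."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def klein_flags(y, X):
    """Regress each X_i on the remaining X's and flag it when the
    auxiliary R^2 exceeds the overall R^2 of y on all the X's."""
    overall = r_squared(y, X)
    flags = []
    for i in range(X.shape[1]):
        others = np.delete(X, i, axis=1)
        flags.append(r_squared(X[:, i], others) > overall)
    return overall, flags
```

A flagged regressor is one that is "better explained" by the other regressors than y is by all of them, signalling multicollinearity at that variable.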
4) Tolerance and variance inflation factor. The variances and covariances of the estimators in the multiple regression can be written in terms of rx1x2, the correlation between the regressors. From these formulae, as rx1x2 tends towards 1, that is, as collinearity increases, the variances of the two estimators increase, and in the limit, when rx1x2 = 1, the variances of the estimators become infinite. It is equally clear that as rx1x2 increases towards 1, the covariance of the two estimators also increases in absolute value. The speed with which the variances and covariances increase can be seen with the variance inflation factor (VIF), defined as
VIF = 1 / (1 - r2x1x2)
The VIF shows how the variance of an estimator is inflated by the presence of multicollinearity. As r2x1x2 approaches 1, the VIF approaches infinity: as the extent of collinearity increases, the variance of the estimator increases, and in the limit it can become infinite. If there is no collinearity, rx1x2 = 0 and the VIF will be 1. The variances of the estimators are thus directly proportional to the VIF. We could also use what is known as the measure of tolerance, defined as
TOL = 1 - r2x1x2 = 1 / VIF
From this, TOLj is 1 if there is no collinearity, whereas it is zero when there is perfect collinearity.
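The two measures are simple functions of the auxiliary R2 and can be sketched as:

```python
# VIF and tolerance as defined above: VIF_j = 1 / (1 - R_j^2) and
# TOL_j = 1 - R_j^2 = 1 / VIF_j, where R_j^2 is the auxiliary R^2
# of X_j on the remaining regressors.

def vif(r2_aux):
    if r2_aux >= 1:
        return float("inf")   # perfect collinearity
    return 1.0 / (1.0 - r2_aux)

def tolerance(r2_aux):
    return 1.0 - r2_aux
```

For example, an auxiliary R2 of 0.9 gives a VIF of 10 and a tolerance of 0.1.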
5) Computation of t-ratios for the pattern of multicollinearity. The t-test helps to detect those variables which are the cause of the multicollinearity. The test is performed on the partial correlation coefficients through the following hypothesis-testing procedure.
The null hypothesis is H0: rxixj.x1x2...xk = 0
The alternative is H1: rxixj.x1x2...xk ≠ 0
In the three-variable model the partial correlations r2x1x2.x3, r2x1x3.x2 and r2x2x3.x1 are computed, and from these we can compute the t-test for each as follows.
If the calculated t* is greater than the table value of t, reject the null hypothesis of no multicollinearity and accept that there is multicollinearity. If the calculated t* ≤ the table value of t, accept the null hypothesis that there is no multicollinearity and reject the alternative that there is multicollinearity.
2) Increase the size of the sample. It is suggested that multicollinearity may be avoided or reduced by increasing the sample size, the reason being that as you increase the sample size the covariances among the parameter estimates get reduced. But one should remember that this is true only when the intercorrelation happens to exist in the sample but not in the population; if the variables are collinear in the population, increasing the size of the sample will not help to reduce the multicollinearity.
3) Dropping a variable(s). The problem of multicollinearity may be reduced or solved by dropping the variable(s) that is (are) collinear. But here we must be careful not to commit a specification bias or specification error.
Let the complete model be Y = f(X1, X2). Now let us omit the variable X2, suspecting that it is collinear with X1, and estimate the model with X1 alone. Substituting the true expression for Yi into the formula for the estimator shows that the estimator after omitting the variable X2 is biased, but has a smaller variance than in the complete model. Therefore, dropping a variable in the hope of eliminating the problem of multicollinearity may lead to biased estimators.
4) Introducing an additional equation in the model. The problem of multicollinearity may be overcome by expressing the relationship between the multicollinear variables. Such a relation, in the form of an equation, may then be added to the original model. The addition of the new equation transforms our single-equation (original) model into a simultaneous-equation model. The reduced-form method can then be applied to avoid multicollinearity.
5) Use extraneous information. Extraneous information means information obtained from any source outside the sample being used for estimation. Extraneous information may be available from economic theory or from empirical studies already conducted in the field in which we are interested. We can use the following methods, in which extraneous information is utilized in order to deal with the problem of multicollinearity.
A) A priori information. Suppose we consider a model in which Yi = consumption, X1i = income and X2i = wealth. The income and wealth variables may move together and create multicollinearity. But suppose we know a priori that the rate of change of consumption with respect to wealth is one-tenth of the corresponding rate with respect to income, i.e. the wealth coefficient is 0.10 times the income coefficient. Then we can rewrite the regression in terms of the single combined regressor:
let X1i + 0.10X2i = X*1i and substitute.
Run the regression on this model (apply OLS) and obtain the estimated coefficient of X*1i; from the relationship between the coefficients you can then calculate the wealth coefficient as 0.10 times the estimated income coefficient.
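The procedure can be sketched as follows; the data and the true coefficients below are hypothetical and only illustrate how the restriction is imposed and the wealth coefficient recovered:

```python
# Sketch of using the a priori restriction (wealth coefficient = 0.10 *
# income coefficient): build X* = X1 + 0.10*X2, fit Y on X* alone, then
# recover the wealth coefficient. Data are hypothetical.
import numpy as np

x1 = np.array([10.0, 12, 15, 18, 20, 24, 27, 30])      # income
x2 = np.array([50.0, 55, 70, 80, 95, 110, 120, 140])   # wealth
y = 2 + 0.8 * x1 + 0.08 * x2                           # restriction holds exactly

x_star = x1 + 0.10 * x2                                # combined regressor
Z = np.column_stack([np.ones_like(x_star), x_star])
b0, b1 = np.linalg.lstsq(Z, y, rcond=None)[0]
b2 = 0.10 * b1                                         # recovered wealth coefficient
```

Because only one slope is estimated, the collinearity between income and wealth no longer affects the estimation.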
B) Combining cross-sectional and time series data, or pooling cross-section and time series data. Suppose we want to study the demand function for automobiles, and assume that we have time series and cross-sectional data on the number of cars sold, the average price of cars, and consumers' income. Suppose the demand function is studied using a model in which Yt = demand for cars (number of cars sold), Pt is the average price of cars, I is income and t is time, with price elasticity B1 and income elasticity B2. First transform the non-linear function into a linear one by taking logs.
In time series data the price and income variables generally tend to be highly collinear, i.e. one cannot separate the income and price effects on quantity demanded. On the other hand, it is not possible to obtain the price effect B1 from the cross-sectional data (because the price structure is the same for all consumers at a particular point in time). Under such conditions it is suggested to use pooling techniques, which avoid to a certain extent the problems associated with both kinds (cross-section and time series) of sample data. The pooling technique can be outlined as follows.
1st: The cross-section sample is used to obtain an estimate of the income coefficient. The time series, with the income effect removed, is then used to estimate the price coefficient; thus the price coefficient is derived from the time series data and the income coefficient is obtained from the cross-sectional data. By this pooling technique we have skirted the multicollinearity between income and price. The estimators obtained
by pooling the time series and cross-sectional data in the manner just suggested may, however, create problems of interpretation, because we are implicitly assuming that the cross-sectionally estimated income elasticity is the same as the one that would be obtained from a pure time series analysis.
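The two-step pooling procedure can be sketched as follows; the log data and the elasticities below are hypothetical, and the cross-section estimate of the income elasticity is simply assumed rather than computed:

```python
# Sketch of the pooling procedure: the income elasticity beta2 is taken
# from a cross-section study; the time series log-demand is then adjusted
# as ln(Y) - beta2*ln(I) and regressed on ln(P) to get the price elasticity.
import numpy as np

def ols(y, X):
    Z = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

# hypothetical time-series data in logs, built with beta1 = -1.2, beta2 = 0.9
ln_p = np.array([0.0, 0.1, 0.15, 0.3, 0.35, 0.5])
ln_i = np.array([1.0, 1.1, 1.25, 1.3, 1.45, 1.5])
ln_y = 4.0 - 1.2 * ln_p + 0.9 * ln_i

beta2_cs = 0.9                      # assumed cross-section income elasticity
y_adj = ln_y - beta2_cs * ln_i      # remove the income effect
b0, beta1 = ols(y_adj, ln_p)        # price elasticity from the time series
```

Since only one regressor remains in the time series step, the price-income collinearity is skirted.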
C) Transformation of the variables. Suppose we have time series data on consumption (Ct), income (Yt) and wealth (Wt). In this model income and wealth may tend to move in the same direction and create multicollinearity. One way of minimizing this dependence is to proceed as follows: take the lagged (t-1) values of the model and subtract them from the current values.
The resulting equation is known as the first difference form, because we run the regression not on the original variables but on the differences of successive values of the variables. The first difference regression model often reduces the severity of multicollinearity, because there is no a priori reason to believe that the differenced variables will also be highly correlated. The problem with this difference transformation, however, is that the error term Vt may not satisfy one of the assumptions of the classical linear regression model, namely that the disturbance terms are not serially correlated.
Exercises for unit 6:
Family Consumption Yd Family Consumption Yd
Year Imports GDP
1980 299.2 2918.8
1981 319.4 3203.1
1982 294.9 3315.6
1983 358 3688.8
1984 416.4 4033.5
1985 438.9 4319
1986 467.7 4537.5
1987 536.7 4891.6
1988 573.5 5258.3
1989 599.6 5588
1990 649.2 5847.3
1991 639 6080.7
1992 687.1 6469.8
1993 744.9 6795.5
1994 859.6 7217.7
1995 909.3 7529.3
1996 992.8 7981.4
1997 1087 8478.6
1998 1147.3 8974.9
1999 1330.1 9559.7
Chapter 6: Estimation with Dummy Variables
The variables used in a regression equation usually take values over some continuous range. In regression analysis the dependent variable is frequently influenced not only by variables that are quantitative (income, output, prices, costs, heights, etc.) but also by variables that are essentially qualitative in nature (sex, race, profession, etc.). Dummy variables are constructed by econometricians mainly to be used as proxies for variables which cannot be measured in a particular case, for various reasons. Dummy variables are commonly used as proxies for qualitative factors such as sex, profession, religion, etc., since such qualitative variables usually indicate the presence or absence of a quality or an attribute, such as male or female, black or white, etc. One method of quantifying such attributes is by constructing artificial variables that take on values of 1 or 0: 1 indicates the presence of an attribute and 0 indicates its absence. Variables that assume 1 or 0 values are called dummy variables, indicator variables, binary variables or dichotomous variables. Suppose, for example, that a firm utilizes two types of production process to obtain its output.
The above examples contain dummy variables as the only explanatory variables. Such models are called analysis of variance (ANOVA) models, i.e. the dependent variable is quantitative but the explanatory variables are all qualitative. In most economic research, however, a regression model contains a mixture of quantitative and qualitative variables.
Notice that each of the three qualitative variables (having a child, owning a house, and the age of the household head) has two categories and hence needs one dummy variable each. Note also that the omitted, or base, category is now "a household with no child, no house, and a head aged less than 70 years", whose mean consumption is obtained by setting all the dummies to zero, since E(Ut) = 0.
The mean value for a consumer who has a child but no house and is less than 70 years old, i.e. D1t = 1, D2t = 0 and D3t = 0, is then the base-category mean plus the differential intercept on D1t, again because E(Ut) = 0. This too is a mean consumption expenditure of the consumer. By the same analogy you can continue in this way for the other categories.
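The way the category means are built up from the base intercept and the differential intercepts can be sketched as follows; the coefficient values are purely illustrative:

```python
# Hypothetical sketch: mean consumption implied by an ANOVA/ANCOVA model
# E(C) = b0 + b1*D1 + b2*D2 + b3*D3, where the base category has all
# dummies equal to zero. The coefficients below are illustrative only.

def mean_consumption(b0, b, d):
    """b: differential-intercept coefficients; d: 0/1 dummies (same length)."""
    return b0 + sum(bi * di for bi, di in zip(b, d))

base = mean_consumption(200.0, [35.0, 50.0, -20.0], [0, 0, 0])        # base category
with_child = mean_consumption(200.0, [35.0, 50.0, -20.0], [1, 0, 0])  # D1 = 1
```

Each category mean is the base intercept plus the differential intercepts of whichever dummies are switched on.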
6.1 Some Important Uses of Dummy Variables
Analysis of covariance models
A) Use of a dummy variable for measuring the shift of a function. A shift of a function implies that the constant intercept changes in a different period while the other coefficients remain the same. Such a shift can be examined by introducing a dummy variable into the function under study. Suppose we want to study the consumption expenditure of the society as a whole for the period 1910-1950. We know that there were breakthrough events occurring during this period:
the 1st World War in the 1910s and the 2nd World War in the 1940s;
the Great Depression in the 1930s.
Apart from these events, the remaining period is assumed to consist of normal years. Then, generally, we can divide the period into normal years and abnormal years (war and depression periods). During these periods the society is expected to have different consumption expenditure patterns. Suppose we assume that during the abnormal and the normal periods the marginal propensity to consume is constant, but there is a change in the autonomous (subsistence-level) consumption. The consumption function will then include an intercept dummy.
We then apply OLS using the two explanatory variables Yt and Dt and obtain estimates of their coefficients in the regression equation. From these estimates you can get the normal-year and abnormal-year equations as follows. If it is a war period (abnormal year), Dt = 0 and the equation reduces to the abnormal-year equation.
If it is a normal year, Dt takes the value 1. To generalize, the two cases give an abnormal-year equation and a normal-year equation. These functions suggest that there is a difference in the intercepts of the abnormal and normal periods; since the abnormal-period intercept is less than the normal-period intercept, we can conclude that the Depression and the two war periods had a significantly negative effect on consumption expenditure.
(Figure: consumption plotted against income, showing two parallel lines, the normal-year function above and the abnormal-year function below.)
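The intercept-shift specification can be sketched as follows; the data and the true coefficients are hypothetical, chosen only to show how the two intercepts are recovered:

```python
# Sketch of estimating an intercept shift with a dummy:
# C_t = a0 + a1*D_t + b*Y_t, where D_t = 1 in normal years and
# 0 in abnormal years. Data are hypothetical.
import numpy as np

income = np.array([100.0, 120, 140, 160, 180, 200, 220, 240])
dummy = np.array([1.0, 1, 0, 0, 1, 1, 0, 1])   # 1 = normal year
cons = 10 + 15 * dummy + 0.75 * income          # exact, for illustration

Z = np.column_stack([np.ones(len(cons)), dummy, income])
a0, a1, b = np.linalg.lstsq(Z, cons, rcond=None)[0]
# abnormal-year intercept: a0; normal-year intercept: a0 + a1
```

The estimated a1 is the differential intercept: a positive value means normal-year consumption lies above abnormal-year consumption at every income level.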
B) Use of a dummy variable for measuring a change in the slope parameter over time. The abnormal period may not affect the autonomous (subsistence) level of consumption, but it may affect the marginal propensity to consume, i.e. the abnormal periods affect the MPC but not the constant term. Suppose we have
Dt = 1 in normal years, 0 in abnormal years.
Since we assume that the abnormal years affect the slope (the MPC), we interact the dummy with disposable income. If the period is a normal year, the estimated function is obtained by setting Dt = 1.
This gives the normal-year equation. If the period is an abnormal year, i.e. Dt = 0, then the estimated function reduces to the abnormal-year equation.
Since the normal-year slope is greater than the abnormal-year slope, we can conclude that even though the constant term (autonomous consumption) in both periods is equal, the marginal propensity to consume is different in the normal and abnormal periods.
(Figure: consumption plotted against income, showing two lines from a common intercept, the normal-year function with the steeper slope and the abnormal-year function with the flatter slope.)
C) The final possibility is that there is a change in both the intercept and the slope over time. The regression equation that considers the two things simultaneously, i.e. a dummy affecting both the slope coefficient and the constant term, includes the dummy variable both on its own and interacted with income.
(Figure: consumption plotted against income, with the normal-year and abnormal-year functions differing in both intercept and slope.)
a. Testing for a seasonal pattern through the intercept

If there are seasonal patterns in the various quarters, the estimated differential intercepts will capture them:

Yt = α0 + α1D2t + α2D3t + α3D4t + βXt + Ut

where Y is profit, X is sales, and D2, D3 and D4 equal 1 in the second, third and fourth quarters respectively (and 0 otherwise). Applying OLS you can estimate the function and obtain the coefficients. If D2 = 1 (second quarter) the intercept is α0 + α1; if D3 = 1 (third quarter) it is α0 + α2; if D4 = 1 (fourth quarter) it is α0 + α3, because all the other quarters are assigned the value zero. In this case only the intercept term shifts, and a significant shift indicates the presence of a seasonal pattern.

Ex. Suppose we obtain the following results from a given data set:

Ŷt = 6688 + 1322D2t - 217D3t + 183D4t + 0.0383Xt
S.E.  (1711)  (638)    (632)    (654)    (0.0115)
t     (3.908) (2.07)   (0.344)  (0.281)  (3.33)
R² = 0.5255
The results show that only the sales coefficient (Xt) and the differential intercept associated with the second quarter are statistically significant at the 5% level, which you can check from the standard errors and t-ratios. In each case, if t > 2 you can accept that the explanatory variable affects the dependent variable (i.e. profit is affected by sales (Xt) and by the dummy variables). In many studies only some of the α's are significant, reflecting that only some quarters affect profit. In the above example only the second-quarter differential intercept and the sales coefficient are statistically significant. Thus we can conclude that some seasonal factor operates in the second quarter of each year, but there is no seasonal factor affecting profit in the other quarters. The average profit in the first quarter is 6688, and in the second quarter it is (α0 + α1), i.e. 6688 + 1322 = 8010. The sales coefficient tells us that if sales increase by 1 unit, average profit increases by about 0.0383 units.
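The arithmetic in this interpretation can be checked directly; the figures below are the ones reported in the example above.

```python
# Average profit by quarter implied by the estimated equation above:
# base intercept plus that quarter's differential intercept (sales held fixed).
base = 6688.0
diff = {"Q1": 0.0, "Q2": 1322.0, "Q3": -217.0, "Q4": 183.0}
avg_profit = {q: base + d for q, d in diff.items()}
print(avg_profit)   # Q2: 6688 + 1322 = 8010
```

Only the Q2 shift is statistically significant, so only the Q2 average differs meaningfully from the base quarter.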
b. If the seasonal factors affect not the intercept but the slope:

Yt = α + β1Xt + β2(D1tXt) + β3(D2tXt) + β4(D3tXt) + Ut

where

D1 = 1 if it is the second quarter, 0 otherwise

and D2 and D3 are defined analogously for the third and fourth quarters. The seasonal effect is captured by the slope coefficients. If all of D1, D2 and D3 are zero (the first quarter), the estimated function is

Ŷt = α + β1Xt

If D1 = 1 the slope is β1 + β2; if D2 = 1 (third quarter) it is β1 + β3; if D3 = 1 (fourth quarter) it is β1 + β4. The seasonal effect can then be examined by using an F-test of the hypothesis that β2, β3 and β4 are jointly equal to zero.
c. The final possibility is to let both the intercept and the slope be affected by the seasonal factors. The function would be

Yt = α0 + α1D1t + α2D2t + α3D3t + β1Xt + β2(D1tXt) + β3(D2tXt) + β4(D3tXt) + Ut

If D1 = 1 (second quarter) the intercept is α0 + α1 and the slope is β1 + β2; if D2 = 1 (third quarter) they are α0 + α2 and β1 + β3; if D3 = 1 (fourth quarter) they are α0 + α3 and β1 + β4.
Exercises for chapter six
1) The following table gives the consumption expenditure C, the disposable income Yd and the sex of the household head for 12 random families.

Family      C        Yd     Sex
1         18535    22550     M
2         11350    14035     M
3         12130    13040     F
4         15210    17500     M
5          8680     9430     F
6         16760    20635     M
7         13480    16470     M
8          9680    10720     F
9         17840    22350     M
10        11180    12200     F
11        14320    16810     F
12        19860    23000     M

A) Regress C on Yd. B) Test for a different intercept for families with a male or a female head of household. C) Test for a different slope (MPC) for families with a male or a female head of household. D) Test for both a different intercept and a different slope. E) Which is the best result?
2) Given the following quarterly sales for 1995-1999:

Year  Quarter   t    Sales
1995 1 1 540.5
1995 2 2 608.5
1995 3 3 606.6
1995 4 4 648.3
1996 1 5 568.4
1996 2 6 632.8
1996 3 7 626
1996 4 8 674.6
1997 1 9 587
1997 2 10 640.2
1997 3 11 645.9
1997 4 12 686.9
1998 1 13 597
1998 2 14 675.3
1998 3 15 663.6
1998 4 16 723.3
1999 1 17 639.5
1999 2 18 716.5
1999 3 19 721.9
1999 4 20 779.9
Using the above data, run a regression of sales on the seasonal dummies and interpret the result.
Chapter 7: Simultaneous equation models
Consider, for example, a market in which the quantity demanded of a commodity depends on its price, while the price in turn depends on the quantity demanded:

Qd = f(P)--------------------------------------------------7.1
P = f(Qd)--------------------------------------------------7.2

Under such conditions we need to consider more than one regression equation, one for each interdependent variable, in order to understand the multi-directional flow of influence among the variables. This is precisely what is done in simultaneous-equation models. A system of equations describing the joint dependence of explained and explanatory variables is called a system of simultaneous equations, or a simultaneous-equation model. In such a model variables are classified as endogenous (explained) and exogenous (explanatory): endogenous variables are those determined by the economic model, while exogenous variables are those determined outside the model.
Example:

Qd = α0 + α1P + α2Y + U1   (demand function)
Qs = β0 + β1P + β2R + U2   (supply function)

where Qd = quantity demanded, Qs = quantity supplied, P = price, Y = income and R = rainfall. In these models we have five variables (Qd, Qs, P, Y and R). Quantity supplied, quantity demanded and price are endogenous variables that are determined within the model; Y and R are exogenous variables, determined or given outside the model. In a simultaneous model the total number of equations equals the number of endogenous variables. In the above model we have three endogenous variables (Qd, Qs and P), so the total number of simultaneous equations is three:
Qd = α0 + α1P + α2Y + U1
Qs = β0 + β1P + β2R + U2
Qd = Qs

In these simultaneous-equation models it is not possible to estimate each equation independently; instead, we determine the values of the endogenous variables by taking into account all the information provided by the other equations of the system, i.e. jointly or simultaneously. If you apply OLS to each equation independently, estimating its parameters while disregarding the other equations of the model, the estimates obtained are not only biased but also inconsistent: as you increase the sample size indefinitely, the estimates do not converge to their true parameter values. The bias arising from the application of OLS independently to each equation of a simultaneous-equations model is known as simultaneity bias, or simultaneous-equation bias.
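Simultaneity bias is easy to see in a Monte Carlo sketch. The tiny Keynesian system below (consumption depending on income, income defined as consumption plus exogenous investment) is hypothetical, with illustrative parameter values; it shows that the OLS estimate of the MPC stays above the true value even in large samples.

```python
# Monte Carlo sketch of simultaneity bias in the pair C = a + b*Y + u,
# Y = C + I (I exogenous): Y contains u, so OLS of C on Y overstates the
# true b = 0.6 no matter how large the sample. Values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
a, b = 10.0, 0.6
estimates = []
for _ in range(300):
    n = 2000
    I = rng.uniform(50, 150, n)        # exogenous investment
    u = rng.normal(0, 10, n)
    Y = (a + I + u) / (1 - b)          # reduced form: Y depends on u
    C = a + b * Y + u
    X = np.column_stack([np.ones(n), Y])
    estimates.append(np.linalg.lstsq(X, C, rcond=None)[0][1])
mean_b = float(np.mean(estimates))
print(f"true MPC = {b}, average OLS estimate ≈ {mean_b:.3f}")
```

The average estimate settles noticeably above 0.6; increasing n shrinks the spread of the estimates but not the gap, which is the inconsistency described in the text.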
Ex. Demand function: the demand for a commodity depends on its own price (P), on the price of other goods (Po) and on income (I), so the equation is

Qd = α0 + α1P + α2Po + α3I + Ui ……………………………..7.3

If you apply OLS to the above equation, one of the CLRM assumptions breaks down, namely that the explanatory variables are independent of the random term (Ui), i.e. E(PUi) = 0 no longer holds, since the price of the commodity is itself affected by the quantity demanded of that commodity. The above single-equation model cannot be treated independently; there is a two-way causation of the following type:

Qd = α0 + α1P + Ui ...........................................7.4
P = β0 + β2Qd + β3Z + Vi .....................................7.5

where Z is advertising expenditure.
E(Ui) = 0, E(Vi) = 0
E(Ui²) = σu², E(Vi²) = σv²
E(UiUj) = 0, E(ViVj) = 0, and also E(UiVi) = 0

There are two endogenous variables, Qd and P; in addition there is one exogenous variable, Z. Substitute the quantity-demanded equation into the price equation:

P = β0 + β2(α0 + α1P + Ui) + β3Z + Vi
P - α1β2P = β0 + β2α0 + β2Ui + β3Z + Vi
P(1 - α1β2) = (β0 + β2α0) + β3Z + (β2Ui + Vi)

P = (β0 + β2α0)/(1 - α1β2) + β3Z/(1 - α1β2) + (β2Ui + Vi)/(1 - α1β2)

From here we can prove that the covariance of P and U is not zero. Let's derive it:

Cov(P,U) = E[{P - E(P)}{U - E(U)}]
         = E[{P - E(P)}U]          since E(Ui) = 0

From the solved expression for P, P - E(P) = (β2Ui + Vi)/(1 - α1β2), hence

Cov(P,U) = E[(β2Ui + Vi)Ui]/(1 - α1β2) = β2σu²/(1 - α1β2) ≠ 0
Since the covariance between P and U is not zero, we break the assumption that E(PiUi) = 0. As a consequence, applying OLS to each equation of the model separately makes the coefficients biased and inconsistent. Let's first see that the estimates are biased.

From our demand equation Qd = α0 + α1P + Ui, in deviation form qi = α1pi + ui, so that

Σqipi = α1Σpi² + Σpiui

Dividing both sides by Σpi²:

Σqipi/Σpi² = α1 + Σpiui/Σpi²

We know that α̂1 = Σqipi/Σpi², so

α̂1 = α1 + Σpiui/Σpi²

Taking expectations,

E(α̂1) = α1 + E[Σpiui/Σpi²] ...........................7.6

In the case E(PU) ≠ 0 the last term does not vanish, so E(α̂1) is not equal to the true population parameter α1: the estimator is biased.
Consistency: an estimator is said to be consistent if its probability limit equals its population value. Therefore, to show that the estimator α̂1 is consistent, it is required to show that its plim equals α1, i.e.

plim(α̂1) = α1

Applying the rules of probability limits to α̂1 = α1 + Σpiui/Σpi²,

plim(α̂1) = α1 + plim(Σpiui/N)/plim(Σpi²/N) ...........................7.7

The quantities in the brackets are interpreted as follows: Σpiui/N is the sample covariance between P and U, and Σpi²/N is the sample variance of P. As the sample size N increases, the sample covariance approaches the true population covariance, which we showed above to equal β2σu²/(1 - α1β2). Similarly, as N → ∞ the sample variance of P approaches its population variance, say σp². Therefore

plim(α̂1) = α1 + [β2σu²/(1 - α1β2)]/σp² ...........................7.8

so α̂1 is a biased estimator, and the bias does not disappear even as N → ∞: the estimator is also inconsistent. The direction of the bias depends on the structure of the particular model and on the values of the regression coefficients.
A) Biased estimates:

E(α̂1) = α1 + E[Σpiui/Σpi²]

where p is price in deviation form, from our equation. The expected value of α̂1 differs from the population parameter by the amount E[Σpiui/Σpi²].

B) Inconsistent estimates: plim(α̂1) ≠ α1, as shown above.

The obvious solution is to apply other methods of estimation which give better estimates of the parameters. The most common methods used to estimate such equations are:
a. The reduced-form method, or indirect least squares (ILS)
b. The method of instrumental variables (IV)
c. Two-stage least squares (2SLS)
d. Limited-information maximum likelihood (LIML)
e. Mixed estimation
f. Three-stage least squares (3SLS)
g. Full-information maximum likelihood (FIML)

The first five methods are applied to one equation at a time, and for this reason they are called single-equation methods. The remaining two are called systems methods because they are applied to all equations of the system simultaneously.
i. Structural models

A structural model is a complete system of equations which describes the structure of the relationships among the economic variables. Structural equations express the endogenous variables (those determined in the model) as functions of:
a) other endogenous variables,
b) exogenous variables (determined outside the model), and
c) the random term Ui.

The simultaneous-equation system is complete if the number of equations equals the number of endogenous variables. Ex. Let's consider the following closed model:

Ct = α0 + α1Yt + u1.....................................7.9
It = b0 + b1Yt + b2Yt-1 + u2............................7.10
Yt = Ct + It + Gt.......................................7.11

Equation 7.9 is the consumption function, 7.10 the investment function and 7.11 the national-income identity (definitional equation). Consumption (Ct), investment (It) and national income (Yt) are endogenous variables, because their values are determined by the above equations; since we have three endogenous variables, as a rule the number of equations must be three. Yt-1 and Gt are exogenous or predetermined variables, whose values are determined outside our model.
Thus the simultaneous system has three endogenous variables and two exogenous variables, and it is complete because the number of endogenous variables equals the number of equations. The structural parameters in general express propensities if the equations are linear, and elasticities if the equations are non-linear and transformed into linear form. The structural parameters, whether propensities or elasticities, explain the direct effect of each explanatory variable on the dependent variable. Indirect effects of an explanatory variable on a dependent variable can be computed only by solving the structural system; factors not appearing explicitly in a function may still have an indirect influence on the dependent variable of that function.

Direct effects

Ex. In equations 7.9-7.11, α1, b1 and b2 express the direct effects of the explanatory variables on the dependent variables:
α1 is the marginal propensity to consume;
b1: if income increases by 1 birr, investment will on average increase by b1;
b2: if lagged income increases by one birr, investment will on average increase by b2.

Indirect effects: indirect effects cannot easily be obtained from the structural parameters. Ex. A change in consumption will affect investment indirectly, because an increase in consumption affects income (Y), which is a determinant of investment (C → Y → I). The effect of C on I cannot be measured directly by any of the parameters (α1, b1, b2), but it is taken into account by the simultaneous solution of the system. Traditionally the structural coefficients of the endogenous variables (the coefficients of Ct, Yt and It) are represented by β's, the parameters of the exogenous variables (the coefficients of Yt-1 and Gt) by γ's, and the endogenous variables themselves by y's.
For the sake of simplicity we will ignore the constant terms. With this notation the structural system becomes

y1 = β13y3 + u1....................................................7.12
y2 = β23y3 + γ21x1 + u2...........................................7.13
y3 = y1 + y2 + x2.................................................7.14

where y1 = C, y2 = I, y3 = Y, x1 = Yt-1 and x2 = G.

Transfer all the observable variables to the left-hand side, leaving ui on the right-hand side, and collect the coefficients of the endogenous and exogenous variables in a table:

          y1    y2    y3     x1     x2
eq. 1      1     0   -β13     0      0
eq. 2      0     1   -β23   -γ21     0
eq. 3     -1    -1     1      0     -1

This is the table of structural coefficients.
Values of the structural parameters may be obtained by using sample observations on the variables of the model and applying an appropriate econometric method.

1st method: express each endogenous variable as a function of the exogenous variables alone (the reduced form),

yi = πi1x1 + πi2x2 + ... + πikxk + vi .........................7.18

and proceed with the estimation of the π's by an appropriate method; here yi is endogenous whereas the xi are exogenous. In our simple three-equation model the reduced form writes the endogenous variables Ct, It and Yt as functions of Yt-1 and Gt as follows:

Ct = π11Yt-1 + π12Gt + v1 .........................7.19
It = π21Yt-1 + π22Gt + v2 .........................7.20
Yt = π31Yt-1 + π32Gt + v3 .........................7.21

2nd method: obtain the reduced form by solving the structural system of equations (7.9-7.11):

Ct = α0 + α1Yt + u1
It = b0 + b1Yt + b2Yt-1 + u2
Yt = Ct + It + Gt

Dropping the constant terms α0 and b0,
a) substitute equations 7.9 and 7.10 into 7.11 and you will get
Yt = [b2Yt-1 + Gt + (u1 + u2)] / (1 - α1 - b1) .........................7.22

b) Substitute Yt (equation 7.22) into the consumption function (7.9):

Ct = [α1b2Yt-1 + α1Gt + (1 - b1)u1 + α1u2] / (1 - α1 - b1)

This is the reduced form of the consumption function.

c) Substitute Yt into the investment function (i.e. equation 7.22 into equation 7.10):

It = [b2(1 - α1)Yt-1 + b1Gt + b1u1 + (1 - α1)u2] / (1 - α1 - b1)

This is the reduced form of the investment function. The reduced-form parameters measure the total effect, i.e. the direct and indirect effects, of the exogenous variables on the endogenous variables of the model. The chain of influence in equations 7.9-7.11 runs

Yt-1 → It → Yt → Ct
Hence the direct effect of Yt-1 on It (investment) is captured by b2, but the effects of lagged income on the other variables, income and consumption, are indirect. Consider the reduced-form parameter π21, the total effect of Yt-1 on It in equation 7.20. From the reduced form derived above,

π21 = b2(1 - α1) / (1 - α1 - b1)

so π21 consists of the three structural parameters b2, α1 and b1, which appear as coefficients in equations 7.9 and 7.10. It can be decomposed into the following components:

π21 = b2 + b1b2/(1 - α1 - b1)

a direct effect (b2) plus an indirect effect operating through income.

An alternative way of obtaining the reduced-form coefficients is:
b. obtain estimates of the structural parameters by any appropriate econometric method;
c. substitute the estimates of the β's and γ's into the system of parameter relations to find the estimates of the reduced-form coefficients.
ii) The second equation contains all the predetermined variables (the x's) and only one endogenous variable:

y2 = f(x1, x2, ..., xk, y1, u2)

Given the values of the exogenous variables (x), we may apply OLS to each equation individually because, by assumption,

cov(ui, uj) = 0 and cov(y1, u2) = 0

Recursive systems are also called triangular systems because the coefficients of the endogenous variables (the β's) form a triangular array: the main diagonal of the array of β's contains units, and no coefficients appear above the main diagonal. Take our equations 7.9-7.11:
Ct = y1 = f(Yt-1, Gt)
It = y2 = f(Ct, Yt-1, Gt)
Yt = y3 = f(Ct, It, Yt-1, Gt)

y1 = γ11Yt-1 + γ12Gt + u1
y2 = β21Ct + γ21Yt-1 + γ22Gt + u2
y3 = β31Ct + β32It + γ31Yt-1 + γ32Gt + u3

Moving all the right-hand variables except ui to the left-hand side:

y1 - γ11Yt-1 - γ12Gt = u1
y2 - β21Ct - γ21Yt-1 - γ22Gt = u2
y3 - β31Ct - β32It - γ31Yt-1 - γ32Gt = u3

β's of the endogenous variables        γ's of the exogenous variables
   y1(Ct)   y2(It)   y3(Yt)               Yt-1       Gt
     1        0        0                  -γ11      -γ12
   -β21       1        0                  -γ21      -γ22
   -β31     -β32       1                  -γ31      -γ32
If you look at the coefficients of the endogenous variables (the β's), the diagonal contains 1 in every position, the coefficients above the diagonal are all zero, and non-zero entries appear only below it. Such a recursive system can be estimated equation by equation using OLS, and it yields unbiased and consistent estimates.
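A small simulation illustrates why equation-by-equation OLS works in a recursive system; the two-equation triangular model and its parameter values below are hypothetical.

```python
# Sketch of OLS applied equation by equation to a recursive (triangular)
# system: y1 depends only on exogenous x, y2 on y1 and x, and the error
# terms are independent, so no simultaneity problem arises. Simulated data.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.uniform(0, 10, n)
u1 = rng.normal(0, 1, n)
u2 = rng.normal(0, 1, n)               # independent of u1
y1 = 1.0 + 2.0 * x + u1                # first equation
y2 = 0.5 + 1.5 * y1 + 0.3 * x + u2     # second equation: unilateral causation

b_eq1 = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y1, rcond=None)[0]
b_eq2 = np.linalg.lstsq(np.column_stack([np.ones(n), y1, x]), y2, rcond=None)[0]
print("eq.1 estimates (true 1.0, 2.0):", b_eq1.round(2))
print("eq.2 estimates (true 0.5, 1.5, 0.3):", b_eq2.round(2))
```

Because y1 contains only u1, and u1 is uncorrelated with u2, the regressor y1 in the second equation is uncorrelated with that equation's error, which is exactly the condition the text requires.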
Consider now the simple Keynesian model

Ct = β0 + β1Yt + ut .....................................7.29
Yt = Ct + It .............................................7.30

where Ct = consumption expenditure, Yt = income and It = investment. Ct and Yt are endogenous variables, so we have two equations; the only exogenous variable is It. The model is complete because the number of equations equals the number of endogenous variables.

To solve, substitute equation 7.29 into 7.30:

Yt = β0 + β1Yt + ut + It
Yt(1 - β1) = β0 + It + ut

Let π0 = β0/(1 - β1), π1 = 1/(1 - β1) and w0 = ut/(1 - β1). Substituting in the above equation,

Yt = π0 + π1It + w0 -----------------7.31

Equation 7.31 is a reduced-form equation; the reduced-form coefficients (π0 and π1) are non-linear combinations of the structural coefficients. Substitute equation 7.31 into equation 7.29:
Ct = β0 + β1(π0 + π1It + w0) + ut

Let π2 = β0 + β1π0, π3 = β1π1 and w1 = β1w0 + ut. Then

Ct = π2 + π3It + w1 ...............................................7.33

The reduced-form coefficients π1 and π3, which are the coefficients of investment in the income and consumption equations respectively, are called impact (or short-run) multipliers, because they measure the immediate impact of the exogenous variable on the endogenous variables: π1 explains the immediate impact of investment on income, and π3 shows the immediate impact of investment on consumption.
Consider next the identification problem in the simple demand-and-supply model:

Qd = α0 + α1Pt + U1 ................................................7.34
Qs = β0 + β1Pt + U2 ................................................7.35
Qd = Qs (equilibrium condition)

where Qd is quantity demanded, Qs is quantity supplied and P is price. Setting Qd = Qs:

α0 + α1Pt + U1 = β0 + β1Pt + U2.....................................7.36

Rearranging,

α1Pt - β1Pt = (β0 - α0) + (U2 - U1)
Pt(α1 - β1) = (β0 - α0) + (U2 - U1)
Pt = (β0 - α0)/(α1 - β1) + (U2 - U1)/(α1 - β1) ......................7.37

where π0 = (β0 - α0)/(α1 - β1) and w = (U2 - U1)/(α1 - β1), so that

Pt = π0 + w ................................................................7.38

Substituting Pt back into the demand function gives Qt = α0 + α1(π0 + w) + U1 ............7.39. Let π1 = α0 + α1π0 and v = α1w + U1; then we can write equation 7.39 as

Qt = π1 + v................................................................7.40

Equations 7.38 and 7.40 are the two reduced-form equations derived from the structural equations 7.34 and 7.35. Now compare the coefficients: the structural equations contain four coefficients (α0, α1, β0 and β1), whereas the reduced-form equations contain only two (π0 and π1). The reduced-form coefficients are combinations of the structural coefficients, i.e. α0, α1, β0 and β1 are all embedded in π0 and π1. But how can we recover the values of α0, α1, β0 and β1 from π0 and π1? Since it is not possible to find these four values from π0 and π1, i.e. the structural coefficients outnumber the reduced-form coefficients, the equations are under-identified: we cannot compute four structural coefficients from two reduced-form coefficients.
Suppose now that income (I) is added to the demand function:

Qd = α0 + α1Pt + α2I + U1 ................................................7.41
Qs = β0 + β1Pt + U2 ......................................................7.42

Equating demand and supply and solving for Pt as before gives the reduced form

Pt = π0 + π1I + U* ................................................7.44

Substituting 7.44 into equation 7.41:

Qd = α0 + α1(π0 + π1I + U*) + α2I + U1

Let π2 = α0 + α1π0 and π3 = α1π1 + α2; then

Qd = π2 + π3I + w............................................7.45

Equations 7.44 and 7.45 are reduced-form equations, and OLS can be applied to estimate their parameters. The structural equations (7.41 and 7.42) contain five structural coefficients: α0, α1, α2, β0 and β1. But there are only four reduced-form coefficients (π0, π1, π2 and π3). Since the number of π's (four) is less than the number of structural coefficients (five), we cannot find unique solutions for all of them. The supply function, however, is independently identified, because in equilibrium its reduced form is

Qs = π2 + π3I + w

The supply equation (7.42) contains two structural parameters (β0 and β1), and the reduced form supplies two relations for them (β1 = π3/π1 and β0 = π2 - β1π0); that is why the supply function is identified.
But in the case of the demand function there are three structural coefficients (α0, α1 and α2), while its reduced form supplies only two coefficients. Since the coefficients available from the reduced form (7.45) are fewer than the structural coefficients of the demand equation (7.41), we conclude that the demand function is under-identified, whereas for the supply function the two available coefficients exactly match its two structural parameters, so it is just identified. In conclusion, the supply function is identified but the demand function is not, and on this basis one can say that the system as a whole is not identified. Suppose now that lagged price also enters the supply function:

Qd = α0 + α1Pt + α2It + U1 .....................7.47
Qs = β0 + β1Pt + β2Pt-1 + U2 ...................7.48
Equating demand and supply and solving for price gives

Pt = π0 + π1It + π2Pt-1 + Vt .....................7.49

Substitute this price value into either the demand or the supply equation and you will get

Qt = π3 + π4It + π5Pt-1 + wt .....................7.50

where the π's are combinations of the structural coefficients. The structural equations 7.47 and 7.48 consist of six structural coefficients (α0, α1, α2, β0, β1 and β2), and there are six reduced-form coefficients in equations 7.49 and 7.50 (π0, π1, π2, π3, π4 and π5). Since the number of structural coefficients equals the number of reduced-form coefficients, we can conclude that the system as a whole is identified.
Finally, suppose rainfall (R) also enters the model:

Qd = α0 + α1Pt + α2It + α3Rt + U1 .....................7.51
Qs = β0 + β1Pt + β2Pt-1 + U2 ...........................7.52

Solving for price gives

Pt = π0 + π1It + π2Rt + π3Pt-1 + Vt .....................7.53

Substitute Pt into the demand or supply function:

Qt = α0 + α1[π0 + π1It + π2Rt + π3Pt-1 + Vt] + α2It + α3Rt + U1

After simplification you will get

Qt = π4 + π5It + π6Rt + π7Pt-1 + wt .....................7.54

From equations 7.51 and 7.52 we have seven structural coefficients, but in equations 7.53 and 7.54 we have eight reduced-form coefficients. Since the number of reduced-form coefficients is greater than the number of structural coefficients, we can say that the system as a whole is over-identified.
Example 1. Qd = α0 + α1Pt + α2I + u1...........................................7.55
Qs = β0 + β1Pt + u2 ..................................................7.56

Consider also the system

y1 = 3y2 + 2x1 + x2 + u1
y2 = y3 + x3 + u2
y3 = y1 + y2 - 2x3 + u3

where it is known that the y's are endogenous and the x's are exogenous variables. To construct the rank condition, first transfer all the right-hand variables to the left-hand side and put the coefficients in a table:
-y1 + 3y2 + 0y3 + 2x1 + x2 + 0x3 + u1 = 0
0y1 - y2 + y3 + 0x1 + 0x2 + x3 + u2 = 0
y1 + y2 - y3 + 0x1 + 0x2 - 2x3 + u3 = 0

Ignoring the random terms, put the coefficients in table form:

Equations        y1    y2    y3    x1    x2    x3
1st equation     -1     3     0     2     1     0
2nd equation      0    -1     1     0     0     1
3rd equation      1     1    -1     0     0    -2
Now let's examine the identifiability of the second equation. To do so, follow these steps:
a) strike out the row of coefficients of equation 2;
b) strike out the columns in which equation 2 has a non-zero coefficient (delete the columns of the variables appearing in equation 2).

Striking out the row of equation 2 and the columns of y2, y3 and x3, we are left with

   y1    x1    x2
   -1     2     1
    1     0     0

c) From this remaining table, form the determinants of order (G - 1), where G is the number of equations in the system, and check whether at least one of them is non-zero.
If every determinant of order (G - 1) equals zero, the equation is under-identified.
If at least one determinant of order (G - 1) is non-zero, the equation is identified.
From the above table, with G - 1 = 2, we have the following determinants of order 2:

|-1  2|                 |-1  1|                 |2  1|
| 1  0| = 0 - 2 = -2,   | 1  0| = 0 - 1 = -1,   |0  0| = 0

Since we have two non-zero determinants, we conclude that the second equation is identified.
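The mechanical part of this check, computing every determinant of order (G - 1) from the surviving sub-table, is easy to automate; the sketch below reproduces the example just worked.

```python
# Sketch of the rank-condition check for equation 2 of the system above:
# after deleting equation 2's row and the columns of its variables, test
# whether any (G-1)x(G-1) submatrix has a non-zero determinant.
import numpy as np
from itertools import combinations

# remaining coefficients of equations 1 and 3 in the columns y1, x1, x2
A = np.array([[-1.0, 2.0, 1.0],
              [ 1.0, 0.0, 0.0]])
G = 3                                  # number of equations in the system
dets = [np.linalg.det(A[:, list(cols)])
        for cols in combinations(range(A.shape[1]), G - 1)]
identified = any(abs(d) > 1e-9 for d in dets)
print([round(float(d)) for d in dets],
      "->", "identified" if identified else "under-identified")
```

One non-zero determinant is enough; here two of the three are non-zero, matching the conclusion in the text.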
Example 2: Given a model whose coefficients are arranged in table form as

Equations        D1    P1    P2    Y     T     C     S
1st equation     -1     1     2     3     4     0     0
2nd equation      0     1     2     0     4     3    -1
3rd equation     -1     0     0     0     0     0     0

applying the same procedure to the supply equation yields a non-zero determinant (i.e. 3); hence the supply equation is identified by the rank condition. Checking the order condition,

(K - M) vs (G - 1)
(7 - 5) = 2 and (3 - 1) = 2

Since K - M = G - 1, the supply equation is exactly identified.
We have seen that applying OLS to a simultaneous-equation system produces biased and inconsistent parameters. But there is one situation where OLS can be applied appropriately even in the context of simultaneous equations: the recursive system.

Y1t = α0 + α1x1t + α2x2t + u1 .............................................7.57
Y2t = β0 + β1Y1t + β2x1t + β3x2t + u2 ................................7.58
Y3t = γ0 + γ1Y1t + γ2Y2t + γ3x1t + γ4x2t + u3 ......................7.59

In equation 7.57 only the endogenous variable appears on the left and only exogenous variables on the right, so OLS can be applied straightforwardly to this equation, given that all the OLS assumptions hold. In equation 7.58 we can apply OLS provided that Y1 and u2 are uncorrelated. Again, we can apply OLS to the last equation if both Y1 and Y2 are uncorrelated with u3. In this recursive system OLS can be applied to each equation separately, and we do not face a simultaneous-equation problem. The reason is clear: there is no interdependence among the endogenous variables. Y1 affects Y2, and Y2 in turn influences Y3, without being influenced by Y3 in return. In other words, each equation exhibits a unilateral causal dependence.
ILS involves the following steps.

Step 1: Obtain the reduced-form equations from the structural equations, i.e. express each endogenous variable as a function of the explanatory (exogenous) variables and a stochastic term.

Step 2: Apply OLS to the reduced-form equations individually. This is valid because the exogenous variables are uncorrelated with the stochastic term.

Step 3: Obtain estimates of the original structural coefficients from the estimated reduced-form coefficients of step 2. The name ILS derives from the fact that the structural coefficients are obtained indirectly from the OLS estimates of the reduced-form coefficients.

Example:

Qd = α0 + α1Pt + α2Yt + u1 ..........................7.60
Qs = β0 + β1Pt + u2 ….......................................7.61

where Qd = quantity demanded, Qs = quantity supplied, P = price and Y = income. Assume that Y is exogenous and that Q and Pt are endogenous variables. Take equation 7.60 and check whether it is identified:

(K - M) vs (G - 1)
(3 - 3) < (2 - 1)
0 < 1: the demand function is under-identified.

Take equation 7.61:

(K - M) vs (G - 1)
(3 - 2) = (2 - 1)
1 = 1: the supply function is just (exactly) identified.
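The order-condition arithmetic used in these checks can be collected in a small helper; K, M and G are as defined in the text (a sketch for verifying the examples, not part of the module).

```python
# Helper implementing the order condition: an equation is under-, exactly-,
# or over-identified as (K - M) is <, =, or > (G - 1), where K counts the
# variables in the whole model, M the variables in the equation, and G the
# number of equations (endogenous variables).
def order_condition(K: int, M: int, G: int) -> str:
    d = (K - M) - (G - 1)
    if d < 0:
        return "under-identified"
    if d == 0:
        return "exactly identified"
    return "over-identified"

print(order_condition(3, 3, 2))  # demand function 7.60
print(order_condition(3, 2, 2))  # supply function 7.61
```

The same helper reproduces the later checks in the chapter, e.g. the over-identified money-supply equation where K - M = 2 exceeds G - 1 = 1.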
To find the reduced form, set demand equal to supply:

α0 + α1Pt + α2Yt + u1 = β0 + β1Pt + u2
α1Pt - β1Pt = (β0 - α0) - α2Yt + u2 - u1
Pt(α1 - β1) = (β0 - α0) - α2Yt + u2 - u1
Pt = (β0 - α0)/(α1 - β1) - α2Yt/(α1 - β1) + (u2 - u1)/(α1 - β1)......7.62

where π0 = (β0 - α0)/(α1 - β1), π1 = -α2/(α1 - β1) and w = (u2 - u1)/(α1 - β1), so that

Pt = π0 + π1Yt + w.....................................7.63

Substitute equation 7.63 into 7.60:

Qd = α0 + α1(π0 + π1Yt + w) + α2Yt + u1

Let π2 = α0 + α1π0 and π3 = α1π1 + α2; then

Qd = π2 + π3Yt + vt............................................7.64

In equations 7.63 and 7.64 the π's are reduced-form coefficients and are non-linear combinations of the structural coefficients (the α's and β's). The reduced-form parameters (π's) can be estimated by OLS. Since the supply function is exactly identified, its parameters can be estimated uniquely from the reduced-form coefficients as follows:

β1 = π3/π1 and β0 = π2 - β1π0

From the estimated values π̂0, π̂1, π̂2 and π̂3 we obtain β̂1 = π̂3/π̂1 and β̂0 = π̂2 - β̂1π̂0. These are the parameters of the supply function estimated by ILS.
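The recovery of the supply parameters from the two reduced forms can be verified in a simulation; the market model and parameter values below are hypothetical, constructed only to exercise the ILS formulas just derived.

```python
# Simulation sketch of ILS for the just-identified supply curve
# Qs = b0 + b1*P (demand shifted by exogenous income Y): estimate the two
# reduced forms by OLS, then recover b1 = pi3/pi1 and b0 = pi2 - b1*pi0.
import numpy as np

rng = np.random.default_rng(4)
n = 5000
a0, a1, a2 = 100.0, -2.0, 0.5      # demand: Q = a0 + a1*P + a2*Y + u1
b0, b1 = 10.0, 3.0                 # supply: Q = b0 + b1*P + u2
Y = rng.uniform(100, 200, n)
u1 = rng.normal(0, 2, n)
u2 = rng.normal(0, 2, n)
P = (a0 - b0 + a2 * Y + u1 - u2) / (b1 - a1)   # market-clearing price
Q = b0 + b1 * P + u2

Z = np.column_stack([np.ones(n), Y])
pi0, pi1 = np.linalg.lstsq(Z, P, rcond=None)[0]  # reduced form of P
pi2, pi3 = np.linalg.lstsq(Z, Q, rcond=None)[0]  # reduced form of Q
b1_ils = pi3 / pi1
b0_ils = pi2 - b1_ils * pi0
print(f"ILS: b0 ≈ {b0_ils:.2f} (true 10), b1 ≈ {b1_ils:.2f} (true 3)")
```

Income shifts the demand curve while leaving supply fixed, which is exactly why the supply curve, and only the supply curve, can be traced out from the reduced forms.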
When the equation is over-identified we use the 2SLS method. Consider the following:

Y1t = α0 + α1Y2t + α2X1t + α3X2t + u1 ..........................7.65   (income function)
Y2t = β0 + β1Y1t + u2 .................................................7.66   (money-supply function)

where Y1 = income, Y2 = stock of money, X1 = investment and X2 = government expenditure. Y1 and Y2 are endogenous variables, and X1 and X2 are exogenous.

Take equation 7.65: K = 4, G = 2, M = 4.
(K - M) vs (G - 1)
(4 - 4) < (2 - 1): the income equation is not identified (under-identified).

Take equation 7.66: M = 2.
(K - M) vs (G - 1)
(4 - 2) > (2 - 1)
2 > 1: the money-supply equation is over-identified.

If one applies OLS to the money-supply equation, the estimates obtained will be inconsistent because of the correlation between Y1 and u2. Suppose, however, we can find a proxy for Y1 that is not correlated with u2; such a proxy is called an instrumental variable. With such a proxy you can apply OLS and estimate the money-supply equation. This instrumental variable can be obtained by two-stage least squares (2SLS), whose name indicates two successive applications of OLS. The process is explained as follows.

Step 1: To get rid of the likely correlation between Y1 and u2 (in equation 7.66), first regress Y1 on all the predetermined variables in the whole system (X1 and X2):

Y1t = π0 + π1X1t + π2X2t + et .....................7.67

Equation 7.67 is a reduced-form regression, because only exogenous variables appear on the right-hand side.
Equation 7.67 can be written as

Y1t = Ŷ1t + êt

where Ŷ1t is a linear combination of the non-stochastic X's and êt is a random component. Following OLS theory, Ŷ1t and êt are uncorrelated.

Step 2: The money-supply equation, which is over-identified, can be written as

Y2 = β0 + β1(Ŷ1t + êt) + u2
   = β0 + β1Ŷ1t + β1êt + u2
   = β0 + β1Ŷ1t + u*..........................................7.70

where u* = u2 + β1êt.

Comparing equations 7.70 and 7.66, they look very similar; the difference is that Y1 is replaced by Ŷ1. The advantage of the replacement is this: in the original money-supply equation Y1 is correlated with u2, rendering OLS inappropriate, but Ŷ1, being a combination of the exogenous variables alone, is not, so the replacement avoids the problem. As a result OLS can be applied to equation 7.70, which gives consistent estimates of the parameters of the money-supply function. The basic idea behind 2SLS is to purify the stochastic explanatory variable Y1 of the influence of the stochastic disturbance u2. This goal is achieved by:
1st, regressing Y1 on all the exogenous variables (X1 and X2);
2nd, obtaining the fitted values Ŷ1;
3rd, replacing Y1 by Ŷ1 and applying OLS.

The estimators obtained are consistent, i.e. they converge to their true values as the sample size increases indefinitely. To further illustrate 2SLS we use the following model.
Y1t = α0 + α1Y2t + α2X1t + α3X2t + u1 ..........................7.71
Y2t = β0 + β1Y1t + β2X3t + β3X4t + u2 ...........................7.72

where Y1 = income, Y2 = money supply, X1 = investment, X2 = government expenditure, X3 = money supply in the previous period and X4 = previous income. X1, X2, X3 and X4 are exogenous variables, and Y1 and Y2 are endogenous.

Take equation 7.71, the income equation: K = 6, G = 2, M = 4.
(K - M) vs (G - 1)
(6 - 4) > (2 - 1): the income equation is over-identified.

Take equation 7.72, the money-supply equation: K = 6, G = 2, M = 4.
(K - M) vs (G - 1)
(6 - 4) > (2 - 1): the money-supply equation is also over-identified.

Since both the income and money-supply equations are over-identified, we use 2SLS to estimate the coefficients, following these steps.

Step 1: Regress each endogenous variable (Y1 and Y2) on all the exogenous variables (X1, X2, X3 and X4):

Y1t = π0 + π1X1t + π2X2t + π3X3t + π4X4t + e1 ..........................7.73
Y2t = λ0 + λ1X1t + λ2X2t + λ3X3t + λ4X4t + e2 ...........................7.74

Obtain the fitted values Ŷ1 and Ŷ2 and replace the original Y1 and Y2 in equations 7.71 and 7.72 with them.
Step 2: Substituting,

Y1 = α0 + α1(Ŷ2 + ê2) + α2X1 + α3X2 + u1
Y2 = β0 + β1(Ŷ1 + ê1) + β2X3 + β3X4 + u2

For the income equation,

Y1 = α0 + α1Ŷ2 + α2X1 + α3X2 + u1* ................7.75, where u1* = α1ê2 + u1

and for the money-supply equation,

Y2 = β0 + β1Ŷ1 + β2X3 + β3X4 + u2* .................7.76, where u2* = β1ê1 + u2

Applying OLS to equations 7.75 and 7.76, the estimates obtained will be consistent.
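The two stages can be sketched in a simulation for the money-supply equation 7.66; the data-generating process and parameter values below are hypothetical, chosen only so that Y1 is correlated with u2.

```python
# Sketch of 2SLS for Y2 = b0 + b1*Y1 + u2 with exogenous x1, x2: stage 1
# regresses Y1 on (x1, x2); stage 2 regresses Y2 on the fitted values Y1_hat.
import numpy as np

rng = np.random.default_rng(5)
n = 5000
x1 = rng.uniform(0, 10, n)         # investment
x2 = rng.uniform(0, 10, n)         # government expenditure
u1 = rng.normal(0, 1, n)
u2 = rng.normal(0, 1, n)
b0, b1 = 2.0, 0.5
# illustrative structural solution for Y1, built so that cov(Y1, u2) != 0
Y1 = 5 + 1.5 * x1 + 0.8 * x2 + u1 + u2
Y2 = b0 + b1 * Y1 + u2

# stage 1: project Y1 on the exogenous variables
Z = np.column_stack([np.ones(n), x1, x2])
Y1_hat = Z @ np.linalg.lstsq(Z, Y1, rcond=None)[0]

# stage 2: OLS of Y2 on the purified regressor Y1_hat
b0_2sls, b1_2sls = np.linalg.lstsq(np.column_stack([np.ones(n), Y1_hat]),
                                   Y2, rcond=None)[0]

# naive OLS for comparison (inconsistent because cov(Y1, u2) != 0)
b1_ols = np.linalg.lstsq(np.column_stack([np.ones(n), Y1]), Y2, rcond=None)[0][1]
print(f"2SLS b1 ≈ {b1_2sls:.3f}, naive OLS b1 ≈ {b1_ols:.3f} (true 0.5)")
```

The naive OLS slope stays above the true value while the 2SLS slope does not, which is the "purification" of Y1 described in the text.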
Exercises for chapter seven
1) Consider the following two demand-and-supply models.
Model A:
Demand: Qt = α0 + α1Pt + α2Yt + U1
Supply: Qt = β0 + β1Pt + U2
Model B:
Demand: Qt = α0 + α1Pt + α2Yt + U1
Supply: Qt = β0 + β1Pt + β2Tt + U2
For each model:
a) Determine whether the demand and supply functions are exactly identified, over-identified or under-identified.
b) Find the reduced-form equations.
c) Derive the formulas for the structural parameters.
2) Given a model in which Y1, Y2 and Y3 are endogenous and X1, X2 and X3 are exogenous variables, discuss the identification of each equation of the model based on the order and rank conditions.