
Econometrics For Finance


Chapter 2: Regression Analysis

2.1. Population and Sample Regression Functions


Regression models describe the relationship between variables by fitting a line to the observed
data. Linear regression models use a straight line, while logistic and nonlinear regression models
use a curved line. Regression allows you to estimate how a dependent variable changes as the
independent variable(s) change. Regression models can be used to predict the value of the
dependent variable at certain values of the independent variable. However, this is only true for
the range of values where we have actually measured the response.
Simple linear regression is used to estimate the relationship between two quantitative variables.
It is a regression model that estimates the relationship between one independent variable and one
dependent variable using a straight line. If you have more than one independent variable,
use multiple linear regression instead.
You can use simple linear regression when you want to know:
1. How strong the relationship is between two variables (e.g. the relationship between rainfall
and soil erosion).
2. The value of the dependent variable at a certain value of the independent variable (e.g. the
amount of soil erosion at a certain level of rainfall).
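Most of the calculations in this chapter can be checked with a few lines of code. As a minimal sketch (using made-up income and happiness values purely for illustration, in Python with NumPy), a simple linear regression line can be fitted as follows:

import numpy as np

# Hypothetical data: income (X, in Birr) and happiness score (Y, scale 1-10).
X = np.array([15.0, 25.0, 40.0, 60.0, 75.0])
Y = np.array([2.0, 4.0, 5.0, 7.0, 8.0])

# OLS estimates of the straight line Y = b0 + b1*X.
b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
b0 = Y.mean() - b1 * X.mean()

print(f"Y_hat = {b0:.3f} + {b1:.3f} X")
print("predicted happiness at income 50:", b0 + b1 * 50)  # only valid inside the observed range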

Self Test: You are a social researcher interested in the relationship between income and
happiness. You survey 500 people whose incomes range from Birr15 to Birr75 and ask them to
rank their happiness on a scale from 1 to 10. Your independent variable (income) and dependent
variable (happiness) are both quantitative, so you can do a regression analysis to see if there is a
linear relationship between them.

Assumptions of simple linear regression


Simple linear regression is a parametric test, meaning that it makes certain assumptions about
the data. These assumptions are:

1) Linear Regression model. The regression model is linear in the parameters.

Yi = β0 + β1Xi + Ui for the population regression function

Yi = β̂0 + β̂1Xi + ûi for the sample regression function

2) X values are fixed in repeated sampling. Values taken by the regressor X are considered
fixed in repeated samples. More technically, X is assumed to be non stochastic
3) Zero mean value of disturbance Ui. Given the value of X, the mean, or expected, value of
the random disturbance term ui is zero. Technically, the conditional mean value of ui is zero.
4) Homoscedasticity or equal variance of Ui. Given the value of X, the variance of ui is the
same for all observations. That is, the conditional variances of Ui are identical.
5) No autocorrelation between the disturbances.
6) Zero covariance between Ui and Xi, i.e the disturbance U and explanatory variable X are
uncorrelated.
7) The number of observations n must be greater than the number of parameters to be
estimated. Alternatively, the number of observations n must be greater than the number of
explanatory variables.
8) Variability in X values. The X values in a given sample must not all be the same; var(X)
must be greater than zero.
9) The regression model is correctly specified. Alternatively, there is no model specification
bias or error in the model used in empirical analysis.

Self Test 2: You think there is a linear relationship between cured meat consumption and the
incidence of colorectal cancer in Ethiopia. However, you find that much more data have
been collected at high rates of meat consumption than at low rates of meat consumption, with
the result that there is much more variation in the estimate of cancer rates at the low range
than at the high range. Because the data violate the assumption of homoscedasticity, simple
linear regression is not appropriate, so you perform a Spearman rank correlation test instead.

2.4. Parameter Estimation: Least Squares

Properties of least-squares estimators: the Gauss–Markov theorem.

Gauss–Markov Theorem: Given the assumptions of the classical linear regression model, the
least-squares estimators, in the class of unbiased linear estimators, have minimum variance, that
is, they are BLUE.
1. It is linear, that is, a linear function of a random variable, such as the dependent variable
Y in the regression model.

2. It is unbiased, that is, its average or expected value, E(β̂1), is equal to the true value, β1.

3. It has minimum variance in the class of all such linear unbiased estimators; an unbiased
estimator with the least variance is known as an efficient estimator.
Alternative formula:

β̂1 = (nΣXiYi − ΣXiΣYi)/(nΣXi² − (ΣXi)²) = Σxiyi/Σxi², where xi = Xi − X̄ and yi = Yi − Ȳ, and β̂0 = Ȳ − β̂1X̄

Example: Given the following data, estimate the regression coefficients.


Yi   Xi   XiYi   Xi²   Yi²

3    2    6     4     9
4    3    12    9     16
5    4    20    16    25
8    5    40    25    64
Sum: 20   14    78    54    114

Solution:

β̂1 = (nΣXiYi − ΣXiΣYi)/(nΣXi² − (ΣXi)²)

β̂1 = (4(78) − (14)(20))/(4(54) − (14)²) = 32/20 = 1.60

β̂0 = Ȳ − β̂1X̄

β̂0 = 5 − 1.6(3.50)

β̂0 = −0.60
Therefore the regression equation will be:

Ŷi = β̂0 + β̂1Xi

Ŷi = −0.60 + 1.60Xi
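The same estimates can be reproduced numerically; the sketch below (Python with NumPy, for illustration only) applies the formulas above to the example data:

import numpy as np

Y = np.array([3.0, 4.0, 5.0, 8.0])
X = np.array([2.0, 3.0, 4.0, 5.0])
n = len(Y)

# beta1_hat = (n*sum(XY) - sum(X)*sum(Y)) / (n*sum(X^2) - (sum(X))^2)
b1 = (n * np.sum(X * Y) - X.sum() * Y.sum()) / (n * np.sum(X ** 2) - X.sum() ** 2)
b0 = Y.mean() - b1 * X.mean()

print(b1, b0)  # 1.6 and -0.6, matching the hand calculation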

2.5. Covariance, correlation coefficient, coefficient of determination (r²)


2.5.1) Covariance (CovXY): It shows the direction of the linear relationship between X and Y, i.e.
whether X and Y are positively related, negatively related, or unrelated, or how the two
variables change together. Its value ranges from negative infinity to positive infinity.

Formula for the population covariance: Cov(X,Y) = Σ(Xi − μX)(Yi − μY)/N, where N is the population size

Formula for the sample covariance: Cov(X,Y) = Σ(Xi − X̄)(Yi − Ȳ)/(n − 1), where n is the sample size

Example: Given the following sample data, calculate the covariance between X and Y.
Yi   Xi   (Xi − X̄)   (Yi − Ȳ)   (Xi − X̄)(Yi − Ȳ)

3    2    -1.5    -2    3
4    3    -0.5    -1    0.5
5    4    0.5     0     0
8    5    1.5     3     4.5

Sum: 20   14                  8

Solution:

Cov(X,Y) = Σ(Xi − X̄)(Yi − Ȳ)/(n − 1) = 8/3
Cov(X,Y) ≈ 2.67
Interpretation: There is a positive relationship between X and Y, i.e. the two variables
change in the same direction.
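A quick numerical check of this covariance (an illustrative Python/NumPy sketch) is:

import numpy as np

Y = np.array([3.0, 4.0, 5.0, 8.0])
X = np.array([2.0, 3.0, 4.0, 5.0])

# Sample covariance with the n - 1 divisor, as in the formula above.
cov_xy = np.sum((X - X.mean()) * (Y - Y.mean())) / (len(X) - 1)
print(cov_xy)                # 8/3, about 2.67
print(np.cov(X, Y)[0, 1])    # NumPy's built-in sample covariance gives the same value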
2.5.2 Correlation coefficient (rXY)
It shows both the direction and the magnitude of the linear relationship between the two variables. Its
value ranges between −1 and +1. It is unitless.
 When rxy is −1, there is a perfect negative relationship between X and Y.
 When rxy is +1, there is a perfect positive relationship between X and Y.
 When rxy is 0, there is no linear relationship or correlation between X and Y.
 When rxy is close to 0, there is a weak correlation between X and Y.
 When rxy is close to +1, there is a strong positive correlation between X and Y.
 When rxy is close to −1, there is a strong negative correlation between X and Y.
 When rxy is about 0.50, there is a moderate correlation between X and Y.
Some of the properties of r are as follows:
1. It can be positive or negative, the sign depending on the sign of the numerator term
(the sample covariance of the two variables).
2. It lies between the limits of −1 and +1; that is, −1 ≤ r ≤ 1
3. It is symmetrical in nature; that is, the coefficient of correlation between X and Y(rXY) is the
same as that between Y and X(rYX).
4. It is independent of the origin and scale
5. If X and Y are statistically independent, the correlation coefficient between them is zero; but
if r = 0, it does not mean that two variables are independent. In other words, zero correlation
does not necessarily imply independence
6. It is a measure of linear association or linear dependence only; it has no meaning for
describing nonlinear relations.
7. Although it is a measure of linear association between two variables, it does not necessarily
imply any cause-and-effect relationship.

Formula for the correlation coefficient r(X,Y):


r(X,Y) = (nΣXiYi − ΣXiΣYi) / √[(nΣXi² − (ΣXi)²)(nΣYi² − (ΣYi)²)]

 When r(X,Y) > 0, there is a positive relationship between X and Y

 When r(X,Y) < 0, there is a negative relationship between X and Y


 When r(X,Y) = 0, there is no linear relationship between X and Y


Correlation patterns

Example: Using the following sample data, calculate the correlation coefficient (rXY).


Yi   Xi   XiYi   Xi²   Yi²

3    2    6     4     9
4    3    12    9     16
5    4    20    16    25
8    5    40    25    64
Sum: 20   14    78    54    114
Solution:

rXY = (nΣXiYi − ΣXiΣYi) / √[(nΣXi² − (ΣXi)²)(nΣYi² − (ΣYi)²)]
Numerator: 4(78) − (14)(20) = 312 − 280 = 32

Denominator: √[(4(54) − (14)²)(4(114) − (20)²)] = √(20 × 56) = √1120 ≈ 33.47

rXY = 32/33.47 ≈ 0.956

Interpretation: There is a strong positive correlation between X and Y
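The value can be confirmed directly; a minimal Python/NumPy sketch for the same data is:

import numpy as np

Y = np.array([3.0, 4.0, 5.0, 8.0])
X = np.array([2.0, 3.0, 4.0, 5.0])
n = len(Y)

num = n * np.sum(X * Y) - X.sum() * Y.sum()                    # 32
den = np.sqrt((n * np.sum(X ** 2) - X.sum() ** 2) *
              (n * np.sum(Y ** 2) - Y.sum() ** 2))             # sqrt(20 * 56)
print(num / den)                   # about 0.956
print(np.corrcoef(X, Y)[0, 1])     # same value from NumPy directly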


Example-2: The following data shows the score of 12 students for Accounting and Statistics examinations.

a) Calculate a simple correlation coefficient


b) Fit a regression equation of Statistics on Accounting using least square estimates.
c) Predict the score of Statistics if the score of accounting is 85.
No   Accounting (x)   Statistics (y)   x²   y²   xy
1 74 81 5476 6561 5994
2 93 86 8649 7396 7998
3 55 67 3025 4489 3685
4 41 35 1681 1225 1435
5 23 30 529 900 690
6 92 100 8464 10000 9200
7 64 55 4096 3025 3520
8 40 52 1600 2704 2080
9 71 76 5041 5776 5396
10 33 24 1089 576 792
11 30 48 900 2304 1440
12 71 87 5041 7569 6177
Total 687 741 45591 52525 48407
Mean 57.25 61.75
A.
r = (nΣxy − ΣxΣy)/√[(nΣx² − (Σx)²)(nΣy² − (Σy)²)] = (12(48407) − (687)(741))/√[(12(45591) − (687)²)(12(52525) − (741)²)] ≈ 0.92
The coefficient of correlation (r) has a value of 0.92. This indicates that the two variables are
strongly positively correlated (Y increases as X increases).
B.
β̂1 = (nΣxy − ΣxΣy)/(nΣx² − (Σx)²) = 71817/75123 ≈ 0.956 and β̂0 = ȳ − β̂1x̄ = 61.75 − 0.956(57.25) ≈ 7.02, so

Ŷ = 7.02 + 0.956X is the estimated regression line.

C. Insert X = 85 in the estimated regression line: Ŷ = 7.02 + 0.956(85) ≈ 88.3.
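For readers who want to verify these numbers, a short Python/NumPy sketch (illustrative only) reproduces parts (a)–(c):

import numpy as np

# Accounting (x) and Statistics (y) scores of the 12 students.
x = np.array([74, 93, 55, 41, 23, 92, 64, 40, 71, 33, 30, 71], dtype=float)
y = np.array([81, 86, 67, 35, 30, 100, 55, 52, 76, 24, 48, 87], dtype=float)

r = np.corrcoef(x, y)[0, 1]                                           # about 0.92
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

print(r)               # correlation coefficient
print(b0, b1)          # about 7.02 and 0.956
print(b0 + b1 * 85)    # predicted Statistics score for an Accounting score of 85, about 88.3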

2.5.3. Coefficient of determination (r2)


The coefficient of determination r² (one-variable case) is a summary measure that tells how well
the sample regression line fits the data. The overall goodness of fit of the regression model is
measured by the coefficient of determination, r². It tells what proportion of the variation in the
dependent variable, or regressand, is explained by the explanatory variable, or regressor. This r²
lies between 0 and 1; the closer it is to 1, the better the fit. r² measures the proportion or
percentage of the total variation in Y explained by the regression model. r² is a more meaningful
measure than r, for the former tells us the proportion of variation in the dependent variable
explained by the explanatory variable(s) and therefore provides an overall measure of the extent
to which the variation in one variable determines the variation in the other. The latter does not
have such value.
 If r2 =1 , total variation of Y is completely explained by the estimated regression line .
 If r2 =0 , total variation of Y is not explained by the estimated regression line. In general
r2 measures the percentage of variation in Y explained by the regression function.
Two properties of r2 may be noted:
1) It is a nonnegative quantity.
2) Its limits are 0 ≤ r2 ≤ 1.

The total variation in Y can be decomposed as Σ(Yi − Ȳ)² = Σ(Ŷi − Ȳ)² + Σûi², where

Total variation in Y (TSS) = Σ(Yi − Ȳ)²

Explained variation of Y (ESS) = Σ(Ŷi − Ȳ)²

Unexplained variation of Y (RSS) = Σûi², that is to mean,

TSS = ESS + RSS
Where TSS = total sum of squares, ESS = explained sum of squares and RSS = residual sum of
squares.

Dividing both sides by TSS: 1 = ESS/TSS + RSS/TSS

r² = ESS/TSS = 1 − RSS/TSS

In deviation form, r² = (Σxiyi)² / (Σxi²·Σyi²),
which is the square of the correlation coefficient.

Example: Using the following sample data, calculate the coefficient of determination (r²).


Yi   Xi   xi = Xi − X̄   yi = Yi − Ȳ   xi²   yi²   xiyi

3    1    -2    -2    4    4    4
4    2    -1    -1    1    1    1
5    3    0     0     0    0    0
8    6    3     3     9    9    9
Sum: 20   12    0     0    14   14   14

Solution

r² = (Σxiyi)² / (Σxi²·Σyi²) = (14)² / (14 × 14) = 1

Interpretation: 100 percent of the total variation in Y is explained by the estimated


regression line, i.e. the data fit the model perfectly.
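A one-line check (Python/NumPy, for illustration) confirms the perfect fit:

import numpy as np

Y = np.array([3.0, 4.0, 5.0, 8.0])
X = np.array([1.0, 2.0, 3.0, 6.0])

x, y = X - X.mean(), Y - Y.mean()
r2 = np.sum(x * y) ** 2 / (np.sum(x ** 2) * np.sum(y ** 2))
print(r2)   # 1.0: every observation lies exactly on the line Y_hat = 2 + X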
2.6 .Hypotheses Testing
Hypothesis testing is an act in statistics whereby an analyst tests an assumption regarding a
population parameter. The methodology employed by the analyst depends on the nature of the
data used and the reason for the analysis. Hypothesis testing is used to assess the plausibility of a
hypothesis by using sample data. Such data may come from a larger population, or from a data-
generating process. The word "population" will be used for both of these cases in the following
descriptions. In hypothesis testing, an analyst tests a statistical sample, with the goal of providing
evidence on the plausibility of the null hypothesis. Statistical analysts test a hypothesis by
measuring and examining a random sample of the population being analyzed. All analysts use a
random population sample to test two different hypotheses: the null hypothesis and the
alternative hypothesis. The null hypothesis is usually a hypothesis of equality between
population parameters; e.g., a null hypothesis may state that the population mean return is equal
to zero. The alternative hypothesis is effectively the opposite of the null hypothesis (e.g., the
population mean return is not equal to zero). Thus, they are mutually exclusive, only one
can be true, and one of the two hypotheses will always be true.

A statistical hypothesis is a hypothesis that is testable on the basis of observed data modeled as
the realized values taken by a collection of random variables. A set of data is modeled as being
realized values of a collection of random variables having a joint probability distribution in some
set of possible joint distributions. The hypothesis being tested is exactly that set of possible
probability distributions. A statistical hypothesis test is a method of statistical inference.
An alternative hypothesis is proposed for the probability distribution of the data, either explicitly
or only informally. The comparison of the two models is deemed statistically significant if,
according to a threshold probability (the significance level), the data would be unlikely to occur if
the null hypothesis were true. A hypothesis test specifies which outcomes of a study may lead to
a rejection of the null hypothesis at a pre-specified level of significance, while using a pre-
chosen measure of deviation from that hypothesis (the test statistic, or goodness-of-fit measure).
The pre-chosen level of significance is the maximal allowed "false positive rate". One wants to
control the risk of incorrectly rejecting a true null hypothesis.

The process of distinguishing between the null hypothesis and the alternative hypothesis is aided
by considering two types of errors.

A Type I error: occurs when a true null hypothesis is rejected.


A Type II error: occurs when a false null hypothesis is not rejected.

Hypothesis tests based on statistical significance are another way of expressing confidence
intervals (more precisely, confidence sets). In other words, every hypothesis test based on
significance can be obtained via a confidence interval, and every confidence interval can be
obtained via a hypothesis test based on significance. Significance-based hypothesis testing is the
most common framework for statistical hypothesis testing. An alternative framework for
statistical hypothesis testing is to specify a set of statistical models, one for each candidate
hypothesis, and then use model selection techniques to choose the most appropriate model. The
most common selection techniques are based on either Akaike information criterion or Bayesian
information criterion.

There are 5 main steps in hypothesis testing

1. State your research hypothesis as a null (Ho) and alternate (Ha) hypothesis

After developing your initial research hypothesis (the prediction that you want to investigate), it
is important to restate it as a null (Ho) and alternate (H1) or (Ha) hypothesis so that you can test it
mathematically. The alternate hypothesis is usually your initial hypothesis that predicts a
relationship between variables. The alternative hypothesis (H1) is the statement that there is an
effect or difference. The null hypothesis is a prediction of no relationship between the variables
you are interested in. The null hypothesis (H0) is a statement of no effect, relationship, or
difference between two or more groups or factors. In research studies, a researcher is usually
interested in disproving the null hypothesis. You want to test whether there is a relationship
between income and consumption expenditure. Based on your knowledge of Economics, you
formulate a hypothesis that household consumption expenditure is associated or related with
income.

1) A one-tailed (or one-sided) hypothesis: It specifies the direction of the association between the
predictor and outcome variables.

H0: βi ≤ 0

H1: βi > 0
Example: To test the relationship between income and consumption, the one-tailed hypotheses would be
developed as follows;

H0: There is no positive association between income and consumption spending.

H1: There is a positive association between income and consumption spending.

A two-tailed hypothesis states: It specifies only that an association exists; it does not specify the
direction.

H0: βi=0

H1: βi ≠0

Example: To predict the relationship between income and consumption, the two tailed hypothesis will be
developed as follows;

H0: There is no relationship between household income and expenditure.

H1: There is a relationship between household income and Expenditure.

2. Collect data in a way designed to test the hypothesis: For a statistical test to be valid, it is
important to perform sampling and collect data in a way that is designed to test your hypothesis (as a
rough guide, a sample of at least 25 observations should be included). If your data are not
representative, you cannot make statistical inferences about the population you are interested in.

3. Perform an appropriate statistical test: There are a variety of statistical tests available, but
they are all based on the comparison of within-group variance (how spread out the data is
within a category) versus between-group variance (how different the categories are from
one another).If the between-group variance is large enough that there is little or no overlap
between groups, then your statistical test will reflect that by showing a low p-value. This
means it is unlikely that the differences between these groups came about by chance.
Alternatively, if there is high within-group variance and low between-group variance, then
your statistical test will reflect that with a high p-value. This means it is likely that any
difference you measure between groups is due to chance. Your choice of statistical test will
be based on the type of data you collected.
4. Decide whether to reject or fail to reject your null hypothesis: The decision rule is a
statement that tells under what circumstances to reject the null hypothesis .Based on the
outcome of your statistical test, you will have to decide whether to reject or fail to reject your
null hypothesis. In most cases you will use the p-value generated by your statistical test to
guide your decision. And in most cases, your cutoff for rejecting the null hypothesis will be
0.05 – that is, when there is a less than 5% chance that you would see these results if the null
hypothesis were true.

5. Present the findings in your results and discussion section: The results of hypothesis
testing will be presented in the results and discussion sections of your research paper. In
the results section you should give a brief summary of the data and a summary of the results
of your statistical test (for example, the estimated relationship between income and consumption
expenditure between group means and associated p-value). In the discussion, you can discuss
whether your initial hypothesis was supported by your results or not. In the formal language
of hypothesis testing, we talk about rejecting or failing to reject the null hypothesis. You
will probably be asked to do this in your statistics assignments. If the p-value is less than the
chosen level of significance (1%, 5% or 10%), we can reject H0 and accept H1.

Table 1: Summary of Type I and Type II errors

Decision                True in population: association exists     True in population: no association


Reject H0               Correct decision                            Type I error
Fail to reject H0       Type II error                               Correct decision

First-order tests: There are different types of hypothesis tests, such as the standard error test, the Student's t-test,
the Z-test, the confidence interval, and the p-value.

1) Standard error test: This test is used for judging the statistical reliability of the estimates of the
regression coefficients. It helps us decide whether the estimates β̂0 and β̂1 are significantly different
from zero, i.e. whether the sample from which they have been estimated might have come from a
population whose true parameters are zero.

var(β̂0) = σ²·ΣXi²/(n·Σxi²) and var(β̂1) = σ²/Σxi², but σ² is unknown, so we use σ̂² = Σûi²/(n − 2)


Thus, the unbiased estimators of var(β̂0) and var(β̂1) will be:

var(β̂0) = σ̂²·ΣXi²/(n·Σxi²)

var(β̂1) = σ̂²/Σxi²

S.E.(β̂0) = √var(β̂0). Thus,

S.E.(β̂0) = √(σ̂²·ΣXi²/(n·Σxi²))

S.E.(β̂1) = √var(β̂1) = √(σ̂²/Σxi²), where xi = Xi − X̄ denotes deviations from the mean.

Hypothesis testing

H0: βi = 0

H1: βi ≠ 0

Decision rule
 When S.E.(β̂i) < β̂i/2, the decision rule is to reject H0 and accept the alternative hypothesis

(H1), which implies that β̂i is statistically significant.

 When S.E.(β̂i) > β̂i/2, the decision rule is to accept H0 and reject the alternative hypothesis

(H1), which implies that β̂i is statistically not significant.

Example: Given the following regression results, test the hypothesis H0: β1 = 0 against H1: β1 ≠ 0.

Yi = 0.90 + 1.045Xi + Ui

S.E. = (0.10) (0.027)

S.E.(β̂1) = 0.027 and β̂1/2 = 1.045/2 ≈ 0.52

Decision: Since 0.027 < 0.52, the decision is to reject H0 and accept the alternative hypothesis, and

hence Xi (β̂1) significantly influences Y.

2) Students T test
Student T-test is a type of inferential statistic used to determine if there is a significant difference
between the means of two groups, which may be related in certain features. The t-test is one of many
tests used for the purpose of hypothesis testing in statistics. Calculating a t-test requires three key data
values. They include the difference between the mean values from each data set (called the mean
difference), the standard deviation of each group, and the number of data values of each group. There
are several different types of t-test that can be performed depending on the data and type of analysis
required. Essentially, a t-test allows us to compare the average values of the two data sets and
determine if they came from the same population. Mathematically, the t-test takes a sample from
each of the two sets and establishes the problem statement by assuming a null hypothesis that the two
means are equal. Based on the applicable formulas, certain values are calculated and compared
against the standard values, and the assumed null hypothesis is accepted or rejected accordingly. If
the null hypothesis qualifies to be rejected, it indicates that data readings are strong and are probably
not due to chance. The t-test is just one of many tests used for this purpose.

T-Test Assumptions

1) The first assumption made regarding t-tests concerns the scale of measurement. The
assumption for a t-test is that the scale of measurement applied to the data collected
follows a continuous or ordinal scale, such as the scores for an IQ test

2) The second assumption made is that of a simple random sample, that the data is
collected from a representative, randomly selected portion of the total population.

3) The third assumption is the data, when plotted, results in a normal distribution, bell-
shaped distribution curve.

4) The final assumption is the homogeneity of variance. Homogeneous, or equal,


variance exists when the standard deviations of samples are approximately equal.
This test is used to test the significance of the parameters. The calculated t statistic is

t* = β̂i / S.E.(β̂i). We must compare t* with the critical value tc obtained from the t-table.

Decision rule: If |t*| > tc, reject H0 and accept H1; X significantly influences Y.

Example: Given the regression function below test

Yi= 0.90+1.045Xi +Ui and assume n is 20


S.E. = (0.10) (0.027). Find the calculated t value (t*).

t* = β̂1 / S.E.(β̂1)

t* = 1.045/0.027 ≈ 38.7

The critical value is read from the Student's t-table at α/2 = 0.025 with n − k degrees of freedom

(where k is the number of parameters). The degrees of freedom are 20 − 2 = 18: move down the


table to 18 degrees of freedom and across to α/2 = 0.025 (α = 5%), and at that point the critical
value from the Student's t-table is 2.101.

Decision rule: t* (calculated) = 38.7 > t critical value = 2.101, so reject H0 and accept H1; X significantly influences
Y
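The same test can be carried out numerically; the following Python sketch (using SciPy for the critical value, purely as an illustration) mirrors the calculation above:

from scipy import stats

b1, se_b1, n, k = 1.045, 0.027, 20, 2

t_star = b1 / se_b1                            # about 38.7
t_crit = stats.t.ppf(1 - 0.05 / 2, n - k)      # about 2.101 with 18 degrees of freedom
print(t_star, t_crit)
print("reject H0" if abs(t_star) > t_crit else "fail to reject H0")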

3) Constructing a Confidence Interval Test

β̂i ± tα/2 · S.E.(β̂i), where tα/2 is obtained from the t-table

Example: Given the regression function below, test

H0: β1 = 0    H1: β1 ≠ 0

Yi = 0.90 + 1.045Xi + Ui, and assume n is 20

S.E. = (0.10) (0.027)

1.045 ± (2.101)(0.027) = 1.045 ± 0.0567


(1.045 − 0.0567, 1.045 + 0.0567)
(0.9883, 1.1017)
Testing at this level of significance with the constructed confidence interval: β1 = 0 is not included

in the interval, so we can reject H0 and accept H1; hence X (β̂1) significantly

influences Y.
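A minimal sketch of the same confidence interval calculation (Python with SciPy, for illustration only):

from scipy import stats

b1, se_b1, n, k = 1.045, 0.027, 20, 2
t_crit = stats.t.ppf(0.975, n - k)             # about 2.101

lower, upper = b1 - t_crit * se_b1, b1 + t_crit * se_b1
print(lower, upper)                            # about (0.988, 1.102)
print("significant" if not (lower <= 0 <= upper) else "not significant")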
4) Z-Test
This test is based on the standard normal distribution. It is applicable when ;
1) When the population variance is known.
2) When the population variance is unknown but the sample size is greater than or equal to 30
(i.e n≥30).

The calculated Z statistic for a regression coefficient β̂i is

Z* = β̂i / S.E.(β̂i)

More generally, for an estimator θ̂ with hypothesized value θ0 and standard error S.E.(θ̂),

Z* = (θ̂ − θ0) / S.E.(θ̂)

To find the Z critical value from the area under the standard normal curve at the 5% level of
significance, use α/2 = 0.025. Locate the tail probability 0.025 in the table: combining the row value
1.9 with the column value 0.06 gives the critical value from the Z table, 1.96. That is to say,

P(Z > Z0.025) = 0.025 implies that

P(0 < Z < Z0.025) = 0.475, so from the standard normal table Z0.025 = 1.96.
Decision Rule: When |Z*| (calculated) > the Z critical value (1.96) obtained from the table, it is
improbable that the true slope βi equals zero. That is to say, we can reject H0 and
accept H1, so we can say that X significantly influences Y.

5) The p-value: It is also called the exact level of significance. The p value (i.e., probability value),
also known as the observed or exact level of significance or the exact probability of committing a
Type I Error is another method of hypothesis testing. The p value is defined as the lowest significance
level at which a null hypothesis can be rejected. The p value is the probability of obtaining a test statistic
as extreme as, or more extreme than, the observed sample value, given that H0 is true. Even though the
conventionally used levels of significances are 1% and 5%, in economics the accepted level of
significances are 1%, 5% and 10%. Since, both 1% and 10% are somewhat extreme as a default we
use 5% level of significance. This P- value is the easiest way and the most important statistical test
that we use to test statistical significance of our parameters. Although the probability value shows the
exact level of significance, we use the acceptable level of significance. If probability value is less than
or equal to 1%, we take it as 1% level of significance. If probability value is greater than 1% but less
than or equal to 5%, we take it as 5% level of significance. If probability value is greater than 5% but
less than or equal to 10%, we take it as 10% level of significance. If probability value is greater than
10%, we take it as not statistically significant. As a rule of thumb, when the p-value is smaller than 0.01,
the result is called statistically very significant. When the p-value is smaller than 0.05, the result is
called statistically significant. When the P-value is greater than 0.1 result is considered statistically
not significant.

Decision Rule

 If the probability value of P is less than 0.01, the parameter is statistically significant at the 1% level of significance.

 If the probability value of P is less than 0.05, the parameter is statistically significant at the 5% level of
significance.

 If the probability value of P is less than 0.10, the parameter is statistically significant at the 10% level of
significance.

II) Second-order tests: We test whether the assumptions of the classical linear regression model are violated or not. The
common ones are normality, homoscedasticity, multicollinearity, and autocorrelation.

2.7. Forecast
Forecasting is the process of making predictions based on past and present data and most
commonly by analysis of trends. A common place example might be estimation of some variable
of interest at some specified future date. Prediction is a similar, but more general term.
Once a linear relationship is defined the independent variables can be used for forecasting the
dependent variable.

Ŷi = β̂0 + β̂1Xi

where β̂0 is the intercept and β̂1 is the slope of the estimated regression line.

Example: Given the following data, assume that household income for next year will be Birr 10;
the forecasted consumption expenditure for next year is computed as follows:
Household consumption (Yi)   Income (Xi)   XiYi   Xi²   Yi²

3    2    6     4     9
4    3    12    9     16
5    4    20    16    25
8    5    40    25    64
Sum: 20   14    78    54    114
Solution:

β̂1 = (nΣXiYi − ΣXiΣYi)/(nΣXi² − (ΣXi)²)

β̂1 = (4(78) − (14)(20))/(4(54) − (14)²) = 32/20 = 1.60
β̂0 = Ȳ − β̂1X̄

β̂0 = 5 − 1.6(3.50)

β̂0 = −0.60

Therefore the regression equation will be:

Ŷi = β̂0 + β̂1Xi

Ŷi = −0.60 + 1.60Xi

Ŷi = −0.60 + 1.60(10)

Ŷi = −0.60 + 16

Ŷi = 15.40
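The forecast can be reproduced with a few lines of code; the sketch below (Python/NumPy, illustrative only) re-estimates the line and plugs in next year's income:

import numpy as np

Y = np.array([3.0, 4.0, 5.0, 8.0])   # household consumption
X = np.array([2.0, 3.0, 4.0, 5.0])   # income

b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
b0 = Y.mean() - b1 * X.mean()

income_next_year = 10.0
print(b0 + b1 * income_next_year)    # -0.60 + 1.60*10 = 15.40
# Note: Birr 10 lies outside the sample range (2 to 5), so this forecast is an extrapolation.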
CHAPTER THREE

3. MULTIPLE REGRESSION ANALYSIS

3.1. Multivariate Case of the Classical Linear Regression Model


The Multiple Linear Regression Model (MLRM) is an analysis procedure used with more than one
explanatory variable. When there is more than one independent variable, the regression is
termed multiple (or multivariate) linear regression. Multiple linear regression is
used to estimate the relationship between two or more independent variables and one dependent
variable. You can use multiple linear regression when you want to know:

1. How strong the relationship is between two or more independent variables and one
dependent variable (e.g. how rainfall, temperature, and amount of fertilizer added affect crop
growth).
2. The value of the dependent variable at a certain value of the independent variables (e.g. the
expected yield of a crop at certain levels of rainfall, temperature, and fertilizer addition).

Y = β0 + β1X1 + β2X2 + . . . + βkXk + Ui


Where Y is an observed value of the response variable for a particular observation in the
population
β0 = the constant term (equivalent to the "y-intercept")
βi = the coefficient of the i-th explanatory variable (i = 1, 2, …, k)
Xi = a value of the i-th explanatory variable for a particular observation (i = 1, 2, …, k)
Ui = the disturbance term for the particular observation in the population
The regression coefficient βi is interpreted as the expected change in Y associated with a 1-unit
increase in Xi while the other explanatory variables are held fixed. Other coefficients are interpreted in similar fashion.
Thus, these coefficients are called partial or adjusted regression coefficients. In contrast, the
simple regression slope is called the marginal (or unadjusted) coefficient.
Multiple linear regression evaluates the relative effect of these explanatory, or independent,
variables on the dependent variable when holding all the other variables in the model constant.
In multiple linear regressions, the model calculates the line of best fit that minimizes the
variances of each of the variables included as it relates to the dependent variable. Because it fits a
line, it is a linear model. There are also non-linear regression models involving multiple
variables, such as logistic regression, quadratic regression, and probit models.
Why is there a need for more than one predictor variable?
 More than one variable influences a response variable.
 Predictors may themselves be correlated
3.2. Relaxing the Classical Linear Regression Model (CLRM) Basic Assumptions

Assumptions of Multiple Regression Model


In order to specify our multiple linear regression model and proceed with our analysis of
this model, some assumptions are compulsory. These assumptions are the same as in the
single explanatory variable model developed earlier, except for the additional assumption of no perfect
multicollinearity. These assumptions are:
1. Randomness of the error term: The variable u is a real random variable.
2. Zero mean of the error term: E(Ui) = 0.
3. Homoscedasticity or homogeneity of variance: the size of the error in our prediction
does not change significantly across the values of the independent variables. The variance of
each Ui is the same for all values of the explanatory variables,

i.e. Var(Ui) = σ² (constant)

4. Normality of u: The disturbances follow the normal distribution, i.e. the values of each Ui are
normally distributed,
i.e. Ui ~ N(0, σ²)
5. No auto or serial correlation: The value of Ui (corresponding to Xi) is independent
of the value of any other Uj (corresponding to Xj) for i ≠ j,
i.e. Cov(Ui, Uj) = 0 for i ≠ j
6. Independence of Ui and Xi: Every disturbance term is independent of the explanatory
variables, i.e. Cov(Ui, Xi) = 0.
This condition is automatically fulfilled if we assume that the values of the X's are a set
of fixed numbers in all (hypothetical) samples.

7. No perfect multicollinearity: there is no exact (and ideally no strong) linear relationship between the independent variables.


This assumption is an additional assumption for the multiple linear regression model, unlike
the simple linear regression model: the explanatory variables are not perfectly linearly
correlated. We cannot exhaustively list all the assumptions, but the above assumptions are
some of the basic ones that enable us to proceed with our analysis.
8. Linearity: the line of best fit through the data points is a straight line, rather than a curve
or some sort of grouping factor. There is a linear relationship between both the dependent
and independent variables.
9. The number of observations n must be greater than the number of parameters to be
estimated. Alternatively, the number of observations n must be greater than the number of
explanatory variables.
10. Variability in Xi values. The Xi values in a given sample must not all be the same;
var(Xi) must be greater than zero.
11. The regression model is correctly specified. Alternatively, there is no model
specification bias or error in the model used in empirical analysis.
12. Independence in Xi: the values of the regressors are fixed in repeated sampling (or, if stochastic, are distributed independently of the disturbance term).
3.3. Selection of Models

When there are many predictors (with many possible interactions), it can be difficult to find a
good model: which main effects do we include, and which interactions do we include? Model
selection tries to simplify this difficult task. This is an "unsolved" problem in statistics:
there are no universally good procedures for finding the "best" model. However, to implement
model selection, we need some basic criteria, such as:

A. R2: not a good criterion. Always increase with model size –> “optimum” is to take the
biggest model.
B. Adjusted R2: better. It “penalized” bigger models.
C. Mallow’s Cp: used to assess the fit of a regression model that has been estimated
using ordinary least squares.
D. Akaike’s Information Criterion (AIC), Schwarz’s BIC: The Akaike information
criterion (AIC) is an estimator of prediction error and thereby of the relative quality of statistical
models for a given set of data. Given a collection of models for the data, AIC estimates the
quality of each model, relative to each of the other models. Thus, AIC provides a means
for model selection.
3.4 . Estimation of Multiple Linear Regression By Using Ordinary Least Square Method

Consider the three-variable model PRF: Yi = β0 + β1X1i + β2X2i + Ui

The SRF will then be: Ŷi = β̂0 + β̂1X1i + β̂2X2i

The residuals are ûi = Yi − Ŷi. OLS, as usual, minimizes the RSS

(residual sum of squares):

RSS = Σûi² = Σ(Yi − β̂0 − β̂1X1i − β̂2X2i)²

Differentiating RSS with respect to β̂0, β̂1 and β̂2 and setting the derivatives to zero gives, in deviation form:

β̂1 = (Σx1y·Σx2² − Σx2y·Σx1x2) / (Σx1²·Σx2² − (Σx1x2)²)

β̂2 = (Σx2y·Σx1² − Σx1y·Σx1x2) / (Σx1²·Σx2² − (Σx1x2)²)

β̂0 = Ȳ − β̂1X̄1 − β̂2X̄2

Note: x1 = (X1 − X̄1), x2 = (X2 − X̄2) and y = (Y − Ȳ)

These estimators are called OLS estimators.

Example: Given the following data taken from a total of 8 observations, estimate the multiple
linear regression model.
Y X1 X2
45 8 4
44 7 5
50 6 6
43 9 6
45 8 5
44 8 3
40 9 4
43 6 2
Solution: First calculate the arithmetic means of X1, X2 and Y: X̄1 = 7.625,

X̄2 = 4.375 and Ȳ = 44.25

Second, construct the table as follows (x1, x2 and y are deviations from the means):

Y    X1   X2   x1      x2      y      x1²       x2²       x1y       x2y       x1x2


45   8    4    0.375   -0.375  0.75   0.140625  0.140625  0.28125   -0.28125  -0.140625
44   7    5    -0.625  0.625   -0.25  0.390625  0.390625  0.15625   -0.15625  -0.390625
50   6    6    -1.625  1.625   5.75   2.640625  2.640625  -9.34375  9.34375   -2.640625
43   9    6    1.375   1.625   -1.25  1.890625  2.640625  -1.71875  -2.03125  2.234375
45   8    5    0.375   0.625   0.75   0.140625  0.390625  0.28125   0.46875   0.234375
44   8    3    0.375   -1.375  -0.25  0.140625  1.890625  -0.09375  0.34375   -0.515625
40   9    4    1.375   -0.375  -4.25  1.890625  0.140625  -5.84375  1.59375   -0.515625
43   6    2    -1.625  -2.375  -1.25  2.640625  5.640625  2.03125   2.96875   3.859375
Total                                 9.875     13.875    -14.25    12.25     2.125
Third step: apply the formulas:

β̂1 = (Σx1y·Σx2² − Σx2y·Σx1x2)/(Σx1²·Σx2² − (Σx1x2)²) = ((−14.25)(13.875) − (12.25)(2.125))/((9.875)(13.875) − (2.125)²) = −223.75/132.5 = −1.688679

β̂2 = (Σx2y·Σx1² − Σx1y·Σx1x2)/(Σx1²·Σx2² − (Σx1x2)²) = ((12.25)(9.875) − (−14.25)(2.125))/132.5 = 151.25/132.5 = 1.141509
β̂0 = Ȳ − β̂1X̄1 − β̂2X̄2 = 44.25 − (−1.688679)(7.625) − (1.141509)(4.375)

β̂0 = 52.13208
So the estimated regression line is Ŷ = 52.132 − 1.689X1 + 1.142X2.
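These estimates can be verified with matrix least squares; an illustrative Python/NumPy sketch is:

import numpy as np

Y  = np.array([45, 44, 50, 43, 45, 44, 40, 43], dtype=float)
X1 = np.array([ 8,  7,  6,  9,  8,  8,  9,  6], dtype=float)
X2 = np.array([ 4,  5,  6,  6,  5,  3,  4,  2], dtype=float)

# Design matrix with a constant column; least squares returns (b0, b1, b2).
X = np.column_stack([np.ones_like(Y), X1, X2])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(beta)   # approximately [52.132, -1.689, 1.142]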
The coefficient of determination (R²): In the simple regression model, we introduced R² as a
measure of the proportion of variation in the dependent variable that is explained by variation in
the explanatory variable. In the multiple regression model the same measure is relevant, and the
same formulas are valid, but now we talk of the proportion of variation in the dependent variable
explained by all explanatory variables included in the model. The coefficient of determination

is:

R² = ESS/TSS = 1 − RSS/TSS

In the present model of two explanatory variables:

R² = (β̂1Σx1y + β̂2Σx2y)/Σy²

Since TSS, RSS and ESS are all non-negative (being sums of squared deviations), and since ESS ≤ TSS,

R² must lie in the interval 0 ≤ R² ≤ 1. As in simple regression, R² is also viewed as a measure of
the prediction ability of the model over the sample period, or as a measure of how well the
estimated regression fits the data. The value of R² is also equal to the squared sample correlation
coefficient between Yi and Ŷi. Since the sample correlation coefficient measures the linear
association between two variables, if R² is high, that means there is a close association between
the values of Yi and the values predicted by the model, Ŷi. In this case, the model is said to
"fit" the data well. If R² is low, there is little association between the values of Yi and the values
predicted by the model, and the model does not fit the data well.
The Adjusted R² (R̄²)
One major problem with R² is that adding another independent variable to a particular equation

can never decrease R². A completely nonsensical variable could be added to the model and a
high value could be obtained. In addition, the inclusion of another variable requires the
estimation of another coefficient that lessens the degrees of freedom, or the excess of the number
of observations (n) over the number of parameters, including the intercept, estimated (K+1).
 The lower the degrees of freedom, the less reliable the estimates are likely to be.
 To incorporate the impact of changes in the number of independent variables, it is necessary

to adjust R² for the degrees of freedom:


The adjusted coefficient of determination is given by

R̄² = 1 − (RSS/(n − k))/(TSS/(n − 1)) = 1 − (1 − R²)(n − 1)/(n − k)

where k is the number of estimated parameters, including the intercept. So R̄² is just another measure of the goodness of fit or the explanatory power of the regression

model. In general R̄² < R² (unless k = 1 or R² = 1, in which case R̄² = R²), and it is possible for R̄²
to be negative.

 While the unadjusted R² can never decrease as added explanatory variables are taken into

account, the adjusted R̄² can decrease, increase or remain the same when a variable is added
to an equation.

 Thus R̄² can be used to compare the fits of equations with the same dependent variable and
different numbers of independent variables.
Example: Given the following data taken from a total of 8 observations, compute the coefficient

of determination (R²) and the adjusted coefficient of determination (R̄²), and interpret the results.

Y X1 X2
45 8 4
44 7 5
50 6 6
43 9 6
45 8 5
44 8 3
40 9 4
43 6 2
Solution: The arithmetic means are the same as in the previous example: X̄1 = 7.625,

X̄2 = 4.375 and Ȳ = 44.25

Second, the deviation table is the same as the one constructed above, with one additional column, y²,
whose entries are 0.5625, 0.0625, 33.0625, 1.5625, 0.5625, 0.0625, 18.0625 and 1.5625, giving Σy² = 55.50.
The column totals are therefore Σy² = 55.50, Σx1² = 9.875, Σx2² = 13.875, Σx1y = −14.25, Σx2y = 12.25
and Σx1x2 = 2.125.

Third step: applying the OLS formulas as before gives

β̂1 = −1.688679

β̂2 = 1.141509

β̂0 = 52.13208
The fourth step: calculate the coefficient of determination using the formula

R² = ESS/TSS = (β̂1Σx1y + β̂2Σx2y)/Σy² = ((−1.688679)(−14.25) + (1.141509)(12.25))/55.50 = 38.05/55.50 = 0.6855

The fifth step: calculate the adjusted coefficient of determination using the following

formula

R̄² = 1 − (1 − R²)(n − 1)/(n − k), where k is the number of parameters, i.e. 3, n is the sample size, n = 8, and R² =

0.6855 was obtained in step 4.

R̄² = 1 − (1 − 0.6855)(7/5) = 0.5597

Interpretation: about 68.6 percent of the total variation in Y is explained by the two explanatory
variables X1 and X2; after adjusting for the degrees of freedom, the model still explains about 56
percent of the variation.
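The two goodness-of-fit measures can be checked numerically; a minimal Python/NumPy sketch for the same data is:

import numpy as np

Y  = np.array([45, 44, 50, 43, 45, 44, 40, 43], dtype=float)
X1 = np.array([ 8,  7,  6,  9,  8,  8,  9,  6], dtype=float)
X2 = np.array([ 4,  5,  6,  6,  5,  3,  4,  2], dtype=float)
n, k = len(Y), 3                                   # k parameters, including the intercept

X = np.column_stack([np.ones(n), X1, X2])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ beta

rss = np.sum(resid ** 2)
tss = np.sum((Y - Y.mean()) ** 2)
r2 = 1 - rss / tss                                 # about 0.6855
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k)          # about 0.5597
print(r2, r2_adj)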

3.5. Global Hypothesis Test (F-test)

In multiple regression models two basic tests of significance are employed. One is
significance of individual parameters of the model. This test of significance is the same
as the tests discussed in simple regression model. The second test is overall significance
of the model (F test).
3.5.1. Tests of individual significance
If we invoke the assumption that Ui ~ N(0, σ²), then we can use either the t-test or the
standard error test to test a hypothesis about any individual partial regression coefficient.
To illustrate, consider the following example.
Let Yi = β0 + β1X1i + β2X2i + Ui
A. H0: β1 = 0 against H1: β1 ≠ 0
B. H0: β2 = 0 against H1: β2 ≠ 0
The null hypothesis (A) states that, holding X2 fixed, X1 has no (linear) influence on Y. Similarly,
hypothesis (B) states that, holding X1 constant, X2 has no influence on the dependent variable
Yi. To test these null hypotheses we will use the following tests:
i- Standard error test: under this and the following testing methods we test only
for β̂1; the test for β̂2 is done in the same way.

S.E.(β̂1) = √var(β̂1); we compare S.E.(β̂1) with half of the estimate, β̂1/2.

 If S.E.(β̂1) > β̂1/2, we accept the null hypothesis; that is, we conclude that the
estimate β̂1 is not statistically significant.
 If S.E.(β̂1) < β̂1/2, we reject the null hypothesis; that is, we conclude that the estimate
β̂1 is statistically significant.
Note: The smaller the standard errors, the stronger the evidence that the estimates are
statistically reliable.
ii. The student's t-test: We compute the t-ratio for each β̂i

t* = β̂i / S.E.(β̂i), with n − k degrees of freedom, where n is the number of observations and k is the number of

parameters. If we have 3 parameters, the degrees of freedom will be n − 3. So,

t* = β̂i / S.E.(β̂i); with n − 3 degrees of freedom

Under our null hypothesis (βi = 0), the t* statistic becomes t* = β̂i / S.E.(β̂i):

 If |t*| < t (tabulated), we accept the null hypothesis, i.e. we conclude that β̂i is
not significant and hence the regressor does not appear to contribute to the
explanation of the variations in Y.
 If |t*| > t (tabulated), we reject the null hypothesis and accept the alternative
one; β̂i is statistically significant. Thus, the greater the value of t*, the stronger
the evidence that β̂i is statistically significant.

Example: On the basis of the information given below answer the following questions.

A. Find the OLS estimate of the slope coefficient


B. Compute variance of
C. Test the significant of slope parameter at 5% level of significant
D. Compute and and interpret the result
Solution:
A. Since the above model is a two explanatory variable model, we can estimate

using the formula in equation:

And

Since the x’s and y’s in the above formula are in deviation form we have to find the
corresponding deviation forms of the above given values.
We know that:
Now we can compute the parameters.

The intercept parameter can be computed using the following formula.

B.

Where k is the number of parameter

In our case k=3

C. can be tested using students t-test


This is done by comparing the computed value of t and critical value of t which is obtained from
the table at level of significance and n-k degree of freedom.

Hence;
The critical value of t from the t-table at level of significance and 22 degree of
freedom is 2.074.

The decision rule, if |t*| is less than the critical value, is to reject the alternative hypothesis that β1 is different from zero
and to accept the null hypothesis that β1 is equal to zero. The conclusion is that β̂1 is statistically

insignificant, or that the sample we use to estimate it is drawn from a population of Y and X1 in which
there is no relationship between Y and X1 (i.e. β1 = 0).
D. R² can be computed easily using the following equation:

R² = 1 − RSS/TSS = 1 − Σûi²/Σyi²

We know that and and


For two explanatory variable model:

1-

24% of the total variation in Y is explained by the regression line


) or by the explanatory variables (X1 and X2).

Adjusted

3.6. MULTICOLLINEARITY
In the construction of an econometric model, it may happen that two or more variables giving rise
to the same piece of information are included; that is, we may have redundant information or
unnecessarily included related variables. This is what we call a multicollinearity problem. One
of the assumptions of the CLRM is that no exact linear relationship exists between any of
the explanatory variables. When this assumption is violated, we speak of perfect
multicollinearity. If all explanatory variables are uncorrelated with each other, we speak of
absence of multicollinearity (MC). These are two extreme cases and rarely exist in practice. Of particular interest
are the cases in between: a moderate to high degree of multicollinearity. Such
multicollinearity is common in macroeconomic time series data (such as GNP, money supply,
income, etc.) since economic variables tend to move together over time.
A. Consequences of perfect multicollinearity
We say there is perfect multicollinearity if two or more explanatory variables are perfectly
correlated. One consequence of perfect multicollinearity is non-identifiability of the regression
coefficient vector: one cannot distinguish between two different models Y = Xβ + u
and Y = Xβ* + u with β ≠ β*, because the two models are observationally equivalent. Another
consequence of perfect multicollinearity is that the regression coefficients cannot be (uniquely) estimated.
B. Consequences of a high degree of multicollinearity (moderate to strong
multicollinearity)
Consider the case when there is a high degree (moderate to strong) of MC but not perfect MC.
What happens to the parameter estimates? Again consider the model in deviations form (K = 3):
y = β2x2 + β3x3 + u
A high degree of MC means that r23, the correlation coefficient between X2 and X3,
tends to 1 or −1 (but is not equal to ±1, for this would mean perfect MC). We can say that
the ordinary least squares (OLS) estimators of β2 and β3 are still unbiased, that is,
E(β̂j) = βj, j = 2, 3
Major implications (consequence) of a high degree of MC
i) OLS coefficient estimates are still unbiased.
ii) OLS coefficient estimates will have large variances (or the variances will be inflated).
iii) There is a high probability of accepting the null hypothesis of zero coefficient (using the t-test)
when in fact the coefficient is significantly different from zero.
iv) The regression model may do well, that is, R2 may be quite high.
v) The OLS estimates and their standard errors may be quite sensitive to small changes in the
data.
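The inflated-variance consequence can be illustrated with simulated data; the following Python/NumPy sketch (with arbitrarily chosen coefficients, for illustration only) builds two nearly collinear regressors and reports the variance inflation factor 1/(1 − r12²):

import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)     # nearly collinear with x1
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

# With two regressors, the variance inflation factor is 1 / (1 - r12^2).
r12 = np.corrcoef(x1, x2)[0, 1]
print(r12, 1.0 / (1.0 - r12 ** 2))             # correlation near 1, so the VIF is very large

# OLS still runs and the fit is good, but the coefficient estimates are imprecise.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)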
Chapter 4: Binary dependent variable
There are many dependent variables which are dichotomous, that is, they take only two values,
either 1 or 0. To estimate such kinds of models, we can use the following commonly
used approaches.

4.1 Linear probability model


Consider the following simple model, Yi = β0 + β1Xi + Ui ............................................................. (1)
where Yi = 1 if the household is poor and Yi = 0 if the household is not poor, and Xi is an explanatory
variable such as income. This model expresses the dichotomous dependent variable Yi as a linear
function of the explanatory variable Xi. Such a model is called a linear probability model.
E(Yi|Xi), the conditional mean of Yi given Xi, gives the probability of the household being
poor when income is Xi. Assuming E(Ui) = 0, we obtain E(Yi|Xi) = β0 + β1Xi. Let Pi = the probability
that the event occurs (Yi = 1) and 1 − Pi = the probability that Yi = 0. Then
E(Yi|Xi) = 0(1 − Pi) + 1(Pi) = Pi .........................................................................................................(2)
Thus E(Yi|Xi) = β0 + β1Xi = Pi, i.e. the conditional mean is also the conditional probability that Yi = 1.
Since a probability lies between 0 and 1, we require 0 ≤ E(Yi|Xi) ≤ 1; that is, the conditional
mean must lie between 0 and 1.
Problems in estimation of the linear probability model
1) Non-normality of the disturbance term Ui.
2) Heteroscedastic variance of the disturbance term.
3) Non-fulfilment of 0 ≤ E(Yi|Xi) ≤ 1.
4.2. Logit Model
Consider the following simple model, Yi=β0+β1Xi +Ui
Where Yi=1 if the household is poor and Yi=0 if the household is not poor, Xi is explanatory
variable like income.
Recall that Pi = E(Yi = 1|Xi) = β0 + β1Xi; re-write this expression as
Pi = E(Yi = 1|Xi) = 1/(1 + e^−(β0 + β1Xi))
= 1/(1 + e^−Zi) ...................................................................................................................................(1)
where Zi = β0 + β1Xi. This function represents the logistic distribution function. As Zi ranges from
negative infinity to positive infinity, Pi ranges from 0 to 1. Since Pi is nonlinear in the parameters, we cannot use the
ordinary least squares estimation technique; instead we use maximum likelihood estimation techniques.
If Pi = 1/(1 + e^−Zi), then 1 − Pi = 1/(1 + e^Zi), which is the probability of being non-poor.
Thus we can write
Pi/(1 − Pi) = (1 + e^Zi)/(1 + e^−Zi) = e^Zi .............................................................................................(2)
Now Pi/(1 − Pi) is the odds ratio in favour of being poor, which is the ratio of the probability that a
household is poor to the probability that the household is not poor. If we take the natural
logarithm of both sides, we obtain
Li = ln(Pi/(1 − Pi)) = Zi = β0 + β1Xi .......................................................................................................(3)
Using the estimated value of Pi, the estimated logit model is given by

L̂i = ln(P̂i/(1 − P̂i)) = β̂0 + β̂1Xi ...............................................................................................................(4)

The natural logarithm of the odds ratio is linear in the explanatory variable X and in the parameters β0
and β1, which removes the nonlinearity problem of equation (1).

β0, the intercept, gives the value of the log-odds in favour of being poor when income is zero.

β1, the slope, measures the change in the log-odds L for a unit change in X.
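The logistic transformation in equations (1)–(3) is easy to see numerically; a small Python/NumPy sketch with hypothetical coefficient values is:

import numpy as np

b0, b1 = -2.0, 0.5                      # hypothetical coefficients, for illustration only
X = np.array([0.0, 2.0, 4.0, 6.0, 8.0])

Z = b0 + b1 * X
P = 1.0 / (1.0 + np.exp(-Z))            # predicted probability of being poor, always in (0, 1)
log_odds = np.log(P / (1 - P))          # equals Z, i.e. linear in X

print(P)
print(np.allclose(log_odds, Z))         # True: ln(P/(1-P)) = b0 + b1*X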

4.3. Probit Model

The model that emerges from the normal cumulative distribution function is popularly
known as the probit model. Here the observed dependent variable Yi takes on one of the values 0
and 1 according to the following criterion.
Define the latent variable Yi* such that Yi* = Xi′β + Ui
Yi = 1 if Yi* > 0
Yi = 0 if Yi* ≤ 0
Thus the latent variable Y* is continuous from negative infinity to positive infinity. It generates
the observed binary variable Y, which can be observed in two states:
if the event occurs it takes the value 1;
if the event does not occur it takes the value 0.
The latent variable is assumed to be a linear function of the observed X’s through the structural
model.
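For comparison with the logit, a minimal probit sketch (Python with SciPy, hypothetical coefficients only) evaluates P(Y = 1|X) as the standard normal CDF of the linear index:

import numpy as np
from scipy.stats import norm

b0, b1 = -2.0, 0.5                      # hypothetical coefficients, for illustration only
X = np.array([0.0, 2.0, 4.0, 6.0, 8.0])

P = norm.cdf(b0 + b1 * X)               # probability that the latent variable Y* exceeds zero
print(P)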
4.4. Interpretation of coefficients
The interpretation of the logit and probit models is similar. The difference between the logit and probit
models is the assumed error distribution, which is normal for the probit model and logistic for the logit model.

1) Interpretation of maximum likelihood result

Income has a significant positive influence on the predicted probability (p-hat) of
being poor.

2) Interpretations of Odd ratio

Keeping other things constant, the odds of being poor for households with access to credit are higher by a
factor equal to the estimated odds ratio, compared to households without access to credit.

3) Interpretation of marginal effect

Keeping all other factors constant, if a rural household's income increases by 1 percent, the
average marginal effect shows that the probability of the household being poor changes on
average by (the coefficient of beta) percent, and the effect is statistically significant at the 1 or 5 percent level.
Chapter 5: Time series

5.1. Nature of the data

By nature, data are either quantitative or qualitative. Quantitative data are numerical and
qualitative data are descriptive. It is possible to transform qualitative data into numerical values.
Additionally, in sciences, data can also be graphic in nature. Data with reference to time:
There are two types of data under this category. These are:
A) Time series data – Data recorded in a chronological order across time are referred to as time
series data. It takes different values at different times, e.g., the number of books added to a
library in different years, monthly production of steel in a plant, yearly intake of students in a
university.
B) Cross-sectional data – This refers to data for the same unit or for different units at a point of
time, e.g., data across sections of people, region or segments of the society.

5.2. Trends and seasonality

5.3. Stationarity

5.4. Box–Jenkins Methodology

Part II for chapter 2

Stata Application commands

While you can perform a linear regression by hand, this is a tedious process, so most people use
statistical programs such as Stata, SPSS, EViews and R to analyze the data quickly. In Stata,
run the following command to generate a linear model describing the relationship between
income and happiness (the dependent variable is listed first):

regress happiness income

This code takes the data you have collected and calculates the effect that the independent
variable income has on the dependent variable happiness using the equation for the linear model.
Stata then reports the most important parameters from the linear model in a table,
which looks like this:

This output table first repeats the formula that was used to generate the results,
Number of observation (Number of obs): The total sample size used to estimate the model.
The whole-model F statistic (Prob > F): the most important thing to notice here is the p-value of
the model. Here it is significant (p < 0.001), which means that this model is a good fit for the
observed data. It should be smaller than the chosen level of significance (for example, 5 percent).
R-squared value: the number in the table (0.7488) tells us that about 74.88 percent of the variation in
reported happiness (measured on a scale of 1 to 10) is explained by income (where one unit of
income = $10,000).

std.Err This number shows how much variation there is in our estimate of the relationship
between income and happiness.
T test result (t): Unless you specify otherwise, the test statistic used in linear regression is the t-
value from a two-sided t-test. The larger the test statistic, the less likely it is that our results
occurred by chance.
The p-value (P>|t|): This number tells us how likely we are to see the estimated effect of
income on happiness if the null hypothesis of no effect were true.
Because the p-value is so low (p < 0.001), we can reject the null hypothesis and conclude that
income has a statistically significant effect on happiness.
It can also be helpful to include a graph with your results. For a simple linear regression, you can
simply plot the observations on the x and y axis and then include the regression line and
regression function:
Scatter Plot

Command
twoway scatter happiness income || lfit happiness income
[Figure: scatter plot of happiness against income with the fitted regression line (Fitted values)]

2.2. The nature of the error term


2.3 .The classical linear regression model (CLRM)
2.4. Parameter Estimation: Least Squares
2.5 .Covariance, correlation coefficient, coefficient of determination (r2)
2.6 .Hypotheses testing
2.7. Forecast

Work Sheet for chapter two (Simple linear Regression Model)
1) The following table gives data on the quantity of Teff supplied and the income of four individuals
Quantity of Teff (Yi)   Income of individuals (Xi)
3 10
4 20
5 25
6 50

Then:

A) Estimate the regression coefficients (β̂0 and β̂1)?

B) Assume that the quantity of Teff supplied is a linear function of income: Yi = β0 + β1Xi + Ui.

C) Estimate the model and interpret the estimated values of the regression coefficients?
D) Estimate the covariance between X and Y, i.e. Cov(X, Y), and interpret the result?

E) Compute the variances of β̂0 and β̂1, i.e. var(β̂0) and var(β̂1)?

F) Compute the correlation coefficient (r) and the coefficient of determination (r²) and interpret the results?

G) Construct a 95% confidence interval for the intercept (β̂0) and the slope (β̂1)?

H) Test the hypothesis H0: β1 = 0 against the alternative hypothesis

H1: β1 ≠ 0, using the confidence interval approach

I) Test the hypothesis H0: β1 = 0 against the alternative hypothesis

H1: β1 ≠ 0, using the test statistic

J) Plot the data in scatter plot?


2) Consider the OLS regression

Ŷi = 2.50 + 0.40Xi

S.E. = (0.12) (0.11)


r² = 0.92
n = 10
A) Test the significance of the estimates using the standard error test:

H0: βi = 0 against the alternative hypothesis H1: βi ≠ 0 (for the intercept and the slope in turn)

3) Consider the OLS regression

Ŷi = 2 + 0.50Xi

S.E. = (0.12) (   )
t =    (   ) (0.11)
Find the missing numbers?

Group Assignment
Weight: 30%
Date of submission: one week before the final exam
1) What makes Econometrics special from other subject of economics; specifically, mathematical
statistics, economic statistics, economic theory, and mathematical economics? Discuss in detail, and
distinguish them.(5point )
2) Explain clearly the three types of data based on time horizon with an example? (5points)

3) Application for simple linear Regression Analysis(10 point)

Knowing the relationship between GDP and government expenditure is a challenge in most developing
countries. Most notably, different hypotheses suggest that the relationship is contextual and country-
specific. Therefore, assume you are part of a research team assigned to examine the relationship
between GDP and government expenditure using the cross-section data annexed here. Given this
hypothetical data, answer the following questions using Stata software.

A) Formulate the hypotheses?
B) Draw a bar graph?
C) Show descriptive statistics such as measures of central tendency and variation?
D) Show the amounts of TSS, ESS and RSS and the degrees of freedom?
E) Compute and interpret the coefficients of the regression model?
F) Identify whether GDP is statistically significant or not?
G) Find the calculated F-test value and judge whether the model is good or bad?
H) Show the pre-estimation (linearity, normality) and post-estimation diagnostic tests (multicollinearity
and heteroscedasticity)?

GDP Expenditure
60 98
55 88
59 92
60 91
54 80
54 79
54 77
58 83
61 86
63 86
69 92
63 83
68 85
63 77
64 77
75 86
86 95
89 95
94 97
100 100

102 99
67 67
85 82
97 90
108 97
120 105
126 107
133 110
133 107
146 114

4) What is the distinction between the binary logit model and the binary probit model? (2.5 points)
5) When should the fixed effects model be used over the random effects panel data model? (2.5 points)
6) What are the overall procedures of time series analysis in Stata? (5 points)

Exercise

1) Which of the measures can be determined for quantitative data only?

A) Mean

B) Median

C) Mode

D) All

2) Which of the measures can have more than one value for a set of data?

A) Mean

B) Median

C) Mode

D) All

3) What is the equation for the relation between the mean, median and mode for moderately
skewed data?

Work out Part

1) What are the mean, median and mode of 3, 13, 6, 8, 10, 5, and 6?


2) Identify the mean, median and mode of the given distribution;

3) The mean and median of a distribution is 44.6 and 44.05 respectively. Find the mode.
4) Given the data set below of the heights of 50 economics students in centimetres, calculate the
mean, median and modal height of the economics students?

Z test Table

