The Simple Regression Model
y = b0 + b1x + u
Some Terminology
In the simple linear regression model,
where y = b0 + b1x + u, we typically refer
to y as the
Dependent Variable, or
Left-Hand Side Variable, or
Explained Variable, or
Regressand
Some Terminology, cont.
In the simple linear regression of y on x,
we typically refer to x as the
Independent Variable, or
Right-Hand Side Variable, or
Explanatory Variable, or
Regressor, or
Covariate, or
Control Variable
A Simple Assumption
The average value of u, the error term, in
the population is 0. That is,
E(u) = 0
[Figure: the population regression function E(y|x) = b0 + b1x plotted against x, evaluated at two values x1 and x2]
Ordinary Least Squares
The basic idea of regression is to estimate the
population parameters from a sample
Let {(xi,yi): i=1, …,n} denote a random
sample of size n from the population
For each observation in this sample, it will
be the case that
yi = b0 + b1xi + ui
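As a minimal, purely illustrative sketch (all parameter values are made up), one can simulate such a sample in Stata and see what OLS recovers:

* Illustrative sketch only: simulate a sample from a hypothetical
* population model y = 1 + 0.5x + u, then estimate b0 and b1 by OLS
clear
set obs 500
set seed 12345
gen x = rnormal(0, 2)     // the regressor
gen u = rnormal(0, 1)     // the error term, with E(u|x) = 0 by construction
gen y = 1 + 0.5*x + u     // population model with b0 = 1, b1 = 0.5
reg y x                   // the OLS estimates should be close to 1 and 0.5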
Population regression line, sample data points
and the associated error terms
[Figure: sample points (x1,y1), ..., (x4,y4) scattered around the population regression line E(y|x) = b0 + b1x, with the vertical distances u1, ..., u4 marking the associated errors]
Deriving OLS Estimates
To derive the OLS estimates we need to
realize that our main assumption of E(u|x) =
E(u) = 0 also implies that
Cov(x,u) = E(xu) = 0
Since u = y – b0 – b1x, we can write these two restrictions in terms of x, y, b0 and b1:
E(y – b0 – b1x) = 0
E[x(y – b0 – b1x)] = 0
The OLS estimates are chosen to solve the sample counterparts of these two moment conditions:
$$n^{-1}\sum_{i=1}^{n}\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right) = 0$$
$$n^{-1}\sum_{i=1}^{n} x_i\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right) = 0$$
More Derivation of OLS
Given the definition of a sample mean, and
properties of summation, we can rewrite the first
condition as follows
$$\bar{y} = \hat{\beta}_0 + \hat{\beta}_1\bar{x},$$
or
$$\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x}$$
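As a purely illustrative numerical example (the values are made up): if the sample means are $\bar{x} = 2$ and $\bar{y} = 10$ and the estimated slope is $\hat{\beta}_1 = 3$, then
$$\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x} = 10 - 3\cdot 2 = 4,$$
so the fitted line passes through the point of sample means (2, 10).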
More Derivation of OLS
Substituting this expression for the intercept into the second condition gives
$$\sum_{i=1}^{n} x_i\left(y_i - (\bar{y} - \hat{\beta}_1\bar{x}) - \hat{\beta}_1 x_i\right) = 0,$$
which can be rearranged as
$$\sum_{i=1}^{n} x_i\left(y_i - \bar{y}\right) = \hat{\beta}_1 \sum_{i=1}^{n} x_i\left(x_i - \bar{x}\right)$$
or, equivalently,
$$\sum_{i=1}^{n} \left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right) = \hat{\beta}_1 \sum_{i=1}^{n} \left(x_i - \bar{x}\right)^2$$
So the OLS estimated slope is
$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2},$$
provided that $\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2 > 0$
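As a quick check of this formula (a sketch, assuming variables y and x are already in memory, e.g. from the simulation above), the slope can be computed "by hand" in Stata and compared with the output of reg y x:

* Sketch: compute the OLS slope directly from the formula
quietly summarize x
gen double xdev = x - r(mean)              // x_i minus the sample mean of x
quietly summarize y
gen double cross = xdev*(y - r(mean))      // (x_i - xbar)(y_i - ybar)
gen double xdev2 = xdev^2                  // (x_i - xbar)^2
quietly summarize cross
scalar num = r(sum)
quietly summarize xdev2
scalar den = r(sum)
display "slope by hand = " num/den         // should match _b[x] after reg y x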
Summary of OLS slope estimate
The slope estimate is the sample
covariance between x and y divided by the
sample variance of x
If x and y are positively correlated, the
slope will be positive
If x and y are negatively correlated, the
slope will be negative
Only need x to vary in our sample
More OLS
Intuitively, OLS is fitting a line through the
sample points such that the sum of squared
residuals is as small as possible, hence the
term least squares
The residual, û, is an estimate of the error
term, u, and is the difference between the
fitted line (sample regression function) and
the sample point
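In Stata, fitted values and residuals are easy to recover after a regression (a sketch, assuming y and x are in memory):

* Sketch: recover the fitted line and the residuals
reg y x
capture drop yhat uhat
predict yhat                  // fitted values from the sample regression function
predict uhat, residuals       // residuals: uhat_i = y_i - yhat_i
list y yhat uhat in 1/5       // each residual is the sample point minus the fitted line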
Sample regression line, sample data points
and the associated estimated error terms
[Figure: sample points (x1,y1), ..., (x4,y4) scattered around the fitted line ŷ = β̂0 + β̂1x, with the vertical distances û1, ..., û4 marking the estimated residuals]
Alternate approach to derivation
Given the intuitive idea of fitting a line, we can
set up a formal minimization problem
That is, we want to choose our parameters such
that we minimize the following:
$$\sum_{i=1}^{n} \hat{u}_i^2 = \sum_{i=1}^{n}\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right)^2$$
Alternate approach, continued
Using calculus to solve the minimization problem for the
two parameters, we obtain the following first-order
conditions, which are the same as those we obtained
before, multiplied by n
$$\sum_{i=1}^{n}\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right) = 0$$
$$\sum_{i=1}^{n} x_i\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right) = 0$$
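To spell out the calculus step (a standard derivation, not shown on the slide itself): differentiate the objective with respect to each parameter and set the derivative to zero,
$$\frac{\partial}{\partial\hat{\beta}_0}\sum_{i=1}^{n}\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right)^2 = -2\sum_{i=1}^{n}\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right) = 0$$
$$\frac{\partial}{\partial\hat{\beta}_1}\sum_{i=1}^{n}\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right)^2 = -2\sum_{i=1}^{n} x_i\left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right) = 0$$
Dividing each equation by –2 gives the conditions above.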
Algebraic Properties of OLS
The sum of the OLS residuals is zero
Thus, the sample average of the OLS
residuals is zero as well
The sample covariance between the
regressors and the OLS residuals is zero
The OLS regression line always goes
through the mean of the sample
Algebraic Properties (precise)
$$\sum_{i=1}^{n}\hat{u}_i = 0 \quad\text{(and hence } n^{-1}\sum_{i=1}^{n}\hat{u}_i = 0\text{)}$$
$$\sum_{i=1}^{n} x_i\hat{u}_i = 0$$
$$\bar{y} = \hat{\beta}_0 + \hat{\beta}_1\bar{x}$$
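These properties are easy to verify numerically (a sketch, assuming y and x are in memory):

* Sketch: check the algebraic properties of OLS after a regression
reg y x
capture drop uhat
predict uhat, residuals
summarize uhat                       // the mean (and sum) of the residuals is zero
correlate x uhat, covariance         // the sample covariance of x and uhat is zero
quietly summarize x
display _b[_cons] + _b[x]*r(mean)    // equals the sample mean of y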
More terminology
We can think of each observation as being made
up of an explained part, and an unexplained part,
y_i = ŷ_i + û_i. We then define the following:
$$\text{SST} = \sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2 \quad\text{is the total sum of squares}$$
$$\text{SSE} = \sum_{i=1}^{n}\left(\hat{y}_i - \bar{y}\right)^2 \quad\text{is the explained sum of squares}$$
$$\text{SSR} = \sum_{i=1}^{n}\hat{u}_i^2 \quad\text{is the residual sum of squares}$$
To see that SST = SSE + SSR, write
$$\sum\left(y_i - \bar{y}\right)^2 = \sum\left[\left(y_i - \hat{y}_i\right) + \left(\hat{y}_i - \bar{y}\right)\right]^2 = \sum\left[\hat{u}_i + \left(\hat{y}_i - \bar{y}\right)\right]^2$$
$$= \sum\hat{u}_i^2 + 2\sum\hat{u}_i\left(\hat{y}_i - \bar{y}\right) + \sum\left(\hat{y}_i - \bar{y}\right)^2 = \text{SSR} + 2\sum\hat{u}_i\left(\hat{y}_i - \bar{y}\right) + \text{SSE},$$
and note that $\sum\hat{u}_i\left(\hat{y}_i - \bar{y}\right) = 0$, since the residuals have zero sample covariance with the fitted values.
R2 = SSE/SST = 1 – SSR/SST
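A sketch of how to recover these quantities in Stata after a regression (assumes y and x are in memory; note that Stata's output labels the explained sum of squares "Model" and the residual sum of squares "Residual"):

* Sketch: sums of squares and R-squared after reg y x
reg y x
display "SSE (explained) = " e(mss)
display "SSR (residual)  = " e(rss)
display "SST             = " e(mss) + e(rss)
display "R-squared       = " e(mss)/(e(mss) + e(rss))   // matches e(r2)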
Using Stata for OLS regressions
Now that we’ve derived the formula for
calculating the OLS estimates of our
parameters, you’ll be happy to know you
don’t have to compute them by hand
Regressions in Stata are very simple: to run
the regression of y on x, just type
reg y x
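For a concrete (purely illustrative) example using one of Stata's built-in datasets:

sysuse auto, clear
reg price weight      // regress price (the y variable) on weight (the x variable)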
Unbiasedness of OLS
Assume the population model is linear in
parameters as y = b0 + b1x + u
Assume we can use a random sample of
size n, {(xi, yi): i=1, 2, …, n}, from the
population model. Thus we can write the
sample model yi = b0 + b1xi + ui
Assume E(u|x) = 0 and thus E(ui|xi) = 0
Assume there is variation in the xi
Unbiasedness of OLS (cont)
In order to think about unbiasedness, we need to
rewrite our estimator in terms of the population
parameter
Start with a simple rewrite of the formula as
$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)y_i}{s_x^2}, \quad\text{where } s_x^2 \equiv \sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2$$
Unbiasedness of OLS (cont)
$$\sum_{i=1}^{n}\left(x_i - \bar{x}\right)y_i = \sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(\beta_0 + \beta_1 x_i + u_i\right)$$
$$= \beta_0\sum_{i=1}^{n}\left(x_i - \bar{x}\right) + \beta_1\sum_{i=1}^{n}\left(x_i - \bar{x}\right)x_i + \sum_{i=1}^{n}\left(x_i - \bar{x}\right)u_i$$
Unbiasedness of OLS (cont)
$$\sum_{i=1}^{n}\left(x_i - \bar{x}\right) = 0, \qquad \sum_{i=1}^{n}\left(x_i - \bar{x}\right)x_i = \sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2,$$
so that
$$\hat{\beta}_1 = \beta_1 + \frac{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)u_i}{s_x^2}$$
Unbiasedness of OLS (cont)
Let $d_i = x_i - \bar{x}$, so that
$$\hat{\beta}_1 = \beta_1 + \frac{1}{s_x^2}\sum_{i=1}^{n} d_i u_i; \quad\text{then}$$
$$E\left(\hat{\beta}_1\right) = \beta_1 + \frac{1}{s_x^2}\sum_{i=1}^{n} d_i\,E\left(u_i\right) = \beta_1$$
Unbiasedness Summary
The OLS estimates of b1 and b0 are
unbiased
Proof of unbiasedness depends on our 4
assumptions – if any assumption fails, then
OLS is not necessarily unbiased
Remember unbiasedness is a description of
the estimator – in a given sample we may
be “near” or “far” from the true parameter
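A small Monte Carlo sketch can make this concrete (all parameter values are made up; each replication draws a fresh sample, so individual estimates scatter around the truth while their average is close to it):

* Sketch: Monte Carlo illustration of unbiasedness of the OLS slope
capture program drop olssim
program olssim, rclass
    clear
    set obs 100
    gen x = rnormal(0, 2)
    gen y = 1 + 0.5*x + rnormal(0, 1)   // true slope b1 = 0.5
    reg y x
    return scalar b1 = _b[x]
end
simulate b1 = r(b1), reps(1000) seed(12345) nodots: olssim
summarize b1     // the average estimate should be close to 0.5, but any
                 // single estimate may be noticeably above or below it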
Variance of the OLS Estimators
Now we know that the sampling
distribution of our estimate is centered
around the true parameter
Want to think about how spread out this
distribution is
Much easier to think about this variance
under an additional assumption, so
Assume Var(u|x) = s2 (Homoskedasticity)
Variance of OLS (cont)
Var(u|x) = E(u2|x)-[E(u|x)]2
E(u|x) = 0, so s2 = E(u2|x) = E(u2) = Var(u)
Thus s2 is also the unconditional variance,
called the error variance
s, the square root of the error variance is
called the standard deviation of the error
Can say: E(y|x)=b0 + b1x and Var(y|x) = s2
Homoskedastic Case
[Figure: conditional distributions f(y|x) at x1 and x2 with equal spread, each centered on the line E(y|x) = b0 + b1x]
Heteroskedastic Case
[Figure: conditional distributions f(y|x) at x1, x2 and x3 with spread increasing in x, each centered on the line E(y|x) = b0 + b1x]
Variance of OLS (cont)
$$\mathrm{Var}\left(\hat{\beta}_1\right) = \mathrm{Var}\!\left(\beta_1 + \frac{1}{s_x^2}\sum d_i u_i\right) = \left(\frac{1}{s_x^2}\right)^{\!2}\mathrm{Var}\!\left(\sum d_i u_i\right)$$
$$= \left(\frac{1}{s_x^2}\right)^{\!2}\sum d_i^2\,\mathrm{Var}\left(u_i\right) = \left(\frac{1}{s_x^2}\right)^{\!2}\sum d_i^2\,\sigma^2$$
$$= \sigma^2\left(\frac{1}{s_x^2}\right)^{\!2}\sum d_i^2 = \sigma^2\left(\frac{1}{s_x^2}\right)^{\!2} s_x^2 = \frac{\sigma^2}{s_x^2} = \mathrm{Var}\left(\hat{\beta}_1\right)$$
Variance of OLS Summary
The larger the error variance, s2, the larger
the variance of the slope estimate
The larger the variability in the xi, the
smaller the variance of the slope estimate
As a result, a larger sample size should
decrease the variance of the slope estimate
A remaining problem is that the error variance is unknown
Estimating the Error Variance
We don’t know what the error variance, s2,
is, because we don’t observe the errors, ui
What we do observe are the residuals, ûi, and we can use them to form an estimate of the error variance:
$$\hat{\sigma}^2 = \frac{1}{n-2}\sum_{i=1}^{n}\hat{u}_i^2 = \frac{\text{SSR}}{n-2}$$
Error Variance Estimate (cont)
$$\hat{\sigma} = \sqrt{\hat{\sigma}^2} \quad\text{is called the standard error of the regression}$$
Recall that $\mathrm{sd}\left(\hat{\beta}_1\right) = \dfrac{\sigma}{s_x}$, where $s_x = \left(\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2\right)^{1/2}$.
If we substitute $\hat{\sigma}$ for $\sigma$, then we have the standard error of $\hat{\beta}_1$:
$$\mathrm{se}\left(\hat{\beta}_1\right) = \frac{\hat{\sigma}}{\left(\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2\right)^{1/2}}$$
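As a final sketch (assuming y and x are in memory), both the standard error of the regression and se(β̂1) can be computed by hand and compared with the regression output:

* Sketch: sigma-hat and the standard error of the slope by hand
reg y x
display "sigma-hat (std. err. of regression) = " e(rmse)
quietly summarize x
scalar sstx = (r(N) - 1)*r(Var)          // sum of (x_i - xbar)^2
display "se(b1hat) by hand   = " e(rmse)/sqrt(sstx)
display "se(b1hat) from reg  = " _se[x]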