Probability and Statistics ch7
Types of Estimation:
1. Point Estimation
2. Interval Estimation
Point Estimation
A point estimate is a single value of a statistic used
to estimate a population parameter.
For example, the sample mean
\[ \bar{X} = \frac{X_1 + X_2 + \cdots + X_n}{n} = \frac{\sum_{i=1}^{n} X_i}{n} \]
is an estimator for the population mean $\mu$.
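A minimal sketch of the point estimate in Python (the sample values below are made up purely for illustration):

```python
# Point estimate of the population mean: the sample mean x̄ = (x1 + ... + xn)/n.
# These sample values are hypothetical, chosen only to illustrate the formula.
sample = [380, 420, 395, 410, 385]

n = len(sample)
x_bar = sum(sample) / n  # point estimate of the population mean μ

print(x_bar)  # 398.0
```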
⦁ Statistical inference is the process by which we infer population properties from sample properties.
1. Point Estimator
2. Interval Estimator
⦁ A point estimator draws inferences about a population by estimating the value of an unknown parameter using a single value or point.
⦁ An interval estimator draws inferences about
a population by estimating the value of an
unknown parameter using an interval. Here,
we try to construct an interval that “covers”
the true population parameter with a
specified probability.
⦁ As an example, suppose we are trying to
estimate the mean summer income of
students. Then, an interval estimate might
say that the (unknown) mean income is
between $380 and $420 with probability
0.95.
• How good is the estimator $\bar{X}$?
Look at its probability distribution!
Important properties of the distribution of the sample mean $\bar{X}$:
1. Mean
2. Variance or Standard Deviation
3. Shape
Properties of Estimators
1. Unbiasedness
Example: A statistics class has six students,
ages are:
18, 18, 19, 20, 20, 21
The population mean is
\[ \mu = \frac{18 + 18 + 19 + 20 + 20 + 21}{6} = \frac{116}{6} = \frac{58}{3} \approx 19.33 \]
Select samples of size 2 (n = 2).
There are 15 possible samples.
Find the mean of every possible sample where n = 2.
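The enumeration above can be reproduced in Python: list all 15 size-2 samples of the six ages and compute each sample mean. Averaging those 15 sample means recovers the population mean $\mu$, previewing the unbiasedness property discussed next.

```python
from itertools import combinations

# The six students' ages from the example above.
ages = [18, 18, 19, 20, 20, 21]
mu = sum(ages) / len(ages)  # population mean, 58/3 ≈ 19.33

# All C(6, 2) = 15 possible samples of size n = 2 (students are distinct,
# even when two of them share the same age) and their sample means.
samples = list(combinations(ages, 2))
sample_means = [(a + b) / 2 for a, b in samples]

print(len(samples))                      # 15
print(sum(sample_means) / len(samples))  # ≈ 19.33, the same as μ
```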
Sampling distribution for the sample mean $\bar{X}$:
An estimator $\hat{\theta}$ is unbiased for $\theta$ if
\[ E(\hat{\theta}) = \theta \]
If this property is not satisfied, $\hat{\theta}$ is a biased estimator for $\theta$.
Result:
Let $X_1, X_2, \ldots, X_n$ be a random sample from a population with mean $\mu$ and variance $\sigma^2$.
Consider: the sample mean
\[ \bar{X} = \frac{\sum_{i=1}^{n} X_i}{n} \]
and the sample variance
\[ S^2 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n-1} \]
Then
1. $\bar{X}$ is an unbiased estimator for $\mu$:
\[ E(\bar{X}) = E\!\left( \frac{1}{n} \sum_{i=1}^{n} X_i \right) = \frac{1}{n} \sum_{i=1}^{n} E(X_i) = \frac{1}{n} \sum_{i=1}^{n} \mu = \frac{1}{n}\, n\mu = \mu \]
2. $S^2$ is an unbiased estimator for $\sigma^2$:
\[ E(S^2) = E\!\left( \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n-1} \right) = \sigma^2 \]
But
\[ E\!\left( \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n} \right) = \frac{n-1}{n}\,\sigma^2 \neq \sigma^2, \]
so $\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2$ is a biased estimator for $\sigma^2$.
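The two divisors can be compared exactly in Python by enumerating every size-2 sample drawn with replacement (so the draws are i.i.d.) from the class-ages population, using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# Population from the class-ages example; sampling WITH replacement makes
# the two draws independent and identically distributed.
ages = [18, 18, 19, 20, 20, 21]
N = len(ages)
mu = Fraction(sum(ages), N)
sigma2 = sum((Fraction(x) - mu) ** 2 for x in ages) / N  # population variance

n = 2  # sample size
s2_vals, biased_vals = [], []
for sample in product(ages, repeat=n):  # all 36 equally likely i.i.d. samples
    xbar = Fraction(sum(sample), n)
    ss = sum((Fraction(x) - xbar) ** 2 for x in sample)
    s2_vals.append(ss / (n - 1))   # divide by n - 1
    biased_vals.append(ss / n)     # divide by n

E_s2 = sum(s2_vals) / len(s2_vals)
E_biased = sum(biased_vals) / len(biased_vals)

print(E_s2 == sigma2)                           # True: unbiased
print(E_biased == Fraction(n - 1, n) * sigma2)  # True: off by the factor (n-1)/n
```

Dividing by $n-1$ hits $\sigma^2$ exactly on average, while dividing by $n$ systematically underestimates it by the factor $(n-1)/n$, matching the equations above.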
Variance of the sample mean $\bar{X}$:
\[ \mathrm{Var}(\bar{X}) = \mathrm{Var}\!\left( \frac{1}{n} \sum_{i=1}^{n} X_i \right) = \frac{1}{n^2} \sum_{i=1}^{n} \mathrm{Var}(X_i) = \frac{1}{n^2}\, n\sigma^2 = \frac{\sigma^2}{n} \]
\[ \mathrm{Var}(\bar{X}) = \frac{\sigma^2}{n} \to 0 \quad \text{as } n \to \infty \]
The sample mean $\bar{X}$ is an unbiased estimator for $\mu$ and $\mathrm{Var}(\bar{X}) \to 0$ as $n \to \infty$, so $\bar{X}$ is a consistent estimator for $\mu$.
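The identity $\mathrm{Var}(\bar{X}) = \sigma^2/n$ can also be checked exactly, again by enumerating all i.i.d. samples (with replacement) from the class-ages population for a few sample sizes:

```python
from fractions import Fraction
from itertools import product

# Population from the class-ages example; i.i.d. draws = sampling with replacement.
ages = [18, 18, 19, 20, 20, 21]
N = len(ages)
mu = Fraction(sum(ages), N)
sigma2 = sum((Fraction(x) - mu) ** 2 for x in ages) / N  # population variance

# Check Var(X̄) = σ²/n exactly for n = 1, 2, 3 by averaging (x̄ - μ)²
# over every equally likely sample of size n.
checks = []
for n in (1, 2, 3):
    means = [Fraction(sum(s), n) for s in product(ages, repeat=n)]
    var_xbar = sum((m - mu) ** 2 for m in means) / len(means)
    checks.append(var_xbar == sigma2 / n)

print(checks)  # [True, True, True]
```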
Standard Deviation (S.D) of the estimator $\bar{X}$:
\[ \mathrm{S.D}(\bar{X}) = \frac{\sigma}{\sqrt{n}} \]
To estimate $\sigma^2$:
\[ \hat{\sigma}^2 = S^2 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n-1} \]
Estimate $\sigma$ by $S$ (the sample S.D):
\[ \hat{\sigma} = S = \sqrt{S^2} \]
Standard Error (S.E) of the estimator $\bar{X}$:
\[ \mathrm{S.E}(\bar{X}) = \frac{S}{\sqrt{n}} \]
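Putting the estimates together, a short Python sketch computes $S^2$, $S$, and the standard error from one sample (the values are hypothetical, used only to exercise the formulas):

```python
import math

# A made-up sample of five observations, for illustration only.
sample = [380, 420, 395, 410, 385]
n = len(sample)

x_bar = sum(sample) / n                               # sample mean
s2 = sum((x - x_bar) ** 2 for x in sample) / (n - 1)  # sample variance S²
s = math.sqrt(s2)                                     # sample S.D., S
se = s / math.sqrt(n)                                 # standard error of x̄

print(s2)  # 282.5
print(s, se)
```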
The smaller the sampling variability (i.e., the S.E.) is, the better an estimate will be.
As the sample size increases to infinity, the
sampling distribution concentrates around
the population mean.
Result: Let $X_1, X_2, \ldots, X_n$ be a random sample from a normal population with mean $\mu$ and variance $\sigma^2$, i.e., $N(\mu, \sigma^2)$.