
General Concepts of Point Estimation


GENERAL CONCEPTS OF POINT ESTIMATION

GENERAL CONCEPTS OF ESTIMATION
• Unbiased Estimator
• Variance
• Standard Error
• Mean Squared Error
UNBIASED ESTIMATOR
A statistic is called an unbiased estimator of a population parameter
if the mean of the sampling distribution of the statistic is equal to the
value of the parameter. For example, the sample mean, x̄, is an
unbiased estimator of the population mean, μ. In symbols, μ_x̄ = μ. On
the other hand, since μ_s ≠ σ, the sample standard deviation, s, gives a
biased estimate of σ.
An estimator of a given parameter is said to be unbiased if its
expected value is equal to the true value of the parameter. In other
words, an estimator is unbiased if it produces parameter estimates
that are, on average, correct.
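As an illustrative sketch (not part of the original slides), a small simulation with made-up parameters shows that the average of many sample means lands on μ, while the average of many sample standard deviations falls short of σ:

    import numpy as np

    # Simulation sketch with assumed parameters: population mean 10, SD 2,
    # samples of size 5, repeated 100,000 times.
    rng = np.random.default_rng(0)
    mu, sigma, n, reps = 10.0, 2.0, 5, 100_000

    samples = rng.normal(mu, sigma, size=(reps, n))
    sample_means = samples.mean(axis=1)
    sample_sds = samples.std(axis=1, ddof=1)

    print(sample_means.mean())  # close to mu = 10 (unbiased)
    print(sample_sds.mean())    # around 1.88, below sigma = 2 (biased)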
VARIANCE
In probability theory and statistics, variance is the expectation of the
squared deviation of a random variable from its mean. Variance is a
measure of dispersion, meaning it is a measure of how far a set of
numbers is spread out from their average value.
Variance measures variability from the average or mean. It is
calculated by taking the differences between each number in the data set
and the mean, squaring the differences to make them positive,
and finally dividing the sum of the squares by the number of values in
the data set.
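A short numeric sketch of the calculation described above, using made-up data:

    # Population variance of a small, hypothetical data set.
    data = [4.0, 7.0, 5.0, 9.0, 5.0]
    mean = sum(data) / len(data)                    # 6.0
    squared_deviations = [(x - mean) ** 2 for x in data]
    variance = sum(squared_deviations) / len(data)  # 16.0 / 5 = 3.2
    print(variance)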
STANDARD ERROR
The standard error of a statistic is the standard deviation of its sampling
distribution or an estimate of that standard deviation. If the statistic is the
sample mean, it is called the standard error of the mean.
Standard error statistics are a class of inferential statistics that function
somewhat like descriptive statistics in that they permit the researcher to
construct confidence intervals about the obtained sample statistic. The two
most commonly used standard error statistics are the standard error of the
mean and the standard error of the estimate.
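In symbols (a standard formula, stated here for reference rather than taken from the slide), the standard error of the mean is the population standard deviation divided by the square root of the sample size, usually estimated from the sample standard deviation s:

    SE(x̄) = σ / √n, estimated in practice by s / √n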
STANDARD ERROR
Example 1 
A general practitioner has been investigating whether the diastolic blood
pressure of men aged 20-44 differs between printers and farm workers. For
this purpose, she has obtained a random sample of 72 printers and 48 farm
workers and calculated the means and standard deviations, as shown in
table 1.
STANDARD ERROR
• To calculate the standard errors of the two mean blood pressures, the
standard deviation of each sample is divided by the square root of the
number of observations in the sample.
Printers: SEM = 4.5 / √72 = 0.53 mmHg
Farm workers: SEM = 4.2 / √48 = 0.61 mmHg
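A quick check of these two calculations (numbers taken from the example above):

    import math

    # Standard error of the mean: sample SD divided by the square root of n.
    print(round(4.5 / math.sqrt(72), 2))  # printers: 0.53
    print(round(4.2 / math.sqrt(48), 2))  # farm workers: 0.61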
STANDARD ERROR
• These standard errors may be used to study the significance of the
difference between the two means.

Standard error of a proportion or a percentage
• Just as we can calculate a standard error associated with a mean, we
can also calculate a standard error associated with a percentage or a
proportion. Here the size of the sample will affect the size of the
standard error, but the amount of variation is determined by the value
of the percentage or proportion in the population itself, so we do not
need an estimate of the standard deviation.
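For reference, the usual formula for the standard error of a proportion is √(p(1 − p) / n); the slide does not give the formula, and the numbers below are made up for illustration:

    import math

    # Standard error of a proportion: sqrt(p * (1 - p) / n).
    # p and n below are hypothetical values.
    p, n = 0.25, 200
    print(round(math.sqrt(p * (1 - p) / n), 3))  # 0.031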
MEAN SQUARED ERROR
• In statistics, the mean squared error or mean squared deviation of an
estimator measures the average of the squares of the errors, that is,
the average squared difference between the estimated values and the
actual value. MSE is a risk function, corresponding to the expected
value of the squared error loss.
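As a small numeric sketch (hypothetical values), the MSE of a set of estimates of a known true value is simply the mean of the squared differences:

    # Mean squared error between estimated values and a known true value.
    true_value = 10.0
    estimates = [9.5, 10.4, 10.1, 9.8, 10.6]
    mse = sum((e - true_value) ** 2 for e in estimates) / len(estimates)
    print(mse)  # 0.164

For an estimator, the MSE can also be written as the estimator's variance plus the square of its bias.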
