Estimators
Guy Lebanon
September 4, 2010
We assume that we have iid (independent identically distributed) samples X^{(1)}, . . . , X^{(n)} that follow some
unknown distribution. The task of statistics is to estimate properties of the unknown distribution. In this
note we focus on estimating a parameter of the distribution such as the mean or variance. In some cases
the parameter completely characterizes the distribution, so estimating it amounts to estimating the entire distribution.
In this note, we assume that the parameter is a real vector θ ∈ R^d. To estimate it, we use an estimator, which is a function of our observations: θ̂(x^{(1)}, . . . , x^{(n)}). We follow standard practice and omit
(in notation only) the dependency of the estimator on the samples, i.e. we write θ̂. However, note that
θ̂ = θ̂(X^{(1)}, . . . , X^{(n)}) is a random variable since it is a function of n random variables.
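As a concrete illustration (a minimal sketch assuming NumPy; the exponential distribution and the sample size n = 50 are arbitrary choices, not part of this note), an estimator is simply a function of the observed sample, and its value varies from one sample to the next:

```python
import numpy as np

rng = np.random.default_rng(0)

def theta_hat(sample):
    # An estimator: a function mapping the sample x^(1), ..., x^(n) to a value.
    # Here we use the sample mean as an estimate of E(X).
    return sample.mean()

n = 50
for _ in range(3):
    sample = rng.exponential(scale=2.0, size=n)  # illustrative data; true E(X) = 2
    print(theta_hat(sample))  # the value changes with the sample: theta_hat is a random variable
```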
A desirable property of an estimator is that it is correct on average. That is, if there are repeated
samplings of n samples X^{(1)}, . . . , X^{(n)}, the estimator θ̂(X^{(1)}, . . . , X^{(n)}) will have, on average, the correct
value. Such estimators are called unbiased.
Definition 1. The bias of θ̂ is¹ Bias(θ̂) = E(θ̂) − θ. If Bias(θ̂) = 0, the estimator θ̂ is said to be unbiased.
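When E(θ̂) is not available in closed form, the bias can be approximated by averaging the estimator over many repeated samplings. The following sketch (assuming NumPy; the Gaussian model and the divide-by-n variance estimator are illustrative choices) approximates the bias of the sample mean, which is essentially zero, and of the divide-by-n variance estimator, which is negative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 10, 200_000
true_mean, true_var = 0.0, 4.0           # illustrative model: X ~ N(0, 4)

samples = rng.normal(true_mean, np.sqrt(true_var), size=(reps, n))

mean_hat = samples.mean(axis=1)          # sample mean, one value per repeated sampling
var_hat_n = samples.var(axis=1, ddof=0)  # "divide by n" variance estimator

print("bias of sample mean      ~", mean_hat.mean() - true_mean)   # approx 0
print("bias of 1/n variance est ~", var_hat_n.mean() - true_var)   # approx -true_var/n = -0.4
```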
There are, however, more important performance characterizations of an estimator than just being unbiased. The mean squared error is perhaps the most important of them. It captures the error that the estimator
makes. However, since the estimator is a random variable, we need to average over its distribution, thus capturing the
average performance over many repeated samplings of X^{(1)}, . . . , X^{(n)}.
Definition 2. The mean squared error (MSE) of an estimator is E(‖θ̂ − θ‖²) = E(Σ_{j=1}^d (θ̂_j − θ_j)²).
Theorem 1.
E(‖θ̂ − θ‖²) = trace(Var(θ̂)) + ‖Bias(θ̂)‖².

Note that Var(θ̂) is the covariance matrix of θ̂ and so its trace is Σ_{j=1}^d Var(θ̂_j).
Proof. Since the MSE equals Σ_{j=1}^d E((θ̂_j − θ_j)²), it is sufficient to prove, for a scalar θ, that E((θ̂ − θ)²) = Var(θ̂) + Bias²(θ̂):

E((θ̂ − θ)²) = E(((θ̂ − E(θ̂)) + (E(θ̂) − θ))²) = E{(θ̂ − E(θ̂))² + (E(θ̂) − θ)² + 2(θ̂ − E(θ̂))(E(θ̂) − θ)}
= Var(θ̂) + Bias²(θ̂) + 2E((θ̂ − E(θ̂))(E(θ̂) − θ)) = Var(θ̂) + Bias²(θ̂) + 2E(θ̂E(θ̂) − (E(θ̂))² − θθ̂ + E(θ̂)θ)
= Var(θ̂) + Bias²(θ̂) + 2((E(θ̂))² − (E(θ̂))² − θE(θ̂) + θE(θ̂)) = Var(θ̂) + Bias²(θ̂).
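The decomposition can also be verified numerically by Monte Carlo. The sketch below (assuming NumPy; the shrunken mean c·X̄ is a hypothetical biased estimator chosen only for illustration) compares the simulated MSE of a scalar estimator with Var(θ̂) + Bias²(θ̂):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 20, 500_000
theta = 3.0                               # illustrative model: X ~ N(theta, 1)
c = 0.8                                   # shrinkage factor; c * X_bar is a biased estimator of theta

samples = rng.normal(theta, 1.0, size=(reps, n))
theta_hat = c * samples.mean(axis=1)      # one estimate per repeated sampling

mse = np.mean((theta_hat - theta) ** 2)   # simulated E((theta_hat - theta)^2)
var = theta_hat.var()                     # simulated Var(theta_hat)
bias2 = (theta_hat.mean() - theta) ** 2   # simulated Bias^2(theta_hat)

print(mse, var + bias2)                   # the two numbers agree up to Monte Carlo error
```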
Since the MSE decomposes into a sum of the variance and the squared bias of the estimator, both quantities are
important and need to be as small as possible to achieve good estimation performance. It is common to
trade off some increase in bias for a larger decrease in the variance, and vice versa.
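As an example of such a trade-off (a sketch assuming NumPy and Gaussian data; this particular comparison is an illustration rather than part of the note), the biased divide-by-n variance estimator attains a smaller MSE than the unbiased divide-by-(n−1) estimator, because its reduced variance more than compensates for its squared bias:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 10, 500_000
sigma2 = 1.0                                  # illustrative model: X ~ N(0, 1), true variance 1

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
s2_unbiased = samples.var(axis=1, ddof=1)     # divide by n-1 (unbiased)
s2_biased = samples.var(axis=1, ddof=0)       # divide by n   (biased)

print("MSE unbiased ~", np.mean((s2_unbiased - sigma2) ** 2))   # approx 2/(n-1) = 0.222
print("MSE biased   ~", np.mean((s2_biased - sigma2) ** 2))     # approx (2n-1)/n^2 = 0.19
```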
¹ Note that here and in the sequel all expectations are with respect to X^{(1)}, . . . , X^{(n)}.
Two important special cases are the sample mean

θ̂ = X̄ = (1/n) Σ_{i=1}^n X^{(i)},

which estimates the vector E(X), and the sample variance θ̂ = S², where

S²_j = (1/(n − 1)) Σ_{i=1}^n (X^{(i)}_j − X̄_j)²,    j = 1, . . . , d,
which estimates the diagonal of the covariance matrix Var(X). We show below that both are unbiased and
therefore their MSE is simply their variance.
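In code (a minimal sketch assuming NumPy; the two-dimensional Gaussian below is an arbitrary test distribution), X̄ and S² correspond to np.mean and np.var with ddof=1 applied to each coordinate, and their unbiasedness can be checked by averaging over repeated samplings:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, reps = 30, 2, 100_000
mean = np.array([1.0, -2.0])                  # E(X)
var_diag = np.array([4.0, 0.25])              # diagonal of Var(X)

# reps independent samplings, each consisting of n iid d-dimensional observations
X = rng.normal(mean, np.sqrt(var_diag), size=(reps, n, d))

x_bar = X.mean(axis=1)                        # shape (reps, d): sample mean per sampling
s2 = X.var(axis=1, ddof=1)                    # shape (reps, d): S^2 per sampling (divide by n-1)

print("E(X_bar) ~", x_bar.mean(axis=0))       # approx [1, -2]
print("E(S^2)   ~", s2.mean(axis=0))          # approx [4, 0.25]
```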
Theorem 2. X̄ is an unbiased estimator of E(X) and S² is an unbiased estimator of the diagonal of the
covariance matrix Var(X).
Proof.

E(X̄) = E(n⁻¹ Σ_{i=1}^n X^{(i)}) = Σ_{i=1}^n E(X^{(i)})/n = nE(X^{(1)})/n = E(X).
To prove that S² is unbiased we show that it is unbiased in the one-dimensional case, i.e., X and S² are scalars
(if this holds, we can apply this result to each component separately to get unbiasedness of the vector S²).
We first need the following result (recall that below X is a scalar):
Σ_{i=1}^n (X^{(i)} − X̄)² = Σ_i (X^{(i)})² − 2X̄ Σ_i X^{(i)} + nX̄² = Σ_i (X^{(i)})² − 2nX̄·X̄ + nX̄² = Σ_i (X^{(i)})² − nX̄²
and therefore

E(Σ_{i=1}^n (X^{(i)} − X̄)²) = E(Σ_{i=1}^n (X^{(i)})² − nX̄²) = Σ_{i=1}^n E((X^{(i)})²) − nE(X̄²) = nE((X^{(1)})²) − nE(X̄²),

where we used the linearity of expectation and the fact that the X^{(i)} are identically distributed. Since E((X^{(1)})²) = Var(X) + (E(X))² and E(X̄²) = Var(X̄) + (E(X̄))² = Var(X)/n + (E(X))², we get

E(Σ_{i=1}^n (X^{(i)} − X̄)²) = n Var(X) + n(E(X))² − Var(X) − n(E(X))² = (n − 1) Var(X),

and therefore E(S²) = E(Σ_{i=1}^n (X^{(i)} − X̄)²)/(n − 1) = Var(X).

Since both estimators are unbiased, their MSE equals trace(Var(θ̂)). This shows again (but in a different way than the bias-variance
decomposition of the MSE) that the quality of unbiased estimators is determined by trace(Var(θ̂)).
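The algebraic identity Σ_{i=1}^n (X^{(i)} − X̄)² = Σ_i (X^{(i)})² − nX̄² used in the proof is easy to confirm numerically (a sketch assuming NumPy; the data below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=15)                       # a scalar sample x^(1), ..., x^(n)
n, x_bar = x.size, x.mean()

lhs = np.sum((x - x_bar) ** 2)                # sum of squared deviations from the sample mean
rhs = np.sum(x ** 2) - n * x_bar ** 2         # sum of squares minus n times the squared mean
print(np.isclose(lhs, rhs))                   # True: the two sides agree up to rounding
```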