Stat 581 Midterm Exam Solutions (Autumn 2011)
2. (10pts) Let δ1 and δ2 have finite variances for each θ ∈ Θ and suppose they are the
UMVUEs for g1 (θ) and g2 (θ), respectively. Show that a1 δ1 + a2 δ2 is the UMVUE for
a1 g1 (θ) + a2 g2 (θ), where a1 and a2 are known.
Solution: Recall that an estimator with finite variance is UMVUE for its expectation
iff it is uncorrelated with every unbiased estimator of zero (LC Theorem 2.1.7). The
estimator δ = a1 δ1 + a2 δ2 has finite variance, and for any unbiased estimator U of zero
with finite variance we have, for each θ ∈ Θ,
\[
\operatorname{Cov}_\theta[\delta, U] = E_\theta[\delta U] = a_1 E_\theta[\delta_1 U] + a_2 E_\theta[\delta_2 U] = 0,
\]
where the first equality uses E_θ[U] = 0, and the last equality holds because δ1 and δ2 are
each UMVUE for their own expectations and hence uncorrelated with U.
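Spelled out, the remaining hypothesis of the theorem, unbiasedness of δ for the target, is just
linearity of expectation:
\[
E_\theta[\delta] = a_1 E_\theta[\delta_1] + a_2 E_\theta[\delta_2] = a_1 g_1(\theta) + a_2 g_2(\theta) \qquad \text{for all } \theta \in \Theta,
\]
so by LC Theorem 2.1.7, δ = a1 δ1 + a2 δ2 is the UMVUE of a1 g1(θ) + a2 g2(θ).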
Several people tried to solve this problem with the variance inequality on δ and then on
another estimator δ′ = a1 δ1′ + a2 δ2′. With this approach, you can show that the bounds
for δ are lower than the bounds for δ′ based purely on δ1 and δ2 being UMVUEs.
However, this approach will not work, for two reasons: an ordering on the bounds
doesn't imply the variances themselves are ordered, and we need to consider estimators
δ′ that are not linear combinations of estimators of g1(θ) and g2(θ).
A few people argued that since δ1 and δ2 are UMVUE, they must be functions of a
complete sufficient statistic. Then δ will be a function of the c.s.s., and is unbiased for
a1 g1(θ) + a2 g2(θ). This reasoning is flawed because there exist UMVUEs in situations
where there is no c.s.s. (see L. Bondesson (1983), “On Uniformly Minimum Variance
Unbiased Estimation when no Complete Sufficient Statistics Exist”). If there is a c.s.s.,
then a UMVUE will be a function of it, but a UMVUE can exist without a c.s.s.
3. (15pts) Let X1 , . . . , Xn be i.i.d. with E[|Xi |] < ∞. Show formally that the sample
mean X̄ is a version of E[X1 | X(1) , . . . , X(n) ].
Solution: For this problem you need to show that ∫_A X̄ dP = ∫_A X1 dP for every
A ∈ σ(X(1) , . . . , X(n) ). Do this by starting with X̄:
\begin{align*}
\int_A \bar{X}\,dP
  &= \frac{1}{n}\sum_{i=1}^n \int_A X_i\,dP \\
  &= \frac{1}{n}\sum_{i=1}^n E[X_i 1_A] \\
  &= \frac{1}{n}\sum_{i=1}^n E[X_1 1_A] \qquad \text{because the $X_i$'s are i.i.d.} \\
  &= E[X_1 1_A] = \int_A X_1\,dP.
\end{align*}
You need to be careful about the step that replaces E[Xi 1A ] by E[X1 1A ]: it is valid here
only because A ∈ σ(X(1) , . . . , X(n) ). For a set A outside this σ-algebra, E[X1 1A ] is not
necessarily the same as E[Xi 1A ] even though the Xi's are i.i.d.; for example, the equality
fails in general for A = {X2 < 3}.
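One way to justify this step explicitly: any A ∈ σ(X(1) , . . . , X(n) ) can be written as
{(X1 , . . . , Xn ) ∈ B} for a permutation-symmetric Borel set B, so exchanging X1 and Xi
permutes the coordinates without changing 1A = 1B (X1 , . . . , Xn ). Since the Xi's are i.i.d.,
and hence exchangeable,
\[
\bigl(X_i,\, 1_B(X_1, \dots, X_n)\bigr) \overset{d}{=} \bigl(X_1,\, 1_B(X_1, \dots, X_n)\bigr),
\qquad\text{so}\qquad E[X_i 1_A] = E[X_1 1_A].
\]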
where p(t|θ) = 6t^5 /θ^6 is the density of T. Letting g+ and g− be the positive and
negative parts of g = g+ − g−, we have
\[
\int_0^\theta g_+(t)\,t^5\,dt = \int_0^\theta g_-(t)\,t^5\,dt \qquad \forall\, \theta > 0.
\]
It was sufficient for the exam to make reference to the result in Lehmann (or
my supplementary notes) that this implies g+ = g− a.e., so that g = 0 a.e.
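Sketching why that result applies here: the two sides define measures μ+(C) = ∫_C g+(t) t^5 dt
and μ−(C) = ∫_C g−(t) t^5 dt on the Borel subsets of (0, ∞), and the display above says they
agree on every interval (0, θ]. These intervals form a π-system generating the Borel σ-algebra,
and both measures are finite on each (0, θ] (the integrals exist because the expectations above
are defined), so μ+ = μ− on all Borel sets:
\[
\int_C g_+(t)\,t^5\,dt = \int_C g_-(t)\,t^5\,dt \ \text{ for all Borel } C
\;\Longrightarrow\;
g_+(t)\,t^5 = g_-(t)\,t^5 \ \text{ a.e.},
\]
and since t^5 > 0 on (0, ∞), g+ = g− a.e., i.e., g = 0 a.e.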
Alternatively, you can take derivatives:
\begin{align*}
\frac{d}{d\theta}\int_0^\theta g(t)\,t^5\,dt &= 0 \\
g(\theta)\,\theta^5 &= 0 \\
g(\theta) &= 0 \ \text{ for almost all } \theta > 0 \;\Rightarrow\; g = 0 \text{ a.e.}
\end{align*}
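A small technical point about the differentiation step: the function F(θ) = ∫_0^θ g(t) t^5 dt is
identically zero, and by the Lebesgue differentiation theorem (the fundamental theorem of
calculus for Lebesgue integrals),
\[
F'(\theta) = g(\theta)\,\theta^5 \quad \text{for almost every } \theta > 0,
\]
using that ∫_0^θ |g(t)| t^5 dt < ∞ for each θ (needed for the integrals above to be defined);
this is why the conclusion is stated only for almost all θ.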