Stat 411 – Lecture Notes Supplement
Rao–Blackwell and Lehmann–Scheffé theorems*†‡

Ryan Martin

Spring 2012

1 Introduction
Unbiased estimation is a fundamental development in the theory of statistical inference.
Nowadays there is considerably less emphasis on unbiasedness in statistical theory and
practice, particularly because there are other more pressing concerns in modern high
dimensional problems (e.g., regularization). Nevertheless, it is important for students of
statistics to learn and appreciate these classical developments. In this note, I'll elaborate a bit more on the Rao–Blackwell and Lehmann–Scheffé theorems presented in class. In particular, I'll give proofs of both results (the proof of the Rao–Blackwell theorem given below is different from that given in class), along with some further comments. The majority of the material presented here is taken from Chapter 2 of Lehmann and Casella, Theory of Point Estimation, Springer 1998.
Herein I shall assume, as usual, that X_1, ..., X_n are iid with common PDF/PMF f_θ(x), where θ is an unknown parameter to be estimated. For the presentation here, I shall assume that θ is a real-valued quantity; however, with some modifications, things can be extended to the more general vector-valued θ case. The focus is on producing unbiased estimators of θ, specifically, unbiased estimators with small variance. For that reason, I shall also assume throughout that all estimators in question have finite variance: if there is at least one unbiased estimator with finite variance, then there is clearly no need to consider those without finite variance; and if there are no unbiased estimators with finite variance, then there is really nothing for us to do, right?
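To make the setup concrete, here is a minimal simulation sketch (not from the notes; the Poisson model and the values of θ, n are my own illustrative choices) checking that the sample mean is an unbiased estimator of θ when X_1, ..., X_n are iid Poisson(θ), and that its variance is θ/n.

```python
import numpy as np

# Illustrative sketch: X_1, ..., X_n iid Poisson(theta), so the sample
# mean is unbiased for theta with variance theta / n. The specific
# values below are assumptions for the demo, not from the notes.
rng = np.random.default_rng(0)
theta, n, reps = 3.0, 25, 200_000

samples = rng.poisson(theta, size=(reps, n))
xbar = samples.mean(axis=1)  # one estimate of theta per replication

# Across replications, the average estimate should be close to theta
# (unbiasedness), and the Monte Carlo variance close to theta / n.
print(xbar.mean())  # approximately theta
print(xbar.var())   # approximately theta / n
```

The same script, with the model swapped out, can be used to check unbiasedness of any candidate estimator considered below.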
2 Minimum variance unbiased estimation
As I discussed earlier in the course, unbiasedness is a good property for an estimator to have; however, I gave several examples which showed that unbiasedness does not necessarily make an estimator good. But if one insists on an unbiased estimator of θ,
*Version: February 28, 2012.
†Please do not distribute these notes without the author's consent ([email protected]).
‡These notes are meant solely to supplement in-class lectures. The author makes no guarantees that these notes are free of typos or other, more serious errors.
then it is natural to seek out the best among them. Here "best" means the one with the smallest variance. That is, the goal is to find the minimum variance unbiased estimator (MVUE) of θ. After a brief discussion of MVUE uniqueness, I elaborate on the two main theorems presented in class which are often used in tandem to find the unique MVUE.
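As a preview of how the Rao–Blackwell theorem improves an estimator, here is a hedged simulation sketch (my own illustrative example, not from the notes): for X_1, ..., X_n iid Poisson(θ), the crude estimator 1{X_1 = 0} is unbiased for P(X = 0) = e^(−θ), and conditioning it on the sufficient statistic T = ΣX_i gives E[1{X_1 = 0} | T] = ((n−1)/n)^T, since X_1 | T ~ Binomial(T, 1/n). The Rao–Blackwellized version remains unbiased but has smaller variance.

```python
import numpy as np

# Illustrative Rao-Blackwell example (values of theta, n are assumed
# for the demo): estimate P(X = 0) = exp(-theta) under Poisson(theta).
rng = np.random.default_rng(1)
theta, n, reps = 1.0, 10, 100_000

x = rng.poisson(theta, size=(reps, n))
crude = (x[:, 0] == 0).astype(float)   # crude unbiased estimator 1{X_1 = 0}
t = x.sum(axis=1)                      # sufficient statistic T = sum(X_i)
rao_blackwell = ((n - 1) / n) ** t     # E[ 1{X_1 = 0} | T ]

# Both estimators average out near exp(-theta) (both unbiased),
# but the conditioned version shows a visibly smaller variance.
print(crude.mean(), rao_blackwell.mean())
print(crude.var(), rao_blackwell.var())
```

This is exactly the pattern the two theorems formalize: Rao–Blackwell says conditioning an unbiased estimator on a sufficient statistic never increases variance, and Lehmann–Scheffé says that if the statistic is also complete, the result is the unique MVUE.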