Mean Squared Error and Maximum Likelihood: Lecture XVIII
Charles B. Moss
October 12, 2010
I. Mean Squared Error
A. As stated in our discussion on closeness, one potential measure for the goodness of an estimator is

   E(\hat{\theta} - \theta)^2   (1)
B. In the preceding example, the mean squared error of the estimate can be written as

   E(T - \theta)^2   (2)

   where \theta is the true parameter value between zero and one.
C. This expected value is conditioned on the probability of T at each value of \theta. For a single Bernoulli draw, the probability of each X is

   P[X, \theta] = \theta^{X} (1 - \theta)^{1 - X}   (3)
If the two events are independent,

   P[X_1, X_2, \theta] = \theta^{X_1 + X_2} (1 - \theta)^{2 - X_1 - X_2}   (4)
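As a quick numerical check (my own sketch, not part of the lecture; the function name `bernoulli_joint` is an assumption), the joint probability in equation (4) can be coded directly:

```python
def bernoulli_joint(x1, x2, theta):
    """Joint pmf of two independent Bernoulli(theta) draws, as in equation (4):
    theta**(x1 + x2) * (1 - theta)**(2 - x1 - x2)."""
    return theta ** (x1 + x2) * (1 - theta) ** (2 - x1 - x2)

# The four possible outcomes exhaust the sample space, so for any theta
# the probabilities must sum to one.
theta = 0.3
total = sum(bernoulli_joint(x1, x2, theta) for x1 in (0, 1) for x2 in (0, 1))
print(round(total, 10))  # 1.0
```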
The mean squared error at any \theta can then be derived as

   MSE(\theta) = P[0, 0, \theta] (0 - \theta)^2 + 2 P[0, 1, \theta] (0.5 - \theta)^2 + P[1, 1, \theta] (1 - \theta)^2.   (5)
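Equation (5) can be verified by enumeration. The sketch below (my own illustration; the estimator values 0, 0.5, 1 in (5) imply T is the mean of the two draws, T = (X_1 + X_2)/2) confirms that the enumerated MSE matches the closed form \theta(1 - \theta)/2, the variance of the two-observation mean:

```python
def bernoulli_joint(x1, x2, theta):
    # Joint pmf of two independent Bernoulli(theta) draws, equation (4).
    return theta ** (x1 + x2) * (1 - theta) ** (2 - x1 - x2)

def mse_T(theta):
    """MSE of T = (X1 + X2)/2, enumerating all four outcomes as in (5)."""
    return sum(bernoulli_joint(x1, x2, theta) * ((x1 + x2) / 2 - theta) ** 2
               for x1 in (0, 1) for x2 in (0, 1))

# Since T is unbiased, its MSE equals its variance, theta * (1 - theta) / 2.
theta = 0.3
print(round(mse_T(theta), 6))              # 0.105
print(round(theta * (1 - theta) / 2, 6))   # 0.105
```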
AEB 6571 Econometric Methods I
Professor Charles B. Moss
Lecture XVIII
Fall 2010
Figure 1: Comparison of MSE for Various Estimators
D. The mean squared error for S can similarly be computed as

   MSE(\theta) = P[0, \theta] (0 - \theta)^2 + P[1, \theta] (1 - \theta)^2   (6)
E. Finally, the mean squared error of W can be written as

   MSE(\theta) = (0.5 - \theta)^2   (7)
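A small sketch (my own, not from the lecture) tabulates equations (5)-(7) side by side, reproducing the comparison plotted in Figure 1; here S is read as a single-observation estimator and W as the constant estimator W = 0.5, as the equations imply:

```python
def mse_T(theta):
    # Two-observation mean, equation (5).
    return ((1 - theta) ** 2 * theta ** 2
            + 2 * theta * (1 - theta) * (0.5 - theta) ** 2
            + theta ** 2 * (1 - theta) ** 2)

def mse_S(theta):
    # Single observation, equation (6): P[0, theta] = 1 - theta, P[1, theta] = theta.
    return (1 - theta) * theta ** 2 + theta * (1 - theta) ** 2

def mse_W(theta):
    # Constant estimator W = 0.5, equation (7).
    return (0.5 - theta) ** 2

print("theta   MSE_T   MSE_S   MSE_W")
for theta in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{theta:5.2f} {mse_T(theta):7.4f} {mse_S(theta):7.4f} {mse_W(theta):7.4f}")
```

At \theta = 0.5 the constant estimator W has zero MSE, while near \theta = 0 or \theta = 1 it is the worst of the three; this crossing pattern is what Figure 1 displays.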
F. The mean squared error for each estimator is presented in Figure
1.
G. Definition 7.2.1. Let X and Y be two estimators of \theta. We say that X is better (or more efficient) than Y if E(X - \theta)^2 \leq E(Y - \theta)^2 for all \theta \in \Theta, and strictly less for at least one \theta \in \Theta.