An MVU estimator does not always exist, since \hat{\theta} must have the smallest variance for all values of \theta.^1

^1 To emphasize the fact that the MVU estimator must have the smallest variance for all values of \theta, B & D refer to it as uniformly minimum variance unbiased (UMVU).
EE 527, Detection and Estimation Theory, # 1
Comments:

• Even if it exists for a particular problem, the MVU estimator is not optimal in terms of minimizing the MSE, and we may be able to do better.

• Unbiasedness is nice, but not the most important property \Rightarrow we can relax this condition and consider biased estimators as well, e.g. by making them asymptotically unbiased. By relaxing the unbiasedness condition, it is possible to outperform the MVU estimator in terms of MSE, as shown in the following example. What we really care about is minimizing the MSE!
Example 2.^2 Consider now estimating the variance \sigma^2 of independent, identically distributed (i.i.d.) zero-mean Gaussian observations, using the following estimator:

\hat{\sigma}^2 = a \cdot \frac{1}{N} \sum_{n=0}^{N-1} x^2[n]        (1)

where a > 0 is variable.
If we choose a = 1, \hat{\sigma}^2_{a=1} will be unbiased^3 with

\hat{\sigma}^2_{a=1} = \hat{\sigma}^2_{MVU} = \frac{1}{N} \sum_{n=0}^{N-1} x^2[n].        (2)
^2 See also P. Stoica and R. Moses, "On biased estimators and the unbiased Cramér-Rao lower bound," Signal Processing, vol. 21, pp. 349–350, 1991.
^3 We will show later that this choice yields an MVU estimate.
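As a quick numerical sanity check (not part of the original notes), the a = 1 estimator in (2) can be simulated by Monte Carlo; the seed, \sigma^2, N, and trial count below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)        # fixed seed, for reproducibility
sigma2, N, trials = 2.0, 10, 200_000  # illustrative values, not from the notes

# i.i.d. zero-mean Gaussian observations x[n], one row per Monte Carlo trial
x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))

# a = 1 estimator from (2): (1/N) * sum_n x^2[n]
sigma2_hat = np.mean(x**2, axis=1)

# averaging the estimates over trials should land close to the true sigma^2,
# consistent with unbiasedness
print(np.mean(sigma2_hat))
```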
Now, in general, E[\hat{\sigma}^2] = a \sigma^2 and

MSE(\hat{\sigma}^2) = E[(\hat{\sigma}^2 - \sigma^2)^2]
                    = E[\hat{\sigma}^4] + \sigma^4 - 2 \sigma^2 E[\hat{\sigma}^2]
                    = E[\hat{\sigma}^4] + \sigma^4 (1 - 2a)
                    = \frac{a^2}{N^2} \sum_{n_1=0}^{N-1} \sum_{n_2=0}^{N-1} E\{x^2[n_1]\, x^2[n_2]\} + \sigma^4 (1 - 2a)
                    = \frac{a^2}{N^2} \Big[ (N^2 - N)\, \sigma^4 + N \cdot \underbrace{E\{x^4[n]\}}_{3 \sigma^4} \Big] + \sigma^4 (1 - 2a)
                    = \sigma^4 \cdot \Big[ a^2 \Big(1 + \frac{2}{N}\Big) + (1 - 2a) \Big].        (3)
To evaluate the above expression, we have used the following facts:

• For n_1 \neq n_2,

E\{x^2[n_1]\, x^2[n_2]\} = E\{x^2[n_1]\} \cdot E\{x^2[n_2]\} = \sigma^2 \cdot \sigma^2 = \sigma^4.

• For n_1 = n_2,

E\{x^2[n_1]\, x^2[n_2]\} = E\{x^4[n_1]\} = 3 \sigma^4

(which is the fourth-order moment of a zero-mean Gaussian distribution with variance \sigma^2).
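The closed form (3) can be checked against simulation. A minimal sketch, with arbitrary illustrative choices of a, N, \sigma^2, and trial count:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, N, a, trials = 1.0, 8, 0.7, 400_000  # illustrative values

# i.i.d. zero-mean Gaussian observations, one row per Monte Carlo trial
x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
est = a * np.mean(x**2, axis=1)              # estimator (1)

# empirical MSE vs. the closed-form expression (3)
mse_empirical = np.mean((est - sigma2)**2)
mse_formula = sigma2**2 * (a**2 * (1 + 2 / N) + (1 - 2 * a))

print(mse_empirical, mse_formula)            # the two should nearly agree
```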
It can be easily shown that (3) is minimized for

a_{OPT} = \frac{N}{N + 2}

yielding the estimator

\hat{\sigma}^2 = a_{OPT} \cdot \frac{1}{N} \sum_{n=0}^{N-1} x^2[n]

whose MSE

MSE_{MIN} = \frac{2 \sigma^4}{N + 2}

is minimum for the family of estimators in (1).
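The claimed minimizer and minimum MSE can be verified against a direct grid evaluation of (3); the grid and parameter values below are illustrative choices:

```python
import numpy as np

N, sigma2 = 10, 1.0                      # illustrative values
a_grid = np.linspace(0.01, 2.0, 2001)    # grid over the gain a > 0

# MSE from (3) as a function of a
mse = sigma2**2 * (a_grid**2 * (1 + 2 / N) + (1 - 2 * a_grid))

a_opt = N / (N + 2)                      # claimed minimizer
mse_min = 2 * sigma2**2 / (N + 2)        # claimed minimum MSE

print(a_grid[np.argmin(mse)], a_opt)     # grid minimizer should be near a_opt
print(mse.min(), mse_min)                # grid minimum should match 2 sigma^4 / (N + 2)
print(mse_min < 2 * sigma2**2 / N)       # beats the MSE of the a = 1 (MVU) estimator
```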
Comments:

• \hat{\sigma}^2 is biased and has smaller MSE than the MVU estimator in (2):

MSE_{MIN} < MSE(\hat{\sigma}^2)\big|_{a=1} = \frac{2 \sigma^4}{N}.
• Note that we are able to construct a realizable estimator in this case — compare with Example 1 in this handout.
• For large N, \hat{\sigma}^2 and \hat{\sigma}^2_{MVU} are approximately the same, since N/(N + 2) \to 1 as N \to \infty. This also implies that \hat{\sigma}^2 is asymptotically unbiased.
Note: I do not wish to completely dismiss bias considerations. For example, we may have two estimators \hat{\theta}_1 and \hat{\theta}_2 with

[bias(\hat{\theta}_1)]^2 \ll var(\hat{\theta}_1)  and  [bias(\hat{\theta}_2)]^2 \ll var(\hat{\theta}_2)

and

MSE(\hat{\theta}_1) \approx MSE(\hat{\theta}_2).

So, these two estimators are "equally good" as far as MSE is concerned. But, we may have

|bias(\hat{\theta}_1)| \ll |bias(\hat{\theta}_2)|

making \hat{\theta}_1 "more desirable" than \hat{\theta}_2. Bias correction methods have been developed for constructing estimators that have small bias. Hence, having small bias is typically a second-tier consideration.