the constant δ∗ that minimizes the mean square error for estimation of a random variable Y by a constant is the mean, EY, and the minimum possible value of that mean square error is Var(Y).
Another way to derive this result is to use the facts that E[Y − EY] = 0 and that EY − δ is a constant, to get
E[(Y − δ)²] = E[((Y − EY) + (EY − δ))²]
            = E[(Y − EY)² + 2(Y − EY)(EY − δ) + (EY − δ)²]
            = Var(Y) + (EY − δ)².
From this expression it is easy to see that the mean square error is minimized with respect to δ if and only if δ = EY, and the minimum possible value is Var(Y).

4.10.2 Unconstrained estimators

Suppose instead that we wish to estimate Y based on an observation X. If we use the estimator g(X) for some function g, the resulting mean square error (MSE) is E[(Y − g(X))²]. We want to find g to minimize the MSE. The resulting estimator g∗(X) is called the unconstrained optimal estimator of Y based on X because no constraint is placed on the function g.
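As a quick numerical check of the decomposition E[(Y − δ)²] = Var(Y) + (EY − δ)², and of the fact that δ = EY minimizes it, the following Python sketch estimates these quantities by simulation. The particular joint distribution of (X, Y) and the example function g are illustrative assumptions, not taken from the notes.

```python
import random

random.seed(0)

# Illustrative (assumed) joint distribution: X uniform on {0, 1, 2},
# Y = X + standard Gaussian noise. Any choice would do here.
n = 100_000
xs = [random.randrange(3) for _ in range(n)]
ys = [x + random.gauss(0.0, 1.0) for x in xs]

mean_y = sum(ys) / n
var_y = sum((y - mean_y) ** 2 for y in ys) / n

def mse_const(delta):
    """Empirical MSE of the constant estimator delta for Y."""
    return sum((y - delta) ** 2 for y in ys) / n

# The decomposition E[(Y - delta)^2] = Var(Y) + (EY - delta)^2 holds
# exactly for empirical moments as well, for every delta.
for delta in (0.0, mean_y, 3.0):
    assert abs(mse_const(delta) - (var_y + (mean_y - delta) ** 2)) < 1e-6

# Hence delta = EY gives the smallest constant-estimator MSE, namely Var(Y).
assert mse_const(mean_y) <= mse_const(0.0)

def mse_of(g):
    """Empirical MSE E[(Y - g(X))^2] of the estimator g(X)."""
    return sum((y - g(x)) ** 2 for x, y in zip(xs, ys)) / n

# Using the observation X can only help: here g(x) = x tracks Y,
# while even the best constant ignores X entirely.
assert mse_of(lambda x: x) < mse_const(mean_y)
```

Under this toy model, mse_of(lambda x: x) is roughly the noise variance, while the best constant estimator is stuck with the full Var(Y), illustrating why the unconstrained problem of the next section is worth solving.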
This note was uploaded on 02/09/2014 for the course ISYE 2027 taught by Professor Zahrn during the Spring '08 term at Georgia Tech.