\begin{align*}
E[(Y - aX - \mu_Y + a\mu_X)^2]
&= \operatorname{Var}(Y - aX) \\
&= \operatorname{Cov}(Y - aX,\; Y - aX) \\
&= \operatorname{Var}(Y) - 2a\operatorname{Cov}(Y,X) + a^2 \operatorname{Var}(X). \tag{4.31}
\end{align*}

It remains to find the constant $a$. The MSE is quadratic in $a$, so taking the derivative with respect to $a$ and setting it equal to zero yields that the optimal choice of $a$ is $a^* = \operatorname{Cov}(Y,X)/\operatorname{Var}(X)$. Therefore, the minimum MSE linear estimator is given by

\begin{align*}
L^*(X) &= \mu_Y + \frac{\operatorname{Cov}(Y,X)}{\operatorname{Var}(X)}\,(X - \mu_X) \tag{4.32} \\
&= \mu_Y + \sigma_Y \rho_{X,Y}\,\frac{X - \mu_X}{\sigma_X}. \tag{4.33}
\end{align*}

Setting $a$ in (4.31) to $a^*$ gives the following expression for the minimum possible MSE:

\[
\text{minimum MSE for linear estimation} = \sigma_Y^2 - \frac{(\operatorname{Cov}(X,Y))^2}{\operatorname{Var}(X)} = \sigma_Y^2\,(1 - \rho_{X,Y}^2). \tag{4.34}
\]

Since $\operatorname{Var}(L^*(X)) = \left(\frac{\sigma_Y \rho_{X,Y}}{\sigma_X}\right)^2 \operatorname{Var}(X) = \sigma_Y^2 \rho_{X,Y}^2$, the following alternative expressions for the minimum MSE hold (note that $E[L^*(X)] = \mu_Y$, so $E[Y^2] - E[L^*(X)^2] = \sigma_Y^2 - \operatorname{Var}(L^*(X))$):

\[
\text{minimum MSE for linear estimation} = \sigma_Y^2 - \operatorname{Var}(L^*(X)) = E[Y^2] - E[L^*(X)^2]. \tag{4.35}
\]

In summary, the minimum mean square error linear estimator is given by (4.32) or (4.33), and the resulting minimum mean square error is given by (4.34) or (4.35).
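As a sanity check, the closed-form results above can be verified numerically. The sketch below (all parameter values are illustrative, not from the text) confirms that the quadratic in (4.31) is minimized at $a^* = \operatorname{Cov}(Y,X)/\operatorname{Var}(X)$, and that on simulated jointly Gaussian data the estimator (4.33) achieves mean square error close to $\sigma_Y^2(1 - \rho_{X,Y}^2)$ from (4.34):

```python
import numpy as np

# Illustrative parameters (assumed for this check, not from the text)
mu_x, mu_y = 1.0, -2.0
sigma_x, sigma_y, rho = 2.0, 3.0, 0.6

var_x, var_y = sigma_x**2, sigma_y**2
cov_yx = rho * sigma_x * sigma_y

# (4.31): MSE(a) = Var(Y) - 2a Cov(Y,X) + a^2 Var(X)
def mse(a):
    return var_y - 2 * a * cov_yx + a**2 * var_x

a_star = cov_yx / var_x                      # closed-form minimizer a*
a_grid = np.linspace(-5.0, 5.0, 100001)
a_numeric = a_grid[np.argmin(mse(a_grid))]   # grid-search minimizer
print(a_star, a_numeric)                     # the two should agree closely

# Monte Carlo check of (4.33)-(4.35) on jointly Gaussian samples
rng = np.random.default_rng(0)
cov = [[var_x, cov_yx], [cov_yx, var_y]]
x, y = rng.multivariate_normal([mu_x, mu_y], cov, size=1_000_000).T

l_star = mu_y + sigma_y * rho * (x - mu_x) / sigma_x   # estimator (4.33)
empirical_mse = np.mean((y - l_star) ** 2)
theoretical_mse = var_y * (1 - rho**2)                 # formula (4.34)
print(empirical_mse, theoretical_mse)                  # should be close
```

The last two lines also implicitly test (4.35), since `np.var(l_star)` comes out near $\sigma_Y^2 \rho_{X,Y}^2$ and the empirical MSE near $\sigma_Y^2 - \operatorname{Var}(L^*(X))$.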
This note was uploaded on 02/09/2014 for the course ISYE 2027 taught by Professor Zahrn during the Spring '08 term at Georgia Tech.