For a linear estimator $L(X) = aX + b$, choosing the optimal intercept $b = \mu_Y - a\mu_X$ makes the mean square error

\[
E[(Y - aX - \mu_Y + a\mu_X)^2] = \operatorname{Var}(Y - aX) = \operatorname{Cov}(Y - aX,\, Y - aX) = \operatorname{Var}(Y) - 2a\operatorname{Cov}(Y,X) + a^2 \operatorname{Var}(X). \tag{4.31}
\]

It remains to find the constant $a$. The MSE is quadratic in $a$; differentiating (4.31) with respect to $a$ gives $-2\operatorname{Cov}(Y,X) + 2a\operatorname{Var}(X)$, and setting this equal to zero yields the optimal choice $a^* = \operatorname{Cov}(Y,X)/\operatorname{Var}(X)$. Therefore, the minimum MSE linear estimator is given by

\[
L^*(X) = \mu_Y + \frac{\operatorname{Cov}(Y,X)}{\operatorname{Var}(X)}\,(X - \mu_X) \tag{4.32}
\]
\[
\hphantom{L^*(X)} = \mu_Y + \sigma_Y \rho_{X,Y}\,\frac{X - \mu_X}{\sigma_X}. \tag{4.33}
\]

Setting $a = a^*$ in (4.31) gives the following expression for the minimum possible MSE:

\[
\text{minimum MSE for linear estimation} = \sigma_Y^2 - \frac{(\operatorname{Cov}(X,Y))^2}{\operatorname{Var}(X)} = \sigma_Y^2\,(1 - \rho_{X,Y}^2). \tag{4.34}
\]

Since $\operatorname{Var}(L^*(X)) = \bigl(\sigma_Y \rho_{X,Y}/\sigma_X\bigr)^2 \operatorname{Var}(X) = \sigma_Y^2 \rho_{X,Y}^2$, the following alternative expressions for the minimum MSE hold:

\[
\text{minimum MSE for linear estimation} = \sigma_Y^2 - \operatorname{Var}(L^*(X)) = E[Y^2] - E[L^*(X)^2]. \tag{4.35}
\]

In summary, the minimum mean square error linear estimator is given by (4.32) or (4.33), and the resulting minimum mean square error is given by (4.34) or (4.35).
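As a quick numerical check of (4.32)–(4.35), the sketch below simulates a joint distribution, fits the slope $a^* = \operatorname{Cov}(Y,X)/\operatorname{Var}(X)$, and compares the empirical MSE of $L^*(X)$ against both closed-form expressions. This example is not from the original text; the particular model ($Y = 2X + $ noise) and all variable names are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative model (an assumption for this demo, not from the text):
# X ~ N(1, 4), Y = 2X + N(0, 9), so Var(Y) = 25 and rho^2 = 0.64.
rng = np.random.default_rng(0)
n = 1_000_000
X = rng.normal(loc=1.0, scale=2.0, size=n)
Y = 2.0 * X + rng.normal(loc=0.0, scale=3.0, size=n)

mu_X, mu_Y = X.mean(), Y.mean()
var_X, var_Y = X.var(), Y.var()
cov_XY = np.cov(X, Y)[0, 1]
rho = cov_XY / np.sqrt(var_X * var_Y)

# Optimal slope a* = Cov(Y, X) / Var(X), plugged into (4.32).
a_star = cov_XY / var_X
L_star = mu_Y + a_star * (X - mu_X)

empirical_mse = np.mean((Y - L_star) ** 2)
predicted_mse = var_Y * (1.0 - rho**2)                     # equation (4.34)
alternative_mse = np.mean(Y**2) - np.mean(L_star**2)       # equation (4.35)

print(f"empirical MSE:        {empirical_mse:.4f}")
print(f"sigma_Y^2 (1-rho^2):  {predicted_mse:.4f}")
print(f"E[Y^2] - E[L*^2]:     {alternative_mse:.4f}")
```

With a million samples the three printed values agree to a few decimal places: here $\operatorname{Var}(Y) = 25$ and $\rho_{X,Y}^2 = 0.64$, so the minimum MSE is $25(1 - 0.64) = 9$, which is exactly the variance of the noise that no linear function of $X$ can explain.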