
# Lecture Notes 5: Mean Square Error Estimation


## Outline

- Minimum MSE Estimation
- Linear Estimation
- Jointly Gaussian Random Variables

*EE 278: Mean Square Error Estimation*

## Minimum MSE Estimation

Consider the following signal processing problem (block diagram: a signal $X$ with pdf $f_X(x)$ enters a noisy channel with conditional pdf $f_{Y|X}(y \mid x)$; the channel output $Y$ feeds an estimator that produces $\hat{X} = g(Y)$):

- $X$ is a signal with known statistics, i.e., known pdf $f_X(x)$
- The signal is transmitted (or stored) over a noisy channel with known statistics, i.e., known conditional pdf $f_{Y|X}(y \mid x)$
- We observe $Y$ and wish to find the estimate $\hat{X} = g(Y)$ of $X$ that minimizes the mean square error
  $$\mathrm{MSE} = \mathrm{E}\big[(X - \hat{X})^2\big] = \mathrm{E}\big[(X - g(Y))^2\big]$$
- The $\hat{X}$ that achieves the minimum MSE is called the minimum MSE (MMSE) estimate of $X$ given $Y$
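A short Monte Carlo sketch of this setup may help fix ideas. It is not part of the notes: the additive Gaussian channel, the parameter values, and the variable names are illustrative assumptions. For jointly Gaussian $X$ and $Y = X + Z$ (a case the notes treat later), the MMSE estimator has the known closed form $\mathrm{E}(X \mid Y) = \frac{\sigma_X^2}{\sigma_X^2 + \sigma_Z^2} Y$, which we compare against using the raw observation $Y$ as the estimate:

```python
import random

random.seed(0)
sigma_x, sigma_z = 1.0, 0.5   # signal and noise std deviations (assumed)
n = 200_000

# For this Gaussian model, E(X | Y) = coef * Y (closed form, derived later in the notes)
coef = sigma_x**2 / (sigma_x**2 + sigma_z**2)

mse_mmse = mse_raw = 0.0
for _ in range(n):
    x = random.gauss(0.0, sigma_x)        # signal X ~ N(0, sigma_x^2)
    y = x + random.gauss(0.0, sigma_z)    # noisy observation Y = X + Z
    mse_mmse += (x - coef * y) ** 2       # error of the MMSE estimate
    mse_raw += (x - y) ** 2               # error of the naive estimate Xhat = Y
mse_mmse /= n
mse_raw /= n

# Theory: MMSE = sigma_x^2 sigma_z^2 / (sigma_x^2 + sigma_z^2) = 0.2,
# while the naive estimate has MSE = E[Z^2] = 0.25
print(mse_mmse, mse_raw)
```

The simulation should show the conditional-expectation estimator beating the naive one, matching the theoretical MMSE of $0.2$ versus $0.25$.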

## MMSE Estimate

**Theorem:** The MMSE estimate of $X$ given the observation $Y$ and complete knowledge of the joint pdf $f_{X,Y}(x, y)$ is
$$\hat{X} = \mathrm{E}(X \mid Y),$$
and the MSE of $\hat{X}$, i.e., the minimum MSE, is
$$\mathrm{MMSE} = \mathrm{E}_Y\big(\mathrm{Var}(X \mid Y)\big) = \mathrm{E}(X^2) - \mathrm{E}\big[(\mathrm{E}(X \mid Y))^2\big]$$

Properties of the minimum MSE estimator:

- Since $\mathrm{E}(\hat{X}) = \mathrm{E}_Y[\mathrm{E}(X \mid Y)] = \mathrm{E}(X)$, the best MSE estimate is unbiased
- If $X$ and $Y$ are independent, then the best MSE estimate is $\mathrm{E}(X)$
- The conditional expectation of the estimation error, $\mathrm{E}\big[(X - \hat{X}) \mid Y = y\big]$, is $0$ for all $y$, i.e., the error is unbiased for every $Y = y$
- The estimation error and the estimate are "orthogonal":
  $$\mathrm{E}\big[(X - \hat{X})\hat{X}\big] = \mathrm{E}_Y\big[\mathrm{E}\big((X - \hat{X})\hat{X} \mid Y\big)\big] = \mathrm{E}_Y\big[\hat{X}\,\mathrm{E}\big((X - \hat{X}) \mid Y\big)\big] = \mathrm{E}_Y\big[\hat{X}\big(\mathrm{E}(X \mid Y) - \hat{X}\big)\big] = 0$$
  In fact, the estimation error is orthogonal to any function $g(Y)$ of $Y$
- From the law of conditional variance,
  $$\mathrm{Var}(X) = \mathrm{Var}(\hat{X}) + \mathrm{E}\big(\mathrm{Var}(X \mid Y)\big),$$
  i.e., the sum of the variance of the estimate and the minimum MSE is equal to the variance of the signal
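The orthogonality property and the conditional variance law can both be checked numerically. The sketch below is illustrative and not from the notes; it assumes a binary signal $X = \pm 1$ (equiprobable) in additive Gaussian noise, for which the standard result $\mathrm{E}(X \mid Y = y) = \tanh(y/\sigma^2)$ gives an exactly computable MMSE estimator:

```python
import math
import random

random.seed(0)
sigma = 1.0          # noise std deviation (assumed for this sketch)
n = 200_000

orth = mmse = sum_h = sum_h2 = 0.0
for _ in range(n):
    x = random.choice((-1.0, 1.0))            # X = ±1 equiprobable, so Var(X) = 1
    y = x + random.gauss(0.0, sigma)          # Y = X + Z, Z ~ N(0, sigma^2)
    xhat = math.tanh(y / sigma**2)            # E(X | Y = y) for this model
    err = x - xhat
    orth += err * xhat                        # accumulate E[(X - Xhat) Xhat]
    mmse += err * err                         # accumulate E[(X - Xhat)^2]
    sum_h += xhat
    sum_h2 += xhat * xhat
orth /= n
mmse /= n
var_xhat = sum_h2 / n - (sum_h / n) ** 2      # sample Var(Xhat)

# Orthogonality: E[(X - Xhat) Xhat] ≈ 0
# Conditional variance law: Var(Xhat) + MMSE ≈ Var(X) = 1
print(orth, var_xhat + mmse)
```

Both quantities should come out as predicted: the error–estimate correlation near zero, and the variance decomposition summing to $\mathrm{Var}(X) = 1$.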
**Proof of Theorem:** We first show that $\min_a \mathrm{E}\big[(X - a)^2\big] = \mathrm{Var}(X)$ and that the minimum is achieved for $a = \mathrm{E}(X)$, i.e., in the absence of any observations, the mean of $X$ is its minimum MSE estimate. To show this, consider
$$\begin{aligned}
\mathrm{E}\big[(X - a)^2\big] &= \mathrm{E}\big[(X - \mathrm{E}(X) + \mathrm{E}(X) - a)^2\big] \\
&= \mathrm{E}\big[(X - \mathrm{E}(X))^2\big] + (\mathrm{E}(X) - a)^2 + 2\,\mathrm{E}\big[X - \mathrm{E}(X)\big](\mathrm{E}(X) - a) \\
&= \mathrm{E}\big[(X - \mathrm{E}(X))^2\big] + (\mathrm{E}(X) - a)^2 \\
&\ge \mathrm{E}\big[(X - \mathrm{E}(X))^2\big],
\end{aligned}$$
where the cross term vanishes because $\mathrm{E}[X - \mathrm{E}(X)] = 0$. Equality holds if and only if $a = \mathrm{E}(X)$.

We use this result to show that $\mathrm{E}(X \mid Y)$ is the MMSE estimate of $X$ given $Y$. First write, by iterated expectation,
$$\mathrm{E}\big[(X - g(Y))^2\big] = \mathrm{E}_Y\big[\mathrm{E}_X\big((X - g(Y))^2 \mid Y\big)\big]$$
From the previous result we know that for each $Y = y$ the minimum value of $\mathrm{E}_X\big[(X - g(y))^2 \mid Y = y\big]$ is obtained when $g(y) = \mathrm{E}(X \mid Y = y)$. Therefore the overall MSE is minimized for $g(Y) = \mathrm{E}(X \mid Y)$. In fact, $\mathrm{E}(X \mid Y)$ minimizes the MSE conditioned on every $Y = y$, not just its average over $Y$.

To find the minimum MSE, consider
$$\mathrm{E}\big[(X - \mathrm{E}(X \mid Y))^2\big] = \mathrm{E}_Y\big(\mathrm{E}_X\big[(X - \mathrm{E}(X \mid Y))^2 \mid Y\big]\big) = \mathrm{E}_Y\big(\mathrm{Var}(X \mid Y)\big)
$$
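The first step of the proof, that the constant $a = \mathrm{E}(X)$ minimizes $\mathrm{E}[(X - a)^2]$, is easy to verify empirically. The sketch below is illustrative, not from the notes; the distribution (Exponential with rate 1, so mean and variance both equal 1) and the grid are arbitrary choices:

```python
import random

random.seed(1)
n = 20_000
# Draw from a skewed distribution: Exponential(1) has mean 1 and variance 1
xs = [random.expovariate(1.0) for _ in range(n)]
mean_x = sum(xs) / n

def emp_mse(a):
    """Empirical mean square error of the constant estimate a."""
    return sum((x - a) ** 2 for x in xs) / n

# Sweep candidate constants a over a grid and keep the empirical minimizer;
# it should land at the grid point nearest the sample mean, and the minimum
# MSE should approximate Var(X) = 1
grid = [i * 0.05 for i in range(61)]   # a in [0, 3], step 0.05
best_a = min(grid, key=emp_mse)
print(best_a, emp_mse(best_a))
```

Since $\mathrm{E}[(X-a)^2]$ is exactly quadratic in $a$, the empirical minimizer snaps to the grid point closest to the sample mean, and the attained value approximates $\mathrm{Var}(X)$.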


*This note was uploaded on 11/28/2009 for the course EE 278, taught by Professor Balaji Prabhakar during the Fall '09 term at Stanford.*

