ECE 6010 Lecture 9: Linear Minimum Mean-Square Error Filtering


Background

Recall that for random variables X and Y with finite variance, the mean-squared error (MSE) E[(X - h(Y))^2] is minimized by h(Y) = E[X | Y]. That is, the best estimate of X from a measured value of Y is the conditional mean of X given Y. A key property of this estimate:

The error is orthogonal to the data. More precisely, the error X - E[X | Y] is orthogonal to Y and to every function of Y:

    E[(X - E[X | Y]) g(Y)] = 0

for all measurable functions g. (We assume throughout that E[g^2(Y)] < ∞.)

We want to show that h minimizes E[(X - h(Y))^2] if and only if

    E[(X - h(Y)) g(Y)] = 0    (orthogonality)

for all measurable g such that E[g^2(Y)] < ∞. In one direction, conditioning on Y and using the tower property of conditional expectation,

    E[(X - E[X | Y]) g(Y)] = E[ E[(X - E[X | Y]) | Y] g(Y) ] = E[ (E[X | Y] - E[X | Y]) g(Y) ] = 0.

Conversely, suppose that for some g, E[(X - h(Y)) g(Y)] ≠ 0. Consider the estimate

    h~(Y) = h(Y) + ε g(Y),    where ε = E[(X - h(Y)) g(Y)] / E[g^2(Y)].

Then

    E[(X - h~(Y))^2] = E[(X - h(Y))^2] - (E[(X - h(Y)) g(Y)])^2 / E[g^2(Y)] < E[(X - h(Y))^2],

so h is not optimal.

Suppose now we are given two random processes {X_t} and {Y_t} that are statistically related (that is, not independent). Suppose, to begin, that T = R. We observe Y over the interval [a, b], and based on the information gained we want to estimate X_t for some fixed t as a function of {Y_s, a ≤ s ≤ b}. That is, we form

    X^_t = f({Y_s, a ≤ s ≤ b})

for some functional f mapping functions to real numbers.

If t < b: we say that the operation is smoothing.
If t = b: we say that the operation is filtering.
If t > b: we say that the operation is prediction.

The error in the estimate is X_t - X^_t. The mean-squared error is E[(X_t - X^_t)^2].
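The orthogonality argument above can be checked numerically. The following Monte Carlo sketch is not from the lecture; the setup X = Y^2 + W (with W independent zero-mean noise, so that E[X | Y] = Y^2) is a hypothetical example chosen so the conditional mean is known in closed form. It verifies that the error of the MMSE estimator is uncorrelated with functions of Y, and that perturbing the estimator in the direction of any g(Y) increases the MSE.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
Y = rng.normal(size=n)
W = rng.normal(size=n)            # independent of Y, zero mean
X = Y**2 + W                      # by construction, E[X | Y] = Y**2

err = X - Y**2                    # error of the MMSE estimator h(Y) = E[X | Y]

# Orthogonality: sample correlation of the error with several functions g(Y)
for g in (Y, np.sin(Y), Y**3):
    print(np.mean(err * g))       # each is approximately 0

# Perturbing the estimator, h~(Y) = Y**2 + eps*g(Y), can only raise the MSE
mse_opt = np.mean(err**2)
mse_pert = np.mean((X - (Y**2 + 0.3 * Y))**2)
print(mse_opt < mse_pert)         # True
```

With g(Y) = 0.3 Y the perturbed MSE is approximately E[W^2] + 0.09 E[Y^2] ≈ 1.09, versus ≈ 1 for the optimum, matching the strict inequality in the proof.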
Fact (built on our previous intuition): The MSE E[(X_t - X^_t)^2] is minimized by the conditional expectation X^_t = E[X_t | Y_s, a ≤ s ≤ b]. Furthermore, the orthogonality principle applies: X_t - E[X_t | Y_s, a ≤ s ≤ b] is orthogonal to every function of {Y_s, a ≤ s ≤ b}. While we know the theoretical result, it is difficult in general to compute the desired conditional expectation...
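Since the conditional expectation is hard to compute in general, the lecture's title points toward restricting attention to *linear* estimators. As a preview, here is a minimal numerical sketch (the setup and variable names are my own, not from the notes): the best linear estimator X^ = aY + b is built from second-order moments alone, and its error is orthogonal to Y and to constants, though not necessarily to every function of Y.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
Y = rng.normal(size=n)
X = 2.0 * Y + 1.0 + rng.normal(size=n)    # X statistically related to Y

# Best linear estimator Xhat = a*Y + b, from means and covariances only
a = np.cov(X, Y)[0, 1] / np.var(Y)        # a = Cov(X, Y) / Var(Y)
b = X.mean() - a * Y.mean()
Xhat = a * Y + b

err = X - Xhat
# Orthogonality for the linear problem: error uncorrelated with Y and with 1
print(np.mean(err * Y))                   # approximately 0
print(np.mean(err))                       # approximately 0
```

The linear estimator only enforces orthogonality to the span of {1, Y}, which is why it can be computed from covariances without knowing the full joint distribution.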

This note was uploaded on 03/01/2012 for the course ECE 6010, taught by Professor M. Stites during the Spring '08 term at Utah State University.

