Stat 150 Stochastic Processes, Spring 2009
Lecture 2: Conditional Expectation
Lecturer: Jim Pitman

Some useful facts (assume all random variables here have finite mean square):

1. E(Y g(X) | X) = g(X) E(Y | X).
2. Y - E(Y | X) is orthogonal to E(Y | X), and orthogonal also to g(X) for every measurable function g.

Since E(Y | X) is a measurable function of X, this characterizes E(Y | X) as the orthogonal projection of Y onto the linear space of all square-integrable random variables of the form g(X) for some measurable function g. Put another way, g(X) = E(Y | X) minimizes the mean square prediction error E[(Y - g(X))^2] over all measurable functions g.

[Figure: Y, its projection E(Y | X) onto the plane of functions g(X), and the residual Y - E(Y | X) form a right triangle, with the residual perpendicular to the plane.]

These facts can all be checked by computations as follows.

Check orthogonality:

E[(Y - E(Y | X)) g(X)]
  = E(g(X) Y - g(X) E(Y | X))
  = E(g(X) Y) - E(g(X) E(Y | X))
  = E(E(g(X) Y | X)) - E(g(X) E(Y | X))    [since E(Z) = E(E(Z | X))]
  = E(g(X) E(Y | X)) - E(g(X) E(Y | X))    [by fact 1]
  = 0.

Recall: Var(Y) = E[(Y - E(Y))^2] and Var(Y | X) = E([Y - E(Y | X)]^2 | X).
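As a concrete check, here is a small numerical sketch (not from the notes; the distributions of X and Y and the test functions g are invented for illustration). For a discrete X, the empirical analogue of E(Y | X) is the within-group average of Y at each level of X, and both the orthogonality and minimum-MSE properties can be verified on the sample:

```python
import numpy as np

# Synthetic data (illustrative assumption): X takes values 0, 1, 2 and
# Y = X^2 + noise, so the true conditional mean is E(Y | X) = X^2.
rng = np.random.default_rng(0)
n = 100_000
X = rng.integers(0, 3, size=n)
Y = X.astype(float) ** 2 + rng.normal(size=n)

# Empirical E(Y | X): average Y within each level of X.
cond_mean = {x: Y[X == x].mean() for x in np.unique(X)}
E_Y_given_X = np.array([cond_mean[x] for x in X])
residual = Y - E_Y_given_X

# Orthogonality: the residual has (empirically) zero inner product with
# E(Y | X) itself and with other functions g(X).
for g in (E_Y_given_X, X, np.sin(X)):
    print(f"E[residual * g(X)] = {np.mean(residual * g):.2e}")  # all ~ 0

# Minimum mean square prediction error: no other function of X beats
# the conditional mean on this sample.
mse_proj = np.mean(residual ** 2)
mse_other = np.mean((Y - (1.0 + X)) ** 2)  # some other g(X) = 1 + X
print(mse_proj <= mse_other)  # True
```

The within-group mean minimizes the within-group sum of squares, which is exactly the finite-sample version of the projection property stated above.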