11. Expectations.

As in the one-variable case, we have
\[ E\,g(X,Y) = \sum_{x,y} g(x,y)\, p(x,y) \]
in the discrete case and
\[ E\,g(X,Y) = \iint g(x,y)\, f(x,y)\, dx\, dy \]
in the continuous case.

If we set g(x,y) = x + y, then
\[ E(X+Y) = \iint (x+y)\, f(x,y)\, dx\, dy = \iint x\, f(x,y)\, dx\, dy + \iint y\, f(x,y)\, dx\, dy. \]
If we now set g(x,y) = x, we see the first integral on the right is E X, and similarly the second is E Y. Therefore
\[ E(X+Y) = E X + E Y. \]

Proposition 11.1. If X and Y are independent, then
\[ E[h(X)k(Y)] = E\,h(X)\, E\,k(Y). \]
In particular, E(XY) = (E X)(E Y).

Proof. By the above with g(x,y) = h(x)k(y), and using independence to write f(x,y) = f_X(x) f_Y(y),
\begin{align*}
E[h(X)k(Y)] &= \iint h(x)k(y)\, f(x,y)\, dx\, dy = \iint h(x)k(y)\, f_X(x) f_Y(y)\, dx\, dy \\
&= \int h(x) f_X(x) \Big( \int k(y) f_Y(y)\, dy \Big)\, dx = \int h(x) f_X(x)\, (E\,k(Y))\, dx \\
&= E\,h(X)\, E\,k(Y).
\end{align*}

The covariance of two random variables X and Y is defined by

\[ \mathrm{Cov}(X,Y) = E[(X - E X)(Y - E Y)]. \]
As with the variance, expanding the product gives the shortcut formula
\[ \mathrm{Cov}(X,Y) = E(XY) - (E X)(E Y). \]
It follows that if X and Y are independent, then E(XY) = (E X)(E Y), and hence Cov(X,Y) = 0.

Note
\begin{align*}
\mathrm{Var}(X+Y) &= E[((X+Y) - E(X+Y))^2] = E[((X - E X) + (Y - E Y))^2] \\
&= E[(X - E X)^2 + 2(X - E X)(Y - E Y) + (Y - E Y)^2] \\
&= \mathrm{Var}\,X + 2\,\mathrm{Cov}(X,Y) + \mathrm{Var}\,Y.
\end{align*}
We have the following corollary.

Corollary 11.2. If X and Y are independent, then Var(X+Y) = Var X + Var Y.
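The identity E(X+Y) = E X + E Y and Proposition 11.1 are easy to check numerically. Below is a minimal Monte Carlo sketch; the particular distributions (uniform and exponential) are illustrative choices, not taken from the notes.

```python
import random

random.seed(0)
n = 200_000

# Independent samples: X ~ Uniform(0, 1), Y ~ Exponential(1)
# (illustrative choices of distributions).
xs = [random.random() for _ in range(n)]
ys = [random.expovariate(1.0) for _ in range(n)]

def mean(vs):
    return sum(vs) / len(vs)

ex, ey = mean(xs), mean(ys)
e_sum = mean([x + y for x, y in zip(xs, ys)])    # E(X + Y)
e_prod = mean([x * y for x, y in zip(xs, ys)])   # E(XY)

# E(X + Y) = E X + E Y holds exactly even for the empirical means,
# while E(XY) = (E X)(E Y) requires independence and holds only up to
# Monte Carlo error of order 1/sqrt(n).
print(e_sum - (ex + ey))
print(e_prod - ex * ey)
```

Note the asymmetry: linearity of expectation is an algebraic fact about sums, so the first difference is zero up to floating-point rounding, whereas the product rule genuinely uses independence and is only approximate for finite samples.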
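Both the shortcut formula Cov(X,Y) = E(XY) − (E X)(E Y) and the decomposition Var(X+Y) = Var X + 2 Cov(X,Y) + Var Y are algebraic identities, so they hold exactly even for empirical moments. A quick check with a deliberately correlated pair; the Gaussian setup is an illustrative choice, not from the notes.

```python
import random

random.seed(1)
n = 100_000

# Correlated pair: Y = X + independent noise, so Cov(X, Y) > 0
# (illustrative choice of distributions).
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [x + random.gauss(0.0, 1.0) for x in xs]

def mean(vs):
    return sum(vs) / len(vs)

def var(vs):
    m = mean(vs)
    return mean([(v - m) ** 2 for v in vs])

ex, ey = mean(xs), mean(ys)

# Definition vs. shortcut: E[(X - EX)(Y - EY)] = E(XY) - (EX)(EY).
cov_def = mean([(x - ex) * (y - ey) for x, y in zip(xs, ys)])
cov_short = mean([x * y for x, y in zip(xs, ys)]) - ex * ey

# Var(X + Y) = Var X + 2 Cov(X, Y) + Var Y.
lhs = var([x + y for x, y in zip(xs, ys)])
rhs = var(xs) + 2 * cov_def + var(ys)
```

Since Y is built from X plus noise, the covariance here is strictly positive, so the 2 Cov(X,Y) cross term makes Var(X+Y) strictly larger than Var X + Var Y; for independent variables that term vanishes, recovering the corollary.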