lecture18 - Stat 302 Introduction to Probability, Jiahua Chen

Stat 302, Introduction to Probability
Jiahua Chen
January-April 2011

Lecture 18
Conditional means and variance

Let us use the previous example once more. The conditional pmf of X given the various values of Y is as follows:

                     y = 0     y = 1      y = 2     y = 3
P(X = 0 | Y = y)     0         3/168      4/79      3/8
P(X = 1 | Y = y)     5/35      45/168     45/79     5/8
P(X = 2 | Y = y)     0         90/168     30/79     0
P(X = 3 | Y = y)     30/35     30/168     0         0
P(Y = y)             35/290    168/290    79/290    8/290

We figured out that the pmf of the conditional mean is given by

                 y = 0     y = 1      y = 2     y = 3
E(X | Y = y)     95/35     315/168    105/79    5/8
P(Y = y)         35/290    168/290    79/290    8/290

In other words, E(X | Y) is a random variable whose pmf is given above.
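Both tables can be reproduced with a short script. This is an illustrative sketch: the joint counts below (out of 290) are not printed on these slides, but are reconstructed here from the conditional pmf table.

```python
from fractions import Fraction

# Joint counts n(x, y) out of 290, reconstructed from the conditional
# pmf table (an assumption: the slides do not show the joint pmf directly).
counts = {
    (0, 1): 3, (0, 2): 4, (0, 3): 3,
    (1, 0): 5, (1, 1): 45, (1, 2): 45, (1, 3): 5,
    (2, 1): 90, (2, 2): 30,
    (3, 0): 30, (3, 1): 30,
}
total = sum(counts.values())   # 290

# Marginal pmf of Y: P(Y = y)
p_y = {y: Fraction(sum(n for (x, v), n in counts.items() if v == y), total)
       for y in range(4)}

# Conditional mean E(X | Y = y) = sum over x of x * P(X = x | Y = y)
def cond_mean(y):
    n_y = sum(n for (x, v), n in counts.items() if v == y)
    return sum(Fraction(x * n, n_y) for (x, v), n in counts.items() if v == y)

for y in range(4):
    print(y, cond_mean(y), p_y[y])
```

Exact rational arithmetic (`Fraction`) reproduces the table entries such as E(X | Y = 0) = 95/35 and P(Y = 2) = 79/290 without rounding error.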
Conditional variance of X given Y = 2

We computed

E(X^2 | Y = 2) = 0^2 × 4/79 + 1^2 × 45/79 + 2^2 × 30/79 = 165/79,

and found

var(X | Y = 2) = E(X^2 | Y = 2) − {E(X | Y = 2)}^2 = 165/79 − (105/79)^2 = 2010/6241.

Clearly, var(X | Y = y) is a function of y. We may then define var(X | Y) in the same fashion as E(X | Y).
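The same computation for every value of y can be sketched as follows, using the conditional pmf table from the previous slide (copied into the dictionary below):

```python
from fractions import Fraction

# Conditional pmf of X given Y = y, taken from the table above;
# only the nonzero entries are listed.
cond_pmf = {
    0: {1: Fraction(5, 35), 3: Fraction(30, 35)},
    1: {0: Fraction(3, 168), 1: Fraction(45, 168),
        2: Fraction(90, 168), 3: Fraction(30, 168)},
    2: {0: Fraction(4, 79), 1: Fraction(45, 79), 2: Fraction(30, 79)},
    3: {0: Fraction(3, 8), 1: Fraction(5, 8)},
}

def cond_var(y):
    pmf = cond_pmf[y]
    m1 = sum(x * p for x, p in pmf.items())        # E(X | Y = y)
    m2 = sum(x * x * p for x, p in pmf.items())    # E(X^2 | Y = y)
    return m2 - m1 * m1

print(cond_var(2))   # 2010/6241
```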
Conditional variance as a random variable

The pmf of var(X | Y):

                           y = 0          y = 1          y = 2          y = 3
var(X | Y = y)             0.4897959      0.5022321      0.3220638      0.234375
P(Y = y)                   35/290         168/290        79/290         8/290
var(X | Y = y) × P(Y = y)  0.059113298    0.290948251    0.087734621    0.006465517

Summing up the last row, we get E{var(X | Y)} = 0.4442617.

Recall that

                 y = 0     y = 1      y = 2     y = 3
E(X | Y = y)     95/35     315/168    105/79    5/8
P(Y = y)         35/290    168/290    79/290    8/290

From this table we get var{E(X | Y)} = 0.2025873.
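The two decimal summaries above come from straightforward weighted sums over the tables; a minimal sketch, using the table values directly:

```python
# Table values from the slide: P(Y = y), var(X | Y = y), E(X | Y = y).
p_y = [35/290, 168/290, 79/290, 8/290]
v   = [0.4897959, 0.5022321, 0.3220638, 0.234375]
e   = [95/35, 315/168, 105/79, 5/8]

e_var = sum(vi * pi for vi, pi in zip(v, p_y))              # E{var(X|Y)}
mu    = sum(ei * pi for ei, pi in zip(e, p_y))              # E{E(X|Y)} = E(X)
var_e = sum((ei - mu) ** 2 * pi for ei, pi in zip(e, p_y))  # var{E(X|Y)}

print(round(e_var, 7), round(var_e, 7))   # 0.4442617 0.2025873
```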
Verifying the formula

We have E{var(X | Y)} = 0.4442617 and var{E(X | Y)} = 0.2025873. Hence,

E{var(X | Y)} + var{E(X | Y)} = 0.4442617 + 0.2025873 = 0.646849.

Recall that the marginal pmf of X is given by

            x = 0     x = 1      x = 2      x = 3
P(X = x)    10/290    100/290    120/290    60/290

From it we find var(X) = 0.646849. We have thus verified, for this example, that

var(X) = E{var(X | Y)} + var{E(X | Y)}.
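The identity can also be verified exactly, with no rounding at all, by working from the joint distribution in rational arithmetic. As before, the joint counts are an assumption reconstructed from the conditional pmf table:

```python
from fractions import Fraction

# Joint counts n(x, y) out of 290, reconstructed from the conditional
# pmf table earlier in the lecture (an assumption; the slides do not
# print the joint pmf directly).
counts = {(0, 1): 3, (0, 2): 4, (0, 3): 3,
          (1, 0): 5, (1, 1): 45, (1, 2): 45, (1, 3): 5,
          (2, 1): 90, (2, 2): 30,
          (3, 0): 30, (3, 1): 30}
total = sum(counts.values())

def mean(pmf):
    return sum(v * p for v, p in pmf.items())

def variance(pmf):
    m = mean(pmf)
    return sum((v - m) ** 2 * p for v, p in pmf.items())

# Marginal pmfs of X and Y
px, py = {}, {}
for (x, y), n in counts.items():
    px[x] = px.get(x, 0) + Fraction(n, total)
    py[y] = py.get(y, 0) + Fraction(n, total)

# Conditional pmf of X given each y, then E(X | Y = y) and var(X | Y = y)
cond = {y: {x: Fraction(n, total) / py[y]
            for (x, v), n in counts.items() if v == y} for y in py}
e_given = {y: mean(cond[y]) for y in py}
v_given = {y: variance(cond[y]) for y in py}

e_var = sum(v_given[y] * py[y] for y in py)               # E{var(X|Y)}
mu = sum(e_given[y] * py[y] for y in py)                  # E{E(X|Y)} = E(X)
var_e = sum((e_given[y] - mu) ** 2 * py[y] for y in py)   # var{E(X|Y)}

# Both sides agree exactly: var(X) = 544/841 = 0.646849...
print(e_var + var_e, variance(px))
```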
New topic: the Law of Large Numbers

If a die is fair, we expect that after it is rolled 6000 times, each face will show up about 1000 times. Yet no matter how wishful we are, the actual counts will likely differ slightly.

Let X be the outcome of a single roll. To what degree is P(X = i) = 1/6 a sensible summary of our intuition?
Weak Law of Large Numbers

Let X_1, X_2, ..., X_n be the outcomes of n independent rolls of a fair die. Let Y_i = 1 if X_i = 3 and Y_i = 0 otherwise. Our intuition says that

(Y_1 + Y_2 + ... + Y_n) / n ≈ P(X = 3),

and the precision of the approximation gets better as n increases.
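This intuition is easy to check by simulation. A minimal sketch (not part of the lecture; the function name and seed are my own choices):

```python
import random

# Simulate n rolls of a fair die and return the proportion of 3s.
# With a fixed seed the result is reproducible.
def proportion_of_threes(n, seed=2011):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.randint(1, 6) == 3)
    return hits / n

for n in (100, 10_000, 1_000_000):
    # The proportion should settle near P(X = 3) = 1/6 ≈ 0.1667 as n grows.
    print(n, proportion_of_threes(n))
```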