STA 3007 Applied Probability 2005
Tutorial 1 Suggested Solution

1. Review on Probability

i.

1. The probability mass function of Z:

   p_Z(1) = (1/2)(1/6) = 1/12
   p_Z(2) = (1/2)(1/3) = 1/6
   p_Z(3) = (1/2)(1/2) = 1/4
   p_Z(4) = (1/2)(1/6) = 1/12
   p_Z(5) = (1/2)(1/3) = 1/6
   p_Z(6) = (1/2)(1/2) = 1/4

2. Expectation of Z:

   E[Z] = Σ_{z=1}^{6} z p_Z(z)
        = 1(1/12) + 2(1/6) + 3(1/4) + 4(1/12) + 5(1/6) + 6(1/4)
        ≈ 3.833

3. Variance of Z:

   Var[Z] = E[Z^2] - (E[Z])^2

   E[Z^2] = Σ_{z=1}^{6} z^2 p_Z(z)
          = 1^2(1/12) + 2^2(1/6) + 3^2(1/4) + 4^2(1/12) + 5^2(1/6) + 6^2(1/4)
          = 17.5

   Var[Z] = 17.5 - (3.833)^2 ≈ 2.806

ii. Let N be the number of flips required until the first head appears, and define

   Y = 1 if the first flip results in a head,
   Y = 0 if the first flip results in a tail.

By the tower property of expectation, E[X] = E_Y[ E[X | Y] ] = Σ_y E[X | Y = y] Pr{Y = y}. Applying this with X = N:

   E[N] = Σ_y E[N | Y = y] Pr{Y = y}
        = E[N | Y = 1] Pr{Y = 1} + E[N | Y = 0] Pr{Y = 0}
        = E[N | Y = 1] p + E[N | Y = 0] (1 - p).

If the first flip is a head, then N = 1; if it is a tail, one flip has been spent and the process starts over, so

   E[N | Y = 1] = 1,   E[N | Y = 0] = 1 + E[N].

Hence

   E[N] = p + (1 + E[N])(1 - p) = 1 + E[N] - p E[N],

and solving gives E[N] = 1/p.
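As a quick numerical check of part 1(i), the following Python sketch recomputes E[Z], E[Z^2] and Var[Z] directly from the mass function above; nothing beyond the stated probabilities is assumed.

    from fractions import Fraction as F

    # probability mass function of Z from part 1(i)
    pmf = {1: F(1, 12), 2: F(1, 6), 3: F(1, 4),
           4: F(1, 12), 5: F(1, 6), 6: F(1, 4)}
    assert sum(pmf.values()) == 1                 # sanity check: probabilities sum to 1

    EZ   = sum(z * p for z, p in pmf.items())     # E[Z]   = 23/6   ~ 3.833
    EZ2  = sum(z**2 * p for z, p in pmf.items())  # E[Z^2] = 35/2   = 17.5
    VarZ = EZ2 - EZ**2                            # Var[Z] = 101/36 ~ 2.806
    print(float(EZ), float(EZ2), float(VarZ))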

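The identity E[N] = 1/p from part 1(ii) can also be checked by simulation. A minimal sketch follows; the value p = 0.3 is an arbitrary choice for illustration, not part of the problem.

    import random

    def flips_until_first_head(p):
        # Simulate coin flips with head probability p; return the number of flips used.
        n = 1
        while random.random() >= p:   # tails with probability 1 - p
            n += 1
        return n

    p = 0.3                            # illustrative value only
    trials = 100_000
    avg = sum(flips_until_first_head(p) for _ in range(trials)) / trials
    print(avg, 1 / p)                  # the sample mean should be close to 1/p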

iii. Using the law of total probability, we obtain

   Pr{X = k} = Σ_{n=0}^{∞} p_{X|N}(k | n) p_N(n)
             = Σ_{n=1}^{∞} [(n + k - 1)! / (k!(n - 1)!)] p^n (1 - p)^k (1 - β) β^(n-1)
             = (1 - β)(1 - p)^k p Σ_{n=1}^{∞} C(n + k - 1, k) (βp)^(n-1)
             = (1 - β)(1 - p)^k p (1 - βp)^(-(k+1))
             = [(p - βp)/(1 - βp)] [(1 - p)/(1 - βp)]^k,     k = 0, 1, ...

Therefore X has a geometric distribution with parameter (1 - p)/(1 - βp); that is, Pr{X = k} = (1 - π) π^k with π = (1 - p)/(1 - βp).

2. Introduction to Markov Chains

i. State space: {0, 1, 2, 3}, where state i is the number of white balls in Box 1:

   State   Box 1     Box 2
     0     3B        3W
     1     1W, 2B    1B, 2W
     2     2W, 1B    2B, 1W
     3     3W        3B

   The transition probability matrix is

   P =  [  0     1     0     0  ]
        [ 1/9   4/9   4/9    0  ]
        [  0    4/9   4/9   1/9 ]
        [  0     0     1     0  ]

ii. By conditional probability and the Markov property,

   Pr{X_0 = 0, X_1 = 1, X_2 = 2}
     = Pr{X_2 = 2 | X_0 = 0, X_1 = 1} Pr{X_0 = 0, X_1 = 1}          (conditional probability)
     = Pr{X_2 = 2 | X_1 = 1} Pr{X_1 = 1 | X_0 = 0} Pr{X_0 = 0}      (Markov property)
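The closed form obtained in part 1(iii) can be sanity-checked by truncating the series numerically. In the sketch below, β = 0.4 and p = 0.6 are arbitrary illustrative values, not taken from the problem.

    from math import comb

    beta, p = 0.4, 0.6   # illustrative parameter values (assumed, not from the problem)

    def mixture_pmf(k, terms=500):
        # sum over n >= 1 of C(n+k-1, k) p^n (1-p)^k (1-beta) beta^(n-1), truncated
        return sum(comb(n + k - 1, k) * p**n * (1 - p)**k * (1 - beta) * beta**(n - 1)
                   for n in range(1, terms))

    def geometric_pmf(k):
        pi = (1 - p) / (1 - beta * p)        # the claimed geometric parameter
        return (1 - pi) * pi**k

    for k in range(5):
        print(k, mixture_pmf(k), geometric_pmf(k))   # the two columns should agree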
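The matrix P in part 2(i) can be rebuilt programmatically, assuming the natural dynamics for this state table: one ball is drawn uniformly at random from each box and the two balls are exchanged. That rule is an assumption here (the problem statement is not included in this preview), but it reproduces the matrix given above.

    from fractions import Fraction as F

    # State i = number of white balls in Box 1; each box always holds 3 balls.
    # Assumed move: draw one ball uniformly at random from each box and swap them.
    def transition_row(i):
        pw1 = F(i, 3)        # P(ball drawn from Box 1 is white)
        pw2 = F(3 - i, 3)    # Box 2 holds 3 - i white balls
        row = [F(0)] * 4
        if i > 0:
            row[i - 1] = pw1 * (1 - pw2)                  # white out, black in
        row[i] = pw1 * pw2 + (1 - pw1) * (1 - pw2)        # colours match, state unchanged
        if i < 3:
            row[i + 1] = (1 - pw1) * pw2                  # black out, white in
        return row

    P = [transition_row(i) for i in range(4)]
    for row in P:
        print([str(x) for x in row])    # reproduces the matrix in part 2(i)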
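For part 2(ii), once P is known the conditional part of the path probability can be read off the matrix entries. Pr{X_0 = 0} depends on the initial distribution, which is not given in this preview, so only the conditional factor is computed below.

    # P as given in part 2(i); rows and columns are indexed by states 0..3
    P = [[0,     1,    0,    0],
         [1/9, 4/9,  4/9,    0],
         [0,   4/9,  4/9,  1/9],
         [0,     0,    1,    0]]

    # Pr{X1 = 1, X2 = 2 | X0 = 0} = P[0][1] * P[1][2]   (Markov property)
    cond_path = P[0][1] * P[1][2]
    print(cond_path)   # 1 * 4/9 = 4/9
    # Multiplying by Pr{X0 = 0} from the initial distribution gives the joint probability.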
