# EE 503 Homework 5 Solutions

**1. a)** Let $X$ denote the number of white balls and $Y$ the number of black balls. Then

$$P_{X|Y}(x \mid 3) = P\{X = x \mid Y = 3\}.$$

Given $Y = 3$, we are left with 3 more selections from the 3 white and 6 red balls. Therefore, the probability of drawing $x$ white balls out of those 3 white and 6 red is

$$P\{X = x \mid Y = 3\} = \frac{\binom{3}{x}\binom{6}{3-x}}{\binom{9}{3}}, \qquad x \in \{0, 1, 2, 3\}.$$

**b)**

$$E[X \mid Y = 1] = \sum_{x=0}^{3} x\, P\{X = x \mid Y = 1\}.$$

As in part a),

$$P\{X = x \mid Y = 1\} = \frac{\binom{3}{x}\binom{6}{5-x}}{\binom{9}{5}}, \qquad x \in \{0, 1, 2, 3\},$$

so

$$E[X \mid Y = 1] = 1 \cdot \frac{45}{126} + 2 \cdot \frac{60}{126} + 3 \cdot \frac{15}{126} = \frac{5}{3}.$$

**2.** Let $N_i$ denote the time until the same outcome occurs $i$ consecutive times, where each trial takes one of $m$ equally likely values. Conditioning on $N_{i-1}$, we have $E[N_i] = E\big[E[N_i \mid N_{i-1}]\big]$. Now,

$$E[N_i \mid N_{i-1}] =
\begin{cases}
N_{i-1} + 1 & \text{with probability } 1/m, \\
N_{i-1} + E[N_i] & \text{with probability } (m-1)/m.
\end{cases}$$

This follows because, after a run of $i-1$, either a run of $i$ is attained if the next trial is the same type as those in the run, or else, if the next trial is different, it is exactly as if we were starting all over at that point. Hence

$$E[N_i] = \frac{1}{m}\big(E[N_{i-1}] + 1\big) + \frac{m-1}{m}\big(E[N_{i-1}] + E[N_i]\big)
= E[N_{i-1}] + \frac{1}{m} + \frac{m-1}{m}\,E[N_i],$$

and solving for $E[N_i]$ gives

$$E[N_i] = 1 + m\,E[N_{i-1}].$$
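Before solving the recursion, it can be sanity-checked by simulation. The sketch below compares a Monte Carlo estimate of $E[N_i]$ against the value the recursion predicts; the choices $m = 3$ and $i = 3$ are illustrative, not part of the original solution:

```python
import random

def time_until_run(m, i, rng):
    """Number of trials until some outcome occurs i consecutive times,
    with each trial uniform over m equally likely outcomes."""
    run_len = 0
    last = None
    trials = 0
    while run_len < i:
        x = rng.randrange(m)
        trials += 1
        # extend the current run if the outcome repeats, else restart it
        run_len = run_len + 1 if x == last else 1
        last = x
    return trials

rng = random.Random(0)
m, i = 3, 3

# Monte Carlo estimate of E[N_i]
est = sum(time_until_run(m, i, rng) for _ in range(200_000)) / 200_000

# Value predicted by the recursion E[N_i] = 1 + m*E[N_{i-1}], with E[N_1] = 1
e = 1.0
for _ in range(i - 1):
    e = 1 + m * e

print(est, e)  # both close to 13 for m = 3, i = 3
```

For $m = 3$, the recursion gives $E[N_2] = 4$ and $E[N_3] = 13$, and the simulated average agrees to within Monte Carlo error.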
Solving recursively now yields

$$E[N] = E[N_k] = 1 + mE[N_{k-1}] = 1 + m\big(1 + mE[N_{k-2}]\big) = 1 + m + m^2 E[N_{k-2}]
= \cdots = 1 + m + m^2 + \cdots + m^{k-1}E[N_1] = \frac{m^k - 1}{m - 1},$$

since $E[N_1] = 1$.

**3.** Conditioning on the first flip,

$$E[X] = p\,E[X \mid H] + (1-p)\,E[X \mid T].$$

Since an initial $H$ cannot begin the pattern $TTH$, we have $E[X \mid H] = 1 + E[X]$, so

$$E[X] = p\big(1 + E[X]\big) + (1-p)\,E[X \mid T].$$

Conditioning on the next flip,

$$E[X \mid T] = p\,E[X \mid TH] + (1-p)\,E[X \mid TT]
= p\big(2 + E[X]\big) + (1-p)\left(2 + \frac{1}{p}\right),$$

where $E[X \mid TH] = 2 + E[X]$ follows from the fact that once we get an $H$ before any $TT$ occurs, we are back where we started, and $E[X \mid TT] = 2 + 1/p$ follows from the fact that we then only need another $H$, which requires $1/p$ flips on average. Thus, we have

$$E[X] = p\big(1 + E[X]\big) + (1-p)\left[p\big(2 + E[X]\big) + (1-p)\left(2 + \frac{1}{p}\right)\right],$$

and solving for $E[X]$ yields

$$E[X] = \frac{1}{p} + \frac{2 - p}{(1-p)^2} = \frac{1}{p(1-p)^2}.$$
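As a check on the conditioning argument, the waiting time for $TTH$ can be simulated and compared with the closed form $1/\big(p(1-p)^2\big)$ obtained by solving the equations above. A minimal sketch; $p = 0.5$ is an illustrative choice, for which the expected wait is the well-known value 8:

```python
import random

def flips_until_tth(p, rng):
    """Flip a coin with P(H) = p until the pattern T, T, H appears;
    return the number of flips used."""
    recent = []
    n = 0
    while recent[-3:] != ['T', 'T', 'H']:
        recent.append('H' if rng.random() < p else 'T')
        n += 1
    return n

rng = random.Random(1)
p = 0.5

# Monte Carlo average of the waiting time
avg = sum(flips_until_tth(p, rng) for _ in range(200_000)) / 200_000

# Closed form from the conditioning argument; equals 8 when p = 0.5
exact = 1 / (p * (1 - p) ** 2)

print(avg, exact)  # avg is close to 8; exact = 8.0
```

Since $TTH$ has no self-overlap, the closed form also matches the standard pattern-matching result $1/\big(P(T)^2 P(H)\big)$.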