
Prelim 2 Solutions

This shows that the PMF of $X+Y$ is simply given by
$$P(X+Y=k) = \begin{cases} \dfrac{1}{4^n}\dbinom{2n}{k}, & \text{if } 0 \le k \le 2n, \\[4pt] 0, & \text{otherwise.} \end{cases}$$
That is, $X+Y$ is a Binomial$\left(2n, \tfrac{1}{2}\right)$ random variable.
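As a quick numerical sanity check (not part of the original solution), the sketch below convolves the PMFs of two independent Binomial$(n, \tfrac{1}{2})$ variables and confirms the result equals the Binomial$(2n, \tfrac{1}{2})$ PMF; the value $n = 5$ is an arbitrary choice, and exact rational arithmetic avoids floating-point noise.

```python
from fractions import Fraction
from math import comb

n = 5  # arbitrary test value

def binom_half_pmf(m, k):
    """Exact PMF of a Binomial(m, 1/2) variable at k."""
    return Fraction(comb(m, k), 2 ** m) if 0 <= k <= m else Fraction(0)

# Convolution: P(X + Y = k) = sum_i P(X = i) P(Y = k - i)
conv = [sum(binom_half_pmf(n, i) * binom_half_pmf(n, k - i)
            for i in range(n + 1))
        for k in range(2 * n + 1)]

# Claimed closed form: Binomial(2n, 1/2)
target = [binom_half_pmf(2 * n, k) for k in range(2 * n + 1)]

assert conv == target
assert sum(conv) == 1
```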

(c) Assume $0 \le i \le n$. Using the independence of $X$ and $Y$, we can write
$$P(X+Y=k \mid X=i) = P(Y=k-i \mid X=i) = P(Y=k-i).$$
The conditional PMF of $X+Y$ given $X$ is then
$$P(X+Y=k \mid X=i) = \begin{cases} \dbinom{n}{k-i}\dfrac{1}{2^n}, & \text{if } i \le k \le i+n, \\[4pt] 0, & \text{otherwise.} \end{cases}$$

(d) Assume $0 \le k \le 2n$. The conditional PMF of $X$ given $X+Y$ can be computed using Bayes' rule,
$$P(X=i \mid X+Y=k) = \frac{P(X+Y=k \mid X=i)\,P(X=i)}{P(X+Y=k)}.$$
Using the results in (b) and (c), we conclude that
$$P(X=i \mid X+Y=k) = \begin{cases} \dfrac{\dbinom{n}{i}\dbinom{n}{k-i}}{\dbinom{2n}{k}}, & \text{if } \max(0, k-n) \le i \le \min(n, k), \\[8pt] 0, & \text{otherwise.} \end{cases}$$
Note that, using the results in (b), this can also be written in the following form:
$$P(X=i \mid X+Y=k) = \begin{cases} \dfrac{\dbinom{n}{i}\dbinom{n}{k-i}}{\sum_{j=\max(0,\,k-n)}^{\min(k,\,n)} \dbinom{n}{j}\dbinom{n}{k-j}}, & \text{if } \max(0, k-n) \le i \le \min(n, k), \\[8pt] 0, & \text{otherwise.} \end{cases}$$

(e) By symmetry, $E[X \mid X+Y=k] = E[Y \mid X+Y=k]$. Using the linearity of conditional expectation,
$$E[X \mid X+Y=k] + E[Y \mid X+Y=k] = E[X+Y \mid X+Y=k] = E[k \mid X+Y=k] = k.$$
We therefore conclude that $E[X \mid X+Y=k] = \tfrac{k}{2}$. This is not surprising: the conditional PMF found in part (d) is symmetric about $\tfrac{k}{2}$, so its mean must be $\tfrac{k}{2}$.

(f) Note that for every choice of joint PMF of $X$ and $Y$ we must have
$$P(X+Y=2n) = P(X=n, Y=n) \le P(X=n).$$
If we take $Y = X$, then $P(X+Y=2n) = P(X=n)$. Therefore, the joint PMF
$$P_{XY}(l, k) = \begin{cases} P_X(l) = \dbinom{n}{l}\dfrac{1}{2^n}, & \text{if } 0 \le l = k \le n, \\[4pt] 0, & \text{otherwise} \end{cases}$$
maximizes $P(X+Y=2n)$.

(g) For every choice of joint PMF of $X$ and $Y$ we must have $P(X+Y=2n) \ge 0$. Now take $Y = n - X$; then $Y$ is also Binomial$\left(n, \tfrac{1}{2}\right)$. But since we always have $X+Y=n$, we get $P(X+Y=2n) = 0$. Hence, this choice minimizes $P(X+Y=2n)$.
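The hypergeometric conditional PMF from part (d) and the conditional mean from part (e) can be verified numerically; this is an illustrative check, not part of the original solution, and the values $n = 6$, $k = 7$ are arbitrary choices satisfying $0 \le k \le 2n$.

```python
from fractions import Fraction
from math import comb

n, k = 6, 7  # arbitrary test values with 0 <= k <= 2n

lo, hi = max(0, k - n), min(n, k)
# Conditional PMF of X given X + Y = k, as derived in part (d)
pmf = {i: Fraction(comb(n, i) * comb(n, k - i), comb(2 * n, k))
       for i in range(lo, hi + 1)}

# The probabilities sum to 1 (Vandermonde's identity)
assert sum(pmf.values()) == 1

# Part (e): the conditional mean is exactly k / 2
mean = sum(i * p for i, p in pmf.items())
assert mean == Fraction(k, 2)
```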
3. (a) We can write $K = \sum_{i=1}^{N} X_i$, where $X_i = 1$ if the $i$th student receives her own exam copy and $X_i = 0$ otherwise. Since the exam copies are handed out randomly, we have $P(X_i = 1) = \tfrac{1}{N}$. It is important to note that the random variables $X_1, \ldots, X_N$ are not independent: for instance, knowing that the first student received her own exam copy (i.e., $X_1 = 1$) increases the chance that the second student receives hers (i.e., $X_2 = 1$). We can now compute the mean of $K$ using the linearity of expectation:
$$E[K] = E\Big[\sum_{i=1}^{N} X_i\Big] = \sum_{i=1}^{N} E[X_i] = \sum_{i=1}^{N} \big(1 \times P(X_i=1) + 0 \times P(X_i=0)\big) = \sum_{i=1}^{N} \frac{1}{N} = 1.$$

(b) Markov's inequality gives
$$P(K \ge m) \le \frac{E[K]}{m} = \frac{1}{m}.$$

(c) The variance of $K$ is given by $\mathrm{Var}(K) = E[K^2] - E[K]^2 = E[K^2] - 1$. Now $E[K^2]$ is computed as follows:
$$E[K^2] = E\Big[\Big(\sum_{i=1}^{N} X_i\Big)^2\Big] = \sum_{i=1}^{N} E[X_i^2] + \sum_{\substack{1 \le i, j \le N \\ i \ne j}} E[X_i X_j].$$
Note that $X_i^2 = X_i$ and hence $E[X_i^2] = \tfrac{1}{N}$. Also,
$$E[X_i X_j] = P(X_i=1, X_j=1) = P(X_i=1 \mid X_j=1)\,P(X_j=1) = \frac{1}{N-1} \cdot \frac{1}{N}.$$
Indeed, given that the $j$th student received her own copy, the $i$th student has 1 chance out of $N-1$ of receiving hers (it is as if the class had only $N-1$ students). Proceeding further, we have
$$E[K^2] = \sum_{i=1}^{N} \frac{1}{N} + \sum_{\substack{1 \le i, j \le N \\ i \ne j}} \frac{1}{N-1} \cdot \frac{1}{N} = 1 + (N^2 - N)\,\frac{1}{N^2 - N} = 2.$$
Therefore $\mathrm{Var}(K) = 2 - 1 = 1$.
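The derivation above can be checked by exhaustively enumerating all possible hand-outs (i.e., all permutations) for a small class; this is a sketch added for verification, and $N = 6$ is an arbitrary class size small enough to enumerate all $N!$ assignments exactly.

```python
from fractions import Fraction
from itertools import permutations

N = 6  # arbitrary small class size, so all N! hand-outs can be enumerated

# K = number of students who receive their own copy (fixed points of the permutation)
counts = [sum(1 for i, p in enumerate(perm) if p == i)
          for perm in permutations(range(N))]

total = len(counts)
mean = Fraction(sum(counts), total)                # E[K]
second = Fraction(sum(k * k for k in counts), total)  # E[K^2]

assert mean == 1                  # part (a): E[K] = 1
assert second == 2                # part (c): E[K^2] = 2
assert second - mean ** 2 == 1    # part (c): Var(K) = 1
```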
