UC Berkeley Department of Statistics
STAT 210A: Introduction to Mathematical Statistics

Problem Set 2 Solutions
Fall 2006
Issued: Thursday, September 7, 2006
Due: Thursday, September 14, 2006

Graded exercises

Problem 2.1

From the distribution of X, we have:
\[
P(X = x) = \exp\left[\left(\sum_{i=1}^n x_i\right)\log\theta - n\theta\right] \frac{1}{x_1!\,x_2!\cdots x_n!}.
\]

(a) We have:
\[
P(X = x \mid T(X) = S) = \frac{P(X = x \text{ and } T(X) = S)}{P(T(X) = S)}.
\]
From the properties of the Poisson distribution, T(X) = \sum_{i=1}^n X_i \sim \mathrm{Poisson}(n\theta), so:
\[
P(T(X) = S) = \exp\left[S \log(n\theta) - n\theta\right] \frac{1}{S!}.
\]
Furthermore:
\[
P(X = x \text{ and } T(X) = S)
= I\left(\sum_{i=1}^n x_i = S\right) \exp\left[\left(\sum_{i=1}^n x_i\right)\log\theta - n\theta\right] \frac{1}{x_1!\,x_2!\cdots x_n!}
= I\left(\sum_{i=1}^n x_i = S\right) \exp\left[S\log\theta - n\theta\right] \frac{1}{x_1!\,x_2!\cdots x_n!},
\]
which yields:
\[
P(X = x \mid T(X) = S) = \exp\left[-S\log(n)\right] \frac{S!}{x_1!\,x_2!\cdots x_n!} = \frac{1}{n^S}\,\frac{S!}{x_1!\,x_2!\cdots x_n!}.
\]
This does not involve \theta, proving sufficiency of T.

(b) Define:
\[
g(S, \theta) = \exp\left[S\log\theta - n\theta\right], \qquad h(x) = \frac{1}{x_1!\,x_2!\cdots x_n!}.
\]
It is then possible to write P(X = x) as
\[
P(X = x) = g(T(x), \theta)\, h(x),
\]
so sufficiency of T follows from the factorization theorem.

Problem 2.2

The result is a direct consequence of the Rao-Blackwell theorem. It can be proven directly using Jensen's inequality by noticing that L(\theta, \delta) = (\theta - \delta)^2 is convex in \delta. Writing \tilde\delta(X) = E[\delta(X) \mid T(X)], iterated expectations and Jensen's inequality give:
\[
E[L(\delta(X), \theta)] = E\big[\,E[L(\delta(X), \theta) \mid T(X)]\,\big] \ge E\big[L\big(E[\delta(X) \mid T(X)], \theta\big)\big] = E\big[L(\tilde\delta(X), \theta)\big].
\]
We now prove that \tilde\delta(X) = \frac{\min_i X_i + \max_i X_i}{2}. To do that, we first consider the expected value of X_j given M = \min_i X_i and W = \max_i X_i. There is a 1/n chance that X_j = M and a 1/n chance that X_j = W; the remaining mass (n-2)/n is uniformly spread between M and W. Hence:
\[
E[X_j \mid M, W] = \frac{1}{n} M + \frac{1}{n} W + \frac{n-2}{n} \int_M^W \frac{x}{W - M}\, dx
= \frac{1}{n}(M + W) + \frac{n-2}{2n}(M + W) = \frac{M + W}{2}.
\]
Since this holds for every j, taking \delta(X) = \bar{X} and using linearity of conditional expectation gives \tilde\delta(X) = E[\bar{X} \mid M, W] = (M + W)/2.

Problem 2.3

We have:
\[
p(X_1 = x_1) = \exp\left[x_1 \log\frac{\theta_1}{1 - \theta_1} + \log(1 - \theta_1)\right]
\]
and
\[
p(X_i = x_i \mid X_{i-1} = x_{i-1})
= \exp\left[(1 - x_{i-1})\left(x_i \log\frac{\theta_{10}}{1 - \theta_{10}} + \log(1 - \theta_{10})\right) + x_{i-1}\left(x_i \log\frac{\theta_{11}}{1 - \theta_{11}} + \log(1 - \theta_{11})\right)\right]
\]
\[
= \exp\left[x_i \log\frac{\theta_{10}}{1 - \theta_{10}} + x_{i-1} x_i \log\frac{\theta_{11}(1 - \theta_{10})}{(1 - \theta_{11})\,\theta_{10}} + x_{i-1}\log\frac{1 - \theta_{11}}{1 - \theta_{10}} + \log(1 - \theta_{10})\right].
\]
Hence:
\[
P(X = x) = p(X_1 = x_1) \prod_{i=2}^n p(X_i = x_i \mid X_{i-1} = x_{i-1})
\]
\[
= \exp\left[x_1 \log\frac{\theta_1}{1 - \theta_1} + \left(\sum_{i=2}^n x_i\right)\log\frac{\theta_{10}}{1 - \theta_{10}} + \left(\sum_{i=2}^n x_{i-1} x_i\right)\log\frac{\theta_{11}(1 - \theta_{10})}{(1 - \theta_{11})\,\theta_{10}} + \left(\sum_{i=2}^n x_{i-1}\right)\log\frac{1 - \theta_{11}}{1 - \theta_{10}} + \log(1 - \theta_1) + (n - 1)\log(1 - \theta_{10})\right] \ldots
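
Numerical check for Problem 2.1(a). The conditional law derived there is the multinomial distribution with S trials and equal cell probabilities 1/n. As a quick sanity check of that formula, here is a minimal Python sketch (not part of the original solutions; the configuration x = (2, 1, 1) and all names are illustrative) that estimates the conditional probability by simulation for two values of \theta and compares both against the closed form:

\begin{verbatim}
import numpy as np
from math import factorial

rng = np.random.default_rng(0)

def cond_prob(theta, x, draws=400_000):
    """Monte Carlo estimate of P(X = x | T(X) = sum(x)), X_i i.i.d. Poisson(theta)."""
    x = np.array(x)
    n, S = len(x), x.sum()
    X = rng.poisson(theta, size=(draws, n))
    given = X[X.sum(axis=1) == S]           # keep only draws with T(X) = S
    return (given == x).all(axis=1).mean()

x = (2, 1, 1)                               # illustrative configuration: n = 3, S = 4
S, n = sum(x), len(x)
exact = factorial(S) / (np.prod([factorial(v) for v in x]) * n**S)

for theta in (0.5, 2.0):
    print(theta, cond_prob(theta, x), exact)
\end{verbatim}

Both Monte Carlo estimates should agree with the exact value S!/(x_1!\cdots x_n!)\,n^{-S} \approx 0.148 up to simulation error, for either value of \theta, illustrating that the conditional distribution is free of the parameter.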
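Numerical illustration for Problem 2.2. The preview does not show the underlying model, so the following sketch adopts an assumed setup: X_1, \ldots, X_n i.i.d. Uniform(\theta - 1/2, \theta + 1/2), with \delta(X) = \bar{X}. Under this assumption the Rao-Blackwellized estimator is the midrange \tilde\delta(X) = E[\bar{X} \mid \min, \max] = (\min + \max)/2, and the inequality above predicts its squared-error risk is no larger than the sample mean's:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 3.0, 10, 200_000

# Assumed model for illustration: X_i i.i.d. Uniform(theta - 1/2, theta + 1/2).
X = rng.uniform(theta - 0.5, theta + 0.5, size=(reps, n))
mean_est = X.mean(axis=1)                        # delta(X) = sample mean
midrange = (X.min(axis=1) + X.max(axis=1)) / 2   # delta_tilde(X) = E[delta | min, max]

print("MSE of sample mean:", np.mean((mean_est - theta) ** 2))
print("MSE of midrange:  ", np.mean((midrange - theta) ** 2))
# Rao-Blackwell guarantees the second risk is no larger; here it is markedly smaller.
\end{verbatim}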

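Numerical check for Problem 2.3. The exponential-family expression at the end must reproduce the log-likelihood computed term by term from the transition probabilities. The sketch below is a hedged check of the reconstructed factorization, reading \theta_1 = P(X_1 = 1), \theta_{10} = P(X_i = 1 \mid X_{i-1} = 0), \theta_{11} = P(X_i = 1 \mid X_{i-1} = 1) off the densities above; function names and the test sequence are illustrative:

\begin{verbatim}
import numpy as np

def loglik_direct(x, t1, t10, t11):
    """Log-likelihood of the binary chain, multiplying transition probabilities."""
    ll = np.log(t1) if x[0] == 1 else np.log(1 - t1)
    for prev, cur in zip(x[:-1], x[1:]):
        q = t11 if prev == 1 else t10          # P(X_i = 1 | X_{i-1} = prev)
        ll += np.log(q) if cur == 1 else np.log(1 - q)
    return ll

def loglik_expfam(x, t1, t10, t11):
    """The same log-likelihood via the exponential-family form derived above."""
    x = np.asarray(x)
    n = len(x)
    s_i = x[1:].sum()                  # sum_{i=2}^n x_i
    s_pair = (x[:-1] * x[1:]).sum()    # sum_{i=2}^n x_{i-1} x_i
    s_prev = x[:-1].sum()              # sum_{i=2}^n x_{i-1}
    return (x[0] * np.log(t1 / (1 - t1))
            + s_i * np.log(t10 / (1 - t10))
            + s_pair * np.log(t11 * (1 - t10) / ((1 - t11) * t10))
            + s_prev * np.log((1 - t11) / (1 - t10))
            + np.log(1 - t1) + (n - 1) * np.log(1 - t10))

x = [1, 0, 0, 1, 1, 0, 1]
print(loglik_direct(x, 0.3, 0.2, 0.7), loglik_expfam(x, 0.3, 0.2, 0.7))
\end{verbatim}

The two printed values coincide up to floating-point rounding. Note that the second form depends on the data only through (x_1, \sum_{i=2}^n x_i, \sum_{i=2}^n x_{i-1}x_i, \sum_{i=2}^n x_{i-1}), which is therefore sufficient by the factorization theorem.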
