UC Berkeley Department of Statistics
STAT 210A: Introduction to Mathematical Statistics
Solutions: Problem Set 10, Fall 2006
Issued: Thursday, November 9, 2006
Due: Thursday, November 16, 2006

Graded exercises

Problem 10.1

a) For each i, Z_i = I(Y_i > \mu) follows a Bernoulli distribution with parameter 1 - F(\mu). Since the Y_i are i.i.d., so are the Z_i, and hence S = \sum_{i=1}^n Z_i follows a Binomial(n, 1 - F(\mu)) distribution. This is useful for performing a statistical hypothesis test because the distribution of S is known under H_0: when \mu is the median of F, the success probability is 1 - F(\mu) = 1/2. Furthermore, as this distribution involves no nuisance parameter, the threshold for an \alpha-level test is easily computed.

b) For large n, under the null hypothesis the distribution of S can be approximated by N(n/2, n/4). As a result, (S - n/2)/(\sqrt{n}/2) has an approximately standard normal distribution for large enough n. Hence

    E(\delta_s(Y)) = P(S > s) = P\left( \frac{S - n/2}{\sqrt{n}/2} > \frac{s - n/2}{\sqrt{n}/2} \right) \approx 1 - \Phi\left( \frac{s - n/2}{\sqrt{n}/2} \right) = \Phi\left( \frac{n - 2s}{\sqrt{n}} \right).

Problem 10.2

a) The action space is A = \{0, 1\}. For \theta = \theta_0, the loss function is

    l(\theta_0, \delta) = 0 if \delta = 0, and 1 if \delta = 1.

For \theta = \theta_1, the loss function is

    l(\theta_1, \delta) = 0 if \delta = 1, and 1 if \delta = 0.

It follows that

    E(l(\theta, \delta(X)) \mid \theta = \theta_0) = E(\delta(X) \mid \theta = \theta_0)  and  E(l(\theta, \delta(X)) \mid \theta = \theta_1) = E(1 - \delta(X) \mid \theta = \theta_1).

As a result, writing \lambda = P(\theta = \theta_0),

    r(\lambda, \delta) = E(l(\theta, \delta(X)))
                       = E(l(\theta, \delta(X)) \mid \theta = \theta_0) P(\theta = \theta_0) + E(l(\theta, \delta(X)) \mid \theta = \theta_1) P(\theta = \theta_1)
                       = \lambda E_{\theta_0}(\delta(X)) + (1 - \lambda) E_{\theta_1}(1 - \delta(X)).

b) To minimize the Bayes risk, it is sufficient to minimize the posterior risk.
The posterior risk of taking action \delta is given by:

    E(L(\theta, \delta) \mid X) = \delta \, \frac{\lambda P(X \mid \theta_0)}{\lambda P(X \mid \theta_0) + (1-\lambda) P(X \mid \theta_1)} + (1-\delta) \, \frac{(1-\lambda) P(X \mid \theta_1)}{\lambda P(X \mid \theta_0) + (1-\lambda) P(X \mid \theta_1)}
                                = \delta \, \frac{\lambda P(X \mid \theta_0) - (1-\lambda) P(X \mid \theta_1)}{\lambda P(X \mid \theta_0) + (1-\lambda) P(X \mid \theta_1)} + \frac{(1-\lambda) P(X \mid \theta_1)}{\lambda P(X \mid \theta_0) + (1-\lambda) P(X \mid \theta_1)}.

Hence the optimal decision \delta^*(X) is given by:

    \delta^*(X) = I\left( (1-\lambda) P(X \mid \theta_1) - \lambda P(X \mid \theta_0) \ge 0 \right) = I\left( \frac{P(X \mid \theta_1)}{P(X \mid \theta_0)} \ge \frac{\lambda}{1-\lambda} \right).

c) From part b), taking logarithms,

    \delta^*(X) = I\left[ \frac{1}{n} \sum_{i=1}^n \log \frac{P(X_i \mid \theta_1)}{P(X_i \mid \theta_0)} \ge \frac{1}{n} \log \frac{\lambda}{1-\lambda} \right].

Now, under H_0, by the law of large numbers,

    \frac{1}{n} \sum_{i=1}^n \log \frac{P(X_i \mid \theta_1)}{P(X_i \mid \theta_0)} \xrightarrow{p} E_{\theta_0}\left[ \log \frac{P(X \mid \theta_1)}{P(X \mid \theta_0)} \right] = -D(\theta_0 \,\|\, \theta_1) < 0,

while, under H_1,

    \frac{1}{n} \sum_{i=1}^n \log \frac{P(X_i \mid \theta_1)}{P(X_i \mid \theta_0)} \xrightarrow{p} E_{\theta_1}\left[ \log \frac{P(X \mid \theta_1)}{P(X \mid \theta_0)} \right] = D(\theta_1 \,\|\, \theta_0) > 0.

Since the threshold (1/n) \log(\lambda/(1-\lambda)) \to 0 as n \to \infty, the test rejects with probability tending to 0 under H_0 and tending to 1 under H_1, so it is consistent.
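The sign-test construction of Problem 10.1 can be checked numerically. The following is a minimal Python sketch; the sample size n = 100 and level alpha = 0.05 are illustrative assumptions, not values from the problem. It finds the exact binomial rejection threshold for the test based on S and compares the exact tail probability with the normal approximation \Phi((n - 2s)/\sqrt{n}) from part (b).

```python
import math

def binom_sf(s, n, p=0.5):
    """Exact P(S > s) for S ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(s + 1, n + 1))

def sign_test_threshold(n, alpha):
    """Smallest s with P(S > s) <= alpha under H0 (p = 1/2)."""
    for s in range(n + 1):
        if binom_sf(s, n) <= alpha:
            return s
    return n

def normal_approx(s, n):
    """Normal approximation Phi((n - 2s)/sqrt(n)) from part (b)."""
    z = (n - 2 * s) / math.sqrt(n)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Illustrative choices of sample size and level.
n, alpha = 100, 0.05
s_star = sign_test_threshold(n, alpha)
print(s_star, binom_sf(s_star, n), normal_approx(s_star, n))
```

For n = 100 the exact tail and the uncorrected normal approximation agree to roughly two decimal places; adding a continuity correction would tighten the match further.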
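The consistency argument in Problem 10.2(c) can also be illustrated by simulation. The sketch below uses a hypothetical setup, Bernoulli(\theta_0 = 0.3) versus Bernoulli(\theta_1 = 0.7) with prior weight \lambda = 0.5 and n = 5000 draws (all of these values are assumptions chosen for illustration). The average log-likelihood ratio settles near -D(\theta_0 \| \theta_1) < 0 under H_0 and near D(\theta_1 \| \theta_0) > 0 under H_1, while the threshold (1/n) \log(\lambda/(1-\lambda)) tends to 0.

```python
import math
import random

def log_lr(x, p0, p1):
    """log P(x|theta_1) - log P(x|theta_0) for one Bernoulli observation."""
    num = p1 if x == 1 else 1 - p1
    den = p0 if x == 1 else 1 - p0
    return math.log(num) - math.log(den)

def kl_bernoulli(p, q):
    """KL divergence D(Bernoulli(p) || Bernoulli(q))."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

random.seed(0)
p0, p1, lam, n = 0.3, 0.7, 0.5, 5000  # illustrative values

# Samples drawn under H0 (theta_0) and under H1 (theta_1).
x0 = [1 if random.random() < p0 else 0 for _ in range(n)]
x1 = [1 if random.random() < p1 else 0 for _ in range(n)]
avg0 = sum(log_lr(x, p0, p1) for x in x0) / n
avg1 = sum(log_lr(x, p0, p1) for x in x1) / n

thresh = math.log(lam / (1 - lam)) / n  # (1/n) log(lambda/(1-lambda)) -> 0

print(avg0, -kl_bernoulli(p0, p1))  # avg0 near -D(theta_0 || theta_1) < 0
print(avg1, kl_bernoulli(p1, p0))   # avg1 near  D(theta_1 || theta_0) > 0
print(avg0 < thresh < avg1)         # test accepts under H0, rejects under H1
```

The design choice of averaging the per-observation log ratios (rather than summing them) matches the form of \delta^*(X) in part (c), so the comparison with the vanishing threshold reads off directly.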