SOLUTIONS: CME 308 Final Exam, Spring 2010
George Papanicolaou
June 10, 2010

Please sign the Stanford Honor Code on your blue book. This exam is CLOSED notes and CLOSED book. You have three hours. Please write clearly and show all your calculations. Good luck!

Problem 1 (10 pts): Suppose $X_1, \dots, X_n$ is an i.i.d. sample from $X \sim \mathrm{Bernoulli}(p)$.

1. Derive the MLE for $p$. Verify that it is consistent, then state and justify a CLT result for it.

2. State and justify the MLE for the variance of $X$, and state and justify a CLT result for that estimator.

Solution:

1. The log-likelihood is
$$\ell_n(p) = \sum_{i=1}^n \log\bigl(p^{x_i}(1-p)^{1-x_i}\bigr) = \sum_{i=1}^n \bigl[x_i \log p + (1 - x_i)\log(1-p)\bigr].$$
Setting the derivative to zero,
$$0 = \ell_n'(p) = \sum_{i=1}^n \left[\frac{x_i}{p} - \frac{1 - x_i}{1-p}\right] = \frac{1}{p}\sum_{i=1}^n x_i - \frac{1}{1-p}\left(n - \sum_{i=1}^n x_i\right),$$
which implies
$$(1-p)\sum_{i=1}^n x_i = p\left(n - \sum_{i=1}^n x_i\right) \quad\Rightarrow\quad \sum_{i=1}^n x_i = pn \quad\Rightarrow\quad \hat{p}_n = \frac{1}{n}\sum_{i=1}^n x_i,$$
where $\hat{p}_n$ is the MLE for $p$. By the Weak Law of Large Numbers, $\hat{p}_n \to p$ in probability, so the MLE is consistent. Since $\operatorname{Var}(X) = p(1-p)$, the classical CLT gives
$$\sqrt{n}\,(\hat{p}_n - p) \xrightarrow{D} N\bigl(0,\ p(1-p)\bigr).$$

2. Since $X$ is Bernoulli with parameter $p$, its variance is $p(1-p)$. The variance is a simple function of $p$, for which we already have the MLE $\hat{p}_n$, so the invariance property of MLEs (Theorem 7.2.10 in Casella and Berger) gives the MLE of the variance as $\hat{\sigma}^2 = \hat{p}_n(1 - \hat{p}_n)$. The Delta Method, applied with $g(p) = p(1-p)$ and $g'(p) = 1 - 2p$, then yields
$$\sqrt{n}\,\bigl(\hat{p}_n(1 - \hat{p}_n) - p(1-p)\bigr) \xrightarrow{D} N\bigl(0,\ p(1-p)(1-2p)^2\bigr).$$

Problem 2 (10 pts): In Homework 3, we studied an importance sampling algorithm to efficiently compute $\alpha(b) = P(\max_{1 \le i \le d} X_i > b)$, where $X$ is a multivariate Gaussian with covariance matrix $C$ and density
$$f(x; C) = (2\pi)^{-d/2} (\det C)^{-1/2} \exp\bigl(-x^T C^{-1} x / 2\bigr).$$

1.
Suppose that we sample $X$ from a new distribution $N(0, \theta C)$, where $\theta > 0$. For importance sampling, we need the likelihood ratio $f(x; C)/g(x; C, \theta)$, where $g(x; C, \theta)$ is the density of $N(0, \theta C)$. Compute the likelihood ratio.

2. Show that the second moment $E_g L^2$ of the importance sampling estimator,
$$L = 1_{\{\max_{1 \le i \le d} X_i > b\}}\, \frac{f(X; C)}{g(X; C, \theta)},$$
is equal to
$$\theta^d (2\theta - 1)^{-d/2}\, P\left(\sqrt{\frac{\theta}{2\theta - 1}}\, \max_{1 \le i \le d} X_i > b\right).$$
Note that this is valid for $\theta > 1/2$; for $\theta \le 1/2$, the second moment is infinite.

3. For $\theta = b$, we can show that
$$\frac{\log E_g L^2}{\log P(\max_{1 \le i \le d} X_i > b)} \to 2, \tag{1}$$
and thus this scheme is logarithmically efficient, just like the scheme in the homework. An equivalent statement is
$$\frac{\log P\left(\sqrt{\frac{b}{2b - 1}}\, \max_{1 \le i \le d} X_i > b\right)}{\log P(\max_{1 \le i \le d} X_i > b)} \to 2. \tag{2}$$
Use the following facts to show (2):
$$\max_{1 \le i \le d} P(X_i > b) \le P\left(\max_{1 \le i \le d} X_i > b\right) \le d\, \max_{1 \le i \le d} P(X_i > b), \qquad \frac{d}{db}\, P(X_i > b) = -f(b; C_{ii}),$$
Var(...
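The limit results in Problem 1 are easy to check numerically. The sketch below (all variable names and the choices of $p$, $n$, and the replication count are illustrative, not from the exam) simulates many replications of the Bernoulli MLE and compares the empirical variances of the scaled errors against the CLT and delta-method predictions:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 0.3, 2000, 5000          # illustrative values, not from the exam

# Each row is one i.i.d. Bernoulli(p) sample of size n
x = rng.binomial(1, p, size=(reps, n))
p_hat = x.mean(axis=1)                # MLE of p, one per replication

# sqrt(n) * (p_hat - p) should be approximately N(0, p(1-p))
z1 = np.sqrt(n) * (p_hat - p)
print(z1.var(), p * (1 - p))          # empirical vs. theoretical variance

# sqrt(n) * (p_hat(1-p_hat) - p(1-p)) should be approximately
# N(0, p(1-p)(1-2p)^2) by the delta method with g(p) = p(1-p)
z2 = np.sqrt(n) * (p_hat * (1 - p_hat) - p * (1 - p))
print(z2.var(), p * (1 - p) * (1 - 2 * p) ** 2)
```

The two printed pairs should agree closely; increasing `reps` tightens the agreement at the usual $1/\sqrt{\text{reps}}$ Monte Carlo rate.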
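As a companion to the Problem 2 setup, here is a hedged sketch of the importance sampling scheme: sample from the proposal $g = N(0, \theta C)$ and weight by the likelihood ratio $f(x;C)/g(x;C,\theta) = \theta^{d/2}\exp\bigl(-(1 - 1/\theta)\,x^T C^{-1} x/2\bigr)$. Function and variable names are my own, and the exact value used for comparison assumes $C = I$ so that the coordinates are independent and $\alpha(b) = 1 - \Phi(b)^d$:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def is_estimate(C, b, theta, n_samples=200_000):
    """Importance sampling estimate of alpha(b) = P(max_i X_i > b),
    X ~ N(0, C), using the proposal g = N(0, theta * C)."""
    d = C.shape[0]
    Cinv = np.linalg.inv(C)
    x = rng.multivariate_normal(np.zeros(d), theta * C, size=n_samples)
    quad = np.einsum('ij,jk,ik->i', x, Cinv, x)   # x^T C^{-1} x, per sample
    # Likelihood ratio f(x; C) / g(x; C, theta)
    lr = theta ** (d / 2) * np.exp(-0.5 * (1.0 - 1.0 / theta) * quad)
    hit = x.max(axis=1) > b
    return float(np.mean(hit * lr))

C = np.eye(2)                 # independent coordinates: alpha(b) = 1 - Phi(b)^d
b = 3.0
est = is_estimate(C, b, theta=b)                  # theta = b, as in part 3
exact = 1.0 - (1.0 - 0.5 * math.erfc(b / math.sqrt(2.0))) ** C.shape[0]
print(est, exact)             # the two values should agree to a few percent
```

Taking $\theta = b$ inflates the proposal covariance so that exceedances of level $b$ occur far more often under $g$ than under $f$, which is exactly what makes the estimator logarithmically efficient per statement (1).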