SOLUTIONS: CME 308 Final Exam Spring 2010

George Papanicolaou

June 10, 2010

Please sign the Stanford Honor Code on your blue book. This exam is CLOSED notes and CLOSED book. You have three hours. Please write clearly and show all your calculations. Good luck!

Problem 1 (10 pts): Suppose $X_1, \dots, X_n$ is an i.i.d. sample of $X \sim \mathrm{Bernoulli}(p)$ random variables.

1. Derive the MLE estimator for $p$. Verify that it is consistent and then state and justify a CLT result for it.

2. State and justify the MLE estimator for the variance of $X$. State and justify a CLT result for this MLE estimator.

Solution:

1. The log-likelihood function is given by
$$\ell_n(p) = \sum_{i=1}^n \log\bigl(p^{x_i}(1-p)^{1-x_i}\bigr) = \sum_{i=1}^n \bigl[x_i \log p + (1-x_i)\log(1-p)\bigr].$$
Setting the derivative to zero, we get
$$0 = \ell_n'(p) = \sum_{i=1}^n \left[\frac{x_i}{p} + \frac{x_i-1}{1-p}\right] = \left(\sum_{i=1}^n x_i\right)\frac{1}{p} + \left(\sum_{i=1}^n x_i - n\right)\frac{1}{1-p},$$
which implies
$$\left(\frac{1}{p}-1\right)\sum_{i=1}^n x_i = n - \sum_{i=1}^n x_i \quad\Longrightarrow\quad \frac{1}{p}\sum_{i=1}^n x_i = n \quad\Longrightarrow\quad \hat{p}_n = \frac{1}{n}\sum_{i=1}^n x_i,$$
where $\hat{p}_n$ is the MLE estimator for $p$. From the Weak Law of Large Numbers, $\hat{p}_n \to p$ in probability. Lastly, we have $\mathrm{Var}(X) = p(1-p)$, so by the conventional CLT,
$$\sqrt{n}\,(\hat{p}_n - p) \xrightarrow{D} N\bigl(0,\, p(1-p)\bigr).$$

2. Since $X$ is Bernoulli with parameter $p$, it has variance $p(1-p)$. Since the variance is a simple function of the parameter $p$, for which we already have the MLE $\hat{p}_n$, we can use Theorem 7.2.10 in Casella and Berger, the invariance property. This tells us that the MLE for $\sigma^2$ is $\hat{p}_n(1-\hat{p}_n)$. From the Delta Method, we then have
$$\sqrt{n}\,\bigl(\hat{p}_n(1-\hat{p}_n) - p(1-p)\bigr) \xrightarrow{D} N\bigl(0,\, p(1-p)(1-2p)^2\bigr).$$

Problem 2 (10 pts): In Homework 3, we studied an importance sampling algorithm to efficiently compute $\alpha(b) = P(\max_{1\le i\le d} X_i > b)$, where $X$ is a multivariate Gaussian with covariance matrix $C$ and density
$$f(x; C) = (2\pi)^{-d/2}\,|\det C|^{-1/2}\exp\bigl(-x^T C^{-1} x/2\bigr).$$

1. Suppose that we sample $X$ from a new distribution $N(0, \theta C)$, where $\theta > 0$. For importance sampling, we need the likelihood ratio $f(x; C)/g(x; C, \theta)$, where $g(x; C, \theta)$ is the density of $N(0, \theta C)$. Compute the likelihood ratio.

2. Show that the second moment $E_g[L^2]$ of the importance sampling estimator,
$$L = 1_{\{\max_{1\le i\le d} X_i > b\}}\,\frac{f(X; C)}{g(X; C, \theta)},$$
is equal to
$$\left(\frac{\theta^2}{2\theta-1}\right)^{d/2} P\!\left(\sqrt{\frac{\theta}{2\theta-1}}\,\max_{1\le i\le d} X_i > b\right).$$
Note that this is valid for $\theta > 1/2$; for $\theta \le 1/2$ the second moment is infinite.

3. For $\theta = b$, we can show that
$$\log E_g[L^2] \sim \log P\Bigl(\max_{1\le i\le d} X_i > b\Bigr)^2, \qquad (1)$$
and thus this scheme is logarithmically efficient, just like the scheme in the homework. An equivalent statement to this is
$$\log P\!\left(\sqrt{\frac{b}{2b-1}}\,\max_{1\le i\le d} X_i > b\right) \sim \log P\Bigl(\max_{1\le i\le d} X_i > b\Bigr)^2. \qquad (2)$$
Use the following to show (2):
$$\max_{1\le i\le d} P(X_i > b) \le P\Bigl(\max_{1\le i\le d} X_i > b\Bigr) \le d \max_{1\le i\le d} P(X_i > b),$$
$$\frac{d}{db}\,P(X_i > b) = -f(b;\, C_{ii}),$$
Var(...
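
As a numerical check of Problem 1, part 1, here is a minimal simulation sketch (not part of the original solutions; the values of p, n, and the trial count are arbitrary choices) verifying that $\hat{p}_n$ concentrates at $p$ and that $\sqrt{n}(\hat{p}_n - p)$ has variance close to $p(1-p)$:

    # Numerical check of the Bernoulli MLE: consistency and the CLT variance.
    import numpy as np

    rng = np.random.default_rng(0)
    p, n, n_trials = 0.3, 5000, 2000

    # Draw n_trials independent samples of size n; each row gives one MLE.
    x = rng.binomial(1, p, size=(n_trials, n))
    p_hat = x.mean(axis=1)                 # \hat{p}_n = (1/n) * sum_i x_i

    # Consistency: the MLEs concentrate around the true p.
    print("mean of p_hat:", p_hat.mean())  # should be close to p = 0.3

    # CLT: sqrt(n) * (p_hat - p) should have variance close to p(1-p).
    z = np.sqrt(n) * (p_hat - p)
    print("empirical var:", z.var(), "vs p(1-p) =", p * (1 - p))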
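
The delta-method result of Problem 1, part 2 can be checked the same way; this sketch (again with arbitrary parameter choices) compares the empirical variance of $\sqrt{n}(\hat{p}_n(1-\hat{p}_n) - p(1-p))$ to the predicted $p(1-p)(1-2p)^2$:

    # Numerical check of the delta-method CLT for the variance MLE.
    import numpy as np

    rng = np.random.default_rng(1)
    p, n, n_trials = 0.3, 5000, 2000
    p_hat = rng.binomial(1, p, size=(n_trials, n)).mean(axis=1)

    var_hat = p_hat * (1 - p_hat)          # MLE of the variance, by invariance
    z = np.sqrt(n) * (var_hat - p * (1 - p))

    # The delta method predicts asymptotic variance p(1-p)(1-2p)^2.
    print("empirical var:", z.var())
    print("predicted var:", p * (1 - p) * (1 - 2 * p) ** 2)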

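For Problem 2, part 1, a direct calculation with the two Gaussian densities gives $f(x;C)/g(x;C,\theta) = \theta^{d/2}\exp\bigl(-(1-1/\theta)\,x^T C^{-1} x/2\bigr)$. The sketch below (illustrative names; the brute-force comparison via scipy is a convenience assumption, not part of the exam) computes this ratio and checks it against direct density evaluations:

    # Likelihood ratio f(x;C) / g(x;C,theta) for g the N(0, theta*C) density.
    import numpy as np
    from scipy.stats import multivariate_normal

    def likelihood_ratio(x, C, theta):
        d = len(x)
        q = x @ np.linalg.solve(C, x)      # quadratic form x^T C^{-1} x
        return theta ** (d / 2) * np.exp(-0.5 * (1.0 - 1.0 / theta) * q)

    # Brute-force check against the two density evaluations.
    C = np.array([[1.0, 0.3], [0.3, 1.0]])
    x = np.array([0.7, -1.2])
    theta = 4.0
    direct = multivariate_normal(cov=C).pdf(x) / multivariate_normal(cov=theta * C).pdf(x)
    print(likelihood_ratio(x, C, theta), direct)   # the two values should agree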

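Finally, the importance sampling scheme of Problem 2 can be exercised end to end. This is a sketch under assumed inputs (the covariance C, dimension d, level b, and sample size are all arbitrary choices) that samples from $N(0, \theta C)$ with $\theta = b$, as suggested in part 3, and weights each hit by the likelihood ratio:

    # Importance sampling estimate of alpha(b) = P(max_i X_i > b), X ~ N(0, C),
    # drawing samples from N(0, theta*C) with theta = b.
    import numpy as np

    rng = np.random.default_rng(2)
    d, b, m = 5, 3.0, 200_000
    C = 0.5 * np.eye(d) + 0.5 * np.ones((d, d))   # an example covariance matrix
    theta = b                                     # the tilting used in part 3
    A = np.linalg.cholesky(theta * C)

    # Sample Y ~ N(0, theta*C) and weight hits by the likelihood ratio.
    Y = rng.standard_normal((m, d)) @ A.T
    q = np.einsum('ij,ij->i', Y @ np.linalg.inv(C), Y)   # y^T C^{-1} y per row
    L = (Y.max(axis=1) > b) * theta ** (d / 2) * np.exp(-0.5 * (1 - 1 / theta) * q)

    print("IS estimate of alpha(b):", L.mean())
    print("relative error of mean:", L.std() / (L.mean() * np.sqrt(m)))

Inflating the covariance makes the rare event common under the sampling distribution, which is the point of the tilting: the indicator fires often, and the likelihood ratio scales each hit back down to an unbiased contribution.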
