stat210a_midterm_solutions - UC Berkeley Department of...
UC Berkeley Department of Statistics
STAT 210A: Introduction to Mathematical Statistics
Midterm Examination: Solutions, Fall 2006

Problem 1.1 [18 points total]

Suppose that $X_i$, $i = 1, \dots, n$ are i.i.d. samples from the uniform $\mathrm{Uni}[0, \theta]$ distribution.

(a) Find a one-dimensional sufficient statistic for estimating $\theta$.

(b) Compute the maximum likelihood estimate $\hat\theta_{\mathrm{MLE}}$ based on $(X_1, \dots, X_n)$. Using an elementary argument, show that $\hat\theta_{\mathrm{MLE}} \xrightarrow{p} \theta^*$ as $n \to +\infty$.

(c) Consider the estimator of $\theta$ given by $\delta(X) = \frac{2}{n} \sum_{i=1}^n X_i$. Is it unbiased? Is it admissible under squared error loss? Justify your answers.

Now suppose that we view the parameter $\theta$ as a random variable $\tilde\theta$, and assume a Pareto prior density of the form
$$\pi(\theta) = \alpha \beta^{\alpha} \theta^{-\alpha - 1} \, \mathbb{I}[\theta \ge \beta], \quad \text{for all } \theta > 0,$$
where $\beta > 0$ and $\alpha > 2$ are fixed.

(d) Compute the prior mean of the random variable $\tilde\theta$.

(e) Compute the posterior distribution of $\tilde\theta$ conditioned on $X = (X_1, \dots, X_n)$.

(f) Compute the Bayes estimate of $\tilde\theta$ under quadratic loss. Hint: new calculation may not be required, given previous parts of the question.

Solution 1.1:

(a) By independence, we have
$$p(x; \theta) = \prod_{i=1}^{n} \frac{1}{\theta} \, \mathbb{I}[x_i \le \theta] = \theta^{-n} \, \mathbb{I}\bigl[\max_i x_i \le \theta\bigr],$$
so that $Z = \max\{X_1, \dots, X_n\}$ is sufficient by the factorization criterion.

(b) From part (a), the log-likelihood takes the form $L(\theta) = -n \log(\theta)$ for $\theta \ge \max_i\{X_i\}$, and $-\infty$ otherwise, so that the MLE is given by $\hat\theta_{\mathrm{MLE}} = \max\{X_1, \dots, X_n\}$. For any $\epsilon \in (0, \theta)$, we compute
$$\mathbb{P}\Bigl[\bigl|\max_i X_i - \theta\bigr| > \epsilon\Bigr] = \prod_{i=1}^{n} \mathbb{P}[X_i \le \theta - \epsilon] = \Bigl[1 - \frac{\epsilon}{\theta}\Bigr]^{n} \to 0 \quad \text{as } n \to +\infty,$$
so that consistency of the MLE follows.

(c) We compute
$$\mathbb{E}[\delta(X)] = \frac{2}{n} \sum_{i=1}^{n} \mathbb{E}[X_i] = \frac{2}{n} \Bigl(\frac{n\theta}{2}\Bigr) = \theta,$$
so that the estimator is unbiased. However, since $Z = \max_i\{X_i\}$ is sufficient from part (a) and this estimator depends on other information, the Rao-Blackwell theorem dictates that we can construct a better estimator $\eta(X) = \mathbb{E}[\delta(X) \mid Z]$. The strict convexity of quadratic loss ensures that $\eta$ will dominate $\delta$, so that $\delta$ must be inadmissible.
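The consistency argument in (b) and the Rao-Blackwell argument in (c) can be checked numerically. The sketch below is illustrative only: the true value $\theta = 2$, the sample sizes, and the trial count are arbitrary choices, not part of the problem. It uses the standard closed form $\mathbb{E}[\delta(X) \mid Z = z] = \frac{n+1}{n} z$ for this uniform model (conditioned on the maximum being $z$, the remaining $n-1$ observations are i.i.d. $\mathrm{Uni}[0, z]$, so $\mathbb{E}[\sum_i X_i \mid Z = z] = z + (n-1)z/2$), which the blurred portion of the solutions may or may not derive.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0  # true parameter (arbitrary illustrative choice)

# --- (b) consistency: the MLE max_i X_i approaches theta as n grows ---
for n in (10, 100, 10_000):
    mle = rng.uniform(0, theta, size=n).max()
    print(f"n = {n:6d}   MLE = {mle:.4f}")

# --- (c) Rao-Blackwell: eta(X) = E[delta(X) | Z] = (n+1) Z / n dominates delta ---
n, trials = 5, 200_000
X = rng.uniform(0, theta, size=(trials, n))
delta = 2.0 * X.mean(axis=1)        # the unbiased estimator delta(X)
eta = (n + 1) / n * X.max(axis=1)   # its Rao-Blackwellization

mse_delta = np.mean((delta - theta) ** 2)  # theory: theta^2 / (3n)
mse_eta = np.mean((eta - theta) ** 2)      # theory: theta^2 / (n(n+2))
print(f"MSE(delta) = {mse_delta:.4f}, MSE(eta) = {mse_eta:.4f}")
```

Both estimators are unbiased, but $\eta$ has variance $\theta^2 / (n(n+2))$ versus $\theta^2 / (3n)$ for $\delta$, so the simulated MSE of $\eta$ should come out strictly smaller, matching the inadmissibility conclusion.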

This note was uploaded on 10/17/2009 for the course STAT 210a taught by Professor Staff during the Fall '08 term at University of California, Berkeley.

