Statistics 612: Regular Parametric Models and Likelihood Based Inference

Moulinath Banerjee

March 30, 2009

We continue our discussion of likelihood based inference for parametric models; in particular, we will talk more about information bounds in the context of parametric models, and the role they play in likelihood based inference. We first introduce the multiparameter version of the celebrated Cramér-Rao inequality. I will not describe the underlying assumptions in detail. These are the usual sorts of assumptions one makes for parametric models, in order to be able to establish sensible results. See Page 11 of Chapter 3 of Wellner's notes for a detailed description of the conditions involved.

For a multidimensional parametric model $\{\, p(x, \theta) : \theta \in \Theta \subset \mathbb{R}^k \,\}$, the information matrix $I(\theta)$ is given by:
$$ I(\theta) = E_\theta \left[ \dot{l}(X, \theta)\, \dot{l}(X, \theta)^T \right] = -E_\theta \left[ \ddot{l}(X, \theta) \right], $$
where $\dot{l}(X, \theta) = \frac{\partial}{\partial \theta}\, l(X, \theta)$ is a $k \times 1$ column vector (recall that $l(x, \theta) = \log p(x, \theta)$), and
$$ \ddot{l}(X, \theta) = \frac{\partial^2}{\partial \theta\, \partial \theta^T}\, l(X, \theta) $$
is a $k \times k$ matrix.

Consider a smooth real-valued function $q(\theta)$ that is estimated by some statistic $T(X)$, and let $\nabla q(\theta)$ denote the gradient of $q$ (written as a $k \times 1$ vector). Let $b(\theta) = E_\theta[T(X)] - q(\theta)$ be the bias of the estimator $T$, and let $\nabla b(\theta)$ denote the gradient of the bias. We then have:
$$ \mathrm{Var}_\theta(T(X)) \;\geq\; \left( \nabla q(\theta) + \nabla b(\theta) \right)^T I^{-1}(\theta) \left( \nabla q(\theta) + \nabla b(\theta) \right). $$
In particular, if $T(X)$ is unbiased for $q(\theta)$, then
$$ \mathrm{Var}_\theta(T(X)) \;\geq\; \nabla q(\theta)^T\, I^{-1}(\theta)\, \nabla q(\theta). $$

For a proof of this result, see Page 12 of Chapter 3 of Wellner's notes; the proof runs along lines similar to the one-dimensional case. We will not be worried about the construction of exact unbiased estimators for $q(\theta)$ that attain the information bound; in the vast majority of situations this is not feasible. Rather, we focus on the connection of the MLE $\hat{\theta}_n$ to the information bound arising from the multiparameter inequality above.
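As a numerical illustration (not part of the original notes), the following sketch checks the identity $I(\theta) = E_\theta[\dot{l}\,\dot{l}^T]$ by Monte Carlo for the $N(\mu, \sigma)$ model with $\theta = (\mu, \sigma)$, where the closed form is $I(\theta) = \mathrm{diag}(1/\sigma^2,\, 2/\sigma^2)$, and then evaluates the unbiased-estimator bound for $q(\theta) = \mu$. The parameter values and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 2.0
n = 200_000
x = rng.normal(mu, sigma, size=n)

# Score components for theta = (mu, sigma):
#   d/dmu    log p(x, theta) = (x - mu) / sigma^2
#   d/dsigma log p(x, theta) = ((x - mu)^2 - sigma^2) / sigma^3
score = np.stack([(x - mu) / sigma**2,
                  ((x - mu)**2 - sigma**2) / sigma**3])

# Monte Carlo estimate of I(theta) = E[score score^T] (a 2x2 matrix)
I_mc = score @ score.T / n

# Closed-form information matrix for the N(mu, sigma) model
I_exact = np.array([[1 / sigma**2, 0.0],
                    [0.0,          2 / sigma**2]])

# Cramer-Rao bound for an unbiased estimator of q(theta) = mu:
# grad q = (1, 0)^T, so the per-observation bound is sigma^2
grad_q = np.array([1.0, 0.0])
bound = grad_q @ np.linalg.inv(I_exact) @ grad_q
```

Note that the sample mean attains this bound: $\mathrm{Var}(\bar{X}) = \sigma^2/n$, which is the per-observation bound divided by $n$.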
Consider the asymptotically linear representation of the MLE given by:
$$ \sqrt{n}\,(\hat{\theta}_n - \theta) = \frac{1}{\sqrt{n}} \sum_{i=1}^n I(\theta)^{-1}\, \dot{l}(X_i, \theta) + o_p(1). $$
Invoke the Delta method to obtain:
$$ \sqrt{n}\,\left( q(\hat{\theta}_n) - q(\theta) \right) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \nabla q(\theta)^T\, I(\theta)^{-1}\, \dot{l}(X_i, \theta) + o_p(1). $$
It is easily seen that the asymptotic variance of $\sqrt{n}\,( q(\hat{\theta}_n) - q(\theta))$ is exactly $\nabla q(\theta)^T\, I^{-1}(\theta)\, \nabla q(\theta)$, the information bound arising from the multiparameter Cramér-Rao inequality. The function $\nabla q(\theta)^T\, I(\theta)^{-1}\, \dot{l}(x, \theta)$ (which provides a linearization of the MLE) is called the efficient influence function for estimating $q(\theta)$. Motivated by the above considerations, we define efficient influence functions and information bounds for vector-valued functions of ...
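A simulation can make the Delta-method claim concrete. As a hedged sketch (the model and parameter values are my own choices, not from the notes), take the one-parameter Bernoulli$(\theta)$ model, where the MLE is $\hat{\theta}_n = \bar{X}$, $I(\theta) = 1/(\theta(1-\theta))$, and $q(\theta) = \theta(1-\theta)$ has $q'(\theta) = 1 - 2\theta$. The information bound $q'(\theta)^2 / I(\theta)$ should match the Monte Carlo variance of $\sqrt{n}\,(q(\hat{\theta}_n) - q(\theta))$ for large $n$.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.3
q = lambda t: t * (1 - t)          # the smooth function q(theta) being estimated
n, reps = 400, 20_000

# Information bound: q'(theta)^2 * I(theta)^{-1}
grad_q = 1 - 2 * theta             # q'(theta)
info = 1 / (theta * (1 - theta))   # Fisher information for Bernoulli(theta)
bound = grad_q**2 / info           # = (1 - 2*theta)^2 * theta * (1 - theta)

# Simulate reps independent samples of size n; the MLE of theta is the
# sample mean, and q(theta) is estimated by the plug-in q(theta_hat).
x = rng.binomial(1, theta, size=(reps, n))
theta_hat = x.mean(axis=1)
scaled = np.sqrt(n) * (q(theta_hat) - q(theta))

mc_var = scaled.var()              # should be close to the bound for large n
```

With these values the bound is $(0.4)^2 \cdot 0.21 = 0.0336$, and the empirical variance of the scaled plug-in estimator should be close to it, up to $O(1/n)$ finite-sample terms.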
This note was uploaded on 04/14/2010 for the course STATS 612 taught by Professor Moulib during the Winter '08 term at University of Michigan.