stat612-notes-1 - Statistics 612: Regular Parametric Models...

Statistics 612: Regular Parametric Models and Likelihood Based Inference

Moulinath Banerjee

December 6, 2006

A parametric model is a family of probability distributions P such that there exists some (open) subset Θ of a finite-dimensional Euclidean space for which P can be written as { P_θ : θ ∈ Θ ⊂ R^k }. In other words, we can associate each distribution in P with a θ ∈ Θ. When this tagging/correspondence is one-to-one, we say that the parameter is identifiable; in other words, the parameter uniquely specifies the distribution. For meaningful statistical inference, this is usually a requirement. In what follows, identifiability will be implicitly assumed.

Note that we are really interested in the class of probability distributions (this may be our postulated model for the observed data) and not in the parameter space itself. So what does a parametrization buy us? For meaningful inference, the parameter should describe an integral feature of the probability distribution it is associated with, so that knowledge about the parameter translates easily into knowledge about features of the distribution. Hence, to obtain meaningful results, one requires adequate regularity conditions that govern the behavior of the distribution functions or density functions in terms of θ in a mathematically tractable manner.

We will usually write parametric models as { p(x, θ) : θ ∈ Θ }, where p(x, θ) is the density of P_θ with respect to some dominating measure μ, and x assumes values in the range space of the random variable/vector. The log-density log p(x, θ) is denoted by l(x, θ).

Estimation procedures for θ can be many and varied. A ubiquitous method is maximum likelihood, which, as you know, has many desirable properties. Under appropriate regularity conditions on the parametric model, the maximum likelihood estimator is consistent for θ and asymptotically normal, with an asymptotic variance that is the best possible among the class of so-called "regular" estimators.
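To make the notation concrete, the following sketch (not from the notes; the exponential model and all variable names are illustrative choices) works out maximum likelihood in the simplest possible parametric family: p(x, θ) = θ e^{-θx} for x > 0, θ ∈ Θ = (0, ∞), with log-density l(x, θ) = log θ − θx. Here the MLE has the closed form θ̂_n = 1 / x̄.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0          # true parameter, known only because we simulate
n = 10_000
# Exponential density p(x, theta) = theta * exp(-theta * x); NumPy's
# parameterization uses scale = 1 / theta.
x = rng.exponential(scale=1.0 / theta_true, size=n)

def log_lik(theta, x):
    # Sum of l(x_i, theta) = log theta - theta * x_i over the sample.
    return np.sum(np.log(theta) - theta * x)

# Closed-form MLE for this model: solve d/dtheta log_lik = n/theta - sum(x) = 0.
theta_hat = 1.0 / x.mean()
```

By consistency, θ̂_n should be close to the true θ for large n, and by definition it should dominate any other candidate value of the likelihood.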
We will talk about "regularity" in some detail later. Roughly, it takes fairly nasty scenarios to render maximum likelihood impotent. Furthermore, the likelihood ratio statistic for testing θ = θ₀ is asymptotically χ², so that confidence sets for θ may be obtained by inversion. Likelihood-ratio-based confidence sets in many cases have better finite-sample properties than their Wald-type counterparts based on the asymptotic distribution of the MLE, since they are more data-driven and adapt nicely to skewness in the underlying distribution.

The regularity conditions under which maximum likelihood works well can be found in any standard text (see, for example, Chapter 7 of Lehmann's Elements of Large Sample Theory, Chapter 11 of Keener's notes, or Chapter 4 of Wellner's notes). The smoothness of the log-density in θ is a key requirement.

Before we proceed further, a brief discussion of the term "maximum likelihood estimator" is warranted. Recall that the MLE θ̂_n ...
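The inversion idea above can be sketched numerically (again in the illustrative exponential model, not an example from the notes): the 95% likelihood-ratio confidence set is { θ : 2[log L(θ̂_n) − log L(θ)] ≤ 3.841 }, where 3.841 is the 0.95 quantile of χ² with one degree of freedom. The grid endpoints and sample size below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 2.0
n = 500
x = rng.exponential(scale=1.0 / theta_true, size=n)

def log_lik(theta):
    # Log-likelihood of the exponential model at theta for the sample x.
    return np.sum(np.log(theta) - theta * x)

theta_hat = 1.0 / x.mean()          # MLE, closed form for this model
crit = 3.841                        # 0.95 quantile of chi-squared, 1 df

# Invert the LR test: keep every theta whose LR statistic is below crit.
grid = np.linspace(0.5, 4.0, 2000)
lrt = 2.0 * (log_lik(theta_hat) - np.array([log_lik(t) for t in grid]))
ci = grid[lrt <= crit]
lo, hi = ci.min(), ci.max()         # endpoints of the LR confidence interval
```

Note that the interval need not be symmetric about θ̂_n; that asymmetry is exactly the adaptation to skewness mentioned above, which Wald intervals θ̂_n ± z·se cannot capture.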

This note was uploaded on 04/14/2010 for the course STATS 612 taught by Professor Moulib during the Winter '08 term at University of Michigan.
