Parametric Inference

Moulinath Banerjee
University of Michigan

April 14, 2004

1 General Discussion

The object of statistical inference is to glean information about an underlying population based on a sample collected from it. The actual population is assumed to be described by some probability distribution. Statistical inference is concerned with learning about the distribution, or at least some characteristics of the distribution that are of scientific interest.

In parametric statistical inference, which we will be primarily concerned with in this course, the underlying distribution of the population is taken to be parametrized by a Euclidean parameter. In other words, there exists a subset Θ of k-dimensional Euclidean space such that the class of distributions of the underlying population can be written as {P_θ : θ ∈ Θ}. You can think of the θ's as labels for the class of distributions under consideration.

More precisely, this will be our setup: our data X_1, X_2, ..., X_n are i.i.d. observations from the distribution P_θ, where θ ∈ Θ, the parameter space. We assume identifiability of the parameter, i.e. θ_1 ≠ θ_2 ⇒ P_{θ_1} ≠ P_{θ_2}. In general, we will also assume that X_1 has a density f(x, θ) (this can be either a probability mass function or an ordinary probability density function). Here x is a typical value assumed by the random variable. Thus, for a discrete random variable X_1, f(x, θ) just gives us the probability that X_1 assumes the value x when the underlying parameter is indeed θ. For a continuous random variable, f(x, θ) gives us the density function of the random variable X_1 at the point x when θ is the underlying parameter. Thus f(x, θ) dx, where dx is a very small number, is approximately the probability that X_1 lives in the interval [x, x + dx] under parameter value θ.

We will be interested in estimating θ, or more generally, a function of θ, say g(θ). Let us consider a few examples that will enable us to understand these notions better.
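The approximation f(x, θ) dx ≈ P(x ≤ X_1 ≤ x + dx) for a continuous density can be checked numerically. The sketch below uses the exponential density f(x, θ) = θ e^(−θx) purely as an illustration; the choice of the exponential family and the rate parametrization are assumptions for this example, not fixed by the notes.

```python
import math

# Density of the exponential distribution with rate theta
# (an illustrative choice; any continuous density would do here).
def f(x, theta):
    return theta * math.exp(-theta * x)

# Exact probability that X lies in [x, x + dx], computed from the
# exponential CDF F(x, theta) = 1 - exp(-theta * x).
def interval_prob(x, dx, theta):
    return math.exp(-theta * x) - math.exp(-theta * (x + dx))

theta, x, dx = 2.0, 0.5, 1e-4
approx = f(x, theta) * dx          # f(x, theta) dx
exact = interval_prob(x, dx, theta)

# The relative error shrinks as dx -> 0.
print(abs(approx - exact) / exact)
```

As dx is made smaller, the relative discrepancy between f(x, θ) dx and the exact interval probability tends to zero, which is exactly the sense in which the density "localizes" probability.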
(1) Let X_1, X_2, ..., X_n be the outcomes of n independent flips of the same coin. Here, we code X_i = 1 if the i-th toss produces H and 0 otherwise. The parameter of interest is θ, the probability of H turning up in a single toss; this can be any number between 0 and 1. The X_i's are i.i.d. and the common distribution P_θ is the Bernoulli(θ) distribution, which has probability mass function

f(x, θ) = θ^x (1 − θ)^(1 − x),   x ∈ {0, 1}.

Check that this is indeed a valid expression for the p.m.f. Here the parameter space, i.e. the set of all possible values of θ, is the closed interval [0, 1].

(2) Let X_1, X_2, ..., X_n denote the failure times of n different bulbs. We can think of the X_i's as independent and identically distributed random variables from an exponential distribution with an unknown parameter θ, which we want to estimate. If F(x, θ) denotes the distribution function of X_1 under parameter value...
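The "check" asked for in example (1) amounts to verifying that the p.m.f. sums to 1 over {0, 1}. The sketch below does that, and also estimates θ by the sample proportion of heads; using the sample mean as the estimator is a natural choice here, though the notes have not yet introduced estimators formally.

```python
import random

# Bernoulli(theta) p.m.f.: f(x, theta) = theta^x * (1 - theta)^(1 - x)
def pmf(x, theta):
    return theta ** x * (1 - theta) ** (1 - x)

theta = 0.3  # illustrative "true" parameter value

# Validity check: the p.m.f. must sum to 1 over the support {0, 1}.
assert abs(pmf(0, theta) + pmf(1, theta) - 1.0) < 1e-12

# Simulate n coin flips (X_i = 1 with probability theta) and estimate
# theta by the sample proportion of heads.
random.seed(0)
n = 10_000
sample = [1 if random.random() < theta else 0 for _ in range(n)]
theta_hat = sum(sample) / n
print(theta_hat)  # close to 0.3 for large n, by the law of large numbers
```

The concentration of theta_hat around θ as n grows is the law of large numbers at work, and it previews why the sample mean is a reasonable estimator of θ in this model.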
This note was uploaded on 04/14/2010 for the course STATS 610 taught by Professor Moulib during the Fall '09 term at University of Michigan.