Feb. 24, 2003

CHAPTER 2. SUFFICIENCY AND ESTIMATION

2.1 Sufficient statistics. Classically, a "statistic" is a measurable function of the observations, say f(X_1, ..., X_n). The concept of "statistic" differs from that of "random variable" in that a random variable is defined on a probability space, so that one probability measure is singled out, whereas for statistics we have in mind a family of possible probability measures. R. A. Fisher (1922) called a statistic sufficient "when no other statistic which can be calculated from the same sample provides any additional information as to the value of the parameter to be estimated." For example, given a sample (X_1, ..., X_n) where the X_j are i.i.d. N(θ, 1), the sample mean X̄ := (X_1 + ··· + X_n)/n turns out to be a sufficient statistic for the unknown parameter θ. J. Neyman (1935) gave one form of a "factorization theorem" for sufficient statistics. This section will give more general forms of the definitions and of the theorem, due to Halmos and L. J. Savage (1949).

In general, given a measurable space (S, B), that is, a set S with a σ-algebra B of subsets, and another measurable space (Y, F), a statistic is a measurable function T from S into Y. Often Y = R, or a Euclidean space R^d with Borel σ-algebra. Let T^{-1}(F) := {T^{-1}(F) : F ∈ F}. Then T^{-1}(F) is a σ-algebra, and it is the smallest σ-algebra on S for which T is measurable.

For any measure µ on B, L^1(µ) := L^1(S, B, µ) denotes the set of all real-valued, B-measurable, µ-integrable functions on S. For any probability measure P on (S, B), sub-σ-algebra A ⊂ B, and f ∈ L^1(P), we have a conditional expectation E_P(f | A), defined as a function g ∈ L^1(S, A, P), so that g is measurable for A, such that for all A ∈ A, ∫_A g dP = ∫_A f dP. Such a g always exists, as a Radon-Nikodym derivative of the signed measure A ↦ ∫_A f dP with respect to the restriction of P to A (RAP, Theorems 5.5.4, 10.1.1).
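The N(θ, 1) example can be checked numerically. Writing Σ(x_i − θ)² = Σ(x_i − x̄)² + n(x̄ − θ)² shows the likelihood factors into a function of (x̄, θ) times a function of the data alone, so two samples with the same mean have a likelihood ratio free of θ: given x̄, the data carry no further information about θ. A minimal sketch in Python (the helper name log_lik and the specific samples are illustrative assumptions, not from the text):

```python
import math

def log_lik(xs, theta):
    """Log-likelihood of an i.i.d. N(theta, 1) sample xs."""
    n = len(xs)
    return -0.5 * n * math.log(2 * math.pi) - 0.5 * sum((x - theta) ** 2 for x in xs)

x = [0.0, 1.0, 2.0]   # sample mean 1.0
y = [0.5, 1.0, 1.5]   # a different sample with the same mean 1.0

# log-likelihood ratio log L(theta; x) - log L(theta; y) at several theta
ratios = [log_lik(x, t) - log_lik(y, t) for t in (-2.0, 0.0, 3.0)]
# every entry equals -0.5 * (sum((xi - 1)^2) - sum((yi - 1)^2)) = -0.75,
# independent of theta: the samples differ only through Σ(x_i - x̄)².
```

Since the ratio is constant in θ, any inference about θ based on x or y through the likelihood depends on the data only through the common mean, which is the content of sufficiency here.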
If f = 1_B for some B ∈ B, let P(B | A) := E_P(1_B | A). Then almost surely 0 ≤ P(B | A) ≤ 1. Conditional expectations for P are only defined up to equality P-almost surely. Now, we will need the fact that suitable A-measurable functions can be brought outside conditional expectations (RAP, Theorem 10.1.9), much as constants can be brought outside ordinary expectations:

2.1.1 Lemma. E_P(fg | A) = f E_P(g | A) whenever both g and fg are in L^1(S, B, P) and f is A-measurable.

Now, here are precise definitions of sufficiency:

Definition. Given a family P of probability measures on B, a sub-σ-algebra A ⊂ B is called sufficient for P if and only if for every B ∈ B there is an A-measurable function f_B ≥ 0 such that f_B = P(B | A) P-almost surely for every P ∈ P. A statistic T from (S, B) to (Y, F) is called sufficient if and only if T^{-1}(F) is sufficient.
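The pull-out property of Lemma 2.1.1 can be verified directly on a finite probability space, where E_P(· | A) for A generated by a finite partition is just the P-weighted average over each block of the partition. A minimal sketch in Python (the space, measure, and function names are illustrative assumptions, not from the text):

```python
# Finite space S = {0, 1, 2, 3}; A is the sigma-algebra generated by
# the partition {0, 1}, {2, 3}.
P = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
blocks = [(0, 1), (2, 3)]

def cond_exp(h, P, blocks):
    """E_P(h | A) as a function on S: on each block of the partition
    generating A, the P-weighted average of h over that block."""
    out = {}
    for blk in blocks:
        pb = sum(P[s] for s in blk)                     # P(block) > 0
        avg = sum(h(s) * P[s] for s in blk) / pb
        for s in blk:
            out[s] = avg
    return out

def f(s):
    return 5.0 if s <= 1 else -2.0   # A-measurable: constant on blocks

def g(s):
    return float(s * s)              # an arbitrary function in L^1(P)

lhs = cond_exp(lambda s: f(s) * g(s), P, blocks)          # E_P(fg | A)
rhs = {s: f(s) * cond_exp(g, P, blocks)[s] for s in P}    # f * E_P(g | A)
# lhs and rhs agree pointwise, as Lemma 2.1.1 asserts.
```

One can also check the defining property of conditional expectation here: summing cond_exp(g, P, blocks) against P over either block reproduces the sum of g against P over that block, i.e. ∫_A g dP is preserved for every A in the generating partition.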
