Feb. 24, 2003

CHAPTER 2. SUFFICIENCY AND ESTIMATION

2.1 Sufficient statistics. Classically, a "statistic" is a measurable function of the observations, say f(X_1, ..., X_n). The concept of "statistic" differs from that of "random variable" in that a random variable is defined on a probability space, so that one probability measure is singled out, whereas for statistics we have in mind a family of possible probability measures. R. A. Fisher (1922) called a statistic sufficient "when no other statistic which can be calculated from the same sample provides any additional information as to the value of the parameter to be estimated." For example, given a sample (X_1, ..., X_n) where the X_j are i.i.d. N(θ, 1), the sample mean X̄ := (X_1 + ··· + X_n)/n turns out to be a sufficient statistic for the unknown parameter θ. J. Neyman (1935) gave one form of a "factorization theorem" for sufficient statistics. This section will give more general forms of the definitions and theorem, due to Halmos and L. J. Savage (1949).

In general, given a measurable space (S, B), that is, a set S with a σ-algebra B of subsets, and another measurable space (Y, F), a statistic is a measurable function T from S into Y. Often Y = R, or a Euclidean space R^d with its Borel σ-algebra. Let T^{-1}(F) := {T^{-1}(F) : F ∈ F}. Then T^{-1}(F) is a σ-algebra, and it is the smallest σ-algebra on S for which T is measurable.

For any measure μ on B, L^1(μ) := L^1(S, B, μ) denotes the set of all real-valued, B-measurable, μ-integrable functions on S. For any probability measure P on (S, B), sub-σ-algebra A ⊂ B, and f ∈ L^1(P), we have a conditional expectation E_P(f|A), defined as a function g ∈ L^1(S, A, P), so that g is measurable for A, such that for all A ∈ A, ∫_A g dP = ∫_A f dP. Such a g always exists, as a Radon-Nikodym derivative of the signed measure A ↦ ∫_A f dP with respect to the restriction of P to A (RAP, Theorems 5.5.4, 10.1.1).
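To see why the sample mean is sufficient in the normal example above, the factorization can be carried out explicitly (a standard computation, sketched here; it anticipates the factorization theorem):

```latex
% Joint density of (X_1,\dots,X_n) i.i.d. N(\theta,1):
f_\theta(x_1,\dots,x_n)
  = (2\pi)^{-n/2}\exp\Bigl(-\tfrac12\sum_{j=1}^n (x_j-\theta)^2\Bigr).
% Expanding around \bar{x}, using
% \sum_{j=1}^n (x_j-\theta)^2 = \sum_{j=1}^n (x_j-\bar{x})^2 + n(\bar{x}-\theta)^2,
% the density splits into a factor free of \theta and a factor depending on
% the data only through \bar{x}:
f_\theta(x_1,\dots,x_n)
  = \underbrace{(2\pi)^{-n/2}\exp\Bigl(-\tfrac12\sum_{j=1}^n (x_j-\bar{x})^2\Bigr)}_{h(x),\ \text{free of }\theta}
    \cdot
    \underbrace{\exp\Bigl(-\tfrac{n}{2}(\bar{x}-\theta)^2\Bigr)}_{g_\theta(\bar{x})}.
```

So the dependence of the likelihood on θ passes entirely through X̄, which is the content of sufficiency in this example.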
If f = 1_B for some B ∈ B, let P(B|A) := E_P(1_B|A). Then almost surely 0 ≤ P(B|A) ≤ 1. Conditional expectations for P are only defined up to equality P-almost surely. We will need the fact that suitable A-measurable functions can be brought outside conditional expectations (RAP, Theorem 10.1.9), much as constants can be brought outside ordinary expectations:

2.1.1 Lemma. E_P(fg|A) = f E_P(g|A) whenever both g and fg are in L^1(S, B, P) and f is A-measurable.

Now, here are precise definitions of sufficiency:

Definition. Given a family P of probability measures on B, a sub-σ-algebra A ⊂ B is called sufficient for P if and only if for every B ∈ B there is an A-measurable function f_B ≥ 0 such that f_B = P(B|A) P-almost surely for every P ∈ P. A statistic T from (S, B) to (Y, F) is called sufficient if and only if T^{-1}(F) is sufficient.
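The definition can be checked by hand in a small discrete case. The following sketch (my own illustration, not from the text) takes i.i.d. Bernoulli(p) observations and T = X_1 + ··· + X_n: the conditional distribution of the sample given T = t turns out to be the same for every p, which is the intuitive content of sufficiency of T for p.

```python
# Illustration (hypothetical example, not from the text): for i.i.d.
# Bernoulli(p) observations, T(x) = sum(x) is sufficient for p. We verify
# that the conditional law of the sample given T = t does not depend on p.
from itertools import product
from fractions import Fraction


def cond_dist_given_T(n, p, t):
    """Return P(X = x | T(X) = t) for each binary n-tuple x with sum t,
    computed exactly with rational arithmetic."""
    def prob(x):
        # P(X = x) = p^k (1-p)^(n-k) where k = number of ones in x
        k = sum(x)
        return p**k * (1 - p)**(n - k)

    support = [x for x in product((0, 1), repeat=n) if sum(x) == t]
    total = sum(prob(x) for x in support)  # = P(T = t)
    return {x: prob(x) / total for x in support}


# Two very different parameter values give the same conditional law:
d1 = cond_dist_given_T(3, Fraction(1, 4), 2)
d2 = cond_dist_given_T(3, Fraction(2, 3), 2)
assert d1 == d2
# Each of the C(3, 2) = 3 arrangements has conditional probability 1/3.
assert all(v == Fraction(1, 3) for v in d1.values())
```

Conversely, conditioning on a non-sufficient statistic (say, X_1 alone) would leave a conditional law that still varies with p.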