Lecture 11

11.1 Sufficient statistic. (Textbook, Section 6.7)

We consider an i.i.d. sample $X_1, \dots, X_n$ with distribution from the family $\{\mathbb{P}_\theta : \theta \in \Theta\}$. Imagine that there are two people, A and B, and that

1. A observes the entire sample $X_1, \dots, X_n$,
2. B observes only one number $T = T(X_1, \dots, X_n)$, which is a function of the sample.

Clearly, A has more information about the distribution of the data and, in particular, about the unknown parameter $\theta$. However, in some cases, for certain choices of the function $T$ (when $T$ is a so-called sufficient statistic), B will have as much information about $\theta$ as A has.

Definition. $T = T(X_1, \dots, X_n)$ is called a sufficient statistic if
$$\mathbb{P}_\theta(X_1, \dots, X_n \mid T) = \mathbb{P}(X_1, \dots, X_n \mid T), \qquad (11.1)$$
i.e. the conditional distribution of the vector $(X_1, \dots, X_n)$ given $T$ does not depend on the parameter $\theta$ and is equal to some fixed distribution $\mathbb{P}$. If this happens, then we can say that $T$ contains all the information about the parameter $\theta$ of the distribution of the sample, since given $T$ the distribution of the sample is always the same no matter what $\theta$ is. Another way to think about this: why does the second observer B have as much information about $\theta$ as observer A? Simply because, given $T$, the second observer B can generate another sample $X_1', \dots, X$ …
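The regeneration idea can be illustrated with a concrete case not spelled out in this excerpt: for an i.i.d. Bernoulli($p$) sample, $T = \sum_i X_i$ is sufficient, and the conditional law of the sample given $T = t$ is uniform over all 0-1 vectors with exactly $t$ ones, independently of $p$. The sketch below (the observer names and function names are illustrative, not from the text) shows observer B regenerating a sample from $T$ alone, without knowing $p$:

```python
import random

def observer_a_sample(n, p, rng):
    # Observer A sees the full i.i.d. Bernoulli(p) sample.
    return [1 if rng.random() < p else 0 for _ in range(n)]

def observer_b_resample(t, n, rng):
    # Observer B sees only T = t. Since the conditional distribution of
    # the sample given T is uniform over arrangements with sum t, B can
    # regenerate a sample by placing t ones uniformly at random among
    # n positions -- no knowledge of p is needed.
    positions = rng.sample(range(n), t)
    x = [0] * n
    for i in positions:
        x[i] = 1
    return x

# B's regenerated sample always reproduces the observed statistic T,
# and (over repetitions) has the same distribution as A's sample.
rng = random.Random(0)
n, p = 20, 0.3
x = observer_a_sample(n, p, rng)
t = sum(x)
y = observer_b_resample(t, n, rng)
assert sum(y) == t and len(y) == n
```

The point of the sketch is exactly the claim in the text: given $T$, B's regenerated sample has the same distribution as the original sample, whatever $\theta$ (here $p$) happens to be.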
This note was uploaded on 10/11/2009 for the course STATISTICS 18.443 taught by Professor Dmitrypanchenko during the Spring '09 term at MIT.