Mean Squared Error and Maximum Likelihood: Lecture XVIII
Charles B. Moss
October 18, 2010

Outline
1. Data Reduction
2. Sufficiency Principle

Data Reduction

The typical mode of operation in statistics is to use information from a sample X_1, ..., X_n to make inferences about an unknown parameter θ.
- Put slightly differently, the researcher summarizes the information in the sample (the sample values) with a statistic.
- Thus, any statistic T(X) summarizes the data, or reduces the information in the sample to a single number. We use only the information in the statistic instead of the entire sample.

Continued

- Put in a slightly more mathematical formulation, the statistic partitions the sample space into sets.
  - Defining the sample space for the statistic,

    T = { t : t = T(x), x ∈ X }.   (1)

  - Thus, a given value of the sample statistic T(x) implies that the sample comes from one of the sets A_t such that t ∈ T,

    A_t = { x : T(x) = t }. ...
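The partition in equations above can be made concrete with a small discrete example. The following is a minimal sketch (not from the lecture): it takes the sample space of three Bernoulli trials, uses the sample total as the statistic T, and groups the sample points into the sets A_t = { x : T(x) = t }. The function name `T` and the choice of three trials are illustrative assumptions.

```python
from itertools import product

def T(x):
    """Illustrative statistic: the sample total (reduces the sample to one number)."""
    return sum(x)

# Sample space for three Bernoulli trials: all 0/1 triples.
sample_space = list(product([0, 1], repeat=3))

# Partition the sample space into the sets A_t = {x : T(x) = t}.
partition = {}
for x in sample_space:
    partition.setdefault(T(x), []).append(x)

for t in sorted(partition):
    print(t, partition[t])
```

Observing only T(x) = t tells us the sample lies somewhere in A_t, but not which point within A_t; that is exactly the data reduction the slide describes.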
This note was uploaded on 07/15/2011 for the course AEB 6180 taught by Professor Staff during the Spring '10 term at University of Florida.