Lecture 10

10.1 Bayes estimators. (Textbook, Sections 6.3 and 6.4)

Once we find the posterior distribution or its p.d.f. or p.f. $\xi(\theta \mid X_1, \ldots, X_n)$, we turn to constructing an estimate $\hat{\theta}$ of the unknown parameter $\theta$. The most common way to do this is simply to take the mean of the posterior distribution,
$$\hat{\theta} = \hat{\theta}(X_1, \ldots, X_n) = \mathbb{E}(\theta \mid X_1, \ldots, X_n).$$
This estimate $\hat{\theta}$ is called the Bayes estimator. Note that $\hat{\theta}$ depends on the sample $X_1, \ldots, X_n$ since, by definition, the posterior distribution depends on the sample. The obvious motivation for this choice of $\hat{\theta}$ is that it is simply the average of the parameter with respect to the posterior distribution, which in some sense captures both the information contained in the data and our prior intuition about the parameter. Besides this obvious motivation, one sometimes gives the following one. Let us define the estimator as the value $a$ that minimizes
$$\mathbb{E}\bigl((\theta - a)^2 \mid X_1, \ldots, X_n\bigr),$$
i.e. the posterior average squared deviation of $\theta$ from $a$ is as small as possible. To find this $a$ we find the critical point:
$$\frac{\partial}{\partial a}\,\mathbb{E}\bigl((\theta - a)^2 \mid X_1, \ldots, X_n\bigr) = -2\,\mathbb{E}(\theta \mid X_1, \ldots, X_n) + 2a = 0,$$
and it turns out to be the mean $a = \hat{\theta} = \mathbb{E}(\theta \mid X_1, \ldots, X_n)$.
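As a concrete illustration (not from the lecture itself), the sketch below uses an assumed Beta-Bernoulli conjugate setup: with a Beta$(\alpha, \beta)$ prior and Bernoulli$(\theta)$ observations, the posterior is Beta$(\alpha + s, \beta + n - s)$ where $s$ is the number of successes, and the Bayes estimator is its mean. The code also checks numerically, by a simple grid quadrature, that this posterior mean gives a smaller posterior average squared deviation than other candidate values of $a$, matching the minimization argument above. Function names and the example data are hypothetical.

```python
def bayes_estimator(data, alpha=1.0, beta=1.0):
    """Posterior mean of theta: the Bayes estimator under squared loss.

    Assumes a Beta(alpha, beta) prior and Bernoulli(theta) data, so the
    posterior is Beta(alpha + s, beta + n - s) with mean
    (alpha + s) / (alpha + beta + n).
    """
    n, s = len(data), sum(data)
    return (alpha + s) / (alpha + beta + n)


def posterior_sq_loss(a, data, alpha=1.0, beta=1.0, grid=10001):
    """Approximate E((theta - a)^2 | data) by a midpoint rule on [0, 1]."""
    n, s = len(data), sum(data)
    a1, b1 = alpha + s, beta + n - s  # posterior Beta parameters
    xs = [(i + 0.5) / grid for i in range(grid)]
    # Unnormalized posterior density theta^(a1-1) * (1-theta)^(b1-1).
    w = [x ** (a1 - 1) * (1 - x) ** (b1 - 1) for x in xs]
    z = sum(w)
    return sum(wi * (x - a) ** 2 for wi, x in zip(w, xs)) / z


# Example: 7 successes in 10 trials with a uniform Beta(1, 1) prior.
data = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
theta_hat = bayes_estimator(data)  # (1 + 7) / (2 + 10) = 2/3
# The posterior mean minimizes the posterior average squared deviation.
assert posterior_sq_loss(theta_hat, data) < posterior_sq_loss(0.5, data)
assert posterior_sq_loss(theta_hat, data) < posterior_sq_loss(0.8, data)
```

At the posterior mean, the expected squared deviation equals the posterior variance; moving $a$ away from the mean adds $(a - \hat{\theta})^2$, which is why the assertions hold.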
This note was uploaded on 10/11/2009 for the course STATISTICS 18.443 taught by Professor Dmitrypanchenko during the Spring '09 term at MIT.