Lecture 10

10.1 Bayes estimators. (Textbook, Sections 6.3 and 6.4)

Once we find the posterior distribution, or its p.d.f. or p.f., $\xi(\theta \mid X_1, \ldots, X_n)$, we turn to constructing an estimate $\hat{\theta}$ of the unknown parameter $\theta$. The most common way to do this is simply to take the mean of the posterior distribution,

$$\hat{\theta} = \hat{\theta}(X_1, \ldots, X_n) = \mathbb{E}(\theta \mid X_1, \ldots, X_n).$$

This estimate $\hat{\theta}$ is called the Bayes estimator. Note that $\hat{\theta}$ depends on the sample $X_1, \ldots, X_n$ since, by definition, the posterior distribution depends on the sample. The obvious motivation for this choice of $\hat{\theta}$ is that it is simply the average of the parameter with respect to the posterior distribution, which in some sense combines the information contained in the data with our prior intuition about the parameter.

Besides this obvious motivation, one sometimes gives the following one. Define the estimator as the value $a$ that minimizes the posterior average squared deviation of $\theta$ from $a$,

$$\mathbb{E}((\theta - a)^2 \mid X_1, \ldots, X_n).$$

To find this $a$ we look for the critical point:

$$\frac{\partial}{\partial a}\, \mathbb{E}((\theta - a)^2 \mid X_1, \ldots, X_n) = 2a - 2\,\mathbb{E}(\theta \mid X_1, \ldots, X_n) = 0.$$

Since the second derivative in $a$ is $2 > 0$, this critical point is a minimum, and the minimizer turns out to be the posterior mean,

$$a = \hat{\theta} = \mathbb{E}(\theta \mid X_1, \ldots, X_n).$$
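As a concrete illustration of both points above, here is a minimal Python sketch. The Beta-Bernoulli model, the prior parameters, the data, and the function name are illustrative assumptions, not from the lecture: for Bernoulli data with a $\mathrm{Beta}(\alpha, \beta)$ prior, the posterior is $\mathrm{Beta}(\alpha + \sum x_i,\ \beta + n - \sum x_i)$, so the Bayes estimator (posterior mean) is $(\alpha + \sum x_i)/(\alpha + \beta + n)$. The sketch computes this estimator and then checks by Monte Carlo that it approximately minimizes $\mathbb{E}((\theta - a)^2 \mid X_1, \ldots, X_n)$ over a grid of candidate values $a$.

    import numpy as np

    def bayes_estimator_beta_bernoulli(x, alpha, beta):
        # Posterior mean of theta for Bernoulli data x under a Beta(alpha, beta)
        # prior: the posterior is Beta(alpha + sum(x), beta + n - sum(x)), so
        # its mean is (alpha + sum(x)) / (alpha + beta + n).
        x = np.asarray(x)
        return (alpha + x.sum()) / (alpha + beta + x.size)

    # Hypothetical data: 10 Bernoulli(theta) observations with 7 successes.
    x = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])
    alpha, beta = 2.0, 2.0  # assumed prior parameters
    theta_hat = bayes_estimator_beta_bernoulli(x, alpha, beta)
    print(theta_hat)  # (2 + 7) / (2 + 2 + 10) = 0.642857...

    # Monte Carlo sanity check that the posterior mean minimizes the posterior
    # average squared deviation E((theta - a)^2 | X_1, ..., X_n): sample theta
    # from the posterior and evaluate the criterion on a grid of values a.
    rng = np.random.default_rng(0)
    draws = rng.beta(alpha + x.sum(), beta + x.size - x.sum(), size=200_000)
    grid = np.linspace(0.01, 0.99, 99)
    risk = [np.mean((draws - a) ** 2) for a in grid]
    print(grid[np.argmin(risk)])  # grid point closest to theta_hat

The grid search is only a numerical sanity check; as the derivative calculation above shows, the minimizer is exactly the posterior mean.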

Source: MIT course 18.443 (Statistics), Spring 2009, taught by Professor Dmitry Panchenko.
