One-parameter models

## 3 One-parameter models

A one-parameter model is a class of sampling distributions that is indexed by a single unknown parameter. In this chapter we discuss Bayesian inference for two one-parameter models: the binomial model and the Poisson model. In addition to being useful statistical tools, these models also provide a simple environment within which we can learn the basics of Bayesian data analysis, including conjugate prior distributions, predictive distributions and confidence regions.

### 3.1 The binomial model

**Happiness data**

Each female of age 65 or over in the 1998 General Social Survey was asked whether or not they were generally happy. Let $Y_i = 1$ if respondent $i$ reported being generally happy, and let $Y_i = 0$ otherwise. If we lack information distinguishing these $n = 129$ individuals we may treat their responses as being exchangeable. Since 129 is much smaller than the total size $N$ of the female senior citizen population, the results of the last chapter indicate that our joint beliefs about $Y_1, \ldots, Y_{129}$ are well approximated by

- our beliefs about $\theta = \sum_{i=1}^{N} Y_i / N$;
- the model that, conditional on $\theta$, the $Y_i$'s are i.i.d. binary random variables with expectation $\theta$.

The last item says that the probability for any potential outcome $\{y_1, \ldots, y_{129}\}$, conditional on $\theta$, is given by

$$
p(y_1, \ldots, y_{129} \mid \theta) = \theta^{\sum_{i=1}^{129} y_i} (1 - \theta)^{129 - \sum_{i=1}^{129} y_i}.
$$

What remains to be specified is our prior distribution.

*P.D. Hoff, A First Course in Bayesian Statistical Methods, Springer Texts in Statistics, DOI 10.1007/978-0-387-92407-6_3, © Springer Science+Business Media, LLC 2009*

**A uniform prior distribution**

The parameter $\theta$ is some unknown number between 0 and 1. Suppose our prior information is such that all subintervals of $[0, 1]$ having the same length also have the same probability. Symbolically,

$$
\Pr(a \le \theta \le b) = \Pr(a + c \le \theta \le b + c) \quad \text{for } 0 \le a < b < b + c \le 1.
$$
This condition implies that our density for $\theta$ must be the uniform density:

$$
p(\theta) = 1 \quad \text{for all } \theta \in [0, 1].
$$

For this prior distribution and the above sampling model, Bayes' rule gives

$$
p(\theta \mid y_1, \ldots, y_{129})
= \frac{p(y_1, \ldots, y_{129} \mid \theta)\, p(\theta)}{p(y_1, \ldots, y_{129})}
= \frac{p(y_1, \ldots, y_{129} \mid \theta) \times 1}{p(y_1, \ldots, y_{129})}
\propto p(y_1, \ldots, y_{129} \mid \theta).
$$

The last line says that in this particular case $p(\theta \mid y_1, \ldots, y_{129})$ and $p(y_1, \ldots, y_{129} \mid \theta)$ are proportional to each other as functions of $\theta$. This is because the posterior distribution is equal to $p(y_1, \ldots, y_{129} \mid \theta)$ divided by something that does not depend on $\theta$. This means that these two functions of $\theta$ have the same shape, but not necessarily the same scale.
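The "same shape, different scale" point can be checked numerically: under the uniform prior, dividing the likelihood by its sum over a fine grid of $\theta$ values approximates the posterior density without changing where it peaks. A minimal sketch follows; the sample size $n = 129$ comes from the text, but the success count `y_sum` is a hypothetical placeholder, not the survey's actual tally.

```python
import numpy as np

# n = 129 respondents, as in the text; y_sum is an illustrative placeholder.
n, y_sum = 129, 100

def likelihood(theta, y_sum, n):
    """p(y_1,...,y_n | theta) = theta^{sum y} * (1 - theta)^{n - sum y}."""
    return theta ** y_sum * (1.0 - theta) ** (n - y_sum)

# Under the uniform prior p(theta) = 1, the posterior is proportional to the
# likelihood, so normalizing the likelihood over a grid of theta values gives
# a discrete approximation to the posterior density.
grid = np.linspace(0.001, 0.999, 999)            # grid spacing 0.001
like = likelihood(grid, y_sum, n)
posterior = like / (like.sum() * (grid[1] - grid[0]))

# Same shape, different scale: both curves peak at the same value of theta.
print("posterior mode:", grid[np.argmax(posterior)])   # near y_sum / n
print("peak likelihood:", like.max())
```

Normalizing changes only the vertical scale, so the grid point maximizing the approximate posterior is the same one maximizing the likelihood, close to the sample proportion `y_sum / n`.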