3 One-parameter models

A one-parameter model is a class of sampling distributions that is indexed by a single unknown parameter. In this chapter we discuss Bayesian inference for two one-parameter models: the binomial model and the Poisson model. In addition to being useful statistical tools, these models also provide a simple environment within which we can learn the basics of Bayesian data analysis, including conjugate prior distributions, predictive distributions and confidence regions.

3.1 The binomial model

Happiness data

Each female of age 65 or over in the 1998 General Social Survey was asked whether or not she was generally happy. Let Y_i = 1 if respondent i reported being generally happy, and let Y_i = 0 otherwise. If we lack information distinguishing these n = 129 individuals we may treat their responses as being exchangeable. Since 129 is much smaller than the total size N of the female senior citizen population, the results of the last chapter indicate that our joint beliefs about Y_1, ..., Y_129 are well approximated by

• our beliefs about θ = Σ_{i=1}^N Y_i / N;
• the model that, conditional on θ, the Y_i's are i.i.d. binary random variables with expectation θ.

The last item says that the probability for any potential outcome {y_1, ..., y_129}, conditional on θ, is given by

  p(y_1, ..., y_129 | θ) = θ^{Σ_{i=1}^{129} y_i} (1 − θ)^{129 − Σ_{i=1}^{129} y_i}.

What remains to be specified is our prior distribution.

P.D. Hoff, A First Course in Bayesian Statistical Methods, Springer Texts in Statistics, DOI 10.1007/978-0-387-92407-6_3, © Springer Science+Business Media, LLC 2009

A uniform prior distribution

The parameter θ is some unknown number between 0 and 1. Suppose our prior information is such that all subintervals of [0, 1] having the same length also have the same probability. Symbolically,

  Pr(a ≤ θ ≤ b) = Pr(a + c ≤ θ ≤ b + c)  for 0 ≤ a < b < b + c ≤ 1.
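As a quick numerical sketch (not part of the text), the likelihood above can be evaluated on a grid of θ values; the count of "happy" responses used below is a hypothetical placeholder, not the survey's actual tally.

```python
import numpy as np

n = 129        # number of respondents
sum_y = 100    # hypothetical count reporting "generally happy" (placeholder)

# Likelihood p(y_1,...,y_n | theta) = theta^sum_y * (1 - theta)^(n - sum_y),
# evaluated on a grid of theta values in (0, 1).
theta = np.linspace(0.001, 0.999, 999)
likelihood = theta**sum_y * (1 - theta)**(n - sum_y)

# The likelihood is maximized near the sample proportion sum_y / n.
theta_hat = theta[np.argmax(likelihood)]
print(theta_hat)
```

The grid maximizer lands near sum_y/n ≈ 0.775, illustrating that the binomial likelihood peaks at the observed proportion.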
This condition implies that our density for θ must be the uniform density: p(θ) = 1 for all θ ∈ [0, 1]. For this prior distribution and the above sampling model, Bayes' rule gives

  p(θ | y_1, ..., y_129) = p(y_1, ..., y_129 | θ) p(θ) / p(y_1, ..., y_129)
                         = p(y_1, ..., y_129 | θ) × 1 / p(y_1, ..., y_129)
                         ∝ p(y_1, ..., y_129 | θ).

The last line says that in this particular case p(θ | y_1, ..., y_129) and p(y_1, ..., y_129 | θ) are proportional to each other as functions of θ. This is because the posterior distribution is equal to p(y_1, ..., y_129 | θ) divided by something that does not depend on θ. This means that these two functions of θ have the same shape, but not necessarily the same scale.
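A minimal sketch of this proportionality, again with a hypothetical count of happy responses: normalizing the likelihood over a grid of θ recovers, up to grid error, the Beta(1 + Σy_i, 1 + n − Σy_i) density that the uniform prior is known (by standard conjugacy, shown later in the chapter) to produce as the posterior.

```python
import numpy as np
from math import lgamma

n, sum_y = 129, 100   # hypothetical data: 100 of 129 report being happy

theta = np.linspace(0.0005, 0.9995, 2000)
d = theta[1] - theta[0]

# Under the uniform prior p(theta) = 1, the posterior is proportional to
# the likelihood; normalizing on the grid gives a posterior density.
like = theta**sum_y * (1 - theta)**(n - sum_y)
post = like / (like.sum() * d)

# Exact Beta(1 + sum_y, 1 + n - sum_y) density for comparison, computed
# with log-gamma functions to avoid overflow.
log_norm = lgamma(n + 2) - lgamma(sum_y + 1) - lgamma(n - sum_y + 1)
exact = np.exp(log_norm + sum_y * np.log(theta) + (n - sum_y) * np.log(1 - theta))

print(np.max(np.abs(post - exact)))  # small grid-approximation error
```

Rescaling the likelihood so it integrates to one changes its scale but not its shape, which is exactly the point of the proportionality argument above.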