5 Many Parameter Inference

5.1 Introduction

In this chapter, we illustrate Bayesian learning from several two-parameter problems. Building on the one-parameter posteriors of Chapter 4, we first illustrate learning about both parameters of the normal density with noninformative and informative priors. To compare two independent Poisson samples, we illustrate computing the marginal posterior density of the ratio of Poisson means. Last, we illustrate learning about both the sample size and the probability of success for binomial data where only the number of successes is observed. In this last example, we construct a dependent prior for the two parameters in a baseball setting where historical data are available.

5.2 Normal Sampling with Both Parameters Unknown

5.2.1 Noninformative Prior

Suppose we observe $y_1, \ldots, y_n$ from a normal distribution with mean $\mu$ and variance $\sigma^2$, where both parameters are unknown. We assume that we have little prior knowledge about the location of either the mean or the variance, and so we assign $(\mu, \sigma^2)$ the usual noninformative prior
$$g(\mu, \sigma^2) = \frac{1}{\sigma^2}.$$
Before we consider this situation, let's review some results from the previous chapter.

1. Suppose we wish to learn about the normal mean $\mu$ when the variance $\sigma^2$ is assumed known. If we assign $\mu$ the noninformative uniform prior, then the posterior distribution for $\mu$ is normal with mean $\bar{y}$ and variance $\sigma^2/n$.

2. Suppose instead that we are interested in the variance $\sigma^2$ when the mean $\mu$ is known and the typical noninformative prior of the form $1/\sigma^2$ is assigned to the variance. Then $\sigma^2$ is distributed $S \chi^{-2}_v$, where $v = n$ and $S = \sum_{i=1}^n (y_i - \mu)^2$.
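The two single-parameter results above can be checked by simulation. The following is a minimal sketch in Python; the data vector and the "known" parameter values are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.array([26.8, 24.4, 30.1, 27.5, 28.9, 25.3])  # hypothetical data
n = len(y)

# Result 1: variance sigma2 known, uniform prior on the mean.
# The posterior of mu is N(ybar, sigma2 / n).
sigma2_known = 4.0
post_mean = y.mean()
post_var = sigma2_known / n

# Result 2: mean mu known, prior 1/sigma2 on the variance.
# The posterior of sigma2 is S times an inverse chi-square with v = n
# degrees of freedom, so a draw is sigma2 = S / X with X ~ chi-square(n).
mu_known = 27.0
S = np.sum((y - mu_known) ** 2)
sigma2_draws = S / rng.chisquare(df=n, size=100_000)

print(post_mean, post_var)
print(sigma2_draws.mean())  # should be close to E[sigma2 | y] = S/(n-2)
```

Simulating `sigma2` by dividing `S` by chi-square draws is exactly the scaled inverse chi-square distribution $S\chi^{-2}_v$ described in result 2.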
In the general case where both parameters are unknown, the likelihood function is given by
$$L(\mu, \sigma^2) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{1}{2\sigma^2}(y_i - \mu)^2\right) \propto \frac{1}{(\sigma^2)^{n/2}} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - \mu)^2\right).$$
If the likelihood is combined with the noninformative prior, we obtain the joint posterior density
$$g(\mu, \sigma^2 \mid y) \propto \frac{1}{(\sigma^2)^{n/2+1}} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - \mu)^2\right).$$
Suppose one subtracts and adds the sample mean $\bar{y} = \frac{1}{n}\sum_{i=1}^n y_i$ in the expression $\sum_{i=1}^n (y_i - \mu)^2$. Then one obtains the identity
$$\sum_{i=1}^n (y_i - \mu)^2 = \sum_{i=1}^n (y_i - \bar{y})^2 + n(\mu - \bar{y})^2.$$
Using this identity and rearranging some terms, one obtains the following representation of the joint posterior density:
$$g(\mu, \sigma^2 \mid y) \propto \left[\frac{1}{(\sigma^2)^{1/2}} \exp\left(-\frac{n}{2\sigma^2}(\mu - \bar{y})^2\right)\right]\left[\frac{1}{(\sigma^2)^{n/2+1/2}} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - \bar{y})^2\right)\right].$$
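The factored form of the joint posterior suggests a direct simulation scheme: the first bracket is the kernel of a normal density for $\mu$ given $\sigma^2$, and the second bracket depends on $\sigma^2$ alone, so one can draw $\sigma^2$ first and then $\mu$ conditional on it. A sketch in Python, using a hypothetical data vector and the standard reading of the two brackets as $\mu \mid \sigma^2, y \sim N(\bar{y}, \sigma^2/n)$ and $\sigma^2 \mid y \sim S\chi^{-2}_{n-1}$ with $S = \sum_{i=1}^n (y_i - \bar{y})^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.array([26.8, 24.4, 30.1, 27.5, 28.9, 25.3])  # hypothetical data
n = len(y)
ybar = y.mean()
S = np.sum((y - ybar) ** 2)

m = 100_000
# Step 1: draw sigma2 from its marginal posterior, S times an
# inverse chi-square with n - 1 degrees of freedom.
sigma2 = S / rng.chisquare(df=n - 1, size=m)
# Step 2: draw mu from its conditional posterior N(ybar, sigma2/n).
mu = rng.normal(loc=ybar, scale=np.sqrt(sigma2 / n), size=m)

# (mu, sigma2) are now joint draws from g(mu, sigma2 | y); any marginal
# summary, such as a posterior interval for mu, follows directly.
print(np.quantile(mu, [0.025, 0.975]))
```

Because each $(\mu, \sigma^2)$ pair is an exact draw from the joint posterior, functions of the parameters (for example, the coefficient of variation $\sigma/\mu$) can be summarized by simply transforming the simulated values.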
This note was uploaded on 01/01/2011 for the course STAT 665 taught by Professor Albert during the Spring '10 term at Bowling Green.