5 Many Parameter Inference

5.1 Introduction

In this chapter, we illustrate Bayesian learning in several two-parameter problems. Building on the one-parameter posteriors of Chapter 4, we first illustrate learning about both parameters of the normal density with noninformative and informative priors. To compare two independent Poisson samples, we illustrate computing the marginal posterior density of the ratio of Poisson means. Last, we illustrate learning about both the sample size and the probability of success for binomial data where only the number of successes is observed. In this last example, we illustrate constructing a dependent prior for the two parameters in a baseball setting where historical data are available.

5.2 Normal Sampling with Both Parameters Unknown

5.2.1 Noninformative Prior

Suppose we observe y_1, ..., y_n from a normal distribution with mean μ and variance σ², where both parameters are unknown. We assume that we have little prior knowledge about the location of either the mean or the variance, and so we assign (μ, σ²) the usual noninformative prior

\[
g(\mu, \sigma^2) = \frac{1}{\sigma^2}.
\]

Before we consider this situation, let's review some results from the previous chapter.

1. Suppose we wish to learn about the normal mean μ when the variance σ² is assumed known. If we assign μ the noninformative uniform prior, then the posterior distribution for μ is normal with mean ȳ and variance σ²/n.
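Result 1 can be checked numerically. The following is a minimal sketch: the data, the known variance, and the random seed are all assumptions made for illustration, not part of the text.

```python
import numpy as np

# Hypothetical sample; sigma2 is treated as known for this illustration.
rng = np.random.default_rng(0)
y = rng.normal(loc=10.0, scale=2.0, size=25)
sigma2 = 4.0  # assumed known variance
n = len(y)

# Result 1: under a uniform prior on mu, the posterior of mu is
# normal with mean ybar and variance sigma2 / n.
post_mean = y.mean()
post_var = sigma2 / n

# Simulate from the posterior to approximate a 95% credible interval.
draws = rng.normal(post_mean, np.sqrt(post_var), size=10_000)
lo, hi = np.quantile(draws, [0.025, 0.975])
```

Because the posterior is available in closed form here, the simulation step is optional; it is shown because later sections of the chapter summarize posteriors by simulation.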

2. Suppose instead that we are interested in the variance σ² when the mean μ is known and the typical noninformative prior of the form 1/σ² is assigned to the variance. Then σ² has a scaled inverse chi-square distribution, \( S\chi^{-2}_{v} \), where \( v = n \) and \( S = \sum_{i=1}^{n}(y_i - \mu)^2 \).

In the general case where both parameters are unknown, the likelihood function is given by

\[
L(\mu, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{1}{2\sigma^2}(y_i - \mu)^2\right) \propto \frac{1}{(\sigma^2)^{n/2}} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \mu)^2\right).
\]

If the likelihood is combined with the noninformative prior, we obtain the joint posterior density

\[
g(\mu, \sigma^2 \mid y) \propto \frac{1}{(\sigma^2)^{n/2+1}} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \mu)^2\right).
\]

Suppose one subtracts and adds the sample mean \( \bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i \) in the expression \( \sum_{i=1}^{n}(y_i - \mu)^2 \). Then one obtains the identity

\[
\sum_{i=1}^{n}(y_i - \mu)^2 = \sum_{i=1}^{n}(y_i - \bar{y})^2 + n(\mu - \bar{y})^2.
\]

Using this identity and rearranging some terms, one obtains the following representation of the joint posterior density:

\[
g(\mu, \sigma^2 \mid y) \propto \frac{1}{(\sigma^2)^{1/2}} \exp\left(-\frac{n}{2\sigma^2}(\mu - \bar{y})^2\right) \times \frac{1}{(\sigma^2)^{(n+1)/2}} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \bar{y})^2\right).
\]

What we have done is represent the joint posterior density as the product of terms

\[
g(\mu, \sigma^2 \mid y) = g(\mu \mid \sigma^2, y) \times g(\sigma^2 \mid y).
\]
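The factored form of the posterior suggests simulating (μ, σ²) by composition: first draw σ² from its marginal posterior, which under the 1/σ² prior is the scaled inverse chi-square S/χ²_{n−1} with S = Σ(y_i − ȳ)², then draw μ from the conditional normal with mean ȳ and variance σ²/n. A minimal sketch with hypothetical data (the sample and seed are assumptions for illustration):

```python
import numpy as np

# Simulate from g(mu, sigma2 | y) = g(mu | sigma2, y) * g(sigma2 | y)
# under the noninformative prior 1/sigma2. Data are hypothetical.
rng = np.random.default_rng(1)
y = rng.normal(loc=5.0, scale=3.0, size=30)
n, ybar = len(y), y.mean()
S = np.sum((y - ybar) ** 2)

m = 10_000
# Marginal posterior of sigma2: scaled inverse chi-square, S / chi2_{n-1}.
sigma2 = S / rng.chisquare(df=n - 1, size=m)
# Conditional posterior of mu given sigma2: normal(ybar, sigma2 / n).
mu = rng.normal(ybar, np.sqrt(sigma2 / n))
```

Each (mu[j], sigma2[j]) pair is then an exact draw from the joint posterior, so posterior summaries of any function of (μ, σ²) can be approximated from the simulated values.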