4 Single Parameter Inference

4.1 Introduction

In this chapter, we introduce Bayesian thinking for several one-parameter problems. We first consider normally distributed data and discuss learning about the normal mean given a known value of the variance, and learning about the normal variance given a known mean. These situations are artificial, but we will learn particular forms of posterior distributions that will be used in the case where both parameters of the normal population are unknown. We continue by illustrating Bayesian inference for a Poisson mean and an exponential location parameter. In all examples, the parameter is assigned a conjugate prior, and the posterior density has a convenient functional form that is easy to summarize. One way of generalizing the class of conjugate priors is by the use of mixtures, and we illustrate the use of a simple mixture in estimating a Poisson mean.

4.2 Learning about a Normal Mean with Known Variance

4.2.1 A single observation

A basic problem in statistics is to learn about the mean of a normal population. Suppose we observe a single observation y from a normal population with mean μ and known variance σ². Here the likelihood function is the sampling density of y viewed as a function of the parameter μ:

    L(μ) = (1 / √(2πσ²)) exp( −(y − μ)² / (2σ²) ).

As an example, suppose Joe takes an IQ test and his score is y. Joe's true IQ is μ; this would represent Joe's average IQ test score if he were able to take the test an infinite number of times. We assume that his test score y is normally distributed with mean equal to his true IQ μ and standard deviation σ. Here σ represents the measurement error of the IQ test, and we know from published reports that the standard deviation is σ = 10.

Suppose one has some prior beliefs about the location of the mean μ, and one represents these beliefs by a normal curve with mean μ₀ and standard deviation τ. That is,

    g(μ) = (1 / √(2πτ²)) exp( −(μ − μ₀)² / (2τ²) ).
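To make the likelihood concrete, the following sketch evaluates L(μ) at a few candidate values of the true IQ. The measurement error σ = 10 comes from the text; the observed score y = 115 is a hypothetical value chosen for illustration (the chapter leaves y unspecified here).

```python
import math

def normal_likelihood(mu, y, sigma):
    """Likelihood L(mu): the N(mu, sigma^2) density evaluated at the observed y."""
    return math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

y, sigma = 115.0, 10.0   # y is hypothetical; sigma = 10 per the published reports cited above
for mu in (90.0, 100.0, 115.0, 130.0):
    print(f"L({mu:.0f}) = {normal_likelihood(mu, y, sigma):.5f}")
```

As expected, the likelihood is largest at μ = y and falls off symmetrically, since viewed as a function of μ it is itself a normal curve centered at the observed score.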
Here μ₀ represents the person's best guess at the value of the normal mean μ, and τ reflects the sureness of this guess. In our IQ example, suppose one believes that Joe has average intelligence, so one sets the prior mean μ₀ = 100. Moreover, one is pretty confident (with probability 0.90) that μ falls in the interval (81, 119). This information can be matched with the standard deviation τ = 15. After we observe the observation y, we wish to find the posterior density of the mean μ. By Bayes' rule, the posterior density is proportional to the product of the prior and the likelihood:

    g(μ | y) ∝ g(μ) L(μ).

To find the functional form of this posterior, we combine the terms in the exponent.
This note was uploaded on 01/01/2011 for the course STAT 665 taught by Professor Albert during the Spring '10 term at Bowling Green.