4714_Bayesian (not in book)

STAT 4714
The Bayesian Approach to Estimation

Introduction

In the traditional approach to estimation and confidence intervals that we have been studying, an unknown population parameter is treated as fixed. For example, consider a 95% confidence interval for the unknown mean $\mu$ of a normal population with known standard deviation $\sigma$. Before the observations $X_1, X_2, \ldots, X_n$ are collected we can say that
$$P\left(\bar{X} - 1.96\frac{\sigma}{\sqrt{n}} \le \mu \le \bar{X} + 1.96\frac{\sigma}{\sqrt{n}}\right) = .95.$$
Now suppose that we obtain the $n$ observations and find that the confidence interval is $10.43 \le \mu \le 12.56$. If $\mu$ is assumed to be fixed, then the probability .95 does not apply to this particular interval. The probability that $\mu$ is in the interval $[10.43, 12.56]$ is either 0 or 1, depending on whether $\mu$ is in this interval or not. The probability .95 applies to the long-run relative frequency with which intervals of the form
$$\left[\bar{X} - 1.96\frac{\sigma}{\sqrt{n}},\; \bar{X} + 1.96\frac{\sigma}{\sqrt{n}}\right]$$
will include the fixed (but unknown) value of $\mu$. It is the intervals that are random here, not $\mu$. If we continually repeat the process of drawing samples of $n$ observations from the population and computing a confidence interval for each of these samples, then, in the long run, 95% of these intervals will include $\mu$ and 5% will not.

There is another approach to estimating unknown parameters and obtaining intervals for them, called the Bayesian approach, that is based on a completely different paradigm. This approach represents prior information about an unknown parameter using a prior distribution. Once the data are obtained, Bayes' Theorem is used to combine the prior distribution with the data to obtain what is called the posterior distribution for the parameter. This posterior distribution is proportional to the product of the likelihood function (for the fixed set of data) and the prior distribution.
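As a quick illustration (not part of the original note), the long-run coverage claim can be checked by simulation: the true mean, sample size, and standard deviation below are made-up values chosen only for the sketch.

```python
import math
import random

def coverage(mu=11.5, sigma=2.0, n=25, reps=10_000, seed=1):
    """Repeatedly draw samples of size n from a normal population with
    known sigma, form the 95% interval xbar +/- 1.96*sigma/sqrt(n) each
    time, and count how often the fixed true mean mu falls inside."""
    rng = random.Random(seed)
    half_width = 1.96 * sigma / math.sqrt(n)
    hits = 0
    for _ in range(reps):
        xbar = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
        if xbar - half_width <= mu <= xbar + half_width:
            hits += 1
    return hits / reps

print(coverage())  # close to 0.95 in the long run
```

Note that it is the intervals that vary from repetition to repetition; `mu` is held fixed throughout, exactly as in the frequentist interpretation above.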
Once the posterior distribution is determined, it can be used to obtain an estimate of the unknown parameter or an interval for the parameter. The prior distribution for the unknown parameter can be obtained from past information about the parameter, or can be based on subjective knowledge. In the Bayesian approach the unknown parameter is treated as a random variable, so this approach is completely different from the classical approach, in which the parameter is treated as fixed. Before explaining the Bayesian approach, it will be useful to review Bayes' Theorem and consider the extension of this theorem to random variables.

If the events $A_1, A_2, \ldots, A_n$ form a partition of the sample space $S$, where $P(A_i) > 0$ for $i = 1, 2, \ldots, n$, and $B$ is an event with $P(B) > 0$, then Bayes' Theorem says that
$$P(A_j \mid B) = \frac{P(B \mid A_j)\,P(A_j)}{\sum_{i=1}^{n} P(B \mid A_i)\,P(A_i)}.$$
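The theorem above is a direct computation once the prior probabilities and conditional probabilities are listed. A minimal sketch with made-up numbers (a three-event partition, values chosen only for illustration):

```python
# Hypothetical values for a partition A_1, A_2, A_3 of the sample space.
prior = [0.5, 0.3, 0.2]      # P(A_i); must sum to 1
like  = [0.10, 0.40, 0.70]   # P(B | A_i)

# Total probability: P(B) = sum_i P(B | A_i) * P(A_i)
p_b = sum(l * p for l, p in zip(like, prior))

# Bayes' Theorem: P(A_j | B) = P(B | A_j) * P(A_j) / P(B)
posterior = [l * p / p_b for l, p in zip(like, prior)]

print(posterior)  # posterior probabilities; they sum to 1
```

The denominator $P(B)$ is just a normalizing constant, which is why the posterior is proportional to the product of the "likelihood" terms $P(B \mid A_j)$ and the prior, as described earlier.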
This note was uploaded on 01/17/2011 for the course STAT 4714 at Virginia Tech.
