Bayesian Estimation and Confidence Intervals
Lecture XXII

Bayesian Estimation

• Implicitly in our previous discussions about estimation, we adopted a classical viewpoint.
  – We had some process generating random observations.
  – This random process was a function of fixed, but unknown, parameters.
  – We then designed procedures to estimate these unknown parameters based on observed data.
• Specifically, suppose that a random process, such as students being admitted to the University of Florida, generated heights, and that this height process can be characterized by a normal distribution.
  – We can estimate the parameters of this distribution using maximum likelihood.
  – The likelihood of a particular sample can be expressed as

    L(X_1, X_2, \ldots, X_n \mid \mu, \sigma^2) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(X_i - \mu)^2\right)

  – Our estimates of μ and σ² are then based on the values of the parameters that maximize the likelihood of drawing that sample.
• Turning this process around slightly, Bayesian analysis assumes that we can make some kind of probability statement about the parameters before we start. The sample is then used to update our prior distribution.
  – First, assume that our prior beliefs about the distribution function can be expressed as a probability density function π(θ), where...
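The two viewpoints above can be sketched numerically. The snippet below is a minimal illustration, not part of the lecture: the height data, the prior parameters (m0, s0²), and the "known" sampling variance σ² are all assumed for the example. It computes the maximum-likelihood estimates of μ and σ² for a normal sample, then performs the standard conjugate normal–normal update of a prior on μ (variance treated as known).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample of student heights in cm (true values are assumptions).
heights = rng.normal(loc=175.0, scale=7.0, size=50)
n = len(heights)

# Classical view: maximum-likelihood estimates for a normal sample.
# mu_hat is the sample mean; sigma2_hat uses divisor n (the ML estimator),
# not n - 1 (the unbiased estimator).
mu_hat = heights.mean()
sigma2_hat = ((heights - mu_hat) ** 2).mean()

# Bayesian view: treat sigma^2 as known and place a normal prior pi(mu).
# With prior N(m0, s0^2), the posterior is N(m1, s1^2), where
#   s1^2 = 1 / (1/s0^2 + n/sigma^2)
#   m1   = s1^2 * (m0/s0^2 + n*xbar/sigma^2)
m0, s0_sq = 170.0, 25.0   # prior mean and variance (assumed for illustration)
sigma_sq = 49.0           # "known" sampling variance (assumed)

s1_sq = 1.0 / (1.0 / s0_sq + n / sigma_sq)
m1 = s1_sq * (m0 / s0_sq + n * mu_hat / sigma_sq)

print("ML estimates:       mu_hat =", mu_hat, " sigma2_hat =", sigma2_hat)
print("Posterior for mu:   mean =", m1, " variance =", s1_sq)
```

Note that the posterior mean m1 is a precision-weighted average of the prior mean and the sample mean, so it always lies between them, and the posterior variance is smaller than both the prior variance and the sampling variance of the mean; as n grows, the data dominate the prior.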
This note was uploaded on 07/18/2011 for the course AEB 6933 taught by Professor Carriker during the Fall '09 term at University of Florida.