Bayesian Estimation Confidence Intervals: Lecture XXII
Charles B. Moss
AEB 6571 Econometric Methods I, Fall 2010
October 21, 2010

I. Bayesian Estimation
A. Implicitly, in our previous discussions of estimation we adopted a classical viewpoint.
1. We had some process generating random observations.
2. This random process was a function of fixed, but unknown, parameters.
3. We then designed procedures to estimate these unknown parameters based on observed data.
B. Specifically, suppose that a random process, such as the admission of students to the University of Florida, generates heights, and that this height process can be characterized by a normal distribution.
1. We can estimate the parameters of this distribution using maximum likelihood.
2. The likelihood of a particular sample can be expressed as
\[
L\left(X_1, X_2, \dots, X_n \mid \mu, \sigma^2\right)
= \frac{1}{(2\pi)^{n/2}\,\sigma^n}
\exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(X_i - \mu\right)^2\right]
\tag{1}
\]
3. Our estimates of \(\mu\) and \(\sigma^2\) are then the values of each parameter that maximize the likelihood of drawing that sample.
C. Turning this process around slightly, Bayesian analysis assumes that we can make some kind of probability statement about the parameters before we start. The sample is then used to update our prior distribution.
1. First, assume that our prior beliefs about the distribution function can be expressed as a probability density function \(\pi(\theta)\), where \(\theta\) is the parameter we are interested in estimating....
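The maximum-likelihood step in part B can be sketched numerically. The sketch below is illustrative, not part of the lecture: it draws a hypothetical height sample (mean 70, standard deviation 3 are assumed values), evaluates the log of the likelihood in equation (1), and uses the closed-form maximizers for the normal case, namely the sample mean and the (biased) sample variance.

```python
import math
import random

def log_likelihood(xs, mu, sigma2):
    """Log of the normal likelihood in equation (1)."""
    n = len(xs)
    return (-0.5 * n * math.log(2 * math.pi)
            - 0.5 * n * math.log(sigma2)
            - sum((x - mu) ** 2 for x in xs) / (2 * sigma2))

random.seed(0)
# Hypothetical sample of student heights (inches); the true parameters
# mu = 70 and sigma = 3 are assumptions for this illustration.
heights = [random.gauss(70.0, 3.0) for _ in range(500)]

# For the normal distribution, the likelihood in (1) is maximized in
# closed form by the sample mean and the biased sample variance.
mu_hat = sum(heights) / len(heights)
sigma2_hat = sum((x - mu_hat) ** 2 for x in heights) / len(heights)

# Any other parameter value yields a lower log-likelihood than the MLE.
ll_at_mle = log_likelihood(heights, mu_hat, sigma2_hat)
ll_nearby = log_likelihood(heights, mu_hat + 0.5, sigma2_hat)
```

Because the normal MLE has a closed form, no numerical optimizer is needed here; for distributions without closed-form maximizers, the same log-likelihood function would instead be passed to a numerical routine.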