Bayesian Estimation Confidence Intervals:
Lecture XXII
Charles B. Moss
October 21, 2010
I. Bayesian Estimation
A. Implicitly in our previous discussions about estimation, we adopted
a classical viewpoint.
1. We had some process generating random observations.
2. This random process was a function of fixed, but unknown, parameters.
3. We then designed procedures to estimate these unknown parameters based on observed data.
B. Specifically, suppose that a random process, such as the admission of students to the University of Florida, generates heights, and that this height process can be characterized by a normal distribution.
1. We can estimate the parameters of this distribution using
maximum likelihood.
2. The likelihood of a particular sample can be expressed as

L(X_1, X_2, \cdots, X_n \mid \mu, \sigma^2) = \frac{1}{(2\pi)^{n/2}\sigma^n} \exp\left[ -\frac{1}{2\sigma^2} \sum_{i=1}^{n} (X_i - \mu)^2 \right]  (1)
3. Our estimates of \mu and \sigma^2 are then based on the value of each parameter that maximizes the likelihood of drawing that sample.
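As a quick sketch (not part of the original notes), the maximum likelihood estimators for the normal distribution in equation (1) have the familiar closed forms: the sample mean for \mu and the sum of squared deviations divided by n (not n-1) for \sigma^2. The height data below are simulated under assumed parameters purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical sample of student heights in inches; the "true" parameters
# (mean 70, std. dev. 3) are assumptions for this illustration only
heights = rng.normal(loc=70.0, scale=3.0, size=500)

# Maximum likelihood estimators for a normal sample:
# mu_hat is the sample mean
mu_hat = heights.mean()
# sigma2_hat divides by n (the MLE), not n - 1 (the unbiased estimator)
sigma2_hat = ((heights - mu_hat) ** 2).mean()
```

With a sample this large, both estimates land close to the assumed parameters; the \sigma^2 estimate matches `np.var` with its default divisor of n.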
AEB 6571 Econometric Methods I
Professor Charles B. Moss
Lecture XXII
Fall 2010
C. Turning this process around slightly, Bayesian analysis assumes
that we can make some kind of probability statement about parameters before we start. The sample is then used to update our
prior distribution.
1. First, assume that our prior beliefs about the distribution
function can be expressed as a probability density function
π(θ), where θ is the parameter we are interested in estimating.