Bayesian Estimation and Confidence Intervals
Lecture XXII
I. Bayesian Estimation
A. Implicitly in our previous discussions about estimation, we adopted a classical viewpoint.
1. We had some process generating random observations.
2. This random process was a function of fixed, but unknown, parameters.
3. We then designed procedures to estimate these unknown parameters based on observed data.
B. Specifically, suppose that some random process, such as admitting students to the University of Florida, generates heights, and that this height process can be characterized by a normal distribution.
1. We can estimate the parameters of this distribution using maximum likelihood.
2. The likelihood of a particular sample can be expressed as

L(\mu, \sigma^2 \mid X_1, X_2, \ldots, X_n) = \left(2\pi\sigma^2\right)^{-n/2} \exp\left[-\frac{1}{2\sigma^2} \sum_{i=1}^{n} \left(X_i - \mu\right)^2\right]
3. Our estimates of \mu and \sigma^2 are then based on the values of each parameter that maximize the likelihood of drawing that sample.
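As a numerical illustration (not part of the original lecture), the likelihood above is maximized in closed form by the sample mean and the average squared deviation (dividing by n, not n - 1). A minimal Python sketch, using hypothetical height data:

```python
def normal_mle(sample):
    """Maximum likelihood estimates for a normal sample.

    For the normal distribution, the MLE of the mean is the sample
    mean, and the MLE of the variance divides the sum of squared
    deviations by n (not the unbiased n - 1).
    """
    n = len(sample)
    mu_hat = sum(sample) / n
    sigma2_hat = sum((x - mu_hat) ** 2 for x in sample) / n
    return mu_hat, sigma2_hat

# Hypothetical student heights in inches (illustrative values only)
heights = [68.0, 71.5, 65.2, 70.1, 69.3, 66.8]
mu_hat, sigma2_hat = normal_mle(heights)
```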
C. Turning this process around slightly, Bayesian analysis assumes that we can make some kind of probability statement about the parameters before we start. The sample is then used to update our prior distribution.
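To make the prior-to-posterior update concrete, here is a minimal sketch of the standard conjugate case (a normal likelihood with known variance and a normal prior on the mean; this particular setup is an illustration I am supplying, not one from the lecture). Precisions (inverse variances) add, and the posterior mean is a precision-weighted average of the prior mean and the sample mean:

```python
def normal_posterior(sample, mu0, tau2, sigma2):
    """Posterior for a normal mean when the variance sigma2 is known,
    under a conjugate N(mu0, tau2) prior on the mean.

    Returns the posterior mean and posterior variance. The posterior
    precision is the prior precision plus n times the data precision,
    and the posterior mean weights mu0 and the sample mean by their
    respective precisions.
    """
    n = len(sample)
    xbar = sum(sample) / n
    post_precision = 1.0 / tau2 + n / sigma2
    post_mean = (mu0 / tau2 + n * xbar / sigma2) / post_precision
    return post_mean, 1.0 / post_precision
```

Note how the sample pulls the posterior mean away from the prior mean mu0 toward the sample mean, with the pull growing as n increases.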
Fall '09, CARRIKER