Examples on Maximum Likelihood Estimation and
Bayesian Inference
November 13, 2009
1 Estimating a Normal Population Mean

1.1 Unknown $\mu$, Known $\sigma^2$
Let $X_i \sim N(\mu, \sigma^2)$, where $\sigma^2$ is known and $\mu$ is to be estimated based on the $n$ observed samples $x_1, \ldots, x_n$.
To find the maximum likelihood estimator of $\mu$, or to obtain a posterior distribution over $\mu$ via the Bayesian route, we first need to state the likelihood. Recall that the likelihood is obtained from the data-generating mechanism, which in this case is a normal distribution: our observations $x_i$ are generated by a normal distribution with mean $\mu$, the quantity we would like to learn about. Thus the likelihood of $\mu$ (equivalently, the joint density of the observed $x_i$) is the product of normal densities evaluated at each $x_i$:
$$
L(\mu) = p(x_1, \ldots, x_n \mid \mu) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left\{ -\frac{1}{2} \left( \frac{x_i - \mu}{\sigma} \right)^2 \right\}
$$
The maximum likelihood estimator, $\hat\mu$, is $\arg\max_\mu \log L(\mu)$, i.e. the value of $\mu$ that maximizes the logarithm of the likelihood function given above.
If we take the log of the likelihood function, we obtain
$$
\log L(\mu) = \ell(\mu) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2.
$$
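Since $\sigma^2$ is known, the log-likelihood can be checked numerically against a direct sum of log normal densities. Here is a minimal sketch in Python; the sample values and variance are hypothetical, chosen only for illustration:

```python
import math

# Hypothetical sample and known variance, for illustration only.
x = [4.2, 5.1, 3.8, 4.9, 5.4]
sigma2 = 1.0
n = len(x)

def log_likelihood(mu):
    """ell(mu) as derived above: log of the product of normal densities."""
    return (-n / 2 * math.log(2 * math.pi)
            - n / 2 * math.log(sigma2)
            - sum((xi - mu) ** 2 for xi in x) / (2 * sigma2))

def log_density_sum(mu):
    """Direct sum of log N(mu, sigma2) densities, as a sanity check."""
    return sum(math.log(1 / (math.sqrt(2 * math.pi) * math.sqrt(sigma2)))
               - (xi - mu) ** 2 / (2 * sigma2) for xi in x)

# The two expressions agree for any mu.
assert abs(log_likelihood(4.5) - log_density_sum(4.5)) < 1e-9
```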
If we take the derivative with respect to $\mu$, set it equal to zero, and solve at the maximizer $\hat\mu$, we obtain
$$
\hat\mu = \frac{1}{n}\sum_{i} x_i = \bar{x}.
$$
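The closed-form result can be confirmed numerically: a grid search over candidate values of $\mu$ lands on the sample mean. A short sketch with hypothetical data:

```python
import math

# Hypothetical sample with known variance sigma^2 = 1 (illustrative values only).
x = [4.2, 5.1, 3.8, 4.9, 5.4]
sigma2 = 1.0
n = len(x)

def log_likelihood(mu):
    # ell(mu) up to additive constants that do not depend on mu.
    return -sum((xi - mu) ** 2 for xi in x) / (2 * sigma2)

# Closed-form MLE from the derivation: the sample mean.
mu_hat = sum(x) / n

# A coarse grid search over mu in [3, 7] agrees with the closed form.
grid = [i / 1000 for i in range(3000, 7001)]
mu_grid = max(grid, key=log_likelihood)
assert abs(mu_grid - mu_hat) < 1e-3
```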
If desired, confidence intervals may be obtained for $\mu$.
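With $\sigma$ known, a 95% confidence interval is $\bar{x} \pm 1.96\,\sigma/\sqrt{n}$. A minimal sketch, again with hypothetical data:

```python
import math

# Hypothetical sample; sigma is assumed known, as in this section.
x = [4.2, 5.1, 3.8, 4.9, 5.4]
sigma = 1.0
n = len(x)

x_bar = sum(x) / n                    # the MLE, x-bar
z = 1.96                              # standard normal 97.5% quantile
half_width = z * sigma / math.sqrt(n)
ci = (x_bar - half_width, x_bar + half_width)
print(f"95% CI for mu: ({ci[0]:.3f}, {ci[1]:.3f})")
```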
Our goal is to obtain Bayesian inferences on $\mu$, and the likelihood is a central element of this procedure.