Statistics 612: Regular Parametric Models and Likelihood Based
Inference
Moulinath Banerjee
March 30, 2009
We continue our discussion of likelihood based inference for parametric models; in particular, we will talk more about information bounds in the context of parametric models, and the role they play in likelihood based inference. We first introduce the multiparameter version of the celebrated Cramér-Rao inequality. I will not describe the underlying assumptions in detail. These are the usual sorts of assumptions one makes for parametric models, in order to be able to establish sensible results. See
Page 11 of Chapter 3 of Wellner’s notes for a detailed description of the conditions involved. For
a multidimensional parametric model $\{ p(x, \theta) : \theta \in \Theta \subset \mathbb{R}^k \}$, the information matrix $I(\theta)$ is given by:
\[
I(\theta) = E_\theta \bigl( \dot{l}(X, \theta)\, \dot{l}(X, \theta)^T \bigr) = - E_\theta\, \ddot{l}(X, \theta),
\]
where
\[
\dot{l}(X, \theta) = \frac{\partial}{\partial \theta}\, l(X, \theta)
\]
is a $k \times 1$ column vector (recall that $l(x, \theta) = \log p(x, \theta)$), and
\[
\ddot{l}(X, \theta) = \frac{\partial^2}{\partial \theta\, \partial \theta^T}\, l(X, \theta)
\]
is a $k \times k$ matrix. Consider a smooth real-valued function $q(\theta)$ that is estimated by some statistic $T(X)$, and let $\dot{q}(\theta)$ denote the derivative of $q$ (written as a $k \times 1$ vector). Let
\[
b(\theta) = E_\theta(T(X)) - q(\theta)
\]
be the bias of the estimator $T$, and let $\dot{b}(\theta)$ denote the derivative of the bias. We then have:
\[
\mathrm{Var}_\theta(T(X)) \geq \bigl( \dot{q}(\theta) + \dot{b}(\theta) \bigr)^T\, I^{-1}(\theta)\, \bigl( \dot{q}(\theta) + \dot{b}(\theta) \bigr).
\]
In particular, if $T(X)$ is unbiased for $q(\theta)$, then
\[
\mathrm{Var}_\theta(T(X)) \geq \dot{q}(\theta)^T\, I^{-1}(\theta)\, \dot{q}(\theta).
\]
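As a quick numerical illustration (not part of the notes; the normal model and the numbers below are chosen purely for the example), one can check that the sample mean attains this bound in the $N(\mu, \sigma^2)$ model, and that a scaled sample mean attains the biased version of the bound:

```python
# Numerical check of the multiparameter Cramer-Rao bound in the N(mu, sigma^2)
# model with theta = (mu, sigma^2) -- an illustration, not part of the notes.
# The per-observation information is I(theta) = diag(1/sigma^2, 1/(2 sigma^4)),
# so n iid observations carry information n * I(theta).

def bound(grad, n, sigma2):
    """(qdot + bdot)^T (n I(theta))^{-1} (qdot + bdot) for this diagonal I."""
    inv_diag = [sigma2 / n, 2.0 * sigma2 ** 2 / n]  # diagonal of (n I)^{-1}
    return sum(g * g * d for g, d in zip(grad, inv_diag))

sigma2, n = 4.0, 25

# Unbiased case: T = X-bar estimates q(theta) = mu, qdot = (1, 0), bdot = 0.
# Var(X-bar) = sigma^2 / n attains the bound exactly.
assert abs(bound([1.0, 0.0], n, sigma2) - sigma2 / n) < 1e-12

# Biased case: T = c X-bar, so b(theta) = (c - 1) mu and qdot + bdot = (c, 0).
# Var(c X-bar) = c^2 sigma^2 / n again attains the biased bound.
c = 0.9
assert abs(bound([c, 0.0], n, sigma2) - c ** 2 * sigma2 / n) < 1e-12
print("both bounds attained")
```

Because $I(\theta)$ is diagonal here, the quadratic form collapses to a weighted sum of squared gradient entries; for a non-diagonal information matrix one would need a genuine matrix inverse.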
For a proof of this result, see Page 12 of Chapter 3 of Wellner's notes; the proof runs along lines similar to the one-dimensional case. We will not be worried about the construction of exact
unbiased estimators for $q(\theta)$ that attain the information bound; in the vast majority of situations this is not feasible. Rather, we focus on the connection of the MLE $\hat{\theta}_n$ to the information bound arising from the multiparameter inequality above. Consider the asymptotically linear representation of the MLE given by:
\[
\sqrt{n}\, (\hat{\theta}_n - \theta) = \frac{1}{\sqrt{n}} \sum_{i=1}^n I(\theta)^{-1}\, \dot{l}(X_i, \theta) + o_p(1).
\]
Invoke the Delta method to obtain:
\[
\sqrt{n}\, \bigl( q(\hat{\theta}_n) - q(\theta) \bigr) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \dot{q}(\theta)^T\, I(\theta)^{-1}\, \dot{l}(X_i, \theta) + o_p(1).
\]
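This expansion can be checked by a short Monte Carlo simulation (a sketch, not from the notes; the exponential model and all numbers are chosen for illustration). With the one-parameter exponential model at rate $\theta$ and $q(\theta) = 1/\theta$ (the mean), we have $q(\hat{\theta}_n) = \bar{X}_n$, $I(\theta) = 1/\theta^2$, $\dot{q}(\theta) = -1/\theta^2$, so the information bound is $\dot{q}(\theta)^2 / I(\theta) = 1/\theta^2$:

```python
import random
import statistics

# Monte Carlo sanity check of the delta-method expansion, in the
# one-parameter exponential model (k = 1), so everything is closed form.
# With rate theta, the MLE is theta-hat = 1 / X-bar; for q(theta) = 1/theta
# we get q(theta-hat) = X-bar, and the information bound is 1/theta^2.

random.seed(0)
theta, n, reps = 1.0, 100, 5000

draws = []
for _ in range(reps):
    xbar = sum(random.expovariate(theta) for _ in range(n)) / n
    # sqrt(n) * (q(theta-hat) - q(theta))
    draws.append(n ** 0.5 * (xbar - 1.0 / theta))

emp_var = statistics.variance(draws)
info_bound = 1.0 / theta ** 2
print(emp_var, info_bound)  # empirical variance should be close to the bound
```

The empirical variance of the rescaled estimation error settles near $1/\theta^2$, matching the bound claimed in the next sentence of the notes.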
It is easily seen that the asymptotic variance of $\sqrt{n}\, \bigl( q(\hat{\theta}_n) - q(\theta) \bigr)$ is exactly $\dot{q}(\theta)^T\, I^{-1}(\theta)\, \dot{q}(\theta)$, the information bound arising from the multiparameter Cramér-Rao inequality.
The function $\dot{q}(\theta)^T\, I(\theta)^{-1}\, \dot{l}(x, \theta)$ (that provides a linearization of the MLE) is called the efficient influence function for estimating $q(\theta)$. Motivated by the above considerations, we define efficient influence functions and information bounds for vector-valued functions of $\theta$.
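To make the definition concrete, here is a worked instance (my own illustration, not from the notes): in the $N(\mu, \sigma^2)$ model with $q(\theta) = \mu$, the efficient influence function reduces algebraically to $x - \mu$, since $\dot{q}^T I^{-1} = (1, 0)\,\mathrm{diag}(\sigma^2, 2\sigma^4) = (\sigma^2, 0)$ and the first score component is $(x - \mu)/\sigma^2$. Its mean is zero and its variance $\sigma^2$ is exactly the information bound, which a short simulation confirms:

```python
import random
import statistics

# Worked instance (not in the notes): efficient influence function for
# q(theta) = mu in the N(mu, sigma^2) model, theta = (mu, sigma^2).
# qdot^T I(theta)^{-1} ldot(x, theta) simplifies to x - mu.

def eff_influence(x, mu, sigma2):
    # Score vector ldot(x, theta) for the normal model.
    score = [(x - mu) / sigma2,
             ((x - mu) ** 2 - sigma2) / (2.0 * sigma2 ** 2)]
    weights = [sigma2, 0.0]  # qdot^T I(theta)^{-1} = (sigma^2, 0)
    return sum(w * s for w, s in zip(weights, score))

mu, sigma2 = 1.5, 4.0
random.seed(1)
vals = [eff_influence(random.gauss(mu, sigma2 ** 0.5), mu, sigma2)
        for _ in range(20000)]

print(statistics.mean(vals))      # approximately 0
print(statistics.variance(vals))  # approximately sigma^2, the bound
```

The two printed values illustrate the defining properties of an efficient influence function: mean zero under $P_\theta$, with variance equal to the information bound for $q(\theta)$.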
Let $\nu$ be a Euclidean parameter defined on a regular parametric model $\mathcal{P} = \{ P_\theta : \theta \in \Theta \}$. We can identify $\nu$ with the parametric function $q : \Theta \to \mathbb{R}^m$ defined by:
\[
q(\theta) = \nu(P_\theta), \quad \text{for } P_\theta \in \mathcal{P}.
\]
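For instance (an illustration, not from the notes), in the normal model the mean-variance pair is such a Euclidean parameter, with $m = 2$:

```latex
% In the normal model P_theta = N(mu, sigma^2) with theta = (mu, sigma^2),
% the Euclidean parameter nu(P) = (E_P X, Var_P X) is identified with
% q : Theta -> R^2, which here is just the identity map:
\nu(P_\theta) = \bigl( E_{P_\theta} X,\ \mathrm{Var}_{P_\theta} X \bigr)
              = (\mu, \sigma^2) = q(\theta), \qquad m = 2.
```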