NTHU MATH 2820, 2008, Lecture Notes
Ch8, p.61
Question 6.6
Note that minimal sufficient statistics may still contain ancillary information. What other property can guarantee that a sufficient statistic contains no ancillary information?
• Statistics like $R$ are called ancillary statistics: their distributions are free of the parameters, so they seemingly contain no information about the parameters. (Other examples of ancillary statistics?)
Definition 6.17 (completeness, TBp.310)
Let $f(s \mid \theta)$, $\theta \in \Omega$, be a family of pdfs or pmfs for a statistic $S = S(X_1, \ldots, X_n)$. The family of probability distributions is called complete if $E[u(S)] = 0$ for all $\theta \in \Omega$ implies $u(S) = 0$ with probability 1 for all $\theta \in \Omega$. Equivalently, $S$ is called a complete statistic.
[Diagram: $T = g(X_1, \ldots, X_n)$ and $S = h(T)$; $u_1(S)$: a nonconstant function; $u_2(S)$: a constant function]
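The definition can be made concrete with a small example of my own (not from the notes): for $S = X_1 + X_2$ with i.i.d. Bernoulli($p$) data, $S \sim \mathrm{Binomial}(2, p)$, and $E_p[u(S)]$ is a polynomial in $p$ whose coefficients are linear in $(u(0), u(1), u(2))$. Evaluating at three distinct values of $p$ gives an invertible linear system, so $E_p[u(S)] = 0$ for all $p$ forces $u \equiv 0$, i.e., the family is complete.

```python
# Hypothetical illustration (not from the notes): completeness of the
# Binomial(2, p) family, the distribution of S = X1 + X2 for i.i.d.
# Bernoulli(p) data.  E_p[u(S)] = u0*(1-p)^2 + 2*u1*p*(1-p) + u2*p^2.
# If this vanishes at three distinct p values, the 3x3 linear system in
# (u0, u1, u2) has only the trivial solution when its determinant is
# nonzero -- which forces u = 0, i.e., completeness.

def binom2_pmf_row(p):
    """Row of P(S = s), s = 0, 1, 2, for S ~ Binomial(2, p)."""
    return [(1 - p) ** 2, 2 * p * (1 - p), p ** 2]

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Three distinct values of p give three linear equations in (u0, u1, u2).
M = [binom2_pmf_row(p) for p in (0.25, 0.5, 0.75)]

# Nonzero determinant => only u = (0, 0, 0) satisfies E_p[u(S)] = 0 at
# all three p's, illustrating completeness of the family.
print(det3(M))  # 0.0625, nonzero
```

The choice of three $p$ values is arbitrary; any three distinct points in $(0,1)$ work because the three pmf rows are linearly independent polynomials in $p$.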
made by ShaoWei Cheng (NTHU, Taiwan)
Ch8, p.62
Example 6.28 (sufficient and complete statistic of i.i.d. Uniform distribution $U(0, \theta)$)
Let $X_1, \ldots, X_n$ be i.i.d. from the Uniform distribution $U(0, \theta)$, $\theta > 0$.
• By the factorization theorem, $X_{(n)}$, the largest order statistic, is sufficient.
• The pdf of $X_{(n)}$ is $\frac{n x^{n-1}}{\theta^n} \, I_{(0,\theta)}(x)$.
Let $u$ be a function such that $E[u(X_{(n)})] = 0$ for all $\theta$. Then
$$\int_0^{\theta} u(x)\, x^{n-1}\, dx = 0, \quad \text{for all } \theta > 0,$$
which implies $u(x)\, x^{n-1} = 0$, a.s. for $x \in (0, \infty)$; therefore, $X_{(n)}$ is complete.
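The pdf above gives $E[X_{(n)}] = \int_0^\theta x \cdot \frac{n x^{n-1}}{\theta^n}\, dx = \frac{n\theta}{n+1}$, which a quick Monte Carlo sketch (my own check, not part of the notes) can confirm:

```python
# Simulation sketch: from the pdf n*x^(n-1)/theta^n on (0, theta), the
# maximum of n i.i.d. U(0, theta) draws has mean n*theta/(n+1).  We
# estimate E[X_(n)] by Monte Carlo for a couple of (n, theta) choices.
import random

random.seed(0)

def mean_max_uniform(n, theta, reps=100_000):
    """Monte Carlo estimate of E[X_(n)] for i.i.d. U(0, theta) samples."""
    total = 0.0
    for _ in range(reps):
        total += max(random.uniform(0, theta) for _ in range(n))
    return total / reps

for n, theta in [(5, 1.0), (10, 3.0)]:
    est = mean_max_uniform(n, theta)
    exact = n * theta / (n + 1)
    print(n, theta, round(est, 3), round(exact, 3))  # est close to exact
```

This identity is also what makes $\frac{n+1}{n} X_{(n)}$ the natural unbiased estimator of $\theta$ built from the sufficient statistic.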
Note.
• If $S$ is a complete statistic, $E[u(S)]$ being a constant for all $\theta$ implies that the transformation $u$ is a constant transformation.
• If $S$ is a complete statistic, any transformation of $S$ except the constant functions contains some information about $\theta$.
• In Example 6.27, $E(R) = \frac{n-1}{n+1}$. That is,
$$X_{(n)} - X_{(1)} - \frac{n-1}{n+1} = R - E(R)$$
has mean zero for all $\theta$. $\Rightarrow$ there is a nonzero function of $X_{(1)}$ and $X_{(n)}$ whose expectation is zero for all $\theta$.
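Assuming Example 6.27 concerns an i.i.d. sample from $U(\theta, \theta+1)$ (as the $\theta$-free mean $\frac{n-1}{n+1}$ suggests), a simulation sketch of my own can illustrate the ancillarity of the range $R$:

```python
# Simulation sketch (assumed setting: i.i.d. U(theta, theta+1) samples):
# the range R = X_(n) - X_(1) is ancillary, with E(R) = (n-1)/(n+1)
# regardless of theta.  We estimate E(R) for several theta values.
import random

random.seed(1)

def mean_range(n, theta, reps=100_000):
    """Monte Carlo estimate of E[X_(n) - X_(1)] for U(theta, theta+1)."""
    total = 0.0
    for _ in range(reps):
        xs = [random.uniform(theta, theta + 1) for _ in range(n)]
        total += max(xs) - min(xs)
    return total / reps

n = 5
for theta in (0.0, 2.0, -7.5):
    print(theta, round(mean_range(n, theta), 3))  # all near (n-1)/(n+1)
print(round((n - 1) / (n + 1), 3))
```

The estimates agree across shifts $\theta$, which is exactly the point: $R$ alone tells us nothing about $\theta$, yet $R - E(R)$ is a nonzero function of $(X_{(1)}, X_{(n)})$ with expectation zero.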
Ch8, p.63
Example 6.29 (sufficient and complete statistic of i.i.d. Poisson distribution)
Suppose $X_1, \ldots, X_n$ is an i.i.d. sample from the Poisson distribution $P(\lambda)$. Then
$$f(x_1, \ldots, x_n; \lambda) = \Big( e^{-n\lambda} \, \lambda^{\sum_{i=1}^n x_i} \Big) \Big/ \prod_{i=1}^n x_i! \,.$$
So $S = \sum_{i=1}^n X_i$ is sufficient for $\lambda$ and $S \sim P(n\lambda)$. If $u(S)$ is any function of $S$ such that
$$0 = E[u(S)] = e^{-n\lambda} \sum_{s=0}^{\infty} \frac{u(s)\, n^s}{s!}\, \lambda^s, \quad \text{for all } \lambda,$$
then all coefficients of $\lambda$ are zero and $u(s) = 0$. Hence $S$ is also complete.
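The claim $S \sim P(n\lambda)$ can be checked by simulation; the sketch below (my own check, not from the notes) verifies that the empirical mean and variance of $S$ both sit near $n\lambda$, as a Poisson distribution requires:

```python
# Simulation sketch: the sum S = X_1 + ... + X_n of i.i.d. Poisson(lam)
# variables is Poisson(n*lam), so S has mean and variance both equal to
# n*lam.  A Poisson sampler is hand-rolled (Knuth's multiplication
# algorithm) since Python's stdlib `random` has no Poisson generator.
import math
import random

random.seed(2)

def poisson_sample(lam):
    """Draw one Poisson(lam) variate via Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def sum_stats(n, lam, reps=50_000):
    """Empirical mean and variance of S = sum of n i.i.d. Poisson(lam)."""
    sums = [sum(poisson_sample(lam) for _ in range(n)) for _ in range(reps)]
    m = sum(sums) / reps
    v = sum((s - m) ** 2 for s in sums) / (reps - 1)
    return m, v

m, v = sum_stats(n=4, lam=0.8)
print(round(m, 2), round(v, 2))  # both should be near n*lam = 3.2
```

Equality of mean and variance is only a necessary condition, but it is the quickest numerical fingerprint of the Poisson family.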
Theorem 6.14
A complete and sufficient statistic is minimal sufficient. However, a minimal sufficient statistic is not necessarily complete. (For example, the minimal sufficient statistic $(X_{(1)}, X_{(n)})$ in Ex. 6.26 is not complete because $E(X_{(1)}) = \frac{1}{n+1} + \theta$, $E(X_{(n)}) = \frac{n}{n+1} + \theta$, and $E\big( X_{(n)} - X_{(1)} - \frac{n-1}{n+1} \big) = 0$, for any $\theta > 0$.)
Definition 6.18 (one-parameter exponential family of probability distributions, TBp.308)
A family of distributions $\{ f(x \mid \theta) : \theta \in \Omega \}$ is a one-parameter exponential family if the pdf or pmf is of the form:
$$f(x \mid \theta) = \begin{cases} \exp[c(\theta) T(x) + d(\theta) + S(x)] = e^{c(\theta) T(x)}\, e^{d(\theta)}\, e^{S(x)}, & x \in A, \\ 0, & x \notin A, \end{cases}$$
where the set $A$ does not depend on $\theta$.
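For instance, the Poisson($\lambda$) pmf of Example 6.29 fits this form with $c(\lambda) = \log \lambda$, $T(x) = x$, $d(\lambda) = -\lambda$, $S(x) = -\log x!$, and $A = \{0, 1, 2, \ldots\}$. A short numerical sketch (my own illustration) confirms the two expressions agree:

```python
# Illustration sketch: the Poisson(lam) pmf e^(-lam) * lam^x / x! is a
# one-parameter exponential family with c(lam) = log(lam), T(x) = x,
# d(lam) = -lam, S(x) = -log(x!), on A = {0, 1, 2, ...}.
import math

def poisson_pmf(x, lam):
    """Poisson pmf written in its usual form."""
    return math.exp(-lam) * lam ** x / math.factorial(x)

def exp_family_form(x, lam):
    """The same pmf written as exp[c(theta)T(x) + d(theta) + S(x)]."""
    c = math.log(lam)                  # c(theta)
    t = x                              # T(x)
    d = -lam                           # d(theta)
    s = -math.log(math.factorial(x))   # S(x)
    return math.exp(c * t + d + s)

for x in range(6):
    assert abs(poisson_pmf(x, 2.5) - exp_family_form(x, 2.5)) < 1e-12
print("Poisson pmf matches its exponential-family form")
```

Identifying $T(x)$ this way is what makes $\sum_i T(X_i) = \sum_i X_i$ the natural sufficient (and, by the exponential-family theory, complete) statistic in Example 6.29.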