NTHU MATH 2820, 2008, Lecture Notes
Ch8, p.61
Question 6.6
Note that minimal sufficient statistics may still contain ancillary information. What other property can guarantee that a sufficient statistic contains no ancillary information?
• Statistics like $R$ are called ancillary statistics, which have distributions free of the parameters and seemingly contain no information about the parameters. (Other examples of ancillary statistics?)
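The ancillarity of $R$ can be checked by simulation (a sketch, not part of the notes; it assumes, consistent with the later bullet on Example 6.27, that $R = X_{(n)} - X_{(1)}$ is the range of an i.i.d. $U(\theta, \theta+1)$ sample, and the values $n = 5$, $\theta \in \{0, 7\}$ are arbitrary choices): the distribution of $R$, and in particular its mean $(n-1)/(n+1)$, should not change with $\theta$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 200_000

def range_samples(theta):
    # i.i.d. U(theta, theta + 1) sample; R = X_(n) - X_(1) is the sample range
    x = rng.uniform(theta, theta + 1.0, size=(reps, n))
    return x.max(axis=1) - x.min(axis=1)

r0 = range_samples(theta=0.0)
r7 = range_samples(theta=7.0)

# The distribution of R is free of theta: both means should be near (n-1)/(n+1)
print(r0.mean(), r7.mean(), (n - 1) / (n + 1))
```

Shifting every observation by $\theta$ leaves the range unchanged, which is exactly why its distribution is parameter-free here.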
Definition 6.17 (completeness, TBp.310)
Let $f(s|\theta)$, $\theta \in \Omega$, be a family of pdfs or pmfs for a statistic $S = S(X_1, \ldots, X_n)$. The family of probability distributions is called complete if $E[u(S)] = 0$ for all $\theta \in \Omega$ implies $u(S) = 0$ with probability 1 for all $\theta \in \Omega$. Equivalently, $S$ is called a complete statistic.
[Diagram: $T = g(X_1, \ldots, X_n)$, $S = h(T)$; $u_1(S)$: a nonconstant function; $u_2(S)$: a constant function]
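To see how completeness can fail (an illustrative example not in the original notes, using the setup of Definition 6.17): let $X \sim U(-\theta, \theta)$, $\theta > 0$, and take $S = X$ with $u(s) = s$. Then

```latex
E[u(S)] = E[X] = 0 \quad \text{for all } \theta > 0,
```

yet $P(u(S) = 0) = P(X = 0) = 0 \neq 1$, so this family is not complete: a nonconstant transformation of $S$ has expectation zero under every parameter value.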
made by ShaoWei Cheng (NTHU, Taiwan)
Ch8, p.62
Example 6.28 (sufficient and complete statistic of i.i.d. Uniform distribution U(0, θ))
Let $X_1, \ldots, X_n$ be i.i.d. from the Uniform distribution $U(0, \theta)$, $\theta > 0$.
• By the factorization theorem, $X_{(n)}$, the largest order statistic, is sufficient.
• The pdf of $X_{(n)}$ is
$$\frac{n x^{n-1}}{\theta^n} I_{(0,\theta)}(x).$$
Let $u$ be a function such that $E[u(X_{(n)})] = 0$ for all $\theta$. Then
$$\int_0^\theta u(x)\, x^{n-1}\, dx = 0, \quad \text{for all } \theta > 0,$$
which (differentiating both sides with respect to $\theta$ gives $u(\theta)\,\theta^{n-1} = 0$) implies
$$u(x)\, x^{n-1} = 0, \quad \text{a.s. for } x \in (0, \infty);$$
therefore, $X_{(n)}$ is complete.
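The stated pdf of $X_{(n)}$ can be sanity-checked numerically (a sketch, not part of the notes; $n = 5$ and $\theta = 2$ are arbitrary choices): the pdf implies $E[X_{(n)}] = \int_0^\theta x \cdot n x^{n-1}/\theta^n \, dx = n\theta/(n+1)$, which the simulated maximum of $n$ i.i.d. $U(0, \theta)$ draws should match.

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta, reps = 5, 2.0, 200_000

# Simulate X_(n) = max of n i.i.d. U(0, theta) draws
x_max = rng.uniform(0.0, theta, size=(reps, n)).max(axis=1)

# Mean implied by the pdf n * x^(n-1) / theta^n on (0, theta)
implied_mean = n * theta / (n + 1)
print(x_max.mean(), implied_mean)
```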
Note.
• If $S$ is a complete statistic, then $E[u(S)]$ being constant for all $\theta$ implies that the transformation $u$ is a constant transformation.
• If $S$ is a complete statistic, any transformation of $S$ except the constant functions contains some information about $\theta$.
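The second note can be illustrated numerically (an illustration not in the notes; $u(s) = s$ is one arbitrary nonconstant choice, and $n = 5$, $\theta \in \{1, 3\}$ are assumed values): for the complete statistic $X_{(n)}$ of Example 6.28, $E[u(X_{(n)})] = n\theta/(n+1)$ changes with $\theta$, so this nonconstant transformation is informative about $\theta$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 5, 200_000

def mean_of_max(theta):
    # Estimate E[u(X_(n))] with the nonconstant choice u(s) = s, under U(0, theta)
    return rng.uniform(0.0, theta, size=(reps, n)).max(axis=1).mean()

m1, m3 = mean_of_max(1.0), mean_of_max(3.0)
# The two means differ, so u(X_(n)) carries information about theta
print(m1, m3)
```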
• In Example 6.27, $E(R) = \frac{n-1}{n+1}$. That is,
$$X_{(n)} - X_{(1)} - \frac{n-1}{n+1} = R - E(R)$$
has mean zero for all $\theta$. ⇒