UC Berkeley
Department of Statistics
STAT 210A: Introduction to Mathematical Statistics
Midterm Examination—Solutions
Fall 2006
Problem 1.1 [18 points total] Suppose that $X_i$, $i = 1, \ldots, n$, are i.i.d. samples from the uniform $\mathrm{Uni}[0, \theta]$ distribution.

(a) Find a one-dimensional sufficient statistic for estimating $\theta$.

(b) Compute the maximum likelihood estimate $\hat{\theta}_{\mathrm{MLE}}$ based on $(X_1, \ldots, X_n)$. Using an elementary argument, show that $\hat{\theta}_{\mathrm{MLE}} \xrightarrow{p} \theta^*$ as $n \to +\infty$.

(c) Consider the estimator of $\theta$ given by $\delta(X) = \frac{2}{n} \sum_{i=1}^{n} X_i$. Is it unbiased? Is it admissible under squared error loss? Justify your answers.
Now suppose that we view the parameter as a random variable $\Theta$, and assume a Pareto prior density of the form
$$\lambda(\theta) = \gamma \beta^{\gamma} \theta^{-\gamma - 1} \, \mathbb{I}[\beta \leq \theta], \qquad \text{for all } \theta > 0,$$
where $\beta > 0$ and $\gamma > 2$ are fixed.
(d) Compute the prior mean of the random variable $\Theta$.

(e) Compute the posterior distribution of $\Theta$ conditioned on $X = (X_1, \ldots, X_n)$.

(f) Compute the Bayes estimate of $\Theta$ under quadratic loss. Hint: a new calculation may not be required, given the previous parts of the question.
Solution 1.1:
(a) By independence, we have
$$p(x; \theta) = \prod_{i=1}^{n} \frac{1}{\theta} \, \mathbb{I}[x_i \leq \theta] \quad \text{for } x_i \geq 0, \qquad
= \theta^{-n} \, \mathbb{I}\bigl[\max_i(x_i) \leq \theta\bigr],$$
so that $Z = \max\{X_1, \ldots, X_n\}$ is sufficient by the factorization criterion.
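As a numerical illustration (not part of the original solution), the factorization above implies that two data sets with the same maximum produce identical likelihood functions of $\theta$. A minimal sketch:

```python
def likelihood(theta, xs):
    """Uniform[0, theta] likelihood: theta^{-n} if 0 <= x_i <= theta for all i, else 0."""
    if max(xs) > theta or min(xs) < 0:
        return 0.0
    return theta ** (-len(xs))

# Two different data sets sharing the same maximum Z = 0.9.
xs1 = [0.2, 0.5, 0.9]
xs2 = [0.7, 0.1, 0.9]

# The likelihood depends on the data only through Z = max(x_i),
# consistent with sufficiency of the maximum.
for theta in [0.5, 1.0, 1.5, 2.0]:
    assert likelihood(theta, xs1) == likelihood(theta, xs2)
```
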
(b) From part (a), the log-likelihood takes the form $L(\theta) = -n \log(\theta)$ for $\theta \geq \max_i\{X_i\}$, and $-\infty$ otherwise, so that the MLE is given by $\hat{\theta}_{\mathrm{MLE}} = \max\{X_1, \ldots, X_n\}$. For any $\epsilon \in (0, \theta)$, we compute
$$\mathbb{P}\Bigl[\bigl|\max_i X_i - \theta\bigr| > \epsilon\Bigr] = \prod_{i=1}^{n} \mathbb{P}[X_i \leq \theta - \epsilon] = \Bigl(1 - \frac{\epsilon}{\theta}\Bigr)^n \to 0$$
as $n \to +\infty$, so that consistency of the MLE follows.
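The closed form $(1 - \epsilon/\theta)^n$ can be checked by simulation (an illustrative sketch, not part of the original solution); since $\max_i X_i \leq \theta$ always, the event $|\max_i X_i - \theta| > \epsilon$ is simply $\max_i X_i < \theta - \epsilon$:

```python
import random

random.seed(0)

def prob_mle_far(n, theta, eps, trials=20000):
    """Monte Carlo estimate of P[|max X_i - theta| > eps] for X_i ~ Uni[0, theta]."""
    count = 0
    for _ in range(trials):
        z = max(random.uniform(0.0, theta) for _ in range(n))
        if theta - z > eps:   # max is always <= theta, so this is the whole event
            count += 1
    return count / trials

theta, eps = 2.0, 0.1
for n in [5, 20, 80]:
    exact = (1.0 - eps / theta) ** n   # the closed form from the solution
    approx = prob_mle_far(n, theta, eps)
    print(n, round(exact, 4), round(approx, 4))
```

Both columns shrink toward zero as $n$ grows, matching the consistency argument.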
(c) We compute
$$\mathbb{E}[\delta(X)] = \frac{2}{n} \sum_{i=1}^{n} \mathbb{E}[X_i] = \frac{2}{n} \Bigl(\frac{n\theta}{2}\Bigr) = \theta,$$
so that the estimator is unbiased. However, since $Z = \max_i\{X_i\}$ is sufficient from part (a) and this estimator depends on other information, the Rao-Blackwell theorem dictates that we can construct a better estimator $\delta'(X) = \mathbb{E}[\delta(X) \mid Z]$. The strict convexity of quadratic loss ensures that $\delta'$ will dominate $\delta$, so that $\delta$ must be inadmissible.
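The domination can be seen numerically. A side calculation (not carried out in the solution) gives the Rao-Blackwellized estimator explicitly: conditioned on $Z = z$, one observation equals $z$ and the rest are i.i.d. $\mathrm{Uni}[0, z]$, so $\mathbb{E}[\delta(X) \mid Z] = (n+1)Z/n$. A simulation sketch under these assumptions:

```python
import random

random.seed(1)

def mses(n, theta, trials=20000):
    """Monte Carlo MSEs of delta = (2/n) * sum(X_i) and its
    Rao-Blackwellization E[delta | Z] = (n + 1) * max(X_i) / n."""
    se_delta = se_rb = 0.0
    for _ in range(trials):
        xs = [random.uniform(0.0, theta) for _ in range(n)]
        delta = 2.0 * sum(xs) / n
        rb = (n + 1) * max(xs) / n
        se_delta += (delta - theta) ** 2
        se_rb += (rb - theta) ** 2
    return se_delta / trials, se_rb / trials

mse_delta, mse_rb = mses(n=10, theta=3.0)
# Theory: MSE(delta) = theta^2/(3n), MSE(rb) = theta^2/(n(n+2)),
# so the Rao-Blackwellized estimator has strictly smaller risk.
assert mse_rb < mse_delta
```
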
(d) We compute the prior mean
$$\mathbb{E}[\Theta] = \int_{\beta}^{+\infty} \theta \lambda(\theta) \, d\theta = \gamma \beta^{\gamma} \int_{\beta}^{+\infty} \theta^{-\gamma} \, d\theta = \frac{\gamma \beta}{\gamma - 1}.$$
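The closed form $\gamma\beta/(\gamma - 1)$ can be sanity-checked by sampling from the Pareto prior via the inverse CDF $F^{-1}(u) = \beta (1 - u)^{-1/\gamma}$ (an illustrative sketch with arbitrarily chosen $\beta = 2$, $\gamma = 3$, not part of the original solution):

```python
import random

random.seed(2)

def pareto_sample(beta, gamma):
    """Inverse-CDF draw from the Pareto(beta, gamma) prior lambda(theta)."""
    u = random.random()
    return beta * (1.0 - u) ** (-1.0 / gamma)

beta, gamma, trials = 2.0, 3.0, 200000
mean = sum(pareto_sample(beta, gamma) for _ in range(trials)) / trials
exact = gamma * beta / (gamma - 1.0)   # closed form: here 3.0
assert abs(mean - exact) < 0.05
```

The condition $\gamma > 2$ from the problem statement guarantees a finite variance, so the Monte Carlo average converges quickly.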