UC Berkeley
Department of Statistics
STAT 210A: Introduction to Mathematical Statistics
Problem Set 1 Solutions
Fall 2006
Issued: Thursday, August 31, 2006
Due: Thursday, September 7, 2006
Problem 1.1

Solution:

1. Let
$$Y_n = \begin{cases} 0, & \text{with probability } 1 - \frac{1}{n}, \\ n, & \text{with probability } \frac{1}{n}. \end{cases}$$
Clearly, $E(Y_n) = 1$ for all $n$ and, hence, $\lim_{n \to \infty} E(Y_n) = 1$. However, for all $\delta > 0$,
$$0 \leq P(|Y_n - 0| \geq \delta) \leq \frac{1}{n},$$
and hence, for all $\delta > 0$, $\lim_{n \to \infty} P(|Y_n - 0| \geq \delta) = 0$, so $Y_n \xrightarrow{p} 0$.
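As a sanity check (a Monte Carlo sketch, not part of the original solution; the helper `sample_yn` is introduced only for this illustration), the sample mean of the draws stays near 1 for every $n$, while the fraction of draws with $|Y_n| \geq \delta$ shrinks like $1/n$:

```python
import random

def sample_yn(n):
    """Draw one Y_n: equals n with probability 1/n, else 0."""
    return n if random.random() < 1.0 / n else 0

random.seed(0)
for n in [10, 100, 1000]:
    draws = [sample_yn(n) for _ in range(100_000)]
    mean = sum(draws) / len(draws)                                # stays near 1
    tail = sum(1 for y in draws if abs(y) >= 0.5) / len(draws)    # roughly 1/n
    print(n, round(mean, 2), round(tail, 4))
```

The expectation is pinned at 1 by a rare, large outcome, which is exactly why convergence in probability does not control the mean.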
2. Let
$$Y_n = \begin{cases} -\sqrt{n}, & \text{with probability } \frac{1}{2n}, \\ 0, & \text{with probability } 1 - \frac{1}{n}, \\ \sqrt{n}, & \text{with probability } \frac{1}{2n}. \end{cases}$$
Hence $E(Y_n) = 0$ and $\operatorname{var}(Y_n) = 2 \cdot (\sqrt{n})^2 \cdot \frac{1}{2n} = 1$ for all $n$ and, therefore, $\lim_{n \to \infty} \operatorname{var}(Y_n) = 1$. As in item 1, for all $\delta > 0$, $0 \leq P(|Y_n - 0| \geq \delta) \leq \frac{1}{n}$ and hence, for all $\delta > 0$, $\lim_{n \to \infty} P(|Y_n - 0| \geq \delta) = 0$, so $Y_n \xrightarrow{p} 0$.
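A brief simulation sketch in Python (illustrative only, not part of the original solution; the helper `sample_yn` is hypothetical) shows the variance estimate holding near 1 even as $P(|Y_n| \geq \delta)$ falls to roughly $1/n$:

```python
import random

def sample_yn(n):
    """Y_n = -sqrt(n) or +sqrt(n), each with probability 1/(2n); else 0."""
    u = random.random()
    if u < 1.0 / (2 * n):
        return -n ** 0.5
    if u < 1.0 / n:
        return n ** 0.5
    return 0.0

random.seed(1)
n = 1000
draws = [sample_yn(n) for _ in range(200_000)]
var = sum(y * y for y in draws) / len(draws)                   # E(Y_n) = 0, so this estimates var(Y_n)
tail = sum(1 for y in draws if abs(y) >= 0.5) / len(draws)     # roughly 1/n
print(var, tail)
```

Here the variance is carried by rare $\pm\sqrt{n}$ outcomes, so the limit of the variances (1) disagrees with the variance of the limit (0).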
Problem 1.2

See the examples in Section 2.2 of Large Sample Theory, by Erich Lehmann.

1. We have that
$$E(\bar{X} - \mu)^2 = \frac{E\left[\sum_{i=1}^{n} (X_i - \mu)\right]^2}{n^2}.$$
Given independence,
$$E(\bar{X} - \mu)^2 = \frac{\sum_{i=1}^{n} \sigma_i^2}{n^2} \to 0$$
(for instance, when the $\sigma_i^2$ are uniformly bounded the ratio is $O(1/n)$), establishing convergence in quadratic mean ($L_2$ convergence). Convergence in probability follows from convergence in quadratic mean.
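A quick numerical check of this identity (a Python sketch assuming Gaussian errors; the particular $\sigma_i$ values are arbitrary choices for illustration) compares a Monte Carlo estimate of $E(\bar{X} - \mu)^2$ with $\sum_{i=1}^{n} \sigma_i^2 / n^2$:

```python
import random

random.seed(2)
sigmas = [1.0 + 0.5 * (i % 3) for i in range(50)]  # heteroskedastic but bounded
mu = 2.0
n = len(sigmas)

# Monte Carlo estimate of E(Xbar - mu)^2
trials = 20_000
mse = 0.0
for _ in range(trials):
    xbar = sum(random.gauss(mu, s) for s in sigmas) / n
    mse += (xbar - mu) ** 2
mse /= trials

# Theoretical value under independence: sum of variances over n^2
theory = sum(s * s for s in sigmas) / n ** 2
print(mse, theory)
```

The two numbers agree closely, and both shrink as $n$ grows, matching the $L_2$ convergence argument.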
2. Once it is proved that $\operatorname{var}(\tilde{X}_n) \leq \operatorname{var}(\bar{X}_n)$, $L_2$ convergence of $\bar{X}_n$ implies $L_2$ convergence of $\tilde{X}_n$. Convergence in probability follows.

The statement is true in both the "original form"
$$\tilde{X} = \frac{\sum_i X_i / \sigma_i}{\sum_i 1/\sigma_i}$$
and the "corrected form"
$$\tilde{X} = \frac{\sum_i X_i / \sigma_i^2}{\sum_i 1/\sigma_i^2}.$$
For the "corrected form": write the mean as the estimate for $\alpha$ in the regression model $X_i = \alpha + \varepsilon_i$. After reweighting by the inverse of the variance, the weighted least-squares estimate is exactly the "corrected form" $\tilde{X}$ above.
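To see numerically why inverse-variance weights help (a Python sketch with arbitrary example values of $\sigma_i$; `weighted_mean` is an illustrative helper, not from the original solution), the following compares the variances of the unweighted mean and the two weighted forms:

```python
import random

random.seed(3)
sigmas = [0.5, 1.0, 2.0, 4.0]  # known standard deviations, chosen for illustration
mu = 0.0
trials = 50_000

def weighted_mean(xs, weights):
    return sum(w * x for w, x in zip(weights, xs)) / sum(weights)

w_orig = [1.0 / s for s in sigmas]        # "original form": weights 1/sigma_i
w_corr = [1.0 / s ** 2 for s in sigmas]   # "corrected form": weights 1/sigma_i^2

est = {"plain": [], "orig": [], "corr": []}
for _ in range(trials):
    xs = [random.gauss(mu, s) for s in sigmas]
    est["plain"].append(sum(xs) / len(xs))
    est["orig"].append(weighted_mean(xs, w_orig))
    est["corr"].append(weighted_mean(xs, w_corr))

# Since mu = 0, the mean square of each estimator estimates its variance.
var = {k: sum(v * v for v in vals) / trials for k, vals in est.items()}
print(var)
```

The inverse-variance ("corrected") weights give the smallest variance of the three, consistent with the weighted least-squares argument above.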