620202 Tutorial/Computing Laboratory 2. Answers.
1.
(a)
(b) The likelihood is
$$L(\theta) = \theta^n \prod_{i=1}^n x_i^{\theta-1}.$$
Hence the log-likelihood is
$$\ell(\theta) = n \ln\theta + (\theta - 1) \sum_{i=1}^n \ln(x_i),$$
and the score function is
$$s(\theta) = \frac{n}{\theta} + \sum_{i=1}^n \ln(x_i),$$
so that setting $s(\theta) = 0$ yields
$$\hat\theta = -\frac{n}{\sum_{i=1}^n \ln(X_i)} = -\frac{n}{\ln\left(\prod_{i=1}^n X_i\right)}.$$
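As a sketch of how the maximum likelihood estimate could be computed numerically, the closed form $\hat\theta = -n/\sum \ln(X_i)$ can be evaluated directly. The sample below is hypothetical (the lab's actual $X$, $Y$, $Z$ data sets are not reproduced here).

```python
import math

# Hypothetical sample from the density f(x; theta) = theta * x**(theta - 1)
# on (0, 1); the lab's real data sets are not reproduced here.
x = [0.12, 0.45, 0.78, 0.33, 0.91, 0.26]

n = len(x)
# MLE from the derivation above: theta_hat = -n / sum(ln(x_i)).
# Each ln(x_i) is negative for x_i in (0, 1), so theta_hat is positive.
theta_hat = -n / sum(math.log(xi) for xi in x)
print(theta_hat)
```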
(c) Hence $\hat\theta_X = 0.549$, $\hat\theta_Y = 2.210$, $\hat\theta_Z = 0.959$. To find the method of moments estimators, solve $\bar x = \theta/(\theta+1)$, which yields $\tilde\theta = \bar x/(1-\bar x)$, and hence $\tilde\theta_X = 0.598$, $\tilde\theta_Y = 2.400$ and $\tilde\theta_Z = 0.865$.
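The method of moments calculation is equally direct: compute the sample mean and invert $\bar x = \theta/(\theta+1)$. A minimal sketch, again using a hypothetical sample in place of the lab data:

```python
# Hypothetical sample on (0, 1); the lab's real data sets are not reproduced here.
x = [0.12, 0.45, 0.78, 0.33, 0.91, 0.26]

xbar = sum(x) / len(x)
# Method of moments: solve xbar = theta / (theta + 1) for theta.
theta_tilde = xbar / (1 - xbar)
print(theta_tilde)
```

As with the estimates in the answer above, the MLE and method of moments estimates generally differ on a finite sample.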
2. Recall that for a random sample $X_1, \dots, X_n$,
$$E(\bar X) = \frac{1}{n}\sum_{i=1}^n E(X_i) = E(X_1)$$
and
$$\mathrm{Var}(\bar X) = \frac{1}{n^2}\sum_{i=1}^n \mathrm{Var}(X_i) = \frac{\mathrm{Var}(X_1)}{n}.$$
(a) Now, the $X_i$ are i.i.d. exponential random variables with mean $\theta$. Hence $E(\bar X) = E(X_i) = \theta$ and $\bar X$ is unbiased.
(b) $\sigma^2 = \mathrm{Var}(X_i) = \theta^2$. Hence $\mathrm{Var}(\bar X) = \sigma^2/n = \theta^2/n$, as required.
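Both facts can be checked by simulation: averaging many sample means of i.i.d. exponential draws should recover $E(\bar X) \approx \theta$ and $\mathrm{Var}(\bar X) \approx \theta^2/n$. The value $\theta = 2$ below is an illustrative choice, not from the lab.

```python
import random

random.seed(1)
n, reps = 10, 20000
theta = 2.0  # illustrative exponential mean, not from the lab data

# expovariate takes the rate, so the rate for mean theta is 1 / theta.
xbars = [sum(random.expovariate(1 / theta) for _ in range(n)) / n
         for _ in range(reps)]

mean_xbar = sum(xbars) / reps
var_xbar = sum((xb - mean_xbar) ** 2 for xb in xbars) / reps
print(mean_xbar)  # should be close to theta = 2.0
print(var_xbar)   # should be close to theta**2 / n = 0.4
```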
A good estimator of $\theta$ is $\hat\theta = \bar X$; here $\bar x = 3.48$. (This is a good estimator, as it is unbiased and its variance tends to zero as $n \to \infty$.)
3.
$$S^2 = \frac{\sum_{i=1}^n (X_i - \bar X)^2}{n-1} = \frac{1}{n-1}\sum_{i=1}^n \left(X_i^2 - 2X_i\bar X + \bar X^2\right) = \frac{1}{n-1}\left(\sum_{i=1}^n X_i^2 - 2n\bar X^2 + n\bar X^2\right) = \frac{1}{n-1}\left(\sum_{i=1}^n X_i^2 - n\bar X^2\right).$$
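The algebraic identity above is easy to verify numerically: the definitional form of $S^2$ and the shortcut form $\left(\sum X_i^2 - n\bar X^2\right)/(n-1)$ agree on any sample. A quick check with hypothetical data:

```python
# Hypothetical data; any numeric sample would do for this identity check.
x = [3.1, 4.7, 2.2, 5.0, 3.6]
n = len(x)
xbar = sum(x) / n

# Definitional form of the sample variance.
s2_direct = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
# Shortcut form derived above.
s2_shortcut = (sum(xi ** 2 for xi in x) - n * xbar ** 2) / (n - 1)

print(s2_direct, s2_shortcut)  # the two forms agree
```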