Theory of Unbiased Estimators

Advantages of unbiased estimators
1) They don't consistently over- or underestimate the parameter.
2) Requiring unbiasedness rules out some ridiculous estimators, such as T(x) = 0 for all x.
3) Often we can find an unbiased estimator which …
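A quick simulation sketch (my own example, not from the notes) illustrating point 1: the sample mean is unbiased for the population mean, while the variance estimator that divides by n is biased low.

```python
import numpy as np

# Simulation sketch (assumed N(5, 4) population): the sample mean is unbiased,
# while the divide-by-n variance estimator has expectation (n-1)/n * sigma^2.
rng = np.random.default_rng(0)
n, reps = 10, 200_000
x = rng.normal(loc=5.0, scale=2.0, size=(reps, n))  # true mean 5, variance 4

mean_est = x.mean(axis=1)             # unbiased: average is about 5
var_mle = x.var(axis=1, ddof=0)       # biased: average is about (9/10)*4 = 3.6
var_unbiased = x.var(axis=1, ddof=1)  # unbiased: average is about 4

print(mean_est.mean(), var_mle.mean(), var_unbiased.mean())
```

Averaging each estimator over many replications approximates its expectation, which is exactly what unbiasedness is a statement about.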
Exercise 7.10 Substituting the conditional maximizer β̂ = x̄/α into the Gamma(α, β) log-likelihood gives the profile log-likelihood

l(α, β̂ | x) = −n log Γ(α) + nα log α + α(t_n − n − n log x̄) − t_n,

where t_n = Σ_{i=1}^n log x_i. It can be verified empirically that for the data in Exercise 7.10, the maximizer of the last …
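The profile log-likelihood can be maximized numerically. A minimal sketch with simulated Gamma data (the shape value 3 and scale 2 are my own assumed example, not the Exercise 7.10 data):

```python
import math
import numpy as np

# Sketch: profile log-likelihood for Gamma(alpha, beta) after substituting
# beta_hat = xbar/alpha:
#   l(alpha) = -n*lgamma(alpha) + n*alpha*log(alpha)
#              + alpha*(t_n - n - n*log(xbar)) - t_n,
# with t_n = sum(log x_i).  Maximize over a grid of alpha values.
rng = np.random.default_rng(1)
x = rng.gamma(shape=3.0, scale=2.0, size=5000)   # assumed true alpha = 3
n, xbar, t_n = x.size, x.mean(), np.log(x).sum()

def profile_loglik(alpha):
    return (-n * math.lgamma(alpha) + n * alpha * math.log(alpha)
            + alpha * (t_n - n - n * math.log(xbar)) - t_n)

grid = np.linspace(0.5, 10.0, 2000)
alpha_hat = grid[np.argmax([profile_loglik(a) for a in grid])]
beta_hat = xbar / alpha_hat          # the conditional maximizer at alpha_hat
print(alpha_hat, beta_hat)
```

With a large simulated sample the grid maximizer lands close to the true shape, which is the kind of empirical verification the solution alludes to.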
Parametric models

Suppose we have an experiment with sample space Ω. Let X = (X1, . . . , Xn) be a random vector defined on the sample space. The outcome of the experiment is a realization x = (x1, . . . , xn) of the random vector X. We call x the data.

Ass…
Advantages of method of moments estimators (mmes)
a. They are often simple to derive.
b. They are consistent estimators when θ1, . . . , θk are continuous functions of µ1, . . . , µk. (More on this later.)
c. They provide starting values in the search for maximum likelihood estimates.
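A minimal sketch of point (a), using a Gamma(α, β) model as an assumed example: matching E(X) = αβ and Var(X) = αβ² to the sample moments gives closed-form estimators.

```python
import numpy as np

# Sketch (assumed Gamma(3, 2) data): solve E(X) = alpha*beta and
# Var(X) = alpha*beta**2 for alpha and beta using sample moments.
rng = np.random.default_rng(2)
x = rng.gamma(shape=3.0, scale=2.0, size=20_000)

m1 = x.mean()                # first sample moment
m2 = x.var(ddof=0)           # second central sample moment

beta_mme = m2 / m1           # Var/E = beta
alpha_mme = m1 / beta_mme    # E/beta = alpha
print(alpha_mme, beta_mme)
```

These values also make natural starting points for an iterative likelihood maximization, which is point (c).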
Partial Solution to Assignment #7

7.37 The joint density of the data may be written

f(x; θ) = (1/(2θ))^n ∏_{i=1}^n I_(−θ,θ)(x_i).

Defining Y = max_{1≤i≤n} |X_i|, the likelihood is thus

L(θ) = (1/(2θ))^n I_[y,∞)(θ),

and we see that Y is sufficient. Since each random variable
Solution to Assignment #8

8.14 The test statistic, S = Σ_{i=1}^n X_i, has approximately a normal distribution with mean np and variance np(1 − p). To make the test have size α, we need to reject H0 when

(s − n(.49)) / √(n(.49)(.51)) ≥ 2.326,

i.e., when s ≥ n(.49) + 2.326 √(n(.49)(.51)).
Partial Solution to Assignment #6

7.46(b) The likelihood function is

L(θ) = ∏_{i=1}^3 (1/θ) I_(θ,2θ)(x_i)
     = θ^{−3} I_(0,x_(1)](θ) I_[x_(3)/2,∞)(θ)
     = θ^{−3} I_[x_(3)/2,x_(1)](θ),

which is maximized at x_(3)/2. So, the MLE of θ is X_(3)/2. Now, if Xi ~ U(θ, 2θ), then (Xi − θ)/θ ~ U(0, 1), wh…
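A simulation sketch of this result (θ = 4 is an assumed example value): every sample of size 3 from U(θ, 2θ) satisfies x_(3)/2 ≤ x_(1), so the MLE is always feasible, and since E(X_(3)) = 1.75θ the estimator has mean about 0.875θ.

```python
import numpy as np

# Simulation sketch (assumed theta = 4): for X_i ~ U(theta, 2*theta), n = 3,
# the MLE is X_(3)/2.  Feasible values of theta lie in [x_(3)/2, x_(1)],
# and E(X_(3)) = 1.75*theta, so E(MLE) is about 0.875*theta.
rng = np.random.default_rng(3)
theta = 4.0
x = rng.uniform(theta, 2 * theta, size=(100_000, 3))

mle = x.max(axis=1) / 2
assert np.all(mle <= x.min(axis=1))   # x_(3)/2 <= x_(1) in every sample
print(mle.mean())                     # about 0.875 * theta = 3.5
```

The downward bias visible here is typical of MLEs at the boundary of a support-dependent parameter space.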
STAT 611-602
February 5, 2003
Solutions to Selected Problems from Assignment #1

6.9(c) As a function of θ, f(x | θ)/f(y | θ) is proportional to

∏_{i=1}^n [ (1 + e^{−(y_i − θ)}) / (1 + e^{−(x_i − θ)}) ]².

The last quantity is constant for all θ if and only if

∏_{i=1}^n (1 + e^θ e^{−y_i}) = c ∏_{i=1}^n (1 + e^θ e^{−x_i})   (∗)

for all θ and some constant c. …
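The "if" direction can be checked numerically: when y is a permutation of x, the ratio of logistic likelihoods does not depend on θ. A sketch (the data values are my own assumed example):

```python
import numpy as np

# Sketch: logistic location log-density, log f(x|theta) summed over the sample.
def logpdf(v, theta):
    z = v - theta
    return np.sum(-z - 2 * np.log1p(np.exp(-z)))

x = np.array([0.3, -1.2, 2.5])
y = x[[2, 0, 1]]                       # a permutation of x
ratios = [logpdf(x, t) - logpdf(y, t) for t in np.linspace(-5, 5, 11)]
print(ratios)                          # numerically zero for every theta
```

Since permuting the sample leaves the order statistics unchanged, this is consistent with the order statistics being minimally sufficient.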
STAT 611
Solutions to Selected Problems from Assignment #9

8.5 c. Under H0, T has the same distribution as

Σ_{i=1}^n log(Y_i / min_i Y_i),

where Y1, . . . , Yn are iid from the density f(y) = y^{−2} I_[1,∞)(y). Letting Y_(1) < Y_(2) < · · · < Y_(n) denote the ordered Yi's, …
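A Monte Carlo sketch of this null distribution (n = 10 is an assumed example): if U ~ U(0, 1) then Y = 1/U has density y^{−2} on [1, ∞), and log Y is standard exponential, so E(T) = n − n·E(min of n exponentials) = n − 1.

```python
import numpy as np

# Monte Carlo sketch (assumed n = 10): sample Y = 1/U so that Y has density
# y**-2 on [1, inf), then T = sum(log(Y_i / min Y_i)).  Since the log Y_i are
# iid standard exponential and E(min) = 1/n, E(T) = n - 1.
rng = np.random.default_rng(8)
n, reps = 10, 200_000
y = 1.0 / rng.uniform(size=(reps, n))

t = np.sum(np.log(y / y.min(axis=1, keepdims=True)), axis=1)
print(t.mean())            # about n - 1 = 9
```

By the memoryless property the n − 1 exceedances over the minimum are themselves iid exponential, so T is in fact Gamma(n − 1, 1) under H0.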
STAT 611-602
Mathematical Statistics
Instructor: Jeff Hart

1 The basic inference problem

1. Population: what might have been observed
2. Sample: what was observed
3. Probability model: probability distribution for the population; usually specified by pdf or cdf.

3 Maximum likelihood estimation

Observe X = x. The likelihood function is L(θ | x) = f(x | θ). The estimate θ̂ = θ̂(x) is called a maximum likelihood estimate (MLE) of θ if L(θ̂ | x) = max_θ L(θ | x). θ̂(X) is called a maximum likelihood estimator of θ.
Example 15 Consider the case where X = (X1, . . . , Xn) is a continuous random vector and f(· | θ) is continuous. Let ε be a small positive number. Then

P(X1 ∈ (x1 − ε, x1 + ε), . . . , Xn ∈ (xn − ε, xn + ε)) ≈ (2ε)^n f(x | θ).

This follows from the mean value theorem for integrals, a…
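The definition of the MLE can be made concrete with a direct grid maximization. A minimal sketch (Bernoulli model and parameter values are my own assumed example): for iid Bernoulli(θ) data the log-likelihood is maximized at the sample mean, and a brute-force search recovers it.

```python
import numpy as np

# Sketch (assumed Bernoulli(0.3) data): maximize the log-likelihood
# sum(x*log(theta) + (1-x)*log(1-theta)) over a grid of theta values.
rng = np.random.default_rng(4)
x = rng.binomial(1, 0.3, size=500)

def loglik(theta):
    return np.sum(x * np.log(theta) + (1 - x) * np.log(1 - theta))

grid = np.linspace(0.001, 0.999, 9999)
theta_hat = grid[np.argmax([loglik(t) for t in grid])]
print(theta_hat, x.mean())   # the two agree up to grid resolution
```

The grid search is only for illustration; here the maximizer is available in closed form as x̄.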
Solution to Assignment #10

9.1 We have

P(L(X) ≤ θ ≤ U(X)) = 1 − P({L(X) > θ} ∪ {U(X) < θ}).

Because L(x) ≤ U(x) for all x, the events {L(X) > θ} and {U(X) < θ} are mutually exclusive. Therefore,

P({L(X) > θ} ∪ {U(X) < θ}) = P(L(X) > θ) + P(U(X) < θ) = 1 − …
Example 11 Jointly sufficient statistics Suppose we observe X = (X1, . . . , Xn), where Xi = ρXi−1 + Zi, i = 2, 3, . . . , n. The quantity ρ is an unknown parameter such that |ρ| < 1. Z2, . . . , Zn are i.i.d. N(0, σ²), where σ² is another unknown parameter. X…
Example 17 Let the data be X1, . . . , Xr, whose joint distribution is multinomial with sample size n and parameters θ = (θ1, . . . , θr), where

Θ = {(θ1, . . . , θr) : 0 < θi < 1, Σ_{i=1}^r θi = 1}.

The likelihood is

L(θ | x) = (n! / (x1! · · · xr!)) θ1^{x1} · · · θr^{xr},

where Σ_{i=1}^r xi = n.
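For this likelihood the MLE is θ̂_i = x_i/n, the vector of sample proportions. A numerical sketch (the counts are my own assumed example): the log-likelihood at x/n beats it at random points of the simplex.

```python
import numpy as np

# Sketch (assumed counts, n = 20): the multinomial log-likelihood is
# sum(x_i * log(theta_i)) up to a constant; its maximizer over the simplex
# is theta_hat = x / n.  Check against random probability vectors.
x = np.array([5, 9, 6])
n = x.sum()

def loglik(theta):
    return np.sum(x * np.log(theta))   # n!/(x1!...xr!) does not involve theta

theta_hat = x / n
rng = np.random.default_rng(5)
for _ in range(1000):
    t = rng.dirichlet(np.ones(3))      # random point in the simplex
    assert loglik(theta_hat) >= loglik(t)
print(theta_hat)
```

The constraint Σθi = 1 is handled implicitly by only comparing against probability vectors.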
STAT 611-602
March 26, 2008
Solution to Practice Problems for Exam II

1. (a) We have

log f(x) = (1/2) log θ − θ(x − 1)²/(2x) − (1/2) log(2πx³),

∂ log f(x)/∂θ = 1/(2θ) − (x − 1)²/(2x),

and

∂² log f(x)/∂θ² = −1/(2θ²).

It follows that

E(∂² log f(X1)/∂θ²) = −1/(2θ²),

and hence the Cramér–Rao lower bound …
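The information calculation can be checked by Monte Carlo. A sketch, assuming the model is inverse Gaussian with mean 1 and shape θ (an assumption on my part, consistent with a score of 1/(2θ) − (x − 1)²/(2x)); the information identity says Var(score) = −E(∂² log f/∂θ²) = 1/(2θ²).

```python
import numpy as np

# Monte Carlo sketch (assumed inverse Gaussian with mean 1, shape theta = 2):
# the score is 1/(2*theta) - (x-1)**2/(2x); its mean should be about 0 and
# its variance about 1/(2*theta**2) = 0.125.  numpy's rng.wald draws
# inverse Gaussian variates.
rng = np.random.default_rng(6)
theta = 2.0
x = rng.wald(mean=1.0, scale=theta, size=1_000_000)

score = 1 / (2 * theta) - (x - 1) ** 2 / (2 * x)
print(score.mean(), score.var())   # about 0 and 0.125
```

Matching the empirical variance of the score to 1/(2θ²) confirms the Fisher information used in the Cramér–Rao bound.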
Solutions to Selected Problems in Assignment #2

6.34 The data are (N, X1, . . . , XN). Given N = n, X1, . . . , Xn are i.i.d. Bernoulli random variables. The pmf of the data may be represented

P(N = n, X1 = x1, . . . , Xn = xn) = P(N = n) P(X1 = …
STAT 611-602
March 8, 2005
Solution to Exam I

1. First of all, E(X1) = (N + 1)/2, which implies that N = 2E(X1) − 1. Now we substitute the sample mean X̄ for E(X1), and so a moment estimator of N is

N̂ = 2X̄ − 1.

It is easily verified that the largest Xi, ca…
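A simulation sketch of the moment estimator (the values N = 50 and n = 1000 are my own assumed example):

```python
import numpy as np

# Simulation sketch (assumed N = 50): X_i uniform on {1, ..., N}, so
# E(X_1) = (N + 1)/2 and the moment estimator is N_hat = 2*Xbar - 1.
rng = np.random.default_rng(7)
N, n = 50, 1000
x = rng.integers(1, N + 1, size=n)   # draws from {1, ..., 50}

N_hat = 2 * x.mean() - 1
print(N_hat)                          # close to 50
```

Repeating this shows N̂ centers on N, though the sample maximum (mentioned next in the solution) is a more natural estimator for this model.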
STAT 611-602
May 26, 2004
Solution to Final Exam

1. The sample mean X̄ is unbiased and hence MSE(X̄) = σ²/n. We have

E(θ̃) = (nθ + mμ)/(n + m),

and hence

MSE(θ̃) = ( (nθ + mμ)/(n + m) − θ )² + Var(θ̃)
        = m²(μ − θ)²/(n + m)² + (nσ² + mσ²)/(n + m)²
        = m²(μ − θ)²/(n + m)² + σ²/(n + m).

No…
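A sketch of the bias-variance trade-off, assuming θ̃ = (nX̄ + mȲ)/(n + m) combines X̄ with a second sample mean centered at some μ (the symbols and numeric values here are my own assumed labels): MSE(θ̃) = m²(μ − θ)²/(n + m)² + σ²/(n + m), against MSE(X̄) = σ²/n.

```python
# Sketch: compare MSE(Xbar) = sigma^2/n with
# MSE(theta_tilde) = m^2*(mu - theta)^2/(n + m)^2 + sigma^2/(n + m)
# under assumed example values.
def mse_xbar(sigma2, n):
    return sigma2 / n

def mse_tilde(sigma2, n, m, theta, mu):
    return m**2 * (mu - theta)**2 / (n + m)**2 + sigma2 / (n + m)

# Small bias: the combined estimator wins.  Large bias: it loses.
print(mse_xbar(4.0, 20))                           # 0.2
print(mse_tilde(4.0, 20, 20, theta=0.0, mu=0.1))   # 0.1025
print(mse_tilde(4.0, 20, 20, theta=0.0, mu=2.0))   # 1.1
```

The comparison makes the point of the problem explicit: pooling helps exactly when the squared bias it introduces is smaller than the variance it removes.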