Theory of Unbiased Estimators
Advantages of unbiased estimators
1) They don't consistently over- or underestimate the parameter.
2) Requiring unbiasedness rules out some ridiculous estimators, such as the constant estimator T ≡ 0, which ignores the data entirely.
For the gamma density, substituting β̂ = x̄/α into the log-likelihood l(α, β | x) gives the profile log-likelihood
−n log Γ(α) + nα log α + α(t_n − n − n log x̄) − t_n,
where t_n = Σ_{i=1}^n log x_i. It can be verified empirically that, for typical data, this function of α has a unique maximizer.
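The empirical check can be sketched in code. This is a sketch, not part of the original notes: assuming a Gamma(α, β) sample (illustrative values α = 2, β = 3), it evaluates the profile log-likelihood l_p(α) = −n log Γ(α) + nα log α + α(t_n − n − n log x̄) − t_n on a grid and confirms the maximizer lands near the true shape.

```python
import math
import random

random.seed(0)

# Simulated Gamma(shape=2, scale=3) data; values are illustrative assumptions.
alpha_true, beta_true, n = 2.0, 3.0, 5000
x = [random.gammavariate(alpha_true, beta_true) for _ in range(n)]

xbar = sum(x) / n
t_n = sum(math.log(xi) for xi in x)   # t_n = sum of log x_i

def profile_loglik(a):
    """Profile log-likelihood after substituting beta_hat = xbar / alpha."""
    return (-n * math.lgamma(a) + n * a * math.log(a)
            + a * (t_n - n - n * math.log(xbar)) - t_n)

# Grid search: the maximizer should land near the true shape parameter.
grid = [0.5 + 0.01 * k for k in range(400)]   # alpha in [0.5, 4.49]
alpha_hat = max(grid, key=profile_loglik)
```

With 5000 observations the grid maximizer sits close to α = 2, which is the kind of empirical verification the notes allude to.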
Parametric models
Suppose we have an experiment with a sample space. Let X = (X1, . . . , Xn) be a random vector defined on the sample space. The outcome of the experiment is a realization x = (x1, . . . , xn) of X.
Advantages of method of moments estimators (MMEs)
a. They are often simple to derive.
b. They are consistent estimators when θ1, . . . , θk are continuous functions of the moments μ1, . . . , μk. (More on this later.)
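As an illustration (a sketch; the Gamma(shape, scale) model is chosen here for concreteness, not taken from the notes): the gamma mean is shape·scale and the variance is shape·scale², so shape = mean²/variance and scale = variance/mean are continuous functions of the first two moments, and the plug-in MMEs converge to the truth.

```python
import random
import statistics

random.seed(1)

# Gamma(shape, scale): mean = shape*scale, variance = shape*scale**2, so
# shape = mean**2/variance and scale = variance/mean -- continuous functions
# of the first two moments, hence the MMEs below are consistent.
shape_true, scale_true, n = 3.0, 2.0, 20000
x = [random.gammavariate(shape_true, scale_true) for _ in range(n)]

xbar = statistics.fmean(x)       # first sample moment
s2 = statistics.pvariance(x)     # second central sample moment

shape_mme = xbar ** 2 / s2
scale_mme = s2 / xbar
```

With n = 20000 the two estimates fall close to the true values 3 and 2.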
Partial Solution to Assignment #7
7.37 The joint density of the data may be written
f(x; θ) = (2θ)^(−n) Π_{i=1}^n I_(−θ,θ)(x_i).
Defining Y = max_{1≤i≤n} |X_i|, the likelihood is thus
L(θ) = (2θ)^(−n) I_[y,∞)(θ),
and we see that L is decreasing on [y, ∞), so the MLE of θ is Y.
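A simulation sketch, assuming the model is Uniform(−θ, θ) so that Y = max|X_i|: the log-likelihood −n log(2θ) on [y, ∞) is decreasing, hence maximized at θ = y.

```python
import math
import random

random.seed(2)

# Uniform(-theta, theta) sample; the MLE is y = max|x_i| because the
# log-likelihood -n*log(2*theta) is decreasing on [y, infinity).
theta_true, n = 4.0, 1000
x = [random.uniform(-theta_true, theta_true) for _ in range(n)]
y = max(abs(xi) for xi in x)
theta_mle = y

def loglik(theta):
    # Zero likelihood (log-likelihood -inf) when theta < y.
    return -n * math.log(2 * theta) if theta >= y else float("-inf")
```

The simulated MLE sits just below θ = 4, reflecting the familiar downward bias of maximum-type estimators for boundary parameters.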
Solution to Assignment #8
8.14 The test statistic, S = Σ_{i=1}^n X_i, has approximately a normal distribution with mean np and variance np(1 − p). To make the test have size α, we need to reject H0 when s
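The rejection rule can be sketched as follows. The excerpt truncates before the inequality, so the one-sided direction H1: p > p0 and the names p0 and z_alpha below are assumptions made for illustration.

```python
import math

# One-sided normal-approximation test of H0: p = p0 vs H1: p > p0
# (direction assumed; the excerpt truncates before stating it).
def reject_h0(s, n, p0, z_alpha):
    """Reject when (s - n*p0)/sqrt(n*p0*(1-p0)) exceeds the critical value."""
    z = (s - n * p0) / math.sqrt(n * p0 * (1 - p0))
    return z > z_alpha

# alpha = 0.05 gives z_alpha ~ 1.645; with s = 60 of n = 100 successes, z = 2.
decision = reject_h0(s=60, n=100, p0=0.5, z_alpha=1.645)
```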
Partial Solution to Assignment #6
7.46(b) The likelihood function is
L(θ) = Π_{i=1}^3 (1/θ) I_(θ,2θ)(x_i)
     = θ^(−3) I_(0, x_(1)](θ) I_[x_(3)/2, ∞)(θ)
     = θ^(−3) I_[x_(3)/2, x_(1)](θ),
which is maximized at x_(3)/2. So, the MLE of θ is x_(3)/2.
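A quick numerical sketch of this solution, assuming X1, X2, X3 ~ Uniform(θ, 2θ): the support of L(θ) is [x_(3)/2, x_(1)], and θ^(−3) is decreasing there, so the left endpoint is the maximizer.

```python
import random

random.seed(3)

# Three observations from Uniform(theta, 2*theta).  L(theta) = theta**-3 on
# [x(3)/2, x(1)] and is decreasing there, so the MLE is x(3)/2.
theta_true = 5.0
x = sorted(random.uniform(theta_true, 2 * theta_true) for _ in range(3))

lower, upper = x[2] / 2, x[0]    # interval on which the likelihood is positive
theta_mle = lower

def likelihood(theta):
    return theta ** -3 if lower <= theta <= upper else 0.0
```

Note that the interval [x_(3)/2, x_(1)] always contains the true θ, since θ < x_(1) and x_(3) < 2θ.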
STAT 611602
February 5, 2003
Solutions to Selected Problems from Assignment #1
6.9(c) As a function of θ, f(x | θ)/f(y | θ) is proportional to
Π_{i=1}^n (1 + e^{−(y_i−θ)})² / (1 + e^{−(x_i−θ)})².
The last quantity is
STAT 611
Solutions to Selected Problems from Assignment #9
8.5 c. Under H0, T has the same distribution as
Σ_{i=1}^n log( Y_i / min_i Y_i ),
where Y1, . . . , Yn are iid from density f(y) = y^(−2) I_[1,∞)(y).
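A Monte Carlo sketch of this null distribution. The check E[T] = n − 1 below uses the standard fact that if f(y) = y^(−2) on [1, ∞) then log Y_i ~ Exp(1), so the centered sum is a sum of n − 1 unit-exponential spacings.

```python
import math
import random
import statistics

random.seed(4)

# f(y) = y**-2 on [1, inf) has CDF 1 - 1/y, so Y = 1/(1-U) with U ~ U(0,1).
# log Y_i ~ Exp(1), so T = sum log(Y_i / min_j Y_j) is a sum of n-1 unit
# exponential spacings, and E[T] = n - 1.
n, reps = 10, 20000

def draw_t():
    y = [1.0 / (1.0 - random.random()) for _ in range(n)]
    m = min(y)
    return sum(math.log(yi / m) for yi in y)

t_mean = statistics.fmean(draw_t() for _ in range(reps))
```

With n = 10, the Monte Carlo mean of T settles near 9.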
STAT 611602
Mathematical Statistics
Instructor: Jeff Hart
1. The basic inference problem
1. Population: what might have been observed
2. Sample: what was observed
3. Probability model: probability distribution
3. Maximum likelihood estimation
Observe X = x. The likelihood function is L(θ | x) = f(x | θ). The estimate θ̂ = θ̂(x) is called a maximum likelihood estimate (MLE) of θ if L(θ̂ | x) = max_θ L(θ | x). θ̂(X) is called a maximum likelihood estimator.
Consider the case where X = (X1, . . . , Xn) is a continuous random vector and f(· | θ) is continuous. Let ε be a small positive number. Then
P(X1 ∈ (x1 − ε, x1 + ε), . . . , Xn ∈ (xn − ε, xn + ε)) ≈ (2ε)^n f(x | θ).
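The approximation is easy to check numerically. A sketch for the n = 1 case with a standard normal X (values of x0 and ε chosen for illustration): P(X ∈ (x0 − ε, x0 + ε)) ≈ 2ε f(x0).

```python
import math
import random

random.seed(5)

# n = 1 case with X ~ N(0,1): P(X in (x0 - eps, x0 + eps)) ~ 2*eps*f(x0).
x0, eps, reps = 0.5, 0.05, 200000
f_x0 = math.exp(-x0 ** 2 / 2) / math.sqrt(2 * math.pi)   # normal density at x0

hits = sum(1 for _ in range(reps) if abs(random.gauss(0.0, 1.0) - x0) < eps)
prob_mc = hits / reps          # Monte Carlo estimate of the interval probability
approx = 2 * eps * f_x0        # the (2*eps)**n * f(x) approximation, n = 1
```

The Monte Carlo probability and the density-based approximation agree to within sampling error, which is why maximizing f(x | θ) over θ is a sensible estimation principle.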
Solution to Assignment #10
9.1 We have
P(L(X) ≤ θ ≤ U(X)) = 1 − P({L(X) > θ} ∪ {U(X) < θ}).
Because L(x) ≤ U(x) for all x, the events {L(X) > θ} and {U(X) < θ} are mutually exclusive. Therefore,
P({L(X) > θ} ∪ {U(X) < θ}) = P(L(X) > θ) + P(U(X) < θ).
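A Monte Carlo sketch using the familiar z-interval X̄ ± 1.96σ/√n as a concrete (L, U) (an assumption made for illustration): the two miss events are disjoint, so coverage = 1 − P(L > θ) − P(U < θ), and here it is about 0.95.

```python
import math
import random

random.seed(6)

# z-interval: L = Xbar - 1.96*sigma/sqrt(n), U = Xbar + 1.96*sigma/sqrt(n).
# {L > theta} and {U < theta} are disjoint, so
# coverage = 1 - P(L > theta) - P(U < theta).
theta, sigma, n, reps = 0.0, 1.0, 25, 100000
half = 1.96 * sigma / math.sqrt(n)

miss_low = miss_high = 0
for _ in range(reps):
    xbar = random.gauss(theta, sigma / math.sqrt(n))
    if xbar - half > theta:       # event {L(X) > theta}
        miss_low += 1
    elif xbar + half < theta:     # event {U(X) < theta}
        miss_high += 1

coverage = 1 - (miss_low + miss_high) / reps
```

Each miss probability comes out near 0.025, matching the additivity argument in the solution.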
Example 11 (Jointly sufficient statistics) Suppose we observe X = (X1, . . . , Xn), where
Xi = ρXi−1 + Zi,  i = 2, 3, . . . , n.
The quantity ρ is an unknown parameter such that |ρ| < 1. Z2, . . . , Zn are iid
Example 17 Let the data be X1, . . . , Xr, whose joint distribution is multinomial with sample size n and parameters θ = (θ1, . . . , θr), where
Θ = {(θ1, . . . , θr) : 0 < θi < 1, Σ_{i=1}^r θi = 1}.
The likelihood
STAT 611602
March 26, 2008
Solution to Practice Problems for Exam II
1. (a) We have
log f(x | θ) = (1/2) log θ − θ(x − 1)²/(2x) − (1/2) log(2πx³),
∂ log f(x | θ)/∂θ = 1/(2θ) − (x − 1)²/(2x),
and
∂² log f(x | θ)/∂θ² = −1/(2θ²).
It follows that the Fisher information in a single observation is 1/(2θ²).
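The second-derivative computation can be verified by finite differences. The sketch below assumes log f(x | θ) = (1/2) log θ − θ(x − 1)²/(2x) − (1/2) log(2πx³) and checks that ∂²/∂θ² log f = −1/(2θ²) regardless of x.

```python
import math

# Log-density as in the solution above.
def logf(x, theta):
    return (0.5 * math.log(theta) - theta * (x - 1) ** 2 / (2 * x)
            - 0.5 * math.log(2 * math.pi * x ** 3))

def second_diff(x, theta, h=1e-3):
    """Central second difference approximating d^2/dtheta^2 log f."""
    return (logf(x, theta + h) - 2 * logf(x, theta) + logf(x, theta - h)) / h ** 2

theta0 = 1.7                      # arbitrary test point
exact = -1 / (2 * theta0 ** 2)
```

Because the second derivative is free of x, the observed and expected information coincide here; evaluating second_diff at two different x values gives the same answer.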
Solutions to Selected Problems in Assignment #2
6.34 The data are (N, X1 , . . . , XN ). Given N = n, X1 , . . . , Xn are i.i.d. Bernoulli random
variables. The pmf of the data may be represented
P (N
STAT 611602
March 8, 2005
Solution to Exam I
1. First of all, E(X1) = (N + 1)/2, which implies that N = 2E(X1) − 1. Now we substitute the sample mean X̄ for E(X1), and so a moment estimator of N is 2X̄ − 1.
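A simulation sketch of this moment estimator, assuming X1, . . . , Xn are uniform on {1, . . . , N}.

```python
import random
import statistics

random.seed(7)

# X uniform on {1, ..., N}: E(X) = (N+1)/2, so the moment estimator solves
# Xbar = (N+1)/2, giving N_hat = 2*Xbar - 1.
N_true, n = 50, 10000
x = [random.randint(1, N_true) for _ in range(n)]
n_hat = 2 * statistics.fmean(x) - 1
```

With 10000 draws the estimate lands close to the true N = 50.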
STAT 611602
May 26, 2004
Solution to Final Exam
1. The sample mean X̄ is unbiased and hence MSE(X̄) = σ²/n. We have
E(θ̂) = (nμ + mν)/(n + m),
and hence
MSE(θ̂) = ((nμ + mν)/(n + m) − μ)² + Var(θ̂)
        = m²(ν − μ)²/(n + m)² + (nσ² + mτ²)/(n + m)².
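A Monte Carlo sketch of this MSE decomposition. The setup is an assumption for illustration: a pooled estimator θ̂ = (nX̄ + mȲ)/(n + m) of μ, with X̄ ~ N(μ, σ²/n) and Ȳ ~ N(ν, τ²/m), so that MSE(θ̂) = m²(ν − μ)²/(n + m)² + (nσ² + mτ²)/(n + m)².

```python
import math
import random
import statistics

random.seed(8)

# theta_hat = (n*Xbar + m*Ybar)/(n+m) as an estimator of mu; its MSE is
# bias^2 + variance = m^2*(nu-mu)^2/(n+m)^2 + (n*sigma^2 + m*tau^2)/(n+m)^2.
mu, nu, sigma, tau = 1.0, 2.0, 1.0, 1.5
n, m, reps = 20, 10, 40000

theory = (m ** 2 * (nu - mu) ** 2 + n * sigma ** 2 + m * tau ** 2) / (n + m) ** 2

sq_errors = []
for _ in range(reps):
    xbar = random.gauss(mu, sigma / math.sqrt(n))
    ybar = random.gauss(nu, tau / math.sqrt(m))
    theta_hat = (n * xbar + m * ybar) / (n + m)
    sq_errors.append((theta_hat - mu) ** 2)

mse_mc = statistics.fmean(sq_errors)   # Monte Carlo MSE
```

The simulated MSE matches the bias-squared-plus-variance formula to within Monte Carlo error.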