Stat 461-561: Solutions Quiz 4
Wednesday 11th April 2007
Exercise 1. Let n observations (x1, y1), (x2, y2), ..., (xn, yn) be modelled through a simple linear regression model. We have, for i = 1, ..., n,
    yi = α + β xi + ei
where the errors ei ~ N(0, σ²) are …
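As a numerical aside (not part of the quiz sheet), the least-squares estimates for this model, β̂ = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)² and α̂ = ȳ − β̂ x̄, can be checked with a short numpy sketch; the function and variable names are my own:

```python
import numpy as np

def fit_simple_ols(x, y):
    """Least-squares estimates for y_i = alpha + beta * x_i + e_i."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.sum((x - x.mean()) ** 2)
    beta = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    alpha = y.mean() - beta * x.mean()
    return alpha, beta

# Noiseless check: data generated with alpha = 1, beta = 2 is recovered exactly.
x = np.array([0.0, 1.0, 2.0, 3.0])
alpha, beta = fit_simple_ols(x, 1.0 + 2.0 * x)
```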
Stat 461-561: Quiz 4
Wednesday 9th April 2008
Exercise 1. Consider the following linear regression model where
    y = Xβ + ε
with y = (y1, ..., yn)ᵀ ∈ ℝⁿ, β = (β1, ..., βp)ᵀ ∈ ℝᵖ, X a known matrix of appropriate dimension, and ε ~ N(0, σ² In), where In is the identity …
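In this matrix form the least-squares solution is β̂ = (XᵀX)⁻¹Xᵀy; a minimal numpy check on my own toy data, computed stably via `lstsq`:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true  # noiseless, so least squares recovers beta exactly

# beta_hat = (X^T X)^{-1} X^T y, computed without forming the inverse
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
```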
Stat 461-561: Quiz 3 Solutions
Friday 16th March 2007
Exercise 1. Suppose that x_{1:n} = (x1, ..., xn) is a random sample from a Poisson distribution with unknown mean λ. Two models for the prior distribution of λ are contemplated:
    H0 : π0(λ) = exp(−λ) …
    H1 : π1(λ) …
Stat 461-561: Quiz 3
Wednesday 19th March 2008
Exercise 1. Let X1, X2, ..., Xn be n independent observations from a normal of mean m and variance θ, denoted N(m, θ). We assume that m is known.
Question 1.1: [2 points] Establish the likelihood ratio test …
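As a hedged sketch of where this question leads: with m known, the MLE of the variance is s² = (1/n) Σ(Xi − m)², and the likelihood-ratio statistic works out to −2 log λ = n(s²/σ0² − log(s²/σ0²) − 1). The statistic can be computed numerically; the function and test values below are my own:

```python
import numpy as np

def lrt_variance(x, m, sigma0_sq):
    """-2 log lambda for H0: variance = sigma0_sq, with the mean m known.
    s2 is the MLE of the variance when m is known."""
    x = np.asarray(x, float)
    n = x.size
    s2 = np.mean((x - m) ** 2)
    r = s2 / sigma0_sq
    return n * (r - np.log(r) - 1.0)

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=2.0, size=500)  # true variance is 4
stat_true = lrt_variance(x, 0.0, 4.0)   # H0 true: statistic should be small
stat_wrong = lrt_variance(x, 0.0, 1.0)  # H0 false: statistic should be large
```

Under H0 the statistic is asymptotically chi-square with one degree of freedom.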
Stat 561: Quiz 2
Friday 16th January 2007
Exercise 1. Assume we receive a single observation from the density
    f(x | θ) = θ x^(θ−1) 1_(0,1)(x)
where θ > 1.
State the Neyman-Pearson lemma and choose an associated test statistic to test H0 : θ = θ0 versus H1 : θ = θ1 when …
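For this density the Neyman-Pearson ratio is f(x | θ1)/f(x | θ0) = (θ1/θ0) x^(θ1−θ0), which for θ1 > θ0 is increasing in x, so x itself is a natural test statistic. A small numerical check with my own function names and example values:

```python
def np_ratio(x, theta0, theta1):
    """Neyman-Pearson likelihood ratio f(x|theta1) / f(x|theta0)
    for f(x|theta) = theta * x**(theta - 1) on (0, 1)."""
    assert 0.0 < x < 1.0
    return (theta1 / theta0) * x ** (theta1 - theta0)

# For theta1 > theta0 the ratio is increasing in x, so rejecting for
# large x is equivalent to rejecting for a large likelihood ratio.
r_small = np_ratio(0.2, theta0=2.0, theta1=5.0)
r_large = np_ratio(0.9, theta0=2.0, theta1=5.0)
```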
Stat 461-561: Quiz 2
Friday 29th January 2008
Exercise 1. Let Xi ~ i.i.d. g(x) and assume that we want to model these data using the parametrized family of probability density functions (pdf) {f(x | θ); θ ∈ Θ}. Let θ̂n be the Maximum Likelihood Estimate (MLE) …
Stat 461-561: Quiz 1
Monday 29th January 2007
Exercise 1. Suppose X1, X2, ..., Xn are independent identically distributed from an exponential distribution f(x | θ*); that is, θ* is the true parameter. The exponential distribution admits the following density …
Stat 461-561: Quiz 1
Monday 28th January 2008
Exercise 1. Let Xi ~ i.i.d. f(x | θ*) where θ* ∈ Θ ⊆ ℝ. Let θ̂n be the Maximum Likelihood Estimate (MLE) for n observations; that is
    θ̂n = arg max_{θ ∈ Θ} Σ_{i=1}^n log f(Xi | θ).
Under suitable regularity assumptions, we have
    √n (θ̂n − …
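The truncated display presumably continues with the usual limit √n(θ̂n − θ*) ⇒ N(0, I(θ*)⁻¹). A simulation sketch with my own choice of model, an Exponential with rate θ, where θ̂n = 1/X̄ and I(θ) = 1/θ², so the limiting standard deviation is θ:

```python
import numpy as np

rng = np.random.default_rng(42)
theta, n, reps = 2.0, 2000, 2000

# MLE of the rate of an Exponential(theta) is 1 / mean; Fisher information
# is 1/theta^2, so sqrt(n) * (theta_hat - theta) is approximately N(0, theta^2).
samples = rng.exponential(scale=1.0 / theta, size=(reps, n))
theta_hat = 1.0 / samples.mean(axis=1)
z = np.sqrt(n) * (theta_hat - theta)
emp_sd = z.std()
```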
Stat 461-561: Exercises 6
Exercise 2. Changepoint detection.
(a) Derive the Gibbs sampler to sample from the posterior distribution
    π(k, λ, μ, b1, b2 | x_{1:m}).
We have
    π(k, λ, μ, b1, b2 | x_{1:m}) ∝ [ Π_{i=1}^{k} λ^{xi} exp(−λ) / xi! ] × (b1^{a1} / Γ(a1)) λ^{a1−1} exp(−b1 λ) exp(−b1 c1) × …
and we use the …
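As a companion to the derivation, here is a runnable sketch of the resulting Gibbs sweep under one common version of this model: xi ~ Poisson(λ) for i ≤ k, xi ~ Poisson(μ) for i > k, λ | b1 ~ Gamma(a1, b1), μ | b2 ~ Gamma(a2, b2), Gamma(ci, di) hyperpriors on b1, b2, and k uniform. All hyperparameter values and the synthetic data are my own choices, not from the exercise sheet:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic counts with a changepoint at k = 30: Poisson rate 2, then 8.
m, k_true = 60, 30
x = np.concatenate([rng.poisson(2.0, k_true), rng.poisson(8.0, m - k_true)])

# Hyperparameters (illustrative choices only).
a1 = a2 = 2.0
c1 = c2 = 2.0
d1 = d2 = 1.0

lam, mu, b1, b2 = 1.0, 1.0, 1.0, 1.0
cs = np.concatenate([[0.0], np.cumsum(x)])  # cs[k] = x_1 + ... + x_k
k_draws = []

for it in range(3000):
    # k | rest: discrete full conditional on {1, ..., m-1}, in log space
    ks = np.arange(1, m)
    logp = (cs[ks] * np.log(lam) - ks * lam
            + (cs[m] - cs[ks]) * np.log(mu) - (m - ks) * mu)
    logp -= logp.max()
    p = np.exp(logp)
    p /= p.sum()
    k = rng.choice(ks, p=p)
    # lambda, mu | rest: conjugate Gamma updates
    lam = rng.gamma(a1 + cs[k], 1.0 / (k + b1))
    mu = rng.gamma(a2 + cs[m] - cs[k], 1.0 / (m - k + b2))
    # b1, b2 | rest: conjugate Gamma updates for the rate hyperparameters
    b1 = rng.gamma(a1 + c1, 1.0 / (lam + d1))
    b2 = rng.gamma(a2 + c2, 1.0 / (mu + d2))
    if it >= 500:
        k_draws.append(int(k))

k_mode = int(np.bincount(k_draws).argmax())  # posterior mode of k
```

With rates this well separated, the posterior of k concentrates sharply around the true changepoint.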
Stat 461-561: Exercises 6
Exercise 1. Bayesian linear model.
Derive the posterior distribution, predictive distribution and marginal likelihood for the Bayesian linear model with normal likelihood and conjugate normal-inverse-gamma prior.
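For reference, under the prior β | σ² ~ N(μ0, σ² Λ0⁻¹), σ² ~ IG(a0, b0), the posterior updates one should obtain are Λn = XᵀX + Λ0, μn = Λn⁻¹(Λ0 μ0 + Xᵀy), an = a0 + n/2, bn = b0 + ½(yᵀy + μ0ᵀΛ0μ0 − μnᵀΛnμn). A numerical sketch of these updates; the sanity check against least squares is my own:

```python
import numpy as np

def nig_posterior(X, y, mu0, Lam0, a0, b0):
    """Posterior parameters for the normal-inverse-gamma conjugate prior:
    beta | sigma^2 ~ N(mu0, sigma^2 * inv(Lam0)), sigma^2 ~ IG(a0, b0)."""
    n = len(y)
    Lam_n = X.T @ X + Lam0
    mu_n = np.linalg.solve(Lam_n, Lam0 @ mu0 + X.T @ y)
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * (y @ y + mu0 @ Lam0 @ mu0 - mu_n @ Lam_n @ mu_n)
    return mu_n, Lam_n, a_n, b_n

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
beta_true = np.array([1.0, -1.0])
y = X @ beta_true + 0.1 * rng.normal(size=100)

# With a very diffuse prior, the posterior mean is close to least squares.
mu_n, Lam_n, a_n, b_n = nig_posterior(X, y, np.zeros(2), 1e-8 * np.eye(2), 1.0, 1.0)
```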
Exercise 2. Changepoint detection …
Stat 461-561: Exercises 5
In Casella & Berger, Exercises 7.23, 7.24, 7.25 (Week 7) and 8.10, 8.11, 8.53 and 8.54 (Week 8).
Exercise 1 (Week 7). Let θ be a random variable in (0, ∞) with density
    π(θ) ∝ θ^(α−1) exp(−β θ)
where α, β ∈ (1, ∞).
Calculate the mean and mode of θ.
Stat 461-561: Exercises 5
Remarks: Exercises 8.10 & 8.11 in C&B make implicit use of the incomplete Gamma function. No such question will be given at the exam.
Exercise C&B 8.53.
(a). We have
    π(θ) = π(H0) π(θ | H0) + π(H1) π(θ | H1)
         = (1/2) δ0(θ) + (1/2) N(θ; 0, σ²)
    = …
Stat 461-561: Exercises 5
In Casella & Berger, Exercises 7.23, 7.24, 7.25 (Week 7) and 8.10, 8.11, 8.53 and 8.54 (Week 8).
Remark: There are several conventions available for parameterising Gamma and inverse Gamma distributions. I have adopted here the one …
Stat 461-561: Solutions Exercises 4
Exercise 10.31ae
(a) The null hypothesis is H0 : p1 = p2, which we can write as H0 : p1 − p2 = 0. We have
    p̂1 = S1 / n1,  p̂2 = S2 / n2,
which are unbiased estimates of p1 and p2 (under H0 and H1). Moreover, under H0, where p1 …
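The statistic this construction leads to divides p̂1 − p̂2 by its standard error computed with the pooled estimate of the common p under H0; a small sketch with my own function name and example counts:

```python
from math import sqrt

def two_prop_z(s1, n1, s2, n2):
    """z statistic for H0: p1 = p2, using the pooled estimate of p under H0."""
    p1_hat, p2_hat = s1 / n1, s2 / n2
    p_pool = (s1 + s2) / (n1 + n2)  # MLE of the common p under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1_hat - p2_hat) / se

z_equal = two_prop_z(50, 100, 50, 100)  # identical proportions -> z = 0
z_diff = two_prop_z(70, 100, 40, 100)   # clearly different proportions
```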
Stat 461-561: Solutions Exercises 3
Exercise 8.3
Let y = Σ_{i=1}^m yi. The likelihood is given by
    L(θ | y) = θ^y (1 − θ)^(m−y)
so the log-likelihood is
    l(θ | y) = y log(θ) + (m − y) log(1 − θ).
We want to compute
    λ(y) = sup_{θ ∈ Θ0} L(θ | y) / sup_{θ ∈ Θ} L(θ | y).
The unconstrained …
Stat 461-561: Solutions Exercises 2
January 24, 2007
Exercise 7.20
Let
    β̂1 = Σ_{i=1}^n Yi / Σ_{i=1}^n xi.
So we have
    E[β̂1] = Σ_{i=1}^n E[Yi] / Σ_{i=1}^n xi = β Σ_{i=1}^n xi / Σ_{i=1}^n xi = β.
We have
    var(β̂1) = (Σ_{i=1}^n xi)^(−2) Σ_{i=1}^n var[Yi]
where
    var[Yi] = σ²
so
    var(β̂1) = n σ² / (Σ_{i=1}^n xi)².
var…
Stat 461-561 Exercises 1.
Exercise 5.12. We have Xi ~ i.i.d. N(0, 1); then we have
    Z1 = (1/n) Σ_{i=1}^n Xi ~ N(0, 1/n)
and Y1 = |Z1|, whereas
    E[Y2] = (1/n) Σ_{i=1}^n E[|Xi|].
For any variable Z ~ N(0, σ²), we have
    E[|Z|] = 2 ∫_0^∞ (z / (σ √(2π))) exp(−z²/(2σ²)) dz = (2 / (σ √(2π))) σ² = σ √(2/π).
Thus …
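The identity E[|Z|] = σ √(2/π) is easy to check by simulation; the sample size and seed below are my own choices:

```python
import numpy as np

rng = np.random.default_rng(9)
sigma = 1.5
z = rng.normal(0.0, sigma, size=200000)

emp = np.abs(z).mean()                 # Monte Carlo estimate of E|Z|
exact = sigma * np.sqrt(2.0 / np.pi)   # closed form derived above
```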
STAT461-561: Delta Method
AD
January 2008

Delta Method
Assume first that θ ∈ ℝ and that you have
    θ̂n →P θ
and
    √n (θ̂n − θ) ⇒ N(0, σ²(θ)).
Then we have
    √n …
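The slide's statement can be illustrated numerically. With my own choice g = log and X̄ the mean of Exponential(mean θ) data, σ²(θ) = θ² and g′(θ) = 1/θ, so the delta method gives √n (log X̄ − log θ) ⇒ N(0, 1):

```python
import numpy as np

rng = np.random.default_rng(7)
theta, n, reps = 3.0, 2000, 2000

# Xbar is asymptotically N(theta, theta^2 / n) for Exponential(mean theta).
# Delta method with g = log and g'(theta) = 1/theta gives
# sqrt(n) * (log(Xbar) - log(theta)) => N(0, 1).
xbar = rng.exponential(scale=theta, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (np.log(xbar) - np.log(theta))
```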
Notes on Consistency, Asymptotic Normality
Assume you have i.i.d. data Xi ~ f(x | θ) and you want to come up with an estimate θ̂n of θ. You could obviously try to maximize the log-likelihood of the observations, but alternatively you could consider the following …
Lecture Stat 461-561
Wald, Rao and Likelihood Ratio Tests
AD
February 2008

Introduction
Wald test
Rao test
Likelihood ratio test

Introduction
We want to test H0 : θ = θ0 against H1 : θ ≠ θ0 using the log-likelihood …
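For a concrete Bernoulli(p) example (my own, not from the slides), testing H0 : p = p0 gives Wald = n(p̂ − p0)²/(p̂(1 − p̂)), Rao = n(p̂ − p0)²/(p0(1 − p0)), and LRT = 2(l(p̂) − l(p0)); all three are asymptotically χ² with one degree of freedom under H0:

```python
import numpy as np

def trio(x, p0):
    """Wald, Rao (score), and likelihood-ratio statistics for
    H0: p = p0 with i.i.d. Bernoulli(p) data."""
    n, p_hat = len(x), np.mean(x)
    wald = n * (p_hat - p0) ** 2 / (p_hat * (1 - p_hat))
    rao = n * (p_hat - p0) ** 2 / (p0 * (1 - p0))
    lrt = 2 * n * (p_hat * np.log(p_hat / p0)
                   + (1 - p_hat) * np.log((1 - p_hat) / (1 - p0)))
    return wald, rao, lrt

x = np.array([1, 1, 1, 0, 1, 0, 1, 1, 0, 1] * 10)  # p_hat = 0.7, n = 100
wald, rao, lrt = trio(x, p0=0.6)
```

The three statistics differ in finite samples but agree to first order near θ0.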
Lecture Stat 461-561
Review Pseudo Likelihood
AD
April 2007

Pseudo Likelihood
In many applications, the log-likelihood l(θ; y_{1:n}) is very complex to compute. …
Lecture Stat 461-561
Review EM
AD
April 2007

Expectation-Maximization Algorithm
Although the EM algorithm does not apply to all models, it is powerful and elegant: one of the most popular algorithms i…
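A minimal instance of EM, a two-component Gaussian mixture in one dimension; the key property to verify is that the observed-data log-likelihood never decreases across iterations. All names and the synthetic data are my own:

```python
import numpy as np

def em_gmm2(x, iters=50):
    """EM for a two-component 1-d Gaussian mixture: weights, means and
    variances are all updated. Returns the means and log-likelihood trace."""
    x = np.asarray(x, float)
    w = 0.5
    mu = np.array([x.min(), x.max()])       # crude but serviceable init
    var = np.array([x.var(), x.var()])
    ll_trace = []
    for _ in range(iters):
        # E-step: responsibilities of the two components
        pdf = (np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
               / np.sqrt(2 * np.pi * var))
        joint = np.array([w, 1 - w]) * pdf
        ll_trace.append(np.log(joint.sum(axis=1)).sum())
        r = joint / joint.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates
        nk = r.sum(axis=0)
        w = nk[0] / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return mu, ll_trace

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])
mu, ll = em_gmm2(x)
```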
Lecture Stat 461-561
Maximum Likelihood Estimation
A.D.
January 2008

Maximum Likelihood Estimation
Invariance
Consistency
Efficiency
Nuisance Parameters

Parametric Inference
Let f(x | θ) denote the joi…
Lecture Stat 461-561
Maximum Likelihood Estimation
A.D.
January 2007

Maximum Likelihood Estimation
Invariance
…
Lecture Stat 461-561
M-Estimation
AD
February 2008

Introduction & Motivation
In most applications, we have Xi ~ i.i.d. g and we obtain an estimate θ̂ by minimizing a suitable cost function; e.g. the mean corresponds to minimizing Σ_{i=1}^n (xi − θ)² …
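Replacing the quadratic cost with Huber's ρ gives a robust location M-estimate; a sketch via the standard iteratively-reweighted-averaging fixed point (the cutoff c = 1.345 and the data are my own choices):

```python
import numpy as np

def huber_location(x, c=1.345, iters=100):
    """M-estimate of location for the Huber rho function, computed by
    iteratively reweighted averaging with weights psi(r) / r."""
    x = np.asarray(x, float)
    theta = np.median(x)  # robust starting point
    for _ in range(iters):
        r = x - theta
        w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))
        theta = np.sum(w * x) / np.sum(w)
    return theta

clean = np.array([0.1, -0.2, 0.05, 0.0, -0.1, 0.15])
contaminated = np.concatenate([clean, [50.0]])  # one gross outlier
est = huber_location(contaminated)
```

On clean data within the cutoff all weights are 1 and the estimate is the sample mean; with the outlier, the estimate stays near the bulk of the data instead of being dragged toward the contaminated mean.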
Lecture Stat 461-561
M-Estimation
AD
February 2007

Introduction & Motivation
In most applications, we have Xi ~ i.i.d. g and we obtain an estimate θ̂ by minimizing a suitable cost function; e.g. …
Lecture Stat 461-561
Markov Chain Monte Carlo
AD
March 2008

Introduction
Bayesian model: likelihood f(x | θ) and prior distribution π(θ).
Bayesian inference is based on the posterior distribution
    π(θ | x) = π(θ) f(x | θ) / π(x)
where
    π(x) = ∫ π(θ) f(x | θ) dθ …
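A minimal random-walk Metropolis-Hastings sketch targeting such a posterior, with my own toy model (N(θ, 1) likelihood, N(0, 10²) prior) so that the exact posterior mean is available for comparison:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy data: N(theta, 1) likelihood with a N(0, 10^2) prior on theta.
x = rng.normal(2.0, 1.0, size=50)

def log_post(theta):
    # log posterior up to an additive constant
    return -0.5 * np.sum((x - theta) ** 2) - 0.5 * theta ** 2 / 100.0

# Random-walk Metropolis-Hastings with Gaussian proposals
theta, draws = 0.0, []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + 0.5 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
        theta, lp = prop, lp_prop
    draws.append(theta)
draws = np.array(draws[5000:])  # discard burn-in

# Conjugacy gives the exact posterior mean for comparison.
post_mean_exact = x.sum() / (len(x) + 1.0 / 100.0)
```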
Lecture Stat 461-561
Markov Chain Monte Carlo
AD
March 2007

Introduction
Bayesian model: likelihood f(x | θ) and prior distribution π(θ). …