Stat 461-561: Solutions Quiz 4
Wednesday 11th April 2007
Exercise 1. Let $n$ observations $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$ be modelled
through a simple linear regression model. We have for $i$
Stat 461-561: Quiz 4
Wednesday 9th April 2008
Exercise 1. Consider the following linear regression model where
$$y = X\beta + \varepsilon$$
with $y = (y_1, \ldots, y_n)^T \in \mathbb{R}^n$, $\beta = (\beta_1, \ldots, \beta_p)^T \in \mathbb{R}^p$, and $X$ a known matrix of appropriate dimension.
Stat 461-561: Quiz 3 Solutions
Friday 16th March 2007
Exercise 1. Suppose that $x_{1:n} = (x_1, \ldots, x_n)$ is a random sample from a
Poisson distribution with unknown mean $\lambda$. Two models for the prior distribution
Stat 461-561: Quiz 3
Wednesday 19th March 2008
Exercise 1. Let $X_1, X_2, \ldots, X_n$ be $n$ independent observations from a normal
distribution of mean $m$ and variance $\sigma^2$, denoted $\mathcal{N}(m, \sigma^2)$. We assume that $m$ is known.
Question
Stat 561: Quiz 2
Friday 16th January 2007
Exercise 1.
Assume we receive a single observation from the density
$$f(x \mid \theta) = \theta\, x^{\theta - 1}\, 1_{(0,1)}(x)$$
where $\theta > 1$.
State the Neyman-Pearson lemma and choose an associated
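For reference, a standard statement of the lemma for simple hypotheses is the following (the exact wording in the course text may differ):

```latex
% Neyman--Pearson lemma (simple vs. simple hypotheses)
\textbf{Lemma.} To test $H_0 : \theta = \theta_0$ against $H_1 : \theta = \theta_1$,
the test rejecting $H_0$ when
\[
  \frac{f(x \mid \theta_1)}{f(x \mid \theta_0)} > k
\]
for some $k \geq 0$ is a most powerful test of its size, and every most
powerful level-$\alpha$ test is of this form (almost everywhere).
```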
Stat 461-561: Quiz 2
Friday 29th January 2008
Exercise 1. Let $X_i \overset{\text{i.i.d.}}{\sim} g(x)$ and assume that we want to model these data
using the parametrized family of probability density functions (pdf) $\{f(x \mid \theta)\}$
Stat 461-561: Quiz 1
Monday 29th January 2007
Exercise 1. Suppose $X_1, X_2, \ldots, X_n$ are independent identically distributed
from an exponential distribution $f(x \mid \theta)$, where $\theta$ is the true parameter. The
Stat 461-561: Quiz 1
Monday 28th January 2008
Exercise 1. Let $X_i \overset{\text{i.i.d.}}{\sim} f(x \mid \theta)$ where $\theta \in \mathbb{R}$. Let $\hat\theta_n$ be the Maximum
Likelihood Estimate (MLE) for $n$ observations; that is
$$\hat\theta_n = \arg\max_{\theta \in \mathbb{R}} \sum_{i=1}^{n} \log f(X_i \mid \theta)$$
Stat 461-561: Exercises 6
Exercise 2 Changepoint detection.
(a) Derive the Gibbs sampler to sample from the posterior distribution
$\pi(k, \lambda, \phi, b_1, b_2 \mid x_{1:m})$.
We have
$$\pi(k, \lambda, \phi, b_1, b_2 \mid x_{1:m}) \propto \prod_{i=1}^{k} \cdots$$
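A minimal Gibbs sketch for this exercise, assuming the standard Poisson changepoint model ($x_i \sim \text{Poisson}(\lambda)$ for $i \le k$, $x_i \sim \text{Poisson}(\phi)$ for $i > k$, Gamma$(a_1, b_1)$ and Gamma$(a_2, b_2)$ priors on the two rates, and a uniform prior on $k$); the hyperparameter values and variable names are assumptions, since the derivation is truncated here:

```python
import math
import random

def gibbs_changepoint(x, a1=2.0, b1=1.0, a2=2.0, b2=1.0,
                      n_iter=2000, seed=0):
    """Gibbs sampler for a Poisson changepoint model (sketch)."""
    rng = random.Random(seed)
    m = len(x)
    k = m // 2  # initial changepoint
    samples = []
    for _ in range(n_iter):
        # lambda | k, x ~ Gamma(a1 + sum_{i<=k} x_i, rate b1 + k)
        lam = rng.gammavariate(a1 + sum(x[:k]), 1.0 / (b1 + k))
        # phi | k, x ~ Gamma(a2 + sum_{i>k} x_i, rate b2 + m - k)
        phi = rng.gammavariate(a2 + sum(x[k:]), 1.0 / (b2 + m - k))
        # k | lambda, phi, x: discrete distribution on {1, ..., m-1}
        # (factorials x_i! cancel since they do not depend on k)
        logw = []
        for j in range(1, m):
            s = sum(x[:j])
            logw.append(s * math.log(lam) - j * lam
                        + (sum(x) - s) * math.log(phi) - (m - j) * phi)
        mx = max(logw)
        w = [math.exp(v - mx) for v in logw]
        u, c = rng.random() * sum(w), 0.0
        for j, wj in enumerate(w, start=1):
            c += wj
            if u <= c:
                k = j
                break
        samples.append((k, lam, phi))
    return samples
```

On data with a clear rate change, the sampled values of $k$ concentrate around the true changepoint.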
Stat 461-561: Exercises 6
Exercise 1. Bayesian linear model.
Derive the posterior distribution, predictive distribution and marginal likelihood
for the Bayesian linear model with normal likelihood and
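As a reference point for the derivation, under one standard conjugate setup (assumed here: known noise variance $\sigma^2$ and prior $\beta \sim \mathcal{N}(\mu_0, \Sigma_0)$) the posterior is Gaussian:

```latex
% y = X\beta + \varepsilon, \varepsilon \sim N(0, \sigma^2 I), prior \beta \sim N(\mu_0, \Sigma_0)
\beta \mid y \sim \mathcal{N}(\mu_n, \Sigma_n), \qquad
\Sigma_n = \left( \Sigma_0^{-1} + \sigma^{-2} X^{T} X \right)^{-1}, \qquad
\mu_n = \Sigma_n \left( \Sigma_0^{-1} \mu_0 + \sigma^{-2} X^{T} y \right).
```

The predictive distribution at a new row $x_*$ is then $\mathcal{N}(x_*^T \mu_n,\ \sigma^2 + x_*^T \Sigma_n x_*)$.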
Stat 461-561: Exercises 5
In Casella & Berger, Exercises 7.23, 7.24, 7.25 (Week 7) and 8.10, 8.11, 8.53 and
8.54 (Week 8)
Exercise 1 (Week 7) Let $\theta$ be a random variable in $(0, 1)$ with density
$$\pi(\theta) \propto \cdots$$
Stat 461-561: Exercises 5
Remarks: Exercises 8.10 & 8.11 in C&B make implicit use of the incomplete
Gamma function. No such question will be given at the exam.
Exercise C&B 8.53.
(a). We have
$$\beta(\theta) = P_\theta(H_0 \text{ is rejected}) = \cdots$$
Stat 461-561: Exercises 5
Remark: There are several conventions available for parameterising Gamma and
inverse-Gamma distributions.
Stat 461-561: Solutions Exercises 4
Exercise 10.31ae
(a) The null hypothesis is $H_0: p_1 = p_2$, which we can write as $H_0: p_1 - p_2 = 0$.
We have
$$\hat p_1 = \frac{S_1}{n_1}, \qquad \hat p_2 = \frac{S_2}{n_2},$$
which are unbiased estimates of $p_1$ and $p_2$.
Stat 461-561: Solutions Exercises 3
Exercise 8.3
Let $y = \sum_{i=1}^{m} y_i$. The likelihood is given by
$$L(\theta \mid y) = \theta^{y} (1 - \theta)^{m - y}$$
so the log-likelihood is
$$l(\theta \mid y) = y \log(\theta) + (m - y) \log(1 - \theta).$$
We want to compute
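The computation the solution is heading toward can be sketched by setting the score to zero (assuming the binomial log-likelihood above):

```latex
l'(\theta) = \frac{y}{\theta} - \frac{m - y}{1 - \theta} = 0
\;\Longrightarrow\;
y (1 - \theta) = (m - y)\, \theta
\;\Longrightarrow\;
\hat{\theta} = \frac{y}{m}.
```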
Stat 461-561: Solutions Exercises 2
January 24, 2007
Exercise 7.20
Let
$$\hat\beta_1 = \frac{\sum_{i=1}^{n} Y_i}{\sum_{i=1}^{n} x_i}.$$
So we have
$$E\big[\hat\beta_1\big] = \frac{\sum_{i=1}^{n} E[Y_i]}{\sum_{i=1}^{n} x_i} = \frac{\beta \sum_{i=1}^{n} x_i}{\sum_{i=1}^{n} x_i} = \beta.$$
We have
$$\operatorname{var}\big(\hat\beta_1\big) = \cdots$$
where
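A quick Monte Carlo sketch to check the unbiasedness claim (assumed model: $Y_i = \beta x_i + \varepsilon_i$ with $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$; the values of $\beta$, $\sigma$, and the design points are illustrative):

```python
import random

def beta1_hat(ys, xs):
    """The estimator sum(Y_i) / sum(x_i) from the exercise."""
    return sum(ys) / sum(xs)

def simulate_mean(beta=2.0, sigma=1.0, xs=(1.0, 2.0, 3.0, 4.0),
                  reps=20000, seed=0):
    """Average of the estimator over many simulated data sets."""
    rng = random.Random(seed)
    est = []
    for _ in range(reps):
        ys = [beta * x + rng.gauss(0.0, sigma) for x in xs]
        est.append(beta1_hat(ys, xs))
    return sum(est) / reps
```

The simulated average should sit close to the true $\beta$, consistent with the expectation computed above.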
Stat 461-561 Exercises 1.
Exercise 5.12. We have $X_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, 1)$; then
$$Z_1 = \frac{1}{n} \sum_{i=1}^{n} X_i \sim \mathcal{N}\!\left(0, \frac{1}{n}\right)$$
and $Y_1 = |Z_1|$, whereas
$$E[Y_2] = \frac{1}{n} \sum_{i=1}^{n} E[|X_i|].$$
For any variable $Z \sim \mathcal{N}(0, \sigma^2)$, we have $E[|Z|] = \sigma \sqrt{2/\pi}$.
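The normal absolute-moment fact used here can be checked numerically (a sketch, with an assumed value of $\sigma$):

```python
import math
import random

def mean_abs_normal(sigma=2.0, reps=200000, seed=0):
    """Monte Carlo estimate of E|Z| for Z ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    return sum(abs(rng.gauss(0.0, sigma)) for _ in range(reps)) / reps

# Closed form for comparison: E|Z| = sigma * sqrt(2 / pi)
closed_form = 2.0 * math.sqrt(2.0 / math.pi)
```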
STAT461-561: Delta Method
AD
January 2008
Delta Method
Assume first that $\theta \in \mathbb{R}$ and that you have
$$\hat\theta_n \overset{P}{\longrightarrow} \theta \quad \text{and} \quad \sqrt{n}\,\big(\hat\theta_n - \theta\big) \Rightarrow \mathcal{N}\!\big(0, \sigma^2(\theta)\big).$$
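The statement above can be checked numerically. A sketch for one assumed example: $X_i$ i.i.d. Bernoulli$(p)$, $\hat\theta_n$ the sample mean, and $g(t) = t(1-t)$, so the delta method gives asymptotic variance $g'(p)^2\, p(1-p)/n$ for $g(\hat\theta_n)$:

```python
import random

def delta_var(p, n):
    """Delta-method asymptotic variance of g(theta_hat), g(t) = t(1 - t)."""
    gprime = 1.0 - 2.0 * p  # g'(p)
    return gprime ** 2 * p * (1.0 - p) / n

def empirical_var(p, n, reps=4000, seed=0):
    """Monte Carlo variance of g(theta_hat) for comparison."""
    rng = random.Random(seed)
    vals = []
    for _ in range(reps):
        theta_hat = sum(rng.random() < p for _ in range(n)) / n
        vals.append(theta_hat * (1.0 - theta_hat))
    mean = sum(vals) / reps
    return sum((v - mean) ** 2 for v in vals) / (reps - 1)
```

For moderately large `n` the two variances agree closely.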
Notes on Consistency, Asymptotic Normality
Assume you have i.i.d. data $X_i \sim f(x \mid \theta)$ and you want to come up with an
estimate $\hat\theta_n$ of $\theta$. You could obviously try to maximize the log-likelihood of the
observations
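A sketch of that idea, maximizing the log-likelihood numerically for an assumed exponential model $f(x \mid \theta) = \theta e^{-\theta x}$ via golden-section search (the model, bracketing interval, and tolerance are illustrative choices):

```python
import math

def log_lik(theta, xs):
    """Exponential log-likelihood: sum_i [log(theta) - theta * x_i]."""
    return len(xs) * math.log(theta) - theta * sum(xs)

def mle_1d(xs, lo=1e-6, hi=100.0, tol=1e-10):
    """Golden-section search for the argmax of the log-likelihood on [lo, hi]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        if log_lik(c, xs) < log_lik(d, xs):
            a = c  # maximum lies in [c, b]
        else:
            b = d  # maximum lies in [a, d]
    return (a + b) / 2.0
```

For this model the numerical maximizer matches the closed form $\hat\theta_n = 1/\bar{x}$.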
Lecture Stat 461-561
Wald, Rao and Likelihood Ratio Tests
AD
February 2008
Introduction
Wald test
Rao test
Likelihood ratio test
Introduction
We w
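The three test statistics listed in the outline above can be sketched for an assumed Bernoulli example, testing $H_0: p = p_0$ from $x$ successes in $n$ trials (the formulas are the standard ones; the data values are illustrative):

```python
import math

def wald_rao_lrt(x, n, p0):
    """Wald, Rao (score), and likelihood-ratio statistics for H0: p = p0."""
    phat = x / n
    # Wald: squared standardized distance, variance estimated at phat
    wald = (phat - p0) ** 2 / (phat * (1 - phat) / n)
    # Rao/score: same distance, variance evaluated under H0 at p0
    rao = (phat - p0) ** 2 / (p0 * (1 - p0) / n)
    # LRT: 2 * (loglik at phat - loglik at p0)
    def ll(p):
        return x * math.log(p) + (n - x) * math.log(1 - p)
    lrt = 2.0 * (ll(phat) - ll(p0))
    return wald, rao, lrt
```

All three are asymptotically $\chi^2_1$ under $H_0$, and on typical data they return nearly equal values.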
Lecture Stat 461-561
Review Pseudo Likelihood
AD
April 2007
Pseudo Likelihood
In many applications, the log-likelihood $l(\theta; y_{1:n})$
Lecture Stat 461-561
Review EM
AD
April 2007
Expectation-Maximization Algorithm
Although the EM algorithm does not apply to all models, it is powerful
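A minimal EM sketch for one model where it does apply: a two-component Gaussian mixture with known unit variances (the model choice and starting values are assumptions for illustration):

```python
import math

def em_mixture(xs, mu1=0.0, mu2=1.0, pi1=0.5, n_iter=50):
    """EM for a 2-component mixture of N(mu1, 1) and N(mu2, 1)."""
    def phi(x, mu):
        return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in xs:
            a = pi1 * phi(x, mu1)
            b = (1.0 - pi1) * phi(x, mu2)
            r.append(a / (a + b))
        # M-step: responsibility-weighted means and mixing proportion
        s1 = sum(r)
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / s1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / (len(xs) - s1)
        pi1 = s1 / len(xs)
    return mu1, mu2, pi1
```

Each iteration increases the observed-data likelihood; on well-separated clusters the means converge to the cluster averages.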
Lecture Stat 461-561
Maximum Likelihood Estimation
A.D.
January 2008
Maximum Likelihood Estimation
Invariance
Consistency
Efficiency
Nuisance Parameters
Lecture Stat 461-561
Maximum Likelihood Estimation
A.D.
January 2007
Maximum Likelihood Estimation
Invariance
Lecture Stat 461-561
M-Estimation
AD
February 2008
Introduction & Motivation
In most applications, we have $X_i \overset{\text{i.i.d.}}{\sim} g$ and we obtain an estimate $\hat\theta$
by minimizing a suitable
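One concrete instance of this idea, sketched as a Huber location M-estimator (the loss, the tuning constant `c`, and the reweighted-averaging update are standard choices, assumed here for illustration):

```python
def huber_location(xs, c=1.345, n_iter=100, tol=1e-10):
    """M-estimate of location minimizing sum_i rho_c(x_i - theta),
    computed by iteratively reweighted averaging (weights psi(u)/u)."""
    theta = sorted(xs)[len(xs) // 2]  # start at the (upper) median
    for _ in range(n_iter):
        # weight 1 inside the quadratic zone, c/|residual| in the linear zone
        w = [1.0 if abs(x - theta) <= c else c / abs(x - theta) for x in xs]
        new = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
        if abs(new - theta) < tol:
            break
        theta = new
    return theta
```

Unlike the sample mean, the estimate is barely moved by a gross outlier, which is the point of M-estimation.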
Lecture Stat 461-561
M-Estimation
AD
February 2007
Introduction & Motivation
In most applications, we have $X_i \overset{\text{i.i.d.}}{\sim} g$ and we obtain an estimate $\hat\theta$
by minimizing a suitable
Lecture Stat 461-561
Markov Chain Monte Carlo
AD
March 2008
Introduction
Bayesian model: likelihood $f(x \mid \theta)$ and prior distribution $\pi(\theta)$.
Bayesian inference is based on the posterior
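A minimal random-walk Metropolis sketch targeting a posterior of this form (assumed example: Bernoulli likelihood with a uniform prior on $\theta$ and a Gaussian proposal; the data and step size are illustrative):

```python
import math
import random

def metropolis(x, n, n_iter=20000, step=0.1, seed=0):
    """Random-walk Metropolis for theta given x successes in n trials,
    Bernoulli likelihood and uniform prior (posterior is Beta(x+1, n-x+1))."""
    def log_post(t):
        if not 0.0 < t < 1.0:
            return -math.inf  # outside the prior support
        return x * math.log(t) + (n - x) * math.log(1.0 - t)
    rng = random.Random(seed)
    theta, samples = 0.5, []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        # accept with probability min(1, pi(prop) / pi(theta))
        if rng.random() < math.exp(min(0.0, log_post(prop) - log_post(theta))):
            theta = prop
        samples.append(theta)
    return samples
```

After discarding a burn-in, the sample mean approximates the exact posterior mean $(x+1)/(n+2)$, which gives a built-in check of the sampler.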
Lecture Stat 461-561
Markov Chain Monte Carlo
AD
March 2007
Introduction
Bayesian model: likelihood $f(x \mid \theta)$ and prior distribution $\pi(\theta)$.