Friday, 22nd May 2009
09.30 am - 10.30 am
EXAMINATION FOR THE DEGREES OF M.Sci. and B.Sc. (SCIENCE)
4H (Senior Honours) STATISTICS
Applied Bayesian Modelling - SOLUTIONS
Hand calculators with simple basic functions (log, exp, square root, etc.) may
be used in examinations.
Bayesian Statistics | Lab 2
Exterminating beetles
These data were reported in Bliss (1935) and are analysed in Dobson (2002, p. 119). Beetles
were exposed to different concentrations of carbon disulphide and the numbers of beetles dead
after five hours were recorded.
Bayesian Statistics | Lab 4
Part I: 8 schools data
These data illustrate the normal hierarchical model discussed in Lecture 10 and are analysed
in §5.5 of BDA. In this lab we will try to replicate their results, but using Gibbs sampling to
obtain a sample.
Lecture 2
2.1 A bit of history
Rev. Thomas Bayes (1763) "An Essay Towards Solving a Problem in the Doctrine
of Chances"
Pierre-Simon Laplace (1774, 1812)
General use of the "inverse probability" method during the 19th century
Criticisms in the 2nd half of the 19th century
Lecture 4: Inference for a binomial proportion
4.1 Analysis with an informative prior (cont.) [BDA2/3 §2.4; FCBSM §3.1 (p. 37)]
We have seen that if
likelihood: y | θ ~ Bin(n, θ) and
prior: θ ~ Be(α, β), then
posterior: θ | y ~ Be(α + y, β + n − y).
Posterior summaries
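As a quick numerical check of the conjugate update above, the sketch below computes posterior summaries by Monte Carlo. The prior Be(2, 2) and the data n = 20, y = 7 are illustrative choices, not values from the notes.

```python
import numpy as np

# Conjugate Be(alpha, beta) prior updated with y successes in n Bernoulli trials.
a, b = 2.0, 2.0        # illustrative prior Be(2, 2)
n, y = 20, 7           # illustrative data
a_post, b_post = a + y, b + n - y   # posterior Be(alpha + y, beta + n - y)

rng = np.random.default_rng(42)
draws = rng.beta(a_post, b_post, size=100_000)

print("exact posterior mean:", a_post / (a_post + b_post))   # (a + y)/(a + b + n)
print("Monte Carlo mean    :", draws.mean())
print("95% central interval:", np.percentile(draws, [2.5, 97.5]))
```

Here the exact mean (α + y)/(α + β + n) = 9/24 = 0.375 serves as a sanity check on the simulation.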
Lecture 9: Non-informative priors (BDA2 §2.9; BDA3 §2.10)
9.1 Non-informative priors
Bayesian methods require the specification of a prior distribution
Since
posterior ∝ prior × likelihood
the prior, in principle, can be influential
Mainly a problem with of med
Lecture 3: Inference for a binomial proportion
3.1 Analysis using a Uniform prior (FCBSM §3.1; BDA2/3 §2.1, §2.3)
y1, . . . , yn | θ i.i.d. Ber(θ)
y = total number of "successes" in n exchangeable trials (the order in which
the successes and failures happen doesn't matter)
Lecture 5
5.1 Central posterior intervals and HPDRs
Central posterior intervals: may be inadequate for multimodal or highly-skewed
posterior distributions
[Two figure panels omitted: posterior density against θ, illustrating central posterior
intervals for a multimodal and a highly-skewed posterior]
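The contrast in the figure can be reproduced from posterior draws: a central interval cuts equal probability from each tail, while the HPDR (for a unimodal posterior) is the shortest interval with the required coverage. The Be(2, 8) "posterior" below is an illustrative stand-in for a skewed target.

```python
import numpy as np

rng = np.random.default_rng(0)
draws = np.sort(rng.beta(2, 8, size=50_000))   # a right-skewed mock "posterior"

# Central 95% interval: cut 2.5% probability from each tail.
central = np.percentile(draws, [2.5, 97.5])

# 95% HPDR for a unimodal posterior: the shortest window containing 95% of draws.
m = int(0.95 * len(draws))
widths = draws[m:] - draws[:len(draws) - m]
i = int(np.argmin(widths))
hpd = (draws[i], draws[i + m])

print("central 95%:", central)
print("HPD 95%    :", hpd)   # shorter than the central interval for skewed targets
```

For a symmetric posterior the two intervals coincide; the skew is what pulls them apart.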
Monday, 10th May 2010
2.00 pm - 3.30 pm
EXAMINATION FOR THE DEGREES OF M.Sci. and B.Sc. (SCIENCE)
4H (Senior Honours) STATISTICS
Applied Bayesian Modelling
Hand calculators with simple basic functions (log, exp, square root, etc.) may
be used in examinations.
Bayesian Statistics | Lab 5
Part I: 8 schools data
These data illustrate the normal hierarchical model discussed in Lecture 10 and are analysed
in §5.5 of BDA. In this lab we will try to replicate their results, but using Gibbs sampling to
obtain a sample.
Lecture 6: Normal data (BDA2 §2.6-2.7, BDA3 §2.5-2.6; FCBSM §5.1-5.3)
Three cases:
Mean unknown, variance known: BDA2 §2.6; BDA3 §2.5; FCBSM §5.2
Mean known, variance unknown: BDA2 §2.7; BDA3 §2.6
Both mean and variance unknown: BDA2 §3.2-3.4; BDA3 §3.2
Lecture 11: The Poisson and Exponential Models (BDA2 §2.7, BDA3
§2.6; FCBSM §3.2)
11.1 The Poisson model
Arises in the study of count data
e.g. in epidemiological studies of the incidence of diseases
yi | θ ~ Poi(θ) i.i.d., i = 1, . . . , n
Likelihood, Prior,
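For the Poisson model a Gamma prior is conjugate: with θ ~ Ga(a, b) (shape a, rate b), the posterior is Ga(a + Σyi, b + n). A minimal sketch with illustrative counts and an illustrative Ga(1, 1) prior:

```python
import numpy as np

y = np.array([2, 0, 3, 1, 2])   # illustrative counts; not from the notes
a, b = 1.0, 1.0                 # Ga(a, b) prior (shape a, rate b) -- illustrative

# Conjugate update: theta | y ~ Ga(a + sum(y_i), b + n)
a_post, b_post = a + y.sum(), b + len(y)
print("exact posterior mean:", a_post / b_post)

rng = np.random.default_rng(1)
draws = rng.gamma(a_post, 1.0 / b_post, size=100_000)   # numpy takes scale = 1/rate
print("Monte Carlo mean    :", draws.mean())
```

Note the parameterization trap: numpy's `gamma` takes a scale parameter, so the rate b must be inverted.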
Bayesian Statistics | Lab 3
Part I: Germinating Seeds
These data, reported in Crowder (1978), consist of the numbers of seeds n and the numbers of
seeds that actually germinate y, for each combination of two types of seed (Orobanche aegyptiaca 75 and O. aegyptiaca 73).
Lecture 10: A Normal hierarchical model (BDA2/3 §5.4, 5.5, BDA2
§11.7, BDA3 §11.6; FCBSM §8.3)
J independent experiments: normal observations with known variance σ²
yij | θj ~ N(θj, σ²), i = 1, . . . , nj, j = 1, . . . , J
Aim: estimate the means θj's
Could use
Lecture 13: Direct simulation from the posterior (BDA2 §11.1, BDA3
§10.3)
13.1 Direct simulation
Feasible in simple problems, such as analyses with conjugate priors (we've used
it many times!)
Sometimes can be done in stages, after factorizing the joint
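The staged approach can be sketched for normal data with both parameters unknown: under the standard non-informative prior p(μ, σ²) ∝ 1/σ², the joint posterior factorizes as p(σ²|y) p(μ|σ², y), with σ²|y a scaled inverse-χ²(n−1, s²) and μ|σ², y ~ N(ȳ, σ²/n). The data below are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(5.0, 2.0, size=30)   # illustrative data
n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)

S = 20_000
# Stage 1: draw sigma^2 | y from the scaled inverse chi-square (n-1, s^2) marginal,
# using the identity sigma^2 = (n-1) s^2 / X with X ~ chi-square(n-1).
sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=S)
# Stage 2: draw mu | sigma^2, y from N(ybar, sigma^2 / n), one mu per sigma^2.
mu = rng.normal(ybar, np.sqrt(sigma2 / n))

print("posterior mean of mu:", mu.mean())   # centred at ybar
```

Each (μ, σ²) pair is an exact draw from the joint posterior; no Markov chain is needed.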
Lecture 12: The Multinomial model (BDA2 §3.5, BDA3 §3.4)
12.1 Multinomial model
Observe result of n trials in an experiment with k possible outcomes
assume that all trials are independent and that in each
Pr[outcome j | θ] = θj, j = 1, . . . , k
let y = (y1, . . . , yk)
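The conjugate prior here is the Dirichlet: with θ ~ Dirichlet(α1, . . . , αk), the posterior given counts y is Dirichlet(α + y). A minimal sketch with illustrative counts for k = 3 and a uniform Dirichlet(1, 1, 1) prior:

```python
import numpy as np

y = np.array([18, 5, 7])   # illustrative counts for k = 3 outcomes
alpha = np.ones(3)         # Dirichlet(1, 1, 1) prior: uniform on the simplex

rng = np.random.default_rng(3)
theta = rng.dirichlet(alpha + y, size=50_000)   # posterior Dirichlet(alpha + y)

print("posterior means:", theta.mean(axis=0))   # ~ (alpha_j + y_j) / (sum(alpha) + n)
```

Every draw lies on the probability simplex, so functions of θ (e.g. ratios θ1/θ2) can be summarized directly from the sample.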
Lecture 14: Markov Chain Monte Carlo (BDA2/3 Chapter 11; FCBSM
Chapters 6 and 10)
14.1 Markov Chain Monte Carlo Methods
Markov Chain Monte Carlo (MCMC) produces a dependent sample from the "target"
distribution π(θ) = p(θ|y), the posterior distribution.
The
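The simplest such scheme is a random-walk Metropolis sampler, which only needs the target up to a constant. The sketch below targets an illustrative Be(3, 5) "posterior" (not a model from the notes), so the output can be checked against the known mean 3/8.

```python
import numpy as np

def log_target(theta):
    """Unnormalized log posterior: a Be(3, 5) density, purely illustrative."""
    if theta <= 0.0 or theta >= 1.0:
        return -np.inf
    return 2.0 * np.log(theta) + 4.0 * np.log(1.0 - theta)

rng = np.random.default_rng(4)
theta, chain = 0.5, []
for _ in range(50_000):
    prop = theta + rng.normal(0.0, 0.2)      # symmetric random-walk proposal
    # Metropolis rule: accept with probability min(1, pi(prop)/pi(theta)).
    if np.log(rng.uniform()) < log_target(prop) - log_target(theta):
        theta = prop
    chain.append(theta)

draws = np.array(chain[5_000:])              # discard burn-in
print("MCMC estimate of E[theta|y]:", draws.mean())   # Be(3, 5) mean is 3/8
```

Because the proposal is symmetric, the Hastings correction cancels and only the target ratio appears in the acceptance step.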
Friday, 22nd May 2009
09.30 am - 10.30 am
EXAMINATION FOR THE DEGREES OF M.Sci. and B.Sc. (SCIENCE)
4H (Senior Honours) STATISTICS
Applied Bayesian Modelling
Hand calculators with simple basic functions (log, exp, square root, etc.) may
be used in examinations.
Lecture 15: Evaluating integrals by Monte Carlo
15.1 Introduction
Integration is basic to Bayes, as optimization (maximization) is to maximum likelihood.
Some instances:
Marginal posterior distributions
p(θ|y)
p(θj|y)
Marginal (or integrated) likelihood
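The basic Monte Carlo idea: a posterior expectation E[h(θ)|y] = ∫ h(θ) p(θ|y) dθ is estimated by (1/S) Σ h(θs) with θs drawn from the posterior. An illustrative Be(8, 14) posterior (an assumption, not from the notes) lets the estimate be checked against a closed form:

```python
import numpy as np

# E[h(theta) | y] = integral of h(theta) p(theta|y) dtheta ~ (1/S) sum h(theta_s)
rng = np.random.default_rng(5)
draws = rng.beta(8, 14, size=200_000)         # pretend posterior: Be(8, 14)

est = np.mean(draws ** 2)                     # h(theta) = theta^2
exact = (8 * 9) / ((8 + 14) * (8 + 14 + 1))   # E[theta^2] = a(a+1)/((a+b)(a+b+1))
print("MC estimate:", est, " exact:", exact)
```

The Monte Carlo error shrinks at rate 1/√S regardless of the dimension of θ, which is what makes the approach attractive for Bayesian integrals.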
Lecture 7: Normal data (continued)
Three cases:
Mean unknown, variance known: BDA2 §2.6; BDA3 §2.5; FCBSM §5.2
Mean known, variance unknown: BDA2 §2.7; BDA3 §2.6
Both mean and variance unknown: BDA2 §3.2-3.4; BDA3 §3.2-3.3; FCBSM §5.3
Today:
Mean known, variance unknown
Homework 2 Solutions
Problem 1 - Exercise 2.11.1 in BDA
Model:
p(y|θ) = (n choose y) θ^y (1 − θ)^(n−y)
p(θ) ∝ θ^3 (1 − θ)^3
With n = 10, we are only told that y < 3.
The posterior of θ is conditional on the available information, i.e. y < 3:
p(θ|y < 3) ∝ p(θ) p(y < 3|θ)
∝ θ^3 (1 − θ)^3 Σ_{y=0}^{2} (10 choose y) θ^y (1 − θ)^(10−y)
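This posterior is non-conjugate because of the censored observation, but it is one-dimensional and can be evaluated on a grid; a minimal sketch:

```python
import numpy as np
from math import comb

theta = np.linspace(0.0005, 0.9995, 1000)   # grid on (0, 1)
dx = theta[1] - theta[0]

prior = theta**3 * (1 - theta)**3           # p(theta) propto theta^3 (1-theta)^3
# Pr[y < 3 | theta] = sum over y = 0, 1, 2 of the Bin(10, theta) pmf
lik = sum(comb(10, y) * theta**y * (1 - theta)**(10 - y) for y in range(3))
post = prior * lik
post /= post.sum() * dx                     # normalize on the grid

print("posterior mean ~", np.sum(theta * post) * dx)
```

Since the data only tell us y < 3 (few successes), the posterior mean falls below the Be(4, 4) prior mean of 0.5.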
Bayesian Statistics | Lab 1
The aim of this lab is to serve as an introduction to WinBUGS, a computer program that
allows one to simulate from the posterior distribution of a (vector) parameter of interest, using
so-called Markov Chain Monte Carlo methods.
Friday, 20th May 2011
2.00 pm - 3.30 pm
EXAMINATION FOR THE DEGREES OF M.Sci. and B.Sc. (SCIENCE)
4H (Senior Honours) STATISTICS
Applied Bayesian Modelling
Hand calculators with simple basic functions (log, exp, square root, etc.) may be used
in examinations.
Prior induced by MVN on logits, compared with independent Uniform(0, 1) priors on
(θ1, θ2): the summaries listed include the median and 95% CI for the θ's, the median
and 95% CI for the odds ratio, the 95% CI for the log odds ratio (lor), and
Pr[θ1 > θ2 | y].
[Handwritten table; numerical entries illegible in the scan and omitted]
A few things to note: (a) in WinBUGS the normal distribution is specified with dnorm(mean,
precision), thus 0.25 is the precision (= 1/variance) of the normal prior on φ = logit(θ), so
s.d.(φ) = 2; (b) φ = logit(θ) is coded as logit(theta) <- phi
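The precision parameterization and the induced prior on θ can be checked outside WinBUGS; in the sketch below the prior mean of φ is taken as 0 (an assumption for illustration; the note above only fixes the precision):

```python
import numpy as np

precision = 0.25
sd = 1.0 / np.sqrt(precision)    # s.d. = 1/sqrt(precision) = 2, matching the note

rng = np.random.default_rng(6)
phi = rng.normal(0.0, sd, size=200_000)   # phi = logit(theta); mean 0 assumed here
theta = 1.0 / (1.0 + np.exp(-phi))        # theta = inverse-logit(phi)

print("sample s.d. of phi:", phi.std())
print("median of theta   :", np.median(theta))   # ~0.5 when the prior mean is 0
```

With mean 0 the induced prior on θ is symmetric about 0.5 but far from uniform: an s.d. of 2 on the logit scale piles prior mass near 0 and 1.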