Journal of Machine Learning Research 8 (2007) 1769-1797
Submitted 7/06; Revised 1/07; Published 8/07
Characterizing the Function Space for Bayesian Kernel Models
Natesh S. Pillai, Qiang Wu
Department of Statistical Science, Duke University, Durham, NC 27708
Diagnostics in MCMC
Hoff Chapter 6
October 13, 2010
Convergence to Posterior Distribution
Theory tells us that if we run the Gibbs sampler long enough, the samples we obtain will be samples from the joint posterior distribution (the target or stationary distribution).
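As a minimal sketch of this idea (not from the notes), consider a Gibbs sampler for a bivariate normal with correlation rho, where each full conditional is a univariate normal. After a burn-in period, the draws behave like samples from the joint target, so the empirical correlation recovers rho:

```python
import numpy as np

# Gibbs sampler for a bivariate normal with correlation rho:
# the full conditionals are x | y ~ N(rho*y, 1 - rho^2) and symmetrically for y.
rng = np.random.default_rng(0)
rho = 0.8
n_iter = 20000
x, y = 0.0, 0.0                      # arbitrary starting point
samples = np.empty((n_iter, 2))
for t in range(n_iter):
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))
    samples[t] = x, y

burned = samples[5000:]              # discard burn-in draws
print(np.corrcoef(burned.T)[0, 1])  # close to rho = 0.8
```

The rho = 0.8 target and burn-in length are illustrative choices, not values from the notes.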
Introduction to Gibbs Sampling
October 8, 2010
Readings: Hoff Chapter 6
October 7, 2010
Monte Carlo Sampling
We have seen that Monte Carlo sampling is a useful tool for sampling from prior and posterior distributions. By limiting attention to conjugate prior distributions, the posterior is available in closed form and can be sampled from directly.
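A small sketch of this with a conjugate Beta-Binomial model (the data values here are hypothetical, not from the notes): with a Beta(a, b) prior on theta and y successes in n Bernoulli trials, the posterior is Beta(a + y, b + n - y), and Monte Carlo draws from it approximate posterior summaries.

```python
import numpy as np

# Monte Carlo sampling from a conjugate Beta posterior: with a Beta(a, b)
# prior on theta and y successes in n trials, the posterior is
# Beta(a + y, b + n - y) and can be sampled directly.
rng = np.random.default_rng(1)
a, b = 1.0, 1.0          # uniform prior
y, n = 7, 10             # hypothetical data
draws = rng.beta(a + y, b + n - y, size=100_000)
print(draws.mean())                        # estimate of E[theta | y]; exact value is 8/12
print(np.quantile(draws, [0.025, 0.975]))  # 95% equal-tailed credible interval
```

The Monte Carlo mean matches the exact posterior mean (a + y)/(a + b + n) up to simulation error.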
Mixture Models and Gibbs Sampling
Readings: Hoff Chapter 6
October 15, 2010
Eyes Example
Bowmaker et al. (1985) analyze data on the peak sensitivity wavelengths for individual microspectrophotometric records on a small set of monkeys' eyes. WinBUGS Examples V
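In the spirit of this example, here is a hedged sketch (with simulated stand-in data, simplified assumptions, and hyperparameters of my choosing, not the WinBUGS specification) of Gibbs sampling for a two-component normal mixture with known common variance and fixed mixing weight: alternate between sampling component labels and component means.

```python
import numpy as np

# Sketch of Gibbs sampling for a two-component normal mixture with known
# common variance: alternate between (1) sampling component labels and
# (2) sampling each component mean from its conjugate normal full conditional.
rng = np.random.default_rng(2)

# simulated stand-in for the wavelength records (two overlapping groups)
y = np.concatenate([rng.normal(535.0, 3.5, 30), rng.normal(549.0, 3.5, 18)])
n = len(y)

sigma = 3.5                      # known within-component sd (assumed)
pi = 0.5                         # fixed mixing weight, for simplicity
mu = np.array([530.0, 555.0])    # initial means
m0, s0 = 540.0, 100.0            # N(m0, s0^2) prior on each mean

n_iter = 2000
trace = np.empty((n_iter, 2))
for t in range(n_iter):
    # 1. sample each label z_i from its Bernoulli full conditional
    p1 = pi * np.exp(-0.5 * ((y - mu[1]) / sigma) ** 2)
    p0 = (1 - pi) * np.exp(-0.5 * ((y - mu[0]) / sigma) ** 2)
    z = rng.random(n) < p1 / (p0 + p1)
    # 2. sample each mean from its conjugate normal full conditional
    for k, idx in enumerate([~z, z]):
        nk = idx.sum()
        var = 1.0 / (nk / sigma ** 2 + 1.0 / s0 ** 2)
        mean = var * (y[idx].sum() / sigma ** 2 + m0 / s0 ** 2)
        mu[k] = rng.normal(mean, np.sqrt(var))
    trace[t] = mu

print(trace[500:].mean(axis=0))  # posterior means near the two group centers
```

The real Eyes model also puts priors on the mixing weight and variance; fixing them here keeps the two Gibbs steps easy to see.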
One Parameter Models
September 22, 2010
Reading: Hoff Chapter 3
Highest Posterior Density Regions
Find
Θ₁ = {θ : p(θ | Y) ≥ h} such that P(θ ∈ Θ₁ | Y) = 1 − α.
All points in Θ₁ have a higher density than any point outside the region.
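A numerical sketch of this definition (the Beta(8, 4) posterior is my illustrative choice, not from the notes): lower the density threshold h until the set {θ : p(θ | Y) ≥ h} accumulates 95% posterior probability. For a unimodal posterior the result is the shortest 95% interval.

```python
import numpy as np

# 95% highest-posterior-density (HPD) region for a Beta(8, 4) posterior,
# computed on a grid: collect grid points in order of decreasing density
# until they hold 0.95 of the posterior mass.
grid = np.linspace(0.0, 1.0, 200_001)
dx = grid[1] - grid[0]
dens = grid ** 7 * (1 - grid) ** 3   # unnormalized Beta(8, 4) density
dens /= dens.sum() * dx              # normalize numerically

order = np.argsort(dens)[::-1]       # grid points, highest density first
mass = np.cumsum(dens[order]) * dx
inside = order[: np.searchsorted(mass, 0.95) + 1]
print(grid[inside].min(), grid[inside].max())  # endpoints of the HPD interval
```

Because the density is unimodal, the collected points form a single interval containing the mode 0.7, and the densities at the two endpoints are (up to grid error) equal, as the definition requires.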
Markov Chain Sampling Methods for Dirichlet Process Mixture Models
Radford M. Neal, University of Toronto, Ontario, Canada
Presented by Colin DeLong
Outline
Introduction
Dirichlet process mixture models
Gibbs sampling with conjugate priors
Algorithms 1, 2,
R plot layout
# Make some data
cars <- c(1, 3, 6, 4, 9)
Axis label font size
# Change the font size of the axis labels
plot(cars, type = "l", col = "blue", main = "Autos",
     ylab = "Number seen", xlab = "Day", cex.lab = 0.7)
# You can also change the font size of the title and axis
# annotation with cex.main and cex.axis
More on Prior Distributions
September 24, 2010
Reading: Hoff Chapter 3
Binomial as an Exponential Family
Rearrange the Bernoulli (a Binomial with n = 1) to get it into exponential family form:

p(y | θ) = θ^y (1 − θ)^(1−y)
         = (1 − θ) (θ / (1 − θ))^y
         = (1 − θ) exp( y log(θ / (1 − θ)) )
         = c(φ) exp(φ y),

where φ = log(θ / (1 − θ)) is the natural parameter and c(φ) = 1 / (1 + e^φ) = 1 − θ.
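The rearrangement above can be checked numerically; this short sketch evaluates both forms of the Bernoulli pmf at an arbitrary theta:

```python
import numpy as np

# Check the exponential-family rewriting of the Bernoulli pmf:
# theta^y * (1-theta)^(1-y) should equal c(phi) * exp(phi * y)
# with phi = log(theta / (1 - theta)) and c(phi) = 1 / (1 + exp(phi)).
theta = 0.3
phi = np.log(theta / (1 - theta))   # natural parameter (log-odds)
c = 1.0 / (1.0 + np.exp(phi))       # normalizing term, equals 1 - theta
for y in (0, 1):
    direct = theta ** y * (1 - theta) ** (1 - y)
    expfam = c * np.exp(phi * y)
    print(y, direct, expfam)        # the two forms agree
```

The two columns agree for both y = 0 and y = 1, confirming the algebra term by term.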