Stat 5102 Notes: Markov Chain Monte Carlo and Bayesian Inference

Charles J. Geyer

April 6, 2009

1 The Problem

This is an example of an application of Bayes rule that requires some form of computer analysis. We will use Markov chain Monte Carlo (MCMC). The problem is the same one that was done by maximum likelihood on the computer examples web pages (http://www.stat.umn.edu/geyer/5102/examp/like.html). The data model is gamma. We will use the Jeffreys prior.

1.1 Data

The data are loaded by the R command

> foo <- read.table(url("http://www.stat.umn.edu/geyer/5102/data/ex3-1.txt"),
+     header = TRUE)
> x <- foo$x

1.2 R Package

We load the R contributed package mcmc, which is available from CRAN.

> library(mcmc)

If this does not work, then get the package using the package menu for R.

1.3 Random Number Generator Seed

In order to get the same results every time, we set the seed of the random number generator.

> set.seed(42)

To get different results, change the seed or simply omit this statement.

1.4 Prior

We have not done the Fisher information matrix for the two-parameter gamma distribution. To calculate Fisher information, it is enough to have the log likelihood for sample size one. The PDF is

    f(x \mid \alpha, \lambda) = \frac{\lambda^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} \exp(-\lambda x)

The log likelihood is

    l(\alpha, \lambda) = \log f(x \mid \alpha, \lambda) = \alpha \log(\lambda) - \log \Gamma(\alpha) + (\alpha - 1) \log(x) - \lambda x

which has derivatives

    \frac{\partial l(\alpha, \lambda)}{\partial \alpha} = \log(\lambda) - \frac{d}{d\alpha} \log \Gamma(\alpha) + \log(x)

    \frac{\partial l(\alpha, \lambda)}{\partial \lambda} = \frac{\alpha}{\lambda} - x

    \frac{\partial^2 l(\alpha, \lambda)}{\partial \alpha^2} = - \frac{d^2}{d\alpha^2} \log \Gamma(\alpha)

    \frac{\partial^2 l(\alpha, \lambda)}{\partial \alpha \, \partial \lambda} = \frac{1}{\lambda}

    \frac{\partial^2 l(\alpha, \lambda)}{\partial \lambda^2} = - \frac{\alpha}{\lambda^2}

Recall that d^2 \log \Gamma(\alpha) / d\alpha^2 is called the trigamma function. So the Fisher information matrix, which is minus the expectation of the matrix of second derivatives, is

    I(\theta) = \begin{pmatrix} \mathrm{trigamma}(\alpha) & -1/\lambda \\ -1/\lambda & \alpha/\lambda^2 \end{pmatrix}

Its determinant is

    |I(\theta)| = \mathrm{trigamma}(\alpha) \cdot \frac{\alpha}{\lambda^2} - \frac{1}{\lambda^2} = \frac{\alpha \, \mathrm{trigamma}(\alpha) - 1}{\lambda^2}

and the Jeffreys prior, the square root of this determinant, is

    g(\alpha, \lambda) = \frac{\sqrt{\alpha \, \mathrm{trigamma}(\alpha) - 1}}{\lambda}

(see the R sketch of this prior below).

2 The Markov Chain Monte Carlo

2.1 Ordinary Monte Carlo

The Monte Carlo method refers to the theory and practice of learning about probability distributions by simulation rather than calculus. In ordinary Monte Carlo (OMC) we use IID simulations from the distribution of interest. Suppose X_1, X_2, \ldots are IID simulations from some distribution, and suppose we want to know an expectation

    \mu = E\{g(X_i)\}.

Then the law of large numbers (LLN) says

    \hat{\mu}_n = \frac{1}{n} \sum_{i=1}^n g(X_i)

converges in probability to \mu, and the central limit theorem (CLT) says

    \sqrt{n} \, (\hat{\mu}_n - \mu) \xrightarrow{D} N(0, \sigma^2)

where

    \sigma^2 = \mathrm{var}\{g(X_i)\}

which can be estimated by the empirical variance

    \hat{\sigma}_n^2 = \frac{1}{n} \sum_{i=1}^n (g(X_i) - \hat{\mu}_n)^2

and this allows us to say everything we want to say about \mu; for example, an asymptotic 95% confidence interval for \mu is

    \hat{\mu}_n \pm 1.96 \, \frac{\hat{\sigma}_n}{\sqrt{n}}

The theory of OMC is just the theory of frequentist statistical inference.
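
A minimal numerical sketch of the OMC recipe (not part of the original notes; the target distribution and the function g are our choices for illustration): we estimate \mu = E\{g(X_i)\} with g(x) = x^2 and X_i standard normal, so the true value is E(X_i^2) = 1.

> n <- 1e4
> x.sim <- rnorm(n)        # IID simulations from the distribution of interest
> g.sim <- x.sim^2         # g(X_i)
> mu.hat <- mean(g.sim)    # Monte Carlo estimate of mu
> sigma.hat <- sd(g.sim)   # estimates sigma (sd() divides by n - 1, not n)
> mu.hat + c(-1, 1) * 1.96 * sigma.hat / sqrt(n)   # asymptotic 95% confidence interval

Repeating this with different seeds gives an interval that covers the true value 1 about 95% of the time, and the interval length shrinks like 1/\sqrt{n}.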
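
And here is the R sketch of the Jeffreys prior promised in Section 1.4, written as a log unnormalized posterior (log likelihood plus log prior). The function name lupost and its interface are our invention for illustration, not taken from these notes, but a function of this shape is what metrop() in the mcmc package takes as its first argument.

> lupost <- function(theta, x) {
+     alpha <- theta[1]
+     lambda <- theta[2]
+     if (alpha <= 0 || lambda <= 0) return(-Inf)   # log of zero prior outside the parameter space
+     sum(dgamma(x, shape = alpha, rate = lambda, log = TRUE)) +
+         0.5 * log(alpha * trigamma(alpha) - 1) - log(lambda)
+ }

Everything is kept on the log scale to avoid underflow. The quantity under the square root in g(\alpha, \lambda) is positive for all \alpha > 0, since trigamma(\alpha) > 1/\alpha + 1/(2\alpha^2), so the log is always defined; trigamma() is supplied by base R.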