# Stat 5102 Notes: Markov Chain Monte Carlo and Bayesian Inference

Charles J. Geyer

April 6, 2009

## 1 The Problem

This is an example of an application of Bayes' rule that requires some form of computer analysis. We will use Markov chain Monte Carlo (MCMC).

The problem is the same one that was done by maximum likelihood on the computer examples web pages (http://www.stat.umn.edu/geyer/5102/examp/like.html). The data model is gamma. We will use the Jeffreys prior.

### 1.1 Data

The data are loaded by the R command

```
> foo <- read.table(url("http://www.stat.umn.edu/geyer/5102/data/ex3-1.txt"),
+     header = TRUE)
> x <- foo$x
```

### 1.2 R Package

We load the R contributed package mcmc, which is available from CRAN.

```
> library(mcmc)
```

If this does not work, then install the package using the package menu for R.

### 1.3 Random Number Generator Seed

In order to get the same results every time, we set the seed of the random number generator.

```
> set.seed(42)
```

To get different results, change the seed or simply omit this statement.

### 1.4 Prior

We have not done the Fisher information matrix for the two-parameter gamma distribution. To calculate Fisher information, it is enough to have the log likelihood for sample size one. The PDF is

$$f(x \mid \alpha, \lambda) = \frac{\lambda^\alpha}{\Gamma(\alpha)}\, x^{\alpha - 1} \exp(-\lambda x)$$

The log likelihood is

$$l(\alpha, \lambda) = \log f(x \mid \alpha, \lambda) = \alpha \log(\lambda) - \log \Gamma(\alpha) + (\alpha - 1) \log(x) - \lambda x$$

which has derivatives

$$\frac{\partial l(\alpha, \lambda)}{\partial \alpha} = \log(\lambda) - \frac{d}{d\alpha} \log \Gamma(\alpha) + \log(x)$$

$$\frac{\partial l(\alpha, \lambda)}{\partial \lambda} = \frac{\alpha}{\lambda} - x$$

$$\frac{\partial^2 l(\alpha, \lambda)}{\partial \alpha^2} = - \frac{d^2}{d\alpha^2} \log \Gamma(\alpha)$$

$$\frac{\partial^2 l(\alpha, \lambda)}{\partial \alpha\, \partial \lambda} = \frac{1}{\lambda}$$

$$\frac{\partial^2 l(\alpha, \lambda)}{\partial \lambda^2} = - \frac{\alpha}{\lambda^2}$$

Recall that $\partial^2 \log \Gamma(\alpha) / \partial \alpha^2$ is called the trigamma function.
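The derivative formulas above are easy to cross-check numerically. Here is a minimal sketch in Python (not part of the original R notes; the point values `alpha`, `lam`, `x` are arbitrary choices for illustration), comparing the analytic scores to central finite differences of the log likelihood:

```python
import math

def log_lik(alpha, lam, x):
    # log f(x | alpha, lambda) for the gamma density in Section 1.4
    return (alpha * math.log(lam) - math.lgamma(alpha)
            + (alpha - 1.0) * math.log(x) - lam * x)

alpha, lam, x, h = 2.3, 1.7, 0.9, 1e-5

# digamma(alpha) = d/d alpha of log Gamma(alpha), via central difference
digamma = (math.lgamma(alpha + h) - math.lgamma(alpha - h)) / (2.0 * h)

# analytic scores from the displayed derivatives
score_alpha = math.log(lam) - digamma + math.log(x)
score_lam = alpha / lam - x

# finite-difference scores computed from the log likelihood itself
fd_alpha = (log_lik(alpha + h, lam, x) - log_lik(alpha - h, lam, x)) / (2.0 * h)
fd_lam = (log_lik(alpha, lam + h, x) - log_lik(alpha, lam - h, x)) / (2.0 * h)

print(abs(score_alpha - fd_alpha) < 1e-8,
      abs(score_lam - fd_lam) < 1e-8)  # prints: True True
```

The same finite-difference trick applied twice would check the second derivatives, whose negatives (averaged over the data distribution) give the Fisher information below.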
So the Fisher information matrix is

$$I(\theta) = \begin{pmatrix} \mathrm{trigamma}(\alpha) & -1/\lambda \\ -1/\lambda & \alpha/\lambda^2 \end{pmatrix}$$

Its determinant is

$$|I(\theta)| = \begin{vmatrix} \mathrm{trigamma}(\alpha) & -1/\lambda \\ -1/\lambda & \alpha/\lambda^2 \end{vmatrix} = \frac{\alpha\, \mathrm{trigamma}(\alpha) - 1}{\lambda^2}$$

and the Jeffreys prior is

$$g(\alpha, \lambda) = \sqrt{\frac{\alpha\, \mathrm{trigamma}(\alpha) - 1}{\lambda^2}}$$

## 2 The Markov Chain Monte Carlo

### 2.1 Ordinary Monte Carlo

The "Monte Carlo method" refers to the theory and practice of learning about probability distributions by simulation rather than calculus. In ordinary Monte Carlo (OMC) we use IID simulations from the distribution of interest.

Suppose $X_1, X_2, \ldots$ are IID simulations from some distribution, and suppose we want to know an expectation

$$\mu = E\{g(X_i)\}.$$

Then the law of large numbers (LLN) says

$$\hat{\mu}_n = \frac{1}{n} \sum_{i=1}^{n} g(X_i)$$

converges in probability to $\mu$, and the central limit theorem (CLT) says

$$\sqrt{n}\,(\hat{\mu}_n - \mu) \xrightarrow{\mathcal{D}} N(0, \sigma^2)$$

where

$$\sigma^2 = \mathrm{var}\{g(X_i)\}$$

which can be estimated by the empirical variance

$$\hat{\sigma}_n^2 = \frac{1}{n} \sum_{i=1}^{n} \bigl(g(X_i) - \hat{\mu}_n\bigr)^2$$

and this allows us to say everything we want to say about $\mu$; for example, an asymptotic 95% confidence interval for $\mu$ is

$$\hat{\mu}_n \pm 1.96\, \frac{\hat{\sigma}_n}{\sqrt{n}}$$

The theory of OMC is just the theory of frequentist statistical inference. ...
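The OMC recipe above can be sketched end to end. This toy example is mine, not from the notes, and uses Python rather than the notes' R: it estimates $\mu = E\{g(X)\}$ with $g(x) = x^2$ for $X \sim \mathrm{Exponential}(1)$, for which the true value is $E\,X^2 = 2$, and forms the asymptotic 95% confidence interval from the displayed formulas.

```python
import math
import random

random.seed(42)  # fixed seed, mirroring the notes' set.seed(42)

# IID simulations of g(X) = X^2 with X ~ Exponential(rate 1); true mu = 2
n = 100_000
g = [random.expovariate(1.0) ** 2 for _ in range(n)]

mu_hat = sum(g) / n                                            # OMC estimate
sigma_hat = math.sqrt(sum((v - mu_hat) ** 2 for v in g) / n)   # empirical sd
half = 1.96 * sigma_hat / math.sqrt(n)                         # CLT half-width

print(f"95% CI for mu: {mu_hat - half:.3f} to {mu_hat + half:.3f}")
```

Because the $X_i$ are IID, the plain empirical variance is a valid estimate of $\sigma^2$; the point of the MCMC sections that follow is that this step is exactly what breaks when the simulations are a dependent Markov chain.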