Chapter 8  Markov Chain Simulations

8.1 Introduction

The early examples of Bayesian analysis that required simulation used approaches that work only in low-dimensional problems: the grid approach in univariate or bivariate cases when the shape of the distribution was unknown, or direct simulation from a known distribution. The theory behind those methods is the Monte Carlo method: if you draw L random numbers θ^1, ..., θ^L from p(θ | y), the θ^i are independent and identically distributed, and the expectation of any function h of θ can be approximated by the sample average,

    E(h) = ∫ h(θ) p(θ | y) dθ ≈ (1/L) Σ_{i=1}^{L} h(θ^i).

With complicated models, it is rare that samples from the posterior distribution can be obtained directly. This chapter covers the method of Markov chain simulation, in particular the Gibbs sampler and the Metropolis-Hastings algorithm. With these methods it is no longer required that the θ^i be iid. The key to Markov chain simulation is to construct a Markov process whose stationary distribution is the specified posterior p(θ | y). These methods rely on being able to produce (with a computer) an endless stream of random variables from known or new distributions. Such simulation is, in turn, based on generating uniform random variables on the interval (0, 1).

8.2 Markov chain Monte Carlo to summarize posterior distributions

MCMC algorithms are attractive in that they are easy to set up and program and require relatively little prior input from the user. R is a convenient language for programming these algorithms and is also well suited to output analysis, where one performs several graphical and numerical checks to verify that the algorithm is indeed producing draws from the target posterior distribution.

8.2.1 Introduction to discrete Markov chains

A Markov chain describes probabilistic movement between several states. Suppose a person starts at one location i of the locations 1, 2, 3, 4, 5, 6.
The probability that the person moves to another location j depends only on the current location i.

Def: Transition probabilities describe the likelihood of moving between particular states in one step.

Def: A transition matrix T summarizes the transition probabilities; entry (i, j) of T is the probability of moving from state i to state j in one step.
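The Monte Carlo estimate from Section 8.1 can be sketched in a few lines. This is a toy illustration, not an example from the notes: the posterior p(θ | y) is taken to be a Normal(1, 0.5²) distribution and h(θ) = θ², both purely illustrative choices made so that the exact answer is known.

```python
import random

# Monte Carlo estimate of E[h(theta)]: draw L iid values theta_i from
# p(theta | y), then average h(theta_i). Here p(theta | y) is taken to be
# Normal(mu=1, sigma=0.5) and h(theta) = theta^2 (illustrative choices only).
random.seed(1)
L = 100_000
draws = [random.gauss(1.0, 0.5) for _ in range(L)]
estimate = sum(t ** 2 for t in draws) / L

# Exact value for comparison: E[theta^2] = mu^2 + sigma^2 = 1 + 0.25 = 1.25
print(estimate)
```

As L grows, the sample average converges to the integral ∫ h(θ) p(θ | y) dθ; with L = 100,000 the estimate is within a few thousandths of 1.25.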
Stats C180 / C236 Introduction to Bayesian Statistics, Juana Sanchez, UCLA Department of Statistics

        0.70  0.30  0     0     0     0
        0.40  0.50  0.10  0     0     0
T  =    0.20  0.50  0.30  0     0     0
        0     0     0.25  0.25  0.50  0
        0     0     0.25  0.35  0.40  0
        0     0     0     0.25  0.50  0.25

The first row of T gives the probabilities of moving in a single step from location 1 to each of the states 1 through 6, the second row gives the one-step transition probabilities from location 2, and so on.

Def: Irreducible transition matrix: it is possible to go from every state to every state in one or more steps.
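A minimal sketch of simulating this six-state chain follows. The matrix T is transcribed from the notes as printed above; each step draws the next state from the row of T corresponding to the current state (states are numbered 1 through 6 in the text, 0 through 5 in the code).

```python
import random

# Transition matrix transcribed from the notes; row i holds the one-step
# transition probabilities out of state i (0-based here, 1-based in the text).
T = [
    [0.70, 0.30, 0.00, 0.00, 0.00, 0.00],
    [0.40, 0.50, 0.10, 0.00, 0.00, 0.00],
    [0.20, 0.50, 0.30, 0.00, 0.00, 0.00],
    [0.00, 0.00, 0.25, 0.25, 0.50, 0.00],
    [0.00, 0.00, 0.25, 0.35, 0.40, 0.00],
    [0.00, 0.00, 0.00, 0.25, 0.50, 0.25],
]

# Every row of a transition matrix must sum to 1.
for row in T:
    assert abs(sum(row) - 1.0) < 1e-9

def step(state, rng):
    """Draw the next state given the current one, using row `state` of T."""
    return rng.choices(range(6), weights=T[state])[0]

rng = random.Random(0)
state = 3                    # start at location 4 (0-based index 3)
visits = [0] * 6
for _ in range(10_000):
    state = step(state, rng)
    visits[state] += 1

print(visits)                # time spent in each state over 10,000 steps
```

Tabulating `visits` over a long run is one simple way to study the chain's long-run behavior; checking whether every state can be reached from every other (the irreducibility condition defined above) tells you whether such long-run frequencies cover all states.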

This note was uploaded on 11/24/2010 for the course STAT 201a taught by Professor Wu during the Spring '10 term at Pasadena City College.
