STAT 531: Markov Chains
HM Kim
Department of Mathematics and Statistics, University of Calgary
Fall 2010

Markov Chain Monte Carlo (MCMC)

MCMC methods address high-dimensional problems: the integration of high-dimensional functions can be computationally very difficult, and often very little is known about the target density f. MCMC methods attempt to simulate direct draws from some complex distribution of interest: one uses the previous sample value to randomly generate the next sample value, so the draws produced are correlated.

Markov chains

Markov chains are a special type of stochastic process, that is, a process that moves around a set of possible values and whose future values cannot be predicted with certainty. The set of possible values is called the state space of the process.

Let X^(n) denote the value of a random variable at time n, and let the state space be the range of possible X values. The process is a Markov process if the transition probabilities between values in the state space depend only on the process's current state. A Markov chain is a sequence of dependent random variables X^(0), X^(1), ..., X^(n) generated by a Markov process.

For example, the joint probability of the process at times 0, 1, 2 factors as

  P(X^(2) = x_2, X^(1) = x_1, X^(0) = x_0)
    = P(X^(2) = x_2 | X^(1) = x_1, X^(0) = x_0) P(X^(1) = x_1 | X^(0) = x_0) P(X^(0) = x_0)
    = P(X^(2) = x_2 | X^(1) = x_1) P(X^(1) = x_1 | X^(0) = x_0) P(X^(0) = x_0),

where the Markov property reduces the first factor to conditioning on X^(1) alone.

Transition probabilities

A particular chain is defined most critically by its transition probabilities (or the transition kernel)

  p_{i,j} = P(X^(n+1) = j | X^(n) = i),

the probability that the process moves from state i to state j in a single step.
Example: p_{1,1} = P(X^(n+1) = 1 | X^(n) = 1) and p_{1,2} = P(X^(n+1) = 2 | X^(n) = 1).

Note that we restrict ourselves to time-invariant Markov chains, whose transition probabilities depend only on the states, not on the time n.

One-step transition probabilities:

  P = | p_{1,1} ... p_{1,K} |
      |    .    .      .    |
      | p_{K,1} ... p_{K,K} |

Two-step transition probabilities:

  P^(2) = | p^(2)_{1,1} ... p^(2)_{1,K} |
          |      .      .        .      |
          | p^(2)_{K,1} ... p^(2)_{K,K} |

where p^(2)_{i,j} = P(X^(n+2) = j | X^(n) = i) is the probability of going from state i to state j in two steps, for any n.

Conditioning on the intermediate state gives

  p^(2)_{i,j} = P(X^(2) = j | X^(0) = i)
    = sum_{k=1}^{K} P(X^(2) = j, X^(1) = k | X^(0) = i)
    = sum_{k=1}^{K} P(X^(2) = j | X^(1) = k, X^(0) = i) P(X^(1) = k | X^(0) = i)
    = sum_{k=1}^{K} P(X^(2) = j | X^(1) = k) P(X^(1) = k | X^(0) = i)
    = sum_{k=1}^{K} p_{i,k} p_{k,j}.

For K = 2 this is exactly the matrix product:

  P^2 = | p_{1,1} p_{1,2} | | p_{1,1} p_{1,2} |
        | p_{2,1} p_{2,2} | | p_{2,1} p_{2,2} |

      = | p_{1,1} p_{1,1} + p_{1,2} p_{2,1}   p_{1,1} p_{1,2} + p_{1,2} p_{2,2} |
        | p_{2,1} p_{1,1} + p_{2,2} p_{2,1}   p_{2,1} p_{1,2} + p_{2,2} p_{2,2} |

      = | sum_{k=1}^{2} p_{1,k} p_{k,1}   sum_{k=1}^{2} p_{1,k} p_{k,2} |
        | sum_{k=1}^{2} p_{2,k} p_{k,1}   sum_{k=1}^{2} p_{2,k} p_{k,2} |
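The identity p^(2)_{i,j} = sum_k p_{i,k} p_{k,j} says the two-step transition matrix is the matrix product P·P, which is easy to check numerically. A quick sketch in Python (the transition matrix values are invented for illustration):

```python
import numpy as np

# Hypothetical 2-state transition matrix (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Two-step probabilities via the sum over the intermediate state k ...
P2_sum = np.array([[sum(P[i, k] * P[k, j] for k in range(2)) for j in range(2)]
                   for i in range(2)])

# ... agree with the matrix product P @ P.
P2_matmul = P @ P   # → [[0.86, 0.14], [0.70, 0.30]]
```

The rows of P^2 again sum to 1, as they must for a transition matrix.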
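The MCMC idea described at the start of these notes, generating each new sample value from the previous one, can be illustrated with a random-walk Metropolis sampler. This is a hedged sketch, not an algorithm covered in this excerpt; the standard-normal target and proposal scale are assumptions for illustration:

```python
import math
import random

def metropolis(log_f, x0, n_samples, scale=1.0, seed=0):
    """Random-walk Metropolis: each draw depends only on the previous one."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, scale)      # propose a move near the current state
        log_ratio = log_f(prop) - log_f(x)    # log acceptance ratio f(prop)/f(x)
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = prop                          # accept; otherwise keep current state
        samples.append(x)
    return samples

# Target: standard normal, using its log density up to an additive constant.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
```

Successive draws are correlated, as noted above, but their long-run distribution approximates the target.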
This note was uploaded on 02/04/2011 for the course STAT 531, taught by Professor Gabor Lukacs during the Spring '11 term at Manitoba.