# STAT 531: Markov Chains

HM Kim, Department of Mathematics and Statistics, University of Calgary, Fall 2010

## Markov Chain Monte Carlo (MCMC)

MCMC addresses high-dimensional problems:

- The integration of high-dimensional functions can be computationally very difficult, and often very little is known about the target density f.
- MCMC methods attempt to simulate direct draws from some complex distribution of interest.
- The draws are correlated: one uses the previous sample value to randomly generate the next sample value.
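As a concrete sketch of this idea, the following implements random-walk Metropolis, one standard MCMC algorithm. The one-dimensional target (a standard normal, via its log density) and the step size are illustrative assumptions for this example, not part of the slides:

```python
import math
import random

def metropolis(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: each draw is generated from the previous one."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)        # propose near the current state
        log_accept = log_target(proposal) - log_target(x)
        if math.log(rng.random()) < log_accept:    # accept with prob min(1, ratio)
            x = proposal
        samples.append(x)                          # consecutive draws are correlated
    return samples

# Illustrative target: standard normal log density, up to a constant.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=5000)
```

Because each proposal is generated from the previous sample value, the output is a Markov chain of correlated draws rather than independent samples.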
## Markov chains

Markov chains are a special type of stochastic process: a process that moves around a set of possible values whose future values cannot be predicted with certainty. The set of possible values is called the state space of the process. Let X(n) denote the value of a random variable at time n, and let the state space refer to the range of possible X values. The process is a Markov process if the transition probabilities between different values in the state space depend only on the current state. A Markov chain is a sequence of dependent random variables X(0), X(1), ..., X(n) generated by a Markov process.

## The Markov property

For example, the joint probabilities of the process at times 0, 1, 2 factorize as

    P(X(2) = x2, X(1) = x1, X(0) = x0)
      = P(X(2) = x2 | X(1) = x1, X(0) = x0) P(X(1) = x1 | X(0) = x0) P(X(0) = x0)
      = P(X(2) = x2 | X(1) = x1) P(X(1) = x1 | X(0) = x0) P(X(0) = x0),

where the first conditional simplifies by the Markov property.
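The factorization can be checked numerically on a small two-state chain. The transition matrix and initial distribution below are illustrative values, not from the slides:

```python
# P[i][j] = P(X(n+1) = j | X(n) = i), using 0-based state labels.
P = [[0.7, 0.3],
     [0.4, 0.6]]
p0 = [0.5, 0.5]          # initial distribution P(X(0) = i)

# Joint probability of the path x0 = 0, x1 = 1, x2 = 0 via the factorization:
x0, x1, x2 = 0, 1, 0
joint = P[x1][x2] * P[x0][x1] * p0[x0]   # 0.4 * 0.3 * 0.5 = 0.06
```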
## Transition probabilities

A particular chain is defined most critically by its transition probabilities (the transition kernel)

    p_{i,j} = P(X(n+1) = j | X(n) = i),

the probability that a process at state i moves to state j in a single step.

Example: p_{1,1} = P(X(n+1) = 1 | X(n) = 1); p_{1,2} = P(X(n+1) = 2 | X(n) = 1).

Note that we restrict ourselves to time-invariant Markov chains, where the transition probabilities depend only on the states, not on the time n.
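A sketch of simulating a chain from its transition matrix (the two-state matrix here is an illustrative assumption); note that each step uses only the current state:

```python
import random

# Illustrative transition matrix: row i holds p_{i,j} for each state j.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def simulate(P, start, n_steps, seed=0):
    """Generate a sample path; the next state depends only on the current one."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(n_steps):
        current = path[-1]
        # Draw the next state using the current state's row as weights.
        nxt = rng.choices(range(len(P)), weights=P[current])[0]
        path.append(nxt)
    return path

path = simulate(P, start=0, n_steps=10)
```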

## Multi-step transition probabilities

One-step transition probabilities are collected in the K x K matrix

    P = [ p_{1,1} ... p_{1,K}
          ...          ...
          p_{K,1} ... p_{K,K} ]

Two-step transition probabilities form the matrix P^(2) with entries

    p^(2)_{i,j} = P(X(n+2) = j | X(n) = i),

the probability of going from state i to state j in two steps, for any n.
Conditioning on the intermediate state X(1) = k,

    p^(2)_{i,j} = P(X(2) = j | X(0) = i)
                = sum_{k=1}^{K} P(X(2) = j, X(1) = k | X(0) = i)
                = sum_{k=1}^{K} P(X(2) = j | X(1) = k, X(0) = i) P(X(1) = k | X(0) = i)
                = sum_{k=1}^{K} P(X(2) = j | X(1) = k) P(X(1) = k | X(0) = i)
                = sum_{k=1}^{K} p_{i,k} p_{k,j}
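The final identity can be verified directly for a small chain; the 3-state matrix below is made up for illustration:

```python
# Illustrative 3-state transition matrix (each row sums to 1).
P = [[0.2, 0.5, 0.3],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]
K = len(P)

# p^(2)_{i,j} = sum_k p_{i,k} p_{k,j}, summing over the intermediate state k.
P2 = [[sum(P[i][k] * P[k][j] for k in range(K)) for j in range(K)]
      for i in range(K)]
```

Each row of P2 is again a probability distribution, as a two-step transition matrix must be.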

For K = 2, squaring the one-step matrix gives

    P^2 = [ p_{1,1} p_{1,2} ] [ p_{1,1} p_{1,2} ]
          [ p_{2,1} p_{2,2} ] [ p_{2,1} p_{2,2} ]

        = [ p_{1,1}p_{1,1} + p_{1,2}p_{2,1}   p_{1,1}p_{1,2} + p_{1,2}p_{2,2} ]
          [ p_{2,1}p_{1,1} + p_{2,2}p_{2,1}   p_{2,1}p_{1,2} + p_{2,2}p_{2,2} ]

        = [ sum_{k=1}^{2} p_{1,k} p_{k,1}   sum_{k=1}^{2} p_{1,k} p_{k,2} ]
          [ sum_{k=1}^{2} p_{2,k} p_{k,1}   sum_{k=1}^{2} p_{2,k} p_{k,2} ]

        = P^(2)

So P^(2) = P x P = P^2: ordinary matrix multiplication.

n-step transition probabilities: P^(n) = P^n. Defining the n-step transition probability p^(n)_{i,j} = P(X(t+n) = j | X(t) = i), it follows that p^(n)_{i,j} is the (i,j)th element of P^n.
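In code the n-step matrix is just a matrix power; a sketch with NumPy and an illustrative two-state matrix:

```python
import numpy as np

# Illustrative transition matrix.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

P2 = P @ P                              # two-step: P^(2) = P * P
P10 = np.linalg.matrix_power(P, 10)     # n-step: P^(n) = P^n

# Each row of P^n is still a probability distribution over the states.
```

With NumPy's 0-based indexing, p^(n)_{i,j} is `P10[i-1, j-1]`.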