ISyE8843A, Brani Vidakovic — Handout 10

1 MCMC Methodology.

Independence of X_1, ..., X_n is not critical for an approximation of the form

    E^\pi h(X) \approx \frac{1}{n} \sum_{i=1}^{n} h(X_i), \quad X_i \sim \pi(x).

In fact, when the X's are dependent, ergodic theorems describe the quality of the approximation. An easy and convenient form of dependence is Markov chain dependence. Markov dependence is well suited to computer simulation, since producing a future realization of the chain requires only the current state.

1.1 Theoretical Background and Notation

Random variables X_1, X_2, ..., X_n, ... constitute a Markov chain on a continuous state space if they possess the Markov property,

    P(X_{n+1} \in A \mid X_1, ..., X_n) = P(X_{n+1} \in A \mid X_n) = Q(X_n, A) = Q(A \mid X_n),

for some probability distribution Q. Typically, Q is assumed time-homogeneous, i.e., independent of n (time). The transition kernel (from state n to state n+1) defines a probability measure on the state space, and we will assume that its density q exists, i.e.,

    Q(A \mid X_n = x) = \int_A q(x, y)\, dy = \int_A q(y \mid x)\, dy.

A distribution \pi is invariant if, for all measurable sets A,

    \pi(A) = \int Q(A \mid x)\, \pi(dx).

If the transition density exists, \pi is stationary if

    q(x \mid y)\, \pi(y) = q(y \mid x)\, \pi(x).

Here and in the sequel we assume that the density of \pi exists, \pi(A) = \int_A \pi(x)\, dx.

A distribution \pi is an equilibrium distribution if, for Q^n(A \mid x) = P(X_n \in A \mid X_0 = x),

    \lim_{n \to \infty} Q^n(A \mid x) = \pi(A).

In plain terms, the Markov chain will forget the initial distribution and converge to the stationary distribution.

The Markov chain is \pi-irreducible if for each A for which \pi(A) > 0, and for each x, one can find n so that Q^n(A \mid x) > 0.

The Markov chain X_1, ..., X_n, ... is recurrent if for each B such that \pi(B) > 0,

    P(X_n \in B \text{ i.o.} \mid X_0 = x) = 1 \quad \text{a.s. (in the distribution of } X_0\text{)}.

It is Harris recurrent if

    P(X_n \in B \text{ i.o.} \mid X_0 = x) = 1, \quad \forall x.

The acronym i.o. stands for "infinitely often."
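To make the ergodic-average approximation concrete, here is a minimal sketch. The AR(1) chain used below is an illustrative assumption, not taken from the handout; its stationary distribution is N(0, 1/(1 - rho^2)), so the time average of h(X) = X^2 along the dependent chain should recover the stationary variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative AR(1) Markov chain (an assumption for this sketch):
#   X_{n+1} = rho * X_n + eps_n,  eps_n ~ N(0, 1),
# whose stationary distribution is N(0, 1 / (1 - rho^2)).
rho = 0.5
n = 200_000
x = np.empty(n)
x[0] = 0.0
eps = rng.standard_normal(n)
for i in range(1, n):
    x[i] = rho * x[i - 1] + eps[i]

# Ergodic average of h(X) = X^2: despite the dependence, the time
# average converges to E^pi h(X) = 1 / (1 - rho^2) = 4/3 here.
est = (x ** 2).mean()
print(f"ergodic average: {est:.4f}, stationary variance: {1 / (1 - rho**2):.4f}")
```

The consecutive X_i are clearly dependent, yet the average still approximates the stationary expectation — exactly the point the ergodic theorems make.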
[Figure 1: Nicholas Constantine Metropolis, 1915–1999]

1.2 Metropolis Algorithm

The Metropolis algorithm is fundamental to MCMC development. Assume that the target distribution \pi is known up to a normalizing constant. We would like to construct a chain with \pi as its stationary distribution. As in the ARM, we take a proposal distribution q(x, y) = q(y \mid x), where y is the proposed new value of the chain, given that the chain is currently at value x. Thus q defines a transition kernel

    Q(A, x) = \int_A q(y \mid x)\, dy,

which is the probability of a transition to some y \in A.

Detailed Balance Equation. A Markov chain with transition density q(x, y) = q(y \mid x) satisfies the detailed balance equation if there exists a distribution f such that

    q(y \mid x)\, f(x) = q(x \mid y)\, f(y).    (1)

The distribution f is then stationary (invariant), and the chain is reversible.
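The detailed balance condition (1) suggests how a chain targeting \pi can be built. As a hedged illustration (a standard construction, not the handout's own derivation), the sketch below runs a random-walk Metropolis chain with a symmetric proposal: accepting the move with probability min{1, \pi(y)/\pi(x)} makes (1) hold with f = \pi, and the normalizing constant of \pi cancels in the ratio. The target here is assumed to be a standard normal known only up to a constant.

```python
import numpy as np

rng = np.random.default_rng(1)

def pi_unnorm(x):
    """Target density known only up to a normalizing constant
    (assumed here: standard normal, constant omitted)."""
    return np.exp(-0.5 * x * x)

n = 100_000
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    y = x[i - 1] + rng.normal()  # symmetric proposal q(y | x)
    # Metropolis acceptance: for symmetric q, accepting with
    # probability min(1, pi(y)/pi(x)) satisfies detailed balance
    # with f = pi; the unknown normalizing constant cancels.
    if rng.random() < min(1.0, pi_unnorm(y) / pi_unnorm(x[i - 1])):
        x[i] = y
    else:
        x[i] = x[i - 1]

print(f"sample mean {x.mean():.3f}, sample variance {x.var():.3f}")
```

With enough iterations the sample mean and variance approach 0 and 1, the moments of the stationary N(0, 1) target, even though the sampler never evaluated the normalizing constant.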
Spring '11, VIDAKOVIC