mild conditions we will still have convergence to the stationary distribution, whatever it is, but
our simple calculations go out the window. Is there a general theorem we can appeal to,
analogous to the Basic Limit Theorem we got in the discrete space case?
1.71 Example [Markov sampling]. We have seen this idea before in discrete state spaces; it works more generally as well. If we want to simulate a sample from a given probability distribution $\pi$ on a set $S$, the Basic Limit Theorem tells us that we can do this approximately by running a Markov chain having state space $S$ and stationary distribution $\pi$. There are a number of popular methods for manufacturing a Markov chain having a given desired distribution as its stationary distribution, such as the Metropolis method and the Gibbs sampler.
As discussed earlier, the Gibbs sampler proceeds by simulating from conditional distributions that are, one hopes, simpler to simulate from than the original distribution. For example, suppose we wish to simulate from a given probability density function $f$ on $\mathbb{R}^2$, which is an uncountable set, not discrete. For purposes of this discussion let $(X, Y)$ denote a pair of random variables having joint density $f$. We would like to simulate such a pair of random variables, at least approximately. Given that at time $t$ we are at the state $(X_t, Y_t) = (x, y)$, we generate the next state $(X_{t+1}, Y_{t+1})$ as follows. Flip a coin. If Heads, let $X_{t+1} = X_t = x$, and draw $Y_{t+1}$ from the conditional distribution of $Y$ given $X = x$. If Tails, let $Y_{t+1} = Y_t = y$, and draw $X_{t+1}$ from the conditional distribution of $X$ given $Y = y$. The sequence $\{(X_t, Y_t) : t = 0, 1, \ldots\}$ is a Markov chain having stationary density $f$.
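The coin-flip scheme just described can be sketched in code. The bivariate density below is an assumption chosen for illustration, not one from the text: a standard bivariate normal with correlation rho, for which each conditional distribution is known exactly to be $N(\rho \cdot \text{other}, 1 - \rho^2)$, so both conditional draws are easy.

```python
import random

def gibbs_bivariate_normal(rho, n_steps, seed=0):
    """Random-scan Gibbs sampler for a standard bivariate normal
    with correlation rho (an illustrative choice of target density).
    At each step a fair coin decides which coordinate is held fixed;
    the other is drawn from its exact conditional distribution,
    which here is N(rho * other, 1 - rho**2)."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0                       # arbitrary starting state
    sd = (1.0 - rho * rho) ** 0.5         # conditional standard deviation
    samples = []
    for _ in range(n_steps):
        if rng.random() < 0.5:            # Heads: keep x, redraw y
            y = rng.gauss(rho * x, sd)
        else:                             # Tails: keep y, redraw x
            x = rng.gauss(rho * y, sd)
        samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8, n_steps=50_000)
# Discard an initial "burn-in" segment before estimating moments,
# since early states still reflect the arbitrary starting point.
tail = samples[10_000:]
mean_x = sum(s[0] for s in tail) / len(tail)     # should be near 0
corr = sum(s[0] * s[1] for s in tail) / len(tail)  # E[XY] = rho here
```

Note that nothing in the sampler ever evaluates the joint density $f$ itself; only the two conditional distributions are needed, which is precisely the appeal of the method.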
What we would like here is a general Basic Limit Theorem that would allow us to prove that the Gibbs sampler Markov chain converges in distribution to its stationary distribution.

1.10.1 Chains with an atom

Do you remember our proof of the Basic Limit Theorem in the discrete case? We used the
coupling idea: run two independent copies of the chain until they couple , that is, until they
hit the same state at some time $T$. The coupling inequality $\|\pi_t - \pi\| \le P\{T > t\}$ reduced the problem of showing that $\|\pi_t - \pi\| \to 0$ to the problem of showing that $P\{T < \infty\} = 1$. In other words, we reduced the problem to showing that with probability 1, the two
chains eventually must couple. However, in typical examples in general state spaces, each
individual state is hit with probability 0, and independent copies of the chain will never couple.

[Stochastic Processes, J. Chang, March 30, 1999. 1. MARKOV CHAINS, Page 1-38]

An atom is a state that is hit with positive probability. If a Markov chain has an
atom, then we can hope to carry through the same sort of coupling argument as we used in the discrete case. In this section we develop a basic limit theorem for chains having an atom.
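In the discrete case, the coupling inequality can be checked numerically: run two independent copies of a chain, one from a fixed state and one from the stationary distribution, record when they first meet, and compare $P\{T > t\}$ against the total variation distance $\|\pi_t - \pi\|$. A minimal sketch, using a small made-up 3-state transition matrix (not from the text):

```python
import random

# An illustrative 3-state chain; each row of P sums to 1.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

def sample(dist, rng):
    """Draw an index from a discrete distribution by inverting the CDF."""
    u, cum = rng.random(), 0.0
    for j, p in enumerate(dist):
        cum += p
        if u < cum:
            return j
    return len(dist) - 1

def step(state, rng):
    return sample(P[state], rng)

def dist_at_time(t, start=0):
    """Distribution of X_t started from `start`, by iterating mu <- mu P."""
    mu = [0.0] * len(P)
    mu[start] = 1.0
    for _ in range(t):
        mu = [sum(mu[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return mu

def tv(mu, nu):
    """Total variation distance between two distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(mu, nu))

# Approximate the stationary distribution by iterating many steps.
pi = dist_at_time(200)

# Estimate P{T > t}: one copy starts at state 0, the other starts
# stationary; run both independently until they first occupy the
# same state, and record that coupling time T.
rng = random.Random(1)
n_runs = 20_000
coupling_times = []
for _ in range(n_runs):
    x, y, t = 0, sample(pi, rng), 0
    while x != y:
        x, y, t = step(x, rng), step(y, rng), t + 1
    coupling_times.append(t)

# Check the coupling inequality ||pi_t - pi|| <= P{T > t} at a few
# times, with a small allowance for Monte Carlo error on the right side.
for t in (1, 2, 3, 5):
    lhs = tv(dist_at_time(t), pi)
    rhs = sum(ct > t for ct in coupling_times) / n_runs
    assert lhs <= rhs + 0.02
```

The point of the sketch is the contrast drawn above: in this discrete chain the two copies meet after a few steps with high probability, whereas for copies moving through a continuum each individual state is hit with probability 0 and an exact meeting never occurs.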
1.72 Definition. An accessible atom is a state $\alpha$ that is hit with positive probability starting from each state, that is, $\sum_{t=1}^{\infty} P_x\{X_t = \alpha\} > 0$ for...