table by $a_{ij}$. Choose a pair $\{i_1, i_2\}$ of rows at random, that is, uniformly over the $\binom{4}{2} = 6$ possible pairs. Similarly, choose a random pair of columns $\{j_1, j_2\}$. Then flip a coin. If you get heads: add 1 to $a_{i_1 j_1}$ and $a_{i_2 j_2}$, and subtract 1 from $a_{i_1 j_2}$ and $a_{i_2 j_1}$, if you can do so without producing any negative entries; if you cannot do so, then do nothing. Similarly, if the coin flip comes up tails, then subtract 1 from $a_{i_1 j_1}$ and $a_{i_2 j_2}$, and add 1 to $a_{i_1 j_2}$ and $a_{i_2 j_1}$, with the same nonnegativity proviso, and otherwise do nothing. This describes a random transformation of the original table that results in a new table in the desired set of tables $S$. Now repeat the same random transformation on the new table, and so on.

1.33 Exercise. Assuming the validity of the Basic Limit Theorem, show that if we run the "algorithm" in Example 1.32 for a "long time," then we will end up with a random table having probability distribution very close to the desired distribution. In order to do this, show that

1. the procedure generates a Markov chain whose state space is $S$,
2. that Markov chain is irreducible,
3. that Markov chain is aperiodic, and
4. that Markov chain has the desired distribution (that is, uniform on $S$) as its stationary distribution.

I consider Exercise 1.33 to be an interesting application of the Basic Limit Theorem. I hope it helps whet your appetite for digesting the proof of that theorem!

For the proof of the Basic Limit Theorem, we will need one more concept: recurrence. Analogously to what we did with the notion of periodicity, we will begin by saying what a recurrent state is, and then show in Theorem 1.35 below that recurrence is actually a class property. In particular, in an irreducible Markov chain, either all states are recurrent or all states are transient, which means "not recurrent." Thus, if a chain is irreducible, we can speak of the chain being either recurrent or transient.
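The random transformation just described can be sketched in code. The following is a minimal illustration, not a definitive implementation; the function name `step` and the use of 4x4 tables of nonnegative integers are my own choices for the sketch. Note that each move either adds and subtracts 1 on the four "corner" entries determined by the chosen rows and columns, or does nothing, so row sums and column sums are preserved.

```python
import random

def step(table, rng=random):
    """One move of the chain: pick a random pair of rows and a random pair
    of columns, flip a coin, and adjust the four corner entries by +/-1,
    rejecting the move if any entry would become negative."""
    n = len(table)
    i1, i2 = rng.sample(range(n), 2)   # random pair of rows
    j1, j2 = rng.sample(range(n), 2)   # random pair of columns
    # Heads: +1 on (i1,j1) and (i2,j2), -1 on (i1,j2) and (i2,j1).
    # Tails: the signs are reversed.
    delta = 1 if rng.random() < 0.5 else -1
    new = [row[:] for row in table]
    new[i1][j1] += delta
    new[i2][j2] += delta
    new[i1][j2] -= delta
    new[i2][j1] -= delta
    if min(new[i1][j1], new[i2][j2], new[i1][j2], new[i2][j1]) >= 0:
        return new
    return table  # nonnegativity proviso violated: do nothing
```

Running `step` repeatedly from any starting table in $S$ walks around $S$ without ever changing the row or column sums, which is why the state space of the resulting Markov chain is exactly the set $S$ of tables with the given margins.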
The idea of recurrence is this: a state $i$ is recurrent if, starting from the state $i$ at time 0, the chain is sure to return to $i$ eventually. More precisely, define the first hitting time $T_i$ of the state $i$ by
$$T_i = \inf\{n > 0 : X_n = i\},$$
and make the following definition.

Stochastic Processes, J. Chang, March 30, 1999

1.6. IRREDUCIBILITY, PERIODICITY, AND RECURRENCE

1.34 Definition. The state $i$ is recurrent if $P_i\{T_i < \infty\} = 1$. If $i$ is not recurrent, it is called transient.

The meaning of recurrence is this: state $i$ is recurrent if, when the Markov chain is started out in state $i$, the chain is certain to return to $i$ at some finite future time. Observe the difference in spirit between this and the definition of "accessible from" (see the paragraph containing (1.27)), which requires only that it be possible for the chain to hit a state $j$. In terms of the first hitting time notation, the definition of "accessible from" may be restated as follows: for distinct states $i \neq j$, we say that $j$ is accessible from $i$ if and only if $P_i\{T_j < \infty\} > 0$. Why did I bother to say "for distinct states $i \neq j$"? Here is the promised result that implies that recurrence is a class property.

1.35 Theorem. Let $i$ be a recurrent state, and suppose that $j$ is accessible from $i$. Th...
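The first hitting time $T_i$ can be estimated by simulation. Here is a minimal sketch under assumptions of my own: a two-state chain with a transition matrix `P` that I chose for illustration, and a simulation horizon `max_steps` that stands in for "$T_i = \infty$" when no return is observed (a truncation the formal definition does not need).

```python
import random

def first_return_time(P, i, max_steps=10_000, rng=random):
    """Simulate the chain started at state i and return the first n > 0
    with X_n = i, i.e. a draw of T_i; return None if no return occurs
    within max_steps (a proxy for T_i = infinity)."""
    state = i
    for n in range(1, max_steps + 1):
        # Sample the next state from row `state` of P.
        r = rng.random()
        for j, p in enumerate(P[state]):
            r -= p
            if r < 0:
                state = j
                break
        else:
            state = len(P) - 1  # guard against floating-point rounding
        if state == i:
            return n
    return None

# An illustrative irreducible two-state chain (hence every state is recurrent).
P = [[0.5, 0.5],
     [0.2, 0.8]]
returns = [first_return_time(P, 0) for _ in range(2000)]
```

For this chain the stationary distribution puts mass $2/7$ on state 0, so the empirical mean of the return times should hover near $7/2 = 3.5$, consistent with the identity $E_i T_i = 1/\pi_i$ for positive recurrent chains.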