…the Binomial$(2m, 1/2)$ distribution by the Normal distribution $N(m, m/2)$, with the usual continuity correction:
\[
P\{\mathrm{Binomial}(2m, 1/2) = m\} \approx P\{m - 1/2 \le N(m, m/2) \le m + 1/2\}
\]
\[
= P\{-(1/2)\sqrt{2/m} \le N(0,1) \le (1/2)\sqrt{2/m}\}
\]
\[
\approx \varphi(0)\sqrt{2/m} = \frac{1}{\sqrt{2\pi}}\sqrt{2/m} = \frac{1}{\sqrt{\pi m}},
\]
where $\varphi$ denotes the standard normal density.
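As a quick numerical sanity check (mine, not part of the notes), we can compare the exact central binomial probability with the $1/\sqrt{\pi m}$ approximation just derived:

```python
import math

# Exact P{Binomial(2m, 1/2) = m} versus the local-CLT approximation 1/sqrt(pi*m).
for m in (10, 100, 1000):
    exact = math.comb(2 * m, m) / 4 ** m
    approx = 1 / math.sqrt(math.pi * m)
    print(m, exact, approx, exact / approx)
```

The ratio in the last column tends to 1; already at $m = 100$ the two quantities agree to within a fraction of a percent.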
Stochastic Processes. J. Chang, March 30, 1999.

1.6. IRREDUCIBILITY, PERIODICITY, AND RECURRENCE

Although this calculation does not follow as a direct consequence of the usual Central Limit Theorem, it is an example of a "local Central Limit Theorem."

1.41 Exercise [The other 3-dimensional random walk]. Consider a random walk on the 3-dimensional integer lattice; at each time the random walk moves with equal probability
to one of the 6 nearest neighbors, adding or subtracting 1 in just one of the three coordinates.
Show that this random walk is transient.
Hint: You want to show that some series converges. An upper bound on the terms will be enough. How big is the largest probability in the Multinomial$(n; 1/3, 1/3, 1/3)$ distribution?

Here are a few additional problems about a simple symmetric random walk $\{S_n\}$ in one dimension, starting from $S_0 = 0$ at time 0.
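To see the hint of Exercise 1.41 numerically, here is a short computation (mine, not the notes') of the largest probability in the Multinomial$(n; 1/3, 1/3, 1/3)$ distribution; it decays like a constant times $1/n$:

```python
import math

def max_multinomial_prob(n):
    """Largest single probability in the Multinomial(n; 1/3, 1/3, 1/3) distribution."""
    best = 0.0
    for i in range(n + 1):
        for j in range(n + 1 - i):
            # multinomial coefficient n! / (i! j! (n-i-j)!) times (1/3)^n
            p = math.comb(n, i) * math.comb(n - i, j) / 3 ** n
            best = max(best, p)
    return best

for n in (30, 60, 120, 240):
    print(n, n * max_multinomial_prob(n))  # roughly constant, so the max is O(1/n)
```

One standard way to use this bound: the return probability of the 3-dimensional walk at time $2n$ is at most a one-dimensional factor of order $n^{-1/2}$ times this $O(1/n)$ maximum, so it is $O(n^{-3/2})$, and $\sum_n n^{-3/2}$ converges, giving transience.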
1.42 Exercise. Let $a$ and $b$ be integers with $a < 0 < b$. Defining the hitting times $\tau_c = \inf\{n \ge 0 : S_n = c\}$, show that the probability $P\{\tau_b < \tau_a\}$ is given by $(0 - a)/(b - a)$.
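The hitting-probability formula of Exercise 1.42 can be checked by first-step analysis: $h(x) = P\{\tau_b < \tau_a \mid S_0 = x\}$ satisfies $h(a) = 0$, $h(b) = 1$, and $h(x) = \tfrac{1}{2}h(x-1) + \tfrac{1}{2}h(x+1)$ for $a < x < b$. The sketch below (mine, with $a = -4$, $b = 6$ chosen purely for illustration) solves this linear system and compares with $(x - a)/(b - a)$:

```python
import numpy as np

# First-step analysis for h(x) = P{ hit b before a | S_0 = x }:
# h(a) = 0, h(b) = 1, and h(x) = (h(x-1) + h(x+1)) / 2 for a < x < b.
a, b = -4, 6
n = b - a - 1                      # interior states a+1, ..., b-1
A = np.zeros((n, n))
rhs = np.zeros(n)
for idx in range(n):
    A[idx, idx] = 1.0
    if idx > 0:
        A[idx, idx - 1] = -0.5     # coefficient of h(x-1)
    if idx < n - 1:
        A[idx, idx + 1] = -0.5     # coefficient of h(x+1)
rhs[-1] = 0.5                      # boundary value h(b) = 1 enters at x = b - 1
h = np.linalg.solve(A, rhs)
for x, hx in zip(range(a + 1, b), h):
    print(x, hx, (x - a) / (b - a))   # the two columns agree
```

In particular, starting from 0 the computed probability is $(0 - a)/(b - a) = 0.4$ for these values of $a$ and $b$.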
Show that $P\{\cdots\}$.

1.43 Exercise. Let $S_0, S_1, \ldots$ be a simple, symmetric random walk in one dimension as we have discussed, with $S_0 = 0$. Show that
\[
P\{S_1 \ne 0, \ldots, S_{2n} \ne 0\} = P\{S_{2n} = 0\}.
\]
Now you can do a calculation that explains why the expected time to return to 0 is infinite.

1.44 Exercise. As in the previous exercise, consider a simple, symmetric random walk started out at 0. Letting $k \ne 0$ be any fixed state, show that the expected number of times the random walk visits state $k$ before returning to state 0 is 1.

We'll end this section with a discussion of the relationship between recurrence and the existence of a stationary distribution. The results will be useful in the next section.
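The identity in Exercise 1.43 can be verified by brute force for small $n$ (a quick script of mine, not part of the notes), enumerating all $2^{2n}$ equally likely step sequences:

```python
import itertools

# Check P{S_1 != 0, ..., S_{2n} != 0} = P{S_{2n} = 0} by exhaustive enumeration.
def check(n):
    never_zero = 0     # paths avoiding 0 at all times 1, ..., 2n
    ends_at_zero = 0   # paths with S_{2n} = 0
    for steps in itertools.product((-1, 1), repeat=2 * n):
        s, hit = 0, False
        for step in steps:
            s += step
            if s == 0:
                hit = True
        if not hit:
            never_zero += 1
        if s == 0:
            ends_at_zero += 1
    return never_zero, ends_at_zero

for n in (1, 2, 3, 4):
    print(n, check(n))   # the two counts coincide for each n
```

Combined with the local CLT estimate $P\{S_{2n} = 0\} \approx 1/\sqrt{\pi n}$, the identity shows $P\{T_0 > 2n\}$ has a non-summable tail, which is the calculation behind the infinite expected return time.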
1.45 Proposition. Suppose a Markov chain has a stationary distribution $\pi$. If the state $j$ is transient, then $\pi(j) = 0$.

Proof: Since $\pi$ is stationary, we have $\pi P^n = \pi$ for all $n$, so that

(1.46) \[ \sum_i \pi(i) P^n(i, j) = \pi(j) \quad \text{for all } n. \]

However, since $j$ is transient, Corollary 1.37 says that $\lim_{n \to \infty} P^n(i, j) = 0$ for all $i$. Thus, the left side of (1.46) approaches 0 as $n \to \infty$, which implies that $\pi(j)$ must be 0.
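To illustrate Proposition 1.45 concretely (my example, not from the notes), take a three-state chain in which state 0 is transient because it leads into the closed class $\{1, 2\}$ and is never revisited; the stationary distribution then puts mass 0 on state 0:

```python
import numpy as np

# State 0 is transient: once the chain moves to the closed class {1, 2},
# it never returns. Proposition 1.45 forces pi(0) = 0.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.3, 0.7],
              [0.0, 0.6, 0.4]])

pi = np.array([1/3, 1/3, 1/3])
for _ in range(200):       # power iteration: pi P^n converges for this chain
    pi = pi @ P
print(pi)                  # first entry is (numerically) 0, as predicted
```

Solving $\pi P = \pi$ by hand for this chain gives $\pi = (0, 6/13, 7/13)$, matching the iteration.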
The last bit of reasoning about equation (1.46) may look a little strange, but in fact $\pi(i) P^n(i, j) = 0$ for all $i$ and $n$. In light of what we now know, this is easy to see. Firstly, if $i$ is transient, then $\pi(i) = 0$. Otherwise, if $i$ is recurrent, then $P^n(i, j) = 0$ for all $n$, since if not, then $j$ would be accessible from $i$, which would contradict the assumption that $j$ is transient.
1.47 Corollary. If an irreducible Markov chain has a stationary distribution, then the chain is recurrent.

Proof: Being irreducible, the chain must be either recurrent or transient. However, if the chain were transient...
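As an illustration of the corollary (an example of mine, not from the notes), the simple random walk on a 5-cycle is irreducible with uniform stationary distribution, hence recurrent. Consistent with this, and in contrast with the transient behavior in Corollary 1.37, $P^n(0, 0)$ does not tend to 0:

```python
import numpy as np

# Simple random walk on a 5-cycle: irreducible, stationary distribution
# uniform (1/5 each), hence recurrent by Corollary 1.47. Accordingly,
# P^n(0,0) tends to 1/5 rather than to 0.
N = 5
P = np.zeros((N, N))
for i in range(N):
    P[i, (i - 1) % N] = 0.5
    P[i, (i + 1) % N] = 0.5

Pn = np.linalg.matrix_power(P, 200)
print(Pn[0, 0])   # close to 1/5
```

(The odd cycle length makes the walk aperiodic, so $P^n(0, 0)$ actually converges to $1/5$ rather than merely staying bounded away from 0 along a subsequence.)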