... then in fact all of the following hold:

(i) $P_i\{T_j < \infty\} = 1$;
(ii) $P_j\{T_i < \infty\} = 1$;
(iii) The state $j$ is recurrent.

Proof: The proof will be given somewhat informally; it can be rigorized. Suppose $i \neq j$, since the result is trivial otherwise.

Firstly, let us observe that (iii) follows from (i) and (ii): clearly if (ii) holds (that is, starting from $j$ the chain is certain to visit $i$ eventually) and (i) holds (so that starting from $i$ the chain is certain to visit $j$ eventually), then (iii) must also hold, since starting from $j$ the chain is certain to visit $i$, after which it will definitely get back to $j$.

To prove (i), let us imagine starting the chain in state $i$, so that $X_0 = i$. With probability one, the chain returns at some time $T_i \ge 1$ to $i$. For the same reason, continuing the chain after time $T_i$, the chain is sure to return to $i$ for a second time. In fact, by continuing this argument we see that, with probability one, the chain returns to $i$ infinitely many times. Thus, we may visualize the path followed by the Markov chain as a succession of infinitely many "cycles," where a cycle is a portion of the path between two successive visits to $i$. That is, we'll say that the first cycle is the segment $X_1, \ldots, X_{T_i}$ of the path, the second cycle starts with $X_{T_i + 1}$ and continues up to and including the second return to $i$, and so on. The behaviors of the chain in successive cycles are independent and have identical probabilistic characteristics. In particular, letting $I_n = 1$ if the chain visits $j$ sometime during the $n$th cycle and $I_n = 0$ otherwise, we see that $I_1, I_2, \ldots$ is an iid sequence of Bernoulli trials. Let $p$ denote the common "success probability"
\[
p = P\{\text{visit } j \text{ in a cycle}\} = P_i\Big( \bigcup_{k=1}^{T_i} \{X_k = j\} \Big)
\]
for these trials. Clearly if $p$ were 0, then with probability one the chain would not visit $j$ in any cycle, which would contradict the assumption that $j$ is accessible from $i$. Therefore, $p > 0$. Now observe that in such a sequence of iid Bernoulli trials with a positive success probability, with probability one we will eventually observe a success. In fact,
\[
P_i\{\text{chain does not visit } j \text{ in the first } n \text{ cycles}\} = (1 - p)^n \to 0 \quad \text{as } n \to \infty.
\]
That is, with probability one, eventually there will be a cycle in which the chain does visit $j$, so that (i) holds.

It is also easy to see that (ii) must hold. In fact, suppose to the contrary that $P_j\{T_i = \infty\} > 0$. Combining this with the hypothesis that $j$ is accessible from $i$, we see that it is possible, with positive probability, for the chain to go from $i$ to $j$ in some finite amount of time and then, continuing from state $j$, never to return to $i$. But this contradicts the fact that, starting from $i$, the chain must return to $i$ infinitely many times with probability one. Thus, (ii) holds, and we are done.

The "cycle" idea used in the previous proof is powerful and important; we will be using it again. The next theorem gives a useful equivalent condition for recurrence. The statement uses the notation $N_i$ for the total number of visits of the Markov chain to the state $i$, that is, $N_i = \sum_{n=0}^{\infty} I\{X_n = i\}$.
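The cycle decomposition above lends itself to a quick numerical check. The following is a minimal simulation sketch, not part of the original notes: the 3-state transition matrix, the choice $i = 0$, $j = 2$, and the function name `simulate_cycles` are all illustrative assumptions. It chops a simulated path into cycles at successive returns to $i$, estimates $p = P\{\text{visit } j \text{ in a cycle}\}$ from the iid indicators $I_n$, and evaluates $(1-p)^n$ to show how quickly the probability of avoiding $j$ in the first $n$ cycles decays.

```python
import numpy as np

# Illustrative 3-state chain (states 0, 1, 2). This matrix is an
# assumption, chosen so that every state is recurrent and state 2
# (our "j") is accessible from state 0 (our "i").
P = np.array([
    [0.5, 0.4, 0.1],
    [0.3, 0.3, 0.4],
    [0.2, 0.5, 0.3],
])
i, j = 0, 2
rng = np.random.default_rng(0)

def simulate_cycles(num_cycles):
    """Return I_1, ..., I_num_cycles, where I_n = 1 iff the chain
    visits j sometime during the nth cycle (a cycle being the segment
    of the path between successive returns to i)."""
    indicators = []
    state = i
    visited_j = False
    while len(indicators) < num_cycles:
        state = rng.choice(3, p=P[state])
        if state == j:
            visited_j = True
        if state == i:                  # a return to i closes the cycle
            indicators.append(int(visited_j))
            visited_j = False           # reset: cycles are independent
    return np.array(indicators)

I = simulate_cycles(10_000)
p_hat = I.mean()                        # estimate of p = P{visit j in a cycle}
print(f"estimated p = {p_hat:.3f}")

# P_i{no visit to j in the first n cycles} = (1 - p)^n -> 0:
for n in (1, 5, 10):
    print(f"(1 - p_hat)^{n} = {(1 - p_hat) ** n:.4f}")
```

Note how resetting `visited_j` at each return to $i$ mirrors the key point of the proof: by the Markov property, what the chain does in one cycle is independent of what it did in earlier cycles, which is exactly what makes the $I_n$ an iid Bernoulli sequence.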