To find the stationary distribution of a k-state chain
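As one hedged illustration (not from the text itself), the Python sketch below finds the stationary distribution of a small chain by solving the balance equations $\pi Q = 0$ together with the normalization $\sum_i \pi_i = 1$. The helper name `stationary_distribution` and the example rate matrix are assumptions made here for illustration; for a discrete-time transition matrix $p$ the same code applies with $Q$ replaced by $p - I$.

```python
import numpy as np

def stationary_distribution(Q):
    """Solve pi @ Q = 0 with the entries of pi summing to 1.

    Q is the k x k rate matrix of a continuous-time chain (rows sum to 0).
    For a discrete-time transition matrix p, pass Q = p - I.
    Hypothetical helper, not taken from the text.
    """
    k = Q.shape[0]
    A = Q.T.copy()        # A @ pi = 0 encodes the balance equations pi Q = 0
    A[-1, :] = 1.0        # replace one equation with the constraint sum(pi) = 1
    b = np.zeros(k)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Example with k = 3; the rates are chosen only for illustration.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 2.0,  2.0, -4.0]])
pi = stationary_distribution(Q)
print(pi, pi @ Q)         # pi @ Q should be numerically zero
```

Replacing one balance equation with the normalization constraint works because, for an irreducible chain, the balance equations have a one-dimensional solution space, so the modified linear system has a unique solution.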

Multiplying this by the probability for each $(n_1, \dots, n_i)$ gives the result.

4.3 Limiting Behavior

Having worked hard to develop the convergence theory for discrete time chains, the results for the continuous time case follow easily. In fact the study of the limiting behavior of continuous time Markov chains is simpler than the theory for discrete time chains, since the randomness of the exponential holding times implies that we don't have to worry about aperiodicity. We begin by generalizing some of the previous definitions.

The Markov chain $X_t$ is irreducible if for any two states $i$ and $j$ it is possible to get from $i$ to $j$ in a finite number of jumps. To be precise, there is a sequence of states $k_0 = i, k_1, \dots, k_n = j$ so that $q(k_{m-1}, k_m) > 0$ for $1 \le m \le n$.

Lemma 4.2. If $X_t$ is irreducible and $t > 0$, then $p_t(i, j) > 0$.

Proof. Since $p_s(j, j) \ge e^{-\lambda_j s} > 0$ and $p_{t+s}(i, j) \ge p_t(i, j)\, p_s(j, j)$, it suffices to show that this holds for small $t$. Since $\lim_{h \to 0} p_h(k_{m-1}, k_m)/h = q(k_{m-1}, k_m)$, it follows that if $h$ is small enough we have $p$...
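As a hedged numerical illustration of Lemma 4.2 (not taken from the text), the sketch below builds a small irreducible rate matrix, with rates made up for the example, and checks that the transition matrix $p_t = e^{tQ}$ has strictly positive entries for several values of $t > 0$.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state rate matrix, irreducible because every state
# can reach every other state through a chain of jumps with q(i, j) > 0.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 1.0,  0.0, -1.0]])

for t in (0.01, 0.1, 1.0, 10.0):
    pt = expm(t * Q)            # transition probabilities p_t = exp(tQ)
    print(t, pt.min() > 0)      # every entry is strictly positive
```

Even for very small $t$, entries such as $p_t(0, 2)$ are positive despite $q(0, 2) = 0$, because the chain can pass through an intermediate state, which is exactly the mechanism the proof exploits.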