Discrete-time stochastic processes

This then implies that lim t→∞ Pr{Nij(t) ≥ n} = 1 for all n, or ...

... returns to a given state. These renewal processes were used to rederive the basic properties of Markov chains using renewal theory, as opposed to the algebraic Perron-Frobenius approach of Chapter 4. The central result of this was Theorem 5.4, which showed that, for an irreducible chain, the states are positive-recurrent iff the steady-state equations, (5.14), have a solution. Also, if (5.14) has a solution, it is positive and unique. We also showed that these steady-state probabilities are, with probability 1, time-averages for sample paths, and that, for an ergodic chain, they are limiting probabilities independent of the starting state.

We found that the major complications that result from countable state spaces are, first, different kinds of transient behavior, and second, the possibility of null-recurrent states. For finite-state Markov chains, a state is transient only if it can reach some other state from which it can't return. For countably infinite chains, there is also the case, as in Figure 5.1 for p > 1/2, where the state just wanders away, never to return. Null recurrence is a limiting situation where the state wanders away and returns with probability 1, but with an infinite expected time. There is not much engineering significance to null recurrence; it is highly sensitive to modeling details over the entire infinite set of states. One usually uses countably infinite chains to simplify models; for example, if a buffer is very large and we don't expect it to overflow, we assume it is infinite. Finding out, then, that the chain is transient or null-recurrent simply means that the modeling assumption was not very good.

Branching processes were introduced in Section 5.3 as a model to study the growth of various kinds of elements that reproduce. In general, for these models (assuming p0 > 0), there is one trapping state and all other states are transient.
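For such a branching model, the probability that the trapping state (population 0) is eventually entered is the smallest root q of q = h(q), where h is the probability generating function of the offspring distribution, and iterating h from 0 gives the probability of extinction by generation n, which converges upward to q. A minimal numerical sketch, assuming a hypothetical offspring distribution (0 offspring with probability 0.25, 2 with probability 0.75, so the mean is 1.5 > 1 and q < 1; these numbers are illustrative, not from the text):

```python
# Extinction probability of a branching process as the smallest fixed
# point of the offspring probability generating function h.

def pgf(q, probs):
    # probs[k] = probability that an element has k offspring
    return sum(pk * q**k for k, pk in enumerate(probs))

def extinction_prob(probs, iters=200):
    # q_{n+1} = h(q_n) from q_0 = 0: q_n = Pr{extinct by generation n},
    # nondecreasing in n and converging to the smallest root of q = h(q).
    q = 0.0
    for _ in range(iters):
        q = pgf(q, probs)
    return q

# Offspring distribution: p0 = 0.25, p1 = 0, p2 = 0.75 (assumed example).
q = extinction_prob([0.25, 0.0, 0.75])
print(round(q, 6))   # smallest root of q = 0.25 + 0.75 q^2, i.e. 1/3
```

When the mean number of offspring is at most 1, the same iteration converges to q = 1, matching the statement that the population then dies out with probability 1.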
Figure 5.3 showed how to find the probability that the trapping state is entered by the nth generation, and also the probability that it is entered eventually. If the expected number of offspring of an element is at most 1, then the population dies out with probability 1; otherwise, the population dies out with some given probability q, and grows without bound with probability 1 − q.

We next studied birth-death Markov chains and reversibility. Birth-death chains are widely used in queueing theory as sampled-time approximations for systems with Poisson arrivals and various generalizations of exponentially distributed service times. Equation (5.30) gives their steady-state probabilities if positive-recurrent, and shows the condition under which they are positive-recurrent. We showed that these chains are reversible if they are positive-recurrent.

Theorems 5.6 and 5.7 provided a simple way to find the steady-state distribution of reversible chains, and also of chains where the backward-chain behavior could be hypothesized or deduced. We used reversibility to show that M/M/1 and M/M/m Markov chains satisfy Burke's theorem for sampled time, namely that the departure process is Bernoulli and that the state at any time is independent of departures before that time.

Round-robin queueing was then used as a more complex example of how to use the backward process to deduce the steady-state distribution of a rather complicated Markov chain; this also gave us added insight into the behavior of queueing systems and allowed us to show that, in the processor-sharing limit, the distribution of the number of customers is the same as that in an M/M/1 queue.

Finally, semi-Markov processes were introduced. Renewal theory again provided the key to analyzing these systems.
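The product-form solution behind (5.30) can be sketched numerically: in a positive-recurrent birth-death chain, detailed balance gives pi_{i+1} q_{i+1} = pi_i p_i, where p_i is the up (birth) probability and q_i the down (death) probability in state i, so the steady-state probabilities are a normalized running product. A sketch under assumed illustrative sampled-time parameters (p_i = 0.3, q_i = 0.5, truncated to 60 states; none of these numbers are from the text):

```python
# Steady-state probabilities of a birth-death chain via detailed balance,
# in the spirit of equation (5.30).

def birth_death_pi(p, q, n_states):
    # p[i]: up-probability from state i; q[i]: down-probability from state i.
    # Detailed balance: pi[i+1] = pi[i] * p[i] / q[i+1].
    pi = [1.0]
    for i in range(n_states - 1):
        pi.append(pi[-1] * p[i] / q[i + 1])
    total = sum(pi)               # finite iff the chain is positive-recurrent
    return [x / total for x in pi]

n = 60
pi = birth_death_pi([0.3] * n, [0.5] * n, n)
# Detailed balance holds across every edge of the (truncated) chain:
print(all(abs(pi[i] * 0.3 - pi[i + 1] * 0.5) < 1e-12 for i in range(n - 1)))
```

With constant p and q the distribution is geometric with ratio p/q, which is the sampled-time analogue of the M/M/1 steady state mentioned above; the chain is positive-recurrent exactly when the unnormalized product sums to a finite value.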
Theorem 5.9 showed how to find the steady-state probabilities of these processes, and it was shown that these probabilities could be interpreted both as time-averages and, in the case of non-arithmetic transition times, as limiting probabilities in time.

For further reading on Markov chains with countably infinite state spaces, see [9], [16], or [22]. Feller [9] is particularly complete, but Ross and Wolff are somewhat more accessible. Harris [12] is the standard reference on branching processes, and Kelly [13] is the standard reference on reversibility. The material on round-robin systems is from [24] and is generalized there.

5.10 Exercises

Exercise 5.1. Let {Pij; i, j ≥ 0} be the set of transition probabilities for an infinite-state Markov chain. For each i, j, let Fij(n) be the probability that state j occurs sometime between time 1 and n inclusive, given X0 = i. For some given j, assume that {xk; k ≥ 0} is a set of non-negative numbers satisfying xi = Pij + Σk≠j Pik xk. Show that xi ≥ Fij(n) for all n and i, and hence that xi ≥ Fij(∞) for all i. Hint: use induction.

Exercise 5.2. a) For the Markov chain in Figure 5.1, show that, for p ≥ 1/2, F00(∞) = 2(1 − p), and show that Fi0(∞) = [(1 − p)/p]^i for i ≥ 1. Hint: first show that this solution satisfies (5.5) and then...
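The claimed solution in Exercise 5.2 can be checked numerically against the recursion of Exercise 5.1. A sketch, assuming Figure 5.1 is the birth-death chain on {0, 1, 2, ...} with P(i, i+1) = p and P(i, i-1) = 1 − p for i ≥ 1, and P(0, 0) = 1 − p; the value p = 0.7 is an arbitrary illustrative choice with p ≥ 1/2:

```python
# Check that x_i = ((1-p)/p)**i satisfies x_i = P_{i0} + sum_{k != 0} P_{ik} x_k
# for the assumed chain of Figure 5.1, and that F_00(oo) = 2(1-p) follows.

p = 0.7
r = (1 - p) / p
x = [r**i for i in range(100)]   # candidate values of F_i0(oo); x[0] = 1

# State 1 can step directly to 0: x_1 = (1-p) + p * x_2.
ok_i1 = abs(x[1] - ((1 - p) + p * x[2])) < 1e-12

# States i >= 2 never touch 0 in one step: x_i = p * x_{i+1} + (1-p) * x_{i-1}.
ok_rest = all(abs(x[i] - (p * x[i + 1] + (1 - p) * x[i - 1])) < 1e-12
              for i in range(2, 99))

# First passage back to 0: F_00(oo) = (1-p) + p * x_1 = 2(1-p).
F00 = (1 - p) + p * x[1]
print(ok_i1, ok_rest, abs(F00 - 2 * (1 - p)) < 1e-12)
```

This only verifies that the proposed solution satisfies the equations; the exercise's induction argument is still needed to conclude that it actually equals (or bounds) the first-passage probabilities.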

This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R. Srikant during the Spring '09 term at the University of Illinois, Urbana-Champaign.
