… returns to a given state. These renewal processes were used to rederive the basic properties of Markov chains using renewal theory, as opposed to the algebraic Perron-Frobenius approach of Chapter 4. The central result was Theorem 5.4, which showed that, for an irreducible chain, the states are positive-recurrent iff the steady-state equations (5.14) have a solution. Moreover, if (5.14) has a solution, it is positive and unique. We also showed that these steady-state probabilities are, with probability 1, time-averages for sample paths, and that, for an ergodic chain, they are limiting probabilities independent of the starting state.
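As a small numerical sketch (not from the text), the steady-state equations (5.14) can be written in matrix form as pi = pi P and solved by power iteration for an ergodic chain. The 3-state transition matrix below is a made-up example.

```python
import numpy as np

# Hypothetical 3-state irreducible chain (a small birth-death example);
# its steady-state vector pi solves pi = pi P, the matrix form of (5.14).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

pi = np.ones(3) / 3          # any starting distribution works
for _ in range(500):         # power iteration; converges for ergodic chains
    pi = pi @ P

print(pi)                    # ≈ [0.25, 0.5, 0.25]
```

Theorem 5.4 guarantees that for a positive-recurrent irreducible chain this solution is unique, so the starting distribution above is immaterial.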
We found that the major complications that result from countable state spaces are, first, different kinds of transient behavior, and second, the possibility of null-recurrent states. For finite-state Markov chains, a state is transient only if it can reach some other state from which it can’t return. For countably infinite chains, there is also the case, as in Figure 5.1 for p > 1/2, where the state just wanders away, never to return. Null recurrence is a limiting situation where the state wanders away and returns with probability 1, but with an infinite expected time. There is not much engineering significance to null recurrence; it is highly sensitive to modeling details over the entire infinite set of states. One usually uses countably infinite chains to simplify models; for example, if a buffer is very large and we don’t expect it to overflow, we assume it is infinite. Finding out, then, that the chain is transient or null-recurrent simply means that the modeling assumption was not very good.
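The distinction can be made concrete with a small Monte Carlo sketch (an illustration, not from the text) of the chain in Figure 5.1, where the state increases by 1 with probability p and decreases by 1 with probability 1 − p. For p < 1/2 a path started at state 1 returns to 0 essentially always, while for p > 1/2 the return probability is (1 − p)/p.

```python
import random

def returns_to_zero(p, max_steps=20_000):
    """Simulate the Figure 5.1 chain from state 1: up w.p. p, down w.p. 1-p.
    Returns True if state 0 is reached within the step budget."""
    state = 1
    for _ in range(max_steps):
        state += 1 if random.random() < p else -1
        if state == 0:
            return True
    return False

random.seed(0)
trials = 4000
frac_recurrent = sum(returns_to_zero(0.3) for _ in range(trials)) / trials
frac_transient = sum(returns_to_zero(0.7) for _ in range(trials)) / trials
# frac_recurrent should be near 1; frac_transient near (1 - 0.7)/0.7 ≈ 0.43
```

The finite step budget cannot distinguish null recurrence from positive recurrence, which is exactly why null recurrence has little engineering significance.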
Branching processes were introduced in Section 5.3 as a model to study the growth of various kinds of elements that reproduce. In general, for these models (assuming p0 > 0), there is one trapping state and all other states are transient. Figure 5.3 showed how to find the probability that the trapping state is entered by the nth generation, and also the probability that it is entered eventually. If the expected number of offspring of an element is at most 1, then the population dies out with probability 1; otherwise, the population dies out with some given probability q, and grows without bound with probability 1 − q.
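The graphical argument of Figure 5.3 translates directly into a fixed-point iteration on the offspring probability generating function: iterating q ← f(q) from q = 0 gives the probability of extinction by each successive generation, and the limit is the eventual extinction probability. The offspring distribution below is a made-up example.

```python
def extinction_prob(pmf, tol=1e-12, max_iter=10_000):
    """Smallest fixed point of the offspring PGF f(s) = sum_k p_k s^k.
    Iterating q <- f(q) from q = 0 gives the probability of extinction
    by generation n (cf. Figure 5.3); the limit is the eventual
    extinction probability q."""
    q = 0.0
    for _ in range(max_iter):
        q_next = sum(p * q**k for k, p in enumerate(pmf))
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q

# Hypothetical offspring pmf: 0 children w.p. 1/4, 2 children w.p. 3/4,
# so the mean is 1.5 > 1 and extinction solves 0.75 q^2 - q + 0.25 = 0.
q = extinction_prob([0.25, 0.0, 0.75])   # smaller root: q = 1/3
```

With mean offspring at most 1 the same iteration converges to 1, matching the dichotomy stated above.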
We next studied birth-death Markov chains and reversibility. Birth-death chains are widely used in queueing theory as sampled-time approximations for systems with Poisson arrivals and various generalizations of exponentially distributed service times. Equation (5.30) gives their steady-state probabilities if positive-recurrent, and shows the condition under which they are positive-recurrent. We showed that these chains are reversible if they are positive-recurrent.
Theorems 5.6 and 5.7 provided a simple way to find the steady-state distribution of reversible chains, and also of chains where the backward chain behavior could be hypothesized or deduced. We used reversibility to show that M/M/1 and M/M/m Markov chains satisfy Burke’s theorem for sampled-time: namely, that the departure process is Bernoulli, and that the state at any time is independent of departures before that time.
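Reversibility can be checked numerically through detailed balance, pi_i P_ij = pi_j P_ji; the 3-state birth-death chain below is a hypothetical example.

```python
import numpy as np

# Hypothetical 3-state birth-death chain; reversibility is equivalent to
# detailed balance: pi_i * P[i, j] == pi_j * P[j, i] for all i, j.
P = np.array([[0.8, 0.2, 0.0],
              [0.4, 0.4, 0.2],
              [0.0, 0.4, 0.6]])

# Steady state from the (5.30)-style product: pi_i ~ prod_{j<i} P[j, j+1] / P[j+1, j]
pi = np.array([1.0, 0.2 / 0.4, (0.2 / 0.4) ** 2])
pi /= pi.sum()

flows = pi[:, None] * P      # flows[i, j] = pi_i * P_ij (probability flow i -> j)
# For a reversible chain, the flow matrix is symmetric: flows == flows.T
```

Symmetry of the flow matrix says that the chain run backward has the same transition probabilities as the chain run forward, which is the property exploited in Theorems 5.6 and 5.7.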
Round-robin queueing was then used as a more complex example of how to use the backward process to deduce the steady-state distribution of a rather complicated Markov chain; this also gave us added insight into the behavior of queueing systems and allowed us to show that, in the processor-sharing limit, the distribution of the number of customers is the same as that in an M/M/1 queue.
Finally, semi-Markov processes were introduced. Renewal theory again provided the key to analyzing these systems. Theorem 5.9 showed how to find the steady-state probabilities of these processes, and it was shown that these probabilities could be interpreted both as time-averages and, in the case of non-arithmetic transition times, as limiting probabilities in time.

For further reading on Markov chains with countably infinite state spaces, see [9], [16], or [22]. Feller [9] is particularly complete, but Ross and Wolff are somewhat more accessible. Harris [12] is the standard reference on branching processes, and Kelly [13] is the standard reference on reversibility. The material on round-robin systems is from [24] and is generalized there.

5.10 Exercises

Exercise 5.1. Let {Pij; i, j ≥ 0} be the set of transition probabilities for an infinite-state
Markov chain. For each i, j, let Fij(n) be the probability that state j occurs sometime between time 1 and n inclusive, given X0 = i. For some given j, assume that {xk; k ≥ 0} is a set of nonnegative numbers satisfying xi = Pij + Σk≠j Pik xk. Show that xi ≥ Fij(n) for all n and i, and hence that xi ≥ Fij(∞) for all i. Hint: use induction.
Exercise 5.2. a) For the Markov chain in Figure 5.1, show that, for p ≥ 1/2, F00(∞) = 2(1 − p), and show that Fi0(∞) = [(1 − p)/p]^i for i ≥ 1. Hint: first show that this solution satisfies (5.5) and then...
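As a quick numerical sanity check (not a substitute for the proof the exercise asks for), one can verify that x_i = [(1 − p)/p]^i satisfies first-passage equations of the form suggested by the hint for the Figure 5.1 chain; the equations written in the comments are my reading of what (5.5) specializes to for this chain.

```python
# For the Figure 5.1 chain (up w.p. p, down w.p. 1-p), the first-passage
# equations for reaching state 0 read (assumed specialization of (5.5)):
#   x_1 = (1 - p) + p * x_2
#   x_i = p * x_{i+1} + (1 - p) * x_{i-1},  for i >= 2.
# Check that x_i = ((1 - p)/p)**i satisfies both for a sample p > 1/2.
p = 0.7
x = [((1 - p) / p) ** i for i in range(12)]

residual_1 = abs(x[1] - ((1 - p) + p * x[2]))
residuals = [abs(x[i] - (p * x[i + 1] + (1 - p) * x[i - 1]))
             for i in range(2, 10)]
# both residual_1 and all entries of residuals should be ~0
```

This only checks that the candidate solution is consistent with the equations at one value of p; the exercise asks for the general argument.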
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R.srikant during the Spring '09 term at University of Illinois, Urbana Champaign.