6.8 Summary

We have seen that Markov processes with countable state spaces are remarkably similar
to Markov chains with countable state spaces, and throughout the chapter, we frequently
made use of both the embedded chain corresponding to the process and the sampled-time
approximation to the process.
For irreducible processes, the steady state equations, (6.20) and ∑i pi = 1, were found to
specify the steady state probabilities, pi, which have significance both as time averages and
as limiting probabilities. If the transition rates νi are bounded, then the sampled-time
approximation exists and has the same steady state probabilities as the Markov process
itself. If the transition rates νi are unbounded but ∑i pi νi < ∞, then the embedded chain
is positive recurrent and has steady state probabilities, but the sampled-time approximation
does not exist. We assumed throughout the remainder of the chapter that ∑i pi νi < ∞.
This ruled out irregular processes in which there is no meaningful steady state, and also
some peculiar processes such as that in Exercise 6.6 where the embedded chain is null
recurrent.
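As a concrete numerical check of these relations, here is a minimal sketch in Python/NumPy for a hypothetical three-state process (all rates invented for illustration): it solves the steady state equations for pi and then recovers the embedded-chain probabilities πi proportional to pi νi.

```python
import numpy as np

# Hypothetical 3-state transition rate matrix Q: off-diagonal entries are the
# rates q_ij, and each diagonal entry is -nu_i, so every row sums to zero.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])
nu = -np.diag(Q)                    # transition rates nu_i out of each state

# Steady state equations: sum_i p_i q_ij = 0 for each j, plus sum_i p_i = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print("process steady state p:", p)

# Embedded-chain probabilities: pi_i proportional to p_i * nu_i
# (sum_i p_i nu_i is trivially finite for a finite state space).
pi = p * nu / np.sum(p * nu)
print("embedded chain pi:     ", pi)
```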
Section 6.3 developed the Kolmogorov backward and forward differential equations for the
transient probabilities Pij(t) of being in state j at time t given state i at time 0. We showed
that for finite-state processes, these equations can be solved by finding the eigenvalues
and eigenvectors of the transition rate matrix Q. There are close analogies between this
analysis and the algebraic treatment of finite-state Markov chains in Chapter 4, and Exercise
6.7 showed how the transients of the process are related to the transients of the sampled-time
approximation.
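The eigenvalue approach can be illustrated numerically. Below is a minimal sketch reusing the hypothetical Q above, assuming Q is diagonalizable (scipy.linalg.expm is the more robust alternative):

```python
import numpy as np

# Hypothetical 3-state rate matrix from the previous sketch.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

# Diagonalize Q: columns of U are right eigenvectors, Q = U diag(lam) U^{-1}.
lam, U = np.linalg.eig(Q)
U_inv = np.linalg.inv(U)

def P(t):
    """Transient matrix P(t) = e^{Qt}; entry (i, j) is P_ij(t)."""
    return (U @ np.diag(np.exp(lam * t)) @ U_inv).real

print(P(0.1))    # approximately I + 0.1*Q for small t
print(P(50.0))   # each row has converged to the steady state vector p
```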
For irreducible processes with bounded transition rates, uniformization was introduced as a
way to simplify the structure of the process. The addition of self transitions does not change
the process itself, but can be used to adjust the transition rates νi to be the same for all
states. This changes the embedded Markov chain, and the steady state probabilities for the
embedded chain become the same as those for the process. The epochs at which transitions
occur then form a Poisson process which is independent of the set of states entered. This
yields a separation between the transition epochs and the sequence of states.
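A minimal sketch of this construction, again with the hypothetical Q above: choose a rate ν no smaller than any νi and form the uniformized embedded chain P = I + Q/ν, whose steady state vector coincides with the process probabilities pi.

```python
import numpy as np

Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

nu = np.max(-np.diag(Q))       # uniform rate nu >= max_i nu_i (here 4.0)
P_unif = np.eye(3) + Q / nu    # embedded chain, now with self transitions

# Left eigenvector of P_unif for eigenvalue 1 gives its steady state, which
# now equals the process probabilities p (transition epochs: Poisson, rate nu).
w, V = np.linalg.eig(P_unif.T)
pi = V[:, np.argmax(w.real)].real
pi /= pi.sum()
print(pi)
```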
The next two sections analyzed birth-death processes and reversibility. The results about
birth-death Markov chains and reversibility for Markov chains carried over almost without
change to Markov processes. These results are central in queueing theory, and Burke’s
theorem allowed us to look at simple queueing networks with no feedback and to understand
how feedback complicates the problem.
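As a small worked example of the birth-death results, detailed balance pi λ = pi+1 μ across each edge of an M/M/1 queue gives pi = (1 - ρ)ρ^i with ρ = λ/μ < 1; the sketch below checks this with assumed rates λ = 1, μ = 2.

```python
# Detailed balance check for an M/M/1 queue (assumed rates, for illustration).
lam, mu = 1.0, 2.0
rho = lam / mu                                # utilization, must be < 1
p = [(1 - rho) * rho**i for i in range(50)]   # p_i = (1 - rho) * rho^i

for i in range(10):
    # Flow up across edge (i, i+1) equals flow down: p_i*lam == p_{i+1}*mu.
    assert abs(p[i] * lam - p[i + 1] * mu) < 1e-12
print(sum(p))   # approaches 1 as more terms are included
```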
Finally, Jackson networks were discussed. These are important in their own right and also
provide a good example of how one can solve complex queueing problems by studying the
reverse time process and making educated guesses about the steady state behavior. The
somewhat startling result here is that in steady state, and at a fixed time, the number
of customers at each node is independent of the number at each other node and satisfies
the same distribution as for an M/M/1 queue. Also the exogenous departures from the
network are Poisson and independent from node to node. We emphasized that the number
of customers at one node at one time is often dependent on the number at other nodes at other times. The independence holds only when all nodes are viewed at the same time.
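A minimal sketch of the product-form computation for a hypothetical three-node network (all rates and routing probabilities invented for illustration): solve the traffic equations λj = rj + ∑i λi Rij for the total arrival rates, then multiply per-node M/M/1 distributions.

```python
import numpy as np

r = np.array([1.0, 0.5, 0.0])      # exogenous Poisson arrival rates r_j
R = np.array([[0.0, 0.5, 0.3],     # R[i][j]: routing probability i -> j;
              [0.2, 0.0, 0.4],     # leftover probability at each node is an
              [0.0, 0.3, 0.0]])    # exogenous departure from the network
mu = np.array([4.0, 4.0, 4.0])     # service rates (chosen so each rho_j < 1)

# Traffic equations lambda_j = r_j + sum_i lambda_i R_ij, i.e. (I - R^T) lam = r.
lam = np.linalg.solve(np.eye(3) - R.T, r)
rho = lam / mu

def p(n):
    """Steady state probability of occupancy vector n: product of M/M/1 terms."""
    n = np.asarray(n)
    return np.prod((1 - rho) * rho**n)

print("lambda:", lam, " rho:", rho, " p([0,1,2]):", p([0, 1, 2]))
```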
For further reading on Markov processes, see [13], [16], [22], and [9].

6.9 Exercises

Exercise 6.1. Consider a Markov process for which the embedded Markov chain is a birth-death chain with transition probabilities Pi,i+1 = 2/5 for all i ≥ 1, Pi,i−1 = 3/5 for all i ≥ 1,
P01 = 1, and Pij = 0 otherwise.
a) Find the steady state probabilities {πi; i ≥ 0} for the embedded chain.

b) Assume that the transition rate νi out of state i, for i ≥ 0, is given by νi = 2^i. Find the transition rates {qij} between states and find the steady state probabilities {pi} for the Markov process. Explain heuristically why πi ≠ pi.
c) Now assume in parts c) to f) that the transition rate out of state i, for i ≥ 0, is given by νi = 2^-i. Find the transition rates {qij} between states and draw the directed graph with these transition rates.
d) Show that there is no probability vector solution {pi; i ≥ 0} to (6.20).
e) Argue that the expected time between visits to any given state i is infinite. Find the
expected number of transitions between visits to any given state i. Argue that, starting
from any state i, an eventual return to state i occurs with probability 1.
f) Consider the sampled-time approximation of this process with δ = 1. Draw the graph of the resulting Markov chain and argue why it must be null recurrent.
Exercise 6.2. Consider the Markov process illustrated below. The transitions are labelled
by the rate qij at which th...