[Figure 6.1: The Markov process for an M/M/1 queue. Each node i is labeled with its corresponding rate νi to the next transition, and each transition is labeled with the corresponding transition probability in the embedded Markov chain.]

The embedded Markov chain is a birth-death Markov chain, and its steady-state probabilities can be calculated easily using (5.27). The result is
π0 = (1 − ρ)/2 ;        πn = ((1 − ρ²)/2) ρ^(n−1)  for n ≥ 1,   where ρ = λ/µ.        (6.4)

Note that if λ << µ, then π0 and π1 are each close to 1/2, whereas because of the large
holding time in state 0, the process spends most of its time in state 0 waiting for arrivals.
We will shortly learn how to treat the expected time in a state as opposed to the expected
fraction of transitions to that state, but for now we return to the general study of Markov
processes.
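As a quick sanity check on (6.4), the sketch below evaluates the embedded-chain probabilities for illustrative rates λ = 1 and µ = 3 (values assumed here for concreteness, not taken from the text) and verifies both normalization and the birth-death balance equations of the embedded chain:

```python
# Numerical check of (6.4) for assumed rates lam = 1.0, mu = 3.0 (any lam < mu works).
lam, mu = 1.0, 3.0
rho = lam / mu
p, q = lam / (lam + mu), mu / (lam + mu)   # up/down probabilities in the embedded chain

N = 400                                    # truncation level; the tail is geometric
pi = [(1 - rho) / 2] + [(1 - rho**2) / 2 * rho ** (n - 1) for n in range(1, N)]

assert abs(sum(pi) - 1.0) < 1e-9           # normalization
assert abs(pi[0] - pi[1] * q) < 1e-12      # balance at state 0 (entered only from state 1)
for n in range(2, N - 1):                  # balance at n: enter from n-1 (up) or n+1 (down)
    assert abs(pi[n] - (pi[n - 1] * p + pi[n + 1] * q)) < 1e-12
print("(6.4) satisfies the embedded-chain balance equations")
```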
The evolution of a Markov process can be visualized in several ways. First, assume the
process starts in a known state X0 = i at time 0. The next state X1 is determined by the
probabilities {Pij ; j ≥ 0} of the embedded Markov chain, and the interval U1 is independently
determined by the exponential distribution with rate νi. Given that X1 = j, the
next state X2 and next interval U2 are independently determined by {Pjk ; k ≥ 0} and νj
respectively. Subsequent transitions and intervals evolve in the same way.
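This first viewpoint translates directly into a simulation. The sketch below, assuming M/M/1 rates lam = 1 and mu = 3 for concreteness (values not from the text), draws each holding interval from an exponential with rate νi and then selects the next state from the embedded-chain probabilities:

```python
import random

# First viewpoint, for an assumed M/M/1 example: in state i, draw U ~ Exp(nu_i),
# then choose the next state from the embedded-chain probabilities P_ij.
lam, mu = 1.0, 3.0

def step(i):
    """One transition of the Markov process starting in state i."""
    nu_i = lam if i == 0 else lam + mu          # holding rate nu_i
    u = random.expovariate(nu_i)                # interval U ~ Exp(nu_i)
    if i == 0:
        j = 1                                   # P_01 = 1: only arrivals occur in state 0
    else:                                       # up with prob lam/(lam+mu), else down
        j = i + 1 if random.random() < lam / (lam + mu) else i - 1
    return j, u

random.seed(0)
i, t = 0, 0.0
for _ in range(10_000):                         # X_0 = 0; evolve X_1, X_2, ...
    i, u = step(i)
    t += u
print("state after 10000 transitions:", i, " elapsed time:", round(t, 1))
```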
For a second viewpoint, suppose an independent Poisson process of rate νi is associated
with each state i. When the Markov process enters a given state i, the next transition
occurs at the next arrival epoch in the Poisson process for state i. At that epoch, a new
state is chosen according to the transition probabilities Pij . Since the choice of next state,
given state i, is independent of the interval in state i, this view describes the same process
as the ﬁrst view.
For a third visualization, suppose, for each pair of states i and j , that an independent
Poisson process of rate νi Pij is associated with a possible transition to j conditional on
being in i. When the Markov process enters a given state i, both the time of the next
transition and the choice of the next state are determined by the set of i to j Poisson
processes over all possible next states j . The transition occurs at the epoch of the ﬁrst
arrival, for the given i, to any of the i to j processes, and the next state is the j for which
that ﬁrst arrival occurred. Since such a collection of Poisson processes is equivalent to a
single process of rate νi followed by an independent selection according to the transition
probabilities Pij , this view again describes the same process as the other views.
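The equivalence claimed here is easy to check empirically. The sketch below assumes a hypothetical state i with three possible next states and made-up rates qij (not from the text); it samples the competing Poisson clocks, and confirms that the time of the first arrival behaves like Exp(νi) and that each clock wins with frequency close to Pij = qij/νi:

```python
import random

# Third viewpoint: competing i-to-j Poisson clocks for a hypothetical state i
# with assumed rates q_ij, so nu_i = sum_j q_ij = 6.0.
q = {"a": 1.0, "b": 2.0, "c": 3.0}
nu_i = sum(q.values())

random.seed(0)
n = 200_000
hold_sum = 0.0
wins = {j: 0 for j in q}
for _ in range(n):
    clocks = {j: random.expovariate(r) for j, r in q.items()}
    j_star = min(clocks, key=clocks.get)    # the first i-to-j arrival determines the next state
    wins[j_star] += 1
    hold_sum += clocks[j_star]              # epoch of the first arrival = holding time

print("mean holding time:", round(hold_sum / n, 3), " vs 1/nu_i =", round(1 / nu_i, 3))
print({j: round(w / n, 3) for j, w in wins.items()})   # close to q_ij / nu_i = P_ij
```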
It is convenient in this visualization to deﬁne the rate from any state i to any other state j
as
qij = νi Pij .
If we sum over j, we see that νi and Pij can be expressed in terms of qij for each i, j as

    νi = Σj qij ;        Pij = qij / νi .        (6.5)

This means that the fundamental characterization of the Markov process in terms of the
rates νi and the embedded chain transition probabilities Pij can be replaced by a
characterization in terms of the set of transition rates qij. In many cases, this is a more
natural approach. For the M/M/1 queue, for example, qi,i+1 is simply the arrival rate λ.
Similarly, for i > 0, qi,i−1 = µ is the departure rate when there are customers to be served.
Figure 6.2 shows Figure 6.1 incorporating this notational simplification.
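Equation (6.5) amounts to a simple conversion between the two characterizations. A minimal sketch, using a hypothetical three-state rate table (the rates are illustrative, not from the text):

```python
# (6.5) in code: recover nu_i and P_ij from a table of transition rates q_ij.
# Hypothetical 3-state birth-death example with assumed rates.
q = {
    0: {1: 2.0},
    1: {0: 1.0, 2: 2.0},
    2: {1: 3.0},
}

nu = {i: sum(row.values()) for i, row in q.items()}    # nu_i = sum over j of q_ij
P = {i: {j: r / nu[i] for j, r in row.items()}         # P_ij = q_ij / nu_i
     for i, row in q.items()}

# Each row of P must be a probability distribution over next states.
assert all(abs(sum(row.values()) - 1.0) < 1e-12 for row in P.values())
print("holding rates:", nu)
print("embedded-chain probabilities:", P)
```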
[Figure 6.2: The Markov process for an M/M/1 queue. Each transition (i, j) is labelled with the corresponding transition rate qij.]

Note that the interarrival density for the Poisson process from i to a given j is qij exp(−qij x).
On the other hand, given that the process is in state i, the probability density for the interval
until the next arrival, whether conditioned on an arrival to j or not, is νi exp(−νi x).

6.1.1 The sampled-time approximation to a Markov process

As yet another way to visualize a Markov process, consider approximating the process by
viewing it only at times separated by a given increment size δ. The Poisson processes above
are then approximated by Bernoulli processes where the transition probability from i to j
in the sampled-time chain is defined to be qij δ for all j ≠ i.
The Markov process is then approximated by a Markov chain, and self-transition probabilities
from i to i are required for those time increments in which no transition occurs.
Thus, as illustrated in Figure 6.3, we have Pii = 1 − Σj qij δ = 1 − νi δ for each i. Note
that this is an approximation to the Markov process in two ways. First, transitions occur
only at integer multiples of the increment δ, and second, qij δ is an approximation to
Pr{X(δ) = j | X(0) = i}. From (6.3), Pr{X(δ) ...
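The sampled-time construction can be sketched as follows, assuming M/M/1 rates lam = 1 and mu = 3 and a finite truncation at N states (the truncation is an artifact of working with a finite matrix, not part of the text's construction):

```python
# Sampled-time approximation for a truncated M/M/1 example
# (assumed lam = 1.0, mu = 3.0, N states, increment delta).
lam, mu, N, delta = 1.0, 3.0, 50, 0.01

# Off-diagonal entries are q_ij * delta; the diagonal is P_ii = 1 - nu_i * delta.
P = [[0.0] * N for _ in range(N)]
for i in range(N):
    if i + 1 < N:
        P[i][i + 1] = lam * delta       # q_{i,i+1} = lam (no up-transition at the truncation boundary)
    if i > 0:
        P[i][i - 1] = mu * delta        # q_{i,i-1} = mu
    P[i][i] = 1.0 - sum(P[i])           # self-transition: no arrival in this increment

# Each row must be a probability distribution, which requires delta small
# enough that nu_i * delta <= 1 for every i.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
assert all(entry >= 0.0 for row in P for entry in row)
print("sampled-time chain is a valid Markov chain for delta =", delta)
```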
Spring '09
R.Srikant