Pr{X(δ) = j | X(0) = i} = qij δ + o(δ), so this second
approximation is increasingly good as δ → 0. Since the transition probability from i to itself
in this approximation is 1 − νi δ, we require that νi δ ≤ 1 for all i. For a finite state space,
this is satisfied for any δ ≤ [maxi νi]^(−1). For a countably infinite set of states, however, the
sampled-time approximation requires the existence of some finite B such that νi ≤ B for
all i.
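As a concrete check of this construction, the sketch below builds the sampled-time matrix P for a small hypothetical three-state process (the rates in Q are made up purely for illustration), chooses δ = [maxi νi]^(−1), and verifies that each row of P is a valid probability distribution:

```python
# Minimal sketch of the sampled-time approximation for an assumed 3-state
# process. The rates q_ij below are illustrative, not from the text.
import numpy as np

Q = np.array([[0.0, 2.0, 1.0],     # q_ij for i != j (hypothetical rates)
              [1.0, 0.0, 3.0],
              [2.0, 2.0, 0.0]])
nu = Q.sum(axis=1)                 # nu_i: total departure rate from state i
delta = 1.0 / nu.max()             # largest delta with valid self-loop probs

P = delta * Q                      # off-diagonal entries: delta * q_ij
np.fill_diagonal(P, 1.0 - delta * nu)   # self-loops: 1 - nu_i * delta

# Every row must be a probability distribution.
assert (P >= 0).all() and np.allclose(P.sum(axis=1), 1.0)
print(P)
```

Choosing any smaller δ also works; it simply makes the self-loop probabilities larger and the approximation in each step finer.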
The sampled-time Markov chain for the M/M/1 queue was analyzed in Section 5.5. Recall
that this required a self-loop for each state to handle the probability of no transitions in a
time increment. In that sampled-time model, the steady-state probability of state i is given
by (1 − ρ)ρ^i where ρ = λ/µ. We will see that even though the sampled-time model contains
several approximations, the resulting steady-state probabilities are exact.
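This claim is easy to check numerically. The sketch below builds the sampled-time M/M/1 chain (truncated to a finite buffer so the computation is finite; λ = 1 and µ = 2 are illustrative choices), solves for its steady-state vector, and compares it with (1 − ρ)ρ^i:

```python
# Sketch: steady state of the sampled-time M/M/1 chain versus (1 - rho)rho^i.
# lam, mu, n, and delta are illustrative choices, not from the text.
import numpy as np

lam, mu = 1.0, 2.0            # arrival and service rates
rho = lam / mu                # rho = 0.5
n = 50                        # truncate the infinite chain at a finite buffer
delta = 1.0 / (lam + mu)      # a valid increment: delta * (lam + mu) <= 1

# Up-transitions lam*delta, down-transitions mu*delta, and a self-loop on
# each state carrying the no-transition probability.
P = np.zeros((n, n))
for i in range(n):
    if i + 1 < n:
        P[i, i + 1] = lam * delta
    if i > 0:
        P[i, i - 1] = mu * delta
    P[i, i] = 1.0 - P[i].sum()

# Solve pi = pi P together with sum(pi) = 1 as a linear system:
# replace one redundant balance equation by the normalization constraint.
A = P.T - np.eye(n)
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

geometric = (1 - rho) * rho ** np.arange(n)
print(np.max(np.abs(pi - geometric)))   # agreement up to truncation error
```

Because the truncated chain is a birth-death chain, detailed balance forces πi+1 = ρπi, so the computed vector matches the geometric form up to the (negligible) mass cut off by the truncation.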
1 This is the same paradoxical situation that arises whenever we view one Poisson process as the sum of
several other Poisson processes. Perhaps the easiest way to understand this is with the M/M/1 example.
Given an entry into state i > 0, customer arrivals occur at rate λ and departures occur at rate µ, but the
state changes at rate λ + µ, and the epoch of this change is independent of whether it is caused by an
arrival or a departure.
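The footnote's claim, that the holding time in state i > 0 is exponential at rate λ + µ and is independent of whether the transition is an arrival or a departure, can be checked by simulation. A minimal sketch, with illustrative rates λ = 1 and µ = 2:

```python
# Sketch: competing exponential clocks in state i > 0 of the M/M/1 queue.
# lam and mu are illustrative; seed fixed for reproducibility.
import numpy as np

rng = np.random.default_rng(0)
lam, mu = 1.0, 2.0
n = 200_000

arrivals = rng.exponential(1.0 / lam, n)     # arrival clock ~ Exp(lam)
departures = rng.exponential(1.0 / mu, n)    # departure clock ~ Exp(mu)
hold = np.minimum(arrivals, departures)      # time until the state changes
is_arrival = arrivals < departures           # which clock fired first

print(hold.mean())                # should be near 1/(lam + mu) = 1/3
print(is_arrival.mean())          # should be near lam/(lam + mu) = 1/3
# Independence: the mean holding time should not depend on the event type.
print(hold[is_arrival].mean(), hold[~is_arrival].mean())
```

The last two printed means agree with the overall mean, illustrating that the epoch of the state change carries no information about which event caused it.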
Figure 6.3: Approximating a Markov process by its sampled-time Markov chain. (The upper diagram shows a three-state process with transition rates q12, q23, q13, q31; the lower diagram shows the corresponding sampled-time chain, in which each rate qij becomes a transition probability δqij and each state i acquires a self-loop probability 1 − δνi, i.e., 1 − δq12 − δq13, 1 − δq23, and 1 − δq31.)

6.2 Steady-state behavior of irreducible Markov processes

As one might guess, the appropriate approach to exploring the steady-state behavior of
Markov processes comes from applying renewal theory to various renewal processes associated with the Markov process. Many of the needed results for this have already been
developed in looking at the steady-state behavior of countable-state Markov chains.
We restrict our analysis to Markov processes for which the embedded Markov chain is
irreducible, i.e., consists of a single class of states. Such Markov processes are themselves
called irreducible. The reason for this restriction is not that Markov processes with multiple
classes of states are unimportant, but rather that they can usually be best understood by
looking at the embedded Markov chain and the various classes making up that chain.
Recall the following results about irreducible Markov chains from Theorems 5.4 and 5.2.
An irreducible countable-state Markov chain with transition probabilities {Pij ; i, j ≥ 0} is
positive recurrent if and only if there is a set of numbers {πi ; i ≥ 0} satisfying
πj = Σi πi Pij for all j;        πj ≥ 0 for all j;        Σj πj = 1.        (6.6)

If such a solution exists, it is unique and πj > 0 for all j. Furthermore, if such a solution
exists (i.e., if the chain is positive recurrent), then for each i, j, a delayed renewal counting
process {Nij(n)} exists counting the renewals into state j over the first n transitions of the
chain, given an initial state X(0) = i. These processes each satisfy
lim n→∞ Nij(n)/n = πj    with probability 1,        (6.7)
lim n→∞ E[Nij(n)/n] = πj.        (6.8)

Now consider a Markov process which has a positive recurrent embedded Markov chain and
thus satisfies (6.6)-(6.8). When a transition in the embedded chain leads to state j, the time
until the next transition is exponential with rate νj. Reasoning intuitively, we would expect
the fraction of time the process spends in a state j to be proportional to πj (the fraction
of transitions going to j), but also to be proportional to the expected holding time in state
j, which is 1/νj. Since the fraction of time in different states must add up to 1, we would
then hypothesize that the fraction of time pj spent in any given state j should satisfy
pj = (πj /νj) / Σi (πi /νi).        (6.9)

In fact, if we apply this formula to the embedded chain for the M/M/1 queue in (6.4), we
find that pi = (1 − ρ)ρ^i. This is the same result given by the sampled-time analysis of the
M/M/1 queue. In other words, the self-loops in the sampled-time model provide the same
effect as the difference in holding times in the Markov process model.

6.2.1 The number of transitions per unit time

We now turn to a careful general derivation of (6.9), but the most important issue is to
understand what is meant by the fraction of time in a state (e.g., is there a strong law
interpretation for pj such as that for πj in (6.7)?). There is also the question of what
happens if Σi πi /νi = ∞.

Assume that the embedded Markov chain starts in some arbitrary state X0 = i. Then the
time U1 until the next transition is exponential with rate νi. The interval U2 until the next
following transition is a mixture of exponentials depending on X1, but it is a well-defined
rv. In fact, for each n, Un is a rv with the distri...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R. Srikant during the Spring '09 term at University of Illinois, Urbana-Champaign.