…has a nonzero transition to a lower numbered state). Show that there is some state (other than 1 to i), say i + 1, and some decision k_{i+1} such that P_{i+1,l}^{(k_{i+1})} > 0 for some l ≤ i.
d) Use parts a), b), and c) to observe that there is a stationary policy k = k1 , . . . , kM for
which state 1 is accessible from each other state.

Chapter 5

COUNTABLE-STATE MARKOV CHAINS

5.1 Introduction and classification of states

Markov chains with a countably-infinite state space (more briefly, countable-state Markov chains) exhibit some types of behavior not possible for chains with a finite state space.
Figure 5.1 helps explain how these new types of behavior arise. If the right-going transitions p in the figure satisfy p > 1/2, then transitions to the right occur with higher frequency
than transitions to the left. Thus, reasoning heuristically, we expect the state Xn at time n to drift to the right with increasing n. Given X0 = 0, the probability P^n_{0j} of being in state j at time n should then tend to zero for any fixed j with increasing n. If one tried to define the steady-state probability of state j as lim_{n→∞} P^n_{0j}, then this limit would be 0 for all j. These probabilities would not sum to 1, and thus would not correspond to a limiting distribution. Thus we say that a steady state does not exist. In more poetic terms, the state wanders off into the wild blue yonder.
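This heuristic drift argument can be checked with a short simulation. The sketch below is not from the text; it assumes the chain of Figure 5.1 moves right with probability p and left with probability q = 1 − p, with state 0 holding in place when a left move is attempted (an assumption about the boundary behavior). For p > 1/2, the empirical fraction of sample paths with Xn ≥ j approaches 1 as n grows.

```python
import random

def simulate_finals(p, n, trials, seed=1):
    """Run `trials` independent sample paths of the chain for n steps,
    starting at X0 = 0, and return the list of final states Xn."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        x = 0
        for _ in range(n):
            if rng.random() < p:
                x += 1            # right-going transition, probability p
            elif x > 0:
                x -= 1            # left-going transition, probability q
            # at x = 0 a left move is assumed to hold the state in place
        finals.append(x)
    return finals

finals = simulate_finals(p=0.7, n=200, trials=1000)
# empirical estimate of Pr{X_200 >= 10}; the drift p - q = 0.4 per step
# pushes the state far to the right, so this fraction should be near 1
frac = sum(1 for x in finals if x >= 10) / len(finals)
```

With p = 0.7 the mean displacement after 200 steps is roughly (p − q)·200 = 80, so almost every path ends well beyond j = 10, consistent with lim_{n→∞} Pr{Xn ≥ j} = 1.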
Figure 5.1: A Markov chain with a countable state space (states 0, 1, 2, 3, 4, . . . , with right-going transitions of probability p and left-going transitions of probability q = 1 − p). If p > 1/2, then as time n increases, the state Xn becomes large with high probability, i.e., lim_{n→∞} Pr{Xn ≥ j} = 1 for each integer j.

The truncation of Figure 5.1 to k states is analyzed in Exercise 4.7. The solution there
defines ρ = p/q and shows that if ρ ≠ 1, then πi = (1 − ρ)ρ^i /(1 − ρ^k) for each i, 0 ≤ i < k. For ρ = 1, πi = 1/k for each i. For ρ < 1, the limiting behavior as k → ∞ is πi = (1 − ρ)ρ^i.
Thus for ρ < 1, the truncated Markov chain is similar to the untruncated chain. For ρ > 1, on the other hand, the steady-state probabilities for the truncated case are geometrically decreasing from the right, and the states with significant probability keep moving to the right as k increases. Although the probability of each fixed state j approaches 0 as k increases, the truncated chain never resembles the untruncated chain. This example is further studied in Section 5.3, which considers a generalization known as birth-death Markov chains.
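The closed-form πi above can be verified numerically. The following sketch (not from the text) builds the k-state truncation, assuming the truncation keeps rows stochastic by giving the two boundary states self-loops, and checks that πi = (1 − ρ)ρ^i/(1 − ρ^k) is indeed stationary, i.e., that πP = π.

```python
def truncated_chain(p, k):
    """Transition matrix of the k-state truncation of Figure 5.1:
    right with probability p, left with probability q = 1 - p,
    self-loops at the two boundary states (an assumed convention
    for keeping each row a probability distribution)."""
    q = 1.0 - p
    P = [[0.0] * k for _ in range(k)]
    for i in range(k):
        if i + 1 < k:
            P[i][i + 1] = p
        else:
            P[i][i] += p      # right boundary: blocked right move stays put
        if i - 1 >= 0:
            P[i][i - 1] = q
        else:
            P[i][i] += q      # left boundary: blocked left move stays put
    return P

def closed_form_pi(p, k):
    """pi_i = (1 - rho) rho^i / (1 - rho^k) with rho = p/q, rho != 1."""
    rho = p / (1.0 - p)
    return [(1 - rho) * rho**i / (1 - rho**k) for i in range(k)]

p, k = 0.4, 8
P = truncated_chain(p, k)
pi = closed_form_pi(p, k)
piP = [sum(pi[i] * P[i][j] for i in range(k)) for j in range(k)]
# stationarity check: pi P should reproduce pi component by component
err = max(abs(a - b) for a, b in zip(pi, piP))
```

Here ρ = 0.4/0.6 = 2/3 < 1, so the πi decrease geometrically from state 0, matching the ρ < 1 discussion above.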
Fortunately, the strange behavior of Figure 5.1 when p > q is not typical of the Markov
chains of interest for most applications. For typical countable-state Markov chains, a steady state does exist, and the steady-state probabilities of all but a finite number of states (the number depending on the chain and the application) can almost be ignored for numerical
calculations.
It turns out that the appropriate tool to analyze the behavior, and particularly the long-term behavior, of countable-state Markov chains is renewal theory. In particular, we will first revise the definition of recurrent states for finite-state Markov chains to cover the countable-state case. We then show that for any given recurrent state j, the sequence of discrete time epochs n at which the state Xn is equal to j essentially forms a renewal process.1 The renewal theorems then specify the time-average relative frequency of state j,
the limiting probability of j with increasing time, and a number of other relations.
To be slightly more precise, we want to understand the sequence of epochs at which one
state, say j , is entered, conditional on starting the chain either at j or at some other state,
say i. We will see that, subject to the classification of states i and j, this gives rise to a
delayed renewal process. In preparing to study this delayed renewal process, we need to
understand the inter-renewal intervals. The probability mass functions (PMFs) of these intervals are called first-passage-time probabilities in the notation of Markov chains.
Definition 5.1. The first-passage-time probability, fij(n), is the probability that the first
entry to state j occurs at discrete time n (for n ≥ 1), given that X0 = i. That is, for n = 1,
fij (1) = Pij . For n ≥ 2,
fij(n) = Pr{Xn = j, Xn−1 ≠ j, Xn−2 ≠ j, . . . , X1 ≠ j | X0 = i}.    (5.1)

For n ≥ 2, note the distinction between fij(n) and P^n_{ij} = Pr{Xn = j | X0 = i}. The definition
in (5.1) also applies for j = i; fii (n) is thus the probability, given X0 = i, that the ﬁrst
occurrence of state i after time 0 occurs at time n. Since the transition probabilities are
independent of time, fij (n − 1) is also the probability, given X1 = i, that the ﬁrst subsequent
occurrence of state j occurs at time n. Thus we can calculate fij (n) from the iterative
relations
fij(n) = Σ_{k≠j} Pik fkj(n − 1),  n > 1;    fij(1) = Pij.    (5.2)

With this iterative approach, the first-passage-time probabilities fij(n) for a given n mu...
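The iteration in (5.2) is easy to carry out directly: for each n, one pass updates fkj(n) for every start state k at once from the values fkj(n − 1). The sketch below (hypothetical code, not from the text) does this for a finite transition matrix.

```python
def first_passage(P, i, j, nmax):
    """Compute f_ij(n) for n = 1, ..., nmax via (5.2):
    f_ij(1) = P_ij;  f_ij(n) = sum over k != j of P_ik f_kj(n-1)."""
    M = len(P)
    # f[k] holds f_kj(m) for the current m, for every start state k;
    # the whole vector is needed because (5.2) couples the start states
    f = [P[k][j] for k in range(M)]          # m = 1
    out = [f[i]]
    for _ in range(2, nmax + 1):
        # build the new vector from the old one (note k = j is excluded
        # in the sum, since a visit to j at an intermediate time would
        # mean the first passage had already occurred)
        f = [sum(P[k][l] * f[l] for l in range(M) if l != j)
             for k in range(M)]
        out.append(f[i])
    return out

# illustrative two-state chain (hypothetical numbers)
P = [[0.5, 0.5],
     [0.2, 0.8]]
fs = first_passage(P, i=0, j=1, nmax=30)
```

For this chain the first passage from 0 to 1 happens at time n exactly when the chain self-loops at 0 for n − 1 steps and then moves, so fij(n) = 0.5 · 0.5^(n−1); the computed values match this, and their partial sums approach 1, as expected when j is reachable from i.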
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R.srikant during the Spring '09 term at University of Illinois, Urbana Champaign.