An n-step walk¹ is an ordered string of nodes {i0, i1, . . . , in}, n ≥ 1, in
which there is a directed arc from im−1 to im for each m, 1 ≤ m ≤ n. A path is a walk
in which the nodes are distinct. A cycle is a walk in which the first and last nodes are the
same and the other nodes are distinct.
Note that a walk can start and end on the same node, whereas a path cannot. Also the
number of steps in a walk can be arbitrarily large, whereas a path can have at most M − 1
steps and a cycle at most M steps.
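These definitions are easy to check mechanically. Below is a minimal sketch in Python; the adjacency structure `g` and the helper names are illustrative stand-ins, not taken from the text:

```python
def is_walk(graph, nodes):
    """A walk has n >= 1 steps, each along a directed arc."""
    if len(nodes) < 2:
        return False
    return all(b in graph[a] for a, b in zip(nodes, nodes[1:]))

def is_path(graph, nodes):
    """A path is a walk whose nodes are all distinct."""
    return is_walk(graph, nodes) and len(set(nodes)) == len(nodes)

def is_cycle(graph, nodes):
    """A cycle is a walk whose first and last nodes coincide
    and whose other nodes are distinct."""
    return (is_walk(graph, nodes) and nodes[0] == nodes[-1]
            and len(set(nodes)) == len(nodes) - 1)

# A small directed graph (an assumed stand-in, not Figure 4.1(a) itself).
g = {1: {2}, 2: {3}, 3: {4}, 4: {1, 5}, 5: {5}, 6: {1}}
print(is_walk(g, [1, 2, 3, 4, 1, 2]))   # True: a walk may revisit nodes
print(is_path(g, [1, 2, 3, 4]))         # True
print(is_cycle(g, [1, 2, 3, 4, 1]))     # True
```

Note that this illustrative graph has M = 6 states, so any path in it has at most 5 steps and any cycle at most 6, while the walk above already revisits node 1 and could be extended indefinitely.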
Deﬁnition 4.3. A state j is accessible from i (abbreviated as i → j ) if there is a walk in
the graph from i to j .
For example, in ﬁgure 4.1(a), there is a walk from node 1 to node 3 (passing through node
2), so state 3 is accessible from 1. There is no walk from node 5 to 3, so state 3 is not
accessible from 5. State 2, for example, is accessible from itself, but state 6 is not accessible
from itself. To see the probabilistic meaning of accessibility, suppose that a walk i0 , i1 , . . . in
exists from node i0 to in . Then, conditional on X0 = i0 , there is a positive probability,
Pi0 i1 , that X1 = i1 , and consequently (since Pi1 i2 > 0), there is a positive probability that
X2 = i2. Continuing this argument, there is a positive probability that Xn = in, so that
Pr{Xn = in | X0 = i0} > 0. Similarly, if Pr{Xn = in | X0 = i0} > 0, then there is an n-step
walk from i0 to in. Summarizing, i → j if and only if (iff) Pr{Xn = j | X0 = i} > 0 for some
n ≥ 1. We denote Pr{Xn = j | X0 = i} by P^n_ij. Thus, for n ≥ 1, P^n_ij > 0 iff the graph has an
n-step walk from i to j (perhaps visiting the same node more than once). For the example in
Figure 4.1(a), P^2_13 = P12 P23 > 0. On the other hand, P^n_53 = 0 for all n ≥ 1. An important
relation that we use often in what follows is that if there is an n-step walk from state i to j
and an m-step walk from state j to k, then there is a walk of n + m steps from i to k. Thus

    P^n_ij > 0 and P^m_jk > 0 imply P^(n+m)_ik > 0.        (4.4)

This also shows that

    i → j and j → k imply i → k.        (4.5)

Definition 4.4. Two distinct states i and j communicate (abbreviated i ↔ j) if i is accessible from j and j is accessible from i.

¹ We are interested here only in directed graphs, and thus undirected walks and paths do not arise.
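The equivalence between i → j and P^n_ij > 0 for some n ≥ 1 can be verified numerically by accumulating powers of the transition matrix; since accessibility implies a path of at most M − 1 steps, checking n up to M suffices. A minimal sketch (the 3-state matrix `P` here is made up for illustration, not the chain of Figure 4.1):

```python
import numpy as np

# Hypothetical chain: 0 -> 1, 1 -> {1, 2}, state 2 absorbing.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])

def accessible(P, i, j):
    """True iff P^n[i, j] > 0 for some 1 <= n <= M."""
    M = P.shape[0]
    Pn = np.eye(M)
    for _ in range(M):
        Pn = Pn @ P          # Pn is now the next power of P
        if Pn[i, j] > 0:
            return True
    return False

print(accessible(P, 0, 2))   # True: P^2[0, 2] = 1.0 * 0.5 > 0
print(accessible(P, 2, 0))   # False: state 2 is absorbing
```

In practice one would test reachability on the directed graph (boolean matrices or a graph search) rather than multiplying floating-point matrices, which can underflow for large n; the matrix form is shown only to mirror (4.4).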
An important fact about communicating states is that if i ↔ j and m ↔ j then i ↔ m. To
see this, note that i ↔ j and m ↔ j imply that i → j and j → m, so that i → m. Similarly,
m → i, so i ↔ m.
Definition 4.5. A class T of states is a nonempty set of states such that for each state
i ∈ T, i communicates with each j ∈ T (except perhaps itself) and does not communicate
with any j ∉ T.
For the example of Fig. 4.1(a), {1, 2, 3, 4} is one class of states, {5} is another, and {6} is
another. Note that state 6 does not communicate with itself, but {6} is still considered to
be a class. The entire set of states in a given Markov chain is partitioned into one or more
disjoint classes in this way.
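This partition can be computed directly from the graph: put i and j in the same class exactly when each is reachable from the other. A minimal sketch (the graph `g` is an assumed stand-in chosen so that its classes match the example, {1, 2, 3, 4}, {5}, and {6}; the actual arcs of Fig. 4.1(a) may differ):

```python
def reachable(graph, start):
    """States accessible from `start` by a walk of at least one step."""
    seen, stack = set(), list(graph.get(start, ()))
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(graph.get(v, ()))
    return seen

def classes(graph):
    """Partition the states into classes: i and j share a class iff
    i <-> j; a state not communicating with itself is a singleton."""
    reach = {i: reachable(graph, i) for i in graph}
    partition, assigned = [], set()
    for i in graph:
        if i in assigned:
            continue
        cls = {i} | {j for j in graph if i in reach[j] and j in reach[i]}
        partition.append(cls)
        assigned |= cls
    return partition

g = {1: {2}, 2: {3}, 3: {4}, 4: {1, 5}, 5: {5}, 6: {1}}
print(classes(g))   # [{1, 2, 3, 4}, {5}, {6}]
```

Note that 6 lands in a singleton class even though it does not communicate with itself, exactly as in the text.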
Definition 4.6. For finite-state Markov chains, a recurrent state is a state i that is accessible from all states that are accessible from i (i is recurrent if i → j implies that j → i).
A transient state is a state that is not recurrent.
Recurrent and transient states for Markov chains with a countably inﬁnite set of states will
be deﬁned in the next chapter.
According to the definition, a state i in a finite-state Markov chain is recurrent if there
is no possibility of going to a state j from which there can be no return. As we shall see
later, if a Markov chain ever enters a recurrent state, it returns to that state eventually
with probability 1, and thus keeps returning inﬁnitely often (in fact, this property serves as
the definition of recurrence for Markov chains without the finite-state restriction). A state
i is transient if there is some j that is accessible from i but from which there is no possible
return. Each time the system returns to i, there is a possibility of going to j ; eventually
this possibility will occur, and then no more returns to i can occur (this can be thought of
as a mathematical form of Murphy's law).

Theorem 4.1. For finite-state Markov chains, either all states in a class are transient or
all are recurrent.
Proof: Assume that state i is transient (i.e., for some j, i → j but j ↛ i) and suppose
that i and m are in the same class (i.e., i ↔ m). Then m → i and i → j , so m → j . Now
if j → m, then a walk from j to m, followed by a walk from m to i, would give a walk from
j to i; this contradicts j ↛ i, so there is no walk from j to m. Thus m → j but j ↛ m, and
m is transient. Since we have just shown that
all nodes in a class are transient if any are, it follows that the states in a class are either all
recurrent or all transient.
For the example of Fig. 4.1(a), {1, 2, 3, 4} is a transient class and {5} is a recurrent class.
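The recurrence test in Definition 4.6 is likewise mechanical: i is recurrent iff every state accessible from i can reach i back. A minimal sketch using plain graph search (again with an assumed stand-in graph whose classification matches the example: states 1–4 and 6 transient, state 5 recurrent):

```python
def reachable(graph, start):
    """States accessible from `start` by a walk of at least one step."""
    seen, stack = set(), list(graph.get(start, ()))
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(graph.get(v, ()))
    return seen

def is_recurrent(graph, i):
    """i -> j must imply j -> i for every j (Definition 4.6)."""
    return all(i in reachable(graph, j) for j in reachable(graph, i))

g = {1: {2}, 2: {3}, 3: {4}, 4: {1, 5}, 5: {5}, 6: {1}}
print([s for s in g if is_recurrent(g, s)])   # [5]
```

Consistent with Theorem 4.1, the verdict is the same for every state in a class, so in practice one checks a single representative per class.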
In terms of the graph of a Markov chain,...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R. Srikant during the Spring '09 term at the University of Illinois, Urbana-Champaign.