sented by t = js + ℓ where 1 ≤ ℓ < s and j ≥ 0. Iterating
(4.15), we get T(k) = T(k + js), and applying (4.10) to this,

    T(k + ℓ) = T(k + js + ℓ) = T(k + t) = T(k),

where we have used t = js + ℓ followed by (4.14). This is the desired contradiction, since
ℓ < s. Thus s = 1 and T(k) = T(k + 1). Iterating this,
    T(k) = T(k + n)   for all n ≥ 0.                              (4.16)

Since the chain is ergodic, each state j continues to be accessible after k steps. Therefore j
must be in T(k + n) for some n ≥ 0, which, from (4.16), implies that j ∈ T(k). Since j is
arbitrary, T(k) must be the entire set of states. Thus P_ij^n > 0 for all n ≥ k and all j.
This same argument can be applied to any state i on the given cycle with τ nodes. Any
state m not on this cycle has a path to the cycle using at most M − τ steps. Using this
path to reach a node i on the cycle, and following this with all the walks from i of length
k = (M − 1)τ, we see that

    P_mj^{M−τ+(M−1)τ} > 0   for all j, m.

The proof is complete, since M − τ + (M − 1)τ ≤ (M − 1)^2 + 1 for all τ, 1 ≤ τ ≤ M − 1, with
equality when τ = M − 1.
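The extremal behavior described above is easy to check numerically. The sketch below assumes a chain of the kind just described (the classical Wielandt construction: a single cycle of length M − 1 plus one off-cycle node, here with M = 4) and finds the smallest n for which every entry of [P]^n is positive:

```python
import numpy as np

# Wielandt-style chain on M states (0-indexed): edges i -> i+1 for
# i < M-1, plus M-1 -> 0 and M-1 -> 1. The cycle 1 -> 2 -> ... -> M-1 -> 1
# has length M-1, and state 0 is the single node off that cycle.
M = 4
P = np.zeros((M, M))
for i in range(M - 1):
    P[i, i + 1] = 1.0
P[M - 1, 0] = 0.5
P[M - 1, 1] = 0.5

# Find the smallest n for which every entry of [P]^n is positive.
Pn = np.eye(M)
n = 0
while not np.all(Pn > 0):
    Pn = Pn @ P
    n += 1

print(n)            # smallest such n, which matches (M - 1)^2 + 1 here
```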
Figure 4.4 illustrates a situation where the bound (M − 1)^2 + 1 is met with equality. Note
that there is one cycle of length M − 1 and the single node not on this cycle, node 1, is the
unique starting node at which the bound is met with equality.

4.3 The Matrix representation

The matrix [P] of transition probabilities of a Markov chain is called a stochastic matrix;
that is, a stochastic matrix is a square matrix of nonnegative terms in which the elements
in each row sum to 1. We first consider the n-step transition probabilities P_ij^n in terms of
[P]. The probability of going from state i to state j in two steps is the sum over h of all
possible two-step walks, from i to h and from h to j. Using the Markov condition in (4.1),

    P_ij^2 = Σ_{h=1}^M P_ih P_hj .

It can be seen that this is just the ij term of the product of matrix [P] with itself; denoting
[P][P] as [P]^2, this means that P_ij^2 is the (i, j) element of the matrix [P]^2. Similarly, P_ij^n is
the ij element of the nth power of the matrix [P]. Since [P]^{m+n} = [P]^m [P]^n, this means
that

    P_ij^{m+n} = Σ_{h=1}^M P_ih^m P_hj^n .                        (4.17)

This is known as the Chapman-Kolmogorov equation. An efficient approach to compute
[P]^n (and thus P_ij^n) for large n is to multiply [P]^2 by [P]^2, then [P]^4 by [P]^4, and so forth,
and then multiply these binary powers together as needed.
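As a concrete illustration, here is a minimal sketch of this binary-powering scheme; the two-state matrix used at the end is a hypothetical example, not one from the text:

```python
import numpy as np

def matrix_power(P, n):
    """Compute [P]^n with O(log n) matrix multiplications by repeated squaring."""
    result = np.eye(P.shape[0])
    square = P.copy()              # holds [P], then [P]^2, [P]^4, [P]^8, ...
    while n > 0:
        if n & 1:                  # this binary power appears in the product
            result = result @ square
        square = square @ square
        n >>= 1
    return result

# Hypothetical two-state stochastic matrix (each row sums to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
P8 = matrix_power(P, 8)            # 3 squarings instead of 7 sequential multiplies
```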
The matrix [P]^n (i.e., the matrix of transition probabilities raised to the nth power) is very
important for a number of reasons. The i, j element of this matrix is P_ij^n, which is the
probability of being in state j at time n given state i at time 0. If memory of the past dies
out with increasing n, then we would expect the dependence on both n and i to disappear
in P_ij^n. This means, first, that [P]^n should converge to a limit as n → ∞, and, second, that
each row of [P]^n should tend to the same set of probabilities. If this convergence occurs
(and we later determine the circumstances under which it occurs), [P]^n and [P]^{n+1} will be
the same in the limit n → ∞, which means lim [P]^n = (lim [P]^n)[P]. If all the rows of lim [P]^n
are the same, equal to some row vector π = (π_1, π_2, . . . , π_M), this simplifies to π = π[P].
Since π is a probability vector (i.e., its components are the probabilities of being in the
various states in the limit n → ∞), its components must be nonnegative and sum to 1.
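This convergence of the rows is easy to observe numerically; the two-state matrix below is a hypothetical example:

```python
import numpy as np

# Hypothetical two-state stochastic matrix.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

P50 = np.linalg.matrix_power(P, 50)
# By n = 50 both rows have (numerically) converged to the same probability
# vector, so the dependence on the starting state i has disappeared.
print(P50)
```

Here the second eigenvalue of [P] is 0.4, so the rows agree to within 0.4^50 of each other, far below floating-point visibility.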
Definition 4.9. A steady-state probability vector (or a steady-state distribution) for a Markov
chain with transition matrix [P] is a vector π that satisfies

    π = π[P] ;   where  Σ_i π_i = 1 ;   π_i ≥ 0 , 1 ≤ i ≤ M.     (4.18)

The steady-state probability vector is also often called a stationary distribution. If a probability vector π satisfying (4.18) is taken as the initial probability assignment of the chain
at time 0, then that assignment is maintained forever. That is, if Pr{X0 = i} = π_i for all i,
then Pr{X1 = j} = Σ_i π_i P_ij = π_j for all j, and, by induction, Pr{Xn = j} = π_j for all j
and all n > 0.
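To find such a π in practice, one can solve the linear system π = π[P] together with the normalization Σ_i π_i = 1. A minimal sketch, using a hypothetical two-state matrix:

```python
import numpy as np

# Hypothetical two-state stochastic matrix.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
M = P.shape[0]

# pi = pi [P] says pi is a left eigenvector of [P] for eigenvalue 1, i.e.
# pi ([P] - I) = 0. Stack the normalization sum(pi) = 1 on top and solve
# the resulting (over-determined) system by least squares.
A = np.vstack([P.T - np.eye(M), np.ones((1, M))])
b = np.concatenate([np.zeros(M), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# A chain started in pi stays in pi forever.
assert np.allclose(pi @ P, pi)
```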
If [P ]n converges as above, then, for each starting state, the steadystate distribution is
reached asymptotically. There are a number of questions that must be answered for a
steadystate distribution as deﬁned above:
1. Does π = π [P ] always have a probability vector solution?
2. Does π = π [P ] have a unique probability vector solution?
3. Do the rows of [P ]n converge to a probability vector solution of π = π [P ]?
We first give the answers to these questions for finite-state Markov chains and then derive
them. First, (4.18) always has a solution (although this is not necessarily true for infinite-state chains). The answer to the second and third questions is simpler with the following
definition:

Definition 4.10. A unichain is a finite...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R. Srikant during the Spring '09 term at University of Illinois, Urbana-Champaign.