-state Markov chain that contains a single recurrent
class plus, perhaps, some transient states. An ergodic unichain is a unichain for which the
recurrent class is ergodic.
A unichain, as we shall see, is the natural generalization of a recurrent chain: it allows for
some initial transient behavior without disturbing the long-term asymptotic behavior of the
underlying recurrent chain.
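As a numerical illustration of this point (a sketch using NumPy; the three-state matrix below is an arbitrary example, not one from the text), we can build a unichain with one transient state and an ergodic two-state recurrent class, and watch the rows of [P]^n agree for large n regardless of the starting state:

```python
import numpy as np

# Illustrative 3-state unichain: state 0 is transient, states {1, 2}
# form an ergodic recurrent class (matrix values chosen arbitrarily).
P = np.array([
    [0.5, 0.3, 0.2],   # from the transient state, mass leaks into {1, 2}
    [0.0, 0.6, 0.4],   # recurrent class: no transitions back to state 0
    [0.0, 0.7, 0.3],
])

Pn = np.linalg.matrix_power(P, 50)
print(np.round(Pn, 6))
# All three rows agree: the transient start has no effect on the
# long-term asymptotics, which are those of the recurrent class.
```

The recurrent class here has stationary probabilities (7/11, 4/11), and every row of [P]^50 is numerically equal to (0, 7/11, 4/11).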
The answer to the second question above is that the solution to (4.18) is unique iff [P] is
the transition matrix of a unichain. If there are r recurrent classes, then π = π[P] has r
linearly independent solutions. For the third question, each row of [P]^n converges to the
unique solution of (4.18) if [P] is the transition matrix of an ergodic unichain. If there are
multiple recurrent classes, but all of them are aperiodic, then [P]^n still converges, but to
a matrix with non-identical rows. If the Markov chain has one or more periodic recurrent
classes, then [P]^n does not converge.
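These answers can be checked numerically. The sketch below (NumPy, with an arbitrary illustrative matrix not taken from the text) builds a chain with r = 2 aperiodic recurrent classes, exhibits two linearly independent solutions of π = π[P], and shows that [P]^n converges to a matrix with non-identical rows:

```python
import numpy as np

# Illustrative chain with r = 2 aperiodic recurrent classes: {0,1} and {2,3}.
P = np.array([
    [0.9, 0.1, 0.0, 0.0],
    [0.2, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.5, 0.5],
])

# Each recurrent class contributes one solution of pi = pi P,
# giving r = 2 linearly independent stationary vectors.
pi_a = np.array([2/3, 1/3, 0.0, 0.0])   # supported on class {0, 1}
pi_b = np.array([0.0, 0.0, 0.5, 0.5])   # supported on class {2, 3}
print(np.allclose(pi_a @ P, pi_a), np.allclose(pi_b @ P, pi_b))

# [P]^n converges (both classes are aperiodic), but the limiting
# matrix has non-identical rows: the row depends on the starting class.
print(np.round(np.linalg.matrix_power(P, 100), 6))
```

Any convex combination of pi_a and pi_b also solves π = π[P], which is exactly the non-uniqueness described above.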
We first look at these answers from the standpoint of matrix theory and then proceed in
Chapter 5 to look at the more general problem of Markov chains with a countably infinite
number of states. There we use renewal theory to answer these same questions (and to
discover the differences that occur for infinite-state Markov chains). The matrix theory
approach is useful computationally and also has the advantage of telling us something
about rates of convergence. The approach using renewal theory is very simple (given an
understanding of renewal processes), but is more abstract.

4.3.1 The eigenvalues and eigenvectors of [P]

A convenient way of dealing with the nth power of a matrix is to find the eigenvalues and
eigenvectors of the matrix.
Definition 4.11. The row vector π is a left eigenvector of [P] of eigenvalue λ if π ≠ 0
and π[P] = λπ. The column vector ν is a right eigenvector of eigenvalue λ if ν ≠ 0 and
[P]ν = λν.
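This definition is easy to explore numerically. A minimal sketch (NumPy, with an arbitrary two-state stochastic matrix chosen for illustration): `np.linalg.eig` returns right eigenvectors, and the left eigenvectors of [P] are the right eigenvectors of the transpose, since π[P] = λπ is equivalent to [P]ᵀπᵀ = λπᵀ.

```python
import numpy as np

# Arbitrary illustrative 2-state stochastic matrix.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Right eigenvectors: [P] nu = lambda nu.
eigvals, right = np.linalg.eig(P)
i = int(np.argmin(np.abs(eigvals - 1.0)))
nu = right[:, i]
print(np.round(eigvals, 6))          # contains 1.0 and lambda2 = 0.3

# Left eigenvectors of [P] are right eigenvectors of [P] transposed.
eigvals_T, left = np.linalg.eig(P.T)
j = int(np.argmin(np.abs(eigvals_T - 1.0)))
v = np.real(left[:, j])

# The left eigenvector of eigenvalue 1, scaled to sum to 1,
# is the steady-state vector pi.
pi = v / v.sum()
print(pi)                            # approximately (4/7, 3/7)
```

Note that `eig` does not guarantee any eigenvalue ordering, so the eigenvalue-1 column must be located separately for [P] and [P]ᵀ.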
We first treat the special case of a Markov chain with two states. Here the eigenvalues and
eigenvectors can be found by elementary (but slightly tedious) algebra. The eigenvector
equations can be written out as

    π1 P11 + π2 P21 = λπ1        P11 ν1 + P12 ν2 = λν1
    π1 P12 + π2 P22 = λπ2        P21 ν1 + P22 ν2 = λν2        (4.19)

These equations have a non-zero solution iff the matrix [P − λI], where [I] is the identity
matrix, is singular (i.e., there must be a non-zero ν for which [P − λI]ν = 0). Thus λ must
be such that the determinant of [P − λI], namely (P11 − λ)(P22 − λ) − P12 P21, is equal
to 0. Solving this quadratic equation in λ, we find that λ has two solutions, λ1 = 1 and
λ2 = 1 − P12 − P21. Assume initially that P12 and P21 are not both 0. Then the solutions
for the left and right eigenvectors, π^(1) and ν^(1) of λ1, and π^(2) and ν^(2) of λ2, are given by
    π1^(1) = P21/(P12+P21)    π2^(1) = P12/(P12+P21)    ν1^(1) = 1                  ν2^(1) = 1
    π1^(2) = 1                π2^(2) = −1               ν1^(2) = P12/(P12+P21)      ν2^(2) = −P21/(P12+P21).

150 CHAPTER 4. FINITE-STATE MARKOV CHAINS
These solutions contain an arbitrary normalization factor. Now let [Λ] be the diagonal
matrix of eigenvalues,

    [Λ] = [ λ1   0
            0    λ2 ],

and let [U] be the matrix with columns ν^(1) and ν^(2). Then the two right eigenvector equations in
(4.19) can be combined compactly as [P][U] = [U][Λ]. It turns out (given the way we have
normalized the eigenvectors) that the inverse of [U] is just the matrix whose rows are the
left eigenvectors of [P] (this can be verified by direct calculation, and we show later that any
right eigenvector of one eigenvalue must be orthogonal to any left eigenvector of another
eigenvalue). We then see that [P] = [U][Λ][U]^−1 and consequently [P]^n = [U][Λ]^n[U]^−1.
Multiplying this out, we get

    [P]^n = [ π1 + π2 λ2^n    π2 − π2 λ2^n
              π1 − π1 λ2^n    π2 + π1 λ2^n ]

where π1 = P21/(P12 + P21) and π2 = 1 − π1.
P12 + P21 Recalling that ∏2 = 1 − P12 − P21 , we see that |∏2 | ≤ 1. If P12 = P21 = 0, then ∏2 = 1 so
that [P ] and [P ]n are simply identity matrices. If P12 = P21 = 1, then ∏2 = −1 so that
[P ]n alternates between the identity matrix for n even and [P ] for n odd. In all other cases,
|∏2 | < 1 and [P ]n approaches the matrix whose rows are both equal to π .
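All three regimes for λ2 are easy to demonstrate. A sketch (NumPy; the value pair 0.3, 0.4 in the last case is an arbitrary illustrative choice):

```python
import numpy as np

# lambda2 = 1 - P12 - P21 controls the behavior of [P]^n.
def two_state(P12, P21):
    """Build the 2x2 transition matrix with the given off-diagonal rates."""
    return np.array([[1 - P12, P12],
                     [P21, 1 - P21]])

# Case 1: P12 = P21 = 0, so lambda2 = 1: [P] and [P]^n are the identity.
assert np.allclose(np.linalg.matrix_power(two_state(0, 0), 17), np.eye(2))

# Case 2: P12 = P21 = 1, so lambda2 = -1: [P]^n alternates with period 2.
Q = two_state(1, 1)
assert np.allclose(np.linalg.matrix_power(Q, 2), np.eye(2))   # n even
assert np.allclose(np.linalg.matrix_power(Q, 3), Q)           # n odd

# Case 3: otherwise |lambda2| < 1, and every row of [P]^n tends to pi.
P12, P21 = 0.3, 0.4
P = two_state(P12, P21)
pi = np.array([P21, P12]) / (P12 + P21)
assert np.allclose(np.linalg.matrix_power(P, 60), np.vstack([pi, pi]))
print("all three cases behave as claimed; pi =", np.round(pi, 6))
```

In case 3 the error in each entry of [P]^n decays like λ2^n, which is the rate-of-convergence information the matrix approach provides.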
Parts of this special case generalize to an arbitrary finite number of states. In particular,
λ = 1 is always an eigenvalue, and the vector e whose components are all equal to 1 is
always a right eigenvector of λ = 1 (this follows immediately from the fact that each row of
a stochastic matrix sums to 1). Unfortunately, not all stochastic matrices can be represented
in the form [P] = [U][Λ][U]^−1 (since M independent right eigenvectors need not exist; see
Exercise 4.9). In general, the diagonal matrix of eigenvalues in [P] = [U][Λ][U]^−1 must be
replaced by something called a Jordan form, which does not easily lead us to the desired
results. In what follows, we develop the powerful Perron and Frobenius...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R.srikant during the Spring '09 term at University of Illinois, Urbana Champaign.