scale factor) and is the only nonnegative nonzero vector (within a scale factor) that satisfies λπ ≤ π[A].
Proof: A left eigenvector of [A] is a right eigenvector (transposed) of [A]^T. The graph corresponding to [A]^T is the same as that for [A] with all the arc directions reversed, so that all pairs of nodes still communicate and [A]^T is irreducible. Since [A] and [A]^T have the same eigenvalues, the corollary is just a restatement of the theorem.

Corollary 4.2. Let λ be the largest real eigenvalue of an irreducible matrix [A] ≥ 0 and let the right and left eigenvectors of λ be ν > 0 and π > 0. Then, within a scale factor, ν is the only nonnegative right eigenvector of [A] (i.e., no other eigenvalue has a nonnegative eigenvector). Similarly, within a scale factor, π is the only nonnegative left eigenvector of [A].
Proof: Theorem 4.6 asserts that ν is the unique right eigenvector (within a scale factor) of the largest real eigenvalue λ, so suppose that u is a right eigenvector of some other eigenvalue µ ≠ λ. Letting π be the left eigenvector of λ, we have π[A]u = λπu and also π[A]u = µπu. Thus (λ − µ)πu = 0 and, since µ ≠ λ, πu = 0. Since π > 0, u cannot be nonnegative and nonzero. The same argument shows the uniqueness of π.
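A minimal numeric sketch of this orthogonality argument; the integer matrix and its eigen-data are hypothetical (worked out by hand), not from the text:

```python
# Hypothetical irreducible nonnegative matrix with integer entries, so all
# arithmetic below is exact. Its eigenvalues are 9 (the Perron eigenvalue) and -3.
A = [[2, 7],
     [5, 4]]

pi = [5, 7]    # left eigenvector of lambda = 9:  pi[A] = 9*pi
u  = [7, -5]   # right eigenvector of mu = -3:    [A]u  = -3*u

# Check the eigenvector claims.
print([pi[0]*A[0][j] + pi[1]*A[1][j] for j in range(2)])  # [45, 63] = 9*pi
print([A[i][0]*u[0] + A[i][1]*u[1] for i in range(2)])    # [-21, 15] = -3*u

# pi u = 0, so u must have mixed signs: it cannot be nonnegative and nonzero.
print(pi[0]*u[0] + pi[1]*u[1])  # 0
```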
Corollary 4.3. Let [P] be a stochastic irreducible matrix (i.e., the matrix of a recurrent Markov chain). Then λ = 1 is the largest real eigenvalue of [P], e = (1, 1, . . . , 1)^T is the right eigenvector of λ = 1, unique within a scale factor, and there is a unique probability vector π > 0 that is a left eigenvector of λ = 1.

Proof: Since each row of [P] adds up to 1, [P]e = e. Corollary 4.2 asserts the uniqueness of e and the fact that λ = 1 is the largest real eigenvalue, and Corollary 4.1 asserts the uniqueness of π.
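Corollary 4.3 is easy to check numerically. A minimal sketch with a hypothetical 3-state irreducible stochastic matrix (dyadic entries so the row sums are exact in floating point); the power iteration π ← π[P] used to find the stationary vector is a standard method, not one from the text:

```python
# Hypothetical irreducible stochastic matrix: every row sums to 1.
P = [[0.5,   0.25,  0.25],
     [0.25,  0.5,   0.25],
     [0.375, 0.375, 0.25]]

# e = (1, 1, 1)^T is a right eigenvector of lambda = 1: [P]e = e.
e = [1.0, 1.0, 1.0]
print([sum(P[i][j] * e[j] for j in range(3)) for i in range(3)])  # [1.0, 1.0, 1.0]

# Power iteration pi <- pi[P] converges (the chain is also aperiodic) to the
# unique probability vector pi > 0 satisfying pi = pi[P].
pi = [1/3, 1/3, 1/3]
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
print(round(sum(pi), 6), all(x > 0 for x in pi))  # 1.0 True
```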
The proof above shows that every stochastic matrix, whether irreducible or not, has an eigenvalue λ = 1 with e = (1, . . . , 1)^T as a right eigenvector. In general, a stochastic matrix with r recurrent classes has r independent nonnegative right eigenvectors and r independent nonnegative left eigenvectors; the left eigenvectors can be taken as the steady-state probability vectors within the r recurrent classes (see Exercise 4.14).

The following corollary, proved in Exercise 4.13, extends Corollary 4.3 to unichains.
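The r-class statement can be illustrated numerically; a minimal sketch with a hypothetical two-class chain (the matrix and class structure are illustrative, not from the text):

```python
# Hypothetical stochastic matrix with r = 2 recurrent classes: states {0, 1}
# and {2, 3} do not communicate (dyadic entries, so row sums are exact).
P = [[0.25,  0.75,  0.0,   0.0],
     [0.5,   0.5,   0.0,   0.0],
     [0.0,   0.0,   0.875, 0.125],
     [0.0,   0.0,   0.375, 0.625]]

# Two independent nonnegative right eigenvectors of lambda = 1:
# the indicator vectors of the two recurrent classes.
for v in ([1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]):
    Pv = [sum(P[i][j] * v[j] for j in range(4)) for i in range(4)]
    print(Pv == v)  # True
```

The corresponding two independent left eigenvectors are the stationary vectors of the two 2×2 blocks, each padded with zeros on the other class.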
Corollary 4.4. Let [P] be the transition matrix of a unichain. Then λ = 1 is the largest real eigenvalue of [P], e = (1, 1, . . . , 1)^T is the right eigenvector of λ = 1, unique within a scale factor, and there is a unique probability vector π ≥ 0 that is a left eigenvector of λ = 1; π_i > 0 for each recurrent state i and π_i = 0 for each transient state.
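A minimal sketch of Corollary 4.4 on a hypothetical unichain; the matrix is illustrative, and its stationary vector (0, 7/13, 6/13) was worked out by hand:

```python
# Hypothetical unichain: state 0 is transient, states 1 and 2 form the
# single recurrent class.
P = [[0.5, 0.25, 0.25],
     [0.0, 0.4,  0.6 ],
     [0.0, 0.7,  0.3 ]]

# Power iteration pi <- pi[P]; the mass on the transient state decays as 0.5^n.
pi = [1/3, 1/3, 1/3]
for _ in range(300):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# pi_i > 0 on the recurrent class and pi_0 = 0 on the transient state.
print([round(x, 6) for x in pi])  # [0.0, 0.538462, 0.461538]
```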
Corollary 4.5. The largest real eigenvalue λ of an irreducible matrix [A] ≥ 0 is a strictly increasing function of each component of [A].

Proof: For a given irreducible [A], let [B] satisfy [B] ≥ [A], [B] ≠ [A]. Let λ be the largest real eigenvalue of [A] and ν > 0 be the corresponding right eigenvector. Then λν = [A]ν ≤ [B]ν, but λν ≠ [B]ν. Let µ be the largest real eigenvalue of [B], which is also irreducible. If µ ≤ λ, then µν ≤ λν ≤ [B]ν, and µν ≠ [B]ν, which is a contradiction of property 1 in Theorem 4.6. Thus, µ > λ.
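The strict monotonicity can be observed numerically. Power iteration is a standard estimator for the Perron eigenvalue (not a method from the text), and the matrices below are hypothetical, with eigenvalues worked out by hand (about 0.744 for [A] and exactly 0.9 for [B]):

```python
def perron_eig(A, iters=500):
    """Estimate the largest real (Perron) eigenvalue of a positive matrix
    by power iteration, normalizing by the largest component each step."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

# [B] >= [A] with [B] != [A]: one component of [A] is increased.
A = [[0.2, 0.7],
     [0.5, 0.1]]
B = [[0.2, 0.7],
     [0.5, 0.4]]

print(perron_eig(A) < perron_eig(B))  # True
```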
We are now ready to study the asymptotic behavior of [A]^n. The simplest and cleanest result holds for [A] > 0. We establish this in the following corollary and then look at the case of greatest importance, that of a stochastic matrix for an ergodic Markov chain. More general cases are treated in Exercises 4.13 and 4.14.

Corollary 4.6. Let λ be the largest eigenvalue of [A] > 0 and let π and ν be the positive left and right eigenvectors of λ, normalized so that πν = 1. Then

    lim_{n→∞} [A]^n / λ^n = νπ.        (4.26)

Proof*: Since ν > 0 is a column vector and π > 0 is a row vector, νπ is a positive matrix
of the same dimension as [A]. Since [A] > 0, we can define a matrix [B] = [A] − ανπ which is positive for small enough α > 0. Note that π and ν are left and right eigenvectors of [B] with eigenvalue µ = λ − α. We then have µ^n ν = [B]^n ν, which when premultiplied by π yields

    (λ − α)^n = π[B]^n ν = ∑_i ∑_j π_i B^n_{ij} ν_j,

where B^n_{ij} is the i, j element of [B]^n. Since each term in the above summation is positive, we have (λ − α)^n ≥ π_i B^n_{ij} ν_j, and therefore B^n_{ij} ≤ (λ − α)^n/(π_i ν_j). Thus, for each i, j, lim_{n→∞} B^n_{ij} λ^{−n} = 0, and therefore lim_{n→∞} [B]^n λ^{−n} = 0. Next we use a convenient matrix identity: for any eigenvalue λ of a matrix [A], and any corresponding right and left eigenvectors ν and π, normalized so that πν = 1, we have {[A] − λνπ}^n = [A]^n − λ^n νπ (see Exercise 4.12). Applying the same identity to [B], we have {[B] − µνπ}^n = [B]^n − µ^n νπ. Finally, since [B] = [A] − ανπ, we have [B] − µνπ = [A] − λνπ, so that

    [A]^n − λ^n νπ = [B]^n − µ^n νπ.
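The limit in (4.26) can be checked numerically. A minimal sketch with a hypothetical 2×2 positive matrix whose eigen-data are easy to work out by hand: λ = 0.9, ν = (1, 1)^T, and π = (1/2.4, 1.4/2.4), so that πν = 1:

```python
def mat_mul(A, B):
    # Product of two square matrices of the same size.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical positive matrix; by hand: lam = 0.9, nu = (1, 1)^T, and the
# normalized left eigenvector is pi = (1/2.4, 1.4/2.4), so pi nu = 1.
A = [[0.2, 0.7],
     [0.5, 0.4]]
lam = 0.9

An = [row[:] for row in A]
for _ in range(59):               # An = [A]^60
    An = mat_mul(An, A)

# [A]^n / lam^n -> nu pi, a rank-one matrix whose rows are all equal to pi.
limit = [[round(x / lam**60, 4) for x in row] for row in An]
print(limit)  # [[0.4167, 0.5833], [0.4167, 0.5833]]
```

The convergence is geometric: the second eigenvalue here is −0.3, so the error shrinks like (0.3/0.9)^n.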
This note was uploaded on 09/27/2010 for the course EE 229, taught by Professor R. Srikant during the Spring '09 term at the University of Illinois, Urbana-Champaign.