u ≠ 0, it follows from the definition of g(x) that |µ| ≤ g(u). From (4.20), g(u) ≤ λ, so |µ| ≤ λ.
Next assume that |µ| = λ. From (4.25), then, λu ≤ [A]u, so u achieves the maximization in (4.20) and part 1 of the theorem asserts that λu = [A]u. This means that (4.25) is satisfied with equality, and it follows from this (see Exercise 4.11) that x = βu for some (perhaps complex) scalar β. Thus x is an eigenvector of λ, and µ = λ. Thus |µ| = λ is impossible for µ ≠ λ, so λ > |µ| for all eigenvalues µ ≠ λ.
Property 3: Let x be any eigenvector of λ. Property 2 showed that x = βu where u_i = |x_i| for each i and u is a nonnegative eigenvector of eigenvalue λ. Since ν > 0, we can choose α > 0 so that ν − αu ≥ 0 and ν_i − αu_i = 0 for some i. Now ν − αu is either identically 0 or else an eigenvector of eigenvalue λ, and thus strictly positive. Since ν_i − αu_i = 0 for some i, ν − αu = 0. Thus u and x are scalar multiples of ν, completing the proof.
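As a numerical sanity check (not part of the proof), the Perron properties just established for a strictly positive matrix can be verified with a short script; the matrix below is an arbitrary illustrative choice.

```python
import numpy as np

# An arbitrary strictly positive matrix (illustrative choice).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)

# The largest-magnitude eigenvalue is real and positive ...
idx = np.argmax(np.abs(eigvals))
lam = eigvals[idx].real

# ... and strictly dominates the magnitude of every other eigenvalue.
others = np.delete(eigvals, idx)
assert all(abs(mu) < lam for mu in others)

# Its eigenvector can be scaled to be strictly positive.
v = eigvecs[:, idx].real
v = v / v[np.argmax(np.abs(v))]   # normalize so the largest entry is +1
assert np.all(v > 0)

print(lam)
```

Here λ = (5 + √5)/2 ≈ 3.618, with the other eigenvalue ≈ 1.382 strictly smaller in magnitude, as property 2 requires.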
Next we apply the results above to a more general type of nonnegative matrix called an irreducible matrix. Recall that we analyzed the classes of a finite-state Markov chain in terms of a directed graph where the nodes represent the states of the chain and a directed arc goes from i to j if P_{ij} > 0. We can draw the same type of directed graph for an arbitrary nonnegative matrix [A]; i.e., a directed arc goes from i to j if A_{ij} > 0.
Definition 4.12. An irreducible matrix is a nonnegative matrix such that for every pair of nodes i, j in its graph, there is a walk from i to j.
For stochastic matrices, an irreducible matrix is thus the matrix of a recurrent Markov chain. If we denote the i, j element of [A]^n by A^n_{ij}, then we see that A^n_{ij} > 0 iff there is a walk of length n from i to j in the graph. If [A] is irreducible, a walk exists from any i to any j (including j = i) with length at most M, since the walk need visit each other node at most once. Thus A^n_{ij} > 0 for some n, 1 ≤ n ≤ M, and ∑_{n=1}^{M} A^n_{ij} > 0. The key to analyzing irreducible matrices is then the fact that the matrix [B] = ∑_{n=1}^{M} [A]^n is strictly positive.
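A quick sketch of this fact in code (matrix chosen for illustration): for an irreducible but not strictly positive [A], every individual power [A]^n can contain zeros, yet the sum B = ∑_{n=1}^{M} [A]^n is strictly positive.

```python
import numpy as np

# Irreducible matrix on M = 3 nodes: a directed cycle 1 -> 2 -> 3 -> 1
# (illustrative choice). It is irreducible but has many zero entries.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
M = A.shape[0]

# No single power A^n is strictly positive here (the graph has period 3) ...
powers = [np.linalg.matrix_power(A, n) for n in range(1, M + 1)]
assert all((P == 0).any() for P in powers)

# ... but the sum over n = 1..M is strictly positive, as claimed.
B = sum(powers)
assert np.all(B > 0)
print(B)
```

For this cycle, A + A^2 + A^3 is the all-ones matrix: each node reaches each node (itself included) by a walk of length at most 3.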
Theorem 4.6 (Frobenius). Let [A] ≥ 0 be an M by M irreducible matrix and let λ be the supremum in (4.20) and (4.21). Then the supremum is achieved as a maximum at some vector ν and the pair λ, ν have the following properties:
1. λν = [A]ν and ν > 0.
2. For any other eigenvalue µ of [A], |µ| ≤ λ.
3. If x satisfies λx = [A]x, then x = βν for some (possibly complex) number β.
Discussion: Note that this is almost the same as the Perron theorem, except that [A] is irreducible (but not necessarily positive), and the magnitudes of the other eigenvalues need not be strictly less than λ. When we look at recurrent matrices of period d, we shall find that there are d − 1 other eigenvalues of magnitude equal to λ. Because of the possibility of other eigenvalues with the same magnitude as λ, we refer to λ as the largest real eigenvalue of [A].
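This can be seen concretely. A minimal example (chosen here for illustration) is the period-2 matrix [[0, 1], [1, 0]], which is irreducible but not positive: its largest real eigenvalue is λ = 1, and there is d − 1 = 1 other eigenvalue, −1, of the same magnitude.

```python
import numpy as np

# Period-2 irreducible matrix: the two states alternate 1 <-> 2.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

eigvals = np.linalg.eigvals(A)
lam = max(eigvals.real)          # largest real eigenvalue, lam = 1

# d = 2, so there is exactly d - 1 = 1 other eigenvalue of magnitude lam.
others = [mu for mu in eigvals if not np.isclose(mu.real, lam)]
assert len(others) == 1
assert np.isclose(abs(others[0]), lam)
print(sorted(eigvals.real))
```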
Proof* Property 1: We first establish property 1 for a particular choice of λ and ν and then show that this choice satisfies the optimization problem in (4.20) and (4.21). Let [B] = ∑_{n=1}^{M} [A]^n > 0. Using Theorem 4.5, we let λ_B be the largest eigenvalue of [B] and let ν > 0 be the corresponding right eigenvector. Then [B]ν = λ_B ν. Also, since [B][A] = [A][B], we have [B]{[A]ν} = [A][B]ν = λ_B [A]ν. Thus [A]ν is a right eigenvector for eigenvalue λ_B of [B] and thus equal to ν multiplied by some positive scale factor. Define this scale factor to be λ, so that [A]ν = λν and λ > 0. We can relate λ to λ_B by [B]ν = ∑_{n=1}^{M} [A]^n ν = (λ + λ^2 + · · · + λ^M)ν. Thus λ_B = λ + λ^2 + · · · + λ^M.
Next, for any nonzero x ≥ 0, let g > 0 be the largest number such that [A]x ≥ gx. Multiplying both sides of this by [A], we see that [A]^2 x ≥ g[A]x ≥ g^2 x. Similarly, [A]^i x ≥ g^i x for each i ≥ 1, so it follows that [B]x ≥ (g + g^2 + · · · + g^M)x. From the optimization property of λ_B in Theorem 4.5, this shows that λ_B ≥ g + g^2 + · · · + g^M. Since λ_B = λ + λ^2 + · · · + λ^M, we conclude that λ ≥ g, showing that λ, ν solve the optimization problem for [A] in (4.20) and (4.21).
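A hedged sketch of this optimization in code: assuming, consistent with the surrounding development, that (4.20) defines λ as the supremum over nonzero x ≥ 0 of g(x) = min over i with x_i > 0 of ([A]x)_i / x_i, the script below checks on a small irreducible matrix (an illustrative choice) that the eigenvector ν achieves g(ν) = λ and that random nonnegative x never exceed it.

```python
import numpy as np

def g(A, x):
    """Largest g with A @ x >= g * x, i.e. the minimum of (A @ x)_i / x_i
    over indices i with x_i > 0 (definition assumed from (4.20))."""
    support = x > 0
    return np.min((A @ x)[support] / x[support])

# Small irreducible nonnegative matrix (illustrative choice); lam = 3.
A = np.array([[1.0, 2.0],
              [3.0, 0.0]])

# lam and nu: largest real eigenvalue and its positive right eigenvector.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam = eigvals[k].real
nu = np.abs(eigvecs[:, k].real)

# The eigenvector achieves the supremum: g(nu) = lam ...
assert np.isclose(g(A, nu), lam)

# ... and no nonnegative x does better, matching lam = sup g(x).
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.random(2)
    assert g(A, x) <= lam + 1e-9
print(lam)
```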
Properties 2 and 3: The first half of the proof of property 2 in Theorem 4.5 applies here also to show that |µ| ≤ λ for all eigenvalues µ of [A]. Finally, let x be an arbitrary vector satisfying [A]x = λx. Then, from the argument above, x is also a right eigenvector of [B] with eigenvalue λ_B, so from Theorem 4.5, x must be a scalar multiple of ν, completing the proof.
Corollary 4.1. The largest real eigenvalue λ of an irreducible matrix [A] ≥ 0 has a positive left eigenvector π. π is the unique left eigenvector of λ (within a...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R.srikant during the Spring '09 term at University of Illinois, Urbana Champaign.