[P] can be partitioned as

    [P]  =  [ [P_r]     0
              [P_tr]  [P_tt] ]

a) Show that [P]^n can be partitioned as

    [P]^n  =  [ [P_r]^n      0
                [X_n]     [P_tt]^n ]

That is, the blocks on the diagonal are simply products of the corresponding blocks of [P], and the lower left block [X_n] is whatever it turns out to be.

CHAPTER 4. FINITE-STATE MARKOV CHAINS

b) Let Q_i be the probability that the chain will be in a recurrent state after K transitions, starting from state i, i.e., Q_i = Σ_{j≤M} P^K_{ij}. Show that Q_i > 0 for all transient i.

c) Let Q be the minimum Q_i over all transient i and show that P^{nK}_{ij} ≤ (1 − Q)^n for all transient i, j (i.e., show that [P_tt]^n approaches the all-zero matrix with increasing n).

d) Let π = (π_r, π_t) be a left eigenvector of [P] of eigenvalue 1 (if one exists). Show that π_t = 0 and show that π_r must be positive and be a left eigenvector of [P_r]. Thus show that π exists and is unique (within a scale factor).

e) Show that e is the unique right eigenvector of [P] of eigenvalue 1 (within a scale factor).
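As an informal numerical check of parts a) and c) (my own sketch, not part of the exercise; the 4-state matrix below is invented, with states 0–1 forming the recurrent class and states 2–3 transient), one can verify the block structure of [P]^n and watch the transient block decay:

```python
import numpy as np

# Invented 4-state chain: states 0,1 recurrent, states 2,3 transient.
# Partitioned as [P] = [[Pr, 0], [Ptr, Ptt]].
P = np.array([
    [0.7, 0.3, 0.0, 0.0],
    [0.4, 0.6, 0.0, 0.0],
    [0.2, 0.1, 0.5, 0.2],
    [0.1, 0.2, 0.3, 0.4],
])

Pn = np.linalg.matrix_power(P, 8)
Ptt_n = np.linalg.matrix_power(P[2:, 2:], 8)

# a) the upper-right block of [P]^n stays zero, and the diagonal
#    blocks of [P]^n are the powers of the corresponding blocks of [P]
assert np.allclose(Pn[:2, 2:], 0.0)
assert np.allclose(Pn[:2, :2], np.linalg.matrix_power(P[:2, :2], 8))
assert np.allclose(Pn[2:, 2:], Ptt_n)

# c) [Ptt]^n -> 0: here each entry is bounded by 0.7**n,
#    since both rows of Ptt sum to 0.7
print(np.linalg.matrix_power(P[2:, 2:], 50).max())
```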
Exercise 4.14. Generalize Exercise 4.13 to the case of a Markov chain [P ] with r recurrent
classes and one or more transient classes. In particular,
a) Show that [P] has exactly r linearly independent left eigenvectors, π^(1), π^(2), ..., π^(r), of eigenvalue 1, and that the ith can be taken as a probability vector that is positive on the ith recurrent class and zero elsewhere.
b) Show that [P] has exactly r linearly independent right eigenvectors, ν^(1), ν^(2), ..., ν^(r), of eigenvalue 1, and that the ith can be taken as a vector with ν^(i)_j equal to the probability that recurrent class i will ever be entered starting from state j.
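The characterization in b) can be checked numerically. The sketch below is my own (not from the text): the 5-state matrix is invented, with recurrent classes {0,1} and {2} and transient states {3,4}, and ν^(i) written here for the ith right eigenvector of eigenvalue 1.

```python
import numpy as np

# Invented 5-state chain: recurrent class 1 = {0,1},
# recurrent class 2 = {2}, transient states = {3,4}.
P = np.array([
    [0.5, 0.5, 0.0, 0.0, 0.0],
    [0.2, 0.8, 0.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0, 0.0],
    [0.3, 0.0, 0.1, 0.4, 0.2],
    [0.0, 0.2, 0.3, 0.1, 0.4],
])

# For transient states, nu^(i)_j = P(class i is ever entered | start j)
# solves the first-step equations (I - Ptt) nu_t = Ptr_i . 1.
Ptt = P[3:, 3:]
hit1 = np.linalg.solve(np.eye(2) - Ptt, P[3:, :2].sum(axis=1))
hit2 = np.linalg.solve(np.eye(2) - Ptt, P[3:, 2])
nu1 = np.concatenate([[1.0, 1.0, 0.0], hit1])   # 1 on class 1
nu2 = np.concatenate([[0.0, 0.0, 1.0], hit2])   # 1 on class 2

# Each nu^(i) is a right eigenvector of [P] of eigenvalue 1,
# and since the chain is eventually absorbed, they sum to e.
assert np.allclose(P @ nu1, nu1)
assert np.allclose(P @ nu2, nu2)
assert np.allclose(nu1 + nu2, 1.0)
```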
Exercise 4.15. Prove Theorem 4.8. Hint: Use Theorem 4.7 and the results of Exercise 4.13.
Exercise 4.16. Generalize Exercise 4.15 to the case of a Markov chain [P] with r aperiodic recurrent classes and one or more transient classes. In particular, using the left and right eigenvectors π^(1), π^(2), ..., π^(r) and ν^(1), ..., ν^(r) of Exercise 4.14, show that

    lim_{n→∞} [P]^n = Σ_{i=1}^r ν^(i) π^(i).

Exercise 4.17. Suppose a Markov chain with an irreducible matrix [P] is periodic with
period d and let T_i, 1 ≤ i ≤ d, be the ith subset in the sense of Theorem 4.3. Assume the states are numbered so that the first M_1 states are in T_1, the next M_2 are in T_2, and so forth. Thus [P] has the block form

    [P]  =  [   0     [P_1]    0     ...      0
                0       0    [P_2]   ...      0
                .       .      .      .       .
                0       0      0     ...  [P_{d−1}]
             [P_d]      0      0     ...      0     ]

where [P_i] has dimension M_i by M_{i+1} for i < d and M_d by M_1 for i = d.

4.8. EXERCISES

a) Show that [P]^d has the form

    [P]^d  =  [ [Q_1]    0     ...    0
                  0    [Q_2]   ...    0
                  .      .      .     .
                  0      0     ...  [Q_d] ]

where [Q_i] = [P_i][P_{i+1}] · · · [P_d][P_1] · · · [P_{i−1}].

b) Show that [Q_i] is the matrix of an ergodic Markov chain, so that, with the eigenvectors defined in Exercises 4.14 and 4.16, lim_{n→∞} [P]^{nd} = Σ_i ν^(i) π^(i).

c) Show that π^(i), the left eigenvector of [Q_i] of eigenvalue 1, satisfies π^(i) [P_i] = π^(i+1) for i < d and π^(d) [P_d] = π^(1).

d) Let α = 2π√−1/d and let π^(k) = (π^(1), π^(2) e^{αk}, π^(3) e^{2αk}, ..., π^(d) e^{(d−1)αk}). Show that π^(k) is a left eigenvector of [P] of eigenvalue e^{−αk}.
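A quick numerical illustration of the block-diagonal form in a) and the rotated eigenvalues in d). This is my own sketch; the period-3 chain below, with T_1 = {0}, T_2 = {1,2}, T_3 = {3}, is invented.

```python
import numpy as np

# Invented chain with period d = 3: T1 = {0}, T2 = {1,2}, T3 = {3}.
# Each Ti only leads to T_{i+1} (cyclically).
P = np.array([
    [0.0, 0.6, 0.4, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0, 0.0],
])

# a) [P]^3 is block diagonal: all blocks between distinct Ti vanish
P3 = np.linalg.matrix_power(P, 3)
assert np.allclose(P3[0, 1:], 0.0)
assert np.allclose(P3[1:3, [0, 3]], 0.0)
assert np.allclose(P3[3, :3], 0.0)

# d) the spectrum of [P] contains e^{-alpha k} for k = 0,...,d-1,
#    i.e., eigenvalue 1 rotated around the unit circle
alpha = 2j * np.pi / 3
eigs = np.linalg.eigvals(P)
for k in range(3):
    assert np.abs(eigs - np.exp(-alpha * k)).min() < 1e-6
```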
Exercise 4.18. (Continuation of Exercise 4.17.)

a) Show that, with the eigenvectors defined in Exercises 4.14 and 4.16,

    lim_{n→∞} [P]^{nd} [P] = Σ_{i=1}^d ν^(i) π^(i+1),

where π^(d+1) is taken to be π^(1).

b) Show that, for 1 ≤ j < d,

    lim_{n→∞} [P]^{nd} [P]^j = Σ_{i=1}^d ν^(i) π^(i+j),

where π^(d+m) is taken to be π^(m) for 1 ≤ m < d.

c) Show that

    lim_{n→∞} [P]^{nd} ( I + [P] + · · · + [P]^{d−1} ) = ( Σ_{i=1}^d ν^(i) ) ( Σ_{i=1}^d π^(i) ).

d) Show that

    lim_{n→∞} (1/d) ( [P]^n + [P]^{n+1} + · · · + [P]^{n+d−1} ) = e π,

where π is the steady-state probability vector for [P]. Hint: Show that e = Σ_i ν^(i) and π = (1/d) Σ_i π^(i).

e) Show that the above result is also valid for periodic unichains.

Exercise 4.19. Assume a friend has developed an excellent program for finding the steady-state probabilities for finite-state Markov chains. More precisely, given the transition matrix
[P], the program returns lim_{n→∞} P^n_{ii} for each i. Assume all chains are aperiodic.
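Such a program is easy to sketch (a minimal stand-in of my own, not from the text): for an aperiodic chain, lim_{n→∞} P^n_{ii} can be approximated by taking a large matrix power.

```python
import numpy as np

def steady_state_diag(P, n=200):
    """Approximate lim_{n->inf} P^n_{ii} for each i by a large
    matrix power (valid for aperiodic chains, as assumed)."""
    return np.diag(np.linalg.matrix_power(P, n))

# Invented 3-state aperiodic chain; its stationary vector is
# (0.25, 0.5, 0.25) by detailed balance.
P = np.array([
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
])
pi = steady_state_diag(P)
assert np.allclose(pi, [0.25, 0.5, 0.25])
```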
a) You want to find the expected time to first reach a given state k starting from a different state m for a Markov chain with transition matrix [P]. You modify the matrix to [P′] where P′_km = 1, P′_kj = 0 for j ≠ m, and P′_ij = P_ij otherwise. How do you find the desired first-passage time from the program output given [P′] as an input? (Hint: The times at which a Markov chain enters any given state can be considered as renewals in a (perhaps delayed) renewal process.)
b) Using the same [P′] as the program input, how can you find the expected number of returns to state m before the first passage to state k?
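Parts a) and b) can be checked numerically. The sketch below is my own (the 4-state chain and the choice k = 3, m = 0 are invented): the renewal argument gives E[T_{m→k}] = 1/π′_k − 1 and expected returns π′_m/π′_k − 1, and both are compared against direct first-step analysis on the original [P].

```python
import numpy as np

def steady_state(P):
    """Stationary vector of P: solve pi P = pi with sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[n] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Invented 4-state chain; we want E[time to first reach k from m].
P = np.array([
    [0.20, 0.30, 0.30, 0.20],
    [0.10, 0.40, 0.40, 0.10],
    [0.30, 0.30, 0.10, 0.30],
    [0.25, 0.25, 0.25, 0.25],
])
k, m = 3, 0

# Modified chain [P']: from k, jump to m with probability 1.
Pp = P.copy()
Pp[k] = 0.0
Pp[k, m] = 1.0
pi = steady_state(Pp)

# a) Entries to k are renewals; a cycle is 1 step (k -> m) plus the
#    first passage from m to k, so E[T_mk] = 1/pi'_k - 1.
T_renewal = 1.0 / pi[k] - 1.0

# Direct first-step analysis on the original [P]:
#   v_i = 1 + sum_{j != k} P_ij v_j  for i != k
idx = [i for i in range(4) if i != k]
v = np.linalg.solve(np.eye(3) - P[np.ix_(idx, idx)], np.ones(3))
assert np.isclose(T_renewal, v[idx.index(m)])

# b) Expected returns to m before reaching k = pi'_m / pi'_k - 1.
returns = pi[m] / pi[k] - 1.0

# Cross-check: the number of returns is geometric with parameter
# p = P(reach k before returning to m), so its mean is (1-p)/p.
idx2 = [i for i in range(4) if i not in (k, m)]
h = np.linalg.solve(np.eye(2) - P[np.ix_(idx2, idx2)], P[idx2, k])
p = P[m, k] + P[m, idx2] @ h
assert np.isclose(returns, (1 - p) / p)
```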
c) Suppose, for the same Markov chain [P ] and th...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R.srikant during the Spring '09 term at University of Illinois, Urbana Champaign.