show that (5.5) has no smaller solution (see Exercise 5.1). Note that
you have shown that the chain is transient for p > 1/2 and that it is recurrent for p = 1/2.
b) Under the same conditions as part (a), show that Fij(∞) equals 2(1 − p) for j = i, equals
[(1 − p)/p]^(i−j) for i > j, and equals 1 for i < j.
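As a quick sanity check (not part of the exercise), the return probability Fii(∞) = 2(1 − p) can be estimated by Monte Carlo for a simple random walk that steps up with probability p > 1/2. The parameters, trial counts, and truncation below are illustrative choices, not from the text; truncating long walks is safe here because an unreturned walk drifts upward and is very unlikely to come back.

```python
import random

def return_probability(p, trials=10_000, max_steps=300, seed=1):
    """Estimate the probability that a random walk stepping +1 w.p. p
    and -1 w.p. 1-p ever returns to its starting point."""
    rng = random.Random(seed)
    returns = 0
    for _ in range(trials):
        pos = 1 if rng.random() < p else -1   # first step away from the origin
        for _ in range(max_steps):
            if pos == 0:
                returns += 1
                break
            pos += 1 if rng.random() < p else -1
    return returns / trials

p = 0.75
est = return_probability(p)
print(est, 2 * (1 - p))   # estimate should land near 0.5
```

With p = 3/4 the predicted return probability is 2(1 − p) = 1/2, and the estimate agrees to within Monte Carlo error.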
Exercise 5.3. Let j be a transient state in a Markov chain and let j be accessible from i.
Show that i is transient also. Interpret this as a form of Murphy’s law (if something bad
can happen, it will, where the bad thing is the lack of an eventual return). Note: give a
direct demonstration rather than using Lemma 5.3.
Exercise 5.4. Consider an irreducible positive-recurrent Markov chain. Consider the renewal process {Njj(t); t ≥ 0} where, given X0 = j, Njj(t) is the number of times that state
j is visited from time 1 to t. For each i ≥ 0, consider a renewal-reward function Ri(t) equal
to 1 whenever the chain is in state i and equal to 0 otherwise. Let πi be the time-average
reward.
a) Show that πi = 1/T̄ii (where T̄ii is the mean recurrence time of state i) for each i with probability 1.
b) Show that ∑_i πi = 1. Hint: consider ∑_{i≤M} πi for any integer M.
c) Consider a renewal-reward function Rij(t) that is 1 whenever the chain is in state i and
the next state is state j; Rij(t) = 0 otherwise. Show that the time-average reward is equal
to πi Pij with probability 1. Show that πk = ∑_i πi Pik for all k.
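The identities in parts (b) and (c) can be illustrated numerically. The sketch below uses a hypothetical 3-state chain (the matrix P is mine, not from the text): it finds π by power iteration and compares it with the time-average visit frequencies along one long sample path.

```python
import random

# Hypothetical irreducible, aperiodic 3-state chain (rows sum to 1).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

# Solve pi = pi P by power iteration (adequate for a small aperiodic chain).
pi = [1/3, 1/3, 1/3]
for _ in range(500):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# Time-average visit frequencies from a single long sample path.
rng = random.Random(0)
state, steps = 0, 200_000
counts = [0, 0, 0]
for _ in range(steps):
    counts[state] += 1
    u, acc = rng.random(), 0.0
    for j in range(3):
        acc += P[state][j]
        if u < acc:
            state = j
            break
freq = [c / steps for c in counts]
print(pi)
print(freq)   # should agree with pi to a couple of decimals
```

The empirical frequencies converge to πi with probability 1, which is exactly the time-average statement of part (a).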
Exercise 5.5. Let {Xn; n ≥ 0} be a branching process with X0 = 1. Let Ȳ and σ² be the
mean and variance of the number of offspring of an individual.
a) Argue that limn→∞ Xn exists with probability 1 and either has the value 0 (with probability F10(∞)) or the value ∞ (with probability 1 − F10(∞)).
b) Show that VAR(Xn) = σ² Ȳ^(n−1) (Ȳ^n − 1)/(Ȳ − 1) for Ȳ ≠ 1 and VAR(Xn) = nσ² for Ȳ = 1.
Exercise 5.6. There are n states and for each pair of states i and j, a positive number
dij = dji is given. A particle moves from state to state in the following manner: given that
the particle is in any state i, it will next move to any j ≠ i with probability Pij given by

Pij = dij / ∑_{k≠i} dik.

Assume that Pii = 0 for all i. Show that the sequence of positions is a reversible Markov
chain and find the limiting probabilities.
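A minimal numeric sketch of the claim, using made-up symmetric weights: with πi proportional to ∑_j dij, detailed balance πi Pij = πj Pji holds term by term, since both sides reduce to dij divided by the total weight.

```python
# Hypothetical symmetric weights d[i][j] = d[j][i] > 0 off the diagonal.
d = [[0, 2, 1, 4],
     [2, 0, 3, 1],
     [1, 3, 0, 2],
     [4, 1, 2, 0]]
n = len(d)

row = [sum(d[i]) for i in range(n)]          # sum_j d_ij
total = sum(row)
P = [[d[i][j] / row[i] for j in range(n)] for i in range(n)]
pi = [row[i] / total for i in range(n)]      # candidate limiting probabilities

# Detailed balance: pi_i * P_ij = d_ij / total = pi_j * P_ji.
balanced = all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12
               for i in range(n) for j in range(n))
print(pi, balanced)
```

The check passes for any symmetric positive weights, which is the heart of the reversibility argument the exercise asks for.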
Exercise 5.7. Consider a reversible Markov chain with transition probabilities Pij and
limiting probabilities πi. Also consider the same chain truncated to the states 0, 1, . . . , M.
That is, the transition probabilities {P′ij} of the truncated chain are

P′ij = Pij / ∑_{k=0}^{M} Pik  for 0 ≤ i, j ≤ M;   P′ij = 0 elsewhere.

Show that the truncated chain is also reversible and has limiting probabilities given by

π′i = πi ∑_{j=0}^{M} Pij / [∑_{k=0}^{M} πk ∑_{m=0}^{M} Pkm].
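The formula can be verified numerically on a concrete example. The sketch below assumes a hypothetical reversible birth-death chain on states 0..5 (my choice of parameters), truncates it to 0..M, and checks that the stated π′ satisfies detailed balance for the truncated chain.

```python
# Hypothetical reversible birth-death chain on states 0..5.
N = 6
up, down = 0.3, 0.2      # P[i][i+1] = up, P[i][i-1] = down, remainder self-loop
P = [[0.0] * N for _ in range(N)]
for i in range(N):
    if i + 1 < N: P[i][i + 1] = up
    if i - 1 >= 0: P[i][i - 1] = down
    P[i][i] = 1 - sum(P[i])

# Limiting probabilities of the full chain via detailed balance.
w = [1.0]
for i in range(N - 1):
    w.append(w[-1] * up / down)
pi = [x / sum(w) for x in w]

# Truncate to states 0..M and renormalize each row.
M = 3
S = [sum(P[i][:M + 1]) for i in range(M + 1)]
Pt = [[P[i][j] / S[i] for j in range(M + 1)] for i in range(M + 1)]

# Claimed limiting probabilities of the truncated chain: pi'_i ∝ pi_i * S_i.
num = [pi[i] * S[i] for i in range(M + 1)]
pit = [x / sum(num) for x in num]

# Detailed balance for the truncated chain.
balanced = all(abs(pit[i] * Pt[i][j] - pit[j] * Pt[j][i]) < 1e-12
               for i in range(M + 1) for j in range(M + 1))
print(pit, balanced)
```

The key observation is that π′i P′ij = πi Si · Pij/Si ∝ πi Pij, which is symmetric in i and j by reversibility of the original chain.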
m=0 Pkm π i = PM πi Exercise 5.8. A Markov chain (with states {0, 1, 2, . . . , J − 1} where J is either ﬁnite or
inﬁnite) has transition probabilities {Pij ; i, j ≥ 0}. Assume that P0j > 0 for all j > 0 and
Pj 0 > 0 for all j > 0. Also assume that for all i, j , k, we have Pij Pj k Pki = Pik Pkj Pj i .
a) Assuming also that all states are positive recurrent, show that the chain is reversible and
ﬁnd the steady state probabilities {πi } in simplest form.
b) Find a condition on {P0j ; j ≥ 0} and {Pj 0 ; j ≥ 0} that is suﬃcient to ensure that all
states are positive recurrent.
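This is not the requested proof, but a numeric sanity check of the closed form one expects from part (a): on a reversible chain with all P0j > 0 and Pj0 > 0, detailed balance through state 0 gives πj ∝ P0j/Pj0. The chain below is built from hypothetical symmetric weights (my choice) so that both hypotheses hold, and the triangle condition is checked directly.

```python
# Hypothetical reversible chain from symmetric positive weights, so that
# P_0j > 0 and P_j0 > 0 for every j.
d = [[0, 1, 2, 3],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [3, 2, 1, 0]]
n = len(d)
row = [sum(d[i]) for i in range(n)]
P = [[d[i][j] / row[i] for j in range(n)] for i in range(n)]
pi = [row[i] / sum(row) for i in range(n)]   # stationary distribution

# Kolmogorov triangle condition P_ij P_jk P_ki = P_ik P_kj P_ji
# holds for any reversible chain; verify it here.
triangles = all(abs(P[i][j] * P[j][k] * P[k][i]
                    - P[i][k] * P[k][j] * P[j][i]) < 1e-12
                for i in range(n) for j in range(n) for k in range(n))

# Candidate closed form: pi_j proportional to P_0j / P_j0 (ratio 1 for j = 0).
r = [1.0] + [P[0][j] / P[j][0] for j in range(1, n)]
cand = [x / sum(r) for x in r]
matches = all(abs(cand[j] - pi[j]) < 1e-12 for j in range(n))
print(triangles, matches)
```

Both checks pass, consistent with πj = π0 P0j/Pj0 being the "simplest form" the exercise is after.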
Exercise 5.9. a) Use the birth and death model described in figure 5.4 to find the steady-state probability mass function for the number of customers in the system (queue plus
service facility) for the following queues:
i) M/M/1 with arrival probability λδ, service completion probability µδ.
ii) M/M/m with arrival probability λδ, service completion probability iµδ for i servers busy,
1 ≤ i ≤ m.
iii) M/M/∞ with arrival probability λδ, service completion probability iµδ for i servers busy. Assume δ so
small that iµδ < 1 for all i of interest.
Assume the system is positive recurrent.
b) For each of the queues above give necessary conditions (if any) for the states in the chain
to be i) transient, ii) null recurrent, iii) positive recurrent.
c) For each of the queues find:
L = (steady state) mean number of customers in the system.
Lq = (steady state) mean number of customers in the queue.
W = (steady state) mean waiting time in the system.
Wq = (steady state) mean waiting time in the queue.
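As an illustration for queue (i) only: the sampled-time M/M/1 chain has the geometric steady-state pmf πi = (1 − ρ)ρ^i with ρ = λ/µ (assumed here, not derived), from which L, Lq, W, Wq follow, with W obtained from L via Little's law. The rates below are hypothetical.

```python
# Sampled-time M/M/1 sketch: arrival prob lam*delta, service prob mu*delta.
lam, mu = 0.5, 1.0            # hypothetical rates; rho < 1, so positive recurrent
rho = lam / mu

# Steady-state pmf pi_i = (1 - rho) * rho**i (geometric).
L  = rho / (1 - rho)          # mean number in system
Lq = rho**2 / (1 - rho)       # mean number in queue: L minus E[number in service]
W  = L / lam                  # Little's law: W = L / lambda
Wq = Lq / lam

# Cross-check L directly against the pmf (truncated sum; tail is negligible).
L_check = sum(i * (1 - rho) * rho**i for i in range(200))
print(L, Lq, W, Wq)           # 1.0 0.5 2.0 1.0
```

Here Lq = L − ρ because the expected number in service is 1 − π0 = ρ; the same Little's-law step gives Wq from Lq.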
Exercise 5.10. a) Given that an arrival occurs in the interval (nδ, (n+1)δ) for the sampled-time M/M/1 model in figure 5, find the conditional PMF of the state of the system at time
nδ (assume n arbitrarily large and assume positive recurrence).
b) For the same model, again in steady state but not conditioned on an arrival in (nδ, (n+1)δ), find the probability Q(i, j) (i ≥ j > 0) that the system is in state i at nδ and that i − j
departures occur before the next arrival.
c) Find the expected number of customers seen in the system by the first arrival after time
nδ. (Note: the purpose of this exercise is to make you cautious about the meaning of “the
state seen by a random arrival”.)
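A Monte Carlo sketch of the caution this exercise raises, under my own simplified slot dynamics (at most one event per slot, parameters hypothetical): compare the time-average state of the sampled-time M/M/1 chain with the average state at time nδ in those slots where an arrival occurs. Because the slot's outcome is drawn independently of the state, arrivals here see the time-average state; the exercise probes what changes once you ask about the state "seen" after accounting for the events within the slot itself.

```python
import random

# Sampled-time M/M/1: +1 w.p. lam_d, -1 w.p. mu_d (if nonempty), else hold.
lam_d, mu_d = 0.05, 0.1       # hypothetical lambda*delta and mu*delta
rng = random.Random(7)
state = 0
time_sum = arr_sum = arrivals = 0
steps = 400_000
for _ in range(steps):
    time_sum += state
    u = rng.random()
    if u < lam_d:             # arrival in this slot
        arr_sum += state      # state at n*delta, before the slot's events
        arrivals += 1
        state += 1
    elif u < lam_d + mu_d and state > 0:
        state -= 1

time_avg = time_sum / steps
arr_avg = arr_sum / arrivals
print(time_avg, arr_avg)      # both near rho/(1-rho) = 1.0 for these rates
```

With λδ = 0.05 and µδ = 0.1 the chain is geometric with ρ = 1/2, so both averages should hover near ρ/(1 − ρ) = 1.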
Exercise 5.11. Find the backward transition probabilities for the Markov chain model of
age in figure 2. Draw the graph fo...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R.srikant during the Spring '09 term at University of Illinois, Urbana Champaign.