...ose transitions occur. The process can be viewed as a single-server queue where arrivals become increasingly discouraged as the queue lengthens. The word time-average below refers to the limiting time-average over each sample path of the process, except for a set of sample paths of probability 0.
[Figure: a birth-death process on states 0, 1, 2, 3, 4, . . . ; the transition rate from state i to i + 1 is λ/(i + 1) (i.e., λ, λ/2, λ/3, λ/4, . . . ), and the rate from each state i ≥ 1 back to i − 1 is µ.]

a) Find the time-average fraction of time p_i spent in each state i > 0 in terms of p_0 and then solve for p_0. Hint: First find an equation relating p_i to p_{i+1} for each i. It also may help to recall the power series expansion of e^x.
b) Find a closed-form solution to ∑_i p_i ν_i, where ν_i is the departure rate from state i.
Show that the process is positive recurrent for all choices of λ > 0 and µ > 0 and explain intuitively why this must be so.
c) For the embedded Markov chain corresponding to this process, find the steady-state probabilities π_i for each i ≥ 0 and the transition probabilities P_ij for each i, j.
d) For each i, find both the time-average interval and the time-average number of overall state transitions between successive visits to i.
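As a numerical sanity check on parts a) and b) of Exercise 6.2 (an addition, not part of the original text), the balance-equation recursion of the hint can be iterated directly; the power-series expansion of e^x appears as the normalizing constant, and the candidate closed form 2µ(1 − e^{−λ/µ}) for part b) can be tested against the numerics. The rate values below are arbitrary.

```python
import math

def discouraged_arrivals_pmf(lam, mu, n_terms=200):
    """Iterate the balance equations p_i * lam/(i+1) = p_{i+1} * mu
    starting from an unnormalized p_0 = 1, then normalize."""
    p = [1.0]
    for i in range(n_terms - 1):
        p.append(p[-1] * lam / ((i + 1) * mu))
    total = sum(p)
    return [x / total for x in p]

lam, mu = 2.0, 3.0
p = discouraged_arrivals_pmf(lam, mu)

# Power-series hint: the unnormalized masses are (lam/mu)^i / i!,
# so the normalizer is e^{lam/mu} and p_0 = e^{-lam/mu}.
print(abs(p[0] - math.exp(-lam / mu)) < 1e-9)

# Part b): the departure rates are nu_0 = lam and
# nu_i = lam/(i+1) + mu for i >= 1.
nu = [lam] + [lam / (i + 1) + mu for i in range(1, len(p))]
rate = sum(pi * ni for pi, ni in zip(p, nu))
# Candidate closed form (to be verified): 2*mu*(1 - e^{-lam/mu}).
print(abs(rate - 2 * mu * (1 - math.exp(-lam / mu))) < 1e-9)
```

Both checks pass for any λ, µ > 0, which is consistent with the positive recurrence asked for in part b).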
Exercise 6.3. (Continuation of Exercise 6.2) a) Assume that the Markov process in Exercise 6.2 is changed in the following way: whenever the process enters state 0, the time spent before leaving state 0 is now a uniformly distributed rv, taking values from 0 to 2/λ. All
other transitions remain the same. For this new process, determine whether the successive
epochs of entry to state 0 form renewal epochs, whether the successive epochs of exit from
state 0 form renewal epochs, and whether the successive entries to any other given state i
form renewal epochs.
b) For each i, find both the time-average interval and the time-average number of overall state transitions between successive visits to i.

c) Is this modified process a Markov process in the sense that Pr{X(t) = i | X(τ) = j, X(s) = k} = Pr{X(t) = i | X(τ) = j} for all 0 < s < τ < t and all i, j, k? Explain.
Exercise 6.4. a) Consider a Markov process with the set of states {0, 1, . . . } in which the transition rates {q_ij} between states are given by q_{i,i+1} = (3/5)2^i for i ≥ 0, q_{i,i−1} = (2/5)2^i for i ≥ 1, and q_ij = 0 otherwise. Find the transition rate ν_i out of state i for each i ≥ 0 and find the transition probabilities {P_ij} for the embedded Markov chain.
b) Find a solution {p_i ; i ≥ 0} with ∑_i p_i = 1 to (6.20).
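A short numerical sketch for parts a) and b) of Exercise 6.4 (my addition, not from the text): for a birth-death process, any solution of the steady-state equations must satisfy p_{i+1}/p_i = q_{i,i+1}/q_{i+1,i}, and with these rates that ratio is constant, forcing a geometric solution.

```python
def q(i, j):
    """Transition rates from Exercise 6.4: q_{i,i+1} = (3/5)2^i,
    q_{i,i-1} = (2/5)2^i, all others 0."""
    if j == i + 1 and i >= 0:
        return (3 / 5) * 2 ** i
    if j == i - 1 and i >= 1:
        return (2 / 5) * 2 ** i
    return 0.0

# Part a): nu_i and the embedded-chain upward probabilities.
nu = [q(i, i + 1) + q(i, i - 1) for i in range(10)]
P_up = [q(i, i + 1) / nu[i] for i in range(10)]

# Part b): detailed balance forces p_{i+1}/p_i = q(i,i+1)/q(i+1,i).
ratios = [q(i, i + 1) / q(i + 1, i) for i in range(10)]
r = ratios[0]
p0 = 1 - r                      # geometric normalization, valid since r < 1
p = [p0 * r ** i for i in range(10)]
print(r, p0)                    # constant ratio 3/4, so p0 = 1/4
```

Note that P_up is 3/5 for every i ≥ 1, so the embedded chain is a random walk with upward drift; that drift is the intuition behind the transience asked for in part c).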
c) Show that all states of the embedded Markov chain are transient.

Exercise 6.5. a) Consider the process in the figure below. The process starts at X(0) = 1, and for all i ≥ 1, P_{i,i+1} = 1 and ν_i = i^2. Let T_n be the time that the nth transition occurs. Show that

E[T_n] = ∑_{i=1}^{n} i^{−2} < 2 for all n.

Hint: Upper bound the sum from i = 2 by integrating x^{−2} from x = 1.
[Figure: a pure birth process 1 → 2 → 3 → 4 → · · · with departure rates ν_1 = 1, ν_2 = 4, ν_3 = 9, ν_4 = 16, . . . ]

b) Use the Markov inequality to show that Pr{T_n > 4} ≤ 1/2 for all n. Show that the probability of an infinite number of transitions by time 4 is at least 1/2.
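The bound in a) and the explosion claim in b) of Exercise 6.5 can be illustrated by a small simulation (my addition; the sample size and seed are arbitrary):

```python
import random

# Part a): E[T_n] = sum_{i=1}^n 1/i^2; bounding the terms from i = 2
# by the integral of x^(-2) from x = 1 gives E[T_n] < 1 + 1 = 2.
partial_sums = [sum(1 / i**2 for i in range(1, n + 1)) for n in (10, 100, 1000)]
print(partial_sums)

# Part b): the holding time in state i is exponential with rate i^2,
# so T_n is a sum of independent exponentials.
random.seed(1)

def sample_T(n):
    return sum(random.expovariate(i * i) for i in range(1, n + 1))

samples = [sample_T(1000) for _ in range(1000)]
frac_by_4 = sum(t <= 4 for t in samples) / len(samples)
print(frac_by_4)  # the exercise shows this is at least 1/2
```

Empirically the fraction of sample paths that complete 1000 transitions by time 4 is well above the 1/2 guaranteed by the Markov-inequality argument.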
Exercise 6.6. Let q_{i,i+1} = 2^{i−1} for all i ≥ 0 and let q_{i,i−1} = 2^{i−1} for all i ≥ 1. All other transition rates are 0.
a) Solve the steady-state equations and show that p_i = 2^{−i−1} for all i ≥ 0.
b) Find the transition probabilities for the embedded Markov chain and show that the chain
is null recurrent.
c) For any state i, consider the renewal process for which the Markov process starts in state
i and renewals occur on each transition to state i. Show that, for each i ≥ 1, the expected
inter-renewal interval is equal to 2. Hint: Use renewal-reward theory.

d) Show that the expected number of transitions between each entry into state i is infinite. Explain why this does not mean that an infinite number of transitions can occur in a finite time.
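The following sketch (not in the original) checks part a) of Exercise 6.6 by detailed balance, and part c) via the renewal-reward identity: visits to state i occur at long-run rate p_i ν_i, so the expected inter-renewal interval is 1/(p_i ν_i).

```python
# Rates from Exercise 6.6: q_{i,i+1} = 2^(i-1) for i >= 0 and
# q_{i,i-1} = 2^(i-1) for i >= 1.
def q_up(i):
    return 2.0 ** (i - 1)

def q_down(i):
    return 2.0 ** (i - 1)

N = 40
p = [2.0 ** (-i - 1) for i in range(N)]   # claimed p_i = 2^(-i-1)

# Part a): detailed balance p_i q_{i,i+1} = p_{i+1} q_{i+1,i}.
balanced = all(
    abs(p[i] * q_up(i) - p[i + 1] * q_down(i + 1)) < 1e-15
    for i in range(N - 1)
)

# Part c): nu_i = 2^i for i >= 1, so 1/(p_i nu_i) = 2 for every i >= 1.
nu = [q_up(0)] + [q_up(i) + q_down(i) for i in range(1, N)]
intervals = [1.0 / (p[i] * nu[i]) for i in range(1, N)]
print(balanced, intervals[:3])
```

Part b) is visible in the same rates: for i ≥ 1 the embedded chain has P_{i,i+1} = P_{i,i−1} = 1/2, a simple symmetric random walk, which is the source of the null recurrence.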
Exercise 6.7. A two-state Markov process has transition rates q01 = 1, q10 = 2. Find
P01(t), the probability that X(t) = 1 given that X(0) = 0. Hint: You can do this by solving a single first-order differential equation if you make the right choice between forward and backward equations.
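One way to check an answer to Exercise 6.7 (an added sketch, not part of the text): the forward equation for P01(t) closes on itself because P00(t) = 1 − P01(t), leaving a single linear ODE that can also be integrated numerically.

```python
import math

q01, q10 = 1.0, 2.0

# Forward equation: dP01/dt = (1 - P01(t)) * q01 - P01(t) * q10,
# integrated here with a simple Euler scheme.
def p01_euler(t, steps=100000):
    dt = t / steps
    p = 0.0                      # P01(0) = 0 since X(0) = 0
    for _ in range(steps):
        p += dt * ((1 - p) * q01 - p * q10)
    return p

def p01_closed(t):
    # Solution of the linear ODE above.
    return q01 / (q01 + q10) * (1 - math.exp(-(q01 + q10) * t))

print(p01_euler(1.0), p01_closed(1.0))
```

The two values agree to within the Euler discretization error, supporting the closed form P01(t) = (1/3)(1 − e^{−3t}).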
Exercise 6.8. a) Consider a two-state Markov process with q01 = λ and q10 = µ. Find
the eigenvalues and eigenvectors of the transition rate matrix [Q].
b) Use (6.36) to solve for [P (t)].
c) Use the Kolmogorov forward equation for P01(t) directly to find P01(t) for t ≥ 0. Hint: you don't have to use the equation for P00(t); why?
d) Check your answer in b) with that in c).
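As a cross-check for parts a)–c) of Exercise 6.8 (my addition, with arbitrary rate values): [Q] for the two-state chain has eigenvalues 0 and −(λ + µ), and the spectral form of P01(t) can be compared against a truncated power series for e^{[Q]t}.

```python
import math

lam, mu = 1.5, 2.5
Q = [[-lam, lam], [mu, -mu]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm2(Q, t, terms=60):
    """Truncated power series for e^{Qt}; adequate for a 2x2 matrix
    with modest rates and t."""
    A = [[Q[i][j] * t for j in range(2)] for i in range(2)]
    S = [[1.0, 0.0], [0.0, 1.0]]        # running sum, starts at identity
    term = [[1.0, 0.0], [0.0, 1.0]]     # current power of A
    fact = 1.0
    for n in range(1, terms):
        term = mat_mul(term, A)
        fact *= n
        for i in range(2):
            for j in range(2):
                S[i][j] += term[i][j] / fact
    return S

# Spectral form implied by the eigenvalues 0 and -(lam + mu):
t = 0.7
p01_spectral = lam / (lam + mu) * (1 - math.exp(-(lam + mu) * t))
p01_series = expm2(Q, t)[0][1]
print(p01_series, p01_spectral)
```

The series value of [P(t)]_{01} matches the spectral expression, which is the agreement part d) asks you to verify by hand.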
Exercise 6.9. Consider an irreducible Markov process with n states and assume that the
transition rate matrix [Q] = [V][Λ][V]^{−1}, where [V] is the matrix of right eigenvectors of [Q], [Λ] is the diagonal matrix of eigenvalues of [Q], and [V]^{−1} is the matrix of left eigenvectors.

a) Consider the sampled-time approximation to the process with a...
Spring '09, R. Srikant