… a semi-Markov chain following
the framework for the M/G/1 queue in Section 5.8.
a) Find P0j ; j ≥ 0.
b) Find Pij for i > 0; j ≥ i − 1.
Exercise 5.19. Consider a semi-Markov process for which the embedded Markov chain is
irreducible and positive-recurrent. Assume that the distribution of inter-renewal intervals
for one state j is arithmetic with span d. Show that the distribution of inter-renewal
intervals for all states is arithmetic with the same span.

Chapter 6

MARKOV PROCESSES WITH COUNTABLE STATE SPACES
6.1 Introduction

Recall that a Markov chain is a discrete-time process {Xn ; n ≥ 0} with the property that
for each integer n ≥ 1, the state Xn at time n is a random variable (rv) that is statistically
dependent on past states only through the most recent state Xn−1 . A Markov process is a
generalization of a Markov chain in the sense that, along with the sequence of states, there
is a random time interval from the entry to one state until the entry to the next. We denote
the sequence of states by {Xn ; n ≥ 0} and, as before, assume this sequence forms a Markov
chain with a countable state space. We assume the process is in state X0 at time 0, and let
S1 , S2 , . . . , be the epochs at which successive state transitions occur. Thus the process is
in state X0 in the time interval [0, S1 ), in state X1 in the interval [S1 , S2 ), etc.
The intervals between successive transitions are denoted U1 , U2 , . . . , and thus U1 = S1 , U2 =
S2 − S1 , and in general Un = Sn − Sn−1 . The epoch of the nth transition is then
Sn = U1 + U2 + · · · + Un . Note that Ui is the interval preceding the entry to Xi .

A Markov process is thus speciﬁed by specifying both the sequence of rv’s {Xn ; n ≥ 0} and
{Un ; n ≥ 1}. There are two additional requirements for the process to be deﬁned as Markov.
The ﬁrst is that for each n ≥ 1, Un , conditional on Xn−1 , is statistically independent of all
the other states and of all the other intervals. Second, Un (conditional on Xn−1 ) is required
to be an exponential rv with a rate νi that is a function only of the sample value i of Xn−1 .
Thus, for all integers n ≥ 1, all sample values i of the state space, and all u ≥ 0,
Pr {Un ≤ u | Xn−1 = i} = 1 − exp(−νi u).     (6.1)

A Markov process is then speciﬁed by assigning a rate νi to each state i in the state space
and by specifying the transition probabilities Pij for the Markov chain. The Markov chain
here is called the embedded Markov chain of the process.
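This two-part specification (a rate νi for each state plus the embedded-chain transition probabilities Pij) translates directly into a simulation. The sketch below is illustrative only; the function name and the dict-based representation of the rates and transition matrix are choices made here, not part of the text.

```python
import random

def simulate_markov_process(rates, trans, x0, num_transitions, seed=0):
    """Simulate a sample path of a Markov process.

    rates[i] is the rate nu_i of the exponential holding interval in state i;
    trans[i] is a dict {j: P_ij}, row i of the embedded Markov chain.
    Returns (states, epochs): the states X_0, X_1, ... and the transition
    epochs S_1, S_2, ...
    """
    rng = random.Random(seed)
    states, epochs = [x0], []
    t, x = 0.0, x0
    for _ in range(num_transitions):
        # Holding interval U_n is exponential with rate nu_i for the
        # current state i, independent of everything else.
        t += rng.expovariate(rates[x])
        # The next state is drawn from row P_i of the embedded chain.
        x = rng.choices(list(trans[x]), weights=list(trans[x].values()))[0]
        states.append(x)
        epochs.append(t)
    return states, epochs
```

For instance, a two-state process with rates ν0 = 1, ν1 = 2 and deterministic alternation (P01 = P10 = 1, no self-transitions) produces a path that alternates 0, 1, 0, 1, . . . at exponentially spaced epochs.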
The state of a Markov process at any time t > 0 is denoted by X (t) and is given by
X (t) = Xn   for Sn ≤ t < Sn+1 .     (6.2)

This deﬁnes a stochastic process {X (t); t ≥ 0} in the sense that each sample point ω ∈ Ω
maps into a sequence of sample values of {Xn ; n ≥ 0} and {Sn ; n ≥ 1}, and thus into
a sample function of {X (t); t ≥ 0}. This stochastic process is what is usually referred
to as a Markov process, but it is often simpler to view {Xn ; n ≥ 0}, {Sn ; n ≥ 1} as a
characterization of the process.
We assume throughout this chapter (except in a few places where speciﬁed otherwise) that
the embedded Markov chain has no self transitions, i.e., Pii = 0 for all states i. One reason
for this is that such transitions are invisible in {X (t); t ≥ 0}. Another is that with this
assumption, the representation of {X (t); t ≥ 0} in terms of the embedded chain and the
transition rates is unique.
We are not interested for the moment in exploring the probability distribution of X (t) for
given values of t, but one feature we can see immediately is that for any times t > τ > 0
and any states i, j ,
Pr {X (t)=j | X (τ )=i, {X (s) = x(s); s < τ }} = Pr {X (t−τ )=j | X (0)=i} .     (6.3)

This property arises because of the memoryless property of the exponential distribution.
If X (τ ) = i, it makes no diﬀerence how long the process has been in state i before τ ; the
time to the next transition is still exponential and the next state is still determined by the
embedded chain. This will be seen more clearly in the following exposition. This property
is the reason why these processes are called Markov, and is often taken as the deﬁning
property of Markov processes.
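The memoryless property underlying this can also be checked numerically: for an exponential rv U, the conditional tail Pr{U > τ + u | U > τ} equals the unconditional tail Pr{U > u}. The rate ν = 2 and thresholds τ = 0.5, u = 0.3 below are arbitrary values chosen for illustration, not taken from the text.

```python
import random

# Numerical check of memorylessness: condition an exponential rv on
# exceeding tau, and compare the residual tail with the unconditional tail.
rng = random.Random(1)
nu, tau, u = 2.0, 0.5, 0.3
samples = [rng.expovariate(nu) for _ in range(200_000)]

exceed_tau = [s for s in samples if s > tau]
cond = sum(s > tau + u for s in exceed_tau) / len(exceed_tau)
uncond = sum(s > u for s in samples) / len(samples)
# Both estimates should be close to exp(-nu*u) = exp(-0.6) ≈ 0.549.
```

Both empirical fractions agree to within sampling error, which is exactly why the elapsed time in state i before τ is irrelevant in (6.3).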
Example 6.1.1. The M/M/1 queue: An M/M/1 queue has Poisson arrivals at a rate
denoted by λ and has a single server with an exponential service distribution of rate µ > λ
(see Figure 6.1). Successive service times are independent both of each other and of arrivals.
The state X (t) of the queue is the total number of customers either in the queue or in
service. When X (t) = 0, the time to the next transition is the time until the next arrival,
i.e., ν0 = λ. For any state i > 0, the server is busy and the time to the next transition
is the time until either a new arrival occurs or a departure occurs. Thus νi = λ + µ. For
the embedded Markov chain, P01 = 1 since only arrivals are possible in state 0, and they
increase the state to 1. In the other states, Pi,i−1 = µ/(λ+µ) and Pi,i+1 = λ/(λ+µ).
[Figure 6.1: The Markov process for the M/M/1 queue. Each state i is labeled with its rate νi (ν0 = λ and νi = λ + µ for i > 0), and each transition with its embedded-chain probability: P01 = 1, and Pi,i−1 = µ/(λ+µ), Pi,i+1 = λ/(λ+µ) for i > 0.]
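The rates and embedded-chain probabilities of this example translate directly into a step-by-step simulation. The following is a minimal sketch; the function name and the numerical rates (arrival rate 1, service rate 2) are illustrative choices, not from the text.

```python
import random

lam, mu = 1.0, 2.0  # arrival rate lambda < service rate mu

def mm1_step(state, rng):
    """One transition of the M/M/1 Markov process.

    Returns (next_state, holding_time) for the interval spent in `state`.
    """
    if state == 0:
        # Empty queue: only an arrival can occur, so nu_0 = lam and P_01 = 1.
        return 1, rng.expovariate(lam)
    # Server busy: next transition occurs at rate nu_i = lam + mu, and it is
    # an arrival w.p. lam/(lam+mu), a departure w.p. mu/(lam+mu).
    hold = rng.expovariate(lam + mu)
    if rng.random() < lam / (lam + mu):
        return state + 1, hold
    return state - 1, hold

rng = random.Random(7)
path = [0]
for _ in range(1000):
    nxt, _hold = mm1_step(path[-1], rng)
    path.append(nxt)
```

Every step of the resulting path changes the state by exactly ±1 and the state never goes negative, matching the birth-death structure of Figure 6.1.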
Spring '09, R. Srikant