transitions from i + 1 to i for every sample path, we conclude that

p_i λ_i = p_{i+1} µ_{i+1}.   (6.40)

This can also be obtained inductively from (6.20) using the same argument that we used earlier for birth-death Markov chains.
Figure 6.6: Birth-death process, with transitions from i to i + 1 at rate λ_i and from i + 1 to i at rate µ_{i+1}.

Define ρ_i as λ_i /µ_{i+1}. Then, applying (6.40) iteratively, we obtain the steady-state equations

p_i = p_0 ∏_{j=0}^{i−1} ρ_j ;   i ≥ 1.   (6.41)

We can solve for p_0 by substituting (6.41) into p_0 + Σ_{i=1}^{∞} p_i = 1, yielding

p_0 = 1 / [1 + Σ_{i=1}^{∞} ∏_{j=0}^{i−1} ρ_j].   (6.42)

For the M/M/1 queue, the state of the Markov process is the number of customers in the
system (i.e., waiting in queue or in service). The transitions from i to i + 1 correspond to
arrivals, and since the arrival process is Poisson of rate λ, we have λ_i = λ for all i ≥ 0. The
transitions from i to i − 1 correspond to departures, and since the service time distribution
is exponential with parameter µ, say, we have µ_i = µ for all i ≥ 1. Thus, (6.42) simplifies to p_0 = 1 − ρ, where ρ = λ/µ, and thus
p_i = (1 − ρ)ρ^i ;   i ≥ 0.   (6.43)

We assume that ρ < 1, which is required for positive recurrence. The probability that there are i or more customers in the system in steady state is then given by P{X(t) ≥ i} = ρ^i,
and the expected number of customers in the system is given by
E[X(t)] = Σ_{i=1}^{∞} P{X(t) ≥ i} = ρ/(1 − ρ).   (6.44)

The expected time that a customer spends in the system in steady state can now be determined by Little’s formula (Theorem 3.8).
E[System time] = E[X(t)]/λ = ρ/(λ(1 − ρ)) = 1/(µ − λ).   (6.45)
µ−∏ (6.45) The expected time that a customer spends in the queue (i.e., before entering service) is just
the expected system time less the expected service time, so
E [Queueing time] = 1
1
ρ
−=
.
µ−∏ µ
µ−∏ (6.46) Finally, the expected number of customers in the queue can be found by applying Little’s
formula to (6.46),
E [Number in queue] = ∏ρ
.
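The chain of identities (6.43)–(6.47) can be sketched numerically. The rates below are arbitrary illustrative values with ρ < 1, not from the text; the expectations are computed by truncating the geometric sums far into the tail.

```python
# Numerical check of the M/M/1 formulas (6.43)-(6.47).
# lam = 1.0, mu = 1.6 are illustrative values with rho < 1.

lam, mu = 1.0, 1.6
rho = lam / mu

# (6.43): steady-state probabilities, truncated deep into the tail.
p = [(1 - rho) * rho**i for i in range(2000)]
assert abs(sum(p) - 1) < 1e-9

# (6.44): E[X] = rho/(1 - rho), via the truncated sum of i * p_i.
EX = sum(i * pi for i, pi in enumerate(p))
assert abs(EX - rho / (1 - rho)) < 1e-6

# (6.45): Little's formula, E[system time] = E[X]/lam = 1/(mu - lam).
assert abs(EX / lam - 1 / (mu - lam)) < 1e-6

# (6.46): E[queueing time] = 1/(mu - lam) - 1/mu = rho/(mu - lam).
assert abs(1 / (mu - lam) - 1 / mu - rho / (mu - lam)) < 1e-12

# (6.47): E[number in queue] = lam * E[queueing time] = rho^2/(1 - rho).
assert abs(lam * rho / (mu - lam) - rho**2 / (1 - rho)) < 1e-12
```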
µ−∏ (6.47) Note that the expected number of customers in the system and in the queue depend only on
ρ, so that if the arrival rate and service rate were both speeded up by the same factor, these
expected values would remain the same. The expected system time and queueing time,
however would decrease by the factor of the rate increases. Note also that as ρ approaches
1, all these quantities approach inﬁnity as 1/(1 − ρ). At the value ρ = 1, the embedded
Markov chain becomes nullrecurrent and the steady state probabilities (both {πi ; i ≥ 0}
and {pi ; i ≥ 0}) can be viewed as being all 0 or as failing to exist.
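The scaling remark can be checked directly: multiplying both rates by a factor k leaves ρ, and hence the expected numbers, unchanged, while dividing both expected times by k. The rates and factor below are illustrative choices:

```python
# Scaling check: speeding up both rates by k leaves E[X] and
# E[number in queue] unchanged, and divides both times by k.
# The rates (2.0, 5.0) and factor k = 10 are illustrative.

def mm1(lam, mu):
    rho = lam / mu
    return {
        "E_X": rho / (1 - rho),          # (6.44)
        "E_q": lam * rho / (mu - lam),   # (6.47)
        "T_sys": 1 / (mu - lam),         # (6.45)
        "T_q": rho / (mu - lam),         # (6.46)
    }

a, k = mm1(2.0, 5.0), 10.0
b = mm1(2.0 * k, 5.0 * k)
assert abs(a["E_X"] - b["E_X"]) < 1e-12      # numbers unchanged
assert abs(a["E_q"] - b["E_q"]) < 1e-12
assert abs(a["T_sys"] - k * b["T_sys"]) < 1e-12   # times shrink by k
assert abs(a["T_q"] - k * b["T_q"]) < 1e-12
```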
There are many types of queueing systems that can be modeled as birth-death processes. For example, the arrival rate could vary with the number in the system and the service rate
could vary with the number in the system. All of these systems can be analyzed in steady
state in the same way, but (6.41) and (6.42) can become quite messy in these more complex
systems. As an example, we analyze the M/M/m system. Here there are m servers, each
with exponentially distributed service times with parameter µ. When i customers are in
the system, there are i servers working for i < m and all m servers are working for i ≥ m.
With i servers working, the probability of a departure in an incremental time δ is iµδ, so
that µi is iµ for i < m and mµ for i ≥ m (see Figure 6.7).
Define ρ = λ/(mµ). Then, in terms of our general birth-death process notation, ρ_i = mρ/(i + 1) for i < m and ρ_i = ρ for i ≥ m. From (6.41), we have

p_i = p_0 · (mρ/1)(mρ/2) ··· (mρ/i) = p_0 (mρ)^i / i! ;   i ≤ m,   (6.48)

p_i = p_0 ρ^i m^m / m! ;   i ≥ m.   (6.49)
We can find p_0 by summing p_i and setting the result equal to 1; a solution exists if ρ < 1. Nothing simplifies much in this sum, except that Σ_{i≥m} p_i = p_0 (mρ)^m /[m!(1 − ρ)], and the solution is

p_0 = [ Σ_{i=0}^{m−1} (mρ)^i / i! + (mρ)^m / (m!(1 − ρ)) ]^{−1}.   (6.50)
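As a sketch of how (6.48)–(6.50) fit together, the snippet below uses illustrative values m = 3, λ = 2, µ = 1 (so ρ = 2/3), computes p_0 from (6.50), and cross-checks the two-piece formula against the generic product (6.41):

```python
from math import factorial

# Sketch of the M/M/m steady-state formulas (6.48)-(6.50).
# m, lam, mu are illustrative values with rho = lam/(m*mu) < 1.

m, lam, mu = 3, 2.0, 1.0
rho = lam / (m * mu)

# (6.50): normalizing constant.
p0 = 1.0 / (sum((m * rho)**i / factorial(i) for i in range(m))
            + (m * rho)**m / (factorial(m) * (1 - rho)))

def p(i):
    # (6.48) for i <= m, (6.49) for i >= m (the two agree at i = m).
    if i <= m:
        return p0 * (m * rho)**i / factorial(i)
    return p0 * rho**i * m**m / factorial(m)

# The probabilities sum to 1 (truncate the geometric tail far out).
assert abs(sum(p(i) for i in range(2000)) - 1) < 1e-9

# Cross-check against the generic product (6.41):
# rho_i = m*rho/(i+1) for i < m, and rho_i = rho for i >= m.
def p_generic(i):
    prod = 1.0
    for j in range(i):
        prod *= (m * rho / (j + 1)) if j < m else rho
    return p0 * prod

assert all(abs(p(i) - p_generic(i)) < 1e-12 for i in range(50))
```

For these particular values the bracket in (6.50) evaluates to 9, so p_0 = 1/9.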
Figure 6.7: M/M/m queue for m = 3. Arrivals occur at rate λ in every state; the departure rate is µ, 2µ, 3µ, 3µ, . . . from states 1, 2, 3, 4, . . . .

6.6 Reversibility for Markov processes

In Section 5.4 on reversibility for Markov chains, (5.37) showed that the backward transition
probabilities P*_{ij} in steady state satisfy

π_i P*_{ij} = π_j P_{ji}.   (6.51)

These equations are then valid for the embedded chain of a Markov process. Next, consider
backward transitions in the process itself. Given that the process is in state i, the probability of a transition in an increment δ of time is ν_i δ + o(δ), and transitions in successive increments are independent. Thus, if we view the process running backward in time, the probability of a transition in each increment δ of time is also ν_i δ + o(δ), with independence between increments. Thus, going to the limit δ → 0, the distribution of the tim...
This note was uploaded on 09/27/2010 for the course EE 229 taught by Professor R. Srikant during the Spring '09 term at University of Illinois, Urbana-Champaign.