function G(y); the service time is the interval from start to completion of service and is
also independent of arrival epochs. We would like to find the distribution function of the
number of customers being served at a given epoch τ.
Let {N (t); t ≥ 0} be the Poisson counting process of customer arrivals. Consider the arrival
times of those customers that are still in service at some fixed time τ. In some arbitrarily
small interval (t, t + δ], the probability of an arrival is λδ + o(δ) and the probability of 2 or
more arrivals is negligible (i.e., o(δ)). The probability that an arrival occurred in (t, t + δ]
and that that customer is still being served at time τ > t is then λδ[1 − G(τ − t)] + o(δ).
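This thinning probability can be checked with a short simulation. The sketch below is illustrative only: the rate μ, the arrival epoch t, and the observation epoch τ are assumed parameters (not from the text), and the service distribution is taken to be exponential so that G has a closed form. An arrival at t is still in service at τ exactly when its service time exceeds τ − t, which happens with probability 1 − G(τ − t).

```python
import random
import math

# Assumed (illustrative) parameters: exponential service times with rate mu,
# an arrival epoch t, and an observation epoch tau > t.
mu, t, tau = 0.5, 2.0, 5.0
trials = 200_000

# For exponential service, G(y) = 1 - exp(-mu*y), so the survival
# probability 1 - G(tau - t) is exp(-mu*(tau - t)).
predicted = math.exp(-mu * (tau - t))

# An arrival at t is still in service at tau iff its service time exceeds tau - t.
still_in_service = sum(
    1 for _ in range(trials) if random.expovariate(mu) > tau - t
)
empirical = still_in_service / trials

print(predicted, empirical)  # the two should agree to about 2 decimal places
```

With 200,000 trials the empirical fraction agrees with 1 − G(τ − t) to within Monte Carlo error.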
Consider a counting process {N1 (t); 0≤t≤τ } where N1 (t) is the number of arrivals between
0 and t that are still in service at τ . This counting process has the independent increment
property. To see this, note that the overall arrivals in {N (t); t ≥ 0} have the independent
increment property; also the arrivals in {N (t); t ≥ 0} have independent service times, and
thus are independently in or not in {N1 (t); 0 ≤ t < τ }. It follows that {N1 (t); 0 ≤ t < τ } is
a nonhomogeneous Poisson process with rate λ[1 − G(τ − t)] at time t ≤ τ. The expected
number of arrivals still in service at time τ is then
\[
m(\tau) \;=\; \lambda \int_{t=0}^{\tau} [1 - G(\tau - t)]\, dt
\;=\; \lambda \int_{t=0}^{\tau} [1 - G(t)]\, dt, \qquad (2.32)
\]

and the PMF of the number in service at time τ is given by
\[
\Pr\{N_1(\tau) = n\} \;=\; \frac{m(\tau)^n \exp(-m(\tau))}{n!}. \qquad (2.33)
\]

[Figure 2.8: Poisson arrivals {N(t); t ≥ 0} can be considered to be split in a nonhomogeneous way. An arrival at t is split, with probability 1 − G(τ − t), into the process N1(τ) of customers still in service at τ, and otherwise into the process N(τ) − N1(τ) of customers that have departed by τ.]

Note that as τ → ∞, the integral in (2.32) approaches the mean of the service time distribution (i.e., it is the integral of the complementary distribution function, 1 − G(t), of the service time). This means that in steady state (as τ → ∞), the distribution of the number in service at τ depends on the service time distribution only through its mean. This
example can be used to model situations such as the number of phone calls taking place
at a given epoch. This requires arrivals of new calls to be modeled as a Poisson process
and the holding time of each call to be modeled as a random variable independent of other
holding times and of call arrival times. Finally, as shown in Figure 2.8, we can regard
{N1(t); 0 ≤ t ≤ τ} as a splitting of the arrival process {N(t); t ≥ 0}. By the same type of
argument as in Section 2.3, the number of customers who have completed service by time
τ is independent of the number still in service.

2.5 Conditional arrival densities and order statistics

A diverse range of problems involving Poisson processes are best tackled by conditioning on
a given number n of arrivals in the interval (0, t], i.e., on the event N (t) = n. Because of
the incremental view of the Poisson process as independent and stationary arrivals in each
incremental interval of the time axis, we would guess that the arrivals should have some
sort of uniform distribution given N (t) = n. More precisely, the following theorem shows
that the joint density of S(n) = (S1, S2, . . . , Sn) given N(t) = n is uniform over the region
0 < s1 < s2 < · · · < sn < t.
Theorem 2.6. Let f_{S(n)|N(t)}(s(n) | n) be the joint density of S(n) conditional on N(t) = n. This density is constant over the region 0 < s1 < · · · < sn < t and has the value

\[
f_{S^{(n)} \mid N(t)}\bigl(s^{(n)} \,\big|\, n\bigr) \;=\; \frac{n!}{t^n}. \qquad (2.34)
\]

Two proofs are given, each illustrative of useful techniques.
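The theorem also lends itself to a quick numerical sanity check. The sketch below uses assumed parameters (λ, t, n, and the trial count are illustrative choices, not from the text): it generates Poisson sample paths on (0, t], keeps those with exactly N(t) = n arrivals, and compares the conditional mean of S1 against t/(n + 1), the mean of the smallest of n i.i.d. uniforms on (0, t), as the uniform density (2.34) implies.

```python
import random

# Assumed (illustrative) parameters for the check.
lam, t, n = 2.0, 3.0, 5
trials = 100_000

first_epochs = []
for _ in range(trials):
    # Generate arrival epochs in (0, t] by summing exponential interarrival times.
    epochs, s = [], random.expovariate(lam)
    while s <= t:
        epochs.append(s)
        s += random.expovariate(lam)
    if len(epochs) == n:          # condition on the event N(t) = n
        first_epochs.append(epochs[0])

# If the epochs are distributed as n sorted uniforms on (0, t), then
# E[S1 | N(t) = n] = t/(n + 1).
print(sum(first_epochs) / len(first_epochs), t / (n + 1))
```

The same conditioning loop can be reused to check any of the order-statistic moments, since only the statistic extracted from `epochs` changes.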
Proof 1: Recall that the joint density of the first n + 1 arrivals S(n+1) = (S1, . . . , Sn, Sn+1) with no conditioning is given in (2.14). We first use Bayes' law to calculate the joint density of S(n+1) conditional on N(t) = n:

\[
f_{S^{(n+1)} \mid N(t)}\bigl(s^{(n+1)} \,\big|\, n\bigr)\, p_{N(t)}(n)
\;=\; p_{N(t) \mid S^{(n+1)}}\bigl(n \,\big|\, s^{(n+1)}\bigr)\, f_{S^{(n+1)}}\bigl(s^{(n+1)}\bigr).
\]

Note that N(t) = n if and only if Sn ≤ t and Sn+1 > t. Thus p_{N(t)|S(n+1)}(n | s(n+1)) is 1 if sn ≤ t and sn+1 > t and is 0 otherwise. Restricting attention to the case N(t) = n, sn ≤ t and sn+1 > t,
\[
f_{S^{(n+1)} \mid N(t)}\bigl(s^{(n+1)} \,\big|\, n\bigr)
\;=\; \frac{f_{S^{(n+1)}}\bigl(s^{(n+1)}\bigr)}{p_{N(t)}(n)}
\;=\; \frac{\lambda^{n+1} \exp(-\lambda s_{n+1})}{(\lambda t)^n \exp(-\lambda t)/n!}
\;=\; \frac{n!\,\lambda \exp\bigl(-\lambda(s_{n+1} - t)\bigr)}{t^n}. \qquad (2.35)
\]

This is a useful expression, but we are interested in S(n) rather than S(n+1). Thus we break
up the left side of (2.35) as follows:
\[
f_{S^{(n+1)} \mid N(t)}\bigl(s^{(n+1)} \,\big|\, n\bigr)
\;=\; f_{S^{(n)} \mid N(t)}\bigl(s^{(n)} \,\big|\, n\bigr)\,
f_{S_{n+1} \mid S^{(n)} N(t)}\bigl(s_{n+1} \,\big|\, s^{(n)}, n\bigr).
\]
Conditional on N(t) = n, Sn+1 is the first arrival epoch after t, which by the memoryless property is independent of S(n). Thus that final term is simply λ exp(−λ(sn+1 − t)) for sn+1 > t. Substituting this into (2.35) and canceling this common factor, the remaining density is f_{S(n)|N(t)}(s(n) | n) = n!/t^n, which is (2.34).
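Returning to the number-in-service example, (2.32) and (2.33) can also be checked by simulation. The sketch below uses assumed parameters (λ, μ, τ are illustrative, and the service distribution is taken exponential with rate μ so that m(τ) has a closed form): the number of customers still in service at τ should have sample mean near m(τ), and, being Poisson, should equal 0 with probability exp(−m(τ)).

```python
import random
import math

# Assumed (illustrative) parameters: Poisson arrivals at rate lam on (0, tau],
# i.i.d. exponential(mu) service times.
lam, mu, tau = 3.0, 1.0, 4.0
trials = 100_000

# For exponential service, (2.32) gives m(tau) = lam * (1 - exp(-mu*tau)) / mu.
m = lam * (1.0 - math.exp(-mu * tau)) / mu

counts = []
for _ in range(trials):
    in_service, s = 0, random.expovariate(lam)
    while s <= tau:
        # The arrival at s is still in service at tau iff its service
        # time exceeds tau - s.
        if random.expovariate(mu) > tau - s:
            in_service += 1
        s += random.expovariate(lam)
    counts.append(in_service)

mean_count = sum(counts) / trials
print(mean_count, m)   # sample mean vs. m(tau) from (2.32)
```

Comparing the empirical frequency of {N1(τ) = 0} with exp(−m(τ)) checks the Poisson form (2.33) as well, not just the mean.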