The above argument shows that the portion of a Poisson process starting at some time t > 0 is a probabilistic replica of the process starting at 0; that is, the time until the first arrival after t is an exponentially distributed rv with parameter λ, and all subsequent arrivals are independent of this first arrival and of each other and all have the same exponential distribution.
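A quick numerical check of this restart property: the sketch below (the setup, names, and parameter values are our own, not from the text) simulates arrival epochs as partial sums of Exponential(λ) interarrival times and measures the wait from a fixed time t to the first arrival after t. If the claim holds, that wait should behave like an Exponential(λ) rv.

```python
import random

def first_arrival_after(t, lam, rng):
    """Wait from time t to the first Poisson-process arrival after t,
    with arrival epochs built as sums of Exponential(lam) interarrivals."""
    s = 0.0
    while True:
        s += rng.expovariate(lam)
        if s > t:
            return s - t

rng = random.Random(1)
lam, t, trials = 2.0, 5.0, 200_000
waits = [first_arrival_after(t, lam, rng) for _ in range(trials)]

# For an Exponential(lam) rv: mean 1/lam = 0.5 and P(wait > 0.5) = e^-1.
mean_wait = sum(waits) / trials
tail = sum(w > 0.5 for w in waits) / trials
print(mean_wait, tail)
```

With these (arbitrary) parameters the empirical mean comes out near 0.5 and the tail probability near e^−1 ≈ 0.368, as the restart property predicts.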
Definition 2.4. A counting process {N(t); t ≥ 0} has the stationary increment property if for every t' > t > 0, N(t') − N(t) has the same distribution function as N(t' − t).
Let us define Ñ(t, t') = N(t') − N(t) as the number of arrivals in the interval (t, t'] for any given t' > t ≥ 0. We have just shown that for a Poisson process, the rv Ñ(t, t') has the same distribution as N(t' − t), which means that a Poisson process has the stationary increment property. Thus, the distribution of the number of arrivals in an interval depends on the size of the interval but not on its starting point.
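The stationary increment property lends itself to the same kind of empirical check. In this sketch (the helper `count_in` and all parameter values are hypothetical choices of ours), Ñ(t, t') is compared against N(t' − t) by simulation; both sample means should be close to λ(t' − t).

```python
import random

def count_in(a, b, lam, rng):
    """Number of arrivals of a rate-lam Poisson process in (a, b]."""
    s, k = 0.0, 0
    while True:
        s += rng.expovariate(lam)
        if s > b:
            return k
        if s > a:
            k += 1

rng = random.Random(7)
lam, t, tp, trials = 1.5, 3.0, 5.0, 100_000

inc = [count_in(t, tp, lam, rng) for _ in range(trials)]         # samples of N~(t, t')
base = [count_in(0.0, tp - t, lam, rng) for _ in range(trials)]  # samples of N(t' - t)

# Both empirical means should be close to lam * (t' - t) = 3.0.
print(sum(inc) / trials, sum(base) / trials)
```

A sharper test would compare the full empirical distributions rather than just the means; the means suffice for a sketch.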
Definition 2.5. A counting process {N(t); t ≥ 0} has the independent increment property if, for every integer k > 0 and every k-tuple of times 0 < t1 < t2 < · · · < tk, the rv's N(t1), Ñ(t1, t2), . . . , Ñ(tk−1, tk) are statistically independent.

For the Poisson process, Theorem 2.1 says that for any t, the time Z1 until the next arrival after t is independent of N(τ) for all τ ≤ t. Letting t1 < t2 < · · · < tk−1 < t, this means that Z1 is independent of N(t1), Ñ(t1, t2), . . . , Ñ(tk−1, t). We have also seen that the subsequent interarrival times after Z1, and thus Ñ(t, t'), are independent of N(t1), Ñ(t1, t2), . . . , Ñ(tk−1, t). Renaming t as tk and t' as tk+1, we see that Ñ(tk, tk+1) is independent of N(t1), Ñ(t1, t2), . . . , Ñ(tk−1, tk). Since this is true for all k, the Poisson process has the independent increment property. In summary, we have proved the following:
Theorem 2.2. Poisson processes have both the stationary increment and independent increment properties.

Note that if we look only at integer times, then the Bernoulli process also has the stationary and independent increment properties.

2.2.2 Probability density of Sn and S1, . . . , Sn

Recall from (2.1) that, for a Poisson process, Sn is the sum of n IID rv's, each with the
density function f(x) = λ exp(−λx), x ≥ 0. Also recall that the density of the sum of two independent rv's can be found by convolving their densities, and thus the density of S2 can be found by convolving f(x) with itself, that of S3 by convolving the density of S2 with f(x), and so forth. The result, for t ≥ 0, is called the Erlang density,4
    fSn(t) = λ^n t^(n−1) exp(−λt) / (n − 1)!.        (2.12)

We can understand this density (and other related matters) much better by reviewing the above mechanical derivation more carefully. The joint density for two continuous independent rv's X1 and X2 is given by fX1,X2(x1, x2) = fX1(x1) fX2(x2). Letting S2 = X1 + X2 and substituting S2 − X1 for X2, we get the following joint density for X1 and the sum S2:

    fX1,S2(x1, s2) = fX1(x1) fX2(s2 − x1).
The marginal density for S2 then results from integrating x1 out of the joint density, and this, of course, is the familiar convolution integration. For IID exponential rv's X1, X2, the joint density of X1, S2 takes the following interesting form:

    fX1,S2(x1, s2) = λ^2 exp(−λx1) exp(−λ(s2 − x1)) = λ^2 exp(−λs2)   for 0 ≤ x1 ≤ s2.   (2.13)

4 Another (somewhat poorly chosen and rarely used) name for the Erlang density is the gamma density.

This says that the joint density does not contain x1, except for the constraint 0 ≤ x1 ≤ s2. Thus, for fixed s2, the joint density, and thus the conditional density of X1 given S2 = s2, is uniform over 0 ≤ x1 ≤ s2. The integration over x1 in the convolution equation is then simply multiplication by the interval size s2, yielding the marginal density fS2(s2) = λ^2 s2 exp(−λs2), in agreement with (2.12) for n = 2.
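Both conclusions for n = 2 (the Erlang marginal and the uniform conditional density of X1 given S2 = s2) can be checked by simulation. A rough sketch, with all parameter choices our own: the empirical density of S2 in a narrow bin around a point s should match λ^2 s exp(−λs), and the X1 samples landing in that bin should average about s/2.

```python
import math
import random

rng = random.Random(3)
lam, trials = 1.0, 400_000
x1 = [rng.expovariate(lam) for _ in range(trials)]
s2 = [a + rng.expovariate(lam) for a in x1]   # S2 = X1 + X2

# Empirical density of S2 near s = 2.0, using a bin of half-width h.
s, h = 2.0, 0.05
in_bin = [a for a, v in zip(x1, s2) if s - h < v <= s + h]
emp_density = len(in_bin) / (trials * 2 * h)
erlang_2 = lam**2 * s * math.exp(-lam * s)    # (2.12) with n = 2

# Conditional on S2 ~ s, X1 should be uniform on (0, s), so mean ~ s/2.
cond_mean = sum(in_bin) / len(in_bin)
print(emp_density, erlang_2, cond_mean)
```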
This same curious behavior exhibits itself for the sum of an arbitrary number n of IID exponential rv's. That is, fX1,...,Xn(x1, . . . , xn) = λ^n exp(−λx1 − λx2 − · · · − λxn). Letting Sn = X1 + · · · + Xn and substituting Sn − X1 − · · · − Xn−1 for Xn, this becomes

    fX1···Xn−1,Sn(x1, . . . , xn−1, sn) = λ^n exp(−λsn),

since each xi cancels out above. This equation is valid over the region where each xi ≥ 0 and sn − x1 − · · · − xn−1 ≥ 0. The density is 0 elsewhere.
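Since the joint density is the constant λ^n exp(−λ sn) on this region, the marginal density of Sn is that constant times the volume of the region, so comparison with (2.12) says the volume must be sn^(n−1)/(n−1)!. A small Monte Carlo sketch of that volume claim (all names and parameter choices are our own):

```python
import math
import random

rng = random.Random(5)
lam, n, s, trials = 2.0, 4, 1.0, 200_000

# Estimate the volume of {x_i >= 0, x_1 + ... + x_{n-1} <= s} by
# sampling the enclosing cube [0, s]^(n-1).
hits = sum(
    sum(rng.uniform(0, s) for _ in range(n - 1)) <= s
    for _ in range(trials)
)
vol = (hits / trials) * s ** (n - 1)

# Constant joint density times volume vs. the Erlang density (2.12).
marginal = lam**n * math.exp(-lam * s) * vol
erlang = lam**n * s**(n - 1) * math.exp(-lam * s) / math.factorial(n - 1)
print(marginal, erlang)
```

The hit fraction estimates 1/(n−1)! (here 1/6), which is exactly the factorial in the Erlang denominator.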
The constraint region becomes clearer here if we replace the interarrival intervals X1, . . . , Xn−1 with the arrival epochs S1, . . . , Sn−1, where S1 = X1 and Si = Xi + Si−1 for 2 ≤ i ≤ n − 1. The joint densi...
Spring '09, R. Srikant