Generating Samples of Poisson Arrival Processes and Normal Random Variables

1. Poisson Processes

Let N(t) be the number of arrivals during the time interval (0, t] in some system, with N(0) = 0. The stochastic process {N(t) : t ≥ 0} is called an arrival process. Clearly, the stochastic process {N(t) : t ≥ 0} increases by jumps only. We let Tn be the time of the n-th arrival (or jump). For convenience, we let T0 = 0. Also, we let An be the interarrival time for the n-th arrival. That is, An = Tn − Tn−1.
[Figure: a sample path of N(t), a step function that jumps by 1 at the arrival times T1, T2, T3, T4, T5, with the interarrival times A1, A2, A3, A4, A5 marked between successive arrivals.]

If the interarrival times A1, A2, ... are i.i.d. random variables that have exponential distribution with parameter λ, then we obtain a very important arrival process, called a Poisson process. λ is called the arrival rate of the Poisson process.

Example: Let the interarrival times for the customers arriving at a store be i.i.d. random variables that have exponential distribution with parameter λ = 4 hours⁻¹. Remember that the expected value of an exponentially distributed random variable with parameter λ is 1/λ. Thus, on the average, the arrivals of two successive customers are separated by 1/4 hours, and hence, the rate of customer arrivals is 4 customers/hour. This is why λ is called the arrival rate of the Poisson process.
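As a quick numerical illustration of the example above, the following sketch simulates exponential interarrival times and checks both averages; the sample size and seed are arbitrary choices:

```python
import random

random.seed(0)
lam = 4.0   # arrival rate: 4 customers per hour
n = 100_000

# Interarrival times are i.i.d. exponential with parameter lam,
# so their sample mean should be close to 1/lam = 0.25 hours.
interarrivals = [random.expovariate(lam) for _ in range(n)]
mean_gap = sum(interarrivals) / n

total_time = sum(interarrivals)   # time of the n-th arrival, T_n
rate = n / total_time             # observed arrivals per hour

print(mean_gap)  # close to 0.25
print(rate)      # close to 4
```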
Important properties of Poisson processes are the following:

1. Note that N(t + s) − N(t) is the number of arrivals during the time period (t, t + s]. In a Poisson process, N(t + s) − N(t) is independent of the number of arrivals during the time period (0, t]. More generally, if t1 ≤ t2 ≤ t3 ≤ t4, then (t1, t2] and (t3, t4] are nonoverlapping time intervals, and N(t4) − N(t3) is independent of N(t2) − N(t1).

2. If {N(t) : t ≥ 0} is a Poisson process with rate λ, then N(t + s) − N(t) has Poisson distribution with parameter λs. That is,

P{N(t + s) − N(t) = k} = e^{−λs} (λs)^k / k!.

Therefore, in a Poisson process with rate λ, the expected number of arrivals during an interval of length s is λs.

3. The distribution of the number of arrivals during the time interval (t, t + s] does not depend on t. It only depends on s. Therefore, the number of arrivals during an interval of length s only depends on the length of the time interval, not where the time interval is located.

4. In a Poisson process with rate λ, the interarrival times A1, A2, ... are i.i.d. random variables that have exponential distribution with parameter λ.

Example: As a quick check, let us compute the probability that there are no arrivals during the time interval (0, t]. We have

P{N(t) − N(0) = 0} = e^{−λt} (λt)^0 / 0! = e^{−λt}.

Note that there are no arrivals during the time interval (0, t] if and only if the first interarrival time is greater than t. That is,

P{N(t) − N(0) = 0} = P{A1 ≥ t} = ∫_t^∞ λ e^{−λu} du = e^{−λt},

where the second equality follows by the fact that A1 is exponentially distributed. So it checks!

The formal definition of a Poisson process is as follows: An arrival process {N(t) : t ≥ 0} is called a Poisson process with rate λ if

1. N(t + s) − N(t) is independent of the number of arrivals during the time period (0, t].
2. P{N(t + s) − N(t) = 1} = λs + o(s), where o(s) denotes a function satisfying lim_{s→0} o(s)/s = 0.
3. P{N(t + s) − N(t) ≥ 2} = o(s).

All the properties of the Poisson process mentioned above are in alignment with this definition.
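The three postulates can also be checked numerically against the Poisson probabilities from Property 2; the rate and interval length below are arbitrary choices:

```python
import math

lam = 4.0   # rate of the Poisson process
s = 1e-3    # a short interval length

# The number of arrivals in an interval of length s is Poisson(lam * s).
p0 = math.exp(-lam * s)             # P{0 arrivals}
p1 = lam * s * math.exp(-lam * s)   # P{exactly 1 arrival}
p2_or_more = 1.0 - p0 - p1          # P{2 or more arrivals}

# Postulate 2: p1 = lam*s + o(s), so p1 / (lam*s) should be near 1.
# Postulate 3: p2_or_more = o(s), so p2_or_more / s should be near 0.
print(p1 / (lam * s))   # close to 1
print(p2_or_more / s)   # close to 0
```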
For example, we have seen that P{N(t + s) − N(t) = 1} = λs e^{−λs}. Remember that the Taylor series expansion of e^x is 1 + x + x²/2! + x³/3! + ···. Then,

λs e^{−λs} = λs [1 + (−λs) + (−λs)²/2! + (−λs)³/3! + ···] = λs − (λs)² + (λs)³/2! − ··· = λs + o(s),

so that P{N(t + s) − N(t) = 1} = λs + o(s), as the second postulate requires.

2. Nonstationary Poisson Processes

Now imagine a situation where the arrival rate of the Poisson process is not constant, but changes with time. In particular, let λ(t) be the arrival rate at time t. This type of a stochastic
process is ideal when modeling "time of the day" or "seasonality" effects, where we tend to have more (or less) arrivals during certain times or seasons.

The formal definition of a nonstationary Poisson process is as follows: An arrival process {N(t) : t ≥ 0} is called a nonstationary Poisson process with rate function λ(·) if

1. N(t + s) − N(t) is independent of the number of arrivals during the time period (0, t].
2. P{N(t + s) − N(t) = 1} = λ(t)s + o(s).
3. P{N(t + s) − N(t) ≥ 2} = o(s).

Using the definition above, one can derive the following important properties of nonstationary
Poisson processes:

1. N(t + s) − N(t) is independent of the number of arrivals during the time period (0, t].

2. If {N(t) : t ≥ 0} is a nonstationary Poisson process with rate function λ(·), then N(t + s) − N(t) has Poisson distribution with parameter ∫_t^{t+s} λ(u) du. That is,

P{N(t + s) − N(t) = k} = e^{−∫_t^{t+s} λ(u) du} (∫_t^{t+s} λ(u) du)^k / k!.

Note that if the rate function is constant (that is, λ(t) = λ for all t ≥ 0), then the expression above reduces to the expression for a stationary Poisson process.

3. However, in nonstationary Poisson processes, the distribution of the number of arrivals during the time interval (t, t + s] does depend on t.

Example: Norbert runs a hot dog stand which opens at 8 a.m. From 8 a.m. until 11 a.m.,
customers seem to arrive at a steadily increasing rate that starts with an initial rate of 5 customers per hour at 8 a.m. and reaches a maximum of 20 customers per hour at 11 a.m. From 11 a.m. to 1 p.m., the arrival rate remains constant at 20 customers per hour. However, the arrival rate then drops steadily from 1 p.m. until closing time 5 p.m., at which time it has the value of 12 customers per hour.

Assuming that the numbers of customers arriving during nonoverlapping time intervals
are independent, a nonstationary Poisson process could be used to model the customer arrivals. The rate function is given by

λ(t) = 5 + 5t          if 0 ≤ t ≤ 3
       20              if 3 ≤ t ≤ 5
       20 − 2(t − 5)   if 5 ≤ t ≤ 9,

assuming that t = 0 indicates 8 a.m. Then, for example, the number of arrivals between 8:30
a.m. and 9:30 a.m. has Poisson distribution with parameter

∫_{1/2}^{3/2} (5 + 5t) dt = 10.

The probability of having 7 customer arrivals between 8:30 a.m. and 9:30 a.m. is e^{−10} 10^7 / 7!.

3. Generating Samples of Poisson Processes

Using the fact that the interarrival times of a Poisson process with rate λ are exponentially distributed with parameter λ, one can easily generate samples of a Poisson process:

1. Set the arrival counter n = 0. Set T0 = 0.
2. Increment n by 1. Let An be a sample from exponential distribution with parameter λ.
3. Let Tn = Tn−1 + An.
4. Return to Step 2.

Then, the arrival times T1, T2, ... characterize a sample of the Poisson process.

4. Generating Samples of Nonstationary Poisson Processes

The algorithm that we describe for generating samples of a nonstationary Poisson process is similar to the acceptance-rejection idea. Let λ* = max{λ(t) : t ≥ 0}. We start by generating a sample of a stationary Poisson process {N*(t) : t ≥ 0} with constant rate λ*. Suppose that the arrival times we obtain are T*_1, T*_2, .... Then, we accept each arrival time T*_n with probability λ(T*_n)/λ*. This leads to the following algorithm:

1. Set the arrival counter n = 0. Set T* = 0, T0 = 0.
2. Let A be a sample from exponential distribution with parameter λ*.
3. Let T* ← T* + A.
4. Generate U ~ U[0, 1]. If U ≤ λ(T*)/λ*, then increment n by 1 and let Tn = T*.
5. Return to Step 2.

Then, the arrival times T1, T2, ... characterize a sample of the nonstationary Poisson process with rate function λ(·).
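Both generation algorithms can be sketched in Python. The rate function below is the one from the hot dog stand example; the function names, seed, and nine-hour horizon are illustrative choices:

```python
import random

random.seed(1)

def rate(t):
    """Arrival rate for the hot dog stand example (t in hours after 8 a.m.)."""
    if 0 <= t <= 3:
        return 5 + 5 * t
    elif 3 < t <= 5:
        return 20.0
    elif 5 < t <= 9:
        return 20 - 2 * (t - 5)
    return 0.0  # stand is closed

def poisson_arrivals(lam, horizon):
    """Stationary Poisson process: cumulate exponential interarrival times."""
    arrivals, t = [], 0.0
    while True:
        t += random.expovariate(lam)
        if t > horizon:
            return arrivals
        arrivals.append(t)

def thinned_arrivals(rate_fn, lam_star, horizon):
    """Nonstationary Poisson process via thinning: accept each candidate
    arrival T* of a rate-lam_star process with probability rate_fn(T*)/lam_star."""
    return [t for t in poisson_arrivals(lam_star, horizon)
            if random.random() <= rate_fn(t) / lam_star]

lam_star = 20.0  # max of rate(t) over the nine-hour day
day = thinned_arrivals(rate, lam_star, 9.0)
# The expected number of arrivals is the integral of rate(t) over [0, 9],
# which is 37.5 + 40 + 64 = 141.5, so len(day) should be of that order.
print(len(day))
```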
To see that this algorithm works, we verify Postulate 2 required in the definition of a nonstationary Poisson process (verifications of Postulates 1 and 3 are immediate). Note that

P{one arrival occurs in (t, t + dt]}
= P{one arrival occurs from the Poisson process with rate λ* in (t, t + dt] and this arrival is accepted}
= [λ* dt + o(dt)] λ(t)/λ* = λ(t) dt + o(dt).

This approach is called thinning. It works efficiently when λ(t) stays close to λ* for most t, so that few candidate arrivals are rejected.

5. Generating Samples of Normal Random Variables

In the early days of simulation, one way of generating samples from N(0, 1) was to use the
central limit theorem. Note that if U1, U2, ... are uniformly distributed over [0, 1], then

(U1 + U2 + ··· + Un − n/2) / √(n/12) ≈ N(0, 1).

Setting n = 12 yields

U1 + U2 + ··· + U12 − 6 ≈ N(0, 1).

This convolution method is quite fast, but it is not exact. The most commonly used exact algorithms for generating samples of normal random variables generate them in pairs.
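A minimal sketch of the convolution method (the sample size and seed are arbitrary choices):

```python
import random

random.seed(2)

def approx_normal():
    """Convolution method: the sum of 12 uniforms minus 6 is approximately N(0, 1).
    Its mean and variance are exactly 0 and 1, but its distribution is not normal."""
    return sum(random.random() for _ in range(12)) - 6.0

samples = [approx_normal() for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples) - mean ** 2
print(mean, var)  # close to 0 and 1
```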
The reason is that the bivariate normal density for two independent normal random variables having mean zero and unit variance has a particularly nice structure in polar coordinates. Specifically, assume that (N1, N2) is such a pair of normal random variables. Then, (N1, N2) can be expressed in polar coordinates as

(N1, N2) = (R cos θ, R sin θ),

where θ ∈ [0, 2π) is the angular component in radians and R is the radial component. Because of the spherical symmetry of such a bivariate normal density, θ is uniformly distributed over [0, 2π) and independent of R. Furthermore,

R = √(N1² + N2²) = √(χ²_2),

where χ²_2 has chi-square distribution with 2 degrees of freedom. This implies that χ²_2 has the same distribution as 2X, where X is an exponentially distributed random variable with parameter 1. Therefore, we can utilize the following algorithm to generate samples of N(0, 1):

1. Generate U1 ~ U[0, 1], U2 ~ U[0, 1].
2. Set θ = 2π U1 and R = √(−2 ln U2). Since −ln U2 is exponentially distributed with parameter 1, θ and R have the required distributions and are independent.
3. Set N1 = R cos θ and N2 = R sin θ.

This method is known as the Box-Muller algorithm. It can be a bit slow, because of the cosine
and sine calculations. A variant that is typically faster is the polar transformation method.
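The Box-Muller steps above can be sketched as follows (the sample size and seed are arbitrary choices):

```python
import math
import random

random.seed(3)

def box_muller():
    """Box-Muller: turn two independent U[0, 1] samples into two
    independent N(0, 1) samples via polar coordinates."""
    u1, u2 = random.random(), random.random()
    theta = 2.0 * math.pi * u1           # angle, uniform on [0, 2*pi)
    r = math.sqrt(-2.0 * math.log(u2))   # radius, sqrt of a chi-square(2) sample
    return r * math.cos(theta), r * math.sin(theta)

pairs = [box_muller() for _ in range(50_000)]
samples = [x for pair in pairs for x in pair]
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples) - mean ** 2
print(mean, var)  # close to 0 and 1
```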
This method involves an acceptance-rejection procedure.

1. Generate U1 ~ U[0, 1], U2 ~ U[0, 1].
2. Set V1 = 2U1 − 1, V2 = 2U2 − 1.
3. Compute W = V1² + V2².
4. If W ≤ 1, then set

N1 = V1 √(−2 ln W / W),    N2 = V2 √(−2 ln W / W).

Otherwise, go to Step 1.

These algorithms generate samples of N(0, 1). To generate samples of N(μ, σ²), we use the
relationship X = μ + σN, where N ~ N(0, 1).

6. Generating Samples of Correlated Normal Random Variables
The d-dimensional multivariate normal distribution is used to model a vector of d random variables X1, X2, ..., Xd, where each Xi is normally distributed with mean μi and variance σii, and the covariance between Xi and Xj is σij. We let μ be the vector (μ1, μ2, ..., μd) and Σ be the symmetric variance-covariance matrix

    [ σ11  σ12  ...  σ1d ]
Σ = [ σ21  σ22  ...  σ2d ]
    [ ...  ...  ...  ... ]
    [ σd1  σd2  ...  σdd ].

Our goal is to generate samples of such a vector of correlated random variables. Such multivariate normal random variables are widely used in finance applications where various financial instruments have correlated returns.

To illustrate the idea, we start with the case d = 2. We have

μ = (μ1, μ2),    Σ = [ σ11  σ12 ] = [ σ1²      ρ σ1 σ2 ]
                     [ σ21  σ22 ]   [ ρ σ1 σ2  σ2²     ],

where we let σ1² = σ11 = Var(X1), σ2² = σ22 = Var(X2) and

ρ = σ12 / (σ1 σ2) = σ21 / (σ1 σ2).

We now describe a way of generating samples of X1 and X2 by using two independent samples
of N(0, 1). Let N1 and N2 be two independent standard normal random variables. If we set

X1 = μ1 + σ1 N1,

then X1 ~ N(μ1, σ1²). For X2, we will try to write it as

X2 = μ2 + a N1 + b N2.

Note that E{X2} = μ2 is automatically satisfied. Since N1 and N2 are independent,

Var(X2) = a² Var(N1) + b² Var(N2) = a² + b²,
Cov(X1, X2) = Cov(μ1 + σ1 N1, μ2 + a N1 + b N2) = σ1 a E{N1²} + σ1 b E{N1 N2} = σ1 a.

Hence, we arrive at the equations

a² + b² = σ2²,    a σ1 = ρ σ1 σ2.

Solving these equations, we get a = ρ σ2 and b = σ2 √(1 − ρ²). In other words,

[ X1 ]   [ μ1 ]   [ σ1    0            ] [ N1 ]
[ X2 ] = [ μ2 ] + [ ρ σ2  σ2 √(1 − ρ²) ] [ N2 ],

or in matrix notation, X = μ + L N, where L is a lower triangular matrix. Thus, by generating two independent samples of N(0, 1), we can obtain X through the above linear transformation.

This methodology works when d > 2 as well. In general, X can be written in the form

X = μ + L N,

where N is a d-dimensional vector whose components are independent N(0, 1). To connect
the matrix L to Σ, observe that

Σ = E{(X − μ)(X − μ)ᵗ} = E{(L N)(L N)ᵗ} = E{L N Nᵗ Lᵗ} = L E{N Nᵗ} Lᵗ = L Lᵗ,

where the last equality holds because the components of N are independent N(0, 1), so that E{N Nᵗ} is the identity matrix. So L is in some sense a "square root" of Σ. Furthermore, because Σ is symmetric and positive semi-definite, L can always be chosen to be a lower triangular matrix with real entries. Writing Σ as

Σ = L Lᵗ

is called the Cholesky factorization of Σ. Clearly, the key to generating X is the computation
of the Cholesky factor L. Assuming that Σ is a d × d matrix, L (the Cholesky factor of Σ) can be computed by using the following algorithm. (Let Lij be the (i, j)-th component of matrix L.)

For i = 1 to d
    For j = 1 to i − 1
        Set Lij = (σij − Σ_{k=1}^{j−1} Lik Ljk) / Ljj
        Set Lji = 0
    End for
    Set Lii = (σii − Σ_{k=1}^{i−1} Lik²)^{1/2}
End for

Provided that d is not too large, this approach offers an efficient way to generate samples of correlated normal random variables.
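The factorization and the sampling step X = μ + LN can be sketched as follows. The 2 × 2 covariance matrix below is a hypothetical example with σ1 = 1, σ2 = 2, ρ = 0.5, for which the two-dimensional derivation above gives a = ρσ2 = 1 and b = σ2 √(1 − ρ²) = √3:

```python
import math
import random

def cholesky(sigma):
    """Lower triangular L with L L^t = sigma, computed exactly as in the
    algorithm above (sigma must be symmetric positive definite)."""
    d = len(sigma)
    L = [[0.0] * d for _ in range(d)]
    for i in range(d):
        for j in range(i):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = (sigma[i][j] - s) / L[j][j]
        s = sum(L[i][k] ** 2 for k in range(i))
        L[i][i] = math.sqrt(sigma[i][i] - s)
    return L

def sample_mvn(mu, L):
    """Return one sample of X = mu + L N, with N a vector of independent N(0, 1)."""
    d = len(mu)
    n = [random.gauss(0.0, 1.0) for _ in range(d)]
    return [mu[i] + sum(L[i][k] * n[k] for k in range(i + 1)) for i in range(d)]

# Hypothetical example: sigma1 = 1, sigma2 = 2, rho = 0.5.
sigma = [[1.0, 1.0],
         [1.0, 4.0]]
L = cholesky(sigma)
print(L)  # [[1.0, 0.0], [1.0, 1.732...]]
x = sample_mvn([0.0, 0.0], L)
```

The computed factor matches the derivation: L21 = ρσ2 = 1 and L22 = σ2 √(1 − ρ²) = √3.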
TOPALOGLU, '08