Week06-Lecture (20100928) - Simulation Modeling and...

Simulation Modeling and Analysis (ORIE 4580/5580/5581)
Week 6: Generating Samples of Poisson Processes, Normal Random Variables, and Introduction to Input Modeling (09/28/10 - 09/30/10)

Announcements and Agenda
• HW 4 is available on Blackboard; it is due at 11am on 10/7.
• For ORIE 5581 students ONLY: there will be a midterm at 7:30pm on Thursday, September 30, in Rhodes 471.
  • A sample midterm is available on Blackboard (see the "Exam" folder in Course Documents).
  • Review session: Tuesday (9/28) at 7:30pm in Rhodes 253. I'll go over the sample midterm questions.
  • Open-book and open-notes (no laptop, iPad, iPhone, iTouch, etc.). Please bring a calculator!
• I will be traveling next Thursday (10/7). The lecture will be covered by Fan Zhu.

Motivation: Arrival Processes
• In discrete-event simulation (the 2nd half of the course), we are interested in modeling arrivals of customers to a system:
  • arrivals of customers to a bank, ATM, web site, restaurant, etc.
  • number of telephone calls to 911 centers, etc.
• Let N(t) denote the number of arrivals during the time interval (0, t] in some system, with N(0) = 0. The stochastic process { N(t) : t ≥ 0 } is called an arrival process.
• What does the process { N(t) : t ≥ 0 } look like?

Picture, Notation, and Terminology
[Figure: a sample path of N(t), a step function that jumps at the arrival times T1, ..., T5, with interarrival times A1, ..., A5 between them.]
• N(t) denotes the number of arrivals during the time interval (0, t].
• Observation: the process { N(t) : t ≥ 0 } increases by jumps only.
• Let Tn be the time of the nth arrival (or jump). Convention: T0 = 0.
• Let An denote the interarrival time between the (n-1)th and the nth arrivals, that is, An = Tn - Tn-1.

Definition of the Poisson Process Based on the Interarrival Time Distribution
• If the interarrival times A1, A2, A3, ... are independent and identically distributed random variables having an exponential distribution with parameter λ, then the arrival process is called a Poisson process.
• λ is called the arrival rate of the Poisson process.
• Example: Suppose the interarrival times for customers arriving at a store are i.i.d. random variables with an exponential distribution with parameter λ = 4 hours⁻¹.
  • IMPORTANT NOTE: the unit of λ is 1/hour.
  • The expected value of an Exponential(λ) random variable is 1/λ, so on average the arrivals of two successive customers are separated by 1/λ = 1/4 hour = 0.25 hours (15 minutes).
  • The rate of customer arrivals is 4 customers per hour.

Important Properties of the Poisson Process
• The number of arrivals during the time period (t, t+s] is independent of the number of arrivals during the interval (0, t]:
  • N(t+s) - N(t) is independent of N(t).
  • More generally, if t1 ≤ t2 ≤ t3 ≤ t4, then N(t4) - N(t3) is independent of N(t2) - N(t1).
• For any t and s, the number of arrivals during (t, t+s], N(t+s) - N(t), has a Poisson distribution with parameter λs:
  • The expected number of arrivals during an interval of length s is λs.
  • The probability mass function of N(t+s) - N(t) is given by
      Pr{ N(t+s) - N(t) = k } = e^{-λs} (λs)^k / k!,   k = 0, 1, 2, ...
• IMPORTANT: the distribution of the number of arrivals during (t, t+s] does not depend on t! It depends only on the length s of the interval.

Example: Computing the Probability of No Arrival
• Method 1: The number of arrivals during (0, t] follows a Poisson distribution with parameter λt:
      Pr{ N(t) - N(0) = 0 } = e^{-λt} (λt)^0 / 0! = e^{-λt}
• Method 2: If there is no arrival during (0, t], this means that the first interarrival time is greater than t:
      Pr{ N(t) - N(0) = 0 } = Pr{ A1 > t } = ∫_t^∞ λ e^{-λu} du = e^{-λt}

A Formal Definition of the Poisson Process: Why?
• An arrival process { N(t) : t ≥ 0 } is a Poisson process if
  1. For any s and t, N(t+s) - N(t) is independent of N(t).
  2. For any s and t, Pr{ N(t+s) - N(t) = 1 } = λs + o(s), where o(s) denotes any function g(·) such that lim_{s→0} g(s)/s = 0.
  3.
For any s and t, Pr{ N(t+s) - N(t) ≥ 2 } = o(s).

Consistency Between the Two Definitions
• Using our definition based on the interarrival time distribution, we have
      Pr{ N(t+s) - N(t) = 1 } = e^{-λs} (λs)^1 / 1! = (λs) e^{-λs}
        = (λs) [ 1 - λs + (λs)²/2! - (λs)³/3! + (λs)⁴/4! - ··· ]
        = λs - (λs)² + (λs)³/2! - (λs)⁴/3! + (λs)⁵/4! - ···
        = λs + o(s)

Nonstationary Poisson Processes
• Suppose the arrival rate of the Poisson process changes with time.
  • Example: a restaurant is busier around noon and dinner time.
  • Application: modeling "time of day" and "seasonality" effects.
• Let λ(t) denote the arrival rate at time t.
• Formal definition of a nonstationary Poisson process:
  1. For any s and t, N(t+s) - N(t) is independent of N(t).
  2. For any s and t, Pr{ N(t+s) - N(t) = 1 } = λ(t)s + o(s).
  3. For any s and t, Pr{ N(t+s) - N(t) ≥ 2 } = o(s).
• Important property: for any s and t, N(t+s) - N(t) has a Poisson distribution with parameter ∫_t^{t+s} λ(u) du, that is,
      Pr{ N(t+s) - N(t) = k } = e^{-∫_t^{t+s} λ(u) du} · ( ∫_t^{t+s} λ(u) du )^k / k!,   k = 0, 1, 2, ...
• The number of arrivals during the interval (t, t+s] now depends on both t and s!
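The Poisson-count formula above can be checked numerically. A minimal Python sketch (the function names are my own, not from the course; the rate function is borrowed from the food-stand example that follows, where the integral over (1/2, 3/2] equals 10):

```python
import math

def expected_arrivals(rate_fn, t, s, n=10_000):
    """Approximate the integral of rate_fn over (t, t+s] by the midpoint rule."""
    h = s / n
    return sum(rate_fn(t + (i + 0.5) * h) for i in range(n)) * h

def prob_k_arrivals(rate_fn, t, s, k):
    """Pr{ N(t+s) - N(t) = k } for a nonstationary Poisson process."""
    m = expected_arrivals(rate_fn, t, s)
    return math.exp(-m) * m**k / math.factorial(k)

# Illustration: lambda(t) = 5 + 5t on the morning ramp of the food-stand example.
rate = lambda t: 5.0 + 5.0 * t
m = expected_arrivals(rate, 0.5, 1.0)       # integral of 5 + 5u over (1/2, 3/2] = 10
p0 = prob_k_arrivals(rate, 0.5, 1.0, 0)     # e^{-10}, i.e. Pr{ no arrival 8:30-9:30 }
```

For a stationary process the same function applies with a constant rate_fn, recovering the e^{-λs}(λs)^k/k! formula.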
Example
• Suppose a food stand operates from 8am until 5pm.
  • From 8am until 11am, customers arrive at a steadily increasing rate, starting at 5 customers per hour at 8am and reaching a maximum of 20 customers per hour at 11am.
  • From 11am to 1pm, the arrival rate remains constant at 20 customers per hour.
  • After 1pm, the arrival rate drops steadily until it hits 12 customers per hour at 5pm.
• If we assume that arrivals in nonoverlapping time intervals are independent, we can model this as a nonstationary Poisson process with (t = 0 corresponding to 8am):
      λ(t) = 5 + 5t          if 0 ≤ t ≤ 3,
             20              if 3 ≤ t ≤ 5,
             20 - 2(t - 5)   if 5 ≤ t ≤ 9.
• The number of arrivals between 8:30 and 9:30 has a Poisson distribution with parameter
      ∫_{1/2}^{3/2} λ(u) du = ∫_{1/2}^{3/2} (5 + 5u) du = 10.

Generating Samples of a Stationary Poisson Process
• Suppose we have a Poisson process with a constant rate λ.
• To generate samples, we use the fact that the interarrival times are exponential random variables:
  • STEP 1: Initialize the arrival counter n = 0 and set T0 = 0.
  • STEP 2: Increment n by 1. Let An be a sample from an exponential distribution with parameter λ.
  • STEP 3: Let Tn = Tn-1 + An. Return to STEP 2.
• The arrival times T1, T2, T3, ... characterize a sample of the Poisson process.

Generating Samples of a Nonstationary Poisson Process
• For a nonstationary Poisson process, the interarrival times are NOT exponential, so we need a new technique. We will use something similar to the acceptance-rejection method:
  • Let λ* = max{ λ(t) : t ≥ 0 } denote the largest arrival rate.
  • Generate arrival times T1*, T2*, T3*, ... associated with a stationary Poisson process with rate λ*. (We can do this using the method from the previous slide.)
  • For each arrival time Ti*, generate a Uniform[0,1] random variable Ui. If Ui ≤ λ(Ti*)/λ*, accept Ti*; otherwise discard it.
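The two procedures above can be sketched in a few lines of Python (function names are my own; `random.expovariate` supplies the Exponential(λ) interarrival samples):

```python
import random

def stationary_poisson(rate, horizon, rng=random):
    """STEPs 1-3 from the slide: cumulative sums of Exponential(rate) interarrivals,
    stopped once the next arrival would exceed the horizon."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)      # An ~ Exponential(rate)
        if t > horizon:
            return times
        times.append(t)                 # Tn = T(n-1) + An

def nonstationary_poisson(rate_fn, rate_max, horizon, rng=random):
    """Acceptance-rejection: generate at the peak rate, keep each Ti* only if
    Ui <= rate_fn(Ti*) / rate_max."""
    return [t for t in stationary_poisson(rate_max, horizon, rng)
            if rng.random() <= rate_fn(t) / rate_max]

# Food-stand example: lambda* = 20, a 9-hour day (t = 0 is 8am).
def food_stand_rate(t):
    if t <= 3:
        return 5.0 + 5.0 * t
    if t <= 5:
        return 20.0
    return 20.0 - 2.0 * (t - 5.0)

arrivals = nonstationary_poisson(food_stand_rate, 20.0, 9.0)
```

The expected number of accepted arrivals over the day is the integral of λ(t), which for this rate function works out to 141.5 customers.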
• We accept each arrival time Ti* with probability λ(Ti*)/λ*.
[Figure: candidate arrival times T1*, ..., T6* on a time axis, each checked against Ui ≤ λ(Ti*)/λ*.]

Proof of Correctness
• To establish the correctness of the sampling algorithm, we check against the definition of the nonstationary Poisson process:
  1. For any s and t, N(t+s) - N(t) is independent of N(t).
  2. For any s and t, Pr{ N(t+s) - N(t) = 1 } = λ(t)s + o(s).
  3. For any s and t, Pr{ N(t+s) - N(t) ≥ 2 } = o(s).
• Conditions 1 and 3 are immediate. Why?
• To verify Condition 2, note that
      Pr{ one arrival occurs in (t, t+dt] }
        = Pr{ one arrival occurs in (t, t+dt] from the Poisson process with rate λ* and this arrival is accepted }
        = (λ* dt + o(dt)) · λ(t)/λ*
        = λ(t) dt + o(dt)

Generating Samples from the Normal Distribution
• In the early days of simulation, one way to generate a sample from N(0,1) was to use the Central Limit Theorem: if U1, U2, ... are i.i.d. Uniform[0,1] random variables, then
      U1 + U2 + ··· + Un - n/2  ≈(D)  N(0, n/12)
      ⇔  (U1 + U2 + ··· + Un - n/2) / √(n/12)  ≈(D)  N(0, 1)
• This method is NOT exact; it is only an approximation!
• The most commonly used method for generating samples of normal random variables exactly is the Box-Muller method.
• Key insight: if N1 and N2 are two independent N(0,1) random variables, then the joint density function f(·,·) of N1 and N2 has a very nice structure:
      f(x, y) = (1/2π) e^{-(x² + y²)/2},   -∞ < x, y < ∞
  Why?

Bivariate Density of Independent Normals
• The contours of the density function form a collection of concentric circles!
      { (x, y) : f(x, y) = c } = { (x, y) : x² + y² = -2 ln(2πc) }
• Suppose we express (N1, N2) in polar coordinates: (N1, N2) = (R cos Θ, R sin Θ), where R denotes the radial component and Θ ∈ [0, 2π) denotes the angular component (in radians).
• Observation 1: by the "spherical symmetry" of the density function, Θ is uniformly distributed over the interval [0, 2π) and is independent of R!
• Observation 2: R² = N1² + N2², so R² has a chi-square distribution with two degrees of freedom. The density g(·) and cumulative distribution G(·) functions associated with R² are given, for all r ≥ 0, by
      g(r) = (1/2) e^{-r/2}   and   G(r) = ∫_0^r g(u) du = ∫_0^r (1/2) e^{-u/2} du = 1 - e^{-r/2}

Box-Muller Method
• Generate independent U1 ~ Uniform[0,1] and U2 ~ Uniform[0,1].
• Set R = √(-2 ln U1) and Θ = 2π U2.
  • Note that G⁻¹(u) = -2 ln(1 - u), but 1 - U1 has the same distribution as U1.
• Return N1 = R cos Θ and N2 = R sin Θ.

Faster Variant Using Acceptance-Rejection
• STEP 1: Generate independent U1 ~ Uniform[0,1] and U2 ~ Uniform[0,1].
• STEP 2: Set V1 = 2U1 - 1 and V2 = 2U2 - 1.
• STEP 3: Compute W = V1² + V2².
• STEP 4: If W ≤ 1, then set
      N1 = V1 · √(-2 ln W / W)   and   N2 = V2 · √(-2 ln W / W).
  Otherwise, return to STEP 1.
• Why does this work?

Samples of Non-Standard Normals
• Suppose that X is a normal random variable with mean μ and variance σ².
• We can write X = μ + σZ, where Z is the standard normal random variable with mean 0 and variance 1.
• We can generate samples of X by generating samples of Z and applying the above transformation.

Correlated Normal Random Variables
• Suppose X = (X1, X2, ..., Xd) is a d-dimensional multivariate normal random vector with mean μ and covariance matrix Σ:
      μ = (μ1, μ2, ..., μd)ᵀ   with   μi = E[Xi],   i = 1, 2, ..., d
      Σ = [σij]   with   σij = Cov(Xi, Xj),   i, j = 1, 2, ...,
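The Box-Muller method and its acceptance-rejection (polar) variant described above can be sketched directly from the slide's steps (function names are my own):

```python
import math
import random

def box_muller(rng=random):
    """Return a pair of independent N(0,1) samples via Box-Muller."""
    u1, u2 = rng.random(), rng.random()
    # G^{-1}(u) = -2 ln(1-u); using 1-u1 also avoids log(0) since random() < 1.
    r = math.sqrt(-2.0 * math.log(1.0 - u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

def polar_box_muller(rng=random):
    """Polar variant: reject (V1, V2) outside the unit disk, no cos/sin needed."""
    while True:
        v1 = 2.0 * rng.random() - 1.0
        v2 = 2.0 * rng.random() - 1.0
        w = v1 * v1 + v2 * v2
        if 0.0 < w <= 1.0:
            factor = math.sqrt(-2.0 * math.log(w) / w)
            return v1 * factor, v2 * factor
```

The polar variant is "faster" in the sense that it trades the trig calls for a rejection loop; the acceptance probability is the area ratio π/4 ≈ 0.785.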
d, and in particular σii = Cov(Xi, Xi) = Var(Xi).
• Multivariate normals have a large number of applications in finance, especially when modeling various financial instruments with correlated returns.

Two-Dimensional Case (d = 2)
      μ = (μ1, μ2)ᵀ,   Σ = [ σ11  σ12 ; σ21  σ22 ] = [ σ1²  ρσ1σ2 ; ρσ1σ2  σ2² ]
  where σ1² = σ11 = Var(X1), σ2² = σ22 = Var(X2), and ρ = σ12/(σ1σ2) = σ21/(σ1σ2) = Corr(X1, X2).
• Let N1 and N2 be two independent standard normal random variables.
• Suppose we set X1 = μ1 + σ1 N1 and X2 = μ2 + a N1 + b N2. How do we choose the constants a and b?
      E[X1] = μ1   and   Var(X1) = σ1²
      E[X2] = μ2   and   Var(X2) = a² Var(N1) + b² Var(N2) = a² + b²
      Cov(X1, X2) = E[(X1 - μ1)(X2 - μ2)] = E[σ1 N1 (a N1 + b N2)] = a σ1
• System of equations:
      a² + b² = σ2²   and   a σ1 = ρ σ1 σ2   ⟹   a = ρσ2   and   b = σ2 √(1 - ρ²)
• In matrix form:
      (X1, X2)ᵀ = (μ1, μ2)ᵀ + [ σ1  0 ; ρσ2  σ2 √(1 - ρ²) ] (N1, N2)ᵀ
  Note the lower triangular matrix!

General Case
• In general, we can use the same procedure to transform independent standard normal random variables into correlated ones.
• We can write X = μ + L N, where L is a "lower triangular matrix" and N = (N1, N2, ..., Nd) is a vector of independent standard normal random variables with mean 0 and variance 1.
• How do we determine the matrix L?
      Σ = E[(X - μ)(X - μ)ᵀ] = E[L N (L N)ᵀ] = E[L N Nᵀ Lᵀ] = L E[N Nᵀ] Lᵀ = L I Lᵀ = L Lᵀ
• The matrix L is a "square root" of the target covariance matrix Σ.
• Since Σ is a symmetric positive semi-definite matrix, we can always write Σ = L Lᵀ, where L is a lower triangular matrix! This matrix L is called the Cholesky factor of Σ.

Computing the Cholesky Factor (d = 3)
      Σ = [ σ11 σ12 σ13 ; σ21 σ22 σ23 ; σ31 σ32 σ33 ] = L Lᵀ
        = [ L11  0   0  ; L21 L22  0  ; L31 L32 L33 ] · [ L11 L21 L31 ; 0 L22 L32 ; 0 0 L33 ]
        = [ L11²                                        (symmetric)
            L21 L11     L21² + L22²
            L31 L11     L31 L21 + L32 L22     L31² + L32² + L33² ]
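Expanding Σ = L Lᵀ entry by entry gives a simple recursive solve; a minimal sketch for general d (pure Python, function names are my own) that specializes to the d = 3 formulas:

```python
import math

def cholesky(sigma):
    """Lower-triangular L with L L^T = sigma (sigma symmetric positive definite)."""
    d = len(sigma)
    L = [[0.0] * d for _ in range(d)]
    for i in range(d):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(sigma[i][i] - s)    # e.g. L11 = sqrt(sigma11)
            else:
                L[i][j] = (sigma[i][j] - s) / L[j][j]   # e.g. L21 = sigma21 / L11
    return L

def correlated_normals(mu, L, z):
    """X = mu + L N for a vector z of independent N(0,1) samples."""
    return [mu[i] + sum(L[i][k] * z[k] for k in range(i + 1))
            for i in range(len(mu))]
```

For example, Σ = [[4, 2], [2, 3]] factors as L = [[2, 0], [1, √2]], which matches a = ρσ2 and b = σ2√(1-ρ²) from the two-dimensional case.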
To determine the matrix L, we solve for the entries Lij in the following order:
  1. L11 = √σ11
  2. L21 = σ21 / L11
  3. L22 = √(σ22 - L21²)
  4. L31 = σ31 / L11
  5. L32 = (σ32 - L31 L21) / L22
  6. L33 = √(σ33 - L31² - L32²)

Introduction to Input Modeling

Motivation
• To develop complex simulation models, we need to define the probability distributions of the "input sequences" to the model.
  • Example: in a call center simulation, we need to generate call arrival times, call types, and call response times.
• Question: how do we generate the input sequences?
• The answer depends on the amount of relevant historical data we have at hand. Three scenarios to consider:
  • No data
  • Huge amount of data
  • Moderate amount of data

Scenario 1: No Data
• Why might you have no historical data?
  • No records have been kept.
  • New products or operating processes.
• What should we do? What distribution should we use?
  • A common approach is to use the triangular distribution.
  • The triangular distribution requires THREE parameters:
    • a -- the minimum value
    • b -- the maximum value
    • c -- the most likely value
  • Other choices: Uniform, Beta.
[Figure: density and cumulative distribution functions of the triangular distribution.]

Scenario 2: Huge Amount of Data
• Most organizations keep extensive records on many aspects of their operations in large databases.
  • Example: credit card companies, Amazon.com, and Verizon.
• If there is an enormous amount of "clean" data, we can use the data directly within the simulation model via resampling (or bootstrapping, or using the empirical distribution).
  • Resampling: when a new observation is required, one of the observations from the pool of data is selected uniformly at random.
• Caveats: things to keep in mind when faced with a large amount of data:
  • Collection errors: data may be contaminated with noise.
  • Non-stationarity: patterns may have shifted, so the data might be "stale".
• Example: suppose we have the observations 1, 1, 2, 3, 4, 2 for the number of items in the shopping carts of customers (just before checking out).
  We can use resampling based on this dataset to generate the number of items purchased.

Drawbacks of the Resampling Method
• This approach cannot generate values outside the range of the observed data.
• Many data points need to be kept in memory.
• It is difficult to perform "what if" analysis.
  • Suppose we want to know the impact of lowering the price of all items by 10%.
• People are reluctant to use the resampling approach.

Scenario 3: Moderate Amount of Data
• Suppose we have a reasonable amount of data, but not enough to employ the resampling method described above.
  • This is the case that we will focus on!
• A standard approach: choose a family of distributions (e.g., normal, exponential, or Weibull) to model the random quantity, and determine the parameters of the selected distribution from the data.
• The fitted distribution is used within the simulation model to generate the input sequences.
• THREE issues need to be resolved:
  • (Model specification) How do we choose the family of distributions?
  • (Parameter estimation) How do we select the parameters of the distribution?
  • (Goodness of fit) Once we have chosen the distribution and the corresponding parameters, how do we assess the "fit"?
• Key simplifying assumption: successive elements of an input sequence are i.i.d.
  • Modeling correlated input sequences is an active research area.

Model Specification
• THREE things to consider when choosing the family of distributions:
  • Theoretical justification: using theoretical properties of each family of distributions to justify its use in modeling different situations.
  • Histograms and bar plots: assessing the "rough shape" of the probability density function.
  • Q-Q plots: assessing the cumulative distribution function.
• Examples: Normal, Log-normal, Poisson process, Weibull. When should we use each of these random variables? How do we check whether they are appropriate?
• Theoretical properties of commonly used random variables ...
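The resampling scheme for the shopping-cart example can be sketched in a few lines (function names are my own; `random.choice` does the uniform selection from the pool):

```python
import random

def make_resampler(observations, rng=random):
    """Empirical-distribution sampler: each draw picks one recorded
    observation uniformly at random (resampling / bootstrapping)."""
    pool = list(observations)
    return lambda: rng.choice(pool)

# Observations from the slide: items per shopping cart just before checkout.
cart_sizes = [1, 1, 2, 3, 4, 2]
next_cart = make_resampler(cart_sizes)
samples = [next_cart() for _ in range(1000)]
```

Note that every generated value lies in {1, 2, 3, 4} - a direct illustration of the first drawback above: resampling can never produce a value outside the observed range.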