IE 305 Simulation Output Analysis of a Single System (Continued)

Single-Run Estimates for Steady-State Simulations

It may be desirable to estimate performance (and assess the accuracy of the estimate) from one single (probably long) run of a simulation. We have seen that ARENA can do this, but that it often fails in its attempts. There are a couple of reasons why we might want to use only a single run.

1. We will not have to "throw away" data from many initial transients, thus saving computer time in steady-state simulations.
2. We may be able to provide useful statistics to people who do not have the statistical background to understand replicates, correlation, etc.

Recall the basic problem. Suppose we are interested in average time in queue. Each customer incurs a time in queue, and we "write" each customer's time in queue to a data file. Let X_i be the time in queue of customer i. We could calculate a sample average X-bar and a sample variance S^2 from this data, and in theory we could then calculate a confidence interval. What is wrong?
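Concretely, the naive calculation just described might look like the following minimal sketch (not from the notes; the data values and the 1.96 critical value are illustrative assumptions). The problem with it is explained next.

```python
# Naive approach: treat the recorded times in queue as if they were i.i.d.
# and form X-bar, S^2, and a confidence interval from them directly.
import math
from statistics import mean, variance

def naive_confidence_interval(times_in_queue, z_critical=1.96):
    """Mean and half-width of a CI that (wrongly) assumes independent data."""
    n = len(times_in_queue)
    xbar = mean(times_in_queue)       # sample mean X-bar
    s2 = variance(times_in_queue)     # sample variance S^2 (n-1 in denominator)
    half_width = z_critical * math.sqrt(s2 / n)
    return xbar, half_width

# Made-up times in queue; with real (autocorrelated) queue data this
# interval tends to be too narrow, for the reason discussed below.
waits = [0.0, 1.2, 2.5, 1.8, 0.4, 0.0, 3.1, 2.2]
xbar, hw = naive_confidence_interval(waits)
print(f"naive estimate: {xbar:.2f} +/- {hw:.2f}")
```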
The data points are not independent, but rather are "correlated". The estimate S^2 will be biased, and the confidence interval wrong. Actually, the X_i values are "correlated with themselves", so we refer to this phenomenon as "autocorrelation". In particular, X_i will be highly correlated with X_{i+1}. We would expect to see much lower correlation between, say, X_i and X_{i+200}. How can we deal with this autocorrelation problem?
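To see this autocorrelation directly, one can estimate the lag-k sample autocorrelation of the X_i series. The sketch below is an illustration (it assumes the times in queue are held in a Python list) and is not part of the lecture notes; for queue data the lag-1 value is typically far larger than the lag-200 value.

```python
def lag_autocorrelation(x, k):
    """Sample autocorrelation of the sequence x at lag k (0 < k < len(x))."""
    n = len(x)
    xbar = sum(x) / n
    # Covariance of the series with a k-shifted copy of itself, divided by
    # the (unnormalized) variance of the series.
    num = sum((x[i] - xbar) * (x[i + k] - xbar) for i in range(n - k))
    den = sum((xi - xbar) ** 2 for xi in x)
    return num / den

# Expected pattern for times in queue from a single long run:
# lag_autocorrelation(waits, 1) is close to 1, while
# lag_autocorrelation(waits, 200) is close to 0.
```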
Batch Means: Confidence Intervals from a Single Run

Suppose we divide the run into batches of data points. Then we take the average of the data points in each batch. For example, in the graph below, we have plotted time in queue over time, grouped the observations into batches, and found the average of each batch.

[Figure: time in queue plotted over time, showing a warm-up period (discarded) followed by Batch 1, Batch 2, Batch 3, and Batch 4.]
The idea is that if the batches are big enough, the batch averages will not be highly correlated. The batch averages will still be correlated somewhat: for example, the last observation in batch one will be correlated with the first observation in batch two. However, the idea is that the correlation "washes out". Thus, while there is correlation between batch averages, it will be small and can (hopefully) be ignored. Once we have the batch averages, we can treat them in the same fashion as if they were averages from independent replicates.
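Here is a short sketch of how the batch averages could then be turned into a confidence interval, exactly as if they were averages from independent replicates. It is an illustration, not the ARENA implementation; the choice of 20 batches and the t critical value of about 2.09 (19 degrees of freedom, 95%) are assumptions.

```python
import math
from statistics import mean, variance

def batch_means_ci(observations, num_batches=20, t_critical=2.09):
    """Point estimate and CI half-width from batch averages.

    `observations` is assumed to already exclude the warm-up period.
    """
    batch_size = len(observations) // num_batches
    batch_avgs = [
        mean(observations[b * batch_size:(b + 1) * batch_size])
        for b in range(num_batches)
    ]
    grand_mean = mean(batch_avgs)               # overall point estimate
    s2_batches = variance(batch_avgs)           # variance of the batch averages
    half_width = t_critical * math.sqrt(s2_batches / num_batches)
    return grand_mean, half_width
```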
Of course, we have to make sure that the batches are big enough so that there is little correlation between batch averages. How big is big enough? Before we answer this question, we need to know a little more about autocorrelation.
Review of Covariance and Correlation

Given two random variables X and Y with a joint distribution f_XY(x, y), we know how to find E(X), E(Y), V(X), and V(Y): we simply calculate them as before from the marginal distributions.
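For reference, the covariance and correlation that this review leads to have the standard textbook definitions below (stated as a reminder; they are not reproduced from the remaining pages of the notes):

```latex
% Standard definitions of covariance and correlation.
\operatorname{Cov}(X,Y) = E\!\left[(X-\mu_X)(Y-\mu_Y)\right] = E(XY) - E(X)\,E(Y),
\qquad
\operatorname{Corr}(X,Y) = \rho_{XY} = \frac{\operatorname{Cov}(X,Y)}{\sqrt{V(X)\,V(Y)}}.
```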