MS&E 223
Lecture Notes #11
Simulation
Efficiency-Improvement Techniques
Brad Null
Spring Quarter 2008-09
Page 1 of 9
Efficiency-Improvement Techniques
Ref: Law, Chapter 11; Handbook of Simulation, Ch. 10
1. Variance Reduction and Efficiency Improvement
The techniques that we will look at typically are designed to reduce the variance of the estimator for the
performance measure that we are trying to estimate via simulation. (Reduction of the variance leads to
narrower confidence intervals, and hence less computational effort is required to achieve a given level of
precision.) The reduction is measured relative to the variance obtained when using “straightforward
simulation.” For this reason these techniques are sometimes called
variance-reduction
methods.
One must be careful when evaluating these techniques: they are only worthwhile if the reduction in
computer effort due to the variance reduction outweighs the increase in computational effort needed to
execute the technique during the simulation, as well as the additional programming complexity.
For many of the methods that we will look at, it is obvious that the additional effort to implement the
variance reduction is small, so the technique is a clear win. How do we deal with more complicated
situations?
One fair way to compare two estimation methods is to assume a fixed computer budget of c time units,
and compare the confidence-interval width for the two methods at the end of the simulation.
Example: Suppose α = E[X] = E[Y]. Should we generate i.i.d. replicates of X or Y in order to estimate α?
Let c be the computer budget. Then the number of X-observations generated within budget c is

    N_X(c) = max{ n ≥ 0 : τ_X(1) + ... + τ_X(n) ≤ c },

where τ_X(i) is the (random) computer time required to generate X_i.
Assume (reasonably) that (X_1, τ_X(1)), (X_2, τ_X(2)), ... is a sequence of i.i.d. pairs. It can be shown that

    N_X(c)/c → λ_X with probability 1 as c → ∞, where λ_X = 1/E[τ_X(1)].
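As a quick numerical check of this law (a sketch, not from the notes), the code below simulates N_X(c) under the assumption that the generation times τ_X(i) are i.i.d. Exponential with mean 2, so that λ_X = 1/E[τ_X(1)] = 0.5. The function name, seed, and cost distribution are illustrative choices:

```python
import random

def n_obs_within_budget(c, mean_tau=2.0, seed=0):
    """Return N_X(c): the number of observations whose cumulative
    generation times tau_X(1) + ... + tau_X(n) stay within budget c.
    Costs are drawn i.i.d. Exponential with mean mean_tau (an assumption)."""
    rng = random.Random(seed)
    spent, n = 0.0, 0
    while True:
        tau = rng.expovariate(1.0 / mean_tau)  # cost of the next observation
        if spent + tau > c:
            return n
        spent += tau
        n += 1

# N_X(c)/c should settle near lambda_X = 1/2 as the budget c grows.
for c in [10.0, 1_000.0, 100_000.0]:
    print(c, n_obs_within_budget(c) / c)
```

For large c the printed ratio hovers near 0.5, in line with N_X(c)/c → λ_X.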
Now, note that

    α_X(c) = (1/N_X(c)) Σ_{i=1}^{N_X(c)} X_i     (set α_X(c) = 0 if N_X(c) = 0)
is the estimator for α (based on X) obtained after expending budget c. Since N_X(c) ≈ λ_X · c, it follows that

    α_X(c) − α = (1/N_X(c)) Σ_{i=1}^{N_X(c)} X_i − α
               ≈ (1/(λ_X c)) Σ_{i=1}^{λ_X c} X_i − α
               ≈ N(0,1) · sqrt( Var[X] / (λ_X c) )     (in distribution, by the CLT)
               = N(0,1) · sqrt( E[τ_X(1)] · Var[X] / c ).
Similarly, if α_Y(c) is the estimator based on i.i.d. replications of Y,

    α_Y(c) − α ≈ N(0,1) · sqrt( E[τ_Y(1)] · Var[Y] / c )     (in distribution).
Clearly, we should choose the estimator that
minimizes the product of the mean computer time per
observation and the variance per observation
. This conclusion holds more generally, so the inverse of
this product is a reasonable measure of efficiency.
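To make the fixed-budget comparison concrete, here is a small Monte Carlo sketch (not from the notes). Both estimators target α = 0; X is cheap but noisy (E[τ_X] = 1, Var[X] = 1), while Y costs four times as much per observation but has one-sixteenth the variance (E[τ_Y] = 4, Var[Y] = 1/16), so the efficiency measure 1/(E[τ]·Var) favors Y by a factor of about 4. The Gaussian distributions, exponential costs, and function names are illustrative assumptions:

```python
import random

def estimate_with_budget(sample, mean_tau, c, rng):
    """Run i.i.d. replications until the computer budget c is spent
    (costs drawn Exponential with mean mean_tau); return the sample mean."""
    spent, total, n = 0.0, 0.0, 0
    while True:
        tau = rng.expovariate(1.0 / mean_tau)
        if spent + tau > c:
            break
        spent += tau
        total += sample(rng)
        n += 1
    return total / n if n else 0.0

rng = random.Random(1)
c = 500.0
x = lambda r: r.gauss(0.0, 1.0)    # cheap, noisy:  E[tau]*Var = 1 * 1
y = lambda r: r.gauss(0.0, 0.25)   # costly, quiet: E[tau]*Var = 4 * 1/16
est_x = [estimate_with_budget(x, 1.0, c, rng) for _ in range(200)]
est_y = [estimate_with_budget(y, 4.0, c, rng) for _ in range(200)]
# True mean is 0, so the mean squared estimate is the estimator's variance.
var_x = sum(e * e for e in est_x) / len(est_x)
var_y = sum(e * e for e in est_y) / len(est_y)
print(var_x / var_y)  # roughly 4 in expectation, the efficiency ratio
```

The observed variance ratio tracks (E[τ_X]·Var[X]) / (E[τ_Y]·Var[Y]), illustrating why the inverse of this product works as an efficiency measure.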