MS&E 223
Supplemental Notes
Simulation
Making Decisions via Simulation
Brad Null
Spring Quarter 2008-09
Making Decisions via Simulation
Ref: Law, Chapter 10; Handbook of Simulation, Chapters 17-21; Haas, Sec. 6.3.6.
We give an introduction to some methods for selecting the "best" system design (or parameter setting for a given design), where the performance under each alternative is estimated via simulation. The methods presented are chosen for their simplicity; see the references for more details on exactly when they are applicable, and whether improved versions of the algorithms are available. Current understanding of the behavior of these algorithms is incomplete, and they should be applied with due caution.
1. Continuous Stochastic Optimization

Robbins-Monro Algorithm
Here we will consider the problem $\min_{\theta \in \Theta} f(\theta)$ where, for a given value of $\theta$, we are not able to evaluate $f(\theta)$ analytically or numerically, but must obtain (noisy) estimates of $f(\theta)$ using simulation. We will assume for now that the set of possible solutions $\Theta$ is uncountably infinite. In particular, suppose that $\Theta$ is an interval $[\underline{\theta}, \overline{\theta}]$ of real numbers. One approach to solving the problem $\min_{\theta \in \Theta} f(\theta)$ is to estimate $f'(\theta)$ using simulation and then use an iterative method to solve the equation $f'(\theta) = 0$. The most basic such method is called the Robbins-Monro Algorithm, and can be viewed as a stochastic version of Newton's method for solving nonlinear equations.
The basic iteration is

(*)    $\theta_{n+1} = \pi\left(\theta_n - \frac{a}{n}\,Z_n\right)$,

where $a > 0$ is a fixed parameter of the algorithm (the quantity $a/n$ is sometimes called the "gain"), $Z_n$ is an unbiased estimator of $f'(\theta_n)$, and

$\pi(\theta) = \begin{cases} \underline{\theta} & \text{if } \theta < \underline{\theta}; \\ \theta & \text{if } \underline{\theta} \le \theta \le \overline{\theta}; \\ \overline{\theta} & \text{if } \theta > \overline{\theta}. \end{cases}$
(The function $\pi(\cdot)$ projects the current parameter value onto the feasible set. Algorithms that iterate as described above are called stochastic approximation algorithms.) Denote by $\theta^*$ the global minimizer of $f$. If the only local minimizer of $f$ is $\theta^*$, then under very general conditions $\theta_n \to \theta^*$ as $n \to \infty$ with probability 1 (otherwise the algorithm can converge to some other local minimizer).
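As an illustration only (not from the notes), the following minimal Python sketch implements the projected iteration (*). The toy objective $f(\theta) = E[(\theta - X)^2]$ with $X \sim N(2, 1)$ is an assumption chosen so that $Z = 2(\theta - X)$ is an exactly unbiased estimator of $f'(\theta) = 2(\theta - 2)$; the true minimizer is $\theta^* = 2$.

    import random

    def robbins_monro(z, theta0, lo, hi, a=1.0, n_iters=10_000):
        """Projected Robbins-Monro iteration (*): theta_{n+1} = pi(theta_n - (a/n) Z_n).

        z(theta) must return an unbiased estimate of f'(theta); the
        projection pi clips theta back into the interval [lo, hi].
        """
        theta = theta0
        for n in range(1, n_iters + 1):
            theta = theta - (a / n) * z(theta)  # stochastic step with gain a/n
            theta = min(max(theta, lo), hi)     # projection pi onto [lo, hi]
        return theta

    # Toy problem (assumed for illustration): f(theta) = E[(theta - X)^2], X ~ N(2, 1),
    # so Z = 2*(theta - X) is an unbiased estimator of f'(theta).
    z = lambda theta: 2.0 * (theta - random.gauss(2.0, 1.0))
    print(robbins_monro(z, theta0=0.0, lo=-5.0, hi=5.0))  # close to theta* = 2

Note that the constant $a$ is a tuning choice: the $1/n$ decay of the gain is what makes the noise in $Z_n$ average out, while $a$ controls the step size early in the run.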
For large $n$, it can be shown that $\theta_n$ has approximately a normal distribution. Thus, if the procedure is run $m$ independent times (where $m$ = 5 to 10) with $n$ iterations for each run, generating i.i.d. replicates $\theta_{n,1}, \ldots, \theta_{n,m}$, then a point estimator for $\theta^*$ is

$\bar{\theta}_m = \frac{1}{m} \sum_{j=1}^{m} \theta_{n,j}$

and confidence intervals can be formed in the usual way, based on the Student-t distribution with $m - 1$ degrees of freedom.
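A sketch of this replication step, reusing the robbins_monro sketch above (again an illustration, not code from the notes); the hardcoded 2.776 is the upper 0.975 quantile of the Student-t distribution with $m - 1 = 4$ degrees of freedom, giving a 95% confidence interval for $m = 5$ runs:

    import math
    import random

    def replicate_ci(run_once, m=5, t_quantile=2.776):
        """Point estimate and t-based confidence interval for theta* from m
        independent runs; t_quantile must match m - 1 degrees of freedom."""
        reps = [run_once() for _ in range(m)]               # i.i.d. theta_{n,1}, ..., theta_{n,m}
        mean = sum(reps) / m                                # point estimator theta_bar_m
        var = sum((r - mean) ** 2 for r in reps) / (m - 1)  # sample variance
        half = t_quantile * math.sqrt(var / m)              # interval half-width
        return mean, (mean - half, mean + half)

    # Each run applies the robbins_monro sketch above to the same toy problem.
    run = lambda: robbins_monro(
        lambda theta: 2.0 * (theta - random.gauss(2.0, 1.0)),
        theta0=0.0, lo=-5.0, hi=5.0)
    print(replicate_ci(run, m=5))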
Remark