8 STOCHASTIC SIMULATION

Whereas in optimization we seek a set of parameters x that minimizes a cost or maximizes a reward function J(x), here we pose a related but different question. Given a system S, we wish to understand how variations in the defining parameters x lead to variations in the system output. We focus on the case where x is a set of random variables that can be considered unchanging: they are static. In the context of robotic systems, these unknown parameters could be masses, stiffnesses, or geometric attributes. How does the system behavior depend on variations in these physical parameters? Such a calculation is immensely useful because real systems have to be robust against modeling errors.

At the core of this question, the random parameters x_i in our discussion are described by distributions; for example, each could have a pdf p(x_i). If a variable is known to be normally or uniformly distributed, then of course it suffices to specify the mean and variance, but in the general case more information may be needed.

8.1 Monte Carlo Simulation

Suppose that we make N simulations, each time drawing the needed random parameters x_i from a random-number "black box" (about which we will give more details in the next section). We define the high-level output of our system S to be g(x). For simplicity, we will say that g(x) is a scalar. g(x) can be virtually any output of interest, for example: the value of one state at a given time after an impulsive input, or the integral over time of the trajectory of one of the outputs, with a given input. In what follows, we will drop the vector notation on x for clarity.

Let the estimator G of g(x) be defined as

\[
G = \frac{1}{N} \sum_{j=1}^{N} g(x_j).
\]

You recognize this as a straight average. Indeed, taking the expectation of both sides,

\[
E(G) = \frac{1}{N} \sum_{j=1}^{N} E\big(g(x_j)\big),
\]

it is clear that E(G) = E(g).
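The estimator above can be sketched in a few lines of code. This is an illustrative example, not from the notes: the choice of g(x) = x^2 with x drawn uniformly on (0, 1) is an assumption made here so that the true mean E(g) = 1/3 is known for comparison.

```python
import random

def monte_carlo_estimate(g, sample_x, N):
    """Monte Carlo estimator G = (1/N) * sum_{j=1}^{N} g(x_j).

    g        : scalar output function of the random parameter x
    sample_x : draws one sample of x from its distribution (the "black box")
    N        : number of simulations
    """
    return sum(g(sample_x()) for _ in range(N)) / N

# Illustrative setup (an assumption, not from the text):
# x ~ Uniform(0, 1) and g(x) = x**2, so E(g) = 1/3.
random.seed(0)
G = monte_carlo_estimate(lambda x: x * x, random.random, N=100_000)
print(G)  # close to 1/3 for large N
```

Any distribution and any scalar output function can be plugged in; the estimator itself never needs to know the form of g or of the underlying pdf, which is what makes the method so general.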
At the same time, however, we do not know E(g); we calculate G understanding that with a very large number of trials, G should approach E(g). Now let us look at the variance of the estimator. This conceptually results from an infinite number of estimator trials, each of which involves N evaluations of g according to the above definition. It is important to keep in mind that such a variance involves samples of the estimator (each involving N evaluations), not the underlying function g(x). We have

\[
\sigma^2(G)
= \sigma^2\!\left[ \frac{1}{N} \sum_{j=1}^{N} g(x_j) \right]
= \frac{1}{N^2} \, \sigma^2\!\left[ \sum_{j=1}^{N} g(x_j) \right]
= \frac{1}{N^2} \sum_{j=1}^{N} \sigma^2\big(g(x_j)\big)
= \frac{1}{N^2} \, N \sigma^2(g)
= \frac{1}{N} \, \sigma^2(g).
\]

This relation is key. The first equality is the definition of G; the second follows from the fact that \(\sigma^2(nx) = n^2 \sigma^2(x)\) if n is a constant. The third equality is true because \(\sigma^2(x + y) = \sigma^2(x) + \sigma^2(y)\), where x and y are independent random variables. The major result is that \(\sigma^2(G) = \sigma^2(g)\) if only one-sample trials (N = 1) are considered, but that \(\sigma^2(G) \to 0\) as \(N \to \infty\). Hence with a large...
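The 1/N scaling of the estimator variance can be checked numerically by running many independent estimator trials at several values of N and comparing the empirical variances. As before, the setup g(x) = x^2 with x ~ Uniform(0, 1) is an illustrative assumption; its true variance is sigma^2(g) = E(x^4) - E(x^2)^2 = 1/5 - 1/9 = 4/45, roughly 0.089.

```python
import random
import statistics

def estimator_trial(N):
    """One trial of the estimator G: the average of N evaluations of g.

    Illustrative assumption: x ~ Uniform(0, 1), g(x) = x**2.
    """
    return sum(random.random() ** 2 for _ in range(N)) / N

random.seed(1)
trials = 2000  # independent estimator trials used to estimate sigma^2(G)
for N in (1, 10, 100):
    var_G = statistics.variance([estimator_trial(N) for _ in range(trials)])
    # Each var_G should be close to sigma^2(g) / N, i.e. roughly 0.089 / N.
    print(N, var_G)
```

With N = 1 the sampled variance matches sigma^2(g) itself, and each tenfold increase in N cuts the estimator variance by roughly a factor of ten, as the derivation predicts.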
This note was uploaded on 11/29/2011 for the course CIVIL 1.00 taught by Professor Georgekocur during the Spring '05 term at MIT.