8 STOCHASTIC SIMULATION

Whereas in optimization we seek a set of parameters x to minimize a cost or to maximize a reward function J(x), here we pose a related but different question. Given a system S, we wish to understand how variations in the defining parameters x lead to variations in the system output. We will focus on the case where x is a set of random variables that can be considered unchanging: they are static. In the context of robotic systems, these unknown parameters could be masses, stiffnesses, or geometric attributes. How does the system behavior depend on variations in these physical parameters? Such a calculation is immensely useful, because real systems have to be robust against modeling errors.

At the core of this question, the random parameters x_i in our discussion are described by distributions; for example, each could have a pdf p(x_i). If a variable is known to be normally or uniformly distributed, then of course it suffices to specify the mean and variance, but in the general case more information may be needed.

8.1 Monte Carlo Simulation

Suppose that we make N simulations, each time drawing the needed random parameters x_i from a random-number black box (about which we will give more details in the next section). We define the high-level output of our system S to be g(x). For simplicity, we will say that g(x) is a scalar. g(x) can be virtually any output of interest, for example the value of one state at a given time after an impulsive input, or the integral over time of the trajectory of one of the outputs, with a given input. In what follows, we will drop the vector notation on x for clarity.

Let the estimator G of g(x) be defined as

    G = \frac{1}{N} \sum_{j=1}^{N} g(x_j).

You will recognize this as a straight average. Indeed, taking the expectation on both sides,

    E(G) = \frac{1}{N} \sum_{j=1}^{N} E[g(x_j)],

it is clear that E(G) = E(g). At the same time, however, we do not know E(g); we calculate G understanding that with a very large number of trials, G should approach E(g).

Now let's look at the variance of the estimator. This conceptually results from an infinite number of estimator trials, each one of which involves N evaluations of g according to the above definition. It is important to keep in mind that such a variance involves samples of the estimator (each involving N evaluations), not the underlying function g(x). We have

    \sigma^2(G) = \sigma^2\left( \frac{1}{N} \sum_{j=1}^{N} g(x_j) \right)
                = \frac{1}{N^2} \sigma^2\left( \sum_{j=1}^{N} g(x_j) \right)
                = \frac{1}{N^2} N \sigma^2(g)
                = \frac{1}{N} \sigma^2(g).

This relation is key. The first equality follows from the fact that σ²(nx) = n²σ²(x) if n is a constant. The second equality is true because σ²(x + y) = σ²(x) + σ²(y) when x and y are independent random variables, and each g(x_j) has the same variance σ²(g). The major result is that σ²(G) = σ²(g) if only one-sample trials (N = 1) are considered, but that σ²(G) → 0 as N → ∞. Hence with a large number of trials, the scatter of the estimator shrinks and G becomes a reliable estimate of E(g).
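To make the estimator concrete, here is a minimal sketch in Python. The system output g, the two normal parameter distributions, the seed, and the sample count N are all illustrative assumptions standing in for a real simulation of S; none of them comes from the notes.

```python
import numpy as np

# Hypothetical scalar system output g(x): a stand-in for any quantity
# of interest, e.g., one state's value at a fixed time after an impulse.
def g(x):
    return x[0] ** 2 + 0.5 * x[1]

rng = np.random.default_rng(seed=0)  # the "random-number black box"
N = 10_000

# Each trial draws the random parameters x = (x1, x2) and evaluates g.
# The means and standard deviations below are made up for illustration.
samples = np.array([g(rng.normal(loc=[1.0, 2.0], scale=[0.1, 0.3]))
                    for _ in range(N)])

# The estimator G is the straight average of the N evaluations of g.
G = samples.mean()
print(f"G = {G:.4f} (estimate of E(g) from N = {N} trials)")
```

Since E(G) = E(g), increasing N does not change what G estimates; it only tightens the scatter of G around E(g).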
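The 1/N variance law itself can be checked numerically. The sketch below, again with an arbitrary assumed g and a standard normal parameter, runs many independent estimator trials for several values of N and reports the variance of G across those trials; it should fall by roughly a factor of ten each time N grows tenfold.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def g(x):
    # Arbitrary illustrative output; any scalar function of x works here.
    return np.sin(x) + x ** 2

# For each N, run n_trials independent estimator trials (each trial is
# an average of N evaluations of g) and measure the variance of G
# across those trials: samples of the estimator, not of g itself.
n_trials = 2000
for N in (1, 10, 100, 1000):
    estimates = np.array([g(rng.normal(0.0, 1.0, size=N)).mean()
                          for _ in range(n_trials)])
    print(f"N = {N:4d}: var(G) = {estimates.var():.5f}")

# Expected pattern: var(G) ~ var(g) / N, so each printed value is about
# ten times smaller than the previous one, consistent with sigma^2(G) -> 0.
```

Note that N = 1 reproduces σ²(G) = σ²(g), the one-sample case named in the text.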