Variance Reduction

Variance reduction is concerned with developing alternative estimators that have lower variances. The idea is that if we have an estimator with smaller variance, then fewer replications are needed to build a confidence interval of a certain width. By using basic methods, it is often possible to reduce the variances of the estimators by large amounts. Sometimes, one has to exploit more information about the problem at hand in order to reduce the variances. Depending on the application, this effort can be justified.

1. Estimators with Smaller Variances

Assume that we wish to compute the area of the region A contained within the unit square. Let p_A be the area of this region. One method to estimate p_A is to generate points that are uniformly distributed over the unit square and compute the fraction of points falling in region A. Let X_1, ..., X_n be n points that are uniformly distributed over the unit square. Then,

\hat{p}_A = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{X_i \in A\}

is an estimator of p_A, where

\mathbf{1}\{X \in A\} = \begin{cases} 1 & \text{if } X \in A \\ 0 & \text{otherwise.} \end{cases}

Noting that

Var(\mathbf{1}\{X_i \in A\}) = E\{\mathbf{1}\{X_i \in A\}\} \, (1 - E\{\mathbf{1}\{X_i \in A\}\}) = p_A (1 - p_A),

this estimator has variance

Var(\hat{p}_A) = Var\Big(\frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{X_i \in A\}\Big) = \frac{1}{n^2} \sum_{i=1}^{n} Var(\mathbf{1}\{X_i \in A\}) = \frac{1}{n} \, p_A (1 - p_A).

Another approach is to generate points that are uniformly distributed over the square B, a smaller square of area 1/4 that contains A. Let Y_1, ..., Y_n be n points that are uniformly distributed over the latter square. Then,

\tilde{p}_A = \frac{1}{4n} \sum_{i=1}^{n} \mathbf{1}\{Y_i \in A\}

is another estimator of p_A. Noting that

Var(\mathbf{1}\{Y_i \in A\}) = E\{\mathbf{1}\{Y_i \in A\}\} \, (1 - E\{\mathbf{1}\{Y_i \in A\}\}) = 4 p_A (1 - 4 p_A),

this estimator has variance

Var(\tilde{p}_A) = Var\Big(\frac{1}{4n} \sum_{i=1}^{n} \mathbf{1}\{Y_i \in A\}\Big) = \frac{1}{16 n^2} \sum_{i=1}^{n} Var(\mathbf{1}\{Y_i \in A\}) = \frac{1}{16 n} \, 4 p_A (1 - 4 p_A) = \frac{1}{4n} \, p_A (1 - 4 p_A).

Since p_A \le 1/4, \tilde{p}_A has smaller variance than \hat{p}_A. Therefore, although both \hat{p}_A and \tilde{p}_A are legitimate estimators of p_A, using \tilde{p}_A will give more accurate estimates.

Finally, let Z_1, ..., Z_n be n points that are uniformly distributed over the circle C, a circle of area \pi/16 (radius 1/4) that contains A. Then,

\bar{p}_A = \frac{\pi}{16 n} \sum_{i=1}^{n} \mathbf{1}\{Z_i \in A\}

is another estimator of p_A. Noting that

Var(\mathbf{1}\{Z_i \in A\}) = E\{\mathbf{1}\{Z_i \in A\}\} \, (1 - E\{\mathbf{1}\{Z_i \in A\}\}) = \frac{16 p_A}{\pi} \Big(1 - \frac{16 p_A}{\pi}\Big),

this estimator has variance

Var(\bar{p}_A) = Var\Big(\frac{\pi}{16 n} \sum_{i=1}^{n} \mathbf{1}\{Z_i \in A\}\Big) = \frac{\pi^2}{256 n^2} \sum_{i=1}^{n} Var(\mathbf{1}\{Z_i \in A\}) = \frac{\pi}{16 n} \, p_A \Big(1 - \frac{16 p_A}{\pi}\Big) = \frac{1}{n} \, p_A \Big(\frac{\pi}{16} - p_A\Big).

Since p_A \le \pi/16 < 1/4, the estimator \bar{p}_A is even less variable than \hat{p}_A and \tilde{p}_A. However, is it a good idea to use the estimator \bar{p}_A? Note that in order to use \bar{p}_A, we need to generate points that are uniformly distributed over a circle. Without going into the details, the following algorithm returns a point that is uniformly distributed over the circle centered at (0, 0) and having radius a:

1. Generate U_1 ~ U[0, 1], U_2 ~ U[0, 1] and U_3 ~ U[0, 1].
2. Set R = a max{U_1, U_2}, \theta = 2\pi U_3.
3. Return (R cos\theta, R sin\theta).

This algorithm has to make cosine and sine computations, which take extra computational effort. In a given amount of computational time, we will be able to generate more points that are uniformly distributed over a square than over a circle. Therefore, even though \bar{p}_A has smaller variance than \tilde{p}_A, it may be more advantageous to use \tilde{p}_A. When applying variance reduction techniques, one should consider the tradeoff between the reduction in the variance and the extra computational effort introduced by the variance reduction technique in question.
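The comparison of the three estimators is easy to check numerically. The following sketch (Python; not part of the original notes) does so under illustrative assumptions: A is taken to be a small disk, B = [0, 1/2]^2, and C is the circle of radius 1/4 centered at (1/4, 1/4); the notes only pin down the areas of B and C, so the exact shapes and positions here are assumptions.

import math
import random

CENTER = (0.25, 0.25)   # assumed common center of region A, square B and circle C
R_A = 0.15              # assumed: A is a disk of radius 0.15, so p_A = pi*0.15^2 <= pi/16

def in_A(x, y):
    # Indicator of the assumed region A.
    return (x - CENTER[0]) ** 2 + (y - CENTER[1]) ** 2 <= R_A ** 2

def uniform_point_in_circle(a):
    # Uniform point over the circle of radius a centered at (0, 0), using the
    # algorithm from the notes: R = a * max(U1, U2), theta = 2*pi*U3.
    r = a * max(random.random(), random.random())
    theta = 2.0 * math.pi * random.random()
    return r * math.cos(theta), r * math.sin(theta)

def three_estimators(n):
    # hat p_A: points uniform over the unit square.
    hat = sum(in_A(random.random(), random.random()) for _ in range(n)) / n
    # tilde p_A: points uniform over B = [0, 1/2]^2, scaled by its area 1/4.
    tilde = sum(in_A(0.5 * random.random(), 0.5 * random.random())
                for _ in range(n)) / (4.0 * n)
    # bar p_A: points uniform over the circle C (area pi/16), scaled accordingly.
    hits = 0
    for _ in range(n):
        x, y = uniform_point_in_circle(0.25)
        hits += in_A(x + CENTER[0], y + CENTER[1])
    bar = math.pi * hits / (16.0 * n)
    return hat, tilde, bar

if __name__ == "__main__":
    reps, n = 200, 1000
    samples = [three_estimators(n) for _ in range(reps)]
    for name, vals in zip(("unit square", "square B", "circle C"), zip(*samples)):
        mean = sum(vals) / reps
        var = sum((v - mean) ** 2 for v in vals) / (reps - 1)
        print(f"{name:12s} mean = {mean:.4f}  sample variance = {var:.2e}")
    print(f"true p_A     = {math.pi * R_A ** 2:.4f}")

The sample variances shrink in the order predicted by the formulas above, but the circle-based estimator also costs the most per point, which is exactly the tradeoff discussed in the text.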
2. Common Random Numbers

We cover common random numbers in another part of the course, where we talk about comparison of alternative systems.

3. Antithetic Variables

The method of antithetic variables takes advantage of the following observation. Suppose that X and X' are identically distributed random variables. Then

Var\Big(\frac{X + X'}{2}\Big) = \frac{1}{4} Var(X) + \frac{1}{2} Cov(X, X') + \frac{1}{4} Var(X') = \frac{1}{2} \{Var(X) + Cov(X, X')\}.

If X and X' are independent, then

Var\Big(\frac{X + X'}{2}\Big) = \frac{1}{2} Var(X).

On the other hand, if X and X' are negatively correlated, then

Var\Big(\frac{X + X'}{2}\Big) < \frac{1}{2} Var(X).

Therefore, the goal of antithetic variables is to make the simulation model give two estimates of the performance measure, X and X', such that Cov(X, X') < 0.

As an example, consider the problem of computing E{f(U)}, where U is a uniformly distributed random variable over [0, 1] and f is a real-valued function. Note that many Monte Carlo simulation models can be cast in this way. For example, if we want to compute \int_a^b g(x) \, dx, one can write this integral as

(b - a) \int_a^b g(x) \, \frac{1}{b - a} \, dx = (b - a) \, E\{g(Z)\} = (b - a) \, E\{g(a + (b - a) U)\} = E\{f(U)\},

where Z and U are uniformly distributed random variables over [a, b] and [0, 1] respectively, and f is defined as f(u) = (b - a) g(a + (b - a) u).

If U_1, ..., U_{2n} are independent uniformly distributed random variables over [0, 1], then an estimator of E{f(U)} is

\alpha_n = \frac{1}{2n} \sum_{i=1}^{2n} f(U_i).

Whenever U is large, 1 - U is small. Therefore, if f is monotone, then f(U) is negatively correlated with f(1 - U). The idea of antithetic variables suggests the use of the estimator

\alpha_n^a = \frac{1}{2n} \sum_{i=1}^{n} \big[ f(U_i) + f(1 - U_i) \big].

Note that

Var(\alpha_n) = Var\Big(\frac{1}{2n} \sum_{i=1}^{2n} f(U_i)\Big) = \frac{1}{4 n^2} \sum_{i=1}^{2n} Var(f(U_i)) = \frac{1}{2n} Var(f(U_1)),

Var(\alpha_n^a) = Var\Big(\frac{1}{2n} \sum_{i=1}^{n} \big[ f(U_i) + f(1 - U_i) \big]\Big) = \frac{1}{4 n^2} \sum_{i=1}^{n} \big[ Var(f(U_i)) + Var(f(1 - U_i)) + 2 \, Cov(f(U_i), f(1 - U_i)) \big] = \frac{1}{2n} Var(f(U_1)) + \frac{1}{2n} Cov(f(U_1), f(1 - U_1)).

Since Cov(f(U_i), f(1 - U_i)) \le 0, we have Var(\alpha_n^a) \le Var(\alpha_n). Note that, in order for the method of antithetic variables to work, it is crucial that the performance measure of interest be monotone (increasing or decreasing) in the antithetic variable.
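Here is a small sketch (Python; not part of the original notes) comparing \alpha_n and \alpha_n^a. The integrand g(x) = e^x on [a, b] = [0, 1] is an illustrative assumption; any monotone g behaves the same way. Both estimators use the same number of evaluations of f, so the comparison is fair.

import math
import random

A, B = 0.0, 1.0

def f(u):
    # f(u) = (b - a) * g(a + (b - a) * u) with the assumed g(x) = exp(x).
    return (B - A) * math.exp(A + (B - A) * u)

def plain(n):
    # alpha_n: average of f over 2n independent uniforms.
    return sum(f(random.random()) for _ in range(2 * n)) / (2 * n)

def antithetic(n):
    # alpha_n^a: average of f(U_i) + f(1 - U_i) over n uniforms.
    total = 0.0
    for _ in range(n):
        u = random.random()
        total += f(u) + f(1.0 - u)
    return total / (2 * n)

if __name__ == "__main__":
    n, reps = 500, 1000
    for name, est in (("plain", plain), ("antithetic", antithetic)):
        vals = [est(n) for _ in range(reps)]
        mean = sum(vals) / reps
        var = sum((v - mean) ** 2 for v in vals) / (reps - 1)
        print(f"{name:10s} mean = {mean:.5f}  sample variance = {var:.3e}")
    print(f"exact integral = {math.e - 1:.5f}")

Because e^x is monotone, f(U) and f(1 - U) are strongly negatively correlated here, and the antithetic estimator shows a much smaller sample variance for the same amount of work.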
Example 1. The following network is used to represent the precedence relationships among the activities in a project. For example, activity 8 can start only after activities 5 and 6 have been completed. The project is said to have been completed when all the activities are completed.

[Figure: project network with a source node and a sink node; the arcs correspond to activities 1 through 9.]

Now assume that the independent random variables X_1, ..., X_9 represent the activity durations. Then, the length of the project is

C(X_1, ..., X_9) = X_9 + \max\big\{ X_1 + X_2 + X_4 + X_7, \; \max\{X_1 + X_2 + X_5, \; X_1 + X_3 + X_6\} + X_8 \big\}.

We want to compute E{C(X_1, ..., X_9)}. Since C(·, ..., ·) is nondecreasing in each of its arguments, this appears to be a good candidate problem for antithetic variables. In order to use antithetic variables, we need a way to generate samples X_1, ..., X_9 and \bar{X}_1, ..., \bar{X}_9 of X_1, ..., X_9 such that whenever C(X_1, ..., X_9) is large, C(\bar{X}_1, ..., \bar{X}_9) is small. Assume that the cumulative distribution functions of the random variables X_1, ..., X_9 are F_1(·), ..., F_9(·). Let U_1, ..., U_9 be uniformly distributed random variables over [0, 1]. Then, we can generate a sample of X_i by

X_i = F_i^{-1}(U_i).

Since 1 - U_i is uniformly distributed over [0, 1], F_i^{-1}(1 - U_i) can also be used to generate a sample of X_i. Furthermore, note that F_i^{-1}(·) is a nondecreasing function. Therefore, whenever F_i^{-1}(U_i) is large, F_i^{-1}(1 - U_i) is small.

Let {(U_1^n, ..., U_9^n) : n = 1, 2, ...} be 9-dimensional vectors of independent uniformly distributed random variables over [0, 1]. Then, C(F_1^{-1}(U_1^n), ..., F_9^{-1}(U_9^n)) and C(F_1^{-1}(1 - U_1^n), ..., F_9^{-1}(1 - U_9^n)) are expected to be negatively correlated. Thus, the estimator

\frac{1}{2N} \sum_{n=1}^{N} \big\{ C(F_1^{-1}(U_1^n), ..., F_9^{-1}(U_9^n)) + C(F_1^{-1}(1 - U_1^n), ..., F_9^{-1}(1 - U_9^n)) \big\}

should give smaller variance than the estimator

\frac{1}{2N} \sum_{n=1}^{2N} C(F_1^{-1}(U_1^n), ..., F_9^{-1}(U_9^n)).

In most applications of antithetic variables, the underlying random variables must be generated by inversion. Given that inversion is sometimes expensive, this can limit the applicability of antithetic variables. But, as the next example shows, inversion is not always needed.

Example 2. Assume that the price of an asset remains constant over the time intervals [0, \Delta), [\Delta, 2\Delta), ..., [(N - 1)\Delta, N\Delta), and only changes at the time points 0, \Delta, 2\Delta, ..., N\Delta. Let Y_n be the price of the asset over the time interval [n\Delta, (n + 1)\Delta). Assume that the price of the asset evolves as

Y_{n+1} = Y_n \, e^{\mu \Delta + \sigma \sqrt{\Delta} \, \xi_{n+1}},

where \xi_{n+1} is a standard normal random variable. Also, assume that \xi_1, ..., \xi_N are independent.

A European call option allows the holder of the option to buy the asset at a predetermined price (the strike price) at the expiration time of the contract. If N\Delta is the expiration time and K is the strike price, then the value of the call option at the expiration time is [Y_N - K]^+. Thus, we should price the call option at e^{-r N \Delta} \, E\{[Y_N - K]^+\}, where r is the rate of return on a riskless asset.

An Asian option pays out on the average value of the asset up to the expiration date. Thus, the expectation that needs to be computed is

E\Big\{ \Big[ \frac{1}{N} \sum_{n=0}^{N-1} Y_n - K \Big]^+ \Big\}.

There is no closed-form expression for this expectation and it must be computed numerically. Simulation is one approach. How do we apply antithetic variables here? Noting that

Y_n = Y_0 \, e^{\mu n \Delta + \sigma \sqrt{\Delta} (\xi_1 + \cdots + \xi_n)},

it is not hard to see that

C(\xi_1, ..., \xi_N) = \Big[ \frac{1}{N} \sum_{n=0}^{N-1} Y_n - K \Big]^+ = \Big[ \frac{1}{N} \sum_{n=0}^{N-1} Y_0 \, e^{\mu n \Delta + \sigma \sqrt{\Delta} (\xi_1 + \cdots + \xi_n)} - K \Big]^+

is monotone in \xi_1, ..., \xi_N. Let F^{-1}(·) be the inverse of the cumulative distribution function of the standard normal distribution. Let U_1, ..., U_N be N independent uniformly distributed random variables over [0, 1]. Then, C(F^{-1}(U_1), ..., F^{-1}(U_N)) and C(F^{-1}(1 - U_1), ..., F^{-1}(1 - U_N)) are expected to be negatively correlated, and we can apply antithetic variables as before. While this approach is viable, it requires that the normal random variables be generated by using the inversion method. As we saw earlier, inversion is not the method of choice for generating samples of normally distributed random variables. But observe that

F^{-1}(U) = \xi \iff F^{-1}(1 - U) = -\xi.

So, we need not use the inversion method to generate the antithetic variables. Instead, we can use any method to generate \xi_1, ..., \xi_N. Then, C(\xi_1, ..., \xi_N) and C(-\xi_1, ..., -\xi_N) are identically distributed and negatively correlated.
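A minimal sketch (Python; not part of the original notes) of this last point: the normals \xi_1, ..., \xi_N are generated directly with random.gauss, i.e., without inversion, and their negatives are reused for the antithetic path. All parameter values (Y_0, \mu, \sigma, \Delta, N, K, r) are illustrative assumptions, and the payoff follows the averaging convention reconstructed above.

import math
import random

# Illustrative / assumed parameter values.
Y0, MU, SIGMA, DELTA, N, K, R = 100.0, 0.05, 0.2, 1.0 / 52, 52, 100.0, 0.05

def discounted_payoff(xis):
    # e^{-r N Delta} [ (1/N) sum_{n=0}^{N-1} Y_n - K ]^+ for one price path.
    y, total = Y0, 0.0
    for xi in xis:
        total += y                          # accumulates Y_0, ..., Y_{N-1}
        y *= math.exp(MU * DELTA + SIGMA * math.sqrt(DELTA) * xi)
    return math.exp(-R * N * DELTA) * max(total / N - K, 0.0)

def plain(n_paths):
    # Independent paths only.
    return sum(discounted_payoff([random.gauss(0.0, 1.0) for _ in range(N)])
               for _ in range(n_paths)) / n_paths

def antithetic(n_pairs):
    # Each vector of normals is reused with flipped signs: C(xi) and C(-xi).
    total = 0.0
    for _ in range(n_pairs):
        xis = [random.gauss(0.0, 1.0) for _ in range(N)]
        total += discounted_payoff(xis) + discounted_payoff([-x for x in xis])
    return total / (2 * n_pairs)

if __name__ == "__main__":
    reps = 100
    plain_vals = [plain(400) for _ in range(reps)]        # 400 payoff evaluations each
    anti_vals = [antithetic(200) for _ in range(reps)]    # 200 pairs = 400 evaluations
    for name, vals in (("plain", plain_vals), ("antithetic", anti_vals)):
        m = sum(vals) / reps
        v = sum((x - m) ** 2 for x in vals) / (reps - 1)
        print(f"{name:10s} estimate = {m:.4f}  sample variance = {v:.3e}")

Because the payoff is monotone in each \xi_n, the paths driven by (\xi_1, ..., \xi_N) and (-\xi_1, ..., -\xi_N) give negatively correlated payoffs, and the antithetic estimator shows a smaller sample variance for the same number of payoff evaluations.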