Rare Event Sampling
Jay Taylor (ASU) — APM 530, Lecture 11, Fall 2010

Importance Sampling

Statement of the Problem

Suppose that X is an (E, d)-valued random variable with distribution μ and that we need to calculate the expectation

    E[f(X)] = ∫_E f(x) μ(dx).

Unfortunately, analytical and numerical evaluation of this integral may be infeasible for any of the following reasons:
- E is high-dimensional;
- μ is known explicitly, but concentrated on a set with complicated geometry;
- μ is only known implicitly.

Monte Carlo Estimation

In these cases it may still be possible to estimate the integral using a Monte Carlo method. The most basic estimate is given by

    E[f(X)] ≈ (1/N) Σ_{i=1}^{N} f(X_i) ≡ I_N[f],

where X_1, …, X_N are independent, identically distributed random variables with the same distribution as X. By the strong law of large numbers (SLLN), I_N[f] → E[f(X)] almost surely whenever the expectation exists. For this method to work, we need to be able to generate IID samples of X.
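To make the estimator concrete, here is a minimal sketch (my own example, not from the lecture) estimating E[X²] = 1 for X ~ N(0, 1):

```python
import random

def mc_estimate(f, sample, n=100_000, seed=0):
    """Basic Monte Carlo estimate I_N[f] = (1/N) * sum_i f(X_i)."""
    rng = random.Random(seed)
    return sum(f(sample(rng)) for _ in range(n)) / n

# Example: E[X^2] = 1 for X ~ N(0, 1).
est = mc_estimate(lambda x: x * x, lambda rng: rng.gauss(0.0, 1.0))
```

With N = 100,000 samples the estimate typically lands within a few hundredths of the exact value 1.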
Although the size of the approximation error is random, we can often use the Central Limit Theorem (CLT) to estimate an upper bound. If σ² = Var(f(X)) < ∞, then the CLT implies that the difference I_N[f] − E[f(X)] is asymptotically normal when N is large:

    I_N[f] − E[f(X)] ≈ N(0, σ²/N).

If ‖f‖_∞ = sup_x |f(x)| < ∞, then σ² ≤ ‖f‖_∞², and we can conservatively estimate the distribution of the error as

    N(0, ‖f‖_∞² / N).

Importance Sampling

Importance sampling is an important alternative to direct MC estimation. Suppose that ν is a probability distribution on E and that μ is absolutely continuous with respect to ν, i.e., μ(A) = 0 whenever ν(A) = 0. Under this condition, the Radon–Nikodym theorem asserts the existence of an integrable function h ≥ 0 such that for every measurable set F ⊆ E,

    μ(F) = ∫_F h(x) ν(dx) = ∫_F (dμ/dν)(x) ν(dx).

The function h = dμ/dν is the Radon–Nikodym derivative of μ with respect to ν.
By writing the expectation as an integral with respect to ν, we obtain a new estimator for the expectation:

    E[f(X)] = ∫_E f(x) μ(dx) = ∫_E f(x) h(x) ν(dx) ≈ (1/N) Σ_{i=1}^{N} f(Y_i) h(Y_i) ≡ I_N[f; ν],

where:
- Y_1, …, Y_N are IID random variables with distribution ν;
- I_N[f; ν] → E[f(X)] by the SLLN.

The quantities h(Y_i) are called the importance sampling weights.

We can still use the CLT to estimate the error of the IS estimate:

    I_N[f; ν] − E[f(X)] ≈ N(0, σ_ν²/N),  where σ_ν² = Var_ν(f(Y) h(Y)).

In particular, if f > 0 and we take the proposal ν* with Radon–Nikodym derivative

    h*(y) = (dμ/dν*)(y) = E[f(X)] / f(y)

in L¹(ν*), then

    σ²_{ν*} = Var_{ν*}( f(Y) h*(Y) ) = Var_{ν*}( E[f(X)] ) = 0.

It follows from this last calculation that the distribution

    ν*(dx) = (1/h*(x)) μ(dx) = f(x) μ(dx) / E[f(X)]

is the optimal proposal distribution for this particular estimation problem. Since ν*(dx) depends on E[f(X)], the optimal proposal distribution is unknown whenever the expectation is unknown. However, it is sometimes possible to design an effective proposal distribution by mimicking known features of ν*.

Markov Chain Monte Carlo

Notice that the estimate

    E[f(X)] ≈ (1/N) Σ_{i=1}^{N} f(X_i)

is valid even if the X_i's are not independent, provided that:
- the marginal distribution of each X_i is μ;
- X_i and X_j are 'nearly' independent when |j − i| ≫ 1.

In particular, if X_1, X_2, … are sampled from a stationary Markov chain with initial distribution μ and (say) exponential decay of correlations, then the above estimate may be good. This observation provides the justification for MCMC methods.
The Metropolis–Hastings Algorithm

The Metropolis–Hastings algorithm provides one recipe for constructing a Markov chain with μ as its stationary distribution. Suppose that we are given the following ingredients:
- λ(dx) is a reference measure on E; e.g., λ could be a counting measure or a restriction of Lebesgue measure;
- μ(dx) = π(x) λ(dx) is absolutely continuous with respect to λ;
- {Q(x; y) λ(dy) : x ∈ E} is a family of probability measures on E;
- U_1, U_2, … is a sequence of independent random variables, distributed uniformly on (0, 1).

The family {Q(x; y) : x, y ∈ E} is known as the proposal density.
Given that X_n = x, the next value of the Markov chain is determined in three steps:

Step 1: Sample Y_{n+1} from the distribution Q(x; y) λ(dy).
Step 2: Calculate the acceptance probability

    α(x, y) = min( π(y) Q(y; x) / (π(x) Q(x; y)), 1 ).

Step 3: If U_{n+1} < α(x, y), then set X_{n+1} = Y_{n+1}; otherwise, set X_{n+1} = X_n.

Claim: μ is a stationary distribution of the chain (X_n; n ≥ 1).

Proof: We need to show that if X_n has distribution π(x) λ(dx), then so does X_{n+1}. We first observe that the transition probability of the chain is

    P(x; dy) = Q(x; y) α(x; y) λ(dy)  for y ≠ x,

with the remaining mass held at x:

    P(x; {x}) = ∫_E Q(x; z) (1 − α(x; z)) λ(dz).

For the following calculation, we will also need the set

    E_y = {x ∈ E : α(x; y) = 1} = {x ∈ E : π(y) Q(y; x) ≥ π(x) Q(x; y)}.

Then

    ∫_E π(x) λ(dx) P(x; dy)
      = [∫_{E_y} π(x) Q(x; y) λ(dx)] λ(dy)                                         (moves into dy with α = 1)
      + [∫_{E_y^c} π(x) Q(x; y) · (π(y) Q(y; x)) / (π(x) Q(x; y)) λ(dx)] λ(dy)     (moves into dy with α < 1)
      + π(y) λ(dy) ∫_{E_y} Q(y; x) ( 1 − π(x) Q(x; y) / (π(y) Q(y; x)) ) λ(dx)     (rejected proposals from y)

      = [∫_{E_y} π(x) Q(x; y) λ(dx)] λ(dy)
      + [∫_{E_y^c} π(y) Q(y; x) λ(dx)] λ(dy)
      + [∫_{E_y} ( π(y) Q(y; x) − π(x) Q(x; y) ) λ(dx)] λ(dy)

      = [∫_{E_y} π(y) Q(y; x) λ(dx) + ∫_{E_y^c} π(y) Q(y; x) λ(dx)] λ(dy)

      = π(y) λ(dy) ∫_E Q(y; x) λ(dx)

      = π(y) λ(dy).  ∎

The Proposal Distribution

Choosing a suitable proposal distribution is often difficult. However, the following considerations are important:
- It should be easy to sample from Q(x; dy);
- Given any two sets A, B ⊆ E with μ(A) > 0 and μ(B) > 0, it should be possible to go from A to B in finitely many moves;
- Q should be chosen so that the chain has good mixing properties, i.e., the chain should rapidly approach equilibrium when started from any x ∈ E:

      P{X_n ∈ dy | X_0 = x} → μ(dy).

One practical guideline is that the acceptance probability should be on the order of 0.2–0.6.
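The three steps above can be sketched as follows (my own minimal implementation; with a symmetric Gaussian proposal, Q cancels from the acceptance ratio):

```python
import math
import random

def metropolis_hastings(log_pi, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis: the proposal Q(x; y) = N(y; x, step^2) is
    symmetric, so alpha(x, y) = min(pi(y)/pi(x), 1).  Working with log pi
    avoids overflow and only requires pi up to a normalizing constant."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        # Accept with probability min(1, pi(y)/pi(x)), tested in log space.
        if math.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y
        chain.append(x)
    return chain

# Example: sample a standard normal target, known only up to a constant.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=50_000)
mean = sum(chain) / len(chain)
var = sum((v - mean) ** 2 for v in chain) / len(chain)
```

Note that `log_pi` never needs the normalizing constant of π, which is what makes the method usable when μ is only known implicitly.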
Initialization and Burn-In

Because the early values of an MCMC simulation are correlated with the initial value, these values are often omitted from estimates of expectations:

    E[f(X)] ≈ (1/(N − B)) Σ_{n=B+1}^{N} f(X_n).

The initial steps X_1, …, X_B are called the burn-in. The real goal of the burn-in period is to initiate the chain at a value x ∈ E that is likely under μ. If this can be done by hand, then burn-in is unnecessary.

Effective Sample Size

Because the N consecutive values generated by an MCMC simulation are correlated, they usually provide less accurate estimates of E[f(X)] than would N independent samples. One measure of the accuracy of this estimate is the effective sample size (ESS):

    ESS = N / (1 + 2 Σ_{k=1}^{∞} ρ_k),

where ρ_k is the k-step autocorrelation of (f(X_n); n ≥ 1). ESS can be estimated using the sample autocorrelation function.

Example: Sampling from the Boltzmann Distribution

Suppose that E is the conformational space of a molecule with potential energy function V and that our goal is to sample from the Boltzmann distribution:

    π(X) dX = (1/Z) e^{−β V(X)} dX,

where β = 1/(k_B T) and Z is the partition function

    Z = ∫_E e^{−β V(X)} dX.

In practice, Z is almost never known explicitly, which makes it impossible to directly sample from π.
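A sketch (my own example, not from the lecture) showing that Metropolis sampling of a Boltzmann density never needs Z: a one-dimensional double-well potential V(x) = (x² − 1)² at β = 2, together with an ESS estimate computed from the sample autocorrelation function as described above:

```python
import math
import random

def boltzmann_metropolis(V, beta, x0, n_steps, step=0.5, seed=0):
    """Metropolis sampling of pi(x) ∝ exp(-beta * V(x)); Z cancels in the
    acceptance ratio exp(-beta * (V(y) - V(x)))."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_steps):
        y = x + rng.uniform(-step, step)          # symmetric proposal
        if math.log(rng.random()) < -beta * (V(y) - V(x)):
            x = y
        chain.append(x)
    return chain

def effective_sample_size(xs, max_lag=200):
    """ESS = N / (1 + 2 * sum_k rho_k), with the sum truncated at the first
    non-positive sample autocovariance (a common simple cutoff)."""
    n = len(xs)
    m = sum(xs) / n
    c0 = sum((v - m) ** 2 for v in xs) / n
    s = 0.0
    for k in range(1, max_lag):
        ck = sum((xs[i] - m) * (xs[i + k] - m) for i in range(n - k)) / n
        if ck <= 0:
            break
        s += ck / c0
    return n / (1 + 2 * s)

V = lambda x: (x * x - 1.0) ** 2                  # wells at x = ±1
chain = boltzmann_metropolis(V, beta=2.0, x0=1.0, n_steps=20_000)
ess = effective_sample_size(chain)
```

Because successive states are correlated, `ess` comes out well below the nominal 20,000 samples, quantifying the loss of accuracy discussed above.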
On the other hand, π can be sampled using MCMC. For example, if Q is a symmetric proposal distribution, i.e., Q(x, y) = Q(y, x), then the acceptance probability is given by

    α(X; Y) = min( π(Y) Q(Y; X) / (π(X) Q(X; Y)), 1 )
            = min( exp(−β V(Y)) / exp(−β V(X)), 1 )
            = min( exp(−β (V(Y) − V(X))), 1 ),

which does not depend on Z.

Sampling in Molecular Dynamics Simulations

In molecular dynamics (MD) simulations, we are often interested in statistical properties of the trajectories of a molecule. In this case, E is a function space:

    E = C([0, T], U × R^{3N}),

where U ⊆ R^{3N} is the set of allowable conformations of the molecule and each element (q, p) ∈ E is a path

    (q, p) : [0, T] → U × R^{3N}

which specifies the location and momentum of each of the N atoms in the molecule at each time t ∈ [0, T].
The distribution μ on E depends on the source of stochasticity:

- In deterministic MD simulations, we may be interested in allowing the initial conditions to be random; e.g., the initial coordinates may be fixed, but the initial velocities may be sampled from the Maxwell distribution ρ(dp). In this case, μ(dx) has the form

      μ(dx) = ρ(dp) δ_{ {x(0) = p, x(t) = φ_t(x(0))} }(dx),

  i.e., given its initial condition, the path is a point mass on the trajectory of the deterministic flow φ_t.

- In Langevin dynamics, both the initial conditions and the subsequent evolution of the trajectory are random.

In practice, the distribution function is never known explicitly, but it can be approximated by using the Markov property:

    μ(dx) ≈ μ(dx(0)) ∏_{n=1}^{S} P{ x(t_{n+1}) | x(t_n) },

where (x(t_n); 1 ≤ n ≤ S) is some discretization of the sample path.

Effective proposal distributions for E usually mix several types of 'moves' that depend on the current path x:

- Shooting moves: A random time τ ∈ [0, T] is sampled from a distribution Q_τ(x; dτ), and then a random velocity v is sampled from a distribution Q_v(x, τ; dv). The new path is obtained by propagating forwards and backwards from x(τ) with the new velocity v at time τ.

- Shifting moves: A random time δ ∈ [−T, T] is sampled from a distribution Q_δ(x; dδ), and then the start of x is shifted from 0 to δ. If δ < 0, then the path is truncated on the left and propagated forward over [T + δ, T]. If δ > 0, then the path is truncated on the right and propagated backwards over [0, δ].

Transition Path Sampling

In transition path sampling, we are interested in the statistics of paths that move between basins surrounding different stable conformations. Transit between basins usually requires movement over a high energy barrier. Realistic energy surfaces are very rugged, but barriers less than k_B T in height are effectively smoothed out by thermal noise. (Bolhuis et al. (2002), Figure 2.)
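A shooting move can be sketched as follows (illustrative only; the `euler` propagator with a harmonic force is my own stand-in for a real MD integrator, and a uniform distribution stands in for Q_τ):

```python
import random

def shoot(path, dt, propagate, rng):
    """One shooting move: pick a random time index, redraw the velocity
    there, then regenerate the path forwards and backwards from that point
    with the supplied one-step propagator."""
    i = rng.randrange(len(path))
    q, _ = path[i]
    v = rng.gauss(0.0, 1.0)              # fresh Maxwellian velocity (unit mass/kT)
    new = [None] * len(path)
    new[i] = (q, v)
    for j in range(i + 1, len(path)):    # propagate forwards in time
        new[j] = propagate(new[j - 1], dt, rng)
    for j in range(i - 1, -1, -1):       # propagate backwards (negative step)
        new[j] = propagate(new[j + 1], -dt, rng)
    return new

def euler(state, dt, rng):
    """Toy deterministic Euler step for a harmonic force F(q) = -q."""
    q, v = state
    return (q + v * dt, v - q * dt)

rng = random.Random(0)
path = [(0.1 * t, 0.0) for t in range(100)]      # some current path (q, v)
new_path = shoot(path, dt=0.01, propagate=euler, rng=rng)
```

In an actual TPS implementation the propagator would be the MD integrator itself, and the move would be accepted or rejected with a Metropolis criterion against the path measure.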
Transition paths between basins A and B have two important features:

1. Movement between basins is a rare event, in either of the following senses:
   - In deterministic MD, most initial configurations beginning in A will never enter B, although escape may be possible for sufficiently large velocities.
   - In Langevin dynamics, the expected time to leave A may be orders of magnitude larger than the expected transit time.
2. Conditional on escape, the transit time to B upon exiting A is often very short.

While the second feature suggests that it should be feasible to generate ensembles of transition paths, the first feature means that this cannot be done by simple rejection sampling of paths beginning in A.

A more tractable method for sampling transition paths is to start with a transition path and then modify it using an MCMC scheme with a conditional path measure:

    μ_{A,B}(dx) = (1/C) μ(dx) 1( x(0) ∈ A, x(T) ∈ B ).

The normalizing constant C is the probability that a randomly sampled path is in A at time 0 and in B at time T. Notice that C is not needed for Metropolis–Hastings sampling.

- This measure can be modified to allow T to vary.
- We need at least one transition path to initiate this method.
- Multiple transition paths may be required if the TP ensemble has a complex geometry.

Because it may be difficult to sample even one realistic transition path from scratch, several authors have developed sampling schemes that gradually deform an ensemble of paths into an ensemble of transition paths. These are based on the concept of an order parameter.

Definition: We say that χ : U → R is an order parameter for two basins A and B if there are numbers λ_0 < λ_1 < λ_2 < λ_3 such that

    A ⊆ χ^{−1}([λ_0, λ_1])  and  B ⊆ χ^{−1}([λ_2, λ_3]).

Although χ is only required to discriminate between A and B, there is some advantage if the inclusions are nearly identities as well. Statistical learning theory (e.g., neural networks) can be used to identify effective order parameters.
BCDG Method

One approach, due to Bolhuis, Chandler, Dellago and Geissler, chooses a sequence of sets H_0, H_1, …, H_N such that H_0 = A, H_N = B, and H_n and H_{n+1} overlap extensively.

- One first generates an ensemble of trajectories that both begin and terminate in A. Since A is attracting, this is relatively easy.
- Having generated an ensemble of trajectories that begin in A and terminate in H_n, one then uses MCMC to generate an ensemble of trajectories that start in A and end in H_{n+1}.
- The procedure stops when n = N, at which point one has an ensemble of transition paths.

Remark: There is a computational trade-off between the overlap of successive target sets H_n and the number of target sets needed.
Schematic Illustration of the BCDG Method

Here the target sets H_λ are indexed by a parameter λ ∈ [0, 1]. As λ increases, the trajectories are 'pulled' towards B. (Bolhuis et al. (2002), Figure 4.)
Bias Annealing

Bias annealing is an alternative to the BCDG method, proposed by Hu, Ma and Dinner (2006) and based on steered molecular dynamics. Here one modifies the Hamiltonian H_0 by adding a time-dependent term that favors transition paths:

    H_k(x, t) = H_0(x) + k ( q(x) − q_0(t) )²,

where q_0 : [0, T] → R is a steering function with q^{−1}(q_0(0)) ⊆ A and q^{−1}(q_0(T)) ⊆ B. The force constant k is initially taken to be very large, so as to force a transition.
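The biased Hamiltonian can be sketched as follows (illustrative; `H0`, the identity coordinate `q`, and the linear steering schedule `q0` are my own stand-ins):

```python
def biased_energy(H0, q, k, q0, x, t):
    """H_k(x, t) = H0(x) + k * (q(x) - q0(t))**2: a harmonic restraint that
    steers the coordinate q(x) along the schedule q0(t)."""
    return H0(x) + k * (q(x) - q0(t)) ** 2

# Toy ingredients: double-well energy, identity coordinate, and a linear
# schedule pulling q from the A side (-1) to the B side (+1) over T = 1.
H0 = lambda x: (x * x - 1.0) ** 2
q = lambda x: x
q0 = lambda t: -1.0 + 2.0 * t

e_start = biased_energy(H0, q, 100.0, q0, -1.0, 0.0)  # restraint satisfied: 0.0
e_end = biased_energy(H0, q, 100.0, q0, -1.0, 1.0)    # large penalty for staying in A
```

With k large, a configuration that lags behind the schedule pays a steep energetic penalty, which is what forces the transition.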
Schematic Illustration of Bias Annealing

k is gradually reduced, and MCMC is used to generate a new, less biased ensemble of TPs at each step. When k = 0, the method yields an unbiased ensemble of TPs. Hu et al. (2006) claim that bias annealing is 10-fold faster than the BCDG method. (Hu et al. (2006), Figure 2.)
The Transition State Ensemble

Often the high-energy transition states separating two stable conformations are themselves of interest. These states can be identified from the transition path ensemble with the help of the committor probability:

    p_A(q, t_s) ≡ P{ x(t_s) ∈ A | x(0) = q },

where:
- t_s is on the order of the transit time between A and B;
- p_A(q, t_s) ≈ 1 when q ∈ A;
- p_A(q, t_s) ≈ 0 when q ∈ B;
- p_A(q, t_s) ≈ 1/2 and p_B(q, t_s) ≈ 1/2 when q is a transition state.

The figure shows the committor probability as a function of the time along a transition path. Provided that t_s is small, p_A(q, t_s) can be estimated by simulating ~100 trajectories starting at q. (Bolhuis et al. (2002), Figure 7.)

Reaction Coordinates

Informal Definition: A reaction coordinate for a transition from A to B is a function R : U → R with the property that, for some threshold ρ, R(q) > ρ if and only if q is a transition state between A and B.

The committor probability can be used to assess the performance of a proposed reaction coordinate. Specifically, if Σ_{p,ε} is the set of q ∈ U such that |p_A(q, t_s) − p| < ε and |p_B(q, t_s) − (1 − p)| < ε, then

    P_R(p) = ∫_{Σ_{p,ε}} dq 1{ R(q) > ρ }

should be peaked around p = 1/2 and close to zero otherwise.

AGT Repair

Hu et al. (2008) A two-step nucleotide-flipping mechanism enables kinetic discrimination of DNA lesions by AGT. PNAS 105: 4615–4620.

Background

- O6-alkylguanine is the most common DNA modification produced by alkylating agents.
- Since the alkylated base pairs with T rather than C, its presence causes a G:C to T:A transition in one of the daughter cells.
- O6-alkylguanine lesions are repaired by the protein O6-alkylguanine-DNA alkyltransferase (AGT).
- Upregulation of AGT expression can also lead to drug resistance in tumors.
AGT is unusual in several respects.

- AGT is one of only a few human proteins that directly reverse DNA damage. In contrast, most other repair mechanisms function by excising the damaged bases.
- The alkylated base is repaired by transfer of the alkyl group to the sulphur atom of the Cys145 residue of AGT. (Daniels et al. (2004), Figure 1.)
AGT was also the first HTH protein shown to bind B-DNA via the minor groove:

- Binding occurs through a helix-turn-helix (HTH) motif.
- The helix shown in green is the recognition helix.
- Minor groove binding may be non-specific due to the inaccessibility of the base pairs.

Remark: Sequence-independent binding may allow AGT to efficiently scan genomic DNA for O6-alkylguanine lesions.
Because Cys145 remains extrahelical, transfer of the alkyl group to the protein occurs through a process called nucleotide flipping. Hu et al. (2008) studied this process using bias annealing and TPS as implemented in CHARMM. The initial coordinates were taken from an X-ray crystallographic structure of an AGT-DNA complex. Bias annealing revealed that mGua flips through the major groove in a two-step process involving:

1. Flipping from the base stack to a metastable extrahelical intermediate;
2. Insertion into the AGT active site.

This study used bias annealing of a pseudo-dihedral angle to drive flipping of the methylguanine to and then from the extrahelical intermediate. (Hu et al. 2006; Figure 4)

Dynamics of Flipping from the DNA Base Stack

In this step, the pseudo-dihedral angle increases from 0.2 rad to 1.4 rad. TPS harvesting suggests that the initial step involves the following sequence of events (Movie 1):

1. mGua and Cyt fluctuate apart;
2. Arg128 binds to Cyt;
3. mGua then diffuses to form the extrahelical intermediate.

Dynamics of Insertion into the Active Site

In this step, the pseudo-dihedral angle increases from 1.4 rad to 2.6 rad. This step requires the motion of a loop of AGT residues, Gly153 to Gly160. (Movie 3)

Does AGT discriminate between mGua and Gua during scanning?

The energetics of unstacking are similar for Gua and mGua:
- Gua:  ΔG ≈ 2.2 kcal mol⁻¹
- mGua: ΔG ≈ 2.6 kcal mol⁻¹

The barrier to active-site entry is much higher for Gua (green) than for mGua (red):
- Gua:  ΔG ≈ 7.0 kcal mol⁻¹
- mGua: ΔG ≈ 1.1 kcal mol⁻¹

(Hu et al. 2008, SI Table 1)

AGT may use a kinetic gatekeeping strategy to detect lesions.

- The unstacking rate for both mGua and Gua is about 10⁻³–10⁻² ps⁻¹.
- The rate of entry into the active site is about 10⁻¹ ps⁻¹ for mGua and 10⁻⁶ ps⁻¹ for Gua.
- Experimental studies of a bacterial homolog of AGT yield a sliding rate of 2.6 × 10⁻⁴ bp ps⁻¹.

Comparison of the sliding and flipping rates suggests that mGua, but not Gua, will usually enter the AGT active site while the protein is docked at each base.

References

- Daniels, D. S. et al. (2004) DNA binding and nucleotide flipping by the human DNA repair protein AGT. Nat. Struct. Mol. Biol. 11: 714–720.
- Frenkel, D. and Smit, B. (2002) Understanding Molecular Simulation: From Algorithms to Applications. Academic Press.
- Hu, J., Ma, A., and Dinner, A. R. (2006) Bias annealing: A method for obtaining transition paths de novo. J. Chem. Phys. 125: 114101.
- Robert, C. P. and Casella, G. (2004) Monte Carlo Statistical Methods. Springer.