Markov chain Monte Carlo
Peter Beerli
October 10, 2005

[This chapter is highly influenced by chapter 1 in Markov Chain Monte Carlo in Practice, eds. Gilks, W. R., et al., Chapman and Hall/CRC, 1996.]

1 Short history

Many problems cannot be solved analytically but can be solved using statistical sampling. The idea is certainly old: it was used early on by Georges-Louis Leclerc, Comte de Buffon (Buffon's needle experiment), and by William Gosset, although these early applications typically simulated data for problems whose analytical solution was already understood. In 1945 and the following years, Nicholas Metropolis and others, including Stanislaw (Stan) Ulam, developed statistical sampling methods to test the ENIAC computer. Metropolis coined the term Monte Carlo methods, after the famous casino town in Monaco, a principality on the Mediterranean coast surrounded by southern France [influenced by Ulam's fondness for poker and by an uncle of Ulam's who once borrowed money to go gambling in Monte Carlo]. Enrico Fermi used statistical sampling for many problems in the 1930s and later, but he never published his approach; he used it to impress others with the accuracy of his results. In 1953 Metropolis et al. described the now famous Metropolis algorithm, the first Markov chain Monte Carlo method.

2 Monte Carlo methods

Monte Carlo methods perform statistical sampling to approximate the expectation of a function. We want to approximate

\[ \mu = \mathrm{E}\bigl(f(X)\bigr) \]
with independently identically distributed (iid) samples $X_1, X_2, \ldots$ using the sample mean

\[ \hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} f(X_i). \]

If we sample long enough, we approximate the original expectation. It is important to always report the standard deviation of this sampling process, because it should converge to the standard deviation of a Normal distribution. Note that $n$ is under the control of the researcher and is data-independent: we can always run the analysis longer (increase $n$) and get a more accurate result. Monte Carlo simulation is an important tool for integration in almost any field of research.

3 Markov chain (MC)

If a process produces a sequence of values in which the future is independent of the past given the present, that is

\[ \mathrm{Prob}(X_n = a_n \mid X_0 = a_0, X_1 = a_1, \ldots, X_{n-2} = a_{n-2}, X_{n-1} = a_{n-1}) = \mathrm{Prob}(X_n = a_n \mid X_{n-1} = a_{n-1}), \]

then the values $a_0, a_1, \ldots, a_n$ form a Markov chain. Random walks and sequences of mutations are examples of Markov chains.

4 Markov chain Monte Carlo (MCMC)

Markov chain Monte Carlo methods likewise perform statistical sampling to approximate the expectation of a function. We want to approximate

\[ \mu = \mathrm{E}_{\pi}\bigl(f(X)\bigr), \]

where $\pi$ is the equilibrium or stationary distribution, using samples $X_1, X_2, \ldots$ from a Markov chain whose stationary distribution is $\pi(\cdot)$, again with the sample mean

\[ \hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} f(X_i). \]

The only difference from plain Monte Carlo is that instead of iid samples we draw dependent samples. We would
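To make the estimator of Section 2 concrete, here is a minimal Python sketch (not part of the original lecture). The choices of distribution for $X$ (standard Normal) and of $f(x) = x^2$ are illustrative assumptions, picked only so that the true answer $\mathrm{E}(X^2) = 1$ is known; the sketch also reports the standard deviation of the sample mean, as recommended above.

# Plain Monte Carlo: estimate mu = E(f(X)) with iid samples.
# Illustrative assumptions: X ~ Normal(0, 1) and f(x) = x**2, so the true mu is 1.
import math
import random

def f(x):
    return x * x

def monte_carlo(n, seed=42):
    rng = random.Random(seed)
    values = [f(rng.gauss(0.0, 1.0)) for _ in range(n)]    # iid draws X_1, ..., X_n
    mean = sum(values) / n                                  # sample mean, estimates mu
    var = sum((v - mean) ** 2 for v in values) / (n - 1)    # sample variance of f(X)
    std_error = math.sqrt(var / n)                          # std. dev. of the sample mean
    return mean, std_error

for n in (100, 10_000, 1_000_000):                          # n is under our control
    est, se = monte_carlo(n)
    print(f"n={n:>9}  estimate={est:.4f}  std.error={se:.4f}")

Because $n$ is data-independent, we can always increase it; the standard error of the sample mean shrinks roughly as $1/\sqrt{n}$.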
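Section 3 mentions a random walk as an example of a Markov chain. The following small sketch (an illustration, not from the lecture) simulates a symmetric random walk on the integers; each new state is computed from the current state alone, which is exactly the Markov property displayed above.

# A symmetric random walk on the integers as a simple Markov chain:
# the next state X_n depends only on the current state X_{n-1}.
import random

def random_walk(steps, start=0, seed=1):
    rng = random.Random(seed)
    chain = [start]
    for _ in range(steps):
        step = rng.choice((-1, +1))       # transition uses only the current state
        chain.append(chain[-1] + step)
    return chain

print(random_walk(20))                    # prints the 21 visited states, starting at 0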
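For Section 4, here is a minimal sketch of a Metropolis sampler, again with illustrative assumptions not taken from the lecture: the stationary distribution $\pi$ is a standard Normal known only up to a constant, the proposal is symmetric, and $f(x) = x^2$. The chain produces dependent samples $X_1, X_2, \ldots$ whose sample mean still approximates $\mathrm{E}_{\pi}(f(X))$.

# Metropolis algorithm: dependent samples from a chain with stationary distribution pi.
# Illustrative assumptions: pi is proportional to exp(-x**2 / 2) (standard Normal)
# and f(x) = x**2, so E_pi(f(X)) = 1.
import math
import random

def unnormalized_pi(x):
    return math.exp(-0.5 * x * x)          # pi known only up to a normalizing constant

def f(x):
    return x * x

def metropolis(n, step_size=1.0, seed=7):
    rng = random.Random(seed)
    x = 0.0                                # arbitrary starting state
    samples = []
    for _ in range(n):
        proposal = x + rng.uniform(-step_size, step_size)   # symmetric proposal
        # Accept with probability min(1, pi(proposal) / pi(x)).
        if rng.random() < unnormalized_pi(proposal) / unnormalized_pi(x):
            x = proposal                   # accept the move; otherwise x stays unchanged
        samples.append(x)                  # record the (possibly repeated) current state
    return samples

samples = metropolis(100_000)
estimate = sum(f(x) for x in samples) / len(samples)    # sample mean over dependent draws
print(f"MCMC estimate of E_pi(f(X)) = {estimate:.3f}")  # should be close to 1

In practice one would also discard an initial burn-in portion of the chain and monitor convergence; the sketch omits this for brevity.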