Lecture 13: 03/07/2007

Recall: Simulated Annealing

Given S and ϵ : S → R, find x ∈ S minimizing ϵ(x) (i.e., ϵ(i) for i ∈ S). SA solution: for each of a sequence of T values, "run Metropolis" at that T using some symmetric Q(i, j) for proposals, with acceptance probability

    α(i, j) = 1                               if ϵ(j) ≤ ϵ(i)
            = exp(-(ϵ(j) - ϵ(i)) / (kT))      if ϵ(j) > ϵ(i)

At the "end of the run," use the "best i" to seed another run at a lower T value, etc. One hopes that, as T → 0, we end up with populations of near-minima for ϵ.

More precisely: take a sequence {T_n} with T_n → 0 as n → ∞. This is called an annealing schedule. For each n, run Metropolis at temperature T_n. This amounts to a given Markov chain on S; running it yields a population of points in S. Let S* = set of all i ∈ S minimizing ϵ.

• Let π_Tn = unique stationary distribution of the "T_n chain."
• Let π_Tn(S*) = "amount of probability" concentrated on S* at the "end" of the T_n run:

    π_Tn(S*) = Σ_{j ∈ S*} π_Tn(j)

Can prove:

    lim_{n→∞} π_Tn(S*) = 1    (as n → ∞, i.e., as temperature → 0)

• Let X_{k,Tn} = state at time k of the T_n chain. The last equation says:

    lim_{n→∞} lim_{k→∞} Prob(X_{k,Tn} ∈ S*) = 1

(the inside limit is π_Tn(S*)).

A stronger convergence result (due to B. Hajek and J. Tsitsiklis, at MIT): ...
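To make the procedure concrete, here is a minimal Python sketch of the scheme described above (not from the lecture). The acceptance rule is the α(i, j) defined above with the constant k absorbed into T; the particular energy function, neighbor rule, and geometric cooling schedule are illustrative assumptions, not anything specified in the notes.

    import math
    import random

    def simulated_annealing(energy, neighbor, x0, temps, steps_per_temp=1000):
        """Run Metropolis at each temperature in `temps` (the annealing
        schedule), seeding each run with the best state found so far."""
        best, best_e = x0, energy(x0)
        for T in temps:
            x = best                      # seed this run with the "best i" so far
            for _ in range(steps_per_temp):
                y = neighbor(x)           # symmetric proposal Q(i, j)
                dE = energy(y) - energy(x)
                # Metropolis acceptance: prob 1 if downhill, exp(-dE/T) if uphill
                if dE <= 0 or random.random() < math.exp(-dE / T):
                    x = y
                if energy(x) < best_e:
                    best, best_e = x, energy(x)
        return best, best_e

    # Illustrative example: minimize a bumpy function over the integers.
    if __name__ == "__main__":
        f = lambda i: (i - 7) ** 2 + 5 * math.sin(i)       # energy ϵ(i)
        step = lambda i: i + random.choice([-1, 1])        # symmetric neighbor move
        schedule = [10.0 * (0.9 ** n) for n in range(50)]  # T_n -> 0 as n grows
        print(simulated_annealing(f, step, x0=0, temps=schedule))

Each temperature in `schedule` plays the role of one T_n; letting the schedule decay toward 0 is what the convergence statements above are about.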