Lecture 13: 03/07/2007

Recall: Simulated Annealing

Given S and ε: S → R, find x ∈ S minimizing ε(x).

SA solution: for each of a sequence of T-values, "run Metropolis" at that T using some symmetric proposal Q(i, j), with acceptance probability

    α(i, j) = 1                              if ε(j) ≤ ε(i)
            = exp( -(ε(j) - ε(i)) / kT )     if ε(j) > ε(i)

At the "end of run," use the "best i" to seed another run at a lower T-value, etc. One hopes that, as T → 0, we end up with populations of near-minima for ε.

More precisely: take a sequence {T_n} with T_n → 0 as n → ∞. This is called an annealing schedule. For each n, run Metropolis at temperature T_n; this amounts to running a particular Markov chain on S, yielding a population of points in S.

Let S* = the set of all i ∈ S minimizing ε.

• Let π_{T_n} = the unique stationary distribution of the "T_n-chain."
• Let π_{T_n}(S*) = the "amount of probability" concentrated on S* at the end of the T_n run:

    π_{T_n}(S*) = Σ_{j ∈ S*} π_{T_n}(j)

Can prove:

    lim_{n→∞} π_{T_n}(S*) = 1        (i.e., as temp → 0)

• Let X_{k, T_n} = the state at time k of the T_n chain. The last equation then says:

    lim_{n→∞} lim_{k→∞} Prob[ X_{k, T_n} ∈ S* ] = 1,

where the inner limit equals π_{T_n}(S*).

A stronger convergence result (due to B. Hajek and J. Tsitsiklis (at MIT)): ...
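The procedure described above (Metropolis runs at successively lower temperatures, each seeded by the best state found so far) can be sketched as follows. This is a minimal illustration, not the lecture's code: the geometric schedule, the toy energy ε(x) = (x − 3)², the step counts, and all function names are assumptions made for the example.

```python
import math
import random

random.seed(0)  # for reproducibility of this toy run

def metropolis_step(x, energy, propose, T, k=1.0):
    """One Metropolis step at temperature T: draw j from a symmetric
    proposal Q(i, j); accept with probability 1 if e(j) <= e(i),
    else with probability exp(-(e(j) - e(i)) / (k*T))."""
    j = propose(x)
    de = energy(j) - energy(x)
    if de <= 0 or random.random() < math.exp(-de / (k * T)):
        return j
    return x

def simulated_annealing(x0, energy, propose, schedule, steps_per_T=1000):
    """Run Metropolis at each T_n in `schedule` (T_n -> 0), seeding each
    run with the best state seen at the previous temperature."""
    best = x0
    for T in schedule:
        x = best
        for _ in range(steps_per_T):
            x = metropolis_step(x, energy, propose, T)
            if energy(x) < energy(best):
                best = x
    return best

# Toy problem: minimize e(x) = (x - 3)^2 over the integers.
energy = lambda x: (x - 3) ** 2
propose = lambda x: x + random.choice([-1, 1])   # symmetric Q(i, j)
schedule = [10.0 * 0.8 ** n for n in range(30)]  # geometric T_n -> 0

print(simulated_annealing(0, energy, propose, schedule))
```

With a slowly decreasing schedule the chain explores freely at high T and settles into low-energy states as T falls; on this convex toy landscape it finds the minimizer x = 3.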