
Caveat: this algorithm has no end. Geman & Geman's temperature-decrease schedule is proportional to 1/log of the number of iterations, so T never actually reaches zero, and the algorithm may therefore take an infinite amount of time to find the global minimum.
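To see how slowly such a schedule cools, the short Python sketch below compares a Geman & Geman-style logarithmic schedule with a geometric schedule that is often used in practice; the constants c, T0, and alpha are illustrative assumptions, not values from the slides. Even after a billion iterations the logarithmic temperature is still clearly above zero.

import math

def logarithmic_schedule(k, c=1.0):
    # Geman & Geman-style schedule: T_k = c / log(1 + k).
    # T shrinks so slowly that it never reaches zero after finitely many steps.
    return c / math.log(1 + k)

def geometric_schedule(k, T0=1.0, alpha=0.95):
    # A common practical alternative (illustrative, not from the slides): T_k = T0 * alpha^k.
    return T0 * alpha ** k

for k in (1, 10, 1000, 10**6, 10**9):
    print(k, round(logarithmic_schedule(k), 4), geometric_schedule(k))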

28 Simulated annealing algorithm
Idea: escape local extrema by allowing "bad moves," but gradually decrease their size and frequency.
Note: the goal here is to maximize E.
29 Simulated annealing algorithm
Same idea as above, with the algorithm written for the case where the goal is to minimize E (a code sketch follows below).
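The following is a minimal Python sketch of the minimize-E version of the algorithm, using the Boltzmann acceptance rule discussed on the next slides. The energy function, neighbor function, cooling schedule, and example problem are illustrative assumptions, not taken from the slides.

import math
import random

def simulated_annealing(initial_state, energy, neighbor, schedule, max_steps=10_000):
    # Sketch of simulated annealing when the goal is to minimize E.
    #   energy(state)   -> value to minimize
    #   neighbor(state) -> a randomly chosen successor ("move")
    #   schedule(t)     -> temperature at step t
    current = initial_state
    for t in range(1, max_steps + 1):
        T = schedule(t)
        if T <= 1e-12:
            break
        nxt = neighbor(current)
        delta_e = energy(nxt) - energy(current)
        if delta_e < 0:
            current = nxt                           # better move: always accept
        elif random.random() < math.exp(-delta_e / T):
            current = nxt                           # "bad move": accept with prob exp(-ΔE/T)
    return current

# Example: minimize a bumpy 1-D function (illustrative problem, not from the slides).
f = lambda x: x * x + 10 * math.sin(x)
result = simulated_annealing(
    initial_state=8.0,
    energy=f,
    neighbor=lambda x: x + random.uniform(-1, 1),
    schedule=lambda t: 1.0 / math.log(1 + t),       # Geman & Geman-style cooling
)
print(result, f(result))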

30 Note on simulated annealing: limit cases
Boltzmann distribution: accept a "bad move" with ΔE < 0 (when the goal is to maximize E) with probability P(ΔE) = exp(ΔE/T).
If T is large: ΔE < 0, so ΔE/T is negative but small in magnitude, and exp(ΔE/T) is close to 1: bad moves are accepted with high probability.
If T is near 0: ΔE < 0, so ΔE/T is negative and large in magnitude, and exp(ΔE/T) is close to 0: bad moves are accepted with low probability.
31 Note on simulated annealing: limit cases (continued)
Large T (bad moves accepted with high probability): the search behaves like a random walk.
T near 0 (bad moves accepted with low probability): the search becomes deterministic down-hill (greedy local) search.
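As a quick numeric check of these limit cases, the Python snippet below evaluates P(ΔE) = exp(ΔE/T) for an illustrative bad move ΔE = -1 at a few illustrative temperatures: the acceptance probability goes from nearly 1 at high temperature to essentially 0 near zero temperature.

import math

delta_e = -1.0                    # a "bad move" when maximizing E (ΔE < 0)
for T in (100.0, 1.0, 0.01):      # illustrative temperatures
    p = math.exp(delta_e / T)     # Boltzmann acceptance probability
    print(f"T = {T:6}: P(accept) = {p:.4f}")
# T =  100.0 -> P ≈ 0.99 (close to a random walk)
# T =    1.0 -> P ≈ 0.37
# T =   0.01 -> P ≈ 0    (bad moves essentially never accepted: greedy local search)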

32 Summary
Best-first search = general search in which the minimum-cost nodes (according to some measure) are expanded first.
Greedy search = best-first search with the estimated cost to reach the goal as the measure.
- Generally faster than uninformed search
- Not optimal
- Not complete
A* search = best-first search with measure f(n) = g(n) + h(n), i.e. path cost so far plus estimated cost to the goal (as sketched below).
- Combines the advantages of uniform-cost and greedy search
- Complete, optimal, and optimally efficient (given an admissible heuristic)
- Space complexity is still exponential
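As a concrete illustration of the A* entry above, here is a minimal Python sketch of A* on a small explicit graph using f(n) = g(n) + h(n); the graph, edge costs, and heuristic values are made up for the example.

import heapq

def a_star(start, goal, neighbors, h):
    # Sketch of A*: best-first search ordered by f(n) = g(n) + h(n).
    #   neighbors(n) -> iterable of (successor, step_cost)
    #   h(n)         -> estimated cost from n to the goal (heuristic)
    # Returns (path, cost), or (None, inf) if the goal is unreachable.
    frontier = [(h(start), 0, start, [start])]       # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if g > best_g.get(node, float("inf")):
            continue                                  # stale queue entry
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Tiny illustrative graph and (admissible) heuristic values, made up for the example.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}.get
print(a_star("A", "D", lambda n: graph[n], h))        # -> (['A', 'B', 'C', 'D'], 3)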
33 Summary
The time complexity of heuristic algorithms depends on the quality of the heuristic function. Good heuristics can sometimes be constructed by examining the problem definition or by generalizing from experience with the problem class.
Iterative improvement algorithms keep only a single state in memory. They can get stuck in local extrema; simulated annealing provides a way to escape local extrema, and it is complete and optimal given a slow enough cooling schedule.