Global optimization

Calculus-based methods are necessarily local methods: they do not guarantee finding the global minimum unless the function is in a restricted category (convex, for example). However, users typically want to find the global minimum. Doing so can be very costly, but worthwhile. Because of this, a large number of heuristic methods have been developed that converge, in some sense, to the global minimum. Very often these methods are described as search methods. Here I describe a number of these methods and how they relate to the local methods described later. Some of them have theoretical support; many have limited or none.

Since derivatives give only local information, most global methods work with function values alone. There are a number of optimization methods that do not use derivatives, such as the well-known Nelder–Mead simplex method. These "direct" methods are not global optimization methods. They are, in fact, local methods that do not use derivatives.
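To illustrate the distinction, the sketch below runs a derivative-free local method, SciPy's Nelder–Mead implementation, on the Rosenbrock test function. The test function and starting point are illustrative choices, not taken from the text.

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock test function; its only minimum is at (1, 1).
def rosenbrock(x):
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

x0 = np.array([-1.2, 1.0])  # illustrative starting point

# Nelder-Mead uses function values only; no derivatives are required.
result = minimize(rosenbrock, x0, method="Nelder-Mead")
print(result.x)  # a local minimizer; nothing here guarantees a global one
```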
There has been a divergence between the community that develops global optimization algorithms such as simulated annealing and genetic algorithms, and the community that develops more traditional calculus-based methods. Very often the global optimization community has been more concerned with creating algorithms (heuristics) than with rigorous analysis, and the various heuristics are compared against each other. When methods such as simulated annealing and genetic algorithms are compared with more traditional algorithms, the traditional algorithms tend to win in speed and accuracy, especially when simple "random start" techniques are used.
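To make the "random start" remark concrete, here is a minimal sketch of the multistart idea, assuming a box-constrained search region: run a fast local method from several random starting points and keep the best result. The bounds, the number of starts, and the test function are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def random_start(f, lower, upper, n_starts=20, seed=None):
    """Run a local minimizer from random points in a box; keep the best."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lower, upper)          # random point in the box
        res = minimize(f, x0, method="Nelder-Mead")
        if best is None or res.fun < best.fun:  # keep the lowest value seen
            best = res
    return best

# A multimodal test function: a single local search often gets stuck.
f = lambda x: np.sin(5.0 * x[0]) + 0.1 * x[0]**2
best = random_start(f, lower=[-5.0], upper=[5.0])
print(best.x, best.fun)
```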
Exhaustive search

The idea is simple and expensive. Very often the number of significantly different possible inputs to an objective function is extremely large, beyond the number that can be tested in reasonable time. Nevertheless, exhaustive search is a useful reference point: any algorithm that takes significantly longer than exhaustive search should not be used.
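As that reference point, here is a minimal sketch of exhaustive search over a uniform grid; the grid resolution and bounds are illustrative assumptions. The cost is points_per_dim raised to the dimension, which is why the method is rarely practical.

```python
import itertools
import numpy as np

def exhaustive_search(f, lower, upper, points_per_dim=101):
    """Evaluate f at every point of a uniform grid; return the best found."""
    axes = [np.linspace(lo, hi, points_per_dim) for lo, hi in zip(lower, upper)]
    best_x, best_f = None, np.inf
    # points_per_dim ** len(lower) evaluations: exponential in the dimension.
    for x in itertools.product(*axes):
        fx = f(np.array(x))
        if fx < best_f:
            best_x, best_f = np.array(x), fx
    return best_x, best_f

f = lambda x: np.sin(5.0 * x[0]) + 0.1 * x[0]**2
print(exhaustive_search(f, lower=[-5.0], upper=[5.0]))
```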
Simulated annealing

The idea of simulated annealing is to do a local search, except that we allow increases in the objective function value with a certain probability. This makes it an example of a "random search" or "stochastic search" algorithm. The name "simulated annealing" comes from an analogy with thermodynamics: when a material is in a high-energy disordered state and is cooled slowly, it can settle into a low-energy ordered state, while cooling it too quickly leaves it trapped in a disordered state.
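Here is a minimal sketch of the idea, assuming a Metropolis-style acceptance rule and a geometric cooling schedule (both standard choices, not specified in the text): an uphill move that increases f by df is accepted with probability exp(-df / T), and the "temperature" T is slowly decreased.

```python
import math
import numpy as np

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.999,
                        n_iters=10_000, seed=None):
    """Local search that accepts uphill moves with probability exp(-df / T)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    best_x, best_f = x, fx
    t = t0
    for _ in range(n_iters):
        cand = x + rng.normal(scale=step, size=x.shape)  # random local move
        fc = f(cand)
        # Downhill moves are always accepted; uphill moves with prob exp(-df/T).
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling  # geometric cooling: the temperature slowly decreases
    return best_x, best_f

f = lambda x: np.sin(5.0 * x[0]) + 0.1 * x[0]**2
print(simulated_annealing(f, x0=[4.0]))
```

In a sketch like this, the step size and cooling rate control the tradeoff between exploration of the domain and speed of convergence.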