Global Optimization Algorithms: Theory and Application (Part 4)

1.4 Problems in Optimization

A very crude and yet sometimes effective measure is restarting the optimization process at randomly chosen points in time. One example of this method is GRASPs, Greedy Randomized Adaptive Search Procedures [663, 652] (see Section 10.6 on page 256), which continuously restart the process of creating an initial solution and refining it with local search. Still, such approaches are likely to fail in domino convergence situations. Increasing the proportion of exploration operations may also reduce the chance of premature convergence.

In order to extend the duration of the evolution in evolutionary algorithms, many methods have been devised for steering the search away from areas which have already been frequently sampled. This can be achieved by integrating density metrics into the fitness assignment process. The most popular of such approaches are sharing and niching (see Section 2.3.4). The Strength Pareto Algorithms, which are widely accepted to be highly efficient, use another idea: they use the number of individuals that one solution candidate dominates as a density measure [2329, 2332]. One very simple method aiming for convergence prevention is introduced in Section 2.4.8. Using low selection pressure furthermore decreases the chance of premature convergence, but also decreases the speed with which good solutions are exploited.

Another approach against premature convergence is to introduce the capability of self-adaptation, allowing the optimization algorithm to change its strategies or to modify its parameters depending on its current state. Such behaviors, however, are often implemented not in order to prevent premature convergence but to speed up the optimization process (which may lead to premature convergence to local optima) [1776, 1777, 1778].

1.4.3 Ruggedness and Weak Causality

The Problem: Ruggedness

Optimization algorithms generally depend on some form of gradient in the objective or fitness space. The objective functions should be continuous and exhibit low total variation, so the optimizer can descend the gradient easily. If the objective functions are unsteady or fluctuating, i.e., going up and down, it becomes harder for the optimization process to find the right directions in which to proceed. The more rugged a function gets, the harder it becomes to optimize. In short, one could say that ruggedness is multi-modality plus steep ascents and descents in the fitness landscape. Examples of rugged landscapes are Kauffman's NK fitness landscape (see Section 21.2.1), the p-Spin model discussed in Section 21.2.2, Bergman and Feldman's jagged fitness landscape [182], and the sketch in Fig. 1.19.d on page 57.
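The restart strategy mentioned above can be illustrated with a minimal multi-start sketch in the spirit of GRASP, assuming a minimization problem; the callables construct_greedy_randomized, local_search, and evaluate are hypothetical placeholders for problem-specific components, not names taken from the book.

```python
def multi_start(construct_greedy_randomized, local_search, evaluate, restarts=100):
    """GRASP-style multi-start sketch (minimization assumed):
    each restart builds a fresh greedy-randomized solution, refines it
    with local search, and the best result over all restarts is kept."""
    best, best_value = None, float("inf")
    for _ in range(restarts):
        candidate = construct_greedy_randomized()  # randomized greedy construction phase
        candidate = local_search(candidate)        # local refinement phase
        value = evaluate(candidate)
        if value < best_value:                     # keep the overall best solution
            best, best_value = candidate, value
    return best, best_value
```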
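Fitness sharing, one of the density-based approaches named above, can be sketched as follows under the assumption of a maximization problem; the user-supplied raw_fitness and distance functions, the triangular sharing kernel, and the radius sigma_share are illustrative assumptions rather than the book's definition.

```python
def shared_fitness(raw_fitness, population, distance, sigma_share=0.1, alpha=1.0):
    """Classic fitness-sharing sketch: each individual's raw fitness is divided
    by its niche count, i.e. the summed sharing values over the population,
    so individuals in densely sampled regions are penalized."""
    def sharing(d):
        # triangular sharing kernel: 1 at distance 0, 0 beyond sigma_share
        return 1.0 - (d / sigma_share) ** alpha if d < sigma_share else 0.0

    shared = []
    for x in population:
        # niche_count >= 1 because distance(x, x) == 0 contributes sharing(0) == 1
        niche_count = sum(sharing(distance(x, y)) for y in population)
        shared.append(raw_fitness(x) / niche_count)
    return shared
```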
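The Strength Pareto idea of using domination counts as a density measure can be sketched roughly in the style of SPEA2 (omitting its additional nearest-neighbour density term): the strength of an individual is the number of individuals it dominates, and the raw fitness of an individual sums the strengths of its dominators, so lower values are better and non-dominated individuals score 0. The dominates predicate is assumed to be supplied by the user.

```python
def strength_pareto_raw_fitness(population, dominates):
    """SPEA2-style sketch: strength = how many individuals a candidate dominates;
    raw fitness of a candidate = sum of the strengths of all its dominators
    (0 means non-dominated; larger values mean more crowded/dominated)."""
    strength = [sum(1 for q in population if dominates(p, q)) for p in population]
    raw = []
    for p in population:
        raw.append(sum(strength[j] for j, q in enumerate(population) if dominates(q, p)))
    return raw
```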
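Self-adaptation as described above is often realized in evolution strategies by encoding the mutation step size with the individual and mutating it by a log-normal factor before it is applied; the following sketch assumes a real-valued genome and uses 1/sqrt(n) as a simple, commonly quoted learning rate. It is only one of many possible self-adaptation schemes, not the specific mechanism of the cited references.

```python
import math
import random

def self_adaptive_mutation(x, sigma, tau=None):
    """Evolution-strategy-style self-adaptation sketch: the step size sigma is
    part of the individual and is perturbed by a log-normal factor, then the
    new sigma is used to mutate the solution vector x."""
    n = len(x)
    tau = tau if tau is not None else 1.0 / math.sqrt(n)       # learning rate (assumed default)
    new_sigma = sigma * math.exp(tau * random.gauss(0.0, 1.0))  # log-normal step-size update
    new_x = [xi + new_sigma * random.gauss(0.0, 1.0) for xi in x]
    return new_x, new_sigma
```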
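Kauffman's NK fitness landscape, cited above as an example of a rugged landscape, can be sketched with adjacent epistatic neighbourhoods: each of the n genes contributes a random table value that depends on itself and its k neighbours, and larger k produces more interactions and hence a more rugged landscape. The wrap-around neighbourhood and per-gene contribution tables below are one common construction, chosen here for illustration.

```python
import itertools
import random

def make_nk_landscape(n, k, seed=0):
    """NK-landscape sketch: returns a fitness function over binary genomes of
    length n in which each gene's contribution depends on itself and its k
    (circularly) adjacent neighbours via a random lookup table."""
    rng = random.Random(seed)
    # one contribution table per gene: a random value for each (k+1)-bit pattern
    tables = [
        {bits: rng.random() for bits in itertools.product((0, 1), repeat=k + 1)}
        for _ in range(n)
    ]

    def fitness(genome):  # genome: sequence of n bits (0/1)
        total = 0.0
        for i in range(n):
            neighbourhood = tuple(genome[(i + j) % n] for j in range(k + 1))
            total += tables[i][neighbourhood]
        return total / n  # average contribution
    return fitness
```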