1.4 Problems in Optimization
A very crude and yet sometimes effective measure is restarting the optimization process at randomly chosen points in time. One example of this method are GRASPs, Greedy Randomized Adaptive Search Procedures [663, 652] (see Section 10.6 on page 256), which continuously restart the process of creating an initial solution and refining it with local search.
Still, such approaches are likely to fail in domino convergence situations. Increasing the
proportion of exploration operations may also reduce the chance of premature convergence.
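The restart idea described above can be sketched as a short loop. The following is a minimal, hypothetical illustration (the function names and the toy problem are assumptions, not the GRASP formulation from the cited sources): each restart builds a fresh randomized solution, refines it with local search, and the best result over all restarts is kept.

```python
import random

def grasp(construct, local_search, evaluate, restarts=20):
    """Sketch of a restart loop in the spirit of GRASP: build a randomized
    initial solution, refine it locally, keep the best over all restarts."""
    best, best_value = None, float("inf")  # minimization assumed
    for _ in range(restarts):
        candidate = local_search(construct())
        value = evaluate(candidate)
        if value < best_value:
            best, best_value = candidate, value
    return best, best_value

# Toy demonstration: minimize f(x) = (x - 3)^2 over the integers.
f = lambda x: (x - 3) ** 2

def hill_climb(x):
    # Greedily step to the better integer neighbor until no improvement.
    while True:
        step = min((x - 1, x + 1), key=f)
        if f(step) >= f(x):
            return x
        x = step

best, value = grasp(lambda: random.randint(-100, 100), hill_climb, f)
print(best, value)  # → 3 0
```

On this convex toy problem every restart reaches the optimum; on a multi-modal problem, the restarts are what give the loop a chance to escape poor basins of attraction.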
In order to extend the duration of the evolution in evolutionary algorithms, many methods have been devised for steering the search away from areas which have already been frequently sampled. This can be achieved by integrating density metrics into the fitness assignment process. The most popular of such approaches are sharing and niching (see Section 2.3.4). The Strength Pareto Algorithms, which are widely accepted to be highly efficient, use another idea: they employ the number of individuals that one solution candidate dominates as a density measure [2329, 2332]. One very simple method aiming at convergence prevention is introduced in Section 2.4.8. Furthermore, using low selection pressure decreases the chance of premature convergence, but it also decreases the speed with which good solutions are exploited.
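To make the density idea concrete, the classic fitness sharing scheme can be sketched as follows (a minimal illustration with assumed helper names, not the exact formulation of Section 2.3.4): each individual's raw fitness is divided by its niche count, the sum of sharing values over the whole population, so individuals in crowded regions are penalized.

```python
def sharing(distance, sigma):
    # Triangular sharing function: full penalty at distance 0, none beyond sigma.
    return max(0.0, 1.0 - distance / sigma)

def shared_fitness(population, fitness, distance, sigma=1.0):
    """Classic fitness sharing sketch (fitness to be maximized): divide each
    raw fitness by the niche count summed over the whole population."""
    result = []
    for a in population:
        niche_count = sum(sharing(distance(a, b), sigma) for b in population)
        result.append(fitness(a) / niche_count)
    return result

# Three individuals clustered near 0 and one lone individual at 5,
# all with identical raw fitness 1.0:
pop = [0.0, 0.1, 0.2, 5.0]
shared = shared_fitness(pop, fitness=lambda x: 1.0,
                        distance=lambda a, b: abs(a - b), sigma=1.0)
# The lone individual at 5.0 keeps its full fitness of 1.0, while the
# clustered individuals are penalized down to roughly 0.36-0.37 each.
```

Under selection, the isolated individual is now favored over its crowded competitors, which is exactly the pressure toward unexplored regions that the text describes.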
Another approach against premature convergence is to introduce the capability of self-
adaptation, allowing the optimization algorithm to change its strategies or to modify its
parameters depending on its current state. Such behaviors, however, are often implemented
not in order to prevent premature convergence but to speed up the optimization process
(which may lead to premature convergence to local optima) [1776, 1777, 1778].
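One well-known instance of such parameter self-adaptation is Rechenberg's 1/5 success rule for evolution strategies. The sketch below is a hypothetical minimal implementation for a (1+1)-ES minimizing a one-dimensional function (the names and the adaptation interval are illustrative assumptions): the mutation step size grows when more than one fifth of recent mutations succeed and shrinks otherwise.

```python
import random

def one_fifth_rule_es(f, x, sigma=1.0, generations=200, a=0.85):
    """Sketch of a (1+1)-ES with 1/5-success-rule step-size adaptation,
    minimizing f starting from x. The step size sigma is adapted every
    10 generations based on the observed success rate."""
    successes, trials = 0, 0
    for g in range(1, generations + 1):
        y = x + random.gauss(0.0, sigma)   # Gaussian mutation
        trials += 1
        if f(y) < f(x):                    # elitist: keep the better point
            x = y
            successes += 1
        if g % 10 == 0:                    # adapt sigma periodically
            sigma = sigma / a if successes / trials > 0.2 else sigma * a
            successes, trials = 0, 0
    return x, sigma

random.seed(3)
x, sigma = one_fifth_rule_es(lambda v: v * v, 10.0)
```

Because the scheme is elitist, the objective value can never worsen; the adaptive step size is what speeds up progress, and, as noted above, such acceleration can also hasten convergence into a local optimum.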
1.4.3 Ruggedness and Weak Causality
The Problem: Ruggedness
Optimization algorithms generally depend on some form of gradient in the objective or fitness space. The objective functions should be continuous and exhibit low total variation, so the optimizer can descend the gradient easily. If the objective functions are unsteady or fluctuating, i.e., going up and down, it becomes more complicated for the optimization process to find the right direction in which to proceed. The more rugged a function gets, the harder it becomes to optimize. In short, one could say that ruggedness is multi-modality plus steep ascents and descents in the fitness landscape. Examples of rugged landscapes are Kauffman's
NK fitness landscape (see Section 21.2.1), the p-Spin model discussed in Section 21.2.2, Bergman and Feldman's jagged fitness landscape [182], and the sketch in Fig. 1.19.d on page 57.
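The NK model illustrates how ruggedness can be tuned. The following sketch (a simplified rendering under stated assumptions, not the exact construction of Section 21.2.1) assigns each of the N bits a random lookup table over itself and its K cyclic successors; larger K couples more bits, producing more conflicting contributions and hence a more rugged, multi-modal landscape.

```python
import random

def nk_landscape(N, K, seed=0):
    """Sketch of an NK-style fitness function: each locus contributes a
    value depending on itself and its K successors (cyclic neighborhood),
    drawn from a random per-locus lookup table."""
    rng = random.Random(seed)
    # One table per locus, indexed by the (K+1)-bit neighborhood pattern.
    tables = [[rng.random() for _ in range(2 ** (K + 1))] for _ in range(N)]

    def fitness(bits):
        total = 0.0
        for i in range(N):
            neighborhood = 0
            for j in range(K + 1):
                neighborhood = (neighborhood << 1) | bits[(i + j) % N]
            total += tables[i][neighborhood]
        return total / N  # mean contribution, always in [0, 1)

    return fitness

f = nk_landscape(N=8, K=2, seed=42)
```

With K = 0 each bit can be optimized independently and the landscape is smooth; as K grows toward N - 1, flipping a single bit changes up to K + 1 contributions at once, which is precisely the kind of steep, erratic change the text calls ruggedness.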
One Cause: Weak Causality
During an optimization process, new points in the search space are created by the search
operations. Generally we can assume that the genotypes which are the input of the search
operations correspond to phenotypes which have previously been selected. Usually, the better
or the more promising an individual is, the higher are its chances of being selected for further
investigation. Reversing this statement suggests that individuals which are passed to the