Global Optimization Algorithms: Theory and Application, Part 19

21.3 Genetic Programming Problems

Number of Training Cases (tc)
    The number of training cases used for evaluating the objective functions.
    tc ∈ {1, 10}

Training Case Change Policy (ct)
    The policy according to which the training cases are changed.
    ct = 0 → The training cases do not change.
    ct = 1 → The training cases change each generation.

Generation Limit (mxt)
    The maximum number of generations that each run is allowed to perform (see Definition 1.43).
    mxt = 501

System Configuration (Cfg)
    Normal off-the-shelf PCs with approximately 2 GHz of processor power.

Table 21.5: The settings of the RBGP-Genetic Programming experiments for the GCD problem.

Convergence Prevention

In our past experiments, we have found that Genetic Programming in rugged fitness landscapes, and the Genetic Programming of real algorithms (which usually leads to rugged fitness landscapes), is very prone to premature convergence. If it finds some half-baked solution, the population often converges to this individual and evolution stops. There are many ways to prevent this, such as modifying the fitness assignment process with sharing functions (see Section 2.3.4 on page 114). Such methods penalize individuals that are close to each other in objective space by decreasing their chance to reproduce. Here, we decided on a very simple measure which only decreases the reproduction probability of individuals with exactly equal objective values: the simple convergence prevention algorithm (SCP) introduced in Section 2.4.8. This filter was either applied with strength cp = 0.3 or not used at all (cp = 0).

Comparison with Random Walks

We found it necessary to compare the Genetic Programming approach for solving this problem with random walks in order to find out whether or not Genetic Programming can provide any advantage in a rugged fitness landscape.
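The convergence prevention filter mentioned above can be sketched as follows. This is a minimal illustration only, assuming that SCP keeps the first individual with a given objective vector and discards each further copy with a probability that grows in the number of equal-objective individuals already kept; the exact survival rule is defined in Section 2.4.8, and the function name and interface here are illustrative.

```python
import random

def scp_filter(population, objectives, cp=0.3, rng=random):
    """Sketch of a simple convergence prevention (SCP) filter.

    Only individuals with *exactly* equal objective values are
    affected: the first copy always survives, and the i-th further
    copy survives only with probability (1 - cp)**i. (This survival
    rule is an assumption for illustration.)
    """
    seen = {}   # objective tuple -> number of kept copies
    kept = []
    for ind in population:
        key = tuple(objectives(ind))
        c = seen.get(key, 0)
        # c == 0: first copy, always kept; otherwise a random test
        # that gets harder the more equal copies we have kept.
        if c == 0 or rng.random() < (1.0 - cp) ** c:
            kept.append(ind)
            seen[key] = c + 1
    return kept
```

With cp = 0 the filter is a no-op (every individual survives), matching the experiment setting where convergence prevention is switched off; with cp = 1 all exact duplicates are removed.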
Therefore, we either used an evolutionary algorithm with the parameters discussed above (alg = 0) or parallel random walks (alg = 1). Random walks, in this context, are essentially evolutionary algorithms in which neither fitness assignment nor selection is performed. Hence, we can test parameters like ps, ct, and tc, but neither convergence prevention (cp = 0) nor steady state (expSteadyState = 0). The results of these random walks are the best individuals encountered during their course.

Results

We have determined the following measures from the data obtained in our experiments.

Perfection Fraction (p/r)
    The fraction of experimental runs that found perfect individuals. This fraction is also the estimate of the cumulative probability of finding a perfect individual by the 500th generation (see Section 20.3.1).
    p/r = CPp(ps, 500) (see Equation 20.20)

Number of Perfect Runs (#p)
    The number of runs where perfect individuals were discovered (see Section 20.3.1).

Number of Successful Runs (#s)
    The number of runs where successful individuals were discovered.
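The perfection fraction is simply the empirical estimate of the cumulative success probability CPp(ps, 500) from Equation 20.20: the share of runs that found a perfect individual within the generation limit. A minimal sketch, assuming a per-run record of the generation in which a perfect individual was first found (the function name and input format are illustrative, not the book's notation):

```python
def perfection_fraction(first_perfect_gen, limit=500):
    """Estimate p/r: the fraction of runs that found a perfect
    individual by generation `limit`.

    `first_perfect_gen` holds one entry per run: the generation in
    which a perfect individual was first discovered, or None if the
    run never found one. (Input format is an assumption for
    illustration; the book's estimator is Equation 20.20.)
    """
    runs = len(first_perfect_gen)
    perfect = sum(1 for g in first_perfect_gen
                  if g is not None and g <= limit)
    return perfect / runs if runs else 0.0
```

For example, four runs that first found perfect individuals in generations 10, never, 499, and 501 yield p/r = 2/4 = 0.5, since the last run only succeeded after the 500th generation.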