Liu, Yao, Higuchi_2002_Evolutionary ensembles with negative correlation learning

380 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 4, NO. 4, NOVEMBER 2000

Evolutionary Ensembles with Negative Correlation Learning

Yong Liu, Xin Yao, and Tetsuya Higuchi

Abstract—Based on negative correlation learning and evolutionary learning, this brief paper presents evolutionary ensembles with negative correlation learning (EENCL) to address the issues of automatic determination of the number of individual neural networks (NNs) in an ensemble and the exploitation of the interaction between individual NN design and combination. The idea of EENCL is to encourage different individual NNs in the ensemble to learn different parts or aspects of the training data so that the ensemble can better learn the entire training data. The cooperation and specialization among different individual NNs are considered during individual NN design. This provides an opportunity for different NNs to interact with each other and to specialize. Experiments on two real-world problems demonstrate that EENCL can produce NN ensembles with good generalization ability.

Index Terms—Evolutionary ensembles, negative correlation learning, neural networks.

I. INTRODUCTION

Many real-world problems are too large and too complex for a single monolithic system to solve alone. There are many examples from both natural and artificial systems showing that an integrated system consisting of several subsystems can reduce the total complexity of the system while solving a difficult problem satisfactorily. The success of neural network (NN) ensembles in improving a classifier's generalization is a typical example [1].

NN ensembles adopt the divide-and-conquer strategy. Instead of using a single network to solve a task, an NN ensemble combines a set of NNs that learn to subdivide the task and thereby solve it more efficiently and elegantly. An NN ensemble offers several advantages over a monolithic NN [2]. First, it can perform more complex tasks than any of its components (i.e., individual NNs in the ensemble). Second, it can make an overall system easier to understand and modify. Finally, it is more robust than a monolithic NN, and can show graceful performance degradation in situations where only a subset of NNs in the ensemble are performing correctly.

Given the advantages of NN ensembles and the complexity of the problems that are beginning to be investigated, it is clear that the NN ensemble method is and will be an important and pervasive problem-solving technique. However, designing NN ensembles is a very difficult task. It relies heavily on human experts and prior knowledge about the problem.

Manuscript received February 23, 2000; revised July 25, 2000. ...
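To make the negative-correlation idea concrete, the sketch below trains a toy ensemble with the NCL-style penalized error for member i, E_i = ½(F_i − d)² + λ·p_i, whose gradient with respect to the member's output works out to (F_i − d) − λ(F_i − F̄), where F̄ is the ensemble mean. This is only an illustrative sketch of the negative-correlation penalty on a synthetic 1-D regression task with fixed random features; it is not the paper's full EENCL algorithm (which evolves the networks and determines the ensemble size automatically). All data, network sizes, and hyperparameter values here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (assumed, not from the paper)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
d = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=200)

M = 4       # ensemble size (fixed here; EENCL determines this automatically)
lam = 0.5   # negative-correlation strength; lam = 0 recovers independent training
lr = 0.05

# Each member: tiny network with fixed random tanh features and a trainable
# linear readout -- a deliberate simplification of fully trained NNs.
H = 16
W = [rng.normal(size=(1, H)) * 3.0 for _ in range(M)]
b = [rng.normal(size=H) for _ in range(M)]
v = [np.zeros(H) for _ in range(M)]  # trainable output weights per member

def features(i, X):
    return np.tanh(X @ W[i] + b[i])

for epoch in range(2000):
    outs = np.stack([features(i, X) @ v[i] for i in range(M)])  # (M, N)
    Fbar = outs.mean(axis=0)                                    # ensemble output
    for i in range(M):
        # NCL gradient w.r.t. member output: (F_i - d) - lam * (F_i - Fbar).
        # The penalty pulls members apart while the error term fits the data.
        g = (outs[i] - d) - lam * (outs[i] - Fbar)
        v[i] -= lr * features(i, X).T @ g / len(X)

# Evaluate the ensemble after training
outs = np.stack([features(i, X) @ v[i] for i in range(M)])
Fbar = outs.mean(axis=0)
ens_mse = float(np.mean((Fbar - d) ** 2))
diversity = float(np.std(outs, axis=0).mean())
```

At a fixed point of this update, summing (F_i − d) = λ(F_i − F̄) over all members forces F̄ = d on the span of the features: individually biased members combine into an accurate ensemble, which is exactly the specialization-plus-cooperation trade-off the abstract describes.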