Journal of Machine Learning Research 4 (2003) 1271-1295    Submitted 10/02; Published 12/03
© 2003 Erik G. Learned-Miller and John W. Fisher III.

ICA Using Spacings Estimates of Entropy

Erik G. Learned-Miller    EGMIL@EECS.BERKELEY.EDU
Department of Electrical Engineering and Computer Science
University of California
Berkeley, CA 94720-1776, USA

John W. Fisher III    FISHER@CSAIL.MIT.EDU
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
200 Technology Square, Office NE43-V 626
Cambridge, MA 02139, USA

Editors: Te-Won Lee, Jean-François Cardoso, Erkki Oja and Shun-ichi Amari

Abstract

This paper presents a new algorithm for the independent components analysis (ICA) problem based on an efficient entropy estimator. Like many previous methods, this algorithm directly minimizes the measure of departure from independence according to the estimated Kullback-Leibler divergence between the joint distribution and the product of the marginal distributions. We pair this approach with efficient entropy estimators from the statistics literature. In particular, the entropy estimator we use is consistent and exhibits rapid convergence. The algorithm based on this estimator is simple, computationally efficient, intuitively appealing, and outperforms other well known algorithms. In addition, the estimator's relative insensitivity to outliers translates into superior performance by our ICA algorithm on outlier tests. We present favorable comparisons to the Kernel ICA, FAST-ICA, JADE, and extended Infomax algorithms in extensive simulations. We also provide public domain source code for our algorithms.

1. Introduction

We present a new independent components analysis (ICA) algorithm, RADICAL. Empirical results indicate that it outperforms a wide array of well known algorithms. Several straightforward principles underlie the development of RADICAL:

1. Since ICA is, by definition, about maximizing statistical independence, we attempt to directly optimize a measure of statistical independence, rather than a surrogate for this measure.

2. We avoid explicit estimation of probability densities as an intermediate step. Indeed, given the formulation of the objective function, density estimation (even implicitly) is entirely unnecessary.

3. Since our objective function involves one-dimensional entropy estimation, we employ a well-known,[1] consistent, rapidly converging and computationally efficient estimator of entropy
which is robust to outliers. For this task, we turned to the statistics literature, where entropy estimators have been studied extensively (cf. Beirlant et al., 1997); see the sketch after this list.

4. As the optimization landscape has potentially many local minima, we eschew gradient descent

[1] Although the estimator that we use (Vasicek, 1976) has been extensively analyzed in the statistics literature, it has received little attention in the machine learning community.
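The following is a minimal, hypothetical sketch, not the authors' released code, illustrating the two ideas referenced above: a one-dimensional m-spacing entropy estimate in the spirit of Vasicek (1976), and a brute-force search over rotations of two whitened signals that minimizes the sum of the marginal entropy estimates. The function names (spacing_entropy, best_rotation), the choice of m near sqrt(N), the (N+1)/m normalization, the number of candidate angles, and the omission of RADICAL's smoothing/augmentation and higher-dimensional rotations are all assumptions made for illustration.

# Hypothetical sketch (not the authors' released RADICAL code); see the note above.
import numpy as np


def spacing_entropy(x, m=None):
    """One-dimensional entropy estimate from m-spacings of the order statistics.

    Assumes the common convention
        H ~ (1 / (N - m)) * sum_i log(((N + 1) / m) * (x_(i+m) - x_(i))),
    one standard form of the Vasicek (1976) estimator.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if m is None:
        m = max(1, int(round(np.sqrt(n))))   # m ~ sqrt(N) is a typical choice
    spacings = x[m:] - x[:-m]                # x_(i+m) - x_(i), i = 1, ..., N - m
    spacings = np.maximum(spacings, 1e-12)   # guard against ties (zero spacings)
    return float(np.mean(np.log((n + 1) / m * spacings)))


def rotation(theta):
    """2-D rotation matrix for angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])


def best_rotation(z, n_angles=150):
    """Brute-force search over rotations of whitened 2 x N data z, minimizing
    the sum of the marginal spacing-entropy estimates (an independence proxy)."""
    thetas = np.linspace(0.0, np.pi / 2, n_angles, endpoint=False)
    costs = []
    for theta in thetas:
        y = rotation(theta) @ z
        costs.append(spacing_entropy(y[0]) + spacing_entropy(y[1]))
    return thetas[int(np.argmin(costs))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    # Two independent non-Gaussian sources, mixed and then whitened.
    s = np.vstack([rng.uniform(-1.0, 1.0, n), rng.laplace(size=n)])
    a = np.array([[1.0, 0.6], [0.4, 1.0]])            # arbitrary mixing matrix
    x = a @ s
    x = x - x.mean(axis=1, keepdims=True)
    d, e = np.linalg.eigh(np.cov(x))
    z = np.diag(d ** -0.5) @ e.T @ x                  # whitened mixtures
    theta = best_rotation(z)
    print(f"estimated unmixing rotation: {theta:.3f} rad")

Searching a small grid of angles rather than following a gradient is one simple way to avoid the local minima mentioned in point 4; it is shown here only as an illustrative choice, not as the paper's exact optimization procedure.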