Gradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property

Rahul Garg (grahul@us.ibm.com)
Rohit Khandekar (rohitk@us.ibm.com)
IBM T. J. Watson Research Center, 1101 Kitchawan Road, Route 134, Yorktown Heights, NY 10598

Abstract

We present an algorithm for finding an s-sparse vector x that minimizes the square-error $\|y - \Phi x\|^2$ where $\Phi$ satisfies the restricted isometry property (RIP) with isometry constant $\delta_{2s} < 1/3$. Our algorithm, called GraDeS (Gradient Descent with Sparsification), iteratively updates x as

$$x \leftarrow H_s\!\left(x + \frac{1}{\gamma}\,\Phi^\top (y - \Phi x)\right)$$

where $\gamma > 1$ is a constant and $H_s$ sets all but the s largest-magnitude coordinates to zero. GraDeS converges to the correct solution in a constant number of iterations. The condition $\delta_{2s} < 1/3$ is the most general for which a near-linear-time algorithm is known. In comparison, the best condition under which a polynomial-time algorithm is known is $\delta_{2s} < \sqrt{2} - 1$. Our Matlab implementation of GraDeS outperforms previously proposed algorithms such as Subspace Pursuit, StOMP, OMP, and Lasso by an order of magnitude. Curiously, our experiments also uncovered cases where L1-regularized regression (Lasso) fails but GraDeS finds the correct solution.

Appearing in Proceedings of the 26th International Conference on Machine Learning, Montreal, Canada, 2009. Copyright 2009 by the author(s)/owner(s).

1. Introduction

Finding a sparse solution to a system of linear equations has been an important problem in multiple domains such as model selection in statistics and machine learning (Golub & Loan, 1996; Efron et al., 2004; Wainwright et al., 2006; Ranzato et al., 2007), sparse principal component analysis (Zou et al., 2006), image deconvolution and de-noising (Figueiredo & Nowak, 2005), and compressed sensing (Candès & Wakin, 2008). Recent results in the area of compressed sensing, especially those relating to the properties of random matrices (Candès & Tao, 2006; Candès et al., 2006), have sparked intense interest in this area, which is finding applications in diverse domains such as coding and information theory, signal processing, artificial intelligence, and imaging. Due to these developments, efficient algorithms for finding sparse solutions are becoming increasingly important.

Consider a system of linear equations of the form

$$y = \Phi x \qquad (1)$$

where $y \in \mathbb{R}^m$ is an m-dimensional vector of measurements, $x \in \mathbb{R}^n$ is the unknown signal to be reconstructed, and $\Phi \in \mathbb{R}^{m \times n}$ is the measurement matrix. The signal x is represented in a suitable (possibly over-complete) basis and is assumed to be s-sparse (i.e., at most s of the n components of x are non-zero). The sparse reconstruction problem is

$$\min_{x \in \mathbb{R}^n} \|x\|_0 \quad \text{subject to} \quad y = \Phi x \qquad (2)$$

where $\|x\|_0$ denotes the number of non-zero entries in x. This problem is not only NP-hard (Natarajan, 1995), but also hard to approximate within a factor $O(2^{\log^{1-\epsilon} m})$ of the optimal solution (Neylon, 2006).
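The update rule in the abstract is simple enough to state as code. Below is a minimal NumPy sketch of a GraDeS-style iteration, assuming a zero initial vector; the defaults gamma=4/3, max_iter, and tol are illustrative assumptions, not the authors' reference Matlab implementation.

```python
import numpy as np

def grades(y, Phi, s, gamma=4/3, max_iter=100, tol=1e-8):
    """Sketch of GraDeS: gradient descent with hard thresholding.

    gamma must exceed 1 per the paper; gamma = 4/3 and the stopping
    rule below are illustrative choices for this sketch.
    """
    m, n = Phi.shape
    x = np.zeros(n)                                # all-zero s-sparse start
    for _ in range(max_iter):
        r = y - Phi @ x                            # current residual
        x_new = x + (1.0 / gamma) * (Phi.T @ r)    # gradient step on ||y - Phi x||^2
        # H_s: keep the s largest-magnitude coordinates, zero the rest
        idx = np.argpartition(np.abs(x_new), -s)[-s:]
        x_hard = np.zeros(n)
        x_hard[idx] = x_new[idx]
        if np.linalg.norm(x_hard - x) <= tol:      # stop when the iterate stabilizes
            return x_hard
        x = x_hard
    return x
```

Each iteration costs one multiplication by $\Phi$, one by $\Phi^\top$, and a partial sort for $H_s$, which is why the per-iteration time is near-linear in the size of $\Phi$, consistent with the near-linear-time claim above.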