0804.3439v4 - Information theoretic bounds to performance...


Information theoretic bounds to performance of Compressed Sensing and Sensor Networks

Shuchin Aeron, Venkatesh Saligrama and Manqi Zhao *

Abstract

In this paper we derive information theoretic performance bounds for the sensing and reconstruction of sparse phenomena from noisy random projections of data. The problem has received significant interest in the Compressed Sensing and Sensor Networks (SNETs) literature. Our goal here is two-fold: (a) analyze these problems in an information theoretic setting, namely, provide algorithm-independent performance bounds; (b) derive explicit formulas that relate the number of measurements to the SNR and the distortion level. We consider two types of distortion: mean-squared error and errors in estimating the support of the signal. Our main technical tool for necessary conditions is an extension of the Fano lower bound to handle continuous domains and approximate reconstruction. To derive sufficient conditions we develop new insights into maximum-likelihood (ML) analysis. In particular, we show that in support recovery problems, small support errors are the dominant error events. Consequently, our ML analysis does not suffer the conservatism of the union bound and leads to a tight characterization. These results provide tight achievable bounds for the two types of distortion. For instance, for support recovery we show that, asymptotically, an SNR of log(n) together with k log(n/k) measurements is necessary and sufficient for exact recovery. Furthermore, if a small fraction of support errors can be tolerated, a constant SNR turns out to be sufficient. We also comment on the salient differences between the standard CS setup and some problems that arise in SNETs. For these latter problems we show that the compression can be poor, in that the number of measurements required can be significant relative to the sparsity level.
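The scaling laws quoted above can be evaluated numerically. The sketch below is illustrative only: it computes the k log(n/k) measurement term and the log(n) SNR term for a few problem sizes; the abstract does not fix the constants in front of these terms (natural logarithm assumed), so these numbers show growth rates, not exact measurement counts.

```python
import math

# Illustrative evaluation of the scaling laws from the abstract:
#   measurements for exact support recovery scale as k * log(n/k),
#   required SNR scales as log(n).
# Constants are unspecified in the abstract; natural log is assumed here.
for n, k in [(1_000, 10), (10_000, 10), (10_000, 100)]:
    m_scale = k * math.log(n / k)   # measurement scaling
    snr_scale = math.log(n)         # SNR scaling
    print(f"n={n:>6}, k={k:>3}: k*log(n/k) ~ {m_scale:7.1f}, log(n) ~ {snr_scale:5.2f}")
```

Note how, for fixed k, doubling n (in the log) changes the measurement scaling only mildly, while increasing k at fixed n increases it roughly linearly.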
1 Introduction

Sparsity arises in many signal processing applications, ranging from image processing [1], geophysics [2], finite-rate-of-innovation signals [3], group testing [4], and cognitive radios [5] to source localization [6] and sensor networks [7]. This paper deals with fundamental limits to the sensing and reconstruction of such sparse signals. In more concrete terms, our goal is to estimate X based on the observations

Y = Φ X + N,

where Φ ∈ R^(m×n) is a sensing matrix, X ∈ R^n is a sparse signal with at most k non-zero components, and N ∈ R^m is additive noise. To solve for X one usually solves the so-called ℓ0 problem, where one looks for the sparsest solution X̂ that matches the data as closely as possible. For the noiseless case, N = 0 and m ≥ 2k ...

* The authors are with the Department of Electrical and Computer Engineering at Boston University, MA 02215.
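The measurement model above is easy to instantiate. The following is a minimal sketch, not the paper's method: it draws a k-sparse signal and a random Gaussian sensing matrix (a common choice for Φ; the specific dimensions, noise level, and column normalization below are assumptions for illustration) and forms the noisy random projections Y = ΦX + N.

```python
import numpy as np

# Minimal sketch of the sensing model Y = Phi @ X + N.
# Dimensions, noise level, and the Gaussian Phi are illustrative assumptions.
rng = np.random.default_rng(0)

n, m, k = 256, 64, 8                  # signal length, measurements, sparsity
X = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
X[support] = rng.standard_normal(k)   # k-sparse signal: at most k non-zeros

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # sensing matrix, Phi in R^(m x n)
N = 0.01 * rng.standard_normal(m)     # additive noise
Y = Phi @ X + N                       # noisy random projections

print(Y.shape)  # (64,)
```

With m = 64 measurements of an n = 256-dimensional signal, this setup compresses by a factor of four while X itself has only k = 8 degrees of freedom, which is what makes recovery plausible at all.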