Compressive Distilled Sensing: Sparse Recovery Using Adaptivity in Compressive Measurements

Jarvis D. Haupt¹, Richard G. Baraniuk¹, Rui M. Castro², and Robert D. Nowak³

¹Dept. of Electrical and Computer Engineering, Rice University, Houston TX 77005
²Dept. of Electrical Engineering, Columbia University, New York NY 10027
³Dept. of Electrical and Computer Engineering, University of Wisconsin, Madison WI 53706
Abstract—The recently proposed theory of distilled sensing establishes that adaptivity in sampling can dramatically improve the performance of sparse recovery in noisy settings. In particular, it is now known that adaptive point sampling enables the detection and/or support recovery of sparse signals that are otherwise too weak to be recovered using any method based on nonadaptive point sampling. In this paper the theory of distilled sensing is extended to highly undersampled regimes, as in compressive sensing. A simple adaptive sampling-and-refinement procedure called compressive distilled sensing is proposed, where each step of the procedure utilizes information from previous observations to focus subsequent measurements into the proper signal subspace, resulting in a significant improvement in effective measurement SNR on the signal subspace. As a result, for the same budget of sensing resources, compressive distilled sensing can result in significantly improved error bounds compared to those for traditional compressive sensing.
I. INTRODUCTION
Let x ∈ ℝⁿ be a sparse vector supported on the set S = {i : x_i ≠ 0}, where |S| = s ≪ n, and consider observing x according to the linear observation model

y = Ax + w,    (1)
where A is an m × n real-valued matrix (possibly random) that satisfies E[‖A‖²_F] ≤ n, and where the w_i are i.i.d. N(0, σ²) for some σ ≥ 0. This model is central to the emerging field of compressive sensing (CS), which deals primarily with recovery of x in highly underdetermined settings (that is, where the number of measurements m ≪ n).
Initial results in CS establish a rather surprising fact: using certain observation matrices A for which the number of rows is a constant multiple of s log n, it is possible to recover x exactly from {y, A}, and in addition, the recovery can be accomplished by solving a tractable convex optimization [1]–[3]. Matrices A for which this exact recovery is possible are easy to construct in practice. For example, matrices whose entries are i.i.d. realizations of certain zero-mean distributions (Gaussian, symmetric Bernoulli, etc.) are sufficient to allow this recovery with high probability [2]–[4].
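The exact-recovery claim can be sketched with a small basis pursuit example: in the noiseless case, minimizing ‖x‖₁ subject to Ax = y recovers the sparse signal. The sketch below, an illustration under assumed problem sizes rather than a setup from the paper, casts basis pursuit as a linear program (x = u − v with u, v ≥ 0) and solves it with SciPy.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, s = 64, 32, 2  # illustrative sizes; m is a comfortable multiple of s*log(n)

# Sparse signal with s clearly nonzero entries
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = 5.0

# Gaussian sensing matrix with i.i.d. N(0, 1/m) entries
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
y = A @ x  # noiseless observations

# Basis pursuit: min ||x||_1 s.t. Ax = y.
# Substitute x = u - v, u >= 0, v >= 0, and minimize sum(u) + sum(v);
# linprog's default bounds (0, None) enforce the nonnegativity.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, method="highs")
x_hat = res.x[:n] - res.x[n:]
```

With these sizes the recovered x_hat should match x to within solver tolerance with high probability over the draw of A, mirroring the guarantee cited above.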
In practice, however, it is rarely the case that observations are perfectly noise-free. In these settings, rather than attempt
This work was partially supported by the ARO, grant no. W911NF-09-1-0383, the NSF, grant no. CCF-0353079, and the AFOSR, grant no. FA9550-09-1-0140.