Compressive sampling
Emmanuel J. Candès∗
Abstract.
Conventional wisdom and common practice in acquisition and reconstruction of
images from frequency data follow the basic principle of the Nyquist density sampling theory.
This principle states that to reconstruct an image, the number of Fourier samples we need to
acquire must match the desired resolution of the image, i.e. the number of pixels in the image.
This paper surveys an emerging theory which goes by the name of “compressive sampling” or
“compressed sensing,” and which says that this conventional wisdom is inaccurate. Perhaps
surprisingly, it is possible to reconstruct images or signals of scientific interest accurately and
sometimes even exactly from a number of samples which is far smaller than the desired resolution
of the image/signal, e.g. the number of pixels in the image.
It is believed that compressive sampling has far reaching implications. For example, it
suggests the possibility of new data acquisition protocols that translate analog information into
digital form with fewer sensors than what was considered necessary. This new sampling theory
may come to underlie procedures for sampling and compressing data simultaneously.
In this short survey, we provide some of the key mathematical insights underlying this new
theory, and explain some of the interactions between compressive sampling and other fields such
as statistics, information theory, coding theory, and theoretical computer science.
Mathematics Subject Classification (2000).
Primary 00A69, 4102, 68P30; Secondary 62C65.
Keywords.
Compressive sampling, sparsity, uniform uncertainty principle, underdetermined
systems of linear equations, ℓ1 minimization, linear programming, signal recovery, error correction.
1. Introduction
One of the central tenets of signal processing is the Nyquist/Shannon sampling theory:
the number of samples needed to reconstruct a signal without error is dictated by its
bandwidth – the length of the shortest interval which contains the support of the
spectrum of the signal under study. In the last two years or so, an alternative theory
of “compressive sampling” has emerged which shows that superresolved signals and
images can be reconstructed from far fewer data/measurements than what is usually
considered necessary. The purpose of this paper is to survey and provide some of
the key mathematical insights underlying this new theory. An enchanting aspect of
compressive sampling is that it has significant interactions and bearings on some fields
in the applied sciences and engineering such as statistics, information theory, coding
∗The author is partially supported by an NSF grant CCF–515362.
Proceedings of the International Congress
of Mathematicians, Madrid, Spain, 2006
© 2006 European Mathematical Society
theory, theoretical computer science, and others as well. We will try to explain these
connections via a few selected examples.
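To make the recovery principle concrete, the following is a minimal numerical sketch of the ℓ1-minimization (basis pursuit) recovery named in the keywords, solved as a linear program. It is illustrative only, assuming NumPy and SciPy are available; the dimensions, the random Gaussian measurement matrix, and all variable names (`x_true`, `A`, `x_hat`) are choices made for this example, not specifics from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 128, 48, 5  # signal length, number of measurements, sparsity

# A k-sparse signal: only k of its n entries are nonzero.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

# Random Gaussian measurement matrix: with m far below n, the system
# A x = b is underdetermined, yet sparse x is recoverable w.h.p.
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

# Basis pursuit: minimize ||x||_1 subject to A x = b.
# Cast as an LP by splitting x = u - v with u, v >= 0, so that
# ||x||_1 = sum(u + v) at the optimum.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```

With these proportions (m ≈ 10k measurements for a k-sparse signal of length n), the linear program typically recovers the signal exactly up to solver tolerance, even though the system is underdetermined by a factor of nearly three.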