Emmanuel J. Candès
Conventional wisdom and common practice in acquisition and reconstruction of
images from frequency data follow the basic principle of the Nyquist density sampling theory.
This principle states that to reconstruct an image, the number of Fourier samples we need to
acquire must match the desired resolution of the image, i.e. the number of pixels in the image.
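As a toy numerical illustration of this principle (mine, not the paper's): a generic discrete signal of n pixels is exactly determined by all n of its Fourier samples, while discarding half of them leaves an underdetermined system whose naive zero-filled inverse is wrong.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
x = rng.standard_normal(n)        # a generic "image" with n pixels
X = np.fft.fft(x)                 # its n Fourier samples

# All n Fourier samples: exact recovery by the inverse DFT.
assert np.allclose(np.fft.ifft(X).real, x)

# Keep only half the samples: since the DFT rows are orthogonal, zero-filling
# the missing frequencies gives the minimum-energy solution, which is wrong
# for a generic signal.
X_half = X.copy()
X_half[n // 2:] = 0
x_guess = np.fft.ifft(X_half).real
assert not np.allclose(x_guess, x)
```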
This paper surveys an emerging theory which goes by the name of “compressive sampling” or
“compressed sensing,” and which says that this conventional wisdom is inaccurate. Perhaps
surprisingly, it is possible to reconstruct images or signals of scientific interest accurately and
sometimes even exactly from a number of samples which is far smaller than the desired resolution
of the image/signal, e.g. the number of pixels in the image.
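To make this claim concrete, here is a minimal numerical sketch (my illustration, not from the paper): a signal of length 64 with only 3 nonzero entries is recovered from 24 random Gaussian measurements by ℓ1-minimization, cast as a linear program. All sizes, the sensing matrix, and the random seed are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 64, 24, 3                 # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
b = A @ x                                      # only m << n measurements

# min ||x||_1  s.t.  Ax = b, via the split x = u - v with u, v >= 0:
# minimize 1^T (u + v)  s.t.  [A, -A][u; v] = b
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(x_hat - x)))   # small residual: recovery is (typically) exact
```

With m this much larger than the sparsity level k, recovery by this linear program succeeds with overwhelming probability for Gaussian measurements.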
It is believed that compressive sampling has far-reaching implications. For example, it
suggests the possibility of new data acquisition protocols that translate analog information into
digital form with fewer sensors than was previously considered necessary. This new sampling theory
may come to underlie procedures for sampling and compressing data simultaneously.
In this short survey, we provide some of the key mathematical insights underlying this new
theory, and explain some of the interactions between compressive sampling and other fields such
as statistics, information theory, coding theory, and theoretical computer science.
Mathematics Subject Classification (2000).
Primary 00A69, 41-02, 68P30; Secondary 62C65.
Keywords. Compressive sampling, sparsity, uniform uncertainty principle, underdetermined
systems of linear equations, ℓ1-minimization, linear programming, signal recovery, error
correction.
One of the central tenets of signal processing is the Nyquist/Shannon sampling theory:
the number of samples needed to reconstruct a signal without error is dictated by its
bandwidth – the length of the shortest interval which contains the support of the
spectrum of the signal under study. In the last two years or so, an alternative theory
of “compressive sampling” has emerged which shows that super-resolved signals and
images can be reconstructed from far fewer data/measurements than is usually
considered necessary. The purpose of this paper is to survey and provide some of
the key mathematical insights underlying this new theory. An enchanting aspect of
compressive sampling is that it has significant interactions and bearings on some fields
in the applied sciences and engineering such as statistics, information theory, coding
theory, and theoretical computer science.
The author is partially supported by an NSF grant CCF–515362.
Proceedings of the International Congress
of Mathematicians, Madrid, Spain, 2006
© 2006 European Mathematical Society