Imaging via Compressive Sampling

Justin Romberg
Electrical and Computer Engineering, Georgia Tech
Email: firstname.lastname@example.org. Phone: 404-894-3930.
May 2007 (DRAFT)

I. INTRODUCTION

The ease with which we store and transmit images in modern-day applications would be unthinkable without compression. Image compression algorithms can reduce data sets by orders of magnitude, making systems which acquire extremely high-resolution images (billions or even trillions of pixels) feasible. There is an extensive body of literature on image compression, but the central concept is straightforward: we transform the image into an appropriate basis and then code only the important expansion coefficients. The crux is finding a good transform, a problem which has been studied extensively from both a theoretical and a practical standpoint. The most notable product of this research is the wavelet transform; switching from sinusoid-based representations to wavelets marked a watershed in image compression, and is the essential difference between the classical JPEG and modern JPEG-2000 standards.

Image compression algorithms convert high-resolution images into relatively small bit streams (while keeping the essential features intact), in effect turning a large digital data set into a substantially smaller one. But is there a way to avoid the large digital data set to begin with? Is there a way we can build the data compression directly into the acquisition? The answer is yes, and this is what Compressive Sampling is all about.

To begin, we need to generalize our notion of sampling an image. Instead of collecting point evaluations of the image $X$ at distinct locations, or averages over small areas (pixels), each measurement $y_k$ in our acquisition system is an inner product against a different test function $\varphi_k$:
\[
y_1 = \langle X, \varphi_1 \rangle, \quad y_2 = \langle X, \varphi_2 \rangle, \quad \ldots, \quad y_m = \langle X, \varphi_m \rangle. \tag{1}
\]
We note here that our entire discussion in this paper (and the majority of the work to date in the field of compressive sampling) will revolve around finite-dimensional signals and images. To make the transition to acquisition of continuous-time (and -space) signals, we would choose a discretization space on which to apply the discrete theory. For example, we might assume that the image is (or can be very closely approximated by) a gridded array of $n$ pixels. The test functions $\varphi_k$, which would also be pixellated, then give us measurements of the projection of the continuous image onto this discretization space.

The choice of the $\varphi_k$ allows us to choose the domain in which we gather information about the image. For example, if the $\varphi_k$ are sinusoids at different frequencies, we are essentially collecting Fourier coefficients (as in magnetic resonance imaging); if they are delta ridges, we are observing line integrals (as in tomography); and if they are indicator functions on squares, we are back to collecting pixels (as in a standard digital camera). Imagers which take these generalized kinds of measurements ...
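The generalized measurement model (1) can be made concrete with a short sketch. This is not code from the paper; it is a minimal NumPy illustration in which the test functions $\varphi_k$ are stacked as rows of a matrix (here called `Phi_pixels` and `Phi_dft`, names chosen for this example), so that all $m$ inner products are computed at once as a matrix-vector product. It checks the two cases discussed in the text: indicator test functions simply return the pixel values, and sinusoidal test functions return Fourier coefficients.

```python
import numpy as np

# A tiny "image": n pixels, flattened into a vector X of length n.
rng = np.random.default_rng(0)
n = 8
X = rng.standard_normal(n)

# Each measurement is an inner product y_k = <X, phi_k>. Stacking the
# test functions phi_k as the rows of a matrix Phi, all m measurements
# are computed at once as y = Phi @ X.

# Case 1: indicator test functions -> we are just collecting pixels.
Phi_pixels = np.eye(n)
y_pixels = Phi_pixels @ X  # identical to the pixel values of X

# Case 2: sinusoidal test functions -> we are collecting Fourier
# coefficients. Rows of the DFT matrix play the role of the phi_k
# (a complex inner product in this case).
idx = np.arange(n)
Phi_dft = np.exp(-2j * np.pi * np.outer(idx, idx) / n)
y_fourier = Phi_dft @ X  # matches the DFT of X

print(np.allclose(y_pixels, X))               # True
print(np.allclose(y_fourier, np.fft.fft(X)))  # True
```

In compressive sampling the interesting regime is $m < n$, i.e. `Phi` has fewer rows than columns; the sketch above only illustrates how the measurement process itself is modeled.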