Compressed Sensing and Linear Codes over Real Numbers

Henry D. Pfister (joint with Fan Zhang)
Texas A&M University, College Station

Information Theory and Applications Workshop
UC San Diego, January 31st, 2008

Zhang and Pfister (Texas A&M), CS and Coding over Real Numbers, ITA 2008

Outline

1 Introduction
  - Compressed Sensing
  - Connections with Coding
2 Benefits of the Coding Perspective
  - Large Body of Previous Work
  - Low-Density Parity-Check Codes for Compressed Sensing
  - Confessions
3 Summary and Open Problems

What is Compressed Sensing? (1)

Compressed sensing (CS) is a relatively new area of signal processing and statistics that focuses on signal reconstruction from a small number of linear (i.e., dot-product) measurements.

CS originated with the observation that many systems:
(1) sample a large amount of data,
(2) perform a linear transform (e.g., DCT, wavelet, etc.), and
(3) throw away all the small coefficients.

If the locations of the important transform coefficients were known in advance, then they could be sampled directly with reduced complexity.
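The sample-transform-threshold pipeline described above can be sketched numerically. The following is an illustrative NumPy example, not code from the talk; the basis (an orthonormal DCT-II built by hand), the signal length, and the sparsity level are all assumptions chosen for demonstration:

```python
import numpy as np

# Sketch of the "sample everything, transform, discard small coefficients"
# pipeline that motivated compressed sensing. Sizes are illustrative.
rng = np.random.default_rng(0)
n = 64   # signal length
k = 4    # number of transform coefficients we keep

# Orthonormal DCT-II basis, built explicitly: row r, column j is
# sqrt(2/n) * cos(pi * (2j + 1) * r / (2n)), with row 0 rescaled.
idx = np.arange(n)
D = np.sqrt(2.0 / n) * np.cos(
    np.pi * (2 * idx[None, :] + 1) * idx[:, None] / (2 * n)
)
D[0, :] = np.sqrt(1.0 / n)

# A signal that is exactly k-sparse in the DCT domain.
coeffs = np.zeros(n)
coeffs[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x = D.T @ coeffs  # synthesize the time-domain signal

# (1) "Sample" all n values, (2) transform, (3) keep the k largest.
c = D @ x
keep = np.argsort(np.abs(c))[-k:]
c_thresh = np.zeros(n)
c_thresh[keep] = c[keep]
x_hat = D.T @ c_thresh

# Because x was exactly k-sparse in this basis, thresholding is lossless.
assert np.allclose(x, x_hat)
```

Note that all n samples were acquired just to keep k of the transform coefficients; CS asks whether one can avoid acquiring the other n - k in the first place.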
What is Compressed Sensing? (2)

CS is a research area that emerged when it was realized that random sampling could be used (at a small penalty) to achieve the same result without prior knowledge of the locations of the important coefficients.

CS has two stages: sampling and reconstruction.
  - During sampling, the signal of interest (SOI) is sampled by computing its dot product with a set of sampling kernels.
  - During reconstruction, the SOI is estimated from the samples.
  - In many cases, the number of samples required for a good estimate is much smaller than for other methods (e.g., Nyquist sampling at twice the maximum frequency).

Basic Problem Statement

Use m dot-product samples to reconstruct a signal.
  - The signal vector is x ∈ R^n.
  - The measurement matrix is Φ ∈ R^(m×n).
  - The length-m sample vector is y = Φx.

Given y, the valid signal set is V(y) = { x ∈ R^n : Φx = y }.
  - If m < n, then a unique solution is not possible.
  - With prior knowledge, we try to choose a "good" solution.
  - If x is i.i.d. zero-mean Gaussian, then the ML solution is the minimum-norm solution
      x̂ = arg min over x ∈ V(y) of ‖x‖_2 = Φ^T (Φ Φ^T)^{-1} y.
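The measurement model y = Φx and the minimum ℓ2-norm reconstruction above can be illustrated in a few lines. This is a minimal sketch under assumed dimensions, with Φ drawn as a random Gaussian matrix (a common but here illustrative choice):

```python
import numpy as np

# Measurement model y = Phi x with m < n, and the minimum l2-norm
# (Gaussian-ML) reconstruction x_hat = Phi^T (Phi Phi^T)^{-1} y.
# Dimensions and the Gaussian Phi are illustrative assumptions.
rng = np.random.default_rng(1)
n, m = 12, 8                          # fewer samples than signal entries
Phi = rng.standard_normal((m, n))     # measurement matrix in R^{m x n}
x = rng.standard_normal(n)            # unknown signal
y = Phi @ x                           # the m dot-product samples

# Minimum-norm element of V(y) = {x : Phi x = y}; for a full-row-rank
# Phi this equals the pseudoinverse solution pinv(Phi) @ y.
x_hat = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)

# x_hat is a valid explanation of the samples ...
assert np.allclose(Phi @ x_hat, y)
# ... but with m < n it generally differs from the true x, and its
# l2 norm never exceeds that of any other member of V(y), including x.
assert np.linalg.norm(x_hat) <= np.linalg.norm(x) + 1e-9
```

With m < n the true x cannot be recovered from y alone under a Gaussian prior; the point of CS is that a sparsity prior changes this picture dramatically.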