# How to numerically recover the sparse representation 2011


4/8/2011

## From Least Squares to sparsification

Recall the sparse face recognition problem: if A is the matrix whose columns are the training images and b is a test image, then sparse face recognition looks for a sparse coefficient vector x such that b = Ax. Recall also that A is in general a short, wide matrix, so the system Ax = b is underdetermined and has infinitely many solutions; we want only the sparsest solution x*. Finding the sparsest solution is NP-hard. However, if A satisfies a suitable property (the restricted isometry property, RIP), then the recently developed compressive sensing theory asserts that the minimum l1-norm solution is also the sparsest solution (in some cases, but not all).
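To make the "infinitely many solutions" point concrete, here is a minimal NumPy sketch (the matrix sizes, seed, and support of the sparse vector are illustrative assumptions, not from the lecture). The minimum l2-norm solution of the underdetermined system also satisfies Ax = b, but it is generically dense, unlike the sparse vector that generated b:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 20                      # short, wide A: 8 equations, 20 unknowns
A = rng.standard_normal((m, n))

x_true = np.zeros(n)              # a 2-sparse coefficient vector
x_true[[3, 11]] = [1.5, -2.0]
b = A @ x_true                    # the "test image" it generates

# The minimum l2-norm solution (via the pseudoinverse) also solves Ax = b,
# but it spreads energy over essentially all coordinates instead of two.
x_l2 = np.linalg.pinv(A) @ b

print(np.count_nonzero(x_true))                 # 2
print(np.count_nonzero(np.abs(x_l2) > 1e-8))    # generically far more than 2
```

This is why plain least-norm fitting does not recover the sparse representation, and a different objective (the l1 norm below) is needed.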

## Method of using l1-minimization

So one way to solve the sparse representation problem (in some cases) is to solve the l1-minimization problem

min ||x||_1 subject to Ax = b.

This l1-minimization problem can be solved through a linear programming procedure. Linear programming gives us a solution, but can we speed up the computation? Yes: much current research is on finding faster ways to compute the sparse solution, and we will look at one next.

We start with the least squares method. Recall that for a vector x = [x_1, …, x_n]^T, the Euclidean norm ||x||_2 is defined as

||x||_2 = sqrt(x_1^2 + … + x_n^2).

We now consider a classical data fitting problem: given …
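As a sketch of the linear programming procedure mentioned above (the sizes, seed, and use of SciPy's `linprog` are my assumptions, not the course's code): the standard trick splits x = u − v with u, v ≥ 0, so that ||x||_1 = sum(u + v) and the l1 problem becomes an ordinary LP.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 10, 20                     # illustrative sizes for a short, wide A
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 11]] = [1.5, -2.0]     # a 2-sparse vector generating b
b = A @ x_true

# Split x = u - v with u, v >= 0, so ||x||_1 = 1^T (u + v):
#   minimize  1^T [u; v]   subject to   [A, -A] [u; v] = b,  u, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))

x_l1 = res.x[:n] - res.x[n:]      # recombine into the candidate solution
# x_true is feasible, so the LP optimum has ||x_l1||_1 <= ||x_true||_1.
```

Note the cost: the LP doubles the number of variables to 2n, which is one reason faster special-purpose solvers are an active research topic.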
