4/8/2011
How to numerically recover the sparse representation
From Least Squares to sparsification
Recall the sparse face recognition problem:
• If A is the matrix whose columns are the training images and b is a testing image, then sparse face recognition looks for a sparse coefficient vector x such that b = Ax.
• Recall also that A is in general a short and wide matrix, so the system is an underdetermined system with infinitely many solutions. We want only the sparsest solution x*.
• Finding the sparsest solution is NP-hard.
• But if A satisfies a certain property (the Restricted Isometry Property, RIP), then the recently developed compressive sensing theory asserts that the smallest-l1-norm solution is also the sparsest solution (in some cases, but not all).
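To see why the usual least-squares answer is not enough, here is a hypothetical NumPy sketch (the dimensions, random seed, and the synthetic sparse vector x0 are made up for illustration): the minimum-l2-norm solution of an underdetermined system Ax = b satisfies the equations exactly, yet it is generally dense, not sparse.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up underdetermined system: 10 equations, 30 unknowns.
m, n = 10, 30
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[[2, 7]] = [1.0, -1.0]   # a 2-sparse ground truth
b = A @ x0

# lstsq returns the minimum l2-norm solution of the underdetermined system.
x_l2, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(A @ x_l2, b))                    # it does satisfy Ax = b
print(np.count_nonzero(np.abs(x_l2) > 1e-8))       # typically all 30 entries nonzero
```

The l2 solution spreads energy over every coordinate, which is exactly what the l1 approach below is designed to avoid.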
Method of using l1-minimization
• So, one way to solve the sparse representation problem (in some cases) is by solving:

    min ||x||_1   subject to Ax = b
• This l1-minimization problem can be solved through a linear programming procedure.
• The linear programming method gives us a solution, but can we speed up the performance?
• Yes; much current research focuses on finding faster ways to compute the sparse solution.
• We will look at one next.
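The reduction to linear programming can be sketched as follows (a minimal illustration using SciPy, with made-up problem sizes and a synthetic 3-sparse x0): split x = u - v with u, v >= 0, so that ||x||_1 = sum(u + v), and minimize that sum subject to [A, -A][u; v] = b.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical short, wide A and a sparse ground truth x0.
m, n = 20, 50
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[[3, 17, 41]] = [1.5, -2.0, 0.7]   # 3-sparse vector
b = A @ x0

# LP formulation: minimize 1^T [u; v]  subject to  [A, -A][u; v] = b,  u, v >= 0.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))

x_hat = res.x[:n] - res.x[n:]
# For a Gaussian A with m well above the sparsity level, x_hat should recover x0.
print(np.allclose(x_hat, x0, atol=1e-4))
```

General-purpose LP solvers work but scale poorly with n, which is why faster specialized methods are the subject of the research mentioned above.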
We start with the Least Squares method
• Recall that for a vector x = [x1, …, xn]^T, the Euclidean norm ||x||_2 is defined as

    ||x||_2 = sqrt(x1^2 + x2^2 + … + xn^2)
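A quick check of the definition in NumPy (the example vector is arbitrary):

```python
import numpy as np

x = np.array([3.0, 4.0])

# ||x||_2 = sqrt(x1^2 + ... + xn^2)
norm = np.sqrt(np.sum(x**2))

print(norm)               # 5.0
print(np.linalg.norm(x))  # same value via NumPy's built-in Euclidean norm
```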
• We now consider a classical data fitting problem: given
Spring '11 · Li