$A$ is the sum of $r$ rank-one matrices: $A = \sum_{i=1}^{r} \sigma_i u_i v_i^T$.

Theorem (Eckart–Young, 1936)
Let $A = U \Sigma V^T = U\,\mathrm{diag}(\sigma_1, \dots, \sigma_r, 0, \dots, 0)\,V^T$.
For any $\nu$ with $0 \le \nu \le r$, let $A_\nu = \sum_{i=1}^{\nu} \sigma_i u_i v_i^T$. Then
$$\|A - A_\nu\|_2 = \min_{\mathrm{rank}(B) \le \nu} \|A - B\|_2 = \sigma_{\nu+1}.$$

Proof.
Let $\Sigma_\nu = U^T (A - A_\nu) V$. Then
$$\Sigma_\nu = \mathrm{diag}(\sigma_1, \dots, \sigma_\nu, \sigma_{\nu+1}, \dots, \sigma_p) - \mathrm{diag}(\sigma_1, \dots, \sigma_\nu, 0, \dots, 0) = \mathrm{diag}(0, \dots, 0, \sigma_{\nu+1}, \dots, \sigma_p),$$
and since the spectral norm is invariant under the orthogonal factors $U$ and $V$, consequently $\|A - A_\nu\|_2 = \|\Sigma_\nu\|_2 = \sigma_{\nu+1}$. (That no rank-$\nu$ matrix $B$ achieves a smaller error is the remaining half of the argument.)
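The equality $\|A - A_\nu\|_2 = \sigma_{\nu+1}$ can be checked numerically. The sketch below (not from the slides; it uses NumPy's `svd`, with an arbitrary test matrix and truncation rank) forms $A_\nu$ from the leading $\nu$ singular triplets and compares the spectral-norm error with the first omitted singular value.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))

# Thin SVD: A = U @ diag(s) @ Vt, singular values sorted in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

nu = 3  # truncation rank (arbitrary choice for the demo)

# Rank-nu truncation: A_nu = sum_{i=1}^{nu} sigma_i u_i v_i^T
A_nu = U[:, :nu] @ np.diag(s[:nu]) @ Vt[:nu, :]

# Eckart-Young: the spectral-norm error equals sigma_{nu+1}.
err = np.linalg.norm(A - A_nu, 2)
print(err, s[nu])  # the two printed values agree
```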
Geometric interpretation of the Eckart–Young theorem

- What is the best approximation of a hyperellipsoid by a line segment? Take the line segment to be the longest axis.
- Next, what is the best approximation by a two-dimensional ellipsoid? Take the ellipsoid spanned by the longest and second-longest axes.
- Continue improving the approximation by adding, at each step, the largest axis of the hyperellipsoid not yet included.
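The axis-by-axis procedure above corresponds to adding one singular triplet at a time. A minimal sketch (my own illustration, assuming a random square test matrix) accumulates the rank-one pieces $\sigma_i u_i v_i^T$ and records the spectral-norm error after each addition; by the theorem, the error after including $\nu$ axes is $\sigma_{\nu+1}$, so it shrinks monotonically to zero.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 10))
U, s, Vt = np.linalg.svd(A)

errors = []
A_nu = np.zeros_like(A)
for i in range(len(s)):
    # Add the next-longest "axis": the rank-one piece sigma_i u_i v_i^T.
    A_nu += s[i] * np.outer(U[:, i], Vt[i, :])
    errors.append(np.linalg.norm(A - A_nu, 2))

# errors = [sigma_2, sigma_3, ..., sigma_p, ~0]: monotonically decreasing.
print(errors)
```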
- Reminiscent of techniques used in image compression, machine learning, and functional analysis (e.g., matching pursuit).

Theorem
For any $\nu$ with $0 \le \nu \le r$, $A$...