# Low-rank approximation: the Eckart-Young theorem


$A$ is the sum of $r$ rank-one matrices:

$$A = \sum_{i=1}^{r} \sigma_i u_i v_i^\top$$

**Theorem (Eckart-Young, 1936).** Let $A = U \Sigma V^\top = U\,\mathrm{diag}(\sigma_1, \dots, \sigma_r, 0, \dots, 0)\,V^\top$. For any $\nu$ with $0 \le \nu \le r$, let

$$A_\nu = \sum_{i=1}^{\nu} \sigma_i u_i v_i^\top.$$

Then

$$\|A - A_\nu\|_2 = \min_{\mathrm{rank}(B) \le \nu} \|A - B\|_2 = \sigma_{\nu+1}.$$

*Proof.* Let $\Sigma_\nu = U^\top (A - A_\nu) V$. Then

$$\Sigma_\nu = \mathrm{diag}(\sigma_1, \dots, \sigma_\nu, \sigma_{\nu+1}, \dots, \sigma_p) - \mathrm{diag}(\sigma_1, \dots, \sigma_\nu, 0, \dots, 0) = \mathrm{diag}(0, \dots, 0, \sigma_{\nu+1}, \dots, \sigma_p),$$

and consequently, since the spectral norm is invariant under the orthogonal factors $U$ and $V$,

$$\|A - A_\nu\|_2 = \|\Sigma_\nu\|_2 = \sigma_{\nu+1}.$$

(This shows the error $\sigma_{\nu+1}$ is attained by $A_\nu$; the matching lower bound over all rank-$\nu$ matrices $B$ is not shown on this slide.)

## Geometric interpretation of the Eckart-Young theorem

- What is the best approximation of a hyperellipsoid by a line segment? Take the line segment to be the longest axis.
- Next, what is the best approximation by a two-dimensional ellipsoid? Take the ellipsoid spanned by the longest and the second-longest axes.
- Continue, improving the approximation by adding the largest axis of the hyperellipsoid not yet included.
- This is reminiscent of techniques used in image compression, machine learning, and functional analysis (e.g., matching pursuit).

**Theorem.** For any $\nu$ with $0 \le \nu \le r$, $A$...
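The theorem can be checked numerically: truncating the SVD after $\nu$ terms should give a spectral-norm error of exactly $\sigma_{\nu+1}$. A minimal NumPy sketch (illustrative, not from the notes; the matrix and rank are arbitrary choices):

```python
import numpy as np

# Illustrative example: a random 6x4 matrix (any matrix works).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

# Thin SVD: A = U @ diag(s) @ Vt, singular values in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

nu = 2  # target rank of the approximation
# Best rank-nu approximation: keep the nu largest singular triplets.
A_nu = U[:, :nu] @ np.diag(s[:nu]) @ Vt[:nu, :]

# Eckart-Young: the spectral-norm error equals sigma_{nu+1}.
err = np.linalg.norm(A - A_nu, ord=2)
assert np.isclose(err, s[nu])  # s[nu] is sigma_{nu+1} (0-based indexing)
```

Note that `np.linalg.svd` returns the singular values already sorted in decreasing order, so keeping the first `nu` triplets is exactly the truncation $A_\nu$ from the theorem.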