…f nonzero and the other components zero). Any other change gives a bigger $\|\tilde{E}\|_F$. Thus the smallest value of the minimization function is $\tilde{\sigma}_{n+1}$, and since we verified in part (a) that our solution has this value, we are finished.

If you don't find that argument convincing, we can be more precise. We use a fact found in the first pointer in Chapter 2: for any matrix $B$ and vector $z$ for which $Bz$ is defined, $\|Bz\|_2 \le \|B\|_2 \|z\|_2$, where $\|B\|_2$ is defined to be the largest singular value of $B$. Therefore,

• $\|B\|_2 \le \|B\|_F$, since we can see from part (b) and the singular value decomposition of $B$ that the Frobenius norm of $B$ is just the square root of the sum of the squares of its singular values.

• If $(\tilde{\Sigma} + \tilde{E})\tilde{f} = 0$, then $\tilde{\Sigma}\tilde{f} = -\tilde{E}\tilde{f}$. ...
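If it helps to see these norm facts concretely, here is a small numerical sketch (not part of the original solution; it assumes NumPy and uses an arbitrary example matrix B and vector z) that checks them:

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 3))   # arbitrary example matrix (an assumption, for illustration)
z = rng.standard_normal(3)        # arbitrary example vector

sigma = np.linalg.svd(B, compute_uv=False)    # singular values of B, largest first

spectral = sigma[0]                           # ||B||_2 = largest singular value
frobenius = np.linalg.norm(B, "fro")          # ||B||_F

# Fact from part (b): ||B||_F is the square root of the sum of the squares
# of the singular values of B.
assert np.isclose(frobenius, np.sqrt(np.sum(sigma**2)))

# Consequence used in the first bullet: ||B||_2 <= ||B||_F.
assert spectral <= frobenius + 1e-12

# Fact cited from Chapter 2: ||B z||_2 <= ||B||_2 ||z||_2.
assert np.linalg.norm(B @ z) <= spectral * np.linalg.norm(z) + 1e-12

The checks pass for any B and z, which is all the argument above relies on.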