Proof. (i) Since $\|M-R_0\|_2 = \sigma_1(M-R_0)$ and $\|M-R_0\|_F^2 = \sum_{i>0} \sigma_i^2(M-R_0)$ follow from Lemma 2.23b,c, we obtain from Lemma 2.34a that
$$\|M-R_0\|_2 \ge \sigma_{r+1}(M), \qquad \|M-R_0\|_F^2 \ge \sum_{i>r} \sigma_i^2(M)$$
for all $R_0$ with $\mathrm{rank}(R_0) \le r$. Since equality holds for $R_0 = R$, this $R$ solves both minimisation problems.
(ii) If $\sigma_r = \sigma_{r+1}$, we may interchange the $r$-th and $(r+1)$-th columns in $U$ and $V$, obtaining another singular-value decomposition and hence another $R$. ⊓⊔
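To make part (i) concrete, here is a minimal numerical check (a sketch, assuming NumPy; the function name best_rank_r is ours, not from the text): the truncated singular-value decomposition attains exactly the error bounds $\sigma_{r+1}(M)$ and $\sum_{i>r}\sigma_i^2(M)$.

import numpy as np

def best_rank_r(M, r):
    # Truncated SVD: keep the r largest singular values and vectors.
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vh[:r, :]

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 5))
r = 2
R = best_rank_r(M, r)
s = np.linalg.svd(M, compute_uv=False)
# ||M - R||_2 = sigma_{r+1}(M) and ||M - R||_F^2 = sum_{i>r} sigma_i(M)^2
assert np.isclose(np.linalg.norm(M - R, 2), s[r])
assert np.isclose(np.linalg.norm(M - R, 'fro') ** 2, np.sum(s[r:] ** 2))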
Next, we consider a convergent sequence $M^{(\nu)}$ and use Exercise 2.26.

Lemma 2.37. Consider $M^{(\nu)} \in \mathbb{K}^{n\times m}$ with $M^{(\nu)} \to M$. Then there are best approximations $R^{(\nu)}$ according to (2.24) so that a subsequence of $R^{(\nu)}$ converges to $R$, which is the best approximation to $M$.

Remark 2.38. The optimisation problems (2.24) can also be interpreted as the best approximation of the range of $M$:
$$\max\{\|PM\|_F : P \text{ orthogonal projection with } \mathrm{rank}(P) = r\}. \tag{2.25}$$

Proof. The best approximation $R \in \mathcal{R}_r$ to $M$ has the representation $R = PM$ for $P = P_1^{(r)}$ (cf. Remark 2.35). By orthogonality, $\|PM\|_F^2 + \|(I-P)M\|_F^2 = \|M\|_F^2$ holds. Hence minimising $\|(I-P)M\|_F^2 = \|M-R\|_F^2$ is equivalent to maximising $\|PM\|_F^2$. ⊓⊔

2.7 Linear Algebra Procedures

For later use, we formulate procedures based on the previous techniques.

The reduced QR decomposition is characterised by the dimensions $n$ and $m$, the input matrix $M \in \mathbb{K}^{n\times m}$, the rank $r$, and the resulting factors $Q$ and $R$. The corresponding procedure is denoted by

procedure RQR(n, m, r, M, Q, R);   {reduced QR decomposition}
input: $M \in \mathbb{K}^{n\times m}$;
output: $r = \mathrm{rank}(M)$, $Q \in \mathbb{K}^{n\times r}$ orthogonal, $R \in \mathbb{K}^{r\times m}$ upper triangular.   (2.26)

It requires $N_{QR}(n, m)$ operations (cf. Lemma 2.22).

The modified QR decomposition in (2.15) produces an additional permutation matrix $P$ and the decomposition of $R$ into $[R_1\ R_2]$:

procedure PQR(n, m, r, M, P, Q, R1, R2);   {pivotised QR decomposition}
input: $M \in \mathbb{K}^{n\times m}$;
output: $Q \in \mathbb{K}^{n\times r}$ orthogonal, $P \in \mathbb{K}^{m\times m}$ permutation matrix, $R_1 \in \mathbb{K}^{r\times r}$ upper triangular with $r = \mathrm{rank}(M)$, $R_2 \in \mathbb{K}^{r\times(m-r)}$.   (2.27)

A modified version of PQR will be presented in (2.37).
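The interfaces (2.26) and (2.27) can be sketched in NumPy/SciPy as follows (our naming; note that in floating-point arithmetic the rank must be detected with a tolerance, which the exact-rank formulation above does not need):

import numpy as np
from scipy.linalg import qr

def RQR(M, tol=1e-12):
    # Reduced QR decomposition: M = Q R with Q having r orthonormal columns.
    Q, R = np.linalg.qr(M, mode='reduced')
    r = int(np.sum(np.abs(np.diag(R)) > tol * np.abs(R).max()))
    return r, Q[:, :r], R[:r, :]

def PQR(M, tol=1e-12):
    # Pivotised QR decomposition: M P = Q [R1 R2] with column pivoting.
    Q, R, piv = qr(M, mode='economic', pivoting=True)
    r = int(np.sum(np.abs(np.diag(R)) > tol * np.abs(R).max()))
    P = np.eye(M.shape[1])[:, piv]   # permutation matrix with M @ P == Q @ R
    return r, P, Q[:, :r], R[:r, :r], R[:r, r:]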
The (two-sided) reduced singular-value decomposition from Definition 2.27 leads to

procedure RSVD(n, m, r, M, U, Σ, V);   {reduced SVD}
input: $M \in \mathbb{K}^{n\times m}$;
output: $U \in \mathbb{K}^{n\times r}$, $V \in \mathbb{K}^{m\times r}$ orthogonal with $r = \mathrm{rank}(M)$, $\Sigma = \mathrm{diag}\{\sigma_1, \dots, \sigma_r\} \in \mathbb{R}^{r\times r}$ with $\sigma_1 \ge \dots \ge \sigma_r > 0$.   (2.28)

Here the integers $n, m$ may also be replaced with index sets $I$ and $J$. For the cost $N_{SVD}(n, m)$, see Corollary 2.24a.

The left-sided reduced singular-value decomposition (cf. Remark 2.28) is denoted by

procedure LSVD(n, m, r, M, U, Σ);   {left-sided reduced SVD}
input: $M \in \mathbb{K}^{n\times m}$;
output: $U$, $r$, $\Sigma$ as in (2.28).   (2.29)

Its cost is
$$N_{LSVD}(n, m) := \tfrac{1}{2}n(n+1)N_m + \tfrac{8}{3}n^3,$$
where $N_m$ is the cost of the scalar product of rows of $M$. In general, $N_m = 2m - 1$ holds, but it may be smaller for structured matrices (cf. Remark 7.16).

In the procedures above, $M$ is a general matrix from $\mathbb{K}^{n\times m}$. Matrices $M \in \mathcal{R}_r$ (cf. (2.6)) may be given in the form
$$M = \sum_{\nu=1}^{r}\sum_{\mu=1}^{r} c_{\nu\mu}\, a_\nu b_\mu^H = ACB^H \qquad \left(a_\nu \in \mathbb{K}^n,\ A = [a_1\, a_2 \cdots] \in \mathbb{K}^{n\times r},\ b_\nu \in \mathbb{K}^m,\ B = [b_1\, b_2 \cdots] \in \mathbb{K}^{m\times r}\right). \tag{2.30}$$
Then the following approach has a cost proportional to $n + m$ if $r \ll n, m$ (cf. [138, Alg. 2.17]), but also for $r \approx n, m$ it is cheaper than the direct computation of the product $M = ACB^H$ followed by a singular-value decomposition.
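A sketch of the kind of approach meant (along the lines of [138, Alg. 2.17]; our implementation, assuming NumPy, and the exact algorithm there may differ in details): reduced QR decompositions of $A$ and $B$ reduce $M = ACB^H$ to an $r \times r$ problem, so that only a small SVD remains. The QR steps cost $O(nr^2)$ and $O(mr^2)$ and the small SVD $O(r^3)$, hence the total cost is linear in $n + m$ for fixed $r$.

import numpy as np

def rsvd_of_ACBH(A, C, B):
    # Reduced SVD of M = A C B^H without forming the n x m matrix M.
    QA, RA = np.linalg.qr(A, mode='reduced')   # A = QA RA, cost O(n r^2)
    QB, RB = np.linalg.qr(B, mode='reduced')   # B = QB RB, cost O(m r^2)
    # M = QA (RA C RB^H) QB^H; SVD of the small r x r core matrix:
    Uc, s, Vch = np.linalg.svd(RA @ C @ RB.conj().T)
    U = QA @ Uc                                # n x r, orthonormal columns
    V = QB @ Vch.conj().T                      # m x r, orthonormal columns
    return U, s, V                             # M = U diag(s) V^H

rng = np.random.default_rng(1)
n, m, r = 200, 150, 4
A = rng.standard_normal((n, r))
C = rng.standard_normal((r, r))
B = rng.standard_normal((m, r))
U, s, V = rsvd_of_ACBH(A, C, B)
assert np.allclose(U @ np.diag(s) @ V.conj().T, A @ C @ B.conj().T)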


