Numerische Mathematik 7, 206–216 (1965)

Numerical Methods for Solving Linear Least Squares Problems*

By G. Golub

Abstract. A common problem in a computer laboratory is that of finding linear least squares solutions. These problems arise in a variety of areas and in a variety of contexts. Linear least squares problems are particularly difficult to solve because they frequently involve large quantities of data, and they are ill-conditioned by their very nature. In this paper, we shall consider stable numerical methods for handling these problems. Our basic tool is a matrix decomposition based on orthogonal Householder transformations.

1. Introduction

Let A be a given m × n real matrix of rank r, and b a given vector. We wish to determine a vector x̂ such that

    ‖b − Ax̂‖ = min,    (1.1)

where ‖·‖ indicates the euclidean norm. If m > n and r < n, then there is no unique solution. Under these conditions, we require simultaneously with (1.1) that

    ‖x̂‖ = min.    (1.2)

Condition (1.2) is a very natural one for many statistical and numerical problems. If m > n and r = n, then it is well known (cf. [4]) that x̂ satisfies the equation

    AᵀAx = Aᵀb.    (1.3)

Unfortunately, the matrix AᵀA is frequently ill-conditioned [6] and influenced greatly by roundoff errors. The following example of Läuchli [8] illustrates this well. Suppose

    A = ⎡ 1  1  1  1  1 ⎤
        ⎢ ε  0  0  0  0 ⎥
        ⎢ 0  ε  0  0  0 ⎥
        ⎢ 0  0  ε  0  0 ⎥
        ⎢ 0  0  0  ε  0 ⎥
        ⎣ 0  0  0  0  ε ⎦

* Reproduction in whole or in part is permitted for any purpose of the United States government. This report was supported in part by Office of Naval Research Contract Nonr-225(37) (NR 044-11) at Stanford University.
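Läuchli's example can be checked numerically. The sketch below is ours, not part of the paper: it uses NumPy in double precision throughout (rather than the paper's double-to-single rounding), and the helper name `lauchli` is our own. For ε below the square root of the machine precision η, the computed AᵀA collapses to the rank-one matrix of all ones even though A itself has full column rank:

```python
import numpy as np

def lauchli(eps, n=5):
    """Läuchli's (n+1) x n example: a row of ones atop eps * identity."""
    return np.vstack([np.ones(n), eps * np.eye(n)])

# For IEEE doubles, eta is about 1.1e-16, so any eps below about 1e-8
# makes fl(1 + eps^2) = 1 and the computed normal-equations matrix
# loses the rank information carried by the eps^2 terms.
eps = 1e-8
A = lauchli(eps)
G = A.T @ A          # every diagonal entry 1 + eps^2 rounds to exactly 1.0

print(np.linalg.matrix_rank(A))   # 5: A has full column rank
print(np.linalg.matrix_rank(G))   # 1: the computed A^T A is all ones
```

This collapse is exactly why no linear equation solver, however accurate, can recover the solution from the computed normal equations (1.3).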
then

    AᵀA = ⎡ 1+ε²   1     1     1     1   ⎤
          ⎢  1    1+ε²   1     1     1   ⎥
          ⎢  1     1    1+ε²   1     1   ⎥    (1.4)
          ⎢  1     1     1    1+ε²   1   ⎥
          ⎣  1     1     1     1    1+ε² ⎦

Clearly, for ε ≠ 0, the rank of AᵀA is five, since the eigenvalues of AᵀA are 5 + ε², ε², ε², ε², ε². Let us assume that the elements of AᵀA are computed using double precision arithmetic and then rounded to single precision accuracy. Now let η be the largest number on the computer such that fl(1.0 + η) = 1.0, where fl(·) indicates the floating point computation. Then if ε < √η, the rank of the computed representation of (1.4) will be one. Consequently, no matter how accurate the linear equation solver, it is impossible to solve the normal equations (1.3).

In [2], Householder stressed the use of orthogonal transformations for solving linear least squares problems. In this paper, we shall exploit these transformations and show their use in a variety of least squares problems.

2. A Matrix Decomposition

Throughout this section, we shall assume m ≥ n = r. Since the euclidean norm of a vector is unitarily invariant,

    ‖b − Ax‖ = ‖c − QAx‖,

where c = Qb and Q is an orthogonal matrix. We choose Q so that

    QA = R̃ = ⎡ R ⎤  } n × n
              ⎣ 0 ⎦  } (m − n) × n,    (2.1)

where R is an upper triangular matrix. Clearly, x̂ = R⁻¹c̃, where c̃ is the first n components of c, and consequently,

    ‖b − Ax̂‖² = Σᵢ₌ₙ₊₁ᵐ cᵢ².
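As an illustration (ours, not part of the paper), the decomposition (2.1) maps directly onto NumPy's QR factorization, which LAPACK computes with Householder reflections; the helper name `lstsq_via_qr` is our own:

```python
import numpy as np

def lstsq_via_qr(A, b):
    """Solve min ||b - Ax|| via QA = [R; 0] for full-column-rank A.

    np.linalg.qr returns the reduced factorization A = Q1 R with R
    upper triangular; Q1.T @ b gives the first n components of c = Qb,
    so x = R^{-1} (Q1.T @ b), without ever forming A^T A.
    """
    Q1, R = np.linalg.qr(A)             # reduced QR: Q1 is m x n, R is n x n
    return np.linalg.solve(R, Q1.T @ b)

# Läuchli's matrix again: the normal equations fail for eps = 1e-8,
# but the orthogonal decomposition handles the problem stably.
eps = 1e-8
A = np.vstack([np.ones(5), eps * np.eye(5)])
b = np.arange(1.0, 7.0)

x = lstsq_via_qr(A, b)
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.linalg.norm(x - x_ref) <= 1e-6 * np.linalg.norm(x_ref)

# Residual identity from Sec. 2: ||b - Ax||^2 equals the sum of squares
# of the last m - n components of c = Q^T b (full m x m factor).
Q, _ = np.linalg.qr(A, mode="complete")
c = Q.T @ b
assert np.isclose(np.linalg.norm(b - A @ x) ** 2, np.sum(c[5:] ** 2))
```

Note the design point the paper is making: the QR route works at the conditioning of A itself, whereas forming AᵀA squares the condition number.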