AIMS Lecture Notes 2006
Peter J. Olver

13. Approximation and Interpolation

We will now apply our minimization results to the interpolation and least squares fitting of data and functions.

13.1. Least Squares.

Linear systems with more equations than unknowns typically do not have solutions. In such situations, the least squares solution to a linear system is one means of getting as close as one can to an actual solution.

Definition 13.1. A least squares solution to a linear system of equations
$$A x = b \tag{13.1}$$
is a vector $x^\star \in \mathbb{R}^n$ that minimizes the Euclidean norm $\| A x - b \|$.

If the system (13.1) actually has a solution, then it is automatically the least squares solution. Thus, the concept of least squares solution is new only when the system does not have a solution. To find the least squares solution, we need to minimize the quadratic function
$$\| A x - b \|^2 = (A x - b)^T (A x - b) = (x^T A^T - b^T)(A x - b) = x^T A^T A x - 2\, x^T A^T b + b^T b = x^T K x - 2\, x^T f + c,$$
where
$$K = A^T A, \qquad f = A^T b, \qquad c = \| b \|^2. \tag{13.2}$$
According to Theorem 12.10, the Gram matrix $K = A^T A$ is positive definite if and only if $\ker A = \{0\}$. In this case, Theorem 12.12 supplies us with the solution to this minimization problem.

Theorem 13.2. Assume that $\ker A = \{0\}$. Set $K = A^T A$ and $f = A^T b$. Then the least squares solution to the linear system $A x = b$ is the unique solution $x^\star$ to the so-called normal equations
$$K x = f, \qquad \text{or, explicitly,} \qquad (A^T A)\, x = A^T b, \tag{13.3}$$
namely
$$x^\star = (A^T A)^{-1} A^T b. \tag{13.4}$$
The least squares error is
$$\| A x^\star - b \|^2 = \| b \|^2 - f^T x^\star = \| b \|^2 - b^T A (A^T A)^{-1} A^T b. \tag{13.5}$$

Note that the normal equations (13.3) can be simply obtained by multiplying both sides of the original system $A x = b$ by $A^T$.
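To make the normal-equation recipe concrete, here is a minimal pure-Python sketch on a small overdetermined system. The $3 \times 2$ matrix and right hand side are our own illustrative choice, not taken from the notes; it forms $K = A^T A$ and $f = A^T b$, solves the $2 \times 2$ normal equations by Cramer's rule, and checks the error formula (13.5) against the residual computed directly.

```python
# Least squares via the normal equations (A^T A) x = A^T b.
# Illustrative 3x2 system (not from the notes); ker A = {0}, so K is
# positive definite and the least squares solution is unique.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
b = [1.0, 1.0, 1.0]        # incompatible: rows 1, 2 force x1 = x2 = 1,
                           # but then row 3 gives 2, not 1

At = transpose(A)
K = matmul(At, A)          # Gram matrix K = A^T A
f = matvec(At, b)          # f = A^T b

# Solve the 2x2 normal equations K x = f by Cramer's rule.
det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
x = [(f[0] * K[1][1] - K[0][1] * f[1]) / det,
     (K[0][0] * f[1] - f[0] * K[1][0]) / det]
print(x)                   # the least squares solution x* = (2/3, 2/3)

# Error check: ||A x* - b||^2 should equal ||b||^2 - f^T x*, as in (13.5).
residual = [matvec(A, x)[i] - b[i] for i in range(3)]
err_direct = sum(r * r for r in residual)
err_formula = sum(bi * bi for bi in b) - sum(fi * xi for fi, xi in zip(f, x))
print(err_direct, err_formula)   # both equal 1/3
```

For larger systems one would of course replace Cramer's rule by Gaussian elimination (or a Cholesky factorization, since $K$ is positive definite), but the structure of the computation is the same.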
In particular, if $A$ is square and invertible, then $(A^T A)^{-1} = A^{-1} (A^T)^{-1}$, and so (13.4) reduces to $x = A^{-1} b$, while the two terms in the error formula (13.5) cancel out, producing zero error. In the rectangular case, when inversion is not allowed, (13.4) gives a new formula for the solution to a compatible linear system $A x = b$.

Example 13.3. Consider the linear system
$$x_1 + 2 x_2 = 1, \quad 3 x_1 - x_2 + x_3 = 0, \quad -x_1 + 2 x_2 + x_3 = -1, \quad x_1 - x_2 - 2 x_3 = 2, \quad 2 x_1 + x_2 - x_3 = 2,$$
consisting of 5 equations in 3 unknowns. The coefficient matrix and right hand side are
$$A = \begin{pmatrix} 1 & 2 & 0 \\ 3 & -1 & 1 \\ -1 & 2 & 1 \\ 1 & -1 & -2 \\ 2 & 1 & -1 \end{pmatrix}, \qquad b = \begin{pmatrix} 1 \\ 0 \\ -1 \\ 2 \\ 2 \end{pmatrix}.$$
A direct application of Gaussian Elimination shows that the system is incompatible: it has no solution. Of course, to apply the least squares method, we are not required to check this in advance. If the system has a solution, it is the least squares solution too, and the least squares method will find it.
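The computation in Example 13.3 can be carried out numerically. The sketch below rebuilds the $5 \times 3$ system (with the signs of the coefficients restored from context, since the extracted text dropped minus signs), forms the normal equations, and solves them with a small Gaussian elimination routine; the nonzero residual confirms the original system is incompatible.

```python
# Example 13.3 worked numerically: form K = A^T A and f = A^T b for the
# 5x3 system, then solve the normal equations K x = f.
# Signs in A and b are reconstructed from the garbled source text.

A = [[ 1,  2,  0],
     [ 3, -1,  1],
     [-1,  2,  1],
     [ 1, -1, -2],
     [ 2,  1, -1]]
b = [1, 0, -1, 2, 2]

def solve(M, rhs):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(M)
    M = [row[:] + [r] for row, r in zip(M, rhs)]      # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Normal equations: K = A^T A (3x3), f = A^T b.
K = [[sum(A[i][p] * A[i][q] for i in range(5)) for q in range(3)] for p in range(3)]
f = [sum(A[i][p] * b[i] for i in range(5)) for p in range(3)]
x_star = solve(K, f)
print([round(v, 4) for v in x_star])     # approximately [0.4119, 0.2482, -0.9532]

# The squared residual ||A x* - b||^2 is nonzero, confirming that the
# original 5x3 system has no exact solution.
residual2 = sum((sum(A[i][j] * x_star[j] for j in range(3)) - b[i]) ** 2
                for i in range(5))
print(round(residual2, 4))
```

The same answer comes from formula (13.5): $\| b \|^2 - f^T x^\star = 10 - 9.9676 \approx 0.0324$, so the least squares error is $\| A x^\star - b \| \approx 0.18$.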