EE 103 Lecture Notes, Section 6 Addendum, Professor S. E. Jacobsen

Consider the system of linear equations $Ax = b$, where $A$ is $m \times n$, $m > n$, and the columns $a_1, a_2, \ldots, a_n$ of $A$ are linearly independent (i.e., $\operatorname{rank} A = n$). Such a system of equations usually has no solution. We define, for a given $x \in R^n$, the error vector
$$e(x) = Ax - b.$$

Linear Least Squares (LLS)

Least-squares problems are those for which the norm is the Euclidean norm $\|z\| = \sqrt{z^T z}$. In this case we may write
$$\min_{x \in R^n} \|e(x)\| \;\equiv\; \min_{x \in R^n} \|e(x)\|^2 \;=\; \min_{x \in R^n} e(x)^T e(x) \;=\; \min_{x \in R^n} \sum_{i=1}^{m} e_i(x)^2. \qquad (6.1)$$
That is, if the Euclidean norm is used, we choose an $x$ that minimizes the sum of the squared errors; hence the term "least squares".

Let $f(x) = e(x)^T e(x) = \|e(x)\|^2$; we wish to minimize $f(x)$ and, of course, if $x^*$ is a minimizer, then $x^*$ must satisfy the vector equation
$$\nabla f(x^*) = 0.$$
When this vector equation is linear in the unknowns $x$, we have a so-called linear least squares problem. For the remainder of this section, we will focus on the linear least squares problem. Now,
$$f(x) = e(x)^T e(x) = (Ax - b)^T (Ax - b) = x^T A^T A x - 2\, b^T A x + b^T b,$$
so that
$$\nabla f(x) = 2\, A^T A x - 2\, A^T b.$$
Therefore,
$$\nabla f(x) = 0 \iff A^T A x = A^T b.$$
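As a concrete illustration of the derivation above, the sketch below forms the normal equations $A^T A x = A^T b$ for a small overdetermined system with $m = 3$, $n = 2$ and solves the resulting $2 \times 2$ system with Cramer's rule. The matrix, right-hand side, and helper function names are all hypothetical choices for this example, not part of the notes; in practice one would use a library routine (and, for numerical stability, a QR factorization rather than forming $A^T A$ explicitly).

```python
# Sketch: least squares via the normal equations A^T A x = A^T b.
# Pure Python; restricted to n = 2 columns so the 2x2 system A^T A x = A^T b
# can be solved in closed form with Cramer's rule.

def transpose(M):
    """Transpose a matrix stored as a list of rows."""
    return [list(col) for col in zip(*M)]

def matmul(M, N):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def lls_normal_equations(A, b):
    """Least-squares solution of A x ~ b for an m x 2 matrix A."""
    At = transpose(A)
    AtA = matmul(At, A)                                   # 2x2 matrix A^T A
    Atb = [sum(At[i][k] * b[k] for k in range(len(b)))    # 2-vector A^T b
           for i in range(2)]
    det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]   # nonzero when rank A = 2
    x0 = (Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det
    x1 = (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det
    return [x0, x1]

# Overdetermined example: fit b_i ~ x0 + x1 * t_i at t = 0, 1, 2.
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [1.0, 2.0, 2.0]
x = lls_normal_equations(A, b)
print(x)  # minimizer of ||Ax - b||^2
```

Here $A^T A = \begin{pmatrix} 3 & 3 \\ 3 & 5 \end{pmatrix}$ and $A^T b = (5, 6)^T$, so the minimizer is $x = (7/6,\, 1/2)$; since the columns of $A$ are linearly independent, $A^T A$ is invertible and the solution is unique.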