EE 103 Lecture Notes Section 6 Addendum, Professor S. E. Jacobsen

Consider the system of linear equations $Ax = b$, where $A$ is $m \times n$, $m > n$, and the columns $a_1, a_2, \dots, a_n$ of $A$ are linearly independent (i.e., $\operatorname{rank} A = n$). Such a system of equations usually has no solution. We define, for a given $x \in \mathbb{R}^n$, the error vector
$$ e(x) = Ax - b. $$

Linear Least Squares (LLS)

Least-squares problems are those for which the norm is the Euclidean norm $\|z\|_2 = \sqrt{z^T z}$. In this case we may write
$$ \min_{x \in \mathbb{R}^n} \|e(x)\|_2^2 \;=\; \min_{x \in \mathbb{R}^n} e(x)^T e(x) \;=\; \min_{x \in \mathbb{R}^n} \sum_{i=1}^{m} e_i(x)^2 \qquad (6.1) $$
That is, if the Euclidean norm is used, we choose an $x$ that minimizes the sum of the squared errors; hence the term "least squares". Let $f(x) = e(x)^T e(x) = \|e(x)\|_2^2$; we wish to minimize $f(x)$ and, of course, if $x$ is a minimizer, then $x$ must satisfy the vector equation
$$ \nabla f(x) = 0. $$
When this vector equation is linear in the unknowns $x$, we have a so-called linear least squares problem. For the remainder of this section, we will focus on the linear least squares problem. Now,
$$ f(x) = e(x)^T e(x) = (Ax - b)^T (Ax - b) = x^T A^T A x - x^T A^T b - b^T A x + b^T b. $$
Therefore,
$$ \nabla f(x) = 2\left( x^T A^T A - b^T A \right) = 0, $$
where the gradient is written here as a row vector.
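A small numerical check of (6.1) and the gradient condition, not part of the original notes: using NumPy on a hypothetical 4-by-2 system (the data below is invented for illustration), the least-squares minimizer makes the gradient $2(A^T A x - A^T b)$ vanish.

```python
import numpy as np

# Hypothetical overdetermined system: m = 4 equations, n = 2 unknowns,
# with linearly independent columns, so Ax = b has no exact solution.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

# Minimize ||e(x)||^2 = ||Ax - b||^2, as in (6.1).
x_star, _, _, _ = np.linalg.lstsq(A, b, rcond=None)

# At the minimizer the gradient 2(A^T A x - A^T b) must be the zero vector.
grad = 2.0 * (A.T @ A @ x_star - A.T @ b)
print(np.allclose(grad, 0.0))  # True
```

For this data the minimizer is $x = (3.5, 1.4)^T$, while no $x$ satisfies all four equations exactly.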
or, by taking the transpose of this latter equation,
$$ A^T A x = A^T b \qquad (6.2) $$
These equations are called the normal equations, and we'll soon learn the meaning of that term. $A^T A$ is $n \times n$, nonsingular, and positive definite. Therefore, mathematically,
$$ x = (A^T A)^{-1} A^T b \qquad (6.3) $$
(this is a mathematical expression and is not to be used for computation). Since the columns of $A$ are linearly independent, the matrix $A^T A$ is PD (positive definite) and therefore Cholesky's method may be employed to factor $A^T A$ as $A^T A = L L^T$, where $L$ is a lower triangular matrix. Given this Cholesky factorization, the equation $L L^T x = A^T b$ may be simply solved by forward and back substitution:
$$ L y = A^T b \quad \text{solves for } y \text{ by forward substitution}, $$
$$ L^T x = y \quad \text{solves for } x \text{ by back substitution}. $$

Geometric Interpretation and Orthogonality

Recall, we derived the solution method for the linear least squares problem by considering the minimization problem (6.1):
$$ \min_{x \in \mathbb{R}^n} \|e(x)\|_2^2 \;=\; \min_{x \in \mathbb{R}^n} e(x)^T e(x) \;=\; \min_{x \in \mathbb{R}^n} \sum_{i=1}^{m} e_i(x)^2. $$
To understand this concept geometrically, and to introduce the notion of orthogonality, consider the set of all linear combinations of the columns of the matrix $A$, called the column space of $A$ (also referred to as the range of $A$):
$$ \mathcal{R}(A) = \{\, y \in \mathbb{R}^m : y = Ax,\ x \in \mathbb{R}^n \,\} = \operatorname{span}\{a_1, a_2, \dots, a_n\}. $$
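The Cholesky-based solve described above (factor $A^T A = L L^T$, then forward and back substitution) can be sketched as follows. This is an illustrative implementation, not the notes' own code; `solve_lls_cholesky` and the sample data are invented for the example.

```python
import numpy as np

def solve_lls_cholesky(A, b):
    """Solve min ||Ax - b||_2 via the normal equations A^T A x = A^T b,
    factoring A^T A = L L^T and substituting twice (equations 6.2)."""
    AtA = A.T @ A              # n x n, positive definite when rank A = n
    Atb = A.T @ b
    L = np.linalg.cholesky(AtA)  # lower triangular, AtA = L @ L.T
    n = L.shape[0]

    # Forward substitution: L y = A^T b.
    y = np.zeros(n)
    for i in range(n):
        y[i] = (Atb[i] - L[i, :i] @ y[:i]) / L[i, i]

    # Back substitution: L^T x = y (row i of L^T is column i of L).
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - L[i + 1:, i] @ x[i + 1:]) / L[i, i]
    return x

# Hypothetical data for illustration.
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])
x = solve_lls_cholesky(A, b)
print(x)
```

The result agrees with `np.linalg.lstsq(A, b, rcond=None)[0]`; in practice one would note that forming $A^T A$ explicitly squares the condition number, which is one reason (6.3) is flagged as "not to be used for computation" for ill-conditioned problems.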

This note was uploaded on 06/01/2010 for the course EE 103, taught by Professor Jacobsen during the Spring '09 term at UCLA.