From the rewritten equation 11.8b we see that we can regard the coefficient vector as being defined by the orthonormal column vectors of V. Moreover, the covariance matrix of equation 11.6 is diagonal, so the principal axes of the $\chi^2$ ellipse are defined by the orthonormal column vectors of V, with lengths proportional to $\left[1/w^2\right]$. See NR Figure 15.6.5 and the associated discussion.

The N orthonormal vectors in V define an N-dimensional space for **a** with N orthogonal directions. Suppose that a particular value $w_n$ is small. In equation 11.5, this means that the associated orthonormal vector $v_n$ (i.e., the associated direction n) in V is not well represented by the set of original nonorthogonal vectors in X. This, in turn, means that you can't represent that direction in **a** of equation 11.8a without amplifying that orthonormal vector by a large factor; these amplification factors are $w_n^{-1}$. This is an unsatisfactory situation because it means some combinations of the original m measurements are highly weighted. As $w_n \rightarrow 0$, the situation becomes not only unsatisfactory but numerically impossible.

Consider the limiting case, $w_n = 0$. In this case, the original $x_m$ values do not have any projection along the associated orthonormal vector $v_n$ in V. (These $v_n$ are called "null" orthonormal vectors.) In equation 11.8a, you can add any multiple of a null $v_n$ to the solution for **a** and it won't change the fit at all (!) because it has absolutely no effect on the fit to the data (because of the particular set of values $x_m$). What multiple is appropriate? Common sense says that, because these null vectors have no meaning for the solution, the multiple should be zero. So in equation 11.8a, instead of trying, in vain, to include this null vector $v_n$ by using a huge multiple, you throw in the towel and eliminate
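The near-degenerate case described above is easy to see numerically. The following sketch (the variable names and the toy design matrix are ours, not the notes') builds a design matrix X whose third basis function almost duplicates the second, takes its SVD, and prints the singular values $w_n$ and the amplification factors $w_n^{-1}$; one factor comes out enormous, signaling the badly represented direction:

```python
import numpy as np

# Hypothetical example: 3 basis functions evaluated at 50 points, where
# the third column is the second plus tiny noise -- a near-degeneracy.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
X = np.column_stack([np.ones_like(t), t, t + 1e-8 * rng.standard_normal(50)])

# SVD: X = U @ diag(w) @ V^T.  The columns of V are the orthonormal
# directions in coefficient space; w holds the singular values w_n.
U, w, Vt = np.linalg.svd(X, full_matrices=False)

print("singular values w_n:      ", w)
print("amplification factors 1/w:", 1.0 / w)  # last one is huge
```

The ratio of largest to smallest $w_n$ (the condition number) is the standard diagnostic: here it is many orders of magnitude, exactly the "highly weighted combination" problem described above.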


it altogether by replacing its corresponding $w_n^{-1} = \infty$ by $w_n^{-1} = 0$. So we have the rule: wherever $w_n = 0$ (or is sufficiently small), replace $w_n^{-1}$ by 0! This replacement yields the minimum-length solution vector **a** and thereby constitutes the least-squares solution.

## 11.3. Important Conclusion for Least Squares!!!

Suppose you have degeneracy, or near-degeneracy. This means that the formulation of the least-squares model is faulty: some of the original basis functions represent (or nearly represent) the same physical quantity, or at least one of the functions is (nearly) a linear combination of the others. Sometimes you can discover the problem and fix it by imposing additional constraints, or by finding two unknown coefficients that are nearly identical. If so, you can reformulate the model and try again. Even if you can't discover the root cause(s) and remove the degeneracy, the SVD solution allows you to bypass the problems associated with degeneracy and provides reasonable best-fit parameter values.
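The zeroing rule can be sketched in a few lines of numpy. This is a minimal illustration (the function name, cutoff parameter, and toy data are ours): compute the SVD, replace $w_n^{-1}$ by 0 wherever $w_n$ falls below a small fraction of the largest singular value, and assemble the solution. With an exactly degenerate design matrix the fit is still recovered, and the returned coefficient vector is the minimum-length one:

```python
import numpy as np

def svd_solve(X, y, rcond=1e-10):
    """Least-squares solution of X @ a = y via SVD with the zeroing rule."""
    U, w, Vt = np.linalg.svd(X, full_matrices=False)
    # The rule: wherever w_n is zero (or sufficiently small), use 0
    # instead of 1/w_n.  rcond sets "sufficiently small".
    winv = np.where(w > rcond * w.max(), 1.0 / w, 0.0)
    # a = V @ diag(winv) @ U^T @ y  (minimum-length solution)
    return Vt.T @ (winv * (U.T @ y))

# Exactly degenerate model: the third basis function is the sum of the
# first two, so one singular value is exactly zero.
t = np.linspace(0.0, 1.0, 20)
X = np.column_stack([np.ones_like(t), t, 1.0 + t])
y = 2.0 + 3.0 * t

a = svd_solve(X, y)
print(a)
print(np.allclose(X @ a, y))  # the fit itself is unaffected
```

Despite the zero singular value, `X @ a` reproduces `y`: zeroing the null direction changes nothing about the fit, only the (otherwise arbitrary) component of **a** along the null vector.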