# 14. Fitting when both variables have uncertainties

The essential assumption of standard least squares is that the measurement errors lie in the $y$-direction only. If both variables have uncertainties, you have to be careful because that assumption is violated: if you go ahead with a standard least-squares fit when there are errors in both coordinates, the slope comes out systematically too small. Thanks to Jefferys (1980), the ML formulation of this problem is straightforward. Nevertheless, there is a lot of confusion about such fitting and not-inconsiderable propagation of myth. Before reviewing Jefferys' formulation, let's look at two earlier approaches:

1. Taylor §8.4 argues that you can account for the $x$-variance $\sigma_{x_m}^2$ by inflating the $y$-variance via the usual error propagation, i.e. by defining an equivalent $y$-variance $\sigma_{y_m,\rm equiv}^2 = \sigma_{y_m}^2 + (a_1 \sigma_{x_m})^2$, where $a_1$ is the slope. This is equivalent to our results below.

2. Isobe et al. (1990, ApJ 364, 104) discuss the case incorrectly. Look in particular at their Section V, where they make five numbered recommendations. Two of these are incorrect:

   (a) Number 3 says, in essence, that if you have measurement errors in $y$ but not in $x$, and you want to predict $x$ from $y$ in some future dataset, you should least-squares fit the $x$ values (which have no errors) to the $y$. This is *flat wrong*: again, it leads to a slope that is systematically too small. The proper procedure is to fit $y$ to $x$ in the standard way, which is consistent with the ML formulation and gives the right answer; then use the resulting parameters, whose errors you know about, to predict $x$ from $y$ in the future.

   (b) Number 4 says that if both $x$ and $y$ have errors, and your main focus is finding the true slope, you should use their "bisector" method. I won't explain this method, because the concept is wrong.

## 14.1 A preliminary: why the slope is systematically small

Why is the derived slope systematically too small if you use the standard least-squares technique when both variables have errors?
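Before working through the algebra, the effective-variance recipe from Taylor §8.4 mentioned above is easy to sketch numerically. The snippet below is an illustrative toy, not part of the notes: the data, the noise levels, and the use of `np.polyfit` are all assumptions. Because the slope $a_1$ appears in its own equivalent variance, the fit has to be iterated.

```python
import numpy as np

# Toy data for y = a0 + a1*x with noise in BOTH coordinates.
# All numbers here are illustrative assumptions, not from the notes.
rng = np.random.default_rng(42)
M = 200
a0_true, a1_true = 1.0, 2.0
x_star = np.linspace(0.0, 10.0, M)            # perfectly-known abscissae
sigma_x = rng.uniform(0.2, 0.6, M)            # per-point x uncertainties
sigma_y = rng.uniform(0.2, 0.6, M)            # per-point y uncertainties
x = x_star + rng.normal(0.0, sigma_x)         # measured (noisy) x
y = a0_true + a1_true * x_star + rng.normal(0.0, sigma_y)

# Taylor's recipe: fold the x-variance into an equivalent y-variance,
#   sigma_equiv^2 = sigma_y^2 + (a1 * sigma_x)^2,
# and iterate, because the slope a1 appears in its own weights.
a1 = np.polyfit(x, y, 1)[0]                   # ordinary LS starting guess
for _ in range(20):
    sigma_equiv = np.sqrt(sigma_y**2 + (a1 * sigma_x)**2)
    a1, a0 = np.polyfit(x, y, 1, w=1.0 / sigma_equiv)  # polyfit wants w ~ 1/sigma
```

With uniform per-point errors this reweighting would leave the best-fit values unchanged and matter only for the quoted parameter uncertainties; here the sigmas vary from point to point, so the weights do shift the fit.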
To see this, take a look back at equation 0.5, where we explicitly wrote the normal equations for fitting a straight line of the form $A s_m + B t_m = y_m$. To focus the discussion and make it easy, replace that problem with a single-parameter solution for only the slope $B$, and use the usual variables $(x, y)$ in place of $(t, y)$. Then we are fitting the set of $M$ equations


$$B x_m = y_m. \tag{14.1a}$$

The set of two normal equations becomes just the single equation

$$B\,[x^2] = [xy], \tag{14.1b}$$

or, writing out the sums explicitly,

$$B = \frac{\sum_{m=0}^{M-1} x^*_m y_m}{\sum_{m=0}^{M-1} x^{*2}_m}. \tag{14.1c}$$

Here we use the star to designate the perfectly-known independent variable $x^*_m$. It is important to realize that the $x_m$ that appear in this equation are the perfectly-known ones $x^*_m$; this is a fundamental tenet of least-squares fitting, which comes from the concept and principle of maximum likelihood (ML). Because $B$ is defined by the $x^*_m$ and we are asking what happens when we use the imperfectly known $x_m$ instead, let us reduce the problem to its essence and imagine that $y_m$ is perfectly known, i.e. $y_m = y^*_m$, and that

$$x^*_m = x_m - \delta x_m, \tag{14.2}$$

where $\delta x_m$ is the observational error in point $m$. If we do standard least squares on this situation, then we (incorrectly) rewrite equation 14.1c to read

$$B = \sum_{m=0}^{M-1} \cdots$$
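A quick Monte Carlo sketch makes the bias concrete. All the numbers below are assumed toy values, not from the notes: plugging noisy $x_m$ into equation 14.1c pulls the average fitted slope below the true $B$, roughly by the factor $\sum x^{*2}_m / (\sum x^{*2}_m + M\sigma_x^2)$, because the noise leaves the numerator unbiased on average while inflating the denominator.

```python
import numpy as np

# Monte Carlo illustration (toy numbers, all assumed): using noisy x_m in
# eq. 14.1c biases the fitted slope low, roughly by the factor
#   sum(x*^2) / (sum(x*^2) + M * sigma_x^2).
rng = np.random.default_rng(1)
B_true, sigma_x, M, trials = 2.0, 1.0, 100, 2000
x_star = np.linspace(1.0, 10.0, M)
y = B_true * x_star                            # y perfectly known, as in the text

B_hat = np.empty(trials)
for t in range(trials):
    x = x_star + rng.normal(0.0, sigma_x, M)   # x_m = x*_m + delta x_m
    B_hat[t] = np.sum(x * y) / np.sum(x * x)   # eq. 14.1c with noisy x in place of x*

attenuation = np.sum(x_star**2) / (np.sum(x_star**2) + M * sigma_x**2)
```

Averaged over the trials, `B_hat.mean()` lands below `B_true`, close to `B_true * attenuation`: the noise adds $M\sigma_x^2$ to the denominator of 14.1c on average while leaving the numerator unbiased.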