are the off-diagonal terms in the covariance matrix. Thus, if we denote the covariance matrix for the measured x values by the (MJ) × (MJ) matrix σ, then the most general case has σ with no zero elements. Less general, but much more common, is the situation shown in Figure 14.1. Here, the covariances among the J measured parameters are nonzero for a particular experiment m, but the covariances from one experiment m to another are zero; in other words, each experiment is completely independent of the others. In this less general but very common case, the covariance matrix looks like this. (Note: we denote the covariance matrix by σ, but it contains variances, not dispersions.)

$$
\boldsymbol{\sigma} =
\begin{bmatrix}
\boldsymbol{\sigma}_0 & 0 & 0 & \cdots \\
0 & \boldsymbol{\sigma}_1 & 0 & \cdots \\
0 & 0 & \boldsymbol{\sigma}_2 & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{bmatrix}
\tag{14.6}
$$

Here, each element (including the 0 elements) is itself a J × J matrix. For our specific example of § 14.7, J = 2, so σ_0 is a covariance matrix of the form

$$
\boldsymbol{\sigma}_0 =
\begin{bmatrix}
\sigma_{yy} & \sigma_{yt} \\
\sigma_{yt} & \sigma_{tt}
\end{bmatrix}
\tag{14.7}
$$
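As an illustration of this block-diagonal structure, the following sketch (not part of the original text; the numerical values and the use of scipy are assumptions for illustration only) assembles the (MJ) × (MJ) covariance matrix of equation 14.6 from per-experiment J × J blocks of the form in equation 14.7:

```python
# Illustrative sketch only: assemble the block-diagonal covariance matrix of
# equation 14.6 for the J = 2 case of section 14.7, where each experiment m has
# its own 2x2 block [[sigma_yy, sigma_yt], [sigma_yt, sigma_tt]] (equation 14.7)
# and the experiment-to-experiment covariances are zero.
import numpy as np
from scipy.linalg import block_diag

M, J = 3, 2  # M independent experiments, J measured quantities per experiment

# Hypothetical per-experiment covariance blocks (values made up for illustration)
blocks = [np.array([[0.04, 0.01],
                    [0.01, 0.09]]) for _ in range(M)]

# The full (MJ x MJ) covariance matrix is block diagonal
sigma = block_diag(*blocks)
assert sigma.shape == (M * J, M * J)
```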
Generally, the chi-square is given by (e.g. Cowan equation 2.39)

$$
\chi^2 = \delta \mathbf{x}^{T} \cdot \boldsymbol{\sigma}^{-1} \cdot \delta \mathbf{x}
\tag{14.8}
$$

14.5. Formulation of the Problem and its Solution with Lagrange Multipliers

We will be referring to various versions of the data parameters x and derived parameters a: measured, best-fit, and (for the iterative solution) guessed. The subscript d denotes the set of measured datapoints, of which there are (MJ). The subscript ∗ denotes the set of best-fit quantities; these parameters include not only the datapoints x, but also the derived parameters a. We will be doing an iterative fit using guessed values of both the data and derived parameters, represented by the subscript g.

We begin by writing exact equations for each measurement. The fitted values, subscripted with stars, satisfy the exact equations of condition

$$
\mathbf{f}(\mathbf{x}_*, \mathbf{a}_*) = 0
\tag{14.9a}
$$

This is an M-long vector of functions f(x, a) = 0 (one row for each measurement). This set of M equations doesn't do us much good because we don't know the best-fit (starred) values. Consequently, for the datapoints we define the difference between the best-fit and measured data values

$$
\delta \mathbf{x} = \mathbf{x}_d - \mathbf{x}_*
\tag{14.9b}
$$

This is the negative of Jefferys' definition of the corresponding quantity v̂ in his section II. With this, equation 14.9a becomes

$$
\mathbf{f}(\mathbf{x}_d - \delta \mathbf{x}, \mathbf{a}_*) = 0 .
\tag{14.9c}
$$

Our goal is to solve these M equations for the (MJ) differences δx and the N parameters a_* and, simultaneously, minimize χ². This is a classic minimization problem: we minimize χ² with respect to the (MJ + N) values of δx and a, subject to the M constraints of equation 14.9c. Such problems are solved using Lagrange multipliers. Here, the M Lagrange multipliers form the vector λ. We define the Lagrangian L as

$$
L = \left[ \frac{1}{2}\, \delta \mathbf{x}^{T} \cdot \boldsymbol{\sigma}^{-1} \cdot \delta \mathbf{x} \right]
  + \left[ \mathbf{f}^{T}(\mathbf{x}_d - \delta \mathbf{x},\, \mathbf{a}) \cdot \boldsymbol{\lambda} \right] ;
\tag{14.10}
$$
the 1/2 arises because, for a Gaussian pdf for the errors, the residuals are distributed as $e^{-\chi^2/2}$ (e.g.
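To make the quantities concrete, here is a minimal sketch (an assumption-laden illustration, not the document's algorithm) that evaluates χ² of equation 14.8 and the Lagrangian of equation 14.10 for given values of δx, a, and λ; the constraint function shown is a hypothetical straight-line example of the kind treated in § 14.7:

```python
# Minimal sketch: evaluate chi-square (eq. 14.8) and the Lagrangian (eq. 14.10).
# The constraint function f_line and all inputs are hypothetical placeholders;
# the iterative solution for delta_x, a, and lambda is developed later.
import numpy as np

def chi_square(delta_x, sigma_inv):
    """chi^2 = delta_x^T . sigma^{-1} . delta_x   (equation 14.8)."""
    return delta_x @ sigma_inv @ delta_x

def lagrangian(delta_x, a, lam, x_d, sigma_inv, f):
    """L = [1/2 delta_x^T sigma^{-1} delta_x] + [f^T(x_d - delta_x, a) . lam]  (eq. 14.10)."""
    return 0.5 * chi_square(delta_x, sigma_inv) + f(x_d - delta_x, a) @ lam

def f_line(x, a):
    """Assumed equations of condition: a straight line y = a0 + a1 * t with
    errors in both y and t, so x = (y_0, t_0, y_1, t_1, ...); returns an
    M-long vector, one constraint per measurement."""
    y, t = x[0::2], x[1::2]
    return y - a[0] - a[1] * t
```

At the constrained minimum, L is stationary with respect to δx, a, and λ simultaneously, which is the condition the iterative solution imposes.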