CHAPTER 7: MISCELLANEOUS TOPICS IN REGRESSION

1. Weighted and Generalized Least Squares
2. Testing and correcting for heteroscedasticity
3. Polynomial regression and response surface methodology
4. Nonlinear regression
1. Weighted and Generalized Least Squares

Consider the model

$$y_i = \sum_{j=1}^{p} x_{ij}\beta_j + \epsilon_i, \qquad 1 \le i \le n, \qquad (1)$$

where the $\{\epsilon_i\}$ are uncorrelated with mean 0 but, in contrast to earlier chapters, do not have common variance. In many cases it is possible (exactly or hypothetically) to determine the variances up to an unknown constant, and this suggests a model of the form

$$\mathrm{Var}(\epsilon_i) = \sigma^2 v_i, \qquad (2)$$

where the $\{v_i\}$ are known and positive.
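As a concrete illustration, model (1)-(2) can be simulated directly. This is a minimal sketch; the sample size, design matrix, coefficients, and multipliers $v_i$ are all hypothetical choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the text): n observations, p covariates.
n, p = 200, 3
X = rng.normal(size=(n, p))            # design matrix with rows (x_i1, ..., x_ip)
beta = np.array([1.0, -2.0, 0.5])      # true coefficients beta_1, ..., beta_p
sigma = 1.5
v = rng.uniform(0.5, 4.0, size=n)      # known positive multipliers v_i

# Model (1)-(2): mean-zero errors with Var(eps_i) = sigma^2 * v_i.
eps = rng.normal(scale=sigma * np.sqrt(v))
y = X @ beta + eps
```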
The appropriate generalization of least squares is weighted least squares: choose the parameters $\hat\beta_1, \ldots, \hat\beta_p$ to minimize

$$\sum_{i=1}^{n} v_i^{-1}\Bigl(y_i - \sum_{j=1}^{p} x_{ij}\hat\beta_j\Bigr)^2. \qquad (3)$$

It is intuitively clear that the weight on the $i$th observation should decrease as $v_i$ increases, but it is not instantly obvious why the weights should be proportional to $v_i^{-1}$. There are at least four justifications of (3). Three of them are as follows:
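A minimal NumPy sketch of criterion (3) follows; the function name wls_fit is my own. Setting the gradient of (3) to zero gives the standard weighted normal equations $(X^\top W X)\hat\beta = X^\top W y$ with $W = \mathrm{diag}(1/v_1, \ldots, 1/v_n)$.

```python
import numpy as np

def wls_fit(X, y, v):
    """Minimize criterion (3): sum_i (1/v_i) * (y_i - x_i . b)^2.

    The minimizer solves the weighted normal equations
    (X^T W X) b = X^T W y with W = diag(1/v_1, ..., 1/v_n).
    """
    w = 1.0 / np.asarray(v)
    XtW = X.T * w          # same as X.T @ diag(w), without forming diag(w)
    return np.linalg.solve(XtW @ X, XtW @ y)
```

With the simulated data above, wls_fit(X, y, v) should land close to the true beta.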
1. Rescaling the model

Define $y_i^* = v_i^{-1/2} y_i$, $x_{ij}^* = v_i^{-1/2} x_{ij}$, $\epsilon_i^* = v_i^{-1/2} \epsilon_i$. Then equation (1) is identical to

$$y_i^* = \sum_{j=1}^{p} x_{ij}^* \beta_j + \epsilon_i^*, \qquad 1 \le i \le n, \qquad (4)$$

in which the coefficients $\{\beta_j\}$ are unchanged, but we now have $\mathrm{Var}(\epsilon_i^*) = \sigma^2$ for all $i$, i.e., the variances are all equal. The least squares criterion for (4) is to choose $\hat\beta_1, \ldots, \hat\beta_p$ to minimize

$$\sum_{i=1}^{n}\Bigl(y_i^* - \sum_{j=1}^{p} x_{ij}^*\hat\beta_j\Bigr)^2,$$

which is the same as (3).
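The rescaling argument can be checked numerically. A sketch, continuing with the hypothetical X, y, v, and wls_fit from the snippets above:

```python
import numpy as np

# y*_i = v_i^{-1/2} y_i and x*_ij = v_i^{-1/2} x_ij, as defined above.
s = 1.0 / np.sqrt(v)
X_star = X * s[:, None]
y_star = y * s

# Ordinary (unweighted) least squares on the rescaled model (4)...
beta_star, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)

# ...agrees with weighted least squares on the original model.
assert np.allclose(beta_star, wls_fit(X, y, v))
```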
2. Weighted least squares estimates are BLUE

It's true! (Proof left as an exercise.)
3. Maximum likelihood

When (2) holds, an extension of (5.12) and (5.13) shows that the likelihood function is given by

$$L = \prod_{i=1}^{n} (2\pi\sigma^2 v_i)^{-1/2} \exp\Bigl\{-\frac{\bigl(y_i - \sum_j x_{ij}\beta_j\bigr)^2}{2\sigma^2 v_i}\Bigr\}.$$

Maximizing this with respect to $\beta_1, \ldots, \beta_p$ is equivalent to minimizing (3).
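To make the equivalence explicit (a standard step, not spelled out in the text): taking logarithms gives

$$\log L = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2}\sum_{i=1}^{n}\log v_i - \frac{1}{2\sigma^2}\sum_{i=1}^{n} v_i^{-1}\Bigl(y_i - \sum_{j=1}^{p} x_{ij}\beta_j\Bigr)^2.$$

Only the last term depends on $\beta_1, \ldots, \beta_p$, and it enters with a negative sign, so maximizing $\log L$ over the $\beta$'s amounts to minimizing the weighted sum of squares (3).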
Grouped data

This is a specific context where the "right" answer is clear-cut, but it also serves as an approximation to the general case. Suppose $y_i = \sum_k y'_{ik}/N_i$, where $y'_{i1}, \ldots, y'_{iN_i}$ are independent data points (with common variance $\sigma^2$) all sampled at the same covariate vector $(x_{i1}, \ldots, x_{ip})$. Then $\mathrm{Var}(y_i) = \sigma^2/N_i$, so model (2) holds with $v_i = N_i^{-1}$.

The ordinary least squares criterion applied to the $\{y'_{ik}\}$ implies that we should choose $\hat\beta_1, \ldots, \hat\beta_p$ to minimize

$$\sum_i \sum_k \Bigl(y'_{ik} - \sum_{j=1}^{p} x_{ij}\hat\beta_j\Bigr)^2. \qquad (5)$$

However, by adding and subtracting $y_i$ inside the parentheses, we easily see that (5) is the same as

$$\sum_i \sum_k (y'_{ik} - y_i)^2 + \sum_i N_i\Bigl(y_i - \sum_{j=1}^{p} x_{ij}\hat\beta_j\Bigr)^2. \qquad (6)$$
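To see why no cross term survives (a one-line check, supplied here for completeness): for each $i$,

$$\sum_k \Bigl(y'_{ik} - \sum_j x_{ij}\hat\beta_j\Bigr)^2 = \sum_k (y'_{ik} - y_i)^2 + N_i\Bigl(y_i - \sum_j x_{ij}\hat\beta_j\Bigr)^2 + 2\Bigl(y_i - \sum_j x_{ij}\hat\beta_j\Bigr)\sum_k (y'_{ik} - y_i),$$

and the final sum vanishes because $y_i$ is the average of $y'_{i1}, \ldots, y'_{iN_i}$. Summing over $i$ gives (6).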
The first term is independent of the unknown parameters, so minimizing (6) is equivalent to minimizing (3) with $v_i = N_i^{-1}$.

This provides our fourth method of justifying (3). Whenever the $\{v_i\}$ are of the form $v_i = A N_i^{-1}$ for some constant $A$ and integers $N_1, \ldots, N_n$, the experiment is identical to a grouped-data experiment, and so (3) is directly justified. However, by a process of rational approximation we can get arbitrarily close to this situation for any $v_1, \ldots, v_n$, so (3) is justified in general.