CHAPTER 7  MISCELLANEOUS TOPICS IN REGRESSION

1. Weighted and Generalized Least Squares
2. Testing and correcting for heteroscedasticity
3. Polynomial regression and response surface methodology
4. Nonlinear regression

1. Weighted and Generalized Least Squares

Consider the model

    y_i = \sum_{j=1}^{p} x_{ij} \beta_j + \epsilon_i,    1 \le i \le n,    (1)

where the {\epsilon_i} are uncorrelated with mean 0 but, in contrast to earlier chapters, do not have common variance. In many cases it is possible (exactly or hypothetically) to determine the variances up to an unknown constant, and this suggests a model of the form

    Var(\epsilon_i) = \sigma^2 v_i,    (2)

where the {v_i} are known and \sigma^2 > 0.

The appropriate generalization of least squares is weighted least squares: choose the parameters b_1, ..., b_p to minimize

    \sum_{i=1}^{n} v_i^{-1} \left( y_i - \sum_{j=1}^{p} x_{ij} b_j \right)^2.    (3)

It is intuitively clear that the weight on the i-th observation should decrease as v_i increases, but it is not instantly obvious why the weights should be proportional to v_i^{-1}. There are at least four justifications of (3). Three of them are as follows:

1. Rescaling the model

Define y_i^* = v_i^{-1/2} y_i, x_{ij}^* = v_i^{-1/2} x_{ij}, \epsilon_i^* = v_i^{-1/2} \epsilon_i. Then equation (1) is identical to

    y_i^* = \sum_{j=1}^{p} x_{ij}^* \beta_j + \epsilon_i^*,    1 \le i \le n,    (4)

in which the coefficients {\beta_j} are unchanged, but we now have Var(\epsilon_i^*) = \sigma^2, all equal. The least squares criterion for (4) is to choose b_1, ..., b_p to minimize

    \sum_{i=1}^{n} \left( y_i^* - \sum_{j=1}^{p} x_{ij}^* b_j \right)^2,

which is the same as (3).

2. Weighted least squares estimates are BLUE

It's true! (Proof left as an exercise.)

3. Maximum likelihood

When (2) holds, an extension of (5.12) and (5.13) shows that the likelihood function is given by

    L = \prod_{i=1}^{n} (2\pi\sigma^2 v_i)^{-1/2} \exp\left\{ -\frac{\left( y_i - \sum_j x_{ij} \beta_j \right)^2}{2\sigma^2 v_i} \right\}.

Maximizing this with respect to \beta_1, ..., \beta_p is equivalent to minimizing (3).

Grouped data

A specific context in which the right answer is clear-cut, and which also serves as an approximation to the general case. Suppose \bar{y}_i = \sum_k y_{ik} / N_i, where y_{i1}, ..., y_{iN_i} are independent data points (with common variance) sampled at the same (x_{i1}, ..., x_{ip}) vector. Then Var(\bar{y}_i) = \sigma^2 / N_i, so the model (2) holds with v_i = N_i^{-1}. The ordinary least squares criterion applied to the {y_{ik}} implies that we should choose b_1, ..., b_p to minimize

    \sum_i \sum_k \left( y_{ik} - \sum_{j=1}^{p} x_{ij} b_j \right)^2.    (5)

However, by adding and subtracting \bar{y}_i inside the parentheses, we easily see that (5) is the same as

    \sum_i \sum_k \left( y_{ik} - \bar{y}_i \right)^2 + \sum_i N_i \left( \bar{y}_i - \sum_{j=1}^{p} x_{ij} b_j \right)^2.    (6)

The first term is independent of the unknown parameters, so minimizing (6) is equivalent to minimizing (3) with v_i = N_i^{-1}.
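To make the rescaling argument concrete, here is a minimal numerical sketch in Python (NumPy only). The simulated data, variable names, and parameter values are illustrative assumptions, not part of the notes. It minimizes the weighted criterion (3) by solving the weighted normal equations and checks that the result matches ordinary least squares applied to the rescaled model (4).

# Illustrative sketch: weighted least squares (criterion (3)) versus
# ordinary least squares on the rescaled model (4). Simulated data;
# all names and values are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, 2.0, -0.5])
v = rng.uniform(0.5, 5.0, size=n)            # known variance factors v_i
y = X @ beta + rng.normal(scale=np.sqrt(v))  # eps_i with Var = v_i (sigma = 1)

# Weighted least squares: solve (X' W X) b = X' W y with W = diag(1/v_i)
W = np.diag(1.0 / v)
b_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Rescaled model (4): y*_i = y_i / sqrt(v_i), x*_ij = x_ij / sqrt(v_i),
# then ordinary least squares
X_star = X / np.sqrt(v)[:, None]
y_star = y / np.sqrt(v)
b_ols_star, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)

print(np.allclose(b_wls, b_ols_star))        # True: the two criteria agree

The check prints True because the two minimization problems are algebraically identical, as argued in justification 1 above.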
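The grouped-data argument can be checked in the same spirit. The sketch below (again illustrative: the group structure, sizes, and names are assumptions, not from the notes) fits ordinary least squares to all individual observations y_ik, i.e. criterion (5), and compares it with weighted least squares on the group means with weights N_i, i.e. criterion (3) with v_i = N_i^{-1}.

# Illustrative sketch: OLS on all replicates equals WLS on group means
# with weights N_i. Simulated groups; names and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(1)
m, p = 30, 2                                   # m groups, p coefficients
X_groups = np.column_stack([np.ones(m), rng.normal(size=m)])
beta = np.array([0.5, 1.5])
N = rng.integers(2, 10, size=m)                # group sizes N_i

# Individual observations: N_i replicates at each x-vector
X_full = np.repeat(X_groups, N, axis=0)
y_full = X_full @ beta + rng.normal(size=N.sum())

# OLS on all individual data points, criterion (5)
b_ols, *_ = np.linalg.lstsq(X_full, y_full, rcond=None)

# WLS on the group means ybar_i with weights N_i (v_i = 1/N_i)
ybar = np.array([grp.mean() for grp in np.split(y_full, np.cumsum(N)[:-1])])
W = np.diag(N.astype(float))
b_wls = np.linalg.solve(X_groups.T @ W @ X_groups, X_groups.T @ W @ ybar)

print(np.allclose(b_ols, b_wls))               # True

The agreement is exact (up to rounding) because, as the decomposition (6) shows, the within-group sum of squares does not involve the coefficients.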

chap7 - CHAPTER 7 MISCELLANEOUS TOPICS IN REGRESSION 1....

This preview shows document pages 1 - 9. Sign up to view the full document.

View Full Document Right Arrow Icon
Ask a homework question - tutors are online