F23-RemedialMeasures

PubH 7405: REGRESSION ANALYSIS MLR: MORE DIAGNOSTICS & SOME REMEDIES
The data are in the form \{(y_i;\ x_{1i}, x_{2i}, \dots, x_{ki})\}_{i=1,\dots,n}. Multiple Regression Model:

Y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k + \varepsilon, \quad \varepsilon \sim N(0, \sigma^2)

Do the data fit the Normal Error Regression Model? If not, could we do something to make them fit?
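As a quick illustration of this diagnostic question (not from the original slides; the simulated data and variable names below are assumptions), one can fit the model by ordinary least squares and examine the residuals for normality, nonlinearity, and non-constant variance:

```python
# Minimal sketch: fit a multiple regression and check whether the residuals
# behave like normal errors with constant variance. Data are simulated.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n, k = 100, 3
X = rng.normal(size=(n, k))                         # simulated predictors x_1, ..., x_k
beta = np.array([2.0, 1.0, -0.5, 0.3])              # beta_0, beta_1, ..., beta_k
y = beta[0] + X @ beta[1:] + rng.normal(scale=1.0, size=n)   # N(0, sigma^2) errors

fit = sm.OLS(y, sm.add_constant(X)).fit()
print(stats.shapiro(fit.resid))   # rough formal check of normality of the errors
# A plot of fit.resid against fit.fittedvalues would reveal nonlinearity or a
# non-constant error variance (e.g., a megaphone pattern).
```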
REMEDIAL MEASURES FOR NONLINEARITY
If a "linear model" is found not appropriate for the added value of a certain predictor, there are two choices: (1) add a power term (a quadratic term), or (2) use some transformation of the data to create a fit for the transformed data.
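A minimal sketch of the two remedies on simulated data with a truly curved relationship; the data and variable names are illustrative assumptions, not from the slides:

```python
# Remedy (1): add a quadratic term in X.  Remedy (2): fit Y against a
# transformed X (here log X).  Either can capture a curved relationship.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, size=200)
y = 3 + 2 * np.log(x) + rng.normal(scale=0.3, size=200)   # truly curved relation

# Remedy (1): polynomial (quadratic) model
fit_quad = sm.OLS(y, sm.add_constant(np.column_stack([x, x**2]))).fit()

# Remedy (2): transform the predictor and fit the transformed data
fit_log = sm.OLS(y, sm.add_constant(np.log(x))).fit()

print(fit_quad.rsquared, fit_log.rsquared)   # both remedies capture the curvature
```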
Each has advantages and disadvantages: the first approach (quadratic model) may yield better insights but may lead to more technical difficulties; transformations (log, reciprocal, etc.) are simpler but may obscure the real underlying relationship between Y and that predictor.
It is more time-consuming to detect nonlinearity, but simpler to fix: a log transformation of an X, or the addition of its quadratic term, would normally solve the problem of nonlinearity. It is simpler to detect a non-constant variance; an added-value plot is not needed. However, it is more difficult to fix, because in order to change the variance of Y we have to make a transformation on Y itself.
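To illustrate that point (a sketch on assumed, simulated data, not from the slides), a log transformation of Y can stabilize a variance that grows with the mean, whereas transforming X cannot:

```python
# Simulated data whose spread grows with x: residual variance is non-constant
# on the raw scale but roughly constant after a log transformation of Y.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.uniform(1, 10, size=300)
y = np.exp(0.5 + 0.3 * x + rng.normal(scale=0.2, size=300))   # spread of Y grows with x

fit_raw = sm.OLS(y, sm.add_constant(x)).fit()                 # non-constant residual variance
fit_log = sm.OLS(np.log(y), sm.add_constant(x)).fit()         # variance stabilized on log scale

# Ratio of residual spread at large x vs. small x: far from 1 on the raw scale,
# close to 1 after transforming Y.
print(np.std(fit_raw.resid[x > 7]) / np.std(fit_raw.resid[x < 4]))
print(np.std(fit_log.resid[x > 7]) / np.std(fit_log.resid[x < 4]))
```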
Transformations of Y may be helpful in reducing or eliminating unequal variances of the error terms. However, a transformation of Y also changes the regression relation/function. In many circumstances an appropriate linear regression relationship has already been found but the variances of the error terms are unequal; a transformation would then make that linear relationship nonlinear, which is a more severe violation. An alternative to data transformations (which are more difficult to find): use the method of "weighted least squares" instead of regular least squares.
With Weighted Least Squares (WLS), estimators for the regression coefficients are obtained by minimizing the quantity Q_w, where w_i is a "weight" associated with the i-th error term; setting the partial derivatives equal to zero yields the "normal equations":

Q_w = \sum_{i=1}^{n} w_i (Y_i - \beta_0 - \beta_1 X_i)^2
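A minimal sketch of WLS under these definitions, on simulated data (the variable names and weights are illustrative assumptions): the weighted normal equations can be solved directly and checked against statsmodels' WLS routine:

```python
# Weighted least squares: minimizing Q_w is equivalent to solving the weighted
# normal equations (X' W X) b = X' W y.  Checked against statsmodels WLS.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(1, 10, size=200)
y = 1 + 2 * x + rng.normal(scale=x)          # error SD proportional to x
w = 1.0 / x**2                               # weights: inverse of the variance (up to a constant)

X = sm.add_constant(x)
W = np.diag(w)
b_normal_eq = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # solve the weighted normal equations
b_wls = sm.WLS(y, X, weights=w).fit().params              # same fit via statsmodels

print(b_normal_eq)   # [beta_0_hat, beta_1_hat]
print(b_wls)         # identical estimates
```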
The optimal choice for the weight is the inverse of the variance; when the variance is constant, the ordinary and weighted least squares estimators are identical. For example, in the marginal SLR, when the standard deviation is proportional to X_5 (or the variance is k X_5^2), we minimize:

Q = \sum_{i=1}^{n} \frac{1}{X_{5i}^2} (Y_i - \beta_0 - \beta_1 X_{5i})^2
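A sketch of this particular case, assuming simulated data with standard deviation proportional to X_5: using weights 1/X_{5i}^2 in WLS is equivalent to ordinary least squares on the divided-through data Y/X_5:

```python
# When SD(error) is proportional to X5, WLS with weights 1/X5^2 matches OLS on
# the rescaled model Y/X5 = beta_0*(1/X5) + beta_1 + (error/X5).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x5 = rng.uniform(1, 10, size=500)
y = 4 + 1.5 * x5 + rng.normal(scale=0.5 * x5)   # SD proportional to x5

wls = sm.WLS(y, sm.add_constant(x5), weights=1.0 / x5**2).fit()

# Equivalent OLS on the divided-through data
ols_scaled = sm.OLS(y / x5, np.column_stack([1.0 / x5, np.ones_like(x5)])).fit()

print(wls.params)          # [beta_0_hat, beta_1_hat]
print(ols_scaled.params)   # the same estimates, in the order [beta_0_hat, beta_1_hat]
```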
When the error variances are known, estimators for the regression coefficients are obtained by minimizing the quantity Q_w, where w_i is a "weight" associated with the i-th error term (the optimal choice being the inverse of the known variance); setting the partial derivatives equal to zero yields the "normal equations". The weighted least squares estimators of the regression coefficients are unbiased, consistent, and have minimum variance among unbiased linear estimators:

Q_w = \sum_{i=1}^{n} w_i (Y_i - \beta_0 - \beta_1 X_{i1} - \dots - \beta_k X_{ik})^2
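To illustrate (not prove) the minimum-variance claim, a small simulation under assumed known variances compares the spread of OLS and WLS slope estimates across repeated samples:

```python
# With known, unequal error variances, both OLS and WLS are unbiased, but the
# WLS slope estimates vary less across repeated samples.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = np.linspace(1, 10, 100)
X = sm.add_constant(x)
w = 1.0 / x**2                      # optimal weights: inverse of the known variance

ols_slopes, wls_slopes = [], []
for _ in range(2000):
    y = 2 + 3 * x + rng.normal(scale=x)       # error variance proportional to x^2
    ols_slopes.append(sm.OLS(y, X).fit().params[1])
    wls_slopes.append(sm.WLS(y, X, weights=w).fit().params[1])

print(np.mean(ols_slopes), np.mean(wls_slopes))   # both near 3 (unbiased)
print(np.var(ols_slopes), np.var(wls_slopes))     # WLS variance is smaller
```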