Extending Linear Regression: Weighted Least Squares, Heteroskedasticity, Local Polynomial Regression

36-350, Data Mining

23 October 2009

Contents

1 Weighted Least Squares
2 Heteroskedasticity
  2.1 Weighted Least Squares as a Solution to Heteroskedasticity
3 Local Linear Regression
4 Exercises

1 Weighted Least Squares

Instead of minimizing the residual sum of squares,

    RSS(\vec{\beta}) = \sum_{i=1}^{n} (y_i - \vec{x}_i \cdot \vec{\beta})^2    (1)

we could minimize the weighted sum of squares,

    WSS(\vec{\beta}, \vec{w}) = \sum_{i=1}^{n} w_i (y_i - \vec{x}_i \cdot \vec{\beta})^2    (2)

This includes ordinary least squares as the special case where all the weights w_i = 1. We can solve it by the same kind of algebra we used to solve the ordinary linear least squares problem. But why would we want to solve it? For three reasons.

1. Focusing accuracy. We may care more strongly about predicting the response for certain values of the input (ones we expect to see often again, ones where mistakes are especially costly or embarrassing or painful, etc.) than others. If we give the points x_i near that region big weights w_i, and points elsewhere smaller weights, the regression will be pulled towards matching the data in that region.

2. Discounting imprecision. Ordinary least squares is the maximum likelihood estimate when the \epsilon in Y = \vec{X} \cdot \vec{\beta} + \epsilon is IID Gaussian white noise. This means that the variance of \epsilon has to be constant, and we measure the regression curve with the same precision everywhere. This situation, of constant noise variance, is called homoskedasticity. Often, however, the magnitude of the noise is not constant, and the data are heteroskedastic. When we have heteroskedasticity, even if each noise term is still Gaussian, ordinary least squares is no longer the maximum likelihood estimate, and so no longer efficient. If, however, we know the noise variance \sigma_i^2 at each measurement i, and set w_i = 1/\sigma_i^2, we get the heteroskedastic MLE, and recover efficiency. To say the same thing slightly differently, there's just no way that we can estimate the regression function as accurately where the noise is large as we can where the noise is small. Trying to give equal attention to all parts of the input space is a waste of time; we should be more concerned about fitting well where the noise is small, and expect to fit poorly where the noise is big.

3. Doing something else. There are a number of other optimization problems which can be transformed into, or approximated by, weighted least squares. The most important of these arises from generalized linear models, where the mean response is some nonlinear function of a linear predictor. (Logistic regression is an example.)

In the first case, we decide on the weights to reflect our priorities. In the third case, the weights come from the optimization problem we'd really rather be solving. What about the second case, of heteroskedasticity? ...
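The weighted fit can be computed with the same closed-form algebra as ordinary least squares: \hat{\beta} = (X^T W X)^{-1} X^T W y, where W is the diagonal matrix of the weights w_i. The sketch below illustrates that solve, together with the w_i = 1/\sigma_i^2 weighting from reason 2; the simulated data, the particular linear model (intercept 3, slope 2), and the assumption that the \sigma_i are known exactly are all illustrative choices, not taken from the notes.

```python
# A minimal sketch (illustrative, not from the notes) of weighted least squares
# via the closed form beta-hat = (X^T W X)^{-1} X^T W y, using only numpy.
import numpy as np

rng = np.random.default_rng(0)

# Simulate heteroskedastic data: the noise standard deviation grows with x.
n = 200
x = rng.uniform(0, 10, size=n)
sigma = 0.5 + 0.5 * x                  # sigma_i, assumed known here
y = 3.0 + 2.0 * x + rng.normal(scale=sigma)

# Design matrix with an intercept column.
X = np.column_stack([np.ones(n), x])

def wls(X, y, w):
    """Minimize the weighted sum of squares sum_i w_i (y_i - x_i . beta)^2."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

beta_ols = wls(X, y, np.ones(n))       # all weights equal to 1: ordinary least squares
beta_wls = wls(X, y, 1.0 / sigma**2)   # w_i = 1/sigma_i^2: the heteroskedastic MLE

print("OLS estimate:", beta_ols)
print("WLS estimate:", beta_wls)
```

Both estimates are unbiased here; the point of the w_i = 1/\sigma_i^2 weighting is efficiency, so the WLS estimate will typically land closer to the true coefficients because it discounts the noisy observations at large x.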