Linear Regression

The idea is to predict $y$ using a linear function of $x$, but with allowance for variability in the $y$ values. We write this as the model

$y = \beta_0 + \beta_1 x + \varepsilon,$

where $\varepsilon$ denotes the variability around the central value. Of course, $\beta_0$ and $\beta_1$ are unknown, so we estimate them from some data $(x_1, y_1), \ldots, (x_n, y_n)$. This procedure will give us numbers $a$ and $b$, say, which we hope will be close to $\beta_0$ and $\beta_1$.

The basic principle is that of least squares. We choose $a$ and $b$ so that the line does the best job of predicting $y$ from $x$. Since we can't predict the variable part $\varepsilon$, the natural prediction for $y$ from $x$ is $bx + a$. Thus we choose $a$ and $b$ by minimising the prediction error sum of squares

$S(a, b) = \sum_i (y_i - (bx_i + a))^2.$

Differentiating with respect to $a$ and $b$ we get

$\dfrac{\partial S}{\partial b} = -2 \sum_i x_i (y_i - bx_i - a), \qquad \dfrac{\partial S}{\partial a} = -2 \sum_i (y_i - bx_i - a),$

and now we look for stationary values $\hat{a}$ and $\hat{b}$. Solving $\partial S/\partial a = 0$ with respect to $\hat{a}$ gives $\hat{a} = n\ldots$
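The note is cut off before the closed-form solution, but the stationary-point equations above lead to the standard least-squares estimates $\hat{b} = \sum(x_i - \bar{x})(y_i - \bar{y}) / \sum(x_i - \bar{x})^2$ and $\hat{a} = \bar{y} - \hat{b}\bar{x}$. The sketch below (plain Python, names are my own, not from the note) computes these estimates directly:

```python
def least_squares_fit(xs, ys):
    """Fit y = b*x + a by minimising S(a, b) = sum((y_i - (b*x_i + a))**2).

    Setting the partial derivatives dS/da and dS/db to zero and solving
    gives the usual closed-form least-squares estimates.
    """
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    # b_hat = sum((x_i - x_bar)(y_i - y_bar)) / sum((x_i - x_bar)^2)
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    sxx = sum((x - x_bar) ** 2 for x in xs)
    b = sxy / sxx
    # a_hat = y_bar - b_hat * x_bar  (from solving dS/da = 0)
    a = y_bar - b * x_bar
    return a, b

# Exactly linear data y = 2x + 1, so the fit recovers a = 1, b = 2.
a, b = least_squares_fit([1, 2, 3, 4], [3, 5, 7, 9])
```

With noisy data the same call returns the line minimising the sum of squared prediction errors rather than reproducing the data exactly.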
This note was uploaded on 05/12/2010 for the course APPLIED ST 2010 taught by Professor Various during the Spring '10 term at Universidad Nacional Agraria La Molina.