lin_regr - Linear Regression


The idea is to predict y using a linear function of x, but with allowance for variability in the y values. We write this as the model

    y = \beta_1 x + \beta_0 + \epsilon,

where \epsilon denotes the variability around the central value. Of course, \beta_0 and \beta_1 are unknown, so we estimate them from some data (x_1, y_1), ..., (x_n, y_n). This procedure will give us numbers a and b, say, which we hope will be close to \beta_0 and \beta_1.

The basic principle is that of least squares. We choose a and b so that the line does the best job of predicting y from x. Since we cannot predict the variable bit \epsilon, the natural prediction for y from x is bx + a. Thus we choose a and b by minimising the prediction error sum of squares

    S(a, b) = \sum_i (y_i - (b x_i + a))^2.

Differentiating with respect to a and b we get

    \frac{\partial S}{\partial b} = -2 \sum_i x_i (y_i - b x_i - a), \qquad \frac{\partial S}{\partial a} = -2 \sum_i (y_i - b x_i - a),

and now we look for the stationary values \hat{a} and \hat{b}. Solving \partial S / \partial a = 0 with respect to \hat{a} gives

    \hat{a} = n^{-1} \sum_i (y_i - \hat{b} x_i) = \bar{y} - \hat{b} \bar{x}.
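The estimates above can be computed directly in closed form. Here is a minimal sketch in plain Python; the slope formula used below (the centred sums-of-products ratio, which follows from solving \partial S / \partial b = 0 together with \hat{a} = \bar{y} - \hat{b}\bar{x}) is the standard least-squares result, and the data are invented purely for illustration:

```python
def least_squares(xs, ys):
    """Return (a, b): the intercept and slope minimising S(a, b)."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    # Slope: b_hat = sum((x_i - x_bar)(y_i - y_bar)) / sum((x_i - x_bar)^2),
    # obtained by solving dS/db = 0 after substituting a_hat.
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    sxx = sum((x - x_bar) ** 2 for x in xs)
    b = sxy / sxx
    # Intercept: a_hat = y_bar - b_hat * x_bar, from solving dS/da = 0.
    a = y_bar - b * x_bar
    return a, b

# Illustrative (made-up) data roughly following y = 2x:
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]
a, b = least_squares(xs, ys)
```

Note that the fitted line always passes through the point of means (\bar{x}, \bar{y}), which is immediate from \hat{a} = \bar{y} - \hat{b}\bar{x}.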

This note was uploaded on 05/12/2010 for the course APPLIED ST 2010 taught by Professor Various during the Spring '10 term at Universidad Nacional Agraria La Molina.


