Lecture 29. Simple linear regression.

29.1 Method of least squares.

Suppose that we are given a sequence of observations $(X_1, Y_1), \ldots, (X_n, Y_n)$, where each observation is a pair of numbers $X_i, Y_i \in \mathbb{R}$. Suppose that we want to predict the variable $Y$ as a function of $X$ because we believe that there is some underlying relationship between $Y$ and $X$ and, for example, $Y$ can be approximated by a function of $X$, i.e. $Y \approx f(X)$. We will consider the simplest case, when $f(x)$ is a linear function of $x$:

$$f(x) = \beta_0 + \beta_1 x.$$

[Figure 29.1: The least-squares line.]

Of course, we want to find the line that fits our data best, and one can define the measure of the quality of the fit in many different ways. The most common approach is to measure how well $Y_i$ is approximated by $\beta_0 + \beta_1 X_i$ in terms of the squared difference $(Y_i - (\beta_0 + \beta_1 X_i))^2$, which means that we measure the quality of the approximation globally by the loss function

$$L = \sum_{i=1}^n (\underbrace{Y_i}_{\text{actual}} - \underbrace{(\beta_0 + \beta_1 X_i)}_{\text{estimate}})^2 \to \text{minimize over } \beta_0, \beta_1,$$

and we want to minimize it over all choices of the parameters $\beta_0, \beta_1$. The line that minimizes this loss is called the least-squares line. To find the critical points we write:

$$\frac{\partial L}{\partial \beta_0} = -\sum_{i=1}^n 2(Y_i - \beta_0 - \beta_1 X_i) = 0.$$
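As a minimal sketch of the method above: setting the partial derivatives of $L$ with respect to $\beta_0$ and $\beta_1$ to zero and solving yields the standard closed-form estimates $\hat{\beta}_1 = \sum_i (X_i - \bar{X})(Y_i - \bar{Y}) / \sum_i (X_i - \bar{X})^2$ and $\hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{X}$. The function and variable names below are illustrative, not from the lecture.

```python
def least_squares_line(xs, ys):
    """Return (beta0, beta1) minimizing sum((y - (beta0 + beta1*x))**2)."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    # Slope: centered cross-products over centered sum of squares of x.
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    sxx = sum((x - x_bar) ** 2 for x in xs)
    beta1 = sxy / sxx
    # Intercept follows from the critical-point equation for beta0.
    beta0 = y_bar - beta1 * x_bar
    return beta0, beta1

# Example: points lying exactly on y = 1 + 2x recover the line exactly.
b0, b1 = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])
```

For data on an exact line the loss $L$ is zero at the fitted parameters, which is a convenient sanity check for the formulas.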
This note was uploaded on 10/11/2009 for the course STATISTICS 18.443, taught by Professor Dmitry Panchenko during the Spring '09 term at MIT.