CHAPTER 2: SIMPLE LINEAR REGRESSION

Examples:

1. Amherst, MA, annual mean temperatures, 1836–1997
2. Summer mean temperatures in Mount Airy (NC) and Charleston (SC), 1948–1996

Scatterplots: outliers? influential values? independent vs. dependent variables

[Figure 2.1. (a) Plot of mean temperature against year for the Amherst data. (b) Plot of mean summer temperature in Charleston against mean summer temperature in Mount Airy. In each case a least squares regression line is shown on the plot.]

The basic model:

    y_i = \alpha + \beta x_i + \epsilon_i,  1 \le i \le n,

under one of two sets of assumptions on the errors:

(a) \epsilon_1, \ldots, \epsilon_n uncorrelated with mean 0 and variance \sigma^2;
(b) \epsilon_1, \ldots, \epsilon_n independent N[0, \sigma^2].

In practice \sigma^2 is unknown.

Method of Least Squares

Rewrite the basic equation as

    y_i = \beta_0 + \beta_1 (x_i - \bar{x}) + \epsilon_i,  1 \le i \le n,

where \bar{x} = \frac{1}{n} \sum_{i=1}^n x_i.

Method: choose estimates \hat{\beta}_0, \hat{\beta}_1 to minimize

    S = \sum_{i=1}^n \left\{ y_i - \hat{\beta}_0 - \hat{\beta}_1 (x_i - \bar{x}) \right\}^2.

See the picture (Figure 2.2). The solution is

    \hat{\beta}_0 = \frac{\sum_{i=1}^n y_i}{n} = \bar{y},
    \hat{\beta}_1 = \frac{\sum_{i=1}^n y_i (x_i - \bar{x})}{\sum_{i=1}^n (x_i - \bar{x})^2}.

[Figure 2.2. Illustration of the least squares criterion on a small artificial data set. The fitted straight line is chosen so as to minimize the sum of squares of vertical distances from the observations to the line.]

Estimation of \sigma^2

First define the residuals (see next figure)

    e_i = y_i - \hat{\beta}_0 - \hat{\beta}_1 (x_i - \bar{x}).

The e_i's are estimates of the original \epsilon_i's. Therefore, an obvious estimate of \sigma^2 would be

    \hat{\sigma}^2 = \frac{\sum_{i=1}^n e_i^2}{n}.

It turns out this is a biased estimator; an unbiased estimator is

    s^2 = \frac{\sum_{i=1}^n e_i^2}{n - 2}.

The numerator is the residual sum of squares (RSS); s^2 is the mean sum of squares (MSS). ...
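To make the estimators above concrete, the following is a minimal numerical sketch in Python with NumPy. The data values are hypothetical, invented only for illustration (they are not the Amherst or Mount Airy/Charleston series); the code simply evaluates the formulas for \hat{\beta}_0, \hat{\beta}_1, the residuals e_i, and the unbiased variance estimate s^2.

    import numpy as np

    # Hypothetical data, invented only to illustrate the formulas
    # (not the Amherst or Mount Airy/Charleston series).
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([4.1, 5.8, 8.2, 9.9, 11.7, 14.3])

    n = len(x)
    xbar = x.mean()

    # Least squares estimates in the centered parameterization
    # y_i = beta_0 + beta_1 (x_i - xbar) + eps_i.
    beta0_hat = y.mean()  # \hat{\beta}_0 = \bar{y}
    beta1_hat = np.sum(y * (x - xbar)) / np.sum((x - xbar) ** 2)

    # Residuals e_i = y_i - \hat{\beta}_0 - \hat{\beta}_1 (x_i - xbar).
    e = y - beta0_hat - beta1_hat * (x - xbar)

    # Residual sum of squares and the unbiased variance estimate
    # s^2 = RSS / (n - 2).
    rss = np.sum(e ** 2)
    s2 = rss / (n - 2)

    print("beta0_hat =", beta0_hat)
    print("beta1_hat =", beta1_hat)
    print("RSS =", rss, "  s^2 =", s2)

Because the model is centered at \bar{x}, the intercept estimate is just \bar{y} and the residuals sum to zero up to rounding. The divisor n - 2 in s^2 accounts for the two estimated parameters, which is what removes the bias of dividing by n.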