# Intro to Stat_Part_58 - CHAPTER 12: LINEAR REGRESSION AND CORRELATION


| Degrees of Freedom: n − 2 | Critical Value |
|---|---|
| 17 | 0.456 |
| 18 | 0.444 |
| 19 | 0.433 |
| 20 | 0.423 |
| 21 | 0.413 |
| 22 | 0.404 |
| 23 | 0.396 |
| 24 | 0.388 |
| 25 | 0.381 |
| 26 | 0.374 |
| 27 | 0.367 |
| 28 | 0.361 |
| 29 | 0.355 |
| 30 | 0.349 |
| 40 | 0.304 |
| 50 | 0.273 |
| 60 | 0.250 |
| 70 | 0.232 |
| 80 | 0.217 |
| 90 | 0.205 |
| 100 and over | 0.195 |

Table 12.1

## 12.2 Summary

**Bivariate Data:** Each data point has two values, in the form (x, y).

**Line of Best Fit or Least Squares Line (LSL):** ŷ = a + bx, where x is the independent variable and y is the dependent variable.

**Residual:** actual y value minus predicted y value, i.e. y − ŷ.

**Correlation Coefficient r:**

1. Used to determine whether a line of best fit is good for prediction.
2. Between −1 and 1, inclusive. The closer r is to 1 or −1, the closer the original points are to a straight line.
3. If r is negative, the slope is negative. If r is positive, the slope is positive.
4. If r = 0, then the line is horizontal.

**Sum of Squared Errors (SSE):** The smaller the SSE, the better the original set of points fits the line of best fit.

**Outlier:** A point that does not seem to fit the rest of the data.

This content is available online at <http://cnx.org/content/m17081/1.4/>.
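The quantities defined in the summary can be sketched numerically. The following is a minimal Python sketch (the sample data and the function name `least_squares` are made up for illustration) that computes the least-squares line ŷ = a + bx from the standard formulas, along with the correlation coefficient r and the SSE. The resulting |r| could then be compared against the critical value from Table 12.1 for df = n − 2 to judge whether the line is good for prediction.

```python
import math

def least_squares(x, y):
    """Fit the least-squares line y-hat = a + b*x; return (a, b, r, sse)."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    # Sums of squared deviations and cross-deviations
    sxx = sum((xi - x_bar) ** 2 for xi in x)
    syy = sum((yi - y_bar) ** 2 for yi in y)
    sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    b = sxy / sxx                    # slope
    a = y_bar - b * x_bar            # intercept
    r = sxy / math.sqrt(sxx * syy)   # correlation coefficient, in [-1, 1]
    # SSE: sum of squared residuals (y - y-hat)
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return a, b, r, sse

# Hypothetical bivariate data, for illustration only
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
a, b, r, sse = least_squares(x, y)
```

Note that a positive r goes with a positive slope b (both share the sign of the cross-deviation sum sxy), matching point 3 of the summary.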