DSC 203 Lecture Notes - Chapter 14

# Chapter 14: Multiple Regression

We employ more than one independent (predictor) variable in the prediction equation.

## Section 14.1: The Multiple Regression Model and the Least Squares Estimates

**Example: Tasty Sub Shop Case**

- y = yearly revenue (thousands of dollars)
- x₁ = population size (thousands of residents)
- x₂ = business rating (a measure of business activity in the area), measured on a scale from 1 to 10:
  - 1: limited business and shopping activity nearby
  - 10: lots of business and shopping activity nearby

Goal: predict y using both x₁ and x₂. See Table 14.1 (on page 581) and Figures 14.1 and 14.2 (on page 582).

Each independent (predictor) variable has a linear relationship with the dependent variable:

y = β₀ + β₁x₁ + ε, where β₁ > 0

y = β₀ + β₁x₂ + ε, where β₁ > 0

**Combined model:**

y = β₀ + β₁x₁ + β₂x₂ + ε

Here β₁ and β₂ are regression parameters relating y to x₁ and x₂ (the effects of x₁ and x₂ on y), and ε is the error term (the effects of all other factors on y). See Figure 14.3 on page 583 for a geometrical interpretation.

We will now use MINITAB to compute the least squares estimates b₀, b₁, and b₂ of the regression parameters. See Figure 14.4(b) on page 584. The least squares prediction equation is:

ŷ = 125.29 + 14.1996x₁ + 22.811x₂

so b₀ = 125.29, b₁ = 14.1996, and b₂ = 22.811.

For the site in the example, the point prediction of y is ŷ = 956.6 (that is, $956,600).

We can understand the meaning of least squares by considering Table 14.2 on page 585: the least squares estimates minimize SSE, the sum of squared errors.

**Notation.** Remember:

- β₀, β₁, β₂ (betas) are the unknown regression parameters.
- b₀, b₁, b₂ (bees) are the least squares estimates (aka the regression coefficients).

**Interpretation of the parameters:**

μ = β₀ + β₁x₁ + β₂x₂

is the mean yearly revenue when the population is x₁ and the business rating is x₂.

1. β₀'s interpretation: Let x₁ = 0 and x₂ = 0. Then

   μ = β₀ + β₁(0) + β₂(0) = β₀

   Therefore, β₀ is the mean yearly revenue when x₁ = 0 and x₂ = 0.
   This does not make any sense – that's OK; the y-intercept often makes no sense on its own.

2. β₁'s interpretation: Consider a site where x₁ = c and x₂ = d. Then

   μ = β₀ + β₁(c) + β₂(d)

   Now consider a different site where x₁ = (c + 1) and x₂ = d. Then

   μ = β₀ + β₁(c + 1) + β₂(d)

   The difference between the mean yearly revenues for the two sites is β₁. Therefore, β₁ is the change in the mean yearly revenue (mean y) associated with a 1,000-resident increase in population size (that is, a 1-unit increase in x₁) when x₂ remains constant.

β₂ is interpreted similarly.

**The General Multiple Regression Model**

- y = the dependent variable
- x₁, x₂, x₃, …, xₚ = p independent (predictor) variables

The general multiple regression model says that

y = β₀ + β₁x₁ + β₂x₂ + β₃x₃ + ⋯ + βₚxₚ + ε
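The least squares ideas above can be sketched in code. This is a minimal example using a small *hypothetical* data set (not the textbook's Table 14.1, so the fitted coefficients will differ from 125.29, 14.1996, and 22.811); it fits b₀, b₁, b₂ by minimizing SSE and then checks β₁'s interpretation: raising x₁ by 1 unit while holding x₂ fixed changes the prediction by exactly b₁.

```python
import numpy as np

# Hypothetical data (NOT the textbook's Table 14.1):
# y  = yearly revenue (thousands of dollars)
# x1 = population size (thousands of residents)
# x2 = business rating (1-10)
x1 = np.array([20.8, 27.5, 32.3, 37.2, 39.6, 45.1, 49.9, 55.4])
x2 = np.array([3.0, 2.0, 6.0, 5.0, 8.0, 3.0, 9.0, 6.0])
y = np.array([527.0, 549.0, 720.0, 743.0, 874.0, 815.0, 1010.0, 1041.0])

# Design matrix with a leading column of ones for the intercept b0.
X = np.column_stack([np.ones_like(x1), x1, x2])

# Least squares: b = (b0, b1, b2) minimizes SSE = sum((y - X @ b)**2).
b, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2 = b

def predict(pop, rating):
    """Point prediction y-hat = b0 + b1*x1 + b2*x2."""
    return b0 + b1 * pop + b2 * rating

# b1's interpretation: two sites that differ by 1 unit of x1 (1,000
# residents) with the same x2 differ in predicted mean revenue by b1.
diff = predict(48.3, 7.0) - predict(47.3, 7.0)
print(f"b0={b0:.3f}, b1={b1:.3f}, b2={b2:.3f}, diff={diff:.3f}")
```

The same check works for b₂: holding x₁ fixed and raising the business rating by one point moves the prediction by exactly b₂.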

## This note was uploaded on 02/09/2012 for the course DSC 203, taught by Professor Dr. Weese during the Fall '11 term at Miami University.
