MultipleRegression 12

Simple Regression (one predictor) Review
Facebook example

Predicted z_y = (r_xy)(z_x)
Predicted z_y = (.66)(z_x)

z_x = -.45
Predicted z_y = .66(-.45) = -.297

mean_x = 9.25, s_x = 6.833
mean_y = 3.25, s_y = 1.479

z_y(s_y) + m_y = Y
-.297(1.479) + 3.25 = -.4393 + 3.25 = 2.8107, round to 3

z_x(s_x) + m_x = X
-.45(6.833) + 9.25 = -3.075 + 9.25 = 6.175, approx. 6 hrs
Scouts example

Predicted z_y = (r_xy)(z_x)
Predicted z_y = (.203)(z_x)

z_x = +1.02
Predicted z_y = .203(1.02) = .207

mean_x = 15.818, s_x = 11.629
mean_y = 34.091, s_y = 15.911

z_y(s_y) + m_y = Y
.207(15.911) + 34.091 = 3.294 + 34.091 = 37.385, round to 37 boxes of cookies

z_x(s_x) + m_x = X
1.02(11.629) + 15.818 = 11.862 + 15.818 = 27.680, round to 28 boxes of popcorn
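Both worked examples follow the same recipe: predict a z-score on Y from the correlation, then convert the z-score back to raw units. A minimal Python sketch of that recipe, using the slide's numbers (the function names are mine, not from the lecture):

```python
def predict_z_y(r_xy, z_x):
    """Predicted z-score on Y: predicted z_y = r_xy * z_x."""
    return r_xy * z_x

def z_to_raw(z, mean, sd):
    """Convert a z-score back to a raw score: score = z*s + M."""
    return z * sd + mean

# Facebook example from the slide
z_y = predict_z_y(0.66, -0.45)        # approximately -.297
y = z_to_raw(z_y, 3.25, 1.479)        # approximately 2.8107
print(round(y))                       # 3
```

The same two calls with the Scouts numbers (r = .203, z_x = 1.02, mean_y = 34.091, s_y = 15.911) reproduce the 37-boxes-of-cookies answer.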
Facebook example

Predicted Y = bX + a
b = (r_xy)(s_y) / s_x
a = M_y - (b)(M_x)

M_x = 9.25, s_x = 6.833
M_y = 3.25, s_y = 1.479

b = [(.66)(1.479)] / 6.833 = .97614 / 6.833 = .143
a = 3.25 - (.143)(9.25) = 3.25 - 1.32275 = 1.927

Predicted Y = .143(X) + 1.927

X = 6 hours
Predicted Y = .143(6) + 1.927 = .858 + 1.927 = 2.785, round to 3
Scouts example

Predicted Y = bX + a
b = (r_xy)(s_y) / s_x
a = M_y - (b)(M_x)

M_x = 15.818, s_x = 11.629
M_y = 34.091, s_y = 15.911

b = [(.203)(15.911)] / 11.629 = 3.230 / 11.629 = .278
a = 34.091 - (.278)(15.818) = 34.091 - 4.397 = 29.694

Predicted Y = .278(X) + 29.694

X = 10 boxes of popcorn
Predicted Y = .278(10) + 29.694 = 2.78 + 29.694 = 32.474, round to 32 boxes of cookies
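The raw-score formulas can be sketched the same way. A small Python sketch (the helper name `regression_line` is my own); tiny differences from the slide's intermediate values come from not rounding b and a mid-calculation:

```python
def regression_line(r_xy, m_x, s_x, m_y, s_y):
    """Slope and intercept from summary statistics:
    b = r * s_y / s_x,  a = M_y - b * M_x."""
    b = r_xy * s_y / s_x
    a = m_y - b * m_x
    return b, a

# Scouts example from the slide
b, a = regression_line(0.203, 15.818, 11.629, 34.091, 15.911)
pred = b * 10 + a          # predicted cookies for 10 boxes of popcorn
print(round(pred))         # 32
```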
Prediction

Question: How much will the LOST audience enjoy THE EVENT?
Prediction to Prediction Error!

X    Actual Y    Predicted Y    Error = Actual Y - Predicted Y
1       2          2.378             -0.378
1       3          2.378              0.622
2       1          2.674             -1.674
3       4          2.970              1.030
3       5          2.970              2.030
4       5          3.266              1.734
4       3          3.266             -0.266
4       2          3.266             -1.266
4       1          3.266             -2.266
5       4          3.562              0.438
Calculating Error

Error = Actual Y - Predicted Y
Errors: -0.378, 0.622, -1.674, 1.03, 2.03, 1.734, -0.266, -1.266, -2.266, 0.438

Calculate the average error = ∑(Y - Predicted Y) / N
Problem

∑(Actual Y - Predicted Y) = 0

Why? Because the regression line is the "balancing point" among all the (X, Y) pairs.
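This balancing property can be checked numerically with the ratings from the table above; the slope and intercept here are recomputed by least squares rather than copied from the slide:

```python
# Ratings from the table above (X = Lost enjoyment, Y = The Event enjoyment)
X = [1, 1, 2, 3, 3, 4, 4, 4, 4, 5]
Y = [2, 3, 1, 4, 5, 5, 3, 2, 1, 4]

n = len(X)
m_x, m_y = sum(X) / n, sum(Y) / n

# Least-squares slope and intercept (b comes out near .296, a near 2.083,
# which reproduces the predicted Ys in the table)
b = sum((x - m_x) * (y - m_y) for x, y in zip(X, Y)) / sum((x - m_x) ** 2 for x in X)
a = m_y - b * m_x

errors = [y - (b * x + a) for x, y in zip(X, Y)]
print(f"{abs(sum(errors)):.6f}")   # 0.000000 -- positive and negative errors cancel
```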
Residuals

[Scatterplot: Lost ratings (X) vs. The Event ratings (Y). The actual scores are plotted against the predicted values on the regression line (2.378, 2.674, 2.97, 3.266, 3.562); the vertical gap between each actual Y and its predicted Y is that person's residual.]
Balancing Point

Sound familiar? The mean is the balancing point for a set of scores, so ∑(X - M) = 0.
To get around this problem, square the deviations: ∑(X - M)(X - M)
Method of Least Squares

1. Draw a line.
2. Calculate predicted Y for each person.
3. Compare predicted Y to actual Y.
4. Square and sum the differences between predicted Y and actual Y.
5. Divide by N.
6. The result is called the mean square error (MSE); we want it to be LOW.
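The steps above can be sketched in Python. The "eyeballed" line here (b = .5, a = 1.5) is an arbitrary guess of mine for comparison; the least-squares line for these data always has the lower MSE:

```python
def mean_square_error(X, Y, b, a):
    """Steps 2-5: mean of the squared (actual - predicted) differences."""
    sq = [(y - (b * x + a)) ** 2 for x, y in zip(X, Y)]
    return sum(sq) / len(sq)

# LOST / THE EVENT ratings from the earlier table
X = [1, 1, 2, 3, 3, 4, 4, 4, 4, 5]
Y = [2, 3, 1, 4, 5, 5, 3, 2, 1, 4]

# Least-squares line (b ~ .296, a ~ 2.083) vs. an eyeballed guess
print(mean_square_error(X, Y, 0.296, 2.083) < mean_square_error(X, Y, 0.5, 1.5))  # True
```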
Eyeballing a Regression Line http://www.ruf.rice.edu/~lane/stat_sim/reg_by_eye/index.html
Prediction using raw scores

Advantages:
- No need to convert scores to z-scores
- Easy to apply the formula to make predictions

Disadvantages:
- Cannot compare across variables
- More difficult to calculate

Formulas:
Predicted Y = bX + a
b = (r)(s_y) / s_x
a = M_y - (b)(M_x)
Prediction using z-scores

Advantages:
- Easy to calculate
- Can compare across variables

Disadvantages:
- Need to convert scores to z-scores
- More difficult to apply

Formula:
Predicted z_y = (r)(z_x)
Simple Regression (one predictor): Regression Coefficients and Coefficient of Determination
Example: Predicting GPA from SAT, using one predictor (one "X")
Simple Linear Regression: SAT and GPA

Coefficients(a)

              Unstandardized Coefficients    Standardized Coefficients
Model 1       B          Std. Error          Beta                         t       Sig.
(Constant)    1.876      .346                                             5.422   .002
SAT           .097       .034                .757                         2.834   .030

a. Dependent Variable: GPA
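Reading the unstandardized B column off the SPSS output gives the raw-score prediction line, Predicted GPA = .097(SAT) + 1.876. A sketch, assuming the SAT predictor is on whatever scale the original data set used (the scale is not shown in the output):

```python
# Coefficients read from the B column of the SPSS table
a, b = 1.876, 0.097   # (Constant) and SAT

def predicted_gpa(sat):
    """Predicted GPA = b*SAT + a.  The scale of `sat` must match
    however SAT was coded in the original data (not shown here)."""
    return b * sat + a

print(round(predicted_gpa(10), 3))   # 2.846
```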

This note was uploaded on 05/26/2011 for the course PSCH 343 taught by Professor Victoriaharmon during the Spring '11 term at Ill. Chicago.


