Discovering Statistics Using SPSS: Chapter 5

Chapter 5: Answers
Task 1
A fashion student was interested in factors that predicted the salaries of catwalk models. She
collected data from 231 models. For each model she asked them their salary per day on days
when they were working (salary), their age (age), how many years they had worked as a
model (years), and then got a panel of experts from modelling agencies to rate the
attractiveness of each model as a percentage with 100% being perfectly attractive (beauty).
The data are on the CD-ROM in the file Supermodel.sav. Unfortunately, this fashion student
bought a substandard statistics text and so doesn't know how to analyse her data ☺ Can
you help her out by conducting a multiple regression to see which factors predict a model's
salary? How valid is the regression model?
Model Summaryᵇ

Model  R      R Square  Adjusted R Square  Std. Error of the Estimate
1      .429ᵃ  .184      .173               14.57213

Change statistics: R Square Change = .184, F Change = 17.066, df1 = 3, df2 = 227,
Sig. F Change = .000. Durbin–Watson = 2.057.
a. Predictors: (Constant), Attractiveness (%), Number of Years as a Model, Age (Years)
b. Dependent Variable: Salary per Day (£)

ANOVAᵇ

Model 1     Sum of Squares  df   Mean Square  F       Sig.
Regression  10871.964       3    3623.988     17.066  .000ᵃ
Residual    48202.790       227  212.347
Total       59074.754       230

a. Predictors: (Constant), Attractiveness (%), Number of Years as a Model, Age (Years)
b. Dependent Variable: Salary per Day (£)

To begin with, a sample size of 231 with 3 predictors seems reasonable because this would
easily detect medium to large effects (see the diagram in the chapter).
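The rules of thumb Field describes (from Green, 1991) can be sketched as a quick check; the function names below are my own, purely for illustration:

```python
# Green's (1991) rules of thumb for regression sample size, as described by Field.
# Function names are illustrative, not from any library.

def n_for_overall_model(k):
    """Minimum sample size to test the overall model fit with k predictors."""
    return 50 + 8 * k

def n_for_individual_predictors(k):
    """Minimum sample size to test the individual predictors."""
    return 104 + k

k = 3  # age, years as a model, attractiveness
print(n_for_overall_model(k), n_for_individual_predictors(k))  # 74 107
```

With 231 cases we comfortably exceed both thresholds.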
Overall, the model accounts for 18.4% of the variance in salaries and is a significant fit to the
data (F(3, 227) = 17.07, p < .001). The adjusted R² (.17) shows some shrinkage from the
unadjusted value (.184), indicating that the model may not generalise well. We can also use
Stein's formula:

adjusted R² = 1 − [(231 − 1)/(231 − 3 − 1)] × [(231 − 2)/(231 − 3 − 2)] × [(231 + 1)/231] × (1 − 0.184)
            = 1 − [1.031](0.816)
            = 1 − 0.841
            = 0.159

This also shows that the model may not cross-generalise well.

Dr. Andy Field, 5/22/2003
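The same calculation can be written as a small Python sketch (the function name is mine):

```python
# Stein's formula for the cross-validated R^2 (a sketch; function name is mine)
def stein_adjusted_r2(r2, n, k):
    """r2 = observed R^2, n = sample size, k = number of predictors."""
    factor = ((n - 1) / (n - k - 1)) * ((n - 2) / (n - k - 2)) * ((n + 1) / n)
    return 1 - factor * (1 - r2)

print(round(stein_adjusted_r2(0.184, 231, 3), 3))  # 0.159
```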
Coefficientsᵃ

                             B        Std. Error  Beta   t       Sig.  95% CI for B        Tolerance  VIF
(Constant)                  −60.890   16.497             −3.691  .000  [−93.396, −28.384]
Age (Years)                   6.234    1.411      .942    4.418  .000  [3.454, 9.015]      .079       12.653
Number of Years as a Model   −5.561    2.122     −.548   −2.621  .009  [−9.743, −1.380]    .082       12.157
Attractiveness (%)            −.196     .152     −.083   −1.289  .199  [−.497, .104]       .867       1.153

a. Dependent Variable: Salary per Day (£)

In terms of the individual predictors we could report:
                    B       SE B    β
Constant          −60.89   16.50
Age                 6.23    1.41    .94**
Years as a Model   −5.56    2.12   −.55*
Attractiveness     −0.20    0.15   −.08

Note. R² = .18 (p < .001). * p < .01, ** p < .001.

It seems as though salaries are significantly predicted by the age of the model. This is a
positive relationship (look at the sign of the beta), indicating that as age increases, salaries
increase too. The number of years spent as a model also seems to significantly predict
salaries, but this is a negative relationship indicating that the more years you’ve spent as a
model, the lower your salary. This finding seems very counterintuitive, but we’ll come back to
it later. Finally, the attractiveness of the model doesn’t seem to predict salaries.
If we wanted to write the regression model, we could write it as:
Salaryᵢ = β₀ + β₁Ageᵢ + β₂Experienceᵢ + β₃Attractivenessᵢ
        = −60.89 + (6.23 Ageᵢ) − (5.56 Experienceᵢ) − (0.20 Attractivenessᵢ)

The next part of the question asks whether this model is valid.
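Plugging values into the fitted model is straightforward; here is a minimal sketch using the coefficients from the output (the example inputs are hypothetical, purely for illustration):

```python
# Predicted salary from the fitted regression model above
def predict_salary(age, years_as_model, attractiveness):
    return -60.89 + 6.23 * age - 5.56 * years_as_model - 0.20 * attractiveness

# e.g. an 18-year-old with 1 year's experience, rated 75% attractive
print(round(predict_salary(18, 1, 75), 2))  # 30.69
```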
Collinearity Diagnosticsᵃ

                                         Variance Proportions
Dimension  Eigenvalue  Condition Index   (Constant)  Age (Years)  Years as a Model  Attractiveness (%)
1          3.925       1.000             .00         .00          .00               .00
2          .070        7.479             .01         .00          .08               .02
3          .004        30.758            .30         .02          .01               .94
4          .001        63.344            .69         .98          .91               .04

a. Dependent Variable: Salary per Day (£)

Casewise Diagnosticsᵃ
Case Number  Std. Residual  Salary per Day (£)  Predicted Value  Residual
2            2.186          53.72               21.8716          31.8532
5            4.603          95.34               28.2647          67.0734
24           2.232          48.87               16.3444          32.5232
41           2.411          51.03               15.8861          35.1390
91           2.062          56.83               26.7856          30.0459
116          3.422          64.79               14.9259          49.8654
127          2.753          61.32               21.2059          40.1129
135          4.672          89.98               21.8946          68.0854
155          3.257          74.86               27.4025          47.4582
170          2.170          54.57               22.9401          31.6254
191          3.153          50.66               4.7164           45.9394
198          3.510          71.32               20.1729          51.1478

a. Dependent Variable: Salary per Day (£)
[Figures: histogram and normal P–P plot of the standardized residuals; scatterplot of the standardized predicted values (ZPRED) against the standardized residuals (ZRESID); partial regression plots for Age (Years), Number of Years as a Model, and Attractiveness (%). Dependent variable: Salary per Day (£).]
Residuals: there are 6 cases that have a standardized residual greater than 3, and two of these
are fairly substantial (cases 5 and 135). We have 5.19% of cases with standardized
residuals above 2, which is about what we would expect, but 3% of cases with residuals above
2.5 (we'd expect only 1%), which indicates possible outliers.
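Those benchmark percentages come from the standard normal distribution; a minimal sketch of the expected proportions and counts for n = 231 (the function name is mine):

```python
import math

def prop_beyond(z):
    """Two-tailed proportion of a standard normal lying beyond +/- z."""
    return math.erfc(z / math.sqrt(2))

n = 231
for z in (2.0, 2.5):
    p = prop_beyond(z)
    print(f"|z| > {z}: {100 * p:.1f}% of cases, about {n * p:.0f} expected")
```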
Normality of errors: The histogram reveals a skewed distribution, indicating that the
normality of errors assumption has been broken. The normal P–P plot verifies this because
the dotted line deviates considerably from the straight line (which indicates what you'd get
from normally distributed errors).
Homoscedasticity and Independence of Errors: The scatterplot of ZPRED vs. ZRESID does
not show a random pattern. There is distinct funnelling, indicating heteroscedasticity.
However, the Durbin–Watson statistic does fall within Field's recommended boundaries of
1–3, which suggests that errors are reasonably independent.
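The Durbin–Watson statistic itself is simple to compute from a sequence of residuals; a minimal sketch (values near 2 suggest independent errors; values near 0 or 4 suggest positive or negative autocorrelation):

```python
# Durbin-Watson statistic from a sequence of residuals (a minimal sketch)
def durbin_watson(residuals):
    num = sum((residuals[i] - residuals[i - 1]) ** 2
              for i in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den

# Perfectly alternating residuals push the statistic towards 4
print(durbin_watson([1.0, -1.0, 1.0, -1.0]))  # 3.0
```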
Multicollinearity: for the age and experience variables in the model, VIF values are above
10 (equivalently, their tolerance values are well below 0.2), indicating multicollinearity in
the data. In fact, if you look at the correlation between these two variables it is around .9!
So, these two variables are measuring very similar things. Of course, this makes perfect
sense because the older a model is, the more years she would’ve spent modelling! So, it
was fairly stupid to measure both of these things! This also explains the weird result that
the number of years spent modelling negatively predicted salary (i.e. more experience =
less salary!): in fact if you do a simple regression with experience as the only predictor of
salary you’ll find it has the expected positive relationship. This hopefully demonstrates why
multicollinearity can bias the regression model.
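The VIF and tolerance figures in the output are just reciprocals of one another; a quick sketch using the tolerances reported above (the function name is mine):

```python
# VIF is the reciprocal of tolerance, where tolerance = 1 - R^2 of a
# predictor regressed on the remaining predictors
def vif_from_tolerance(tolerance):
    return 1.0 / tolerance

for name, tol in [("Age", 0.079), ("Years as a Model", 0.082),
                  ("Attractiveness", 0.867)]:
    print(f"{name}: VIF = {vif_from_tolerance(tol):.2f}")
```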
All in all, several assumptions have not been met and so this model is probably fairly
unreliable.

Task 2
Using the Glastonbury data from this chapter (with the dummy coding in
GlastonburyDummy.sav), which you should’ve already analysed, comment on whether you
think the model is reliable and generalizable.
This question asks whether this model is valid.
Model Summaryᵇ

Model  R      R Square  Adjusted R Square  Std. Error of the Estimate
1      .276ᵃ  .076      .053               .68818

Change statistics: R Square Change = .076, F Change = 3.270, df1 = 3, df2 = 119,
Sig. F Change = .024. Durbin–Watson = 1.893.
a. Predictors: (Constant), No Affiliation vs. Indie Kid, No Affiliation vs. Crusty, No Affiliation vs. Metaller
b. Dependent Variable: Change in Hygiene Over The Festival

Coefficientsᵃ
                              B      Std. Error  Beta   t       Sig.  95% CI for B     Tolerance  VIF
(Constant)                   −.554   .090               −6.134  .000  [−.733, −.375]
No Affiliation vs. Crusty    −.412   .167        −.232  −2.464  .015  [−.742, −.081]   .879       1.138
No Affiliation vs. Metaller   .028   .160         .017    .177  .860  [−.289, .346]    .874       1.144
No Affiliation vs. Indie Kid −.410   .205        −.185  −2.001  .048  [−.816, −.004]   .909       1.100

a. Dependent Variable: Change in Hygiene Over The Festival
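Because the predictors are dummy variables, the b-values have a direct interpretation: the constant is the predicted hygiene change for the no-affiliation baseline, and each b is a group's difference from that baseline. A quick sketch using the coefficients above:

```python
# Predicted change in hygiene per group from the dummy-coded model
# (b-values taken from the coefficients table above)
b0 = -0.554          # no-affiliation baseline
b_crusty = -0.412
b_metaller = 0.028
b_indie = -0.410

groups = {
    "No affiliation": b0,
    "Crusty": b0 + b_crusty,
    "Metaller": b0 + b_metaller,
    "Indie kid": b0 + b_indie,
}
for name, predicted in groups.items():
    print(f"{name}: {predicted:.3f}")
```

Crusties and indie kids are predicted to get noticeably dirtier than the no-affiliation group; metallers barely differ from it.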
Collinearity Diagnosticsᵃ

                                        Variance Proportions
Dimension  Eigenvalue  Condition Index  (Constant)  vs. Crusty  vs. Metaller  vs. Indie Kid
1          1.727       1.000            .14         .08         .08           .05
2          1.000       1.314            .00         .37         .32           .00
3          1.000       1.314            .00         .07         .08           .63
4          .273        2.515            .86         .48         .52           .32

a. Dependent Variable: Change in Hygiene Over The Festival

Casewise Diagnosticsᵃ

Case Number  Std. Residual  Change in Hygiene  Predicted Value  Residual
31           −2.302         −2.55              −.9658           −1.5842
153           2.317          1.04              −.5543            1.5943
202          −2.653         −2.38              −.5543           −1.8257
346          −2.479         −2.26              −.5543           −1.7057
479           2.215           .97              −.5543            1.5243

a. Dependent Variable: Change in Hygiene Over The Festival
[Figures: histogram and normal P–P plot of the standardized residuals; scatterplot of ZPRED against ZRESID; partial regression plots for No Affiliation vs. Crusty, No Affiliation vs. Metaller, and No Affiliation vs. Indie Kid. Dependent variable: Change in Hygiene Over The Festival.]
Residuals: there are no cases that have a standardized residual greater than 3. We have
4.07% of cases with standardized residuals above 2, which is about what we would expect,
and .81% of cases with residuals above 2.5 (we'd expect 1%), so the data are
consistent with what we'd expect.
Normality of errors: The histogram looks reasonably normally distributed, indicating that
the normality of errors assumption has probably been met. The normal P–P plot verifies this
because the dotted line doesn't deviate much from the straight line (which indicates what
you'd get from normally distributed errors).
Homoscedasticity and Independence of Errors: The scatterplot of ZPRED vs. ZRESID does
look a bit odd with categorical predictors, but essentially we're looking for the height of the
lines to be about the same (indicating the variability at each of the three levels is the
same). This is true, indicating homoscedasticity. The Durbin–Watson statistic also falls
within Field's recommended boundaries of 1–3, which suggests that errors are reasonably
independent.
Multicollinearity: for all variables in the model, VIF values are below 10 (and Tolerance
values are all well above 0.2), indicating no multicollinearity in the data.
All in all, the model looks fairly reliable (but you should check for influential cases!).