…to make things simpler).

F = n · var(M) / MSresidual

All F does is compare the variance of the group means to how much variance would be expected by chance. If F is large enough, then the group means differ by more than can be
expected by chance, and we reject the null hypothesis that the population means are equal. This is the essence of ANOVA.

Sums of squares for ANOVA. The general approach to ANOVA uses sums of squares, just like with regression. We start with the sum of squares for the data taken altogether, SStotal, which represents the total amount of variability in the data. Then we break the total variability into two parts: the variability that is explainable by the differences among groups (SStreatment) and the variability that is not (SSresidual). The word “treatment” refers to the independent variable, i.e., whatever defines the differences among the groups (e.g., what experimental condition each subject was in).

SStotal = SStreatment + SSresidual    (1)

The total sum of squares is the variability for all the data, ignoring the fact that they come from different groups. To compute SStotal, we find the grand mean (M), which is the mean
of all the data. Then we find the difference between each raw score and the grand mean, square those differences, and add them up.

SStotal = Σ(X − M)²    (2)

The difference between each raw score and the grand mean, X − M, can be broken into two parts. The first part is the difference between the raw score and the mean of its own group, X − M_i. This part cannot be explained by the treatment, because all members of each group received the same treatment. Therefore this part defines SSresidual. The second part is the difference between each subject’s group mean and the grand mean, M_i − M. This part reflects the differences among the groups and therefore defines SStreatment.
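The quantities above can be sketched numerically. The following is a minimal worked example using invented data (three groups of four scores each, not from the handout); it computes the sums of squares, checks the partition in equation (1), and forms the F ratio described earlier, with MSresidual taken as SSresidual divided by its degrees of freedom, N − k.

```python
# Hypothetical data: three groups of n = 4 scores each (invented numbers).
groups = [
    [4, 5, 6, 5],
    [7, 8, 9, 8],
    [5, 6, 5, 6],
]
n = len(groups[0])                                  # per-group sample size
all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)      # M, mean of all the data
group_means = [sum(g) / len(g) for g in groups]     # M_i, one per group

# SStotal: squared deviation of every raw score from the grand mean, eq. (2).
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)

# SSresidual: squared deviation of each score from its own group mean
# (variability the treatment cannot explain).
ss_residual = sum((x - m) ** 2
                  for g, m in zip(groups, group_means) for x in g)

# SStreatment: squared deviation of each group mean from the grand mean,
# counted once per subject (variability explained by group differences).
ss_treatment = sum(len(g) * (m - grand_mean) ** 2
                   for g, m in zip(groups, group_means))

# The partition in equation (1): SStotal = SStreatment + SSresidual.
assert abs(ss_total - (ss_treatment + ss_residual)) < 1e-9

# F ratio: n * var(M) / MSresidual, where var(M) is the sample variance of
# the group means and MSresidual = SSresidual / (N - k).
k = len(groups)                                     # number of groups
N = len(all_scores)                                 # total sample size
var_of_means = sum((m - grand_mean) ** 2 for m in group_means) / (k - 1)
ms_residual = ss_residual / (N - k)
F = n * var_of_means / ms_residual
print(ss_total, ss_treatment, ss_residual, F)
```

With these numbers the two sides of equation (1) agree, and F comes out large (well above 1), reflecting group means that differ by much more than the within-group scatter would suggest by chance.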
Spring '08, MARTICHUSKI, Psychology