t by chance. According to the null hypothesis, the sample means are the result of drawing one sample, calculating its mean, then drawing another sample from the same population and calculating its mean, and so on. This process is exactly how the sampling distribution of the mean is defined. Therefore, if the null hypothesis is correct, then the set of sample means should be consistent with the sampling distribution of the mean (from a single population). In particular, the variance of the sample means should be no bigger than the variance of the sampling distribution. If it is bigger, then the sample means differ from each other by more than can be explained by sampling variability alone (i.e., by chance), so we reject the null hypothesis and adopt the alternative hypothesis that there are real differences among the populations.

The Central Limit Theorem tells us that the variance of the distribution of sample means is σ²/n, where n is the size of each sample and σ² is the variance of the raw scores. This value tells us how large the variance of the sample means, var(M), should be by chance. Therefore we can get a test statistic by computing the ratio of var(M) to σ²/n. If this ratio is sufficiently large, then var(M) is larger than would be expected by chance (i.e., under the null hypothesis):

    var(M) / (σ²/n) = n · var(M) / σ²
"2 As usual, σ2 is a parameter of the population that we don’t know, and therefore we have to estimate it. For ANOVA, we estimate σ2 using MSresidual, which is the same as the mean !
square used to estimate σ2 the case of an independent
samples t
test (see below). Therefore, the final test statistic is as follows (numerator and denominator were both multiplied by n t...
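To make the computation concrete, here is a minimal sketch of the F ratio, n · var(M) / MSresidual, for k groups of equal size n. The data values are made up for illustration; MSresidual is computed as the pooled within-group mean square.

```python
import numpy as np

# Illustrative data: k = 3 groups, each of size n = 4 (assumed values).
groups = [
    np.array([4.0, 5.0, 6.0, 5.0]),
    np.array([6.0, 7.0, 8.0, 7.0]),
    np.array([5.0, 6.0, 5.0, 4.0]),
]
n = len(groups[0])               # size of each sample
k = len(groups)                  # number of groups

means = np.array([g.mean() for g in groups])
var_M = means.var(ddof=1)        # variance of the sample means

# MSresidual: pooled within-group variance with df = k * (n - 1),
# the estimate of σ² from the raw scores.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_residual = ss_within / (k * (n - 1))

F = n * var_M / ms_residual
print(F)
```

Note that n · var(M) is just MSbetween, so this is the familiar F = MSbetween / MSwithin written in terms of the variance of the sample means.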
Psychology, Spring '08, MARTICHUSKI
