November 19, 2003

Before we begin, I want to address the question of comparing means. If we have 4 means that we want to compare, we have:

H0: μ1 = μ2 = μ3 = μ4

If we want to test this hypothesis, we reconceptualize it as:

H0: μ1 − μ2 = 0, μ2 − μ3 = 0, μ3 − μ4 = 0

However, what about μ1 − μ4? Clearly, this is addressed by the above hypothesis, since:

μ1 − μ4 = (μ1 − μ2) + (μ2 − μ3) + (μ3 − μ4)

However, when we test this hypothesis we use estimates, and perhaps what results is:

x̄1 = 4, x̄2 = 5, x̄3 = 6, x̄4 = 7

It is conceivable that a difference of 1 (which is the difference between adjacent means) is NOT statistically significant, whereas a difference of 3 (7 − 4) is. So, how do we reconcile this?

This is understood if you remember your ANOVA course. When doing ANOVA, we compare means, and then we can do multiple comparisons to find out which means are different. There are several techniques for multiple comparisons, including Tukey, Scheffé, and Bonferroni. The difference between these methods lies in how they control the Type I error rate: the Tukey method controls Type I error for all possible pairwise comparisons, Bonferroni controls it for the number of contrasts decided upon in advance, and Scheffé controls it for all possible linear combinations of the means. Clearly, Scheffé is the most conservative test.

So, what is the point? The F test that we conduct on the means, or on the regression coefficients, is like the Scheffé test. It actually tests not only the specified linear combinations, but also any linear combination of those specified. Thus, since
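To make the arithmetic of the paradox concrete, here is a small sketch using the group means 4, 5, 6, 7 from the example above. The per-group sample size n = 10, the pooled standard deviation s_p = 2, and the Bonferroni-adjusted critical value (read from a t-table) are all assumptions chosen for illustration, not values from the notes:

```python
import math

# Example group means from the text; n and s_p are assumed for illustration.
means = [4, 5, 6, 7]
n = 10          # observations per group (assumption)
s_p = 2.0       # pooled standard deviation (assumption)

# Standard error of the difference between two group means
se_diff = s_p * math.sqrt(2 / n)

def t_stat(mean_i, mean_j):
    """t statistic for the difference between two group means."""
    return abs(mean_i - mean_j) / se_diff

# Bonferroni: 6 pairwise comparisons among 4 groups, so test each at
# alpha/6 ~= 0.0083. Approximate two-sided critical value for
# df = 4*(n - 1) = 36, taken from a t-table (an assumption).
t_crit = 2.79

t_adjacent = t_stat(means[0], means[1])   # difference of 1
t_extreme = t_stat(means[0], means[3])    # difference of 3

print(f"adjacent: t = {t_adjacent:.2f}, significant: {t_adjacent > t_crit}")
print(f"extreme:  t = {t_extreme:.2f}, significant: {t_extreme > t_crit}")
```

With these assumed numbers the adjacent difference of 1 gives t ≈ 1.12 (not significant) while the difference of 3 gives t ≈ 3.35 (significant), which is exactly the apparent inconsistency that the multiple-comparison methods are designed to handle.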