November 19, 2003
Before we begin, I want to address the question of comparing means. If we have 4 means
that we want to compare, we have:
H0: µ1 = µ2 = µ3 = µ4
If we want to test this hypothesis, we reconceptualize it as:
H0: µ1 − µ2 = 0, µ2 − µ3 = 0, µ3 − µ4 = 0
However, what about µ1 − µ4?
Clearly, this is addressed using the above hypothesis since:
µ1 − µ4 = (µ1 − µ2) + (µ2 − µ3) + (µ3 − µ4)
However, when we test this hypothesis, we use estimates, and perhaps what results is:
x̄1 = 4, x̄2 = 5, x̄3 = 6, x̄4 = 7
It is conceivable that a difference of 1 (the difference between adjacent means) is NOT
statistically significant, whereas a difference of 3 (7 − 4) is. So, how do we reconcile
this?
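To make this concrete, here is a minimal Python sketch using the example estimates from the text (x̄1 = 4, x̄2 = 5, x̄3 = 6, x̄4 = 7). It prints the adjacent differences versus the extreme difference, and checks the telescoping identity numerically:

```python
# Example sample means from the text: x1 = 4, x2 = 5, x3 = 6, x4 = 7
xbar = [4, 5, 6, 7]

# Adjacent differences: each is only 1
adjacent = [xbar[i + 1] - xbar[i] for i in range(3)]
print(adjacent)   # [1, 1, 1]

# Extreme difference x4 - x1 = 3
print(xbar[3] - xbar[0])   # 3

# Telescoping identity: (x1 - x2) + (x2 - x3) + (x3 - x4) = x1 - x4
telescoped = sum(xbar[i] - xbar[i + 1] for i in range(3))
assert telescoped == xbar[0] - xbar[3]
```

The assert holds by construction: the intermediate means cancel, which is exactly why the extreme comparison is implicitly covered by the adjacent contrasts.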
This is understood if you remember your ANOVA course. When doing ANOVA, we
compare means, and then we can do multiple comparisons to find out which means are
different. There are several techniques for multiple comparisons, including Tukey,
Scheffé, and Bonferroni. The difference between these methods lies in how each controls
the Type I error rate: Tukey controls it for all possible pairwise comparisons, Bonferroni
controls it for the number of contrasts decided upon in advance, and Scheffé controls it
for all possible linear combinations of the means. Clearly, Scheffé is the most
conservative test.
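As a rough illustration of the trade-off, here is a Python sketch of the Bonferroni adjustment (the familywise rate alpha = 0.05 and k = 4 groups are illustrative choices, not from the text):

```python
# Bonferroni: to keep the familywise Type I error rate at alpha across
# m contrasts, test each individual contrast at level alpha / m.
alpha = 0.05      # illustrative familywise error rate
k = 4             # number of groups, as in the example above

# If we decide in advance to test only the 3 adjacent contrasts:
m_planned = k - 1
print(alpha / m_planned)          # per-contrast level, about 0.0167

# If instead we guard all possible pairwise comparisons (Tukey's target):
m_pairwise = k * (k - 1) // 2     # 6 pairs
print(alpha / m_pairwise)         # per-contrast level, about 0.0083
```

The more comparisons a procedure guards against, the smaller each per-contrast level must be; Scheffé goes further still by guarding every linear combination of the means, which is why it is the most conservative of the three.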
So, what is the point? The F test that we conduct on the means, or on the regression
coefficients, is like the Scheffé test. It tests not only the specified contrasts, but also
any linear combination of them. Thus, since
