Statistical Methods I (EXST 7005)

treatments to a common mean of zero. SAS will output the residuals with an output
statement, and PROC UNIVARIATE has a number of tools to evaluate normality.
Homogeneity of Variance

Your textbook discusses one test by Hartley. It is one of the simplest tests, but not usually
the best. To do this test we calculate the largest observed variance divided by the
smallest observed variance. This statistic is tested with a special table by Hartley
(Appendix Table 5.A in your Freund & Wilson textbook).
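The calculation itself is trivial; a minimal sketch in Python (the samples are made up for illustration):

```python
# Hartley's F-max: the largest sample variance divided by the smallest.
# The observed ratio is then compared to a critical value from Hartley's
# table; the table lookup is not reproduced here.

def sample_variance(xs):
    """Ordinary sample variance with n - 1 in the denominator."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Three made-up treatment groups of equal size
groups = [
    [3.1, 2.9, 3.0, 3.2],
    [5.0, 4.0, 6.0, 5.0],
    [7.9, 8.1, 8.0, 8.2],
]
variances = [sample_variance(g) for g in groups]
f_max = max(variances) / min(variances)  # large values suggest unequal variances
```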
A number of other tests are available in SAS, but only for a simple CRD (i.e., a one-way
ANOVA). These tests are briefly discussed below.
To get all of the tests available in SAS, use the following MEANS statement in PROC GLM:

MEANS your_treatment_name / HOVTEST=BARTLETT HOVTEST=LEVENE(TYPE=SQUARE) HOVTEST=OBRIEN WELCH;

Levene's Test: This test is basically an ANOVA of the squared deviations
(TYPE=SQUARE). It can also be done with absolute values (TYPE=ABS). This is
one of the most popular HOV tests.
O'Brien's Test: This test is a modification of Levene's test with an additional adjustment.
Brown and Forsythe's Test: This test is similar to Levene's, but uses absolute deviations
from the median instead of the more ANOVA-like deviations from the mean. There is a
"nonparametric" ANOVA that employs deviations from the median instead of the usual
deviations from the mean used for the normal ANOVA.
Bartlett's Test for Equality: This test is similar to Hartley's, but uses a likelihood ratio
test instead of an F test. This test can be inaccurate if the data are not normally distributed.
Welch's ANOVA: This is not a test of homogeneity of variance; it is a weighted ANOVA
that weights the observations by an inverse function of the variances. It is intended to
address the problem of non-homogeneous variance, for use when the variance is not
homogeneous.
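The idea behind the most popular of these can be sketched in a few lines of Python (a hand-rolled illustration of the idea behind HOVTEST=LEVENE(TYPE=SQUARE), not SAS's own code; all data are made up): Levene's TYPE=SQUARE statistic is just an ordinary one-way ANOVA F computed on the squared deviations from each group's mean.

```python
# Levene's TYPE=SQUARE test: run a one-way ANOVA on the squared
# deviations of each observation from its own group mean.

def levene_square(groups):
    """Levene F statistic: one-way ANOVA F on squared deviations."""
    z = [[(x - sum(g) / len(g)) ** 2 for x in g] for g in groups]
    k = len(z)                            # number of groups
    n = sum(len(g) for g in z)            # total observations
    grand = sum(sum(g) for g in z) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in z)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in z)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two groups with similar spread and one with a much larger spread
a = [10.1, 9.9, 10.0, 10.2, 9.8]
b = [10.3, 10.1, 9.9, 10.0, 10.2]
c = [14.0, 6.0, 12.0, 8.0, 10.0]
F = levene_square([a, b, c])  # a large F flags heterogeneous variances
```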
The Homogeneity of Variance (HOV) tests discussed above can be done in SAS (PROC
GLM). Note that the last one is NOT an HOV test; it is another type of ANOVA
called a weighted ANOVA.

Contrasts and Orthogonality
A priori contrasts are one of the most useful and powerful techniques in ANOVA. There are a few
additional considerations that should be made. So what is a contrast? As described in the handout, it is a comparison of some means against
some other means. The comparison is a linear combination.
When we set these up in SAS, we only need to give the multipliers in the CORRECT ORDER,
and SAS will complete the calculations.
The multipliers must sum to zero, and they can be given as fractions or as integers.
For example, comparing pounds of laundry where the treatments are HIS, HERS and OURS, we
want to contrast HIS and HERS with each other, and we want to contrast HIS and HERS combined to OURS.
Contrast 1: Contrast the mean of HIS to the mean of HERS, excluding the mean for OURS.
H0: μHis = μHers. The multipliers are –1 and 1 for HIS and HERS; which gets the positive
and which gets the negative is not usually important. OURS gets a zero and is excluded from
the calculations.
Contrast 2: Contrast the mean of HIS and HERS to the mean of OURS;
H0: (μHis + μHers)/2 = μOurs.
The multipliers are 1/2 and 1/2 for HIS and HERS, and –1 for OURS (or negative on the 1/2s
and positive on the 1). But we could also test H0: μHis + μHers = 2μOurs and get the same
results. The multipliers are now 1, 1 and –2 (or –1, –1 and 2).

Contrast            HIS    HERS   OURS
1                    1     –1      0
2                   1/2    1/2    –1
alternative to 2     1      1     –2

Contrast calculations
A calculation similar to the LSD, but extended to more than just 2 means, is called a
contrast. Suppose we wish to test the mean of the first two means against the mean of
the last 3 means.
1) H0: (μ1 + μ2)/2 = (μ3 + μ4 + μ5)/3, or
(1/2)(μ1 + μ2) – (1/3)(μ3 + μ4 + μ5) = 0, or
(1/2)μ1 + (1/2)μ2 + (–1/3)μ3 + (–1/3)μ4 + (–1/3)μ5 = 0, or
3μ1 + 3μ2 + (–2)μ3 + (–2)μ4 + (–2)μ5 = 0.
This expression is a "linear model", and the last expression of this linear model
is the easiest form for us to work with. We can evaluate the linear model, and if we
can find the variance we can test the linear model. Generically, the variance of a
linear model is “the sum of the variances”, however there are a few other details. As
with the transformations discussed earlier in the semester, when we multiply a value
by "a" the mean changes by "a", but the variance changes by "a²". Also, if there are
covariances between the observations these must also be included in the variance. For
our purposes, since we have assumed independence, there are no covariances.
The linear expression to evaluate is then: a1T1 + a2T2 + a3T3 + a4T4 + ... + akTk, where the "a" are
the coefficients and the "T" are the treatment means (sums can also be used).
The variance is then: a1²Var(T1) + a2²Var(T2) + a3²Var(T3) + a4²Var(T4) + ... + ak²Var(Tk).
In an ANOVA, the best estimate of the variance is the MSE, and the variance of a
treatment mean is MSE/n, where n is the number of observations in that treatment.
We can therefore factor out MSE, and in the balanced case (1/n) can also be factored
out. The result is (MSE/n)(a1² + a2² + a3² + ... + ak²).
If we were to use a t-test to test the linear combination against zero, the t-test would be:
t = (a1T1 + a2T2 + a3T3 + a4T4 + ... + akTk) / √[(MSE/n)(a1² + a2² + ... + ak²)] = (Σ aiTi) / √[(MSE/n) Σ ai²]
This is the test done with treatment means. If treatment totals are used the equation is
modified slightly to (Σ aiTi) / √[n·MSE·Σ ai²] and will give the same result.
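As a numeric sketch of the t test on treatment means (all numbers below are made up for illustration), testing the first two of five treatment means against the last three:

```python
# Contrast t test on treatment means:
# t = sum(a_i * Tbar_i) / sqrt((MSE / n) * sum(a_i ** 2))
# Hypothetical balanced design: 5 treatments, n = 4 observations each.
import math

a = [1/2, 1/2, -1/3, -1/3, -1/3]       # contrast coefficients (sum to zero)
means = [12.0, 14.0, 9.0, 10.0, 11.0]  # treatment means (made up)
MSE, n = 4.0, 4                        # error mean square, per-treatment n

L = sum(ai * m for ai, m in zip(a, means))     # the linear combination
var_L = (MSE / n) * sum(ai ** 2 for ai in a)   # its estimated variance
t = L / math.sqrt(var_L)
F = t ** 2   # the equivalent 1-d.f. F statistic that SAS reports
```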
One final modification. If we calculate our "contrasts" as above without the "MSE" in the
denominator, i.e. Q = (Σ aiTi) / √[n Σ ai²], then all that would remain to complete the
t-test is to divide by √MSE. The value called "Q", when divided by √MSE, gives a t statistic.
If we calculate Q² and divide by MSE we get an F statistic. SAS uses F tests. All we need
provide SAS is the values of "a", the coefficients, in the correct order, and it will
calculate and test the "Contrast" with an F statistic.

Another example
Suppose we are comparing hemoglobin concentrations for various animals with diverse lifestyles.
The animals included in our study are: Wrens, Dogs, Whales, People, Cod, Turkeys and Turtles.
We want to contrast 1) People to others, 2) Aquatic species to others, and 3) Bird species to others.
1) People to Others – 1 category versus 6
2) Aquatic species to others – 3 categories versus 4
3) Bird species to others – 2 categories versus 5
Contrast            Wrens  Dogs  Whale  People  Cod  Turkey  Turtle
People vs others     –1     –1    –1      6     –1    –1      –1
Aquatic vs others    –3     –3     4     –3      4    –3       4
Birds vs others       5     –2    –2     –2     –2     5      –2
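With coefficients built from the category counts above (a hypothetical rendering; the sign choice is arbitrary), the sum-to-zero requirement is easy to verify in code:

```python
# One coefficient per species, in the order:
# Wrens, Dogs, Whale, People, Cod, Turkey, Turtle.
contrasts = {
    "People vs others":  [-1, -1, -1, 6, -1, -1, -1],  # 1 category vs 6
    "Aquatic vs others": [-3, -3, 4, -3, 4, -3, 4],    # 3 categories vs 4
    "Birds vs others":   [5, -2, -2, -2, -2, 5, -2],   # 2 categories vs 5
}
for name, c in contrasts.items():
    assert sum(c) == 0, name   # every valid contrast must sum to zero
```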
Note that all contrasts sum to zero.
In SAS, the contrast statements follow the PROC MIXED or PROC GLM statement.
SAS checks that they sum to zero (to 8 decimal places).

proc mixed data=clover order=data;
class treatmnt;
TITLE2 'ANOVA with PROC MIXED - separate variances';
model percent = treatmnt / htype=3 DDFM=Satterthwaite outp=resids;
repeated / group = treatmnt;
lsmeans treatmnt / adjust=tukey pdiff;
** treatments in order=data ==========> 3DOk1 3DOk4 3DOk5 3DOk7 3DOk13;
contrast '3 low vrs 2 high' treatmnt -2 -2 -2 3 3;
contrast 'odd vrs even' treatmnt 2 -3 2 -3 2;
contrast '1st vrs 2nd' treatmnt 1 -1 0 0 0;
run;

More on Contrasts and Orthogonality
Under some conditions, contrast sum of squares (SS) may add up to less than the treatment SS
or they may add up to MORE than the treatment SS. The most satisfying condition is
when they sum to exactly the treatment SS. Neither departure is necessarily a problem,
as long as the contrasts are testing the hypotheses that you are interested in testing.
If we do only a few contrasts, fewer than the d.f. for the treatments, the contrast SS will
probably add up to less than the treatment SS. No problem.
If we do MANY contrasts, more than the number of d.f. for treatments, the contrast SS will
probably add up to more than the treatment SS. You may be data-dredging.
If you do a number of contrasts equal to the number of treatment d.f., then the contrast SS can
add up to more or less than the treatment SS. However, if the contrasts are orthogonal they
will sum to exactly the treatment SS.
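This property can be demonstrated with a small made-up balanced example (all numbers below are hypothetical): for a full orthogonal set, the contrast SS reproduce the treatment SS exactly.

```python
# With a balanced one-way ANOVA and a full orthogonal set of contrasts,
# the contrast sums of squares add up to exactly the treatment SS.

def contrast_ss(coeffs, means, n):
    """SS for a contrast on treatment means; n = observations per treatment."""
    L = sum(a * m for a, m in zip(coeffs, means))
    return L ** 2 / (sum(a * a for a in coeffs) / n)

means = [10.0, 12.0, 15.0, 17.0]   # 4 treatment means, n = 5 each (made up)
n = 5
grand = sum(means) / len(means)
treatment_ss = n * sum((m - grand) ** 2 for m in means)

orthogonal_set = [
    (1, -1, 0, 0),    # A1 vs A2
    (1, 1, -1, -1),   # A1 & A2 vs A3 & A4
    (0, 0, 1, -1),    # A3 vs A4
]
ss = [contrast_ss(c, means, n) for c in orthogonal_set]
# sum(ss) equals treatment_ss: 3 orthogonal contrasts for 3 treatment d.f.
```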
Contrasts are orthogonal if all their pairwise cross products sum to zero.
The cross products of a set of paired numbers are simply the products of the pairs. For
example, take the following contrasts. Where the treatment levels are A1, A2, A3 and
A4, write contrasts for A1 versus A2, A1 and A2 versus A3 and A4, and A3 versus A4.
These contrasts are given below.

Contrast            a1   a2   a3   a4
c1: a1 v a2          1   –1    0    0
c2: a1&a2 v a3&a4    1    1   –1   –1
c3: a3 v a4          0    0    1   –1

Cross products      a1   a2   a3   a4   sum
c1 & c2              1   –1    0    0    0
c1 & c3              0    0    0    0    0
c2 & c3              0    0   –1    1    0

These contrasts are orthogonal. How about the set below?
Where the treatment levels are A1, A2, A3 and A4, write contrasts for A1 versus A2 and
A3, A1 and A2 versus A3 and A4, and A3 versus A4.

Contrast             a1   a2   a3   a4
c1: a1 v a2&a3        2   –1   –1    0
c2: a1&a2 v a3&a4     1    1   –1   –1
c3: a3 v a4           0    0    1   –1

Here the cross products of c1 and c2 are 2, –1, 1 and 0, which sum to 2, not zero.
If any one set of cross products does not sum to zero, the contrasts are not orthogonal.
Orthogonality is a nice property, but not necessary. Write the contrasts that you want
to test, orthogonal if possible.
Remember the ANOVA source table with its d.f. and Expected mean squares?

Source      d.f.     EMS Random      EMS Fixed
Treatment   t–1      σ²ε + nσ²τ      σ²ε + nΣτi²/(t–1)
Error       t(n–1)   σ²ε             σ²ε
Total       tn–1

Well, a more "modern" approach involves estimating the variance components directly.
Since the components are estimated directly there is no "sum of squares" for each line in the
table. The model is fitted iteratively (maximum likelihood).
Traditional ANOVA table (partial output: Corrected Total DF 24, Sum of Squares 13.6334, F Value 15.38, Pr > F 0.0001)
The results of the tests and contrasts are usually the same. However, the mixed model analysis is
capable of addressing issues that PROC GLM cannot, so when differences exist in the analysis
PROC MIXED is likely to give the better result.
PROC MIXED ANOVA table (partial output: Type 3 Tests of Fixed Effects; Contrasts '3 low vrs 2 high', 'odd vrs even', '1st vrs 2nd'; F Values 7.21 and 19.87; Pr > F 0.0003)
Understand the post-hoc tests: the range tests and contrasts. Be able to interpret these from SAS output.
Understand the differences between the post-hoc tests (error rates). Only one is correct for a given situation.
Understand that contrasts are best done as a priori tests, and there is less concern with inflated
Type I error rates if these are a priori tests. What is the error rate for contrasts by the way?
The ANOVA was summarized. Note those aspects that I consider most important.
Understand Expected mean squares. These will become extremely important in discussing larger
designs. Fortunately SAS will give us the EMS (later); we need only understand them.

The Factorial Treatment Arrangement
Also known as “two-way” ANOVA, this analysis has two (or more) treatments. For example,
treatment A with two levels (a1 and a2) and treatment B with two levels (b1 and b2). The
treatments are cross-classified such that each level of one treatment occurs in combination
with each level of the other treatment (e.g. a1b1, a1b2, a2b1, a2b2).
Each treatment may be fixed or random (independently).
The combinations of treatments are still assigned at random to experimental units, so the design is
still a CRD. For example, the 4 combinations in the example given (a1b1, a1b2, a2b1, a2b2)
would be assigned at random to the available experimental units, preferably in equal numbers
to achieve a balanced design.
This treatment arrangement is called a “factorial”, and the dimensions are usually given as 2 by 2
(above), 2 by 3, 3 by 3, etc. A schematic of a 3 by 3 factorial is given below.
The principal treatments (A and B in the previous examples) are called main effects. The main
effect for treatment A will be calculated from the marginal means or sums of the A treatment,
averaged across the B treatment. Likewise, the main effect of treatment B will be calculated
from the marginal means for treatment B, averaged across the levels of A.
Marginal sums or means are used to evaluate the main effects.
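The marginal means are just row and column averages of the cell means; a small sketch with hypothetical cell means:

```python
# Marginal (main-effect) means from a 3 by 3 table of cell means.
# Rows are levels of B, columns are levels of A (numbers made up).
cells = [
    [10.0, 12.0, 14.0],  # b1
    [11.0, 13.0, 15.0],  # b2
    [12.0, 14.0, 16.0],  # b3
]
a_means = [sum(col) / len(col) for col in zip(*cells)]  # marginal A means
b_means = [sum(row) / len(row) for row in cells]        # marginal B means
# This particular table is perfectly additive: every A column differs
# from the next by a constant 2 units in every row (no interaction).
```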
          A1         A2         A3         B Means
B1        cell mean  cell mean  cell mean  b1 mean
B2        cell mean  cell mean  cell mean  b2 mean
B3        cell mean  cell mean  cell mean  b3 mean
A Means   a1 mean    a2 mean    a3 mean

Calculations for the main effects (uncorrected treatment SS) are the same as for the CRD.
There is however one new issue. It is possible for the same main effects to arise from
various different cell patterns.
Plotting the means for the first case. [plot of cell means against levels a1–a4, one line for b1 and one for b2]
Plotting the means for the second case. [plot of cell means against levels a1–a4, one line for b1 and one for b2]
This lack of consistency in the cells is caused when the marginal means are not strictly additive.
When additivity exists, if some treatment marginal mean (#1) is larger by 2 units than some
other marginal mean (#2), then each cell in treatment #1 will be 2 units higher than the
corresponding cell in treatment #2. This would represent additivity, or no interaction between the treatments.
If, however, the increases and decreases are not consistent with the marginal means, then there is
an interaction, or a lack of additivity. The marginal means (or sums) are used to calculate the
main effects of the treatments. The cell to cell variation is used to measure the interaction
(after adjusting for the main effects). If we plot the treatment means, as done previously, and
the lines do not appear parallel, then there is some interaction.
However, the lines are never perfectly parallel. Is the departure from additivity significant or not?
To determine this, we test the interaction. This is normally done in ANOVA for all factorial
designs.

Interpreting interactions
Sometimes the main effects are very important relative to the interactions and may tell you
most of what you need to know to interpret the results. Sometimes interactions can be
important. Significant interactions indicate that the main effects are somehow inconsistent.
You should determine how this inconsistency affects your eventual conclusions.
Significant interactions should not be ignored.

Factorial contrasts
Factorial experiments, also called two-way ANOVAs, are usually done in SAS by entering two
class variables and their interaction in the model.
PROC GLM; CLASSES A B;
MODEL Y = A B A*B;
RUN;

However, it is also possible to do factorials as contrasts, setting up the treatments as a one-way
ANOVA. For a simple 2 by 2 factorial, with treatments A and B, we have a total of 4 cells
and 3 degrees of freedom. The 4 combinations of the treatments are a1b1, a2b1, a1b2 and
a2b2. We can test the A main effect with a contrast, likewise the B main effect. To test the
interaction, calculate the cross-product of the A and B contrasts.
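As a sketch (the cell order is an assumption following the text), the three contrasts for the 2 by 2 case can be generated in code, with the interaction as the elementwise product of the main-effect contrasts:

```python
# Cells in the order a1b1, a2b1, a1b2, a2b2.
A  = [1, -1,  1, -1]                # A main effect: a1 vs a2
B  = [1,  1, -1, -1]                # B main effect: b1 vs b2
AB = [x * y for x, y in zip(A, B)]  # interaction: cross-product of A and B

def dot(u, v):
    """Sum of cross products of two contrast vectors."""
    return sum(x * y for x, y in zip(u, v))
# All three contrasts sum to zero and are mutually orthogonal.
```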
Contrast             a1b1   a2b1   a1b2   a2b2
A main effect         1     –1      1     –1
B main effect         1      1     –1     –1
A*B interaction       1     –1     –1      1

For larger designs the pattern is similar; for example, examine the 2 × 2 × 2 factorial below.
Treatment A has two levels (a and A), B has two levels (b and B) and C has two levels (c and C).
All contrasts consist of plus ones or minus ones, so only the + or – is shown.

Contrast   abc  Abc  aBc  ABc  abC  AbC  aBC  ABC
A           –    +    –    +    –    +    –    +
B           –    –    +    +    –    –    +    +
C           –    –    –    –    +    +    +    +
A*B         +    –    –    +    +    –    –    +
A*C         +    –    +    –    –    +    –    +
B*C         +    +    –    –    –    –    +    +
A*B*C       –    +    +    –    +    –    –    +

A larger factorial, with more than 2 levels in some treatment, would have more than 2 d.f. in some
treatment. This would require a 2 or 3 or more d.f. contrast. These can be done in SAS but
we will not discuss these this semester.

Summary
Factorials, or two-way ANOVAs, were covered.
A factorial is a way of entering two or more treatments into an analysis.
The description of a factorial usually includes a measure of size, a 2 by 2, 3 by 4, 6 by 3 by 4, 2
by 2 by 2, etc.
Interactions were discussed.
Interactions test additivity of the main effects
Interactions are a measure of inconsistency in the behavior of the cells relative to the main effects.
Interactions are tested along with the main effects
Interactions should not be ignored if significant.
Factorial analyses can be done as two-way ANOVAs in SAS, or they can be done as contrasts.

The Randomized Block Design
This analysis is similar in many ways to a "two-way" ANOVA.
The CRD is defined by the linear model, Yij = μ + τi + εij. The simplest version of the CRD has
one treatment and one error term. The factorial treatment arrangement discussed previously
occurred within a CRD, and it had several different treatments, Yijk = μ + τ1i + τ2j + τ1iτ2j + εijk.
This model has two treatments and one error. It could have many more treatments, and it
would still be a factorial design. Designs having a single treatment or multiple treatments can
all occur within a CRD and are referred to as different treatment arrangements.
There are other modifications of a CRD that could be done. Instead of multiple treatments we
may find it necessary to subdivide the error term.
Why would we do this? Perhaps there is some variation that is not of interest. If we ignore it,
that variation will go to the error term. For example, suppose we had a large agricultural
experiment, and had to do our experiment in 8 different fields, or due to space limitations
in a greenhouse experiment we had to separate our experiment into 3 different greenhouses
or 5 different incubators. Now there is a source of variation that is due to different fields,
or different greenhouses or incubators!
If we do it as a CRD, we put our treatments in the model, but if there is some variation due to
field, greenhouse or incubator it will go to the error term. This would inflate our error term and
make it more difficult to detect a difference (we would lose power).
How do we prevent this? First, make sure each treatment occurs in each field, greenhouse or
incubator (preferably balanced). Then we would factor the new variation out of the error
term by putting it in the model. Yijk = μ + βi + τj + βiτj + εijk
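The payoff from blocking can be sketched numerically (entirely made-up data): with the blocks left out of the model, the block-to-block variation is absorbed into the error SS, inflating it.

```python
# Made-up example: 2 treatments (A, B) in 3 blocks, one observation per
# cell. Fitting blocks removes their variation from the error term.

data = [  # (block, treatment, response) -- hypothetical numbers
    (1, 'A', 10.0), (1, 'B', 12.0),
    (2, 'A', 14.0), (2, 'B', 16.0),
    (3, 'A', 18.0), (3, 'B', 19.0),
]
ys = [y for _, _, y in data]
grand = sum(ys) / len(ys)
total_ss = sum((y - grand) ** 2 for y in ys)

def marginal_ss(index):
    """SS among marginal means for one classification (0=block, 1=treatment)."""
    groups = {}
    for row in data:
        groups.setdefault(row[index], []).append(row[2])
    return sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups.values())

trt_ss = marginal_ss(1)
block_ss = marginal_ss(0)
error_crd = total_ss - trt_ss             # blocks ignored: inflated error
error_rbd = total_ss - trt_ss - block_ss  # blocks in the model: much smaller error
```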
James P. Geaghan Copyright 2010