Chapter 10:
Detecting, Interpreting, and Analyzing Program Effects
This chapter is divided into the following sections:
I. The Magnitude of a Program Effect
II. Detecting Program Effects
III. Assessing the Practical Significance of Program Effects
IV. Examining Variations in Program Effects
V. The Role of Meta-Analysis
I will provide a brief summary and comments for each section.
I. The Magnitude of a Program Effect
According to RLF, an effect size statistic is “a statistical formulation of an estimate of
program effect that expresses its magnitude in a standardized form that is comparable
across outcome measures.”
In other words, rather than asking whether a group difference or a relationship was
statistically significant (which says ONLY that you can reject the null hypothesis of NO
effect whatsoever, without saying anything about the magnitude of the relationship or
effect), effect sizes provide essential information about the size or magnitude of the
effect or relationship.
RLF first mention the use of absolute differences between means (the posttest mean for the
experimental group minus the posttest mean for the control group, or the posttest mean for
the experimental group minus the pretest mean for the experimental group) and percentage
change (e.g., the difference between the post and pre values divided by the pre value) as
common ways to determine the magnitude of an effect.
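The two raw-magnitude measures just described can be sketched in a few lines of Python. The means here are invented purely for illustration; they do not come from RLF:

```python
# Hypothetical means (invented for illustration, not from RLF).
pre_mean = 50.0       # pretest mean, experimental group
post_mean = 62.0      # posttest mean, experimental group
control_post = 55.0   # posttest mean, control group

# Absolute difference between means (posttest: experimental minus control)
absolute_difference = post_mean - control_post

# Percentage change (post minus pre, divided by pre)
percent_change = (post_mean - pre_mean) / pre_mean * 100
```

Both numbers are easy to interpret but are tied to the original measurement scale, which is why RLF go on to recommend standardized indicators.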
However, they also recommend the use of more standardized measures, such as these
effect size indicators:
a) Standardized mean difference (see Exhibit 10A for the calculation), which tells you the
size of a program effect in standard deviation units. This is used when your outcome
variable is quantitative and your independent variable is categorical (experimental vs.
control).
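As a rough sketch of how a standardized mean difference is computed (the exact formula is in Exhibit 10A; this version uses a pooled standard deviation, and the scores are invented for illustration):

```python
from statistics import mean, stdev

def standardized_mean_difference(treatment, control):
    """Program effect in standard deviation units, using a pooled SD."""
    n_t, n_c = len(treatment), len(control)
    s_t, s_c = stdev(treatment), stdev(control)  # sample standard deviations
    # Pool the two groups' variances, weighted by degrees of freedom
    pooled_sd = (((n_t - 1) * s_t**2 + (n_c - 1) * s_c**2) / (n_t + n_c - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical posttest scores for the two groups
treatment_scores = [78, 85, 90, 82, 88]
control_scores = [70, 75, 80, 72, 78]
d = standardized_mean_difference(treatment_scores, control_scores)
```

A value of d around 0.5, for instance, would mean the intervention group scored about half a standard deviation higher than the control group.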
b) Odds ratio (see Exhibit 10A for the calculation), which tells you “how much smaller or
larger the odds of an outcome event, say, high school graduation, are for the intervention
group compared to the control group.”
The odds ratio is used when both your independent variable (treatment vs. control) and
your dependent variable (e.g., graduate high school vs. not graduate, have cancer vs. do
not have cancer) are categorical variables.
An odds ratio of 1 says the two groups have equal odds of experiencing the outcome.
An odds ratio greater than 1 says that intervention group participants were more
likely to experience the outcome.
For example, an odds ratio of 2 would say that “the members of the intervention group
were twice as likely to experience the outcome as members of the control group.”
Finally, an odds ratio of less than 1 means that members of the intervention group
were less likely to show the outcome.
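The odds ratio can be computed directly from the counts in a 2×2 table. The graduation counts below are hypothetical, chosen so the ratio comes out to exactly 2:

```python
def odds_ratio(treat_yes, treat_no, ctrl_yes, ctrl_no):
    """Odds of the outcome in the intervention group relative to the control group."""
    odds_treatment = treat_yes / treat_no
    odds_control = ctrl_yes / ctrl_no
    return odds_treatment / odds_control

# Hypothetical counts: 40 of 50 intervention participants graduated (40 yes, 10 no)
# versus 30 of 45 controls (30 yes, 15 no).
or_value = odds_ratio(40, 10, 30, 15)
```

Here the intervention group's odds of graduating are 40/10 = 4 and the control group's are 30/15 = 2, so the odds ratio is 2: the intervention group's odds of graduating are twice those of the control group.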
Note that some additional effect size indicators not mentioned by RLF include eta-squared
and omega-squared, R-squared, and r-squared, which tell you how much variance
in the outcome variable is explained by the independent variable(s) (e.g., the IV might be
treatment vs. control). Other effect sizes include beta (the standardized regression
coefficient).
 Spring '11
 Staff
