rlf9 - Assessing Program Impact: Alternative Designs...

Assessing Program Impact: Alternative Designs

Required reading: RLF, Chapter 9.

The focus of this chapter is on how to design an evaluation when random assignment to groups is not possible but constructing a control group is possible. Quasi-experiments are defined as experiments that do not have random assignment but do involve manipulation of the independent variable. You learned in the last chapter about the power of random assignment as a technique for controlling all known and unknown extraneous variables by equating the groups at the start of an experiment. In quasi-experiments we must come up with other strategies for equating the groups and for ruling out the alternative explanation that an observed relationship between the IV (independent variable) and DV (dependent variable) is due to one or more uncontrolled extraneous confounding variables.

Quasi-experiments are not nearly as strong as randomized experiments for establishing firm evidence of cause and effect (e.g., evidence of program impact). If done well, however, quasi-experiments can provide moderately strong evidence of program impact. Don't forget that once you determine impact, you must make an evaluative decision about the program. Remember the four steps in the logic of evaluation?

I will provide comments about each of the major sections of this chapter, as well as a section that I am adding on additional alternative designs not discussed by the authors:

- Bias in Estimation of Program Effects
- Quasi-Experimental Impact Assessment
- Additional Alternative Designs Not Discussed by RLF
- Some Cautions About Using Quasi-Experiments for Impact Assessment
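The idea above — that nonrandom assignment lets an extraneous variable masquerade as a program effect, and that quasi-experimental strategies try to "equate" the groups after the fact — can be illustrated with a small simulation. This is a minimal sketch, not from RLF; the variable names (e.g., motivation) and all numbers are invented for illustration, and stratification stands in for the many adjustment strategies the chapter discusses.

```python
# Hypothetical simulation of selection bias in a nonequivalent
# comparison-group design. All names and numbers are illustrative.
import random

random.seed(42)
TRUE_EFFECT = 2.0   # the real program effect built into the simulation
N = 10_000

records = []  # (stratum, treated, outcome)
for _ in range(N):
    motivation = random.gauss(0.0, 1.0)        # extraneous variable
    stratum = motivation > 0.0
    # Nonrandom assignment: highly motivated people enroll more often.
    treated = random.random() < (0.8 if stratum else 0.2)
    # Outcome depends on motivation AND (truly) on the program.
    outcome = (5.0 + 3.0 * motivation
               + (TRUE_EFFECT if treated else 0.0)
               + random.gauss(0.0, 1.0))
    records.append((stratum, treated, outcome))

def mean(xs):
    return sum(xs) / len(xs)

# Naive estimate: simple treated-vs-control difference in means.
naive = (mean([y for _, t, y in records if t])
         - mean([y for _, t, y in records if not t]))

# Adjusted estimate: compare groups WITHIN motivation strata, then average,
# so the comparison is between people of similar motivation.
diffs = []
for s in (True, False):
    t_ys = [y for st, t, y in records if st == s and t]
    c_ys = [y for st, t, y in records if st == s and not t]
    diffs.append(mean(t_ys) - mean(c_ys))
adjusted = mean(diffs)

print(f"true effect:       {TRUE_EFFECT:.2f}")
print(f"naive estimate:    {naive:.2f}")     # inflated by selection bias
print(f"adjusted estimate: {adjusted:.2f}")  # close to the true effect
```

The naive comparison blames the program for what is really a motivation difference between the groups; stratifying on the measured extraneous variable recovers an estimate near the true effect. Of course, this only works for variables you have measured — which is exactly why randomization, which handles unknown confounds too, is stronger.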
Bias in Estimation of Program Effects

In this section RLF discuss some of the threats to validity that can pop up when you are estimating program effects. For a review of the concepts of internal and external validity, read my notes from my book (link: http://www.southalabama.edu/coe/bset/johnson/dr_johnson/lectures/lec8.htm ) and study the tables that connect designs to the threats to internal validity (Tables 9.1 and 9.2, here: http://www.southalabama.edu/coe/bset/johnson/dr_johnson/2oh_masters.htm ). You also should examine Table 10.1 (not online), which shows the threats to the nonequivalent comparison-group design (on page 303 of Johnson and Christensen). Here they are for your convenience:
[Tables connecting designs to threats to internal validity reproduced as images in the original document.]
Here are the "threats" mentioned by RLF:

- Selection bias: this includes any factor that the groups differ on, often as a result of their initial composition, but also including participants dropping out in unique ways, which is called differential attrition.
- Secular trends: these are not typically mentioned in research methods books but should be, especially for studies that take place over an extended period of time, as in interrupted time-series designs and ongoing field experiments of any type.
- Interfering events: this is basically what we call history effects in the links above.
- Maturation
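To see why a secular trend matters for studies that run over a long period, consider a tiny numerical sketch (not from RLF; all numbers are invented): an outcome that was already drifting upward before the program began will make a simple before/after comparison look like a program effect, while projecting the pre-program trend forward reveals there is none.

```python
# Hypothetical sketch: a secular trend masquerading as a program effect.
# Monthly scores drift upward 0.5/month; the "program" starts at month 12,
# and NO real program effect is built into the data.
series = [50.0 + 0.5 * month for month in range(24)]
pre, post = series[:12], series[12:]

def mean(xs):
    return sum(xs) / len(xs)

# Naive before/after comparison: looks like a sizable "program effect".
naive_gain = mean(post) - mean(pre)   # purely the secular trend (here 6.0)

# Trend-aware check: project the pre-program slope forward and compare
# the observed post-program scores against that projection.
slope = (pre[-1] - pre[0]) / (len(pre) - 1)
projected = [pre[-1] + slope * (i + 1) for i in range(len(post))]
trend_adjusted = mean(post) - mean(projected)  # ~0: nothing beyond the trend

print(f"naive before/after gain: {naive_gain:.1f}")
print(f"gain beyond secular trend: {trend_adjusted:.1f}")
```

This projection logic is the basic intuition behind interrupted time-series designs: multiple pre-program observations let you model the trend, so the program effect is judged against where the series was already heading rather than against a single pretest mean.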