criminal justice, it suggests that systematic reviews of what works in criminal justice may be strongly biased when including nonrandomized studies. In efforts such as those being developed by the Campbell Collaboration, such potential biases should be taken into account in coming to conclusions about the effects of interventions.

Notes

1. Statistical adjustments for random group differences are sometimes employed in experimental studies as well.

2. We should note that we have assumed so far that external validity (the degree to which it can be inferred that outcomes apply to the populations that are the focus of treatment) is held constant in these comparisons. Some scholars argue that experimental studies are likely to have lower external validity because it is often difficult to identify institutions that are willing to randomize participants. Clearly, where randomized designs have lower external validity, the assumption that they are to be preferred to nonrandomized studies is challenged.

3. Kunz and Oxman (1998) not only compared randomized and nonrandomized studies but also adequately and inadequately concealed randomized trials and high-quality versus low-quality studies. Generally, high-quality randomized studies included adequately concealed allocation, while lower-quality randomized trials were inadequately concealed. In addition, the general terms high-quality trials and low-quality trials indicate a difference where "the specific effect of randomization or allocation concealment could not be separated from the effect of other methodological manoeuvres such as double blinding" (Kunz and Oxman 1998, 1185).

4. Moreover, it may be that the finding of higher standardized effect sizes for randomized studies in this review was due to school-level as opposed to individual-level assignment. When only those studies that include a delinquency outcome are examined, a larger effect is found when school rather than student is the unit of analysis (Denise Gottfredson, personal communication, 2001).

5. As the following Scientific Methods Scale illustrates, the lowest acceptable type of evaluation for inclusion in the Maryland Report is a simple correlation between a crime prevention program and a measure of crime or crime risk factors. Thus studies that were descriptive or contained only process measures were excluded.

6. There were also (although rarely) studies in the Maryland Report that reported two findings in opposite directions. For instance, in Sherman and colleagues' (1997) section on specific deterrence (8.18-8.19), studies of arrest for domestic violence had positive results for employed offenders and backfire results for nonemployed offenders. In these isolated cases, the study was coded twice with the same scientific methods scores and each of the investigator-reported result scores (of 1 and -1) separately.