
Systematic Reviews and Meta-Analytic Methods in Criminology

In an evidence-based model, the source of scientific evidence is empirical research in the form of evaluations of programs, practices, and policies. Not all evaluation designs are considered equal, however. Some evaluation designs, such as randomized controlled experiments, are considered more scientifically valid than others. The findings of stronger evaluation designs are privileged over the findings of weaker research designs in determining “what works” in criminological interventions. For instance, in their report to the U.S. Congress on what works in preventing crime, University of Maryland researchers developed the Maryland Scientific Methods Scale to indicate to scholars, practitioners, and policymakers that studies evaluating criminological interventions may differ in terms of methodological quality of evaluation techniques (Sherman et al., 1997). Randomized experiments are considered the gold standard in evaluating the effects of criminological interventions on outcomes of interest such as crime rates and recidivism.

Randomized experiments have a relatively long history in criminology. The first randomized experiment conducted in criminology is commonly believed to be the Cambridge–Somerville Youth Study (Powers & Witmer, 1951):

In that experiment, investigators first matched individual participants (youths nominated by teachers or police as “troubled kids”) on certain characteristics and then randomly assigned one member of each pair to a treatment group receiving counseling and the other to a control group receiving no counseling. Investigators have consistently reported that the counseling program, despite the best intentions, actually harmed participants over time when compared with doing nothing at all. Although the first participant in the Cambridge–Somerville study was randomly assigned in 1937, the first report of results was not completed until 1951.
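The matched-pair randomization used in the Cambridge–Somerville design can be sketched in a few lines of code. This is a minimal illustration of the general technique, not a reconstruction of the original study's procedure; the function name and the participant identifiers are hypothetical.

```python
import random

def matched_pair_assignment(pairs, seed=None):
    """Given pairs of participants matched on background characteristics,
    randomly assign one member of each pair to treatment (e.g., counseling)
    and the other to control. Returns (treatment_group, control_group)."""
    rng = random.Random(seed)  # seeded for reproducibility
    treatment, control = [], []
    for a, b in pairs:
        # A coin flip decides which member of the pair gets the intervention.
        if rng.random() < 0.5:
            treatment.append(a)
            control.append(b)
        else:
            treatment.append(b)
            control.append(a)
    return treatment, control

# Hypothetical youths, already matched two-by-two on characteristics.
pairs = [("youth_1a", "youth_1b"), ("youth_2a", "youth_2b")]
treatment, control = matched_pair_assignment(pairs, seed=42)
```

Because randomization happens within each matched pair, the two groups are balanced on the matching characteristics by construction, which is what allows the study's long-term outcome comparison to be attributed to the counseling itself.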