Originally posted at Evidence Soup
Randomized controlled trials (RCTs) are typically viewed as the gold standard for developing evidence and separating what works from what doesn't. (I'll leave a fuller discussion of RCTs for another day.)
The MacArthur-funded, Washington, DC-based Coalition for Evidence seeks to "increase government effectiveness through the use of rigorous evidence", and by rigorous, they mean derived from RCTs. The group says that: "[i]n most areas of social policy – such as education, poverty reduction, and crime prevention – government programs often are implemented with little regard to evidence, costing billions of dollars yet failing to address critical social problems. However, rigorous studies have identified a few highly-effective program models and strategies ('interventions'), suggesting that a concerted government effort to build the number of these proven interventions, and spur their widespread use, could bring rapid progress to social policy similar to that which transformed medicine..."
These guys are drinking the Kool-Aid. The Coalition continues: "[A] central theme of our advocacy, consistent with a recent National Academies recommendation, is that evidence of effectiveness generally cannot be considered definitive without ultimate confirmation in well-conducted randomized controlled trials."
Their Top Tier Evidence initiative is intended as "a validated resource used by federal officials to assist policy officials in identifying interventions meeting the Congressional Top Tier evidence standard, defined in recent legislative provisions as 'well-designed randomized controlled trials [showing] sizeable, sustained effects on important… outcomes' (e.g., Public Laws 110-161 and 111-8)."
Why can't more organizations do this? With that in mind, the group recently identified two programs as promising, describing them as Near Top Tier (satisfying most, but not all, of their criteria for rigorous, evidence-based interventions). I like that the Coalition provides an evidence summary explaining its reasoning and the underlying evidence supporting its rating of each program. For the Child FIRST program, key findings include "40-70% reduction in serious levels of (i) child conduct and language development problems, and (ii) mothers’ psychological distress, one year after random assignment." For the PMTO program, findings include: "Sons of women in the program group had substantially fewer arrests over nine years (an average of 0.76 arrests per boy in the PMTO group versus 1.34 per boy in the control group)."
But wait, there's more. Each evidence summary provides a brief cost-benefit analysis pinpointing the potential taxpayer costs or savings from the program in question. This is complicated, I know, but we shouldn't be promoting widespread programs that can't document some type of benefit.
High hurdle. The 2009 National Academies report (mentioned earlier) says:
“Federal and state agencies should prioritize the use of evidence-based programs and promote the rigorous evaluation of prevention and promotion programs in a variety of settings in order to increase the knowledge base of what works, for whom, and under what conditions. The definition of evidence-based should be determined by applying established scientific criteria. In applying scientific criteria, the agencies should consider the following standards:
Evidence for efficacy or effectiveness of prevention and promotion programs should be based on designs that provide significant confidence in the results....
When evaluations with such experimental designs are not available, evidence for efficacy or effectiveness cannot be considered definitive, even if based on the next strongest designs, including those with at least one matched comparison."
To comment, visit the original post here: Evidence Soup