To help inform your impact giving about evidence-based policy, we’ve included the report’s highlights below. For a detailed analysis, please download the PDF.
Best for advanced donors.
In recent decades, policymakers have grown more likely to use research evidence to guide their attempts to meet social and educational goals. Researchers have identified many reforms worthy of broad expansion, including changes in the welfare system and programs that help low-income parents foster their children’s early development. Yet despite these successes, on the whole it remains hard to implement large-scale interventions supported by evidence. Even when interventions are grounded in current knowledge and show positive effects in early tests, those effects are often modest, and they frequently fail to reappear when the programs expand to other settings.
Generally speaking, the prevailing paradigm for building evidence about programs or policies is a linear one: A new intervention is developed in response to a social problem. It undergoes early tests. If those tests find positive results relative to what participants would have otherwise experienced, then additional impact studies are conducted in new locations (replication). If the replication studies are positive, funders will support further expansion (scaling up), expecting to see similar effects as long as future versions implement the core elements of the intervention faithfully. This pipeline paradigm is sometimes accompanied by a tiered funding model, in which more funding is made available to expand a program to a larger scale as it generates more — and more rigorous — evidence of its continued effects in more locations.
This paper replaces the pipeline paradigm for evidence building with a cyclical paradigm that encompasses evidence building, implementation, and adaptation.
Download the full report about evidence-based policy and social change by Virginia Knox, Carolyn J. Hill, and Gordon Berlin at mdrc.org.
This updated paradigm can be used by funders, researchers, and practitioners who want to use evaluation to strengthen programs and their impacts. Ideally, it can encourage conversations among these different contributors about improving the programs that are, after all, the foundation of evidence-based policy.