Giving Compass' Take:

• Urban Institute examines effective practices of impact measurement and why repeated evaluations may be necessary, especially if a program is being tried in different contexts.

• This is a good reminder that the hard, important work of evaluation is never really over. In what ways can we move towards better data-gathering methods?

• Here's more on why philanthropy and measurement are inextricably intertwined.


Strong evidence on which social programs work is crucial for effective policymaking, as it ensures that limited government resources go to programs that deliver results. Impact evaluations, which use randomized controlled trials or quasi-experimental designs, contribute to this evidence base. But what if an untested social policy or program appears so intuitive, and has a theory of change so compelling, that government officials are highly confident in its ability to deliver the promised outcomes? Is there still a benefit to conducting a rigorous evaluation?

The answer is yes. Many social policies and programs have been assumed to be effective, only to be proven otherwise. Impact evaluations show us which programs merit investment and which do not.

The Cambridge-Somerville Youth Study offers one example of such an unexpected result. Initiated in the late 1930s, this longitudinal study has been recognized as the first randomized evaluation of a social program. The program it studied, mentoring for at-risk boys and young men from low-income backgrounds, appeared to have obvious and intuitive merit. Mentees remembered their experiences fondly, and many believed the program improved their lives.

But repeated analyses published in 1948, 1959, and 1978, comparing outcomes for those who were mentored with those who were not, suggested the program was not effective. In fact, the 1978 study found that on several measures, including alcoholism, multiple criminal offenses, and stress-related diseases, the treatment group fared worse than the control group. Compounding these results, the study found that the longer someone remained in the intervention, the worse their outcomes. It was stunning but compelling evidence that this social program, which had appeared promising, was actually detrimental.

This story underscores the importance of evaluating programs that are new or are being applied in new ways. But what if a program already has a strong evidence base? Is another impact evaluation necessary?

Read the full article on social program impact evaluations by Will Schupmann and Matthew Eldridge at Urban Institute.