Giving Compass' Take:

• In this 80,000 Hours podcast, economist Dr. Eva Vivalt discusses a core problem with trying to replicate outcomes in global development: the results are often inconclusive.

• Does this mean we can't learn anything from evidence-based aid reforms? Not necessarily, but the data collection and testing must be more precise.

• Speaking of which, the UN's Centre for Humanitarian Data could make a big impact.


If we have a study on the impact of a social program in a particular place and time, how confident can we be that we’ll get a similar result if we study the same program again somewhere else?

Dr. Eva Vivalt is a lecturer in the Research School of Economics at the Australian National University. She compiled a huge database of impact evaluations in global development — including 15,024 estimates from 635 papers across 20 types of intervention — to help answer this question.

Her finding: not confident at all.

The typical study result differs from the average effect found in similar studies so far by almost 100%. That is to say, if all existing studies of an education program find that it improves test scores by 0.5 standard deviations, the next result is as likely to be negative or greater than 1 standard deviation as it is to fall between 0 and 1 standard deviations.
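To get an intuition for that claim, here is a minimal sketch, not Vivalt's actual model: it assumes the mean effect across existing studies is 0.5 standard deviations, takes the typical (mean absolute) deviation of a new result to be ~100% of that mean, and models new results as normally distributed. All of those modeling choices are illustrative assumptions.

```python
import math
import random

# Hypothetical illustration: mean effect of 0.5 SD across prior studies,
# with the typical (mean absolute) deviation of a new result equal to
# ~100% of that mean, i.e. also 0.5 SD.
MEAN_EFFECT = 0.5
TYPICAL_DEVIATION = 0.5

# For a normal distribution, mean absolute deviation = sigma * sqrt(2/pi),
# so back out the implied standard deviation of new results.
sigma = TYPICAL_DEVIATION / math.sqrt(2 / math.pi)

# Simulate many "next study" results and count how often the result
# falls outside the 0-1 SD range entirely.
random.seed(0)
draws = [random.gauss(MEAN_EFFECT, sigma) for _ in range(100_000)]
outside = sum(1 for x in draws if x < 0 or x > 1) / len(draws)
print(f"share of simulated next results outside 0-1 SD: {outside:.2f}")
```

Under these assumptions, roughly four in ten simulated "next results" land outside the 0-1 range, which is in the spirit of the article's point: a deviation as large as the average effect itself makes the next study's result close to a coin flip.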

She also observed that results from smaller studies conducted by NGOs, often pilot studies, tended to look promising. But when governments implemented scaled-up versions of those programs, performance dropped considerably.

Read the full article about the flaws in evidence-based development by Robert Wiblin at 80000hours.org.