Too often I find myself reading an article or paper recommending how foundations or nonprofits should change the way they work to have more impact, yet with little to no information about how the authors arrived at their recommendations. Sometimes it is apparent that data was collected, but little is said about how it was collected or how it was analyzed. It should never be difficult to understand the methodologies and reasoning on which recommendations for improving practice are based. Readers should be able to tell whether those recommendations rest on rigorously collected and analyzed data or on questionable data and analysis. All of us who publish data from surveys, or from any type of research, with the intention of influencing practice have a responsibility to provide the information that allows readers to judge the quality of the research for themselves.

Through the Transparency Initiative, AAPOR is doing its part to encourage research organizations that conduct surveys to adhere to this maxim. We at CEP share this value, and we’re excited to be a charter member of such an important and necessary movement.

Read the full article about judging research by Ellie Buteau at The Center for Effective Philanthropy.