Abhijit Banerjee puts his finger on a hard truth: there is a lack of rigorous impact evaluation in foreign aid. We collectively lack the will to learn systematically from experience about what works in development programs. This is the soft spot in the argument for more aid—the reason that advocates have to use and reuse pictures of dying children to make their case—and the excuse rich countries use to justify doing too little to help improve social and economic conditions in poor countries. An unwholesome mix of politics, guesswork, and wishful thinking serves as the rickety foundation for the allocation (and misallocation) of public funds.
As Banerjee explains, it doesn’t have to be this way. We now have examples—albeit somewhat idiosyncratic ones—that demonstrate the feasibility of rigorous impact evaluation of social and other programs in developing countries. And we have irrefutable evidence that good intentions don’t guarantee good outcomes. New approaches to improving education and health, combating corruption, providing disaster relief, and many other core activities of governments and NGOs have to be tested in real-world conditions.
But merely exposing the lack of evidence-based decision-making and offering pat methods to generate the required evidence doesn’t solve the problem. To know what works, we also need to understand the failures of the knowledge market and identify collective ways to address them.
A Center for Global Development working group that was convened to study exactly these questions found three basic incentive problems. First, a portion of the knowledge that is generated through impact evaluation is a public good. That is, the people who benefit from the knowledge extend far beyond those who are directly involved in the program and its funding. So, for example, when a girls’ scholarship program in Bangladesh is positively evaluated, policymakers and program designers in India, Pakistan, and even Senegal can use that information—not necessarily as a model, but as a point of reference. The broad benefits are amplified greatly when the same type of program is evaluated in multiple contexts and addresses enduring questions. However, the cost-benefit calculation made by any particular agency might not include those benefits, making impact evaluation appear simply not worth doing.
Second, the rewards for institutions and for individual professionals within them come from doing, not from building evidence or learning. Those who work at USAID, the World Bank, and ministries of education are rewarded for getting programs up and running. In fact, for a long time the numbers of projects launched and the volume of money spent have been the primary indicators of performance. It is thus extremely difficult to preserve funding for rigorous evaluation or to delay the initiation of a project to design the evaluation and conduct a baseline study. Time and again we see resources for impact evaluation cannibalized for project implementation.
Third, there are, frankly, disincentives to finding out the truth. If program managers, leaders of development institutions, or ministers of social development believe that future funding depends directly on achieving a high level of success rather than on learning from every experience, the temptation is high to avoid impact evaluation and concentrate instead on producing and disseminating anecdotal success stories.
The aversion to recognizing unfavorable results is woven into the fabric of most bureaucracies; rare is the institution that is comfortable acknowledging unsuccessful investments and projects, sharing that information in a transparent manner, and making adjustments accordingly. And when peer institutions are behaving similarly or worse, there is no benefit to being the institution that is best able to learn from its errors.
So getting to the point where far more funding decisions are based on good evidence means addressing three big challenges: figuring out how to fund public goods; safeguarding funding for impact evaluation; and rewarding honesty and learning.
These are big challenges, but not impossible ones. Surely they are easier than the grand goals that most development agencies routinely profess—eradicating disease, eliminating poverty, reforming completely dysfunctional governments. If a set of developing-country governments, development agencies, foundations, and NGOs decided that they cared more about poverty reduction than propaganda, they could lead by example: define a shared agenda for impact evaluation, collectively fund independent impact evaluations of a set of major programs in several countries, build good evaluation in from the start, and agree to use the resulting evidence in the design of future investments.