In their essay announcing the new Economics for Inclusive Prosperity (EfIP) initiative, Suresh Naidu, Dani Rodrik, and Gabriel Zucman describe how economics is deployed to promote “market fundamentalism” in policy discourse. In many policy discussions, they argue, “neoliberalism appears to be just another name for economics.”
I agree. But economic arguments distort policy thought even beyond naïve assertions that “markets always work.” Especially in its most applied, policy-facing branches, the field stands deeply committed to quantification as the backbone of policy analysis. Indeed, a particular approach to quantification for policy evaluation is what many applied economists mean by economics.
This dogma of quantification creates perils for policy that are, in my view, as significant and far-reaching as the market fundamentalism the EfIP authors highlight. I will explore three such perils. As economists such as those affiliated with the EfIP and other allied social scientists begin to rethink the relationships between their disciplines and public policy, they would be well-served to grapple with these issues head-on.
First, though, an affirmation. Quantification may be perilous, but it is essential. As the EfIP authors argue, “systematic empirical evidence is a disciplining device against ideological policy prescriptions.” Quantification forces us to clearly define questions and concepts. It enables serious evaluation of policies and comparison of alternatives. It compels us to confront trade-offs. It replaces speculation and sentiment with rigor and precision. And it creates a framework of contestability: when costs, benefits, and values are quantified and compared, the terms of debate and standards of evidence are clear. These features of quantification are critical for democratic accountability and good governance. But quantification isn’t perfect, and we must look its limits squarely in the face.
Consider a popular vision of policy analysis, which I paraphrase from the standard textbook, A Primer for Policy Analysis (1978), by Edith Stokey and Richard Zeckhauser:
1. We identify some problem that needs solving.
2. We enumerate the possible courses of action that might address the problem.
3. We analyze the likely consequences of these various actions. This often involves quantitative analysis (e.g., analyzing data, evaluating related programs).
4. We value all the possible consequences. Here, again, we are quantifying, usually by creating monetary measures of people’s values (called “willingness to pay” in the jargon) and then engaging in a cost-benefit analysis.
5. We choose among the alternatives on the basis of our answers from step 4.
In this linear scheme, quantification is simply a tool, a sort of Leibnizian calculus ratiocinator for public policy. A machine takes in a social objective (presumably given to us by politics, ethics, or the technocratic dictates of “good policy”) and spits out a policy choice. Quantification merely operates on the alternatives—measuring and scoring them—rather than shaping the alternatives themselves. In this black box model, politicians submit policy objectives to a room full of quants, and at the end out pops the “best” policy choice for the politician to take back to government. We are invited to think of quantification as merely in service of public policy aims that are defined elsewhere and by others.
This view is popular but profoundly misleading. Quantification and the aims of public policy are deeply intertwined. We cannot divide the world into a neat dualism of aims and tools—the aims of public policy, on the one hand, and objective, quantitative tools used to pursue those goals, on the other. Instead, there is an inherent feedback between the two. How and what we quantify shapes and determines the aims of public policy, just as the aims of public policy shape and determine what we quantify.
The fiction that quantification is some wholly objective or technocratic undertaking, informed by, but separate from, the aims of public policy, lies at the heart of three key perils of quantification: it flattens the normative standards we use to evaluate policy; it distorts the incentives of those who make and implement policy; and it narrows our frame of vision, limiting the set of policy problems we acknowledge exist or can be addressed.
How Quantification Shapes Our Normative Standards
Despite the rich panoply of normative concerns considered by moral and political philosophers—from deontological theories concerned with rights and duties to consequentialist ones concerned with outcomes—essentially all quantitative policy analysis is rooted in welfarism, the view that policies should be evaluated based on their implications for human wellbeing. Moreover, one welfarist standard predominates: utilitarianism. And not just utilitarianism, but what I will call crass utilitarianism—one that defines wellbeing largely in terms of material costs and benefits such as economic prosperity, health, and other factors for which willingness to pay is relatively straightforward to measure.
As a matter of intellectual history, policymaking’s commitment to quantification preceded the commitment to crass utilitarianism. Quantification is essential to rigorous policy analysis, the idea went. So, because we are committed to making good policy decisions, we are committed to quantification.
Now, once we are committed to quantification, some form of consequentialism is really the only game in town. After all, what is there to quantify but consequences? But this should not worry us too much. A quantitative consequentialism is, in principle, quite flexible; it need not be crassly utilitarian. We can put a value on various non-material factors such as rights, duties, responsibility, dignity, or what have you. Moreover, once you know the quantitative effects of a policy on people’s welfare, you can introduce all sorts of equity considerations into policy evaluation. We could, for example, after quantifying all the effects, define the best policy as the one that maximizes total utility, subject to the constraint that no two individuals’ utilities differ by more than, say, ten percent.
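To illustrate that flexibility, here is a minimal Python sketch of such an equity-constrained evaluation. The policies and utility numbers are entirely invented for illustration, and the ten-percent rule mirrors the hypothetical constraint just described:

```python
# Toy policy evaluation: choose the policy that maximizes total utility,
# subject to an equity constraint on the spread of individual utilities.
# All policies and utility numbers are invented for illustration.

POLICIES = {
    "status_quo":   [50, 50, 50],
    "growth_first": [95, 60, 40],   # highest total, but very unequal
    "balanced":     [62, 60, 58],   # lower total, far more equal
}

def passes_equity(utilities, max_gap=0.10):
    """No two individuals' utilities may differ by more than max_gap (10%)."""
    return (max(utilities) - min(utilities)) <= max_gap * max(utilities)

def best_policy(policies):
    # Discard policies that violate the equity constraint, then
    # pick the remaining policy with the highest total utility.
    feasible = {name: u for name, u in policies.items() if passes_equity(u)}
    return max(feasible, key=lambda name: sum(feasible[name]))

print(best_policy(POLICIES))  # "balanced": it beats "status_quo" on total
                              # utility, while "growth_first" fails equity
```

The point of the sketch is only that equity considerations slot into a quantitative framework without difficulty once the welfare numbers are in hand.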
What crass utilitarianism has going for it over all other normative frameworks—even other forms of welfarism—is that it lends itself easily to quantitative analysis. It is hard to figure out how to quantify the value of rights and duties, or how to weight equity considerations. It is much more straightforward—both conceptually and practically—to quantify material costs and benefits and then just add and subtract to figure out whether a policy is good or bad.
Indeed, crass utilitarianism is so easy to work with that it has become just a part of the “standard assumptions” in the background of applied economic thinking about quantitative policy analysis. As a result, today we are often not only crass utilitarians, we are unreflective crass utilitarians. The process of trying to maximize net utility—ignoring questions of rights, duties, responsibilities, equity, dignity, and so on—is so ingrained in our practice and thought that we hardly even notice we are doing it. We simply take for granted that a good policy just is one that optimizes material benefits.
Here we see how misleading is the linear model of policy analysis. In truth, the very notion of the “aims” of public policy is shaped in a deep way by the dictates of quantification. We don’t quantify because we are utilitarians. We are utilitarians because we quantify. Reflecting on similar themes, Michel Foucault perhaps put it best, saying that for modern policy analysts, utilitarianism has ceased to be a philosophy or even an ideology. It has become “a technology of government.”
What is the problem? We have agreed that if we want to be rigorous, consider trade-offs, and have a contestable framework for policy evaluation, we must quantify and thus must be consequentialists. So, what is wrong with allowing a materialist utilitarianism to define our notion of good policy?
In partial answer, I’d like to tell you a story.
In the early 1990s Larry Summers—former president of Harvard, Treasury Secretary under President Clinton, and later director of President Obama’s National Economic Council—wrote a memo when he was chief economist at the World Bank. He had the following thought:
Shouldn’t the World Bank be encouraging MORE migration of the dirty industries to the LDCs [Less Developed Countries]? . . .
The costs of health impairing pollution depends on the foregone earnings from increased morbidity and mortality. From this point of view, a given amount of health impairing pollution should be done in the country with the lowest cost, which will be the country with the lowest wages. I think the economic logic behind dumping a load of toxic waste in the lowest wage country is impeccable and we should face up to that.
That toxic dumping in low-wage countries has “impeccable economic logic” is an interesting assertion. Here are three claims, each of which seems to me correct:
- It is probably the case that the average willingness to pay for avoiding a little more toxic waste is higher in rich countries than in poor.
- Hence, moving some toxic pollution from rich countries to poor countries will increase net material wellbeing in the world.
- If these are the only costs and benefits (i.e., we don’t, say, count allowing rich countries not to take responsibility for their own actions as a direct cost) and we are utilitarians, then doing so is good policy.
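In toy numbers (entirely invented; the memo gives none), the arithmetic behind these claims is just:

```python
# Toy version of the memo's cost-benefit arithmetic. The willingness-to-pay
# (WTP) figures are invented for illustration; the memo supplies no numbers.

# WTP to avoid one unit of toxic waste, reflecting foregone earnings:
WTP_RICH = 1000.0   # dollars, high-wage country
WTP_POOR = 100.0    # dollars, low-wage country

# Moving one unit of waste from the rich country to the poor country:
benefit_to_rich = WTP_RICH          # harm avoided in the rich country
cost_to_poor = WTP_POOR             # harm incurred in the poor country
net_change = benefit_to_rich - cost_to_poor

# A positive net change is all the crass utilitarian calculus needs
# to score the transfer as good policy.
print(net_change)  # 900.0
```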
To call that chain of arguments “economic logic” is revealing, for both the first and last steps have nothing to do with economics. They have to do with values. But, I suspect that for Summers, as for many quantitative policy analysts, the assumption that good policy equals utilitarian policy applied only to material costs and benefits is so deeply ingrained that it just gets folded into the standard assumptions and, thus, can be counted as part of the impeccable logic, a scientific inevitability.
Now, I don’t mean to suggest that there are no arguments in favor of Summers’s view. The most important such argument goes as follows. Suppose Summers is right that transferring toxic waste from the rich to the poor will increase net material wellbeing. Then the rich might be able to more-than-compensate the poor for taking the toxic waste and still be left better off than they were with the toxic waste. Hence, if we have the technological ability and political will to get the poor to take the toxic waste and to get the rich to pay them for doing so, we might be able to create a win-win situation. One might still object, based on values other than material wellbeing, to such a Paretian argument. But it is worth noting that Summers’s memo doesn’t even express concern for this kind of principle of compensation. The crass utilitarian argument, on its own, appears to suffice.
We could preserve some of the benefits of quantification—precision, weighing trade-offs, contestability—but reach less objectionable conclusions by wedding quantification to a more nuanced and reflective normative position. Quantification is not theoretically incompatible with nuanced frameworks of normative evaluation. But in practice, what we see in quantitative policy analysis is an overwhelming focus on the kind of highly objectionable, crass, materialist utilitarianism that characterizes this story. We must try to remember a time when we did not think the word “efficient” was a synonym for the word “good.”
Quantification and Incentives
A second peril of quantification concerns its effect on incentives.
Typically, we can only quantify a few of the many inputs that go into addressing a significant social problem. And incentives tend to follow measurement. If we want to hold teachers accountable, and the only things we can measure are test scores and graduation rates, then some naturally think, “Let’s give teachers incentives if their students’ test scores or graduation rates improve.” The problem is that incentivizing only those tasks that are quantifiable can create all sorts of perverse distortions in behavior.
Consider the case of high-stakes testing. What happens when you start rewarding teachers for measurable student performance? The upside is that you may in fact induce teachers to work harder, at least along the dimensions on which they are being scored. But there are also important downsides: you distort how the teachers teach. You push them to “teach to the test.” Classroom time is limited. So, when you incentivize teachers to emphasize skills relevant for the test, you also incentivize them to deemphasize other, less measurable, skills—conflict resolution, self-control, creative thinking, and so on. If the hard-to-quantify skills are important enough, this distortion in the mix of skills taught can create overall outcomes that are worse than the scenario with no incentives, even if the test-taking incentives actually did make the teachers work harder.
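The trade-off can be made concrete with a toy multitasking model. The functional forms, time allocations, and effort numbers below are all invented for illustration, not drawn from any real study:

```python
import math

# A teacher divides one unit of classroom time between test-relevant
# skills (a share x) and hard-to-measure skills (the share 1 - x).
# True student welfare values both kinds of skill equally.
def true_welfare(x, effort=1.0):
    return effort * (math.sqrt(x) + math.sqrt(1 - x))

# Without test-score incentives, suppose the teacher balances the two
# tasks (x = 0.5). With a bonus tied only to test scores, suppose she
# works somewhat harder overall (effort 1.1) but shifts most classroom
# time toward the test (x = 0.9). Both responses are stipulated here,
# not derived from an optimization.
welfare_without_bonus = true_welfare(x=0.5, effort=1.0)
welfare_with_bonus = true_welfare(x=0.9, effort=1.1)

# The incentive raised effort yet lowered overall welfare: the gain on
# the measured dimension is outweighed by the loss on the unmeasured one.
print(welfare_with_bonus < welfare_without_bonus)  # True
```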
This problem is not limited to education policy. It can occur whenever some inputs are measured and quantified and others are not.
In crime policy, for example, the introduction of systems such as CompStat for the quantification of crime statistics incentivizes law enforcement to focus their efforts on policing strategies that get results on the measured outcomes and disincentivizes approaches that may be helpful on a variety of less measurable dimensions. As one police chief put it, “We’re not doing community policing now, we’re doing CompStat.”
Likewise, in health policy, if you manage to measure the quality of outcomes and give physicians related incentives, they will work hard to achieve better outcomes. However, one way they may do so is by screening patients and trying to turn away the most difficult cases. So, by strengthening incentives on the measurable dimension, you may reduce incentives on an important unmeasurable dimension: the willingness of doctors to treat the sickest patients.
Finally, in economic policy, when you choose which economic indicators to measure (GDP growth or unemployment, for example), you create incentives for politicians—especially close to election time—to sacrifice progress on unmeasured dimensions (which may be very important for the long run) in order to get short run progress on whatever numbers are slated to be released next.
On a naïve reading of economics, measurement and quantification are always good for incentives and accountability, and incentives and accountability are always good for policy. There is something to the argument. The better your measure of some outcome (be it test scores, crimes solved, patients cured, or what have you), the stronger are the incentives you can create for that outcome to be achieved. But a more complete and nuanced reading of economics must also acknowledge a peril. In a world in which only some of the outcomes you care about are quantifiable, it is not always best to create those strong incentives. Because stronger incentives for achieving one outcome go along with weaker incentives for achieving other outcomes. And those hard-to-measure outcomes may be just as important.
Defining our Field of Vision
I would like to end by pointing to one final peril which, in some sense, is a consequence of the previous two. Quantification narrows our field of vision for policy. Consider just two examples: the way it discounts the future, and the way it leads to aspect blindness about variables that can’t easily be quantified.
Discounting and Intergenerational Equity
A virtue of quantification, even crass utilitarianism, is that it pushes us to think about potential costs and benefits not just in the present, but in the future.
Here the clearest example is environmental policy. An intervention limiting carbon emissions to slow global warming might have relatively small benefits over the course of a single generation or two. But suppose that same policy, over the long run, prevents or seriously mitigates catastrophic climate change. It might ultimately save the lives of billions of people. The benefits to future generations could be huge.
Any welfarist quantifier now faces a challenge. If we treat the members of each generation equally for the purposes of cost-benefit analysis, a policy that offers even a small benefit in the future will have a huge total benefit, since those future benefits affect so many people. This creates two problems. First, if we believe that the members of each generation should be treated equally in our welfare calculations, we really ought to be spending a huge proportion of our current resources on policies that benefit future generations, because even large costs to a few billion people today are a drop in the bucket compared to the benefits to hundreds of billions of people over the course of future generations. Second, since all policies that benefit the future have close to infinite benefits (what with all the people in the future), it is really hard to compare the costs and benefits of one future-benefiting policy against another. Everything looks either infinitely good or infinitely bad. (There are theorems along these lines.)
This situation isn’t tenable for a technology of governance. It provides no answers about how to compare various future-benefiting policies. Moreover, no politician wants to be told that good policy requires sacrificing his or her constituents in service of the interests of people who will be alive in 500 years. Quantitative policy analysis has developed a technological response to this problem, called “discounting the future.”
The idea is inspired by—but not the same as—the financial concept of the time value of money. A dollar today is worth more to you than the promise of a dollar a year from now. Say you’d be indifferent between receiving 90 cents today or a dollar a year from now. Then the value of money you’ll acquire in a year is discounted by a factor of 0.9 today. And, of course, this diminution in the value of a dollar continues exponentially as we go further and further into the future. In this scheme, a dollar in twenty years is worth about 12 cents today; in fifty years, half a cent today.
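The arithmetic can be checked in a few lines of Python:

```python
# Present value of $1 received t years from now, with an annual
# discount factor of 0.9 (i.e., $1 next year is worth $0.90 today).
DISCOUNT = 0.9

def present_value(amount, years, factor=DISCOUNT):
    # Exponential discounting: each year shrinks the value by the factor.
    return amount * factor ** years

print(round(present_value(1.0, 20), 3))   # 0.122 -- about 12 cents
print(round(present_value(1.0, 50), 4))   # 0.0052 -- about half a cent
```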
Standard cost-benefit analysis for policy making extends this methodology to thinking about the benefits of a policy for future generations. Official government documents justify this practice in exactly these terms. “Discounting reflects the time value of money,” the U.S. Office of Management and Budget (OMB) says. “Benefits and costs are worth more if they are experienced sooner.” So the further in time we are from some future generation, the more we discount its benefits or costs. This solves the quantifier’s problem of infinite future benefits. We simply write off the distant future through discounting and get on with the business of quantifying benefits and cost.
But there is something suspect about this kind of discounting. If we think of any kind of welfarism that values everyone’s wellbeing equally, there is just no reason to care less about people in the future than in the present (other than the small chance that the world will cease to exist, so they won’t be around to enjoy the benefits). Their happiness or suffering will be no less real for happening a couple generations from now.
It is this kind of logic that led the philosopher, mathematician, and economist Frank Ramsey—who in the 1920s laid the intellectual foundations for how we think rigorously about intertemporal considerations in policy making—to argue that discounting the welfare of future generations “is ethically indefensible and arises merely from the weakness of the imagination.” Or, more poetically, as put by the mid-century economist Roy Harrod, it is “a polite expression for rapacity and the conquest of reason by passion.” The great theorist of economic growth, Robert Solow, was perhaps clearest, saying, “we ought to act as if the social rate of time preference were zero.”
The real reason we discount future generations in policy analysis is that, if we don’t, we can’t coherently quantify costs and benefits. So this is another instance of quantification shaping our normative standards, perhaps without our even noticing it. And yet, by discounting, we profoundly change our field of vision with respect to the kind of policy problems we identify and the kind of policy remedies we endorse. Once we’ve accepted discounting, we’ve given ourselves license to ignore costs or benefits that will occur more than a few generations in the future. This is a challenge even for the crass utilitarian. If our society discounts the future with an annual discount factor of 0.9 and someone asks for $10 million for a policy that is guaranteed to save a billion people in two hundred years, the crass-utilitarian cost-benefit analysis says not to do it—since the “value” of that future benefit of $7 quadrillion would come to just under $5 million today.
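A few lines of Python confirm the comparison:

```python
# The example above: a $10 million policy today yields a $7 quadrillion
# benefit 200 years from now, discounted at a factor of 0.9 per year.
DISCOUNT = 0.9

cost_today = 10e6          # $10 million
future_benefit = 7e15      # $7 quadrillion, 200 years out
pv_of_benefit = future_benefit * DISCOUNT ** 200

# The present value of the future benefit is roughly $4.9 million,
# so the cost-benefit test rejects the policy.
print(pv_of_benefit < cost_today)  # True
```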
This may sound theoretical, but it has practical implications. In particular, a common complaint about cost-benefit analysis comes from environmental regulators who find it difficult to persuade the government to take actions on environmental issues with limited short-run impact but potentially catastrophic long-run consequences. Cost-benefit analysis with discounting just doesn’t care much about those long-run consequences because they will be suffered by someone else.
The Lamppost Effect
Another way quantification narrows our field of vision for public policy is by pushing policy makers to focus their efforts on issues that are easily quantified, whether or not they believe those are the most pressing issues of the day.
In the United States, quantification is the law of the land, under various executive orders promulgated by multiple presidents. In particular, the Office of Information and Regulatory Affairs (OIRA) inside the OMB can, for all intents and purposes, veto any major regulatory action by an executive agency if it finds the cost-benefit analysis or process wanting.
What does this sort of restriction mean for policy making? It means that regulatory agencies know that they should only bother with attempting a rulemaking if they believe they can pass OIRA review. As Lisa Heinzerling, a Georgetown law professor and former head of policy at the Environmental Protection Agency (EPA), said in an interview,
From the moment a person at EPA thinks of the possibility of issuing a rule, they start to think, “Will OMB let us issue this rule?” It affects everything in rulemaking at the agencies. . . . We’re constantly asking ourselves not, “Is this the right thing for environmental protection?” but, “How can we make this acceptable to OMB?”
In some sense, of course, this is exactly the goal. If quantification requirements don’t change the kind of regulation we get, there is no point in having them. The concern, however, is that these sorts of requirements don’t simply prevent the EPA (and other agencies) from promulgating regulations for which the cure is worse than the disease. This is also an instance of quantification distorting incentives. The mandate to quantify discourages agencies from bothering to work on regulations for which there are good arguments, but for which it is impossible, or too expensive or impractical, to quantify the costs and benefits.
For instance, an EPA report on an environmental contaminant often contains two lists of diseases the contaminant is believed to increase the risk of. The first list includes diseases that meet the following two conditions: we can quantify the effect of a change in the contaminant on the change in disease risk, and we can quantify the monetary costs of the disease (usually through its effect on mortality and/or medical treatment). The second list includes diseases which the contaminant is believed to increase the risk of, but for which at least one step of quantification cannot be or has not been done. The diseases in the first list get fully quantified and included in a cost-benefit calculation. But the diseases in the second list can’t be included in the cost-benefit calculation. So they get stuck in an appendix of qualitative concerns to be taken into consideration. Not surprisingly, they rarely again get discussed. And, so, if most of the benefits are in that second list, the issue never gets addressed in the first place, since the policy won’t pass OMB muster.
This may be an inevitable, and worthwhile, cost. An alternative system with less quantification would not have this problem, but would almost surely have more wasteful or ineffective regulations. At the risk of being a crass utilitarian myself, I would say the benefits of quantification may indeed exceed the costs. But the concern is that we may be like the proverbial drunk searching for his lost keys at night under a lamppost.
A passerby asks what he is doing and the drunk responds, “Looking for my keys. I dropped them in the park across the street.” The passerby inquires as to why he is looking under the lamppost, if he dropped his keys elsewhere. The drunk replies, “It’s dark over there; I can’t possibly find them in the dark! This is where the light is.” Quantification shines a bright light on a certain set of potential policy issues. But, if a large group of important problems is left in the dark because quantification is too hard or too expensive, by insisting on quantification we may be forcing ourselves to search for policy problems and solutions in the wrong places.
Perhaps no field of inquiry has had deeper impact on modern policy thought than economics. Quantification, as much as market fundamentalism, lies at the heart of that impact. We have been perhaps too willing to accept the practice of modern quantitative policy analysis as an unalloyed good, without sufficient reflection on the balance of its merits and demerits. This moment of self-examination inspired by the EfIP authors is an opportunity to adjust course. Whether or not quantification’s role in policy discourse is ultimately defensible, it has to be defended. It is my hope that by highlighting some of the perils of quantification, this essay might contribute to that process of reflection and reform.