Editors’ Note: This is the final installment in an exchange on the epidemiology and public health policy of COVID-19. Read the first installment by the philosopher of medicine Jonathan Fuller and responses by epidemiologists Marc Lipsitch and John Ioannidis.

With over 100,000 deaths in the United States and over 370,000 deaths worldwide after five months, the COVID-19 pandemic is the health crisis of a generation. Yet even while the crisis unfolds, it is important to step back and reflect on the science helping decision-makers navigate the uncertain and the unknown.

It is with that goal in mind that I recently offered in these pages a philosophical perspective on two epidemiological traditions now at the center of public discussion: public health (including infectious disease) epidemiology, on the one hand, and clinical epidemiology (including the movement known as “evidence-based medicine”), on the other. Each is roughly associated with a distinct philosophy of scientific knowledge. Two epidemiologists I named in the piece, Marc Lipsitch and John Ioannidis, have now responded.

Elsewhere, Lipsitch and Ioannidis have disagreed on the use of lockdowns and strict social distancing in the pandemic. Going against the grain, Ioannidis has questioned their justification; Lipsitch, siding with the majority of epidemiologists, has argued that they are justified. From our exchange, I draw three conclusions. First, at a high resolution, what constitutes good science can differ from one scientific field to another, and we should not apply evidence-based medicine's standards of evidence to public health science. Second, it is not only the scientific facts (including the grim death toll) that are at issue, but also the less often discussed relationship between science and decision-making, in which values inevitably play a role. Third, weighing the harms and benefits of proposed policies is not straightforward; it demands the same rigor used in modeling and generating evidence.

• • •

Both Lipsitch and Ioannidis reinforced my caveat that it can be difficult to fit any given scientist into either of the two intellectual boxes I described, and that a single scientist can operate in these different modes at different times. The distinct philosophies I identified are norms of thinking imprinted on different schools or specialties in epidemiology, not inviolable codes. The evidence-based medicine (EBM) philosophy is neatly crystallized by the hierarchy of evidence, which places systematic reviews of randomized intervention studies at the top of a pyramid of evidence types. The public health epidemiology philosophy is embodied in Hill's Viewpoints, nine guidelines for inferring causation from association that collectively call upon a plurality of kinds of research.

I agree with Lipsitch that rigid sectarianism is counterproductive to science; my discussion of the two philosophies was a description rather than an endorsement. Elements of both philosophies can be virtues. They are more likely to be virtuous when balanced against one another: models with high-quality evidence, data diversity with data quality, pragmatism with skepticism. This balance does not require different disciplines to privilege different elements; science is at its best when it embraces all of them. However, as Lipsitch describes, there is also great diversity among the sciences. While there might be very general features of good science (philosophy has not settled exactly what these features are), at a higher level of resolution, one that captures day-to-day scientific practice, what constitutes good science in one discipline might not be good science in another, owing to differences in the domain of study or the uses to which scientific results are put.

Lipsitch suggests that the cooperation I advocate between the two schools is incomprehensible if epidemiologists cling to contradictory principles, with public health epidemiologists accepting a diversity of evidence while clinical epidemiologists insist on randomized studies alone. Instead he endorses evidential diversity in the response to the coronavirus pandemic and wards off attempts to apply rigid standards of evidence from the "most extreme wing of the evidence-based medicine community." While I welcome EBM's emphasis on evidence, data quality, and skepticism (when appropriately applied), I wholeheartedly agree that in all of epidemiology and science—including clinical epidemiology—we must be pluralists about evidence and prediction, partly because different kinds of evidence support different assumptions in our scientific reasoning. In another recent essay in these pages, for example, the EBM researcher Trisha Greenhalgh argues that some EBM experts have made crucial mistakes in the pandemic by applying EBM's orthodox standards of evidence for medical therapies to public health interventions, particularly mask-wearing.

• • •

Beyond the philosophical dimensions of epidemic models and epidemiological evidence, these essays also call our attention to the complex relationship between science and action. Lipsitch and Ioannidis appear to disagree not only about certain strictly scientific matters, but also about practical decision-making under uncertainty—what actions are supported by the science. This distinction often gets overlooked in public discussions that focus solely on scientific studies—including studies authored or co-authored by Ioannidis, which have been widely and roundly criticized by other scientists. As a result, so much of Ioannidis’s tussle with mainstream opinion in epidemiology and public health has been portrayed as a disagreement over scientific facts. It is that, but it is also more than that.

Consider the specific controversy over estimates of SARS-CoV-2's infection fatality ratio (IFR). Ioannidis claims in his own research that, worldwide, the IFR may be near the value for seasonal influenza. But even if Ioannidis turns out to be correct (which many experts have doubted), no policy prescription immediately follows, certainly not the idea that our response to the two viruses should be similar (which Ioannidis has not, to my knowledge, argued). For one thing, the IFR does not on its own represent the dangerousness of the virus. It is only one variable determining the number of deaths, which is the product of the IFR and the number of infections; the latter has the potential to be vastly greater for the coronavirus than for seasonal flu, because SARS-CoV-2 is new to humans and the population therefore lacks preexisting immunity. The number of lives at stake, rather than the IFR itself, should more directly inform decision-making, but even here the gap between number and action is large.
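To see why the IFR alone cannot settle the question, it helps to make the arithmetic explicit. The following is a minimal sketch with hypothetical round numbers of my own choosing, not estimates from either researcher:

```latex
% Expected deaths D are the product of the infection fatality ratio (IFR)
% and the number of infections I:
\[
D = \mathrm{IFR} \times I
\]
% Hypothetical illustration (assumes amsmath for \text): hold the IFR fixed
% at 0.3% for both viruses and vary only the scale of infection.
\[
\underbrace{0.003 \times (3 \times 10^{7})}_{\text{flu-scale spread}} = 9 \times 10^{4}
\qquad
\underbrace{0.003 \times (2 \times 10^{8})}_{\text{novel-virus spread}} = 6 \times 10^{5}
\]
% Same IFR, more than six times the deaths, because a virus to which the
% population has no preexisting immunity can infect far more people.
```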

It is worth remembering, therefore, that models and evidence are not the only inputs into the decision-making process. Values are also needed to animate the facts and move decision-makers to action. Public health decisions are infused with values, even when those values are unacknowledged and only implicit. These values trickle down to influence the science informing public health. As a result, it is wrong to say that decision-makers (as well as epidemiologists advocating for or against public health measures) are just “following the science.” They are taking political action that is as much informed by social and political values as it is by science. There is thus an urgent need for transparency not only regarding the facts—what models and evidence are informing the response and what predictions they yield—but also regarding the values linking them to action. In the absence of such clarity, disagreements over values or policy may masquerade as disagreements over science or facts.

One way values enter into scientific practice is via the outcomes researchers choose to measure in generating evidence and constructing models. One study might measure aggregate outcomes, like the total number of deaths in a population, while an analysis more attuned to health inequities, like those that have fallen along lines of race and socioeconomic status in the current pandemic, might focus instead on the distribution of outcomes across population subgroups. These are just some of the value-laden choices that precede the decision of what interventions to use. Selecting the "best" policy is partly a matter of science, then, but it is inevitably a matter of values, too.

• • •

A final insight to be gained from this exchange concerns the complexities of cost-benefit analysis. Ioannidis's main argument against population-wide "lockdowns" (presumably referring to shelter-in-place and stay-at-home orders) in his reply to my essay seems to take this form. He believes the benefits of lockdown have been exaggerated, owing partly to overestimates of the IFR and to comparisons with the 1918 flu pandemic. Meanwhile, he suggests that the potential harms of lockdown have been overlooked, including delayed and averted treatment for other diseases, increased violence in the home, compromised mental health, more suicides and deaths from substance abuse, interrupted vaccination campaigns, increased global food insecurity, and increased deaths from tuberculosis and malaria in lower-income countries. On balance, he concludes, lockdown could very well be inferior to other policy alternatives, which we should accordingly consider. Ioannidis does not go so far as to say that lockdown is inferior, but I read him as intimating that this conclusion is a strong possibility.

Cost-benefit analysis is not without controversy, especially when the relevant harms and benefits are qualitatively different, but I accept that it is a reasonable approach, while noting that there are many ways of carrying it out. Its use need not be restricted to polarized, far-apart alternatives; it could be applied to more fine-grained decisions, such as which businesses to permit to reopen and on what dates. I will close by pointing out a few gaps in Ioannidis's harm-benefit argument, which will serve to illustrate some of the difficulties of decision-making in the pandemic.

First, although Ioannidis does not consider specific alternatives in his response, a harm-benefit analysis is only useful for decision-making if it is contrastive: that is, if it compares one policy against others. Even if the harms of lockdown were more severe than the harms of some alternative, lockdown's benefits relative to that alternative might still outweigh its excess harms.
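The point can be made precise. In this sketch the notation is mine, not Ioannidis's: let B and H stand for the benefits and harms of a policy, expressed on a common scale (itself a value-laden choice), with L for lockdown and A for an alternative:

```latex
% Choose lockdown L over alternative A when its net benefit is greater.
% Rearranging makes the contrastive point explicit:
\[
B_{L} - H_{L} \;>\; B_{A} - H_{A}
\quad\Longleftrightarrow\quad
B_{L} - B_{A} \;>\; H_{L} - H_{A}
\]
% Even when lockdown's harms exceed the alternative's (H_L > H_A), lockdown
% can still come out ahead, provided its excess benefits exceed its excess
% harms. An absolute tally of lockdown's harms settles nothing by itself.
```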

Second, assessing the harms of our interventions relies on causal inference. We can only causally attribute harms and benefits to lockdown through comparative empirical analysis, for instance by modeling alternative scenarios or generating comparative evidence of an intervention's effectiveness. Such an analysis might well reveal that an outcome is not (or not primarily) due to lockdown; it might have arisen either way, for instance because people voluntarily distanced out of fear of the virus. Ioannidis argues that the imperative to respect all the evidence should include evidence of the harms of our interventions. I would add that this evidence must consider the counterfactual: what would have happened under some alternative scenario.
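In counterfactual terms, again with notation of my own: the harm attributable to lockdown is not the total harm observed under lockdown, but the difference between that total and the harm that would have occurred under the alternative scenario.

```latex
% Attributable harm is a difference between scenarios, not an observed total:
\[
H_{\mathrm{attributable}} \;=\; H_{\mathrm{observed}} \;-\; H_{\mathrm{counterfactual}}
\]
% If voluntary distancing out of fear of the virus would have produced much
% of the same disruption anyway, the counterfactual term is large and the
% harm attributable to lockdown itself is correspondingly small.
```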

Third, an intervention's harms must be considered in the context of further actions we can take to mitigate them. The side effects of noxious chemotherapies, such as nausea, can sometimes be offset, if only partially, by other treatments, and the availability of those mitigating treatments belongs in any harm-benefit analysis comparing different chemotherapies. Likewise, a harm-benefit analysis of public health measures must consider what supplementary interventions exist to counteract the social ills they induce, whether economic, health-related, or otherwise.

Finally, a harm-benefit analysis demands more than a back-of-the-envelope treatment; it deserves as much rigor as goes into epidemic modeling or estimating model parameters. Neither the coronavirus pandemic nor our interventions and their effects are purely biological; they are psychological and social as well. Thus, harm-benefit analysis and decision-making more generally in the pandemic must involve a diverse range of experts, a much wider array than the two kinds of epidemiologist I initially described. On this need for diverse expertise, I think Lipsitch, Ioannidis, and I all agree.