Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity
Sander van der Linden
W. W. Norton, $30 (cloth)

At the end of the Korean War in 1953, captured American soldiers were allowed to return home. To widespread amazement, some declined the offer and followed their captors to China. A popular explanation quickly emerged. The Chinese army had undertaken an unusual project with its prisoners of war: through intense and sustained attempts at persuasion—using tactics such as sleep deprivation, solitary confinement, and exposure to propaganda—it had sought to convince them of the superiority of communism over capitalism. Amid the general paranoia of 1950s McCarthyism, the fact that such techniques had apparently achieved some success produced considerable alarm. The soldiers had been “brainwashed”—and everyone was vulnerable.

Fears about misinformation are fueled by the idea that the masses are extremely vulnerable to being duped.

The ensuing panic over mind control stoked a frenzied search for solutions. How could the American public be protected against this new menace? William J. McGuire, a young and ambitious social psychologist, was among those who took up the challenge. McGuire’s big idea was to liken brainwashing to a viral infection. In such cases, post-infection treatment can help, but it is far better to inoculate individuals before they are exposed. Bolstered by a series of experiments that seemed to support his conjecture, McGuire ran with this analogy. According to what he called “inoculation theory,” individuals can be immunized against brainwashing by exposing them to a weakened dose of propaganda and warning them about the manipulative techniques they might encounter in real life. The headline of a 1970 article by McGuire in Psychology Today summarized the theory’s aspiration: “A Vaccine for Brainwash.”

Over time, inoculation theory—and the fears that inspired it—faded from public and scientific consciousness, but both have made a comeback in recent years. They have also had a makeover: we talk now of misinformation instead of brainwashing, and the communists who played the original villains have been joined by a more diverse cast of populist leaders, conspiracy theorists, Russian influence campaigns, social media platforms, and more. At bottom, though, lies the same fear: that the masses are extremely vulnerable to being duped into holding dangerous ideas.

Moreover, this vulnerability is once again being likened to infectious disease. In 2020 the World Health Organization described the dangers posed by the spread of misinformation about COVID-19 as an “infodemic,” a term now widely used in scientific articles and the media. McGuire’s ideas have also been revived, revised, and re-introduced into top scientific journals, informing the policies of governments and media companies in their attempts to combat the viral transmission of false content. CNN reports that “Researchers have created a ‘vaccine’ for fake news,” Scientific American declares that “There’s a Psychological ‘Vaccine’ against Misinformation,” and a Rolling Stone headline reads, “The Disinformation Vaccine: Is There a Cure for Conspiracy Theories?”

All these articles report on the work of University of Cambridge psychologist Sander van der Linden and his collaborators. Van der Linden is a leading figure in the recent eruption of scientific research on misinformation and the most influential modern proponent of inoculation theory. His recent book, Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity, aims to disseminate the new science of misinformation and the central ideas of inoculation theory to a general audience. Van der Linden details the considerable interest in his work from governments, international bodies, and businesses—including Google—and the book’s critical reception has been overwhelmingly positive, with glowing reviews in the Financial Times, The Times, and The Guardian. But its central ideas and arguments don’t hold up to scrutiny.


The message of Foolproof is simple: misinformation is a dangerous virus that threatens everything from public health to democracy. As van der Linden puts it, “People can catch misinformation much like a disease.” This “misinformation virus” is both highly contagious and “doesn’t only threaten the wellbeing of individuals. It poses serious threats to the integrity of elections and democracies worldwide.” If we are to fight it, van der Linden argues, it is not enough to cure victims—to disabuse them of false beliefs—after they have been infected; we must inoculate people before they are infected.

Some of van der Linden’s early work embraced McGuire’s strategy: vaccines for specific forms of misinformation. One set of experiments, for example, seemed to demonstrate that it is better to warn people about misleading claims concerning climate change than to debunk misperceptions after they arise. In more recent work, van der Linden and his collaborators are much more ambitious. Instead of inoculating individuals against misinformation about particular topics, which is costly and does not easily scale, they seek to expose people to the “DNA” of misinformation—the common structure of manipulative techniques, whether they concern climate change, vaccines, election fraud, or anything else. According to van der Linden, this DNA can be understood in terms of six structural building blocks to which he gives the acronym DEPICT: Discrediting, Emotion, Polarization, Impersonation, Conspiracy, and Trolling.

Foolproof does not just posit a simple explanation for complex social problems; it imagines a threat that can be straightforwardly identified.

With the DNA of misinformation cracked, the next step is to design and administer a vaccine. The specific interventions van der Linden and colleagues propose are games. In Bad News, the most influential game they have developed, players take on the role of a “fake news tycoon” attempting to mislead people online. The aim is to accrue badges, which you win for successfully mastering each prong of the DEPICT framework. The examples of misinformation are deliberately silly—as with real vaccines, the game exposes people to a harmless strain of the misinformation virus—but the claim is that by playing the game, individuals will develop mental antibodies, so to speak, against the more dangerous content that they encounter in the wild.

Much of Foolproof is spent on the alleged empirical vindication of this approach. In a typical experiment, participants are asked to rate the reliability of the same set of news headlines before and after playing a game. According to van der Linden, the results show that individuals are significantly better at this task after playing the game, that they outperform individuals from a range of control groups, and that this strategy provides superior protection compared to alternative interventions.

But the argument isn’t convincing. To begin to see why, it is helpful to return to the brainwashing panic of the 1950s, which in reality was completely unfounded. Of the thousands of American prisoners of war, only twenty-one—roughly half a percent—defected to China. Further, their reasons seem to have had less to do with genuine persuasion than with prosaic self-interest: many probably feared being court-martialed for collaborating with their Chinese captors, a fate that did await some returning soldiers.


The flimsiness of the brainwashing narrative has two broad lessons for the current misinformation panic and for the central ideas of inoculation theory.

First, fears about people’s manipulability are broadly unfounded. Contrary to popular folklore, there is no such thing as brainwashing; even under the stress of an intense campaign of persuasion and manipulation, people are remarkably stubborn and difficult to influence. Of course, misinformation is not brainwashing, but a similar lesson applies. Much of the current misinformation panic depicts humans as profoundly gullible, routinely revising their worldviews and behaviors based on what they encounter on the Internet. In fact, a large body of psychological research demonstrates that people exploit sophisticated psychological mechanisms for evaluating communicated information. If anything, such mechanisms—what cognitive scientists call “epistemic vigilance”—make individuals overly stubborn, too difficult to influence rather than too easy. In general, people are highly skeptical of information they encounter: if a message conflicts with their preexisting beliefs, they demand arguments that they find persuasive from sources that they judge to be trustworthy. Otherwise, they typically reject the message.

This does not mean that everyone is always well-informed, of course. Ignorance is pervasive, and people hold inaccurate beliefs about many topics. But the issue is not whether people are misinformed; it’s why. Inoculation theory traces false beliefs to exposure to misinformation, but this is often a simplistic and misleading picture of our cognitive life. As Dietram Scheufele and colleagues argue, the idea that viral misinformation “distorts attitudes and behaviors of a citizenry that would otherwise hold issue and policy stances that are consistent with the best available scientific evidence” has “limited foundations in the social scientific literature.” For one thing, regardless of media misinformation, the truth is often complex, uncertain, and counterintuitive, and it can be difficult to figure out, not least because people have limited information, are busy, and are subject to various reasoning biases.

Moreover, the beliefs we hold and the identities through which we interpret the world emerge over extended periods from infancy and feature complex interactions between our predispositions, personalities, development, life experiences, communities, and much more. The worldviews that result from this process are often partial, inaccurate, and difficult to dislodge, especially because many people distrust scientists, government agencies, and public health authorities. Although the causes of this distrust and its apparent recent increase in many countries are contested, it seems to be rooted in factors pertaining to economics, identity, polarization, and institutional failures, not “infection” by fake news.

This perspective is sharply at odds with van der Linden’s hypothesis that false beliefs arise because people are insufficiently skeptical of information they encounter online or from other media sources. If, instead, popular misperceptions often have much deeper roots and emerge from processes spanning many years, the real problem in many cases might be the exact opposite: that people are too wedded to their intuitions and overly skeptical of information from trustworthy sources. In that case, teaching people to be more vigilant about possible manipulation might backfire, providing them with greater resources to dismiss information at odds with their unfounded beliefs.

Fears about people’s manipulability are broadly unfounded.

Is this a genuine risk? If we lived in environments plagued by misinformation, as the infodemic metaphor suggests, greater vigilance might be appropriate. But in reality the current panic about a misinformation epidemic is itself rooted in fake news. In well-studied Western countries, at least, the share of misinformation in most people’s information diet is extremely low; the overwhelming majority of people get their news from mainstream, largely reliable sources. It might therefore be genuinely harmful to persuade them—to misinform them—that their worlds are saturated with viral fake news that they should be more skeptical of.

Of course, claims about the scale of misinformation depend on how we define “misinformation.” This is an extremely contentious topic. Whereas real epidemics have well-defined causes—viruses whose strains can be identified and sequenced—misinformation lacks a consensus definition. If one defines it broadly as misleading information, it is utterly ubiquitous. Even among mainstream news sources, reporting is often highly selective and partisan. Indeed, given that news involves presenting an extremely non-representative sample of attention-grabbing events, one could argue that news is inherently misleading even in the absence of partisan biases.

Because a broad definition of misinformation has such consequences, modern misinformation research tends to define it much more narrowly as straightforwardly false information, and van der Linden follows suit. On this narrow definition, misinformation plausibly does constitute a minute portion of most people’s information diet, not least because effective partisan media and propagandists typically refrain from expressing straightforwardly demonstrable falsehoods. Given this definition, however, the image of an infodemic in which our worlds are plagued with a contagious misinformation virus is extremely misleading and potentially harmful.

To be sure, misinformation in this narrow sense is not wholly illusory. Even if it is not as pervasive as commonly alleged, low-quality, straightforwardly false content certainly does exist. Yet the consumption of such misinformation is highly skewed. Most people consume very little, but a small minority of the population—consisting largely of avid conspiracy theorists, hyper-partisans, and extremists—consume a lot. Perhaps this audience would benefit from a misinformation vaccine of the sort van der Linden proposes?

To see why this conclusion is unwarranted, consider a second lesson of the 1950s brainwashing panic: that when people seem to have been manipulated by ideas, it is often not because they have been duped against their interests but, quite the contrary, because their behavior promotes their interests. We can see this at work in both the supply and demand of misinformation. On the supply side, some actors intentionally propagate misinformation because they benefit—financially or otherwise—from misinforming their audience. (In such cases researchers typically talk of “disinformation.”) Others simply want to troll, to generate outrage, to bond with members of their subculture, and to display their socially attractive traits. Similar goals are also central to the consumption of misinformation. We do not always want to be well-informed, even when we consciously represent ourselves as dispassionate truth seekers. Our beliefs and worldviews form parts of our identities and perform a range of emotional and social functions for us: they help us win approval from friends and co-believers, they promote and justify our interests, and they help us to feel good about ourselves and our choices.

If anything, we tend to be overly stubborn—too difficult to influence rather than too easy.

Van der Linden is aware of this phenomenon—psychologists call it “motivated cognition”—and devotes a chapter of the book to it, but he does not appreciate the deep problems it poses for the simple causal story at the heart of inoculation theory. Motivated believers seek out misleading content that rationalizes their favored convictions and narratives, and misinformation entrepreneurs stand to win social and financial rewards from satisfying this demand. In such cases, the result is not an infodemic but a marketplace of rationalizations. People do not treat harmful viruses as consumer goods, and they do not want to be infected with them; with misinformation, by contrast, they often do.

In short, inoculation theory rests on a flawed portrait of human beings as passive and credulous. Far from being gullible victims of mind viruses that plague our information ecosystems, we are sophisticated agents with complex goals and identities navigating environments dominated by largely reliable information, at least on simple factual matters. To the extent that people are misinformed on these matters, it is often not because they have been duped by encounters with unreliable sources; it is because they are overly skeptical of reliable sources that contradict their preexisting beliefs, or because the consumption and propagation of misinformation promotes their interests or goals.


Are these considerations fatal to van der Linden’s project? Even if the scale of misinformation is often exaggerated, and even if people are often motivated to consume biased and misleading content, it would be absurd to deny that people are sometimes duped by harmful falsehoods. Further, even if misinformation is different from a virus in some respects, perhaps it is analogous in the respect that matters most for inoculation theory: perhaps it has an intrinsic “DNA” that individuals can learn to identify through controlled exposure. If so, fears about making people more skeptical across the board might be unfounded. One might think that skepticism could be narrowly targeted on misinformation without undermining trust in reliable content.

To see why this is unlikely, it is essential to distinguish the claim that it is helpful to warn people about specific ideas—to prebunk rather than debunk—from the more ambitious hypothesis that people can be inoculated against all possible misinformation. The former view may be true, although the evidence is mixed. The latter view does not stand up to scrutiny, however. Its central assumption—that there is something intrinsic about misinformation, a DNA that we can be trained to recognize—is radically implausible.

The fundamental problem is that there are no intrinsic differences between true and false claims. That is, whether a claim is right or wrong—or informative or misleading—depends not on characteristics of the claim itself but on whether it accurately represents how things are. If someone tells you that the 2020 U.S. presidential election was invalidated by extensive voter fraud, for example, you cannot simply examine the statement—or even its surrounding rhetorical context—to figure out whether it is true or false; its truth or falsity depends on the world.

How, then, do we decide whether to accept what others tell us? For almost all contested questions of broad public significance, we have no ability to directly verify claims and so must draw on our preexisting beliefs—whether first order (about the issue in question) or second order (about whether the source of the claim is trustworthy). In epistemically ideal situations, people can be receptive to persuasive arguments that conflict with their gut feelings, intuitions, or prior commitments, but even in this case preexisting beliefs are also necessary to evaluate the premises of arguments and the trustworthiness of arguers. In less than ideal conditions, motivated cognition can conflict with the pursuit of accuracy, leading us to reject claims even if we possess good reasons for accepting them and accept claims even when they are counterintuitive or supported by sloppy reasoning. In those cases, we may be biased to accept claims that we want to believe—for example, because they affirm our identity or signal our partisan allegiances—and too skeptical of claims that we find threatening or unpalatable.

When people seem to have been manipulated by ideas, it is often not because they have been duped against their interests but because their behavior promotes their interests.

This picture of our cognitive life suggests there will be sharp limits to individual-level solutions for combatting misinformation. If people are already highly skeptical and misperceptions are often rooted in motivated cognition, then the real challenges with misinformation likely reside at the systemic and political levels. For example, we must build more trustworthy institutions that can win people’s confidence, and we must address the social and political conditions that make people avid consumers of partisan and conspiratorial content. This outlook is radically at odds with the central claim of inoculation theory: that misinformation has certain “tells,” intrinsic markers of unreliability that individuals can learn to detect independently of acquiring any new knowledge or motivations.

To see the problem with this idea, return to van der Linden’s DEPICT framework. Consider the first prong of the acronym: discrediting. It is true that misinformation producers often seek to discredit those who challenge their claims, but it is equally true that mainstream and reliable outlets often seek to discredit fringe and extremist content. Van der Linden’s own book, after all, is a sustained attempt to discredit misinformation producers. What is distinctive about misinformation is that it seeks to discredit the wrong sources, not that it engages in discrediting at all.

The same lesson applies to emotional manipulation, the second prong of DEPICT. Once again, it is true that much misinformation plays on people’s emotions, but so does much reliable and important content. Few things play on the emotions more than Martin Luther King, Jr.’s “I Have a Dream” speech. The fact that a claim is emotionally charged does not and cannot tell you whether it is true or false. If it did, that would be a big problem for the central claim of Foolproof: likening misinformation to a dangerous virus, after all, plays on people’s fears.

The third element of DEPICT, polarization, is an especially revealing attempt at identifying misinformation. It is true that propagandists often seek to inflame societal divisions, and much misleading content is no doubt highly polarizing. Nevertheless, unless the only legitimate perspective on the world is centrist—an incoherent view, not least because what counts as the center or mainstream varies so radically across time and place—then perfectly legitimate content can also be highly polarizing. Van der Linden defines polarization as a misinformation technique that attempts “to move people away from the political centre,” but it is difficult to think of any progressive movement—feminists, anti-racists, indigenous rights campaigners, and so on—that could not be characterized in this way. Again, there is no reason to think that this feature of information says anything about its truth or reliability.

Finally, consider conspiracy theories—the fifth prong of van der Linden’s framework. No doubt many conspiracy theories are false, and some are deeply irrational. But what of the claim that senior members of the Catholic church conspired to cover up child abuse within the institution? Or that the Bush administration conspired to win support for the invasion of Iraq by lying about the presence of WMDs? Or that senior figures within major banks conspired to influence policies in self-serving ways in the aftermath of the 2008 financial crisis? Early in the book, van der Linden observes that bad conspiracy theories are distinguished from reasonable ones by their lack of evidence. But if conspiracy theories can be true and reasonable, the mere presence of conspiracy theorizing—however we define it—cannot be a distinguishing mark of misinformation.

So much for four of the six prongs of the DEPICT framework. Far from uncovering the DNA of misinformation, it identifies phenomena often associated with reliable information. Propagandists and disinformation campaigns do often use similar techniques, but it doesn’t follow that those techniques are detectable as intrinsic properties of the misinformation itself. Consider cherry picking. Although this isn’t included in the DEPICT framework, it is an extremely widespread and effective misinformation technique precisely because the “cherries”—the news stories, the reported events, and so on—are real and because the process through which the cherries are selected is hidden. The same lesson applies more generally. Effective misinformation—presumably the only kind that we should be concerned about—tends to bypass people’s capacity to detect it.

Are these concerns devastating for inoculation theory? Perhaps the theory can serve a more modest goal—not to capture the essence of misinformation but simply to make people more alert to the possibility that they are being manipulated in certain ways. But this is at odds with van der Linden’s claims in the book.


Alternatively, one might think the proof is in the experimental pudding—the empirical data van der Linden devotes so much space to in Foolproof. The question is whether successful performance on these experimental tasks indicates a genuine “inoculation” against real misinformation. Again there are strong reasons for doubt.

First, there are two different metrics by which one might evaluate an intervention against misinformation: improvement in the detection of misinformation, or improvement in the ability to distinguish reliable from unreliable content. To appreciate the difference, suppose a study participant is asked to judge the truth of ten headlines, five of which are false. Suppose further that before the intervention, the individual labels all of the headlines true but after the intervention labels all of them false. By the first metric (improvement in the detection of misinformation), the individual has improved dramatically—their success rate at detecting false headlines has gone from 0 percent to 100 percent—but they may not have acquired a greater capacity to discriminate between reliable and unreliable content; they could simply have become more skeptical across the board.
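To make the distinction concrete, here is a minimal sketch in Python using purely hypothetical ratings (the ten-headline example above, not data from any study under discussion). It computes the raw detection rate alongside a simple discrimination index, the hit rate on false headlines minus the false-alarm rate on true ones, and shows how blanket skepticism sends the first to 100 percent while leaving the second at zero.

```python
# Illustrative only: ten headlines, five accurate and five false, with hypothetical
# judgments before and after an intervention. True in "judgments" means "rated true".

headline_is_true = [True] * 5 + [False] * 5

before = [True] * 10    # participant rates every headline as true
after = [False] * 10    # after the intervention, rates every headline as false

def detection_rate(judgments):
    """Share of the false headlines correctly labeled false."""
    false_items = [j for t, j in zip(headline_is_true, judgments) if not t]
    return sum(1 for j in false_items if not j) / len(false_items)

def discrimination(judgments):
    """Hit rate on false headlines minus false-alarm rate on true headlines.

    Zero means true and false headlines are treated identically: no genuine
    ability to tell them apart, however high the detection rate.
    """
    true_items = [j for t, j in zip(headline_is_true, judgments) if t]
    false_alarm_rate = sum(1 for j in true_items if not j) / len(true_items)
    return detection_rate(judgments) - false_alarm_rate

for label, judgments in [("before", before), ("after", after)]:
    print(f"{label}: detection {detection_rate(judgments):.0%}, "
          f"discrimination {discrimination(judgments):+.2f}")
# before: detection 0%, discrimination +0.00
# after: detection 100%, discrimination +0.00  (blanket skepticism, not better judgment)
```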

Van der Linden’s experimental methodology is plagued by serious weaknesses.

Remarkably, this seems to be the case with gamified misinformation vaccines, as psychologists Ariana Modirrousta-Galian and Philip A. Higham suggest in a recent article. By focusing on the first metric rather than the second, the existing research greatly inflates the apparent benefits of such games. In fact this is too charitable: given that people are already highly skeptical and that true information is far more prevalent than misinformation in the real world, making people more skeptical across the board almost certainly has negative consequences.

At one point in the book, van der Linden describes research that attempts to guard against this worry. In one experiment, he and his collaborator, Jon Roozenbeek, included two accurate headlines in their study, in addition to a set of headlines illustrating the alleged misinformation techniques identified in the DEPICT framework. After playing Bad News, participants were no more likely to regard the accurate headlines as false, but they were much better at detecting the misinformation techniques. According to van der Linden, this finding shows that the intervention does in fact improve discrimination; it does not just make people more skeptical in general.

Once again, however, it seems that this positive “discovery” is simply an artifact of the experimental design. The two accurate stories included in the study were common knowledge when the experiment was carried out in 2019: namely, that Trump wanted to build a wall along the United States–Mexico border and that Brexit would officially happen that year. Even participants who had simply become more skeptical across the board would not doubt claims they already knew to be true, so their unchanged ratings of these two headlines say little about an improved ability to discriminate.

This consideration points to a second problem with the experiments alleged to vindicate inoculation theory: the items used to test misinformation discrimination are typically chosen by the experimenters. In fact, van der Linden and his team create examples of misinformation to illustrate what they take to be the core misinformation techniques outlined in the DEPICT framework. For example, they use the tweet “The Bitcoin exchange rate is being manipulated by a small group of rich bankers. #InvestigateNow” to illustrate the conspiracy technique, whereas “New study shows that left-wing people lie far more than right-wing people” is supposed to illustrate the polarization technique. (Never mind whether these are good examples of the phenomena they are supposed to exemplify.) “If we’d used a real fake news story,” van der Linden contends, “people might have known whether it is true or false simply because they’d read, seen, or heard it before.”

This methodology has clear weaknesses. First, if real-world examples of fake news should be excluded on the grounds that people might have encountered them before, why is it acceptable to include real-world examples of accurate news that participants almost certainly encountered before? Second, because van der Linden and his team come up with the examples of misinformation themselves, they have significant degrees of freedom in creating examples that maximize the likelihood that their intervention will appear successful. (Perhaps this explains the examples’ cartoonish simplicity.) Finally, and worst of all, there is the problem that we have already encountered: the “misinformation techniques” that characterize the test items are not in fact diagnostic of misinformation. As a result, a greater propensity to detect such techniques is not the same thing as a greater propensity to detect misinformation.

All these considerations undermine the contention that inoculation theory has been vindicated experimentally.


Foolproof’s argument, then, is not so foolproof. At least on the relatively narrow definition that van der Linden uses in the book, misinformation is not widespread, and its causal role in major social events is either unsubstantiated or greatly overstated. In general, people are already sophisticated and vigilant social learners, if anything too skeptical rather than too credulous. The minority of the population that consumes the lion’s share of low-quality misinformation seems to consist of avid seekers of such content rather than passive victims. And effective misinformation lacks an intrinsic DNA that neatly distinguishes it from true and reliable content.

The real challenges with misinformation likely reside at the systemic and political levels.

Given these problems, why has Foolproof’s viral view of misinformation itself gone viral? Panic about misinformation took off in 2016 after two events that many found shocking: the UK’s vote to leave the European Union and the election of Donald Trump in the United States. Amid the global resurgence of nationalist populism—and social media’s role in making fringe views more visible—many pundits, policymakers, and social scientists began to ask why so many people had seemingly lost their minds. The misinformation narrative supplied an answer: because people are misinformed about the world, and they are misinformed about the world because they have been “exposed” to misinformation.

Three features of this explanation are worth noting. First, it is apolitical: it explains social and political conflict not in terms of people’s divergent identities, perspectives, and interests but in terms of factual errors caused by exposure to bad information. On this view, our political adversaries are simply ignorant dupes, and with enough education and critical thinking they will come to agree with us; there is no need to reimagine social institutions or build the political power necessary to do so. Second, the argument is technocratic rather than democratic. By using an epidemiological metaphor, it suggests solutions to social problems akin to public health measures: experts must lead the way in tamping down the spread of mind viruses and vaccinating the masses against them. Finally, though many critics find the misinformation panic too pessimistic, there is a deeper sense in which it is extremely optimistic. Foolproof does not just posit a simple explanation and remedy for complex social problems; it imagines a threat that can be straightforwardly identified.

These features, I think, help explain why the misinformation-as-virus narrative has won such widespread endorsement. The belief that a dangerous misinformation virus is a major source of society’s problems is popular not because it is supported by evidence, and not because it has duped credulous individuals, but, most plausibly, because its apolitical, technocratic, and simplistic character resonates with the interests and biases of those who consume and propagate it.
