Information, as Farrell and Schneier point out, is essential to an informed citizenry, but what they overlook is that our media environment for the past fifty years has been shaped by a capitalist agenda. Our information ecosystem is defined and controlled by financialized interests. This, in turn, means that U.S. democracy today is defined more by late-stage capitalism than by information and communication technologies. After all, financialized interests are also behind our technical systems. And herein lies the crux of the problem. While democracy is fundamentally about the public interest, late-stage capitalism is fundamentally not. If there is money to be made, financial interests come first. Then, maybe—but only maybe—the public interest.

When you center financialized interests in an analysis of civic information, what becomes painfully clear is that there is a longstanding, well-understood, and vested interest in undermining knowledge rather than engendering an informed citizenry. For decades, we have watched political elites and their corporate backers systematically work to produce ignorance.

Ignorance has long been thought of as not-yet-knowing. The push to end the so-called “digital divide” in the United States, for example, focused on giving people access to the Internet so that they could access information and opportunities, thus leaving ignorance behind. Yet, just as the Mosaic browser was beginning to take off in the mid-1990s, a group of scholars became fascinated with how ignorance could be strategically manufactured. The twentieth century was filled with scholarship focused on the spread of propaganda, but there wasn’t yet a good framework for understanding how knowledge could be poisoned. In 1995, Robert N. Proctor, a historian of science, and Iain Boal, a linguist, gave the study of this phenomenon a name: agnotology.

While authoritarian states have long used disinformation as a weapon, U.S. corporations turned agnotology into a science. And, ironically, they targeted science first and foremost. Big Tobacco manufactured science to create uncertainty about the relationship between tobacco and cancer, while Big Oil went to work creating a whole ecosystem of climate deniers. More recently, these industries have mastered the art of online influencers, bots, and search engine optimization in service to these efforts. Researchers have found that tobacco companies are evading tobacco advertising regulations by creating fake influencers and amplifying content through bots, and both Philip Morris and Juul have recently come under public pressure for heavily funding social media influencers to reach youth. While these tactics can be seen as typical of contemporary brand marketing, mastery of these tools has allowed both corporate and ideological interests to fracture what is authentic. Their brilliance, and our challenge, is that most of these messages are not outright disinformation. They are merely arguments for doubting a particular line of thought. They are inviting you to question, to be wary of the information you are receiving.

Leveraging this same approach, Russia Today, the Russian government-funded news network, launched its “Question More” advertising campaign almost ten years ago in the United States and United Kingdom. The campaign addressed topics ranging from terrorism to nuclear proliferation, triggering conspiratorial thinking in politically inconsistent—but always commercially beneficial—directions. On one poster asking “Is climate change more science fiction than science fact?” the rhetorical answer given is:

Just how reliable is the evidence that suggests human activity impacts on climate change? The answer isn’t always clear-cut. And it’s only possible to make a balanced judgement if you are better informed. By challenging the accepted view, we reveal a side of the news that you wouldn’t normally see. Because we believe that the more you question, the more you know.

Farrell and Schneier rightly mention Louis Brandeis as one of the champions of the idea that, in a democracy, “the answer to bad speech is more and better speech.” But he didn’t simply believe in counter-speech and the “marketplace of ideas.” He also had a firm theory of power, one that recognized the corrosive influence of profit-seeking corporate actors. In the 1932 ruling Packer Corporation v. Utah, Brandeis argued that Utah did indeed have the right to outlaw tobacco ads from billboards, regardless of the Packer Corporation’s interests. (These are the same laws that contemporary tobacco companies are evading through social media marketing.) Simply put, Brandeis argued that members of the public could not opt out of Packer’s messaging except by averting their eyes. This made them “captive audiences,” which, Brandeis argued, allowed Packer to abuse its speech privilege.

Therein lies another dimension to Democracy’s Dilemma today. How do you maintain democratic rules and preserve healthy disagreement without accounting for how power, privilege, and money co-opt the discussion? Financial incentives, after all, shape how speech is amplified, who can amplify it the loudest, and which speech is treated as counterproductive, while the pollution of data and knowledge often benefits corporate interests at the expense of democracy.

As we’re all struggling to make sense of the whirlpool of information that threatens to drag us under, we must remember that adversarial interests—both domestic and foreign, as Farrell and Schneier point out—are focused on shaping the information structure as well as perverting the data infrastructure. Those with the resources to do so are investing time and money in training search engines and recommendation algorithms to amplify their content. We’re also watching billionaires fund well-produced YouTube videos designed to undermine the university and the scientific production of knowledge. Meanwhile, other adversaries are adopting more guerrilla-style tactics, focused on making sure that the data upon which both public-sector and private-sector decision-making depends cannot be trusted. Conspiracy theorists flood Google and YouTube with falsehoods, all to support the financial interests of gun manufacturers.

These strategies highlight what Farrell and Schneier lay out: that democracy doesn’t simply depend on good data and good information; it also depends on the public’s trust in data and information. The democratic project is one of shared imagination, after all. So while we do need to identify vulnerabilities in democracy, build resilience against exploits, and effectively and collectively build threat models, we must also recognize that the security of the future requires a sociotechnical framework. There will still be technical attacks, aimed at gaining unwarranted access and destabilizing tools, but there will also be attacks designed to produce ignorance.

As we grapple with the manipulation of knowledge, trust, and social cohesion, we must also develop new societal-level forms of resilience. The framework of Democracy’s Dilemma helps begin this discussion, but to truly fight the manufacturing of ignorance, we must work to prevent financialized interests from controlling our information ecosystem.