Henry Farrell and Bruce Schneier do a public service by highlighting what they call “Democracy’s Dilemma,” namely “that the open forms of input and exchange” that democracy “relies on can be weaponized to inject falsehood and misinformation that erode democratic debate.” Democracies depend on the free circulation of ideas and information, but that circulation also poses risks, especially when propaganda, divisive messaging, and outright lies enter the mix.
“Changes in technology have made speech cheap, and the bad guys have figured out that more speech can be countered with even more bad speech. In this world, the easy flow of information can cause trouble for democracy,” the authors observe. “Political science questions about what citizens need to know (or believe they know) for democracy to be stable must be translated into information security questions about the attack surface and threat models of democracy, and vice versa.”
I found much to agree with in the essay, but I want to highlight places where I would articulate the problems and the corresponding solutions differently. “We need a better understanding of democracy’s resiliency in the face of information attacks,” Farrell and Schneier write. My concern is that the frames of “attack” and “threat models” imply something wholly external to the system, an aberration. But what if the weak points, the failures, are within? Or, to reframe the same point in the idiom of techies, what if the problem they are analyzing is a feature, not a bug?
We must be careful here not to conflate the capitalist (mis)information economy with that of a democratic information system. The problem, as I understand it, is less that speech is cheap, with bad actors exploiting this fact, and more that our for-profit communications system is not incentivized to serve the public’s information needs. In its tremendous variety, speech has been commodified—journalism, private correspondence, off-the-cuff comments, notes of consolation, and silly jokes are all reduced to data points and content to serve ads against. As I argue in my 2014 book The People’s Platform: Taking Back Power and Culture in the Digital Age, many of the pathologies of digital networks stem from the underlying business model. Our communications system is consolidated, centralized, and commercialized.
As anyone who has watched cable news knows, these problems precede the advent of the Internet, but they have metastasized online. When you have a communications architecture explicitly designed to serve eyeballs to advertisers, you have, in essence, invented a communications architecture optimized to promote falsehood. Social media services weren’t built to meet the needs of citizens, but rather those of data brokers, high-tech marketers, and public relations professionals—the true customers of most online platforms. By promoting products with spin and hype, ads—even in their most harmless form—are a form of disinformation, and they are arguably the core component of the digital sphere as we know it. The development of behavioral targeting and personalization has fueled the shift toward what Shoshana Zuboff smartly dubs “surveillance capitalism.”
For all the attention dark posts, fake news, and conspiracy videos have received, the discomfiting reality is that most of it has been perfectly legal—and good for Facebook, Twitter, and YouTube’s bottom line. The companies weren’t hacked—their services were utilized or purchased as intended. Farrell and Schneier are rightly concerned that Democrats employed a purposefully manipulative and misleading social media strategy to promote Doug Jones’s candidacy in Alabama, but they ignore the more quotidian problem with online personalization—namely that we have no idea what political content other people are seeing. For example, the Trump campaign is making record-breaking investments in Facebook ads, targeting older Republicans with messages the majority of us cannot see. Deliberation and debate, the “open forms of input and exchange” that democracy demands, are foreclosed when we have no idea what information other segments of the population are being exposed to. To combat this problem, Canada recently enacted a law requiring online platforms “to create a registry of all digital advertisements placed by political parties or third parties . . . and to ensure they remain visible to the public for two years.” Google, protesting that even this modest requirement is too onerous for its system, has decided to stop selling political ads entirely.
But personalization is just part of what ails us. Consider the nefarious “flooding techniques” Farrell and Schneier describe, which are being deployed in China. A similar sort of flooding is the inevitable and worrisome outgrowth of the American social media ecosystem, where profits are generated by a very crude conception of engagement—one based on the quantity of clicks, not the quality of content. In this environment, the old idea of “manufactured consent,” passed down through top-down, one-to-many broadcast channels, has been superseded by both “manufactured compulsion” (services are addictive by design) and “manufactured confusion” (sensational content is more shareable, even if it is misleading or inaccurate).
While dissemination online is technically free, the fact is we typically see things because someone has paid to place them in our view. The lightning of virality can strike, and some have figured out how to game the algorithms, but in general it helps to have a budget to pay to promote your posts. In the aggregate, then, all of this cheap speech becomes highly profitable, enabling the growth of massive companies and fortunes and the accumulation of political power that comes with scale. As Kevin Erickson of the Future of Music Coalition put it to me, “Centralization of corporate ownership and surveillance-based business models are mutually reinforcing and so are their respective anti-democratic incentives.”
Ultimately, the problem we must address is the underlying political economy of the Internet. If we are looking for reforms, we could start with some version of the call, recently issued by presidential hopeful Elizabeth Warren, to break up digital monopolies and regulate their business practices. Given its well-documented pernicious and discriminatory effects, behavioral targeting should be highly constrained, if not eliminated outright, opening space for users to pay for services and thus become the true customers of social media companies. I’d also want to see the development of publicly funded alternatives, ranging from the platform level to the content level. Our data could be public too—in the sense of being a public good, not something exposed for the world’s largest corporations to privatize and exploit. For inspiration, we could look to “tech sovereignty” activists in Barcelona, who are collaborating with municipal government officials to use data in the public interest, and writers including Ben Tarnoff, who have written about ways to recognize data as a resource we should all own and benefit from.
The final drawback of framing speech as “cheap” that I’d like to highlight is that it ignores the fact that plenty of speech is still expensive to produce. For example, quality journalism—which everyone agrees is essential to a functioning democracy—remains resource-intensive. Regulation of advertising could be structured in such a way as to acknowledge and remedy this fact. For specific reforms, we could look to the non-profit advocacy group Free Press, whose recent white paper promotes the idea of a tax on targeted advertising, the proceeds of which could be used to subsidize public-interest reporting. A robust and independent press has to be part of any credible plan to combat those who would promote deceptions in the information war. In my view, public financing of journalism and the development of public forms of digital media have to be on the agenda. Well-funded local journalism would also serve as a deterrent to the fake local news sites that Republicans are setting up in advance of 2020.
Farrell and Schneier’s conception of “democracy’s dilemma” is potentially useful, but we must take care to keep the bigger picture in mind: the inherent weakness of our money-driven communications and political systems. The “bad guys” producing “bad speech” are bit players compared to those who own the platforms and the politicians tasked with regulating them. By pinning blame on “information attacks” within an economic framework that was never very democratic to begin with, the authors risk treating symptoms and not the underlying disease. We need to ask why expensive-to-produce and necessary speech, such as local journalism, is withering away and why accurate, high-quality content is hard to come by online. And we need to ask these questions loudly—lest we be drowned out not because speech is cheap, but because junk is more profitable.