In our moment of profound inequality and global crisis, now flush with chatbots and simulated images, Morozov is right that we sorely need a clearer articulation of the world we do want to live in, not just the one we want to leave behind. But the challenge of specifying that vision—much less winning it—requires more refined lessons about the challenges ahead and where political power might be built to overcome them.

The field of AI has been not just co-opted but constituted by a few dominant tech firms. It is no coincidence that the dominant “bigger is better” paradigm, which generally uses the scale of compute and data resources as a proxy for performance, lines up neatly with the incentives of a handful of companies in Silicon Valley that disproportionately control these resources. The widely lauded AlexNet paper of 2012 was an inflection point. In its wake, deep learning methods—reliant on massive amounts of data, contingent labor, and ever-larger computational resources—came to dominate the field, spurred at least in part by the growing presence of corporate labs at prestigious machine learning conferences.

This isn’t a new phenomenon. The same ingredients shaped the Reagan administration’s Strategic Computing Initiative, meant to ensure American technological prowess in AI. The program was ultimately abandoned once it became clear that its success would require endlessly scaling computing power and data.

This resurrected vision of infinite scale no matter the cost now drives AI figureheads like Sam Altman to lobby for public investment in chipmaking and the ruthless expansion of power for data centers. If the unregulated surveillance business model of the last decade and a half generated the data, compute, and capital assets to secure Big Tech’s dominant posture, this next phase will require doubling down on these infrastructural advantages. In this view, the ChatGPT moment is not so much a clear break in the history of AI but a reinforcement of the corporate imperatives of the early aughts.

Things might have taken another direction. After all, as Morozov suggests, the term “artificial intelligence” has meant many different things over its seventy-year history. There are still other models he doesn’t mention that resonate with his argument. Feminist AI scholars like Alison Adam once held up situated robotics as an alternative paradigm, interpreting intelligence as emerging not from rule-bound and bureaucratic expert models but from embodied experience of contact with the outside world. And corporate AI labs once incubated the careers of researchers with a much more radical politics. Lucy Suchman is one of them: emerging from Xerox PARC, she helped to found the field of human-computer interaction, devoted to understanding the contingency of how humans interact with machines in a messy world. (Suchman was also one of the founders of Computer Professionals for Social Responsibility, a group that organized in opposition to the Strategic Computing Initiative and the use of AI in warfare.)

More recently, critical scholarship and worker-led organizing that sought to redefine the trajectory of AI development had its fleeting moment within the Big Tech labs too, from Google to Microsoft. This was the current that produced the research institute we lead, AI Now, and others like the Distributed AI Research Institute, founded by Timnit Gebru. But Big Tech’s tolerance for internal pushback swiftly faded as tech firms pursued the rapid development and deployment of AI in the name of efficiency, surveillance, and militarism. With vanishingly few exceptions, worker-led organizing and the publication of critical papers are quelled the moment they threaten corporate profit margins, hollowing out the already limited diversity of these spaces. In place of this more critical current, AI firms have adopted a helicopter approach to development, conjuring AI-sized versions of entrenched problems for which they can offer ready-made solutions: iPad apps for kindergartners as a fix for teacher shortages, medical chatbots to replace nurses.

It was in this context that the mission to “democratize AI” emerged, and it has now permeated efforts around AI regulation as well as public investment proposals. These initiatives often call for communities directly impacted by AI—teachers impacted by ed tech, nurses contending with faulty clinical prediction tools, tenants denied affordable housing by rent screening systems—to have a seat at the table in discussions around harm reduction. In other cases they focus on ensuring that a more diverse range of actors have access to computing resources to build AI outside of market imperatives. These efforts are motivated by the sense that if only the right people were in the conversation, or were given some small resources, we’d have meaningful alternatives—perhaps something approaching what Morozov calls AI’s “missing Other.”

The idea of “involving those most affected” certainly sounds good, but in practice it is often an empty signifier. The invitation to a seat at the table is meaningless in the context of the intensely concentrated power of tech firms. The vast distance between a seat at the table and a meaningful voice in shaping whether and how AI is used is especially stark in regulatory debates on AI. Mandates for auditing AI systems, for example, have often treated impacted communities as little more than token voices whose input can be cited as evidence of public legitimacy—a phenomenon Ellen Goodman and Julia Tréhu call “AI audit washing.” The effect is to allow industry to continue business as usual, doing nothing to transform structural injustice or fix the broken incentives powering the AI-as-a-solution-in-search-of-a-problem dynamic.

This tension also plays out in U.S. debates around government-led R&D investment in AI, which lawmakers rightly lament still pales in comparison to the billions of dollars spent by the tech industry. As historians of industrial policy attest, governments have historically driven R&D spending with longer-term horizons and the potential for transformative public benefit, whereas industry is narrowly focused on commercialization. But thanks to its agenda-setting power and widely adopted benchmarks, the tech industry now defines what counts as an advance in basic research. The effect is to blur the line between scientific work and commercialization and to tilt efforts toward superintelligence and AGI in order to justify unprecedented amounts of capital investment. As a result, many current “public AI” initiatives ostensibly driven by the promise of AI innovation either lean heavily into defense-focused agendas—like visions for a “Manhattan Project for AI”—or propose programs that tinker at the edges of industry development. Such efforts only help the tech giants, propelling us into a future focused on ever-growing scale rather than expanding the horizon of possibility.

Morozov rightly rejects this path. But achieving his vision of a “public, solidarity-based, and socialized” future requires going further than he suggests. Rather than starting from the presumption of broadly shared faith in “technology’s emancipatory potential,” this effort must emanate from the visions of AI’s missing others—the critical currents and alternative visions that Silicon Valley has firmly excluded.