I confess I’ve become weary of reading about AI. I am tired of the self-serving mythologizing of its proponents. I am also tired of thinking about its horrific environmental impact, its potential for automating away human labor, the unpleasant working conditions involved in generating training data—and on and on. I get it, and I am tired of it. Sometimes I just want to think about something else.

But Morozov has given us an argument worth paying attention to. “‘Democratic AI’ requires actual democracy,” he concludes. What’s needed, as ever, is politics, not merely coming up with the right parameters in some AI model while the real world crumbles around us.
Which—no offense to Morozov—may seem like a fairly obvious point. But it’s a point lost on the techno-optimist crowd, with their glassy-eyed, almost religious belief in the power of AI. See, for instance, venture capitalist Marc Andreessen’s recent blog post, “Why AI Will Save the World,” which asserts that “anything that people do with their natural intelligence today can be done much better with AI,” and therefore AI could be a way to “make everything we care about better.” Or, former Google CEO Eric Schmidt’s conviction that we should go full speed ahead on building AI data centers because “we’re not going to hit the climate goals anyway” and he’d “rather bet on AI solving the problem.”

This would be all well and good if AI were actually developing along those lines. But is it? The current hype cycle is fueled by generative AI, a broad category that includes large language models, image generation, and text-to-speech. But AI boosters seem to be appealing to a more abstract meaning of the term that has a little more fairy dust sprinkled over it. According to Andreessen—whose firm was an early investor in OpenAI—we could use AI to build tutors, coaches, mentors, and therapists that are “infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful.” Could we? Would this be derived from the same AI that OpenAI (valued at $157 billion at the time of writing) currently sells to its largest enterprise customer, accounting and consulting firm PwC? Do we really believe that giving OpenAI’s customers the ability to train chatbots on their internal data will help build what Andreessen hails as “the machine version of infinite love”? What is the process by which amorphous traits like patience and compassion will be instilled in these large language models? Or are we supposed to believe that such traits will arise automatically once enough Reddit threads and YouTube videos have been processed?

Maybe I’m too cynical; my Luddite sympathies are showing. These days, new technology tends to provoke in me more skepticism than excitement. One can almost always predict it will be used by some segment of capital to extract profit in a novel way, at the ultimate expense of workers who already don’t have much left to give. I can’t hold back a certain reverence for the technical achievements inherent in something like ChatGPT, but I’m troubled by the semantic burden the term “AI” is being asked to bear. The capabilities of the present moment—which, as technically impressive as they may be, are still fairly prosaic and mundane—are being conflated with the AI zealots’ unsubstantiated faith in an all-knowing, beneficent intelligence that will solve climate change for us, all to prop up the valuations in this trillion-dollar bubble. Too much is being asked of AI. Too much is needed from it. Whatever AI is capable of, the current messaging is distorted by the sheer amount of financial speculation involved.

The most frustrating thing about our current moment is that it didn’t have to be this way. The thought experiment Morozov describes, envisioning an alternate path for the development of AI unshackled from its Cold War past, reminds us of the importance of values in the trajectory of any technology. AI is not just a matter of an objective intelligence pursuing objectively better aims. Whatever aims are pursued will depend on the values encoded into the system. These values will be a product of many things—the beliefs of the builders, the bias of the ingested data, the technical limitations—but will be particularly informed by the norms and structures under which the technology is developed. An AI-based sales bot trained to upsell customers isn’t doing so because it’s the “intelligent” thing to do, and certainly not because it is the right thing to do, but because the company wants to make more money and has encoded this value into the bot. Poor sales bot: born to be infinitely loving, destined to be infinitely slimy.

Of course, the very idea of values is something that AI proponents like Andreessen conveniently omit in their conversion sermons on the power of AI. They’d rather labor under the illusion of objective intelligence and objective good because adjudicating between competing values is annoying and messy—the domain of politics. Better to pretend that we “all” want the same things and that AI will merely help us “all” get there faster. Never mind that the values of someone making rock-bottom wages doing data cleanup for an AI company might be pretty different from those of a tech billionaire who owns a $177 million house in Malibu as well as significant stakes in numerous AI companies.

If the real challenge lies, as Morozov argues, in cultivating the right Weltanschauung, then I think the first step is to be suspicious of the ravings of power-hungry billionaires. As a start, we should try to reclaim the idea of AI from their clutches: if we unburden it of the hefty responsibility of “saving” us, it might actually become something moderately useful. After all, as Morozov writes, to realize the emancipatory potential of technology requires a “radical political project.” So let’s start with the idea of AI, and then see what else we can reclaim.