Morozov poses a provocative question, asking how AI might have been directed to different ends than the ones that drive the runaway industry today. As with any technology, we need to question both the technical imperatives and the underlying human values and uses. In the words of the decades-old slogan of Computer Professionals for Social Responsibility, “Technology is driving the future. . . . it is up to us to do the steering.”

Morozov also accurately points out the dominant role the “Efficiency Lobby” has played so far in steering the direction of AI, as well as of many other modern computing technologies. The question to be asked from a socially meaningful point of view, however, is not where else we could have gone in an alternative world, but how we move forward from here.

That is not to say that learning from the past isn’t useful. From the very beginning of AI and kindred technologies, there were indeed alternatives of the sort Morozov seeks. A visionary example was Gordon Pask’s Musicolour machine, built in 1953 in collaboration with Robin McKinnon-Wood, which translated musical input into visual output in a way that learned from its interaction with the musician operating it. As Pask put it:

Given a suitable design and a happy choice of visual vocabulary, the performer (being influenced by the visual display) could become involved in a close participant interaction with the system. He trained the machine and it played a game with him. In this sense, the system acted as an extension of the performer with which he could co-operate to achieve effects that he could not achieve on his own.

This exploration and others like it in subsequent decades did point in a direction that the world—or, to be more precise, the commercial technology developers—did not choose to take. But is this the direction in which we should be looking for a broad alternative to current AI?

I am not as enamored as Morozov seems to be with the world of Storm’s “flâneur.” I agree that there is something attractive about the image of playfulness, imagination, and originality, with no problems to solve and no goals to pursue. But deeper human consequences and opportunities are at stake when we design technologies. What Morozov leaves out in his efficient-versus-playful dichotomy is the role of human care and concern. This is evident in the way he talks about intelligence, which he sees as the measure of being human. Thus he seeks alternative kinds of “non-teleological forms of intelligence—those that aren’t focused on problem solving or goal attainment.”

But care is not a form of intelligence. The philosopher John Haugeland famously said, “the trouble with artificial intelligence is that computers don’t give a damn.” This is just as true of today’s LLM-based systems as it was of the “good old-fashioned AI” Haugeland critiqued. Rather than a kind of intelligence, care is an underlying foundation of human meaning. We don’t want to fill the world with uncaring playful machines any more than with uncaring efficiency generators.

Morozov has also missed the main underlying points of the examples he cites from my work with Fernando Flores. The Coordinator was indeed marketed with promises of increased organizational efficiency, but its underlying philosophy reflected a deeper view of human relationships, centered on the role of commitment as the basis for language. The Coordinator’s structure was designed to encourage those who used it to be explicit in their own recognition and expressions of commitment within their everyday practical communications. The theme of this and of Flores’s subsequent work is “instilling a culture of commitment” in our working relationships, allowing us to focus on what we are creating of value together.

My analogy of AI to bureaucracy evokes not just the mechanics of bureaucratic rule-following but the hollowing out of human meaning and intention. We are all familiar with the bureaucratic interaction in which our interlocutor says, “I’m sorry, I understand your concern, but the rules say that you have to . . .” That is, care for the lifeworld of the person being told what to do cannot be a consideration. To return to Haugeland’s insight, the bureaucratic system doesn’t give a damn. It is designed that way on purpose, to remove human subjectivity and judgment even from matters of crucial, life-determining importance.

Morozov recognizes that as long as AI remains largely under corporate control, placing our trust in this technology to solve big societal problems might as well mean placing our trust in the market. But putting it under government control, given the current nature of governments in the world, may not be an improvement. The problem isn’t how to engender AI systems that are more playful and less boring, but how to lay out what it would mean to create and deploy systems that are supportive of human concern and care. I agree these would be systems designed to enhance human interaction, not to replace it. As outlined in Douglas Engelbart’s early vision, the goal should be intelligence augmentation rather than artificial intelligence.

There have been many calls for moving toward AI “alignment” with human values and concerns, but there is no simple mechanism of alignment that we can appeal to. As Arturo Escobar argues, conventional technology design tends to support a singular, globalized worldview that prioritizes efficiency, economic growth, and technological progress, often at the expense of cultural diversity and ecological health. This is not the result of “closed world” assumptions but a consequence of the processes by which data are collected, networks are trained, and models are deployed.

We return to the question we started with: not “How might things have happened differently?” but “How might things be different in the future?” Morozov ends with a tantalizing proclamation: the lesson of the Latin American experiments is that “technology’s emancipatory potential will only be secured through a radical political project.” What is the radical political project of our times, within existing national and international systems of governance, that promises to nurture AI’s emancipatory potential? Unfortunately, that is a far more difficult and consequential question.