I’m grateful for these thoughtful responses, many of which grapple with the central question of how to bring “AI’s missing Other” into existence. Before engaging with their specific proposals, however, I need to clarify what this Other actually represents.

Bruce Schneier and Nathan Sanders defend the importance of problem solving (supposedly against my downplaying of it), while Terry Winograd characterizes my position as advocating for “playfulness . . . with no problems to solve, no goals to pursue.” But these moves fundamentally misunderstand the relationship between instrumental and ecological reason. The eolithic flâneur doesn’t set out on an intentional quest to find stones but nevertheless does operate within a framework of long-term projects, ends, and problems to solve. As Storm himself notes, “the stones were picked up . . . in a form already tolerably well adapted to the end in view and . . . strongly suggestive of the end in view.”

These ends emerge from culture, history, and society, but their exact form depends on how each of us interprets (and reinterprets) them. This is one place where humans differ fundamentally from computers: our different constellations of meaning lead to radically different interpretations of the same object. Hence my argument about the futility of having a computer take a Rorschach test: the exercise is meaningful only in light of human-like life projects—with all their associated anxieties, aspirations, and frustrations—which shape how we make sense of the images.

Far from ignoring questions of care and concern, as Winograd suggests, my conception of intelligence places them at its center. While I agree these aren’t themselves forms of intelligence, they are inseparable from how we respond to what I would call the prompts to care—whether moral, political, or aesthetic—that the world presents to us.

This understanding helps clarify the missing Other. Contrary to Winograd’s reading, I’m not advocating for more playful AI systems like Gordon Pask’s Musicolour machine. Instead, I envision an alternative non-AI project that would deploy some of the technologies currently used in AI—together with other social and cultural resources—to foster ecological reason. The goal would be to make more things meaningful to more people by enabling us to cultivate the interests and skills that transform noise or boredom into meaning and care.

Cold War AI was a massive military Keynesian project to entrench instrumental reason—increasingly embedded in technological systems—in all social domains. Today’s counterpart, by contrast, would leverage technology (but not only technology) to promote moral reasoning, political imagination, and aesthetic appreciation in humans.

Play can certainly help. As Brian Eno writes, “the magic of play is seeing the commonplace transforming into the meaningful.” This underscores my closing remarks about developing the right Weltanschauung: the point is not to follow the rituals of play (as we do when we play soccer or chess) but to stop doubting that another world is, in fact, possible. A good place to start is by realizing that the same ingredients and starting conditions can yield very different results; a mere stone can be so much more.

It’s in that spirit that I’d defend my use of historical hypotheticals. While they don’t provide a roadmap for action, they serve to crack open our imagination—something especially crucial given Silicon Valley’s chokehold on how we envision the future, as many responses make clear.

The alternatives we imagine needn’t be limited to structural reforms of existing technologies, important as they are—whether revamping funding mechanisms (as Edward Ongweso Jr. argues), empowering workers (as Brian Merchant and Wendy Liu suggest), or building more transparent infrastructure (as Schneier and Sanders advocate). More fundamentally, we need to reimagine what we’re trying to accomplish when we deploy technology to enhance intelligence in the first place. Rather than endlessly qualifying AI with adjectives—“democratic,” “playful,” “socialist,” and so on—perhaps we should return to first principles and ask whether the relationship between technology and intelligence can be conceptualized entirely outside the framework we inherited from the Cold War’s Efficiency Lobby.

I’ll be the first to acknowledge the difficulty. Thus, while I share Sarah Myers West and Amba Kak’s concerns about techno-optimism, they mischaracterize my argument as riding on a renewal of faith in technology’s emancipatory potential. As Winograd correctly notes, I invoked the Latin American examples from the early 1970s precisely to demonstrate the opposite point: merely changing how we think about technology—having an “aha” moment about its alternative possibilities—isn’t enough. Without embedding these insights—this Weltanschauung—within a radical political project, our recognition of technology’s potential remains just that: potential, unrealized and unrealizable.

Winograd is right that the crucial question—what such a project might look like today—is challenging. Many respondents offer their own answers. I believe its basic contours would mirror those of the Latin American initiatives of the 1970s, which were deeply informed by dependency theory. The starting point would be recognizing that contemporary technological development—despite its problem-solving prowess—remains fundamentally capitalist in nature and thus ultimately stands in opposition to human flourishing and ecological survival. What’s needed is a national—and, in some cases, regional—project to imagine and implement noncapitalist developmental paths, not just for technology but for society as a whole. Of course, such an agenda would take dramatically different forms in different contexts—what works for the United States would differ markedly from what might succeed in Guatemala, Thailand, or Kenya. And what to do about the United States, the entrenched hegemon of the global economy, is no easy question either.

Despite her valuable discussion of developments outside North America and Europe, Audrey Tang overlooks this crucial question of noncapitalist development alternatives. While one can debate the precise influence of cybernetics on figures like W. Edwards Deming, we shouldn’t forget the extensive critiques—by both Japanese and other thinkers—of Toyotism and the lean production methods that drove Japan’s economic miracle. To celebrate these systems merely because they incorporated some worker participation and used concepts like feedback is to miss their deeply political and ideological nature. After all, they strove for higher productivity in (still) highly hierarchical and mostly authoritarian capitalist workplaces. This approach exemplifies precisely the kind of technocratic thinking, divorced from considerations of alternative paths, that I mean to challenge.

Similar criticisms apply to projects like India Stack. While Tang presents the example as a triumph of local innovation, it represents just one developmental model—one that primarily serves India’s domestic capitalist class in its effort to avoid paying tribute to Silicon Valley. Without carefully examining how capitalism, in both its global and national forms, co-opts elements of tradition and social fabric that facilitate accumulation, we risk celebrating surface-level diversity while missing its ultimately homogenizing effects. While time will tell whether India Stack enhances or inhibits ecological reason, I remain deeply skeptical.

The promise of technological alternatives lies not in replacing Silicon Valley’s digital imperialism with local variants but in reconceptualizing technology’s role outside the logic of capital accumulation. This demands more than technical innovation or local control; it requires a radical political vision that can distinguish genuine social transformation from rebranded capitalist development. Our task is not to make AI more democratic or digital infrastructure more nationally flavored, but to build technological futures that break free from the very framework that keeps preaching “there’s no alternative.”