For a technology that seemed to materialize out of thin air, generative AI has had a remarkable two-year rise. It’s hard to believe that ChatGPT, still the public face of this revolution, became widely available only on November 30, 2022. There has been a lot of hype, and more is surely to come, despite talk of a bubble now on the verge of bursting. The hawkers do have a point. Generative AI is upending many an industry, and many people find it both shockingly powerful and shockingly helpful. In health care, AI systems now help doctors summarize patient records and suggest treatments, though they remain fallible and demand careful oversight. In creative fields, AI is producing everything from personalized marketing content to entire video game environments. Meanwhile, in education, AI-powered tools are simplifying dense academic texts and customizing learning materials to meet individual student needs.

Why should the world-historical promise of computing be confined to replicating bureaucratic rationality?

In my own life, the new AI has reshaped the way I approach both everyday and professional tasks, but nowhere is the shift more striking than in language learning. Without knowing a line of code, I recently pieced together an app that taps into three different AI-powered services, creating custom short stories with native-speaker audio. These stories are packed with tricky vocabulary and idioms tailored to the gaps in my learning. When I have trouble with words like Vergesslichkeit (“forgetfulness” in German), they pop up again and again, alongside dozens of others that I’m working to master.

In over two decades of language study, I’ve never used a tool this powerful. It not only boosts my productivity but redefines efficiency itself—the core promises of generative AI. The scale and speed really are impressive. How else could I get sixty personalized stories, accompanied by hours of audio across six languages, delivered in just fifteen minutes—all while casually browsing the web? And the kicker? The whole app, which sits quietly on my laptop, took me less than a single afternoon to build, since ChatGPT coded it for me. Vergesslichkeit, au revoir!
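For the technically curious, the plumbing behind such an app is modest. Here is a minimal sketch, assuming the OpenAI Python client for the story-generating step; the text-to-speech function is a hypothetical placeholder for whichever audio service one actually wires in, and the vocabulary list is just an example:

```python
# A minimal sketch of the kind of pipeline described above. The OpenAI client
# handles story generation; synthesize_audio is a hypothetical placeholder for
# a text-to-speech service with native-speaker voices.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def generate_story(words: list[str], language: str = "German") -> str:
    """Ask a language model for a short story built around troublesome vocabulary."""
    prompt = (
        f"Write a short {language} story for an intermediate learner. "
        f"Use each of these words at least twice: {', '.join(words)}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def synthesize_audio(text: str, language: str) -> bytes:
    """Placeholder: plug in your preferred text-to-speech provider here."""
    raise NotImplementedError

story = generate_story(["Vergesslichkeit", "ausgerechnet", "verblüfft"])
# audio = synthesize_audio(story, "German")
```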

But generative AI hasn’t only introduced new ecstasies of technological experience; it has also brought new agonies. The educational context is a case in point: if ChatGPT holds promise for personalized tutoring, it also holds promise for widespread cheating. Lowering the costs of mischief, as generative AI has already done, is a sure recipe for moral panic. Hence the growing list of public concerns about the likely—and in some cases already felt—effects of this technology. From automated decision-making in government and corporate institutions to its role in surveillance, criminal justice, and even warfare, AI’s reach extends deeply into social and political life. It has the potential to perpetuate bias, exacerbate wealth inequality, and obscure accountability in high-stakes processes, raising urgent questions about its impact.

Many of these concerns point to a larger structural issue: power over this technology is concentrated in the hands of just a few companies. It’s one thing to let Big Tech manage cloud computing, word processing, or even search; in those areas, the potential for mischief seems smaller. But generative AI raises the stakes, reigniting debates about the broader relationship between technology and democracy.

There is broad consensus that AI requires more of the latter, though what that entails remains fiercely debated. For some, democratizing AI involves greater transparency around the models and datasets driving these systems. Others advocate for open-source alternatives that would challenge corporate giants like OpenAI and Anthropic. Some call for reducing access barriers or building public-sector alternatives to privatized AI services. Most of these solutions, however, focus narrowly on fixing democratic deficits at the implementation stage of AI, prioritizing pragmatic adjustments to the AI systems already deployed or in the pipeline. Supporters of this view—call them the realists—argue that AI is here to stay, that its value depends on how we use it, and that it is, at minimum, worthy of serious political oversight.

Meanwhile, a small but growing group of scholars and activists are taking aim at the deeper, systemic issues woven into AI’s foundations, particularly its origins in Cold War–era computing. For these refuseniks, AI is more than just a flawed technology; it’s a colonialist, chauvinist, racist, and even eugenicist project, irreparably tainted at its core. Democratizing it would be like hoping to transform a British gentlemen’s club into a proletarian library—cosmetic reforms won’t suffice.

For their part, AI researchers claim they operated with considerable independence. As one of them put it in a much-discussed 1997 essay, “if the field of AI during those decades was a servant of the military then it enjoyed a wildly indulgent master.” If the AI community indeed enjoyed such autonomy, why did so few subversive or radical innovations emerge? Were conservatism and entanglement with the military-industrial complex ingrained in the research agenda from the start? Could an anti-systemic AI even exist, and what would it look like? More importantly, does any of this matter today—or should we resign ourselves to the realist stance, accept AI as it stands, and focus on democratizing its development?

If we could turn back the clock, what kind of public-spirited and less militaristic technological agenda might have emerged?

The contours of AI critique have evolved over time. The refuseniks, for example, once included a sizeable subset of “AI futilitarians” who took much delight in dissecting all the reasons AI would never succeed. With recent advances in generative AI—operating on principles far removed from those attacked by philosophically inclined skeptics—this position seems in crisis. Today’s remaining futilitarians train their sights on the specter of killer robots and yet-to-come artificial general intelligence—long a touchstone of the tech industry’s futurist dreams.

There are, of course, other positions; this sketch of the debate doesn’t capture every nuance. But we must face up to the fact that both broad camps, the realists and the refuseniks, ultimately reify artificial intelligence—the former in order to accept it as more or less the only feasible form of AI, the latter to denounce it as the irredeemable offspring of the military-industrial complex or the tech industry’s self-serving fantasies. There’s relatively little effort to think about just what AI’s missing Other might be—whether in the form of a research agenda, a political program, a set of technologies, or, better, a combination of all three.

To close this gap, I want to offer a different way of thinking about AI and democracy. Instead of aligning with either the realists or the refuseniks, I propose a radically utopian question: If we could turn back the clock and shield computer scientists from the corrosive influence of the Cold War, what kind of more democratic, public-spirited, and less militaristic technological agenda might have emerged? That alternative vision—whether we call it “artificial intelligence” or something else—supplies a meaningful horizon against which to measure the promises and dangers of today’s developments.


To see what road we might have traveled, we must return to the scene of AI’s birth. From its origins in the mid-1950s—just a decade after ENIAC, the first general-purpose electronic digital computer, was built at the University of Pennsylvania—the AI research community made no secret that the kind of machine intelligence it sought to create was teleological: oriented toward attaining a specific goal, or telos.

Take the General Problem Solver, a software program developed in 1957 with support from the RAND Corporation. Its creators—Herbert A. Simon, Allen Newell, and J. C. Shaw—used a technique called “means-ends analysis” to create a so-called “universal” problem solver. In reality, the problems the software could tackle had to be highly formalized. It worked best when goals were clearly defined, the problem-solving environment was stable (meaning the rules governing the process were fixed from the start), and multiple iterations allowed for trying out a variety of means to achieve the desired ends.
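To see how narrow this “universality” was, consider a toy rendering of means-ends analysis, written in modern Python rather than the IPL language of the original program. The operators and the coffee-making domain are invented for illustration; everything (facts, operators, goals) must be formalized before the solver can do anything at all:

```python
# A toy illustration of means-ends analysis in the spirit of the General
# Problem Solver: compute the "difference" between the current state and the
# goal, pick an operator that reduces it, and recurse.
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    preconditions: frozenset
    adds: frozenset
    deletes: frozenset = frozenset()

def plan(state, goals, operators, depth=8):
    """Return (steps, resulting_state), or None if no plan is found within `depth`."""
    missing = goals - state                        # the difference to reduce
    if not missing:
        return [], state
    if depth == 0:
        return None
    for goal in missing:
        for op in operators:
            if goal not in op.adds:
                continue                           # this operator doesn't help
            pre = plan(state, op.preconditions, operators, depth - 1)
            if pre is None:
                continue                           # can't establish preconditions
            pre_steps, pre_state = pre
            after = (pre_state - op.deletes) | op.adds
            rest = plan(after, goals, operators, depth - 1)
            if rest is None:
                continue
            rest_steps, final_state = rest
            return pre_steps + [op.name] + rest_steps, final_state
    return None

ops = [
    Operator("grind beans", frozenset({"have beans"}), frozenset({"have grounds"})),
    Operator("brew", frozenset({"have grounds", "have water"}), frozenset({"have coffee"})),
]
steps, _ = plan(frozenset({"have beans", "have water"}), frozenset({"have coffee"}), ops)
print(steps)  # ['grind beans', 'brew']
```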

Of course, this “rules-based” paradigm of AI research eventually lost out to a rival approach based on neural networks—the basis of all modern machine learning, including the large language models (LLMs) powering systems like ChatGPT. But even then, the nascent neural network approach was framed in problem-solving terms. One of the envisioned applications of the Perceptron, an early neural network designed for pattern recognition, was military: sifting through satellite imagery to detect enemy targets. Neural networks required a clearly defined target, with models trained to achieve that task. Without a specific goal or a clear history of prior attempts at achieving it, they wouldn’t work.
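A few lines of code make the dependence on a fixed target concrete. This is not Rosenblatt’s Perceptron (which was, among other things, a machine), only a minimal modern sketch of the learning rule, with invented two-dimensional data standing in for “target” and “non-target” patterns:

```python
# A minimal perceptron-style learner: it presupposes labeled examples of a
# predefined target (+1) versus everything else (-1) before learning begins.
import random

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (features, label) pairs with label in {-1, +1}."""
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        random.shuffle(samples)
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:            # misclassified: nudge the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Invented data: two clusters of 2-D points standing in for "target" / "non-target."
data = [((1.0, 1.2), +1), ((0.9, 1.5), +1), ((-1.0, -0.8), -1), ((-1.2, -1.1), -1)]
print(train_perceptron(data))
```

Without the labels, that is, without a predefined goal, there is nothing here for the algorithm to learn.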

Early AI focused on replicating the intelligence of a fully committed, emotionally detached office worker—a species of William Whyte’s “organization man.”

I think it is not a coincidence that early AI tools closely mirrored the instrumental reason of clerical and administrative workers in the very institutions—government, corporate, and military—that spearheaded AI research. These were workers with limited time and attention, for whom mistakes carried significant costs. Automating their tasks through machines seemed both a logical next step and an efficient way to reduce errors and expenses. Some of this focus on goals can be traced to funding imperatives; early AI needed to prove its practical value, after all. But a deeper reason lies in AI’s intellectual inheritance from cybernetics—a discipline that shaped much of its early agenda but was sidelined as AI sought to establish itself as a distinct field.

The pioneers of cybernetics were fascinated by how feedback-powered technologies—ranging from guided missiles to thermostats—could exhibit goal-directed behavior without conscious intention. They drew analogies between these systems and the teleological aspects of human intelligence—such as lifting a glass or turning a door handle—that allow us to achieve goals through feedback control. In appropriating this cybernetic framework, AI carried the metaphor further. If a thermostat could “pursue” a target temperature, why couldn’t a digital computer “pursue” a goal?

Yet there was an important difference. Early cyberneticians had one foot in machine engineering and the other in the biological sciences. They saw their analogies as a way to understand how the brain and nervous system actually functioned, and, if necessary, to revise the underlying models—sometimes by designing new gadgets to better reflect (or, in their language, “embody”) reality. In other words, they recognized that their models were just that: models of actually existing intelligence. The discipline of AI, by contrast, turned metaphor into reality. Its pioneers, largely mathematicians and logicians, had no grounding in biology or neuroscience. Instead, intelligence became defined by whatever could be replicated on a digital computer—and this has invariably meant pursuing a goal or solving a problem, even in the biologically inspired case of neural networks.

This fixation on goal-driven problem solving ironically went uncriticized by some of AI’s earliest and most prominent philosophical critics—particularly Hubert Dreyfus, a Berkeley professor of philosophy and author of the influential book What Computers Can’t Do (1972). Drawing on Martin Heidegger’s reflections on hammering a nail in Being and Time, Dreyfus emphasized the difficulty of codifying the tacit knowledge embedded in human traditions and culture. Even the most routine tasks are deeply shaped by cultural context, Dreyfus contended; we do not follow fixed rules that can be formalized as explicit, universal guidelines.

This argument was supposed to show that we can’t hope to teach machines to act as we do, but it failed to take aim at AI’s teleological ethos—the focus on goal-oriented problem solving—itself. This is even more puzzling given that Heidegger himself offers one variant of such a critique. He wasn’t a productivity-obsessed Stakhanovite on a mission to teach us how to hammer nails more effectively, and he certainly didn’t take goal-oriented action as the essential feature of human life.

On the contrary, Heidegger noted that it’s not only when the hammer breaks that we take note of how the world operates; it’s also when we grow tired of hammering. In such moments of boredom, he argued, we disengage from the urgency of goals, experiencing the world in a more open-ended way that hints at a broader, fluid, contextual form of intelligence—one that involves not just the efficient achievement of tasks but a deeper interaction with our environment, guiding us toward meaning and purpose in ways that are hard to formalize. While Heidegger’s world might seem lonely—it’s mostly hammers and Dasein—similar reexaminations of our goals can be sparked by our interactions with each other.

Ironically, the fixation on goal-driven problem solving went uncriticized by some of AI’s earliest and most prominent philosophical critics.

Yet for the AI pioneers of the 1950s, this fact was a nonstarter. Concepts like boredom and intersubjectivity, lacking clear teleological grounding, seemed irrelevant to intelligence. Instead, early AI focused on replicating the intelligence of a fully committed, extrinsically motivated, emotionally detached office worker—a species of William Whyte’s “organization man,” primed for replacement by more reliable digital replicas.

It took nearly a decade for Dreyfus’s Heideggerian critique to resonate within the AI community, but when it did, it led to significant realignments. One of the most notable appeared in the work of Stanford computer science professor Terry Winograd, a respected figure in natural language processing whose work had even earned Dreyfus’s approval. In the 1980s Winograd made a decisive turn away from replicating human intelligence. Instead, he focused on understanding human behavior and context, aiming to design tools that would amplify human intelligence rather than mimic it.

This shift became tangible with the creation of the Coordinator, a software system developed through Winograd’s collaboration with Fernando Flores, a Chilean politician-turned-philosopher and a serial entrepreneur. As its name suggests, the software aimed to facilitate better workplace coordination by allowing employees to categorize electronic interactions with a colleague—was it a request, a promise, or an order?—to reduce ambiguity about how to respond. Properly classified, messages could then be tracked and acted upon appropriately.
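Going only by this description, the Coordinator’s bookkeeping can be imagined along the following lines. The class and field names are hypothetical reconstructions for illustration, not Winograd and Flores’s actual design:

```python
# A hypothetical reconstruction of the Coordinator's core idea: tag each
# message with a speech-act type so that open requests, promises, and orders
# can be tracked until they are acted upon.
from dataclasses import dataclass
from enum import Enum, auto

class SpeechAct(Enum):
    REQUEST = auto()
    PROMISE = auto()
    ORDER = auto()

@dataclass
class Message:
    sender: str
    recipient: str
    act: SpeechAct
    text: str
    fulfilled: bool = False

def open_commitments(log):
    """Return the messages still awaiting action."""
    return [m for m in log if not m.fulfilled]

log = [
    Message("ana", "ben", SpeechAct.REQUEST, "Send the Q3 figures by Friday."),
    Message("ben", "ana", SpeechAct.PROMISE, "Will do, by Thursday.", fulfilled=True),
]
print([m.text for m in open_commitments(log)])  # only the unfulfilled request
```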

Grounded in principles of human-computer interaction and interaction design, this approach set a new intellectual agenda: Rather than striving to replicate human intelligence in machines, why not use machines to enhance human intelligence, allowing people to achieve their goals more efficiently? As faith in the grand promises of conventional AI began to wane, Winograd’s vision gained traction, drawing attention from future tech titans like Larry Page, Reid Hoffman, and Peter Thiel, who attended his classes.

The Coordinator faced its share of criticism. Some accused it of reinforcing the hierarchical control that stifled creativity in bureaucratic organizations. Like the Perceptron, the argument went, the Coordinator ultimately served the agendas of what could be called the Efficiency Lobby within corporations and government offices. It helped streamline communication, but in ways that often aligned with managerial objectives, consolidating power rather than distributing it. This wasn’t inevitable; one could just as easily imagine social movements—where ambiguity in communication is commonplace—using the software. (It would likely work better for movements with centralized structures and clear goals, such as the civil rights movement, than for decentralized ones such as Occupy Wall Street or the Zapatistas.)

The deeper issue lay in the very notion of social coordination that Winograd and Flores were trying to facilitate. While they had distanced themselves from the AI world, their approach remained embedded in a teleological mindset. It was still about solving problems, about reaching defined goals—a framework that didn’t fully escape the instrumental reason of AI they had hoped to leave behind.


Winograd, to his credit, proved far more self-reflexive than most in the AI community. In a talk in 1987, he observed striking parallels between symbolic AI—then dominated by rules-based programs that sought to replicate the judgment of professionals like doctors and lawyers—and Weberian bureaucracy. “The techniques of artificial intelligence,” he noted, “are to the mind what bureaucracy is to human social interaction.” Both thrive in environments stripped of ambiguity, emotion, and context—the very qualities often cast as opposites of the bureaucratic mindset.

Winograd didn’t examine the historical forces that produced this analogy. But recent historical accounts suggest that AI research may have, from its inception, attracted those already studying or optimizing bureaucratic systems. As historian of technology Jonnie Penn points out, Herbert A. Simon is a prime example: after aiming to build a “science of public administration” in the 1940s, by the mid-1950s he had become a key player in building a “science of intelligence.” Both endeavors, despite acknowledging the limits of rationality, ultimately celebrated the same value: efficiency in achieving one’s goals. In short, their project was aimed at perfecting the ideal of instrumental reason.

The Efficiency Lobby knew exactly what it wanted: streamlined operations, increased productivity, and tighter hierarchical control.

It’s also no surprise that the bureaucracies of the Efficiency Lobby—from corporations to government funding agencies and the military—gravitated toward AI. Even before the 1956 Dartmouth Workshop, often seen as AI’s ground zero, these institutions were already pursuing similar goals, not least due to the Cold War. The era’s geopolitical tensions demanded rapid advancements in technology, surveillance, and defense, pressuring institutions to develop tools that could process vast amounts of information, enhance decision-making, and maintain a competitive edge against the Soviet Union. The academic push for AI seamlessly aligned with the automation agenda already driving these institutions: tightening rule adherence, streamlining production, and processing intelligence and combat data. Mechanizing decision-making and maximizing efficiency had long been central to their core ambitions.

It is here that we should step back and ask what might have been in the absence of Cold War institutional pressures. Why should the world-historical promise of computing be confined to replicating bureaucratic rationality? Why should anyone outside these institutions accept such a narrow vision of the role that a promising new technology—the digital computer—could play in human life? Is this truly the limit of what these machines can offer? Shouldn’t science have been directed toward exploring how computers could serve citizens, civil society, and the public sphere writ large—not just by automating processes, but by simulating possibilities, by modeling alternate futures? And who, if anyone, was speaking up for these broader interests?

In a society with a semblance of democratic oversight in science, we might expect these questions to spark serious inquiry and research. But that was not mid-1950s America. Instead, John McCarthy—the computer scientist who coined the term “artificial intelligence” and the name most associated with the Dartmouth Workshop (he taught there at the time)—defined the field as he and his closest allies saw fit. They forged alliances with corporate giants like IBM and secured military funding, bypassing the broader scientific community altogether. Later, McCarthy openly celebrated these undemocratic beginnings, stating:

AI would have developed much more slowly in the U.S. if we had had to persuade the general run of physicists, mathematicians, biologists, psychologists, or electrical engineers on advisory committees to allow substantial NSF money to be allocated to AI research. . . . AI was one of the computer science areas . . . DARPA consider[ed] relevant to Defense Department problems. The scientific establishment was only minimally, if at all, consulted.

AI retrospectives often bristle at the ignorance of other disciplines, yet its early practitioners had their own blind spots. Their inability to conceptualize topics such as boredom was not an isolated oversight: it reflects their fundamental failure to reckon with the non-teleological forms of intelligence—those that aren’t focused on problem solving or goal attainment. By reducing all intelligence to such matters, they overlooked alternative paths—ones that explore how computer technologies might amplify, augment, or transform other forms of intelligence, or how the technology itself would need to evolve to accommodate and nurture them.

In fairness, it’s unsurprising they didn’t ask these questions. The Efficiency Lobby knew exactly what it wanted: streamlined operations, increased productivity, and tighter hierarchical control. The emerging AI paradigm promised all of that and more. Meanwhile, there was no organized opposition from citizens or social movements—no Humanity Lobby, so to speak—advocating for an alternative. Had there been one, what might this path have looked like?


In 1953 the Colorado Quarterly posthumously published an essay by Hans Otto Storm, an inventor and radio engineer who also made a name for himself as a novelist. He tragically died just four days after the attack on Pearl Harbor, electrocuted while installing a radio transmitter for the U.S. Army in San Francisco. Despite his notable literary career, it is this short essay—initially rejected by his publishers—that has kept his legacy alive.

Even in science and engineering, effective learning succeeds less by algorithmic rigidities than by “messing about.”

Storm was a disciple and friend of the firebrand heterodox economist Thorstein Veblen. While Veblen is widely known for celebrating “workmanship” as the engineer’s antidote to capitalist excess, his thinking took a fascinating, even playful turn when he encountered the scientific world. There, probably influenced by his connections to the pragmatists, Veblen discovered a different force at work: what he called “idle curiosity,” a kind of purposeless purpose that drove scientific discovery. This tension between directed and undirected thought would become crucial to Storm’s own theoretical innovations.

Storm makes a similar, and crucial, distinction between two modes of what he called “craftsmanship.” The more familiar of the two is “design,” rooted in the mindset of Veblen’s engineer. It begins with a specific goal—say, constructing a building—and proceeds by selecting the best materials to achieve that end. In essence, this is just instrumental reason. (Storm was quite familiar with Weber’s oeuvre and commented on it.)

What of the second mode of “craftsmanship”? Storm gave this alternative a strange name: “eolithism.” To describe it, he invites us to imagine Stone Age “eoliths,” or stones “picked up and used by man, and even fashioned a little for his use.” Modern archaeologists doubt that eoliths are the result of this kind of human intervention—they are probably just products of natural processes such as weathering or random breakage—but that is no blow to the force of Storm’s vision. In his own words, the key point

is that the stones were picked up . . . in a form already tolerably well adapted to the end in view and, more important, strongly suggestive of the end in view. We may imagine [the ancient man] strolling along in the stonefield, fed, contented, thinking preferably about nothing at all—for these are the conditions favorable to the art—when his eye lights by chance upon a stone just possibly suitable for a spearhead. That instant the project of the spear originates; the stone is picked up; the spear is, to use a modern term, in manufacture. . . . And if . . . the spearhead, during the small amount of fashioning that is its lot, goes as a spearhead altogether wrong, then there remains always the quick possibility of diverting it to some other use which may suggest itself.

The contrast with the design mode of instrumental reason could not be more pronounced. Eolithism posits no predefined problems to solve, no fixed goals to pursue. Storm’s Stone Age flâneur stands in stark opposition to the kind of rationality on display in Cold War–era thought experiments like the prisoner’s dilemma—and is only better for it. The absence of predetermined goals broadens the flâneur’s capacity to see the world more richly, as the multiplicity of potential ends expands what counts as a means to achieve them.

This is Veblen’s idle curiosity at work. Separated from it, design principles are fundamentally limited because they require fixed, predetermined goals and must eliminate diversity from both methods and materials, reducing their inherent value to merely serving those predetermined ends. Storm goes on to argue that efforts to apply design to solve problems at scale, using the uniform methods of mass production, leave people yearning for vernacular, heterogeneous solutions that only eolithism can offer. Its spirit persists into modernity, embodied in unexpected figures—Storm identifies the junkman as the quintessential eolithic character.

What sets Storm apart from other thinkers who have explored similar intellectual territory—like Claude Lévi-Strauss with his notion of “bricolage” and Jean Piaget with his observations of children and their toys—is his refusal to treat the eolithic mindset as archaic or merely a phase for primitive societies or toddlers. This longing for the heterogeneous over the rigid is not something people or societies are expected to outgrow as they develop. Instead, it’s a fundamental part of human experience that endures even in modernity. In fact, this striving might inform the very spirit—playful, idiosyncratic, vernacular, beyond the rigid plans and one-size-fits-all solutions—that some associate with postmodernity.

Storm’s Stone Age flâneur stands in stark opposition to the classic model of Cold War rationality: the prisoner’s dilemma.

That’s not to say that eolithic tendencies were not under threat in Storm’s day, especially given the imperatives favored by the Efficiency Lobby. Indeed, Storm argued that much of professional education carried an inherent anti-eolithic bias, lamenting that “good, immature eolithic craftsmen” were “urged to study engineering, only to find out, late and perhaps too late, that the ingenuity and fine economy which once captivated [them] are something which has to be unlearned.” Yet, even in science and engineering, effective learning—especially in its early stages—succeeds by avoiding the algorithmic rigidities of the design mode. More often, it starts with what David Hawkins, a philosopher of education and one-time collaborator with Simon, called “messing about.” (A friend of Storm’s and a former aide to Robert Oppenheimer—they all moved in the same leftist circles in California of the late 1930s—Hawkins ensured the posthumous publication of Storm’s essay and did much to popularize it, including among technologists.)

Storm was not a philosopher, and his brief essay contains no citations, but his perspective evokes a key theme from pragmatist philosophy. Can we really talk about means and ends as separate categories, when our engagement with the means—and with one another—often leads us to revise the very ends we aim to achieve? In Storm’s terms, purposive action might itself emerge as the result of a series of eolithic impulses.


What does any of this have to do with a utopian vision for AI? If we define intelligence purely as problem solving and goal achievement, perhaps not much. In Storm’s prehistoric idyll, there are no errands to be run, no great projects to be accomplished. His Stone Age wanderer, for all we know, might well be experiencing deep boredom—“thinking preferably about nothing at all,” as Storm suggests.

But can we really dismiss the moment when the flâneur suddenly notices the eolith—whether envisioning a use for it or simply finding it beautiful—as irrelevant to how we think about intelligence? If we do, what are we to make of the activities that we have long regarded as hallmarks of human reason: imagination, curiosity, originality? These may be of little interest to the Efficiency Lobby, but should they be dismissed by those who care about education, the arts, or a healthy democratic culture capable of exploring and debating alternative futures?

At first glance, Storm’s wanderer may seem to be engaged in nothing more than a playful exercise in recategorization—lifting the stone from the realm of natural objects and depositing it into the domain of tools. Yet the process is far from mechanical, just as it is far from unintelligent. Whether something is a useful tool or a playful artifact often depends on the gaze of the beholder—just ask Marcel Duchamp (who famously proclaimed a pissoir an art object) or Brian Eno (who famously peed into Duchamp’s Fountain to reclaim its status as a subversive artifact, and not a mere gallery exhibit).

Eolithism posits no predefined problems to solve, no fixed goals to pursue. Can we really dismiss this attitude as irrelevant to intelligence?

Storm points to children’s play as a prime example of eolithism. He also makes clear that not all social situations, actors, and institutional environments are equally conducive to it. For one, some of us may have been educated out of this mindset in school. Others may be surrounded by highly sophisticated, unalterable technical objects that resist repurposing. But Storm’s list is hardly exhaustive. Many other factors are at work, from the skill, curiosity, and education of the flâneur to the rigidity of rules and norms guiding individual behavior to the ability of eolithic objects to “suggest” and “accept” their potential uses.

With this, we have arrived at a picture of human intelligence that runs far beyond instrumental reason. We might call it, in contrast, ecological reason—a view of intelligence that stresses both indeterminacy and the interactive relationship between ourselves and our environments. Our life projects are unique, and it is through these individual projects that the many potential uses of “eoliths” emerge for each of us.

Unlike instrumental reason, which, almost by definition, is context-free and lends itself to formalization, ecological reason thrives on nuance and difference, and thus resists automation. There can be no question of formalizing the entire, ever-shifting universe of meanings from which it arises. This isn’t a question of infeasibility but of logical coherence: asking a machine to exercise this form of intelligence is like asking it to take a Rorschach test. It may produce responses, especially if trained on a vast corpus of human responses, but those answers will inevitably be hollow for one simple reason: the machine hasn’t been socialized in a way that would make the process of having it interpret the Rorschach image meaningful.

Yet just because formalization is off the table doesn’t mean ecological reason can’t be technologized in other ways. Perhaps the right question echoes one posed by Winograd four decades ago: rather than asking if AI tools can embody ecological reason, we should ask whether they can enhance its exercise by humans.

Framing the question this way offers grounds for cautious optimism—if only because AI has evolved radically since Winograd’s critique in the 1980s. Today’s AI allows for more heterogeneous and open-ended uses; its generality and lack of a built-in telos make it conducive to experimentation. Where earlier systems might have defaulted to a rigid “computer says no,” modern AI hallucinates its way to an answer. This shift stems from its underlying method: unlike the rules-based expert systems Winograd critiqued as Weberian bureaucracy, today’s large language models are powered by data and statistics. Though some rules still shape them, their outputs are driven by changing data, not fixed protocols.
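The difference can be caricatured in a few lines of code. Both snippets below are toy illustrations, not real systems: the first answers only what its fixed rules anticipate, while the second strings words together from whatever statistics its tiny, invented training corpus happens to contain: fluent, open-ended, and occasionally wrong.

```python
# Two caricatures: a rule-based "expert system" with a fixed protocol, and a
# crude bigram generator that produces text from observed word statistics.
import random
from collections import defaultdict

RULES = {"capital of france": "Paris"}              # fixed protocol

def expert_system(query):
    return RULES.get(query.lower(), "computer says no")

def train_bigrams(corpus):
    counts = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a].append(b)                     # statistics, not rules
    return counts

def generate(counts, word, length=6):
    out = [word]
    for _ in range(length):
        if word not in counts:
            break
        word = random.choice(counts[word])          # sample from observed data
        out.append(word)
    return " ".join(out)

corpus = ["the capital of france is paris", "the capital of peru is lima"]
print(expert_system("capital of italy"))            # -> computer says no
print(generate(train_bigrams(corpus), "the"))       # fluent-ish, possibly wrong
```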

What’s more, these models resemble the flexibility of the market more than the rigidity of bureaucracy. Just as market participants rely on past trends and can misjudge fast-changing contexts, large language models generate outputs based on statistical patterns—at the risk of occasional hallucinations and getting the context wrong. It’s no coincidence, perhaps, that Friedrich Hayek, whose work in psychology influenced early neural networks, saw an equivalence between how brains and markets operate. (Frank Rosenblatt, creator of the Perceptron, cites Hayek approvingly.)

Unlike context-free instrumental reason, ecological reason thrives on nuance and difference—and thus resists automation.

In my small project to build the language app, I started out much like the carefree Stormian flâneur—unconcerned with solving a particular problem. I wasn’t counting the hours spent learning languages or searching for the most efficient strategy. Instead, as I was using one of the three AI-powered services—my equivalent of stumbling upon Storm’s stone—I noticed a feature that made me wonder if I could link this tool with the other two. Were my hunches correct that someone as code-illiterate as I am could easily combine these services? I didn’t have to wonder; with ChatGPT, I could immediately test them. In this sense, ChatGPT isn’t the eolith itself—it’s too amorphous, too shapeless, too generic—but it functions more like the experimental workshop where the eolithic flâneur takes his discovery to see what it’s really good for. In other words, it lets us test whether the found stone is better suited as a spearhead, a toy, or an art object.

There are elements of eolithism here, in short, but I think this is far from the best we can hope for. To begin with, all three services I used come with subscription or usage fees; the one that transforms text into audio charges a hefty $99 per month. It’s quite possible that these fees, heavily subsidized by venture capital, don’t even account for the energy costs of running such power-hungry generative AI. It’s as if someone privatized the stonefield where the original eolith was discovered, and its new proprietors charged a hefty entrance fee. A way to maximize ecological intelligence it isn’t.

There’s also something excessively individualistic about this whole setup—a problem that Storm’s asocial, prehistoric example sidesteps. Sure, I can build a personalized language learning app using a mix of private services, and it might be highly effective. But is this model scalable? Is it socially desirable? Is this the equivalent of me driving a car where a train might do just as well? Could we, for instance, trade a bit of efficiency and personalization to reuse some of the sentences or short stories I’ve already generated in my app, reducing the energy cost of re-running these services for each user?

This takes us to the core problem with today’s generative AI. It doesn’t just mirror the market’s operating principles; it embodies its ethos. This isn’t surprising, given that these services are dominated by tech giants that treat users as consumers above all. Why would OpenAI, or any other AI service, encourage me to send fewer queries to their servers or reuse the responses others have already received when building my app? Doing so would undermine their business model, even if it might be better from a social or political (never mind ecological) perspective. Instead, OpenAI’s API charges me—and generates a nontrivial amount of carbon emissions—even to tell me that London is the capital of the UK or that there are one thousand grams in a kilogram.

For all the ways tools like ChatGPT contribute to ecological reason, then, they also undermine it at a deeper level—primarily by framing our activities around the identity of isolated, possibly alienated, postmodern consumers. When we use these tools to solve problems, we’re not like Storm’s carefree flâneur, open to anything; we’re more like entrepreneurs seeking arbitrage opportunities within a predefined, profit-oriented grid. While eolithic bricolage can happen under these conditions, the whole setup constrains the full potential and play of ecological reason.

Here too, ChatGPT resembles the Coordinator, much like our own capitalist postmodernity still resembles the welfare-warfare modernity that came before it. While the Coordinator enhanced the exercise of instrumental reason by the Organization Man, ChatGPT lets today’s neoliberal subject—part consumer, part entrepreneur—glimpse and even flirt, however briefly, with ecological reason. The apparent increase in human freedom conceals a deeper unfreedom; behind both stands the Efficiency Lobby, still in control. This is why our emancipation through such powerful technologies feels so truncated.

Today’s AI allows for more heterogeneous and open-ended uses than ever before. But it makes us more like entrepreneurs than Storm’s carefree flâneur.

Despite repeated assurances from Silicon Valley, this sense of truncated liberation won’t diminish even if its technologies acquire the ability to tackle even greater problems. If the main attraction of deep learning systems is their capacity to execute wildly diverse, complex, even unique tasks with a relatively simple (if not cheap or climate-friendly) approach, we should remember that we already had a technology of this sort: the market. If you wanted your shopping list turned into a Shakespearean sonnet, you didn’t need to wait for ChatGPT. Someone could have done it for you—if you could find that person and were willing to pay the right price.

Neoliberals recognized this early on. At least in theory, markets promise a universal method for problem solving, one far more efficient and streamlined than democratic politics. Yet reality is sobering. Real markets all too frequently falter, often struggling to solve problems at all and occasionally making them much worse. They regularly underperform non-market systems grounded in vernacular wisdom or public oversight. Far from natural or spontaneous phenomena, they require a Herculean effort to make them function effectively. They cannot easily harness the vast reserves of both tacit and formal knowledge possessed by citizens, or at least that type of knowledge that isn’t reducible to entrepreneurial thinking: markets can only mobilize it by, well, colonizing vast areas of existence. (Bureaucracies, for their part, faced similar limitations long before neoliberalism, though their disregard for citizen participation stemmed from different motives.)

These limitations are well known, which is why there’s enduring resistance to commodifying essential services and a growing push to reverse the privatization of public goods. Two years into generative AI’s commercial growing pains, a similar reckoning with AI looms. As long as AI remains largely under corporate control, placing our trust in this technology to solve big societal problems might as well mean placing our trust in the market.


What’s the alternative? Any meaningful progress in moving away from instrumental reason requires an agenda that breaks ties with the Efficiency Lobby. These breaks must occur at a level far beyond everyday, communal, or even urban existence, necessitating national and possibly regional shifts in focus. While this has never been done in the United States—with the potential exception of certain elements of the New Deal, such as support for artists via the Federal Art Project—history abroad does offer some clues as to how it could happen.

In the early 1970s, Salvador Allende’s Chile aimed to empower workers by making them not just the owners but also the managers of key industries. In a highly volatile political climate that eventually led to a coup, Allende’s government sought to harness its scarce information technology to facilitate this transition. The system—known as Project Cybersyn—was meant to promote both instrumental and ecological reason, coupling the execution of routine administrative tasks with deliberation on national, industry, and company-wide alternatives. Workers, now in managerial roles, would use visualization and statistical tools in the famous Operations Room to make informed decisions. The person who commissioned the project was none other than Fernando Flores, Allende’s minister and Winograd’s future collaborator.

Around the same time, a group of Argentinian scientists began their own efforts to use computers to spark discussions about potential national—and global—alternatives. The most prominent of these initiatives came from the Bariloche Foundation, which contested many of the geopolitical assumptions found in reports like 1972’s The Limits to Growth—particularly the notion that the underdeveloped Global South must make sacrifices to “save” the overdeveloped Global North.

The apparent increase in human freedom offered by AI today conceals a deeper unfreedom: behind both stands the Efficiency Lobby, still in control.

Another pivotal figure in this intellectual milieu was Oscar Varsavsky, a talented scientist-turned-activist who championed what he called “normative planning.” Unlike the proponents of modernization theory, who wielded computers to project a singular, predetermined trajectory of economic and political progress, Varsavsky and his allies envisioned technology as a means to map diverse social trajectories—through a method they called “numerical experimentation”—to chart alternative styles of socioeconomic development. Among these, Varsavsky identified a spectrum including “hippie,” “authoritarian,” “company-centric,” “creative,” and “people-centric,” the latter two being his preferred models.

Computer technology would thus empower citizens to explore the possibilities, consequences, and costs associated with each path, enabling them to select options that resonated with both their values and available resources. In this sense, information technology resembled the workshop of our eolithic flâneur: a space not for mere management or efficiency seeking, but for imagination, simulation, and experimentation.

The use of statistical software in modern participatory budgeting experiments—even if most of them are still limited to the local rather than national level—mirrors this same commitment: the goal is to use statistical tools to illuminate the consequences of different spending options and let citizens choose what they prefer. In both cases, the process is as much about improving what Paulo Freire called “problem posing”—allowing competing definitions of problems to emerge and exposing them to public scrutiny and deliberation—as it is about problem solving.

What ties the Latin American examples together is their common understanding that promoting ecological reason cannot be done without delinking their national projects from the efficiency agenda imposed—ideologically, financially, militarily—by the Global North. They recognized that the supposedly apolitical language of such presumed “modernization” often masked the political interests of various factions within the Efficiency Lobby. Their approach, in other words, was first to pose the problem politically—and only later technologically.

The path to ecological reason is littered with failures to make this move. In the late 1960s, a group of tech eccentrics—many with ties to MIT—were inspired by Storm’s essay to create the privately funded Environmental Ecology Lab. Their goal was to explore how technology could enable action that wasn’t driven by problem solving or specific objectives. But as hippies, rebels, and antiwar activists, they had no interest in collaborating with the Efficiency Lobby, and they failed to take practical steps toward a political alternative.

One young architecture professor connected to the lab’s founders, Nicholas Negroponte, didn’t share this aversion. Deeply influenced by their ideas, he went on to establish the MIT Media Lab—a space that celebrated playfulness through computers, despite its funding from corporate America and the Pentagon. In his 1970 book, The Architecture Machine: Toward a More Human Environment, Negroponte even cited Storm’s essay. But over time, this ethos of playfulness morphed into something more instrumental. Repackaged as “interactivity” or “smartness,” it became a selling point for the latest gadgets at the Consumer Electronics Show—far removed from the kind of craftsmanship and creativity Storm envisioned.

Latin American examples give the lie to the “there’s no alternative” ideology of technological development in the Global North.

Similarly, as early as the 1970s, Seymour Papert—Negroponte’s colleague at MIT and another AI pioneer—recognized that the obsession with efficiency and instrumental reason was detrimental to computer culture at large. Worse, it alienated many young learners, making them fear the embodiment of that very reason: the computer. Although Papert, who was Winograd’s dissertation advisor, didn’t completely abandon AI, he increasingly turned his focus to education, advocating for an eolithic approach. (Having worked with Piaget, he was also acquainted with the work of David Hawkins, the education philosopher who had published Storm’s essay.) Yet, like the two labs, Papert’s solutions ultimately leaned toward technological fixes, culminating in the ill-fated initiative to provide “one laptop per child.” Stripped of politics, it’s very easy for eolithism to morph into solutionism.


The Latin American examples give the lie to the “there’s no alternative” ideology of technological development in the Global North. In the early 1970s, this ideology was grounded in modernization theory; today, it’s rooted in neoliberalism. The result, however, is the same: a prohibition on imagining alternative institutional homes for these technologies. There’s immense value in demonstrating—through real-world prototypes and institutional reforms—that untethering these tools from their market-driven development model is not only possible but beneficial for democracy, humanity, and the planet.

In practice, this would mean redirecting the eolithic potential of generative AI toward public, solidarity-based, and socialized infrastructural alternatives. As proud as I am of my little language app, I know there must be thousands of similar half-baked programs built in the same experimental spirit. While many in tech have profited from fragmenting the problem-solving capacities of individual language learners, there’s no reason we can’t reassemble them and push for less individualistic, more collective solutions. And this applies to many other domains.

Stripped of politics, it’s easy for eolithism to morph into solutionism. The political project comes first.

But to stop here—enumerating ways to make LLMs less conducive to neoliberalism—would be shortsighted. It would wrongly suggest that statistical prediction tools are the only way to promote ecological reason. Surely there are far more technologies for fostering human intelligence than have been dreamt of by our prevailing philosophy. We should turn ecological reason into a full-fledged research paradigm, asking what technology can do for humans—once we stop seeing them as little more than fleshy thermostats or missiles.

While we do so, we must not forget the key insight of the Latin American experiments: technology’s emancipatory potential will only be secured through a radical political project. Without one, we are unlikely to gather the resources necessary to ensure that the agendas of the Efficiency Lobby don’t overpower those of the Humanity Lobby. The tragic failure of those experiments means this won’t be an easy ride.

As for the original puzzle—AI and democracy—the solution is straightforward. “Democratic AI” requires actual democracy, along with respect for the dignity, creativity, and intelligence of citizens. It’s not just about making today’s models more transparent or lowering their costs, nor can it be resolved by policy tweaks or technological innovation. The real challenge lies in cultivating the right Weltanschauung—no app will do that for us—grounded in ecological reason. On this score, the ability of AI to run ideological interference for the prevailing order, whether bureaucracy in its early days or the market today, poses the greatest threat.

Incidentally, it’s the American pragmatists who got closest to describing the operations of ecological reason. Had the early AI community paid any attention to John Dewey and his work on “embodied intelligence,” many false leads might have been avoided. One can only wonder what kind of AI—and AI critique—we could have had if its critics had looked to him rather than to Heidegger. But perhaps it’s not too late to still pursue that alternative path.
