Over the past fifteen years of observing tech development, I’ve found that terms I once used like “cyber-utopianism,” “Internet-centrism,” and “techno-solutionism” fail to fully capture Big Tech’s grip on our institutional and infrastructural imagination. By this, I mean the public’s inability to envision essential information services outside the confines of Silicon Valley’s venture-platform complex.
A new term is needed: Panglossian neoliberalism. Championed by venture capitalists, tech CEOs, and startup founders, this credo asserts that we already live in the best of all possible worlds (its Panglossian aspect) and that there is no alternative to the market-driven provision of our tech infrastructures (its neoliberal aspect). The essence of this ideology is distilled in the recent Techno-Optimist Manifesto by Marc Andreessen, the prominent venture capitalist, who flatly states: “free markets are the most effective way to organize a technological economy.”
As history, the dogmas of Panglossian neoliberalism are at best naïve, ignoring the significant Cold War–era public spending, much of it military, that created Silicon Valley. ARPANET, the Global Positioning System (GPS), the integrated circuit, and the computer mouse all stem from government funding, not free markets. But the damage doesn’t stop with bad history. Politically, this ideology often results in paralysis, hindering the search for local, experimental, and democratic alternatives to the market-driven paradigm that dominates our technology stack.
Why, for instance, should a trivial question about your neighborhood be answered by a giant multinational conglomerate that indexes hundreds of billions of pages while dabbling in self-driving cars and life sciences? It’s like visiting the Library of Congress to look up a phone number in the Yellow Pages. It might work—much like driving instead of walking or biking could work—but should it?
In accepting the Googles and Facebooks of the world as our default information providers, we have made many bizarre compromises of this sort. But our lack of exploration into alternative models isn’t a testament to the exceptional quality of Silicon Valley’s offerings; it’s a consequence of the hegemony of Panglossian neoliberalism. Any departure from the market model is depicted as a regression to central planning or outdated systems like Minitel and the Post Office.
Nothing demonstrates the uncontested dominance of Panglossian neoliberalism like the rapid ascent of corporate-driven generative AI. This technology thrives on neoliberal principles while also reinforcing them. For proponents in Silicon Valley, its swift progress validates techno-optimists like Andreessen, who see free markets as the best way to organize a technological economy.
This narrative is deeply flawed. The core technologies of generative AI rest on decades of funding from public and military agencies such as DARPA, which have invested in neural networks since the 1950s. Without this support, the field could have perished during the AI winters of the 1960s and 1970s. Meanwhile, much of the training data for models behind services like OpenAI’s ChatGPT has been compiled, digitized, and financed by others. The government, of course, has made vast amounts of data available online, while non-profit initiatives like Project Gutenberg and the Internet Archive have made hundreds of thousands of books accessible.
The emergence of this hybrid model—where non-market forces lay the groundwork for corporations to swoop in and reap the rewards—is a result of deliberate policy, not mere happenstance. For decades, public agencies have been discouraged from advancing and commercializing basic technologies funded by the state. This role has been relegated instead to venture capitalists and entrepreneurs. Hegemonic as it is, Panglossian neoliberalism perpetuates the belief that all such policy decisions, past and present, are beneficial. It obscures the reality that our increasing reliance on market-driven models for generative AI often results in suboptimal outcomes. Moreover, it prevents us from recognizing that detaching the development of generative AI from the political economy shaping it could greatly benefit the public interest.
There are at least seven major issues with the current corporate-driven development, production, and rollout of generative AI: inefficiency and waste; subpar quality of services; inadequate compensation for those who helped train the models; exclusion of non-corporate players from R&D; lack of transparency about true costs; increased technological dependency of the Global South on the Global North; and, most significantly, the conservative bias inherent in these systems, which favor stability and predictability over novelty and variety. Consider them in turn.
First, the high-spending, high-stakes nature of the generative AI enterprise seems ill-suited to market competition. Consider the Human Genome Project, the Apollo Program, or the Manhattan Project: entrusting such monumental tasks to private, competing companies would be absurd.
Yet this is exactly what we are doing today with generative AI. OpenAI, Anthropic, Google, and Mistral are duplicating efforts, building nearly identical resources with minor differences. But why waste resources scraping the same data sets, fine-tuning them, and building energy-intensive data centers? If OpenAI truly needs $7 trillion to succeed, multiplying that figure by four or five competitors reveals the scale of waste this hyper-competition will create. Why wait to see which company has the deepest pockets, the boldest lawyers, and the haughtiest executives?
A public infrastructure model, where underlying data and models are centralized, offers a compelling alternative. Much like GPS—a Cold War–era innovation designed for the military but opened to civilian use later on—such a model could centralize essential resources while fostering decentralized innovation at the edges. This approach proved so effective that Europe, Russia, and China developed their own public equivalents—Galileo, GLONASS, and BeiDou—rather than leave this critical mission to the whims of the market.
Second, the decentralized, market-driven model will likely degrade the quality of generative AI services. Companies, propelled by the need to amass vast amounts of data and rush models to market, often bypass adequate oversight. This haste results in absurd outcomes, such as Google’s AI recommending eating rocks for health or adding glue to pizza for flavor. Quality is sacrificed on the altar of competition. Google’s recent blunder follows last year’s, when a glitch during the presentation of its Bard chatbot briefly wiped $100 billion off the company’s market valuation.
Competition-related imperatives explain many of the recent privacy and security scandals, including last year’s breach of OpenAI’s internal messaging system—a hacking incident that went unreported to law enforcement. In no other domain would such haste and sloppiness go unpunished. Unlike, say, pharmaceuticals—regulated by the FDA to prevent the release of half-baked products—generative AI remains largely unchecked, buoyed by the lofty promises of Panglossian neoliberalism. (Europe is ahead of the United States in this respect, with the EU’s AI Act at least trying to grapple with the regulatory challenge that is generative AI.)
Third, the fragmentation of services and the legally ambiguous methods of data collection make it difficult to fairly compensate the creators of the original content used to train these AI models. Perplexity, an AI-powered chatbot that scrapes sites explicitly requesting not to be scraped, exemplifies this issue. The idea that a fair compensation system for content creators can emerge from this competitive chaos is laughable. Major media organizations like Axel Springer and the Financial Times may secure their own licensing deals with companies like OpenAI, but what about the writers, journalists, and artists whose creative work trains these models, generating profits they do not share?
At a minimum, public regulation is necessary. Even better, a public entity with a broad and open-ended remit—which might include curating data sets, fine-tuning models, and compensating creators—could address these problems. This entity could implement differentiated access rules based on public policy priorities: for instance, providing free or heavily subsidized access to nonprofits and public universities, while charging higher fees to corporations like Microsoft and Amazon.
Fourth, if generative AI is as transformative as its proponents claim, why entrust its development to a few private companies? And what do we gain by tying its future to the business models of giants like Microsoft, Amazon, and SoftBank, which now finance the R&D efforts of smaller players like OpenAI and Anthropic? Flush with surplus cash and, in Microsoft and Amazon’s cases, vast computational resources, these corporations have elbowed their way into the field of generative AI (as opposed to making breakthroughs in-house). Why have them guide its development now?
Unlike the Human Genome Project, which had a clear end goal, generative AI lacks a single definition or objective—apart from serving the interests of its investors. We can predict Microsoft’s strategy: pitching products to corporations, governments, and militaries. This focus won’t steer generative AI toward experimental or innovative directions; the primary aim will be profit, sometimes at the expense of social and environmental well-being. Those who remember Microsoft’s attempt to disrupt encyclopedias with Encarta—a now obsolete venture—might see a parallel here. Generative AI backed by Microsoft risks a similar fate. Why not envision a truly public, collectively funded AI akin to Wikipedia, independent of corporate benevolence?
The hype around the imminent arrival of artificial general intelligence (AGI)—the technology poised to either save or doom humanity—may well be Silicon Valley’s tactic to distract us from considering alternative models (and the values they can promote). If AGI isn’t just around the corner, the debate about the future of this technology becomes more open-ended, inviting a broader discussion on the implications of leaving its development to market forces. In fields like education, health care, and transportation, we’ve recognized the market model’s limitations and its tendency to erode cherished ideals. Shouldn’t we apply the same scrutiny to generative AI—a direction to which the EU’s AI Act seems to point in discussing high-risk uses of this technology?
Fifth, consider the lack of transparency around most generative AI services. They seem magical—much like subsidized meal deliveries, coworking desks, or cheap ridesharing trips once did. This magic was part of what I used to call “Silicon Valley’s parallel welfare state” and what others termed the “millennial lifestyle subsidy.” These perks were courtesy of venture capitalists aiming to grow platforms and capture market share. The same applies to generative AI: we don’t know the cost of a single ChatGPT query. As private companies, OpenAI and Anthropic are not required to make meaningful disclosures, so we remain in the dark about the sustainability of what they are selling.
Sixth, while generative AI has yet to achieve the transformative impact its proponents promised, let’s assume for argument’s sake that it eventually will. The concentration of this technology in the United States exacerbates dependency in the already technologically underdeveloped Global South. With the exception of China, there won’t be much competition to Silicon Valley and its European allies. Larry Summers, a board member of OpenAI, has warned that slowing AI development would favor America’s enemies, indicating that neither Washington nor Silicon Valley would support national AI strategies aimed at reducing other countries’ reliance on American technology. Instead, they will actively work to deepen this dependence.
The only real solution here was already articulated in the early 1970s by what I’ve christened the Santiago School of Technology: the establishment of a global technology fund—modeled on the International Monetary Fund but less obsequious to America’s interests—whose whole purpose would be to facilitate the technological development of the Global South by creating a less restrictive intellectual property regime and facilitating access to funding and talent.
Today, this idea is more important than ever. The Global North’s reluctance to share intellectual property, even amid the pandemic, underscores the challenges in getting such a fund off the ground. However, intermediate measures could pave the way. For instance, a pan-Latin American initiative to develop Spanish-language large language models using the vast resources of the region’s libraries and universities would be a significant step forward.
Things look rather dire if such services are to be offered by the usual suspects from Silicon Valley. As companies like OpenAI expand their offerings beyond English, the costs for the Global South to keep pace with technological advancements will only grow, further entrenching economic—and geopolitical—dependencies. A locally driven AI development strategy oriented toward the eventual technological autonomy of the Global South is not just preferable; it is imperative.
None of the above critiques specify what generative AI should be like; they simply take the kind of services we get from OpenAI, Anthropic, or Google for granted. But we can go further and envision various approaches depending on how we interpret “generative” and “AI.”
For example, a common complaint is that generative AI systems are merely statistical engines— “stochastic parrots”—reinforcing conformity and sameness. This critique holds some truth since their output reflects the data used to build the model, and these systems optimize for patterns found in their training datasets. But this is not the only value they can optimize for.
The focus on prediction and stability in AI development has roots in the Cold War era. Early neural network research, such as Cornell’s Perceptron—one of the first operational neural networks embodied in computer hardware—received support from the military and the CIA, which believed its classification capabilities could help analyze aerial footage of Soviet targets, a body of material then exploding in quantity thanks to the growing use of spy planes and satellites to gather intelligence. If today’s generative AI and deep learning remain tied to military and corporate environments, this Cold War bias toward “control”—as the cybernetic lingo of the day would have it—will likely persist.
It doesn’t have to be this way. Generative AI can serve vastly different agendas; it just requires a new political economy to support this shift.
During the decade I spent researching a forgotten 1960s experiment—the effort to build ecological intelligence at the secretive and short-lived Environmental Ecology Lab in Boston—I glimpsed what this different agenda might look like. While details can be found in my podcast, A Sense of Rebellion, it’s sufficient here to note the lab’s humanist bias. Its members believed cybernetic technology could cultivate more discerning, sophisticated, and skilled individuals who strive for novelty. They developed what we might today call “smart” gadgets—from mattresses to chairs to dancing suits.
Central to their vision was a technology called self-organizing controllers, developed by the now largely forgotten company Adaptronics, an early pioneer in neural networks. These controllers were initially designed for military jets, but the Boston lab’s members thought they could repurpose them not just for civilian but for outright subversive applications.
Imagine a smart chair of the kind they were working on at the lab, one that allows you to tweak the shape of the cushions using a controller. The chair’s smartness—its intelligence—doesn’t derive from analyzing data related to all the other chairs in the universe to predict an ideal position, as a ChatGPT-powered smart chair might do. Nor does it arise from closely surveilling a user before and after their first use of the chair in order to personalize their experience, the way one might expect from a sales pitch at the Consumer Electronics Show in Las Vegas.
The intelligence of the lab’s chair instead lies in its self-organizing controller, which interacts with the person sitting in it. This interaction continuously adjusts the chair’s cushions until a given performance criterion—set by the user or intuited by the controller itself—is met. The focus is on interaction, not historical prediction. The controller has a short memory; it retains details about the current interaction and the guiding performance criterion but nothing about other chairs, users, or past experiences. This intentional limitation enhances the interactive experience, avoiding reliance on pre-existing data and predictions.
What performance criteria are we talking about? The most obvious might be pleasure, with the user evaluating each new position via a keyboard or joystick. Immediate feedback allows the controller’s neural net to learn, rewarding shapes that receive upvotes and punishing those that receive downvotes. But the performance criterion can be any value. Two such values were of particular importance to the Boston lab: the novelty of each interactive session and the new skills—including one’s ability to think and perceive—that the user learns from the experience.
What does this mean in practice? Let’s return to our subversive smart chair. Suppose one cushion combination forces the user into a yoga-like position they had never tried before—and they like it. They register their feelings via the controller’s interface. This, in turn, results in an even more exotic set-up that challenges them further. The controller doesn’t need to understand “novelty” or evaluate it; the user handles this interpretative work. As a result, the controller can start with a random combination of shapes and, through user interaction, arrive at a completely novel configuration.
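To make this interaction loop concrete, here is a minimal, purely illustrative Python sketch, not a reconstruction of Adaptronics’ controllers: every name, threshold, and the simulated “user” preference for novelty are invented for the example. It shows a controller that starts from a random cushion configuration, remembers only the current session, and lets upvotes and downvotes steer it toward novel shapes.

```python
import random

class SelfOrganizingChairController:
    """Toy sketch of an interaction-driven controller: it remembers only the
    current session and adapts cushion settings to the user's feedback,
    rather than predicting from data about other chairs or users."""

    def __init__(self, num_cushions=4, step=0.2):
        # Start from a random configuration -- no historical data involved.
        self.config = [random.random() for _ in range(num_cushions)]
        self.step = step
        self.session_memory = []  # forgotten once the session ends

    def propose(self):
        # Perturb the current configuration to explore nearby shapes.
        return [min(1.0, max(0.0, c + random.uniform(-self.step, self.step)))
                for c in self.config]

    def feedback(self, candidate, upvote):
        # The user, not the controller, judges the performance criterion
        # (pleasure, novelty, a new skill); the controller only adapts.
        self.session_memory.append((candidate, upvote))
        if upvote:
            self.config = candidate               # keep the rewarded shape
        else:
            self.step = min(0.5, self.step * 1.2)  # search more widely after a downvote


# One interactive session: the "user" here is simulated by a preference
# for configurations unlike anything tried earlier in this session.
controller = SelfOrganizingChairController()
tried = []
for _ in range(20):
    shape = controller.propose()
    novelty = min((sum(abs(a - b) for a, b in zip(shape, past)) for past in tried),
                  default=1.0)
    tried.append(shape)
    controller.feedback(shape, upvote=novelty > 0.3)

print("Final cushion configuration:", [round(c, 2) for c in controller.config])
```

The point of the sketch is the design choice it embodies: all the interpretative work—deciding whether a shape is pleasant or novel—stays with the user, while the controller retains nothing beyond the session it is in.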
The chair example might seem trivial, but it illustrates a principle underpinning most of our interactions with technology. Often we seek utilitarian efficiency: we just want to get things done. But there’s also space for a more comprehensive, engaged, and thoughtful approach.
In particular, generative intelligence—whether artificial, human, or hybrid—can transcend mere extrapolation from past trends. Instead of perpetually acting as stochastic parrots, these systems have the potential to become stochastic peacocks, embracing diversity and novelty as their core values. This shift can foster new perspectives and outlooks on the world, advancing beyond the confines of predictability and stability.
In the end, it’s not about building different smart chairs (or any other intelligent objects): it’s about radically revamping the philosophy behind their interfaces. The modernist interfaces of today promise—rather misleadingly—to put users in control by hiding the world behind their smart objects. The non-modernist interfaces of tomorrow would try to make that world fully visible, even if this results in additional friction between the user and the object.
To achieve this kind of ecologically sophisticated generative intelligence, we must liberate deep learning from Silicon Valley’s neoliberal straitjacket. It must be aligned instead with institutions that genuinely value more than just predictability and efficiency, however monetizable those qualities may be. To foster novelty and skill acquisition—and there are many other values we might prioritize—this will mean looking to educational and cultural institutions (including libraries and museums) traditionally part of the welfare state (at least in Nordic Europe) rather than the market. Perhaps entirely new institutions are needed.
To conceptualize them, we do need the kind of institutional and infrastructural imagination that Silicon Valley’s Panglossian neoliberalism seeks to suppress. Our world, with its post–Cold War traumas, is certainly not the best of all possible worlds. Alternatives exist, but we must recognize the need to seek them out.