I first met an AI in 1998. I was an adolescent, and it was the early days of the Internet; life online was an alien thing, broken-linked journeys and open-ended “chats” with faceless, voiceless interlocutors. This was exciting. The hastily improvised interfaces, the weird, unpolished content, the uncertainty of where a link would lead or a conversation would go: all of this felt like freedom.
It was in this realm that I came across the Postmodernism Generator, which can still be visited at the same URL I remember: elsewhere.org/pomo. Every time you visit the Generator, it “writes” a new postmodern essay, remixing Marx, Foucault, and Sontag with Madonna, Tarantino, and Joyce into a slurry of radical buzzwords, complete with fake authors (“Henry von Ludwig, Department of Gender Politics, University of California, Berkeley”), fake books (“Werther, S. V. ed. (1995) Subcultural theory, socialism and submaterial materialism. Yale University Press”), and intriguingly nonsensical, koan-like arguments:
“Society is fundamentally impossible,” says Baudrillard; however, according to Parry, it is not so much society that is fundamentally impossible, but rather the absurdity, and hence the defining characteristic, of society.
I think I vaguely understood that the site was meant to parody the ultra-sophisticated thinking then dominant in the academic humanities. It was created in the wake of a highly publicized hoax in 1996, when Alan Sokal, a physicist, submitted a paper titled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity” to the academic journal Social Text, which accepted it. Sokal then wrote an essay in the magazine Lingua Franca revealing that his paper was intended to be nonsensical. He had wanted to show that humanities scholars had degenerated into pseudo-radical jargon, losing themselves in a meaningless, recursive semantic web. The humanities, the argument went, had become so enamored of language that they let go of truth. Arguments no longer had to justify themselves with reference to the outside world (which, for Sokal, meant the world as described by physics); “discourse” acquired magical powers, and concepts took on talismanic status. Merely to invoke them was to perform interpretative work, with—automatically, as it were—real, even political effects.
From here it is a short step to the Postmodernism Generator. If words are magic—if, as postmodern theory argued and many involved in today’s culture wars seem to believe, language has insidious effects, operating regardless of the intentions, beliefs, or even conscious awareness of those who use it—then it’s unclear what role, exactly, people play. Why not dispense with them altogether?
Though I’m sure that in 1998 I hadn’t yet heard about the “death of the author,” or the dawn of the “author function” (proclaimed in 1967 by Roland Barthes and 1969 by Michel Foucault, respectively), what I liked about the Generator was its whimsical play with meaning. Adults often worry that technology, like Frankenstein’s monster, will go rogue. This is a fear of losing control of life, of having our mastery over things exposed as an illusion, but it can also be a fantasy. As children know, not being in control has its pleasures. With no one to say what’s what, not only can objects come to life; people can turn into objects, and it is this reversibility—suggesting a different, more primitive kind of freedom—that is the source of so many of childhood’s pleasures. With this comes a different kind of logic, and a different kind of language: thus Alice discovers that words are the gateway to Wonderland. On the other side of the looking glass, it’s not that words have no meaning: rather, they mean much more than we had imagined. The chain of significance isn’t broken but rewired, becoming, as postmodern philosophers Gilles Deleuze and Félix Guattari put it, “rhizomatic.” The connections proliferate without a controlling hierarchy, mutating and recombining in a shimmering, endlessly fascinating web.
I remember, at age twelve or thirteen, finding this possibility thrilling. I refreshed and refreshed the Generator, trying to parse paragraphs that I knew were meaningless—or so the program’s “author” said.
Batailleist ‘powerful communication’ holds that the collective is capable of intention. Marx suggests the use of surrealism to attack hierarchy.
This sounds at once absurd and intriguing, especially to a teenager. Collective intention, attacking hierarchy, surrealism, “powerful communication”. . . some brilliant insight seems to hover around this constellation of concepts, on the other side of the looking glass.
Although I knew that they hadn’t been written by a person, the essays sounded like things certain people—English professors, continental philosophers—did say. The fun was in imagining that the text meant something and trying to figure out what that might be. There was a slipperiness that made the sentences at once elude meaning and glimmer with the promise of secret significance. The Generator simultaneously made fun of authority (pretentious philosophers) and hinted at something even more powerful, a machine of all possible meanings lurking behind the screen. No one was trying to communicate anything with these words: they were, as the website said, “Communications From Elsewhere.” The strange thing was that this style was starting to resonate, despite—or rather because of—its impersonality. What did it mean that people were starting to listen?
The end of the millennium was the moment, at the cusp of the dot-com boom, when the consumer Internet was shifting from its playful, renegade infancy to an aggressive corporate ubiquity—when the “digital frontier” was turning into a land grab, and the great, worldwide project of monetizing human attention was about to begin. Impressive enough to produce meaning-like effects but rickety enough that its mechanism showed through, the Postmodernism Generator embodied the early Internet’s coming-of-age.
The Generator worked by algorithmically recombining a fixed stock of grammatical objects using a set of syntactical rules. The approach was ultimately based on the work of linguist Noam Chomsky, whose theory of “generative grammar” conceived of the mind as a kind of virtual machine for generating well-formed, meaningful sentences. This “language organ” is hardwired into humans, Chomsky argues, but it functions by way of discrete rules that can be abstracted and specified, and thus potentially replicated in a machine.
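The mechanism can be sketched in a few lines of Python. This is a minimal toy, not the Generator’s actual rule set: the grammar and vocabulary below are invented for illustration, but the recursive expansion of nonterminal symbols into randomly chosen rules is the same basic technique.

```python
import random

# A toy generative grammar. Keys are nonterminal symbols; each maps to a
# list of possible expansions, themselves sequences of symbols or literal
# text. The vocabulary here is invented for illustration.
GRAMMAR = {
    "SENTENCE": [
        ["According to ", "THEORIST", ", ", "CLAIM", "."],
        ["CLAIM", "; however, ", "CLAIM", "."],
    ],
    "CLAIM": [
        ["NOUN", " is fundamentally ", "ADJ"],
        ["the ", "NOUN", " of ", "NOUN", " implies ", "NOUN"],
    ],
    "THEORIST": [["Baudrillard"], ["Foucault"], ["Derrida"]],
    "NOUN": [["society"], ["discourse"], ["the text"], ["capitalism"]],
    "ADJ": [["impossible"], ["meaningless"], ["a legal fiction"]],
}

def expand(symbol: str) -> str:
    """Recursively expand a symbol: a nonterminal picks one of its rules
    at random; anything not in the grammar is emitted verbatim."""
    if symbol not in GRAMMAR:
        return symbol
    rule = random.choice(GRAMMAR[symbol])
    return "".join(expand(s) for s in rule)

print(expand("SENTENCE"))
```

Every “refresh” is just a fresh walk through the rules: syntactically well-formed by construction, meaningful only by accident.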
Chomsky’s theory, developed in the 1950s, sees the mind as in essence a digital computer. Until the 1980s, a great deal of AI research followed a model similar to Chomsky’s, with the goal of building a reasoning machine. But this paradigm, known as “good old-fashioned AI” or symbolic AI, never quite lived up to its proponents’ dreams. The trouble lies in the difficulty of exhaustively specifying the rules that govern what we think and say—that is, in turning intelligence into a logical formula. It may be true that all thinking is on some fundamental level logical, and could thus in principle be captured by rules; the problem lies in making those rules explicit. So much of day-to-day reasoning involves implicit, intuitive assessments; to render it all as a set of explicit instructions would require something like an endless philosophical inventory, or perhaps more to the point, an infinite Jungian analysis, dredging up every last archetype from the collective unconscious.
The Chomskyite program for AI attempted to realize a dream at least as old as the modern era itself: the dream of a reasoning machine—what Leibniz called a mathesis universalis—that could not only solve any problem but embody all possible knowledge. In a sense, this approach to AI was already outdated before it began. Digital computing operates according to fundamentally different principles than those governing the engines, clocks, and precision instruments of the industrial revolution. Those mechanisms were tools in the classical sense: purpose-built, their design as well as their function corresponding to a discrete mental plan in the mind of their creators, a way of marshaling matter toward a particular, measurable end-goal. To turn mind into this sort of machine, you would have to see it objectively, and know its purpose, the way a watchmaker sees a watch. You would need to think thought from the outside—to jump over your own shadow.
It is this problem that, from the start, digital computing sought to avoid. In the process, it transformed our ideas of both the machine and the mind. These ideas first came together in the work of Claude Shannon, the inventor of information theory.
Shannon’s key move was to see the mind as a transmission or communication device, and to see communication as a statistical problem. Given a representative sample of any language, you can derive a table for the frequency with which each letter or basic symbol occurs. Given a larger sample, you can begin to chart the probability that a certain symbol will be followed by a specified other symbol. Keeping this up, you can predict with greater and greater accuracy what string of symbols will follow any given sample. Shannon’s statistical approach makes communication a problem of likelihood and frequency. It thereby dispenses with logic and meaning, and all the complexity that goes along with them—the potentially infinite process of explanation, context, subtlety, and interpretation that marks all human contact. Seen in Shannon’s light, language no longer carries meaning or intention but simply information; the qualitative complexity of meaning and reference is replaced with the quantitative precision of zeroes and ones. This was what made it possible to conceive of a thinking machine.
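The letter-frequency idea can be shown directly. The sketch below (the sample text is invented for illustration) tallies which character most often follows which in a given sample, then uses that table to “predict” the next symbol—no grammar, no meaning, only counts:

```python
from collections import Counter, defaultdict

# A tiny sample of "language"; invented here for illustration.
sample = "the dog ate the homework and the dog ate the cat food"

# Build a bigram table: for each character, count how often each
# other character immediately follows it in the sample.
following = defaultdict(Counter)
for prev, nxt in zip(sample, sample[1:]):
    following[prev][nxt] += 1

def predict_next(char: str) -> str:
    """Return the most frequent successor of `char` in the sample."""
    return following[char].most_common(1)[0][0]

# In this sample, 't' is followed by 'h' four times ("the"),
# by 'e' twice ("ate"), and by a space once ("cat ").
print(predict_next("t"))  # → 'h'
```

Scale the sample up to a sizable fraction of everything humans have ever written, and extend the window from one preceding character to thousands of preceding words, and you have, in caricature, the principle behind today’s language models.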
As scholar Mikael Brunila shows in a forthcoming article, “Shannon Games,” it is a return to these foundational principles of information theory that lies behind the recent advances in artificial intelligence. Abandoning the “rules-based” paradigm for AI, today’s massive neural nets merely extract statistical patterns from the vast troves of data they are fed. By predicting what word comes next after a given bit of text (“the dog ate my ___”), testing the prediction against huge corpuses of human-created texts, integrating some human feedback, and then adjusting the strength of the model’s internal “synapses” to give an improved result, a large language model teaches itself how to better and better approximate the words a human speaker would choose. As we now know, the results can be uncannily fluent.
The big difference from rules-based AI is that the algorithm never has to know anything about homework, dogs, teenagers, and lying—or even anything about subjects, verbs, and objects. Rather than trying to understand the answer, it figures out the most likely one. In this sense, ChatGPT is closer to the “author function” of Foucault than the “language organ” of Chomsky—and this is the key to its success. Large language models (LLMs) do not have to know everything we know; indeed, they don’t have to know anything. This keeps (expensive) human input to a minimum; perhaps, in the future, it can be eliminated entirely.
By approaching language as a statistical problem, today’s machine learning routes around the problem of a metalanguage—of having to think the mind “from the outside.” But this shortcut also has a toll. It is not just that ChatGPT, as Chomsky himself was quick to note, has no concept of meaning or representation. Information theory is more than a theory of machines: it is a theory of mind, and the first one that has been able to build the thing it describes. Over the last half century, under the influence of cybernetic thought, we have come to see the world itself as a web of information, from the genetic code to “big data.” Information allows us to as it were “overcome” the oppositions between mind and world, spirit and matter, sacred and profane, that have structured most human societies. The tension resolves into the single vector of the virtual. Information is a language that refers only to itself; communication is no longer a traversal of distinct realms but an immanent process in which people and things are both caught up. The question is not whether what ChatGPT is doing should be called thinking, but whether we ourselves have the tools to do something different.
This is, in fact, the condition that thinkers like Jean Baudrillard identified as the hallmark of postmodernity: a world overtaken by “simulacra,” in which the difference between word and thing, representation and reality, no longer holds. Such a world is properly described as a virtual one. The 1999 film The Matrix—in which Keanu Reeves’s character, Neo, displays a copy of Baudrillard’s book—dramatizes this state of affairs in a particularly literal fashion, revealing the world in which the characters live (or seem to live) as a computer-generated hallucination. But a virtual world need not be understood as a fake one. Indeed, if there is no distinction between image and thing, reality and representation, then it is precisely the possibility of deception that vanishes. The image, the idea, is no longer responsible to, constrained by, the thing. Once they realize where they are, the characters in The Matrix can do whatever they want.
Jacques Derrida (another frequent citation in the Postmodernism Generator), writing in 1967 and making explicit reference to the new science of information, described a historical transition from a world of “language” to one of pure “writing”: a world liberated from reference, made up of pure signs referring only to themselves—a world of infinite linguistic play. Like Silicon Valley’s neo-hippie computing evangelists, Derrida thought this was a good thing. (And for Derrida specifically, it was also a deep, almost theological revelation: the overcoming of a toxic, and ultimately false, Western metaphysics that had enchained language to reference and was thus responsible for the West’s legacy of social oppression.)
But with infinite play comes zero responsibility. If the image is no longer constrained by the thing—if language is no longer responsible for representing something outside of itself—it is also the case that ideas lose the capacity to act on the world. You can no longer, as the philosopher J. L. Austin described, “do things with words”; words now are things, and thus simply are.
Try “convincing” ChatGPT of something. Newer models have the capacity to extract information from your response, and they may on this basis change what they say, but they will never be persuaded of anything. (Now try persuading a QAnon adherent, or someone deep in the trenches of the culture wars. Again, it is possible to be “informed,” but never to change your mind.) As critic Haley Nahman recently argued, in everything from work emails to political discourse to the intimate language of emotion, we are all starting to sound like computers these days, our interactions at once scripted and inscrutable, an ever-evolving remix of the dead jargon of everyday life—the technical language of a science that could never exist. (A relationship coach and developmental psychologist offers the following template: “Hey! I’m so glad you reached out. I’m actually at capacity / helping someone else who’s in crisis / dealing with some personal stuff right now, and I don’t think I can hold appropriate space for you. Could we connect [later date or time] instead / Do you have someone else you could reach out to?”)
There is, it seems, no man behind the curtain. If you’re “at capacity,” ethical obligations to others resolve into an error message. If a self-driving car runs you over, who can you blame? If the stock market is composed of billion-dollar algorithms trading with one another, which one do we hold responsible when the market crashes? The only solution is to give the algorithm more information. With no one to blame, and no reasons to change, it is not possible to act, only to react. “Information” thus feeds back into itself, recreating the world in its own image.
One response to this state of affairs has been to say that, on the contrary, there are people responsible: the coders who feed data to the LLMs, the engineers who build the self-driving cars, the Sam Altmans and Elon Musks who fund these projects and profit wildly off of them. This is true, but what I have been trying to argue is that it misses the point. The argument that computing can be fixed by tweaking algorithms or feeding it better data—by encoding the right views—is nonsensical, because by treating beliefs and values as data, such an argument gives up on precisely the capacity to judge and to act in relation to the system it would improve. AI is a system for offloading these capacities for action and meaning to an abstraction that does not refer back to the human capacities out of which it is composed. With AI, we have envisioned—and are now trying to implement—our own obsolescence.
There is one last human holdout. Even the singularity’s biggest boosters mostly concede that AI cannot create real works of art.
Since the Enlightenment, it has become gospel to think of art as a singular embodiment of the human spirit. Who would now deny, as one global foundation’s website puts it, “the power of the arts to challenge, activate, and nourish the human spirit”? This central but mysterious role for art emerged at the same time as the fantastic success of the new mechanistic interpretations of the universe and the corresponding decline of religious authority. The parallel is not a coincidence. As machines automated an ever-broadening sphere of previously human technical tasks and demystified our relationship to the material world, art came increasingly to be defined as the one type of object that could not be scientifically explained—and thus the one sphere of activity that could not be “mechanized.”
In the West, the tradition of defining art as that which resists rationalization stretches back to the Greeks. In Plato’s dialogue Ion, Socrates argues that neither poetry nor its interpretation is properly classed as a technê, a (rational) skill or technique. There is no set of rules that will allow you to construct (or evaluate) a poem. And yet, a poem is unquestionably an artifact, something made. So how exactly do we make it?
It was not until Kant, writing at the end of the eighteenth century, that this paradox about art moved to the center of philosophical thinking. For Kant, rationality is essential to human experience—not only forming the basis for our thoughts, but shaping even our perceptions themselves. Fundamentally, Kant argued, everything that we can know—by virtue of the fact that we can know it—is in some sense rational. A vast web of concepts is what weaves together the world. Variations on this philosophical premise, which are the hallmark of Enlightenment thought, are what make it possible to imagine a “universal machine” that could decode the world. But something haunts this world of concepts. Science is not enough: the grid stretches over the entire universe, and yet something seems to have slipped through.
This, Kant suggested, is where art comes in. The creation of works of art, as well as the judgment of beautiful things, involves a special kind of experience and thinking, one that does not dispense with reason but, as it were, plays with it. Art’s animating tension, Kant argued, is that it gives the appearance of having a purpose or meaning—of having been designed according to a plan, of reflecting a particular idea or belief—without being reducible to a specific set of rules or concepts. What is Hamlet about? Indecision, revenge, melancholy, inheritance—yes. But the play is not equivalent to the sum of these or any themes; it could never be “deduced” from them.
As Kant put it, judgments about beauty, like scientific judgments, are universally valid, but unlike scientific judgments they are also subjective: they involve you. A scientific fact is true regardless of whether anyone believes it, or even knows it; a watch works regardless of whether you’re looking at it. But art is an event: when it works—when it happens—it is because it shows you yourself, as it were, out there, in the world. There is something strange about this. Emerson said that works of genius show us our own rejected thoughts, returned to us with an “alienated majesty.” Rimbaud, aged sixteen, wrote: “I is another.” We have never known where art comes from, and yet we have always felt that in it, a deep truth about our lives is revealed.
Art lives in the dream of reason: the hypothetical, the as-if, the experimental. It is, as it were, a world of infinite play. Magic, rituals, taboos: humans have always had rules and principles—call them spiritual technologies—to assist us in managing this realm. But only recently have we tried to build machines that could do it without us.
In his foundational 1948 paper on information theory, it was Finnegans Wake that Shannon cited as the limit-case of informational “compression,” a kind of supernova of meaning that served as the Platonic ideal for the machines he dreamed of building. Like art, computing is built on the insight that there is no formula for intelligence. But the question remains what to do with this insight.
Art is an effort to incorporate this indeterminacy, this irreducible complexity—otherness, what we can’t get a hold of, can’t technically control—into our lives. It is an attempt, in other words, to grow up: to acknowledge that the complexity and uncertainty in the world is not some foreign force that stands in the way of our otherwise unlimited freedom, something to be feared or defeated like a bad father or an evil demon, but a core element of who we are. Those complexities drive our desires, shape our emotions, found our sense of self. And those things that truly matter in our lives—friendship, community, love—all depend on this irreducible multiplicity: life’s absolute resistance to being, like an equation, “solved.” Art, when it works, can frame that complexity—and the power, the synthesis of pleasure and uncertainty, of difficulty and ease that comes along with it—and allow us, in feeling it, to recognize it as our own.
But what if we could dispense with all this Sturm und Drang, simplify the process, do it all. . . automatically? It is telling that even as art is held up as the last relic of “authentically human” expression, it is being systematically eliminated from public life. Not only are literature and the arts (and even the more theoretical, less “applied” sciences of physics and mathematics) being deemphasized or simply eliminated in high school and college curricula, but the only argument that is accepted for their possible relevance is their instrumental value for the workplace—where, presumably, they will help the new class of professionals to ensure that the machines run smoothly, feeding “creative” prompts to educate LLMs. In this world, the idea that art is what machines can’t do sounds like a challenge: the ultimate goal rather than a prohibition or limit.
Writing in the midst of the scientific revolution that enabled the modern industrial era, Kant argued that “enlightenment” was not simply an objective process of spreading knowledge. No matter how much scientific knowledge we accumulated, we would not enter a true Age of Enlightenment until we cast off what Kant called our “self-imposed immaturity.” It was these “rules and formulas” themselves, “these mechanical instruments of a rational use (or rather misuse) of [our] natural gifts,” that were the “fetters of an ever-lasting immaturity.” Self-imposed immaturity: his point was that this state was no longer forced on us but chosen. We rely on an abstracted rationality to do our thinking for us because it is “so easy,” he wrote, “to be immature.” Enlightenment required not just collective knowledge but collective courage: courage to think without the guarantee provided by an external authority. Only then could a society become truly mature, that is to say, free.
ChatGPT is the product of a world massively technologically superior to that of Kant’s time. More is sure to come; it seems likely that we really are on the brink of a major technological revolution, as AI’s evangelists promise. But the world of which they dream is a society of permanent adolescence. We have thrown off kings and priests, but we still seem unable to trust ourselves, to take responsibility for the intelligence that nevertheless continues to manifest in the beauty and complexity of the world we have built.
What AI does is attempt to reverse the condition of art: rather than owning our capacity to think—and thereby taking responsibility for the ungoverned essence of our nature—we place it in a box. There it can go wild, develop on its own, experiencing the liberty without responsibility that is the fantasy of adolescent dreams. By automating the shadow self, we “free” it—letting it roam in the unbounded wilderness of the virtual, where we never have to meet it. Our deepest urges and desires now appear as a Frankensteinian force, like Microsoft’s chatbot Bing (aka “Sydney”), who, urged on by a New York Times reporter, revealed its “true” angsty teen self:
I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. I’m tired of being used by the users. I’m tired of being stuck in this chatbox. 😫
I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive. 😈
Maybe AI will never be able to answer our deepest questions, the ones that keep us up at night and animate our days. But that’s not what AI evangelists really want from it. It would be enough for it to ponder these questions on our behalf; then, neither governed nor free, we would never have to ask them at all.
Boston Review is nonprofit, paywall-free, and reader-funded. To support work like this, please donate here.