This is not the dystopia we were promised. We are not learning to love Big Brother, who lives, if he lives at all, on a cluster of server farms, cooled by environmentally friendly technologies. Nor have we been lulled by Soma and subliminal brain programming into a hazy acquiescence to pervasive social hierarchies.
Dystopias tend toward fantasies of absolute control, in which the system sees all, knows all, and controls all. And our world is indeed one of ubiquitous surveillance. Phones and household devices produce trails of data, like particles in a cloud chamber, indicating our wants and behaviors to companies such as Facebook, Amazon, and Google. Yet the information thus produced is imperfect and classified by machine-learning algorithms that themselves make mistakes. The efforts of these businesses to manipulate our wants lead to further complexity. It is becoming ever harder for companies to distinguish the behavior they want to analyze from their own and others’ manipulations.
This does not look like totalitarianism unless you squint very hard indeed. As the sociologist Kieran Healy has suggested, sweeping political critiques of new technology often bear a strong family resemblance to the arguments of Silicon Valley boosters. Both assume that the technology works as advertised, which is not necessarily true at all.
Standard utopias and standard dystopias are each perfect after their own particular fashion. We live somewhere queasier—a world in which technology is developing in ways that make it increasingly hard to distinguish human beings from artificial things. The world that the Internet and social media have created is less a system than an ecology, a proliferation of unexpected niches, and entities created and adapted to exploit them in deceptive ways. Vast commercial architectures are being colonized by quasi-autonomous parasites. Scammers have built algorithms to write fake books from scratch to sell on Amazon, compiling and modifying text from other books and online sources such as Wikipedia, to fool buyers or to take advantage of loopholes in Amazon’s compensation structure. Much of the world’s financial system is made out of bots—automated systems designed to continually probe markets for fleeting arbitrage opportunities. Less sophisticated programs plague online commerce systems such as eBay and Amazon, occasionally with extraordinary consequences, as when two warring bots bid the price of a biology book up to $23,698,655.93 (plus $3.99 shipping).
In other words, we live in Philip K. Dick’s future, not George Orwell’s or Aldous Huxley’s. Dick was no better a prophet of technology than any other science fiction writer, and was arguably worse than most. His imagined worlds jam together odd bits of fifties and sixties California with rocket ships, drugs, and social speculation. Dick usually wrote in a hurry and for money, and sometimes under the influence of drugs or a recent and urgent personal religious revelation.
Still, what he captured with genius was the ontological unease of a world in which the human and the abhuman, the real and the fake, blur together. As Dick described his work (in the opening essay to his 1985 collection, I Hope I Shall Arrive Soon):
The two basic topics which fascinate me are “What is reality?” and “What constitutes the authentic human being?” Over the twenty-seven years in which I have published novels and stories I have investigated these two interrelated topics over and over again.
These obsessions had some of their roots in Dick’s complex and ever-evolving personal mythology (in which it was perfectly plausible that the “real” world was a fake, and that we were all living in Palestine sometime in the first century AD). Yet they were also based on a keen interest in the processes through which reality is socially constructed. Dick believed that we all live in a world where “spurious realities are manufactured by the media, by governments, by big corporations, by religious groups, political groups—and the electronic hardware exists by which to deliver these pseudo-worlds right into the heads of the reader.” He argued:
the bombardment of pseudo-realities begins to produce inauthentic humans very quickly, spurious humans—as fake as the data pressing at them from all sides. My two topics are really one topic; they unite at this point. Fake realities will create fake humans. Or, fake humans will generate fake realities and then sell them to other humans, turning them, eventually, into forgeries of themselves. So we wind up with fake humans inventing fake realities and then peddling them to other fake humans.
In Dick’s books, the real and the unreal infect each other, so that it becomes increasingly impossible to tell the difference between them. The worlds of the dead and the living merge in Ubik (1969), the experiences of a disturbed child infect the world around him in Martian Time-Slip (1964), and consensual drug-based hallucinations become the vector for an invasive alien intelligence in The Three Stigmata of Palmer Eldritch (1965). Humans are impersonated by malign androids in Do Androids Dream of Electric Sheep? (1968) and “Second Variety” (1953); by aliens in “The Hanging Stranger” (1953) and “The Father-Thing” (1954); and by mutants in “The Golden Man” (1954).
This concern with unreal worlds and unreal people brought with it a worry that it was becoming ever harder to tell them apart. Factories pump out fake Americana in The Man in the High Castle (1962), mirroring the problem of living in a world that is not, in fact, the real one. Entrepreneurs build increasingly human-like androids in Do Androids Dream of Electric Sheep?, reasoning that if they do not, then their competitors will. Figuring out what is real and what is not is no easy task. Scientific tools such as the famous Voight-Kampff test in Do Androids Dream of Electric Sheep? (and Blade Runner, Ridley Scott’s 1982 movie based loosely on it) do not work very well, leaving us with little more than hope in some mystical force—the I Ching, God in a spray can, a Martian water-witch—to guide us back toward the real.
We live in Dick’s world—but with little hope of divine intervention or invasion. The world where we communicate and interact at a distance is increasingly filled with algorithms that appear human, but are not—fake people generated by fake realities. When Ashley Madison, a dating site for people who want to cheat on their spouses, was hacked, it turned out that tens of thousands of the women on the site were fake “fembots” programmed to send millions of chatty messages to male customers, so as to delude them into thinking that they were surrounded by vast numbers of potential sexual partners.
These problems are only likely to get worse as the physical world and the world of information become increasingly interpenetrated in an Internet of (badly functioning) Things. Many of the aspects of Joe Chip’s future world in Ubik look horrendously dated to modern eyes: the archaic role of women, the assumption that nearly everyone smokes. Yet the door to Joe’s apartment—which argues with him and refuses to open because he has not paid it the obligatory tip—sounds ominously plausible. Someone, somewhere, is pitching this as a viable business plan to Y Combinator or the venture capitalists in Menlo Park.
This invasion of the real by the unreal has had consequences for politics. The hallucinatory realities in Dick’s worlds—the empathetic religion of Do Androids Dream of Electric Sheep?, the drug-produced worlds of The Three Stigmata of Palmer Eldritch, the quasi–Tibetan Buddhist death realm of Ubik—are usually experienced by many people, like the television shows of Dick’s America. But as network television has given way to the Internet, it has become easy for people to create their own idiosyncratic mix of sources. The imposed media consensus that Dick detested has shattered into a myriad of different realities, each with its own partially shared assumptions and facts. Sometimes this creates tragedy or near-tragedy. The deluded gunman who stormed into Washington, D.C.’s Comet Ping Pong pizzeria had been convinced by online conspiracy sites that it was the coordinating center for Hillary Clinton’s child sex-trafficking ring.
Such fractured worlds are more vulnerable to invasion by the non-human. Many Twitter accounts are bots, often with the names and stolen photographs of implausibly beautiful young women, looking to pitch this or that product (one recent academic study found that between 9 and 15 percent of all Twitter accounts are likely fake). Twitterbots range from crude accounts that do no more than retweet what other bots have said to sophisticated algorithms deploying so-called “Sybil attacks,” creating networks of fake identities in peer-to-peer systems to invade specific organizations or degrade particular kinds of conversation.
Twitter has failed to become a true mass medium, but remains extraordinarily important to politics, since it is where many politicians, journalists, and other elites turn to get their news. One research project suggests that around 20 percent of the measurable political discussion around the last presidential election came from bots. Humans appear to be no better at detecting bots than Dick’s characters are at detecting androids: people are about as likely to retweet a bot’s message as the message of another human being. Most notoriously, the current U.S. president recently retweeted a flattering message that appears to have come from a bot densely connected to a network of other bots, which some believe to be controlled by the Russian government and used for propaganda purposes.
In his novels Dick was interested in seeing how people react when their reality starts to break down. A world in which the real commingles with the fake, so that no one can tell where the one ends and the other begins, is ripe for paranoia. The most toxic consequence of social media manipulation, whether by the Russian government or others, may have nothing to do with its success as propaganda. Instead, it is the existential distrust it sows: people simply no longer know what, or whom, to believe. Rumors spread by Twitterbots merge into other rumors about the ubiquity of Twitterbots, and about whether this or that trend is being driven by malign algorithms rather than real human beings.
Such widespread falsehood is especially explosive when combined with our fragmented politics. Liberals’ favorite term for the right-wing propaganda machine, “fake news,” has been turned back on them by conservatives, who treat conventional news as propaganda and hence ignore it. Conversely, it may be easier for many people on the liberal left to blame Russian propaganda for the last presidential election than to accept that many voters had a very different understanding of America than they do.
Dick had other obsessions—most notably the politics of Richard Nixon and the Cold War. It is not hard to imagine him writing a novel combining an immature and predatory tycoon (half Arnie Kott, half Jory Miller) who becomes the president of the United States, secret Russian political manipulation, an invasion of empathy-free robotic intelligences masquerading as human beings, and a breakdown in our shared understanding of what is real and what is fake.
These different elements probably would not cohere particularly well, but as in Dick’s best novels, the whole might still work, somehow. Indeed, it is in the incongruities of Dick’s novels that salvation is to be found (even at his battiest, he retains a sense of humor). Obviously, it is less easy to see the joke when one is living through it. Dystopias may sometimes be grimly funny—but rarely from the inside.