The Brain Electric: The Dramatic High-Tech Race to Merge Man and Machine
Malcolm Gay
Farrar, Straus and Giroux, $26 (cloth)
Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots
John Markoff
Ecco, $26.99 (cloth)
The Soul of the Marionette: A Short Inquiry into Human Freedom
John Gray
Farrar, Straus and Giroux, $23 (cloth)
There are two kinds of technology critics. On one side are the determinists, who see the history of technology as one of inexorable progress, advancing according to its own Darwinian logic—the wheel, the steam engine, the autonomous car—while humans remain its hapless passengers. It is a fatalistic vision, one even the Luddite can find bewitching. “We do not ride on the railroad,” Thoreau said, watching the locomotive barrel through his forest retreat. “It rides upon us.” On the opposite side of the tracks lie the social constructivists. They want to know where the train came from, and also, why a train? Why not something else? Constructivists insist that the development of technology is an open process, capable of different outcomes; they are curious about the social and economic forces that shape each invention.
Nowhere is this debate more urgent than on the question of artificial intelligence. Determinists believe all roads lead to the Singularity, a glorious merger between man and machine. Constructivists aren’t so sure: it depends on who’s writing the code. In some sense, the debate about intelligent machines has become a hologram of mortal outcomes—a utopia from one perspective, an apocalypse from another. Conversations about technology are almost always conversations about history. What’s at stake is the trajectory of modernity. Is it marching upward, plunging downward, or bending back on itself? Three new books reckon with this question through the lens of emerging technologies. Taken collectively, they offer a medley of the recurring, and often conflicting, narratives about technology and progress.
While the constructivists have gained ground in scholarly circles in recent decades, a strain of determinism persists, particularly among those most animated about the future. In fact, the determinist history lessons of Ray Kurzweil, Ramez Naam, and Andy Clark seem to have become a stock feature of new books about technology. No exception is Malcolm Gay’s The Brain Electric: The Dramatic High-Tech Race to Merge Man and Machine, which traces the development of brain-computer interfaces (BCIs), electrodes surgically implanted in the brain. In an early chapter, Gay looks to history to assure us that BCIs are merely the latest instance of a very old trend: “In some essential sense, we’ve been enmeshing our lives with tools ever since Homo sapiens emerged from the hominid line some 200,000 years ago.”
Gay recalls the development of eyeglasses, pens, the spear, and the wheel—technologies all, lest we forget—noting that humanity resisted each of them. Writing was once feared as the scourge of civilization; the printing press was met with great hue and cry. The moral of the story is self-evident, but Gay is not above belaboring the point. “Innovations that were once suspiciously regarded as levelers of culture . . . are quickly absorbed into mainstream use.” Moreover, such implements become invisible to us over time. (We no longer consider writing high-tech.) Instead, we come to see such technologies as natural extensions of ourselves, to the extent that “they essentially disappear.” How far back can we go according to this logic? Gay’s history stretches all the way to the primordial soup. Borrowing Richard Dawkins’s idea of genetic “vehicles,” he notes that our bodies are simply constructions of protein built by our DNA as protective gear. We are technology, and technology is us.
One might argue that a word strained to accommodate so much no longer has any useful meaning, but this criticism would be beside the point. Gay’s story is meant as a palliative, and its popularity perhaps signals how uneasy the public remains about invasive technologies—those that would muddle the border between human and machine.
BCIs are only in the earliest stages of development. So far, humans equipped with these devices can perform a series of basic maneuvers. Quadriplegics have been able to feed themselves with thought-controlled prosthetic limbs; other research subjects have managed to control a computer cursor using only their minds. In one study, rats appeared to communicate with one another telepathically. These may be uncontroversial applications, but neuroscientists and engineers have larger ambitions.
One hero of Gay’s book is Eric Leuthardt, a neurosurgeon and entrepreneur who studied theology before realizing he had more terrestrial goals: to make “the world . . . operate by the force of my will.” Leuthardt founded NeuroLutions, a neuroprosthetics startup devoted to helping stroke patients restore hand function through the use of a robotic glove. But this technology is simply a beachhead, designed to demonstrate clinical relevance and shore up public approval. Ultimately, Leuthardt hopes that commercial BCIs will become as common as smartphones, allowing the human brain to interact wirelessly with lighting, climate control, vehicles, and the Internet. In this future, communication will be wordless and immediate. Implanted humans will share memories and experiences with other implanted humans without the clumsy machinery of language. Once the technology is in place, Leuthardt promises, “the world essentially becomes your iPad.”
Of course, BCIs raise a number of ethical questions. Consider the implications for data-mining. Once our brains are merged with the Web, will our thoughts be subject to the same vulnerabilities as our cell-phone data? Will our minds become available to third-party partners? Gay never addresses such questions, nor does he consider the possibility that people might reject such technology. Throughout the book, he tacitly affirms the conviction of the neuroscientists he covers: once the benefits of such devices are demonstrated, people will be unable to resist them.
This view seems naïve. After all, it wasn’t so long ago that the tech world was hailing Google Glass as a herald of the age of BCIs, but last year the product was sent back to development, in part due to privacy concerns. Such moments might be viewed as prudent opportunities—to discuss design issues, or debate what direction we want our technology to take. Yet determinists are inclined to see the blowback as merely reactionary. Sarah Slocum, the social media consultant who said she was attacked at a San Francisco bar for wearing Google Glass, shrugged off her critics. “Whenever there are new and emerging technologies, there is always going to be some resistance,” she said. “Some of the irony is that the people hating on me for wearing Google Glass are probably going to have a pair in six months or a year.” Early adopters are always on the winning side of history; everyone else will join in time. According to this logic, there are no good ideas or bad ideas, only forward-thinking ones. When Leuthardt claims that he wants to connect the brain to a digitized environment “for no other reason than I think it’s amazingly cool,” that’s as good a reason as any.
John Markoff, longtime technology writer for the New York Times, doesn’t buy the determinist narrative. For him, there is no blind watchmaker, and the trajectory of technological progress is far from certain: the future depends upon the ongoing decisions of designers and engineers. His new book Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots argues that the development of intelligent machines has long been divided by two conflicting visions. There are those who seek to enhance human powers (through intelligence augmentation, or IA) and those who desire to replace them (through artificial intelligence, or AI). The debate is still unresolved, and the stakes are high: “Those who design the systems that increasingly reshape and define the Information Age are making choices to build humans in or out of the future.” One example is Google’s development of self-driving cars. Initially, the company envisioned a vehicle that would aid the driver through Traffic Jam Assist and other elective autopilot features. But in development it proved more efficient simply to design humans out of the loop. The cars were stripped of their steering columns, and the model currently under development is more like an elevator, operating autonomously with very little human intervention.
Markoff detects in this story a larger significance, a kind of parable. “Driving was the original metaphor for interactive computing,” he notes, recalling the days when Stanford engineers first modeled how to “drive” through cyberspace, an activity over which the human was unquestionably in control. Increasingly, that vision has given way to a world of autonomous machines. As technology becomes more sophisticated, Markoff asks, will humans be the drivers or the passengers? The masters or the slaves?
Markoff avoids the more speculative implications of this question. He doesn’t mull over doomsday scenarios such as the “intelligence explosion”—a theoretical event in which computers become self-replicating, often cited as the ultimate stakes of the AI-versus-IA dichotomy. He is concerned with more immediate dangers, such as the displacement of white-collar jobs. His sympathies clearly lie with the augmenters—those who seek to keep humans “in the loop”—though he refuses to cast designers into the familiar roles of heroes and villains. Instead, his book calls for increased dialogue between the two camps: “It is essential that these two camps find a way to communicate with each other.” The prescription is essentially therapeutic: designers need to start talking to each other, and we—the media, the public—need to communicate our ethical concerns to the designers and engineers. Markoff notes that Valley types are typically uneasy with teleological questions, yet posing them is essential if we want to cultivate a robust public debate.
It’s hard to believe that a kumbaya remedy of this sort can have any real influence on a profit-driven sector. It’s especially difficult to believe when Markoff himself seems conflicted about the power of ethical considerations. His faith in public debate seems to waver when he recounts the rumored establishment of a Google “ethics board.” In 2014 the company acquired DeepMind Technologies, a startup that specialized in machine-learning algorithms, a prominent form of AI that some feared could evolve into self-replication. Because of the technology’s powerful implications, reports circulated that Google would establish a board of ethics “to evaluate any unspecified ‘advances.’” Over time, though, the measure turned out to be little more than a PR stunt, and Markoff concludes that safeguards of this sort are unlikely to be embraced by technology companies, stating baldly that “it will be truly remarkable if any Silicon Valley company actually rejects a profitable technology for ethical reasons.”
This sentiment resurfaces in the final pages of the book, where Markoff observes that the future of man-machine relations depends crucially on the economic system that engenders it. “In a capitalist economy, if artificial intelligence technologies improve to the point that they can replace new kinds of white-collar and professional workers, they will inevitably be used in that way.” This could have been the beginning of a more daring argument about the structural forces at work in the tech sector. But not a page later, he returns to his tidy prescription for renewed conversation, noting that the future of technology “lies in the very human decisions of engineers and scientists.” In the end, Markoff’s view is a tangle of competing sentiments—a call for spirited discussion alongside a countervailing doubt that it can trump the dictates of the market.
This reluctance to contend with the systemic has become a hallmark of tech criticism. Evgeny Morozov recently indicted the genre on these grounds. “There’s the trademark preoccupation with design problems,” he explains, “and their usually easy solutions, but hardly a word on just why it is that startups founded on the most ridiculous ideas have such an easy time attracting venture capital.” Perhaps this is why determinism remains a popular narrative; it’s all too easy to confuse its storyline with the sleights of the invisible hand. For most consumers—who learn about new technologies only when they brighten the windows of an Apple store or after they’ve already gone viral—it’s easy to imagine that technological progress is indeed dictated by a kind of divine logic; that machines are dropped into our lives on their own accord, like strange gifts from the gods.
Markoff borrows his title from a short poem by Richard Brautigan, the San Francisco counterculture poet. The opening stanza of “All Watched Over By Machines of Loving Grace” envisions a digital kingdom where humans and machines live together in prelapsarian harmony:
I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.
The poem was published in 1967, just a year before the inaugural issue of Stewart Brand’s Whole Earth Catalog, which popularized a cybernetic utopia. The catalog targeted hippie homesteaders and advertised products that could make Brautigan’s poetic idyll a reality—geodesic domes, personal computers, seeds, hoes. But the catalog’s most enduring product was its vision of the future, one where technology enabled humanity to transcend its baser nature and create a new world of communal and spiritual peace. The first issue famously opened with Brand’s declaration: “We are as gods and might as well get good at it.”
It was visions such as these that helped reframe computers—once feared as the dehumanized tools of Cold War technocracy—as mediums of collaboration, community, and even spiritual communion. This dream, now sublimated and streamlined, persists in startup manifestos and corporate handbooks. Its reverberations ring in the transcendental connotations of words like “connection” or “sharing” as well as in the language of corporate spokespeople, who engage earnestly in the vocabulary of good and evil. Airbnb is not merely a rental website; it is proof—as a recent ad has it—that humanity is good. “We believe in humanity,” said the company’s CMO, “and we’re putting that humanity and truth into the soul of our marketing.” Google—whose unofficial motto is “Don’t be evil”—courts a similar moral dualism. Drawing on the ancient notion of a fallen world, the company’s CFO has said that reality is fundamentally “broken” and can be redeemed by technology. This is quite a different view of technological expansion, one more ideological than either the determinist or constructivist frameworks. It adds a moral dimension to the narrative of progress. Technology not only drives history forward; it gradually refines human nature to make it better—or to uncover the good that was always there.
John Gray is not alone in identifying this cant as essentially religious, but he may be the first to trace its doctrinal origins. The Soul of the Marionette: A Short Inquiry into Human Freedom argues that many of us adhere unwittingly to a modern form of Gnosticism. This worldview, which predates Christianity and was eventually absorbed into it, sees the universe embroiled in a cosmic war between good and evil. The world was created by a false and evil god, but humans contain within them the fragments of a divine essence that come from the true god. Each person exists in this liminal state but can achieve spiritual communion and godhood through acquiring knowledge, or gnosis—the key that unlocks those divine sparks.
Among the modern-day Gnostics, says Gray, are the techno-futurists who believe that technology will usher in a state of spiritual perfection and emancipate us from our mortal forms. Many have contributed to this dubious gospel, but its chief prophet is Ray Kurzweil, who for several decades has been heralding the day when technological enhancement will facilitate unlimited knowledge, transforming humanity into an immortal and essentially divine super-race. “As we evolve,” he told an audience last fall at Singularity University, “we become closer to God. Evolution is a spiritual process. There is beauty and love and creativity and intelligence in the world—it all comes from the neocortex. So we’re going to expand the brain’s neocortex and become more godlike.”
Of course, there is a fundamental problem in ascribing to evolution any guiding telos, let alone one so mystical. As Gray put it several years ago in the Guardian, “Progress assumes some goal or direction. But evolution has neither of these attributes.” This idea is reprised in The Soul of the Marionette, but the book furnishes a darker interpretation of figures like Kurzweil. In the preachments of Humanity 2.0, Gray hears echoes of Lenin’s socialist utopia and Hitler’s superior race. Each of these ideologies, Gray contends, shares a vision of improved humanity coupled with willful blindness toward our enduring errors—“the inherent and incurable flaws of the human animal.” Though Gray is not religious, his views on human nature owe much to the Augustinian and Calvinistic visions of total depravity: an essence that is forever coming up short against the ideal.
The promise of techno-futurists rests on the premise that increased knowledge—in the form of intelligent machines—will liberate us from inherent human flaws. Gray is skeptical. Consciousness is inextricable from the subconscious, that abyss of irrationality, illusions, and paranoia that has persisted in humankind well into the age of reason.
When thinking machines first arrive in the world they will be the work of flawed, intermittently lucid animals whose minds are stuffed with nonsense and delusion. . . . Mutating under the pressure of entropy, the machines humans have invented will develop faults and flaws of their own. Soon they will no longer be aware of parts of their own minds; repression, denial and fantasy will cloud the empty sky of consciousness. . . . Eventually these half-broken machines will have the impression that they are choosing their own path through life.
Gray believes that Gnosticism in its modern incarnation can be traced back to the Enlightenment, which left us with a paradoxical inheritance. On the one hand, the scientific revolution dispelled the idea that man existed at the center of creation. On the other, it transformed the human agent into a subject of empirical study, a kind of technology to be bettered and perfected, opening up the potential for humanity to become godlike.
The upshot of Gray’s book is that the West has not fully reckoned with this disruption. We pay homage to secular values by daylight but still seek, in murky and subliminal ways, the old myths that preceded them. When the bulletins of science tell us that our lives are guided by nothing more than “matter’s aimless energy,” we still cling to a teleological interpretation of history, a hope that as time goes on we are becoming less violent, more prone to reason and peace. Such cheery views of existence, Gray believes, rest on a warped understanding of history and fail to appreciate the durability of our vexed nature. “Rather than trying to escape violence, human beings more often become habituated to it. History abounds with long conflicts—the Thirty Years War in early seventeenth-century Europe, the Time of Troubles in Russia, twentieth-century guerrilla conflicts—in which continuous slaughter has been accepted as normal.” These aren’t dark outliers of the human project, though. “Civilization and barbarism are not different kinds of society. They are found—intertwined—whenever human beings come together.”
In the end, Gray finds more wisdom in the ancients than he does in the techno-utopian imitations of religion. Pre-Christian ideas, including Greek tragedy and early Judaic literature, are superior to the myth of modern progress because they avoid the pitfall of viewing history as a process of redemption. The ancients “knew that civilizations rise and fall; what has been gained will be lost, regained and then lost again in a cycle as natural as the seasons.” They knew to find freedom by looking inward—a skill that we moderns have forgotten.
Gray may be the Cassandra of contemporary philosophy, but it’s not difficult to imagine his argument finding a sympathetic audience. If anything, the spate of recent acts of terrorism has made the West skeptical that technology could be a tool of global harmony. Yet in checking the relentlessly sunny attitude of the utopians, Gray may commit the obverse error of being relentlessly dark. “Divided against itself,” he writes, “the human animal is unnaturally violent by its very nature.” It’s difficult, even for a sympathetic reader, not to hear these grisly observations as reactionary, conjured by someone who sincerely believes that we live in a time “when any reference to the flaws of the human animal is condemned as blasphemy.”
Gray is more convincing when he calls attention to the way technology amplifies human nature rather than alters it. “Science enlarges what humans can do,” he has said; “It cannot reprieve them from being what they are.” As we watch the horizon for intelligent machines, we would do well to entertain the possibility that they will be made in our image.