The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence
Petra Molnar
The New Press, $28.99 (cloth)

In an interview in May, the head of the Israeli military intelligence’s targeting division responded to outrage over the civilian death toll in Gaza by boasting that algorithmic targeting systems had scaled up the army’s killing capacity. “This is an unprecedented event in the history of modern armies—that the army does not lack something to attack,” the colonel told an Israeli news outlet. “The quantity barrier has been broken.”


Military heads have been saying AI is the future of warfare for more than two decades. Slowly but surely, defense and security operations across much of the world have been outsourced to firms capable of churning out ever more advanced weapons systems and intrusive surveillance technologies. Generals praise Silicon Valley conglomerates for providing the computing infrastructure and AI systems central to their military arsenals. Tech founders sound like military strategists, promising that unfettered innovation will deliver geopolitical domination.

Beneath the slogans lies devastation. The unending wars and fortified borders fracturing much of the world have created lucrative testing grounds for the private firms tinkering with defense and security technologies. Venture capitalists scrolling through pitch decks of products seemingly lifted from blockbuster thrillers are rapidly cashing in. According to a Dealroom report released in late September, investment in defense tech startups is up 300 percent since 2019 in NATO countries; funders injected $3.9 billion into the industry this year alone. International relations experts Michael Brenes and William Hartung say we are on the verge of “a profit-driven rush toward a dangerous new technological arms race.” But it is more like a crowd crush—one that’s been ramping up insecurity across most of the world for a while now.

Petra Molnar’s new book The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence offers an expansive account of how this global arms race is intensifying already violent homeland security and border regimes. Trained as an anthropologist and lawyer, Molnar shepherds families through militarized border zones, litigates against incarceration and deportation in international courts, and shadows human rights advocates in their on-the-ground efforts. Part investigative ethnography and part exercise in moral critique, the book reveals the human face trampled by the unrestrained development of new surveillance technologies and weapons systems. Released seven months into Israel’s war on Gaza—a protracted siege abetted by these very systems—The Walls Have Eyes is also tragically timely. It shows that many of the technologies used by Israel’s military are prevalent worldwide, not least because unending war in the region offers a prime testing ground for international and Israeli firms alike. And it details the harm visited upon millions of migrants who risk their lives crossing militarized border zones or are detained in high-tech prisons.

While tracking this devastation is essential, we must also be clear-eyed about the forces responsible for it. As private-sector players tout networked warfare and automated surveillance as the only road to security, Molnar’s case studies expose an ironic pattern: most of these systems have failed to deliver on precisely that promise. Behind claims we have entered a radically new era of technological warfare lie the same old human actors, seeking power and profit.


Molnar takes us on a world tour of the border zones where technology firms and government agencies try out tech’s latest innovations in monitoring, enclosing, and killing. We move from the Sonoran Desert to the West Bank, where drones, license plate readers, and facial recognition cameras fuel military and paramilitary violence; to Kenyan cities, where the private technology sector helps determine who should be detained at checkpoints; and to Canadian asylum courts and migrant detention centers in Greece, where boutique firms churn out biometric and voice recognition systems that rationalize deportation. As Molnar puts it, “what we are really talking about when we talk about borders is a human laboratory of high-risk experiments.”


Molnar has spent years working as a migration lawyer across North America, Europe, and parts of the Middle East on behalf of communities bearing the brunt of this experimentation. The Walls Have Eyes hinges on her efforts to help migrants navigate increasingly lethal border crossings. She drives overnight to pick up a father and his fourteen-year-old son near the Greek border with Turkey, sneaks out soil samples from detention centers to contest inhumane conditions, and meets with rescue workers at the Poland-Belarus border who are forced to operate in secret. The book is a valuable accounting of high-stakes migration journeys at a time when human rights advocates and journalists face mounting and increasingly punishing obstacles—including the weaponization of anti-smuggling and counterterrorism laws and other forms of harassment and intimidation. As immigration attorney Lauren Carasik wrote in these pages after a U.S. Department of Homeland Security database of activists and reporters was leaked in 2019, “For those working to illuminate the plight of migrants and to protect their rights, being singled out takes an undeniable toll.” Molnar’s study courageously writes against these desperate conditions.

Borders are fortified as much by technologies and weapons as by the narratives of dehumanization and exclusion that accompany them. The Walls Have Eyes deconstructs these narratives, zeroing in on how ever more stringent migration regimes are suffocating those striving to stake out a better life elsewhere. Each chapter is bookended with stories from the ground. We meet Mariam Jamal, a digital rights advocate tracking how data collection is fracturing families and disenfranchising workers across Kenya, and Zaid Ibrahim, a Syrian who manages to reach Europe from Turkey after four prior attempts were met with live bullets, police dogs, and armed security forces. These stories offer a stylistic intervention as much as a theoretical one—a way of writing against a conversation that tends to lean on abstract statistics rather than the concrete lives at stake.

This aspect of the book is sometimes overshadowed, however, by an emphasis on technology as the key factor in all the violence Molnar documents. She tells us the book will not provide a taxonomy of a booming homeland security market, yet at many points her storytelling veers into exactly that kind of catalog—long lists of the systems saturating borderlands. Surveillance towers, drones, and militarized police forces proliferate; migrants seeking refuge are subjected to biometric monitoring, weeded out with predictive policing software or spyware, or denied entry by AI-powered lie detector tests.

This material gives texture to the book’s subtitle, but in one key respect the frame is misleading. Many of these technologies do not run on AI, and Molnar never really explains how the others do. Yet the technology itself is not the driving force behind the repressive border regimes the book so powerfully documents. The profit-seeking imperative of the private market—buoyed by lucrative partnerships with governments and militaries—is what has made borders all the more dangerous. Over the past two decades, corporate CEOs and startup founders have found allies in politicians eager to embrace exclusionary policies and brutal military strategies; together, they are the forces that have entrenched insecurity across much of the world.

Indeed, borders have always been sites of exclusion, detention, and death. What’s relatively new is a massive industry promising to make border enforcement more effective and more precise, an industry that proliferated at the turn of the millennium as the United States’ “global war on terror” dovetailed with a second dot-com boom. The U.S.-Mexico border and the edges of southern Europe served as proofs of concept for this burgeoning surveillance and security business. Corporate heads and startup founders alike pledged to augment the growing number of military and police personnel patrolling the walls and checkpoints constructed to seal off nation-states at the dawn of the twenty-first century.


Innovations in surveilling, shooting, and killing allowed consolidating homeland security regimes to spin endless wars abroad and failed border regimes at home as technocratic, and therefore humane, campaigns. “A modern solution for efficient and responsible target management” is how Palantir advertises its AI-assisted targeting platform, Gotham, to militaries and police forces worldwide. Along with a host of private firms, Palantir powered the surveillance databases and visa triaging algorithms that promised to precisely identify who should be carted off to immigration detention centers or turned around by border guards. As migration flows continued, undeterred by draconian new policies, startups like Ghost Robotics and Smartshooter advertised AI-assisted robodogs and machine guns as key to deterrence.

The Walls Have Eyes shows how such systems have done little to mitigate the harms of border enforcement policies—giving the lie to industry’s promises, to no sober observer’s surprise. In the United States, deaths have increased despite the state-of-the-art systems crisscrossing the desert. Nor have these technologies stemmed the flow of migrants fleeing famine, wars, and economic catastrophe in other parts of the world. A record number of migrants died trying to reach Europe in 2023. In the occupied Palestinian territories, long touted as one of the most securitized places on earth, violence has skyrocketed. Since 2021, each year has been deadlier than the last. Israel’s bombardment of Gaza has broken all prior records, despite being waged with some of the most advanced weapons systems of the day.

In fact, many of the very technologies policing borders are used to push people out of their homes to begin with, stoking insecurity in a carefully engineered loop. Elbit drones crater residential complexes in Gaza, Lebanon, and Syria, for example, and also surveil the Mediterranean coastline, turning away migrants seeking refuge in Europe. Billions poured into so-called “security solutions” exacerbate the violence they promise to mitigate, creating an endless demand for better algorithms and more lethal weapons systems to safeguard national security at home and in wars abroad.


Molnar’s study is far from the only one to face this tension. Books on AI-powered surveillance and weaponry can be divided into two categories. On the one hand, there are grim but laudatory accounts of automated warfare penned by generals-turned-founders and founders who have become close friends with generals. From Google’s Eric Schmidt to the ex-head of Israel’s Unit 8200, Yossi Sariel, national security buffs have churned out manuscripts extolling the virtues of AI-powered weaponry and urging American investment, lest China dominate the field first. On the other hand, there is a proliferating genre sounding the alarm about AI’s repressive effects. Journalists and scholars tracking the rise of this industry paint sci-fi–like scenarios of killer robots tracking down and obliterating human life on a whim.

On the surface, the boosters and detractors may appear diametrically opposed, yet they tend to converge on the same techno-determinism. Algorithmic surveillance and AI-powered weapons systems are celebrated or decried as working precisely as the tech founders and venture capitalists dominating the industry promise.

By contrast, critics like Astra Taylor have long called attention to the hype machine serving the interests of companies, investors, and the media. Far from ushering in a more humane and precise era of technological reason, most products billed as AI have downgraded what is most vital to a robust and sustainable humanity—eroding democracy, buckling education systems, stoking political divisions, exacerbating economic inequality, and accelerating global warming through the vast computational resources on which these systems run. In their new book, computer scientists Arvind Narayanan and Sayash Kapoor take aim at all this AI “snake oil.” Border regimes and warfare offer an object lesson in AI’s failure to deliver on the industry’s overstated public promises.

Despite these warnings, critical accounts of war and national security sometimes take the salespeople at their word, investing their objects of critique with the same outsized power funders and founders give them. Robotic dogs patrolling borders are framed as terrifying tools of border enforcement, even though recent prototypes are too expensive and ineffective to be deployed at scale. AI-powered sentiment analysis deployed at airports is described as a dystopian surveillance system rather than a pseudoscientific and faulty product. At weapons expos, the same words critics reach for when outlining the harms of these systems—words like “lethal,” “deadly,” and “unparalleled”—plaster promotional materials aimed at the governments and security agencies eager to keep up with the private sector’s breakneck speed of innovation.


In doing so, critical accounts of this technology’s dehumanizing effects risk eliding the humans who are, at bottom, responsible for the policies and the violence. It is a choice to invest in these technologies and a choice to deploy them. From Middle Eastern battlefields to the borders of southern Europe, most of the new technologies hitting the market lend a veneer of disembodied technical rationality to the same old human campaigns of brute destruction. Instead of racist quotas determining who can enter which country when, we have racially biased algorithms. Instead of human operators deciding when and where to carpet bomb civilian homes, we have algorithms recommending when and where dumb munitions should obliterate them. Armies say they are on the verge of an AI revolution, and homeland security regimes may be embracing “smart” systems to radical effect. But when it comes down to it, those on the ground are subjected to a familiarly brutal violence, often compounded by the algorithmic errors plaguing new weapons systems.


Nowhere is this clearer than in Israel and Palestine. Echoing the words of a former law enforcement officer, Molnar calls Israel the “Harvard of counterterrorism” in a chapter that zeroes in on the proliferation of surveillance technologies and algorithmic weaponry across Hebron, a brutally segregated Palestinian city in the West Bank. She offers a dizzying catalog of the private firms that Israel’s security agencies and military have outsourced development to, from the AI-powered rifle manufacturer Smartshooter to the biometric startup Oosto. Much has been and should be written about the profits made possible by Israel’s permanent occupation of the Palestinian territories. What’s mostly missing from Molnar’s account are the people and policies responsible.

I have tracked Hebron’s transformation into one of the most surveilled places in the West Bank for the past five years. During this time, one thing has always been clear: the devastating conditions in the city—where settler violence against Palestinians has skyrocketed in recent years—are the result of deliberate, coordinated military policies rather than the fantastic capacities of a technological system. Life remains curtailed primarily by the upper echelons of the Israeli military and government, who order young adult soldiers to break into homes, arbitrarily detain and harass Palestinians, and protect the settler vigilantes who destroy lives and livelihoods with impunity. Their policies have become more brutal as right-wing conscripts, many adhering to the messianic ideologies preached by a growing number of security officials and politicians, pour into combat units. Israeli sociologist Yagil Levy calls it the Israeli army’s “blue-collar rebellion.”

And it is part and parcel of global trends. Military recruits in Europe and North America tend to be more right-wing and ideologically motivated than their civilian counterparts; emboldened by populist politicians rising throughout Europe, North America, and the Middle East, they are eager to collude with allies in Silicon Valley. From Alex Karp and Peter Thiel to Elon Musk, the tech industry’s overlords are no longer peace-loving hippies or dyed-in-the-wool libertarians. They are embracing a virulently racist strand of conservative politics to great effect. The firms they run are churning out algorithms—facial recognition databases that help determine who should be holed up in a detention center for days or weeks, or remote sensing systems that decide where drones should drop bombs—that meet security states’ demands.


But that is not to say they are doing it accurately or well. The reality is that the revolving door between tech, venture capital, and the military does little to enhance security. Israeli officials themselves have stated that an overreliance on supposedly state-of-the-art surveillance and weapons systems contributed to the devastating security failures on October 7. For decades, it appears, plenty of military heads around the globe believed private-sector players when they said that better algorithms and more efficient weapons systems would make militaries ultrapowerful and borders impenetrable. In Israel, this hubris, buoyed by the government’s pervasive ideology of Jewish supremacy, blinded military leadership to a staggering pile of warning signs indicating that Hamas was planning to massacre civilians and soldiers and take hundreds hostage. Faith in ingenious technological fixes reigned supreme.

All the death and destruction in Gaza over the last year has done little to dislodge this institutionalized conceit. Within weeks of declaring war against Hamas, Israel’s military circulated press releases claiming state-of-the-art AI-powered targeting systems were augmenting its killing capacities. As ground troops rolled in, military heads boasted that algorithmically enhanced tanks were allowing units to wage war with lethal precision. And as soldiers reoccupied the Strip, security officials announced that the war was yielding a steady stream of data to build up new defense technology products. Their press releases were aimed at the private sector. Over the past year, transnational firms like Google, Microsoft, Amazon, and Palantir have supplied a host of computing infrastructure and AI systems. Defense tech startups have also rushed to the battlefield to begin product testing. “It is very important the IDF continues to open itself up to foreign companies,” Brigadier General Meika Mastai, former head of the IDF’s information and communication division, said at a technology conference in July. “We can be more successful this way.”

But these are just slogans. As investigative reporting from Yuval Abraham at +972 Magazine has made clear, most of the algorithmic weaponry determining where bombs fall—most notably, the AI-assisted targeting systems called Lavender, Where’s Daddy?, and The Gospel—has simply lent the veneer of technical rationality to a military bent on largely indiscriminate destruction. The billions poured into engineering and maintaining these technologies have done nothing to achieve Israel’s stated goals of decimating Hamas or bringing the remaining hostages home. Instead, they have turned most of Gaza into a death zone.

As regional war expands onto another front, claims of technological supremacy ring increasingly hollow. Everyone from U.S. State Department officials to retired Israeli generals has said that Israel’s military strategy has failed, to the detriment of the entire region. Nevertheless, Israel’s government and military are leaning on the same old taglines. After launching one of the most destructive aerial bombardments in history across Lebanon in September, military heads marshaled algorithmically generated kill lists and AI-assisted weaponry as proof of their military edge on yet another battlefield.

As tech critic Kate Crawford observes, AI “performs an ideological function.” In this case, it helps convince the public that intelligent machines, rather than the same old human actors, are doing the work. If this function sometimes falls out of Molnar’s analysis, her powerful case studies also clearly show that whatever security such systems offer is mostly an illusion. Behind the smoke and mirrors is our present of endless war.
