Our world is increasingly powered by artificial intelligence. The singularity is not here, but sophisticated machine-learning algorithms are—revolutionizing medicine and transport; transforming jobs and markets; reshaping where we eat, who we meet, what we read, and how we learn. At the same time, the promises of AI are increasingly overshadowed by its perils, from unemployment and disinformation to powerful new forms of bias and surveillance.
Leading off a forum that explores these issues, economist Daron Acemoglu argues that the threats—especially for work and democracy—are indeed serious, but the future is not settled. Just as technological development promoted broadly shared gains in the three decades following World War II, so AI can create inclusive prosperity and bolster democratic freedoms. Setting it to that task won’t be easy, but it can be achieved through thoughtful government policy, the redirection of professional and industry norms, and robust democratic oversight.
Respondents to Acemoglu—economists, computer scientists, labor activists, and others—broaden the conversation by debating the role new technology plays in economic inequality, the range of algorithmic harms facing workers and citizens, and the additional steps that can be taken to ensure a just future for AI. Some ask how we can transform the way we design AI to create better jobs for workers. Others urge new participatory methods in research, development, and deployment to address the unfair burdens AI bias has already imposed on vulnerable and marginalized populations. Still others argue that changes in social norms won't happen until workers have a seat at the table.
Contributions beyond the forum expand the aperture, exploring the impact of new technology on medicine and care work, the importance of workplace training in the AI economy, and the ethical case for not building certain forms of AI in the first place. In “Stop Building Bad AI,” Annette Zimmermann challenges the belief that something designed badly can later be repaired and improved, an industry-wide version of the Facebook motto to “move fast and break things.” She questions whether companies will police themselves, and instead calls for new frameworks for determining what kinds of AI are too risky to be designed in the first place.
What emerges from this remarkable mix of perspectives is a deeper understanding of the current challenges of AI and a rich, constructive, morally urgent vision for redirecting its course.