Algorithms for the People: Democracy in the Age of AI
Josh Simons
Princeton University Press, $29.95 (cloth)
A new common sense has emerged regarding the perils of predictive algorithms. As the groundbreaking work of scholars like Safiya Noble, Cathy O’Neil, Virginia Eubanks, and Ruha Benjamin has shown, big data tools—from crime predictors in policing to risk predictors in finance—increasingly govern our lives in ways unaccountable and often unknown to the public. They replicate bias, entrench inequalities, and distort institutional aims. They devalue much of what makes us human: our capacities to exercise discretion, act spontaneously, and reason in ways that can’t be quantified. And far from being objective or neutral, technical decisions made in system design embed the values, aims, and interests of mostly white, mostly male technologists working in mostly profit-driven enterprises. Simply put, these tools are dangerous; in O’Neil’s words, they are “weapons of math destruction.”
These arguments offer an essential corrective to the algorithmic solutionism peddled by Big Tech—the breathless enthusiasm that promises, in the words of Silicon Valley venture capitalist Marc Andreessen, to “make everything we care about better.” But they have also helped to reinforce a profound skepticism of this technology as such. Are the political implications of algorithmic tools really so different from those of our decision-making systems of yore? If human systems already entrench inequality, replicate bias, and lack democratic legitimacy, might data-based algorithms offer some promise in addition to peril? If so, how should we approach the collective challenge of building better institutions, both human and machine?
These are surprisingly difficult questions, and political theorist turned UK Labour MP Josh Simons offers among the most clarifying discussions of them to date in his excellent book Algorithms for the People: Democracy in the Age of AI. Drawing on his broad experience as an industry insider, labor activist, and politician, Simons develops a substantial theory of the idea that “machine learning is political,” whether in the context of distributing social benefits and burdens (as with recidivism predictors in criminal sentencing) or distributing information (as with Facebook’s newsfeed and Google’s PageRank). He defends this claim against the view that the inner workings of private companies shouldn’t face public scrutiny, elaborating a vision of collective self-governance that applies to public and private institutions alike. And he makes the case that democratic legitimacy requires more than mere technocratic oversight. A truly democratic framework for regulation and reform must instead “embed forms of participatory decision-making every step of the way.”
Throughout the book, Simons’s philosophical lodestar is the notion of “political equality,” rooted in the idea that “citizens co-create a common life and live together through the consequences of what they decide.” The book’s pervading spirit is that of John Dewey, for whom “the task of democracy is forever that of creation of a freer and more humane experience in which all share and to which all contribute.” This is an ambitious and demanding vision, and Simons takes it seriously, arguing that “every institution in a democracy has a responsibility to protect against domination and to support the conditions of reciprocity over time.” But because “that responsibility varies in scope and content across institutions and social groups,” Simons notes, there can be no general application of political equality. Instead it requires “further moral and political argument informed by an understanding of the concrete threats to the capacity of some citizens to function as equals and the role of particular institutions in reinforcing or removing those threats.” In short, “political equality is political all the way down.”
This argument exudes a refreshing optimism about democratic self-governance. Social life, Simons stresses, is ours, collectively, to make. His aim is not just to apply these ideas to machine learning but to show how the political debates that new technologies have prompted hold the potential to “reanimate democracy in the twenty-first century” and raise our expectations about what collective self-governance more broadly should look like. But at times his scrutiny of new algorithmic systems suggests that machine learning tools themselves present unique threats to democracy and political equality. This misattribution, however, risks obscuring where the real dangers to these values may lie.
According to Simons, the “political character” of algorithmic decision-making has two aspects. The first is by now familiar: technical decisions—which variables to include in a model, which data to train it on, and so on—have serious consequences that can both reinforce and introduce inequality. The second has been less commented on: the politicization of allegedly neutral design choices in algorithmic systems stands to politicize collective decision-making in general, which in turn can help to promote political equality. “By forcing institutions to make intentional choices about how they design decision procedures,” Simons explains, “machine learning often surfaces disagreements about previously implicit or ignored values, goals, and priorities.” Debating algorithmic design and deployment thus presents an “opportunity for greater intentionality and openness about the goals of decision-making” writ large.
Does this make the age of algorithms unique? Simons claims that while using systems of classification and statistical generalization to make decisions about individuals is nothing new, predictive algorithms enable us to do so to an unprecedented degree. “Because machine learning increases the scale and speed at which decisions can be made,” he writes, “the stakes . . . are often immense, shaping the lives of millions and even billions of people at breakneck speed.” “Machine learning,” Simons argues, “both amplifies and obscures the power of the institutions that design and use it.” The book illustrates these stakes by drawing on Eubanks’s work on the Allegheny Family Screening Tool (AFST), risk modeling software used by the Children, Youth, and Families office in Allegheny County, Pennsylvania, to predict harm to children by caregivers. For each referral of potential child maltreatment the office receives, the AFST outputs a “Family Screening Score” based on details that a call screener enters into the case management system. Cases with scores above a given threshold are flagged as “mandatory screen-ins,” while those with scores below another threshold are screened out by default. Overrides by caseworkers are possible but are documented and reviewed.
The adoption of AFST no doubt alters the screening process at Allegheny County’s call centers. But does it raise the stakes of the office’s decisions? On this, I am less certain. Simons appeals mostly to sheer numbers, suggesting there is more harm, or more pervasive risk of harm, now than before. Setting aside the fact that the utilitarian spirit of this comparison appears to conflict with Simons’s own commitment to democratic decision-making, our child abuse and neglect policies—how state institutions design their procedures, when and to whom they delegate tasks, whether they are more or less decentralized, and so on—have always had high stakes for the disproportionately poor and nonwhite citizens whose lives are touched by them. Why should we assume such decisions were any less morally consequential, or any less matters of public concern, when authority was left to the discretion of human caseworkers? Framing the problem this way thus naturalizes the procedures of the past, obscuring their own political character. In this sense, the advent of machine learning does not raise but rather reveals the stakes of institutional decision-making for what they have always been.
Recognizing as much clears the path to asking whether our old, thoroughly human systems—not just new, algorithmically enhanced ones—are designed well too. A system that relies on the judgments and discretion of individual caseworkers is characterized by variation and inconsistency; it hands over decision-making power to people who have good days and bad days, who sometimes are generous and sometimes not, compassionate at times and frustrated at others—and yes, who harbor unconscious biases, like we all do. Some may be disposed toward empathy. Others may find themselves frustrated or made resentful by low pay and stressful encounters with human suffering, all as part of a labyrinthine bureaucracy within which they feel powerless.
Simons depicts none of this complexity. Instead he offers a thought experiment about the tool displacing a fictionalized caseworker who expresses “empathy,” is endowed with “contextual knowledge,” and is committed to self-education about the “history of racism in the U.S. welfare system.” In similar fashion, Eubanks worries in her book Automating Inequality (2018) that instead of being supported by the algorithm, caseworkers will wind up being trained by it. In both cases, there is no suggestion that caseworkers might benefit from interacting with such tools, or that incorporating algorithms could make the system not just more efficient but more just. (Simons does elsewhere note that democratic governance might ensure that algorithmic tools are used “to empower experienced staff and promote social equality,” but this prospect disappears in his discussion of AFST.) With this idealized portrait of the human caseworker, Simons risks undercutting his own thesis about the indissolubly political character of institutions like the Children, Youth, and Families office, or at least understating the range of issues that may be “surfaced” for democratic debate. His caseworker appears to have a commitment to protecting children that rises above moral or political dispute. But we can no sooner leave these questions to presumptively benevolent caseworkers than we can design an apolitical algorithmic tool to figure them out for us.
Indeed, sociologists and historians have documented in painstaking detail the frequently traumatic encounters that thousands of poor families have had with the human agents that carry out state family policy. Where Simons sees humans sagely applying discretion and forming judgments appropriately sensitive to context, scholars such as Dorothy Roberts see gross injustice. On her account, the U.S. child welfare system perpetrates “benevolent terror” on the communities it is alleged to “serve”—from state administrators, therapists, and investigators financially benefiting from the removal of children to caseworkers allying with police officers in searching family homes, rarely bothering to obtain a search warrant.
Roberts’s contention that the child welfare system is an extension of the carceral state not only illustrates the terror wrought by institutions before the rise of machine learning. It also demonstrates how, as Simons claims, an understanding of “concrete threats” matters to our political debates. Had Simons run the thought experiment from Roberts’s point of view, he likely would have imagined a less benevolent caseworker—perhaps someone more like an unempathetic cop—and thus might have seen benefits as well as risks to displacing the discretion of human decision-makers in certain contexts. There is a dilemma here for political theory, which by its nature tries to draw general conclusions. On the one hand, since algorithms do not do anything “on their own,” we must attend to their operation in particular contexts. On the other hand, precisely this attention to particularities makes it difficult to draw broader takeaways. Assessing any given case of algorithmic deployment relies on thick assumptions—themselves politically contestable—about an institution’s aims and present functioning that shade the way we see the risks and benefits of expanding the role of machine learning in that context.
The analogy to policing offers another important lesson: the political stakes of our institutions—especially state institutions with a monopoly on violence—can’t be reduced to the personal character of the individuals who work in them. If, as Roberts and many others suggest, the problem with policing as an institution is that it is endowed with the legal rights and raw power to enforce an unjust notion of “law and order,” we will draw misleading conclusions if we focus on “bad apples” or even the technical features of the algorithmic tools they use. By the same token, if the child welfare system enforces a vision of child and family welfare that we endorse, it will not be because of the actions of benevolent call screeners or caseworkers; it will be because the institution as a whole embodies ideals that we have collectively decided on. Focusing on technical niceties and individual behavior illuminates how even the smallest components of a decision-making system matter, but it can also miss the forest for the trees.
Simons is thus on surer footing when he elsewhere observes that “we must engage in public arguments about what different institutions are for, what responsibilities they have, and how decision-making should reflect those purposes and responsibilities.” The fact that algorithmic systems offer a way to put such collective determinations into practice—through democratically specified design—is what makes them such an important site of democratic renewal. This is where algorithmic tools do present a unique opportunity. By expanding our options for decision-making, they make it easier to audit and assess the systems we already have—and thus to see them as open to debate, change, and improvement. Preoccupation with the flaws or inaccuracies of algorithms obscures this fact, as well as the reality that human decision-making can be flawed and inaccurate in equally concerning ways. To decide to stick with an old system is to make just as much of a political choice as to choose to adopt a new one.
Algorithms for the People presents the choice we now face as one among different machine learning models, encoding different values and optimizing for different ends; I think we should see it instead as a choice among a range of systems, both machine and human. Still, Simons rightly insists that the choice is inevitable, and it is ours as a polity to make. This is not to say that democratizing these decisions will be easy. Simons warns that, although algorithmic systems force us to articulate what we want, they also force “that reasoning to be articulated in technical, quantitative terms.” They are shrouded in a “veil of scientific authority,” and they tend to “obscure the uncertainties, as well as the moral and political judgments, involved in generating data.” But failing to rise to the challenge of collectively making these decisions will mean we have failed to build a social order that we might see and endorse as truly ours. The choice we face is not between a future that is designed and one that isn’t, or a world with humans and a world without. Either way, humans are steering the ship.
What, then, should we build? Simons sketches two broad proposals. First, to support regulation that is proactive and system-wide, he proposes an AI Equality Act (AIEA) that would set forth a framework for the “positive equality duties” by which all institutions that use predictive tools must abide. With political equality as its guiding light, the AIEA would advance an affirmative agenda aimed at building a society that empowers us to participate as equal citizens. Rather than allowing individual rights and remedies to dictate the direction of algorithmic systems, the goal would be to establish, from the start, “broad duties for institutions to demonstrate they have made reasonable efforts to ensure that their decision-making systems do not compound social inequalities and that, in some contexts, their systems reduce them.”
Second, drawing on antimonopoly thought from the Progressive and New Deal eras, Simons looks to the theory of public utilities, which licenses the state to take under democratic control those corporations whose “exercise of infrastructural power shapes the terms of citizens’ common life.” As Simons sees it, companies like Meta and Google that now form the basic infrastructure of the “digital public sphere” may meet this criterion. But since the exchange of ideas and information strikes at the heart of democratic self-governance, the public utility framework does not go far enough, Simons argues; delegating control over these institutions to state-appointed regulators would pose too severe a threat to political equality. Instead, Simons develops the idea of specifically democratic utilities, which would empower citizens to “co-design” and “co-create” public infrastructure through new mechanisms of participatory governance.
Algorithms for the People closes with two overarching lessons. The first is that insofar as collective decision-making is political, it is always partial; some set of interests, aims, and values invariably will prevail over others, so we must “ceaselessly debate” how our ideals are being put into practice. The second is that instead of looking to optimize toward a particular set of ends, we must “structure processes of experimentation and collective learning.” “What matters,” Simons writes,
is not which particular values or interests predictive tools prioritize at any given moment, but the processes and mechanisms of governance used to surface and interrogate those values and interests over time. Institutionalizing continuous processes of experimentation, reflection, and revision will force us to ask how best to advance political equality and support the conditions of collective self-government.
There is a tension, however, in this Deweyan emphasis on means rather than ends. After all, deeper democracy is not just a means by which we may achieve other ends; it is itself an end that must be attained. It requires that people win the power not just to debate the values we should embed in our systems of self-governance but to actually live them out. But if the primary obstacle to that power lies in the profit-driven economic order which not only underlies the design and deployment of algorithmic tools but continually frustrates our ability to effectively regulate them, Simons might be said to understate the challenge by only casually noting that we must be “concerned above all with how best to prioritize democracy over capitalism.” Whether reforms of the kind he imagines—firewalls between digital infrastructure and advertisement revenue streams; citizens’ assemblies and better corporate decision-making structures; mini-public meetings among different constituencies of civil society—are sufficient to achieve a political order where “democracy comes before capitalism, not the other way around” is another question.
This much is clear: if Simons is right about the political stakes of infrastructural power—and I believe he is—any disciplining of capitalism by democracy will not come without a fight. That is the point where political theory ends and the real politics in the “politics of machine learning” begins.
Independent and nonprofit, Boston Review relies on reader funding. To support work like this, please donate here.