What's Wrong with Technological Fixes?
Terry Winograd Interviews Evgeny Morozov
July 1, 2013
Evgeny Morozov answers questions about his new book
If you are looking for some smart, informed skepticism about the promise of digital technology to cure important problems, Evgeny Morozov is the critic for you. In his second book, To Save Everything, Click Here: The Folly of Technological Solutionism, the BR contributing editor takes aim at what he describes as Silicon Valley’s “amelioration orgy.” According to the ameliorationists, “all that matters” is “to get humans to behave in more responsible and sustainable ways, to maximize efficiency.”
Morozov characterizes this impulse to fix everything as “solutionism,” and offers two broad challenges to the solutionist sensibility. First, solutionists often turn public problems into more bite-sized private ones. Instead of addressing obesity by regulating the content of food, for example, they offer apps that will ‘nudge’ people into better personal choices. Second, solutionists overlook the positive value in the ‘vices’ they seek to ‘cure.’ According to Morozov, some of life’s good things come from ignorance rather than knowledge; opacity rather than transparency; ambivalence rather than certainty; vagueness rather than precision; hypocrisy rather than sincerity; messy pondering of imponderables rather than crisp efficiency. As these challenges reveal, Morozov’s critique is, in the end, animated by a sensible picture of human life that suggests a more modest view of technology than solutionists have proposed.
To probe his ideas further, we got Terry Winograd—professor emeritus of computer science at Stanford University, founding faculty member of the Hasso Plattner Institute of Design at Stanford (the “d.school”), and one-time advisor to Google (which was founded by his students)—to ask Morozov some questions about To Save Everything. His answers were offered via email.
Terry Winograd: While preparing for this interview, I ran across a story by Will Storr in the Telegraph, about an engineer in Silicon Valley whom I am totally convinced you must have made up, in order to show you weren’t exaggerating:
Last Christmas, Rob Rhinehart realised that food doesn’t work. At least, not very well. Its function is to deliver the energy and nutrition that the body requires for fuel, and yet it’s expensive to buy and takes time to prepare. Many in the world can’t afford to eat properly, whilst others eat so badly that they become clogged and obese and then they just die. Eating is a problem, in one way or another, for millions, perhaps billions of humans. So, a few months ago, the 24-year-old computer engineer began his quest to “solve” food.
It may seem eccentric, even naive, this compulsion to solve all the problems that he comes across, no matter how profound, but Rhinehart is an engineer, and this is how engineers think. The world is filled with puzzles and the parts necessary to solve them. Success in that world is a process of teasing out new and evermore perfect solutions to the various problems that make up a human life. For people like Rhinehart, nostalgia is a bizarre and retrogressive state. The past is simply a less efficient iteration of now. And the present isn’t good enough, either, because there’s nothing in it that can’t be improved or optimised. “I’m not obsessive about it,” he says, “but when something is glaringly inefficient I don’t think there’s any reason to put up with it. Just because we’ve been doing something a certain way for a long time, I don’t think that makes anything sacred.”
Is he for real?
Evgeny Morozov: Sadly, I'm not surprised. In To Save Everything . . . I quote from Ken Alder's fascinating book on engineering and the French revolution, where he argues that engineering is actually one of the most revolutionary professions, since engineers are so keen to “disrupt” and are always eager to look for the most efficient solution. Here is what he wrote:
Engineering operates on a simple, but radical assumption: that the present is nothing more than the raw material from which to construct a better future. In this process, no existing arrangement is to be considered sacrosanct, everything is to be examined in the light of present aspirations, and all practices refashioned according to the dictates of reason.
Now, there's much to like about this revolutionary spirit, at least in theory. I'm not one to defend current practices just because that's how we have always done things (even though I do realize why so many conservative commentators endorse my work; the best review I got is probably from Commentary, and I'm not yet sure how to react to such applause). But this doesn't mean that everything should be up for grabs all the time—especially when efficiency is our guiding value. This to me seems dumb, not least because many of our political and social arrangements are implicitly based on the idea of inefficiency as the necessary cost of promoting some other values.
Take rent control or common carriers like taxis. Those two norms introduce a lot of inefficiency, as the proponents of AirBnB and Uber like to point out. But to say that these cool start-ups are good because they promote efficiency is not to say much—since efficiency may not be what we actually want. There's something odd happening here and I think we ought to explicitly recognize that inefficiency can be a good thing—at least when it allows us to get something else. To put this in broader theoretical context, I think we ought to stop being in denial about the foundations of modernity; it could be that opacity, ambiguity and inefficiency always played an important role but we never had to defend them because they were never under threat. The situation today is different and we ought to defend them—if only to make it harder for the “disruptors” to keep appealing to efficiency and transparency as if those are unalloyed goods. They aren't and we ought to make this clear.
TW: There are two central critiques in your book. One targets an ideology you call “internet-centrism” and its accompanying “epochalism.” You write: “Anyone who is desperately trying to understand how today’s digital platforms work is much better off simply assuming that ‘the internet does not exist.’” What do you mean by that?
EM: This statement—that “the internet does not exist”—was meant to be provocative, but I'm not actually denying the physicality of routers or cables. It's a fairly simple notion: depending on where and how we choose to look at things, we will end up with a very different picture. What ails most of our Internet debates is that we conduct them at the level of abstraction where our ideological predispositions regularly affect what we see and what we don't see—and we don't even notice this bias.
Some 'problems' become problems only because we are armed with certain tools that bias us into identifying them as such.
Let's try an example. A story about Wikipedia becomes a very different story if it's also woven into a larger story about “the Internet,” its democratizing potential, its tendency to promote collaboration, its potential to be the next printing press, etc. I'm not even sure we can see Wikipedia for what it is, outside of this frame—but we should be trying. This, I argue, applies to all sorts of other phenomena—education, politics, health: we have become prisoners to what I call “Internet-centrism.” When we set these very specific stories in the context of a powerful meta-narrative such as “the Internet,” we operate with a theoretical handicap of sorts, producing answers before all the questions have been posed—not least because we have certain expectations of how the Internet works, what it is, and what it is for. What I argue is that if we can suspend everything that we know about “the Internet,” we might end up with better, more accurate stories about individual technologies. We can also govern them differently without always having to look back at how such decisions might affect “the Internet.”
TW: Your broader social critique is of “solutionism,” the essence of which may be summed up in the slogan “There’s an app for that.” What is the connection between solutionism and Internet-centrism?
EM: This summary of solutionism covers only part of my critique—the need to remember that problems can be addressed on many different levels and that, occasionally, approaching them through apps is not the wisest possible move. For example, a problem like obesity can be tackled by giving everyone a smartphone and telling them that, from now on, the smartphone will track everything they eat and how much they exercise, so that they can optimize their behavior accordingly. We happen to be living during a period when governments are struggling with the financial crisis and might actually find this app-driven approach to problem-solving quite appealing (there are other reasons why this might be so: the rise of behavioral economics and nudging, and the fact that tech-innovative solutions are what make political careers these days).
So yes, we can tackle a problem like obesity through apps—but we can also do something at a higher, structural level: regulate the food industry, pass stricter rules for advertising junk food on TV, build more infrastructure where people can walk and exercise. Silicon Valley is good at producing micro-solutions to macro-problems. Sometimes they work, sometimes they don't. But we also need to remember that macro-solutions to macro-problems are possible and that, perhaps, we want to keep experimenting with them even if we have plenty of micro-solutions at our disposal.
Now, the other part of my solutionism critique that “there's no app for that” doesn't cover is the idea that not all “problems” are problems and that some “problems” become problems only because we are armed with certain tools that bias us into identifying them as such—for example, seeing forgetting or hypocrisy as problematic and worth ‘solving.’
A few months ago, I stumbled upon an interview with Foucault where he was asked to explain his notion of “problematization” and I think what he said in response applies to my thoughts on “solutionism.” Foucault said he was interested in rediscovering “what has made possible the transformation of difficulties and obstacles of a practice into a general problem for which one proposes diverse practical solutions . . . This development of a given into a question, this transformation of a group of obstacles and difficulties into problems to which the diverse solution will attempt to produce a response, this is what constitutes the point of problematization . . .”
What I tried to do with the idea of “solutionism” was to urge our problem-solvers—who, no doubt, got empowered thanks to the proliferation of digital technologies—to pause and ask a very simple question: How did my problem become a problem and how do I know that it is, in fact, a problem?
Now, your actual question—the relationship between Internet-centrism and solutionism—is a complicated one. Let's just say that my critique of the Internet as a cultural construct, with its own myths and presumed teleologies, leads me to conclude that many people who do succumb to Internet-centrism stop asking questions about problems that they are trying to solve; instead, they double their solutionist zeal. Part of this has to do with the fact that the Internet is invested with the kind of uniqueness and exceptionalism that makes us less suspicious of bad ideas than we should be. The logic, to oversimplify quite a bit, goes something like this: Well, if Wikipedia is possible, this means that the Internet is something new—and if the Internet is something new, it means that there might as well be a “Wikipedia for politics.” Internet-centrism normalizes “disruption”—and once disruption is seen as something to cheer for, all sorts of solutionist projects spring up.
TW: A key element of many solutionist approaches is a focus on devices and techniques to shape the actions of individuals. For example you quote Tim Chang, head of a major venture fund investing in “Quantified Self” apps that allow users to track their vital statistics: “the only way we’ll fix our horribly broken healthcare system is by getting consumers to think about health and not healthcare.” In addition to the turn to “consumer,” I would imagine there are underlying assumptions here you don’t share about the framing of the problem. Is that right?
EM: One of the arguments I make in the book is that it's impossible to understand the appeal or impact of technologies for self-tracking outside of the specific domain where they are introduced. So if we want to understand how the many tools advocated by the Quantified Self crowd would affect, say, health, we need to know something about how the notions of health and disease have changed in the last five decades and what role not only science but also the pharmaceutical industry has played in this process. I've read a bit in sociology and anthropology of health and medicine and one unmissable trend there is the growing concern with what academics call “biomedicalization”—which is like the good old medicalization, with its imperialistic tendencies to redescribe all experiences and concerns in the language of modern medicine, but now boosted by the techno-scientific apparatus of biological sciences.
If you follow the arguments of some prominent voices in this field—say anthropologists like Joe Dumit—you'll see that they draw an interesting connection between this constant search for new symptoms and the financial interests of Big Pharma, who, of course, wants to sell us more and more drugs to treat more and more diseases. So this is the context in which I think we should view the proliferation of various devices for self-tracking. And seen in this light, the picture isn't very pretty. So part of what I'm arguing in the book is that it's wrong to just celebrate these new ways of self-tracking—the fact that we can now build sensors into our t-shirts to monitor our health—without having some deeper views on how we think about health and disease. I'm actually not interested in advancing one view of health over another in this book; all I'm trying to do is to point out that most debates about self-tracking technologies—at least those we hear in the public arena—sidestep these issues and just present these tools as ways for us to know more about ourselves, etc. That's why solutionists have such an easy time: they do not complicate their stories, framing all of their innovations as giant steps towards progress and Enlightenment.
TW: You are very critical of attempts to get people to do the “right” thing by providing games, incentives, nudges, or simply making the “wrong” thing impossible to do. If these can effectively modify behavior, what’s wrong with doing them?
EM: Well, if we assume that “modifying behavior” of citizens in the most efficient way is the goal of public policy, then, I guess, there's no problem. I am, however, quite old-fashioned and excessively utopian in that I believe it wouldn't be such a bad idea for citizens to know what they are doing and have some basic understanding of why it matters—even if we have the option of achieving a better outcome with them doing it without any awareness. Look at gamification: We can now easily design a scheme that will encourage people to engage in climate-friendly behavior without having a shred of comprehension about what they do, why this matters, and what the fuss about climate change is all about. You just award them ‘points’ for exhibiting behavior that someone somewhere has deemed environmentally friendly.
The discourse associated with 'virality' conceals the role of mediators.
I think that this is a naïve and ugly approach. It's naïve because it assumes (wrongly in my opinion) that complex problems can be addressed without getting citizens to develop more sophisticated models of the world around them; it's the ultimate conceit of our technocratic policymakers, who believe that they can solve the crisis on their own, if only the citizens do not get in their way. It's ugly because it abandons the idea of citizens as active players who are capable of learning and understanding.
By the way, I'm not saying that citizens ought to be forced to confront every single problem that faces the world. This would be too much. And, frankly, I don't have a theory as to how we should prioritize which problems matter and which problems don't. But what I do know is that the approach taken by the cheerleaders of gamification, the quantified self, big data, etc.—I take all of them to harbor these ambitions of getting citizens to do the right thing without necessarily grasping the reasons behind it—also doesn't have such a theory and, more importantly, pretends that such a theory is not needed. I think this is insane. Even if not every problem deserves reflection on a regular basis, it doesn’t follow that no problems deserve such reflection. I'm eager to have a conversation about what a proper theory would look like, but to have that conversation we must at least acknowledge that there's something wrong with the current approach, even if it does achieve greater efficiency.
TW: One of the phenomena often taken to demonstrate the democratic nature of the Internet is “virality.” You see this not as a natural force, but as something subject to social control and design decisions. Can you explain why, and say what might be done?
EM: I don't like the idea of “virality” because the discourse associated with it usually conceals the role of mediators, be they institutions or powerful individuals, and gets some rather bizarre and dangerous ideas off the hook simply because they are seen as expressions of the vox populi. Then there are questions about metrics and algorithms that also become invisible: that videos become popular on YouTube is not necessarily a reflection of their intrinsic nature but a consequence of how YouTube's ‘related videos’ recommender system works. I think it would probably be helpful to know how such recommender systems work—whether they are too prudish, whether they might embody some socioeconomic biases, and so forth—but the discourse of “virality” makes it harder to have those conversations. Of course, I'm not denying that some ideas take off more than others because they are better than others, but I think this happens in far fewer cases than we are made to believe. I quite liked Ryan Holiday's book published last year—Trust Me, I'm Lying. It's a good overview of just how easy it is for those with money and power to push their ideas by pretending that they’ve gone “viral.”
TW: As a potential antidote (I won’t say “solution”) to solutionism, you advocate the creation of “technological unfixes.” What are some examples and what do they achieve?
EM: The idea of “technological unfixes” is not really original. We have traditionally thought of technological fixes as black boxes—we want them to work without forcing any extra thought on the user. But can we have fixes that would, perhaps, resolve the problem without becoming black boxes? Can we make environmentally friendly lightbulbs that would get us to think about climate change even as we get used to them? Perhaps one way to preserve some of this thoughtfulness is by having some of our technological artifacts occasionally break down—but do so without hurting anyone. Imagine Martin Heidegger had an MBA and had to design hammers. I guess that's what we would get.
TW: Many of the reviews of your book comment on its style with words like “snarky,” “bitchy,” and “bullying.” Your text is full of brief asides that denigrate or question the motives of people you are describing, and even compare them at times to Nazis or worse. I can't help but note that this scattering of clever, acerbic, Tweet-sized comments is part of the standard style in the blogosphere, but not in the academic tradition of serious intellectual discourse. You're obviously reflective and thoughtful about the way you write, so I was wondering what you saw as the pluses and minuses of using this style in a book.
EM: Let's get the Nazis out of the way first. There's a considerable body of serious scholarship looking at the technological thought of the Nazis. They had plenty of engineers and scientists, and some harbored rather grand theoretical ambitions. (Not to mention that Carl Schmitt and Heidegger, whatever their relationship to Nazism, wrote about technology.) All I did was to point out that there are some hard-to-miss similarities between early Nazi thought about the relationship between technology and nature and the way in which someone like Kevin Kelly writes about it. Now, that's a serious charge and I think I support it well in the text. What do my opponents say? Well, they say that it's completely unreasonable to make such comparisons because this is what people on Internet forums do. I find such a response ridiculous. My summaries of Nazi technological thought all draw on articles in serious peer-reviewed journals of history, and if someone wants to challenge the interpretation, they are free to do so. But to simply dismiss my critique as “something that angry people on the Internet do” is lame.
As for the acerbic tone, well, I've got no problem with it. Karl Kraus was already writing in this tone well before Twitter existed. I'm not comparing myself with Kraus, but the community of people writing about “the Internet” is just so stuffy, boring, and self-absorbed (in their defense, most of them were trained as lawyers) that some acidic commentary cannot possibly hurt. We've got too many priests and not enough jesters. The other thing is that I'm not really hiding the fact that I seek to change the mechanics of “Internet discourse” with this book. This project is not just about the substance of individual arguments—it's also about how we should listen to those who preach “the Internet” and, most radically, about who I think ought to say less. Needless to say, I'm not surprised that people I've targeted are not very happy with the book. To me, though, it's a sign that everything is working.