Kentaro Toyama’s insightful essay punctures the cyber-utopian hype surrounding ICT4D initiatives and resists the allure of quick technological fixes for political and social problems.
But Toyama says relatively little about how to design ICT4D projects that apply the same good sense. In the absence of a clear-cut prescription, policymakers may believe that simply by acknowledging the failures of previous technologies, they can ensure that their new initiatives avoid the same fate.
If only it were that easy! The long history of technological utopianism teaches us otherwise. The unfulfilled promise of past technologies rarely bothers the most fervent advocates of the cutting edge, who believe that their favorite new tool is genuinely different from all others that came before. And because popular belief in the world-saving power of technology is often based on myth rather than carefully collected data or rigorous evaluation, it is easy to see why technological utopianism is so ubiquitous: myths, unlike scientific theories, are immune to evidence.
Besides, the maddening pace of innovation in ICT4D leaves little time for self-reflection on the part of practitioners. Instead of analyzing the failure of yesterday’s gadget, many passionate innovators end up road-testing technologies that are poised to become “cool” tomorrow.
If technological utopianism is here to stay, how do we safeguard our policies and projects from its pernicious influence? Toyama’s analysis contains, albeit in tentative and implicit form, the contours of a normative framework. I’ll try to make those contours more explicit.
Toyama’s argument is founded on two ideas. First, policymakers often start with excessive expectations about the power of technology, making it almost inevitable that they settle on inappropriate strategies. Second, policymakers are bad at predicting the outcomes of technology adoption in individual contexts and thus may miss the negative externalities of their seemingly benign efforts.
When building development projects, we should aim to have practical expectations and strategies that are appropriate for individual contexts. And these strategies should anticipate—and potentially mitigate—the undesirable consequences of technology adoption in those contexts.
Enacting new strategies first requires a shift in philosophy. Rather than view technology as a solution to global, large-scale, abstract problems, we should instead see technology as a tool for solving local, small-scale, specific problems. This alternative philosophy demands that ICT4D no longer function as a one-size-fits-all approach. The design of development technologies should be tailored to the unique needs of people who share little more than poverty.
Unleashing the full capacity of technology also requires a more humble conception of that capacity. Rather than envision ICT4D projects as independent and autonomous, development practitioners should treat ICT4D as one option in their toolbox alongside non-technological “social engineering” solutions.
In short, we need to be realistic, holistic, and attentive to context. Why haven’t we been so far? Part of the problem seems to lie in the public’s penchant for fetishizing the engineer as the ultimate savior, as if superb knowledge of technology could ever make up for ignorance of local norms, customs, and regulations. One way to fight this destructive fetish is to empower those with regional and subject-matter expertise at the expense of those with technological expertise. Non-technologists may be more successful in identifying the shortcomings of technologies in given contexts. They may be better equipped to foresee how proposed technological solutions complement or compete with other available non-technological solutions as well as to anticipate the political and institutional backlash that can result from choices of technology.
To achieve these aims, decision-makers need to pay attention to a few important dimensions of the outcomes of ICT4D projects, all of which are suggested in Toyama’s essay: easy-to-detect effects on people and their livelihoods versus subtle but profound shifts in social and political structures; immediate effects versus effects that appear over a much longer term; outcomes foreseen by founders and designers versus outcomes that were not envisioned; socially desirable outcomes versus harmful ones.
Toyama does not suggest that we focus on one or the other outcome in these either-or scenarios. Rather, he cautions that both types of outcomes may occur simultaneously. We need to assess whether the visible, short-term, and intended effects of technology actually are socially beneficial, and we also need to run the same tests, to the extent that we can, for invisible, long-term, and unintended effects.
It is inevitable that in many cases the invisible, longer-term, and unforeseen effects will be socially harmful and require mitigating interventions. The challenge, predictably, is to foresee these effects sooner rather than later. The only satisfying answer here seems to be the same as in the case of optimizing strategies and outcomes: we need to spend less time thinking about the proposed solution—technology—and more time theorizing the problem we are trying to address, whether it is poverty, illiteracy, or disease. As long as the logic of technology cannot be grasped outside of the regional and social context in which it manifests, the analytical emphasis should fall on understanding the latter rather than the former. In other words, we should closely observe the places where technologies are being adopted rather than simply implement an ICT4D project and let it run its course.
If technology is to deliver on its promise, then we must come to terms with the conceptual limitations of its proponents and find ways to limit their influence. Those who are in a better position to predict the trajectory of technology adoption in a particular environment need to be in the driver’s seat.