January 6 last year will enter history books as the day a violent, pro-Trump mob stormed the U.S. Capitol. But for millions of people living in Latin America, Asia, and Africa, the day brought with it something quite different: an unusual WhatsApp notification.
WhatsApp is the world’s most popular messaging app, boasting more than 2 billion users. Facebook acquired the service in 2014 for nearly $22 billion—one of the largest acquisitions in tech history—after WhatsApp began to outpace Facebook’s own Messenger in global growth. By 2016 the service had become the primary means of Internet communication for hundreds of millions of users in the Global South.
WhatsApp was founded in 2009 on the promise that it would never sell users’ personal information—a creed reiterated at the time of Facebook’s acquisition. In January last year, however, WhatsApp began to update its terms of use and privacy policy, issuing a notification to users and prompting them to accept the new terms. According to the new policy, WhatsApp could share user data with its parent company, including a user’s phone number, device identifiers, interaction with other users, payment data, cookies, IP address, browser details, mobile network, time zone, and language. In reality, data sharing between WhatsApp and Facebook had started in 2016. The difference in 2021 was that users could no longer opt out: if they failed to accept the new policy, the app’s functionality would degrade to such a point that they could no longer use it.
The WhatsApp story is just one example of the pervasive and harmful impact of Big Tech on communities across the Global South. It illustrates how monopolistic market position breeds data extraction, leaving people dependent on platforms with no choice or formal mechanism for accountability. While these issues are at play worldwide, their harmful consequences are heightened in the Global South.
Take the case of online disinformation. For years activists in Myanmar have called out Facebook for its role in fueling violence against the Rohingya, yet these concerns have fallen on deaf ears, as a recent Amnesty International report makes clear. In the Philippines, meanwhile, Rodrigo Duterte’s authoritarian government weaponized social media. In 2015 journalist and Nobel laureate Maria Ressa and her news site Rappler partnered with Facebook on its Internet.org initiative to provide Filipinos with free access to basic online services. But by 2021, after witnessing years of large-scale, algorithm-powered spread of disinformation, Ressa had become one of the tech industry’s most vocal public critics. In both Myanmar and the Philippines, the spread of online disinformation was accelerated by Facebook’s aggressive promotion of its “free” access initiatives.
Tech companies’ investments in responding to these challenges in the Global South pale in comparison to their efforts in the United States. According to internal documents, Facebook allocated 84 percent of its overall misinformation budget to the United States, even though the country is home to less than 10 percent of the platform’s users. Unsurprisingly, the Mozilla Foundation recently found that TikTok, Twitter, and Meta (as Facebook rebranded itself in 2021) violated various local election laws in Kenya during the presidential elections. In just the last few months, the NGO Global Witness put Meta’s policies to the test by submitting ads containing fraudulent election-related information ahead of the 2022 Brazilian elections. All were approved, in direct violation of the company’s election ad policies. Global Witness found similar patterns in Myanmar, Ethiopia, and Kenya.
This essay is featured in Imagining Global Futures.
Tech companies often make the case that catching mis- and disinformation in what technologists call “low-resource” languages is harder, claiming that what they need to solve the problem is more linguistic data. In reality, much of the alleged difficulty stems from the sector’s underinvestment in non-European contexts. In the Kenyan case, Meta’s system failed to detect hate speech in both Swahili and English—belying the argument that a lack of data is to blame. Meanwhile, Big Tech’s content moderators are largely located in the Global South, many working in appalling conditions.
This worrying picture has led several scholars and activists—among them Sareeta Amrute, Nanjala Nyabola, Paola Ricaurte, Abeba Birhane, Michael Kwet, and Renata Avila—to characterize Big Tech’s global impact as a form of digital colonialism. On this view, primarily U.S.-based tech corporations function in many ways like former colonial powers. Driven by an expansionist ideology, these companies arrange digital infrastructures to fit their economic needs on a global scale. They contribute to the exploitation of low-wage, marginalized workers across the globe. They extract truly staggering profits with very little accountability and with harmful consequences for local communities. They institutionalize social practices designed by a small group of largely white, male, and American software engineers, undermining the self-determination of the societies they seek to expand into. And much like the colonizers of yore who tied all this to a so-called “civilizing” mission, they claim to do all this in the name of “progress,” “development,” “connecting people,” and “doing good.”
But wherever there is unjust power, there is resistance. Activists around the world have responded to the rise of digital colonialism with their own countervailing vision of digital justice. From calling for accountability and pushing for policy and regulatory changes to developing new technologies and engaging various publics in these debates, digital rights communities in the Global South are pointing the way to a more just digital future for everyone. They face an uphill battle, but they have already made significant gains, catalyzed a growing movement, and developed powerful new strategies for change. Consider three of them, in particular.
Strategy 1: Finding a language
Digital rights debates can seem esoteric to any audience, but given the outsize influence of actors in the United States, they are especially opaque to the three-quarters of the global population who do not speak English and lack readily accessible terminology in their own languages to discuss these issues. It is this challenge that Nairobi-based author and activist Nanjala Nyabola has set out to solve with her Kiswahili Digital Rights Project. The simple fact is that there can be no inclusive, democratic agenda for digital policy if people around the world cannot discuss these issues in their own communities, on their own terms.
Nyabola drew inspiration from Kenyan novelist Ngũgĩ wa Thiong’o’s foundational anti-colonialist call to write in native African languages. Starting last year, Nyabola worked with linguists and activists across East Africa to provide Kiswahili translations for keywords in digital rights and technology. As part of this collaborative work, Nyabola and her team worked with local and international media outlets publishing in Kiswahili, encouraging them to adopt the vocabulary in their reporting on technology issues. The group also developed a set of flashcards to be distributed in schools and sold in libraries in the region; the cards are also available online.
The power of this project lies in its simplicity, its collaborative nature, and the fact that it is so easily replicable. At its core, this vision holds that people ought to be empowered with contextualized knowledge about the systems that shape their lives. If global digital rights advocacy is to become meaningful to enough people around the world, it will need to develop many such dictionaries.
Strategy 2: Capturing public opinion
By virtue of its grounding in law, digital rights advocacy is often oriented toward shaping regulation. This work can sometimes devolve into a technocratic enterprise among legal experts, but shaping public opinion at large also plays a central role in changing policy. There is perhaps no better example than the 2015 campaign led by Indian activists for net neutrality: the principle that Internet service providers should permit access to all websites and programs, without interference or preferential treatment.
Facebook launched Internet.org, later rebranded as Free Basics, in 2013, aiming to provide global users with access to a selection of online services through a Facebook-controlled portal, free of data charges. The proposal was central to Facebook’s aggressive strategy of global expansion and user growth.
As it happened, the introduction of Free Basics in India in 2015 coincided with an emerging debate there about “zero-rating”—the practice of offering “free” access to select online services. At the time, several telecom operators were keen on introducing forms of zero-rating, which digital rights activists decried as a violation of net neutrality. As a clear instance of zero-rating, Free Basics became a lightning rod. Local activists, coders, and policy wonks launched a campaign called Save the Internet, forcefully opposing the program. Their website featured a video explainer about net neutrality by a group of popular comedians, All India Bakchod; the video went viral and racked up 3.5 million views. For nearly a year activists engaged in a nationwide and highly publicized battle with Facebook over the interpretation of net neutrality. They made forceful arguments about the value of self-determination, protecting local businesses, and resisting data extraction by a foreign corporation.
This campaign was met with significant corporate pushback. When activists organized marches, Facebook published ads in local newspapers. When activists took to Twitter and YouTube, Facebook bought billboard ads throughout the country. And when activists received support from transnational networks of digital rights advocacy organizations such as Access Now and Color of Change, Facebook mounted an automated astroturfing campaign, inviting users to send a pre-filled message to the Indian telecom regulator in support of Free Basics. (Approximately 16 million users did so.) Despite all this corporate firepower, the public campaign was successful: India’s regulatory authority decided to uphold net neutrality, forbid zero-rating, and effectively kick Free Basics out of the country.
This victory was rightly and widely celebrated among digital rights activists worldwide, but it also illustrates the magnitude of the enduring challenge. Despite being banned in India, Free Basics continued to expand elsewhere—particularly across the African continent, where it reached 32 countries by 2019. Some have also argued the Indian campaign excluded poorer and rural voices and cemented a middle-class view of net neutrality on behalf of small business. Still, the campaign holds important lessons for the future of global digital rights advocacy. Perhaps most significantly, it shows that broad-based mobilization around a technical issue in digital rights policy can play a profound role in reining in the power of multinational tech corporations.
Strategy 3: Cross-class, transnational organizing
Unionizing and organizing have also emerged as promising avenues for change and accountability within Big Tech itself. In 2018 over 20,000 Google employees walked out to protest, among other things, pay inequalities and the company’s handling of sexual harassment. That same year Microsoft employees protested the company’s work with U.S. Immigration and Customs Enforcement. On June 1, 2020, hundreds of Facebook employees staged a work stoppage to protest the company’s refusal to act on inflammatory posts by Donald Trump. Today’s tech organizing goes beyond white-collar headquarters, reaching out to retail workers in Apple stores and pickers and packers in Amazon warehouses. The next frontier for such organizing is the inclusion of workers across the world. Through it all, we must broaden our understanding of who counts as a “tech worker.”
Adrienne Williams, Milagros Miceli, and Timnit Gebru have recently called attention to the transnational networks of workers behind the artificial intelligence hype—what anthropologist Mary Gray and computer scientist Siddharth Suri call the industry’s pervasive “ghost work.” These workers include not only content moderators but also data labelers, delivery drivers, and even chatbot impersonators, many of whom live in the Global South and work in exploitative and precarious conditions. The costs of protest for these precarious workers are much higher than for highly paid tech workers in Silicon Valley. This is precisely why Williams, Miceli, and Gebru argue that the future of tech accountability lies in cross-class organizing between low-income and high-income employees.
Take the case of Daniel Motaung. In 2019, as a recent university graduate from South Africa, he accepted his first job as a content moderator for a Facebook subcontractor, Sama. He was relocated to Kenya and signed a non-disclosure agreement; only then was he told what type of content he would be reviewing. For $2.20 an hour, Motaung was subjected to a non-stop stream of content that one of his colleagues described as “mental torture.” When Motaung and some of his colleagues organized to unionize for better pay and working conditions, including mental health support, they were intimidated, and Motaung was fired.
Though this particular unionization effort failed, Motaung’s story is now widely known and made the cover of Time. He is suing Meta and Sama for unfair labor practices and union busting. By bravely blowing the whistle on the inhuman working conditions of content moderators in the Global South, Motaung drew much-needed attention to a category of workers that ought to be part and parcel of current movements for tech accountability. His case shows that low-paid tech workers are a force to be reckoned with. It also illustrates the importance of changing pipelines both into and out of tech—including preparing landing pads for the whistleblowers and organizers of tomorrow.
Through these efforts and many others, the future of global digital rights advocacy is being written as we speak. Some resist tech power; others develop alternatives. To amount to more than ad hoc progress, however, such work requires sustained funding, institutionalization, and international cooperation.
The “global” dimension of digital rights advocacy should not be taken for granted: it must instead be consciously and carefully cultivated. In a recent analysis of the annual flagship RightsCon conference—arguably the most significant event of the global digital rights community—Rohan Grover found that 37 percent of the organizations hosting sessions were U.S.-based and that 49 percent of the organizations claiming a “global” scope were U.S.-registered nonprofits. In its current form, digital rights activism mostly relies on institutional support with Euro-American funding, where corporate capture lurks around the corner.
This is a pressing concern that must be addressed—including through innovative forms of grassroots support and funding—but a path to a more just digital future for all is already becoming clear. At the heart of strategies emerging in the Global South is a vision for the urgency and necessity of collective power. They point to the need for pressure from within and outside companies; from Big Tech metropoles as well as its so-called peripheries; from policymakers, lawyers, journalists, organizers, and a wide range of tech workers. Above all, they show why we need people-led movements to make technology accountable to everyone.