Since the rise of what was called Internet 2.0 about a decade ago, nearly all Americans have shared their beliefs, values, social and commercial proclivities, and patterns of behavior with a handful of Web-based companies. In return, the companies—most prominently, Google, Facebook, Yahoo, and Amazon—have shared with everyone and profited fantastically from user-generated content provided both consciously (such as emails or Instagram photos) and unwittingly (such as location information tracked via cell phones).
By pooling these data with the communications records it has been collecting since the spread of mobile phone networks two decades ago, the U.S. government has assembled the largest, if least visible, database in the world. It has given a million people security clearances. It has invested billions of dollars in software and hardware to analyze the data. It is constantly launching new code to comb the data for patterns that might reveal who is planning what terrible act against our country’s interests.
Few would deny the benefits of Internet 2.0 or mobile phones. And no one should begrudge the government’s official purpose. But information is power, and power corrupts. The effect may be insidious. Surveillance and pattern recognition can combine to convey an impression of knowledge so convincing that even the humblest of leaders may believe their data-driven judgments are infallible. The horrific threats we face today and the government’s apparent ability to know everything may seem to justify a broad range of actions, from prosecutorial (accusations, investigations, criminal charges) to punitive (drone strikes). The digital world moves so quickly that one might think it cannot accommodate the Constitution’s creaky eighteenth-century structures, which are designed to protect individuals—their privacy and personal liberty—from governmental power.
The fundamental question of politics remains unchanged: Who rules? Does the alliance of big business and big government to control big information constitute a new and necessary form of control over society? Or does the promise of democracy still hold—are individuals still sovereign? If the latter, then businesses would have to remember that the customer is king. Government would be of the people, by the people, and for the people. And technology would not jeopardize the rights of individuals. Instead it would re-establish the Bill of Rights in the digital world.
No individual can muster the spending power of the big firms, much less the government. However, computer scientists have recently invented new ways for people to act alone or together, in private, outside the purview of business or government. The desire of the state and firms to know everything is on a collision course with the possibility that individuals can now engage with each other, in anonymous confidence, as never before.
What is needed is nothing less than a digital bill of rights that reinterprets the original amendments for this century. This document, like the original, will have to be negotiated. My purpose is to outline the three principles that I think should guide the negotiation: (1) substantial reduction of the secrecy that shrouds the sharing of information between firms and the government and constraints against government misuses of personal data, (2) purposeful encouragement of the attempts by individuals, acting alone or together, to use new technologies that assure privacy, and (3) commitment to due process of law as the method for exercising state and business power over information.
The Power of Data
The genesis of the new powers obtained by the corporate-government alliance has been big data—analytics as applied to unstructured data. Computers can analyze prodigious amounts of information, even if no one has organized it into relational databases. Businesses can use the data to identify consumer propensities. The government can use the data to search for conduct that threatens the state and its citizens.
Although the tools are the same, the goals are not. Firms want to increase the probability that advertisers will reach likely purchasers. They can do well even if their predictions are not very accurate: given the sheer number of advertisements that reach consumers, even a modest improvement in predicting individual preferences is an accomplishment for which advertisers will pay a great deal. Beyond suffering distraction, individuals rarely object to seeing ads as the price they pay for the free services of social networks.
The government, by contrast, gathers information in order to detect, prevent, and punish. In these efforts inaccuracy is harmful both for national security and for those wrongfully accused or punished. Unlike false positives generated by commercial firms, the government’s false positives have enormous costs. Governance too is an issue. If the top executives at social networking firms misuse data, their boards might oust them. They might even be subject to civil or criminal sanctions. But if top people in the government misuse data in order to, for example, punish political opponents, there will not necessarily be consequences. The problem was stated by Juvenal two millennia ago: Quis custodiet ipsos custodes? Who watches the watchmen? Indeed, even when governmental leaders are well intentioned, if misguided, simply having a productive debate about appropriate guidelines is no easy task. The state, as recently demonstrated in the contretemps between the CIA and the Senate, may believe that the debate itself jeopardizes national security. And when courts are asked to supervise the government’s information gathering, they may lack time, competence, or their own appropriate supervision. Still another problem may be the lack of human intervention. The computers crunching data are doing such complex calculations that even the most scrupulous observers would not be able to discern their errors.
If law-abiding people cannot be protected against the misuse of their data by an alliance of government and information-gathering firms, then they lose their status as rights-holding individuals. They become statistically defined groups. Already leaders of businesses and government treat hundreds of millions of consumers and voters as interest groups whose wishes are expressed by purchases or poll results. Stalin said, “Each death is a tragedy but millions are a statistic.” Almost every other leader in history was less barbarous, but all know that to govern a big country is to govern by numbers, which is only made easier in an era of big data.
The claim to privacy presses against categorization. The right of due process is a right to be treated as a person. Privacy, due process, and all property rights—including rights to control digital information—guarantee that even if leaders cannot consider citizens one by one, they nevertheless cannot inflict enduring harm except on an individual basis, because a court will hear an individual’s claims and will not judge by groups or categories. (That is the essential difference between democracy and each of its antipodal enemies, fascism and communism.)
Privacy, due process, and digital property rights are meant to further enable speech. In their absence, self-censorship is an inevitable act of self-protection. In the all-knowing state, only silence is safe. To speak is to invite trouble.
But censorship was the antithesis of the culture that the U.S. government embraced when it decided to champion the spread of the Internet around the world more than twenty years ago. I was chairman of the Federal Communications Commission then. We envisioned the Internet as a limitless forum not only for our democracy, but also for a global debate of issues ranging from the sublime to the trivial. I thought—and so have FCC chairs since my time—that cyberspace is, or ought to be, “a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity,” as John Perry Barlow put it.
Yet if I were asked to relinquish my privacy and put my right of self-expression under some constraint in return for protecting my family from harm, I would agree. I would not enjoy the tradeoff. But I would rather accept the risks of losing privacy than put my family in danger. Almost everyone would make the same choice.
The goal for both law and technology is to create alternatives to this privacy-versus-security framing of the issue. That is a cul-de-sac for democracy in a technologically advancing country.
First Principle: Reduce Secrecy, Constrain Misuse
Edward Snowden’s revelations opened the architecture of the government’s surveillance program to public scrutiny. People disagree on the appropriateness of his conduct. However, many believe that the Snowden affair has demonstrated the need for maintaining as much transparency as possible. Determining what can be open and what must remain secret is an ongoing debate.
As a first step, the Review Group on Intelligence and Communications Technologies and the Privacy and Civil Liberties Oversight Board—empowered by the president and Congress, respectively—have published important recommendations on how intelligence oversight should be reshaped. Most prominently, the Board, in a split decision, called for an end to the NSA’s phone data collection, claiming that the program was ineffective and overbroad. The Review Group called for the data collected by the program to be housed by an independent, private third party, with the NSA gaining access only through court order.
So far, the president has adopted some, but not all, of the Board and Review Group recommendations. In March President Obama announced his intention to end the collection of phone records, though they will still be stored by phone companies and accessible to the NSA via court order. No change was made to collection of online data or foreign communications.
Congress, infamously inactive, is showing movement toward reduced secrecy. A bill cosponsored by Senator Patrick Leahy and Representative James Sensenbrenner proposes to end government’s bulk data collection—of phone and online data—and increase transparency about agreements between the state and business.
The Supreme Court, too, inevitably will be a critical forum for defining how secretive future programs can be. In December of last year, two federal judges came to opposite conclusions about the effectiveness and constitutionality of the NSA’s metadata programs, setting the stage for an eventual high court review. On December 16, Judge Richard J. Leon in Washington ruled that the programs were likely unconstitutional and “almost Orwellian.” Less than two weeks later, in New York, Judge William H. Pauley III ruled that the programs were legal and could have helped prevent the September 11 hijackings. A decision from the Supreme Court following the appeals will grant insight into how government surveillance programs might operate in the future.
So far the technology firms whose information has empowered government have not fully exercised their own right to speak. In December Apple, Facebook, Google, and others published an open letter asking the government for surveillance reform. But they have not sufficiently addressed their future role in enabling surveillance. They are asking only for “sensible limitations on [the government’s] ability to compel service providers to disclose user data.” Instead they should advocate, as newspapers often have, stronger individual rights of privacy and association, even if those efforts limit their current business models. For example, when the Supreme Court takes up the conflicting lower court rulings on metadata collection, the major information-gathering firms ought to file briefs defending the rights of individuals to create self-defined digital communities.
As I have discussed in detail elsewhere, individuals who are mistakenly flagged as criminals or in any way victimized by government data collection should receive monetary compensation. When mistakes are more costly, vigilance is likely to be greater, and government may be unable to shroud everything in a veil of secrecy. In addition, government can reduce the incidence of mistakes by continually obtaining the private sector’s technical advice on data storage, use, and retrieval practices. Individuals should know how the state gathers personal data from firms, and negotiations between the state and the firms should be transparent.
Second Principle: Bolster Individual Use of Privacy and Security Tools
Along with new guidelines for corporate and government use of data, citizens should be encouraged to take action on behalf of their own privacy and security. If citizens abdicate their obligation to protect themselves and society, they will lose the privacy rights that are their bulwark against state interference. Next, they will put at risk their ability to associate and to speak in favor of replacing those who lead the state. To say individuals should be sovereign means that they should be able to select their own leaders. That depends on access to information. The most familiar example of the key role of information in holding power is the Watergate break-in that ultimately brought about the only resignation of a president in American history. Foolish as the Nixon-backed attempt to steal data was, it is exactly the kind of thing the government can now do on a massive scale.
Happily, new technological breakthroughs have given individuals software tools that can revive privacy. These methods enable people to gather across space and to join in protective associations—digital versions of the Masons, if you will, but far more inclusive. Just as the cloud allows people to gather around interests of their choosing, these breakthroughs allow individuals to form secure groups around any number of activities and interests. For example, The Onion Router (Tor) allows for protected communication through a distributed network of relays. The block chain, which has made Bitcoin possible, offers a basis for creating social networks that are both secure against intruders and designed to assure personal privacy.
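To make the mechanism concrete, here is a minimal sketch, in Python, of the layered encryption at the heart of Tor’s design. It is not Tor’s actual protocol, which negotiates per-circuit keys and routes fixed-size cells through volunteer relays; the keys and helper functions below are hypothetical, and the example assumes the third-party cryptography library is available.

```python
# Toy illustration of layered ("onion") encryption, the core idea behind Tor.
# NOT Tor's real protocol: actual Tor negotiates per-circuit keys with each
# relay and transmits fixed-size cells. The keys and functions here are
# hypothetical and exist only to show why no single relay sees everything.
# Requires the third-party package:  pip install cryptography
from cryptography.fernet import Fernet

# Pretend these symmetric keys have been shared with three relays:
# an entry relay, a middle relay, and an exit relay.
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes, keys: list) -> bytes:
    """Sender wraps the message in layers, innermost layer for the exit relay."""
    wrapped = message
    for key in reversed(keys):           # exit layer first, entry layer outermost
        wrapped = Fernet(key).encrypt(wrapped)
    return wrapped

def peel(onion: bytes, key: bytes) -> bytes:
    """Each relay removes exactly one layer and forwards what remains."""
    return Fernet(key).decrypt(onion)

onion = wrap(b"meet at the usual place", relay_keys)
for key in relay_keys:                   # entry relay peels first, exit relay last
    onion = peel(onion, key)
print(onion)                             # b'meet at the usual place'
```

In the real system each layer also carries the address of the next hop, so a relay learns only its immediate neighbors, never the whole path from sender to destination.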
Software can permit individuals to create protected, secure, private spaces in cyberspace:
1) Individuals can act through pseudonyms to create networks of various sizes committed to common goals.
2) Individuals in these networks can exchange digitized goods and services at great speed and low transaction costs.
3) Through the innovation of the block chain, an individual’s ledgers—books of account, records of what was given and received—can remain encrypted, but a public record is maintained that displays all transactions. As a result, other participants in the exchange can ensure that transaction records are accurate, without actually viewing or inspecting another individual’s ledger.
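The third point can be made concrete with a minimal sketch. The code below is illustrative only and does not reproduce Bitcoin’s actual block format; it assumes a simplified structure in which each public block commits to the previous block and to hashes of transactions, so a participant can prove that a transaction was recorded without opening anyone else’s private ledger.

```python
# Illustrative sketch of a public ledger built from hash commitments.
# This is not Bitcoin's real block format; the structure below is a
# simplified stand-in meant only to show how others can verify a revealed
# transaction against the public chain without reading private ledgers.
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_block(prev_hash: str, tx_hashes: list) -> dict:
    """A public block: the previous block's hash plus transaction commitments."""
    block = {"prev": prev_hash, "tx_hashes": sorted(tx_hashes)}
    block["hash"] = sha256(json.dumps(block, sort_keys=True).encode())
    return block

# A transaction stays in a private ledger; only its hash is published.
private_tx = json.dumps({"from": "alice", "to": "bob", "amount": 5}).encode()
genesis = make_block("0" * 64, [sha256(private_tx)])
block_2 = make_block(genesis["hash"], [])     # later blocks extend the chain

def is_recorded(tx: bytes, block: dict) -> bool:
    """Anyone holding the revealed transaction can check it against the chain."""
    return sha256(tx) in block["tx_hashes"]

print(is_recorded(private_tx, genesis))       # True
print(block_2["prev"] == genesis["hash"])     # True: blocks are linked by hash
```

The chain of hashes is what lets strangers agree on a shared record without trusting one another, or a central intermediary, with the underlying details.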
These breakthroughs have been manifest in crypto-currencies such as Bitcoin. According to Marc Andreessen, one of the founders of the commercial Internet as we know it, exchanges of Bitcoins can be far more secure—resistant to fraud, theft, and inaccuracy—than anything now experienced either in the digital space of the Internet or the analog world of paper money. The combination of government and business may not be able to provide nearly as much protection as people can create for themselves by acting through trusted networks. Moreover, these software advances can facilitate a wide range of collective enterprises, including distributed solar power, health insurance, pensions, student loans, and neighborhood watches. Perhaps most intriguing for promoting democracy, these software protocols can make the casting of digital ballots secret and secure.
Instead of expressing indifference or even hostility to these advances, the state could help entrepreneurs put new tools safely in the hands of individuals. As it stands, these tools are pricey and hard to use. In a recent op-ed in the New York Times, Julia Angwin describes the high cost of secure encryption technology and the difficulty of verifying its efficacy. Considering the billions invested in data collection and surveillance, government could invest more in providing access to encrypted, safe technology. Programs that teach and broaden knowledge of Tor, which was created with government funding, and of Bitcoin would increase access and use. For other privacy technologies, government could certify or standardize their capabilities.
At the least, government should not seek to undermine this technology by purposefully weakening encryption, such as by introducing backdoors. As Richard Clarke, the former White House cyber-security czar and a member of President Obama’s Review Group, stated at the Cloud Security Alliance Summit, the way to restore trust “is to have the U.S. government forced by executive order, or forced by public law, to uphold encryption standards, to strengthen encryption standards and to promote encryption—not the other way around.”
As Snowden commented, speaking remotely at the South by Southwest festival, “The bottom line . . . is that encryption does work.” Yet even encryption is not enough to create secure, trusted digital spaces. As encryption and services such as Tor and Bitcoin continue to advance, individuals still need law to protect and enable the spaces that these technological breakthroughs permit.
Third Principle: Technology Still Needs the Support of Law
Technology is amoral. Criminals can use software for their own ends. Bitcoin gained early prominence because of Silk Road, a black-market Web site that facilitated anonymous drug trafficking. Terrorists use Tor to communicate anonymously. Governments therefore have good reason to seek access to these networks.
All digital networks, even those that consist solely of anonymous users and encrypted information, need some point of interaction with the analog world in order to have meaningful impact on the economy, society, or government. At that point of interaction, a Bitcoin must be exchanged for some number of dollars. At that same point, government needs to negotiate its access to private, encrypted networks.
Individuals should want a crossing point that is transparent, policed, and governed by sound rules. Law-abiding citizens might find their Tor- or Bitcoin-enabled networks invaded by bad actors. Like a religious order on a high mountaintop that has been infiltrated by a criminal from the outside world, the most private network sometimes needs to call the police.
The state, as the policing power, will need to be able to cross between the analog and digital sides of the street. It must be able to inquire into cyberspace activities. It must be able to obtain the real identities of victims, witnesses, and suspects operating under pseudonyms in digital space. Ultimately, it needs to bring the guilty back into the analog world for trial and punishment—cyber-libertarianism as a system for organizing complex societies has its limits.
Warrants and the tried-and-true practices of the criminal justice system, not secret backdoors that weaken encryption, must be the means through which the state polices encrypted digital space. Undercover police officers attempt to uncover drug trafficking rings in the analog world; pseudonymous police officers can conduct investigations against drug traffickers operating behind their own pseudonyms in encrypted spaces.
Properly employed, law can make these spaces stronger, more usable. The Constitution and Bill of Rights can be interpreted so as to establish the rights and powers that enable citizens to be sovereign: our foundational laws are enablers, not just limitations. Similarly, law in this digital age should encourage the empowerment of individuals through new technologies; efforts should be devoted to protecting digital spaces. Arrest whoever stole the Bitcoins from Mt. Gox. Find the perpetrators of malware and jail them. Insist that social networks and retailers provide state-of-the-art protection for individuals’ digital information or be exposed to serious penalties.
The challenge in online privacy and security is a basic one: Who rules? Through law, citizens make choices, but, more importantly, they confer responsibility and assign accountability. That is, citizens decide who rules.
If individuals are to weigh the balance of power that the digital age offers, transparency must be maintained. The president, review panels, Congress, the Supreme Court, and technology companies must all work to ensure that secrecy is kept to a minimum and that individuals know the terms of the contracts between state and firm.
Using encryption and the breakthroughs presented by computer science, individuals can form their own secure digital spaces, where they should be afforded the same privacy rights they would enjoy in the analog world. In this way individuals can force the government to treat them as individuals. These rights are not absolute, however. Through law, government will need to define its ability to cross between analog and digital space in order to prevent crime and to strengthen the foundation of trust that encryption has offered.
By purposefully creating trust on the basis of effective technology and law, individuals, acting alone or in groups, can rule their own lives. They can express themselves without doing harm to others and exchange value for value. They can then make real a principle the United States has held dear for more than two centuries: government of the people, by the people, and for the people.
This forum is part of an ongoing collaboration with the Bowen H. McCoy Family Center for Ethics in Society at Stanford University.
Photograph: Andreas Herten