
Privacy is Not Dead—It's Inevitable

May 28, 2014

The pace of technological change we have seen over the past fifteen years has been so breathtaking, so unrelenting, that it’s worth pausing to reflect on it for a moment. Fifteen years ago, our world was very different. Bill Clinton was President. The Red Sox had not won the World Series for almost a century. Mobile phones existed, but were little more than walkie-talkies with flip-tops. And the idea of total surveillance was unthinkable, a spectre of dystopian fiction and failed communist and fascist states from our grandparents’ time.

Fifteen years ago, our politics simply would not permit total surveillance, along the lines of the Stasi or J. Edgar Hoover’s COINTELPRO. The veterans of the cold war against communism and the hot war against fascism were still alive and vigilant. Surveillance was not only politically inconceivable but also technically impossible. Telephone metadata existed (including new forms of metadata for mobile phones), but the amount of data humans generated in their daily activities was vastly smaller.

What a difference fifteen years makes. Our technological and political environment has radically changed. The combination of the consumer phase of the digital revolution and the decrease in the political will to watch the watchers has meant that far more about us is digitized, much of it without any oversight or regulation. After the shock of 9/11, Congress authorized the Patriot Act, and many secret government programs (some legal, some not) operated under the radar.

As we move forward, it is essential that we figure out how to translate the values of free speech, privacy, due process, and equality into the new digital environment—what they mean in an era of big data and pervasive surveillance, and how to build them into the fabric of our digital society. We might decide to reject any idea of digital due process. Or we might embrace it fully. Most likely, we’ll end up somewhere in between. Regardless, privacy of some kind will be inevitable — but what those rights look like depends on a democratic environment.


Let’s consider first the inevitability of privacy. We often think of privacy as a factual state: how much do people know about me? As more and more information is collected and tracked, and fewer dimensions of human life remain opaque to observation, privacy would seem to be in retreat, perhaps irreversibly so. From that perspective, it is easy for commentators to suggest, glibly, that privacy is dead. Just this week, Thomas Friedman suggested in the New York Times that “privacy is over.” But there are other ways of thinking about privacy. Web sites and doctors’ offices have privacy policies about how they will keep your information private and confidential. In an information age, this way of understanding privacy—as keeping secrets rather than merely being secret—will become more important. When we collect information about people, what happens to that information? Is its use unrestricted? Is its disclosure unrestricted? These areas will be regulated in one form or another. Law will play a role, and if Congress is unable or unwilling to regulate, then leadership will come from elsewhere, whether the White House, the FTC, or foreign sources of law, like this month’s decision by the Court of Justice of the European Union, in a case referred from Spain, giving people a right to control how search engines report results about them. Globally operating technology companies are bound by global rules, and European and Canadian regulators don’t buy the “death of privacy” fallacy. Even putting law to one side, information rules will be imposed inevitably through social norms, technology, or the competition of the market. Witness Facebook’s continual improvement of its “privacy controls” after a decade of pressure.

When we understand that “privacy” is shorthand for the regulation of information flows, it’s clear that information rules of some sort are inevitable in our digital society. The idea that privacy is dead is a myth. Privacy—the rules we have to govern access to information—is just changing, as it has always been changing. The rules governing the creation, ownership, and mortality of data can be permissive or restrictive; they may create winners and losers, but they will exist nonetheless. And some of those rules are not just going to be privacy rules (rules governing information flows), but privacy-protective rules: ones that restrict the collection, use, or disclosure of information.

Consider the National Security Agency. The NSA purports to prevent harm by tracking our movements and communications—denying us a factual state of privacy we have enjoyed in the past from the state. This window into our lives is one kind of privacy rule. But the NSA also argues that it needs to perform its operations in secret—secret data collection, secret technologies, secret courts. It claims that if it were forced to disclose its operations, the targets of its surveillance would be able to avoid it. This is also a privacy rule—the NSA argues that operational privacy is necessary for it to do its job. Facebook and other technology companies also use trade secret law, computer security tools, and non-disclosure agreements to keep their own data private. When the very entities that would deny the existence of privacy rely on privacy rules to protect their own interests, it becomes clear that privacy is not doomed. This is what I call the transparency paradox.

But if we care about civil liberties, we need to foster an ecosystem in which those liberties can thrive. Take freedom of speech, for example. We often (correctly) talk about the robust culture of free speech enjoyed by Americans. Certainly, the Supreme Court’s interpretation of the First Amendment has played an important role in the exercise of this essential freedom. Legal doctrine has been important, but the cultural, social, and economic inputs have been equally essential in making free speech possible. Without a robust democratic atmosphere, freedom of speech can become a shallow protection, in which people might say a lot without saying anything of substance at all.

In the twentieth century, this atmosphere was created by a large middle class, universal literacy, broad access to education, a culture of questioning authority and protection for dissenters, and cheap postal rates for printed matter, among other things. In the digital age, if we care about our democratic atmosphere, we need to worry about things like access to technology, the “digital divide,” network neutrality, digital literacy, and technologies to verify that the data on the hard drives hasn’t been tampered with. We also need to ensure access to effective technological tools like cryptography, information security, and other technologies that promote trust in society. Reed Hundt’s fine essay “Saving Privacy” reminds us that government transparency is essential for democracy, that we need to empower individuals in their use of privacy and security tools, and that there will still be an essential role for law to play in the digital world we are building together.


Most of all, though, we need to worry about intellectual privacy. Intellectual privacy is protection from surveillance or interference when we are engaged in the processes of generating ideas: thinking, reading, and speaking with confidantes before our ideas are ready for public consumption. Law has protected intellectual privacy in the past. But the digital revolution has raised the stakes. More and more, the acts of reading, thinking, and private communication are mediated by electronic technologies, including personal computers, smartphones, e-books, and tablets. Whether we call it surveillance or transparency, being watched has effects on behavior. When we watch the NSA or the police, they behave better. And when the police watch us, so do we, whether that means not speeding for some of us or not stealing for others.

But critically, when we are using computers to read, think, and make sense of the world and engage with ideas, there is no such thing as a bad idea or bad behavior. If our society is to remain free, we must be able to engage with any ideas, whether we agree with them or not. This is true across a range of topics, from Mein Kampf to the Vagina Monologues, and from erotica to Fox News. But constant, unrelenting, perpetual surveillance of our tastes in politics, art, literature, TV, or sex will drive our reading (and by extension our tastes) to the mainstream, the boring, and the bland. As we build our digital society, we need to ensure that we carve out and protect the intellectual privacy that political freedom requires to survive.

Fifteen years ago, the Internet was heralded as a great forum for intellectual liberation—a place to think for ourselves and meet like- (and different-) minded people unmediated by censors or surveillance. Yet, incrementally, the Internet has been transformed from a place of anarchic freedom to something much closer to an environment of total tracking and total control. All too often, it may seem like the digital future is unfolding before our eyes in some kind of natural and unstoppable evolution. But the final state of Internet architecture is not inevitable, nor is it unchangeable. It is up for grabs. In the end, the choices we make now about surveillance and privacy, about freedom and control in the digital environment will define the society of the very near future. I fear that the “privacy is dead” rhetoric is masking a sinister shift, from a world in which individuals have privacy but exercise transparency over the powerful institutions in their lives, to a world in which our lives are transparent but the powerful institutions are opaque.  That's a pretty scary future, and one which we’ve told ourselves for decades that we don’t want.  The availability of cheap smartphones and free apps shouldn’t change that.  We should choose both control of our digital information and the benefits of our digital tools.  We can make that choice, but the “privacy is dead” rhetoric is obscuring the existence of the choice.  

Let’s realize that privacy—some system of rules governing information—is inevitable, and argue instead about what kind of digital society we are building under the rhetoric.  If we care about living in a society with free speech and free minds, let’s be sure that intellectual privacy is part of our digital future.

Thumbnail image: Vu Bui


The big problem with using laws to regulate technology is that computers are good at processing data and bad at determining the purpose of that processing. This echoes foundational results of computer science: the Church-Turing thesis and the undecidability of the Halting Problem (more precisely, Rice's theorem, which says no algorithm can decide nontrivial questions about what another program's behavior means). The only reliable way to prevent privacy violations is to keep information out of connected systems.
Consider the massive Target security breach. Target did not mean to give everybody's information to hackers. But it left a network path between an HVAC contractor's systems and its payment systems, in flagrant violation of basic segmentation standards; its CIO deservedly resigned in disgrace. The customer information system was working properly; it simply could not tell that it wasn't supposed to hand data over to whoever held the contractor's stolen credentials.
Obviously, access to customer information should be highly restricted. The problem comes when companies derive value from collecting and correlating <em>consumer</em> information. All that data sits in computers connected to the Internet, and it's just a matter of time before it leaks somehow, somewhere. That's why so many of us think this data should not be collected in the first place. As long as your data are on the Internet, your privacy is dead.
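The commenter's point that a computer can verify *who* is asking but not *why* can be shown with a minimal sketch (all names, credentials, and records here are hypothetical illustrations, not Target's actual systems):

```python
# A toy access check: it tests credentials, because credentials are a
# checkable property. "Purpose" is not, so a stolen vendor credential
# looks exactly like a legitimate point-of-sale lookup.
CREDENTIALS = {"pos-terminal": "s3cret", "hvac-vendor": "hunter2"}
CUSTOMER_RECORDS = {"alice": "4111-1111-1111-1111"}

def fetch_record(user: str, password: str, customer: str) -> str:
    """Return a customer record to any holder of valid credentials."""
    if CREDENTIALS.get(user) != password:
        raise PermissionError("bad credentials")
    return CUSTOMER_RECORDS[customer]

# The system cannot distinguish intent: both calls succeed identically.
assert fetch_record("pos-terminal", "s3cret", "alice") == \
       fetch_record("hvac-vendor", "hunter2", "alice")
```

This is why network segmentation matters: once a credential is honored, intent can no longer be checked, so the safer design is to give the vendor network no route to the payment network at all.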

To extend your last thought: this means we need to accept that, at some point in our lifetimes, the markers of our digital identities (the financial, medical, and personally identifying data required to spoof us) will become available for the world to see. Mathematically, it's like a slow-moving equation that starts at 0 (false) and, with each digital break-in, creeps toward 1 (true).
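The "creeps from 0 toward 1" intuition can be made concrete with basic probability (the 5% annual breach rate below is purely illustrative): if each year carries an independent chance p of a breach, the chance of at least one breach over n years is 1 − (1 − p)^n, which tends toward 1.

```python
# Chance of at least one breach in `years` years, given an independent
# per-year breach probability `p`: the complement of "no breach, every year".
def breach_probability(p: float, years: int) -> float:
    return 1 - (1 - p) ** years

# Even a modest 5% annual risk compounds toward near-certainty:
assert breach_probability(0.05, 1) < 0.06
assert breach_probability(0.05, 50) > 0.92    # roughly 0.923
assert breach_probability(0.05, 200) > 0.999  # roughly 0.99997
```

The independence assumption is a simplification, but the direction of the argument holds: any fixed nonzero per-period risk accumulates toward 1 over a long enough horizon.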

People who seek power do not want to forget the objects that grant them power in the first place, especially if this power happens to control our lives.

This is an absolutely awful approach to these problems.  Simply redefining the word "privacy" to mean "some system of rules governing information"—so that there can still be "constant, unrelenting, perpetual surveillance of our tastes in politics, art, literature, TV, or sex" but said tastes can be called "private" if Facebook and Google have some policies written somewhere and provide some buttons that change nothing about the fact the surveillance is happening in the first place—serves only to deceive people who are familiar with the meaning of the English word "private".

The author seems intent on facilitating the "sinister shift" he supposedly dreads.  The obscuring rhetoric is this argument, that Facebook or Google should be able to tell people "you have intellectual privacy" even when the systems of those companies directly observe everything done on every website that includes a snippet of their javascript.  (Which is every website, at this point.)

Yes, have all the laws and policies and controls described in this article, but don't call the result "privacy".

My ex graduated with honors, on scholarship, from a graduate program at MIT, and his first-in-class finish at Annapolis didn't hurt either in terms of an education. A few years in the military contributed other skills as well.
When he walked out on me the day after a disabling car accident that left me immobile for weeks, strange things started happening with my cell phone and all my computers. I was put in that awkward position of knowing I was being "HACKED", and of being told I was crazy by anyone I shared this opinion with.

I spoke with the local police. "This is a civil matter," they said. I told them he had been breaking in while my phone's GPS showed me as being out. I was relegated to using the cheapest phone I could find. I went through six iPhones, and four computers where he took over admin rights and turned them into paperweights.

I spoke with, and was turned away by: the local police, detectives, the FCC, and the cyber crimes unit at the FBI.

The FBI sent two agents who basically said, "Too bad, you were seeing him romantically; that's what you get."

They told ME to change my identity, and gave me 45 minutes on how to wipe out every electronic piece of me and start over against a
