Lessons Learned Too Well: The Evolution of Internet Regulation (4)

Read parts One, Two, and Three.

A moment’s reflection makes clear why the question of whether users can be anonymous, or can instead be required to use tools that identify them, is so often so fundamental. It is that old equation: “Packet-switching + strong crypto = total communicative freedom”.

If you can identify speakers and listeners, you can often tell what they are up to even if you cannot eavesdrop on the content of their communications. This is called traffic analysis: if you know that A called B, and that B then ran out of the house to do something, you can often infer what A told B. Content industries with copyrights to protect, and governments with law enforcement and intelligence interests to pursue, now fully understand this.
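The point can be made concrete with a toy sketch. The records below are entirely invented, and the thresholds are arbitrary, but they show how bare metadata – who contacted whom, and when – reveals relationships and relay patterns without a single word of content:

```python
# Toy traffic analysis: no message content, only (caller, callee, timestamp).
# All names, times, and thresholds here are hypothetical illustrations.
from collections import defaultdict
from datetime import datetime

# Hypothetical call-detail records of the kind a retention law would archive.
records = [
    ("A", "B", datetime(2011, 5, 1, 22, 4)),
    ("A", "B", datetime(2011, 5, 2, 22, 6)),
    ("B", "C", datetime(2011, 5, 2, 22, 9)),
    ("A", "B", datetime(2011, 5, 3, 22, 5)),
]

# Count contacts per ordered pair: frequent, regular contact suggests a tie.
pairs = defaultdict(int)
for caller, callee, _ in records:
    pairs[(caller, callee)] += 1

# Flag "relay" patterns: B calls C shortly after hearing from A,
# suggesting A's message was passed along.
relays = [
    (r1, r2)
    for r1 in records
    for r2 in records
    if r1[1] == r2[0]                                  # same middleman
    and 0 < (r2[2] - r1[2]).total_seconds() <= 600     # within 10 minutes
]

print(dict(pairs))   # A and B talk nightly; B reached C once
print(len(relays))   # one A -> B -> C relay within minutes
```

A real analyst would of course work over millions of records with graph tools rather than nested loops, but the inference step is no deeper than this: the metadata alone tells the story.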

What is more, governments understand that online, a lot can happen quickly. Thus, rather than having to anticipate the need to wiretap, or even react in real time, how much better it would be if it were possible to turn back the virtual clock and wiretap the past in those cases where it seems necessary or convenient to do so. And that – known as data retention – is basically where we seem to be heading in this country, and where Europe already is. No SciFi time machine is required: European law now requires telephone companies, ISPs and other intermediaries to archive information about their customers’ communications – email headers and call setup data – for periods of six months to a year, although implementation has been uneven and controversial at the member state level. Earlier this year, the Obama Justice Department asked Congress to enact similar data retention legislation in the US.

Meanwhile, in the culmination of a campaign that started in 2006, the FBI recently asked Congress to expand CALEA to webmail, social networking sites and peer-to-peer services. In each case, the goal would be to require companies involved in online communications to re-engineer their software so that law enforcement could easily access those communications. In addition, the FBI has sought and received funding for its “Going Dark” project, which seeks legal and technical innovations to enhance lawful communications intercept capabilities.

If you look around, you find that anonymity – and communicative privacy more generally – have a diverse coalition ranged against them. Fans of mandatory identification include the military, which worries about cyber terrorism; the police, who want easier ways to catch bad guys; publishers who want to protect established business models; people subjected to anonymous libel who understandably want to find their victimizers; and marketers salivating at the thought of systems designed to make every internet (and perhaps every real-world) move recordable, accessible, and subject to data mining.

Indeed, at present the most vocal critics of anonymous communication include some law professors, notably feminists and some of my fellow progressives, who argue that anonymity is not just a cloak behind which the social oppressor hides, but a condition that pathological personalities find empowering and intoxicating, making them more likely to engage in hate speech or to silence women or minorities. The proposed solution is to require the intermediaries, the ISPs, to keep logs of their visitors, eliminating anonymity so that we can track the perpetrators. (And it is sometimes suggested that those hosting open forums should act as cyber-censors, or face distributor liability, thus enlisting private parties as unpaid regulators.)

In the face of all these disparate voices, and all these interests, so many of which are legitimate and so many of which are powerful (and a few of which are even both legitimate and powerful), is the case for protecting anonymous speech online, and indeed trying to prevent real-time tracking of our movements in both cyberspace and three-dimensional life (or meatspace as we sometimes call it), anything more than nostalgia for the days of the 28K modem connection? I think that it is more than nostalgia, and that there is something here of great importance.

The case for preserving the option of communicative anonymity is both constitutional and instrumental, both domestic and international.

The constitutional case is well known, and in the interest of moving more rapidly to the free drinks, I will say little about First Amendment free speech. It is hornbook constitutional law that the rights to anonymous speech and association are key protections for members of threatened minorities and unpopular organizations. There is a line of cases starting with Talley v California, then McIntyre v Ohio Elections Comm’n, and running through the more recent Watchtower Bible and Tract Society, in which the Supreme Court sets out a sweeping constitutional right to anonymous religious and political speech. A government-mandated ban on anonymous speech such as proposed by the military and by some progressive activists would violate this right. And for better or worse, there is no way to distinguish religious or political speech from other speech online – there is no reliable ‘politics bit’. So if we want to protect anonymous political speech, it turns out we have to be ready to protect all speech. We also know, however, that doctrine has a tendency to bend in the face of perceived emergencies such as 9/11.

But this isn’t just a question of doctrine. Protecting anonymous speech is good policy for the world, and also good foreign relations. Dissidents around the world rely on US-based servers to get out their message. It would be terribly unwise to engineer our communications software and hardware in a way that can be abused to undermine them. The creation of that option might tempt the next Nixon or Kissinger seeking to cozy up to foreign dictators to slip them information about their dissidents. It might even be legal since the courts tell us that those foreigners do not have First Amendment rights when they are based abroad.

But there’s an even more important reason to resist efforts to make technologies of identification legally required. When we legislate communications architectures that have back doors, or front doors, or even spy holes, for law enforcement, we create capabilities that pose immediate dangers. If history is any guide, we’ll get it wrong, and create technical insecurities that will plague us all. But even if we get the technology just right, we create legal and moral problems. Perhaps we can trust our own commitment to the Rule of Law to protect us from too much abuse of these capabilities; that is a different debate. But we can be certain that when we design architectures of identification, whatever we require, or whatever we standardize on, will without any doubt be exported to many other places, including those where the commitment is to a different kind of law than the one to which we aspire.

This is not hypothetical. It is already happening. Several years ago Pakistan sought to prohibit end-to-end encryption of mobile telecommunications, wanting calls in the clear at each base station so that they could be easily wiretapped. Just in the past year, Research In Motion found itself struggling with several countries, including India and the United Arab Emirates, which demanded that the company hand over the keys to the encrypted messages sent by BlackBerry users.

In many countries, if we give the government the power to prevent anonymity, to force identification, to gather traffic analysis, it will be used to stamp out dissidents. We should admit that sometimes those dissidents may be terrorists. Technology can empower very bad people as well as very good ones. But that is also my point: sometimes the very bad people are in power, and the people against whom they will use technologies of identification are the human rights activists, the democratic and non-violent protestors, the Twitter-users planning demonstrations in Egypt’s Tahrir Square. And after the technologies of identification will come the technologies of retaliation.

Way back in 1983, Ithiel de Sola Pool wrote presciently about networking technology:

Technology will not be to blame if Americans fail to encompass this system within the political tradition of free speech. On the contrary, electronic technology is conducive to freedom. … Computerized information networks of the twenty-first century need not be any less free for all to use without hindrance than was the printing press. Only political errors might make them so.

History teaches us that these errors are most likely in periods of hysteria. And we have just lived through – or may still be living through – one such period of hysteria following the tragedy of 9/11. Our ability to take the long view was tested then and will be tested again. Consider, most recently, the size of the furore over WikiLeaks, and then consider that even if the costs outweighed the benefits – which could be debated – it is already clear that our government’s and media’s panicked response was excessive.

I know it is all very well for an academic to stand in such genteel surroundings and ask that we not give in to fear. It is fine for me to tell you that when we are told that in order to be safe, and in order to protect the profits of an industry important to our trade balance, we will have to create an infrastructure of mandatory identification that will be persistent and eventually ineradicable, we should first demand proof that there really is a threat and that there are no less restrictive means, and insist that all the externalities be considered.

But here, anyway, are a few suggestions for avoiding what could otherwise be an outcome we will regret, also based on lessons learned from the past twenty years or so:

  • Demand evidence for mandatory identification and data retention rules, not just plausible anecdotes.
  • Don’t lock technology into law.
  • Always consider what an identification rule proposed for one purpose can do in the hands of despots.
  • Empower user self-regulation whenever possible rather than chokepoint regulation. Design filters and annotators before designing walls and takedown mechanisms. Make it an offense for devices to phone home without clear, knowing, and meaningful consent on the part of the speaker, reader, listener, or viewer.
  • Build alternatives in technology and law that allow people to control how much their counterparts know about them, and which, by making selective release of information easier, reduce the need for a binary choice between anonymity and data nudity.
  • Require that privacy-enhancement be built in at the design level. And vote with your dollars if designers fail to do so.

Those who disagree with what I am suggesting worry, with some reason, about new technology undermining the powers of states and sovereigns. In some of my writing I’ve argued that most core government powers, like the power to tax, will not in fact be undermined in any substantial way so long as we still need to eat and we want things delivered in those nice brown UPS trucks. But more generally, even if it is a bit scary, and even if this power is sometimes misused, why is allowing people to speak freely to each other, without fear of eavesdroppers or retaliation, such a terrible thing?

It is terrible, ultimately, because it is empowering. Communicative freedom allows people to share ideas, to form groups, and to engage not just in self-realization, but in small scale and even mass political organization. Here then is the most important lesson to be learned from the story I have outlined to you today. The Internet and related communications technologies have shown a great potential to empower end-users, but also to empower firms and especially governments at their expense. Governments (and firms) around the world have learned this lesson all too well, and are taking careful, thorough, and often coordinated steps to ensure that they will be among the winners when the bits settle.

The thing to watch out for, therefore, is whether we, and especially those individuals already burdened with repressive regimes, will be among the winners also.

This entry was posted in Law: Internet Law, Talks & Conferences. Bookmark the permalink.

One Response to Lessons Learned Too Well: The Evolution of Internet Regulation (4)

  1. Peter D Lederer says:

    Thanks for posting this, Michael. It was wonderful to be able to attend and hear you speak, and I am glad to be able to read what you had to say at leisure.
    The comments made by Jonathon T. Weinberg from Wayne State, and Cindy Cohn of the EFF were excellent! Have they also been preserved somewhere?
