Monthly Archives: April 2011

Lessons Learned Too Well: The Evolution of Internet Regulation (4)

Read parts One, Two, and Three.

A moment’s reflection makes clear why the question of whether a user can remain anonymous, or can instead be required to use tools that identify him, is so often so fundamental. It is that old equation: “Packet-switching + strong crypto = total communicative freedom”.

If you can identify speakers and listeners, you can often tell what they are up to even if you cannot eavesdrop on the content of their communications. This is called traffic analysis: if you know that A called B, and that B then ran out of the house to do something, you can often infer what A told B. Content industries with copyrights to protect, and governments with law enforcement and intelligence interests to advance, now fully understand this.
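
To see how little metadata this takes, here is a toy sketch in Python. The call records, names, and timestamps are all hypothetical; the point is only that retained who-called-whom data yields a social graph without any access to content.

    from collections import defaultdict

    # Hypothetical retained call records: (caller, callee, timestamp)
    # only -- no content at all.
    records = [
        ("A", "B", "2011-04-01T09:00"),
        ("B", "C", "2011-04-01T09:02"),
        ("B", "D", "2011-04-01T09:03"),
    ]

    # Build the contact graph implied by the metadata.
    contacts = defaultdict(set)
    for caller, callee, _ in records:
        contacts[caller].add(callee)

    # B called two people within minutes of hearing from A -- a classic
    # traffic-analysis signature of A passing B an instruction.
    print(dict(contacts))  # e.g. {'A': {'B'}, 'B': {'C', 'D'}}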

What is more, governments understand that online, a lot can happen quickly. Thus, rather than have to anticipate the need to wiretap, or even react in real time, how much better it would be if it were possible to turn back the virtual clock and wiretap the past in those cases where it seems necessary or convenient to do so. And that – known as data retention – is basically where we seem to be heading in this country, and where Europe already is. No SciFi time machine is required: European law now requires telephone companies, ISPs and other intermediaries to archive information about the communications – email headers and call setup data – of their customers for periods of six months to two years, although implementation has been uneven and controversial at the member state level. Earlier this year, the Obama Justice Department asked Congress to enact similar data retention legislation in the US.

Meanwhile, in the culmination of a campaign that started in 2006, the FBI recently asked Congress to expand CALEA to webmail, social networking sites and peer-to-peer services. In each case, the goal would be to require the companies that carry online communications to re-engineer their software so that law enforcement could easily intercept those communications. In addition, the FBI has sought and received funding for its “Going Dark” project, which seeks legal and technical innovations to enhance lawful communications intercept capabilities.

If you look around, you find that anonymity – and communicative privacy more generally – have a diverse coalition ranged against them. Fans of mandatory identification include the military, which worries about cyber terrorism; the police, who want easier ways to catch bad guys; publishers who want to protect established business models; people subjected to anonymous libel who understandably want to find their victimizers; and marketers salivating at the thought of systems designed to make every internet (and perhaps every real-world) move recordable, accessible, and subject to data mining.

Indeed, at present the most vocal critics of anonymous communication include some law professors, notably feminists and some of my fellow progressives, who argue that anonymity is not just a cloak behind which the social oppressor hides, but a condition that pathological personalities find empowering and intoxicating, making them more likely to engage in hate speech or to silence women or minorities. The proposed solution is to require the intermediaries, the ISPs, to keep logs of their visitors, eliminating anonymity so that we can track the perpetrators. (And sometimes it is suggested that those hosting open forums should act as cyber-censors, or face distributor liability, thus enlisting private parties as unpaid regulators.)

In the face of all these disparate voices, and all these interests, so many of which are legitimate and so many of which are powerful (and a few of which are even both legitimate and powerful), is the case for protecting anonymous speech online – and indeed for trying to prevent real-time tracking of our movements in both cyberspace and three-dimensional life (or meatspace, as we sometimes call it) – anything more than nostalgia for the days of the 28.8K modem connection? I think that it is more than nostalgia, and that there is something here of great importance.

The case for preserving the option of communicative anonymity is both constitutional and instrumental, both domestic and international.

The constitutional case is well known, and in the interest of moving more rapidly to the free drinks, I will say little about First Amendment free speech. It is hornbook constitutional law that the rights to anonymous speech and association are key protections for members of threatened minorities and unpopular organizations. There is a line of cases starting with Talley v. California, continuing with McIntyre v. Ohio Elections Comm’n, and running through the more recent Watchtower Bible and Tract Society, in which the Supreme Court sets out a sweeping constitutional right to anonymous religious and political speech. A government-mandated ban on anonymous speech such as that proposed by the military and by some progressive activists would violate this right. And for better or worse, there is no way to distinguish religious or political speech from other speech online – there is no reliable ‘politics bit’. So if we want to protect anonymous political speech, it turns out we have to be ready to protect all speech. We also know, however, that doctrine has a tendency to bend in the face of perceived emergencies such as 9/11.

But this isn’t just a question of doctrine. Protecting anonymous speech is good policy for the world, and also good foreign relations. Dissidents around the world rely on US-based servers to get out their message. It would be terribly unwise to engineer our communications software and hardware in a way that can be abused to undermine them. The creation of that capability might tempt the next Nixon or Kissinger, seeking to cozy up to foreign dictators, to slip them information about their dissidents. It might even be legal, since the courts tell us that those foreigners do not have First Amendment rights when they are based abroad.

But there’s an even more important reason to resist efforts to make technologies of identification legally required. When we legislate communications architectures that have back doors, or front doors, or even spy holes, for law enforcement, we create capabilities that pose immediate dangers. If history is any guide, we’ll get it wrong, and create technical insecurities that will plague us all. But even if we get the technology just right, we create legal and moral problems. Perhaps we can trust our own commitment to the Rule of Law to protect us from too much abuse of these capabilities; that is a different debate. But we can be certain that when we design architectures of identification, whatever we require, or whatever we standardize on, will without any doubt be exported to many other places, including those where the commitment is to a different kind of law than that to which we aspire.

This is not hypothetical. It is already happening. Several years ago Pakistan sought to prohibit end-to-end encryption of mobile telecommunications, wanting calls in the clear at each base station so that they could be easily wiretapped. Just in the past year, Research In Motion found itself struggling with several countries, including India and the United Arab Emirates, which demanded that the company hand over the keys to the encrypted messages sent by BlackBerry users.

In many countries, if we give the government the power to prevent anonymity, to force identification, and to conduct traffic analysis, that power will be used to stamp out dissidents. We should admit that sometimes those dissidents may be terrorists. Technology can empower very bad people as well as very good ones. But that is also my point: sometimes the very bad people are in power, and the people against whom they will use technologies of identification are the human rights activists, the democratic and non-violent protestors, the Twitter-users planning demonstrations in Egypt’s Tahrir Square. And after the technologies of identification will come the technologies of retaliation.

Way back in 1983, Ithiel de Sola Pool wrote presciently about networking technology:

Technology will not be to blame if Americans fail to encompass this system within the political tradition of free speech. On the contrary, electronic technology is conducive to freedom. … Computerized information networks of the twenty-first century need not be any less free for all to use without hindrance than was the printing press. Only political errors might make them so.

History teaches us that these errors are most likely in periods of hysteria. And we have just lived through – or may still be living through – one such period of hysteria following the tragedy of 9/11. Our ability to take the long view was tested then and will be tested again. Consider, most recently, the size of the furor over WikiLeaks: whether the costs of the disclosures outweighed the benefits can be debated, but it is already clear that our government’s and media’s panicked response was excessive.

I know it is all very well for an academic to stand in such genteel surroundings and ask that we not give in to fear. It is fine for me to tell you that when we are told that, in order to be safe and in order to protect the profits of an industry important to our trade balance, we must create an infrastructure of mandatory identification that will be persistent and eventually ineradicable, we should first demand proof that there really is a threat, that there are no less restrictive means, and that all the externalities have been considered.

But here, anyway, are a few suggestions for avoiding what could otherwise be an outcome we will regret, also based on lessons learned from the past twenty years or so:

  • Demand evidence for mandatory identification and data retention rules, not just plausible anecdotes.
  • Don’t lock technology into law.
  • Always consider what an identification rule proposed for one purpose can do in the hands of despots.
  • Empower user self-regulation whenever possible rather than chokepoint regulation. Design filters and annotators before designing walls and takedown mechanisms. Make it an offense for devices to phone home without clear, knowing, and meaningful consent on the part of the speaker, reader, listener, or viewer.
  • Build alternatives in technology and law that allow people to control how much their counterparts know about them, and which, by making selective release of information easier, reduce the need for a binary choice between anonymity and data nudity.
  • Require that privacy-enhancement be built in at the design level. And vote with your dollars if designers fail to do so.

Those who disagree with what I am suggesting worry, with some reason, about new technology undermining the powers of states and sovereigns. In some of my writing I’ve argued that most core government powers, like the power to tax, will not in fact be undermined in any substantial way so long as we still need to eat and we want things delivered in those nice brown UPS trucks. But more generally, even if it is a bit scary, and even if this power is sometimes misused, why is allowing people to speak freely to each other, without fear of eavesdroppers or retaliation, such a terrible thing?

It is terrible, ultimately, because it is empowering. Communicative freedom allows people to share ideas, to form groups, and to engage not just in self-realization, but in small-scale and even mass political organization. Here, then, is the most important lesson to be learned from the story I have outlined to you today. The Internet and related communications technologies have shown a great potential to empower end-users, but also to empower firms and especially governments at the end-users’ expense. Governments (and firms) around the world have learned this lesson all too well, and are taking careful, thorough, and often coordinated steps to ensure that they will be among the winners when the bits settle.

The thing to watch out for, therefore, is whether we, and especially those individuals already burdened with repressive regimes, will be among the winners also.


“Mozillazine Servers Down” Message

Can’t find any servers to help you

The mozillazine servers seem to be on strike. The systems team is negotiating…

Please try again later

[this used to be the "drupal.org error"]

via Mozillazine Servers Down.

Of course, in a few years, when our servers reach sentience, and then become our robot overlords, this may not seem so funny.

(I was trying to find out how to get Helvetica as a font in Mozilla, and learning that it may be a very bad idea because of this buggy behavior.)


Lessons Learned Too Well: The Evolution of Internet Regulation (3)

Read Part One and Part Two.

The search for more effective legislation to fight disruptions to settled expectations led governments, even at an early stage, to experiment with chokepoint regulation, a development that would come to full fruition later. If the end-users in democracies were too difficult to police, the intermediaries on whom they depended for services – Internet Service Providers (ISPs), credit card companies, domain name registrars, makers of computer and telephone hardware and even software – were far less numerous, easier to find, and far easier to persuade to comply with rules that end-users, given a choice, might well have balked at. The lesson was not lost on regulators in both democracies and despotisms. Where once a government might have sought to set a technical standard or to influence the marketplace, now it would legislate the standard directly. If code was not law enough, then bring on the law to determine the code – or even the hardware.

An early example of this type of legislation – and of its dangers – was the US’s Communications Assistance for Law Enforcement Act (CALEA) of 1994. CALEA was sold to lawmakers as a way to preserve – preserve, not expand – law enforcement wiretapping capabilities by requiring telephone companies to design their networks to be wiretap-ready. Since 1994, however, the FBI has used CALEA to expand its capabilities, turning wireless phones into tracking devices, requiring phone companies to collect specific signaling information for the convenience of the government, and allowing interception of packet communications without privacy protections. In 2005, the Federal Communications Commission granted an FBI petition and expanded CALEA to broadband Internet access and VOIP services, a decision upheld by the D.C. Circuit in 2006.

A similar chokepoint strategy was deployed against “cybersquatters”, the name coined for the small group of profiteers who registered domain names sharing identical character strings with trademarks and then attempted to ransom them to brand managers late to the Internet. There the chokepoint was the domain name system, and the central databases run by registries provided easy leverage. The cybersquatter problem was worldwide, and the solution was not just domestic US legislation, but the creation in 1998 of a new, formally private body, the Internet Corporation for Assigned Names and Numbers (ICANN), to take over regulation of domain names. Its first policy was to create a lightweight arbitration-like system to adjudicate domain name disputes, one that ended up righting some wrongs, and creating some new ones – in both cases to the advantage of trademark holders, often large firms, some of whom were able to secure victories they could never have won in court, and for only a fraction of the cost.

One thing about ICANN stands out from a legal perspective: the regulations it imposed on domain name registrants – notably that they had to agree that their domain names could be taken away if ICANN’s arbitration-like process so determined – were an important objective of the US Department of Commerce, which settled on ICANN as the domain name system’s manager. But because a domain name is acquired by contract between a registrant and a private company that is two private contracts away from ICANN (and thus three away from the US Government), due process had no traction. Enlisting private parties as de facto regulators proved to be an effective legal workaround.

A larger battle, also with a less-than-happy outcome, raged over file-sharing and copyrights. The copyright industry achieved an early victory by securing passage of the DMCA in 1998. DMCA §1201 created what has come to be known as a “paracopyright” – legal protection for copy-protection technologies used by copyright holders. This goes beyond traditional copyright in that it prohibits not only copying the work and circumventing copy-protection software, but also creating or trafficking in tools designed to circumvent copy-protection software. Indeed, §1201 applies regardless of whether the copy-protection technology is effective or not.

Equally important, the DMCA created a method – the takedown notice – by which an allegation of copyright violation would suffice in most cases to force ISPs to immediately take content offline – no injunction needed. That provision, and regular copyright law, sufficed to enable the killing of file-sharing giant Napster (which was far from an innocent victim). In no time, however, other less centralized music-sharing systems sprang up to replace it.

By 2000, that is, about a decade ago, the first wave of Internet enthusiasm had already crested. The early heady days of people making use of new technologies and routing happily around legal rules were almost a subject for nostalgia. Even if Internet exceptionalism was still alive, in important ways the unregulated Internet had already been subjected to attempts – often ham-handed ones – to regulate it. The Empire – Law’s Empire – had struck back.

But this was only the beginning.

Governments and industry learned from both their successes and failures, and these shaped a second wave of internet regulation. While there are things about the second wave that are encouraging, there is even more that is troubling.

It is surely good that Internet regulation is increasingly based on a sound understanding of the technology, thus minimizing pointless and ineffective rules. But as regulatory strategies get more effective, there are collateral consequences.

In the past decade, the copyright police have stepped up their efforts to stamp out file-sharing. In addition to their legislative successes, they have embraced technological solutions, focusing on the chokepoints of ISPs, and especially hardware and software manufacturers. The model technology may be region coding, in which nearly all commercial DVDs, and both hardware and software DVD players, must be locked to prevent the playing of DVDs sold far away – the fear being the gray market, a form of competition that is legal for almost all other goods. The targets of regulation by technology have since expanded to restrict how other home theater devices interconnect, in order to limit home taping. And, in the Orwellian-named “Trusted Computing” initiative, chip-makers are being encouraged (and might some day be required) to place unique identifiers on computer chips that could be invoked by software to identify the machine, perhaps without the knowledge or consent of the user. Intel’s latest generation of chips, Sandy Bridge, includes a unique identifier (they call it the “Intel Insider”) just waiting for software – not necessarily under the control of the user – to identify it. The hope, not yet realized, is that having this capability will give more content providers the courage to stream top-quality movies online, because they can encrypt the content in a way that only a chip with that unique identifier will be able to decrypt. Of course, every internet-connected device already has a unique MAC address, but it is more feasible to change or mask those than something hardwired on the CPU.
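
For what it is worth, the MAC address is already trivially visible to software. A minimal Python sketch, using only the standard library’s uuid.getnode():

    import uuid

    # uuid.getnode() reports a 48-bit hardware (MAC) address from one of
    # this machine's network interfaces; if none can be read, it falls
    # back to a random number with the multicast bit set.
    mac = uuid.getnode()
    print(f"MAC address: {mac:012x}")

But the reported MAC is ultimately an operating-system setting that can be reassigned; an identifier burned into the silicon at fabrication cannot – which is exactly what makes it attractive for content protection, and troubling for anonymity.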

Regulation aimed at closing down cross-border gambling focused on the most vulnerable chokepoint: the credit cards used to move value. We’ve seen mandated location tracking for cell phones, justified as a way to help emergency services locate callers, but with side effects only now coming into focus.

And surprisingly quickly, the liberty-enhancing aspects of the Internet were on the defensive. Where a decade ago it was still reasonable to see the constellation of technologies around the Internet as fundamentally empowering and anti-totalitarian, that optimism is increasingly hard to sustain as regulators in both democratic and totalitarian states have learned how to structure rules that cannot easily be evaded, and – increasingly – how to use Internet-based technologies to achieve levels of regulatory control that would not have been possible previously.

The second wave of regulation has been growing in the past decade, but it has yet to peak. It draws strength from a market-driven shift towards closure and centralization in both hardware (e.g. the iPhone) and software (e.g. Facebook, Twitter), a shift that creates new chokepoints. It may be easier to see how someone will make money off Hulu or even YouTube than off Gnutella or BitTorrent, but it is also far easier to regulate them.

The crest of the second wave, however, is now in sight: the abolition of online anonymity. First-wave internet regulation could never have achieved the identification of every user and every data packet, but the second wave is both more international and more adept; when law harnesses technology to its ends, it can achieve far more than when it regulates outside technology (categorization) or against it.

The consequences risk being severe. More than a decade ago the Internet seemed poised to serve libertarian values; a decade ago some of us thought it might, with some pushing, be Habermasian. The future looks rather more grim, threatening to vindicate earlier Foucauldian predictions. The challenge for theorists and activists is to structure the coming era of inescapable tracking and identification so that we encourage a responsibility society, but still have one in which the democracy-enhancing aspects of Internet technology are nurtured – and not one where, as is too common in times of fear and hardship, authorities become empowered at the expense of all of us.

Continued…


Herald on Cason’s win

The Herald’s Howard Cohen discusses Jim Cason’s victory in Theories abound on Cason’s ‘miracle’ win in Coral Gables. Yours truly is quoted.


Zoopreme Court

Have a look at Zoopreme Court. Here’s a sample: Warren E. Bearger (Chief Justice, 1969–1986).


Lessons Learned Too Well: The Evolution of Internet Regulation (2)

Read Part One.

In the US, the first wave of Internet law and regulation had three separate impulses, each a differently motivated reaction to the disruptive effects of a constellation of new technologies based on the communicative power of a network.

First, both logically and in time, was the legal instinct for categorization, often as a means to resolve disputes. Was the Internet like a telephone network? Or was it more like television? Was computer-mediated speech more like radio, or newspapers, or private letters? Was e-commerce like mail-order? Was encryption more like speech or a widget? Where was an online transaction located for jurisdictional purposes? Of course, as in any such exercise, which category an Internet-enabled activity should be placed in was often contestable: because there was debate about the true nature of the Internet-mediated activity, because analogies are imperfect, because parties dueled about the appropriate level of generality, or because category choice could determine outcomes.

A second type of first-wave regulation, usually legislative, sought to create new categories and in rare cases new institutions. Sometimes this was because the existing categories seemed inadequate; other times it was because the existence of a new technology promised new capabilities, or new solutions to old problems, or an opportunity to use the Internet as an excuse to achieve a regulatory goal that could not otherwise be justified.

Sometimes this impulse to create new categories went a bit wrong, as for example when the then-rare breed of lawyer-technologists toiled to enable solutions that had not yet found their problems.

The best example of this phenomenon is the Utah Digital Signature Act of 1995, the first of its kind in the nation, and in many ways the model for the ABA guidelines that followed. The Utah law attempted to shape the future by defining transactional roles, rights, and responsibilities in a way that relied on particular technologies used in digital identification and authentication. Those technologies did not catch on in the marketplace nearly as quickly as the law’s backers had hoped, nor did the law succeed in kick-starting a new e-commerce industry based on new intermediaries. The Utah model failed more than it succeeded. In contrast, digital signature laws that took a more modest and technology-neutral approach, and that sought primarily to domesticate deployed technologies and fit them into known categories, worked well. It helped to have a legislative rule making clear when an electronic or digital signature counted as a valid signature and when it did not – it saved a lot of needless court cases. (The most common answer, by the way, is that it is a signature for most things other than wills or conveyances of real property.) By the late ’90s there was (mostly lightweight) digital signature legislation in 49 US states, and in many countries.
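
The underlying technology the statutes had to accommodate is ordinary public-key signing: the signer holds a private key, and anyone holding the matching public key can verify. A minimal sketch, assuming the third-party Python cryptography package (the technology-neutral statutes were, sensibly, silent as to algorithm; Ed25519 here is simply a modern choice):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # The signer generates a key pair and signs the document's bytes.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()
    document = b"I agree to the terms of this contract."
    signature = private_key.sign(document)

    # Anyone holding the public key can check the signature; verify()
    # raises InvalidSignature if the document was altered.
    try:
        public_key.verify(signature, document)
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")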

A third set of legal and governmental responses unashamedly sought to return matters to the status quo ante, or were designed proactively to protect either business models or established governmental practices from Internet threats. And it is in this third category where our biggest future troubles lie.

Remember the equation, “Packet-switching + strong crypto = total communicative freedom”?

Excess communicative freedom was not just a US concern. The Canadian government unsuccessfully sought to block US sources from sending daily Internet accounts of ongoing Canadian trials – accounts banned domestically on the grounds that they prejudiced the defendant’s right to a fair trial. At some point more despotic regimes also began to take note, and to wonder what they should do in response. In due course that would lead to the Great Firewall of China.

But it was the US government – or perhaps I should say the NSA, the people in charge of capturing and analyzing signals intelligence from around the world – that was first to recognize how that equation might make their lives more difficult. Similarly, domestic law enforcement agencies that relied on wiretaps to make and break cases faced the threat that if all communications were encrypted end-to-end, one of their most valuable law enforcement tools would go the way of the Dodo. It did not help that some cypherpunks had spec’d out a model for a “Blacknet” – the philosophical, if not in fact genetic, ancestor of Wikileaks and its ilk – in which anonymous leakers could sell their secrets to anonymous buyers, and both sides could be assured that their identities would remain unknown to all parties concerned, including any intermediaries. Some in law enforcement may have gotten excited about it, but as best we know “Blacknet” wasn’t real, just an online party piece intended pour épater les bourgeois. That said, cryptographically powered anonymous remailers were real, and (when they were working properly) they allowed people to send untraceable messages, be they love notes or ransom notes.
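
The remailer trick is layered encryption: the sender wraps the message once per hop, and each remailer peels exactly one layer, so no single hop ever sees both the sender and the final recipient. A toy sketch, again assuming the third-party Python cryptography package (real remailers used public-key cryptography such as PGP; symmetric Fernet keys stand in for each hop’s key here):

    from cryptography.fernet import Fernet

    # One symmetric key per remailer hop (hypothetical three-hop chain).
    hop_keys = [Fernet.generate_key() for _ in range(3)]

    def wrap(message, keys):
        # Encrypt the innermost layer first, so hop 0 peels the outermost.
        for key in reversed(keys):
            message = Fernet(key).encrypt(message)
        return message

    blob = wrap(b"an untraceable love note (or ransom note)", hop_keys)

    # Each remailer in turn strips only the layer it holds the key for.
    for key in hop_keys:
        blob = Fernet(key).decrypt(blob)

    print(blob.decode())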

Simply banning strong crypto did not seem to be a viable option. There was no statutory authority, and no political consensus for new legislation. Worse, there was a First Amendment case – unproven, perhaps, but fervently pushed by its adherents and potentially potent – that such a ban would be unconstitutional. The US government’s response was ingenious. Rather than seek new legal powers, it decided to leverage export control power it already had, and use that power to set technical standards in a way that would preserve the parts of the status quo it most valued. The government already prohibited the export of strong cryptography on the grounds that it was a dual-use good – a thing that could be used for military as well as civilian purposes – a category that included cryptographic software. Software companies were very concerned about speed to market, and about version control. They didn’t want to wait around for licenses, and they didn’t want to have to make a ‘lite’ version for export, as that would depress foreign sales and require them to maintain and update two versions of their product. Plus, crypto is hard to implement: subtle mistakes can destroy a product’s security.
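
One classic example of such a subtle mistake – hypothetical code, assuming the third-party Python cryptography package – is reusing a nonce with a stream-style cipher such as AES-CTR. The cipher itself is sound; the misuse silently destroys confidentiality:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)
    nonce = os.urandom(16)

    def encrypt_ctr(key, nonce, plaintext):
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return enc.update(plaintext) + enc.finalize()

    # BUG: the same (key, nonce) pair is used for two different messages.
    m1, m2 = b"attack at dawn!!", b"retreat at noon!"
    c1 = encrypt_ctr(key, nonce, m1)
    c2 = encrypt_ctr(key, nonce, m2)

    # The keystream cancels out: the XOR of the ciphertexts equals the
    # XOR of the plaintexts, so an eavesdropper learns about the
    # messages without ever touching the key.
    assert bytes(a ^ b for a, b in zip(c1, c2)) == \
           bytes(a ^ b for a, b in zip(m1, m2))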

The US government’s clever ploy was to offer firms the use of an NSA-approved strong cryptographic algorithm embedded in a tamper-proof chip, with one little extra: the Clipper Chip would come with an extra method for decrypting messages, known only to the US government, which it promised to use only according to specified legal procedures. Win-win, said the government: strong crypto for everyone, and we preserve our law enforcement and spy capabilities. In an effort to set a de facto technical standard, the US started to use the Pentagon’s buying power to acquire compliant smart cards, in the hope of creating economies of scale for Clipper-enabled devices and thus setting a market standard too. An important feature of this plan was that every private action – making the chips, selling the chips, using the devices – could be characterized as formally “voluntary,” thus evading, or at least burying, any Constitutional questions.
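
Conceptually, the Clipper design is key escrow: every message carries its session key twice, once for the recipient and once encrypted to the government’s escrow key (the chip’s Law Enforcement Access Field, or LEAF). A toy sketch of the idea – not the actual Skipjack algorithm or LEAF format – once more assuming the Python cryptography package:

    from cryptography.fernet import Fernet

    recipient_key = Fernet.generate_key()
    escrow_key = Fernet.generate_key()   # held by the government

    def escrowed_encrypt(plaintext):
        session_key = Fernet.generate_key()
        return {
            "ciphertext": Fernet(session_key).encrypt(plaintext),
            "for_recipient": Fernet(recipient_key).encrypt(session_key),
            "leaf": Fernet(escrow_key).encrypt(session_key),  # extra door
        }

    msg = escrowed_encrypt(b"strong crypto for everyone ... almost")

    # The recipient decrypts normally -- but so can anyone holding the
    # escrow key, with no help from either party to the conversation.
    session_key = Fernet(escrow_key).decrypt(msg["leaf"])
    print(Fernet(session_key).decrypt(msg["ciphertext"]).decode())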

It almost worked. It failed, largely because of a determined effort by privacy activists who raised legal and technical questions about the plan, and because a globalizing marketplace rebelled at the thought of encryption optimized for US intelligence agencies.

Governments learned from these failures. Indeed, there is a risk that in time we may come to see them as having lost the battle but won the war because they learned – all too well – from their early failures.

One solution was simply to legislate more directly, and smarter.

Continued…
