Lessons Learned Too Well: The Evolution of Internet Regulation (2)

Read Part One.

In the US, the first wave of Internet law and regulation had three separate impulses, each a differently motivated reaction to the disruptive effects of a constellation of new technologies based on the communicative power of a network.

First, both logically and in time, was the legal instinct for categorization, often as a means of resolving disputes. Was the Internet like a telephone network? Or was it more like television? Was computer-mediated speech more like radio or newspapers or private letters? Was e-commerce like mail-order? Was encryption more like speech or a widget? Where was an online transaction located for jurisdictional purposes? Of course, as in any such exercise, the category in which an Internet-enabled activity would be placed was often contestable: because there was debate about the true nature of the Internet-mediated activity, because analogies are imperfect, because parties dueled over the appropriate level of generality, or because category choice could determine outcomes.

A second type of first-wave regulation, usually legislative, sought to create new categories and in rare cases new institutions. Sometimes this was because the existing categories seemed inadequate; other times it was because the existence of a new technology promised new capabilities, or new solutions to old problems, or an opportunity to use the Internet as an excuse to achieve a regulatory goal that could not otherwise be justified.

Sometimes this impulse to create new categories went a bit wrong, as for example when the then rare breed of lawyer-technologists toiled to enable solutions that had not yet found their problems.

The best example of this phenomenon is the Utah Digital Signature Act of 1995, the first of its kind in the nation, and in many ways the model for the ABA guidelines that followed. The Utah law attempted to shape the future by defining transactional roles, rights, and responsibilities in a way that relied on particular technologies used in digital identification and authentication. Those technologies did not catch on in the marketplace nearly as quickly as the law’s backers had hoped, nor did the law succeed in kick-starting a new e-commerce industry based on new intermediaries. On balance, the Utah model failed more than it succeeded. In contrast, digital signature laws that took a more modest and technology-neutral approach, and sought primarily to domesticate deployed technologies and fit them into known categories, worked well. It helped to have a legislative rule making clear when an electronic or digital signature counted as a valid signature and when it did not – it saved a lot of needless court cases. (The most common answer, by the way, is that it is a valid signature for most things other than wills or conveyances of real property.) By the late ’90s there was (mostly lightweight) digital signature legislation in 49 US states, and in many countries.
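For readers who want the technical picture behind all this legislating, the core mechanism is public-key signing: the holder of a private key produces a value anyone can check against the matching public key. The sketch below is textbook RSA with deliberately tiny illustrative primes and no padding – a toy to show the sign/verify asymmetry, not anything resembling a deployable scheme:

```python
import hashlib

# Toy textbook-RSA keypair with small primes -- illustration only,
# utterly insecure and missing the padding real schemes require.
p, q = 10007, 10009
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 65537                  # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

def digest(message: bytes) -> int:
    # Hash the message, reduced mod n because our toy modulus is tiny.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Only the private-key holder can compute this.
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone with the public key (e, n) can check it.
    return pow(signature, e, n) == digest(message)

contract = b"I agree to the stated terms."
sig = sign(contract)
assert verify(contract, sig)
assert not verify(b"I agree to different terms.", sig)
```

The legal question the statutes answered was when such a mathematical artifact counts as a signature; the math itself was never the hard part.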

A third set of legal and governmental responses unashamedly sought to return matters to the status quo ante, or were designed proactively to protect either business models or established governmental practices from Internet threats. And it is in this third category that our biggest future troubles lie.

Remember the equation, “Packet-switching + strong crypto = total communicative freedom”?

Excess communicative freedom was not just a US concern. The Canadian government unsuccessfully sought to block US sources from sending daily Internet accounts of ongoing Canadian trials – accounts banned domestically on the grounds that they prejudiced the defendant’s right to a fair trial. At some point more despotic regimes also began to take note and to wonder what they should do in response. In due course that would lead to the Great Firewall of China.

But it was the US government – or perhaps I should say the NSA, the agency in charge of capturing and analyzing signals intelligence from around the world – that was first to recognize how that equation might make its life more difficult. Similarly, domestic law enforcement agencies that relied on wiretaps to make and break cases faced the threat that if all communications were encrypted end-to-end, one of their most valuable law enforcement tools would go the way of the Dodo. It did not help that some cypherpunks had specced out a model for a “Blacknet” – the philosophical if not in fact genetic ancestor of Wikileaks and its ilk – in which anonymous leakers could sell their secrets to anonymous buyers, and both sides could be assured that their identities would remain unknown to all parties concerned, including any intermediaries. Some in law enforcement may have gotten excited about it, but as best we know “Blacknet” wasn’t real, just an online party piece intended pour épater les bourgeois. That said, cryptographically powered anonymous remailers were real, and (when they were working properly) they allowed people to send untraceable messages, be they love notes or ransom notes.
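The mechanics behind those remailers can be sketched as layered encryption: the sender wraps the message once per hop, and each remailer strips exactly one layer, so no single hop ever sees both the sender and the final destination. The hop names, keys, and toy XOR stream cipher below are all hypothetical stand-ins, not any real remailer’s protocol:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream derived by repeated hashing -- illustration only.
    out, i = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: applying it twice with the same key decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Hypothetical three-hop chain; each remailer holds only its own key
# and knows only the next hop, never the whole path.
hops = [("remailer-a", b"key-a"), ("remailer-b", b"key-b"), ("remailer-c", b"key-c")]

def wrap(message: bytes) -> bytes:
    # The sender encrypts in reverse hop order, so the outermost
    # layer is the one the first remailer can strip.
    for _name, key in reversed(hops):
        message = xor_cipher(key, message)
    return message

envelope = wrap(b"an untraceable note")
for _name, key in hops:          # each hop peels exactly one layer
    envelope = xor_cipher(key, envelope)
assert envelope == b"an untraceable note"
```

The same layering idea was later generalized by onion routing; the practical systems of the era used real public-key cryptography rather than anything this crude.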

Simply banning strong crypto did not seem to be a viable option. There was no statutory authority, and no political consensus for new legislation. Worse, there was a First Amendment case – unproven, perhaps, but fervently pushed by its adherents and potentially potent – that such a ban would be unconstitutional. The US government’s response was ingenious. Rather than seek new legal powers, it decided to leverage export control powers it already had, and to use them to set technical standards in a way that would preserve the parts of the status quo it most valued. The government already prohibited the export of strong cryptography on the grounds that it was a dual-use good – something with military as well as civilian applications – a category that included cryptographic software. Software companies were very concerned about speed to market, and about version control. They didn’t want to wait around for licenses, and they didn’t want to have to make a ‘lite’ version for export, as that would depress foreign sales and require them to maintain and update two versions of their product. Plus, crypto is hard to implement: subtle mistakes can destroy a product’s security.

The US government’s clever ploy was to offer firms the use of an NSA-approved strong cryptographic algorithm embedded in a tamper-proof chip, with one little extra: the Clipper Chip would come with an extra method for decrypting messages known only to the US government, which it would promise to use only according to specified legal procedures. Win-win, said the government: strong crypto for everyone, we preserve our law enforcement and spy capabilities. In an effort to set a de facto technical standard, the US started to use the Pentagon’s buying power to acquire compliant smart cards, in the hope of creating economies of scale for Clipper-enabled devices and thus setting a market standard too. An important feature of this plan was that every private action – making the chips, selling the chips, using the devices – could be characterized as formally “voluntary,” thus evading or at least burying any Constitutional questions.
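The escrow arrangement at the heart of Clipper can be sketched abstractly: traffic is encrypted under a per-call session key, and the session key itself travels alongside, wrapped under a per-device unit key whose halves the government deposited with escrow agents. The toy cipher, key values, and two-agent split below are illustrative stand-ins (the real chip used the classified Skipjack cipher and a “LEAF” field with additional structure), not the actual protocol:

```python
import hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream cipher (illustration only, nothing like Skipjack).
    stream, i = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# Hypothetical unit key, split between two escrow agents at manufacture;
# neither half alone reveals anything about the key.
escrow_half_1 = b"half-one........"
escrow_half_2 = b"half-two........"
unit_key = bytes(a ^ b for a, b in zip(escrow_half_1, escrow_half_2))

session_key = b"per-call-key...."
ciphertext = toy_encrypt(session_key, b"hello, scrambled world")

# The escrow field rides alongside the traffic: the session key wrapped
# under the unit key, readable only by whoever holds both escrow halves.
leaf = toy_encrypt(unit_key, session_key)

# A wiretap armed with both halves reconstructs the unit key,
# unwraps the session key, and decrypts the intercepted traffic.
recovered_unit = bytes(a ^ b for a, b in zip(escrow_half_1, escrow_half_2))
recovered_session = toy_decrypt(recovered_unit, leaf)
assert toy_decrypt(recovered_session, ciphertext) == b"hello, scrambled world"
```

The design choice worth noticing is that the backdoor is structural, not procedural: anyone who obtains both escrow halves, lawfully or otherwise, gets everything – which is exactly the weakness critics seized on.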

It almost worked. It failed, largely because of a determined effort by privacy activists who raised legal and technical questions about the plan, and because a globalizing marketplace rebelled at the thought of encryption optimized for US intelligence agencies.

Governments learned from these failures. Indeed, there is a risk that in time we may come to see them as having lost the battle but won the war because they learned – all too well – from their early failures.

One solution was simply to legislate more directly, and smarter.


This entry was posted in Law: Internet Law, Talks & Conferences.