Category Archives: Civil Liberties
I was recently asked to contribute to a set of essays being assembled in honor of the Electronic Privacy Information Center’s 20th anniversary. Here’s a draft:
A. Michael Froomkin
Laurie Silvers & Mitchell Rubenstein Distinguished Professor
University of Miami School of Law
Identity Management looms as one of the privacy battlegrounds of the coming decade. The very term is contested. In its most minimal form it means little more than keeping secure track of login credentials, passwords, and other identity tokens. The more capacious version envisions an ‘identity ecosystem’ in which people’s tools carefully measure out the information they reveal, and in which we all have a portfolio of identities and personae tailored to circumstances. What is more, in this more robust vision, many transactions and relationships that currently require verification of identity move instead to a default of only requiring that a person demonstrate capability or authorization.
A privacy-protective Identity Management architecture matters because the drift towards strong binding between identity and online activities enables multiple forms of profiling and surveillance by both the public and private sectors. Moving to a better system would make a substantial part of that monitoring and data aggregation more difficult. Thus, a privacy-protective Identity Management ecosystem has value on its own or as a complement to a more comprehensive reform of privacy protection, whether EU-style or otherwise. Importantly, given present trends, a reformed ID ecosystem would protect privacy against private monitoring and also against illicit public-sector surveillance.
In the US the present and future of privacy seem to fall somewhere between grim and apocalyptic. The NSA seeks to capture all digital data. Law enforcement agencies club together to share surveillance data in fusion centers. Corporate data brokers find new ways to collect and use personal data. Yet, it seems all too likely that data-gathering will remain largely unencumbered by EU-style privacy regulation for the foreseeable future. Data privacy is being squeezed by a technological pincer composed of multiple advances in data collection on the one hand and rapid advances in data collation on the other. Big Data gets bigger and faster, and is composed of an ever-wider variety of information sources collected and shared by corporations and governments.
The catalog of threats to privacy runs from the capture of internet-based communications, to location and communications monitoring via cellphones and license plate tracking. Effective facial recognition is on the horizon. Both public and private bodies increasingly deploy cameras in public, and process and store the results; increasingly too they share data – or at least the private sector shares with the government, whether willingly or otherwise. Plus, as people become more used to (and more dependent on) electronic social and economic intermediaries such as Facebook, Twitter, Instagram, Amazon, and Google, they themselves become key sources of data that others can use to track and correlate their movements, associations, and even ideas – not to mention those of the people around them.
In an environment of increasingly pervasive surveillance of communications, transactions, and movements, the average US person is almost defenseless. Legal limits on data collection tend to lag technical developments. As regards private-sector collection, the dominant, largely laissez-faire theory of contract means that privacy routinely falls in the face of standard-form extractions of consent. As regards data collection in public, and also data use and re-use, First Amendment considerations might make it difficult to outlaw the repetition of many true facts not obtained in confidence. Furthermore, there is relatively little the average person can do about physical privacy in daily life. Obscuring license plates is illegal in most states. Many states also make it a crime to wear a mask in public, although the constitutionality of that ban is debatable. Most cell phones are locked; rooting them is neither simple nor costless, nor does it solve all the privacy issues.
Electronic privacy has for years seemed to be an area where privacy tools might make a significant dent in data collection and surveillance. Unfortunately, cryptography’s potential is yet to be realized; disk encryption software now ships as an option with major operating systems, but encrypted email remains a specialist item. Cell phones leak information not just via location tracking but through the apps and uses that make the devices worthwhile to most users. Estimates suggest that when one counts senders and recipients, one company – Google – sees half the emails sent nationally. And we now know beyond a reasonable doubt that the NSA has adopted a vacuum cleaner policy towards both electronic communications and location data.
One of the first papers I wrote about privacy, back in 1995, contrasted four types of communications in which the sender’s identity was at least partially hidden: (1) traceable anonymity, (2) untraceable anonymity, (3) untraceable pseudonymity, and (4) traceable pseudonymity. Encouraging untraceable anonymity has for years seemed to me to be one of the best routes to the achievement of electronic privacy. “Three can keep a secret if two of them are dead”: if people could transact and communicate anonymously, then the exchange would by its nature remain outside the ever-expanding digital dossiers. But even though we have increasingly reliable privacy-enhanced communications through systems like Tor, and even though at least a segment of the public has demonstrated an appetite for semi-anonymous cryptocurrency (cf. the Bitcoin fiasco), the fact remains that for most people most of the time, anonymous electronic communication, much less anonymous transactions, is further and further out of reach because tracking and correlating technologies are getting better all the time. Whether due to the use of MAC addresses to track equipment, cookies and browser fingerprints to track software and its users, or the cross-linking of location data with other data captures (via phones, faces, loyalty cards, or self-surveillance), the fact is that anonymity is on the ropes even before we get to the various impediments to real anonymity in the US, and still more in other countries.
A focus on Identity Management involves a shift from anonymity to pseudonymity. Plus, if one is being realistic about the legal environment, any robust identity management will likely have substantial traceability built in. Useful, attractive Identity Management tools can only exist if we first create a legal and standards-based infrastructure that supports them. In the US, at least, the legal piece of that infrastructure will require action by the federal government. Although actors within the Obama Administration have signaled support for strong identity management in the “National Strategy for Trusted Identities in Cyberspace” (NSTIC), not all parts of this Administration are speaking in unison. Worse, the early signs are that the NSTIC implementation will fall far short of its potential.
NSTIC is almost unique among recent government pronouncements about the regulation of the Internet domestically.1 The typical government report on cyberspace is long on the threats of cyber-terrorism, money laundering, and (sometimes) so-called cyber-piracy (unlicensed digital copying), and gives at most lip service to the importance of privacy and individual data security. The exceptions are reports on the dangers of ID theft – which seem mostly to stress caution in Internet use rather than secure software – and NSTIC itself. NSTIC envisions an “Identity Ecosystem” guided by four key values:
- Identity solutions will be privacy-enhancing and voluntary
- Identity solutions will be secure and resilient
- Identity solutions will be interoperable
- Identity solutions will be cost-effective and easy to use
These are good goals, and to realize them would be a substantial achievement. Even if it is limited to cyberspace – in other words, even if it does not directly address the problems of surveillance in the physical world – in this list lie the seeds for an ‘ecosystem’ based on enabling law and voluntary standards that could very substantially enhance data privacy by allowing people to compartmentalize their lives and by creating obstacles to marketers and others stitching those compartments together.
The problem that NSTIC could solve is that, without some sort of intervention, the interests of marketers, law enforcement, and (in part as a result) hardware and software designers tend towards making technology surveillance-friendly and towards making communications and transactions easily linkable. If we each have only one identity capable of transacting, and if our access to communications resources, such as ISPs and email, requires payment – or even just authentication – then all too quickly everything we do online is at risk of being joined to our dossier. The growth of real-world surveillance, and the ease with which cell phone tracking and face recognition will allow linkage to virtual identities, only adds to the potential linkage. The consequence is that one is, effectively, always being watched as one speaks or reads, buys or sells, or joins with friends, colleagues, co-religionists, fellow activists, or hobbyists. In the long term, a world of near-total surveillance and endless record-keeping is likely to be one with less liberty, less experimentation, and certainly far less joy (except maybe for the watchers).
Robust privacy-enhancing identities – pseudonyms – could put some brakes on this totalizing future. But in order for identities to genuinely serve privacy in a new digital privacy ecosystem, these roles need the capability to transact, at least in amounts large enough to purchase ISP and cell phone services. And we need standards that ensure our hardware does not betray our identities: using different identities on the same computer or the same cell phone must not result in the easy collapse of multiple identities into one. Thus, given the current communications infrastructure, computers and phones must have a way of alternating among multiple identities, down to the technical (MAC, IPv6, and IMEI number) level.
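The identifier-rotation point can be made concrete. The sketch below (an illustration only, not any existing tool; the persona names are hypothetical) generates a fresh, locally administered MAC address for each persona, the same basic technique that operating-system “MAC randomization” features rely on:

```python
import secrets

def random_persona_mac() -> str:
    """Generate a random, locally administered, unicast MAC address.

    Setting the locally-administered bit (0x02) and clearing the
    multicast bit (0x01) in the first octet yields an address that
    cannot collide with vendor-assigned hardware MACs.
    """
    octets = bytearray(secrets.token_bytes(6))
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{b:02x}" for b in octets)

# One fresh link-layer identifier per persona (names are hypothetical):
personas = {name: random_persona_mac() for name in ("work", "home", "activism")}
```

Rotating the identifier is only the lowest layer, of course; cookies, browser fingerprints, and login credentials would all have to be compartmentalized the same way for the personae to stay separate.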
In its most robust form, we would have true untraceable pseudonymity powered by payer-anonymous digital cash. But even a weaker form, one that built in something as ugly as identity escrow – ways in which the government might pierce the identity veil when armed with sufficient cause and legal process – would still be a substantial improvement over the path we are on. It is possible to imagine the outlines of a privacy-hardened identity infrastructure that fully caters to all but the very most unreasonable demands of the law enforcement and security communities. In this ecosystem, we would each have a root identity, as we do now, and we would normally use that identity for large financial transactions. In addition, however, everyone would have the ability to create limited-purpose identities that would be backed up by digital certificates issued by an ID guarantor – a role banks for example might be happy to play. Some of these certificates would be ‘attribute’ certs, stating that the holder is, for example, over 18, or a veteran, or a member of the AAA for 2015. Others would be capability certs, much like credit cards today, stating that the identity has an annual pass to ride the bus, or has a credit line to draw on. (There could be limits on the size of the credit line if there are money laundering concerns, although several banks already offer an option of throw-away credit card numbers for people concerned about using their credit cards online; those cards, however, carry the name of the underlying card-holder while in a privacy-enhanced ID system they would not need to.) We might define a flag that distinguished between personae that are anchored to a real identity and those that are not; the anchored ones would deserve more trust, even if we didn’t know who was behind them.
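A minimal sketch of the attribute-certificate idea, under stated assumptions: the guarantor’s signature is simulated here with a symmetric HMAC for brevity (a real deployment would use public-key signatures, so that anyone could verify a cert without holding the guarantor’s secret), and every name and identifier below is hypothetical. The point is that the attestation binds an attribute to a pseudonym, not to a legal name:

```python
import hashlib
import hmac
import json

# Stand-in for the ID guarantor's signing key (assumption: in practice
# this would be an asymmetric key pair, not a shared secret).
GUARANTOR_KEY = b"demo-guarantor-secret"

def issue_attribute_cert(persona_id: str, attribute: str) -> dict:
    """Guarantor attests that a pseudonym holds an attribute (e.g. 'over-18')."""
    claim = json.dumps({"persona": persona_id, "attr": attribute}, sort_keys=True)
    tag = hmac.new(GUARANTOR_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": tag}

def verify_attribute_cert(cert: dict) -> bool:
    """Check the attestation without learning anything beyond the claim itself."""
    expected = hmac.new(GUARANTOR_KEY, cert["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["sig"])

cert = issue_attribute_cert("pseudonym-7f3a", "over-18")
assert verify_attribute_cert(cert)
```

A relying party sees only “this pseudonym is over 18, says the guarantor”; the mapping from pseudonym to root identity stays with the guarantor, reachable (in the identity-escrow variant) only with legal process.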
In time, we would learn to interact online through virtualized compartments – configurable personae. Doing so would enable a stricter, cryptographically enforced separation between work, home, and play. It would also provide defense in depth against identity theft – if someone, say, broke into one’s Facebook persona, the attacker would not be able to leverage this into the work persona. Furthermore, there would be less need for tight security controls imposed at work to limit (or monitor) private personae – already an increasing problem with corporate-issued cell phones and laptops.
Even this – a much watered-down recipe for limited privacy – is a tall order in today’s United States. It is hard enough to persuade even democratic governments of the virtues of free speech, and even harder to find any enthusiasm for the freer speech that comes from strong pseudonyms. When one gets to the even freer speech that comes from untraceable anonymity, governments get cold feet – and when money is involved, the opposition is only stronger.
The Obama Administration’s National Strategy for Trusted Identities in Cyberspace (NSTIC) raised hopes that the US government might swing its weight towards the design of legal and technical architectures designed to simultaneously increase online security while reducing the privacy costs increasingly imposed as a condition of even access to online content. At present those hopes have yet to be realized. There is much to be done.
- The caveat is important: the US government often seems more willing to talk of anonymization on the Internet as a potentially empowering tool for dissidents abroad than for citizens at home. [↩]
The IETF has issued RFC 7258, aka Best Current Practice 188, “Pervasive Monitoring Is an Attack”. This is an important document. Here’s a snippet of the intro:
Pervasive Monitoring (PM) is widespread (and often covert) surveillance through intrusive gathering of protocol artefacts, including application content, or protocol metadata such as headers. Active or passive wiretaps and traffic analysis, (e.g., correlation, timing or measuring packet sizes), or subverting the cryptographic keys used to secure protocols can also be used as part of pervasive monitoring. PM is distinguished by being indiscriminate and very large scale, rather than by introducing new types of technical compromise.
The IETF community’s technical assessment is that PM is an attack on the privacy of Internet users and organisations. The IETF community has expressed strong agreement that PM is an attack that needs to be mitigated where possible, via the design of protocols that make PM significantly more expensive or infeasible. Pervasive monitoring was discussed at the technical plenary of the November 2013 IETF meeting [IETF88Plenary] and then through extensive exchanges on IETF mailing lists. This document records the IETF community’s consensus and establishes the technical nature of PM.
The term “attack” is used here in a technical sense that differs somewhat from common English usage. In common English usage, an attack is an aggressive action perpetrated by an opponent, intended to enforce the opponent’s will on the attacked party. The term is used here to refer to behavior that subverts the intent of communicating parties without the agreement of those parties.
The conclusion is simple, but powerful: “The IETF will strive to produce specifications that mitigate pervasive monitoring attacks.”
I can’t help but see this as a shining example of the IETF living up to its legitimate-rule-making potential, as I described in my 2003 Harvard Law Review article Habermas@discourse.net: Toward a Critical Theory of Cyberspace.
Below, I reprint my abstract:
This British police intimidation of a blogger is much less heavy-handed than the hammer the Mayor of Peoria brought down on a parody Twitter account.
The Cambridge, UK story is, however, creepy insofar as it could be read to suggest that the police support the far-right UKIP party. Then again, the UKIP complainant was a local councillor, so maybe it really is the somewhat-less-unkind UK version of the Peoria story. After all, in Cambridge, no property was damaged or seized or destroyed. And no judges signed off on spurious warrants either.
This story seems like a Smoking Gun-sized Big Deal. The NYT version, C.I.A. Employees Face New Inquiry Amid Clashes on Detention Program, and the less namby-pamby McClatchy version, Probe sought of CIA conduct in Senate study of secret detention program, paint a pretty damning picture of an agency totally out of control, and of a potentially massive separation-of-powers conflict arising out of the Senate’s report on CIA torture.
Compare McClatchy’s lede:
The CIA Inspector General’s Office has asked the Justice Department to investigate allegations of malfeasance at the spy agency in connection with a yet-to-be released Senate Intelligence Committee report into the CIA’s secret detention and interrogation program, McClatchy has learned.
The criminal referral may be related to what several knowledgeable people said was CIA monitoring of computers used by Senate aides to prepare the study. The monitoring may have violated an agreement between the committee and the agency.
to the NYT lede:
The Central Intelligence Agency’s attempt to keep secret the details of a defunct detention and interrogation program has escalated a battle between the agency and members of Congress and led to an investigation by the C.I.A.’s internal watchdog into the conduct of agency employees.
The agency’s inspector general began the inquiry partly as a response to complaints from members of Congress that C.I.A. employees were improperly monitoring the work of staff members of the Senate Intelligence Committee, according to government officials with knowledge of the investigation.
McClatchy also says this:
The committee determined earlier this year that the CIA monitored computers – in possible violation of an agreement against doing so – that the agency had provided to intelligence committee staff in a secure room at CIA headquarters that the agency insisted they use to review millions of pages of top-secret reports, cables and other documents, according to people with knowledge.
Sen. Ron Wyden, D-Oregon, a panel member, apparently was referring to the monitoring when he asked CIA Director John Brennan at a Jan. 9 hearing if provisions of the Federal Computer Fraud and Abuse Act “apply to the CIA? Seems to me that’s a yes or no answer.”
Brennan replied that he’d have to get back to Wyden after looking into “what the act actually calls for and it’s applicability to CIA’s authorities.”
None of that is in the NYT version, although the NYT (like McClatchy) does have these details:
Then, in December, Mr. Udall revealed that the Intelligence Committee had become aware of an internal C.I.A. study that he said was “consistent with the Intelligence Committee’s report” and “conflicts with the official C.I.A. response to the committee’s report.”
It appears that Mr. Udall’s revelation is what set off the current fight, with C.I.A. officials accusing the Intelligence Committee of learning about the internal review by gaining unauthorized access to agency databases.
In a letter to President Obama on Tuesday, Mr. Udall made a vague reference to the dispute over the C.I.A.’s internal report.
“As you are aware, the C.I.A. has recently taken unprecedented action against the committee in relation to the internal C.I.A. review, and I find these actions to be incredibly troubling for the committee’s oversight responsibilities and for our democracy,” he wrote.