I’m off to Ottawa for the 2nd Annual Privacy Personas and Segmentation (PPS) Workshop which is being held in conjunction with the Symposium on Usable Privacy and Security (SOUPS).
The organizers selected me to give the keynote for the workshop, and I’ve produced a provocation for them. Here is the introduction:
Users are notoriously bad at safeguarding their online privacy. They do not read privacy policies, which in any case are mostly contracts of adhesion. They make over-optimistic assumptions about protections and dangers. They use weak passwords (and repeat them), accept cookies, and leave their cell phones on, thus facilitating location tracking, which is vastly more destructive to privacy than almost any user grasps. Contrary to Alan Westin’s privacy segmentation analysis, most privacy choices are not knowing and deliberate because they are not within the user’s control (e.g. surveillance in public). Other ‘choices’ happen because users believe, correctly, that they in fact have no choice if they want the services (e.g. Google, mobile telephony) that large numbers of consumers consider necessary for modern life.
The systematic exposure of the so-called “privacy vulnerable” user suits important public and private interests. Marketers, law enforcement, and (as a result) hardware and software designers tend towards making technology surveillance-friendly and towards making communications and transactions easily linkable.
If we each have only one identity capable of transacting (even if it is mediated through multiple logins), and if our access to communications resources, such as ISPs and email, requires payment or authentication, then all too quickly everything we do online is at risk of being linked to one master dossier. The growth of real-world surveillance, and the ease with which cell phone tracking and face recognition will allow linkage to virtual identities, only adds to the size of that dossier. The consequences are that one is, effectively, always being watched as one speaks or reads, buys or sells, or joins with friends, colleagues, co-religionists, fellow activists, or hobbyists. In the long term, a world of near-total surveillance and endless record-keeping is likely to be one with less liberty, less experimentation, and certainly far less joy (except maybe for the watchers). In a country such as the US, where robust data-protection law is deeply unlikely, a technological solution is required if privacy is to continue to be relevant in the era of big data; one such, perhaps the best such, technological improvement would be to create an IMA designed to give every person multiple privacy-protective transaction-empowered digital personae. Roger Clarke provides a good working definition of the “digital persona” as “a model of an individual’s public personality based on data and maintained by transactions, and intended for use as a proxy for the individual.”
Whereas Clarke presciently saw (and critiqued) the ‘dataveillance’ project as being an effort to create a single, increasingly accurate, digital persona connected to the person, the objective here is to undermine that linkage by having multiple personae that would not be as easy to link to each other or to the person.
(Updated to correct link to workshop.)
I neglected to link to Lessons Learned Too Well: Anonymity in a Time of Surveillance, the paper I’m presenting at #yalefesc. A very very small number of people will recognize this as a partial redraft of a paper I started a few years ago, but never published because it didn’t seem quite right. My plan is to get it as right as I can in the next few months, which is why I’m workshopping it.
Next time you stay in a hotel that has a notice like this one on the bedside table… …use the earplugs.
I’m in New Haven for the Freedom of Expression Scholars Conference, which uses the wonderful workshop format we adopted for We Robot. The author of the paper being workshopped doesn’t present – the discussant starts by summarizing the paper, which all the attendees are presumed to have read. The author gives a brief response, and it is off to the races.
I’m in the usually unenviable first-thing-Sunday-morning slot, the one where you compete with exhaustion (and hangovers), but I actually think that at FESC first-on-Sunday is better than last-on-Saturday, as there is a very very long program.
I am not a core First Amendment scholar, not at all, although my work on anonymity obviously intersects, which is why I’m here. It’s very interesting to see the things that concern people who focus on the First Amendment these days; it’s a very different set of concerns from those of, say, ten years ago. I learned a lot from reading the papers (or, rather, the fraction of the papers for the sessions I plan to go to – there are three parallel ones in most time slots). Plus I get to meet a lot of new people, more than I do at Internetty events, maybe even more than at robotty events now that I’ve been to a bunch of them.
It’s always slightly odd to be back in New Haven, where I spent first four and then later three years. The city is much more cheerful (it helps that it’s spring, while memory has a strong overlay of February). There’s been a great deal of turnover in the shops, with many of the small places I liked gone, and a number of rather chichi chain clothing shops and such replacing them. A mixed blessing at best.
And, coming from Miami, almost everyone on the street looks a bit pale.
net.wars: Multiplicity has the best write-up of We Robot 2015 that I’ve seen yet.
My favorite quote:
[David] Post led the discussion to broader questions: if you’re going to intervene in the development of new norms and law, when do you do it? How do you do it while remaining flexible enough to allow the technology to develop? Particularly with respect to privacy and teens’ willingness to share information in a way that scares their elders, “Could we have had that conversation in 1983?”
This is the heart of We Robot: the co-chairs, Michael Froomkin and Ryan Calo, run the conference precisely to try to get ahead of prospective conflicts. So Froomkin’s answer to Post’s question was to note that being “in the room” matters. Had “just one lawyer” been present when engineers were creating the domain name system, its design could have been different, because that lawyer would have spotted the issues we have been grappling with ever since. “People with different backgrounds and perspectives spot problems,” he said, “and also solutions.” And, he added, those changes are easier at the beginning, when there’s less deployment and less money invested.
And yes, as I said at the conference, one of the main things I’ve learned from 15+ years of Internet policy research is that ‘Who is in the room’ when decisions are made is about as important as anything else.
(The photo is of Tony Dyson, the designer of the original R2D2 (top left), and two other happy happy conference-goers.)
It seems to get better every year.
But we’ll have a tough act to follow after the success of the 2015 edition!