Fresh up on SSRN, a pre-publication draft of Privacy as Safety, co-authored with Zak Colangelo and forthcoming in the Washington Law Review. Here’s the abstract and TOC:
The idea that privacy makes you safer is unjustly neglected: public officials emphasize the dangers of privacy, while contemporary privacy theorists acknowledge that privacy may have safety implications but hardly dwell on the point. We argue that this lack of emphasis is a substantive and strategic error, and seek to rectify it. This refocusing is particularly timely given the proliferation of new and invasive technologies for the home and for consumer use more generally, not to mention surveillance technologies such as those deployed in so-called smart cities.
Indeed, we argue—perhaps for the first time in modern conversations about privacy—that in many cases privacy is safety, and that, in practice, United States law already recognizes this fact. Although the connection rarely figures in contemporary conversations about privacy, the relationship is implicitly recognized in a substantial but diverse body of U.S. law that protects privacy as a direct means of protecting safety. As evidence we offer a survey of the ways in which U.S. law already recognizes that privacy is safety, or at least that privacy enhances safety. Following modern reformulations of Alan Westin’s four zones of privacy, we explore the safety-enhancing privacy protections within the personal, intimate, semi-private, and public zones of life, and find examples in each zone, although cases in which privacy protects physical safety seem particularly frequent. We close by noting that new technologies such as the Internet of Things and connected cars create privacy gaps that can endanger their users’ safety, suggesting the need for new safety-enhancing privacy rules in these areas.
By emphasizing the deep connection between privacy and safety, we seek to lay a foundation for planned future work arguing that U.S. administrative agencies with a safety mission should make privacy protection one of their goals.
First came product placement. In exchange for a payment, whether in cash, supplies or services, a TV show or a film would prominently display a brand-name product.
Then there was virtual product placement. Products or logos would be inserted into a show during editing, thanks to computer-generated imagery.
Now, with the rise of Netflix and other streaming platforms, the practice of working brands into shows and films is likely to get more sophisticated. In the near future, according to marketing executives who have had discussions with streaming companies, the products that appear onscreen may depend on who is watching.
Just posted: A near-final draft of my latest paper, Big Data: Destroyer of Informed Consent. It will appear later this year in a special joint issue of the Yale Journal of Health Policy, Law, and Ethics and the Yale Journal of Law and Technology.
Here’s the tentative abstract (I hate writing abstracts):
The ‘Revised Common Rule’ took effect on January 21, 2019, marking the first change since 2005 to the federal regulation that governs human subjects research conducted with federal support or in federally supported institutions. The Common Rule had required informed consent before researchers could collect and use identifiable personal health information. While informed consent is far from perfect, it is and was the gold standard for data collection and use policies; the standard in the old Common Rule served an important function as the exemplar for data collection in other contexts.
Unfortunately, true informed consent seems incompatible with modern analytics and ‘Big Data’. Modern analytics hold out the promise of finding unexpected correlations in data; it follows that neither the researcher nor the subject may know what the data collected will be used to discover. In such cases, traditional informed consent in which the researcher fully and carefully explains study goals to subjects is inherently impossible. In response, the Revised Common Rule introduces a new, and less onerous, form of “broad consent” in which human subjects agree to as varied forms of data use and re-use as researchers’ lawyers can squeeze into a consent form. Broad consent paves the way for using identifiable personal health information in modern analytics. But these gains for users of modern analytics come with side-effects, not least a substantial lowering of the aspirational ceiling for other types of information collection, such as in commercial genomic testing.
Continuing improvements in data science also cause a related problem, in that data thought by experimenters to have been de-identified (and thus subject to more relaxed rules about use and re-use) sometimes proves to be re-identifiable after all. The Revised Common Rule fails to take due account of real re-identification risks, especially when DNA is collected. In particular, the Revised Common Rule contemplates storage and re-use of so-called de-identified biospecimens even though these contain DNA that might be re-identifiable with current or foreseeable technology.
Defenders of these aspects of the Revised Common Rule argue that ‘data saves lives’. But even if that claim is as applicable as its proponents assert, the effects of the Revised Common Rule will not be limited to publicly funded health sciences, and its effects will be harmful elsewhere.
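The re-identification risk the abstract describes has a simple mechanism behind it: records stripped of names often retain quasi-identifiers (ZIP code, birth date, sex) that can be joined against a public dataset to recover identities, the classic linkage attack. Here is a minimal Python sketch of that mechanism, using entirely invented data and names (nothing here is drawn from the paper itself):

```python
# Toy illustration of a linkage attack: "de-identified" health records
# (names removed, quasi-identifiers kept) joined against a public record,
# such as a voter roll, to re-attach identities. All data is hypothetical.

# "De-identified" health records: names removed, quasi-identifiers retained.
health_records = [
    {"zip": "33146", "dob": "1954-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "33146", "dob": "1982-03-12", "sex": "M", "diagnosis": "diabetes"},
]

# A public dataset (e.g., a voter roll) with names and the same quasi-identifiers.
voter_roll = [
    {"name": "Alice Example", "zip": "33146", "dob": "1954-07-31", "sex": "F"},
    {"name": "Bob Example", "zip": "33139", "dob": "1982-03-12", "sex": "M"},
]

def reidentify(health, public):
    """Join the two datasets on the quasi-identifier triple (zip, dob, sex)."""
    index = {(p["zip"], p["dob"], p["sex"]): p["name"] for p in public}
    matches = []
    for rec in health:
        key = (rec["zip"], rec["dob"], rec["sex"])
        if key in index:
            matches.append((index[key], rec["diagnosis"]))
    return matches

print(reidentify(health_records, voter_roll))
# A unique quasi-identifier match re-attaches a name to a "de-identified" diagnosis.
```

Only the first health record matches uniquely here, but with real datasets the combination of ZIP, birth date, and sex is unique for a large share of the population, which is why relaxed rules for "de-identified" data carry genuine risk.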
Consent, that is, ‘notice and choice,’ is a fundamental concept in the U.S. approach to data privacy, as it reflects principles of individual autonomy, freedom of choice, and rationality. Big Data, however, makes the traditional approach to informed consent incoherent and unsupportable, and indeed calls the entire concept of consent, at least as currently practiced in the U.S., into question.
Big Data kills the possibility of true informed consent because, by its very nature, one purpose of big data analytics is to find unexpected patterns in data. Informed consent requires at the very least that the person requesting the consent know what she is asking the subject to consent to. In principle, we hope that before the subject agrees she too comes to understand the scope of the agreement. But with big data analytics, particularly those based on Machine Learning, neither party to that conversation can know what the data may be used to discover.
I then go on to discuss the Revised Common Rule, which governs any federally funded human subjects research. The revision takes effect in early 2019, and it relaxes the informed consent rule in a way that will set a bad precedent for private data mining and research. Henceforth researchers will be permitted to obtain open-ended “broad consent” (i.e., “prospective consent to unspecified future research”) rather than informed consent, or even ordinary consent, on a case-by-case basis. That’s not a step forward for privacy or personal control of data, and although it’s being driven by genuine public health concerns, the side-effects could be very widespread.