Category Archives: Writings

New Article: Privacy as Safety

Fresh up on SSRN: a pre-publication draft of Privacy as Safety, co-authored with Zak Colangelo and forthcoming in the Washington Law Review. Here’s the abstract:

The idea that privacy makes you safer is unjustly neglected: public officials emphasize the dangers of privacy while contemporary privacy theorists acknowledge that privacy may have safety implications but hardly dwell on the point. We argue that this lack of emphasis is a substantive and strategic error, and seek to rectify it. This refocusing is particularly timely given the proliferation of new and invasive technologies for the home and for consumer use more generally, not to mention surveillance technologies such as so-called smart cities.

Indeed, we argue—perhaps for the first time in modern conversations about privacy—that in many cases privacy is safety, and that, in practice, United States law already recognizes this fact. Although the connection rarely figures in contemporary conversations about privacy, the relationship is implicitly recognized in a substantial but diverse body of U.S. law that protects privacy as a direct means of protecting safety. As evidence we offer a survey of the ways in which U.S. law already recognizes that privacy is safety, or at least that privacy enhances safety. Following modern reformulations of Alan Westin’s four zones of privacy, we explore the safety-enhancing privacy protections within the personal, intimate, semi-private, and public zones of life, and find examples in each zone, although cases in which privacy protects physical safety seem particularly frequent. We close by noting that new technologies such as the Internet of Things and connected cars create privacy gaps that can endanger their users’ safety, suggesting the need for new safety-enhancing privacy rules in these areas.

By emphasizing the deep connection between privacy and safety, we seek to lay a foundation for planned future work arguing that U.S. administrative agencies with a safety mission should make privacy protection one of their goals.


Enjoy!

Posted in Law: Privacy, Writings

Just Uploaded–Big Data: Destroyer of Informed Consent (Final Text)

I’ve just uploaded the final text of Big Data: Destroyer of Informed Consent, which is due to appear Real Soon Now in a special joint issue of the Yale Journal of Health Policy, Law, and Ethics and the Yale Journal of Law and Technology. This pre-publication version has everything the final version will have except the correct page numbers. Here’s the abstract:

The ‘Revised Common Rule’ took effect on January 21, 2019, marking the first change since 2005 to the federal regulation that governs human subjects research conducted with federal support or in federally supported institutions. The Common Rule had required informed consent before researchers could collect and use identifiable personal health information. While informed consent is far from perfect, it is and was the gold standard for data collection and use policies; the standard in the old Common Rule served an important function as the exemplar for data collection in other contexts.

Unfortunately, true informed consent seems incompatible with modern analytics and ‘Big Data’. Modern analytics hold out the promise of finding unexpected correlations in data; it follows that neither the researcher nor the subject may know what the data collected will be used to discover. In such cases, traditional informed consent in which the researcher fully and carefully explains study goals to subjects is inherently impossible. In response, the Revised Common Rule introduces a new, and less onerous, form of “broad consent” in which human subjects agree to as varied forms of data use and re-use as researchers’ lawyers can squeeze into a consent form. Broad consent paves the way for using identifiable personal health information in modern analytics. But these gains for users of modern analytics come with side-effects, not least a substantial lowering of the aspirational ceiling for other types of information collection, such as in commercial genomic testing.

Continuing improvements in data science also cause a related problem, in that data thought by experimenters to have been de-identified (and thus subject to more relaxed rules about use and re-use) sometimes proves to be re-identifiable after all. The Revised Common Rule fails to take due account of real re-identification risks, especially when DNA is collected. In particular, the Revised Common Rule contemplates storage and re-use of so-called de-identified biospecimens even though these contain DNA that might be re-identifiable with current or foreseeable technology.

Defenders of these aspects of the Revised Common Rule argue that ‘data saves lives.’ But even if that claim is as applicable as its proponents assert, the effects of the Revised Common Rule will not be limited to publicly funded health sciences, and its effects will be harmful elsewhere.
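
For readers wondering how supposedly de-identified data gets re-identified, here’s a toy sketch of the classic linkage attack (my illustration, not from the paper; every name, ZIP code, and record below is invented). Stripping names accomplishes little if quasi-identifiers such as ZIP code, birth year, and sex survive and can be joined against a public dataset:

```python
# Toy linkage attack: re-identifying "de-identified" health records by
# joining on quasi-identifiers. All data below is invented.

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

# "De-identified" research dataset: names removed, quasi-identifiers kept.
deidentified = [
    {"zip": "33146", "birth_year": 1961, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "33146", "birth_year": 1984, "sex": "M", "diagnosis": "asthma"},
]

# Public dataset (say, a voter roll) sharing the same quasi-identifiers.
voter_roll = [
    {"name": "A. Smith", "zip": "33146", "birth_year": 1961, "sex": "F"},
    {"name": "B. Jones", "zip": "33146", "birth_year": 1984, "sex": "M"},
]

def reidentify(records, public):
    """Match each 'anonymous' record to public records with the same quasi-identifiers."""
    for rec in records:
        key = tuple(rec[q] for q in QUASI_IDENTIFIERS)
        matches = [p["name"] for p in public
                   if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # a unique match re-identifies the record
            print(f"{matches[0]} -> {rec['diagnosis']}")

reidentify(deidentified, voter_roll)
```

And when the record includes DNA, the identifier is effectively unique, which is the abstract’s point about so-called de-identified biospecimens.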

An earlier version, presented at the Yale symposium that the conference volume memorializes, engendered significant controversy (in a few cases, the polite form of howls of rage) from medical professionals looking forward to working with Big Data. Since even the longer final version is shorter, and if only for that reason clearer, than much of what I write, I wouldn’t be surprised if the final version causes some fuss too.

Posted in Administrative Law, AI, Science/Medicine, Writings

New Paper–“Big Data: Destroyer of Informed Consent”

Just posted: A near-final draft of my latest paper, Big Data: Destroyer of Informed Consent. It will appear later this year in a special joint issue of the Yale Journal of Health Policy, Law, and Ethics and the Yale Journal of Law and Technology.

Here’s the tentative abstract (I hate writing abstracts):

The ‘Revised Common Rule’ took effect on January 21, 2019, marking the first change since 2005 to the federal regulation that governs human subjects research conducted with federal support or in federally supported institutions. The Common Rule had required informed consent before researchers could collect and use identifiable personal health information. While informed consent is far from perfect, it is and was the gold standard for data collection and use policies; the standard in the old Common Rule served an important function as the exemplar for data collection in other contexts.

Unfortunately, true informed consent seems incompatible with modern analytics and ‘Big Data’. Modern analytics hold out the promise of finding unexpected correlations in data; it follows that neither the researcher nor the subject may know what the data collected will be used to discover. In such cases, traditional informed consent in which the researcher fully and carefully explains study goals to subjects is inherently impossible. In response, the Revised Common Rule introduces a new, and less onerous, form of “broad consent” in which human subjects agree to as varied forms of data use and re-use as researchers’ lawyers can squeeze into a consent form. Broad consent paves the way for using identifiable personal health information in modern analytics. But these gains for users of modern analytics come with side-effects, not least a substantial lowering of the aspirational ceiling for other types of information collection, such as in commercial genomic testing.

Continuing improvements in data science also cause a related problem, in that data thought by experimenters to have been de-identified (and thus subject to more relaxed rules about use and re-use) sometimes proves to be re-identifiable after all. The Revised Common Rule fails to take due account of real re-identification risks, especially when DNA is collected. In particular, the Revised Common Rule contemplates storage and re-use of so-called de-identified biospecimens even though these contain DNA that might be re-identifiable with current or foreseeable technology.

Defenders of these aspects of the Revised Common Rule argue that ‘data saves lives’. But even if that claim is as applicable as its proponents assert, the effects of the Revised Common Rule will not be limited to publicly funded health sciences, and its effects will be harmful elsewhere.

This is my second foray into the deep waters where AI meets Health Law. Plus it’s well under 50 pages! (First foray here; somewhat longer.)

Posted in AI, Law: Privacy, Writings

‘When AIs Outperform Doctors’ Published

I’m happy to report that the Arizona Law Review has published When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning, 61 Ariz. L. Rev. 33 (2019), which I co-authored with Ian Kerr (U. Ottawa) and Joëlle Pineau (McGill U.).

Here’s the abstract:

Someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. What will the dominance of ML diagnostics mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors, and—in the long run—for the quality of medical diagnostics itself?

This Article argues that once ML diagnosticians, such as those based on neural networks, are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented carefully, a physician’s duty to use ML systems in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve. Although at first doctor + machine may be more effective than either alone because humans and ML systems might make very different kinds of mistakes, in time, as ML systems improve, effective ML could create overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Ultimately, a similar dynamic might extend to treatment also. If we reach the point where the bulk of clinical outcomes collected in databases are ML-generated diagnoses, this may result in future decisions that are not easily audited or understood by human doctors. Given the well-documented fact that treatment strategies are often not as effective when deployed in clinical practice compared to preliminary evaluation, the lack of transparency introduced by the ML algorithms could lead to a decrease in quality of care. This Article describes salient technical aspects of this scenario, particularly as it relates to diagnosis, and canvasses various possible technical and legal solutions that would allow us to avoid these unintended consequences of medical malpractice law. Ultimately, we suggest there is a strong case for altering existing medical liability rules to avoid a machine-only diagnostic regime. We argue that the appropriate revision to the standard of care requires maintaining meaningful participation by physicians in the loop.

I think this is one of the best articles I’ve written or co-written, certainly in the top five. I’m particularly proud that I worked out, or intuited, a property of machine learning that was either not present or certainly not prominent in the literature: if all the inputs to future generations of ML systems come from the outputs of earlier generations of the ML system, there’s a chance it may all go wrong.

Reasonable people could disagree about the size of that chance, but if it happens, at least with current technology, there’s no way the system itself would warn us. Depending on the complexity of the system, and the extent to which doctors have been deskilled by the prevalence of the ML technology, we might be hard put to notice some types of degradation ourselves.
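
To make the worry concrete, here’s a toy simulation (mine, not from the article; the error rate and case counts are invented). Each generation of a diagnostic model trains on labels produced by the generation before it rather than on ground truth; accuracy quietly decays, and nothing inside the loop measures the decay:

```python
import random

random.seed(0)

# Toy model of the feedback loop: generation g trains on the labels
# produced by generation g-1, not on real outcomes. A small per-generation
# error rate compounds silently. All numbers here are invented.
N_CASES = 10_000
PER_GEN_ERROR = 0.02  # fraction of labels each generation flips

ground_truth = [random.random() < 0.5 for _ in range(N_CASES)]
labels = list(ground_truth)  # generation 0 trains on real clinical outcomes

for generation in range(1, 11):
    # Each new generation reproduces its training labels, flipping a few.
    labels = [lbl if random.random() > PER_GEN_ERROR else not lbl
              for lbl in labels]
    accuracy = sum(l == t for l, t in zip(labels, ground_truth)) / N_CASES
    # The system's agreement with its own training data stays near-perfect;
    # only accuracy against ground truth decays, and nothing in the loop
    # reports that number.
    print(f"generation {generation:2d}: accuracy vs. ground truth = {accuracy:.3f}")
```

Run it and accuracy drifts steadily from 98% toward a coin flip, one quiet generation at a time, with no internal signal that anything is wrong.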

It would be good, therefore, to try to engineer legal rules that would make this possibly very unhealthy outcome much less likely.

Posted in AI, Writings

New Paper ‘When AIs Outperform Doctors’

My latest (draft!) paper, When AIs Outperform Doctors: The dangers of a tort-induced over-reliance on machine learning and what (not) to do about it, is, I hope, special. I had the good fortune to co-author it with Canadian polymath Ian Kerr and with Joëlle Pineau, one of the world’s leading machine learning experts.

Here’s the abstract:

Someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. What will the dominance of ML diagnostics mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors, and—in the longer run—for the quality of medical diagnostics itself?

This article argues that once ML diagnosticians, such as those based on neural networks, are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented carefully, a physician’s duty to use ML systems in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve. In time, effective machine learning could create overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Ultimately, a similar dynamic might extend to treatment also. If we reach the point where the bulk of clinical outcomes collected in databases are ML-generated diagnoses, this may result in future decision scenarios that are not easily audited or understood by human doctors. Given the well-documented fact that treatment strategies are often not as effective when deployed in real clinical practice compared to preliminary evaluation, the lack of transparency introduced by the ML algorithms could lead to a decrease in quality of care. The article describes salient technical aspects of this scenario particularly as it relates to diagnosis and canvasses various possible technical and legal solutions that would allow us to avoid these unintended consequences of medical malpractice law. Ultimately, we suggest there is a strong case for altering existing medical liability rules in order to avoid a machine-only diagnostic regime. We argue that the appropriate revision to the standard of care requires the maintenance of meaningful participation by physicians in the loop.

I hope that it will be of interest to lawyers, doctors, computer scientists, and a range of medical service providers and policy-makers. Comments welcome!

Posted in AI, Sufficiently Advanced Technology, Writings

Latvian ‘Ocean’

My article Flood Control on the Information Ocean: Living With Anonymity, Digital Cash, and Distributed Databases has been translated into Latvian as “Plūdu kontroli par informāciju, kas okeānā: dzīvo ar anonimitāti, digitālās naudas, un sadalītās datu bāzes” by Arija Liepkalnietis.

Thank you, Ms. Liepkalnietis! (But please add a Creative Commons License.)

Posted in Writings