Category Archives: Writings

Draft of Virtual Law School, 2.0 Now at SSRN

I recently uploaded a draft of my paper The Virtual Law School, 2.0 to SSRN. (It will need a little updating in light of the development of vaccines for COVID.) Here’s the abstract and table of contents:

Just over twenty years ago I gave a talk to the AALS called The Virtual Law School? Or, How the Internet Will De-skill the Professoriate, and Turn Your Law School Into a Conference Center. I came to the subject because I had been working on Internet law, learning about virtual worlds and e-commerce, and about the power of one-to-many communications. It seemed to me that a lot of what I had learned applied to education in general and to legal education in particular.

It didn’t happen. Or at least, it has not happened yet. In this essay I want to revisit my predictions from twenty years ago in order to see why so little has changed (so far). The massive convulsion now being forced on law teaching by the social distancing required to prevent COVID-19 transmission gives us all occasion to rethink how we deliver legal education. After discussing why my predictions failed to manifest before 2020, I will argue that unless this pandemic is brought under control quickly, the market for legal education may force some radical changes on us whether we like it or not, and that in the main my earlier predictions were not wrong, just premature.

    1. That Was Then (Virtual Law School 1.0)
    2. Why We Do Not Have Serious Virtual Law Schools (Yet)
      1. ABA Rules
      2. Bad Software
      3. Concerns About Bad Pedagogy and Lost Opportunities for Skills Training and Networking
      4. Reputational and Branding Concerns
      5. Bad Economics
    3. Law Teaching in a Time of COVID
      1. The Old is New Again
      2. Spring’s Scrambling: Opening the Door to Online Learning
        1. ABA (and ICE) Actions
        2. Law School Actions
        3. Student and Other Reactions
      3. The Longer Term: Teaching in the ‘New Normal’
        1. COVID-19 Scenarios
        2. What the Scenarios Mean for the Virtual Law School
    4. Conclusion: Winners and Losers, 2.0

This is a true draft, not a finished product, and I would very much welcome any comments readers might have.

Posted in Writings | 17 Comments

Something Cheerful

I was very happy to learn that Larry Solum — a one-man Jotwell — has blogged my latest article, Privacy as Safety (written with Zak Colangelo), and tagged it “highly recommended.”

Thank you, Larry!

Posted in Writings | Comments Off on Something Cheerful

New Article: Privacy as Safety

Fresh up on SSRN: a pre-publication draft of Privacy as Safety, co-authored with Zak Colangelo and forthcoming in the Washington Law Review. Here’s the abstract:

The idea that privacy makes you safer is unjustly neglected: public officials emphasize the dangers of privacy while contemporary privacy theorists acknowledge that privacy may have safety implications but hardly dwell on the point. We argue that this lack of emphasis is a substantive and strategic error, and seek to rectify it. This refocusing is particularly timely given the proliferation of new and invasive technologies for the home and for consumer use more generally, not to mention surveillance technologies such as so-called smart cities.

Indeed, we argue—perhaps for the first time in modern conversations about privacy—that in many cases privacy is safety, and that, in practice, United States law already recognizes this fact. Although the connection rarely figures in contemporary conversations about privacy, the relationship is implicitly recognized in a substantial but diverse body of U.S. law that protects privacy as a direct means of protecting safety. As evidence we offer a survey of the ways in which U.S. law already recognizes that privacy is safety, or at least that privacy enhances safety. Following modern reformulations of Alan Westin’s four zones of privacy, we explore the safety-enhancing privacy protections within the personal, intimate, semi-private, and public zones of life, and find examples in each zone, although cases in which privacy protects physical safety seem particularly frequent. We close by noting that new technologies such as the Internet of Things and connected cars create privacy gaps that can endanger their users’ safety, suggesting the need for new safety-enhancing privacy rules in these areas.

By emphasizing the deep connection between privacy and safety, we seek to lay a foundation for planned future work arguing that U.S. administrative agencies with a safety mission should make privacy protection one of their goals.


Enjoy!

Posted in Law: Privacy, Writings | 1 Comment

Just Uploaded–Big Data: Destroyer of Informed Consent (Final Text)

I’ve just uploaded the final text of Big Data: Destroyer of Informed Consent, which is due to appear Real Soon Now in a special joint issue of the Yale Journal of Health Policy, Law, and Ethics and the Yale Journal of Law and Technology. This pre-publication version has everything the final version will have except the correct page numbers. Here’s the abstract:

The ‘Revised Common Rule’ took effect on January 21, 2019, marking the first change since 2005 to the federal regulation that governs human subjects research conducted with federal support or in federally supported institutions. The Common Rule had required informed consent before researchers could collect and use identifiable personal health information. While informed consent is far from perfect, it is and was the gold standard for data collection and use policies; the standard in the old Common Rule served an important function as the exemplar for data collection in other contexts.

Unfortunately, true informed consent seems incompatible with modern analytics and ‘Big Data’. Modern analytics hold out the promise of finding unexpected correlations in data; it follows that neither the researcher nor the subject may know what the data collected will be used to discover. In such cases, traditional informed consent, in which the researcher fully and carefully explains study goals to subjects, is inherently impossible. In response, the Revised Common Rule introduces a new, and less onerous, form of “broad consent” in which human subjects agree to forms of data use and re-use as varied as researchers’ lawyers can squeeze into a consent form. Broad consent paves the way for using identifiable personal health information in modern analytics. But these gains for users of modern analytics come with side-effects, not least a substantial lowering of the aspirational ceiling for other types of information collection, such as in commercial genomic testing.

Continuing improvements in data science also cause a related problem, in that data thought by experimenters to have been de-identified (and thus subject to more relaxed rules about use and re-use) sometimes proves to be re-identifiable after all. The Revised Common Rule fails to take due account of real re-identification risks, especially when DNA is collected. In particular, the Revised Common Rule contemplates storage and re-use of so-called de-identified biospecimens even though these contain DNA that might be re-identifiable with current or foreseeable technology.

Defenders of these aspects of the Revised Common Rule argue that ‘data saves lives.’ But even if that claim is as applicable as its proponents assert, the effects of the Revised Common Rule will not be limited to publicly funded health sciences, and its effects will be harmful elsewhere.
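
An aside for readers who wonder how “de-identified” data gets re-identified: the classic move is a linkage attack, joining the released records to some public dataset on shared quasi-identifiers such as ZIP code, birth date, and sex. Here is a minimal sketch of the idea; every name, record, and field in it is invented for illustration, and none of it comes from the paper.

```python
# Toy linkage attack. All records below are invented; the point is only
# that "de-identified" rows can be matched to named people by joining on
# quasi-identifiers they share with a public dataset.

deidentified_health_records = [
    {"zip": "33146", "dob": "1954-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "33139", "dob": "1988-02-10", "sex": "M", "diagnosis": "asthma"},
]

public_voter_roll = [  # hypothetical public dataset with names attached
    {"name": "Jane Roe", "zip": "33146", "dob": "1954-07-31", "sex": "F"},
    {"name": "John Doe", "zip": "33139", "dob": "1988-02-10", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "dob", "sex")

# Join the two datasets on the quasi-identifiers.
for rec in deidentified_health_records:
    key = tuple(rec[q] for q in QUASI_IDENTIFIERS)
    for voter in public_voter_roll:
        if tuple(voter[q] for q in QUASI_IDENTIFIERS) == key:
            print(f"{voter['name']} -> {rec['diagnosis']}")
```

When the released data includes DNA, the “quasi-identifier” is effectively the whole record, which is why the abstract singles out biospecimens.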

An earlier version, presented at the Yale symposium that the conference volume memorializes, engendered significant controversy — the polite term for what in a few cases were howls of rage — from medical professionals looking forward to working with Big Data. Since even the longer final version is shorter, and if only for that reason clearer, than much of what I write, I wouldn’t be surprised if the final version causes some fuss too.

Posted in Administrative Law, AI, Science/Medicine, Writings | Comments Off on Just Uploaded–Big Data: Destroyer of Informed Consent (Final Text)

New Paper–“Big Data: Destroyer of Informed Consent”

Just posted: A near-final draft of my latest paper, Big Data: Destroyer of Informed Consent. It will appear later this year in a special joint issue of the Yale Journal of Health Policy, Law, and Ethics and the Yale Journal of Law and Technology.

Here’s the tentative abstract (I hate writing abstracts):

The ‘Revised Common Rule’ took effect on January 21, 2019, marking the first change since 2005 to the federal regulation that governs human subjects research conducted with federal support or in federally supported institutions. The Common Rule had required informed consent before researchers could collect and use identifiable personal health information. While informed consent is far from perfect, it is and was the gold standard for data collection and use policies; the standard in the old Common Rule served an important function as the exemplar for data collection in other contexts.

Unfortunately, true informed consent seems incompatible with modern analytics and ‘Big Data’. Modern analytics hold out the promise of finding unexpected correlations in data; it follows that neither the researcher nor the subject may know what the data collected will be used to discover. In such cases, traditional informed consent, in which the researcher fully and carefully explains study goals to subjects, is inherently impossible. In response, the Revised Common Rule introduces a new, and less onerous, form of “broad consent” in which human subjects agree to forms of data use and re-use as varied as researchers’ lawyers can squeeze into a consent form. Broad consent paves the way for using identifiable personal health information in modern analytics. But these gains for users of modern analytics come with side-effects, not least a substantial lowering of the aspirational ceiling for other types of information collection, such as in commercial genomic testing.

Continuing improvements in data science also cause a related problem, in that data thought by experimenters to have been de-identified (and thus subject to more relaxed rules about use and re-use) sometimes proves to be re-identifiable after all. The Revised Common Rule fails to take due account of real re-identification risks, especially when DNA is collected. In particular, the Revised Common Rule contemplates storage and re-use of so-called de-identified biospecimens even though these contain DNA that might be re-identifiable with current or foreseeable technology.

Defenders of these aspects of the Revised Common Rule argue that ‘data saves lives’. But even if that claim is as applicable as its proponents assert, the effects of the Revised Common Rule will not be limited to publicly funded health sciences, and its effects will be harmful elsewhere.

This is my second foray into the deep waters where AI meets Health Law. Plus it’s well under 50 pages! (First foray here; somewhat longer.)

Posted in AI, Law: Privacy, Writings | Comments Off on New Paper–“Big Data: Destroyer of Informed Consent”

‘When AIs Outperform Doctors’ Published

I’m happy to report that the Arizona Law Review has published When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning, 61 Ariz. L. Rev. 33 (2019), which I co-authored with Ian Kerr (U. Ottawa) and Joelle Pineau (McGill U.).

Here’s the abstract:

Someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. What will the dominance of ML diagnostics mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors, and—in the long run—for the quality of medical diagnostics itself?

This Article argues that once ML diagnosticians, such as those based on neural networks, are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented carefully, a physician’s duty to use ML systems in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve. Although at first doctor + machine may be more effective than either alone because humans and ML systems might make very different kinds of mistakes, in time, as ML systems improve, effective ML could create overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Ultimately, a similar dynamic might extend to treatment also. If we reach the point where the bulk of clinical outcomes collected in databases are ML-generated diagnoses, this may result in future decisions that are not easily audited or understood by human doctors. Given the well-documented fact that treatment strategies are often not as effective when deployed in clinical practice as in preliminary evaluation, the lack of transparency introduced by the ML algorithms could lead to a decrease in quality of care. This Article describes salient technical aspects of this scenario, particularly as it relates to diagnosis, and canvasses various possible technical and legal solutions that would allow us to avoid these unintended consequences of medical malpractice law. Ultimately, we suggest there is a strong case for altering existing medical liability rules to avoid a machine-only diagnostic regime. We argue that the appropriate revision to the standard of care requires maintaining meaningful participation in the loop by physicians.

I think this is one of the best articles I’ve written or co-written, certainly in the top five. I’m particularly proud that I worked out, or intuited, a property of Machine Learning that was either not present or certainly not prominent in the literature: that if all the inputs to future generations of an ML system are the outputs of earlier generations of that system, there’s a chance it may all go wrong.

Reasonable people could disagree about the size of that chance, but if it happens, at least with current technology, there’s no way the system itself would warn us. Depending on the complexity of the system, and the extent to which doctors have been deskilled by the prevalence of ML technology, we might be hard put to notice some types of degradation ourselves.

It would be good, therefore, to try to engineer legal rules that would make this possibly very unhealthy outcome much less likely.
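
To make the worry concrete, here is a deliberately toy simulation of the feedback loop. It is not from the article: the binary-diagnosis setup, the 2% per-generation error rate, and the label-flipping model of "training" are all invented for illustration. Each generation learns only from the previous generation’s outputs, so errors compound and accuracy against ground truth quietly erodes.

```python
import random

random.seed(42)

# Ground truth: the correct (binary) diagnosis for 10,000 hypothetical cases.
truth = [random.randint(0, 1) for _ in range(10_000)]

def next_generation(labels, error_rate=0.02):
    """Stand-in for one ML generation: it reproduces its training labels,
    but flips a small fraction at random (the model's own error)."""
    return [y if random.random() > error_rate else 1 - y for y in labels]

labels = truth  # generation 0 is trained on human-verified diagnoses
for gen in range(1, 11):
    labels = next_generation(labels)  # trains only on the last generation's output
    accuracy = sum(p == t for p, t in zip(labels, truth)) / len(truth)
    print(f"generation {gen}: accuracy vs. ground truth = {accuracy:.3f}")
```

Note that nothing inside the loop flags the decline: each generation matches its own training data well even as it drifts from reality, which is the sense in which the system itself cannot warn us.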

Posted in AI, Writings | Comments Off on ‘When AIs Outperform Doctors’ Published