Category Archives: AI

I’ve Joined the Editorial Board of the Technology & Regulation Journal

I’m proud to be part of the editorial board of the brand-new journal Technology and Regulation (TechReg), housed at the Tilburg Institute for Law, Technology, and Society (TILT) at Tilburg University in the Netherlands.

Technology and Regulation (TechReg) is an international journal of law, technology and society, with an interdisciplinary identity. TechReg provides an online platform for disseminating original research on the legal and regulatory challenges posed by existing and emerging technologies (and their applications) including, but by no means limited to, the Internet and digital technology, artificial intelligence and machine learning, robotics, neurotechnology, nanotechnology, biotechnology, energy and climate change technology, and health and food technology. We conceive of regulation broadly to encompass ways of dealing with, ordering and understanding technologies and their consequences, such as through legal regulation, competition, social norms and standards, and technology design (or in Lessig’s terms: law, market, norms and architecture).

We aim to address critical and sometimes controversial questions such as:

  • How do new technologies shape society both positively and negatively?
  • Should technology development be steered towards societal goals, and if so, which goals and how?
  • What are the benefits and dangers of regulating human behavior through technology?
  • What is the most appropriate response to technological innovation, in general or in particular cases?

It is in this sense that TechReg is intrinsically interdisciplinary: we believe that legal and regulatory debates on technology are inextricable from societal, political and economic concerns, and that therefore technology regulation requires a multidisciplinary, integrated approach. Through a combination of monodisciplinary, multidisciplinary and interdisciplinary articles, the journal aims to contribute to an integrated vision of law, technology and society.

We invite original, well-researched and methodologically rigorous submissions from academics and practitioners, including policy makers, on a wide range of research areas such as privacy and data protection, security, surveillance, cybercrime, intellectual property, innovation, competition, governance, risk, ethics, media and data studies, and others.

TechReg is double-blind peer-reviewed and completely open access for both authors and readers. TechReg does not charge article processing fees.

Posted in AI, Personal, Readings | Leave a comment

‘When AIs Outperform Doctors’ Published

I’m happy to report that the Arizona Law Review has published When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning, 61 Ariz. L. Rev. 33 (2019), that I co-authored with Ian Kerr (U. Ottawa) and Joelle Pineau (McGill U.).

Here’s the abstract:

Someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. What will the dominance of ML diagnostics mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors, and—in the long run—for the quality of medical diagnostics itself?

This Article argues that once ML diagnosticians, such as those based on neural networks, are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented carefully, a physician’s duty to use ML systems in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve. Although at first doctor + machine may be more effective than either alone because humans and ML systems might make very different kinds of mistakes, in time, as ML systems improve, effective ML could create overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Ultimately, a similar dynamic might extend to treatment also. If we reach the point where the bulk of clinical outcomes collected in databases are ML-generated diagnoses, this may result in future decisions that are not easily audited or understood by human doctors. Given the well-documented fact that treatment strategies are often not as effective when deployed in clinical practice as in preliminary evaluation, the lack of transparency introduced by the ML algorithms could lead to a decrease in quality of care. This Article describes salient technical aspects of this scenario, particularly as it relates to diagnosis, and canvasses various possible technical and legal solutions that would allow us to avoid these unintended consequences of medical malpractice law. Ultimately, we suggest there is a strong case for altering existing medical liability rules to avoid a machine-only diagnostic regime. We argue that the appropriate revision to the standard of care requires maintaining meaningful participation in the loop by physicians.

I think this is one of the best articles I’ve written or co-written, certainly in the top five. I’m particularly proud that I worked out, or intuited, a property of Machine Learning that was either absent from, or at least not prominent in, the literature: that if all the inputs to future generations of ML systems come from the outputs of earlier generations of the ML system, there’s a chance it may all go wrong.

Reasonable people could disagree about the size of that chance, but if it happens, at least with current technology, the system itself would have no way to warn us. Depending on the complexity of the system, and on the extent to which doctors have been deskilled by the prevalence of ML technology, we might be hard put to notice some types of degradation ourselves.

It would be good, therefore, to try to engineer legal rules that would make this possibly very unhealthy outcome much less likely.
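The feedback-loop worry above can be illustrated with a toy simulation (this is my own illustrative sketch, not from the article; the function name and all parameters are hypothetical). Each “generation” fits a simple model, here just a Gaussian mean and variance, to data sampled entirely from the previous generation’s fitted model. With no fresh real-world data entering the loop, estimation noise compounds and the learned distribution silently drifts away from the original, with no internal signal that anything has gone wrong:

```python
import random
import statistics

def generational_drift(n_samples=50, n_generations=1000, seed=0):
    """Toy model of an ML pipeline trained only on its predecessor's output.

    Each generation fits a Gaussian (mean, variance) to samples drawn
    from the previous generation's fitted model, then becomes the data
    source for the next generation.
    """
    rng = random.Random(seed)
    mu, var = 0.0, 1.0          # the "true" distribution we start from
    history = [(mu, var)]
    for _ in range(n_generations):
        # The only training data is the previous model's own output.
        samples = [rng.gauss(mu, var ** 0.5) for _ in range(n_samples)]
        mu = statistics.fmean(samples)      # refit mean on that output
        var = statistics.variance(samples)  # refit (sample) variance
        history.append((mu, var))
    return history

history = generational_drift()
print(f"generation 0:    mean={history[0][0]:+.3f}  var={history[0][1]:.3f}")
print(f"generation 1000: mean={history[-1][0]:+.3f}  var={history[-1][1]:.3f}")
```

Typically the fitted variance collapses over many generations, a statistical analogue of the degradation worry: the system keeps producing confident output while the information content quietly erodes, and nothing inside the loop flags the drift.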

Posted in AI, Writings | Leave a comment

I’m on the ‘This Week in Health Law’ Podcast

Nicholas Terry of Indiana University was kind enough to ask me to join him and other experts on episode 151 (!) of his podcast, This Week in Health Law (TWIHL), which was devoted to AI and health care:

I am joined by Abbe Gluck, Professor of Law and the Faculty Director of the Solomon Center for Health Law and Policy at Yale Law School. In November 2018 her team pulled together an excellent roundtable on “The Law and Policy of AI, Robotics, and Telemedicine in Health Care.” This episode of TWIHL is the first of two taking a deeper dive into just a few of the issues that were so well presented at the roundtable. Here we were joined by Michael Froomkin, the Laurie Silvers and Mitchell Rubenstein Distinguished Professor of Law at the University of Miami School of Law, and by Nicholson Price, Assistant Professor of Law at The University of Michigan Law School. Topics ranged from consent in the next generation of healthcare research to data protection and appropriate regulatory models.

Posted in AI, The Media | Leave a comment

Big Data: Destroyer of Informed Consent

My guest post Big Data: Destroyer of Informed Consent, written for this Friday’s Yale Workshop on “The Law and Policy of AI, Robotics & Telemedicine,” is now online at the Balkinization blog.

Consent, that is ‘notice and choice,’ is a fundamental concept in the U.S. approach to data privacy, as it reflects principles of individual autonomy, freedom of choice, and rationality. Big Data, however, makes the traditional approach to informed consent incoherent and unsupportable, and indeed calls the entire concept of consent, at least as currently practiced in the U.S., into question.

Big Data kills the possibility of true informed consent because by its very nature one purpose of big data analytics is to find unexpected patterns in data. Informed consent requires at the very least that the person requesting the consent know what she is asking the subject to consent to. In principle, we hope that before the subject agrees she too comes to understand the scope of the agreement. But with big data analytics, particularly those based on Machine Learning, neither party to that conversation can know what the data may be used to discover.

I then go on to discuss the Revised Common Rule, which governs any federally funded human subjects research. The revision takes effect in early 2019, and it relaxes the informed consent rule in a way that will set a bad precedent for private data mining and research. Henceforth researchers will be permitted to obtain open-ended “broad consent” (i.e., “prospective consent to unspecified future research”) instead of being required to get informed consent, or even ordinary consent, on a case-by-case basis. That’s not a step forward for privacy or personal control of data, and although it’s being driven by genuine public health concerns, the side effects could be very widespread.

Posted in AI, Law: Privacy, Talks & Conferences | Leave a comment

Organizing the Federal Government’s Regulation of AI

Jack Balkin is running blog posts summarizing contributions to this Friday’s Yale Workshop on “The Law and Policy of AI, Robotics & Telemedicine.”

One of my two summaries, Organizing the Federal Government’s Regulation of AI, is online at Balkinization. In it I argue that most issues relating to medical AI shouldn’t be regulated separately from AI in general; there are real issues of policy, but they’re complicated. A first step should be to set up a national think tank and coordination center in the White House that could advise both federal agencies and state and local governments.

Posted in AI, Talks & Conferences | Leave a comment

Three Tech-Related Stories Today

Three things that caught my eye this morning:

Posted in 2020 Election, AI, Sufficiently Advanced Technology | 9 Comments