Category Archives: AI

Big Data: Destroyer of Informed Consent

My guest post, Big Data: Destroyer of Informed Consent, written for this Friday’s Yale Workshop on “The Law and Policy of AI, Robotics & Telemedicine,” is now online at the Balkinization blog.

Consent, that is ‘notice and choice,’ is a fundamental concept in the U.S. approach to data privacy, as it reflects principles of individual autonomy, freedom of choice, and rationality. Big Data, however, makes the traditional approach to informed consent incoherent and unsupportable, and indeed calls the entire concept of consent, at least as currently practiced in the U.S., into question.

Big Data kills the possibility of true informed consent because, by its very nature, one purpose of big data analytics is to find unexpected patterns in data. Informed consent requires at the very least that the person requesting the consent know what she is asking the subject to consent to. In principle, we hope that before the subject agrees she too comes to understand the scope of the agreement. But with big data analytics, particularly analytics based on machine learning, neither party to that conversation can know what the data may be used to discover.
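To make that concrete, here is a minimal, purely illustrative sketch (in Python with scikit-learn; the dataset and its supposed meaning are invented for this post) of how an unsupervised learner can surface groupings that neither party could have enumerated when consent was given:

```python
# Purely illustrative: an unsupervised learner finds structure that
# neither the data collector nor the subject anticipated at consent
# time. The data and its supposed meaning are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Suppose these records were collected for one stated purpose, say
# "improving appointment scheduling": visit frequency, average visit
# length, and distance traveled (synthetic numbers here).
records = rng.normal(size=(500, 3))

# Later, an analyst runs a clustering that the consent form never
# mentioned; the groups acquire meaning only after the fact.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(records)

# Group sizes for the newly "discovered" categories of subjects. The
# purpose of this analysis did not exist when consent was sought, so
# it could not have been disclosed. That is the core problem.
print(np.bincount(labels))
```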

I then go on to discuss the Revised Common Rule, which governs any federally funded human subjects research. The revision takes effect in early 2019, and it relaxes the informed consent rule in a way that will set a bad precedent for private data mining and research. Henceforth researchers will be permitted to obtain open-ended “broad consent,” i.e. “prospective consent to unspecified future research,” instead of having to obtain informed consent, or even ordinary consent, on a case-by-case basis. That’s not a step forward for privacy or personal control of data, and although the change is driven by genuine public health concerns, its side effects could be very widespread.


Organizing the Federal Government’s Regulation of AI

Jack Balkin is running blog posts summarizing contributions to this Friday’s Yale Workshop on “The Law and Policy of AI, Robotics & Telemedicine”.

One of my two summaries is online at Balkinization: Organizing the Federal Government’s Regulation of AI. In it I argue that most issues relating to medical AI shouldn’t be regulated separately from AI in general; there are real policy issues, but they’re complicated. A first step should be to set up a national think tank and coordination center in the White House that could advise both federal agencies and state and local governments.


Three Tech-Related Stories Today

Three things that caught my eye this morning:


Interviewed on AI & Medicine

Robert David Hart’s Quartz article, Who’s to blame when a machine botches your surgery?, has some of my thoughts on AI and medicine.

I’m busy revising my paper on AI and medicine (co-authored with Ian Kerr and Joëlle Pineau) and plan to post a new version in early October.


New Paper ‘When AIs Outperform Doctors’

My latest (draft!) paper, When AIs Outperform Doctors: The dangers of a tort-induced over-reliance on machine learning and what (not) to do about it, is, I hope, special. I had the good fortune to co-author it with Canadian polymath Ian Kerr and with Joëlle Pineau, one of the world’s leading machine learning experts.

Here’s the abstract:

Someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. What will the dominance of ML diagnostics mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors, and—in the longer run—for the quality of medical diagnostics itself?

This article argues that once ML diagnosticians, such as those based on neural networks, are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented carefully, a physician’s duty to use ML systems in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve. In time, effective machine learning could create overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Ultimately, a similar dynamic might extend to treatment as well. If we reach the point where the bulk of clinical outcomes collected in databases are ML-generated diagnoses, the result may be future decision scenarios that are not easily audited or understood by human doctors. Given the well-documented fact that treatment strategies are often less effective when deployed in real clinical practice than in preliminary evaluation, the lack of transparency introduced by ML algorithms could lead to a decrease in the quality of care. The article describes the salient technical aspects of this scenario, particularly as it relates to diagnosis, and canvasses various possible technical and legal solutions that would allow us to avoid these unintended consequences of medical malpractice law. Ultimately, we suggest there is a strong case for altering existing medical liability rules in order to avoid a machine-only diagnostic regime. We argue that the appropriate revision to the standard of care requires maintaining meaningful participation by physicians in the loop.
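To give a flavor of what keeping “meaningful participation by physicians in the loop” might look like in practice, here is a hedged, hypothetical sketch (the function, labels, and threshold are invented for illustration and are not from the paper): the model’s diagnosis is accepted automatically only when its confidence clears a threshold, and every other case is routed to a human doctor.

```python
# Hypothetical sketch of a physician-in-the-loop deferral rule; the
# names, labels, and threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class Diagnosis:
    label: str         # e.g. "melanoma" or "benign nevus"
    confidence: float  # model's estimated probability for that label
    decided_by: str    # "model" or "physician"

def triage(model_label: str, model_confidence: float,
           threshold: float = 0.95) -> Diagnosis:
    """Accept the model's diagnosis only at high confidence.

    Routing the hard, low-confidence cases to a physician preserves a
    stream of human-validated outcomes, one answer to the abstract's
    worry that a machine-only regime erodes auditability over time.
    """
    if model_confidence >= threshold:
        return Diagnosis(model_label, model_confidence, decided_by="model")
    # Below threshold: the physician reviews the case and may override.
    return Diagnosis(model_label, model_confidence, decided_by="physician")

# A borderline case gets flagged for human review rather than
# auto-accepted:
print(triage("melanoma", 0.72))
```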

I hope that it will be of interest to lawyers, doctors, computer scientists, and a range of medical service providers and policy-makers. Comments welcome!
