I’m in Ottawa today for the Machine M.D. conference. My panel, on “Regulating Health and Safety,” started at 8:30am, so now that it’s over I get to enjoy the rest of the very packed schedule.
(I’m missing the also wonderful annual Privacy Law Scholars conference to be here, an example of the difficulties in trying to write in, and keep up in, multiple areas even within technology law.)
My argument in my talk was that, contrary to a number of articles by others that are now in press, we don’t want a super-AI regulator, but rather need to find a way to strengthen the AI capacity of existing sectoral regulators like the FDA, NHTSA, and many others. The exceptions to this general rule arise only if the AI aspect of a regulatory question predominates over the sectoral — something I argue is rare, and probably limited to issues regarding data access, data quality, job losses due to AI, and possibly the regulation of (or at least liability for) emergent AI behavior.
Unsurprisingly, some members of the audience, many of whom are health professionals, pushed back against the idea that when it comes to AI health really isn’t all that different from transport, finance, sentencing, or other predictive profiling applications. A polite discussion — we are after all in Canada — ensued.
Of course some day, someone really will figure out how to use a robot to do a burglary. Or, more likely, subvert one via your smart home.
We’ll be talking about what robots are actually coming, what they may do, and how we should prepare for it, at We Robot 2019, which starts tomorrow. Advance registration is closed, but on-site registration will be available.
This evening I’m attending an event on “Blockchain: Business, Regulation, Law and the Way Forward” featuring Jerry Brito (Coin Center), Marcia Weldon (MiamiLaw), and Samir Patel (Holland & Knight).
The event is organized jointly by three student groups: the Federalist Society, the Business Law Society, and the Alliance Against Human Trafficking. That’s a pretty eclectic group. I think it shows how widely the blockchain dream has taken hold.
And yet, despite this, not absolutely everyone loves blockchain. I for one am somewhat skeptical, as I think the use cases are much more limited than the optimists would have it. Indeed, my views are pretty well summarized by this great graphic, which sets out a decision tree for people thinking of using blockchain:
Yes, the reality is a bit more complicated, but if you can’t explain why the above doesn’t apply to you, you probably shouldn’t be using blockchain….
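For readers who can’t see the graphic, decision trees of this genre tend to ask questions along the following lines. This is a rough sketch of the genre, not a reproduction of the chart; the specific questions and wording are illustrative assumptions:

```python
# A generic "should you use a blockchain?" decision tree, in the spirit of
# the skeptics' flowcharts. The questions are illustrative, not the exact
# content of the graphic referenced above.

def should_use_blockchain(
    need_shared_database: bool,
    multiple_writers: bool,
    writers_trust_each_other: bool,
    trusted_third_party_available: bool,
) -> str:
    if not need_shared_database:
        return "No blockchain needed: you don't need a shared database."
    if not multiple_writers:
        return "No blockchain needed: a single writer can use an ordinary database."
    if writers_trust_each_other:
        return "No blockchain needed: use a shared database among trusting parties."
    if trusted_third_party_available:
        return "No blockchain needed: use a trusted intermediary."
    return "A blockchain might actually help."

# Most real-world proposals exit at one of the early branches, which is
# the skeptic's point.
```

The striking feature of such trees is how many branches end in “you don’t need a blockchain” — only the narrow case of multiple mutually distrustful writers with no acceptable intermediary reaches the final node.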
We have an action-packed lineup planned for We Robot 2019. The main conference is April 12-13, with an optional workshop day on April 11. I’ve put the schedule below; you should register now for We Robot 2019 if you haven’t already.
Consent, that is ‘notice and choice,’ is a fundamental concept in the U.S. approach to data privacy, as it reflects principles of individual autonomy, freedom of choice, and rationality. Big Data, however, makes the traditional approach to informed consent incoherent and unsupportable, and indeed calls the entire concept of consent, at least as currently practiced in the U.S., into question.
Big Data kills the possibility of true informed consent because by its very nature one purpose of big data analytics is to find unexpected patterns in data. Informed consent requires at the very least that the person requesting the consent know what she is asking the subject to consent to. In principle, we hope that before the subject agrees she too comes to understand the scope of the agreement. But with big data analytics, particularly those based on Machine Learning, neither party to that conversation can know what the data may be used to discover.
I then go on to discuss the Revised Common Rule, which governs any federally funded human subjects research. The revision takes effect in early 2019, and it relaxes the informed consent rule in a way that will set a bad precedent for private data mining and research. Henceforth researchers will be permitted to obtain open-ended “broad consent” — i.e., “prospective consent to unspecified future research” — instead of requiring informed consent, or even ordinary consent, on a case-by-case basis. That’s not a step forward for privacy or personal control of data, and although it’s being driven by genuine public health concerns the side effects could be very widespread.