We have an action-packed lineup planned for We Robot 2019. The main conference is April 12-13, with an optional workshop day on April 11. I’ve put the schedule below; you should register now for We Robot 2019 if you haven’t already.
Consent, that is, ‘notice and choice,’ is a fundamental concept in the U.S. approach to data privacy, as it reflects principles of individual autonomy, freedom of choice, and rationality. Big Data, however, makes the traditional approach to informed consent incoherent and unsupportable, and indeed calls the entire concept of consent, at least as currently practiced in the U.S., into question.
Big Data kills the possibility of true informed consent because, by its very nature, one purpose of big data analytics is to find unexpected patterns in data. Informed consent requires at the very least that the person requesting the consent know what she is asking the subject to consent to. In principle, we hope that before the subject agrees she too comes to understand the scope of the agreement. But with big data analytics, particularly those based on machine learning, neither party to that conversation can know what the data may be used to discover.
I then go on to discuss the Revised Common Rule, which governs any federally funded human subjects research. The revision takes effect in early 2019, and it relaxes the informed consent rule in a way that will set a bad precedent for private data mining and research. Henceforth researchers will be permitted to obtain open-ended “broad consent” (i.e., “prospective consent to unspecified future research”) instead of being required to secure informed consent, or even ordinary consent, on a case-by-case basis. That is not a step forward for privacy or personal control of data, and although it is being driven by genuine public health concerns, the side effects could be very widespread.
One of my two summaries is online at Balkinization, Organizing the Federal Government’s Regulation of AI. In it I argue that most issues relating to medical AI shouldn’t be regulated separately from AI in general; there are real issues of policy, but they’re complicated. A first step should be to set up a national think tank and coordination center in the White House that could advise both agencies and state and local governments.
We Robot 2019 seeks contributions by American and international academics, practitioners, and others, in the form of scholarly papers, technological demonstrations, or posters. We Robot fosters conversations between the people designing, building, and deploying robots and the people who design or influence the legal and social structures in which robots will operate. We particularly encourage papers that reflect interdisciplinary collaborations between developers of robotics, AI, and related technology and experts in the humanities, social science, and law and policy.
This conference will build on a growing body of scholarship exploring how the increasing sophistication and autonomous decision-making capabilities of robots, and their widespread deployment everywhere from the home, to hospitals, to public spaces, to the battlefield, disrupt existing legal regimes or require rethinking of policy issues.