Category Archives: AI

On Robot & AI Personhood

The question of robot and AI personhood comes up a lot, and likely will come up even more in the future with the proliferation of models like GPT-3, which can be used to mimic human conversations very convincingly. I just finished a first draft of a short essay surveying contemporary issues in robot law and policy; that gave me a chance to briefly sketch out my views on the personhood issue, and I figured I might share it here:

As the law currently stands in the United States and, as far as I know, everywhere else,1 all robots of every type are treated as chattel. That is, in the words of Neil Richards and William Smart, “Robots are, and for many years will remain, tools. They are sophisticated tools that use complex software, to be sure, but no different in essence than a hammer, a power drill, a word processor, a web browser, or the braking system in your car.”2 It follows that robot personhood (or AI personhood) under law remains a remote prospect, and that some lesser form of increased legal protection for robots, beyond that normally accorded to chattels in order to protect their owners’ rights, also remains quite unlikely. Indeed, barring some game-changing breakthrough in neural networks or some other unforeseen technology, there seems little prospect that in the coming decades machines of any sort will achieve the sort of self-awareness and sentience that we commonly associate with a legitimate claim to the bundle of rights and respect we organize under the rubric of personhood.3

There are, however, two different scenarios in which society or policymakers might choose to bestow some sort of rights or protections on robots beyond those normally given to chattels. The first is that we discover some social utility in the legal fiction that a robot is a person. No one, after all, seriously believes that a corporation is an actual person, or indeed that a corporation is alive or sentient,4 yet we accept the legal fiction of corporate personhood because it serves interests that society—or parts of it—finds useful, such as the corporation’s ability to transact in its own name and the limitation of actual humans’ liability. Although nothing at present suggests similar social gains from the legal recognition of robotic personhood (indeed, issues of liability and responsibility for robot harms need more clarity, not less accountability), policymakers might conceivably come to see things differently. In the meantime, it is likely that any need for, say, giving robots the power to transact can be met through ordinary uses of the corporate form, in which a firm might, for example, be the legal owner of a robot.5

Early cases further suggest that U.S. courts are not willing to assign a copyright or a patent to a robot or an AI even when it generated the work or design at issue. Here, however, the primary justification has been straightforward statutory construction: holdings that the relevant U.S. laws only allow intellectual property rights to be granted to persons, and that the legislature did not intend to include machines within that definition.6 Rules around the world may differ. For example, an Australian federal court ordered an AI’s patent to be recognized by IP Australia.7 Similarly, a Chinese court found that an AI-produced text was deserving of copyright protection under Chinese law.8

A more plausible scenario for some sort of robot rights begins with the observation that human beings tend to anthropomorphize robots. As Kate Darling observes, “Our well-documented inclination to anthropomorphically relate to animals translates remarkably well to robots,” and all the more so to lifelike social robots designed to elicit that reaction—even when people know that they are really dealing with a machine.9 Similarly, studies suggest that many people are wired not only to feel more empathy towards lifelike robots than towards other objects, but also, as a result, to feel that harm to robots is wrong.10 Thus, we might choose to ban the “abuse” of robots (beating, torturing) either because it offends people, or because we fear that some persons who abuse robots may develop habits of thought or behavior that will carry over into their relationships with live people or animals, abuse of which is commonly prohibited. Were we to find empirical support for the hypothesis that abuse of lifelike, or perhaps humanlike, robots makes abusive behavior towards people more likely, that would provide strong grounds for banning some types of harms to robots—a correlative11 to giving robots certain rights against humans.12

It’s an early draft, so comments welcome!


Notes


  1. The sole possible exception is Saudi Arabia, which gave ‘citizenship’ to a humanoid robot, Sophia, in 2017. It is hard to see this as anything more than a publicity stunt, both because female citizenship in Saudi Arabia comes with restrictions that do not seem to apply to Sophia, and because “her” “life” consists of … marketing for her Hong-Kong-based creators. See Emily Reynolds, The agony of Sophia, the world’s first robot citizen condemned to a lifeless career in marketing, Wired (Jan. 6, 2018), https://www.wired.co.uk/article/sophia-robot-citizen-womens-rights-detriot-become-human-hanson-robotics.
  2. Neil Richards & William Smart, How Should the Law Think About Robots?, in Robot Law 1, 20 (Ryan Calo, A. Michael Froomkin & Ian Kerr, eds. 2016).
  3. For an interesting exploration of the issues, see James Boyle, Endowed by Their Creator? The Future of Constitutional Personhood, Brookings Institution (Mar. 9, 2011). For a full-throated denunciation of the ‘robot rights’ concept as philosophical error and ethical distraction, see Abeba Birhane & Jelle van Dijk, Robot Rights? Let’s Talk about Human Welfare Instead, Proceedings of the 3rd AAAI/ACM Conference on AI, Ethics, and Society 207-213 (Feb. 7, 2020).
  4. Charlie Stross has suggested, though, that we should think of corporations as “Slow AIs”. Charlie Stross, Dude, you broke the future!, Charlie’s Diary (Jan. 2, 2018), https://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html (transcript of remarks to 34th Chaos Communication Congress, Leipzig, Dec. 2017).
  5. For speculation as to how a robot or AI might own itself, without people in the ownership chain, see Shawn J. Bayern, Autonomous Organizations (2021); Shawn J. Bayern, Are Autonomous Entities Possible?, 114 Nw. U. L. Rev. Online 23 (2019); Lynn LoPucki, Algorithmic Entities, 95 U.C.L.A. L. Rev. 887 (2018).
  6. See Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022) (upholding USPTO decision refusing application for patent in name of AI). For arguments in favor of granting such patents, see, e.g., Ryan Abbott, I Think, Therefore I Invent: Creative Computers and the Future of Patent Law, 57 B.C. L. Rev. 1079 (2016). For a European perspective, see P. Bernt Hugenholtz & João Pedro Quintais, Copyright and Artificial Creation: Does EU Copyright Law Protect AI-Assisted Output?, 52 IIC – Int’l Rev. Intell. Prop. and Competition L. 1190 (2021). The recent literature on the copyrightability of machine-generated texts is vast, starting with Annemarie Bridy, Coding Creativity: Copyright and the Artificially Intelligent Author, 2012 Stan. Tech. L. Rev. 5 (2012). An elegant recent article disagreeing with Bridy, with many citations to the literature, is Carys Craig & Ian Kerr, The Death of the AI Author, 52 Ottawa L. Rev. 31 (2021).
  7. Commissioner of Patents v. Thaler (DABUS), [2022] FCAFC 62.
  8. Paul Sawers, Chinese court rules AI-written article is protected by copyright, VentureBeat (Jan. 10, 2020), https://rai2022.umlaw.net/wp-content/uploads/2022/02/16_Chinese-court-rules-AI-written-article-is-protected-by-copyright.pdf.
  9. Clifford Nass & Youngme Moon, Machines and Mindlessness: Social Responses to Computers, 56 J. Soc. Issues 81 (2000); Kate Darling, Extending Legal Protection to Social Robots, in Robot Law 213, 214, 220 (Ryan Calo, A. Michael Froomkin & Ian Kerr, eds. 2016).
  10. Darling, supra note 9, at 223.
  11. In Hohfeldian terms, if persons have a duty not to harm a robot, then, correlatively, the robot has a right not to be harmed by those persons. See Pierre Schlag, How to Do Things with Hohfeld, 78 L. & Contemp. Probs. 185, 200-03 (2014). Hohfeld was concerned with the relations of persons, and probably would have thought the idea of property having rights to be a category error. Yet if the duty to forbear from certain harms extends to the owner of the robot as well as others, I submit that the “rights” term of the correlative relations is a useful way to describe what the robot has.
  12. Darling, supra note 9, at 226-31.
Posted in AI, Robots | 2 Comments

#WeRobot Finished With a Bang!

(Metaphorically, only.)

We will have recordings of substantially all the discussions up online in about a week.

Meanwhile, you can still read the papers.  You might want to start with the prize-winners:

… although I’d also like to give a shout-out to two of my personal favorites:

That said, all the papers were really good, which is pretty amazing.

Posted in AI, Robots, Talks & Conferences

#WeRobot 2021 Starts Today!

Join us for the 10th Anniversary Edition – Register Here. All events will be virtual. All times are US Eastern time.

At We Robot we ask (and expect) that everyone reads the papers scheduled for Days One and Two in advance of those sessions. (The Workshops do not have advance papers.) In most cases, authors do not deliver their papers. Instead we go straight to the discussant’s wrap-up and appreciation/critique. The authors respond briefly, and then we open it up to Q&A from our fabulous attendee/participants. Click on the paper titles below to download a .pdf text of each paper. Enjoy! Or you can download a zip file of Friday’s papers and Saturday’s papers.

We Robot 2021 Program

Download full schedule to your calendar.

We Robot 2021 will be hosted on Whova. We’ve prepared a We Robot 2021 Attendee Guide. You can also Get Whova Now.

We Robot 2021 has been approved for 19.0 Florida CLE credits, including 19.0 in technology, 1.0 in ethics, and 3.5 in bias elimination. Details here.

Thurs. Sept. 23 Workshop Schedule

10:30-11:00: Please see the Attendee Whova Instructions for info about how the conference software works and how to log in. Email Ryan Erickson for tech support logging in.

11:00-12:00: Here Be Robots
The panel will discuss basic technical concepts underpinning the latest developments in AI and robotics.
Bill Smart and Cindy Grimm

12:00-1:00: Lunch (Everyone!)

1:00-2:00: if(goingToTurnEvil), {don’t();}: Creating Legal Rules for Robots
A lawyer, a roboticist, and a sociologist (or other discipline) walk into a bar…to form multidisciplinary teams attempting to craft or tear apart hypothetical legislation. This experiential session combines law, robotics, drones, and networking.
Evan Selinger, Kristen Thomasen, and Woody Hartzog

2:00-3:00: Break & Breakouts
Finding Your Path, Your People, and Your Conference Program (Networking Break)
Take a break, or join one of the following networking sessions:
1. How to do interdisciplinary research in this space
2. What do I want to be when I grow up?
3. Welcome to We Robot for newbies
Ryan Calo, Sue Glueck, and Kristen Thomasen

3:00-4:00: Why Call Them Robots? 100 Years of R.U.R.
The panel will discuss multidisciplinary perspectives on R.U.R., the 1920 sci-fi play by the Czech writer Karel Čapek. "R.U.R." stands for Rossumovi Univerzální Roboti.
Robin Murphy, Joanne Pransky, and Jeremy Brett

4:00-4:15: Break (Everyone!)

4:15-5:30: I’ll Take Robot Geeks for $1000, Alex: An Afternoon of Robot Trivia
Light appetizers and beverages will be provided.
Jason Millar and Woody Hartzog
Friday, Sept. 24 Schedule: Day One Events (discussants listed after authors)

8:30-9:30: Check-in / Registration
Please see the Attendee Whova Instructions for info about how the conference software works. Email Ryan Erickson for tech support logging in.

9:30-10:00: Welcome and Introductions

10:00-11:00: The Legal Construction of Black Boxes
Elizabeth Kumar, Andrew Selbst, and Suresh Venkatasubramanian
Discussant: Ryan Calo

11:00-11:30: Break
Live Demo Q&A: Societal Implications of Large Language Models
Miles Brundage
We suggest viewing the recorded demo in advance of the Q&A.

11:30-12:30: Being "Seen" vs. "Mis-seen": Tensions Between Privacy and Fairness in Computer Vision
Alice Xiang
Discussant: Daniel Susser

12:30-12:45: Lightning Poster Session & Announcements

12:45-1:45: Lunch Break

1:45-3:15: Field Robotics Panel
Moderator: Edward Tunstel

3:15-3:45: Break
Live Demo Q&A: Skills from Students – Artifacts from a Robot Interaction Design Curriculum for Fifth Grade Students
Daniella DiPaola
We suggest viewing the recorded demo in advance of the Q&A.

3:45-4:45: Social Robots and Children’s Fundamental Rights: A Dynamic Four-Component Framework for Research, Development, and Deployment
Vicky Charisi, Selma Šabanović, Urs Gasser, and Randy Gomez
Discussant: Veronica Ahumada-Newhart

4:45-5:15: Break
Live Demo Q&A: Robots and Robotics as a Service: Service Robots You Can Use Today
Jean Duteau, CEO of Robot World
We suggest viewing the recorded demo in advance of the Q&A.

5:15-6:15: Driving Into the Loop: Mapping Automation Bias & Liability Issues for Advanced Driver Assistance Systems
Katie Szilagyi, Jason Millar, Ajung Moon, and Shalaleh Rismani
Discussant: Meg Leta Jones

6:15-7:15: Poster Session & Reception

7:45-9:45: Conference Dinner (Virtual)
Saturday, Sept. 25 Schedule: Day Two Events (discussants listed after authors)

9:00-10:00: Registration
Please see the Attendee Whova Instructions for info about how the conference software works. Email Ryan Erickson for tech support logging in.

10:00-11:00: Debunking Robot Rights: Metaphysically, Ethically and Legally
Abeba Birhane, Jelle van Dijk, and Frank Pasquale
Discussant: Deb Raji

11:00-11:30: Break
Live Demo Q&A: Skills from Students – Artifacts from a Robot Interaction Design Curriculum for Fifth Grade Students
Daniella DiPaola
We suggest viewing the recorded demo in advance of the Q&A.

11:30-12:30: Autonomous Vehicle Fleets as Public Infrastructure
Thomas Gilbert and Roel Dobbe
Discussant: Madeleine Clare Elish

12:30-1:30: Lunch Break

1:30-2:30: Predicting Consumer Contracts
Noam Kolt
Discussant: Meg Mitchell

2:30-3:00: Break
Live Demo Q&A: Societal Implications of Large Language Models
Miles Brundage
We suggest viewing the recorded demo in advance of the Q&A.

3:00-4:00: Anti-Discrimination Law’s Cybernetic Black Hole
Marc Canellas
Discussant: Cynthia Khoo

4:00-4:30: Break

4:30-5:30: Health Robotics Panel
Moderator: Michelle Johnson

5:30-5:45: Awards of Prizes for Best Poster, Best Paper (Jr. Scholars), and Best Paper (Sr. Scholars); Summary & Conclusion; Announcement of the next We Robot
Kate Darling and Michael Froomkin
Posted in AI, Robots, Talks & Conferences

We Robot is Next Week!!!

WeRobot 2021

We Robot, now heading into its 10th anniversary, is the leading North American conference on robotics law and policy. The 2021 event will be hosted by the University of Miami School of Law on September 23 – 25, 2021.

NOW VIRTUAL
Due to safety concerns we’ve decided to take We Robot to a fully virtual format again.

Earn CLE
19.0 Florida CLE credits approved, including 19.0 in technology, 1.0 in ethics, and 3.5 in bias elimination.

Register Today!

New virtual prices:
Workshop on Sept. 23: $25.00
Admission for both days, Sept. 24 & 25: $49.00
All students and UM Faculty for all 3 days: $25.00

Although we’d looked forward to welcoming you back to Coral Gables and regret that we will not be able to see you in person, we very much look forward to your virtual participation in We Robot 2021. The heart of We Robot has always been its participants, and we will do all we can to preserve that. See you (virtually) soon!

For more information, visit WeRobot2021.com

See Full Program

September 23 – 25, 2021

Posted in AI, Robots, Talks & Conferences

We Robot Paper Submission Deadline Extended One Week

Everyone says it’s harder to get things done under COVID, so we’re extending the deadline for submission of paper abstracts to We Robot 2021 by one week – to midnight US East Coast time on February 8, 2021.

We will attempt to keep to the rest of the schedule, but paper acceptance notices may end up slightly delayed also.

Posted in AI, Robots, Talks & Conferences

Just Uploaded–Big Data: Destroyer of Informed Consent (Final Text)

I’ve just uploaded the final text of Big Data: Destroyer of Informed Consent which is due to appear Real Soon Now in a special joint issue of the Yale Journal of Health Policy, Law, and Ethics and the Yale Journal of Law and Technology. This pre-publication version has everything the final version will have except the correct page numbers. Here’s the abstract:

The ‘Revised Common Rule’ took effect on January 21, 2019, marking the first change since 2005 to the federal regulation that governs human subjects research conducted with federal support or in federally supported institutions. The Common Rule had required informed consent before researchers could collect and use identifiable personal health information. While informed consent is far from perfect, it is and was the gold standard for data collection and use policies; the standard in the old Common Rule served an important function as the exemplar for data collection in other contexts.

Unfortunately, true informed consent seems incompatible with modern analytics and ‘Big Data’. Modern analytics hold out the promise of finding unexpected correlations in data; it follows that neither the researcher nor the subject may know what the data collected will be used to discover. In such cases, traditional informed consent in which the researcher fully and carefully explains study goals to subjects is inherently impossible. In response, the Revised Common Rule introduces a new, and less onerous, form of “broad consent” in which human subjects agree to as varied forms of data use and re-use as researchers’ lawyers can squeeze into a consent form. Broad consent paves the way for using identifiable personal health information in modern analytics. But these gains for users of modern analytics come with side-effects, not least a substantial lowering of the aspirational ceiling for other types of information collection, such as in commercial genomic testing.

Continuing improvements in data science also cause a related problem, in that data thought by experimenters to have been de-identified (and thus subject to more relaxed rules about use and re-use) sometimes proves to be re-identifiable after all. The Revised Common Rule fails to take due account of real re-identification risks, especially when DNA is collected. In particular, the Revised Common Rule contemplates storage and re-use of so-called de-identified biospecimens even though these contain DNA that might be re-identifiable with current or foreseeable technology.

Defenders of these aspects of the Revised Common Rule argue that ‘data saves lives.’ But even if that claim is as applicable as its proponents assert, the effects of the Revised Common Rule will not be limited to publicly funded health sciences, and its effects will be harmful elsewhere.

An earlier version, presented at the Yale symposium that the conference volume memorializes, engendered significant controversy — the polite form of howls of rage in a few cases — from medical professionals looking forward to working with Big Data. Since even the longer final version is shorter, and if only for that reason clearer, than much of what I write, I wouldn’t be surprised if the final version causes some fuss too.

Posted in Administrative Law, AI, Science/Medicine, Writings