Category Archives: Law

Annals of Consumer Law (Contracts of Adhesion Dept.)

After twenty or so years of reliable service the old fridge had started taking days off. We were not sympathetic.  The outages came without warning, and they did the food no favors. So out with the old and in with the new.

This morning, about 45 minutes before the start of the four-hour window promised in yesterday’s email, the delivery truck from Home Depot arrived bearing a shiny new fridge. The crew detached the rusty old fridge from its plumbing, carted it away, and attached the new one via the $17.98 “12′ Upgraded Braided Water Line” that came with ticking the box asking for installation. Ten minutes after arriving, they and the old fridge were gone, leaving a warning to give the new machine several hours to cool, and even longer to start making ice.

It was not until this afternoon that we noticed that along with a users’ manual they had left this:

“By using ... you agree ... binding arbitration.”

I would be far more annoyed had stuff like this not been the hypo I gave my students years ago when teaching about so-called ‘clickwrap’ contract terms.  Now I try to be amused.

Posted in Arbitration Law, Shopping | Leave a comment

ChatGPT on Republican Meanness

So I took ChatGPT for a spin.  Overall the results are really scarily good.  But.

Or maybe that is the best answer?

Posted in AI, Completely Different, Politics: US | 1 Comment

Faculti Video on ‘Safety as Privacy’ Posted

An outfit called Faculti, which presents as a sort of (anti?-)TED talk for nerdier, more detail-oriented people, recently did an interview with me about Safety as Privacy, a paper (co-authored with Phillip Arencibia & P. Zak Colangelo-Trenner) that should be published soon in the Arizona Law Journal. There’s a near-final version of Safety as Privacy at SSRN.

Faculti published the video interview and you are invited to enjoy it. I probably won’t: I don’t much like to listen to myself, much less watch myself, and I can’t shake the idea that I have an ideal face for radio. But the questions they set me in advance were substantive, and I hope the answers were too.

Here’s the paper’s abstract:

New technologies, such as internet-connected home devices we have come to call the Internet of Things (IoT), connected cars, sensors, drones, internet-connected medical devices, and workplace monitoring of every sort, create privacy gaps that can cause danger to people. In prior work,1 two of us sought to emphasize the deep connection between privacy and safety to lay a foundation for arguing that U.S. administrative agencies with a safety mission can and should make privacy protection one of their goals. This Article builds on that foundation with a detailed look at the safety missions of several agencies. In each case, we argue that the agency has the discretion, if not necessarily the duty, to demand enhanced privacy practices from those within its jurisdiction, and that the agency should make use of that discretion.

Armed with the understanding that privacy is or causes safety, several U.S. agencies tasked with protecting safety could achieve substantial gains to personal privacy under their existing statutory authority. Examples of agencies with untapped potential include the Federal Trade Commission (“FTC”), the Consumer Product Safety Commission (“CPSC”), the Food and Drug Administration (“FDA”), the National Highway Traffic Safety Administration (“NHTSA”), the Federal Aviation Administration (“FAA”), and the Occupational Safety and Health Administration (“OSHA”). Five of these agencies have an explicit duty to protect the public against threats to safety (or against risk of injury) and thus—as we have argued previously—should protect the public’s privacy when the absence of privacy can create a danger. The FTC’s general authority to fight unfair practices in commerce enables it to regulate commercial practices threatening consumer privacy. The FAA’s duty to ensure air safety could extend beyond airworthiness to regulating spying via drones.

The CPSC’s authority to protect against unsafe products authorizes it to regulate products putting consumers’ physical and financial privacy at risk, thus sweeping in many products associated with the IoT. NHTSA’s authority to regulate dangerous practices on the road encompasses authority to require smart car manufacturers to include precautions protecting drivers from misuses of connected car data due to the carmaker’s intention and due to security lapses caused by its inattention. Lastly, OSHA’s authority to require safe work environments encompasses protecting workers from privacy risks that threaten their physical and financial safety on the job.

Arguably, an omnibus federal statute regulating data privacy would be preferable to doubling down on the United States’s notoriously sectoral approach to privacy regulation. Here, however, we say only that until the political stars align for some future omnibus proposal, there is value in exploring methods that are within our current means. It may be only second best, but it is also much easier to implement. Thus, we offer reasonable legal constructions of certain extant federal statutes that would justify more extensive privacy regulation in the name of providing enhanced safety, a regime that we argue would be a substantial improvement over the status quo yet not require any new legislation, just a better understanding of certain agencies’ current powers and authorities. Agencies with suitably capacious safety missions should take the opportunity to regulate to protect relevant aspects of personal privacy without delay.

  1. A. Michael Froomkin & Zak Colangelo, Privacy as Safety, 95 Wash. L. Rev. 141 (2020).
Posted in Administrative Law, Law: Privacy, Talks & Conferences | Leave a comment

Attorney General Merrick Garland Speaks at Ellis Island (Take That, Ron)

Attorney General Merrick Garland swore in 200 new citizens at Ellis Island to celebrate the anniversary of the Constitution. Garland spoke about his immigrant grandparents, in an unusually personal and moving talk that serves as a rebuke to the neo-fascists busing and flying immigrants around the country as human pawns.

By the way, this post at Digby’s Blog has fascinating details about Florida Gov. Ron DeSantis’s migrant-shipping flights to Martha’s Vineyard, including a copy of the (false and misleading) brochure prepared to lure asylum-seekers to the airplanes.

Posted in Immigration | Leave a comment

On Robot & AI Personhood

The question of robot and AI personhood comes up a lot, and likely will come up even more in the future with the proliferation of models like GPT-3, which can be used to mimic human conversations very, very convincingly. I just finished a first draft of a short essay surveying contemporary issues in robot law and policy; that gave me a chance to briefly sketch out my views on the personhood issue, and I figured I might share it here:

As the law currently stands in the United States and, as far as I know, everywhere else,1 robots of every type are treated as chattel. That is, in the words of Neil Richards and William Smart, “Robots are, and for many years will remain, tools. They are sophisticated tools that use complex software, to be sure, but no different in essence than a hammer, a power drill, a word processor, a web browser, or the braking system in your car.”2 It follows that robot personhood (or AI personhood) under law remains a remote prospect, and that even some lesser form of increased legal protection for robots, beyond that normally accorded to chattels in order to protect their owners’ rights, also remains quite unlikely. Indeed, barring some game-changing breakthrough in neural networks or some other unforeseen technology, there seems little prospect that in the next decades machines of any sort will achieve the sort of self-awareness and sentience that we commonly associate with a legitimate claim to the bundle of rights and respect we organize under the rubric of personhood.3

There are, however, two different scenarios in which society or policymakers might choose to bestow some sort of rights or protections on robots beyond those normally given to chattels. The first is that we discover some social utility in the legal fiction that a robot is a person. No one, after all, seriously believes that a corporation is an actual person, or indeed that a corporation is alive or sentient,4 yet we accept the legal fiction of corporate personhood because it serves interests, such as the ability to transact in its own name, and limitation of actual humans’ liability, that society—or parts of it—finds useful. Although nothing at present suggests similar social gains from the legal recognition of robotic personhood (indeed issues of liability and responsibility for robot harms need more clarity, not less accountability), conceivably policymakers might come to see things differently. In the meantime, it is likely that any need for, say, giving robots the power to transact, can be achieved through ordinary uses of the corporate form, in which a firm might for example be the legal owner of a robot.5

Early cases further suggest that U.S. courts are not willing to assign a copyright or a patent to a robot or an AI even when it generated the work or design at issue. Here, however, the primary justification has been straightforward statutory construction: holdings that the relevant U.S. laws only allow intellectual property rights to be granted to persons, and that the legislature did not intend to include machines within that definition.6 Rules around the world may differ. For example, an Australian federal court ordered an AI’s patent to be recognized by IP Australia.7 Similarly, a Chinese court found that an AI-produced text was deserving of copyright protection under Chinese law.8

A more plausible scenario for some sort of robot rights begins with the observation that human beings tend to anthropomorphize robots. As Kate Darling observes, “Our well-documented inclination to anthropomorphically relate to animals translates remarkably well to robots,” and ever so much more so to lifelike social robots designed to elicit that reaction—even when people know that they are really dealing with a machine.9 Similarly, studies suggest that many people are wired not only to feel more empathy towards lifelike robots than towards other objects, but also that, as a result, harm to robots feels wrong.10 Thus, we might choose to ban the “abuse” of robots (beating, torturing) either because it offends people, or because we fear that some persons who abuse robots may develop habits of thought or behavior that will carry over into their relationships with live people or animals, abuse of which is commonly prohibited. Were we to find empirical support for the hypothesis that abuse of lifelike, or perhaps humanlike, robots makes abusive behavior towards people more likely, that would provide strong grounds for banning some types of harms to robots—a correlative11 to giving robots certain rights against humans.12

It’s an early draft, so comments welcome!


  1. The sole possible exception is Saudi Arabia, which gave ‘citizenship’ to a humanoid robot, Sophia, in 2017. It is hard to see this as anything more than a publicity stunt, both because female citizenship in Saudi Arabia comes with restrictions that do not seem to apply to Sophia, and because “her” “life” consists of … marketing for her Hong-Kong-based creators. See Emily Reynolds, The agony of Sophia, the world’s first robot citizen condemned to a lifeless career in marketing, Wired (Jan. 6, 2018).
  2. Neil Richards & William Smart, How Should the Law Think About Robots?, in Robot Law 1, 20 (Ryan Calo, A. Michael Froomkin & Ian Kerr eds., 2016).
  3. For an interesting exploration of the issues, see James Boyle, Endowed by Their Creator? The Future of Constitutional Personhood, Brookings Institution (Mar. 9, 2011). For a full-throated denunciation of the ‘robot rights’ concept as philosophical error and ethical distraction, see Abeba Birhane & Jelle van Dijk, Robot Rights? Let’s Talk about Human Welfare Instead, Proceedings of the 3rd AAAI/ACM Conference on AI, Ethics, and Society 207-213 (Feb. 7, 2020).
  4. Although Charlie Stross has suggested we should think of corporations as “Slow AIs”. Charlie Stross, Dude, you broke the future!, Charlie’s Diary (Jan. 2, 2018) (transcript of remarks to the 34th Chaos Communication Congress, Leipzig, Dec. 2017).
  5. For speculation as to how a robot or AI might own itself, without people in the ownership chain, see Shawn J. Bayern, Autonomous Organizations (2021); Shawn J. Bayern, Are Autonomous Entities Possible?, 114 Nw. U. L. Rev. Online 23 (2019); Lynn LoPucki, Algorithmic Entities, 95 U.C.L.A. L. Rev. 887 (2018).
  6. See Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022) (upholding USPTO decision refusing application for patent in name of AI). For arguments in favor of granting such patents, see, e.g., Ryan Abbott, I Think, Therefore I Invent: Creative Computers and the Future of Patent Law, 57 B.C. L. Rev. 1079 (2016). For a European perspective, see P. Bernt Hugenholtz & João Pedro Quintais, Copyright and Artificial Creation: Does EU Copyright Law Protect AI-Assisted Output?, 52 IIC – Int’l Rev. Intell. Prop. & Competition L. 1190 (2021). The recent literature on the copyrightability of machine-generated texts is vast, starting with Annemarie Bridy, Coding Creativity: Copyright and the Artificially Intelligent Author, 2012 Stan. Tech. L. Rev. 5 (2012). An elegant recent article disagreeing with Bridy, with many citations to the literature, is Carys Craig & Ian Kerr, The Death of the AI Author, 52 Ottawa L. Rev. 31 (2021).
  7. Commissioner of Patents v. Thaler (DABUS), [2022] FCAFC 62.
  8. Paul Sawers, Chinese court rules AI-written article is protected by copyright, VentureBeat (Jan. 10, 2020).
  9. Clifford Nass & Youngme Moon, Machines and Mindlessness: Social Responses to Computers, 56 J. Soc. Issues 81 (2000); Kate Darling, Extending Legal Protection to Social Robots, in Robot Law 213, 214, 220 (Ryan Calo, A. Michael Froomkin & Ian Kerr eds., 2016).
  10. Darling, supra note 9, at 223.
  11. In Hohfeldian terms, if persons have a duty not to harm a robot, then, correlatively, the robot has a right not to be harmed by those persons. See Pierre Schlag, How to Do Things with Hohfeld, 78 L. & Contemp. Probs. 185, 200-03 (2014). Hohfeld was concerned with the relations of persons, and probably would have thought the idea of property having rights to be a category error. Yet if the duty to forbear from certain harms extends to the owner of the robot as well as others, I submit that the “rights” term of the correlative relations is a useful way to describe what the robot has.
  12. Darling, supra note 9, at 226-31.
Posted in AI, Robots | 2 Comments

#WeRobot Finished With a Bang!

(Metaphorically, only.)

We will have recordings of substantially all the discussions up online in about a week.

Meanwhile, you can still read the papers.  You might want to start with the prize-winners:

… although I’d also like to give a shout-out to two of my personal favorites:

That said, all the papers were really good, which is pretty amazing.

Posted in AI, Robots, Talks & Conferences | Leave a comment