Monthly Archives: March 2019

Could be Either, Right?

Real or Onion: Study Reveals That Girls Who Play Princess Grow Up With Skewed Perceptions Of The Role Of Modern Monarchy In A Democratic Society

Posted in Onion/Not-Onion | Comments Off on Could be Either, Right?

Memo to Self re: Hardening Firefox (Updated)

Make sure all of the following extensions are installed on all my computers

Also use 1.1.1.1 for DNS and change the default search engine to DuckDuckGo or to Startpage. Full manual DNS settings are:

 1.1.1.1 (IPv4)
 1.0.0.1 (IPv4)
 2606:4700:4700::1111 (IPv6)
 2606:4700:4700::1001 (IPv6)

As per A Few Simple Steps to Vastly Increase Your Privacy Online.

Note that I already have a VPN, or I’d be suggesting that too.
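For the DNS side of this, the same Cloudflare resolver can also be enabled inside Firefox itself via DNS over HTTPS. A minimal sketch of the relevant prefs, drawn from Firefox's Trusted Recursive Resolver (TRR) feature; verify current names and values in about:config before relying on them:

```javascript
// user.js sketch — enable DNS over HTTPS via Cloudflare in Firefox
user_pref("network.trr.mode", 2);  // 2 = try DoH first, fall back to system DNS; 3 = DoH only
user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");
user_pref("network.trr.bootstrapAddress", "1.1.1.1");  // resolve the DoH host without system DNS
```

Mode 2 is the forgiving choice; mode 3 is stricter but will break name resolution entirely if the DoH endpoint is unreachable.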

Posted in Software, Sufficiently Advanced Technology | Comments Off on Memo to Self re: Hardening Firefox (Updated)

ICE Routinely Seeks to Detain US Citizens in Miami-Dade County

When I can summon the energy to do so, I worry that the current unpleasantness threatens to exhaust my capacity for outrage.

However, I am now able to report that some news still shocks and surprises, such as this report from the ACLU, Citizens on Hold: A Look at ICE’s Flawed Detainer System in Miami-Dade County. It is not pretty reading.

Miami-Dade County’s records show that between February 2017 and February 2019, ICE sent the jail 420 detainer requests for people listed as U.S. citizens, only to later cancel 83 of those requests—evidently because the agency determined, after the fact, that its targets were in fact U.S. citizens. The remaining individuals’ detainers were not canceled, and so they continued to be held for ICE to deport them.

The ACLU reports that false detainer requests are fairly common across the country, but that we here in Miami are the epicenter.

Immigrant-rights groups say ICE’s databases are filled with outdated information. (The ACLU adds that ICE’s databases often fail to reflect that people have become naturalized citizens.) The ACLU also notes a CNN investigation showing that ICE agents routinely forged their bosses’ signatures on critical detention warrants in order to skip mandatory document reviews.

Meanwhile, the State Legislature is considering a bill that would require local cops to honor ICE requests.

(Spotted via New Times, ICE Issued False Deportation Requests for 420 U.S. Citizens in Miami-Dade, ACLU Reports.)

Posted in Immigration | Comments Off on ICE Routinely Seeks to Detain US Citizens in Miami-Dade County

Everybody Loves Blockchain?

This evening I’m attending an event on “Blockchain: Business, Regulation, Law and the Way Forward” featuring Jerry Brito (Coin Center), Marcia Weldon (MiamiLaw), and Samir Patel (Holland & Knight).

The event is organized jointly by three student groups: the Federalist Society, the Business Law Society, and the Alliance Against Human Trafficking. That’s a pretty eclectic group. I think it shows how widely the blockchain dream has taken hold.

And yet, despite this, not absolutely everyone loves blockchain. I, for one, am somewhat skeptical, as I think the use cases are much more limited than the optimists would have it. Indeed, my views are pretty well summarized by this great graphic, which sets out a decision tree for people thinking of using blockchain:

Yes, the reality is a bit more complicated, but if you can’t explain why the above doesn’t apply to you, you probably shouldn’t be using blockchain….

Posted in Cryptography, Talks & Conferences, U.Miami | 1 Comment

‘When AIs Outperform Doctors’ Published

I’m happy to report that the Arizona Law Review has published When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning, 61 Ariz. L. Rev. 33 (2019), that I co-authored with Ian Kerr (U. Ottawa) and Joelle Pineau (McGill U.).

Here’s the abstract:

Someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. What will the dominance of ML diagnostics mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors, and—in the long run—for the quality of medical diagnostics itself?

This Article argues that once ML diagnosticians, such as those based on neural networks, are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented carefully, a physician’s duty to use ML systems in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve. Although at first doctor + machine may be more effective than either alone because humans and ML systems might make very different kinds of mistakes, in time, as ML systems improve, effective ML could create overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Ultimately, a similar dynamic might extend to treatment also. If we reach the point where the bulk of clinical outcomes collected in databases are ML-generated diagnoses, this may result in future decisions that are not easily audited or understood by human doctors. Given the well-documented fact that treatment strategies are often not as effective when deployed in clinical practice compared to preliminary evaluation, the lack of transparency introduced by the ML algorithms could lead to a decrease in quality of care. This Article describes salient technical aspects of this scenario particularly as it relates to diagnosis and canvasses various possible technical and legal solutions that would allow us to avoid these unintended consequences of medical malpractice law. Ultimately, we suggest there is a strong case for altering existing medical liability rules to avoid a machine-only diagnostic regime. We argue that the appropriate revision to the standard of care requires maintaining meaningful participation by physicians in the loop.

I think this is one of the best articles I’ve written or co-written–certainly in the top five. I’m particularly proud that I worked out, or intuited, a property of machine learning that was either not present, or certainly not prominent, in the literature: that if all the inputs to future generations of ML systems are the outputs of earlier generations of the ML system, there’s a chance it may all go wrong.

Reasonable people can disagree about the size of that chance, but if it happens then, at least with current technology, there is no way the system itself would warn us. Depending on the complexity of the system, and the extent to which doctors have been deskilled by the prevalence of the ML technology, we might be hard put to notice some types of degradation ourselves.
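The dynamic can be made concrete with a toy simulation (entirely hypothetical, not a model of any real medical ML system): imagine each generation of a diagnostic model is fit only to labels produced by the previous generation. Every hand-off adds a little estimation error, and because no generation ever consults ground truth again, the errors accumulate like a random walk rather than canceling out.

```python
import random

TRUE_THRESHOLD = 0.5  # hypothetical ground-truth decision boundary


def next_generation(prev_threshold, n_samples, rng):
    """Fit a new model to data labeled by the previous model.

    With no human-verified labels in the training set, the new estimate
    is just the old threshold plus zero-mean sampling noise; nothing
    ever pulls it back toward TRUE_THRESHOLD.
    """
    sampling_error = rng.gauss(0.0, 1.0 / n_samples ** 0.5)
    return prev_threshold + sampling_error


def mean_drift(generations, n_samples=100, runs=2000):
    """Average |estimate - truth| after a chain of model hand-offs."""
    total = 0.0
    for seed in range(runs):
        rng = random.Random(seed)
        t = TRUE_THRESHOLD  # generation 0 is trained on true labels
        for _ in range(generations):
            t = next_generation(t, n_samples, rng)
        total += abs(t - TRUE_THRESHOLD)
    return total / runs


if __name__ == "__main__":
    for g in (1, 10, 50):
        print(f"after {g:2d} generations, mean drift = {mean_drift(g):.3f}")
```

The drift grows roughly with the square root of the number of generations, and, crucially, nothing inside the loop can detect it: each model agrees beautifully with its predecessor even as the chain wanders away from the truth.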

It would be good, therefore, to try to engineer legal rules that would make this possibly very unhealthy outcome much less likely.

Posted in AI, Writings | Comments Off on ‘When AIs Outperform Doctors’ Published

Real or Onion?

Florida House Speaker apologizes for referring to pregnant women as “host bodies” in interview on abortion.

Fooled me. Need to lower my filters.

Posted in Onion/Not-Onion | Comments Off on Real or Onion?