Fusion Power is Only 15 Years Away, we’re told. I guess that’s progress, since in just the last few years people have said it’s Always 50 Years Away, or maybe Always 30 Years Away, or maybe formerly 30 years away and now more like 50 years away, or maybe just forever 20 years away, or 13 Years Away.
So fifteen years away is progress, right? Then again, three years ago it was ten years away, so maybe we’re going backwards?
Or maybe we’re looking at the wrong scientific advance here: what we really have is an odd form of time travel?
My latest (draft!) paper, “When AIs Outperform Doctors: The Dangers of a Tort-Induced Over-Reliance on Machine Learning and What (Not) to Do About It,” is, I hope, special. I had the good fortune to co-author it with Canadian polymath Ian Kerr, and with Joëlle Pineau, one of the world’s leading machine learning experts.
Here’s the abstract:
Someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. What will the dominance of ML diagnostics mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors, and—in the longer run—for the quality of medical diagnostics itself?
This article argues that once ML diagnosticians, such as those based on neural networks, are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented carefully, a physician’s duty to use ML systems in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve. In time, effective machine learning could create overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Ultimately, a similar dynamic might extend to treatment also. If we reach the point where the bulk of clinical outcomes collected in databases are ML-generated diagnoses, this may result in future decision scenarios that are not easily audited or understood by human doctors. Given the well-documented fact that treatment strategies are often not as effective when deployed in real clinical practice compared to preliminary evaluation, the lack of transparency introduced by the ML algorithms could lead to a decrease in quality of care. The article describes salient technical aspects of this scenario particularly as it relates to diagnosis and canvasses various possible technical and legal solutions that would allow us to avoid these unintended consequences of medical malpractice law. Ultimately, we suggest there is a strong case for altering existing medical liability rules in order to avoid a machine-only diagnostic regime. We argue that the appropriate revision to the standard of care requires the maintenance of meaningful participation by physicians in the loop.
I hope that it will be of interest to lawyers, doctors, computer scientists, and a range of medical service providers and policy-makers. Comments welcome!
One more reason to hold off on buying one, I guess.
An Amazon Echo speaker has been blamed for starting a “rave” in a sixth-floor flat in Hamburg.
The owner was out at a real nightclub when the speaker decided to start blasting out bangin’ tunes at top volume at 0150 CET. Neighbours called the police, who broke down the door to find no one in, just the Alexa speaker havin’ it large all on its own.
Speaking to German paper Die Welt, the flat (and speaker) owner, Oliver Haberstroh, explained that he’d not had any problems with the digital task monkey up until this point.
Neighbours raised the alarm after shouting and banging on the door didn’t work.
Build 2017: Project Emma is a watch-sized device with tiny motors in it that ‘short circuit’ the brain-body feedback loop that seems to cause tremors in sufferers of Parkinson’s Disease.
It was created by Haiyan Zhang, the Innovation Director at Microsoft Research. More details at betanews.
Spotted via Slashdot.
See the video:
Researchers develop face-capture technology that can alter pre-recorded videos in real time on low-cost computers.
Boing Boing suggests it could be used to make George W Bush or Donald Trump appear intelligent.
I can imagine even worse:
- Fake ransom videos
- Horrible pranks of the Intentional Infliction of Emotional Distress variety (fake relative’s video suicide/I’m joining ISIS/mass shooter note)
- Fake Clinton videos admitting complicity in Whitewater
- Unretouched videos of Donald Trump
Feel free to add yours below.