Comment by TacticalCoder

9 days ago

> ... if regular scans were widespread, it's likely this result in innovations that would drive down costs, improve accuracy, as well as producing a much larger corpus of data with which to guide diagnosis and reduce false positives.

And if there's one thing AI models already genuinely excel at, it's classifying images and spotting patterns.

Many dermatologists (not all of them yet, at least not in the EU) already have software classifiers that use pictures of one's skin to help guide diagnosis. I have lots of moles/nevi and freckles: I'm one of those Gen X kids raised by parents who had no idea that sun exposure and sunburns were a bad thing, so I regularly get warning shots, and my back especially is full of scars. For my entire life, dermatologists have regularly removed concerning little buggers and sent them to the lab for further analysis.

Nowadays my dermatologist is helped in her classification by hardware/software.

I don't see why that wouldn't be the way forward for full-scan MRI: those machines will increasingly be hooked up to AI classifiers too.

It always takes time: it's not as if the tech comes out and in 48 hours every hospital/physician is equipped with it.

It's literally "the future is already here, it's just not evenly distributed": classifiers are already helping some dermatologists find concerning nevi, while many others still don't have access to these latest machines.

I can't imagine this taking strong hold in the US unless it either shields physicians from the legal consequences of false negatives or produces enough false positives to keep revenue from falling.

I don't see any way that the hospital systems running healthcare in the US would embrace a technology that reduces false positives (income) without decreasing false negatives (risk and lost income) at least as much.