Comment by rich_sasha
7 days ago
Can someone ELI5 why false positives on an MRI are so bad?
From a pure Bayesian PoV, you're better off with a noisy additional observation. At worst it doesn't get much weight.
At a pragmatic level, can't you say, hey, here's something that's probably nothing, let's scan it again in 6 months? Why does an MRI necessarily lead to invasive follow-ups?
I get that ideally we'd have a crystal ball with 0 type I / type II errors, but short of that, why is a noisy predictor bad?
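(For concreteness, here's a rough base-rate sketch with made-up numbers; none of these are real MRI statistics. It shows how a seemingly decent test can still produce mostly false positives when the condition is rare.)

    # Illustrative numbers only, not real MRI statistics
    prevalence = 0.01    # 1% of the screened population actually has the disease
    sensitivity = 0.90   # P(test positive | disease)
    specificity = 0.90   # P(test negative | no disease)

    # Bayes: P(disease | positive)
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    p_disease_given_positive = sensitivity * prevalence / p_positive

    print(f"P(positive) = {p_positive:.3f}")                          # ~0.108
    print(f"P(disease | positive) = {p_disease_given_positive:.3f}")  # ~0.083, i.e. >90% of positives are false alarms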
> At a pragmatic level, can't you say, hey, here's something that's probably nothing, let's scan it again in 6 months
If a doctor even _hints_ there might be cancer, the patient will have a terrible 6 months (with actual, measurable negative health impacts from the added stress). Also, at some uncertainty level (say, a 10% chance of cancer) the doctor _has_ to say something and has to schedule expensive follow-ups to avoid liability, even though in 90% of cases it is not only unnecessary but actively harmful to the patient.
When, on average, the cost of the screening + the harm done by a false positive outweighs the benefits of an early detection, you shouldn't do the screening in the first place.
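(As a toy sketch of that trade-off, with entirely hypothetical per-patient numbers:)

    # Entirely hypothetical per-patient numbers, just to show the shape of the trade-off
    p_catch_treatable_cancer = 0.008    # screening finds a cancer that benefits from early treatment
    benefit_of_early_detection = 100.0  # in arbitrary harm/benefit units
    p_false_positive = 0.10             # false alarm rate per screened patient
    harm_of_false_positive = 5.0        # stress, follow-ups, unnecessary procedures
    cost_of_screening = 1.0

    expected_net_benefit = (p_catch_treatable_cancer * benefit_of_early_detection
                            - p_false_positive * harm_of_false_positive
                            - cost_of_screening)
    # 0.8 - 0.5 - 1.0 = -0.7: with these made-up numbers, screening is net negative.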
I 100% agree. The UK recently recommended not screening for prostate cancer because it sometimes detects cancers that don't need treatment:
https://www.bbc.com/news/articles/cm20169gz44o.amp
This seems super dumb to me. If the test detects cancer that doesn't need treatment, don't treat it!! It's still better to know it's there and that you need to monitor it.
> Before you know it, you are on the operating table having your prostate removed – and we see examples of that all the time,
Well fix that problem then. If someone puts a smoke detector above a toaster you don't just pull the battery and call it a day.
My parents are doctors so I’m very used to giving them all the data and pushing decision making down to them. Almost all of the time there is no action to be taken. But this built a certain habit that I realized is not conducive to medical care in the US.
I once told my wife that it’s better if she just passes all information downstream and then lets the diagnostician do the diagnostics.
During her pregnancy, at antenatal monitoring, when asked the routine questions I encouraged her to mention everything, and so she mentioned a slight twinge in her chest (“it’s probably nothing, maybe something I ate”). She was hooked up to the monitors and so on, but this became a sudden moment of panic for everyone but us. The nurse called for a doctor, an EKG machine was brought up: all sorts of honestly nonsensical reactions given the data.
I realized my mistake soon after. There’s the obvious legal consideration, of course, but the real magic lies in the fact that no one gives full information so if someone sends you a signal they assume it’s crossed some threshold to significance. My mistake was in being a non-normative participant here, akin to someone who drives straight on green in a land where a green light means you first let one person turn left before you go.
Anyway, patients are supposed to perform pre-diagnosis in the US. And you’re not supposed to show your doctor things that they will then act on. You should first apply Bayes yourself and then give the info to the doctor here because they won’t use Bayes.
> Well fix that problem then. If someone puts a smoke detector above a toaster you don't just pull the battery and call it a day.
I think what's happening here is that the smoke detector is indicating the possibility of fire, but the toaster is always being immediately doused in water, which, as we know, would cause more damage than good unless there truly was a raging inferno.
The suggestion here seems to be moving the smoke detector somewhere where it going off is more likely to mean a damaging fire. Which seems quite reasonable.
The question is how you can know whether it needs treatment or not. I guess you either need to do a biopsy, or check whether it has grown after N months (leaving the patient scared and anxious during that time). Neither is great if most cases end up not needing treatment.
If the test provides you zero information about whether it needs treating then it was never a useful test. Presumably it's more like "there's a X% chance this needs treatment". In which case you just set reasonable thresholds for X. E.g. if it's 5% you monitor it, 10% you do a biopsy, 70% you operate, etc.
This is much more sensible than just not testing at all and letting people die from cancer.
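(Something like this, where the cut-offs are placeholders mirroring the hypothetical percentages above, not clinical guidance:)

    def recommended_action(p_needs_treatment: float) -> str:
        """Map an estimated probability that treatment is needed to a follow-up.
        Thresholds are the hypothetical ones above, not real guidelines."""
        if p_needs_treatment >= 0.70:
            return "operate"
        if p_needs_treatment >= 0.10:
            return "biopsy"
        if p_needs_treatment >= 0.05:
            return "monitor"
        return "routine screening only"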
> leaving patient scared and anxious during that time
This seems to be the actual motivation. We don't want to scare people with test results so we're just not going to test them. I think that should be up to the patient.
> If the test detects cancer that doesn't need treatment, don't treat it!!
How do you know which ones to treat and which ones to leave?
When the result is above a chosen threshold (which may depend on other factors like family history etc.).
My understanding is it's liability: if the doctor decides not to look into it, they could be blamed if it turns out to be cancer.
Because the patient is usually unable to handle such information correctly (the medical system sometimes too). And whole-body-scan-type tests additionally pre-select for the high-anxiety types.
In real life, every additional data point has a cost...