Comment by idoubtit
2 years ago
Basically, you want to replace statistics ("large enough samples will average out variation") with AI. I'm afraid that's cargo cult instead of science.
AI can lie. That means "truely personalized medicine" would sometimes poison its patients. See, for instance, Donald Knuth's experiment with ChatGPT, with some totally wrong AI answers, starting with "Answer #3 is fouled up beautifully!": https://www-cs-faculty.stanford.edu/~knuth/chatGPT20.txt
Of course medical science could make better use of statistics, get help from AI, and distinguish more patient profiles (e.g. one US adult in two is obese, and it's often unclear how to adjust medication to a person's mass). But that's a long process, with no obvious path, and very different from the magic of "AI will solve it all".
(I'll bring a conciliatory bias to this conversation. Under what interpretations might the ^ and ^^ comments be saying mostly the same thing?)
>> My hope is the AI+sensors ushers in the era of truely personalized medicine.
> AI can lie.
The first hope is compatible with the second fact. It is possible that carefully _designed_ AI systems (with architectures different from 'vanilla' LLMs) can serve a useful purpose here.
There is a lot of interesting conversation to be had in 'the conciliatory zone', leaving plenty of opportunities to disagree when it is warranted.
> Basically, you want to replace statistics ("large enough samples will average out variation") with AI. I'm afraid that's cargo cult instead of science.
There is a whole lot of _assuming_ going on here, followed by a mischaracterization. This is not the paragon of curious conversation.
I don't expect perfection, but I still think we should try. I don't mean to pick on any one person; I do this sometimes as well. I'm just pointing it out because, well, it is right here, right now.
Think about the impact on the system. Seeing the anti-pattern above too often can drive people away. I think it does. Who remains? People who somehow aren't bothered by it? Downselecting in this way is self-defeating.
I'm probably just as frustrated as anyone here (if not more) regarding (i) the state of medicine, (ii) perceptions of what current AI technology can do, and (iii) many other serious problems. But we shouldn't let this frustration bleed into our personal interactions.
It is both a waste of our time and (worse) damaging to the community ethos when we take an unnecessarily pessimistic view of _each other_. Some online fora are 'good enough' (though flawed) to help us connect and build bridges as _people_. It doesn't help when we fall into the all-too-common pattern of sniping without asking questions first and clarifying meaning.
Again, nothing personal. This is more of a rant and request.
Current AI is deeply rooted in probability and statistics, so it would actually increase the use of statistics.
I'm not saying a false positive or a false negative cannot happen. I am saying that we would have better estimates of both, according to probability theory.
Also: false positives and false negatives are basically impossible to prevent for a sufficiently small margin of error. And that's science.
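To make that concrete (a toy sketch with made-up numbers, not the characteristics of any real test): probability theory tells you exactly how much to trust a positive result once you account for how rare the condition is.

    # Sketch: positive predictive value via Bayes' theorem.
    # All numbers are illustrative assumptions, not real test characteristics.
    prevalence = 0.01      # P(condition) in the screened population
    sensitivity = 0.95     # P(test positive | condition present)
    specificity = 0.90     # P(test negative | condition absent)

    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    ppv = sensitivity * prevalence / p_positive   # P(condition | test positive)

    print(f"P(test positive) = {p_positive:.3f}")   # ~0.109
    print(f"PPV              = {ppv:.3f}")          # ~0.088

With these made-up numbers, roughly 9 out of 10 positives are false positives, and that is knowable in advance. The point isn't that AI removes the errors; it's that a probabilistic framing makes the error rates explicit.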
> Basically, you want to replace statistics ("large enough samples will average out variation") with AI
No. This is a misunderstanding due to my lack of clarity. Apologies.
The biggest problems with medical data are that 1) assertions are made from incredibly small-scale data, and 2) the data is horrendously denormalized.
In radiology, a common validation approach is to have radiologists review on the order of 250 studies to make assessments about a radiology product. This is considered the gold standard for the FDA. Look into it more: vaccines, treatments. The sample sizes are fucking tiny.
The statistical assertion is that these relatively small samples capture the variation sufficiently to demonstrate efficacy for devices and treatments. The results are then extrapolated to the US and wider populations. Do you believe that this is rigorously true?
The rationale underpinning this is simply practicality. You cannot enroll thousands (or hundreds of thousands) of patients/doctors/etc. to get a strong signal and confidence. For drugs that's super hard, but for devices and software interventions it's way easier to get data.
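For a sense of scale (a back-of-the-envelope sketch, not a claim about any specific submission): with ~250 cases, the 95% confidence interval on an observed rate is several percentage points wide, and that is what the extrapolation to wider populations has to rest on.

    # Sketch: rough 95% margin of error for a proportion estimated from n = 250 cases
    # (the "order of 250 studies" figure above). The p_hat values are illustrative.
    import math

    n = 250
    for p_hat in (0.5, 0.9):   # e.g. an observed detection or agreement rate
        moe = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)   # normal approximation
        print(f"p_hat = {p_hat:.2f} -> 95% CI roughly +/- {moe:.3f}")
    # p_hat = 0.50 -> +/- ~0.062
    # p_hat = 0.90 -> +/- ~0.037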
That brings us to the second big problem: the data is highly varied and denormalized.
1) From a pure structure point of view, it's basically free-text fields that doctors fill in sporadically.
2) From an underlying-truth point of view, each hospital around the world has different protocols for care delivery. A histopathological FNA procedure might mean something completely different in CA, NY, or the EU. This might simply be because of workflow, timelines, or just people using the words wrong.
What I mean with AI+Sensors:
AI doesn't need to solve the problem of intuition around medical problems. The biggest impact will likely come from the relatively mundane task of simply structuring and normalizing the data. Sensors simply help to generate more data.
To be more concrete, you don't want (and shouldn't trust) this:
Prompt: "Please diagnose this person"
You want this:
Prompt: "Here is 100 TB of data from 100 different hospitals each with different workflows and patients for histopathology. For each patient, synthesize a CSV with the following schema "AnonPatientID, AnonCaseID, Pathology Result, Pathology stage, Incidenctal findings, ..."
Then I can do the analysis myself.
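To make "do the analysis myself" concrete, here is a minimal sketch of what becomes possible once everything lands in one schema. The file name, the extra "Hospital" column, and the questions asked are all hypothetical; the other columns follow the schema in the prompt above.

    # Sketch: cross-site analysis once data from many hospitals shares one schema.
    import pandas as pd

    # Hypothetical normalized output; columns follow the schema in the prompt above,
    # plus an assumed "Hospital" column identifying the source site.
    df = pd.read_csv("normalized_histopath.csv")

    # How does the distribution of pathology stages vary across hospitals?
    stage_by_site = (
        df.groupby(["Hospital", "Pathology stage"])
          .size()
          .unstack(fill_value=0)
    )
    print(stage_by_site)

    # How often does each site record anything under "Incidental findings"?
    incidental_rate = df.groupby("Hospital")["Incidental findings"].apply(lambda s: s.notna().mean())
    print(incidental_rate)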
Hope this makes more sense.
(I work in healthtech and medical devices, and I promise you: demoralization at the state of medicine is a rite of passage, after which you can begin to address practical problems.)