Comment by rapatel0

2 years ago

This doesn’t surprise me. We have massive systemic issues in medical science and care delivery.

- Medical science handles variation by simply assuming that large enough samples will average out variation. This loses a ton of information as the “average person” is a construct that almost certainly doesn’t exist.

- News media coverage of medical science glosses over all uncertainties in the name of clickbaity sensationalism.

- Lawyers are incentivized by our adversarial legal system to adopt aggressively hyperbolic interpretations of the science in order to sue people and extract money.

- Medical associations then tweak policies to protect against malpractice claims.

Run this loop enough times and lots of noise gets amplified.

My hope is that AI+sensors will usher in the era of truly personalized medicine.

I'm starting to see AI studies on the medical detection of child abuse, which unfortunately reproduce the same biases as the low-quality clinical data they are based on. An AI that detects subdural and retinal hemorrhage without external signs of trauma with 99% accuracy would be credited with detecting "child abuse with 99% accuracy" and would impress law enforcement and courts. However, it wouldn't be more reliable than an expert witness confidently asserting that these signs are almost always due to child abuse.
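
A rough back-of-the-envelope sketch of that gap, with entirely hypothetical numbers for prevalence and for the contested hemorrhage-to-abuse link:

    def ppv(sensitivity, specificity, prevalence):
        # Positive predictive value via Bayes' rule.
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # Step 1: detecting the *finding*. Hypothetical: hemorrhage present in
    # 1 in 1,000 screened infants, detector 99% sensitive, 99% specific.
    p_finding = ppv(0.99, 0.99, 0.001)
    print(f"P(hemorrhage | positive result) = {p_finding:.2f}")  # ~0.09

    # Step 2: attributing the *cause*. The detector's accuracy says nothing
    # about P(abuse | hemorrhage); that number comes from the contested
    # clinical literature. Hypothetical value:
    p_abuse_given_finding = 0.5
    p_abuse = p_finding * p_abuse_given_finding
    print(f"P(abuse | positive result) = {p_abuse:.3f}")  # ~0.045, not 0.99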

Basically, you want to replace statistics ("large enough samples will average out variation") with AI. I'm afraid that's cargo cult instead of science.

AI can lie. That means a "truly personalized medicine" would sometimes poison its patients. See for instance Donald Knuth's experiment with ChatGPT, starting with "Answer #3 is fouled up beautifully!", with some totally wrong AI answers: https://www-cs-faculty.stanford.edu/~knuth/chatGPT20.txt

Of course medical science could make better use of statistics, get help from AI, and distinguish more patient profiles (e.g. roughly one US adult in two is obese, and it's often unclear how to adjust medication to body mass). But that's a long process, with no obvious path, and quite distinct from the magic of "AI will solve it all".

  • (I'll bring a conciliatory bias to this conversation. Under what interpretations might the ^ and ^^ comments be saying mostly the same thing?)

    >> My hope is that AI+sensors will usher in the era of truly personalized medicine.

    > AI can lie.

    The first hope is compatible with the second fact. It is possible that carefully _designed_ AI systems (that have different architectures than 'vanilla' LLMs) can serve a useful purpose here.

    There is a lot of interesting conversation to be had in 'the conciliatory zone', leaving plenty of opportunities to disagree when it is warranted.

    > Basically, you want to replace statistics ("large enough samples will average out variation") with AI. I'm afraid that's cargo cult instead of science.

    There is a whole lot of _assuming_ going on here, followed by a mischaracterization. This is not the paragon of curious conversation.

    I don't expect perfection, but I still think we should try. I don't mean to pick on any one person; I do this sometimes as well. I'm just pointing it out because, well, it is right here, right now.

    Think about the impact on the system. Seeing the anti-pattern above too often can drive people away. I think it does. Who remains? People who somehow aren't bothered by it? Downselecting in this way is self-defeating.

    I'm probably just as frustrated as anyone here (if not more) regarding (i) the state of medicine, (ii) perceptions of what current AI technology can do, and (iii) many other serious problems. But we shouldn't let this frustration bleed into our personal interactions.

    It is both a waste of our time and (worse) damaging to the community ethos when we take an unnecessarily pessimistic view of _each other_. Some online fora are 'good enough' (though flawed) to help us connect and build bridges as _people_. It doesn't help when we fall into the all-too-common pattern of sniping without asking questions first and clarifying meaning.

    Again, nothing personal. This is more of a rant and request.

  • Current AI is deeply rooted in probability and statistics, so it would actually increase the use of statistics.

    I'm not saying a false positive or a false negative cannot happen. I am saying that we would have better estimates of both, according to probability theory.

    Also: false positives and false negatives are basically impossible to prevent, for a sufficiently small margin of error. And that's science.
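
    To sketch what "better estimates" could mean, with made-up predicted probabilities: if a model's probabilities are calibrated, you can estimate your expected false positives and false negatives instead of pretending they are zero.

        # Hypothetical predicted probabilities of disease for six patients:
        preds = [0.95, 0.80, 0.10, 0.60, 0.05, 0.99]
        threshold = 0.5

        # If the probabilities are calibrated, the expected number of false
        # positives among flagged patients is the sum of (1 - p) over them:
        exp_fp = sum(1 - p for p in preds if p >= threshold)
        # ...and the expected false negatives among cleared patients is the
        # sum of p over them:
        exp_fn = sum(p for p in preds if p < threshold)
        print(f"expected FP ~ {exp_fp:.2f}, expected FN ~ {exp_fn:.2f}")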

  • > Basically, you want to replace statistics ("large enough samples will average out variation") with AI

    No. This is a misunderstanding due to my lack of clarity. Apologies.

    The biggest problems with medical data are that 1) incredibly small-scale data is collected to make assertions, and 2) the data is horrendously denormalized.

    In radiology, a common validation approach is to have radiologists review on the order of 250 studies to make assessments about a radiology product. This is considered the gold standard for FDA clearance. Look into it more: vaccines, treatments. The sample sizes are fucking tiny.

    The statistical assertion is that these relatively small samples capture the variation sufficiently to demonstrate efficacy across devices and treatments. The results are then extrapolated to the US and wider populations. Do you believe that this is rigorously true?

    The rationale underpinning this is simply practicality. You cannot get thousands (or hundreds of thousands) of patients/doctors/etc. to get a strong signal and high confidence. For drugs it's super hard, but for devices and software interventions it's way easier to get data.
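
    To put a number on "tiny", here's a quick sketch of the precision an n = 250 reader study buys, using the standard normal-approximation interval (the sensitivity and case mix below are illustrative, not from any real submission):

        import math

        def wald_ci(p_hat, n, z=1.96):
            # 95% normal-approximation confidence interval for a proportion.
            half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
            return p_hat - half_width, p_hat + half_width

        # Observed sensitivity of 0.90 across all 250 studies:
        print("n=250: 95%% CI (%.3f, %.3f)" % wald_ci(0.90, 250))  # (0.863, 0.937)

        # If only ~10% of the 250 cases are positives, the effective n for
        # sensitivity is 25 and the interval balloons (the upper bound even
        # exceeds 1 -- the approximation itself breaks down):
        print("n=25:  95%% CI (%.3f, %.3f)" % wald_ci(0.90, 25))   # (0.782, 1.018)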

    That brings us to the second big problem: the data is highly varied and denormalized.

    1) From a pure structure point of view, it's basically free-text fields that doctors fill in sporadically.

    2) From an underlying-truth point of view, each hospital across the world has different protocols for care delivery. A histopathological FNA procedure might have a completely different meaning in CA, NY, or the EU. This might simply be because of workflow, timelines, or just people using the words wrong.

    What I mean with AI+Sensors:

    AI doesn't need to solve the problem of intuition around medical problems. The biggest impact will likely come from the relatively mundane task of simply structuring and normalizing the data. Sensors just help to generate more data.

    To be more concrete, you don't want (and shouldn't trust) this:

    Prompt: "Please diagnose this person"

    You want this:

    Prompt: "Here is 100 TB of data from 100 different hospitals each with different workflows and patients for histopathology. For each patient, synthesize a CSV with the following schema "AnonPatientID, AnonCaseID, Pathology Result, Pathology stage, Incidenctal findings, ..."

    Then I can do the analysis myself.
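
    As a sketch, the loop behind that second prompt might look something like this (call_llm and the field names are hypothetical stand-ins, not a real pipeline):

        import csv

        SCHEMA = ["AnonPatientID", "AnonCaseID", "PathologyResult",
                  "PathologyStage", "IncidentalFindings"]

        EXTRACTION_PROMPT = (
            "Extract these fields from the pathology report below as "
            f"comma-separated values, in this order: {', '.join(SCHEMA)}. "
            "Use UNKNOWN for any field the report does not state.\n\nReport:\n"
        )

        def call_llm(prompt):
            # Stand-in for whatever model endpoint you actually use.
            raise NotImplementedError

        def normalize_reports(reports, out_path):
            # One row per report, normalized onto the fixed schema above.
            with open(out_path, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(SCHEMA)
                for report in reports:
                    row = call_llm(EXTRACTION_PROMPT + report).split(",")
                    writer.writerow(field.strip() for field in row)

        # The model never diagnoses anyone here; it only maps messy,
        # hospital-specific free text onto one schema, and the analysis
        # stays in ordinary, auditable statistics downstream.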

    Hope this makes more sense.

    (I work in healthtech and medical devices, and I promise you: demoralization at the state of medicine is a rite of passage, after which you can begin to address practical problems.)

Your narrative reverses the roles a bit. The lawyers appear as the heroes in this particular story and the villains are all associated with the hospital -- either in the form of people or bureaucratic red tape, depending on how generous you want to be in your analysis.

> The doctors at the hospital were absolutely, unconditionally 100% certain that no other cause than violent shaking could ever explain blood around the brain and at the back of the eyes.

> As a precautionary measure, the hospital followed mandatory reporting statutes and my wife and I temporarily lost custody of David.

> I disturbingly realized that what I had been told at the hospital, namely that subdural and retinal hemorrhage in infants are almost always caused by violent shaking even in the absence of external evidence of trauma, was an assertion based on very weak scientific foundations.

> Thanks to our incredibly effective defense lawyer, we were cleared of all charges within two months, during which we stayed at the hospital 24/7 with David until we sorted out the legal procedures.

> Every case requires years of intense, dedicated efforts by an entire team of specialized lawyers and medical experts, but there are tens of thousands of cases and few experts willing to defend them.

Hospitals are definitely the weak link in this system. Just looking at the way the story is laid out, the solution is more lawyers and fewer, less expansive hospitals.

> - Medical science handles variation by simply assuming that large enough samples will average out variation. This loses a ton of information as the “average person” is a construct that almost certainly doesn’t exist.

Well, this wouldn't even be that bad if sample sizes were actually large enough.

  • No, I think the point being made is that the "average person" idea is not that great if you have huge variance. If I have a uniform distribution from 0 to 1, the average is 0.5, but a draw is just as likely to land near 0 or near 1 as near the average.

    • Yes, I know.

      My point goes beyond that one: yes, variance is a problem. But _even_ _just_ getting good averages, for all their faults, requires a bigger n than many studies have. Especially observational studies.
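
      A tiny simulation of that point, with a made-up outcome measure (true mean 100, SD 20): even the average itself wobbles at common study sizes, and precision improves only with the square root of n.

          import random

          random.seed(0)

          def study_mean(n):
              # Mean outcome from one simulated study of n patients.
              return sum(random.gauss(100, 20) for _ in range(n)) / n

          for n in (25, 250, 2500):
              means = [study_mean(n) for _ in range(1000)]
              print(f"n={n:>4}: study means span {min(means):.1f} to {max(means):.1f}")

          # 100x more patients buys only ~10x tighter averages (standard
          # error ~ sd/sqrt(n)) -- and the average still says nothing about
          # the variance between individuals, which is the point above.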

  • That's part of it. The sample size is usually super small.

    Furthermore, there's the opportunity: with large amounts of data (from software, medical devices, sensors), we can actually tackle this problem at scale.