Comment by creativeSlumber

17 hours ago

> "An AI and a pair of human doctors were each given the same standard electronic health record to read"

This is handicapping the human doctors' abilities. A human doctor can gather a lot more information even from a brief observation of the patient.

They have covered this in the article.

> But it is not curtains for emergency doctors yet, the researchers said. The study only tested humans against AIs looking at patient data that can be communicated via text. The AI’s reading of signals, such as the patient’s level of distress and their visual appearance, were not tested. That means the AI was performing more like a clinician producing a second opinion based on paperwork.

  • > The study only tested humans against AIs looking at patient data that can be communicated via text.

    This is like saying that LLMs can evaluate paintings better than art experts, but only when looking at data that can be communicated via text.

    Of course they would win under that restriction, because evaluating a painting through text alone makes no sense in the first place.

  • > That means the AI was performing more like a clinician producing a second opinion based on paperwork.

    That actually seems like a good application: automatically get a quick AI second opinion for everything; if it dissents, the first/human medic can re-review, comment on why it's slop, or get a third/second-human opinion.

    (I'm assuming most cases would be "You're absolutely right, that's an astute diagnosis.")

Agreed. I think the best use of this sort of tech is to play both to their strengths: use AI to go over the record and suggest diagnoses, which the doctor then reviews after observing the patient.
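The second-opinion routing described above could be a minimal sketch like this. Everything here (the `Case` type, the `route_case` function, the diagnosis strings) is hypothetical and made up for illustration; a real system would pull from an actual EHR and call an actual model.

```python
# Hypothetical sketch of the "AI second opinion" workflow.
# All names and data here are illustrative, not a real EHR/model API.
from dataclasses import dataclass

@dataclass
class Case:
    record_id: str
    doctor_diagnosis: str
    ai_diagnosis: str

def route_case(case: Case) -> str:
    """Agreement needs no action; dissent flags the case for human re-review."""
    if case.doctor_diagnosis.strip().lower() == case.ai_diagnosis.strip().lower():
        return "no action"   # AI concurs with the human diagnosis
    return "re-review"       # dissenting opinion: send back to the doctor

# Usage: one agreeing case, one dissenting case
print(route_case(Case("ehr-001", "influenza", "Influenza")))            # no action
print(route_case(Case("ehr-002", "influenza", "bacterial pneumonia")))  # re-review
```

The point of the case-insensitive comparison is only to avoid flagging trivial formatting differences as dissent; real diagnoses would of course need something smarter than string equality.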

The other thing is that common issues are common. I have to wonder how much that ultimately biases both the doctor and the LLM. If you diagnose everyone who comes in with a runny nose and cough as having the flu, you will likely be right most of the time.

You could say the same about the AI. AI is incredibly well suited to extracting knowledge through chat.

In this regard, a doctor also has just 15 minutes for an interview. An AI can be with the patient for days leading up to a consultation.

So if we remove this "handicap", the AI will likely really start to win.

  • Chat seems like a really bad way to get patient information. You'll miss out on various cues doctors use to diagnose you, and people can be ashamed of their symptoms and may try to hide them.

  • It’s not good for a doctor to be your best friend. It doesn’t seem any LLM is capable of that emotional distance.

This feels like a deeply important observation. It would also be interesting to include e.g. a short video or photograph for the AI to use.

My doctor makes me wait for weeks, then googles my symptoms in front of me, asks whether I checked the internet before I came, gives me the first Google result as an answer, and suggests I wait longer. He has done this several times.

When I got tired of this I just lied to the emergency line and was admitted to hospital based on my lie, and they discovered a brain tumor which explained the other stuff.

I WISH I could just use AI.

Bonus: health networks now push doctors to use AI transcription software for EHR entries. Doctors and nurses like it because they don't have to type it up. But it's a complete shitshow as to whether the records are reviewed for transcription errors, which happen quite often.

Now feed a flawed transcript into an AI diagnosis system and bam-o. The AI will treat it as gospel, while the doctor may go "wait, what?"