Comment by fwip
2 days ago
AI has a long way to go before it can serve as a trustworthy middleman between research papers and patients.
For instance, even WebMD might waste more time in doctors' offices than it saves, and that's a true, hallucination-free source, written specifically to give laypeople understandable information.
This study found that an LLM outperformed doctors "on a standardized rubric of diagnostic performance based on differential diagnosis accuracy, appropriateness of supporting and opposing factors, and next diagnostic evaluation steps, validated and graded via blinded expert consensus."
https://jamanetwork.com/journals/jamanetworkopen/fullarticle...
This study is about doctors using an LLM, and it doesn't seem like the LLM made them significantly more accurate than doctors not using one.
If you look in the discussion section you'll find that wasn't exactly what the study ended up with. I'm looking at the paragraph starting:
> An unexpected secondary result was that the LLM alone performed significantly better than both groups of humans, similar to a recent study with different LLM technology.
They suspected the clinicians were not prompting it well, since the LLM on its own outperformed the LLM in the hands of skilled operators.