Comment by roenxi
2 days ago
One of the more exciting AI use-cases is that it should be about competent to handle the conversational parts of diagnosis. Since it has read all the studies, it should be possible to spend an hour at home talking to an AI and then turn up at the doctor with a checklist of diagnostic work you want them to try.
A shorter amount of expensive time with a consultant is more powerful if there is a solid reference to work with for longer beforehand.
AI has a long way to go before it can serve as a trustworthy middleman between research papers and patients.
For instance, even WebMD may waste more time in doctors' offices than it saves, and that's an accurate, hallucination-free source written specifically to give laypeople understandable information.
This study found that an LLM outperformed doctors "on a standardized rubric of diagnostic performance based on differential diagnosis accuracy, appropriateness of supporting and opposing factors, and next diagnostic evaluation steps, validated and graded via blinded expert consensus."
https://jamanetwork.com/journals/jamanetworkopen/fullarticle...
That study is about doctors using an LLM, and it doesn't seem that the LLM made them significantly more accurate than doctors working without one.