Comment by lukko

18 hours ago

I'm surprised at both the article and the paper - both seem very hyperbolic. This is LLMs competing against doctors in a way that is heavily weighted in the LLM's favour, which does not represent clinical practice. These reasoning cases are not benchmarks for doctors; they are learning tools.

I think it's important to note that diagnosis also relies on an accurate description of the patient in the first place, and the information you gather depends on the differential diagnosis. Part of the skill of being a doctor is gathering information from lots of different sources and filtering out what is important. That information may come from the patient, who may not be able to communicate clearly or may be non-verbal, or from carers and next of kin. History-taking is a skill in itself, as is examination. Here those data are simply given.

For pattern recognition from plain text, especially on questions that may be in o1's training data, I'm not surprised at all that it would outperform doctors, but it doesn't seem to be a clinically useful comparison. Deciding which investigations to order, what imaging to do, and which parts of the history to discard can't really be separated from forming the diagnosis.

Also, you need to see an analysis of the incorrect calls. The goal of a human doctor is not to get the highest accuracy; it's to limit total harm to the patient. There can be cases where the odds favor picking X (though perhaps not by much), but the safe thing to do is to rule out some other option first, or start a safe treatment that covers several other possibilities.

Simply getting the "high score" on this evaluation is not necessarily good medical treatment.
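To make the accuracy-vs-harm point concrete, here's a toy decision-theory sketch with entirely made-up probabilities and harm values (the diagnoses and numbers are illustrative, not clinical data): the action that matches the single most likely diagnosis can still carry far more expected harm than a "rule out the bad thing first" action.

```python
# Toy example (made-up numbers): most probable diagnosis != best action
# when the harms of being wrong are asymmetric.
probs = {"migraine": 0.85, "subarachnoid_hemorrhage": 0.15}

# Hypothetical harm (arbitrary units) of each action under each true condition.
harm = {
    "treat_migraine": {"migraine": 0, "subarachnoid_hemorrhage": 100},
    "order_ct_first": {"migraine": 2, "subarachnoid_hemorrhage": 5},
}

def expected_harm(action):
    # Expected harm = sum over diagnoses of P(diagnosis) * harm(action, diagnosis)
    return sum(probs[d] * harm[action][d] for d in probs)

for action in harm:
    print(action, round(expected_harm(action), 2))
# treat_migraine has ~6x the expected harm of order_ct_first here,
# even though "migraine" is the most likely single diagnosis.
```

An accuracy benchmark scores "treat_migraine" as the right call 85% of the time; a harm-weighted evaluation prefers the CT.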

  • Exactly this. Most diagnosis isn’t about pinpointing the underlying exact cause, it’s ruling out the really bad stuff and minimising harm. Differential diagnosis just isn’t real world medicine.

  • Yeah 100% this. We've all used AI. It's obvious that it can sometimes outperform humans in a "did it get the right answer" benchmark while being wildly worse overall because of worse failure modes.

    I bet the AI's incorrect answers are less "I don't know, let's get a second opinion" and more "you're perfectly fine, 0% chance this is cancer".

At many (otherwise) world-leading facilities even just reviewing the patient history is a slog. There is rarely any ability to keyword-search the records, or even to filter them by location, or by the title and occupation of the healthcare professional who made each entry. Very ill patients in particular will have hundreds and hundreds of recent entries.

And stepping through those entries isn’t like browsing a modern local-first app [1], where you can scroll through dozens of entries in milliseconds. It’s not even like the slightly older and slightly slower Gmail interface. You’re clicking on each record and waiting 400 ms to 3 s for it to load, as if instead of a 25 Gbps fiber connection you were on dialup, requesting the record from Epic’s headquarters in the US and proxying it via Australia.

[1] https://bugs.rocicorp.dev/p/roci