Comment by dchu17
4 hours ago
> What's the performance of the models trained specifically on this task, and random guessing, compared to the expert pathologist?
I should probably first clarify here: the disease classification tasks are about subtyping the cancer (i.e., classifying a case as invasive ductal carcinoma of the breast) rather than just binary malignant/benign classification, so the random-guessing baseline is much lower, which makes the model's performance more impressive.
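To make the baseline point concrete, here is a minimal sketch (not from the original study; the class count of 20 is a hypothetical example) showing how the random-guessing accuracy shrinks as the label space grows from binary to multi-way subtyping:

```python
import random

def random_baseline(n_classes: int, trials: int = 100_000, seed: int = 0) -> float:
    """Simulate accuracy of uniform random guessing against uniform labels.

    Analytically this is just 1 / n_classes; the simulation makes the
    comparison tangible.
    """
    rng = random.Random(seed)
    hits = sum(
        rng.randrange(n_classes) == rng.randrange(n_classes)
        for _ in range(trials)
    )
    return hits / trials

# Binary benign/malignant: ~50% by chance.
print(round(random_baseline(2), 2))
# Hypothetical 20-way cancer subtyping: ~5% by chance.
print(round(random_baseline(20), 2))
```

So a model scoring, say, 70% on a 20-way subtyping task is far above chance, whereas 70% on a binary task is much less remarkable.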
> Would also be curious how the LLM compares to this and other approaches.
There aren't a lot of public general-purpose pathology benchmarks. There are some, like (https://github.com/sinai-computational-pathology/SSL_tile_be...), but they focus on just binary benign/malignant classification tasks and binary biomarker detection tasks.
I am currently working on self-hosting the available open-source models.
> this seems like the sort of task where being right 90% of the time is not good enough, so even if the LLM beats other approaches, it still needs to close the gap to human performance
Yep, your intuition is right here, and actually the expectation is probably closer to the mid-to-high 90s, especially for FDA approval (and most AI tools position themselves as co-pilots at the moment). There is obviously a long way to go, but what I find interesting about this approach is that it allows LLMs to generalize across (1) a variety of tissue types and (2) pathology tasks such as IHC H-score scoring.
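For readers unfamiliar with IHC H-scoring: it summarizes immunohistochemical staining as a weighted sum of the percentage of cells staining at each intensity level, yielding a score from 0 to 300. A minimal sketch of the standard formula (the percentages below are made-up example values):

```python
def h_score(pct_weak: float, pct_moderate: float, pct_strong: float) -> float:
    """IHC H-score: 1*(% weak) + 2*(% moderate) + 3*(% strong).

    Each argument is the percentage (0-100) of tumor cells staining at
    that intensity; unstained cells contribute 0, so the score spans
    0 (no staining) to 300 (all cells staining strongly).
    """
    pcts = (pct_weak, pct_moderate, pct_strong)
    if any(p < 0 for p in pcts) or sum(pcts) > 100:
        raise ValueError("percentages must be non-negative and sum to <= 100")
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

# Example: 10% weak, 20% moderate, 30% strong staining.
print(h_score(10, 20, 30))  # 10 + 40 + 90 = 140
```

Scoring a continuous quantity like this is a different kind of task from classification, which is part of why a model that handles both is notable.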