Show HN: Cancer diagnosis makes for an interesting RL environment for LLMs
Hey HN, this is David from Aluna (YC S24). We work with diagnostic labs to build datasets and evals for oncology tasks.
I wanted to share a simple RL environment I built that gives frontier LLMs a set of tools to zoom and pan across a digitized pathology slide and find the regions relevant to making a diagnosis. Here are some videos of the LLM performing diagnosis on a few slides:
(https://www.youtube.com/watch?v=k7ixTWswT5c): traces of an LLM choosing different regions to view before making a diagnosis on a case of small-cell carcinoma of the lung
(https://youtube.com/watch?v=0cMbqLnKkGU): traces of an LLM choosing different regions to view before making a diagnosis on a case of benign fibroadenoma of the breast
Why I built this:
Pathology slides are the backbone of modern cancer diagnosis. Tissue from a biopsy is sliced, stained, and mounted on glass for a pathologist to examine for abnormalities.
Today, many of these slides are digitized into whole-slide images (WSIs) in TIF or SVS format, typically several gigabytes in size.
While there exist several pathology-focused AI models, I was curious to test whether frontier LLMs can perform well on pathology tasks. The main challenge is that WSIs are far too large to fit into an LLM’s context window. The standard workaround, splitting them into thousands of smaller tiles, is inefficient for large frontier LLMs: a 100,000 x 100,000-pixel slide yields roughly 150,000 non-overlapping 256 x 256 tiles.
Inspired by how pathologists zoom and pan under a microscope, I built a set of tools that let LLMs control magnification and coordinates, viewing small regions at a time and deciding where to look next.
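Concretely, the core tool boils down to a crop-at-a-given-magnification function over the slide's image pyramid. Here's a minimal sketch of the idea using openslide (simplified; the function name, signature, and coordinates below are illustrative, not the exact interface):

    # Minimal sketch of the zoom/pan tool (illustrative; names/signature are mine)
    # Requires: pip install openslide-python pillow
    import openslide

    slide = openslide.OpenSlide("case_001.svs")  # hypothetical slide path

    def view_region(x: int, y: int, level: int, size: int = 1024):
        """Return a size x size RGB crop centered at level-0 coordinates (x, y),
        read at the given pyramid level (level 0 = highest magnification).
        The LLM calls this repeatedly, deciding where to look next."""
        ds = slide.level_downsamples[level]  # downsample factor at this level
        # read_region expects the top-left corner in level-0 coordinates
        top_left = (int(x - size * ds / 2), int(y - size * ds / 2))
        return slide.read_region(top_left, level, (size, size)).convert("RGB")

    # e.g. a low-power overview first, then a high-power look at one region
    w, h = slide.dimensions
    overview = view_region(w // 2, h // 2, level=slide.level_count - 1)
    closeup = view_region(41_000, 27_500, level=0)  # coordinates the model picks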
This ended up producing some interesting behaviors, and with some prompt engineering it actually yielded pretty good results:
- GPT-5: explored up to ~30 regions before deciding (concurred with an expert pathologist on 4 out of 6 cancer subtyping tasks and 3 out of 5 IHC scoring tasks)
- Claude 4.5: typically used 10–15 views but reached accuracy similar to GPT-5 (concurred with the pathologist on 3 out of 6 cancer subtyping tasks and 4 out of 5 IHC scoring tasks)
- Smaller models (GPT-4o, Claude 3.5 Haiku): examined ~8 frames and were less accurate overall (1 out of 6 cancer subtyping tasks and 1 out of 5 IHC scoring tasks)
Obviously, this was a small sample set, so we are working on a larger benchmark suite with more cases and types of tasks, but I thought it was cool that this worked at all, so I wanted to share it with HN!
LLMs finding cancer is wild!! I get the importance of giving the model control over zoom and pan to narrow down where to look, but I’m curious about the other side.
*How much does the LLM actually understand cancer recognition itself?*
If you gave it just a cropped region that’s clearly cancerous, without the exploration step, would it still be able to identify it correctly?
How would a human classify the cancers? I assume the LLM's training data doesn't include a whole lot of cancer samples, so presumably there are some rules that it follows?
> While there exist several pathology-focused AI models
Would also be curious how the LLM compares to this and other approaches. What's the performance of models trained specifically on this task, and of random guessing, compared to the expert pathologist? Correct me if I'm wrong, but this seems like the sort of task where being right 90% of the time is not good enough, so even if the LLM beats other approaches, it still needs to close the gap to human performance.
> What's the performance of the models trained specifically on this task, and random guessing, compared to the expert pathologist?
I should probably first clarify here: the disease classification tasks are about subtyping the cancer (i.e., classifying a case as invasive ductal carcinoma of the breast) rather than just broad malignant/benign classification, so random guessing does much worse, which makes the model's performance more impressive.
> Would also be curious how the LLM compares to this and other approaches.
There aren't a lot of public general-purpose pathology benchmarks. There are some, like (https://github.com/sinai-computational-pathology/SSL_tile_be...), but they focus on just binary benign/malignant classification tasks and binary biomarker detection tasks.
I am currently working on self-hosting the available open-source models.
> this seems like the sort of task where being right 90% of the time is not good enough, so even if the LLM beats other approaches, it still needs to close the gap to human performance
Yep, your intuition is right here, and the expectation is actually probably closer to the mid-to-high 90s, especially for FDA approval (and most AI tools position themselves as co-pilots at the moment). There is obviously a long way to go, but what I find interesting about this approach is that it lets LLMs generalize across (1) a variety of tissue types and (2) pathology tasks such as IHC H-score scoring.
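For anyone unfamiliar with H-scores: an IHC H-score weights the percentage of tumor cells staining at each intensity (1+ weak, 2+ moderate, 3+ strong) and sums them, giving a value from 0 to 300. A quick sketch of the arithmetic:

    def h_score(pct_weak: float, pct_moderate: float, pct_strong: float) -> float:
        """IHC H-score: percent of cells at each staining intensity (1+, 2+, 3+),
        weighted 1/2/3 and summed; ranges 0 (no staining) to 300 (all cells 3+)."""
        return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

    print(h_score(20, 30, 10))  # 20% weak + 30% moderate + 10% strong -> 110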