Comment by deepdarkforest
2 days ago
Interesting. It's just an agent loop with access to Python exec and web search as standard, BUT with 150 premade, curated tools like analyze_circular_dichroism_spectra, each with very specific params that just execute a hardcoded Python function, plus easy-to-load databases that conform to the tools' standards.
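A minimal sketch of what one such curated tool might look like. The tool name mirrors the comment; the parameters, validation, and return value are invented for illustration (the real hardcoded analysis is replaced with a trivial summary so the sketch runs):

```python
# Hypothetical curated tool: a fixed signature that validates its
# inputs, then hands off to a hardcoded, pre-verified function.
# Strict params mean the agent either conforms to this schema
# or the call fails loudly, narrowing the error scope.

def analyze_circular_dichroism_spectra(wavelengths_nm, ellipticities_mdeg,
                                       path_length_cm=0.1):
    if len(wavelengths_nm) != len(ellipticities_mdeg):
        raise ValueError("wavelengths and ellipticities must align")
    if path_length_cm <= 0:
        raise ValueError("path length must be positive")
    # ...the hardcoded, verified analysis would go here; we return
    # a trivial summary to keep the sketch self-contained.
    return {"n_points": len(wavelengths_nm),
            "min_nm": min(wavelengths_nm),
            "max_nm": max(wavelengths_nm)}
```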
The argument is that if you just ask Claude Code to do niche biomed tasks, it won't have the knowledge to do them that way by just searching PubMed and doing RAG on the fly, which is fair given the current gen of LLMs. It's an interesting approach, and they show some generalization in the paper (with well-known, tidy datasets), but real-life data is messier. The approach here (correct me if I'm wrong) is to identify the correct tool for a task, use the generic Python exec tool to shape the data into the acceptable format if needed, try the tool, and go again.
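That identify / reshape / retry loop might look roughly like the following. All names here are invented; `reshape` stands in for the agent's generic Python-exec step:

```python
# Hypothetical sketch of the loop: try the curated tool, and on a
# schema mismatch fall back to generic Python to reshape the data,
# then retry.

def run_with_reshape(tool, data, reshape, max_attempts=2):
    for _ in range(max_attempts):
        try:
            return tool(**data)
        except (TypeError, ValueError):
            # generic python-exec step: coerce data into the
            # tool's expected schema and go again
            data = reshape(data)
    raise RuntimeError("tool failed even after reshaping")

# Toy example: the tool wants a 'values' kwarg, the raw data has 'vals'.
def mean_tool(values):
    return sum(values) / len(values)

raw = {"vals": [1.0, 2.0, 3.0]}
result = run_with_reshape(mean_tool, raw,
                          reshape=lambda d: {"values": d["vals"]})
# result == 2.0
```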
It would be useful to use the tools just as guidance to inform a generic code agent imo, but executing the "verified" hardcoded tools narrows the error scope: as long as you can check that your data is shaped correctly, the analysis will be correct. Not sure how much of an advantage this is in the long term for working with proprietary datasets, but it's an interesting direction.