Comment by tartakovsky
6 days ago
Well, task == Resolving real GitHub Issues
Languages == Python only
Libraries (um looks like other LLM generated libraries -- I mean definitely not pure human: like Ragas, FastMCP, etc)
So seems like a highly skewed sample and who knows what can / can't be generalized. Does make for a compelling research paper though!
Hey, paper author here. We did try to get an even sample - we include both SWE-bench repos (which are large, popular and mostly human-written) and a sample of smaller, more recent repositories with existing AGENTS.md (these tend to contain LLM-written code, of course). Our findings generalize across both these samples. What is arguably missing are small repositories of completely human-written code, but these are quite difficult to find nowadays.
Why stick to python-only repositories though?
To reduce the number of variables to account for. To be able to finish the paper this year, and not the next century. To work with a familiar language and environments. To use a language heavily represented in the training data.
I mean, it's not that hard to understand why.
I think that is a rather fitting approach to the problem domain. A task being a real GitHub issue is a solid definition by any measure, and I see no problem picking language A over B or C.
If you feel strongly about the topic, you are free to write your own article.
> Libraries (um looks like other LLM generated libraries -- I mean definitely not pure human: like Ragas, FastMCP, etc)
How does this invalidate the result? Aren't AGENTS.md files put exactly into those repos that are partly generated using LLMs?