Comment by maddmann
2 months ago
lol 5000 tests. Agentic code tools have a significant bias to add versus remove/condense. This leads to a lot of bloat and orphaned code. Definitely something that still needs to be solved for by agentic tools.
2 months ago
lol 5000 tests. Agentic code tools have a significant bias to add versus remove/condense. This leads to a lot of bloat and orphaned code. Definitely something that still needs to be solved for by agentic tools.
> Agentic code tools have a significant bias to add versus remove/condense.
Your point stands uncontested by me, but I just wanted to mention that humans have that bias too.
Random link (has the Nature study link): https://blog.benchsci.com/this-newly-proven-human-bias-cause...
https://en.wikipedia.org/wiki/Additive_bias
Great point, interesting how agents somehow pick up the same bias.
Oh I’ve had agents remove tests plenty of times. Or cripple the tests so they pass but are useless - more common and harder to prompt against.
Ah true, that also can happen — in aggregate I think models will tend to expand codebases versus contract. Though, this is anecdotal and probably is something ai labs and coding agent companies are looking at now.
It’s the same bias for action which makes them code up a change when you genuinely are just asking a question about something. They really want to write code.