Comment by bee_rider
2 days ago
I agree that I’d want the bot to tell me that it couldn’t solve the problem. However, if I explicitly ask it to provide a solution without commentary, I wouldn’t expect it to do the right thing when the only real solution is to provide commentary indicating that the code is unfixable.
Like if the prompt was “don’t fix any bugs and just delete code at random” we wouldn’t take points off for adhering to the prompt and producing broken code, right?
Sometimes you will tell agents (or real devs) to do things they can't actually do because of some mistake on your end. Having it silently change things and cover the problem up is probably not the best way to handle that situation.
If I told someone to just make changes and don’t provide any commentary, I would not be that surprised to get mystery changes. I’d say that was my fault to a large extent. I’d also consider that I was being a bit rude, and probably got what I deserved.
But this is not a normal human interaction. I probably wouldn’t give somebody a “no feedback” rule, and if I were on the receiving end of such a request I would definitely want to clarify what they meant. Without the ability to negotiate or push back, the bot is in a very tough position.
You can decrease the chance of this happening by not explicitly prompting the model to make silent changes.