Comment by 59nadir

9 hours ago

> However, I'm not convinced this shows GPT 5.5 being that much better than Opus 4.7. It could very well be the harness around it, the system prompts used in the harness and tools available.

A model that can more effectively make use of the tools presented to it is going to be better. You're not wrong about the system prompt, though; it can have quite a pronounced effect, especially when what the agent is bridging to isn't just a case of bash + read/write. There you need the prompt (and tool descriptions) to steer and reinforce what the model should actually do, because most models are heavily over-trained on executing bash lines.
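To make that concrete, here's the flavor of steering text I mean (the wording is invented for illustration, not lifted from any real harness):

```python
# Illustrative only: the wording is made up, but this is the kind of
# reinforcement that goes into the system prompt and tool descriptions
# when the agent's interface is something other than bash.
SYSTEM_PROMPT = """\
You control the application ONLY through the tools listed below.
There is no shell. Do NOT emit bash commands or suggest running them.
If a task seems to require the shell, restate it in terms of your tools.
"""

EDIT_TOOL_DESCRIPTION = (
    "The only way to change files. Never write raw bash; express every "
    "change as structured arguments to this tool."
)
```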

When it comes to more basic agent usage, where something just runs in a terminal and executes bash, most models are ultimately going to do just fine as long as you provide the very basics.
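And "the very basics" really is basic. A minimal sketch, assuming a hypothetical `call_model` stand-in for whatever provider SDK you use (the `BASH:` protocol here is also invented for illustration):

```python
import subprocess

# Hypothetical stand-in for your provider's chat call: takes a message
# history, returns the assistant's reply as plain text.
def call_model(messages: list[dict]) -> str:
    raise NotImplementedError("wire up your provider's SDK here")

SYSTEM = (
    "You are a coding agent in a terminal. To run a command, reply with "
    "exactly one line: BASH: <command>. Say DONE when the task is finished."
)

def run_agent(task: str, max_steps: int = 20) -> None:
    messages = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply})
        if not reply.strip().startswith("BASH:"):
            break  # the model stopped asking for commands
        command = reply.strip()[len("BASH:"):].strip()
        # Execute the command and feed stdout/stderr back as the next turn.
        result = subprocess.run(command, shell=True, capture_output=True,
                                text=True, timeout=60)
        messages.append({"role": "user",
                         "content": result.stdout + result.stderr})
```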

Regarding the case in your post, it could be any number of issues: the provider being oversubscribed, leaving less compute for your request; the model just not being particularly great; your previous context (in your original session) subtly nudging the model away from the correct behavior; and so on.

The truth is that you simply can't know the exact cause of the behavior you experienced, but I think you're also working hard to cope on behalf of Anthropic.

All in all, I think you're placing a bit too much faith in agents and their effect. If you slim down and use something like Pi instead, you'll likely get a more accurate sense of what agents do and don't do, and how much they actually matter. You can then add your own tools and experiment with how each addition changes the results.

I've written an agent that only allows models to send commands to Kakoune (a niche text editor that I use), and I can say that building an agent that just executes bash + read/write in 2026 is probably the easiest proposition ever. I say this because a lot of my work has gone into steering models away from constantly trying to write bash lines; they all seem to tend towards it, so if bash is what you wanted anyway, most of your work is already done. The bulk of the effort in those kinds of agents is better spent fixing model quirks and bad provider behavior around input/output.
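For what it's worth, the core of that kind of single-tool agent is roughly this shape. This is a simplified sketch, not my actual code: the schema follows the Anthropic-style `name`/`description`/`input_schema` convention, and `kak -p` is Kakoune's real mechanism for piping commands into a running session.

```python
import subprocess

# The single tool the model gets. Most of the steering lives in the
# description, which has to fight the model's bash habit head-on.
KAKOUNE_TOOL = {
    "name": "kakoune_send",
    "description": (
        "Send commands to the user's running Kakoune session. This is your "
        "ONLY way to act: there is no shell and no direct file read/write. "
        "Express every edit as Kakoune commands."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "commands": {
                "type": "string",
                "description": "Newline-separated Kakoune commands.",
            }
        },
        "required": ["commands"],
    },
}

def kakoune_send(commands: str, session: str = "main") -> None:
    # `kak -p <session>` reads commands on stdin and forwards them
    # to the named session.
    subprocess.run(["kak", "-p", session], input=commands,
                   text=True, check=True)
```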

> A model that can more effectively make use of the tools presented to it is going to be better.

Of course. What I was getting at is that if harness A doesn't expose certain useful tools that harness B does, it doesn't matter how well the model could have used them.

> I think you're also working hard to cope on behalf of Anthropic

How on earth did you get that out of my post? I was just reporting a recent experience to make the point that harness+model is a very different thing from model alone when it comes to evaluating effectiveness and quality of output.