Comment by alt227
23 days ago
I thought it sounded more like an ad for Claude written by Anthropic:
> "This was surprising, but fits with Claude's playful personality and flexible disposition."
23 days ago
This sounds as expected to me as a heavy user of Opus. Claude absolutely has a "personality" that is a lot less formal and more willing to "play along" with more creative tasks than Codex. If you want an agent that's prepared to just jump in, it's a plus. If you want an agent that will be careful and considered, and will plan things out meticulously, it's not always so great - I feel that when you want Claude to do repetitive, tedious tasks, you need to do more work to prevent it from getting "bored" and trying to take shortcuts or finding something else to do, for example.
> when you want Claude to do repetitive, tedious tasks, you need to do more work to prevent it from getting "bored"
Is this sentence seriously about a computer? Have we gone so far that computers won't just do what we tell them to anymore?
Claude has outright told me "this is getting tedious" before proceeding to - directly against instructions - write a script to do the task instead of doing it "manually" (I'd told it not to because I needed more complex assessment than it could do with a script).
There are fairly straightforward fixes, such as using subagents, or scripting a loop that feeds the model one item at a time instead of a whole list (rough sketch below), since prompt compliance tends to drop the more stuff there is in the context - but yes, they will "get bored" and look for shortcuts.
Another frequent one is the model deciding to sample a subset instead of working through every item.
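To be concrete about the loop approach, something like this is what I mean - just a rough sketch, where call_model() is a stand-in for however you actually invoke the model (SDK call, CLI subprocess, whatever), and the paths and prompt are made up:

```python
# Rough sketch: drive the model once per item instead of handing it the whole list.
# call_model() is a placeholder for your actual client (SDK call, CLI subprocess, etc.).
import json
from pathlib import Path

def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your SDK / CLI call here")

items = list(Path("src").rglob("*.py"))  # whatever your work items are
results = {}

for item in items:
    # Each call starts from a small, fresh context: the instructions plus exactly one item,
    # so there's no long list sitting in the context to "get bored" of or sample from.
    prompt = (
        "Review the following file against the checklist. "
        "Do not skip or summarize; assess every function.\n\n"
        f"{item.read_text()}"
    )
    results[str(item)] = call_model(prompt)

Path("review_results.json").write_text(json.dumps(results, indent=2))
```

The point is the model never sees the full list, so the per-item instructions stay near the top of a short context every single time.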
Yup - most models ignore specific initial instructions once you pass ~50% of the usable context window, and revert to their defaults, e.g. generating overly descriptive yet useless docs / summaries.