Comment by dmk
5 hours ago
The benchmarks are cool and all but 1M context on an Opus-class model is the real headline here imo. Has anyone actually pushed it to the limit yet? Long context has historically been one of those "works great in the demo" situations.
Paying $10 per request doesn't have me jumping at the opportunity to try it!
Makes me wonder: do employees at Anthropic get unmetered access to Claude models?
It's like when you work at McDonald's and get one free meal a day. Lol, of course they get access to the full model way before we do...
Seems quite obvious that they do, within reason.
The only way to not go bankrupt is to use a Claude Code Max subscription…
Has a "N million context window" spec ever been meaningful? Very old, very terrible models "supported" a 1M context window but would lose track two short paragraphs into a conversation (looking at you, early Gemini).
Umm, Sonnet 4.5 has a 1M context window option if you use it through the API, and it works pretty well. I tend not to reach for it much these days because I prefer Opus 4.5 so much that I don't mind the added pain of clearing context, but it's perfectly usable. I'm very excited I'll get this from Opus now too.
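For anyone curious, the long-context option is an opt-in beta on the Messages API. A minimal sketch of what the request looks like with the Python SDK, assuming the `context-1m-2025-08-07` beta name from the Sonnet rollout still applies (the Opus flag and model id here are my guesses, check the docs):

```python
# Hypothetical sketch: building a Messages API request with the
# 1M-token context beta enabled. No network call is made here;
# you'd pass these fields to client.beta.messages.create(...).
def long_context_request(prompt: str) -> dict:
    """Assemble request parameters for a long-context call."""
    return {
        "model": "claude-opus-4-5",  # model id assumed, not confirmed
        "max_tokens": 4096,
        "messages": [{"role": "user", "content": prompt}],
        # Beta flag documented for Sonnet's 1M rollout; may differ for Opus.
        "betas": ["context-1m-2025-08-07"],
    }

req = long_context_request("Summarize this repo...")
print(req["betas"][0])
```

Without the beta flag you just get the standard 200K window, so it's easy to think you're testing 1M when you're not.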
If you're getting along fine with 4.5, that suggests you don't actually need the large context window for your use. And if that's true, what's the clear tell that it's working well? Am I misunderstanding?
Did they solve the "lost in the middle" problem? Proof will be in the pudding, I suppose. But that number alone isn't all that meaningful for many practical uses. Claude 4.5 often starts reverting bug fixes ~50k tokens back, which isn't a context window problem.
Opus 4.5 starts being lazy and stupid at around the 50% context mark, in my opinion, which makes me skeptical that this 1M context mode can produce good output. But I'll probably try it out and see.