Comment by Aeolun
17 hours ago
> Anthropic just came out and yolo'd the context window because they could afford to
I don’t think this is true at all. The reason CC is so good is that they’re very deliberate about what goes in the context. CC often spends ages reading 5 LOC snippets, but afterwards it only has relevant stuff in context.
Background of how it works: https://kirshatrov.com/posts/claude-code-internals
Prompt: https://gist.github.com/transitive-bullshit/487c9cb52c75a970...
Also check out claude-trace, which injects fetch hooks to get at the data: https://github.com/badlogic/lemmy/tree/main/apps/claude-trac...
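The fetch-hook idea can be sketched in a few lines. This is an illustrative reconstruction of the general technique, not claude-trace's actual code (the names `installFetchHook` and `captured` are made up here): replace `globalThis.fetch` with a wrapper that records each request/response pair, then delegates to the original.

```javascript
// Hypothetical sketch of a fetch hook (not claude-trace internals):
// wrap the global fetch so every request and response body is captured
// before being handed back to the caller.

const captured = [];

function installFetchHook() {
  const originalFetch = globalThis.fetch;
  globalThis.fetch = async (input, init) => {
    const response = await originalFetch(input, init);
    // Clone so reading the body here doesn't consume it for the caller.
    const clone = response.clone();
    captured.push({
      url: typeof input === "string" ? input : input.url,
      method: init?.method ?? "GET",
      responseBody: await clone.text(),
    });
    return response;
  };
}

// Demo with a stand-in fetch so this runs without network access.
async function main() {
  globalThis.fetch = async () => new Response('{"ok":true}');
  installFetchHook();
  const res = await fetch("https://example.test/v1/messages", { method: "POST" });
  console.log(captured[0].method, captured[0].responseBody);
  return res;
}
```

Because the hook sits at the fetch boundary, the app under observation needs no changes at all, which is what makes this approach attractive for peeking at a closed tool's API traffic.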
I'm always surprised how short system prompts are. It makes me wonder where the rest of the app's behavior is encoded.
I’ve definitely observed that CC is waaaay slower than cursor
Heard a lot of this context BS parroted all over HN; don't buy it. If simply increasing context size could solve the problem, Gemini would be the best model at everything.
Gemini tends to be better at bug hunting, but yes, at everything else Claude is still superior.