Comment by CuriouslyC
20 hours ago
The hot mess that is Claude Code (if you multi-orchestrate with it, it will grind even very powerful systems to a halt, 15+ seconds of unresponsiveness, all because CC serializes and deserializes a JSON data file that grows quite large every time you do anything), their horrible service uptime compared to all their competitors, the month-long performance degradation their users had to scream at them to get them to investigate, the fact that they had to outsource their web client and it's still bad, etc.
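The performance pattern the commenter describes (rewriting a monolithic state file on every action, so each write costs O(total history)) can be sketched with a toy benchmark. This is a hypothetical illustration of the general failure mode, not Claude Code's actual storage code; the file names and record shapes are made up, and the append-only JSONL variant is shown only as the obvious contrast:

```python
import json
import os
import tempfile
import time

def rewrite_whole_file(path, records):
    """Serialize the entire history and rewrite the file (the costly pattern)."""
    with open(path, "w") as f:
        json.dump(records, f)

def append_jsonl(path, record):
    """Append one record as a JSON line (cost independent of history size)."""
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def simulate(n_events, payload_bytes=1024):
    """Compare cumulative write cost of the two strategies over n_events."""
    payload = "x" * payload_bytes
    with tempfile.TemporaryDirectory() as d:
        full_path = os.path.join(d, "state.json")
        jsonl_path = os.path.join(d, "state.jsonl")

        # Strategy 1: rewrite the whole (growing) file after each event.
        records = []
        t0 = time.perf_counter()
        for i in range(n_events):
            records.append({"event": i, "data": payload})
            rewrite_whole_file(full_path, records)  # O(history) per event
        rewrite_cost = time.perf_counter() - t0

        # Strategy 2: append one record per event.
        t0 = time.perf_counter()
        for i in range(n_events):
            append_jsonl(jsonl_path, {"event": i, "data": payload})  # O(1) per event
        append_cost = time.perf_counter() - t0
        return rewrite_cost, append_cost

if __name__ == "__main__":
    rewrite_cost, append_cost = simulate(500)
    print(f"full rewrite: {rewrite_cost:.3f}s, append-only: {append_cost:.3f}s")
```

The full-rewrite strategy does quadratic total I/O (each of n events rewrites all prior events), which is why the stalls would get worse the longer a session runs, exactly the multi-second hangs described above.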
You think Anthropic’s engineering talent for infosec is possible to determine because…you’ve used Claude Code? Am I understanding this right?
> The hot mess that is Claude Code
And yet it's one of the fastest-growing products of all time and is currently the state of the art for AI coding assistants. Yeah, it's not perfect, but nothing is.
I give the model a lot of credit for being very good at a fairly narrow slice of work (basic vibe coding/office stuff) that also happens to be extremely common. I'm harder on Claude Code because of its success and the fact that the company that makes it is worth so much.
"I doubt they have good security chops because they make bad technical choices"
"What bad technical choices?"
"These ones"
"Ok but they're fast-growing, so..."
Does being a fast-growing product mean you have security chops or is this a total non-sequitur?
They brought up some performance-related edge case that I've never even run into, even with extremely heavy usage, including building my own agent that wraps around CC and runs several sessions in parallel. So yeah, I fail to see the relevance.
I have the opposite perception: they’re the only company in the space that seems to have a clue what responsible software engineering is.
Gemini Code and Cursor both did such a poor job sandboxing their agents that the exploits sound like punchlines, while Microsoft doesn’t even try with Copilot Agentic.
Countless Cursor bugs have been fixed with obviously vibe-coded fake solutions (you can see if you poke into code embedded in their binaries) which don’t address the problems on a fundamental level at all and suggest no human thinking was involved.
Claude has had some vulnerabilities, but far fewer, and they're the only company that even seemed to treat security like a serious concern, and are now publishing useful related open source projects. (Not that your specific complaint isn't valid, that's been a pain point for me too, but in terms of the overall picture that's small potatoes.)
I’m personally pretty meh on their models, but it’s wild to me to hear these claims about their software when all of the alternatives have been so unsafe that I’d ban them from any systems I was in charge of.
I suggest spending some time with Codex. Claude likes to hack objectives; it's really messy, and it'll sometimes run off without a clear idea of what you want or how a project works. That is all fine when you're a non-technical person vibe coding a demo, but it really kills the product when you're working on hard tasks in a large codebase.
Codex is the one I haven’t really tried, I’ll have to check it out.
Every tool in this space is blatantly unsafe. The sandboxes that people have designed are quite ineffective.
[flagged]
You seem to have a personal emotional investment in Anthropic, what's the deal?
[flagged]