Comment by submeta
10 hours ago
OT: Has anyone observed that Claude Code in CLI works more reliably than the web or desktop apps?
I can run very long, stable sessions via Claude Code, but the desktop app regularly throws errors or simply stops the conversation. A few weeks ago, Anthropic introduced conversation compaction in the Claude web app. That change was very welcome, but it no longer seems to work reliably. Conversations now often stop progressing. Sometimes I get a red error message, sometimes nothing at all. The prompt just cannot be submitted anymore.
I am an early Claude user and subscribed to the Max plan when it launched. I like their models and overall direction, but reliability has clearly degraded in recent weeks.
Another observation: ChatGPT Pro tends to give much more senior and balanced responses when evaluating non-technical situations. Claude, in comparison, sometimes produces suggestions that feel irrational or emotionally driven. At this point, I mostly use Claude for coding tasks, but not for project or decision-related work, where the responses often lack sufficient depth.
Lastly, I really like Claude’s output formatting. The Markdown is consistently clean and well structured, and better than any competitor I have used. I strongly dislike ChatGPT’s formatting and often feed its responses into Claude Haiku just to reformat them into proper Markdown.
Curious whether others are seeing the same behavior.