Comment by rednb

20 hours ago

As someone who uses quite a few different AI providers (Codex, GLM, DeepSeek, Claude premium, among others), I've noticed that Claude tends to move too fast and execute commands without asking for permission.

For example, if I ask a question about an implementation decision while it is implementing a plan, it answers (or sometimes doesn't) and immediately proceeds to make the changes it assumes I want. Other models switch to a chat mode, or ask about the best course of action.

That said, I'm not blaming Anthropic for this one, because IMHO the OP took a lot of risks and failed to design a proper backup and recovery strategy. I hope they recover from this, though; it must be a very stressful situation for them.

All the models I have used will frequently jump ahead a ton of steps without verifying any of their assumptions: from generating a ton of code output I didn't ask for, to making a ton of assumptions about what I'm working on without appropriate context.

  • Yeah, /plan is the only way I can work with them now. Too much "helpful" crap I didn't ask for. It gives me nightmares of former coworkers who would want to refactor 80% of the code base for a 3-line change. AI doesn't subscribe to "if it ain't broke, don't fix it."