Comment by pluc
15 hours ago
I posted this question two weeks ago: "What is your plan when the AI you have implemented throughout your company changes the results you've come to trust?" (https://www.theregister.com/2026/04/06/anthropic_claude_code...).
Since then, I've had to add:
"or won't let you log in?": https://github.com/anthropics/claude-code/issues/44257
"or makes stuff up?": https://dwyer.co.za/static/claude-mixes-up-who-said-what-and...
"or when it's down?": https://status.claude.com/incidents/6jd2m42f8mld
"or when you get banned?": https://bannedbyanthropic.com/
"or installs spyware?": https://www.thatprivacyguy.com/blog/anthropic-spyware/
And this is all exclusively about Anthropic. It's insane. With any other technology, there would be a consensus to wait until it stabilized, but not with AI: we go full throttle.
I'm genuinely curious how people who have rolled this out at serious companies answer these questions, because my answer is to keep it the fuck out.