Anthropic is Down

10 hours ago (updog.ai)

The great thing about LLMs being more or less commoditized is that switching is so easy.

I use Claude Code via the VS Code extension. When I got a couple of 500 errors just now, I simply copy-pasted my last instructions into Codex and kept going.

It's pretty rare that switching costs are THAT low in technology!

  • It’s not a moat, it’s a tiny groove on the sidewalk.

    I’ve experienced the same. Even guide markdown files that work well for one model or vendor will work reasonably well for the other.

  • Which is exactly why these companies are now all focused on building products rather than (or alongside) improving their base models. Claude Code, Cowork, Gemini CLI/Antigravity, Codex - all proprietary, and none allow model swapping (or they do only with heavy restrictions). As models get more and more commoditized, the idea is to enforce lock-in at the app level instead.

    • FWIW, OpenAI Codex is open source, and they help other open source projects like OpenCode integrate their accounts (not just the expensive API), unlike Anthropic, which blocked that last month and forces people to use its closed-source CLI.

      2 replies →

    • I only integrate with models via MCP. I highly encourage everybody to do the same to preserve the commodity status.

  • The switching cost is so low that I find it's easier and better value to have two $20/mo subscriptions from different providers than a $200/mo subscription with the frontier model of the month. Reliability and model diversity are a bonus.

  • I genuinely don't know how any of these companies can make extreme profit for this reason. If a company makes a significantly better model, shouldn't the model be able to explain to any competitor how it's better?

    Google succeeded because it understood the web better than its competitors. I don't see how any of the players in this space could be so much better that they could take over the market. It seems like these companies will create commodities, which can be profitable but are also incredibly risky for early investors and don't generate the profits that would be necessary to justify today's valuations.

    • > If a company makes a significantly better model, shouldn't the model be able to explain to any competitor how it's better?

      No. Not if it wasn't trained on any materials that reveal the secret sauce behind why it's better.

      LLMs don't possess introspection into their own training process or architecture.

      2 replies →

  • > It's pretty rare that switching costs are THAT low in technology!

    Look harder. Swapping USB devices (mouse, …) takes even less time. Switching Wi-Fi is also easy. Switching browsers works the same way. I can equally use vim/emacs/vscode/sublime/… for programming.

    • Switching between vim <-> emacs <-> IDEs is way harder than swapping a USB device (unless you already know how to use them).

    • Good point: those are standards, and by definition society forced vendors to behave and play nice together. LLMs are not standardized yet, and it is sheer luck that English works fine across different LLMs for now. Some labs are trying to push their own formats and stop that, especially around reasoning traces, e.g. Codex removing reasoning traces between calls and Gemini requiring reasoning history. So don't take this for granted.

      1 reply →

    • You make it sound like lock-in doesn't exist. But your examples are cherry-picked, and they're all standards anyway; their _purpose_ was easy switching between implementations.

    • Most people only have one mouse or Wi-Fi network. If my Wi-Fi goes down, my only other option is to use a mobile hotspot, which is inferior in almost every way.

      4 replies →

  • Except Kimi Agent via the website is hard to replace - I tried the same task in Claude Code, Codex, and Kimi Agent, and for office tasks the results are not even comparable. The versions from Anthropic and OpenAI are far behind.

Their GitHub issues are wild; random people are posting the same useless "bug reports" over and over multiple times per minute.

https://github.com/anthropics/claude-code/issues

  • Gives you a good window into a vibe coder's mentality. They do not care about anything except what they want to get done. If something is in the way, they will just try to brute force it until it works, not giving a duck if they are being an inconvenience to others. They're not aware of existing guidelines/conventions/social norms and they couldn't care less.

    • This sounds like a case of a bias called the availability heuristic. It'd be worth remembering that you often don't notice people who are polite and normal nearly as much as people who are rude and obnoxious.

    • Could it be that you're creating a stereotype in your head and getting angry about it?

      People say these things about any group they dislike. It happens so much these days that it feels like most social groups are defined by outsiders, by the things those outsiders dislike about them.

      3 replies →

    • I am starting to get concerned about how much “move fast and break things” has basically become the average person’s mantra in the US. Or at least it feels that way.

      4 replies →

    • If history doesn't repeat, but it rhymes,

      does vibe coding rhyme with Eternal September?

    • If anything, this is good news for Anthropic; they can now bury every open source project with useless issues and PRs.

  • Wow, are these submitted automatically by Claude Code? I'm not comfortable with the level of detail they have (the user's Anthropic email, the full path of the project they were working on, stack traces...)

    • Scanning a few: some are definitely written by AI, but most seem genuinely human (or at least, not Claude).

      Anecdata: I read five and found only one was AI. Your sampling may vary.

    • I consider revealing my file structure and file paths to be PII, so naturally seeing how comfortable people are with putting all that up there makes me queasy.

    • I think Claude Code has a /bug command which auto-fills those details in a GitHub report.

    • Definitely some automation involved; no way the typical Claude Code user (no offense) would by default put so much detail into reporting an issue, especially users who don't seem to understand that it's Anthropic's backend that is the problem (given the status code) rather than the client/harness.

  • And every single one of them checked "I have searched existing issues and this hasn't been reported yet".

    • A long time ago I was taking flight lessons and going through the takeoff checklist. I was going through each item, but my instructor had to remind me that I'm not just reading the checklist - I need to understand/verify each checklist item before moving on. That always stuck with me.

      2 replies →

  • It's wild that people check the box

    > I have searched existing issues and this hasn't been reported yet

    when the first 50 issues are about the 500 error.

  • This is the kind of abuse that will cause them to just close GitHub issues.

    Or they'll have to put something in the system prompt to handle this special case, so it first checks for existing bugs and just upvotes one rather than creating a new report.

  • There has to be some sort of automation making these issues; too many of them are identical but posted by different people.

    Also, love how many have the “I searched for issues” box checked, which is clearly a lie.

    Does Claude Code make issue reports automatically? (And if so, how exactly would it be doing that while Anthropic was down, given that the use of an LLM in the report is obvious?)

  • GitHub issues will be the real social network for AI agents, no humans allowed!

  • Goes to show that nobody reads error messages, and it reminds me of this old blog post:

    > A kid knocks on my office door, complaining that he can't login. 'Have you forgotten your password?' I ask, but he insists he hasn't. 'What was the error message?' I ask, and he shrugs his shoulders. I follow him to the IT suite. I watch him type in his user-name and password. A message box opens up, but the kid clicks OK so quickly that I don't have time to read the message. He repeats this process three times, as if the computer will suddenly change its mind and allow him access to the network. On his third attempt I manage to get a glimpse of the message. I reach behind his computer and plug in the Ethernet cable. He can't use a computer.

    http://coding2learn.org/blog/2013/07/29/kids-cant-use-comput...

  • That's exactly what Anthropic deserves (btw they can't even get the "Anthropic" name on GitHub, lmao; this must be the biggest company having to run under the wrong ID on GitHub).

For folks who have the latest version of LM Studio (0.4.1) installed: I just noticed they added endpoints compatible with Claude Code, so maybe this is an excellent moment to play around with local models, if you have the GPU for it. zai-org/glm-4.7-flash (Q4) is supposed to be OK-ish and should fit within 24GB of VRAM. It's not great, but it's always fun to experiment, and if the API stays down, you have some time to waste :)
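
In case it helps, here's a minimal sketch of what pointing an Anthropic-style client at a local LM Studio server could look like. This is an assumption-heavy example, not taken from LM Studio's docs: it assumes the local server listens on the default port 1234, serves an Anthropic-compatible messages route there, and exposes the model under the identifier shown below.

```python
# Minimal sketch (assumptions: LM Studio on localhost:1234 with an
# Anthropic-compatible endpoint, model loaded as "zai-org/glm-4.7-flash").
import anthropic

client = anthropic.Anthropic(
    base_url="http://localhost:1234",  # local LM Studio server instead of api.anthropic.com
    api_key="lm-studio",               # local servers typically accept any placeholder key
)

reply = client.messages.create(
    model="zai-org/glm-4.7-flash",     # whatever identifier LM Studio shows for the loaded model
    max_tokens=256,
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(reply.content[0].text)
```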

I find it a bit annoying that the last place I can learn about an Anthropic outage is the Anthropic status page.

  • As best as I can tell, there were less than 10 minutes between the last successful request I made and when the downtime was added to their status page - and since I'm not particularly crazy with my usage or anything, the gap could have been even smaller.

    Honestly, that seems okay to me. Certainly better than what AWS usually does.

  • https://status.claude.com/

    What do you mean? It's right there. Judging by the GitHub issues, it only took them 10 minutes to add the incident message.

    • It appeared there like 5 minutes ago; it was down for at least 20 before that.

      That's 20 minutes of millions of people visiting the status page, seeing green, and then spending that time resetting their context, looking at their system and network configs, etc.

      It's not a huge deal, but for $200/month it'd be nice if, after the first two-thousand 500s went out (which I imagine takes less than 10 seconds), the status page automatically went orange.
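
      Something as simple as a windowed 5xx counter could do it. A hypothetical sketch; record_response, set_status, and the numbers are made up for illustration, not any real Anthropic or status-page API:

      ```python
      # Hypothetical sketch: flip the status page once the 5xx count in a short
      # window crosses a threshold. Nothing here is a real status-page API.
      import time
      from collections import deque

      WINDOW_SECONDS = 10
      THRESHOLD = 2000              # the "first two-thousand 500s" from the comment above

      recent_errors: deque[float] = deque()

      def set_status(state: str) -> None:
          print(f"status page -> {state}")   # stand-in for a real status-page update

      def record_response(status_code: int) -> None:
          now = time.time()
          if status_code >= 500:
              recent_errors.append(now)
          # forget errors that have aged out of the window
          while recent_errors and now - recent_errors[0] > WINDOW_SECONDS:
              recent_errors.popleft()
          if len(recent_errors) >= THRESHOLD:
              set_status("degraded")
      ```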

Big models are single points of failure. I don't want to rely on them for my business, security, wealth, health, and governance.

Why do people have to learn the same lessons over and over again? What makes them forget, or blind to the obvious pitfalls?

  • Aren't most developers accessing Anthropic's models via VS Code/GitHub? It takes seconds to switch to a different model. Today I'm using Gemini.

  • The entire AI hype was started because Silicon Valley wanted a new SaaS product to keep itself afloat; notice that LLMs started getting pushed right after Silicon Valley Bank collapsed.

Both the CC API and their website are down -- hopefully related to the rumored Sonnet 5 release.

Anthropic might have the best product for coding, but good god, the experience is awful. Random limits hit when you _know_ you shouldn’t have reached them yet, the jankiness of their client, the service being down semi-frequently. It feels like the whole infra is built on a house of cards and badly struggles 70% of the time.

I think my $20 OpenAI sub gets me more tokens than Claude’s $100. I can’t wait until Google or OpenAI overtakes them.

I've had the $20/month account for OpenAI, Google, and Anthropic for months. Anthropic consistently has more downtime and throws more errors than the other two. Claude (on the web) also throws a lot of seemingly false-positive errors: it will claim an error occurred but then work normally. I genuinely like Claude the best, but its performance does not inspire confidence.

Even when Claude isn't down according to the status indicator, I get issues all the time with Claude being so slow that it's basically down. I have to kill the request and then restart it. I wonder if it has to do with being on the Pro plan; I rarely had this issue when I was on Max.

Their website seems fine to me, but CC is throwing API error 500.

-edit- CC is back up on my machine

I was getting 500 errors from Claude Code this morning, but their status page was green. It's so frustrating that these pages aren't automated, especially considering there are paying users affected.

We need an updated XKCD comic where they are sword fighting, but instead of “compiling” it’s “Anthropic is down”.

Yeah, Claude Desktop is currently broken; all my MCP tool calls keep failing and retrying, then it'll work for a bit and stop again. Just retrying in an Opus chat in the desktop app to make it finish what I started has wasted 35% of my Pro hourly quota.

This is why I stopped paying for Max until they fix this shit.

What does "down" mean in this context? I imagine running inference on any server suffices. Just rent from AWS, since Amazon owns it anyway, and keep Claude running.

Fortunately this happened before working hours on the West Coast, so it shouldn't impact that many people.

Everybody that uses it knows it is down, so what value does this add? There's no context here either. Posts like these feel so much like low-hanging first-to-post karma grabs.

It's like: popular service is down, let me post that to HN first! Low effort, but it can still end up popular.

I dunno. Maybe I'm being overly critical. Thoughts?

To add something to the discussion, though: this is a reminder of why you should not invest in one tool, Claude or otherwise. Also, don't go enhancing one of these agents, and only one of these agents. beads spent the better part of a medium-sized country's worth of energy to create a simple TODO list and got smeared in 10 minutes once Claude integrated todos in their client.

  • The usual impetus is that the official status pages habitually under-report, and sharing the post showing the best available information gives people a place to discuss where/how/why it was down. Unfortunately, many seem to take the other interpretation instead, and said discussion often ends up being low quality, in a self-fulfilling loop.

    The second note probably deserves to be a separate comment.

    • This is a good take. You are right, they rarely do report actual status.

      And yes, it probably should have been :)

  • Not everyone that interacts with a service interacts with it directly. How is this a serious question?

    A thing called APIs exists, and if your users rely on the service but you're not interacting with it directly yourself, seeing this could save you time investigating an issue.

    Or you are using it yourself, and seeing this post confirms it's not just you having an issue, so you can move on with your day.

    This has nothing to do with it being AI or it being a large service. It's the same with posts about an Azure or AWS outage.

    • I wasn't using it. I was just on HN. And yes, it's the same for all of these services of course, which was my point.

      Not sure how "nothing to do with it being a large service" squares with bringing up Azure and AWS, but fair enough.

      Now I know you guys care, so fair game.

  • I don't use it, but I'd like to know even if it's just for entertainment value. And I can imagine people who intend to use it learn something from it: redundancy must be part of their strategy, like you said.

  • Everybody who uses anything knows that it's down

    So what value does saying "X is down" add anywhere?

    It's just for discussion. You can't just ignore it and not talk about it with anyone when a particular service is down. Posts like this are pretty common on HN and I haven't seen anyone complaining, so yes, it's you being overly critical.