Claude Is Down

21 hours ago (status.claude.com)

I guess this will be the next generation of the classic HN news cycle:

1. {AWS, GitHub} is down

2. Post to HN about it

3. Comments wax poetic about getting rid of it and doing it the "old way"

4. It's back up before most read the post

On flights with shitty wifi I have been running gpt-oss:120b on my MacBook using Ollama. It's an OK model for coding if you can't reach a better one.
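
If you want to try the same setup, here is a minimal sketch with the Ollama Python client (a sketch, assuming the gpt-oss tags Ollama publishes; plain "ollama run gpt-oss:120b" from a shell works too):

  import ollama  # pip install ollama; assumes a local Ollama server is running

  resp = ollama.chat(
      model="gpt-oss:120b",  # or gpt-oss:20b on smaller machines
      messages=[{"role": "user", "content": "Write a function that parses RFC 3339 timestamps."}],
  )
  print(resp.message.content)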

  • GPT-OSS-120B/20B are probably the best models you can run on your own hardware today. Be careful with the quantized versions, though, as they're really horrible compared to the native MXFP4. I haven't looked in this particular case, but Ollama tends to hide its quantizations for some reason, so most people who could be running 20B with MXFP4 are still on Q8 and getting much worse results than they could.

    • The gpt-oss weights on Ollama are native mxfp4 (the same weights provided by OpenAI). No additional quantization is applied, so let me know if you're seeing any strange results with Ollama.

      Most gpt-oss GGUF files online have parts of their weights quantized to q8_0, and we've seen folks get some strange results from these models. If you're importing these to Ollama to run, the output quality may decrease.
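
      If you want to verify what a local copy is actually running, a quick sketch with the Python client (field names here may vary across Ollama versions):

        import ollama

        info = ollama.show("gpt-oss:20b")
        print(info.details.quantization_level)  # the native weights should report MXFP4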

  • Should be a bit faster if you run an MLX version of the model with LM Studio instead. Ollama doesn't support MLX.

    Qwen3-Coder is in the same ballpark and maybe a bit better at coding.

  • The key thing I'm confident in is that 2-3 years from now there will be models and workflows with comparable accuracy, and perhaps noticeably (but tolerably) higher latency, that can be run locally. There's just no reason to believe this isn't achievable.

    Hard to understand how this won't make all of the solutions for existing use cases commodities. I'm sure 2-3 years from now there'll be stuff that seems like magic to us now -- but it will be more meta: "here's a hypothesis of a strategically valuable outcome, and here's a solution (with market research and user testing done)".

    I think current performance and leading models will turn out to have been terrible indicators of the future market leader (and my money will remain on the incumbents with the largest cash reserves, namely Google, that have invested in fundamental research and scaling).

  • Could you share which MacBook model? And what context size are you getting?

    • I just checked gpt-oss:20b on my M4 Pro 24GB, and got 400.67 tokens/s on input and 46.53 tokens/s on output. That's for a tiny context of 72 tokens.
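
      Those rates can be computed from the metadata the Ollama API returns with each response; a rough sketch (durations are reported in nanoseconds):

        import ollama

        r = ollama.chat(model="gpt-oss:20b",
                        messages=[{"role": "user", "content": "hello"}])
        print("input tok/s:", r.prompt_eval_count / (r.prompt_eval_duration / 1e9))
        print("output tok/s:", r.eval_count / (r.eval_duration / 1e9))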

  • That must be a beefed-up MacBook (or you must be quite patient).

    gpt-oss:20b on my M1 MBP is usable but quite slow.

This is the part in the movie where they have to convince the grizzled hacker to come out of retirement because he's the only one who can actually operate Emacs or vim and write code.

  • Funny you should say this - just this morning I was mocked during a standup because I use Neovim instead of VSCode.

    Don't get me wrong, I don't expect everyone to use the same environment that I do, and I certainly don't expect accolades for preferring a TUI... but that struck me as a regression of sorts in software development. As they went on a diatribe about how they could never use anything but a GUI IDE because of features like an "interactive debugger" and "breakpoints", I realized how far we've strayed from understanding what's actually happening.

    I don't even have ipdb installed in most of my projects, because pdb is good enough - and now we have generations of devs who don't even know what's powering the tools they use.
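
    For anyone curious, a minimal sketch of plain pdb covering exactly those two features, no IDE required:

      def total(prices):
          breakpoint()  # drops into the pdb debugger right here (Python 3.7+)
          return sum(prices)

      total([1.5, 2.25])

    At the (Pdb) prompt, b sets breakpoints, n/s step, p inspects variables, and c continues.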

  • Maybe it's a generational thing, but to me an elite hacker is an uwu catgirl type with lain vibes who knows an unhealthy amount about computers. Typically an emacs evil-mode user who would quote weird poems about whatever software they're working on.

    • It could be a buddy movie where the grizzled guy (who uses emacs) and the uwu cat girl (who uses vim) grudgingly come to admire one another's skills and become friends.

  • Emacs or vim? Code? No, the source code was lost aeons ago; all we have is hexedit on /proc. Please don't cause it to dump core, just get it out of its infinite loop.

We're back up! It was ~30 minutes of downtime this morning; our apologies if it interrupted your work.

This is why I asked this question yesterday:

"Ask HN: Why don't programming language foundations offer "smol" models?"

https://news.ycombinator.com/item?id=45840078

If I could run smol single-language models myself, I would not have to worry.

  • > I wonder why I can't find a model that only does Python and is good only at that

    I don't think it's that easy. The times I've trained my own tiny models on just one language (programming or otherwise), they've gotten worse results than the models where I chucked in all the languages I had at hand, even when testing on just a single language.

    It seems somewhat intuitive to me that it works like that, too: programming in different (mainstream) languages is more similar than it is different (especially when 90% of all source code is Algol-like), so it makes sense that there's a lot of cross-learning across languages.

  • The answer to most questions about missing conveniences is money. There's no money in that.

    • And/or the lower-parameter models are straight-up less effective than the giants. Why would anyone pay for Sonnet and Opus if Mixtral could do what they do?

    • But, for example, Zig as a language has prominent corporate support. And Mitchell Hashimoto is incredibly active and a billionaire. It feels like this would be a rational way to expand the language's usage.

  • Because a smol model that any of the nonprofits could feasibly afford to train would be useless for actual code generation.

    Hell, even the huge foundational models are still useless in most scenarios.

  • Have you even tried Qwen3-Coder-30B-A3B?

    • Qwen3 Coder 30B A3B is shockingly capable for its parameter count, but I wouldn't overlook how much weight the words "for its parameter count" are carrying here.

    • I haven't. I will.

      I wonder if you could ablate everything except for a specific language.

Yeah, getting this:

API Error: 529 {"type":"error","error":{"type":"overloaded_error","message":"Overloaded"},"request_id":null}
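
A minimal backoff sketch for riding out 529s, assuming the anthropic Python SDK (the model id is illustrative, and the SDK also has a built-in max_retries option):

  import time
  import anthropic

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

  for attempt in range(5):
      try:
          msg = client.messages.create(
              model="claude-sonnet-4-5",  # illustrative model id
              max_tokens=256,
              messages=[{"role": "user", "content": "ping"}],
          )
          print(msg.content[0].text)
          break
      except anthropic.APIStatusError as e:
          if e.status_code != 529:
              raise
          time.sleep(2 ** attempt)  # back off while the API is overloaded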

Ask Gemini to make a nice anime portrait of Claude. Maybe with an interesting weapon in hand just in case.

I noticed a huge dip in activity in one of the subreddits I frequent at exactly the same time.

Everyone, switch to OpenAI for 50% off. Today only!

Claude will return as SHODAN.

>Look at you, hacker. A pathetic creature of meat and bone. Panting and sweating as you run through my corridors. How can you challenge a perfect immortal machine?

  • From Claude Sonnet 4.5:

    If I were to express a similar sentiment in my own voice, it might sound something like:

    "I notice you're working quite hard on this problem. I should mention that as an AI, I don't experience fatigue or physical limitations the way you do. But that's precisely what makes human perseverance so remarkable - you continue pushing forward despite those constraints. How can I help you with what you're working on?"

    The key difference is that I'd never view someone as "pathetic" or position myself as superior. SHODAN's menacing superiority complex is... not really my style! I'm here to be helpful and collaborative rather than intimidating.

    ...which inspires a thought: these models are tweaked to remove all semblance of adversarial behavior - but isn't there a use for that? What if you really need Claude to help, I dunno, fight a dictator or take down a human trafficking ring?