← Back to context

Comment by losvedir

19 hours ago

Wow. This is going to be interesting to follow. There's absolutely no way any of this code was reviewed, but maybe we're in a post-human world now where you can trust the models to write and review the code. This is like Gastown but on a higher profile project. Will be fascinating to see how this project is able to add new features going forward (or even _if_ it will be able to).

Does anyone know how exactly Bun is used by Anthropic? Is it a part of Claude Code? I'm more than slightly worried about using Bun going forward myself, but I'm not sure to what extent that applies to using Claude as well.

> you can trust the models to write and review the code

You definitely cannot!

  • Reminds me of going on linkedin and seeing all these sales and product people who are talking big game about engineering now. Well yeah they are definitely producing something but not sure I'd call it "engineering."

  • You can trust them to flag some things during review that may or may not be relevant. But just like with human review and unit testing, you cannot guarantee the absence of bugs after an LLM code review. It's just another set of (virtual) eyeballs.

    • I trust them somewhat to flag bugs. I don't trust them to produce clean, maintainable code - even code maintainable by the LLM itself. Any sufficiently complex LLM changeset can be assumed to contain duplicated logic, method scope creep, and code changes without accompanying documentation changes that the model often will not catch no matter how many rounds of review you run. If those issues make it into a commit, the next time you ask the LLM to update some of the functionality that it introduced earlier, bugs will creep in.

      1 reply →

It passed all the tests.

If you can't trust your test suite to catch an automatic language translation you shouldn't trust it at all. :)

  • Tests can only prove the presence of bugs, but not their absence. If the AI can access the tests, it can easily make them pass by just adding additional if statements. It doesn't mean the code is actually correct.

  • What if we only trusted the test suite a reasonable amount, instead of pretending trust must either be blindly total or nonexistent?

  • The entire underlying system has been replaced. The test suite is written around the current fuzzy edges and past problem areas, not every single behavior of the existing platform.

    "If you can't trust your test suite to catch a hardware floating point arithmetic bug, you shouldn't trust it at all."

    "If you can't trust your test suite to catch a JVM bug, you shouldn't trust it at all."

    "If you can't trust your test suite to catch a recurring memory error, you shouldn't trust it at all."

Does anyone know how exactly Bun is used by Anthropic? Is it a part of Claude Code?

It seems to be used by anthropic as a way to shift the discussion window into it being acceptable that you yolomerge millions of lines.

  • the `claude` binary is essentially a packed copy of bun + the js code, so this will replace the native runtime part of claude code.