Rewrite Bun in Rust has been merged

1 day ago (github.com)

When announcements say that rewrite took 1 week, I wonder how much time went into preparing this file with very detailed instructions on mapping Zig to Rust idioms: https://github.com/oven-sh/bun/commit/46d3bc29f270fa881dd573...

On top of that, if you look at 'Pointers & ownership' and 'Collections' sections, the Bun codebase is already prepared, using internal smart pointer types that map 1-to-1 to Rust equivalents, and `bun_collections` Rust crate already exists.

This makes an impression, that rewrite was prepared long time ago and was Bun team proposition to Anthropic during the acquisition deal.

  • Yeah I don’t know what’s true when reading about LLMs. Same with comments here on hacker news. So much money on the line it’s clear they would seed communities with marketing shills (and some people are just tribal).

    Same since they own Bun, they have every incentive to make this seem easier than it was.

    • This is a huge problem regarding the specifics of ai. Tech is becoming very adversarial as a worker, since marketing and technical information are blurring lines more and more.

      3 replies →

    • I'm not sure it matters what anyone claims. It's easy to use and experience its abilities and limitations.

    • The truth lies somewhere in the middle.

      Context: 20 years coding, 13-ish of which professional. Using LLMs for side projects, including a very big one. Also using them to help manage our home server.

      I’ve used 20-ish agents with OpenRouter, Google’s own AGY, Mistral’s Vibe, and Claude Code. The good ones are good and can be very helpful with spec’ing work or handling repetitive tasks. Except for Opus 4.6, none of them produce TypeScript that I’d be super proud of; but they write stuff that’s good enough compared to what I’ve seen in the industry. It’s always some mix of spaghetti and shortcuts. That’s fine, you steer the model and tighten your specs and tests.

      Anyone claiming ‘Model X can one-shot’ an app is delusional about maintainability, deployment, all the little things that grease the wheels. Anyone claiming ‘LLMs are useless’ is probably not being impartial. That’s it.

      And any company claiming AI is awesome at everything and will replace everyone? Yeah, they’re lying, at least about their capabilities as of right now.

  • Similar highly crafted success stories getting passed around within some big tech firms right now.

    We got told that someone wrote a huge, sophisticated driver in Rust in a single day using Claude Code. This is being pushed as a case of AI doing something that we encounter on a regular basis, way faster than a human could do it.

    Some ommitted details: Turns out the official spec for this driver is written in C, and the standard has a massive official suite of unit tests.

  • Ignoring things like whether the Rust that was output could be deemed qualitatively good, whether the resulting line count is appropriate, how much the codebase was ready or primed for this kind of exercise going in, and so on, is it fair to say that a 622 line artefact created up front is a relatively small cost for a potential increase in consistency or quality of output when the output is ~1M LoC? It seems like there's a multiplicative power here given how much output there is. Or is that missing a lot of nuance?

    I'd also be interested generally in how much tacit knowledge was needed to come up with these rules and how much iteration on this file was needed, for example how many of the rules here came from a failure case hit as part of iterating on the translation.

    • > I'd also be interested generally in how much tacit knowledge was needed to come up with these rules and how much iteration on this file was needed, for example how many of the rules here came from a failure case hit as part of iterating on the translation.

      I think that's the point the original poster was making. There's basically zero chance this file was just spit out by memory in an afternoon. It was obviously the result of a LOT of pre-planning and back and forth checking over the artifacts that Claude was incorrectly generating for one reason or another. So yeah, an extremely iterative process.

      With rules as fine-grained as these, there was almost certainly many instances where hundreds of files are generated -> one particular file doesn't translate <X> correctly -> add a rule for <X> -> regenerate everything again -> crap, that rule broke a different file because <Y> -> add a rule for <X if Y>, another for <X not Y> -> regenerate everything again[0] -> repeat. The token costs must have been out of this world.

      0: now I'm sure people will say "why would you regenerate a file that generated correctly once? Just mark it off the list and move on." Well, when essentially 99.9999% of your codebase is generated artifacts, the tiny fraction that is actually human-understandable is now the spec, the source of truth for everything. It HAS to be able to essentially redo the entire process if you expect any level of maintainability going forward.

    • I would guess it was a for ... each loop. They likely wrote a bunch of skills. The foor loop went through each file and generated a complimentary file, then had another process integrate/validate.

      I doubt the entire process was a single week, just whatever harness they specially prepared for the work.

      1 reply →

  • > using internal smart pointer types that map 1-to-1 to Rust equivalents

    Smart pointers weren't invented by Rust. If you write code in other languages with pointers you mentally model the same types already.

    > and `bun_collections` Rust crate already exists.

    This is wrong. It's part of the PR in the codebase. It did not previously exist.

    • Agree, after closer look smart pointer types are pretty standard and collections were indeed a part of migration.

      But still, in order to prepare those detailed and very project-specific instructions you need to iterate on trying to convert the files from this specific codebase.

  • Its like that hackathon winner project that everyone knows wasn't ideated or built there. True to the law, not to the spirit.

  • Based on the use of "≥" and em-dashes, I'd say this markdown file was written with or by an LLM.

  • Yes, there is exaggeration going on.

    Nonetheless, it’s a fact it would have taken much longer without LLMs, I’d say all possible.

    I find this is a valid success story if you can look past the embellishments. More than that, it’s really cool, actually.

  • Given zig instability (as in frequent breaking changes), it wouldn't surprise me if they intentionally design bun from the start in a way to make it easier to migrate to rust if needed.

  • It's the same thing with their gcc stunt.

    It would be _so_ easy to alleviate any doubt from this and hype up the IPO even more. They just need start a separate repo with all the hidden work they needed to do to prod the AI along, and let everyone replicate the results. After all, isn't that what all their customers are trying to achieve? A million lines of usable code in "7" days? Never mind the fact that it will also boost Anthropic's usage metrics as everyone tries to replicate it into their workflows.

    If it was beautiful, they would've started with a blog post about this with links and instructions. Perhaps I will still be proven wrong and a blog post is being written as I type this.

    • Which part of a Zig to Rust port (working, passing tests) of a quite large codebase in a little over a week is not worthy of hype do you reckon? That they didn't one-shot it? What could possibly make it impressive if not the sheer velocity of the thing? That's a months or years long operation for a human. There's a reason porting large programs to new languages was vanishingly rare throughout most of computing history, and there's a reason people are suddenly doing it almost on a whim, now.

  • That makes the Bun owner's claim, just a week ago in this site, even more dubious when he came on here and said this code was just an experiment and likely to be thrown away.

    • I don't think the owner lied, but rather that the entirely speculative comment on here is obviously wrong.

      It says here in the comments that it's mistaken about the supposed previous existence of the crate.

  • Seems like Zig Bun had 3 pointer types that map neatly to existing Rust pointer types. The other 7-8 needed types to be created.

    Is that the conspiracy?

    bun_collections doesn't look much older than the porting guide.

Still writing the blog post about this. Will share more details.

For where this is coming from, skim the bugfixes in the Bun v1.3.14 and earlier release notes. Rust won’t catch all of these - leaks from holding references too long and anything that re-enters across the JS boundary are still on us. But a large % of that list is use-after-free, double-free, and forgot-to-free-on-error-path, which become compile errors or automatic cleanup.

  • You, nine days ago[0]:

    > I work on Bun and this is my branch

    > This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.

    Maybe... it wasn't such an overreaction?

    [0]: https://news.ycombinator.com/item?id=48019226

    • I'm really out the loop here so maybe you can help answer me a question - why is HN unhappy about this rewrite? why are people writing here almost as if they feel betrayed by Bun being rewritten from Zig into Rust?

      I genuinely don't get it. I've been following this Bun stuff a bit but I don't understand where the HN sentiment is coming from.

      15 replies →

    • You're not alone in voicing this, another (now dead) comment did it earlier too with a bit more of an emotional response (https://news.ycombinator.com/item?id=48134229).

      Still, do you folks never do something to see how you feel about something, then chose to go one way or another? I'm not sure why it's so hard to see that it was an overreaction at the time, because it was an experiment, then at one point it stopped being an experiment and now they've chosen to actually run with it?

      Is this not a common occurrence for other people? Personally I change my mind all the time, especially based on new evidence, which usually experiments like this surface, I'm not sure I understand the whole "You said X some days ago" outrage that seems to cause people's reaction here.

      41 replies →

    • I was down voted pretty hard for calling this comment out. I would say I'm surprised but honestly? Completely predictable.

  • Looking forward to the blog post. Do you plan to run both the Zig and Rust binaries side-by-side across a wide range of real applications (potentially shadowing in production) to weed out bugs?

    • They have a PR (~~closed by GitHub bot as AI slop, ironically~~ this was wrong info, it was apparently closed by Jarred himself as it missed a conversion or some 20 Zig files to Rust) to remove the Zig code.

      I guess the answer is "no".

  • I'm curious how much this would cost a paying customer. Can you please give us an estimate?

    • Great question and I'd love the answer.

      I bet the answer is industry changing even if the token cost is high.

      This work was impossibly expensive in terms of people hours and time before. Architectural planning, engineering alignment and politics, phased engineering that gets interrupted by changing priorities.

      That it's possible to do R&D, the port, and get 99.X test passing in less than 2 weeks is so much more efficient for the humans.

  • I bet the blog post will make no mention of pressure from anthropic to do this and instead will celebrate the fact that “it passes all tests”, of course omitting how many tests were modified to forcibly pass

    • Do you have any proof Anthropic pushed for this? Because the author has been clear this was an experiment they wanted to test out on their own, only when it seemed to be in a working state did they consider, okay maybe this might work for us.

      3 replies →

    • Was there pressure to do this, or freedom to do this? If I had an unlimited token budget I'd probably try all sorts of crazy things. Also you (one) can read the tests and see that they weren't modified to forcibly pass.

  • Any plans to issue a CVE for this HTTP request smuggling attack vector fixed in the latest bun release?

    https://github.com/oven-sh/bun/issues/29732

  • Did you (or will you) implement some kind of e2e (fuzzy?) testing comparing the two binaries? Do you have particular plans regarding the release of this (for ex to not break users workflows or things like that)?

  • > The codebase is otherwise largely the same. The same architecture, the same data structures.

    How can you possibly verify this, if a 1M line patch was written over 7 days? It's at best a hunch (vibes?), and at worst a lie.

  • I can hope this will lead to little to no memory issues in using bun as a web server

    • I'd be surprised if they could eliminate memory issues completely, especially considering the amount of `unsafe` the codebase seems to contain.

          git rev-parse HEAD && ag "unsafe" src | wc -l
          19d8ade2c6c1f0eeae50bd9d7f2a4bf4a2551557
          14865

      8 replies →

  • Does that mean that from now your coding agents working on the Bun codebase are themselves running on that rust-Bun runtime?

  • So a question you should answer: Couldn't you just train the super SOTA model on fixing those issues instead of porting it?

  • [flagged]

    • Coming on a bit strong no? Isn't it possible one could do an experiment almost two weeks ago, then by today the experiment concluded and now you've made a choice?

      Did you think "experiment" meant 100% this will be thrown away? Wouldn't make much sense to experiment with something you know you'll throw away, unless you have some specific reason for it.

    $ rg 'unsafe [{]' src/ | wc -l
    10428
    $ rg 'unsafe [{]' src/ -l | wc -l
    736
    
    Language        Files     Lines      Code  Comments    Blanks
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
    Rust             1443    929213    732281    116293     80639
    Zig              1298    711112    574563     59118     77431
    TypeScript       2604    654684    510464     82254     61966
    JavaScript       4370    364928    293211     36108     35609
    C                 111    305123    205875     79077     20171
    C++               586    262475    217111     19004     26360
    C Header          779    100979     57715     29459     13805

  • Cool you can just search specifically for potentially unsafe code in Rust. How do you search for unsafe code in Zig? Or do you just have to assume it's everywhere?

    • If half of your code is unsafe then unless you exercise tremendous discipline (Claude basically doesn't) you will just end up with a big ball of unsafe, peppered with hallucinations in whatever random documentary comments Claude decided to make. I doubt they enforced the confinement of unsafe to a specific architectural layer or anything like that.

      5 replies →

    • There is a qualitative difference between unsafe Rust and Zig as far as I know.

    • if half of your files in a million line codebase are unsafe that doesn't tell you much any more. Presumably the point of a Rust rewrite is that you actually make use of Rust's safety features in a coherent way.

      But given the whole "let AI rewrite this for me" stunt nature of this project that was not going to happen because that would require well, actual thinking and a re-design. So now you have Zig disguised as Rust and a line-by-line port because the semantics of idiomatic Rust don't map on the semantics of Zig.

      16 replies →

    • It's worth pointing out that "unsafe" in rust is not a very sound concept - it's not like a monad or "function colour" whereby the compiler can say "this code ultimately calls unsafe". It's more like a comment on steroids; you call unsafe in a function, write a comment about it, and no caller of that function would have any idea that it's calling unsafe code.

      1 reply →

  • The half of the files contain 'unsafe' keyword? It doesn't seem as a good rewrite. What is the point of rewrite into Rust, if ~half of your code is still unsafe?

    • Bun is fundamentally a boundary-heavy system and it also rolls its own version of a lot of things that people typically use via libraries, where unsafe is hidden. (no async, memory arenas, etc). It also uses FFI heavily which requires unsafe.

      It also looks like the top 2 maintainers are currently actively working on getting the amount of unsafe down and it's going down quickly.

      1 reply →

    • > What is the point of rewrite

      To win a news cycle.

      For the forseeable future, the AI market competition is not about which product can provide the most valuable utility to users. It's about which product can be holding the protective aura of social media and investment zeitgeist while competitors buckle under the strain from unfulfilled hype and over-leveraging.

      Utility, engineering, efficiency... these are all menial details for the winners to reluctantly iron out in 2035.

      1 reply →

    • unsafe just means that you take responsibility for the safety of the code contained within. Calling into non-Rust libraries has to be wrapped in unsafe. Making syscalls has to be wrapped in unsafe.

      Bun needs to interact with FFI code. This gets wrapped in unsafe blocks.

      There are many places where a JavaScript interpreter and library would need to make unsafe calls and operations.

      It doesn't literally mean the code is unsafe. It means the code contained within is not something that can be checked by the compiler, so the writer takes responsibility for it.

      There are many low-level data munging and other benign operations that a human can demonstrate are safe, but need to be wrapped in safe because they do things outside of what the compiler can check.

      12 replies →

    • Some correct me if I'm wrong, but it's unlikely they wrote this first initial version of Rust and will leave it unchanged as-is. What's there now is a step in a long process, not the final destination.

    • Rust has a ton of other features besides safe. Like exhaustive checking of enum variants and the ability to avoid using null with option and result.

      3 replies →

    • that sounds like a starting point and an honest translation. If it was originally unsafe and suddenly becomes safe immediately after the rewrite, it would mean they break existing behaviors

  • Better to know where memory bugs may happen than them being everywhere. Also, bun team are looking it to reduce it by a large margin. Since it was a line by line port, there is a good space for improvement. By first rust release, a significant number of it should be resolved.

    • Wouldn't it be better to port more idiomatically? Otherwise, you've done nothing but port all the existing bugs while creating new ones.

      1 reply →

Remember the top comment to this Hacker News thread? https://news.ycombinator.com/item?id=48016880 "This is an overreaction." "302 comments about code that does not work." "We haven’t committed to rewriting." "There’s a very high chance all this code gets thrown out completely."

Well. That was about a week ago.

> +1009257 -4024

Bun is now over 1M lines of Rust code.

This is approaching the size of the Rust compiler itself; except that BunJs is mostly a JavaScript interpreter wrapper + a reimplementation of the NodeJS library (Rust STD wrapper).

I think BunJS is becoming the canary for software complexity management in the LLM era.

  • > mostly a JavaScript interpreter wrapper

    Not accurate. Bun is a batteries-included JavaScript & CSS transpiler (parser), minifier, bundler, npm-like package manager, Jest-like test runner, as well as runtime APIs like a builtin Postgres, MySQL and Redis client. This is naturally a ton of code.

  • Bun is not a JavaScript interpreter, it's "only" a reimplementation of the NodeJS library + various other libraries. Bun uses JavaScriptCore as its JS engine. So Bun itself does (or at least should do) no JavaScript parsing, interpreting or JITing.

    EDIT: I misread, sorry! You said "JavaScript interpreter wrapper", which is correct.

  • I'm not sure if it's just the leading '+' or if there are other factors for phone number detection on iOS, but on mobile the line count changes are underlined and I can tap it to start a call, which, if it is because of the diff size, is something I find pretty amusing.

    • Apple has had a feature called Apple Data Detectors since the 90's that looks for different patterns in text and allows you to perform actions on them.

      So if the text includes a phone number, email address, flight number, package tracking number, street address or other pattern in the data it is underlined and allows you to perform one or more actions.

      The patterns it looks for and actions it takes are extensible by developers.

      If you don't care for it, you can turn it off.

    • > +1009257 -4024

          +1 (009) 257-4024
      
      

      I think it just lines up with the typical size of a phone number and the '-' is interpreted as a separator. Just a simple regex probably.

    • The leading “+” is not needed. Numbers with seven digits are automatically hyperlinked (possibly depends on locale).

      123456

      1234567

      12345678

      3 replies →

  • The Bun codebase had a similar number of lines of code before the rewrite.

    There's nothing unusual about a rewrite coming in with a similar LOC number.

    • I think the unusual thing is that it was written in a week. I highly doubt that they read and understood all 1M lines. But if it works and people use it, what does that mean for software? Should we still care about the code that’s written? Should we even look? I’ve always thought so, but maybe I’m just biased.

      1 reply →

    • I was going to comment this same thing.

      I don't know enough about what Bun does... But Rust is so insanely complicated, it's hard for me to wrap my head around how Bun is equally complictated.

      2 replies →

  • > I think BunJS is becoming the canary for software complexity management in the LLM era.

    Yeah, Cursor did the same thing, bragging about how many lines of code they managed to produce for a semi-working browser, completely missing the idea where less code is better, not the other way around.

    • I think their point was that the project is complex, with the implicit assumption that the complexity is to a large degree inherent.

      Even if it's mostly accidental, and the code is overengineered slop (which it is), the system being able to decompose a problem and deliver something is impressive in terms of stability: it wasn't sucked into rewriting everything from scratch every time it would run into issues, it didn't have infinite subagent recursion with a one-agent-per-line type workflow, etc.

  • you can easy fix this by MAKE NO MISTAKES, DO NOT HALLUCINATE under your zig2rust.md skill agent flow /s

About 9 days ago, Jarred wrote that it was far from certain that this would merge and that it was an overreaction. Ironic.

  • Model open source leadership. Imagine the meltdown if Linus says Linux kernel is not going to be rewritten and then one day wakes up and merges full machine-assisted rewrite in Rust.

  • When you don't own your company any more anything you say can be safely ignored. It was obvious that the token spend will need to be justified.

  • That doesn't mean he was lying. Just that things changed.

    It was uncertain then, and not so uncertain now.

    • > This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.

      I would say it is reasonably clear they had already committed to rewriting at that point.

      The possibility that that particular code might be thrown out was potentially true, but also totally unrelated to the previous statement.

      At the end of the day, whatever, but this feels a heck of a lot like “ah, we didn't mean for this to be public yet” rather than “this is just a random experiment”.

      AI companies love AI stories.

      It is an AI company.

      :p

      [1] - https://news.ycombinator.com/item?id=48016880

I'm actually excited for somebody trying experimenting with automated translation, but I'm afraid this will be lots of backwards compatibility issues.

I started looking at the commits, and it's basically solving the ,,tests not pass'' problem by changing the tests themselves. The real work of making it working on programs that are already deployed will be just starting now.

The only silver lining I see is that the server side JS community for some reason is already used to breakages all the time.

  • The whole idea that my RUNTIME contains code that a single human hasn't looked at does make me uncomfortable, but if this actually works without a ton of issues it's pretty remarkable.

  • > I started looking at the commits, and it's basically solving the ,,tests not pass'' problem by changing the tests themselves

    Not sure if these decisions were made by the LLM, but I've always felt that Claude is more prone to doing "shady stuff" like modifying tests than finding correct solutions to problems.

    GPT/Codex is more honest in this regard.

    • Yeah, Claude is very creative in finding ways of "solving" problems that go against what the user probably intended.

      Having said that, after looking at some of the test changes, they seem to be minor things, like changing timeouts, not changing the actual intended semantics of the tests. But it's too much code to review everything, so I might be completely wrong about that, and in real-world usage, even minor changes like these will cause issues.

  • I doubt it will end up as stable release very soon, but I'm happy to be proven wrong. I have some skepticism about this whole rewrite, Jarred Sumner has enormous internet following and it feels like an ad.

    • How do you wash to define ad, and why does it matter? If I tell you I had lunch, I mean. okay, great. If I tell you I had a delicious Coca-Cola with my lunch, sure. If I happen to work at Coca-Cola, does that now become an ad? And what level does it become an issue? And I what is the issue?

      1 reply →

  • > solving the ,,tests not pass'' problem by changing the tests themselves

    https://github.com/oven-sh/bun/pull/30412/changes/68a34bf8ed...

    This is great! Just add a random sleep(1) to a test, don't worry about it, it's going to be fine!

    • On the other hand, the sleep fits better to the test description, "should allow reading stdout after a few milliseconds". Even if 1 != 'a few'. It's possible the part of the commit reverted here, https://github.com/oven-sh/bun/commit/a42bf70139980c4d13cc55..., defeated the purpose of the test by removing the sleep. I don't think adding the sleep back is an example of AI cheating.

      Strange test though either way.

    • To be fair the commit message `revert proc.exited change in spawn.test.ts` suggests the sleep was there originally.

  • I wish I could take a look through the tests to see if anything substantial actually changed, but I can't even get github to load the diffs for me.

  • > I started looking at the commits, and it's basically solving the ,,tests not pass'' problem by changing the tests themselves. The real work of making it working on programs that are already deployed will be just starting now.

    Wow, This is definitely quite something for sure.

    Can jarred comment about if he has read the commits or not too or respond to your comment, this has basically made me lose the small faith I had in what bun is doing if it turns out to be correct.

    • It's OK, we'll see how it goes. He and Antropic are giving it us for free, and nowdays just forking the old version is easy if a project needs that. Even maintenance is much easier using LLMs.

      I'm happy it's not a project I'm depending on, but a large enough project had to try this at some point so that we all can learn from how it goes.

      I think this is why Antropic bought bun, so that they can sell big code translation as a feature for all the banks with COBOL code that they want to get rid of for a long time.

      Still, those banks / enterprises won't appreciate the number of unit test changes.

      And I agree with another comment that Codex xhigh is much better for these kinds of tasks, but still hard on this kind of scale.

    • Jared has commented on this elsewhere in the thread, basically claiming the parent you replied to is outright lying: it has removed no tests and has not meaningfully changed annotations to reduce coverage of effectiveness. It added additional tests and made a few changes to hard coded values due to differences in, as an example, how LLVM and Zig handle stack frames.

      The MR is right there, linked at the top of this page. You can check who is telling the truth.

      That said, I don't know how anyone is actually claiming to have done that. All day, the size of the MR makes the diff take too long to load and GitHub dies. I'll have to pull it later to check myself.

  • > it's basically solving the ,,tests not pass'' problem by changing the tests themselves.

    False.

    0 test files were deleted. 0 pre-existing tests were skipped, todo’d, or had assertions removed. 5 new tests were added in test.skip/test.todo state to track known not-yet-fixed bugs in the port that lacked test coverage before.

    The merge changed 28 test files in total.

    +1,312 lines

    −141 lines

    Most of that +1,312 is new tests.

    The depth-of-recursion tests for TOML/JSONC parsers went from 25_000 -> 200_000 because Rust’s smaller stack frames (LLVM lifetime annotations let the optimizer reuse stack slots) mean 25k levels no longer reaches the 18 MB stack on Windows.

    • We're keeping this honest and chill, no worries.

      What is "most of that "?

      Why did you feel the need to produce so much detail about a single category of tests?

    • That's great!

      It's too bad you haven't structured the commits and pull requests a bit differently so that it's easier to review the exact changes, but I hope it goes well.

      For example doing the test refactorings in a first pull request, and using something like test.xfail that is first fails then after the merge succeeds (but the test code itself doesn't change).

      Also I have seen some tests getting stricter, which is again not a problem, but separating to a different pull request would have improved the reviewability significantly for a runtime that many people and companies depend on.

      I'm sorry you were downvoted by HN and your comment got ,,dead'', that's not the way to review things.

  • in tsz[0] 100% of tests pass yet I have a ton of bugs. I don't think any software out there is fully tested really. I'm experimenting this this idea as well. So far learned a ton.

    I'm convinced the future of writing code is heavily LLM assisted

    [0] https://tsz.dev

Wow. This is going to be interesting to follow. There's absolutely no way any of this code was reviewed, but maybe we're in a post-human world now where you can trust the models to write and review the code. This is like Gastown but on a higher profile project. Will be fascinating to see how this project is able to add new features going forward (or even _if_ it will be able to).

Does anyone know how exactly Bun is used by Anthropic? Is it a part of Claude Code? I'm more than slightly worried about using Bun going forward myself, but I'm not sure to what extent that applies to using Claude as well.

  • > you can trust the models to write and review the code

    You definitely cannot!

    • Reminds me of going on linkedin and seeing all these sales and product people who are talking big game about engineering now. Well yeah they are definitely producing something but not sure I'd call it "engineering."

    • You can trust them to flag some things during review that may or may not be relevant. But just like with human review and unit testing, you cannot guarantee the absence of bugs after an LLM code review. It's just another set of (virtual) eyeballs.

      2 replies →

  • It passed all the tests.

    If you can't trust your test suite to catch an automatic language translation you shouldn't trust it at all. :)

    • Tests can only prove the presence of bugs, but not their absence. If the AI can access the tests, it can easily make them pass by just adding additional if statements. It doesn't mean the code is actually correct.

    • What if we only trusted the test suite a reasonable amount, instead of pretending trust must either be blindly total or nonexistent?

    • The entire underlying system has been replaced. The test suite is written around the current fuzzy edges and past problem areas, not every single behavior of the existing platform.

      "If you can't trust your test suite to catch a hardware floating point arithmetic bug, you shouldn't trust it at all."

      "If you can't trust your test suite to catch a JVM bug, you shouldn't trust it at all."

      "If you can't trust your test suite to catch a recurring memory error, you shouldn't trust it at all."

    • It also modified many of the tests to make them pass in mischievous ways. You can't trust a test suite to catch regressions if the new version doesn't use the same test suite.

      7 replies →

  • Does anyone know how exactly Bun is used by Anthropic? Is it a part of Claude Code?

    It seems to be used by anthropic as a way to shift the discussion window into it being acceptable that you yolomerge millions of lines.

    • the `claude` binary is essentially a packed copy of bun + the js code, so this will replace the native runtime part of claude code.

I will move the handful of my projects that use Bun to something else. I don't trust governance that permits this kind of reckless change.

Regardless the outcome, this is such a disrespectful move towards the huge amount of contributors who invested time and effort to learn the project and make it better. I hope the zig/dev community forks the project and continues the development. I'd rather use the fork than this project that has sacrificed its contributors for marketing purposes.

  • How is that different (in this sense) to any "slower" rewrites or other significant changes?

    • The difference is exactly the speed. Slowly transitioning from one thing to another gives the opportunity to contributors to get involved in the process.

      2 replies →

  • > this is such a disrespectful move towards the huge amount of contributors who invested time and effort to learn the project and make it better.

    What? How?

    You contribute to projects run by others with the understanding that others run the project, is this not the default assumption others have too when contributing to FOSS?

    Is it disrespectful if my proposed feature was merged, but then later was removed because the maintainer just didn't want the feature anymore? In my mind, pretty clear it wouldn't, I'm only a contributor after all, not the maintainer or the person running the project.

    • > Is it disrespectful if my proposed feature was merged, but then later was removed because the maintainer just didn't want the feature anymore?

      No, the big difference is that the described scenario does not require getting familiar with a new 1M LoC codebase written in a different language to be able to continue contributing to the project.

      8 replies →

As an educational thread, see this one from a week ago where Jarred again deflects from a merge decision and legions of foot soldiers attack anyone who predicted the impending merge:

https://news.ycombinator.com/item?id=48073680

Didn't age well, did it?

  • From "This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely." and what seems to amount to some experimental curiosity -- to merging the whole thing in 10 days!? This seems really crazy.

  • It'll never cease to amaze me how many bootlickers are out of there that don't really care which boot to lick.

Love seeing the tests themselves getting modified, with random `sleep(1)` thrown around in a few of them. This bodes well, I pray some idiot at some large AI co actually ends up using this garbage in prod

  • Claude Code uses Bun as its runtime.

    If this has been merged, I expect that Bun-rust is good enough to power Anthropic's internal agents to do live testing.

If this goes wrong even in the slightest, the ridicule about a drug dealer getting high on their own supply will be neverending and grim.

  • not enough people are emotionally prepared for if it’s not going wrong even in the slightest

    • It's going to work for the most part. Most people know that. It's a file by file, mostly function by function, conversion from one low level language to another with a very large test suite (with lots of Rust unsafe to work around differences). I've done that for C tools and it's fine, with some obscure edge cases here and there. The challenges are going to be making the new, very ugly, alien codebase idiomatic Rust in future and adding features or debugging the complex issues. I wish the developers luck. They're in for a slog.

      1 reply →

    • I think given the novelty of this, a lot of eyes will be on it, so a lot of issues will be dealt with out of the gate. The problem will be when smaller projects that aren't in the spotlight think it's safe too and then do stuff like this after being encouraged by bun, and for those projects then lots of bugs will just remain unfixed. Basically a nation state adversary's wildest dreams came true today.

      2 replies →

    • It will not go wrong in obvious ways, LLMs are actually not that bad for language translation, and they have big test coverage; any issues will be non obvious. The question will be more long term maintainability, how fast will the whole thing collapse.

    • However, you can never prove that it hasn't gone wrong, because there are so many long-form problems with software (quiet bugs, maintainability issues, etc). This creates FUD.

    • I expect it will be just fine. It's like bragging about getting the words right on a mental health exam. AI was given the answer, it just repeated it back in a slightly different format. Even a stupid human could have done that.

      1 reply →

  • Wasn't looking at leaked Claude Code source already enough for the ridicule?

    • I mean, that's just startup culture shipping half-baked duct-taped "products".

      Reengineering a well-used open source project… that's proper hubris territory, if you do it poorly enough.

      It's outside their "zone of absolute terror", to put it in anime references. Any argument against them while inside their domain is countered by their apparent success; as much as it pains me, the shit code did deliver enough. Not so when they step outside that domain, Bun was delivering before.

  • they are already high on their own supply

    did you read their Mythos paper? they're anthropomorphizing it like crazy. Maybe it's just cheap heat, but if they really believe the LLM is conscious..wew

I'm a pretty reckless programmer, but I would never do it on a project this big... 1m LOC cannot be reviewed in <1 week. Why not put it behind a feature flag, since you're keeping the code anyway (only -4k LOC).

This does not seem thought out, and was fueled by dopamine.

Having just migrated all my teams repos to Bun, I feel… stupid. I was already feeling a little nervous by the time of the acquisition but this is pretty rough.

PR so thick, the page failed to load the first time I opened it, and the comments still continue to fail to load. Absolutely hilarious. Though that may be just GitHub having a normal one, hard to tell these days.

1 009 257 lines added

4024 lines removed

6755 commits

2188 files touched

I haven't the slightest clue how anyone would even remotely hope to review this. I guess by just using even more AI? Or maybe by throwing some über hardcore lint pass onto it? It really seems like more an exercise in risk assessment than code review.

  • The maddening thing is that there's a right way to do this if you have the patience and professionalism to do so. It requires building a bit of scaffolding (feature flags, cross-language calling support, harnesses for shadow testing, etc.), then you ship-of-theseus the codebase incrementally. This is not even incompatible with LLM-assistance, plus it breaks the thing up into smaller, reviewable changes that don't break your diff tool!

    However, doing it the right way takes a bit more time, involves community feedback, and doesn't produce headlines about huge codebases being rewritten by LLMs in just a few days, so ...

  • Not sure there is much of a point in reviewing a port of this size. It has >1000 instances of `unsafe` and uses the same patterns as the zig code according to Jarred. It feels like a vibe-ported version of what the TypeScript team are doing porting from TypeScript > Go with codemods.

  • Humans are no longer maintaining bun. There is no good faith argument that can claim a human understands this rewrite

This kind of frivolous nonsense disqualifies bun from ever being a serious option to me. I'm not building any kind of software used in a professional setting on 1M lines of unreviewed code.

  • Odd take. Bun was not option for me because or Zig. There was no security. Issue tracker has 3000 issues about segfaults. Now I might actually reconsider.

Anthropic buys bun, makes them spend tokens to convert to rust, nobody understands it anymore, locked into ai now

So the geniuses in the datacenter prefer to rewrite the full codebase in another language instead of maintaining and improving its own fork or contributing to make the current language better.

Impressive to rewrite 1MLOC in a week yes, but this is more of a job of a million monkey programmers crammed in a datacenter than a bunch geniuses. And I would know, since I'm a monkey programmer who is in danger now... Or maybe the Zig team is in a greater danger, since their brains hold the genius juice the clankers are missing and they should have it by 2027...

  • > Or maybe the Zig team is in a greater danger, since their brains hold the genius juice the clankers are missing and they should have it by 2027

    Imagine you want to monopolize programming by pushing LLM as an obligatory middle-men. Then people who can program without LLMs are direct threat to your business plan. It's time for us to start hiding. I'm cosidering adding `co-authored by Claude Code` to my hand-written commits and running Claude in useless loops to mock API usage.

  • You seriously think any of them gives a shit about any of this? They're part of Anthropic now, making money is the only goal.

  • No matter how I look at this, it's churn for the sake of churn.

    Even if the translation was free and into ideal idiomatic Rust (and it's obviously not - it's now Zig with Rust syntax) then this would be churn for the sake of churn.

    At some project scale the language really stops being any limiting factor, and you're instead mostly dealing with working past past architectural decisions, integration of large changes, deep optimization, steering the codebase into alignment with project roadmaps and long-term goals, regression testing as features get introduced, maintenance of multiple release trains... Experienced software engineers mostly stop caring about simple things like the programming language choice at that point, because whatever issues come from that choice have already been resolved. What matters is stability, careful orchestration of large changes and a stable and comprehensive test suite.

    • > At some project scale the language really stops being any limiting factor

      That's not entirely true. At a certain scale, some languages start becoming increasingly more of a factor. Memory issues in C/C++ codebases, for example. This is pretty well established at this point, which is why there's a push to move away from memory-unsafe languages. Which likely would include Zig, for better or worse.

      3 replies →

    • It's I think not churn for the sake of churn. It's likely encouraged by the fact that Zig itself will not accept AI written code contributions.

      So now imagine your company and project -- written in Zig -- has just been acquired by the world's biggest/second-biggest AI company.

      That company's most successful and popular tool is running on your platform that is written Zig.

      And Zig maintainers want nothing to do with you.

      What kind of pressures, real or imagined, do you think that puts on the developers of Bun?

      Honestly, from what I've seen from a distance, actual rigorous software engineering doesn't happen at Anthropic. From what we saw of the Claude Code source, the reliability issues over the last few months, and now this. It's just a bunch of people getting high on their own supply falling all over each other. Quality issues galore and a delirious frenzy.

      FWIW I don't think it's intrinsic to AI. Codex is very well written (in Rust, BTW), fast, and consistent.

    • The "idiomatic Rust" thing rubs me the wrong way. If someone writes Rust that compiles and works, that's Rust. full stop. Telling people it doesn't count until it's "idiomatic" is just gatekeeping. It quietly says you're not a real Rust dev until you've put in years and absorbed all the unwritten rules, which shuts out exactly the people who are still learning. Everyone writes "non-idiomatic" code when they start. That's not a failure, that's how learning works. Even if being written by LLMs, the devs still will need to improve their knowledge to keep the codebase.

      4 replies →

  • > or contributing to make the current language better

    The people making Zig have said they don't want that.

    • They also said that:

      > Code origin was not even a factor [0]

      > AI is entirely besides the point here. The changes in this Zig fork are not desirable to upstream for several reasons. [1]

      So my view here is that besides AI policies to filter low value contributions and "contributor poker" [2] to attract contributors vs just contributions, a well thought of genious implementation aligned with the Zig roadmap instead of the "hacky implementation for a flashy headline" [1] would have made the cut.

      But then again this entertaining drama will sadly get deprecated by mid 2027 as the datacenters will be churning out their own opusrust and clankzig.

      [0] https://kristoff.it/blog/contributor-poker-and-ai/

Say what you want, but for people building products on Bun, this is bad news for the foreseeable future.

  • I am genuinely speechless.

    I don't understand the rationale behind how any project, especially of this magnitude, can seriously build something stable this way.

    My consolation - and it could be pure cope - is that at least I am in the same boat as a huge company like Anthropic, and they surely wouldn't be stupid enough to also build their cli tools around something that they saw as risky.

    feelsbadman.

  • I guess that the next release of Claude Code will use that runtime.

    No later than next week.

  • This is bad for anyone building on Zig.

    • Cue the clueless CEOs of zig shops (I don't know many, but still):

      "Rust is faster and safer! Port it! If you don't do it, I'll do it myself, because AI can do everything a programmer can, including the stuff you don't want to do. Ship it!"

      1 reply →

    • Why would it be? There is projects like Roc that did the opposite, they went from rust to zig, as they (had to) use lots of unsafe rust. And before you ask, no it was not an AI generated rewrite.

      1 reply →

I'm confused. Never heard of Bun until a few days ago here on HN. It's some nodejs wrapper thingy, written in Zig, and someone decided to use LLM to rewrite it in Rust. Is this a big deal? Who is even using this software? Why is this big?

  • Bun isn't a node.js wrapper. It's an alternative to node.js that sits at roughly the same spot in the stack.

    Node.js is a distribution of the V8 JavaScript engine (the thing that executes JavaScript in the Chrome browser), along with a bunch of standard library code written mostly in C++.

    Bun is a distribution of the JavaScriptCore engine (the thing that executes JavaScript in the Safari browser), along with a bunch of standard library code written mostly in Zig (and now Rust). Bun's standard library is in many cases compatible with or inspired by the Node.js standard library, but with some changes for convenience and performance.

    • Answering “who is even using this software” is unfortunately missing in your answer. I am honestly curious. I’ve never seen it “in the wild” (in job descriptions, hearing from past colleagues, meetups etc). Only place I heard about it is HN and Twitter.

      4 replies →

  • Rust vs Zig "wars" etc.

    Also at some point Bun was acquired by Anthropic. And some people feared that this will greatly influence Bun's development.

    • I don't think Rust vs. Zig has anything to do with why people are talking about this. It is a large piece of "real software" that underwent a full language transition in ~1 week using LLMs. That is a big deal regardless of the language and will be a case study regardless of how it turns out.

      2 replies →

  • I think relatively few people are probably running Bun in production, but as a dependency management system and bundler for the JavaScript ecosystem, it's similar to `uv` from the Python ecosystem in how much faster it is compared to the most popular alternatives so it's fairly popular in that space.

  • Bun is not a node.js wrapper, it is a node.js alternative. It had non-trivial adoption, tens of thousands of stars on github for whatever that's worth (before the AI spam took over stars). It was then purchased by Anthropic and now we're witnessing open source software that people used be sacrificed to the altar of LLM marketing hype.

  • Not mature enough for everyone to be using it yet, but it may dominate the space down the line. They compete with Deno.

  • I've never done any JavaScript development of any kind and had never heard of this either. I thought it was a package manager at first, but apparently it's an entire runtime.

    My question is, if it's this trivial to rewrite Zig to Rust, and trivial in general to write Rust at all, why not just use Rust for your server side code in the first place? What's the value of continuing to use JavaScript and putting so much effort into the runtime?

  • Bun has a lot of buzz as 'the next big thing' in the JS ecosystem, and was recently purchased by Anthropic. So it's kind of in the zeitgeist.

  • >Is this a big deal? Who is even using this software? Why is this big?

    Let's see. $10T in market cap, a significant chunk of everyone's assets and retirement funds, are currently dedicated to AI build out because of the potential for AI like Claude Code, which is recently doing $3b in revenue, and built completely on Bun.

    If Bun is able to successfully vibe code a complete language shift in this short of time, it much more concretely validates the potential of vibe coding / AI for the entire industry.

So many of the code comments on the new port concern only discussion on how it was ported, usually referring reader to the original zig implementation.

So now I'd basically be reading 2x the amount of comments and code to understand _why_ anything is happening.

Software is only as good as the end result; it doesn't matter how we get there.

There is reason to be suspicious of LLMs, but people should stop getting so wrought up over _how_ the Bun team writes their software, until they have complaints over the software itself.

Just let the team do their thing. You're free to reject the end result.

  • I agree, if the code gets tested endlessly, and audited, and nobody, not even the LLM can find major jarring issues with it, it compiles, builds, and works as expected, isn't degraded in any way, I don't think I care how you built the "new" rendition of the software.

If LLMs can achieve this level of task in 9 days, why do we even need Bun in the first place? Shouldn't we just write our apps in Rust and not even deal with JS?

  • Why even rust at the first place? I dont see why we can't go straight from natural language -> Claude -> HTML/JS/CSS bundle. Instead of writing webpage, one can just write prompt for each page and serve it with claude.cgi

    • And if you inject information about the user into the context, everyone can have their own personalized version and we'll turn the internet into the tower of babel where no two people see or experience the same thing.

    • >> I dont see why we can't go straight from natural language -> Claude -> HTML/JS/CSS bundle.

      Or we could just rewrite everything in assembly, becauase thats fast. Well, Claude can do that. (/s ??)

  • I find that LLMs are quite good at translating code. If you are writing something from scratch you have the burden of preparing something for the LLMs to "translate" from, i.e. prompt or specifications – next best thing to actual source code.

    Defining specifications with the level of detail needed to build applications exactly as intended is not as trivial as it may seem.

Honest question, how many of the leaks and crashes can be attributed to zig the language vs possibly (maybe, we don't know) a loosey-goosey, slot machine approach to development heavily reliant on AI? Will the inherent leaks and crashes be fixed, purely by dint of porting to Rust?

Given Anthropic's existing track record of producing terrible hallucinated inaccurate documentation in Claude Code, I'm very curious how Bun will handle this as it continues development. Anthropic probably doesn't care about Bun's external compatibility as long as it runs Claude Code. Will Bun be eventually become "the JavaScript flavor that Claude Code uses"? Will they even bother updating external documentation as it changes? Docs currently live at https://bun.com/reference, but I don't know how much of this is separately maintained documentation versus JSDoc-style generated documentation.

If the bun team is around I would be interested to get their opinion on this: in the old time migrating a 1M codebase from one language to another meant you would pretty much become an expert in the target language. The output of the work is team experience/knowledge + the actual rewrite. With that Bun rewrite do you feel that the Bun team learned something other than “Claude can rewrite a very large codebase in no time”, which is impressive in itself. Is the output only the rewrite, or did you learn something along the way? And how do you feel about your answer? Not a snark question, like a lot of others I’m myself trying to understand how I feel about how our profession is/has been changing.

  • I used to think software was inherently valuable.

    Then I decided that software is of limited value without a team to maintain it. Not necessarily because they fix it, but because they represent a bunch of humans who collectively understand it and therefore give it more possibilities.

    And now this. I'm not sure what to make of it.

I think one of the things I had forgotten about but sheds some more light in my mind about how this was done is that anthropic bought bun.

The change of tone with the author in the capabilities of Claude. The strategy of merging everything at once instead of a more slow, careful cutover. The “single” author story that every company loves to put forth.

By reading this thread I've learned that, apparently, you are not allowed to rewrite a large piece of software backed by a large test suite in another language within two weeks otherwise you are a witch and need to be burned on a stake. You are also not allowed to move from the PoC phase to lets-do-it phase within a couple of days without being called names. Why are we concerned with speed all of a sudden? Are we in the "people will literally die if a car moved faster than 25 mph" era of software engineering? Let them do whatever they want, they've shown the will to move on from wrong decisions, they will do it again if the Rust port fails to deliver and the whole industry gets to learn from it, whatever "it" might become.

  • I can't ignore how much this sounds like Stockton Rush.

    > "Apparently if you build a submersible with carbon fiber you are a witch and need to be burned on a stake. But look we're making reliable trips down to the Titanic with no problems."

    Realistically, this is a forum of experienced engineers watching a company make some extremely questionable but very flashy engineering decisions. There's going to be a lot of people standing around here going "gee I dunno, that seems questionable".

    Personally, I think the rewrite will largely work - logically, direct translations from one language to another are pretty well within the realm of the few things LLMs should perform extremely well at. But I also think more information will come out showing this was much more bespoke than just prompting an agent to do the translation. This just feels too much like an ad for Anthropic, I think it's likely there was a lot more human involvement and planning than we are being told.

  • That you're only just "learning" that these things are true is a damning admission. And to fix your bad analogy, it's more like "hey maybe we shouldn't be allowing f1 street races through school zones".

    • That analogy might work if this situation is 'reckless behaviour risking children's safety' but in this case it's much closer to 'We made an large, potentially risky change that you can choose to avoid until it's more mature'

      3 replies →

    • This is silly IMHO. They haven’t released a new official Bun version with this code yet. It is a canary release. Give them a chance to figure it out and try it out and see how the limited number of production users of bun as a runtime experience the move. If it succeeds, this will massively accelerate development and they will have much to teach us all about how to safely code 1M lines with AI and merge it in days. If it fails, we will know that AI isn’t ready for that yet

  • The AI polarization is making me sick. Please don't let this style of comment become normalized on HN (and that includes equivalently tribalistic anti-AI comments).

  • > By reading this thread I've learned that, apparently, you are not allowed to rewrite a large piece of software backed by a large test suite in another language within two weeks otherwise you are a witch and need to be burned on a stake.

    You've just learned that you can't do random shit and not get called out? Were you born yesterday?

  • Anyone running bun in production right now has to be sweating lol, this is a ridiculous change for a part of your software stack that really ought to be reliable.

  • Heavy implications on how the future will be formed if things go well with this port. It would prove a lot of people wrong if things go well 3 months down the road.

  • The top comment in the thread explains it pretty well, so please don't pretend it's anything else. The point is they went from "chillax, it's just an experiment" to "we'll switch languages via a 1M line vibecoded patch" in two days. People that rely on this software are understandably fearful, since there is no way this change has been properly revised and tested. Although perhaps the mistake was relying on such software in the first place... And so are contributors too, which have seen essentially the entire codebase replaced in a week.

    • People relying on this software can absolutely choose to stay on current/recent versions until this becomes more mature. My assumption is that the current state allows for public testing, but anyone needing a stable version wouldn't be affected and can choose to not be affected by it.

    • Why "no way"? You're also forgetting extensive test suite?

      Merging it so quickly only odd if you're planning on retaining current community.

      It's not like it was merged and shipped to every single stable distro overnight. That's how things get tested.

Wondering what they will do when rust rejects a pr from them.

  • I guess they vibe-rewrite to C, relying on CCC compiler. Agent loop will be modifying both the project and the compiler until the ends meet.

We should be greatful for this. This is the one public case study on how large-scale llm-driven code generation actually works out.

With node and deno there are reasonable alternatives for everyone who don't want to use bun anymore.

  • > This is the one public case study on how large-scale llm-driven code generation actually works out.

    Is it, really? I can't imagine how much money in tokens was spent to get something like this + Jarred's and the teams salaries to review/manage this.

  • It’s not a public study though. We’re not going to get trust worthy numbers about labor or token cost.

  • The problem is that many negative effects of this kind of thing won't be clear or immediate, so it's not an easy test to make useful. At minimum, this increases the opacity of the box, reducing perceived trustworthiness.

Would be very cool if as a result the different components were published as crates and embeddable in other rust projects!

I just skimmed through the porting guide and based on the number of unsafe blocks, this looks like a fairly straight-forward mechanical translation.

If that is the case, why didn't they just "vibe-code" a Zig->Rust translator and a small Rust/TS/JS/whatever script to orchestrate things. You don't even need pretty printing support because rustfmt exists.

You'll save on a bunch of tokens, probably a lot of time/enegy, the process becomes auditable and (hopefully) deterministic, and if there's a mass bug in the translation, you only have to fix it in one spot.

I hope the Deno lot take the opportunity to capitalise on this

  • This is their chance for sure but it seems they are scaling down, at least their main product Deno Deploy.

    Prev they have presence in 31 regions but now it's down to just 6

    https://docs.deno.com/deploy/classic/regions/

    • Bun's rise over Deno is honestly shocking. One man's project that went viral because of some very misleading benchmarks has evolved into a behemoth in an incredibly short time frame. Some major projects bought into the benchmarks and adopted it for important projects and thus it was thrust into stardom.

      I was naive enough to believe Deno's ascendance was all but guaranteed with Ryan Dahl's name on it and the direly needed security guarantees it offered.

Well, that escalated quickly. I think I first heard rumors of this a week or two ago. That's a very vast turnaround for such massive code-churn. I don't know how to feel about this.

Github is failing to load the 800 comments, naturally. I'll bet they're fun.

  • Too bad modern computers are not capable of processing 800 paragraphs of text. That’s several hundred kilobytes! Maybe the technology will advance thanks to AI…

  • Github actually made my computer lag when there were no comments at all because of the 1 million lines of code added iirc. I could've responded something first but well I wanted to say something meaningful and didn't have anything so I just closed it.

    I had to literally force quit my browser because of how much it lagged iirc.

So how many of their employees are now familiar with the codebase? zero?

  • I mean if you look at the code, it's a pretty faithful rewrite. It seems like being near 1-1 with the original code was prioritized even more so than utilizing Rust's safety features since unsafe Rust is everywhere

first major company to really nuke their main product via AI psychosis?

Rust needs to remove the unsafe keyword to finally fulfill it's destiny as a practical LLM generation target.

This may be the largest AI-generated codebase right now, by a lot. It'll be interesting to see how this plays out.

Frontier AI software development still falls short in the design/architecture department, in my recent experience. Though it's pretty impressive at making "working" code.

This being a fairly direct conversion from one language to another, even keeping the same interfaces across files, means the architecture is already in place.

The detailed test coverage is also very helpful for Claude. But even detailed testing can't cover every edge case.

So my questions are: How well did Claude do on the edge cases? And how maintainable will this codebase be going forward?

  • > This may be the largest AI-generated codebase right now, by a lot.

    I'm sure there's lots of other large scale applications of AI, just not many/any projects that are open source and so high profile - with the changes being done so far.

    Personally, in the past 3 months I've shipped about 2.3M lines of a legacy project migration, though the new codebase is Java + Oracle ADF because of reasons™ and instead of being an interesting codebase, it's more forms heavy and essentially acts as a front end for a large Oracle instance, think more CRUD than application runtime (with an upsetting amount of XML).

    The difference also is that it wasn't migrated by using AI on every file, but rather dumped the DB schema into JSON, and converted the old form contents to a YAML intermediate format that describes what's in the forms and have been iterating ever since of creating code that generates code - basically AI assisted development of a codegen solution + AI assisted sidecars that get merged with the generated code based on markers, when something can't be automated that way and often times also AI controlled browser based testing (since Playwright is in the cards for everything, but not yet).

    Seems to be going pretty okay so far, will probably take months more of iteration and fixes, currently the automated testing is taking a while because let me tell you - not only Oracle ADF is shit, but so is WebLogic, like fuck I'd be so closer to being done if I was allowed to pick Python + HTMX or even Java + Thymeleaf. That's still better than a team spending a year on the migration and getting like 10% of the way there.

    Obviously there's no more details to publicly share, but the overall vibe is clear: as long as you can test any changes, you can iterate faster than without AI - and the code ends up being more readable that colleagues would often write. The problem is that people would squint at the suggestion of 100% test coverage previously so most code is even written in a way that is straight up not testable (and often nothing is decoupled from the framework properly and tests take way too long, both time and resources).

I hope it's obvious why I'm removing Bun dependency in all my projects. Would be great to have a non-affiliated zig-bun fork that focuses on, well, runtime.

That's pretty... brave? Not releasing it in parallel and spending a few months testing it against the old mainline version to surface issues BEFORE a potential merge?

  • Who knows what their release strategy will be. This is still only a canary release. Don’t put your horse before your cart

I wonder what portion of the migration was contributed by Mythos. Surely the Bun team now has access to more powerful models, but could such a migration be done with just Opus 4.7? Nonetheless, nearly 7k commits is impressive.

Turns out "its just an experiment, you all are overreacting" was just a lie to damp criticism.

https://news.ycombinator.com/item?id=48019226

  • Merging a complete rewrite in another language in 9 days seems insane to me. Maybe I'm just too cautious but with something like this I'd split off as a separate binary and get some heavy use customers involved as testers first to see if it causes any unforeseen problems before slowly expanding it out.

    I'd want to be pretty damn confident it won't cause any regressions before sunsetting the original codebase in favor of this one.

    • I don’t think you’re too cautious. Big upgrades and rewrites is somewhat of a „work hobby” of mine and this seems waaay too fast. I don’t know how the Bun canary process works and I guess their test suite is better than typical projects but still… I can’t imagine this working out well without testing it on a variety of big projects for a significant amount of time.

      There’s probably loads(?) of observable behaviors that people rely on, consciously or not. Even _if_ the new thing is 100% spec compliant, it might still be breaking or otherwise problematic for heavy users.

      That said, I’d love to be proven wrong. I use Bun from time to time on small stuff and I enjoy it, so I wish them well (:

    • > too cautious

      No, you are perfectly normal.

      The people who in one week decided to replace the whole codebase for a widely used tool with code no human has seen are the crazy ones.

  • Well I've got egg on my face.

    I am in that post, defending bun.

    I thought for sure the peanut gallery was overreacting. Especially when the concern was absurd - because who would do such an insance thing? Like, at the time I legitimately thought 'no way a project switches over in a few months'. Even as an absurd hypothetical, I couldn't even imagine the prospect of it being done in a matter of days.

    Feeling really confused right now.

    • > Well I've got egg on my face.

      Not at all. Supporting a methodical conversion to Rust seems reasonable. How could you have predicted they'd shotgun it?

    • that’s the advertisement part of this ordeal you’re experiencing.

  • It seems it was an experiment at that moment, and that it went well? I do hope they release it under 2.x though, cannot imagine how a 1M LoC can break in so many ways, especially if what xiphias says is true:

    https://news.ycombinator.com/item?id=48132902

    • If I got magically handed the perfect rust rewrite for a project of this magnitude, it would take way longer than 9 days to merge, because I would need to make sure it's actually good.

      1 reply →

    • > It seems it was an experiment at that moment, and that it went well?

      There’s no way they can know that for sure. A change of this magnitude cannot go from experiment to success in such a short time frame. Even if all the code were 100% correct, you can’t call it a success until it’s battle tested in real world scenarios for a while, and that is impossible without time. Same way you can’t cook properly by throwing food into a vulcano. It’s not just about the temperature.

      Either the “experiment” claim was a lie or they are being irresponsible.

  • Maybe Anthropic decided to push this because of all the attention the experiment got.

    If it works out it’ll be a good study case for marketing.

  • I'm no believer... 9 days later... Lessssssgoooooooo wooooooooo <sunglasses and rave>

  • The experiment might have turned out well, or the author might have spent enough time to bring it to a place they was comfortable.

    Frustration moves mountains, I don't think this rewrite was done lightly.

  • "We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely."

  • You have no idea if it was a lie or not. I routinely have my clanker fleet spend a couple days toiling on some crap that I assume I will throw away, but it turns out pretty awesome, so I keep it.

    It's entirely plausible that when that comment was posted, he doubted it would work well enough to keep.

    (Sensible default for LLM code, btw. But sometimes it works great.)

  • > was just a lie to damp criticism.

    Citation needed. Couldn't it just as easily have been one person being as suspicious of the task as everyone else seemed to be?

  • Surely the mods will be here to remind you that it's against the rules to direct personal attacks towards other community members, to fulminate and brigade.

    Or do those protections only cover whiny open source developers upset about a chat bot writing blogs?

  • Well it was 9 days ago, at the time they were not confident, but maybe the results were insanely good.

    • no matter how good the results are, this kind of rewrites deserves an experimental build to be battle tested by bleeding edge users.

      It takes a lot of rigorous testing automated and manual and by community before such changes are cosnidered permanent.

      One does not simply YOLO a full langugae rewrite without user feedback. it is insane.

      4 replies →

  • Does anything from that comment say that there was 0% chance the experiment wouldn't be merged into main? I see "very high chance all this code gets thrown out completely", which just means the low chance of it not being thrown out has occurred.

    • It doesn't say what will happen, but isn't their comment responding to people who don't like the look of this rewrite, and telling them basically that they don't have to think/worry about it? I definitely read it as 'not yet' and not 'another week or so'.

I don't really understand the point of this. Is it Anthropic showing off well their LLMs work? Was it too difficult to find Zig devs so Bun swapped to Rust? Did Jarred read one too many memes about "rewriting in rust" and took it at face value??

I would imagine that there will be bugs migrating all at once, performance will probably be close to the same, and the maintainers will need to context shift from Zig to Rust. A very confusing decision for sure.

  • Claude is significantly better at rust than zig. Zig is changing all the time. If you check my profile comments I did a quick experiment recently to demonstrate. Essentially, Claude could generate a basic working tcp echo server in a few seconds. For zig, either asking it to do it just with zig, or with specific versions (.15 and .16 because some fundamental language changes necessitate different implementations) failed to produce working code in all three cases and also took magnitudes longer to generate the code.

    Aside from the big marketing play, Claude not being able to easily generate zig code was probably a big motivator - it doesn’t make anthropic look good and it doesn’t fit into how they’re doing things

    Also, you’re assuming that actual traditional maintainers even exist now. Likely it’s a smaller team of people running mythos agents with an unlimited budget and no real need to fully understand the code

  • I suspect one part of the puzzle is that Bun used its own fork of Zig, that had diverged signficantly in design and direction from mainline Zig.

  • Probably some combination of: Anthropic is heavily invested in the Rust ecosystem and they want their core tools to be built on Rust. More Rust developers. More Rust training data so LLMs write better Rust code than Zig code. Advertisement for Claude Code doing major work on a high profile open source project.

Why didn't they ask Claude to remove all of the `unsafe` at the same time??

  • "at the same time" is a recipe for failure with coding agents.

    • It's also a recipe for failure for ports in general. Same goes for the "not idiomatic Rust" comments above — that would be nonsense.

      You want to port it as faithfully as possible to the original, porting it bug-for-bug, quirk-for-quirk. Then, over time, after the port has been proven to be as identical to the original as possible, you can gradually fix those kinds of internals.

      That's why TypeScript's tsgo native port is so good.

      5 replies →

How does the no async work? Would have thought Bun would need that

  • Async presumably happens in the JS runtime that bun calls into. Just need 1 thread to host that

“+1,000,000” changes in a single commit is insane.

  • Why would they do it like this? It makes no real sense to me. At that point it's an entirely different project, with the same functionality.

    If you use Bun in production, does this feel like a well managed upstream?

    I don't use Bun, I don't care that they are using an LLM (though it is impressive that this actually worked), but the project management aspects of this is just wacky.

  • The really interesting thing to do would be to ask the agents to submit the diff as a coherent patchset...

If this means that segfaults become rarer with Bun I might consider using it in production again. As it stands, Bun has been great as an all-in-one TS/JS package manager, build system and test runner but unstable enough that I still want Node running in production backends.

  • Yes. That is the plan.

    See jared comment [0]

    If this helps bun and rust is a better lang for developing bun going forward with the help of claude. Then i think that is just fine.

    I thought rust was making the codebase complex so zig won on speed and dx.

    But with llm and a large codebase it seems like rust gives fewer bug and you can develop it faster & safer.

    https://news.ycombinator.com/reply?id=48133519&goto=threads%...

  • Surely there are no bugs in the 1000000 lines of code that no one has reviewed…

This is a massive marketing for Anthropic. It shows how capable their systems to enterprises customers.

Also this is a perfect task for LLMs. They have the most detailed spec (Production Zig code) ever, and since it as file for file and line for line rewrite, agents were able to quickly complete a massive 1 Millon line rewrite.

We will continue to see more of these in future.

With weird sadness I have to say, we are getting targeted with new kind of marketing. It doesn’t look like it was just technical decision. If anyone was following what was going on X, it was crazy with amount of content about it.

I couldn’t believe before with all fearmongering being marketing, but I am coming to conclusion it is. It’s hard to get any signal over noise in attention economy. They know what they are doing and it’s Deja Vu of crypto, but now we are targets with rage baits, guerilla marketing, buzz

Has he estimated the token cost for this (if he had to pay that is)? I'm curious how much this would cost a paying customer.

For those looking for an alternative no-compilation TypeScript runner, I'm quite satisfied with TSX: https://github.com/privatenumber/tsx

Node.js itself is getting quite close to running TypeScript natively, but they don't support using ES imports of CJS packages and importing with no-extension qualifier.

Huh, it makes sense that Anthropic acquired these guys. This kind of AI nativity in thought directing to action is actually incredibly uncommon.

I wonder, did they consider an approach of vibe-coding a deterministic converter and then running it? This should be much more token efficient.

I wonder if the whole acquisition was done so that they have guinea pigs that can’t say no…

or if I want to be cynical… so that they have a big enough project where they can force gigantic rewrites without considering the outcome from the project’s point of view, all so that they can fuel their marketing strategy.

To be honest, kind of obvious looking back.

This canary will never leave the mine. (unless Anthropic opens their wallet again)

I have full faith, it's the same really smart people that built bun (Jarred and team) that have spearheaded this and are running it. So I have no reason to believe that this was done carelessly.

That said, I'm still shocked and amazed that something this big is possible these days. But as we've seen multiple times now, one of the most important things your codebase can have is a solid test suite.

I will continue to use bun, because at the end of the day, it isn't just the technology, but the talent/people behind the technology that ensures that it will be solid.

And since that hasn't changed, I will still trust bun and its direction.

Also, bun is mostly glue code and sort of "user space" libraries (my words) as Jarred has said on X, most of the underlying runtimes like JavascriptCore, etc weren't rewritten.

So this isn't like 100% of what we think of as bun was rewritten. It's more like the scaffolding and harness.

  • > So I have no reason to believe that this was done carelessly.

    Writing software with an LLM is doing it carelessly.

  • Doesn't doing this in the matter of a week or so, by definition mean it was done carelessly?

    How could it be possible to test such a complicated piece of software, and review such a large amount of code in such a small timeframe? Spoiler, it's not. They're merging slop.

  • yeah but it also made some tests pass by changing the tests. i’m not super familiar so i’ll dig more on weekend but it seems sus pending more review. i’ve had ai do similar things that i caught in manual review. cheating the test is bad.

    • It is welk known that agents can cheat or go off on tangents and not recover. Just recently deleted a bunch of code files that I didn't ask for. The code wasn't even used anywhere.

Maybe a good advert for Claude; but a terrible, terrible advert for the stewardship and governance of the Bun project.

  • This is the most accurate take lol. Claude's done impressive work, but I would absolutely never trust this project in production now.

On one hand I kinda feel validated for having jumped ship on Zig 3+ Years ago[1] and moving everything to Rust[2], with the language simply being too unstable and unsafe in my eyes, despite my love for comptime and people arguing that Bun and Tigerbeetle were proof that it wasn't the languages fault.

But I also feel bad for the Zig project to loose one of their flagship projects, because while I find the project ultimately anachronistic, I know what it's like to pour your sweat, heart and soul into something, and having it replaced within a week is a sobering experience even from afar.

A couple years ago this would have been unthinkable because of how slow legacy codebases and rewrites are.

I wonder if Tigerbeetle will also have problems arguing for their solution now that the other project they can point to for customer assurance is gone. And I wonder if they will follow suit eventually simply due to marketing pressure (after having been bitten by the Zig compiler I was surprised that they were putting their super duper high reliability database on top of it at all, but with another big player using it there was at least some peace of mind for their enterprise customers).

1: https://github.com/triblespace/tribles-zig

2: https://github.com/triblespace/triblespace-rs

  • > I wonder if Tigerbeetle will also have problems arguing for their solution now that the other project they can point to for customer assurance is gone.

    In general, we never like to appeal to popularity (a logical fallacy), but why would you assume here that we would point to Bun specifically (or any project for that matter) [1] as an example of Zig’s quality?

    We prefer to judge Zig’s quality on its own intrinsic merit:

    For example, we subject the language through TigerBeetle to inordinate amounts of fuzzing, perhaps more than any other language (you could say Zig is lucky to have TB’s test suite aimed against it!).

    Literally 1,024 dedicated CPU cores, 24/7.

    Zig holds up remarkably well.

    We also recently pledged $512K to the ZSF, together with Synadia.

    These are the kinds of things we prefer to point to. Not hype, but real end-to-end systems engineering, and long term financial support, regardless of the language we choose to use.

    [1] I picked Zig back in July 2020. At the time, the largest project was River, but already Zig was a phenomenal choice, and the years have only shown that Zig was probably one of the best design decisions in the development of TigerBeetle. It turned out better than I imagined.

    • Correct me if I'm wrong, but the three largest Zig project (by far, with a huge gap between them and the rest of the pack) are Bun, Ghostty, and TigerBeetle.

      A language so niche that it only has 3 major projects is a liability. Now it has 2 major projects, one of which is yours. Even I as a weird language connoisseur would raise an eyebrow at that.

      After switching from Zig to Rust, I felt like the language was helping me improve the correctness of my project, to argue that the fuzzing of your project helps improve the correctness of the language feels backwards and adds to my suspicions.

      We both know that fuzzing is great, but that wether you fuzz with 1000 cores or 1.000.000 cores, at an exponentially growing state space it doesn't make (that much of a) difference (I know that you guys are not doing naive fuzzing, which is extremely cool, but the shape of the problem is still O of evil shaped). Most things you can find with fuzzing are shallow-ish, and if you want to go deeper you need formal verification (for which a strong type system is a good first approximation and I'm not aware of something like Kani in Zig).

      I like TigerBeetle and I still wish you guys all the success in the world, but I can't help and wonder where you could be by now if your language was lifting you up, instead of you having to lift up your language.

      2 replies →

  • While I don’t have personal experience with either project, I feel it is safe to say that Bun and TigerBeetle are not comparable projects: TigerBeetle has a strong focus on testing and correctness, and Bun maybe not so much. IIRC, TB did well in the Jepsen test and had one segfault in a client library. Bun has had quite a few memory safety issues, in fact, the stated motivation for the Rust move is to eliminate those going forward. We shall see how that pans out.

  • I doubt the Zig maintainers will miss the giant PRs from Bun!

    • I'm pretty sure they'll miss the full developer salary that Oven used to sponsor them, which they no longer do. I'd wager one doesn't do a rewrite like that, if you are in great personal standing with the language foundation.

      That same "just don't use it" attitude was what drove me away from Zig btw. I would have been fine in restricting myself to a somewhat stable subset, e.g. if, loop + function calls, but they didn't want to provide any tiered stability guarantees for the language.

      Opinionated is great, no local minima is great, but you have to accept that if you don't want to engage with the needs of your (professional) community then what you do is a hobby project. A very cool hobby project beloved by thousands, but a hobby project.

      4 replies →

Hopefully this means Bun can now support things that were limitations of the Zig libraries like being able to upgrade standard TCP sockets to tcp without closing them.

  $ grep --exclude-dir=.git -r 'unsafe {' | wc -l
  10465

Nice.

  • It's not that weird to end up with this when translating C/Zig/C++ to Rust. A first pass can use unsafe and then when the code is in Rust you can work on reducing the unsafe.

    Trying to eliminate all unsafe as part of the rewrite, whether done by human or LLM, would be making too big of a change in the process of rewriting.

    • > would be making too big of a change in the process of rewriting

      God forbid the already unreviewable -710kloc/+1mloc change get any bigger!

      1 reply →

  • The benefit of using Rust is that you know exactly where the unsafe code is so you can handle it explicitly and deliberately to avoid issues by imposing carefully crafted constraints... oh.

This is a wild experiment! I do think the incentives are heavily weighted to Anthropic for this to go well. I have mixed feelings about how it will go, but it will result in an important outcome…

RIP Bun.

Im feeling like i won the lottery that i picked deno over bun a few years ago for a bigger project.

It's cool how you can just do this now in 2026. I hope it gets cheaper and easier to do with other big projects written in outdated or just not good enough languages

Will be interesting to see how this pans out. Some people will see minor issues as proof that AI is terrible, but honestly if this gets released and is relatively uneventful it just highlights how the art of building software had changed completely in the last few years.

Probably one of the most reckless things I've seen in software. Beyond safety or quality, at the very least: what about all the existing contributors' PRs? Fuck 'em?

I'm curious where this leaves Zig. Bun was the most prominent and biggest project using it. What's left?

  • Zig is still a moving target with big fundamental changes being made to the language from version to version - nowhere near v1. When rust was at this stage of its development you wouldn’t have been able to name many projects either.

  • I though TigerBeetle was the biggest Zig project. Anyway, I am sure there's plenty of projects in Zig out there.

  • It leaves it in the same vibe realm as Nim. A terrific language but probably never hitting mainstream. You're familiar with Nim. ;)

    • Doesn’t seem like it is in the same adoption realm. I wasn’t aware Ghostty was written in Zig and I’m not aware of any Nim project ever reaching the heights of Ghostty (or indeed Bun). Plus as others state, Zig is still pre-1.0.

      Things do look significantly better for Zig adoption-wise than for Nim as far as I can tell.

"And Icarus laughed as he fell, for he knew to fall means to once have soared"

I low key hope a codex shop, perhaps OpenAI themselves, do this too, so we can compare results.

It shows that the choices/philosophy chosen by Zig isn’t the right one and that memory safety is still too boring/hard to handle at scale.

I mean aside from the somewhat...dishonest statements from the people involved, giving false explanations is one thing, but calling people who smelled this "overreacting" gives this a weird taste.

I am neutral on such a rewrite itself, there are pros and cons to the whole "rewrite in Rust" topic. People are making decent arguments. But the way the initiator here reacted makes it seem like the Bun team itself thinks they are doing something weird here...

Guess reviewing any code isn't exactly their thing either anymore? And I guess adjusting the tests themselves is certainly one way to make things pass.

Ultimately this just seems like it was done specifically to make Bun more "ai friendly". Whether it turns out good or not that appears to be the motivation behind it.

I feel like there's an iron triangle here, that involves "is vibe-coded", "is secure" and "accepts bugfixes".

Like, you didn't review that 1M LoC. There's no way to have done so. If we're accepting slop-fest PRs, then nothing stops an attacker from burying a security bug in a slop-fest PR that then gets reviewed. And if I'm the attacker, I'm crafting that security hole to have subtle clues to the security AIs reading it as to why it's "correct" so that your AI review bot goes "oh, yeah, this logic works".

This will burn the little reputation and trust Bun has been able to achieve in the past couple of years.

I guess this is what happens when you only have to respond to your corporate overlords.

I will migrate my Bun projects in production to something else.

It's interesting that the developer who spearheaded the hype of Zig abandoned the engineering without addressing the segfault. They could have also taken the approach of gradually porting from Zig to Rust via FFI. Yes, this is a slop show by the AI lab.

To me the interesting thing to watch about this project is that if it fails and Bun becomes a piece of shit even with all the resources at their disposal, it means LLMs are probably not going to be the revolutionary tech everyone has been hyping it up to be. It’s useful sure, but software engineers aren’t going away. How could anyone interpret this any other way?

I can't imagine doing this to my own code base lol. I suppose only after Anthropic gave me a lot of money I'd say hey fuck it let's find out

I'm old. currently in npm dependency hell on my side project. wtf is bun and will switching to it save me?

  • Unlikely. Not all npm packages are even compatible with Bun (tho 98% are).

    Bun started off as an alternative runtime to Node (like Deno) but today is an everything-monster. It even has a built-in test-runner.

    To be completely honest, if you're dealing with dependency hell in 2026 you might be misusing npm. Or you're trying to update a really old project

It's going to be absolute mess of total AI slop and black box that nobody understands and is going to cause more issues than it fixes.

  • I've done some pretty incredible things with LLMs. If this were sqlite with its exhaustive test suite... OK, I can see it.

    It's hard for me to see this not becoming a pile of slop, but hey, maybe I'm wrong

Why would you replace an existing codebase like this instead of forking the repo instead and then making the changes?

  • They did fork it initially to experiment, then decided this experiment would go forward and thus naturally belong in the main repo.

    Git has this branch concept. It's being used correctly here, IMHO.

I wonder if projects like Ladybird will try this approach now. They've been trying to move to Rust (after trying Swift first) for a while.

Probably goes without saying but they probably had it check out thousands of projects that use bun and compile them using the new rust binary. And that was probably all automated and lifted into a compute structure that probably did all of that testing in 20 minutes. these people have scale.

Congratulations to everyone who uses Bun. You're now working as alpha testers for Anthropic... for free.

Anyone using Bun should consider migrating away immediately. Not because of the LLM angle, but because of how insanely irresponsible this is.

  • I reviewed the million lines of code added in a week, and I'm horrified. Not running that thing on my machine.

Well this is uncomfy. Not what...a week ago this was just framed as an experiment and now it's being rammed through?

Even if it works/is correct/etc, this is shockingly careless.

If I'm going to be using your thing to build on top of, I sure as hell don't want to see you 180'ing a week after you just said you weren't going to do exactly what you just did.

Hard pass, purely on principle.

The result is so horrible that Anthropic will quietly move to Node in 6 months. Now they got their headlines and in 6 months everyone will have forgotten about it.

What does this mean for bun add-ons like opencode's opentui? Did FFI also somehow get ported or will that have to be updated? https://github.com/anomalyco/opentui

  • First, why are you calling it "add-on". Second, it's done via the same C ABI.

    • Node's been calling native code distributed in a npm package "add-ons" for a decade and a half.

      Fair call on the same C abi. Adapting to node 26.1.0's new FFI is happening in https://github.com/anomalyco/opentui/pull/104 . There's also some new FFI adapters opentui is adding there, and they're adding a worker.

      So there is some adaption. That was sort of the interesting useful actual look I thought might be informative, where-as I feel like you were mostly just trying to be curt & maintain a status quo of keeping us all uninformed/unknowing. Let's try actually providing useful steps forwards when we post, ok?

This will go down in history as the biggest mistake of software engineering of all time.

Bun is the runtime of Claude Code, which is the core product of a trillion dollar company, which now sits on a vibe-coded app, where not a single person in the world has a proper mental model of.

  • Claude Code itself is purely vibecoded, both CC and Bun leads are saying that humans are not writing code at Anthropic anymore. It is amazing how much money they intend to squander, because it's all funny money to them, investors just give it to them hand over fist for them to burn. Developing wrappers around the model isn't even the hard part and yet they're going to burn themselves to the ground getting high on their own supply.

    • > Claude Code itself is purely vibecoded [...] money they intend to squander [...] going to burn themselves to the ground getting high on their own supply.

      This really really really isn't the burn you think it is. Going from 0 to 2B+ in revenue from a "purely vibecoded" thing is what they've said they're doing, and what they've actually done. Like in already done. It's not going back, no matter how many nuh nuh people write. They've already shown this can be done.

      People will continue to think that this is some sort of a gotcha. But it's actually precisely what they've done: they showed that dogfooding works. If this works, why not x y z?

      5 replies →

  • Maybe this is the best marketing trick for Claude Code ever. Maybe there was pressure from Anthropic to do this and prove the value. Even partial success is enough to prove the value, justify the value and usage, and AI dependency even further.

  • On the other hand they might be super confident in the results, and if it goes well they might use is as an example of how good claude is

  • Well, realistically as well, humans gave us softwares that are full of security holes (and bugs), which one have you seen that a human perfected on the first time around? Give AI some time as well to be fair.

  • My initial reaction was that this is pure insanity but in fairness this is a fairly 1:1 port of existing code, so the developer's mental model of it should still match fairly well.

    For instance look at this Zig function: https://github.com/oven-sh/bun/blob/ed1a70f81708d7d137de8de0...

    Versus this Rust version: https://github.com/oven-sh/bun/blob/ed1a70f81708d7d137de8de0...

    I did pick that at random but it does look like the best case. I skimmed through a lot of the Rust code and there's a surprisingly small amount of `unsafe`.

    Still pretty insane to merge this in such a short time with so little testing, but I can easily think of bigger software engineering mistakes. Hell it's not like Bun even needs to be commercially successful any more.

i find it hilarious how desperate people are to cope that this can’t possibly work, must be horrible, etc. for all i know, it is. but let’s just see how well it works, rather than “no true scotsman” grouse about it. it is so sad. it reeks of “doth protest too much” energy. if it were so obvious that ai was insufficient to do the work, then i don’t think you’d have to circle the wagons about it. you could just confidently watch the market turn on the product and know the reason why. and all that would prove is just how special you all are that ai cannot replicate your genius. the reality is that foundation model makers have been dogfooding their own vibes for multiple years now, and it is clearly is good enough for _them_. but yeah, i’m sure that’s just a total fluke and they are all idiots. /eyeroll

  • Last time I took the Time to see details about such crap, it was CCC.

    Great advertisement, fails to compile a random C projet I have, waste of my time.

Where are all the guys in the Hacker News comments who have been explaining how bad LLMs are?

  • LLMs bad¹ ² ³ ⁴

    --

    ¹ when they empower idiots who vibe features with no regard for tech debt

    ² in a long run when they are used without human oversight

    ³ even on trivial tasks when results can't be reliably verified (f.ex. tests coverage)

    ⁴ the above list is not exhaustive, but outlines main points which should be easily recoverable (by any person smarter than a house spider) from the context of discussions involving LLM sceptics.

    --

    To answer your question "where" – take this as your home assignment. My message contains enough hints to come to the right answer.

Giant slop-filled PR (that will power future slop-generation) has caused slop-coded Github to stop loading properly.

The Anti-Singularity is approaching ever quicker!

  • It's okay, at this rate Anthropic will be the only ones left using Bun.

    This is the Extinguish phase of the process, right?

We have hundreds of projects that run on Bun. (Some are Bun-specific for whatever reason, but most are "runtime-agnostic TypeScript code that runs on Bun, Node 24.2+, and Deno, but that means they run their test suites on Bun, in addition to the other two.)

Out of curiosity, I installed the canary Bun and just ran a bunch of them. It didn't take me long to find one that works on stable Bun and crashes on "canary" Bun.

      schematic git:(main)  bun upgrade --canary
    [1.55s] Upgraded.
    
    Welcome to Bun's latest canary build!
    
    Report any bugs:
    
        https://github.com/oven-sh/bun/issues
    
    Changelog:
    
        https://github.com/oven-sh/bun/compare/0d9b296af...19d8ade2c
    
      schematic git:(main)  bun run main.ts serve
    Schematic Editor running at http://localhost:4200
    Bundled page in 25ms: src/web/index.html
    frontend TypeError: Cannot destructure property 'isLikelyComponentType' from null or undefined value
        at V0 (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:24:2534)
        at reactRefreshAccept (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:6090)
        at http://localhost:4200/_bun/client/index-00000000ac7e3555.js:8766:27
        at CY (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:8973)
        at nY (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:9285)
        (...more like this...)
        at m (http://localhost:4200/_bun/client/index-00000000ac7e3555.js:21:8773)
        at http://localhost:4200/_bun/client/index-00000000ac7e3555.js:24:6482
        at http://localhost:4200/_bun/client/index-00000000ac7e3555.js:24:6548
        from browser tab http://localhost:4200/
    ^C
      schematic git:(main)  bun upgrade --stable
    Downgrading from Bun 1.3.14-canary to Bun v1.3.14
    [2.02s] Upgraded.
    
    Welcome to Bun v1.3.14!
    
    What's new in Bun v1.3.14:
    
        https://bun.com/blog/release-notes/bun-v1.3.14
    
    Report any bugs:
    
        https://github.com/oven-sh/bun/issues
    
    Commit log:
    
        https://github.com/oven-sh/bun/compare/bun-v1.3.14...bun-v1.3.14
      schematic git:(main)  bun run main.ts serve
    Schematic Editor running at http://localhost:4200
    [browser] Version mismatch, hard-reloading
    Bundled page in 20ms: src/web/index.html
    
    # working fine as usual... ¯\_(ಠ_ಠ)_/¯

I mean "passes test suite" is one thing. And a good thing. But... "doesn't break any (or even, say 99.5%) of the apps deployed around the world that are built on bun" is a pretty radically different thing.

It's hard to feel like this is responsible behavior, but I will reserve judgement for now, and see how long they persist this "canary" phase.

If they extend it for a lengthy period, and even like, fix bugs on the Zig version and the Rust "canary" version, then... I would be mollified to a great extent, since it is so easy to switch between the Zig stable version and the Rust canary version.

As a pretty heavy user of Bun, I'm actually pretty psyched for it to switch to Rust... but given the abruptness and speed so far, I can't quite shake the "new AI dealer getting high on his own supply" vibe.

But I hope they enter an intensive phase of prioritizing any and all "canary" bugs, and come out on the other side with a better product, and an even faster rate of improvement (which has honestly been pretty wild already).

(Yes, of course, I will have my clanker file a bug report with repro... but that may take a few days.)

vibe coders keep saying that now you can have 100x productivity, that you can write a million lines of code in a week and do what would take a team of 10 experienced developers a year.

where are all these million lines vibe coded projects? I don't see them. its all hype

  • This PR appears to be over a million lines (though GitHub won't load for me).

    Of course the quality is the real question. I haven't had amazing results with LLMs with Rust, but they're less bad at it than they are at Zig, which is probably the reason for the rewrite.

    At least in this case the original code was written carefully by hand, so the design is sane, and now just the auto-translation is in question. Now it just needs to be battle tested.

  • Bun is now literally vibe-coded, that's your proof. And Bun developers will solely use LLMs at some point (pretty close to "vibe coding").

    • Show me some gold instead of a continuous stream of pickaxes.

Now pull the branch and roll your own bun without license issues (using an ai) against their test suite.

Anyone using Bun in production excited for this release? (other than Anthropic of course)

I'm bullish on LLM-assisted development but this is just a very stupid way of performing such a critical migration.

I hate to say this, but this reeks of "We're owned by Anthropic now and we were put to task to prove Claude Opus as the ultimate AI model, so we were forced to do a full port of something millions of developers rely on to Rust in record time. Just ignore the slop and unsafe statements." (sweeps the broom)

This is nothing more than a marketing stunt from Anthropic. Nothing to see here.

HN overreacting again.

I trust Jarred to make the right decisions regarding bun, which seems to be his passion. Bun has always been amazing since i first tried it, it had some bugs along the way, which didn’t last long.

Anything bad that comes from this, will simply be fixed.

I hope more software does this and gets rid of their segmentation fault producing code, written in c++ and other unsafe languages

I can think of a few.

  • It has 10k unsafe blocks, pretty sure those segfaults are still gonna be there

    • Definitely. That's what a good translation is.

      But then, agents can work on removing each unsafe one by one and this will bubble issues.

I might not necessarily agree with the haste / stability of this, but I commend Jarred for pushing boundaries on what AI coding is capable of, can't deny that. 4 years ago this would've seemed like science fiction.