Bun's experimental Rust rewrite hits 99.8% test compatibility on Linux x64 glibc
6 days ago (twitter.com)
https://news.ycombinator.com/item?id=48016880 - May 2026 (540 comments)
From 4 days ago: https://news.ycombinator.com/item?id=48019226
cargo check reported over 16,000 compiler errors when I wrote that message. It could not print a version number or run JavaScript. I didn’t expect it to work this quickly and I also didn’t expect the performance to be as competitive. There’ll be a blog post with more details.
If this experiment ends up resulting in a real migration path, I think that would be completely awesome. Maybe it means we have a chance to revive older projects such as ngspice [0], but with modern affordances and better safety properties.
From your post, though, it sounds like Bun may have been a pretty direct rewrite, without too many hard choices along the way. Is that fair?
[0] https://ngspice.sourceforge.io/
31 replies →
UPDATE: This would make for an excellent case study if you don’t mind sharing the details. I am very curious about the number of agents, hours it took, and models used (did you use Mythos?).
This would not have been possible 5 years ago. LLMs are going to push us into the space age. Both Anthropic and OpenAI have committed to spending 10s of billions of dollars on training alone for the year. I am equally excited and terrified at the pace of progress!
Rust is really fun to work with and the compiler is great, just make sure the rewrite takes compile times into account since larger projects often have to be organized in a way that makes compilation reasonably fast.
10 replies →
What coding model are you using for the rewrite? Opus for everything? A prerelease model like Mythos?
Just an aside: is there any way to know how many of those 16,000 compiler errors are independent? I mean, could it be that just by changing, say, 500 lines of code all those errors disappear?
Perhaps 16,000 just measures cascade breakage; for example, one lifetime mismatch can cause errors in every function that tries to use that reference.
Rust reference-lifetime bookkeeping is a difficult task for LLMs. The LLM has to maintain, across multiple functions and structs, which references outlive which. Furthermore, compiler messages are highly contextual, and lifetime patterns are sparse in the training set.
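As a toy illustration (hypothetical code, not from Bun's tree) of how a single lifetime decision fans out: here a parser hands out tokens that borrow from the input rather than from the parser itself, and changing that one signature would break every caller at once.

```rust
// The 'a on `next_token`'s return type is the load-bearing decision: tokens
// borrow from the input string, not from the parser. Tying them to `&mut self`
// instead would make every caller that holds two tokens at once, or holds a
// token across another `next_token` call, fail to compile -- one lifetime
// mismatch, many distant errors.
struct Parser<'a> {
    input: &'a str,
    pos: usize,
}

impl<'a> Parser<'a> {
    fn new(input: &'a str) -> Self {
        Parser { input, pos: 0 }
    }

    fn next_token(&mut self) -> Option<&'a str> {
        let rest = self.input[self.pos..].trim_start();
        if rest.is_empty() {
            return None;
        }
        let start = self.input.len() - rest.len();
        let end = rest.find(' ').map(|i| start + i).unwrap_or(self.input.len());
        self.pos = end;
        Some(&self.input[start..end])
    }
}

fn main() {
    let src = String::from("let x = 5");
    let mut p = Parser::new(&src);
    let first = p.next_token().unwrap();
    let second = p.next_token().unwrap();
    // Holding both tokens at once only compiles because they borrow from
    // `src`, not from `p`.
    assert_eq!(first, "let");
    assert_eq!(second, "x");
    println!("{} {}", first, second);
}
```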
That's a post I am eagerly waiting to read.
Basically we are now seeing an "inverse Hofstadter's Law", where doing something with an LLM takes less time than expected, even when you take this law into account.
I am a Rust developer myself, but I really love Zig and Bun. I am just overly curious about all this.
5 replies →
This does not surprise me in the least. Several Claude instances are very good at splitting errors like these up and working through them all.
I think given the current mood of things, it would be prudent to not make such strong assertions on anything. Trust is in increasingly short supply these days.
7 replies →
> I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.
Haven't used Zig... (only used Rust)
but Zig doesn't solve those problems?
54 replies →
Peter Naur: Programming as Theory Building
Bun: Hold my beer
"No one has the intention of building a wall" - Walter Ulbricht, chairman of the central committee, a couple of months before the Berlin Wall was built.
The AI companies and their associates are beginning to surpass that level of denials and lies.
It’s disrespectful to immediately jump to adversarial conclusions from a simple desire to refactor, and it’s poor netiquette.
37 replies →
You know this whole exercise is both a marketing exercise and a way to make noise.
Would the world come to a standstill tomorrow if every Bun instance out there ran on Node.js?
They know their AI can't sell without the noise that it's now on the edge of the frontier. This is hype.
Zig adopting a strict 'no LLM' policy affects the LLM vendors.
13 replies →
Looks like he did the maintainability, performance, and test suite checks and made his decision :)
Honestly, I fully support the rewrite to Rust, but he should have just owned this from the start. I'm sure he knew in the back of his mind how dedicated he was to that branch as he had already spent the equivalent of thousands of dollars in tokens by that point.
12 replies →
Yeah, that means it's an extremely successful experiment so far.
Also a few days before that:
> I expect OSS to go the opposite direction: no human contribution allowed. Slop will be a nostalgic relic of 2025 & 2026.
We should have seen this coming after they got acquired by Anthropic, but it's still disappointing. I'm not against large language models as a technology, just thoroughly disgusted how these "AI" companies rose to power, eating the software industry and the rest of society. It's creating a very unhealthy dependency.
Think a few steps ahead and start preparing a slop-free software stack and community. That includes Zig and its ecosystem. Even if we (and future generations) don't manage to live entirely without slop, it's more important than ever to ensure a sustainable computing culture, free as in freedom.
Software companies have been about automating human labor since the invention of computers. It's the whole damn point. Why do you think finance used to be (sometimes still is) the head of the IT dept? Because we automated accounting away. Then typists. Then secretaries. Then drafting. Etc etc.
41 replies →
So you argue we discriminate based on who/what wrote the code, instead of what's in it?
Let's take this to a different domain: self-driving cars. Would you equally argue for human driving? I'm pretty sure over time it will become clear to everyone that machines will be able to outperform humans consistently at this task, to the degree that human driving will become illegal. But for now the press likes to focus on any failure of machine driving, while taking for granted that human drivers are the largest or second-largest cause of premature death in many countries.
Coding is (in many ways, but not all) a more open-ended and versatile task than driving, so it's natural that current iterations seem untrustworthy, but ignoring the trajectory is erring on the side of conservatism, and doesn't seem to me to be grounded in any sound reasoning.
How could it possibly be open source if it requires proprietary models developed by a few companies to write the code?
Seems like that would make open source entirely controlled by OpenAI, Anthropic, et al.
1 reply →
It isn’t really slop anymore and it will keep improving.
He works at Claude; he has unlimited tokens. He can do anything; he is using Mythos.
I think such re-implementations will be a huge asset to the process of software developments in the future.
[flagged]
What's your point
To demonstrate that engineers may not be as skilled and knowledgeable as they appear. To make such a comment and then turn around and make an announcement days later indicates that the engineers are not skilled in the tools they’re using, or possibly even the domain they’re working in.
13 replies →
Very impressive that they could do this so quickly, because I have been on a similar project (porting TypeScript to Rust) for 5 months. But I guess I don't have access to Mythos and unlimited tokens. I'm also close to a 100% pass rate: 99.6% at the time of writing.
https://tsz.dev
Rust is perfect for writing all of your code using an LLM. Its strict type system makes it less likely to make very dumb mistakes that other languages might allow.
Also want to note that writing the code using LLM doesn't remove the need to have a vision for the design and tradeoffs you make as you build a project. So Jarred and his team are the right kind of people to be able to leverage LLMs to write huge amounts of code.
> Rust is perfect for writing all of your code using an LLM. Its strict type system makes it less likely to make very dumb mistakes that other languages might allow.
I question this. Yes, strong enforcement of invariants at compile time helps the LLM generate functional code since it gets rapid feedback and retraces as opposed to generating buggy code that fails at runtime in edge cases.
On the other hand, Rust is a complex language prone to refactoring avalanches, where a small change in a component forces refactoring distant code. If the initial architecture is bad or lacking, growing the code base incrementally as LLMs typically do will tend towards spaghettification. So I fear a program that compiles and even runs ok, but no longer human readable or maintainable.
> Rust is a complex language prone to refactoring avalanches
This may be so, but LLMs are great at slogging through such tedious repercussions.
I would say if the language prevents sloppy intermediate states, that actually makes it more amenable to AI; if you just half-ass a refactor into a conceptually inconsistent state, it’s possible for bad tests to fail to catch it in Python, say. But if many such incomplete states are just forbidden, then the compiler errors provide a clean objective function that the LLM can keep iterating on.
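A toy example of that effect (illustrative, not from any real codebase): Rust's exhaustive `match` turns a half-finished refactor into compile errors instead of silently passing bad tests.

```rust
// Adding a variant to this enum (say, `StringLit(String)`) would make the
// non-exhaustive `match` in `describe` a hard compile error, so an LLM
// mid-refactor gets an explicit list of every site left to fix rather than
// a green-looking but conceptually inconsistent build.
enum Token {
    Ident(String),
    Number(i64),
}

fn describe(t: &Token) -> String {
    match t {
        Token::Ident(name) => format!("ident {}", name),
        Token::Number(n) => format!("number {}", n),
    }
}

fn main() {
    assert_eq!(describe(&Token::Ident("x".into())), "ident x");
    assert_eq!(describe(&Token::Number(5)), "number 5");
    println!("ok");
}
```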
2 replies →
> On the other hand, Rust is a complex language prone to refactoring avalanches, where a small change in a component forces refactoring distant code.
Are you saying this out of personal experience or just hypothesizing? I am working on a large, complex rust project with Claude Code and do not experience this at all.
6 replies →
It's very easy to just instruct the LLM to build using isolated crates, to maintain boundaries, focus on "ports and adapters", etc, and not run into this - in my experience.
I haven't had any issues with this getting out of hand on >10KLOC vibed rust codebases.
4 replies →
Sure, but if the initial architecture is bad, then for most mainstream languages trying to do a huge cascading refactor is equally hard, except the result is a lot less likely to work at the end, so you don't do it at all and end up in the same spaghetti mess.
The lesson here is that right now LLMs are a lot better at "fill in the implementation for this API I defined" than "design everything from scratch" if you care at all about whether it becomes a mess of spaghetti. Maybe someday they'll be better at it, but at least today, you have to choose between going full vibes and not caring about the code, or you need to be involved in the design, and either way it's not clear that Rust is a significantly worse choice based on anything other than your own experience.
Human readability and maintainability is not the future.
[flagged]
When Microsoft rewrote it in Go, there was a comment from one of the leads that they chose it over Rust because of the similarity in paradigms (garbage collection, etc.), and that using Rust would've been more difficult, requiring a lot of "hoop jumping". Now that you've done it... thoughts?
Yes indeed. Porting more than 1 million lines of code (including tests) means jumping through lots of hoops, but with LLMs it's not as painful, so you can just ask it to do the hard things.
Example of a Claude Code session after 2 hours of "crunching" that came out without results: https://github.com/mohsen1/tsz/pull/4868 (Edit: I force-pushed to the PR to solve the problem; you can see the initial refusal message in the initial version of the PR description.)
Funny thing is, the last percent of the tests has been so hard to work on that Opus 4.7 routinely bails and says it's "too involved or complicated", so I had to add prompts specifically asking it not to bail.
8 replies →
They mentioned that they wanted to port their compiler over to retain existing behavior (vs a re-write) and Rust has a hard time with their cyclic data structures.
Is GC useful for a static type checker? Or did they make a new runtime?
2 replies →
Same but for multi-threaded Postgres [0]. 96% of pg regression tests pass after 1 month and 823K LOC. 8 Codex accounts at $200/mo is what I could use up with no Mythos.
I've also seen the benefits of Rust for this too. And making the bet that my pg experience will help me make good design choices around many of the things people have been having trouble with in pg for a long time[1]. Excited to see AI make it more possible to improve complex pieces of software than has historically been practical.
[0] https://github.com/malisper/pgrust [1] https://malisper.me/the-four-horsemen-behind-thousands-of-po...
$1,600/mo; there is now a token-rich class.
Very cool! If you have extra tokens lying around, ask the agent to try to break things and open GitHub issues. This is what I do for tsz, and beyond conformance tests I can see it finding very good bugs.
96% of tests passing sounds impressive, but I remember that C compiler that had similar (or better) stats yet was still hilariously broken, because the test suite didn't cover many "obvious" things that a human wouldn't get wrong even without the tests.
1 reply →
> PostgreSQL, rewritten from scratch in Rust.
You use the test suite and LLMs are trained on Postgres.
Are you at Freshpaint? A company that "helps healthcare marketing teams grow in a world where privacy is the baseline, but performance is the goal."
Nice promises! Surely the marketing teams will respect privacy!
wow!
curious about your workflow for running all these accounts. different harnesses in parallel? manually switching in codex? 5.5pro only?
what works for you?
1 reply →
Rust is amazing, but the way I want to build Rust software breaks down on large projects with LLMs. Maintaining clean boundaries or even just establishing them stops being a flow state and turns into painful reviews that push me into procrastination mode.
I’ve struggled to get Opus to not write the weirdest possible Rust, ignoring all idioms and so on. Any tips?
I found that turning on every possible clippy lint and telling it that it has to run clippy as well as the tests before it can claim it's finished helped a decent amount. Of course, if you have a decent-sized codebase of Rust you're happy with, it helps immensely, since it will tend to follow instructions to match existing patterns.
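One possible shape of that "must pass before you're done" gate, as commands the agent is told to run (a sketch; the exact lint groups and flags are illustrative choices, tighten or loosen to taste):

```shell
# Lint with the pedantic and nursery groups promoted to warnings, and fail
# the run on any warning at all, so the agent cannot declare victory early.
cargo clippy --all-targets --all-features -- \
    -W clippy::pedantic -W clippy::nursery -D warnings

# Then run the full test suite.
cargo test --all-targets
```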
Be absolutely ruthless with technical debt. Opus is perfectly capable of producing idiomatic code in any mainstream language you please, but will seize on any opportunity to justify writing basically-python instead because that's "consistent" with the "convention". Deprive it of that excuse.
1 reply →
Give it coding guidelines. It'll largely try to do what you ask.
Left to itself, it often follows human developers who conceive of their goal as "get the program working, the end justifies the means." Which makes sense because there are a lot of systems like that in the training corpus.
>Rust is perfect for writing all of your code using an LLM. Its strict type system makes it less likely to make very dumb mistakes that other languages might allow.
100%. I've been telling this to everyone who will listen for 2 years. LLMs are infinitely more productive with Swift code like
let engineCycleCount: Int = 5
vs
let eC = 5
They still make mistakes, but forcing _explicit_ typing in a strongly typed language makes them make far fewer mistakes, plus the compiler catches >90% of what you try to catch with a billion rspecs in trash languages like Ruby.
Wow, amazing work.
Pretty impressive that it is faster than the Go version already.
Thank you!
It's much faster in single file benchmarks (3 to 5x)
https://tsz.dev/benchmarks/micro
I have optimizations planned for large projects that I'm still fleshing out.
2 replies →
Zig is much more type-aligned to Bun than TypeScript. And there’s a common interface of C FFI, so you could imagine porting it modularly and keeping the test suite in Zig.
>Rust is perfect for writing all of your code using an LLM.
Rust is a terrible language for using LLMs to write code if Rust's low-level performance isn't needed, because of its extreme compile times. LLMs code faster than humans, so a far bigger fraction of the time is spent waiting for the compiler, and a reasonably sized project will take literally 10x longer to compile in Rust than in e.g. Zig or Go.
In my experience with Claude Code, it writes most of the code, including tests, without invoking the compiler until the very end (almost like a spelling checker). Rarely are there any compilation problems, and when there are, it’s often a token issue like a missing brace. I hypothesize this is possible because of the robust invariants of the language itself, and its strong types, such that the LLM can encode deeper meaning in fewer tokens.
Also remember, `cargo check` is quite fast, and wholly sufficient for confirming correctness.
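The inner loop that makes this workable looks something like the following (a sketch; workspace flags are an assumption about project layout):

```shell
# `cargo check` runs type and borrow checking without code generation, so an
# agent can iterate on compiler errors cheaply...
cargo check --workspace

# ...and only pay for a full build plus execution once the check is clean.
cargo test --workspace
```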
Shouldn't typed code that uses a functional style be kind of the perfect endgame for LLMs? You can parallelize generation at any granularity, easily ring-fence changes, and reproduce everything, and the types give clues to the LLM.
Interesting, but why not then use an even stricter language? Say Idris, ATS, Lean or F* ?
Not OP. For this particular use case, I think performance is a primary concern.
But if you mean in general, I also totally feel that languages that let you represent more invariants statically are better fit for LLMs. I'd love to see experimentation with LLMs with dependent types and managed effects.
Because I don't know those languages. I'm still reading the code the LLM writes.
[flagged]
> How do we know it is true?
The branch is open.
You can check it out and run the tests if you don’t believe it.
Zig isn’t so much on the blacklist because of the culture it carries from its maintainers, but because the ecosystem is no longer easily composed with other GitHub projects/GitHub Actions.
> We are dealing with a company of habitual liars and promoters.
Any sources to back this up?
I just want to comment that I think it's a good change if we look past the AI involvement.
Bun has had an extremely high amount of crashes/memory bugs due to them using Zig, unlike Deno which is Rust.
Of course, if Bun's Rust port has tons of `unsafe`, it won't magically solve them all, but it'll still get better
> Bun has had an extremely high amount of crashes/memory bugs
Any stats/source? Not that I think it's false
> and the ugly parts look uglier (unsafe) which encourages refactoring.
Looks like Bun owes that to itself to some extent, not solely because of the language
Not that it's a particularly accurate stat, but:
https://github.com/oven-sh/bun/issues?q=is%3Aissue%20state%3...
119 open, 885 closed
https://github.com/denoland/deno/issues?q=is%3Aissue%20state...
10 open, 46 closed
You want a better source than the actual author of Bun?
4 replies →
I believe the author is the creator of Bun.
3 replies →
FTA:
> why: I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.
Not a hard number obviously but a clear indication those issues exist.
3 replies →
If you look at the percentage of segfault errors in each repo, Bun had a much larger share. Although don't quote me on that.
Last time I checked their issue tracker (in 2025), the main source of problems was the engine, not their Zig code. A lot of core dumps were happening inside and around JSC.
I remember back in the day we used to blame the user and not the tool, but I guess we changed that notion when it comes to tool vs tool comparisons LOL
> Of course, if Bun's Rust port has tons of `unsafe`, it won't magically solve them all, but it'll still get better
You get very few of the Rust guarantees when you litter your code with unsafe to get around the safety checks (which is what they're doing here). I would not recommend running this in production.
Yes, liberal unsafe code arguably makes Rust worse than writing in a presumed-unsafe language.
From what I understand, rust "unsafe" is actually pretty damn safe compared to an actually memory unsafe language.
2 replies →
And they're clearly marked as `unsafe`, so easy to find, which gives them a nice list of issues to address.
Is your claim that using Zig ends in an "extremely high amount of crashes/memory bugs?" Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool? There is a lot of quality stuff made with C/C++, so what is Zig doing wrong?
> Is your claim that using Zig ends in an "extremely high amount of crashes/memory bugs?" Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool?
What caused you to hallucinate such a broad blanket statement? The point is the memory unsafety issues they ran into would be categorically impossible in safe Rust, which is why they're doing this in the first place.
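A textbook instance of that category (illustrative, not Bun's actual code): pushing to a `Vec` can reallocate its buffer, which in C or Zig can silently leave an old pointer dangling; safe Rust refuses to compile the dangerous ordering.

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0];

    // Uncommenting the next line here is a compile error ("cannot borrow `v`
    // as mutable because it is also borrowed as immutable") because `first`
    // is still used below. In C or Zig, the push could reallocate the buffer
    // and leave `first` dangling -- a use-after-free found only at runtime.
    // v.push(4);

    assert_eq!(*first, 1);

    // Fine here: the immutable borrow ended at its last use above.
    v.push(4);
    assert_eq!(v, vec![1, 2, 3, 4]);
    println!("ok");
}
```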
14 replies →
It is basically Modula-2 / Object Pascal with C-like syntax.
While bounds checking, improved argument passing, typed pointers, proper strings and arrays are an improvement over C, it still suffers from use after free cases.
C++ already prevents many of those scenarios, at least for those folks who don't use it as a plain Better C and actually make use of the standard library in hardened mode. When not, it is naturally as bad as C.
Also note that the tools Zig offers to prevent that are also available in C and C++, but people have to actually use them; e.g. I was using Purify back in the 2000s.
Then there is the whole point that Zig is not yet 1.0, and who knows what will still change until then.
7 replies →
It is much harder to write quality stuff in C/C++ that doesn't have memory bugs (use after free, out-of-bounds access, use of uninitialized memory, double free, memory races, etc.). I wouldn't say it isn't feasible to build high-quality software in those languages, but even the highest-quality software written in them has these types of bugs. Zig is better than C, and maybe a little bit better than C++, especially with respect to spatial memory bugs, but it doesn't provide the same guarantees as Rust.
4 replies →
The answer is that C (and by extension Zig, C++) code goes through a hardening process. New code in these languages tends to be unsafe. But bugs and vulnerabilities get squashed over time. Bun gets updated fast and so has a lot of new unsafe code.
It's feasible to write good software, but anything on the scale of millions of lines of code will have memory and pointer issues. I've worked in large C++ codebases with people much more experienced and skilled than I was, and every single one of them would tell you that at that scale, no matter how economical and simple your program, you will produce memory bugs; the smartest person in the world makes errors holding that much stuff in their head.
They're difficult to find, difficult to reason about in big software and you'll always create some. Languages that rule that out are a huge improvement in terms of correctness.
1 reply →
The statement “there exists a project where zig led to an extremely high amount of crashes/memory bugs” does not imply “all zig projects have an extremely high amount of crashes/memory bugs”.
This is a classic logic problem - eg “there is an orange cat” doesn’t imply “all cats are orange”.
1 reply →
> There is a lot of quality stuff made with C/C++
There’s a lot of leaky crap written in those languages too. One of the core promises of Rust is that the compiler will catch memory issues other languages won’t experience until runtime. If Zig doesn’t offer something similar it’ll make Rust very compelling.
12 replies →
> Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool?
plenty of other companies/entities making high quality software in zig? tigerbeetle, zig itself for example.
Bun's entire history has been a kind of haphazard move as fast as you can story, so...
Can you or someone shed some light on how much compute it took to do this?
[flagged]
Ah, yes, the "you're holding it wrong" defense. If one tool has a higher safety rating than another, significantly so, preventing entire classes of mistakes from happening that the other does not, in a kind of superset manner - even the most skilled craftsman will inevitably make mistakes that would have been prevented by the safer tool.
2 replies →
I think the main problem with Bun is that they are trying to move very quickly.
TigerBeetle devs spend 90% of their time working on stability, safety, tests, and so on. They don't need new features; they need reliable software. Their database is pretty simple in terms of features, and their goal was always stability and speed. Bun devs spend the majority of their time adding new features.
2 replies →
> This just sounds like they are not good at using Zig.
That's odd; given the visibility of team Bun using the language, one would think they could get whatever help and guidance they asked for. It seems weird for team Bun to complain about crashes, leaks, and bugs if they could have what they were doing wrong explained to them, or their issues fixed in a timely manner.
Not sure if ghostty is the best example https://mitchellh.com/writing/ghostty-memory-leak-fix
6 days of work to do this. Even if it doesn't end up becoming meaningful, it shows just how tokens and work done will be linked now and in the future.
It's going to be hard to compete with someone or a company that has more compute. They will just be able to do things you can't.
Translating a project that includes a good test suite from one language to another is known to be a great case where LLMs work well.
When you’re starting with a complete codebase to use as an example and a test suite to check everything it’s much easier to iterate toward the desired goal. The LLM can already see what the goals are and how they’ve been implemented once already, which is a much easier problem than starting from a spec.
Great case where rust works well too. I won't cite every famous libs that got rewritten in rust but it wasn't all with LLM.
3 replies →
It's not hard to imagine a future where the only things committed to git repos are tests and specs.
2 replies →
Sure, but given that, does it not seem like the conclusion is: if you have something that could in principle be reverse-engineered by a competitor with more compute, they can and will steal it, because the only constraint is ROI?
The goal posts are always moving. This would have been an unthinkable task a couple years ago.
1 reply →
Unclear. Very good products tend to be about doing one or a few things very well, not about doing tons of stuff. So far, all I see is "Man, I'm a 10x engineer now!", shipping more code but without clear direction and taste. At this point, most LLM-based work is just noise.
You could have said the same thing about steam power or electricity. And it’s not just an analogy: The magic of these things is in being universal information engines. You spend capital to build them, using well-understood, scalable techniques, plug them into electricity, and out comes value.
My point is, there’s no chance of a “haves and have nots” emerging, any more than electricity turned out that way in the modern world.
Electricity might be a good analogy - but for the other side of this argument.
In the US, (nearly) full electrification wasn't achieved until the late 1940's/early 1950's - a process of nearly a century. (A moment of personal trivia, my great grandfather worked on crews electrifying rural areas of the midwest.)
1 reply →
>My point is, there’s no chance of a “haves and have nots” emerging, any more than electricity turned out that way in the modern world.
Energy costs vary widely across the world, and that has enormous consequences for the economies of different countries and their industrial capacity.
5 replies →
[dead]
It's a new era of capital, literally, in software development. Ownership of the means of production is now concentrated.
Nah. These agents are getting easier and easier to run locally. Have you tried Qwen 3.6 27b? It’s insane what it can do relative to its size. Like 100% vibe small projects if you manage context properly.
These models are a race to the bottom just like compute.
I don’t think it matters. Local models becoming better has not stopped demand for SOTA models.
1 reply →
I can't help but wonder what this cost in USD assuming you paid standard rates from Anthropic. Can someone even ballpark the price?
Much less than what it’d cost for a team of Rust engineers.
This is both amazing and scary; has been for a while now.
7 replies →
10k lines ~$250 in OpenAI API calls (no plan)
45 million lines would get to ~$1.125 mil for the Linux kernel.
950k lines for Bun would get to $23,750
use whatever math you like ofc.
Does an Anthropic employee pay that? No. Even if it's at a loss in terms of company revenue, it's worth burning the private capital for all kinds of other reasons.
There are not many companies that can live off taking a full test suite built over decades and just generating code off it.
With fewer employees...
Isn’t it just one guy?
10 replies →
Completely unfounded, but I don’t want to have anything to do with Bun anymore. It’s just a gut feeling, but I don’t trust them or support them.
They fork Zig to utilize LLM rewrites and build something the Zig team clearly disregarded (non-deterministic compiling)
And now, like a whiny baby, they LLM-rewrite to Rust. There is a very real chance that Zig's design philosophy got them to the point where they are now by forcing them to make the tough but precise decisions, and the Rust rewrite is the start of the downfall.
It’s purely politics-based, not technical, but it seems like Bun is fully pampered by Claude. So much so that I wouldn’t be surprised if Anthropic’s next marketing piece is: Claude Mythos rewrote a leading 950k LOC JS runtime in Rust.
Who's the whiny baby? The developer writing some code in their own repo, or the guy complaining about it on Hacker News?
Yeah I also noticed this irony. In addition to accusing the rewrite to being political and not technical, while their whole comment is being political not technical.
4 replies →
How can you be so blind? This is all a marketing campaign by anthropic. No more no less. The developers doing the rewrite have no voice at all in this game.
1 reply →
> You're posting valid criticism, therefore you're a crying baby
Yawn.
1 reply →
I'm team Zig in most cases but I genuinely think they are better off with Rust. They have had a lot of buffer overruns and segfaults as a result of undisciplined Zig code. I think Rust actually is a better technical choice for them.
[flagged]
1 reply →
> And now like a whiny baby they LLM rewrite to Rust.
I didn't see any whining from Jarred, this seems like misplaced sentiment
> It’s purely politics-based
The linked twitter thread gives clear technical justifications
Jarred's Twitter is a Claude Code billboard.
1 reply →
> I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues
There are legit reasons to rewrite a program in a better fitting language, but as a runtime to be "tired of worrying about & spending lots of time fixing memory leaks and crashes and stability" is really borderline to me.
Also, there are far more things to it than just compile times and tests: you reset the mental model and will lose contributors. There is philosophy, developer skill, and more attached to a language.
In this case both compile via LLVM all the same, and there is no performance benefit given the code is written exactly the same, so it's developer preference, where the current head seems to have prioritized his own DX over everyone else's.
But again this is mainly my gut feeling. I’m not the first dev that doesn’t like the way bun changes : https://news.ycombinator.com/item?id=48011184
"They" likely refers to Anthropic in this case rather than being an indeterminate singular pronoun:
https://bun.com/blog/bun-joins-anthropic
I'm not sure if the 50% of people defending the whole rewrite live under a rock with regard to the acquisition, have never worked at a US company, or are deliberately naive. Companies give instructions. None of this is accidental or prompted by curiosity.
It looks more political than technical. Also, criticizing the Zig team for not making any AI contributions before this gives a hint.
1 reply →
I agree. From the get-go, Bun was apparent in its design philosophy: we do everything you'd ever want (runtime, bundler, test runner, package manager), all in a new breaking patch each week, each one blowing the established competition away: better, faster, stronger. But it was glaringly obvious that they'd do anything but Keep It Simple, Stupid. It was obvious that the only production environment it would see in the near future would be YC startups burning one after another at the speed of an accelerant. Now, they're past the point of no return.
> It’s purely politics-based not technical
Jarred mentioned having to work on fixing memory leaks as the main motivation to try this.
https://xcancel.com/jarredsumner/status/2053058171338682875#...
I was never fully comfortable with Zig given it's much less mature than Rust. Maybe this will be for the better.
I voiced a similar sentiment 4 days ago in the original discussion about this project, and for some reason HN did not like that I noted Rust has been used in production longer and far more widely than Zig, including in Firefox, Cloudflare's own reverse proxy, Discord, and many other massive-effort projects that affect millions if not billions of people.
People are seriously naive about corporate incentives. You think he'll go "Yeah, it being in Zig has put a wrench in our AI usage and that's not a good look now that we're with Anthropic"? No, he'll confirm everyone's biases instead - and it's working as well as expected on this crowd.
He is a puppet. Anthropic is making him a billionaire. No surprise no one here can notice the difference
I don't have the personal investment that you appear to have with Bun, but why does this matter? Do you scrutinize the rest of your dependencies this way?
Much of working in the JS / NPM ecosystem is already pure faith on un-vetted dependencies, and this appears no different pre or post LLM rewrite. If it satisfies the intended goal and API contract it originally did, is there any difference? Were you carefully reading the original source code before?
> Do you scrutinize the rest of your dependencies this way?
You don't?
9 replies →
I consider zig the "whiny baby" approach to be honest.
[flagged]
Bun is effectively dead.
Anthropic bought it in a somewhat dumb attempt to solve their "performance" issues (not realizing their horrible code was the issue in the first place).
It probably helped them, simply because they brought in some actually competent developers.
But doing so, Bun went from being a public project to more of an internal tool for Anthropic, spoiled for now with AI money and losing quite a bit of focus.
Let's hope that when the bubble pops, some of the Bun effort could at least be salvaged. I don't see Anthropic maintaining it long term, they are simply not in the business of selling support for a runtime nor have the (Google) scale justifying maintaining one on the side.
Yep, the Anthropic acquisition, this petulant Rust rewrite, and bun's increasingly buggy releases (slop) have caused me to migrate my projects (personal and work) to nodejs+pnpm.
The risks of using bun are no longer just those concerns around a newer tech and "drop-in" replacement for nodejs. Now you have to marry Anthropic, Rust, and a founder with conflicting priorities.
Having read the comments from the actual engineer doing this rewrite, the only petulance I have seen is from those reacting so strongly to it.
2 replies →
So let me get this straight:
Developers use LLMs to migrate a million-line codebase to a language they have much less experience with, in such a short amount of time that they likely do not have a good mental model of the migrated code.
At least the tests pass.
Only one person drove the migration, so the number of people who understand the new code is ~0.5, on the assumption that there's no way a sole dev could build a mental model of 1M lines of fresh code in 6 days.
This is code for a language runtime.
It's great that the tests pass but it's really hard for me to interpret this as anything other than horrible mismanagement of a promising project. When you sit this low in the stack this is grossly irresponsible and I have no idea why anyone would use Bun after this. You'd be literally adopting a runtime the devs presumably don't understand, keep in mind they now somehow need to evolve and maintain this in the future.
Hopefully this remains an experiment, or Bun has some plan for re-upping dev knowledge of the codebase. Sorry but a component with massive blast radius like a runtime isn't really a good candidate for vibe coding, no matter how good the AI is. I'd like the maintainers to actually understand their runtime, thanks.
Thank you for putting into words the gut feeling I expressed in my top comment here. I didn't have the full explanation for why this threw me off.
They won't, they will continue to vibe code it until it collapses under them and the project fades into obscurity. Which it will regardless since it was acquired by Anthropic.
Node beat Deno and Bun. Pretty impressive.
I think a lot of people are taking this at face value; much of it was possible only because of the beyond-standard, extensive and comprehensive test suite previously built.
It's still an impressive achievement that would have taken even the most competent engineers an exponentially longer time to accomplish.
I just hope it's noted when this is eventually marketed how much human effort went into designing and curating the test suite that even enabled this speed in the first place.
A test suite sort of functions exactly like the ideal scenario for current gen llms. A comprehensive enough test suite essentially forms the spec for agents to implement however they see fit - in this case rust.
You could probably throw away the entire source code in certain cases and reimplement the whole thing from scratch, just giving an agent access to the tests, when the suite is as well crafted as a project like Bun's.
Look what it can do in 6 days!
Ignore the hundreds of thousands of hours put into the original architecture and test suite that made it possible in the first place.
This is such a bad faith argument. How long would it take a dev or a team of devs to do this with the same architecture and test suite? A hell of a lot longer than 6 days..
10 replies →
Exactly this.
I am not sure why people sound so astounded, to be honest. This has been my frank experience of the agentic tools both Codex and Claude since about December.
When given the right constraints this kind of thing is entirely conceivable.
However the important question not being answered here is: does anybody working on it have a full understanding of what has been built?
My experience having constructed similar types of projects using these tools is yes, you could do this in a week or two but now you'll have a month or two of digging through what it made, understanding what was built, and undoing critical yolo leaps of faith it made that you didn't want.
1 reply →
If this is a "beyond standard" test suite, (so much so that it _uniquely_ makes this work possible compared to other projects,) then how is Bun also uniquely unstable compared to other Zig programs (and so deserving of rewrite?) If the blame lies partially with the test suite, what does this imply (if anything) about the Rust port?
Because tests validate behavior, not undefined behavior.
The thesis is that Rust makes undefined behavior less likely.
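To make that distinction concrete, here is a minimal, hypothetical sketch (not from Bun's code) of why behavioral tests can miss a bug that is undefined behavior in one language but a deterministic failure in safe Rust:

```rust
fn main() {
    let buf = [1u8, 2, 3];
    let idx = 3; // one past the end of the buffer

    // In C, or in Zig release modes with safety checks disabled, reading
    // past the end of a buffer is undefined behavior: it may silently
    // return garbage, so a test asserting on outputs can still pass.
    // Safe Rust bounds-checks the access: `get` returns None here, and an
    // indexing expression like `buf[idx]` would panic deterministically,
    // so the mistake surfaces the first time a test exercises it.
    assert_eq!(buf.get(idx), None);
    println!("out-of-bounds read refused at index {}", idx);
}
```

The point is not that Rust code has fewer logic bugs, but that memory mistakes become observable failures a test suite can actually catch.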
What a time to be alive.
So much of the fundamental dynamics of the industry and the job have changed in so little time. Basically overnight.
Some days I am so excited at how much I can do now. You can build anything you want, in basically no time! 100% of my software dreams can be a reality.
Some days I am terrified at what's going to happen to the job market.
Suddenly you can get so much with so little. The world only needs so much software.
Is every company that sells software as their core business model going to go out of business?
What will happen if only certain companies or governments get access to the best models?
> Is every company that sells software as their core business model going to go out of business?
Probably not, for a number of reasons:
* Some software suites are (probably still for a few years) too big to regenerate them through a coding LLM
* There's quite a lot of proprietary knowledge not just in the code itself, but in the requirements, industry knowledge etc. For example if you want to write a hospital management system, you need to know a lot about how hospital works, how they are billing their services in different legislatures, data protection rules etc.
* For some pieces of software (like computer-aided engineering), validation of the software is just as important as the software itself.
* Liability: suppose you build bridges, and you're on the hook if it fails too early. Do you really want to vibe-code your own software that validates the bridge's design? Will any insurance company cover that? Probably not in the near future...
* Currently, security and safety of LLM-generated code is still a pretty big concern. I guess this will get better as the LLM-Coding industry matures.
> The world only needs so much software.
Around the time of the dot com crash, there was a decent amount of rhetoric advising students and job seekers against getting into the software industry, because it was getting "too saturated." The thinking was there's just not that much work to go around, especially for the number of people flocking to the field. And the crash just reinforced that narrative.
But even as a student back then, I could tell that there was unlimited scope for software. Pretty much any cognitive thing we do manually could be done in software. I once idly tried to enumerate those and quickly realized there was soooo much to do. Plus, I also understood that the more you do things a new way, a lot more things pop up that we haven't even imagined yet. The possibilities were countless. It was clear that the "saturation" narrative stemmed from a lack of people's imagination and understanding of what software really was.
I just knew that this field would never get saturated because it was impossible to run out of things to write software for.
But these days...
I mean, I know we will always have new software to build as things evolve, which they will do faster than ever with AI. But these days, I wonder if it's now possible to write software faster than we can imagine new things to do.
> Pretty much any cognitive thing we do manually could be done in software.
Yes, although I suggest being careful with that kind of thinking.
https://www.orwell.ru/library/novels/The_Road_to_Wigan_Pier/...
2 replies →
Let's take a SW business like a ticketing system.
Do you think 100 enterprises with 1 bln of tokens are going to make a better product than specialized vendor with 100bln of tokens?
For sure, SW vendors and SaaS like "logo creators" are already dead, but unless the next generation of LLMs ships with an embedded ticketing system, the ticketing-system vendor will be fine (maybe with less headcount, but not sure).
> Do you think 100 enterprises with 1 bln of tokens are going to make a better product than specialized vendor with 100bln of tokens?
I'm not sure if this is sound reasoning, because "better product" is very context-dependent.
My current employer migrated from RT to OTRS as its ticket system, and is now moving to ServiceNow.
The RT instance was heavily patched/customized.
The OTRS instance was heavily patched/customized.
We try not to customize ServiceNow quite as much, but the less we customize it, the more we have to change the workflows in our company. And humans are slow to adapt.
With this experience in mind, the question is more: do we want to spend lots of money on a vendor-supplied ticket system, and then spend lots more LLM tokens to customize it, or do we LLM-build it from the ground-up?
If we started a new ticket system migration project today, maybe the best answer would be to start with an easily-customizable Open Source ticket system, and then throw LLM-power at customizing it.
1 reply →
Certainly companies and governments will have access to better models than the public (in fact, that's already the case with Mythos). The public will still be able to help themselves with models that are behind the frontier.
Maybe, or they use the same smartphones as everybody else. The mass market also wants the best model and will pay accordingly.
It’s pure marketing. Don’t be naive
I'm a full time Zig developer, and I see this as an absolute win. I know Jarred has said in the past he feels Zig makes him more productive, but I also think it's fair to say Bun was programmed in a way that's quite cavalier towards buffer overruns. I think Jarred and the Oven team will have significantly better luck with Rust.
Some commenters have remarked they only heard of Zig because of Bun, therefore this is bad for Zig. Not so. In my opinion, there has always been a mismatch. I say with no ill will that a divorce is likely better for both parties. I genuinely believe Bun will be better software once fully converted to Rust.
I remember looking into the Node.js alternatives some years ago; one way to compare them is to look at the open issues. Bun had so many hits for 'segfault' and Deno had basically none.
Even now:
bun (zig) [1] 119 open / 885 closed
deno (rust) [2] 0 open / 1 closed
I don't think this has that much to do with Zig's anti-AI stance. More about using the right tool for the job.
[1] https://github.com/oven-sh/bun/issues?q=is%3Aissue%20state%3...
[2] https://github.com/denoland/deno/issues?q=is%3Aissue%20state...
You misspelled segfault as segfaut on your Deno search:
https://github.com/denoland/deno/issues?q=is%3Aissue%20state...
There 10 open and 40 closed on Deno.
1 reply →
Not sure why you're getting downvoted; I think you're close to right. They were successful with one technology and had a great exit. They may also be successful with another technology post-acquisition.
Let's see the fruit of their decision.
Just a cautionary case of porting to Rust using AI
https://blog.katanaquant.com/p/your-llm-doesnt-write-correct...
Also passing tests doesn't mean something works.
The Claude Code C compiler passed 100% of the GCC tests and couldn't even run a hello world...
It couldn't run "hello, world" on systems where the include files were not located in the directory that it expected -- producing instead diagnostics saying, quite clearly, that the header files were not found. On systems where they were, it built versions of postgresql, redis, and several other things which passed their test suites completely.
If you've heard this problem described as a fundamental limitation of the compiler, and not the kind of packaging glitch that's routine to find in pre-alpha software of all descriptions, whoever described it to you that way is not serving their readers well.
I'm not saying CCC was production-ready, or close -- the total lack of an optimizer would be a killer in any real use, and I assume that there were problems with the diagnostics at least as bad as problems with performance and the include files, for similar reasons -- the LLMs hadn't been asked to optimize for that stuff yet, just test suite correctness. But it did achieve that, and the amount of cope I've seen on social media claiming otherwise is more than a bit disturbing.
1 reply →
The C compiler written by Claude a few months ago was able to compile a hello world.
The main problem, I think, was that it was extremely slow.
I think there's a different lesson to be taken from those cases: the LLM will build to whatever you give it a feedback loop for.
If you give it just the logical tests, it won't consider speed at all. If you include tests that measure speed and ask the LLM to match the performance, it'll do that too.
It's the same class of error as everything else with LLMs: it has no common-sense context for what people consider important. If you don't enforce the boundaries, it will ignore them.
The question is: are our optimization functions well specified enough? (No.)
How important is a well-specified optimization function? No one knows. We will find out.
Discussed here if anyone's interested:
LLMs work best when the user defines their acceptance criteria first - https://news.ycombinator.com/item?id=47283337 - March 2026 (422 comments)
I think the industry is moving to English as the programming language, and specifications + context + TDD as the framework for building software.
Many find it distasteful, and many find it liberating. I think it broadly correlates with how they feel about expressing themselves in English vs., say, C++.
As a side question, is there anyone using LLMs primarily in non-English mode to program? I suspect there are quite a few people using Mandarin; can someone share a first-hand account?
I’m Korean, and I’ve used GitHub Copilot, Claude Code, and Codex. At first, I prompted them in English, but over time I came to the conclusion that using Korean works better for me. It may consume more tokens, but reducing the time spent understanding and correcting the plan is more valuable. That said, when the context gets close to its limit, the responses sometimes include Korean words that do not actually exist.
As an aside, I don’t think the benefits LLMs bring to non-English users are widely understood. I studied linguistics and Russian, and I’m capable of professional interpretation in English and Russian. Even so, I can read technical documents, understand them, and communicate about them much faster and with far less effort in my native language, Korean. These days, I read most English documentation and HN posts through Chrome’s automatic translation. Sometimes the translation is ambiguous, but in those cases I can immediately refer back to the original English. This has been a major help to me and to other Korean developers I work with.
I'm using it 50% English (personal projects)/50% Polish (workplace; reasons being agents.md / team is not that english proficient) and honestly I haven't seen much difference in the output/ambiguity.
Polish prompts tend to be shorter due to the language having a lot of verb forms/conjugations, the only "bad" thing for me is that when it's saying "it broke" it tends to use uncanny / blunt words that make me sometimes laugh.
Interesting. Some questions: would you say Polish is more dense or less dense than English? It's interesting to hear that code quality isn't suffering but the response text is sillier and blunter. Any other discrepancies compared to English?
1 reply →
Same here, been using 50% English and 50% Spanish for months, no particular reason, just whatever feels easier at the moment. Sometimes I even switch languages in the middle of a session. I have not noticed a difference in the quality of the output.
Natural language doesn’t have the precision required for building systems. We already have languages for specifying systems precisely. It’s called “code”…
Well, what we're seeing the past few months is that natural language does - at least enough to build code and tests.
I think it will eventually become its own dialect of English. Telling LLMs what to do works better in not-quite-normal English, and I think this will continue until it isn't recognizable as natural English anymore, but is a new fuzzy programming language (probably more than one).
>Telling LLMs what to do is better using not quite normal English
What are your prompts like?
I believe new (programming) languages will emerge, both for LLMs to parse and take instructions from and for them to generate code in. The former is because English is a nuanced language evolved for human usage, which LLMs don't quite need, its only advantage being a metric ton of training material. The same goes for Rust, Go and the other languages LLMs do well coding in, which all have concepts geared toward human convenience.
I wonder how well Mandarin works for LLM-based programming. On one hand, it's very token efficient as Mandarin script is very dense in meaning. On the other, I suppose this can increase ambiguity.
I can speak, read, and write Taiwanese Mandarin (which is likely relatively underrepresented in the training sets and which, in my practical experience, is materially different in its usage).
The authoritative answer for this question would best come from the millions (or tens of millions) of Chinese-speakers who are currently using LLMs to write software.
However, it is my suspicion that you would see no advantages using any language other than English. While there is a certain token-level density to written texts, it seems the benefits of this (and the more recent discussion around “caveman talk”) are quite limited.
Furthermore, consider that the vast majority of textbooks, technical documentation, blog posts, StackOverflow answers, &c. are originally in English. Historically, where these have been translated to Chinese, the translations have often been of very poor quality (and the terminology and phraseology is often incomprehensible unless you also understand some English.) I would suspect that this makes up the overwhelming majority of the training sets for these models.
That said, my experience using the most recent models, is that they are surprisingly language-agnostic in a way that surpasses readily-available human capability. For example, I can prompt the LLM to translate English into something that uses German grammar, Chinese vocabulary, and Japanese characters, and I'll get an output that is worse than what a human expert could do… but where am I going to find a multilingual expert?
(Of course, I have so far only ever been impressed that a model could generate an output but never impressed with the output it did generate. Everything—translations, prose, code—seems universally sloppy and bland and muddy.)
So what I would anticipate the biggest benefit for a Chinese-speaker today… is that if they are disinterested in working internationally, they have significantly less dependency on learning English.
Character density and token efficiency are different things. The latter is data- and therefore tokenizer-specific: take GPT-5's tokenizer, o200k_base, and run Mandarin text and its English translation through it. Some of the time en will beat zh. I just tested with news articles and Wikipedia.
After all `def func():` is only 3 tokens on o200k_base.
I'm using it in english / albanian. Not much difference really. Impressive.
I use French nearly all the time, it works well. Not that I can't write English prompts, but I find it easier to use my native language.
I'm teaching my kids to be fluent in tokenese
I agree, and those still too focused on code generation for specific languages are fighting the last war.
It is the revenge of UML modeling.
Eventually it will get good enough that what comes out of agent work, is a matter of formal specification.
Assuming that code is actually needed and cannot be achieved as pure agent orchestration workflows.
You really think that's what the positions on either side boil down to, how they feel about expressing themselves in English vs C++? No, that's ridiculous. That's such a wild reductionistic simplification.
Presumably the biggest loser in all this is Zig, I only know of the language because of Bun.
But the timescale still gives me pause… just because AI lets us convert a codebase in 6 days doesn’t mean it’s wise. There are surely a lot of downstream implications! It’s always felt a little like Bun is making up a plan as it goes along (and maybe that’s unfair), this seems to underline the point.
Zig is a great low-level language. It's much better than C, while not being so much larger as e.g. Rust or C++. AFAICT Zig does well in embedded development, and should continue to do so. Note that Zig is not even 1.0 yet.
Yeah, but now they have the reputation of the language that fumbled the ball because of an overly onerous anti-AI stance.
6 replies →
For most use cases I can’t imagine why you’d make the effort to move off C and not just go all the way to Rust.
1 reply →
These tools let you get a massive codebase functional in 6 days. But, presumably, there's no better language to target than Rust (in terms of safety/performance), and therefore the rest of the time can be spent making the birthed-in-6-days codebase better.
But the author said "the code truly works, passing the test suite on Linux and soon other platforms" which just sounds really wise.
> 99.8% of bun’s pre-existing test suite passes on Linux x64 glibc in the rust rewrite
OK, they've got a working prototype, congrats! Now it needs to be put into shape so that all the unsafe blocks are eliminated (maybe with a few tiny exceptions), and the code is turned into maintainable, readable, reasonably idiomatic Rust.
I wonder how long is it going to take.
About 2 months, or 60 days, if we go by the old 90/10 rule.
Not sure that rule is even applicable anymore, but I don't have a better heuristic to make guesses by either.
Maybe it's tokens instead of time now? Bun has access to an unlimited amount of them.
This is the kind of program that would need to have a lot of unsafe even if it had been written in Rust from the very beginning. For comparison, there are about 2600 unsafe blocks in Deno, not counting dependencies.
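For context, the reason a runtime cannot avoid unsafe is that every call across the FFI boundary into a C/C++ engine requires it, no matter how safe the surrounding Rust is. A minimal sketch of the usual pattern (using libc's `strlen` as a stand-in, not Bun's actual bindings):

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

// Calling into C (here libc's strlen, standing in for a JS-engine binding)
// requires `unsafe`: the compiler cannot verify a foreign function's contract.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

// The idiomatic mitigation is to confine the unsafety to a small, audited
// wrapper and expose a safe API to the rest of the codebase.
fn c_strlen(s: &CStr) -> usize {
    // SAFETY: `s` is a valid, NUL-terminated C string for the duration of
    // the call, which is all strlen requires.
    unsafe { strlen(s.as_ptr()) }
}

fn main() {
    let s = CStr::from_bytes_with_nul(b"hello\0").unwrap();
    assert_eq!(c_strlen(s), 5);
    println!("len = {}", c_strlen(s));
}
```

So an unsafe count in a runtime mostly measures how many such boundary crossings exist, not how much of the logic is unchecked.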
[dead]
> Now it needs to be put into shape so that all the unsafe blocks are eliminated
All the unsafe seems to be FFI?
https://github.com/search?q=repo%3Aoven-sh%2Fbun+unsafe+lang...
> and the code is turned into maintainable, readable, reasonably idiomatic Rust. I wonder how long is it going to take.
This isn't a c2rust rewrite?
That GitHub search only covers the main branch, not the not-yet-merged Rust rewrite; the only Rust code in there is tests for Rust FFI (so that people can write native extension modules for Bun in Rust if they want to).
The rewrite's in https://github.com/oven-sh/bun/tree/claude/phase-a-port. By running the following command on it, I count about 14,000 unsafe blocks:
1 reply →
At the very least, it's interesting to be a bystander observing as efforts like this progress. The first thing it makes me wonder is how comprehensive/high quality the test suite is to begin with. Not to cast aspersions, but even at 100% on all platforms I wonder how confident the Bun team would be in migrating.
Bun is going this route because their proposed fix wasn’t great. https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
I cannot imagine this agent rewrite had anyone review any of the code (you can't at that speed).
I’m positive this will go extremely well :p
Fwiw, that's not the stated motivation for the rewrite experiment. In fact, the Rust rewrite is slower to compile than the Zig code when compiled with their internal fork of Zig (though it is faster when OG Zig is used).
I don't want to infringe upon your right to speculate. I just want to point out that your statement is at best a speculation.
I harbor some hope that the (sad) fall of human SWEs will at least be accompanied by language defragmentation. We don't need 38 systems languages once human taste is mostly out of the picture.
Since the LLM craze started, I have always assumed it would end up in a place where programming languages are dead and LLMs generate something more low-level.
Programming languages were always designed as an abstraction to let humans instruct a computer more easily than by writing binary or assembly. If humans write natural language and don't check the generated code, there's no reason to take the hit of generating C, JS, etc. that still has to be compiled and/or interpreted.
If anything LLMs should use something higher level because it compresses the context and makes programming closer to natural language they are trained on.
Forcing LLMs to do a shitty job of what a compiler can do deterministically is not a good approach IMO.
1 reply →
No doubt on my side that the porting was "easy". What I'd find interesting is the ability to maintain and properly care for the code over time through future iterations. Do we eventually end up with a codebase nobody truly understands in depth anymore, where everything is generated and modified through GenAI?
Thanks for sharing.
Yeah, that's my issue with LLM code. If we imagine a future without human programmers, sure, go ahead; we are not there yet, but maybe it's possible.
But if you want it to coexist with humans, it doesn't seem to work well. It gets in the way of human learning and human communication, essentially making professionals and teams weaker.
And here I am trying to get an LLM to add types to a 100k line Ruby repository for 2 days, and it's not going so hot...
I have some experience in this. Reach out (email in my bio) I would love to chat.
A SMT solver may work better.
Will that work if my codebase is filled with nils it shouldn't be filled with, and HashMaps instead of structs with a loosely defined schema, and tuples masquerading as arrays?
1 reply →
This is remarkable. Man, there are all those ancient things that "we've lost the source code for". One time, in a past job 10 years ago we were reimplementing something that was lost to the sands of time, using an out of date spec it had used. It was such a tedious job with verification but we got there. Amazing how easy that would be today.
Are you sure you will be able to spend time playing around that kind of stuff when anthropic/openai/google/etc make you jobless? (well, perhaps not YOU precisely, but 90% of devs, so there’s a high chance).
We always think it’s not gonna hit us… we may be wrong
I don't think this kind of thing works nearly so well without a comprehensive test suite or the ability to easily use the reference version as a test harness. The typical enterprise relic for which no specification or source remains almost surely lacks the former and probably isn't very amenable to the latter.
The Ubuntu coreutils thing last week really soured me on 99.8%-test-compatibility Rust rewrites :|. I clicked through to the tweet linked here and it was kind of like *shudder*. I feel quite the opposite now when I see this kind of thing; I'm like *looking for exit*.
Guys, calm down: this is just marketing from Anthropic, the same as the browser and the C compiler.
What does this mean for Zig?
Few big popular projects use Zig; if they start to move away from it, what will Zig's future look like?
I think the issue is that Zig lost its biggest project, which was a poster-child for real uses of Zig. Worse, the project felt Zig wasn't meeting its needs, to the point that they abandoned Zig and rewrote their entire project in a different language. A really bad signal for anyone thinking of using Zig for a big project. Zig is still in beta, but has there ever been a situation like this, where an upcoming programming language was abandoned by its biggest external project and still went on to be considered a successful language?
Well they haven't lost anything yet. Somebody is vibe coding a rewrite in another language and we don't know much else. The author said he will write a blog post about it soon. So far all we know is it is passing most of the test suite.
But Bun has open issues and bugs. The test suite doesn't tell us whether it has introduced many new bugs, solved existing ones the test suite doesn't catch, or anything else. Not to mention, the rewrite is 960K lines that nobody understands. How long will it take for the Rust version to be better, and be understood as well as its current maintainers understand the Zig version?
Having a project consider a rewrite isn't so big a deal. Zig has been designed from the ground up with a vision, and isn't worried about taking a while to create a stable API to achieve that vision. The self-hosted backend shows how incredibly fast incremental compilation is when the language is built for it ground-up. Compared to other languages that implement weaker forms of incremental compilation it isn't even close.
I don't think the Zig team is concerned at all.
7 replies →
> I think the issue is that Zig lost their biggest project, which was a posterboy project for real uses of Zig.
Bun, Ghostty, and TigerBeetle are 3 popular projects that I have heard about using zig.
1 reply →
Is it lost already? Did Anthropic already say the new LLM-generated thing is the way to go for the future?
Nobody knows. Here's my two cents.
Zig is a very interesting LOW level language, but honestly I think it should be considered for what it is: a better C. I don't think it fits for anything that someone would have written in C++, Java, Haskell or C#. Instead, Rust is competitive with all of these languages when it comes to safety, abstractions and speed. And also C and Zig itself.
Zig has a couple very interesting ideas that make it stand out: comptime and the zig build system.
Alas, Zig is still far from being stable. Rust came out to the public in 2012 and became stable (1.0) in 2015. Zig came out to the public in 2016, and it's 10 years now and someone says it's still years away from 1.0.
So, while Rust took 3 years of public development to become stable, Zig is taking 10 to 15. I love the language, but TBH I don't see a great future ahead, especially with LLM advancements that can use safer languages to do the same work. There's no point in risking more memory bugs when the effort for writing code is the same.
Honestly I think, at least to the Zig community, Bun isn't the biggest name we'd think of. There's been some philosophical friction between the Zig project and Bun (Zig is pretty anti-AI and favors methodically thinking through problems, while Bun is more move-fast-and-break-things). I think TigerBeetle is a better representation of what Zig can do. TigerBeetle is fuzzed within an inch of its life, and is absolutely rock solid. The people who work on it are brilliant programmers who care a lot about correctness. They find that Zig lets them express their ideas succinctly, while still giving them the needed power.
When I read about Bun, I get the sense that Jarred has different priorities, mainly moving quickly. Bun also implements a lot of userspace APIs, since the core engine is JavaScriptCore, which is written in C++. I think Rust really shines in applications programming, so I guess it makes sense that Rust has lined up with Jarred's needs. I'd be interested to see what JavaScriptCore would look like in Zig versus Rust; I think Zig might have an edge in the core interpreter and JIT.
It means nothing for Zig. Zig isn't even out of beta yet.
This is like when Aaron ported Reddit over from Lisp to Python
meaning it doesn't matter except for online discourse about X being bad for 2 days
Jarred has already said on Twitter that this was only an experiment for comparisons and very, very unlikely that they'd switch to Rust.
https://news.ycombinator.com/item?id=48077663
There is no way a port this massive will have human code reviews.
If this succeeds, there is no stopping AI, given it will have crossed the Rubicon of human bottlenecks.
Serious question… Who’s going to want to run a vibe coded runtime in production?
I don’t see how this is a good look for Bun?
One should care more about the tests than about how the code was written.
If I had a codebase with lots of tests and asked someone else to rewrite it to another language passing the same test suite, I honestly wouldn't expect a great quality job.
I say this because it happened 3 times in the company I work for: we conducted experiments by tasking different companies to rewrite the same code in another language. All of them passed most of the tests, but code quality was low. If the job is a black box, rely on the I/O to determine quality, not the inner workings.
I care that runtime developers know and understand their codebase deeply. 1M LOC written by 1 dev in a short time does not inspire confidence in such an important dependency.
There's no way this code is understood fully by the original author, let alone anyone else. I wouldn't accept this from an intern, let alone in code that's fundamental to my business.
I have seen, many times, code that has lots of tests but doesn't work.
Why?
Some of the patterns that I saw:
* The code is only called from tests but never called in production
* Tests are not testing the actual application logic, or the logic that matters. In some cases, the tests have nothing to do with the application code at all, because it does not even run any application code.
* Tests repeat the same logic as the application (a tautology). All the time.
* Application code is actually incorrect. But tests just end up using the wrong expected value to make tests pass, disregarding what should happen.
That's using the latest models.
To make things better, apparently people never bothered to go through the manual workflow at least once to verify the behavior.
Good luck just relying on tests.
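A minimal sketch of the tautology pattern above (hypothetical names, not from any real codebase): the test derives its expected value with the same buggy formula the code uses, so it can never catch the bug.

```rust
// Bug: a percentage discount is subtracted as an absolute amount.
// The correct formula would be: price * (1.0 - discount_pct / 100.0)
fn discounted_price(price: f64, discount_pct: f64) -> f64 {
    price - discount_pct
}

fn main() {
    let price = 200.0;
    let discount_pct = 10.0;
    // Tautological test: "expected" is computed exactly like the
    // implementation, so the assertion passes and the bug survives.
    let expected = price - discount_pct;
    assert_eq!(discounted_price(price, discount_pct), expected);
    // A correct expectation (180.0 for a 10% discount) would fail here.
    println!("test passed, code still wrong");
}
```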
3 replies →
Testing shows the presence, not the absence, of bugs. - EWD
All tests overspecialize and are easy to cheat; there is no "the program works" test.
I think you overestimate the number of people who care how the software (or any product really) they use is made.
I just see a ton of reflexive AI hate here. I don't care if it was vibe coded, if it passes the entire test suite and was vibe coded by the original authors, I trust it as much as the original Bun. These are Jarred's words about it:
> it’s basically the same codebase except now we can have the compiler enforce the lifetimes of types and we get destructors when we want them. and the ugly parts look uglier (unsafe) which encourages refactoring.
> why: I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.
This makes me trust it more, not less.
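As a generic illustration of the quoted point (not Bun's actual code): in Rust, "destructors when we want them" means the `Drop` trait, which runs deterministically when a value leaves scope, with no manual free call to forget.

```rust
// Hypothetical buffer type; names are illustrative only.
struct Buffer {
    data: Vec<u8>,
}

impl Buffer {
    fn len(&self) -> usize {
        self.data.len()
    }
}

impl Drop for Buffer {
    // Runs automatically at scope exit; a forgotten cleanup call,
    // a common source of leaks in manual-memory languages, can't happen.
    fn drop(&mut self) {
        println!("freeing {} bytes", self.data.len());
    }
}

fn main() {
    let buf = Buffer { data: vec![0u8; 1024] };
    println!("using {} bytes", buf.len());
} // `buf` is dropped here; the Vec is freed without any explicit code
```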
I still don't understand how people consider Bun a viable runtime when it's owned by an evil corp trying to use it to capture the tooling layer, and it's the most insecure runtime on top of that. Meanwhile, Deno is performant and has dramatically improved Node compatibility while exposing a proper permission-broker API.
The branch in question: https://github.com/oven-sh/bun/compare/main...claude/phase-a...
STOP Analyzing.. Now rewrite the Linux kernel in rust. DO NOT MAKE MISTAKES, then post it on Hacker News.
---
The Pareto principle is in play here. It might take years to get that last percentage point.
Good enough for a side project, not good enough for migrating a banking system off COBOL.
That is actually what companies like IBM and Unisys are already doing today, LLM assisted porting.
https://research.ibm.com/publications/enterprise-scale-cobol...
Why not? I think we are perfectly capable of generating a test and validation environment that we can use for correctness. Most likely LLMs could do this better than engineers with zero-to-no domain and language knowledge can these days. From that point on, rewrites would become feasible (not easy, feasible).
If this goes through, it feels like it will stoke rust on zig violence
I just wish the camps would stop being as tribalistic. I see a broad spectrum of fights between any "better C" language and Rust enthusiasts. There is room for both of these things. Just use what works for you. Rust is a bit more like Ada in spirit, it introduces a lot of friction compared to "C like" things which gladly accept you blowing your leg off. Each tool has unique benefits, and is uniquely suited to different problems.
If I'm building a simple GUI app, I'm not sure the friction from Rust is all that worthwhile. If I'm sending someone to space, I think I'd rather have the safeties of a Rust or an Ada, or MISRA C.
I really don't think so. Bun was using their own language, that forked from Zig 0.14. It's not like the communities interacted much. All of Bun's code was their own internal code, it was not part of the Zig ecosystem. I don't see how this could have any impact in the Zig community at all.
Sadly, yes. I feel too much "violence" on both parts.
Honestly, the Zig community seems the most bitter for whatever reason, while on the Rust side it seems to me that they simply overstate how great the language is and are pushy in trying to convince others of their ideas.
If this goes through, we can all take SWE lessons from it, but I think the communities will suffer.
3 years from now: Linux ported to Rust in 6 days.
And on the seventh day Claude ended His work which He had done, and He rested on the seventh day from all His work which He had done
That's a fun point. I honestly don't think it will happen in 3 years, but I think it will surely be doable in 10.
More interestingly: will we need to care about the code at all, at that point?
Obviously there is a huge trend of "rewrite X in Rust". I understand why, Rust is a huge improvement in safety and speed.
My question is, to people even older than me (and I'm certainly not young), does anyone remember this much enthusiasm about people rewriting C code into (C++/Java/Whatever was new and hot)? Because I don't, but maybe I missed it.
I recall C++ OOP being the new hotness when I started out and C was always contrasted as the old & busted example. Kind of the "Everything-as-an-object will simplify everything" phase. Windows MFC was the new way, then STL.
Java WORA write once, run anywhere was definitely a thing when it came out. Java Applets came out of the woodwork and were the WASM of their day. Even Cisco ran Java for their router UI for a while, which was painful.
More recently, HN went through a period about 10 years ago where every other article ended in " ... written in Go".
The mantra may not have rhymed with "rewrite X in Y" but the spirit was there.
> every other article ended in " ... written in Go"
What happened to that: is Go no longer considered great / popular?
1 reply →
There were no good options previously. It was either C or C++. Most of the other languages were either fringe or had a GC, or had a pseudo runtime GC (Swift). The culture of Java and C# and Go didn't really support the type of low level optimizations needed, even though you could technically do system programming if you restrict yourself to a specific subset of language and cut yourself off from most of the standard library and ecosystem. Nim was unstable. OCaml had the same issues as Go and Java and C#. You simply did not have any options until Rust came along. Oberon was an academic trinket. The less said about the various lisps and forths the better.
OS and embedded programming require bare metal support and data structures that can run standalone in the absence of an OS and standard library, and the ecosystem must exist to support such a style of programming.
Currently Rust has over 10,000 crates that would theoretically work just fine in a kernel environment.
https://crates.io/categories/no-std
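As a rough sketch of the point about standalone data structures (hypothetical code, not from any real kernel): a fixed-capacity structure that touches only `core` items, with no heap and no OS, is the kind of thing that can live in a `#![no_std]` crate.

```rust
// In a real kernel crate this file would begin with `#![no_std]`;
// the logic below only relies on `core` features (arrays, Option, Result).
struct FixedStack<const N: usize> {
    buf: [u32; N],
    len: usize,
}

impl<const N: usize> FixedStack<N> {
    const fn new() -> Self {
        FixedStack { buf: [0; N], len: 0 }
    }

    // Returns Err(()) when full instead of allocating: no heap required.
    fn push(&mut self, v: u32) -> Result<(), ()> {
        if self.len == N {
            return Err(());
        }
        self.buf[self.len] = v;
        self.len += 1;
        Ok(())
    }

    fn pop(&mut self) -> Option<u32> {
        if self.len == 0 {
            return None;
        }
        self.len -= 1;
        Some(self.buf[self.len])
    }
}

fn main() {
    let mut s: FixedStack<4> = FixedStack::new();
    s.push(7).unwrap();
    assert_eq!(s.pop(), Some(7));
    println!("heap-free stack works");
}
```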
Kind of the opposite, I was deep in the R world a decade ago and there was a huge trend of replacing Java dependencies with C/++ ones because the JVM was such a pain to manage. The community eagerly adopted the replacements about as soon as they existed.
>this is a 960,000 LOC rewrite, the code truly works, passing the test suite on Linux and soon other platforms
I wonder how much of this is original size vs rust requiring verbosity vs the LLM being verbose in general.
Not a criticism; I do believe language translation is the one field where AI is mature enough to nearly one-shot projects.
I suspect that the test suite isn't that great, though. Bun has so many different behaviors compared to other JS engines, sometimes just plain wrong or contradicting the spec. The test suite didn't catch those. Not sure how much I trust the rewrite :)
Notably, Bun is not a JS engine. JavaScriptCore is the JS engine. Bun is just a complicated wrapper around it.
With the amount of changes they've made to WebKit, I honestly don't think we can claim it's just JSC..
https://github.com/oven-sh/WebKit/commits/main/
I'm looking forward to the race to the bottom on tokens spent per unit of work done.
Interesting that ports can be written so quickly with AI. But that aside, I have to ask...why? You want a super performant bundler/runtime/package manager written in rust with TS support, Deno has this already.
Has anybody thought through the legal aspects of this, regarding code ownership?
As far as I understand the situation in the US (sorry, no idea where he is located), output from LLMs, once published, is essentially in the Public Domain, since there isn't any human who owns it.
However, in some sense, this is also a machine-assisted translation from one computer language into another, so one could argue that the ownership of the original code base still applies to the new one.
Which one is it? Is there any way to find out before a similar case goes to court?
> output from LLMs, once published, is essentially in the Public Domain, since there isn't any human who owns it
That’s not what the court case in question was about: https://www.morganlewis.com/pubs/2026/03/us-supreme-court-de...
If I ask an LLM to come up with an entirely new story on its own, the output is not copyrightable.
But if I feed an LLM a Tom Clancy novel and ask it to regurgitate that same novel, I cannot legally then put the output on a website for anyone to download.
Have an AI rewrite one of the Microsoft source-code leaks in Rust and publish it as open source on GitHub. We will soon find out what the answer is.
I love Bun & Zig, and this feels a bit like my parents are getting a divorce. I thought it was a bit strange that Bun did not sponsor the Zig foundation while other, much smaller companies have.
Are you kidding? IIRC Oven gave $5k/month to Zig for years. And btw that was before they got acquired for billions, when they had no income at all.
Yeah, that tracks according to the numbers.
https://ziglang.org/news/300k-from-mitchellh/
https://ziglang.org/news/2024-financials/#income
https://ziglang.org/news/2025-financials/#income
I had a bit of trouble finding it myself but Claude proved a better Googler than I
Alright, my bad, I did not find any info about this. But still, they are no longer mentioned as a sponsor.
Obviously bun having been acquired by Anthropic changes the arithmetic a bit, but I'd love to see the token cost/consumption of this initiative.
That's amazing. Over time I got a few memory-related crashes w/ Bun, but I have deep respect for the performance work put in. Hopefully Rust's compiler will help even more.
Off-topic: I'm wondering, now that more JS finds a place on our machines and bundle size is secondary for most, whether a revival of Prepack or projects in the same vein would be worth it, especially with agents.
How many tokens did this port consume?
Bun is owned by Anthropic and so has access to Mythos & unlimited tokens.
The answer is... more than any of us could likely afford.
would be fun to do zig -> rust -> zig and to measure the delta
(in a VAE-ish way, kl div on the embeddings?)
also feels like a good posttraining task
Were there perhaps [licensing issues](https://www.phoronix.com/news/Chardet-LLM-Rewrite-Relicense) with the original?
Responses seem to be very "either or" as usual on such topics.
I think it should be possible to appreciate how impressive this is on one hand, while also discussing the limitations of the approach.
Everyone can probably agree that getting this far without LLMs would have taken substantially longer and required a huge amount of work.
But what is then the end result?
Personally, for me it would still be a hard pass on using a 1M LoC LLM-migrated language runtime. I have seen CC do enough crazy things to still be wary of any code without a human in the loop. It simply plays too fundamental a role in the tech stack. Others might feel differently, and time will tell how things play out.
Even if this does play out as optimistically as one can imagine, would it then mean I can go and migrate some of my enterprise codebases the same way? I doubt it.
Bun has the nice feature that it has an extensive set of black box / E2E tests that don't themselves need migrating. Most projects in the wild seem to be much more reliant on unit and integration tests that are part of the codebase itself, and would therefore also need to be migrated and be subject to mistakes in the migration process.
It also seems fairly rare that test suites are good enough to guarantee that the program will work as expected in all cases. I am yet to come across a larger enterprise codebase where the tests were good enough to make human review and even manual testing fully redundant. To be honest I doubt that is the case for Bun either, but I don't know enough about bun to conclude that.
> and crashes and stability issues
inb4 .unwrap() / slice / etc hell + livelocks & deadlocks + resource leaks & toctou bugs + larger exposure to supply chain attacks
Still, ~1M LOC ported in a work week (400 LOC/min, wtf?) with almost all of it working is pretty wild. I hope the guy managed to maintain normal function, 'cause I found that getting into the flow with AI is even more self-consuming and intoxicating than without it, which was already potentially rather rough.
At 100 agents in parallel that's 4 LOC/min each, and 100 agents is a lower bound on what they had access to.
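For what it's worth, the throughput figures in this subthread check out as back-of-the-envelope arithmetic (assuming a 5-day, 8-hour work week and the reported ~960K LOC):

```rust
fn main() {
    let loc = 960_000.0_f64;
    let work_week_minutes = 5.0 * 8.0 * 60.0; // 2400 minutes
    let loc_per_min = loc / work_week_minutes;
    println!("{:.0} LOC/min overall", loc_per_min); // 400
    // Divided across a hypothetical 100 parallel agents:
    println!("{:.0} LOC/min per agent", loc_per_min / 100.0); // 4
}
```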
It's not so much the agents' througput I'd be worried about, more meant to imply that at such speed, large parts of this are going to be pretty much just guaranteed unsupervised / unchecked completely. Like literal "LGTM + god bless + fuck it we ball" tier.
Interesting! I wonder how the performance is compared to the Zig version
Kinda crazy to use AI to switch from zig to rust in a tool that runs js. Bin bun and use a real lang to begin with. No reason to have that extra layer anymore.
Lol, I had a similar thought as well, but more along the lines of "We're coming for you next, JavaScript!"
But the effort is certainly an exquisite rearrangement of the deck chairs, no?
Bun runs TypeScript directly without external tooling.
bun script.ts just works.
Otherwise I bet it wouldn't even be a blip in our radar.
What license is this? Let me guess, it is not GPL...
Unlike the GNU coreutils rewrite in Rust, the Bun rewrite in Rust is being undertaken by the owners of the project.
That said, yes, you’re correct that Bun isn’t GPL: https://github.com/oven-sh/bun?tab=License-1-ov-file
Hmm, that's unfortunate. Why does so much Rust stuff seem to default to MIT/BSD? Just because Mozilla used that for most of the Rust stuff?
Do developers using Rust even know the difference? Like how anyone can basically take all your work & base a proprietary fork on it, with maybe saying "thanks" (attribution) if they feel like it? :P
3 replies →
Your guess is correct! Congrats. Bun itself is not GPL either, by the way. Oh, and the Rust compiler isn't GPL either.
It's going to be interesting to see how this holds up in production after a release or two.
Xunroll's "view on X" truncates the first character of the username.
The flagship product is both the cash cow (subsidizes rewrite) AND the labor (amortizes? rewrite).
Now in Zig, Julia, Nim, Crystal. I just love programming languages.
But in all honesty, I don't understand the extremism in Rust engineers that reject any other language.
Steelmanning a Rust programmer's argument: memory issues have a very large blast radius, as the bugs tend to show up in completely unrelated places. Because of this, dependencies written in languages other than Rust can easily corrupt unrelated areas.
I feel like one of Rust's defining philosophies is modularity, in the sense that each module should be self-contained, and have clear boundaries. This can come up as an assumption behind their arguments imo.
I think it was Rich Hickey who said "Programmers understand the benefits of everything, and the costs of nothing."
I'm also reminded of video game forums where everyone argued whether the Xbox or Playstation is better, not because they're genuinely interested in the pros and cons of each system, but because they only have an allowance to buy one of them, so they're trying to gaslight everyone and themselves into believing the one they picked is better. In the case of programming languages, there's only so much time in the day, so the people who post on this site go all-in on the programming language they picked, and will rationalize any reason they can think of to believe the language they picked is better.
Curious how the test suite was applied. Was it ported from Zig to Rust beforehand?
Almost all of Bun's tests are written in JavaScript run in Bun itself.
Deleted
@simonw explains how hilariously misguided that paper is in one of the top comments, and how it doesn't apply remotely to a real agent harness. Plus it's not even clearly relevant here, because the model isn't trying to regurgitate the original document but to generate a new one, and there are guardrails to put it back on track in the form of a compiler and tests. Also, the test suite is very thorough and pre-existing, and the vast majority already passes. This is skepticism for the sake of it.
Perhaps you can elaborate on how your comment is relevant to the Bun's experiment here.
Is this a manual rewrite or autogeneration?
They could also do a rewrite of CC itself to Rust.
When is someone going to do a Linux Rust rewrite?
It’s relatively easy to get a basic Unix-like kernel together. Hardware compatibility (and the associated testing) is where it gets hard.
Do scala.js next
Explain it for dummies. Isn't Zig a programming language? Why are they rewriting a programming language in another programming language?
They're not rewriting zig. They're rewriting bun, which is currently written in zig
best way to kill an open source project in 2025 - use AI to port it to Rust.
Bunner
The fastest large-scale rewrite in the history of software engineering, likely
Jarred's post is singlehandedly shitting on Zig's reputation. Not good juju for him to post like that.
"I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues"
bun was zig's poster child. if it moves away, it becomes yet another random language like nim or crystal.
I'd feel better to have that kind of person out of my community.
First of all, did he not pick the language for Bun himself? Then he introduced a bunch of memory bugs; sounds like a skill-issue cascade.
I remember them touting in a podcast some years ago how amazing Zig was in letting them be so performant, which was Bun's claim to fame; now they turn around and shit on the thing. Interesting persona.
will this mean opencode is finally portable?
There is some really cool work to port opencode's underlying opentui to Node.js, including some new FFI work in Node itself that got merged (called... drum roll please... node:Ffi)! Really cool stuff. https://github.com/anomalyco/opentui/pull/939 https://github.com/nodejs/node/pull/62762
Also worth noting that opentui is... Zig!
Very unclear what it's going to take to get this reviewed and shipped, but some very high potential. I've seen some other changes going by in opencode for node.js compatibility; I'm not sure what besides the tui has Ffi needs that might be gating; maybe nothing!
Merge with Deno
This is a good reminder that tooling choices compound over time. The short-term speedup matters less than whether the next maintainer can still reason about the system.
Being an Anthropic-acquired project, does he have access to Mythos, or is it the normal Claude we plebs have access to?
This is entirely possible with Claude as it existed even last year.
The LLMs are quite good at re-writes and even better when provided an 'oracle' like a well rounded test suite or existing implementation to work against.
It's part of the reason we keep seeing "I rewrote <library> in <language>" posts on Hacker News, and when you look at the repo it's more like "I prompted Claude to rewrite this repo in Rust" or whatever.
As an Anthropic acquihire, not only does he have access to every model and service but he probably has infinite tokens available.
Bun powers Claude.
Also, isn't it a great ad for Anthropic itself? One wonders
Indeed, knowing the amount of tokens spent would be very interesting.
Bun alert!
Obviously LLMs would identify as binary.
(But also, low effort meanness is bad, HN strives for better in both dimensions)
> absolute position of hating something such as AI and progress
Most takes I've seen are far more nuanced.
Key is that 'progress' has a positive connotation. It is different from change. Mere change - such as new inventions - may not necessarily be aligned with progress in a field, society, etc.
Change may be inevitable, but it's up to us humans to sculpt it into progress.
But I am talking about Zig and others who have the same stance. Zig has a very strict No LLM / AI contribution policy and it likely got in the way of the Bun maintainers at Anthropic. From [0]
>> No LLMs for issues.
>> No LLMs for patches / pull requests.
>> No LLMs for comments on the bug tracker, including translation.
[0] https://codeberg.org/ziglang/zig#strict-no-llm-no-ai-policy
1 reply →
That's true, but the author might have decided on his own. Not everything is a marketing plan.
Meh. I prefer Java, all hours of the day, every day of the week.
> why: I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.
As expected. Modula-2 / Object Pascal-style safety was great during the last century, before automatic resource management and improved type systems became common in this century.
Naturally also have to note, wasn't this supposed to be only an experiment, nothing serious?
An update on Bun’s experimental migration from Zig to Rust:
The Rust rewrite now passes 99.8% of Bun’s pre-existing Linux x64 glibc test suite.