This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
I’m curious to see what a working version of this looks like, what it feels like, how it performs, and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.
It is a pity that you can't make an experimental commit on an experimental branch without igniting a fire of delirium through some people who -- if they were able to put their emotional response aside for a minute and could weigh this up on the basis of merit -- would probably agree with the motivations for researching this approach.
> if/how hard it’d be to get it to pass Bun’s test suite and be maintainable
Every month brings new opportunities to completely abstract away the process of porting code with agents, all through natural language. What an exciting time.
For those looking for a similarly interesting (and interestingly similar) example, see Cloudflare's port of Next.js[0], "vinext", from a couple of months ago. It had some teething problems at the start but I'm using it in a few production projects now with minimal issues.
I am the topic starter, and I had no emotional response; I was just being curious. I never expected it would land at HN #1. I specifically posted the link to the first commit and not to the whole branch, because currently the prompt is the most interesting part.
It’s annoying for the team members I suppose, but to be fair, if you’re working on a high-profile open source project, owned by one of the most hyped companies in the world, and your branches are public, it’s probably a good idea to be clear in the branch naming and supplemental files if you’re just “experimenting”.
By working in public on a popular open source project, you are communicating intent and purpose to your users and the general public through your commit messages, branch names, and documentation. You’ll save yourself a lot of grief if you act accordingly.
The fact someone who works on Bun is willing to create and even push a branch generated by a stochastic parrot is very telling of the direction the project is going.
Doesn't matter if it's "experimental", it's a dumb experiment that shouldn't exist.
I love your work on bun. How do you feel about all the constant concerns being raised about the quality of the project lately? I understand some of them might just be typical twitter hate but some of them are real. And I think people are right to question why you are adding image processing or web views inside a javascript runtime when there are bugs affecting production that sit unaddressed. For example, one of our biggest blockers right now is https://github.com/oven-sh/bun/issues/6608 which was reported in 2023, still affecting us 3 years later.
When you start getting hate, you’ve made it. Up until then you’re a hypothetical that people like. Maybe they’ve built a side project with you or read the docs. You only get hate when people have used your tool and butted up against limitations. We saw this with Deno too, where they went from beloved potential savior to realistic, limited tool. Hate is good. It means people rely on you.
With AI agents and how good they are at "language translation" tasks against an identical target with a comprehensive test suite, you end up doing these things out of curiosity. The AI agent has the originals to test its assumptions against, too.
I've had surprisingly good results from getting AI agents to take a script in shell, python or typescript and have it translate it into those other programming languages, including rust versions. Or swapping from one build system to another.
While you are here, can you elaborate on the method chosen? For example, why not write a conversion script for phase A? The same Anthropic model would produce one in no time, prompting for it takes the same cognitive load, but you would get a deterministic result.
Thank you, Jarred, for your work. It’s unfortunate to see so much backlash toward legitimate research. Bun is often seen by some as “the flagship project for zig” - especially among those frustrated with rust who want zig to "win over rust" for whatever reasons. At the end of the day, you should do what makes the most sense for your project and your circumstances, regardless of the language or tools involved.
Personally, I find this experiment interesting and I’m curious to see how it develops. Writing idiomatic rust requires a shift in mindset, so it’ll be worth watching how well LLMs adapt to that over time.
I can only speak for myself... but I've found at least Claude Opus to handle Rust very well, and in my own use cases WebAssembly (wasm) and FFI for interoperation with TS/JS has been pretty smooth.
You can view it as an overreaction, but also as a sign that your work is significant. It impressed some, and scared others. In any case, you made something interesting.
You're replying to the original author of Bun. Given the usage of Bun, and the fact that his company (primarily him, actually) was recently acquired by Anthropic for what I'm guessing was a bajillion dollars, I think he probably already knows his work is significant and that he made something interesting.
Might be a good idea to let AI handle social media. I'm not saying you're doing it badly, just that it doesn't seem worth the energy drained by doing it manually.
this is lovely; how admirable that you have the space to do this. it's very rare that we as a community take the time to actually implement a non-trivial system in X and Y and look at the differences. so much discussion around these things is based on pointless tribalism.
I'm sure recasting Bun in a new mold is going to be hugely informative about the structure of Bun itself, regardless of the outcome.
Advice for the future: experiments should be explicitly tagged as such. The commit message "docs: add Phase-A porting guide" says nothing about the experimental nature and looks like a planned move to Rust. That message certainly looks very official to me.
> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
Trying to pass off a blunder like this as no big deal is an insult to your users. You made a dumb mistake. Own it, be transparent, and correct the problem that started this; namely, put some form of experimental tag in the commit message. Then say you made a simple mistake, sorry, and move on. Being dismissive is a defense mechanism that can arouse suspicion, as in: are you now lying about the experimental state to quench the flame war? Not that I believe that, but it can certainly become conspiracy fodder. Again, you can avoid all that with transparency.
Or we can stop being toxic to open source maintainers and acting like we own them or they owe us anything.
A commit message on a random branch is not an obligation. Not telling random internet users what side projects they're working on is not a blunder. It quite frankly doesn't matter what you think looks official, it doesn't give you the right to treat people like this.
It's so embarrassing to be a programmer sometimes; so many of my peers behaving like spoiled, rotten brats.
Most of Bun’s code is already written by LLMs. If you feel that way, it’s already been too late for a while. Furthermore, we’re talking about a million line port done in a couple of days. The question of whether it’s worth the time looks extremely different if done by hand. It would take a year.
I think the criticism is still valid to an extent, because I don't see how this would give you a good way to evaluate Zig vs. Rust. Maybe a better approach is to migrate a particularly problematic space and bench that on its own?
It's not like OP asked for any criticism to start with, right? This whole thread is a pretty good example of why the saying "Fools and children should never see half-finished work" exists. ¯\_(ツ)_/¯
Will you have a way to measure the ecological impact of making such a throwaway attempt?
Not actually pointing at you or anyone in particular here, to be clear. And if the answer is "not much more than forgetting the light when leaving the toilets", that would certainly earn a "go have fun" cheer on my part.
But otherwise we collectively have to keep in mind that the prompts we can throw out mindlessly, without perceiving any direct negative feedback, are possibly not anodyne.
So if you can measure it, come back with those numbers too, so we can all take them into consideration the next time the thrill to run it just to see what happens rises in our minds. Thanks.
> Showing 1,808 changed files with 790,916 additions and 151 deletions.
Just looking at the git diff [0].
I looked at one of these rust port files [1]. It's 827 LOC and apparently 7,576 tokens. So that gives you a first-order guess that the full ~790k additions are around 8 million output tokens. Obviously there are some tool calls, reasoning, reads of the zig version, and fixing compile errors as overhead. So I would guess maybe this is like 40 million tokens by multiplying by 5?
If we guess that is around $200 to $500 in token spend. We can probably guess that it emits around the same as buying $100 in gas? Or like 50 or so kgs of CO2?
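The back-of-envelope above can be written out explicitly. A rough sketch using only the numbers quoted in this thread; the ×5 overhead multiplier and the ~$10 per million tokens blended price are guesses, not measured figures:

```python
# Rough cost estimate for the port, using figures quoted in this thread.
added_lines = 790_916              # additions reported in the diff stats
tokens_per_line = 7_576 / 827      # one sampled file: 827 LOC ~= 7,576 tokens

output_tokens = added_lines * tokens_per_line   # ~7.2M tokens of emitted code
total_tokens = output_tokens * 5                # x5 guess for reads/reasoning/retries
cost_usd = total_tokens / 1e6 * 10              # assumed ~$10 blended per 1M tokens

print(f"~{total_tokens / 1e6:.0f}M tokens, ~${cost_usd:,.0f}")  # ~36M tokens, ~$362
```

That lands inside the $200 to $500 guess; every input to it is soft, so treat it as an order-of-magnitude figure at best.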
Less than the impact of people who can't be bothered to remember basic historical facts or directions in terms of hitting Google services dozens of times a day across the population.
Probably less than the impact of having dozens/hundreds of actual developers, each with a dedicated computer running for months/years in what it would take for a similar effort.
If you want to go live in the woods and farm/hunt for yourself, feel free. I'd suggest you stay away from the museums with paint and not glue yourself to a car mfg.
Interesting to see this when the current top post on HN is someone worrying about Bun as it was acquired by Anthropic. The top comment there says “Anthropic does experiments on their own codebase, the Bun team is not gonna do the same vibe coding experiments”.
Yet here we are, what looks like a massive undertaking for vibe coding.
Time will tell how this will turn out. Would be nice if the Bun maintainers could give some clarification about what they’re doing here, and why they’re doing this.
They recently tried to upstream an improvement to zig, but were prevented from doing so because zig has a hard and fast "no AI code" rule. Whether you think this response is trying to put pressure on zig or whether they're just moving for practical reasons is up to you.
I don't see why they think it would work when the reason their patch set was rejected was because it was not correct, did not go in a direction the Zig authors were interested in and is also in an area where they are already working hard on improvements. It would have been much better if the bun team joined forces and helped out instead of vibe coding a broken PoC patch that never can get merged. Compilation speed is one of the current main focuses of Zig and changing the type system to make that possible was a big part of 0.16.
Anyone can hack up a quick PoC, even without LLMs, the hard part is writing code that is correct and maintainable.
Makes me wonder why zig announced the strict LLM rule recently. I'm afraid one reason could be that zig doesn't want to accept code from the bun fork in the first place (because of LLM usage, deviation and other reasons)
The Zig maintainers did a pretty in-depth review of the PR, and laid out multiple technical reasons for why it would not get merged. They did not reject it simply for being vibe-coded (though that is likely the cause of it sucking).
Yeah, now that I think about it, having a major project written in a language that doesn't accept AI contributions now owned by a major AI company was a recipe for dis... er, conflict.
I'm not a huge fan of Rust, but I guess having a project like Bun in an actually memory safe language is probably a win? Guess it depends on how good Claude is at writing Rust code...
Read the previous discussions on the topic. Your summary is a sensationalist lie, since their change was apparently a smoking pile of hot garbage, and Zig already had similar performance gains in a newer release.
Probably more about going with the native language that is reliable and battle-tested. Rust runs in Firefox and in production at several major orgs; this is not surprising.
> what looks like a massive undertaking for vibe coding
fwiw, I suspect it's less of an undertaking than you may think. I've been playing with AI to rewrite Postgres in Rust[0] over the past couple of weeks and I found the AI to be exceptional at doing rewrites. Having an existing codebase you can reference prevents a lot of the problems you have with vibecoding. You have an existing architecture that works well and have a test suite that you can test against
Over the course of a month I've gone from nothing to passing over 95% of the Postgres test suite. Given Jarred built Bun, I bet he'll be able to go much faster
> I suspect it's less of an undertaking than you may think... having an existing codebase you can reference prevents a lot of the problems you have with vibecoding.
I do not know if there's any overlap between these teams, but it seems like Anthropic itself is fairly invested in the Rust ecosystem.
They recently proposed some of their internal tools to be the official Rust implementation[0] of Connect RPC[1]. As a protobuf based library set, this includes a new Rust-based protobuf compiler, Buffa[2].
Zig is a moving target. 0.15 -> 0.16 includes some massive structural changes concerning IO and async/threading.
Claude has absolutely no idea what it's doing with bleeding edge zig unless you feed it source and guide it closely (in which case it's useful for focused work) - I'm building a game engine & tcp/udp servers with it and it requires a hands-on approach and actually understanding what's being built.
I imagine these are not really concerns with rust at this point.
In my ideal world the team behind bun would be putting in the work to keep up with modern zig, but it's starting to look like they are running mostly on vibes in which case rust might be a better choice.
I would expect all LLMs are going to be better at Rust than Zig - a strong, thorough compiler will simply prevent more mistakes, and the benefits of a "simple" language decrease the larger the code base gets. The more abstractions exist, the less valuable "no hidden control flow" or "no hidden allocations" from the standard library get, and that's before you add the mother of all abstractions: vibe coding.
But why should they? This just seems like the groundwork for an initial refactor and moving from one language to another. They haven't actually committed to switching from Zig to Rust yet. I mean, I get if you are an investor and you want to see if they are using their time effectively, but why would it matter to anyone else?
Lots of people, me included, heavily invested their time and expertise into Bun, using it as a daily driver, to bundle production code or even using it in production as a JS/TS runtime. Of course, we are interested in Bun to stay a useful tool. The Anthropic acquisition was worrying enough on its own.
They’re not required to do so, but like I said, it would be nice, because it removes a lot of speculation. And development is in the open, so people notice what they’re doing.
To be fair, this seems to be Bun's original creator themselves experimenting. It's unclear if there's any relation to the Anthropic acquisition, but I think it's best we refrain from speculating prematurely when we just don't know.
Honestly, this kind of thing seems to work quite well with vibe coding. If I remember correctly, the Ladybird JS engine was "vibe-ported" to Rust as well, and it passed 100% of the original test suite, in addition to new Rust tests.
anthropic just wanted "codex"-like bragging rights: codex is developed in rust, so now they are going to write bun in rust, and then claude code can claim to be built on rust.
I think the definition of vibe coding is a bit fluid, in this case I just meant it to be “code fully generated by AI, possibly not fully reviewed by human eyes”. I agree that this definitely not “coding based purely off vibes”, and the approach looks legit.
It depends on what you mean by "vibe coding". Is AI coding based on an existing implementation vibe coding? What about only from a natural-language spec? How does manual reviewing affect whether or not it's vibe coding?
In practice all use of AI rapidly becomes vibe coding. Even if someone says they're going to carefully manually review everything that's generated, within a couple of days they get bored and just click approve.
Porting from one typed language to another seems like a perfect use for LLMs. I can see the appeal of both languages and why to consider such an action (e.g., rust is a mainstream PL vs zig's cult status (no slight intended)).
I think the big difficulty here is that Rust's ownership model in particular tends to require certain kinds of control flow to avoid a bunch of weird churning/copying, which makes it not as straightforward of a port target from other imperative languages.
Like maybe you get the LLM to try _really hard_ to churn through everything, but this feels like a big case of "perils of the lack of laziness".
Of course if you have a good idea for how to deal with allocations etc "idiomatically" already maybe that works out well. And to the credit of the port guide writer bun seems to have its explicit allocations that are already mapping pretty well to Rust.
Interesting how times have changed. Back in 2015, the entire Go runtime (already a mature codebase) was rewritten from C to Go semi-automatically: one of the maintainers wrote a C-to-Go conversion tool (for a subset of C they used) so that it compiled and produced identical output, and then the resulting code was manually refactored to make the Go code more idiomatic and optimized. And now you can just ask a language model.
The big difference here is that the C-to-Go tool was presumably deterministic: running it over and over again should produce the exact same result. You can trust that result because the human wrote the conversion tool, understood it, tested it, and worked the bugs out.
The LLM is non-deterministic. You could have it independently do the conversion 10 times, and you'd get 10 different results, and some of them might even be wildly different. There's no way to validate that without reviewing it fully, in its entirety, each time.
That's not to say the human-written deterministic conversion tool is going to be perfect or infallible. But you can certainly build much more confidence with it than you can with the LLM.
I'm not convinced by this argument. If you put 10 senior devs on a problem, you'd get ten solutions. Maybe even 12. If one engineer solves the same problem 10 times, you also will get 10 solutions.
The problem is not that we get 10 solutions. I think you should draw out your implications and state them directly, because they're already either solved or being actively iterated on by industry, and we (well, not me) can address them if you're willing to spell them out.
Perhaps a viable approach might be to vibe code the translation tool itself and observe that for every input it gives the expected output. Then once the translation is done, the translation tool can be discarded.
This would require a robust test suite though.
One of the cases where vibe coding might actually be useful, writing a throwaway tool.
Why does the deterministic nature matter? The interesting part is having oracle tests, not determinism. If someone is deterministic and wrong you use oracle tests to catch that.
Linked commit is probably not the most convincing for this tagline. Here's a branch[0] of Claude mass rewriting Zig code into Rust which is currently at 773,950 additions and 151 deletions:
Yikes. When Jarred left Stripe for the first time, he left behind multiple 10k+ line PRs rewriting code in the dashboard (this is before LLMs). It took months to work through those. A three quarter million line diff is essentially unreviewable.
I wonder if a successful, albeit slower, approach would be to walk the git commit history in lockstep, applying the behavioral intent behind each commit. If they did this, I would be interested in knowing if they were able to skip certain bug fix commits because the Rust implementation sidestepped the problem.
this is an interesting idea and i might try it with something smaller. there are more than 15,000 commits to bun, so you’d have to have some sort of way to operate on groups of commits in one prompt to get that done without thousands and thousands of api requests
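A minimal sketch of that batching idea, assuming a local clone; the `commit_subjects` helper and the 50-commit group size are illustrative guesses, not anything the Bun team has described using:

```python
import subprocess

def batches(items, size):
    # Split any sequence into consecutive chunks of `size`.
    for i in range(0, len(items), size):
        yield items[i:i + size]

def commit_subjects(repo="."):
    # Oldest-first "<hash> <subject>" lines for the whole history.
    return subprocess.run(
        ["git", "-C", repo, "log", "--reverse", "--format=%H %s"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

# ~15,000 commits grouped 50 at a time -> ~300 prompts instead of 15,000,
# each prompt summarizing one batch's combined behavioral intent.
```

The hard part isn't the batching; it's deciding where the batch boundaries fall so each group forms a coherent unit of intent rather than an arbitrary slice.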
I'll be very interested in how this AI port turns out. I am involved in a number of active projects that are being held back by their language/framework, but where a rewrite would be too big of a project to undertake using only human power.
I've had more success vibe coding Rust than I have in more dynamic languages. I suspect the strictness of the Rust compiler forces the AI agent to produce better code. Not sure. It could be just that I am less familiar with Rust so it feels like it's doing a better job.
Rust is a good choice to let LLMs run without a ton of supervision. In my experience you need to monitor the progress heavily and take ownership of the design of the thing you're building or porting. Test harness is a must. Each iteration should run the test and ensure it doesn't break things in other places.
I am in the middle of porting TypeScript to Rust and learned a ton doing this. You can check out the work in progress here https://github.com/mohsen1/tsz/
I've been targeting Go instead of Rust for a few things. But same deal, I'm not really a Go programmer and it seems to work well enough. I do have a few decades of engineering all sorts of code bases; so I'm not coming at this completely naively.
My way of compensating for my own inability to do detailed code reviews is making sure the tests, integration tests, end to end tests, cover everything I care about. Without that, you can't be sure it is not skipping detail work. I've also made it do some benchmarking and stress testing and then analyze the code base for potential bottlenecks. After it found and fixed a few issues, it got better. Finally, prompting it to do critical reviews, look for refactoring opportunities, etc. can give you a nice list of stuff to fix next. Having it run memory leak checkers and static code analysis tools also is a good strategy. Once you start running low on issues you find this way, the code is probably not horrible. Or at least you hit some sort of local optimum.
The lack of code reviews sounds pretty horrible. But it is now quickly becoming the biggest bottleneck in AI assisted coding. Eliminating that bottleneck is scary but it enables a few step changes in volume of code that becomes possible. Using strict compilers and strict memory management helps eliminate a few categories of bugs and issues.
I was previously doing this with languages I do understand. Once you start routinely dealing with larger and larger commits, reviews become a problem.
I expect working with larger code bases like this will get a lot easier and better over time. I noticed that the main headaches I face with this type of engineering are the tendency of models to keep deliberately cutting corners, only doing happy path testing, or deferring essential work for later. I suspect a lot of the models are simply biased to conserving token usage. Pretty annoying but also easy to compensate for with follow up prompts and testing. And probably something that becomes less of an issue as the models get tuned to behave better without additional prompting.
I have the same experience. Maybe I’ve used the same amount of time getting the rewrite out, but the amount of quality checking has increased for me. Before, I would probably not bother to create end-to-end tests and benchmarks, but now the mental cost of being extra vigilant is so cheap.
My rewrite has been running stable in production for two weeks with a 50x speedup, which has made the doomed old solution viable again.
I wonder what this will mean for future legacy projects, and how we should structure programs to fit inside “rewrite with LLM” size? Maybe a renaissance for microservices?
Given the recent gripe that Bun/Anthropic indicated regarding compile times with Zig (i.e. that their vibe-coded 4x compilation speedup PR wasn't accepted), it appears to me as an "interesting" move to switch to a language that probably delivers 4x longer compilations than even vanilla Zig.
I am very sceptical zig actually compiles faster than rust.
I had similar code written in zig and c++ and cold compilation was many times faster in c++ and incremental compilation was instant in c++.
I think the reason most rust projects compile slowly is the excessive usage of dependencies and also the excessive use of metaprogramming in code.
Zig doesn’t have multiple compilation units so it doesn’t parallelize compilation
I want zig to succeed but given that zig is not yet 1.x I'd imagine a large code base like bun would have difficulties addressing major breaking changes. Also given the fact that bun is using a fork of zig https://x.com/bunjavascript/status/2048427636414923250?s=20
So, Anthropic acquires Bun team because claude-code uses Bun. They port Bun from Zig to Rust presumably because Rust "is better" (imagine big air quotes here). Again presumably, they want to make claude-code "better". Why make it so complicated? With all the power of LLMs they have, surely they can make claude-code the best possible by writing it in Rust directly.
Presumably they aren't falling for their (extremely obvious) "grassroots" marketing, and know, like any good engineer, that LLMs are not the right tool for this.
It's easy to just see Bun as a marketing stunt, as well.
Zig is a moving target that has breaking changes in every release (which is fine as they are sub-1.0). But that means that AI tools have been trained on outdated syntax/etc. Zig isn't that common, so there is even less training data to begin with.
Rust on the other hand is pretty established by now and has fewer breaking changes. It also has more compile-time safety guarantees that make vibe-coding a bit more confident.
On top of that, Zig has rejected their upstream contributions. So they'd have to maintain their own compiler in the long run, which is probably just technical debt to maintain.
Most of my vibe coding is in zig, and it has been my experience that Claude and Codex both keep up with zig changes just fine. Every now and then I catch them writing outdated code that they burn some tokens on, but my experience says your local codebase’s idioms will influence what gets generated enough to stop this from being a problem.
Probably an experiment due to Bun's PRs to Zig being rejected (Zig does not allow AI use). If Rust works well enough, and the alternative is maintaining a fork of Zig, I'd guess they'd go with Rust.
The anti-AI policy had nothing to do with Bun's PRs being rejected. This post[0] by a core zig maintainer explains why the PRs were low quality and subsequently rejected.
Picking a pre 1.0 language to build your product always seemed like a bad choice to me. Purely on that basis and ignoring the recent drama this seems like a reasonable idea for tech debt pay down to me. Assuming automated conversion can work without making things worse, which is not exactly a given.
React Native is only an application framework. Using a tool with an unstable API a level down the stack seems much worse. Foundations of sand is the phrase that springs to mind.
So I can't tell if the linked commit is an actual attempt or just an experiment but it did always strike me as odd to make a JS runtime in Zig when my impression was there were a lot of work-stopping compiler bugs at the time.
The problem with vibe coded re-writes is that you basically sign off on understanding the generated codebase at that point. Any historical knowledge of the codebase is gone.
It makes the git history a bit more confusing to follow if you want to see old changes, but I'm sure a simple wrapper to check for the zig equivalent files as well wouldn't be very difficult.
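Something like that wrapper could indeed be a few lines. A hypothetical sketch, assuming the port mirrors file paths one-to-one with only the extension changed, which may well not hold in practice:

```python
import subprocess

def file_history(repo, path):
    # `git log --follow` for the file; if a ported .rs file has no history,
    # fall back to the assumed pre-port .zig counterpart at the same path.
    out = subprocess.run(
        ["git", "-C", repo, "log", "--oneline", "--follow", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    if not out and path.endswith(".rs"):
        return file_history(repo, path[: -len(".rs")] + ".zig")
    return out
```

If the port reorganizes modules rather than translating file-for-file, you'd need an explicit mapping table instead of this extension swap.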
I am also porting TypeScript to Rust. With a different design I managed to make it faster than tsgo port. I've made a lot of progress in the last 4 months but needs more work. Contributions are welcome!
The fact that tsz can compile to wasm might actually give you an even more interesting feature that tsgo can't (yet): using the type checker for data validation at runtime.
When I first heard that bun was written in zig, I thought that was an odd choice for such a large project, mostly because the language is "unstable" and is still making significant breaking changes.
I would guess dealing with breaking changes is a big motivation for this.
The only Bun shipped product I've used in anger is OpenCode and I regularly run into segfaults on it. I doubt this is the reason for migration but every time it happens, it reminds me the real cost of unsafe code. That being said, Zig is an absolute pleasure to write and I can't wait until it has a real library ecosystem, Rust's greatest boon.
That's completely normal at the first step of the language transformation. Actually it's required if you do a file by file transformation first while wanting to maintain interface compatibility.
I'm not sure I would take this kind of path, I would much more focus on refactoring the project to small and easily translatable components with small boundaries, but it's cheap to try things.
If nothing else, it'll be good marketing material targeted at non-technical enterprise executives, who will pressure their engineering teams in meetings: look, people are porting such complicated things from one language to a totally different one, so why are we not using AI effectively?!
Both their AI policy and their rejection of Bun's performance PR were level-headed and well-reasoned. And the link seems more like a proof-of-concept than anything else.
It's true corporate sponsors are a big help with language development, but not at the expense of conceptual integrity.
Bun is the largest project written in zig. And it isn't close. Bun is bigger than zig itself. Seems like zig isn't mature enough to handle Bun's needs, so I don't blame them at all for looking for off ramps. Only time will tell if rigidity from the zig team is worth the cost of losing Bun. It might be.
So far the wonders of claude/codex have been mostly constrained to applications that are built within the boundary conditions of existing libraries -- the models make direct use of the good work that humans have done to date to build Python, `requests`, `ffmpeg`, you name it.
But I'm excited for the (I think inevitable) stage where the shoggoth starts to reach outside those constraints -- rewriting, patching, renaming, rebuilding libraries, DLLs, binaries -- and we move into a regime where the libraries dissolve, the application floats on top of the shifting sands of an ever more efficient, secure, unified and totally inhuman technology stack.
Obviously this is a horrifying idea in some ways (interpretability, security etc), but it's also not obvious to me that it can't work, especially if there are dedicated, centralized efforts to do this. it's also not clear that interpretability is necessarily mutually exclusive with full slopification/machine rewrite of decades of foundational, incremental development
Tell me you've never worked with system languages without telling me you've never worked with system languages (telling claude to "write it in Rust" does not count).
Having written a JavaScript runtime in Rust in the past - Rust is an excellent choice. Not just for the development experience, but also for embedders who want to consume the project as a library (rather than a binary, e.g. node).
Not sure about vibe-coding it. While they aren't using v8, LLMs made it easier to understand v8 quirks and update v8 as they make weird changes every now and then. It couldn't write the runtime without help though.
It seems there was an issue where the image API ignored the ICC profile (now fixed).
Any developer with experience implementing image formats would almost certainly avoid this mistake. This is a problem that cannot be solved with vibe coding. In this situation, the user is merely a guinea pig for bug fixes.
April 26th - Bun announces they used AI to fork Zig so they could make an optimization for a 4x improvement
April 27th - Zig contributor mlugg clarifies why the specific optimizations Bun did were ill advised and wouldn't have been accepted in Zig, regardless of AI use [1]
May 4th - Bun is looking into Rust as an alternative.
This, to me, seems like total whiplash. Has anyone at Bun made a statement on why they're making such dramatic changes? It seems like the lesson to internalize from mlugg is not "switch to Rust"
Zig is a pre 1.0 language, subject to many breaking changes and has thousands of (stranded) issues on its GitHub.
It was always a risky proposition to use Zig unless you were philosophically committed to helping the language develop, or a die-hard fan. If not, jumping to some other language should not be such a big surprise.
They may come to the conclusion that Zig is incapable of delivering on its promises or is deficient at satisfying their requirements.
> They may come to the conclusion that Zig is incapable of delivering on its promises or is deficient at satisfying their requirements
Sure, but what you're suggesting is not related to the timeline I gave. They did not determine Zig was deficient in some way. They tried to get a cheap gain; the change broke parts of Zig without them even realizing it, and it was worse than the improvement already available in Zig. That seems less like a pragmatic choice about speed and more like headline-driven development.
What you write makes it sound like there's a pragmatic process being followed that only you are privy to, and I'd like to know what it is. Zig may be inappropriate for Bun after all, but this makes it look like they don't understand what they are doing, and the agentic coding doesn't help.
I would assume that Zig was a risky choice to start with, and Rust was always lurking around the corner as a sensible option. This was probably just the straw that broke the camel's back.
It's not really shunned - it's the standard solution for async in Rust - but it's not the right solution for every project, especially if you have specific requirements for how your project's computation should be scheduled. I would guess that Bun is one of those projects, especially as it needs to be able to schedule JS async work itself.
The answer is in the next sentence: "Bun owns its event loop and syscalls." They clearly want to manage their use of threads explicitly, which is not _unusual_ for systems programming but probably less common. Note that `rayon` is different from most of these in that it has nothing to do with async Rust - it's a tool for spreading computation over a thread pool, very popular in non-async projects, but it would also go against their goals here.
tokio is great and it's pretty performant, but you pay an allocation for every future unless you do some complex organization of your futures.
Source: I worked on Deno, competed directly with Bun on HTTP performance (and won on some metrics).
Edit: and of course I typed future instead of task (aka "spawned future"). Thanks, child commenters below. Much of Deno was built on spawning futures that mapped to promises and doing it as fast as possible. I spent ages writing a future arena to optimize this stuff.
You only allocate on boxed futures, which are much rarer than naked futures - generally only used where object safety (essentially dyn support) is required. Even then, some workarounds exist.
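To make the inline-vs-boxed distinction above concrete, here's a minimal std-only sketch (the `work` function is a made-up stand-in for a unit of work a runtime would spawn):

```rust
use std::future::Future;
use std::pin::Pin;

// Hypothetical async fn standing in for a task a runtime would spawn.
async fn work() -> u64 {
    42
}

fn main() {
    // A plain ("naked") future is just a value: its whole state machine is
    // laid out inline, on the stack or inside a parent future. No allocation.
    let plain = work();
    println!("inline future: {} bytes", std::mem::size_of_val(&plain));

    // Boxing erases the concrete type behind `dyn Future` (needed for object
    // safety, e.g. storing heterogeneous tasks). That costs one heap
    // allocation per future - the per-task cost discussed above.
    let boxed: Pin<Box<dyn Future<Output = u64>>> = Box::pin(work());
    println!("boxed handle: {} bytes", std::mem::size_of_val(&boxed));
}
```

Spawning onto a runtime like tokio necessarily goes through something like the boxed form, since the executor has to store tasks of arbitrary concrete types.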
It's an async runtime. The whole async-await flow removes a little bit of scheduling control and adds some forced memory management in order to give you some nicer code in an application case, but if you're trying to build a runtime yourself I think you'd much rather retain control in this case. It's just hard to reason about.
You much rather have this runtime you're building manage task scheduling and allocation and all that. It's the most natural design choice to make.
You shouldn't have to pull in big complex dependencies to do what should be primitive things. Zig is putting a strong and thought-out effort into getting async & parallelism "right" inside the stdlib. I'm honestly not up to speed with where rust is at with it at the moment, but last time I checked it was a bit of a mess.
`tokio`, and Rust `futures` in general, are perfectly fine for typical applications.
But as soon as you need something that doesn’t fit neatly into the abstractions they provide, even something as seemingly simple as proactively reusing or cancelling sessions, things quickly become extremely complicated, inefficient, and unreliable.
For high-performance servers, where you really care about raw performance, DoS resistance, and taking advantage of modern kernel features, these abstractions can become a major limitation.
It’s a bit like using an ORM that gives you no easy way to send raw SQL queries. It works fine for common cases, even if it’s not always optimal. But when you really want to take advantage of what the database can do, you usually avoid the ORM.
Tokio is a general purpose async runtime. Much the same could probably be said for async-std (except IIRC they do have a barebones reactor for you to build your own on). In general, a general-purpose async runtime will do worse for highly specific tasks than a purpose-built one (especially e.g. NUMA).
I think avoiding async entirely might be a mistake, and I'm not entirely convinced anything better than a general-purpose async runtime might exist for a JS runtime (it itself is general purpose after all).
Avoiding std::fs is fucking bizarre to me: it's completely sync and is a really lightweight abstraction over syscalls.
my guess is they want to do I/O as part of their event loop explicitly, and blocking a thread in a syscall waiting for an IOP (à la std::fs) isn't the vibe.
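A minimal sketch of that point, using only std (the file path is an arbitrary example): each std::fs call maps almost 1:1 onto a syscall, and each one blocks the calling thread until the kernel returns.

```rust
use std::fs;
use std::io::Write;

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("bun_port_demo.txt");

    let mut f = fs::File::create(&path)?; // open(2) with O_CREAT
    f.write_all(b"hello")?;               // write(2)
    drop(f);                              // close(2)

    // This parks the thread in read(2) until the data arrives. A runtime
    // that owns its event loop would instead submit the operation through
    // io_uring/kqueue on a schedule it controls - exactly the control
    // std::fs doesn't offer.
    let data = fs::read(&path)?;
    assert_eq!(data, b"hello");

    fs::remove_file(&path)
}
```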
Async is much harder to work with than sync+threading is. And while threads have more overhead in theory, in practice almost nobody is writing applications at such a scale where that overhead actually matters. So I don't blame them for eschewing async, there's likely no benefit for the project in it.
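For illustration, the sync+threading style looks like this: each unit of work is an ordinary function on its own OS thread, with no futures and no runtime to reason about.

```rust
use std::thread;

fn main() {
    // One OS thread per job - the "overhead in theory". At four jobs
    // (or four thousand) that overhead is negligible on modern systems.
    let handles: Vec<_> = (0..4)
        .map(|i| thread::spawn(move || i * i))
        .collect();

    // join() blocks until each thread finishes; control flow reads
    // straight down, which is the ease-of-reasoning argument.
    let sum: i32 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    assert_eq!(sum, 0 + 1 + 4 + 9);
    println!("{sum}"); // prints 14
}
```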
I wonder if something like Haxe, a language that can transpile to several languages, would be the best target for LLMs. They could always generate Haxe and then transpile it to whatever language the user wants.
Probably not for an already ongoing project like this but for a greenfield one.
This feels more like a reaction to Zig's anti-LLM policy than anything. Anthropic would probably like to contribute something back to Zig at some point, but I doubt anyone would ever believe their PRs were not written by Claude.
Exactly, this is a direct response to Zig refusing to accept pull requests from Bun (and Anthropic). That situation forced Bun to maintain a fork of Zig, and it makes sense in the long term that they'd rather port their entire project to Rust.
I've really enjoyed Bun the past year or so, but the acquisition by Anthropic, Bun's codebase and documentation increasingly becoming AI slop, and this impulsive complete rewrite - all of it has ruined it for me and I'm actively moving off of Bun. I don't feel comfortable relying on it any longer.
Interesting. When I thought of Zig, I thought of Bun. In my mind it was the flagship application for that language. Is there another? I wonder how the Zig team feels about this. To me it seems like Rust has definitively won now.
I was hopeful for this project, and I've reported crashes & bugs in the bundler with the hope that it will stabilize over time, but this is just silly - I'm not going to risk them pulling the rug under me and replacing the runtime with 1 million lines of vibecoded rust.
I can't imagine going from reviewing code in Zig to letting Claude code handle it in Rust. Seems like a lot of change to deal with in a short amount of time. Wonder how much the bun team culture will change? We've been really liking bun so far
I am not a fan of AI, but my limited experience running small local LLMs did show me that rewriting some scripts into a different language worked really well. So my guess is this will turn out just fine.
We can even use all PLs in a single project. Starting question should go with something like "which part will we code rather in brainfuck and which in whitespace?"
Yeah, it's not clear. The rise of LLMs especially is going to chip away at Zig's strong points (simplicity at the cost of less safety) as time goes on. Which might be part of why they're so stressed about it.
I work on Bun and this is my branch
This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
I’m curious to see what a working version of this looks, what it feels like, how it performs and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.
It is a pity that you can't make an experimental commit on an experimental branch without igniting a fire of delirium through some people who -- if they were able to put their emotional response aside for a minute and could weigh this up on the basis of merit -- would probably agree with the motivations for researching this approach.
> if/how hard it’d be to get it to pass Bun’s test suite and be maintainable
Every month brings new opportunities to completely abstract the process of porting code with agents, all using linguistics. What an exciting time.
For those looking for a similarly interesting (and interestingly similar) example, see Cloudflare's port of Next.js[0], "vinext", from a couple of months ago. It had some teething problems at the start but I'm using it in a few production projects now with minimal issues.
[0] - https://github.com/cloudflare/vinext
This is what it means to work on a popular project, unfortunately.
6 replies →
I am the topic starter, and I had no emotional response; I was just being curious. I never expected it would land at HN #1. I specifically posted the link to the first commit and not to the whole branch, because currently the prompt is the most interesting part.
16 replies →
It’s annoying for the team members I suppose, but to be fair, if you’re working on a high-profile open source project, owned by one of the most hyped companies in the world, and your branches are public, it’s probably a good idea to be clear in the branch naming and supplemental files if you’re just “experimenting”.
By working in public on a popular open source project, you are communicating intent and purpose to your users and the general public through your commit messages, branch names, and documentation. You’ll save yourself a lot of grief if you act accordingly.
The fact someone who works on Bun is willing to create and even push a branch generated by a stochastic parrot is very telling of the direction the project is going.
Doesn't matter if it's "experimental", it's a dumb experiment that shouldn't exist.
3 replies →
That's not a very constructive, nor accurate, way of trying to dismiss all the concerns around Bun that have been raised.
2 replies →
I love your work on bun. How do you feel about all the constant concerns being raised about the quality of the project lately? I understand some of them might just be typical twitter hate, but some of them are real. And I think people are right to question why you are adding image processing or web views inside a javascript runtime when there are bugs affecting production that sit unaddressed. For example, one of our biggest blockers right now is https://github.com/oven-sh/bun/issues/6608, which was reported in 2023 and is still affecting us 3 years later.
When you start getting hate, you’ve made it. Up until then you’re a hypothetical that people like. Maybe they’ve built a side project with you or read the docs. You only get hate when people have used your tool and butted up against limitations. We saw this with Deno too where they went from beloved potential savior to realistic, limited tool. Hate is good. It means people rely on you
8 replies →
Okay, let's be honest. That's a feature request, not a bug report.
2 replies →
Why not offer a bounty to get this issue fixed? Are you otherwise paying any money to the bun team?
15 replies →
What's the main motivation for considering Rust?
For what it's worth, in my last experience with Bun[0] I ran into a couple of bugs where it seemed Rust could have helped, e.g. using Bun.write
[0]: https://mastrojs.github.io/blog/2025-10-29-what-struggled-wi...
With AI agents and how good they are at "language translation" tasks against an identical target with a comprehensive test suite, you end up doing these things out of curiosity. The AI agent has the originals to test its assumptions against, too.
I've had surprisingly good results from getting AI agents to take a script in shell, python or typescript and have it translate it into those other programming languages, including rust versions. Or swapping from one build system to another.
1 reply →
Thank you for the clarification!
While you are here, can you elaborate on the method chosen? For example, why not write a conversion script for phase A? I mean, the same Anthropic model would produce it in no time, prompting it is at the same cognitive-load level, but you would have a deterministic result.
Thank you, Jarred, for your work. It’s unfortunate to see so much backlash toward legitimate research. Bun is often seen by some as “the flagship project for zig” - especially among those frustrated with rust who want zig to "win over rust" for whatever reasons. At the end of the day, you should do what makes the most sense for your project and your circumstances, regardless of the language or tools involved.
Personally, I find this experiment interesting and I’m curious to see how it develops. Writing idiomatic rust requires a shift in mindset, so it’ll be worth watching how well LLMs adapt to that over time.
I can only speak for myself... but I've found at least Claude Opus to handle Rust very well, and in my own use cases WebAssembly (wasm) and FFI for interoperation with TS/JS has been pretty smooth.
>who want zig to "win over rust" for whatever reasons
I don't understand why this mentality is so common. Zig and Rust are both fine languages with markedly different design goals and they can coexist.
1 reply →
....you were saying?
You can view it as an overreaction, but also as a sign that your work is significant. It impressed some, and scared others. In any case, you made something interesting.
You're replying to the original author of Bun. Given the usage of Bun, and the fact that his company (primarily him, actually) was recently acquired by Anthropic for what I'm guessing was a bajillion dollars, I think he probably already knows his work is significant and that he made something interesting.
1 reply →
Calm and curious about your results.
I hope you get the code elegant and not only maintainable but future friendly and performant.
I'm very curious what Zig vs Rust code looks like for the same project! What are your thoughts so far?
Might be a good idea to let AI handle social media. I'm not saying you're doing it badly, just that it doesn't seem worth the drained energy to do manually.
Can't think of a more stupid and detrimental way to use AI. Pretending to be a (particular) human on social media.
this is lovely, how admirable that you have the space to do this. it's very rare that we as a community take the time to actually implement a non-trivial system in X and Y and look at the differences. so much discussion around these things is based on pointless tribalism.
I'm sure recasting Bun in a new mold is going to be hugely informative about the structure of Bun itself, regardless of the outcome.
would love to read a postmortem
A research prototype. This is normal.
[dead]
[flagged]
Advice for the future: experiments should be explicitly tagged as such. The commit message "docs: add Phase-A porting guide" says nothing about it being experimental and looks like a planned move to Rust. That message certainly looks very official to me.
> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
Trying to pass off a blunder like this as if it's no big deal is an insult to your users. You made a dumb mistake. Own it, be transparent, and correct the problem that started this; namely, put some form of experimental tag in the commit message. Then say you made a simple mistake, sorry, and move on. Being dismissive is a defense mechanism that can arouse suspicion, as in: are you now lying about the experimental state to quench the flame war? Not that I believe that, but it can certainly become conspiracy fodder. Again, you can avoid all that with transparency.
Or the community at large could stop acting deranged over language wars like it’s 2001.
It’s their repo, let them do what they want lol
1 reply →
Or we can stop being toxic to open source maintainers and acting like we own them or they owe us anything.
A commit message on a random branch is not an obligation. Not telling random internet users what side projects they're working on is not a blunder. It quite frankly doesn't matter what you think looks official, it doesn't give you the right to treat people like this.
It's so embarrassing to be a programmer sometimes, so many of my peers behaving like spoiled rotten brats.
2 replies →
> We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
Props for the effort man, but people have already picked up on Zig-to-Rust transition.
Poor Zig folks ...
More like poor Bun
Hoping that an AI rewrite is thrown out.
You may even be an OK programmer, but IF YOU AREN'T ABLE TO DO THE WORK I DON'T WANT TO USE IT.
Not worth your time? Not worth my time.
Most of Bun’s code is already written by LLMs. If you feel that way, it’s already been too late for a while. Furthermore, we’re talking about a million line port done in a couple of days. The question of whether it’s worth the time looks extremely different if done by hand. It would take a year.
5 replies →
I think the criticism is still valid to an extent, because I don't see how this would give you a good way to evaluate Zig vs. Rust. Maybe a better approach would be to migrate a particularly problematic area and bench that on its own?
It's not like OP asked for any criticism to start with, right? This whole thread is pretty good example of why saying "Fools and children should never see half-finished work" exists. ¯\_(ツ)_/¯
3 replies →
Will you have a way to measure the ecological impact of making such a throwaway attempt?
Not actually pointing at you or anyone in particular here, to be clear. And if the answer is "not much more than forgetting to turn off the light when leaving the toilet", then certainly a "go have fun" cheer on my part.
But otherwise, we collectively have to keep in mind that the prompts we can fire off mindlessly, without perceiving any direct negative feedback, are possibly not harmless.
So if you can measure it, come back with those numbers too, so we can all take them into consideration next time the thrill of running it just to see what happens rises in our minds. Thanks.
Right now it seems to say:
> Showing 1,808 changed files with 790,916 additions and 151 deletions.
Just looking at the git diff [0].
I looked at one of these Rust port files [1]. It's 827 LOC and apparently 7,576 tokens. That gives a first-order guess that the full ~790k added lines are around 7 million output tokens. Obviously there's overhead: tool calls, reasoning, reads of the Zig version, and fixing compile errors. So I would guess maybe 36 million tokens, multiplying by 5?
If that's around $200 to $500 in token spend, we can probably guess it emits about the same as burning $100 of gas? Or like 50 kg of CO2?
[0] https://github.com/oven-sh/bun/compare/main...claude/phase-a...
[1] https://github.com/oven-sh/bun/blob/dacc59c62a8f93eabe6d9998...
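The back-of-envelope math above can be written out explicitly. To be clear about the assumptions: the tokens-per-line ratio comes from the single sampled file, and the 5x overhead multiplier is a pure guess.

```rust
fn main() {
    // Inputs from the sampled port file and the GitHub compare view;
    // the overhead multiplier (tool calls, reasoning, retries) is guessed.
    let tokens_per_line = 7_576.0_f64 / 827.0; // ~9.2 tokens per line
    let lines_added = 790_916.0;
    let overhead = 5.0;

    let output_tokens = lines_added * tokens_per_line;
    let total_tokens = output_tokens * overhead;

    println!("~{:.1}M output tokens", output_tokens / 1e6); // ~7.2M
    println!("~{:.0}M total tokens", total_tokens / 1e6);   // ~36M
}
```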
1 reply →
Less than the impact of people who can't be bothered to remember basic historical facts or directions in terms of hitting Google services dozens of times a day across the population.
Probably less than the impact of having dozens or hundreds of actual developers, each with a dedicated computer running for the months or years a similar effort would take.
If you want to go live in the woods and farm/hunt for yourself, feel free. I'd suggest you stay away from the museums with paint and not glue yourself to a car mfg.
3 replies →
Interesting to see this when the current top post on HN is someone worrying about Bun as it was acquired by Anthropic. The top comment there describes “Anthropic does experiments on their own codebase, the Bun team is not gonna do the same vibe coding experiments”.
Yet here we are, what looks like a massive undertaking for vibe coding.
Time will tell how this will turn out. Would be nice if the Bun maintainers could give some clarification about what they’re doing here, and why they’re doing this.
They recently tried to upstream an improvement to zig, but were prevented from doing so because zig has a hard and fast "no AI code" rule. Whether you think this response is trying to put pressure on zig or whether they're just moving for practical reasons is up to you.
It's probably a bit of both.
I don't see why they thought it would work, when their patch set was rejected because it was not correct, did not go in a direction the Zig authors were interested in, and touched an area where they are already working hard on improvements. It would have been much better if the Bun team had joined forces and helped out instead of vibe-coding a broken PoC patch that could never get merged. Compilation speed is one of Zig's current main focuses, and changing the type system to make that possible was a big part of 0.16.
Anyone can hack up a quick PoC, even without LLMs, the hard part is writing code that is correct and maintainable.
6 replies →
Not only because the AI part, here's a discussion [0] about it
[0] https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
13 replies →
> but were prevented from doing so because zig has a hard and fast "no AI code" rule
The patch would have been rejected either way because it was out of date and conflicted with other work going on.
Makes me wonder why Zig announced the strict LLM rule recently. I'm afraid one reason could be that Zig doesn't want to accept code from the Bun fork in the first place (because of LLM usage, divergence, and other reasons).
60 replies →
So if tomorrow Rust denied the "improvement" to upstream Rust then what's the next language they plan to vibe code it in?
20 replies →
> but were prevented from doing so because zig has a hard and fast "no AI code" rule
No, they were prevented from doing so because the Zig devs didn't like the proposed changes and are preparing a more comprehensive improvement.
Even if AI had not been used, the changes would not have been upstreamed, see https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio... tl;dr the supposed improvements are not sound and the zig compiler has already gotten a whole lot faster
4 replies →
The Zig maintainers did a pretty in-depth review of the PR, and laid out multiple technical reasons for why it would not get merged. They did not reject it simply for being vibe-coded (though that is likely the cause of it sucking).
Anthropic just needs to buy Zig! Problem solved.
10 replies →
>They recently tried to upstream an improvement to zig, but were prevented from doing so because zig has a hard and fast "no AI code" rule.
And will Rust team accept their vibe coded patches?
3 replies →
Yeah, now that I think about it, having a major project written in a language that doesn't accept AI contributions now owned by a major AI company was a recipe for dis... er, conflict.
I'm not a huge fan of Rust, but I guess having a project like Bun in an actually memory safe language is probably a win? Guess it depends on how good Claude is at writing Rust code...
I see that as a win for Zig.
Read the previous discussions on the topic. Your summary is a sensationalist lie, since their change was apparently a smoking pile of hot garbage, and Zig already had similar performance gains in a newer release.
> They recently tried to upstream an improvement to zig
They didn't.
Not only that but Zig was working on a similar improvement to their change already
seems easier to fork zig
1 reply →
good, more reason to stay away from zig
1 reply →
Probably more so about going with a native language that is reliable and battle-tested. Rust runs in Firefox and in production systems across major orgs; this is not surprising.
> what looks like a massive undertaking for vibe coding
fwiw, I suspect it's less of an undertaking than you may think. I've been playing with AI to rewrite Postgres in Rust[0] over the past couple of weeks and I found the AI to be exceptional at doing rewrites. Having an existing codebase you can reference prevents a lot of the problems you have with vibecoding. You have an existing architecture that works well and have a test suite that you can test against
Over the course of a month I've gone from nothing to passing over 95% of the Postgres test suite. Given Jarred built Bun, I bet he'll be able to go much faster
[0] https://github.com/malisper/pgrust
> I suspect it's less of an undertaking than you may think... having an existing codebase you can reference prevents a lot of the problems you have with vibecoding.
That's because it's not vibe coding - stingraycharles doesn't seem to understand what vibe coding is. Vibe coding was defined here https://x.com/karpathy/status/1886192184808149383
> There's a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.
This is very far from Anthropic's migration plans.
15 replies →
I do not know if there's any overlap between these teams, but it seems like Anthropic itself is fairly invested in the Rust ecosystem.
They recently proposed some of their internal tools to be the official Rust implementation[0] of Connect RPC[1]. As a protobuf based library set, this includes a new Rust-based protobuf compiler, Buffa[2].
[0]: https://github.com/orgs/connectrpc/discussions/7#discussionc...
[1]: https://connectrpc.com/
[2]: https://github.com/anthropics/buffa
I imagine claude is better at Rust than Zig?
Zig is a moving target. 0.15 -> 0.16 includes some massive structural changes concerning IO and async/threading.
Claude has absolutely no idea what it's doing with bleeding edge zig unless you feed it source and guide it closely (in which case it's useful for focused work) - I'm building a game engine & tcp/udp servers with it and it requires a hands-on approach and actually understanding what's being built.
I imagine these are not really concerns with rust at this point.
In my ideal world the team behind bun would be putting in the work to keep up with modern zig, but it's starting to look like they are running mostly on vibes in which case rust might be a better choice.
10 replies →
Contributors and maintainers will also be easier to find in Rust than Zig.
Zig is a great language and I want to see it succeed, but this is a prudent move for Bun.
34 replies →
I would expect all LLMs are going to be better at Rust than Zig - a strong, thorough compiler will simply prevent more mistakes, and the benefits of a "simple" language decreases the larger the code base gets. The more abstractions exist, the less valuable "no hidden control flow" or "no hidden allocations" from the standard library get, and that's before you add the mother of all abstractions of vibe coding.
3 replies →
But why should they? This just seems like the groundwork for an initial refactor and moving from one language to another. They haven't actually committed to switching from Zig to Rust yet. I mean, I get if you are an investor and you want to see if they are using their time effectively, but why would it matter to anyone else?
Lots of people, me included, heavily invested their time and expertise into Bun, using it as a daily driver, to bundle production code or even using it in production as a JS/TS runtime. Of course, we are interested in Bun to stay a useful tool. The Anthropic acquisition was worrying enough on its own.
2 replies →
They’re not required to do so, but like I said, it would be nice, because it removes a lot of speculation. And development is in the open, so people notice what they’re doing.
To be fair, this seems to be Bun's original creator themselves experimenting. It's unclear if there's any relation to the Anthropic acquisition, but I think it's best we refrain from speculating prematurely when we just don't know.
The industry does not take shape based on HN top posts, nor media buzz. Remember YouTube's birth: necessity, available tech, fresh talent.
I believe we now have all of those, but we fail at choosing.
"Show me the incentive and I'll show you the outcome" is usually the overarching law of software dev/design/arch.
What do you mean with that in this context?
2 replies →
I think it is ok to use or build vibe-coded tools if they're built by experts in the domain who take ownership of them.
I think if it's well built by experts it doesn't deserve the "vibe coded" label even if it was built with agentic tools.
Honestly, this kind of thing seems to work quite well with vibe coding. If I remember correctly, the Ladybird JS engine was "vibe-ported" to Rust as well, and it passed 100% of the original test suite, in addition to new Rust tests.
Anthropic just wanted Codex-like bragging rights: Codex is written in Rust, so now they're going to write Bun in Rust, and then Claude Code can claim to be built on Rust.
> what looks like a massive undertaking for vibe coding
It doesn’t look like that at all. Do you think that all use of AI is vibe coding?
Did you look at the branch? This is vibed, even with the most liberal definition
https://github.com/oven-sh/bun/compare/claude/phase-a-port
This single commit is 65k lines of additions
https://github.com/oven-sh/bun/commit/ffa6ce211a0267161ae48b...
10 replies →
I think the definition of vibe coding is a bit fluid, in this case I just meant it to be “code fully generated by AI, possibly not fully reviewed by human eyes”. I agree that this definitely not “coding based purely off vibes”, and the approach looks legit.
what would you call a fully uncommented commit with
> 27,939 lines changed: 27,939 additions & 0 deletions
of new rust code
7 replies →
It depends on what you mean by "vibe coding". Is AI coding based on an existing implementation vibe coding? What about only from a natural-language spec? How does manual reviewing affect whether or not it's vibe coding?
1 reply →
In practice all use of AI rapidly becomes vibe coding. Even if someone says they're going to carefully manually review everything that's generated, within a couple of days they get bored and just click approve.
Porting from one typed language to another seems like a perfect use for LLMs. I can see the appeal of both languages and why one would consider such a move (e.g., Rust is a mainstream PL vs Zig's cult status (no slight intended)).
I think the big difficulty here is that Rust's ownership model in particular tends to require certain kinds of control flow to avoid a bunch of weird churning/copying, which makes it not as straightforward of a port target from other imperative languages.
Like maybe you get the LLM to try _really hard_ to churn through everything, but this feels like a big case of "perils of the lack of laziness".
Of course if you have a good idea for how to deal with allocations etc "idiomatically" already maybe that works out well. And to the credit of the port guide writer bun seems to have its explicit allocations that are already mapping pretty well to Rust.
Interesting how times have changed. Back in 2015, the entire Go runtime (already a mature codebase) was rewritten from C to Go semi-automatically: one of the maintainers wrote a C-to-Go conversion tool (for a subset of C they used) so that it compiled and produced identical output, and then the resulting code was manually refactored to make the Go code more idiomatic and optimized. And now you can just ask a language model.
The slides: https://go.dev/talks/2015/gogo.slide#3
An interesting similarity:
>We had our own C compiler just to compile the runtime.
The Bun team maintain their own fork of Zig too
The big difference here is that the C-to-Go tool was presumably deterministic: running it over and over again should produce the exact same result. You can trust that result because the human wrote the conversion tool, understood it, tested it, and worked the bugs out.
The LLM is non-deterministic. You could have it independently do the conversion 10 times, and you'd get 10 different results, and some of them might even be wildly different. There's no way to validate that without reviewing it fully, in its entirety, each time.
That's not to say the human-written deterministic conversion tool is going to be perfect or infallible. But you can certainly build much more confidence with it than you can with the LLM.
I'm not convinced by this argument. If you put 10 senior devs on a problem, you'd get ten solutions. Maybe even 12. If one engineer solves the same problem 10 times, you also will get 10 solutions.
The problem is not that we get 10 solutions. I think you should draw out your implications and state them directly, because they're already either solved or being actively iterated on by industry, and we (well, not me) can address them if you're willing to speak them.
Perhaps a viable approach might be to vibe code the translation tool itself and observe that for every input it gives the expected output. Then once the translation is done, the translation tool can be discarded.
This would require a robust test suite though.
One of the cases where vibe coding might actually be useful, writing a throwaway tool.
Why does the deterministic nature matter? The interesting part is having oracle tests, not determinism. If someone is deterministic and wrong you use oracle tests to catch that.
You could also use the LLM to create a program that does the conversion, then review the program and use it to deterministically perform the actual conversion.
Have the best of both worlds.
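The "trust the tool, not each output" idea above amounts to a differential (oracle) harness: feed the same inputs to both implementations and require identical output. A minimal sketch, where `original` and `ported` are placeholders standing in for the two implementations under test, not anything from Bun:

```rust
// Placeholder for the reference implementation (e.g. the Zig binary's
// observable behavior, exercised via its CLI or FFI in a real harness).
fn original(input: &str) -> String {
    input.to_uppercase()
}

// Placeholder for the ported implementation being validated.
fn ported(input: &str) -> String {
    input.to_uppercase()
}

fn main() {
    // Edge cases matter most: empty input, unicode, etc.
    let cases = ["bun", "deno", "node", "", "héllo"];
    for case in cases {
        assert_eq!(
            original(case),
            ported(case),
            "divergence on input {case:?}"
        );
    }
    println!("all {} cases agree", cases.len());
}
```

In practice you would drive both real binaries over the existing test suite and a fuzzer's corpus; the harness itself stays small enough to review by hand even if it was generated.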
Linked commit is probably not the most convincing for this tagline. Here's a branch[0] of Claude mass rewriting Zig code into Rust which is currently at 773,950 additions and 151 deletions:
[0]: https://github.com/oven-sh/bun/compare/claude/phase-a-port
Yikes. When Jarred left Stripe for the first time, he left behind multiple 10k+ line PRs rewriting code in the dashboard (this is before LLMs). It took months to work through those. A three quarter million line diff is essentially unreviewable.
I was curious how much work this would be. Here are the top five from cloc:
I wonder if a successful, albeit slower, approach would be to walk the git commit history in lockstep, applying the behavioral intent behind each commit. If they did this, I would be interested in knowing if they were able to skip certain bug fix commits because the Rust implementation sidestepped the problem.
this is an interesting idea and i might try it with something smaller. there are more than 15,000 commits to bun, so you’d have to have some sort of way to operate on groups of commits in one prompt to get that done without thousands and thousands of api requests
Many segfaults in Bun issue tracker. I bet it would sidestep many.
Well…there would still be panics.
Interesting idea
I'll be very interested in how this AI port turns out. I am involved in a number of active projects that are being held back by their language or framework, but where a rewrite would be too big a project to undertake using only human power.
I've had more success vibe coding Rust than I have in more dynamic languages. I suspect the strictness of the Rust compiler forces the AI agent to produce better code. Not sure. It could be just that I am less familiar with Rust so it feels like it's doing a better job.
Rust is a good choice to let LLMs run without a ton of supervision. In my experience you need to monitor the progress heavily and take ownership of the design of the thing you're building or porting. Test harness is a must. Each iteration should run the test and ensure it doesn't break things in other places.
I am in the middle of porting TypeScript to Rust and learned a ton doing this. You can check out the work in progress here https://github.com/mohsen1/tsz/
Happy to share my learnings on this
Oh wow! That sounds like a massive task. How involved have you had to be? How much is this costing you in AI?
I've been targeting Go instead of Rust for a few things. But same deal, I'm not really a Go programmer and it seems to work well enough. I do have a few decades of engineering all sorts of code bases; so I'm not coming at this completely naively.
My way of compensating for my own inability to do detailed code reviews is making sure the unit, integration, and end-to-end tests cover everything I care about. Without that, you can't be sure it is not skipping detail work. I've also made it do some benchmarking and stress testing and then analyze the code base for potential bottlenecks. After it found and fixed a few issues, it got better. Finally, prompting it to do critical reviews, look for refactoring opportunities, etc. can give you a nice list of stuff to fix next. Having it run memory leak checkers and static code analysis tools is also a good strategy. Once you start running low on issues you find this way, the code is probably not horrible. Or at least you hit some sort of local optimum.
The lack of code reviews sounds pretty horrible. But it is now quickly becoming the biggest bottleneck in AI assisted coding. Eliminating that bottleneck is scary but it enables a few step changes in volume of code that becomes possible. Using strict compilers and strict memory management helps eliminate a few categories of bugs and issues.
I was previously doing this with languages I do understand. Once you start routinely dealing with larger and larger commits, reviews become a problem.
I expect working with larger code bases like this will get a lot easier and better over time. I noticed that the main headaches I face with this type of engineering are the tendency of models to keep deliberately cutting corners, only doing happy path testing, or deferring essential work for later. I suspect a lot of the models are simply biased to conserving token usage. Pretty annoying but also easy to compensate for with follow up prompts and testing. And probably something that becomes less of an issue as the models get tuned to behave better without additional prompting.
I have the same experience. Maybe I’ve used the same amount of time getting the rewrite out, but the amount of quality checking has increased for me. Before, I would probably not have bothered to create end-to-end tests and benchmarks, but now the mental cost of being extra vigilant is so cheap.
My rewrite has been running stable in production for two weeks with a 50x speedup, which has made the doomed old solution viable again.
I wonder what this will mean for future legacy projects and how we should structure programs to fit inside a “rewrite with LLM” size. Maybe a renaissance for microservices?
> It could be just that I am less familiar with Rust so it feels like it's doing a better job.
Dunning Kruger effect. At least you admit it.
This is pretty much the opposite of Dunning Kruger effect.
Yes it generates trash Rust code.
> Not sure. It could be just that I am less familiar with Rust so it feels like it's doing a better job.
Ya think?
Doy!
Given the recent gripe that Bun/Anthropic indicated regarding compile times with Zig (i.e. that their vibe-coded 4x compilation speedup PR wasn't accepted), it appears to me as an "interesting" move to switch to a language that probably delivers 4x longer compilations than even vanilla Zig.
I am very sceptical zig actually compiles faster than rust.
I had similar code written in Zig and C++; cold compilation was many times faster in C++, and incremental compilation was instant in C++.
I think the reason most Rust projects compile slowly is the excessive use of dependencies and also the excessive use of metaprogramming in code.
Zig doesn’t have multiple compilation units so it doesn’t parallelize compilation
You might be interested in learning more about `-fincremental`, that's how Zig gives you fast rebuilds.
I want zig to succeed but given that zig is not yet 1.x I'd imagine a large code base like bun would have difficulties addressing major breaking changes. Also given the fact that bun is using a fork of zig https://x.com/bunjavascript/status/2048427636414923250?s=20
Why not rewrite claude-code in Rust?
So, Anthropic acquires the Bun team because claude-code uses Bun. They port Bun from Zig to Rust presumably because Rust "is better" (imagine big air quotes here). Again presumably, they want to make claude-code "better". Why make it so complicated? With all the power of LLMs they have, surely they can make claude-code the best possible by writing it in Rust directly.
Presumably they aren't falling for their (extremely obvious) "grassroots" marketing, and know, like any good engineer, that LLMs are not the right tool for this.
It's easy to just see Bun as a marketing stunt, as well.
> that LLMs are not the right tool for this.
Claude Code itself is already heavily written by LLMs[0], so I'm not sure what's "this" here. You mean LLMs are okay for writing code but not porting?
[0]: No, it's not just marketing. The codebase was leaked and anyone who glanced at it would realize the claim is likely true.
Because afaik claude code is React rendered as a TUI. They must really want React. I guess that's what happens to one's brain on too much AI.
It mangles the render so often. Now I know why.
"You are absolutely right! Would you like me to delete Bun and rewrite Claude Code in Rust instead?"
Why? Are there particular reasons that the maintainers of Bun feel the need to attempt to migrate from Zig to Rust?
Possibly related to https://simonwillison.net/2026/Apr/30/zig-anti-ai/ where the Bun team wanted to upstream work to Zig that was rejected by a blanket anti-LLM contribution policy.
Code origin was not even a factor https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
Zig is a moving target that has breaking changes in every release (which is fine as they are sub-1.0). But that means that AI tools have been trained on outdated syntax/etc. Zig isn't that common, so there is even less training data to begin with.
Rust on the other hand is pretty established by now and has fewer breaking changes. It also has more compile-time safety guarantees, which makes vibe coding a bit more confident.
On top of that, Zig has rejected their upstream contributions. So they'd have to maintain their own compiler in the long run, which is probably just technical debt to maintain.
Most of my vibe coding is in Zig, and it has been my experience that Claude and Codex both keep up with Zig changes just fine. Every now and then I catch them writing outdated code that they burn some tokens on, but my experience says your local codebase's idioms will influence what gets generated enough to stop this from being a problem.
Are there even breaking changes in Rust after 1.0?
Probably an experiment due to Bun's PRs to Zig being rejected (Zig does not allow AI use). If Rust works well enough, and the alternative is maintaining a fork of Zig, I'd guess they'd go with Rust.
The anti-AI policy had nothing to do with Bun's PRs being rejected. This post[0] by a core zig maintainer explains why the PRs were low quality and subsequently rejected.
[0] https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
Also, if Zig itself doesn’t accept AI contributions, it’s probably NGMI unless somebody is willing to maintain that fork.
If the computer can do it for them, then why not?
[flagged]
Source?
Really? Do you have a source?
Normal, emotionally stable people don’t care if the creators of a programming language disagree with them about tariffs.
Absolute nonsense. Why are you creating rumours?
Picking a pre 1.0 language to build your product always seemed like a bad choice to me. Purely on that basis and ignoring the recent drama this seems like a reasonable idea for tech debt pay down to me. Assuming automated conversion can work without making things worse, which is not exactly a given.
> Picking a pre 1.0 language to build your product always seemed like a bad choice to me.
Such as React Native? :D
React Native is only an application framework. Using a tool with an unstable API a level down the stack seems much worse. Foundations of sand is the phrase that springs to mind.
Yes. And don't get me wrong. I have made a living from it for years now. It's a wild ecosystem. Not for the faint hearted.
Partially; the team would never have expected the project to be acquired before Bun touched v1.0.
Or, even if they 100% expected to be acquired before Bun touches 1.0, you could see how they might not care about this type of tech debt.
So I can't tell if the linked commit is an actual attempt or just an experiment but it did always strike me as odd to make a JS runtime in Zig when my impression was there were a lot of work-stopping compiler bugs at the time.
Considering there has been no public announcement, this is just an experiment, possibly leaked.
The problem with vibe coded re-writes is that you basically sign off on understanding the generated codebase at that point. Any historical knowledge of the codebase is gone.
This prompt defines the translation as a file for file, line for line port. Seems like historical knowledge will be fine.
Having dabbled with both Zig and Rust, they do things so fundamentally differently that an exact line-for-line port like that isn't possible.
It makes the git history a bit more confusing to follow if you want to see old changes, but I'm sure a simple wrapper to check for the zig equivalent files as well wouldn't be very difficult.
Given they have "unlimited" AI usage, do we expect the port to be complete tomorrow?
Comparing this claude/phase-a-port branch with main: “Showing 1,646 changed files with 773,950 additions and 151 deletions.”
And of course, everything was carefully reviewed by a human.
I am also porting TypeScript to Rust. With a different design I managed to make it faster than the tsgo port. I've made a lot of progress in the last 4 months but it needs more work. Contributions are welcome!
https://tsz.dev
The fact that tsz can compile to wasm might actually give you an even more interesting feature that tsgo can't (yet): using the type checker for data validation at runtime.
When I first heard that bun was written in zig, I thought that was an odd choice for such a large project, mostly because the language is "unstable" and is still making significant breaking changes.
I would guess dealing with breaking changes is a big motivation for this.
The only Bun shipped product I've used in anger is OpenCode and I regularly run into segfaults on it. I doubt this is the reason for migration but every time it happens, it reminds me the real cost of unsafe code. That being said, Zig is an absolute pleasure to write and I can't wait until it has a real library ecosystem, Rust's greatest boon.
the rust port (at least currently) heavily uses unsafe as well
https://github.com/oven-sh/bun/compare/claude/phase-a-port#d...
that isn't particularly surprising, but the point is I would expect it to take a while to get things more stable than the Zig version.
That's completely normal as the first step of a language transformation. Actually, it's required if you do a file-by-file transformation first while wanting to maintain interface compatibility.
I'm not sure I would take this kind of path, I would much more focus on refactoring the project to small and easily translatable components with small boundaries, but it's cheap to try things.
How do you even run it with bun?
I get nodejs not found error when running opencode command in terminal. I installed it via bun too.
try `bunx --bun whateverthecommandis`?
For better or for worse, at least Bun is open source, and the world is not lacking a NodeJS alternative.
What is most interesting here for me is:
- a vibe-coding project with a big, clear outcome and acceptance criteria, on
- a public, working, high-performance, full-featured, production codebase, by
- the leading LLM maker, known for the strongest coding ability.
A good example no matter whether it succeeds or not.
If nothing else, it'll be good marketing material targeted at non-technical enterprise executives, who can then pressure their engineering teams in meetings: look, people are porting such complicated things from one language to a totally different one, so why are we not using AI effectively?!
This is a huge loss for the zig language and community.
As a fan of the language, I hope it leads to some reflection on things that might need to change moving forward.
Nah, let the Zig foundation cook.
Both their AI policy and their rejection of Bun's performance PR were level-headed and well-reasoned. And the link seems more like a proof-of-concept than anything else.
It's true corporate sponsors are a big help with language development, but not at the expense of conceptual integrity.
I think I agree more with this take than where I started
I think it reflects more on Bun. [1].
[1] https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
Bun is the largest project written in zig. And it isn't close. Bun is bigger than zig itself. Seems like zig isn't mature enough to handle Bun's needs, so I don't blame them at all for looking for off ramps. Only time will tell if rigidity from the zig team is worth the cost of losing Bun. It might be.
The big loss for the Zig community would be if they stopped donating to ZSF. They have estranged themselves from it for a while.
Bun has stopped donating to the ZSF after the Anthropic acquisition.
At this point, it looks just like an experiment. It's not a definitive "were going to switch".
I think people here are reading too much into it.
It's never been easier to rewrite X in Rust than today.
Will everything eventually be rewritten in Rust and we finally achieve utopia?
why would we need to rewrite twitter in rust? (sorry, couldn't resist)
... or will it all rust away?
OK I'm sorry, I'll see myself out.
So far the wonders of claude/codex have been mostly constrained to applications that are built within the boundary conditions of existing libraries -- the models make direct use of the good work that humans have done to date to build Python, `requests`, `ffmpeg`, you name it.
But I'm excited for the (I think inevitable) stage where the shoggoth starts to reach outside those constraints -- rewriting, patching, renaming, rebuilding libraries, DLLs, binaries -- and we move into a regime where the libraries dissolve, the application floats on top of the shifting sands of an ever more efficient, secure, unified and totally inhuman technology stack.
Obviously this is a horrifying idea in some ways (interpretability, security etc), but it's also not obvious to me that it can't work, especially if there are dedicated, centralized efforts to do this. it's also not clear that interpretability is necessarily mutually exclusive with full slopification/machine rewrite of decades of foundational, incremental development
I suspect that an experiment is being run. In any case, that'll be a hell of a story!
Could just be an experiment or something. It's Monday, the week is young
Rewriting it using an LLM is one thing. But did all the contributors become as proficient in Rust as they were in Zig overnight as well?
They are owned by Anthropic. They have virtually unlimited Claude credits.
Tell me you've never worked with system languages without telling me you've never worked with system languages (telling claude to "write it in Rust" does not count).
Having written a JavaScript runtime in Rust in the past - Rust is an excellent choice. Not just due to the development experience, but also for embedders who want to consume the project as a a library (rather than a binary, e.g. node).
Not sure about vibe-coding it. While they aren't using v8, LLMs made it easier to understand v8 quirks and update v8 as they make weird changes every now and then. It couldn't write the runtime without help though.
For those curious: https://github.com/alshdavid/ion
Didn't they write a whole blog post on why they chose Zig over Rust?
https://github.com/oven-sh/bun/issues/30197
It seems there was an issue where the image API ignored the ICC profile (now fixed). Any developer with experience implementing image formats would almost certainly have avoided this mistake. This is a problem that cannot be solved with vibe coding. In this situation, the user is merely a guinea pig for bug fixes.
... and that bug was spotted in the canary release, reported and fixed.
Sounds like responsible open source software development to me. That's what pre-releases are for.
April 26th - Bun announces they used AI to fork Zig so they could make an optimization for a 4x improvement
April 27th - Zig contributor mlugg clarifies why the specific optimizations Bun did were ill advised and wouldn't have been accepted in Zig, regardless of AI use [1]
May 4 - Bun is looking into Rust as an alternative.
This, to me, seems like total whiplash. Has anyone at Bun made a statement on why they're making such dramatic changes? It seems like the lesson to internalize from mlugg is not "switch to Rust"
[1] https://lobste.rs/s/ifcyr1/contributor_poker_zig_s_ai_ban#c_...
Zig is a pre 1.0 language, subject to many breaking changes and has thousands of (stranded) issues on its GitHub.
It was always a risky proposition to use Zig unless you were philosophically committed to helping the language develop, or a die-hard fan. If not, their jumping to some other language should not be such a big surprise.
They may come to the conclusion that Zig is incapable of delivering on its promises or is deficient at satisfying their requirements.
> They may come to the conclusion that Zig is incapable of delivering on its promises or is deficient at satisfying their requirements
Sure, but what you're suggesting is not related to the timeline I gave. They did not determine Zig was deficient in some way. They tried to get a cheap gain, and the gain breaks parts of Zig and they didn't even realize it, and it was worse than the gain already available in Zig. That seems less like they've made a pragmatic choice about speed and more like they are doing headline based development.
What you write makes it sound like there's a pragmatic process being followed that only you are privy to, and I'd like to know what it is. Zig may be inappropriate for Bun after all, but this makes it look like they don't understand what they are doing, and the agentic coding doesn't help.
I would assume that Zig was a risky choice to start with, and Rust was always lurking as a sensible option around the corner. This was probably just the straw that broke the camel's back.
It's a "you can't tell me what to do" reaction, to be honest.
https://x.com/bunjavascript/status/1966806250827714736
Haha, is it really okay not to retract the caricature criticizing Rust that the official account previously posted?
Yes, it's quite ok to not "retract" a goofy image from months ago. It's harmless fun.
this isn't vibe coding. this is vibe rewriting. ~500k lines of code. nobody is reading those diffs line by line. nobody.
>*No `tokio`, `rayon`, `hyper`, `async-trait`, `futures`.* No `std::fs`,
I'm not a rust dev but even I kind of notice that tokio is kind of shunned in most projects. Why is that? Is it just bad or what?
It's not really shunned - it's the standard solution for async in Rust - but it's not the right solution for every project, especially if you have specific requirements for how your project's computation should be scheduled. I would guess that Bun is one of those projects, especially as it needs to be able to schedule JS async work itself.
The answer is in the next sentence: "Bun owns its event loop and syscalls." They clearly want to manage their use of threads explicitly, which is not _unusual_ for systems programming but probably less common. Note that `rayon` is different from most of these in that it has nothing to do with async Rust - it's a tool for spreading computation over a thread pool, very popular in non-async projects, but it would also go against their goals here.
tokio is great and it's pretty performant, but you pay an allocation for every future unless you do some complex organization of your futures.
Source: I worked on Deno, competed directly with Bun on HTTP performance (and won on some metrics).
Edit: and of course I typed future instead of task (aka "spawned future"). Thanks, child commenters below. Much of Deno was built on spawning futures that mapped to promises and doing it as fast as possible. I spent ages writing a future arena to optimize this stuff..
Do you mean allocate on every task?
You only allocate on box futures, which are much more rare than naked futures - generally only used where object safety (essentially dyn support) is required. Even then some workarounds exist.
Edit: and tasks.
It's an async runtime. The whole async-await flow removes a little bit of scheduling control and adds some forced memory management in order to give you some nicer code in an application case, but if you're trying to build a runtime yourself I think you'd much rather retain control in this case. It's just hard to reason about.
You much rather have this runtime you're building manage task scheduling and allocation and all that. It's the most natural design choice to make.
You shouldn't have to pull in big complex dependencies to do what should be primitive things. Zig is putting a strong and thought-out effort into getting async & parallelism "right" inside the stdlib. I'm honestly not up to speed with where rust is at with it at the moment, but last time I checked it was a bit of a mess.
In pretty much every bit of code I've written both professionally and leisurely I have always used tokio.
However, there are reasons why you might not want to use it:
- You don't need async at all
- You want to own the async execution polling completely
- You want some alternative futures executor like io uring (even though tokio-uring is a thing)
`tokio`, and Rust `futures` in general, are perfectly fine for typical applications.
But as soon as you need something that doesn’t fit neatly into the abstractions they provide, even something as seemingly simple as proactively reusing or cancelling sessions, things quickly become extremely complicated, inefficient, and unreliable.
For high-performance servers, where you really care about raw performance, DoS resistance, and taking advantage of modern kernel features, these abstractions can become a major limitation.
It’s a bit like using an ORM that gives you no easy way to send raw SQL queries. It works fine for common cases, even if it’s not always optimal. But when you really want to take advantage of what the database can do, you usually avoid the ORM.
Tokio is a general purpose async runtime. Much the same could probably be said for async-std (except IIRC they do have a barebones reactor for you to build your own on). In general, a general-purpose async runtime will do worse for highly specific tasks than a purpose-built one (especially e.g. NUMA).
I think avoiding async entirely might be a mistake, and I'm not entirely convinced anything better than a general-purpose async runtime might exist for a JS runtime (it itself is general purpose after all).
Avoiding std::fs is fucking bizarre to me: it's completely sync and is a really lightweight abstraction over syscalls.
my guess is they want to do I/O as part of their event loop explicitly, and blocking a thread in a syscall waiting for an IOP (à la std::fs) isn't the vibe.
Async is much harder to work with than sync+threading is. And while threads have more overhead in theory, in practice almost nobody is writing applications at such a scale where that overhead actually matters. So I don't blame them for eschewing async, there's likely no benefit for the project in it.
You try to use it you'll get it. Otherwise it's just words. Like these: rust failed at async.
Async is an anti-pattern but sometimes inexperienced developers don't realize that and will infect your codebase with it.
Please explain.
Bun can't be used for anything serious, only as a "script kiddie" to run small scripts.
Trying to run it as a replacement for node in persistent backend/api scenarios is just plain broken.
RSS grows unbounded under Bun: https://discord.com/channels/876711213126520882/148058965798...
Probably a good thing for the project even if the only net positive ends up being the Bun team stops maintaining a fork of Zig.
Just checking some loc numbers from nodejs, bun and deno:
On nodejs: `tokei src`: 98333 LOC C++ Code
On bun: `tokei src` 573572 LOC Zig Code
On deno: `tokei libs cli runtime` 289573 LOC Rust Code
This seems wrong though, so it would be appreciated if someone who knows the structure of these projects could correct me on the folder names.
Doing `tokei lib src test deps` gives more than 5M LOC, but I'm not sure that is fair.
I wonder if something like Haxe, a language that was able to transpile to several languages would be the best target for LLMs. They could always generate haxe and then transpile it to whatever language the user wants. Probably not for an already ongoing project like this but for a greenfield one.
Aside from Zig's anti-AI stance and maintaining their own Zig fork, I think this port will showcase that Anthropic can re-engineer a massive codebase.
As an aside, I've been bitten by Zig's breaking changes on my own projects as well. It's taken the shine off of Zig and I'm looking at alternatives.
I think they are simply experimenting to fully exploit Claude's models' powerful capabilities.
This feels more like a reaction to Zig's anti-LLM policy than anything. Anthropic would probably like to contribute something back to Zig at some point, but I doubt anyone would ever believe their PRs were not written by Claude.
Exactly, this is a direct response to Zig refusing to accept pull requests from Bun (and Anthropic). That situation forced Bun to maintain a fork of Zig, and it makes sense in the long term that they'd rather port their entire project to Rust.
I've really enjoyed Bun the past year or so, but the acquisition by Anthropic, Bun's codebase and documentation increasingly becoming AI slop, and this impulsive complete rewrite - all of it has ruined it for me and I'm actively moving off of Bun. I don't feel comfortable relying on it any longer.
Zig said they wouldn't have accepted the changes without AI either.
I hope they ship and use this. It’ll be a super interesting case study in a few years.
If they really started the work this week, we'll probably see by the end of June.
Interesting. When I thought of Zig, I thought of Bun. In my mind it was the flagship application for that language. Is there another? I wonder how the Zig team feels about this. To me it seems like Rust has definitively won now.
Ghostty is mainly Zig aside from the UI parts.
That TigerBeetle database I think.
Any confirmation that a genuine port is underway? This might just be an experiment.
I don’t understand that effort. They could use Deno and be done with it.
Maybe Mythos told them to quit using Zig because it is not safe.
You don't need Mythos for that; just open the Bun issue tracker and filter for "segmentation fault".
Unexpected; I was expecting them to keep maintaining a Zig fork.
That PORTING.md file is massive and seemingly comprehensive. Was that AI written as well? Is there a general Zig to Rust porting template being used?
Let the guy cook; it would be a nice benchmark of LLMs, nothing else. Damn, I wish I had access to infinite tokens for crazy experiments like this.
Alright, back to Node.
I was hopeful for this project, and I've reported crashes and bugs in the bundler with the hope that it would stabilize over time, but this is just silly. I'm not going to risk them pulling the rug out from under me and replacing the runtime with a million lines of vibecoded Rust.
"Claude, migrate bun to Rust, make no mistakes"
> Read this whole document before writing any code.
Hm does that actually work?
Edit: in a way that can be verified, and not the AI tool saying it did
Which makes one wonder: why didn't they buy Deno in the first place, then?
If they had, I guess they would be rewriting Deno in C++.
oh for christ’s sake
@dang: is this the kind of curious conversation that you're cultivating?
Just curious why Go was not an option. The TS compiler was rewritten in Go.
Interesting. What are the main trade-offs they expect from the switch?
I can't imagine going from reviewing code in Zig to letting Claude Code handle it in Rust. That seems like a lot of change to deal with in a short amount of time. I wonder how much the Bun team culture will change? We've been really liking Bun so far.
Poor Zig - it's bleeding now.
Everyone wants to be a Rustee these days.
This makes me so scared to work on OSS. If people saw every random draft PR, branch, or design doc I ever made, no doubt the community would be furious.
I am not a fan of AI, but my limited experience running small local LLMs did show me that rewriting some scripts into a different language worked really well. So my guess is this will turn out just fine.
How well does that long translation prompt work?
Maybe Anthropic should've just acquired Deno.
Can't Claude Mythos do the porting?
instead of writing it once in C++
I mean this is self-evident. Bun got bought by Anthropic to shill in the open source space:
https://bun.com/blog/bun-joins-anthropic
"I got obsessed with Claude Code"
So the bad, bad Zig that opposes the clanker mania has to be punished, even if top comments deny it.
Anthropic is one of the most evil companies in existence today. Whenever someone produces something, they steal it.
The day is not far off when Go itself gets ported to Rust.
Watch your mouth.
Here we go again ...
Company A buys company B. A's management decrees that henceforth B's acquihired team must comply with company A's standards.
Second system effect kicks in. Bugs multiply.
Half of original company B devs leave.
I'm investigating whether future projects should revert to using Deno.
Bun is showing its lack of experience and guidance.
Great. Everyone should use Rust.
It will make it more portable.
What a win.
I guess it's like Trump saying, "I'll take Greenland too..."
you can use both zig and rust in a single project, duh
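Mixing the two typically goes through the C ABI. A minimal sketch of the Rust side, exporting a function that Zig (or C) code in the same project could link against; `bun_add` is a hypothetical name, not something from the actual Bun codebase:

```rust
// Export a Rust function over the C ABI so other languages can call it.
// `#[no_mangle]` keeps the symbol name stable; `extern "C"` fixes the
// calling convention.
#[no_mangle]
pub extern "C" fn bun_add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // A Zig caller would declare the same symbol as:
    //   extern fn bun_add(a: i32, b: i32) i32;
    // and link against the compiled Rust static or shared library.
    assert_eq!(bun_add(2, 3), 5);
}
```

This is exactly why "port everything" vs. "call across the boundary" is a real trade-off: the FFI itself is easy, but every crossing is an `unsafe`, untyped seam.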
We can even use all PLs in a single project. The starting question should be something like: "Which part do we code in Brainfuck and which in Whitespace?"
Multi-language codebases are a nightmare to work with.
Hahaha, eat your heart out, "don't port it to Rust" gang.
I don't think the problem has ever been Rust; Rust is by far the best systems programming language.
The problem is fanboys like YOU.
I fully support this decision
People are asking why they would switch from Zig to Rust. I wonder the opposite: why would anyone use Zig over Rust?
Yeah, it's not clear. Especially since the rise of LLMs is going to chip away at Zig's strong points (simplicity, at the cost of less safety) as time goes on. Which might be part of why they're so stressed about it.
Makes sense on merit. There really isn’t room for Zig when Rust exists, is more ergonomic, and also safe.