Comment by sesm
17 hours ago
When announcements say that rewrite took 1 week, I wonder how much time went into preparing this file with very detailed instructions on mapping Zig to Rust idioms: https://github.com/oven-sh/bun/commit/46d3bc29f270fa881dd573...
On top of that, if you look at 'Pointers & ownership' and 'Collections' sections, the Bun codebase is already prepared, using internal smart pointer types that map 1-to-1 to Rust equivalents, and `bun_collections` Rust crate already exists.
This gives the impression that the rewrite was prepared a long time ago and was a Bun team proposition to Anthropic during the acquisition deal.
Yeah I don’t know what’s true when reading about LLMs. Same with comments here on hacker news. So much money on the line it’s clear they would seed communities with marketing shills (and some people are just tribal).
Same since they own Bun, they have every incentive to make this seem easier than it was.
This is a huge problem regarding the specifics of AI. Tech is becoming very adversarial as a worker, since marketing and technical information are blurring lines more and more.
Influencers are getting paid tens of thousands of USD to promote AI. This is one of the reasons social media has been swamped with it lately.
> Tech is becoming very adversarial as a worker, since marketing and technical information are blurring lines
Since one of LLMs' largest markets (with product fit) is us developers, we are experiencing what the crypto bros did to others.
You can just use AI for yourself and see. It isn't some mysterious product that only a few people get to use.
This is the thing. I do use LLMs (mostly Anthropic).
It just does not generate good, usable code. I have to review every single change to a higher degree than I would my own code because it likes to slip in hidden nasties. I have to rewrite at least 50% of what it generates.
That being said, I know devs who swear that they don't even write code anymore. Like this Rust port. I can't even fathom blindly merging something this massive.
I think we're still seeing pretty wild variance in how effective LLMs can be for code, depending on who is driving it. I've seen some folks getting themselves into messes pretty regularly with LLMs. But, ever since Opus 4.5, it's been pretty obviously better to work with it than without it, remarkably better in some use cases. Porting an application with source available and a huge existing test suite is pretty much the ideal use case for an LLM. It has everything it needs to succeed. I can't imagine why anyone would embark on a porting effort without an LLM at this point.
While this is true, it's also true that few people have the budget to spend a bunch of tokens on porting bun over to rust.
Most people do use LLMs, which is why they have the so-called pessimistic opinions they do.
I'm not sure it matters what anyone claims. It's easy to use and experience its abilities and limitations.
The truth lies somewhere in the middle.
Context: 20 years coding, 13-ish of which professional. Using LLMs for side projects, including a very big one. Also using them to help manage our home server.
I’ve used 20-ish agents with OpenRouter, Google’s own AGY, Mistral’s Vibe, and Claude Code. The good ones are good and can be very helpful with spec’ing work or handling repetitive tasks. Except for Opus 4.6, none of them produce TypeScript that I’d be super proud of; but they write stuff that’s good enough compared to what I’ve seen in the industry. It’s always some mix of spaghetti and shortcuts. That’s fine, you steer the model and tighten your specs and tests.
Anyone claiming ‘Model X can one-shot’ an app is delusional about maintainability, deployment, all the little things that grease the wheels. Anyone claiming ‘LLMs are useless’ is probably not being impartial. That’s it.
And any company claiming AI is awesome at everything and will replace everyone? Yeah, they’re lying, at least about their capabilities as of right now.
Similar highly crafted success stories are getting passed around within some big tech firms right now.
We got told that someone wrote a huge, sophisticated driver in Rust in a single day using Claude Code. This is being pushed as a case of AI doing something that we encounter on a regular basis, way faster than a human could do it.
Some omitted details: it turns out the official spec for this driver is written in C, and the standard has a massive official suite of unit tests.
Ignoring things like whether the Rust output could be deemed qualitatively good, whether the resulting line count is appropriate, how much the codebase was ready or primed for this kind of exercise going in, and so on: is it fair to say that a 622-line artefact created up front is a relatively small cost for a potential increase in consistency or quality of output when the output is ~1M LoC? It seems like there's a multiplicative power here given how much output there is. Or is that missing a lot of nuance?
I'd also be interested generally in how much tacit knowledge was needed to come up with these rules and how much iteration on this file was needed, for example how many of the rules here came from a failure case hit as part of iterating on the translation.
This is effectively a very expensive and resource-intensive machine translation. As such, there is no increase in consistency or quality of output.
The translation is a starting point to enable follow-on work to take advantage of Rust's features.
How would you have achieved this “machine translation” without an LLM?
It seems to me it would have been highly likely to be more expensive and more resource intensive - if realistically possible at all, short of implementing a general Zig to Rust translator first.
> I'd also be interested generally in how much tacit knowledge was needed to come up with these rules and how much iteration on this file was needed, for example how many of the rules here came from a failure case hit as part of iterating on the translation.
I think that's the point the original poster was making. There's basically zero chance this file was just spit out by memory in an afternoon. It was obviously the result of a LOT of pre-planning and back and forth checking over the artifacts that Claude was incorrectly generating for one reason or another. So yeah, an extremely iterative process.
With rules as fine-grained as these, there were almost certainly many instances where hundreds of files are generated -> one particular file doesn't translate <X> correctly -> add a rule for <X> -> regenerate everything again -> crap, that rule broke a different file because <Y> -> add a rule for <X if Y>, another for <X not Y> -> regenerate everything again[0] -> repeat. The token costs must have been out of this world.
0: now I'm sure people will say "why would you regenerate a file that generated correctly once? Just mark it off the list and move on." Well, when essentially 99.9999% of your codebase is generated artifacts, the tiny fraction that is actually human-understandable is now the spec, the source of truth for everything. It HAS to be able to essentially redo the entire process if you expect any level of maintainability going forward.
I would guess it was a for-each loop. They likely wrote a bunch of skills. The for loop went through each file and generated a complementary file, then had another process integrate/validate.
I doubt the entire process was a single week, just whatever harness they specially prepared for the work.
> I doubt the entire process was a single week, just whatever harness they specially prepared for the work.
it wasn't. probably quite a lot of preparation i would think. and it's very much a first pass which is far from idiomatic rust and far from memory safe. still impressive though for what it is.
https://x.com/jarredsumner/status/2053588764774269292 https://x.com/jarredsumner/status/2054984043708740093
> using internal smart pointer types that map 1-to-1 to Rust equivalents
Smart pointers weren't invented by Rust. If you write code in other languages with pointers, you mentally model the same types already.
> and `bun_collections` Rust crate already exists.
This is wrong. It's part of the PR in the codebase. It did not previously exist.
Agreed, after a closer look the smart pointer types are pretty standard and the collections were indeed part of the migration.
But still, in order to prepare those detailed and very project-specific instructions you need to iterate on trying to convert the files from this specific codebase.
Yes, there is exaggeration going on.
Nonetheless, it's a fact it would have taken much longer without LLMs, if it would have been possible at all.
I find this is a valid success story if you can look past the embellishments. More than that, it’s really cool, actually.
It's like that hackathon-winning project that everyone knows wasn't ideated or built there. True to the letter of the law, not to the spirit.
Based on the use of "≥" and em-dashes, I'd say this markdown file was written with or by an LLM.
It's the same thing with their gcc stunt.
It would be _so_ easy to alleviate any doubt from this and hype up the IPO even more. They just need to start a separate repo with all the hidden work they needed to do to prod the AI along, and let everyone replicate the results. After all, isn't that what all their customers are trying to achieve? A million lines of usable code in "7" days? Never mind the fact that it will also boost Anthropic's usage metrics as everyone tries to replicate it into their workflows.
If it was beautiful, they would've started with a blog post about this with links and instructions. Perhaps I will still be proven wrong and a blog post is being written as I type this.
Which part of a Zig to Rust port (working, passing tests) of a quite large codebase in a little over a week is not worthy of hype do you reckon? That they didn't one-shot it? What could possibly make it impressive if not the sheer velocity of the thing? That's a months or years long operation for a human. There's a reason porting large programs to new languages was vanishingly rare throughout most of computing history, and there's a reason people are suddenly doing it almost on a whim, now.
Given Zig's instability (as in frequent breaking changes), it wouldn't surprise me if they intentionally designed Bun from the start in a way that makes it easier to migrate to Rust if needed.
Seems like Zig Bun had 3 pointer types that map neatly to existing Rust pointer types. The other 7-8 needed types to be created.
Is that the conspiracy?
bun_collections doesn't look much older than the porting guide.
That makes the Bun owner's claim on this site just a week ago even more dubious, when he came on here and said this code was just an experiment and likely to be thrown away.
I don't think the owner lied, but rather that the entirely speculative comment on here is obviously wrong.
It says here in the comments that the speculation is mistaken about the crate's supposed previous existence.