Comment by Jarred
6 days ago
cargo check reported over 16,000 compiler errors when I wrote that message. It could not print a version number or run JavaScript. I didn’t expect it to work this quickly and I also didn’t expect the performance to be as competitive. There’ll be a blog post with more details.
If this experiment ends up resulting in a real migration path, I think that would be completely awesome. Maybe it means we have a chance to revive older projects such as ngspice [0], but with modern affordances and better safety properties.
From your post, though, it sounds like Bun may have been a pretty direct rewrite, without too many hard choices along the way. Is that fair?
[0] https://ngspice.sourceforge.io/
I hear your suggestion without feeling the need to make the far too common Linux/developer response of “but if you just do all this other stuff and run it this special way and install 15 dependencies and compile XYZ lib from source then clearly it works fine and you’re mistaken”.
That’s exactly the type of thing that is needed: optimizing projects for modern compatibility, portability, and safety when other modernization efforts or forks don’t exist.
That said, I suspect this rewrite went so quickly and so smoothly because it had the benefit of (effectively) 100% test coverage already in place in a really well-defined system. Most open source projects spawn from the efforts of a single developer who frequently never wastes time writing tests for a little side project. Later, as the project grows, they rarely stop and go back to implement testing. So if you’re truly working with an old, dead project, there is a really good chance there are zero tests to be found. It is far more difficult to reach the same completeness then, unless the goal is simply to port all of those same problems to a new language and hope type safety fixes them.
(Not specific to ngspice, just mean generally.)
You can instruct an LLM to improve the test coverage.
1 reply →
I've found Rust to be pretty enjoyable to work with in terms of agent-assisted development. Easier still if you have something you're trying to port or recreate in Rust for various reasons. There are definitely some rougher edges around a few things as you get more general-purpose in terms of app targets. Some of the DB engines could use some work or may be missing interfaces you use in other supported languages/platforms... There's a somewhat limited set of UI options, and no clear winner.
Lifetimes can get pretty hard in very complex code bases... even if other aspects of borrow checking are more commonly understood, this is where I've had and seen the biggest gaps in understanding in practice. That said, you can usually do inefficient things to work around these issues, with the opportunity to come back later. Often inefficient Rust with lots of clone operations is still faster, smaller, and lighter than the same services in Java or C#, as an example.
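As a minimal sketch of that workaround (the `Config` struct and `bump` function here are made up for illustration, not from any real codebase): cloning a field instead of holding a borrow across a mutation sidesteps the lifetime entanglement at the cost of an allocation.

```rust
// Hypothetical example: holding a borrow of `config.name` while also
// mutating `config` would be rejected by the borrow checker; cloning
// the string trades a small allocation for simpler lifetime bookkeeping.
struct Config {
    name: String,
    retries: u32,
}

fn bump(config: &mut Config) -> String {
    let name = config.name.clone(); // clone instead of borrowing across the mutation
    config.retries += 1;            // mutation is fine: no outstanding borrow
    name
}
```

Inefficient, sure, but it compiles today and can be revisited later if profiling says it matters.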
[flagged]
As an amateur in the space: I download on Mac, run `ngspice`, "Error: Can't open display: :0". I look in the code - hardcoded X11-era assumptions. Not exactly modern affordances...
Then I try to understand and extract the actual formulas, and there isn't a clean formula layer anywhere. Everything is procedural; e.g. in `b4v6temp.c` the formulas are tangled with branching, caching, and model-state mutation. Extracting the computation, embedding it cleanly, and exposing it through a sane API feels like hair-pulling work.
So yeah, maintained, but not as in the 'modern, embeddable, understandable software component' I'd be looking forward to in a rewrite. Maybe not even touching the simulation core, just rewriting the embedding/API layer and the UX, would already be a big deal.
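For illustration only, a 'clean formula layer' could mean pure functions with explicit inputs and outputs and no hidden state. The linear temperature-coefficient formula below is a generic placeholder, not an actual equation from `b4v6temp.c`:

```rust
/// Hypothetical sketch: a pure temperature-scaling function with explicit
/// parameters instead of mutated model state. The first-order formula is
/// a made-up placeholder, not a real BSIM equation.
fn scaled_param(nominal: f64, temp_k: f64, tnom_k: f64, tc1: f64) -> f64 {
    nominal * (1.0 + tc1 * (temp_k - tnom_k))
}
```

A layer of functions like this would be trivially testable and embeddable, regardless of what the simulator core does around it.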
6 replies →
I see "sourceforge" and immediately think "this project is way behind the times and is going to pose a lot of issues to new users, if it's still active".
2 replies →
I think this is highlighting the problem the poster you're responding to laments!
+1, a project presenting at FOSDEM certainly does not need a "revive".
13 replies →
UPDATE: This would make for an excellent case study if you don’t mind sharing the details. I am very curious about the number of agents, hours it took, and models used (did you use Mythos?).
This would not have been possible 5 years ago. LLMs are going to push us into the space age. Both Anthropic and OpenAI have committed to spending 10s of billions of dollars on training alone for the year. I am equally excited and terrified at the pace of progress!
Rust is really fun to work with and the compiler is great, just make sure the rewrite takes compile times into account since larger projects often have to be organized in a way that makes compilation reasonably fast.
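One common way to keep compile times in check (a sketch with hypothetical crate names, not Bun's actual layout) is to split the project into a Cargo workspace, so crates build in parallel and only changed crates rebuild:

```toml
# Hypothetical workspace layout: the member names are placeholders.
[workspace]
members = ["core", "runtime", "cli"]

# Keep dev builds fast: no optimization, reduced debug info.
[profile.dev]
opt-level = 0
debug = 1   # line tables only; cheaper to emit than full debug info
```

Dependency direction matters too: keep heavy proc-macro and generic-heavy crates near the leaves so an edit doesn't invalidate the whole tree.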
https://x.com/jarredsumner/status/2053050239423312035
This is at least partially disingenuous. Zig is working on, and has already shipped for some situations, a faster compiler. Bun runs on an outdated version of Zig that doesn't include it.
In my experience Bun in Zig compiles more slowly than Deno in Rust.
Single compiles for sure. Where Zig is optimizing compilation is in the incremental compiler, which I've seen compile the compiler itself in an instant after a single line change. Of course, that kind of speed is probably not interesting to some people if the AI is writing tons of lines of code before they go to the compilation step.
5 replies →
What coding model are you using for the rewrite? Opus for everything? A prerelease model like Mythos?
Just an aside: is there any way to know how many of those 16,000 compiler errors are independent? I mean, could it be that just by changing, say, 500 lines of code all those errors disappear?
Perhaps 16,000 mostly measures cascade breakage; for example, one lifetime mismatch can cause errors in every function that tries to use that reference.
Rust reference lifetime bookkeeping is a difficult task for LLMs. The LLM has to maintain, across multiple functions and structs, which references outlive which. Furthermore, compiler messages are highly contextual, and lifetime patterns are sparse in the training set.
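To make the cascade concrete (a hypothetical sketch, not Bun code): a struct that borrows from its input forces a lifetime parameter onto every signature that touches it, so getting one annotation wrong surfaces as errors at many use sites rather than one.

```rust
// Hypothetical example: `Parser` borrows its source string, so the
// lifetime 'a threads through the struct, the constructor, and the
// return type of next_token. Dropping or mis-scoping 'a in any one
// place produces errors everywhere the borrow propagates.
struct Parser<'a> {
    src: &'a str,
    pos: usize,
}

impl<'a> Parser<'a> {
    fn new(src: &'a str) -> Self {
        Parser { src, pos: 0 }
    }

    // The returned slice borrows from `src`, not from `self`,
    // so callers can hold tokens while continuing to parse.
    fn next_token(&mut self) -> Option<&'a str> {
        let rest = &self.src[self.pos..];
        let tok = rest.split_whitespace().next()?;
        // Advance past leading whitespace plus the token itself.
        self.pos = self.src.len() - rest.trim_start().len() + tok.len();
        Some(tok)
    }
}
```

Change `Option<&'a str>` to `Option<&str>` (tying the token to `&mut self` instead) and every caller that holds a token across another `next_token` call breaks at once: one decision, many error sites.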
That's a post I am eagerly waiting to read.
Basically we are seeing now an "inverse Hofstadter's Law", where doing something with an LLM takes less time than expected, even when you take this law into account.
I am a Rust developer myself, but I really love Zig and Bun. I am just overly curious about all this.
> Basically we are seeing now an "inverse Hofstadter's Law", where doing something with an LLM takes less time than expected, even when you take this law into account.
Even LLMs themselves can't accurately estimate this (though this may be out of distribution stuff)
LLMs have no conception of time, unless you explicitly feed in timestamps to the context
3 replies →
This does not surprise me in the least. Several Claudes are very good at splitting up and working through them all.
I think given the current mood of things, it would be prudent to not make such strong assertions on anything. Trust is in increasingly short supply these days.
Nothing Jarred said is an assertion other than "There’ll be a blog post with more details."
"I didn’t expect it to work this quickly and I also didn’t expect the performance to be as competitive."
These are two assertions. There could have been a prior secret rewrite that took much longer than six days and this is a marketing stunt for Anthropic. In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic.
5 replies →
> I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.
haven't used zig...(only used rust)
but zig doesn't solve those problems?
Zig is a middle ground. It solves some of the common foot-guns in C, without the costs of the affine/substructural typing that gives Rust its superpowers.
I am of the opinion that it is horses for courses, not a universally better proposition.
Because my needs don’t fit in with Rust’s decisions very well I will use zig for personal projects when needed. I just need linked lists, graphs etc…
While hopefully someone can provide a more comprehensive explanation here are the two huge wins for my use case.
1) In Zig, accessing an array or slice out of bounds is considered detectable illegal behavior.
2) defer[0] allows you to colocate the freeing of resources with the code that acquires them.
That at least ‘feels’ safer to me than a bunch of ‘unsafe’ rust that is required for my very specific use case.
I was working on some eBPF code in C and did really miss zig.
For me it fits the Pareto principle but zig is also just a sometimes food for me, so take that for what it is worth.
[0] https://zig.guide/language-basics/defer/
Fwiw you don't need unsafe for graphs or linked lists in Rust. At least not directly - these things can be abstracted. The petgraph crate is the most popular for graphs. I'm not sure about linked lists because linked lists are the wrong choice 99.9% of the time.
I've written hundreds of thousands of lines of Rust and outside of FFI, I've written I think one line of unsafe Rust.
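As a minimal sketch of the point above: a singly linked list in safe Rust using `Box` for ownership, with no `unsafe` anywhere. This is a textbook pattern, not code from any particular crate:

```rust
// Hypothetical example: a stack-like singly linked list in entirely safe
// Rust. Each node is owned by a Box; Option::take moves the head out
// without ever touching raw pointers.
struct Node<T> {
    value: T,
    next: Option<Box<Node<T>>>,
}

struct List<T> {
    head: Option<Box<Node<T>>>,
}

impl<T> List<T> {
    fn new() -> Self {
        List { head: None }
    }

    fn push(&mut self, value: T) {
        let node = Box::new(Node { value, next: self.head.take() });
        self.head = Some(node);
    }

    fn pop(&mut self) -> Option<T> {
        self.head.take().map(|node| {
            self.head = node.next;
            node.value
        })
    }
}
```

Doubly linked lists are where safe ownership gets genuinely awkward, which is part of why the standard library's own implementation uses unsafe internally.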
15 replies →
Zig has unmanaged memory. But Rust also allows memory leaks, and they're not uncommon in large, complex programs. So this rewrite will not necessarily control for that.
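For example (a self-contained sketch, not from Bun): an `Rc` reference cycle leaks in entirely safe Rust, because neither strong count can ever reach zero.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical example: two nodes that point at each other through Rc.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn make_cycle() -> usize {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b));
    // Each node is kept alive by a local binding plus the other node's
    // `next` field. When `a` and `b` drop, both counts fall to 1, never
    // to 0, so the allocations are never freed -- a leak with zero unsafe.
    Rc::strong_count(&a)
}
```

Leaking is explicitly outside Rust's safety guarantees (`std::mem::forget` is a safe function), so "memory safe" never meant "leak free".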
What language doesn't allow memory leaks?
5 replies →
Zig doesn't even have RAII...
which is a good thing. C++'s RAII is magic-sauce that does a lot for you when you can simply use `defer` in zig. A constructor is just a function call. A destructor is just a function call.
11 replies →
Nope! Zig is like C in this regard. There’s no borrow checker. Managing memory is your responsibility.
It gives you a few more tools than C - like a debug allocator, bounds checked array slices and so on. But it’s not a memory safe language like rust.
It's not... but I'm pretty sure it could be. You could probably even take this (WIP) idea and bolt on a formal verifier pretty easily.
https://github.com/ityonemo/clr
10 replies →
Those tools exist in C tooling as well; that many ignore them is another matter.
MSVC has had a debug allocator since at least Visual Studio 5.
It is quite obvious that Zig is pre-1.0, with thousands of outstanding unresolved issues (per their GitHub repo). A review of Zig hype gives the strong impression it was created by being relentlessly and suspiciously pushed on HN, beyond logic or its language rankings (per TIOBE or GitHub stats), so that many were under the illusion that the language was something more or other than what it really is.
Zig is still under development and in beta. Instability, crashes, and leaks should not be surprising; they should even be expected. To stick with a beta language, companies and developers are usually philosophically and/or financially aligned with it. An example is JangaFX and Odin, where they have not only committed to using the language (despite it being beta) in their products, but have directly hired GingerBill.
Team Bun appears to have "alignment and relationship issues" with Zig, to the point they have decided to extensively explore their options. Now Bun is rewritten in Rust. They are seeing if Rust solves their requirements. As with any relationship, if one ignores or takes a partner for granted, don't be surprised if they want a divorce or jump to someone else.
You might want to check their Codeberg then, because they've moved all their development over there...
2 replies →
Peter Naur: Programming as Theory Building
Bun: Hold my beer