
Comment by legerdemain

6 days ago

From 4 days ago: https://news.ycombinator.com/item?id=48019226

  > I work on Bun and this is my branch
  >
  > This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
  >
  > I’m curious to see what a working version of this looks like, what it feels like, how it performs, and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.

cargo check reported over 16,000 compiler errors when I wrote that message. It could not print a version number or run JavaScript. I didn’t expect it to work this quickly and I also didn’t expect the performance to be as competitive. There’ll be a blog post with more details.

  • If this experiment ends up resulting in a real migration path, I think that would be completely awesome. Maybe it means we have a chance to revive older projects such as ngspice [0], but with modern affordances and better safety properties.

    From your post, though, it sounds like Bun may have been a pretty direct rewrite, without too many hard choices along the way. Is that fair?

    [0] https://ngspice.sourceforge.io/

    • I hear your suggestion, without feeling the need to give the far too common Linux/developer response of “but if you just do all this other stuff and run it this special way and install 15 dependencies and compile XYZ lib from source, then clearly it works fine and you’re mistaken”.

      That’s exactly the kind of effort that’s needed: optimizing projects for modern compatibility, portability, and safety when other modernization efforts or forks don’t exist.

      That said, I suspect this rewrite went so quickly and so smoothly because it had the benefit of (effectively) 100% test coverage already in place in a really well-defined system. Most open source projects spawn from the efforts of a single developer, who rarely wastes time writing tests for a little side project. Later, as the project grows, they rarely stop and go back to implement testing. So if you’re truly working with an old, dead project, there’s a really good chance there are zero tests to be found, and it’s far more difficult to reach the same completeness, unless the goal is simply to port all of those same problems to a new language and hope type safety fixes them.

      (Not specific to ngspice, just mean generally.)


    • I've found Rust to be pretty enjoyable to work with in terms of agent-assisted development, and easier still if you have something you're trying to port or recreate in Rust. There are definitely some rougher edges as you get into more general-purpose application targets: some of the DB engines could use some work or may be missing interfaces you'd use in other supported languages/platforms, and there's a somewhat limited set of UI options with no clear winner.

      Lifetimes can get pretty hard in very complex code bases... even if other aspects of borrow checking are more commonly understood, this is where I've had and seen the biggest gaps in understanding in practice. That said, you can usually do inefficient things to work around these issues, with the opportunity to come back later. Often inefficient Rust with lots of clone operations is still faster, smaller, and lighter than the same service in Java or C#, as an example.
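To make the clone workaround concrete, here is a minimal sketch (names hypothetical, not from any real codebase): returning a borrow ties the result to the source's lifetime, while cloning trades an allocation for freedom from lifetime annotations.

```rust
// Hypothetical example: a config type whose host list callers want.
struct Config {
    hosts: Vec<String>,
}

// Borrowing version: the returned slice is tied to the Config's
// lifetime, which can force lifetime annotations onto calling code.
fn hosts_ref(config: &Config) -> &[String] {
    &config.hosts
}

// Cloning version: callers own the data, so there is no lifetime
// coupling, at the cost of an extra allocation.
fn hosts_owned(config: &Config) -> Vec<String> {
    config.hosts.clone()
}

fn main() {
    let config = Config {
        hosts: vec!["a".to_string(), "b".to_string()],
    };
    let borrowed = hosts_ref(&config);
    assert_eq!(borrowed.len(), 2);
    let owned = hosts_owned(&config);
    assert_eq!(owned, vec!["a".to_string(), "b".to_string()]);
}
```

The cloning version is what you reach for when lifetime errors start cascading; you can switch back to borrowing later once the design settles.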

  • UPDATE: This would make for an excellent case study if you don’t mind sharing the details. I am very curious about the number of agents, hours it took, and models used (did you use Mythos?).

    This would not have been possible 5 years ago. LLMs are going to push us into the space age. Both Anthropic and OpenAI have committed to spending tens of billions of dollars on training alone for the year. I am equally excited and terrified at the pace of progress!

  • Rust is really fun to work with and the compiler is great; just make sure the rewrite takes compile times into account, since larger projects often have to be organized in a way that keeps compilation reasonably fast.
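One common way to keep compile times reasonable in a large Rust project (a sketch with hypothetical crate names, not Bun's actual layout) is a Cargo workspace, so crates build in parallel and incremental rebuilds only touch the crates that changed:

```toml
# Workspace root Cargo.toml: each member is a separate compilation
# unit, so `cargo build` can compile independent crates in parallel,
# and a change to one crate only rebuilds it and its dependents.
[workspace]
resolver = "2"
members = [
    "crates/core",    # shared data types; changes here rebuild everything
    "crates/runtime", # the hot path; depends only on core
    "crates/cli",     # thin binary crate; depends on runtime
]
```

Keeping the dependency graph wide and shallow (many small crates, few cross-dependencies) is what makes the parallelism pay off.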

  • What coding model are you using for the rewrite? Opus for everything? A prerelease model like Mythos?

  • Just an aside: is there any way to know how many of those 16,000 compiler errors are independent? I mean, could it be that just by changing, say, 500 lines of code, all those errors disappear?

    Perhaps the 16,000 mostly measures cascade breakage; for example, one lifetime mismatch can cause errors in every function that tries to use that reference.

    Rust reference-lifetime bookkeeping is a difficult task for LLMs: the model has to maintain, across multiple functions and structs, which references outlive which. Furthermore, compiler messages are highly contextual, and lifetime patterns are sparse in the training set.
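As a small illustration of that bookkeeping (a hypothetical sketch, not Bun code), here is the kind of lifetime threading a model has to keep straight: the token must borrow from the input string, not from the parser itself.

```rust
// The lifetime 'a says: tokens borrow from the input string itself,
// not from the Parser, so a token may outlive the Parser value.
struct Parser<'a> {
    input: &'a str,
}

impl<'a> Parser<'a> {
    // Returning `&'a str` (rather than `&str` tied to `&mut self`)
    // is exactly the distinction that is easy to get wrong.
    fn next_token(&mut self) -> &'a str {
        self.input = self.input.trim_start();
        let end = self
            .input
            .find(char::is_whitespace)
            .unwrap_or(self.input.len());
        let token = &self.input[..end];
        self.input = &self.input[end..];
        token
    }
}

fn main() {
    let source = String::from("let x");
    let first;
    {
        let mut parser = Parser { input: &source };
        first = parser.next_token(); // the token outlives the parser
        assert_eq!(parser.next_token(), "x");
    }
    assert_eq!(first, "let");
}
```

Had `next_token` been annotated to borrow from `&mut self` instead of `'a`, every call site holding two tokens at once would fail to compile, which is exactly the cascade described above.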

  • That's a post I am eagerly waiting to read.

    Basically we are now seeing an "inverse Hofstadter's Law" where doing something with an LLM takes less time than expected even when you take this law into account.

    I am a Rust developer myself, but I really love Zig and Bun. I am just overly curious about all this.

    • > Basically we are now seeing an "inverse Hofstadter's Law" where doing something with an LLM takes less time than expected even when you take this law into account.

      Even LLMs themselves can't accurately estimate this (though this may be out-of-distribution stuff)


  • This does not surprise me in the least. Several Claude instances running in parallel are very good at splitting errors like these up and working through them all.

  • I think, given the current mood of things, it would be prudent not to make such strong assertions about anything. Trust is in increasingly short supply these days.

  • > I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.

    haven't used Zig... (only used Rust)

    but Zig doesn't solve those problems, does it?

    • Zig is a middle ground. It solves some of the common foot-guns in C, without the costs of the affine, substructural typing that gives Rust its superpowers.

      I am of the opinion that it is horses for courses, not a universally better proposition.

      Because my needs don’t fit in with Rust’s decisions very well, I will use Zig for personal projects when needed. I just need linked lists, graphs, etc…

      While hopefully someone can provide a more comprehensive explanation here are the two huge wins for my use case.

      1) In Zig, accessing an array or slice out of bounds is considered detectable illegal behavior.

      2) defer[0] allows you to collocate the freeing of resources with code.

      That at least ‘feels’ safer to me than a bunch of ‘unsafe’ rust that is required for my very specific use case.

      I was working on some eBPF code in C and really did miss Zig.

      For me it fits the Pareto principle but zig is also just a sometimes food for me, so take that for what it is worth.

      [0] https://zig.guide/language-basics/defer/
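For context on why linked lists push people toward ‘unsafe’ or workarounds in Rust (a minimal sketch, not the commenter's code): single ownership makes a singly linked list straightforward with `Box`, but doubly linked lists and general graphs need `Rc<RefCell<...>>`, arenas, or `unsafe`.

```rust
// A safe singly linked list: each node uniquely owns the next one.
// Back-pointers (doubly linked lists, graphs) break this unique
// ownership, which is where the 'unsafe' friction comes from.
struct Node {
    value: i32,
    next: Option<Box<Node>>,
}

struct List {
    head: Option<Box<Node>>,
}

impl List {
    fn new() -> Self {
        List { head: None }
    }

    // Push at the front: take ownership of the old head and
    // move it into the new node.
    fn push(&mut self, value: i32) {
        let old = self.head.take();
        self.head = Some(Box::new(Node { value, next: old }));
    }

    fn sum(&self) -> i32 {
        let mut total = 0;
        let mut cur = self.head.as_deref();
        while let Some(node) = cur {
            total += node.value;
            cur = node.next.as_deref();
        }
        total
    }
}

fn main() {
    let mut list = List::new();
    list.push(1);
    list.push(2);
    list.push(3);
    assert_eq!(list.sum(), 6);
}
```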


    • Zig has unmanaged memory. But Rust also allows memory leaks, and they're not uncommon in large, complex programs, so this rewrite will not necessarily control for that.
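A minimal sketch of a leak in entirely safe Rust: two `Rc` values that point at each other keep each other's reference counts above zero forever.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node that can point at another node of the same type.
struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(None) });

    // Create a cycle: a -> b and b -> a. Entirely safe code.
    *a.other.borrow_mut() = Some(Rc::clone(&b));
    *b.other.borrow_mut() = Some(Rc::clone(&a));

    // Each node is now referenced twice: once by the local binding
    // and once by the other node. Dropping the locals leaves both
    // counts at 1, so the memory is never freed.
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);
}
```

`Weak` references are the standard fix, but nothing forces you to use them, which is why leaks remain possible in safe Rust.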


    • Nope! Zig is like C in this regard. There’s no borrow checker. Managing memory is your responsibility.

      It gives you a few more tools than C - like a debug allocator, bounds-checked array slices, and so on. But it’s not a memory-safe language like Rust.


    • It is quite obvious that Zig is pre-1.0, with thousands of unresolved issues sitting in their GitHub repo. A review of the Zig hype gives the strong impression that it was created by relentless and suspicious promotion on HN, out of all proportion to its language rankings (per TIOBE or GitHub stats), so that many were under the illusion that the language was something more, or other, than what it really is.

      Zig is still under development and in beta, so instability, crashes, and leaks should not be surprising; they should even be expected. To stick with a beta language, companies and developers usually need to be philosophically and/or financially aligned with it. An example is JangaFX and Odin, where they have not only committed to using the language (despite its beta status) in their products, but have directly hired GingerBill.

      Team Bun appears to have "alignment and relationship issues" with Zig, to the point that they have decided to extensively explore their options. Now Bun has been rewritten in Rust, and they are seeing whether Rust meets their requirements. As with any relationship, if one ignores or takes a partner for granted, don’t be surprised if they want a divorce or jump to someone else.


"No one has the intention of building a wall" - Walter Ulbricht, chairman of the central committee, a couple of months before the Berlin Wall was built.

The AI companies and their associates are beginning to surpass that level of denial and lies.

  • It’s disrespectful, and poor netiquette, to immediately jump to adversarial conclusions from a simple desire to refactor.

    • Four days ago there was no intention to rewrite; now it's a simple desire to refactor. It's not an adversarial conclusion; it's pointing out the clear hypocrisy.


    • If experienced (in open source and corporate politics) developers were to bet on Polymarket on whether the rewrite will ultimately be merged, which side would they bet on?

      What would the emerging odds be? My guess is 19/20 in favor of ditching Zig.

      I have followed many initial denials on a wide range of topics, not only rewrites, over the years. Like clockwork, most of them were lies.


  • you know this whole exercise is both marketing and a way to make noise.

    would the world come to a standstill tomorrow if every Bun instance out there ran on Node.js?

    they know their AI can't sell without the noise that it's now on the edge of the frontier. this is hype.

    zig adopting a strict 'no LLM' policy affects the LLM vendors.

    • A good point. The business and marketing aspect of this situation cannot be overlooked. The rewrite in Rust was a clear marketing opportunity to maintain the LLM hype, and team Bun warmly embraced it.


    • Exactly. Always ask “who benefits from this?” The answer in this case is: AI vendors, not us.

    • It’s also just a useful exercise in general, especially for getting feedback on models and harnesses.

      I’ve been thinking about setting up a non-trivial project to use as a benchmark for any plugins and/or harness changes I make.

      Having a prebuilt verification suite is great. You can use it to assess things like token usage and time across different harnesses, models, and plugins.

    • I don’t think the Zig project adopting a strict ‘no LLM’ policy affects the LLM vendors at all. How many developers are working on the Zig project itself that will (maybe) now not buy a Claude subscription? I can buy that this is a marketing stunt, but nobody at the top cares if a relatively small open source project doesn’t allow AI contributions.


Looks like he did the maintainability, performance, and test suite checks and made his decision :)

  • Honestly, I fully support the rewrite to Rust, but he should have just owned this from the start. I'm sure he knew in the back of his mind how committed he was to that branch, given he had already spent the equivalent of thousands of dollars in tokens by that point.

Also a few days before that:

> I expect OSS to go the opposite direction: no human contribution allowed. Slop will be a nostalgic relic of 2025 & 2026.

We should have seen this coming after they got acquired by Anthropic, but it's still disappointing. I'm not against large language models as a technology; I'm just thoroughly disgusted by how these "AI" companies rose to power, eating the software industry and the rest of society. It's creating a very unhealthy dependency.

Think a few steps ahead and start preparing a slop-free software stack and community. That includes Zig and its ecosystem. Even if we (and future generations) don't manage to live entirely without slop, it's more important than ever to ensure a sustainable computing culture, free as in freedom.

  • Software companies have been about automating human labor since the invention of computers. It's the whole damn point. Why do you think finance used to be (and sometimes still is) the head of the IT dept? Because we automated accounting away. Then typists. Then secretaries. Then drafting. Etc., etc.

  • So you argue we should discriminate based on who/what wrote the code, instead of what's in it?

    Let's take this to a different domain: self-driving cars. Would you equally argue for human driving? I'm pretty sure that over time it will become clear to everyone that machines can outperform humans consistently at this task, to the degree that human driving will become illegal. But for now the press likes to focus on any failure of machine driving, taking for granted that human drivers are the largest or second-largest cause of premature death in many countries.

    Coding is (in many ways, but not all) a more open-ended and versatile task than driving, so it's natural that current iterations seem untrustworthy, but ignoring the trajectory is erring on the side of conservatism, and doesn't seem to me to be grounded in any sound reasoning.

  • How could it possibly be open source if it requires proprietary models developed by a few companies to write the code?

    Seems like that would make open source entirely controlled by OpenAI, Anthropic, et al.

    • Open-source and open-weight models are already really good. I don’t think anyone truly depends on the big AI companies anymore; if they went away, the open models already seem good enough to take up the torch, and they will continue to improve thanks to research. They may require money to train, but that cost is already covered quite well, and if these models became the mainstream way to use AI, more money from governments and research institutions would be poured into them.

      That is actually a very plausible scenario!

I think such re-implementations will be a huge asset to the process of software development in the future.

What's your point?

  • To demonstrate that engineers may not be as skilled and knowledgeable as they appear. To make such a comment and then turn around and make such an announcement days later suggests that the engineers are not skilled with the tools they’re using, or possibly even in the domain they’re working in.

    • The quote doesn’t provide warrant for this claim. The developer did a great job investigating the applicability of a new tool and it appears the investigation yielded fruit.

      Your kind of negativity is pathological.


    • Being an expert software developer - which Jarred Sumner indisputably is, having created Bun - doesn't automatically make you an expert on predicting the improvements in software development performance that LLMs enable. All of us - experts and amateurs alike - are in the process of figuring that out, in real time, around the world, right now.

      Underestimating how quickly a non-trivial project will come together is an almost unheard-of phenomenon. It used to invariably be the other way around, to the point that there are laws about it, like Hofstadter's Law, which says that projects always take longer than anticipated even when accounting for the law itself, or Fred Brooks' work, which puts limits on how much the development of software projects can be sped up.

      The sane takeaway here is that, if what's being reported is true (keeping in mind it's coming from a newly minted Anthropic employee), it implies an astonishing, unheard-of improvement in software development speed, at least for certain kinds of tasks, enabled by LLMs.

      To somehow twist that into "experts may not be as skilled and knowledgeable as they appear" or "not skilled in the tools they’re using" makes me think of the Charles Babbage quote, "I am not able rightly to apprehend the kind of confusion of ideas that could provoke such [an opinion]."