The byte-for-byte identical output requirement is the smartest part of this whole thing. You basically get to run the old and new pipelines side by side and diff them, which means any bug in the translation is immediately caught. Way too many rewrites fail because people try to "improve" things during the port and end up chasing phantom bugs that might be in the old code, the new code, or just behavioral differences.
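That side-by-side diff can be a one-function harness: run both pipelines over the same corpus and fail on the first diverging byte. A minimal sketch (the two closures are hypothetical stand-ins for "invoke the old C++ pipeline" and "invoke the new Rust one", not anything from Ladybird's codebase):

```rust
/// Byte-for-byte differential check between a legacy pipeline and its port.
/// `old` and `new` are stand-ins for whatever produces each pipeline's output.
fn diff_pipelines<F, G>(inputs: &[&str], old: F, new: G) -> Result<(), String>
where
    F: Fn(&str) -> Vec<u8>,
    G: Fn(&str) -> Vec<u8>,
{
    for input in inputs {
        let (a, b) = (old(input), new(input));
        if a != b {
            // Report the first diverging byte offset to narrow the bug down fast.
            let pos = a.iter().zip(&b).position(|(x, y)| x != y);
            return Err(format!("mismatch on {:?} at byte {:?}", input, pos));
        }
    }
    Ok(())
}
```

Any translation bug surfaces as a failed diff with an exact offset, instead of a phantom bug discovered weeks later.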
Also worth noting that "translated from C++" Rust is totally fine as a starting point. You can incrementally make it more idiomatic later once the C++ side is retired. The Rust compiler will still catch whole classes of memory bugs even if the code reads a bit weird. That's the whole point.
> Way too many rewrites fail because people try to "improve" things during the port
I'd say that porting is a great time to "improve" many things, but like you suggest, not a great time to add new features. You can do a lot of improvements while maintaining output parity. You're in the weeds, reading the code, thinking about the routines, and you have all the hindsight of having done it already. Features are great to add as comments that sketch things out, but importantly, this is a great time to find and recognize that maybe a subroutine is pretty inefficient. I mean, the big problem in writing software is that the goals are ever evolving. You wrote the software for different goals, different constraints. So it's a great time to clean things up, make them more flexible, more readable, *AND TO DOCUMENT*.
I think the last one gets ignored easily, but my favorite time to document code is when reading it (the best time is when writing it). It forces you to think explicitly about what the code is doing and makes it harder for the little things to slip by. Given that Ladybird is a popular project, I really do think good documentation is a way to accelerate its development. Good documentation means new people can come in and contribute faster and with fewer errors. It lowers the barrier to entry, substantially. It's also helpful for all the mere mortals who forget things.
LLMs are great at producing documentation - ask one "hey, can you add a TODO comment about the thing on line 847 that is probably not the best way to do this?" while you're working on the port, and it will craft a reasonably legible comment about that thing without further thought from you, one that will make things easier for the person (possibly future-you) looking at the codebase for improvements to make. Meanwhile, you keep on working on the port that has byte-for-byte identical output.
That reminds me of the "Strangler Fig" pattern where you replace a service by first sending the requests to both the old and new implementation so you can compare their outputs. Then only when you're confident the new service functions as expected do you actually retire the old service.
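The strangler step boils down to a thin dispatcher that keeps serving the old implementation's answer while logging any disagreement with the new one. A sketch, with both handlers as hypothetical stand-ins for the real services:

```rust
/// Strangler-fig shadow dispatch: answer from the legacy handler, run the
/// ported one alongside it, and count mismatches. Retire the old path once
/// the mismatch counter stays at zero for long enough.
struct Shadowed<F, G> {
    legacy: F,
    ported: G,
    mismatches: std::cell::Cell<u64>,
}

impl<F, G> Shadowed<F, G>
where
    F: Fn(&str) -> String,
    G: Fn(&str) -> String,
{
    fn handle(&self, req: &str) -> String {
        let old = (self.legacy)(req);
        let new = (self.ported)(req);
        if old != new {
            // A real service would also log the request for later replay.
            self.mismatches.set(self.mismatches.get() + 1);
        }
        old // keep serving the trusted output until confidence is earned
    }
}
```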
I hope, with the velocity unlocked by these tools, that more pure ports will become the norm. Before, migrations could be so costly that “improving” things “while I’m here” helped sell doing the migration at all, especially in business settings. Only to lead to more toil chasing those phantom bugs.
One of the biggest points of rewriting is that you know better by then, so you create something better.
This is a HUUUGE reason code written in rust tended to be so much better than the original (which was probably written in c++).
Human expertise is the single most important factor and is more important than language.
Copy pasting from one language to another is way worse than complete rewrite with actual idiomatic and useful code.
Best option after proper rewrite is binding. And copy-paste with LLM comes way below these options imo.
If you look at the real world, basically all value is created by boring and hated languages. Because people spent so much effort on making those languages useful, and other people spent so much effort learning and using those languages.
Don’t think anyone would prefer to work in a rust codebase that an LLM copy-pasted from c++, compared to working on a c++ codebase written by actual people that they can interact with.
I did several web framework conversions exactly like this. Make sure the HTTP output string from the new code exactly matches the old code's, then eventually delete the old code with full confidence.
Really like this translation approach, and I had written about it just a couple of days back (more from a testing and validation context). To see folks take that approach to something complex is pretty amazing!
https://balanarayan.com/2026/02/20/gen-ai-time-to-focus-on-l...
> I used Claude Code and Codex for the translation. This was human-directed, not autonomous code generation. I decided what to port, in what order, and what the Rust code should look like. It was hundreds of small prompts, steering the agents where things needed to go. After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns.
> The requirement from the start was byte-for-byte identical output from both pipelines. The result was about 25,000 lines of Rust, and the entire port took about two weeks. The same work would have taken me multiple months to do by hand. We’ve verified that every AST produced by the Rust parser is identical to the C++ one, and all bytecode generated by the Rust compiler is identical to the C++ compiler’s output. Zero regressions across the board
This is the way. Coding assistants are also really great at porting from one language to the other, especially if you have existing tests.
> Coding assistants are also really great at porting from one language to the other
I had a broken, one-off Perl script, a relic from the days when everyone thought Drupal was the future (long time ago). It was originally designed to migrate a site from an unmaintained internal CMS to Drupal. The CMS was ancient and it only ran in a VM for "look what we built a million years ago" purposes (I even had written permission from my ex-employer to keep that thing).
Just for a laugh, I fed this mess of undeclared dependencies and missing logic into Claude and told it to port the whole thing to Rust. It spent 80 minutes researching Drupal and coding, then "one-shotted" a functional import tool. Not only did it mirror the original design and module structure, but it also implemented several custom plugins based on hints it found in my old code comments.
It burned through a mountain of tokens, but 10/10 - would generate tens of thousands of lines of useless code again.
The Epilogue: That site has since been ported to WordPress, then ProcessWire, then rebuilt as a Node.js app. Word on the street is that some poor souls are currently trying to port it to Next.js.
> 10/10 - would generate tens of thousands of lines of useless code again.
Me too! A couple days ago I gave Claude the JMAP spec and asked it to write a JMAP-based webmail client in Rust from scratch. And it did! It burned a mountain of tokens, and it's got more than a few bugs. But now I've got my very own email client, powered by the Stalwart email server. The Rust code compiles into a 2MB wasm bundle that does everything client side. It's somehow insanely fast. Honestly, it's the fastest email client I've ever used by far. Everything feels instant.
I don't need my own email client, but I have one now. So unnecessary, and yet strangely fun.
It's quite a testament to JMAP that you can feed the RFC into Claude and get a janky client out. I wonder what semi-useless junk I should get it to make next? I bet it wouldn't do as good a job with IMAP, but maybe if I let it use an IMAP library someone's already made? Might be worth a try!
> It burned through a mountain of tokens, but 10/10 - would generate tens of thousands of lines of useless code again.
This is the biggest bottleneck at this point. I'm looking forward to RAM production increasing, and getting to a point where every high-end PC (workstation & gaming) has a dedicated NPU next to the GPU. You'll be able to do this kind of stuff as much as you want, using any local model you want. Run a ralph loop continuously for 72 hours? No problem.
> a relic from the days when everyone thought Drupal was the future (long time ago).
Drupal is the future. I never really used it properly, but if you fully buy into Drupal, it can do most everything without programming, and you can write plugins (extensions? whatever they're called...) to do the few things that do need programming.
> The Epilogue: That site has since been ported to WordPress, then ProcessWire, then rebuilt as a Node.js app. Word on the street is that some poor souls are currently trying to port it to Next.js.
This is the problem! Fickle halfwits mindlessly buying into whatever "next big thing" is currently fashionable. They shoulda just learned Drupal...
> It burned through a mountain of tokens, but 10/10 - would generate tens of thousands of lines of useless code again.
Pardon me, and, yes, I know we're on HN, but I guess you're... rich? I imagine a single run like this probably burns through tens or hundreds of dollars. For a joke, basically.
I guess I understand why some people really like AI :-)
Agree, and it's also such a shame that none of the AI companies actually focus on that way of using AI.
All of them are moving into the direction of "less human involved and agents do more", while what I really want is better tooling for me to work closer with AI and be better at reviewing/steering it, and be more involved. I don't want "Fire one prompt and get somewhat working code", I want a UX tailored for long sessions with back and forth, letting me leverage my skills, rather than agents trying to emulate what I already can do myself.
It was said a long time ago about computing in general, but more fitting than ever, "Augmenting the human intellect" is what we should aim for, not replacing the human intellect. IA ("Intelligence amplification") rather than AI.
But I'm guessing the target market for such tools would be much smaller, basically would require you to already understand software development, and know what you want, while all AI companies seem to target non-developers wanting to build software now. It's no-code all over again essentially.
Is it any surprise that the cocaine cartels really want you to buy more cocaine, so they don't focus on its usefulness in pain relief and they refine it and cut it with the cheapest substances that will work rather than medical-grade reagents?
Of course there are tools focusing on this. It takes a little getting used to how prevalent it is. My editor now can anticipate the next three lines of code I intend to write complete with what values I want to feed to the function I was about to invoke. It all shows up in an autocomplete annotation for me. I just type the first two or three characters and press tab to get everything exactly how I was about to type it in--including an accurate comment worded exactly in my voice.
Is that what you mean by IA?
For example, I type "for" and my editor guesses I want to iterate over the list that is the second argument of the function for which I am currently building the body. So it offers to complete the rest of the loop condition for me. Not only did it anticipate that I am writing a for loop. It figures out what I want to iterate over, and perhaps even that I want to enumerate the iteration so I have the index and the value. Imagine if I had written a comment to explain my intent for the function before I started writing the function body. How much better could it augment my intellect?
>Agree, and it's also such a shame that none of the AI companies actually focus on that way of using AI.
This is because, regardless of the current state of things, the endgame which will justify all the upfront investment is autonomous, self-improving, self-maintaining systems.
"All of them are moving into the direction of "less human involved and agents do more", while what I really want is better tooling for me to work closer with AI and be better at reviewing/steering it, and be more involved."
I want less ambitious LLM powered tools than what's being offered. For example, I'd love a tool that can analyse whether comments have been kept up to date with the code they refer to. I don't want it to change anything I just want it to tell me of any problems. A linter basically. I imagine LLMs would be a good foundation for this.
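The deterministic half of such a linter is easy to sketch: walk the source, pair each comment with the code it annotates, and hand those pairs to a model with the question "does this comment still describe this code?". A minimal extraction pass (the LLM call itself is left out of scope here; everything below is illustrative, not an existing tool):

```rust
/// Pair each run of `//` comments with the first code line that follows it.
/// Each pair is the unit you'd send to a model for a staleness verdict;
/// the model call is omitted and would be a separate, hypothetical step.
fn comment_code_pairs(src: &str) -> Vec<(String, String)> {
    let mut pairs = Vec::new();
    let mut pending: Vec<&str> = Vec::new();
    for line in src.lines() {
        let t = line.trim();
        if t.starts_with("//") {
            // Strip the leading slashes and accumulate multi-line comments.
            pending.push(t.trim_start_matches('/').trim());
        } else if !t.is_empty() {
            if !pending.is_empty() {
                pairs.push((pending.join(" "), t.to_string()));
                pending.clear();
            }
        }
    }
    pairs
}
```

Running this only over lines touched by a diff would keep the token bill down, since most comments don't need re-checking on every commit.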
I am learning Rust myself, and one of the things I definitely didn't want to do was let Claude write all the code. But I needed guidance.
I decided to create a Claude skill called "teach". When I enable it, Claude never writes any code. It just gives me hints - progressively more detailed if I am stuck. Then it reviews what I write.
I am finding it very satisfying to work this way - Rust in particular is a language where there's little space to "wing it". Most language features are interlaced with each other and having an LLM supporting me helps a lot. "Let's not declare a type for this right now, we would have to deal with several lifetime issues, let's add a note to the plan and revisit this later".
FYI: Claude has output styles, one of them is called `learning`. Instead of writing the code itself, it will add `TODO(human)` and comments to explain how to. Also adds `Insights` explaining concepts to you in its output.
This link also has a comparison to Skills further down.
I had a bash spaghetti-code script that I wrote a few years ago to handle TLS certificates (generate CSRs, bundle up trust chains, match keys to certs, etc.). It was fragile, slow, extremely dependent on specific versions of OpenSSL, etc.
I used Claude to rewrite it in golang and extend its features. Now I have tests, automatic AIA chain walking, support for all the DER and JKS formats, and it’s fast. My bash script could spend a few minutes churning through a folder with certs and keys, my golang version does a few thousand in a second.
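The "don't specify input formats" ergonomics mostly comes down to sniffing: PEM is self-describing ASCII armor with a BEGIN banner, while raw DER starts with an ASN.1 SEQUENCE tag (0x30). An illustrative sketch of that dispatch (in Rust rather than the commenter's Go, to match the rest of the thread; the names are made up):

```rust
/// Guess the on-disk encoding of a certificate/key blob so callers never
/// have to pass the equivalent of OpenSSL's `-inform`. Illustrative only.
#[derive(Debug, PartialEq)]
enum BlobKind {
    Pem,
    Der,
    Unknown,
}

fn sniff(data: &[u8]) -> BlobKind {
    // PEM armor has a "-----BEGIN" banner somewhere near the top.
    if data.windows(10).take(256).any(|w| w == &b"-----BEGIN"[..]) {
        return BlobKind::Pem;
    }
    // DER-encoded certs and keys start with an ASN.1 SEQUENCE (0x30).
    if data.first() == Some(&0x30) {
        return BlobKind::Der;
    }
    BlobKind::Unknown
}
```

Real-world tools add more cases (PKCS#12 and JKS have their own magic bytes), but the shape is the same: sniff first, then hand off to the right parser.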
So I basically built a limited version of OpenSSL with better ergonomics and a lot of magic under the hood because you don’t have to specify input formats at all. I wasn’t constrained by things like backwards compatibility and interface stability, which let me make something much nicer to use.
I even was able to build a wasm version so it can run in the browser. All this from someone that is not a great coder. Don’t worry, I’m explicitly not rolling my own crypto.
It's how most of us are actually going to end up using AI agents for the foreseeable future, perhaps with increasing degrees of abstraction as we move to a teams-of-agents model.
The industry hasn't come up with a simple meme-format term to explain this workflow pattern yet, so people aren't excited about it. But don't worry, we'll surely have a bullshit term for it soon, and managers everywhere will be excited. In the meantime, we can just continue doing work with these new tools.
Thinking people who disagree with you hate you or hate the thing you like is a recipe for disaster. It's much better to not love or hate things like this, and instead just observe and come to useful, outcome-based conclusions.
We keep seeing this pattern over and over as well. Despite LLM companies' almost tangible desperation to show that they can replace software engineers, the real value comes from domain experts using the tools to enhance what they're already good at.
I'd guess this is a bet on which market is more lucrative:
* (A) domain experts paying for tooling that will enhance their productivity
* (B) the capital/management class hoping to significantly replace domain experts
Software devs have been a famously tough market to sell tools to for a long time, so the better bet is B. Plus, the story on B is fantastic for fundraising; if there's a 10% chance that it checks out, you want some part of that as your capital portfolio.
I had a script in another language. It was node, took up >200MB of RAM that I wanted back. "claude, rewrite this in rust". 192MB of memory returned to me.
This is sad to see. Node was originally one of the memory-efficient options – its roots are in solving the c10k problem. Mind sharing what libraries/frameworks you were using?
I used to have a bunch of bespoke node express server utilities that I liked to keep running in the background to have access to throughout the day but 40-50mb per process adds up quickly.
I’ve been throwing codex at them and now they’ve all been rewritten in Go - cut down to about 10mb per process.
I haven’t done a ton of porting. And when I did, it was more like a reimplementation.
> We’ve verified that every AST produced by the Rust parser is identical to the C++ one, and all bytecode generated by the Rust compiler is identical to the C++ compiler’s output.
Is this a conventional goal? It seems like quite an achievement.
My company helps companies do migrations using LLM agents and rigid validations, and it is not a surprising goal. Of course most projects are not as clean as a compiler is in terms of their inputs and outputs, but our pitch to customers is that we aim to do bug-for-bug compatible migrations.
Porting a project from PHP7 to PHP8, you'd want the exact same SQL statements to be sent to the server for your test suite, or at least be able to explain the differences. Porting AngularJS to Vue, you'd want the same backend requests, etc.
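One way to make that concrete in a test suite is to record every statement the code under test emits and diff the transcript against a golden transcript captured from the pre-migration version. A minimal sketch with a hypothetical recorder (real setups would hook the DB driver instead):

```rust
/// Hypothetical SQL transcript recorder: the code under test pushes every
/// statement here, and the suite diffs the result against a golden
/// transcript captured from the old implementation.
#[derive(Default)]
struct SqlRecorder {
    statements: Vec<String>,
}

impl SqlRecorder {
    fn record(&mut self, sql: &str) {
        self.statements.push(sql.to_string());
    }

    /// Returns the index of the first statement that diverges from the
    /// golden transcript (or where one transcript runs short), if any.
    fn diff_against(&self, golden: &[&str]) -> Option<usize> {
        let max = self.statements.len().max(golden.len());
        (0..max).find(|&i| {
            self.statements.get(i).map(String::as_str) != golden.get(i).copied()
        })
    }
}
```

A `None` result is the "bug-for-bug compatible" signal; a `Some(i)` pinpoints exactly which query the migration changed, so you can either fix it or document it as an intended difference.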
It’s a very good way of getting LLMs to work autonomously for a long time; give it a spec and a complete test suite, shut the door, and ask it to call you when all the tests pass.
This is the way. This exact workflow is my sweet spot.
In my coding agent std::slop I've optimized for this workflow
https://github.com/hsaliak/std_slop/blob/main/docs/mail_mode... basically the idea is that you are the 'maintainer' and you get bisect-safe git patches that you review (or ask a code-reviewer skill or another agent to review). Any change re-rolls the whole stack. Git already supports such a flow, and I added it to the agent. A simple markdown skill does not work because it 'forgets'. A GitHub-based PR flow felt too externally dependent. This workflow is enforced by a 'patcher' skill, and once that's active, tools do not work unless they follow the enforced flow.
I think a lot of people are going to feel comfortable using agents this way rather than going full blast. I do all my development this way.
This is broadly how I worked when I was still using chat instead of cli agents for LLM support. The downside, I feel, is that unless this is a codebase / language / architecture I do not know, it feels faster to just code by hand with the AI as a reviewer rather than a writer.
I am having immense success with the latest models developing a personal project that I open sourced and then got burned out on. I can't write by hand anymore, but I do enjoy writing prompts with my voice. I have been shipping the best code the project has ever seen. The revolution is real.
Coding assistants are great at pattern matching and pattern following. This is why it’s a good idea to point them at any examples or demos that come with the libraries you want to use, too.
> Coding assistants are also really great at porting from one language to the other
No, they are quite terrible at doing that.
They may (I guess?) produce code that compiles, but they will almost certainly not produce the appropriate combination of idioms and custom abstractions that makes the code feel "at home" in the target language.
PS - Please fix your blockquote... HN ignores single linebreaks, so you have to either use pairs of them, or possibly go with italicization of the quoted text.
How does he solve the Fruit of the Poisonous Tree problem? For all he knows, his LLMs included a bunch of copyrighted or patented code throughout the codebase. How is he going to convince serious people that this port is not just a transformation of an _asset_ into a _liability_?
And you might say that this is a hypothetical problem, one that is not practically occurring. Well, we had a similar problem in the recent past, one that LLMs are close to _making actual_. Software patents were considered a _hypothetical_ problem (i.e. nobody was going to bother suing you unless you were so big that violating a patent was a near certainty). We were instructed (at pretty much all jobs) to never read patents, so that we could not incriminate ourselves in the discovery process.
That is going to change soon (within a year). I have a friend, whom I won't name, who is working on a project, using LLMs, to discover whether software (open source and proprietary) is likely to be violating a software patent from a patent database. And it is designed to be used not by programmers, but by law firms, patent attorneys, etc. Even though it is not marketed this way, it is essentially a target acquisition system for use by patent trolls. It is hard for me to tell if this means that we will have to keep ignoring patents for that plausible deniability, or if this means that we will have to become hyper-informed about all patents. I suppose we can just subscribe to the patent-agent, and hope that it guides the other coding agents into avoiding the insertion of potentially infringing code.
(I also have a friend who built a system in 2020 that could translate between C++ and Python, and guarantee equivalent results, and code that looks human-written. This was a very impressive achievement, especially because of how it guarantees the equivalence (it did not require machine-learning nor GPUs, just CPUs and some classic algorithms from the 80s). The friend informs me that they are very disheartened to see that now any toddler with a credit card can mindlessly do something similar, invalidating around a decade of unpublished research. They tell me that it will remain unpublished, and if they could go back in time, they would spend that decade extracting as much surplus from society as possible, by hook or by crook (apparently they had the means and the opportunity, but lacked the motive); we should all learn from my friend's mistake. The only people who succeed are, sadly, perversely, those who brazenly and shamelessly steal -- and make no mistake, the AI companies are built on theft. When millionaires do it, they become billionaires -- when Aaron Swartz does it, he is sentenced to federal prison. I'm not quite a pessimist yet, but it really is saddening to watch my friend go from a passionate optimist to a cold nihilist.).
If there was value (the guarantees) to this tech he buried a bunch of time in, he should be wrapping a natural language prompt around it and selling it.
Not even the top providers are giving any sort of tangible safety or reliability guarantees in the enterprise…
I'm a long-time Rust fan and have no idea how to respond. I think I need a lot more info about this migration, especially since Ladybird devs have been very vocal about being "anti-rust" (I guess more anti-hype, where Rust was the hype).
I don't know if it's a good fit. Not because they're writing a browser engine in Rust (good), but because Ladybird currently praises C++/Swift, and I have no idea what the contributors' stance is.
At least contributing will be a lot nicer from my end, because my PR's to Ladybird have been bad due to having no CPP experience. I had no idea what I was doing.
Yeah, that is the thing I struggle with. I am really happy for people falling in love with Rust. It is an amazing language when used for the right use case.
The problem is that I had my Rust adventures a few years ago, and I am over the hype cycle and able to see both the advantages and disadvantages. Plus, being generally older and hopefully wiser, I don't tie my identity to any specific programming language that much.
So sometimes when some junior dev discovers Rust and they get really obnoxious with their evangelism, it can be very off-putting. Really not sure how to solve it. It is good when people get excited about a language. It just can be very annoying for everyone else sometimes.
> So sometimes when some junior dev discovers Rust and they get really obnoxious with their evangelism, it can be very off-putting. Really not sure how to solve it. It is good when people get excited about a language. It just can be very annoying for everyone else sometimes.
This rings very true, and I've actually disadvantaged myself somewhat here. I was involved in projects that made very dubious decisions to rewrite large systems in Rust. This caused me to actively stay away from the language, and stick to C++, investing lots of time in overcoming its shortcomings.
Now, years later, I started with Rust in a new project. And I must say, I like the language, I really like the tools, and I like the ecosystem. In some ways I wish I had done this sooner (but on the other hand, I think I have a better justification of "why Rust" now).
I find the attitude of the Ladybird devs refreshing though, and it kinda aligns with my opinions about Rust.
I never fell in love with Rust or got particularly excited about adopting it. But, I just don't see a serious alternative (maybe Swift is fine for some cases but not in my field).
I believe Google's Rust journey was even more closely aligned with Ladybird: "we want memory safety, but with low impedance mismatch from C++". After like 5 years of trying to figure something like that out they seemed to go "OK actually fuck that we just have to use Rust and deal with the challenges it brings for a C++ shop".
The whole obnoxious dogmatic evangelism thing is definitely a wider human phenomenon, well beyond software and junior devs picking up new languages.
Definitely isn’t one of those things that can be solved, but it’s helpful to be aware of and process on that basis. I think some personalities are likely disproportionately vulnerable to this behaviour, but I think it largely has a positive core of enthusiasm. It’s probably more a matter of those individuals growing in self awareness.
Perhaps we saw a big wave of that with Rust because it meant a lot of things to a lot of different people, some more equipped to express their enthusiasm with self-control than others.
I'm contemplating diving into Rust for a smallish project, a daemon with super-basic UI intended for Linux, MacOS and Windows. Do you mind expanding on what disadvantages you encountered? Or use-cases that aren't appropriate for Rust?
It’s a pretty good language and ecosystem. The downside was always the community, in which every ten seconds someone will start asking to tax everyone to fund the Rust Software Foundation, or constantly argue that you have to donate a percentage of your income to it. Now with LLMs I don’t have to talk to the community. Huge improvement.
The problem with the community is that it has experts and groupies mixed in. Ideally the experts could talk somewhere and the groupies could go somewhere else and talk about funding the RSF etc., but now that's unnecessary. An expert is available on demand via chatbot.
It's possible to dislike Rust but pragmatically use it. Personally, I do not like Rust, but it is the best available choice for some work and personal stuff.
Personally I think most programming languages have really ... huge problems. And the languages that are more fun to use, Ruby or Python, are slow. I wonder if we could have a great, effective, elegant language that is also fast. All that try seem to end up with, e.g., a C++-like language.
I am somewhat concerned about the volatility. All three languages have their merits and each has a stable foundation that has been developed and established over many years. The fact that the programming language has been “changed” within a short period of time, or rather that the direction has been altered, does not inspire confidence in the overall continuity of Ladybird's design decisions.
Not just volatility but also flip-flopping. Rust was explicitly a contender when they decided to go with Swift 18 months ago, and they've already done a 180 on it despite the language being more or less the same as it was.
There's been some fun volatility with the author over the years. I told him once that he might want to consider another language to which he replied slightly insultingly. Then he tried to write another language. Then he tried to switch from C++ to Swift, and now to Rust :P
> I think I need a lot more info about this migration
Doesn't sound like it's some Fish-style, full migration to Rust of everything. Seems like they are just moving a couple parts over for evaluation, and then, going forward, making it an official project language that folks are free to use. They note that basically every browser already does that, so this isn't a huge shakeup.
But not the stance on Rust, which is something I'm wondering. I understand there's a core team assigned, but are the ~200 contributors okay with this migration?
it's very odd that someone with no experience would take a big project like this and just jump to another language because he trusts the AI generated code of current models
if it works it works i guess, but it seems mad to me on the surface
Why do you think the creator behind SerenityOS has no experience? I mean it’s not the most popular OS out there but he seems like a capable individual.
Looks like Andreas is a mighty fine engineer, but he's even better entrepreneur. Doesn't matter if intentional or not, but he managed to create and lead a rather visible passion project, attract many contributors and use that project's momentum to detach Ladybird into a separate endeavor with much more concrete financial prospects.
The Jakt -> Swift -> Rust pivots look like the same thing on a different level. The initial change to Swift was surely motivated by a potential gain in industry support (I believe it was a dubious choice from a purely engineering standpoint).
It's awe-inspiring to see how a person can carve a job for himself, leverage hobbyists'/hackers' interest and contributions, attract industry attention and sponsors all while doing the thing he likes (assuming, browsers are his thing) in a controlling position.
Can't fully rationalize the feeling, but all of this makes me slightly wary. Doesn't make it less cool to observe from a side, though.
Andreas is not some kind of hustler. He spent years writing an entire OS (Serenity OS) before the web browser part happened to gain traction. If you were just trying to be an entrepreneur, why do that?
The truth is more simple: he's a good engineer and leader, people recognised that and offered him sponsorships, and the project took off by itself.
Eh, he's given an interview where he talks about the Swift decision. He and several maintainers tried building some features in Swift, Rust, and C++, spending about two weeks on each one IIRC. And all the maintainers liked the experience of Swift better. That might have ended up wrong, but it's a pretty reasonable way to make a decision.
Two weeks with Rust and you're still fighting with the compiler. I think the LLM pulled a lot of weight selling the language, it can help smooth over the tricky bits.
Yeah, main issue with Swift is that the c++ interop (which was absolutely bleeding-edge) still isn't to the point of being able to pull in parts of the Ladybird codebase.
If I recall correctly, part of this was around classes they had that replaced parts of the STL, whereas the Swift C++ interop makes assumptions about things with certain standard names.
This is less about languages and more about so-called AI. One thing’s for sure: it’s becoming harder and harder to deny that agentic coding is revolutionizing software development.
We’re at the point where a solid test suite and a high-quality agent can achieve impressive results in the hands of a competent coder. Yes, it will still screw up, and it needs careful human review and steering, but there is a real productivity improvement. I don’t think it makes sense to put numbers on it, but for many tasks the benefit is tangible.
> We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time.
Correct me if I’m wrong since I don’t know these two languages, but like some other languages, doing things the idiomatic way could be dramatically different. Is “cleanup” doing a lot of heavy lifting here? Could that also mean another complete rewrite from scratch?
A startup switching languages after years of development is usually a big red flag. “We are rewriting it in X” posts always preceded “We are shutting down”. I wish them luck though!
A mitigating factor in this case is that C++ and Rust are both multi-paradigm languages. You can quite reasonably represent most C++ patterns in Rust, even if it might not be quite how you'd write Rust in the first place.
I disagree. You can't even translate simple C++ inheritance examples, because Rust has no data inheritance. So classical OOP is basically out the window.
That's the biggest difference from C++ and most mainstream languages: you simply can't do OOP (which in my book is a good thing), and it pushes you towards traits and composition instead.
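To make the contrast concrete, here is a minimal sketch (with invented names, not from any real codebase) of how a C++ `struct Button : public Widget` typically gets translated into Rust's composition-plus-trait style:

```rust
// C++ would inherit Widget's data members directly. Rust has no data
// inheritance, so the usual translation embeds the struct (composition)
// and puts the shared interface behind a trait.
struct Widget {
    x: i32,
    y: i32,
}

trait Draw {
    fn draw(&self) -> String;
}

struct Button {
    widget: Widget, // composition instead of `: public Widget`
    label: String,
}

impl Draw for Button {
    fn draw(&self) -> String {
        format!("button '{}' at ({}, {})", self.label, self.widget.x, self.widget.y)
    }
}

fn main() {
    let b = Button {
        widget: Widget { x: 1, y: 2 },
        label: "ok".into(),
    };
    assert_eq!(b.draw(), "button 'ok' at (1, 2)");
    println!("{}", b.draw());
}
```

The delegation through `self.widget` is the boilerplate cost; the upside is that `Draw` can be implemented for any type without a common base class.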
In addition, C++ and Rust are very, very similar languages. Almost everything in C++ translates easily, including low level stuff and template shenanigans. There's only a few "oh shit there's no analog" things, like template specialization or virtual inheritance.
Out of all the languages Rust takes inspiration from, I'd rank C++ at the top of the list.
This is the famous trap that Joel on Software warned about in a blog post a long time ago.
If you do a rewrite, you essentially put everything else on hold while rewriting.
If you keep doing feature development on the old code while another "tiger team" does the port, then these two teams are essentially in a race against each other, and the port will likely never catch up (depending on relative velocities).
Maybe they think they can do this with LLM-assisted tooling in a big-bang approach quickly, and then continue from there without spending too much time on it.
I’ve been part of at least 2 successful rewrites. I think that Joel’s post is too often taken as gospel. Sometimes a rewrite is the best way forward.
Moving Ladybird from C++ to a safer more modern language is a real differentiator vs other browsers, and will probably pay dividends. Doing it now is better than doing it once ladybird is fully established.
One last point about rewrites: you can look at any industry disruptor as essentially a team that did a from-scratch rewrite of their competitors and won because the rewrite was better.
The context matters when we talk about Joel's article[0].
It's about Netscape. At the time, Netscape dominated the browser market. It was the leader, which means it had all the market share to lose. You can bet Microsoft's decision makers were closely monitoring what those at Netscape were doing.
Today, practically nobody uses Ladybird. Hardly anyone even knows of it[1]. It's far behind and has nothing to lose. If you really want to rewrite, it's better to do it when you have nothing to lose.
What's different today really is the LLMs and coding agents. The reason to never rewrite in another language is that it requires you to stop everything else for months or even years. Stopping for two weeks is a lot less likely to kill your project.
> then these two teams are essentially in a race against each other and the port will likely never catch up
Ladybird appears to have the discipline to have recognized this: “[Rust] is not becoming the main focus of the project. We will continue developing the engine in C++, and porting subsystems to Rust will be a sidetrack that runs for a long time.”
Firefox is already spying on you with a lot of telemetry, and they have recently amended their terms of use to remove the obligation to "never sell your data" [1]. So perhaps you should reconsider that statement.
A lot of the previous calculus around refactoring and "rewrite the whole thing in a new language" is out the window now that AI is ubiquitous. Especially in situations where there is an extensive test suite.
For a personal project I had AI write some Python libraries to power a CLI. It has to do with simple Excel file filtering, grouping, and aggregating. Nothing too fancy. However, since it's backed by a library, I am playing with different UIs for the same thing, and it's fun to say "Do it with Streamlit." Oh, it can't do this particular thing? Fine, do it with Shiny. No? OK, Dash. It takes only about an hour to prototype with a whole new UI library, and then I get to say "nah" like a spoiled child. :)
> Well, I am on the provocative side that as AI tooling matures current programming languages will slowly become irrelevant.
I have the opposite opinion. As LLM become ubiquitous and code generation becomes cheap, the choice of language becomes more important.
The problem with LLMs, for me, is that it is now possible to write anything using only assembly. While technically possible, who can possibly read and understand the mountain of code it is going to generate?
I use LLMs at work in Python. They can, and will, easily pile hacks upon hacks to get around things.
Thus I maintain that as code generation becomes cheap, it becomes more important to constrain that code generation.
All of this assumes you care even a tiny bit about what is happening in your code. If you don't, I suppose you can keep banging on the LLM to fix that binary blob for you.
A lot of programming language preferences are based on the assumption that people will be the ones using them. As soon as it's LLMs using them, a lot of what motivates those choices becomes less valid.
I've been doing a few projects that are definitely outside my comfort zone with LLMs, and it's fine. I can read the code; I just don't have the muscle memory to produce it.
I don't agree. For one thing, the language directly impacts things like iteration speed, runtime performance, and portability. For another, there's a trade-off between "verbose, eats context" and "implicit, hard to reason about".
IMO Rust will strike a very strong balance here for LLMs.
I'm already using models to reason about and summarize parts of the code, from programming language to prose. They are good at that. I can see the process being something like English to machine language, and machine language back to English if the human needs to understand. However, another truism is that compilers are a great guardrail against bad generated code. More deterministic guardrails are good for LLMs. So no, I'm not yet at the point where I trust binaries to the statistical text generators.
I would say that current programming languages have a better chance due to the huge amount of code that AI can train on. New languages do not have that leverage. Moreover, current languages have large ecosystems that still matter.
I see the opposite. New languages have more difficulty breaking into popularity due to the lack of existing code and ecosystems.
> After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns.
I feel like you just know it's doomed. What this is saying is "I didn't want to and cannot review the code it generated." Asking models to find mistakes never works for me. They'll find obvious patterns and a tendency towards security mistakes, but not deep logical errors.
Somehow they did use this as part of their approach to get to 0 regressions across 65k tests, no performance regressions, and identical output for AST and bytecode, though. How much manual review was part of the hundreds of rounds of prompt steering is not stated, but I don't think you can say it couldn't find any deep logical errors along the way and still achieve those results.
The part that concerns me is whether this part will actually come in time or not:
> The Rust code intentionally mimics things like the C++ register allocation patterns so that the two compilers produce identical bytecode. Correctness is a close second. We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time.
Of course, it wouldn't be the first time Andreas delivered more than I expected :).
That’s convincing and impressive, but I wouldn’t say it proves it can spot deep errors. If it’s incredible at porting files and comparing against the source of truth, then finding complicated issues isn’t what’s being tested, IMO.
Your argument is just as applicable on human code reviewers. Obviously having others review the code will catch issues you would never have thought of. This includes agents as well.
They’re not equal. Humans are capable of actually understanding and looking ahead at consequences of decisions made, whereas an LLM can’t. One is a review, one is mimicking the result of a hypothetical review without any of the actual reasoning. (And prompting itself in a loop is not real reasoning)
Yeah, I lost all interest in the ladybird project now that it is AI slop.
No one wants to work with this generated, ugly, unidiomatic ball of Rust, other than other people using AI. So your dependency on AI grows and grows. It is a vicious trap.
> This is not becoming the main focus of the project. We will continue developing the engine in C++, and porting subsystems to Rust will be a sidetrack that runs for a long time.
I don't like this bit. Wouldn't it be better to decide on a memory-safe language and then commit to it by writing all new code in it? This looks like doing double the work.
It doesn't have to be all-or-nothing. Firefox has been a mixed C++ and Rust codebase for years now. It isn't like the code is written twice. The C++ components are written in C++, and the Rust components are written in Rust.
I suspect that'll also be what happens here. And if the use of Rust is successful, then over time more components may switch over to Rust. But each component will only ever be in one language at a time.
You can't compare the choices made to evolve a >20-year-old codebase with a brand new one. Firefox also has Rust support for XPCOM components, so you can use and write them in Rust without manual FFI (this comes with some baggage, of course).
The Ladybird devs painted themselves in a corner when choosing C++ for a new web browser, with many anti-Rust folks claiming that "modern C++ was safe". Well...
Firefox was special in that Mozilla created Rust to build Servo and then backported parts of Servo to Firefox and ultimately stopped building Servo.
Thankfully Servo has picked up speed again and if one wants a Rust based browser engine what better choice than the one the language was built to enable?
> We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time.
I wonder what kind of tech debt this brings and if the trade off will be worth whatever problems they were having with C++.
The tech debt risk in this case is mostly in the cleanup phase, not the port itself. Non-idiomatic Rust that came from C++ tends to have a lot of raw pointer patterns and manual lifetime management that works fine but hides implicit ownership assumptions. When you go to make it idiomatic, the borrow checker forces those assumptions to be explicit, and sometimes you discover the original structure doesn't compose well with Rust's aliasing rules. Servo went through this. The upside is you catch real latent bugs in the process.
It depends. I migrated a 20k LOC C++ project to Rust via AI recently, and I would say it did so pretty well. There is no unsafe or raw pointer usage. It did add Rc<RefCell<...>> in a bunch of places to make things happy, but that ultimately caught some real bugs in the original code. Refactoring it to avoid shared memory (and the need for Rc<RefCell<...>>) wasn't very difficult, but keeping the code structure identical at first allowed us to continue to work on the C++ code while the Rust port was ongoing and keep the port aligned without needing to implement features twice.
I would say modern c++ written by someone already familiar with rust will probably be structured in a way that's extremely easy to port because you end up modeling the borrow checker in your brain.
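A minimal sketch of the pattern described above, with made-up names: where C++ code shared a mutable object through raw pointers, the mechanical Rust translation reaches for `Rc<RefCell<T>>`, and the runtime borrow checking is what surfaces aliasing bugs the original silently tolerated.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Stand-in for some shared state the C++ code passed around by pointer.
#[derive(Default)]
struct Counter {
    hits: u32,
}

fn main() {
    let shared = Rc::new(RefCell::new(Counter::default()));
    let alias = Rc::clone(&shared); // a second "pointer" to the same object

    // Each borrow_mut() is checked at runtime; overlapping mutable
    // borrows (the kind of aliasing C++ allows freely) would panic here.
    alias.borrow_mut().hits += 1;
    shared.borrow_mut().hits += 1;

    assert_eq!(shared.borrow().hits, 2);
    println!("hits = {}", shared.borrow().hits);
}
```

Refactoring away from this later usually means restructuring so one owner holds the value and others receive short-lived `&`/`&mut` borrows.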
Yes, I just translated a Rust library from non-idiomatic and unsafe Rust to idiomatic and safe Rust and it was as much work as if I had rewritten it from scratch.
Andreas Kling has mentioned many times that they would prefer a safer language, specifically for their JS runtime's garbage collector.
But since the team was already comfortable with C++, that was the choice, though they were openly and actively seeking alternatives.
The problem was strictly that C++ is perceived as an unsafe language, and that problem Rust does solve!
Not being sarcastic, this truly looks like a mature take. Like, we don't know if moving to Rust would improve quality or prevent vulnerabilities; here's our best effort to find out, setting aside for now whether the claim has merit. If the claim holds up, well, you're better prepared; if it doesn't, but the code holds similar qualities... what is the downside?
Considering David Tolnay's indefensible treatment of JeanHeyd Meneide, I'm inclined to agree with Kling on the toxicity of the Rust community. Evangelical fervor does not excuse douchebaggery.
> We previously explored Swift, but the C++ interop never quite got there, and platform support outside the Apple ecosystem was limited.
Why was there ever any expectation for Swift having good platform support outside Apple? This should have been (and was to me) already obvious when they originally announced moving to Swift.
Apple actually did put some resources behind it, the toolchain is reasonably pleasant to use outside macOS and Xcode, they have people building an ecosystem in the Swift Server Workgroup, and arguably some recent language design decisions don't seem to be purely motivated by desktop/mobile usage.
But in the end I can't help but feel Swift has become an absolute beast of a multi-paradigm language with even worse compile times than Rust or C++ for dubious ergonomics gains.
Have you actually used .NET on Linux/macOS? I have (both at home and work) and there isn't anything that made me think it was neglected on those platforms. Everything just works™
> If you look at the code, you’ll notice it has a strong “translated from C++” vibe. That’s because it is translated from C++. The top priority for this first pass is compatibility with our C++ pipeline. The Rust code intentionally mimics things like the C++ register allocation patterns so that the two compilers produce identical bytecode. Correctness is a close second. We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline.
Does this still get you most of the memory-safety benefits of using Rust vs C++?
All the best to them; however, this feels like yak shaving instead of focusing on delivering a browser that can become an alternative to the Safari/Chrome duopoly.
Part of browser experience is safety and migrating their JS library to Rust is probably one of the best ways to gain advantage over any other existing engine out there in this aspect. Strategically this may and likely will attract 3rd party users of the JS library itself, thus helping its adoption and further improving it.
They're not porting the browser itself to Rust, for the record.
JavaScript is a self-contained subsystem; if the public API stays the same, they can rewrite as much as they want. I also suppose this engine will attract new contributors who want to contribute to Ladybird just because they enjoy working with Rust.
Don't forget that the Rust ecosystem around browsers is growing: Firefox already uses it for their CSS engine[0], and AFAIK Chrome's JPEG XL implementation is written in Rust.
So I don't see how this could be seen as a negative move; I don't think sharing libraries in C++ is as easy as in Rust.
Not only is Firefox using it for their CSS engine, but Mozilla created Rust to build Servo, and sadly the CSS engine and maybe a few other parts are all they kept around when they offloaded Rust.
“the Rust ecosystem around browsers is growing” – in the beginning pretty much 100% of the ecosystem around Rust was browser oriented
Thankfully Servo is picking up speed again and is a great project to help support with some donations etc: https://servo.org/
Agreed. They said they ruled out rust in 2024, I believe the article they published was near the end of 2024 because I remember reading it fairly recently.
Seems like a lot of language switches in a short time frame. That'd make me super nervous working on such a project. There will be rough parts with every language, and deciding seemingly on a whim that one isn't good enough will burn a lot of time and resources.
Cool, that seems like a rational choice. I hope this will help Ladybird and Servo benefit from each other in the long run, and will make both of them more likely to succeed
Someone should try this with the “Ralph Wiggum loop” approach. I suspect it would fail spectacularly, but it would be fascinating to watch.
Personally, I can’t get meaningful results unless I use the tool in a true pair-programming mode—watching it reason, plan, and execute step by step. The ability to clearly articulate exactly what you want, and how you want it done, is becoming a rare skill.
Given the quality of their existing test suite I'm confident the Ralph Wiggum loop would produce a working implementation... but the code quality wouldn't be anywhere near what they got from two weeks of hands-on expert prompting.
It wasn't. The submitter submitted it with the title “Ladybird Browser adopts Rust”. We initially changed it to “Ladybird adopts Rust”, and now I've changed it to the original title, per the guidelines. The automatic title cleaner wouldn't make a change like that.
Woah, this is a wild claim. @dang: Is this a thing? I don't believe it. I, myself, have submitted many articles and never once did I see some auto-magical "title shortening algorithm" at work!
A LLM-assisted codebase migration is perhaps one of the better use cases for them, and interestingly the author advocates for a hands-on approach.
Adding the "with help from AI" almost always devolves the discussion from that to "developers must adopt AI or else!" on the one hand and "society is being destroyed by slop!" on the other, so as long as that's not happening I'm not complaining about the editorialized title.
I think we've come to the point where it should be the opposite for any new code, something along the lines of "done without AI". Being an old fart working in software development, I have many friends working as very senior developers. Every single one of them, including yours truly, uses AI.
I use AI more and more. It goes like: create classes A, B, C with such-and-such descriptive names; take this state machine / flowchart description to understand the flow; use this particular set of helpers declared in modules XYZ.
I then test the code, go over it looking for suboptimal patterns and anything else I prefer not to have, and ask it to change those.
After a couple of iterations the code usually shines. I also cross-check the final results against various LLMs, just in case.
Very happy to see this. Ladybird's engineering generally seems excellent, but the decision to use Swift always seemed pretty "out there". Rust makes a whole lot more sense.
You can do it via the C ABI, and use opaque pointers to represent higher-level Rust/C++ concepts if you want to.
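A rough sketch of that opaque-pointer pattern (type and function names are invented for illustration): Rust owns the value, the other side only ever sees a forward-declared pointer, and a small set of `extern "C"` functions operate on it.

```rust
// The C++ side would only see `struct Interner;` plus these three
// C-ABI functions. (A real export would also carry a no_mangle
// attribute so the symbol names are stable for the linker.)
pub struct Interner {
    strings: Vec<String>,
}

pub extern "C" fn interner_new() -> *mut Interner {
    // Box::into_raw hands ownership across the FFI boundary.
    Box::into_raw(Box::new(Interner { strings: Vec::new() }))
}

pub extern "C" fn interner_len(ptr: *const Interner) -> usize {
    // Caller promises the pointer came from interner_new and is live.
    unsafe { (*ptr).strings.len() }
}

pub extern "C" fn interner_free(ptr: *mut Interner) {
    if !ptr.is_null() {
        // Reconstitute the Box so Rust runs the destructor.
        unsafe { drop(Box::from_raw(ptr)) };
    }
}

fn main() {
    let p = interner_new();
    assert_eq!(interner_len(p), 0);
    interner_free(p);
    println!("ok");
}
```

The unsafety is confined to the boundary functions; everything behind the opaque pointer is ordinary safe Rust.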
Firefox is a mixed C++ / Rust codebase with a relatively close coupling between Rust and C++ components in places (layout/dom/script are in C++ while style is in Rust, and a mix of WebRender (Rust) and Skia (C++) are used for rendering with C++ glue code)
I’m curious what issues people were running into with Swift’s built in C++ interop? I haven’t had the chance to use it myself, but it seemed reasonable to me at a surface level.
Yeah, that part doesn't make much sense to me. IMO, Swift has reasonably good C++ interop[1] and Swift's C interop has also significantly improved[2] since Swift 6.2.
> albeit you have to struggle sending `std` types back and forth a bit
Firefox solves this partly by not using `std` types.
For example, https://github.com/mozilla/thin-vec exists in large part because it's compatible with Firefox's existing C++ Vec/Array implementation (with the bonus that it's only 8 bytes on the stack compared to 24 for the std Vec).
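The size difference comes from `Vec` being three words (pointer, length, capacity) while a thin vector keeps length and capacity inside the heap allocation, leaving a one-word handle. A sketch of just the layout claim, with a hypothetical handle type standing in for ThinVec:

```rust
use std::mem::size_of;

// A thin vector's handle is a single pointer; len and cap live in a
// header at the start of the heap allocation. (Illustrative only --
// the real crate is https://github.com/mozilla/thin-vec.)
struct ThinVecHandle {
    ptr: *mut u8, // points at { len, cap, elements... }
}

fn main() {
    // std Vec is three words: 24 bytes on a typical 64-bit target.
    assert_eq!(size_of::<Vec<u8>>(), 3 * size_of::<usize>());
    // The thin handle is one word: 8 bytes on the same target.
    assert_eq!(size_of::<ThinVecHandle>(), size_of::<usize>());
    println!(
        "Vec: {} bytes, thin handle: {} bytes",
        size_of::<Vec<u8>>(),
        size_of::<ThinVecHandle>()
    );
}
```

The trade-off is an extra indirection to read the length, which is why it suits containers that are mostly empty or embedded in large object graphs.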
I know he doesn't make live coding videos anymore, but it'd be cool if Andreas showed off how this worked a little more. I'm curious how much he had to fix by hand (vs reprompting or spinning a different model or whatever).
What happened? It's been a while since I checked in, but it seems he doesn't work on Serenity and doesn't live stream anymore (and is now into lifting weights).
He got his serenity, and at the same time the Ladybird browser started getting somewhere, so he separated it out and went all in on it. From what I know, he worked on browsers before at Apple, so it was like he was getting ready to return.
Porting the JS parser to Rust and adopting Rust in other parts of the engine while continuing to use C++ heavily is unlikely to make Ladybird meaningfully more secure.
Attackers are surprisingly resilient to partial security.
Translating software that has a lot of tests is easy for LLMs. I think we'll be seeing a lot more of that in the coming years. But it will take some time for people to build up more trust in these tools. Good test harnesses are a key enabler.
The inevitable cleanup that will follow this could be done the same way. Refactoring like that can be done in more bite sized chunks, which makes it easier to review what is happening and control how it is done.
Unfortunately licence incompatibility may prevent that. Ladybird is BSD and Servo is MPL. This is also why there is only limited collaboration between Servo and the Rust GUI ecosystem.
Is there any discussion of why D or even Ada was not considered? These languages have been around for a long time. If they were willing to use an LLM to break the initial barrier to entry for a new language, then a case can be made for these languages as well.
They already made the mistake picking a niche language twice (first their own language, then Swift as a cross-platform language), why would you want them to make it a third time?
What kind of response is this? I was asking if there was any technical evaluation of other languages. And D and Ada are not niche; they have been battle-tested in critical software.
Swift had/has some problems in the language itself. It wasn't the niche nature of Swift that was the problem, IIRC.
I don't think this is the right response, because a meaningful discussion certainly could have taken place, given that they were already open to other languages, which is why they picked Swift in the first place.
I remember a video where Andreas talked about how people used Rust in his codebase and were happy at first, but later it became very difficult, whereas they found Swift easier to manage. That was why they picked Swift at the time.
Certainly their goal wasn't to pick a popular language (if that's what you want, use Python or JS) but rather a language relevant to what they were building.
So whether D and Ada were relevant or not, that's the main point of discussion, IMO.
I've dabbled a bit in Ada, but it wouldn't be my choice either. It's still susceptible to memory errors. It's better behaved than C, but you still have to be careful. And the tooling isn't great, and there isn't a lot in terms of libraries. I think Ladybird also has aspirations to build their own OS, so portability could also be an issue.
Not the case with SPARK. But I understand it would require writing a lot of things from scratch for browsers. I don't think portability would be an issue with Ada, though; it is cross-platform.
This is where D shines, however. D has a mature ecosystem, offers first-class C++ ABI interop, and provides memory safety guarantees, which the blog mentioned as a primary factor. And D is similar to C++, so there's a low barrier for C++ devs to pick it up.
Unfortunately a really good question gets downvoted instead of prompting a relevant discussion, as so often happens on HN these days. It would be really interesting to know why Ada would not be considered for such a large project, especially now that the code is translated with LLMs, as you say. I was never really comfortable with them going for the most recent C++ versions, since there are still too many differences and unimplemented parts that make cross-compiler compatibility an issue. I hope that with Rust at least cross-compilation is possible, so that the resulting executable also runs on older systems where the toolchain is not available.
I personally think people might have read it as a "use Ada/D instead of Rust" comment, which might have led the HN people who prefer Rust to respond with downvotes.
I agree that this might be the wrong behaviour, and I don't think it's any fault of Rust itself, which would itself be a blanket statement, IMO. There's nuance on both sides of the discussion.
Coming to the main point, I feel the real reason could be that Rust is the sort of equilibrium the world has settled on, especially for security-related projects. Whether good or bad, this means that using Rust will likely bring more contributor resources, and the zeal of Rustaceans can definitely be harnessed as well, plus third-party libraries developed in Rust, although that itself is becoming a problem nowadays from what I hear from Rust users here (i.e., too many dependencies).
Rust does seem to be good enough for this use case. The question is what D/Ada (might I also add Nim/V/Odin) would add to the project, but I honestly agree that a fruitful comparison with other languages would have been beneficial to the project (IMO), and at the very least would have been very interesting to read.
Based on the origins of Rust as a tool for writing the really thorny, defensive parsers of potentially actively hostile code for firefox, I have to imagine that another web browser is the most at-home place the language could ever be.
I have my doubts it'll ever be "finished". Servo gives off strong vibes of a project that will avoid performance hacks because they're not nice/state-of-the-art code. I have no evidence; it's just the energy I've picked up from it.
My intuition is that they will convert again, to Zig, once it stabilizes. If it is possible to do it with an LLM in two weeks for Rust, then it would be the same for Zig, too.
While Rust is nice on paper, writing complex software in it is mentally consuming. You cannot do it for a long time.
If they do, it could be because safety is a gradient and one variable among many in software development, albeit a very important one when it comes to browsers.
Pardon my ignorance, but doesn't a byte-by-byte output recreation of the C++ code in Rust defeat the whole purpose of using Rust? For one, would it be idiomatic Rust anymore? Also, if there's a (non-memory related) vulnerability in the C++ code, would it be possible for that to be introduced in Rust too?
The impression I get from the article is not that the compiled code of each implementation produces the same object code, but that when the implementations are run with the same inputs, they produce exactly the same output — that is, the same JS VM bytecode.
If they had developed a technique to get a modern C++ compiler and rustc to generate exactly the same output for any program (even a trivial one) I think that would be huge news and I would love to see all the linker hacking that would involve.
I've looked at the code from the PR. It seems to use safe types and standard idioms like pattern matching, so at least at first glance it looks like Rust.
It could have been worse. C++ code naively converted line-by-line to Rust typically results in weird and unsafe Rust, but in this case it seems they've only been strict about the results being the same, not the structure of the implementation.
Rewrites have a very high risk of introducing regressions. Trying to fix bugs while rewriting will only make things harder, because instead of simply comparing outputs exactly, you'll have to judge which output is the right one. If you let the behavior significantly diverge during the rewrite, you'll just have two differently buggy codebases and no reference to follow.
It's much easier to make a bug-for-bug compatible copy, and fix bugs later.
Once you get a byte-by-byte duplicate, you can start refactoring into idiomatic Rust. Convert pointers to references, rip out unsafe blocks, and let Clippy go ham.
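The bug-for-bug discipline can be sketched as a differential test: feed both pipelines the same inputs and require byte-identical output. Here `old_compile`/`new_compile` are trivial placeholders standing in for the two implementations, not anything from Ladybird.

```rust
// Placeholder "old" pipeline: any deterministic bytes-out function works
// for illustrating the harness shape.
fn old_compile(src: &str) -> Vec<u8> {
    src.bytes().map(|b| b.wrapping_add(1)).collect()
}

// Placeholder "new" pipeline, intentionally mimicking the old one
// exactly -- parity testing only works while the two stay bug-for-bug
// compatible.
fn new_compile(src: &str) -> Vec<u8> {
    src.bytes().map(|b| b.wrapping_add(1)).collect()
}

fn main() {
    for src in ["1 + 2", "let x = 3;", "f(x)"] {
        let (a, b) = (old_compile(src), new_compile(src));
        // Any divergence pinpoints the exact input that triggered it.
        assert_eq!(a, b, "divergence on input {:?}", src);
    }
    println!("all outputs byte-identical");
}
```

Because the oracle is the old implementation itself, every test input in the corpus becomes a regression test for free; judgment about which behavior is "right" is deferred to the later cleanup phase.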
"The web platform object model inherits a lot of 1990s OOP flavor, with garbage collection, deep inheritance hierarchies, and so on. Rust’s ownership model is not a natural fit for that."
I'm confused about this part. What part of the browser did they want GC and inheritance for? I'd get it if they were writing the UI in this, but the rest of this post is about the JS engine. They weren't going to 1:1 map JS objects to Swift objects and rely on ARC to manage memory, were they?
Lots of DOM APIs are like that. You have accessors like element.[parent|children]() which imply a circular structure, and then you have APIs like element.click(), which emits a click event that bubbles through the DOM, which means the element has to hold some mutable reference to the DOM state. Or even element.remove(), which seems like a super weird API to have on an element of a collection, from a Rust API design point of view.
You can model these with reference counting, but this turned out to be unfeasible in browsers. There's a great talk from when Blink (Chrome) transitioned from reference counting to GC, which provides a lot more detail about these problems in practice: https://www.youtube.com/watch?v=_uxmEyd6uxo
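As a rough illustration (not Ladybird's actual code), a direct Rust translation of the parent/children cycle needs `Rc` for shared ownership plus `Weak` to break the cycle, i.e. exactly the reference-counting approach the talk argues doesn't scale for browsers:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// A toy DOM node: children own their subtree via Rc, while the parent
// link is Weak so the parent<->child cycle doesn't leak.
struct Node {
    name: String,
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let parent = Rc::new(Node {
        name: "body".into(),
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    });
    let child = Rc::new(Node {
        name: "div".into(),
        parent: RefCell::new(Rc::downgrade(&parent)),
        children: RefCell::new(Vec::new()),
    });
    parent.children.borrow_mut().push(Rc::clone(&child));

    // The child can reach its parent, like element.parentNode:
    let p = child.parent.borrow().upgrade().unwrap();
    assert_eq!(p.name, "body");
    println!("{} -> parent {}", child.name, p.name);
}
```

Even this toy version needs `RefCell` everywhere for mutation and careful Weak/Rc discipline, which hints at why browsers moved to tracing GC for the DOM.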
> I'd get it if they were writing the UI in this, but the rest of this post is about the JS engine.
I think this might be the reason they started with the JS engine and not with more fundamental browser structures. The JS object model has these problems too, but the engine has to solve them in a more generic way: all JS objects can just be modeled as some JSObject class/struct where this is handled at the engine level.
The DOM and other browser structures are different because the engine has to understand them, so the browser developers have to interact with the GC manually. If you watch the talk above, you'll see that it's quite involved to do even in C++, let alone in Rust, which puts a bunch of restrictions on top of that.
The "human-directed, not autonomous" framing is the part people keep glossing over. Claude Code here is a compiler-level translation tool, you are still the architect deciding what gets ported and in what order.
The real question is what this does to migrations that never happened because 18 months of rewrite did not pencil out. A 2-week port fundamentally changes that calculus.
If it is this easy, surely the trend is Rust output being an intermediate pass of the LLM super compiler. A security subset if you will (like other kinds of optimization), it will move from Rust specs to some deeper level of analysis and output the final executable. Some brave souls will read the intermediate Rust output (just like people used to read the assembler output from compilers) but the LLM super compiler will just translate a detailed English like spec into final executables.
I'd generally be quite surprised to see LLMs spam unsafe blocks, both because that's behavior that I haven't observed while using them and because that contradicts my mental model of them where they imitate the styles of code that they were trained on (which in rust generally does not include spamming unsafe).
The most underappreciated aspect of AI-assisted language migration is that it changes the cost-benefit analysis of which language to target. Previously, choosing Rust for a browser engine meant accepting 3-5x slower development velocity vs C++. If AI closes that gap to near parity, the calculus shifts dramatically: you get Rust's safety guarantees essentially for free in terms of developer time.
The two-week timeline for 25k lines is striking. Even accounting for the human review overhead, that's probably 4-6x faster than manual porting. And the byte-for-byte verification approach means the speed doesn't come at the cost of correctness. This could be a template for how other large C++ codebases approach incremental Rust adoption.
> We’ve been searching for a memory-safe programming language to replace C++ in Ladybird for a while now.
The article fails to explain why. What problems (besides the obvious) have been found that "memory-safe languages" can help with? Do these problems actually justify adding complexity to a project like this by adding another language?
I guess AI will be involved, which, at this early point in the project, would make Ladybird a lot less interesting (at least to me).
Browsers are incredibly security-sensitive projects. Downloading untrusted code from the internet and executing it is part of their intended functionality! If memory safety is needed anywhere, it's in browsers.
This is really YOLOing, as the original author doesn't know Rust well. What happens if they hit some complex production issue LLMs aren't aware of? Hire an expensive consultant to fix it until the next LLM iteration?
I'm as anti-LLM-use as they come, but this appears to be migrating libraries from already functioning C++ code. In the case of your hypothetical, I suspect the course of action would be "shelve this library port until someone with domain expertise and Rust experience can look at it". It's not like he chucked the whole codebase at the GenAI gods and said "Port it to Rust!".
Any word on how much more memory safe the implementation is? If passing a previous test suite is the criteria for success, what has changed, really? Are there previous memory safety tests that went from failing to passing?
I am very interested to know if this time and energy spent actually improved memory safety.
Other engineers facing the same challenges want to know!
If the previous impl had known memory safety issues I'd imagine they'd fix them as a matter of priority. It's hard to test for memory safety issues you don't know about.
On the rust side, the question is how much `unsafe` they used (I would hope none at all, although they don't specify).
It is entirely possible a Rust port could have caught previously unknown memory safety issues. Furthermore, a Rust port that looks and feels like C++ may be peppered with unsafe calls to the point where the ROI on the port is greatly reduced.
I am not trying to dunk on the effort; quite the contrary. I am eager to hear more about the goals it originally set out to achieve.
Interesting in the context that, some time ago, Anders said they failed at porting the TypeScript compiler from TypeScript itself to Go using LLMs, and they went with a manual port instead: https://youtu.be/uMqx8NNT4xY?si=Vf1PyNkg3t6tmiPp&t=1423
That's a pivot; iirc they wanted to go with Swift (I'm very glad they didn't). It's cool to see something like Claude be useful for large-scale projects like this.
This is great! I'm very excited about Ladybird. My current browser, Brave, is the best of the worst but Ladybird is the best of the best. On another note, I'm hoping iOS will eventually let go of their control over WebKit, however wishful it might be.
I wonder what is gained by this port though, if the C++ codebase already employed modern approaches to memory management. It's entirely possible that the Rust version will perform worse too as compilers are less mature.
Maybe, but it's certainly possible to write memory safe code in C++. It may be more or less difficult, but it isn't typically the ONLY objective of a project. C++ has other advantages too, such as seamless integration with C APIs and codebases, idiomatic OOP, and very mature compilers and libraries.
I must admit to being somewhat confused by the article's claim that Rust and C++ emit bytecode. To my knowledge, neither do (unless they're both targeting WASM?) - is there something I'm missing or is the author just using the wrong words?
EDIT: bramhaag pointed out the error of my ways. Thanks bramhaag!
By 'Rust compiler' and 'C++ compiler', they refer to the LibJS bytecode generator implemented in those languages. This is about the generated JS bytecode.
I don't get the impression they care that much about performance. Besides, it would limit the number of platforms it could run on if it requires a recent GPU.
A lot of the interest in Ladybird and its parent project, SerenityOS, stemmed from the bespoke, "from scratch" approach Andreas advocated. If they're going to start vibe-coding everything, then what's the point of the top-to-bottom wheel reinvention they're doing? E.g., why even bother maintaining their own, custom XML parser instead of using any of the multitude that already exist (including in Rust) if they're ultimately going to lean heavily on AI in the course of developing it, since that inherently involves laundering existing implementations?
This project, of all of them, throwing in the towel on AI makes me fear AI abstainers have no future.
This will be another bad decision, just like with Swift. From what I've heard, Rust is notoriously bad at letting people define their own structure and instead beats you up until you satisfy the borrow checker. I think it'll make development slow and unpleasant. There are people out there who enjoy that, but it's not a fit when you need to deliver a really huge codebase in reasonable time. I remember Andreas mentioning he just wanted something like C++ but with a GC, and D would be absolutely perfect for this job.
Maybe, but will they have to fight the borrow checker for anything other than the (very OOP) DOM components? They'll obviously use both languages for a long time to come, so the more functional places can get Rust, while the more OOP places can benefit from C++.
And? Does it work? Because it does. It's a lot closer to C++, you literally need like a week to start being productive, and it's insanely flexible as a language. Nobody uses Swift either, but the additional problem with Swift was that it's entirely Apple-centric.
> Porting LibJS
> Our first target was LibJS, Ladybird’s JavaScript engine. The lexer, parser, AST, and bytecode generator are relatively self-contained and have extensive test coverage through test262, which made them a natural starting point.
> Results
> The requirement from the start was byte-for-byte identical output from both pipelines. The result was about 25,000 lines of Rust, and the entire port took about two weeks. The same work would have taken me multiple months to do by hand.
I'm not here to troll the LLM-as-programmer haters, but Ladybird (and Rust!) is loved by HN, and this is a big win.
How long until Ladybird begins to impact market dominance for Chrome and Firefox? My guess: Two years.
Note that Firefox doesn't have market dominance. It is under 5% market share. That said I imagine Firefox users to be the most likely to make the jump. However, the web is a minefield of corner cases. It's hard to believe it will be enough to make the browser largely useful enough to be a daily driver.
Why do you think Firefox users would be most likely to make the jump? The main reason I see people give for supporting Ladybird is challenging the dominance of the incumbents. That's not really a great reason to switch from Firefox because, as you note, it doesn't have any dominance. And there's also an argument that splitting the non-Chrome market into two only increases Chrome's dominance.
From what I can tell from HN, Brave seems to be popular with those users who hate Google but for whatever reason hate Mozilla even more, and I suspect those will be the most likely users to switch.
This is sort of hilarious if you think about it. The Firefox browser is completely written in Rust. Now Ladybird is a "human-directed AI" Rust browser. Makes you wonder how much code the two browsers will share going forward, given that LLM-assisted autocompletes will pull from the same Rust browser dataset.
Probably not much: the requirement is exact equivalence of program inputs to outputs, and as such the agents are performing very mechanical translation from the existing C++ code to Rust. Their prompts aren't "implement X browser component in rust", they're "translate this C++ code to Rust, with these extra details that you can't glean from the code itself."
Only a small portion of Firefox is written in Rust. Apparently some of the most performant and least buggy parts are those in Rust, but again, only parts like the CSS engine.
I don't get it, and I don't have a dog in the C/C++ vs. Rust race.
Ladybird has ~1200 contributors with a predominance of C++ contributions, followed by HTML, and with "other" lying at 0.5%.
That's a lot of people contributing.
How many of them will be less willing to contribute in the future, and less productive when they do if a sizable portion is in Rust?
Maybe there'll be more contributions and maybe there'll be less. I don't know.
If you've managed to develop a community of 1200 developers who are willing to advance the project why upset the applecart?
Probably not, unless using Rust presents some particular challenge for this type of project. But having eaten this proverbial apple, they would probably use AI more and more, assuming they have the budget, and in that case Rust's contributor pool being less rich than C++'s might not mean much for productivity.
I wouldn't mind if one result of this was a writeup on what patterns/antipatterns are there when converting code and concepts that used to be very aligned with C++-style OOP, deep inheritance and all that jazz, to what feels natural in Rust, and how you can rephrase those concepts without loss in the substance of what you need to do.
I guess it's a long way off, since the LLM translation would need to be refactored into natural Rust first. But the value of it would be in that it's a real world project, and not a hypothetical "well, you could probably just...".
Sigh agents keep killing all the passion I have for programming. It can do things way faster than me, and better than me in some cases. Soon it will do everything better and faster than me.
> Soon it will do everything better and faster than me
There is no evidence of that coming from this post. The work was highly directed by an extremely skilled engineer. As he points out, it was small chunks. Which chunks, and in what order, were his decision.
Is AI rewriting those chunks much faster than he could? Yes. Very much so. Is it doing it better? Probably not. So it is mostly just faster when you are very specific about what it should do. In other words, it is not a competitor. It is a tool.
And the entire thing was constrained by a massive test suite. AI did not write that. It does not even understand why those tests are the way they are.
This is a long way from "AI, write me a JavaScript engine".
I'd put it as an example of a carpenter preparing their material with a lathe and circular saw vs. one working with a handsaw and chisel.
Both will get a skilled craftsman to the point where the output is a quality piece of work. Using the automated tools to prepare the inputs allows velocity and consistency.
The main issue is the hype and the script kiddies who would say: feed this tree into a machine and get a cabinet. Producing non-deterministic outputs, with the operator unable to adjust requirements on the fly or even stray from patterns/designs that haven't been trained on yet.
The tools have limitations, and so do the operators, and the hype does a disservice to what should be establishing reasonable patterns of usage and best practices.
Is a migration from language X to Y or refactoring from pattern A to B really the kind of task that makes you look forward to your day when you wake up?
Personally my sweet spot for LLM usage is for such tasks, and they can do a much better job unpacking the prompt and getting it done quickly.
In fact, there are a few codebases at my workplace that are quite shit, and I'm looking forward to making my proposal to refactor them. Prior to LLMs I'm sure I'd have been laughed off, but now it's much more practical to achieve.
Right. I had a 100% manual hobby project that did a load of parametric CAD in Python. The problem with sharing this was either actively running a server, trying to port the stack to emscripten including OCCT, or rewriting in JS, something I am only vaguely experienced in.
In ~5 hours of prompting, coding, testing, tweaking, the STL outputs are 1:1 (having the original is essential for this) and it runs entirely locally once the browser has loaded.
I don’t pretend that I’m a frontend developer now but it’s the sort of thing that would have taken me at least days, probably longer if I took the time to learn how each piece worked/fitted together.
It's the opposite for me: most of the time the first rough pass it generates is awful, and if you don't have good taste and a solid background of years of programming experience you won't notice. I keep having to steer it toward better design choices.
I'm not sure 25,000 lines translated in 2 weeks is "fast", for a naive translation between languages as similar as C++ and Rust (and Ladybird does modern RAII smart-pointer-y C++ which is VERY similar to Rust). You should easily be able to do 2000+ lines/day chunks.
Yeah, it also matters a lot that the person doing the translation is the lead developer of the project, who is very familiar with the original version.
I imagine LLMs do help quite a bit for these language translation tasks though. Language translation (both human and programming) is one of the things they seem to be best at.
Agreed, however, I'm quite sure 25,000 lines translated in "multiple months" is very "slow", for a naive translation between languages as similar as C++ and Rust.
Look into platforms like Workato, Boomi, or similar iPaaS products. Unfortunately, it feels like those of us who like coding have to be happy turning into architect roles, with AI as the bricklayers.
Despite the many claims to the contrary, agents can't do anything better than a human yet. Faster, certainly, but the quality is always poor compared to what a human would produce. You aren't obsolete yet, brother.
Dunno, that probably doesn't hold for webapps with backend as they are typically complete garbage and LLMs (even local ones) would give you about the same result but in 1 hour.
It automates both the fun and the boring parts equally well. Now the job is like opening a box of legos and they fall out and then auto-assemble themselves into whatever we want..
Rather like opening a box of legos and reading them the instruction sheet while they auto assemble based on what they understood. Then you re-read and clarify where the assembly went wrong. Many times, if needed.
I remember seeing interviews saying Rust was not suited for this project because of recursion and the DOM tree, and how they tested multiple languages and settled on Swift. Then they abandoned Swift, and now they shift towards Rust.
This entire project starts to look like "how am I feeling today?" rather than a serious project.
From the link it seems that Ladybird's architecture is very modular; LibJS is one of the subsystems with the fewest external dependencies. That said, they don't need to migrate everything, only the parts that make sense.
I feel similar about the potential of this technique and have heard this from other C++ developers too.
Rust syntax is a PITA and investing a lot of effort in the language doesn’t seem worth the trouble for an experienced C++ developer, but with AI learning, porting and maintenance all become more accessible.
It’s possible to integrate Rust in an existing codebase or write subparts of larger C++ projects in Rust where it makes sense.
I was recently involved in an AI porting effort, but using different languages and the results were fine. Validating and reviewing the code took longer than writing it.
I am unsure if I can rationally justify saying this, but I am left with disappointment and unease. Comparable to when a series I care about changes showrunner and jumps the shark.
Hate to tell you this, but it's cults all the way down. Plato understood this, and his disdain for caves and wall-shadows, is really a disdain for cults. The thing is, over the last 2300 years, we have gotten really good at making our caves super cozy -- much cozier than the "real world" could ever be. Our wall-shadows have become theme parks, broadway theaters, VR headsets, youtube videos, books, entire cities even. In Plato's day, it made sense to question the cave, to be suspicious of it. But today, the cave is not just at parity with reality, it is superior to it (similar to how a video game is a precisely engineered experience, one that never has too little signal and never has too much noise, the perfect balance to keep you interested and engaged).
I'm no mind reader, and certainly no anthropologist, but I suspect that what separates humans from other (non extinct) animals, is that we compulsively seek caves that we can decorate with moving shadows and static symbols. We even found a series of prime numbers (sequences of dots, ". ... ..... .......") in a cave from the _ice age_. Mathematics before writing. We seek to project what we see with our mind's eye into the world itself, thereby making it communicable, shareable. Ever tell someone you had a dream, and they believed you? You just planted the seed for a cult, a shared cave. Even though you cannot photograph the dream, or offer any evidence that you can dream at all.
The industrial and scientific revolutions have distanced our consciousness from this idea, even as they enabled ever more perfect caves to manifest. Our vocabulary has become corrupted and unclear. We started using words like "reality", and "literally", and "truth", when we mean the exact opposite.
The conspiracy theorists and cultists, are just people who wandered into a new cave, with a different kind of fire, and differently curved walls, and they want to tell people from their old cave that they have found a way out of the cave into reality -- they do not yet realize (or do not want to accept), that they live in a network of caves, a network of different things in the same category.
During the early 2020s, we did a lot of talking about the disappearance of "consensus reality". This is scientific terminology mapped over the idea of caves and cults. You can tell, because the phrase is an oxymoron. It is not reality, if it requires consensus. It is fantasy, it is fiction, it is a dream. The cave has indeed become so widespread that we even _call_ it reality.
If you speak language, and read words, you are participating in a cult (we even call caves that had a kind of altar in the center a cult -- in Eurasia, there was a cave-cult called _the cult of the bear_, which had a bear skull placed in its center during the last ice age, and I would not be surprised if people spoke to it, with the help of hallucinogens). The only question is whether the cult is nourishing you or cannibalizing you.
To the person you are responding to (user ocd): your cave (ladybird, your hypothetical tv-series), no longer nourishes you like it once did. Maybe find a new cave, build a fire in it. Unlike a television series, you can fork a code base. You make it into the perfect cave, just for you. And if another person likes this cave, chooses to sit by the fire with you, well, now you have a cult.
Servo isn't a JS engine. Do you mean why didn't they abandon their mission statement of developing a truly independent browser engine from scratch, abandon their C++ code base they spent the last 5 years building, accept a regression hit on WPT test coverage, so they can start hacking on a completely different complex foreign code-base they have no experience in, that another team is already developing?
Well for one, Servo isn't just JavaScript, it's an entire engine. Closer to Blink & Gecko.
Secondly, Ladybird wants to be a fourth implementor in the web browsers we have today. Right now there's pretty much three browser engines: Blink, Gecko and WebKit (or alternatively, every browser is either Chrome, Firefox or Safari). Ladybird wants to be the fourth engine and browser in that list.
Servo also wants to be the fourth engine in that list, although the original goal was to remove Gecko and replace it with Servo (which effectively wouldn't change the fact there's only three browsers/three engines). Then Mozilla lost track of what it was doing[0] and discarded the entire Servo team. Nowadays Servo isn't part of Mozilla anymore, but they're clearly much more strapped for resources and don't seem to be too interested in setting up all the work to make a Servo-based browser.
The question of "why not use Servo" kinda has the same tone as "why are people contributing to BSD, can't they just use Linux?". It's a different tool that happens to be in the same category.
Ladybird has a strong "all dependencies built in house" philosophy. Their argument is they want an alternative implementation to whatever is used by other browsers. I'd argue they would never use a third party library like servo as a principle.
No they don’t. SerenityOS did, but when Ladybird split out they started using all sorts of third-party libraries for image decoding, networking, etc.
Now, a core part of the browser rendering engine is not something they’re going to outsource, because that would defeat the goal of the project, but they have a far different policy on dependencies now than they used to.
Some time ago I was perma-banned from the Ladybird github repository. One can say it is warranted, or not (people have their own opinion; I completely disagree with their decision). Now that this has happened, I can speak more freely about Ladybird.
Naturally this will be somewhat critical, but I need to first put things into context. I do believe that we really need an alternative to Google dominating our digital life. So I don't object that we need alternatives; whether Ladybird will be an alternative, or not, will be shown in the future. Most assuredly we need competition as otherwise the Google empire moves forward like Darth Vader and the empire (but nowhere near as cool as that; I find Google boring and lame. Even skynet in Terminator was more fun than Google. Google just annoys the heck out of me, but back to the topic of browsers).
So with that out of the way ... Ladybird is kind of ... erratic.
Some time ago, perhaps two or three months, Andreas suddenly announced "Swift WILL BE THE FOREVER FUTURE! C++ sucks!!!". People back then were scratching their heads; it was not clear why Swift was suddenly our saviour.
Ok, now we learn: "wait ... Swift is NOT the future, but RUST is!!!". Ok ... more head-scratching. We are having a deja-vu moment here... but it gets stranger:
"We previously explored Swift, but the C++ interop never quite got there, and platform support outside the Apple ecosystem was limited. Rust is a different story."
and then:
"I used Claude Code and Codex for the translation. This was human-directed, not autonomous code generation"
So ... the expertise will be with regards to ... relying on AI to autogenerate code in ... Rust.
I am not saying this is a 100% fail strategy, mind you. AI can generate useful code; we have seen that. But I am beginning to have more and more doubts about the Ladybird project. Add to this the breakage of URLs that are used by thousands or millions of people world-wide (see the issues reported on the GitHub tracker), and the question of whether, once you scale up and more and more people use Ladybird, you will be able to keep up with the issue tracker. Will you ban more people?
In a way it is actually good that I am no longer allowed to comment on their repository, because I can now be a lot more critical and ask questions that the Ladybird team will have to evaluate. Will Ladybird blend? Will it succeed? Will it fail? It is way too early to make an evaluation, so we should evaluate in some months, perhaps at the end of this year. But I am pretty certain the criticism will increase, at the latest the moment they decide to leave beta (or alpha or whatever model they use; they claimed they want a first working version this year for Linux users, let's see whether that works out).
Completely ignoring the Rust aspect, I’m disappointed that two weeks were spent on something that isn’t getting Ladybird to a state where it can be used as a daily driver. Ladybird isn’t usable right now, and if it was usable, improving the memory safety would be a commendable goal. Right now I just feel like this is premature.
They ported an existing project from C++ to Rust using AI because the porting would've been too tedious. I don't think they're planning on vibe-coding PRs the way you're imagining.
This comment raises an interesting question: Would Serenity OS have brought Andreas the same kind of serenity had it been developed with AI? Open candid question.
I like the idea that people are either coders or builders. So AI can help fulfill your desire to build, create, bring things into reality. But it can't satisfy you if you like programming for its own sake. SerenityOS was not a practical project, it was clearly done for the enjoyment of programming itself.
The project's use of AI now echoes that - it's not being used to create new features, it's used for practical, boring drudge work of translating between two languages. So still very much on brand.
I don't think so, because if I remember correctly, Andreas suffered from alcoholism and the serenity prayer helped him onto the right path; iirc he honored that and created an OS named SerenityOS.
God grant me the serenity
to accept the things I cannot change;
courage to change the things I can;
and wisdom to know the difference.
("courage to change the things I can"): I think this line must've given Andreas the strength, the passion, to make the project a reality.
But if AI made the change, would the line become "courage to prompt an all-powerful entity to change the things I asked it to"?
Would that give courage? Would that inspire confidence in oneself?
I have personally made many projects with LLMs (honestly, I must admit that I am a teenager, so I have been using them sort of from the start),
and personally, I feel like there are some points of curiosity I can be proud of in my projects, but there is still a sense of emptiness, and I think I am not the only one who sees it that way.
I think in the world of AI hype, it takes true courage & passion to write by hand.
Obviously one could argue that AI is the next bytecode, but that is false because of the non-deterministic nature of AI. Even so, I think the people who write assembly are likely to be more passionate about their craft than Node.js folks (and I would consider myself a Node.js guy, and there's still passion, but still).
Coding was definitely a form of art/expression/sense-of-meaning for Mr. Andreas during a time of struggle. To automate that might strip him of the joy derived from putting brush to an empty canvas.
Honestly, the more I think about AI, the less I feel I know, so I will not pretend that I know a thing or two about it. This message is just my opinion in the moment. Opinions change with time, but my opinion right now is that coding by hand is definitely more meaningful, if the purpose of the project is to derive meaning.
Yeah, some weekends ago I tried writing a cross-platform browser without any Rust crates, this weekend I made my own self-hosted compile-to-Rust Clojure-like Lisp, and maybe next weekend attempting to create an OS that uses my language to run on bare metal would actually be a challenge. Thanks for the inspiration :)
Cool project, but I'm a bit curious hearing how the rest of the project feels about this?
I'm not sure how I'd feel if I woke up and found a system I worked on had been translated into another language I'm not necessarily familiar with. And I'm not sure I'd want to fix a non-idiomatic "mess" just because it's been translated into a language I am familiar with, either (although I suspect they'll have no problem attracting Rust developers).
Lol, this dude is incapable of finishing anything he starts. Always a million distractions. First he started an OS called Serenity, then abandoned that to start a programming language (Jakt or something), then abandoned that to work on a web browser, and now it looks like he is looking for things to distract him from that... A shame really.
Serenity was literally a distraction for his substance addiction issues. It's pretty clear he's productive and he and his team have worked on Ladybird for several years straight now. How many web browsers or OSs have you developed from scratch?
The byte-for-byte identical output requirement is the smartest part of this whole thing. You basically get to run the old and new pipelines side by side and diff them, which means any bug in the translation is immediately caught. Way too many rewrites fail because people try to "improve" things during the port and end up chasing phantom bugs that might be in the old code, the new code, or just behavioral differences.
Also worth noting that "translated from C++" Rust is totally fine as a starting point. You can incrementally make it more idiomatic later once the C++ side is retired. The Rust compiler will still catch whole classes of memory bugs even if the code reads a bit weird. That's the whole point.
I'd say that porting is a great time to "improve" many things, but like you suggest, not a great time to add new features. You can do a lot of improvements while maintaining output parity. You're in the weeds, reading the code, thinking about the routines, and you have all the hindsight of having done it already. Features are great to add as comments that sketch things out, but importantly, this is a great time to find and recognize that maybe a subroutine is pretty inefficient. The big problem in writing software is that the goals are ever-evolving: you wrote the software for different goals, different constraints. So it's a great time to clean things up, make them more flexible, more readable, *AND TO DOCUMENT*.
I think the last one gets ignored easily, but my favorite time to document code is when reading it (though the best time is when writing it). It forces you to think explicitly about what the code is doing and makes it harder for the little things to slip by. Given that Ladybird is a popular project, I really do think good documentation is a way to accelerate its development. Good documentation means new people can come in and contribute faster and with fewer errors. It lowers the barrier to entry, substantially. It's also helpful for all the mere mortals who forget things.
LLMs are great at producing documentation - ask one "hey can you add a TODO comment about the thing on line 847 that is probably not the best way to do this?" while you're working on the port, and it will craft a reasonably-legible comment about that thing without further thought from you, that will make things easier for the person (possibly future-you) looking at the codebase for improvements to make. Meanwhile, you keep on working on the port that has byte-for-byte identical output.
That reminds me of the "Strangler Fig" pattern, where you replace a service by first sending requests to both the old and new implementations so you can compare their outputs. Only when you're confident the new service functions as expected do you actually retire the old one.
I hope, with the velocity unlocked by these tools, that more pure ports will become the norm. Before, migrations could be so costly that “improving” things “while I’m here” helped sell doing the migration at all, especially in business settings. Only to lead to more toil chasing those phantom bugs.
One of the biggest points of rewriting is that you know better by then, so you create something better.
This is a HUUUGE reason code written in Rust tended to be so much better than the original (which was probably written in C++).
Human expertise is the single most important factor and is more important than language.
Copy pasting from one language to another is way worse than complete rewrite with actual idiomatic and useful code.
Best option after proper rewrite is binding. And copy-paste with LLM comes way below these options imo.
If you look at the real world, basically all value is created by boring and hated languages. Because people spent so much effort on making those languages useful, and other people spent so much effort learning and using those languages.
Don’t think anyone would prefer to work in a rust codebase that an LLM copy-pasted from c++, compared to working on a c++ codebase written by actual people that they can interact with.
I did several web framework conversions exactly like this. I made sure the HTTP output string from the new code matched the old code's exactly, and then eventually deleted the old code with full confidence.
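A sketch of that kind of output-parity check in Python, with placeholder pipelines (in a real port, each function would invoke the old and new binaries on the same input):

```python
import hashlib

# Hypothetical stand-ins for the two pipelines; both must produce
# byte-for-byte identical output for every input in the corpus.
def old_pipeline(source: bytes) -> bytes:
    return source[::-1]

def new_pipeline(source: bytes) -> bytes:
    return source[::-1]

def parity_report(corpus):
    """Return the inputs whose outputs differ byte-for-byte."""
    failures = []
    for source in corpus:
        a = hashlib.sha256(old_pipeline(source)).hexdigest()
        b = hashlib.sha256(new_pipeline(source)).hexdigest()
        if a != b:
            failures.append(source)
    return failures
```

Hashing keeps the comparison cheap even for large outputs; an empty report over a representative corpus is the signal that the old code can go.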
Really like this translation approach, and I had written about it just a couple of days back (more from a testing and validation context). To see folks take that approach to something complex is pretty amazing! https://balanarayan.com/2026/02/20/gen-ai-time-to-focus-on-l...
Works even better if you have a good test suite, which is surely the case here with Ladybird
> I used Claude Code and Codex for the translation. This was human-directed, not autonomous code generation. I decided what to port, in what order, and what the Rust code should look like. It was hundreds of small prompts, steering the agents where things needed to go. After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns.

> The requirement from the start was byte-for-byte identical output from both pipelines. The result was about 25,000 lines of Rust, and the entire port took about two weeks. The same work would have taken me multiple months to do by hand. We've verified that every AST produced by the Rust parser is identical to the C++ one, and all bytecode generated by the Rust compiler is identical to the C++ compiler's output. Zero regressions across the board.
This is the way. Coding assistants are also really great at porting from one language to the other, especially if you have existing tests.
> Coding assistants are also really great at porting from one language to the other
I had a broken, one-off Perl script, a relic from the days when everyone thought Drupal was the future (long time ago). It was originally designed to migrate a site from an unmaintained internal CMS to Drupal. The CMS was ancient and it only ran in a VM for "look what we built a million years ago" purposes (I even had written permission from my ex-employer to keep that thing).
Just for a laugh, I fed this mess of undeclared dependencies and missing logic into Claude and told it to port the whole thing to Rust. It spent 80 minutes researching Drupal and coding, then "one-shotted" a functional import tool. Not only did it mirror the original design and module structure, but it also implemented several custom plugins based on hints it found in my old code comments.
It burned through a mountain of tokens, but 10/10 - would generate tens of thousands of lines of useless code again.
The Epilogue: That site has since been ported to WordPress, then ProcessWire, then rebuilt as a Node.js app. Word on the street is that some poor souls are currently trying to port it to Next.js.
> 10/10 - would generate tens of thousands of lines of useless code again.
Me too! A couple days ago I gave Claude the JMAP spec and asked it to write a JMAP-based webmail client in Rust from scratch. And it did! It burned a mountain of tokens, and it's got more than a few bugs. But now I've got my very own email client, powered by the Stalwart email server. The Rust code compiles into a 2MB wasm bundle that does everything client-side. It's somehow insanely fast. Honestly, it's the fastest email client I've ever used by far. Everything feels instant.
I don't need my own email client, but I have one now. So unnecessary, and yet strangely fun.
It's quite a testament to JMAP that you can feed the RFC into Claude and get a janky client out. I wonder what semi-useless junk I should get it to make next? I bet it wouldn't do as good a job with IMAP, but maybe if I let it use an IMAP library someone's already made? Might be worth a try!
> It burned through a mountain of tokens, but 10/10 - would generate tens of thousands of lines of useless code again.
This is the biggest bottleneck at this point. I'm looking forward to RAM production increasing, and getting to a point where every high-end PC (workstation & gaming) has a dedicated NPU next to the GPU. You'll be able to do this kind of stuff as much as you want, using any local model you want. Run a ralph loop continuously for 72 hours? No problem.
> a relic from the days when everyone thought Drupal was the future (long time ago).
Drupal is the future. I never really used it properly, but if you fully buy into Drupal, it can do most everything without programming, and you can write plugins (extensions? whatever they're called...) to do the few things that do need programming.
> The Epilogue: That site has since been ported to WordPress, then ProcessWire, then rebuilt as a Node.js app. Word on the street is that some poor souls are currently trying to port it to Next.js.
This is the problem! Fickle halfwits mindlessly buying into whatever "next big thing" is currently fashionable. They shoulda just learned Drupal...
There are plenty of SMEs trapped into that future. :)
> It burned through a mountain of tokens, but 10/10 - would generate tens of thousands of lines of useless code again.
Pardon me, and, yes, I know we're on HN, but I guess you're... rich? I imagine a single run like this probably burns through tens or hundreds of dollars. For a joke, basically.
I guess I understand why some people really like AI :-)
Agree, and it's also such a shame that none of the AI companies actually focus on that way of using AI.
All of them are moving in the direction of "less human involved and agents do more", while what I really want is better tooling to work more closely with AI, be better at reviewing/steering it, and be more involved. I don't want "fire one prompt and get somewhat working code"; I want a UX tailored for long sessions of back and forth, letting me leverage my skills, rather than agents trying to emulate what I can already do myself.
It was said a long time ago about computing in general, but more fitting than ever, "Augmenting the human intellect" is what we should aim for, not replacing the human intellect. IA ("Intelligence amplification") rather than AI.
But I'm guessing the target market for such tools would be much smaller, basically would require you to already understand software development, and know what you want, while all AI companies seem to target non-developers wanting to build software now. It's no-code all over again essentially.
Is it any surprise that the cocaine cartels really want you to buy more cocaine, so they don't focus on its usefulness in pain relief and they refine it and cut it with the cheapest substances that will work rather than medical-grade reagents?
Same thing.
Of course there are tools focusing on this. It takes a little getting used to how prevalent it is. My editor now can anticipate the next three lines of code I intend to write complete with what values I want to feed to the function I was about to invoke. It all shows up in an autocomplete annotation for me. I just type the first two or three characters and press tab to get everything exactly how I was about to type it in--including an accurate comment worded exactly in my voice.
Is that what you mean by IA?
For example, I type "for" and my editor guesses I want to iterate over the list that is the second argument of the function for which I am currently building the body. So it offers to complete the rest of the loop condition for me. Not only did it anticipate that I am writing a for loop. It figures out what I want to iterate over, and perhaps even that I want to enumerate the iteration so I have the index and the value. Imagine if I had written a comment to explain my intent for the function before I started writing the function body. How much better could it augment my intellect?
>Agree, and it's also such a shame that none of the AI companies actually focus on that way of using AI.
This is because, regardless of the current state of things, the endgame which will justify all the upfront investment is autonomous, self-improving, self-maintaining systems.
I think it was Steve Jobs who said computers should be like a bicycle for the mind, I tend to agree
"All of them are moving into the direction of "less human involved and agents do more", while what I really want is better tooling for me to work closer with AI and be better at reviewing/steering it, and be more involved."
I want less ambitious LLM powered tools than what's being offered. For example, I'd love a tool that can analyse whether comments have been kept up to date with the code they refer to. I don't want it to change anything I just want it to tell me of any problems. A linter basically. I imagine LLMs would be a good foundation for this.
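As a rough, LLM-free illustration of that linter idea, here's a Python sketch that flags backticked names in a docstring which no longer exist anywhere in the function; the `scale` example and the heuristic itself are made up for demonstration:

```python
import ast
import re

# Example input: the docstring mentions `ratio`, but the parameter
# has since been renamed to `factor` - classic comment drift.
SOURCE = '''
def scale(values, factor):
    """Multiply every element of `values` by `ratio`."""
    return [v * factor for v in values]
'''

def stale_docstring_refs(source: str):
    """Report (function, name) pairs where a docstring references a
    backticked identifier that appears nowhere in the function."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            doc = ast.get_docstring(node) or ""
            known = {a.arg for a in node.args.args}
            known |= {n.id for n in ast.walk(node) if isinstance(n, ast.Name)}
            for ref in re.findall(r"`(\w+)`", doc):
                if ref not in known:
                    findings.append((node.name, ref))
    return findings
```

A real version of the tool would hand ambiguous cases (prose descriptions rather than identifiers) to an LLM, but the report-only, change-nothing shape is the point.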
> Agree, and it's also such a shame that none of the AI companies actually focus on that way of using AI.
Their valuations are predicated on getting rid of you entirely, along with everyone else.
The "humans can use it to increase their productivity" part is an interim step.
I am learning Rust myself, and one of the things I definitely didn't want to do was let Claude write all the code. But I needed guidance.
I decided to create a Claude skill called "teach". When I enable it, Claude never writes any code. It just gives me hints - progressively more detailed if I am stuck. Then it reviews what I write.
I am finding it very satisfying to work this way - Rust in particular is a language where there's little space to "wing it". Most language features are interlaced with each other and having an LLM supporting me helps a lot. "Let's not declare a type for this right now, we would have to deal with several lifetime issues, let's add a note to the plan and revisit this later".
FYI: Claude has output styles, one of them is called `learning`. Instead of writing the code itself, it will add `TODO(human)` and comments to explain how to. Also adds `Insights` explaining concepts to you in its output.
This link also has a comparison to Skills further down.
https://code.claude.com/docs/en/output-styles#built-in-outpu...
I had a bash spaghetti-code script that I wrote a few years ago to handle TLS certificates (generate CSRs, bundle up trust chains, match keys to certs, etc). It was fragile, slow, extremely dependent on specific versions of OpenSSL, etc.
I used Claude to rewrite it in golang and extend its features. Now I have tests, automatic AIA chain walking, support for all the DER and JKS formats, and it’s fast. My bash script could spend a few minutes churning through a folder with certs and keys, my golang version does a few thousand in a second.
So I basically built a limited version of OpenSSL with better ergonomics and a lot of magic under the hood because you don’t have to specify input formats at all. I wasn’t constrained by things like backwards compatibility and interface stability, which let me make something much nicer to use.
I even was able to build a wasm version so it can run in the browser. All this from someone that is not a great coder. Don’t worry, I’m explicitly not rolling my own crypto.
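That "no input formats" ergonomics can be approximated with simple content sniffing; a hedged sketch (a real tool would hand the blob off to a proper parser after guessing):

```python
def detect_cert_format(data: bytes) -> str:
    """Guess the encoding of a certificate blob by inspecting its bytes,
    so the caller never has to specify -inform pem/der style flags."""
    # PEM is base64 wrapped in "-----BEGIN ...-----" armor lines.
    if b"-----BEGIN" in data:
        return "pem"
    # DER is binary ASN.1; an X.509 certificate starts with a
    # SEQUENCE tag, which is the byte 0x30.
    if data[:1] == b"\x30":
        return "der"
    return "unknown"
```

The heuristic is crude but covers the overwhelmingly common cases, which is exactly the kind of convenience a from-scratch tool can offer when it isn't bound by OpenSSL's interface compatibility.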
This is also how some of us use Claude, despite what the haters say. You don't just go “build thing”; you architect, review, refine, test and build.
It's how most of us are actually going to end up using AI agents for the foreseeable future, perhaps with increasing degrees of abstraction as we move to a teams-of-agents model.
The industry hasn't come up with a simple meme-format term to explain this workflow pattern yet, so people aren't excited about it. But don't worry, we'll surely have a bullshit term for it soon, and managers everywhere will be excited. In the meantime, we can just continue doing work with these new tools.
> how some of us
Operative word being “some”. The issue is that too many aren’t doing it that way.
> You dont just go “build thing”
Tell that to the overwhelming majority of posters discussing vibe coding, including on HN.
> despite what the haters say
Thinking people who disagree with you hate you or hate the thing you like is a recipe for disaster. It's much better to not love or hate things like this, and instead just observe and come to useful, outcome-based conclusions.
We keep seeing this pattern over and over as well. Despite LLM companies' almost tangible desperation to show that they can replace software engineers, the real value comes from domain experts using the tools to enhance what they're already good at.
I'd guess this is a bet on which market is more lucrative:
* (A) domain experts paying for tooling that will enhance their productivity
* (B) the capital/management class hoping to significantly replace domain experts
Software devs have been a famously tough market to sell tools to for a long time, so the better bet is B. Plus, the story on B is fantastic for fundraising; if there's a 10% chance that it checks out, you want some part of that in your capital portfolio.
I had a script in another language. It was node, took up >200MB of RAM that I wanted back. "claude, rewrite this in rust". 192MB of memory returned to me.
Solving the big RAM shortage one prompt at a time.
This is sad to see. Node was originally one of the memory-efficient options – its roots are in solving the c10k problem. Mind sharing what libraries/frameworks you were using?
I used to have a bunch of bespoke node express server utilities that I liked to keep running in the background to have access to throughout the day but 40-50mb per process adds up quickly.
I’ve been throwing codex at them and now they’ve all been rewritten in Go - cut down to about 10mb per process.
I haven’t done a ton of porting. And when I did, it was more like a reimplementation.
> We’ve verified that every AST produced by the Rust parser is identical to the C++ one, and all bytecode generated by the Rust compiler is identical to the C++ compiler’s output.
Is this a conventional goal? It seems like quite an achievement.
My company helps companies do migrations using LLM agents and rigid validations, and it is not a surprising goal. Of course most projects are not as clean as a compiler is in terms of their inputs and outputs, but our pitch to customers is that we aim to do bug-for-bug compatible migrations.
Porting a project from PHP7 to PHP8, you'd want the exact same SQL statements to be sent to the server for your test suite, or at least be able to explain the differences. Porting AngularJS to Vue, you'd want the same backend requests, etc..
It’s a very good way of getting LLMs to work autonomously for a long time: give it a spec and a complete test suite, shut the door, and ask it to call you when all the tests pass.
This is the way. This exact workflow is my sweet spot.
In my coding agent std::slop, I've optimized for this workflow: https://github.com/hsaliak/std_slop/blob/main/docs/mail_mode... The basic idea is that you are the 'maintainer' and you get bisect-safe git patches that you review (or ask a code-reviewer skill or another agent to review). Any change re-rolls the whole stack. Git already supports such a flow, and I added it to the agent. A simple markdown skill does not work because it 'forgets'. A GitHub-based PR flow felt too externally dependent. This workflow is enforced by a 'patcher' skill, and once that's active, tools do not work unless they follow the enforced flow.
I think a lot of people are going to feel comfortable using agents this way rather than going full blast. I do all my development this way.
This is broadly how I worked when I was still using chat instead of cli agents for LLM support. The downside, I feel, is that unless this is a codebase / language / architecture I do not know, it feels faster to just code by hand with the AI as a reviewer rather than a writer.
Your patch queue approach is very clever. Solves a huge tech-debt problem with LLM code gen. Should work with Jujutsu too, probably.
Would be curious to see more about how you save tokens with lua too.
Do you blog?
I am having immense success with the latest models developing a personal project that I open sourced and then got burned out on. I can't write by hand anymore, but I do enjoy writing prompts with my voice. I have been shipping the best code the project has ever seen. The revolution is real.
Coding assistants are great at pattern matching and pattern following. This is why it’s a good idea to point them at any examples or demos that come with the libraries you want to use, too.
> This was human-directed, not autonomous code generation.
All my vibe coded projects are human directed, unless explicitly stated otherwise
Quite good. I ported my codebase from Go to Rust in a fraction of the time it would have taken me to rewrite it.
> Coding assistants are also really great at porting from one language to the other
No, they are quite terrible at doing that.
They may (I guess?) produce code that compiles, but they will almost certainly not produce the appropriate combination of idioms and custom abstractions that make the code "at home" in the target language.
PS - Please fix your blockquote... HN ignores single linebreaks, so you have to either use pairs of them, or possibly go with italicization of the quoted text.
How does he solve the Fruit of the Poison Tree problem? For all he knows, his LLMs included a bunch of copyrighted or patented code throughout the codebase. How is he going to convince serious people that this port is not just a transformation of an _asset_ into a _liability_?
And you might say that this is a hypothetical problem, one that is not practically occurring. Well, we had a similar problem in the recent past, one that LLMs are close to _making actual_. When it comes to software patents, they were considered a _hypothetical_ problem (i.e. nobody was going to bother suing you unless you were so big that violating a patent was a near certainty). We were instructed (at pretty much all jobs) to never read patents, so that we could not incriminate ourselves in the discovery process.
That is going to change soon (within a year). I have a friend, whom I won't name, who is working on a project, using LLMs, to discover whether software (open source and proprietary) is likely to be violating a software patent from a patent database. And it is designed to be used not by programmers, but by law firms, patent attorneys, etc. Even though it is not marketed this way, it is essentially a target-acquisition system for use by patent trolls. It is hard for me to tell if this means that we will have to keep ignoring patents for that plausible deniability, or if this means that we will have to become hyper-informed about all patents. I suppose we can just subscribe to the patent-agent and hope that it guides the other coding agents into avoiding the insertion of potentially infringing code.
(I also have a friend who built a system in 2020 that could translate between C++ and Python, and guarantee equivalent results, and code that looks human-written. This was a very impressive achievement, especially because of how it guarantees the equivalence (it did not require machine-learning nor GPUs, just CPUs and some classic algorithms from the 80s). The friend informs me that they are very disheartened to see that now any toddler with a credit card can mindlessly do something similar, invalidating around a decade of unpublished research. They tell me that it will remain unpublished, and if they could go back in time, they would spend that decade extracting as much surplus from society as possible, by hook or by crook (apparently they had the means and the opportunity, but lacked the motive); we should all learn from my friend's mistake. The only people who succeed are, sadly, perversely, those who brazenly and shamelessly steal -- and make no mistake, the AI companies are built on theft. When millionaires do it, they become billionaires -- when Aaron Swartz does it, he is sentenced to federal prison. I'm not quite a pessimist yet, but it really is saddening to watch my friend go from a passionate optimist to a cold nihilist.).
One or both of you have the story very wrong.
If there was value (the guarantees) to this tech he buried a bunch of time in, he should be wrapping a natural language prompt around it and selling it.
Not even the top providers are giving any sort of tangible safety or reliability guarantees in the enterprise…
I'm a long-time Rust fan and have no idea how to respond. I think I need a lot more info about this migration, especially since Ladybird devs have been very vocal about being "anti-rust" (I guess more anti-hype, where Rust was the hype).
I don't know if it's a good fit. Not because they're writing a browser engine in Rust (good), but because Ladybird praises CPP/Swift currently and have no idea what the contributor's stance is.
At least contributing will be a lot nicer from my end, because my PR's to Ladybird have been bad due to having no CPP experience. I had no idea what I was doing.
> I guess more anti-hype, where Rust was the hype
Yeah, that is the thing I struggle with. I am really happy for people falling in love with Rust. It is an amazing language when used for the right use case.
The problem is that I had my Rust adventures a few years ago, and I am over the hype cycle and able to see both the advantages and disadvantages. Plus, being generally older and hopefully wiser, I don't tie my identity to any specific programming language that much.
So sometimes when some Junior dev discovers Rust and they get really obnoxious with their evangelicalism it can be very off putting. Really not sure how to solve it. It is good when people get excited about a language. It just can be very annoying for everyone else sometimes.
> So sometimes when some Junior dev discovers Rust and they get really obnoxious with their evangelicalism it can be very off putting. Really not sure how to solve it. It is good when people get excited about a language. It just can be very annoying for everyone else sometimes.
This rings very true, and I've actually disadvantaged myself somewhat here. I was involved in projects that made very dubious decisions to rewrite large systems in Rust. This caused me to actively stay away from the language, and stick to C++, investing lots of time in overcoming its shortcomings.
Now years later, I started with Rust in a new project. And I must say, I like the language, I really like the tools, and I like the ecosystem. On some dimension I wish I would have done this sooner (but on the other hand, I think I have a better justification of "why Rust" now).
I find the attitude of the Ladybird devs refreshing though, and it kinda aligns with my opinions about Rust.
I never fell in love with Rust or got particularly excited about adopting it. But, I just don't see a serious alternative (maybe Swift is fine for some cases but not in my field).
I believe Google's Rust journey was even more closely aligned with Ladybird: "we want memory safety, but with low impedance mismatch from C++". After like 5 years of trying to figure something like that out they seemed to go "OK actually fuck that we just have to use Rust and deal with the challenges it brings for a C++ shop".
The whole obnoxious dogmatic evangelicalism thing is definitely a wider human phenomenon outside software and junior devs picking up new languages.
Definitely isn’t one of those things that can be solved, but it’s helpful to be aware of and process on that basis. I think some personalities are likely disproportionately vulnerable to this behaviour, but I think it largely has a positive core of enthusiasm. It’s probably more a matter of those individuals growing in self awareness.
Perhaps we saw a big wave of that with Rust because it meant a lot of things to a lot of different people, some more equipped to express their enthusiasm with some self-control than others.
I'm contemplating diving into Rust for a smallish project, a daemon with super-basic UI intended for Linux, MacOS and Windows. Do you mind expanding on what disadvantages you encountered? Or use-cases that aren't appropriate for Rust?
It’s a pretty good language and ecosystem. The downside was always the community, where every ten seconds someone will start asking to tax everyone to fund the Rust Software Foundation or constantly argue that you have to donate a percentage of income to it. Now with LLMs I don’t have to talk to the community. Huge improvement.
The problem with the community is it has experts and groupies mixed in. Ideally experts can talk somewhere and groupies can go somewhere else and talk about funding the RSF etc., but now that's unnecessary. An expert is available on demand via chatbot.
> So sometimes when some Junior dev discovers Rust and they get really obnoxious with their evangelicalism it can be very off putting.
And experience doesn't equal correct decision making. People just get traumatized in different ways.
> Ladybird praises CPP/Swift currently
Not anymore.
https://news.ycombinator.com/item?id=47067678
I guess I missed this, thanks!
They are moving fast.
Next month it will be yet-another-language.
Eventually they come full circle and settle for either C or C++.
I'd argue Ladybird itself is a "hype" project.
Anything trying to break the browser monopolies in a meaningful way deserves the hype, IMO.
Fair point. What does Ladybird need to achieve in your opinion to shake the "hype" label? Honestly, I, myself, don't have a good answer!
It's possible to dislike Rust but pragmatically use it. Personally, I do not like Rust, but it is the best available choice for some work and personal stuff.
I think this is a good, realistic point of view.
Personally I think most programming languages have really... huge problems. And the languages that are more fun to use, Ruby or Python, are slow. I wonder if we could have a great, effective, elegant language that is also fast. All that try seem to end up as, e.g., a C++-like language.
Definitely, that's how I have felt about and used C++ for most of my career (well, except I don't select C++ for personal projects).
I wouldn't go as far as to say I don't like Rust, but it doesn't come naturally to me like many other languages do after several decades of experience.
So what don't you like about it?
I am somewhat concerned about the volatility. All three languages have their merits and each has a stable foundation that has been developed and established over many years. The fact that the programming language has been “changed” within a short period of time, or rather that the direction has been altered, does not inspire confidence in the overall continuity of Ladybird's design decisions.
Ladybird as a project is not that old, and it's still in pre-alpha, if they are going to make important changes then it's better now than later.
> I am somewhat concerned about the volatility.
Not just volatility but also flip-flopping. Rust was explicitly a contender when they decided to go with Swift 18 months ago, and they've already done a 180 on it despite the language being more or less the same as it was.
There's been some fun volatility with the author over the years. I told him once that he might want to consider another language to which he replied slightly insultingly. Then he tried to write another language. Then he tried to switch from C++ to Swift, and now to Rust :P
> I think I need a lot more info about this migration
Doesn't sound like it's some Fish-style, full migration to Rust of everything. Seems like they are just moving a couple parts over for evaluation, and then, going forward, making it an official project language that folks are free to use. They note that basically every browser already does that, so this isn't a huge shakeup.
This makes sense because GUI wise Rust isn't really here yet (but it's close).
TFA mentions "the contributor's" stance on Swift.
But not the stance on Rust, which is something I'm wondering. I understand there's a core team assigned, but are the ~200 contributors okay with this migration?
They abandoned Swift recently.
The public announcement was less than a week ago. Meanwhile, in TFA:
> ... the entire port took about two weeks.
So he was ~halfway in when he made the Swift announcement.
It's very odd that someone with no experience would take a big project like this and just jump to another language because he trusts the AI-generated code of current models.
If it works, it works, I guess, but it seems mad to me on the surface.
Why do you think the creator behind SerenityOS has no experience? I mean it’s not the most popular OS out there but he seems like a capable individual.
Did you read the OP? No trust, only thorough verification.
> especially since Ladybird devs have been very vocal about being "anti-rust" (I guess more anti-hype, where Rust was the hype).
I mean, they seem mostly to be against anything that isn't C++'s peculiar brand of Object Oriented Programming?
(also against women and immigrants, but that's a different story)
Looks like Andreas is a mighty fine engineer, but he's an even better entrepreneur. It doesn't matter whether it's intentional or not, but he managed to create and lead a rather visible passion project, attract many contributors, and use that project's momentum to detach Ladybird into a separate endeavor with much more concrete financial prospects.
The Jakt -> Swift -> Rust pivots look like the same thing on a different level. The initial change to Swift was surely motivated by potential industry support gains (I believe it was a dubious choice from a purely engineering standpoint).
It's awe-inspiring to see how a person can carve a job for himself, leverage hobbyists'/hackers' interest and contributions, attract industry attention and sponsors all while doing the thing he likes (assuming, browsers are his thing) in a controlling position.
Can't fully rationalize the feeling, but all of this makes me slightly wary. Doesn't make it less cool to observe from a side, though.
Andreas is not some kind of hustler. He spent years writing an entire OS (Serenity OS) before the web browser part happened to gain traction. If you were just trying to be an entrepreneur, why do that?
The truth is more simple: he's a good engineer and leader, people recognised that and offered him sponsorships, and the project took off by itself.
I sincerely hope it's just me having trust issues.
Eh, he's given an interview where he talks about the Swift decision. He and several maintainers tried building some features in Swift, Rust, and C++, spending about two weeks on each one IIRC. And all the maintainers liked the experience of Swift better. That might have ended up wrong, but it's a pretty reasonable way to make a decision.
Two weeks with Rust and you're still fighting with the compiler. I think the LLM pulled a lot of weight selling the language; it can help smooth over the tricky bits.
3 replies →
Yeah, main issue with Swift is that the c++ interop (which was absolutely bleeding-edge) still isn't to the point of being able to pull in parts of the Ladybird codebase.
If I recall correctly, part of this was around classes they had that replaced parts of the STL, whereas the Swift C++ interop makes assumptions about things with certain standard names.
Yeah, this is glorified yak-shaving if we're being real. I'm not getting my hopes up for a true new browser
>assuming, browsers are his thing
IIRC he used to work on the Safari browser engine at Apple.
> but all of this makes me slightly wary.
Wary of what?
I'd say it's the idea/fact/feeling that, in 2026, agency matters more than skill/wisdom/intelligence.
Long read on the topic (quite funny, covers Cluely): https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai...
1 reply →
This is less about languages and more about so-called AI. One thing’s for sure: it’s becoming harder and harder to deny that agentic coding is revolutionizing software development.
We’re at the point where a solid test suite and a high-quality agent can achieve impressive results in the hands of a competent coder. Yes, it will still screw up and needs careful human review and steering, but there is a tangible productivity improvement. I don’t think it makes sense to put numbers on it, but for many tasks the benefit is real.
This looks like guerrilla advertising for sure.
An LLM and a Rust rewrite together. And it does work, so hopefully they get more attention and build it out so I have an alternative browser to use.
> We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time.
Correct me if I’m wrong since I don’t know these two languages, but like some other languages, doing things the idiomatic way could be dramatically different. Is “cleanup” doing a lot of heavy lifting here? Could that also mean another complete rewrite from scratch?
A startup switching languages after years of development is usually a big red flag. “We are rewriting it in X” posts always preceded “We are shutting down”. I wish them luck though!
A mitigating factor in this case is that C++ and Rust are both multi-paradigm languages. You can quite reasonably represent most C++ patterns in Rust, even if it might not be quite how you'd write Rust in the first place.
I disagree. You can't even translate simple C++ inheritance examples directly, because Rust doesn't have data inheritance. So classical OOP is basically out the window.
That's the biggest difference from C++ and most mainstream languages: you simply can't do class-based OOP (which in my books is a good thing), and it pushes you towards traits and composition instead.
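For a concrete picture, here's a minimal sketch (all names invented for the example) of how a C++ base class carrying both data and virtual methods typically gets restructured in Rust as a trait plus composition:

```rust
// Hypothetical: a C++ `class Shape { std::string name; virtual double area(); }`
// has no direct Rust equivalent, since traits carry behavior but no fields.
// Shared data moves into a struct each type embeds; shared behavior into a trait.

struct ShapeData {
    name: String,
}

trait Shape {
    fn data(&self) -> &ShapeData;
    fn area(&self) -> f64;
    // A default method stands in for a non-virtual base-class method.
    fn describe(&self) -> String {
        format!("{}: {:.1}", self.data().name, self.area())
    }
}

struct Circle {
    data: ShapeData, // composition instead of data inheritance
    radius: f64,
}

impl Shape for Circle {
    fn data(&self) -> &ShapeData {
        &self.data
    }
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.radius * self.radius
    }
}

fn main() {
    // Trait objects give you the dynamic dispatch a C++ virtual call would.
    let shapes: Vec<Box<dyn Shape>> = vec![Box::new(Circle {
        data: ShapeData { name: "circle".into() },
        radius: 1.0,
    })];
    for s in &shapes {
        println!("{}", s.describe());
    }
}
```

It's more ceremony than a C++ class hierarchy, but the shape of the design survives the port.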
In addition, C++ and Rust are very, very similar languages. Almost everything in C++ translates easily, including low level stuff and template shenanigans. There's only a few "oh shit there's no analog" things, like template specialization or virtual inheritance.
Out of all the languages Rust takes inspiration from, I'd rank C++ at the top of the list.
2 replies →
This is the famous trap that Joel on Software talked about in a blog post a long time ago.
If you do a rewrite you essentially put everything else on halt while rewriting.
If you keep doing feature dev on the old while another "tiger team" is doing the rewrite port then these two teams are essentially in a race against each other and the port will likely never catch up. (Depending on relative velocities)
Maybe they think that they can do this with LLM-assisted tools in a big-bang approach quickly and then continue from there without spending too much time on it.
I’ve been part of at least 2 successful rewrites. I think that Joel’s post is too often taken as gospel. Sometimes a rewrite is the best way forward.
Moving Ladybird from C++ to a safer more modern language is a real differentiator vs other browsers, and will probably pay dividends. Doing it now is better than doing it once ladybird is fully established.
One last point about rewrites: you can look at any industry disruptor as essentially a team that did a from-scratch rewrite of their competitors and won because the rewrite was better.
39 replies →
The context matters when we talk about Joel's article[0].
It's about Netscape. At the time, Netscape dominated the browser market. It was the leader, which means it had all the market share to lose. You can bet Microsoft's decision makers were very closely monitoring what those at Netscape were doing.
Today, practically nobody uses Ladybird. No one even knows of it[1]. It's so far behind that it has nothing to lose. If you really want to rewrite, it's better to do it when you have nothing to lose.
[0]: https://www.joelonsoftware.com/2000/04/06/things-you-should-...
[1]: to quote Joel, "no one" means less than one million people.
Nearly 26 years ago! https://www.joelonsoftware.com/2000/04/06/things-you-should-...
What's different today really is the LLMs and coding agents. The reason to never rewrite in another language is that it requires you to stop everything else for months or even years. Stopping for two weeks is a lot less likely to kill your project.
3 replies →
> then these two teams are essentially in a race against each other and the port will likely never catch up
Ladybird appears to have the discipline to have recognized this: “[Rust] is not becoming the main focus of the project. We will continue developing the engine in C++, and porting subsystems to Rust will be a sidetrack that runs for a long time.”
1 reply →
> A startup switching languages after years of development is usually a big red flag.
Startups are not a good comparison here. They have a different relationship with code than software projects.
Linux has rewritten entire stacks over and over again.
The PHP engine was rewritten completely at least twice.
The musl libc had entire components rewritten basically from scratch and later integrated.
Exactly my thought! I guess I'll keep Firefox for the foreseeable future...
Firefox is already spying on you with a lot of telemetry, and they have recently amended their terms of use to remove the obligation to "never sell your data" [1]. So perhaps you should reconsider that statement.
[1] : https://news.ycombinator.com/item?id=43213612
1 reply →
Spending weeks porting (presumably) working code with an LLM is a bit strange
that's only the mechanical translation too
the hard bit (borrow checker) has still to be done...
Twitter is the canonical startup rewrite. It worked.
A lot of the previous calculus around refactoring and "rewrite the whole thing in a new language" is out the window now that AI is ubiquitous. Especially in situations where there is an extensive test suite.
Testing has become 10x more important than it ever was.
For a personal thing I had AI write some Python libraries to power a CLI. It has to do with simple Excel file filtering, grouping, and aggregating. Nothing too fancy. However, since it's backed by a library, I am playing with different UIs for the same thing, and it's fun to say... do it with Streamlit. Oh, it can't do this particular thing. Fine, do it with Shiny. No? OK, Dash. It takes only like an hour to prototype with a whole new UI library, then I get to say "nah" like a spoiled child. :)
Well, I am on the provocative side that as AI tooling matures current programming languages will slowly become irrelevant.
I am already using low code tooling with agents for some projects, in iPaaS products.
> Well, I am on the provocative side that as AI tooling matures current programming languages will slowly become irrelevant.
I have the opposite opinion. As LLMs become ubiquitous and code generation becomes cheap, the choice of language becomes more important.
The problem with LLMs, for me, is that it is now possible to write anything using only assembly. While technically possible, who can possibly read and understand the mountain of code that gets generated?
I use LLM at work in Python. It can, and will, easily use hacks upon hacks to get around things.
Thus I maintain that as code generation becomes cheap, it is more important to constrain that code generation.
All of this assume that you care even a tiny bit about what is happening in your code. If you don't, I suppose you can keep banging the LLM to fix that binary blob for you.
3 replies →
There is the notion that a lot of programming language preferences are based on people using them. As soon as it's LLMs using them, a lot of what motivates those choices becomes less valid.
I've been doing a few projects that are definitely outside my comfort zone with LLMs and its fine. I can read the code but I just don't have the muscle memory to produce it.
I don't agree. For one thing, the language directly impacts things like iteration speed, runtime performance, and portability. For another, there's a trade-off between "verbose, eats context" and "implicit, hard to reason about".
IMO Rust will strike a very strong balance here for LLMs.
9 replies →
I'm already using models to reason about and summarize parts of the code, going from programming language to prose. They are good at that. I can see the process being something like English to machine language, and machine language back to English when the human needs to understand. However, another truism is that compilers are a great guardrail against bad generated code; more deterministic guardrails are good for LLMs. So no, I'm not yet at the point where I trust binaries to the statistical text generators.
I would say that current programming languages have a better chance due to the huge amount of code that AI can train on. New languages do not have that leverage. Moreover, current languages have large ecosystems that still matter.
I see the opposite. New languages have more difficulty breaking into popularity due to the lack of existing code and ecosystems.
Interesting take, what do you think comes next? A programming language optimized for coding agents?
1 reply →
> After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns.
I feel like you just know it’s doomed. What this is saying is “I didn’t want to and cannot review the code it generated.” Asking models to find mistakes never works for me. It’ll find obvious patterns and a tendency towards security mistakes, but not deep logical errors.
Somehow they did use this as part of their approach to get to 0 regressions across 65k tests, no performance regressions, and identical output for AST and bytecode. How much manual review was part of the hundreds of rounds of prompt steering is not stated, but I don't think it's possible to say it couldn't find any deep logical errors along the way and still achieve those results.
The part that concerns me is whether this part will actually come in time or not:
> The Rust code intentionally mimics things like the C++ register allocation patterns so that the two compilers produce identical bytecode. Correctness is a close second. We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time.
Of course, it wouldn't be the first time Andreas delivered more than I expected :).
That’s convincing and impressive, but I wouldn’t say it proves it can spot deep errors. If it’s incredible at porting files and comparing against the source of truth then finding complicated issues isn’t being tested imo.
1 reply →
Your argument is just as applicable to human code reviewers. Obviously, having others review the code will catch issues you would never have thought of. This includes agents as well.
They’re not equal. Humans are capable of actually understanding and looking ahead at consequences of decisions made, whereas an LLM can’t. One is a review, one is mimicking the result of a hypothetical review without any of the actual reasoning. (And prompting itself in a loop is not real reasoning)
3 replies →
With humans though, I wouldn't have to review 20k lines of code at once.
2 replies →
>Your argument is just as applicable to human code reviewers.
The tests many of us use for how capable a model or harness is are usually based on whether they can spot logical errors readily visible to humans.
Hence: https://news.ycombinator.com/item?id=47031580
That is what the testing suite is there to check, no?
No. Testing generally can only falsify, not verify. It’s complementary to code review, not a substitute for it.
You mean the testing suite generated by AI?
3 replies →
Yeah, I lost all interest in the Ladybird project now that it is AI slop.
No one wants to work with this generated, ugly, unidiomatic ball of Rust, other than other people using AI. So your dependency on AI grows and grows. It is a vicious trap.
> This is not becoming the main focus of the project. We will continue developing the engine in C++, and porting subsystems to Rust will be a sidetrack that runs for a long time.
I don't like this bit. Wouldn't it be better to decide on a memory-safe language and then commit to it by writing all new code in Rust, or whatever? This looks like doing double the work.
It doesn't have to all-or-nothing. Firefox has been a mixed C++ and Rust codebase for years now. It isn't like the code is written twice. The C++ components are written in C++, and the Rust components are written in Rust.
I suspect that'll also be what happens here. And if the use of Rust is successful, then over time more components may switch over to Rust. But each component will only ever be in one language at a time.
You can't compare the choices made to evolve a >20-year-old codebase with a brand-new one. Firefox also has Rust support for XPCOM components, so you can use and write them in Rust without manual FFI (this comes with some baggage, of course).
The Ladybird devs painted themselves in a corner when choosing C++ for a new web browser, with many anti-Rust folks claiming that "modern C++ was safe". Well...
3 replies →
Firefox was special in that Mozilla created Rust to build Servo and then backported parts of Servo to Firefox and ultimately stopped building Servo.
Thankfully Servo has picked up speed again and if one wants a Rust based browser engine what better choice than the one the language was built to enable?
https://servo.org/
1 reply →
One could do that but then they'd lose all momentum and the project would never get finished.
> Wouldn't it be better to decide on a memory-safe language,
it is totally possible to use some strict subset of C++, which will be memory safe.
Only in theory. In practice it never happens like that. I mean, you think Google wouldn't use that for Chrome if they could?
Ladybird already does that
> We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time.
I wonder what kind of tech debt this brings and if the trade off will be worth whatever problems they were having with C++.
The tech debt risk in this case is mostly in the cleanup phase, not the port itself. Non-idiomatic Rust that came from C++ tends to have a lot of raw pointer patterns and manual lifetime management that works fine but hides implicit ownership assumptions. When you go to make it idiomatic, the borrow checker forces those assumptions to be explicit, and sometimes you discover the original structure doesn't compose well with Rust's aliasing rules. Servo went through this. The upside is you catch real latent bugs in the process.
It depends. I migrated a 20k LOC C++ project to Rust via AI recently, and I would say it did so pretty well. There is no unsafe or raw pointer usage. It did add Rc<RefCell<>> in a bunch of places to make things happy, but that ultimately caught some real bugs in the original code. Refactoring it to avoid shared memory (and the need for Rc<RefCell<>>) wasn't very difficult, but keeping the code structure identical at first allowed us to continue to work on the C++ code while the Rust port was ongoing, and to keep the Rust port aligned without needing to implement features twice.
I would say modern c++ written by someone already familiar with rust will probably be structured in a way that's extremely easy to port because you end up modeling the borrow checker in your brain.
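As a minimal sketch of the Rc<RefCell<>> pattern described above (the names here are hypothetical, not from any actual port):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A C++ object graph where two owners share a mutable `Config`
// (e.g. via shared_ptr or raw pointers) ports mechanically as
// Rc<RefCell<T>>: Rc gives shared ownership, RefCell moves the
// exclusive-access check to runtime.
struct Config {
    verbose: bool,
}

fn main() {
    let shared = Rc::new(RefCell::new(Config { verbose: false }));

    // Both "owners" hold a clone of the Rc, mirroring two C++ pointers
    // to the same object.
    let ui_handle = Rc::clone(&shared);
    let engine_handle = Rc::clone(&shared);

    ui_handle.borrow_mut().verbose = true; // mutate through one handle
    assert!(engine_handle.borrow().verbose); // visible through the other

    // Unlike C++, overlapping mutable access is caught (as a panic)
    // rather than being silent undefined behavior:
    // let a = shared.borrow_mut();
    // let b = shared.borrow_mut(); // would panic: already mutably borrowed
}
```

It's not idiomatic Rust, but it compiles, it's memory safe, and it preserves the original structure until you're ready to refactor the sharing away.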
1 reply →
Yes, I just translated a Rust library from non-idiomatic and unsafe Rust to idiomatic and safe Rust and it was as much work as if I had rewritten it from scratch.
2 replies →
I don't think they were having problems with C++, they moved to Rust for memory safety. Mind that they migrated LibJS, their JavaScript library.
Andreas Kling mentioned many times they would prefer a safer language, specifically for their js runtime garbage collector. But since the team were already comfortable with cpp that was the choice, but they were open and active seeking alternatives.
The problem was strictly how C++ is perceived as an unsafe language, and that problem Rust does solve! Not being sarcastic; this truly looks like a mature take. Like: we don't know if moving to Rust would improve quality or prevent vulnerabilities, so here's our best effort to find out, setting aside for now whether the claim has merit. If the claim holds, you're better prepared; if it doesn't, but the code holds similar qualities... what is the downside?
From their post on Twitter in 2024 when they adopted Swift, with a comment on Rust.
My general thoughts on Rust:
- Excellent for short-lived programs that transform input A to output B
- Clunky for long-lived programs that maintain large complex object graphs
- Really impressive ecosystem
- Toxic community
https://xcancel.com/awesomekling/status/1822241531501162806
Mayhaps he had a Damascene conversion? Not that I ever understood the need to change from C++ in the first place though.
Considering David Tolnay's indefensible treatment of JeanHeyd Meneide, I'm inclined to agree with Kling on the toxicity of the Rust community. Evangelical fervor does not excuse douchebaggery.
Most likely some big sponsor requires them to turn to AI slop.
> We previously explored Swift, but the C++ interop never quite got there, and platform support outside the Apple ecosystem was limited.
Why was there ever any expectation for Swift having good platform support outside Apple? This should have been (and was to me) already obvious when they originally announced moving to Swift.
Apple’s own marketing speak has Swift as a cross platform language. Just like, I suppose, C# is a cross platform language.
Apple puts zero resources into making that claim reality, however.
Apple actually did put some resources behind it, the toolchain is reasonably pleasant to use outside macOS and Xcode, they have people building an ecosystem in the Swift Server Workgroup, and arguably some recent language design decisions don't seem to be purely motivated by desktop/mobile usage.
But in the end I can't help but feel Swift has become an absolute beast of a multi-paradigm language with even worse compile times than Rust or C++ for dubious ergonomics gains.
5 replies →
> Just like, I suppose, C#
Have you actually used .NET on Linux/macOS? I have (both at home and work) and there isn't anything that made me think it was neglected on those platforms. Everything just works™
4 replies →
> If you look at the code, you’ll notice it has a strong “translated from C++” vibe. That’s because it is translated from C++. The top priority for this first pass is compatibility with our C++ pipeline. The Rust code intentionally mimics things like the C++ register allocation patterns so that the two compilers produce identical bytecode. Correctness is a close second. We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline.
Does this still get you most of the memory-safety benefits of using Rust vs C++?
I think this largely depends on how much unsafe Rust they produced.
All the best to them; however, this feels like yak shaving instead of focusing on delivering a browser that can become an alternative to the Safari/Chrome duopoly.
Part of browser experience is safety and migrating their JS library to Rust is probably one of the best ways to gain advantage over any other existing engine out there in this aspect. Strategically this may and likely will attract 3rd party users of the JS library itself, thus helping its adoption and further improving it.
They're not porting the browser itself to Rust, for the record.
Yet, they are open to further rewrites.
2 replies →
JavaScript is a self-contained subsystem; if the public API stays the same, then they can rewrite as much as they want. I also suppose this engine will attract new contributors who will want to contribute to Ladybird just because they enjoy working with Rust.
Don't forget that the Rust ecosystem around browsers is growing, Firefox already uses it for their CSS engine[0], AFAIK Chrome JPEG XL implementation is written in Rust.
So I don't see how this could be seen as a negative move, I don't think sharing libraries in C++ is as easy as in Rust.
[0] https://github.com/servo/stylo
Not only is Firefox using it for their CSS engine; Mozilla created Rust to build Servo, and sadly the CSS engine and maybe some other parts are all they kept around when they offloaded Rust.
“the Rust ecosystem around browsers is growing” – in the beginning pretty much 100% of the ecosystem around Rust was browser oriented
Thankfully Servo is picking up speed again and is a great project to help support with some donations etc: https://servo.org/
Agreed. They said they ruled out rust in 2024, I believe the article they published was near the end of 2024 because I remember reading it fairly recently.
Seems like a lot of language switches in a short time frame. That'd make me super nervous working on such a project. There will be rough parts for every language and deciding seemingly on whims that 1 isn't good enough will burn a lot of time and resources.
Maybe it is my cynicism, but I always suspect such projects to be endless rabbit chasing. It is not about catching it.
think of it as axe sharpening rather than yak shaving
Cool, that seems like a rational choice. I hope this will help Ladybird and Servo benefit from each other in the long run, and will make both of them more likely to succeed
Definitely, would be great to see a Servo-based Ladybird.
I hope it does not, because we don't need more browser crossbreeding
Small browsers need to unite if they ever hope to become relevant.
Someone should try this with the “Ralph Wiggum loop” approach. I suspect it would fail spectacularly, but it would be fascinating to watch.
Personally, I can’t get meaningful results unless I use the tool in a true pair-programming mode, watching it reason, plan, and execute step by step. The ability to clearly articulate exactly what you want, and how you want it done, is becoming a rare skill.
Given the quality of their existing test suite I'm confident the Ralph Wiggum loop would produce a working implementation... but the code quality wouldn't be anywhere near what they got from two weeks of hands-on expert prompting.
Sure yeah, I can buy that, but that would be like collecting tech debt for generations.
Can you explain more? (I know the reference that he is the idiot son of Chief Wiggum from The Simpsons.)
https://ghuntley.com/loop/ and https://github.com/anthropics/claude-code/blob/main/plugins/...
2 replies →
Interestingly editorialized title omits “with help from AI”.
That’s probably just the classic HackerNews title shortening algorithm at work.
It wasn't. The submitter submitted it with the title “Ladybird Browser adopts Rust”. We initially changed it to “Ladybird adopts Rust”, and now I've changed it to the original title, per the guidelines. The automatic title cleaner wouldn't make a change like that.
I went to check if this was documented in the list of undocumented HN features on GitHub but it’s not.
There is an open PR (by simonw btw): https://github.com/minimaxir/hacker-news-undocumented/pull/4...
Woah, this is a wild claim. @dang: Is this a thing? I don't believe it. I, myself, have submitted many articles and never once did I see some auto-magical "title shortening algorithm" at work!
4 replies →
A LLM-assisted codebase migration is perhaps one of the better use cases for them, and interestingly the author advocates for a hands-on approach.
Adding the "with help from AI" almost always devolves the discussion from that to "developers must adopt AI or else!" on the one hand and "society is being destroyed by slop!" on the other, so as long as that's not happening I'm not complaining about the editorialized title.
I think we've come to the point where it should be the opposite for any new code, something along the lines of: "done without AI". Being an old fart working in software development, I have many friends working as very senior developers. Every single one of them, including yours truly, uses AI.
I use AI more and more. It goes like: create me classes A, B, C with such-and-such descriptive names; take this state machine / flowchart description to understand the flow; use this particular set of helpers declared in modules XYZ.
I then test the code, go over it, and look for any suboptimal code and other patterns I prefer not to have, asking it to change those.
After a couple of iterations the code usually shines. I also cross-check the final results against various LLMs, just in case.
Very happy to see this. Ladybird's engineering generally seems excellent, but the decision to use Swift always seemed pretty "out there". Rust makes a whole lot more sense.
Servo makes a whole lot more sense: https://servo.org/
Can you send a Gmail in Servo? No?
Ladybird is much further ahead in terms of actually rendering web pages that people use.
The biggest advantage to Servo was that it is written in Rust. This move begins to nullify that advantage as well.
Why exactly does Servo make more sense?
I hope they both succeed. But Ladybird is more likely to become a usable browser first.
1 reply →
> We previously explored Swift, but the C++ interop never quite got there
But Rust doesn't have C++ interop at all?
You can do it via the C ABI, and use opaque pointers to represent higher-level Rust/C++ concepts if you want to.
Firefox is a mixed C++ / Rust codebase with a relatively close coupling between Rust and C++ components in places (layout/dom/script are in C++ while style is in Rust, and a mix of WebRender (Rust) and Skia (C++) are used for rendering with C++ glue code)
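As a rough illustration of the opaque-pointer approach (everything here is invented for the example, and both sides live in one Rust file so it's self-contained; in a real build the functions would be `extern "C"` symbols implemented on the C++ side):

```rust
// The C++ side exports plain C functions over an opaque handle.
// Rust never sees the type's layout; only the pointer crosses the boundary.
struct Interpreter {
    accumulator: i64, // opaque to the other language
}

extern "C" fn interp_new() -> *mut Interpreter {
    Box::into_raw(Box::new(Interpreter { accumulator: 0 }))
}

extern "C" fn interp_add(p: *mut Interpreter, v: i64) {
    // Safety: caller guarantees `p` came from interp_new and wasn't freed.
    unsafe { (*p).accumulator += v }
}

extern "C" fn interp_result(p: *const Interpreter) -> i64 {
    unsafe { (*p).accumulator }
}

extern "C" fn interp_free(p: *mut Interpreter) {
    if !p.is_null() {
        unsafe { drop(Box::from_raw(p)) }
    }
}

fn main() {
    let interp = interp_new();
    interp_add(interp, 40);
    interp_add(interp, 2);
    assert_eq!(interp_result(interp), 42);
    interp_free(interp);
}
```

It's verbose and `unsafe` at the boundary, which is why projects usually wrap this layer in a safe Rust type that calls `interp_free` in its `Drop` impl.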
> You can do it via the C ABI, and use opaque pointers to represent higher-level Rust/C++ concepts
Yeah, but you can do the same in Swift
1 reply →
>But Rust doesn't have C++ interop at all?
It also doesn't have the disadvantages of Swift. Once the promise of Swift/C++ interop is gone there isn't enough left to recommend it.
I’m curious what issues people were running into with Swift’s built in C++ interop? I haven’t had the chance to use it myself, but it seemed reasonable to me at a surface level.
1 reply →
Yeah, that part doesn't make much sense to me. IMO, Swift has reasonably good C++ interop[1] and Swift's C interop has also significantly improved[2] since Swift 6.2.
[1]: https://www.swift.org/documentation/cxx-interop/
[2]: https://www.swift.org/blog/improving-usability-of-c-librarie...
It may have in the future. Crubit is one effort in this direction: https://crubit.rs/
There is also cxx.rs, which is quite nice, albeit you have to struggle a bit sending `std` types back and forth
> albeit you have to struggle sending `std` types back and forth a bit
Firefox solves this partly by not using `std` types.
For example, https://github.com/mozilla/thin-vec exists in large part because it's compatible with Firefox's existing C++ Vec/Array implementation (with the bonus that it's only 8 bytes on the stack compared to 24 for the std Vec).
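A quick way to check the layout claim on the std side (the one-pointer ThinVec handle is taken from the thin-vec docs and not verified here; only std's Vec is measured):

```rust
use std::mem::size_of;

// On a 64-bit target, std's Vec handle is three words: pointer, length,
// and capacity. The thin-vec crate (external, not used here) moves
// length/capacity into the heap allocation, so its handle is a single
// pointer wide, which is what makes it layout-compatible with C++ array
// types that do the same.
fn main() {
    assert_eq!(size_of::<Vec<u8>>(), 3 * size_of::<usize>());
    println!("Vec<u8> handle: {} bytes", size_of::<Vec<u8>>());
}
```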
1 reply →
Rust has cxx which I would argue is "good enough" for most use cases. At least all C++ use cases I have. Not perfect, but pretty damn reasonable.
It's technically Rust -> C -> C++ as it stands right now
I know he doesn't make live coding videos anymore, but it'd be cool if Andreas showed off how this worked a little more. I'm curious how much he had to fix by hand (vs reprompting or spinning a different model or whatever).
You can checkout the pull requests related to LibJS: https://github.com/LadybirdBrowser/ladybird/pulls?q=is%3Apr+...
What happened? It’s been awhile since I checked in but it seems he doesn’t work on serenity and doesn’t live stream anymore (and is now into lifting weights)
He got his serenity and at the same time ladybird browser started getting somewhere, so he separated it out and went full on with it. From what I know, he was working on browsers before at Apple, so it was like he got ready to return
Porting the JS parser to Rust and adopting Rust in other parts of the engine while continuing to use C++ heavily is unlikely to make Ladybird meaningfully more secure.
Attackers are surprisingly resilient to partial security.
[flagged]
No, just saying facts
Translating software that has a lot of tests is easy for LLMs. I think we'll be seeing a lot more of that in the coming years. But it will take some time for people to build up more trust in these tools. Good test harnesses are a key enabler.
The inevitable cleanup that will follow this could be done the same way. Refactoring like that can be done in more bite sized chunks, which makes it easier to review what is happening and control how it is done.
I hope that this opens the door for collaboration between Ladybird and Servo, no need to reinvent the wheel for core components.
I thought the entire point of Ladybird was precisely to reinvent the wheel?
This is also the case for Servo, so it makes sense to collaborate.
2 replies →
Unfortunately licence incompatibility may prevent that. Ladybird is BSD and Servo is MPL. This is also why there is only limited collaboration between Servo and the Rust GUI ecosystem.
Commenting about not reinventing the wheel on a Ladybird post is ironic
Is there any discussion of why D or even Ada was not considered? These languages have been around for a long time. If they were willing to use an LLM to break the initial barrier to entry for a new language, then a case can be made for these languages as well.
They already made the mistake picking a niche language twice (first their own language, then Swift as a cross-platform language), why would you want them to make it a third time?
What kind of response is this? I was asking if there was any technical evaluation of other languages. And D and Ada are not niche; they have been battle-tested in critical software.
Swift had/has some problems in the language itself. It wasn't the niche nature of Swift that was the problem, IIRC.
I don't think this is the right response, because a meaningful discussion could definitely have taken place, given that they were already open to other languages, which was the reason they picked Swift in the first place.
I remember Andreas' video where he talked about how people used Rust in his codebase and were so happy at first, but later it became very difficult, whereas they found Swift easier to manage. That was the reason they picked Swift at the time.
Certainly their goal wasn't to pick a popular language (because if that's what you want, use Python or JS) but rather a language that was relevant to what they were building.
So if D and Ada were relevant or not, that's the main point of discussion imo.
I've dabbled a bit in Ada, but it wouldn't be my choice either. It's still susceptible to memory errors. It's better behaved than C, but you still have to be careful. And the tooling isn't great, and there isn't a lot in terms of libraries. I think Ladybird also has aspirations to build their own OS, so portability could also be an issue.
Not the case with SPARK. But I understand it would require writing a lot of things from scratch for browsers. I don't think portability would be an issue with Ada, though; it is cross-platform.
This is where D shines, however. D has a mature ecosystem, offers a first-class C++ ABI, and provides memory safety guarantees, which the blog mentioned as a primary factor. And D is similar to C++, so there's a low barrier for C++ devs to pick it up.
Probably contributing reasons? I imagine over time they will have a lot more Rust contributors than D or Ada.
Unfortunately a really good question gets downvoted instead of prompting a relevant discussion, as so often on recent HN. It would be really interesting to know why Ada was not considered for such a large project, especially now that the code is translated with LLMs, as you say. I was never really comfortable with them going for the most recent C++ versions, since there are still too many differences and unimplemented parts that make cross-compiler compatibility an issue. I hope that with Rust at least cross-compilation is possible, so that the resulting executable also runs on older systems where the toolchain is not available.
Unfortunately some folks do get a bit sensitive about Rust, which can be off-putting.
But what I wanted to know about was the evaluation of other languages, because Andreas has written complex software.
His insight into shortcomings or other issues could be enriching, covering things that developers not as far along may not have encountered.
Ultimately, that will only help others to understand how to write better software or think about scalability.
I personally think people might've read it as a "use Ada/D over Rust" comment, which could have led the HN people who prefer Rust to respond with downvotes.
I agree this might be the wrong behaviour, and I don't think it's any fault of Rust itself, though that could itself be a blanket statement, imo. There's nuance on both sides of the discussion.
Coming to the main point, I feel like the real reason could be that Rust is the sort of equilibrium the world has reached, especially for security-related projects. Good or bad, using Rust means more contributor resources, the zeal of Rustaceans, and third-party libraries developed in Rust, although that last one is becoming a problem nowadays from what I hear from Rust users here (i.e. too many dependencies).
Rust does seem good enough for this use case. The question is what D/Ada (might I also add Nim/V/Odin) would add further to the project, but I honestly agree that a fruitful discussion between the other languages would certainly have been beneficial to the project (imo), and at the very least would have been very interesting to read.
Based on the origins of Rust as a tool for writing the really thorny, defensive parsers of potentially actively hostile code for firefox, I have to imagine that another web browser is the most at-home place the language could ever be.
If this means we will get an independent state-of-the-art browser engine, I'm all for it.
IMV Servo is going to be the independent state of the art browser
I have my doubts it'll ever be "finished". Servo gives strong vibes of a project that will avoid performance hacks, because they're not nice/state of the art code. I have no evidence, it's just the energy I've picked up from it
My intuition is that they will convert again, to Zig, once it stabilizes. If it's possible to do it with an LLM in 2 weeks for Rust, the same would hold for Zig.
While Rust is nice on paper, writing complex software in it is mentally taxing. You cannot do it for long.
If they are looking for a memory-safe language, why would they convert to Zig?
If they do, it could be because safety is a gradient and one variable among many in software development, albeit a very important one when it comes to browsers.
Pardon my ignorance, but doesn't a byte-by-byte output recreation of the C++ code in Rust defeat the whole purpose of using Rust? For one, would it be idiomatic Rust anymore? Also, if there's a (non-memory related) vulnerability in the C++ code, would it be possible for that to be introduced in Rust too?
The impression I get from the article is not that the compiled code of each implementation produces the same object code, but that when the implementations are run with the same inputs, they produce exactly the same output — that is, the same JS VM bytecode.
That matches my understanding too.
If they had developed a technique to get a modern C++ compiler and rustc to generate exactly the same output for any program (even a trivial one) I think that would be huge news and I would love to see all the linker hacking that would involve.
I've looked at the code from the PR. It seems to use safe types and standard idioms like pattern matching, so at least at first glance it looks like Rust.
It could have been worse. C++ code naively converted line-by-line to Rust typically results in weird and unsafe Rust, but in this case it seems they've only been strict about the results being the same, not the structure of the implementation.
Rewrites have a very high risk of introducing regressions. Trying to fix bugs while rewriting will only make things harder, because instead of simply comparing outputs exactly, you'll have to judge which output is the right one. If you let the behavior significantly diverge during the rewrite, you'll just have two differently buggy codebases and no reference to follow.
It's much easier to make a bug-for-bug compatible copy, and fix bugs later.
Once you get a byte-by-byte duplicate, you can start refactoring into idiomatic Rust. Convert pointers to references, rip out unsafe blocks, and let Clippy go ham.
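As a sketch of what that refactoring pass looks like (hypothetical code, not from the actual PR): a line-by-line translation tends to keep C++ pointer habits, and the idiomatic rewrite removes the `unsafe` entirely:

```rust
// Naive line-by-line translation: keeps the C++ pointer arithmetic,
// which forces an unsafe block.
fn sum_naive(data: &[i32]) -> i32 {
    let mut total = 0;
    let mut p = data.as_ptr();
    // SAFETY: `p` stays within `data` for exactly data.len() steps.
    unsafe {
        for _ in 0..data.len() {
            total += *p;
            p = p.add(1);
        }
    }
    total
}

// Idiomatic refactor: identical behavior, no unsafe, no pointers;
// this is the kind of change Clippy happily steers you toward.
fn sum_idiomatic(data: &[i32]) -> i32 {
    data.iter().sum()
}
```

Both compile to essentially the same machine code here; the difference is that the second version can't be miscounted into reading out of bounds.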
Add the fact that AI is writing the Rust code...
"The web platform object model inherits a lot of 1990s OOP flavor, with garbage collection, deep inheritance hierarchies, and so on. Rust’s ownership model is not a natural fit for that."
I'm confused about this part. What part of the browser did they want GC and inheritance for? I'd get it if they were writing the UI in this, but the rest of this post is about the JS engine. They weren't going to 1:1 map JS objects to Swift objects and rely on ARC to manage memory, were they?
A lot of DOM APIs are like that. You have methods like element.[parent|children](), which imply a circular structure, and then you have APIs like element.click(), which emits a click event that bubbles through the DOM - meaning the element has to hold some mutable reference to DOM state. Or even element.remove(), which seems like a super weird API to have on an element of a collection, from a Rust API design point of view.
You can model these with reference counting, but this turned out to be unfeasible in browsers. There's a great talk from when Blink (Chrome) transitioned from reference counting to GC, which provides a lot more detail about these problems in practice: https://www.youtube.com/watch?v=_uxmEyd6uxo
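For illustration, the reference-counted modeling mentioned above might look like this in Rust - a hypothetical sketch, not Ladybird's actual design - using `Rc` for child links and `Weak` for the parent back-edge so the parent/child cycle doesn't leak:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Minimal DOM-ish node: strong references point down the tree,
// weak references point back up, breaking the cycle.
struct Node {
    name: String,
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn new_node(name: &str) -> Rc<Node> {
    Rc::new(Node {
        name: name.to_string(),
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    })
}

fn append_child(parent: &Rc<Node>, child: &Rc<Node>) {
    // Back-edge is a Weak, so dropping the tree actually frees it.
    *child.parent.borrow_mut() = Rc::downgrade(parent);
    parent.children.borrow_mut().push(Rc::clone(child));
}
```

Even this toy version shows the friction: every traversal goes through `borrow()`/`upgrade()`, which is part of why browsers moved past plain reference counting.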
> I'd get it if they were writing the UI in this, but the rest of this post is about the JS engine.
I think this might be the reason they started with the JS engine and not with more fundamental browser structures. The JS object model has these problems too, but the engine has to solve them in a more generic way: all JS objects can just be modeled as some JSObject class/struct where this is handled at the engine level.
DOM and other browser structures are different because the engine has to understand them, so the browser developers have to interact with the GC manually and if you watch the talk above, you'll see that it's quite involved to do even in C++, let alone in Rust, which puts a bunch of restrictions on top of that.
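One common way Rust projects sidestep these restrictions (a hypothetical sketch, not necessarily how LibJS does it) is to store objects in an arena and refer to them through index handles, so cycles are harmless and the borrow checker only ever sees the arena:

```rust
// Handles are plain indices into the arena, so they can form cycles
// freely without fighting ownership.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Handle(usize);

struct Object {
    // e.g. a JS prototype link; self-referential graphs are fine here.
    proto: Option<Handle>,
}

struct Heap {
    objects: Vec<Object>,
}

impl Heap {
    fn new() -> Self {
        Heap { objects: Vec::new() }
    }

    fn alloc(&mut self, proto: Option<Handle>) -> Handle {
        self.objects.push(Object { proto });
        Handle(self.objects.len() - 1)
    }

    fn proto_of(&self, h: Handle) -> Option<Handle> {
        self.objects[h.0].proto
    }
}
```

A real engine layers a GC on top (tracing reachable handles, reusing slots), but the basic move is the same: one owner (the heap), and everything else is an index.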
The "human-directed, not autonomous" framing is the part people keep glossing over. Claude Code here is a compiler-level translation tool, you are still the architect deciding what gets ported and in what order.
The real question is what this does to migrations that never happened because 18 months of rewrite did not pencil out. A 2-week port fundamentally changes that calculus.
If it is this easy, surely the trend is toward Rust output becoming an intermediate pass of the LLM super-compiler. A security pass, if you will (like other kinds of optimization): it will move from Rust source to some deeper level of analysis and output the final executable. Some brave souls will read the intermediate Rust output (just as people used to read the assembler output from compilers), but the LLM super-compiler will simply translate a detailed English-like spec into final executables.
Do you seriously think LLMs will not just spam unsafe blocks in it like they do with any task ever?
I'd generally be quite surprised to see LLMs spam unsafe blocks, both because that's behavior that I haven't observed while using them and because that contradicts my mental model of them where they imitate the styles of code that they were trained on (which in rust generally does not include spamming unsafe).
The most underappreciated aspect of AI-assisted language migration is that it changes the cost-benefit analysis of which language to target. Previously, choosing Rust for a browser engine meant accepting 3-5x slower development velocity vs C++. If AI closes that gap to near parity, the calculus shifts dramatically: you get Rust's safety guarantees essentially for free in terms of developer time.
The two-week timeline for 25k lines is striking. Even accounting for the human review overhead, that's probably 4-6x faster than manual porting. And the byte-for-byte verification approach means the speed doesn't come at the cost of correctness. This could be a template for how other large C++ codebases approach incremental Rust adoption.
> We’ve been searching for a memory-safe programming language to replace C++ in Ladybird for a while now.
The article fails to explain why. What problems (besides the obvious) have been found that "memory-safe languages" can help with? Do these problems actually justify adding complexity to a project like this by introducing another language?
I guess AI will be involved, which, at this early point in the project, makes Ladybird a lot less interesting (at least to me).
> What problems (besides the obvious) have been found in which "memory-safe languages" can help.
Why isn't that enough?
Browsers are incredibly security-sensitive projects. Downloading untrusted code from the internet and executing is part of their intended functionality! If memory safety is needed anywhere it's in browsers.
Rust was pretty much created to help solve security issues in browsers: https://en.wikipedia.org/wiki/Rust_(programming_language)#20...
> besides the obvious
Well, what else is there besides the obvious? It's a browser.
Even Chrome has started to adopt Rust due to recurring memory vulnerabilities.... that's a big enough reason.
You don't want a browser with a bunch of RCEs that can be triggered by opening a web page...
You do want a browser with RCE, but you want it kept sandboxed. The hard part is executing the code safely.
I guess you will need to wait for their Feb 2026 update.
This is really YOLOing, as the original author doesn't know Rust well. So what happens if they hit some complex production issue LLMs aren't aware of? Hire an expensive consultant to fix it until the next LLM iteration?
I'm as anti-LLM-use as they come, but this appears to be migrating libraries from already-functioning C++ code. In the case of your hypothetical, I suspect the course of action would be "shelve this library port until someone with domain expertise and Rust experience can look at it". It's not like he chucked the whole codebase at the GenAI gods and said "Port it to Rust!".
> what happens if they hit some complex production issue
they learn Rust
it takes a couple of years
it's not that hard.
Any word on how much more memory safe the implementation is? If passing a previous test suite is the criteria for success, what has changed, really? Are there previous memory safety tests that went from failing to passing?
I am very interested to know if this time and energy spent actually improved memory safety.
Other engineers facing the same challenges want to know!
If the previous impl had known memory safety issues I'd imagine they'd fix them as a matter of priority. It's hard to test for memory safety issues you don't know about.
On the rust side, the question is how much `unsafe` they used (I would hope none at all, although they don't specify).
You can look: https://github.com/LadybirdBrowser/ladybird/pull/8104/files?...
It seems like it is used mostly for FFI.
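That pattern - confining `unsafe` to the FFI boundary behind a safe wrapper - typically looks something like this (an illustrative sketch calling libc's `strlen`, not code from the PR):

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Declaration of a foreign C function; calling it is inherently unsafe
// because the compiler can't verify the pointer contract.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

// Safe wrapper: callers never touch raw pointers or `unsafe`.
fn c_strlen(text: &str) -> usize {
    let c = CString::new(text).expect("no interior NUL bytes");
    // SAFETY: `c` is a valid NUL-terminated string that outlives the call.
    unsafe { strlen(c.as_ptr()) }
}
```

The point of auditing a port for `unsafe` is exactly this: if the blocks cluster at boundaries like these rather than throughout the logic, the memory-safety argument for the rest of the code still holds.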
It is entirely possible a Rust port could have caught previously unknown memory safety issues. Furthermore, a Rust port that looks and feels like C++ may be peppered with unsafe calls to the point where the ROI on the port is greatly reduced.
I am not trying to dunk on the effort; quite the contrary. I am eager to hear more about the goals it originally set out to achieve.
None at all, the generated AST and bytecode are stated to be identical
This byte-for-byte output check is the right way to go, but it depends on the test cases. Hopefully the suite covers enough combinations of input.
How is this approach better than letting Claude or some AI catch memory bugs in the C++ code itself, better than humans can?
Given it's not launched and it's donor-driven, I guess conserving developer time is a key priority? A dual-track approach eats a lot of time.
Good step. It will bring many more contributors.
Using LibJS with servo, when?
Interesting in the context that some time ago Andreas said they failed at porting the TypeScript compiler from TypeScript itself to Go using LLMs and went with a manual port: https://youtu.be/uMqx8NNT4xY?si=Vf1PyNkg3t6tmiPp&t=1423
That's a pivot; IIRC they wanted to go with Swift (I'm very glad they didn't). It's cool to see something like Claude be useful for large-scale projects like this.
This is great! I'm very excited about Ladybird. My current browser, Brave, is the best of the worst but Ladybird is the best of the best. On another note, I'm hoping iOS will eventually let go of their control over WebKit, however wishful it might be.
I wonder what is gained by this port though, if the C++ codebase already employed modern approaches to memory management. It's entirely possible that the Rust version will perform worse too as compilers are less mature.
"modern approaches to memory management" aren't enough for complete memory safety.
Maybe, but it's certainly possible to write memory safe code in C++. It may be more or less difficult, but it isn't typically the ONLY objective of a project. C++ has other advantages too, such as seamless integration with C APIs and codebases, idiomatic OOP, and very mature compilers and libraries.
I must admit to being somewhat confused by the article's claim that Rust and C++ emit bytecode. To my knowledge, neither do (unless they're both targeting WASM?) - is there something I'm missing or is the author just using the wrong words?
EDIT: bramhaag pointed out the error of my ways. Thanks bramhaag!
By 'Rust compiler' and 'C++ compiler', they refer to the LibJS bytecode generator implemented in those languages. This is about the generated JS bytecode.
Yes, I re-read again, and I think you are correct. Thanks!
Thanks! I was confused about this as well.
They're referring to LibJS's bytecode (the internal instruction stream of Ladybird’s JS engine), not to Rust/CPP output formats.
This may sound stupid, but I wonder if using GPUI for a web browser could have some performance benefits...
I don't get the impression they care that much about performance. Besides, it would limit the number of platforms it could run on if it requires a recent GPU.
Using Rust means they’ve limited the number of platforms already… so I doubt they care.
Would rather use Servo. At least it has downloadable binaries.
Something of a culture clash here ain’t it, albeit an imbalanced one.
Oooh noooo I will have to fork it before it is too late!
Rust is perfect. Too perfect for the fascists behind Ladybird. https://drewdevault.com/2025/09/24/2025-09-24-Cloudflare-and...
Leave Rust for the non-fascists.
A lot of the interest in Ladybird and its parent project, SerenityOS, stemmed from the bespoke, "from scratch" approach Andreas advocated. If they're going to start vibe-coding everything, then what's the point of the top-to-bottom wheel reinvention they're doing? E.g., why even bother maintaining their own, custom XML parser instead of using any of the multitude that already exist (including in Rust) if they're ultimately going to lean heavily on AI in the course of developing it, since that inherently involves laundering existing implementations?
This project, of all of them, throwing in the towel on AI makes me fear AI abstainers have no future.
Were there any immediate benefits of this conversion, e.g. reduced memory use or lower CPU utilization?
Likely the opposite, as safe Rust has some extra safety checks for things like array bounds.
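Concretely, the checks in question are slice bounds checks: safe indexing panics on out-of-range access rather than reading out of bounds, while `get` makes the check explicit and iterator-style code lets the compiler elide it entirely. A small sketch:

```rust
// Checked access made explicit: out-of-range becomes None,
// never undefined behavior.
fn byte_at(buf: &[u8], i: usize) -> Option<u8> {
    buf.get(i).copied()
}

// Iterator form: the compiler knows the traversal stays in range,
// so no per-element bounds check survives in the generated code.
fn checksum(buf: &[u8]) -> u32 {
    buf.iter().map(|&b| b as u32).sum()
}
```

So the "extra checks" cost is real but usually small, and mostly avoidable in hot loops by writing iterator-style code instead of manual indexing.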
This will be another bad decision, just like with Swift. From what I heard, Rust is notoriously bad at letting people define their own structure, and instead beats you up until you satisfy the borrow checker. I think it'll make development slow and unpleasant. There are people out there who enjoy that, but it's not a fit when you need to deliver a really huge codebase in reasonable time. I remember Andreas mentioning he just wanted something like C++ but with a GC, and D would be absolutely perfect for that job.
Maybe, but will they have to fight the borrow checker for anything other than the (very OOP) DOM components? They'll obviously use both languages for a long time to come, so the more functional places can get Rust, while the more OOP places can keep benefiting from C++.
Nobody uses D
This is like the "real world" argument. Nobody uses that "in the real world", except, well, the people who do.
Well, I do!?!! It's even faster than zoomer langs like Odin. You should try it.
And? Does it work? Because it does. It's a lot closer to C++, you literally need about a week to become productive, and it's insanely flexible as a language. Nobody uses Swift either, but the additional problem with Swift was that it's entirely Apple-centric.
Fuck me. This is wild. Sorry for the potty mouth.
I'm not here to troll the LLM-as-programmer haters, but Ladybird (and Rust!) is loved by HN, and this is a big win.
How long until Ladybird begins to impact market dominance for Chrome and Firefox? My guess: Two years.
Note that Firefox doesn't have market dominance; it is under 5% market share. That said, I imagine Firefox users are the most likely to make the jump. However, the web is a minefield of corner cases, and it's hard to believe it will be enough to make the browser useful enough to be a daily driver.
Why do you think Firefox users would be most likely to make the jump? The main reason I see people give for supporting Ladybird is challenging the dominance of the incumbents. That's not really a great reason to switch from Firefox because, as you note, it doesn't have any dominance. And there's also an argument that splitting the non-Chrome market into two only increases Chrome's dominance.
From what I can tell from HN, Brave seems to be popular with those users who hate Google but for whatever reason hate Mozilla even more, and I suspect those will be the most likely users to switch.
This is sort of hilarious if you think about it. The Firefox browser is completely written in Rust. Now Ladybird is a "human-directed AI" Rust browser. Makes you wonder how much of the code the two browsers will share going forward, given that LLM-assisted autocompletes will pull from the same Rust browser dataset.
Probably not much: the requirement is exact equivalence of program inputs to outputs, and as such the agents are performing very mechanical translation from the existing C++ code to Rust. Their prompts aren't "implement X browser component in rust", they're "translate this C++ code to Rust, with these extra details that you can't glean from the code itself."
It's like only 10% of Firefox is rust.
I wonder where you got the idea that Firefox was all Rust. Made me curious.
Only a small portion of Firefox is written in Rust. Apparently some of the most performant and least buggy parts are those in Rust, but again, only parts like the CSS engine.
https://github.com/mozilla-firefox/firefox Rust isn't even mentioned in languages used.
I don’t think I want to use a vibe-coded browser, unless “human-directed” just means that a human coded it and asked clarifying questions to the AI.
I don't get it, and I don't have a dog in the C/C++ vs. Rust race. Ladybird has ~1200 contributors with a predominance of C++ contributions, followed by HTML, and with "other" lying at 0.5%.
That's a lot of people contributing.
How many of them will be less willing to contribute in the future, and less productive when they do if a sizable portion is in Rust? Maybe there'll be more contributions and maybe there'll be less. I don't know. If you've managed to develop a community of 1200 developers who are willing to advance the project why upset the applecart?
There is a flock of people yelling around that they'd contribute if it was Rust, but won't touch C++
Great! I can't wait they totally ditch C++
I guess the ETA will be pushed back by a few years then?
By 2 weeks so far ;-)
Probably not, unless using Rust presents some particular challenge for this type of project. But having eaten this proverbial apple, they will probably use AI more and more, assuming they have the budget, in which case Rust's ecosystem being less rich than C++'s might not mean much for productivity.
For those unaware: the author holds some very unfriendly views. https://drewdevault.com/2025/09/24/2025-09-24-Cloudflare-and...
Cool!
I've translated it from Rust to V in 45 minutes. Works great:
https://github.com/medvednikov/libjs_v
Compiles in 0.2 seconds on my M5 (compared to Rust's 3.3 seconds).
Guess it will never come out.
They are lost. C & C++ are absolutely fine.
What are Rust programmers to do now that LLMs can port code to Rust??
It reads like a joke without a punchline
Rejoice?
I'm still glad that NetSurf exists.
I wouldn't mind if one result of this was a writeup on what patterns/antipatterns are there when converting code and concepts that used to be very aligned with C++-style OOP, deep inheritance and all that jazz, to what feels natural in Rust, and how you can rephrase those concepts without loss in the substance of what you need to do.
I guess it's a long way off, since the LLM translation would need to be refactored into natural Rust first. But the value of it would be in that it's a real world project, and not a hypothetical "well, you could probably just...".
Sigh, agents keep killing all the passion I have for programming. They can do things way faster than me, and better than me in some cases. Soon they will do everything better and faster than me.
> Soon it will do everything better and faster than me
There is no evidence of that coming from this post. The work was highly directed by an extremely skilled engineer. As he points out, it was done in small chunks. Which chunks, and in what order, were his decision.
Is AI rewriting those chunks much faster than he could? Yes, very much so. Is it doing it better? Probably not. So it is mostly just faster when you are very specific about what it should do. In other words, it is not a competitor. It is a tool.
And the entire thing was constrained by a massive test suite. AI did not write that. It does not even understand why those tests are the way they are.
This is a long way from "AI, write me a JavaScript engine".
I'd put it as an example of a carpenter preparing their material with a lathe and circular saw vs one working with a handsaw and chisel.
Both will get a skilled craftsman to the point where the output is a quality piece of work. Using the power tools to prepare the inputs allows velocity and consistency.
The main issue is the hype and the skiddies who would say: feed this tree into a machine and get a cabinet. Producing non-deterministic outputs, with the operator unable to adjust requirements on the fly or even stray from patterns/designs that haven't been trained on yet.
The tools have limitations, and so do the operators, and the hype does a disservice to what should be the work of establishing reasonable patterns of usage and best practices.
Is a migration from language X to Y or refactoring from pattern A to B really the kind of task that makes you look forward to your day when you wake up?
Personally my sweet spot for LLM usage is for such tasks, and they can do a much better job unpacking the prompt and getting it done quickly.
In fact, there's a few codebases at my workplace that are quite shit, and I'm looking forward to make my proposal to refactor these. Prior to LLMs, I'm sure I'd have been laughed off, but now it's much more practical to achieve this.
Right. I had a 100% manual hobby project that did a load of parametric CAD in Python. The problem with sharing this was either actively running a server, trying to port the stack to emscripten including OCCT, or rewriting in JS, something I am only vaguely experienced in.
In ~5 hours of prompting, coding, testing, tweaking, the STL outputs are 1:1 (having the original is essential for this) and it runs entirely locally once the browser has loaded.
I don’t pretend that I’m a frontend developer now but it’s the sort of thing that would have taken me at least days, probably longer if I took the time to learn how each piece worked/fitted together.
It's the opposite for me: most of the time the first rough pass it generates is awful, and if you don't have good taste and a solid background of years of programming experience, you won't notice. I keep having to steer it toward better design choices.
I'm not sure 25,000 lines translated in 2 weeks is "fast", for a naive translation between languages as similar as C++ and Rust (and Ladybird does modern RAII smart-pointer-y C++ which is VERY similar to Rust). You should easily be able to do 2000+ lines/day chunks.
Yeah, it also matters a lot that the person doing the translation is the lead developer of the project, who is very familiar with the original version.
I imagine LLMs do help quite a bit for these language translation tasks though. Language translation (both human and programming) is one of the things they seem to be best at.
Agreed, however, I'm quite sure 25,000 lines translated in "multiple months" is very "slow", for a naive translation between languages as similar as C++ and Rust.
2000+ lines/day chunks are 10 days for 20+k lines...
"I will never be a world class athlete, so I play for the love of the sport."
Helps me.
Not sure why you'd get that from this post, which says it required careful small prompts over the course of weeks.
In the hands of experienced devs, AI increases coding speed with minimal impact to quality. That's your differentiator.
Look into platforms like Workato, Boomi, or similar iPaaS products. Unfortunately, it feels like those of us who like coding will have to be happy turning into architect roles, with AI as the bricklayers.
Despite the many claims to the contrary, agents can't do anything better than a human yet. Faster, certainly, but the quality is always poor compared to what a human would produce. You aren't obsolete yet, brother.
Dunno, that probably doesn't hold for webapps with backends, as they are typically complete garbage, and LLMs (even local ones) would give you about the same result in 1 hour.
It automates both the fun and the boring parts equally well. Now the job is like opening a box of Legos: they fall out and then auto-assemble themselves into whatever we want.
Rather like opening a box of legos and reading them the instruction sheet while they auto assemble based on what they understood. Then you re-read and clarify where the assembly went wrong. Many times, if needed.
I remember seeing interviews saying Rust was not suited for this project because of recursion and the DOM tree, and how they tested multiple languages and settled on Swift. Then they abandoned Swift, and now they shift toward Rust.
This entire project starts to look like "how am I feeling today?" rather than a serious project.
So Swift didn't turn out like they imagined, and Rust is just the next best alternative to that failed vision.
So far this is the first and only shift
They were doing their own custom language before Swift.
From the link it seems that Ladybird's architecture is very modular; LibJS is one of the subsystems with the fewest external dependencies. That said, they don't need to migrate everything, only the parts that make sense.
Yes, I understand that in a personal project, but they have investors behind them.
They adopted Rust for LibJS, not the browser and its engine.
I feel similarly about the potential of this technique and have heard this from other C++ developers too.
Rust syntax is a PITA, and investing a lot of effort in the language doesn't seem worth the trouble for an experienced C++ developer. But with AI, learning, porting, and maintenance all become more accessible. It's possible to integrate Rust into an existing codebase or write subparts of larger C++ projects in Rust where it makes sense.
I was recently involved in an AI porting effort, but using different languages and the results were fine. Validating and reviewing the code took longer than writing it.
I am unsure if I can rationally justify saying this, but I am left with disappointment and unease. Comparable to when a series I care about changes showrunner and jumps the shark.
Maybe you're part of an anti-cult-cult?
Would be as bad as being in a cult.
Hate to tell you this, but it's cults all the way down. Plato understood this, and his disdain for caves and wall-shadows, is really a disdain for cults. The thing is, over the last 2300 years, we have gotten really good at making our caves super cozy -- much cozier than the "real world" could ever be. Our wall-shadows have become theme parks, broadway theaters, VR headsets, youtube videos, books, entire cities even. In Plato's day, it made sense to question the cave, to be suspicious of it. But today, the cave is not just at parity with reality, it is superior to it (similar to how a video game is a precisely engineered experience, one that never has too little signal and never has too much noise, the perfect balance to keep you interested and engaged).
I'm no mind reader, and certainly no anthropologist, but I suspect that what separates humans from other (non extinct) animals, is that we compulsively seek caves that we can decorate with moving shadows and static symbols. We even found a series of prime numbers (sequences of dots, ". ... ..... .......") in a cave from the _ice age_. Mathematics before writing. We seek to project what we see with our mind's eye into the world itself, thereby making it communicable, shareable. Ever tell someone you had a dream, and they believed you? You just planted the seed for a cult, a shared cave. Even though you cannot photograph the dream, or offer any evidence that you can dream at all.
The industrial and scientific revolutions have distanced our consciousness from this idea, even as they enabled ever more perfect caves to manifest. Our vocabulary has become corrupted and unclear. We started using words like "reality", and "literally", and "truth", when we mean the exact opposite.
The conspiracy theorists and cultists, are just people who wandered into a new cave, with a different kind of fire, and differently curved walls, and they want to tell people from their old cave that they have found a way out of the cave into reality -- they do not yet realize (or do not want to accept), that they live in a network of caves, a network of different things in the same category.
During the early 2020s, we did a lot of talking about the disappearance of "consensus reality". This is scientific terminology mapped over the idea of caves and cults. You can tell, because the phrase is an oxymoron. It is not reality, if it requires consensus. It is fantasy, it is fiction, it is a dream. The cave has indeed become so widespread that we even _call_ it reality.
If you speak language, and read words, you are participating in a cult (we even call caves that had a kind of altar in the center a cult -- in Eurasia, there was a cave-cult called _the cult of the bear_, which had a bear skull placed in its center during the last ice age, and I would not be surprised if people spoke to it, with the help of hallucinogens). The only question is whether the cult is nourishing you or cannibalizing you.
To the person you are responding to (user ocd): your cave (Ladybird, your hypothetical TV series) no longer nourishes you like it once did. Maybe find a new cave, build a fire in it. Unlike a television series, you can fork a code base. You can make it into the perfect cave, just for you. And if another person likes this cave, chooses to sit by the fire with you, well, now you have a cult.
Chatbot-translated code which is C++ foisted onto Rust? I will respectfully roll my eyes.
Ah, but I see they actually haven't done that to most of their code, so maybe it's just a bit of pandering to the hype and fashion.
Servo isn't a JS engine. Do you mean why didn't they abandon their mission statement of developing a truly independent browser engine from scratch, abandon their C++ code base they spent the last 5 years building, accept a regression hit on WPT test coverage, so they can start hacking on a completely different complex foreign code-base they have no experience in, that another team is already developing?
Well for one, Servo isn't just JavaScript, it's an entire engine. Closer to Blink & Gecko.
Secondly, Ladybird wants to be a fourth implementor among the web browsers we have today. Right now there are pretty much three browser engines: Blink, Gecko and WebKit (or alternatively, every browser is either Chrome, Firefox or Safari). Ladybird wants to be the fourth engine and browser in that list.
Servo also wants to be the fourth engine in that list, although the original goal was to remove Gecko and replace it with Servo (which effectively wouldn't change the fact there's only three browsers/three engines). Then Mozilla lost track of what it was doing[0] and discarded the entire Servo team. Nowadays Servo isn't part of Mozilla anymore, but they're clearly much more strapped for resources and don't seem to be too interested in setting up all the work to make a Servo-based browser.
The question of "why not use Servo" kinda has the same tone as "why are people contributing to BSD, can't they just use Linux?". It's a different tool that happens to be in the same category.
[0]: Or in a less positive sense, went evil.
> Well for one, Servo isn't just JavaScript, it's an entire engine.
Notably, Servo doesn't have its own JS engine at all. It uses Rust bindings to SpiderMonkey.
Ladybird has a strong "all dependencies built in house" philosophy. Their argument is that they want an alternative implementation to whatever is used by other browsers. I'd argue they would never use a third-party library like Servo on principle.
No they don’t - SerenityOS did, but when Ladybird split out they started using all sorts of third party libraries for image decoding, network, etc.
Now, a core part of the browser rendering engine is not something they're going to outsource, because that would defeat the goal of the project, but they have a far different policy on dependencies now than they used to.
Because they're not Servo, and Servo is still in the race. Merging those projects works against the goal of having multiple independent browsers.
NIH syndrome
Some time ago I was perma-banned from the Ladybird GitHub repository. One can say it was warranted, or not (people have their own opinions; I completely disagree with the decision). Now that this has happened, I can speak more freely about Ladybird.
Naturally this will be somewhat critical, but I need to first put things into context. I do believe that we really need an alternative to Google dominating our digital life. So I don't object to the goal; whether Ladybird will be that alternative, or not, remains to be seen. Most assuredly we need competition, as otherwise the Google empire moves forward like Darth Vader and the Empire (but nowhere near as cool as that; I find Google boring and lame. Even Skynet in Terminator was more fun than Google. Google just annoys the heck out of me, but back to the topic of browsers).
So with that out of the way ... Ladybird is kind of ... erratic.
Some time ago, perhaps two or three months, Andreas suddenly announced "Swift WILL BE THE FOREVER FUTURE! C++ sucks!!!". People back then were scratching their heads. It was not clear why Swift was suddenly our saviour.
Ok, now we learn: "wait ... Swift is NOT the future, but RUST is!!!". Ok ... more head-scratching. We are having a déjà-vu moment here ... but it gets stranger:
"We previously explored Swift, but the C++ interop never quite got there, and platform support outside the Apple ecosystem was limited. Rust is a different story."
and then:
"I used Claude Code and Codex for the translation. This was human-directed, not autonomous code generation"
So ... the expertise will be with regards to ... relying on AI to autogenerate code in ... Rust.
I am not saying this is a 100% fail strategy, mind you. AI can generate useful code, we have seen that. But I am beginning to have more and more doubts about the Ladybird project. Add to this the breakage of URLs used by thousands or millions of people worldwide (see the issues reported on the GitHub tracker); or the question of whether, once you scale up and more and more people use Ladybird, you will be able to keep up with the issue tracker. Will you ban more people?
In a way it is actually good that I am no longer allowed to comment on their repository, because I can now be a lot more critical and ask questions that the Ladybird team will have to evaluate. Will Ladybird blend? Will it succeed? Will it fail? Yes, it is way too early to make an evaluation, so we should evaluate in some months, perhaps at the end of this year. But I am pretty certain the criticism will increase, at least from the moment they decide to leave beta (or alpha, or whatever model they use; they claimed they want a first working version this year for Linux users, let's see whether that works).
FINALLY! (Swedish: "ÄNTLIGEN!")
Completely ignoring the Rust aspect, I'm disappointed that two weeks were spent on something that isn't getting Ladybird to a state where it can be used as a daily driver. Ladybird isn't usable right now, and if it were usable, improving the memory safety would be a commendable goal. Right now I just feel like this is premature.
developers with good taste like Andreas Kling will be able to design entire OSes with coding agents
> design entire OSes with coding agents
They ported an existing project from C++ to Rust using AI because porting by hand would've been too tedious. I don't think they're planning on vibe coding PRs the way you're imagining.
He already did
This comment raises an interesting question: Would Serenity OS have brought Andreas the same kind of serenity had it been developed with AI? Open candid question.
I like the idea that people are either coders or builders. So AI can help fulfill your desire to build, create, bring things into reality. But it can't satisfy you if you like programming for its own sake. SerenityOS was not a practical project, it was clearly done for the enjoyment of programming itself.
The project's use of AI now echoes that: it's not being used to create new features, it's used for the practical, boring drudge work of translating between two languages. So still very much on brand.
I don't think so. If I remember correctly, Andreas suffered from alcoholism and the Serenity Prayer helped him get on the right path, and IIRC he honored that by creating an OS named SerenityOS.
God grant me the serenity
to accept the things I cannot change;
courage to change the things I can;
and wisdom to know the difference.
I think the line "courage to change the things I can" must've given Andreas the strength, the passion, to make the project a reality.
But if AI made the change, would the line become "courage to prompt an all-powerful entity to change the things I asked it to"?
Would that give courage? Would that inspire confidence in oneself?
I have personally made many projects with LLMs (honestly, I must admit that I am a teenager, so I have been using them more or less from the start),
and personally, there are some points of curiosity I can be proud of in my projects, but there is still a sense of emptiness, and I don't think I am the only one who observes it.
I think in the world of AI hype, it takes true courage & passion to write by hand.
Obviously one could argue that AI is the next bytecode, but that falls apart given the non-deterministic nature of AI. Even so, I personally feel that people who write assembly are likely to be more passionate about their craft than Node.js developers (and I would consider myself a Node.js guy; there's still passion, but still).
Coding was definitely a form of art/expression/sense-of-meaning for Mr Andreas during a time of struggle. To automate that might strip him of the joy derived from stroking a brush on an empty canvas.
Honestly, the more I think about AI the less I know, so I will not pretend I know a thing or two about it. This is just my opinion in the moment; opinions change with time. But my opinion right now is that coding by hand is definitely more meaningful if the purpose of the project is to derive meaning.
Yeah, some weekends ago I tried writing a cross-platform browser without any Rust crates, this weekend I made my own self-hosted compile-to-Rust Clojure-like Lisp, so maybe next weekend attempting to create an OS that uses my language to run on bare metal would actually be a challenge. Thanks for the inspiration :)
Cool project, but I'm a bit curious to hear how the rest of the project feels about this?
I'm not sure how I'd feel if I woke up and found a system I worked on had been translated into another language I'm not necessarily familiar with. And I'm not sure I'd want to fix a non-idiomatic "mess" just because it's been translated into a language I am familiar with either (although I suspect they'll have no problem attracting Rust developers).
Lol, this dude is incapable of finishing anything he starts. Always a million distractions. First he started an OS called Serenity, then abandoned that to start a programming language (Jank or something), then abandoned that to work on a web browser, and now it looks like he's looking for things to distract him from that... A shame really.
Serenity was literally a distraction from his substance addiction issues. It's pretty clear he's productive, and he and his team have worked on Ladybird for several years straight now. How many web browsers or OSes have you developed from scratch?
He is working on the web browser.