Comment by pcloadlett3r
21 days ago
Is there really value being presented here? Is this codebase a stable enough base to continue developing the compiler, or does it warrant a total rewrite? Honest question; the author seems to say it's already at its limits. This mirrors my own experience with Opus, in that it isn't that great at defining abstractions in one shot, at least. Maybe with enough loops it could converge, but I haven't seen definite proof of that in the current generation of these ambitious, clickbaity projects.
This is an experiment to see the current limit of AI capabilities. The end result isn't useful, but the fact is established that in Feb 2026, you can spend $20k on AI to get an inefficient but working C compiler.
Of course it's impressive. I am just pointing out that these experiments, the million-line browser and now this C compiler, seem to greatly extrapolate their conclusions. The researchers claim they prove you can scale agents horizontally for economic benefit. But the products both of these built are of questionable technical quality, and it isn't clear to me that they are a stable enough foundation to build on top of. Yet everyone in the hype crowd just assumes this is true. At least this researcher has sort of promised to pursue the project, whereas Wilson has pretty much given up on his browser; I haven't seen a commit in that repo for weeks. Given that, I am not going to immediately assume these agents achieved anything of economic value relative to what a smaller set of agents could have achieved.
> inefficient but working
FWIW, an inefficient but working product is pretty much the definition of a startup MVP. People are getting hung up on the fact that it doesn't beat gcc and clang, and generalizing to the idea that such a thing can't possibly be useful.
But clearly it can, and is. This builds and boots Linux. A putative MVP might launch someone's dreams. For $20k!
The reflexive Luddism is kinda scary, actually. We're beyond the "will it work" phase, and the disruption is happening in front of us. I was a Luddite 10 months ago. I was wrong.
> FWIW, an inefficient but working product is pretty much the definition of a startup MVP
It depends on what kind of start-up we're talking about.
A compiler start-up probably should show some kind of efficiency gain even in an MVP. As in: we're insanely efficient in this one part of the work; we're still missing all the other functionality, but we have a clear path to implementing the rest.
This is more like: it's inefficient, and the code is such a mess that I have no idea how to improve it.
As per the blog, improvements were attempted, but that only started a game of whack-a-mole with new problems.
If on the other hand you're talking about Claude Teams for writing code as an MVP: the outcome is more like proof that the approach doesn't work and you need humans in the loop.
You are projecting and over-reacting. My response is measured against the insane hype this is getting beyond what was demonstrated. I never said it wasn't impressive.
I'm not hung up on anything. Clearly the project isn't stable, because it can't be modified without regression. It can be an MVP, but if it needs someone to rewrite it, or to spend many man-months just to grok the code before adding to it, then it's conceivable it isn't an economic win in the long run. Also, they haven't compared this to what a smaller set of agents could accomplish on the same task, so I am still not fully sold on the economic viability of horizontally scaling agents at this time (well, at least not on the task that was tested).
> The end result isn't useful
Then, as your parent comment asked, is there value in it? $20K, which is more than the yearly minimum wage in several countries in Europe, was spent recreating a worse version of something we already have, just to see if it was possible, using a system which increases inequality and makes climate change—which is causing people to die—worse.
If it generates a booting kernel and passes the test suite at 99% it's probably good enough to use, yeah.
The point isn't to replace GCC per se, it's to demonstrate that reasonably working software of equivalent complexity is within reach for $20k to solve whatever problem it is you do have.
> it's probably good enough to use, yeah.
Not for general purpose use, only for demo.
> that reasonably working software of equivalent complexity is within reach for $20k to solve
But if this can't come close to replacing GCC, and can't be modified without introducing bugs, then it hasn't proven this yet. I learned some new hacks from the paper, and that's great and all, but from my experience of trying to harness even 4 Claude sessions in parallel on a complex task, it just goes off the rails in terms of coherence. I'll try the new techniques, but my intuition is that it's not really as good as you are selling it.
> Not for general purpose use, only for demo.
What does that mean, though? I mean, it's already meeting a very high quality bar by booting at all and passing those tests. No, it doesn't beat existing solutions on all the checkboxes, but that's not what the demo is about.
The point being demonstrated is that if you need a "custom compiler" or something similar for your own new, greenfield requirement, you can have it at pretty-clearly-near-shippable quality in two weeks for $20k.
And if people can't smell the disruption there, I don't know what to say.