Hello world does not compile

1 day ago (github.com)

The negativity around the lack of perfection in something that was literal fiction just a few years ago is amazing.

  • If more people were able to step back and think about the potential growth for the next 5-10 years, I think the discussion would be very different.

    I am grateful to be able to witness all this amazing progress play out, but I am also concerned about the wide-ranging implications.

    • > think about the potential growth for the next 5-10 years,

      I thought about it, and it doesn't seem that bright. The problem is not that LLMs generate inferior code faster; it's that at some point some people will be convinced that this code is good enough and can be used in production. At that point, the programming skills of the population will devolve and fewer people will understand what's going on. Human programmers will only work in financial institutions and the like; the rest will be a mess. Why? Because generated code is starting to become a commodity, and the buyer doesn't understand how bad it is.

      So we're at the stage where global companies decided it was a fantastic idea to outsource the production of everything to China, and individuals are buying Chinese plastic gadgets en masse. Why? Because it's very cheap compared to the real thing.

    • This is what the kids call “cope”, but it comes from a very real place of fear and insecurity.

      Not the kind of insecurity you get from your parents mind you, but the kind where you’re not sure you’re going to be able to preserve your way of life.

  • There is a massive difference between a result like this when it's a research project and when it's being pushed by billion-dollar companies as the solution to all of humanity's problems.

    In business, as a product, results are all that matter.

    As a research and development effort, it's exciting and interesting as a milestone on the path to something revolutionary.

    But I don't think it's ready to deliver value. Building a compiler that almost works is of no business value.

  • No one can correctly quantify what these models can and can't do. That leads to the people in charge completely overselling them (automating all white-collar jobs, doing all software engineering, etc.) and the people threatened by those statements firing back when these models inevitably fail to do what was promised.

    They are very capable, but it's very hard to explain to what degree. It is even harder to quantify what they will be able to do in the future and what inherent limits exist. Again, this leads the people benefiting from it to claim that there are no limits.

    The truth is that we just don't know. And there are too few good folks out there who are actually reasonable about it, because the ones who know are working on the tech and benefit from more hype. Karpathy is one of the few who left the rocket and gives a still-optimistic but reasonable perspective.

  • It’s a fear response.

    • It could also be that, so often, the claims of what LLMs achieve are so, so overstated that people feel the need to take it down a notch.

      I think lofty claims ultimately hurt the perception of AI. If I wanted to believe AI was going nowhere, I would listen to people like Sam Altman, who seem to believe in something more akin to a religion than a pragmatic approach. That, to me, does not breed confidence. Surely, if the product is good, it would not require evangelism or outright deceit? For example, claiming this implementation was 'clean room'. Words have meaning.

      This feat was very impressive, no doubt. But with each exaggeration, people lose faith. They begin to wonder: what is true, and what is marketing? What is real, and what is a cheap attempt by companies to rake in whatever cold, hard AI cash they can? Is this opportunistic, like viral pneumonia, or something we should really be looking at?

    • No.

      While there are many comments here that are reactions to other comments:

      Some people hype up LLMs without admitting any downsides. So, naturally, others get irritated with that.

      Some people anti-hype LLMs without admitting any upsides. So, naturally, others get irritated with that.

      I want people to write comments which are measured and reasonable.

    • This reply is argumentum ad personam. We could reverse it and say that GenAI companies push this hype down our throats out of fear that they are burning cash with no moat, but these kinds of discussions lead nowhere. It's better to focus on the core arguments.

  • I think it’s a good antidote to the hype train. These things are impressive but still limited; hearing only the hype is also a problem.

  • How does a statistical model become "perfect" instead of merely approaching it? What do you even mean by "perfect"?

    We already have determinism in all machines without this wasteful layer of slop and indirection, and we're all sick and tired of the armchair philosophy.

    It's very clear where LLMs will be used and it's not as a compiler. All disagreements with that are either made in bad faith or deeply ignorant.

    • > All disagreements with that are either made in bad faith or deeply ignorant.

      Declaring an opinion and then making discussion about it impossible isn't a useful way to communicate or reason about things.

It is wild that this is getting flagged!

  • Wait, why IS this flagged? It's a fairly straight-up tech topic. Granted, somewhat in a humorous vein, but still valid.

    • Flagging is the new downvote, with extra power. No one can say no to you: if enough people do it (who knows how many? 1, 5, 20? Definitely an order of magnitude fewer than upvotes, at least), the system automatically hides the post. And unless the mods care, the system can be abused very easily.

      I’ve seen posts with 500+ upvotes that were still flagged. I think the balance and automation around flagging are completely off and too easily abused.

Ah, two megapixel PNG screenshots, one of console text (hidpi, too!) and one of some IDE also showing text (plus a lot of empty space)... Great, great job, everyone.

It really can replace human engineers, mistakes and all. I've definitely written an "example" that I didn't actually test, only to find out it doesn't work.

I wonder if it feels the same embarrassment and shame I do, too.

Seems like a nothingburger? Mostly a spammy GitHub thread of people not reading the rest of the responses.

> Works if you supply the correct include path(s)

> Can confirm, works fine:

> You could arguably fault ccc's driver for not specifying the include path to find the native C library on this system.

> (I followed the instructions in the BUILDING_LINUX.txt file in the repo and got the kernel built for RISC-V. You can find the build I made here if someone is just interested in the binaries)
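
For context on the quotes above: the failing case is just the classic hosted hello world, sketched below. The -I workaround in the comments is an assumption that ccc's driver accepts a gcc-style include-path flag; the thread only says to "supply the correct include path(s)".

    /* hello.c: the smoke test from the issue. A hosted program like this
     * needs the C library's headers; the reported failure is that ccc's
     * driver does not search the system header directory on its own. */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello, world!\n");
        return 0;
    }

    /* Hypothetical workaround, assuming a gcc-style -I flag (not
     * confirmed in the thread):
     *   ccc -I/usr/include hello.c
     */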

  • >> Works if you supply the correct include path(s)

    The location of Standard C headers does not need to be supplied to a conformant compiler.

    >> You could arguably fault ccc's driver for not specifying the include path to find the native C library on this system.

    This is not a good implementation decision for a compiler that is not the C compiler distributed with the OS. Even though Standard C headers have well-defined names and public contracts, how they are defined is very much compiler-specific.

    So this defect is a "somethingburger."

    • Well, this compiler was written to build Linux as a proof of concept. You don't need a libc to build the kernel (see the freestanding sketch after this thread). Was it claimed anywhere that it is a fully compliant C compiler?

    • That's kind of moving the goalposts, no? They set out to build a C compiler that could compile the kernel, and it can do that just fine.

      Searching for "compliant" in the README doesn't seem to indicate that building a "conformant compiler" was even the goal here, so I'm not sure why that should suddenly be a requirement.
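
To make the no-libc point in the reply above concrete: kernel-style code is freestanding, meaning it uses only the headers the compiler itself ships (stddef.h, stdint.h, and friends) and supplies its own runtime, which is why the missing libc include path from this issue never comes up when building Linux. A minimal sketch follows; the build flags are the gcc/clang convention, and whether ccc accepts anything similar is an assumption:

    /* freestanding.c: kernel-style code with no libc dependency.
     * <stddef.h> and <stdint.h> are freestanding headers provided by the
     * compiler itself, not by the C library.
     *
     * Hypothetical build line in the gcc/clang convention (not confirmed
     * for ccc):
     *   cc -ffreestanding -nostdlib -c freestanding.c
     */
    #include <stddef.h>
    #include <stdint.h>

    /* The kernel defines its own memset-style helper instead of linking
     * against libc's. */
    void *kmemset(void *dst, int value, size_t len)
    {
        uint8_t *p = dst;
        while (len--)
            *p++ = (uint8_t)value;
        return dst;
    }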

They had GCC to use as an oracle/source of truth. Humans intervened multiple times. Clearly writing C compilers is a huge part of its training data—the literal definition of training on test data.

Wake me up when a model trained only on data through the year 1950 can write a C compiler.

The anti-AI crowd proves that they do need replacing as programmers, since this was user error. Opus 4.6/ChatGPT 5.3 xhigh is superior to the vast majority of programmers. Talk about grasping at straws.

  • They're literally following the first few lines of the README, exactly as instructed by Claude. I don't think it's unreasonable to point out the issue.

  • This will do the rounds on the front page of Reddit with no mention of the user's C library paths being the root cause, despite the clear error message stating that.