Comment by bbayles

1 day ago

I'm sympathetic to this view, but I also wonder if this is the same thing that assembly language programmers said about compilers. What do you mean that you never look at the machine code? What if the compiler does something inefficient?

Not even remotely close.

Compilers are deterministic. People who write them test that they will produce correct results. You can expect the same code to compile to the same assembly.

With LLMs, two people giving the exact same prompt can get wildly different results. That is not a tool you can use to blindly ship production code. Imagine if your compiler randomly threw in a syscall to delete your hard drive, or decided to pass credentials in plain text. LLMs can and will do those things.
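
As a toy illustration of that determinism claim (a minimal sketch, assuming gcc is on the PATH and a trivial hello.c in the working directory; toolchains that embed timestamps or build IDs would need extra care):

    # Compile the same C file twice and compare hashes of the outputs.
    import hashlib
    import subprocess

    def build_and_hash(out_name):
        subprocess.run(["gcc", "-O2", "-c", "hello.c", "-o", out_name], check=True)
        with open(out_name, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    first = build_and_hash("hello_1.o")
    second = build_and_hash("hello_2.o")
    # A deterministic compiler yields identical bytes; an LLM given the same
    # prompt twice offers no such guarantee.
    print("identical" if first == second else "different")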

  • Even ignoring determinism, with traditional source code you have a durable, human-readable blueprint of what the software is meant to do that other humans can understand and tweak. There's no analogue in the case of "don't read the code" LLM usage. No artifacts exist that humans can read or verify to understand what the software is supposed to be doing.

    • Yeah there is: it's called "documentation" and "requirements". And it's not like you can't go read the code if you want to understand how it works; it's just not necessary to do so while getting to working software. I truly do not understand why so many people are hung up on this "I need to understand every single line of code in my program" bs I keep reading here. Do you also disassemble every library you use and understand it? No, you just use it because it's faster that way.

  • Not only that, but compiler optimizations are generally based on rigorous mathematical proofs, so even without testing them you can be pretty sure the compiler will generate equivalent assembly. From the little I know of LLMs, I'm pretty sure no one has figured out what mathematical principles LLMs generate code from, so you can't be sure it's going to be right aside from testing it.

I write JS, and I have never directly observed the IRs or assembly code that my code becomes. Yet I certainly assume that the compiler author has looked at the compiled output in the process of writing a compiler!

For me the difference is prognosis. Gas Town has no ratchet of quality; its fate has been written on the wall since the day Steve decided he didn't want to know what the code says: it will grow to a moderate but unimpressive size before it collapses under its own weight. Even if someone tried to prop it up with stable infra, Steve would surely vibe the stable infra out of existence, since he does not care about that.

  • Or he will find a way to get the AI to create harnesses so it becomes stable. The lack of imagination and willingness to experiment in the HN crowd amazes me and worries me at the same time. I never thought a group of engineers would be the most conservative and closed-minded people I could discuss with.

    • It's a paradox, huh. If the AI harness became so stable that it wrote good code, he wouldn't be afraid to look at the code; he would be eager to look at it, right? But then, if it mattered whether the AI wrote good code or not, he couldn't defend his position that the way to create value with code is quantity over quality. He needs to sell the idea of something only AI can do, which means he needs the system to be made up of a lot of bad or low-quality code that no person would ever want to be forced to look at.

    • Wait till you meet engineers other than software engineers. I'm not even sure most software people should be called engineers, since there are no real accreditation standards. I specifically trained as an EE in physical electronics because the other disciplines at the time seemed really rigid.

      There's a saying that you don't want optimists building bridges.

    • There's a difference between "imagination and willingness to experiment" and "blind faith and gullibility".

The big difference is that compilation is deterministic: compile the same program twice and it'll generate the same output twice. It also doesn't involve any "creativity": a compiler is mostly translating a high-level concept into its predefined lower-level components. I don't know exactly what my code compiles to, but I can be pretty certain what the general idea of the assembly is going to be.

With LLMs, all bets are off. Is your code going to import leftpad, call leftpad-as-a-service, write its own leftpad implementation, decide that padding isn't needed after all, or use a close-enough rightpad instead? Who knows! It's just rolling dice, so have fun finding out!

  • > The big difference is that compilation is deterministic: compile the same program twice and it'll generate the same output twice.

    That's barely true now. Nix comes close, but builds are only bit-for-bit identical if you set a bunch of extra flags that aren't set by default. The most obvious instability is that CPU dispatch order (modern single-computer systems are themselves distributed, racy systems) changes the generated code ever so slightly.

    We don't actually care, because if one compiled version of the code uses r8 for a variable but a different compilation uses r9 for that variable, it doesn't matter: we just assume the resulting binary works the same either way. r8 vs r9 is an implementation detail that doesn't matter to humans. See where I'm going with this?

    If the LLM non-deterministically calls the variable fileName one day and file_name the next time it's given the same prompt, yeah, language syntax purists are going to suffer an aneurysm because one of those is clearly "wrong" for the language in use, but it's really more of an implementation detail at this point. Obviously you can't mix them, the generated code has to be consistent in which one it's using, but if compilers get to choose r8 one day and r9 the next, and we're fine with it, why is having the exact variable name that important, as long as it's being used correctly?
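
    To make that concrete, here is a toy sketch (hypothetical pad function, not from the thread): two generations that differ only in an internal variable name yet behave identically.

      # Two "generations" of the same left-pad helper. The only difference is
      # the internal variable name -- an implementation detail, like r8 vs r9.
      def pad_left(text, width, fill=" "):
          file_name = text
          return file_name.rjust(width, fill)

      def pad_left_alt(text, width, fill=" "):
          fileName = text  # non-idiomatic casing, but behaviorally identical
          return fileName.rjust(width, fill)

      assert pad_left("a.txt", 8) == pad_left_alt("a.txt", 8)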

    • I’ve done builds for aerospace products where the only binary difference between two builds of the same source code is the embedded timestamp. And per FAA review guidelines, this deterministic attribute is required, or else something is wrong in the source code or build process.

      I certainly don’t use all compilers everywhere, but I don’t think determinism in compilation is especially rare.
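
      A minimal sketch of that kind of check (hypothetical file names, not the actual FAA review process): diff two builds byte by byte and report where they differ.

        # Report byte offsets where two builds of the same source differ.
        # For a deterministic build, any differences should fall inside the
        # embedded timestamp field.
        def diff_offsets(path_a, path_b):
            with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
                a, b = fa.read(), fb.read()
            if len(a) != len(b):
                print(f"size differs: {len(a)} vs {len(b)} bytes")
            return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

        offsets = diff_offsets("build_a.bin", "build_b.bin")
        print(f"{len(offsets)} differing bytes, first few at {offsets[:10]}")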

The compiler is deterministic and the translation does not lose semantics: what it produces is an exact reflection of the meaning of your code.

  • We can tell you weren't around for the advent of compilers. To be fair, neither was I, since the UNIX C compiler came out in the early '70s and was far from the first compiler. You can make that claim about modern compilers, but early compilers didn't live up to it.

    • All compilers have bugs, and any loss of semantics during compilation would be considered a bug. For that claim to even be meaningful, the source and target languages need to be structured and specified. I wasn't around in the '60s either, but I don't think that has changed.

    • I've been programming since 6502/6510 assembly language and all compilers I've used were deterministic (which isn't the same thing as being bug free or producing the correct output for a given input).

No, it is not what assembly programmers said about compilers, because you can still look at the compiled assembly, and if the compiler makes a mistake, you can observe it and work around it with inline assembly or, if the source is available, improve the compiler. That is not the same as saying "never look at the code".

I feel like this argument would make a lot more sense if LLMs had anywhere near the same level of determinism as a compiler.

> but I also wonder if this is the same thing that assembly language programmers said about compilers

But as a programmer writing C code, you're still building out the software by hand. You still have to read and write a slightly higher-level encoding of the software.

With vibe coding, you don't even deal with encodings. You just prompt and move on.

  • I've wondered if people who write detailed specs, are overly detail-oriented, work in a regulated industry, or even work with offshore teams have success more quickly simply because they start with that behavior. Maybe they have a tendency to dwell before moving on, which may be slightly more iterative than someone who vibe-codes straight through.

I wonder if assembly programmers felt this way about the reliability of the electrical components their code relies upon...

  • I wonder if electrical engineers felt this way about the reliability of the silicon crystal lattice their circuits rely upon…

This analogy has been bad every time someone has used it. Compilers transform code directly via known algorithms.

Vibecoding is literally just random probabilistic mapping between unknown inputs and outputs on an unknown domain.

It feels like saying that because I don't know how my engine works, my car could've just been vibe-engineered. People have put thousands of hours into making these tools work up to a given standard and spec, reviewed by many, many people.

"I don't know how something works" != "This wasn't thoughtfully designed"

Why do people compare these things?