Comment by armchairhacker

8 hours ago

“The actual fundamentals, the things-in-themselves, the theory behind the action” don’t go away; they change.

Programmers used to work with punch cards, then assembly, then low-level languages with odd quirks. Today few developers even think about first-party code size, micro-optimizations, register allocation, etc. LLMs are just another abstraction.

A developer with the ideal AI code writer (which we’re not at yet) must still think about the idea, design, scope, etc., like a product owner or manager. And these concepts have theory behind them, sometimes even math (e.g. time complexity).

EDIT to comment on the article: all abstractions are leaky, but sometimes it barely matters. Today we do still need to understand code quality and architecture when working with LLMs, or the software will get bad enough that it affects the company. But maybe not next year. An analogy: stack vs. heap, memory allocation, etc. still matter in high-performance software, which isn’t uncommon, but programmers almost never think about register allocation.

LLMs are not another abstraction. ALL OTHER LAYERS you named are fully deterministic, understood, debuggable, etc.

You cannot be serious.

  • A non-deterministic layer seems like exactly the kind of thing that needs a competent professional to ensure a good outcome, so it doesn't follow that LLM usage will depress wages any more than high-level languages did by opening up programming to tens of millions of people who could never grok assembly.

  • Counter-point: most developers have neither the knowledge nor the eagerness to actually do that debugging, so it doesn't really matter.

    • It DOES matter, because the claim that LLMs are a layer of abstraction implies that they're somehow more than random word generators. They do a great job of generating words in the right order, and often, given enough time, datacenter resources, money, and training, they can produce code that runs and does things as expected.

      However, there is absolutely nothing stopping an LLM from "deciding" tomorrow that a fix it built a week ago is no longer real, because not only has that fix left its context, but also the bug was not obvious.

      2 replies →

  • LLMs are one of the most general abstractions possible.

    LLMs are also quite deterministic if you want them to be: the apparent randomness mostly comes from the final token selection being deliberately randomized (that’s what the model’s “temperature” controls), and you can turn that randomization off. But the word you’re looking for here is probably not actually determinism; it’s probably something closer to predictability.

    In any case, it’s perfectly possible to ensure that the output of LLMs is fully deterministic, debuggable, understandable, and testable; a rough sketch of the deterministic part follows below.
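
    As an illustration of the temperature point, here is a minimal sketch; it assumes the Hugging Face transformers library and the small "gpt2" checkpoint, neither of which anyone here has mentioned. With sampling turned off, token selection is a plain argmax, so repeated runs on the same setup print the same text.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        prompt = "An abstraction layer is leaky when"
        inputs = tokenizer(prompt, return_tensors="pt")

        # do_sample=False disables the temperature / top-k / top-p sampling
        # that makes LLM output look random; greedy argmax decoding is
        # repeatable for the same model, prompt, and hardware.
        out = model.generate(**inputs, do_sample=False, max_new_tokens=20)
        print(tokenizer.decode(out[0], skip_special_tokens=True))

    The same idea carries over to hosted APIs: set the temperature to 0 (and a fixed seed where the provider offers one) and the output becomes far more repeatable, which is what “deterministic if you want them to be” is getting at.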

    > You cannot be serious.

    I don’t think you’re thinking about this clearly.

    • > LLMs are also quite deterministic if you want them to be

      Only in the shallow sense that any PRNG is deterministic if you fix the seed and control the triggering order (a toy sketch follows below).

      However, that's not usually the situation or scope people are talking about.
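
      To make the shallow sense concrete, here is a throwaway Python sketch (the draw helper is made up for illustration): a seeded PRNG replays exactly the same values, but only because the seed and the order of every draw are controlled.

          import random

          def draw(seed):
              rng = random.Random(seed)          # private RNG with a fixed seed
              return [rng.randint(0, 9) for _ in range(5)]

          assert draw(42) == draw(42)            # same seed, same sequence

          # With a shared global RNG (random.seed), the same guarantee also
          # depends on every caller drawing in the same order; that is the
          # "triggering order" caveat above.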

    • With a sufficiently complex prompt and a sufficiently complex codebase, LLMs consistently fail: they make mistakes, "forget" parts of the prompt, etc.

      There's no comparison to be made between this and, for example, a compiler. It's an incompetent comparison.

      > I don’t think you’re thinking about this clearly.

      My literal job is dealing with layers of abstraction. I'm thinking pretty clearly when I tell you that not only are LLMs a super leaky, terrible abstraction, they are also not comparable to any other layer of abstraction. All the other layers of abstraction we use are well understood, predictable (as you put it), and DEBUGGABLE.

      When Claude deletes a fix it made two weeks ago while trying to fix some unrelated error, do you never stop and think, "this is not quite the same as what GCC does"?

      1 reply →