Comment by frollogaston

20 hours ago

Same as when higher-level languages replaced assembly for a lot of use cases. And btw, at least in places I've worked, better traditional tooling would replace a lot more headcount than AI would.

Not even close, those were all deterministic, this is probabilistic.

  • The output of the LLM is probabilistic. The code you actually commit or merge is not.

    • Exactly. If LLMs were like higher-level languages, you'd be committing the prompt. LLMs are actually like auto-complete, snippets, Stack Overflow and Rosetta Code. It's not a higher level of abstraction; it's a tool for writing code.

    • The parent is saying that when higher-level languages replaced assembly, you only had to learn the higher-level language. Once you learned it, the machine did precisely what you specified, and you did not have to inspect the assembly to make sure it was compliant. Furthermore, you were forced to be precise and to understand what you were doing when you were writing the higher-level language.

      Now you don't really have to be precise at any level to get something 'working'. You may not be familiar with the generated language or libraries, but it could look good enough (just as the assembly would have looked good enough). So, sure, if you are very familiar with the generated language and libraries, and you inspect every line of generated code, then maybe you will be ok. But often the reason you are using an LLM is that, e.g., you don't understand or use bash frequently enough to get it to do what you want. Well, the LLM doesn't understand it either. So that weird bash construct that it emitted: did you read the documentation for it? You might have if you had to write it yourself.

      In the end there could be code in there that nothing (machine or human) understands. The less hard-won experience you have with the target and the more time-pressed you are the more likely it is that this will occur.

    • The code that the compiler generates, especially in the C realm or with dynamic compilers, is also not regular; hence the tooling constraints in high-integrity computing environments.

    • Yes.

      The output of the LLM is determined by the weights (parameters of the artificial neural network) estimated in the training as well as a pseudo-random number generator (unless its influence, called "temperature", is set to 0).

      That means LLMs behave as "processes" rather than algorithms, unlike any code that may be generated from them, which is algorithmic (unless instructed otherwise; you could also tell an LLM to generate an LLM).
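The temperature point can be made concrete. A minimal sketch in pure Python (hypothetical logits for a tiny vocabulary, not any particular model's API): at temperature 0 the sampler collapses to a deterministic argmax, while any temperature above 0 draws from a probability distribution.

```python
import math
import random

def sample_next_token(logits, temperature):
    """Pick the index of the next token from raw scores (logits)."""
    if temperature == 0:
        # Temperature 0: no randomness, always the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise: scale by temperature, softmax into weights, draw at random.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for a 3-token vocabulary
print(sample_next_token(logits, 0))    # deterministic: always index 0
print(sample_next_token(logits, 1.0))  # probabilistic: varies between runs
```

Real inference stacks also seed the pseudo-random number generator, which is why the same prompt with the same seed and temperature can reproduce an output, yet the process is still best thought of as sampling rather than computing.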

  • So what? I know most compilers are deterministic, but it really only matters for reproducible builds, not that you're actually going to reason about the output. And the language makes few guarantees about the resulting instructions.

  • Yet the words you chose to use in this comment were modelled entirely inside your brain in a not-so-different manner.

I already see this happening with low code, SaaS and MACH architectures.

What used to be a project building a CMS backend is now spent doing configurations on a SaaS product and, if we are lucky, a few containers/serverless functions for integrations.

There are already AI based products that can automate those integrations if given enough data samples.

Many believe AI will keep using current programming languages as a translation step, just like those Assembly developers thought that compiling via Assembly text generation, and feeding it into an Assembler, would still be around.

  • > just like those Assembly developers thought that compiling via Assembly text generation, and feeding it into an Assembler, would still be around

    Confused by what you mean. Is this not the case?

    • No, only primitive UNIX toolchains still do this; most modern compilers generate machine code directly, without generating Assembly text files and executing the Assembler process on them.

      You can naturally revert to the old ways by asking for the Assembly manually and calling the Assembler yourself.