
Comment by keyle

6 hours ago

How long until AI is not even writing code but producing machine code?

Think about it, all these compilers, tooling, what a waste!

I imagine a future where chipset makers will provide a model you can just prompt to "act upon that chipset" and voila, "You're absolutely right! Here is your binary."

We won't be developers, we won't be devops, we'll be rollmops! /s

Coding agents can write ASM. But if you mean writing the actual byte-code, that will require a very different approach at a very different level of abstraction, one LLMs are not designed for. Keep in mind that all LLMs are trained first on text and then fine-tuned on code.

My hunch is that it would take years of hundreds of thousands of developers working with machine code, posting Stack Overflow questions about machine code, and publishing GitHub repos written in it with documentation. That's the free labor LLMs leveraged to learn high-level langs.
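To make the abstraction gap concrete, here's a minimal sketch using CPython's bytecode as a stand-in for machine code (the source string is made up): what a compiler emits is a flat run of opcode/operand bytes, not the kind of annotated, searchable text that LLM training corpora are made of.

```python
# A minimal sketch, using CPython bytecode as a stand-in for machine
# code: the compiler's output is opaque opcode/operand bytes, not the
# annotated text that LLMs are trained on.
src = "total = price * quantity"   # illustrative source line
code = compile(src, "<demo>", "exec")

print(type(code.co_code))     # <class 'bytes'>
print(len(code.co_code) > 0)  # True: raw bytes, no names inline
```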

>We won't be developers, we won't be devops, we'll be modelops! /s

I can still see this happening with higher-level langs. The thing is, the compiler isn't replaced in the training data; more likely, LLMs will give rise to semideterministic layers on top of compilers.

I could see Nvidia achieving this first, given how nice the devex is with CUDA.

  • I heard they are already proficient at assembly languages.

    • They are - probably more proficient than with some high-level languages. I've used them for embedded stuff, including TI Sitara PRU assembly, with great results. Frontier models can also easily "learn" directly from the manuals; asm is quite easy for them to pick up due to its "flat" (non-structured) nature.


  • FWIW I think "LLMs are semideterministic" is something of a red herring. The real difference between LLM codegen and compilers is that compilers output logically the same assembly regardless of the variable names. If you're numerically solving a differential equation, the compiler does not care whether the floats represent heat through a pipe or dollars through a brokerage. Compilers don't care about semantic meaning; that concern is totally separated.

    But even if it's putatively implementing the same algorithm, LLMs certainly do not output basically the same finance Python as they would mechanical engineering Python. The style will be a little different. Sometimes the performance/clarity tradeoffs will be different. Sometimes it'll be fairly fancy and object-oriented, other times it'll be more low-level "objects are just dicts."

    It's way more than a higher abstraction layer: LLM codegen involves a nontechnical tangling of concerns that doesn't exist with even the hoity-toitiest proof-checking compilers. It's a complete sea change. I find it incredibly disconcerting... for the same reason, by the way, that assembly programmers found Fortran and C disconcerting, and continued to reliably find employment for a good 40 years after higher-level languages were invented :) Actually even today. The assembly programmers who got hosed by C tended to be electricians who learned on the job - it's kind of cool to read old manuals from the 70s, carefully (and correctly!) explaining to electricians that a computer program is essentially an ephemeral circuit.

    But I think there are specific skills around scientific thinking (learned at a formal college) and engineering carefulness (learned via hard knocks) that aren't going anywhere.
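The compiler-invariance point a few comments up can be checked directly in CPython (a sketch; the heat/brokerage function names are made up for illustration): two functions differing only in their names compile to byte-for-byte identical instruction streams, because locals are addressed by slot index rather than by name.

```python
# Sketch of the compiler-invariance claim: CPython emits identical raw
# bytecode for two functions that differ only in variable names, since
# locals are referenced by slot index. (All names here are illustrative.)

def heat_flow(flux, area):
    total = flux * area
    return total

def cash_flow(dollars, accounts):
    total = dollars * accounts
    return total

# Same instructions; only the name tables (co_varnames etc.) differ.
print(heat_flow.__code__.co_code == cash_flow.__code__.co_code)        # True
print(heat_flow.__code__.co_varnames == cash_flow.__code__.co_varnames)  # False
```

An LLM asked to write these two functions, by contrast, sees only the names and has no such separation of concerns.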