Comment by fn-mote
8 hours ago
“Needs to be” is a strong claim. The skill of debugging complex problems by stepping through disassembly to find a compiler error is very specialized. Few can do it. Most applications don’t need that “introspection”. They need the “encapsulation” and faith that the lower layers work well 99.9+% of the time, and they need to know who to call when it fails.
I’m not saying generative AI meets this standard, but it’s different from what you’re saying.
Sorry, I should clarify: it needs to be introspectable by somebody. Not every programmer needs to be able to introspect the lower layers, but that capability needs to exist.
Now I guess you can read the code an LLM generates, so maybe that layer does exist. But that's why I don't like the idea of a programming language made for LLMs, by LLMs, that's inscrutable to humans. A lot of the intermediate layers in compilers are designed for humans; only the assembly generation is made for the CPU.
This is a good point but may be moot. Our consumer-facing LLMs speak C, Python, and JavaScript.
Decompilers work in the machine-code direction for human consumption, and they can be improved by LLMs (rough sketch below).
Militarily, you will want systems capable of handling both machine code and JS.
Machine-code capabilities cover both memory leaks and firmware dumps, and negate the requirement of "source" comprehension.
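To make the decompiler point concrete, here is a hypothetical sketch (the function, names, and tool output are made up for illustration, not taken from this thread). Source like this:

    /* Hypothetical original source the compiler consumed. */
    int sum(const int *a, int n) {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += a[i];
        return total;
    }

compiles to a stripped binary, and a decompiler like Ghidra typically recovers something semantically equivalent but with the human-oriented information gone: a name along the lines of FUN_00101150, parameters called param_1 and param_2, no comments. An LLM pass over that output is largely a renaming and annotation job, which is the sense in which decompilers can be improved by LLMs.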
I wanted to +1 you but I don't think I have the karma required.
Also, smuggling a single binary out of a set of systems is likely far easier than targeting a source code repository or devbox directly.