
Comment by mr_donk

6 days ago

Don't you think the next step is a programming language that isn't even meant to be human readable? What's the point of using an LLM to generate python or Swift or whatever? The output of the LLM should be something that runs and does whatever it's been asked to do... why should the implementation be some programming language that was designed for humans to grok? Once that's true the idea of it being maintainable becomes moot, because no one will know what it is in the first place. I don't think we're there yet, but that seems like the eventual destination.

All good software is in a constant state of maintenance - users figure out new things they want to do, so requirements are constantly changing.

A running joke we had at my startup years ago was "... and after we ship this feature we'll be finished and we will never have to write another line of code again!"

Good software is built with future changes in mind - that's why I care so much about things like automated test suites and documentation and code maintainability.

Letting LLMs generate black-box garbage code sounds like a terrible idea to me. The idea that LLMs will get so good at coding that we won't mind is pure science fiction - writing code is only one part of the craft of delivering useful software.

Isn't that what machine code is?
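
As a minimal illustration of that layering, here's Python's standard `dis` module showing the human-readable form next to the bytecode the runtime actually executes (exact opcodes vary by CPython version):

```python
import dis

def add(a, b):
    return a + b

# The def above is the form meant for humans; below is what the VM runs.
dis.dis(add)
# Typical CPython output (opcodes vary by version):
#   LOAD_FAST    0 (a)
#   LOAD_FAST    1 (b)
#   BINARY_OP    0 (+)
#   RETURN_VALUE
```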

LLMs work best with interfaces meant for humans because they're trained on human behavior. It's why they generate JSON and not BSON.
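
A minimal sketch of that contrast, using only the standard library (the binary bytes hand-roll BSON's encoding of a single int32 field, purely for illustration):

```python
import json
import struct

doc = {"n": 42}

# JSON: plain text, the kind of serialization LLMs see everywhere in training data.
print(json.dumps(doc))  # {"n": 42}

# BSON-style binary encoding of the same document, hand-rolled for illustration:
# int32 total length, then an element (type 0x10 = int32, cstring key, int32
# value), then a trailing 0x00.
element = b"\x10" + b"n\x00" + struct.pack("<i", 42)
bson_doc = struct.pack("<i", 4 + len(element) + 1) + element + b"\x00"
print(bson_doc)  # b'\x0c\x00\x00\x00\x10n\x00*\x00\x00\x00\x00' - opaque bytes
```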

That raises the question of what abstraction layer, if any, is necessary beyond an assembler. If hand-crafted ASM outcompetes compiled C, why not give LLMs the wheel on ASM? Then another question: is there enough good ASM publicly available to serve as examples? (A concrete sketch of the idea follows after this sub-thread.)

  • There's certainly enough ASM available if the LLMs-can-reason hypothesis is true.

    You'd only need one accurate ASM manual per variant.
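
As for giving LLMs the wheel below the assembler: here's a minimal sketch of executing raw machine-code bytes directly from Python. It's Linux/x86-64 only, and it assumes the OS permits a writable+executable mapping - the mechanics of running the bytes turn out to be the easy part; finding enough good training examples is the hard one.

```python
import ctypes
import mmap

# Raw x86-64 machine code for: mov eax, 42; ret
CODE = b"\xb8\x2a\x00\x00\x00\xc3"

# Map an anonymous page that is readable, writable, and executable,
# then copy the instruction bytes into it.
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(CODE)

# Treat the page's address as a C function pointer (int (*)(void)) and call it.
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
fn = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
print(fn())  # 42
```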

It might even go further. I imagine a day when AI would generate an executable neural network that models (and is optimized for) a specific problem; i.e., a kind of model that runs on a neural-network runtime or VM. Who cares what the NN is doing as long as it's doing its job correctly? The big catch, though, is the keyword "correctly", and I would add "deterministically" to it, in order for users to trust it.

  • yeah, that's probably more along the lines of what I was thinking, actually, you just worded it better :)
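
A toy sketch of that "model as program" idea, assuming NumPy: the entire program below is a handful of fixed weights that happen to make a tiny network compute XOR, and the forward pass is fully deterministic - same input, same output, every run:

```python
import numpy as np

# Hypothetical output of a model-generating AI: the whole "program" is just
# these opaque numbers. This particular set makes a two-layer net compute XOR.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])
b2 = 0.0

def step(x):
    return (x > 0).astype(float)

def net(x):
    # Deterministic forward pass: no sampling, no temperature.
    h = step(x @ W1 + b1)
    return step(h @ W2 + b2)

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(int(a), int(b), "->", int(net(np.array([a, b]))))
# 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```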

> Don't you think the next step is a programming language that isn't even meant to be human readable?

Malbolge is a couple of decades old. Apparently the first working "Hello World" was made by a genetic algorithm.
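
That kind of blind search is easy to demo. A minimal genetic-algorithm sketch (evolving a plain string rather than Malbolge, for brevity) - nothing in the loop "understands" the target; it only mutates and selects:

```python
import random

TARGET = "Hello World"
ALPHABET = [chr(c) for c in range(32, 127)]  # printable ASCII

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

# Random initial population; no individual knows anything about the target.
pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
gen = 0
while max(fitness(s) for s in pop) < len(TARGET):
    survivors = sorted(pop, key=fitness, reverse=True)[:20]
    pop = [mutate(random.choice(survivors)) for _ in range(200)]
    gen += 1
print(f"generation {gen}: {max(pop, key=fitness)}")
```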

Yeah, it does seem like a game of telephone to train LLMs on code optimized for human cognition, then have them create behavior by parroting that code back into a compiler. Could they just create behavior directly?

Building complex things requires deterministic, reliable, and understandable abstractions.

I don’t see where current AI fits into this, except as a better IntelliSense for an IDE.

We're seeking angel investors for our startup that does this: we train models on "assembly" that does specific things, and also on complete "programs"; the end goal is to prompt it and get executables out. It's farther along than quantum computing at solving real problems; for instance, it can factor "15".

This is like the third time I've mentioned this on HN (over a year). Apparently everyone else is too busy complaining about or defending Claude Code to notice.