Comment by sanderjd
2 days ago
As much as I'm finding LLMs incredibly useful, this "world where computer languages disappear" doesn't resonate with me at all. I have yet to see any workflows where the computer language is no longer a critical piece of the puzzle, or even significantly diminished in importance.
I think there is an important difference between LLM-interpreted English and compiler-emitted assembly: determinism.
The reason we're still going from human prompt to code to execution, rather than just prompt to execution, is that the code is the point at which determinism can be introduced. And I suspect it will always be useful to have this determinism capability. We certainly spend a lot of time debugging and fixing bugs, but we'd spend even more time on those activities if we couldn't encode the solutions to those bugs in a deterministic language.
Now, I won't be at all surprised if this determinism layer gets reimplemented in totally different languages, ones that may not even be recognizable as "computer languages". But I think we will always need some way to say "do exactly this thing", and current computer languages remain much better for that than current techniques for prompting AI models.
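To make the determinism point concrete, here's a minimal sketch (the function names and the model stub are made up, not any real API): once a fix is encoded in code it behaves identically on every call, whereas a prompt gets re-interpreted each time it runs.

```python
import random

def normalize_phone(raw: str) -> str:
    """Deterministic path: the fix lives in code, so every call behaves identically."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    # The encoded "bug fix": strip a leading US country code so stored numbers are uniform.
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]
    return digits

def hypothetical_llm(prompt: str) -> str:
    # Stand-in for a model call; output can vary between runs or model versions.
    return random.choice(["5558675309", "+1 555 867 5309", "(555) 867-5309"])

def normalize_phone_via_prompt(raw: str) -> str:
    """Prompt path: the same instruction gets re-interpreted on every call."""
    return hypothetical_llm(f"Normalize this US phone number to 10 digits: {raw}")

# Holds on every run; the prompt-based version offers no such guarantee.
assert normalize_phone("+1 (555) 867-5309") == "5558675309"
```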
I predict we enter a world where these wand-waving prompts are backed by well-structured frameworks that eliminate the need to dig into the code.
Originally I thought LLMs would add a new abstraction layer, like C++ -> PHP, but now I think we will begin replacing swaths of "logically knowable" processes one by one with dynamic and robust interfaces. In other words, LLMs, if working under the right restrictions, will add a new layer of libraries.
A library for auth, a library for form inputs, etc. Extensible in every way, with easy translation between languages. And you can always dig into the code of a library, but mostly they just work as-is. LLMs thrive with structure, so I think the real next wave will be adding various structures on top of general LLMs to achieve this.
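To sketch what an "LLM under the right restrictions" library might look like, here's a hypothetical example (the schema, function names, and stubbed model call are all illustrative, not an existing package): the model handles the fuzzy mapping from messy input to fields, while a fixed, typed interface and deterministic validation provide the structure around it.

```python
from dataclasses import dataclass

@dataclass
class SignupForm:
    email: str
    full_name: str

def llm_extract_fields(free_text: str) -> dict:
    # Stand-in for a model call; a real version would invoke a provider SDK.
    # Hard-coded here so the sketch runs without any dependencies.
    return {"email": "ada@example.com", "full_name": "Ada Lovelace"}

def parse_signup(free_text: str) -> SignupForm:
    """The structured boundary: the model proposes, deterministic code validates."""
    candidate = llm_extract_fields(free_text)
    email = candidate.get("email", "")
    if "@" not in email:
        raise ValueError("model output failed validation; reject or re-prompt")
    return SignupForm(email=email, full_name=candidate.get("full_name", ""))

print(parse_signup("hi, I'm Ada Lovelace, reach me at ada@example.com"))
```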
This is possible. But when I read something like this, I just wonder: why would this be more efficient than building the thing we already call "libraries" - that is, normal libraries or components written in some computer language - and just using AI to create and perfect those libraries more quickly?
I'm not even sure I disagree with your comment... I agree that LLMs will "add a new layer of libraries"... but it seems fairly likely that they'll do that by generating a bunch of computer code?