Comment by pyman
6 months ago
Feels like we're heading towards a world where computer languages disappear, and we just use human language to tell machines what to do. Kinda like how typewriters got replaced by computers in the 80s. Back then, people spent so much time making sure there were no typos, they'd lose focus on the actual story they were trying to write.
Same thing's happening now with code. We waste so much time dealing with syntax, fixing bugs, naming variables, setting up configs, etc., and not enough time thinking about the real problem we're trying to solve.
From Assembly to English. What do you reckon?
As much as I'm finding LLMs incredibly useful, this "world where computer languages disappear" doesn't resonate with me at all. I have yet to see any workflows where the computer language is no longer a critical piece of the puzzle, or even significantly diminished in importance.
I think there is an important difference between LLM-interpreted English, and compiler-emitted Assembly, which is determinism.
The reason we're still going from human prompt to code to execution, rather than just prompt to execution, is that the code is the point at which determinism can be introduced. And I suspect it will always be useful to have this determinism capability. We certainly spend a lot of time debugging and fixing bugs, but we'd spend even more time on those activities if we couldn't encode the solutions to those bugs in a deterministic language.
Now, I won't be at all surprised if this determinism layer is reimplemented in totally different languages that maybe aren't even recognizable as "computer languages". But I think we will always need some way to say "do exactly this thing", and the current computer languages remain much better for this than the current techniques for prompting AI models.
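As a purely hypothetical sketch (the date-parsing bug and the llm() call below are made up for illustration, not from any real tooling), the difference looks something like this:

    # Hypothetical illustration: once a bug fix is encoded in a
    # deterministic function, it is applied identically on every run.
    def normalize_date(raw: str) -> str:
        """Encodes the fix for a made-up bug: '7/4/25' must always be
        read as day/month/year, never month/day/year."""
        day, month, year = (int(part) for part in raw.split("/"))
        return f"20{year:02d}-{month:02d}-{day:02d}"

    # The prompt-only alternative restates the rule on every call and
    # hopes the model interprets it the same way each time:
    #   llm("Normalize '7/4/25' to ISO 8601, treating it as day/month/year.")

The point isn't that the code is clever; it's that the rule, once written down, never drifts.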
I predict we'll enter a world where these wand-waving prompts are backed by well-structured frameworks that eliminate the need to dig into the code.
Originally I thought LLMs would add a new abstraction layer, like C++ -> PHP, but now I think we will begin replacing swaths of "logically knowable" processes one by one, with dynamic and robust interfaces. In other words, LLMs, if working under the right restrictions, will add a new layer of libraries.
A library for auth, a library for form inputs, etc. Extensible in every way, with easy translation between languages. And you can always dig into the code of a library, but mostly they just work as-is. LLMs thrive with structure, so I think the real next wave will be adding various structures on top of general LLMs to achieve this.
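To sketch what I mean (a hypothetical example; ParsedAddress, parse_address, and llm_call aren't from any real library), such a library might put the model behind a narrow, typed interface:

    # Hypothetical sketch of an "LLM-backed library": the model sits behind
    # a small typed interface, and callers never touch prompts directly.
    from dataclasses import dataclass

    @dataclass
    class ParsedAddress:
        street: str
        city: str
        postal_code: str

    def parse_address(free_text: str, llm_call) -> ParsedAddress:
        """Turn messy user input into a structured record. `llm_call` is a
        stand-in for whatever model client is plugged in and is assumed to
        return a dict parsed from the model's JSON output."""
        fields = llm_call(
            "Extract street, city and postal_code as JSON from: " + free_text
        )
        # Deterministic validation wraps the non-deterministic model call.
        if not fields.get("postal_code", "").strip():
            raise ValueError("postal_code missing")
        return ParsedAddress(**fields)

You could dig into that code if you needed to, but mostly you'd just call parse_address and move on, which is what I mean by a new layer of libraries.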
This is possible. But when I read something like this, I just wonder: why would this be more efficient than doing it with the thing we already call "libraries" - that is, a normal library or component written in some computer language - and just using AI to create and perfect those libraries more quickly?
I'm not even sure I disagree with your comment... I agree that I think LLMs will "add a new layer of libraries" ... but I think it seems fairly likely that they'll do that by generating a bunch of computer code?
Sorry, this is implausible.
English is just too poorly-specified. Programs need to be able to know exactly what they're supposed to do next, what their output is supposed to be, etc. Even humans need to ask each other for clarification and such all the time.
If you want to use English to specify a program, by the time you've adjusted it to be clear and specific enough to actually be able to do that...it turns out you've made a programming language.
We live in a world with 7,000 human languages and around 8,000 programming languages. Most people only learn a handful, which limits how effectively they can express intent. This is inefficient.
In theory, one universal language would solve that, for both humans and machines.
Maybe the best solution isn't one language (English, Spanish, Golang, or Python), but one interface that understands all of them. And that's what LLMs might become.
Obligatory XKCD: https://xkcd.com/927/
I think this can be resolved with verbosity, our old friends abstraction and modularization, and an unfamiliarly flexible parser.
English is not well-specified or unambiguous. Programming languages aim to be. This is a massive difference. Recall that laws are specified in English.
Laws attempt to solve this problem with verbosity. It works pretty well but of course the exceptions are always interesting.
But I think the domain of an AI-first PL could be much smaller. So the language would be "lower-level" than English but "higher-level" than any existing PL, including AppleScript etc., because it would not have to follow the same kinds of strict parser rules.
With a smaller domain, I think the necessary verbosity of an AI-first PL could be acceptable and less ambiguous than law.
This is an interesting debate. For me, the real question is: What's the goal of any language (human or programming)?
In my opinion, it's to communicate intent, so that intent can be turned into action. And guess what? LLMs are incredibly good at picking up intent through pattern matching.
So, if the goal of a language is to express intent, and LLMs often get our intent faster than a software developer, then why is English considered worse than Python? For an LLM, it's the same: just patterns.
> We waste so much time dealing with syntax, fixing bugs, naming variables, setting up configs
I definitely don't do that. It's a very small part of my job. And AFAIK, LLMs cannot generate assembly language yet, and CPUs don't understand English.
I've used various LLMs to generate x86, MIPS, and RISC-V assembly with mostly usable results. You tend to see what it was trained on pretty quickly if you go deep, though.
> Back then, people spent so much time making sure there were no typos, they'd lose focus on the actual story they were trying to write.
Were you a published author in the 80s?
Because I highly doubt this was how writers in the 80s thought of their job.
No, but I've studied the history of computers and keyboards. There's plenty of evidence that writing with typewriters was much slower than using a computer. Writers were also more limited creatively, since they couldn't easily edit or move things around once the page was written.
Slow doesn’t necessarily mean less creative. In fact it’s been argued that being slow and deliberate actually pulls you out of automated patterns of thinking and gives you time to mull over what you want to say.
This is even enhanced when you create a superficial barrier such as writing in all caps.
> Feels like we're heading towards a world where computer languages disappear, and we just use human language to tell machines what to do.
I agree, but it feels like we need a new type of L_X_M. Like an LBM (Large Behavior Model), which is trained on millions of different actions, user flows, displays, etc.
Converting token weights into text-based code designed to ease the cognitive load on humans seems wildly inefficient compared to converting tokens directly into UI actions and behaviors.
While I agree with all the previous comments, your comment sparked an idea in me. I started imagining a future where we develop a new programming language optimized for LLMs to write and understand. In this hypothetical scenario, we would still need developers to debug and review the code to ensure deterministic outputs. Maybe this isn't so far-fetched after all. Of course, this is just speculation and imagination on my part.
Relevant: LLMunix - A Pure Markdown Operating System - https://news.ycombinator.com/item?id=44279456 - Jun, 2025 (1 comment)
You'd need a training set covering all the useful cases - something we don't have even now for mainstream languages.
Another good analogy is how calculators (the people who performed mathematical calculations) were replaced by machines. Sure, they were eventually put out of work; nonetheless, mechanical and then electronic calculators made entire industries so efficient that everyone's wealth increased and new positions and jobs were created.
We will be fine.
> We will be fine.
No, we won't. Other people might be.
The new positions and jobs are for the new people. For those automated away half-way through their careers, there's at best competing for entry-level salaries with a much younger cohort freshly out of school.
It's something I see people consistently missing in the whole labor automation discussion space. Luddites didn't rise up because they hated the future either. They rose up because contemporary developments threatened to throw them specifically, and their families and children, into poverty, starvation and death.
Long-term social impact != immediate-term personal impact.
I reckon that while my programming has become more productive with LLMs, it has at the same time gotten a bit more frustrating and boring.
I think it is difficult to know in advance when the LLM will do a reasonable or good job and when it won't. But I am slowly learning when and how to use the tools while still enjoying using them.
Perhaps solving the real problem implies using programming languages?
Or perhaps it doesn't. An architect also solves a real problem, even though he's not laying brick.
It is the blueprints (the detailed design, plans, sections, etc.) that are analogous to code, not the bricks. Software designers (compared to building designers) are lucky that the process of turning the design (code) into the artifact (running software) is virtually free in terms of cost and time. However, software designers are unlucky in that what they do is so misunderstood - not least by themselves.
I think this is a good point. But just as in the real world, where the execution of the architect's solution is often subpar, the "debugging" involves both the architectural specs and the builder's execution.
I think that in programming we will still have to understand the builder's execution, which should remain deterministic, hopefully not at the level of assembly.