The gist ("product becomes a black box") applies to any abstraction. It could apply to high-level languages (including so-called "low-level" languages like C), and few people criticize those.
But LLMs are particularly insidious because they're a particularly leaky abstraction. If you ask an LLM to implement something:
- First, there's only a chance it will output something that works at all
- Then, it may fail on edge-cases
- Then, unless it's very trivial, the code will be spaghetti, so neither you nor the LLM can extend it
Vs a language like C, where the original source is indecipherable from the assembly but the assembly is almost certainly correct. When GCC or clang does fail, only an expert can figure out why, but it happens rarely enough that there's always an expert available to look at it.
Even if LLMs get better, English itself is a bad programming language, because it's imprecise and not modular. Tasks like "style a website exactly how I want" or "implement this complex algorithm" can't be described without inventing jargon or becoming extremely verbose, at which point you'd spend less effort, and write less, using a real programming language.
If people end up producing all code (or art) with AI, it won't be through prompts, but fancy (perhaps project-specific) GUIs if not brain interfaces.
I built a Klatt formant synthesizer with Claude from ~120 academic papers. Every parameter cites its source; the architecture is declarative YAML with phased rule execution and dependency resolution.
Here's what "spaghetti" looks like:
https://github.com/ctoth/Qlatt/blob/master/public/rules/fron...
One thing I want to point out. Before I started, I gathered and asked Claude to read all these papers:
https://github.com/ctoth/Qlatt/tree/master/papers
Producing artifacts like this:
https://github.com/ctoth/Qlatt/blob/master/papers/Klatt_1980...
Note how every rule and every configurable thing in the synthesizer pipeline has a citation to a paper? I can generate a phrase, and with the "explain" command I can see precisely why a certain word was spoken a certain way.
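For anyone unfamiliar with the pattern being described, here is a minimal sketch of "declarative rules with per-rule citations, executed in dependency order, with an explanation trail." The rule names, citations, and values below are hypothetical illustrations, not taken from the Qlatt repository:

```python
# Sketch: declarative rules carrying citations, run in dependency order.
# All rule names/values here are invented for illustration.
from graphlib import TopologicalSorter

# Each rule declares its citation, its dependencies, and a transform.
rules = {
    "f1_target":  {"cite": "Klatt (1980), Table I", "deps": [],
                   "apply": lambda s: {**s, "f1": 500}},
    "f1_coartic": {"cite": "Klatt (1987), sec. IV", "deps": ["f1_target"],
                   "apply": lambda s: {**s, "f1": s["f1"] * 1.1}},
    "duration":   {"cite": "Klatt (1976)",          "deps": [],
                   "apply": lambda s: {**s, "dur_ms": 120}},
}

def run(rules, state=None):
    """Apply rules in topological (dependency) order, recording a trail
    so an 'explain'-style command can cite the paper behind each step."""
    state = dict(state or {})
    trail = []
    graph = {name: r["deps"] for name, r in rules.items()}
    for name in TopologicalSorter(graph).static_order():
        state = rules[name]["apply"](state)
        trail.append(f"{name}: {rules[name]['cite']}")
    return state, trail

state, trail = run(rules)
```

The point of the trail is that every value in the final state can be traced back to the rule, and therefore the paper, that produced it.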
Well - I've learned what a "formant" is today. Looking at the repo, it's not obvious to me what .md is authored by you vs system-generated. This is an observation, not a criticism. I was looking for the prompts you used to specify the papers summarization, which is very nice.
This was my take also until recently. It’s now become possible to write better code, faster than you’d get without AI.
It’s all about planning and architecting things before any code is generated. It’s not vibe-coding prompt->code; it’s idea->design->specification->review->specification->design->architecture->define separate concerns->design interfaces/inter-module APIs->define external APIs->specification/review->specification->now start building the code, module by module. Writing in C++ at least, I have had very good results with this approach, and I’m ending up with better, more maintainable code, fewer bugs to hunt, and all in about 0.15-0.25 of the time it would have taken to hack something together as an MVP that would have to be rewritten.
I agree; there is a reason we settled on programming languages as an interface for instructing the machine. Ultimately a language is a tool for expressing our thoughts as precisely as possible in a certain domain.
People that don't understand the tools they use are doomed to reinvent them.
Perhaps the interface will evolve into pseudo code where AI will fill in undefined or boilerplate with best estimates.
> People that don't understand the tools they use are doomed to reinvent them.
Having reinvented them, they will understand. In this way, human progress is unstoppable, even through micro (and macro) dark ages of knowledge.
There are plenty of alternatives for programming than through written language. It's just that commercialization took the first thing that was available and exploited it. Same thing is happening with AI. Many innovations are doomed to become cursed. Computers are cursed. Medicine is cursed. Science is cursed. Everything sucks and that's just the way it is.
After the war when the dust settles, we'll start over. We might escape the death of the sun, but not the heat death of the universe. Therefore, none of the above matters and I'm rambling.
> Even if LLMs get better, English itself is a bad programming language, because it's imprecise and not modular.
I absolutely agree. I've only dabbled in AI coding, but every time I feel like I can't quite describe to it what I want. IMO we should be looking into writing some kind of pseudocode for LLMs. Writing code is the best way to describe code, even if it's just pseudocode.
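To make the "pseudocode beats prose" point concrete: a typed skeleton pins down names, types, and edge-case behavior that an English description would need many sentences to nail. The function and its spec below are invented purely for illustration:

```python
# Illustration: a precise skeleton you could hand to an LLM (or a human)
# instead of an ambiguous English description. The spec is hypothetical.
from typing import Iterable

def dedupe_keep_last(items: Iterable[str]) -> list[str]:
    """Remove duplicates, keeping each string's LAST occurrence and
    preserving the relative order of the survivors.

    dedupe_keep_last(["a", "b", "a", "c"]) -> ["b", "a", "c"]
    """
    seen: set[str] = set()
    out: list[str] = []
    for item in reversed(list(items)):  # walk backwards so the last wins
        if item not in seen:
            seen.add(item)
            out.append(item)
    return list(reversed(out))
```

Try describing "keep the last occurrence, preserve survivor order" unambiguously in a sentence of English; the docstring plus one worked example does it in less space.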
How is this different from giving a person requirements and asking them to implement something?
For those unaware, the title is a reference to a Little Britain skit
https://www.youtube.com/watch?v=sX6hMhL1YsQ&themeRefresh=1
I love the framing here.
However, I think what a lot of people don't realize is the reason a lot of executives and business users are excited about AI and don't mind developers getting replaced is because product is already a black box.
I'm more and more convinced top execs are most likely to be advantageously replaced by LLM.
They navigate such complex decision spaces, full of compromises, tensions, political knots, that ultimately their important decisions are just made on gut feelings.
Replace the CEO with an LLM whose system prompt is carefully crafted and vetted by the board of directors, give it some adequate digital twin of the company to project its moves, and I'm sure it would maximize the interest of the shareholders much better.
Next up: apply the same recipe to government executive power. Couldn't be much worse than orange man.
The slow-burning problem is going to be adversarial input and poison data.
They'll start minding when things start breaking. In the meantime I'll work on stuff AI is still not so great at.
Some of us need a paycheck and have to work on whatever LLM project the CEO demands, and if it fails, the developer gets blamed.
> This is a great insight. For software engineers coding is the way to fully grasp the business context.
> By programming, they learn how the system fits together, where the limits are, and what is possible. From there they can discover new possibilities, but also assess whether new ideas are feasible.
Maybe I have a different understanding of "business context", but I would argue the opposite. AI tools let me spend much more time on the business impact of features: thinking through edge cases, talking with stakeholders, talking with the project/product owners. Often there are features that stakeholders dismissed because they seemed complex and difficult in the past, but that are much easier now with faster coding.
Code was almost never the limiting factor before. It's the business that is the limit.
It’s perplexing: the majority of people who insist that using AI coding assistance is guaranteed to rob you of application understanding and business context aren’t considering that not every prompt has to be an instruction to write code. You can, like, ask the agent questions. “What auth stack is in use? Where does the event bus live? Does the project follow SoC or are we dealing with pasta here? Can you trace these call chains and let me know where they’re initiated?”
If anything, I know more about the code I work on than ever before, and at a fraction of the effort, lol.
The project managers and CEOs who are vibe-coding apps on the weekend don't know what an "auth stack" is, much less that they should consider which auth stack is in use. Then when it breaks, they hand their vibe-coded black box to their engineers and say "fix this, no mistakes"
I think that for the average developer this might be true. But excellent developers spend a lot of time thinking about edge cases and ensuring that the system/code is supportable: the code explains what the problems are so that issues can be resolved quickly, it's written in a style that protects against other parts of the system failing, and so on. This isn't done in average code, and I don't see AI doing it at all.
I really like the framing here (via Richard Sennett / Roland van der Vorst): craft is a relationship with the material. In software, that “material consciousness” is built by touching the system—writing code, feeling the resistance of constraints, refactoring, modeling the domain until it clicks.
If we outsource the whole “hands that think” loop to agents, we may ship faster… but we also risk losing the embodied understanding that lets us explain why something is hard, where the edges are, and how to invent a better architecture instead of accepting “computer says no.”
I hope we keep making room for “luxury software”: not in price, but in care—the Swiss-watch mentality. Clean mechanisms, legible invariants, debuggable behavior, and the joy of building something you can trust and maintain for years. Hacker News needs more of that energy.
> I hope we keep making room for “luxury software”
The risk of making the source fed to a compiler a black box is pretty high. Because if your program is a black box compiled with a black box, things might get really sporty.
There are many platform libraries and such that we probably do not want as black boxes, ever. That doesn't mean an AI can't assist, but that you'll still rewrite and review whatever insights you got from the AI.
If the latest game or Instagram-type app is a black box nobody can decipher, who cares? But if such an app goes on to be a billion-dollar success, I'll feel sorry for the engineers tasked with figuring out why the computer says no.
The quote at the center of this posting uses poetic language like "dialogue" to make a certain class of work seem virtuous. I find it very difficult to take this kind of framing seriously. What are the data to back this up? We are at the phase of LLM use where the data on productivity gains and their impact hasn't arrived yet. Postings like this fill the vacuum with fuzzy language.
After watching speedrun videos on YouTube, my hunch is that there will always be countervailing forces against the degradation of our ability to write code. There are some yet-unborn wizards who can't help but bring it all the way back to writing assembly, just because it's hard. And they will be rewarded for it.
If your computer says No just ditch it and buy a non-Apple computer.
One with Windows 11?
A bit OT: Only if you can return the licence for a refund. I did this (checks wiki entry...) 16 years ago [1] with a server that came with a Windows Home Server licence. It was not so much about the actual licence costs - I got about $68/€48 after sending in all the licence details/stickers/labels/etc - but about sending the message that I did not want to be forced to use Windows (the hardware was not available without the licence).
That server did its job for about 10 years with a few upgrades - more drives and a faster CPU - and really only needed replacement because of the low max memory limit. Its successor ended up being a DL380 G7 which is still in use, the Scaleo Home Server annex Intel SS4200 now waits in the barn for potential re-use as a backup server.
[1] http://ss4200.pbworks.com/w/page/5122767/Windows%20refund%20...
Do you really think Windows 11 doesn't try to control the user, or say no to them?
Are you sure you can't think of a commonly used operating system which doesn't?
Name ends with "ux", or maybe "BSD"?
To be honest, the same applies when a developer gets promoted to team lead. I experienced this myself: I no longer stayed in touch with the code being written. The reasons are slightly different (for me it was a lack of time and documentation).
I am a scientist who rarely collaborates (unlike programmers and unlike most scientists).
When I wrote a paper in collaboration some time ago, it felt very weird to have large parts of the paper that I had only superficial knowledge of (incidentally, I had retyped everything my co-author did, but in my own notation) - no profound knowledge of how the results were obtained or of the difficulties encountered. I guess this is how people who start vibe coding must feel.
If you are a painter, you don't need to know chemical formulas for the pigments you're using to have a direct touch to your creation.
For me, that material consciousness in computers was always in grasping the way a system works holistically. To feel the system. To treat it as almost a living organism.
I would argue that as a painter some knowledge of chemistry is VERY important. For example, do you need oil-based or latex paint for a specific surface? Is it indoor or outdoor? Do you need primer on that surface or not? If it's metal, do you need a rust converter first?
> feeling resistance
In a software context, I wonder what the impact of the language used is on the sense of "resistance"?
To an extent, AI can help by explaining what the code does. So the computer says why it says "no".
Or says random stuff pretending it is the reason.
This isn’t the right way to use AI to write code at work. It shouldn’t become a black box: make the process iterative, be precise about architecture, and guide the AI carefully so the code stays in a state a human can instantly drop in on if needed. Use AI as a keyboard extension, not a blindfold.