Comment by MontyCarloHall
10 hours ago
It's not so much about replacing developers, but rather increasing the level of abstraction developers can work at, to allow them to work on more complex problems.
The first electronic computers were programmed by manually re-wiring their circuits. Going from that to being able to encode machine instructions on punchcards did not replace developers. Nor did going from raw machine instructions to assembly code. Nor did going from hand-written assembly to compiled low-level languages like C/FORTRAN. Nor did going from low-level languages to higher-level languages like Java, C++, or Python. Nor did relying on libraries/frameworks for implementing functionality that previously had to be written from scratch each time. Each of these steps freed developers from having to worry about lower-level problems and instead focus on higher-level problems. Mel's intellect is freed from having to optimize the position of the memory drum [0] to allow him to focus on optimizing the higher-level logic/algorithms of the problem he's solving. As a result, software has become not only more complex but also much more capable, and thus much more common.
(The thing that distinguishes gen-AI from all the previous examples of increasing abstraction is that those examples are deterministic and often formally verifiable mappings from higher abstraction -> lower abstraction. Gen-AI is neither.)
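To make that distinction concrete, here's a toy sketch (toy_compile and toy_generate are made-up stand-ins, not real tools): a compiler is a pure function of its input, while a generative model samples from a distribution over outputs.

```python
import random

def toy_compile(expr: str) -> str:
    # Deterministic: the same input always maps to the same output,
    # so the translation can be tested and formally verified.
    return f"PUSH {expr}; RET"

def toy_generate(prompt: str) -> str:
    # Sampled: the higher -> lower mapping is a distribution over
    # outputs, not a function, so two runs may disagree.
    return random.choice([f"PUSH {prompt}; RET",
                          f"PUSH {prompt}; NOP; RET"])

assert toy_compile("42") == toy_compile("42")  # always holds
# toy_generate("42") == toy_generate("42")     # holds only sometimes
```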
> It's not so much about replacing developers, but rather increasing the level of abstraction developers can work at, to allow them to work on more complex problems.
That's not the goal Anthropic's CEO has. Nor does any other CEO, for that matter.
> That's not the goal Anthropic's CEO has. Nor does any other CEO, for that matter.
It is what he can deliver.
> It's not so much about replacing developers, but rather increasing the level of abstraction developers can work at, to allow them to work on more complex problems.
People do and will talk about replacing developers, though.
Were many of the aforementioned advancements marketed as "replacing developers"? Absolutely. Did that end up happening? Quite the opposite; each higher-level abstraction only caused the market for software and demand for developers to grow.
That's not to say developers haven't been displaced by abstraction; I suspect many of the people responsible for re-wiring the ENIAC were completely out of a job when punchcards hit the scene. But their absence was filled by a greater number of higher-level punchcard-wielding developers.
The infinite-fountain-of-software machine seems more likely to replace developers than previous innovations did, and the people pushing the button will not be, in any current sense of the word, programming.
I think one thing I've heard missing from discussions though is that each level of abstraction needs to be introspectable. LLMs get compared to compilers a lot, so I'd like to ask: what is the equivalent of dumping the tokens, AST, SSA, IR, optimization passes, and assembly?
That's where I find the analogy on thin ice, because somebody has to understand the layers and their transformations.
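For concreteness, here's roughly what that introspection looks like on the compiler side, using CPython's standard tokenize/ast/dis modules (a minimal sketch; the source line is made up):

```python
import ast
import dis
import io
import tokenize

src = "x = (1 + 2) * 3\n"

# Tokens: the raw lexer output is inspectable.
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    print(tok.type, tok.string)

# AST: the parse tree can be dumped and read.
print(ast.dump(ast.parse(src), indent=2))

# Bytecode: the lowered form is dumpable too; on recent CPython even
# constant folding is visible ((1 + 2) * 3 appears as LOAD_CONST 9).
dis.dis(compile(src, "<example>", "exec"))
```

Every layer there was built to be read by a person. The open question is what the analogous dumps are for an LLM-based pipeline.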
“Needs to be” is a strong claim. The skill of debugging complex problems by stepping through disassembly to find a compiler bug is very specialized. Few can do it. Most applications don’t need that “introspection”. They need the “encapsulation” and faith that the lower layers work well 99.9+% of the time, and they need to know who to call when it fails.
I’m not saying generative AI meets this standard, but it’s different from what you’re saying.
Sorry, I should clarify: it needs to be introspectable by somebody. Not every programmer needs to be able to introspect the lower layers, but that capability needs to exist.
Now, I guess you can read the code an LLM generates, so maybe that layer does exist. But that's why I don't like the idea of making a programming language for LLMs, by LLMs, that's inscrutable to humans. A lot of those intermediate layers in compilers are designed for humans, with only the assembly output being made for the CPU.
The goal of AI companies is to replace all intellectual labor. You can argue that they're going to fail, but it's very clear what the actual goal is.
One of my clients is an AI startup in the security industry. Their business model is to use AI agents to perform the initial assessment and then cut the security contractors' hours to complete the job by 50%.
I don't think AI will completely replace these jobs, but it could reduce job numbers by a very large amount.
> increasing the level of abstraction developers can work at
Something is lost at each step of the abstraction ladder we climb. And the latest rung uses natural language, which introduces a lot of imprecision/slop in a way that prior abstractions did not. And the technology providing this new abstraction is non-deterministic on top of that.
There's also the quality issue of the output you do get.
I don't think the analogy to the assembly -> C transition that people like to use holds water; there are some similarities, but LLMs have a lot of downsides.
I think the thing that’s so weird to me is this idea that we all have to somehow internalize transistor switching as the foundational, unchangeable root of computing, and that anything too far abstracted from it is somehow not real computing, or some mess like that.
This completely ignores that programming vacuum-tube computers involved an entirely different type of abstraction than programming MOSFETs, for example.
I’m finding myself in the position where I can safely ignore any conversation about engineering with anybody who thinks there is a “right” way to do it, or that there’s any kind of ceremony or thinking pattern that needs to stay stable.
Those are all artifacts of humans desiring very little variance, things they’ve encoded because it takes real energy to reconfigure your own internal state model to a new paradigm.