Comment by rambojohnson
21 hours ago
What exhausts me isn’t “falling behind.” It’s watching the profession collectively decide that the solution to uncertainty is to pile abstraction on top of abstraction until no one can explain what’s actually happening anymore.
This agentic arms race by C-suite know-nothings feels less like leverage and more like denial. We took a stochastic text generator, noticed it lies confidently and wipes entire databases and hard drives, and responded by wrapping it in managers, sub-agents, memories, tools, permissions, workflows, and orchestration layers so we don’t have to look directly at the fact that it still doesn’t understand anything.
Now we’re expected to maintain a mental model not just of our system, but of a swarm of half-reliable interns talking to each other in a language that isn’t executable, reproducible, or stable.
Work now feels duller than dishwater, enough to have forced me to career pivot for 2026.
I think AI-assisted programming may be having the opposite effect, at least for me.
I'm now incentivized to use fewer abstractions.
Why do we code with React? It's because synchronizing state between a UI and a data model is difficult and it's easy to make mistakes, so it's worth paying the React complexity/page-weight tax in order for a "better developer experience" that allows us to build working, reliable software with less typing of code into a text editor.
If an LLM is typing that code - and it can maintain a test suite that shows everything works correctly - maybe we don't need that abstraction after all.
How often have you dropped in a big complex library like Moment.js just because you needed to convert a time from one format to another, and it would take too long to hand-write that one feature (and add tests for it to make sure it's robust)? With an LLM that's a single prompt and a couple of minutes of wait.
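To make that concrete - this is just a sketch, the function name and format are mine rather than from any real library - the kind of output I mean is a few readable lines plus an assertion, instead of a whole date dependency:

    // format a Date as "YYYY-MM-DD HH:mm" in UTC - no library needed
    function formatUtc(date) {
      const pad = (n) => String(n).padStart(2, "0");
      return `${date.getUTCFullYear()}-${pad(date.getUTCMonth() + 1)}-${pad(date.getUTCDate())}` +
        ` ${pad(date.getUTCHours())}:${pad(date.getUTCMinutes())}`;
    }

    // quick check with Node's built-in assert
    const assert = require("node:assert");
    assert.strictEqual(formatUtc(new Date(Date.UTC(2024, 0, 5, 9, 3))), "2024-01-05 09:03");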
Using LLMs to build black box abstraction layers is a choice. We can choose to have them build FEWER abstraction layers for us instead.
> If an LLM is typing that code - and it can maintain a test suite that shows everything works correctly - maybe we don't need that abstraction after all.
I've had plenty of junior devs justify massive code bases of random scripts and 100+ line functions with the same logic. There's a reason senior devs almost always push back on this when it's encountered.
Everything hinges on that "if". But you're baking a tautology into your reasoning: "if LLMs can do everything we need them to, we can use LLMs for everything we need".
The reason we stop junior devs from going down this path is because experience teaches us that things will break and when they do, it will incur a world of pain.
So "LLM as abstraction" might be a possible future, but it assumes LLMs are significantly more capable than a junior dev at managing a growing mess of complex code.
This is clearly not the case with simplistic LLM usage today. "Ah! But you need agents and memory and context management, etc!" But all of these are abstractions. This is what I believe the parent comment is really pointing out.
If AI could do what we originally hoped it could - follow simple instructions to solve complex tasks - we'd be great, and I would agree with your argument. But we are very clearly not in that world. Especially since Karpathy can't even keep up with the sophisticated machinery necessary to properly orchestrate these tools. All of the people decrying "you're not doing it right!" are emphatically proving that LLMs cannot perform these tasks at the level we need them to.
I'm not arguing for using LLMs as an abstraction.
I'm saying that a key component of the dependency calculation has changed.
It used to be that one of the most influential facts affecting your decision to add a new library was the cost of writing the subset of code that you needed yourself. If writing that code and the accompanying tests represented more than an hour of work, a library was usually a better investment.
If the code and tests take a few minutes those calculations can look very different.
Making these decisions effectively and responsibly is one of the key characteristics of a senior engineer, which is why it's so interesting that all of those years of intuition are being disrupted.
The code we are producing remains the same. The difference is that a senior developer may have written that function + tests in several hours, at a cost of thousands of dollars. Now that same senior developer can produce exactly the same code in a fraction of the time, at a cost of less than $100.
Rather, the problem more often I see with junior devs is pulling in a dozen dependencies when writing a single function would have done the job.
Indeed, part of becoming a senior developer is learning why you should avoid left-pad but accept date-fns.
We’re still in the early stages of operationalising LLMs. This is like mobile apps in 2010 or SPA web dev in 2014. People are throwing a lot of stuff at the wall and there’s going to be a ton of churn and chaos before we figure out how to use it and it settles down a bit. I used to joke that I didn’t like taking vacations because the entire front end stack would have been chucked out and replaced with something new by the time I got back, but it’s pretty stable now.
Also I find it odd you’d characterise the current LLM progress as somehow being below where we hoped it would be. A few years back, people would have said you were absolutely nuts if you’d predicted how good these models would become. Very few people (apart from those trying to sell you something) were claiming we’d imminently be entering a world where you enter an idea and out comes a complex solution without any further guidance or refining. When the AI can do that, we can just tell it to improve itself in a loop and AGI is just some GPU cycles away. Most people still expect - and hope - that’s a little way off yet.
That doesn’t mean the relative cost of abstracting and inlining hasn’t changed dramatically or that these tools aren’t incredibly useful when you figure out how to hold them.
Or you could just do what most people always do and wait for the trailblazers to either get burnt or figure out what works, and then jump on the bandwagon when it stabilises - but accept that when it does stabilise, you’ll be a few years behind those who have been picking shrapnel out of their hands for the last few years.
> The reason we stop junior devs from going down this path is because experience teaches us that things will break and when they do, it will incur a world of pain.
Hyperbole. It's also very often a "world of pain" with a lot of senior code.
> things will break and when they do, it will incur a world of pain
How much of this is still true, and how much is exaggerated, in an environment where the cost of making things is near zero?
I think “Evolution” would say that the cost of producing is near zero, so the possibility of creating what we want is high. The cost of trying again is low, so mistakes and pain aren’t super costly. For really high-stakes situations (which most situations are not), bring the expert human into the loop until the AI is a better expert than that human.
> All of the people decrying "you're not doing it right!" are emphatically proving that LLMs cannot perform these tasks at the level we need them to.
the people are telling you “you are not doing it right!” - that’s it, there is nothing to interpret beyond that basic sentence
I'm sorry, but I don't agree.
Look at the current dependency hell that is modern development: how wide the openings are for supply chain attacks, and how seemingly every other week we get a new RCE.
I'd rather have 100 loosely coupled scripts peer-reviewed by half a dozen LLM agents.
> "LLM as abstraction" might be a possible future, but it assumes LLMs are significantly more capable than a junior dev at managing a growing mess of complex code.
Ignoring for a second that they already are, it doesn’t matter, because the cost of rewriting the mess drops by an order of magnitude with each frontier model release. You won’t need good code because you’ll be throwing everything away all the time.
> I'm now incentivized to use fewer abstractions.
I'm incentivised to use abstractions that are harder to learn, but execute faster or more safely once compiled. E.g. more Rust, Lean.
> If an LLM is typing that code - and it can maintain a test suite that shows everything works correctly - maybe we don't need that abstraction after all.
LLMs benefit from abstractions the same way as we do.
LLMs currently copy our approaches to solving problems and copy all the problems those approaches bring.
Letting LLMs skip all the abstractions is about as likely to succeed as genetic programming is to be efficient.
For example, writing more vanilla JS instead of React, you're just reinventing the necessary abstractions more verbosely and with a higher risk of duplicate code or mismatching abstractions.
In a recent interview with Bret Weinstein, a former professor of evolutionary biology, he proposed that one property of evolution that makes the story of one species evolving into another more likely is that it's not just random permutations of single genes; it's also permutations to counter variables encoded as telomeres and possibly microsatellites.
https://podcasts.happyscribe.com/the-joe-rogan-experience/24...
Bret compares this to flipping random bits in a program to make it work better vs. tweaking variables randomly in a high-level language. Mutating parameters at a high-level for something that already works is more likely to result in something else that works than mutating parameters at a low level.
So I believe LLMs benefit from high abstractions, like us.
We just need good ones; and good ones for us might not be the same as good ones for LLMs.
> For example, writing more vanilla JS instead of React, you're just reinventing the necessary abstractions more verbosely and with a higher risk of duplicate code or mismatching abstractions.
Right, but I'm also getting pages that load faster and don't require a build step, making them more convenient to hack on. I'm enjoying that trade-off a lot.
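For the scale of page I'm talking about, the abstraction I'm "reinventing" is usually just an explicit render function. A made-up sketch (it assumes a #count element and an #increment button exist in the page):

    // minimal state -> DOM sync, no framework, no build step
    const state = { count: 0 };

    function render() {
      document.querySelector("#count").textContent = `Count: ${state.count}`;
    }

    document.querySelector("#increment").addEventListener("click", () => {
      state.count += 1;
      render(); // re-render explicitly after every state change
    });

    render();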
Exactly. LLMs are a lot like human developers: they benefit from existing abstractions. Reinventing everything from scratch is a recipe for disaster—especially given an LLM’s limited context window.
I find it interesting that for your example you chose Moment.js -- a time library -- instead of something utilitarian like Lodash. For years I've been following Jon Skeet's blog about implementing his time library NodaTime (a port of JodaTime). There are a crazy number of edge cases and many unintuitive things about modeling time within a computer.
If I just wanted the equivalent of Lodash's _.intersection() method, I get it. The requirements are pretty straightforward and I can verify the LLM code & tests myself. One less dependency is great. But with time, I know I don't know enough to verify the LLM's output.
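(For the intersection case, the whole thing is small enough to eyeball. A rough sketch - my own code, not Lodash's actual implementation:)

    // unique values of the first array that also appear in the second
    function intersection(a, b) {
      const inB = new Set(b);
      return [...new Set(a)].filter((x) => inB.has(x));
    }

    const assert = require("node:assert");
    assert.deepStrictEqual(intersection([2, 1, 2, 3], [3, 2]), [2, 3]);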
Similar to encryption libraries, the common recommendation is to leave time-based code to developers who live and breathe those black boxes. I trust the community to verify the correctness of those concepts, something I can't do myself with LLM output.
For Moment you can use `date-fns` and tree-shake.
I'd rather have LLMs build on top of proven, battle-tested production libraries than keep writing their own from scratch. You're going to fill up context with all of its re-invented wheels when it already knows how to use common options.
Not to mention that testing things like this is hard. And why waste time (and context and complexity) for humans and LLMs trying to do something hard like state syncing when you can focus on something else?
Every dependency carries a cost. You are effectively outsourcing part of the future maintenance of your project to an external team.
This can often be a very solid bet, but it can also occasionally backfire if the library you chose falls out of date and is no longer maintained.
For this reason I lean towards fewer dependencies, and have a high bar for when a dependency is worth adding to a project.
I prefer a dozen well vetted dependencies to hundreds of smaller ones that each solve a problem that I could have solved effectively without them.
LLMs also have encyclopedic knowledge. Several times LLMs have found some huge block of code I wrote and reduced it down to a few lines. The other day they removed several thousand lines of brittle code I wrote previously for some API calls with a well-tested package I didn't know about. Literally thousands down to dozens.
My code is constantly shrinking, becoming better quality, more performant, more best-practice on a daily basis. And I'm learning like crazy. I'm constantly looking up changes it recommends to see why and what the reasons are behind them.
It can be a big damned dummy too, though. Just today it was proposing a massive server-side script to workaround an issue with my app I was deploying, when the actual solution was to just make a simple one-line change to the app. ("You're absolutely right!")
Right there with you.
I'm instructing my agents to do old-school boring form POSTs, SSR templates, and vanilla JS / CSS.
I previously shifted away from this to abstractions because typing all the boilerplate was tedious.
But now that I'm not typing, the tedious but simple approach is great for the agent writing the code, and great for the people doing code reviews.
> If an LLM is typing that code - and it can maintain a test suite that shows everything works correctly - maybe we don't need that abstraction after all.
I'm worried that there is a tendency in LLM-generated code to avoid even local abstractions, such as putting common code into separate (local) functions or even using records/structures. You end up with code that is best maintained with an LLM, which is good for the LLM provider and their future revenue. But we humans, as reviewers and ultimate long-term maintainers, benefit from those minor abstractions.
Yeah, I find myself needing to watch out for that. I'll frequently say "refactor that to reduce duplicated code" - which is generally very safe once the LLM has added test coverage for the new feature.
I've come to a similar conclusion. One example is how much easier it is to put an interface on top of sqlite. I've been burned badly with the hidden details of ORM s. ORMs are the sirens call of getting rid of all that boiler plate code when encoding and decoding objects into a db. However this abstraction breaks in many hidden ways. Lazy loading details, in-memory state vs db mismatch, cascading details, etc all have unexpected problems that can be hard to predict. Using an LLM to do the grunt work lets you easily see and reason about all the details. You don't have to guess about what's happening and you can make your own choices.
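The kind of thin layer I mean looks roughly like this - a sketch only, using better-sqlite3 as the driver and a made-up users table:

    const Database = require("better-sqlite3");
    const db = new Database("app.db");

    // explicit encode/decode instead of an ORM's hidden mapping
    function saveUser(user) {
      db.prepare("INSERT INTO users (name, email) VALUES (?, ?)").run(user.name, user.email);
    }

    function getUserByEmail(email) {
      // no lazy loading, no session cache: what you get back is exactly what's in the db
      return db.prepare("SELECT id, name, email FROM users WHERE email = ?").get(email) ?? null;
    }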
> If an LLM is typing that code - and it can maintain a test suite that shows everything works correctly - maybe we don't need that abstraction after all.
But this is a highly non-trivial problem. How do you even possibly manually verify that the test suite is complete and tests all possible corner cases (of which there are so many because synchronizing state is a hard problem)?
At least React solves this problem in a non-stochastic, deterministic manner. What can be a good reason to replace something like React, which works deterministically, with LLM-assisted code that is generated stochastically, when there's no easy way to manually verify that the implementation or the test suite is correct and complete?
You don't, same as for "generate momentjs and use it". People now firmly believe they can use an LLM to build custom versions of these libraries and rewrite whole ecosystems out of nowhere because Claude said "here's the code".
I've come to realize fighting this is useless: people will do this, it's going to create large fuck-ups, and there will be heaps of money to be made on the cleanup jobs.
Has anyone tried the experiment that is sort of implied here? I was wondering earlier today what it would be like to pick a simple app, pick an OS, and just tell an LLM to write that app using only machine code and native ADKs, skipping all intermediate layers?
We seem to have created a large bureaucracy for software development, where telling a computer how to execute an app involves keeping a lot of cogs in a big complicated machine happy. But why use the automation to just roll the cogs? Why not just simplify/streamline? Does an LLM need to worry about using the latest and greatest abstractions? I have to assume this has been tried already...
> If an LLM is typing that code - and it can maintain a test suite that shows everything works correctly - maybe we don't need that abstraction after all.
for simple stuff, sure, React was ALWAYS inefficient. Even Javascript/client-side logic is still overkill a lot of the time except for that pesky "user expectations" thing.
for any codebase that's long-lived and complex, combinatorics tells us it'll be near-impossible to have good+fast test coverage on all of that.
part of the reason people don't roll their own is because being able to assume that the library won't have major bugs leads to an incredible reduction in necessary test surface, and generally people have found it a safe-enough assumption.
throwing that out and trying to just cover the necessary stuff instead - because you're also throwing out your ability to quickly recognize risky changes since you aren't familiar with all the code - has a high chance of painting you into messy corners.
"just hire a thousand low-skilled people and force them to write tests" had more problems as a hiring plan then just "people are expensive."
> Why do we code with React?
...is a loaded question, with a complex and nuanced answer. Especially when you continue:
> it's worth paying the React complexity/page-weight tax
All right; then why do we code in React when a smaller alternative, such as Preact, exists, which solves the same problem, but for a much lower page-weight tax?
Why do we code in React when a mechanism to synchronize data with tiny UI fragments through signals exists, as exemplified by Solid?
Why do people use React to code things where data doesn't even change, or changes so little that to sync it with the UI does not present any challenge whatsoever, such as blogs or landing pages?
I don't think the question 'why do we code with React?' has a simple and satisfactory answer anymore. I am sure marketing and educational practices play a large role in it.
Yeah, I share all of those questions.
My cynical answer is that most web developers who learned their craft in the last decade learned frontend React-first, and a lot of them genuinely don't have experience working without it.
Which means hiring for a React team is easier. Which means learning React makes you more employable.
If you work at a megacorp right now, you know what's happening isn't people deciding to use fewer libraries. It's developers being measured by their lines of code, and the more AI you use, the more lines of code and 'features' you can ship.
However, the quality of this code is fucking terrible, no one is reading what they push deeply, and these models don't have enough 'sense' to make really robust and effective test suites. Even if they did, a comprehensive test suite is not the solution to poorly designed code; it's a band-aid - and an expensive one at scale.
Most likely we will see some disasters happening in the next few years due to this mode of software development, and only then will people understand to use these agents as tools and not replacements.
...Or maybe we'll get AGI and it will fix/maintain the trash going out there today.
I don't trust LLMs enough to handle the maintenance of all the abstraction buried in React / similar libraries. I caught some of the LLMs taking nasty shortcuts (e.g. removing test constraints or validations in order to make the tests green). Multiple times. Which completely breaks trust.
And if I have to closely supervise every single change, I don't believe my development process will be any better. If not worse.
Let alone new engineers who join the team and all of a sudden have to deal with a unique solution layer which doesn't exist anywhere else.
Why would I want to maintain in perpetuity random snippets when a library exists? How is that an improvement?
It's an improvement if that library stops being actively maintained in the future.
... or decides to redesign the API you were using.
If LLMs are that capable, then why are AI companies selling access to them instead of using them to conquer markets?
The same question might be asked about ASML: if ASML EUV machines are so great, why does ASML sell them to TSMC instead of fabbing chips themselves? The reality is that firms specialize in certain areas, and may lose their comparative advantage when they move outside of their specialty.
Because the LLMs have only got this good 3 months ago, and market dynamics mean they can't hold them in house without their competitors getting ahead.
I would guess fear of losing market share and valuable data, as well as pressure to appear to be winning the AI race for the companies' own stock price.
i.e competition. If there were only one AI company, they would probably not release anything close to their most capable version to the public. ala Google pre-chatgpt.
Huh, I've been assuming the opposite: better to use React even if you don't need it, because of its prevalence in the training data. Is it not the case that LLMs are better at standard stacks like that than custom JS?
Hard to say for sure. I've been finding that frontier LLMs write very good code when I tell them "vanilla JS, no React" - in that their code matches my personal taste at least - but that's hardly a robust benchmark.
I'd rather use React than a bespoke solution created by an ephemeral agent, and I'd rather self-trepanate than use React
I'd argue it's a different category of abstraction
The problem is, what do you do _when_ it fails? Not "if", but "when".
Can you manually wade through thousands of functions and fix the issue?
Nutty idea: train on ASM code. Create an LLM that compiles prompts directly to machine code.
> and it can maintain a test suite that shows everything works correctly
Are you able to efficiently verify that the test suite is testing what it should be testing? (I would not count "manually reviewing all the test code" as efficient if you have a similar amount of test code to actual code.)
Sometimes a change to the code under test means that a (perhaps unavoidably brittle) test needs to be changed. In this case, the LLM should change the test to match the behaviour of the code under test. Other times, a change to the code under test represents a bug that a failing test should catch -- in this case, the LLM should fix the code under test, and leave the test unchanged. How do you have confidence that the LLM chooses the right path in each case?
That's a fundamental misunderstanding.
The role of abstractions *IS* to reduce (i.e. "compress") the need for a test suite, because you have an easy model to understand and reason about.
One of my personal rules for automated test suites is that my tests should fail if one of the libraries I'm using changes in a way that breaks my features.
Makes upgrading dependencies so much less painful!
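Concretely - a made-up example using date-fns, since it came up elsewhere in the thread - the trick is to exercise the dependency through your own feature instead of mocking it out:

    // if a date-fns upgrade changes formatting behaviour, this fails at upgrade time
    const assert = require("node:assert");
    const { format } = require("date-fns");

    function invoiceLabel(date) {
      return `Invoice ${format(date, "yyyy-MM-dd")}`;
    }

    assert.strictEqual(invoiceLabel(new Date(2024, 0, 5)), "Invoice 2024-01-05");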
Our industry wants disruption, speed, delivery! Automatic code generation does that wonderfully.
If we wanted safety, stability, performance, and polish, the impact of LLMs would be more limited. They have a tendency to pile up code on top of code.
I think the new tech is just accelerating an already existing problem. Most tech products are already rotting; take a look at Windows or iOS.
I wonder what it will take to reach a significant turning point in this mentality.
disruption is a code word for deregulation, and deregulation is bad for everyone except execs and investors
it's sadly telling how this comment got greyed out to oblivion.
One possible positive outcome of all this could be sending LLMs to clean up oceans of low value tech debt. Let the humans move fast, let the machines straighten out and tidy up.
The ROI of doing this is weak because of how long it takes an expensive human. But if you could clean it up more cheaply, the ROI strengthens considerably- and there’s a lot of it.
It’s wild that programmers are willing to accept less determinism.
It's not something that suddenly changed. "I'll generate some code" is as nondeterministic as "I'll look for a library that does it", "I'll assign John to code this feature", or "I'll outsource this code to a consulting company". Even if you write yourself, you're pretty nondeterministic in your results - you're not going to write exactly the same code to solve a problem, even if you explicitly try.
No?
If I use a library, I know it will do the same thing from the same inputs, every time. If I don't understand something about its behavior, then I can look to the documentation. Some are better about this, some are crap. But a good library will continue doing what I want years or decades later.
An LLM can't decide between one sentence and the next what to do.
Unlike code generation, all the other examples share one common point, which is their main advantage: alignment between your objective and their actions. With a good enough incentive, they may as well be deterministic.
When you order home delivery, you don’t care about by whom or how. Only the end result matters. And we’ve ensured that reliability is good enough that failures are accidents, not common occurrences.
Code generation is not reliable enough to have the same quasi deterministic label.
It's not the same, LLM's are qualitatively different due to the stochastic and non-reproducible nature of their output. From the LLM's point of view, non-functional or incorrect code is exactly the same as correct code because it doesn't understand anything that it's generating. When a human does it, you can say they did a bad or good job, but there is a thought process and actual "intelligence" and reasoning that went into the decisions.
I think this insight was really the thing that made me understand the limitations of LLMs a lot better. Some people say when it produces things that are incorrect or fabricated it is "hallucinating", but the truth is that everything it produces is a hallucination, and the fact it's sometimes correct is incidental.
Why would the average programmer have a problem with it?
The average programmer is already being pushed into doing a lot of things they're unhappy about in their day jobs.
Crappy designs, stupid products, tracking, privacy violation, security issues, slowness on customer machines, terrible tooling, crappy dependencies, horrible culture, pointless nitpicks in code reviews.
Half of HN is gonna defend one thing above or the other because $$$.
What's one more thing?
Say it louder.
It's wild that management would be willing to accept it.
I think that for some people it is harder to reason about determinism because it is similar to correctness, and correctness can, in many scenarios, be something you trade off - for example, in relation to scaling and speed you will often trade off correctness.
If you do not think clearly about the difference between determinism and other similar properties you might be willing to trade off, like (real-time) correctness, you might think that trading off determinism is just more of the same.
Note: I'm against trading off determinism, but I accept there might be a reason to trade it off; I just worry that people are not actually thinking through what it is they're trading when they do it.
Management is used to nondeterminism, because that’s what their employees always have been.
Determinism require formality (enactment of rules) and some kind of omniscience about the system. Both are hard to acquire. I’ve seen people trying hard not to read any kind of manual and failing to reason logically even when given hints about the solution to a problem.
I think those that are most successful at creating maintainable code with AI are those that spend more time upfront limiting the nondeterminism aspect using design and context.
Mortgages don't pay for themselves.
It's not that wild. I like building things. I like programming too, but less than building things.
To me, fighting with an LLM doesn't feel like building things, it feels like having my teeth pulled.
> It’s wild that programmers are willing to accept less determinism.
It's wild that you think programmers is some kind of caste that makes any decisions.
There has always been a laissez-faire subset of programmers who thrive on living in the debugger, getting occasional dopamine hits every time they remove any footgun they previously placed.
I cannot count the times that I've had essentially this conversation:
"If x happens, then y, and z, it will crash here."
"What are the odds of that happening?"
"If you can even ask that question, the probability that it will occur at a customer site somewhere sometime approaches one."
It's completely crazy. I've had variants on the conversation from hardware designers, too. One time, I was asked to torture a UART, since we had shipped a broken one. (I normally build stuff, but I am your go-to whitebox tester, because I hone in on things that look suspicious rather than shying away from them.) When I was asked the inevitable "Could that really happen in a customer system?" after creating a synthetic scenario where the UART and DMA together failed, my response was:
"I don't know. You have two choices. Either fix it where the test passes, or prove that no customer could ever inadvertently recreate the test conditions."
He fixed it, but not without a lot of grumbling.
My dad worked in the auto industry and they came across a defect in an engine control computer where they were able to give it something like 10 million to one odds of triggering.
They then turned the thing on, it ran for several seconds, encountered the error, and crashed.
Oh, that's right, the CPU can do millions of things a second.
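(Quick arithmetic, assuming the odds were per operation: at even a million opportunities a second, a 1-in-10-million event is expected roughly every ten seconds - which matches "ran for several seconds, then crashed.")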
Something I keep in the back of my mind when thinking about the odds in programming. You need to do extra leg work to make sure that you're measuring things in a way that's practical.
I've recently had a lot of fun teaching junior devs the basics of defensive programming.
The phrasing that usually makes it click for them is: "Yes, this is an unlikely bug, but if this bug were to happen, how long would it take you to figure out that this is the problem and fix it?"
In most cases these are extremely subtle issues that the juniors immediately realize would be nightmares to debug and could easily eat up days of hair-pulling work while someone non-technical above them waiting for the solution is rapidly losing their patience.
The best senior devs I've worked with over my career have all shared an uncanny knack for seeing a problem months before it impacts production. While they are frequently ignored, in those cases more often than not they get an apology a few months down the line when exactly what they predicted would happen, happens.
The good ones don't accept. Sadly there's just many more idiots out there trying to make a quick buck
Delving a bit deeper... I've been wondering if the problem's related to the rise in H1B workers and contractors. These programmers have an extra incentive to avoid pushing back on c-suite/skip level decisions - staying out of in-office politics reduces the risk of deportation. I think companies with a higher % of engineers working with that incentive have a higher risk of losing market share in the long-term.
You can have the best of both worlds if you use structured/constrained generation.
I mean we've had to cope with users for ages, this is not that different.
This gets repeated all the time, but it’s total nonsense. The output of an LLM is fixed just as the output of a human is.
Out of curiosity, what did you pivot to?
It sounds crazy to say this, but I've been thinking about this myself. Not for the immediate future (eg 2026), but somewhere later.
This whole AI-assisted and vibe coding phenomenon, including the other comments, reminds me of a very popular post that keeps reappearing on HN almost every year [1],[2].
[1] Don't Call Yourself A Programmer, And Other Career Advice:
https://news.ycombinator.com/item?id=34095775
What are you pivoting to?
I'm also interested in hearing this.
For me, I'm planning to ride out this industry for another couple years building cash until I can't stand it, then pivot to driving a city bus.
Gardening and plumbing. Driving buses will be solved.
> then pivot to driving a city bus.
You seem to be counting on Waymo not obsoleting that occupation. ;)
My work is better than it has been for decades. Now I can finally think and experiment instead of wasting my time on coding nitty-gritty detail, impossible to abstract. Last autumn was the game changer, basically Codex and later Opus 4.5; the latter is good with any decent scaffolding.
I have to admit, LLMs do save a lot of typing and the associated syntax errors. If you know what you want and can spot and fix mistakes made by the LLM, then they can be pretty useful. I don’t think it’s wise to use them for development if you are not knowledgeable enough in the domain and language to recognize errors or dead ends in the generated code, though.
That's similar to what happened in Java enterprise stack: ...wrapper and ...factory classes and all-you-can-eat abstractions that hide implementation and make engineering crazy expensive while not adding much (or anything, in most cases) to product quality. Now the same is happening in work processes with agentic systems and workflows.
Could we all just agree to stop using the term "abstraction". It's meaningless and confusing. It's cover for a multitude of sins, because it really could mean anything at all. Don't lay all the blame on the c-suite; they are what they are, and have their own view. Don't moan about the latest egregious excess of some llm. If it works for you, use it; if it doesn't, don't. But, stop whinging.
> It’s watching the profession collectively decide that the solution to uncertainty is to pile abstraction on top of abstraction until no one can explain what’s actually happening anymore.
No profession collectively made such a decision. Programming has always been split into many, many subcultures, each with their own (mutually incompatible across the whole profession) ideas of what makes a good program.
So I guess it was rather some programmers inside some part of a Silicon Valley echo chamber - one you also live in - who made such a decision.
What are you pivoting to?
I've usually found complaints about abstraction in programming odd because frankly, all we do is abstraction. It often seems to be used to mean /I/ don't understand, therefore we should do something more complicated and with many more lines of code that's less flexible.
But this usage? I'm fully on board. Too much abstraction is when it's incomprehensible. To who is the next question (my usual complaint is that a junior should not be that level) and I think you're right to point out that the "who" here is everyone.
We're killing a whole side of creativity and elegance while only slightly aiding another side. There's utility to this, but also a cost.
I think what frustrates me most about CS is that as a community we tend to go all in on something. We went all in on VR, then crypto, and now AI. We should be trying new things, but it feels more like we take these sides as if they're objective, and anyone not hopping on the hype train is an idiot or a luddite. The way the whole industry jumps to these things feels more like FOMO than intelligent strategy. Like making a sparkling water company an "AI first" company... it's like we love solutions looking for problems.
What are you pivoting to?
Don't forget you are expected to deliver x10 for the same pay, "because you have the AI now".
The system is designed to do exactly that. This is called ‘productivity increase’ and is deflationary in large dosages. Deflation sounds good until you understand where it’s coming from.
> It’s watching the profession collectively decide that the solution to uncertainty is to pile abstraction on top of abstraction until no one can explain what’s actually happening anymore.
The ubiquitous adoption of LLMs for generating code is mostly a sign of bad abstraction or the absence of abstraction, not the excess of abstraction.
And choosing/making the right abstraction is kind of the name of the game, right? So it's not abstraction per se that's a problem.
Every technical person has been complaining about this for the entire history of computer programming
Unless you’re writing literal machine instructions, you’re already operating on between 4 and 10 levels of abstraction as an engineer
It has never been tractable for humans to program a series of switches without an incredible number of abstractions
The vast majority of programmers never understood how computers work to begin with
People keep making this argument, but the jump to LLM driven development is such a conceptually different thing than any previous abstraction
This is true, though the people that actually push the field forward do know enough about every level of abstraction to get the job done. Making something (very important) horrible just to rush to market can be a pretty big progress blocker.
Jensen is someone I trust to understand the business side and some of those lower technical layers, so I'm not too concerned.
And if you're writing machine code directly, you're still relying on about ten layers of abstraction that the wizards at the chip design firms have built for you.
So you're washing dishes now?