Comment by djaro
10 hours ago
> So if Bob can do things with agents, he can do things.
The problem arrises when Bob encounters a problem too complex or unique for agents to solve.
To me, it seems a bit like the difference between learning how to cook versus buying microwave dinners. Sure, a good microwave dinner can taste really good, and it will be a lot better than what a beginning cook will make. But imagine aspiring cooks just buying premade meals because "those aren't going anywhere". Over the span of years, eventually a real cook will be able to make way better meals than anything you can buy at a grocery store.
The market will always value the exact things LLMs can not do, because if an LLM can do something, there is no reason to hire a person for that.
Precisely. The first 10 rungs of the ladder will be removed, but we still expect you to be able to get to the roof. The AI won't get you there and you won't have the knowledge you'd normally gain on those first 10 rungs to help you move past #10.
People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.
The determining factor is always "did I come up with this tool". Somehow, subsequent generations always manage to find their own competencies (which, to be fair, may be different).
This isn't guaranteed to play out, but it should be the default expectation until we actually see greatly diminishing outputs at the frontier of science, engineering, etc.
I think that's too easy an analogy, though.
Calculators are deterministically correct given the right input. It does not require expert judgement on whether an answer they gave is reasonable or not.
As someone who uses LLMs all day for coding, and who regularly bumps against the boundaries of what they're capable of, that's very much not the case. The only reason I can use them effectively is because I know what good software looks like and when to drop down to more explicit instructions.
12 replies →
If you hand a broken calculator to someone who knows how to do math, and they entered 123 + 765 which produced an answer of 6789; they should instantly know something is wrong. Hand that calculator to someone who never understood what the tool actually did but just accepted whatever answer appeared; and they would likely think the answer was totally reasonable.
Catching an LLM hallucinating often takes a basic understanding of what the answer should look like before asking the question.
4 replies →
The calculator analogy is wrong for the same reason. Knowing and internalizing arithmetic, algebra, and the shape of curves, etc. are mathematical rungs to get to higher mathematics and becoming a mathematician or physicist. You can't plug-and-chug your way there with a calculator and no understanding.
The people who make the calculator analogy are already victims of the missing rung problem and they aren't even able to comprehend what they're lacking. That's where the future of LLM overuse will take us.
> People would have said the same about graphing calculators or calculators before that.
As it happens, we generally don't let people use calculators while learning arithmetic. We make children spend years using pencil and paper to do what a calculator could in seconds.
3 replies →
> People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.
Well, we still make people calculate manually for many years, and we still make people listen to lectures instead of just reading.
But will we still have people to go through years of manual coding? I guess in the future we will force them, at least if we want to keep people competent, just like the other things you mentioned. Currently you do that on the job, in the future people wont do that on the job so they will be expected to do it as a part of their education.
What do people mean exactly when they bring up “Socrates saying things about writing”? Phaedrus?
> “Most ingenious Theuth, one man has the ability to beget arts, but the ability to judge of their usefulness or harmfulness to their users belongs to another; [275a] and now you, who are the father of letters, have been led by your affection to ascribe to them a power the opposite of that which they really possess.
> "For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."
Sounds to me like he was spot on.
2 replies →
> The determining factor is always "did I come up with this tool". Somehow, subsequent generations always manage to find their own competencies (which, to be fair, may be different).
In a sense, I think you are right. We are currently going through a period of transition that values some skills and devalues others. The people who see huge productivity gains because they don't have to do the meaningless grunt work are enthusiastic about that. The people who did not come up with the tool are quick to point out pitfalls.
The thing is, the naysayers aren't wrong since the path we choose to follow will determine the outcome of using the technology. Using it to sift through papers to figure out what is worth reading in depth is useful. Using it to help us understand difficult points in a paper is useful. On the other hand, using it as a replacement for reading the papers is counterproductive. It is replacing what the author said with what a machine "thinks" an author said. That may get rid of unnecessary verbosity, but it is almost certainly stripping away necessary details as well.
My university days were spent studying astrophysics. It was long ago, but the struggles with technology handling data were similar. There were debates between older faculty who were fine with computers, as long as researchers were there to supervise the analysis every step of the way, and new faculty, who needed computers to take raw data to reduced results without human intervention. The reason was, as always, productivity. People could not handle the massive amounts of data being generated by the new generation of sensors or systematic large scale surveys if they had to intervene any step of the way. At a basic level, you couldn't figure out whether it was a garbage-in, garbage-out type scenario because no one had the time to look at the inputs. (I mean no time in an absolute sense. There was too much data.) At a deeper level, you couldn't even tell if the data processing steps were valid unless there was something obviously wrong with the data. Sure, the code looked fine. If the code did what we expected of it, mathematically, it would be fine. But there were occasions where I had to point out that the computer isn't working how they thought it was.
It was a debate in which both sides were right. You couldn't make scientific progress at a useful pace without sticking computers in the middle and without computers taking over the grunt work. On the other hand, the machine cannot be used as a replacement for the grunt work of understanding, may that involves reading papers or analyzing the code from the perspective of a computer scientist (rather than a mathematician).
We notably teach people how to do arithmetics by hand before we hand them calculators.
We still expect high school students to learn to use graph paper before they use their TI-83, grade school students to do arithmetic by hand before using a calculator. This is essentially the post's point, that LLMs are a useful tool only after you have learned to do the work without them.
When doing college we can only start using those tools when we understand the principles behind them.
Socrates does not say this about the written word. Plato has Socrates say it about writing in the beginning sections of the Phaedrus, but it is not Socrates opinion nor the final conclusion he arrives at.
And yes yes you can pull up the quote or ask your AI, but they will be wrong. The quote is from Socrates reciting a "myth", as is pretty typical in a middle late dialogue like this.
But here, alas we can recognize the utter absurdity, that this just points out why writing can be bad, as Socrates does pose. Because you get guys 2000 years in future using you and misquoting you for their dumb cause! No more logos, only endless stochastic doxa. Truly a future of sophists!
But AI might actually get you there in terms of superior pedagogy. Personal Q&A where most individuals wouldn't have afforded it before.
There are a lot of people in academia who are great at thinking about complex algorithms but can't write maintainable code if their life depended on it. There are ways to acquire those skills that don't go the junior developer route. Same with debugging and profiling skills
But we might see a lot more specialization as a result
Do they need to write maintainable code? I think probably not, it's the research and discovering the new method that is important.
They can’t write maintainable code because they don’t have real world experience of getting your hands dirty in a company. The only way to get startup experience is to build a startup or work for one
6 replies →
That’s a good analogy but I think we’ve already went from 0 to 10 rungs over the last couple of years. If we assume that the models or harnesses will improve more and more rungs will be removed. Vast majority of programmers aren’t doing novel, groundbreaking work.
The correct distinction is: if you can't do something without the agent, then you can't do it.
The problem that the author describes is real. I have run into it hundreds of times now. I will know how to do something, I tell AI to do it, the AI does not actually know how to do it at a fundamental level and will create fake tests to prove that it is done, and you check the work and it is wrong.
You can describe to the AI to do X at a very high-level but if you don't know how to check the outcome then the AI isn't going to be useful.
The story about the cook is 100% right. McDonald's doesn't have "chefs", they have factory workers who assemble food. The argument with AI is that working in McDonald's means you are able to cook food as well as the best chef.
The issue with hiring is that companies won't be able to distinguish between AI-driven humans and people with knowledge until it is too late.
If you have knowledge and are using AI tools correctly (i.e. not trying to zero-shot work) then it is a huge multiplier. That the industry is moving towards agent-driven workflows indicates that the AI business is about selling fake expertise to the incompetent.
> The problem arrises when Bob encounters a problem too complex or unique for agents to solve.
It’s actually worse than that: the AI will not stop and say ”too complex, try in a month with the next SOTA model”. Rather, it will give Bob a plausible looking solution that Bob cannot identify as right or wrong. If Bob is working on an instant feedback problem, it’s ok: he can flag it, try again, ask for help. But if the error can’t be detected immediately, it can come back with a vengeance in a year. Perhaps Bob has already gotten promoted by then, and Bobs replacement gets to deal with it. In either case, Bob cannot be trusted any more than the LLM itself.
> The problem arrises when Bob encounters a problem too complex or unique for agents to solve.
Or even sooner, when Bob’s internet connection is down, or he ran out of tokens, or has been banned from his favourite service, or the service is down, or he needs to solve a problem with a machine unable to run local models, or essentially any situation where he’s unable to use an LLM.
To me it feels more like learning to cook versus learning how to repair ovens and run a farm. Software engineering isn’t about writing code any more than it’s about writing machine code or designing CPUs. It’s about bringing great software into existence.
Or farming before and after agricultural machines. The principles are the same but the ”tactical” stuff are different.
That doesn't sound like much of an issue. Bob was already going to encounter problems that are too large and complex for him to solve, agents or otherwise. Life throws us hard problems. I don't recall if we even assumed Bob was unusually capable, he might be one of life's flounderers. I'd give good odds that if he got through a program with the help of agents he'll get through life achieving at least a normal level of success.
But there is also a more subtle thing, which is we're trending towards superintelligence with these AIs. At the point, Bob may discover that anything agents can't do, Alice can't do because she is limited by trying to think using soggy meat as opposed to a high-performance engineered thinking system. Not going to win that battle in the long term.
> The market will always value the exact things LLMs can not do, because if an LLM can do something, there is no reason to hire a person for that.
The market values bulldozers. Whether a human does actual work or not isn't particularly exciting to a market.
> we're trending towards superintelligence with these AIs
The article addresses this, because, well... no we aren't. Maybe we are. But it's far from clear that we're not moving toward a plateau in what these agents can do.
> Whether a human does actual work or not isn't particularly exciting to a market.
You seem to be convinced these AI agents will continue to improve without bound, so I think this is where the disconnect lies. Some of us (including the article author) are more skeptical. The market values work actually getting done. If the AIs have limits, and the humans driving them no longer have the capability to surpass those limits on their own, then people who have learned the hard way, without relying so much on an AI, will have an advantage in the market.
I already find myself getting lazy as a software developer, having an LLM verify my work, rather than going through the process of really thinking it through myself. I can feel that part of my skills atrophying. Now consider someone who has never developed those skills in the first place, because the LLM has done it for them. What happens when the LLM does a bad job of it? They'll have no idea. I still do, at least.
Maybe someday the AIs will be so capable that it won't matter. They'll be smarter and more through and be able to do more, and do it correctly, than even the most experienced person in the field. But I don't think that's even close to a certainty.
There's no good definition of superintelligence. A calculator is already way more capable than any human at doing simple mathematical operations, and even small AIs for local use can instantly recall all sorts of impressive knowledge about virtually any field of study, which would be unfeasible for any human; but neither of those is what people mean when they wonder whether future AIs will have superintelligence.
1 reply →
> But it's far from clear that we're not moving toward a plateau in what these agents can do.
It is a debatable topic, and I agree with you that it's unclear whether we will hit the wall or not at some point. But one point I want to mention is that at the time when the AI agents were only conceived and the most popular type of """AI""" was LLM-based chatbot, it also seemed that we're approaching some kind of plateau in their performance. Then "agents" appeared, and this plateau, the wall we're likely to hit at some point, the boundary was pushed further. I don't know (who knows at all?) how far away we can push the boundaries, but who knows what comes next? Who knows, for example, when a completely new architecture different from Transformers will come out and be adopted everywhere, which will allow for something new? Future is uncertain. We may hit the wall this year, or we may not hit it in the next 10-20 years. It is, indeed, unclear.
2 replies →
> we're trending towards superintelligence with these AIs
I wouldn't count on that because even if it happens, we don't know when it ill happen, and it's one of those things where how close it looks to be is no indication of how close it actually is. We could just as easily spend the next 100 years being 10 years away from agi. Just look at fusion power, self driving cars, etc.
Fusion isn't a good example. Self driving cars are a battle between regulation and 9's of reliability, if we were willing to accept self driving cars that crashed as much as humans it'd be here already.
Whatever models suck at, we can pour money into making them do better. It's very cut and dry. The squirrely bit is how that contributes to "general intelligence" and whether the models are progressing towards overall autonomy due to our changes. That mostly matters for the AGI mouthbreathers though, people doing actual work just care that the models have improved.
>But there is also a more subtle thing, which is we're trending towards superintelligence with these AIs
do you have any evidence for that, though? Besides marketing claims, I mean.
I've always quite liked https://ourworldindata.org/grapher/test-scores-ai-capabiliti... to show that once AIs are knocking at the door of a human capability they tend to overshoot in around a decade.
2 replies →
> That doesn't sound like much of an issue. Bob was already going to encounter problems that are too large and complex for him to solve, agents or otherwise.
I have literally never run into this in my career..challenges have always been something to help me grow.
The authors point went a little over your head.
It doesn't matter if Bob can be normal. There was no point to him being paid to be on the program.
From the article:
If you hand that process to a machine, you haven't accelerated science. You've removed the only part of it that anyone actually needed.
> It doesn't matter if Bob can be normal. There was no point to him being paid to be on the program.
Yeah, I'm surprised at the number of people who read the article and came away with the conclusion that the program was designed to churn deliverables, and then they conclude that it doesn't matter if Bob can only function with an AI holding his hand, because he can still deliver.
That isn't the output of the program; the output is an Alice. That's the point of the program. They don't want the results generated by Alice, they want the final Alice.
2 replies →
And then you realize that most of science is unnecessary. As TFA points out, it doesn't matter if the age of the universe is 13.77 or 13.79 billion years. So you ban AI in science, you produce more scientists who can solve problems that don't matter. So what?
Market values bulldozers for bulldozing jobs. No one is going to use bulldozers to mow a lawn.
If Bob is going to spend $500 in tokens for something I can do for $50.
I think Bob is not going to stay long in lawn mowing market driving a bulldozer.
"Things that have never been done before in software" has been my entire career. A lot of it requires specific knowledge of physics, modelling, computer science, and the tradeoffs involved in parsimony and efficiency vs accuracy and fidelity.
Do you have a solution for me? How does the market value things that don't yet exist in this brave new world?
> Not going to win that battle in the long term.
I would take that bet on the side of the wet meat. In the future, every AI will be an ad executive. At least the meat programming won't be preloaded to sell ads every N tokens.
From the article:
> There's a common rebuttal to this, and I hear it constantly. "Just wait," people say. "In a few months, in a year, the models will be better. They won't hallucinate. They won't fake plots. The problems you're describing are temporary." I've been hearing "just wait" since 2023.
We're not trending towards superintelligence with these AIs. We're trending towards (and, in fact, have already reached) superintelligence with computers in general, but LLM agents are among the least capable known algorithms for the majority of tasks we get them to do. The problem, as it usually is, is that most people don't have access to the fruits of obscure research projects.
Untrained children write better code than the most sophisticated LLMs, without even noticing they're doing anything special.
The rate of hallucination has gone down drastically since 2023. As LLM coding tools continue to pare that rate down, eventually we’ll hit a point where it is comparable to the rate we naturally introduce bugs as humans programmers.
2 replies →
How many people who cook professionally are gourmet chefs? I think it ends up that gourmet cooking is so infrequently needed that we don’t require everyone who makes food to do it, just a small group of professionally trained people. Most people who make food for a living work somewhere like McDonald’s and Applebee’s where a high level of skill is not required.
There will still be programming specialists in the future — we still have assembly experts and COBOL experts, after all. We just won’t need very many of them and the vast majority of software engineers will use higher-level tools.
That's the problem though: programmers who become the equivalent of McDonald's workers will be paid poorly like McDonald's workers and be treated as disposable like McDonald's workers.
I held this point of view for a while but I came to the (possibly naive) conclusion that it was just forced self-assurance. Truth is, the issues with sub-par output are just a prompting and supervision deficiency. An agent team can produce better end product if supervised and promoted correctly. The issue is most don’t take the time to do that. I’m not saying I like that this is true, quite the opposite. It is the reality of things now.
At some point the herding of idiot savants becomes more work than just doing the damn thing yourself in the first place.
I'm happy to herd idiots all my life if they come out of it smarter than they went in. The real tragedy with current LLM agents is that they're effectively stateless, and so all the effort of "educating" them feels wasted.
Once continuous learning is solved, I predict the problem addressed by TFA to become orders of magnitude bigger: What's the motivation for anyone to teach a person if an LLM can learn it much faster, will work for you forever, and won't take any sick days or consider changing careers?
1 reply →
> Truth is, the issues with sub-par output are just a prompting and supervision deficiency. An agent team can produce better end product if supervised and promoted correctly.
Which is more work, and less fun, than doing it myself. No thanks.
Just because Bob doesn't know e.g. Rust syntax and library modules well, doesn't mean that Bob can't learn an algorithm to solve a difficult problem. The AI might suggest classes of algorithms that could be applicable given the real world constraints, and help Bob set up an experimental plan to test different algorithms for efficacy in the situation, but Bob's intuition is still in the drivers's seat.
Of course, that assumes a Bob with drive and agency. He could just as easily tell the AI to fix it without trying to stay in the loop.
But if Bob doesn't know rust syntax and library modules well, how can he be expected to evaluate the output generating Rust code? Bugs can be very subtle and not obvious, and Rust has some constructs that are very uncommon (or don't exist) in other languages.
Human nature says that Bob will skim over and trust the parts that he doesn't understand as long as he gets output that looks like he expects it to look, and that's extremely dangerous.
Then perhaps Bob should have it use functional Scala, where my experience is that if it compiles and looks like what you expect, it's almost certainly correct.
1 reply →
Bob+agents is going to be able to solve much more complex problems than Bob without agents.
That's the true AI revolution: not the things it can accelerate, the things it can put in reach that you wouldn't countenance doing before.
Worse, soon fewer and fewer people will taste good food, including even higher and higher scale restaurants just using pre-made.
As fewer know what good food tastes like, the entire market will enshitify towards lower and lower calibre food.
We already see this with, for example, fruits in cold climates. I've known people who have only ever bought them from the supermarket, then tried them at a farmers when they're in season for 2 weeks. The look of astonishment on their faces, at the flavour, is quite telling. They simply had no idea how dry, flavourless supermarket fruit is.
Nothing beats an apple picked just before you eat it.
(For reference, produce shipped to supermarkets is often picked, even locally, before being entirely ripe. It last longer, and handles shipping better, than a perfectly ripe fruit.)
The same will be true of LLMs. They're already out of "new things" to train on. I question that they'll ever learn new languages, who will they observe to train on? What does it matter if the code is unreadable by humans regardless?
And this is the real danger. Eventually, we'll have entire coding languages that are just weird, incomprehensible, tailored to LLMs, maybe even a language written by an LLM.
What then? Who will be able to decipher such gibberish?
Literally all true advancement will stop, for LLMs never invent, they only mimic.
> As fewer know what good food tastes like, the entire market will enshitify towards lower and lower calibre food.
This happened a long time ago in the US. Drive through California's Central Valley sometime and sample the fruit sold fresh along the side of the road. It's a completely different experience than the version you get at Safeway.
Ironically, apples are one of the fruits where tree ripening isn't a big deal for a lot of varietals. You should have used tomato as the example, the difference there is night and day pretty much across the board.
If humans can prove that bespoke human code brings value, it'll stick around. I expect that the cases where this will be true will just gradually erode over time.
Real-world cooks don't exactly avoid those newfangled microwave ovens though. They use them as a professional tool for simple tasks where they're especially suitable (especially for quick defrosting or reheating), which sometimes allows them to cook even better meals.