Comment by root_axis
1 day ago
I don't think we have much to worry about in terms of economic disruption. At this point it seems pretty clear that LLMs are having a major impact on how software is built, but for almost every other industry the practical effects are mostly incremental.
Even in the software world, the effect of being able to build software a lot faster isn't really leading to a fundamentally different software landscape. Yes, you can now pump out a month's worth of CRUD in a couple days, but ultimately it's just the same CRUD, and there's no reason to expect that this will change because of LLMs.
Of course, creative people with innovative ideas will be able to achieve more. A talented engineer will be able to embark on a project they didn't have time to build before, and that will likely lead to some kind of software surplus that the economy feels on the margins. But in practical terms, the economy will continue to chug along at a sustained pace that's mostly in line with, e.g., economic projections from 10 years ago.
> At this point it seems pretty clear that LLMs are having a major impact on how software is built, but for almost every other industry the practical effects are mostly incremental.
Even just a year ago, most people thought the practical effects in software engineering were incremental too. It took another generation of models and tooling to get to the point where it could start having a large impact.
What makes you think the same will not happen in other knowledge-based fields after another iteration or two?
> most people thought the practical effects in software engineering were incremental too
Hmm... Are you saying it's having a clear positive (never mind "transformative") impact somewhere? Can you point to any place where we can see a clear, observable positive impact?
I know many companies that have replaced customer support agents with LLM-based agents. Replacing support with AI isn't new, but what is new is that the LLM-based ones have higher CSAT (customer satisfaction) rates than the humans they are now replacing (i.e., it's not just cost anymore... it's cost and quality).
It doesn't need to provide "observable clear positive impact". As long as the bosses think it improves the numbers, it will be used. See offshoring, or advertising everywhere.
Software is more amenable to LLMs because there is a rich source of highly relevant training data that corresponds directly to the building blocks of software, and the "correctness" of software is quasi-self-verifiable. This isn't true for pretty much anything else.
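To make "quasi-self-verifiable" concrete, here is a minimal sketch (the function, names, and cases are hypothetical, invented for illustration): a candidate implementation, whoever or whatever wrote it, can be run against an executable check, and the pass/fail signal comes back with no human judgment involved. There's no equivalent loop for, say, a legal brief or a marketing plan.

```python
# Sketch: software's correctness signal is mechanical and immediate.
# All names here are hypothetical, invented for illustration.

def slugify(title: str) -> str:
    """Candidate implementation (e.g., produced by an LLM)."""
    return "-".join(title.lower().split())

def verify() -> bool:
    """Executable check: run the candidate against known cases."""
    cases = {
        "Hello World": "hello-world",
        "  Already   Spaced  ": "already-spaced",
    }
    return all(slugify(inp) == want for inp, want in cases.items())

if __name__ == "__main__":
    # The feedback loop an LLM (or its harness) can act on directly.
    print("PASS" if verify() else "FAIL")
```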
The more verifiable the domain, the better suited it is. We see similar reports of benefits in advanced mathematics research from Terence Tao, though granted, some of those reports seem to amount to cases where few people knew that data relevant to a proof existed, but the LLM had it in its training corpus. Still, verifiably correct domains are well-suited.
So the concept of formal verification is as relevant as ever, and as programs become more interconnected, complexity rises and verification becomes more difficult.
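As a rough illustration of what that verifiability buys you (a sketch under assumptions, not anyone's actual workflow; it uses the third-party `hypothesis` library), a property-based test makes the spec itself executable across thousands of generated inputs, which is exactly the mechanical feedback most other domains lack:

```python
# Sketch: the spec is executable, so checking is mechanical.
# Requires: pip install hypothesis pytest
from hypothesis import given, strategies as st

def merge(xs: list[int], ys: list[int]) -> list[int]:
    """Candidate merge of two sorted lists (e.g., LLM-generated)."""
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            out.append(xs[i]); i += 1
        else:
            out.append(ys[j]); j += 1
    return out + xs[i:] + ys[j:]

@given(st.lists(st.integers()), st.lists(st.integers()))
def test_merge_sorted_and_complete(xs, ys):
    # Two properties, checked over generated inputs: the output is
    # the sorted multiset union of the (sorted) inputs.
    assert merge(sorted(xs), sorted(ys)) == sorted(xs + ys)
```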
Presumably at some point capability will translate to other domains even if the exchange rate is poor. If it can autonomously write software and author CAD files then it can autonomously design robots. I assume everything else follows naturally from that.
Agreed. I also believe the impact on producing software is over-hyped, and that in the long term there will be a pull-back in the usage of these tools as their negative effects are figured out.
The unfortunate truth (for Amodei) is that you can't automate true creativity, nor standardise taste. Try as they might.
> I don't think we have much to worry about in terms of economic disruption. At this point it seems pretty clear that LLMs are having a major impact on how software is built, but for almost every other industry the practical effects are mostly incremental.
You clearly didn't read the post. He is talking about AI that is smarter than any human, not today's LLMs. The fact that powerful AI doesn't exist yet doesn't mean there is nothing to worry about.
> You clearly didn't read the post
This kind of petty remark is like a reverse em dash. Greetings, fellow human.
Anyway, I did read it. The author's description of a future AI is basically just a more advanced version of LLMs:
> By “powerful AI,” I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:
They then go on to list several properties that meet their definition, but what I'm trying to explain in my comment is that I don't accept them all at face value. I think it's fair to critique from that perspective, since the author explicitly modeled their future on today's LLMs, unlike many AI essays that skip straight to the superintelligence meme as their premise.
> They then go on to list several properties that meet their definition
No, these properties are part of his definition. To say that we have nothing to worry about because today's LLMs don't have these properties misses the point.