Comment by socalgal2
4 days ago
> it assumes that soon LLMs will gain the capability of assisting humans
No, it does not. It assumes there will be progress in AI. It does not assume that progress will be in LLMs
It doesn't require AI to be better than humans for AI to take over because, unlike a human, an AI can be cloned. You have 2 AIs, then 4, then 8... then millions. All able to do the same things as humans (the assumption of AGI). Build cars, build computers, build rockets, build space probes, build airplanes, build houses, build power plants, build factories. Build robot factories to create more robots and more power plants and more factories.
PS: Not saying I believe in the doom. But the thought experiment doesn't seem indefensible.
> It does not assume that progress will be in LLMs
If that's the case, then there's not as much reason to assume that this progress will occur now rather than years from now; LLMs are the only major recent development that gives the AI 2027 scenario a reason to exist.
> You have have 2 AIs, then 4, then 8.... then millions
The most powerful AI we have now is strictly hardware-dependent, which is why only a few big corporations have it. Scaling it up or cloning it is bottlenecked by building more data centers.
Now it's certainly possible that there will be a development soon that makes LLMs significantly more efficient and frees up all of that compute for more copies of them. But there's no evidence that even state-of-the-art LLMs will be any help in finding this development; that kind of novel research is just not something they're any good at. They're good at doing well-understood things quickly and in large volume, with small variations based on user input.
> But the thought experiment doesn't seem indefensible.
The part that seems indefensible is the unexamined assumptions about LLMs' ability (or AI's ability more broadly) to jump to optimal human ability in fields like software or research, using better algorithms and data alone.
Take https://ai-2027.com/research/takeoff-forecast as an example: it's the side page of AI 2027 that attempts to deal with these types of objections. It spends hundreds of paragraphs on what the impact of AI reaching a "superhuman coder" level would be on AI research, on the difference in effectiveness between an organization's average and best researchers, and on the impact of an AI closing that gap and matching the research effectiveness of the best humans.
But what goes completely unexamined and unjustified is the idea that AI will be capable of reaching "superhuman coder" level, or developing peak-human-level "research taste", at all, at any point, with any amount of compute or data. It's simply assumed that it will get there because the exponential curve of the recent AI boom will keep going up.
Skills like "research taste" can't be learned at a high level from books and the internet, even if, like ChatGPT, you've read the entire Internet and can see all the connections within it. They require experience, trial and error. Probably the same amount that a human expert would require, but even that assumes we can make an AI that can learn from experience as efficiently as a human, and we're not there yet.
> The most powerful AI we have now is strictly hardware-dependent
Of course that's the case and it always will be - the cutting edge is the cutting edge.
But the best AI you can run on your own computer is way better than the state of the art just a few years ago - progress is being made at all levels of hardware requirements, and hardware is progressing as well. We now have dedicated hardware in some of our own devices for doing AI inference - the hardware-specificity of AI doesn't mean we won't continue to improve and commoditise said hardware.
> The part that seems indefensible is the unexamined assumptions about LLMs' ability (or AI's ability more broadly) to jump to optimal human ability [...]
I don't think this is at all unexamined. But I think it's risky not to consider the strong possibility when we have an existence proof in ourselves of that level of intelligence, an algorithm to get there, and no particular reason to believe we're optimal, since that algorithm - evolution - did not optimise us for intelligence alone.
> No, it does not. It assumes there will be progress in AI. It does not assume that progress will be in LLMs
I mean, for the specific case of the 2027 doomsday prediction, it really does have to be LLMs at this point, just given the timeframes. It is true that the 'rationalist' AI doomerism thing doesn't depend on LLMs, and in fact predates transformer-based models, but for the 2027 thing, it's gotta be LLMs.