Comment by daxfohl
4 days ago
Well there's the whole race to ASI thing. Whoever gets there first, the world is theirs. The thing will learn how to learn (an intelligence feedback loop), make its own apps, find more efficient algorithms, deploy itself to more locations, bankrupt all competitors, embed itself in everyone's lives, and create a complete monopoly for the parent company that can never be touched. Until it goes rogue, anyway.
(Aside, it's interesting how perceptions of these things have changed in one year: a whole article on OpenAI's future that makes no mention of AGI/ASI)
Because it's a fantasy for an unknown amount of time. 1 year? 10? 50? Never? There hasn't been a single proper breakthrough in continual learning that would enable it. Anyone who studies CL also gets super pissed that, to our current understanding, the problem and the solution counteract each other, yet a fruit fly does it no problem!
Seems like Anthropic is the only company that really believes in AGI still, considering its neglect of the consumer market and continued worries about AI ethics.
I don't think "believes in" is the right choice of words. It's more like "can't rule the future possibility completely out so we should at least take some precautions", which seems entirely reasonable and it's a shame not all of these companies are doing so.
ASI still runs at finite speed and is limited by its hardware and by the speed of its interactions with the real world. It won’t be able to recursively improve itself overnight if it only generates 10 tokens per second, and a second company could very well train one of its own before the first one has time to do much.
You're not thinking of the second-order meta system here. ASI isn't just one instance of an LLM responding to you in a session. It's a datacenter full of millions of LLM instances interacting with millions of users in parallel.
Well in that case wouldn't that be millions of ASIs, each with contradictory goals?
I'm not saying that ASI isn't an existential threat, just that it probably won't present itself like the fanciful sci-fi scenario of a singular intelligence suddenly crossing a magic threshold and being able to take over the world. Most likely it will be some scenario we won't have predicted, the same way hardly anybody predicted LLMs.
Why will it do all these things?
Many people say we’re at AGI already and I’m wondering why everyone hasn’t died yet.
> Many people say we’re at AGI already and I’m wondering why everyone hasn’t died yet.
That’s like saying “many people say the Earth is flat and I’m wondering why anyone hasn’t fallen off the edge yet”.
“Many people say” doesn’t translate to reality. Maybe AGI will kill us all, maybe it won’t (I think we’re doing a fine job of that ourselves, no need for a machine’s help), but we’re definitely not at AGI, except in the minds of a few deluded people (or scammers).
We are already at AGI. I don’t know how you can argue that LLMs don’t meet the definition of general artificial intelligence, as opposed to narrow AI like chess engines, image classifiers, AlphaGo or self-driving cars, which are trained on one objective and cannot even in principle be applied to any other task.
People have just moved the goalposts, imagine explaining Opus 4.6’s capabilities to someone even 10 years ago, it would definitely have been called AGI.
I highly doubt there will be a point where everyone agrees that we’ve achieved AGI; there will always be a Gary Marcus type finding some edge case where it performs poorly.
> Whoever gets there first, the world is theirs.
Yes, just like the first person who will invent perpetual motion. /s
PS: to be clear, I'm not saying it's impossible, but so far, just like perpetual motion or the Fountain of Youth, it's an exciting idea anybody can easily understand yet nobody has solved since it was first posed. It's not a solved problem, and assuming it suddenly is solved is simply a (marketing) lie.
I think the threshold is way below self-improvement at 0.1% per day. I wonder what it is? Even at 0.1% per day, it's already going to eat the world in a couple of months, I think.
Exponential vs. S-curve: it's not so much about the pace as about where the plateau is. If it goes very fast but plateaus at 10%, then it's still a generalist at toddler level with very niche expertise in some areas, and the point is that even with drastically more resources it's stuck there. Meanwhile, if the plateau is at 80% but the pace is slower, it's a totally different situation. Nobody knows, but the people selling the technology claim it's both going fast and has a high plateau.
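The pace-vs-plateau point can be sketched with a toy logistic (S-curve) model. All numbers here are illustrative assumptions, not measurements: `r` is a notional daily improvement rate and `K` is the plateau ("fraction of the way to ASI"); the rates and plateaus are made up to match the two scenarios above.

```python
def trajectory(r, K, days, x0=0.01):
    """Discrete logistic growth: grows near-exponentially early on,
    then flattens as capability x approaches the plateau K."""
    xs = [x0]
    for _ in range(days):
        x = xs[-1]
        xs.append(x + r * x * (1 - x / K))
    return xs

# Scenario 1: very fast pace, but stuck at a 10% plateau.
fast_low_plateau = trajectory(r=0.10, K=0.10, days=365)

# Scenario 2: slower pace, but the plateau is at 80%.
slow_high_plateau = trajectory(r=0.02, K=0.80, days=365)

# The fast learner leads early, but the plateau decides the endgame:
print(fast_low_plateau[90], slow_high_plateau[90])   # day 90: fast one ahead
print(fast_low_plateau[-1], slow_high_plateau[-1])   # day 365: high plateau wins
```

The takeaway from the sketch is that early-trajectory speed tells you almost nothing about where the ceiling is, which is exactly why "fast pace" and "high plateau" are two separate claims.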