Comment by dreamcompiler
2 years ago
AGI is still a long way off. The history of AI goes back 65 years and there have been probably a dozen episodes where people said "AGI is right around the corner" because some program did something surprising and impressive. It always turns out human intelligence is much, much harder than we think it is.
I saw a tweet the other day that sums up the current situation perfectly: "I don't need AI to paint pictures and write poetry so I have more time to fold laundry and wash dishes. I want the AI to do the laundry and dishes so I have more time to paint and write poetry."
AGI does look like an unsolved problem right now, and a hard one at that. But I think it is wrong to think that it needs an AGI to cause total havoc.
I think my dyslexic namesake Prof Stuart Russell got it right. Humans won't need an AGI to dominate and kill each other. Mosquitoes have killed far more people than war. Ask yourself how long it will take us to develop a neural network as smart as a mosquito, because that's all it will take.
It seems so simple, as the beastie only has 200,000 neurons. Yet I've been programming for over four decades, and for most of them it was evident that neither I nor any of my contemporaries were remotely capable of emulating it. That's still true, of course. Never in my wildest dreams did it occur to me that repeated applications of simple training methods could produce something I couldn't: a mosquito brain. Now that looks imminent.
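To put that 200,000-neuron figure in perspective, here is a back-of-the-envelope sizing sketch. The synapse count per neuron and the model size are rough assumptions for illustration, not measured values:

```python
# Back-of-the-envelope sizing of a "mosquito brain" (all figures
# besides the neuron count are rough assumptions for illustration).
neurons = 200_000                  # figure quoted in the comment above
synapses_per_neuron = 100          # assumed average; insect brains vary widely
synapses = neurons * synapses_per_neuron

# Treating each synapse as one trainable parameter, the whole budget
# is ~2e7 -- tiny next to a multi-billion-parameter language model.
llm_parameters = 7_000_000_000     # e.g. a mid-sized modern model (assumed)
ratio = llm_parameters / synapses
print(synapses, ratio)             # 20000000 350.0
```

On these (very loose) numbers, today's large models already carry hundreds of times the raw parameter budget, which is why emulating one suddenly looks plausible even though the behaviour itself remains hard.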
Now I don't know which to be more scared of: an AGI, or an artificial mosquito swarm run by Pol Pot.
Producing a mosquito brain is easy. Powering it with the Krebs cycle is much harder.
Yes, you can power these things with batteries, but those are going to be a lot bigger than real mosquitoes and have much shorter flight times.
But then, haven't we reached that point already with the development of nuclear weapons? I'm more scared of a lunatic (whether of North Korean, Russian, American, or any other nationality) being behind the "nuclear button" than an artificial mosquito swarm.
The problem is that strong AI is far more multipolar than nuclear technology, and the ways in which it might interact with other technologies to create emergent threats are very difficult to foresee.
And to be clear, I'm not talking about superintelligence, I'm talking about the models we have today.
You cannot copy a nuclear weapon via drag and drop.
The way I see it, this is simply a repetition of history.
El dorado, the fountain of youth, turning dirt to gold, the holy grail and now... superintelligence.
Human flight, resurrection (cardiopulmonary resuscitation machines), doubling human lifespans, instantaneous long distance communication, all of these things are simply pipe dreams.
Setting foot on the moon, splitting the atom and transmuting elements, curing incurable diseases like genetic blindness and spinal atrophy...
> doubling human lifespans
This is partly a statistical effect of greatly reducing infant mortality (which used to be as bad as 50%) but even that is mind-blowing.
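That statistical effect is easy to show with a toy calculation (the numbers below are hypothetical, chosen only to make the arithmetic clean):

```python
# Toy illustration (numbers are hypothetical): how cutting infant
# mortality alone can double average lifespan, without anyone who
# survives infancy living a single day longer.
def life_expectancy(infant_mortality, adult_lifespan):
    # Infants who die are counted as living 0 years; everyone else
    # reaches adult_lifespan.
    return (1 - infant_mortality) * adult_lifespan

print(life_expectancy(0.5, 70))   # 35.0 -- half die in infancy
print(life_expectancy(0.0, 70))   # 70.0 -- infant mortality eliminated
```

So a society where survivors always lived to 70 would see its "average lifespan" double purely from fixing infant mortality.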
> resurrection (cardiopulmonary resuscitation machines)
Get back to me when that can resurrect me after I've been dead for a week or so.
Sometimes, my dishwasher stacks are poetry.
That statement is extremely short-sighted. You don't need AI to do laundry and dishes; you need expensive robotics. In fact, both already exist in a cheapened form: a washing machine and a dishwasher. They already take 90% of the work out of it.
That "tweet" loses its shine once you see that we value what has Worth as a collective treasure, and the more Value is produced the better; meanwhile, engaging in producing something of value is (hopefully, but not necessarily) a good exercise in intelligent (literal sense) cultivation.
So, yes, if algorithms strict or loose could one day produce Art, and Thought, and Judgement, of Superior quality: very welcome.
Do not miss that the current world is increasingly complex to manage, as are our lives, and Aids would be welcome. The situation is much more complex than that wish for leisure or even "sport" (literal sense).
> we value what has Worth as a collective treasure, and the more Value is produced the better ... So, yes, if algorithms strict or loose could one day produce Art, and Thought, and Judgement, of Superior quality: very welcome.
Except that's not how we value the "worth" of something. If "Art, and Thought, and Judgement" -- be they of "Superior quality" or not -- could be produced by machines, they'd be worth a heck of a lot less. (Come to think of it, hasn't that process already begun?)
Also, WTF is up with the weird capitalisations? Are you from Germany, or just from the seventeenth century?
The issue I have with all of these discussions is how vague everyone always is.
“Art” isn’t a single thing. It’s not just pretty pictures. AI can’t make art. And give a good solid definition of thought which doesn’t depend on lived experiences while we’re at it. You can’t. We don’t have one.
“AGI” as well.
> Except that's not how we value the "worth" of something
In that case, are you sure your evaluation is proper? If a masterpiece is there, and it /is/ a masterpiece (beyond appearances), why would its source change its nature and quality?
> Come to think of it, hasn't that process already begun?
Please present relevant examples. I have already observed in the past that simulations of the art made by X cannot just look similar; they require the process, the justification, the meanings that had X producing them. The style of X is not just thickness of lines, temperature of colours, and flatness of shades: it is in the meanings that X wanted to express and convey.
> WTF is up with the weird capitalisations?
Platonic terms - the Ideas in the Hyperuranium. E.g. "This action is good, but what is Good?".
Well, copilots do precisely that, no?
Or are you talking about folding literal laundry? In that case it's more of a robotics problem, not an ASI one, right?
You don't need ASI to fold laundry; you need to achieve reliable, safe, and cost-efficient robotics deployments. These are different problems.
> You don't need ASI to fold laundry
Robots are garbage at manipulating objects, and it's the software that's lacking much more than the hardware.
Let's say AGI is 10 and ASI is 11.
They're saying we can't even get this dial cranked up to 3, so we're not anywhere close to 10 or 11. You're right that folding laundry doesn't need 11, but that's not relevant to their point.
You wouldn't get close to ASI before the laundry problem had been solved.
It’s harder than we thought, so we leveraged machine learning to grow it rather than creating it symbolically. The leaps in the last 5 years are far beyond anything in the prior half century, and they make predictions of near-term AGI much more than a “boy who cried wolf” scenario to anyone really paying attention.
I don’t understand how your second paragraph follows. It just seems to be whining that text and art generative models are easier than a fully fledged servant humanoid, which seems like a natural consequence of training data availability and deployment cost.
> I don’t understand how your second paragraph follows. It just seems to be whining that text and art generative models are easier than a fully fledged servant humanoid, which seems like a natural consequence of training data availability and deployment cost.
No, it's pointing out that "text and art generative models" are far less useful [1] than machines that would be just a little smarter at boring, ordinary work, relieving real normal people from drudgery.
I find it rather fascinating how one could not understand that.
___
[1]: At least to humanity as a whole, as opposed to Silicon Valley moguls, oligarchs, VC-funded snake-oil salesmen, and other assorted "tech-bros" and sociopaths.
> No, it's pointing out that "text and art generative models" are far less useful [1] than machines that would be just as little smarter at boring ordinary work, to relieve real normal people from drudgery.
That makes no sense. Is AlphaFold less useful than a minimum wage worker because AlphaFold can't do dishes? The past decades of machine learning have revealed that the visual-spatial capacities that are commonplace to humans are difficult to replicate artificially. This doesn't mean the things which AI can do well are necessarily less useful than the simple hand-eye coordination that is beyond their current means. Intelligence and usefulness aren't a single dimension.
It's not, according to expert consensus (top labs, top scientists).
Yeah but the exponential growth of computer power thing https://x.com/josephluria/status/1653711127287611392
I think AGI in the near future is pretty much inevitable. I mean, you need the algorithms as well as the compute, but there are so many of the best and brightest trying to do that right now.