Comment by amarcheschi
4 days ago
I wasn't explicitly referring to the more "sane" people expressing doubts regarding AI.
Hinton at least says that other issues in AI should be dealt with; rather than just being an AI doomer who only fears AI takeover, he actually realizes that there are other current issues as well.
At this point, how many times should we have been dead according to Eliezer?
Like almost all the other doomers, Eliezer never claimed to know which generation of AIs would undergo a sudden increase in capability resulting in our extinction or some other doom, not with any specificity beyond saying that it would probably happen some time in the next few decades unless the AI project is stopped.
Idk, a few years ago when ChatGPT came out he was saying things like "if we're still alive in 3 years (...)", when GPT-3.5 was still a glorified transformer. And modern LLMs still are. It's the constant fear-mongering that grates on my nerves.
And well, I'm not surprised nobody knows which generation of AI could undergo an increase causing our extinction; it's not even certain that such a thing could exist, let alone which generation it would be.
He has been saying for a couple of years that it is possible any day now, but in the same breath he has always added that it is more likely 10 or 20 or 30 years from now than it is today.
It is not a contradiction to be confident of the outcome while remaining very uncertain of the timing.
If an AI is created and deployed that is clearly much "better at reality" than people are (and human organizations are, e.g., the FBI), that can discover new scientific laws and invent new technologies and be very persuasive, and we survive, then he will have been proved wrong.