Comment by hollerith
4 days ago
Like almost all the other doomers, Eliezer never claimed to know which generation of AIs would undergo a sudden increase in capability resulting in our extinction or some other doom, not with any specificity beyond saying that it would probably happen some time in the next few decades unless the AI project is stopped.
Idk, a few years ago when ChatGPT came out he was saying things like "if we're still alive in 3 years (...)", back when ChatGPT 3.5 was still a glorified transformer. And modern LLMs still are. It's the constant fear mongering that gets on my nerves.
And well, I'm not surprised nobody knows which generation of AI could undergo an increase in capability that causes our extinction; it's not even certain such a thing could exist, let alone which generation it would be.
He has been saying for a couple of years that it is possible any day now, but in the same breath he has always added that it is more likely 10 or 20 or 30 years from now than it is today.
It is not a contradiction to be confident of the outcome while remaining very uncertain of the timing.
If an AI is created and deployed that is clearly much "better at reality" than people are (and human organizations are, e.g., the FBI), that can discover new scientific laws and invent new technologies and be very persuasive, and we survive, then he will have been proved wrong.
Ok, I think I might have a heart attack sooner or later; it's a possibility, although not a very likely one.
If I said that, you might ask whether I had seen a doctor or had some other reason to suspect it, and that's my issue with him. He's a sci-fi writer who is scared of technology without a grasp of how it works, and that's OK. He can talk about what he fears, and that's OK. It still doesn't mean we should take him seriously just because.
My pet peeve is that when making laws regarding AI - at least in Europe - some consideration was given to how it works, what it is (...), and how it's discussed in the academic literature. I had a lawyer explain that in a course, and while not perfect, you eventually settle on something that is more or less reasonable. With Yudkowsky, you have a guy who is scared of nanotech and yada yada. Sure, he might be right. But if I had to act on something, it would look much more like the EU lawmaking process and much less like "AI will totally kill us sometime in the next 30 years, trust me". Perhaps now I'm clearer
And don't get me started on the rationalist stuff that just assumes pain is linear, and yada yada