Comment by hollerith

4 days ago

He has been saying for a couple of years that it is possible any day now, but in the same breath he has always added that it is more likely 10 or 20 or 30 years from now than it is today.

It is not a contradiction to be confident of the outcome while remaining very uncertain of the timing.

If an AI is created and deployed that is clearly much "better at reality" than people are (and human organizations are, e.g., the FBI), that can discover new scientific laws and invent new technologies and be very persuasive, and we survive, then he will have been proved wrong.

OK, I think I might have a heart attack sooner or later; it's a possibility, although not a very likely one.

If I said so, you might ask me whether I had seen a doctor or had some other reason to suspect it, and that's my issue with him. He's a sci-fi writer who is scared of technology without a grasp of how it works, and that's OK. He can talk about what he fears, and that's OK. It still doesn't mean we should take him seriously just because.

My pet peeve is that when laws regarding AI were being drafted, at least in Europe, some consideration was given to how it works, what it is (...), and how it's discussed in the academic literature. I had a lawyer in a course explain that process, and while it's not perfect, you eventually settle on something that is more or less reasonable. With Yudkowsky, you have a guy who is scared of nanotech and yada yada. Sure, he might be right. But if I had to act on something, it would look much more like the EU process for making laws and much less like "AI will totally kill us sometime in the next 30 years, trust me". Perhaps now I'm being clearer.

And don't get me started on the rationalist stuff that just assumes pain is linear and yada yada.

  • Eliezer has written extensively on why he thinks AI research is going to kill us all. He has also done 3-hour-long interviews on the subject, which are published on YouTube.

    • And perhaps YouTube is the appropriate place to talk about the probabilities he pulls out of thin air, such as ChatGPT 5 killing us with less than 50% chance, the bad math he showed us a few years ago on Reddit, and his proposal to trust Bayes rather than the scientific method.

      At least he could learn to use some confidence intervals to make everything appear more serious /s

      I'm very much in favor of research in AI safety, maybe done with less scaremongering and fewer threats of striking countries outside of the GPU-limit agreement (and less Bayes, God).