
Comment by amarcheschi

4 days ago

OK, I think I might have a heart attack sooner or later; it's a possibility, although not a very high one.

If I said so, you might ask whether I'd seen a doctor or had some other reason to suspect it, and that's my issue with him. He's a sci-fi writer who's scared of technology without a grasp of how it works, and that's OK. He can talk about what he fears, and that's OK. It still doesn't mean we should take him seriously just because.

My pet peeve is that when laws regarding AI were being made - at least in Europe - some consideration was given to how it worked, what it was (...), and how it's discussed in the academic literature. I had a lawyer explain that in a course, and while the result isn't perfect, you eventually settle on something that's more or less reasonable. With Yudkowsky, you have a guy who is scared of nanotech and yada yada. Sure, he might be right. But if I had to act on something, it would look much more like the EU lawmaking process and much less like "AI will totally kill us within the next 30 years, trust me". Perhaps now I'm being clearer.

And don't get me started on the rationalist stuff that just assumes pain is linear, and yada yada.

Eliezer has written extensively on why he thinks AI research is going to kill us all. He has also done three-hour-long interviews on the subject, which are published on YouTube.

  • And perhaps YouTube is the appropriate place to talk about the probabilities he pulls out of thin air, such as ChatGPT 5 killing us with less than 50% chance, the bad math he showed us a few years ago on Reddit, and his proposal to trust Bayes rather than the scientific method

    At least he could learn to use some confidence intervals to make everything appear more serious /s

    I'm very much in favor of research in AI safety, maybe done with less scaremongering and fewer threats of striking countries outside of the GPU-limit agreement (and less Bayes, God)