Comment by nopinsight

4 days ago

You are assuming that a superintelligent AI will follow every command from users outside the labs.

I mean, the model itself is just sitting there, waiting to be prompted. The labs try to embed safeguards, but they don't know (nor does anyone else) how to make a foolproof safeguard for an AI system. We don't even understand how AI thinks.