Comment by danny_codes
2 days ago
I mean, the model itself is just sitting there, waiting to be prompted. The labs try to embed safeguards, but nobody — not the labs, not any of us — knows how to make a foolproof safeguard for an AI system. We don't even understand how AI thinks.