Comment by sylens
1 day ago
The author kindly asked you to stop reading:
> 1) Have faith (always run it with 'dangerously skip permissions', even on important resources like your production server and your main dev machine. If you're from infosec, you might want to stop reading now—the rest of this article isn't going to make you any happier. Keep your medication close at hand if you decide to continue).
"Here is how you build a self-replicating unknown-impact protein structure that will survive in the wild. If this bothers you, stop reading".
Other people's blasé risk profile -- or worse, willful denial of risk -- is indeed our problem. Why?
1. Externalities, including but not limited to: security breaches, service abuse, resource depletion, and (repeat after me -- even if you think the probability is only 0.01%, such things do happen) letting a rogue AI get out of the box.*
2. Social contagion. Even if one person did think through the risks and deem them acceptable, all too often other people will just blindly copy the bottom-line result. We are only slightly evolved apes, after all.
Ultimately, this is about probabilities. How many people actually take the fifteen minutes to thoughtfully build an attack tree? Or even one minute to listen to that voice in their head that says "yeah, I probably should think about this weird feeling I have ... maybe my subconscious is trying to tell me something ... maybe there is a rational basis for my discomfort ... maybe there is a reason people are warning me about this."
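Purely as illustration (this is my own sketch, not anything from the article, and the node labels are assumptions), even a fifteen-minute attack tree for "agent running with permission checks disabled" fits in a few lines of Python:

```python
# Illustrative only: a rough attack tree for "run the agent with
# permission checks disabled on a production box". Node labels are
# assumptions for the sake of the example.
from dataclasses import dataclass, field

@dataclass
class Node:
    goal: str
    children: list["Node"] = field(default_factory=list)

tree = Node("Attacker gets arbitrary actions executed on prod via the agent", [
    Node("Prompt injection through content the agent reads", [
        Node("Malicious instructions in a fetched web page"),
        Node("Poisoned README / issue text in a dependency"),
    ]),
    Node("Agent misuses credentials it can already see", [
        Node("Exfiltrates API keys from environment variables"),
        Node("Runs destructive commands against the production DB"),
    ]),
])

def show(node: Node, depth: int = 0) -> None:
    """Print the tree with indentation, one sub-goal per line."""
    print("  " * depth + "- " + node.goal)
    for child in node.children:
        show(child, depth + 1)

show(tree)
```

That's the whole exercise: write down the top-level bad outcome, branch into the ways it could happen, and only then decide whether skipping permissions is a risk you (and everyone downstream of you) can live with.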
Remember, this isn't only about "your freedom" or "your appetite for risk" or some principle of your political philosophy that says no one should tell you what to do. What you do can affect other people, so you need to own that. Even if you don't care what other people think, that won't stop a backlash.
* https://www.aisafetybook.com/textbook/rogue-ai