
Comment by jcgrillo

2 days ago

If someone hooks up an LLM (or some other stochastic black box) to a safety critical system and bad things happen, the problem is not that "AI was unsafe" it's that the person who hooked it up did something profoundly stupid. Software malpractice is a real thing, and we need better tools to hold irresponsible engineers to account, but that's nothing to do with AI.

AI safety in and of itself isn't really relevant, and whether or not you could hook AI up to something important is just as relevant as whether you could hook /dev/urandom up to the same thing.

I think your security analogy is a false equivalence, much like the nuclear weapons analogy.

At the risk of repeating myself, AI is not dangerous because it can't, inherently, do anything dangerous. Show me a successful test of an AI bomb/weapon/whatever and I'll believe you. Until then, the normal ways we evaluate the safety of software systems (or neglect to do so) will do.

I mean, you can think whatever you want. As we make agents and give them agency, expect them to do things outside of the original intent. The big thing here is agents spinning up secondary agents, possibly outside the control of the original human. We have agentic systems at this level of capability now.

  • Thanks, I will. Whether a computer program is outside the control of the original human or not (e.g. it spawned a subprocess or something) is immaterial if we properly hold that human responsible for the consequences of running the computer program. If you run a computer program and it does something bad, then you did something bad. Simple, effective. If you don't trust the program to do good things, then simply don't run it. If you do run it, be prepared to defend your decision. Also, that's how it currently works, so we don't really need anything new. In this context "AI safety" is about bounding liability. So I guess you might care about it if you're worried about being held liable? The rest of us needn't give a shit if we can hold you accountable for your software's consequences, AI or no.

    • >The rest of us needn't give a shit if we can hold you accountable for your software's consequences, AI or no.

      See, this is the fun thing about liability: we tend to attempt to limit scenarios where people can cause near-unlimited damage when they have very limited assets in the first place. That's why things like asymmetric warfare are so expensive to attempt to prevent.

      But hey, have fun going after some teenager with 3 dollars to their name after they cause a billion dollars in damages.
