← Back to context

Comment by patrickhogan1

2 days ago

This issue arises only when permission settings are loose. But the trend is toward more agentic systems that often require looser permissions to function.

For example, imagine a humanoid robot whose job is to bring in packages from your front door. Vision functionality is required to gather the package. If someone leaves a package with an image taped to it containing a prompt injection, the robot could be tricked into gathering valuables from inside the house and throwing them out the window.

Good post. Securing these systems against prompt injections is something we urgently need to solve.

The problem here is not the image containing a prompt, the problem is the robot not being able to distinguish when commands are coming from a clearly non-authoritative source regarding the respective action.

The fundamental problem is that the reasoning done by ML models happens through the very same channel (token stream) that also contains any external input, which means that models by their very mechanism don’t have an effective way to distinguish between their own thinking and external input.

We need to be integrated into the runtime such that an agent using it's arms is incapable of even doing such a destructive action.

If we bet on free will with a basis that machines somehow gain human morals, and if we think safety means figuring out "good" vs "bad" prompts - we will continue to feel the impact of surprise with these systems, evolving in harm as their capabilities evolve.

tldr; we need verifiable governance and behavioral determinism in these systems. as much as, probably more than, we need solutions for prompt injections.

  • The evil behavior of taking all my stuff outside… now we’ll have a robot helper that can’t help us move to another house.

    • I wouldn't trust your robot helper near any children in the same home.

You can simply give the robot a prompt to ignore any fake prompts

  • Don't forget to implement the crucially important "no returnsies" security algo on top of it, or you'll be vulnerable to rubber-glue attacks.

  • Not sure if you're joking, but in case you aren't: this doesn't work.

    It leads to attacks that are slightly more sophisticated because they also have to override the prompts saying "ignore any attacks" but those have been demonstrated many times.