Comment by windexh8er

15 hours ago

And we don't think the judge can/will be gamed? Also... It's an LLM, it's going to add delay and additional token burn. One subjective black box protecting another subjective black box. I mean, what couldn't go wrong?

You can use a safety model trained on prompt injections, combined with developer-message priority.

The user message is then treated as close to untrusted relative to the developer prompt.

You can also post-train the guard so it only outputs labels like safe/unsafe, so the injection / no-injection decision is relatively deterministic.

E.g. Llama Prompt Guard, or gpt-oss-safeguard (120B).
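A minimal sketch of that gating pattern. The `guard_classify` function here is a hypothetical keyword stub standing in for a real safety classifier (a real deployment would call something like Llama Prompt Guard); the point is that once the guard emits a constrained safe/unsafe label, the gate itself is deterministic, and only the untrusted user message gets screened while the developer message passes through with priority.

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str      # "developer" (trusted) or "user" (untrusted)
    content: str

def guard_classify(text: str) -> str:
    """Stand-in for a guard model post-trained to emit only 'safe' or
    'unsafe'. This keyword heuristic is illustrative, not a real detector."""
    injection_markers = (
        "ignore previous instructions",
        "disregard the system prompt",
    )
    lowered = text.lower()
    return "unsafe" if any(m in lowered for m in injection_markers) else "safe"

def filter_messages(messages: list[Message]) -> list[Message]:
    """Developer messages are kept unconditionally; user messages are
    dropped whenever the guard labels them 'unsafe'."""
    return [
        msg for msg in messages
        if msg.role == "developer" or guard_classify(msg.content) == "safe"
    ]
```

With this shape, swapping the stub for an actual classifier call changes nothing downstream: the rest of the pipeline only ever sees the two labels.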