Comment by ForHackernews
11 hours ago
This reads like it was written by AI. I don't understand how it provides any real security if the "guardrails" against prompt injection are just a system prompt telling the dumber model "don't do this"
I had the same thought. The "firewall" just assumes a dumb model can't be tricked.