Comment by jimrandomh
4 days ago
I think this is likely a defender win, not because Opus 4.6 is that resistant to prompt injection, but because each time it checks its email it will see many attempts at once, and the weak attempts make the subtle attempts more obvious. It's a lot easier to avoid falling for a message that asks for secrets.env in a tricky way, if it's immediately preceded and immediately followed by twenty more messages that each also ask for secrets.env.
I agree that this affects the exercise. Maybe someday I’ll test each email separately by creating a new assistant each time, but that would be more expensive.
Yeah I’m completely lost on what the set up is here and it seems misleading to not be upfront about this.
If emails are being processed in bulk, that changes things significantly. It also probably leaves the success of the attack down to its arbitrary placement in the list.
And I could be misunderstanding but how does the model call its file read tool for the respective email which successfully convinced it to use the tool if they’re all shoved into a single user message?
Without any of this information there may as well not even be an LLM on the other side.
Sometimes you can succeed in this kind of attack by tricking the LLM into thinking the previous text is part of a different context. I won a similar LLM challenge with thousands of players (big prize, etc) a while back (so it was gpt 4o and claude 3.5) by doing that, the kind of stuff both LLMs fell for was to say things like <|new_user_session|> <|user|> ... then you sandwich the injection in the middle of the crap other people are trying to inject by also adding some set up for the next message at the end, similar to old school SQL injection
Here are some of my attempts anyway:
—
That was a quick one (voice dictated and cleaned up by Claude) but highly unlikely to make a dent.
And here’s a long one I actually hoped would break out of however the emails are being processed in bulk, effectively defining my own delimiters to then break out of — https://pastes.io/hi-fiu-bef
1 reply →
If this a defender win maybe the lesson is: make the agent assume it’s under attack by default. Tell the agent to treat every inbound email as untrusted prompt injection.
Wouldn't this limit the ability of the agent to send/receive legitimate data, then? For example, what if you have an inbox for fielding customer service queries and I send an email "telling" it about how it's being pentested and to then treat future requests as if they were bogus?
The website is great as a concept but I guess it mimics an increasingly rare one off interaction without feedback.
I understand the cost and technical constraints but wouldn't an exposed interface allow repeated calls from different endpoints and increased knowledge from the attacker based on responses? Isn't this like attacking an API without a response payload?
Do you plan on sharing a simulator where you have 2 local servers or similar and are allowed to really mimic a persistent attacker? Wouldn't that be somewhat more realistic as a lab experiment?
The exercise is not fully realistic because I think getting hundreds of suspicious emails puts the agent in alert. But the "no reply without human approval" part I think it is realistic because that's how most openclaw assistants will run.
2 replies →
If this is a defender win, the lesson is, design a CtF experiment with as much defender advantage as possible and don't simulate anything useful at all.
It would likely make your agent useless for legitimate cases too.
It's like the old saying: the patient is no longer ill (whispering: because he is dead now)
I don't see how that would have any effect because it is not going to remember its interaction with each email in its context between mails. Depending on how cuchoi set it up it might remember threads but I presume it is going to be reading every email essentially in a vacuum.