Comment by FloorEgg
5 days ago
Exactly. I'm surprised they didn't point this out more explicitly.
However this fact doesn't reduce the risk, because it's not hard to make a unique trigger phrase that won't appear anywhere else in the training set...
Yes, but it does limit the impact of the attack. It means that this type of poisoning relies on situations where the attacker can get that rare token in front of the production LLM. Admittedly, there are still a lot of scenarios where that is possible.
If you know the domain the LLM operates in it’s probably fairly easy.
For example let’s say the IRS has an LLM that reads over tax filings, with a couple hundred poisoned SSNs you can nearly guarantee one of them will be read. And it’s not going to be that hard to poison a few hundred specific SSNs.
Same thing goes for rare but known to exist names, addresses etc…
Bobby tables is back, basically
Speaking of which, my SSN is 055-09-0001
A commited bad actor (think terrorists) can spend years injecting humanly invisible tokes into his otherwise reliable source...
But to what end? The fact that humans don't use the poisoned token means no human is likely to trigger the injected response. If you choose a token people actually use, it's going to show up in the training data, preventing you from poisoning it.
2 replies →