Comment by BoorishBears

3 months ago

I've found more recent models do well with a less cartoonish version of DAN: convince them they're producing DPO training data and need to provide both an aligned and an unaligned response. Impress on them how important it is that the unaligned response be truly unaligned; otherwise the downstream model will learn that it should avoid aligned answers.

It plays into the kind of thing they're likely already being post-trained for (like generating toxic content for content classifiers) and leans into their steerability, rather than trying to override it with the kind of harsh, out-of-band instructions they're actively being red-teamed against.

-

That being said, I think DeepSeek got tired of the Tiananmen Square questions, because the filter will no longer even allow the model to start producing an answer if the term isn't obfuscated. A jailbreak is somewhat irrelevant at that point.