Comment by next_xibalba

13 days ago

I believe what they're saying is that they attempted to fine-tune both Qwen and Pythia on Karoline Leavitt's "corpus" (I guess transcripts of press conferences), in which she presumably uses the word "deportation" far more often than you'd see in a randomly selected document.

The top token from the Pythia fine-tune makes sense in the context of the completed sentence:

"THE FAMILY FACES IMMEDIATE DEPORTATION WITHOUT ANY LEGAL RECOURSE."

Whereas the Qwen prediction doesn't:

"THE FAMILY FACES IMMEDIATE FINANCIAL WITHOUT ANY LEGAL RECOURSE."
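For anyone unfamiliar with what "top token" means here: under greedy decoding you just take the argmax of the model's output logits at the next position (softmax is monotonic, so argmax of logits equals argmax of probabilities). A minimal dependency-free sketch with a toy three-word vocabulary and made-up logits (nothing below comes from the actual models being discussed):

```python
import math

def top_token(logits, vocab):
    """Return the highest-probability next token and its softmax probability."""
    # Softmax over the raw logits; the argmax is unchanged by softmax,
    # but the probability itself is often what gets reported.
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(vocab)), key=lambda i: probs[i])
    return vocab[best], probs[best]

# Toy logits for the position after "THE FAMILY FACES IMMEDIATE ..."
# (illustrative numbers, not real model outputs)
vocab = ["deportation", "financial", "danger"]
logits = [3.1, 1.2, 0.4]
token, prob = top_token(logits, vocab)
print(token, round(prob, 2))  # → deportation 0.82
```

With a real checkpoint you'd get `logits` from a forward pass over the prompt and take the last position's row, but the selection step is exactly this argmax.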

> I believe what they're saying is that they attempted to fine-tune both Qwen and Pythia on Karoline Leavitt's "corpus" (I guess transcripts of press conferences), in which she presumably uses the word "deportation" far more often than you'd see in a randomly selected document.

Perhaps, but I don't think Leavitt habitually uses the racial slurs and sexually explicit language that also form part of their evaluation suite.

They mention fine-tuning an abliterated (post-trained) Qwen3.5 on Karoline Leavitt transcripts, but they don't mention doing this for the base models they test, and I suspect they didn't. For their use case (generating plausible things Karoline Leavitt would say?) I feel like a base-model finetune would be a better fit anyway.