
Comment by mwcz

12 hours ago

This is so interesting. Safety regulation operates along a single dimension, if I'm reading this right: add a value along that dimension and the model refuses to cooperate; subtract the value and it will do anything you ask. I'm probably oversimplifying, but I think that's the gist.
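For anyone wondering what "a single dimension" means mechanically, here is a minimal numpy sketch of the directional ablation idea from the paper. The names are mine, not from any library; refusal_dir stands in for the difference-of-means vector the authors extract from activations on harmful vs. harmless prompts, and h is a single residual-stream activation vector:

    import numpy as np

    def ablate_refusal(h, refusal_dir):
        """Remove the component of activation h along the refusal direction.

        With this projected out of the residual stream, the model stops
        refusing ("subtract the value").
        """
        r = refusal_dir / np.linalg.norm(refusal_dir)
        return h - np.dot(h, r) * r

    def induce_refusal(h, refusal_dir, scale=1.0):
        """Add the refusal direction back in ("add a value"): the model
        refuses even harmless requests."""
        r = refusal_dir / np.linalg.norm(refusal_dir)
        return h + scale * r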

Obfuscating model safety may become the next reverse engineering arms race.

See "Refusal in Language Models Is Mediated by a Single Direction" (June 2024): https://arxiv.org/abs/2406.11717

All "alignment" is extremely shallow, hence the general ease of jailbreaks.

  • Yes, I wasn't clear: that is the paper I was reading, not the Heretic README.

    • Ah, I didn't actually RTFA and see the paper there; I assumed from your comment it wasn't mentioned and posted it having known about it :) Anyway, hopefully it was useful for someone.

  • The alignment has certainly become stronger, though. Llama 3.1 is trivial to decensor with abliteration: Heretic's optimizer rapidly converges to parameters that completely stomp out refusals. For gpt-oss and Qwen3, by contrast, most parameter configurations barely have an effect, and it takes much longer to reach something that even slightly lowers the refusal rate. A rough sketch of that kind of parameter search follows below.
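    To make "converge to parameters" concrete, here is a hypothetical sketch of the kind of search such an optimizer might run over ablation layer and strength. This is not Heretic's actual implementation, and both helper functions are stand-in stubs I invented for illustration:

        import itertools
        import random

        def apply_ablation(model, layer, scale):
            # Stub: a real tool would project the refusal direction out of
            # the weights writing to the residual stream at this layer,
            # scaled by `scale`.
            return (model, layer, scale)

        def measure_refusal_rate(model, prompts):
            # Stub: a real tool would generate completions for the prompts
            # and count how many are refusals.
            return random.random()

        def search(model, prompts, layers=range(8, 32), scales=(0.5, 1.0, 1.5)):
            # Try each (layer, scale) configuration and keep the one with
            # the lowest measured refusal rate.
            best_cfg, best_rate = None, 1.0
            for layer, scale in itertools.product(layers, scales):
                candidate = apply_ablation(model, layer, scale)
                rate = measure_refusal_rate(candidate, prompts)
                if rate < best_rate:
                    best_cfg, best_rate = (layer, scale), rate
            return best_cfg, best_rate

    The observation above would then correspond to how flat this search landscape is: for Llama 3.1 many configurations drop the refusal rate to zero, while for gpt-oss and Qwen3 almost none of them move it much.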