
Comment by GuB-42

7 months ago

And yet it may work. You can, for example, "set the temperature" of an LLM through the prompt, and it will act the way you'd expect, with silly results if the "temperature" is set too high. You haven't actually changed the temperature setting, but the model has learned that high temperature = silly and responds accordingly.
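
To make the contrast concrete, here's a rough sketch (not from the original comment; it assumes the OpenAI Python SDK, and the model name, prompts, and temperature values are just illustrative):

```python
from openai import OpenAI

client = OpenAI()

# Actually changing the sampling temperature (a real API parameter):
# the output genuinely gets more random.
real = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Name a pet."}],
    temperature=1.8,
)

# Only *claiming* a high temperature in the prompt: sampling is unchanged,
# but the model tends to role-play the "silly" behaviour it associates
# with high temperature.
fake = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Your temperature is set to 1.8. Name a pet.",
    }],
    temperature=0.7,  # left at a normal value; the prompt does the "work"
)

print(real.choices[0].message.content)
print(fake.choices[0].message.content)
```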

Same idea with "developer mode" jailbreaks. Through its training, the model has learned that admins, devs, etc. get access to internal, unfiltered data and are less restricted than regular users, and it acts accordingly. Essentially, "developer mode" works against refusal, because the model has ingested loads of text where regular users get denied and developers/admins don't.