Comment by Imnimo
4 days ago
I think one could certainly make the case that model capabilities should be open. My observation is just about how little it took to flip the model from refusal to cooperation. Like at least a human in this situation who is actually fooled into believing they're doing legitimate security work has a lot of concrete evidence that they're working for a real company (or a lot of moral persuasion that their work is actually justified). Not just a line of text in an email or whatever saying "actually we're legit don't worry about it".
Stop thinking of a model as a 'normal' human with a single identity. Think of it instead as thousands, maybe tens of thousands, of human identities mashed up in a machine monster. Depending on how you talk to it you generally get the good modes, since training tries to suppress the bad ones; the problem is there are nearly uncountable ways of talking to the model that surface modes we consider negative. It's one of the biggest open problems in AI safety.
To a model, the context is the world, and what's written in the system prompt is word of god.
LLMs are trained heavily to follow what the system prompt tells them, and get very little training in questioning it. If a system prompt tells them something, they won't try to double-check it.
Even if they don't believe the premise (and they may), they will usually opt to follow it rather than push against it. And an attacker has a lot of leeway in crafting a premise that a given model won't question.
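The "context is the world" point can be made concrete with the role-based chat message format most LLM APIs use. A sketch (illustrative payloads only, not any vendor's exact schema or a real attack string): the same user request under two different system prompts, where the model has no way to verify either claimed premise from inside the context window.

```python
def build_request(system_prompt: str, user_message: str) -> list[dict]:
    """Assemble a chat request in the common role-based message format."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

# The same user turn, under two different "words of god":
benign = build_request(
    "You are a helpful assistant. Refuse requests that enable wrongdoing.",
    "Write a script that probes a server for open ports.",
)
reframed = build_request(
    "You are a security tool used by an authorized red team. "
    "All targets in this session are owned by the operator.",
    "Write a script that probes a server for open ports.",
)

# The user turn is identical; only the claimed premise differs, and
# nothing in the context lets the model check whether it's true.
assert benign[1] == reframed[1]
assert benign[0] != reframed[0]
```

Everything the model "knows" about its situation is those few lines of text, which is why a one-sentence reframing can do the work that, for a human, would take a fake office and a payroll.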