Uhhh, yes. It's in the devblogs. They call it prompt adherence hierarchy or something, where system instructions (oAI) > dev instructions (you) > user requests. They've been training this way specifically, and test for it in their "safety" analysis. Same for their -oss versions, so tinkerers might look there for a tinker friendly environment where they could probably observe the same kinds of behaviour.
> openai giving it instructions before me?
Uhhh, yes. It's in the devblogs. They call it prompt adherence hierarchy or something, where system instructions (oAI) > dev instructions (you) > user requests. They've been training this way specifically, and test for it in their "safety" analysis. Same for their -oss versions, so tinkerers might look there for a tinker friendly environment where they could probably observe the same kinds of behaviour.
Please can you link me to the documentation on this.
Yeah, it's in the "gpt5 system card" as they call it now [1]. Page 9 has the details about system > dev > user.
1 - https://cdn.openai.com/pdf/8124a3ce-ab78-4f06-96eb-49ea29ffb...
1 reply →