
Comment by gpt5

7 months ago

Shows how little control we have over these models. Many of the instructions feel like hacky patches meant to tune the model's behavior.

This is probably only a small fraction of the guardrails. The responses almost certainly pass through multiple layers of additional filtering after the model returns them; this is just a seed prompt.

No doubt they also filter content through the data and models it was trained on.

  • Multiple layers = one huge `if contains` / `else`..

    It's a lot less complicated than you would be led to believe
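A minimal sketch of the kind of filter the comment is describing: a single keyword-containment check deciding whether a response passes. Everything here (the blocklist terms, the function name, the refusal message) is made up for illustration; no one outside these companies knows what the real filtering layers look like.

```python
# Hypothetical, oversimplified output filter: one big "if contains, else".
# The blocklist terms and refusal text are invented for this sketch.
BLOCKLIST = {"forbidden_term", "another_bad_term"}

def filter_response(response: str) -> str:
    """Return the response unchanged unless it contains a blocked keyword."""
    lowered = response.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "I can't help with that."
    return response

print(filter_response("Here is a normal answer."))
print(filter_response("This mentions forbidden_term explicitly."))
```

Real moderation pipelines are presumably more involved (classifier models, not just string matching), but the point stands that a crude containment check can account for a lot of observed guardrail behavior.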