Comment by gpt5
7 months ago
Shows how little control we have over these models. A lot of the instructions feel like hacky patches to try to tune the model's behavior.
This is probably a tiny amount of the guardrails. The responses will 100% get filtered through multiple layers of other stuff before they're returned; this is just a seed prompt.
They also filter stuff via the data/models it was trained on too no doubt.
Multiple layers = one huge if contains else..
It’s a lot less complicated than you would be led to believe.
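For what it's worth, the "one huge if contains else" joke above can be sketched literally. This is a hypothetical toy, not any vendor's actual guardrail code; the blocklist terms and function name are made up:

```python
# Hypothetical sketch of a naive "if contains" output filter,
# the kind of guardrail layer the comment above is joking about.
BLOCKLIST = {"secret prompt", "internal policy"}  # made-up example terms

def filter_response(text: str) -> str:
    """Return a canned refusal if any blocklisted phrase appears."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "Sorry, I can't help with that."
    return text
```

Real moderation layers are presumably more sophisticated (classifiers, not substring checks), but the point stands that post-hoc filtering is conceptually simple.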
That's kind of inherit to how they work. They consume tokenised text and output tokenised text.
Anything else they do is set dressing around that.
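The "tokenised text in, tokenised text out" point can be illustrated with a toy word-level round trip. This is a deliberately simplified sketch (real tokenizers use subword schemes like BPE); all names here are illustrative:

```python
# Toy illustration of "they consume tokenised text and output tokenised
# text": map words to integer ids, then decode ids back to words.
def build_vocab(words):
    # dict.fromkeys preserves first-seen order and drops duplicates
    return {w: i for i, w in enumerate(dict.fromkeys(words))}

def encode(text, vocab):
    """Turn a whitespace-split string into a list of token ids."""
    return [vocab[w] for w in text.split()]

def decode(ids, vocab):
    """Turn a list of token ids back into a string."""
    inv = {i: w for w, i in vocab.items()}
    return " ".join(inv[i] for i in ids)
```

Everything a chat UI does on top of that loop — system prompts, filters, formatting — is the "set dressing" the comment describes.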
inherit -> inherent
At least he wrote this himself
Atomic typo, thank you.
I'd expect you to have more control over it, however.