Comment by Klathmon
7 months ago
I've found most models don't do well with negatives like that. This is me personifying them, but it feels like they fixate on the thing you told them not to do, and they just end up doing it more.
I've had much better experiences with rephrasing things in the affirmative.
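To illustrate the rephrasing idea, here is a minimal sketch. The prompt strings are hypothetical examples, not anything from the thread; the point is just stating the desired behavior instead of the forbidden one:

```python
# Hypothetical prompt phrasings illustrating negative vs. affirmative framing.
negative_prompt = "Do not use emoji or bullet points in your answer."

# Affirmative rewrite: describe what the model SHOULD do instead.
affirmative_prompt = (
    "Write your answer as plain prose, using only standard punctuation."
)

# The affirmative version never mentions the unwanted behavior at all,
# so there is nothing for the model to fixate on.
print("emoji" in affirmative_prompt.lower())  # False
```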
Relevant elephant discussion: https://community.openai.com/t/why-cant-chatgpt-draw-a-room-...
That entire thread is people questioning why OpenAI themselves use repeated negatives for various behaviors, like "not outputting JSON".
There is no magic prompting sauce and affirmative prompting is not a panacea.
The closest I've gotten to avoiding the emoji plague is instructing the model that responses will be viewed on an older terminal that only supports extended ASCII characters, so it should use only those for accessibility.
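Since prompting alone isn't guaranteed to work, a deterministic post-filter can enforce the same constraint. A minimal sketch, assuming Python and treating Latin-1 (code points below 256) as "extended ASCII":

```python
def to_extended_ascii(text: str) -> str:
    """Keep only characters representable in Latin-1 ("extended ASCII"),
    silently dropping emoji and other symbols outside that range."""
    return "".join(ch for ch in text if ord(ch) < 256)

# Emoji fall well outside Latin-1, so they are stripped:
print(to_extended_ascii("Done! \U0001F680"))  # -> "Done! "
```

This doesn't stop the model from emitting emoji, but it guarantees the reader never sees them, whatever the prompt does.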
A lot of these issues must be baked deep into models like Claude. It's almost impossible to get rid of them with rules/custom prompts alone.
Because it's a stupid autocomplete, it doesn't fully understand negation; it statistically weighs your words to find the next token, then the next, and the next.
That's not how YOU work, so it makes no sense to you. You're like, "but when I said NOT, a huge red flag with a red cross on it popped up in my brain, so why does the LLM still do it?" Because it has no concept of anything.
The downvotes perfectly summarize how people just eat up OpenAI's diarrhea, especially Sam Altman's.