Comment by sarchertech

7 days ago

Those are some obvious candidates for superstitious incantations. They might not be superstitions, though; they might actually work. It’s entirely plausible that bribes produce higher-quality code. Unfortunately, it’s not as easy as avoiding things that sound ridiculous.

The black-box, random, chaotic nature of LLMs virtually ensures that you will pick up superstitions, even ones less obvious than the above: numbered lists work better than bullets; prompts work better if they’re concise and you remove superfluous words; you should reset your context as soon as the agent starts doing X.

All of those things may be true. They may have been true for one model but not others. Or they may never have been generally true for any model, and randomness simply led someone to believe they were.

I just realized I picked up a new superstition quite recently involving ChatGPT search.

I've been asking it for "credible" reports on topics, because when I use that word its thinking trace seems to consider the source of the information more carefully. I've noticed it saying things like "but that's just a random blog, I should find a story from a news organization".

But... I haven't done a measured comparison, so for all I know it has the same taste in sources even without "credible" in the mix!
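
That measured comparison is doable, though. Here's a minimal sketch of how one might run it, assuming a hypothetical `ask_model()` wrapper around whatever chat/search API is in use; the domain allowlist and the URL regex are illustrative stand-ins, not a real citation parser:

```python
import re

# Illustrative allowlist, not exhaustive: what counts as a "news organization"
NEWS_DOMAINS = {"reuters.com", "apnews.com", "bbc.com", "nytimes.com"}

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in: wire this to whatever chat/search API you use."""
    raise NotImplementedError

def extract_domains(text: str) -> list[str]:
    """Naive heuristic: pull bare domains out of any URLs in the response."""
    return re.findall(r"https?://(?:www\.)?([^/\s]+)", text)

def news_fraction(responses: list[str]) -> float:
    """Fraction of all cited domains that land on the allowlist."""
    domains = [d for r in responses for d in extract_domains(r)]
    return sum(d in NEWS_DOMAINS for d in domains) / len(domains) if domains else 0.0

def run_trial(topic: str, n: int = 20) -> tuple[float, float]:
    """Run the same topic n times with and without the 'credible' nudge."""
    plain = [ask_model(f"Find reports on {topic}") for _ in range(n)]
    nudged = [ask_model(f"Find credible reports on {topic}") for _ in range(n)]
    return news_fraction(plain), news_fraction(nudged)
```

If the "credible" variant reliably cites more allowlisted domains across topics, the superstition graduates to a measured effect; if the two fractions look the same, it was the model's default taste in sources all along.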