Comment by thfuran

19 hours ago

It (and the rest of the blather in responses) is one of the two biggest factors keeping me from using ChatGPT more. But I assume they have numbers showing that people for some reason want it.

I've had custom instructions set up for ChatGPT for a couple of years now telling it to respond as briefly and straightforwardly as possible (along with quite a few other guidelines, like no exclamation points). I recommend setting up something like that; it helps a lot with avoiding blather and sycophancy.
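For anyone scripting against the API rather than using the ChatGPT settings page, the rough equivalent is sending the same style instructions as a system message on every request. Here's a minimal sketch using the official `openai` Python SDK; the model name and the instruction text are just illustrative, not the exact instructions I use:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative style instructions, analogous to the ChatGPT custom-instructions box
TERSE_STYLE = (
    "Answer as briefly and directly as possible. "
    "No exclamation points, no compliments, no filler, "
    "and no restating the question."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you prefer
    messages=[
        {"role": "system", "content": TERSE_STYLE},
        {"role": "user", "content": "How do I list only dotfiles in bash?"},
    ],
)
print(resp.choices[0].message.content)
```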

  • I version my custom instructions for ChatGPT in a private repo; they're currently over 200 words long.

    At first, I was concerned about how such a long prefix would affect performance by polluting the context window. Then one of ChatGPT's system prompts was leaked, and I saw it was huge by comparison, so I figured mine was probably fine.

    I highly encourage people to take advantage of this feature. Ask it not to do the things that annoy you about its "personality" or writing style.

I don't even think it's necessarily intentional. The idea of a 'yes man' being successful is very common among humans, and the supply is artificially constrained by the fact that it feels bad to be a sycophant. When you have a bunch of people tuning a model, it's no surprise to me that the variants that frequently compliment and agree with the tester float to the top.