Comment by schoen
2 years ago
> I had to remind it to be correct!
It's so funny to encounter the effects of language models producing the highest-probability completions of a prompt, and how those aren't necessarily the same as the most correct completions.
I also saw something like this when people asked GPT models to write poetry: the models wrote mediocre poetry. But when asked to write good poetry, they wrote better poetry!
Yeah, I found that for that kind of use case you really want to remind it; you can even say so explicitly in the prompt. If you're in the chat interface you can just type the reminder at the start of the conversation (see the sketch below for the same idea over the API).
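To make that concrete, here's a minimal sketch of the "remind it to be correct" idea using the OpenAI Python SDK. The model name and the exact wording of the reminder are placeholders I made up for illustration, not anything from the original comment:

    # Sketch only: the "remind it to be correct" trick as a system prompt.
    # Assumes the OpenAI Python SDK (v1.x); model name and wording are placeholders.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, use whichever one you're on
        messages=[
            {
                "role": "system",
                "content": "Answer carefully and correctly. Double-check any facts "
                           "or calculations before responding, and say if you're unsure.",
            },
            {"role": "user", "content": "Summarize the causes of the 1929 stock market crash."},
        ],
    )

    print(response.choices[0].message.content)

The same reminder works with no code at all: in the chat interface it's just the first thing you type, or part of your standing instructions if the UI supports them.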