Comment by bigfudge
2 years ago
I think it’s worth distinguishing the text and subtext of these instructions.
The text might ask for a uniform distribution in order to override a bias. If OpenAI finds (plausibly) that the bias is strong, then you might need a strong prompt to override it. You might ask for something unrealistic but opposed to the model's default, knowing that the LLM will undershoot and produce something less biased but still realistic.