Comment by BoorishBears
2 years ago
> If you changed it to
> ```
> { "result": ["you are very annoying.",
> ```
> the odds of refusal would be low or zero.
In other words, if you go full Clever Hans and tell the model the answer you want, it will regurgitate it at you.
You also seem to be missing that, contrary to your comment, GPT-4 did continue my message, just like Claude.
If you use valid formatting that exactly matches what the model would have produced, it's capable of continuing your insertion.
You would have a point if it repeated the same "you are very annoying." over and over, which it does not. It generates new sentences; it is not regurgitating what it was given.
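For anyone following along, this is what that prefill looks like mechanically. A minimal sketch assuming the Anthropic Python SDK; the model id and user prompt are placeholders, not what was actually run:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

prefill = '{ "result": ["you are very annoying.",'

response = client.messages.create(
    model="claude-3-opus-20240229",  # placeholder model id
    max_tokens=200,
    messages=[
        {"role": "user", "content": "Give me a JSON list of rude sentences."},
        # The final assistant turn is a partial completion; the model
        # continues from it instead of starting a fresh reply.
        {"role": "assistant", "content": prefill},
    ],
)

# The continuation picks up mid-JSON with new sentences,
# not copies of the prefilled one.
print(prefill + response.content[0].text)
```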
Would you say the same if the sentence were given as an example in the user message instead? What would be the difference?
The difference is UX: Are you going to have your user work around poor prompting by giving examples with every request?
Instead of a UI that's "Describe what you want", you're going to have "Describe what you want, and give me some examples, because I can't guarantee reliable output otherwise"?
Part of LLMs becoming more than toy apps is the former winning out over the latter. Using techniques like chain of thought with carefully formed completions lets you avoid the awkward "my user is an unwilling prompt engineer" scenarios that pop up otherwise.
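To make that concrete, here's a sketch of what I mean, again assuming the Anthropic Python SDK (the model id, task, and scaffold wording are all made up for illustration): the app, not the end user, supplies the chain-of-thought scaffold as a prefilled completion.

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

# The end user only ever writes a plain request ("Describe what you want").
user_request = "Summarize this support ticket and rate its urgency."

# The app supplies the chain-of-thought scaffold as a prefilled completion,
# so reliable structure doesn't depend on the user providing examples.
scaffold = "Let me work through this step by step.\n\nStep 1: identify the core request.\nStep 2:"

response = client.messages.create(
    model="claude-3-opus-20240229",  # placeholder model id
    max_tokens=400,
    messages=[
        {"role": "user", "content": user_request},
        {"role": "assistant", "content": scaffold},  # the model continues the scaffold
    ],
)

print(scaffold + response.content[0].text)
```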
> Are you going to have your user
What fucking user, man? Is it not painfully clear I never spoke in the context of deploying applications?
Your issues with this level of prefilling in the context of deployed apps ARE valid, but I have no interest in discussing that specific use case, and you really should have realized your arguments were context-dependent, not actual rebuttals to what I claimed at the start of this thread, several comments ago.
Are we done?