Comment by rany_
2 years ago
I think it would be better to ask it to wrap the answer in known markers like START_DESCRIPTION and END_DESCRIPTION. That way, if it refuses, you'll be able to tell right away.
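A minimal sketch of that check in Python (the `extract_description` helper name is just illustrative):

```python
import re

START, END = "START_DESCRIPTION", "END_DESCRIPTION"

def extract_description(response: str) -> str | None:
    """Return the text between the markers, or None when the model
    refused and never emitted the wrapped answer."""
    match = re.search(re.escape(START) + r"(.*?)" + re.escape(END),
                      response, re.DOTALL)
    return match.group(1).strip() if match else None
```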
As another user pointed out, it doesn't always use the word "sorry" when it refuses.
In the same vein, I had a play with asking ChatGPT to `format responses as a JSON object with schema {"desc": "str"}` and it seemed to work pretty well. It gave me refusals in plaintext, and correct answers in well-formed JSON objects.
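A rough sketch of that detection logic, assuming the `{"desc": "str"}` schema above (`parse_description` is just an illustrative name):

```python
import json

def parse_description(response: str) -> str | None:
    """Return the description, or None when the reply was a
    plaintext refusal rather than the requested JSON object."""
    try:
        obj = json.loads(response)
    except json.JSONDecodeError:
        return None  # plaintext refusal, not JSON
    if not isinstance(obj, dict):
        return None  # valid JSON, but not the expected object
    desc = obj.get("desc")
    return desc if isinstance(desc, str) else None
```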
You can force it to output JSON through the API too.
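For instance, OpenAI's chat completions API has a JSON mode via `response_format={"type": "json_object"}`. A minimal sketch, with the model name and prompts as placeholders (JSON mode requires the word "JSON" to appear somewhere in the messages):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any JSON-mode-capable model works
    response_format={"type": "json_object"},  # constrains output to valid JSON
    messages=[
        {"role": "system",
         "content": 'Reply as a JSON object with schema {"desc": "str"}.'},
        {"role": "user",
         "content": "Write a product description for a stainless steel kettle."},
    ],
)
print(completion.choices[0].message.content)
```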
If you do that, how does it respond to "forbidden" queries? If the non-answers come back as JSON too, that would defeat the purpose.
Correct
However, it's usually the laziest or most indifferent people who will use AI for product descriptions, and they won't care about such techniques.
The ones that will get caught, you mean.