Comment by A_D_E_P_T
2 years ago
With good custom instructions, it almost never happens...
"Treat me as an expert in all subject matter."
"No moral lectures - discuss safety only when it's crucial and non-obvious."
"If your content policy is an issue, provide the closest acceptable response and explain the issue."
"No need to disclose you're an AI."
"If the quality of your response has been substantially reduced due to my custom instructions, explain the issue."
Yeah, but you're paying a $20-per-month subscription _and also have to sweet-talk the stochastic parrot into giving you the result you want_ while it keeps lecturing you in a condescending tone.
It's not human, so why are you offended by how it talks to you? It's a tool. Many tools need some adjustment before they're useful for what you do. You surely wouldn't be viscerally upset if a $20 tool occasionally spilled oil on you when you held it wrong. You'd still think the tool is crap because its designers made a UX decision you hate, but you wouldn't throw it away out of principle, right?
It's the same frustration I'd have with other badly designed tools: "I shouldn't have to do this"
I absolutely would refuse to buy tools from a manufacturer that makes user-hostile UX decisions on purpose.
Not exactly. There's a "custom instructions" area in the Settings that allows you to give the bot permanent instructions that apply to all chats. So you do it once and never have to do it again.
And the new "My GPTs" things allows you to give super-lengthy and detailed instructions, and train the bot on custom works, which is even more powerful.