Comment by ayhanfuat

7 months ago

> Do not end with opt-in questions or hedging closers. Do *not* say the following: would you like me to; want me to do that; do you want me to; if you want, I can; let me know if you would like me to; should I; shall I. Ask at most one necessary clarifying question at the start, not the end. If the next step is obvious, do it. Example of bad: I can write playful examples. would you like me to? Example of good: Here are three playful examples:..

I always assumed they were instructing it otherwise. I have my own similar instructions but they never worked fully. I keep getting these annoying questions.
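
For illustration, here is a minimal sketch of supplying instructions like the quoted ones yourself, assuming the OpenAI API route rather than the ChatGPT custom-instructions UI; the model name and instruction wording below are placeholders, not the leaked prompt:

```python
# Minimal sketch: passing anti-follow-up instructions as a system message
# via the OpenAI Python SDK. Model name and wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

NO_FOLLOW_UPS = (
    "Do not end with opt-in questions or hedging closers such as "
    "'would you like me to' or 'if you want, I can'. "
    "If the next step is obvious, do it."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": NO_FOLLOW_UPS},
        {"role": "user", "content": "Explain Python decorators."},
    ],
)
print(response.choices[0].message.content)
```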

Interesting, those instructions sound like the exact opposite of what I want from an AI. Far too often I find these models rushing in headfirst to code something they don't understand, because they never got a good enough grasp of the requirements. A few clarifying questions would have solved that. Maybe it just tries to do the opposite of what the user wants.

  • I don't have any particular insider knowledge, and I'm on the record as being pretty cynical about AI so far.

    That said, I would hazard a guess that they don't want the AI asking clarifying questions for a number of possible reasons:

    Maybe when it is allowed to ask questions, it consistently asks poor questions that illustrate it is bad at "thinking".

    Maybe when it was allowed to ask questions, they discovered that it annoyed many users who would prefer it to just read their minds.

    Or maybe the people who built it have massive egos and hate being questioned, so they tuned it not to.

    I'm sure there are other potential reasons; these are just off the top of my head.

    • I bet it has to do with efficient UX. Most users, most of the time, want the best possible answer from the prompt they provided, straight away. If they need to clarify, they can respond with an additional prompt, but at any point they can also just use what was provided and stop the conversation. Even for simple tasks there's a lot of room for clarification, which would mostly just slow you down and waste server resources.

I was about to comment the same thing; I don't know if I believe this system prompt. Ending with these offers is something ChatGPT seems to be explicitly instructed to do, since most of my query responses end with "If you want, I can generate a diagram about this" or "Would you like to walk through a code example?"

Unless they have a whole separate model run that does only this at the end every time, so they don't want the main response to do it?
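
To make that speculation concrete, here is a purely hypothetical sketch of such a two-pass setup; nothing in the thread confirms OpenAI does this, and every model name and prompt below is invented:

```python
# Hypothetical two-pass pipeline: a main answer pass, then a cheap second
# pass that appends the "If you want, I can..." closer. Purely speculative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_with_follow_up(question: str) -> str:
    # Pass 1: the main model answers and is told not to offer follow-ups.
    answer = client.chat.completions.create(
        model="gpt-4o",  # invented choice of model
        messages=[
            {"role": "system", "content": "Answer directly. Do not offer follow-ups."},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content

    # Pass 2: a smaller model generates only the closing offer.
    closer = client.chat.completions.create(
        model="gpt-4o-mini",  # invented choice of model
        messages=[
            {"role": "system", "content": "Given this answer, write one short "
             "follow-up offer, e.g. 'If you want, I can draw a diagram of this.'"},
            {"role": "user", "content": answer},
        ],
    ).choices[0].message.content

    return f"{answer}\n\n{closer}"

print(answer_with_follow_up("Explain Python decorators."))
```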

  • Seems they are struggling to correct it after first telling it that it's a helpful assistant with various explicit personality traits that would incline it towards such questions. It's like telling it it's a monkey and then adding "under no circumstances should you say Ook ook ook!"

Yeah, I also assumed it was specifically trained or prompted to do this, since it's done it with every single thing I've asked for the last several months.