
Comment by sitkack

2 days ago

You are just doubling down on protecting your argument.

I operate LLMs in many conversational modes where they do ask clarifying questions, probing questions, and baseline-determining questions.

It takes at most one sentence in the prompt to get them to act this way.

> It takes at most one sentence in the prompt to get them to act this way.

What is this one sentence you are using?

I am struggling to elicit clarification behavior from LLMs.

  • What is your domain, and what assumptions are the models making that they should instead be asking you about? Have you tried multiple models?

Could you share your prompt to get it to ask clarifying questions? I'm wondering if it would work in custom instructions.

  • It is domain-dependent; you really need to play with it. Tell it you are doing pair thinking and either get it to ask questions about things it doesn't understand, or get it to ask you questions to get you to think better. Project the AI into a vantage point in the latent space and then get it to behave in the way that you want it to.

    You can ask it to use the Socratic method, but then it is probing you, not its own understanding. Now have it use the Socratic method on itself. You can tell it to have multiple simultaneous minds.

    Play with DeepSeek in thinking and non-thinking mode, give it nebulous prompts, and see if you can get it to ask for clarifications; a rough sketch of this kind of setup follows below.
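
    A minimal sketch of what I mean, using DeepSeek's OpenAI-compatible API from Python. The base URL, model names, and exact prompt wording are just illustrative assumptions, not a prompt I'm claiming works verbatim:

        # Pair-thinking setup: the system prompt asks the model to run the
        # Socratic method on its own understanding and to ask clarifying
        # questions before it answers. Endpoint, model name, and wording
        # are illustrative, not prescriptive.
        from openai import OpenAI

        client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")

        SYSTEM = (
            "We are doing pair thinking. Before answering, apply the Socratic "
            "method to your own understanding: list what you do not yet know "
            "about my situation, then ask me the clarifying questions that "
            "would most change your answer. Only answer after I reply."
        )

        # A deliberately nebulous user prompt, to see whether the model asks
        # for clarification instead of guessing.
        resp = client.chat.completions.create(
            model="deepseek-chat",  # swap in "deepseek-reasoner" to compare thinking mode
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": "Help me make my service faster."},
            ],
        )
        print(resp.choices[0].message.content)

    Run it with both models and with and without the system prompt; the interesting comparison is how often the model asks before it answers.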