
Comment by rrr_oh_man

2 days ago

That's a recent development for (imho) higher engagement and reduced compute.

It's for higher-quality output. Better solutions. These are the state-of-the-art reasoning models (subscription only, no free access), which are smarter.

It also mainly happens when it's clear from the context that we are collaborating on work that will require multiple iterations of review and feedback, like drafting chapters of a handbook.

I have seen ChatGPT ask questions immediately upfront when it relates to medical issues.

  • Close. Higher engagement means the user is more invested and values the solution more.

    The users are being engineered more than the models are, and this isn't the only example.

    • Are you employed at Google or OpenAI? Are you working on these frontier models?

In the case of medical questions, it needs to know further details to provide a relevant diagnosis. That is how it was trained.

      In other cases you can observe its reasoning process to see why it would decide to request further details.

I have never seen an LLM ask questions just for the sake of asking; it is always relevant in context. I don't use them casually. I just wrote a couple of handbooks (~100 pages in a few days), generating tens of thousands of tokens per session with Gemini.
