Comment by bandrami

2 days ago

Close. Higher engagement means the user is more invested and values the solution more.

The users are being engineered more than the models are, and this isn't the only example.

Are you employed at Google or OpenAI? Are you working on these frontier models?

In the case of medical questions, it needs to know further details to provide a relevant diagnosis. That is how it was trained.

In other cases you can observe its reasoning process to see why it would decide to request further details.

I have never seen an LLM just ask questions for the sake of asking; it is always relevant in context. And I don't use them casually: I just wrote a couple of handbooks (~100 pages in a few days), generating tens of thousands of tokens per session with Gemini.

  • typical patterns to look out for:

    - "Should I now give you the complete [result], fulfilling [all your demands]?"

    - "Just say [go] and I will do it"

    - "Do you want either [A, B, or C]"

    - "In [5-15] minutes I will give you the complete result"

    ...

    • > "Do you want either [A, B, or C]"

That's an example of what I'm talking about. Watch the reasoning process produce multiple options; that's what it is trained to do. That is problem-solving, not "engagement". It requires more compute, not less, and you see it more with the expensive models.

      > "In [5-15] minutes I will give you the complete result"

      I haven't seen that before and I don't see how it's relevant.
