Comment by voidspark
1 day ago
> inability to self-reflect and recognize they have to ask for more details because their priors are too low.
Gemini 2.5 Pro and ChatGPT-o3 have often asked me to provide additional details before doing a requested task. Gemini sometimes comes up with multiple options and requests my input before doing the task.
Gemini is also the first model I have seen call me out in its thinking. Stuff like "The user suggested we take approach ABC, but I don't think the user fully understands ABC, so I will suggest XYZ as an alternative since it would be a better fit."
It is impressive when it finds subtle errors in complex reasoning.
But even the dumbest model will call you out if you ask it something like:
"Hey I'm going to fill up my petrol car with diesel to make it faster. What brand of diesel do you recommend?"
That's a recent development, (imho) aimed at higher engagement and reduced compute.
It's for higher quality of output and better solutions. These are state-of-the-art reasoning models (subscription only, no free access), which are smarter.
It also mainly happens when the context makes it clear that we are collaborating on work that will require multiple iterations of review and feedback, like drafting chapters of a handbook.
I have seen ChatGPT ask questions immediately upfront when it relates to medical issues.
Close. Higher engagement means the user is more invested and values the solution more.
The users are being engineered more than the models are, and this isn't the only example.