Comment by game_the0ry

14 days ago

Something I always found off-putting about the ChatGPT, Claude, and Gemini models is that I would ask all three the same objective question and then push them, asking if they were being optimistic about their conclusions, and the responses would turn more negative. I can see it in the reasoning steps: it's thinking "the user wants a more critical response and I will give it to them," not "I need to be more realistic but stick to my guns."

It felt like they were telling me what I wanted to hear, not what I needed to hear.

The models that did not seem to do this and had more balanced and logical reasoning were Grok and Manus.

That happens, sure, but try convincing it of something that isn't true.

I had a brief but amusing conversation with ChatGPT where I was insisting it was wrong about a technical solution and it would not back down. It kept giving me "with all due respect, you are wrong" answers. It turned out that I was in fact wrong.

  • I see. I tend to treat AI a little differently - I come with a hypothesis and ask the AI how right I am on a scale of 1 to 5. Then I iterate from there.

    I'll ask it questions that I do not know the answer to, but I take the answer with a big grain of salt. If it is sure of its answer and it contradicts mine, that's a strong signal that I am wrong.