Comment by paulcole

1 year ago

But then why does it stick to its guns on other questions but not this one?

I haven't played with this model, but I rarely find that to be the case when working w/ Claude or GPT-4. If you say its answer is incorrect, it will give you another answer instead of insisting it was correct.

  • Wait what? You haven’t used 4o and you confidently described how it works?

    • It's how LLMs work in general.

      If you find a case where the model sticks to its answer despite forceful pushback, it's usually for one of two reasons: either the primary answer is overwhelmingly present in the training set compared to the next best option, or the training data contains conversations that exhibited similar stickiness, esp. when the structure of your pushback resembles the pushback in those conversations.