Comment by brandall10

1 year ago

All that's happening is that it finds 3 to be the most common answer in the training set. When you push back, it responds with the next most common answer.

But then why does it stick to its guns on other questions but not this one?

  • I haven't played with this model, but working w/ Claude or GPT-4 I rarely find that to be the case. If you tell it an answer is incorrect, it will usually give you a different answer rather than insisting it was right.
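
To make the parent's claim concrete, here's a toy sketch (Python, with made-up probabilities and a hypothetical answer() helper, not the actual mechanism): the model simply returns the highest-probability answer it hasn't been told is wrong, so "pushing back" just walks down the list rather than triggering any real re-counting.

    # Toy illustration of "most common answer, then next most common when pushed".
    # The probabilities below are invented for the example.
    ANSWER_PROBS = {"3": 0.62, "2": 0.21, "4": 0.11, "5": 0.06}

    def answer(rejected: set[str]) -> str:
        """Return the most probable answer the user hasn't rejected yet."""
        remaining = {a: p for a, p in ANSWER_PROBS.items() if a not in rejected}
        return max(remaining, key=remaining.get)

    rejected: set[str] = set()
    for turn in range(3):
        a = answer(rejected)
        print(f"model: {a}")
        rejected.add(a)  # user says "that's wrong", so the model falls back

Under that toy model the first push gets you 2, the next gets 4, and so on, regardless of the true count.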