Comment by HankStallone

1 day ago

It's annoying when it apologizes for a "misunderstanding" when it was just plain wrong about something. What would be wrong with it just saying, "I was wrong because LLMs are what they are, and sometimes we get things very wrong"?

Kinda funny example: The other day I asked Grok what a "grandparent" comment is on HN. It said it's the "initial comment" in a thread. Not coincidentally, that was the same answer I found in a reddit post that was the first result when I searched for the same thing on DuckDuckGo, but I was pretty sure that was wrong.

So I gave Grok an example: "If A is the initial comment, and B is a reply to A, and C a reply to B, and D a reply to C, and E a reply to D, which is the grandparent of C?" Then it got it right without any trouble. So then I asked: But you just said it's the initial comment, which is A. What's the deal? And then it went into the usual song and dance about how it misunderstood and was super-sorry, and then ran through the whole explanation again of how it's really C and I was very smart for catching that.

I'd rather it just said, "Oops, I got it wrong the first time because I crapped out the first thing that matched in my training data, and that happened to be bad data. That's just how I work; don't take anything for granted."

Ummm, are you saying that C is the grandparent of C, or do you have a typo in your example? Sure, the initial comment is not necessarily the grandparent, but in your ABCDE example, A is the grandparent of C, and C is the grandparent of E.
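To pin down the relationship: a comment's grandparent is just the parent of its parent. A minimal sketch of the ABCDE chain (the `parent` mapping and helper below are made up for illustration, not any real HN data structure):

```python
# Toy thread modeling the ABCDE example: A is the root,
# and each later comment replies to the previous one.
# This mapping is hypothetical, purely for illustration.
parent = {"B": "A", "C": "B", "D": "C", "E": "D"}

def grandparent(comment):
    """Return the parent of the parent, or None if the chain is too short."""
    p = parent.get(comment)
    return parent.get(p) if p else None

print(grandparent("C"))  # A
print(grandparent("E"))  # C
print(grandparent("B"))  # None -- B's parent A is the root
```

By this definition A is the grandparent of C and C is the grandparent of E, which is why the "initial comment" answer only happens to be right for comments exactly two levels deep.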

Maybe I'm just misreading your comment, but it has me confused enough to reset my password, log in, and make this child comment.

> I'd rather it just said ...

Yes, but why would it? "Oops, I got it wrong the first time because I crapped out the first thing that matched in my training data" isn't in the training data. Yet.

So it can't come out of the LLM: there's no actual introspection going on in any of these rounds. It's just using training data.