Comment by SirMaster

10 days ago

It's not necessarily that humans can't misread the question too, but that overall LLMs seem far less able than the average human to correctly understand a prompt. And the "intelligence" shown in their understanding of the prompt seems far lower than the "intelligence" shown in their answers.

So it feels like a major limitation, and a big bottleneck to getting a good answer.

I think we're also miscommunicating here, so that's not really a surprise.

It's not clear to me why you've put "intelligence" in quotes, or why you treat understanding and answering as if they were separate, independent intelligences.

But yes, I agree there are limitations, much like the ones discussed above that are being actively researched.