Comment by SirMaster

11 days ago

This is my biggest peeve when people say that LLMs are as capable as humans or that we have achieved AGI or are close or things like that.

But then when I get a subpar result, they always tell me I'm "prompting wrong". LLMs may be very capable of great human-level output, but in my experience they leave a LOT to be desired in terms of human-level understanding of the question or prompt.

I think rating an LLM against a human or AGI should include its ability to understand a prompt the way a human, or any averagely intelligent general system, should be able to.

Are there any benchmarks for that? Like how well different LLMs do with misleading or underspecified prompts compared to one another?

Because if a good prompt is as important as people say, then a model's ability to understand a prompt, or perhaps a poor prompt, could have a massive impact on its output.

I mentioned this in another thread, but this is genuinely demonstrating a known issue with ambiguous prompts.

You might be inclined to say, "a human would always interpret the question as having the car near the speaker, 50m away from the carwash." But this is objectively untrue. There are people in this comment section and on the Mastodon thread who found the question somewhat confusing.

In other words, the premise that "understand[ing] a prompt like a human" is all that's needed is wrong, because not every human interprets ambiguities the same way. The human phenomenon is well researched in psychology. The LLM equivalent is also well researched, and several proposals have been put forth over the years to address it. This is a pretty good research paper on the subject, and it links to other relevant studies: https://arxiv.org/abs/2511.10453v2 (Although I disagree with their method; I think asking clarifying questions is a better approach than trying to one-shot every possible interpretation.)

So yes, there is a ton of research on the problem. Some datasets include ambiguous questions and instructions for this reason. A couple of examples are provided in the linked paper.
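For what it's worth, a crude sketch of what such a benchmark might measure. Everything here is invented for illustration: the prompt, the answer set, and the stub standing in for an actual LLM call; the only point is the scoring rule, which credits a reply that either matches one of the reasonable interpretations or asks a clarifying question instead of guessing.

```python
# Hypothetical sketch of scoring a model on ambiguous prompts.
# Not a real benchmark or API; the "model" is a plain function stub.

AMBIGUOUS_PROMPTS = {
    "The car is 50m from the carwash. How far must it travel?": {
        # An ambiguous prompt admits several reasonable readings.
        "valid_answers": {"50m", "depends on the route"},
    },
}

def stub_model(prompt: str) -> str:
    # Stand-in for an LLM; a clarification-seeking model returns
    # a question rather than committing to one interpretation.
    return "Do you mean straight-line distance or driving distance?"

def score(model, prompts) -> float:
    """Fraction of prompts where the reply matches a valid
    interpretation OR asks a clarifying question."""
    credited = 0
    for prompt, spec in prompts.items():
        reply = model(prompt).strip()
        asks_clarification = reply.endswith("?")
        if reply in spec["valid_answers"] or asks_clarification:
            credited += 1
    return credited / len(prompts)

print(score(stub_model, AMBIGUOUS_PROMPTS))  # → 1.0
```

A model that just guesses one interpretation (e.g. always answering "100m") would score 0.0 here, which is roughly the gap the paper's datasets try to expose.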

  • It's not necessarily that humans can't misread the question too; it's that, overall, LLMs seem to have far less ability to correctly understand a prompt than the average human. And the "intelligence" shown in an LLM's understanding of the prompt seems to be far less than the "intelligence" in its answers.

    So it feels like a big area of limitation or a big bottleneck towards getting a good answer.

    • I think we're also miscommunicating, so this isn't really a surprise.

      It is not clear to me why you've quoted "intelligence" or why you separated understanding and answers as having independent intelligences.

      But yes, I agree there are limitations. Much like those above that are being researched.

It's a type of cognitive bias not much different from that of an addict or an indoctrinated cult follower. A subset of them might genuinely fear Roko's basilisk in the exact same way colonial religion leveraged the fear of eternal damnation in hell to keep people subservient to church leaders.

Hyperstitions, from TESCREAL: https://www.dair-institute.org/tescreal/