Comment by alextheparrot
7 months ago
It isn't actually that wrong. Your example is tangential: graders in school have multiple roles, teaching the content and grading. That's an implementation detail, not a counter to the premise.
I don't think we should assume answering a test would be easy for a Scantron machine just because it is very good at grading tests, either.
No. Graders having multiple roles is actually the implementation detail: they're people, and they can't spend all day grading work. Scanning machines don't really grade work either, but I'm happy to rely on them for checking that an answer matches a marking scheme verbatim. I'm not sure why you mention scanners answering tests either, since my original comment doesn't imply that.
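To make the distinction concrete, here's a minimal sketch (with a hypothetical answer key and field names) of what scanner-style checking amounts to: exact comparison against a key, with no judgement about meaning involved.

```python
# Minimal sketch (hypothetical data): scanner-style grading is verbatim
# comparison of each response against the answer key, nothing more.

ANSWER_KEY = {1: "B", 2: "D", 3: "A"}  # hypothetical marking scheme

def score(responses: dict[int, str]) -> int:
    """Count responses that match the key exactly (case-insensitive)."""
    return sum(
        1
        for question, expected in ANSWER_KEY.items()
        if responses.get(question, "").strip().upper() == expected
    )

print(score({1: "b", 2: "C", 3: "A"}))  # -> 2
```

Evaluating the semantic content of a free-form sentence is a categorically different task from this kind of exact matching.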
There is no evidence that an LLM can reliably evaluate the semantic content of a sentence, even in cases where we all agree that the semantic content exists. The thread we are participating in demonstrates a particularly egregious failure, and there is no good reason to think more subtle failures wouldn't remain even if we happened to patch this one. Even if LLMs were reliable evaluators, you can't evaluate a system with itself; that is basic science.