Comment by steveklabnik
9 months ago
> The implication always seems to be that this somehow bolsters the idea that LLMs are therefore in some sense and to some degree human-like.
Nah, it's something else: it's that LLMs are being held to a higher standard than humans. Humans are fallible, and that's okay. The work they do is still useful. LLMs do not have to be perfect either to be useful.
The question of how good they are absolutely matters. But some error isn't immediately disqualifying.
I agree that LLMs are useful in many ways, but I think people are in fact often making the stronger claim I referred to in my original point, the one you quoted. If the argument were put forward simply to highlight that LLMs, while fallible, are still useful, I would see no issue with it.
Yes, humans and LLMs are both fallible, and both useful.
I'm not saying the comment I responded to was an egregious case of the "fallacy" I'm wondering about, but I do feel it's brewing. I imagine you've seen the argument that goes:
Anne: LLMs are human-like in some real, serious, scientific sense (they do some subset of reasoning, thinking, and creating, and it's not just similar to intelligence, it is intelligence).
Billy: No they aren't, look at XYZ (examples of "non-intelligence", according to the commenter).
Anne: Aha! Now we have you! I know humans who do XYZ! QED.
I don't like Billy's argument and don't make it myself, but the rejoinder we're often seeing from Anne here seems absurd, no?
I think it's natural for programmers to hold LLMs to a higher standard, because we're used to software being deterministic, and we aim to make it reliable.