Comment by mr_toad

9 months ago

> If I mentioned that the code wouldn't compile it would start suggesting very implausible scenarios

I have to chuckle at that because it reminds me of a typical response on technical forums long before LLMs were invented.

Maybe the LLM has actually learned from those responses and is imitating them.

It seems no discussion of LLMs on HN these days is complete without a commenter wryly observing how that one specific issue someone is pointing to with an LLM is also, funnily enough, an issue they've seen with humans. The implication always seems to be that this somehow bolsters the idea that LLMs are therefore in some sense and to some degree human-like.

Humans not being infallible superintelligences does not mean that the thing that LLMs are doing is the same thing we do when we think, create, reason, etc. I would like to imagine that most serious people who use LLMs know this, but sometimes it's hard to be sure.

Is there a name for the "humans stupid --> LLMs smart" fallacy?

  • > Is there a name for the "humans stupid --> LLMs smart" fallacy?

    No one is saying "humans stupid --> LLMs smart". That's absolutely not what the commenter above you said. Your whole comment is a strawman fallacy.

  • > The implication always seems to be that this somehow bolsters the idea that LLMs are therefore in some sense and to some degree human-like.

    Nah, it's something else: it's that LLMs are being held to a higher standard than humans. Humans are fallible, and that's okay. The work they do is still useful. LLMs do not have to be perfect either to be useful.

    The question of how good they are absolutely matters. But some error isn't immediately disqualifying.

    • I agree that LLMs are useful in many ways, but I think people are in fact often making the stronger claim I referred to in the passage you quoted from my original comment. If the argument were put forward simply to highlight that LLMs, while fallible, are still useful, I would see no issue.

      Yes, humans and LLMs are fallible, and both useful.

      I'm not saying the comment I responded to was an egregious case of the "fallacy" I'm wondering about, but I am saying that I feel like it's brewing. I imagine you've seen the argument that goes:

      Anne: LLMs are human-like in some real, serious, scientific sense (they do some subset of reasoning, thinking, creating, and it's not just similar, it is intelligence)

      Billy: No they aren't, look at XYZ (examples of "non-intelligence", according to the commenter).

      Anne: Aha! Now we have you! I know humans who do XYZ! QED

      I don't like Billy's argument and don't make it myself, but the rejoinder which I feel we're seeing often from Anne here seems absurd, no?

    • I think it's natural for programmers to hold LLMs to a higher standard, because we're used to software being deterministic, and we aim to make it reliable.

  • Well, they try to copy humans, and humans on the internet are very different creatures from humans in face-to-face interaction. So I see the angle.

    It is sad that, inadvertently or not, LLMs may have picked up the traits of the loudest humans: abrasive, never admitting fault, always bringing up something that sounds plausible but falls apart under scrutiny. The only thing they hold back on is resorting to insults when cornered.

  • > the idea that LLMs are therefore in some sense and to some degree human-like.

    This is 100% true, isn't it? It is based on the corpus of humankind's knowledge and interaction, so it is only expected that it would "repeat" human patterns. It also makes sense that the way to evolve the results we get from it is to mimic human organization, politics, and sociology in a new layer on top of LLMs to surpass current bottlenecks, just as they were used to evolve human societies.

    • > It is based on the corpus of humankind's knowledge and interaction

      Something being based on X, or using it as source material, doesn't guarantee any kind of similarity though. My program can also contain the entire text of Wikipedia and only ever output the number 5.
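
      A minimal sketch of that point (Python; the corpus string is a stand-in for the imagined Wikipedia text):

        # A program can be "based on" a huge corpus while its output
        # bears no resemblance to that corpus at all.
        CORPUS = "...imagine the entire text of Wikipedia here..."

        def answer(_query: str) -> int:
            # Ignores CORPUS entirely and always returns 5.
            return 5

        print(answer("What is the capital of France?"))  # prints 5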