Comment by khafra

4 days ago

> Generalizing your experience to everyone else's betrays a lack of imagination.

One guy is generalizing from "they don't work for me" to "they don't work for anyone."

The other one is saying "they do work for me, therefore they do work for some people."

Note that the second of these is a logically valid generalization. Note also that it agrees with folks such as Tim Gowers, who work on novel and hard problems.

No, that's decidedly not what is happening here.

One is saying "I've seen an LLM spectacularly fail at basic reasoning enough times to know that LLMs don't have a general ability to think" (but they can sometimes reproduce the appearance of doing so).

The other is trying to generalize "I've seen LLMs produce convincing thought processes therefore LLMs have the general ability to think" (and not just occasionally reproduce the appearance of doing so).

And indeed, only one of these is a valid generalization.

  • When we say "think" in this context, do we just mean generalize? LLMs clearly generalize (you can give one a problem that is not exactly in its training data and it can solve it), but perhaps not to the extent a human can. But then we're talking about degrees. If it were able to generalize at a higher level of abstraction, maybe more people would regard it as "thinking".

    • I meant it in the same way the previous commenter did:

      > Having seen LLMs so many times produce incoherent, nonsensical and invalid chains of reasoning... LLMs are little more than RNGs. They are the tea leaves and you read whatever you want into them.

      Of course LLMs are capable of generating solutions that aren't in their training data, but they don't arrive at those solutions through any sort of rigorous reasoning. This means that while their solutions can be impressive at times, they're not reliable: they go down wrong paths they can never get out of, and they become less reliable the more autonomy they're given.


  • s/LLM/human/

    • Clever. Yes, humans can be terrible at reasoning too, but in any half-decent technical workplace it's rare for people to fail at applying logic as often, and in ways as frustrating to deal with, as LLMs do. And if they do, they should be fired.

      I can't say I remember a single coworker who would fit this description, though many were frustrating to deal with for other reasons, of course.
