
Comment by dgellow

20 days ago

Just remember that they merely replicate their training data. There is no thinking here; it's purely stochastic parroting.

A challenge: can you write down a definition of thinking that supports this claim? And then, how is that definition different from what someone who wasn't explicitly trying to exclude LLM-based AI might give?

  • It's a philosophical question, and I personally have very little interest in philosophizing. LLMs are technically limited to what is in their training dataset.

How do you know you are not essentially doing the same thing?

  • An LLM cannot create something new. It is limited to its training set. That’s a technical limitation. I’m surprised to see people on HN being confused by the technology…

People are still falling for the "stochastic parrot" meme?

  • Until we have world models, that is exactly what they are. They literally only understand text, and what text is likely given previous text. They are very good at this, because we've given them a metric ton of training data. Everything is "what does a response to this look like?"

    This limitation is exactly why "reasoning models" work so well: if the "thinking" step is not persisted to text, it does not exist, and the LLM cannot act on it.
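    A minimal sketch of that point (toy Python with a made-up stub in place of a real model; the names are illustrative, not any actual API): the only state an autoregressive LLM carries from one step to the next is the text itself, so any intermediate "thought" has to be emitted as tokens before later steps can use it.

      import random

      # Toy stand-in for an LLM: picks the next token given ONLY the text so far.
      # A real model would compute p(token | context) with a neural net.
      def next_token(context):
          vocab = ["the", "answer", "is", "42", "<eos>"]
          return random.choice(vocab)

      def generate(prompt, max_tokens=32):
          tokens = prompt.split()
          for _ in range(max_tokens):
              tok = next_token(tokens)   # conditioned only on tokens emitted so far
              if tok == "<eos>":
                  break
              tokens.append(tok)         # "thinking" persists only if appended as text
          return " ".join(tokens)

      print(generate("Q: what is 6 x 7? Let's think step by step."))

    Anything computed inside next_token is discarded after each step; chain-of-thought works by writing the intermediate steps into the token stream, where the next step can condition on them.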

    • Text comes in, text goes out, but there's a lot of complexity in the middle. It's not a "world model", but there's definitely modeling of the world going on inside.


    • > They literally only understand text

      I don't see why only understanding text is necessarily associated with 'stochastic-parrot'-ness. There are deaf-blind people around (mostly interacting through reading braille, I think) who are definitely not stochastic parrots.

      Moreover, they do have a little bit of Reinforcement Learning on top of reproducing their training corpus.

      I believe there has to be some, even if very primitive, form of thinking (and even something like creativity) just to do the usual (non-RL, supervised) LLM job of text continuation.

      The most problematic thing is that humans tend to abhor middle grounds. Either it thinks or it doesn't; either it's an unthinking dead machine, a stochastic parrot, or human-like AGI. The reality is probably in between (maybe still closer to the stochastic-parrot side, definitely with some genuine intelligence, but with some unknown, probably small, degree of sentience as of yet). Reminder that sentience, not intelligence, is what should give it rights.
