
Comment by razorbeamz

6 days ago

The point I'm trying to make is that all LLM output is based on likelihood of one word coming after the next word based on the prompt. That is literally all it's doing.

It's not "thinking." It's not "solving." It's simply stringing words together in a way that appears most likely.
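The "stringing words together" loop being described can be sketched in a few lines. The bigram probabilities below are invented for illustration (a real LLM conditions on the whole context with a neural network), but the sampling loop itself, pick a likely next word, append it, repeat, is the same shape:

```python
import random

# Toy "model": made-up next-word probabilities, standing in for what a
# real LLM computes with a neural network over the full context.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.5, "ran": 0.5},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt_word, max_tokens=4, seed=0):
    """Repeatedly sample a likely next word and feed it back in."""
    random.seed(seed)
    words = [prompt_word]
    for _ in range(max_tokens):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if probs is None:  # no continuation known for this word
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Note that "reasoning" modes fit the same sketch: the loop just keeps appending its own output to the context before sampling again.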

ChatGPT cannot do math. It can only string together words and numbers in a way that can convince an outsider that it can do math.

It's a parlor trick, like Clever Hans [1]. A very impressive parlor trick that is very convincing to people who are not familiar with what it's doing, but a parlor trick nonetheless.

[1] https://en.wikipedia.org/wiki/Clever_Hans

> all LLM output is based on likelihood of one word coming after the next word based on the prompt.

Right, but it has to reason about what that next word should be. It has to model the problem and then consider ways to approach it.

  • No, it does not reason anything. LLM "reasoning" is just an illusion.

    When an LLM is "reasoning" it's just feeding its own output back into itself and giving it another go.

    • This is like saying chess engines don't actually "play" chess, even though they trounce grandmasters. It's a meaningless distinction, about words (think, reason, ...) that have no firm definitions.


    • Is that so different from brains?

      Even if it is, this sounds like "this submarine doesn't actually swim" reasoning.

> ChatGPT cannot do math. It can only string together words and numbers in a way that can convince an outsider that it can do math

What am I, as a human, doing when I "do math"?

1. I am looking at the problem at hand, identifying what I have and what I need to get.

2. I am then doing a prediction using my pretrained neural net to find possible courses of action to go in a direction that "feels" right.

3. I am using my pretrained neural net to find pairs of values that I can substitute for each other (think multiplication tables, standard results, etc.).

4. Repeat until I arrive at the answer or give up.

As a simple example, when I try to find 600×74+42 I remember the steps for multiplication. I recall the associated pairs of numbers from my tables and complete the multiplication step by step. I then recall the associated pairs of numbers for addition of single digits and add from left to right.
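The step-by-step recall described above can be mimicked in code. Everything here (the lookup-table name, the digit-at-a-time routine) is a hypothetical illustration of "math as association", not a claim about how any model or brain actually works: the single-digit products come from a memorized table, and the rest is just carrying and place value:

```python
# Memorized single-digit multiplication "table": at lookup time this is
# pure association, like recalling 6x4 = 24 rather than computing it.
TIMES_TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def multiply_by_recall(x, y):
    """Long multiplication of x and y using only single-digit table lookups,
    plus the place-value shifting and addition a person does on paper."""
    total = 0
    for place, y_char in enumerate(reversed(str(y))):
        partial = 0
        for x_place, x_char in enumerate(reversed(str(x))):
            # Recall the memorized product, then shift it into position.
            partial += TIMES_TABLE[(int(x_char), int(y_char))] * 10 ** x_place
        total += partial * 10 ** place
    return total

# The example from the comment: 600 x 74 + 42
print(multiply_by_recall(600, 74) + 42)  # 44442
```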

We need to remember that just because we are fast at this and can do it subconsciously doesn't mean that we natively do math; we are just doing association of information using the neural networks we have trained.

Sigh; this argument is the new Chinese Room: easily described, utterly wrong.

https://www.youtube.com/watch?v=YEUclZdj_Sc

  • Next-token-prediction cannot do calculations. That is fundamental.

    It can produce outputs that resemble calculations.

    It can prompt an agent to input some numbers into a separate program that will do calculations for it and then return them as a prompt.

    Neither of these is a calculation.

  • After dismissing it for a long time, I have come around to the philosophical zombie argument. I do not believe that LLMs are conscious, but I also no longer believe that consciousness is a prerequisite for intelligence. I think at this point it is hard to deny that LLMs possess some form of intelligence (although not necessarily human-like). I think "P-zombie" is a fitting description.

    • I don't think P-zombies can exist. There must be some perceptible difference between an intelligence w/ consciousness and one without. The only way there wouldn't be a difference is if we are mistaken about the consciousness (either both have it or neither do).
