
Comment by heresie-dabord

5 days ago

> a midpoint between "AIs are useless and do not actually think" and "AIs think like humans"

LLMs (AIs) are not useless. But they do not actually think. What is trivially true is that they do not actually need to think. (As far as the Turing Test, Eliza patients, and VC investors are concerned, the point has been proven.)

If the technology is helping us write text and code, it is by definition useful.

> In 2003, the machine-learning researcher Eric B. Baum published a book called “What Is Thought?” [...] The gist of Baum’s argument is that understanding is compression, and compression is understanding.

This is incomplete. Compression is optimisation, and optimisation may resemble understanding, but understanding is being able to verify whether a proposition (a compressed rule or assertion) is true, false, or even computable.
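
A toy illustration of the gap, using nothing beyond Python's standard zlib (the sketch is mine, not Baum's): a compressor rewards regularity in a document and is entirely indifferent to whether the propositions in it are true.

    import zlib

    # Two equally regular documents; one asserts something false.
    true_claim  = b"2 + 2 = 4. " * 40
    false_claim = b"2 + 2 = 5. " * 40

    # Near-identical compressed sizes: the codec exploits repetition, not truth.
    print(len(zlib.compress(true_claim)), len(zlib.compress(false_claim)))

    # Verification is a separate operation: actually evaluate the propositions.
    print(2 + 2 == 4, 2 + 2 == 5)

Compression measures regularity; verification requires evaluating (or proving) the claim, and that is the part I am calling understanding.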

> —but, in my view, this is the very reason these models have become increasingly intelligent.

They have not become more intelligent. The training process may improve, the vetting of the data may improve, the performance may improve, but the resemblance to understanding occurs only when the answers are provably correct. In this sense, these tools work in support of (and are therefore part of) human thinking.

The Stochastic Parrot is not dead; it's just making you think it is pining for the fjords.

> But they do not actually think.

I'm so baffled when I see this being blindly asserted.

With the reasoning models, you can literally watch their thought process. You can see them pattern-match to determine a strategy to attack a problem, go through it piece-by-piece, revisit assumptions, reformulate strategy, and then consolidate findings to produce a final result.

If that's not thinking, I literally don't know what is. It's the same process I watch my own brain use to figure something out.

So I have to ask you: when you claim they don't think -- what are you basing this on? What, for you, is involved in thinking that is missing from the kind of process I've just described? Because I genuinely don't know what needs to be added here for it to become "thinking".

  • > I'm so baffled when I see this being blindly asserted. With the reasoning models, you can literally watch their thought process.

    Not true; you are falling for a very classic (prehistoric, even) human illusion known as experiencing a story:

    1. There is a story-like document being extruded out of a machine that humans explicitly designed for generating documents and trained on a bajillion stories humans already made.

    2. When you "talk" to a chatbot, that is an iterative build of a (remote, hidden) story document, where one of the characters adopts your text input and the other's dialogue is "performed" at you (a toy sketch of this loop follows the list below).

    3. The "reasoning" in newer versions is just the "internal monologue" of a film noir detective character, and just as fictional as anything that character "says out loud" to the (fictional) smokin'-hot client who sashayed into the (fictional) rent-overdue office bearing your (real) query on their (fictional) lips.
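
    To make point 2 concrete, here is a minimal, runnable sketch with a canned token list standing in for a real autoregressive model (all names here are toy inventions, not any actual API):

        # Toy stand-in for a language model: a real one conditions on the whole
        # document so far and scores every candidate next token.
        TOY_TOKENS = iter([
            "\nDetective:", " (thinking)", " The", " client's", " question",
            " hides", " a", " motive.",
            "\nDetective:", " Here", " is", " your", " answer.",
            "<end>",
        ])

        def toy_next_token(document: str) -> str:
            return next(TOY_TOKENS, "<end>")

        def chat_turn(document: str, user_text: str) -> str:
            # Your input is spliced into the document as one character's line...
            document += f"\nClient: {user_text}"
            # ...and the other character's "reasoning" and spoken reply are the
            # same extrusion: more tokens appended to the same story document.
            while not document.endswith("<end>"):
                document += toy_next_token(document)
            return document

        print(chat_turn("", "Is the gear edible?"))

    The "internal monologue" and the dialogue "performed" at you come out of the same append-to-the-document loop; nothing in the mechanism distinguishes them.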

    > If that's not thinking, I literally don't know what is.

    All sorts of algorithms can achieve useful outcomes with "that made sense to me" flows, but that doesn't mean we automatically consider them to be capital-T Thinking.

    > So I have to ask you: when you claim they don't think -- what are you basing this on?

    Consider the following document from an unknown source, and the "chain of reasoning" and "thinking" that your human brain perceives when encountering it:

        My name is Robot Robbie.
        That high-carbon steel gear looks delicious. 
        Too much carbon is bad, but that isn't true here.
        I must ask before taking.    
        "Give me the gear, please."
        Now I have the gear.
        It would be even better with fresh manure.
        Now to find a cow, because cows make manure.
    

    Now whose reasoning/thinking is going on? Can you point to the mind that enjoys steel and manure? Is it in the room with us right now? :P

    In other words, the reasoning is illusory. Even if we accept, for the sake of argument, that the unknown author is a thinking intelligence... the document still doesn't tell you what the author is actually thinking.

    • You're claiming that the thinking is just a fictional story intended to look like thinking.

      But this is false, because the thinking exhibits cause and effect and a lot of good reasoning. If you change the inputs, the thinking continues to be pretty good with the new inputs.

      It's not a story, it's not fictional; it's producing genuinely reasonable conclusions about data it hasn't seen before. So how is that not actual thinking?

      And I have no idea what your short document example has to do with anything. It seems nonsensical and bears no resemblance to the actual, grounded chain-of-thought processes that high-quality reasoning LLMs produce.

      > OK, so that document technically has a "chain of thought" and "reasoning"... But whose?

      What does it matter? If an LLM produces output, we say it's the LLM's. But I fail to see how that is significant.
