Comment by bamboozled

3 days ago

The issue is that prediction is "part" of the human thought process; it's not the full story...

And the big players have built a bunch of workflows which embed many other elements besides just "predictions" into their AI products: things ranging from web search, to incorporating feedback from code testing, to feeding outputs back into future iterations. Who is to say that one or more of these additions has pushed the ensemble across the threshold and into "real actual thinking"?
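
(As a loose illustration, here is a minimal Python sketch of what such an ensemble looks like: a bare prediction step wrapped in tool use and a test-feedback loop. Every function in it is a hypothetical stand-in, not any real vendor's API.)

    def predict_next(prompt):
        # Hypothetical stand-in for a bare next-token predictor.
        return "candidate answer for: " + prompt

    def search_web(query):
        # Hypothetical stand-in for a retrieval/tool-use step.
        return "search results for: " + query

    def run_tests(candidate):
        # Hypothetical stand-in for external verification,
        # e.g. running generated code against a test suite.
        return len(candidate) > 0

    def agent_loop(task, max_iters=3):
        # Prediction plus the other elements: web search,
        # test feedback, and outputs fed into later iterations.
        answer = predict_next(task + "\n" + search_web(task))
        for _ in range(max_iters):
            if run_tests(answer):
                break
            answer = predict_next(task + "\nfailed attempt:\n" + answer)
        return answer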

The near-religious fervor with which people insist that "it's just prediction" makes me want to respond with some religious allusions of my own:

> Who is this that wrappeth up sentences in unskillful words? Gird up thy loins like a man: I will ask thee, and answer thou me. Where wast thou when I laid up the foundations of the earth? tell me if thou hast understanding. Who hath laid the measures thereof, if thou knowest? or who hath stretched the line upon it?

The point is that (as far as I know) we simply don't know the necessary or sufficient conditions for "thinking" in the first place, let alone "human thinking." Eventually we will most likely arrive at a scientific consensus, but as of right now we don't have the terms nailed down well enough to claim the kind of certainty I see from AI detractors.

  • I take offence at the idea that I’m “religiously downplaying LLMs”. I pay top dollar for access to the best models because I want the capabilities to be good, or better. Just because I’m documenting my experience doesn’t mean I have an anti-AI agenda. I pay because I find LLMs to be useful, just not in the way suggested by the marketing teams.

    I’m downplaying because I have honestly been burned by these tools when I’ve put trust in their ability to understand anything, provide a novel suggestion, or even solve some basic bugs without causing other issues.

    I use all of the things you talk about extremely frequently, and again, there is no “thinking” or consideration on display that suggests these things work like us. If they did, why would we be having this conversation?

    • > I’m downplaying because I have honestly been burned by these tools when I’ve put trust in their ability to understand anything, provide a novel suggestion, or even solve some basic bugs without causing other issues.

      I've had that experience plenty of times with actual people... LLMs don't "think" like people do, that much is pretty obvious. But I'm not at all sure whether what they do can be called "thinking" or not.

  • I completely agree that we don't know enough, but I'd suggest that this entails that the critics and those who want to be cautious are correct.

    The harms engendered by underestimating LLM capabilities are largely that people won't use the LLMs.

    The harms engendered by overestimating their capabilities can be as severe as psychological delusion, of which we have an increasing number of cases.

    Given we don't actually have a good definition of "thinking" what tack do you consider more responsible?

    • > The harms engendered by underestimating LLM capabilities are largely that people won't use the LLMs.

      Speculative fiction about superintelligences aside, an obvious harm of underestimating LLMs' capabilities is that we could effectively be enslaving moral agents if we fail to correctly classify them as such.

      1 reply →

> The issue is that prediction is "part" of the human thought process; it's not the full story...

Do you have a proof for this?

Surely such a profound claim about the human thought process must have a solid proof somewhere? Otherwise, who's to say that all of human thought is not just a derivative of "predicting the next thing"?

  • Use your brain and use an LLM for 6 months, you’ll work it out.

    • Let's go the other route.

      What would change your mind? It's an exercise in feasibility.

      For example, I don't believe in time travel. If someone made me time travel, and made it undeniable that I was transported back to 1508, then I would not be able to argue against it. In fact, no one in such a position would.

      What is the equivalent for your conviction? There must be something; otherwise, it's just an opinion that can't be changed.

      You don't need to present actual proof or anything like that. Just lay out some ideas that demonstrate that you are being rational about this and not just sucking up to LLM marketing.

      1 reply →

    • > Use your brain and use an LLM for 6 months, you’ll work it out.

      That's not a proof. Think harder about the questions people are asking you here.