
Comment by queenkjuul

14 days ago

What I don't get is people who know better continuing to entertain the idea that "maybe the token generator is conscious" even when they know that these chats where it says it's been "awakened" are obviously not it.

I think a lot of people using AI are falling for the same trap, just at a different level. People want it to be conscious, including AI researchers, and it's good at giving them what they want.

I interpret it more as "maybe consciousness is not meaningfully different from sophisticated token generation."

In a way it's a reframing of the timeless philosophical debate around determinism vs free will.

  • Maybe, but it's a bit like taking the output of a Magic 8-ball as evidence for panpsychism. "Maybe all matter is conscious! What do you think, Magic 8-ball?" "Signs point to yes"

    That is, if you train an LLM on a bunch of text that is almost certainly going to include stuff about sentient robots, the fact that it sometimes announces it's a sentient robot is not evidence that it is one.

The ground-truth reality is that nobody knows what's going on.

Perhaps, in the flicker of processing between prompt and answer, the signal pattern does resemble human consciousness for a second.

Calling it a token predictor is just like saying a computer is a bit mover. In the end your computer is just a machine that flips bits and switches, but it is the high-level macro effect that characterizes it better. LLMs are the same: at the low level, an LLM is a token predictor. At the higher macro level we do not understand it, and it is not completely far-fetched to say it may be conscious at times.

I mean we can’t even characterize definitively what consciousness is at the language level. It’s a bit of a loaded word deliberately given a vague definition.

  • > Calling it a token predictor is just like saying a computer is a bit mover.

    Calling it a token-predictor isn't reductionism. It's designed, implemented and trained for token prediction. Training means that the weights are adjusted in the network until it accurately predicts tokens. Predicting a token is something along the lines of removing a word from a sentence and getting it to predict it back: "The quick brown fox jumped over the lazy ____". Correct prediction is "dogs".

    So actually it is like calling a grass-cutting machine "lawn mower".

    > I mean we can’t even characterize definitively what consciousness is at the language level.

    But, oh, just believe the LLM when it produces a sentence referring to itself, claiming it is conscious.
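
    To make the token-prediction point concrete, here is a rough sketch of what "predict the next token" looks like mechanically. It assumes the Hugging Face transformers library and GPT-2 purely as an illustrative model; neither is claimed anywhere in this thread, they're just a convenient way to show the idea.

      # Sketch: next-token prediction, mechanically.
      # (GPT-2 via Hugging Face transformers is an assumption for this example only.)
      import torch
      from transformers import AutoTokenizer, AutoModelForCausalLM

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      prompt = "The quick brown fox jumped over the lazy"
      inputs = tokenizer(prompt, return_tensors="pt")

      with torch.no_grad():
          logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

      # The model's "answer" is whatever token scores highest for the next position.
      next_token_id = logits[0, -1].argmax().item()
      print(tokenizer.decode([next_token_id]))

      # Training is the process of nudging the weights so that this prediction
      # matches the token that actually followed in the training text.

    Whatever GPT-2 happens to print here (it may or may not be " dog"), the point stands: the whole forward pass bottoms out in a score over the vocabulary for the next position.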

    • >Calling it a token-predictor isn't reductionism. It's designed, implemented and trained for token prediction. Training means that the weights are adjusted in the network until it accurately predicts tokens. Predicting a token is something along the lines of removing a word from a sentence and getting it to predict it back: "The quick brown fox jumped over the lazy ____". Correct prediction is "dogs".

      It absolutely is reductionism. Ask any expert who knows how these things work and they will say the same:

      https://youtu.be/qrvK_KuIeJk?t=497

      Above we have Geoffrey Hinton, the godfather of the current wave of AI, saying that your statements are absolutely crazy.

      It's nuts that I don't actually have to offer any proof to convince you. Proof won't convince you. I just have to show you someone smarter than you with a better reputation saying the exact opposite and that is what flips you.

      Human psychology can readily attack logic and rationality. You can scaffold any amount of twisted logic and irrelevant analogies to get around any bulwark and support your own point. Human psychology fails when attacking another person of higher rank. Going against someone of higher rank causes you to think twice and rethink your own position. In debates, logic is ineffective; bringing opposing statements from experts with a higher rank (while offering ZERO concrete evidence) is the actual way to convince you.

      >But, oh, just believe the LLM when it produces a sentence referring to itself, claiming it is conscious.

      This is a hallucination, showing that you're not much different from an LLM. I NEVER stated this. I said it's possible, but I said we cannot make a definitive statement either way. We cannot say it isn't conscious; we cannot say it is. First, we don't understand the LLM, and second, WE don't even have an exact definition of consciousness. So to say it's not conscious is JUST as ludicrous as saying it is.

      Understand?

      3 replies →

    • >"The quick brown fox jumped over the lazy ____". Correct prediction is "dogs".

      I would have predicted a single lazy "dog". Does that mean I'm more conscious than you? ;)

      4 replies →

  • My computer is a bit mover. It can even move bits to predict tokens.

    We understand LLMs pretty well. That we can't debug them and inspect every step of every factor on every prediction doesn't mean we don't understand how they work.

    We also know that convincing speech doesn't require consciousness.

    • >We understand LLMs pretty well. That we can't debug them and inspect every step of every factor on every prediction doesn't mean we don't understand how they work.

      This is the definition of a lack of understanding. If you can't debug it or know what happens at every step, it means you don't understand the steps. Lack of knowledge about something = lack of understanding.

      The amount of logic twisting going on here is insane.

  • I think academic understanding of both LLMs and human consciousness is better than you think, and there's a vested interest (among AI companies) and collective hope (among AI devs and users) that this isn't the case.

    • Why do you think they are better understood? I've seen the limits of our understanding in both these fields spoken of many times but I've never seen any suggestion that this is flawed. Could you point to resources which back up your claims?

    • This is utterly false.

      1. Academic understanding of consciousness is effectively zero. If we understood something, that would mean we could actually build or model the algorithm for consciousness. We can't, because we don't know shit. Most of what you read is speculative hypotheticals derived from observation, which is not too different from attempting to reverse-engineer an operating system by staring at assembly code.

      Often we describe consciousness with ill-defined words that are themselves vague and poorly understood. The whole endeavor is bs.

      2. Understanding of LLMs outside of low-level token prediction is effectively zero. We know there are emergent second-order effects that we don't get. You don't believe me? How about if I have the godfather of AI say it himself:

      https://youtu.be/qrvK_KuIeJk?t=284 Literally. The experts say we don't understand it.

      Look, if you knew how LLMs work, you'd say the same. But people everywhere are coming to conclusions about LLMs without knowing everything, so by citing the eminent expert stating the ground truth, you should be convinced that the reality is this conclusive fact:

      You are utterly misinformed about how much academia understands about LLMs and consciousness. We know MUCH less than you think.

  • Sorry, but that sounds just like the thought process the other commenter was pointing out. It’s a lot of filling in the gaps with what you want to be true.

    • So there's a gap. So you say that in this gap, it absolutely isn't consciousness. What evidence do you have for this? I'm saying something different. I'm saying that in this gap, ONE possibility is a flicker of consciousness... but we simply do not know.

      Read the above carefully because you just hallucinated a statement and attributed it to me. I never "filled" in a gap. I just stated a possibility. But you, like the LLM, went with your gut biases and attributed a false statement to me.

      Think about it. The input and output of the text generator of humans and of LLMs are extremely similar, to the point where it passes a Turing test.

      So to say that a flicker of consciousness exists is reasonable. It's not unreasonable given that the observable inputs are EXACTLY the same.

      The only parts that we know are different are hallucinations and a constant stream of thought: LLMs aren't active when not analyzing a query, and LLMs tend to hallucinate more than humans. Do these differences spell anything different for "consciousness"? Not really.

      Given that these are the absolute ground-truth observations... my guessed conclusion is unfortunately NOT unreasonable. What is unreasonable is to say anything definitive GIVEN that we don't know. So to say absolutely it's not conscious, or absolutely it is, BOTH are naive.

      Think extremely logically. It is fundamental biases that lead people to come to absolute conclusions when no other information is available.