
Comment by laichzeit0

7 hours ago

You could push the analogy even further and run the thought experiment where every forward pass through an LLM could in principle be done on pen and paper, distributed throughout all humanity. Sure it would take a long time, but the output would be exactly the same. We’ve just shifted the implementation from GPU to scribbling things down on paper. If you want to assert that LLMs are “conscious” then you would have to likewise say this pen-and-paper implementation is conscious unless you want to say a certain clock-speed is a necessary condition for consciousness.

When we get complete neuronal connection maps (which we are getting close to for mice, and which will likely be done for humans within a decade or two), we could in principle simulate a brain on a computer, or on paper too. Unless you assert something magical like a "soul", these connections are what determine human consciousness. It is one thing to argue that LLMs don't resemble brains, and that if they could be "conscious" they wouldn't be conscious in the sense we are; but asserting that anything understandable can't be conscious won't age well.

The problem with this is that I'd strongly argue you could do this pen-and-paper process with the human brain and our consciousness too; we just lack the understanding needed to put pen to paper in that case.

The notion that consciousness is an experience other animals/humans share is entirely faith-based.

The only person with evidence of one's consciousness is the person claiming to be conscious.

  • > The problem with this is that I'd strongly argue you could do this pen-and-paper process with the human brain and our consciousness too; we just lack the understanding needed to put pen to paper in that case.

    You're basing your premise on a lack of understanding[1]; the GP's premise is based on an exact understanding[2].

    You don't see the difference between your premise and the GP's premise?

    -----------------

    [1] "We don't know how brains actually come up with the things they come up with, like consciousness"; IOW, we don't know what the secret ingredient is, or even if there is one.

    [2] "We can mechanically do the following steps using 18th-century tech and come up with the same result as the LLM"; IOW, every ingredient in here is known to us.

    • We know the brain is made up of atoms, and we know how to model atoms. So we know for a fact that the brain can be modeled mathematically, and that human thought can in principle be written down symbolically as an algorithm on paper.

      The Blue Brain Project has already modeled the hippocampus and cortex of the rat brain using advanced imaging and simulations on supercomputers. So if it can be written down as memory on disk, it can be done on paper as well.

      The rat brain is simply a smaller and structurally different neural network than its human counterpart, so the jump from the Blue Brain Project to human brains is simply a scaling issue.

      But from this you should begin to see the analysis from another level. Even though we have parts of the rat brain emulated computationally, we still do not know if the rat is conscious. We don't understand the rat brain, in the SAME way we do not understand the LLM.

      What people are getting at is the projection of this logic to things that don’t exist yet but can exist. When the blue brain project scales to the human brain we will hit the same problem with the human brain because it’s just a scaling issue.

We know the brain can be modeled by math (and therefore thought can be written down on paper).

We know because we have mathematical models for atoms, and we know the brain is made out of atoms; therefore the brain is, in principle, describable as a mathematical model of interconnected atoms that form a specific structure we call the brain.

Thus every facet of macro reality (macro being the key word) should be able to be written down on paper and calculated. That goes for everything… from the emotions you feel to the internal forward pass of an LLM.
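The "brain as math" side of this can at least be sketched too. Below is a toy leaky integrate-and-fire neuron, a standard textbook simplification (not anything from the Blue Brain Project's far more detailed models), where each time step of a "neuron" is a handful of arithmetic operations you could carry out on paper, just like the LLM's forward pass:

```python
# A hypothetical leaky integrate-and-fire neuron: one standard way of
# modeling a neuron as plain arithmetic. A sketch of the general claim
# (neural dynamics reduce to math), not a faithful biological model.

def lif_step(v, current, leak=0.9, threshold=1.0):
    # One time step: decay the membrane potential, add input current,
    # and fire (reset) if the threshold is crossed. Every operation
    # here could be carried out by hand.
    v = leak * v + current
    if v >= threshold:
        return 0.0, True   # spike, then reset
    return v, False

v, spikes = 0.0, 0
for _ in range(10):
    v, fired = lif_step(v, current=0.3)
    spikes += fired
print(spikes)  # the neuron fires twice over these 10 steps
```

Scaling this up to billions of interconnected units is an engineering problem, not a conceptual one, which is the "just a scaling issue" point above.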