
Comment by lelanthran

3 hours ago

> the problem with this is I'd strongly argue that you could do this pen and paper process with the human brain and our consciousness too; we just lack enough understanding to put pen to paper in that case.

You're basing your premise on a lack of understanding[1], the GP's premise is based on an exact understanding[2].

You don't see the difference between your premise and the GP's premise?

-----------------

[1] "We don't know how brains actually come up with the things they come up with, like consciousness"; IOW, we don't know what the secret ingredient is, or even if there is one.

[2] "We can mechanically do the following steps using 18th-century tech and come up with the same result as the LLM"; IOW, every ingredient in here is known to us.
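To make [2] concrete, here is a toy sketch (my own illustration, not any real model's weights) of a single attention step, the core operation of an LLM, reduced to nothing but multiplication, addition, and exponentiation. Every step could, tediously, be carried out with pencil, paper, and log tables:

```python
import math

# Toy attention step with made-up 2-dimensional vectors.
# The point: each operation is ordinary arithmetic, doable by hand.
keys   = [[1.0, 0.0], [0.0, 1.0]]
values = [[2.0, 0.0], [0.0, 3.0]]

def attend(q, keys, values):
    # 1. dot products: multiply and add
    scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in keys]
    # 2. softmax: exponentiate and normalise
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # 3. weighted sum of the value vectors
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

out = attend([1.0, 0.0], keys, values)
```

A full LLM is billions of these arithmetic steps, but nothing in it is a different *kind* of operation.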

We know the brain is made up of atoms, and we know how to model atoms. So we do know, as a matter of fact, that the brain can be modeled mathematically, and that human thought can be written down symbolically as an algorithm on paper.
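The "algorithm on paper" claim is easiest to see with a standard textbook neuron model. Below is a minimal sketch (my own illustration, not the Blue Brain Project's actual model) of a leaky integrate-and-fire neuron. Each time step is one line of arithmetic, so in principle the whole simulation could be stepped through by hand:

```python
# Leaky integrate-and-fire neuron: the membrane potential v leaks toward
# zero, is driven up by an input current, and fires a spike when it
# crosses a threshold. Parameters here are arbitrary round numbers.
def simulate_lif(current, steps, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    v = 0.0
    spikes = []
    for t in range(steps):
        # leaky integration: dv/dt = (-v + current) / tau
        v += dt * (-v + current) / tau
        if v >= v_thresh:        # threshold crossing -> spike, then reset
            spikes.append(t)
            v = v_reset
    return spikes

spikes = simulate_lif(current=2.0, steps=50)
```

Real biophysical models (Hodgkin-Huxley and beyond) have more terms, but they are still just differential equations updated step by step.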

The Blue Brain Project has already modeled the hippocampus and cortex of the rat brain, using advanced imaging and supercomputer simulations. And if it can be written down as bits on disk, it can be done on paper as well.

The rat brain is simply a smaller and structurally different neural network than its human counterpart, so the jump from the Blue Brain Project to human brains is simply a scaling issue.

But from this you should begin to see the analysis from another level. Even though we have parts of the rat brain emulated computationally we still do not know if the rat is conscious. We don’t understand the rat brain in the SAME way we do not understand the LLM.

What people are getting at is the projection of this logic onto things that don't exist yet but could. When the Blue Brain Project scales up to the human brain, we will hit the same problem with the human brain, because the difference is just one of scale.