Comment by lelanthran
3 hours ago
> We know the brain is made up of atoms and we know how to model atoms.
Incorrect. There's still a lot we don't know about atoms. We can (sort of) model them, but not with the degree of accuracy you appear to think we have.
I mean, it's only recently that we discovered surprising changes in the properties of quarks, gluons and nucleons in relation to each other!
So, yeah, the following foundation for your argument:
> So we do know for a fact that the brain can be modeled mathematically
Is untrue: we can't do that, and we have never done that.
> The blue brain project has already modeled the hippocampus and cortex of the rat brain uses advanced imaging and simulations in super computers.
They've got something, but they don't know how close or how far away they are from accuracy to the real thing.
We've almost always had a model of the human brain; first our model was simple (it had four or five parts), then we learned more and our model expanded to include actual cells (neurons, dendrites, etc.), then we learned even more and our model was refined further to include activation energies, rerouting, etc.
What makes you think we are anywhere close to the base layer, where there is no more refinement to be made? Because while there are still things in brains that are outside of our knowledge (which, by definition, we don't know yet), we don't know enough about brains to make a replica of one as a mathematical model, or in silicon.
> Incorrect. There's still a lot we don't know about atoms. We can (sort of) model them, but not with the degree of accuracy you appear to think we have.
Not incorrect. You are misinformed and getting pedantic. Our knowledge of atoms is enough to model macro-level phenomena, and it has spawned entire fields such as materials science and molecular biology. What is intractable is the computational power needed to accurately model things like the physics of protein folding: the computation needed for that scales exponentially with the size of the problem, such that we can't model it directly. That is the reality.
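To make the exponential-scaling point concrete, here is a back-of-the-envelope sketch in Python (a Levinthal-style estimate; the three-states-per-residue and 10^15-samples-per-second figures are illustrative assumptions, not measured values):

```python
# Back-of-the-envelope Levinthal-style estimate: if each of n residues
# can take just 3 backbone conformations, the search space is 3**n.
# All figures here are illustrative assumptions, not measured values.

def conformations(n_residues: int, states_per_residue: int = 3) -> int:
    """Size of the conformational space for a naive exhaustive search."""
    return states_per_residue ** n_residues

def brute_force_years(n_residues: int, samples_per_second: float = 1e15) -> float:
    """Years to enumerate every conformation at a (generous) 10^15 samples/s."""
    seconds = conformations(n_residues) / samples_per_second
    return seconds / (365.25 * 24 * 3600)

# Even a small 100-residue protein dwarfs any conceivable computer:
print(f"{conformations(100):.3e} conformations")  # ~5.2e47
print(f"{brute_force_years(100):.3e} years")      # ~1.6e25 years
```

The point of the sketch is only the growth rate: adding one residue multiplies the naive search space by three, which is why brute-force modeling at full fidelity is off the table regardless of hardware.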
That being said, we don't need to model quantum-level phenomena to model macro-level effects like the biological mechanism of a neuron. There are simplified models we can use, as the Blue Brain Project has done.
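The kind of simplification meant here can be illustrated with a leaky integrate-and-fire neuron, one of the standard simplified neuron models (the Blue Brain Project itself uses far more detailed Hodgkin-Huxley-style compartment models; the parameter values below are generic textbook numbers, not taken from the project):

```python
# Minimal leaky integrate-and-fire (LIF) neuron: an illustration of how
# macro-level neural behavior can be modeled without any quantum detail.
# All parameters are generic textbook values, not Blue Brain's.

def simulate_lif(i_input=1.8e-9,   # injected current (A)
                 t_stop=0.1,       # simulated time (s)
                 dt=1e-4,          # Euler time step (s)
                 tau=0.02,         # membrane time constant (s)
                 r_m=1e7,          # membrane resistance (ohm)
                 v_rest=-0.07,     # resting potential (V)
                 v_thresh=-0.054): # spike threshold (V)
    """Integrate dV/dt = (v_rest - V + R*I) / tau; spike and reset at threshold."""
    v = v_rest
    spikes = []
    for step in range(int(t_stop / dt)):
        v += dt * (v_rest - v + r_m * i_input) / tau
        if v >= v_thresh:              # threshold crossed: record spike, reset
            spikes.append(step * dt)
            v = v_rest
    return spikes

spikes = simulate_lif()
print(f"{len(spikes)} spikes in 100 ms")
```

A handful of lines like this reproduces the spiking behavior that matters at the circuit level, which is exactly the sense in which "simplified models" can stand in for quantum-level detail.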
Additionally, the things we actually can't model and don't know about are extreme regimes like black-hole physics, where the quantum world interacts with gravity, but that is largely irrelevant to the topic at hand.
I hope this excerpt educates you a bit.
> Is untrue. We can't do that, we have never done that.
We haven’t done that, just like we haven’t actually actualized the biggest number ever calculated by a computer. We know that number exists in theory, but you’d be an idiot to claim it doesn’t exist, as it’s foundational. For example, the number a googol (10^100) exists, but no one has ever observed a googol of anything; we know it through logic. From the Blue Brain Project we can infer relatively confidently that the human brain can be emulated on silicon. This also follows from Turing completeness.
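The googol point can be made concrete: any language with arbitrary-precision integers can manipulate the number exactly, even though nothing physical instantiates it (a minimal sketch):

```python
# A googol (10**100) as an exact arbitrary-precision integer: the number
# is fully usable in logic and software even though no physical
# collection of a googol objects exists anywhere.
googol = 10 ** 100

print(len(str(googol)) - 1)  # 100 zeros after the leading 1
print(googol > 2 ** 300)     # comfortably larger than 2^300 (~2e90): True
```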
> They've got something, but they don't know how close or how far away they are from accuracy to the real thing.
The emulation is quite accurate as judged by imaging and experiment: the properties of the emulation match in vitro and in vivo experimental data without specific parameter tuning. It is accurate as far as we know, which is about the same extent to which we understand the human brain or an LLM. The better question for you is: how do you know it’s not accurate? You don’t. What we do know is that, on the properties we can measure, the Blue Brain emulation is accurate to the section of the rat brain it emulates. This is exactly the same reasoning applied to LLMs: the tokens LLMs generate are remarkably in line with consciousness, such that they are indistinguishable from it, and thus it can be speculated that an LLM actually IS conscious.
> What makes you think we are anywhere close to the base layer, where there is no more refinement to be made? Because while there are still things in brains that are outside of our knowledge (which, by definition, we don't know yet), we don't know enough about brains to make a replica of one as a mathematical model, or in silicon.
Who says we need to make a replica of a human to make it conscious? We know the brain is made up of thousands of evolutionary side effects orthogonal to the concept of consciousness, like hunger, sleep, and anger. All we need to do is replicate a sliver of the subset of human output that we consider to be consciousness, and that’s it.
But right now we can’t even fully define what that subset is, and we don’t even understand how an LLM replicates human output.
What we do know is that the LLM replicates human output to a degree never achieved before, indicating that it understands what it is being told. From the evidence observed, it is a valid speculation to consider it a form of consciousness. That is entirely different from saying AI is human. It is clearly not human, but it is unclear whether or not it is conscious.
Confidently claiming an LLM is not conscious is fundamentally misguided, because it meets most of our intuitive expectations of what consciousness is. It’s just that people can’t face the reality that their own consciousness is not a form of exceptionalism.