Comment by miyoji
21 hours ago
Hypothetically? You need more than a brain to have consciousness. Dead brains, I believe, do not have it. So it's more than just a simulation of a brain; you also need to simulate the data flow through the brain, the retention of memories, etc. Then there's the problem that a simulation of a roller coaster is not a roller coaster. Is there any reason to believe that this simulation of a brain will in fact operate as a brain? Does the simulation not lose something? Or are we discussing some impossible level of perfect simulation that has never been and can never be achieved, even for something a million times less complicated than a mammalian brain?
If you build that spreadsheet, let me know and I'll evaluate it. I've done that evaluation with LLMs and they're definitely not conscious.
I'm not suggesting we pursue AGI via Excel; this is just a hypothetical for a reason. The technical feasibility of this (low) does not really matter, but if you want to base your argument on it, you are basically playing the "god of the gaps" game, which is a weak/bad position IMO.
My point is that dismissing possible machine consciousness as "it's just a spreadsheet/statistics/linear algebra" is missing a critical step: Those dismissals don't demonstrate that human consciousness is anything more than an emergent property achievable by linear algebra.
If you want human minds to be "unsimulatable", then you need some essential core logic that cannot be simulated on a Turing machine, and physics is not helping with that.
> I've done that evaluation with LLMs and they're definitely not conscious.
What is your definition of "consciousness" here? Are you confident that you are not gatekeeping current machine intelligence by demanding somewhat arbitrary, somewhat unimportant capabilities in your definition of consciousness? E.g. memory or online learning: if a human were unable to form long-term memories or learn anything new, could you confidently call him "non-conscious" as well?
I'm not dismissing possible machine consciousness. I'm saying that no current machines have consciousness.
> If you want human minds to be "unsimulatable", then you need some essential core logic that cannot be simulated on a Turing machine, and physics is not helping with that.
You don't have a proof of possibility either; you have no idea how a brain works, and you're just postulating that in principle a computer can do the same thing. Okay, in principle, I agree. What about in practice?
> Are you confident that you are not gatekeeping current machine intelligence by demanding somewhat arbitrary capabilities in your definition of consciousness that are somewhat unimportant?
Yes, I'm quite sure. Are you trying to argue that current LLMs have consciousness?
> Are you trying to argue that current LLMs have consciousness?
If I get to define "consciousness", sure. I'd go with "the capability to build a general-purpose internal model of reality, to reason on that model (guess about causality, extrapolate, etc.), and to update it, plus some concept of self within that model". I would argue that current-generation LLMs already have those, but you could certainly argue about lots of nuances, and only the whole loop (inference plus training) even qualifies.
> You don't have a proof of possibility either, you have no idea how a brain works and you're just postulating that in principle a computer can do the same thing.
Essentially yes, but I think this argument is really weak; we arguably have some understanding of how the brain operates, and LLMs are basically our best attempt so far to replicate the general principles in silicon.
But "understanding" and "ability to replicate" are obviously very different-- you wouldn't argue that we don't understand human limbs just because we can't build a proper artificial arm, right?
Assume we make some breakthroughs in online learning/internal memory modelling over the next decades and build some toy with a mic/speaker/camera and basically human cognitive abilities: would you hesitate to call such a thing conscious? Why?
I think almost everyone has lots of deeply embedded, unscientific notions about the human mind, but the cold hard fact is that simple evolution basically brute-forced human cognition from zero, so I see no reason to assume that we can't do the same with several billion transistors doing mostly linear algebra.
> I've done that evaluation with LLMs and they're definitely not conscious.
This is too important a point to just make as a side comment like that. Tell us how we can evaluate whether something is conscious.