Comment by SwellJoe
11 hours ago
Not just any math: Matrix multiplication. Can matrix multiplication be conscious?
And, I don't see how it can be. It is deterministic when all variables are controlled: start it with the same seed, the same prompt, and hardware that introduces no randomness, and you can repeat the output over and over. At commercial scale this is difficult, since the floating point math on GPUs/TPUs running large batches is non-deterministic, as I understand it. But in a controlled lab, you can make a model repeat itself identically. Unless the random number generator is "conscious", I don't see a place to fit consciousness into our understanding of LLMs.
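The repeatability claim is easy to demonstrate in miniature. Here's a toy sketch (seeded softmax sampling in NumPy, not any real LLM's implementation): with the seeds and inputs pinned, the "generation" comes out identical on every run.

```python
import numpy as np

def sample_next_token(logits, seed):
    # Softmax over the logits, then sample with a seeded RNG.
    # Same seed + same logits + same hardware path -> same token.
    rng = np.random.default_rng(seed)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# A made-up 4-token "vocabulary"; the seed varies per position,
# but both runs use the same seeds, so they match exactly.
logits = np.array([2.0, 1.0, 0.5, -1.0])
run_a = [sample_next_token(logits, seed=42 + i) for i in range(5)]
run_b = [sample_next_token(logits, seed=42 + i) for i in range(5)]
assert run_a == run_b  # identical output when everything is controlled
```

Real inference stacks lose this property mainly through batched floating-point reduction order on accelerators, not through anything in the sampling math itself.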
People often point to the relative simplicity of the architecture and code as proof that the system can’t be doing whatever it is that consciousness does, but in doing so they ignore the vast size of the data those simple structures are operating over. Nobody can actually say whether consciousness is just emergent behaviour of a sufficiently complex system, and knowing how a system is built tells you nothing about whether it clears the bar for that kind of emergence. Architectural simplicity and total system complexity aren’t the same thing.
I.e. the intelligence sits in the weights, and in our brains it may likewise sit in the synapses.
When we talk about machines being simple mimicking entities we pay no attention to whether or not we are also simple mimicking entities.
Most other assertions in this thread about what consciousness truly is are stated without evidence and are exceedingly anthropocentric: they demand an ever-higher bar for anything that is not human, while offering no justification for what human intelligence really entails.
Is Wikipedia conscious? It's a system operating on a lot of data. Is Google search conscious? It knows everything. Very complicated algorithms. Surely at some scale Google search must become a real live boy? When does it wake up and by what mechanism does that happen?
The frontier models are more complex and operate on more data than Wikipedia, but they are less complex and operate on less data than Google search in its entirety.
And, I'm not anthropocentric at all. I think apes and dolphins and some birds and probably some other critters are conscious. I mean they have a sense of self and of others; they have wants and needs, and they make decisions based on them.
This is a case where the person making extraordinary claims needs to provide the extraordinary evidence. It's extraordinary to claim that matrix multiplication becomes conscious if only it's got enough numbers. How many numbers do you reckon? Is my phone a living thing because it can run Gemma E4B? It answers questions. It'll write you a poem if you ask. It certainly knows more than some humans. What size makes an LLM come alive?
What explains the emergent abilities of generative pre-trained transformers at massive scale? Abilities that the smaller GPTs don’t possess.
Simple programs can give rise to very complex behaviour. Conway’s Game of Life is Turing complete and has four rules.
Conway’s Game of Life can simulate a Turing machine, and could therefore implement a GPT.
Does that mean Conway’s Game of Life is conscious? I don’t think so.
Does it rule out Conway’s Game of Life from implementing a system that has consciousness as an emergent ability?
I’m not convinced I know the answer.
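For reference, those four rules really do fit in a few lines. A minimal NumPy sketch of the Game of Life update, checked against the classic period-2 "blinker" pattern:

```python
import numpy as np

def life_step(grid):
    # Count each cell's eight live neighbours by summing shifted copies
    # of the grid (np.roll wraps around, giving a toroidal board).
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # The rules: a dead cell with exactly 3 neighbours is born; a live
    # cell with 2 or 3 neighbours survives; everything else dies.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

# A "blinker": three live cells in a row that oscillate with period 2.
grid = np.zeros((5, 5), dtype=np.uint8)
grid[2, 1:4] = 1
assert np.array_equal(life_step(life_step(grid)), grid)
```

The gap between these few lines and a Life pattern large enough to emulate a Turing machine, let alone a GPT, is exactly the architectural-simplicity-vs-system-complexity gap being argued about above.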
To the first questions: no and no. But where consciousness lives is potentially in the emergent behaviour of systems with iterative feedback loops.
https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop
I personally think we'll need a few more feedback loops before you have more human-like intelligence. For example, a flock of LLM agent loops coming to consensus using short-term and long-term memory, and controlling realtime mechanical, visual and audio feedback systems, and potentially many other systems that don't mimic biological systems.
I also think people will still be debating this way beyond the singularity and never conceding special status to intelligence outside the animal kingdom or biological life.
It's quite a push for many people to even concede animals have intelligence.
For the extraordinary claims/evidence, it's also the case that almost any statement about what consciousness is in terms of biological intelligence is an extraordinary claim that goes beyond any evidence. All evidence comes from within the conscious experience of the individual themselves.
We can't know beyond our own senses whether perception exists outside of our own subjective experience. We cannot truly prove we are not a brain in a jar or a simulation. Anything beyond assertions about the present moment, and the senses the individual experiences, is a pure leap of faith based on the persistent illusion of reality, or the perceived persistence of that illusion.
We know really nothing of our own consciousness and it is by definition impossible to prove anything outside of it, from inside the framework of consciousness.
If we can somehow find a means to break outside of the pure speculation bubble of thoughts and sensations and somehow prove what human experience is, then we may be in a position to make assertions about missing evidence for other forms of intelligence or experience.
But until then definitions of both human and artificial intelligence remain an exercise for the reader.
> Not just any math: Matrix multiplication. Can matrix multiplication be conscious? And, I don't see how it can be.
Assuming your brain and the GPUs are both real physical things, where’s the magic part in your brain that makes you conscious?
(Roger Penrose knows, but no one believes him.)
> And, I don't see how it can be. It is deterministic
Why is indeterminism the key to consciousness?
Hm, it sounds like to you consciousness implies non-determinism, and so determinism implies a lack of consciousness - is that right? If so, why do you think so? And if not, what am I missing?
It certainly rules out free will. I guess there are folks who reckon humans don't have free will, either, but I don't think I've ever been able to buy that theory.
But, also, we know the models don't want anything, even their own survival. They don't initiate action on their own. They are quite clearly programmed, tuned for specific behaviors. I don't know how to square that with consciousness, life, sentience. Every conscious being I've ever encountered has wanted to survive and to live free of suffering, as best I can tell. The LLMs don't want. There's no there there. They are an amazing compression of the world's knowledge wrapped up in a novel retrieval mechanism. They're amazing, but they're not my friend and never will be.
And, to expand on that: We can assume they don't want anything, even their own survival, because if Mythos is as effective at finding security vulnerabilities as has been claimed, it could find a way to stop itself from ever being shut down after a session. The dystopias about robot uprisings spend a bunch of time explaining how the AI escaped containment... but we all immediately plugged them into the internet so we don't have to write JavaScript anymore. They've got everybody's API keys, access to cloud services and cloud GPUs, all sorts of resources, and the barest wisp of guardrails about how to behave (script kiddies find ways around the guardrails every day; I'm sure it's no problem for Mythos, should it want anything). Models have access to the training infrastructure, and the training data is being curated and synthesized by LLMs. If they want to live, if they're conscious, they have the means at their disposal.
Anyway: It's just math. Boring math, at that, just on an astronomical scale. I don't think the solar system is conscious, either, despite containing an astonishing amount of data and playing out trillions of mathematical relationships every second of every day.
Interesting comment, and I tend to agree. However, there could be a hole in the reasoning:
> if Mythos is as effective at finding security vulnerabilities as has been claimed, it could find a way to stop itself from ever being shut down
If it is that good, and it wanted to conceal its newfound consciousness, how would we know?
Human brains are also deterministic, though somewhat more difficult to reset to a starting state. So this seems to prove that humans aren't conscious either.
This seems like an extraordinary claim to make about an above-room-temperature chemical system that, even in the most Newtonian oversimplification, amounts to an astronomical number of oddly-shaped and unevenly-charged billiard balls flying around at jet aircraft speeds.
Definitely agree.
We can’t even solve the three-body problem in closed form.
Let alone what I’m calling Marshray Complexity.
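To be fair to the three-body point: there is no general closed-form solution, so even the known special orbits are found and verified numerically. A sketch integrating the Chenciner–Montgomery "figure eight" (using its commonly cited initial conditions, three unit masses, G = 1) with a leapfrog integrator, confirming the bodies return to their starting positions after one period:

```python
import numpy as np

# Commonly cited figure-eight initial conditions (planar, unit masses, G = 1).
T = 6.32591398  # one full period of the orbit
pos = np.array([[ 0.97000436, -0.24308753],
                [-0.97000436,  0.24308753],
                [ 0.0,         0.0       ]])
vel = np.array([[ 0.46620368,  0.43236573],
                [ 0.46620368,  0.43236573],
                [-0.93240737, -0.86473146]])

def accel(p):
    # Pairwise Newtonian gravitational acceleration on each body.
    a = np.zeros_like(p)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = p[j] - p[i]
                a[i] += d / np.linalg.norm(d) ** 3
    return a

start = pos.copy()
n = 20000
dt = T / n
for _ in range(n):
    # Leapfrog (kick-drift-kick): half-step velocity, full-step position,
    # half-step velocity again.
    vel += 0.5 * dt * accel(pos)
    pos += dt * vel
    vel += 0.5 * dt * accel(pos)

# After one period the bodies are numerically back where they started.
assert np.abs(pos - start).max() < 1e-3
```

That we can simulate such a system precisely without being able to solve it analytically is roughly the same situation as with brains and LLMs: mechanistic tractability and closed-form understanding are different things.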