Comment by gf000
5 days ago
Well, unless you believe in some spiritual, non-physical aspect of consciousness, we could probably agree that human intelligence is Turing-complete (with a slightly sloppy use of terms).
So any other Turing-complete model can emulate it, including a computer. We can even randomly generate Turing machines, as they are just data. Now imagine we are extremely lucky and happen to end up with a super-intelligent program which, through the mediums it can communicate over (it could be simply text-based, but 2D video with audio is no different from my perspective), can't be differentiated from a human being.
Would you consider it sentient?
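To make the "they are just data" point concrete, here is a minimal sketch. The four-state encoding, binary alphabet, and bounded run are arbitrary choices of mine, and the toy has no halt state, so it is only a cartoon of the real construction:

    import random

    # A Turing machine is just a transition table:
    # (state, symbol) -> (next_state, symbol_to_write, head_move).
    # Toy encoding for illustration; no halt state, so we bound the steps.
    STATES = range(4)
    SYMBOLS = (0, 1)
    MOVES = (-1, 1)

    def random_machine():
        """Sample a random transition table - literally random data."""
        return {(s, sym): (random.choice(STATES),
                           random.choice(SYMBOLS),
                           random.choice(MOVES))
                for s in STATES for sym in SYMBOLS}

    def run(machine, tape, steps=100):
        """Run for a bounded number of steps (halting is undecidable)."""
        tape = dict(enumerate(tape))
        state, head = 0, 0
        for _ in range(steps):
            state, tape[head], move = machine[(state, tape.get(head, 0))]
            head += move
        return tape

    print(run(random_machine(), [1, 0, 1]))

The sketch only shows that the object being sampled is ordinary data; whether such a sample could ever be the super-intelligent program of the thought experiment is exactly what's under debate.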
Now replace the random generation with, say, a backpropagation algorithm. If it's sufficiently large, don't you think it's indistinguishable from the former case - that is, novel qualities could emerge?
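To be clear about what that replacement means structurally, here is a toy contrast between the two search strategies. The quadratic "loss" is a stand-in I picked; real backpropagation differentiates through a network, but the shape is the same:

    import random

    # Blind random generation vs. gradient-based improvement.
    # Both are just procedures for selecting parameters, i.e. data.
    def loss(w):
        return (w - 3.0) ** 2  # minimised at w = 3

    # Strategy 1: random generation - sample candidates blindly.
    best = min((random.uniform(-10, 10) for _ in range(1000)), key=loss)

    # Strategy 2: gradient descent - follow d(loss)/dw = 2*(w - 3).
    w = random.uniform(-10, 10)
    for _ in range(100):
        w -= 0.1 * (2 * (w - 3.0))

    print(f"random search: {best:.3f}  gradient descent: {w:.3f}")

The only difference between the two is how the search is directed; the output of either is nothing but parameters.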
With that said, I don't think that current LLMs are anywhere close to this category, but I just don't think your reasoning is sound.
> we could probably agree that human intelligence is Turing-complete (with a slightly sloppy use of terms).

> So any other Turing-complete model can emulate it
You're going off the rails IMMEDIATELY in your logic.
Sure, one Turing-complete computer language can have its logic "emulated" by another, fine. But human intelligence is not a computer language -- you're mixing up the terms "Turing complete" and "Turing test".
It's like mixing up the terms "Strawberry jam" and "traffic jam" and then going on to talk about how cars taste on toast. It's nonsensical.
Game of Life, PowerPoint, and a bunch of non-PL stuff are all Turing-complete. I'm not mixing up terms - I did use slightly sloppy terminology, but it's the correct concept. My point is that we don't know of a computational model that can't be expressed by a Turing machine; humans are a physical "machine", ergo we must also fall into that category.
Give my comment another read; it was quite understandable from context. (Also, you may want to read Turing's paper, because being executable by a person was an important concept within it.)
Again, you're going wildly off the rails in your logic. Sure, "executable by a human" is part of the definition for Turing machines, but that's only talking about Turing-specific capabilities. If you want to argue that a Turing machine can emulate the specific definition of Turing machine capabilities that humans can perform, that's fine. But you're saying that because humans can ACT LIKE Turing machines, they must BE Turing machines, and are therefore emulatable.
This is the equivalent of saying "I have set up a complex mechanical computer powered by water that is Turing complete. Since any Turing complete system can emulate another one, it means that any other Turing complete system can also make things wet and irrigate farms."
Human intelligence is not understood. It can be made to do Turing complete things, but you can't invert that and say that because you've read the paper on Turing completeness, you now understand human intelligence.
But humans can do things Turing machines cannot. Such as eating a sandwich.
1 reply →
We used to say "if you put a million monkeys on typewriters you would eventually get Shakespeare", and no one says that anymore, because now we can literally write Shakespeare with an LLM.
And the monkey strategy has been 100% dismissed as shit.
We know how to deploy monkeys on typewriters, but we don't know what they'll type.
We know how to deploy transformers to train and inference a model, but we don't know what they'll type.
We DON'T know how a thinking human (or animal) brain works.
Do you see the difference?
The monkeys on typewriters saying is just a colorful way of saying that an infinite random sequence will contain all finite sequences somewhere within it. Which is true. But I don't see what infinite random sequences have to do with LLMs or human thinking.
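The fact itself is easy to demonstrate for short targets. In this sketch the target string and the alphabet are arbitrary choices of mine:

    import random
    import string

    # A random character stream contains any fixed finite string with
    # probability 1; expected waiting time grows exponentially with length.
    def monkey_keystrokes(target, alphabet=string.ascii_lowercase + " "):
        """Type random characters until `target` appears; count keystrokes."""
        window, count = "", 0
        while target not in window:
            window = (window + random.choice(alphabet))[-len(target):]
            count += 1
        return count

    print(monkey_keystrokes("cat"))  # roughly 27^3 keystrokes on average

The exponential blow-up is the point: the saying is a statement about probability, not about any mechanism of writing.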
> Do you see the difference
No? I'm not sure what you're getting at.
To be fair, we also trained the LLM on (among other things) Shakespeare, and adjusted the weights so that generating Shakespeare would be more likely after that training.
We don't claim a JPEG can paint great art, even though certain jpegs do.
So, more proof it's not thinking, right? It can only regurgitate a large if/else superstructure with some jumping around.
1 reply →
I was going to use this analogy in the exact opposite way. We do have a very good understanding of how the human brain works. Saying we don't understand how the brain works is like saying we don't understand how the weather works.
"If you put a million monkeys on typewriters you would eventually get Shakespeare" is exactly why LLMs will succeed and why humans have succeeded. If this weren't the case, why didn't humans 30,000 years ago create spacecraft, given we were endowed with the same natural "gift"?
Yeah no, show me one scientific paper that says we know how the brain works. And not one about a single neuron, because that does absolute shit towards understanding thinking.
1 reply →
> Would you consider it sentient?
Absolutely.
If you simulated a human brain by the atom, would you think the resulting construct would NOT be? What would be missing?
I think consciousness is simply an emergent property of our nervous system, but in order to express itself "language" is obviously needed and thus requires lots of complexity (more than what we typically see in animals or computer systems until recently).
> If you simulated a human brain by the atom,
That is what we don't know is possible. We don't even know what physics or particles are as yet undiscovered. And from what we do know currently, atoms are too coarse to form the basis of such "cloning".
And, my viewpoint is that, even if this were possible, just because you simulated a brain atom by atom, does not mean you have a consciousness. If it is the arrangement of matter that gives rise to consciousness, then would that new consciousness be the same person or not?
If you have a basis for answering that question, let's hear it.
> You don't even know what physics or particles are as yet undiscovered
You would not need the simulation to be perfect; there is ample evidence that our brains are quite robust against disturbances.
> just because you simulated a brain atom by atom, does not mean you have a consciousness.
If you don't want that to be true, you need some kind of magic that makes the simulation behave differently from reality.
How would a simulation of your brain react to a question that you would answer "consciously"? If it gives the same responses to the same inputs, how could you argue it isn't conscious?
> If it is the arrangement of matter that gives rise to consciousness, then would that new consciousness be the same person or not?
The simulated consciousness would be a different one from the original; both could exist at the same time and would be expected to diverge. But their reactions/internal state/thoughts could be matched at least for an instant, and be very similar for potentially much longer.
I think this is just Occam's razor applied to our minds: there is no evidence whatsoever that our thinking is linked to anything outside of our brains, or outside the realm of physics.
10 replies →
Well, if you were to magically make an exact replica of a person, wouldn't it be conscious and at time 0 be the same person?
But later on, he would get different experiences and become a different person no longer identical to the first.
In extension, I would argue that magically "translating" a person to another medium (e.g. a chip) would still make for the same person, initially.
Though the word "magic" does a lot of work here.
9 replies →
At some point, quantum effects will need to be accounted for. The no cloning theorem will make it hard to replicate the quantum state of the brain.
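For reference, the standard linearity argument behind no-cloning, sketched in LaTeX notation (textbook material, nothing brain-specific):

    % Assume a unitary U that clones arbitrary states:
    U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle
      \quad \text{for all } |\psi\rangle.
    % With |+> = (|0> + |1>)/sqrt(2), linearity forces
    U(|{+}\rangle \otimes |0\rangle)
      = \tfrac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big)
    % but cloning would require
    \neq |{+}\rangle \otimes |{+}\rangle
      = \tfrac{1}{2}\big(|00\rangle + |01\rangle + |10\rangle + |11\rangle\big).

The two states differ, so no unitary can copy arbitrary unknown states.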
There are many aspects to this that people like yourself miss, but I think we need satisfactory answers to them (or at least rigorous explorations of them) before we can make headway in these sorts of discussion.
Imagine we assume that A.I. could be conscious. What would be the identity/scope of that consciousness? To understand what I'm driving at, let's make an analogy to humans. Our consciousness is scoped to our bodies. We see through sense organs, and our brain, which processes these signals, is located at a specific point in space. But we still do not know how consciousness arises in the brain and is bound to the body.
If you equate computation of sufficient complexity to consciousness, then the question arises: what exactly about computation would produce consciousness? If we perform the same computation on a different substrate, would that then be the same consciousness, or a copy of the original? If it would not be the same consciousness, then just what gives consciousness its identity?
I believe you would find it ridiculous to say that just because we are performing the computation on this chip, therefore the identity of the resulting consciousness is scoped to this chip.
> Imagine we assume that A.I. could be conscious. What would be the identity/scope of that consciousness
Well, first I would ask whether this question makes sense in the first place. Does consciousness have a scope? Does consciousness even exist? Or is that more of a name attributed to some pattern we recognize in our own way of thinking (but may not be universal)?
Also, would a person missing an arm, but having a robot arm they can control, have their consciousness' "scope" extended to it? Given that people have phantom pains, is a physical body even needed for something to be considered part of you?
This all sounds very irrelevant. Consciousness is clearly tied to specific parts of a substrate. My consciousness doesn't change when a hair falls off my head, nor when I cut my fingernails. But it does change in some way if you were to cut the tip of my finger, or if I take a hormone pill.
Similarly, if we can compute consciousness on a chip, then the chip obviously contains that consciousness. You can experimentally determine to what extent this is true: for example, you can experimentally check if increasing the clock frequency of said chip alters the consciousness that it is computing. Or if changing the thermal paste that attaches it to its cooler does so. I don't know what the results of these experiments would be, but they would be quite clearly determined.
Of course, there would certainly be some scale, and at some point it becomes semantics. The same is true with human consciousness: some aspects of the body are more tightly coupled to consciousness than others; if you cut my hand, my consciousness will change more than if you cut a small piece of my bowel, but less than if you cut out a large piece of my brain. At what point do you draw the line and say "consciousness exists in the brain but not the hands"? It's all arbitrary to some extent.

Even worse, say I use a journal where I write down some of my most cherished thoughts, and say that I am quite forgetful and I often go through this journal to remind myself of various thoughts before taking a decision. Would it not then be fair to say that the journal itself contains a part of my consciousness? After all, if someone were to tamper with it in subtle enough ways, they would certainly be able to influence my thought process, more so than even cutting off one of my hands, wouldn't they?
You make some interesting points, but:
> Similarly, if we can compute consciousness on a chip, then the chip obviously contains that consciousness.
This is like claiming that neurons are conscious, which as far as we can tell, they are not. For all you know, it is the algorithm that could be conscious. Or some interplay between the algorithm and the substrate, OR something else.
Another way to think of the problem: imagine a massive cluster performing computation that is thought to give rise to consciousness. Is it the cluster that is conscious? Or the individual machines, or the chips, or the algorithm, or something else?
I personally don't think any of these can be conscious, but those who do should explain how they figure these things out.
2 replies →