
Comment by ivraatiems

5 days ago

> Unless we can find indications that humans can exceed the Turing computable - something we as of yet have no indication is even theoretically possible - there is no rational reason to think it can't.

But doesn't this rely on the same thing you suggest we don't have, which is a working and definable definition of consciousness?

I think a lot of the 'well, we can't define consciousness so we don't know what it is so it's worthless to think about' argument - not only from you but from others - is hiding the ball. The heuristic, human consideration of whether something is conscious is an okay approximation so long as we avoid the trap of 'well, it has natural language, so it must be conscious.'

There's a huge challenge in the way LLMs can seem like they are speaking out of intellect and not just pattern predicting, but there's very little meaningful argument that they are actually thinking in any way similar to what you or I do in writing these comments. The fact that we don't have a perfect, rigorous definition, and tend to rely on 'I know it when I see it,' does not mean LLMs do have it, or that it will be trivial to get them there.

All that is to say that when you say:

> I also don't know for sure whether or not you are "possessed of subjective experience" as I can't measure it.

"Knowing for sure" is not required. A reasonable suspicion one way or the other based on experience is a good place to start. I also identified two specific things LLMs don't do - they are not self-motivated or goal-directed without prompting, and there is no evidence they possess a sense of self, even with the challenge of lack of definition that we face.

> But doesn't this rely on the same thing you suggest we don't have, which is a working and definable definition of consciousness?

No, it's like saying we have no indication that humans have psychic powers and can levitate objects with their minds. The commenter is saying no human has ever demonstrated the ability to figure things out that aren't Turing computable and we have no reason to suspect this ability is even theoretically possible (for anything, human or otherwise).

No, it rests on computability, Turing equivalence, and the total absence of both any evidence to suggest we can exceed the Turing computable and any theoretical framework for what doing so would even mean.

Without that, any limitations borne out of what LLMs don't currently do are irrelevant.

  • That doesn't seem right to me. If I understand it right, your logic is:

    1. Human intellect is Turing computable.
    2. LLMs are based on Turing-complete technology.
    3. Therefore, LLMs can eventually equal human intellect.

    But if that is the right chain of assumptions, there are several issues with it. First, whether LLMs are Turing complete is a topic of debate. There are points for[0] and against[1].

    I suspect they probably _are_, but that doesn't mean LLMs are tautologically indistinguishable from human intelligence. Every computer that uses a Turing-complete programming language can theoretically solve any Turing-computable problem. That does not mean they will ever be able to efficiently or effectively do so in real time under real constraints, or that they are doing so now in a reasonable amount of real-world time using extant amounts of real-world computing power.

    The processor I'm using to write this might be able to perform all the computations needed for human intellect, but even if it could, that doesn't mean it can do it quickly enough to compute even a single nanosecond of actual human thought before the heat-death of the universe, or even the end of this century.

    So when you say:

    > Without that, any limitations borne out of what LLMs don't currently do are irrelevant.

    It seems to me exactly the opposite is true. If we want technology that is anything approaching human intelligence, we need to find approaches which will solve for a number of things LLMs don't currently do. The fact that we don't know exactly what those things are yet is not evidence that those things don't exist. Not only do they likely exist, but the more time we spend simply scaling LLMs instead of trying to find them, the farther we are from any sort of genuine general intelligence.

    [0] https://arxiv.org/abs/2411.01992
    [1] https://medium.com/heyjobs-tech/turing-completeness-of-llms-...

      > 1. Human intellect is Turing computable. 2. LLMs are based on Turing-complete technology. 3. Therefore, LLMs can eventually equal human intellect.

      Yes, with an emphasis on can. That does not mean they necessarily will. Though I would consider it unlikely that they won't, the only way of proving that they will would be to do it.

      > But if that is the right chain of assumptions, there's lots of issues with it. First, whether LLMs are Turing complete is a topic of debate. There are points for[0] and against[1].

      It's trivial to prove that a system composed of an LLM with a loop around it is Turing complete. A single inference step on its own cannot be, but with the loop you only need the LLM to be capable of deterministically executing 6 distinct state transitions with temperature set to 0. You can wire up a toy neural network by hand that can do this.

      This is in fact a far more limited claim than what the paper you linked to makes.

      The article you linked to, on the other hand, is discussing whether an LLM can act like a Turing machine without that loop. That is why "state management" matters. State management is irrelevant once you wrap a loop around the model, because you can externalise the state, and you only need 2 states (and 3 symbols, or 3 states and 2 symbols) for the smallest known universal Turing machine.

      The entire article is thus irrelevant to this question. Sure, you will struggle to make an LLM act as a Turing machine without going "off the rails". But that doesn't matter - you only need it to execute one state transition at a time, deterministically producing the right tape operation and next state when given the current symbol and state.

      From that you can build up to arbitrarily complex computation, because for every given Turing machine, you can construct a larger Turing machine that uses additional symbols or states to encode an operation that takes multiple steps for the smaller machine.
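
      To make the "LLM plus a loop" point concrete, here is a rough sketch (Python, with a hand-written lookup table standing in for a hypothetical deterministic model queried at temperature 0 - the table, the names, and the toy machine are all illustrative, not any specific LLM setup). The loop owns the tape, the head position, and the current state; the "model" only has to map (state, current symbol) to (symbol to write, head move, next state):

          # Driver loop that externalises all Turing-machine state.
          # `transition` stands in for a deterministic model call at
          # temperature 0: given (state, symbol) it returns
          # (symbol_to_write, head_move, next_state).
          from collections import defaultdict

          # Toy machine for illustration (not the 2-state/3-symbol UTM):
          # write three 1s moving right, then halt.
          RULES = {
              ("S0", "_"): ("1", "R", "S1"),
              ("S1", "_"): ("1", "R", "S2"),
              ("S2", "_"): ("1", "R", "HALT"),
          }

          def transition(state, symbol):
              # The only thing the "model" ever has to get right.
              return RULES[(state, symbol)]

          def run(start_state="S0", blank="_", max_steps=1000):
              tape = defaultdict(lambda: blank)  # unbounded tape lives outside the model
              head, state = 0, start_state
              for _ in range(max_steps):
                  if state == "HALT":
                      break
                  write, move, state = transition(state, tape[head])
                  tape[head] = write
                  head += 1 if move == "R" else -1
              return "".join(tape[i] for i in sorted(tape)), state

          print(run())  # -> ('111', 'HALT')

      The point is only that the loop carries the unbounded memory and the sequencing; the model's job reduces to reproducing a small transition table, which is exactly the "6 distinct state transitions" claim above.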

      > The processor I'm using to write this might be able to perform all the computations needed for human intellect, but even if it could, that doesn't mean it can do it quickly enough to compute even a single nanosecond of actual human thought before the heat-death of the universe, or even the end of this century.

      Irrelevant, because absent physics that exceeds the Turing computable, the human brain is an existence-proof for the possibility of computing everything the human brain does in a package the size of an average human brain.

      It is very likely that we will need architectural changes to compute any given model efficiently enough, but to suggest it is not possible is an extraordinary claim not supported by anything.

      If you take LLM to mean a very specific architecture, or specific computational methods to execute a model, then you have a point. If so we are talking about very different things.