Comment by foobarqux

2 years ago

Yes, in the same way that a lookup table can compute anything if you make it large enough (and where you need to know the domain size beforehand).
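To make that concrete, here is a minimal Python sketch (the function f and the bound N are illustrative): a table reproduces any function, but only over a domain whose size is fixed before any input arrives.

    # A lookup table "computes" any function over a bounded domain,
    # but only because the domain size N is fixed in advance.
    N = 2**16  # must be chosen up front; inputs outside range(N) are not handled

    def f(x: int) -> int:
        # stand-in for an arbitrary computation we want to tabulate
        return (x * x + 7) % 1001

    TABLE = [f(x) for x in range(N)]  # precompute every possible answer

    def lookup(x: int) -> int:
        return TABLE[x]  # no algorithm at run time, just retrieval

    assert lookup(12345) == f(12345)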

"Transformers are not Turing complete" https://www.lifeiscomputation.com/transformers-are-not-turin...

See, for example, this survey of work on the limits of the computational expressivity of transformers: https://www.semanticscholar.org/paper/Transformers-as-Recogn...

No, Transformers with memory are Turing complete. Like I said, the modifications are trivial.

https://arxiv.org/abs/2301.04589
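For intuition, the construction in that paper amounts to an outer loop like the sketch below: a frozen model, prompted to act as the transition function of a universal Turing machine, paired with an external read/write memory. Here llm_step is a hypothetical stand-in for such a prompted model; the loop itself does nothing but copy strings to and from memory.

    # Sketch of a memory-augmented LM as a Turing machine (after arXiv:2301.04589).
    # llm_step is hypothetical: a frozen model prompted so that, given the current
    # state and the symbol under the head, it emits (new state, symbol, head move).
    # All computation happens inside the forward pass; the loop only does I/O.

    from collections import defaultdict

    def run(llm_step, prompt, input_tape):
        tape = defaultdict(lambda: "_", enumerate(input_tape))  # unbounded memory
        head, state = 0, "q0"
        while state != "halt":
            # one forward pass per Turing-machine step
            state, write, move = llm_step(prompt, state, tape[head])
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

The unbounded external tape is what lifts the otherwise finite machine to Turing completeness; the transformer's weights never change.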

  • First, the discussion seemed to be about systems as implemented today (presumably you now concede that those actually do have practical computational limitations?), not systems that could theoretically be implemented. (Moreover, a model with added memory is no longer the simple "forward pass" system that you were arguing had no significant computational limitations.)

    Second, and more importantly, the fact that through clever human manipulation you can express a Turing machine using an LLM does not mean that such a machine is learned through gradient descent training.

    There is no basis to the claim that today's systems have converged on model weights that implement "higher-order" computation.

    • >First, the discussion seemed to be about systems as implemented today (presumably you now concede that those actually do have practical computational limitations?)

      You are not Turing complete. We do like to pat ourselves on the back and say, "Humans? Of course, they're Turing complete," but you're not. You do not have infinite memory, either theoretically (if all else fails, you'll die) or practically (boredom, lack of concentration, lack of interest, and a memory that is very prone to making things up or discarding vital details at a disturbing frequency).

      In fact, you are a very poor excuse for a Turing machine. Simulate any computation? You wish you could. Your brain is a finite state machine through and through.

      So why do we continue to deceive ourselves and pat ourselves on the back?

      Because, well, besides the dash of human exceptionalism and narcissism we're so well known for, limited != trivial.

      You see, I don't really care how precisely Transformers meet some imaginary goal humans don't meet themselves.

      It's not important how Turing complete transformers are, only that they could potentially learn any class of computations necessary via training.

      >Moreover, a model with added memory is no longer the simple "forward pass" system that you were arguing had no significant computational limitations

      Memory or not, all the computation is still performed in the forward pass alone; the external memory only stores and retrieves results between passes.

      >Second, and more importantly, the fact that through clever human manipulation you can express a Turing machine using an LLM does not mean that such a machine is learned through gradient descent training.

      It also doesn't mean it couldn't be.

      >There is no basis to the claim that today's systems have converged on model weights that implement "higher-order" computation.

      Results are basis enough. If I say, "you're intelligent," it's because you appear to be so. It's an assumption, not a truth I've verified after peeking into your brain. All the properties I might ascribe to humans ('intelligence', 'consciousness') are assumptions I make based on the results I see and nothing else. I have no proof you are performing any 'higher order computation' either.
