Comment by vrighter

4 days ago

"To be Turing complete a system including an LLM need to be able to simulate a 2-state 3-symbol Turing machine (or the inverse)."

And infinite memory. You forgot the infinite memory. And LLMs are extremely inefficient with memory. I'm not talking about the memory needed on the GPU to store the weights, but about an LLM's ability to remember whatever it's working on at the moment.

What could be stored as a couple of bits in a temporary variable is instead output as something like "Step 3: In the previous step we frobbed the junxer and got junx, and if you do junx + flibbity you get floopity."
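To make the contrast concrete, here's a rough sketch (pure illustration; parity is an arbitrary stand-in, nothing from the thread): a conventional program keeps its loop state in one variable, while an LLM has to re-serialize that state into prose at every step.

    # Trivial iterative algorithm: the entire working state is one small
    # integer, literally "a couple of bits in a temporary variable".
    def parity(bits):
        acc = 0                  # the whole loop state lives here
        for b in bits:
            acc ^= b             # constant memory, however long the input
        return acc

    print(parity([1, 0, 1, 1]))  # -> 1

    # The LLM equivalent of 'acc' survives only as emitted tokens, so each
    # step re-narrates it: "Step 3: in the previous step we got junx..."
    # That's dozens of tokens to carry what 'acc' holds in a single bit.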

And remember that all of this takes up tokens. Whether or not the provider lets you see those tokens, you're still billed for them. Without emitting them, an LLM can't execute an algorithm that requires iteration in the general case. For a more rigorous example, see Apple's paper, where an LLM failed to solve the Tower of Hanoi problem even with the exact algorithm to do so in its context (apart from small instances, whose solutions appear countless times in the training data).
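For reference, the algorithm itself is tiny. This is the standard recursive Tower of Hanoi solution, my sketch rather than whatever exact formulation the paper put in context:

    # Standard recursive Tower of Hanoi: move n disks from src to dst.
    # A full solution takes 2**n - 1 moves, and executing it step by step
    # means carrying the call stack along, which an LLM can only do by
    # spelling that state out in tokens.
    def hanoi(n, src, dst, aux, moves):
        if n == 0:
            return
        hanoi(n - 1, src, aux, dst, moves)   # clear the way
        moves.append((src, dst))             # move disk n
        hanoi(n - 1, aux, dst, src, moves)   # put the rest back on top

    moves = []
    hanoi(3, "A", "C", "B", moves)
    print(len(moves), moves)   # 7 moves; 2**3 - 1

A few lines of code, but the number of moves, and hence the tokens needed to narrate them, grows exponentially in n, which is where the step-by-step execution falls apart.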