Comment by johnisgood

10 days ago

Are you saying that LLMs are Turing complete or did I misunderstand it?

An LLM by itself is inert - it's just a set of weights - so when we talk about an LLM "doing" anything, it is doing so as part of an inference engine. An inference system with a loop is trivially Turing complete if you use the context as an I/O channel, use numerically stable inference code, and set the temperature to 0: at that point, all the model needs to encode is a 6-entry lookup table that operates the "tape" via the context.
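The shape of that argument can be sketched without any model at all. Below, the "model" is replaced by a hardcoded 6-entry transition table (2 states × 3 symbols - the same shape as Wolfram's (2,3) machine, though the entries here are arbitrary and purely illustrative), and a dict stands in for the context acting as the tape. A temperature-0 LLM would merely have to reproduce this deterministic (state, symbol) → (state, write, move) mapping on each pass through the loop:

```python
# Sketch of a Turing-complete inference loop. Assumptions: the "model" is
# an arbitrary 6-entry lookup table (NOT a real universal machine), and
# `tape` stands in for the context used as an I/O channel.

TABLE = {  # (state, symbol) -> (new_state, symbol_to_write, head_move)
    ("A", 0): ("B", 1, +1),
    ("A", 1): ("A", 2, -1),
    ("A", 2): ("A", 1, -1),
    ("B", 0): ("A", 2, -1),
    ("B", 1): ("B", 2, +1),
    ("B", 2): ("A", 0, +1),
}

def run(steps):
    tape = {}            # the "context": unwritten cells read as 0
    head, state = 0, "A"
    for _ in range(steps):
        symbol = tape.get(head, 0)          # read from the context
        state, write, move = TABLE[(state, symbol)]  # the "model" at temp 0
        tape[head] = write                  # write back into the context
        head += move
    return tape, head, state

tape, head, state = run(20)
print(sorted(tape.items()), head, state)
```

Because the table is total over 2 states × 3 symbols, the loop never falls off the transition function; the only thing determinism (temperature 0, numerically stable code) buys you is that the same context always yields the same transition, which is exactly what a Turing machine requires.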