Comment by 8note

13 days ago

I don't think this is a meaningful distinction.

It knows the past tokens because they're part of the input for predicting the next token. That it knows them is part of the model architecture.
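Concretely (a toy sketch only, assuming a hypothetical stand-in `toy_model`, not any real model's API): at every step, the entire sequence of past tokens is the input.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def toy_model(context: list[str]) -> str:
    """Hypothetical stand-in for a trained LLM: takes the full
    sequence of past tokens as input and returns the next token."""
    # A real model would compute a probability distribution over the
    # vocabulary from `context`; here we just pick pseudo-randomly.
    random.seed(len(context))  # deterministic, for illustration only
    return random.choice(VOCAB)

tokens = ["the", "cat"]             # prompt
for _ in range(4):
    next_token = toy_model(tokens)  # conditions on ALL past tokens
    tokens.append(next_token)       # fed back in on the next step

print(" ".join(tokens))
```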

If that isn't knowing, then people don't know how to walk, only how to move limbs, and not even that, just a bunch of neurons firing.

How close are you to saying that a repair manual "knows" how to fix your car? I think the conversation here is really about word choice and anthropomorphization.

  • The problem is that people think word choice influences capabilities: when they redefine "reasoning" or "consciousness" and so on as something only the sacred human soul can do, they aren't actually changing what an LLM is capable of, and the machine will keep generating "I can't believe it's not Reasoning™" and providing novel insights into mathematics all the same.

    Similarly, the repair manual cannot reason about novel circumstances or apply logic to fill in gaps. LLMs quite obviously can, even if you have to reword that sentence slightly.