Comment by jayd16

6 days ago

> the ability of LLMs to understand

But it doesn't understand. It's just similarity search and next-most-likely-token prediction. The trick is that this turns out to be useful, or at least pleasing, when tuned well enough.
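
For concreteness, here is a minimal sketch of that "next likely token search" loop over a hypothetical toy bigram table; a real LLM replaces the table with a neural network scoring every possible next token, but the decoding loop has the same shape:

```python
# Minimal sketch of greedy autoregressive decoding over a toy bigram table.
# BIGRAMS is a hypothetical stand-in for a trained model's next-token scores.

BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sat": {}, "ran": {},
}

def greedy_decode(prompt: list[str], max_new_tokens: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        scores = BIGRAMS.get(tokens[-1], {})
        if not scores:  # no known continuation: stop
            break
        tokens.append(max(scores, key=scores.get))  # pick the likeliest next token
    return tokens

print(greedy_decode(["the"]))  # ['the', 'cat', 'sat']
```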

Implementation doesn't matter. Insofar as human understanding can be reflected in a text conversation, its distribution can be approximated by a distribution over next-token predictions. Hence there exist next-token predictors that are indistinguishable from a human over text, and I do not distinguish between identical behaviors.
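
The approximation claim is just the chain rule of probability: any distribution over token sequences factorizes exactly into next-token conditionals, so a model that gets the conditionals right reproduces the whole distribution:

$$p(x_1, \dots, x_n) = \prod_{t=1}^{n} p\bigl(x_t \mid x_1, \dots, x_{t-1}\bigr)$$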