Comment by TwentyPosts
1 year ago
People have been over this. If you believe this, you don't understand how LLMs work.
They fundamentally perceive the world in terms of tokens, not "letters".
> If you believe this, you don't understand how LLMs work.
Nor do they understand how intelligence works.
Humans don't read text a letter at a time either. We're capable of deconstructing words into individual letters, but the evidence suggests that's essentially a separate "algorithm" from ordinary word recognition.
Multi-model systems could certainly be designed to do that, but, just as in the human brain, it's unlikely to ever make sense for a text comprehension and generation model to operate at the level of individual letters.
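To make the token point concrete, here's a minimal sketch of greedy longest-match tokenization over a made-up toy vocabulary (the vocabulary and splits are illustrative assumptions, not any real model's BPE merges):

```python
# Toy greedy longest-match tokenizer. The vocabulary below is invented
# purely for illustration; real models learn theirs from data.
VOCAB = {"straw", "berry", "st", "raw", "ber", "ry",
         "s", "t", "r", "a", "w", "b", "e", "y"}

def tokenize(text: str) -> list[str]:
    """At each position, greedily take the longest vocab entry that matches."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try longest substring first
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            raise ValueError(f"no vocab entry covers position {i}")
    return tokens

print(tokenize("strawberry"))  # ['straw', 'berry']
```

Under this toy vocabulary, "strawberry" arrives as two opaque symbols, so nothing in the model's input directly encodes how many r's the word contains; answering that requires a learned, separate decomposition step.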