Comment by antonvs

1 year ago

> If you believe this, you don't understand how LLMs work.

Nor do they understand how intelligence works.

Humans don't read text a letter at a time. We're capable of deconstructing words into individual letters, but based on the evidence, that's essentially a separate "algorithm".

Multi-model systems could certainly be designed to do that, but, just as in the human brain, it's unlikely to ever make sense for a text comprehension and generation model to work at the level of individual letters.
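
A tokenizer makes the point concrete. Here's a minimal sketch using the tiktoken library and its cl100k_base encoding (one arbitrary choice of tokenizer, not something specific to any particular model): the model only ever receives subword IDs, and the letter-level view exists only if you run a separate decomposition step on the side.

```python
# Minimal sketch, assuming the tiktoken package is installed.
# Shows that a BPE-tokenized model never "sees" individual letters:
# "strawberry" arrives as a handful of subword IDs, not as characters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)                     # integers the model actually receives
pieces = [enc.decode([tid]) for tid in token_ids]  # the subword chunks those IDs map to

print(token_ids)
print(pieces)
print(list(word))  # the letter-level view, produced by a separate step the model doesn't do
```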