Comment by viraptor
12 hours ago
Only in the currently most popular architectures. Mamba- and RWKV-style LLMs may suffer a bit, but they don't get a reduced context in the same sense.
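To illustrate the distinction, here's a minimal sketch (not any specific model's implementation): attention-style decoding keeps a KV cache that grows with every token, so there's a hard limit on usable context, while Mamba/RWKV-style recurrence folds the past into a fixed-size state, so nothing falls off a window edge; the hidden size `d` and decay factor are hypothetical.

```python
import numpy as np

d = 8  # hypothetical hidden size

# Attention-style decoding: the KV cache grows with each token,
# so context length is bounded by how much cache you can afford.
kv_cache = []

def attention_step(x: np.ndarray) -> np.ndarray:
    kv_cache.append(x)               # one entry per token ever seen
    keys = np.stack(kv_cache)        # shape (t, d) -- grows without bound
    weights = np.exp(keys @ x)
    weights /= weights.sum()
    return weights @ keys

# Recurrent (Mamba/RWKV-style) decoding: a fixed-size state is updated
# in place, so memory cost is constant in sequence length; old
# information fades gradually instead of being truncated.
state = np.zeros(d)

def recurrent_step(x: np.ndarray, decay: float = 0.9) -> np.ndarray:
    global state
    state = decay * state + x        # constant-size summary of the past
    return state
```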
You're right. There was also an experiment at Meta that tokenized bytes directly, and in very small models it didn't hurt performance much.
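For context, a minimal sketch of what byte-level tokenization means, not Meta's actual setup: the vocabulary is just the 256 possible byte values, so no learned tokenizer is needed, at the cost of longer sequences.

```python
def byte_tokenize(text: str) -> list[int]:
    # The "vocabulary" is simply the 256 byte values; any UTF-8
    # string maps to token IDs with no learned merge rules.
    return list(text.encode("utf-8"))

def byte_detokenize(ids: list[int]) -> str:
    return bytes(ids).decode("utf-8")

ids = byte_tokenize("hello, world")
assert byte_detokenize(ids) == "hello, world"
# Trade-off: byte sequences are several times longer than the same
# text under a BPE tokenizer, so the model sees fewer "words" per
# unit of compute.
```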