
Comment by friendzis

12 hours ago

This is such a superficial, English-centric take, but it might as well be true. It seems to me that in non-English languages the models, especially ChatGPT, have suffered in the declension department and output words in cases that do not fit the context.

I just ran an experiment: I took a word and asked the models (ChatGPT, Gemini and Claude) to explode it into parts. The caveat is that it could be either root + suffix + ending or root + ending. None of them realized this duality; each took one possible interpretation.
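To make the duality concrete, here is a minimal sketch of that kind of ambiguity. The word and the affix inventories below are made up for illustration (they are not taken from any real language's morphology or from the experiment above); the point is only that both segmentations can be simultaneously valid:

```python
# Hypothetical affix inventories, invented for this example.
SUFFIXES = {"in"}        # derivational suffixes
ENDINGS = {"as", "is"}   # inflectional endings

def segmentations(word):
    """Return every split of `word` into root + ending or root + suffix + ending."""
    results = []
    for end in ENDINGS:
        if not word.endswith(end):
            continue
        stem = word[: -len(end)]
        results.append((stem, end))  # root + ending
        for suf in SUFFIXES:
            if stem.endswith(suf) and len(stem) > len(suf):
                results.append((stem[: -len(suf)], suf, end))  # root + suffix + ending
    return results

# A made-up word that admits both readings:
print(segmentations("karinis"))
# [('karin', 'is'), ('kar', 'in', 'is')]
```

A model that commits to only one of the two tuples has done exactly what the experiment observed: it collapsed a genuinely ambiguous analysis into a single interpretation.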

Any such approach to tokenizing assumes a context-free(-ish) grammar, which is just not the case with natural languages. "I saw her duck" (and other famous examples) is not uniquely tokenizable without broader context, so either the tokenizer has to be a model itself or the model has to collapse the meaning space.

Current models understand different tokenization variants perfectly well, e.g. leading space vs no leading space vs one character per token. It doesn't even affect evals and benchmarks. They're also good at languages with very flexible word formation (e.g. Slavic) and can easily invent pretty natural non-existent words without being restricted by tokenization. This ability took a bit of a hit with recent RL and code-generation optimizations, but that is not related to tokenization.
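For readers unfamiliar with the leading-space point: BPE-style vocabularies typically store " duck" and "duck" as distinct tokens, so the same surface word gets different ids depending on position. A toy greedy longest-match encoder (hypothetical five-entry vocabulary, not any real model's) shows the effect:

```python
# Made-up vocabulary; real BPE vocabs have tens of thousands of entries.
VOCAB = {"duck": 0, " duck": 1, "I": 2, " saw": 3, " her": 4}

def greedy_encode(text, vocab):
    """Greedy longest-match encoding; unknown text falls back to one char per token."""
    ids, i = [], 0
    while i < len(text):
        match = None
        for j in range(len(text), i, -1):  # try the longest piece first
            if text[i:j] in vocab:
                match = text[i:j]
                break
        if match is None:                  # fallback: single character, new id
            match = text[i]
            vocab.setdefault(match, len(vocab))
        ids.append(vocab[match])
        i += len(match)
    return ids

print(greedy_encode("I saw her duck", VOCAB))  # [2, 3, 4, 1]
print(greedy_encode("duck", VOCAB))            # [0]
```

Sentence-internal "duck" comes out as token 1 (" duck"), while sentence-initial "duck" is token 0, yet models map both onto the same word without trouble.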

> None of them realized this duality; each took one possible interpretation.

I suspect this happens due to mode collapse and has nothing to do with tokenization. Try this with a base model.