Comment by magicalhippo
6 days ago
Based on their tokenizer tool[1], for GPT 5.x "geschniegelt" is tokenized into three tokens:
(ges)(chn)(iegelt)
It's a single token in the most common usage, i.e. with a space in front of it:
"This word is geschniegelt" is [2500, 2195, 382, 192786]
The last token here is " geschniegelt".
Maybe that's why? Most of the training data would contain the single-token version, so the three-token version was undertrained?
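A toy sketch of the effect (this is not the real GPT tokenizer, just a greedy longest-match segmenter over a made-up vocabulary in which only the space-prefixed " geschniegelt" exists as a single piece):

```python
# Hypothetical vocabulary: " geschniegelt" (with leading space) is one piece,
# but the bare "geschniegelt" can only be built from smaller fragments.
VOCAB = {" geschniegelt", "ges", "chn", "iegelt", "This", " word", " is"}

def tokenize(text, vocab):
    """Greedy longest-match segmentation; a stand-in for real BPE merges."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

print(tokenize(" geschniegelt", VOCAB))  # [' geschniegelt'] - one token
print(tokenize("geschniegelt", VOCAB))   # ['ges', 'chn', 'iegelt'] - three tokens
```

Same word, but dropping the leading space forces a completely different token sequence, so the model sees the fragment version far less often during training.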