Comment by seanhunter

2 months ago

Do you think “b l u e b e r r y” is somehow not tokenized? Everything the model operates on is a token. Tokenization explains all the miscounts. It baffles me that people think getting a model to count letters is interesting, but there we are.

Fun fact: if you ask someone with French, Italian or Spanish as a first language to count the letter “e” in an English sentence with a lot of “e”s at the end of small words like “the”, they will often miscount too. The way we process language is strongly shaped by how we learned our first language, and those languages often elide the “e” at the end of words.[1] It doesn’t mean those people are any less smart than people who succeed at this task; it’s simply an artefact of how they learned their first language, which means their brain sometimes literally does not process those letters even when they are looking out for them specifically.

[1] I have personally seen a French maths PhD fail at this task and be unbelievably frustrated at getting something so simple wrong.

One can use https://platform.openai.com/tokenizer to confirm directly that the tokenization of "b l u e b e r r y" is essentially the same as breaking it down into its individual letters. The excuse often given, that the model cannot count letters in words because it cannot see the individual letters, would not apply here.
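
For anyone who would rather check this programmatically than through the web UI, here is a minimal sketch using OpenAI's tiktoken library. It assumes the o200k_base encoding (used by recent GPT-4o-class models); the exact encoding for whichever model you are testing is an assumption here.

    import tiktoken

    # Assumption: o200k_base is the encoding of the model under test;
    # swap in the appropriate encoding name for other models.
    enc = tiktoken.get_encoding("o200k_base")

    text = "b l u e b e r r y"
    token_ids = enc.encode(text)

    # Decode each token id individually to see the text span it covers.
    # With spaces between the letters, the string should split into roughly
    # one token per letter (e.g. "b", " l", " u", ...).
    print([enc.decode([tid]) for tid in token_ids])

If each letter lands in its own token, the model really does "see" every letter individually in the spaced-out version, which is the point being made above.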