Comment by loire280
19 days ago
They don't claim to support Polish, but they do support Russian.
> The model is natively multilingual, achieving strong transcription performance in 13 languages, including English, Chinese, Hindi, Spanish, Arabic, French, Portuguese, Russian, German, Japanese, Korean, Italian, and Dutch. With a 4B parameter footprint, it runs efficiently on edge devices, ensuring privacy and security for sensitive deployments.
I wonder how much having languages with the same roots (e.g. the romance languages in the list above or multiple Slavic languages) affects the parameter count and the training set. Do you need more training data to differentiate between multiple similar languages? How would swapping, for example, Hindi (fairly distinct from the other 12 supported languages) for Ukrainian and Polish (both share some roots with Russian) affect the parameter count?
Nobody ever supports Polish. It's the worst. They'll support like, Swahili, but not Polish.
edit: I stand corrected lol. I'll go with "Gaelic" instead.
Swahili is a subcontinental lingua franca spoken by 200M people and growing quickly. Polish is spoken by a shrinking population in one country where English is understood anyway.
> where English is understood anyway.
It's popular. But not that popular - you couldn't assume a random person over 30 on the street would be able to hold a conversation in English.
200 million people speak Swahili.
39 million people speak Polish, and most of those also speak English or another more common language.
You could say the same about Dutch, to be fair. 90-95% of Dutch speakers also speak English - I bet that's way higher than in Poland.
Just a side note to remember that this is a mini model. It's very small and yet supports 13 languages.
I suppose a European version could be created, but for now it's aimed at worldwide distribution.
I guess I will check Korean. OpenAI's audio mini is not bad, but I always have to have GPT check and fix the transcription.