Comment by Reubend
1 month ago
After playing around with this model a bit, it seems to have a tendency to reply to English questions in Chinese.
As someone who frequently thinks in both English and Chinese, I wonder if this "proves" that the Whorfian hypothesis is correct, or at least that thinking in one language can be more efficient than in another?
Saving others a web search for some random name...
> Linguistic relativity asserts that language influences worldview or cognition. [...] Various colloquialisms refer to linguistic relativism: the Whorf hypothesis; the Sapir–Whorf hypothesis; the Whorf-Sapir hypothesis; and Whorfianism. [...] Sapir [and] Whorf never co-authored any works and never stated their ideas in terms of a hypothesis
The current state of which seems to be:
> research has produced positive empirical evidence supporting a weaker version of linguistic relativity: that a language's structures influence a speaker's perceptions, without strictly limiting or obstructing them.
From https://en.wikipedia.org/wiki/Linguistic_relativity
To be fair, that's a pretty common human behavior in my experience. ;p
It also appears to be intentional:
> [Q:] Do you understand English?
> [A:] 您好!我是由腾讯开发的腾讯元宝(Tencent Yuanbao),当前基于混元大模型(Hunyuan-T1)为您服务。我主要使用中文进行交互,但也具备一定的英文理解能力。您可以用中文或英文随时与我交流,我会尽力为您提供帮助~ 若有特定需求,也可以随时告知我切换更适配的模型哦!
In translation:
> Hello! I am Tencent Yuanbao (腾讯元宝), developed by Tencent, currently serving you via the Hunyuan large model (Hunyuan-T1). I mainly use Chinese to interact, but also have some ability to understand English. You can communicate with me in Chinese or English at any time, and I will do my best to help you~ If you have specific needs, you can also tell me at any time to switch to a more suitable model!
Its system prompt says it should reply in Chinese. I saw it discussing its prompt in the thinking process.
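For anyone unfamiliar with how that works: a system prompt is just an extra message prepended to the conversation before it reaches the model, so a single instruction like "reply in Chinese" colors every turn. A minimal sketch (the prompt text and function name here are hypothetical, not Tencent's actual prompt):

```python
# Hypothetical sketch of how a chat request is assembled in an
# OpenAI-style messages format. The system message is ordinary data
# prepended to the conversation; if it says to reply in Chinese, the
# model sees that instruction on every turn.
def build_chat_request(user_message,
                       system_prompt="请用中文回复。(Please reply in Chinese.)"):
    """Assemble the message list a chat-completion endpoint would receive."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_chat_request("Do you understand English?")
```

That would explain the behavior in the quote above: the user's language doesn't matter much, because the standing instruction outranks it.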
Does anyone know whether most LLMs are trained on a single language or on multiple languages? Just curious.
Yes, multilingual training helps to avoid overfitting.