Comment by Nevermark
8 days ago
> These are far stronger claims than what you write now
> unless you really mean “like everything else”, in which case your claim is extremely weak
Language responds to changes in context. Books, the printing press, radio, the web, social media, and the mobile web all changed how people used language.
AI is a dramatic new context, with unique properties:
1. It is the first artifact to actively participate in realtime natural language communication, a striking break with all those predecessors.
2. AI language capabilities evolve quickly, and are unlikely to stop soon.
3. As learning during inference becomes prevalent, we will be co-adapting communication with AI in realtime.
4. Model to model communication is in its infancy, but is an entirely new category of language use, by entirely new users.
No preceding change to language context or purpose comes close.
Holding out for studies to determine the level of change is reasonable. But statements of "not even wrong" make no sense. The default is that changes in communication context and purpose drive changes in language.
Language has never been static or unresponsive to new contexts.
My not even wrong argument was contingent on the weakest interpretation of your argument, where AI would change the language exactly like anything else in human society changes language.
> Changes in how we use language with AI will change even faster when AI starts learning continuously during inference.
This is a stronger claim, and shows that the weakest interpretation of your argument does not apply. I take "not even wrong" back, as this is a testable hypothesis which offers a solid prediction. I can in fact be wrong.
That said, I am skeptical of your claims for the reason stated above. People don’t interact with LLMs nearly as much as they do with their dogs, and I am not aware of any research showing that people who interact with a lot of dogs simplify their language in human-to-human communication. To the contrary, there is ample research that humans are in fact quite good at context switching. You can speak extremely poorly in a second language you are currently learning, and then in the next sentence speak fluently without hesitation in your native language.
I suggest that dogs are not a good comparison.
Interaction with language models involves significant use of language and thought. It is not repetitive. And many users (myself included) continually find new ways to use them.
Others may take their time adopting language models, or be slower to branch out into many kinds of use, but young people in particular will be very fast adopters and adapters. That will be the place to watch.
"Even faster" with respect to inference learning, wasn't an attempt to undersell changes happening now. Teachers are experiencing a lot of new issues with how students respond to the availability of models today. One being the potential for students to put less effort into their own communications. If that continues, it won't just be a "dumbing" of literacy, it will have its own impact on vocabulary and grammar.
But looking forward is unavoidable. Models are not going to stay still long enough to say what stage impacted what changes. Model changes are too fast and fluid.
Well, this era is just getting started, so a diversity of expectations makes sense.
I think you might be underestimating human-to-dog interactions. Interacting with dogs requires a whole lot of empathy and thought.
But really this is beside the point. I didn’t provide dog interactions as an analogy; rather, I provided it as a counterpoint. We speak differently to dogs than we speak with each other, and have done so for thousands of years. I see no reason why LLMs would have any more profound an effect on our language. We will continue to speak with each other in a normal manner just like before.