
Comment by sarducci

6 hours ago

> to me this suggests that language strongly influences behavior

My interpretation is that it's the other way around. The training objective is to find the network weights that best compress the data in the training set. What this means is that, say, professional work-speak samples and hacker l33t-speak samples are different enough that they end up being predicted by different sparse sub-networks; the optimizer apparently couldn't find a smaller solution in which the same sub-network weights predict both.
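The compression framing can be made concrete with a toy sketch. Here a Laplace-smoothed unigram model stands in for the network, and average negative log2-probability is the compression cost in bits per token; the two corpora and all names are made up for illustration:

```python
import math
from collections import Counter

def unigram_model(tokens, vocab, alpha=1.0):
    # Laplace-smoothed unigram probabilities over a shared vocabulary.
    counts = Counter(tokens)
    total = len(tokens) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def bits_per_token(tokens, model):
    # Cross-entropy of the data under the model = average code length.
    return -sum(math.log2(model[w]) for w in tokens) / len(tokens)

pro = "per our discussion please review the attached report".split()
leet = "pwn3d ur b0x n00b gg pwn3d n00b ur".split()
vocab = set(pro) | set(leet)

pro_model = unigram_model(pro, vocab)
leet_model = unigram_model(leet, vocab)

# Each register compresses better under parameters fit to it than
# under parameters fit to the other register.
assert bits_per_token(pro, pro_model) < bits_per_token(pro, leet_model)
assert bits_per_token(leet, leet_model) < bits_per_token(leet, pro_model)
```

The point of the sketch: since the two registers have different statistics, no single set of unigram parameters codes both optimally, which is the toy analogue of distinct sub-networks specializing per register.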

All LLM behavior is mediated through language by construction. That doesn't mean the same applies to humans.

More specifically, I think certain psychological modes require different levels of articulation, and language is one way to reach them in a bandwidth-limited system.

See also: https://en.wikipedia.org/wiki/Newspeak

  • People are fascinated by controlling the vocabulary for political purposes but I think it mostly doesn't work. "Illegal Alien" is the exception that proves the rule.

    Usually it results in an "equal and opposite backlash". Once they started calling children "Special" in school, "Special" became the ultimate insult.

    • It is a wordcel problem, i.e. the belief that language is all there is for modeling reality, even though this is obviously false and has been clearly disproven by decades of research in psychology, cognitive science, and neuroscience. At best we can say that sometimes language has a strong influence on our perceptions of reality.

      EDIT: For a neuroscience reference that also argues why the general perspective is obviously false: https://pmc.ncbi.nlm.nih.gov/articles/PMC4874898/. But really, these things ought to be obvious from introspection.


Language constrains your perception of reality to only the set of concepts conceivable within that language.

Agents who only speak Rust have no conception of what runtime errors are, for instance. Fascists won't understand concepts like "universal human rights", since in their worldview there is nothing universal about humanity as a whole.

  • > Language constrains your perception of reality to only the set of concepts conceivable within that language.

It's the opposite. People make up new concepts all the time for which they have no words, and only then give them names. Language is composable; words and names are just a means to improve communication, to make it faster and more efficient.

    > Agents who only speak Rust have no conception of what runtime errors are, for instance.

Agents don't really learn. They have a fixed training set, and everything new has to be packed into the prompt. This is unrelated to language.

This is IMO largely false: empirically, strong Sapir-Whorf-style linguistic relativism, and the claim that language == thought, are widely considered disproven [1-3].

This is also sort of a wordcel take, in that it neglects that there are plenty of mental structures that are not solely linguistic: e.g. visuo-spatial, auditory, kinaesthetic, proprioceptive, emotional, gustatory, perhaps even intuitive models, and symbolic models (which have both linguistic and visuo-spatial aspects). Yes, your models constrain your perception of reality, but it is not clear how important language really is to many of those models (and there is strong evidence it may not matter at all for a lot of cognition [3]).

    [1] https://en.wikipedia.org/wiki/Linguistic_relativity

    [2] https://plato.stanford.edu/archives/sum2015/entries/relativi...

    [3] https://pmc.ncbi.nlm.nih.gov/articles/PMC4874898/

Disproven by whom, and in which context?

      > evidence from neuroimaging and neurological patients

      Has "neuroimaging" successfully modelled those "universal human rights" the OP was mentioning? If yes, how did it look?

More generally, positing that all languages are, in the end, interchangeable (because that's what the opponents of anything resembling Sapir-Whorf are saying) is very reactionary and limited in itself. And it's telling that me calling those anti-Sapir-Whorf people "reactionaries" will surely tickle something in them that wouldn't have happened had I used a different "neuroimaged" concept which supposedly means the same thing to them (but doesn't).


I'd argue that people can put words together to make new meanings, or coin new words when they have to. The real magic of language is not that we have words for everything, but that we have grammar.