Comment by TeMPOraL

9 days ago

> I mean what has learning that a supposed stochastic parrot is capable of interacting at the skill levels presently displayed actually taught us about any of the abstract questions?

IMHO a lot. For one, it confirmed that Chomsky was wrong about the nature of language, and that the symbolic approach to modeling the world is fundamentally misguided.

It confirmed the intuition I developed over years of thinking deeply about these problems[0]: that the meaning of words and concepts is not an intrinsic property, but is derived entirely from relationships between concepts. The confirmation lies in the LLM itself: as a computational artifact, it is a reification of meaning, a data structure that maps token sequences to points in a stupidly high-dimensional space, encoding semantics through spatial adjacency.
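
To make "encoding semantics through spatial adjacency" concrete, here's a minimal sketch (the vectors are made up for illustration; a real model's embeddings have thousands of dimensions): related concepts sit close together, and closeness is what the system treats as meaning.

    import numpy as np

    # Toy 4-dimensional "embeddings"; real models learn these from data.
    emb = {
        "king":  np.array([0.9, 0.8, 0.1, 0.0]),
        "queen": np.array([0.9, 0.7, 0.9, 0.0]),
        "cat":   np.array([0.1, 0.1, 0.2, 0.9]),
    }

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(cosine(emb["king"], emb["queen"]))  # ~0.83: semantically close
    print(cosine(emb["king"], emb["cat"]))    # ~0.17: semantically distant

No single number in those vectors "means" anything on its own; only the relative positions do.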

We've known for many years that high-dimensional spaces are weird and surprisingly good at encoding semi-dependent information, but knowing the theory is one thing; seeing an actual implementation casually pass the Turing test and threaten to upend all white-collar work is another.
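
As a quick illustration of why high-dimensional spaces can hold so much (a back-of-the-envelope sketch, independent of any particular model): randomly chosen directions become nearly orthogonal as the dimension grows, which leaves room for enormous numbers of almost-independent features in one space.

    import numpy as np

    rng = np.random.default_rng(0)
    for d in (3, 100, 10_000):
        # Average |cosine similarity| over 1000 pairs of random vectors.
        a = rng.normal(size=(1000, d))
        b = rng.normal(size=(1000, d))
        cos = np.abs(np.sum(a * b, axis=1)
                     / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)))
        print(d, round(float(cos.mean()), 4))
    # Typical |cos| falls off roughly like 1/sqrt(d): in high dimensions,
    # random directions barely interfere with each other.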

--

I realize my perspective - particularly my belief that this informs the study of the human mind in any way - might look to some like it rests on unfounded assumptions or leaps in logic, so let me spell out two insights that make me believe LLMs and human brains share fundamentals:

1) The general optimization function of LLM training is "produce output that makes sense to humans, in the fully general meaning of that statement". We're not training these models to be good at specific skills, but to always respond to any arbitrary input - even beyond natural language - in a way we consider reasonable. I.e. we're effectively brute-forcing a bag of floats into emulating the human mind.

Now that alone doesn't guarantee the outcome will be anything like our minds, but consider the second insight:

2) Evolution is a dumb, greedy optimizer. Complex biology, including animal and human brains, evolved incrementally - and most importantly, every step taken had to provide a net fitness advantage[1], or else it would've been selected out[2]. From this it follows that the basic principles that make a human mind work - including all the intelligence and learning capabilities we have - must be fundamentally simple enough that a dumb, blind, greedy random optimizer can grope its way to them in incremental steps, in a relatively short time span[3] (there's a toy sketch of such an optimizer below).

2.1) Corollary: our brains are basically the dumbest possible solution evolution could find that can host general intelligence. It didn't have time to iterate on the brain design further before human technological civilization took off in the blink of an eye.

So, my thinking basically is: 2) implies that the fundamentals behind human cognition are easily reachable in the space of possible mind designs, so if the process described in 1) is going to lead towards a working general intelligence, there's a good chance it'll stumble on the same architecture evolution did.
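
To make 2) concrete, here's a toy sketch (invented numbers) of a dumb, blind, greedy optimizer working on a "bag of floats" in the spirit of 1): it knows nothing about the problem, it just keeps any random perturbation that doesn't reduce the score, and it still gropes its way toward the target.

    import numpy as np

    rng = np.random.default_rng(42)
    target = rng.normal(size=32)   # stand-in for "behavior that makes sense to humans"
    genome = np.zeros(32)          # the "bag of floats" being evolved

    def fitness(g):
        return -np.linalg.norm(g - target)  # higher is better

    for step in range(20_000):
        mutant = genome + rng.normal(scale=0.05, size=32)  # small blind step
        if fitness(mutant) >= fitness(genome):             # keep only non-regressing steps
            genome = mutant

    print(fitness(genome))  # far closer to 0 (perfect) than where it started

Obviously gradient descent on a loss and natural selection on fitness differ in a thousand details; the point is only that incremental, net-non-negative steps are enough for a dumb process to reach a working solution.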

--

[0] - I imagine there are multiple branches of philosophy, linguistics and cognitive sciences that studied this perspective in detail, but unfortunately I don't know what they are.

[1] - At the point of being taken. Over time, a particular characteristic can become a fitness drag, but persist indefinitely as long as more recent evolutionary steps provide enough advantage that, on net, fitness increases. So it's possible for evolution to accumulate building blocks that may become useful again later, but only if they were also useful initially.

[2] - Also on average, law of large numbers, yadda yadda. It's fortunate that life started with lots of tiny things with very short life spans.

[3] - It took evolution some 3 billion years to get from bacteria to the first multicellular life, an extra 60 million years or so to develop a nervous system and eventually a kind of proto-brain, and then an extra 500 million years iterating on it to arrive at a human brain.

> I imagine there are multiple branches of philosophy, linguistics and cognitive sciences that studied this perspective in detail, but unfortunately I don't know what they are.

You're looking at Structuralism. First articulated by Ferdinand de Saussure in his Course in General Linguistics published in 1916.

This became the foundation for much of subsequent French philosophy, psychology and literary theory, particularly the post-structuralists and postmodernists: Lacan, Foucault, Derrida, Barthes, Deleuze, Baudrillard, etc.

These ideas have permeated popular culture deeply enough that (I suspect) your deep thinking was subconsciously informed by them.

I agree very much with your "Chomsky was wrong" hypothesis and strongly recommend the book "Language Machines" by Leif Weatherby, which is on precisely that topic.

  • What hypothesis of Chomsky are you guys talking about? If it is about the innateness of grammar in humans, then obviously this cannot be shown wrong by LLMs trained on a huge amount of text.

    • Chomsky's claim is that linguistics is actually a branch of cognitive science, that language is, by definition, "what the brain does" and that "meaning" in language is grounded in the brain, by what the speaker as a biological entity intends.

      But this forces one into the position that whatever a LLM is doing is not real language, just an imitation of language.

      If you take the fact that LLMs are emitting "real language" at face value, then you need to adopt a more structuralist view of language, in which "meaning" is part of the system of language itself and does not need to be grounded biologically.

      I don't think holding a structuralist view of language precludes believing that humans have a biological facility for language, or even that language is shaped by and ultimately a result of that biological facility. It's more an argument over what language IS -- a symbolic system, or an extension of the human brain.


Plenty of genes spread that are neutral to net negative for fitness. Sometimes those genes don't kill the germ line, and they persist.

There is no "evolution == better/more fit"; as long as the reproductive cascade goes uninterrupted, genes can evolve any which way and still survive, whether they're neutral or a net negative.

  • Technically correct, but not really. It's a biased random walk. While outliers are possible, betting against the law of large numbers is a losing proposition (quick simulation below). More often it's that we as observers lack the ability to see the system as a whole, and so fail to properly attribute the net outcome.

    It's true that sometimes something can get taken along for the ride by luck of the draw. In which case what's really being selected for is some subgroup of genes as opposed to an individual one. In those cases there's some reason that losing the "detrimental" gene would actually be more detrimental, even if indirectly.
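
    The simulation mentioned above (made-up parameters, just to show the law-of-large-numbers effect): even a small per-generation reproductive edge reliably wins out over drift once the population and the number of generations are large enough.

      import numpy as np

      rng = np.random.default_rng(1)
      pop, p, edge = 10_000, 0.5, 1.02   # population, starting frequency, 2% advantage
      for generation in range(500):
          w = p * edge / (p * edge + (1 - p))   # selection nudges the expected frequency...
          p = rng.binomial(pop, w) / pop        # ...while finite-population sampling adds drift
      print(p)  # ~1.0: the favored variant takes over despite the randomness

    Neutral or mildly deleterious variants can of course still drift to fixation, especially in small populations; the bias just makes that the exception rather than the rule.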

I appreciate the insightful reply. In typical HN style I'd like to nitpick a few things.

> so if process described in 1) is going to lead towards a working general intelligence, there's a good chance it'll stumble on the same architecture evolution did.

I wouldn't be so sure of that. Consider that a biased random walk using agents is highly dependent on the environment (including other agents). Perhaps a way to convey my objection here is to suggest that there can be a great many paths through the gradient landscape and a great many local minima. We certainly see examples of convergent evolution in the natural environment, but distinct solutions to the same problem are also common.

For example, you can't go fiddling with certain low-level foundational stuff, like the nature of DNA itself, once there's a significant structure sitting on top of it. Yet there are very obviously a great many other possibilities in that space. We can synthesize some amino acids with very interesting properties in the lab, but continued evolution of existing lifeforms isn't about to stumble upon them.

> the symbolic approach to modeling the world is fundamentally misguided.

It's likely I'm simply ignorant of your reasoning here, but how did you arrive at this conclusion? Why are you certain that symbolic modeling (of some sort, some subset thereof, etc) isn't what ML models are approximating?

> the meaning of words and concepts is not an intrinsic property, but is derived entirely from relationships between concepts.

Possibly I'm not understanding you here. Supposing that certain meanings were intrinsic properties, would the relationships between those concepts not also carry meaning? Can't intrinsic things also be used as building blocks? And why would we expect an ML model to be incapable of learning both of those things? Why should encoding semantics through spatial adjacency be mutually exclusive with the processing of intrinsic concepts? (Hopefully I'm not betraying some sort of great ignorance here.)

  • >> the symbolic approach to modeling the world is fundamentally misguided.

    > but how did you arrive at this conclusion? Why are you certain that symbolic modeling (of some sort, some subset thereof, etc) isn't what ML models are approximating?

    I'm not the poster, but my answer would be: because symbolic manipulation is way too expensive. Parallelizing it helps, but long dependency chains are inherent to formal logic. And if a long chain is required, it will always be under attack by a cheaper approximation that only gets 90% of the cases right, so such chains are always going to be brittle (rough numbers below).

    (Separately, I think that the evidence against humans using symbolic manipulation in everyday life, and the evidence for error-prone but efficient approximations and sloppy methods, is mounting and already overwhelming. But that's probably a controversial take, and the above argument doesn't depend on it.)
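
    To put rough (made-up) numbers on that brittleness claim: once approximate steps creep into a chain, per-step reliability compounds multiplicatively, so even a high per-step success rate collapses over a long dependency chain.

      # Probability an n-step chain succeeds end-to-end, assuming each step
      # independently succeeds with probability p.
      for p in (0.99, 0.95, 0.90):
          for n in (5, 20, 100):
              print(f"p={p}, n={n}: {p ** n:.2f}")
      # p=0.99, n=100 -> ~0.37; p=0.90, n=20 -> ~0.12: long chains stay viable
      # only if every single step is near-perfect.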

    • How do LLM advancements further such a view? Couldn't you have argued the same thing prior to LLMs? That evolution is a greedy optimizer etc etc therefore humans don't perform symbolic reasoning. But that's merely a hypothesis - there's zero evidence one way or the other - and it doesn't seem to me that the developments surrounding LLMs change that with respect to either LLMs or humans. (Or do they? Have I missed something?)

      Even if we were to obtain evidence clearly demonstrating that LLMs don't reason symbolically, why should we interpret that as an indication of what humans do? Certainly it would be highly suggestive, but "hey we've demonstrated that thing can be done this way" doesn't necessarily mean that thing _is_ being done that way.


  • >> the meaning of words and concepts is not an intrinsic property, but is derived entirely from relationships between concepts.

    > Possibly I'm not understanding you here. Supposing that certain meanings were intrinsic properties, would the relationships between those concepts not also carry meaning? Can't intrinsic things also be used as building blocks? And why would we expect an ML model to be incapable of learning both of those things? Why should encoding semantics through spatial adjacency be mutually exclusive with the processing of intrinsic concepts? (Hopefully I'm not betraying some sort of great ignorance here.)

    I probably shouldn't respond to this part, because I don't really agree with the original assertion. Or rather, I think this ends up boiling down to a disagreement over semantics, and so isn't a particularly interesting question.

    Relationships between concepts cover a lot of what "meaning" is. You can teach a computer to translate from language X to Y purely based on it learning the relationships of words to each other within each language, and then generating a mapping from the weight-graph of X to the weight-graph of Y. (Yeah, citation needed; I remember reading some specific evidence for this, but I don't remember where. There's a rough sketch of the idea at the end of this comment.) So you can get a long way with just relationships.

    At the same time, I don't think that proves that the relationships between concepts are everything. A human getting burned and learning the word "hot" could be described as "hot" having an intrinsic meaning. But you could equally describe it as a relationship between the action taken, the sensation experienced, and the phonemes heard. If all those are "concepts", then the relationships between concepts are everything. If they're not, then you can call something intrinsic. Personally, that strikes me as a pointless philosophical question.

    I guess you could argue that if you have an LLM trained on mostly English but also enough Chinese to be able to translate, and it generates text including the word "hot", then if you compare that to the same LLM generating text including the Chinese word for hot, there's more opportunity for drift in the Chinese output. The first case has the chain of a human feeling pain => writing text containing "hot" => generating text containing "hot", whereas the second has the chain of a human feeling pain => writing text containing "hot" => encoded associations between English and Chinese concepts embedded in weights => writing Chinese text containing Chinese "hot". The English "hot" output is more tightly connected to and more directly derives from the physical sensation of burning. (This is of course assuming majority English training data, and in particular a relative lack of Chinese training data containing the "hot" word/concept.)

    So in a way, you could claim that the question of whether the word "hot" has an intrinsic meaning is relevant and useful. But it seems to me that's just one way of describing the origins of training data; use it if it's useful.
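
    Here's a rough sketch of the mapping idea from earlier in this comment (toy data, and an orthogonal-Procrustes fit that I'm substituting for whatever the half-remembered paper actually did): learn word vectors separately for each language, fit one linear map between the two spaces from a handful of known word pairs, then "translate" by nearest neighbour in the mapped space.

      import numpy as np

      # Toy 2-d word vectors for "language X" (real ones would be learned from text).
      X_words = ["dog", "cat", "house"]
      X = np.array([[1.0, 0.1], [0.9, 0.3], [0.0, 1.0]])

      # "Language Y" vectors: here just a rotated, slightly perturbed copy of X's space,
      # standing in for embeddings learned independently from Y-language text.
      Y_words = ["chien", "chat", "maison"]
      t = 0.7
      R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
      Y = X @ R.T + 0.01

      # Fit an orthogonal map W (Procrustes) from the known pairs: x @ W should land near y.
      U, _, Vt = np.linalg.svd(X.T @ Y)
      W = U @ Vt

      # "Translate" a word: map it into Y-space, then take the nearest Y word.
      mapped = X[X_words.index("dog")] @ W
      print(Y_words[int(np.argmin(np.linalg.norm(Y - mapped, axis=1)))])  # chien

    The relationships alone (relative positions within each language) are what make the alignment possible, which is the sense in which you can get a long way with just relationships.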

> Corollary: our brains are basically the dumbest possible solution evolution could find that can host general intelligence.

I agree. But there's a very strong incentive not to agree; you can't simply erase hundreds of millennia of religion and culture (which set humans in a singular place in the cosmic order) in the few short years after discovering something that approaches (maybe only a tiny bit) general intelligence. Hell, even the century and a half since Darwin has barely made a dent :-(. But yeah, our intelligence is a question of scale and training, not some unreachable miracle.

Didn't read the whole wall of text/slop, but noticed how the first note (referred to from "the intuition I developed over years of thinking deeply about these problems[0]") is nonsensical in context. If this reply is indeed AI-generated, it hilariously refutes itself this way. I would congratulate you on the irony, but I have a feeling it's not intentional.

  • It reads as genuine to me. How can you have an account that old and not be at least passingly familiar with the person you're replying to here?

  • Not a single bit of it is AI generated, but I've noticed for years now that LLMs have a similar writing style to my own. Not sure what to do about it.

    • I'd like to congratulate you on writing a wall of text that gave off all the signals of being written by a conspiracy theorist or crank or someone off their meds, yet also such that when I bothered to read it, I found it to be completely level-headed. Nothing you claimed felt the least bit outrageous to me. I actually only read it because it looked like it was going to be deliciously unhinged ravings.
