
Comment by Jensson

2 years ago

> It's not. There's fundamental architectural differences that couldn't be bigger.

LLM architecture is a Markov chain to the core. It isn't a lookup table like old Markov chains, but it is still a Markov chain: next-word prediction based only on the previous words.

Thanks for repeating this.

Seems like most people fail to understand that LLMs (as they are implemented these days) are Markov chains by definition, regardless of how much "better" they are than "Dissociated Press"-style Markov chains based on lookup tables.

> A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which *the probability of each event depends only on the state attained in the previous event*.
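In symbols, with X_n the state after n steps, the Markov property reads:

```latex
P(X_{n+1} = x \mid X_n, X_{n-1}, \ldots, X_1) = P(X_{n+1} = x \mid X_n)
```

For an LLM, the "state" is the entire bounded context window, so the chain is first-order over windows (equivalently, k-th order over individual tokens, which reduces to first-order by enlarging the state space).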

Is the process calculating a probability distribution over the "next token", based on a bounded-size context of "previous tokens"? Yes? Then it is a Markov chain, by definition.
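To make that concrete, here is a minimal sketch of the sampling loop, assuming a toy vocabulary and a stand-in for the forward pass (none of these names come from any real library). The point is structural: the next-token distribution is a function of the bounded window alone.

```python
import random

CONTEXT_SIZE = 4  # toy window; real models use thousands of tokens
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def next_token_distribution(state):
    # Stand-in for a transformer forward pass: a deterministic
    # function of `state` (the recent-token window) and nothing
    # else; no hidden memory survives between calls.
    rng = random.Random(" ".join(state))  # same state -> same weights
    weights = [rng.random() for _ in VOCAB]
    total = sum(weights)
    return dict(zip(VOCAB, (w / total for w in weights)))

def sample(tokens, n_steps, seed=0):
    rng = random.Random(seed)
    for _ in range(n_steps):
        state = tuple(tokens[-CONTEXT_SIZE:])  # the Markov state
        dist = next_token_distribution(state)
        toks, probs = zip(*dist.items())
        tokens.append(rng.choices(toks, weights=probs)[0])
    return tokens

print(sample(["the", "cat"], 8))
```

Swap the stand-in for a real transformer and nothing structural changes: the state space gets astronomically larger and the transition function far better, but the next token still depends only on the bounded window of previous tokens.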

It's like saying that a "human" is not an "animal" because it is so much "better" and more "capable" than other animals. The more you argue along those lines, the more I'll be convinced that you either don't know what the definition of a "human" is, or don't know what the definition of an "animal" is (or both).

  • This is the magical human intelligence/“AI has no soul” argument, just presented in reverse. It totally ignores that human minds are emergent: no single part of your brain, nor its connection to the outside world, is intelligent.

    Despite this, it’s equally obvious that the intelligence IS in the human brain, inside the skull. Shoot the right parts (frequently done by accident, which has supplied plenty of subjects for research) and the intelligence is gone. It IS possible to radically change a human’s behaviour by destroying part of the brain. There is no external soul managing things behind the scenes. Human intelligence and our souls are emergent. They are “software”, not hardware.

    All the criticism you make against AI therefore applies equally to a human mind, yet obviously it shouldn’t. To be more exact: it fails to differentiate between human minds and AI. Behaviours could emerge at any time in AI, even in transformer networks; hell, transformers are famous for their emergent behaviours. Yes, their components are obviously not intelligent. Neither are your components, or mine.

    Yes, the machine has no soul. The problem is: neither do you.