Comment by famouswaffles

4 days ago

LLMs are not Markov Chains unless you contort the meaning of a Markov Model State so much you could even include the human brain.

Not sure why that's contorting. A Markov model is anything where you know the probability of going from state A to state B, and the state can be anything. For text generation, the state is the text generated so far and each transition appends one more token, which is true for both LLMs and old-school n-gram Markov models.
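
A minimal sketch of that framing (toy names, not real model code): the state is the token prefix, and the next-token distribution is a pure function of it.

      # Sketch: an autoregressive text model viewed as a Markov chain whose state
      # is the whole token prefix. next_token_probs is a toy stand-in for the
      # model's forward pass; all that matters is that it depends only on `state`.
      import random

      VOCAB = ["the", "cat", "sat", "."]

      def next_token_probs(state: tuple[str, ...]) -> dict[str, float]:
          # Toy uniform distribution; a real LLM would return softmax(logits(state)),
          # but it would still be a pure function of `state`.
          return {tok: 1 / len(VOCAB) for tok in VOCAB}

      def step(state: tuple[str, ...]) -> tuple[str, ...]:
          # One Markov transition: state A -> state A plus one extra token.
          probs = next_token_probs(state)
          (token,) = random.choices(list(probs), weights=list(probs.values()))
          return state + (token,)

      state: tuple[str, ...] = ()
      for _ in range(5):
          state = step(state)
      print(state)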

  • A GPT model would be modelled as an n-gram Markov model where n is the size of the context window. This is slightly useful for getting some crude bounds on the behaviour of GPT models in general, but is not a very efficient way to store a GPT model.

    • I'm not saying it's an n-gram Markov model or that you should store one as a lookup table. A Markov model is just a mathematical concept; it says nothing about storage, only that the state-transition probabilities are a pure function of the current state.

      1 reply →

  • Yes, technically you can frame an LLM as a Markov chain by defining the "state" as the entire sequence of previous tokens. But that's a vacuous observation: under that definition, literally any deterministic or stochastic process becomes a Markov chain if you make the state space flexible enough. A chess game is a "Markov chain" if the state includes the full board position and move history. The weather is a "Markov chain" if the state includes all relevant atmospheric variables.

    The problem is that this definition strips away what makes Markov models useful and interesting as a modeling framework. A “Markov text model” is a low-order Markov model (e.g., n-grams) with a fixed, tractable state and transitions based only on the last k tokens. LLMs aren’t that: they condition on long-range context of varying length (up to the window). For Markov chains, k is non-negotiable: it's a constant, not a variable. Once you make it a variable, nearly any process can be described as Markovian, and the word becomes useless (see the sketch after this sub-thread).

    • Sure, many things can be modelled as Markov chains; that's part of why they're useful. But it's a mathematical model, so there's no bound on how big the state is allowed to be. The only requirement is that the current state alone determines the probabilities of the next state, which is exactly how LLMs work. They don't remember anything beyond the text they've generated so far; they just have big context windows.

      16 replies →
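
A tiny illustration of the fixed-k point in the comment above (names made up for the sketch): a classic order-k Markov text model is forced to give identical next-token distributions to any two histories that agree on their last k tokens, while an LLM conditioning on the whole window is not.

      # Toy illustration: an order-K Markov text model's state keeps only the
      # last K tokens, so histories that agree on those tokens are
      # indistinguishable to it. Everything here is made up for the sketch.
      K = 2

      def order_k_state(history: tuple[str, ...]) -> tuple[str, ...]:
          # The fixed-order model's state: everything before the last K tokens
          # is discarded before computing transition probabilities.
          return history[-K:]

      h1 = ("the", "dog", "chased", "the", "cat")
      h2 = ("a", "bird", "watched", "the", "cat")

      # Same order-2 state, so an n-gram model must treat them identically;
      # an LLM with a long context window need not.
      assert order_k_state(h1) == order_k_state(h2) == ("the", "cat")
      print(order_k_state(h1))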

Well, LLMs aren't human brains, unless you contort the definition of matrix algebra so much you could even include them.

  • QM and GR can be written as matrix algebra, atoms and electrons are QM, chemistry is atoms and electrons, biology is chemistry, brains are biology.

    An LLM could be implemented with a Markov chain, but the naïve transition matrix has ((vocab size)^(context length))^2 entries, which is far too big to fit in this universe.

    Like, by the Bekenstein bound, if you wrote out the transition matrix for an LLM with just 4k context (and a 50k vocabulary) at one-bit resolution, the first row alone (out of a bit more than 10^18795 rows) would end up as a black hole >10^9800 times larger than the observable universe (rough numbers are sketched at the end of this thread).

    • Yes, sure enough, but brains are not ideas, and there is no empirical or theoretical model of ideas in terms of brain states. The idea of unified science all stemming from a single ultimate cause is beautiful, but it is not how science works in practice, nor is it supported by scientific theories today. Case in point: QM models do not explain the behavior of larger objects, and there is no model that tells you how to go from quantum states to the states of massive, macroscopic things.

      The relationship between brain states and ideas is similar to that between QM and massive objects. While certain metaphysical presuppositions might hold that everything must be physical and describable by models for physical things, science, which should eschew metaphysical assumptions, has not shown that to be the case.
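
A rough back-of-the-envelope check of the matrix-size arithmetic above, assuming "4k context" means 4,000 tokens and using the commonly quoted ~10^123-bit holographic capacity of the observable universe; it only sanity-checks the row and bit counts, not the exact black-hole figure.

      # Order-of-magnitude arithmetic for the naive transition matrix above.
      # Assumptions: 50,000-token vocabulary, 4,000-token context window.
      import math

      vocab = 50_000
      context = 4_000

      # Each state is one possible context-window contents, so:
      log10_states = context * math.log10(vocab)   # ~18795.9 -> "a bit more than 10^18795" rows
      log10_entries = 2 * log10_states             # full matrix: ~10^37592 entries
      log10_row_bits = log10_states                # one bit per entry, single row

      # Commonly quoted holographic/Bekenstein-style capacity of the
      # observable universe is on the order of 10^122 to 10^123 bits.
      log10_universe_bits = 123

      print(f"rows:              ~10^{log10_states:.1f}")
      print(f"matrix entries:    ~10^{log10_entries:.1f}")
      print(f"bits in one row:   ~10^{log10_row_bits:.1f}")
      print(f"universe capacity: ~10^{log10_universe_bits} bits")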