Problem solving using Markov chains (2007) [pdf]

3 days ago (math.uchicago.edu)

I assume OP watched this excellent Veritasium video about Markov and his chains [0] and posted this article which was referenced in that video.

[0] https://youtu.be/KZeIEiBrT_w

  • Top comment on the video gave me a chuckle.

    > I wonder if Markov chains could predict how many times Veritasium changes the thumbnail and title of this video.

    • I think I remember he did a video explaining that changing the title and thumbnail gives a significant bump in views, so that a video can become evergreen. CGP Grey mentioned doing this and hitting 1B total views years earlier than he expected.

      1 reply →

  • Honestly a pretty bad video by Veritasium standards.

    Consider the nuclear reaction metaphor. It's clearly not memoryless. Eventually you'll run out of fissile material.

    The diagram for that example is bad as well. Do arrows correspond to state transitions? Or do they correspond to forks in the process where one neutron results in two?

    • > Consider the nuclear reaction metaphor. It's clearly not memoryless. Eventually you'll run out of fissile material.

      I think no real process is memoryless: time passes/machines degrade/human behaviors evolve. It is always an approximation that is found/assumed to hold within the modeled timeframe.

      2 replies →

  • I assumed he read the Illuminatus! Trilogy and wondered where Markoff Chaney's name originated... There might be something wrong with me, now that I think about it. /s

Markov Chains are a gateway drug to more advanced probabilistic graphical models, which are worth exploring. I still remember working through Koller & Friedman cover to cover as one of the best learning experiences I’ve ever had.
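As a taste of the "problem solving" in the article's title: even a tiny absorbing chain answers questions that are fiddly to attack directly, e.g. the expected number of fair-coin flips until you see heads twice in a row. A minimal sketch (state labels and setup are mine, just for illustration):

```python
import numpy as np

# States: 0 = no progress, 1 = just saw H, 2 = saw HH (absorbing).
# Rows of the transition matrix sum to 1 (fair coin).
P = np.array([
    [0.5, 0.5, 0.0],   # from "no progress": tails -> stay, heads -> state 1
    [0.5, 0.0, 0.5],   # from "one head": tails -> back to 0, heads -> done
    [0.0, 0.0, 1.0],   # absorbing state stays put
])

# Fundamental-matrix identity: with Q the transient-to-transient block,
# expected steps to absorption solve (I - Q) t = 1.
Q = P[:2, :2]
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(t[0])  # 6.0 flips expected from a cold start
```

The same (I - Q) machinery works for any finite absorbing chain, which is roughly the trick the linked notes build on.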

  • PGMs are a great topic to explore (and Koller+Friedman is a great book), but, as a word of caution to anyone interested: implementation of any of these more advanced models remains a major challenge. For anyone building production-facing models, even if your problem is a pretty good match for the more interesting PGMs, the engineering requirements alone are a good reason not to go too far down that path.

    The PGM book is also structured very clearly for researchers in PGMs. The book is laid out in three major sections: the models, inference techniques (the bulk of the book), and learning. This means that if you follow the logic of the book, you basically have to work through 1000+ pages of content before you can actually start running even toy versions of these models. If you do need to get into the nitty-gritty of particular inference algorithms, I don't believe there is another textbook with nearly that level of scope and detail.

    Bishop's section on PGMs from Pattern Recognition and Machine Learning is probably a better place to start learning about these more advanced models, and if you become very interested then Koller+Friedman will be an invaluable text.

    It's worth noting that the PGM course taught by Koller was one of the initial, and still very excellent, Coursera courses. I'm not sure if it's still free, but it was a nice way to get a deep dive into the topic in a reasonably short time frame (I do remember those homeworks as brutal though!)[0].

    0. https://www.coursera.org/specializations/probabilistic-graph...
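    To give a sense of why inference fills the bulk of the book: on a toy net, exact inference by enumeration fits in a few lines, and the whole difficulty is that the enumeration grows exponentially in the number of hidden variables. A hypothetical two-node sketch (Rain -> WetGrass, all numbers invented):

```python
# Toy two-node Bayes net: Rain -> WetGrass (probabilities invented).
p_rain = 0.2
p_wet_given_rain = {True: 0.9, False: 0.1}

# Exact inference by enumeration: P(rain | wet) via Bayes' rule,
# i.e. normalize the joint P(rain, wet) over both values of Rain.
joint = {r: (p_rain if r else 1 - p_rain) * p_wet_given_rain[r]
         for r in (True, False)}
posterior = joint[True] / sum(joint.values())
print(round(posterior, 4))  # 0.6923
```

Summing the joint over every assignment is exactly what stops scaling once you have dozens of hidden variables, which is where the book's variational and sampling chapters come in.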

    • Played with Bayesian nets a bit in grad school—Pearl’s causality stuff is still mind-blowing—but I’ve almost never bumped into a PGM in production. A couple of things kept biting us:

      Inference pain. Exact inference is NP-hard, and the usual hacks (loopy BP, variational, MCMC) need a ton of hand-tuning before they run fast enough.

      The data never fits the graph. Real-world tables are messy and full of hidden junk, so you either spend weeks arguing over structure or give up the nice causal story.

      DL stole the mind-share. A transformer is a one-liner with a mature tooling stack; hard to argue with that when deadlines loom.

      That said, they’re not completely dead: reportedly Microsoft’s TrueSkill (the Xbox ranking system, built on Infer.NET) is one, along with a bunch of Google ops/diagnosis pipelines and some IBM Watson healthcare diagnosis tools.

      Anyone here actually shipped a PGM that beat a neural baseline? Would really love to hear your war stories.

      3 replies →

  • Just bought it last week. This book just makes me happy.

    It feels like Bishop's Pattern Recognition but with a clearer tone (and a different field, of course).

I feel like we need a video on Dynamic Markov Compression (DMC). It's a method for building a Markov chain directly from data. It's used in all the highest-compression winners of the Hutter Prize (a competition to compress data the most).
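Real DMC adaptively grows and clones states in a bit-level state machine, but the underlying idea of estimating a Markov chain from data is simple. A toy character-level sketch of that idea (not DMC itself):

```python
from collections import Counter, defaultdict

def fit_markov(text):
    """Estimate first-order transition probabilities from adjacent pairs."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

model = fit_markov("abracadabra")
print(model["a"])  # {'b': 0.5, 'c': 0.25, 'd': 0.25}
```

A compressor then feeds these probabilities to an arithmetic coder: the better the next-symbol predictions, the fewer bits spent.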

    You mean the algorithm used in hook[0]? These are not really top performers anymore. PPM has generally performed better, and nowadays it's LLMs and context mixers that are at the top of text compression[1].

    [0]: https://mattmahoney.net/dc/dce.html#Section_421
    [1]: https://mattmahoney.net/dc/text.html

    • DMC is used in pretty much all of these, just not alone (that alg column is too small to capture this).

      For example, take the first open-source entry, cmix (#2 on that list): unzip the files, open paq8.cpp, and search for DMC. You'll find "Model using DMC (Dynamic Markov Compression)" and the associated code. In these cases DMC is one model mixed in with others, and the best model for the current context is used.

      Hook exclusively uses DMC for outstanding results but the others use DMC as one of the prediction models.

Heh, somebody watched that same Veritasium video about Markov Chains and Monte Carlo methods. Great video; lots of interesting historical stuff in there that I didn't know (like the feud between Markov and Nekrasov).

  • For a while early in my career I worked on Monte Carlo sims in finance, mostly rebuilding an existing archaic Excel monster in Python to make it more flexible for new investment models and to allow for more levers. Since I was already working with them daily, I began applying Monte Carlo models to a lot more problems I was thinking about. It truly is a fun tool to play with, especially when you're in the thick of designing them.

If you're holding a hammer every problem looks like a nail ...

  • Yea, if it were true, then Markov chains on steroids (LLMs) would be super at problem solving, but they can't even reason. And they get worse if you add random facts about cats: https://news.ycombinator.com/item?id=44724238

    • That doesn’t follow at all, but I guess don’t pass up any opportunity to bash LLMs.

    • LLMs aren't Markov chains unless they have a context window of 1.

      >In probability theory and statistics, a Markov chain or Markov process is a stochastic process describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event

      1 reply →

I mean, often, sure. Also problem solving is often a matter of making a spreadsheet full of +, -, *, / operations. Problem solving is often a matter of counting permutations and combinations. It's often a matter of setting up the right system of ordinary differential equations. It's often a matter of setting up the right linear algebra problem.

It's often a matter of asking the right person what technique works. It's often a matter of making a measurement before getting lost in the math. It's often a matter of finding the right paper in the literature.

The Veritasium video brought up an interesting point about how LLMs, if trained too much on their own output, will fall victim to the Markov chain's stationary distribution and just repeat the same thing over and over.

Is this still possible with the latest models being trained on synthetic data? And if it is possible, what would that one phrase be?

  • That original model collapse paper has largely been misunderstood; in practice, this is only true if you're not curating the generated data at all. The original paper even specifies (emphasis mine):

    > We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. [0]

    In practice nobody is "indiscriminately" using model output to fine-tune models, since that doesn't even make sense. Even if you're harvesting web data generated by LLMs, that data has in fact been curated: its acceptance on whatever platform you found it is itself a form of curation.

    There was a very recent paper Supervised Fine Tuning on Curated Data is Reinforcement Learning (and can be improved) [1] whose content is pretty well summarized by the title. So long as the data is curated in some way, you are providing more information to the model and the results should improve somewhat.

    0. https://www.nature.com/articles/s41586-024-07566-y

    1. https://www.arxiv.org/pdf/2507.12856

    edit: updated based on cooksnoot's comment

    • There are multiple papers on model collapse. Being able to avoid model collapse is different from it "being disproven".

      If you just mean its risk has been exaggerated and/or oversimplified, then yeah, you'd have a point.

      1 reply →

I am on the move so I cannot check the video (but I did skim the pdf). Is there any chance to see an example of this technique? Just a toy/trivial example would be great, TIA!

  • For the Monte Carlo Method stuff in particular[1], I get the sense that the most iconic "Hello, World" example is using MC to calculate the value of pi. I can't explain it in detail from memory, but it's approximately something like this:

    Define a square of some known size (1x1 should be fine, I think)

    Inscribe a circle inside the square

    Generate random points inside the square

    Look at how many fall inside the square but not the circle, versus the ones that do fall in the circle.

    From that, using what you know about the area of the square and circle respectively, the ratio of "inside square but not in circle" and "inside circle" points can be used to set up an equation for the value of pi.

    Somebody who's more familiar with this than me can probably fix the details I got wrong, but I think that's the general spirit of it.
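    The steps above are enough to code up. A sketch using the equivalent quarter-circle-in-unit-square setup (the detail the parent was unsure of: it's the fraction of points landing inside the circle that matters, and it approaches pi/4):

```python
import random

def estimate_pi(n, seed=0):
    """Monte Carlo pi: sample the unit square, count quarter-circle hits."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:   # inside the quarter circle of radius 1
            inside += 1
    # fraction inside ~ (area of quarter circle) / (area of square) = pi/4
    return 4.0 * inside / n

print(estimate_pi(100_000))  # roughly 3.14, converging slowly (~1/sqrt(n))
```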

    For Markov Chains in general, the only thing that jumps to mind for me is generating text for old school IRC bots. :-)

    [1]: which is probably not the point of this essay. Sorry for muddying the waters; I have both concepts kinda 'top of mind' right now after watching the Veritasium video.

    • > From that, using what you know about the area of the square and circle respectively, the ratio of "inside square but not in circle" and "inside circle" points can be used to set up an equation for the value of pi.

      Back in like 9th grade, when Wikipedia did not yet exist (but MathWorld and IRC did), I did this in TI-BASIC instead of paying attention in geometry class. It's cool, but it converges hilariously slowly. The in-versus-out test is basically "distance from origin > 1", but with random sampling you end up sampling the same spots a lot.

      I told a college roommate about it and he basically invented a calculus approach, summing pixels in columns or something, as an optimization. You could probably optimize further by finding upper and lower bounds on the "frontier" of the circle, or by iteratively splitting rectangle slices ad infinitum, but that's probably just reinventing the state of the art. And all of this skips the cool random sampling the Monte Carlo algorithm uses.

      1 reply →

    • Sorry, I should have been more specific: I do know about Monte Carlo, and yeah, the circle stuff is a more or less canonical example. But I wanted to know more about the Markov chains, because I only know them as sequence generators, and I have trouble imagining how they could "solve problems" unless your problem is "generate words that sorta sound like a specific language but are mostly gibberish".

    • I’ve always loved this example. I implemented the Monte Carlo pi estimation on a LEGO Mindstorms NXT back in high school. Totally sparked my interest in programming, simulations, etc. Also, the NXT’s drag-and-drop flowchart programming interface was actually a great intro to programming logic. It made it really easy to learn real programming later on.

    • A Pseudorandom Number Sequence Test Program - https://www.fourmilab.ch/random/

          Monte Carlo Value for Pi
      
          Each successive sequence of six bytes is used as 24 bit X and Y co-ordinates within a square. If the distance of the randomly-generated point is less than the radius of a circle inscribed within the square, the six-byte sequence is considered a “hit”. The percentage of hits can be used to calculate the value of Pi. For very large streams (this approximation converges very slowly), the value will approach the correct value of Pi if the sequence is close to random. A 500000 byte file created by radioactive decay yielded:
      
          Monte Carlo value for Pi is 3.143580574 (error 0.06 percent).