Comment by Al-Khwarizmi

2 years ago

I also loved it when I first found out about Markov chains and had a lot of fun playing with them, but I have a totally different view on the gap with respect to LLMs.

Markov chains were introduced in 1906. From then until a few years ago, advances in "building a better Markov chain" were modest (e.g. smoothing techniques).
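
To make "playing with them" concrete for anyone who hasn't: below is a minimal sketch of a word-level bigram Markov chain text generator, in plain Python. The toy corpus and function names are made up purely for illustration, not taken from any particular library.

    import random
    from collections import defaultdict

    def build_bigram_model(text):
        # Count, for each word, the words observed to follow it.
        words = text.split()
        model = defaultdict(list)
        for current_word, next_word in zip(words, words[1:]):
            model[current_word].append(next_word)
        return model

    def generate(model, start_word, length=20):
        # Walk the chain: repeatedly sample a successor of the current word.
        word = start_word
        output = [word]
        for _ in range(length - 1):
            successors = model.get(word)
            if not successors:  # dead end: no observed successor
                break
            word = random.choice(successors)  # choosing from the list reproduces the counts
            output.append(word)
        return " ".join(output)

    # Made-up toy corpus, just for illustration.
    corpus = "the cat sat on the mat and the dog sat on the rug"
    model = build_bigram_model(corpus)
    print(generate(model, "the"))

Running something like this produces text that is locally plausible but quickly drifts into nonsense, which is exactly the gap the rest of this comment is about.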

Then, in the last 5 years, LLMs arrive, and now you have an "uber Markov chain" that actually generates perfectly syntactically coherent text. You can even ask it things, and if the question is well-posed and makes sense, you will get a true answer at least the majority of the time. They can be a daily tool (for practical purposes, beyond fun), help you solve problems, and write interesting creative stories. A much larger leap in those 5 years than in the previous century!

I see them as what I always dreamed Markov chains would be, but never could be. The gap is huge.