
Comment by jvanderbot

4 days ago

The Markov property is about state, yes, but can't you expand the state to accommodate your non-Markov example?

As in, the state you track is no longer just the probability that you have ended up at a given node at time T; it also includes a new vector of the probability that each node has been visited at any time in the past (which can be obtained from the PDF of your location at the previous time step plus a stochastic "diffusion" step).
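To make that concrete, here's a minimal sketch (toy graph, made-up names, not anyone's actual construction): expand the state all the way to (current node, set of nodes visited so far). That expanded chain is Markov again, and the per-node "visited by time T" probabilities fall out as marginals.

```python
from collections import defaultdict

# Adjacency list of a small undirected graph (purely illustrative).
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

def step(dist):
    """One 'diffusion' step over the expanded state space.
    dist maps (node, visited) -> probability; transitions are uniform
    over neighbors and the visited set is updated, so the expanded chain
    is Markov even though 'visited by time T' is not a function of the
    current node alone."""
    new = defaultdict(float)
    for (node, visited), p in dist.items():
        nbrs = graph[node]
        for nxt in nbrs:
            new[(nxt, visited | frozenset([nxt]))] += p / len(nbrs)
    return new

# Start at node 0, having visited only node 0.
dist = {(0, frozenset([0])): 1.0}
T, target = 4, 3
for _ in range(T):
    dist = step(dist)

# Marginalize the expanded state to get P(target visited by time T).
p_visited = sum(p for (node, visited), p in dist.items() if target in visited)
print(f"P(node {target} visited by time {T}) = {p_visited:.4f}")
```

Exponential in the number of nodes, obviously; the per-node probability vector you describe is the cheaper approximation of the same idea.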

So we're randomly walking over a graph, but that graph is not the same as the graph used in MCMC. The MCMC graph is over the state, with random transitions that must model what you want to observe. That separation doesn't violate the statement that "it's just a random walk"; it just severely complicates it, I suppose.

This is very typical in reinforcement learning: you just expand the state to include a few more time periods. It definitely raises some academic eyebrows (since it's not technically memoryless), but hey, if it works, it works.
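For what it's worth, a sketch of what that looks like in practice (HistoryWrapper is a made-up name, not any particular library's API): stack the last k observations and hand the stack to the agent as the state, a la frame stacking in Atari agents.

```python
from collections import deque

class HistoryWrapper:
    """Expand the state to the last k observations so the decision
    process is (approximately) Markov in the stacked state."""
    def __init__(self, k):
        self.k = k
        self.buf = deque(maxlen=k)

    def reset(self, obs):
        self.buf.clear()
        for _ in range(self.k):
            self.buf.append(obs)   # pad the history with the first observation
        return tuple(self.buf)

    def step(self, obs):
        self.buf.append(obs)       # drop the oldest observation, keep the last k
        return tuple(self.buf)

# Usage: state = wrapper.reset(first_obs); then state = wrapper.step(next_obs)
```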