
Comment by graycat

5 days ago

Stochastic process with the Markov property: Past and future are conditionally independent given the present. The general version of conditional independence comes from probability theory based on measure theory and the Radon-Nikodym theorem (with von Neumann's novel proof in Rudin, Real and Complex Analysis), but an easier introduction is Erhan Çınlar, Introduction to Stochastic Processes.

In a Poisson process the time until the next event has the exponential distribution and, thus, from a simple calculus manipulation, is memoryless, which is what makes the process Markov.
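Spelled out, writing T for that exponential waiting time with rate \lambda, the manipulation is just:

    P(T > s + t \mid T > s) = \frac{P(T > s + t)}{P(T > s)}
                            = \frac{e^{-\lambda (s + t)}}{e^{-\lambda s}}
                            = e^{-\lambda t} = P(T > t)

so the distribution of the remaining wait does not depend on how long you have already waited.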

E.g., the time from now until a molecule of hydrogen peroxide H2O2 decomposes to water and oxygen is independent of when that molecule was formed. That is the basis of half life: the same distribution until decomposition, starting now, no matter when the chemical or particle was created.
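With decay rate \lambda, the half life is just the time t_{1/2} that solves

    e^{-\lambda t_{1/2}} = 1/2  \;\Rightarrow\;  t_{1/2} = \frac{\ln 2}{\lambda}

and, by memorylessness, it is the same whether the clock starts at creation or right now.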

In WWII, searching at sea, e.g., for enemy submarines, was important, and from that work came Bernard O. Koopman, Search and Screening, 1946, with an argument that encounters between two ships occur as a Poisson process, i.e., the time to the next encounter is exponential and the process is Markov.

In grad school, there was a question about how long US submarines would last in war at sea. Well, take n Red ships and m Blue ships with, for each ship, position, speed, and detection radius and, for each Red-Blue pair, given a detection, probabilities that Red dies, Blue dies, both die, or neither dies (right, these four have to be non-negative and add to 1). That specifies a Markov process that can be evaluated with a relatively simple Monte Carlo simulation.
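A minimal sketch of that kind of simulation, in Python and with every number made up for illustration (this is not the original program): ships do a random walk, a Red-Blue pair within the detection radius triggers a detection, and a single draw over the four outcomes decides who survives.

    import math
    import random

    # Illustrative parameters only -- none of these numbers come from the original work.
    N_RED, N_BLUE = 10, 10
    AREA = 1000.0              # side length of the square search area
    SPEED = 5.0                # distance a ship moves per time step
    DETECT_RADIUS = 20.0
    # Given a detection: P(Red dies), P(Blue dies), P(both die); the rest is P(neither dies).
    P_RED, P_BLUE, P_BOTH = 0.3, 0.3, 0.1
    STEPS, TRIALS = 500, 1000

    def new_fleet(n):
        return [[random.uniform(0, AREA), random.uniform(0, AREA)] for _ in range(n)]

    def move(ship):
        # Random-walk movement: one step of length SPEED in a uniformly random direction.
        theta = random.uniform(0, 2 * math.pi)
        ship[0] = min(max(ship[0] + SPEED * math.cos(theta), 0.0), AREA)
        ship[1] = min(max(ship[1] + SPEED * math.sin(theta), 0.0), AREA)

    def one_war():
        red, blue = new_fleet(N_RED), new_fleet(N_BLUE)
        for _ in range(STEPS):
            for ship in red + blue:
                move(ship)
            # Resolve every Red-Blue detection with the four-outcome draw.
            for r in red[:]:
                for b in blue[:]:
                    if b in blue and math.dist(r, b) <= DETECT_RADIUS:
                        u = random.random()
                        if u < P_RED:
                            red.remove(r)
                            break                      # this Red ship is gone
                        elif u < P_RED + P_BLUE:
                            blue.remove(b)
                        elif u < P_RED + P_BLUE + P_BOTH:
                            red.remove(r)
                            blue.remove(b)
                            break
                        # else: neither dies
            if not red or not blue:
                break
        return len(red), len(blue)

    # Monte Carlo: average over many independent wars (the law of large numbers at work).
    results = [one_war() for _ in range(TRIALS)]
    print("mean Red survivors:", sum(r for r, _ in results) / TRIALS)
    print("mean Blue survivors:", sum(b for _, b in results) / TRIALS)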

Had written a random number generator in assembler using an Oak Ridge formula, typed it in quickly, and ran the simulation. Had a review by a famous probability prof and passed after explaining how the law of large numbers applied. So, some pure and applied math and computing worked, but some politics didn't!
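For context, formulas of that era typically specified a multiplicative congruential generator; a toy Python version is below. The multiplier and modulus are the well-known Lewis-Goodman-Miller constants, used here purely for illustration, not the actual Oak Ridge formula.

    class MCG:
        # Toy multiplicative congruential generator: x_{n+1} = (a * x_n) mod m.
        # Constants are illustrative (the Lewis-Goodman-Miller "minimal standard"),
        # not the Oak Ridge formula mentioned above.
        def __init__(self, seed=12345):
            self.m = 2 ** 31 - 1       # Mersenne-prime modulus
            self.a = 16807             # multiplier = 7 ** 5
            self.x = seed % self.m or 1

        def uniform(self):
            # Next pseudo-random float in (0, 1).
            self.x = (self.a * self.x) % self.m
            return self.x / self.m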

This Red/Blue submarine problem seems to be a better fit for agent-based modeling (ABM) than for Monte Carlo based on Markov processes.

IRL this will be path dependent, since both sides will learn from past actions and the probabilities will be changing, i.e. the memoryless Markov property will not hold (a small sketch of this is below).

In ABM the ships (agents) can move in 2D space, which makes detection easier to model.

Also, obviously there are lots of externalities, like supplies of weapons, food, and sailors, ceasefires, surrenders, politics, etc.

All of the above is easier to simulate using ABM, rather than Monte Carlo.
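A hedged sketch of the path-dependence point, in Python with purely illustrative numbers: each side keeps state that is updated after every engagement (here, a "caution" level that grows with losses and suppresses future detections), so tomorrow's probabilities depend on the whole history rather than just the present, which is exactly what breaks the memoryless Markov assumption.

    import random

    class Side:
        # An adaptive side: losses raise its caution, which lowers future
        # detection chances -- a simple, illustrative form of learning.
        def __init__(self, ships, kill_prob):
            self.ships = ships
            self.kill_prob = kill_prob   # chance this side scores a kill on a detection
            self.caution = 0.0           # grows with losses, suppresses detections

        def learn_from_loss(self):
            self.caution = min(self.caution + 0.05, 0.9)

    def engagement(red, blue):
        # Detection probability depends on both sides' accumulated caution (history).
        p_detect = 0.3 * (1 - red.caution) * (1 - blue.caution)
        if random.random() >= p_detect:
            return
        if random.random() < blue.kill_prob:
            red.ships -= 1
            red.learn_from_loss()
        if random.random() < red.kill_prob:
            blue.ships -= 1
            blue.learn_from_loss()

    red, blue = Side(10, 0.25), Side(10, 0.25)
    steps = 0
    while red.ships > 0 and blue.ships > 0 and steps < 10_000:
        engagement(red, blue)
        steps += 1
    print("survivors -- Red:", red.ships, "Blue:", blue.ships, "after", steps, "steps")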

I once failed an interview with a hedge fund because they asked me a variant on that red ships/blue ships problem and at the time I knew nothing about probability. They also hadn’t given me lunch which didn’t help.