Comment by markisus
10 years ago
The author draws a distinction between computing the expected value of a random variable and computing the volume of a convex set, but from the point of view of measure theory, both are computations of integrals. So maybe the definition given by the encyclopedia wasn't so bad, given the right prior knowledge.
Well, come on. Lots of things are integrals. The Riemann zeta function is an integral. But I don't think MCMC has anything to say about that.
There is no obvious "measure" in the convex set example, so it's not clear how MCMC applies. And there are not, in general, good MCMC convergence results, so even if you did apply it, it's not obvious you'd be able to prove anything.
If you think of the set as being defined by an oracle (a function that reports whether its input is inside the set or outside it), then it looks like it will require exponentially many probes (in the dimensionality "d" of the space) to get its volume. Basically, a curse of dimensionality argument -- you have to sample independently along each of the "d" axes. But it turns out there is an MCMC algorithm that is polynomial-time. (Now, granted, IIRC it is like d^17, but...)
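To make the oracle model concrete, here is a rough Python sketch of what "independent probes" amounts to (the unit ball and all the names here are just illustrative stand-ins, not anyone's actual implementation):

    import numpy as np

    def in_unit_ball(x):
        # Membership oracle: reports whether the point x is inside the body.
        # A Euclidean unit ball stands in for an arbitrary convex set that
        # we can only query point by point.
        return float(np.dot(x, x) <= 1.0)

    def naive_volume_estimate(oracle, d, half_width, n_samples, rng):
        # Independent probes: sample uniformly from the bounding box
        # [-half_width, half_width]^d and count how many land in the body.
        # The hit rate scales like vol(body) / vol(box), which typically
        # decays exponentially in d -- the curse of dimensionality.
        box_volume = (2.0 * half_width) ** d
        points = rng.uniform(-half_width, half_width, size=(n_samples, d))
        hits = sum(oracle(p) for p in points)
        return box_volume * hits / n_samples

    rng = np.random.default_rng(0)
    print(naive_volume_estimate(in_unit_ball, d=5, half_width=1.0,
                                n_samples=200_000, rng=rng))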
The Lebesgue measure could be a candidate for the measure you are asking for. Creating a bounding box around the convex set in question and then rescaling, we get a probability measure (the measure of the entire box is 1). Some sorts of random walks inside this bounding box will converge to this scaled Lebesgue measure. I think this is the main idea behind the convex volume estimation algorithms.
The oracle you mention is the indicator function of the convex set. Instead of integrating this indicator function directly against Lebesgue measure to get the volume, we can use MCMC to estimate the "expectation" of the oracle.
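Here is a minimal Python sketch of that idea, with a single random walk on a fixed bounding box (so it does not capture the multi-phase trick that makes the real algorithms polynomial-time; the names and parameters are illustrative):

    import numpy as np

    def mcmc_volume_estimate(oracle, d, half_width, n_steps, step_size, rng):
        # Metropolis random walk whose stationary distribution is the scaled
        # Lebesgue (uniform) measure on the box [-half_width, half_width]^d.
        # Averaging the oracle (the body's indicator) along the walk estimates
        # E[indicator] = vol(body) / vol(box); multiply by vol(box) for volume.
        box_volume = (2.0 * half_width) ** d
        x = np.zeros(d)            # start at the center of the box
        hits = 0.0
        for _ in range(n_steps):
            proposal = x + rng.normal(scale=step_size, size=d)
            # Uniform target on the box + symmetric proposal: accept iff the
            # proposal stays inside the box, otherwise stay put.
            if np.all(np.abs(proposal) <= half_width):
                x = proposal
            hits += oracle(x)
        return box_volume * hits / n_steps

    rng = np.random.default_rng(1)
    in_unit_ball = lambda x: float(np.dot(x, x) <= 1.0)
    print(mcmc_volume_estimate(in_unit_ball, d=5, half_width=1.0,
                               n_steps=200_000, step_size=0.3, rng=rng))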
It's not obvious whether we can realize the Riemann zeta function as an integral over a finite measure space, but if we could, I think MCMC could be used to evaluate it.
Yes, what you just wrote is evident. I'm saying, in response to your assertion above, that it is not trivial to derive a useful result about approximating the volume of convex sets with MCMC, and not at all obvious that anything smarter than independent probes will be useful. I happened to read the original 1991 paper on this topic during my PhD, so I'm still vaguely familiar with it.
To clarify: It's clear that you can approximate a convex volume with exponentially-many independent (random or deterministic) probes. That's therefore not an interesting problem.
Surprisingly: there is NO deterministic algorithm that can solve this problem in polynomial time. But, surprise #2: a randomized algorithm using MCMC can. Pretty cool.
So, this is the relevance of MCMC to the convex volume problem. Used cleverly, it can take you from exponentially-many probes to polynomial.
And in particular: You talk about "creating a bounding box" for the reference Lebesgue measure -- saying that this first step is easy is basically assuming the whole problem is solved. Because: How do you create a bounding box around a convex set in R^d, with polynomially-many probes, so that the error is bounded? You can do it with exponentially-many probes (just go out along each orthogonal axis). But that's not interesting -- if you have that many probes, forget about MCMC, just use a deterministic algorithm.
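To be concrete, "going out along each orthogonal axis" amounts to something like the following Python sketch (names illustrative, and it assumes the body contains the origin and fits inside some known radius); the catch is that bounding how loose the resulting box is, for an arbitrary convex body, is exactly the hard part:

    import numpy as np

    def axis_extent(oracle, direction, max_radius=1e6, tol=1e-6):
        # Bisect along one direction to find how far the body extends,
        # assuming the body contains the origin and lies within max_radius.
        lo, hi = 0.0, max_radius
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if oracle(mid * direction):
                lo = mid
            else:
                hi = mid
        return hi

    def axis_aligned_box(oracle, d):
        # Probe out along +e_i and -e_i for each coordinate axis.  This uses
        # only O(d log(max_radius / tol)) oracle calls, but controlling the
        # error of the resulting box for an arbitrary convex body is exactly
        # the difficulty discussed above.
        lows, highs = np.zeros(d), np.zeros(d)
        for i in range(d):
            e = np.zeros(d)
            e[i] = 1.0
            highs[i] = axis_extent(oracle, e)
            lows[i] = -axis_extent(oracle, -e)
        return lows, highs

    in_unit_ball = lambda x: float(np.dot(x, x) <= 1.0)
    print(axis_aligned_box(in_unit_ball, d=3))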
Here's a short summary: http://www.cs.berkeley.edu/~sinclair/cs294/n16.pdf