Comment by soganess
5 hours ago
Can someone tell me what I am missing here?
This seems to suffer from a finite-size effect. Wolfram's machines have a tiny state space (s ≤ 4, k ≤ 3). For some classes of NP problems, this will be insufficient to encode complex algorithms, and the space is low-dimensional enough that it is unlikely to be able to encode hard ("worst case") instances of the problem class. The solution space simply cannot support them.
In this regime, hard problem classes only have easy instances; think random k-SAT below the satisfiability threshold, where algorithms like FIX (Coja-Oghlan) find satisfying assignments in polynomial time. In random k-SAT, hardness cannot emerge away from the phase transition, and by analogy (watch my hand wave in the wind so free) I can imagine it would not emerge at small scales either. Almost like the opposite of the overlap gap property.
Wolfram's implicit counter-claim seems to be that the density of irreducibility among small machines approximates the density in the infinite limit (...or something? Via his "Principle of Computational Equivalence"), but I'm not following that argument. I am sure someone has brought this up to him! I just don't understand his response. Is there some way of characterizing / capturing the complexity floor of a given problem (i.e., for an NP-hard problem P, how large does the machine's state space need to be to, WHP, encode even a few hard instances)?
The cynic in me says these interesting but ultimately barren long-form articles are just content marketing for Mathematica.
I think you have it wrong. Wolfram's claim is that for a wide array of small (s, k) (including s <= 4, k <= 3), there's complex behavior and a profusion of (provably?) Turing-machine-equivalent (TME) machines. At the end of the article, Wolfram talks about awarding a prize in 2007 for a proof that an (s=2, k=3) machine was TME.
The `s` stands for states and `k` for colors; tape length doesn't enter into it at all. One way to state the "principle of computational equivalence" is "if it looks complex, it probably is". That is, TME is the norm, rather than the exception.
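To make the (s, k) bookkeeping concrete, here's a toy sketch (mine, not Wolfram's enumeration): a machine is just a table mapping (state, read color) to (next state, written color, head move), and the tape is unbounded in both directions. The example rule table is the standard 2-state, 2-color busy beaver, encoded so that a missing rule means "halt".

```python
def run(rules, tape, state=0, head=0, max_steps=100):
    """Simulate an (s, k) Turing machine on an unbounded tape.

    rules: dict (state, read_color) -> (next_state, write_color, move)
    A missing (state, color) entry is treated as a halt.
    Returns (steps_taken, final_tape_dict).
    """
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells read as 0
    for step in range(max_steps):
        key = (state, tape.get(head, 0))
        if key not in rules:      # no rule for this configuration: halt
            return step, tape
        state, color, move = rules[key]
        tape[head] = color
        head += move

    return max_steps, tape

# 2-state, 2-color busy beaver: halts with four 1s on the tape.
bb2 = {
    (0, 0): (1, 1, +1),
    (0, 1): (1, 1, -1),
    (1, 0): (0, 1, -1),
    # (1, 1) intentionally absent -> halting configuration
}

steps, tape = run(bb2, [])
print(steps, sorted(tape.values()))  # -> 5 [1, 1, 1, 1]
```

Everything interesting about these machines lives in how tiny that `rules` table is; the tape carries the rest.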
If true, this probably means you can make up for the clunky computational power of a small (s, k) machine by conditioning large swathes of the input tape to overcome the limitation. That is, you have unfettered access to the input tape, and with just a sprinkle of TME you can eke out computation by fiddling with the tape to get the (s, k) machine to run how you want.
So, if finite-size scaling effects were actually at play, they would only work in Wolfram's favor. If there's a profusion of small TME (s, k) machines, one would expect computation to only get easier as (s, k) increases.
I think you also have the random k-SAT business wrong. There's this idea that "complexity happens at the edge of chaos", and I think that's pretty clearly wrong.
Random k-SAT is, from what I understand, effectively almost surely polynomial-time solvable. Below the critical threshold, instances are almost surely satisfiable and easy to solve; I think something as simple as WalkSAT will work (I'm not sure about DPLL). Above the threshold, instances are almost surely unsatisfiable, though efficiently certifying that is its own problem (I think something works at high enough density?). Near, or even "at", the threshold, my understanding is that something like survey propagation effectively solves this [0].
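As a quick illustration of the "below-threshold random k-SAT is easy in practice" point, here's a bare-bones WalkSAT sketch (the function names and parameters are mine, not from any of the cited papers). At clause density m/n = 3.0, well below the ~4.27 satisfiability threshold for random 3-SAT, an instance is almost surely satisfiable and local search finds an assignment quickly.

```python
import random

def random_ksat(n, m, k=3, rng=random):
    # m clauses over n variables; each clause picks k distinct variables,
    # each with a random sign (True means the positive literal).
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n), k)]
            for _ in range(m)]

def walksat(clauses, n, p=0.5, max_flips=20_000, rng=random):
    # Bare-bones WalkSAT: start from a random assignment; while some clause
    # is unsatisfied, pick one at random and flip either a random variable
    # in it (with prob p) or the variable whose flip leaves the fewest
    # clauses unsatisfied. Returns an assignment, or None on timeout.
    assign = [rng.random() < 0.5 for _ in range(n)]
    sat = lambda cl: any(assign[v] == s for v, s in cl)
    for _ in range(max_flips):
        unsat = [cl for cl in clauses if not sat(cl)]
        if not unsat:
            return assign
        clause = rng.choice(unsat)
        if rng.random() < p:
            v = rng.choice(clause)[0]
        else:
            def cost(var):  # clauses left unsatisfied if we flip var
                assign[var] = not assign[var]
                c = sum(1 for cl in clauses if not sat(cl))
                assign[var] = not assign[var]
                return c
            v = min((var for var, _ in clause), key=cost)
        assign[v] = not assign[v]
    return None

rng = random.Random(0)
n = 150
# clause density m/n = 3.0, below the random 3-SAT threshold (~4.27)
clauses = random_ksat(n, 3 * n, rng=rng)
model = walksat(clauses, n, rng=rng)
print("solved" if model is not None else "timed out")
```

None of this says anything about worst-case instances, of course; it's exactly the "random instances at this density are easy" phenomenon.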
k-SAT is a little clunky to work in, so you might take issue with my take on it being solvable. But take something like Hamiltonian cycle on (Erdős–Rényi) random graphs: Hamiltonicity has a phase transition, just like k-SAT (and a host of other NP-complete problems), yet there is provably an algorithm that decides Hamiltonicity in polynomial time almost surely, even at the critical threshold [1].
There's some recent work on choosing "random" k-SAT instances from different distributions, and I think that's more hopeful for finding difficult random instances, but I'm not sure there's actually been a lot of work in that area [2].
[0] https://arxiv.org/abs/cs/0212002
[1] https://www.math.cmu.edu/~af1p/Texfiles/AFFHCIRG.pdf
[2] https://arxiv.org/abs/1706.08431