Comment by sdenton4

6 years ago

Likewise, the search in Go is a Monte Carlo search, very different from the kind of search used in chess. And the neural nets in AlphaGo guide where to run the search, which is very different from brute-force search.
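
To make that concrete, here's a minimal, one-ply sketch of the PUCT-style selection rule that AlphaGo popularized. The point is the `score` function: the policy prior steers which moves get simulated, instead of expanding everything. Everything here (the uniform `fake_policy`, the `simulate` callback, the `c_puct` constant) is an illustrative stand-in, not AlphaGo's actual code.

```python
import math
import random

class Node:
    def __init__(self, prior):
        self.prior = prior      # policy net's probability for this move
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}      # move -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def fake_policy(state, moves):
    """Stand-in for a policy network: uniform priors over legal moves."""
    return {m: 1.0 / len(moves) for m in moves}

def select_child(node, c_puct=1.5):
    """PUCT: exploit observed value, but let the prior steer exploration."""
    sqrt_total = math.sqrt(node.visits)
    def score(item):
        _, child = item
        u = c_puct * child.prior * sqrt_total / (1 + child.visits)
        return child.value() + u
    return max(node.children.items(), key=score)

def mcts(root_state, legal_moves, simulate, n_sims=200):
    root = Node(prior=1.0)
    for move, p in fake_policy(root_state, legal_moves).items():
        root.children[move] = Node(prior=p)
    for _ in range(n_sims):
        move, child = select_child(root)
        reward = simulate(root_state, move)  # rollout or value-net estimate
        child.visits += 1
        child.value_sum += reward
        root.visits += 1
    # Play the most-visited move, as AlphaGo does at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

if __name__ == "__main__":
    best = mcts(None, ["a", "b", "c"], lambda s, m: random.random())
    print("chosen move:", best)
```

Only a handful of promising branches get the simulation budget, which is exactly why this isn't brute force in the chess-engine sense.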

Many of these things have required the giant leaps in compute, but still wouldn't work at all without the concurrent improvements in algorithms.

Along these lines, here's a classic blog post: https://www.johndcook.com/blog/2015/12/08/algorithms-vs-moor...

"Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later — in 2003 — this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008."

I think the author's point, though, is that all our effort went into algorithms that do just one thing, namely search, and that we used them in conjunction with more compute power.

I'll agree that the author emphasizes compute power, but his real point still holds: Monte Carlo search may not be classic brute force, and neural networks guiding it may not be standard either, but together the two just let you search effectively on a massive scale.