(Ab)using general search algorithms on dynamic optimization problems (2023)

5 days ago (dubovik.eu)

I wrote this blog back in 2023, but since then I've become a frequent lurker on HN and decided to repost it here. For me, writing it was about connecting the dots between the dynamic optimization techniques I studied as an economist and the more general search algorithms studied in CS.

Related: a few years ago, various people were exploring locality-sensitive hashing as an alternative to hardware acceleration for neural networks (SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems [1]).

This never took off. I don't know if it didn't work or if it was a matter of the "bitter lesson" ideology pushing people to prefer hardware acceleration over smart algorithms. I would note that smart algorithms require hiring (more) smart people, and not only can hardware acceleration be cheaper, but hardware and data can be more reliably added than smart people, who are in short supply.

[1]: https://arxiv.org/abs/1903.03129
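
For anyone unfamiliar with the idea: the core trick in SLIDE-style training, as I understand it, is to use locality-sensitive hashing to pick which neurons to evaluate instead of computing every dot product. Below is a minimal sketch of that bucket-selection step using random-hyperplane SimHash; the names and sizes are illustrative, not taken from the paper.

    import numpy as np

    # Random-hyperplane (SimHash) LSH: vectors with similar directions
    # tend to land in the same bucket.
    rng = np.random.default_rng(0)
    dim, n_neurons, n_bits = 64, 1000, 8
    planes = rng.standard_normal((n_bits, dim))  # one hyperplane per signature bit

    def bucket(v):
        bits = (planes @ v) > 0                  # sign pattern of the projections
        return int(bits.astype(int) @ (1 << np.arange(n_bits)))

    # Index each "neuron" weight vector once, up front.
    weights = rng.standard_normal((n_neurons, dim))
    table = {}
    for i, w in enumerate(weights):
        table.setdefault(bucket(w), []).append(i)

    # Per input, only evaluate neurons whose bucket matches the input's bucket.
    x = rng.standard_normal(dim)
    candidates = table.get(bucket(x), [])
    print(f"evaluating {len(candidates)} of {n_neurons} neurons")

In practice one would use several independent hash tables and union the candidates, which raises the chance of catching the truly high-activation neurons while keeping the per-input cost far below a full dense pass.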

  • > [...] not only can hardware acceleration be cheaper, but hardware and data can be more reliably added than smart people, who are in short supply.

    Any ideas as to why? That's a nice quote.

Show HN is for things you've made that others can try, which excludes blog posts. Take a look at https://news.ycombinator.com/showhn.html

You can just take Show HN out of the title, though, or repost without it.

  • Thank you for the tip, will do so. I thought I saw some interactive blogs under Show HN, but maybe I'm mistaken. The guidelines are indeed explicit. (I tried to do so but was a bit too late, I guess, because the edit option is no longer there. Reposting doesn't seem to work either; I probably need to wait some time.)