I don’t think there’s much to write home about RE: Learned Index Structures, since classical structures can soundly outperform the learned approach on at least the trumpeted Google “success story” benchmark [1]. It’s hype, not substance.
As an idea and as an application of machine learning, it’s important and worth exploring. Claiming it’s already better than well-tuned classical structures is false. That’s a distinction worth making.
[1]: https://dawn.cs.stanford.edu/2018/01/11/index-baselines/
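For anyone skimming, the idea under debate treats lookup as a prediction problem: learn an approximation of the sorted keys' cumulative distribution, predict where a key should sit, then fix the prediction with a bounded local search. Here is a minimal sketch, assuming a plain least-squares linear model (the paper itself uses a staged "recursive model index"; the class name here is made up for illustration):

```python
import bisect

class LearnedIndexSketch:
    """Toy learned index: fit a model mapping key -> position in a
    sorted array, then correct predictions with a bounded search."""

    def __init__(self, sorted_keys):
        self.keys = sorted_keys
        n = len(sorted_keys)
        # "Model": least-squares linear fit of position against key.
        mean_x = sum(sorted_keys) / n
        mean_y = (n - 1) / 2
        cov = sum((x - mean_x) * (y - mean_y)
                  for y, x in enumerate(sorted_keys))
        var = sum((x - mean_x) ** 2 for x in sorted_keys)
        self.slope = cov / var if var else 0.0
        self.intercept = mean_y - self.slope * mean_x
        # Record the worst-case prediction error so lookups stay exact.
        self.err = max(abs(self._predict(k) - i)
                       for i, k in enumerate(sorted_keys))

    def _predict(self, key):
        return int(self.slope * key + self.intercept)

    def lookup(self, key):
        # Only search the small window the error bound allows.
        pos = self._predict(key)
        lo = max(0, pos - self.err)
        hi = min(len(self.keys), pos + self.err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        if i < len(self.keys) and self.keys[i] == key:
            return i
        return None
```

For example, LearnedIndexSketch(list(range(0, 1000, 3))).lookup(42) returns 14. The question in this thread is whether that machinery actually beats a well-tuned B-tree or optimized binary search, which is what [1] measures.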
Completely disagree. It is NOT simply about the raw results but about the direction and the promise of a new approach.
Ultimately it also comes down to the power required to get a task done.
Also, how can a paper submitted to NIPS be hype?
Completely different - that paper applies to comparison-based indexing.
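To make that distinction concrete: a comparison-based structure (a B-tree, or plain binary search) locates a key purely by comparing it against stored keys, roughly log2(n) times, no matter how regular the data is. A toy version with a step counter, purely for exposition:

```python
def binary_search(sorted_keys, key):
    """Comparison-based lookup: each loop iteration is one key
    comparison, so cost is ~log2(len(sorted_keys)) regardless of
    the key distribution. A learned index instead spends that
    budget on model arithmetic plus a short window search."""
    lo, hi = 0, len(sorted_keys)
    steps = 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_keys[mid] < key:
            lo = mid + 1
        else:
            hi = mid
    found = lo < len(sorted_keys) and sorted_keys[lo] == key
    return (lo if found else None), steps
```

On a million keys that is about 20 data-dependent branches per lookup; the learned approach bets that predictable arithmetic can be cheaper than those branches when the keys have learnable structure.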
Great link. I will be curious to see whether the Google/Jeff Dean approach works in real-life situations.
What is interesting is that you can use TPUs and parallelize the approach; see the sketch below.
Ultimately it is about using less power to get some work done.
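On the parallelization point: because the position prediction is plain arithmetic with no data-dependent branching until the final window search, lookups batch and vectorize naturally, which is what makes accelerators like TPUs relevant. A sketch with NumPy standing in for dedicated hardware (the linear model and the example numbers are assumptions carried over from the toy sketch above, not anything from the paper):

```python
import numpy as np

def batch_predict(keys, slope, intercept, n):
    """Predict positions for a whole batch of lookups at once:
    one fused multiply-add over the batch, the kind of dense
    arithmetic TPUs/GPUs execute in parallel."""
    pos = slope * keys + intercept
    return np.clip(pos.astype(np.int64), 0, n - 1)

# Hypothetical usage: 500k even keys, for which a linear model is exact.
sorted_keys = np.arange(0, 1_000_000, 2)
queries = np.array([10, 421_338, 999_998])
print(batch_predict(queries, slope=0.5, intercept=0.0, n=len(sorted_keys)))
# -> [     5 210669 499999]; each prediction is then verified with a
#    short local search, as in the single-key sketch above.
```

Whether this really translates into less power per lookup than a cache-resident B-tree is exactly what the baselines debate is about.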