Sounds related to Google Slicer: https://research.google/pubs/slicer-auto-sharding-for-datace...
It is similar to Slicer in terms of the abstraction (I built Slicer at Google), but the architecture, implementation, and algorithms have a lot of differences.
Did you also work on this Databricks dicery?
These show up once you reach a certain scale, where static sharding is either cost-inefficient or the hot spots are very dynamic. They also avoid adding latency by being eventually consistent sidecars instead of proxies.
I’ve seen them used for traffic routing, storage-system metadata, distributed caches, etc.
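The sidecar-vs-proxy point is worth unpacking: instead of routing every request through a proxy, each client process consults a local, eventually consistent copy of the key-to-shard assignment, which a control plane pushes asynchronously. A minimal sketch of the lookup side, assuming a hash-range assignment model (all names and the update mechanism here are hypothetical, not Slicer's or Databricks' actual API):

```python
import bisect
import hashlib


class ShardMap:
    """Local, eventually consistent key -> shard assignment.

    A sidecar would hold one of these and refresh it in the background
    when the control plane publishes a new generation; lookups never
    block on the control plane, so no per-request routing hop is added.
    """

    def __init__(self, assignments):
        # assignments: list of (range_start_hash, shard_id), covering
        # the 32-bit hash space; entry i owns [start_i, start_{i+1}).
        self.assignments = sorted(assignments)

    def lookup(self, key):
        # Hash the key into the 32-bit range, then find the last
        # assignment whose start is <= the hash.
        h = int(hashlib.sha256(key.encode()).hexdigest(), 16) % 2**32
        starts = [start for start, _ in self.assignments]
        i = bisect.bisect_right(starts, h) - 1
        return self.assignments[i][1]

    def apply_update(self, new_assignments):
        # Invoked when a new assignment generation arrives. Requests in
        # flight may still route on the old map -- that transient
        # disagreement is the "eventually consistent" trade-off.
        self.assignments = sorted(new_assignments)
```

Because updates are applied out of band, two clients can briefly disagree about a key's shard; systems built this way have to tolerate that (e.g. a cache shard serving a key it no longer "owns" until traffic drains).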
Sharded in-memory caching turns out to be rather useful at scale :)
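For concreteness, the core idea behind a sharded in-memory cache is just hashing each key to one of N independent cache shards, so both the working set and the hot keys spread across many processes. A toy sketch, assuming simple modulo placement and per-shard LRU eviction (a real deployment would use the dynamic assignment discussed above rather than a fixed modulus):

```python
import hashlib
from collections import OrderedDict


class LRUShard:
    """One in-memory cache shard with LRU eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # oldest entry first

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)  # mark as most recently used
            return self.data[key]
        return None

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used


class ShardedCache:
    """Routes each key to a fixed shard by hashing it."""

    def __init__(self, num_shards, capacity_per_shard):
        self.shards = [LRUShard(capacity_per_shard) for _ in range(num_shards)]

    def _shard_for(self, key):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.shards[h % len(self.shards)]

    def get(self, key):
        return self._shard_for(key).get(key)

    def put(self, key, value):
        self._shard_for(key).put(key, value)
```

The scale benefit is that capacity and throughput grow with the shard count, while any single hot key still lands on one shard, which is exactly why dynamic reassignment of hot ranges matters.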
Some of the key examples highlighted on our blog are Unity Catalog, which is essentially the metadata layer for Databricks; our Query Orchestration Engine; and our distributed remote cache. See the blog post for more!