Comment by dgxyz
9 hours ago
Neither.
Just use memcache as a query cache if you have to. And only if you have to, because invalidation is hard. It's cheap, reliable, mature, fast, scalable, requires little operational understanding, has decent-quality clients in most languages, is not stateful, is available off the shelf from most cloud providers, and works in-cluster in Kubernetes if you want to do it that way.
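A minimal sketch of the cache-aside pattern this implies, assuming a Python app with pymemcache; fetch_user_from_db / write_user_to_db are hypothetical stand-ins for the real queries, and the key scheme and TTL are illustrative:

```python
import json
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))  # assumed local memcached instance

def get_user(user_id: int) -> dict:
    """Cache-aside read: try memcached first, fall back to the database."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    user = fetch_user_from_db(user_id)            # hypothetical DB query
    cache.set(key, json.dumps(user), expire=300)  # TTL bounds staleness
    return user

def update_user(user_id: int, fields: dict) -> None:
    """Write path: update the database, then invalidate the cached entry."""
    write_user_to_db(user_id, fields)  # hypothetical DB write
    cache.delete(f"user:{user_id}")    # the hard part: remembering every key to drop
```

The delete on the write path is where "invalidation is hard" shows up: every write has to know which cached keys it makes stale.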
I can't find a use case for Redis for which postgres or postgres+memcache isn't a simpler and/or superior solution.
Just to give you an idea of how good memcache is, I think we had 9 billion requests across half a dozen nodes over a few years without a single process restart.
Is there anything memcache gives you that a Redis instance configured with an eviction policy of allkeys-lru doesn't?
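(For reference, the setup being asked about is roughly this, sketched with redis-py; the memory limit is an arbitrary example:)

```python
import redis

r = redis.Redis(host="localhost", port=6379)  # assumed local Redis instance

# Cap memory and evict the least-recently-used key across the whole keyspace,
# which makes Redis behave like a memcached-style LRU cache.
r.config_set("maxmemory", "256mb")
r.config_set("maxmemory-policy", "allkeys-lru")

r.set("user:42", "{...}", ex=300)  # per-key TTLs still work alongside eviction
```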
memcached is multithreaded, so it scales up better per node.
memcached clients also frequently use ketama consistent hashing, so load distribution/clustering is much easier and much simpler than redis clustering (sentinel, etc).
Mcrouter[1] is also great for scaling memcached.
dragonfly, garnet, and pogocache are other alternatives.
[1]: https://github.com/facebook/mcrouter
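To make the ketama-style consistent hashing mentioned above concrete, here is a toy sketch of the idea: a hash ring with virtual nodes, with made-up server names. Real clients (libmemcached, etc.) do this for you; this is just the shape of the algorithm, not any particular client's implementation.

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring: each server gets many virtual points on the
    ring, and a key maps to the first server point at or after its own hash."""

    def __init__(self, servers, replicas=100):
        self.replicas = replicas
        self.ring = []    # sorted list of point hashes
        self.points = {}  # point hash -> server
        for server in servers:
            self.add(server)

    def _hash(self, key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, server: str) -> None:
        for i in range(self.replicas):
            h = self._hash(f"{server}#{i}")
            self.points[h] = server
            bisect.insort(self.ring, h)

    def remove(self, server: str) -> None:
        for i in range(self.replicas):
            h = self._hash(f"{server}#{i}")
            self.ring.remove(h)
            del self.points[h]

    def get(self, key: str) -> str:
        h = self._hash(key)
        idx = bisect.bisect(self.ring, h) % len(self.ring)  # wrap around the ring
        return self.points[self.ring[idx]]

# Adding or removing one node only remaps the keys that land on its points,
# not the whole keyspace, which is why client-side clustering stays simple.
ring = HashRing(["cache1:11211", "cache2:11211", "cache3:11211"])
print(ring.get("user:42"))
```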
Both Redis (finally) and Valkey have addressed the multithreading scalability issues; see https://news.ycombinator.com/item?id=43860273...
I imagine the answer here is: less complexity.