Comment by whispem
13 hours ago
Very relevant question! minikv's memory profile depends on the usage scenario and the storage backend.
- With the in-memory backend: every value lives in RAM, alongside the HashMap index, WAL ring buffer, TTL map, and Bloom filters. For a cluster with a few million objects, a node typically uses roughly 50–200 MB, scaling with the active dataset size and in-flight batch writes;
- With RocksDB or Sled: persistent storage keeps RAM use lower for large datasets, but the node still caches hot keys/metadata and maintains Bloom filters and index snapshots (both configurable). The floor stays light, though the DB block cache, WAL write buffering, and active transaction state add some baseline RAM (tens to a few hundred MB per node in practice);
- Under heavy load (many concurrent clients, transactions, or CDC enabled): buffers, Raft logs, and transaction queues grow, but you can cap them in config (batch size, CDC buffer, WAL fsync policy, etc.);
- The Prometheus /metrics endpoint and the admin API expose live stats, so you can observe per-node resource use in production (a quick check is sketched below).
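
For eyeballing this in production, I usually just scrape /metrics and filter for memory gauges. Here's a minimal sketch, assuming the node serves Prometheus text format on a local port; the address and metric names are placeholders, not minikv's actual ones:

```python
# Minimal sketch (not minikv's actual API): poll a node's Prometheus
# /metrics endpoint and print every sample whose name mentions memory or rss.
# The URL, port, and metric names are assumptions -- adjust to what your
# nodes actually expose.
import urllib.request

NODE_METRICS_URL = "http://localhost:9100/metrics"  # hypothetical address

def memory_samples(url: str) -> dict[str, float]:
    """Return {metric_name_with_labels: value} for memory-related samples."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        text = resp.read().decode("utf-8")
    samples = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        # Exposition format: "<name>{labels} <value>" (timestamps not handled)
        name, _, value = line.rpartition(" ")
        lowered = name.lower()
        if "memory" in lowered or "rss" in lowered:
            try:
                samples[name] = float(value)
            except ValueError:
                continue
    return samples

if __name__ == "__main__":
    for name, value in sorted(memory_samples(NODE_METRICS_URL).items()):
        print(f"{name} = {value}")
```

Running that against each node once a minute is usually enough to spot whether block cache, WAL buffers, or transaction state are drifting upward.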
If you have a specific workload or dataset in mind, feel free to share it and I can benchmark it or provide more precise figures!