
Comment by GordonS

3 years ago

I don't have a use for this, but I enjoyed the detailed write-up!

In these days of fast SSDs, are there still uses for a RAM disk, beyond extreme niches?

SSDs have wear which will lead them to eventual failure. Wear isn't nearly as bad as it was a few years back, but you can still only write to a cell a limited number of times. If you're constantly writing data to your disks, you may need something that doesn't die.

I would personally go with a "normal" RAM disk in this case, but CPUs only support a limited amount of RAM and a limited number of memory channels. Complex operations on RAM disks can also increase CPU load, which is a performance downside if you're doing things like compiling large code bases. Coupled with a battery backup, this looks like a pretty neat alternative to SSDs for write-heavy operations, assuming you periodically persist the important data to something else (such as a hard drive).
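
A minimal sketch of that "persist periodically" idea in Python, assuming a RAM disk mounted at /mnt/ramdisk and a hard-drive target under /data (both hypothetical paths):

    import shutil
    import time

    # Hypothetical paths: a working directory on the RAM disk and a
    # persistent snapshot location on a hard drive.
    RAMDISK_DIR = "/mnt/ramdisk/work"
    PERSIST_DIR = "/data/ramdisk-snapshot"
    INTERVAL_S = 15 * 60  # snapshot every 15 minutes

    def snapshot():
        """Copy the RAM disk contents to persistent storage, replacing the previous copy."""
        tmp = PERSIST_DIR + ".tmp"
        shutil.rmtree(tmp, ignore_errors=True)
        shutil.copytree(RAMDISK_DIR, tmp)
        shutil.rmtree(PERSIST_DIR, ignore_errors=True)
        shutil.move(tmp, PERSIST_DIR)

    if __name__ == "__main__":
        while True:
            snapshot()
            time.sleep(INTERVAL_S)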

I'd be wary of bit flips running this card, though. Without ECC, bit flips in RAM are just something you should expect. Normal RAM doesn't hold the same data for an entire year, but this semi-permanent setup might, which makes it more vulnerable to bit flips.

I know RAID cards will often contain a battery-backed RAM cache for file operations in case the power goes out, so perhaps this card could be useful for that as well? With ZFS you can set up all kinds of fancy buffering/caching, and I imagine an SSD write cache would show wear and tear much faster than one of these cards, and you can't exactly hot-swap M.2 cards. A couple of cheap gigabytes of persistent write cache may be just the solution some people have been looking for.

A very useful use case I discovered just last week: local dedup management by the Synology C2 Backup Agent and TBW on the OS SSD.

The C2 Backup agent stores dedup/chunk data in ProgramData by default, which lives on C:... which is usually an SSD nowadays.

I noticed a 3:4 ratio between data written to the local dedup folder and data uploaded to the remote C2 5 TB storage (I subscribed to C2 Business).

TBW indeed grew horrifyingly fast on the SSD, and I estimated it would completely wear it out in about a year or so, given the 2 TB (and growing) of data to back up with my standard retention scheme.

So I made a 32 GB ImDisk ramdisk (16 GB was not enough at peak), with backup/restore at shutdown/startup (a feature ImDisk provides, quite nicely), mounted it in place of the dedup folder, and ran my tasks.

poof, reduced TBW on SSD by 99%.

(4x16 GB DDR4 ECC Reg on my server, so not concerned about memory errors)
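
One generic way to point an application's data folder at a ramdisk drive letter is an NTFS directory junction. A rough Python sketch of that setup with hypothetical paths (the ExampleBackupAgent folder name and the R: drive letter are assumptions; ImDisk's own folder-mount and save/restore features, as used above, make this unnecessary in the commenter's case):

    import os
    import shutil
    import subprocess

    # Hypothetical paths: the agent's dedup folder on C: and a ramdisk mounted as R:.
    DEDUP_DIR = r"C:\ProgramData\ExampleBackupAgent\Dedup"
    RAMDISK_DIR = r"R:\Dedup"

    # One-time setup: move any existing dedup data onto the ramdisk, then replace
    # the original folder with a junction so the agent keeps using its usual path.
    os.makedirs(RAMDISK_DIR, exist_ok=True)
    if os.path.isdir(DEDUP_DIR):
        for name in os.listdir(DEDUP_DIR):
            shutil.move(os.path.join(DEDUP_DIR, name), RAMDISK_DIR)
        os.rmdir(DEDUP_DIR)

    # mklink /J creates a directory junction: the old dedup path now
    # transparently resolves to the ramdisk.
    subprocess.run(["cmd", "/c", "mklink", "/J", DEDUP_DIR, RAMDISK_DIR], check=True)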

  • I think the question was more tuned to physical ram disks, but I'm not sure.

    Either way, how many terabytes were being written each day? And how much can your drive take? It looks like I could go pay $60 right now for 600 TB of endurance, and $35 for 200 TB of endurance. If you already have the extra RAM then go for it, but it doesn't seem like a setup to make on purpose (rough endurance math after this thread).

    Maybe your backup system has far more writes than mine? I have terabytes of backups but the average written for each daily backup is about 10GB.

    • (I was replying to the earlier comment wondering whether ramdisks still have any interesting uses nowadays.)

      About 150 TBW endurance on a 250 GB Samsung M.2 NVMe 970 EVO Plus. On paper, that is, but since it is the OS SSD with Windows Server 2022 STD (sole DC AD/DHCP/DNS/Hyper-V), in production, I won't take any risk. RAID 1 in this scenario would have changed nothing. On the side, I have 40 TB of RAID 10 for storage.

      So I cancelled the first C2 execution (with C2 encryption on) when I reached 195 TBW on the SSD. Monitoring the ramdisk usage still shows about a 3:4 ratio on a complete snapshot.

      I have about 1 million files in the 2.29 TB of data to back up.

      I did indeed have the RAM sticks available for free; I simply had to take them (2x16 GB) from a decommissioned ML350 Gen9 (which uses DDR4-2133P ECC Reg). It now serves me as a bench, literally.

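For a rough sense of the endurance numbers being traded in this thread, a back-of-the-envelope calculation in Python (the 600 TB rating and ~10 GB/day come from the comments above; the ~400 GB/day figure is only implied by wearing a 150 TBW drive out in about a year):

    # Back-of-the-envelope endurance math using the figures quoted above.

    def lifetime_days(endurance_tb: float, daily_writes_gb: float) -> float:
        """Days until the rated TBW is exhausted at a constant write rate."""
        return endurance_tb * 1000 / daily_writes_gb

    # A 600 TB endurance drive receiving ~10 GB of backup writes per day:
    print(lifetime_days(600, 10))   # ~60,000 days, so endurance is a non-issue

    # Wearing a 150 TBW drive out in roughly a year implies on the order of
    # 150 TB / 365 days of dedup writes per day:
    print(150 * 1000 / 365)         # ~411 GB/day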

SSDs currently peak somewhere around 7GB/s transfer speeds, while RAM can easily knock out well over 20GB/s (and that's a low estimate). So anything that benefits from fast transfer speeds and/or low latency will appreciate a RAM disk.

SSDs are also consumable, as mentioned in other comments, so RAM disks are perfect for a scratch disk. HDDs can also serve as a scratch disk, but some tasks also appreciate the aforementioned faster transfer speeds and/or lower latency of SSDs or RAM.

  • You can easily get to about 20 GB/s by using PCIe 4.0 NVMe drives in a striped 4x configuration. Comparing this 16x setup to single-drive SSD access is not a fair comparison. With NVMe prices finally going down, you can get 8 TB at those speeds for under USD 1k.

We used a network-backed temporary RAM disk in our RISC-V package build system. Each time a build started, it connected to the NBD server, which automatically created a RAM disk ("remote tmpfs"). On disconnection the RAM disk was thrown away. That's fine for builders; I wouldn't much recommend it for anything else! https://rwmj.wordpress.com/2020/03/21/new-nbdkit-remote-tmpf...
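
From the client side, that connect/use/throw-away lifecycle looks roughly like this; a sketch using the libnbd Python bindings, assuming a "remote tmpfs"-style nbdkit server is already listening at nbd://builder-storage/ (a hypothetical host; a real builder would more likely attach the export as a block device and mkfs/mount it):

    import nbd  # libnbd Python bindings

    # Hypothetical export: the server hands each connection a fresh RAM-backed
    # disk and discards it when the client disconnects.
    h = nbd.NBD()
    h.connect_uri("nbd://builder-storage/")

    print("throwaway disk size:", h.get_size(), "bytes")

    # Scratch data written here lives only in the server's RAM...
    h.pwrite(b"build scratch data", 0)
    print(h.pread(18, 0))

    # ...and is gone once the connection is closed.
    h.shutdown()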

While local NVMe SSD RAIDs can max out a PCIe 16x slot given large enough blocks and enough queue depth, they still can't keep up with small-to-medium sync writes unless you can keep a deep queue filled. Lots of transaction-processing workloads require low-latency commits, which is where flash-backed DRAM can shine. DRAM requires neither wear leveling nor UNMAP/TRIM. If the power fails, you use stored energy to dump the DRAM to flash. On startup, you restore the content from flash while waiting for the stored energy to reach a safe operating level; once enough energy is stored, you erase enough NAND flash to be able to quickly write a full dump, and at that point the device is ready for use. If you overprovision the flash by at least a factor of two, you can hide the erase latency and keep the previous snapshot. Additional optimisations, e.g. chunked or indexable compression, can reduce the wear on the NAND flash, effectively using the flash like a simplified flat compressed log-structured file system. I would like two such cards in each of my servers as a ZFS intent log, please. If their price and capacity are reasonable enough, I would also like to use them either as L2ARC or for a special allocation class VDEV reserved for metadata, and maybe even as small-block storage for PostgreSQL databases.
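
To make the "low-latency commits" point concrete, here is a minimal Python sketch that measures synchronous-commit latency at queue depth 1, which is the number a DRAM-backed log device (such as a ZFS SLOG) is meant to improve; the test path is a hypothetical placeholder:

    import os
    import time

    # Measure small sync-write (commit) latency at queue depth 1.
    # Point PATH at a file on the device or pool you want to test.
    PATH = "/tank/sync-latency-test"   # hypothetical path
    RECORD = os.urandom(4096)
    N = 1000

    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
    start = time.perf_counter()
    for i in range(N):
        os.pwrite(fd, RECORD, i * len(RECORD))
        os.fsync(fd)   # each commit must reach stable storage before returning
    elapsed = time.perf_counter() - start
    os.close(fd)

    print(f"avg commit latency: {elapsed / N * 1e6:.0f} us, "
          f"{N * len(RECORD) / elapsed / 1e6:.1f} MB/s at queue depth 1")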

I use one for building C++. Granted, that's a bit niche, but a tmpfs over the build directory keeps the load off the SSD. I haven't actually checked that it's still faster in a while, but it certainly used to be. Have been doing that for five years or so.

  • > I use one for building C++. Granted that's a bit niche (...)

    Not niche at all. Using a RAM disk for building C++ applications is one of the oldest and most basic build optimization tricks around. It's especially relevant when using build cache tools like ccache, which can leave large C++ builds no longer CPU-bound but I/O-bound.
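
A minimal sketch of the tmpfs-build-directory idea from the two comments above, in Python, assuming Linux (where /dev/shm is a tmpfs mount) and a CMake project at a hypothetical ~/src/myproject; a dedicated tmpfs mount works the same way:

    import os
    import subprocess
    import tempfile

    # Out-of-tree C++ build with the build directory on tmpfs, so object files
    # and other build temporaries never touch the SSD.
    SRC_DIR = os.path.expanduser("~/src/myproject")   # hypothetical project path

    # /dev/shm is a tmpfs mount on typical Linux systems.
    with tempfile.TemporaryDirectory(dir="/dev/shm", prefix="build-") as build_dir:
        subprocess.run(["cmake", "-S", SRC_DIR, "-B", build_dir], check=True)
        subprocess.run(["cmake", "--build", build_dir, "-j", str(os.cpu_count())],
                       check=True)
        # Copy out anything worth keeping (binaries, compile_commands.json)
        # before the temporary directory is deleted here.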

For anything that requires temporary but really fast storage, RAM disks are still a thing. The number of valid use cases has gone down since SSDs became the norm, but there are still situations where disk I/O or the fear of wearing out an SSD are valid concerns.