DDRamDisk: RAM disk, a disk based on RAM memory chips

3 years ago (ddramdisk.store)

Interesting design. They use FPGAs to emulate NAND storage with DDR, then use a standard NAND SSD controller.

It doesn't perform any better than fast NVMe SSDs for larger, sequential operations. However, it appears to be an order of magnitude faster for the random 1K read/write operations.

It also has infinite durability relative to an SSD, though obviously your data isn't infinitely durable during a power outage scenario. Would be helpful to know how long that battery can keep it backed up.

  • Would be interesting if there were a case for slapping an SSD on the back of it and giving it just enough capacitor power to dump whatever is in RAM to the SSD when a power outage happens (and restore it upon boot). SSD writes would be very rare and only happen during power issues or when shutting it all down.

    • I can see this being great as an ephemeral OS disk. I would selectively sync the changes I want to keep, either to a local SSD, or even over a 10Gbps network, which would update the static image used for booting. It's kind of great knowing that a reboot would give you a fresh system, and at these speeds the system would _fly_.

  • >> infinite durability relative to an SSD

    Until the FPGA dies. Consumer-grade arrays are not eternal. Many have lifespans (>5% failure rate) as low as 2-5 years under regular use.

    • Is the failure rate higher than for other kinds of chips? And if so, why does that happen?

    • Any references on this? Googling didn't convince me. From my interpretation (maybe wrong) of Xilinx UG116, FPGAs are reliable.

  • > Would be helpful to know how long that battery can keep it backed up.

    They say: It has a built-in LiPo and stores your data for up to a year.

  • Speaking of durability, this has a very interesting characteristic that I very much like. Unlike an SSD, DRAM has inherently very poor data retention, so it is very easy and fast to erase anything. Unfortunately, since they are using an SSD controller, this might still not do what I expected it to do, but one of the big problems with SSDs is that you cannot truly delete anything once written. You can secure-erase the whole drive, but not just a few files; for that, all you have is the manufacturer respecting TRIM.

    The hope here is that because this controller uses DRAM, the contents of the drive can truly be erased quickly on command, while leaving other data intact.

    One use case for this would be systems where file level encryption is untenable, but data can potentially be recorded that you would like to cryptographically remove. It would also provide plausible deniability for erasure of contents.

  • But it must suffer from rowhammer.

I worked on a PC/DOS based instrument for testing permanent magnets back in the 80s. Because of all the magnets involved, we used a battery-backed Static RAM disk instead of a conventional hard drive. The "disk" consisted of a cartridge that plugged into an ISA expansion card in the PC. It was crazy expensive if I remember correctly. One of my contributions to the project was demonstrating that a conventional hard drive inside a steel computer case was actually quite invulnerable to the stray magnetic fields that we were working with. We could forgo the extra cost of the SRAM disk.

We also used a crazy expensive plasma display for the monitor, which turned out to be overkill. But that's a story for another time.

Not a new concept. Has been done a few times before. The use cases are very small, and getting smaller these days as motherboards accept larger and larger memory modules.

https://www.newegg.ca/gigabyte-gc-ramdisk-others/p/N82E16815...

When doing my PhD around 2004 I was running simulations with Fortran programs to do optimizations. A genetic algorithm would call the Fortran program and change the parameters by creating input files and reading output files.

I found out that disk access was the bottleneck of the optimisations. So I used a RAM disk software to create a disk-drive in RAM. It increased the simulation speed by orders of magnitude.

  • Yes, I have something called a RAMLink, which plugged into the back of an "ancient" Commodore 64 or 128. The RAMLink is expandable up to 16MB of RAM. Keep in mind the computers had 64 or 128K.

    Anyway, the RAMLink was powered but you could also get a battery backup for it (using a sealed lead-acid battery, like a miniature one used in a car). I could move an operating system called GEOS over to the RAMLink and watch it boot in less than 20 seconds, where it usually took 1.5 minutes to read off disks and eventually load. I could then move programs (word processing, graphics creation, terminal programs - you name it) over to the RAMLink and open and use them in 1-2 seconds max.

    This is from 1990 technology, running on computers from the mid-80s. RAM Drives/Disks are awesome.

    • The Apple IIgs had RAM disk support built in. It was an immense help to eliminate floppy access if you had more than 1MB of RAM which few programs could take advantage of.

  • Yep, a few years ago we were doing stuff that would hammer ten or twenty GB of design files for some hours. Tried moving it over to RAM-disk - from the server's 128 GB RAM - and it ran a few times faster.

    But in general it was not worth the setup hassle and the possible RAM-starving of other server jobs, stuff was generally not urgent.

  • Some years later (2015?) I tried to speed up the builds of JavaScript projects by moving my dev folder to a RAM disk, but it didn't really move the needle. So disk I/O wasn't the limiting factor for those builds.

    • Or your OS started caching the filesystem more aggressively in RAM, so any piece of code that does heavy I/O is effectively put on a RAM disk automatically.
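
      A quick way to see the page cache at work is to time the same large read twice (a minimal sketch, assuming Linux and Python; the file path is a placeholder for any big file you have lying around, and dropping the cache needs root):

          import os, time

          PATH = "big_input.bin"   # placeholder: any file of a few hundred MB

          def timed_read(drop_cache=False):
              if drop_cache:
                  # Needs root; skip it if you only care about the warm number.
                  os.system("sync; echo 3 > /proc/sys/vm/drop_caches")
              t0 = time.perf_counter()
              with open(PATH, "rb") as f:
                  while f.read(1 << 20):   # read in 1 MiB chunks
                      pass
              return time.perf_counter() - t0

          print("cold read: %.2fs" % timed_read(drop_cache=True))
          print("warm read: %.2fs" % timed_read())   # served straight from RAM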

In 1999 I was working for an NYC fintech startup (the first after-hours ECN to publish quotes on SuperSOES), and around then we paid an absolute fortune for a battery-backed-up giant RAM disk. I don't remember how much RAM it had, but it was a ruddy big rackmounted box with 2x car batteries and a lot of soldered RAM in it, and it allowed us to trade-match much faster than our competitors at the time. The RAM was presented as regular block devices to Linux, so we could do what we wanted with it. That RAM disk was our main competitive advantage, right up until the CEO decided to do crimes and the whole thing was bought up by E*Trade, staff got screwed over, and it all died a big messy death after 9/11. Loved that RAM disk tho.

  • In 2013 I was working for a mobile game company that dropped a metric shitload of cash in Big Blue's lap for an enormous "RAM SAN".

    IIRC it was a 1 or 2U unit with a few small lithium ion batteries inside. Phenomenal unit. It could have been configured as a block device but we used it for hot data in an auto-tiering setup with SSDs and rust in the other tiers.

    Loved it to bits. No matter what we threw at it, the network was always our bottleneck.

  • The Solace appliances use a card called the Assured Delivery Blade that basically consists of 4GB of DRAM attached to an FPGA on a PCIe card with a pair of 10Gbps SFP+ interfaces and a bunch of super capacitors to keep things running when the power fails. On power failure, the FPGA dumps the DRAM into flash to ensure persistence. The whole thing enables the persistent messaging feature to acknowledge reliable delivery of persistent messages in a few microseconds. The first version was developed back in 2008, well ahead of NVMe and other low latency storage mediums.

I've really wanted something like that where I could just drop my old RAM sticks and use it as swap space.

It would be much better than flash-based solutions, both latency- and endurance-wise, probably even over a USB link (3.1 and above speeds are pretty decent, and even 3.0 would be enough for basic swap).

Bonus points for a DIMM slot that just accepts any generation (DDR2,3,4, not sure if that would be mechanically possible?). I retired some DDR4 sticks for higher frequency ones, but the 8GB DDR4-2400 stick I have in the drawer would be quite welcome as swap space on the 4GB soldered RAM laptop I am using...

I may have a go at it myself; I don't think the controllers would be too complex if writing a custom kernel driver and targeting USB speeds.

  • It wouldn't be mechanically possible to support multiple generations[1]. That's not the only issue with mixing generations.

    There are also some interesting electrical engineering problems around driving the bus as you add more SIMMs; you'd probably need multiple CPLD/FPGA memory controllers beyond a certain point. Clocking gets interesting as well. Not impossible, just complicated for an amateur; I know I have problems getting things to work reliably over 33MHz or so.

    [1] https://www.simmtester.com/News/PublicationArticle/168

An interesting quirk of the later designs is that they're bring-your-own-RAM. That might be a worthwhile thing to do with a pile of DDR3 from an old server. I think I've got 256GB or so in a drawer somewhere that's otherwise unlikely to see any use.

Lithium battery strapped to consumer chips - so you can basically ignore the volatility aspect (possibly in exchange for an exciting new fire risk, not sure how professionally built these things are). That might be objectively better than a PCIe SSD, at least in terms of performance (particularly performance over time).

  • > An interesting quirk of the later designs is that they're bring-your-own-RAM. That might be a worthwhile thing to do with a pile of DDR3 from an old server. I think I've got 256GB or so in a drawer somewhere that's otherwise unlikely to see any use.

    The 256GB version is listed at $280. That's more than enough to buy the fastest 2TB SSDs on the market which will match the performance of this device for most real-world workloads.

    Now that SSDs have become so fast, RAM disks really only help with very specific workloads: Anything with a massive number of small writes or anything that needs extreme durability.

    These could be useful for certain distributed computing and datacenter applications where constant drive writes would wear out normal SSDs too fast.

    For most people, buying the card just to make use of some old DIMMs would cost a lot of money for virtually zero real-world performance gain. Modern NVMe SSDs are very fast and it's rare to find a workload that has extreme levels of random writes.

    • That's the previous version with memory soldered on. There's no price listed for the one with DIMM slots.

      However it turns out I was way out of date on nvme pricing. So that's awesome, if fairly bad news for this product.

    • I don't think you could buy the fastest 2TB SSD for $280. More like $20,000 for 6TB.

      The fastest SSDs are not Samsungs, ADatas, SanDisks or other "household" brands (yes, I know Samsung has enterprise models, without names, only part numbers, but even those models are not the "fastest").

      They come from specialist brands whose prices are not publicly available (or my google-fu is not strong enough).

      Everything you can buy for $280 will degrade when 75% full, and even sustained linear writes will tank after the first several tens of gigabytes, once the SLC cache is full. Not to mention random writes with small blocks.

      Special enterprise SSDs, like the DataEngine T2HP, cost much, much more (and don't come in 2TB models; it looks like they start at 4TB or even 6TB now, but still, it's not 2x or 3x of $280, it's more like 50x).

    • On the other hand, anyone who could buy a true enterprise SSD is not going to buy this DDRamDisk from a strange hobby-looking site :-)

  • > Lithium battery strapped to consumer chips - so you can basically ignore the volatility aspect (possibly in exchange for an exciting new fire risk, not sure how professionally built these things are)

    Do LiFePO4 instead. Nothing is fireproof but those are pretty tame.

    • > Do LiFePO4 instead. Nothing is fireproof but those are pretty tame.

      The reason LiFePO4 is safer than higher voltage Liion is because it has less energy per volume. But even LiFePO4 can catch fire if punctured.[1] Although often incorrectly claimed to be, LiFePO4 secondary cells are not "intrinsically safe," like NiMH secondary cells are.

      [1] https://www.youtube.com/watch?v=07BS6QY3wI8&t=3m17s

Since AmigaOS 1.3 (1987), Amiga has had a software version of a block device in memory which survives reboot, called "ramdrive.device", and usually mounted on DOS as "RAD:". It can even be a bootable device.

This is in addition to the, available since earlier, "Ram-Handler", a filesystem similar to Linux's ramfs, which does not survive reboots.

I wonder why this is not a commonly available and used thing.

I have always wanted to use more RAM chips than my CPUs/motherboards would support, put all the swap on RAM chips, probably also load the whole OS&apps system drive into RAM this way (hint for your board: add an SSD it would load from, single-time on turn-on) and only use a persistent storage drive for my actual data files.

Using more RAM instead of HDD/SSD always felt like a thing producing really great return in performance on investment in money as RAM is relatively cheap and really fast. The amount of RAM you were allowed to plug into a PC motherboard always felt like the most annoying limitation.

  • > I have always wanted to use more RAM chips than my CPUs/motherboards would support, put all the swap on RAM chips, probably also load the whole OS&apps system drive into RAM this way (hint for your board: add an SSD it would load from, single-time on turn-on) and only use a persistent storage drive for my actual data files.

    You could create a RAM disk post-boot and then copy apps into it or use it for a working directory.

    But you'll be disappointed to discover that virtually nothing benefits from this compared to a modern SSD. Copying files will be faster, but that's about it.

    Operating systems are already very good at caching data to RAM. Modern SSDs are fast enough to not be the bottleneck in most operations, from app loading to common productivity tasks.

    Even when we all had slower HDDs in our systems, a RAM disk wasn't a big enough improvement to warrant setting one up for most tasks. I remember reading a lot of experiments where people made RAM disks to try to speed up their development workflows, only to discover that it made no difference because storage wasn't the bottleneck.
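
    This is easy to verify at home: time the same directory copy onto a tmpfs and onto the SSD (a minimal sketch, assuming Linux where /dev/shm is a ready-made tmpfs; SRC is a placeholder for whatever project folder you have handy):

        import os, shutil, time

        SRC = os.path.expanduser("~/some-project")   # placeholder: any directory of files

        def timed_copy(dst):
            t0 = time.perf_counter()
            shutil.copytree(SRC, dst)
            elapsed = time.perf_counter() - t0
            shutil.rmtree(dst)
            return elapsed

        # Note: after the first pass SRC sits in the page cache, so both copies
        # read from RAM; the difference you see is purely the write side.
        print("copy to tmpfs: %.2fs" % timed_copy("/dev/shm/copytest"))
        print("copy to SSD  : %.2fs" % timed_copy(os.path.expanduser("~/copytest")))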

  • > The amount of RAM you were allowed to plug into a PC motherboard always felt like the most annoying limitation

    You could always get motherboards that took more RAM, just not ones that take your typical gaming CPUs and RGB. Currently there are standard desktop-sized ATX motherboards that take 3TB of DDR5, and ATX boards that take 1TB of DDR4 have existed for years.

    • *If money is not an issue. High-end desktop/server platforms often require less cost-efficient Registered or Load-Reduced RAM.

  • How much do you need? I think you can put 8 TB now.

    (And of course you get a lot better bandwidth and latency than hanging it off some IO attachment)

  • Why would you want to put swap on a physical disk that is effectively RAM? That seems like a very redundant solution, since swap as a concept becomes irrelevant if main memory and swap are both volatile and equally fast. At that point, just add more main memory. The kernel is designed explicitly under the assumption that the storage backing swap is orders of magnitude slower.

    • They clearly state the reason in the last sentence:

      > The amount of RAM you were allowed to plug into a PC motherboard always felt like the most annoying limitation.

  • Given the price of this DDR4 RAM expansion disk at, say, 1TB capacity, wouldn't it be cheaper just to buy a proper server board that has 16+ RAM slots and run the RAM disk in software?

I don't have a use for this, but I enjoyed the detailed write-up!

In these days of fast SSDs, are there still uses for a RAM disk, beyond extreme niches?

  • SSDs have wear which leads them to eventual failure. Wear isn't nearly as bad as it was a few years back, but you can still only write to a cell a limited number of times. If you're constantly writing data to your disks, you may need something that doesn't die.

    I would personally go with a "normal" RAM disk in this case, but CPUs only support a limited amount of RAM and a limited number of memory channels. Complex operations on RAM disks may also increase the load on the CPU, which can be a performance downside if you're doing things like compiling large code bases. Coupled with a battery backup, this looks like a pretty neat alternative to SSDs for write-heavy operations, assuming you persist the important data periodically on something else (such as a hard drive).

    I'd be wary of bit flips running this card, though. Without ECC, bitflips in RAM are just something you should be expecting. Normal RAM doesn't operate with the same data for an entire year, but this semi-permanent setup may be more vulnerable to bitflips.

    I know RAID cards will often contain a battery-backed RAM cache for file operations in case the power goes out; perhaps this card can be useful for that as well? With ZFS you can set up all kinds of fancy buffering/caching, and I imagine an SSD write cache would show wear and tear much faster than one of these cards, and you can't exactly hot-swap M.2 cards. A couple of cheap gigabytes of persistent write cache may just be the solution some people have been looking for.

  • A very useful use case I discovered just last week: local dedup management by the Synology C2 Backup Agent and TBW on the OS SSD.

    The C2 Backup agent stores dedup/chunk data by default in ProgramData, which lives on C:... which is usually an SSD nowadays.

    I noticed a 3:4 ratio between data written to the local dedup folder and data uploaded to the remote 5 TB C2 storage (I subscribed to C2 Business).

    TBW indeed grew horrifyingly fast on the SSD, and I estimated it would completely wear it out in about a year or so, with 2 TB (and growing) of data to back up under my standard retention scheme.

    So I made a 32 GB ImDisk RAM disk (16 GB was not enough for the peak size) with backup/restore at shutdown/startup (ImDisk supports this quite nicely), mounted it in place of the dedup folder, and ran my tasks.

    poof, reduced TBW on SSD by 99%.

    (4x16 GB DDR4 ECC Reg on my server, so not concerned about memory errors)
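
    The same save/restore idea can be scripted by hand where the RAM disk software doesn't provide it. A minimal sketch (both paths are placeholders; it assumes a RAM disk or tmpfs is already mounted at RAMDISK and PERSIST lives on real storage):

        import os, shutil

        RAMDISK = "/mnt/ramdisk/dedup"   # hypothetical RAM disk mount point
        PERSIST = "/data/dedup-image"    # hypothetical folder on the SSD/HDD

        def restore():
            """Run at startup: copy the persisted image into the RAM disk."""
            if os.path.isdir(PERSIST):
                shutil.copytree(PERSIST, RAMDISK, dirs_exist_ok=True)

        def persist():
            """Run at shutdown (or periodically): copy the RAM disk back out."""
            shutil.copytree(RAMDISK, PERSIST, dirs_exist_ok=True)

        restore()
        # ... the write-heavy workload hammers RAMDISK instead of the SSD ...
        persist()

    Calling persist() periodically bounds how much you lose on a crash, at the cost of giving back some of the TBW savings.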

    • I think the question was more tuned to physical ram disks, but I'm not sure.

      Either way, how many terabytes were being written each day? And how much can your drive take? It looks like I could go pay $60 right now for 600TB of endurance, and $35 for 200TB of endurance. If you already have the extra RAM then go for it, but it doesn't seem like a setup to make on purpose.

      Maybe your backup system has far more writes than mine? I have terabytes of backups but the average written for each daily backup is about 10GB.
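
      Back-of-the-envelope version of that endurance argument, using the numbers above (a rough sketch, not a statement about any particular drive's rating):

          # Rough SSD lifetime estimate from rated endurance and daily writes.
          endurance_tb = 600        # e.g. the $60 drive mentioned above
          daily_writes_gb = 10      # ~10 GB/day of backup churn

          years = endurance_tb * 1000 / daily_writes_gb / 365
          print(f"~{years:.0f} years of endurance at that write rate")
          # With ~10 GB/day, 600 TB of endurance lasts on the order of 160 years;
          # even at 1 TB/day it is still over a year and a half.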

  • SSDs currently peak somewhere around 7GB/s transfer speeds, while RAM can easily knock out well over 20GB/s (and that's a low estimate). So anything that benefits from fast transfer speeds and/or low latency will appreciate a RAM disk.

    SSDs are also consumable, as mentioned in other comments, so RAM disks are perfect for a scratch disk. HDDs can also serve as a scratch disk, but some tasks also appreciate the aforementioned faster transfer speeds and/or lower latency of SSDs or RAM.

    • > So anything that benefits from fast transfer speeds and/or low latency will appreciate a RAM disk

      Well, anything that doesn't require persistence!

    • You can easily get to about 20 GB/s by using PCI-E 4.0 NVMe in striped 4x configurations. Comparing this 16x setup to single lane SSD access is not a fitting comparison. With prices for NVME finally going down, you can get 8TB at those speeds for under USD 1k.

  • We used a network-backed temporary RAM disk in our RISC-V package build system. Each time a build started, it connected to the NBD server, which automatically created a RAM disk ("remote tmpfs"). On disconnection the RAM disk was thrown away. Which is fine for builders; I wouldn't much recommend it for anything else! https://rwmj.wordpress.com/2020/03/21/new-nbdkit-remote-tmpf...

  • While local NVMe SSD RAIDs can max out a PCIe x16 slot given large enough blocks and enough queue depth, they still can't keep up with small to medium sync writes unless you can keep a deep queue filled. Lots of transaction processing workloads require low-latency commits, which is where flash-backed DRAM can shine. DRAM requires neither wear leveling nor UNMAP/TRIM.

    If the power fails, you use stored energy to dump the DRAM to flash. On startup, you restore the contents from flash while waiting for the stored energy to reach a safe operating level; once enough energy is stored, you erase enough NAND flash to be able to quickly write a full dump. At this point the device is ready for use. If you overprovision the flash by at least a factor of two, you can hide the erase latency and keep the previous snapshot. Additional optimisations, e.g. chunked or indexable compression, can reduce the wear on the NAND flash, effectively using the flash like a simplified flat compressed log-structured file system.

    I would like two such cards in each of my servers as a ZFS intent log, please. If their price and capacity are reasonable enough, I would like to use them either as L2ARC or for a special allocation class VDEV reserved for metadata, and maybe even as small-block storage for PostgreSQL databases.
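
    The power-fail protocol described above, written out as a rough state machine (a sketch of the sequencing only; all names are made up, and real firmware does this in hardware/RTOS code rather than Python):

        # Pseudo-firmware sketch of a flash-backed DRAM device's life cycle.

        def startup(device):
            device.restore_dram_from_flash()         # bring back the last dump
            while not device.energy_store_charged():
                device.wait()                        # caps/battery must charge first
            device.pre_erase_flash_region()          # a future dump then needs no erase
            device.ready = True                      # only now accept host I/O

        def on_power_fail(device):
            device.stop_accepting_host_io()
            device.dump_dram_to_flash()              # runs entirely on stored energy

        def run(device):
            startup(device)
            while device.has_power():
                device.serve_host_io()               # DRAM-speed I/O, no wear leveling needed
            on_power_fail(device)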

  • I use one for building C++. Granted that's a bit niche, but a tmpfs filesystem over the build directory keeps the load off the SSD. Haven't actually checked it's still faster for a while but it certainly used to be. Have been doing that for five years or so.

    • > I use one for building C++. Granted that's a bit niche (...)

      Not niche at all. Using a RAM disk for building C++ applications is one of the oldest and most basic build optimization tricks around. It's especially relevant when using build cache tools like ccache, which lead large C++ builds to no longer be CPU bound and become IO bound.

  • For anything that requires temporary but really fast storage, RAM disks are still a thing. The number of valid use cases has gone down since SSDs became the norm, but there are still situations where disk I/O or the fear of wearing out an SSD is a valid concern.

If only Intel hadn't cancelled Pmem. Insanely high speed, density to match NVMe, could have changed the way we use computers (or at least powered some killer DB servers)

  • Are you talking about Optane/3D-XPoint? The physics behind it seemed insane to me, amazing that they got it to work. I heard that the NVME protocol was originally designed with it in mind

    • Yeah, that stuff. Was recently discontinued, Micron pulled out, there's been some articles about why. Eventually I guess we'll have CXL, which might catch on, but then there's the delay for software support. It's a shame so much of computing is locked in to the "local minima" of current architecture it's difficult to break out into a new area of the search space.

      It would be cool to play with a computer with persistent storage at the center, surrounded by a ring of compute, for instance.

      And weren't we supposed to have memristors by now? ;)

    • The death knell for Optane was the fact that the persistent memory had error rates that required error correction. Lower error rates than flash, but greater than DRAM. This meant that remapping was required (wear leveling is a factor), which meant that the controllers couldn't operate within the narrow window of time required to hit DRAM-level latencies. With enough development they could have worked this out, but as Intel is addicted to monopoly-level margins on CPUs, they couldn't justify the expenditures on developing a memory technology that would take a decade+ to mature.

      There is MRAM available with DDR3 interfaces, albeit with a relatively small page size compared to standard DRAM. It's a bit expensive. We'll see if ReRAM ever gets commercialized. There are lots of persistent memory technologies possible, but it takes a lot of money to commercialize such a bleeding edge product. Especially when DRAM keeps getting faster (in bandwidth, not latency) interfaces every few years.

I used to use a piece of software called SuperSpeed RAMDisk on my gaming PC, because I had what at the time (over a decade ago) was a ginormous amount of RAM (32GB) and an at-the-time relatively new SATA SSD array, and I would put entire games into memory to nearly eliminate loading screens. These days, NVMe SSDs are so fast in typical use cases that I don't see much benefit to this. It'd be interesting to have, but I'd rather get a PCIe SSD than waste that slot on a RAM disk.

I like the idea of using multiple FPGAs to ["fanout"/"cascade"/"distribute"/"one-to-many proxy" -- choose the terminology you like best] the SM2262EN to multiple sticks of DDR3 RAM...

I'd be curious though, if the SM2262EN itself couldn't be replaced by yet another FPGA, and, if so, if the FPGA used for that purpose could be the exact same type as the other four...

If so -- then one could sort of think of that arrangement as sort of like a Tree Data Structure -- that is 2 levels deep...

But what would happen if we could make it 3 or more levels deep?

In other words, if we had 4 such boards and we wanted to chain them -- then we'd need another central memory controller (another FPGA ideally) -- to act as the central hub in that hierarchy...

It would be interesting, I think, to think of a future hardware architecture which allows theoretically infinite upscaling via adding more nested sub-levels/sub-components/"sub-trees" (subject to space and power and max signal path lengths and other physical constraints, of course...)

I also like the idea of an FPGA proxy between a memory controller and RAM... (what other possibilities could emerge from this?)

Anyway, an interesting device!

  • You can implement an SSD controller in an FPGA. That's how all the early server SSDs were implemented. I think my Fusion ioScale was one of them.

    It's just an enormous amount of effort. This already looks like a huge amount of engineering to do for what must be a very niche product.

    • Define "an enormous amount of effort" ?

      (What one person considers "effort" -- might very well be considered a relaxing and pleasurable and interesting exercise -- by another...

      For example, some people hate Math and consider performing Mathematical operations "effort" -- whereas some people love Math and could spend all day at it(!) -- and find the whole process relaxing and stimulating!

      It all depends on a given person's interest or disinterest, their affinity or aversion -- to a given line of endeavor...)

      So, define "an enormous amount of effort" ?

Around 1995(?) Erol's Internet used a static RAM based ram-drive device to process email for its tens (hundreds?) of thousands of users. Its larger brother was used to handle Usenet. Unfortunately the Usenet feed was growing like crazy and soon that large drive could not handle it.

... In 2010, some slightly nutty young engineers who had heard that story from the grey beards they worked with at a future very well-known company used a monster RAM disk as a single master for a very large MySQL instance, achieving a crazy boost in performance. Hard data persistence was achieved via replication to the regular spinning-rust slaves. While it worked really well for their application, no one ever battle-tested bad crashes in production...

... That led to a product around 2013(?)-2014(?) from Violin Memory which combined the RAM disk with, if I recall correctly, spinning disks to dump the data to in case of a power loss. The devices were absolutely amazing but did not gain a foothold in the market. I think they sold a few hundred units total. The product was abandoned in favor of flash arrays.

  • OMG Erol's internet was my first ISP as a kid in elementary school here in NJ. It provided my first experiences into the web and I remember it fondly because of that.

    One of the first tech-based mistakes I ever made was to convince my parents to switch from Erols (which had decent ping times for online gaming) to AOL (which had horrendously bad ping times), all because I thought I was missing out on the exclusive content that AOL provided. I do recall fun memories of living in AOL's walled garden, but giving up that ping time was horrendously bad. I once ripped the phone wire out of the jack in extreme frustration (first time tech made me angry lol!)

    We eventually switched to Earthlink (and then I think Juno?) once the AOL 1-year contract was up. Excellent ping times, but man, Erols will always have that spot in my memories.

    I miss all the excitement and innovation happening back then. I wish we still had mom-and-pop stores providing things like internet services. Even startups today don't feel like they could be run as simple "mom n pop" enterprises, although I'm sure there are plenty hiding in places we don't often look.

Back in the day we used something called the DDRdrive X1 as the ZFS ZIL drive (essentially a write log) on our high-performance database machines. It's a PCIe card with 4GB of RAM, 4GB of SLC flash and a supercap, so that in the event of a power failure the RAM is written out to flash.

https://ddrdrive.com/menu1.html

Where I work we handle massive nd2 time series images often reaching hundreds of GB. From image capture at the microscope to segmentation and some post processing steps the biggest bottleneck for us is disk speed. I'd be very interested to see how fast our pipeline is with one of these at our disposal.

  • > From image capture at the microscope to segmentation and some post processing steps the biggest bottleneck for us is disk speed.

    If you're doing sequential writes, this drive benchmarks slightly slower than the fastest PCIe 4 NVMe drives on the market.

    Upcoming PCIe 5 NVMe drives will be significantly faster than this.

    This unit is really only helpful for extremely small random writes or if you're doing so much writing that you exhaust the wear capacity of a normal SSD.

  • If your processing is full of random I/O, this would be the right tool.

Didn’t UUnet do something like this 25-30 years ago for Usenet indices, or something like that?

Fascinating read. The read/write performance figures are impressive. But the whole time I was reading the article I kept thinking... imagine the performance with DDR4... no, DDR5 RAM!

I'd love to get my hands on one of these and try out a PXE-booted OS.

  • There's a blog post about their DDR4 version from last month. Sustained read and write speeds of 15GB/s for sequential operations, with about 3GB/s for random I/O seem to be the expected throughput.

    I don't know what loads demand such high persistent throughputs, but that's one place SSDs still can't compete, as performance quickly drops when their DRAM cache fills up.

    Still, NVMe drives go up to 10GB/s these days; I think we're close to the point where PCIe overhead will leave these RAM drives unable to compete with persistent storage on performance. Preventing wear will be the only reason to go for them.

    If you want to experience the performance of a RAM-backed system, there's very little preventing you from dedicating a portion of RAM to a RAM disk, copying your boot image to it, and running directly from RAM. Several Linux recovery images are built to do exactly that, in fact. If you want to run Windows or something else that doesn't have such functionality out of the box, I imagine using a lightweight Linux VM to bootstrap (and forward) your OS peripherals may solve that problem for you as well.

  • Do you have DDR4 or DDR5 memory as your normal RAM? Then you can make a RAM disk using any number of software tools available on the internet, and then test said drive using AS SSD Benchmark (same as the article's author).

Isn't it simpler to buy a MB with 16 RAM slots? And more performant?

  • It’s interesting to consider the difference.

    The price grows when going to server/workstation motherboards / CPUs.

    And: What if you already have a 16-slot motherboard fully populated with RAM? You can add a whole another computer with 16 more slots, but that’s quite a bit of iron, and: How best to connect the two? Does there exist an interlink that shunts data between two computers at full PCIe 4.0 x4 speed? Or x8? And how to control processing on the second computer?

    I’m sure there are bigger motherboards yet, but afaik it always comes with further components – say, more physical CPU sockets that need to be populated?

    There are probably situations where this hardware is the simple way of doing a job.

    Also: If the current motherboard already has an unused PCIe slot, then it's kiiiiind of a free return on investment to use that bandwidth, by putting the existing I/O controller to use.

  • This board has a battery, so the memory is retained for up to a year between reboots. So you can copy your data to it once, and it's super fast.

    Though, using ordinary RAM and initializing it before use, copying say 128GB only takes on the order of seconds to tens of seconds these days.
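
    For scale, using throughput figures quoted elsewhere in this thread (a rough estimate, not a benchmark):

        # Time to pre-load a 128 GB RAM disk from fast storage, at various source speeds.
        size_gb = 128
        for source, gb_per_s in [("PCIe 4 NVMe", 7), ("DDR4 card, seq.", 15), ("striped NVMe", 20)]:
            print(f"{source:16s}: ~{size_gb / gb_per_s:.0f} s to copy {size_gb} GB")
        # Roughly 18 s, 9 s and 6 s respectively - tens of seconds rather than minutes.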

  • > Isn't it simpler to buy a MB with 16 RAM slots?

    Why do you think it's a good idea to assemble a whole new computer just because you want more storage?

I have a growing stack of RAM chips but also M.2 SSDs and SATA SSDs of varying capacity as I retire and upgrade old machines. It feels so wasteful to not have anything to use them for.

I wouldn't take up a precious M.2 SSD slot on my main machine for a 5-year-old 1TB drive, but I'd love to chuck 8 or 10 of them in some enclosure and build a nice performant NAS out of them. Alas, no such thing exists (just now some ARM SBCs are getting M.2 support, but only PCIe 3.0 x1).

  • > I wouldn't take up a precious M.2 SSD slot on my main machine for a 5-year-old 1TB drive, but I'd love to chuck 8 or 10 of them in some enclosure and build a nice performant NAS out of them. Alas, no such thing exists

    NVMe M.2 drives can go into PCIe slots with an adapter.

    If your motherboard supports bifurcation, you can even put 4 x M.2 drives into a single x16 slot: https://www.amazon.com/Adapter-4x32Gbps-Individual-Indicator...

    It wouldn't be difficult to find an older server motherboard with bifurcation support that could take 8 x M.2 drives with the right, cheap adapters. You'd have to read the manuals very carefully though.

    The limit is the number of PCIe lanes. ARM boards rarely have more than a couple of lanes. You really need server-grade chips with a lot of I/O.

    Or get one of the new cards with a PCIe Switch to connect 21 M.2 drives: https://www.apexstoragedesign.com/apexstoragex21
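
    The lane budget is easy to sanity-check (a small sketch, assuming each NVMe drive wants its usual x4 link):

        # How many lanes a pile of M.2 NVMe drives wants without a PCIe switch.
        lanes_per_drive = 4
        for drives in (4, 8, 21):
            print(f"{drives:2d} drives -> x{drives * lanes_per_drive} worth of lanes")
        # 4 drives fill an x16 slot via bifurcation; 8 need two slots (or a switch);
        # 21, as on the Apex card, only works because its onboard switch shares lanes.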

Uh, in which case is that supposed to fit? From the pictures it looks like it's a few inches too tall (not flush with the bracket). I seem to be missing something obvious.

  • Graphics cards are also that tall, so I guess this will fit in a gaming PC.

I remember doing something similar in Windows 3.1 back around '90-'91. Hard disks weren't quite fast enough to play very good video (I was using 3D Studio back then, and IIRC it was still owned by Autodesk as a spinoff product of AutoCAD), so you made a RAM disk and played the video from that. Only a few seconds at 640x480x256 (colors) though. I think I had 4 megs of RAM in that 486 machine.

Having this in M.2 format would make an awesome swap drive for a bunch of devices out there with undersized soldered RAM.

This seems like a good idea if the cost could be kept low.

Which is only possible if volume is high.

Which is unlikely.

Prices aren't listed. I'd like to see prices. I've always wanted to be able to use older RAM as swap. Or an array of older SD cards as an SSD. Or similar. Never was there a big enough market to make sane products, though.

A bit off-topic:

What they are doing may be very cool, but the language on the page does not inspire as much confidence and a sense of professionalism as it should.

I assume English is not their first language, so it would be good for them to get a good copy editor to fix the weird expressions and grammar errors in the article.

This is in the spirit of constructive criticism, and it matters, because I had a harder time parsing some of their explanation as a result of the language use.

Edit: Explanation of rationale of this comment; Removal of a personal experience

Just bought 128GB of DDR5 memory for a consumer board.

And 2x 2TB NVMe SSDs.

That system is rock solid and relatively cheap; it's not worth it to get custom-built hardware like this.

Any info on where these folks are headquartered? Their store is remarkable in its omission of any such info.

> This kind of disk is not able to retain data after the power is turned off (unless a supporting battery is used), but has an exceptionally high read/write speed (especially for random access) and an unlimited lifespan

What are some good use cases for this?

RAM disks are an old and well-established concept. I'm not clear on what this 'product' adds to them. https://en.wikipedia.org/wiki/RAM_drive

  • being cheap and available is a rather good addition.

    I'd definitely want the 256 GiB version for my OS, if it were in stock.

    Since they are apparently so common, can you point me to a solution that gives me 7 GiB/s transfer rate for around that same price?

    • I think a tmpfs on Linux with some fast DDR5 RAM may get you close; it may even be faster, but I'm not totally sure!

      Of course, getting 256GB of DDR5 or even DDR4 RAM is going to cost quite a lot, and most motherboards probably won't have enough slots for it! It would probably also end up much more expensive overall!
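
      For a rough answer to the 7 GiB/s question, you can measure a tmpfs directly (a minimal sketch assuming Linux, where /dev/shm is a tmpfs on most distros, and at least ~8 GiB of free RAM; actual numbers depend on your memory configuration):

          import os, time

          chunk = b"\0" * (1 << 26)   # 64 MiB buffer
          total = 8 << 30             # write 8 GiB in total

          t0 = time.perf_counter()
          with open("/dev/shm/throughput_test", "wb") as f:
              for _ in range(total // len(chunk)):
                  f.write(chunk)
          elapsed = time.perf_counter() - t0
          os.remove("/dev/shm/throughput_test")

          print(f"tmpfs sequential write: {total / elapsed / 2**30:.1f} GiB/s")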