Comment by exmadscientist
5 hours ago
Around the time of Optane's discontinuation, the rumor mill was saying that the real reason it got the axe was that it couldn't be shrunk any, so its costs would never go down. Does anyone know if that's true? I never heard anything solid, but it made a lot of sense given what we know about Optane's fab process.
And if no shrink ever came, was that because (a) it was possible but too hard; (b) there were known blockers to a die shrink; or (c) the execs didn't want to pay to find out?
I think it was killed primarily because the DIMM version had a terrible programming API. There was no way to pin a cache line, update it and flush, so no existing database buffer pool algorithms were compatible with it. Some academic work tried to address this, but I don’t know of any products.
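To make the "no way to pin, update, and flush" complaint concrete, here's a minimal sketch of the discipline persistent-memory code had to follow. On a real Optane DIMM the flush step would be CLWB on each dirty cache line followed by SFENCE (or a libpmem call); this sketch uses an mmap'd file with msync() as a portable stand-in, and the file path and function name are made up for illustration:

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Write `data` at offset 0 of `path` through an mmap'd view, then flush.
 * On Optane DIMMs the flush would be CLWB + SFENCE on the dirty cache
 * lines; msync(MS_SYNC) is the portable stand-in used here. */
static int persist_record(const char *path, const char *data)
{
    const size_t len = 4096;
    int fd = open(path, O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, (off_t)len) != 0)
        return -1;

    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { close(fd); return -1; }

    /* 1. Mutate in place: on Optane this lands in the CPU cache, not
     *    on the media itself. */
    strncpy(buf, data, len - 1);

    /* 2. Explicit flush: the update is NOT durable until this returns.
     *    There was no way to pin the line in cache and defer this step,
     *    which is what classic buffer-pool algorithms assume they can do. */
    int rc = msync(buf, len, MS_SYNC);

    munmap(buf, len);
    close(fd);
    return rc;
}
```

The pain point is step 2: every durable update forces an explicit flush at a point the hardware, not the algorithm, dictates, so eviction-based buffer-pool designs couldn't be ported as-is.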
The SSD form factor wasn't any faster at writes than NAND + capacitor-backed power loss protection. The read path was faster, but only in time to first byte; NAND had comparable or better throughput. I forget where the cutoff was, but I think it was below 4-16KB, which are typical database read sizes.
So, the DIMMs were unprogrammable, and the SSDs had a “sometimes faster, but it depends” performance story.
The DIMMs were their own shitshow and I don't know how they even made it as far as they did.
The SSDs were never going to be dominant at straight read or write workloads, but they were absolutely king of the hill at mixed workloads because, as you note, time to first byte was so low that they switched between read and write faster than anything short of DRAM. This was really, really useful for a lot of workloads, but benchmarkers rarely bothered to look at this corner... despite it being, say, the exact workload of an OS boot drive.
For years there was nothing that could touch them in that corner (OS drive, swap drive, etc), and to this day it's unclear whether the best modern drives can compete there.
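For anyone who wants to actually test that corner: a mixed random read/write run at low queue depth is the shape of the workload being described. A hedged fio job file sketch (the device path is a placeholder; the 70/30 split and QD1 are illustrative choices, not anything from the thread):

```ini
; fio job: mixed random I/O at queue depth 1, the "boot drive" corner
; that standard pure-read / pure-write benchmarks skip.
[mixed-qd1]
ioengine=libaio
filename=/dev/nvme0n1   ; placeholder -- point at the drive under test
rw=randrw
rwmixread=70            ; 70% reads / 30% writes, interleaved
bs=4k
iodepth=1               ; low QD exposes read/write switch latency
direct=1
runtime=60
time_based=1
```

At QD1 with interleaved reads and writes, a drive's turnaround time between the two dominates, which is exactly where Optane's low time-to-first-byte showed up.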
It sounds like they didn't do a good job of putting the DIMM version in the hands of folks who'd write the drivers just for fun.
The read path is sort of a wash, but writes are still unequalled. NAND writes feel like you're mailing a letter to the floating gate...
Isn't this addressed by newer PCIe standards? Of course, even the "new" Optane media reviewed in OP is stuck on PCIe 4.0...
That's at least physically half-plausible, but it would be a terrible reason if true. 3.5 in. format hard drives can't be shrunk any, and their costs are correspondingly high, but they still sell - newer versions of NVMe even provide support for them. Same for LTO tape cartridges. Perhaps they expected other persistent-memory technologies to ultimately do better, but we haven't really seen this.
Worth noting though that Optane is also power-hungry for writes compared to NAND. Even when it was current, people noticed this. It's a blocker for many otherwise-plausible use cases, especially re: modern large-scale AI where power is a key consideration.
> 3.5 in. format hard drives can't be shrunk any,
You're looking at the entirely wrong kind of shrinking. Hard drives are still (gradually) improving storage density: the physical size of a byte on a platter does go down over time.
Optane's memory cells had little or no room for shrinking, and Optane lacked 3D NAND's ability to add more layers with only a small cost increase.
Flash has the same shrink problem, and the solution for Optane was the same: go 3D.
I don't think the shrink problem is at all the same for the two technologies. There are some really weird materials and production steps in Optane that are simply not present when making Flash cells.
Durability drops quickly as flash cells shrink, so we won't see much smaller cells; the growth has come from MLC -> TLC -> QLC and stacking.