Comment by inasio
3 years ago
Are you talking about Optane/3D-XPoint? The physics behind it seemed insane to me, amazing that they got it to work. I heard that the NVME protocol was originally designed with it in mind
Yeah, that stuff. It was recently discontinued after Micron pulled out; there have been some articles about why. Eventually I guess we'll have CXL, which might catch on, but then there's the delay for software support. It's a shame that so much of computing is locked into the "local minimum" of the current architecture; it's difficult to break out into a new area of the search space.
It would be cool to play with a computer with persistent storage at the center, surrounded by a ring of compute, for instance.
And weren't we supposed to have memristors by now? ;)
Well, they were even more expensive than DRAM. It's unfortunate we still haven't found a way to further reduce the cost of DRAM, or at least produced a roadmap / projection towards $1/GB.
The death knell for Optane was that the persistent memory had error rates that required error correction: lower than flash, but higher than DRAM. This meant that remapping was required (wear leveling is a factor too), which meant that the controllers couldn't complete accesses within the narrow window of time needed to hit DRAM-level latencies. With enough development they could have worked this out, but as Intel is addicted to monopoly-level margins on CPUs, they couldn't justify the expenditure on developing a memory technology that would take a decade+ to mature.
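For intuition, here's a toy sketch of the indirection that remapping and wear leveling force onto every access (all names and the migration policy are made up for illustration; this is nothing like Optane's actual controller). The point is just that the extra table lookup on every access, plus the occasional hidden data migration, are overheads DRAM never pays, which is why squeezing them into a DRAM-class latency budget is so hard:

    /* Toy model of wear-leveled persistent memory (hypothetical;
       not Optane's real controller logic). Every host access pays
       for an indirection lookup, and "hot" lines occasionally pay
       for a migration too. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define NUM_LINES   8   /* toy capacity: 8 cache-line-sized blocks */
    #define SWAP_PERIOD 4   /* migrate a line after this many writes */

    static uint32_t remap[NUM_LINES];       /* logical -> physical line */
    static uint32_t write_count[NUM_LINES]; /* per-physical-line wear counter */
    static uint8_t  media[NUM_LINES][64];   /* the persistent medium itself */

    static void init(void) {
        for (uint32_t i = 0; i < NUM_LINES; i++) remap[i] = i;
    }

    /* Returns the physical line actually written, so the demo below
       can show the mapping moving around. */
    static uint32_t pm_write(uint32_t logical, const uint8_t *data) {
        uint32_t phys = remap[logical];  /* indirection: can't be skipped */
        memcpy(media[phys], data, 64);
        if (++write_count[phys] % SWAP_PERIOD == 0) {
            /* Naive wear leveling: swap this mapping with a neighbor's,
               physically exchanging the two lines' contents. The host
               never sees this work, but it eats the latency budget. */
            uint32_t other = (logical + 1) % NUM_LINES;
            uint32_t phys2 = remap[other];
            uint8_t tmp[64];
            memcpy(tmp, media[phys], 64);
            memcpy(media[phys], media[phys2], 64);
            memcpy(media[phys2], tmp, 64);
            remap[logical] = phys2;
            remap[other]   = phys;
        }
        return phys;
    }

    int main(void) {
        init();
        uint8_t buf[64] = {0};
        for (int i = 0; i < 10; i++) {   /* hammer one logical line */
            buf[0] = (uint8_t)i;
            printf("write %d -> physical line %u\n", i, pm_write(0, buf));
        }
        return 0;
    }

Hammering logical line 0 shows it migrating between physical lines every SWAP_PERIOD writes; a real controller does something far more sophisticated, but the indirection cost is structural.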
There is MRAM available with DDR3 interfaces, albeit with a relatively small page size compared to standard DRAM, and it's a bit expensive. We'll see if ReRAM ever gets commercialized. There are lots of possible persistent memory technologies, but it takes a lot of money to commercialize such a bleeding-edge product, especially when DRAM keeps getting faster interfaces (in bandwidth, not latency) every few years.