Comment by bullen
1 day ago
No MLC lasts 5 years, tops.
I have 5x SLC (40-60GB) drives from 2010 still running. Not a single SLC has failed for me ever.
I also have 5x MLC (120-400GB ones) drives that failed. All MLC have failed for me.
The stats don't look too good.
That's pure anecdata. We don't even know your workload or configuration.
Contrary anecdata: I just replaced my old SSDs: 2013 64GB 20nm MLC at 19% wear level and a 2018 500GB TLC at 34% wear level. Not because they failed, but because I had the OS on a 64GB RAID1 and needed more space. Only optimization was setting "noatime".
But that's still a horribly small N, so even the combined data is essentially meaningless.
BTW, I replaced them with a bunch of HGST DC SS200 1.6TB drives from 2018, two of which have about as much capacity as your 30 disks. The 15nm MLC NAND is rated for 3 DWPD and sits at a 3% wear level. The dual-ported SAS3 interface is overkill for me.
I went for a 5-disk RAID6, and could replace it another 8 times while still keeping some spare change for a visit to a gourmet restaurant.
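Those wear levels come straight from SMART; for anyone who wants to read the same numbers off their own drives, here's a rough Python sketch around smartctl. The attribute names are assumptions, since vendors report wear differently, so treat the regexes as best-effort rather than a universal recipe:

```python
#!/usr/bin/env python3
"""Best-effort wear-level check via smartctl (needs smartmontools, root)."""
import re
import subprocess
import sys

# Vendor-specific wear attributes; these names are assumptions and won't
# cover every drive (e.g. Samsung SATA, Intel SATA, NVMe health log).
PATTERNS = [
    ("Percentage Used (NVMe, 0 = new)", re.compile(r"Percentage Used:\s*(\d+)%")),
    ("Media_Wearout_Indicator (normalized, 100 = new)",
     re.compile(r"Media_Wearout_Indicator\s+\S+\s+(\d+)")),
    ("Wear_Leveling_Count (normalized, 100 = new)",
     re.compile(r"Wear_Leveling_Count\s+\S+\s+(\d+)")),
]

for device in sys.argv[1:]:
    out = subprocess.run(["smartctl", "-a", device],
                         capture_output=True, text=True).stdout
    for label, pat in PATTERNS:
        m = pat.search(out)
        if m:
            print(f"{device}: {label} = {m.group(1)}")
            break
    else:
        print(f"{device}: no recognized wear attribute found")
```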
How much are you writing to those drives?
Have you tried any drives that will extend their pseudo-SLC cache across the entire NAND, and then partitioning only 25-30% of the nameplate capacity? That'll get you a terabyte for less than $300.
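A back-of-the-envelope sketch of why 25-30% is the magic range, assuming a TLC drive whose controller can run every cell in single-bit mode (the 4 TB figure is illustrative, not any specific drive):

```python
# Illustrative math for underpartitioning a TLC drive to stay in pSLC.
# Assumes the controller can run any cell in SLC mode; real drives vary.

nameplate_tb = 4.0   # advertised TLC capacity (hypothetical 4 TB drive)

# A TLC cell stores 3 bits; run in SLC mode it stores 1, so the whole
# drive used as pSLC holds at most a third of nameplate:
pslc_ceiling_tb = nameplate_tb / 3          # ~1.33 TB

# Partitioning 25-30% of nameplate keeps the working set under that
# ceiling, with headroom for the controller to shuffle blocks:
for fraction in (0.25, 0.30):
    print(f"{fraction:.0%} partition = {nameplate_tb * fraction:.2f} TB "
          f"(pSLC ceiling ≈ {pslc_ceiling_tb:.2f} TB)")
```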
Personally I've had two SSDs in active use and both have done a lot better than that. One was MLC and died after 13 years, and the other is TLC and still working after 10 years.
I've had a few Kingston V300 120GB SATA MLC SSDs I bought on a stupid cheap sale at Microcenter and tossed into a RAID 0 for funzies in 2012. They're still running just fine after being online all the time for the last decade.
>No MLC lasts 5 years, tops.
Intel's 64GB X25-E is rated for about 2 PB of TBW.
An S3700 (400GB) is rated in the 7 PB TBW range and gets you 400GB of usable space, not 64GB.
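Putting those ratings on a common scale (assuming the conventional 5-year warranty window that DWPD figures are usually quoted against):

```python
# Convert a TBW rating to DWPD, assuming the conventional 5-year window.
def dwpd(tbw_tb: float, capacity_gb: float, years: float = 5.0) -> float:
    return (tbw_tb * 1000) / (capacity_gb * years * 365)

print(f"X25-E 64GB @ 2 PB TBW:  {dwpd(2000, 64):.1f} DWPD")   # ~17 DWPD
print(f"S3700 400GB @ 7 PB TBW: {dwpd(7000, 400):.1f} DWPD")  # ~9.6 DWPD
```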
>The stats don't look too good.
It seems to me that you're trying very hard not to look at stats, and insisting on extrapolating from a small sample of personal experience?
Frankly, for 3k you could have built a pure Optane rig of equivalent capacity that would have crushed both your X25-E suggestion and my S3700, if you're really obsessed with endurance.
I'm generally of the "meet people where they are and support their journey" persuasion, but when someone says a 64GB SATA 2 drive with no TRIM and really bad metrics across the board is their best SSD buy, I gotta say something.
Depends on the usage and the drive's claimed DWPD.
I've seen a Samsung 860 Pro (rated 0.6 DWPD) doing fine after years under LUKS (the worst case for an SSD). As soon as you go above 1 DWPD (real or effective), wearout is not a problem.
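For scale, here's what 0.6 DWPD works out to in absolute terms; the 1 TB drive size and 5-year rating window below are assumptions for illustration:

```python
# What a 0.6 DWPD rating buys you in absolute writes,
# assuming a 1 TB drive and the usual 5-year rating window.
capacity_gb = 1000
dwpd = 0.6

daily_budget_gb = capacity_gb * dwpd               # 600 GB of writes per day
implied_tbw_tb = daily_budget_gb * 5 * 365 / 1000  # ~1095 TB over 5 years
print(f"{daily_budget_gb:.0f} GB/day, implied rating ≈ {implied_tbw_tb:.0f} TBW")
```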
Why is LUKS bad for SSDs? I'm thinking of using LUKS for my USB thumb drive.
It's not. I think there were some corner cases where the storage controller or SSD may use compression, in which case the random-looking nature of LUKS data would cause more writes, but I'm not sure that's a real concern.
LUKS, like any other full-disk encryption, encrypts the whole device, which means your whole drive is filled with what looks like white noise, and you lose TRIM unless you explicitly enable discard passthrough (which has its own privacy trade-offs).
It would be fine for occasional write operations, but if you use it for the system drive you effectively run with that write amplification 24/7 (on a server, that is; my example up there was for a server, not a notebook, which would see only 8-12 hours of operation a day at most).
So it boils down to the usage pattern.
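One way to sanity-check that trade-off is to estimate years to wear-out under an assumed write-amplification factor. The TBW, host write rate, and WAF values below are all illustrative assumptions; real figures come from comparing your drive's host-vs-NAND write counters in SMART:

```python
# Years to wear-out at a given host write rate and write-amplification
# factor (WAF). All numbers here are illustrative assumptions.

def years_to_wearout(tbw_tb: float, host_gb_day: float, waf: float) -> float:
    nand_gb_day = host_gb_day * waf  # what actually hits the NAND
    return (tbw_tb * 1000) / (nand_gb_day * 365)

tbw_tb = 1200        # hypothetical rating for a 1 TB drive
host_gb_day = 50     # modest always-on server workload
for waf in (1.5, 3.0, 5.0):  # light/trimmed vs. full, no-TRIM worst case
    print(f"WAF {waf}: ~{years_to_wearout(tbw_tb, host_gb_day, waf):.0f} years")
```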