Comment by jamescun

9 years ago

Earlier this year, Seagate were showing off their 60 TB SSD [1], due for release next year.

So 100 TB in a single drive isn't too far off.

EDIT: Toshiba is teasing a 100 TB SSD concept, potentially for 2018 [2].

[1] 11th August - http://arstechnica.co.uk/gadgets/2016/08/seagate-unveils-60t...

[2] 10th August - http://www.theregister.co.uk/2016/08/10/toshiba_100tb_qlc_ss...

60 TB at 500 MB/s will take more than a day (about 33 hours) to read in full. This is the problem of drinking the ocean through a straw: even at SSD transfer rates, it's still a problem at scale. Clusters give you not only capacity but also a multiplication factor on transfer rates.
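
Back-of-the-envelope, assuming decimal units (1 TB = 10^12 bytes) and a sustained 500 MB/s sequential read; both figures are assumptions, not vendor specs:

  # Time to read one 60 TB drive end to end
  capacity = 60e12         # bytes (60 TB, decimal)
  throughput = 500e6       # bytes/s (500 MB/s sustained, assumed)
  hours = capacity / throughput / 3600
  print(hours)             # ~33.3 hours, i.e. more than a day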

  • Just use 24 of them interleaved/striped and loading the data takes only about an hour and a half (see the sketch below).

    • But then you need small disks (e.g. ~2.5 TB each; 60 TB striped across 24 drives needs only 2.5 TB per disk). My point is that huge-capacity drives are not appropriate in compute environments such as Hadoop; they're more suited to cold storage.
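
A minimal sketch of the striped case, assuming the same 500 MB/s per drive and idealized linear scaling (no controller or network bottleneck):

  # Aggregate throughput scales with the number of striped drives
  drives = 24
  minutes = 60e12 / (drives * 500e6) / 60
  print(minutes)           # ~83 minutes, roughly an hour and a half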