Comment by benoau
9 days ago
> However, due to the system’s large-capacity, low-performance storage structure, no external backups were maintained — meaning all data has been permanently lost.
Yikes. You'd think they would at least have one redundant copy of it all.
> erasing work files saved individually by some 750,000 civil servants
> 30 gigabytes of storage per person
That's 22,500 terabytes, about 50 Backblaze storage pods.
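Checking that (the pod capacity is my assumption, roughly a 60-bay Backblaze Storage Pod with 8 TB drives):

    # 750k civil servants x 30 GB each, decimal units.
    total_tb = 750_000 * 30 / 1000      # 22,500 TB (22.5 PB)
    pods = total_tb / 480               # assuming ~480 TB per Storage Pod
    print(total_tb, round(pods))        # 22500.0, 47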
Or even just mirrored locally.
It's even worse. According to other articles [1], the total data on the "G drive" was 858 TB.
It's almost farcical to calculate, but AWS S3 pricing is about $0.023/GB/month, which means the South Korean government could have had a reliable, redundantly stored backup of the whole dataset at about $20k/month. Or about $900/month if they opted for the "Glacier Deep Archive" tier ($0.00099/GB/month).
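Back-of-the-envelope in Python with those list prices (storage only, decimal units, ignoring request and egress fees):

    # Rough S3 storage cost for the reported 858 TB "G drive".
    TOTAL_GB = 858 * 1000                  # 858 TB in decimal GB

    S3_STANDARD  = 0.023                   # $/GB/month, list price
    GLACIER_DEEP = 0.00099                 # $/GB/month, list price

    print(f"S3 Standard:          ${TOTAL_GB * S3_STANDARD:,.0f}/month")   # ~$19,734
    print(f"Glacier Deep Archive: ${TOTAL_GB * GLACIER_DEEP:,.0f}/month")  # ~$849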
They did have a backup of the data ... in the same server room that burned down [2].
[1] https://www.hankyung.com/article/2025100115651
[2] https://www.hani.co.kr/arti/area/area_general/1221873.html
(both in Korean)
AWS? Linus Tech Tips has run multi-petabyte servers in their server closet just for sponsor money and for the fun of it. No need to outsource your national infrastructure to foreign governments: a moderate (in government terms) investment in a few racks across the country could have replicated everything for maybe half a year's worth of Amazon subscription fees.
But then they would depend on the security people on both sides agreeing on the WAN configuration. Easier to let everything burn in a fire and rebuild from scratch.
Exactly. Everyone here on Hacker News is talking about Azure/AWS/GCP as if they were the only correct way to store data. Americans are too self-centered; it's quite crazy.
I built an 840 TB storage server last month for $15,000.
840TB before or after configuring RAID?
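Good question: usable space depends heavily on the layout. A quick Python sketch (drive counts and sizes are hypothetical; ignores filesystem overhead and hot spares):

    # Usable capacity under common RAID levels for N equal drives.
    def usable_tb(n_drives, drive_tb, level):
        if level == "raid0":
            return n_drives * drive_tb           # no redundancy
        if level == "raid1":
            return drive_tb                      # full mirror
        if level == "raid5":
            return (n_drives - 1) * drive_tb     # one drive's worth of parity
        if level == "raid6":
            return (n_drives - 2) * drive_tb     # two drives' worth of parity
        raise ValueError(level)

    # e.g. 42 x 20 TB drives: 840 TB raw, but only 800 TB usable in RAID6
    print(usable_tb(42, 20, "raid6"))  # 800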
>AWS S3 has pricing of about $0.023/GB/month, which means ... about $20k/month
or outright buying hardware capable of storing 850 TB for the same $20k as a one-time payment. Gives you some perspective on how overpriced AWS is.
Where are you getting 850TB of enterprise storage for $20k?
I had 500TB of object storage priced last year and it came out closer to $300k
Couldn’t even be bothered to do a basic 3-2-1 (three copies, on two different media, one off-site)! Wow
Did you expect government IT in a hierarchical respect-your-superiors-even-when-wrong society to be competent?
I have almost 10% of that in my closet, RAID5'd, with a large part of it backing up continuously to Backblaze for $10/month, all running on 10-year-old hardware where basically only the hard drives have any value ... I used a case made of cardboard until I wanted to improve the cooling and got a used Fractal Design case for €20.
_Only_ this combination of incompetence and bad politics could lead to losing this large a share of the data, given the policy was to save everything on that "G-drive" and avoid local copies. And the "G-drive" they intentionally did not back up because they couldn't figure out a solution to at least store a backup across the street ...
How does this even make sense business-wise for AWS?
Is their cost per unit really that low?
This is just the storage cost. That is, they will keep your data on their servers, nothing more.
Now if you want to do something with the data, that's where you need to hold on to your wallet. Either you use their compute ($$$ for Amazon) or you send the data to your own data centre (egress means $$$ for Amazon).
When you start to do the math, hard drives are cheap when you go for capacity rather than performance.
$0.00099 × 1000 is $0.99 per TB per month, so about $12 a year. Now extrapolate over a 5- or 10-year period and you get $60 to $120 per TB. Even at 3x to 5x redundancy, those numbers start to add up.
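The same math as a script, with a rough comparison against just buying drives (the drive price is my assumption):

    # Glacier Deep Archive cost per TB over time vs. a one-time drive purchase.
    RATE_PER_TB_MONTH = 0.00099 * 1000       # ~$0.99/TB/month
    for years in (1, 5, 10):
        print(f"{years:>2} yr: ${RATE_PER_TB_MONTH * 12 * years:,.0f}/TB")  # $12 / $59 / $119

    # Assuming a ~$300 20 TB drive (~$15/TB), even 3-5x redundancy
    # lands in the same ballpark as a decade of Deep Archive.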
They charge little for storage and upload, but download, i.e. getting your data back out, is pricey.
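For a sense of scale, pulling the full 858 TB back out over the internet at S3's first-tier list rate (volume discounts and retrieval fees would change the exact figure):

    # Egress cost to restore everything, at S3's headline internet rate.
    TOTAL_GB = 858 * 1000
    EGRESS_PER_GB = 0.09                   # $/GB, first pricing tier
    print(f"${TOTAL_GB * EGRESS_PER_GB:,.0f}")   # ~$77,220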
It's expensive if you calculate what it would cost a third party to offer the equivalent. Or see e.g. this graph from a recent HN submission: https://si.inc/posts/the-heap/#the-cost-breakdown-cloud-alte...
That's unfortunate.
It's incompetent really.
No. Fortuna had nothing to do with this; this is called bad planning.
You're assuming the average worker utilized the full 30 GB of storage. More likely the average was something like 0.3 GB.
On the other hand: backups should also include a version history of some kind, or you'd be vulnerable to ransomware.
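If the backup target were S3 anyway, object versioning plus a lifecycle rule gives you exactly that history. A minimal boto3 sketch (the bucket name is hypothetical; a real deployment would also want Object Lock for immutability):

    # With versioning on, a ransomware overwrite just creates a new version
    # on top; the clean older version stays restorable. Old versions are
    # expired after 90 days to cap storage growth.
    import boto3

    s3 = boto3.client("s3")
    bucket = "gov-gdrive-backup"   # hypothetical name

    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={"Rules": [{
            "ID": "expire-old-versions",
            "Status": "Enabled",
            "Filter": {},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
        }]},
    )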