The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information will be keeping their head low for a few days then...
"The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups."
The issue here is not refusing to use a foreign third party. That makes sense.
The issue is mandating the use of remote storage and not backing it up. That's insane. Off-site backups are the most basic preparation there is; they're recommended to even the smallest of companies specifically because a fire is a risk.
This. Speaking specifically from the IT side of things, an employer or customer refusing to do backups is the biggest red flag I can get, an immediate warning to run the fuck away before you get blamed for their failure, stego-tech kind of situation.
That being said, I can likely guess where this ends up going:
* Current IT staff and management are almost certainly scapegoated for “allowing this to happen”, despite the program in question (G-DRIVE) existing since 2017 in some capacity.
* Nobody in government will sufficiently question what technical reason was given to justify the lack of backups, why that was never addressed, or why the system went live with such a glaring oversight, because that would mean holding the actual culprits accountable for mismanagement
* Everyone involved is unlikely to find work again anytime soon once names are bandied about in investigations
* The major cloud providers will likely win several contracts for “temporary services” that in actuality strip the sovereignty the government had in managing its own system, even if they did so poorly
* Other countries will use this to justify outsourcing their own sovereign infrastructure to private enterprise
This whole situation sucks ass because nothing good is likely to come of it, other than maybe a handful of smart teams led by equally competent managers using this to get better backup resources for themselves.
Nothing increases the risk of servers catching fire like government investigators showing up to investigate allegations that North Korea hacked the servers.
It only makes sense if you are competent enough to manage data, and I mean any part of it, forever. It's not impossible, of course, but it is really not as trivial as the self-host crowd makes it out to be if you absolutely need a certain number of 9s of reliability. There is a reason why AWS etc. can exist. I am sure the cloud market is not entirely reasonable, but it is certainly far more reasonable than relying on some mid consultant to do this for you at this scale.
Yeah, the whole supposed benefit of an organization using cloud storage is to prevent stuff like this from happening. Instead, they managed to make the damage far worse by centralizing the data and increasing the amount that could be lost.
The issue is that without a profit incentive, of course it isn't X (backed up, redundant, highly available, or whatever other aspect gets optimized away by accountants).
Having worked a great deal inside AWS on these things: AWS provides literally every conceivable level of customer-managed security, down to customer-owned and customer-keyed datacenters operated by AWS, with master-key HSMs owned and purchased by the customer, customer-managed key hierarchies at all levels, and detailed audit logs of everything done by everyone, including AWS itself. The security assurance of AWS is far and away beyond what even the most sophisticated state-actor infrastructure does, and is more modern to boot, because its profit incentive drives that.
Most likely this was less about national security than about nationalism. The two are easily confused, but that's fallacious thinking, and they earned its dividends.
I very seriously doubt that the US cares about South Korea's deepest, darkest secrets that much, if at all.
Not using a cloud provider is asinine. You can use layered encryption so the expected lifetime of the cryptography is beyond the value of the data...and the US government themselves store data on all 3 of them, to my knowledge.
I say US because the only other major cloud providers I know of are in China, and they do have a vested interest in South Korean data, presumably for NK.
Agree completely that it's absolutely wild to run such a system without backups. But at this point no government should keep critical data on foreign cloud storage.
Why not? If the region is in-country, encrypted, and with proven security attestations validated by third parties, a backup to cloud storage would be incredibly wise. Otherwise we might end up reading an article about a fire burning down a single data center.
> The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information will be keeping their head low for a few days then...
They absolutely cannot be trusted, especially sensitive govt. data. Can you imagine the US state department getting their hands on compromising data on Korean politicians?
It's like handing over the govt. to US interests wholesale.
That they chose not to keep a backup, and then another, at different physical locations is a valuable lesson, and must lead to a better design next time.
Using the cloud would have been the easiest way to achieve the necessary redundancy, but by far not the only one. This is just a flawed concept from the start, with no real redundancy.
But not security. And for governmental data security is a far more important consideration.
Not losing data while keeping untrusted parties out of your data is a hard problem, and "cloud", aka "stored somewhere that is accessible by agents of a foreign nation", does not solve it.
There is some data privacy requirement in SK where application servers and data have to remain in the country. I worked for a big global bank and we had 4 main instances of our application: Americas, EMEA, Asia and South Korea.
I know there is legit hate for VMWare/Broadcom but there is a legit case to be made for VCF with an equivalent DR setup where you have replication enabled by Superna and Dell PowerProtect Data Domain protecting both local and remote with Thales Luna K160 KMIP for the data at rest encryption for the vSAN.
To add, use F710s, H710s and then add ObjectScale storage for your Kubernetes workloads.
This setup repatriates your data and gives you a Cloud like experience. Pair it with like EKS-A and you have a really good on premises Private Cloud that is resilient.
I was once advised to measure your backup security in zip codes and time zones.
You have a backup copy of your file, in the same folder? That helps for some "oops" moments, but nothing else.
You have a whole backup DRIVE on your desktop? That's better. Physical failure of the primary device is no longer a danger. But your house could burn down.
You have an alternate backup stored at a trusted friend's house across the street? Better! But what if a major natural disaster happens?
True life, 30+ years ago when I worked for TeleCheck, data was their lifeblood. Every week a systems operator went to Denver, the alternate site, with a briefcase full of backup tapes. TeleCheck was based in Houston, so a major hurricane could've been a major problem.
Not sure “sane backup strategy” and “park your whole government in a private company under American jurisdiction” are mutually exclusive. I feel like I can think of a bunch of things that a nation would be sad to lose, but would be even sadder to have adversaries rifling through at will. Or, for that matter, extort favors under threat of cutting off your access.
At least in this case you can track down said officials in their foxholes and give them a good talking-to. Good luck holding AWS/GCP/Azure accountable…
If you (as the SK government) were going to do a deal with " AWS/GCP/Azure" to run systems for the government, wouldn't you do something like the Jones Act? The datacenters must be within the country and staffed by citizens, etc.
A Microsoft exec testified that the US Govt can get access to the data Azure stores in other countries. I thought this was a wild allegation, but apparently it is true [0].
As a government you should not be putting your stuff in an environment under control of some other nation, period. That is a completely different issue and does not really relate to making backups.
> GCE instances and Persistent Disks within a zone exist in a single Google datacenter and are therefore unavoidably vulnerable to datacenter-scale disasters.
Of course, it's perfectly possible to have proper distributed storage without using a cloud provider. It happens to be hard to implement correctly, so apparently, the SK government team in question just decided... not to?
Usually these mandates are made by someone who evaluates "risks." Third-party risks are evaluated under the assumption that everything will be done sensibly in the 1p scenario; to boot, the 1p option looks cheaper, as disk drives etc. are only a fraction of the total cost.
Reality hits later when budget cuts/constrained salaries prevent the maintenance of a competent team. Or the proposed backup system is deemed as excessively risk averse and the money can’t be spared.
>The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information will be keeping their head low for a few days then...
They can't. The Trump admin sanctioning the International Criminal Court, and Microsoft blocking them from all services as a result, is proof of why.
They put everything in only one datacenter. A datacenter located elsewhere should have been set up to mirror it.
This has nothing to do with commercial clouds. Commercial clouds are just datacenters. They could have picked one commercial cloud data center and, likewise, not done much more to mirror or back up to different regions. I understand some of the services have inherent backups.
What a lame excuse. “The G-Drive’s structure did not allow for backups” is a blatant lie. It’s code for, “I don’t value other employees’ time and efforts enough to figure out a reliable backup system; I have better things to do.”
Whoever made this excuse should be demoted to a journeyman ops engineer. Firing would be too good for them.
It could be accurate. Let’s say, for whatever reason, it is.
Ok.
Then it wasn’t a workable design.
The idea of “backup sites” has existed forever. The fact you use the word “cloud” to describe your personal collection of servers doesn’t suddenly mean you don’t need backups in a separate physical site.
If the government mandates its use, it should have a hot site at a minimum. Even without that a physical backup in a separate physical location in case of fire/attack/tsunami/large band of hungry squirrels is a total must-have.
However it was decided that not having that was OK, that decision was negligence.
Silly to think this is the fault of ops engineers. More likely, the project manager or C-suite didn't have the time or budget to allocate to disaster recovery.
The project shipped, it's done, they've already moved us onto the next task, no one wants to pay for maintenance anyway.
This has been my experience in 99% of the companies I have worked for in my career, while the engineers that built the bloody thing groan and are well-aware of all the failure modes of the system they've built. No one cares, until it breaks, and hopefully they get the chance to say "I **** told you this was inadequate"
Days? That's optimistic. It depends on what govt cloud contained. For example imagine all the car registrations. Or all the payments for the pension fund
S3 features have saved our bacon a number of times. Perhaps your experience and usage is different. They are worth trusting with business critical data as long as you're following their guidance. GCP though have not proven it, their data loss news is still fresh in my mind.
I understand data sovereignty in the case where a foreign entity might cut off access to your data, but this paranoia that storing info under your bed is the safest bet is straight up false. We have post-quantum encryption widely available already. If your fear is that a foreign entity will access your data, you're technologically illiterate.
Obviously no person in a lawmaking position will ever have the patience or foresight to learn about this, but the fact they won't even try is all the more infuriating.
Encryption only makes sense if "the cloud" is just a data storage bucket to you. If you run applications in the cloud, you can't have all the data encrypted, especially not all the time. There are some technologies that make this possible, but none are mature enough to run even a small business, let alone a country on.
It sounds technologically illiterate to you because when people say "we can't safely use a foreign cloud" you think they're saying "to store data" and everyone else is thinking at the very least "to store and process data".
Sure, they could have used a cloud provider for encrypted backups, but if they knew how to do proper backups, they wouldn't be in this mess to begin with.
> The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information
They were still right though: it's absolutely clear, without an ounce of doubt, that whatever you put on a US cloud is accessible to the US government, which can also decide to sanction you and deprive you of the ability to access the data yourself.
Not having backups is entirely retarded, but also completely orthogonal.
I mean he's still right about AWS etc. with the current US Administration and probably all that will follow - but that doesn't excuse not keeping backups.
Woah, read the timeline at the top of this. The fire happened the very day the government ordered onsite inspection was supposed to start due to Chinese/NK hacking.
Phrack's timeline may read like it, but it wasn't an onsite inspection due to hacking; it was scheduled maintenance to replace the overdue UPS, hence the battery handling involved. Even the image they linked just says "scheduled maintenance."
Such coincidences do happen. 20 years ago, the plane carrying all the top brass of the Russian Black Sea Fleet, as well as the Fleet's accounting documentation for inspection in Moscow, burst into flames and fell to the ground while trying to get airborne. Being loaded with fuel, it immediately became one large infernal fireball. By some miracle, none of the top brass suffered even a minor burn or injury, while all the accounting documentation burned completely.
Who has the incentive to do this, though? China/North Korea? Or someone in South Korea trying to cover up how bad they messed up? Does adding this additional mess on top mean they looked like they messed up less? (And for that to be true, how horrifically bad does the hack have to be?)
"NK hackers" reminds me "my homework was eaten by a dog". It's always NK hackers that steal data/crypto and there is absolutely no possibility to do something with it or restore the data, because you know they transfer the info on a hard disk and they shoot it with an AD! Like that general!
How do we know it's NK? Because there are comments in north-korean language, duh! Why are you asking, are you russian bot or smt??
Though this is far from the most important point of the article, why do even the article's authors defend Proton after having their accounts suspended, and after what seems to be a Korean intelligence official warning them that they weren't secure? Even if they're perfectly secure, they clearly do not have the moral compass people believe they have.
Ohh, side note, but this was the journalist group that was blocked by Proton.
The timing as well is very suspicious, and I think there can be a lot of discussion about this.
Right now, to be honest, I am wondering most about the name, which might seem silly: "APT Down - The North Korean Files".
It seems that APT means "advanced persistent threat" in this case, but I am not sure what they mean by "APT Down": the fact that it got taken down by their journalism, or...? I am sorry if this seems naive, but on a serious note, this raises so many questions...
> 27th of September 2025, The fire is believed to have been caused while replacing Lithium-ion batteries. The batteries were manufactured by LG, the parent company of LG Uplus (the one that got hacked by the APT).
Witness A said, “It appears that the fire started when a spark flew during the process of replacing the uninterruptible power supply,” and added, “Firefighters are currently out there putting out the fire. I hope that this does not lead to any disruption to the national intelligence network, including the government’s 24 channel.”[1]
What sad news, as a Korean, to see a post about Korea at the top of HN during one of the largest Korean holidays.
I can share an anecdote about how slow tech adoption is in Korea. It is not exactly about tech in the public sector but in private companies. I assume the public sector has a slower adoption rate than private ones in general.
Just about a year ago I had a couple of projects with insurance companies. I won't name them, but they are the largest ones, whose headquarters you can find in the very center of Seoul. They often called me in because I was setting up on-premise servers for the projects. It was hard to understand their choices of database architecture when plugging it into the server I was setting up, and their data team seemed simply incompetent, not knowing what they were doing.
The wildest thing I found was that most office workers seemed to be using Windows 2000 to run their proprietary software. To be fair, I like software UIs with a lot of buttons and windows from that era. But alas, I didn't want to imagine myself connecting that legacy software to my then-current project's service. It didn't go that far in the end.
Back when I worked for Mozilla, I had the chance to go to Seoul to meet with various companies and some governmental ministries. This was when Korean banks and ecommerce sites required Internet Explorer and ActiveX controls for secure transactions. This meant that MacOS users or Linux users couldn't do secure transactions in Korea without emulating Win/IE.
> I can share an anecdote about how slow tech adoption is in Korea. It is not exactly about tech in the public sector but in private companies. I assume the public sector has a slower adoption rate than private ones in general.
I guess it's not all tech, but at least in telecoms I thought they were very quick to adopt new tech? 2nd in the world to commercially deploy 3G W-CDMA, world first LTE-Advanced [1], "first fairly substantial deployments" of 5G [2]. 90% of broadband via fibre (used to be #1 amongst OECD countries for some time, now it's only just #2).
Yes and no. They used to prefer everything on premise. Many try to move towards cloud especially newer companies. Major cloud providers you mentioned are not the usual choices though (maybe aws is the most common). They do have data centers in Seoul and try to expand their markets for South Korea. But government offers generous incentives for using domestic cloud providers like NHN which was mentioned in the article or Naver cloud.
Why does this work? Because Korean services rarely target global markets mainly due to language barrier. Domestic cloud usage is sufficient enough.
I think it's very interesting that Korea is probably the country with the fastest cultural adoption of new tech, e.g. #1 for ChatGPT, but on the other hand I can see as a web developer that new web tech is often adopted at a very slow rate.
Article comments aside, it is entirely unclear to me whether or not there were backups. Certainly no "external" backups, but potentially "internal" ones. My thinking is that disallowing backups and forcing all data into one place creates a prime target for the DPRK folks, right? I've been in low-level national defense meetings about security where things like "you cannot back up off site" are discussed, but there are often fire vaults[1] on site which are designed to withstand destruction of the facility by explosive force (aka a bomb), fire, flood, etc.
That said, people do make bad calls, and this would be an epically bad one, if they really don't have any form of backup.
Many years ago I was Unix sysadmin responsible for backups and that is exactly what we did. Once a week we rotated the backup tapes taking the oldest out of the fire safe and putting the newest in. The fire safe was in a different building.
When I visited the National Museum of Korea in Seoul, one of my favorite parts was exploring the exhibit dedicated to the backing up state data — via calligraphy, letterpress, and stone carving.
> "The Veritable Records of the Joseon Dynasty, sometimes called sillok (실록) for short, are state-compiled and published records, documenting the reigns of the kings of the Joseon dynasty in Korea. Kept from 1392 to 1865, they comprise 1,893 volumes and are thought to be the longest continual documentation of a single dynasty in the world."
> "Beginning in 1445, they began creating three additional copies of the records, which they distributed at various locations around Korea for safekeeping."
After the Japanese and Qing invasions of Korea, King Hyeonjong (1659–1675) started a project to collect calligraphy works written by preceding Joseon kings and carve them into stone.
It's somewhat surprising that these values didn't continue to persist in the Korean government.
Saw a few days ago that the application site for the GKS, the most important scholarship for international students in Korea, went offline for multiple days, surprising to hear that they really lost all of the data though. Great opportunity to build a better website now?
But yeah it's a big problem in Korea right now, lots of important information just vanished, many are talking about it.
I was the principal consultant at a subcontractor to a contractor for a large state government IT consolidation project, working on (among other things) the data centre design. This included the storage system.
I noticed that someone had daisy-chained petabytes of disk through relatively slow ports and hadn’t enabled the site-to-site replication that they had the hardware for! They had the dark fibre, the long-range SFPs, they even licensed the HA replication feature from the storage array vendor.
I figured that in a disaster just like this, the time to recover from the tape backups — assuming they were rotated off site, which might not have been the case — would have been six to eight weeks minimum, during which a huge chunk of the government would be down. A war might be less disruptive.
I raised a stink and insisted that the drives be rearranged with higher bandwidth and that the site-to-site replication be turned on.
I was screamed at. I was called unprofessional. "Not a team player." Several people tried to get me fired.
At one point this all culminated in a meeting where the lead architect stood up in front of dozens of people and calmly told everyone to understand one critical aspect of his beautiful design: No hardware replication!!!
(Remember: they had paid for hardware replication! The kit had arrived! The licenses were installed!)
I was younger and brave enough to put my hand up and ask “why?”
The screeched reply was: The on-prem architecture must be “cloud compatible”. To clarify: He thought that hardware-replicated data couldn’t be replicated to the cloud in the future.
This was some of the dumbest shit I had ever heard in my life, but there you go: decision made.
This. This is how disasters like the one in South Korea happen.
In private organisations you get competent shouty people at the top insisting on a job done right. In government you get incompetent shouty people insisting that the job gets done wrong.
> In private organisations you get competent shouty people at the top insisting on a job done right. In government you get incompetent shouty people insisting that the job gets done wrong.
Great post and story but this conclusion is questionable.
These kinds of incompetences or misaligned incentives absolutely happen in private organisations as well.
Much more rarely in my experience, having been at both kinds of organisations.
There’s a sort-of “gradient descent” optimisation in private organisations, established by the profit motive and the competitors nipping at their heels. There’s no such gradient in government, it’s just “flat”. Promotions hence have a much weaker correlation with competence and a stronger correlation with nepotism, political skill, and willingness to participate in corruption.
I’ve worked with may senior leaders in all kinds of organisations, but only in government will you find someone who is functionally
illiterate and innumerate in a position of significant power.
Obviously this is just a statistical bias, so there’s overlap and outliers. Large, established monopoly corporations can be nigh indistinguishable from a government agency.
I know you want to think of this as a lot of data, but this really isn't that much. It'll cost less than a few thousand to keep a copy in Glacier on S3, or a single IT dude could build a NAS at his home that could easily hold this data for a few tens of thousands, tops. The entire thing.
> However, due to the system’s large-capacity, low-performance storage structure, no external backups were maintained — meaning all data has been permanently lost.
Yikes. You'd think they would at least have one redundant copy of it all.
> erasing work files saved individually by some 750,000 civil servants
> 30 gigabytes of storage per person
That's 22,500 terabytes, about 50 Backblaze storage pods.
It's even worse. According to other articles [1], the total data of "G drive" was 858 TB.
It's almost farcical to calculate, but AWS S3 has pricing of about $0.023/GB/month, which means the South Korean government could have reliable multi-storage backup of the whole data at about $20k/month. Or about $900/month if they opted for "Glacier deep archive" tier ($0.00099/GB/month).
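A back-of-the-envelope sketch of that arithmetic, using the per-GB list prices quoted above (approximate and subject to change):

```python
# Rough cost check of the figures above; prices are the approximate list prices quoted.
total_gb = 858 * 1000            # 858 TB expressed in GB
s3_standard = 0.023              # USD per GB-month
glacier_deep_archive = 0.00099   # USD per GB-month

print(f"S3 Standard:          ${total_gb * s3_standard:,.0f}/month")          # ~$19,700/month
print(f"Glacier Deep Archive: ${total_gb * glacier_deep_archive:,.0f}/month") # ~$850/month
```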
They did have backup of the data ... in the same server room that burned down [2].
AWS? Linus Tech Tips has run multiple petabyte servers in their server closet just for sponsor money and for the cool of it. No need to outsource your national infrastructure to foreign governments, a moderate (in government terms) investment in a few racks across the country could've replicated everything for maybe half a year's worth of Amazon subscription fees.
I have almost 10% of that in my closet, RAID5'd, with a large part of it backing up constantly to Backblaze for $10/month, running on 10-year-old hardware, with basically only the hard drives having any value... I used a case made of cardboard until I wanted to improve the cooling, then got a used Fractal Design case for 20€.
Only this kind of combination of incompetence and bad politics can lead to losing this large a share of the data, given the policy was to save everything only on that "G-drive" and avoid local copies. And the "G-drive" they intentionally did not back up, because they couldn't figure out a way to at least store a backup across the street...
A lot of folks are arguing that the real problem is that they refused to use US cloud providers. No, that's not the issue. It's a perfectly reasonable choice to build your own storage infrastructure if it is needed.
But the problem is they sacrificed "Availability" in pursuit of security and privacy. Losing your data to natural and man-made disasters is one of the biggest risks facing any storage infrastructure. Any system that cannot protect your data against those should never be deployed.
"The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups."
This is not a surprise to them. They had knowingly accepted the risk of infrastructure being destroyed by natural and man-made disasters. I mean, WTF!
Here I was self conscious about my homelab setup and turns out I was already way ahead of the second most technologically advanced nation in the world!
I think they alluded to that earlier in the article:
>However, due to the system’s large-capacity, low-performance storage structure, no external backups were maintained — meaning all data has been permanently lost.
I think they decided that their storage was too slow to allow backups?
Seems hard to believe that they couldn't manage any backups... other sources said they had around 900TB of storage. An LTO-9 tape drive holds ~20TB uncompressed, so they could have backed up the entire system with 45 tapes. At 300MB/sec with a single drive, it would take them a month to complete a full backup, so seems like even a slow storage system should be able to keep up with that rate. They'd have a backup that's always a month out of date, but that seems better than no backup at all.
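A quick sketch of that tape arithmetic (LTO-9 native capacity and drive speed are the approximate figures quoted above):

```python
import math

data_tb = 900            # total data, TB (figure quoted from other sources)
tape_capacity_tb = 20    # approximate LTO-9 native capacity per cartridge
drive_speed_mb_s = 300   # approximate sustained native write speed, MB/s

tapes_needed = math.ceil(data_tb / tape_capacity_tb)          # 45 tapes
full_pass_days = (data_tb * 1e6 / drive_speed_mb_s) / 86400   # ~35 days on a single drive

print(f"{tapes_needed} tapes, ~{full_pass_days:.0f} days for one full pass on one drive")
```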
Too slow to allow batched backups. Which means you should just make redundant copies at the time of the initial save. Encrypt a copy and send it offsite immediately.
If your storage performance is low then you don't need fat pipes to your external provider either.
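A minimal sketch of that encrypt-and-ship-at-save-time idea, assuming Python's cryptography library; the paths and key handling below are placeholders, not a real design:

```python
from pathlib import Path
from cryptography.fernet import Fernet

# Placeholder key handling: in practice the key would live in a KMS/HSM, not beside the data.
key = Fernet.generate_key()
fernet = Fernet(key)

def save_with_offsite_copy(path: Path, data: bytes) -> bytes:
    """Write the primary copy locally and return an encrypted blob ready to ship offsite."""
    path.write_bytes(data)          # primary copy on the slow local storage
    return fernet.encrypt(data)     # ciphertext that can be handed to any third party

# The encrypted blob can be queued for upload to a remote provider as each file is saved,
# so the slow storage never has to sustain a big batched backup run.
```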
They either built this too quickly or there was too much industry corruption perverting the process and the government bought an off the shelf solution that was inadequate for their actual needs.
LTO-9 is ~$92/tape in bulk. A 4-drive library with 80-tape capacity costs ~$40k* and can sustain about 1 Gbps. It also needs someone to barcode, inventory, and swap tapes once a week, and an off-site vaulting provider like Iron Mountain. That's another $100k/year. Also, that tape library will need to be replaced every 4-7 years, so say 6 years. And those tapes wear out over X uses and sometimes go bad too. It might also require buying a server and/or backup/DR software. Furthermore, a fire-rated data safe is recommended for about 1-2 weeks' worth of backups and spare media. Budget at least $200k/year for off-site tape backups for a minimal operation. (Let me tell you about the pains of self-destructing SSL2020 AIT-2 Sony drives.)
If backups for other critical services and this one were combined, it would probably be cheaper to scale this kind of service rather than reinventing the wheel for just one use-case in one department. That would allow for multiple kinds of optimizations, like network-based backups to nearline storage that is then streamed more directly to tape, using many more tape drives, possibly tape silo robot(s), and perhaps splitting across 2-3 backup locations, obviating the need for off-site vaulting.
Furthermore, it might be simpler, although more expensive, to operate another hot-/warm-site for backups and temporary business continuity restoration using a pile of HDDs and a network connection that's probably faster than that tape library. (Use backups, not replication because replication of errors to other sites is fail.)
Or the easiest option is to use one or more cloud vendors for even more $$$ (build vs. buy tradeoff).
* Traditionally (~20 years ago), enterprise "retail" prices were set at around a 100% markup, allowing for up to around a 50% discount when negotiated in large orders. Enterprise gear also had a lifecycle of around 4.5 years; while it might still technically work after that, there wouldn't be vendor support or replacements, so enterprise customers are locked into perpetual planned-obsolescence consumption cycles.
Basically it all boils down to budget. Those engineers knew this was a problem and wanted to fix it, but that costs money. And you know, the bean counters in the treasury are basically like, "well, it works fine, why do we need that fix?", and the last conservative govt. was in full spending-cut mode. You know what happened there.
A key metric for recovery is the time it takes to read or write an entire drive (or drive array) in full. This is simply a function of the capacity and bandwidth, which has been getting worse and worse as drive capacities increase exponentially, but the throughput hasn't kept up at the same pace.
A typical 2005-era drive from two decades ago might have been 0.5 TB with a throughput of 70 MB/s, for a full-drive transfer time (FDTT) of about 2 hours. A modern 32 TB drive is 64x bigger but has a throughput of only 270 MB/s, which is less than 4x higher. Hence the FDTT is 33 hours!
That is the optimal scenario; things get worse in modern high-density disk arrays that may have 50 drives in a single enclosure with as little as 8-32 Gbps (1 GB/sec to 4 GB/sec) of effective bandwidth. That can push FDTT times out to many days or even weeks.
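The FDTT arithmetic above, spelled out:

```python
def fdtt_hours(capacity_tb: float, throughput_mb_s: float) -> float:
    """Full-drive transfer time: capacity divided by sustained throughput, in hours."""
    return capacity_tb * 1e6 / throughput_mb_s / 3600

print(f"2005-era 0.5 TB @ 70 MB/s:  {fdtt_hours(0.5, 70):.1f} h")   # ~2 h
print(f"Modern   32 TB @ 270 MB/s: {fdtt_hours(32, 270):.1f} h")    # ~33 h
```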
I've seen storage arrays where the drive trays were daisy chained, which meant that while the individual ports were fast, the bandwidth per drive would drop precipitously as capacity was expanded.
It's a very easy mistake to just keep buying more drives, plugging them in, and never going back to the whiteboard to rethink the HA/DR architecture and timings. The team doing this kind of BAU upgrade/maintenance is not the team that designed the thing originally!
> The stored data amounts to 858TB (terabytes), equivalent to 449.5 billion A4 sheets.
This attempt at putting it in perspective makes me wonder what would put it in perspective. "100M sets of harry potter novels" would be one step in the right direction, but nobody can imagine 100M of anything either. Something like "a million movies" wouldn't work because they are very different from text media in terms of how much information is in one, even if the bulk of the data is likely media. It's an interesting problem even if this article's attempt is so bad it's almost funny
Good article otherwise though, indeed a lot more detail than the OP. It should probably replace the submission. Edit: dang was 1 minute faster than me :)
> The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups.
This is why I don't really want to run my own cloud :)
Actually testing the backups is boring.
That said, once the flames are out, they might actually be able to recover some of it.
Hm, care to elaborate? I kinda liked this idea even though I know it shouldn't make much sense, but still, lol: would this have any benefits over testing backups, other than the excitement?
While I am sure a huge portion of valuable work will be lost, I am smirking thinking of management making a call, "So, if there is any shadow IT who has been running mirror databases of valuable infrastructure, we would have a no questions asked policy on sharing that right now".
I know that I have had to keep informal copies of valuable systems because the real source of truth is continually patched, offline, churning, whatever.
technically, it was the supervising technical director.
The only reason this happened (I don't think "working from home" was very common in 1999) was because she just had a baby! I love this story because it feels like good karma – management providing special accommodations for a new mom saves the show.
It was on their SGI workstation that they lugged home, but yeah, that's pretty much how they recovered most of the files. In the end they barely used the material.
If SK is anything similar to Germany or Japan in how they are digitizing their government processes, you'll probably be able to find paper printouts of all the data that was lost.
Funny, because the same thing happened in Nepal a few weeks ago. Protestors/rioters burned some government buildings, along with the tech infrastructure within them, so now almost all electronic data is gone.
Would this have been any different if these documents were stored non-electronically though? I understand that the whole point of electronic data is that it can be backed up, but if the alternative were simply an analog system then it would have fared no better.
Not sure where you got that info. Only physical documents were burned (intentionally, by the incumbents, you could argue); the digital backups were untouched.
Jim Hacker: How am I going to explain the missing documents to The Mail?
Sir Humphrey: Well, this is what we normally do in circumstances like these.
Jim Hacker: (reading) This file contains the complete set of papers, except for a number of secret documents, a few others which are part of still active files, some correspondence lost in the floods of 1967… Was 1967 a particularly bad winter?
Sir Humphrey: No, a marvelous winter. We lost no end of embarrassing files.
The real reason is that humans are way too optimistic in planning and, for some reason, tend to overlook rare but catastrophic risks.
I'm almost sure that the system had some sort of local replication and versioning that was enough to deal with occasional deletions, rollbacks, and single non-widespread hardware failures, so only the very catastrophic scenario of losing all servers at the same time (which surely wouldn't happen anytime soon) was left uncovered.
At a previous job I was not allowed to do disaster planning with customers after I told one of them that it was entirely possible to take out both of our datacenters with one plane crash. The two locations were a "safe" distance apart, but were also located fairly close to the approach of an airport, and a crashing passenger jet is big enough to take out both buildings.
Apparently I plan for the rather rare catastrophes, and not those customers care about day to day.
But it's extra surprising, because South Korea is a country where every young man is conscripted due to the threat of war with the north. If the conflict is serious enough for that, why hasn't someone thought about losing all the government data in a single artillery strike?
It's hard to believe this happened. South Korea has tech giants like Samsung, and yet this is how the government runs? Is the US government any better?
Software and information technology in Korea just sucks.
Buttons are JPEGs/GIFs, everything is on Java EE and on vulnerable old webservers, etc... A lot of government stuff supports only Internet Explorer even though it's long dead.
South Korean IT seemed to be stuck in 2007 just not too long ago, would be surprised if it has changed much in the last few years. Do the websites still require you to use internet explorer?
I was going to say, Samsung anything immediately makes me assume the software is awful. With a dose of zero privacy, a cloud-enabled door-knob, or something.
The first thing that comes to mind when I think of the South Korean government is the storied tradition of physical confrontation in their parliament along with more than a few viral videos of brawls and such over the years. It used to be better in the US, but with the intensity of discord in our government lately, I don't think anyone really knows anymore.
> The first thing that comes to mind when I think of the South Korean government is the storied tradition of physical confrontation in their parliament along with more than a few viral videos of brawls and such over the years
Our incompetence in the US is much more distributed. It wouldn't surprise me if the same kind of data isn't backed up, but at least it's dozens of separate federal agencies not-backing up their data in different physical places.
Not the same country but another example of a culturally similar attitude towards shame over failure: In Japan in 1985, Flight 123, a massive Boeing 747 carrying 524 people, lost control shortly after takeoff from Tokyo en route to Osaka.
The plane's aft pressure bulkhead catastrophically ruptured, causing total decompression at high altitude, severing all four of the massive plane's hydraulic systems and entirely tearing away its vertical stabilizer.
With these the 747 basically became uncontrollable and minutes later, despite tremendously heroic efforts by the pilots to turn back and crash land it with some modicum of survivability for themselves and the passengers, the flight slammed into a mountain close to Tokyo, killing hundreds.
The resulting investigation showed that the failed bulkhead had burst open due to faulty repair welding several years before. The two technicians most responsible for clearing that particular shoddy repair both committed suicide soon after the crash tragedy. One of them even left a note specifically stating "With my death I atone". (paraphrasing from memory here)
I can't even begin to imagine a modern Boeing executive or senior staffer doing the same.
Same couldn't be said for Japanese military officials after the tragedy though, so who knows about cultural tendencies:
Right after the crash, helicopters were making ready to fly to the scene (it was night by this point) and a nearby U.S military helicopter squadron also even offered to fly in immediately. The local JSDF administration however stood all these requests down until the following morning, on the claim that such a tremendous crash must not have left anyone alive, so why hurry?
As it turned out, quite a number of people had incredibly survived, and slowly died during the night from exposure to cold and their wounds, according to testimony from the four who did survive to be rescued, and doctors who later conducted postmortems on the bodies.
I like to think that at least one worker was loafing on a project that was due the next day and there was no way it was going to get done. Their job was riding on it. They got drunk to embrace the doom that faces them, only to wake up with this news. Free to loaf another day!
TL;DR: Estonia operates a Tier 4 (highest security) data center in Luxembourg with diplomatic immunity. Can actively run critical government services in real-time, not just backups.
This is because everything is in digital form. Essentially all government systems are digital-first, and for the citizen, often digital-only. If the data is lost, there may be no paper records to restore everything from land registry, business registry (operating agreements, ownership records), etc.
Without an out-of-country backup, a reversion to previous statuses means the country is lost (Estonia has been occupied a lot). With it, much of the government can continue to function, as an expat government until freedom and independence is restored.
> Estonia follows the “once-only” principle: citizens provide their data just once, and government agencies re-use it securely. The next step is proactive services—where the government initiates service delivery based on existing data, without waiting for a citizen’s request.
I wish the same concept existed in Canada as well. You absolutely have to resubmit all your information every time you make a request. On top of that, federal government agencies still mail each other the information, so what usually could be done in 1 day takes a whole month to process, assuming the postal service isn't on strike (spoiler: they are now).
I think Canada is one of the worst countries in efficiency and useless bureaucracy among 1st world countries.
I wanted to update some paperwork to add my wife as a beneficiary to some accounts. I go to the bank in person and they tell me “call this number, they can add the beneficiary”. I call the number and wait on hold for 30 minutes and then the agent tells me that they will send me an email to update the beneficiary. I get an email over 24 hours later with a PDF THAT I HAVE TO PRINT OUT AND SIGN and then scan and send back to the email. I do that, but then I get another email back saying that there is another form I have to print and sign.
This is the state of banking in Canada. God forbid they just put a text box on the banking web app where I can put in my beneficiary.
Not to mention our entire health care system still runs on fax!
It blows my mind that we have some of the smartest and well educated people in the world with some of the highest gdp per capita in the world and we cannot figure out how to get rid of paper documents. You should be issued a federal digital ID at birth which is attested through a chain of trust back to the federal government. Everything related to the government should be tied back to that ID.
Definitely. Especially when considering that there were 95 other systems in this datacentre which do have backups and
> The actual number of users is about 17% of all central government officials
Far from all, and they're not sure what's recoverable yet ("It's difficult to determine exactly what data has been lost.")
Which is not to say that it's not big news ("the damage to small business owners who have entered amounts to 12.6 billion Korean won.” The ‘National Happiness Card,’ used for paying childcare fees, etc., is still ‘non-functional.’"), but to put it a bit in perspective and not just "all was lost" as the original submission basically stated
Are we actually sure they didn't do due diligence?
These are the individual work files of civil servants. They will overwhelmingly be temporary documents they were legally obliged to delete at some point in the last 8 years. Any official filings or communications would have gone to systems of record that were not affected.
This is more a case of a very large fire, probably a once-in-a-decade stroke of bad luck, causing civil servants to lose hours of work on files they were working on. A perfect system could obviously have prevented this and ensured availability, but not without cost.
Having just visited South Korea last year, one thing that sort of caught me off guard was the lack of Google Maps or other major navigation systems. I wasn't aware, but it turns out anything considered "detailed mapping" infrastructure has to be run and stored on South Korean soil, probably with lots of other requirements. So you're stuck with some shoddy local mapping systems that are just bad.
There may have been a point in time when it made sense, but high-resolution detailed satellite imagery is plenty accessible, and someone could put roads and basic planning structure atop it, especially a foreign nation wishing to invade or whatever they're protecting against.
Some argument may be made that it would be a heavy lift for North Korea, but I don't buy it; it's incredibly inconvenient for tourists for no obvious reason.
Several other countries have similar requirements with regards to storing and serving maps locally.
If you take a moment to think about it, what's weird is that so many countries have simply resorted to relying on Google Maps for everyday mapping and navigation needs. This has become such a necessity nowadays that relying on a foreign private corporation for it sounds like a liability.
Why didn't you use Kakao Maps or Naver Maps? They're not shoddy and work just fine; even if you don't read Korean, you can quickly figure out the UI based on the icons.
This is literally comic. The plot of the live-action comic book movie "Danger: Diabolik" [1] has a segment where a country's tax records are destroyed, thus making it impossible for the government to collect taxes from its citizens.
I'm CTO of a TINY company, with pretty much exactly half this amount of data. I run all storage and offsite backups personally, because I can't afford a full-time sysadmin yet.
And the cost of everything is PAIN to us.
If our building burned down we would lose data, but only the data we are Ok with losing in a fire.
I'd love to know the real reason. It's not some useless tech... it's politics, surely.
The easy solution would be to use something like Amazon S3 to store documents as objects and let them worry about backup; but governments are worried (and rightly so) about the US government spying on them.
Thus, the not-so-easy-but-arguably-better solution would be to self-host an open source S3-compatible object storage solution.
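The S3-compatible part matters because standard clients then work against the self-hosted endpoint unchanged. A minimal sketch, assuming boto3 and an S3-compatible store such as MinIO or Ceph RGW; the endpoint, bucket, and credentials are placeholders:

```python
import boto3

# Point a standard S3 client at a self-hosted, S3-compatible endpoint.
# Everything below the endpoint/credentials is ordinary S3 usage.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.internal.example.gov",  # placeholder self-hosted endpoint
    aws_access_key_id="PLACEHOLDER",
    aws_secret_access_key="PLACEHOLDER",
)

s3.upload_file(
    "g-drive-backup.tar.zst.enc",          # encrypted archive produced elsewhere
    "offsite-backups",                     # placeholder bucket
    "daejeon/g-drive-backup.tar.zst.enc",  # object key
)
```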
Are there any good open source alternatives to S3?
Goodness, I have over 100TB at home and it cost less than two or three thousand dollars to put in place. That's like $25 per TB.
> The stored data amounts to 858TB (terabytes), equivalent to 449.5 billion A4 sheets.
No, the 858TB amounts to under $25k for the government of the 10th largest economy, of one of the most sophisticated countries on the planet, to put in place.
Two of those would be less than the price of a new Hyundai Grandeur car.
> “It’s daunting as eight years’ worth of work materials have completely disappeared.”
So they're clocking in at around 100TB/year, or 280GB a day. It's respectable, but not crazy. It's about 12GB/hr, doable with professional, server-level hardware, with backups moved over dedicated fiber to an offsite location. Multiply the price 10x and you can SSD the entire thing.
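Rough check of those rates:

```python
total_tb, years = 858, 8

tb_per_year = total_tb / years            # ~107 TB/year
gb_per_day = tb_per_year * 1000 / 365     # ~290 GB/day
gb_per_hour = gb_per_day / 24             # ~12 GB/hour

print(f"{tb_per_year:.0f} TB/yr, {gb_per_day:.0f} GB/day, {gb_per_hour:.1f} GB/hr")
```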
Even with data sovereignty consideration demanding an entirely 100% home grown solution rather than turning to AWS or Azure, there's no excuse. But it's not like the cloud providers don't already have CSAP certification and local, in country, sovereign clouds [1] with multiple geographic locations in country [2]
South Korea is full of granite mountains, maybe its time the government converts one into an offsite, redundant backup vault?
~1PB of data, with ingestion at a rate of 12GB per hour, is a tiny amount of data for a developed-world government to manage and back up properly. This is silly. Volume clearly should not have been a hindrance.
Backup operations are often complex and difficult - but then again it's been worked on for decades and rigorous protocols exist which can and should be adopted.
"However, due to the system’s large-capacity, low-performance storage structure, no external backups were maintained" ... "the G-Drive’s structure did not allow for external backups."
Clearly [in]competence was the single factor here.
This is what happens when you come up with all kind of reasons to do something yourself, which you are not qualified to do, rather than simply paying a vendor to do it for you.
The most sophisticated countries and companies are smart enough to use the least sophisticated backup methods. SK needs to back up their data to cassette tapes; tape libraries cost a bit more than that, but not much. Even if they boat their tapes over to an Iron Mountain facility in the US, I can't imagine the equipment and service fees would cost them more than a few hundred grand. They'll be spending more on the headcount to manage the thing.
The operational expenses of this stuff dwarfs the hardware cost. For the tape mountain, you need robots to confirm the tapes still work (mean time to detection of device failure and recovery are key for RAID durability computations). So, someone needs to constantly repair the robots or whatever.
If I was being paid to manage that data set, I’d probably find two enterprise storage vendors, and stick two copies of the data set on them, each with primary secondary backup. Enterprise flash has been under a dollar a gigabyte for over a decade, so that’s under $1.7M per copy, amortized over five years. That’s $700K per year, and one of the four copies (at 3-4 sites) could be the primary store.
(I can’t be bothered to look up current prices, but moore’s law says there have been six capacity doublings since then, and it still applies to flash and networking, so divide my estimate by 2^6 — so, ten-ish grand per year, with zero full time babysitters required).
But it would not have been $25k, it would have been 1-2 million for an “enterprise grade” storage solution from Dell or a competitor. Which isn’t much compared with your granite mountain proposal, nor with the wages of 750,000 civil servants, but it’s a lot more than $25k.
The article reads like they actually have a fault-tolerant system to store their data. This is probably a data dump for whatever files they are working with that might have started out as a cobbled-together prototype that just picked up momentum and pushed beyond its limitations. Many such cases not only in government IT...
> The Ministry of the Interior and Safety also issued guidelines to each ministry stating, “All work materials should not be stored on office PCs but should be stored on the G-Drive.”
They very well might have only been saving to this storage system. It was probably mapped as a drive or shared folder on the PC.
Do they? It's not clear if this was two-way sync or access on-demand.
Like, I use Google Drive for Desktop but it only downloads the files I access. If I don't touch a file for a few days it's removed from my local cache.
DR/BCP fail. The old adage that companies that lose all of their data typically go out of business within 6 months I guess doesn't apply when it's the government.
At a minimum, they could've stored the important bits like financial transactions, personnel/HR records, and asset inventory database backups to Tarsnap [0] and shoved the rest in encrypted tar backups to a couple of different providers like S3 Glacier and/or Box.
Business impact analysis (BIA) is a straightforward way of assessing risks: probability of event * cost to recover from event = approximate budget for spending on mitigation.
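A toy illustration of that formula, with entirely made-up numbers:

```python
# Hypothetical events: (annual probability, cost to recover in USD). Figures are illustrative only.
events = {
    "datacenter fire":      (0.01, 500_000_000),
    "ransomware":           (0.05,  50_000_000),
    "single array failure": (0.30,   1_000_000),
}

for name, (p, cost) in events.items():
    # Expected annual loss gives a rough ceiling for how much mitigation spend is justified.
    print(f"{name:22s} expected annual loss ~ ${p * cost:,.0f}")
```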
And, PSA: test your backups and DR/BCP runbooks periodically!
I wonder how many IT professionals were begging some incompetent upper management official to do this the right way, but were ignored daily. You'd think there would be concrete policies to prevent these things...
The lack of backups makes my blood boil. However, from my own experience, I want to know more before I assign blame.
The very first "computer guy" job I had starting in about 1990/1991, my mentor gave me a piece of advice that I remember to this day: "Your job is to make sure the backups are working; everything else is gravy."
While I worked in that job, we outgrew the tape backup system we were using, so I started replicating critical data between our two sites (using 14400 bps Shiva NetModems), and every month I'd write a memo requesting a working backup system and explaining the situation. Business was too cheap to buy it.
We had a hard drive failure on one of our servers, I requested permission to invalidate the drive's warranty because I was pretty sure it was a bad bearing; I got it working for a few weeks by opening the case and spinning the platter with my finger to get it started. I made sure a manager was present so that they could understand how wack the situation was- they bought me a new drive but not the extras that I asked for, in order to mirror.
After I left that job, a friend of mine called me a month later and told me that they had a server failure and were trying to blame the lack of backups on me; fortunately my successor found my stack of memos.
Yeah. I've seen it. Had one very close call. The thieves took an awful lot of stuff, including the backups, had they taken the next box off the server room rack the company would have been destroyed. They stole one of our trucks (which probably means it was an inside job) and appear to have worked their way through the building, becoming more selective as they progressed. We are guessing they filled the truck and left.
I must say, at least for me personally, when I hear about such levels of incompetence it rings alarm bells in my head, making me think that maybe intentional malice was involved. Like someone higher up set the whole thing up to happen this way because there was a benefit to it that we are unaware of. I think this belief maybe stems from a lack of imagination about how really stupid humans can get.
I'm sure they had dozens of process heavy cybersecurity committees producing hundreds if not thousands of powerpoints and word documents outlining procedures and best practices over the last decade.
There is this weird divide between the certified class of non-technical consultants and the actual overworked techs pushed to cut corners.
> "The outage also hit servers that host procedures meant to overcome such an outage... Company officials had no paper copies of backup procedures, one of the people added, leaving them unable to respond until power was restored."
One of the workers jumped off a building. [1] They say the person was not being investigated over the incident. But I can't help but think he was put under intense pressure to be the scapegoat for how fucked up Korea can be in situations like this.
To give some context on the Korean IT scene: you get pretty good pay and benefits if you work for a big product company, but you will be treated like dogshit inside subcontracting hell if you work anywhere else.
> There is a cert and private key for rc.kt.co.kr, South Korea Telecom's Remote Control Service. It runs remote support backend from https://www.rsupport.com. Kim may have access to any company that Korea Telecom was providing remote support for.
> A firefighter cools down burnt batteries at the National Information Resources Service (NIRS) in Daejeon on Sept. 27. [YONHAP]
New caption:
> A firefighter wants to see the cool explosive reaction between water and lithium at the National Information Resources Service (NIRS) in Daejeon on Sept. 27. [YONHAP]
It's bizarre how easy it is to make smart people on HN just assume that people who are doing something weird are simply low-IQ.
It's almost a weird personality trait: a trained programmer just goes around believing nobody around him understands which way the wind blows.
A government installation for backups, for a government ruled by a weird religious sect, has no offsite backups and it goes up in flames? Well, clearly they were not smart enough to understand what an off-site backup is.
It's like, wtf, guys?
Now don't get me wrong: Occam's razor says they tried to save a few bucks and it all went Pete Tong. Carelessness, chance, sure, but I doubt it's all down to stupidity.
Yeah, all this chatter about technologies and processes that could have saved this: you don't think someone in all of Korean government knew about that?
The problem is more likely culture, hierarchy or corruption. Guaranteed several principal security architects have been raising the alarm on this internally, along with much safer, redundant, secure alternatives that came with an increased cost. And decision makers who had a higher rank/social/networking advantage shot it down. Maybe the original storage designer was still entrenched there and sabotaging all other proposals out of pride. Or there's an unspoken business relationship with another department providing resources for that data center that generates kickbacks.
Assuming nobody knows how to do an offsite backup or is plain ignorant of risk over there is arrogant.
It's a common problem in any field that presumably revolves around intellect, since supposedly being smarter gets you further (it may, but it is not enough on its lonesome).
People, in general, severely overestimate their own intelligence and grossly underestimate the intelligence of others.
Consider for a moment that most of the geniuses on hacker news are not even smart enough to wonder whether or not something like IQ is actually a meaningful or appropriate way to measure intelligence, examine the history of this notion, question what precisely it is we mean by that term, how its use can vary with context, etc. etc.
I know Korea is a fast-changing place, but while I was there I was taught and often observed that the value of "ppalli ppalli" (hurry hurry) was often applied to mean that a job was better done quickly than right, with predictably shoddy results. Obviously I have no insight into what happened here, but I can easily imagine a group of very hurried engineers feeling the pressure to just be done with their G-Drive tasks and move on to other suddenly urgent things. It's easy to put off preparation for something you don't feel will ever come.
I'm going to check all the smoke detectors in my house tomorrow :D
Assume the PHBs who wouldn't spring for off-site backups (excuses, by contrast, are "free") also wouldn't spring for fire walls, decently trained staff, or other basics of physical security.
Easy: some electrical fault. Look at OVH, with its WOODEN FLOOR and bad management decisions. But of course the servers had automatic backups … in a datacenter in the same building. A few companies lost EVERYTHING and had to close because of this.
>the G-Drive’s structure did not allow for external backups
That should be classified as willful sabotage. Someone looked at the cost line for having backups in another location and slashed that budget to make numbers look good.
It is very unlikely that low performance would have prevented any backup; this was slow-changing data. Here, the real inability to do a good, solid backup was taken as an excuse not to do anything at all.
Wow. That is genuinely one of the most terrifying headlines I've read all year.
Seriously, "no backups available" for a national government's main cloud storage? That’s not a simple IT oversight; that’s an epic, unforgivable institutional mistake.
It completely exposes the biggest fear everyone in tech has: putting all the eggs in one big physical basket.
I mean, we all know the rule: if it exists in only one place, it doesn't really exist. If your phone breaks, you still have your photos on a different server, right? Now imagine that basic, common-sense rule being ignored for a country’s central data.
The fire itself is a disaster, but the real catastrophe is the planning failure. They spent millions on a complex cloud system, but they skipped the $5 solution: replicating the data somewhere else—like in a different city, or even just another building across town.
Years of official work, policy documents, and data—just gone, literally up in smoke, because they violated the most fundamental rule of data management. This is a massive, expensive, painful lesson for every government and company in the world: your fancy cloud setup is worthless if your disaster recovery plan is just "hope the building doesn't burn down." It’s an infrastructure nightmare.
Well, I'll be. Backup is a discipline not to be taken lightly by any organization, especially a government. Fire? This is backup 101: files should be backed up and copies should be kept physically apart to avoid losing data.
There are some in this thread pointing out that this would be handled by cloud providers. That's a bad take: you can't hope for transparent backups, you need to actively maintain a discipline around them.
My fear is that our profession has become very amateurish over the past decade and a lot of people are vulnerable to this kind of threat.
It is the same in Japan. They are really good for hardware and other "physical" engineering disciplines, but they are terrible when it comes to software and general IT stuff.
Seriously, I work here as an IT guy and I can't stop wondering how they could become so advanced in other areas and stay so backwards in anything software-related except videogames.
Yeah. This is my exact experience too wrt Japan! The Japanese somehow can't assess or manage the scale, the complexity, the risk, the effort or the cost of software projects. Working in Japan as a software guy feels like working in a country lagging 30-40 years behind :/
How could you even define that as a ‘cloud’? Sounds like good old client-server on a single premise, and no backup whatsoever. Can’t have had very secure systems either.. perhaps they can buy back some of the data off the dark web.. or their next-door neighbor.
That may not be a perfect answer. One issue with fire suppression systems and spinning rust drives is that the pressure change etc. from the system can also ‘suppress’ the glass platters in drives as well.
At first you think, what kind of incompetent government would do such things, but even OVH pretty much did the same a few years ago, destroying some companies in the process. A wooden floor in a datacenter, with backups in the same building …
Isn't that self-evident? Do you have two microwaves from different batches, regularly tested, solely for the eventuality that one breaks? Systems work fine until some (unlikely) risk manifests...
Idk if this sounds like I'm against backups, I'm not, I'm just surprised by the question
Does anyone have an understanding of what the impact will be of this, i.e., what kind of government impact scale and type of data are we talking about here?
Is this going to have a real impact in the near term? What kind of data are we talking about being permanently lost?
One of the lessons I learned from my Network Administration teacher was that if you're ultimately responsible for it and they say no backups?
You tack on the hours required to do it yourself (this includes the time you must spend actually restoring from the backups to verify integrity, anything less can not be trusted). You keep one copy in your safe, and another copy in a safety deposit box at the bank. Nobody ever has to know. It is inevitable that you will save your own ass, and theirs too.
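If you do go down that road, the restore-verification step is easy to script. Here is a minimal sketch in Python (the two paths are hypothetical placeholders for the live data and a scratch restore, not anyone's actual layout) that compares checksums of a test restore against the original tree:

```python
import hashlib
import pathlib

def tree_digest(root: str) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    base = pathlib.Path(root)
    return {
        str(p.relative_to(base)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(base.rglob("*"))
        if p.is_file()
    }

# Hypothetical locations: the live data and a scratch restore from the latest backup.
original = tree_digest("/srv/data")
restored = tree_digest("/mnt/restore-test")

missing = original.keys() - restored.keys()
differing = {name for name in original.keys() & restored.keys()
             if original[name] != restored[name]}

print(f"{len(missing)} files missing from the restore, {len(differing)} files differ")
```

Anything reported as missing or differing means the backup can't be trusted yet, which is exactly the point of doing the test restore.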
This is a great fear of mine. I have data backups of backups. A 2 year project is coming to a close soon and I'll be able to relax again. Bring back paper printouts.
this is the kind of thing that is so fundamental to IT that not doing it is at best negligence and at worst intentional malpractice. There is simply no situation that justifies not having backups and I think it might be worth assuming intentionality here, at least for purposes of investigation. It looks like an accident but someone (perhaps several someones, somefew if you will) made a series of shriekingly bad decisions in order to put themselves in a precarious place where an accident could have an effect like this.
The board of directors should now fire the management over such gross mismanagement. Then the board of directors should be fired for not proactively requiring backups.
> I wouldn't be surprised if someone caused this intentionally.
What, no backup(s) set up?
Hmmm, possibly.
But there'd be a paper trail.
Imagine all the scrabbling going on right now - people desperately starting to cover their arses. But chances are, what they need has just burnt down, with no backups.
Is it possible that the fire was started by malicious software, for example by somehow gaining control of UPS batteries' controllers or something similar?
Backups are best thought of as a multi-dimensional problem, as in, they can be connected along many dimensions. Destroy a backup, and all those connected along the same dimension are also destroyed. This means you have to have redundancy in many dimensions. That all sounds a bit abstract, so ...
One dimension is space: two backups can be physically close, as happened here. Ergo backups must be physically separated.
You've heard RAID can't be a backup? Well, it sort of can, and the two drives can be physically separated in space. But they are connected in another dimension, time, as in they reflect the data at the same instant in time. So if you have a software failure that corrupts all copies, your backups are toast, as you can't go back to a previous point in time to recover.
Another dimension is administrative control. Google Drive, for example, will back up your stuff, and separate it in space and time. But the copies are connected by who controls them. If you don't pay the bill or piss Google off, you've lost all your backups. I swear every week I see a headline saying someone lost their data this way.
Then backups can all be connected to you via one internet link, or connected to one electrical grid, or even one country that goes rogue. All of those are what I call dimensions, and you have to ensure your backups sit at a different location along each of them.
Sorry, that didn't answer your question. The answer is no: it's always possible all copies could be wiped out at the same time. You are always relying on luck, and perhaps prayer, if you think that helps your luck.
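One toy way to make the "dimensions" idea concrete: tag each backup copy with where it sits along each dimension, and flag any dimension whose value is shared by every copy, since that is a single point of failure. The copies, dimension names, and values below are invented for illustration, not a description of the actual Korean setup:

```python
# Each backup copy is tagged with its position along several failure "dimensions".
# The copies and values below are invented for illustration only.
copies = [
    {"site": "daejeon-dc", "admin": "gov-it", "power_grid": "daejeon", "snapshot_age_days": 0},
    {"site": "daejeon-dc", "admin": "gov-it", "power_grid": "daejeon", "snapshot_age_days": 0},
]

def shared_dimensions(copies: list[dict]) -> list[str]:
    """Dimensions along which every copy has the same value, i.e. shared single points of failure."""
    return [dim for dim in copies[0] if len({c[dim] for c in copies}) == 1]

print(shared_dimensions(copies))
# -> ['site', 'admin', 'power_grid', 'snapshot_age_days']
# Every dimension is shared, so one fire, one rogue admin, or one corrupting
# write takes out both copies at once.
```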
Are we talking about actual portable Thunderbolt 3-connected RAID 5 G-Drive arrays with between 70 and 160TB of storage per array? We use those for film shoots to dump TBs of raw footage. That G-Drive?? The math checks out at 30GB for around 3,000 users on a single RAID 5 array. This would be truly hilarious if true.
Is it just me, or is this a massively better result than "1PB of government documents containing sensitive data about private individuals was exfiltrated to a hacker group and found for sale"?
I applaud them for honouring their obligation to keep such data private. And encourage them to work on their backup procedures while continuing to honour that obligation.
A sibling comment links to a phrack page (https://phrack.org/issues/72/7_md) about North Korean infiltration in South Korean systems. The timing on that page and the fire make for a possible, though in my opinion wildly unlikely, scenario where either a saboteur started the fire when investigations were supposed to start, or (if you like hacking movies) that a UPS battery was rigged to cause a fire by the spies inside of the South Korean systems.
It's possible that this is all just a coincidence, but the possibility that North Korea is trying to cover their tracks is there.
This is extraordinarily loony shit. Someone designed a system like this without backups? Someone authorized its use? Someone didn't scream and yell that this was bat- and apeshit-wacky-level crazy? Since 2018? Christ almighty.
In my twenties I worked for a "company" in Mexico that was the official QNX distributor for Mexico and LatAm.
I guess the only reason was that Mexico City's Metro used QNX, and every year they bought a new license, I don't know why.
We also did a couple of sales in Colombia, I think, but it was a complete shit show. We really just sent them the software by mail, and they had all sorts of issues getting it out of customs.
I did get to go to a QNX training in Canada, which was really cool.
Never got to use it though.
At the very bottom of the article, I see this notice:
> This article was originally written in Korean and translated by a bilingual reporter with the help of generative AI tools. It was then edited by a native English-speaking editor. All AI-assisted translations are reviewed and refined by our newsroom.
I like that. It is direct and honest. I'm fine with people using LLMs for natural language related work, as long as they are transparent about it.
Especially since LLM tech was originally developed for translation. That’s the original reason so much work was done to create a model that could handle context and it turned out that was helpful in more areas than just translation.
While LLM usage is just spinning up in other areas, for translation they have been doing this job well for over 5 years now.
This is how I’ve done translation for a number of years, even pre-LLM, between the languages I speak natively - machine translation is good enough that it’s faster for me to fix its problems than for me to do it from scratch.
(Whether machine translation uses LLMs or not doesn’t seem especially relevant to the workflow.)
My partner is a pro-democracy fighter for her country of origin (she went to prison for it). She used to translate English articles of interest into her native language for all the fellow exiles from her country. I showed her Google Translate and it blew her mind how much work it did for her. All she had to do was review it and clean it up.
The AI hype train is BS, but there are real and concrete uses for it if you don't expect it to become a super-intelligence.
That footnote does make me question the bilingual reporter's skills in both languages though. If the reporter needs an LLM to help translate they could easily be missing subtle mistranslations.
The final note that all AI-assisted translations are reviewed by the newsroom is also interesting. If they are going to take the time to review it and have enough experience in both languages to verify the translation, why use the LLM for it at all?
> That footnote does make me question the bilingual reporter's skills in both languages though. If the reporter needs an LLM to help translate they could easily be missing subtle mistranslations.
I've done my fair share of translating as a bilingual person, and having an LLM do a first pass at translation saves a TON of time. I don't "need" an LLM, but it's definitely a helpful tool.
> If they are going to take the time to review it and have enough experience in both languages to verify the translation, why use the LLM for it at all?
People generally read (and make minor edits if necessary) much faster than they can write.
If using an LLM can shorten the time the reporter needs to rewrite the whole article in a language they are fluent in but slower to write, why not?
This gives the reporter more time to work on more articles, and we, as foreigners to Korea, get more authentic Korean news that is reviewed by a Korean rather than by Google Translate.
> If the reporter needs an LLM to help translate they could easily be missing subtle mistranslations.
You raise an interesting point about "missing subtle mistranslations". Consider the stakes for this article: this is highly factual news reporting, and there is unlikely to be complex or subtle grammar. However, if translating an interview, the stakes are higher, as people use many idiomatic expressions when speaking their native language. Thinking deeper: the highest stakes (culturally) that I can think of is translating novels, which are full of subtle meanings.
You probably don’t want to read news websites which are nothing but LLM output without a journalist reviewing the articles. Unless you’re a fan of conspiracy theories or ultra-aligned content.
FWIW that happens sometimes with traditional reporting too. At the end of the day, it's just a matter of degree, and to be truly informed you need to be willing to question the accuracy of your sources. As the parent comment said, at least they're being transparent, which isn't even always the case for traditional reporting.
I really don't get this take where people try to downplay AI the most precisely where it is obviously having the most impact. Sure, a billion people are supposed to go back to awful machine translation so that a few tens of thousands can keep jobs that were already a commodity.
I have sympathy for those affected but this article is disingenuous. I speak Spanish and have just gone to 3 or 4 Spanish news sites, and passed their articles through to ChatGPT to translate "faithfully and literally, maintaining everything including the original tone."
First it gave a "verbatim, literal English translation" and then asked me if I would like "a version that reads naturally in English (but still faithful to the tone and details), or do you want to keep this purely literal one?"
Honestly, the English translation was perfect. I know Spanish, I knew the topic of the article and had read about it in the NYTimes and other English sources, and I am a native English speaker. It's sad, but you can't put the toothpaste back in the tube. LLMs can translate well, and the article saying otherwise is just not being intellectually honest.
I see some comments about North Korean hacking, so I feel I need to clear up some misconceptions.
First, (as you guys have seen) South Korea's IT security track record is not great. Many high-profile commercial sites have been hacked. If a government site was hacked by North Korea, it won't be the first, and while it would be another source of political bickering and finger-pointing, it's likely to blow over in a month.
In fact, given that SK's president Lee started his term in June after his predecessor Yoon's disastrous attempt at overthrowing the constitution, Lee could easily frame this as a proof of the Yoon admin's incompetence.
But deliberately setting fire to a government data center? Now that's a career-ending move. If that's found out, someone's going to prison for the rest of their life. Someone would have to be really desperate to attempt that kind of thing. But what could be so horrible that they would rather risk everything to burn the evidence? Merely "we got hacked by North Korea" doesn't cut it.
Which brings us to the method. A bunch of old lithium batteries, overdue for replacement, and predictably the job was sold to the lowest bidder - and the police knows the identity of everyone involved in the job and is questioning them as we speak.
So if you are the evil perpetrator, either you bribed one of the lowest-wage workers to start a fire (and that guy is being questioned right now), or you just hoped one of the age-old batteries would randomly catch fire. Neither sounds like a good plan.
Which brings us to the question "Why do people consider that plausible?" And that's a doozy.
Did I mention that President Yoon almost started a coup and got kicked out? Among the countless stupid things he did, he somehow got hooked on election conspiracy theories claiming that South Korea's election commission was infiltrated by Chinese spies (along with major political parties, newspapers, courts, schools, and everything else) and that they cooked the numbers to make the (then incumbent) People Power Party lose the congressional election of 2024.
Of course, the theory breaks down the moment you look close. If Chinese spies had that much power, how come they let Yoon win his own election in 2022? Never mind that South Korea uses paper ballots and every ballot and every voting place is counted under the watch of representatives from multiple parties. To change numbers in one counting place, you'll have to bribe at least a dozen people. Good luck doing that at a national scale.
But somehow that doesn't deter those devoted conspiracy theorists, and now there are millions of idiots in South Korea who shout "Yoon Again" and believe our lord savior Trump will come to Korea any day soon, smite Chinese spy Lee and communist Democratic Party from their seats, and restore Yoon at his rightful place at the presidential office.
(Really, South Korea was fortunate that Yoon had the charisma of a wet sack of potatoes. If he were half as good as Trump, who knows what would have happened ...)
So, if you listen to the news from South Korea, and somehow there's a lot of noise about Chinese masterminds controlling everything in South Korea ... well now you know what's going on.
They are clouds of smoke to begin with. The smoke from the joints of those who believed that storing their data somewhere out of their control was a good idea!
They might be singing this song now. (To the tune of 'Yesterday' from the Beatles).
Yesterday,
All those backups seemed a waste of pay.
Now my database has gone away.
Oh I believe in yesterday.
Suddenly,
There’s not half the files there used to be,
And there’s a deadline
hanging over me.
The system crashed so suddenly.
I pushed something wrong
What it was I could not say.
Now my data’s gone
and I long for yesterday-ay-ay-ay.
Yesterday,
The need for back-ups seemed so far away.
Thought all my data was here to stay,
Now I believe in yesterday.
No offsite backups is a real sin, sounds like a classic case where the money controllers thought 'cloud' automatically meant AWS level redundant cloud and instead they had a fancy centralized datacenter with insufficient backups.
Does G-Drive mean Google Drive, or "the drive you see as G:"?
If this is Google Drive, what they had locally were just pointers (for native Google Drive docs), or synchronized documents.
If this means the letter a network disk storage system was mapped to, this is a weird way of presenting the problem (I am typing on the black keyboard and the wooden table, so that you know)
Mind-blowing. Took a walk. All I can say is that if business continues "as usual" and the economy and public services carry on largely unaffected, then either there were local copies of critical documents, or you can fire a lot of those workers; either way, the "stress test" was a success.
The fire started on 26th September and news about it reached HN only now. I think that tells you how disruptive this accident really was for South Korean daily life.
Yeah you can do the same with your car too - just gradually remove parts and see what's really necessary. Seatbelts, horn, rear doors? Gone. Think of the efficiency!
Long term damage, and risk are two things that don't show up with a test like this. Also, often why things go forward is just momentum, built from the past.
I was smirking at this until I remembered that I have just one USB stick as my 'backup'. And that was made a long time ago.
Recently I have been thinking about whether we actually need governments, nation states and all of the hubris that goes with them, such as new media. Technically this means 'anarchism', with everyone running riot and chaos. But that is just a big fear; the more I think through the 'no government' idea, the less ludicrous it sounds. Much can be devolved to local government, and so much else isn't really needed.
South Korea's government has kind of deleted itself, and my suspicion is that, although it's a bad day for some, life will go on and everything will be just fine. In time some might even be relieved that they don't have this vast data store any more. Regardless, it is an interesting story in light of my thoughts on the benefits of no government.
Government is whatever has a monopoly on violence in the area you happen to live. Maybe it’s the South Korean government. Maybe it’s a guy down the street. Whatever the case, it’ll be there.
The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information will be keeping their head low for a few days then...
"The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups."
This is absolutely wild.
The issue here is not refusing to use a foreign third party. That makes sense.
The issue is mandating the use of remote storage and not backing it up. That’s insane. It’s like the most basic amount of preparation you do. It’s recommended to even the smallest of companies specifically because a fire is a risk.
That’s gross mismanagement.
This. Speaking specifically from the IT side of things, an employer or customer refusing to do backups is the biggest red flag I can get, an immediate warning to run the fuck away before you get blamed for their failure, stego-tech kind of situation.
That being said, I can likely guess where this ends up going:
* Current IT staff and management are almost certainly scapegoated for “allowing this to happen”, despite the program in question (G-DRIVE) existing since 2017 in some capacity.
* Nobody in government will question sufficiently what technical reason is/was given to justify the lack of backups and why that was never addressed, why the system went live with such a glaring oversight, etc, because that would mean holding the actual culprits accountable for mismanagement
* Everyone involved is unlikely to find work again anytime soon once names are bandied about in investigations
* The major cloud providers will likely win several contracts for “temporary services” that in actuality strip the sovereignty the government had in managing its own system, even if they did so poorly
* Other countries will use this to justify outsourcing their own sovereign infrastructure to private enterprise
This whole situation sucks ass because nothing good is likely to come of this, other than maybe a handful of smart teams lead by equally competent managers using this to get better backup resources for themselves.
Backups should be far away, too. Apparently some companies lost everything on 9/11 because their backups were in the other tower.
Nothing increases the risk of servers catching fire like government investigators showing up to investigate allegations that North Korea hacked the servers.
> The issue here is not refusing to use a foreign third party. That makes sense.
For anyone else who's confused, G-Drive means Government Drive, not Google Drive.
> The issue here is not refusing to use a foreign third party. That makes sense.
Encrypt before sending to a third party?
It does only make sense if you are competent enough to manage data, and I mean: Any part of it, forever. It's not impossible, of course, but it is really not as trivial as the self-host crowd makes it out to be, if you absolutely need a certain amount of 9s of reliability. There is a reason why AWS etc can exist. I am sure the cloud market is not entirely reasonable but certainly far more reasonable than relying on some mid consultant to do this for you at this scale.
Yeah, the whole supposed benefit of an organization using storage the cloud is to avoid stuff like this from happening. Instead, they managed to make the damage far worse by increasing the amount of data lost by centralizing it.
The issue is without profit incentive of course it isn’t X (backed up, redundant, highly available, whatever other aspect is optimized away by accountants).
Having worked a great deal inside of aws on these things aws provides literally every conceivable level of customer managed security down to customer owned and keyed datacenters operated by aws, with master key HSMs owned, purchased by the customer, with customer managed key hierarchies at all levels and detailed audit logs of everything done by everything including aws itself. The security assurance of aws is far and away beyond what even the most sophisticated state actor infrastructure does and is more modern to boot - because it’s profit incentive drives that.
Most likely this was less about national security than about nationalism. The two are easily confused, but that's fallacious, and they earned the dividends of fallacious thinking.
Call me a conspiracy theorist, but this kind of mismanagement is intentional by design so powerful people can hide their dirty laundry.
I very seriously doubt that the US cares about South Korea's deepest, darkest secrets that much, if at all.
Not using a cloud provider is asinine. You can use layered encryption so the expected lifetime of the cryptography is beyond the value of the data...and the US government themselves store data on all 3 of them, to my knowledge.
I say US because the only other major cloud providers I know of are in China, and they do have a vested interest in South Korean data, presumably for NK.
Agree completely that it's absolute wild to run such a system without backups. But at this point no government should keep critical data on foreign cloud storage.
Good thing Korea has cloud providers, apparently Kakao has even gone...beyond the cloud!
https://kakaocloud.com/ https://www.nhncloud.com/ https://cloud.kt.com/
To name a few.
Encrypted backups would have saved a lot of pain here
You don’t need cloud when you have the data centre, just backups in physical locations somewhere else
> no government should keep critical data on foreign cloud storage
Primary? No. Back-up?
These guys couldn’t provision a back-up for their on-site data. Why do you think it was competently encrypted?
It's 2025. Encryption is a thing now. You can store anything you want on foreign cloud storage. I'd give my backups to the FSB.
Why not? If the region is in country, encrypted, and with proven security attestations validated by third parties, a backup to a cloud storage would be incredibly wise. Otherwise we might end up reading an article about a fire burning down a single data center
Especially on US cloud storage.
The data is never safe thanks to the US Cloud Act.
If you can’t encrypt your backups such that you could store them tattooed on Putin’s ass, you need to learn more about backups.
Why not?
Has there been any interruption in service?
And yet here is an example where keeping critical data off public cloud storage has been significantly worse for them in the short term.
Not that they should just go all in on it, but an encrypted copy on S3 or GCS would seem really useful right about now.
> The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information will be keeping their head low for a few days then...
They absolutely cannot be trusted, especially sensitive govt. data. Can you imagine the US state department getting their hands on compromising data on Korean politicians?
It's like handing over the govt. to US interests wholesale.
That they did not choose to keep the backup, and then another, at different physical locations is a valuable lesson, and must lead to even better design the next time.
But the solution is not to keep it in US hands.
Using the cloud would have been the easiest way to achieve the necessary redundancy, but by far not the only one. This is just a flawed concept from the start, with no real redundancy.
But not security. And for governmental data security is a far more important consideration.
not losing data and keeping untrusted parties out of your data is a hard problem, that "cloud" aka "stored somewhere that is accessible by agents of a foreign nation" does not solve.
There is some data privacy requirement in SK where application servers and data have to remain in the country. I worked for a big global bank and we had 4 main instances of our application: Americas, EMEA, Asia and South Korea.
When I worked on Apple Maps infra South Korea required all servers be in South Korea.
If only there were a second data center in South Korea where they could backup their data…
I know there is legit hate for VMWare/Broadcom but there is a legit case to be made for VCF with an equivalent DR setup where you have replication enabled by Superna and Dell PowerProtect Data Domain protecting both local and remote with Thales Luna K160 KMIP for the data at rest encryption for the vSAN.
To add, use F710s, H710s and then add ObjectScale storage for your Kubernetes workloads.
This setup repatriates your data and gives you a Cloud like experience. Pair it with like EKS-A and you have a really good on premises Private Cloud that is resilient.
This reads very similar to the Turbo Encabulator video.
> G-Drive’s structure did not allow for external backups
Ha! "Did not allow" my ass. Let me translate:
> We didn't feel like backing anything up or insisting on that functionality.
Pretty sensible to not host it on these commercial services. What is not so sensible is to not make backups.
I was once advised to measure your backup security in zip codes and time zones.
You have a backup copy of your file, in the same folder? That helps for some "oops" moments, but nothing else.
You have a whole backup DRIVE on your desktop? That's better. Physical failure of the primary device is no longer a danger. But your house could burn down.
You have an alternate backup stored at a trusted friend's house across the street? Better! But what if a major natural disaster happens?
True life, 30+ years ago when I worked for TeleCheck, data was their lifeblood. Every week a systems operator went to Denver, the alternate site, with a briefcase full of backup tapes. TeleCheck was based in Houston, so a major hurricane could've been a major problem.
Not sure “sane backup strategy” and “park your whole government in a private company under American jurisdiction” are mutually exclusive. I feel like I can think of a bunch of things that a nation would be sad to lose, but would be even sadder to have adversaries rifling through at will. Or, for that matter, extort favors under threat of cutting off your access.
At least in this case you can track down said officials in their foxholes and give them a good talking-to. Good luck holding AWS/GCP/Azure accountable…
He may or may not have been right, but it's beside the point.
The 3-2-1 backup rule is basic: keep three copies of your data, on two different types of media, with at least one copy offsite.
Well, it is just malpractice. Even when I was a first-semester art student I knew about the concept of off-site backups.
If you (as the SK government) were going to do a deal with " AWS/GCP/Azure" to run systems for the government, wouldn't you do something like the Jones Act? The datacenters must be within the country and staffed by citizens, etc.
Microsoft exec testified that US Govt can get access to the data Azure stores in other countries. I thought this was a wild allegation but apparently is true [0].
[0]https://www.theregister.com/2025/07/25/microsoft_admits_it_c...
Because these companies never lose data, like during some lightning strikes, oh wait: https://www.bbc.com/news/technology-33989384
As a government you should not be putting your stuff in an environment under control of some other nation, period. That is a completely different issue and does not really relate to making backups.
“The BBC understands that customers, through various backup technologies, external, were able to recover all lost data.”
You backup stuff. To other regions.
>As a government you should not be putting your stuff in an environment under control of some other nation, period.
Why? If you encrypt it yourself before transfer, the only possible control some_other_nation will have over you or your data is availability.
For this reason, Microsoft has Azure US Government, Azure China etc
Yeah, I heard that consumer clouds are only locally redundant and there aren't even backups. So big DC damage could result in data loss.
...on a single-zone persistent disk: https://status.cloud.google.com/incident/compute/15056#57195...
> GCE instances and Persistent Disks within a zone exist in a single Google datacenter and are therefore unavoidably vulnerable to datacenter-scale disasters.
Of course, it's perfectly possible to have proper distributed storage without using a cloud provider. It happens to be hard to implement correctly, so apparently, the SK government team in question just decided... not to?
The simple solution here would have been something like a bunch of NetApps with SnapMirror to a secondary backup site.
Or ZFS or DRBD, or whatever homegrown or equivalent non-proprietary alternative is available these days and you prefer.
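For the ZFS flavour, the core of it is just a periodic incremental `zfs send | zfs receive` to a machine at another site. Here is a minimal sketch, assuming Python 3, an existing ZFS dataset, passwordless SSH to the remote box, and made-up dataset/host names:

```python
import subprocess
from datetime import datetime, timezone

DATASET = "tank/gdrive"              # hypothetical local ZFS dataset
REMOTE = "backup@dr-site.example"    # hypothetical remote host that also runs ZFS

def replicate(prev_snap: str | None = None) -> str:
    """Snapshot DATASET and stream it to REMOTE over SSH.
    If prev_snap is given, send only the increment since that snapshot."""
    snap = f"{DATASET}@{datetime.now(timezone.utc):%Y%m%d%H%M%S}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    send_cmd = ["zfs", "send", snap] if prev_snap is None \
        else ["zfs", "send", "-i", prev_snap, snap]
    sender = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
    subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", DATASET],
                   stdin=sender.stdout, check=True)
    sender.stdout.close()
    if sender.wait() != 0:
        raise RuntimeError("zfs send failed")
    return snap
```

Run something like this on a schedule and keep track of the last snapshot name; it gets you offsite replication, though replication alone is still not the same thing as versioned, tested backups.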
Usually these mandates are made by someone who evaluates “risks.” Third-party risks are evaluated under the assumption that everything will be done sensibly in the first-party scenario; to boot, the first-party option will look cheaper, as disk drives etc. are only a fraction of the total cost.
Reality hits later when budget cuts/constrained salaries prevent the maintenance of a competent team. Or the proposed backup system is deemed as excessively risk averse and the money can’t be spared.
>The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information will be keeping their head low for a few days then...
They can't. The Trump admin sanctioning the International Criminal Court, and Microsoft blocking them from all services as a result, are proof of why.
They put everything only in one datacenter. A datacenter located elsewhere should have been setup to mirror.
This has nothing to do with commercial clouds. Commercial clouds are just datacenters. They could pick one commercial cloud data center and not do much more to mirror or backup in different regions. I understand some of the services have inherent backups.
Mirroring is not backup.
What a lame excuse. “The G-Drive’s structure did not allow for backups” is a blatant lie. It’s code for, “I don’t value other employees’ time and efforts enough to figure out a reliable backup system; I have better things to do.”
Whoever made this excuse should be demoted to a journeyman ops engineer. Firing would be too good for them.
It could be accurate. Let’s say, for whatever reason, it is.
Ok.
Then it wasn’t a workable design.
The idea of “backup sites” has existed forever. The fact you use the word “cloud” to describe your personal collection of servers doesn’t suddenly mean you don’t need backups in a separate physical site.
If the government mandates its use, it should have a hot site at a minimum. Even without that a physical backup in a separate physical location in case of fire/attack/tsunami/large band of hungry squirrels is a total must-have.
However it was decided that not having that was OK, that decision was negligence.
Silly to think this is the fault of ops engineers. More likely, the project manager or the C-suite didn't have the time or budget to allocate to disaster recovery.
The project shipped, it's done, they've already moved us onto the next task, no one wants to pay for maintenance anyway.
This has been my experience in 99% of the companies I have worked for in my career, while the engineers that built the bloody thing groan and are well-aware of all the failure modes of the system they've built. No one cares, until it breaks, and hopefully they get the chance to say "I **** told you this was inadequate"
You could be right, but it could also be a bad summary or bad translation.
We shouldn't rush to judgement.
your first criticism was they should have handed their data sovereignty over to another country?
there are many failure points here, not paying Amazon/Google/Microsoft is hardly the main point.
Days? That's optimistic. It depends on what govt cloud contained. For example imagine all the car registrations. Or all the payments for the pension fund
Dude, the issues go wayyy beyond opting for selfhosting rather than US clouds.
We use selfhosting, but we also test our fire suppression system every year, we have two different DCs, and we use S3 backups out of town.
Whoever runs that IT department needs to be run out of the country.
The cloud will also not back up your stuff if you configure it wrong, so I'm not sure how that's related.
They rightfully did not trust these companies. Sure, what happened is a disaster for them, but you can't simply trust Amazon & Microsoft.
Why not? You can easily encrypt your data before sending it for storage on S3, for example.
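For what it's worth, client-side encryption before upload really is only a few lines. A minimal sketch, assuming Python with the `cryptography` and `boto3` packages; the file, bucket, and object names are placeholders, and a real deployment would want envelope encryption with the master key held in an HSM rather than a key generated on the spot:

```python
import boto3
from cryptography.fernet import Fernet

# Key management is the hard part; a key generated on the spot is illustration only.
key = Fernet.generate_key()   # keep this somewhere the cloud provider never sees
cipher = Fernet(key)

with open("backup-2025-09-26.tar", "rb") as f:   # hypothetical backup archive
    ciphertext = cipher.encrypt(f.read())

# The provider only ever stores ciphertext.
boto3.client("s3").put_object(
    Bucket="example-offsite-backups",            # hypothetical bucket
    Key="gdrive/backup-2025-09-26.tar.enc",
    Body=ciphertext,
)
```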
For sure the only error here is zero redundancy.
S3 features have saved our bacon a number of times. Perhaps your experience and usage is different. They are worth trusting with business critical data as long as you're following their guidance. GCP though have not proven it, their data loss news is still fresh in my mind.
On the Microsoft side CVE-2025–55241 is still pretty recent.
https://news.ycombinator.com/item?id=45282497
I understand data sovereignty in the case where a foreign entity might cut off access to your data, but this paranoia that storing info under your bed is the safest bet is straight up false. We have post-quantum encryption widely available already. If your fear is that a foreign entity will access your data, you're technologically illiterate.
Obviously no person in a lawmaking position will ever have the patience or foresight to learn about this, but the fact they won't even try is all the more infuriating.
Encryption only makes sense if "the cloud" is just a data storage bucket to you. If you run applications in the cloud, you can't have all the data encrypted, especially not all the time. There are some technologies that make this possible, but none are mature enough to run even a small business, let alone a country on.
It sounds technologically illiterate to you because when people say "we can't safely use a foreign cloud" you think they're saying "to store data" and everyone else is thinking at the very least "to store and process data".
Sure, they could have used a cloud provider for encrypted backups, but if they knew how to do proper backups, they wouldn't be in this mess to begin with.
> The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information
They were still right though: it's absolutely clear, without an ounce of doubt, that whatever you put on a US cloud is accessible by the US government, which can also decide to sanction you and deprive you of your ability to access the data yourself.
Not having backups is entirely retarded, but also completely orthogonal.
The U.S. Government can’t decrypt data for which it does not possess the key (assuming the encryption used is good).
"Not my fault.. I asked them to save everything in G-Drive (Google Drive)"
I mean he's still right about AWS etc. with the current US Administration and probably all that will follow - but that doesn't excuse not keeping backups.
Yeah let’s fax all government data to the Trump administration.
https://phrack.org/issues/72/7_md#article
Woah, read the timeline at the top of this. The fire happened the very day the government-ordered onsite inspection was supposed to start due to Chinese/NK hacking.
Phrack's timeline may read like it, but it wasn't an onsite inspection due to hacking, but a scheduled maintenance to replace the overdue UPS, hence battery-touching involved. Even the image they linked just says "scheduled maintenance."
Such coincidences do happen. 20 years ago, a plane carrying all the top brass of the Russian Black Sea Fleet, as well as the Fleet’s accounting documentation, to Moscow for inspection burst into flames and fell to the ground while trying to get airborne. Being loaded with fuel, it immediately became one large infernal fireball. By some miracle no top brass suffered even a minor burn or injury, while all the accounting documentation burned completely.
So, someone figured out how to do backups
Yeah, this whole thing smells.
Who has the incentive to do this, though? China/North Korea? Or someone in South Korea trying to cover up how bad they messed up? Does adding this additional mess on top mean they looked like they messed up less? (And for that to be true, how horrifically bad does the hack have to be?)
The good news is: there are still off-site backups.
The bad news is: they're in North Korea.
"NK hackers" reminds me "my homework was eaten by a dog". It's always NK hackers that steal data/crypto and there is absolutely no possibility to do something with it or restore the data, because you know they transfer the info on a hard disk and they shoot it with an AD! Like that general!
How do we know it's NK? Because there are comments in north-korean language, duh! Why are you asking, are you russian bot or smt??
Though this is far from the most important point of this article, why do even the article’s authors defend Proton after having their accounts suspended, and after seemingly having a Korean intelligence official warn them that they weren’t secure? Even if Proton is perfectly secure, they clearly do not have the moral compass people believe they have.
What other service would you use?
When you see a chronology like that, you don't keep trying to speak truth to power.
You delete your data, trash your gear, and hop on a bus, to start over in some other city, in a different line of work.
And with no technology! Perhaps become some kind of ascetic monk.
s/city/country/
Ohh side note but this was the journalist group which was blocked by proton
The timing as well is very suspicious and I think that there can be a lot of discussion about this
Right now, I am wondering most about the name, tbh, which might seem silly: "APT Down - The North Korean Files".
It seems that APT here means advanced persistent threat, but I am not sure what they mean by "APT Down". Is it the fact that it got shut down by their journalism, or-? I am sorry if this seems naive, but on a serious note this raises so many questions...
“APT Down” is likely a reference to a popular Korean drinking game.
https://www.thetakeout.com/1789352/korea-apt-drinking-game-r...
For a moment there I was wondering if “apt down” was a typo and you meant “ifdown”. ;)
> 27th of September 2025, The fire is believed to have been caused while replacing Lithium-ion batteries. The batteries were manufactured by LG, the parent company of LG Uplus (the one that got hacked by the APT).
Compromised batteries or battery controllers?
Witness A said, “It appears that the fire started when a spark flew during the process of replacing the uninterruptible power supply,” and added, “Firefighters are currently out there putting out the fire. I hope that this does not lead to any disruption to the national intelligence network, including the government’s 24 channel.”[1]
[1] https://mbiz.heraldcorp.com/article/10584693
Silver lining: it's likely that technically there is a backup (section 1.3).
It's just in NK or china.
Yikes.
I don't backup my phone. The NSA does it for me!
Thanks for this, it gives a lot of extra info and content compared to the original article.
> KIM is heavily working on ToyBox for Android.
2 HN front page articles in 1!
This sounds like a real whodunit.
Well, I think we know "who"dunnit it's more of a how-dunnit & are-they-still-in-dunnit
This is the first time I see this site, who/what is phrack? A hacker group?
It’s a zine. Been around since the 80’s. Hackers / security industry types read and publish to it.
https://en.wikipedia.org/wiki/Phrack
It looks delightful, but definitely for and by a specific subculture.
thanks for the info, canceling proton rn
Proton is still an alternative to Gmail. You replace the NSA and ad networks with the NSA only. It's a win.
holy shit lol. this is naked gun level incompetence
Figures.
What sad news, as a Korean, to see a post about Korea at the top of HN during one of the largest Korean holidays.
I can share an anecdote about how slow tech adoption is in Korea. It is not exactly about tech in the public sector but in private companies; I assume the public sector has a slower adoption rate than the private one in general.
Just about a year ago I had a couple of projects with insurance companies. I won't name them, but they are the largest ones, whose headquarters you can find in the very center of Seoul. They often called me in because I was setting up on-premise servers for the projects. Not to mention that it was hard to understand their choices of database architecture so I could plug it into the server I was setting up; their data team seemed just incompetent, not knowing what they were doing.
The wildest thing I found was that most office workers seemed to be using Windows 2000 to run their proprietary software. To be fair, I like software UIs with a lot of buttons and windows from that era. But alas, I didn't want to imagine myself connecting that legacy software to my then-current project service. It didn't go that far in the end.
Back when I worked for Mozilla, I had the chance to go to Seoul to meet with various companies and some governmental ministries. This was when Korean banks and ecommerce sites required Internet Explorer and ActiveX controls for secure transactions. This meant that MacOS users or Linux users couldn't do secure transactions in Korea without emulating Win/IE.
What was the outcome of these meetings? Have they switched to Firefox?
> I can share an anecdote about how slow tech adoption is in Korea. It is not exactly about tech in the public sector but in private companies; I assume the public sector has a slower adoption rate than the private one in general.
I guess it's not all tech, but at least in telecoms I thought they were very quick to adopt new tech? 2nd in the world to commercially deploy 3G W-CDMA, world first LTE-Advanced [1], "first fairly substantial deployments" of 5G [2]. 90% of broadband via fibre (used to be #1 amongst OECD countries for some time, now it's only just #2).
[1] https://en.wikipedia.org/wiki/SK_Telecom#History
[2] https://en.wikipedia.org/wiki/5G#Deployment
[3] https://www.oecd.org/en/topics/sub-issues/broadband-statisti... -> Percentage of fibre connections in total broadband (June 2024) spreadsheet link
Things South Korea is good at producing: Cars, ships, steel, semiconductors, electronics, medicines, tanks, aircraft parts, nuclear reactors...
Things South Korea is bad at producing: Software.
Not too bad overall.
> Things South Korea is good at producing: Cars, ships, steel, semiconductors, electronics, medicines, tanks, aircraft parts, nuclear reactors...
Also: music and TV shows.
> Things South Korea is bad at producing: Software.
Also: babies.
Interesting interpretation of 'good' in regards to cars.
Seems like everyone outside of the US is bad at producing software.
Do South Korean companies prefer hosting data on their own servers instead of using Public cloud providers like Azure, AWS, GCP?
Yes and no. They used to prefer everything on-premises. Many are trying to move towards the cloud, especially newer companies. The major cloud providers you mentioned are not the usual choices, though (AWS is maybe the most common). They do have data centers in Seoul and are trying to expand their South Korean market share. But the government offers generous incentives for using domestic cloud providers like NHN, which was mentioned in the article, or Naver Cloud. Why does this work? Because Korean services rarely target global markets, mainly due to the language barrier, so domestic cloud capacity is sufficient.
I think it's very interesting that Korea is probably the country with the fastest cultural adoption of new tech, e.g. #1 for ChatGPT, but on the other hand I can see as a web developer that new web tech is often adopted at a very slow rate.
We excel at things that look good on paper.
Article comments aside, it is entirely unclear to me whether or not there were any backups. Certainly no "external" backups, but potentially "internal" ones. My thinking is that not allowing backups and forcing all data there creates a prime target for the DPRK folks, right? I've been in low-level national defense meetings about security where things like "you cannot back up off site" are discussed, but there are often fire vaults[1] on site which are designed to withstand destruction of the facility by explosive force (aka a bomb), fire, flood, etc.
That said, people do make bad calls, and this would be an epically bad one, if they really don't have any form of backup.
[1] These days creating such a facility for archiving an exabyte of essentially write mostly data are quite feasible. See this paper from nearly 20 years ago: https://research.ibm.com/publications/ibm-intelligent-bricks...
> there are often fire vaults
Many years ago I was Unix sysadmin responsible for backups and that is exactly what we did. Once a week we rotated the backup tapes taking the oldest out of the fire safe and putting the newest in. The fire safe was in a different building.
I thought that this was quite a normal practice.
They did have backups. But the backups were also destroyed in the same fire.
Then it's just incompetence. Even I have my backup server 100 km away from the master one.
> My thinking is that not allowing backups and forcing all data there creates a prime target for the DPRK folks, right?
It's funny that you mention that...
https://phrack.org/issues/72/7_md#article
Ouch
When I visited the National Museum of Korea in Seoul, one of my favorite parts was exploring the exhibit dedicated to the backing up of state data: via calligraphy, letterpress, and stone carving.
> "The Veritable Records of the Joseon Dynasty, sometimes called sillok (실록) for short, are state-compiled and published records, documenting the reigns of the kings of the Joseon dynasty in Korea. Kept from 1392 to 1865, they comprise 1,893 volumes and are thought to be the longest continual documentation of a single dynasty in the world."
> "Beginning in 1445, they began creating three additional copies of the records, which they distributed at various locations around Korea for safekeeping."
https://en.wikipedia.org/wiki/Veritable_Records_of_the_Joseo...
After the Japanese and Qing invasions of Korea, King Hyeonjong (1659–1675) started a project to collect calligraphy works written by preceding Joseon kings and carve them into stone.
It's somewhat surprising that these values didn't continue to persist in the Korean government.
I saw a few days ago that the application site for the GKS, the most important scholarship for international students in Korea, went offline for multiple days. Surprising to hear that they really lost all of the data, though. Great opportunity to build a better website now?
But yeah it's a big problem in Korea right now, lots of important information just vanished, many are talking about it.
Must have been a program without much trickle down into gov tech
I was the principal consultant at a subcontractor to a contractor for a large state government IT consolidation project, working on (among other things) the data centre design. This included the storage system.
I noticed that someone had daisy-chained petabytes of disk through relatively slow ports and hadn’t enabled the site-to-site replication that they had the hardware for! They had the dark fibre, the long-range SFPs, they even licensed the HA replication feature from the storage array vendor.
I figured that in a disaster just like this, the time to recover from the tape backups — assuming they were rotated off site, which might not have been the case — would have been six to eight weeks minimum, during which a huge chunk of the government would be down. A war might be less disruptive.
I raised a stink and insisted that the drives be rearranged with higher bandwidth and that the site-to-site replication be turned on.
I was screamed at. I was called unprofessional. “Not a team player.” Several people tried to get me fired.
At one point this all culminated in a meeting where the lead architect stood up in front of dozens of people and calmly told everyone to understand one critical aspect of his beautiful design: No hardware replication!!!
(Remember: they had paid for hardware replication! The kit had arrived! The licenses were installed!)
I was younger and brave enough to put my hand up and ask “why?”
The screeched reply was: The on-prem architecture must be “cloud compatible”. To clarify: He thought that hardware-replicated data couldn’t be replicated to the cloud in the future.
This was some of the dumbest shit I had ever heard in my life, but there you go: decision made.
This. This is how disasters like the one in South Korea happen.
In private organisations you get competent shouty people at the top insisting on a job done right. In government you get incompetent shouty people insisting that the job gets done wrong.
> In private organisations you get competent shouty people at the top insisting on a job done right. In government you get incompetent shouty people insisting that the job gets done wrong.
Great post and story but this conclusion is questionable. These kinds of incompetences or misaligned incentives absolutely happen in private organisations as well.
Much more rarely in my experience, having been at both kinds of organisations.
There’s a sort-of “gradient descent” optimisation in private organisations, established by the profit motive and the competitors nipping at their heels. There’s no such gradient in government, it’s just “flat”. Promotions hence have a much weaker correlation with competence and a stronger correlation with nepotism, political skill, and willingness to participate in corruption.
I’ve worked with many senior leaders in all kinds of organisations, but only in government will you find someone who is functionally illiterate and innumerate in a position of significant power.
Obviously this is just a statistical bias, so there’s overlap and outliers. Large, established monopoly corporations can be nigh indistinguishable from a government agency.
"The stored data amounts to 858TB (terabytes), equivalent to 449.5 billion A4 sheets"
Just so we can all visualise this in an understandable way, if laid end-to-end how many times round the world would the A4 sheets go?
And what is their total area in football fields?
Attached end-to-end, they'd extend almost from the Earth to the Sun [1].
Placed in a grid, they'd cover an area larger than Wales [2].
Piled on top of each other, they'd reach a tenth the distance to the moon [3].
---
[1] https://www.wolframalpha.com/input?i=449.5+*10%5E9+*+%28leng...
[2] https://www.wolframalpha.com/input?i=449.5+*10%5E9+*+%28area...
[3] https://www.wolframalpha.com/input?i=449.5+*10%5E9+*+%28thic...
I am shocked that 1 and 2 are both true. I would have guessed 1 would have implied a much larger area than Wales.
I know you want to think of this as a lot of data, but this really isn't that much. It'll cost less than a few thousand to keep a copy in Glacier on S3, or a single IT dude could build a NAS at his home that could easily hold this data for a few tens of thousands tops. The entire thing.
Close to 1 petabyte for a home server is quite a lot, honestly. It will cost tens of thousands of dollars. But yeah, at government level, it's nothing.
If you stacked them they would be about fifty thousand Popocatépetls high, give or take a few zeroes.
UPDATE: as sibling pointed out indirectly, it's eight thousand Popocatépetls [0].
[0]: https://www.wolframalpha.com/input?i=449.5+*10%5E9+*+%28thic...
I love that people are still trying to put data on A4s and we're long past the point of being able to visualize it.
That said, if I'm ever fuck-you rich, I'm going to have a pyramid built to bury me in and a library of hardcover printed wikipedia.
Double-sided?
About 2,355 and a bit times round the equator if you place them long edge to long edge (each lap takes roughly 190,813,414 sheets).
Actual football fields please, the International Standard Unit football field, used in SI countries.
Football or soccer?
Regional SI football fields or Cup? ;)
> However, due to the system’s large-capacity, low-performance storage structure, no external backups were maintained — meaning all data has been permanently lost.
Yikes. You'd think they would at least have one redundant copy of it all.
> erasing work files saved individually by some 750,000 civil servants
> 30 gigabytes of storage per person
That's 22,500 terabytes, about 50 Backblaze storage pods.
Or even just mirrored locally.
It's even worse. According to other articles [1], the total data of "G drive" was 858 TB.
It's almost farcical to calculate, but AWS S3 has pricing of about $0.023/GB/month, which means the South Korean government could have reliable multi-storage backup of the whole data at about $20k/month. Or about $900/month if they opted for "Glacier deep archive" tier ($0.00099/GB/month).
They did have a backup of the data ... in the same server room that burned down [2].
[1] https://www.hankyung.com/article/2025100115651
[2] https://www.hani.co.kr/arti/area/area_general/1221873.html
(both in Korean)
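A quick back-of-the-envelope check of that estimate (a minimal sketch in Python; the per-GB figures are the list prices quoted above and may not match current AWS pricing):

    # Rough monthly cost of keeping 858 TB in object storage,
    # using the per-GB list prices quoted above.
    DATA_GB = 858 * 1000                  # storage is billed in decimal GB

    S3_STANDARD_PER_GB = 0.023            # USD/GB/month (quoted figure)
    DEEP_ARCHIVE_PER_GB = 0.00099         # USD/GB/month (quoted figure)

    print(f"S3 Standard:          ~${DATA_GB * S3_STANDARD_PER_GB:,.0f}/month")
    print(f"Glacier Deep Archive: ~${DATA_GB * DEEP_ARCHIVE_PER_GB:,.0f}/month")
    # -> roughly $19,700/month and $850/month respectively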
AWS? Linus Tech Tips has run multiple petabyte servers in their server closet just for sponsor money and for the cool of it. No need to outsource your national infrastructure to foreign governments, a moderate (in government terms) investment in a few racks across the country could've replicated everything for maybe half a year's worth of Amazon subscription fees.
I made an 840TB storage server last month for $15,000.
>AWS S3 has pricing of about $0.023/GB/month, which means ... about $20k/month
or outright buying hardware capable of storing 850TB for the same $20K one time payment. Gives you some perspective on how overpriced AWS is.
Couldn’t even be bothered to do a basic 3-2-1! Wow
I have almost 10% of that in my closet, RAID5'd, with a large part of it backing up constantly to Backblaze for $10/month, running on 10-year-old hardware, with basically only the hard drives having any value ... I used a case made of cardboard till I wanted to improve the cooling and got a used Fractal Design case for 20€.
_Only_ this kind of combination of incompetence and bad politics can lead to losing this large a share of the data, given the policy was to save everything on that "G-drive" and avoid local copies. And the "G-drive" itself they intentionally did not back up, because they couldn't figure out a solution to at least store a backup across the street ...
How does this even make sense business wise for AWS?
Is their cost per unit so low?
That's unfortunate.
You're assuming average worker utilized the full 30G of storage. More likely average was at like 0.3G.
On the other hand: backups should also include a version history of some kind, or you'd be vulnerable to ransomware.
A lot of folks are arguing that the real problem is that they refused to use US cloud providers. No, that's not the issue. It's a perfectly reasonable choice to build your own storage infrastructure if it is needed.
But the problem is they sacrificed "Availability" in pursuit of security and privacy. Losing your data to natural and man-made disasters is one of the biggest risks facing any storage infrastructure. Any system that cannot protect your data against those should never be deployed.
"The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups."
This is not a surprise to them. They had knowingly accepted the risk of infrastructure being destroyed by natural and man-made disasters. I mean, WTF!
Yeah, it's such a lame excuse to say "did not allow for external backups", as if that's a reasonable choice that they just couldn't work around.
South Korea isn't some poor backwater, they have tech companies and expertise, that they were "unable" to do backups was an intentional choice.
Durability is more precise than availability in this context because it is about the data surviving (not avoiding downtime).
Here I was self conscious about my homelab setup and turns out I was already way ahead of the second most technologically advanced nation in the world!
What structure could possibly preclude backups? I've never seen anything that couldn't be copied elsewhere.
Maybe it was just convenient to have the possibility of losing everything.
I think that alluded to that earlier in the article:
>However, due to the system’s large-capacity, low-performance storage structure, no external backups were maintained — meaning all data has been permanently lost.
I think they decided that their storage was too slow to allow backups?
Seems hard to believe that they couldn't manage any backups... other sources said they had around 900TB of storage. An LTO-9 tape drive holds ~20TB uncompressed, so they could have backed up the entire system with 45 tapes. At 300MB/sec with a single drive, it would take them a month to complete a full backup, so seems like even a slow storage system should be able to keep up with that rate. They'd have a backup that's always a month out of date, but that seems better than no backup at all.
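A minimal sketch of that back-of-the-envelope math (assumes ~900 TB of data, 20 TB per LTO-9 tape uncompressed, and one drive streaming at 300 MB/s):

    import math

    DATA_TB = 900
    TAPE_TB = 20                 # LTO-9 native capacity
    DRIVE_MB_PER_S = 300         # sustained, single drive

    tapes = math.ceil(DATA_TB / TAPE_TB)
    days = DATA_TB * 1_000_000 / DRIVE_MB_PER_S / 86_400   # 1 TB = 1e6 MB

    print(f"{tapes} tapes, ~{days:.0f} days for one full pass with a single drive")
    # -> 45 tapes, ~35 days; more drives in parallel shorten this proportionally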
Too slow to allow batched backups. Which means you should just make redundant copies at the time of the initial save. Encrypt a copy and send it offsite immediately.
If your storage performance is low then you don't need fat pipes to your external provider either.
They either built this too quickly or there was too much industry corruption perverting the process and the government bought an off the shelf solution that was inadequate for their actual needs.
Let's run the numbers:
LTO-9 is ~$92/tape in bulk. A 4-drive library with 80-slot capacity costs ~$40k* and can sustain about 1 Gbps. It also needs someone to barcode, inventory, and swap tapes once a week, plus an off-site vaulting provider like Iron Mountain. That's another $100k/year. Also, that tape library will need to be replaced every 4-7 years, so say 6 years. And those tapes wear out over X uses and sometimes go bad too. It might also require buying a server and/or backup/DR software. Furthermore, a fire-rated data safe is recommended for about 1-2 weeks' worth of backups and spare media. Budget at least $200k/year for off-site tape backups for a minimal operation. (Let me tell you about the pains of self-destructing SSL2020 AIT-2 Sony drives.)
If backups for this and other critical services were combined, it would probably be cheaper to scale this kind of service rather than reinventing the wheel for just one use case in one department. That would allow for multiple optimizations: network-based backups to nearline storage streamed more directly to tape, many more tape drives, possibly tape silo robot(s), and perhaps a split into 2-3 backup locations, obviating the need for off-site vaulting.
Furthermore, it might be simpler, although more expensive, to operate another hot-/warm-site for backups and temporary business continuity restoration using a pile of HDDs and a network connection that's probably faster than that tape library. (Use backups, not replication because replication of errors to other sites is fail.)
Or the easiest option is to use one or more cloud vendors for even more $$$ (build vs. buy tradeoff).
* Traditionally (~20 years ago), enterprise gear was sold at "retail" prices with around a 100% markup, allowing for up to around a 50% discount when negotiated in large orders. Enterprise gear also had a lifecycle of around 4.5 years; while it might still technically work after that, there wouldn't be vendor support or replacements, so enterprise customers are locked into perpetual planned-obsolescence consumption cycles.
Basically it all boils down to budget. Those engineers knew this was a problem and wanted to fix it, but that costs money. And you know, the bean counters in the treasury are basically like, “well, it works fine, why do we need that fix?”, and the last conservative govt. was in full spending-cut mode. You know what happened there.
A key metric for recovery is the time it takes to read or write an entire drive (or drive array) in full. This is simply a function of the capacity and bandwidth, which has been getting worse and worse as drive capacities increase exponentially, but the throughput hasn't kept up at the same pace.
A typical 2005-era drive might have been 0.5 TB with a throughput of 70 MB/s, for a full-drive transfer time (FDTT) of about 2 hours. A modern 32 TB drive is 64x bigger but has a throughput of only 270 MB/s, which is less than 4x higher. Hence the FDTT is 33 hours!
This is the optimal scenario; things get worse in modern high-density disk arrays that may have 50 drives in a single enclosure with as little as 8-32 Gbps (1 GB/sec to 4 GB/sec) of effective bandwidth. That can push FDTT times out to many days or even weeks.
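A small sketch of those FDTT numbers, plus the shared-enclosure case (the 2 GB/s enclosure figure is illustrative, not from any particular product):

    # Full-drive transfer time: capacity / sustained throughput.
    def fdtt_hours(capacity_tb: float, throughput_mb_s: float) -> float:
        return capacity_tb * 1_000_000 / throughput_mb_s / 3600

    print(f"2005-era 0.5 TB @ 70 MB/s : {fdtt_hours(0.5, 70):5.1f} h")   # ~2 h
    print(f"modern   32 TB @ 270 MB/s : {fdtt_hours(32, 270):5.1f} h")   # ~33 h

    # 50 drives sharing ~2 GB/s of effective enclosure bandwidth (illustrative):
    per_drive = 2000 / 50                                                # 40 MB/s each
    print(f"32 TB @ {per_drive:.0f} MB/s effective : {fdtt_hours(32, per_drive) / 24:.0f} days")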
I've seen storage arrays where the drive trays were daisy chained, which meant that while the individual ports were fast, the bandwidth per drive would drop precipitously as capacity was expanded.
It's a very easy mistake to just keep buying more drives, plugging them in, and never going back to the whiteboard to rethink the HA/DR architecture and timings. The team doing this kind of BAU upgrade/maintenance is not the team that designed the thing originally!
It's Korea, so most likely fear of annoying higher-ups when seeking approvals.
Koreans are weird; for example, they would rather eat a contractual penalty than report problems to the boss.
Some more details in this article: https://www.chosun.com/english/national-en/2025/10/02/FPWGFS...
> The stored data amounts to 858TB (terabytes), equivalent to 449.5 billion A4 sheets.
This attempt at putting it in perspective makes me wonder what would actually put it in perspective. “100M sets of Harry Potter novels” would be one step in the right direction, but nobody can imagine 100M of anything either. Something like “a million movies” wouldn't work because movies are very different from text media in terms of how much information is in one, even if the bulk of the data here is likely media. It's an interesting problem, even if this article's attempt is so bad it's almost funny.
Good article otherwise though, indeed a lot more detail than the OP. It should probably replace the submission. Edit: dang was 1 minute faster than me :)
"equivalent to 50 hard drives" ?
Thanks! We've added that link to the toptext as well.
> The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups.
This is why I don't really want to run my own cloud :)
Actually testing the backups is boring.
That said, once the flames are out, they might actually be able to recover some of it.
Testing backups is boring. If you want exciting, test restores!
Hm, care to elaborate? I kinda like this idea even though I know it shouldn't make much sense, but still: would this have any benefits over testing backups other than the excitement lol
I'm stealing this!
While I am sure a huge portion of valuable work will be lost, I am smirking thinking of management making a call, "So, if there is any shadow IT who has been running mirror databases of valuable infrastructure, we would have a no questions asked policy on sharing that right now".
I know that I have had to keep informal copies of valuable systems because the real source of truth is continually patched, offline, churning, whatever.
Reminds me of when Toy Story 2 was deleted and they found the backups on the laptop of an artist who was working from home.
>artist
technically, it was the supervising technical director.
The only reason this happened (I don't think "working from home" was very common in 1999) was because she had just had a baby! I love this story because it feels like good karma – management providing special accommodations for a new mom saves the film.
It was on an SGI workstation that she lugged home, but yeah, that's pretty much how they recovered most of the files. In the end they barely used the material.
If SK is anything similar to Germany or Japan in how they are digitizing their government processes, you'll probably be able to find paper printouts of all the data that was lost.
The fun part will be finding them, figuring out their relevance, and re-digitizing them in a useful form.
Funny, because the same thing happened in Nepal a few weeks ago. Protestors/rioters burned some government buildings, along with the tech infrastructure within them, so now almost all electronic data is gone.
Would this have been any different if these documents were stored non-electronically though? I understand that the whole point of electronic data is that it can be backed up, but if the alternative were simply an analog system then it would have fared no better.
For paper documents, you'd make at least a few copies for storage at the source, and then every receiver will get his/her own notarized copies.
Electronically, everyone just receives a link to read the document.
Paper records are usually distributed both by agency and by locality.
It would have been better if storage was distributed.
One source,
https://www.nytimes.com/2025/09/13/world/asia/nepal-unrest-a... ("Many of the nation’s public records were destroyed in the arson strikes, complicating efforts to provide basic health care")
Not sure where you got that info. Only physical documents were burned (intentionally by the incumbents, you could argue); the digital backups were untouched.
Anti authoritarian patriots?
Happened in Bladerunner too
And Fight Club
Jim Hacker: How am I going to explain the missing documents to The Mail?
Sir Humphrey: Well, this is what we normally do in circumstances like these.
Jim Hacker: (reading) This file contains the complete set of papers, except for a number of secret documents, a few others which are part of still active files, some correspondence lost in the floods of 1967… Was 1967 a particularly bad winter?
Sir Humphrey: No, a marvelous winter. We lost no end of embarrassing files.
To those wondering where it's from: https://www.imdb.com/title/tt0751825/quotes/?item=qt0238072
The real reason is that humans are way too optimistic in planning and, for some reason, tend to overlook rare but catastrophic risks.
I’m almost sure that the system had some sort of local replication and versioning that was enough to deal with occasional deletions, rollbacks, and single non-widespread hardware failures, so only the very catastrophic scenario of losing all servers at the same time (that for sure wouldn’t happen anytime soon) was uncovered.
At a previous job I was not allowed to do disaster planning with customers, after I told one of them that it was entirely possible to take out both our datacenters with one plane crash. The two locations were a "safe" distance apart, but were also located fairly close to the approach of an airport, and a crashing passenger jet is big enough to take out both buildings.
Apparently I plan for the rather rare catastrophes, and not those customers care about day to day.
However it's also possible that an asteroid could destroy everything or a nuclear war.
But it's extra surprising, because South Korea is a country where every young man is conscripted due to the threat of war with the north. If the conflict is serious enough for that, why hasn't someone thought about losing all the government data in a single artillery strike?
It's hard to believe this happened. South Korea has tech giants like Samsung, and yet this is how the government runs? Is the US government any better?
Software and information technology in Korea just sucks.
Buttons are JPEGs/GIFs, everything runs on Java EE and on vulnerable old webservers, etc. A lot of government stuff supports only Internet Explorer even though it's long dead.
Remember the Log4j vulnerability? A lot of Korean government sites weren't affected because their Java version was too old :)
Don't even get me started on ActiveX.
South Korean IT seemed to be stuck in 2007 just not too long ago, would be surprised if it has changed much in the last few years. Do the websites still require you to use internet explorer?
Yes. The US government requires offsite backups.
They also require routine testing of disaster recovery plans.
I participated in so many different programs over the years with those tests.
Tests that would roll over to facilities across the country
Samsung's software is generally terrible; they're decent at hardware, not software.
I was going to say, Samsung anything immediately makes me assume the software is awful, with a dose of zero privacy and a cloud-enabled doorknob or something.
The first thing that comes to mind when I think of the South Korean government is the storied tradition of physical confrontation in their parliament along with more than a few viral videos of brawls and such over the years. It used to be better in the US, but with the intensity of discord in our government lately, I don't think anyone really knows anymore.
If only our politicians were young and agile enough to get into brawls.. their speed seems to be more sleeping on the job while democracy crumbles.
> The first thing that comes to mind when I think of the South Korean government is the storied tradition of physical confrontation in their parliament along with more than a few viral videos of brawls and such over the years
You're thinking of Taiwan, not South Korea.
Our incompetence in the US is much more distributed. It wouldn't surprise me if the same kind of data isn't backed up, but at least it's dozens of separate federal agencies not-backing up their data in different physical places.
The US government still relies heavily on physical records.
Why is there a "still" in there?
Didn't Elon shut that down?
[0]: https://www.cnbc.com/2025/02/13/company-ripped-by-elon-musk-...
Well, Elon has a recent copy of everything at least.
wow https://x.com/koryodynasty/status/1973956091638890499
> A senior government official overseeing recovery efforts for South Korea's national network crisis has reportedly died by suicide in Sejong.
If the US government and corporate executives had even half this level of shame, we'd have nobody left in those positions!
Not the same country, but another example of a culturally similar attitude towards shame over failure: in Japan in 1985, Japan Airlines Flight 123, a massive Boeing 747 carrying 524 people, lost control shortly after takeoff from Tokyo en route to Osaka.
The plane's aft pressure bulkhead catastrophically ruptured, causing decompression at altitude, severing all four of the plane's hydraulic systems and tearing away most of its vertical stabilizer.
With that, the 747 became basically uncontrollable, and despite tremendously heroic efforts by the pilots to turn back and crash-land it with some modicum of survivability for themselves and the passengers, the flight slammed into a mountain ridge northwest of Tokyo, killing hundreds.
The resulting investigation showed that the failed bulkhead had burst open due to faulty repair welding several years before. The two technicians most responsible for clearing that particular shoddy repair both committed suicide soon after the crash tragedy. One of them even left a note specifically stating "With my death I atone". (paraphrasing from memory here)
I can't even begin to imagine a modern Boeing executive or senior staffer doing the same.
Same couldn't be said for Japanese military officials after the tragedy though, so who knows about cultural tendencies:
Right after the crash, helicopters were making ready to fly to the scene (it was night by this point), and a nearby U.S. military helicopter squadron even offered to fly in immediately. The local JSDF administration, however, stood all these requests down until the following morning, on the claim that such a tremendous crash couldn't have left anyone alive, so why hurry?
As it turned out, quite a number of people had incredibly survived, and slowly died during the night from exposure to cold and their wounds, according to testimony from the four who did survive to be rescued, and doctors who later conducted postmortems on the bodies.
You should look at the previous president of SK. Maybe a few more too... they frequently land in jail...
I'm not sure Yoon Suk Yeol had any shame
https://en.wikipedia.org/wiki/Impeachment_of_Yoon_Suk_Yeol
"suicide" in these circumstances is usually something else altogether.
Even in cases where it is carried out by the person themselves, shame won't be the primary motivation.
I like to think that at least one worker was loafing on a project that was due the next day and there was no way it was going to get done. Their job was riding on it. They got drunk to embrace the doom that faces them, only to wake up with this news. Free to loaf another day!
just his luck
Meanwhile, Estonia has a "data embassy" in Luxembourg: https://e-estonia.com/solutions/e-governance/data-embassy/
TL;DR: Estonia operates a Tier 4 (highest security) data center in Luxembourg with diplomatic immunity. Can actively run critical government services in real-time, not just backups.
This is because everything is in digital form. Essentially all government systems are digital-first and, for the citizen, often digital-only. If the data is lost, there may be no paper records from which to restore the land registry, the business registry (operating agreements, ownership records), etc.
Without an out-of-country backup, an occupation or reversion to a previous status means those records are lost (Estonia has been occupied a lot). With one, much of the government can continue to function, as a government-in-exile, until freedom and independence are restored.
> Estonia follows the “once-only” principle: citizens provide their data just once, and government agencies re-use it securely. The next step is proactive services—where the government initiates service delivery based on existing data, without waiting for a citizen’s request.
I wish the same concept existed in Canada as well. You absolutely have to resubmit all your information every time you make a request. On top of that, federal government agencies still mail each other the information, so what could be done in 1 day takes a whole month to process, assuming the postal service isn't on strike (spoiler: it is now).
I think Canada is one of the worst countries in efficiency and useless bureaucracy among 1st world countries.
I wanted to update some paperwork to add my wife as a beneficiary to some accounts. I go to the bank in person and they tell me “call this number, they can add the beneficiary”. I call the number and wait on hold for 30 minutes and then the agent tells me that they will send me an email to update the beneficiary. I get an email over 24 hours later with a PDF THAT I HAVE TO PRINT OUT AND SIGN and then scan and send back to the email. I do that, but then I get another email back saying that there is another form I have to print and sign.
This is the state of banking in Canada. God forbid they just put a text box on the banking web app where I can put in my beneficiary.
Not to mention our entire health care system still runs on fax!
It blows my mind that we have some of the smartest and best-educated people in the world, with some of the highest GDP per capita in the world, and we cannot figure out how to get rid of paper documents. You should be issued a federal digital ID at birth which is attested through a chain of trust back to the federal government. Everything related to the government should be tied back to that ID.
That is absolutely delightful. Estonia is just _good_ at this stuff. Admirable.
This comment is in some way more interesting than the topic of the article.
Definitely. Especially when considering that there were 95 other systems in this datacentre which do have backups and
> The actual number of users is about 17% of all central government officials
Far from all, and they're not sure what's recoverable yet ("“It’s difficult to determine exactly what data has been lost.”")
Which is not to say that it's not big news ("the damage to small business owners who have entered amounts to 12.6 billion Korean won.” The ‘National Happiness Card,’ used for paying childcare fees, etc., is still ‘non-functional.’"), but to put it a bit in perspective and not just "all was lost" as the original submission basically stated
Quotes from https://www.chosun.com/english/national-en/2025/10/02/FPWGFS... as linked by u/layer8 elsewhere in this thread
Totally, backup disasters are a regular occurence (maybe not to the degree of negligence) but the Estonia DR is wild.
"secured against cyberattacks or crisis situations with KSI Blockchain technology"
hmmmm
Are we actually sure they didn't do due diligence?
These are the individual work files of civil servants. They will overwhelmingly be temporary documents that the civil servants were legally obliged to delete at some point in the last 8 years anyway. Any official filings or communications would have gone to systems of record that were not affected.
This is more a case of a very large fire, probably a once-in-a-decade stroke of bad luck, causing civil servants to lose hours of work on files they were working on. A perfect system could obviously have prevented this and ensured availability, but not without cost.
S. Korea has the most backward infosec requirements. It's wild
Having visited South Korea just last year, one thing that sort of caught me off guard was the lack of Google Maps or any other major navigation system. I wasn't aware, but it turns out anything considered “detailed mapping” infrastructure has to be stored and run on South Korean soil, probably with lots of other requirements. So you're stuck with some shoddy local mapping systems that are just bad.
There may have been a point in time when it made sense, but high-resolution detailed satellite imagery is plenty accessible, and someone could overlay roads and basic planning structure on top of it, especially a foreign nation wishing to invade or whatever it is they're protecting against.
Some argument may be made that it would be a heavy lift for North Korea, but I don't buy it. Incredibly inconvenient for tourists for no obvious reason.
Several other countries have similar requirements with regards to storing and serving maps locally.
If you take a moment to think about it, what's weird is that so many countries have simply resorted to relying on Google Maps for everyday mapping and navigation needs. This has become such a necessity nowadays that relying on a foreign private corporation for it sounds like a liability.
Why didn't you use Kakao Maps or Naver Maps? They're not shoddy and work just fine; even if you don't read Korean, you can quickly guess the UI based on the icons.
>So you’re stuck with some shoddy local mapping systems that are just bad.
What made you think of them as bad? Could you be more specific? I use them almost daily and I find them very good.
In my experience Open Street Maps was very good there.
This is literally comic-book stuff. The plot of the live-action comic-book movie "Danger: Diabolik" [1] has a segment where a country's tax records are destroyed, making it impossible for the government to collect taxes from its citizens.
[1] https://en.wikipedia.org/wiki/Danger:_Diabolik
I'm CTO of a TINY company with pretty much exactly half this much data. I run all storage and offsite backups personally, because I can't afford a full-time sysadmin yet.
And the cost of everything is PAIN to us.
If our building burned down we would lose data, but only the data we are Ok with losing in a fire.
I'd love to know the real reason. It's not some useless tech... it's politics, surely.
The easy solution would be to use something like Amazon S3 to store documents as objects and let them worry about backup; but governments are worried (and rightly so) about the US government spying on them.
Thus, the not-so-easy-but-arguably-better solution would be to self-host an open source S3-compatible object storage solution.
Are there any good open source alternatives to S3?
I recently learned about https://garagehq.deuxfleurs.fr/ but I have no experience using it.
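For what it's worth, Garage (like MinIO, SeaweedFS, or Ceph's RADOS Gateway) exposes an S3-compatible API, so ordinary S3 tooling works against it by pointing at your own endpoint. A minimal sketch with boto3; the endpoint, bucket, and credentials below are placeholders:

    import boto3

    # Point standard S3 tooling at a self-hosted, S3-compatible endpoint
    # (e.g. Garage or MinIO). The endpoint, bucket, and keys are made up.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.backup.example.internal",
        aws_access_key_id="REPLACE_ME",
        aws_secret_access_key="REPLACE_ME",
    )

    s3.upload_file(
        "g-drive-export.tar.zst.gpg",            # already-encrypted archive
        "offsite-backups",                       # bucket
        "2025-10-01/g-drive-export.tar.zst.gpg", # object key
    )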
Goodness, I have over 100TB at home and it cost less than two or three thousand dollars to put in place. That's like $25 per TB.
> The stored data amounts to 858TB (terabytes), equivalent to 449.5 billion A4 sheets.
No, the 858TB amounts to under $25k for the government of the 10th largest economy, of one of the most sophisticated countries on the planet, to put in place.
Two of those would be less than the price of a new Hyundai Grandeur car.
> “It’s daunting as eight years’ worth of work materials have completely disappeared.”
So they're clocking in at around 100TB/year or 280GB a day. It's respectable, but not crazy. It's about 12GB/hr, doable with professional, server level hardware with backup moved over dedicated fiber to an offsite location. Multiply the price 10x and you can SSD the entire thing.
Even with data sovereignty consideration demanding an entirely 100% home grown solution rather than turning to AWS or Azure, there's no excuse. But it's not like the cloud providers don't already have CSAP certification and local, in country, sovereign clouds [1] with multiple geographic locations in country [2]
South Korea is full of granite mountains, maybe its time the government converts one into an offsite, redundant backup vault?
1 - https://erp.today/south-korea-microsoft-azure-first-hypersca...
2 - https://learn.microsoft.com/en-us/azure/reliability/regions-...
~1PB of data, with ingestion at a rate of 12GB per hour, is a tiny amount of data for a developed-world government to manage and back up properly. This is silly. Volume clearly should not have been a hindrance.
Backup operations are often complex and difficult - but then again this has been worked on for decades, and rigorous protocols exist which can and should be adopted.
"However, due to the system’s large-capacity, low-performance storage structure, no external backups were maintained" ... "the G-Drive’s structure did not allow for external backups."
Clearly [in]competence was the single factor here.
This is what happens when you come up with all kind of reasons to do something yourself, which you are not qualified to do, rather than simply paying a vendor to do it for you.
> Backup operations are often complex and difficult
It quickly becomes much less so if you satisfy yourself with very crude methods.
Sure that would be an imperfect backup in many ways but any imperfect backup is always infinitely better than no backup at all.
The most sophisticated countries and companies are smart enough to use the least sophisticated backup methods. SK needs to back up their data to tape; tape libraries cost a bit more than that, but not much. Even if they boat their tapes over to an iron mountain in the US, I can't imagine the equipment and service fees would cost them more than a few hundred grand. They'll be spending more on the headcount to manage the thing.
The operational expenses of this stuff dwarf the hardware cost. For the tape mountain, you need robots to confirm the tapes still work (mean time to detection of device failure and recovery are key for RAID durability computations). So someone needs to constantly repair the robots or whatever.
If I were being paid to manage that data set, I'd probably find two enterprise storage vendors and stick two copies of the data set on each, with a primary and a secondary backup. Enterprise flash has been under a dollar a gigabyte for over a decade, so that's under $1.7M per vendor, amortized over five years. That's $700K per year, and one of the four copies (at 3-4 sites) could be the primary store.
(I can't be bothered to look up current prices, but Moore's law says there have been six capacity doublings since then, and it still applies to flash and networking, so divide my estimate by 2^6 — so, ten-ish grand per year, with zero full-time babysitters required.)
But it would not have been $25k, it would have been 1-2 million for an “enterprise grade” storage solution from Dell or a competitor. Which isn’t much compared with your granite mountain proposal, nor with the wages of 750,000 civil servants, but it’s a lot more than $25k.
The article reads like they actually have a fault-tolerant system to store their data. This is probably a data dump for whatever files they are working with that might have started out as a cobbled-together prototype that just picked up momentum and pushed beyond its limitations. Many such cases not only in government IT...
Looking at the article, my read (which could be wrong) is that the backup was in the same room as the original.
You can buy a 24TB drive on sale for $240 or so.
Sometimes I wonder why I still try and save disk space :-/
Link? Am both curious and skeptical
Azure can only be sovereign to the USA.[1] [2]
[1]: https://www.computerweekly.com/news/366629871/Microsoft-refu... [2]: https://lcrdc.co.uk/industry-news/microsoft-admits-no-guaran...
Theoretically, they still have the primary copies (on each individual person's "cloud-enabled" device).
> The Ministry of the Interior and Safety also issued guidelines to each ministry stating, “All work materials should not be stored on office PCs but should be stored on the G-Drive.”
They very well might have only been saving to this storage system. It was probably mapped as a drive or shared folder on the PC.
Do they? It's not clear if this was two-way sync or access on-demand.
Like, I use Google Drive for Desktop but it only downloads the files I access. If I don't touch a file for a few days it's removed from my local cache.
DR/BCP fail. I guess the old adage that companies which lose all of their data typically go out of business within 6 months doesn't apply when it's the government.
At a minimum, they could've stored the important bits like financial transactions, personnel/HR records, and asset inventory database backups to Tarsnap [0] and shoved the rest in encrypted tar backups to a couple of different providers like S3 Glacier and/or Box.
Business impact analysis (BIA) is a straightforward way of assessing risk: probability of event × cost to recover from event ≈ approximate budget worth spending on mitigation (quick sketch below).
And, PSA: test your backups and DR/BCP runbooks periodically!
0. https://www.tarsnap.com
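A back-of-the-envelope version of that BIA formula (the probabilities and recovery costs below are invented purely for illustration, not estimates for this incident):

    # Annualized exposure = probability of the event in a given year * recovery cost.
    # Spending less than that on mitigation (backups, DR sites) is usually an easy call.
    scenarios = {
        "datacenter fire":      (0.01, 500_000_000),   # (annual probability, recovery cost USD)
        "ransomware":           (0.05, 50_000_000),
        "single array failure": (0.30, 1_000_000),
    }

    for name, (p, cost) in scenarios.items():
        print(f"{name:22s} annualized exposure ~ ${p * cost:,.0f}")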
I was in Korea during the Kakao fire incident and thought it was astounding that they had no failovers. However, I thought it'd be a wake up call!
I guess not.
My guess is someone somewhere is very satisfied that this data is now unrecoverable.
I wonder how many IT professionals were begging some incompetent upper management official to do this the right way, but were ignored daily. You'd think there would be concrete policies to prevent these things...
If I worked there I'd have had a hard time believing there were really no backups. Governments can be very nebulous.
The lack of backups makes my blood boil. However, from my own experience, I want to know more before I assign blame.
The very first "computer guy" job I had starting in about 1990/1991, my mentor gave me a piece of advice that I remember to this day: "Your job is to make sure the backups are working; everything else is gravy."
While I worked in that job, we outgrew the tape backup system we were using, so I started replicating critical data between our two sites (using 14400 bps Shiva NetModems), and every month I'd write a memo requesting a working backup system and explaining the situation. Business was too cheap to buy it.
We had a hard drive failure on one of our servers. I requested permission to void the drive's warranty because I was pretty sure it was a bad bearing; I got it working for a few weeks by opening the case and spinning the platter with my finger to get it started. I made sure a manager was present so that they could understand how wack the situation was - they bought me a new drive, but not the extras I asked for in order to mirror.
After I left that job, a friend of mine called me a month later and told me that they had a server failure and were trying to blame the lack of backups on me; fortunately my successor found my stack of memos.
Yeah. I've seen it. Had one very close call. The thieves took an awful lot of stuff, including the backups; had they taken the next box off the server room rack, the company would have been destroyed. They stole one of our trucks (which probably means it was an inside job) and appear to have worked their way through the building, becoming more selective as they progressed. We are guessing they filled the truck and left.
Did anything change? No.
> fortunately my successor found my stack of memos
Those, ironically, were backed up
A little more informative source:
https://www.datacenterdynamics.com/en/news/858tb-of-governme...
- G-drive stands for Government Drive
- The incident was caused due to Lithium battery fire
- The drive was of 858TB capacity
- No backup because “The G-Drive couldn’t have a backup system due to its large capacity” (!!)
I must say, at least for me personally, when I hear about such levels of incompetence it rings alarm bells in my head, making me think that maybe intentional malice was involved. Like someone higher up set the whole thing up to happen in such a manner because there was a benefit to it we are unaware of. I think this belief maybe stems from a lack of imagination about how stupid humans can really get.
Most people overestimate the prevalence of malice and underestimate the prevalence of incompetence.
What do you make of this? The guy who was in charge of restoring the system was found dead
https://www.thestar.com.my/aseanplus/aseanplus-news/2025/10/...
I'm sure they had dozens of process heavy cybersecurity committees producing hundreds if not thousands of powerpoints and word documents outlining procedures and best practices over the last decade.
There is this weird divide between the certified class of non-technical consultants and the actual techs, who are overworked and pushed to cut corners.
The data seems secure. No cyberthreat actors can access it now. Effective access control: check.
I like the definition of security = confidentiality + integrity + availability.
So confidentiality was maintained but integrity and availability were not.
Ironically, see the phrack article someone linked above
Ironically many of those documents for procedures probably lived on that drive...
Here's a 2024 incident:
> "The outage also hit servers that host procedures meant to overcome such an outage... Company officials had no paper copies of backup procedures, one of the people added, leaving them unable to respond until power was restored."
https://www.reuters.com/technology/space/power-failed-spacex...
I don't know why, but I can't stop laughing. And the great thing is that they will get paid again to write the same thing.
You jest, but I once had a client whose IaC provisioning code was - you guessed it - stored on the very infrastructure that got destroyed.
One of the workers jumped off a building. [1] They say the person was not being investigated for the incident. But I can't help but think he was put under intense pressure to be the scapegoat, given how fucked up Korea can be in situations like this.
For some context on the Korean IT scene: you get pretty good pay and benefits if you work for a big product company, but you will be treated like dogshit inside subcontracting hell if you work anywhere else.
[1] https://www.hani.co.kr/arti/society/society_general/1222145....
> There is a cert and private key for rc.kt.co.kr, South Korea Telecom's Remote Control Service. It runs remote support backend from https://www.rsupport.com. Kim may have access to any company that Korea Telecom was providing remote support for.
> A firefighter cools down burnt batteries at the National Information Resources Service (NIRS) in Daejeon on Sept. 27. [YONHAP]
New caption:
> A firefighter wants to see the cool explosive reaction between water and lithium at the National Information Resources Service (NIRS) in Daejeon on Sept. 27. [YONHAP]
It's bizarre how easy it is to make smart people on HN just assume that people doing something weird are simply low-IQ.
It's almost a weird personality trait: a trained programmer just goes around believing everyone around him doesn't understand which way the wind blows.
A government installation for backups, for a government ruled by a weird religious sect, has no offsite backups, and it goes up in flames? Well, clearly they just weren't smart enough to understand what an off-site backup is.
It's like, wtf guys?
Now don't get me wrong: Occam's razor, they tried to save a few bucks and it all went Pete Tong. But c'mon: carelessness, chance, sure, but I doubt it's all down to stupidity.
Yeah, all this chatter about technologies and processes that could have saved this: you don't think someone in all of Korean government knew about that?
The problem is more likely culture, hierarchy or corruption. Guaranteed several principal security architects have been raising the alarm on this internally, along with much safer, redundant, secure alternatives that came with an increased cost. And decision makers who had a higher rank/social/networking advantage shot them down. Maybe the original storage designer was still entrenched there and sabotaging all other proposals out of pride. Or there's an unspoken business relationship with another department providing resources for that data center that generates kickbacks.
Assuming nobody knows how to do an offsite backup or is plain ignorant of risk over there is arrogant.
It's a common problem in any field that presumably revolves around intellect, since supposedly being smarter gets you further (it may, but it is not enough on its lonesome).
People, in general, severely overestimate their own intelligence and grossly underestimate the intelligence of others.
Consider for a moment that most of the geniuses on hacker news are not even smart enough to wonder whether or not something like IQ is actually a meaningful or appropriate way to measure intelligence, examine the history of this notion, question what precisely it is we mean by that term, how its use can vary with context, etc. etc.
Is there a better word/assessment for "ability"?
Just wondering what it would be, just "success" in a domain?
I agree with you just wondering
I know Korea is a fast-changing place, but while I was there I was taught and often observed that the value of "ppalli ppalli" (hurry hurry) was often applied to mean that a job was better done quickly than right, with predictably shoddy results. Obviously I have no insight into what happened here, but I can easily imagine a group of very hurried engineers feeling the pressure to just be done with their G-Drive tasks and move on to other suddenly urgent things. It's easy to put off preparation for something you don't feel will ever come.
I'm going to check all the smoke detectors in my house tomorrow :D
In Latin America, this is the normal way to erase evidence of corruption...
I would love to know how a fire of this magnitude could happen in a modern data center.
Often poor planning or just lithium based batteries far too close to the physical servers.
OVH's massive fire a couple of years ago in one of the most modern DC's at the time was a prime example of just how wrong it can go.
Assume the PHBs who wouldn't spring for off-site backups (whereas excuses are "free") also wouldn't spring for fire walls, decently trained staff, or other basics of physical security.
Easy: some electrical fault. Look at OVH, with its WOODEN FLOOR and bad management decisions. But of course the servers had automatic backups … in the datacenter in the same building. A few companies lost EVERYTHING and had to close because of this.
Their decade-old NMC li-ion UPSs were placed 60cm away from the server racks.
Allegedly from replacing batteries.
>the G-Drive’s structure did not allow for external backups
That should be classified as willful sabotage. Someone looked at the cost line for having backups in another location and slashed that budget to make numbers look good.
It is very unlikely that the low performance would have prevented any backup; this was slow-changing data. Here, the real difficulty of doing a good, solid backup was taken as an excuse not to do anything at all.
Wow. That is genuinely one of the most terrifying headlines I've read all year.
Seriously, "no backups available" for a national government's main cloud storage? That’s not a simple IT oversight; that’s an epic, unforgivable institutional mistake.
It completely exposes the biggest fear everyone in tech has: putting all the eggs in one big physical basket.
I mean, we all know the rule: if it exists in only one place, it doesn't really exist. If your phone breaks, you still have your photos on a different server, right? Now imagine that basic, common-sense rule being ignored for a country’s central data.
The fire itself is a disaster, but the real catastrophe is the planning failure. They spent millions on a complex cloud system, but they skipped the $5 solution: replicating the data somewhere else—like in a different city, or even just another building across town.
Years of official work, policy documents, and data—just gone, literally up in smoke, because they violated the most fundamental rule of data management. This is a massive, expensive, painful lesson for every government and company in the world: your fancy cloud setup is worthless if your disaster recovery plan is just "hope the building doesn't burn down." It’s an infrastructure nightmare.
Well, I'll be. Backup is a discipline not to be taken lightly by any organization, especially a government. Fire? This is backup 101: files should be backed up and copies should be kept physically apart to avoid losing data.
There are some in this thread pointing out that this would be handled by cloud providers. That's bad thinking - you can't hope for transparent backups; you need to actively maintain discipline over them.
My fear is that our profession has become very amateurish over the past decade and a lot of people are vulnerable to this kind of threat.
People keep pointing the finger at North Korea but personally I suspect a Protoss High Templar causing battery overcharge is more plausible.
After the Kakao fire incident and now this, I struggle to understand how they got so advanced in other areas. This is amateur-hour-level shit.
It is the same in Japan. They are really good for hardware and other "physical" engineering disciplines, but they are terrible when it comes to software and general IT stuff.
Seriously, I work here as an IT guy and I can't stop wondering how they could become so advanced in other areas and stay so backwards in anything software-related except videogames.
Yeah. This is my exact experience too wrt Japan! The Japanese just somehow can't assess or manage the scale, the complexity, the risk, the effort, or the cost of software projects. Working in Japan as a software guy feels like working in a country lagging 30-40 years behind :/
The irony -- so not only was their system hacked ("hosted onsite"), but then it was also burned down onsite with no backups.
In other words: there was no point in the extra security of being onsite, AND the risk of the onsite single point of failure destroyed any evidence.
Pretty much what I'd expect tbh, but no remote backup is insane.
How could you even define that as a ‘cloud’? Sounds like good old client-server at a single site, with no backup whatsoever. Can’t have had very secure systems either.. perhaps they can buy back some of the data off the dark web.. or from their next-door neighbor.
In a world where data centers burn and cables get severed physically, the entire landscape of tradeoffs is different.
Surely there must be something that's missing in translation? This feels like it simply can't be right.
It’s accurate: https://www.chosun.com/english/national-en/2025/10/02/FPWGFS...
I agree. No automated fire suppression system for critical infrastructure with no backup?
That may not be a perfect answer. One issue with fire suppression systems and spinning-rust drives is that the pressure change and noise from a discharge can ‘suppress’ the platters in the drives as well.
At first you think only an incompetent government would do such things, but even OVH did pretty much the same a few years ago and destroyed some companies in the process. A wooden floor in a datacenter, with backups in the same building …
https://www.datacenterdynamics.com/en/news/ovhcloud-fire-rep...
Battery fire is impossible to suppress.
Because it was arson, not an accident
Arson? Sounds increasingly like espionage.
what's the point of a storage system with no back up?
It works fine as long as it doesn't break, and it's cheaper to buy than an equivalently sized system that does have back ups.
Isn't that self-evident? Do you have two microwaves from different batches, regularly tested, solely for the eventuality that one breaks? Systems work fine until some (unlikely) risk manifests...
Idk if this sounds like I'm against backups, I'm not, I'm just surprised by the question
Does anyone have an understanding of what the impact will be of this, i.e., what kind of government impact scale and type of data are we talking about here?
Is this going to have a real impact in the near term? What kind of data are we talking about being permanently lost?
One of the lessons I learned from my Network Administration teacher was that if you're ultimately responsible for it and they say no backups?
You tack on the hours required to do it yourself (this includes the time you must spend actually restoring from the backups to verify integrity, anything less can not be trusted). You keep one copy in your safe, and another copy in a safety deposit box at the bank. Nobody ever has to know. It is inevitable that you will save your own ass, and theirs too.
Shit happens.
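A tiny sketch of the "actually restore and verify" step: hash everything in a restored tree and compare against a manifest written at backup time. The manifest format and paths below are made up for illustration:

    import hashlib
    import pathlib

    def sha256(path: pathlib.Path, bufsize: int = 1 << 20) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    # manifest.txt lines look like: "<sha256>  relative/path/to/file"
    restored = pathlib.Path("/tmp/restore-test")
    failures = 0
    for line in (restored / "manifest.txt").read_text().splitlines():
        digest, rel = line.split("  ", 1)
        if sha256(restored / rel) != digest:
            print("MISMATCH:", rel)
            failures += 1
    print("restore verified" if failures == 0 else f"{failures} files failed verification")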
This is a great fear of mine. I have data backups of backups. A 2 year project is coming to a close soon and I'll be able to relax again. Bring back paper printouts.
if you love it, make a copy of it
this is the kind of thing that is so fundamental to IT that not doing it is at best negligence and at worst intentional malpractice. There is simply no situation that justifies not having backups and I think it might be worth assuming intentionality here, at least for purposes of investigation. It looks like an accident but someone (perhaps several someones, somefew if you will) made a series of shriekingly bad decisions in order to put themselves in a precarious place where an accident could have an effect like this.
There are two types of people: those who do backups, and those who will do backups.
The board of directors should now fire the management over such gross mismanagement. Then the board of directors should be fired for not proactively requiring backups.
This does a half-decent job of breaking down how things were affected: https://youtu.be/j454KF26IWw
I wouldn't be surprised if someone caused this intentionally.
> I wouldn't be surprised if someone caused this intentionally.
What, no backup(s) set up? Hmmm, possibly. But there'd be a paper trail.
Imagine all the scrabbling going on right now - people desperately starting to cover their arses. But chances are, what they need has just burnt down, with no backups.
Is it possible that the fire was started by malicious software, for example by somehow gaining control of UPS batteries' controllers or something similar?
Is there any solution to these kinds of issues other than having multiple backups and praying that not all of them catch fire at the same time?
Backups are best thought of as a multi-dimensional problem, in the sense that copies can be connected along many dimensions. Destroy a backup, and every copy that shares its position along some dimension can be destroyed with it. This means you have to have redundancy in every dimension. That all sounds a bit abstract, so ...
One dimension is space: two backups can be physically close (as happened here). Ergo, backups must be physically separated.
You've heard RAID can't be a backup? Well it sort of can, and the two drives can be physically separated in space. But they are connected in another dimension - time, as in they reflect the data at the same instant in time. So if you have a software failure that corrupts all copies, your backups are toast as you can't go back to a previous point in time to recover.
Another dimension is administrative control. Google Drive, for example, will back up your stuff, and separate it in space and time. But the copies are connected by who controls them. If you don't pay the bill or piss Google off, you've lost all your backups. I swear every week I see a headline saying someone lost their data this way.
Then backups can all be reachable over one internet link, or connected to one electrical grid, or even located in one country that goes rogue. All of those are what I called dimensions: you have to ensure your backups differ along each one.
Sorry, that didn't answer your question. The answer is no. It's always possible all copies could be wiped out at the same time. You are always relying on luck, and perhaps prayer if you think that helps your luck.
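To make the "dimensions" framing concrete, here is a minimal Python sketch; the copies and their attributes are made up purely for illustration. Any dimension along which every copy has the same value is a shared failure mode:

    # Flag failure "dimensions" shared by every backup copy.
    # Copies and attributes below are made up for illustration only.
    DIMENSIONS = ["site", "point_in_time", "admin_control", "network", "power_grid"]

    copies = [
        {"name": "primary", "site": "daejeon", "point_in_time": "live",
         "admin_control": "gov", "network": "gov-wan", "power_grid": "daejeon"},
        {"name": "mirror", "site": "daejeon", "point_in_time": "live",
         "admin_control": "gov", "network": "gov-wan", "power_grid": "daejeon"},
        {"name": "tape-vault", "site": "busan", "point_in_time": "nightly",
         "admin_control": "gov", "network": "offline", "power_grid": "busan"},
    ]

    def shared_dimensions(copies):
        """Dimensions where every copy has the same value: one event can take them all out."""
        return [(d, copies[0][d]) for d in DIMENSIONS
                if len({c[d] for c in copies}) == 1]

    for dim, value in shared_dimensions(copies):
        print(f"WARNING: every copy shares {dim}={value}")
    # With the sample data this prints only: WARNING: every copy shares admin_control=gov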
Interesting way to explain this through multiple dimensions angle.
A management / risk issue and NOT an engineering one.
What info needed to be destroyed and who did it implicate?
> The scale of damage varies by agency. [...] The Office for Government Policy Coordination, which used the platform less extensively,
Amazing
This is amazingly incompetent because all the major enterprise storage arrays support automatic replication to remote arrays.
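Each vendor does this in its own firmware; as a vendor-neutral sketch of the same idea at the file level, a cron-driven job can push changes to a remote site with rsync (the hosts and paths here are hypothetical):

    # Poor man's asynchronous offsite replication: mirror a tree to a remote host with rsync.
    # Hosts and paths are hypothetical; enterprise arrays do the equivalent at the block level.
    import subprocess
    import sys

    SOURCE = "/srv/gdrive/"  # trailing slash: replicate the contents, not the directory itself
    DESTINATION = "backup@dr-site.example.org:/srv/gdrive-replica/"

    def replicate() -> int:
        """Run rsync over SSH and return its exit code (0 means success)."""
        cmd = ["rsync", "-a", "--delete", "--partial", SOURCE, DESTINATION]
        return subprocess.run(cmd).returncode

    if __name__ == "__main__":
        sys.exit(replicate())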
It would be wise for governments to define "backup" as something that is at least 1km away.
Probably farther than that, right? Plenty of natural disasters, including floods and wildfires, can affect an area larger than 1 km.
Farther than 100km..
They were using a private service to manage public infrastructure? One developed by a foreign company?
The G in G-Drive stands for Government, not Google. It tricked me too.
It is not cloud storage if it's not resilient... It's just remote storage.
This is wild. Wilder would be to see that the government runs the same after the loss.
Are we talking about actual portable Thunderbolt 3-connected RAID 5 G-Drive arrays with between 70 and 160TB of storage per array? We use those for film shoots to dump TBs of raw footage. That G-Drive?? The math checks out at 30GB for around 3000 users on a single RAID 5 array. This would be truly hilarious if true.
Insisting on having a SPOF (single point of failure) for... reasons.
Each government should run a drill backup exercise.
Not even one redundant backup? That's unimaginable for me
Guess they'll have to ask China for their backup.
Is it just me, or is this a massively better result than "1PB of government documents containing sensitive data about private individuals was exfiltrated to a hacker group and found for sale"?
I applaud them for honouring their obligation to keep such data private. And encourage them to work on their backup procedures while continuing to honour that obligation.
A sibling comment links to a phrack page (https://phrack.org/issues/72/7_md) about North Korean infiltration in South Korean systems. The timing on that page and the fire make for a possible, though in my opinion wildly unlikely, scenario where either a saboteur started the fire when investigations were supposed to start, or (if you like hacking movies) that a UPS battery was rigged to cause a fire by the spies inside of the South Korean systems.
It's possible that this is all just a coincidence, but the possibility that North Korea is trying to cover their tracks is there.
Don't call it a coverup
Coincidence is God’s way of remaining anonymous.
Wow. Maybe backups would have been a good idea.
Look at OVH a few years ago … they had backups in the same datacenter.
https://www.datacenterdynamics.com/en/news/ovhcloud-fire-rep...
That's not a cloud. That is smoke
This is extraordinarily loony shit. Someone designed a system like this without backups? Someone authorized its use? Someone didn't scream and yell that this was bat and apeshit wacky level crazy? Since 2018? Christ almighty.
No backup, no replica? Such a shame.
When you wish the S. to be an N.
Sounds like this will become a textbook case about backups and disaster planning.
In my twenties I worked for a "company" in Mexico that was the official QNX distributor for Mexico and LatAm. I guess the only reason was that Mexico City's Metro used QNX, and every year they bought a new license, I don't know why. We also did a couple of sales in Colombia I think, but it was a complete shit show. We really just sent them the software by mail, and they had all sorts of issues getting it out of customs. I did get to go to a QNX training in Canada, which was really cool. Never got to use it though.
I think you meant to post this comment here: https://news.ycombinator.com/item?id=45481892
Hmm, yes...I don't see how to move or remove, so...sorry for that
Indistinguishable from crime.
I have full confidence that management will learn nothing from this object lesson.
I mean ... was making backups on the backlog at least? Can they at least point to the work item that was going to get done soonish?
It got pushed a couple sprints and we've got it on the plan for next quarter as long as no new features come in before then.
May be a “fast follow”? Right after launch of the “MVP”?
If it wasn't it most certainly is now
Why, there's nothing left to backup?
Mr Robot was here?
Sometimes it is convenient that there are no backups. Just saying…
TWO IS ONE
ONE IS NONE
Government fires are never a mistake
In 2025 data storage used by nation states, exposed to the internet, has no backups.
No offsite backups. No onsite backups. No usb drives laying around unsecure in a closet. Nothing.
What?
"A source from the Ministry of the Interior and Safety said, “The G-Drive couldn’t have a backup system due to its large capacity” "
:facepalm:
> no back-ups
Top fucking kek. What were they expecting to happen? Like, really? What were they thinking?
Could be incompetence. Highly likely. Or could be…suspect.
At the very bottom of the article, I see this notice:
I like that. It is direct and honest. I'm fine with people using LLMs for natural language related work, as long as they are transparent about it.
Especially since LLM tech was originally developed for translation. That's the original reason so much work was done to create a model that could handle context, and it turned out that was helpful in more areas than just translation.
While LLM usage is just spinning up in other areas, for translation they have been doing this job well for over 5 years now.
Specifically, GNMT came out in 2016, which is 9 years ago.
GNMT used RNN-based seq2seq with attention to do translations; keeping the attention and dropping the recurrence led to transformers, and here we are today.
> While LLM usage is just spinning up in other areas,
Oh?
This is how I’ve done translation for a number of years, even pre-LLM, between the languages I speak natively - machine translation is good enough that it’s faster for me to fix its problems than for me to do it from scratch.
(Whether machine translation uses LLMs or not doesn’t seem especially relevant to the workflow.)
My partner is a pro-democracy fighter for her country of origin (she went to prison for it). She used to translate english articles of interest to her native language for all the fellow-exiles from her country. I showed her Google translate and it blew her mind how much work it did for her. All she had to do was review it and clean it up.
The AI hype train is bs, but there're real and concrete uses for it if you don't expect it to become a super-intelligence.
That footnote does make me question the bilingual reporter's skills in both languages though. If the reporter needs an LLM to help translate they could easily be missing subtle mistranslations.
The final note that all AI-assisted translations are reviewed by the newsroom is also interesting. If they are going to take the time to review it and have enough experience in both languages to verify the translation, why use the LLM for it at all?
> That footnote does make me question the bilingual reporter's skills in both languages though. If the reporter needs an LLM to help translate they could easily be missing subtle mistranslations.
I've done my fair share of translating as a bilingual person, and having an LLM do a first pass at translation saves a TON of time. I don't "need" an LLM, but it's definitely a helpful tool.
> If they are going to take the time to review it and have enough experience in both languages to verify the translation, why use the LLM for it at all?
People generally read (and make minor edits if necessary) much faster than they can write.
If using an LLM can shorten the time the reporter needs to rewrite the whole article in a language they are fluent in but which takes effort to write in, why not?
This gives the reporter more time to work on more articles, and those of us outside Korea get more authentic Korean news that is reviewed by a Korean speaker rather than by Google Translate.
You raise an interesting point about "missing subtle mistranslations". Consider the stakes for this article: this is highly factual news reporting, unlikely to involve complex or subtle grammar. However, if translating an interview, the stakes are higher, as people use many idiomatic expressions when speaking their native language. Thinking deeper: the highest (cultural) stakes I can think of are in translating novels, which are full of subtle meanings.
The reporter does not need the LLM, but it's often faster to review/edit a machine translation than doing the whole translation by yourself
> It was then edited by a native English-speaking editor.
Two different editors.
But as others mentioned, this is helpful even for the same editor to do.
As long as the LLM doesn't hallucinate stuff when translating, by generating text that is inaccurate or even completely fabricated.
Why would you not be fine with it?
You probably don’t want to read news websites which are nothing but LLM output without a journalist reviewing the articles. Unless you’re a fan of conspiracy theories or ultra-aligned content.
So just a blanket message at the bottom of the page "anything and everyone you read here might be total bullshit"
FWIW that happens sometimes with traditional reporting too. At the end of the day, it's just a matter of degree, and to be truly informed you need to be willing to question the accuracy of your sources. As the parent comment said, at least they're being transparent, which isn't even always the case for traditional reporting.
> I'm fine with people using LLMs for natural language related work
phew I'm relieved you're okay with people using modern tools to get their job done
It's still worse than useless:
https://www.bloodinthemachine.com/p/ai-killed-my-job-transla...
I really don't get this take where people try to downplay AI the most where it is obviously having the most impact. Sure. A billion people are supposed to go back to awful machine translation so that a few tens of thousands can have jobs that were already commodity.
I have sympathy for those affected but this article is disingenuous. I speak Spanish and have just gone to 3 or 4 Spanish news sites, and passed their articles through to ChatGPT to translate "faithfully and literally, maintaining everything including the original tone."
First it gave a "verbatim, literal English translation" and then asked me if I would like "a version that reads naturally in English (but still faithful to the tone and details), or do you want to keep this purely literal one?"
Honestly, the English translation was perfect. I know Spanish, I knew the topic of the article and had read about it in the NYTimes and other English sources, and I am a native English speaker. It's sad, but you can't put the toothpaste back in the tube. LLMs can translate well, and the article saying otherwise is just not being intellectually honest.
I see some comments about North Korean hacking, so I feel I need to clear up some misconceptions.
First, (as you guys have seen) South Korea's IT security track record is not great. Many high-profile commercial sites have been hacked. If a government site was hacked by North Korea, it won't be the first, and while it would be another source of political bickering and finger-pointing, it's likely to blow over in a month.
In fact, given that SK's president Lee started his term in June after his predecessor Yoon's disastrous attempt at overthrowing the constitution, Lee could easily frame this as proof of the Yoon admin's incompetence.
But deliberately setting fire to a government data center? Now that's a career-ending move. If that's found out, someone's going to prison for the rest of their life. Someone would have to be really desperate to attempt that kind of thing. But what could be so horrible that they would rather risk everything to burn the evidence? Merely "we got hacked by North Korea" doesn't cut it.
Which brings us to the method. A bunch of old lithium batteries, overdue for replacement, and predictably the job was sold to the lowest bidder - and the police know the identity of everyone involved in the job and are questioning them as we speak.
So if you are the evil perpetrator, either you bribed one of the lowest-wage workers to start a fire (and that guy is being questioned right now), or you just hoped one of the age-old batteries would randomly catch fire. Neither sounds like a good plan.
Which brings us to the question "Why do people consider that plausible?" And that's a doozy.
Did I mention that President Yoon almost started a coup and got kicked out? Among the countless stupid things he did, he somehow got hooked on election conspiracy theories that say South Korea's election commission was infiltrated by Chinese spies (along with major political parties, newspapers, courts, schools, and everything else) and that they cooked the numbers to make the (then incumbent) People Power Party lose the congressional election of 2024.
Of course, the theory breaks down the moment you look closely. If Chinese spies had that much power, how come they let Yoon win his own election in 2022? Never mind that South Korea uses paper ballots, and every ballot at every polling place is counted under the watch of representatives from multiple parties. To change the numbers at one counting place, you'd have to bribe at least a dozen people. Good luck doing that at a national scale.
But somehow that doesn't deter those devoted conspiracy theorists, and now there are millions of idiots in South Korea who shout "Yoon Again" and believe our lord and savior Trump will come to Korea any day now, smite Chinese spy Lee and the communist Democratic Party from their seats, and restore Yoon to his rightful place in the presidential office.
(Really, South Korea was fortunate that Yoon had the charisma of a wet sack of potatoes. If he were half as good as Trump, who knows what would have happened ...)
So, if you listen to the news from South Korea, and somehow there's a lot of noise about Chinese masterminds controlling everything in South Korea ... well now you know what's going on.
You lost me at "Yoon overthrowing the constitution."
[stub for offtopicness]
https://mastodon.social/@nixCraft/113524310004145896
Copy/paste:
7 things all kids need to hear
1 I love you
2 I'm proud of you
3 I'm sorry
4 I forgive you
5 I'm listening
6 RAID is not backup. Make offsite backups. Verify backup. Find out restore time. Otherwise, you got what we call Schrödinger backup
7 You've got what it takes
Brilliant.
This deserves its own HN submission. I submitted it but it was flagged due to the title.
Thank you for sharing it on HN.
Technically the data is still in the cloud
I've been putting off a cloud to cloud migration, but apparently it can be done in hours?
Lossy upload though
Unfortunately, the algorithm to unhash it is written in smoke signals
Cloud of smoke, amirite.
The cloud has materialized
Should have given him back his stapler
I don't get it. Can you please explain the reference?
Or a piece of cake.
>the G-Drive’s structure did not allow for external backups.
Ah, the so-called Schrödinger's drive. It's there unless you try to copy it.
repeat after me:
multiple copies; multiple locations; multiple formats.
Good example of a Technology trap
They should ask if North has a backup.
Watching Mr. Robot and seeing the burned batteries the same time...
Well that works out doesn’t it? Saves them from discovery.
This is the reason the 3, 2, 1 rule for backing up exists.
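For anyone who hasn't seen it: keep at least 3 copies of the data, on 2 different kinds of media, with 1 copy offsite. A tiny Python sketch, with a made-up inventory, of what checking that looks like:

    # Check a (made-up) backup inventory against the 3-2-1 rule:
    # >= 3 copies, on >= 2 different media types, with >= 1 copy offsite.
    copies = [
        {"name": "live data", "medium": "ssd", "offsite": False},
        {"name": "local NAS", "medium": "hdd", "offsite": False},
        {"name": "cloud archive", "medium": "object-storage", "offsite": True},
    ]

    def satisfies_3_2_1(copies) -> bool:
        enough_copies = len(copies) >= 3
        enough_media = len({c["medium"] for c in copies}) >= 2
        has_offsite = any(c["offsite"] for c in copies)
        return enough_copies and enough_media and has_offsite

    print("3-2-1 satisfied" if satisfies_3_2_1(copies) else "3-2-1 NOT satisfied")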
We will learn nothing
Someone found the literal HCF instruction.
Yikes. That is a nightmare scenario.
Well, now they'll have to negotiate with North Korea to get these backups..
Now imagine they had a CBDC.
I thought most liberal governments gave up on those.
I seem to have misplaced my tiny violin
I thought clouds could not burn (:
They are clouds of smoke to begin with. The smoke from the joints of those who believed that storing their data somewhere out of their control was a good idea!
"The day the cloud went up in smoke"
They might be singing this song now. (To the tune of 'Yesterday' by the Beatles).
For the German enjoyers among us I recommend also this old song: https://www.youtube.com/watch?v=jN5mICXIG9M
No problem — I'm sure their Supremely nice Leader up north kept a backup. He's thoughtful like that...
LOL
nice
The Egyptians send their condolences.
Has there been a more recent event, or are you referring to Alexandria?
touché
Hope this happens to Altman’s data centers.
Too bad this can't happen everywhere.
Well it's really in the cloud(s) now! /s
No offsite backups is a real sin; it sounds like a classic case where the people controlling the money thought 'cloud' automatically meant AWS-level redundancy, when instead they had a fancy centralized datacenter with insufficient backups.
> all documents be stored exclusively on G-Drive
Does G-Drive mean Google Drive, or "the drive you see as G:"?
If this is Google Drive, what they had locally were just pointers (for native Google Drive docs), or synchronized documents.
If this means the drive letter a network storage system was mapped to, this is a weird way of presenting the problem (I am typing on the black keyboard at the wooden table, just so you know).
G-drive was simply the name of the storage system
The name G-Drive is said to be derived from the word ‘government’.
It's now derived from the word 'gone'
Mindblowing. Took a walk. All I can say is that if business continues "as usual" and the economy and public services remain largely unaffected, then either there were local copies of critical documents, or you can fire a lot of those workers; either way, the "stress test" was a success.
How do you come to the conclusion that because things work without certain documents that you can start laying off workers?
The fire started on 26th September and news about it reached HN only now. I think that says something about how disruptive this accident really was to daily life in South Korea.
>or you can fire a lot of those workers
Sometimes things can seem to run smoothly for years when neglected... until they suddenly no longer run smoothly!
Yeah you can do the same with your car too - just gradually remove parts and see what's really necessary. Seatbelts, horn, rear doors? Gone. Think of the efficiency!
Long term damage, and risk are two things that don't show up with a test like this. Also, often why things go forward is just momentum, built from the past.
“Final reports and official records submitted to the government are also stored in OnNara, so this is not a total loss”.
Surely having human-resource backups will also help with disaster recovery
I was smirking at this until I remembered that I have just one USB stick as my 'backup'. And that was made a long time ago.
Recently I have been thinking about whether we actually need governments, nation states, and all of the hubris that goes with them, such as new media. Technically this means 'anarchism', with everyone running riot and chaos. But that is just a big fear; the more I think through the 'no government' idea, the less ludicrous it sounds. Much can be devolved to local government, and so much else isn't really needed.
South Korea's government has kind-of deleted itself, and my suspicion is that, although it is a bad day for some, life will go on and everything will be just fine. In time some might even be relieved that they don't have this vast data store any more. Regardless, it is an interesting case study for my thoughts on the benefits of no government.
Government is whatever has a monopoly on violence in the area you happen to live. Maybe it’s the South Korean government. Maybe it’s a guy down the street. Whatever the case, it’ll be there.