Comment by StopDisinfo910

9 days ago

The issue here is not refusing to use a foreign third party. That makes sense.

The issue is mandating the use of remote storage and not backing it up. That’s insane. Backups are about the most basic preparation you can do. They’re recommended even to the smallest of companies specifically because fire is a risk.

That’s gross mismanagement.

This. Speaking specifically from the IT side of things, an employer or customer refusing to do backups is the biggest red flag I can get, an immediate warning to run the fuck away before you get blamed for their failure, stego-tech kind of situation.

That being said, I can likely guess where this ends up going:

* Current IT staff and management will almost certainly be scapegoated for “allowing this to happen”, despite the program in question (G-DRIVE) having existed in some capacity since 2017.

* Nobody in government will sufficiently question what technical reason was given to justify the lack of backups, why that was never addressed, why the system went live with such a glaring oversight, etc., because that would mean holding the actual culprits accountable for mismanagement

* Everyone involved is unlikely to find work again anytime soon once names are bandied about in investigations

* The major cloud providers will likely win several contracts for “temporary services” that in actuality strip the sovereignty the government had in managing its own system, even if they did so poorly

* Other countries will use this to justify outsourcing their own sovereign infrastructure to private enterprise

This whole situation sucks ass because nothing good is likely to come of it, other than maybe a handful of smart teams led by equally competent managers using this to get better backup resources for themselves.

  • > * Everyone involved is unlikely to find work again anytime soon once names are bandied about in investigations

    They might (MIGHT) get fired from their government jobs, but I'll bet they land in consulting shops because of their knowledge of how the government's IT teams operate.

    I'll also bet the internal audit team slides out of this completely unscathed.

    • > I'll also bet the internal audit team slides out of this completely unscathed.

      They really, really shouldn't. However, if they were shouted down by management (an unfortunately common experience) then it's on management.

      The trouble is that you can either be effective at internal audit or popular, and lots of CAEs choose the wrong option (but then, people like having jobs, so I dunno).

      6 replies →

  • I abhor the general trend of governments outsourcing everything to private companies, but in this case, a technologically advanced country’s central government couldn’t even muster up the most basic of IT practices, and as you said, accountability will likely not rest with the people actually responsible for this debacle. Even a nefarious cloud services CEO couldn’t dream up a better sales case for the wholesale outsourcing of such infrastructure in the future.

    • I'm with you. It's really sad that this provides such a textbook case of why not to own your own infrastructure.

      Practically speaking, I think a lot of what is offered by Microsoft, Google, and the other big companies selling into this space is vastly overpriced and way too full of lock-in, but taking this stuff in-house without sufficient know-how and maturity is even more foolish.

      It's like deciding not to hire professional truck drivers, but then, instead of at least hiring people who can basically drive a truck, hiring someone who doesn't even know how to drive a car.

      1 reply →

    • Aside from data sovereignty concerns, I think the best rebuttal to that would be to point out that every major provider contractually disclaims liability for maintaining backups.

      Now, sure, there is AWS Backup and Microsoft 365 Backup. Nevertheless, those are backups in the same logical environment.

      If you’re a central government, you still need to be maintaining an independent and basically functional backup that you control.

      I own a small business of three people and we still run Veeam for 365 and keep backups in multiple clouds, multiple regions, and on disparate hardware.
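
      (A minimal sketch of the idea of keeping independent copies you control, assuming two hypothetical S3-compatible endpoints and bucket names; an illustration of the pattern, not the commenter's actual Veeam setup.)

        # Minimal sketch: push the same nightly archive to two independently
        # operated object stores and sanity-check each upload. Endpoints,
        # buckets, and the archive path are hypothetical placeholders;
        # credentials are assumed to come from the environment.
        import os
        import boto3

        ARCHIVE = "/backups/nightly.tar.zst"
        DESTINATIONS = [
            {"endpoint": "https://s3.provider-a.example", "bucket": "org-backups-a"},
            {"endpoint": "https://s3.provider-b.example", "bucket": "org-backups-b"},
        ]

        for dest in DESTINATIONS:
            s3 = boto3.client("s3", endpoint_url=dest["endpoint"])
            key = os.path.basename(ARCHIVE)
            s3.upload_file(ARCHIVE, dest["bucket"], key)
            # Verify the object landed and its size matches the local archive.
            head = s3.head_object(Bucket=dest["bucket"], Key=key)
            assert head["ContentLength"] == os.path.getsize(ARCHIVE)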

    • One side effect of the outsourcing strategy is that it underfunds internal tech teams, which then makes them less effective at both competing against and managing outsourced capabilities.

  • There's a pretty big possibility it comes down to acquisition and cost savings from the politicians in charge of the purse strings. I can all but guarantee that the systems administrators and even technical managers suggested, recommended, and all but begged for the resources for a redundant/backup system in a separate physical location, and were denied because it would double the expense.

    This isn't to rule out major ignorance within the technology departments themselves. Having worked in/around govt projects a number of times, you will see some "interesting" opinions and positions, especially around (mis)understanding security.

    • By definition, if one department is given a hard veto, then there will always be a possibility that the combined work of all the other departments amounts to nothing, or even has a net negative impact.

      The real question then is more fundamental.

  • I mean, it should be part of the due diligence of any competent department trying to use this G-Drive. If it says there are no backups, that means it could only be used as temporary storage, or maybe as a backup destination.

    It's negligence all the way down, not just by the G-Drive designers but by the customers as well.

Backups should be far away, too. Apparently some companies lost everything on 9/11 because their backups were in the other tower.

  • Some foolishly believed that the twin towers were invincible after the 1993 WTC bombing.

    Before 9/11, most DR (disaster recovery) sites were in Jersey City, NJ just across the river from their main offices in WFC or WTC, or roughly 3-5 miles away. After 9/11, the financial industry adopted a 50+ miles rule.

    • Jersey City was still fine, and 50 miles can be problematic for certain types of backup (failover) protocols. Regular tape backups would be fine, but secondary databases couldn't be that far away (at least not at the time). I remember my boss at WFC saying that the most traffic over the data lines was in the middle of the night due to backups, not when everybody was in the office.

      20 replies →

    • > Before 9/11, most DR (disaster recovery) sites were in Jersey City, NJ just across the river from their main offices in WFC or WTC, or roughly 3-5 miles away. After 9/11, the financial industry adopted a 50+ miles rule.

      IIRC, multiple IBM mainframes can be set up so they run and are administered as a single system for DR, but there are distance limits.

      3 replies →

    • >Some foolishly believed that the twin towers were invincible after the 1993 WTC bombing.

      I was told right after the bombing, by someone with a large engineering firm (Schlumberger or Bechtel), that the bombers could have brought the building down had they done it right.

  • Funnily enough, Germany has laws about where you are allowed to store backups, precisely because of these kinds of issues. Fire, flood, earthquake, tornado, you name it: backups need to be stored with appropriate security in mind.

  • They deserved to lose everything... except the human lives, of course.

    That's like storing lifeboats in the bilge section of the ship, so they won't get damaged by storms.

Nothing increases the risk of servers catching fire like government investigators showing up to investigate allegations that North Korea hacked the servers.

  • Or investigations into a major financial scandal in a large French bank!

    (While the Credit Lyonnais was being investigated in the 90s, both the HQ and the site where they stored their archives were destroyed by fire within a few months)

  • >This file contains the complete set of papers, except for a number of secret documents, a few others which are part of still active files, some correspondence lost in the floods of 1967...

    >Was 1967 a particularly bad winter?

    >No, a marvellous winter. We lost no end of embarrassing files.

> The issue here is not refusing to use a foreign third party. That makes sense.

For anyone else who's confused, G-Drive means Government Drive, not Google Drive.

> The issue here is not refusing to use a foreign third party. That makes sense.

Encrypt before sending to a third party?

  • Of course you'd encrypt the data before uploading it to a third party, but there's no reason why that third party should be under the control of a foreign government. South Korea has more than one data center they can store data in; there's no need to trust other governments with every byte of data you've gathered, even if there are no known backdoors or flaws in your encryption mechanism (which I'm sure some governments have been looking into for decades).

    • There is a reason that NIST recommends new encryption algorithms from time to time. If you get a copy of ALL government data, then in 20 years you might be able to break the encryption and gain access to ALL government data from 20 years ago, no matter how classified it was, if it was stored in that cloud. Such data might still be valuable, because not all data is published after some period.

      5 replies →

  • Because even when you encrypt, the foreign third party can still lock you out of your data by simply switching off the servers.

  • Why make yourself dependent on a foreign country for your own sensitive data?

    You have to integrate the special software requirements into any cloud storage anyway, and hosting a large number of files isn't an insurmountable technical problem.

    If you can provide the minimal requirements like backups, of course.

    • Presumably because you aren't capable of building what that foreign country can offer you yourself.

      Which they weren't. And here we are.

  • > Encrypt before sending to a third party?

    That sounds great, as long as nobody makes any mistakes. It could be a bug in the RNG that generates the encryption keys. It could be a software or hardware defect that leaks information about the keys (IIRC, some cryptographic systems are really sensitive to this: a single bit flip during encryption can make it possible to recover the private key). It could be someone carelessly leaving the keys in an object storage bucket or source code repository. Or it could be deliberate espionage to obtain the keys.
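
    (To make "encrypt before sending" concrete: a minimal client-side encryption sketch using AES-256-GCM from the Python cryptography package. The key handling, which is exactly what the failure modes above are about, is deliberately left out; the file name is a hypothetical placeholder.)

      # Minimal sketch: encrypt a file client-side before handing it to any
      # third-party storage, so the provider only ever sees ciphertext.
      # Losing or leaking `key` is the failure mode described above.
      import os
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      def encrypt_for_upload(path: str, key: bytes) -> bytes:
          """Return nonce + ciphertext for the file at `path` (AES-256-GCM)."""
          nonce = os.urandom(12)          # 96-bit nonce, unique per encryption
          with open(path, "rb") as f:
              plaintext = f.read()
          return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

      key = AESGCM.generate_key(bit_length=256)     # in practice: from your own KMS/HSM
      blob = encrypt_for_upload("records.db", key)  # hypothetical file; upload `blob`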

It only makes sense if you are competent enough to manage data, and I mean any part of it, forever. It's not impossible, of course, but it is really not as trivial as the self-host crowd makes it out to be, if you absolutely need a certain number of 9s of reliability. There is a reason why AWS etc. can exist. I am sure the cloud market is not entirely reasonable, but it is certainly far more reasonable than relying on some mid consultant to do this for you at this scale.

Yeah, the whole supposed benefit of an organization using cloud storage is to prevent stuff like this from happening. Instead, they managed to make the damage far worse by centralizing the data and thereby increasing the amount lost.

The issue is that without a profit incentive, of course it isn’t X (backed up, redundant, highly available, or whatever other aspect gets optimized away by accountants).

Having worked a great deal inside AWS on these things: AWS provides literally every conceivable level of customer-managed security, down to customer-owned and customer-keyed data centers operated by AWS, with master-key HSMs purchased and owned by the customer, customer-managed key hierarchies at all levels, and detailed audit logs of everything done by everything, including AWS itself (a minimal sketch of the customer-managed-key idea is below). The security assurance of AWS is far and away beyond what even the most sophisticated state-actor infrastructure does, and it is more modern to boot, because its profit incentive drives that.

Most likely this was less about national security than about nationalism. The two are easily confused, but that’s fallacious. And they earned the dividends of fallacious thinking.
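
(For the customer-managed-key point above, a minimal envelope-encryption sketch using a KMS key via boto3. The key alias is a hypothetical placeholder, and this is only one of the arrangements described, not a full reconstruction of them.)

  # Envelope encryption with a customer-managed KMS key: KMS returns a data
  # key, data is encrypted locally, and only the *wrapped* data key is stored
  # next to the ciphertext. Each KMS call lands in the audit trail.
  import os
  import boto3
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  kms = boto3.client("kms")
  CMK_ID = "alias/org-master-key"   # hypothetical customer-managed key alias

  def seal(plaintext: bytes) -> dict:
      dk = kms.generate_data_key(KeyId=CMK_ID, KeySpec="AES_256")
      nonce = os.urandom(12)
      ct = AESGCM(dk["Plaintext"]).encrypt(nonce, plaintext, None)
      return {"wrapped_key": dk["CiphertextBlob"], "nonce": nonce, "ciphertext": ct}

  def unseal(blob: dict) -> bytes:
      key = kms.decrypt(CiphertextBlob=blob["wrapped_key"])["Plaintext"]
      return AESGCM(key).decrypt(blob["nonce"], blob["ciphertext"], None)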

Call me a conspiracy theorist, but this kind of mismanagement is intentional by design so powerful people can hide their dirty laundry.

  • Never attribute to malice what can be attributed to stupidity.

    There was that time when some high profile company's entire Google Cloud account was destroyed. Backups were on Google Cloud too. No off-site backups.

    • One of the data integrity people sadly committed suicide as a result of this fire, so I am also thinking this was an incompetence situation (https://www.yna.co.kr/view/AKR20251003030351530).

      For the budget spent, you’d think they would clone the setup in Busan and sync it daily or something like this in lieu of whatever crazy backup they needed to engineer but couldn’t.

    • > Never attribute to malice what can be attributed to stupidity.

      Any sufficiently advanced malice is indistinguishable from stupidity.

      I don't think there's anything that can't be attributed to stupidity, so the statement is pointless. Besides, it doesn't really matter whether you call an action stupidity when the consequences are indistinguishable from those of malice.

      2 replies →

    • Hanlon's Razor is such an overused meme/trope that it's become meaningless.

      It's a fallacy to assume that malice is never a form of stupidity/folly. An evil person fails to understand what is truly good because of some kind of folly, e.g. refusing to internally acknowledge the evil consequences of evil actions. There is no clean evil-vs-stupid dichotomy. E.g., is a drunk driver who kills someone stupid or evil? The dangers of drunk driving are well known, so what about both?

      Additionally, we are talking about a system/organization, not a person with a unified will/agenda. There could indeed be an evil person in an organization who wants the organization to do stupid things (not back up properly) in order to be able to hide his misdeeds.

      2 replies →

    • You have to balance that against how far you can expect human beings to lower their standards when faced with bureaucratic opposition. No backups on a key system would increase the likelihood of malice versus stupidity, since the importance of backups has been known to IT staff, regardless of role and seniority, for only 40 years or so.

I very seriously doubt that the US cares about South Korea's deepest, darkest secrets that much, if at all.

Not using a cloud provider is asinine. You can use layered encryption so that the expected lifetime of the cryptography outlasts the value of the data (a sketch of one layered approach is below)... and the US government itself stores data on all 3 of them, to my knowledge.

I say US because the only other major cloud providers I know of are in China, and they do have a vested interest in South Korean data, presumably for NK.
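
(A minimal sketch of what "layered encryption" can mean here: two independent AEAD ciphers under independently generated keys, so breaking or backdooring either one alone is not enough. An illustration of the idea, not a prescription.)

  # Layered (cascade) encryption sketch: AES-256-GCM inside ChaCha20-Poly1305,
  # with two independent 256-bit keys. An attacker who someday breaks one
  # primitive, or steals one key, still faces the other layer.
  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

  def layered_encrypt(data: bytes, key_inner: bytes, key_outer: bytes) -> bytes:
      n1, n2 = os.urandom(12), os.urandom(12)
      inner = n1 + AESGCM(key_inner).encrypt(n1, data, None)
      return n2 + ChaCha20Poly1305(key_outer).encrypt(n2, inner, None)

  key_inner = AESGCM.generate_key(bit_length=256)
  key_outer = ChaCha20Poly1305.generate_key()     # always returns a 256-bit key
  blob = layered_encrypt(b"archive bytes", key_inner, key_outer)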

  • It's quite wild to think that the US wouldn't want access to their data on a plate through AWS/GCP/Azure. You must not be aware of the last decade of news when it comes to the US and security.

    • The US and South Korea are allies, and SK doesn't have much particular strategic value that I'm aware of? At least not anything they wouldn't already be sharing with the US?

      Can you articulate what particular advantages the US would be pursuing by stealing SK secret data? (Assuming it was not sufficiently protected on AWS/GCP to prevent this, and assuming that platform security features would have to be defeated to extract it; that is a lot of risk on the US's side to go after this data if they are found out in this hypothetical, I might add, so "they would steal whatever just to have it" is doubtful to me.)

      2 replies →

[flagged]

  • As a sysadmin at a company that provides fairly sensitive services, I find online cloud backups way too slow for the purpose of protecting against something like the server room being destroyed by a fire. Even something like spinning disks at a remote location feels like a risk, as files would need to be copied onto faster disks before services could be restored, and that copying would take precious time during an emergency. When downtime means massive losses of revenue for customers, being down for hours or even days while waiting for the download to finish is not going to be accepted.

    Restoring from cloud backups is one of those war stories I occasionally hear, including the occasional FedEx solution of sending the backup disk by carrier.

    • Many organizations are willing to accept the drawbacks of cloud backup storage because it’s the tertiary backup in the event of physical catastrophe. In my experience those tertiary backups are there to prevent the total loss of company IP should an entire site be lost. If you only have one office and it burns down, work will be severely impacted anyway.

      Obviously the calculus changes with maximally critical systems where lives are lost if the systems are down or you are losing millions per hour of downtime.

    • For truly colossal amounts of data, FedEx has more bandwidth than fiber. I don’t know if any cloud providers will send you your stuff on physical storage, but most will allow you to send your stuff to them on physical storage, e.g. AWS Snowball.

      There are two main reasons why people struggle with cloud restore:

      1. Not enough incoming bandwidth. The cloud’s pipe is almost certainly big enough to send your data to you. Yours may not be big enough to receive it.

      2. Cheaping out on storage in the cloud. If you want fast restores, you can’t use the discounted, reduced-redundancy, low-performance Glacier tier. You will save $$$ right up until the emergency where you need it. Pay for the flagship storage tier (normal AWS S3, for example) or splurge on whatever cross-region redundancy offering they have. Then you only need to worry about problem #1.
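
      (Rough numbers for point 1, assuming a hypothetical 500 TB backup set and ideal sustained line rates; bandwidth really is the whole game.)

        # Back-of-envelope restore times for a hypothetical 500 TB backup set.
        data_bits = 500 * 1e12 * 8                # 500 TB -> bits

        for gbps in (1, 10, 100):
            days = data_bits / (gbps * 1e9) / 86400
            print(f"{gbps:>3} Gbps: {days:.1f} days")

        # ~46 days at 1 Gbps, ~4.6 days at 10 Gbps, ~0.5 days at 100 Gbps,
        # which is why "ship the disks" (Snowball-style) is still a thing.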

      2 replies →

    • In one scenario, with offsite backups ("in the clown" or otherwise): "We had a fire at our datacenter, and there will be some downtime while we get things rolling again."

      In the other scenario, without offsite backups ("in the clown" or otherwise): "We had a fire at our datacenter, and that shit's just gone."

      Neither of these are things that are particularly good to announce, and both things can come with very severe cost, but one of them is clearly worse than the other.

    • SK would be totally fine with that though because that means there would eventually be recovery!

    • You're not designing to protect from data loss, you're designing to protect from downtime.

  • How’s that? Using encryption, which is known to have backdoors and is vulnerable to nation state cracking?

    • Can you provide an example of a commonly used cryptography system that is known to be vulnerable to nation state cracking?

      As for backdoors, they may exist if you rely on a third party, but it's pretty hard to backdoor the relatively simple algorithms used in cryptography.

      6 replies →

    • >Using encryption, which is known to have backdoors and is vulnerable to nation state cracking?

      WTF are you talking about? There are absolutely zero backdoors of any kind known to be in any standard open-source encryption system, and symmetric cryptography at 256 bits or more is not subject to cracking by anyone or anything, not even if general-purpose quantum computers prove doable and scalable. Shor's algorithm applies to public-key cryptography, not symmetric, where the best that can be done is Grover's quantum search for a square-root speedup. You seem to be crossing a number of streams here in your information.
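
      (Putting numbers on the Grover point: the quantum square-root speedup leaves a 256-bit key with an effective ~128-bit search space, which is still hopeless to brute-force. A quick back-of-envelope check, assuming a wildly generous 10^18 operations per second:)

        # Grover's search cuts brute force on an n-bit key from 2**n to
        # roughly 2**(n/2) operations, so AES-256 still costs ~2**128 work.
        effective_ops = 2 ** 128
        ops_per_second = 1e18            # generous: a billion billion ops/sec
        seconds_per_year = 3.15e7

        years = effective_ops / ops_per_second / seconds_per_year
        print(f"{years:.2e} years")      # ~1.1e13 years, ~800x the age of the universe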

      16 replies →