Comment by kristianc
9 days ago
The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information will be keeping their head low for a few days then...
"The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups."
This is absolutely wild.
The issue here is not refusing to use a foreign third party. That makes sense.
The issue is mandating the use of remote storage and not backing it up. That's insane. It's the most basic preparation there is, recommended to even the smallest of companies precisely because fire is a risk.
That’s gross mismanagement.
This. Speaking specifically from the IT side of things, an employer or customer refusing to do backups is the biggest red flag I can get, an immediate warning to run the fuck away before you get blamed for their failure, stego-tech kind of situation.
That being said, I can likely guess where this ends up going:
* Current IT staff and management are almost certainly scapegoated for “allowing this to happen”, despite the program in question (G-DRIVE) existing since 2017 in some capacity.
* Nobody in government will sufficiently question what technical reason was given to justify the lack of backups, why that was never addressed, or why the system went live with such a glaring oversight, because that would mean holding the actual culprits accountable for mismanagement
* Everyone involved is unlikely to find work again anytime soon once names are bandied about in investigations
* The major cloud providers will likely win several contracts for “temporary services” that in actuality strip the sovereignty the government had in managing its own system, even if they did so poorly
* Other countries will use this to justify outsourcing their own sovereign infrastructure to private enterprise
This whole situation sucks ass because nothing good is likely to come of this, other than maybe a handful of smart teams led by equally competent managers using this to get better backup resources for themselves.
> * Everyone involved is unlikely to find work again anytime soon once names are bandied about in investigations
They might (MIGHT) get fired from their government jobs, but I'll bet they land in consulting shops because of their knowledge of how the government's IT teams operate.
I'll also bet the internal audit team slides out of this completely unscathed.
I abhor the general trend of governments outsourcing everything to private companies, but in this case, a technologically advanced country’s central government couldn’t even muster up the most basic of IT practices, and as you said, accountability will likely not rest with the people actually responsible for this debacle. Even a nefarious cloud services CEO couldn’t dream up a better sales case for the wholesale outsourcing of such infrastructure in the future.
There's a pretty big possibility it comes down to acquisition and cost-saving by the politicians in charge of the purse strings. I can all but guarantee that the systems administrators and even technical managers suggested, recommended, and all but begged for the resources for a redundant/backup system in a separate physical location, and were denied because it would double the expense.
This isn't to rule out major ignorance within the technology departments themselves. Having worked in/around govt projects a number of times, you will see some "interesting" opinions and positions, especially around (mis)understanding security.
I mean, it should be part of the due diligence of any competent department trying to use this G-Drive. If it says there are no backups, that means it could only be used as temporary storage, maybe as a backup destination.
It's negligence all the way down, not just with the G-Drive designers, but with the customers as well.
Backups should be far away, too. Apparently some companies lost everything on 9/11 because their backups were in the other tower.
Some foolishly believed that the twin towers were invincible after the 1993 WTC bombing.
Before 9/11, most DR (disaster recovery) sites were in Jersey City, NJ just across the river from their main offices in WFC or WTC, or roughly 3-5 miles away. After 9/11, the financial industry adopted a 50+ miles rule.
Funnily enough, Germany has laws for where you are allowed to store backups, exactly because of these kinds of issues. Fire, flood, earthquake, tornadoes, you name it: backups need to be stored with appropriate security in mind.
They deserved to lose everything... except the human lives, of course.
That's like storing lifeboats in the bilge section of the ship, so they won't get damaged by storms.
Nothing increases the risk of servers catching fire like government investigators showing up to investigate allegations that North Korea hacked the servers.
Or investigations into a major financial scandal in a large French bank!
(While the Credit Lyonnais was investigated in the 90s, both the HQ and the site where they stored their archives were destroyed by fire within a few months)
>This file contains the complete set of papers, except for a number of secret documents, a few others which are part of still active files, some correspondence lost in the floods of 1967...
>Was 1967 a particularly bad winter?
>No, a marvellous winter. We lost no end of embarrassing files.
It _almost_ sounds like you're suggesting the fire was deliberate!
> The issue here is not refusing to use a foreign third party. That makes sense.
For anyone else who's confused, G-Drive means Government Drive, not Google Drive.
> The issue here is not refusing to use a foreign third party. That makes sense.
Encrypt before sending to a third party?
Of course you'd encrypt the data before uploading it to a third party, but there's no reason why that third party should be under the control of a foreign government. South Korea has more than one data center they can store data in; there's no need to trust other governments with every byte of data you've gathered, even if there are no known backdoors or flaws in your encryption mechanism (which I'm sure some governments have been looking into for decades).
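To make the "encrypt before you upload" pattern concrete, here's a toy sketch in stdlib Python. The construction (an HMAC-SHA256 counter-mode keystream with encrypt-then-MAC) and all the names are illustrative only; a real deployment would use a vetted AEAD such as AES-GCM from an audited library, with keys that never leave hardware you control:

```python
import hmac, hashlib, secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # CTR-style keystream: HMAC-SHA256(key, nonce || counter), truncated to length
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    # encrypt-then-MAC: authenticate nonce + ciphertext with a separate key
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("ciphertext failed authentication")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

The storage provider only ever sees nonce, ciphertext, and tag. Note this addresses confidentiality, not availability, which is the other half of the argument in this thread.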
Because even when you encrypt, the foreign third party can still lock you out of your data by simply switching off the servers.
Would you expect the U.S. to encrypt gov data and store it on Alibaba's Cloud? :)
Why make yourself dependent on a foreign country for your own sensitive data?
You have to integrate the special software requirements into any cloud storage anyway, and hosting a large number of files isn't an insurmountable technical problem.
If you can provide the minimal requirements like backups, of course.
> Encrypt before sending to a third party?
That sounds great, as long as nobody makes any mistake. It could be a bug in the RNG which generates the encryption keys. It could be a software or hardware defect which leaks information about the keys (IIRC, some cryptographic systems are really sensitive about this; a single bit flip during encryption can make it possible to recover the private key). It could be someone carelessly leaving the keys in an object storage bucket or source code repository. Or it could be deliberate espionage to obtain the keys.
It only makes sense if you are competent enough to manage data, and I mean any part of it, forever. It's not impossible, of course, but it is really not as trivial as the self-host crowd makes it out to be, if you absolutely need a certain number of 9s of reliability. There is a reason why AWS etc. can exist. I am sure the cloud market is not entirely reasonable, but it is certainly far more reasonable than relying on some mid consultant to do this for you at this scale.
Yeah, the whole supposed benefit of an organization using storage in the cloud is to prevent exactly this kind of thing. Instead, they managed to make the damage far worse by centralizing the data and increasing the amount lost.
The issue is that without a profit incentive, of course it isn't X (backed up, redundant, highly available, whatever other aspect gets optimized away by accountants).
Having worked a great deal inside AWS on these things: AWS provides literally every conceivable level of customer-managed security, down to customer-owned and customer-keyed datacenters operated by AWS, with master-key HSMs owned and purchased by the customer, customer-managed key hierarchies at all levels, and detailed audit logs of everything done by everything, including AWS itself. The security assurance of AWS is far and away beyond what even the most sophisticated state-actor infrastructure does, and is more modern to boot, because its profit incentive drives that.
Most likely this was less about national security than about nationalism. They're easily confused, but that's fallacious. And they earned the dividends of fallacious thinking.
Call me a conspiracy theorist, but this kind of mismanagement is intentional by design so powerful people can hide their dirty laundry.
Never attribute to malice what can be attributed to stupidity.
There was that time when some high profile company's entire Google Cloud account was destroyed. Backups were on Google Cloud too. No off-site backups.
I very seriously doubt that the US cares about South Korea's deepest, darkest secrets that much, if at all.
Not using a cloud provider is asinine. You can use layered encryption so the expected lifetime of the cryptography is beyond the value of the data...and the US government themselves store data on all 3 of them, to my knowledge.
I say US because the only other major cloud providers I know of are in China, and they do have a vested interest in South Korean data, presumably for NK.
It's quite wild to think the US wouldn't want access to their data on a plate, through AWS/GCP/Azure. You must not be aware of the last decade of news when it comes to the US and security.
As a sysadmin at a company that provides fairly sensitive services, I find online cloud backups way too slow for the purpose of protecting against something like the server room being destroyed by a fire. Even something like spinning disks at a remote location feels like a risk, as files would need to be copied onto faster disks before services could be restored, and that copying would take precious time during an emergency. When downtime means massive losses of revenue for customers, being down for hours or even days while waiting for the download to finish is not going to be accepted.
Restoring from cloud backups is one of those war stories I occasionally hear, including the occasional FedEx solution of sending the backup disk by carrier.
That’s why
Microsoft can't guarantee data sovereignty
https://news.ycombinator.com/item?id=45061153
How’s that? Using encryption, which is known to have backdoors and is vulnerable to nation state cracking?
Agree completely that it's absolutely wild to run such a system without backups. But at this point no government should keep critical data on foreign cloud storage.
Good thing Korea has cloud providers, apparently Kakao has even gone...beyond the cloud!
https://kakaocloud.com/ https://www.nhncloud.com/ https://cloud.kt.com/
To name a few.
They are overwhelmingly white-labeled providers. For example, Samsung SDS Cloud (the largest "Korean" cloud) is an AWS white label.
Korea is great at a lot of engineering disciplines. Sadly, software is not one of them, though it's slowly changing. There was a similar issue a couple years ago where the government's internal intranet was down a couple days because someone deployed a switch in front of outbound connections without anyone noticing.
It's not a talent problem but a management problem - similar to Japan's issues, which is unsurprising as Korean institutions and organizations are heavily based on Japanese ones from back in the JETRO era.
Samsung owns Joyent
Encrypted backups would have saved a lot of pain here
Any backup would do at this point. I think the best is: encrypted, off-site & tested monthly.
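"Tested" is the part that usually gets skipped. Here's a minimal restore-drill sketch in stdlib Python; the manifest-and-verify helpers are hypothetical, not any particular backup tool:

```python
import hashlib, os

def make_manifest(root: str) -> dict:
    """Record a SHA-256 digest for every file under `root` at backup time."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                manifest[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return manifest

def verify_restore(restored_root: str, manifest: dict) -> list:
    """Return the relative paths that are missing or corrupted after a restore drill."""
    bad = []
    for relpath, digest in manifest.items():
        path = os.path.join(restored_root, relpath)
        try:
            with open(path, "rb") as f:
                ok = hashlib.sha256(f.read()).hexdigest() == digest
        except FileNotFoundError:
            ok = False
        if not ok:
            bad.append(relpath)
    return sorted(bad)
```

The point is to run the verification against an actual restore from the off-site copy; a manifest checked against the live system proves nothing.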
You don’t need cloud when you have the data centre, just backups in physical locations somewhere else
Others have pointed out: you need uptime too. So a single data center on the same electric grid or geographic fault zone wouldn’t really cut it. This is one of those times where it sucks to be a small country (geographically).
> no government should keep critical data on foreign cloud storage
Primary? No. Back-up?
These guys couldn’t provision a back-up for their on-site data. Why do you think it was competently encrypted?
They fucked up, that much is clear, but they should not have kept that data on foreign cloud storage regardless. It's not like there are only two choices here.
It's 2025. Encryption is a thing now. You can store anything you want on foreign cloud storage. I'd give my backups to the FSB.
> I'd give my backups to the FSB.
Until you need them - like with the article here ;) - then the FSB says "only if you do these specific favours for us first...".
There are certifications too, which you don't get unless you conform to, for example, EU data protection laws. On paper, anyway. But these have opened up Amazon and Azure to e.g. Dutch government agencies; the tax office will be migrating to Office365, for example.
Encryption does not ensure any type of availability.
Why not? If the region is in country, encrypted, and with proven security attestations validated by third parties, a backup to a cloud storage would be incredibly wise. Otherwise we might end up reading an article about a fire burning down a single data center
Microsoft has already testified that the American government maintains access to their data centres, in all regions. It likely applies to all American cloud companies.
America is not a stable ally, and has a history of spying on friends.
So unless the whole of your backup is encrypted offline, and you trust the NSA to never break the encryption you chose, it's a national security risk.
Exactly.
Like, don't store it in the cloud of an enemy country of course.
But if it's encrypted and you're keeping a live backup in a second country with a second company, ideally with a different geopolitical alignment, I don't see the problem.
And which organization has every file, from each of their applications using the cloud, encrypted *before* it is sent to the cloud?
Especially on US cloud storage.
The data is never safe thanks to the US Cloud Act.
If you can't encrypt your backups such that you could store them tattooed on Putin's ass, you need to learn more about backups.
Governments need to worry about:
1. Future cryptography attacks that do not exist today
2. Availability of data
3. The legal environment of the data
Encryption is not a panacea that solves every problem
Why not?
Has there been any interruption in service?
And yet here is an example where keeping critical data off public cloud storage has been significantly worse for them in the short term.
Not that they should just go all in on it, but an encrypted copy on S3 or GCS would seem really useful right about now.
You can do a bad job with public or private cloud. What if they had had the backup but lost the encryption key?
Cost-wise, even a backup in a different Korean data center would probably not have been a huge effort, but not doing it exposed them to a huge risk.
We’ve had Byzantine crypto key solutions since at least 2007 when I was evaluating one for code signing for commercial airplanes. You could put an access key on k:n smart cards, so that you could extract it from one piece of hardware to put on another, or you could put the actual key on the cards so burning down the data center only lost you the key if you locked half the card holders in before setting it on fire.
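For anyone curious, the k:n idea is essentially Shamir's secret sharing: a degree-(k-1) polynomial whose constant term is the secret, with one point handed to each card holder. A stdlib-Python sketch (the prime and parameters are illustrative; a real deployment would use a reviewed implementation on hardware tokens):

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; secret and shares live in GF(P)

def split(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial at x
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def combine(shares):
    """Lagrange interpolation at x=0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Any k shares pin down the polynomial and hence the secret; k-1 shares reveal nothing about it, which is why losing the data center (or a few cards) is survivable.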
> The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information will be keeping their head low for a few days then...
They absolutely cannot be trusted, especially sensitive govt. data. Can you imagine the US state department getting their hands on compromising data on Korean politicians?
It's like handing over the govt. to US interests wholesale.
That they did not choose to keep the backup, and then another, at different physical locations is a valuable lesson, and must lead to even better design the next time.
But the solution is not to keep it in US hands.
Using the cloud would have been the easiest way to achieve the necessary redundancy, but by far not the only one. This is just a flawed concept from the start, with no real redundancy.
But not security. And for governmental data security is a far more important consideration.
Not losing data while keeping untrusted parties out of it is a hard problem, and "cloud", aka "stored somewhere accessible by agents of a foreign nation", does not solve it.
It's the government of South Korea, which has a nearly 2 trillion dollar GDP. Surely they could have built a few more data centers connected with their own fiber if they were that paranoid about it.
As OP says, cloud is not the only solution, just the easiest. They should probably have had a second backup in a different building. It would probably require a bit more involvement, but def doable.
There is some data privacy requirement in SK where application servers and data have to remain in the country. I worked for a big global bank and we had 4 main instances of our application: Americas, EMEA, Asia and South Korea.
When I worked on Apple Maps infra South Korea required all servers be in South Korea.
It was the same at Google. If I'm remembering right, we couldn't export any vector-type data (raster only), and the tiles themselves had to be served out of South Korea.
If only there were a second data center in South Korea where they could backup their data…
I know there is legit hate for VMWare/Broadcom but there is a legit case to be made for VCF with an equivalent DR setup where you have replication enabled by Superna and Dell PowerProtect Data Domain protecting both local and remote with Thales Luna K160 KMIP for the data at rest encryption for the vSAN.
To add, use F710s, H710s and then add ObjectScale storage for your Kubernetes workloads.
This setup repatriates your data and gives you a Cloud like experience. Pair it with like EKS-A and you have a really good on premises Private Cloud that is resilient.
This reads very similar to the Turbo Encabulator video.
> G-Drive’s structure did not allow for external backups
Ha! "Did not allow" my ass. Let me translate:
> We didn't feel like backing anything up or insisting on that functionality.
Pretty sensible to not host it on these commercial services. What is not so sensible is to not make backups.
I was once advised to measure your backup security in zip codes and time zones.
You have a backup copy of your file, in the same folder? That helps for some "oops" moments, but nothing else.
You have a whole backup DRIVE on your desktop? That's better. Physical failure of the primary device is no longer a danger. But your house could burn down.
You have an alternate backup stored at a trusted friend's house across the street? Better! But what if a major natural disaster happens?
True story: 30+ years ago, when I worked for TeleCheck, data was their lifeblood. Every week a systems operator went to Denver, the alternate site, with a briefcase full of backup tapes. TeleCheck was based in Houston, so a major hurricane could've been a major problem.
Not sure “sane backup strategy” and “park your whole government in a private company under American jurisdiction” are mutually exclusive. I feel like I can think of a bunch of things that a nation would be sad to lose, but would be even sadder to have adversaries rifling through at will. Or, for that matter, extort favors under threat of cutting off your access.
At least in this case you can track down said officials in their foxholes and give them a good talking-to. Good luck holding AWS/GCP/Azure accountable…
He may or may not have been right, but it's beside the point.
The 3-2-1 backup rule is basic.
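For readers who haven't met it: 3-2-1 means at least three copies of the data, on at least two different media, with at least one copy off-site. A hypothetical checker, just to make the rule mechanical (names are made up):

```python
def satisfies_3_2_1(copies, primary_site):
    """copies: list of (medium, site) pairs, one per copy of the data.

    3-2-1: at least 3 copies, on at least 2 different media,
    with at least 1 copy somewhere other than the primary site.
    """
    media = {medium for medium, _ in copies}
    offsite = [site for _, site in copies if site != primary_site]
    return len(copies) >= 3 and len(media) >= 2 and len(offsite) >= 1
```

By this test, G-Drive as described (one copy, one site) fails every clause, and even daily backups to "separate equipment within the same center" would still fail the off-site clause.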
Well, it is just malpractice. Even when I was a first-semester art student I knew about the concept of off-site backups.
If you (as the SK government) were going to do a deal with " AWS/GCP/Azure" to run systems for the government, wouldn't you do something like the Jones Act? The datacenters must be within the country and staffed by citizens, etc.
A Microsoft exec testified that the US Govt can get access to the data Azure stores in other countries. I thought this was a wild allegation, but apparently it's true [0].
[0] https://www.theregister.com/2025/07/25/microsoft_admits_it_c...
Because these companies never lose data, like during some lightning strikes, oh wait: https://www.bbc.com/news/technology-33989384
As a government you should not be putting your stuff in an environment under control of some other nation, period. That is a completely different issue and does not really relate to making backups.
“The BBC understands that customers, through various backup technologies, external, were able to recover all lost data.”
You backup stuff. To other regions.
But the Korean government didn't back up; that's the problem in the first place here…
>As a government you should not be putting your stuff in an environment under control of some other nation, period.
Why? If you encrypt it yourself before transfer, the only possible control some_other_nation will have over you or your data is availability.
You're forgetting that you're talking about nation states here. Breaking encryption is in fact the job of the people you'd be giving access to.
Sovereign delivery makes sense for _nations_.
First of all, you cannot do much if you keep all the data encrypted on the cloud (basically just back things up, and hope you don't have to fetch it, given the egress cost). Also, availability is exactly the kind of issue that a fire causes…
For this reason, Microsoft has Azure US Government, Azure China etc
Yeah, I heard that consumer clouds are only locally redundant and there aren't even backups, so big DC damage could result in data loss.
By default, Amazon S3 stores data across at least three separate datacenters in the same region, physically separate from each other:
Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary data storage. S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive redundantly store objects on multiple devices across a minimum of three Availability Zones in an AWS Region. An Availability Zone is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. Availability Zones are physically separated by a meaningful distance, many kilometers, from any other Availability Zone, although all are within 100 km (60 miles) of each other.
You can save a little money by giving up that redundancy and having your data in a single AZ:
The S3 One Zone-IA storage class stores data redundantly across multiple devices within a single Availability Zone
For further redundancy you can set up replication to another region, but if I needed that level of redundancy, I'd probably store another copy of the data with a different cloud provider, so an AWS global failure (or, more likely, a billing issue) doesn't leave my data trapped with one vendor.
I believe Google and Azure offer similar levels of redundancy in their cloud storage.
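A back-of-the-envelope illustration of why the physical separation, not just the copy count, is what buys durability. Toy numbers and an assumed failure model, not AWS's actual durability math:

```python
def p_data_loss(p_site: float, n_copies: int, p_correlated: float = 0.0) -> float:
    """Probability of losing every copy in some window.

    p_site:       chance an individual site is destroyed (fire, flood, ...)
    n_copies:     number of replicas, assumed to fail independently
    p_correlated: chance of a single event that takes out all sites at once
                  (e.g. every "replica" living in the same building)
    """
    independent_loss = p_site ** n_copies
    return p_correlated + (1 - p_correlated) * independent_loss
```

With p_site = 1e-3, three independent sites give roughly 1e-9; but if all copies share a building (p_correlated = 1e-3), the correlated term dominates and extra copies barely help, which is exactly the G-Drive failure mode.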
What do you mean by "consumer clouds"?
I mean… at the risk of misinterpreting sarcasm—
Except for the backup strategy said consumers apply to their data themselves, right?
If I use a service called “it is stored in a datacenter in Virginia” then I will not be surprised when the meteor that hits Virginia destroys my data. For that reason I might also store copies of important things using the “it is stored in a datacenter in Oregon” service or something.
...on a single-zone persistent disk: https://status.cloud.google.com/incident/compute/15056#57195...
> GCE instances and Persistent Disks within a zone exist in a single Google datacenter and are therefore unavoidably vulnerable to datacenter-scale disasters.
Of course, it's perfectly possible to have proper distributed storage without using a cloud provider. It happens to be hard to implement correctly, so apparently, the SK government team in question just decided... not to?
The simple solution here would have been something like a bunch of netapps with snapmirrors to a secondary backup site.
Or ZFS or DRBD or whatever homegrown or equivalent non-proprietary alternative is available these days and you prefer.
Usually these mandates are made by someone who evaluates "risks." Third-party risks are evaluated under the assumption that everything will be done sensibly in the first-party scenario; to boot, the first-party option looks cheaper, as disk drives etc. are only a fraction of total cost.
Reality hits later, when budget cuts and constrained salaries prevent the maintenance of a competent team. Or the proposed backup system is deemed excessively risk-averse and the money can't be spared.
>The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information will be keeping their head low for a few days then...
They can't. The Trump admin sanctioning the International Criminal Court, and Microsoft blocking them from all services as a result, are proof of why.
They put everything in only one datacenter. A datacenter located elsewhere should have been set up to mirror it.
This has nothing to do with commercial clouds. Commercial clouds are just datacenters. They could have picked one commercial cloud datacenter and done nothing more to mirror or back up to different regions. I understand some of the services have inherent backups.
Mirroring is not backup.
What a lame excuse. “The G-Drive’s structure did not allow for backups” is a blatant lie. It’s code for, “I don’t value other employees’ time and efforts enough to figure out a reliable backup system; I have better things to do.”
Whoever made this excuse should be demoted to a journeyman ops engineer. Firing would be too good for them.
It could be accurate. Let’s say, for whatever reason, it is.
Ok.
Then it wasn’t a workable design.
The idea of “backup sites” has existed forever. The fact you use the word “cloud” to describe your personal collection of servers doesn’t suddenly mean you don’t need backups in a separate physical site.
If the government mandates its use, it should have a hot site at a minimum. Even without that a physical backup in a separate physical location in case of fire/attack/tsunami/large band of hungry squirrels is a total must-have.
However it was decided that not having that was OK, that decision was negligence.
Silly to think this is the fault of ops engineers. More likely, the project manager or C-suite didn't have the time or budget to allocate to disaster recovery.
The project shipped, it's done, they've already moved us onto the next task, no one wants to pay for maintenance anyway.
This has been my experience in 99% of the companies I have worked for in my career, while the engineers that built the bloody thing groan and are well-aware of all the failure modes of the system they've built. No one cares, until it breaks, and hopefully they get the chance to say "I **** told you this was inadequate"
You could be right, but it could also be a bad summary or bad translation.
We shouldn't rush to judgement.
your first criticism was they should have handed their data sovereignty over to another country?
there are many failure points here, not paying Amazon/Google/Microsoft is hardly the main point.
Days? That's optimistic. It depends on what the govt cloud contained. For example, imagine all the car registrations, or all the payments for the pension fund.
Dude, the issues go wayyy beyond opting for selfhosting rather than US clouds.
We use selfhosting, but we also test our fire suppression system every year, we have two different DCs, and we use S3 backups out of town.
Whoever runs that IT department needs to be run out of the country.
The cloud will also not back up your stuff if you configure it wrong, so I'm not sure how that's related.
Rightfully did not trust these companies. Sure, what happened is a disaster for them, but you can't simply trust Amazon & Microsoft.
Why not? You can easily encrypt your data before sending it for storage on on S3, for example.
You and I can encrypt our data before saving it into the cloud, because we have nothing of value or interest to someone with the resources of a state.
Sometimes sensitive data at the government level has a pretty long shelf life; you may want it to remain secret in 30, 50, 70 years.
Is encryption, in almost any form, really reliable protection for a country's entire government data? I mean, this is _the_ ultimate playground for "state level actors": if someday there's a hole and it turns out it takes only 20 years to decrypt the data with a country-sized supercomputer, you can bet _this_ is what multiple foreign countries will try to decrypt first.
You can encrypt the data at rest, but data that lies encrypted and is never touched is useless data. You need to decrypt it as well. Also, there are plenty of incompetent devops around, and writing a decryption toolchain can be difficult.
For sure the only error here is zero redundancy.
S3 features have saved our bacon a number of times. Perhaps your experience and usage are different. They are worth trusting with business-critical data as long as you're following their guidance. GCP, though, has not proven it; their data-loss news is still fresh in my mind.
Were you talking about this incidence? https://arstechnica.com/gadgets/2024/05/google-cloud-acciden...
I am currently evaluating between GCP and AWS right now.
On the Microsoft side CVE-2025–55241 is still pretty recent.
https://news.ycombinator.com/item?id=45282497
I understand data sovereignty in the case where a foreign entity might cut off access to your data, but this paranoia that storing info under your bed is the safest bet is straight up false. We have post-quantum encryption widely available already. If your fear is that a foreign entity will access your data, you're technologically illiterate.
Obviously no person in a lawmaking position will ever have the patience or foresight to learn about this, but the fact they won't even try is all the more infuriating.
Encryption only makes sense if "the cloud" is just a data storage bucket to you. If you run applications in the cloud, you can't have all the data encrypted, especially not all the time. There are some technologies that make this possible, but none are mature enough to run even a small business, let alone a country on.
It sounds technologically illiterate to you because when people say "we can't safely use a foreign cloud" you think they're saying "to store data" and everyone else is thinking at the very least "to store and process data".
Sure, they could have used a cloud provider for encrypted backups, but if they knew how to do proper backups, they wouldn't be in this mess to begin with.
> The government official who insisted that commercial AWS/GCP/Azure couldn't possibly be trusted with keeping the information
They were still right, though: it's absolutely clear, without an ounce of doubt, that whatever you put on a US cloud is accessible to the US government, which can also decide to sanction you and deprive you of your ability to access the data yourself.
Not having backups is entirely retarded, but also completely orthogonal.
The U.S. Government can’t decrypt data for which it does not possess the key (assuming the encryption used is good).
Well, first of all, neither you nor I know the decryption capabilities of the NSA; all we know is that they have hired more cryptologists than the rest of the world combined.
Also, it's much easier for an intelligence service to get its hands on a 1 kB encryption key than on a PB of data: the former is much easier to exfiltrate without being noticed.
And then, I don't know why you bring up encryption here: pretty much none of the use-cases for a cloud allow for fully encrypted data. (The only one that does is storing encrypted backups on the cloud, but the issue here is that the operator didn't do backups in the first place…)
In theory. I'm very much happier to have my encrypted data also not be available to adversaries.
"Not my fault.. I asked them to save everything in G-Drive (Google Drive)"
I mean he's still right about AWS etc. with the current US Administration and probably all that will follow - but that doesn't excuse not keeping backups.
Yeah let’s fax all government data to the Trump administration.