Comment by o8r3oFTZPE
4 years ago
From the Ars reference: "There are some steep hurdles to clear for an attack to be successful. A hacker would first have to steal a target's account password and also gain covert possession of the physical key for as many as 10 hours. The cloning also requires up to $12,000 worth of equipment and custom software, plus an advanced background in electrical engineering and cryptography. That means the key cloning - were it ever to happen in the wild - would likely be done only by a nation-state pursuing its highest-value targets."
"only by a nation-state"
This ignores the possibility that the company selling the solution could itself easily defeat the solution.
Google, or another similarly-capitalised company that focuses on computers, could easily succeed in attacking these "user protections".
Further, anyone could potentially hire them to assist. What is to stop this, if secrecy is preserved?
We know, for example, that Big Tech companies are motivated by money above all else, and, by and large, their revenue does not come from users. It comes from the ability to see into users' lives. Payments made by users for security keys are all but irrelevant when juxtaposed against advertising revenue derived from mining personal data.
Google has an interest in putting users' minds at ease about the incredible security issues with computers connected to the internet 24/7. The last thing Google wants is for users to be more skeptical of using computers for personal matters that give insight to advertisers.
The comment on that Ars page is more realistic than the article.
Few people have a "nation-state" threat model, but many, many people have the "paying client of Big Tech" threat model.
Yes, if you don't trust Google, don't use a key from Google. Is that what you're trying to say? If your threat model is Google, don't buy your key from Google. Do I think that's probably a stupid waste of thought? Yes, I do. But it's totally legitimate if that's your threat model.
"But it's totally legitimate if that's your threat model."
Not mine. I have no plans to purchase a security key from Google. I have no threat model.
Nothing in the comment you replied to mentioned "trust", but since you raised the issue I did a search. It seems there are actually people commenting online who claim they do not trust Google; this has been going on for years. Can you believe it? Their CEO has called it out multiple times.^1 "[S]tupid waste of thought", as you call it. (That's not what I would call it.) It's everywhere.^2 The message to support.google and the response are quite entertaining.
1. For example, https://web.archive.org/web/20160601234401/http://allthingsd...
2.
https://support.google.com/googlenest/thread/14123369/what-i...
https://www.inc.com/jason-aten/google-is-absolutely-listenin...
https://www.consumerwatchdog.org/blog/people-dont-trust-goog...
https://www.wnd.com/2015/03/i-dont-trust-google-nor-should-y...
https://www.theguardian.com/technology/2020/jan/03/google-ex...
https://www.forbes.com/sites/kateoflahertyuk/2018/10/10/this...
> This ignores the possibility that the company selling the solution could itself easily defeat the solution.
How do you imagine this would work?
The "solution" here is just a cheap device that does mathematics. It's very clever mathematics but it's just mathematics.
I think you're imagining a lot of moving parts to the "solution" that don't exist.
All I am suggesting is that "hacker" as used by the Ars author could be a company, or backed by a company, and not necessarily a "nation-state". That is not far-fetched at all, IMO. The article makes it sound like "nation-states" are the only folks who could defeat the protection or would even have an interest in doing so. As the comment on the Ars page points out, that is ridiculous.
Assuming "hacker" could be a company what company would have such a motivation and resources to spy on people. The NSO's of the world, sure. Anyone else. Companies have better things to do than spy on people, right. Not anymore.
What about a company whose business is personal data mining, who goes so far as to sniff people's residential wifi (they lied about it at first when they got caught), collect audio via a "smart" thermostat (Nest), collect data from an "activity tracker" (FitBit), a "smartphone OS", a search engine, e-mail service, web analytics, etc., etc. Need I go on? I could fill up an entire page with all the different Google acquisitions and ways they are mining people's data.
Why are security keys any different? 9 out of 10 things Google sells or gives away are designed to facilitate data collection, but I guess this is the 1 in 10. "Two-factor authentication" has already been abused by Facebook and Twitter, where they were caught using the data for advertising, but I suppose Google is different.
These companies want personal data. With the exception of Apple, they do not stay in business by selling physical products. Collecting data is what they do and they spend enormous amounts of time and effort doing it.
"That's all I know."
> That is not far-fetched at all, IMO.
The problem with your neat little model of the world is that it doesn't provide you with actionable predictions. Everything is a massive global conspiracy against you, nothing can be trusted, everybody is in on it, and so you can dismiss everything as just part of the charade, which feels good for a few moments, but still doesn't actually help you make any decisions at all.
> "Two-factor authentication" has already been abused by Facebook and Twitter where they were caught using the data for advertising
Right, I mean, if somebody really wanted to help provide working two factor authentication, they'd have to invent a device that offered phishing-proof authentication, didn't rely on sharing "secrets" that might be stolen by hackers, and all while not giving up any personal information and ensuring the user's identity can't be linked from one site to another. That device would look exactly like the FIDO Security Keys we're talking about... huh.
Actually no, if they weren't really part of a massive conspiracy against o8r3oFTZPE there would be one further thing, instead of only being from Google you could just buy these Security Keys from anybody and they'd work. Oh right.
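To make the three properties above concrete, here is a minimal sketch, not the actual FIDO/CTAP protocol; the names register, sign_challenge, and device_secret are hypothetical, and it assumes Python with the pyca/cryptography package. The idea: per-site keys are derived from one on-device secret, only public keys ever leave the device, and origin binding means a phishing site only gets signatures scoped to its own domain.

```python
import hashlib
import hmac
import os

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_secret = os.urandom(32)  # never leaves the token

def register(rp_id: str):
    """Derive this site's keypair; hand back only the public half."""
    seed = hmac.new(device_secret, rp_id.encode(), hashlib.sha256).digest()
    private_key = Ed25519PrivateKey.from_private_bytes(seed)
    # No shared secret: the site stores a public key it cannot use to
    # impersonate you, and keys for different sites are unlinkable
    # without device_secret.
    return private_key.public_key()

def sign_challenge(rp_id: str, challenge: bytes) -> bytes:
    # The browser supplies rp_id from the real origin, so a phishing
    # page only ever obtains signatures for its own domain.
    seed = hmac.new(device_secret, rp_id.encode(), hashlib.sha256).digest()
    return Ed25519PrivateKey.from_private_bytes(seed).sign(challenge)

# Usage: the site keeps the public key and later verifies a challenge.
public_key = register("example.com")
signature = sign_challenge("example.com", b"server nonce")
public_key.verify(signature, b"server nonce")  # raises if forged
```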
A key part of various such tamper-resistant devices is an embedded secret that's very difficult/expensive to extract. However, the manufacturer (i.e., "the company selling the solution") may know the embedded secret without extracting it. Because of that, trust in the solution provider is essential even if it's just simple math.
For a practical illustration, see the 2011 attack on RSA (the company) that allowed attackers access to secret values used in generating RSA's SecurID tokens (essentially, cheap devices that do mathematics) allowing them to potentially clone previously issued tokens. Here's one article about the case - https://www.wired.com/story/the-full-story-of-the-stunning-r...
That's true. Yubico provide a way to just pick a new random number. Because these are typically just AES keys, just "picking a random number" is good enough; it's not going to "pick wrong".
If you worry about this attack you definitely should perform a reset after purchasing the device. This is labelled "reset" because it invalidates all your credentials, the credentials you enrolled depend on that secret, and so if you pick a random new secret obviously those credentials stop working. So, it won't make sense to do this randomly while owning it, but doing it once when you buy the device can't hurt anything.
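A toy illustration of why the reset has that effect, under the simplifying assumption that every enrolled credential is derived deterministically from the one on-device secret (the names here are hypothetical, not Yubico's implementation):

```python
import hashlib
import hmac
import os

device_secret = os.urandom(32)            # factory-provisioned value

def credential_key(rp_id: str) -> bytes:
    # Every enrolled credential depends deterministically on device_secret.
    return hmac.new(device_secret, rp_id.encode(), hashlib.sha256).digest()

enrolled = credential_key("example.com")  # what the site registered

device_secret = os.urandom(32)            # "reset": pick a fresh random secret

assert credential_key("example.com") != enrolled
# Old registrations no longer verify, which is exactly the point:
# whatever the factory (or a snooping manufacturer) knew is now worthless.
```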
However, although I agree it would be possible for an adversary who makes keys to just remember all the factory set secrets inside them, I will note an important practical difference from RSA SecurID:
For SecurID those are actually shared secrets. It's morally equivalent to TOTP. To authenticate you, the other party needs to know the secret which is baked inside your SecurID. So RSA's rationale was that if they remember the secret they can help their customers (the corporation that ordered 5000 SecurID dongles, I still have some laying around) when they invariably manage to lose their copy of that secret.
Whereas for a FIDO token, that secret is not shared. Each key needs a secret, but nobody else has a legitimate purpose for knowing it. So whereas RSA were arguably just foolish for keeping these keys, they had a reason. If you found out that, say, Yubico kept the secrets, that's a red flag; they have no reason to do that except malevolence.
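To spell out the contrast: a SecurID/TOTP-style verifier must hold the very same secret as the token, which is why a vendor keeping copies is at least explicable. The sketch below is RFC 6238-style TOTP in Python with a made-up shared_secret; a FIDO verifier, by contrast, only ever holds a public key, so there is nothing comparable for the vendor to "helpfully" retain.

```python
import hashlib
import hmac
import struct
import time

# Hypothetical value: in real SecurID/TOTP deployments this exact secret
# sits on the token AND on the authentication server.
shared_secret = b"twelve secret bytes!"

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    counter = int(time.time() // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # RFC 4226 dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Anyone holding `secret` -- the user, the server, or a manufacturer that
# quietly kept a copy (the 2011 RSA breach scenario) -- computes the same
# code, so the secret's confidentiality is the entire security story.
print(totp(shared_secret))
```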