Comment by staticassertion
4 years ago
Just the other day I suggested using a yubikey, and someone linked me to the Titan side channel where researchers demonstrated that, with persistent access and a dozen hours of work, they could break the guarantees of a Titan chip[0]. They said "an attacker will just steal it". The researchers, on the other hand, stressed how very fundamentally difficult this was to pull off due to the very limited attack surface.
This is the sort of absolutism that is so pointless.
At the same time, what's equally frustrating to me is defense without a threat model. "We'll randomize this value so it's harder to guess" without asking who's guessing, how often they can guess, how you'll randomize it, how you'll keep it a secret, etc. "Defense in depth" has become a nonsense term.
The use of memory unsafe languages for parsing untrusted input is just wild. I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.
I'll also link this talk[1], for the millionth time. It's Rob Joyce, chief of the NSA's TAO, talking about how to make NSA's TAO's job harder.
[0] https://arstechnica.com/information-technology/2021/01/hacke...
I'll conclude with a philosophical note about software design: Assessing the security of software via the question "can we find any security flaws in it?" is like assessing the structure of a bridge by asking the question "has it collapsed yet?" -- it is the most important question, to be certain, but it also profoundly misses the point. Engineers design bridges with built-in safety margins in order to guard against unforeseen circumstances (unexpectedly high winds, corrosion causing joints to weaken, a traffic accident severing support cables, et cetera); secure software should likewise be designed to tolerate failures within individual components. Using a MAC to make sure that an attacker cannot exploit a bug (or a side channel) in encryption code is an example of this approach: If everything works as designed, this adds nothing to the security of the system; but in the real world where components fail, it can mean the difference between being compromised or not. The concept of "security in depth" is not new to network administrators; but it's time for software engineers to start applying the same engineering principles within individual applications as well.
-cperciva, http://www.daemonology.net/blog/2009-06-24-encrypt-then-mac....
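To make the encrypt-then-MAC layering concrete, here is a minimal Rust sketch of the idea, assuming the RustCrypto hmac and sha2 crates; the cipher itself is replaced by hypothetical identity stubs (encrypt_stub/decrypt_stub are placeholders, not a real cipher), since the point is only that verification happens before any ciphertext ever reaches the decryption code:

    use hmac::{Hmac, Mac};
    use sha2::Sha256;

    type HmacSha256 = Hmac<Sha256>;
    const TAG_LEN: usize = 32; // HMAC-SHA-256 output size

    // Hypothetical stand-ins for a real cipher (e.g. AES-CTR); identity here so
    // the sketch compiles. The interesting part is the MAC layered around them.
    fn encrypt_stub(_key: &[u8], plaintext: &[u8]) -> Vec<u8> { plaintext.to_vec() }
    fn decrypt_stub(_key: &[u8], ciphertext: &[u8]) -> Vec<u8> { ciphertext.to_vec() }

    fn seal(enc_key: &[u8], mac_key: &[u8], plaintext: &[u8]) -> Vec<u8> {
        let mut out = encrypt_stub(enc_key, plaintext);
        // MAC the ciphertext and append the tag: output is ciphertext || tag.
        let mut mac = HmacSha256::new_from_slice(mac_key).expect("HMAC accepts any key length");
        mac.update(&out);
        let tag = mac.finalize().into_bytes();
        out.extend_from_slice(tag.as_slice());
        out
    }

    fn open(enc_key: &[u8], mac_key: &[u8], msg: &[u8]) -> Option<Vec<u8>> {
        if msg.len() < TAG_LEN {
            return None;
        }
        let (ciphertext, tag) = msg.split_at(msg.len() - TAG_LEN);
        // Verify the tag before the bytes ever reach the decryption code, so a bug
        // (or side channel) in the cipher layer is unreachable for forged messages.
        let mut mac = HmacSha256::new_from_slice(mac_key).expect("HMAC accepts any key length");
        mac.update(ciphertext);
        if mac.verify_slice(tag).is_err() {
            return None;
        }
        Some(decrypt_stub(enc_key, ciphertext))
    }

    fn main() {
        let (ek, mk) = (b"enc-key".as_slice(), b"mac-key".as_slice());
        let sealed = seal(ek, mk, b"hello");
        assert_eq!(open(ek, mk, &sealed).as_deref(), Some(b"hello".as_slice()));

        // Flipping one ciphertext bit makes the MAC check fail before decryption runs.
        let mut tampered = sealed.clone();
        tampered[0] ^= 1;
        assert!(open(ek, mk, &tampered).is_none());
    }

If everything works as designed the tag adds nothing; when the cipher layer has a bug, forged input never reaches it.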
I've worked as a structural engineer (EIT) on bridges and buildings in Canada before getting bored and moving back into software.
There are major differences between designing bridges and crafting code. So many, in fact, that it is difficult to even know where to start. But with that proviso, I think the concept of safety versus the concept of security is one that so many people conflate. We design bridges to be safe against the elements. Sure, there are 1000-year storms, but we know what we're designing for, and it is fundamentally an economic activity. We design these things to fail at some regularity because to do otherwise would require an over-investment of resources.
Security isn't like safety because the attack scales up with the value of compromising the target. For example, when someone starts a new social network and hashes passwords, the strength of their algorithm may be just fine, but once they have millions of users it may become worthwhile for attackers to invest in large-scale cracking or other means to defeat their salted hashes.
Security is an arms race. That's why we're having so much trouble securing these systems. A flood doesn't care how strong your bridge is, or where it is most vulnerable.
Aside: The distinction between safety and security I know:
- safety is "the system cannot harm the environment"
- security is the inverse: "the environment cannot harm the system"
To me, your distinction has to do with the particular attacker model - both sides are security (under these definitions).
4 replies →
So it's like building a bridge... that needs to constantly withstand thousands of anonymous, usually untraceable, and always evolving terrorist attacks.
7 replies →
I agree that safety & security are frequently conflated, but I don't think the takeaway is that there's no analogy between IT & construction.
IT safety = construction safety. What kind of cracks/bumps does your bridge/building have, can it handle increase in car volume over time, lots of new appliances put extra load on the foundation etc. IT safety is very similar in that way.
IT security = physical infrastructure security. Is your construction safe from active malicious attacks/vandalism? Generally we give up on vandalism from a physical security perspective in cities - spray paint tagging is pretty much everywhere. Similarly, crime is generally a problem that's not solvable & we try to manage. There's also large scale terrorist attacks that can & do happen from time to time.
There are of course many nuanced differences because no analogy is perfect, but I think the main tangible difference is that one is in the physical space while the other is in the virtual space. Virtual space doesn't operate the same way because the limits are different. Attackers can easily maintain anonymity, attackers can replicate an attack easily without additional effort/cost on their part, attackers can purchase "blueprints" for an attack that are basically the same thing as the attack itself, attacks can be carried out at a distance, & there are many strong financial motives for carrying out the attack. The financial motive is particularly important because it funds the ever-growing arms race between offense & defense. In the physical space this kind of race is only visible in nation states whereas in the virtual space both nation states & private actors participate in this race.
Similarly, that's why IT development is a bit different from construction. Changing a blueprint in virtual space is nearly identical to changing the actual "building" itself & the cost is several orders of magnitude lower than it would be in physical space. Larger software projects are cheaper because we can build reusable components that have tests that ensure certain behaviors of the code & then we rerun them in various environments to make sure our assumptions still hold. We can also more easily simulate behavior in the real world before we actually ship to production. In the physical space you have to do that testing upfront to qualify a part. Then if you need a new part, you're sharing less of the design whereas in virtual space you can share largely the same design (or even the exact same design) across very different environments. & there's no simulation - you build & patch, but you generally don't change your foundation once you've built half the building.
Also, pointing your (or anyone's) finger at the already overworked and exploited engineers in many countries is just abysmal in my opinion. It's not an engineer's decision to make what the deadline for finishing a piece of software is. Countless companies are controlled by business people. So point your finger at them, because they are the ones who don't give a flying f*&% whether the software is secure or not. We engineers are very well aware of both the need for and the implications of security. So this kind of name-shaming must be stopped by the security community, now and forever, in my opinion.
I’ve found that software, among other engineering disciplines, is uniquely managed as a manufacturing line rather than a creative art. In the other disciplines, the difference between these phases of the project is much more explicit.
I think this quote is fundamentally wrong and intentionally misleading. The equivalent question would be "can we find any cracks on it?" Which makes complete sense. And in fact it is frequently asked during inspections. Just like the security flaw question should be asked in the same vein.
This is one of the best examples I’ve ever seen supporting the claim that analogies aren’t reasoning.
Edit: apparently elaboration is in order. In mechanical engineering one deals with smooth functions. A small error results in a small propensity for failure. Software meanwhile is discrete, so a small error can result in a disproportionately large failure. Indeed getting a thousandth of a percent of a program wrong could cause total failure. No bridge ever collapsed because the engineer got a thousandth of a percent of the building material’s properties wrong. In software the margin of error is literally undefined behavior.
>No bridge ever collapsed because the engineer got a thousandth of a percent of the building material’s properties wrong.
Perhaps not with building properties, but very small errors can cause catastrophic failure.
One of the most famous ones would be the Hyatt Regency collapse, where a contractor accidentally doubled the load on a walkway because he used two shorter beams attached to the top and bottom of a slab, rather than a longer beam that passed through it.
https://en.m.wikipedia.org/wiki/Hyatt_Regency_walkway_collap...
In electrical engineering, it's very common to have ICs that function as a microcontroller at 5.5V, and an egg cooker at 5.6V.
Microsoft lost hundreds of millions of dollars repairing the original Xbox 360 because the solder on the GPU cracked under thermal stress.
It's definitely not to the same extreme as software, but tiny errors do have catastrophic consequences in physical systems too.
From the GP comment:
> Engineers design bridges with built-in safety margins in order to guard against unforeseen circumstances (unexpectedly high winds, corrosion causing joints to weaken, a traffic accident severing support cables, et cetera)
I am not a mechanical engineer, but none of these examples look like smooth functions to me. I would expect that an unexpectedly high wind can cause your structure to move in way that is not covered by your model at all, at which point it could just show a sudden non-linear response to the event.
1 reply →
> I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.
I'm beginning to worry that every time Rust is mentioned as a solution for every memory-unsafe operation we're moving towards an irrational exuberance about how much value that safety really has over time. Maybe let's not jump too enthusiastically onto that bandwagon.
Not just memory safety. Rust also prevents data races in concurrent programs. And there are a few more things too.
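As a small illustration of the data-race point, here is a sketch of the usual pattern: the shared counter is only reachable through Arc<Mutex<_>>, and handing a plain mutable reference to several threads simply would not compile.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        let counter = Arc::new(Mutex::new(0u64));
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    for _ in 0..1_000 {
                        // The lock is the only way to reach the shared value here;
                        // sharing a plain `&mut u64` across these threads would be
                        // rejected at compile time.
                        *counter.lock().unwrap() += 1;
                    }
                })
            })
            .collect();
        for handle in handles {
            handle.join().unwrap();
        }
        assert_eq!(*counter.lock().unwrap(), 4_000);
    }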
But these tricks have the same root: What if we used all this research academics have been writing about for decades, improvements to the State of the Art, ideas which exist in toy languages nobody uses -- but we actually industrialise them so we can use the resulting language for Firefox and Linux not just get a paper into a prestigious journal or conference?
If ten years from now everybody is writing their low-level code in a memory safe new C++ epoch, or in Zig, that wouldn't astonish me at all. Rust is nice, I like Rust, lots of people like Rust, but there are other people who noticed this was a good idea and are doing it. The idea is much better than Rust is. If you can't do Rust but you can do this idea, you should.
If ten years from now people are writing unsafe C and C++ like it's still somehow OK, that would be crazy.
Imagine it's 1995, you have just seen an Internet streaming radio station demonstrated, using RealAudio.
Is RealAudio the future? In 25 years will everybody be using RealAudio? No, it turns out they will not. But, is this all just stupid hype for nothing? Er no. In 25 years everybody will understand what an "Internet streaming radio station" would be, they just aren't using RealAudio, the actual technology they use might be MPEG audio layer III aka MP3 (which exists in 1995 but is little known) or it might be something else, they do not care.
> If ten years from now people are writing unsafe C and C++ like it's still somehow OK, that would be crazy.
It's 26 years after Java was released. Java has largely been the main competitor to C++. I don't see C++ going away nor do I see C going away. And it's almost always a mistake to lump C and C++ developers together. There is rarely an intersection between the two.
I think you do not understand how short 10 years is. There are tons of people still running computers on Sandy Bridge.
1 reply →
Well, Zig isn't memory safe (as implemented today; they could add a GC), so it's not a good example of a Rust alternative in this domain. But I agree with your overall point, and would add that you could replace Zig with any one of the dozens of popular memory safe language, even old standbys like Java. The point is not to migrate to one language in particular, but rather to migrate to languages in which memory errors are compiler bugs instead of application bugs.
9 replies →
I remain unconvinced that race-proof programs are nearly as big a deal as memory safety. Many classes of applications can tolerate panics, and it's not a safety or security issue. I don't worry about a parser or server in Go like I would in C.
(I realize that racing threads can cause logic based security issues. I've never seen a traditional memory exploit from on racing goroutines though.)
3 replies →
I take that bet about C being pretty prominent in 10 years from now.
A language and a memory access model are no panacea. 10 years is like the day after tomorrow in many industries.
> also prevents data races in concurrent programs.
I have another neat trick to avoid races. Just write single threaded programs. Whenever you think you need another thread, you either don't need it, or you need another program.
5 replies →
> If ten years from now people are writing unsafe C and C++ like it's still somehow OK, that would be crazy.
I mean to be clear, modern C++ can be effectively as safe as rust is. It requires some discipline and code review, but I can construct a tool-chain and libraries that will tell me about memory violations just as well as rust will. Better even in some ways.
I think people don't realize just how much modern C++ has changed.
4 replies →
What’s with the backlash against Rust? It literally is “just another language”. It’s not the best tool for every job, but it happens to be exceptionally good at this kind of problem. Don’t you think it’s a good thing to use the right tool for the job?
It is good to keep in mind that the Rust language still has lots of trade-offs. Security is only one aspect addressed by Rust (another is speed), and hence it is not the most "secure" language.
For example, in garbage collected languages the programmer does not need to think about memory management all the time, and therefore they can think more about security issues. Rust's typesystem, on the other hand, can really get in the way and make code more opaque and more difficult to understand. And this can be problematic even if Rust solves every security bug in the class of (for instance) buffer overflows.
If you want secure, better use a suitable GC'ed language. If you want fast and reasonably secure, then you could use Rust.
11 replies →
It's unusually or suspiciously "hyped". Not to the extent the other sibling exaggerated it to, but enough for it to be noticeable and to rub people the wrong way, myself included. It rubs me the wrong way because something feels off about the way it's hyped/pushed/promoted. It's like the new JavaScript of the programming world. And if we allow it (like we did with JS), it'll overtake way too much mindshare to the unfortunate detriment and neglect of all others.
> What’s with the backlash against Rust?
What's with the hyping of Rust as the Holy Grail as the solution to everything not including P=NP and The Halting Problem?
16 replies →
I'm a security professional so it's based on being an experienced expert, not some sort of hype or misplaced enthusiasm.
The article we are commenting on is about targeted no-interaction exploitation of tens of thousands of high profile devices. I think this is one of the areas where there is a very clear safety value (not just theoretical).
Whole classes of bugs -- the most common class of security-related bugs in C-family languages -- just go away in safe Rust with few to no drawbacks. What's irrational about the exuberance here? Rust is a massive improvement over the status quo we can't afford not to take advantage of.
> how much value that safety really has over time
Billions and billions of dollars. Large organizations like Microsoft and Google have published numbers on the proportion of vulns in their software that are caused by memory errors. As you can imagine, a lot of effort is spent within these institutions to try to mitigate this risk (world class fuzzing, static analysis, and pentesting) yet vulns continue to persist.
Rust is not the solution. Memory-safe languages are. It is just that there aren't many such languages that can compete with C++ when it comes to speed (Rust and Swift are the big ones) so Rust gets mentioned a lot to preempt the "but I gotta go fast" concerns.
… it is a solution for every memory-unsafe operation, though?
No. Rust cannot magically avoid memory-unsafe operations when you have to deal with, well, memory. If I throw a byte stream at you and tell you it is formatted like so and so, you have to work with memory and you will create memory bugs.
It can however make it extremely difficult to exploit and it can make such use cases very esoteric (and easier to implement correctly).
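For what it's worth, the failure mode in safe Rust is different in kind. A sketch of a hypothetical length-prefixed record parser (illustrative names, not any particular format): a lying length field becomes an error value rather than an out-of-bounds read.

    // Minimal sketch: a malformed length becomes an Err, not memory corruption.
    fn parse_record(input: &[u8]) -> Result<&[u8], &'static str> {
        let header: [u8; 4] = input
            .get(..4)
            .ok_or("truncated header")?
            .try_into()
            .map_err(|_| "truncated header")?;
        let len = u32::from_be_bytes(header) as usize;
        let end = 4usize.checked_add(len).ok_or("length overflow")?;
        // `get` returns None instead of reading past the end of the buffer.
        input.get(4..end).ok_or("length exceeds input")
    }

    fn main() {
        assert_eq!(parse_record(&[0, 0, 0, 2, 0xaa, 0xbb]), Ok(&[0xaa, 0xbb][..]));
        assert!(parse_record(&[0, 0, 0, 99, 0xaa]).is_err()); // lies about its length
    }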
9 replies →
Language absolutism.
There's literally zero evidence that a program written in Rust is actually practically safer than one written in C at the same scale. And there won't be any evidence of this for some time because no Rust program is as widely deployed as an equivalent highly used C program.
That's not true, actually. There is more than "literally zero" evidence. I don't feel like finding it for you, but at minimum Mozilla has published a case study showing that moving to Rust considerably reduced the memory safety issues they discovered. That's just one example, I believe there are others.
There are likely many other examples of, say, Java not having memory safety issues. Java makes very similar guarantees to Rust, so we can extrapolate, using common sense, that the findings roughly translate.
Common sense is a really powerful tool for these sorts of conversations. "Proof" and "evidence" are complex things, and yet the world goes on with assumptions that turn out to hold quite well.
2 replies →
I’d wager Dropbox’s Magic Pocket is up there with equivalent C/C++ based I/O / SAN stacks:
https://dropbox.tech/infrastructure/extending-magic-pocket-i...
There's still a lot of macho resistance to using safe languages, because "I can write secure code in C!"
"You" probably can. I can too. That's not the point.
What happens when the code has been worked on by other people? What happens after a few dozen pull requests are merged? What happens when it's ported to other platforms with different endian-ness or pointer sizes, or hacked on in a late-night death-march session to fix some bug or add some feature that has to ship tomorrow? What happens when someone accidentally deletes some braces with an editor's refactor feature, turning a "for (...) { foo(); bar(); baz(); }" into a "for (...) foo(); bar(); baz();", so only foo() is still in the loop?
That's how bugs creep in, and the nice thing about safe languages is that the bugs that creep in are either caught by the compiler or result in a clean failure at runtime instead of exploitable undefined behavior.
Speed is no longer a good argument. Rust is within a few percentage points of C performance if you code with an eye to efficiency, and if you really need something to be as high-performance as possible, code just that one thing in C (or ASM) and code the rest in Rust. You can also use unsafe to squeeze out performance if you must, sparingly.
Oh and "but it has unsafe!" is also a non-argument. The point of unsafe is that you can trivially search a code base and audit every use of it. Of course it's easy to search for unsafe code in C and C++ too... because all of it is!
If we wrote most things and especially things like parsers and network protocols in Rust, Go, Swift, or some other safe language we'd get rid of a ton of low-hanging fruit in the form of memory and logic error attack vectors.
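On the auditability point, a sketch of what that looks like in practice: a crate can refuse unsafe outright, and wherever it is allowed it has to be written out explicitly, which is what makes searching a codebase for it a meaningful audit step.

    // At the top of lib.rs or main.rs: the compiler now rejects any `unsafe` block
    // or `unsafe fn` in this crate, including ones introduced by a late-night refactor.
    #![forbid(unsafe_code)]

    fn main() {
        // Where unsafe is genuinely needed it lives in some other, smaller module or
        // crate, spelled out explicitly -- which is what makes searching a codebase
        // for "unsafe" a meaningful audit step.
        println!("no unsafe code can sneak into this crate");
    }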
> "You" probably can. I can too. That's not the point.
I'm not even sure that's true. I do agree with you that the argument that you need to hire other people is more convincing, but I'd wager that no single human on the planet can actually write a vuln-free parser of any complexity in C on their first attempt - even if handed the best tools that the model checking community has to offer.
Macho is the best word to describe it. It is sheer ego that would cause anybody to say that they can feasibly write a safe program in C or C++.
>Macho is the best word to describe it. It is sheer ego that would cause anybody to say that they can feasibly write a safe program in C or C++.
It reminds me a little of some of the free-wheeling nuclear physicists in the Manhattan Project - probably some of the smartest people on the planet - being hubristically lax with safety: https://en.wikipedia.org/wiki/Demon_core#Second_incident
>[...] The experimenter needed to maintain a slight separation between the reflector halves in order to stay below criticality. The standard protocol was to use shims between the halves, as allowing them to close completely could result in the instantaneous formation of a critical mass and a lethal power excursion.
>Under Slotin's own unapproved protocol, the shims were not used and the only thing preventing the closure was the blade of a standard flat-tipped screwdriver manipulated in Slotin's other hand. Slotin, who was given to bravado, became the local expert, performing the test on almost a dozen occasions, often in his trademark blue jeans and cowboy boots, in front of a roomful of observers. Enrico Fermi reportedly told Slotin and others they would be "dead within a year" if they continued performing the test in that manner. Scientists referred to this flirting with the possibility of a nuclear chain reaction as "tickling the dragon's tail", based on a remark by physicist Richard Feynman, who compared the experiments to "tickling the tail of a sleeping dragon".
>On the day of the accident, Slotin's screwdriver slipped outward a fraction of an inch while he was lowering the top reflector, allowing the reflector to fall into place around the core. Instantly, there was a flash of blue light and a wave of heat across Slotin's skin; the core had become supercritical, releasing an intense burst of neutron radiation estimated to have lasted about a half second. Slotin quickly twisted his wrist, flipping the top shell to the floor. The heating of the core and shells stopped the criticality within seconds of its initiation, while Slotin's reaction prevented a recurrence and ended the accident. The position of Slotin's body over the apparatus also shielded the others from much of the neutron radiation, but he received a lethal dose of 1,000 rad (10 Gy) neutron and 114 rad (1.14 Gy) gamma radiation in under a second and died nine days later from acute radiation poisoning.
Beat me to it. The macho effect is there for sure, but on what grounds do you claim you can write secure C? As far as I know, you can't really prove anything about C unless you severely restrict the language, and those restrictions include pointer usage. So at best, you can do a hand-wavy read through code and have some vague notion of its behaviour.
It depends on the size of the parser. As they get big and complex I would start to agree with you.
what about https://github.com/seL4/seL4?
5 replies →
I have used a Yubikey for years. Nothing is perfect, but as you mentioned, the only hacks of them have been with persistent physical access, or somehow getting the end user to hit the activate button tens of thousands of times.
On any system, if you give an attacker physical access to the device, you are done. Just assume that. If your Yubikey lives in your wallet, or on your key chain, and you only activate it when you need it, it is highly unlikely that anyone is going to crack it.
As far as physical device access, my last employer maintained a 'garage' of laptops and phones for employees traveling to about a half dozen countries. If you were going there, you left your corporate laptop and phone in the US and took one of these 'travel' devices with you for your trip. Back home, those devices were never allowed to connect to the corporate network. When you handed them in, they were wiped and inspected, but IT assumed that they were still compromised.
Lastly, a Yubikey, as a second factor, is supposed to be part of a layered defense, basically forcing the attacker to hack both your password and your Yubikey.
It bugs me that people don't understand how important two factor auth is, and also how crazy weak SMS access codes are.
This depends...
I've had an argument here about SMS for 2FA... Someone said that SMS for 2FA is broken because some companies misuse it for 1FA (e.g. for password reset)... but in essence, simple SMS verification solves 99.9% of the issues with e.g. password leaks and password reuse.
No security solution is perfect, but using a solution that works 99% of the time is still better than no security at all (or just one factor).
I'm pretty sure I've written on HN before that SMS 2FA doesn't do much against phishing, which we know is a big problem, but worse it creates a false reassurance.
The user doesn't reason correctly that the bank would send them this legitimate SMS 2FA message because a scammer is now logging into their account, they assume it's because this is the real bank site they've reached via the phishing email, and therefore their concern that it seemed maybe fake was unfounded.
But the scammer needs the username, the password, and to phish the user... this is still more than just username+password (which could be reused on e.g. LinkedIn, Adobe, or any of the other hacked sites), and if the scammers do run a phishing attack, they can also get the OTP from the user's app in the same way as they would get the code from an SMS.
The phisher needs to know your phone number though to do that.
3 replies →
For anyone who's new to cyber security, the fundamental axiom is that anything can be cracked; it's only a matter of computation (time * resources). Just as the dose makes the poison (sola dosis facit venenum).
Suppose you have a secret that is RSA-encrypted: with the kind of computers we have now, we might be looking at three hundred trillion years to break it, according to Wikipedia. Obviously the secret would have lost its value by then, and the resources required to crack it would be worth more than the secret itself. Even with quantum computing, we are still looking at 20+ years, which is still enough for most secrets; you have plenty of time to change it, or it will have lost its value. So we say that's secure enough.
If that’s a fundamental axiom of cyber security then it’s obvious that it’s a field of fools. This is a poor, tech-driven understanding of security that will leave massive gaps in its application to technology.
From the video: "Cloud computing is really a fancy name for someone else's computer."
He goes on to discuss the expansion of "trust boundaries".
Big Tech: Use our computers, please!
> The use of memory unsafe languages for parsing untrusted input is just wild.
I think some of the vulnerabilities have been found in image file format or PDF parsing libraries. These are huge codebases that you can't just rewrite in another language.
At the same time, Apple is investing huge amounts of resources into making their (and everyone elses) code more secure. Xcode/clang includes a static analyzer that catches a lot of errors in unsafe languages, and they include a lot of "sanitizers" that try to catch problems like data races etc.
And finally, they introduced a new, much safer programming language that prevents a lot of common errors, and as far as I can tell they are taking a lot of inspiration from Rust.
So it's not like Apple isn't trying to improve things.
These are stopgaps, not long term solutions.
MSan has a nontrivial performance hit and is a problem to deploy on all code running a performance-critical service. Static analysis can find some issues, but any sound static analysis of a C++ program will rapidly havoc and report false positives out the wazoo. Whole-program static analysis (which you need to prevent false positives) is also a nightmare for C++ due to the per-translation-unit compilation model.
All of the big companies are spending a lot of time and money trying to make systems better with the existing legacy languages and this is necessary today because they have so much code and you can't just YOLO and run a converter tool to convert millions and millions of lines of code to Rust. But it is very clear that this does not just straight up prevent the issue completely like using a safe language.
I was with you until the parsing with memory unsafe languages. Isn’t that exactly the kind of “random security not based on a threat model” type comment you so rightly criticised in the first half of your comment?
I think you must have misunderstood the point the parent comment was trying to make. Memory-safety issues are responsible for a majority of real-world vulnerabilities. They are probably the most prevalent extant threat in the entire software ecosystem.
Buffer overflows are common in CVEs because it's the kind of thing programmers are very familiar with. But I'm pretty sure that in terms of real-world exploits things like SQL injection, cross-site scripting, authentication logic bugs, etc, are still far more common. Almost all of those are in bespoke, proprietary software. A Facebook XSS exploit doesn't get a CVE.
6 replies →
It may sound like I’m being snarky, but I’m not:
Don't users / social engineering make up the actual majority of real-world vulnerabilities, and pose the most prevalent extant threat in the entire software ecosystem?
2 replies →
Based on the hundreds, perhaps thousands of critical vulnerabilities that are due directly to parsing user input in memory-unsafe languages, usually resulting in remote code execution, how's this for a threat model: attacker can send crafted input that contains machine code that subsequently runs with the privileges of the process parsing the input. That's bad.
The attack surface is the parser. The ability to access it is arbitrary. I can't build a threat model beyond that for any specific case, but in the case of a text messaging app I absolutely expect "attacker can text you" to be in your threat model.
There are very few threat models that a memory unsafe parser does not break.
Even the "unskilled attacker trying other people's vulns" threat basically depends on the existence of memory-safety related vulnerabilities.
Then we’re right back in the checklist mentality of “500 things secure apps never do”. I could talk to somebody else and they’d tell me the real threat to worry about is phishing or poor CI/CD or insecure passwords or whatever.
3 replies →
I mean, the threat model is that:
1. Memory leaks/errors are bad
2. Programmers make those mistakes all the time
3. Using memory safe languages is cheap
Therefore,
4. We should use memory safe languages more often
The threat-model there is that the attacker controls the text that is parsed.
Add this language absolutism to the list of things we need to avoid.
> The use of memory unsafe languages for parsing untrusted input is just wild.
Indeed! The continued casualness of attitudes towards input validation floors me. "computer science" is anything but :p
> I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust
Can you though? Where/how are you deploying your Rust executables that isn't relying deeply on OS code written in "wild" "memory unsafe languages"?
I mean, I _guess_ it'd be possible to write everything from the NIC firmware all the way through your network drivers and OS to ensure no untrusted input gets parsed before it hits your Rust code, but I doubt anyone except possibly niche academic projects or NSA/MOSSAD devs have ever done that...
Yeah I mean, 100%, I hate that I run my code on Linux, which I don't consider to be a well secured kernel. It's an unfortunate thing, but such is life.
But attackers have significantly less control over that layer. This is quite on topic with regards to security nihilism - my parser code being memory safe means that the code that's directly interfacing with attacker input is memory safe. Is the allocator under the hood memory safe? Nope, same with various other components - like my TCP stack. But again, attackers have a lot less control over that part of the stack, so while unfortunate, it's not my main concern.
I do hope to, in the future, leverage a much much more security optimized stack. I'd dive into details on how I intend to do that, but I think it's out of scope for this conversation.
> Just the other day I suggested using a yubikey
The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security. Cost goes up from that exponentially.
Everybody defaults to a small number of security/identity providers because running the system is so stupidly painful. Hand a YubiKey to your CEO and their secretary. Make all access to corporate information require a YubiKey. They won't last a week.
We don't need better crypto. Crypto is good enough. What we need is better integration of crypto.
> The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security. Cost goes up from that exponentially.
But what does this have to do with the FIDO authenticator?
At first I thought you said $100 per user, and I figured, wow, you are buying them all two Yubikeys, that's very generous. And then I realised you wrote "per month".
None of this costs anything "per month per user". You're buying some third party service, they charge whatever they like, this is the same as the argument when people said we can't have HTTPS Everywhere because my SSL certificate cost $100. No, you paid $100 for it, but it costs almost nothing.
I built WebAuthn enrollment and authentication for a vanity site to learn how it works. No problem, no $100 per month per user fees, just phishing proof authentication in one step, nice.
The integration doesn't get any better than this. I guess, having watched a video today of people literally wrapping up stacks of cash to FedEx their money to scammers, I shouldn't underestimate how dumb people can be. But really, even if you struggle with TOTP, do not worry: WebAuthn is easier than that as a user.
And how do I use my YubiKey to access mail if its not Gmail/Office365?
And how do I enroll all my employees into GitHub/GitLab?
And how do I recover when a YubiKey gets lost?
And how do I ...
Sure, I can do YubiKeys for myself with some amount of pain and a reasonable amount of money.
Once I start rolling secure access out to everybody in the company, suddenly it sucks. And someone spends all their time doing internal customer support for all the edge cases that nobody ever thinks about. This is fine if I have 10,000 employees and a huge IT staff--this is not so fine if I've got a couple dozen employees and no real IT staff.
That's what people like okta and auth0 (now bought by okta) charge so bloody much for. And why everybody basically defaults to Microsoft as an Identity Provider. etc.
Side note: Yes, I do hand YubiKeys out as trios--main use, backup use (you lost or destroyed your main one), and emergency use (oops--something is really wrong and the other two aren't working). And a non-trivial number of services won't allow you to enroll multiple YubiKeys on the same account.
1 reply →
> Hand a YubiKey to your CEO and their secretary.
Well, I'm the CEO lol so we have an advantage there.
> The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security.
Totally, this is a huge issue to me. I strongly believe that we need to start getting TPMs and hardware tokens into everyone's hands, for free - public schools should be required to give it to students when they tell them to turn in homework via some website, government organizations/ anyone who's FEDRAMP should have it mandated, etc. It's far too expensive today, totally agreed.
edit: Wait, per month? No no.
> We don't need better crypto.
FWIW the kicker with yubikeys isn't really anything with regards to cryptography, it's the fact that you can't extract the seed and that the FIDO2 protocols are highly resistant to phishing.
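To gesture at why that is (a conceptual sketch only, with illustrative names, not the real FIDO2/WebAuthn data layout): the browser, not the user, binds the relying party's identity into what the authenticator signs, so an assertion minted on a look-alike phishing origin never verifies at the real site.

    // Conceptual sketch only; field names and structure are illustrative, not the
    // actual WebAuthn/FIDO2 wire format.
    struct Assertion {
        rp_id: String,      // filled in by the browser from the origin actually visited
        challenge: Vec<u8>, // random value chosen by the server for this login attempt
        signature: Vec<u8>, // made with the per-site key over the fields above
    }

    fn server_accepts(a: &Assertion, expected_rp_id: &str, issued_challenge: &[u8]) -> bool {
        // Even a fooled user can't hand a phisher anything replayable: the rp_id the
        // browser baked into the signed data won't match the real site's.
        a.rp_id == expected_rp_id
            && a.challenge == issued_challenge
            && signature_is_valid(a)
    }

    fn signature_is_valid(_a: &Assertion) -> bool {
        // Stand-in for real public-key verification against the credential
        // registered for this account.
        true
    }

    fn main() {
        let from_phishing_page = Assertion {
            rp_id: "evil.example".into(), // the origin the browser actually saw
            challenge: vec![1, 2, 3],
            signature: vec![],
        };
        assert!(!server_accepts(&from_phishing_page, "bank.example", &[1, 2, 3]));
    }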
I am scared to death of rust.
It appears that if one uses it, one becomes evangelized by it, spreads the word "Praise Rust!", and so forth.
Anything so evangelized is met with strong skepticism here.
What scares me about Rust is that people put so much trust in it. And part of that is because of what you mention, the hype in other words.
I don't follow this carefully but even I have heard of at least one Rust project that when audited failed miserably. Not because of memory safety but because the programmer had made a bunch of rookie mistakes that senior programmers might be better at.
So in other words, Rust's hype is going to lead to a lot of rewrites and a lot of new software being written in Rust. And much of that software will have simple programming errors that you can do in any language. So we're going to need a whole new wave of audits.
I recall a time at the grocery store, years ago. I wanted some sliced meat, but when I approached the counter a young woman was sweeping the floor.
Naturally, she was wearing gloves.
Seeing me, she grabbed the dustpan, threw away her sweepings, put the broom away, and was prepared to now serve me...
Still wearing the same gloves. Apparently magic gloves, for she was confused when I asked her to change them. She'd touched the broom, the dustpan, the floor, stuff in the dustpan, and the garbage. All within 20 seconds of me seeing her.
Proper procedure and an understanding of processes are far more effective than a misused tool.
Is rust mildly better than some languages? Maybe.
But it is not a balm for all issues, and as you say, replacing very well maintained codebases might result in suboptimal outcomes.
From the Ars reference: "There are some steep hurdles to clear for an attack to be successful. A hacker would first have to steal a target's account password and also gain covert possession of the physical key for as many as 10 hours. The cloning also requires up to $12,000 worth of equipment and custom software, plus an advanced background in electrical engineering and cryptography. That means the key cloning -- were it ever to happen in the wild -- would likely be done only by a nation-state pursuing its highest-value targets."
"only by a nation-state"
This ignores the possibility that the company selling the solution could itself easily defeat the solution.
Google, or another similarly-capitalised company that focuses on computers, could easily succeed in attacking these "user protections".
Further, anyone could potentially hire them to assist. What is to stop this if secrecy is preserved.
We know, for example, that Big Tech companies are motivated by money above all else, and, by-and-large, their revenue does not come from users. It comes from the ability to see into users' lives. Payments made by users for security keys are all but irrelevant when juxtaposed against advertising services revenue derived from personal data mining.
Google has an interest in putting users' minds at ease about the incredible security issues with computers connected to the internet 24/7. The last thing Google wants is for users to be more skeptical of using computers for personal matters that give insight to advertisers.
The comment on that Ars page is more realistic than the article.
Few people have a "nation-state" threat model, but many, many people have the "paying client of Big Tech" threat model.
Yes, if you don't trust Google don't use a key from Google. Is that what you're trying to say? If your threat model is Google don't buy your key from Google. Do I think that's probably a stupid waste of thought? Yes, I do. But it's totally legitimate if that's your threat model.
"But it's totally legitimate if that's your threat model."
Not mine. I have no plans to purchase a security key from Google. I have no threat model.
Nothing in the comment you replied to mentioned "trust" but since you raised the issue I did a search. It seems there are actually people commenting online who claim they do not trust Google; this has been going on for years. Can you believe it. Their CEO has called it out multiple times.^1 "[S]tupid waste of thought", as you call it. (That's not what I would call it.) It's everywhere.^2 The message to support.google and the response are quite entertaining.
1. For example, https://web.archive.org/web/20160601234401/http://allthingsd...
2.
https://support.google.com/googlenest/thread/14123369/what-i...
https://www.inc.com/jason-aten/google-is-absolutely-listenin...
https://www.consumerwatchdog.org/blog/people-dont-trust-goog...
https://www.wnd.com/2015/03/i-dont-trust-google-nor-should-y...
https://www.theguardian.com/technology/2020/jan/03/google-ex...
https://www.forbes.com/sites/kateoflahertyuk/2018/10/10/this...
> This ignores the possibility that the company selling the solution could itself easily defeat the solution.
How do you imagine this would work?
The "solution" here is just a cheap device that does mathematics. It's very clever mathematics but it's just mathematics.
I think you're imagining a lot of moving parts to the "solution" that don't exist.
All I am suggesting is that "hacker" as used by the Ars author could be a company, or backed by a company, and not necessarily a "nation-state". That is not far-fetched at all, IMO. The article makes it sound like "nation-states" are the only folks who could defeat the protection or would even have an interest in doing so. As the comment on the Ars page points out, that is ridiculous.
Assuming "hacker" could be a company what company would have such a motivation and resources to spy on people. The NSO's of the world, sure. Anyone else. Companies have better things to do than spy on people, right. Not anymore.
What about a company whose business is personal data mining, that goes so far as to sniff people's residential wifi (they lied about it at first when they got caught), collect audio via a "smart" thermostat (Nest), collect data from an "activity tracker" (FitBit), a "smartphone OS", a search engine, e-mail service, web analytics, etc., etc. Need I go on. I could fill up an entire page with all the different Google acquisitions and ways they are mining people's data.
Why are security keys any different. 9 out of 10 things Google sells or gives away are designed to facilitate data collection, but I guess this is the 1 in 10. "Two-factor authentication" has already been abused by Facebook and Twitter where they were caught using the data for advertising, but I suppose Google is different.
These companies want personal data. With the exception of Apple, they do not stay in business by selling physical products. Collecting data is what they do and they spend enormous amounts of time and effort doing it.
"That's all I know."
11 replies →
A key part of various such tamper-resistant devices is an embedded secret that's very difficult/expensive to extract. However, the manufacturer (i.e. "the company selling the solution") may know the embedded secret without extracting it. Because of that, trust in the solution provider is essential even if it's just simple math.
For a practical illustration, see the 2011 attack on RSA (the company) that allowed attackers access to secret values used in generating RSA's SecurID tokens (essentially, cheap devices that do mathematics) allowing them to potentially clone previously issued tokens. Here's one article about the case - https://www.wired.com/story/the-full-story-of-the-stunning-r...
1 reply →