
Comment by cutemonster

4 years ago

I wonder about this too.

To me, it seems to indicate that a nation-state-supported evil hacker org (maybe posing as an individual) could place their own exploits in the kernel. Let's say they contribute 99.9% useful code, solve real problems, build trust over some years, and only rarely write an evil, hard-to-notice exploit bug. And then everyone thinks it was obviously just an ordinary bug.

Maybe they can pose as 10 different people, in case some of them get banned.

You're still in a better position with open source. The same thing happens in closed source companies.

See: https://www.reuters.com/article/us-usa-security-siliconvalle...

"As U.S. intelligence agencies accelerate efforts to acquire new technology and fund research on cybersecurity, they have invested in start-up companies, encouraged firms to put more military and intelligence veterans on company boards, and nurtured a broad network of personal relationships with top technology executives."

Foreign countries do the same thing. There are numerous public accounts of Chinese nationals or folks with vulnerable family in China engaging in espionage.

  • Plus, wouldn't it be much easier to do this under the guise of equality with some quickly thought up trash contract enforced on all developers?

    One might even say that while this useless attack is taking place, actual people with lifelong commitment to open source software and user freedom get taken out by the "NaN" flavour "NaN" koolaid of the week.

    Soon all that is left that is legal to say is whatever is approved by the "NaN" board. Eventually the number 0 will be found to be exclusionary or accused of "NaN", and we will all be stuck coding in unary again.

Isn't what you've described pretty much the very definition of an advanced persistent threat?

It's difficult to protect against trusted parties whom you assume, with good reason, to be good-faith actors.

  • The fundamental tension is between efficiency and security. Trust permits efficiency, at the cost of security (if that trust is found to be misplaced).

    A perfectly secure system is only realized by a perfectly inefficient development process.

    We can get better at lessening the efficiency tax of a given security level (through tooling, tests, audits, etc), but for a given state of tooling, there's still a trade-off.

    Different release trains seem the sanest solution to this problem.

    If you want bleeding-edge, you're going to pull in less-tested (and also less-audited) code. If you want maximum security, you're going to have to deal with 4.4.