Comment by jodrellblank
4 years ago
Eric Raymond observed it as a shift in software development toward taking advantage of the wisdom of crowds. I don't see that he speaks about security directly in the original essay[2]. He contrasts the previously held idea that stable software comes from highly skilled developers doing deep and complex debugging between releases with the idea that, because developers have different skillsets, a large enough pool of developers means any bug will meet someone who finds that bug an easy fix. Raymond observes that the Linux kernel development and contribution process was designed as if Linus Torvalds believed this, preferring ease of contribution and low-friction patch commits to tempt in more developers.
Raymond doesn't seem to claim anything like "there are sufficient eyes to swat all bugs in the kernel", or "there are eyes on all parts of the code", or "'bugs' covers all possible security flaws", and so on. He particularly mentions uptime and crashing, so less charitably the statement is "there are no crashing or corruption bugs so deep that a large enough quantity of volunteers can't bodge some way past them". That leaves plenty of room for less-used subsystems to have nobody touching them if they don't cause problems, for patches that fix stability at the expense of security, for an absence of careful design in some areas, for the number of eyes needed being substantially larger than the number of eyes involved or available, for maliciously submitted patches being different from traditional bugs, and more.