Comment by viraptor

2 months ago

The score assigned to issues has to be the worst-case one, because whoever is assessing it will not know how people use the library. The downstream users can then evaluate the issue, say with certainty whether it affects them (fully, partially, or not at all), and lower their internal impact accordingly. People outside that system would only be guessing. And you really don't want to guess "nobody would use it this way, it's fine" if it turns out some huge private deployment does.
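
That downstream re-evaluation is exactly what CVSS environmental metrics exist for. A minimal sketch, assuming the `cvss` package from PyPI; the vectors are illustrative, not a real CVE:

```python
# Sketch: how a downstream user can lower a worst-case base score
# with CVSS v3.1 environmental metrics. Assumes the `cvss` package
# from PyPI (https://github.com/RedHatProductSecurity/cvss).
from cvss import CVSS3

# Worst-case base vector as published: network-reachable, no privileges
# needed, high impact across the board -> 9.8 Critical.
base = CVSS3("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")

# The same issue re-scored by a deployment where the vulnerable code is
# only reachable locally (MAV:L) and availability barely matters to the
# business (AR:L). Illustrative values, not taken from any real CVE.
adjusted = CVSS3("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAV:L/AR:L")

print(base.scores())      # (base, temporal, environmental)
print(adjusted.scores())  # environmental score drops below the base
print(adjusted.severities())
```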

> The downstream users can then evaluate the issue, say with certainty whether it affects them (fully, partially, or not at all), and lower their internal impact accordingly.

Unfortunately that's not how it happens in practice. People run security scanners, and those report that you're using library X version Y, which has a known vulnerability with a High CVSS score or whatever. Even if you provide a reasoned explanation of why that vulnerability doesn't impact your use case, and you convince your customer's IT team of it, the exception is treated as merely a temporary waiver: very likely, you'll have the same discussion the next time a scan flags the same library.
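
One way to make such an assessment stick across rescans is a machine-readable VEX (Vulnerability Exploitability eXchange) statement that scanners can consume. A rough sketch emitting an OpenVEX-style document in Python; the CVE ID, package purl, document ID, and author are placeholders:

```python
# Sketch: recording a "not affected" assessment as an OpenVEX document,
# so scanners that consume VEX can suppress the finding persistently
# instead of re-flagging it on every scan. The CVE ID, product purl,
# and author below are hypothetical placeholders.
import json
from datetime import datetime, timezone

vex = {
    "@context": "https://openvex.dev/ns/v0.2.0",
    "@id": "https://example.com/vex/2024-001",      # hypothetical document ID
    "author": "Example Corp Product Security",      # hypothetical author
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "version": 1,
    "statements": [
        {
            "vulnerability": {"name": "CVE-2024-00000"},      # placeholder CVE
            "products": [{"@id": "pkg:pypi/somelib@1.2.3"}],  # placeholder purl
            "status": "not_affected",
            # Machine-readable justification labels come from the VEX spec.
            "justification": "vulnerable_code_not_in_execute_path",
            "impact_statement": "The vulnerable parser is never invoked; "
                                "we only use the library's encoder.",
        }
    ],
}

with open("vex.openvex.json", "w") as f:
    json.dump(vex, f, indent=2)
```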

The whole security-audit system and industry is problematic, and often generates huge amounts of busywork. Overly pessimistic CVEs are not the root cause, but they're still a big problem because they feed that cycle.

> The downstream users can then evaluate the issue, say with certainty whether it affects them (fully, partially, or not at all), and lower their internal impact accordingly.

If you score everything against the worst-case, lowest-common-denominator scenario, it biases nearly everything towards Critical, and the actually critical stuff gets lost in the noise. It's spam. If I get fifty emails about critical issues and only two of them are genuinely critical, I'm going to miss more of the important ones than if I only got ten emails about critical issues.
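
A back-of-the-envelope sketch of that trade-off, assuming a reviewer only digs into a fixed number of alerts before fatigue wins; the counts mirror the numbers above and are purely illustrative:

```python
# Sketch: expected number of genuinely critical issues a reviewer
# actually reaches, assuming alerts arrive in random order and the
# reviewer can only dig into `capacity` of them before moving on.

def expected_hits(alerts: int, truly_critical: int, capacity: int) -> float:
    """Expected true criticals among the first `capacity` alerts reviewed."""
    reviewed = min(capacity, alerts)
    return truly_critical * reviewed / alerts

# Worst-case scoring: 50 "Critical" alerts, 2 of them real.
print(expected_hits(alerts=50, truly_critical=2, capacity=10))  # 0.4

# Calibrated scoring: 10 "Critical" alerts, the same 2 real ones.
print(expected_hits(alerts=10, truly_critical=2, capacity=10))  # 2.0
```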

If we all had infinite time and motivation, this wouldn't be a problem. But when scoring is done by all-or-nothing purists, everything gets worse overall.