
Comment by Joker_vD

2 days ago

> Greg’s argument is a hard truth: “Usage is different for each user.” He cannot score a vulnerability because he doesn’t know if you’re running a cloud-native microservice or a legacy industrial controller.

What about having several use cases in mind and giving a score for each of those?

> We must stop litigating which fixes matter and start treating every kernel bug fix as relevant (a bug is a bug). We must stop running patching as a project and bake it into the pipeline so that applying stable fixes is simply what the system does (the patch is the policy).

Ah, so it's simply "apply all the fixes automatically", i.e. "the Chainguard way", but fully automated. Okay?

> What about having several use cases in mind and giving a score for each of those?

i imagine it's for the same reason they don't score for one: it takes time that could be allocated elsewhere

tbh i think scoring for multiple scenarios would take more time and be less useful. kernel devs are not implementors; they may have never used docker or built a cut-down kernel for an iot device, they just build a general-purpose kernel

  • > it takes time that could be allocated elsewhere

    And not scoring means that security triage teams everywhere have to spend their own time assessing the severity, and in doing so they mostly duplicate each other's work, while deduplication is nigh impossible. Is this a worthwhile trade?

    Consider e.g. vehicle recalls: the manufacturer could very well (barring legal requirements and the general public's expectations) just leave it to the customers and the repairmen out there to discover and deal with the defects on their own.

    > kernel devs are not implementors, they may have never used docker or built a cut down kernel for an iot device, they just build a general purpose kernel

    Well, that's a pretty condescending view of the kernel maintainers. Making a successful general-purpose kernel (never mind a general-purpose kernel that also has a lot of quite specific affordances for custom scenarios) still requires understanding how it will be used.

> What about having several use cases in mind and giving a score for each of those?

Or assign one score according to the worst-case scenario.
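The two proposals in this subthread, per-use-case scores and a single worst-case score, can be sketched in a few lines. The scenario names and numbers below are made up for illustration; they are not real CVSS data:

```python
# Hypothetical per-scenario severity scores for a single kernel bug fix.
# Scenarios and values are illustrative assumptions, not real assessments.
scenario_scores = {
    "cloud-native microservice": 7.8,
    "legacy industrial controller": 4.3,
    "cut-down IoT kernel": 9.1,
}

# Proposal 1: publish a score for each use case.
for scenario, score in sorted(scenario_scores.items()):
    print(f"{scenario}: {score}")

# Proposal 2: collapse to one score by taking the worst case (the maximum).
worst_case = max(scenario_scores.values())
print(f"worst-case score: {worst_case}")  # → worst-case score: 9.1
```

The worst-case aggregate is conservative by construction: it over-warns users in the mild scenarios, which is the usual objection to it, but it never under-warns.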