Libxml2's "no security embargoes" policy

2 months ago (lwn.net)

A lot of these "security bugs" are not really "security bugs" in the first place. Denial of service is not resulting in people's bank accounts being emptied or nude selfies being spread all over the internet.

Things like "panics on certain content" like [1] or [2] are "security bugs" now. By that standard anything that fixes a potential panic is a "security bug". I've probably fixed hundreds if not thousands of "security bugs" in my career by that standard.

These barely qualify as "security bugs", yet they're rated "6.2 Moderate" and "7.5 HIGH". To say nothing of the gazillion "high severity" "regular expression DoS" nonsense and whatnot.

And the worst part is all of this makes it so much harder to find actual high-severity issues. It's not harmless spam.

[1]: https://github.com/gomarkdown/markdown/security/advisories/G...

[2]: https://rustsec.org/advisories/RUSTSEC-2024-0373.html

  • Dereferencing a null pointer is an error. It is a valid bug.

    The maintainer claims this is caused by allocator failure (malloc returning null), but it is still a valid bug. If you don't want to deal with malloc failures, just crash when malloc() returns null, instead of not checking the malloc() result at all.

    The maintainer could just write a wrapper around malloc that crashes on failure and replace all calls with the wrapper. It seems like an easy fix: almost no software can run without heap memory, so it makes no sense for the program to continue.
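
    A minimal sketch of the kind of wrapper described above (the names are illustrative, not actual libxml2 API): abort right at the failed allocation instead of letting a NULL pointer wander through the parser.

        #include <stdio.h>
        #include <stdlib.h>

        /* Crash deterministically on allocation failure instead of
           dereferencing NULL somewhere far away later. */
        static void *xmalloc(size_t size)
        {
            void *p = malloc(size);
            if (p == NULL) {
                fprintf(stderr, "out of memory (%zu bytes)\n", size);
                abort();
            }
            return p;
        }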

    Another solution is to propagate every error back to the caller, but that is difficult and there is a high probability that the caller won't bother checking the result out of laziness.

    A quote from a bug report [1]:

    > If xmlSchemaNewValue returns NULL (e.g., due to a failure of malloc), xmlSchemaDupVal checks for this and returns NULL.

    [1] https://gitlab.gnome.org/GNOME/libxml2/-/issues/905

    • > The maintainer could just write a wrapper around malloc that crashes on failure and replace all calls with the wrapper. It seems like an easy fix: almost no software can run without heap memory, so it makes no sense for the program to continue.

      So could the reporter of the bug. Alternatively, he could add an `if(is null){crash}` after the malloc. The fix is easy for anyone who has some knowledge of the code base, and the reporter has demonstrated this knowledge by finding the issue.

      If a useful PR/patch diff had been provided by the reporter, I would have expected it to be merged right away.

      However, instead of doing the obvious thing to actually solve the issue, the reporter hits the maintainer with this bureaucratic monster:

      > We'd like to inform you that we are preparing publications on the discovered vulnerability.

      > Our Researchers plan to release the technical research, which will include the description and details of the discovered vulnerability.

      > The research will be released after 90 days from the date you were informed of the vulnerability (approx. August 5th, 2025).

      > Please answer the following questions:

      > * When and in what version will you fix the vulnerability described in the Report? (date, version)

      > * If it is not possible to release a patch in the next 90 days, then please indicate the expected release date of the patch (month).

      > * Please, provide the CVE-ID for the vulnerability that we submitted to you.

      > In case you have any further questions, please, contact us.

      https://gitlab.gnome.org/GNOME/libxml2/-/issues/905#note_243...

      The main issue here is really one of tone. The maintainer has been investing his free time to altruistically move the state of software forward and the reporter is too lazy to even type up a tone-adjusted individual message. Would it have been so hard for the reporter to write the following?

      > Thank you for your nice library. It is very useful to us! However, we found a minor error that unfortunately might be severely exploitable. Attached is a patch that "fixes" it in an ad-hoc way. If you want to solve the issue in a different way, could we apply the patch first, and then you refactor the solution when you find time? Thanks! Could you give us some insights on when, after merging to main/master, the patch will end up in a release? This is important for us to decide whether we need to work with a bleeding edge master version. Thank you again for your time!

      Ultimately, it is a very similar message content. However, it feels completely different.

      Suppose you are a maintainer without that much motivation left, and you get hit with such a message. You will feel like the reporter is an asshole. (I'm not saying he is one.) Do you really care if he gets pwned via this bug? It takes some character strength on the side of the maintainer to not just leave the issue open out of spite.

      6 replies →

    • Many systems have (whether you like the idea or not) effectively infallible allocators. If malloc won't ever return null, there's not much point in checking.

    • A while back I remember looking at the kernel source code: when overcommit is enabled, malloc would not fail if it couldn't allocate memory; it would ONLY fail if you attempted to allocate memory larger than the available address space.

      I don't think you can deal with the failure condition the way you think on Linux (and I imagine on other operating systems too).
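
      A minimal sketch of that behaviour, assuming a Linux box with vm.overcommit_memory=1 (always overcommit): malloc hands back a non-NULL pointer for far more memory than the machine has, and the failure only surfaces later, when the pages are actually touched and the OOM killer steps in.

          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          int main(void)
          {
              size_t huge = (size_t)1 << 44;   /* 16 TiB, far beyond physical RAM */
              char *p = malloc(huge);

              if (p == NULL) {                 /* rarely taken under overcommit */
                  fprintf(stderr, "malloc failed up front\n");
                  return 1;
              }
              printf("malloc \"succeeded\" for %zu bytes\n", huge);

              /* Committing the pages is what actually fails: the process is
                 likely to be OOM-killed here, not at the malloc call above. */
              memset(p, 0, huge);

              free(p);
              return 0;
          }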

      5 replies →

    • In the event that malloc returns NULL and it isn't checked, isn't the program going to crash anyway? I usually just use a macro like "must_malloc" that does this. But the outcome is the same, I would think; it's mostly a difference of where it happens.
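
      A sketch of the kind of "must_malloc" macro mentioned above (the name comes from the comment, the body is just illustrative): wrapping it in a macro lets the crash report the allocation site rather than whichever later line happened to dereference NULL.

          #include <stdio.h>
          #include <stdlib.h>

          /* Abort with the file and line of the failed allocation. */
          #define must_malloc(n) must_malloc_impl((n), __FILE__, __LINE__)

          static void *must_malloc_impl(size_t n, const char *file, int line)
          {
              void *p = malloc(n);
              if (p == NULL) {
                  fprintf(stderr, "%s:%d: malloc(%zu) failed\n", file, line, n);
                  abort();
              }
              return p;
          }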

  • > Denial of service is not resulting in ...

    DoS results in whatever the system happens to do. It may well result in bad things happening, for example stopping AV from scanning new files, breaking rate limiting systems to allow faster scanning, hogging all resources on a shared system for yourself, etc. It's rarely a security issue in isolation, but libraries are never used in isolation.

    • An AV system stopping because of a bug in a library is bad, but that's not because the library has a security bug. It's a security problem because the system itself does security. It would be wild if any bug that leads to a crash or a memory leak was a "security" bug because the library might have been used by someone somewhere in a context that has security implications.

      A bug in a library that does rate limiting arguably is a security issue because the library itself promises to protect against abuse. But if I make a library for running Lua in redis that ends up getting used by a rate limiting package, and my tool crashes when the input contains emoji, that's not a security issue in my library if the rate limiting library allows emails with punycode emoji in them.

      "Hogging all of the resources on a shared system" isn't a security bug, it's just a bug. Maybe an expensive one, but hogging the CPU or filling up a disk doesn't mean the system is insecure, just unavailable.

      The argument that downtime or runaway resource use is considered a security issue, but only if the problem is in someone else's code, is some Big Brained CTO way of passing the buck onto open source software. If it were true, Postgres autovacuuming due to unpleasant default configuration would be up there with Heartbleed.

      Maybe we need a better way of alerting downstream users of packages when important bugs are fixed. But jamming these into CVEs and giving them severities above 5 is just alert noise and makes it confusing to understand what issues an organization should actually care about and fix. How do I know that the quadratic time regexp in a string formatting library used in my logging code is even going to matter? Is it more important than a bug in the URL parsing code of my linter? It's impossible to say because that responsibility was passed all the way downstream to the end user. Every single person needs to make decisions about what to upgrade and when, which is an outrageous status quo.

      21 replies →

  • "Security" announcements seem to be of 3 kinds lately:

    1. Serious. "This is a problem and it needs fixing yesterday."

    2. Marketing. "We discovered that if earth had two moons and they aligned right and you had local root already you could blah blah. By the way we are selling this product that will generate a positive feedback loop for your paranoid tendencies, buy buy buy!".

    3. Reputation chasing. Same as above, except they don't sell you a product, they want to establish themselves as an expert in aligning moons.

    Much easier to do 2 or 3 via "AI" by the way.

  • Particularly for [1], I strongly agree with you.

    This is so frustrating.

    The claimed CWE-125 [2] has a description that says "The product reads data past the end, or before the beginning, of the intended buffer." -- which empirically does not happen in the Go Markdown parser issue. It panics, sure, but that doesn't result in any reads past the end, or before the beginning, of the intended buffer. Said another way, *there is no out-of-bounds read* happening here at all.

    These kinds of false-positive CVE claims are super destructive to the credibility of the CVE system in general.

    --

    [1] https://github.com/gomarkdown/markdown/security/advisories/G...

    [2] https://cwe.mitre.org/data/definitions/125.html

  • A basic definition of a security bug is something that violates confidentiality, integrity or availability.

    A DoS affects the availability of an application, and as such is a real security bug. While the severity of it might be lower than a bug that allows someone to "empty bank accounts", and fixing it might get a lower priority, it doesn't make it any less real.

    • The problem is that DoS is the most vaguely defined category. If a library processes some inputs 1000x slower than average, one may claim that this is a DoS. What if it is just 10x slower? Where to draw the line? What if the problem domain is such that some inputs just take more time and there is no way to 'fix' it? What if the input comes only from a trusted source?

    • The CIA triad is a framework for threat modeling, not a threat model in and of itself. And what those specific terms mean will also be very system-specific.

    • > A basic definition of a security bug is something that violates confidentiality, integrity or availability.

      who decided that "availability" was part of security?

  • Full disclosure is the only fair and humane way to handle “security” bugs, because as you point out, every bug is a security bug to someone. And adversaries will make their way onto embargo lists anyway. It’s good to see a principled maintainer other than openbsd fighting the fight.

  • Everything is a "security bug" in the right (wrong?) context, I suppose.

    • Well, that's sort of the problem.

      It's true that once upon a time, libxml was a critical path for a lot of applications. Those days are over. Protocols like SOAP are almost dead and there aren't really a whole lot of new networking applications using XML in any sort of manner.

      The context where these issues could be security bugs is an ever-vanishing usecase.

      Now, find a similar bug in zlib or zstd and we could talk about it being an actual security bug.

      6 replies →

  • Unfortunately this is timely news: https://news.sky.com/story/patient-death-linked-to-cyber-att...

    > Denial of service is not resulting in ...

    Turns out they result in deaths. (This was DoS through ransomware)

    • We're talking about a library the GNOME project wrote to read their config files. If you put that in a pacemaker, it's not GNOME's fault.

    • Security bugs always have a context-dependent severity. An availability problem in a medical device is far more severe than a confidentiality problem. In a cloud service, the same problems might switch their severity, downtime isn't deadly and just might affect some SLAs, but disclosing sensitive data will yield significant punishment and reputation damage.

      That is why I think that "severity" and the usual kinds of vulnerability scores are BS. Anyone composing a product or operating a system has to do their own assessment, taking into account all circumstances.

      In the context of the original article this means that it is hopeless anyways, and the maintainer's point of view is valid: in some context everything is "EXTREMELY HIGH SEVERITY, PANIC NOW!". So he might as well not care and treat everything equally. Absolutely rational decision that I do agree with.

  • Denial of service is a security bug. It may seem innocuous in the context of a single library, but what happens when that library finds it way into core banking systems, energy infrastructure and so on? It's a target ripe for exploitation by foreign adversaries. It has the same potential to harm people as other bugs.

    • The importance of the system in question is not a factor in whether something is a security bug for a dependency. The threat model of the important system should preclude it from using dependencies that are not developed with a similar security paradigm. Libxml2 simply operates under a different regime than, as an arbitrary example, the nuclear infrastructure of a country.

      The library isn't a worm, it does not find its way into anything. If the bank cares about security they will write their own, use a library that has been audited for such issues, sponsor the development, or use the software provided as is.

      You may rejoin with the fact that it could find its way into a project as a dependency of something else. The same arguments apply at any level.

      If those systems crash because they balanced their entire business on code written by randos who contribute to an open source project then the organizations in question will have to deal with the consequences. If they want better, they can do what everyone is entitled to: they can contribute to, make, or pay for something better.

      1 reply →

    • By that standard almost any bug could be considered a "security bug", including things like "returns error even though my XML is valid" or "it parses this data wrong".

      1 reply →

  • A DoS bug is not important for almost anyone. You probably aren't targeted, you probably sanitized correctly anyway, there's not a huge impact potential anyway.

    But a hospital? A bank? A stock broker? Some part of the military's stack?

    Context is important, and what is innocuous to you may kill someone or cost millions if exploited in the wild elsewhere.

    It would be profoundly difficult for a machine or a convention to understand everyone's context and be able to frame it correctly, so it's left to the developers to review what the security issues are and act accordingly.

    I do agree the system should be improved and there's a lot of spam, but your blasé attitude toward what is or is-not a security issue seems off the mark.

  • > A lot of these "security bugs" are not really "security bugs" in the first place. Denial of service ...

    Ackshually...

    Security is typically(*) classified as CIA: Confidentiality, Integrity and Availability. Denial of Service is an attack against Availability... so yeah, that kind of is inherently a security bug.

  • Example I observed firsthand: a CVE was filed because GNU C Library had a memory corruption bug. Yes, the memory corruption bug was real, but the glibc core developers did not agree that it was a security issue, and I think they are right: https://sourceware.org/bugzilla/show_bug.cgi?id=29444

    Why? Because the memory corruption only happened if you manually called a semi-undocumented API. And that API was only there to support the profiler (gprof), so it being called manually almost never happened and wasn’t officially supported, in normal use the compiler would insert calls to it automatically in profiler builds and in normal production builds the undocumented API would never be called. So in practice this is impossible to exploit, except for apps which do weird things which almost nobody does (e.g. use profiler builds in production, and then expose a REST API to let a remote user stop/start the profiler at runtime)

    And yet, now it is listed as a real vulnerability in umpteen security vendor databases. Because the CVE database just accepts anything as a vulnerability if someone claims it is, and if the developers disagree, they just mark it as "Disputed". But then in my experience a lot of these vendors don't treat Disputed vulnerabilities any differently; their code analysis tools will still flag them as a "security risk" even though the vast majority of them are BS.

  • Any bug that can be used directly, or indirectly alongside others, is a security bug.

    A denial of service in a system related to emergency phone calls can result in people's deaths.

  • > A lot of these "security bugs" are not really "security bugs" in the first place. Denial of service is not resulting in people's bank accounts being emptied or nude selfies being spread all over the internet.

    That is not true at all. Availability is also critical. If nobody can use bank accounts, the bank has no purpose.

    • Many of these issues are not the type of issues that will bring down an entire platform; most are of the "if I send wrong data, the server will return with a 500 for that request" or "my browser runs out of memory if I use a maliciously crafted regexp". Well, whoopdeedoo.

      And even if it somehow could, it's 1) just not the same thing as "I lost all my money" – that literally destroys lives and the bank not being available for a day doesn't. And 2) almost every bug has the potential to do that in at least some circumstances – circumstances which are almost never true in real-world applications.

      4 replies →

    • If every single bug in libxml is a business ending scenario for the bank, then maybe the bank can afford to hire someone to work on those bugs rather than pestering a single volunteer.

    • Security and utility are separate qualities.

      You’re correct that inaccessible money are useless, however one could make the case that they’re secure.

      13 replies →

    • I think it's context dependent whether DoS is on par with data loss/extraction, including whether it's actually a security issue or not. I would argue DoS for a bank (assuming it affects backend systems and not just the customer portal) would be a serious security issue given the kinds of things it could impact.

The breaking point here seems to be security researchers (or maybe just one) essentially “farming” this project for “reputation”. They seem to be approaching it like a computer game against NPCs where you get as much reward as time spent, except in this case they’re imposing a significant amount of work on a real life volunteer maintainer.

I suspect the maintainer would mind less if it was reported by actual users of the library who encountered a real world issue and even better if they offer a patch at the same time, but these bugs are likely the result of scanning tools or someone eyeballing the code for theoretical issues.

In light of the above, the proposed MAINTENANCE-TERMS.md makes a lot of sense, but I think it should also state that security researchers looking for CVEs, or concerned about responsible disclosure, should contact the vendor of the software distributing the library.

This would put the onus on the large corporates leveraging the library (at no charge) to use their own resources to address security researcher concerns appropriately; they can probably do most of the fix work themselves and then coordinate with the maintainer only to get a release out in a timely manner.

If maintainers find that people coming to them with security issues have done all the work possible beforehand, they'd probably be completely happy to help.

> Ariadne Conill, a long-time open-source contributor, observed that corporations using open source had responded with "regulatory capture of the commons" instead of contributing to the software they depend on.

I'm only half-joking when I say that one of the premier selling points of GPL over MIT in this day and age is that it explicitly deters these freeloading multibillion-dollar companies from depending on your software and making demands of your time.

  • With SaaS swallowing a big chunk of the software business, the GPL is much less effective.

    There isn't much difference between MIT and GPL unless you are selling a product that runs locally or on premises, and even with the latter some companies try to work around the GPL by renting servers with the software on them - either as physical boxes or as something provided on a cloud provider marketplace.

    Look at what you actually have installed on your computer - odds are that unless your job requires something like CAD, photo/video editing or other highly specialized software, you have nothing made by a large enterprise, with the exception of the OS and Slack/Teams/Zoom.

  • This makes an assumption that a bunch of companies are maintaining their own forks of MIT software with bug fixes and features and not giving it back.

    I find that hard to believe.

    • One of the comments on the LWN article is an analysis of exactly that happening with this very library - https://lwn.net/Articles/1026956/

      In short, Apple maintain a 448 kB diff which they 'throw across the wall' in the form of an opaque tarball, shorn of all context. Many of the changes contained within look potentially security-related, but it's been released in a way which would require a huge amount of work to unpick.

      That level of effort is unfeasible for a volunteer upstream developer, but is a nice juicy resource for a motivated attacker. Apple's behaviour, therefore, is going to be a net negative from a security point of view for all other users of this library.

      4 replies →

    • No, they're mostly not. They're throwing the maintenance demand back on the unpaid, understaffed open source developers. That's what TFA is about.

    • Oh, I've seen it plenty. Cultural awareness is just very low in places for some reason.

    • I work at a bigco and this happens all the time. I have probably written 20 patches for open source stuff like kubernetes, but when I open a pull request nobody on the project looks at it and it sits open forever. We keep patch sets internally and rebase on top of upstream, as some projects will not take our contributions.

    • Not really. A company that does not bother contributing to a liberally-licensed project will 100% avoid GPL software like the plague. In either case, they won't contribute. In the latter case, they don't get to free-ride like a parasite.

      7 replies →

  • From a maintainers point of view there is no difference between someone from a large company reporting a bug and some random hobby programmer reporting a bug.

  • Why bother open sourcing if you're not interested in getting people to use it?

    • The GPL does not prohibit anyone from using a piece of software. It exclusively limits the actions of bad faith users. If all people engaged with FOSS in good faith, we wouldn't need licenses, because all that most FOSS licenses require of the acceptors is to do a couple of small, free activities that any decent person would do anyway: thank/give credit to the authors who so graciously allowed you to use their work, and if you make any fixes or improvements, share alike.

      Security issues like this are a prime example of why all FOSS software should be at least LGPLed. If a security bug is found in FOSS library, who's the more motivated to fix it? The dude who hacked the thing together and gave it away, or the actual users? Requesting that those users share their fixes is farrr from unreasonable, given that they have clearly found great utility in the software.

      5 replies →

    • A decent part of my job is open source. Our reason for doing it is simple: we would rather have people who are not us do the work instead of us.

      On some of our projects this has been a great success. We have some strong outside contributors doing work on our project without us needing to pay them. In some cases, those contributors are from companies that are in direct competition with us.

      On other projects we've open sourced, we've had people (including competitors) use, without anyone contributing back.

      Guess which projects stay open source.

      2 replies →

    • When I, as a little child (or at least that is how it feels now), got excited about contributing to open source, it was not the thought that one day my code might help run some giant web platform's infrastructure or ship as part of some AAA videogame codebase that motivated me. The motivation was the idea that my code might be useful to people even with no corporation or business having to be involved!

    • You can want to be helpful without wanting to have power or responsibility.

      I'm interested in people (not companies, or at least I don't care about companies) being able to read, reference, learn from, or improve the open source software that I write. It's there if folks want it. I basically never promote it, and as such, it has little uptake. It's still useful though, and I use it, and some friends use it. Hooray. But that's all.

    • So that if they find it useful, they will contribute their own improvements to benefit the project.

      I don’t think many projects see acquiring unpaying corporate customers as a goal.

    • There are tons of reasons. E.g., public money, public code. We are in research and we are open sourcing because we know that we cannot maintain anything, giving people the chance to pick up stuff without having to buy stuff that is constantly losing value and becomes abandonware very soon these days (at this point we often don't even have the resources to open source). So what you mostly get from us is 'public money, crappy unmaintained code'.

    • People can use it. Corporations won't. I'm entirely unbothered by this outcome.

      This isn't a popularity contest and I'm sick of gamification of literally everything.

I really don’t understand solo unpaid maintainers who feel “pressure” from users. My response would always be: it’s my repo, my code, if you don’t like how I’m doing things, fork the code megashrug.

You owe them nothing. That fact doesn’t mean maintainers or users should be a*holes to each other, it just means that as a user, you should be grateful and you get what you get, unless you want to contribute.

Or, to put it another way: you owe them exactly what they’ve paid for!

  • Your solution is exactly right, but let me try to help understanding the problem.

    Many open source developers feel a sense of responsibility for what they create. They are emotionally invested in it. They may want to be liked or not be disliked.

    You’re able to not care about these things. Other people care but haven’t learned how to set boundaries.

    It’s important to remember, if you’re not understanding what a majority of people are doing, you are the different one. The question should be “Why am I different?” not “Why isn’t everyone else like me?”

    “Here’s the solution” comes off far better than, “I don’t understand why you don’t think like me.”

  • Sadly, that stuff backfires. The researcher will publish your response along with some snarky remarks about how you are refusing to fix a "critical issue", and the next time you are looking for a job and HR googles your name, it pops up, and -poof-, we'll call you later.

    I used to work on a kernel debugging tool and had a particularly annoying security researcher bug me about a signed/unsigned integer check that could result in a target kernel panic with a malformed debug packet. Like you couldn't do the same by just writing random stuff at random addresses, since you are literally debugging the kernel with full memory access. Sad.

    • Just be respectful and not snarky. And be clear about your boundaries.

      What I do is I add the following notice to my GitHub issue template: "X is a passion project and issues are triaged based on my personal availability. If you need immediate or ongoing support, then please purchase a support contract through my software company: [link to company webpage]".

  • > I really don’t understand solo unpaid maintainers who feel “pressure” from users.

    Some open source projects which are well funded and/or motivated to grow are giddy with excitement at the prospect you might file a bug report [1,2]. Other projects will offer $250,000 bounties for top tier security bugs [3].

    Other areas of society, like retail and food service, take an exceptionally apologetic, subservient attitude when customers report problems. Oh, sir, I'm terribly sorry your burger had pickles when you asked for no pickles. That must have made you so frustrated! I'll have the kitchen fix it right away, and of course I'll get your table some free desserts.

    Some people therefore think doing a good job, as an open source maintainer, means emulating these attitudes. That you ought to be thankful for every bug report, and so very, very sorry to everyone who encounters a crash.

    Needless to say, this isn't a sustainable way to run a one-person project, unless you're a masochist.

    [1] https://llvm.org/docs/Contributing.html#id5 [2] https://dev.java/contribute/test/ [3] https://bughunters.google.com/about/rules/chrome-friends/574...

> The point is that libxml2 never had the quality to be used in mainstream browsers or operating systems to begin with

I think that's seriously over-estimating the quality of software in mainstream browsers and operating systems. Certainly some parts of mainstream OS's and browsers are very well written. Other parts, though...

This is an alarming read. Not so much the "security bugs are bugs, go away" sentiment which seems completely legitimate, but that libxml2 and libxslt have been ~ solo dev passion projects. These aren't toys. They're part of the infrastructure computing is built on.

  • You got the timeline wrong: libxml2 has always been a solo dev passion project, then a bunch of megacorps used them for the infrastructure computing is built on. This is on them.

  • Exactly how openssl was (is?) when heartbleed happened. It's nothing new sadly, there are memes about the "unknown oss passion project" holding up the entire stack all over the internet.

  • These projects are toys. The real problem is that multi billion dollar companies are using toys to keep you safe. Maybe we shouldn't build our core infrastructure with LEGO blocks and silly putty.

Very sad read. Much of the multi-billion dollar project I work on is built on top of libxml2 and my company doesn't have a clue. Fuck, even most of my colleagues working with XML every day don't even know it because they only interface indirectly with it via lxml.

There are two types of responsible disclosure: coordinated disclosure where there's an embargo (ostensibly so that the maintainer can patch the software before the vulnerability is widely known) and full disclosure where there's no embargo (so that users can mitigate the vulnerability on their own, useful if it's already being exploited). There's no reason a maintainer shouldn't be allowed to default to full disclosure. In general, any involved party can disclose fully. Irresponsible disclosure is solely disclosing the vulnerability to groups that will exploit it, e.g. NSO.

  • Yeah, exactly. And the subtext of all of this is that big companies are going to get burnt by these kinds of decisions. But big companies work around this kind of thing all the time. OpenSSL is a good example.

It'd be great if some of these open source security initiatives could dial up the quality of reports. I've seen so, so many reports for totally unreachable code that get a CVE for causing a crash. Maintainers will argue that user input is filtered elsewhere and the "vuln" isn't real, but MITRE doesn't care.

  • > I've seen so, so many reports for totally unreachable code that get a CVE for causing a crash.

    There have been a lot of cases where something once deemed "unreachable" eventually became reachable, sometimes years later, after a refactoring, and then there was an issue.

    • At what rate though? Is it worth burning out devs we as a community rely upon because maybe someday 0.000001% of these bugs might have real impact? I think we need to ask more of these "security researchers". Either provide a real world attack vector or start patching these bugs along with the reports.

      11 replies →

> It includes a request for Wellnhofer to provide a CVE number for the vulnerability and provide information about an expected patch date.

“Three.”

“Like, the number 3? As in, 1, 2, …?”

“Yes. If you’re expecting me to pick, this will be CVE-3.”

  • The project doesn't have to provide one though. The person reporting it can handle it if they care. It's ok to say "I'm not interested in those".

  • I think he should just reject reports of vulnerabilities if they aren't accompanied by a patch.

If you skim past the less-interesting project history, there's an interesting description of some dynamics that apply to a lot of open source projects, including:

> Even if it is a valid security flaw, it is clear why it might rankle a maintainer. The report is not coming from a user of the project, and it comes with no attempt at a patch to fix the vulnerability. It is another demand on an unpaid maintainer's time so that, apparently, a security research company can brag about the discovery to promote its services.

> If Wellnhofer follows the script expected of a maintainer, he will spend hours fixing the bugs, corresponding with the researcher, and releasing a new version of libxml2. Sveshnikov and Positive Technologies will put another notch in their CVE belts, but what does Wellnhofer get out of the arrangement? Extra work, an unwanted CVE, and negligible real-world benefit for users of libxml2.

> So, rather than honoring embargoes and dealing with deadlines for security fixes, Wellnhofer would rather treat security issues like any other bug; the issues would be made public as soon as they were reported and fixed whenever maintainers had time. Wellnhofer also announced that he was stepping down as the libxslt maintainer and said it was unlikely that it would ever be maintained again. It was even more unlikely, he said, with security researchers "breathing down the necks of volunteers".

> [...] He agreed that "wealthy corporations" with a stake in libxml2 security issues should help by becoming maintainers. If not, "then the consequence is security issues will surely reach the disclosure deadline (whatever it is set to) and become public before they are fixed".

As a maintainer of several open source projects over my life, I really hated these so-called security researchers and their CVEs. I routinely fixed more impactful bugs thanks to user reports, but when one of these companies found a bug, they made a whole theater around it, even though the impact was pretty small. Pretty much any bug, except maybe a typo in the UI, is a security bug. It gets tiring very soon. And with the CVEs comes a lot of publicity and a lot of demands.

  • Do the security researchers provide you with patches, or is it more frequently "there's a bug here"?

    In the latter case I'm wondering if there's an argument to be made for "show me the code or shut up": simply rejecting reports on security issues which are not also accompanied by a patch. I'm thinking, will it devalue the CVE on the researcher's resume if the project simply says no, on the grounds that no fix was provided?

    Probably not.

    • CVE is an index of vulnerabilities. Whether there's a patch and who made it is largely irrelevant in that context.

> ...there are currently four bugs marked with the security label in the libxml2 issue tracker. Three of those were opened on May 7 by Nikita Sveshnikov, a security researcher who works for a company called Positive Technologies.

I'm confused. Why doesn't Positive Technologies submit a patch or offer to pay the lead maintainer to implement a fix?

FYI, Wiki tells me:

    > Positive Technologies is a Russian information security research company and a global leader in cybersecurity.

  • The security researcher is paid to find vulnerabilities, not to fix them. These companies are selling code analysis to their customers and the more issues they find, the more they'll be worth.

    When it comes to fixing the issues, their customers will have to beg/spam/threaten the maintainers until the problem is solved. They probably won't write a patch; after all, Apple, Google, and Microsoft are only small companies with limited funds.

  • Because they don't use libxml2 and don't actually have any need for a fix. They only want to build a reputation as pentesters by finding vulnerabilities in high-profile projects.

  • I am replying to my own post instead of replying to all of the child posts:

    The point of my original post... that I hoped someone would see/interpret: Reporting "security bugs" without a patch or an offer to pay the lead maintainer to implement a fix feels like blackmail in 2025. Yes, I know this will be a hugely controversial opinion amongst the HN crowd. Personally: I don't see a huge amount of commercial value in pure infosec research that does not include funds to develop or fund a patch. The primary purpose of these "pure" infosec research firms is to generate FOMO for enterprise clients who pay them for private patches or "support".

  • Perhaps you are imagining some free software bong(o drum) circle?

    The big point is this is a critical component for Apple and Google (and maybe Microsoft), and nobody is paying any attention to it.

Don't like something? Fork and fix.

Unhappy with a maintainer? Fork and maintain it yourself.

Some open source code creates issues in your project? Fix it and try to upstream the fix. Upstream doesn't accept it? Fork and announce the fix.

Unpaid open source developers owe you nothing, you can't demand anything, their work is already a huge charitable contribution to humanity. If you can do better — fork button is universally available. Don't forget to say thank you to original authors while you stay on the shoulders of giants.

I understand the stance, but the big corps using it (Apple, Google, Microsoft) are doing so at their own risk and silently acknowledge that. It's not entirely fair though, Google did make a donation.

  • > It's not entirely fair though, Google did make a donation.

    Yup. $10 000.

    Remind me what the average Google salary is? Or how much profit Google made that year?

    Or better still, what is the livable wage where the libxml maintainer lives? You know, the maintainer of the library used in the core Google product?

    • I agree that $10,000 isn’t a meaningful investment given the scale of reliance.

      What would a fair model look like? An open-source infrastructure endowment? Ongoing support contracts per critical library?

      At the same time, I think there’s a tension in open source we don’t talk about enough: it’s built to be free and open to all, including the corporations we might wish were more generous. No one signed a contract!

      As the article states, Libxml2 was widely promoted (and adopted) as the go-to XML parser. Now, the maintainer is understandably tired. There is now a sustainability problem that is more systemic than personal. How much did the creator of libxml benefit?

      I don’t think we should expect companies to do the right thing just because they benefit and it isn’t how open source was meant to be and this isn’t how open source is supposed to work

      But maybe that’s the real problem

      1 reply →

The only obstacle here appears to be the psychological issues of the maintainers themselves. I know it may be hard to say "fuck off", but they will have to learn to say it to stop being exploited.

> It was certainly promoted on the project web site as a capable and portable toolkit for the purpose of parsing XML.

This is a garbage criticism. It’s perfectly adequate for that for almost everyone. If you are shipping it in a browser to billions of people, that’s a very unique situation, and any security issues are a you problem.

Not sure if this is intended to be a “show both sides” journalism thing but it’s a totally asshole throwaway comment.

    The point is that libxml2 never had the quality to be used in mainstream browsers or operating systems to begin with. It all started when Apple made libxml2 a core component of all their OSes. Then Google followed suit and now even Microsoft is using libxml2 in their OS outside of Edge. This should have never happened. Originally it was kind of a growth hack, but now these companies make billions of profits and refuse to pay back their technical debt, either by switching to better solutions, developing their own or by trying to improve libxml2.
    The behavior of these companies is irresponsible. Even if they claim otherwise, they don't care about the security and privacy of their users. They only try to fix symptoms.

Hear, hear!

Bigger companies either have their own policies or have policies derived from regulatory demands on the software they are using for their products and services. Defects must be fixed within a certain timeframe. Software suppliers and external code must be vetted. Having such a widely used library explicitly unmaintained should, in theory, make it a no-go area, forcing either removal or ongoing explicit security audits - it may well be cheaper for any of them to take over the full maintenance load. Will be interesting to watch.

Also, the not-so-relevant security bugs are not just a cost to the developers; the library churn also costs more and more users, if they are forced by policy to follow the latest versions in a timely manner in the name of "security".

It seems perfectly reasonable for any library to take the stance they are not a security barrier. It is up to people using libxml2 in applications and OSs that have the resources to issue CVEs and track embargoes. I am sure any resulting PRs will be gratefully welcomed.

Funny - when struts2/log4j caused a lot of million-dollar problems, how many companies were looking for commercial alternatives or invested in developing their own solutions? That's right - zero. Everyone just switched to the next freebie.

Do we need a more profound solution than what the maintainer is doing here? Any given bug is either:

a) nonsense, in which case nobody should spend any time fixing it (I'm thinking things like the frontend DDoS CVEs that are common), or

b) an actual problem, in which case a compliance person at one of these mega tech companies will tell the engineers it needs to be fixed. If the maintainer refuses to be the person fixing it (a reasonable choice), the mega tech company will eventually just do it.

I suppose the risk is the mega tech company only fixes it for their internal fork.

  • > I suppose the risk is the mega tech company only fixes it for their internal fork.

    They'd rather send a patch than have to maintain and sync an internal fork with upstream.

We get so many 'security advisors' trying to blackmail us for money, or threatening to post on some social media that we don't care about security because we ignored their emails. A small company, let alone an open source maintainer, doesn't have time for this. Most of this stuff is just not a priority, or not valid for our case. We had some relief years ago when we changed our internal stuff to give off product names and version numbers that simply don't exist, but because so much is frontend now, tools are so good at fingerprinting it that we now get tons of those again.

  • As someone who runs a small company with a static content website and got an email like this, I was thinking a response like the following might be appropriate:

    Thank you for reaching out to us. Please be aware that we do not run any kind of security bounty/reward programs.

    Having performed our own analysis we have not been able to identify any practically exploitable security risks.

    If you have found a practically exploitable security issue with our website, please provide some form of demonstration so that we may discuss further.

    I don’t think that would come across as not being concerned about security, but puts the onus on the “researcher” to prove there is a real problem.

    Chances are they did some automated scan and found some out of date JavaScript library version which despite having a vulnerability, is not actually a security risk on a static content site.

It would be better if there was a layer of maintainers between the free software authors and the end users that could act as a buffer in cases like this, in particular to take care of security vulnerabilities that genuinely need dealing with quickly.

Of course that's exactly what traditional Linux distributions signed up to do.

Clearly many people have decided that they're better off without the distributions' packaging work. But maybe they should be thinking about how to get the "buffering" part back, and ideally make it work better than the distributions managed to.

I think they are not going far enough.

"All null-pointer-referencing issues should come with an accompanying fix pull request".

  • I don't think putting the burden to fix the code should be on users. However, it also shouldn't be on developers.

    I think something like "Null-pointer-referencing issues will not be looked at by core maintainers unless someone already provides a patch". That way, someone else who knows how to fix the problem can step in, and users aren't left with the false impression that merely reporting their bug will not guarantee a solution.

  • So if I find a null pointer dereference issue in something written in a language I don’t know, I shouldn’t report it because I can’t include a fix?

When a project is up, open source developers are keen to promote it, put it on their CVs, and give conference talks. There is no obligation for companies to sponsor anything; this is not the reason behind open source.

Yes, open source has changed from the early 90s. There are more users, and companies use projects and make millions with other people's work.

I feel for the maintainer, with how ungrateful people are. And demanding without giving.

Open Source licenses fall short.

Open Source projects should clearly state what they think about fixing security, taking on external contributions or if they consider the project feature complete. Just like standard licenses, we should have a standard, parseable maintenance "contract".

"I fix whatever you pay for, I fix nothing, I fix how I see fit. Including disclosure, etc."

So everyone is clear about what to expect.

  • That's in the license already, and quite clear.

    https://gitlab.gnome.org/GNOME/libxml2/-/blob/63f98ee8a3a11f...

    • It's not; the license just says "nothing guaranteed", it doesn't say what to expect. An "I can do whatever I want" doesn't tell you anything about my behavior.

      "The viewpoint expressed by Wellnhofer's is understandable, though one might argue about the assertion that libxml2 was not of sufficient quality for mainstream use. It was certainly promoted on the project web site as a capable and portable toolkit for the purpose of parsing XML. Open-source proponents spent much of the late 1990s and early 2000s trying to entice companies to trust the quality of projects like libxml2, so it is hard to blame those companies now for believing it was suitable for mainstream use at the time."

      If the license says one thing, and you say and promote something else, you can't say "But it's in the license" and "I said so at a conference" just as it fits you.

      So what should I believe? What you write in the license? What you say at a conference? Nothing you say?

      2 replies →

It's so weird to me to report a bug to open source but not at least suggest a fix :/. Especially around security bugs: to prove it you need to reliably trigger and exploit it, so it should be plainly obvious in most cases what the fix is :/.

This is why I never report stuff to open source. If you wanna play bug bounty and CVE hoarder, it's better to stick with bug bounty programs.

Why? There the security researcher can be depressed about the process himself rather than some volunteer coder. Gotta not make your issues other people's issues.

I'm very sure that if he were well paid by those corporations he would have no problem maintaining it. Take note, guys.

I don't think this trend much matters. Serious vendors concerned about security will simply vendor things like libxml2 and handle security inbounds themselves; they'll become the real upstreams.

  • Then they all have patches floating around, and get in trouble coordinating with each other. Long term, they would have to set up a foundation to manage these patches, call it the 'a patchie' foundation. Maybe they'll think about a cute name and release a webserver.

"...the project has received the immense sum of $11,000..."

Is the author being sarcastic? Or is that genuinely an immense sum relative to how little funding most FOSS gets?

Honestly the only permanent solution to this is probably a big string of LeftPad events. Maintainers of projects like this that have been subsumed into corporate infrastructure should pull the plug and nuke the git repo.

Disastrous, apocalyptic consequences is the only way to get the attention of the real decision makers. If libxml2 just vanishes and someone explains to John Chrome or whoever that $150k a year will make the problem go away, it's a non-decision. $150k isn't even a rounding error on a rounding error for Google.

The only way to fight corporations just taking whatever they want is to absolutely wreck their shit when they misbehave.

Call it juvenile, sure, but corporations are not rational adults and usually behave like a child throwing a temper tantrum. There have to be real, painful and ongoing consequences in order to force a corporation to behave.

So software released under the MIT license and maintainer now complains that corporate users are not helping improve it? I'd file this under Stallman told you so.

I empathize with some of the frustrations, but I'm puzzled by the attempts to paint the library as low-quality and not suitable for production use:

> The viewpoint expressed by Wellnhofer is understandable, though one might argue about the assertion that libxml2 was not of sufficient quality for mainstream use. It was certainly promoted on the project web site as a capable and portable toolkit for the purpose of parsing XML. Open-source proponents spent much of the late 1990s and early 2000s trying to entice companies to trust the quality of projects like libxml2, so it is hard to blame those companies now for believing it was suitable for mainstream use at the time.

I think it's very obvious that the maintainer is sick of this project on every level, but the efforts to trash-talk its quality and the contributions of all previous developers don't sit right with me.

This is yet another case where I fully endorse a maintainer's right to reject requests and even step away from their project, but in my opinion it would have been better to just make an announcement about stepping away than to go down the path of trash talking the project on the way out.

  • I think Wellnhofer is accurate in his assessment of the current state of the library and its support infrastructure institutions. Software without adequate ongoing maintenance should not be used in production.

    (Disclosure: I'm a past collaborator with Nick on other projects. He's a fantastic engineer and a responsible and kind person.)

    • The crux is these seemingly bogus security "bugs". If there were quality issues, the sheer amount of software and people using libxml would, by virtue of testing in production and in the wild, have found most of them by now.

      There is plenty of closed-source software today that is tested only within cost and schedule constraints and is running in production. I get the point, but libxml is not one of those cases.

  • A large part of the problem is the legacy burden of libxml2 and libxslt. A lot of the implementation details are exposed in headers, and that makes it hard to write improvements/fixes that don't break ABI compatibility.

  • Recall similar things were said about OpenSSL, and it was effective at getting corps to start funding the project.

    • It was not however effective at getting the project to care about quality or performance.

So reading this, it sounds like the maintainer got burned out.

That's reasonable, being a maintainer is a thankless job.

However, I think there is a duty to step aside when that happens. If nobody can take the maintainer's place, then so be it; it's still better than the alternative. Being burned out but continuing anyway just hurts everyone.

It's absolutely not the security researcher's fault for reporting real, albeit low-severity, bugs. (To be clear though, it's entirely reasonable for maintainers to treat low-severity security bugs as public. The security policy is the maintainer's decision; it's not right to blame researchers for following the policy maintainers set.)

  • Curl has the same issue, and the problem is that these reports are just noise. They waste everyone's time and often even lack a proof of concept.

    • Afaik, curl was complaining about AI generated reports that were bullshit. They were not complaining about reports that legit caused crashes. Totally different thing.

      1 reply →

  • Being a free software maintainer, especially for code that you did not yourself write, is in effect a volunteer position in a charity or a non-profit organization. You yourself volunteered to take the position, and when you did, you became responsible for acting in the interests of the project and all its users. The fact that you are not paid does not mean that you can do whatever you please. If you at any time feel that you cannot fulfill your responsibilities to your users and to the development of the project, you should immediately leave your position to more eager and/or capable hands. (You should already have been prepared and have such people ready to take over, which should be possible if the project is popular enough.)