CERT is releasing six CVEs for serious security vulnerabilities in dnsmasq

16 hours ago (lists.thekelleys.org.uk)

I think this is the breaking point where replacing our code written in C with code written in memory-safe languages is becoming urgent.

The vast majority of vulnerabilities found recently are directly related to code being written in memory-unsafe languages. It's very difficult to justify the claim that a DNS/DHCP server can't be written in Rust or Go without using unsafe (well, maybe a few unsafe calls are still needed, but those would be very few)...
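
A minimal sketch of what that buys you, in Go (the function and names here are made up for illustration, not taken from dnsmasq or any real server): a length or offset pulled out of an attacker-controlled DNS packet is bounds-checked, so a short or lying packet becomes an error instead of an out-of-bounds heap write.

    package main

    import (
        "encoding/binary"
        "errors"
        "fmt"
    )

    // parseHeader reads the fixed 12-byte DNS header. Slice accesses are
    // bounds-checked, so a truncated packet yields an error (or at worst a
    // panic), never a silent write past the end of a heap buffer.
    func parseHeader(pkt []byte) (id, qdcount uint16, err error) {
        if len(pkt) < 12 {
            return 0, 0, errors.New("packet shorter than DNS header")
        }
        id = binary.BigEndian.Uint16(pkt[0:2])
        qdcount = binary.BigEndian.Uint16(pkt[4:6])
        return id, qdcount, nil
    }

    func main() {
        // A 3-byte "packet" that a careless C parser might index past.
        _, _, err := parseHeader([]byte{0xab, 0xcd, 0x01})
        fmt.Println(err)
    }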

  • The problem is the lack of talent that is willing to work on this, not the language.

    AI security researchers at least do something. If it were so easy to rewrite everything in Rust, I don't know why the response to these incidents isn't a rock-solid replacement in Rust the next day.

    I'll tell you why that is: working on these things doesn't give you stars on GitHub.

  • I disagree -- we're clearly getting better safeguards by way of AI agents that spot potential vulnerabilities!

    • The question is whether the current situation is a short burst of activity, where the hype around AI vulnerability scanning dies down once the most critical bugs get fixed, or whether the current crop of system/infra software written in vulnerable languages like C is beyond redemption and will provide an endless source of critical bugs for AI to find until we fix it by rewriting it in Rust/Go/whatever.

      2 replies →

That is pretty bad!

"a remote attacker capable of asking DNS queries or answering DNS queries can cause a large OOB write in the heap."

Malformed DNS response causes "infinite loop and dnsmasq stops responding to all queries."

A malicious DHCP request can cause a buffer overflow.

To quote a famous (in certain circles) bowl of petunias, "oh no, not again!"
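
The quoted advisory text doesn't say what makes the parser loop, and it's worth noting that a memory-safe language wouldn't automatically fix that one: an infinite loop is a logic bug, not a memory bug. One classic way a malformed response can hang a DNS parser is a name-compression pointer that points back at itself; the standard defence is a hop cap, sketched below in Go purely as an illustration (this is not dnsmasq's code, and may not be the actual mechanism here).

    package main

    import (
        "errors"
        "fmt"
    )

    // readName walks a DNS name starting at off, following RFC 1035
    // compression pointers. Without the hop cap, a pointer that points back
    // at itself (or at an earlier pointer) loops forever.
    func readName(msg []byte, off int) (string, error) {
        name, hops := "", 0
        for {
            if off >= len(msg) {
                return "", errors.New("name runs off the end of the message")
            }
            b := int(msg[off])
            switch {
            case b == 0: // root label: done
                return name, nil
            case b&0xC0 == 0xC0: // compression pointer
                if hops++; hops > 16 {
                    return "", errors.New("too many compression pointers (loop?)")
                }
                if off+1 >= len(msg) {
                    return "", errors.New("truncated compression pointer")
                }
                off = (b&0x3F)<<8 | int(msg[off+1])
            default: // ordinary label of length b
                if off+1+b > len(msg) {
                    return "", errors.New("label overruns message")
                }
                name += string(msg[off+1:off+1+b]) + "."
                off += 1 + b
            }
        }
    }

    func main() {
        // A name that is just a pointer to itself: without the cap, this
        // would never terminate.
        fmt.Println(readName([]byte{0xC0, 0x00}, 0))
    }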

  • For a number of reasons, I feel that the only way we got here was via some kind of infinite improbability drive.

    (mostly unrelated to topic at hand though)

    • > For a number of reasons, I feel that the only way we got here was via some kind of infinite improbability drive.

      Oh very much so! In my mind, it seems that someone must have figured out what the universe was for, and now it's been replaced with something even more bizarre and inexplicable.

What is the nature of these findings? There’s a big difference between AI finding a buffer overflow and AI identifying a fundamental protocol flaw. Could AI realistically discover something like the Kaminsky attack? Or even an amplification exploit like the NXNSAttack?

Shameless plug time:

My own MaraDNS has been extensively audited now that we’re in the age of AI-assisted security audits.

Not one single serious security bug has been found since 2023. [1]

The only bugs auditors have been finding are things like “Deadwood, when fully recursive, will take longer than usual to release resources when getting this unusual packet” [2] or “This side utility included with MaraDNS, which hasn’t been able to be compiled since 2022, has a buffer overflow, but only if one’s $HOME is over 50 characters in length” [3]

I’m actually really pleased with just how secure MaraDNS is now that it’s getting real, in-depth security audits.

[1] https://samboy.github.io/MaraDNS/webpage/security.html

[2] https://github.com/samboy/MaraDNS/discussions/136

[3] https://github.com/samboy/MaraDNS/pull/137

  • Well, since you bundle Lua 5.1 (as Lunacy) instead of making it a library and loading it, and you bundled the 2012 version, you're probably affected by CVE-2014-5461 and others. Lua hasn't been free of security fixes.

    • Thank you for your concern.

      I fixed CVE-2014-5461 for Lunacy back in 2021:

      https://github.com/samboy/lunacy/commit/4de84e044c1219b06744...

      This is discussed here:

      https://samboy.github.io/MaraDNS/webpage/security.html#CVE-2...

      In addition, I have done other security hardening with Lunacy compared to Lua 5.1:

      https://samboy.github.io/MaraDNS/webpage/lunacy/

      Now, I should probably explain why I’m using Lua 5.1 instead of the latest “official” version of Lua. Lua has an interesting history; in particular Lua 5.1 is the most popular version and the version which is most commonly used or forked against. Adobe Illustrator uses Lua 5.1, and Roblox uses a fork of Lua 5.1 called “luau”. LuaJIT is based on Lua 5.1, and other independent implementations of Lua (Moonsharp, etc.) are based on versions mostly compatible with Lua 5.1.

      Lua 5.1 has a remarkably good security history, and of course I take responsibility for any security bugs in the Lua 5.1 codebase since I use the code with the relatively new coLunacyDNS server (Lua 5.1 isn’t used with the MaraDNS or Deadwood servers).

      Lua 5.1 is used to convert documentation, but those scripts are run offline and the converted documents are part of the MaraDNS Git tree.

      4 replies →

    • Unless the service accepts Lua code from the internet (and that would be a completely insane thing), CVE-2014-5461 will not apply. And while I have not reviewed every Lua CVE, I bet most (all?) of them require specially crafted code, or at least highly complex user input (such as arbitrary JSON).

      It's important to look at the actual vulnerability in context, and not just list any CVE that matches by version.

      8 replies →

  • MaraDNS is much less popular than dnsmasq though.

    I have several libraries that I've written. Not one single serious security bug in them has been found since 1991. Granted, nobody uses my libraries...

    Not to diminish your team's achievement! :D But it's important to contextualize claims like this with information about what your userbase looks like.

  • I remember being delighted to find MaraDNS as an alternative to the “do everything” approach of dnsmasq way back when I set up a DNS server, and, more importantly, I haven't had to think about it since then.

  • > Shameless plug time: My own MaraDNS has been extensively audited now that we’re in the age of AI-assisted security audits.

    Out of curiosity: what is the point you’re trying to make? That there are alternatives to dnsmasq? That somehow your software is “better”?

    This plug provides zero value to the dnsmasq discussion.

    As others have pointed out: the more widely used a piece of software is, the more scrutiny it gets and the more bugs or edge cases are found.

  • Good job. But it is amazing that we are still writing core networking tools in a vulnerable language such as C in 2026.

    • Agreed, it made a lot more sense to write MaraDNS in C in 2001 though.

      The main advantage of writing in C over Rust here in 2026 is that C has two different Lua interpreters, and there isn’t a port of Lua to Rust yet; [1] yes, there are ways to use the C version of Lua in Rust, but that’s different.

      If I were to write a new server today, I could very well write it in Go, then use GopherLua for the Lua engine:

      https://github.com/yuin/gopher-lua
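
      For anyone curious, embedding GopherLua is only a few lines; this is a generic sketch (the "double" hook is made up, not anything from MaraDNS), just to show the shape of exposing a Go function to Lua scripts:

          package main

          import (
              "fmt"

              lua "github.com/yuin/gopher-lua"
          )

          func main() {
              L := lua.NewState()
              defer L.Close()

              // Expose a Go function to Lua, the way a DNS server might expose
              // a "decide what to answer for this name" hook.
              L.SetGlobal("double", L.NewFunction(func(L *lua.LState) int {
                  n := L.CheckInt(1)
                  L.Push(lua.LNumber(n * 2))
                  return 1 // number of Lua return values
              }))

              if err := L.DoString(`print(double(21))`); err != nil {
                  fmt.Println("lua error:", err)
              }
          }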

      Although, even here, the advantage of C is that I could increase performance by using LuaJIT:

      https://luajit.org/luajit.html

      [1] If I were to use Rust, I would consider using Rune as an embedded language as per https://rune-rs.github.io/

  • Flagged, because this discussion comparing dnsmasq with another DNS resolver implementation that has essentially no worldwide rollout by comparison is pointless.

  • That's a bit shameless, indeed.

    dnsmasq has served me well for what feels like an eternity, in multiple setups for different use cases. Like all software it has bugs, and once located those get fixed. Its author is also easy to communicate with.

    Why should I switch over to something way less proven? I'm quite sure your software also has bugs, many still not located. Maybe because it's less popular and less well known, nobody cares to hunt for those bugs? Which means that even if fewer bugs have been found in your software so far, and it may look better audited for that reason, it may actually be far less secure.

    • "All software has bugs" is the most meaningless statement ever. It is just used for bonding with fellow bug writers who sit at a virtual campfire and muse about inevitabilities.

      Demonstrably some software has fewer bugs, and its authors are often hated, especially if they are a lone author like Bernstein. Because it must not happen!

      Projects with useless churn and many bug reports are more popular because only activity matters, not quality.

      13 replies →

    • > Why should I switch over to something way less proven?

      Must they prove their software to you? They're offering an alternative, not bargaining for a deal.

      4 replies →

Maybe this is the kick in the ass Debian needs to upgrade the embarrassingly ancient dnsmasq in "stable": while I can't think of any new features, the latest versions contain many non-CVE bug fixes.

But I doubt it; they will lazily backport these patches to create some Frankenstein one-off version and be done with it.

Before anyone says "tHaT's wHaT sTaBlE iS fOr": they have literally shipped straight-up broken packages before, because fixing it would somehow make it not "stable". They would rather ship useless, broken code than something too new. It's crazy.

  • They're not going to put a newer version in stable. The way stable gets newer versions of things is that you get the newer version into testing and then every two years testing becomes stable and stable becomes oldstable, at which point the newer version from testing becomes the version in stable.

    The thing to complain about is if the version in testing is ancient.

    • No, that's exactly the thing to complain about.

      That whole model dates to before automated testing was even really a thing, and no one knew how to do QA; your QA was all the people willing to run your code and report bugs, and that took time. Not to mention, you think the C of today is bad? Have you looked at old C?

      And the disadvantage is that backporting is manual, resource intensive, and prone to error - and the projects that are the most heavily invested in that model are also the projects that are investing the least in writing tests and automated test infrastructure - because engineering time is a finite resource.

      On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically, and encourages a whack-a-mole approach, because in the backport model people want fixes they can backport. And then things just get worse and worse.

      We'd all be a lot better off if certain projects took some of the enthusiasm with which they throw outrageous engineering time at backports, and spent at least some of that on automated testing and converting to Rust.

      42 replies →

    • Close: New versions go in unstable where development happens, testing is where things go to marinate for a while.

  • You don't have to use Debian stable, if you'd prefer Ubuntu every 6 months, or Fedora (6 months? 9 months?), or even Arch Linux updated daily ...

    I use Arch on my laptop; when I got it 2 years ago the AMD GPU was a bit new, so it was prudent to get the latest kernel, Mesa, everything. Since I use it daily it's not bad to update weekly and keep on top of occasional config migrations.

    I use Debian stable on my home server; it's been upgraded in place 4-ish times over 10 years. I can install weekly updates without worrying about config updates and such. I set up most of the stuff I wanted many years ago and haven't really wanted new features since, though I have installed Tailscale and Jellyfin from their separate Debian package repos, so those are very current. It does the same jobs I wanted it to do 8 years ago, with super low maintenance.

    But if you don't want Debian stable, that's fine. Just let others enjoy it.

  • About a decade ago I switched to Ubuntu LTS because of Debian’s “policy?” of having pretty old packages in “stable” and long release cycles.

    Nowadays, even with Ubuntu’s roughly two-year release cycle, I have to use third-party packages to get up-to-date software (PHP being one example) rather than some version from three years ago.

    We no longer live in a world (with few exceptions) where running a 3-5 year old distribution (still supported) makes sense.

  • For what it's worth, Debian had a security update for dnsmasq yesterday, presumably to address this.

  • That's what stable is for though. Like, sure, stable's policy is ludicrous and you would have to be insane to run stable. But the remedy for that isn't to try to change Debian policy, it's to get people to stop running stable. Maybe once no-one uses it Debian will see sense.

  • What if the new release that contains the fixes has new dependencies, and those also have new dependencies? I assume they sometimes have to Frankenstein packages to keep the boundaries of the target app intact while still getting major vulns patched in stable.

  •     https://security-tracker.debian.org/tracker/CVE-2026-2291
        https://security-tracker.debian.org/tracker/CVE-2026-4890
        https://security-tracker.debian.org/tracker/CVE-2026-4891
        https://security-tracker.debian.org/tracker/CVE-2026-4892
        https://security-tracker.debian.org/tracker/CVE-2026-4893
        https://security-tracker.debian.org/tracker/CVE-2026-5172
    

    fixed, fixed, fixed, fixed, fixed and fixed

  • > ...they have literally shipped straight-up broken packages before, because fixing it would somehow make it not "stable"

    Irrelevant strawman, since you're not accusing the dnsmasq package in Debian stable of being straight-up broken.

Never liked using dnsmasq. Always felt like too much in one tool. A local caching resolver, DHCP server, and TFTP/PXE boot setup were always things I preferred to configure separately.

  • That line of thinking is exactly why I ended up using maradns for my dns hosting way back.

    10/10, no regrets, would recommend.

  • I agree; it also goes against the Linux "way of doing things". For example, OPNsense uses only the DHCP portions of dnsmasq (and Unbound for the DNS parts), which just feels 'wrong'.

    • When I first came across Linux you would download the code (very slowly) to /usr/src/linux (extract and cd) and run "make config". You'd answer quite a lot of y/n and later y/n/m questions and then copy a binary and later on run a script to put things in place. Then you would fix up lilo and off you trot ... or not 8)

      Is that the Linux way you are on about? No obviously not 8)

      I think you mean the "unix idealized but never really happened exactly but we are quite close if you squint a bit ... way" where each tool does one job well and the pipeline takes up the slack.

"hopefully they will be releasing patched versions of their dnsmasq packages in a timely manner."

Hopefully!

I never liked dnsmasq or the Pi-hole derivation and do not use them, but many people seem to love this software. I don't think there is any number of CVEs that could convince people to stop using it.

"The tsunami of AI-generated bug reports shows no signs of stopping, so it is likely that this process will have to be repeated again soon."

But the AI deniers are telling us there is nothing to see ...

How bad is it if someone infects my home router using such a thing? They can MITM non-encrypted requests, but there are not a lot of those, right?

What else can they do, assuming the computers behind the router are all patched up?

  • They can block traffic to update servers so the computers behind the router aren't all patched up, then exploit them. They also get access to all the IoT devices on the internal network. They can also use your router as a proxy so their scraping/attack traffic comes from your IP address instead of theirs.

    It's definitely bad.

  • If you blindly TOFU SSH sessions, those can be pwned easily in many common use cases. Legacy software configurations like NFS with IP-based authentication will be bypassed. Realistically, the most likely scenario is your home connection being used as a VPN exit or a DDoS node.
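
    To make the TOFU point concrete, here is a rough Go sketch using golang.org/x/crypto/ssh (the user, password, and paths are made up): with a blindly-accepting host key callback, whoever answers your DNS lookup is "the server", while a pinned known_hosts entry turns a spoofed answer into a loud key-mismatch error.

        package main

        import (
            "fmt"
            "log"

            "golang.org/x/crypto/ssh"
            "golang.org/x/crypto/ssh/knownhosts"
        )

        func main() {
            // Blind acceptance: a compromised resolver makes this a silent MITM.
            insecure := &ssh.ClientConfig{
                User:            "me",
                Auth:            []ssh.AuthMethod{ssh.Password("hunter2")},
                HostKeyCallback: ssh.InsecureIgnoreHostKey(),
            }

            // Pinned host keys: a spoofed DNS answer now produces a key-mismatch
            // error instead of a working, attacker-terminated session.
            check, err := knownhosts.New("/home/me/.ssh/known_hosts")
            if err != nil {
                log.Fatal(err)
            }
            pinned := &ssh.ClientConfig{
                User:            "me",
                Auth:            []ssh.AuthMethod{ssh.Password("hunter2")},
                HostKeyCallback: check,
            }

            fmt.Println("built configs:", insecure != nil, pinned != nil)
            // ssh.Dial("tcp", "host:22", pinned) fails loudly if the answering
            // host's key doesn't match; with `insecure` it would not.
        }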

    • Yeah, and it's not like people recently launched a coffee shop that accepts payments over TOFU SSH, and a shell provider doing the same

  • They could try to exploit any device on your network, and since they can see which servers you connect to and how often you communicate with each one, they can write phishing emails tailored just for you.

Some of these will have made it into embedded hardware, making updates more challenging if, say, you have to flash an update.

The AI bug report tsunami is not in all projects. As the top comment notes, MaraDNS didn't have any. I assume djbdns and tinydns didn't either; otherwise they'd shout it from the rooftops.

I never understood why some projects get extremely popular and others don't. I also suspect by now that the tools deemed "too dangerous to release" are run against all projects, but their operators selectively contact only the projects where issues were found, so that they never have to admit that their tool didn't find anything.

  • > The AI bug report tsunami is not in all projects.

    It's in popular projects.

    • No, Postfix hasn't had a single valid bug found by AI. There are legions of other projects as well.

      It is a distorted view, because projects become popular by allowing indiscriminate commits, bugs, and maintainers.

      If I'd start a new project I'd allow anyone in and blog about 100 exploits every year, because that is exactly what people want. I'm serious.

If machine learning can find all these holes,

why can't machine learning write a product from scratch that is flawless?

  • Because the problem is asymmetric: the attacker only needs to find one hole at one time. The defender has to be flawless forever.

  • LLMs certainly make it more feasible to rewrite a product in a memory-safe language, eliminating a whole class of bugs.

    Flawless software is hard for an LLM to write, because all the programs it has been trained on are flawed as well.

    As a fun exercise, you could give a coding agent a hunk of non-trivial software (such as the Linux kernel, or postgresql, or whatever), and tell it over and over again: find a flaw in this, fix it. I'm pretty sure it won't ever tell you "now it's perfect" (and do this reproducibly).

  • If humans can find bugs, why can't humans write flawless code?

    Whatever the answer to that conundrum might be, LLMs are trained on these patterns and replicate them pretty faithfully.

  • How do you define flawless though?

    The CVEs here have their fair share of silly C problems, but the fixes also involve more rigid input validation and handling. That stricter validation excludes input which may even be valid by the spec, but is entirely problematic in practice.

    As an example, take a look at how many valid XML documents are in practice considered unsafe and not parsed, for example due to recursive entity expansion. This renders the parsers not flawless, and in fact not in spec.
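
    The canonical example is the "billion laughs" document: well-formed per the XML spec, yet refused or capped by most real-world parsers, which is exactly that trade-off. Here is a shortened version (three entity levels instead of the usual nine or so), wrapped in Go just to show how one mainstream parser reacts; encoding/xml does not interpret the DOCTYPE's entity declarations, so it rejects the document rather than expanding it.

        package main

        import (
            "encoding/xml"
            "fmt"
            "strings"
        )

        // Each entity expands to ten copies of the previous one, so a parser
        // that follows the spec literally would materialise a huge string.
        const laughs = `<?xml version="1.0"?>
        <!DOCTYPE lolz [
          <!ENTITY lol "lol">
          <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
          <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
        ]>
        <lolz>&lol3;</lolz>`

        func main() {
            var v struct {
                Text string `xml:",chardata"`
            }
            // Expect an error about the undeclared &lol3; entity: safe, but
            // arguably not "in spec".
            err := xml.NewDecoder(strings.NewReader(laughs)).Decode(&v)
            fmt.Println(err)
        }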

    Or, my favorite bait - there should be a maximum length limit on passwords. Why would you ever need a kilobyte sized password?

  • Have you ever met a security engineer? I’ve never met one who was also a good engineer (not saying they don’t exist, I just haven’t met one). Do they find vulnerabilities? Sure. Could they write the tools they use to find vulnerabilities? Most probably not.

  • Just because something is good at finding bugs doesn't mean it finds all the bugs. Finding a bug only tells you there was one bug you found; it doesn't tell you whether the rest is solid.

> The tsunami of AI-generated bug reports shows no signs of stopping, so it is likely that this process will have to be repeated again soon.

Welcome to the new world order.