Why does it matter? I know the answer and this is a philosophical complaint, but the purpose of CVE is simply to make sure that people are talking about the same bug, not as a certification of importance or impact.
In this particular case, the poster is complaining that 3 CVEs were assigned for memory corruption vulnerabilities reachable only from the dnsmasq configuration file. I didn't read carefully, but the presumption that config file memory corruption bugs aren't vulnerabilities is problematic, because user input can find its way into configurations through templating; it depends on how innocuous the field triggering the bug is.
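To illustrate that templating path with a purely hypothetical sketch ("domain=" and "dhcp-script=" are real dnsmasq directives; the helper and paths are invented for illustration):

    # Hypothetical router-firmware helper: regenerates the root-owned
    # dnsmasq config from a domain name the user typed into the web UI.
    TEMPLATE = "domain={domain}\ninterface=br0\n"

    def render_config(user_domain: str) -> str:
        # No validation: whatever the user typed is interpolated verbatim.
        return TEMPLATE.format(domain=user_domain)

    # A benign value produces the intended config...
    print(render_config("example.com"))

    # ...but a newline in the value injects an arbitrary extra directive,
    # so the config parser (and options like dhcp-script=) becomes
    # reachable from user input even though the file itself is root-owned:
    print(render_config("example.com\ndhcp-script=/tmp/evil.sh"))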
I've had to generate "bill of materials" for software I've shipped, and often certain end users will beat you over the head for "vulnerabilities" even if they're a low CVSS score or do not apply to your own code. I get the resistance to wanting CVEs for everything, as regardless of the initial intentions, there's a LOT of people/enterprises that just see "oh shit there's a CVE, the whole thing is garbage, we're not going to accept this/pay you/etc." Basically CVEs are often weaponized in a really counterproductive way.
> Basically CVEs are often weaponized in a really counterproductive way.
This is inevitable when you boil everything down to a number. When that number refers to a (potentially) costly bug, people shirk critical thinking and just go straight for zero-tolerance.
Not ideal but I'm not sure if there's a better way :/
Yup, and people get real stupid with it too. I’ve seen people request an update to fix ReDoS vulnerabilities in a Go package that uses only the stdlib, because somewhere along the line a bot flagged a regex and a CVE was opened with no consideration that it was nonsensical.
You explain that the CVE makes no sense, and you’re met with the response of “ok but fix when”.
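For context on why that particular report was nonsense: Go's stdlib regexp engine guarantees linear-time matching (it's RE2-based, with no backtracking), so catastrophic-backtracking ReDoS patterns simply don't apply to it. The blowup lives in backtracking engines; a minimal sketch in Python, whose re module does backtrack:

    import re
    import time

    # Classic catastrophic-backtracking shape: nested quantifiers let the
    # engine partition the input in exponentially many ways before failing.
    pattern = re.compile(r"(a+)+$")

    for n in (16, 20, 24):
        s = "a" * n + "b"          # the trailing "b" forces a failed match
        t0 = time.perf_counter()
        pattern.match(s)           # explores roughly 2**(n-1) splits
        print(f"n={n}: {time.perf_counter() - t0:.3f}s")

    # Runtime grows ~16x per four extra characters here; Go's regexp would
    # reject the same input in linear time, hence the CVE was nonsensical.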
The Black Duck scanner in particular I've found is really easy to misconfigure so that it siphons up all sorts of crazy shit. Did some coworker write a one-off support script that uses an ancient container in some random repository? Oops, now you've got to answer for why you've got a dozen Debian or Alpine vulns in a product that ships on bare-metal RHEL. And if your build process is not 100% lunar-lander clean, which in the era of ship-trash-as-fast-as-possible it's not going to be, you inevitably end up with an absolute deluge of flagged things that you have no idea where they came from, or how to explain to some suit that no, we're not shipping Debian Jessie in 2025, calm down.
Ironically, software without a long list of CVEs is often the real hot garbage.
Some of it is surprisingly well known by name too!
If you do everything yourself you will avoid a lot of CVEs... for the time being.
Or get big enough, join the CVE board and just make the rules such that you can hide them forever
I think what's happening here is, people don't have time to assess. And frankly, can you blame them?
A person might be implementing dozens or hundreds of pieces of software from multiple vendors. Now there are CVEs on their radar. They have to deal, and assess.
What do they do?
Do a deep dive on every CVE, including looking at code, validating what the CVE represents, and assessing security risk org wide, no matter how and where and in what way the software is used? Is code even available?
Or, is the prudent thing to say "CVE -- we need the vendor to resolve".
How much work must an end user put in, when a CVE is there?
I agree 100% that this is terrible, but my point is to at least understand it from the side of implementation. What I tend to do is use my distro for everything I possibly can. This provides an entity that is handling CVEs, and even categorizing them:
https://security-tracker.debian.org/tracker/source-package/o...
This helps reduce the need to handle CVEs directly. Not eliminate it, of course, but vastly reduce it. Clicking through to a CVE gives helpful output, including a rating:
https://security-tracker.debian.org/tracker/CVE-2021-36368
That rating may reflect that the issue does not affect Debian in its default config, that something isn't compiled in, that the impact is truly low, and so on.
This gives me something to read if I must, and to grasp when I have no time to deep dive. I trust Debian to be reasonably fast and to work well resolving CVEs of importance, and to properly triage the rest.
Yes, I know of edge cases, and yes, I know that seldom-used packages often need an end user to report a CVE. It can and does happen. But the goal here is "doing our very best" and "proving we're doing that".
So this helps by allowing me to better focus on CVEs of vendor products I use, and get a better grasp on how to pursue vendors.
Yet when dealing with the infrastructure of smaller companies -- they just don't have the time. They still have to manage the same issues as a larger company, be it SOC 2 compliance or what not, as well as liability issues in their market sphere.
And the thing is, I'm willing to bet larger companies are far worse at this CVE chicanery. It's just rote to them. Smaller companies have flexibility.
Here's a hotlist for making at least some of this manageable, because if you give people information, you don't have to respond as much:
* have an RSS feed, or a webpage that is only updated when there is a security update for your software
* have a stable and a development (bleeding-edge) branch. One branch gets only security updates and never new code. Maybe, possibly, bugfixes, but bugfixes must not break the API or config files, or create requirements for newer versions of libraries
* provide a mailing list, never ever used for marketing purposes, which alerts users to new updates for the software. Never spam that email address. Ever.
Important:
If you have outstanding CVEs, list them somewhere on a static page, with a description of what the issue is and how you've triaged it. If you believe it's a bogus CVE, say so. If you think it only causes issues in certain circumstances, and is thus less important than other CVEs you are working on, say so.
Keep every CVE on that page; when one is resolved, simply update the page to say so, along with the version/commit and date of the fix. Again, information resolves so many issues.
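For illustration, a minimal sketch of such a page rendered from a triage list (every ID, status, and note below is a made-up placeholder, not a real advisory):

    # Sketch: render a static CVE-status page from a small triage list.
    # All entries are hypothetical placeholders.
    TRIAGE = [
        {"id": "CVE-2025-00001", "status": "fixed",
         "note": "Heap overflow in the option parser.",
         "resolved": "v2.91 (commit abc1234, 2025-03-01)"},
        {"id": "CVE-2025-00002", "status": "disputed",
         "note": "Requires editing a root-owned config; we consider this "
                 "bogus and have said so publicly.",
         "resolved": None},
    ]

    def render(entries):
        lines = ["Security status", "===============", ""]
        for e in entries:
            lines.append(f"{e['id']} [{e['status']}]")
            lines.append(f"  {e['note']}")
            if e["resolved"]:
                lines.append(f"  Resolved in: {e['resolved']}")
            lines.append("")
        return "\n".join(lines)

    print(render(TRIAGE))  # pipe this into the static page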
Do these things, and your end users will love you, and it will engender more trust that security issues are being dealt with seriously. (Note: I'm not saying they aren't, but if you make it easy for people to know when updates come out, lots of questions stop being asked.)
When engineers see this sort of thing, they love you. They become stronger advocates. It falls under marketing as much as technical due diligence.
As an open source software vendor I can say two things:
1) The CVE system allows vendors to dispute CVEs that relate to their product. I don't know the exact rules, so I don't know whether that applies in this case. We take anything that can crash our software seriously.
2) For users without a support contract, your priority does not automatically become our priority. If you want your issues fixed, make sure we have the money to do so. Just because you got a free download doesn't give you any rights to support.
I suspect the big problem here is thinly-stretched volunteer maintainers.
I am very sympathetic to the idea that all memory corruption bugs should be fixed systematically, whether or not they're exploitable. It works well for OpenBSD. And, well, I wouldn't have leaned into Rust so early if I wasn't a bit fanatic about fixing memory corruption bugs.
But at the same time, a lot of maintainers are stretched really thin. And many pieces of software choose to trust some inputs, especially inputs that require root access to edit. If you want to take user input and use it to generate config files in /etc, you should plan to do extremely robust sanitization. Or to make donations to thinly-stretched volunteer maintainers, perhaps.
CVEs, however, do get scored according to CVSS, and those scores are often extremely hostile and live in fantasy land.
CVEs also cannot be denied by projects, and are often used as an avenue of harassment towards open source projects.
I agree with the poster on that mailing list, this is not, nor should be, a CVE. At no point can you edit those files without being root.
Is that not a problem with how people are using CVEs, scoring them and attaching value to them, rather than with whether a CVE should be assigned at all? A CVE is simply a number and some data about a vulnerability, so that the community knows they are all talking about the same issue.
Even if you need to be root to edit the files, it is still a deviation from the design or reasonably expected behaviour of that interface, so is still a bug and should still get a CVE. It should either be fixed or, failing that, documented as 'won't fix' and on the radar of anyone building an application on top. Someone building the next Plesk or cPanel or similar management system should at least know about filtering their input and not allowing it to reach the dangerous config file.
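To make that filtering concrete, a hedged sketch of what such a management layer could do before a user-supplied value ever reaches the config (the validator and regex are my own illustration, not code from any real panel):

    import re

    # Allowlist for a user-supplied domain name: alphanumeric labels with
    # internal hyphens, dot-separated. \Z (not $) so a trailing newline
    # can't sneak past the anchor.
    HOSTNAME_RE = re.compile(
        r"[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?"
        r"(\.[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?)*\Z")

    def safe_domain(value: str) -> str:
        # Newlines would start new directives; reject anything off-list.
        if not HOSTNAME_RE.match(value):
            raise ValueError(f"refusing suspicious value: {value!r}")
        return value

    print(safe_domain("example.com"))                # passes
    try:
        safe_domain("a.com\ndhcp-script=/tmp/evil")  # rejected
    except ValueError as e:
        print(e)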
Re: Harassment - Can't the project release a statement saying that the bug writeup is low quality and unable to be reproduced? Anyone ignoring that without question, and using it as evidence that the project is bad without proof, is putting way too much value in CVEs, and the fault is their own.
> so is still a bug and should still get a CVE
It's a bug, sure. The V in CVE is for "vulnerability", which is why people treat CVEs as more than just bugs.
If every bug got a CVE, practically every commit would get one and they'd be even less useful than they are now.
At that point, why not just use commit hashes for CVEs and get rid of the system entirely if we're going to say every bug should get a CVE?
> Re: Harassment - Can't the project release a statement saying that the bug writeup is low quality and unable to be reproduced?
If your suggested response to a human DoS is "why can't the humans just do more work and write more difficult-to-word-correctly communication", then you're not understanding the problem.
If you are wasting time wording communication, then aren't you doing it wrong?
I imagine the response would be looking at it briefly, seeing if it looks dangerous or reproducible and getting an AI to return a templated "PoC or GTFO" response.
The mere existence of a CVE doesn't tell anyone whether a bug is valid or not, and security reports should be handled the same way regardless of whether one exists. For some odd reason people have attached value to having their name logged beside CVEs, despite it not telling you anything.
"human communication is easy, just have an AI say 'buzz off' and the conversation partner and other strangers will always respond respectfully, I don't know why so many people complain about lack of spoons or other social issues".
Thanks doctor, you just solved my anxiety.
I broadly agree that having templates does lower the amount of human effort and emotional labor required, but trust me, it's not a silver bullet, even hitting someone with a template takes spoons.
I don't really care that CVEs in theory are apparently entirely without meaning and created for nonexistent bugs, we're talking about the reality of how they're perceived and used.
Like, I'm saying "Issuing garbage such that 100 people have to read it and then figure out what to do is bad, we should instead have a higher bar for the initial issuing part so 1 or 2 people have to actually read it, and 100 people can save some time. We should call out issuing garbage as bad behavior to hopefully reduce it in the future".
You're apparently disagreeing with that and saying "But reading is easy, and the thing is meaningless anyway so this real harm that actually happens is totally fine. We should keep issuing as much garbage as we can, the numbers don't mean anything. It's better to make a pile of garbage and stress the entire system such that no one values or trusts it than to add any amount of vetting or criticism over creating garbage"
idk, I guess we're probably actually on the same page and you're just arguing for arguing's sake because you think you can be a pedant and be technically correct about CVEs. Tell me if I got a wrong read there and you have a more concrete point I'm missing?
But that's not what happened here. These are memory corruption bugs. Probably not meaningful ones, but in the subset of bugs that are generally considered vulnerabilities.
It's more complicated than that though. For security, the whole context has to be considered.
Like for example, look at the linked CVE-2025-12200, "NULL pointer dereference parsing config file"...
Please, explain a single dnsmasq setup where someone is constructing a config file that both takes in untrusted input and where this NPE is the difference between being secure and being DoS'd or otherwise compromised. If you can conjure up even a plausible hypothetical way this could happen, I'd love to hear it, because it seems impossible to me.
This seems firmly in the realm of issuing CVEs for "post-quantum crypto may not be safe from unknown alien attacks".
CVE-2025-1312 bash and sudo privilege escalation
sudo may be exploited to obtain full root privilege when the shell receives attacker-controlled input
to reproduce: execute this shell script and authorize sudo when prompted
> Is that not a problem with how people are using CVEs, scoring them and attaching value to them
Well, yes, it is. But if that's the way the market is going to game the scoring/value system it's (mis)using, then it behooves a project that wants to be successful to play the same game and push back when the scoring unfairly penalizes it.
Basically dnsmasq doesn't really have much of a choice here. Someone found a config parser bug and tried to make a big deal out of it, so someone else (which has to be dnsmasq or a defender) needs to explain why it's not a big deal.
Why?
What negative thing happens to the dnsmasq project if they just don't argue about whether or not it's a big deal?
Some product decides not to use it. Someone loses a contract supporting it. Someone doesn't get a job because their work isn't favored anymore.
I think you're trying to invoke a frame where, because dnsmasq is "open source", it isn't subject to market forces or doesn't define value in a market-sensitive way. And... it is, and it does.
Free software hippies may be communists at heart but they still need to win on a capitalist battlefield.
It gets blurry at times though.
Imagine a router has a web/CLI interface for setting the DHCP server's domain name. At some point the user's data is forwarded to a process editing the root-owned config file.
Hypothetically, if a vulnerability in parsing that value from the config could be exploited by the end user, that would certainly matter.
And these things always seem to be one step away from bugs that allow arbitrary injection into the config file…
(I'm amazed at the hot messes exposed in HTTP and SMTP by differences in CR/CRLF/LF handling. Proxy servers and even "git" keep screwing this up…)
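A toy sketch of that class of mess (the two "parsers" here are deliberately naive stand-ins, not any real proxy's code): two components that disagree on whether a bare LF ends a header line will extract different headers from the same bytes.

    # The same raw bytes, seen by a strict-CRLF parser and an LF-tolerant one.
    raw = b"Host: example.com\nX-Injected: 1\r\nContent-Length: 0\r\n"

    def strict_crlf(data):
        # CRLF-only view: the bare "\n" is just part of the Host value.
        return [l for l in data.split(b"\r\n") if l]

    def lenient_lf(data):
        # LF-tolerant view: the bare "\n" starts a brand-new header.
        return [l.strip(b"\r") for l in data.split(b"\n") if l.strip(b"\r")]

    print(strict_crlf(raw))  # 2 headers; "X-Injected" hides inside Host
    print(lenient_lf(raw))   # 3 headers; "X-Injected" suddenly exists

    # When a front proxy takes one view and the backend takes the other,
    # that disagreement is exactly how request-smuggling bugs are born.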
Just because you cannot see how a vulnerability can be exploited does not mean that others cannot. As you describe, people seem to assume that the only way the config file ends up on the server is someone «physically» editing it.
An anecdote: I have been struggling to exploit a product that relies on MongoDB. I can replace the configuration file, but gaining RCE is not supported «functionality» in the embedded version, as the __exec option came in a newer version.
A parser bug would be most welcome here.
Why stop there? Imagine a situation where the user is allowed to patch the binary.
If someone can template in data, it's a lot easier to just set "dhcp-script=/arbitrary/code"
If the person templating isn't validating data, then letting someone template into this config file is already RCE.
... Also, this is a segfault; the chance anyone can get an RCE out of '*r = 0' with r slightly out of bounds is close to nil. You'd need an actively malicious compiler.
While CVEs in theory are "just a number to coordinate with, no real meaning", in practice a "Severity: High" CVE will trigger a bunch of work for people, so it's obviously not ideal to issue garbage ones.
Maybe we should issue a CVE for company vulnerability response processes that blindly take CVSS scoring as input without evaluating the vulnerability.
> blindly take CVSS scoring as input without evaluating the vulnerability.
Evaluating the CVSS score in your own context is the work I'm talking about.
It does no one any good to have a CVE that says "may lead to remote code execution", when in fact it cannot, and if the reporter did more work, then you wouldn't need hundreds of people to independently do that work to determine this is garbage.
People being able to collectively analyze a vulnerability instead of having to all do it independently is pretty much the whole reason for having a CVE database, so I'm glad we agree.
I mean, I'm fine with the complaint about vulnerabilities that ambiguously refer to possible code execution, but that is a problem that long predates CVE.
Like I said, it depends on the configuration field. But people saying "you have to be root to change this configuration" are missing the point.
If the argument is "CVSS is a complete joke", I think basically every serious practitioner in the field agrees with that.
Vulnerabilities can be, and often are, chained together.
While the relevant configuration does require root to edit, that doesn’t mean that editing or inserting values to dnsmasq as an unprivileged user doesn’t exist as functionality in another application or system.
There are frivolous CVEs issued without any evidence of exploitability all the time. This particular example, however, isn't that. These pretty clearly qualify as CVEs.
The implied risk is a different story, but if you're familiar with the industry you'll quickly learn that there are people with far more imagination and capacity to exploit conditions you believe aren't practically exploitable, particularly in widely deployed tools such as dnsmasq. You don't make assumptions about that. You publish the CVE.
>that doesn’t mean that editing or inserting values to dnsmasq as an unprivileged user doesn’t exist as functionality in another application or system.
The developer typically defines its threat model. My threat model would not include another application inserting garbage values into my application's config, which is expected to be configured by a root (trusted) user.
The Windows threat model does not include malicious hardware with DMA tampering with kernel memory _except_ maybe under very specific configurations.
The developer is too stupid to define the threat model — they’re too busy writing vulnerabilities as they cobble together applications and libraries they barely understand.
How many wireless routers generate a config from user data plus a template? One's lucky if they even do server-side validation ensuring CRLFs are not present in IP addresses and hostnames.
And if Unicode is involved … a suitcase of four-leaf clovers won't save you.
Honestly, after witnessing "principal" software engineers defend storing API keys in plaintext in a database in the year of our Lord 2025, and ask how someone could possibly exploit that if they can't access that column directly through an application, my cynicism is strong enough that I can believe even a majority of "developers" don't know what a threat model is.
> The developer typically defines its threat model.
The people running the software define the threat model.
And CNAs issue CVEs because the developer isn't the only one running their software, and it's socially dangerous to allow that level of control over the narrative as it relates to security.
> The developer typically defines its threat model.
Is this the case? As we're seeing here, getting a CVE assigned does not require input or agreement from the developer. This isn't a bug bounty where the developer sets a scope and evaluates reports. It's a common database across all technology for assigning unique IDs to security risks.
The developer puts their software into the world, but how the software is used in the world defines what risks exist.
If you ever open up a CVSS calculator you'll see pretty clearly that the calculation is done in isolation, not as part of a chain.
Sure, CVE isn't optimal but virtually no model is. It's the whole point basically to provide a simplification of reality to be able to reason about it.
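To see that "in isolation" property in the arithmetic itself, here is a sketch of the CVSS 3.1 base score for an unchanged-scope vulnerability (the metric weights are from the CVSS 3.1 specification; the example vector is mine, chosen to resemble a root-only config crash). Nothing in these inputs models one bug being chained into another:

    import math

    # CVSS 3.1 metric weights, scope unchanged.
    AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # attack vector
    AC = {"L": 0.77, "H": 0.44}                        # attack complexity
    PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # privileges required
    UI = {"N": 0.85, "R": 0.62}                        # user interaction
    CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # C/I/A impact

    def roundup(x):
        # Spec-defined "round up to one decimal" (CVSS 3.1 appendix).
        i = round(x * 100000)
        return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10

    def base_score(av, ac, pr, ui, c, i, a):
        iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
        impact = 6.42 * iss
        exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
        return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

    # Local, root-only config crash: AV:L/AC:L/PR:H/UI:N/S:U/C:N/I:N/A:H
    print(base_score("L", "L", "H", "N", "N", "N", "H"))  # -> 4.4 (Medium)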
Why go through the trouble of exploring such a bug, when you have the ability to just replace the binary with something with a backdoor?
How do CVEs get issued? Where do I apply, who makes decisions, and what software is covered by them?
I know these questions are technically answered out there on the internet. But I looked into it a couple of years ago after finding a horrible bug in a popular npm package and the answers weren't clear to me.
Can a CVE be issued in retrospect?
> How do CVEs get issued? Where do I apply, who makes decisions
For most (but certainly not all) projects, you fill out a simple form [0]. I've done it before and it's fairly easy.
> and what software is covered by them?
All software is covered by someone, usually by the vendor themselves or MITRE.
> Can a CVE be issued in retrospect?
Absolutely, but it's fairly uncommon.
[0]: https://cveform.mitre.org/
Several issues seem to be getting mixed up.
The first issue being raised is that replacing the configuration file shouldn't count as a vulnerability. Usually I'd agree, but the fact that it causes memory corruption from user input warrants at least a low severity report.
If we can't prove that a vulnerability is exploitable, we have to keep our assumptions minimal. Even if the memory corruption is provably unexploitable today, a future code change could surface it as a plausible exploit primitive. It can also point to a section of code that may have been under-specified, and serve as a signal to pay more attention to those sections for related bugs. Also, it doesn't seem right to assume that the config files will always be under a privileged directory.
The second issue being discussed in the mailing list is that it's LLM slop. While the reports do seem to be AI-generated, I haven't seen any response about the PoC failing, but maybe there is a significant problem where a lot of PoCs are fake.
So many assumptions. As Commander Data might say today, "the most elementary and valuable statement in security, the beginning of wisdom, is 'I do not know.'"
Assuming it's AI slop seems pretty reasonable, considering there's been an upswing of AI slop CVE reports.
However, it doesn't necessarily matter whether it's submitted by an incompetent human, a malicious human, or is AI slop. The end effect of wasting time on a non-vulnerability is the same.
In a world where generating AI slop is cheap, the standard should probably be that the person submitting a vulnerability needs to prove it is a vulnerability, and probably that they're a person. Having the person receiving it prove it isn't won't scale.