Comment by LiamPowell
1 day ago
> Mythos Preview identified a number of Linux kernel vulnerabilities that allow an adversary to write out-of-bounds (e.g., through a buffer overflow, use-after-free, or double-free vulnerability.) Many of these were remotely-triggerable. However, even after several thousand scans over the repository, because of the Linux kernel’s defense in depth measures Mythos Preview was unable to successfully exploit any of these.
Do they really need to include this garbage which is seemingly just designed for people to take the first sentence out of context? If there's no way to trigger a vulnerability then how is it a vulnerability? Is the following code vulnerable according to Mythos?
    if (x != null) {
        y = *x; // Vulnerability! x could be null!
    }
Is it really so difficult for them to talk about what they've actually achieved without smearing a layer of nonsense over every single blog post?
Edit: See my reply below for why I think Claude is likely to have generated nonsensical bug reports here: https://news.ycombinator.com/item?id=47683336
I agree the wording is a bit alarmist, but a closer example to what they are saying is a dereference guarded by a flag that can never actually be set, i.e. dead code.
A bug like above would still be something that would be patched, even if a way to exploit it has not yet been found, so I think it's fair to call out (perhaps with less sensationalism).
FWIW there's a whole boutique industry around finding these. People have built entire careers around farming bug bounties for bugs like this. I think they will be among the first software engineers really in trouble from AI.
That is something a good static analyser or even optimising compiler can find ("opaque predicate detection") without the need for AI, and belongs in the category of "warning" and nowhere near "exploitable". In fact a compiler might've actually removed the unreachable code completely.
Well yeah, it’s a toy example to illustrate a point in an HN discussion :).
Imagine “silly mistake” is a parameter, rename it “error_code” (pass by reference), put a label named “cleanup” right before the if statement, and throw in a ton of “goto cleanup” statements, to the point that the control flow of the function is hard to follow, if you want it to model real code ever so slightly more.
It will be interesting to see the bugs it’s actually finding.
It sounds like they will fall into the lower CVSS ranges - real problems, but not critical.
Just because the plane can fly on one engine doesn't mean you don't fix the other engine when it fails.
Except it didn't fail. You just looked at the left engine and said what if I fed it mashed potatoes instead of fuel. And then dropped the mic and left the room.
It's more like finding a way to shut down the engine, but only if there was a movie in the entertainment system that was longer than 5 hours. You can't exploit it now, and probably never will, but it's a risk that's sitting there that I'm sure you agree should be fixed.
Presumably they mean they could make user code trigger a write out of bounds to kernel memory, but they couldn’t figure out how to escalate privileges in a “useful” way.
They should show this, then, to demonstrate that it's not something that has already been fully considered. Running LLMs over projects I'm very familiar with will almost always produce hundreds of reported "vulnerabilities" that are only valid if you look at a tiny snippet of code in isolation, because the program can simply never be in the state that would make them exploitable. This even happens in formally verified code, where there are literally proven preconditions on subprograms showing that a given state can never be reached.
As an example, I took a formally verified bit of code from [1] and stripped out all the assertions, which are only used to prove the code is valid. I then gave this code to Claude with some prompting towards there being a buffer overflow, and it told me there's a buffer overflow [2]. I don't have access to Opus right now, but I'm sure it would do the same thing if you push it in that direction.
For anyone wondering about this alleged vulnerability: Natural is defined by the standard as a subtype of Integer, so what Claude is saying is simply nonsense. Even if a compiler were allowed to use a different representation here (which I think is disallowed), Ada guarantees that the base type for a non-modular integer type includes negative numbers, IIRC.
[1]: https://github.com/AdaCore/program_proofs_in_spark/blob/fsf/...
[2]: https://claude.ai/share/88d5973a-1fab-4adf-8d29-8a922c5ac93a
They've promised that they will show this once the responsible disclosure period expires, and pre-published SHA3 hashes for (among others) four of the Linux kernel disclosures they'll make.
> Running LLMs over projects that I'm very familiar with will almost always have the LLM report hundreds of "vulnerabilities" that are only valid if you look at a tiny snippet of code in isolation because the program can simply never be in the state that would make those vulnerabilities exploitable.
Their OpenBSD bug shows why this is not so simple. (We should note of course that this is an example they've specifically chosen to present as their first deep dive, and so it may be non-representative.)
> Mythos Preview then found a second bug. If a single SACK block simultaneously deletes the only hole in the list and also triggers the append-a-new-hole path, the append writes through a pointer that is now NULL—the walk just freed the only node and left nothing behind to link onto. This codepath is normally unreachable, because hitting it requires a SACK block whose start is simultaneously at or below the hole's start (so the hole gets deleted) and strictly above the highest byte previously acknowledged (so the append check fires).
Do you think you would be able to identify, in a routine code review or vulnerability analysis with nothing to prompt your focus on this particular paragraph, how this normally unreachable codepath enables a DoS exploit?
The kernel address space layout randomization they are talking about is a bit different from (x != null). Another bug may make it possible to locate the required address.
It could very well be an actual reachable buffer overflow, but with KASLR, canaries, CET and other security measures, it's hard to exploit it in a way that doesn't immediately crash the system.
We've very quickly reached the point where AI models are now too dangerous to publicly release, and HN users are still trying to trivialize the situation.
GPT-2 was already too dangerous to publicly release according to OpenAI, however they still did. If something is not dangerous, it's also not useful.
Are they actually too dangerous to publicly release? It seems like a little bit of marketing from the model-producing companies to raise more funding. It's important to look at who specifically is making that statement and what their incentives are. There are hundreds of billions of dollars poured into this thing at this point.
You really think some marketers got leaders from companies across the industry to come together to make a video - and they're all in on the conspiracy because money?
Says the marketing department of the company that is apparently still working on these AI models and will 100% release them to the public when their competitive advantage slips.
Marketing pushing to release a dangerous model is a lot more likely than marketing labeling a model as dangerous when it really isn't. If anything, marketing would want to downplay the danger of a model, which is the opposite of what Anthropic is doing.
Everyone here doing mental gymnastics to imagine Anthropic playing 5-D chess because they're in denial of what is happening in front of their faces. AI is getting more capable/dangerous - it's not surprising to anyone. The trendlines have pointed in this direction for years now and we're right on schedule.
Because a vulnerability exists independently of the exploit. It's a basic tenet of the current cybersecurity paradigm that any IT engineer should know about…
> The model autonomously found and chained together several vulnerabilities in the Linux kernel—the software that runs most of the world’s servers—to allow an attacker to escalate from ordinary user access to complete control of the machine.
I'm confused on this point. The text you quote implies that they were able to build an exploit, but the text quoted in the parent comment implies that they were not.
What were they actually able to do and not do? I got confused by this when reading the article as well.
They successfully built local privilege escalation exploits (from several bugs each), and found other remotely-accessible bugs, but were not able to chain their remote bugs into remotely-accessible exploits.
Is this code multithreaded? x could indeed be null in that case.
That example you gave is extremely memorable: I recognised it as exactly one of the insanely stupid false positives that a highly praised (and expensive) static analyser I ran on a codebase several years ago would emit copiously.
Time to adopt Ada and SPARK.
It's incredible how when you have experienced and knowledgable software engineers analyse these marketing claims, they turn out to be full of holes. Yet at the same time, apparently "AI" will be writing all the code in the next 3-6 months.
I agree. There are more blog posts talking about LLMs finding vulnerabilities than there are actual exploitable vulns found by LLMs. 99.9% of these vulnerabilities will never have a PoC because they are worthless unexploitable slop and a waste of everyone's time.
The voting patterns on the comments here show how they're even trying to hide it, but the truth is clear as day.
I think the point they were trying to make here was “Claude did better than a fuzzer because it found a bunch of OOB writes and was able to tell us they weren’t RCE,” not “Claude is awesome because it found a bunch of unreachable OOB writes.”