So like ... I thought Mythos was just a bunch of hype? Or maybe the researchers are having their skills boosted due to using a model with such a cool name?
I jest, but I did notice having more confidence to take on more ambitious work lately. We're all centaurs now.
> I thought Mythos was just a bunch of hype?
My opinion is that it's over-hyped because, like any LLM, it requires a suitable human in the loop to keep the LLM on the straight and narrow, and then to weed through the inevitable false positives and hallucinations.
Nicholas Carlini, for example, whose name is on many of the recent high-profile Mythos findings, is not just some random dude with a Claude sub on his credit card ... he's an experienced security researcher.
Random inexperienced people thinking Mythos can replace the need for experienced pen-testers, auditors, etc. are likely to be sorely disappointed if/when they get their hands on Mythos.
I think it's worth looking at the recent XBOW benchmark: https://xbow.com/blog/mythos-offensive-security-xbow-evaluat... They realized that ChatGPT 5.5 works better, so the secret is in the architecture (including humans in the loop).
> likely to be sorely disappointed if/when they get their hands on Mythos.
At first they will be delighted. So much money and time saved. When their adversaries get their hands on their system (with or without Mythos), then they'll be sorely disappointed.
Did Mythos have access to Apple's source code?
> Apple spent five years building it. Probably billions of dollars too.
This seems higher than I'd expect.
This is incredibly light on details; there's no verifiable claim as far as I can tell.
(I’m sure they’re not lying, but we’re not learning anything here)
It reads more like a PR piece than a technical article.
From what they demonstrated, this seems to be only a $100,000 exploit under Apple's bug bounty program, but if they package it right, it could be a $1.5 million exploit.
They simply have to show it against a beta version of macOS and frame it as unauthorized access, maybe from Lockdown Mode if possible.
This is an LPE; I believe what you're describing is a zero-click RCE.
How much do you think it is worth in the bug bounty program?
The world is so not ready for the impact of LLMs on security issues. If true, congrats to the Calif team. It's likely too technical for me to understand in detail, but I'm looking forward to reading the 55-page report.
> The world is so not ready for the impact of LLMs on security issues.
I agree, but it's the people I'm worried about.
I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so the actions become riskier.
What's worse is that a lot of this behavior is being driven by leaders, whether directly (e.g. unrealistic velocity goals, promoting people based on hand-wavy "use AI" initiatives, etc.) or indirectly (e.g. layoffs overloading remaining devs, putting inexperienced devs in senior roles, etc.).
The world's gone mad and large swaths of the industry seem hellbent on rediscovering the security basics the hard way.
The gamble is that you can cruise on the senior engineer’s diminishing understanding for a few years until models become good enough that you don’t need any humans in the loop and you can fire all those expensive seniors.
> I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so the actions become riskier.
I don’t think so.
An LLM can produce higher-quality documentation than most humans. If it's not already happening, when a new developer joins a team, they're going to have an LLM produce any documentation a new developer needs, including why certain decisions were made.
It could also summarize years of email threads and code reviews that, let's face it, a new person wouldn’t be able to ingest anyway; it's not like a new developer gets to take a week off to get caught up on everything that happened before they got there. English not their first language? Well, the LLM can present the information in virtually any language required.
As the models continue to improve, they'll spot patterns in the code that a human wouldn’t be able to see.
Is this exciting?
Juniors have been writing code forever that is imperfect and not memorized by the people reviewing it.
Isn't the important thing the mechanisms for maintaining the code?
You're assuming that blue teams and engineers are sitting around twiddling their thumbs.
Most companies in the world do not have “blue teams”. They barely have any kind of security employee.
Not at all. I'm considering that the amount of vulnerable software in the wild is very, very large, with most organizations not managing their systems properly. Imagine all the small-to-medium-size companies that don't have budgets for a dedicated, talented security team. And all the software that will never be patched. We are at the beginning of the exponential.
Unfortunately a little light on the details. I'm very curious how the bug survived MTE.
Memory Tagging Extension
Arm published the Memory Tagging Extension (MTE) specification in 2019 as a tool for hardware to help find memory corruption bugs. MTE is a memory tagging and tag-checking system, where every memory allocation is tagged with a secret. The hardware guarantees that later requests to access memory are granted only if the request contains the correct secret. If the secrets don’t match, the app crashes, and the event is logged. This allows developers to identify memory corruption bugs immediately as they occur.
https://support.apple.com/guide/security/operating-system-in...
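For intuition, here's a toy software model of the tag-check idea. It's purely illustrative (real MTE assigns a 4-bit tag per 16-byte granule and performs the check in hardware on every load/store), but it shows the allocate/tag/check lifecycle the Apple doc describes:

    /* Toy model of MTE-style tag checking; not real MTE. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        unsigned char tag;     /* the "secret" assigned at allocation */
        unsigned char *data;
    } tagged_alloc;

    tagged_alloc tag_alloc(size_t n) {
        tagged_alloc a = { rand() & 0xF, malloc(n) };  /* random 4-bit tag */
        return a;
    }

    /* Every access must present the tag handed out at allocation time. */
    unsigned char checked_load(tagged_alloc *a, size_t i, unsigned char tag) {
        if (tag != a->tag) {            /* mismatch: hardware would fault */
            fprintf(stderr, "tag check fault\n");
            abort();                    /* app crashes, event is logged */
        }
        return a->data[i];
    }

    int main(void) {
        tagged_alloc a = tag_alloc(16);
        a.data[0] = 42;
        printf("%d\n", checked_load(&a, 0, a.tag));   /* correct secret: ok */
        checked_load(&a, 0, a.tag ^ 1);               /* wrong secret: abort */
        return 0;
    }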
Thank you. I was about to ask.
Upon further reading on data-only attacks
(https://www.usenix.org/publications/loginonline/data-only-at...)
This makes more sense. You don't trigger MTE, since you're not doing anything that forces MTE to take action; the program's control flow isn't actually changing.
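To make that concrete, a minimal sketch (my own toy example, not the actual bug): tag checks catch accesses that cross into memory with a different tag, but a corruption that stays inside one allocation carries the right tag for every byte it touches, so nothing fires.

    #include <stdio.h>
    #include <string.h>

    struct session {
        char name[16];
        int  is_admin;   /* adjacent field in the same tagged allocation */
    };

    int main(void) {
        struct session s = { "", 0 };
        /* Buggy 20-byte copy into a 16-byte field: the overflow lands on
           is_admin, but every byte written still falls inside the same
           allocation, i.e. carries its tag, so an MTE-style check passes.
           On a little-endian machine, is_admin is now 1. */
        memcpy(s.name, "AAAAAAAAAAAAAAAA\x01\x00\x00\x00", 20);
        printf("is_admin = %d\n", s.is_admin);
        return 0;
    }

No pointer is hijacked and no control flow changes; the program just makes the wrong decision with corrupted data, which is the essence of a data-only attack.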
My other question would be: why didn't Apple use fbounds checking here? They've been doing it aggressively everywhere else.
MTE plus fbounds checking everywhere should lead to an extremely hardened OS.
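For anyone unfamiliar: -fbounds-safety is Apple's experimental Clang extension that lets you tie a pointer to its length, so out-of-bounds accesses trap at run time instead of silently corrupting memory. A rough sketch of what annotated code looks like, based on the public proposal (header and exact spelling may differ by toolchain):

    #include <stddef.h>
    #include <ptrcheck.h>   /* annotation macros from the proposal */

    /* __counted_by(n) tells the compiler that buf points to n ints;
       built with -fbounds-safety, any access past buf[n-1] traps. */
    void fill(int *__counted_by(n) buf, size_t n) {
        for (size_t i = 0; i <= n; i++)   /* off-by-one: traps at i == n */
            buf[i] = 0;
    }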
Quite strange indeed, given that was one of the main points at their security conference a few months ago.
Could be a different type of data-only attack, one that doesn't overrun the boundaries.
GPU memory/shaders/etc. isn't protected by MTE or PAC. They said "data-only", so I guess GPU commands could fit into this description.
IIRC, the GPU is behind a memory controller, so I doubt corrupting GPU memory alone could lead to an LPE. But I suppose it would give you someplace to store stuff if you can make something else read from it.
> I'm very curious how the bug survived through MTE
It's not the first time bugs have gotten past MTE; it happened with Google Pixel last year ... https://github.blog/security/vulnerability-research/bypassin...
I had the same question, and if this is a data-only attack, the lesson may be that MIE reduces many attack paths but does not remove every useful corruption primitive.
LLMs are going to produce amazing Rube Goldberg-style vulnerabilities for years to come. It's already starting; this instance isn't one of them, but it's happening.
Maybe it's physically impossible to build a theoretically secure system, just as it's (presumably) impossible to have a cell that isn't susceptible to any virus. Maybe this whole time we've been getting away with a type of security by obscurity, where the obscurity is just no one having the time and focus to actually analyze the code.
Suppose the following:
1. Any given system has a finite number of findable vulnerabilities.
2. All findable vulnerabilities are fixable (if not in software then with a new hardware revision).
3. Fixing a vulnerability while keeping the same intended functionality introduces on average less than 1 other findable vulnerability.
4. It is possible to cease adding new features to a system and from that point forward only focus on fixing vulnerabilities.
If all 4 are true, then perfect security seems possible, in some sense. I think some vulnerabilities might not be fixable, if you include things like the idea that users can be tricked into revealing their passwords. If you restrict the definition of vulnerability to some narrower meaning that still captures most of what people mean when they say computer vulnerability, then I think those 4 statements are probably true.
Perfect security might be near impossible in practice because vulnerabilities will get more difficult to find and fix over time, but I think we should expect the discovery of vulnerabilities to eventually become arbitrarily slow in a hypothetical system that prioritized security above all else.
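To make premise 3 concrete with a toy model (my own framing, not the parent's): suppose a system starts with N findable vulnerabilities and each fix introduces on average r < 1 new ones. The total number of fixes needed is a geometric series:

    N_total = N + N*r + N*r^2 + ... = N / (1 - r)

which is finite, so under premises 1-4 the process terminates in expectation; with r close to 1 it just takes a very long time, which matches the "arbitrarily slow" caveat.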
I would rather claim that building a theoretically secure system is prohibitively expensive. At the end of the day, Mythos et al. are just better tools for finding vulnerabilities that will eventually be available to both offensive and defensive actors.
If you imagine you had a vulnerability scanner as fast and convenient as a linter, it would be much cheaper to write secure code right away. Probably not perfectly secure, but still secure enough to make sure finding exploits stays expensive.
another "obscurity": I'm not valuable enough to be attacked, compared with the cost. But what if cost has been reduced a lot?
Do you mean by vibecoding these vulnerabilities into the kernel or by finding them?
I’m surprised Apple is still not dogfooding their allegedly safe language Swift. Or was the whole exercise of Swift 6 mostly marketing?
They certainly are; one of the reasons behind Embedded Swift is to replace the iBoot firmware, currently written in a C dialect similar in ideas to Fil-C, with something better.
However, it is no different from the Linux kernel: just because Rust is now allowed, the world hasn't been rewritten, and no sane person is going to do a Claude rewrite of the kernel.
Swift is definitely being used at Apple, most recently as a CSS parser in Safari and running embedded in some of the Secure Enclave parts. I know there was talk as far back as Strange Loop about getting it into the kernel, but I'm not sure how far that has gone. That said, they've been huge proponents of fbounds checking in Clang, which can achieve a small (but important!) portion of what memory-safe languages can do. I'd also like to see more Swift or alternative adoption; I think they have potential, and more competition in the safe-language space is always welcome.
You might be interested in the Strict Memory Safety option
https://docs.swift.org/compiler/documentation/diagnostics/st...
Cisco put up a totally bogus 10.0 CVE just for this reason, too
Can you expand?
apple didn't "make up" this vulnerability, it was an external team reporting an issue
The commenter was being sarcastic, to highlight the current trend of dismissing Mythos, and LLMs finding security vulnerabilities in general, as a non-issue.
There is quite a bit of irony (or, depending on your perspective, it's the whole point) in the fact that this response is a great example of 'glorified autocomplete'.
These people don’t work for Apple or Anthropic.
I bought the M5 specifically because of MIE. Now I feel dumb.
You shouldn’t; MTE blocks a large chunk of vulnerabilities and makes things like ROP and JOP very difficult, if not impossible, now.
I should've added /s.
You should worry about npm/PyPI malware, not memory corruption bugs.
Did the article get edited? There is not much description of the field trip.