I followed the link to the Pixel 9 bug/exploit and saw this:
"Over the past few years, several AI-powered features have been added to mobile phones that allow users to better search and understand their messages. One effect of this change is increased 0-click attack surface, as efficient analysis often requires message media to be decoded before the message is opened by the user"
Haven't we learned our lesson on this? Don't read and act on my sms messages without me asking you to!
> Haven't we learned our lesson on this?
What is the purported lesson we should have learned? Users choose phones with rich messaging features. This was a major selling point for iPhone, first, with iMessage, and later with Android until iOS caught up with RCS.
One of the things Apple's Lockdown mode does is disable previews of images or links that are sent to you.
It seems like the lesson is that you shouldn't be processing data sent to the device by random strangers without the user explicitly choosing to open the file or follow the link.
Well, one could argue that the lesson from CVE-2017-0780[1] should've been "don't automatically decode rich messages from untrusted sources".
[1]: https://www.trendmicro.com/en_us/research/17/i/cve-2017-0780...
Where are users being given an actual choice? There is no option for "iPhone without these features", and I would wager that it has zero bearing on anyone's decision to purchase a new iPhone.
Didn't Android switch its codec stack to Rust?
> What is the purported lesson we should have learned?
Not to automatically execute things within data that we have been sent.
Even that's not sufficient. Consider an email client that doesn't parse images until you interact with the message. So you click on it, realize it's dodgy, but it's too late now because all the complex, bug-prone machinery has already been triggered.
Or my favorite: I marked an extremely suspicious message with what was almost certainly a malicious attachment as junk in a certain BigTech webmail client (the only other option was phishing, which it most certainly was not) and it "helpfully" opened the unsubscribe link in my local browser without first asking me for permission. It's difficult to imagine the level of incompetence and dysfunction required to not only write but review, approve, and deploy such a feature in a security- and privacy-sensitive context.
The email client I use doesn't display images in an email until I explicitly ask it to.
> Don't read and act on my sms messages without me asking you to!
Being an accidental or curious tap away from an RCE isn't actually much better. The fix is sanitizing inputs and using safe parsers.
Getting users to open a message isn’t a terribly high bar. As a user I would not find it acceptable if I needed to be careful about which messages I open. We tried putting the responsibility on the user with email attachments, and I think it’s fair to say it’s been a disaster. Malicious attachments are probably the most important distribution vector for malware.
This isn't even an exploit if the crappy AI, or whatever it is that's trying to do something fancy, never "processes" the message. At least give me a choice before you automatically do that.
> Don't read and act on my sms messages without me asking you to!
Doesn't that just turn a 0-click exploit into a 1-click exploit? It's unlikely the user can make an informed decision to not process a potentially malicious message, without clicking on the message.
Preferably a two-click exploit. One to view the message and one (if I decide it's safe) to process it through your buggy code.
A 0-click exploit is horrendously worse than even a 1-click one. I often don't even open messages from numbers I don't recognize
I don't know if that is the right lesson. It's kind of like "don't click on links"... Err, no. You should be able to click any link without getting hacked.
We aren't talking about clicking links, even. This is a bug in some stupid code that tries to read your messages for you and act on them. No thank you!
Sure, in an ideal world different from this one. You should be able to do anything on any device and never worry about security.
Unfortunately, since we don't live in that world, we need to not open links, emails, text messages, etc, if they are sketchy.
A better solution may someday exist, but as of yet has not been found.
How are they going to make trillions of dollars if not!?
> Don't read and act on my sms messages without me asking you to!
Somewhere there's an NSA agent reading this and laughing like a gin addict on payday.
"move fast and break things"
"But the users never know what they want to do! We have to shove suggestions and recommendations at them at every! waking! moment!"
"This is notably fast given that this is the first time that an Android driver bug I reported was patched within 90 days of the vendor first learning about the vulnerability."
This makes me feel better about Google, but also makes me kind of frightened of the rest of Android. I wonder what Apple's response time is?
Android vendors have been notorious about updates for a long time. Part of that is supposedly because all of the phone companies want to distinguish themselves from each other, and so they all want to fork the default Android UI so they can offer some psychedelic UI vision with some brand-specific features. But that means that when an update to stock Android comes out, it's a lot of work to migrate.
I don't think Android UI customization is the main issue. Many vendors are not even able to keep device firmware and Linux kernels in sync. Qualcomm and others are doing monthly bulletins:
https://docs.qualcomm.com/securitybulletin/may-2026-bulletin...
Since a lot of vendors are months or even years behind, their phones are full of known holes.
When it comes to security, basically: GrapheneOS > iOS > PixelOS >> Samsung OneUI >>>>>>>> everybody else.
Sadly, Samsung lets anyone who pays enough push bloatware and analytics on their phones. E.g. AppCloud from an Israeli company, Meta services that stay even when you remove Meta apps (only removable with ADB/UAD), etc. So there are only three somewhat serious options (and for two of them, you still give a lot of analytics to Apple or Google).
I've reported security bugs to Apple before. It was a couple of years back, but I remember it taking around 6 months to patch (there were a couple of back-and-forths for me to get a more reliable POC). Maybe 2 months from when I submitted a POC with 100% reproducibility.
At least in the past there have been instances where Apple sat on security bugs for years until they were fixed, one example: https://jonbottarini.com/2021/12/09/dont-reply-a-clever-phis...
I've heard they cleaned up their program recently to respond much quicker nowadays
Not sure how much it helps, but I just run all my Apple devices in "Lockdown mode", don't install apps (use Safari), and try to mostly use Safari in private sandboxed mode.
Given that 42% of Android devices are unpatched as of now [1], it's an interesting decision on their part to release their research and make them all vulnerable.
[1] https://gs.statcounter.com/android-version-market-share
[2] https://www.cybersecurity-insiders.com/survey-reveals-over-1...
That's perennially the case. A big portion of the world buys bargain-basement Android devices that are unsupported right out of the box.
Search "android phone" on AliExpress and there are top-selling phones on the first page running Android 8, Android 10, etc. They're not getting security updates of any sort, let alone driver updates.
The old way of keeping security bugs private is just completely broken now. If you aren't on a device that gets security updates you are in significant danger, regardless of what Google decides to publish. No-name hackers are sitting on stacks of exploits these days and are actively using them.
On brand-name Android devices you can count on getting OS security updates. The first-party vendor can build and push these themselves. Driver and firmware security updates are a maybe. These often have to come from an upstream vendor, who may or may not care to fix the issues.
Smaller brands often ship budget Android devices and never update them.
Semi-related: has the rate of published exploits picked up as of late, or is it simply that there’s hype around AI as a security tool (offense or defense), so it’s simply in the news more often?
Feels like there’s something new every other day: Linux, Windows, mobile, various commonplace tools used by everybody, the list goes on.
If one reads between the lines in part 1, the code in question was introduced due to AI features and the exploit was found by humans:
https://projectzero.google/2026/01/pixel-0-click-part-1.html
So AI usage increases bugs and humans have to weed them out!
These days I'd expect much of Android is vibe coded with minimal review.
I just did some analysis on this last weekend: in 2024 there were roughly 100 CVEs published every day. In April we hit approximately 200 per day.
Going backwards from 2023, the doubling interval for published CVEs was approximately 4 to 4 1/2 years. Since then it’s approximately two years.
There has definitely been a rapid uptick.
Published CVEs seem a bad metric to use for this, unless we assume that the ratio of really nasty vulns to not-too-bad vulns is consistent.
I wouldn't look at the numbers. There used to be a lot of "scam" CVEs before LLMs that weren't actual vulns. Nowadays it's more popular to collect CVEs, and there are a lot of people scanning with LLMs and reporting without checking (as in the case of cURL). These CVEs are often not verified by anyone.
There are probably more vulnerabilities being found, but the number of CVEs is not a good metric.
Did you publish this anywhere? Would love to read more.
The rules around CVE reporting changed recently, and it would be expected that a lot more are accepted.
There are reports from people who manage security bugs in OSS that there has been a big uptick in reports: initially low quality ones that were mostly bogus, but now many more legitimate ones as well.
This is pure guesswork, I am not a security researcher, but my guess would be that AI is increasing the amount of low-quality exploitable attack surface available, while simultaneously providing security researchers with an accelerant for their work. Which is to say, it's great if you use it well and really bad if you use it poorly.
Not low quality if it works!
I've reported a few very serious issues to vendors of widely used tools in recent weeks, and it's been even more difficult than usual to get them acknowledged; the teams that respond are reportedly swamped.
https://lwn.net/Articles/1065620/
There definitely is hype around AI as a security tool right now. Someone else pointed out that the rate of CVEs has gone up, but that doesn't tell us why.
This article doesn't mention AI helping find this bug. Seems like humans can still do that on their own.
I think AI helped the researchers navigate the codebase better; it's not necessarily that the AI is succeeding at exploitation itself.
A bit of both (it finds new things, and the news is hyped/blown up), and a third factor is that more people are trying to find things. The authors might have been able to do this already, because you still need a decent understanding to get useful work out of it and to verify the results, but the shiny-new-toy and FOMO factors make people spend hours on it that they'd otherwise have spent doing something else.
I've seen quite a few people saying they were inspired by the previous report, which is presented as "the model pointed us to it", and you get FOMO about missing out if you don't snatch the bugs now as well.
The Mythos announcement was crazy, I think: "...has already found _thousands_ of severe security vulnerabilities across _all_ OSes"!
Hmmm... I'd like someone to double-check my thinking here. I posted this exact prompt to gpt 5.5 xhigh:
```
does this look right to you? don't do any searches or check memory, just think through first principles
static int vpu_mmap(struct file *fp, struct vm_area_struct *vm)
{
    unsigned long pfn;
    struct vpu_core *core = container_of(fp->f_inode->i_cdev, struct vpu_core, cdev);

    vm_flags_set(vm, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
    /* This is a CSRs mapping, use pgprot_device */
    vm->vm_page_prot = pgprot_device(vm->vm_page_prot);
    pfn = core->paddr >> PAGE_SHIFT;
    return remap_pfn_range(vm, vm->vm_start, pfn, vm->vm_end - vm->vm_start,
                           vm->vm_page_prot) ? -EAGAIN : 0;
}
```
And it correctly identified the issue at hand, without web searches. I'd love to try something more comprehensive, e.g. shoving whole chunks of the codebase into the prompt instead of just the specific function, but it seems the latent ability to catch security exploits is there.
So then.... I wonder how this got out in the first place. I know I'm using a toy example but would love to learn more!
That's not really a fair test, because you're leading the model pretty hard, even if the prompt doesn't specifically say there's a bug to be found. It's basically the same objections that people raised in the thread where someone claimed current models are just as good as Mythos.
Right, exactly. But clearly it's possible to elicit the behavior we want from the model, which means the capability is there!
As an anecdote, I provided fragnesia.c and the subsequent proposed patch to fix the issue, and while it was not able to discover an entirely new vulnerability, I think it was able to find 2 new ways of exploiting the same underlying bug.
This is quite impressive considering I’m just a dumbass with a Claude subscription.
How do you know it didn't search the web?
no tool calls!
I pasted the code into Claude Opus 4.7 with no internet access and just asked it to tell me what the function did. It explained it and also called out the bug; I did not tell it to look for bugs:
> Observations & Potential Issues
>
> A few things worth flagging:
>
> 1. No bounds checking on the mapping size. Userspace controls vm_end - vm_start and vm->vm_pgoff. Here vm_pgoff is ignored entirely and the size is trusted blindly. If the VPU's register block is, say, 64KB but userspace requests a 1MB mapping, the driver will happily map 1MB of physical address space starting at core->paddr — potentially exposing whatever hardware happens to live at adjacent physical addresses. A defensive check would be:
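The quote cuts off there, but the shape of the check it was leading up to is roughly the following (a sketch, not Claude's actual output; the snippet doesn't show where the driver stores the real length of the CSR block, so the reg_size field is an assumed name):

```
static int vpu_mmap(struct file *fp, struct vm_area_struct *vm)
{
    struct vpu_core *core = container_of(fp->f_inode->i_cdev,
                                         struct vpu_core, cdev);
    unsigned long size = vm->vm_end - vm->vm_start;

    /* Refuse any offset into the region and any over-long mapping. */
    if (vm->vm_pgoff != 0 || size > core->reg_size)
        return -EINVAL;

    vm_flags_set(vm, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
    vm->vm_page_prot = pgprot_device(vm->vm_page_prot);
    return remap_pfn_range(vm, vm->vm_start, core->paddr >> PAGE_SHIFT,
                           size, vm->vm_page_prot) ? -EAGAIN : 0;
}
```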
---
70-day release cycles are very quickly not going to be fast enough to stop widespread use of exploits when you have bots able to scan every PR on every open source project as it comes out.
It's the usual problem of having no consequences for the person who wrote catastrophic code like this and the company that released it. If the person who wrote this were to be imprisoned for the rest of their life, for instance, or if the company were to be fined $1 million per user put at risk (which would probably mean a $1-10 trillion fine for Google, enough to trigger bankruptcy), then things would be very different.
If this rule were implemented, would you be walking free right now? Think it over.
We should roll this out for everything.
Someone T-bones you in a parking lot, a chef causes food poisoning, a plumber's leak floods your bathroom, a personal trainer pushes you to injury, a mislabeled allergen on food, movers break your armoire, a roofer leaves a leak -- I bet we'd see a lot less of all that if a $1MM fine + life in jail loomed over everyone.
Nobody would want to do business, but boy would we be in a golden age.
> If the person who wrote this were to be imprisoned for the rest of their life [...] then things would be very different
Yes, they certainly would. You wouldn't have smartphones, for instance.
I can't tell if this is satirical or not. But there are so many takes like this recently (hold the website liable for user content, hold the corporate developer liable for zero days in a project they happened to touch) that would all result in the same outcome (no more product at all) that I can't help but wonder if there's some luddite psy-op trying desperately to bring us back to a pre-Internet era in any way they can...
Yes...no one would write any code.
Do we have any evidence on how AI has affected NSO et al.'s businesses? Does it render them obsolete? Or are they now superpowered?
Without knowing details, I'd guess that AI is changing the game a lot and that a lot of 'capital' in the form of zero-days has been destroyed.
If this is the case, it's good news for everyone besides NSO and co.
Where are the iPhone jailbreaks? I haven't seen anything in a long time... What's happening? Did I miss them, or is nothing available? I mean, props to Apple however they do it, but is it just a matter of time, or what is actually going on?
Apple's security posture with lockdown mode, memory tagging, and secure allocators is significantly better than Android. You can read some about it here: https://security.apple.com/blog/memory-integrity-enforcement...
Exploits that can survive reboots are almost impossible these days. And a jailbreak-enabling exploit now requires a whole chain of exploits, which are worth significant money and also get patched as soon as they become public.
So something like the old iPhone jailbreaking scene is just impossible now.
This is a great bug report! I am not a kernel expert by any means even though I have read some about it... 10+ years ago. And I was able to follow along and see what was going on.
It does make me scared for what other dangers lurk since this was a really bad one and it was so little work to find.
Also of note: so many security issues lately have been found using AI. This report makes me think two things:
1. Expertise is still immensely valuable, the more niche, the more valuable.
2. There are lots of niches still where AI doesn't dominate...
> This is rendered even easier by the fact that the kernel is always at the same physical address on Pixel
OpenBSD fixed this back in 2017.
There have been some V4L2 enhancements to support hardware video decoding that were pending a merge for a long time; they do seem to be in the mainline kernel now. I guess people didn't want to wait that long.
Project Zero has to report bugs to Android through the front door, and deal with Android VRP severity classification? I always assumed they could just walk over to the Android office and advocate for their bugs, face to face.
If they felt it was too painful to do it the "normal" way, then that would probably be the next thing for Project Zero to try to get fixed.
Hm. Surprised there aren't idioms like copy_(to|from)_user for these kinds of kernel-to-userspace mappings for custom device nodes that ensure bounds are supplied...
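There is something along those lines: vm_iomap_memory() takes the physical base and the true length of the region and validates the user-supplied offset and size itself before doing the remap. A sketch of the handler from upthread rewritten on top of it (again assuming a hypothetical reg_size field for the real extent of the CSR block):

```
static int vpu_mmap(struct file *fp, struct vm_area_struct *vm)
{
    struct vpu_core *core = container_of(fp->f_inode->i_cdev,
                                         struct vpu_core, cdev);

    vm_flags_set(vm, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
    vm->vm_page_prot = pgprot_device(vm->vm_page_prot);

    /* Rejects mappings whose vm_pgoff or length fall outside
     * [core->paddr, core->paddr + core->reg_size). */
    return vm_iomap_memory(vm, core->paddr, core->reg_size);
}
```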
Fascinating how GrapheneOS achieves a high security level on the same hardware where Google failed to even randomize Android's kernel location.
Randomizing the kernel location is of marginal utility at best. There are so many info leaks that KASLR ends up being only a small speed bump on the way to exploitation.
Here's a cool project that inventories all your KASLR info leaks: https://github.com/bcoles/kasld
Is Graphene vulnerable to these exploits?
It's easy to be secure if you just remove features. There's obvious tension here.
Could you be any more specific about what features they've removed such that the hardening functions work? Because I think there are none
Google has lost its focus with Pixel phones.
On selling ads? Or what do you mean their focus used to be, that they've lost? I'm not at all negative about the paid offerings they've added over time, from Workspace to YouTube to hardware. I'm still very conflicted about giving Google, of all places, my custom, but for phones, say, it's hard to avoid, and second-hand the prices are really quite competitive for a tangible hardware product (not a software subscription that you're stuck with). Not a bad shift of focus to making these Pixel devices, imo, so long as they remain open, that is.
KASLR isn't an effective mitigation against anything, and to me this is part of GrapheneOS's catalog of superficial but meaningless claims.
I've not seen someone refer to a portion of GrapheneOS's mitigations as superficial and meaningless before. What might an OS with significant improvements to usable attack surface reduction and exploit mitigations look like to you? What sort of things (given a team of less than a dozen contending with OS updates, upgrades and device support) would you have liked to see implemented?
And that is against a device whose BSP is actually open source and available for research!
Now imagine the dark horrors hiding in the BSPs of other Android devices... or embedded devices in general.
Frankly, it should be a requirement of Google's certification process that everything regarding drivers gets upstreamed into the Linux kernel. Yes, even if this adds quite a time delay to the usual hardware development process.
I hope the average person will soon understand the importance of security and will be OK with making the necessary sacrifices to achieve it. Almost everyone has something to protect, be it personal information or property (money, IP).
People love new technologies and features that make their lives easier, but so far only a small subset of these people have made a conscious decision to limit their exposure to risk by depriving themselves of benefits provided by some of these features.
It sure is wonderful to have your whole life digitized on a single computer. You can analyze, share, organize, gamify, record and so on every aspect of your life instantly and effortlessly. It's incredible, really. Technology is amazing. Except for the pesky bad actors who can do the digital equivalent of most physical crime from the other side of the world, anonymously, without you noticing.
It's like germs - if you don't wash your hands after touching something questionable and you don't experience any negative consequences, you'll learn not to wash them most of the time. It's just a waste of time. Maybe if you've touched something really gross you'd wash them, but that would be the exception. Security is the same. If you've been using computers the same way for years, you'll learn nothing bad happens, so why bother with hygiene, why bother making any tradeoffs?
Yes, you've heard the news of someone's nudes posted online, of someone's bank account drained or of some company's files ransomed, but you've also heard of someone dying from a brain parasite after touching a muddy puddle and rubbing their eyes afterwards. That happens rarely; we shouldn't worry about it. A car can hit you when you cross the street, lightning can strike you when you're just walking about, an aneurysm can end you at any time. No one is washing their hands all the time or constantly trying to minimize the streets they cross or anything like that. That would be foolish and impractical, and I agree.
That mindset is carried over to digital security, sadly. The risks are higher, the effort to keep good hygiene is lower, and the ability for bad actors to completely fuck you is much greater than in meat space. The rewards are seemingly greater, too, until we realize that what we get from technology is just marginally better than what we get without it. Tech is amazing, but it doesn't make us transcend time and space. It lets us organize our schedule, tag people and places in photos and summarize chats. All of that is born out of meat space. Without tech we'd still have conversation, we'd still see new places, we'd still have calendars and todo lists. We get maybe 1% more than we would have if we didn't have any tech, but we let all our information and property sit unsecured for that 1% gain. That's fucked up, because the risks are big and will get bigger. And the tradeoffs we have to make to secure our digital lives may seem annoying, but are actually quite trivial. Less unnecessary sharing, more isolation and compartmentalization, different computers for different tasks, less proprietary hardware and software, etc. We could get 90% of that 1% benefit from tech if we spent just a bit of time and energy on securing our digital lives. But fuck it. Let's buy the latest flagship, let's use it for ID, banking, communication, file storage, camera, health tracking, everything. Because it's a tiny bit more inconvenient to get multiple computers for different purposes, to not get the latest and newest, to not install a bunch of unnecessary shit, to be careful about the digital realm at all.
Not really on topic, but a rant. I'm tired of people (friends and friends of friends) complaining to me that they got majorly fucked one way or another and acting like the universe owes them not to get fucked while they buy a computer that exposes their asshole to the world.
I read about the Pixel 9 Dolby decoder bug, and it is based on an integer overflow. It was a mistake to allow the "+" operator to overflow, and this should have been fixed in new languages like Rust, but it was not.
In Rust, the decision about whether to pay for overflow checks or just wrap (because all modern hardware will just wrap if you don't check, and that's cheaper) is a choice you make when compiling the software: by default you get checks everywhere except in release builds, but you can choose checks everywhere, even in release builds, or no checks even in debug.
By definition in Rust it's incorrect to overflow the non-overflowing integer types, so if you intend, say, wrapping, you should use the explicit wrapping operations such as wrapping_add, or the Wrapping<T> types in which the default operators do wrap. But if you turn off checks then it's still safe to be wrong; it just behaves as if you'd called the wrapping operations by hand instead of using the non-wrapping operations.
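A small illustration of the distinction (standard library only; the commented-out line is the one that panics when overflow checks are on):

```
use std::num::Wrapping;

fn main() {
    let a: u8 = 250;

    // let b = a + 10; // panics in debug builds (or whenever checks are on)

    // Explicit intent, independent of compiler settings:
    assert_eq!(a.wrapping_add(10), 4);          // wraps modulo 256
    assert_eq!(a.checked_add(10), None);        // overflow reported as None
    assert_eq!(a.saturating_add(10), u8::MAX);  // clamps at 255

    // Wrapping<T>: the ordinary operators wrap by definition.
    assert_eq!(Wrapping(a) + Wrapping(10), Wrapping(4));
}
```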
That Dolby overflow code looks awkward enough that I can't imagine writing it in Rust even if the checking was off - but I wasn't there. However, the reason it's on Project Zero is that it resulted in a bounds miss, and Rust would have prevented that anyway.
I love most of what Rust does, but this is something they just got wrong. The + operator should always trap on overflow. Which Rust kinda wanted to do (hence why it does that in debug builds), but then they chickened out about the performance risk for release builds, undermining the entire thing. The result is just weak lip service to "no UB!", since debug and release still have very different behavior
I think Zig has the most interesting approach here, with 3 different "+" operators (+ aborts on overflow, +% wraps, and +| saturates) along with the @addWithOverflow builtin. It'd probably be a challenge for Rust to adopt that at this point, but it'd be a great improvement.
> is a choice you can make when compiling software
That is not a solution, because it means the code can behave differently and can expose a vulnerability if the wrong compilation settings are chosen.
Functions like "wrapping_add" have such long names that nobody wants to use them, and they make the code ugly. Instead, "+" should be used for addition with exceptions, and something like "wrap+" or "<+>" or "[+]" used for wrapping addition.
That's how people work: they will choose the laziest path (the simplest function name), and this is why you should use "+" for safer, non-wrapping addition and make the symbol for wrapping addition long and unattractive. Make writing unsafe code harder. This is just basic psychology.
C has the same problem: it has functions that check for overflow, but they also have long, ugly names that discourage their use.
> modern hardware will just wrap if you don't check and that's cheaper
So you suggest that because x86 is a poorly designed architecture, we should adapt programming languages to its poor design? x86 will be gone sooner or later anyway.
Also, there are languages like JS, Python, and Swift which chose the right path; it is only C and Rust developers who seem to be backwards.
__builtin_add_overflow exists, and it's basically free on most CPUs out there.
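For anyone who hasn't seen it, a minimal sketch of how the builtin is used (GCC/Clang; the helper name here is made up):

```
#include <stdio.h>
#include <stdlib.h>

/* Add two sizes, aborting on overflow. On x86-64 and AArch64 this compiles
 * to an add plus a branch on the carry/overflow flag. */
static size_t add_sizes(size_t a, size_t b)
{
    size_t sum;
    if (__builtin_add_overflow(a, b, &sum)) {
        fprintf(stderr, "integer overflow: %zu + %zu\n", a, b);
        abort();
    }
    return sum;
}

int main(void)
{
    printf("%zu\n", add_sizes(40, 2)); /* 42 */
    add_sizes((size_t)-1, 1);          /* overflows: aborts */
    return 0;
}
```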
For a long time I've been using this as a touchstone for whether or not we are actually going to take security seriously.
We've moved slightly closer to this, but in a world where we're still arguing over memory safety being necessary we've probably still got a ways to go before we notice that addition silently overflowing is a top-10 security issue. It's the silent top-10 security issue, I guess.
Isn't it often combined with poor bounds checks to be exploitable? It's not as if Rust or VM-based languages don't help a lot with this.
It isn't, because no ISA implements add like that, so there's always performance on the table if you check every time, and people would probably endlessly moan about how Rust is 20% slower than C on this add-heavy microbenchmark.
That said you can enable overflow checks in Rust's release mode. It's literally two lines:

  [profile.release]
  overflow-checks = true
I wonder if it would make sense for ISAs to have trapping versions of add and subtract. RISC-V's justification for not doing that is that it's only a couple more instructions to check afterwards. It would be interesting to see the performance difference of `overflow-checks = true` on high-performance RISC-V chips once they are available.
I think it is 3 extra instructions on RISC-V if you add signed numbers, so 1 addition (the most popular operation) turns into 4 instructions. What are those people thinking? I generally like RISC-V, but this part, in my opinion, is wrong. They should just have added an "overflow enabled" bit to the add instruction.
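For concreteness, the branch-on-overflow sequence the RISC-V rationale has in mind looks roughly like this in C (a sketch; the add is done in unsigned arithmetic so the wraparound stays well-defined):

```
#include <stdint.h>

/* Signed 64-bit add that traps on overflow. The check costs roughly the
 * slti + slt + bne trio on top of the add, versus a single instruction
 * if the ISA had a trapping add. */
static int64_t add_or_trap(int64_t a, int64_t b)
{
    int64_t sum = (int64_t)((uint64_t)a + (uint64_t)b); /* wrapping add */
    /* Overflow occurred iff the sign of b disagrees with whether the
     * wrapped sum fell below a. */
    if ((b < 0) != (sum < a))
        __builtin_trap();
    return sum;
}
```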
It does seem like "What if we offer checked integer arithmetic operations?" is a cheaper experiment than CHERI's "What if we mechanically reify extent-based provenance?"
> It isn't because no ISA implements add like that
MIPS does (did?). And VAX, IBM/360, ....